Description


Multi-core processing was created as a result of the limitations chipmakers (Intel, AMD, IBM) ran into with single-core processors. Until the arrival of multi-core processors, the only way to increase a processor's performance was to pack more transistors into smaller and smaller spaces, which resulted in ever-increasing power requirements and heat output. Eventually a point was reached where it was no longer feasible to keep increasing performance in this manner, so in an attempt to keep up with Moore's Law the silicon giants tried a new approach: placing multiple physical processing cores on one chip.

Multi-core processors differ both from Hyper-Threaded processors (although newer multi-core chips now include HT technology as well) and from computers with multiple CPUs. A Hyper-Threaded processor makes the computer think it has multiple processors, but this is really just a technique that creates "logical" processors; there is still only one core on the CPU. Computers with multiple physical CPUs have existed for some time in both the personal and enterprise markets, but those processors are physically separate and communicate with each other over a bus on the motherboard. A multi-core processor, by contrast, is a single silicon chip that contains more than one CPU. The advantages are faster communication between the processors for parallel computing and greater performance than single-core, Hyper-Threaded, or multi-CPU computers, while still allowing the computer to use only one core at a time to save power, or to dictate which processes can access which core or run on both.
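As a rough illustration of the distinction between "logical" processors and physical cores, and of restricting work to a particular core, here is a minimal C++ sketch (not from the original article, and Linux-specific because it relies on pthread_setaffinity_np). It reports the logical processor count the operating system sees and pins the current thread to core 0.

    // Sketch: query the logical processor count and pin this thread to one core.
    // Linux-specific (pthread_setaffinity_np); illustrative only.
    #include <iostream>
    #include <thread>
    #include <pthread.h>
    #include <sched.h>

    int main() {
        // Logical processors: on a Hyper-Threaded chip this can be twice
        // the number of physical cores.
        unsigned logical = std::thread::hardware_concurrency();
        std::cout << "Logical processors reported by the OS: " << logical << "\n";

        // Restrict this thread to core 0, mimicking how an OS or application
        // can dictate which core a piece of work runs on.
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
            std::cerr << "Failed to set CPU affinity\n";
        else
            std::cout << "This thread is now pinned to core 0\n";
        return 0;
    }

Compiled with something like g++ -pthread, this prints the logical count (which includes Hyper-Threaded "logical" processors, not just physical cores) and then confines itself to a single core.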

Advantages


Multi-core technology can improve system efficiency and application performance for computers running multiple applications at the same time. The benefits apply to server and client platforms, as well as to home and enterprise environments.

Multi-core capability can enhance the user experience in multitasking environments, where a number of foreground applications run concurrently with a number of background applications such as virus protection and security, wireless management, compression, encryption, and synchronization.

Home users can edit photos while recording a TV show through a digital video recorder, while a child in another room of the house streams a music file from the same PC. Business users can increase their productivity by performing multiple tasks more efficiently, such as working in several office applications while the system runs antivirus software in the background.
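To make the multitasking scenario concrete, below is a small, hypothetical C++ sketch (not from the original article) in which two independent CPU-bound jobs, standing in for a "foreground" task and a "background" task, are launched with std::async. On a multi-core machine the two can genuinely run in parallel on separate cores rather than time-slicing on one. The workload function is a placeholder, not real photo-editing or antivirus code.

    // Sketch: two unrelated CPU-bound tasks running concurrently, a stand-in
    // for a "foreground" job and a "background" job on a multi-core machine.
    #include <cstdint>
    #include <future>
    #include <iostream>

    std::uint64_t busy_work(std::uint64_t n) {
        std::uint64_t sum = 0;
        for (std::uint64_t i = 0; i < n; ++i)
            sum += i * i;          // arbitrary arithmetic to keep a core busy
        return sum;
    }

    int main() {
        // std::launch::async requests a separate thread for each task; with two
        // or more cores the operating system can schedule them simultaneously.
        auto background = std::async(std::launch::async, busy_work, 200'000'000ULL);
        auto foreground = std::async(std::launch::async, busy_work, 100'000'000ULL);

        std::cout << "foreground result: " << foreground.get() << "\n";
        std::cout << "background result: " << background.get() << "\n";
        return 0;
    }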

By housing multiple processor cores in a single processor form factor, these platforms provide superior energy-efficient performance and scalability while remaining relatively constant in power consumption, heat, and space requirements. As a result, more processing capacity can be concentrated into fewer servers, which means greater density and fewer servers to manage.

The proximity of multiple CPU cores on the same die has the advantage that the cache coherency circuitry can operate at a much higher clock rate than is possible if the signals have to travel off-chip, so combining equivalent CPUs on a single die significantly improves the performance of cache snoop operations. Put simply, because signals between the different CPUs travel shorter distances, they degrade less; these higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often. In a single-cache configuration, performance suffers when multiple threads compete for the same data cache. Giving each core its own L2 cache removes that contention, and the benefit scales with the number of cores that have a private cache.
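As a minimal sketch of the cache-contention point above (not from the original article, and assuming a typical 64-byte cache line), the C++ program below runs two threads that each increment their own counter, first with the counters packed into the same cache line and then padded onto separate lines. On most multi-core machines the padded version is noticeably faster, because the cores stop invalidating each other's cached copy of the shared line; the actual timings will vary from system to system.

    // Sketch: cache-line contention ("false sharing") between two cores.
    // Each thread increments its own counter; only the memory layout changes.
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    struct Packed {
        std::atomic<long> a{0}, b{0};            // counters share a cache line
    };

    struct Padded {
        alignas(64) std::atomic<long> value{0};  // one counter per cache line
    };

    template <typename F>
    double time_ms(F f) {
        auto start = std::chrono::steady_clock::now();
        f();
        return std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - start).count();
    }

    int main() {
        constexpr long iters = 50'000'000;

        Packed packed;
        double shared_ms = time_ms([&] {
            std::thread t1([&] { for (long i = 0; i < iters; ++i) packed.a++; });
            std::thread t2([&] { for (long i = 0; i < iters; ++i) packed.b++; });
            t1.join(); t2.join();
        });

        Padded padded[2];
        double separate_ms = time_ms([&] {
            std::thread t1([&] { for (long i = 0; i < iters; ++i) padded[0].value++; });
            std::thread t2([&] { for (long i = 0; i < iters; ++i) padded[1].value++; });
            t1.join(); t2.join();
        });

        std::cout << "same cache line:      " << shared_ms << " ms\n";
        std::cout << "separate cache lines: " << separate_ms << " ms\n";
        return 0;
    }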

Assuming the die can physically fit into the package, multi-core CPU designs require much less Printed Circuit Board (PCB) space than multi-chip SMP designs.

A dual-core processor uses slightly less power than two coupled single-core processors, principally because driving signals off-chip requires more power than keeping them on the die, and because the smaller silicon process geometry allows the cores to operate at lower voltages; keeping signals on the die also reduces latency. Furthermore, the cores share some circuitry, such as the L2 cache and the interface to the front side bus (FSB).

In terms of competing uses for the available silicon die area, a multi-core design can make use of proven CPU core library designs and so carries a lower risk of design error than devising a new, wider core design. Also, adding more cache suffers from diminishing returns.


Servers


Multi-core processors provide the best performance per watt for servers and will enable hardware manufacturers to increase the processing capacity of their rack-mount server products. This is key because many rack-mount server products share power, cooling, and network resources. As companies continually increase the processing power of their server systems, multi-core processing will prove useful: processing power can grow without larger power supplies or a bigger overall footprint at the server locations. Fewer physical servers delivering more resources than ever before also makes the server products easier to manage, which ultimately reduces the total cost and manpower needed to operate IT departments and frees resources for meeting constantly growing business needs and investing in new strategic technologies. IT professionals face the challenge of providing more services to more users who want ever-improving performance; multi-core technology can readily meet that demand and will provide a lower-cost solution as the technology grows.

Demonstration


For a visual representation of how a multi-core processor works, see Intel's Flash demonstration of its multi-core processors.