..Cause and Effect..
A company announces a new whizz-bang technology, the marketing pushes this new idea, this new approach, into the populace, and it's then spread onward via viral marketing (like we're doing right now). It's a lot like the whisper games we've probably all played in primary school. With technology they almost rely upon the hype factor.
Multicore certainly appears to be a direction forward; at present clock speeds are hindered by many factors. One factor is power leakage: as you scale the clock up you get more leakage, resulting in interference, which is a circuit killer. The more complex these single core CPUs become in order to push forward, the more transistors they use, thus the more power is required, and the more leakage.
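Roughly speaking (just the usual first-order textbook relations, not figures for any particular chip):

[code]
P_{total}   \approx  P_{dynamic} + P_{leakage}
P_{dynamic} \approx  C \, V^{2} \, f       % switching power: capacitance, voltage squared, clock frequency
P_{leakage} \approx  V \, I_{leak}         % static leakage, which grows as transistors shrink and multiply
[/code]

Push the clock f up and the dynamic term climbs; pack in more transistors and both the capacitance and the leakage current climb with it.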
So multicore seems a simpler way forward for manufacturers to increase (theoretical) performance without butting heads against the present set of problems (as much; they're still there, though). Apparently this is because the cores are simpler (they've trimmed the fat). The idea being that each core can use fewer transistors, less power and less leakage, so higher clocks are obtainable.
One thing that is worrying, though, is that the marketing is trying to tell us that having a multicore CPU is going to double your horsepower instantly. This is simply a myth. (i.e. "Two heads are better than one!"; anybody remember the Sega Saturn?) Why? Because programs are written in a linear form: Process A is done before Process B, then Process C, etc. So programs have to be explicitly optimized for multicore CPUs to obtain a big advantage from them. Some programs will be, some won't.
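For example (a minimal C++ sketch of my own, not anybody's actual engine code), getting a second core involved means restructuring the work yourself:

[code]
#include <numeric>
#include <thread>
#include <vector>

// Linear version: one core does everything, the other core sits idle.
long long SumSerial(const std::vector<int>& data)
{
    return std::accumulate(data.begin(), data.end(), 0LL);
}

// Explicitly restructured version: the job is split in two and the back
// half handed to a second thread (i.e. potentially the second core).
long long SumTwoCores(const std::vector<int>& data)
{
    const auto mid = data.size() / 2;
    long long backHalf = 0;

    std::thread worker([&] {
        backHalf = std::accumulate(data.begin() + mid, data.end(), 0LL);
    });

    long long frontHalf = std::accumulate(data.begin(), data.begin() + mid, 0LL);
    worker.join();                    // wait for the other core to finish
    return frontHalf + backHalf;
}
[/code]

And even then you only see the win if the job is big enough to cover the cost of spinning up and joining the extra thread.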
Currently CPUs are exploiting various parallelisms to push forward (they have to); these occur at an opcode level, allowing multiple math/logic operations to be performed in unison. So programmers and compilers can produce more effective code with minimal effort. But now you have to think more about the design of your program, and not just the micro-level optimizations.
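A tiny illustration of what I mean by opcode-level parallelism (my own hypothetical snippet; whether the overlap actually happens depends on the chip and what the compiler emits):

[code]
// Hypothetical function: the point is the data dependencies, not the arithmetic.
int IlpExample(int x, int y, int z, int w)
{
    // Independent operations: a superscalar CPU can issue several of these
    // in the same cycle without the programmer doing anything special.
    int a = x * 2;
    int b = y * 3;
    int c = z + 7;
    int d = w - 1;

    // Dependent chain: each line needs the previous result, so these can't be
    // overlapped no matter how many execution units the CPU has spare.
    int e = a + b;
    int f = e + c;
    int g = f + d;
    return g;
}
[/code]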
Moving from a single core to multicore there are lots of issues; memory access and CPU stalls are just two, as it's not clear how cooperative the cores can be. I.e. is it possible for the two cores to access the same address space in the data/instruction caches, or in memory? How is cache consistency protected as memory is spooled into or dumped out of the L1 or L2 caches? Having the two cores working on two different areas of memory could be hugely detrimental!
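One concrete way two cores can trip over each other's caches is so-called false sharing (my own C++ example): two threads hammering counters that happen to share a cache line keep invalidating each other's cached copy, even though they never touch the same variable.

[code]
#include <thread>

// Both counters will usually land in the same cache line, so every write by
// one core invalidates the other core's cached copy of that line.
struct PackedCounters
{
    long a = 0;
    long b = 0;
};

// Forcing each counter onto its own (typically 64-byte) cache line lets each
// core work out of its own cache undisturbed.
struct PaddedCounters
{
    alignas(64) long a = 0;
    alignas(64) long b = 0;
};

template <typename Counters>
void Hammer(Counters& c)
{
    std::thread t1([&] { for (int i = 0; i < 10000000; ++i) ++c.a; });
    std::thread t2([&] { for (int i = 0; i < 10000000; ++i) ++c.b; });
    t1.join();
    t2.join();   // the packed version is typically much slower than the padded one
}
[/code]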
But the biggest hurdle is going to be designing programs that minimize CPU stalling. Programmers have enough problems currently designing apps that do this when feeding data to a GPU, for example, let alone threading an entire secondary part of the program. A stall is when a CPU is left sitting there waiting for a second device to finish; a common cause is buffer locking. This is dead time.
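Buffer locking in the GPU sense is API-specific, so here's the same shape of problem with a plain mutex-guarded buffer (a minimal sketch of my own): the consumer thread stalls on the lock while the producer is still filling the buffer, and that wait is pure dead time.

[code]
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

std::mutex       bufferLock;     // guards the shared buffer
std::vector<int> sharedBuffer;   // the buffer both sides want

// Producer holds the lock while it does a slow fill (pretend the device is busy).
void ProducerFill()
{
    std::lock_guard<std::mutex> hold(bufferLock);
    sharedBuffer.assign(1000000, 42);
    std::this_thread::sleep_for(std::chrono::milliseconds(5));
}

// Consumer wants the same buffer, so it sits stalled on the lock
// instead of doing anything useful -- that's the dead time.
long long ConsumerSum()
{
    std::lock_guard<std::mutex> hold(bufferLock);
    long long total = 0;
    for (int v : sharedBuffer) total += v;
    return total;
}

int main()
{
    std::thread producer(ProducerFill);
    std::thread consumer([] { ConsumerSum(); });
    producer.join();
    consumer.join();
}
[/code]

Double buffering (each side gets its own buffer and they swap roles every frame) is the usual trick for cutting that dead time down.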
Parallelism is certainly a way forward; the theoretical impact upon OS performance alone probably makes it worth getting pretty excited about, but we have to keep our feet on the ground.
Kevin Picone
[url]www.underwaredesign.com[/url]
Play Nice!
Play Basic (Release V1.066 Out Now)