Parallel Computing - which way to go?
The BIG problem with software is bugs and how to find them before the software is released. The more complex the software, the greater the chance of undetected bugs and the greater the difficulty in debugging. So the question arises - how do we exploit multi-processor systems yet keep the software simple?

The obvious answer is SIMD - Single Instruction, Multiple Data - where the same program runs in every core but on different data. This way the processors can be simple, which means that, for a given die size, there can be more in-core RAM. Such a system maps very well onto problems such as Air Traffic Control or Meteorology, where vast quantities of data need the same work doing on each chunk.

Back in 1985 I worked on a simulation of an SIMD array processor with over 1000 cores, where I looked into the operation of various standard algorithms (such as the Fast Fourier Transform, Convolution and the Viterbi Algorithm - to name those I remember) on such an array. Unfortunately the company concerned, Anamartic, didn't survive to exploit the array processor, but the potential was fascinating.
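To make the SIMD idea concrete, here is a minimal sketch (in Python, purely for illustration - the lane count, data and operations are invented, not taken from any real machine) of one instruction stream driving many data lanes in lockstep:

```python
# SIMD sketch: a single instruction is broadcast to every "core" (lane),
# and each core applies it to its own chunk of data simultaneously.

def simd_step(lanes, op):
    """One instruction cycle: the same operation applied to every lane."""
    return [op(x) for x in lanes]

# Four hypothetical cores, each holding one data element.
lanes = [1.0, 2.0, 3.0, 4.0]

# Each step is one shared instruction - no per-core control flow,
# so there is only one program to debug, however many cores run it.
lanes = simd_step(lanes, lambda x: x * 2)  # every core multiplies by 2
lanes = simd_step(lanes, lambda x: x + 1)  # every core adds 1

print(lanes)  # [3.0, 5.0, 7.0, 9.0]
```

The point of the single instruction stream is exactly the simplicity argued for above: there is one program, so there is one thing to debug, regardless of how many cores execute it.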