How parallel computing will shape our future

Parallel computing is becoming increasingly important, but because it comes with additional complexity, only a few really dare to tackle it. This article shows what parallel computing is and how it will change the way we think about computing.

Do you remember the introduction of Windows 95? One of its main selling points was that it was capable of multitasking: users could run multiple programs simultaneously. Back then a PC had only one CPU, so one might ask how it was even possible to run multiple programs at the same time. Well, it only looked that way. What the system actually does is switch back and forth between tasks so fast that we as humans don’t notice it. This is known as preemptive multitasking and is not to be confused with parallel computing. A couple of years later, when multi-core CPUs were introduced, the operating system could run one program on one CPU core and another one on a different core. Even that is not what parallel computing means, as both tasks run independently of each other.
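The effect is easy to reproduce. Here is a minimal Go sketch (my example, not from the original article) that pins the runtime to a single core and lets two tasks take turns, much like a single-CPU machine switching between programs:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        runtime.GOMAXPROCS(1) // restrict Go to a single core, like a 1995 PC

        var wg sync.WaitGroup
        for _, name := range []string{"task A", "task B"} {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                for i := 0; i < 3; i++ {
                    fmt.Println(name, "step", i)
                    runtime.Gosched() // yield so the scheduler can switch tasks
                }
            }(name)
        }
        wg.Wait()
    }

The output of the two tasks interleaves even though no two instructions ever execute at the same instant. That is concurrency through time-slicing, not parallelism.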

Computing in time vs computing in space

Let’s think about what computing means. Computers were invented to solve problems more efficiently than we can with our brains. The resources we have at our disposal are time and space. We can either solve a problem on our own, which can take a lot of time, or we can decompose it into smaller subproblems that allow us to solve it as a team. Time is always limited, while space is virtually unlimited. It is the same with computers. For decades we have been accustomed to ever higher clock frequencies helping us solve our problems faster and faster. We didn’t even think about using the other resource.

In the past decades CPU speed was never a problem. We just had to wait for a faster CPU model to be released. As a result, we put our focus on writing code faster rather than on executing it faster. Today the laws of physics get in our way. There is no such thing as a free lunch: more frequency comes at a cost, in electrical effects such as impedance and, above all, in power dissipation and heat. The more we increase the clock rate, the more of both we get. Thus computing in time is limited by its very nature.
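A standard rule of thumb from CMOS design (added here as context, not from the original article) makes that cost concrete. The dynamic power a chip dissipates grows linearly with its clock frequency:

    P_dynamic ≈ α · C · V² · f

where α is the switching activity, C the switched capacitance, V the supply voltage and f the clock frequency. Worse, higher frequencies typically require a higher supply voltage, so in practice power and heat grow superlinearly with the clock rate.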

That limitation forces us to stop wasting CPU cycles with languages like PHP and Java. And if we want to be able to solve bigger problems, we have to scale horizontally. But again, there is no such thing as a free lunch. That, too, comes at a price, and the price is Amdahl’s law (see http://www.it-automation.com/2021/07/01/amdahls-law-simple-explanation.html).
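As a quick illustration (my sketch; the linked article explains it in more depth): Amdahl’s law says that if only a fraction p of a program can be parallelized, then n processors yield a speedup of at most 1 / ((1 − p) + p/n). A few lines of Go show how brutally the serial remainder caps the gain:

    package main

    import "fmt"

    // speedup returns the maximum speedup predicted by Amdahl's law for a
    // program with parallelizable fraction p running on n processors.
    func speedup(p, n float64) float64 {
        return 1 / ((1 - p) + p/n)
    }

    func main() {
        // Even with 95% of the work parallelizable, the serial 5% dominates.
        fmt.Printf("8 cores:  %.1fx\n", speedup(0.95, 8))  // ~5.9x
        fmt.Printf("64 cores: %.1fx\n", speedup(0.95, 64)) // ~15.4x
        fmt.Printf("infinite: %.1fx\n", 1/(1-0.95))        // 20.0x at most
    }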

Parallel computing requires us to completely rethink what we do and how we do it. There are two aspects to it: on the one hand, we have to define problems in a way that enables us to decompose them into smaller subproblems. On the other hand, we have to use technologies that help us focus on execution performance rather than on coding convenience. Think about C, Go and Rust compared to PHP and Java.
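To make the first aspect concrete, here is a minimal Go sketch (my example): summing a large slice of numbers, decomposed into independent chunks that are processed on all available cores and then combined.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // parallelSum decomposes "sum all numbers" into independent subproblems:
    // each worker sums one chunk, and the partial sums are combined at the end.
    func parallelSum(nums []int) int {
        workers := runtime.NumCPU()
        partial := make([]int, workers) // one result slot per worker, no locking needed
        chunk := (len(nums) + workers - 1) / workers

        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            lo := w * chunk
            if lo > len(nums) {
                lo = len(nums)
            }
            hi := lo + chunk
            if hi > len(nums) {
                hi = len(nums)
            }
            wg.Add(1)
            go func(w, lo, hi int) {
                defer wg.Done()
                for _, v := range nums[lo:hi] {
                    partial[w] += v
                }
            }(w, lo, hi)
        }
        wg.Wait()

        total := 0
        for _, p := range partial {
            total += p
        }
        return total
    }

    func main() {
        nums := make([]int, 1_000_000)
        for i := range nums {
            nums[i] = 1
        }
        fmt.Println(parallelSum(nums)) // 1000000
    }

The decomposition is the hard part; once the subproblems are independent of each other, spreading them over cores is mechanical.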

But these are only the harbingers of something much more far-reaching. As of today, parallel computing typically means using multiple CPU cores or multiple servers. The problem with CPUs is still that each core executes instructions sequentially, only a handful per clock cycle.

The next evolutionary step will do away with that limitation. With GPUs or FPGAs we can do real computing in space: parallelisation happens across thousands of execution units on the same chip rather than across a few CPUs. This pushes the limits imposed by Amdahl’s law far out. FPGA and GPU nodes are already starting to appear at your favourite cloud provider, and libraries that make them easy to use will follow. But it is your job to be prepared for it.

Companies that are still busy planning the replacement of their old PHP monoliths will most likely not be able to keep pace with that development.
