The Brains and Brawn of a Computer

Charles Babbage, known as the father of computing, first envisioned using machines rather than humans to perform complex mathematical computations in 1822. The idea truly came into action as a result of Alan Turing's breakthroughs during the Second World War. Since then, advancements in computing have been exponential, shrinking enormous mechanical contraptions down to six-inch devices. A big chunk of the credit for this tremendous growth goes to two of the machine's most critical parts: the CPU and the GPU.

Fig. 1: Intel CPU and Nvidia GPU

The Central Processing Unit (CPU), as the name suggests, is the center where the majority of the computer's calculations are performed. This piece of electronic circuitry performs the basic arithmetic, logic, and input/output operations specified by a program's instructions. To implement these features, the CPU is divided into two major sections: the Control Unit (CU) and the Arithmetic and Logic Unit (ALU). Apart from these, the CPU also possesses other parts like Registers and Buses, which work in harmony with the Random Access Memory (RAM) to execute an operation. Registers store small pieces of information temporarily, right inside the CPU, to enable faster processing. Buses, on the other hand, transport information between the RAM and the CPU. The CU, ALU, Registers and Buses work around the clock to ensure smooth functioning. Let's understand how this is done.

The Fetch-Decode-Execute Cycle

Fig. 2: Schematic of the working of a CPU

To begin with, all the Registers inside the CPU are initialized to a null value. While most of the Registers play the usual storage role, some are specialized and used only for specific purposes. One such special Register, the Instruction Address Register (IAR), holds the address (location in RAM) of the next instruction to be executed. During the Fetch step, the CPU reaches out to the RAM (the computer's temporary working memory, which sits outside the CPU) and copies the instruction stored at that address, in binary, into another unique Register known as the Instruction Register (IR), which stores the instruction itself. As soon as the instruction lands in the IR, the next step, known as Decode, begins. Unsurprisingly, in this step the instruction is unscrambled to figure out which operation to perform and on what data. Following this, the Control Unit comes into the picture for the Execute step, steering the operands through the ALU. Once the execution is done, the result may be written back into memory, depending on the requirement. This is the basic flow of steps inside a CPU. Modern-day processors apply multiple optimizations, including pipelining and parallelization, which allow them to process information faster than ever before.
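To make the cycle concrete, here is a toy simulator, a minimal sketch compilable as ordinary C++. The 4-bit opcode format, the instruction set (LOAD/ADD/STORE/HALT) and the 16-byte RAM are all made up for illustration and do not correspond to any real processor.

```cpp
#include <cstdint>
#include <cstdio>

// Toy instruction format: high 4 bits = opcode, low 4 bits = RAM address.
enum Opcode { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main() {
    // "RAM": the program sits at addresses 0-3, the data at 13-15.
    uint8_t ram[16] = {
        (LOAD  << 4) | 14,   // 0: ACC <- RAM[14]
        (ADD   << 4) | 15,   // 1: ACC <- ACC + RAM[15]
        (STORE << 4) | 13,   // 2: RAM[13] <- ACC
        (HALT  << 4),        // 3: stop
        0, 0, 0, 0, 0, 0, 0, 0, 0,
        0,                   // 13: result goes here
        20, 22               // 14, 15: input data
    };

    uint8_t iar = 0;  // Instruction Address Register (program counter)
    uint8_t ir  = 0;  // Instruction Register
    uint8_t acc = 0;  // Accumulator: an ordinary working register

    while (true) {
        ir = ram[iar++];               // FETCH: copy instruction at IAR into IR
        uint8_t op   = ir >> 4;        // DECODE: split the opcode ...
        uint8_t addr = ir & 0x0F;      //         ... from the operand address
        if (op == HALT)  break;        // EXECUTE: the CU steers each case below
        if (op == LOAD)  acc = ram[addr];
        if (op == ADD)   acc = acc + ram[addr];  // the ALU at work
        if (op == STORE) ram[addr] = acc;        // write back to memory
    }
    printf("RAM[13] = %d\n", ram[13]);  // prints 42
    return 0;
}
```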

A clock signal makes this Fetch-Decode-Execute cycle possible by synchronizing the various components, triggering electrical signals at regular intervals. It decides the speed at which each step of the cycle is carried out and is measured in hertz: one hertz means one cycle per second. Today's processors are clocked at gigahertz, which means they can run billions of such cycles every second.
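As a quick back-of-the-envelope check (the 3.2 GHz figure below is just an illustrative value), the duration of one cycle is simply the reciprocal of the clock frequency:

```cpp
#include <cstdio>

int main() {
    double freq_hz = 3.2e9;               // a 3.2 GHz clock (example value)
    double cycle_time_s = 1.0 / freq_hz;  // period of a single cycle
    printf("One cycle lasts %.3f ns\n", cycle_time_s * 1e9);  // ~0.313 ns
    printf("Cycles per second: %.1e\n", freq_hz);             // billions
    return 0;
}
```

At roughly a third of a nanosecond per cycle, even a single core steps through its Fetch-Decode-Execute loop billions of times per second.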

Fig. 3: CPU vs. GPU comparison in number of cores

Until now, the CPU seems to be both the brains and the brawn of a computer. Why, then, do we even need a GPU? Zooming in on the basic structure of the CPU and the GPU provides the answer.

Although both are fundamentally made up of millions of transistors, there is a significant difference in how those transistors are employed, and it comes down to specificity. As seen above, the CPU possesses a limited number of cores, due to which it executes its operations largely sequentially. The Graphics Processing Unit (GPU), on the other hand, encompasses thousands of cores. Such a massive number of cores opens up the possibility of parallelism, a property by which multiple operations can be performed simultaneously. For a long time, the GPU was used primarily by hardcore gamers, as parallelism allowed fast rendering of high-quality graphics. The following video explains this advantage using an elementary example of creating a painting:

[Embedded video: painting demonstration comparing CPU and GPU rendering]

The clip above draws an analogy between the cores inside a chip and pipes firing paint. In the CPU's case, a single pipe takes a sequential approach, creating the painting patch by patch. The GPU, firing all its pipes at once, produces the full image in the blink of an eye. This is exactly how the large number of cores inside a GPU lets it process complex images almost instantly.
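For readers who want to see the analogy in code, here is a minimal CUDA sketch; the image size and the toy "color" formula are invented for illustration. The CPU routine paints the picture patch by patch in a loop, while the GPU kernel launches one thread per pixel so every patch is painted at once.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

const int W = 1024, H = 768;  // arbitrary example image size

// GPU: one thread per pixel -- the whole "painting" appears in one pass.
__global__ void paintKernel(unsigned char* image) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < W && y < H)
        image[y * W + x] = (unsigned char)((x + y) % 256);  // toy "color"
}

// CPU: a single loop painting patch by patch, one pixel after another.
void paintSequential(unsigned char* image) {
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            image[y * W + x] = (unsigned char)((x + y) % 256);
}

int main() {
    unsigned char* d_image;
    cudaMalloc(&d_image, W * H);
    dim3 block(16, 16);                            // 256 threads per block
    dim3 grid((W + 15) / 16, (H + 15) / 16);       // enough blocks to cover the image
    paintKernel<<<grid, block>>>(d_image);         // thousands of cores, one shot
    cudaDeviceSynchronize();
    cudaFree(d_image);

    unsigned char* h_image = new unsigned char[W * H];
    paintSequential(h_image);                      // one core, patch by patch
    delete[] h_image;
    printf("Both images painted.\n");
    return 0;
}
```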

Extensive research in GPUs has opened up possibilities in fields like Machine Learning, Deep Learning and Cryptocurrency Mining, all of which involve extensive mathematical operations. Along the lines of these applications, the term GPGPU, which stands for General-Purpose GPU, has gained currency. Put simply, GPGPU refers to using the GPU alongside the CPU in general-purpose applications wherever possible, both to decrease the workload on the CPU and to create high-performance systems. While GPUs were designed primarily to render images, they can now be programmed to direct that processing power toward scientific computing needs.
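As a small illustration of what "general purpose" means here, the following CUDA sketch runs SAXPY (y = a·x + y), a purely numerical kernel common in machine-learning math, on the GPU; the array size and constants are arbitrary example values, not taken from any particular application.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// General-purpose math on the GPU: y[i] = a * x[i] + y[i] (SAXPY).
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // ~1 million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory: visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // one thread per element
    cudaDeviceSynchronize();

    printf("y[0] = %.1f\n", y[0]);  // 3*1 + 2 = 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Nothing in this kernel is graphical; the GPU is simply being used as a very wide calculator.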

Traditionally, the CPU and GPU work together, with the former continuously passing information to the latter for rendering. With General-Purpose GPUs, this transfer becomes bidirectional: the GPU passes processed data back to the CPU, boosting efficiency. A GPGPU pipeline analyses whatever data it receives as if it were an image or some other graphical form, which makes the job easier for the GPU, since rendering images is precisely the work it was built for. Before enjoying this boost, however, one needs to ensure that the computer's graphics card supports the necessary frameworks. OpenCL and CUDA are the two most popular frameworks for general computing on GPUs: Nvidia provides top-quality support for its proprietary CUDA platform, while AMD is more inclined towards the open-source OpenCL framework.
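Both points can be sketched with the standard CUDA runtime API (assuming an Nvidia card; an OpenCL version would follow the same pattern): cudaGetDeviceProperties confirms that a CUDA-capable graphics card is present, and cudaMemcpy moves data in both directions between the CPU (host) and the GPU (device).

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    // Check that the graphics card supports the framework (here, CUDA).
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("Found: %s with %d multiprocessors\n", prop.name, prop.multiProcessorCount);

    // Bidirectional transfer: host -> device, then device -> host.
    const int n = 256;
    float h_data[n];
    for (int i = 0; i < n; ++i) h_data[i] = (float)i;

    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);  // CPU -> GPU
    // ... GPU kernels would crunch the data here ...
    cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);  // GPU -> CPU
    cudaFree(d_data);

    printf("Round trip complete, h_data[10] = %.1f\n", h_data[10]);
    return 0;
}
```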

As for the speculation that GPUs will replace CPUs, it's critical to note that each is made for a specific purpose. Even though the GPU is a monster at parallel processing, it is clocked at a lower frequency than the CPU, making the latter more suitable for everyday tasks like running Microsoft Word. After all, GPUs are simply accelerators; they are not the main computation component of a system. Imagine how disastrous road conditions would be if every car ran on an ultra-powerful jet engine. In much the same way, less powerful engines (CPUs) are essential to maintain equilibrium.

Moreover, using a GPU increases the power requirements dramatically, so people applying GPUs to general computing need to work out where the trade-off between cost and efficiency stops paying off. Software developers and engineers would also have to modify their applications to support the previously mentioned frameworks (CUDA and OpenCL) to run on GPU-enabled systems. Hence, for the foreseeable future, the use of GPUs outside of high-performance computing and gaming is unlikely; it would be as preposterous as sewing on a shirt button with a sword. The CPU remains the more efficient solution for most use cases.

