conventional CPUs

Published on 4 Nov 2018

Since I am preparing a long-form article on quantum computing, I need a temporary platform to air out some clots in my brain regarding the conventional/traditional transistor-based CPU. I have a number of bookmarks on my browser and I think it is a good idea to put them here. Currently I also have an unmaintained page on my Caspershire Atlas regarding this quantum computer thing. More updates are coming later once I have a little bit more freedom to do research and to actually write that article.

The first question that came to my mind when I was reading about conventional CPUs was the meaning behind numbers such as “14nm”, “10nm”, and “7nm”. I knew that these numbers referred to size, but what size? Were they the size of transistors in a CPU or what?

All these numbers are also referred to as “process node” or “technology node”.

To answer this question, some background has to be covered. According to an article on AnandTech, and based on the first image shown there, the number refers to the distance between two transistors, also called the “pitch”. Here is where the confusion comes from. Does “14nm” refer to the actual physical distance between two transistors? Unfortunately, no. In fact, for processors labeled as “14nm”, the distance between two transistors is 42nm, and for processors labeled as “22nm”, the distance is 60nm. The node name no longer corresponds directly to a single physical dimension, which confuses me, and I do not wish to dwell on it further.

What I do want to talk about is why the smaller size is important and where it could fall short. Without going deeply technical: a smaller process node increases performance and reduces energy consumption. The aforementioned AnandTech article explains it better than I could ever possibly do.

The process node keeps going down in size every year (at least for now), as predicted long ago by Gordon Moore in 1965 and since codified as Moore’s Law. The law goes as follows: “the number of transistors in a dense integrated circuit doubles about every two years”. But there is a limit to how small it can possibly get.
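The doubling in Moore’s Law compounds quickly, which a toy calculation makes concrete. This is a minimal sketch, not a real dataset; the 2,300-transistor starting point is the Intel 4004 from 1971, and everything else is pure extrapolation of the “doubling every two years” rule.

```python
def transistors(year, base_year=1971, base_count=2300):
    """Project transistor count assuming a doubling every two years,
    starting from the Intel 4004 (2,300 transistors, 1971)."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# Two decades of doubling already yields a ~1000x increase.
for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Real chips do not follow the curve exactly, but the exponential shape is why the industry kept chasing smaller nodes.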

While Moore predicted that the density doubles, Robert H. Dennard postulated in 1974 that as transistors get smaller, their power density stays roughly constant, so the clock speed can get faster; this is codified as Dennard scaling. In short, Moore said the density goes up, Dennard said the speed goes up.
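Dennard’s ideal scaling can be sketched with a few lines of arithmetic. Assuming switching power follows P = C · V² · f, and that linear dimensions and voltage both shrink by a factor k per generation (the classic idealized assumptions, not measured values), power per transistor drops by k² while area also drops by k², so power density stays flat:

```python
k = 1.4  # illustrative scale factor per generation (~sqrt(2))

capacitance = 1 / k       # C scales with linear dimensions: 1/k
voltage_sq  = 1 / k ** 2  # V scales 1/k, so V^2 scales 1/k^2
frequency   = k           # clock speed can rise by k

# Switching power P = C * V^2 * f comes out to 1/k^2 ...
power = capacitance * voltage_sq * frequency
# ... and area also shrinks by 1/k^2, so power density is unchanged.
area = 1 / k ** 2

print(f"power per transistor: {power:.3f}x")
print(f"power density: {power / area:.3f}x (constant)")
```

This constant power density is what let clock speeds climb for decades; when voltage stopped scaling (around the mid-2000s), Dennard scaling broke down, even though Moore’s Law kept going for a while.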

To summarize, and thanks to this thread on Reddit: a smaller size means electrons have to travel a shorter distance, and the whole thing consumes less energy. When something uses less energy, less energy is dissipated as heat. That’s a win for everyone, but physics gets weird as we go along.

Because physics says: “you can only go so small before electrons start passing right through barriers as if they were not there”.

This phenomenon is called electron tunneling and it belongs to the realm of quantum physics. I am no physicist, so it is hard for me to explain. Essentially, at the nanometer scale, electrons can tunnel (i.e., effectively teleport) right through a barrier. This is a problem because transistors are supposed to gate (i.e., control) the flow of electrons, but if electrons can blaze right through, that defeats the whole purpose of being a transistor, right?
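To get a feel for why thin barriers leak, here is a rough estimate of the tunneling probability through a rectangular barrier using the standard approximation T ≈ exp(−2κd), with κ = √(2m(V−E))/ħ. The 1 eV barrier height is an illustrative assumption, not a real gate-oxide figure, so only the trend matters: the probability rises steeply as the barrier gets thinner.

```python
import math

HBAR = 1.054e-34  # reduced Planck constant, J*s
M_E  = 9.109e-31  # electron mass, kg
EV   = 1.602e-19  # 1 eV in joules

def tunnel_probability(width_nm, barrier_ev=1.0):
    """Estimate tunneling probability T ~ exp(-2 * kappa * d)
    through a rectangular barrier of the given width and height."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Shrinking the barrier from 5 nm to 1 nm raises the leakage
# probability by many orders of magnitude.
for d in (5, 3, 1):
    print(f"{d} nm barrier: T ~ {tunnel_probability(d):.2e}")
```

The exponential dependence on width is the whole story: halving a barrier’s thickness does not double the leakage, it multiplies it enormously, which is why gates only a few atoms thick become so hard to control.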

I forgot where I read it, but supposedly the smallest process node that could be manufactured without a significant amount of electron leakage is 5nm. I could be wrong. We are on the verge of getting a 7nm process node (AMD), so the 5nm theoretical limit is not that far off now.

I am going to reserve the discussion on fabrication technology for some other day. That is where we will talk about the industry players capable of producing 7nm at the moment. Not Intel, not GlobalFoundries; it is TSMC. This is where the discussion could get interesting, because we are talking about an extreme engineering process to manufacture something so tiny yet so powerful.