'We’re not prepared for the end of Moore’s Law'

I've been frustrated for the past decade building PCs because CPUs just kept adding more cores on smaller dies without really improving speed. It feels like we've been stuck for so long already...
 
It's like a fractal: it will never run out, it just gets scaled down and stacked in smaller and more intricate designs. When a technological dead end is reached, the technology gets scaled down as small as it can go and becomes dirt cheap to mass-produce. Then someone comes along with a stupid idea, does something simple and childlike with the existing technological brick wall, and something entirely new is born. That starts a brand new pattern, using as a basic building block what only a few years earlier was the extent of humanity's understanding.
 

The more capability is made available, the more they just fill it up with monitoring utilities and advertising trackers. The user never gets to experience the true speed of the cutting edge unless they roll their own system from the core components.
 
I'd argue we're extremely well prepared for it. Most programmers today use programming languages that are 10-30 times (!) slower than machine code generated by something like C/C++, and most of those languages don't even let you use more than one core effectively (even though you often have 8 cores or more nowadays).

If you want things to go faster, much faster, just use a proper programming language and pay attention to performance. As simple as that. There are cases where people are already using proper languages and things still aren't fast enough; the solution there is to use more customized architectures (hence the proliferation of GPUs and tensor processing units for deep learning). AxeFX is an example of such a customized architecture. In those areas Moore's law is still ongoing.

The main issue isn't actually Moore's law, it's Dennard scaling: at some point you can't reduce the energy consumed per unit of computation, even if you can keep piling more transistors onto the chip. But before that becomes the real bottleneck, people are going to have to reconsider whether using languages that are 30 times slower than what the hardware is capable of is an acceptable compromise.
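To make the "use all your cores" point concrete, here's a toy sketch of my own (not from the article): summing a big array in C with OpenMP. One pragma plus an optimizing compiler gets you every core; the same loop written in plain Python runs on a single core through the interpreter.

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

/* Toy benchmark: sum 100 million doubles across every available core.
   Build with: gcc -O3 -fopenmp sum.c -o sum */
int main(void) {
    const size_t n = 100000000;
    double *a = malloc(n * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < n; i++)
        a[i] = 1.0 / (double)(i + 1);

    double sum = 0.0;
    double t0 = omp_get_wtime();
    #pragma omp parallel for reduction(+:sum)   /* split the loop across all cores */
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    double t1 = omp_get_wtime();

    printf("sum = %f, %.3f s, %d threads\n", sum, t1 - t0, omp_get_max_threads());
    free(a);
    return 0;
}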

I'm actually looking forward to the day when performance starts to matter more - my expertise in low-level optimization is pretty considerable, and lately I've been able to get paid for it again.
 
Most programmers today use programming languages that are 10-30 times (!) slower than machine code generated by something like C/C++, and most of those languages don't even let you use more than one core effectively (even though you often have 8 cores or more nowadays).

Very true! That's actually mentioned in the article, although they cite C as the faster programming language. Mind you, they're only comparing it to Python. They also mention taking advantage of all the cores.

"Thompson and his colleagues showed that they could get a computationally intensive calculation to run some 47 times faster just by switching from Python, a popular general-purpose programming language, to the more efficient C. That’s because C, while it requires more work from the programmer, greatly reduces the required number of operations, making a program run much faster. Further tailoring the code to take full advantage of a chip with 18 processing cores sped things up even more. In just 0.41 seconds, the researchers got a result that took seven hours with Python code."
 
And even with all this waste, our computers are still idle 99% of the time. What's with this trend of "doom and gloom" recently? In many ways, we live in a golden age: the world has never been as peaceful, prosperous, and technologically advanced as it is today, and there's no end in sight to any of that. This is the objective reality. Yet if you ask a random Joe on the street, they're pretty pessimistic about the future and don't even realize how good they have it in the present. We gotta stop doing that to ourselves.
 
The speed of microprocessors will eventually cause problems for technology growth and development, even more so in the future. Still, a much bigger issue for now and the future is a small, portable power source (battery or generator). When that problem is conquered we will see flying cars, robots, augmented-reality glasses becoming the norm, human enhancement, and many of the other sci-fi concepts that haven't happened yet, at least not in a practical way.
Tesla (the man, not the company) designed aircraft around his wireless electricity system, which ran on energy transmitted through the air. Maybe that will become a thing someday and power other devices too? Either way, I see this as a much bigger issue than processing power when it comes to really getting creative with technology.
 
I'd argue we're extremely well prepared for it. Most programmers today use programming languages that are 10-30 times (!) slower than machine code generated by something like C/C++,

Slower by 10-30 times compared to C/C++? In what context would that be? I haven't seen that sort of discrepancy in my own experience.
 
There will always be trade-offs with performance in software development. Writing and maintaining large amounts of robust and correct code is hard in any language. It's all about choosing the right tools for your problem and what your goals are for time to market and refresh frequency.

For instance, I would never write a web service in C/C++. The frameworks and tools just blow compared to doing one in .NET with C#, and presumably Java (I haven't kept up with Java much). I've had to write the client side of those things in C++ and that was obnoxious enough.

The project I'm working on now involves writing tools for designing processors. There's no way we could do this in C# or Java. The sheer volume of data and the complexity of the computations necessitate that we write it in C++. However, significant portions of the parsing and integration are written in Python, because that code isn't time-critical, we can build those pieces much faster in Python than in C++, and they're easier to maintain.
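For what it's worth, when the Python glue and the C++ core do need to talk in-process, the usual shape is a thin C-ABI surface on the compiled side that the scripting side loads as a shared library. A rough sketch, with made-up names (not our actual code):

/* mean.c -- hypothetical compute-heavy routine behind a C ABI.
   Build as a shared library: gcc -O3 -shared -fPIC mean.c -o libmean.so
   The Python side would load it with ctypes, roughly:
       lib = ctypes.CDLL("./libmean.so")
       lib.mean_f64.restype  = ctypes.c_double
       lib.mean_f64.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
*/
#include <stddef.h>

double mean_f64(const double *x, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)   /* the hot loop stays in compiled code */
        s += x[i];
    return n ? s / (double)n : 0.0;
}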
 
Slower by 10-30 times compared to C/C++? In what context would that be? I haven't seen that sort of discrepancy in my own experience.

In just about any context where you'd actually need performance. People have been getting around this by calling out into C/C++ modules from e.g. Python, but that's kind of cheating. On its own Python3 is pretty glacial.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/python3-gcc.html

The slowest program here is 150x slower than C. Middle of the pack is about 30x slower. And I bet you a dollar I could make some of those C programs go at least 2x faster by hand-optimizing with intrinsics.
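For anyone wondering what "hand-optimizing with intrinsics" means, the general idea on a trivial loop looks like this (my own sketch, not one of the benchmark programs): rewrite the scalar inner loop with explicit AVX so each add handles four doubles at once.

#include <immintrin.h>
#include <stddef.h>

/* Scalar baseline: the compiler may or may not vectorize this well. */
double sum_scalar(const double *x, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Hand-vectorized with AVX: four doubles per 256-bit add.
   Build with: gcc -O3 -mavx sum.c */
double sum_avx(const double *x, size_t n) {
    __m256d acc = _mm256_setzero_pd();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm256_add_pd(acc, _mm256_loadu_pd(x + i));

    double lane[4];
    _mm256_storeu_pd(lane, acc);
    double s = lane[0] + lane[1] + lane[2] + lane[3];
    for (; i < n; i++)   /* leftover tail */
        s += x[i];
    return s;
}

In real code you'd also keep several independent accumulators to hide the floating-point add latency; that's usually where the extra 2x comes from.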
 

Oh I misunderstood your statement - I thought you were saying you can write machine code that's 10-30 times faster than an efficient implementation in C/C++.
 
And a rather unpleasant art at that. The tooling should be prohibited under the Geneva Convention as a form of torture. When deployed well, however, the results can be pretty jaw-dropping.
 