I've been frustrated building PCs for the past decade because CPUs just kept adding more cores on smaller dies without really improving single-core speed. It feels like we've been stuck for so long already...
I'd argue we're extremely well prepared for it. Most programmers today use programming languages that are 10-30 times (!) slower than machine code generated by something like C/C++, and most of those languages don't even let you use more than one core effectively (even though you often have 8 cores or more nowadays).
10-30 times slower than C/C++? In what context would that be? I haven't seen that sort of discrepancy in my own experience.
In just about any context where you'd actually need performance. People have been getting around this by calling out into C/C++ modules from e.g. Python, but that's kind of cheating. On its own Python3 is pretty glacial.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/python3-gcc.html
The slowest program here is 150x slower than C. Middle of the pack is about 30x slower. And I bet you a dollar I could make some of those C programs go at least 2x faster by hand-optimizing with intrinsics.
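For a feel of where that gap comes from without leaving the standard library: the same reduction written as an explicit Python loop versus delegated to the C-implemented built-in sum(). This is my own rough sketch (N and the timing harness are illustrative, not from the benchmarks game), but on CPython the ratio is typically several-fold:

```python
# The explicit for-loop executes interpreter bytecode on every iteration,
# while the built-in sum() runs its loop in C with a single dispatch.
import timeit

N = 1_000_000

def py_loop():
    total = 0
    for i in range(N):
        total += i
    return total

def c_loop():
    return sum(range(N))  # sum() iterates in C

# Both compute the same value: N*(N-1)//2
assert py_loop() == c_loop() == N * (N - 1) // 2

t_py = timeit.timeit(py_loop, number=3)
t_c = timeit.timeit(c_loop, number=3)
print(f"pure-Python loop: {t_py:.3f}s  built-in sum(): {t_c:.3f}s")
```

Same logic, same language, and most of the difference is just per-iteration interpreter overhead — which is exactly what the "call out into C modules" workaround sidesteps.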
You'd need an FPGA for that.