GPUs have, of course, massively changed the economics of password cracking and the hashing algorithms involved. And last year's insane GPU price bubble was driven by the rising price of Ethereum, which is a very GPU-friendly cryptocoin.
It sounds like this could tackle some problems, but not all of them. What about heavily causal algorithms?
Having multiple cores works for things that can be run independently, for example "simple" cryptographic functions such as mining or password cracking.
Yes, but he may be right when he says "there is no inherent limitation for it not to be assimilated by the CPU" ...
Joel de Guzman doesn't know wtf he is talking about. GPUs get all those FLOPS from being massively parallel. Parallelism is exactly what you DON'T want when processing audio. It's the assembly-line problem: you can't put the laces in the shoe until the shoe is fully assembled.
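That serial dependency shows up concretely in any feedback-based audio process. A minimal sketch (the filter and its name are my illustration, not from the thread): a one-pole IIR low-pass filter, where each output sample depends on the previous output, so the loop over samples is inherently sequential and can't be split across parallel cores.

```python
def one_pole_lowpass(samples, a=0.5):
    # y[n] = a * x[n] + (1 - a) * y[n-1]
    # The feedback term y[n-1] means sample n cannot be computed
    # until sample n-1 is done -- the "assembly line" dependency.
    out = []
    y = 0.0
    for x in samples:
        y = a * x + (1.0 - a) * y  # needs the previous output
        out.append(y)
    return out

print(one_pole_lowpass([1.0, 1.0, 1.0, 1.0]))
# [0.5, 0.75, 0.875, 0.9375] -- each step depends on the last
```

A GPU could parallelize across independent channels or filters, but not across the time axis of a single feedback loop like this.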
I'm gonna go out on a limb and say you know what you're talking about.
In our business we actually want fewer, faster cores. I'd much rather have one core running at 10 GHz than 100 cores running at 100 MHz.
Many slower cores increase latency in proportion to the number of cores. For example, say our audio processing requires 10 dependent tasks and each task requires 100 MFLOPs of work. If our processor has one core with 1 GFLOP/s of performance, it finishes all 10 tasks within one frame period, so the processing latency equals the size of our audio frame, say 32 samples. If instead we have 10 cores at 100 MFLOP/s each, we can spread the tasks out as one task per core, but the tasks are dependent, so each core is processing the PREVIOUS frame of audio handed on by the previous task. The latency is therefore 10 times as great (320 samples). This is unacceptable for real-time audio.
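The arithmetic above can be written out as a back-of-envelope sketch (variable names are mine, not the poster's; it assumes the tasks form a strict pipeline, one stage per core):

```python
FRAME = 32     # samples per audio frame
N_TASKS = 10   # dependent processing stages, 100 MFLOPs of work each

# One 1-GFLOP/s core finishes all 10 tasks inside a single frame
# period, so latency is just one frame:
single_core_latency = FRAME * 1          # 32 samples

# Ten 100-MFLOP/s cores have the same total throughput, but must
# pipeline: stage k works on the frame emitted by stage k-1 one
# frame period earlier, so a sample passes through N_TASKS frame
# periods before reaching the output:
pipelined_latency = FRAME * N_TASKS      # 320 samples

print(single_core_latency, pipelined_latency)  # 32 320
```

Same throughput either way; only the latency differs, and latency is what real-time audio cares about.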
Then you get the problem of how to apportion tasks among the cores. Some tasks require more FLOPs than others.
Parallelism is great for video and other applications where latency isn't an issue, but for real-time audio, uh, no.