The future of DSP?

...Maybe in a mixing context, where you can tolerate latency, there's some benefit to mixing via additional GPU resources.*) But as a guitar player, trying to play through it in real time would be somewhere between painful and hopeless.

*) Or you could get the same benefit by just using buffer sizes of 256+ samples when mixing, so the CPU processes larger chunks of audio at a time.
 
I see the point he's making about the FFT being a parallel task that could benefit from a huge number of cores. Pitch detection could benefit from that. I'm not a DSP expert, though.
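For what it's worth, the reason the FFT maps well to a GPU is that each output bin (and each analysis frame) is an independent piece of work. Here's a deliberately naive CUDA sketch of a plain DFT with one thread per bin, just to show how the work splits; a real implementation would use something like cuFFT, and all the names here are mine:

```cuda
#include <cmath>
#include <cuda_runtime.h>

// Naive DFT, one thread per frequency bin. Only meant to show that every
// bin is an independent unit of work; real code would call cuFFT instead.
__global__ void naive_dft(const float* x, float* re, float* im, int n) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;   // bin index
    if (k >= n) return;
    float sum_re = 0.0f, sum_im = 0.0f;
    for (int t = 0; t < n; ++t) {
        float phase = -2.0f * 3.14159265f * k * t / n;
        sum_re += x[t] * cosf(phase);
        sum_im += x[t] * sinf(phase);
    }
    re[k] = sum_re;
    im[k] = sum_im;
}

int main() {
    const int n = 4096;                                  // one analysis frame
    float *x, *re, *im;
    cudaMallocManaged(&x,  n * sizeof(float));
    cudaMallocManaged(&re, n * sizeof(float));
    cudaMallocManaged(&im, n * sizeof(float));
    for (int t = 0; t < n; ++t) x[t] = sinf(0.05f * t);  // dummy signal
    naive_dft<<<(n + 255) / 256, 256>>>(x, re, im, n);   // 4096 threads at once
    cudaDeviceSynchronize();
    return 0;
}
```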
 
Which totally misses the point.
What types of audio processing are assembly-line? Is most of the guitar amp/effect signal chain necessarily assembly-line, in that an amp is essentially filter -> distortion -> filter -> distortion -> filter -> filter, etc., and those stages necessarily have to run one after another and cannot be turned into some parallel process?
 
What types of audio processing are assembly-line? Is most of the guitar amp/effect signal chain necessarily assembly-line, in that an amp is essentially filter -> distortion -> filter -> distortion -> filter -> filter, etc., and those stages necessarily have to run one after another and cannot be turned into some parallel process?
Exactly.
 
What is that expression? 9 nurses can't deliver a baby in a month, or something like that.

It's "9 mothers can't deliver a baby in a month" - IOW, that would be a 9x speed-up. But it's a bit of a false equivalence, because that particular "job" cannot be subdivided. Parallel thinking would have 9 mothers each deliver a baby in 9 months - that's a 9x throughput gain over what 1 mother can do alone. Imagine how long it would take to populate the world if babies could only be born one after the other. So parallelism works there; it's the latency of any one baby that doesn't improve.

Audio processing could theoretically benefit from parallelism. It's just pretty damn difficult to do. Partitioning the work, synchronizing the audio streams, and so on would be challenging enough, but then you add in complexities like memory and I/O latency and it gets even harder.
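To make the "assembly line" part concrete (my sketch, not anything from a real modeler): the basic building block of most amp/effect chains is a filter that carries state from one sample to the next, so sample n literally cannot be computed before sample n-1. You can parallelize across independent channels or chains, but not along the time axis within one chain.

```cuda
// One-pole low-pass: y[n] = a * x[n] + (1 - a) * y[n-1].
// The y[n-1] term is the assembly line: every output depends on the previous
// one, so the loop over time cannot simply be split across threads.
void one_pole_lowpass(const float* x, float* y, int n, float a) {
    float state = 0.0f;                               // y[n-1]
    for (int i = 0; i < n; ++i) {
        state = a * x[i] + (1.0f - a) * state;        // depends on previous output
        y[i] = state;
    }
}
```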

Video processing with a GPU is one thing. People tolerate far more anomalous behavior for visuals than they do audio. Audio glitches, delays, and gaps are never acceptable, whereas people will tolerate a dropped frame (or much more), pixelation, or other visual artifacts.
 
GPUs are not necessarily optimized for power efficiency.

Actually they are, as are most other specialized architectures. If you divide their throughput by their power consumption, they're pretty efficient in terms of FLOPS per watt, and even more so at reduced precision, which is often enough. That's because their throughput is extremely high.
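Rough numbers to back that up (order-of-magnitude figures from memory, not tied to any specific part):

```cuda
#include <cstdio>

int main() {
    // Ballpark FP32 figures, not any particular chip.
    const double gpu_tflops = 30.0, gpu_watts = 300.0;  // big consumer GPU
    const double cpu_tflops = 1.0,  cpu_watts = 100.0;  // modern many-core CPU with wide SIMD
    printf("GPU: ~%.0f GFLOPS per watt\n", 1000.0 * gpu_tflops / gpu_watts);
    printf("CPU: ~%.0f GFLOPS per watt\n", 1000.0 * cpu_tflops / cpu_watts);
    return 0;
}
```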

Basically, the way a GPU works is that it has clusters of really dumb cores, all of which essentially do the same exact thing at any given time. It is also very economical to spin up tens of thousands (or even millions) of threads. These aren't like OS threads, though, in the sense that there's no OS thread abstraction; instead they're hardware-scheduled. It is also pretty difficult to get the theoretical throughput numbers out of a GPU with most algorithms because of its architectural limitations. NVIDIA spends a ton of money so others don't have to deal with this, but it's not trivial at all. I took a course on GPU programming years ago, hoping to reuse some of the tricks in my CPU work, and it's so alien that the approaches are largely orthogonal.
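If anyone is curious what "spin up a million hardware-scheduled threads" actually looks like, here's a minimal CUDA sketch (my example; the operation is deliberately trivial):

```cuda
#include <cuda_runtime.h>

// Every thread does the same tiny operation on one element; the hardware
// schedules them in lockstep groups (warps), with no OS threads involved.
__global__ void apply_gain(float* buf, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= gain;
}

int main() {
    const int n = 1 << 20;                               // ~1M samples: a GPU-sized slab
    float* buf;
    cudaMallocManaged(&buf, n * sizeof(float));
    for (int i = 0; i < n; ++i) buf[i] = 1.0f;
    apply_gain<<<(n + 255) / 256, 256>>>(buf, n, 0.5f);  // ~1M threads in one launch
    cudaDeviceSynchronize();
    cudaFree(buf);
    return 0;
}
```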

The key thing to understand, though, is that a GPU can't realize any of that amazing throughput on the tiny data buffers a DSP typically deals with, and unlike on a DSP, on a GPU latency is not even a second thought but a distant third. GPUs need pretty large, contiguous slabs of data to be able to spin up thousands of hardware threads and get even within an order of magnitude of their claimed maximum. They also need highly predictable memory read and write patterns ("coalesced"); if they don't get them, performance takes a massive nosedive.
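For anyone wondering what "coalesced" means in practice, here's a minimal sketch (my example): in the first kernel, neighbouring threads touch neighbouring floats, so a warp's loads collapse into a few wide memory transactions; in the second, each thread jumps through the buffer with a stride, the loads scatter across cache lines, and effective bandwidth drops hard.

```cuda
// Coalesced: consecutive threads read consecutive elements.
__global__ void copy_coalesced(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided: consecutive threads read elements far apart (e.g. walking one
// channel of an interleaved buffer). Same arithmetic, far worse throughput.
// Assumes `in` holds at least n * stride elements.
__global__ void gather_strided(const float* in, float* out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[(long long)i * stride];
}
```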

When dealing with audio, large chunks of time-series data imply massive latency. You can "mitigate latency" like that dude describes, but in doing so you'll very likely make using the GPU not worthwhile. And that's before you consider the inherent latency of the system the GPU is plugged into. The CPU-based modelers suck not because the CPU doesn't have enough FLOPS. Most CPUs made in the past 5 years have more than enough, with a sufficient amount of elbow grease. But latency can't really be guaranteed on a typical consumer OS, especially under load.
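To put rough numbers on "large chunks imply massive latency" (my arithmetic, assuming 48 kHz):

```cuda
#include <cstdio>

int main() {
    const double sample_rate = 48000.0;            // assumed 48 kHz
    const int buffers[] = {64, 256, 4096, 16384};  // live-playing sizes vs. GPU-friendly slabs
    for (int n : buffers)
        printf("%6d samples -> %6.1f ms per buffer\n", n, 1000.0 * n / sample_rate);
    return 0;
}
// 64 samples is ~1.3 ms, 256 is ~5.3 ms, 4096 is already ~85 ms --
// and that's per buffer, before the rest of the signal path adds its share.
```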
 
People tolerate far more anomalous behavior for visuals than they do audio.

That's very true. We perceive what we see as happening "right now", but in reality it happened upwards of 150 milliseconds ago. That's how long it takes for things to get through the visual cortex. Audio is faster. Touch (proprioceptive) is faster still. That's also why blinking lights on metronomes are utterly pointless. :)
 
That's very true. We perceive what we see as happening "right now", but in reality it happened upwards of 150 milliseconds ago. That's how long it takes for things to get through the visual cortex. Audio is faster. Touch (proprioceptive) is faster still. That's also why blinking lights on metronomes are utterly pointless. :)

Good point.

Off topic - the lights on the metronome are usually there for, say, drummers, who basically can't even hear themselves talk over their thrashing around, far less for the band :p

I've found it to work well in, say, ensemble or orchestral situations where you don't want anyone to hear a click, but you can use it to start a piece off at the right tempo (never trust the violist to give the tempo!! :rolleyes:)
 
That's why, for drummers, someone invented a metronome with haptic feedback. It should be way easier to sync accurately to that than to a light.

https://www.soundbrenner.com/

No affiliation. Have never used it either.
 
I've found it to work well in, say, ensemble or orchestral situations where you don't want anyone to hear a click, but you can use it to start a piece off at the right tempo (never trust the violist to give the tempo!! :rolleyes:)
Side note: I always have a moment of fascination when I go to watch orchestras play. Their timing seems delayed relative to the conductor's motion. It's some combination of the length of the conductor's motion, the players' perceptual latency to that motion, each section's slew rate, my distance from the orchestra, etc.
 
That's why, for drummers, someone invented a metronome with haptic feedback. It should be way easier to sync accurately to that than to a light.

https://www.soundbrenner.com/

No affiliation. Have never used it either.

A drummer of mine used one, but didn't like it... he said he sometimes wouldn't feel it (he moves around a lot), and he said it was kinda annoying too :oops: I guess feeling a 'shock' every second is no fun over a longer set :D
 
Tell you guys what though - for when you want to re-amp into an effect chain of eight parallel amps on a non-realtime recording, GPU's got your back!
 
Tell you guys what though - for when you want to re-amp into an effect chain of eight parallel amps on a non-realtime recording, GPU's got your back!
8 parallel amps still seems more suited to a CPU than to a GPU, no? :p
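Joking aside, the offline case really is the GPU-shaped version of the problem: the whole recording is one big contiguous buffer and the eight chains are independent. A toy sketch (my names; the "amp" is just a memoryless tanh waveshaper, so it parallelizes over every sample; a real chain with filter state would only parallelize across the eight chains, which is rather the point being made above):

```cuda
#include <cmath>
#include <cuda_runtime.h>

// One thread per (chain, sample) pair: eight independent "amps" applied to the
// same dry track. The "amp" is a stand-in: gain into tanh soft clipping.
__global__ void reamp(const float* dry, float* wet, int n_samples,
                      const float* gains, int n_chains) {
    long long idx = (long long)blockIdx.x * blockDim.x + threadIdx.x;
    long long total = (long long)n_samples * n_chains;
    if (idx >= total) return;
    int chain = (int)(idx / n_samples);          // which of the 8 chains
    int s     = (int)(idx % n_samples);          // which sample of the recording
    wet[idx]  = tanhf(gains[chain] * dry[s]);
}
```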
 
A drummer of mine used one, but didn't like it... he said he sometimes wouldn't feel it (he moves around a lot), and he said it was kinda annoying too :oops: I guess feeling a 'shock' every second is no fun over a longer set :D
All the YouTubers have been pushing these for the last month or two; I guess they're doing a marketing blitz or something. Strangely, I never see anyone using one other than in the initial promo videos, so I guess they must not be that good...
 
I was hoping someone here used them; they're too expensive to just buy to try out. I mean, the science says it should work better, but the actual product might be total garbage for all I know. Or maybe not.
 