Curious about the fixed number of block types paradigm versus as many as the CPU can handle

Androo

I see our friends in the forum wishing for more instances of the pitch block and others. I became curious about the restriction on the number of instances. It's not like they are physical devices on a shelf; they're just algorithms, right? Why wouldn't / couldn't the paradigm be: use as many of anything as you want until the CPU maxes out?
 
I dare say there is an economic parameter at work, to create a range of equipment at a range of prices. There are certain things available on the AxFX3 that I dream of taking advantage of on the FM9... and which, from my limited vantage point, would consume little or no CPU... hence the restriction on the number of blocks.
 
I suspect it's about managing the "mapping" of parameters, etc., from each potential block within a preset.

Additionally, think about the MIDI and internal control complications of the "CPU limited" model.

Take Bypass controls as one example: you have 4 Delay blocks, and a global setting allows a bypass assignment for each. Now you add 8 Delays in one preset. How do you manage them? The global settings couldn't handle it.
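To make that concrete, here's a minimal C++ sketch (all names and the CC scheme are invented, not Fractal's actual firmware) of why a fixed instance count keeps the global mapping simple: the assignment table can be sized once, at compile time.

```cpp
// Hypothetical sketch of a global bypass map with a fixed instance count.
#include <array>
#include <cstdint>

constexpr int kMaxDelayBlocks = 4;  // fixed by the firmware design

struct GlobalBypassMap {
    // One MIDI CC number per possible Delay instance; 0 = unassigned.
    std::array<uint8_t, kMaxDelayBlocks> delayBypassCC{};
};

int main() {
    GlobalBypassMap map;
    map.delayBypassCC = {20, 21, 22, 23};  // e.g. CCs 20-23 toggle Delay 1-4
    // With an unbounded instance count, this table would have to grow at
    // runtime, and every UI page, MIDI handler, and preset file format
    // would need to cope with a variable number of entries.
}
```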

There is a lot of internal "housekeeping" that needs to happen to make the system work.

Allowing an indefinite number of all blocks would make that incredibly hard, I think.
 
Everything in software is trade-offs. In order to have predictability, you need constraints: you have to have certainty that a given set of instructions will execute in an exact, predictable, repeatable amount of time, especially for audio processing. Locking down those limits gives you a rock-solid worst-case scenario that defines your playing field: "Even if I have 4 compressor blocks running at their most taxing settings, I know I've got this much room to work with for everything else."
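As a rough illustration of that worst-case budgeting (the numbers and names here are invented, purely hypothetical), the firmware can verify at edit time that a preset can never blow the audio deadline:

```cpp
// Sketch of worst-case CPU budgeting for a fixed set of block types.
#include <cstdio>

constexpr double kCycleBudget = 100.0;        // cycles per audio frame (illustrative)
constexpr double kCompressorWorstCase = 6.0;  // measured worst-case cost of one compressor

bool presetFits(int numCompressors, double otherBlocksWorstCase) {
    // Because every cost is a worst case, a "true" here is a guarantee,
    // not a hope: the preset cannot glitch no matter how the knobs are set.
    return numCompressors * kCompressorWorstCase + otherBlocksWorstCase <= kCycleBudget;
}

int main() {
    // 4 * 6.0 + 70.0 = 94.0 <= 100.0, so this preset is safe.
    std::printf("%s\n", presetFits(4, 70.0) ? "fits" : "too big");
}
```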

The flexible "use whatever you want, in whatever configuration you want, until the CPU maxes out" approach is orders of magnitude more complicated. You have to design the software in such a way that all your bits and pieces are very abstract "black boxes" that can be rearranged, duplicated, and created anywhere. Which means in some areas you end up with the "lowest common denominator".

There are several (massive, IMO) downsides to this approach for hardware units like Fractal's. The design means you can't take advantage of low-level hardware optimizations, because you have to be defensive and assume that any number of processing blocks could be active at any time. For example, imagine you have a hardware processor on a chip that has a fixed I/O and memory capacity and can process 4 reverbs in real time, but can't process 5 at once.
You can now either:
  • introduce a layer between your blocks and that reverb processor that divides the capacity so that 5+ blocks get an equal share of the reverb processing resources, introducing latency, vastly increasing software complexity, and possibly introducing artifacts if the software and hardware can't keep up with the processing demands, or
  • limit the number of reverbs to 4, ensuring that those 4 reverbs get the absolute highest-quality processing with the lowest latency, because you know the demands of those 4 blocks won't ever exceed the capacity of the hardware (see the sketch below).
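Here's a minimal sketch of the second option (the API is invented for illustration): a fixed pool of slots that maps one-to-one onto the dedicated hardware, so allocation can never oversubscribe it.

```cpp
// Fixed pool of reverb slots matching the hardware's real-time capacity.
#include <array>
#include <cstdio>
#include <optional>

constexpr int kReverbSlots = 4;  // matches the chip's real-time capacity

class ReverbPool {
    std::array<bool, kReverbSlots> inUse_{};
public:
    // Returns a slot index, or nothing if all 4 are taken. There is no
    // queueing or time-slicing layer, so each slot always gets full-quality,
    // lowest-latency processing.
    std::optional<int> acquire() {
        for (int i = 0; i < kReverbSlots; ++i)
            if (!inUse_[i]) { inUse_[i] = true; return i; }
        return std::nullopt;  // the UI simply refuses a 5th reverb block
    }
    void release(int slot) { inUse_[slot] = false; }
};

int main() {
    ReverbPool pool;
    for (int i = 0; i < 5; ++i) {
        auto slot = pool.acquire();
        std::printf("reverb %d -> %s\n", i + 1, slot ? "slot granted" : "refused");
    }
}
```

The key point is that "refused" is a UI decision made up front, not an audio glitch discovered at runtime.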

This is just an example, but the point is that all software runs on hardware, which has limitations. Designing your software with those limitations in mind is the most predictable and efficient approach, and allows you to make guarantees.

The tradeoff is that you lose flexibility. The benefits, though, are huge, and they stem from one primary characteristic: reliability. Fractal units are predictable and have predictable performance. Fractal's decision to limit what can be run may feel arbitrarily restrictive, but I guarantee you it's the core reason why so many top professionals have turned to Fractal for their live performance needs. Those constraints mean that:

  • Fractal can guarantee the absolute highest quality, and
  • Fractal hardware is predictable, and reliable.

Coming from 25+ years of professional software development experience, and 20 years of audio recording and live performance experience (for whatever either of those is worth), IMO this is 100% the correct decision on Fractal's part.

Another point to consider: Apple takes this exact approach with their products, for the most part. They control all the software, and all the hardware, and limit the hardware configurations so they can have predictable performance and consistent reliability and user experiences.

And they are the most valuable company in the world.
 
The fact that there isn’t and never has been a processor that allows this should speak for itself.
Hasn't there, though? I've maxed out my PC many times over the years.
/s
We wouldn't want our Fractals blue-screen-of-deathing on us though.
 
Everything in software is trade-offs. In order to have predictability, you need constraints: ...
Thanks for the detailed, informed, and interesting answer. I was sure there were very good reasons for the approach being what it is; it's just not my world. I was imagining my Wah as a function that I could call whenever I want, so why can't I call it over and over again? = )
Now I get it.
Much appreciated.
A
 
The fact that there isn’t and never has been a processor that allows this should speak for itself.
I can run multiple instances of a particular effect within my DAW as there is no pre-defined limit, outside of available RAM.
 
Everything in software is trade-offs. In order to have predictability, you need constraints: ...
^This^

In addition, the system preloads all block types when it boots. When changing presets, no new or additional blocks have to be loaded from FLASH and instantiated. This helps ensure a consistent switching speed, something people are extremely sensitive about. When a preset switch occurs, the layout is rebuilt in memory, the default scene is applied, and it is ready to proceed.
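A speculative sketch of that preload scheme (the structure is invented; Fractal's actual engine is certainly more involved): every block exists from power-on, and a preset switch only re-wires and re-parameterizes them, which is why switching time stays constant.

```cpp
// Hypothetical engine where all block instances are allocated once at boot.
#include <array>

struct Block { bool active = false; /* DSP state lives here */ };

struct Engine {
    std::array<Block, 4> delays;   // fixed counts, allocated once at boot
    std::array<Block, 4> reverbs;

    void loadPreset(/* const Preset& p */) {
        // No allocation and no FLASH loads here: just deactivate everything,
        // then rebuild the routing and apply the default scene from data
        // already sitting in RAM.
        for (auto& d : delays)  d.active = false;
        for (auto& r : reverbs) r.active = false;
        // ... rebuild layout, apply scene ...
    }
};

int main() {
    Engine engine;        // all blocks exist from power-on
    engine.loadPreset();  // constant-time: nothing is allocated or loaded
}
```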

The design of a general-purpose computer system is very different from that of a modeler. General-purpose computers have expandable memory and storage, and oftentimes replaceable CPUs, but they're not as reliable under the same conditions. Their memory and storage make it possible to run huge programs, but switching times between those programs suffer.

They’re apples versus oranges and their systems have very different behaviors.
 
I can run multiple instances of a particular effect within my DAW as there is no pre-defined limit, outside of available RAM.
Yek is referring to the dedicated system inside modelers.

The DAW runs on a general-purpose computer, and there are limits to how many instances you can run, though you probably haven’t reached those limits.

We tend to be self-limiting as we approach the limits, because the overall computer will slow to a crawl as it tries to swap code in and out, and we find that slowdown unacceptable. Or the computer will crash, because the developers never expected someone to actually try that and never tested what would happen with no memory left to allocate. The solution at that point is to add memory, maybe step up to a bigger CPU, upgrade the cooling system, and maybe move to faster drives.
 
I dare say there is an economic parameter at work, to create a range of equipment at a range of prices. There are certain things available on the AxFX3 that I dream of taking advantage of on the FM9... and which, from my limited vantage point, would consume little or no CPU... hence the restriction on the number of blocks.
This is the reason...
 
Another point in favor of Fractal's approach vs the "general purpose processing" approach:

Compare how frequently we get updates to how often the competition does. This kind of velocity is only possible when writing software against a known set of limitations and constraints.

Limits don't hamper creativity/productivity. They enable it.
 
Hmm. So “CPU“ and “processor” only refer to an effects processor. Okay then.
The terms CPU, DSP, DSP/CPU, and processor can be treated as interchangeable in this sort of technology. They're all basically "processing cycles applied to work".

The difference is whether it's a general-purpose computer or a dedicated special-purpose one, and how each allocates work to the processor. They behave very differently.
 