FM9 Firmware Version 6.00 public beta (1)

That's what I did, and since FM9-EDIT is a "remote control" for the FM9, I think that order (firmware first, FM9-EDIT second) is the right one.
The order doesn't matter from my experience; I've done it both ways and Edit has done a good job of catching the firmware change:
  • If you update Edit first, followed by the firmware, then when the modeler reboots, Edit should connect, see that the firmware version has changed and isn't cached, and automatically refresh, gathering the block information and caching it.
  • If you update the firmware first and then update Edit, when Edit connects it will see that the firmware version doesn't match the cached version(s), gather the block information again, and cache it.
Here's where it can break down: there's a little glitch that can occur during beta cycles, because the major+minor version won't change but the beta number will, and Edit doesn't have access to that number. So it can get confused and skip the refresh, hence the oft-repeated "RANF" admonishment. It's not really an ordering problem; it's more a lack of granularity in what gets disclosed to Edit, as the sketch below illustrates.
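To make the granularity point concrete, here's a rough sketch of the kind of check I'm describing. The names, fields, and version format are my own assumptions for illustration, not Fractal's actual code:

```python
# Hypothetical sketch of the cache-refresh decision described above.
# The class, fields, and version format are assumptions, not Fractal's code.
from dataclasses import dataclass

@dataclass(frozen=True)
class FirmwareVersion:
    major: int
    minor: int
    # The beta number exists on the modeler but isn't visible to the editor here.

def needs_refresh(reported: FirmwareVersion, cached: FirmwareVersion | None) -> bool:
    """Should the editor re-read and re-cache the block definitions?"""
    if cached is None:
        return True  # nothing cached yet: always gather block info and cache it
    # Only major.minor is compared, so beta 2 and beta 3 of 6.00 look identical
    # and the refresh gets skipped -- hence the advice to refresh manually after new firmware.
    return (reported.major, reported.minor) != (cached.major, cached.minor)

print(needs_refresh(FirmwareVersion(6, 0), FirmwareVersion(6, 0)))  # False: refresh skipped
print(needs_refresh(FirmwareVersion(6, 0), FirmwareVersion(5, 1)))  # True: refresh happens
```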
 
But to me, it’s just logical that as amps and algorithms get improved and updated, you’re going to have to readjust some things. Or not if you stay on your current firmware.
Zactly.

There's an analog corollary to this: If we're using a tube amp and have it modded or tubes replaced we often have to readjust settings to get it (back) to the sound we want.

Cliff is driven by making the models as accurate as they can be, even if that means including the warts and blemishes of the original amp. (The FAS versions of the models are where he uses Warts Be Gone (tm) and makes improvements.) As he improves the accuracy, the changes can cascade through other components in that model and cause the final output to sound a bit different. He's not going to normalize the output across every firmware update, because that trick would never work; instead he focuses on accuracy, lets the model do its thing, and we can twiddle the settings to get back to where we want to be.
 
[…] because FractalBot needed to be updated before installing the firmware (how else could FractalBot recognize the new firmware?)
FractalBot can easily phone home and ask what the current firmware revision is and compare it to what's in the modeler. That behavior is very common in applications these days as a user convenience.

I haven't tested to see if that's what's actually happening, but it's very conceivable that it is. We don't see updates to Edit or FractalBot for every firmware revision, only when it's necessary. In Edit's case, it's usually when block features are added, because Edit needs to know about them to display them correctly in the UI. (That's different from when we have to do a "RANF".)
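If it does work that way, the mechanism would be something like this toy sketch; the URL is a placeholder and the response format is invented, so this is not FractalBot's actual code:

```python
# Hypothetical illustration of a "phone home" version check. The URL is a
# placeholder and the response format is invented; this is not FractalBot's code.
import json
from urllib.request import urlopen

LATEST_VERSION_URL = "https://example.com/fm9/latest.json"  # placeholder endpoint

def newer_firmware_available(installed: tuple[int, int]) -> bool:
    """Compare the version reported by the modeler with the latest published release."""
    with urlopen(LATEST_VERSION_URL, timeout=5) as resp:
        latest = json.load(resp)  # e.g. {"major": 6, "minor": 0}
    return (latest["major"], latest["minor"]) > installed

# e.g. newer_firmware_available((5, 10)) would return True once 6.00 is published,
# at which point the app can offer to download it as a convenience.
```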

If one installs the new firmware before updating the editor (and hence FractalBot), one opens the door to potential issues.
I don't think "opens the door to potential issues" is well-founded. Even if the order was important, updating Edit and running RANF would clear the problems and we could continue on as normal.

Remember, FractalBot, whether the standalone or the embedded version, is basically a data-transport tool: it pushes or receives data to/from the modeler, and doing that correctly doesn't take a lot of version information, because the format of the data doesn't change, so that code doesn't need to change. Edit is a whole different situation because it has to interpret the data and display the values in the appropriate spots.
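To illustrate the distinction, here's a toy contrast; everything in it is invented, and it doesn't reflect the real SysEx framing or Edit's actual schemas:

```python
# A toy contrast between a transport tool and a version-aware editor.
# Everything here is invented; the real SysEx framing and schemas are not shown.

def transport_send(port, payload: bytes) -> None:
    """A transport tool just moves opaque bytes; it never looks inside them,
    so it doesn't care which firmware produced them."""
    port.write(payload)

# An editor, by contrast, needs a per-firmware schema to know which value is
# which parameter, which is why it ships an update when blocks gain features.
SCHEMAS = {
    (6, 0): {"amp": ["input_trim", "master", "presence"]},  # hypothetical fields
}

def interpret(firmware: tuple[int, int], block: str, values: list[float]) -> dict[str, float]:
    names = SCHEMAS[firmware][block]
    return dict(zip(names, values))

# interpret((6, 0), "amp", [0.5, 7.2, 4.0])
# -> {"input_trim": 0.5, "master": 7.2, "presence": 4.0}
```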
 
I noticed that on the 2203 amp the speaker curve had changed. At first I was struggling to understand where the tonal difference was coming from, but once I made it to that part I was able to figure it out, made some adjustments, and now it sounds way better than FW 5.1 IMO...
 
Gary,

Typically, any FM9 preset that stays below 80 or 82% CPU is safe and won't trip the auto-shut-down feature. The alternative is to build several presets with fewer blocks and use the Sets - Songlists feature to switch between them, rather than packing every preset/scene you might need into one larger preset.

Btw, why is "80-82%" the "safe" range for CPU consumption? I'm sure this has been explained/discussed elsewhere. I'm thinking this was a decision made in line with Fractal wanting to offer maximum flexibility to the user, so that if you were, for example, in the studio and wanted to run a preset that was dangerously high in CPU consumption, you would have that option: no hard limit. Definitely don't want things crapping out during a performance, though. Conceptually, the CPU meter seems to work more like "headroom": you want to leave some in reserve. It does sometimes trigger the OCD part of my brain, which would prefer that as long as it didn't hit 100%, all operations on the FM9 would be good to go.

But why 80-82%? Which operations within a preset are most likely to cost a swing of an additional 20% of CPU usage, enough to hit 100% and render operation unstable? Why does it appear that the FM9 can become unstable before peaking at 100% CPU? Are there CPU-consuming operations that are not reflected in the CPU meter?
 
Btw, why is that "80-82%" the "safe" range for CPU consumption? […]
As with any computer, it takes CPU just to keep the box running, displaying information, talking over USB... Any computer-based device will choke long before it reaches 100% CPU.
 
As with any computer, it takes CPU just to keep the box running, displaying information, talking over USB... Any computer-based device will choke long before it reaches 100% CPU.
That’s not really true. Running video games or rendering video can peg CPU/GPU resources and often the system is still usable, especially in these days of SSDs.

I think Fractal has a very different architecture than PCs though. I suspect there isn’t multitasking and it runs more in real time. Being an idiot talking out of his ass though, I can see how I’d be wrong and blocks would be processed in parallel.
 
As with any computer, it takes CPU just to keep the box running, displaying information, talking over USB... Any computer-based device will choke long before it reaches 100% CPU.
All that is why you have like 15% CPU with zero blocks, not why you “lose” 15% on the other end.
 
Btw, why is that "80-82%" the "safe" range for CPU consumption?
The display is showing you a rough approximation of the actual amount of CPU being used at any time. In reality the use is not nearly as smooth. It's more erratic, with spikes above what is displayed occurring. You need to allow for headroom so spikes can be handled. Showing the true CPU use is mostly impossible (the rate of change is high, there are many cores so a single number doesn't even make complete sense, etc.), so having a guideline that allows for the true use to be handled is what we've got instead.
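Here's a made-up illustration of the headroom idea: a smoothed meter reading can sit around 80% while individual buffers spike well above it. The numbers are invented and this is not how the FM9 actually meters anything:

```python
# Made-up illustration of why a smoothed meter needs headroom: the displayed
# value is an average while the real per-buffer load spikes above it.
# Nothing here reflects how the FM9 actually measures or displays CPU.
import random

def displayed_and_peak(loads: list[float], alpha: float = 0.1) -> tuple[float, float]:
    """Return (smoothed 'meter' value, true worst-case load) for per-buffer loads."""
    shown = peak = loads[0]
    for x in loads[1:]:
        shown = alpha * x + (1 - alpha) * shown  # exponential moving average
        peak = max(peak, x)
    return shown, peak

random.seed(1)
# Average load around 82%, with an occasional expensive buffer on top.
samples = [random.gauss(82, 2) + (14 if random.random() < 0.03 else 0) for _ in range(2000)]
shown, peak = displayed_and_peak(samples)
print(f"meter shows ~{shown:.0f}%, but the worst single buffer hit {peak:.0f}%")
```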
 
Just because the "safe zone" is 80-82% does not mean there's 18-20% wasted processing power.

The way I've approached it is by using a high-performance racing engine as a comparison: a tachometer goes to 10,000 RPM with a redline at 8,400 RPM. The engine is capable of reaching 10,000 RPM, but if you push it there, it's almost certain to blow apart.

Running the engine at 8,200 RPM is possible, but it's pushing the limit of reliable performance and components might fail. Running it over 8,400 RPM can be done in some cases, but it's not advisable for an extended period of time; something will eventually fail and cause a catastrophic failure.

Keeping below the redline is ideal for ensured performance. This does not mean the engine is not being used to its full potential or capability; the design calls for the limit to be there to ensure reliable performance.
 
That’s not really true. Running video games or rendering video can peg cpu/gpu resources and often the system is still usable, especially in these days of SSDs.
I've never seen either of those things get near 100% CPU without a major slowdown somewhere.

I think Fractal has a very different architecture than PCs though. I suspect there isn’t multitasking and it runs more in real time. Being an idiot talking out of his ass though, I can see how I’d be wrong and blocks would be processed in parallel.
It's both multitasking and real-time. Which is a real trick to pull off. But it's a different flavor of multitasking than what your computer uses.
 
Btw, why is that "80-82%" the "safe" range for CPU consumption? […]
TBH, I don't know the answer. It's just that, based on well-documented user experience, most users report 80-82% as the point above which the FM9 freezes up or goes into a computer time-out. It has nothing to do with current CPU usage as such, except that values above 80% put the user at higher risk of equipment failure.
For the record, 80% CPU is the maximum "safe" level. Anything above that is risky.

Will the box still run at 82% or 84% CPU? Probably. If you want to trust "probably" at the gig...
Point well taken. Your FM9 may be able to handle 80% consistently, but if you're pushing your luck above 80%, be prepared for some downtime during gigs when you least want it. The 80% value is where I personally start watching whether I'm engaging too many blocks at one time. Sometimes even a large number of blocks in one preset can push things into the "danger zone." It's ultimately the user's choice.

I asked my boss, "Is it OK to wish my customers 'Drive Safe, Be Well'?" My boss replied, "Nah. Drive Fast, Take Risks." We both laughed, but you can see where common sense is the safer bet.
 
I've never seen either of those things get near 100% CPU without a major slowdown somewhere.
8-year-old CPU, 4.5-year-old GPU. Fired up Alan Wake 2 and cranked up the settings: 99% GPU. Fired up Cinebench: 100% CPU. No problem at all taking a screenshot and replying while all that was going on. Preemptive multitasking. (Screenshot attached.)
 
8-year-old CPU, 4.5-year-old GPU. Fired up Alan Wake 2 and cranked up the settings: 99% GPU. Fired up Cinebench: 100% CPU. No problem at all taking a screenshot and replying while all that was going on. Preemptive multitasking.
Now I’m not an expert in DSP programming… However, I’d imagine preemptive multitasking is not a thing. I bet most signal processing has to happen in order of signal flow in as close to realtime as possible. I also imagine certain effects are more responsible for irregular DSP use than others. Ambient reverbs, shimmer verbs, any effect with heavy diffusion or scattering. I’d bet you need to allow more buffer, closer to 20% of max, for those things to process effectively without overburdening the DSP.

Edit: I’m sure this is a bit of an oversimplification. As the signal is processed by certain effects, I’d imagine more of the signal processing can be parallelized further down the chain. I imagine it’s more like a huge tree: the signal processing is more in series until it hits effects that cause it to scatter into more parallel paths.
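Here's a very simplified sketch of the constraint I'm getting at: every audio buffer has to be fully processed, block by block, before the next one arrives, so a momentary spike in one block can blow the deadline even when the average load looks fine. Block names and timings are invented for illustration only:

```python
# Very simplified sketch of the real-time constraint: each audio buffer must be
# processed, block by block in signal-flow order, before the next buffer arrives.
# Block names and timings are invented for illustration only.

BUFFER_PERIOD_US = 1000.0  # e.g. 48 samples at 48 kHz is exactly 1 ms per buffer

def buffer_meets_deadline(block_times_us: dict[str, float]) -> bool:
    """Return True if every block in the chain finishes within one buffer period."""
    elapsed = 0.0
    for name, cost in block_times_us.items():  # processed in series, in chain order
        elapsed += cost
        if elapsed > BUFFER_PERIOD_US:
            return False  # deadline missed: an audible glitch, not just "slowness"
    return True

chain = {"drive": 120, "amp": 350, "cab": 180, "delay": 90, "reverb": 80}
print(buffer_meets_deadline(chain))  # True: ~82% of the period, small spikes still fit
chain["reverb"] = 280                # one momentarily expensive reverb buffer
print(buffer_meets_deadline(chain))  # False: ~102% of the period, dropout
```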
 
I've never seen either of those things get near 100% CPU without a major slowdown somewhere.


It's both multitasking and real-time. Which is a real trick to pull off. But it's a different flavor of multitasking than what your computer uses.
Usually in a classic computer architecture, it's memory pressure that's the biggest culprit for slowdowns, at least over the last decade or two of computing. If things need to swap to disk, it really slows everything down. That has been alleviated a lot by the advent of SSDs and inexpensive RAM. Now it's generally bus speeds that bottleneck before the CPU and GPU do. GPUs are also purpose-built for heavily parallelized computing.

I was a Software Architect for a company called SentryOne that got purchased by SolarWinds three years ago. We monitored performance of MS SQL Server, Windows, Hyper-V VMs and VMWare VMs. Almost always the hot spots for slowdowns were memory and disk.
 