The CNFB Method

I thought the new method was inherently more efficient, and by a decent amount. Curious why you need to optimize just to get back to par.
I'm not an amp modeling guru, but I do do programming. The algorithm being measurably more efficient is one thing; getting it integrated into existing code to replace part of what's there is the trick, and that's likely where the optimization will happen. Once it's integrated well with the Amp block's code, the savings in CPU cycles will likely emerge....
 
I'm not an amp modeling guru, but I do do programming. The algorithm being measurably more efficient is one thing; getting it integrated into existing code to replace part of what's there is the trick, and that's likely where the optimization will happen. Once it's integrated well with the Amp block's code, the savings in CPU cycles will likely emerge....
Exactly. Often when programming, your first iteration is mostly concerned with "does it work?", and after that you figure out how to make it more elegant (easier to read the code, easier to develop it further, easier to integrate more stuff into it) and faster.
 
Potentially 40% of the CPU load of an amp block saved, not 40% of total capacity (e.g. if an amp block uses 30% of total CPU, a 40% saving there frees about 12 percentage points). Still, amp blocks use a fair bit of CPU, so the savings may be enough to make a nice dent....

I was hoping that if the effects algorithms use SPICE-style methods, this might lead to an epiphany in that area also. I'm no coder, but I can sometimes see how structures overlap.

I'm not an amp modeling guru, but I do do programming. The algorithm being measurably more efficient is one thing; getting it integrated into existing code to replace part of what's there is the trick, and that's likely where the optimization will happen. Once it's integrated well with the Amp block's code, the savings in CPU cycles will likely emerge....

Fewer cycles sounds like lower latency even if CPU usage is only slightly less. But hopefully there will be leftover cycles to handle other things!?
 
I've finally perfected the "Chase Nonlinear Feedback" (CNFB) method for the modeling of nonlinear networks. And it works amazingly well. Has the accuracy of high-order integration methods with less computational burden.

Can simulate diodes, triodes, pentodes, etc. Far less error-prone than other methods (like K-method or DK-method, etc.) as you don't need to enter large matrices or tables.

It works on the principle that nonlinear devices can be thought of as linear devices with nonlinear feedback. You compute the states of a linear network and apply nonlinear feedback to get the output. It's also inherently stable. If the analog version of the network is stable, the CNFB implementation is stable.
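
(A plausible state-space sketch of that principle, as a reader's guess: the notation below is an assumption for illustration, not Cliff's published formulation. The idea would be to split the network into a linear core plus a scalar nonlinear feedback term, so the linear part can be discretized once offline and only the device law is evaluated per sample:

```latex
% Hypothetical decomposition -- notation mine, not Cliff's.
% A, b, c, d describe the linear network; g(.) is the device law
% (e.g. Shockley diode: g(y) = I_s (e^{y/V_T} - 1)) fed back
% into the states.
\[
\begin{aligned}
  \dot{\mathbf{x}} &= \mathbf{A}\mathbf{x} + \mathbf{b}\,u + \mathbf{k}\,g(y)
      && \text{(linear states + nonlinear feedback)} \\
  y &= \mathbf{c}^{\mathsf{T}}\mathbf{x} + d\,u
      && \text{(linear output driving the feedback)}
\end{aligned}
\]
```

Under this reading, the per-sample cost is one linear state update plus one evaluation of g.)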

The plot below is a simple example. This is a single-sided diode clipper with "memory" (the memory being a capacitor across the diode). The dotted line uses classic nonlinear ODE techniques solving the network using Trapezoidal Rule integration. The dashed line uses the CNFB method. The results are virtually identical but the CNFB method executes in about 60% the time (12 operations per loop vs. 20). As the number of nodes in a network increases the computational advantage increases proportionally.
[Attachment 93791: single-sided diode clipper, Trapezoidal Rule vs. CNFB output comparison]
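
(For contrast, here is a minimal, self-contained sketch of the "classic nonlinear ODE" baseline named above: a single-sided diode clipper with a capacitor across the diode, driven through a series resistor, stepped with Trapezoidal Rule integration and solved per sample with a Newton iteration. The component values and diode parameters are illustrative assumptions, not values from the post:

```c
/* Sketch only: RS, CP, IS, VT are illustrative assumptions. */
#include <math.h>
#include <stdio.h>

#define RS 2200.0    /* series resistance, ohms (assumed)    */
#define CP 10e-9     /* capacitor across the diode, farads   */
#define IS 1e-12     /* diode saturation current (assumed)   */
#define VT 0.02585   /* thermal voltage at ~300 K            */

/* Node equation: CP * dv/dt = (vin - v)/RS - IS*(exp(v/VT) - 1) */
static double dvdt(double vin, double v)
{
    return ((vin - v) / RS - IS * (exp(v / VT) - 1.0)) / CP;
}

/* One Trapezoidal-Rule step: solve
   v1 = v0 + (h/2)*(dvdt(u0,v0) + dvdt(u1,v1)) for v1 via Newton. */
static double trap_step(double u0, double u1, double v0, double h)
{
    double k  = v0 + 0.5 * h * dvdt(u0, v0);  /* explicit half */
    double v1 = v0;                           /* initial guess */
    for (int it = 0; it < 8; ++it) {
        double g  = v1 - k - 0.5 * h * dvdt(u1, v1);
        /* dg/dv1 = 1 + (h/2)*(1/RS + (IS/VT)*exp(v1/VT))/CP */
        double dg = 1.0 + 0.5 * h
                        * (1.0 / RS + (IS / VT) * exp(v1 / VT)) / CP;
        double step = g / dg;
        v1 -= step;
        if (fabs(step) < 1e-12)               /* converged     */
            break;
    }
    return v1;
}

int main(void)
{
    /* Drive the clipper with a 1 kHz, 1 V sine at 48 kHz. */
    const double fs = 48000.0, h = 1.0 / fs;
    const double twopi = 6.283185307179586;
    double v = 0.0, uprev = 0.0;
    for (int n = 1; n <= 96; ++n) {
        double u = sin(twopi * 1000.0 * n * h);
        v = trap_step(uprev, u, v, h);
        uprev = u;
        printf("%g\t%g\n", u, v);
    }
    return 0;
}
```

A CNFB-style version would, per the principle described above, replace the per-sample Newton loop with one precomputed linear state update plus a single evaluation of the feedback nonlinearity, which is presumably where the 12-vs-20 operation count comes from.)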

Here's a more complex example. This is a plot of a 6L6GC push-pull power amp into a reactive load (blue) compared to the same power amp simulated in SPICE (red). Doing this with conventional methods (nodal K, DK, WDF, etc.) induces major thinky-pain. I did this with the CNFB method in a couple hours.

[Attachment 93826: 6L6GC push-pull power amp into a reactive load, CNFB (blue) vs. SPICE (red)]

Could be a revolution in nonlinear network modeling.

Individually, I understand what every one of those words means. But the way you've constructed them all together there, you might as well be telling me what color the number 7 smells like. So I'll just wait for the benefits and results and take the 'You don't want to see how the sausage is made' approach.
 
The amp blocks are on their own processor, so I would imagine real-world CPU usage savings would be limited to drives, or whatever else Cliff applies the method to that shares resources.
 
The amp blocks are on their own processor, so I would imagine real-world CPU usage savings would be limited to drives, or whatever else Cliff applies the method to that shares resources.
In theory, if the amp-specific processor were running this more efficient code, it might be able to run more amp instances, and/or it could allow more detailed modeling of other aspects of amp behavior.

We'll just have to see how this all plays out.
 
I'm looking at the graphs going, "what's he on about, 'red' and 'blue'?" Then I remembered how colourblind I am, and how I just see graphs, consoling myself with the fact that I still wouldn't understand it anyway, but it sounds great!
 
Individually, I understand what every one of those words means. But the way you've constructed them all together there, you might as well be telling me what color the number 7 smells like. So I'll just wait for the benefits and results and take the 'You don't want to see how the sausage is made' approach.
I think this is intentional. He knows there isn't a single user here who understands it completely.
 
I just finished porting the push-pull power amp algorithm to the Axe-Fx amp block and (after some debugging) it works. It doesn't sound markedly different, but it sounds slightly more open. It measures slightly differently too. But we're talking tenths of a dB so...

Right now it's using more CPU than the previous version. I'll need to spend some time doing optimization.
Could you optimize us on the Push-Pull Power Amp Algo? I Googled it but couldn't figure out what to expect, or what it's even really about! 🤭
 
Just wanted to say, it's fun seeing the development process. I don't really get the details (though I took tons of math classes in college, including two upper-division linear algebra classes and a class on wavelets, so I should actually be up on all this...), but I appreciate Cliff sharing some of the journey.
 