The CNFB Method

Hey Cliff,
The SPICE simulation in the second chart seems to indicate a consistent, marginally faster leading edge - would that be audible?
Thanks
Pauly
I see what you're describing zoomed in. Looks like a tiny fraction of a second (maybe (1/50) × 10⁻³ s?), which would be (if my 1/50 guess is even close to accurate) about 0.02 ms of lag for the complex circuit. Being more pessimistic about the ratio, say 1/20, that would be about 0.05 ms.

IDK if the Axe is processing in series or parallel (I'd assume parallel), but even if it's series with a pair of complex amps and a few drives, plus maybe a couple of breakup circuits in things like delay blocks, you're probably looking at ~0.15-0.2 ms worst case. No way any human can detect that.
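Back-of-the-envelope, that stacking assumption can be sketched as follows (the per-circuit lag figures and circuit counts are the guesses above, not measurements):

```python
# Rough worst-case latency estimate for a series signal chain, assuming
# (guessed, not measured) 0.02-0.05 ms of extra lag per complex circuit.
def worst_case_lag_ms(per_circuit_ms, n_circuits):
    """Total added lag if every nonlinear circuit's lag stacks in series."""
    return per_circuit_ms * n_circuits

# e.g. 2 amps + 2 drives = 4 complex circuits at the pessimistic 0.05 ms each
print(worst_case_lag_ms(0.05, 4))  # → 0.2 ms, far below audibility
```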

Possibly more time if CPU efficiency takes a jump, though I'm sure Fractal has some good ideas about how those efficiencies could be leveraged for new goodies.
I wonder if this breathes new life into legacy products too?
 
I've finally perfected the "Chase Nonlinear Feedback" (CNFB) method for the modeling of nonlinear networks. And it works amazingly well. Has the accuracy of high-order integration methods with less computational burden.

Can simulate diodes, triodes, pentodes, etc. Far less error-prone than other methods (like K-method or DK-method, etc.) as you don't need to enter large matrices or tables.

It works on the principle that nonlinear devices can be thought of as linear devices with nonlinear feedback. You compute the states of a linear network and apply nonlinear feedback to get the output. It's also inherently stable. If the analog version of the network is stable, the CNFB implementation is stable.
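The details of CNFB haven't been published, so purely as an illustration of the general "linear device with nonlinear feedback" idea (not Cliff's actual algorithm), here is a memoryless clipper written as a unity linear path with a nonlinear feedback term, solved per sample by iterating the feedback loop; the tanh law and the gain K are choices made here just to keep the iteration contractive:

```python
import math

# Illustrative only -- this is NOT the actual CNFB formulation, which is
# unpublished. It only demonstrates the structural idea: output = linear
# response minus a nonlinear feedback term, found by fixed-point iteration.
K = 0.5  # feedback strength (assumed); |d/dv (K*tanh v)| <= K < 1 => converges

def nl_feedback(vin, iters=30):
    """Solve v = vin - K*tanh(v) by repeatedly applying the feedback loop."""
    v = vin
    for _ in range(iters):
        v = vin - K * math.tanh(v)  # linear pass-through minus NL feedback
    return v
```

Because the feedback slope is bounded by K < 1, the iteration is a contraction and converges for any input; stiffer device laws (a real diode exponential) need a more careful solver, which is presumably part of what the real method addresses.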

The plot below is a simple example. This is a single-sided diode clipper with "memory" (the memory being a capacitor across the diode). The dotted line uses classic nonlinear ODE techniques, solving the network with Trapezoidal Rule integration. The dashed line uses the CNFB method. The results are virtually identical, but the CNFB method executes in about 60% of the time (12 operations per loop vs. 20). As the number of nodes in a network increases, the computational advantage increases proportionally.
View attachment 93791
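For reference, the "classic nonlinear ODE" baseline described above can be sketched like this: a Trapezoidal Rule discretization of the clipper's state equation with a per-sample Newton solve. The component values are assumed for illustration (the actual circuit values aren't given):

```python
import math

# Classic baseline for the single-sided diode clipper with a capacitor
# across the diode: Trapezoidal Rule plus Newton's method per sample.
# R, C, IS, VT are assumed example values, not the circuit from the plot.
FS = 48_000.0          # sample rate (Hz)
R, C = 2_200.0, 10e-9  # series resistor, capacitor across the diode
IS, VT = 1e-12, 0.026  # diode saturation current and thermal voltage

def f(v, vin):
    """State derivative: C dv/dt = charging current minus diode current."""
    return ((vin - v) / R - IS * (math.exp(v / VT) - 1.0)) / C

def clipper_trap(samples):
    v, x_prev, out, dt = 0.0, 0.0, [], 1.0 / FS
    for x in samples:
        # Trapezoidal Rule: v1 = v + dt/2 * (f(v, x_prev) + f(v1, x)).
        # g(v1) is monotone increasing, so Newton's method is well-behaved.
        rhs = v + 0.5 * dt * f(v, x_prev)
        v1 = v
        for _ in range(20):
            g = v1 - rhs - 0.5 * dt * f(v1, x)
            dg = 1.0 + 0.5 * dt * (1.0 / R + (IS / VT) * math.exp(v1 / VT)) / C
            v1 -= g / dg
        v, x_prev = v1, x
        out.append(v)
    return out
```

Driving this with a 1 kHz sine of amplitude 1 V clamps the positive half-wave near the diode's conduction knee while passing the negative half-wave, which is the behavior the plot compares against CNFB.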

Here's a more complex example. This is a plot of a 6L6GC push-pull power amp into a reactive load (blue) compared to the same power amp simulated in SPICE (red). Doing this with conventional methods (nodal K, DK, WDF, etc.) induces major thinky-pain. I did this with the CNFB method in a couple hours.

View attachment 93826

Could be a revolution in nonlinear network modeling.

what I read is...

"The FM3 can now run 2 amps at once"

sorry if I misinterpreted, but I'm going to roll with this. ;)
 
Like every company, FAS has to make money to be able to continue to exist and money is earned by selling hardware. Updating firmware for legacy devices doesn’t generate income.
FAS has also proven the value of their firmware over time, and as much as I hate to say it, I bet there are a lot of us who would be willing to pay for truly substantive updates to the firmware in existing hardware. Given the relative margins, it might be worth considering.
 
Very cool, congrats on discovering what appears to be next-level modeling, and thanks for sharing...always glad to get a look under the hood.

So, the bullet points are:
  • CNFB is as accurate as existing integration methods with far less computational requirements (almost half)
  • CNFB is far less error prone
  • CNFB is easier (i.e., less thinky-pain, heh) and quicker to implement than existing methods
  • the more nodes you add, the greater CNFB's advantage; talk about being able to scale up with impunity!
It seems there are major, tangible benefits in store for us Fractal users going forward: freed-up CPU, easier on Cliff's brain (always a good thing!!), faster development of new modeling (which may also open up new and wonderful ways of modeling), and potential new discoveries and 'ah-ha' moments that can be implemented and tested much more easily and quickly.

Awesome. Simply awesome. To be able to achieve the same accuracy as existing methods in almost half the time is a major accomplishment, not to mention the other big wins here. I always admire your persistence and drive, Cliff...it's what keeps us coming back for more. Knowing how you do things, I'd think there is even more optimization to be had here (???).

This is very exciting news, big time. I can't wait to see what comes of it...

I know it's in the early stages, but can you tell us where you're heading with this, and/or what we might expect, in general, to eventually show up on our devices?
 
Isn’t the limit always set by hardware (CPU)? What I mean is, the coding/Cliff’s mind is ahead of hardware development.

So if Cliff can free up some CPU usage, I assume that available CPU power will be used somewhere else to make things even more authentic.

I remember some time ago he said he could make the code simulate every part 1:1, but it would require way, way more CPU power. I think it was about the output transformer and the stuff it does, which needs a lot of CPU power to simulate.

So long story short, I assume we won’t see any significant drop in CPU usage. But the Fractal train is still moving forward at full speed...
 
Absolutely amazing Cliff!

Could this method also be applied to other nonlinear, complex dynamic-system problems, or is it targeted mostly at this application?
I dabble with reactive networks at work, and computational time is often a limiting factor.
 
I've finally perfected the "Chase Nonlinear Feedback" (CNFB) method for the modeling of nonlinear networks. And it works amazingly well. Has the accuracy of high-order integration methods with less computational burden.

Can simulate diodes, triodes, pentodes, etc. Far less error-prone than other methods (like K-method or DK-method, etc.) as you don't need to enter large matrices or tables.
Impressive. Are you saying no matrix arithmetic is required at all?
 