deathbyguitar
Power User
Cliff: Will this new method make allowance (create room) for high-rez IRs to fit in the original Axe-Fx III Mark I?
No, this has to do with algorithm efficiency, not storage for IRs.
There are already some FullRez IR's in the legacy cab block of the Axe-Fx III Mk1.
OrganicZed SAID:
There are already some FullRez IR's in the legacy cab block of the Axe-Fx III Mk1.
There are a LIMITED number of full rez IR's in the Mark One. I'm talking about MORE of a variety of full rez IR's.
I think what you said was understood. Again, this doesn't impact IR storage.
I've finally perfected the "Chase Nonlinear Feedback" (CNFB) method for the modeling of nonlinear networks.
I find the coolest thing is that the owner/main developer of FAS is a Rush fan too!
Cygnus X2 obviously.
So what is your imagined practical application for this? In a language that even a guitar player can understand, please.
Could Neural Fare Before?

Pole ... what do you think CNFB actually stands for?
I've said before, this is the only place Caltech nerds and baked guitar players can come together and have something to talk about.
Very cool, congrats to discovering what appears to be next-level modeling, and thanks for sharing...always glad to get a look under the hood.
So, the bullet points are:
- CNFB is as accurate as existing integration methods with far lower computational requirements (almost half)
- CNFB is far less error-prone
- CNFB is easier (i.e., less thinky-pain, heh) and quicker to implement than existing methods
- the more nodes you add, the greater the advantages of CNFB; talk about being able to scale up with impunity!

It seems that there will be major, tangible benefits in store for us Fractal users going forward: freeing up CPU, easier on Cliff's brain (always a good thing!!), faster development of newer modeling (which may also open up new and wonderful ways of modeling), and potential new discoveries and 'ah-ha' moments that can be implemented/tested much more easily and quickly.
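CNFB itself isn't published, so for context here is only the conventional baseline the bullets compare against: the "existing integration methods" in virtual-analog modeling typically mean an implicit integration step with a per-sample Newton solve of the nonlinear circuit equations. A minimal sketch of that baseline for a one-capacitor diode clipper (all component values and names are illustrative assumptions, not anything from Fractal or this thread):

```python
import math

# Illustrative component values for a simple diode-clipper stage
R, C = 2.2e3, 10e-9           # series resistor (ohms), capacitor (farads)
Is, Vt = 2.52e-9, 0.02585     # diode saturation current, thermal voltage
FS = 48000.0                  # sample rate (Hz)
h = 1.0 / FS                  # integration step

def f(v, vin):
    """dv/dt for the clipper: charging current minus antiparallel-diode current."""
    return (vin - v) / (R * C) - (2.0 * Is / C) * math.sinh(v / Vt)

def df_dv(v):
    """Partial derivative of f with respect to v, needed by Newton's method."""
    return -1.0 / (R * C) - (2.0 * Is / (C * Vt)) * math.cosh(v / Vt)

def step_backward_euler(v_prev, vin, tol=1e-9, max_iters=50):
    """One backward-Euler step: solve v = v_prev + h*f(v, vin) by Newton iteration.
    Returns the new state and the number of iterations used."""
    v = v_prev
    for i in range(1, max_iters + 1):
        g = v - v_prev - h * f(v, vin)       # implicit-step residual
        dg = 1.0 - h * df_dv(v)              # its derivative (Jacobian, 1x1 here)
        dv = g / dg
        v -= dv
        if abs(dv) < tol:
            return v, i
    return v, max_iters

# Drive the clipper with a 4 V, 1 kHz sine: the output clips near the diode
# forward voltage, and Newton converges quickly because each sample's solve
# starts from the previous sample's solution.
v, out, iters = 0.0, [], []
for n in range(200):
    vin = 4.0 * math.sin(2.0 * math.pi * 1000.0 * n / FS)
    v, k = step_backward_euler(v, vin)
    out.append(v)
    iters.append(k)

print(max(abs(x) for x in out))  # well under the 4 V input peak
print(max(iters))
```

The point of the comparison: every node you add to a circuit like this grows the Jacobian that the Newton solve must build and factor each sample, which is where the per-sample cost piles up; a method that avoids or cheapens that inner solve is what the "almost half the computation, scales better with more nodes" claims are about.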
Awesome. Simply awesome. To be able to achieve the same accuracy as existing methods in almost half the time is a major accomplishment, not to mention the other big wins here. I always admire your persistence and drive, Cliff ... it's what keeps us coming back for more. Knowing how you do things, I'd think there is even more optimization to be had here (?).
This is very exciting news, big time. I can't wait to see what comes of it...
I know it's in the early stages, but can you tell us where you're heading with this, and/or what we might expect, in general, to eventually show up on our devices?