New BluGuitar Amp X - Holy $#!%

Maybe the term "neural" is used in reference to the brain's ability to develop new pathways through its available contents to make new connections, new combinations...

The concept is a digitally controlled tackle box of amplifier components of all types that can be swapped in and out, in differing orders and amounts, using relays and switching and whatnot, so that you program a certain recipe of amp and it sends the signal through just that particular pathway, with those specific components engaged. 100% analog - it's an amp... that can change its circuit design. So it's not a model at all, just nearly infinitely configurable.
 
But it is improving. The promise is there - neural networks provably can approximate any function, no matter how complex. One such function is how to transform the signal from the guitar into the tone you want to hear. The devil is in the details, but theoretically it's doable. It should even be possible, in theory, to transform my playing into something Paul Gilbert would not be ashamed of. :) But that's way out there in the future. All those "style transfer" demos with images, where you scribble something and it gets redrawn in the style of Van Gogh - the principle could be the same.

Technically, in 2020 an iPhone 11 Pro has 6 teraops of int8 throughput on its Neural Engine, and ~1.3 TFLOPS of floating point on its GPU. And that's before you even get to the CPU, which also has more GFLOPS than the Axe-Fx III, although they are harder to harness in a general-purpose chip.

That's not what I'm advocating for here, though. Neural DSP is in its infancy; it will be a while (a decade, perhaps) before we get anything practical out of it. In the meantime, I wonder if neural network chips (which are essentially very powerful convolution chips and little else) can be used for off-label guitar processing. Basically, just make them do convolutions and nonlinearities - the bread and butter of a guitar modeler.
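To make that concrete, here is a minimal sketch (Python/NumPy, with made-up signal and IR data, so purely illustrative) of the two operations I mean: a static nonlinearity standing in for preamp clipping, followed by convolution with a cabinet impulse response. A real modeler does far more than this, but these two building blocks are exactly what neural accelerators are optimized to crunch.

```python
import numpy as np

def waveshape(x, drive=5.0):
    """Static nonlinearity standing in for preamp clipping (illustrative only)."""
    return np.tanh(drive * x)

def cab_sim(x, ir):
    """Convolve the shaped signal with a cabinet impulse response."""
    return np.convolve(x, ir, mode="same")

sr = 48000
t = np.arange(int(0.01 * sr)) / sr              # 10 ms of audio
guitar = 0.5 * np.sin(2 * np.pi * 440 * t)      # stand-in for a guitar signal

# Placeholder IR; a real cab IR would be loaded from a .wav capture
ir = np.exp(-np.linspace(0, 8, 256)) * np.random.randn(256) * 0.1

out = cab_sim(waveshape(guitar), ir)
```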

Style transfer is possible - learning the sound we can hear and measure (mostly EQ, maybe some other parameters like reflection count) of the output signal from any amp (or any combined signal chain) - so-called "biomimetic" processing. But the thing is, they would need all those TFLOPS of compute to replicate just one style (of course you can add EQ post-processing on top of that to create more), not to mention that running inference on a rack/floor device wouldn't be feasible, at least with today's available resources.
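For the EQ part specifically you don't even need a neural network; a rough sketch of plain spectrum matching (my own illustration, not anyone's actual algorithm) looks like this:

```python
import numpy as np

def avg_spectrum(x, n_fft=4096):
    """Average windowed magnitude spectrum over the whole clip."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] for i in range(0, len(x) - n_fft + 1, n_fft)]
    return np.mean([np.abs(np.fft.rfft(f * window)) for f in frames], axis=0)

def matching_gain(source, target, n_fft=4096, eps=1e-9):
    """Per-bin gain curve that maps the source's average spectrum onto the target's."""
    return avg_spectrum(target, n_fft) / (avg_spectrum(source, n_fft) + eps)

# Toy usage with noise; real inputs would be a DI track vs. a mic'd amp recording
rng = np.random.default_rng(0)
src, tgt = rng.standard_normal(48000), rng.standard_normal(48000)
gain = matching_gain(src, tgt)   # could be smoothed and turned into an FIR filter
```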

So I don't think GPUs/TPUs could be used for extremely low-latency audio processing at the moment - I could definitely be wrong here, given that neural-network-powered object detectors like YOLO run at >100 FPS on good hardware. That's crazy fast for neural network inference. But DSPs have already proven how effective they can be here. Maybe a good hybrid: do most of your continuous audio processing on a DSP and have a custom-built FPGA handle the parallel tasks.
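The buffer math is the real killer, not raw throughput. A rough sketch of why (just the standard latency arithmetic, nothing vendor-specific):

```python
SAMPLE_RATE = 48000  # Hz

def buffer_latency_ms(buffer_size, sample_rate=SAMPLE_RATE):
    """Time one audio buffer represents; all processing must finish within it."""
    return 1000.0 * buffer_size / sample_rate

for size in (32, 64, 128, 256, 1024):
    print(f"{size:>5} samples -> {buffer_latency_ms(size):.2f} ms")

# A guitar rig wants the 32-128 sample range (well under 3 ms per buffer),
# while GPUs/TPUs hit their rated TFLOPS only when fed large batches -
# the opposite requirement.
```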

One application that comes to mind - improving Tone Match: I believe you need to select the amp that sounds closest to the target amp before starting the process; maybe a classification model could do that automatically so people wouldn't have to spend time figuring it out.
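Even a crude version of that - no training required - could be a nearest-neighbor lookup over spectral fingerprints of the existing models (a hypothetical sketch, not how Tone Match actually works):

```python
import numpy as np

def fingerprint(audio, n_fft=4096):
    """Log-magnitude spectrum of the first frame as a crude 'tone' feature vector."""
    return np.log(np.abs(np.fft.rfft(audio[:n_fft] * np.hanning(n_fft))) + 1e-9)

def closest_amp(target_audio, amp_library):
    """amp_library: {model_name: reference_audio}; returns the closest-sounding model."""
    target = fingerprint(target_audio)
    return min(amp_library,
               key=lambda name: np.linalg.norm(fingerprint(amp_library[name]) - target))
```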
 
Yeah "neural" is such a buzzword these days. But if I read correctly (and maybe I didn't), it doesn't look like there's digital modeling involved for the amp tones. But we'll have to wait to see some hands-on reviews and such. What a time to be alive.

There has to be digital modeling involvement for the amp tones. They claim that they're simulating amps with the ability to "edit and save circuit parameters at component level". Sorry, that ain't happening in a box that size with the notion that they can simulate all the amps pictured on that page. Even if we suspend disbelief and assume it is doable to pull this off with a purely analog signal path, the combinatorial explosion of required components to simulate all the various possible component values and electrical circuit schematics is staggering.
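Back-of-the-envelope, with numbers I'm making up purely for illustration: even a modest count of relay-switched component positions explodes fast, and that's before you touch topology.

```python
# Illustrative only: say the analog path exposed 24 switchable component positions,
# each with 8 relay-selectable values.
positions = 24
values_per_position = 8
print(f"{values_per_position ** positions:.2e} distinct value combinations")  # ~4.7e+21

# And that still assumes one fixed schematic; reproducing arbitrary amps also means
# rearranging the topology itself, which multiplies the space further.
```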
 
Their marketing material is even more over the top than the stuff for the Quad Cortex.

Some of this stuff is becoming like seeing "All Natural" on products at the grocery store.
 
...and Natural Language Processing, Crowdsourcing, and nano-anything.
If you haven't checked out NLP recently, you should. A ton of progress has been made in just the last 2 years, and things are still improving at such a rapid clip that it's difficult to keep up with the papers. Google has speech recognition, synthesis, and language translation locally on a phone in realtime now - no cloud.
 
I find it fascinating that the guitar players of old tried to get their own unique sound, while the guitar players of today try to mimic classic pre-existing sounds using NLP.

As for Google and local-client NLP - well, sometimes yes, sometimes no; the cloud is transparent. Most embedded consumer companies would rather spend money on the backend (Alexa) than on the front end (client HW/SW). It's all about scalability.
 
There has to be digital modeling involvement for the amp tones. They claim that they're simulating amps with the ability to "edit and save circuit parameters at component level". Sorry, that ain't happening in a box that size with the notion that they can simulate all the amps pictured on that page. Even if we suspend disbelief and assume it is doable to pull this off with a purely analog signal path, the combinatorial explosion of required components to simulate all the various possible component values and electrical circuit schematics is staggering.

I talked to Thomas quite a bit about this at NAMM. I share your skepticism, but he might be able to pull it off. It's a matter of how much variety you think is necessary to model a wide variety of amps. I've seen speculation that Kemper only uses between 6-8 amp models. Thomas anticipates only requiring a similar small number of component configurations in the AmpX. It will never be as flexible as a digital system, but on the other hand it will never have the aliasing problems inherent in a digital system.
 

Then he’s not really simulating every amp to the component level.
 
I'm not sure what problem modelers have that this solves.

It appears to be "new" based on buzzwords (and maybe even actually is new based on new tech), but I don't see what problem that solves.

You could always paint your Axe-Fx if the feeling of "new" is what you're after...
 

Aliasing.

Digital modelers have aliasing. They use various techniques to combat it, including oversampling, but it's a persistent problem. BluGuitar is analog, so it doesn't have aliasing.

Whether high-end modelers, like Fractal Audio's, have enough aliasing to worry about is debatable. What is not debatable is that an analog modeler would be rather inflexible to work with.
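For anyone who wants to see it rather than take it on faith, here's a small NumPy sketch of where digital aliasing comes from: hard-clip a 5 kHz sine at 48 kHz and the harmonics that land above Nyquist fold back down as inharmonic junk. (Oversampling before the nonlinearity, then filtering and decimating, is the mitigation mentioned above.)

```python
import numpy as np

sr = 48000
f0 = 5000                                  # 5 kHz fundamental
t = np.arange(sr) / sr                     # one second of audio
x = np.sin(2 * np.pi * f0 * t)

# Hard clipping generates odd harmonics: 15 kHz, 25 kHz, 35 kHz, ...
# Anything above Nyquist (24 kHz) folds back: 25 kHz -> 23 kHz, 35 kHz -> 13 kHz.
clipped = np.clip(3.0 * x, -1.0, 1.0)

spectrum = np.abs(np.fft.rfft(clipped * np.hanning(len(clipped))))
freqs = np.fft.rfftfreq(len(clipped), 1 / sr)
aliases = freqs[spectrum > spectrum.max() / 1000]   # folded components show up here
```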
 
Their aliasing is way outside the frequency range of even bats and dogs by now. Y’all tinnitus sufferers most definitely can’t hear it.
 
And latency: no conversion, the electrons from your strings going through a 100% reconfigurable analog pathway and coming out of the speakers - that's a feeling worth something in itself.
 
Interesting. Is there any evidence of aliasing from the AF2 or 3? I would call that a problem already solved, from where we stand. I'm open to counter-evidence though.
 

Well, yes... use any RTA with the Axe-Fx and you'll find evidence of aliasing. The level will be quite low, however, especially compared to other modelers. That's why I said it's debatable whether it has enough aliasing to worry about.
 
I would agree with the premise that he only needs about 5 or 6 flexible circuits. A custom amp builder I used to buy from, who was very open and honest, basically told me most people would be shocked to find that many of the popular amps share the same basic topology and would be identical if not for 3 or 4 different components and values.

There are a few really different ones - Trainwrecks come to mind - but 90 percent of modern hard rock and metal amps are that close.
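A trivial illustration of that point (values are hypothetical, not from any particular amp): keep the circuit identical and swap one capacitor, and the voicing moves. That's the kind of range a small set of flexible circuits could plausibly cover.

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner of a one-pole RC low-pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Same topology, one component value changed:
print(rc_cutoff_hz(470e3, 470e-12))   # ~720 Hz  - darker voicing
print(rc_cutoff_hz(470e3, 250e-12))   # ~1354 Hz - brighter voicing
```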
 
It's not even high voltage. Vcc is 5 to 80 volts.

It's like the old Vox Tonelab stuff, except with a Class-D power amp. The Vox used a 12AX7 into a dummy load as a virtual power amp and then amplified that signal. The Amp X uses Korg Nutubes into a dummy load and then a Class-D power amp.

That tube in the Vox was a fake. I mean, I had a chance to take one apart for cleaning and pulled that 12AX7 out, and the unit's sound never cut off.
 