Fractal vs Capture

Yes, but the point isn't that it's doing the same thing, the point is whether you can get the same sound - or near as makes no difference - using just post. And IME, in many cases, you can. The net result of all the interaction is mostly changes in EQ or perception of EQ that can be copied after the fact pretty damn closely as long as the original sound isn't way off (e.g., tone match).
You're approaching it strictly from an "A does B does C" viewpoint without taking into account all of the nuances and subtleties. Yet, if I backtrack through your comments, you are aware of the nuances and interactions. You would literally have to capture every possible knob combination to attain what a modeler does.
 
Add a player block on the Fractals to give end users more options. IMO that would be very cool.
I can see it now:

User: “Help! NAM isn’t working since the latest Axe-FX III firmware!”

Fractal: “The problem isn’t the firmware, it’s NAM.”

NAM: “The problem isn’t on our end, it’s Fractal.”
 
Yes, but the point isn't that it's doing the same thing, the point is whether you can get the same sound - or near as makes no difference - using just post. And IME, in many cases, you can. The net result of all the interaction is mostly changes in EQ or perception of EQ that can be copied after the fact pretty damn closely as long as the original sound isn't way off (e.g., tone match).
No, you can't.

The EQ within the circuits affects the character of the amp.

In any case, I'm done arguing the point because I have no dog in this fight... I have no practical interest in capture-based amp sims.
 
Yes, but the point isn't that it's doing the same thing, the point is whether you can get the same sound - or near as makes no difference - using just post. And IME, in many cases, you can. The net result of all the interaction is mostly changes in EQ or perception of EQ that can be copied after the fact pretty damn closely as long as the original sound isn't way off (e.g., tone match).
If that's what works for you, do it and be happy!
 
Yes, but the point isn't that it's doing the same thing, the point is whether you can get the same sound - or near as makes no difference - using just post. And IME, in many cases, you can. The net result of all the interaction is mostly changes in EQ or perception of EQ that can be copied after the fact pretty damn closely as long as the original sound isn't way off (e.g., tone match).

Actually, the point made by the OP was not whether you get the same sound - it’s whether you get the same response and feel. Sounding right and responding/feeling right are not the same thing.

To sound right, you’re correct - as long as you are reasonably close, you can probably do enough pre- and post-processing to get what you want.

However, it is highly unlikely that what you end up with is going to respond or feel right. For starters, the additional processing is not likely to interact the same as you vary your touch, work your volume/tone controls, switch pickup settings, etc. But more importantly, the underlying amp modeling implementation does not match the amp that was captured. So there’s very little chance that it’s going to behave correctly. This is the fundamental problem with the Kemper and QC.
 
However, it is highly unlikely that what you end up with is going to respond or feel right. For starters, the additional processing is not likely to interact the same as you vary your touch, work your volume/tone controls, switch pickup settings, etc. But more importantly, the underlying amp modeling implementation does not match the amp that was captured. So there’s very little chance that it’s going to behave correctly. This is the fundamental problem with the Kemper and QC.

If it's a good capture, it should match exactly. The whole point is that there is no underlying model implementation. It simply does what the amp does. Input -> output, black box. That includes touch response and dynamics. Now, I'm not saying that Kemper, QC, etc., will manage to reproduce this correctly all the time, but as the ML tech moves forward with GPU learning, things are getting better. What you're pointing out here is an issue with training accuracy and data, not with ML-based amp modeling. It's not perfect now, but then again - neither is component modeling. Take your pick and play, honestly - it doesn't matter at all anymore as long as you like it.
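(For anyone wondering what "it learns input -> output" means in practice, here is a rough conceptual sketch of the training side in Python/PyTorch. The signal lengths, layer sizes, and step count are placeholders for illustration, not any vendor's actual trainer.)

import torch
import torch.nn as nn

# Conceptual black-box training: play a known test signal through the real
# amp, record what comes back, then fit a network so network(input) matches
# the recording. Nothing about the amp's circuit is modeled - only input ->
# output. All shapes and sizes below are illustrative.

sr = 48_000
reamp_in = torch.randn(1, 1, sr * 2)     # stand-in for the test/reamp signal
amp_out  = torch.randn(1, 1, sr * 2)     # stand-in for the amp's recorded output

model = nn.Sequential(                   # tiny stand-in for a capture network
    nn.Conv1d(1, 16, 31, padding=15), nn.Tanh(),
    nn.Conv1d(16, 1, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                  # real trainers run far, far longer
    pred = model(reamp_in)
    loss = torch.mean((pred - amp_out) ** 2)   # "how far off from the amp?"
    opt.zero_grad()
    loss.backward()
    opt.step()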
 
If it's a good capture, it should match exactly. The whole point is that there is no underlying model implementation. It simply does what the amp does. Input -> output, black box. That includes touch response and dynamics. Now, I'm not saying that Kemper, QC, etc., will manage to reproduce this correctly all the time, but as the ML tech moves forward with GPU learning, things are getting better. What you're pointing out here is an issue with training accuracy and data, not with ML-based amp modeling. It's not perfect now, but then again - neither is component modeling. Take your pick and play, honestly - it doesn't matter at all anymore as long as you like it.

It's a bit more than training - the underlying amp simulation engine has to be capable of doing what the trainer says. And you're still faced with the fact that the controls don't work as expected.
 
It's a bit more than training - the underlying amp simulation engine has to be capable of doing what the trainer says. And you're still faced with the fact that the controls don't work as expected.

There isn't an underlying amp simulation. The Kemper was a bit different, but that's not how the ML modeling works. It quite literally learns to do whatever the amp does, given a particular input. Depending on the resolution in the time domain (there is a certain event 'length' it will be able to learn), the dynamic response - including gain structure - will be captured, as long as the training set provides enough information about what the response should be. The reason it's so flexible is exactly that there isn't a model in the first place. The length thing is why you can do a really accurate amp capture (a very, very fast response), but you can't capture longer-term things like compression or other FX (yet).

The knob thing is a totally different part of this tbh - it just expands the overall parametric space needed to reproduce an entire amp. At one setting, there's no reason an ML capture can't respond exactly like the real thing within the window of training.
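(To make the event-'length' point concrete, here is a toy sketch of a causal dilated-convolution stack, roughly the kind of structure NAM-style captures are built on - the layer counts and channel sizes here are made up, not the actual NAM or ToneX architecture. The window such a stack can "see" is only a few milliseconds, which is why fast dynamics fit inside a capture while seconds-long behavior like compressor release or delay does not.)

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCaptureNet(nn.Module):
    """Toy stand-in for a capture network: a stack of causal, dilated 1-D
    convolutions. The stack can only 'see' a fixed window of past samples
    (its receptive field)."""
    def __init__(self, channels=8, layers=6, kernel_size=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(1 if i == 0 else channels, channels,
                      kernel_size, dilation=2 ** i)
            for i in range(layers)
        )
        self.out = nn.Conv1d(channels, 1, 1)

    def forward(self, x):                       # x: (batch, 1, samples)
        for conv in self.convs:
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            x = torch.tanh(conv(F.pad(x, (pad, 0))))   # left-pad keeps it causal
        return self.out(x)

# Receptive field: 1 + sum over layers of (kernel - 1) * dilation
rf = 1 + sum((3 - 1) * 2 ** i for i in range(6))
print(f"window = {rf} samples = {1000 * rf / 48_000:.1f} ms at 48 kHz")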
 
In the Gary Moore example above, I'm pretty sure that if he were playing through a crappy practice amp, he would still sound like Gary Moore - maybe not with the best tone, but all those playing attributes would still have been there.
That’s partly true but it breaks down because there are other factors involved that affect the equation.

Moore’s essence would come through using a crappy practice amp, because that would be his touch and style coming through. But would he then say that the crappy practice amp was going to be good enough for his show, stick a mic on it, and use it instead of his favorite rig? He wouldn't, because the favorite rig would react and respond to his touch just as he expected and relied on. He used dynamics and his volume and tone knobs and needed the amplifier to carry its side of the equation to generate the overall sound. Imagine how his career would have gone if he had always gone with the crappy practice amp.
 
There isn't an underlying amp simulation. The Kemper was a bit different, but that's not how the ML modeling works. It quite literally learns to do whatever the amp does, given a particular input. Depending on the resolution in the time domain (there is a certain event 'length' it will be able to learn), the dynamic response - including gain structure - will be captured, as long as the training set provides enough information about what the response should be. The reason it's so flexible is exactly that there isn't a model in the first place. The length thing is why you can do a really accurate amp capture (a very, very fast response), but you can't capture longer-term things like compression or other FX (yet).

The knob thing is a totally different part of this tbh - it just expands the overall parametric space needed to reproduce an entire amp. At one setting, there's no reason an ML capture can't respond exactly like the real thing within the window of training.

Something needs to generate audio. The ToneX capture files are fairly small. There's no way it contains anything other than parameters for some sort of audio engine. So the results can only be as good as how well that engine can interpret the parameters generated by the capture data.
 
Something needs to generate audio. The ToneX capture files are fairly small. There's no way it contains anything other than parameters for some sort of audio engine. So the results can only be as good as how well that engine can interpret the parameters generated by the capture data.

That's not how it works - you can think of it as "similar" to linear regression, only much, much larger. You only need a small subset of information, post-training, to be able to recreate the entire data set. These are parameters for the ML model, which isn't the same thing in any way as an audio/amp model - it's purely a statistical model. The magic (mystery?) of ML is that it is able to learn how to generate the output given an input and a statistical parameter set based on weights/scores, and we as outside observers (or developers) can't really know how it actually turns A into B - it just does.

The data used to create a model is much larger - if it's a wav file, for example, 40+ MB. But the resulting model is only 500 kB or so.
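(A quick back-of-the-envelope check on those numbers - the layer sizes below are invented for illustration, not any vendor's actual topology:)

# Rough size comparison: training audio vs. trained model weights.

seconds = 4 * 60                          # ~4 minutes of mono reamp/training audio
wav_bytes = seconds * 48_000 * 3          # 24-bit samples at 48 kHz = 3 bytes each
print(f"training audio: ~{wav_bytes / 1e6:.0f} MB")

hidden = 176                              # a small recurrent capture network
params = 4 * (hidden * hidden + hidden * 1 + hidden)   # LSTM-style gate weights + biases
params += hidden + 1                                    # dense output layer
print(f"trained model:  ~{params * 4 / 1e3:.0f} kB ({params:,} float32 parameters)")

Run it and you get roughly 35 MB of audio in versus roughly 500 kB of weights out, which is the size mismatch described above.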
 
That's not how it works - you can think of it as "similar" to linear regression, only much, much larger. You only need a small subset of information, post-training, to be able to recreate the entire data set. These are parameters for the ML model, which isn't the same thing in any way as an audio/amp model - it's purely a statistical model. The magic (mystery?) of ML is that it is able to learn how to generate the output given an input and a statistical parameter set based on weights/scores, and we as outside observers (or developers) can't really know how it actually turns A into B - it just does.

The data used to create a model is much larger - if it's a wav file, for example, 40+ MB. But the resulting model is only 500 kB or so.

Sorry but I've been developing software professionally for over 35 years and this is just a bunch of jibber-jabber. Admittedly, I'm no ML expert. However, I do know how computers work. There's no magic thing that takes ML statistics as input and generates audio. So I repeat, something has to generate the audio. Again, I repeat, your results are at the mercy of the engine that interprets the data from the capture to generate audio. This will not be without compromises.
 

It's not magic. Something still has to take the data from a capture file and apply it in real time to an incoming audio stream to produce the simulated output. You may not call it an amp modeler, but it is an audio processing engine. And your results can only be as good as how well the engine interprets the training data.
 
It's not magic. Something still has to take the data from a capture file and apply it in real time to an incoming audio stream to produce the simulated output. You may not call it an amp modeler, but it is an audio processing engine. And your results can only be as good as how well the engine interprets the training data.
From the little I've understood on the matter, I think the model created by the neural network is a complete transfer function; what loads the capture file is just a plugin that takes the audio from your interface/DAW, sends it through that transfer function, and sends the output back to your interface/DAW.

The things that can be tweaked/optimized are the training process, its weights, and the type of neural network used.
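(A minimal sketch of that run-time picture, using a toy network and hypothetical function and file names - not any particular vendor's plugin API:)

import numpy as np
import torch
import torch.nn as nn

# Run-time side only: load trained weights into a fixed network shape, then
# push each audio buffer from the host through the learned transfer function.
# ToyCaptureNet, load_capture, and the file path are all hypothetical.

class ToyCaptureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, 31, padding=15), nn.Tanh(),
            nn.Conv1d(16, 1, 1),
        )

    def forward(self, x):
        return self.body(x)

def load_capture(path: str) -> nn.Module:
    model = ToyCaptureNet()                        # same shape as at training time
    model.load_state_dict(torch.load(path, map_location="cpu"))
    model.eval()
    return model

def process_block(model: nn.Module, block: np.ndarray) -> np.ndarray:
    """Run one host buffer (e.g. 256 samples) through the learned transfer function."""
    with torch.no_grad():
        x = torch.from_numpy(block.astype(np.float32)).view(1, 1, -1)
        return model(x).view(-1).numpy()

# In the plugin's audio callback, conceptually:
#   out_buffer = process_block(model, in_buffer)

A real plugin also has to carry the model's internal state or sample history across buffers so there are no discontinuities at block boundaries; the sketch above ignores that.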
 
It's not magic. Something still has to take the data from a capture file and apply it in real time to an incoming audio stream to produce the simulated output. You may not call it an amp modeler, but it is an audio processing engine. And your results can only be as good as how well the engine interprets the training data.

At some point data becomes audio, yes. But the nature of that conversion has nothing to do with the actual amp/capture representation. It's the same way that playing back a waveform becomes audio, except the waveform data is generated in real time directly by the ML model instead of being read from a file. ML is fundamentally different from the way normal software works - I also do development as a job, btw.
 
[Image: magic wand with electric discharge effect]
 