Fractal vs Capture

This thread started with the QC, but I'm not sure to what extent it's actually doing ML...? And to the extent it is doing ML, it's not the best at it, as seen in comparisons to TONEX and NAM.

NAM's showing that it's producing results that almost null with its source. If the waveforms are close to nulling, then the response & feel are going to be there... unless response & feel are somehow immaterial quantities that are not reflected in waveforms :p
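
For anyone curious what "nulling" means in practice: you subtract one time-aligned render from the other and measure what's left. A minimal sketch of that test (the file names and the numpy/soundfile usage are my own assumptions, not anything from NAM itself):

```python
# Minimal null-test sketch: load two time-aligned renders of the same DI,
# subtract them, and report the residual level relative to the reference.
import numpy as np
import soundfile as sf

ref, sr = sf.read("real_amp.wav", dtype="float32")      # reamped real amp
test, _ = sf.read("nam_capture.wav", dtype="float32")   # the capture's output

n = min(len(ref), len(test))        # trim to a common length
residual = ref[:n] - test[:n]       # the "null" signal

null_db = 10 * np.log10(np.sum(residual**2) / np.sum(ref[:n]**2) + 1e-12)
print(f"residual energy: {null_db:.1f} dB relative to the reference")
# The more negative that number, the closer the two waveforms are to cancelling.
```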
 
After seeing how long proper training takes on a modern GPU with NAM and ToneX, I have serious doubts the QC is really using a neural network on its DSP to make captures.
 
The magic (mystery?) of ML is that it learns how to generate the output for a given input from a statistical parameter set based on weights/scores, and we as outside observers (or developers) can't really know how it actually turns A into B - it just does.

That's one of the fascinating, and a bit unnerving, aspects of ML/neural networks: they are a 'black box' where the process by which they actually achieve their results is unknown and isn't subject to 'reverse engineering'.


Ahhhh this looks fantastic, thanks for this.
 

I think there are a number of ways to make this lighter and faster, depending on the training model settings and the weighting filters. Based on their own paper, that seems likely to be the case, but it's not clear whether it applies to the QC or not. The results speak for themselves though, and are pretty damn good even if not quite as good as ToneX/NAM.
 
The real-time processing requirements of the (decent) models in their paper are quite large compared to what's available on the QC. So based on their own paper, no.
 
The lowest quality capture setting in ToneX is equal in quality to those of the QC.
 
No one is saying the captures of the QC sound bad; that's all up to the end user. What people are experiencing is the better feel of the captures done with ToneX.
 

Yeah, totally understand. I haven't tried ToneX on CPU, so I have no idea how long the lowest setting would take. I always kind of assumed they had some partially baked-in data or some way of speeding the process up enough to run on the QC's CPUs, like Kemper does. The training data set is also pretty small. It's also possible, of course, that they get close and "cheat" with an EQ match at the end or something.
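
For what it's worth, an "EQ match at the end" could be as simple as comparing the average spectrum of the target with the average spectrum of the capture's output and applying the ratio as a static correction filter. A rough sketch of that idea - purely illustrative, with hypothetical function names and parameters, not anything Neural DSP has documented:

```python
# Rough sketch of a static EQ-match "cheat": estimate the average spectra of
# the target and of the model's output, then build a correction filter from
# their ratio. Purely illustrative; not based on anything the QC is known to do.
import numpy as np
from scipy import signal

def eq_match_filter(target, output, sr, n_fft=4096, n_taps=513):
    # Average magnitude spectra via Welch's method.
    f, p_target = signal.welch(target, sr, nperseg=n_fft)
    _, p_output = signal.welch(output, sr, nperseg=n_fft)
    # Correction magnitude = sqrt(target power / output power), capped at ~12 dB.
    correction = np.clip(np.sqrt(p_target / (p_output + 1e-12)), 0.25, 4.0)
    # Turn the magnitude curve into a linear-phase FIR to run after the model.
    return signal.firwin2(n_taps, f, correction, fs=sr)

# Usage (hypothetical variable names):
# fir = eq_match_filter(real_amp_audio, model_output_audio, 48000)
# corrected = signal.lfilter(fir, 1.0, model_output_audio)
```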
 

Fair - the heavy stuff is absolutely above my head, so I only ever bothered to read it once out of curiosity.
 
Given how many threads I’ve seen over the years with players wanting to know exactly how such and such artist sets their tone stack on a given amp, I think we’d be ignoring the elephant in the room if we didn’t acknowledge that many users just want a perfectly dialed-in tone.

If captures didn’t have any tone stack, gain etc, a lot of people wouldn’t care because they don’t want to download something and then have to tweak to taste, they just want to download and have it “right”.

Look at all the people who pay money for presets, because they don’t want to DIY. They want someone to pair their cab, dial in the amp, effects etc, often to reference a popular artist tone. They aren’t going to tweak anything.

Obviously a lot of us regulars on a modeler forum are the type who want to tweak things and appreciate having hundreds of models, component-level modeling, and all that fun stuff, but we are a pretty small minority of guitar players as a whole.

Heck, I’ve got those Paul Drew/studio rats captures on my ToneX and they sound just perfect, never had a want or need to tweak anything.

And that is the real attraction - not tweakability, nor feel, etc. It’s being able to take an affordable $399 pedal, click your mouse a dozen times, load a dozen great amp sounds, and just play. Simple and sounds great. That is a very important thing to a lot of folks.
 
While I totally agree with your points, on the flip side, this is exactly why I recently picked up a used Strymon Iridium over the ToneX to use as a small, portable, simple modeler when I don't need the full force of the Death Star Axe-Fx 3.

I don't use 3rd party presets. I don't like the idea of hunting down the "perfect" profiles based on someone else's idea of what should sound good. It doesn't help that the Tonex app is kinda annoying to work with.

I've learned how to operate a bunch of amps, digital or not, to my liking and get my preferred results nicely like that.
 
It's not magic. Something still has to take the data from a capture file and apply it in real time to an incoming audio stream to produce the simulated output. You may not call it an amp modeler, but it is an audio processing engine. And your results can only be as good as how well the engine interprets the training data.
You can look, for example, at Neural Amp Modeler's function for feeding it any source audio file to use with the trained model. It seems to convert the given WAV file to tensor data compatible with the model, run it through the model, then convert the output back to a format that can be written as a WAV file. For real-time audio you would do largely the same process, but with audio buffers rather than files.
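
Something like this is the shape of that file-based flow - a minimal sketch assuming a PyTorch-style model that maps input samples to output samples; the names are illustrative, not NAM's actual API:

```python
# Minimal sketch of the file-based flow described above: WAV in, tensor through
# the trained network, WAV out. Assumes a mono DI track and a PyTorch module.
import soundfile as sf
import torch

def reamp_file(model: torch.nn.Module, in_path: str, out_path: str) -> None:
    audio, sr = sf.read(in_path, dtype="float32")     # WAV -> float samples
    x = torch.from_numpy(audio).reshape(1, 1, -1)     # -> (batch, channel, time)
    with torch.no_grad():
        y = model(x)                                  # run the trained network
    sf.write(out_path, y.reshape(-1).numpy(), sr)     # samples back -> WAV

# Real-time use is the same idea, but fed block by block from the audio
# driver's buffers instead of a whole file (model state handling omitted).
```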

The machine learning model itself does not understand the concept of a guitar amplifier; you might as well train it with audio from a kazoo and have it churn out some kind of model. Technically there is no audio engine to speak of here, just pure data processing.

Correct me if I am wrong and misunderstood the code.
 

Correct - the word 'model' in this case is completely disassociated from what it means traditionally and has nothing to do with amp modeling. It just tries to reconstruct whatever the reference is, whether it's guitar or something completely different. If it's a good model, it will "just work", and should sound and react just like the source.
 
If we’re talking about “physical characteristics of tone and amp responsiveness,” the feel is the way an amp dynamically responds to your playing… the way it opens up or compresses depending on your picking technique.

When it comes to the tone, you can hear notes and chords “bloom” as they attack and release. A real amp and accurate model will have that dynamic “bloom,” where less accurate options can sound similar frequency-wise, but have a more linear response in that area. So even though they may sound the same, they don’t feel the same under your fingers.

Hopefully that makes sense.
Without reading the whole thread, the above are the major differences IME between Fractal's component modeling and the profilers: the sonic attributes over time, and the associated dynamics related to the player's touch and use of the guitar's controls that create them in the real world. This is where the Fractal is the most accurate (while noting that the profilers do a great job of mimicking the source's overall frequency response, their generic gain staging/filters do not react over time like the Fractal's or the real deal's).

I'm talking about Cygnus with regards to Fractal, which was a dramatic step forward in the tube power amp simulations (I've had Axe-Fx units since the Ultra).

In general, tube preamp/PS interactions are easier to simulate, while the tube power section and its relationships with the PS/OT/speaker/cab are far more complex - and of course that's where a lot of the tube amp magic happens! Fractal is by far the furthest ahead here IME (I haven't tried the QC or NAM, but I have used the Kemper and ToneX and briefly A/B'd them against my FM3 and Axe II in pro studio environments, with and without various tube preamps).

Note that metal players who never touch their guitar's controls, using all preamp-generated, high-gain filtered noise (typically with solid-state elements), will likely not notice these types of differences, though people using edge-of-breakup through pushed mid-gain sounds that use the tube power section as a non-linear tone generator likely will, depending on their experience with a variety of tube guitar amps.

That said, whatever works, works - in a mix a lot of these attributes can be masked, though they are readily apparent to the player who relies on them as well.
 

As recently as last night I've been using my actual tube preamps, or captures of them, through the FAS tube power section with pretty good results. You can get a similar response in a good capture for very specific cases, but you have to capture it with the same cab at the same volume you're planning to play it with, via a direct box and not a load box, to fully retain the response of the cab. That said, I'm one of those players that never plays anything but crystal and crushing :)
 
Also, profiling with the Kemper while using more than one distorting stage (like preamp plus power amp distortion) can affect results considerably, with the mids going full-on cocked wah. It doesn't always happen - I've profiled amps that for whatever reason didn't confuse the Kemper as much even when set up this way - but it can be a consideration.

On the other hand, I am generally confident with fractal sims that I'm able to dial in the right amount (for my taste) of preamp plus power amp distortion, if I do go after such an end goal.

PS: I think ToneX and even the Quad Cortex do better than the Kemper when it comes to this; they do not seem to have this limitation, at least not as badly as the Kemper does, in my admittedly limited experience with those units/VSTs (I have considerable experience with the Kemper, much less with the others).
 