Axe-Fx III Firmware Release Version 12.08 Public Beta 3

Tried that as well; it doesn't really help with the "synthetic" sound of the reverb tail. Listen to how the reverb decays in these new attached examples. Anyway, as no one here seems to hear what I hear, I guess I'd better shut up ;-). As I use the AXE FX III mainly for recording purposes, I guess I can always insert a Space Designer via send/return into my FX chain.
Did you try this?

The only difference I hear is that the Space Designer sample sounds like it has quite an aggressive high cut going on. Maybe try turning the high cut down on the Axe-Fx reverb block.
 
Tried that as well; it doesn't really help with the "synthetic" sound of the reverb tail. Listen to how the reverb decays in these new attached examples. Anyway, as no one here seems to hear what I hear, I guess I'd better shut up ;-). As I use the AXE FX III mainly for recording purposes, I guess I can always insert a Space Designer via send/return into my FX chain.

I'm definitely not hearing the synthetic part of that, but I do agree with DLC86 that it could be down to the high cut, which appears to be more prominent in the Logic version, so definitely give that a try as well.
 
Did you try this?
In fact I did just now, and I ran into a problem. The "In USB" block introduces quite a bit of latency, so using this as a real-time setup doesn't seem possible. Unless I did something wrong, of course. Here's my setup: In1 -> Amp -> Out2 -> [Logic Channel Strip with Space Designer] -> In USB -> Cab -> Out 1. The reverb sounds exactly how I'd like it to now, though ;)
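
For context on where that latency comes from: every pass across the USB interface costs at least a couple of driver buffers in each direction, and the loop above crosses USB twice. A rough back-of-the-envelope sketch in Python; the buffer counts and sizes are illustrative assumptions, not measured Axe-Fx III or Logic figures:

Code:
# Rough round-trip estimate for an Out USB -> DAW -> In USB loop.
# Buffer counts and sizes are illustrative assumptions only.

def round_trip_ms(buffer_samples, sample_rate=48_000, buffers_per_way=2):
    # Each direction costs roughly buffers_per_way buffers of delay;
    # the loop above pays that twice (out to Logic, then back in).
    one_way_s = buffers_per_way * buffer_samples / sample_rate
    return 2 * one_way_s * 1000.0

for buf in (64, 128, 256, 512):
    print(f"{buf:>4} samples -> ~{round_trip_ms(buf):.1f} ms round trip")

Even at a small 128-sample buffer that sketch lands around 10 ms of added delay, which is already noticeable when monitoring live.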
 

Attachments

  • Screenshot 2020-05-18 at 22.35.31.png
  • Screenshot 2020-05-18 at 22.35.38.png
I don't hear anything "synthetic" but I hear a markedly different EQ curve.
I'm not talking about the EQ curve; I guess that is largely caused by Logic's Amp Designer. What I refer to as "synthetic" is the LFO-modulation-like sound in the reverb tail that I never heard in a real spring reverb.
 
I hear it all the time in my spring reverbs.
I guess this discussion doesn't get us anywhere. You hear it, I don't. Anyway, as I already wrote, I seem to be the only one unhappy with the Spring Reverb's sound, so it's not worth taking this any further, I think...
 
I hear it all the time in my spring reverbs.

Me too. I had a realization once while working with someone who was very picky and had good ears.
I guessed that what we heard as modulation in the physical reverb was caused by differences between individual springs.
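
A minimal sketch of that intuition, assuming the springs behave like slightly mismatched feedback delay lines (all delay times and gains below are invented for illustration):

Code:
# Toy model of the "many springs" idea: three feedback delay lines with
# slightly detuned delay times. Nothing in the model is modulated, yet
# the summed tail fluctuates in a way the ear can read as modulation.
import numpy as np

SR = 48_000
delays_ms = (33.0, 33.4, 34.1)   # three "springs", slightly mismatched
feedback = 0.82

def comb(x, delay_ms, g):
    d = int(SR * delay_ms / 1000)
    y = x.copy()
    for n in range(d, len(x)):
        y[n] = x[n] + g * y[n - d]
    return y

impulse = np.zeros(SR)           # one second of audio
impulse[0] = 1.0
tail = sum(comb(impulse, d, feedback) for d in delays_ms) / len(delays_ms)
# The echoes from the three lines drift apart and re-converge over time,
# so the tail's envelope fluctuates instead of decaying smoothly.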
 
In fact I did just now, and I ran into a problem. The "In USB" block introduces quite a bit of latency, so using this as a real-time setup doesn't seem possible. Unless I did something wrong, of course. Here's my setup: In1 -> Amp -> Out2 -> [Logic Channel Strip with Space Designer] -> In USB -> Cab -> Out 1. The reverb sounds exactly how I'd like it to now, though ;)
I wasn't referring to that, but to the high cut in the Axe-Fx reverb.
That's the main difference I hear in your samples: less highs in Space Designer.
 
Maybe you can help me with my settings then. I'd like something closer to what a convolution reverb like Logic's Space Designer sounds like with a spring reverb impulse response: this "splashy" (or whatever I should call it) sound is also found in the real thing. Please compare the attached files, one recorded with Logic (the amp simulation is Logic's Amp Designer, which frankly sucks compared to the AXE FX III; the reverb is Space Designer using a spring reverb impulse response), one recorded with the AXE FX III. The reverb tail of the AXE recording sounds quite synthetic and lacks the "splashy"-ness heard in the Space Designer variant.

I can hear what you mean, I think. The tail of the Space Designer one has this kind of 'bounciness' to it, at around the 3-second mark or so, where the Axe FX one seems to be more like a standard "flat" reverb. I have no idea which one is more realistic, though.
 
A couple of years ago he was trumpeting his pitch detection "invention" based on bitwise autocorrelation using XOR. He said, "I can't believe how great it works and how no one else has ever thought of this because it's so simple and elegant." I didn't have the heart to tell him:

[snips...]
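
For readers trying to follow the argument, here is roughly what the technique under discussion looks like: sign-quantize the signal to one bit, XOR a window against lagged copies of itself, and take the lag with the fewest mismatches as the period. A sketch of the general idea only, not Joel's code and not the patented method:

Code:
# Bitwise autocorrelation (BACF) sketch: one-bit quantize, XOR, count.
import numpy as np

def bacf_period(x, min_lag, max_lag):
    bits = x >= 0.0                      # one-bit quantization (sign)
    n = len(bits) - max_lag
    best_lag, best_err = min_lag, n + 1
    for lag in range(min_lag, max_lag + 1):
        # XOR counts sample positions where the signs disagree.
        mismatches = np.count_nonzero(bits[:n] ^ bits[lag:lag + n])
        if mismatches < best_err:
            best_err, best_lag = mismatches, lag
    return best_lag

sr = 44_100
t = np.arange(4096) / sr
x = np.sin(2 * np.pi * 82.41 * t)            # low-E fundamental, ~535 samples
lag = bacf_period(x, sr // 1000, sr // 60)   # search roughly 60 Hz .. 1 kHz
print(f"estimated pitch: {sr / lag:.2f} Hz")

Note that the returned lag is a whole number of samples; that quantization is exactly what the reply further down takes issue with.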

Joel here... I tend to simply ignore this, but...

First, could you show me where I told you "that video cards were the future of audio DSP" here or in my posts?

Also, could you show me the patent that uses bitwise autocorrelation? I'd gladly update my post if it is indeed bitwise autocorrelation. BTW, I am not using a single-bit A/D. I take advantage of linear interpolation to accurately estimate the actual zero crossing. I am having good results.
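
A minimal sketch of that refinement, assuming a sign change between two consecutive samples (the names and numbers are illustrative, not from Joel's code):

Code:
# Linear interpolation of a zero crossing: instead of taking the nearest
# sample index, estimate the fractional position where the waveform
# crosses zero between two samples of opposite sign.
def zero_crossing(x0, x1, n):
    # Sample n has value x0 < 0, sample n + 1 has value x1 >= 0.
    return n + x0 / (x0 - x1)

print(zero_crossing(-0.2, 0.6, 10))   # -> 10.25, a quarter sample past n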

Here's a quick sample of the results (I have more numbers if anyone is interested):

BACF Results:
82.410004 Error: 0.000077 cent(s).
Average Error: 0.000077 cent(s).
Min Error: 0.000077 cent(s).
Max Error: 0.000077 cent(s).

I have a comprehensive test suite with single notes, fast picking, legato, bends, and hammer-ons, with sound clips where the raw guitar audio is tracked by a synth (I'm sorry for posting a link, but I need to substantiate my defense: bit.ly/2So8Tq7)

and there is more on the GitHub project page, which anyone can try.

Regards,
--Joel
 
I tried to explain to him that audio DSP is inherently a sequential problem and that massive parallelism doesn't help you. It's the assembly line conundrum. Parallelism makes more shoes per minute but it doesn't make the pair you want any faster.

Hi, Joel here (again),

And you assumed I was too naive to know that? Consider this (following your analogy): what if there's not just one of you who wants a pair of shoes, but many, say 6 or 8? I am processing at least 6 channels at a time (one channel for each string). Granted, you can do 6 audio streams using threads or strands, but other forms of parallelism can take the load off the CPU (in a computer, for example). The GPU is one of those. The real bottleneck here is the latency of moving data back and forth between CPU memory and GPU memory, but that has improved significantly in recent years and is continually being improved.

There are also tasks that need, say, expensive FFTs, convolutions, or correlations that are not directly involved in processing the audio itself, but instead extract relevant information from it (for pitch, formant, and harmonics detection, for example) and provide interesting control signals to synthesizers or equalizers at a different rate (say, every 1 ms). That, at least to me, is a very exciting area for exploration!
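
One way to picture that idea, as a sketch rather than anything from Joel's actual project: run the expensive analysis once per 1 ms hop instead of per sample, and treat its output as a slow control stream. The hop size, window length, and the spectral-centroid feature below are all my own assumptions:

Code:
# Control-rate analysis sketch: a heavy FFT runs once per millisecond
# hop, yielding a slow stream of control values for a synth or EQ.
import numpy as np

SR = 48_000
HOP = SR // 1000            # one analysis frame per millisecond
WIN = 1024

def control_stream(x):
    """Yield (time_s, spectral_centroid_hz) once per hop."""
    freqs = np.fft.rfftfreq(WIN, 1.0 / SR)
    for start in range(0, len(x) - WIN, HOP):
        mag = np.abs(np.fft.rfft(x[start:start + WIN] * np.hanning(WIN)))
        centroid = float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
        yield start / SR, centroid

x = np.sin(2 * np.pi * 220.0 * np.arange(SR) / SR)   # 1 s test tone
for t_s, c in list(control_stream(x))[:3]:
    print(f"t={t_s * 1000:5.1f} ms  centroid ~ {c:7.1f} Hz")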

Regards,
--Joel
 
Joel here... I tend to simply ignore this, but...

First, could you show me where I told you "that video cards were the future of audio DSP" here or in my posts?

Also, could you show me the patent that uses bitwise autocorrelation? I'd gladly update my post if it is indeed bitwise autocorrelation. BTW, I am not using a single-bit A/D. I take advantage of linear interpolation to accurately estimate the actual zero crossing. I am having good results.

Here's a quick sample of the results (I have more numbers if anyone is interested):

BACF Results:
82.410004 Error: 0.000077 cent(s).
Average Error: 0.000077 cent(s).
Min Error: 0.000077 cent(s).
Max Error: 0.000077 cent(s).

I have a comprehensive test suite with single notes, fast picking, legato, bends, and hammer-ons, with sound clips where the raw guitar audio is tracked by a synth (I'm sorry for posting a link, but I need to substantiate my defense: bit.ly/2So8Tq7)

and there is more on the GitHub project page, which anyone can try.

Regards,
--Joel
https://patents.google.com/patent/US4429609A/en?oq=4429609

You may get very good results at one frequency. Testing at only one frequency with a simulated signal is poor methodology. The problem is that the period error is up to 1/2 the sample period. At a 44.1 kHz sample rate this is around 11 µs. The period of a sine wave at 440 Hz is about 2.27 ms. 11 µs of error may not seem like a lot, but it's actually almost 9 cents of error. This is unacceptably high for most applications. The error gets progressively worse as you go up in frequency; the error at 820 Hz will be ten times worse than the error at 82 Hz. The basic problem is that the autocorrelation is quantized. This isn't a problem if you are using a significant number of bits to represent the signal, as you can interpolate the autocorrelation (which is what we do), but with a one-bit representation you cannot interpolate.
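
The arithmetic above, worked out (worst case of half a sample of lag error; the test frequencies are just examples):

Code:
# Worst-case pitch error from lag quantization: the estimated period is
# off by up to half a sample, and the resulting error in cents grows
# roughly linearly with frequency.
import math

SR = 44_100
half_sample = 0.5 / SR                       # about 11.3 microseconds

for f in (82.41, 440.0, 824.1):
    period = 1.0 / f
    ratio = period / (period - half_sample)  # worst-case frequency ratio
    print(f"{f:7.2f} Hz -> up to {1200 * math.log2(ratio):.1f} cents")

The 824.1 Hz figure comes out almost exactly ten times the 82.41 Hz one, matching the scaling described above.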

Furthermore, "dumb" autocorrelation methods like this are mostly useless for determining "pitch". They may find a frequency but are extremely prone to octave errors. A guitar note often has a 2nd harmonic that has more energy than the fundamental. Play an A at the 2nd fret on the G string. The 2nd harmonic is usually 10-20 dB greater than the fundamental. This method will not find the fundamental.
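
To see the octave problem concretely, here's a toy example (amplitudes invented to mimic the 10-20 dB imbalance described): a tone whose 2nd harmonic dominates gives an almost-perfect bitwise match at half the true period, so a tracker that accepts the first "good enough" lag will report the upper octave.

Code:
# Octave-error demo: weak 110 Hz fundamental under a strong 220 Hz
# 2nd harmonic (about 12 dB hotter). Bitwise autocorrelation sees a
# near-perfect match at the half period.
import numpy as np

sr = 44_100
t = np.arange(8192) / sr
f0 = 110.0
x = 0.25 * np.sin(2 * np.pi * f0 * t) + 1.0 * np.sin(2 * np.pi * 2 * f0 * t)

bits = x >= 0.0
n = len(bits) - 600

def mismatch(lag):
    return np.count_nonzero(bits[:n] ^ bits[lag:lag + n]) / n

half = round(sr / (2 * f0))   # lag of the 2nd harmonic's period
full = round(sr / f0)         # lag of the true fundamental period
print(f"mismatch at half period: {mismatch(half):.1%}")
print(f"mismatch at full period: {mismatch(full):.1%}")

In this toy the half-period mismatch lands under roughly 10%, so a tracker that takes the first lag below such a threshold reports 220 Hz, an octave high.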

People smarter than me, and I myself, have been working on pitch detection for decades. If it were as simple as doing a one-bit A/D and an exclusive OR, there wouldn't be hundreds of papers written on the subject. I'm actually using a wavelet approach in my latest work. There are people using neural networks and other advanced techniques, and the problem still isn't completely solved.

One of the biggest mistakes an engineer can make is thinking they know more than they do. This happens in any discipline but some engineers seem especially prone to this. Humility is a virtue. I'd probably hire someone like you if you were looking for a job as you seem motivated. But you need to understand your limitations. No matter how much you think you know, someone out there knows more about a particular subject than you do.
 