This new firmware sounds great...

Status
Not open for further replies.
I'm just going to give my opinion as a non-scientist, non-programmer, and non-audio engineer, so excuse me if I don't know what I'm talking about; I don't claim to have any special knowledge about audio. What I think some here may be missing is that we shouldn't be talking about frequencies when we talk about the output of the Axe Fx. We should be talking about perceived sound. That's different from frequencies.

Having talked to a family of violin and cello players from the last century, who told me that mono recordings were "better" than stereo recordings, I think I understand what the misunderstanding is. The problem with stereo was that it introduced cancellation in the signal. If you play back a mono recording into a rich soundscape, like a home with several rooms with various acoustically helpful surfaces, and compare it to a stereo recording of the same type of performance, there is a large difference.

The mono recording will "move" through the space whereas the stereo recording sounds more lifeless. Go a couple rooms over, and listen. CDs may be even worse, because of something about the dithering, but I sure didn't think they "opened up in the room", when compared to vinyl, with the particular CD player and turntable I had at the time.

What this points to, for me, is the difference between an "ideal" signal for a given theory and the ACTUAL signal. Case in point: if you already know the EXACT frequencies, or the frequencies hold steady, then all these conventions of sound theory might be 100% accurate. Waves, on the other hand, are potentially infinitely variable. An audio source can have frequency components that are high and low, but that also shift while the generation or recording is taking place. This, I'm guessing, is why cancellation caused the stereo recordings I tested to fizzle out rather than walk their way around our testing space. This makes sense to me.
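To make the cancellation idea concrete, here's a toy Python sketch (my own illustration, nothing specific to the Axe Fx or to any particular recording): two identical tones, one flipped 180 degrees in phase, nearly vanish when they combine — which is what can happen acoustically when two speakers' outputs meet at a spot in the room.

```python
import numpy as np

fs = 48000                    # sample rate in Hz (arbitrary choice for the demo)
t = np.arange(fs) / fs        # one second of sample times
f = 440.0                     # test tone frequency in Hz

left = np.sin(2 * np.pi * f * t)
right = np.sin(2 * np.pi * f * t + np.pi)   # same tone, 180 degrees out of phase

# When the two channels combine (e.g. both speakers reaching the same
# point in a room), the phase-inverted copy cancels the tone almost
# completely, even though each channel alone is at full level.
mixed = left + right
print(f"peak of one channel : {np.max(np.abs(left)):.6f}")
print(f"peak after summing  : {np.max(np.abs(mixed)):.6f}")
```

Real stereo material is only partially correlated between channels, so the cancellation is partial and frequency-dependent rather than total — but this is the mechanism in its purest form.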

Another aspect of the overall issue of "audio theory" is the difficulty of tracking polyphonic audio material. If all this were so well understood and so easy to do with limited sampling, why have there been, as far as I can see, no breakthroughs in software conversion from audio to MIDI? Have the thread viewers tried the software that purports to do this, even on a recording of a single instrument, like a piano? The glitching and inaccuracies are pretty glaring, even though this should be a simple task, right? But it's not simple when one is talking about the ACTUAL audio. And even to my weak imagination as a non-programmer, making a judgment as to the exact, rhythmically correct onsets of a series of low-frequency signals, e.g. piano key-strikes, appears to require a fairly large number of logic threads running simultaneously.
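For anyone curious why this is hard: below is a naive autocorrelation pitch tracker in Python — my own toy, not how any commercial audio-to-MIDI product works. It does fine on a single synthetic note, but feed it even a two-note chord and it can report a frequency that is neither of the notes played (typically a common subharmonic), which hints at why real polyphonic transcription is such a hard problem.

```python
import numpy as np

fs = 48000  # sample rate in Hz

def estimate_pitch(signal, fs, fmin=50.0, fmax=1000.0):
    """Naive autocorrelation pitch estimate: pick the lag at which the
    signal best matches a shifted copy of itself, within a plausible
    period range, and convert that lag to a frequency."""
    sig = signal - np.mean(signal)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return fs / lag

t = np.arange(int(0.1 * fs)) / fs                 # 100 ms of audio
single = np.sin(2 * np.pi * 220.0 * t)            # one note: A3
chord = single + np.sin(2 * np.pi * 277.18 * t)   # add C#4: a two-note chord

print(estimate_pitch(single, fs))   # lands close to 220 Hz
print(estimate_pitch(chord, fs))    # neither 220 nor 277 comes out
```

And this is before you add real-instrument attack transients, inharmonicity, noise, and the onset-timing problem mentioned above.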

Similarly, when I try to guess what sample rate is needed to accurately record an organic square wave digitally, it seems that even a low-frequency one would be cut off without a fairly high sample rate. Do the math and tell me I'm wrong. Remember, it's not just that this is an oscillation of a particular frequency; it's that it's a square wave you want to reproduce... If you say that the frequency of the aliasing is outside of human hearing, I think you're just talking theory. The organic sound of the square wave is pushing against the other waves and generating upper harmonics, and if it's not reproduced well, then... Maybe that's why I thought those vinyl jazz records sounded so good. But digital does a pretty good job, recording-wise, and that's not the only issue.
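Taking up the "do the math" invitation: an ideal square wave at fundamental f0 is a sum of odd harmonics (f0, 3·f0, 5·f0, ...) with amplitudes falling off as 1/k, so the question becomes how many of those harmonics fit under the Nyquist limit (half the sample rate). A quick count for a low, 100 Hz square wave (the 100 Hz choice is just my example):

```python
# An ideal square wave at f0 contains only odd harmonics k*f0
# (k = 1, 3, 5, ...) with amplitude proportional to 1/k. Any harmonic
# above the Nyquist frequency (fs / 2) cannot be represented and must
# be filtered out before sampling, or it will alias.

f0 = 100.0  # a low square wave, e.g. a bass-register synth note

for fs in (44100, 96000):
    nyquist = fs / 2
    ks = [k for k in range(1, 10001, 2) if k * f0 < nyquist]
    print(f"fs = {fs} Hz: {len(ks)} odd harmonics below Nyquist, "
          f"highest at {ks[-1] * f0:.0f} Hz (amplitude 1/{ks[-1]} of the fundamental)")
```

So at 44.1 kHz a 100 Hz square wave keeps 110 of its harmonics, and the first one lost is already down to roughly 1/221 of the fundamental's amplitude; a 96 kHz rate keeps 240. Whether that remainder is audible is exactly the point being debated in this thread.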

So I'm distinguishing between recording, where we hope, in vain IMO, that the soundscape is not being influenced by varying frequencies and by signals above/below the range of hearing, and the world of audio analysis and processing. But generating audio content algorithmically is a whole 'nother level. It should be common sense that different AND VARYING rates of wave cycles occurring in a signal affect one another. So the bottom line for me is that the ability to generate realistic sound phenomena in the 3D organic sound curvature that is output from the Axe FX should improve with increased resolution, and that would result in all manner of wonderful harmonic motion (not just 'set frequency').

Any two sound sources interact with one another in a soundscape, which is much like a seascape, and the more resolution used to represent that curvature the better for me! I wouldn't want to surf in a pond, or do I mean POD.
 
I got blacklisted because I said I wasn't worried about firmware 3.0 since I wouldn't be getting mine until it was on 5.0 :) I'm so sorry. I was wrong. It'll be on 4.0 hahaha Unless there are more delays ;) But I don't care as long as Cliff keeps up this awesome work. Can't wait to try this out.
 
Cliff's commitment to the AxeFx never ceases to amaze me. The sound quality and flexibility of the AxeFx, the rate and magnitude of the free firmware upgrades, and his very proactive involvement on this forum are simply breathtaking and unmatched in the audio industry - and anywhere else that I'm aware of, for that matter.
Being this productive and innovative, I sometimes picture him as a mad scientist - the thing is, he's not mad at all, just extremely clever and right on the money with his vision for the AxeFx and its progression.
The last couple of firmware updates have been released so soon after the previous ones and have introduced things like IR capturing that we (or at least I) never thought of before Cliff introduced them.
I'm certainly no specialist, nor very knowledgeable about sampling rates, DAWs, and all the intricacies that go into processing or recording audio, so I really welcome the input from those who are well versed in the subject.
Man do I look forward to getting the Axe II, the idea of hi-res amps is getting me all fired up :)
 
What was it they said on The Big Bang Theory about the mad scientist ... just one lab accident away from becoming a supervillain (or superhero, as the case may be)! SuperCliff to the rescue of bad modeling tone everywhere! And the same kudos to his henchmen, of course...
 
Cliff is da man... I wish other companies were like his. If Apple made free software upgrades to the iPhone on a regular basis by listening to the users... it would be even more amazing than it is. I still miss Swype, widgets, etc. on the iPhone. Can't have it all... but the Fractal method is as close to ideal and beyond as I've ever seen. Best bang for the $ I've EVER had!
 

If Apple had gone the way Wozniak wanted, software and updates would be free. But the bottom line is that this (and Apple) would not be so amazing if nobody paid. Cliff (in my eyes) does it right. He asks for a sustainable amount of money up front, and maintains and improves the product religiously (as it is, in all senses, his baby). He keeps version control strict, but listens avidly to what the people who actually USE the damn thing say... and desire, and endeavors to improve and satisfy the greatest number of users.

He's got a job he loves, and it shows.

Ron
 