I'm just going to give my opinion, as a non-scientist, non-programmer, and non-audio engineer; excuse me if I don't know what I'm talking about; I don't claim any special knowledge about audio. What I think some here may be missing is that we shouldn't be talking about frequencies when we talk about the output of the Axe Fx. We should be talking about perceived sound. That's different from frequencies.
Having talked to a family of violin and cello players from the last century, who told me that mono recordings were "better" than stereo recordings, I think I understand what the misunderstanding is. The problem with stereo was that it introduced cancellation in the signal. If you play back a mono recording into a rich soundscape, like a home with several rooms with various acoustically helpful surfaces, and compare it to a stereo recording of the same type of performance, there is a large difference.
The mono recording will "move" through the space whereas the stereo recording sounds more lifeless. Go a couple rooms over, and listen. CDs may be even worse, because of something about the dithering, but I sure didn't think they "opened up in the room", when compared to vinyl, with the particular CD player and turntable I had at the time.
What this points to for me is the difference between an "ideal" signal, for a given theory, and the ACTUAL signal. Case in point: if you already know the EXACT frequencies, or if you have perfectly fixed, linear frequencies, then all these conventions of sound theory might be 100% accurate. Waves, on the other hand, are potentially of infinite variability. An audio source can have frequency components that are high and low, but that also shift while the generation or recording is taking place. This is why cancellation, I'm guessing, caused the stereo recordings I tested to fizzle out rather than walk their way around our testing space. This makes sense to me.
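To make the cancellation point concrete, here's a tiny Python sketch (the sample rate and test frequency are arbitrary choices of mine, just for illustration): mix a tone with a phase-inverted copy of itself and it all but disappears. Misaligned stereo channels do a milder, frequency-dependent version of the same thing.

```python
import math

SAMPLE_RATE = 48_000  # assumed sample rate, just for illustration
FREQ = 440.0          # test tone; nothing special about 440 Hz here

def sine(freq, n_samples, phase=0.0):
    """A sine tone as a plain list of float samples."""
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE + phase)
            for i in range(n_samples)]

n = 1024
a = sine(FREQ, n)                 # "left channel"
b = sine(FREQ, n, phase=math.pi)  # same tone, phase-inverted

# Sum the two channels: the tone cancels almost completely.
mixed = [x + y for x, y in zip(a, b)]
peak = max(abs(s) for s in mixed)
print(f"peak of the tone by itself: {max(abs(s) for s in a):.3f}")
print(f"peak after summing the inverted copy: {peak:.3e}")  # effectively zero
```

In a real room the delay between sources varies with where you stand, so instead of total silence you get different frequencies cancelling at different spots, which could be the "fizzling out" I heard.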
Another aspect of the overall issue of "audio theory" is the difficulty of trying to track polyphonic audio material. If all this were so well understood and so easy to do with limited sampling, why have there been, from what I see, no breakthroughs in software conversion from audio to MIDI? Have the thread viewers tried the software that purports to do this, just taking a piece of music recorded on a single instrument, like a piano? The glitching and inaccuracies are pretty glaring, even though this should be a simple task, right? But it's not simple when one is talking about the ACTUAL audio. And even in my weak imagination as a non-programmer, making a judgment as to the exact, rhythmically correct onsets of a series of low-frequency signals, e.g. piano key-strikes, appears to require, IMO, a fairly large number of logic threads running simultaneously.
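For fun, here's a toy Python sketch of why the naive approach falls over on polyphony (the sample rate and note frequencies are numbers I picked so the tones land exactly on DFT bins; real transcription software is far more sophisticated than this): a simple "find the strongest frequency" detector nails a single note, but hand it a two-note chord and it still reports only one note.

```python
import math

SR = 8192   # sample rate chosen so the test frequencies land on exact bins
N = 512     # analysis window; bin width = SR / N = 16 Hz

def mix(components, n=N):
    """components: list of (freq_hz, amplitude) pairs mixed into one signal."""
    return [sum(a * math.sin(2 * math.pi * f * i / SR) for f, a in components)
            for i in range(n)]

def strongest_frequency(samples):
    """Naive DFT peak-pick: reports exactly ONE frequency (monophonic model)."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * SR / n

print(strongest_frequency(mix([(432.0, 1.0)])))                # single note: 432.0
print(strongest_frequency(mix([(432.0, 1.0), (688.0, 0.8)])))  # chord: still only 432.0
```

One peak-pick gives you one note; untangling several notes whose harmonics overlap, plus their onsets, is exactly the hard part.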
Similarly, when I try to guess what sample rate is needed to accurately digitally record an organic square wave, it seems that a low-frequency one would be cut off without a fairly high sample rate. Do the math and tell me I'm wrong. Remember, it's not the fact that this is an oscillation of a particular frequency, but the fact that it is a square wave, that you want to reproduce... If you say that the frequency of the aliasing is outside of human hearing, I think you're just talking theory. The organic sound of the square wave is pushing against the other waves and generating upper harmonics, and if it's not reproduced well, then... Maybe that's why I thought those vinyl jazz records sounded so good. But digital does a pretty good job, recording-wise, and that's not the only issue.
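Since I said "do the math", here's one back-of-envelope way to do it in Python (the 100 Hz fundamental and the sample rates are just examples I picked): an ideal square wave contains only the odd harmonics of its fundamental, at amplitudes proportional to 4/(pi*n), and a digital recording can only keep the harmonics below half the sample rate (the Nyquist limit).

```python
def harmonics_captured(f0, sample_rate):
    """Count the odd harmonics of an ideal square wave at f0 Hz
    that sit below the Nyquist frequency (sample_rate / 2)."""
    nyquist = sample_rate / 2
    count = 0
    n = 1
    while n * f0 < nyquist:
        count += 1
        n += 2  # a square wave contains only odd harmonics
    return count

for sr in (44_100, 96_000, 192_000):
    print(sr, harmonics_captured(100.0, sr))
```

By this count a 100 Hz square wave keeps 110 of its harmonics at 44.1 kHz and 240 at 96 kHz, so a higher sample rate does preserve more of the square-ness, even if everything above roughly 20 kHz is outside the textbook hearing range.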
So I'm distinguishing between recording, where we hope, IMO in vain, that the soundscape is not being influenced by variations in frequency and by signals above or below the range of hearing, and the world of audio analysis and processing. But in terms of generating audio content algorithmically, that's a whole 'nother level. It should be common sense that different AND VARYING rates of wave cycles occurring in a signal affect one another. So the bottom line for me is that the ability to generate realistic sound phenomena in the 3D organic sound curvature that is output from the Axe FX should improve relative to increased resolution, and that would result in all manner of wonderful harmonic motion (not just 'set frequency').
Any two sound sources interact with one another in a soundscape, which is much like a seascape, and the more resolution used to represent that curvature the better for me! I wouldn't want to surf in a pond, or do I mean POD.