Axe as audio interface... Flexible sample rate?

Omg Jason, I said I didn't want to debate and there you go, into debate land. This is exactly where it goes. One dude insisting you can prove scientifically what can be heard and can't and the other side insisting something else must be occurring because I can hear it.

I'm not debating this again. I'm merely stating what is true for me. You don't have to believe it. I ask you not to believe it. Please.

And yes, anyone concerned with recording real dynamic range of a full orchestra, something I've recorded many times, at 96k, sometimes while playing the Axe Fx simultaneously, would not be considering using the Axe Fx as their primary interface.

You misunderstood my reply. I'm not debating whether you can hear what you claim you can hear, nor have I ever claimed it was out of the realm of possibility that others can hear the difference, either. I was merely saying I don't think it qualifies as an art but rather an ability, and abilities can be tested. If you don't agree with the latter, that's your prerogative.
 
If my understanding of the tech is correct, there's no information coming out of the Axe above ~24kHz anyway, due to the filters used in A/D/A accounting for Nyquist-Shannon.

By sampling it at 96kHz you may be lifting the frequency at which noticeable sampling error occurs up out of audibility... but 48kHz already did that.

I don't doubt how you feel about the results you get, but I suspect that capturing the A/D/A'd signal from the Axe at 192kHz or even higher couldn't possibly capture a more authentic signal than simply taking the digital out and skipping the two extra conversions.
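To make that concrete, here's a small numerical sketch (NumPy assumed; the 18 kHz tone, duration, and grid density are arbitrary choices for illustration, not anything measured from the Axe): a tone inside the audio band, sampled at 48 kHz, can be reconstructed essentially perfectly by ideal (sinc) interpolation, so a 96 or 192 kHz re-capture of that band-limited output has nothing extra to grab.

```python
import numpy as np

fs = 48_000           # Axe-Fx native rate
f0 = 18_000           # test tone, well inside the audio band
dur = 0.01            # 10 ms

# "Analog" reference on a 16x finer grid, standing in for continuous time
t_fine = np.arange(0, dur, 1 / (fs * 16))
analog = np.sin(2 * np.pi * f0 * t_fine)

# Capture at 48 kHz
t_n = np.arange(0, dur, 1 / fs)
samples = np.sin(2 * np.pi * f0 * t_n)

# Ideal (sinc) reconstruction back onto the fine grid
recon = sum(s * np.sinc((t_fine - tk) * fs) for tk, s in zip(t_n, samples))

# Compare away from the edges (a finite-length sinc sum has edge error)
mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)
err = np.max(np.abs(recon[mid] - analog[mid]))
print(f"max interior reconstruction error: {err:.5f}")
```

The interior error is tiny, and it shrinks further as the capture gets longer; the 48 kHz samples already carry everything a band-limited signal has to give.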

In post #25 (above) he mentioned, "(the) Axe Fx is 48k. It's not going to upsample to 96k. But I'm not talking about guitar only here guys. I record drums and bass and vocals and sax and keyboards. It's not a guitar only world."
 
Sure, if he's sampling them directly then that may be legitimate (although he'd still be limited by the capturing device's parameters, like the microphone).

I'm only talking about the Axe though :p
 
You can only capture what's there - if the sample rate is 192kHz but the microphone feeding it only captures up to 20kHz, then you ain't sampling much above 20kHz.

Despite their specs, a lot of microphones can capture audio above 22kHz. Granted, with a mic like an SM57, for example, the output above 22kHz might be useless, nasty garbage but it's there.
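A quick way to see the "you can only capture what's there" point numerically (NumPy assumed; the tone frequencies are made-up stand-ins for a mic's output): if whatever feeds the converter is band-limited to 20 kHz, a 192 kHz capture has essentially zero spectral energy above that, no matter how high the sample rate goes.

```python
import numpy as np

fs = 192_000                      # converter sample rate
t = np.arange(0, 0.05, 1 / fs)    # 50 ms

# Stand-in for a mic that rolls off above 20 kHz: the signal reaching the
# converter only contains content below that frequency.
mic_signal = sum(np.sin(2 * np.pi * f * t) for f in (1_000, 7_500, 19_000))

spec = np.abs(np.fft.rfft(mic_signal))
freqs = np.fft.rfftfreq(len(mic_signal), 1 / fs)

in_band = spec[freqs <= 20_000].sum()
above = spec[freqs > 20_000].sum()
print(f"spectral energy <=20 kHz: {in_band:.1f}, >20 kHz: {above:.2g}")
```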
 
Sorry to go way off-topic, but it's an interesting point to have your DAW handle different audio rate conversion on the fly. I've been using DDMF programs these last two days to pull sounds from various sources online. It lets me grab stuff and send it between DAWs, or between a DAW and YouTube. It uses DirectSound, so I need to ignore what I hear and just be glad the recorded files from it are okay. But the point is that the DAWs I'm using are okay with grabbing such things despite them being at a different rate (DDMF uses 44,100 internally, or did when their manual was written), even when using the RME Fireface 400 set at 48k. For instance, I could set my Fireface 400 to 48k even though DDMF claims to be converting everything back and forth from 44,100.

(I tried VB-Audio Voicemeeter and its ASIO, but Windows 7 seemed to have a big issue with replacing my "preferred" playback and recording devices, so I gave up for now.)

Basically, there comes a point where I'm in over my head. For instance, I don't think running different DAWs simultaneously, each via ASIO4ALL, allows the sending of signals between them with the DAWs set at different rates (which DDMF allows using its MS Direct Wave, lol). A very seldom-used setup. But really I'm just trying to get in my head what is possible and what the repercussions are.

Now, since the non-ASIO stuff has always been pure garbage to me, this has reminded me that I'm ignorant about something important, which is: HOW do DAW programming choices (i.e. what the DAW does in its massively steep-foreheaded little brain) affect the interfacing with drivers such as ASIO4ALL, or the dedicated drivers of hardware like RME or Focusrite?

For me, the most interesting thing is the difference in fidelity I'm hearing between Ableton Live and other DAWs. To my knowledge the parameters I have access to are beside the point here. There is something Ableton does (at least Live Intro, which I have, but I seem to recall Live 8 was this way as well) that I don't like in terms of coloration, whereas the three other DAWs I have access to don't have this coloration.

In soft synths, with a high fidelity audio card to process them, Ableton seems to weaken the very high frequencies relative to the high mids. This is only noticeable with a high fidelity sound card like RME, and is stuff that would not be AS audible at 44,100. But it is audible to me and another musician, at least with the sound card processing at 48k.

My conclusion on the sample rate issues was that, in terms of recorded audio, beyond 48k I really could not hear a difference in terms of a better/worse characteristic (although I may sometimes barely hear a difference - it doesn't strike me, with my hearing capacity, as noticeably "higher quality", just slightly more 'delicate' - I could probably only tell the difference, A/B'ing correctly, a bit over 50% of the time).

But in terms of real-time processing of some of the new VST3 synths that put out some really high-end sound variations, in a DAW running at 48k I could hear the difference between Ableton and the other DAWs (all running at 48k), IF I had a quiet room. I could also convince another person, more of an audio engineer, that this was true. So I think sometimes it's the interpolating and handling discussed above, when a soundcard and a DAW are doing their funny stuff, that comes into play. You can go online and see arguments in favor of there being no audible difference between DAWs given reasonably the same parameters and settings. These arguments never address this throughput (not recorded sound) issue.

And when the original Ultra had its internal sample rate upped (probably something that did affect the sound heard or Cliff wouldn't have bothered, and it was prior even to having the audio interface) I believe I could hear that as an improvement as well.

1) In FL Studio and Reaper the highest resampling quality offered is 512-point and 768-point sinc interpolation, respectively. Does this resampling setting only affect mixdown? Or is there some resampling going on with respect to various VST synths?

2) Reaper also has track mixing bit depth, which you can raise to 64 bit. Again, I don't really understand if this is strictly a matter of what is done when the sound is recorded to a file, or if it might apply to audible output from a VST synth.

I would be curious if anyone knows if those two above settings can affect audible output from a VST synth.
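I can't speak for Reaper's actual engine, but the general idea behind a 64-bit mix bus can be sketched like this (NumPy assumed; the track count and levels are made up): sum the same 32-bit tracks on a 32-bit bus and a 64-bit bus and see how far apart the results land.

```python
import numpy as np

rng = np.random.default_rng(1)

# 64 tracks of one second of 32-bit float "audio" at 48 kHz
tracks = (rng.standard_normal((64, 48_000)) * 0.1).astype(np.float32)

mix32 = tracks.sum(axis=0, dtype=np.float32)   # 32-bit mix bus
mix64 = tracks.sum(axis=0, dtype=np.float64)   # 64-bit mix bus

# Worst-case deviation between the two busses
diff = np.max(np.abs(mix32.astype(np.float64) - mix64))
peak = np.max(np.abs(mix64))
print(f"peak level: {peak:.3f}, worst bus difference: {diff:.2e}")
```

The difference sits far below the noise floor of any converter, which is why the setting mostly matters for headroom and accumulated processing, not for anything you'd hear directly off a synth.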

This has bothered me about Ableton, and they did not respond when I sent the question to support.
 

Are you using ASIO4ALL in Ableton?
 
No, it never seemed to do as well in terms of latency and crackles as the Fireface driver (mostly from experience in Reaper), so the only reason I use ASIO4ALL is to test my ability to use multiple soundcards at once (in case my tracks ever get that complicated, lol). That part of ASIO4ALL is really cool. But I like the way I can get very low latency and no crackles or artifacts with the Fireface driver.

I want to be clear: my issue with Ableton is really only due to an obsession with synthesizers that put out ridiculous harmonics. I'm pretty sure there are very few guitar sounds that would be impacted.

Some guy at a company called Quik Quak (David J. Hoskins) made a little synth quite a while ago that a guy named Luftrum did some sounds for. I hadn't heard about it, and there's not been much development of it, but it's fairly unique in its approach. Glass Viper is what it's called, and it uses a mixture of custom-drawn waveforms that modulate via graphical control points.

"...Glass Viper is a synthesizer with unique waveform shaping, which has a deep and natural sense of movement. Going beyond analogue simulation, into a truly organic sound, from simple old synths to grungy filthy basses, or delicate pianos to strange unnatural film effects.

Instead of taking a sample or oscillator and applying just filters and FX techniques, Glass Viper bends the actual shape of its waveforms through a series of moving control points. Up to four of these swirling and changing sounds can be layered together to create a huge range of instruments. Glass Viper allows you to really shake things up with a deep, natural sense of movement..."


And since Luftrum doesn't mess around, I got a demo of that synth. That was what made me notice this issue. Simultaneously, I had just gotten really into higher-end EQ VSTs, and was filtering off all but the very top end of the sounds coming out of it.

It could be more of a routing issue than I'm aware of. I could be failing to reproduce the combination of fader settings. If someone wanted to experience this without synths, get the sound of a ride cymbal and route it to a few tracks, with some slight Enhancer type delays. The shimmer in my other DAWs is a little more 3D than in Ableton.
 
You can only capture what's there - if the sample rate is 192kHz but the microphone feeding it only captures up to 20kHz, then you ain't sampling much above 20kHz.
I just have to comment on this. Whether you are sampling at 48 kHz or 96 kHz has nothing to do with whether you can record pitches higher than 20 kHz. It has to do with the amount of harmonic content you pick up and the ability to regenerate the incoming waveform.

If you record a 20 kHz tone at 48 kHz, you get about two samples per period (one full cycle of the wave). So when the digital signal has to be converted back to the analog domain, the converter can only assume the incoming signal was a perfect sine wave, which may or may not be the case. At 96 kHz, the A/D converter picks up four samples (almost five) per period, which gives the D/A converter much more information to work with when regenerating the analog signal.
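For what it's worth, the samples-per-period arithmetic checks out (plain Python; note, though, that the sampling theorem says a signal known to be band-limited below Nyquist is reconstructed uniquely no matter how few samples per period there are - real converters only approximate that ideal):

```python
# Samples captured per period of a tone at a few sample rates
for fs in (48_000, 96_000):
    for f0 in (15_000, 20_000):
        print(f"{f0/1000:g} kHz tone at {fs/1000:g} kHz: "
              f"{fs / f0:.1f} samples per period")
```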

All that said, I still record at 48 kHz, because I cannot discern the difference.
 

If you're hearing differences during real-time playback, I can't think of a good reason there should be any discrepancies in sound quality aside from the way you have your DAWs configured or the way you have your virtual mixer set up, but maybe someone else can.
 
Somewhat, but I don't see how it really relates here.

While the sampling rate does pertain to the number of samples captured per second, it also sets the limit on the frequencies a system can capture. In a nutshell, the sampling rate has to be at least twice as high as the highest frequency to be captured. Thus, a sampling rate of 96 kHz is capable of capturing frequencies up to 48 kHz.
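That rule of thumb as a trivial snippet (plain Python, just the arithmetic):

```python
def nyquist_limit(sample_rate_hz: float) -> float:
    """Highest frequency a sample rate can represent without aliasing."""
    return sample_rate_hz / 2

for fs in (44_100, 48_000, 96_000, 192_000):
    print(f"{fs/1000:g} kHz sampling captures content up to "
          f"{nyquist_limit(fs)/1000:g} kHz")
```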
 
That is true but somewhat irrelevant, since none of us can really hear anything above 20 kHz.

However, I believe we might be able to hear the difference between a 15 kHz sine wave and a sawtooth wave at the same frequency. But after AD/DA conversion at 48 kHz, they may be close to indistinguishable.
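That's easy to sanity-check numerically (NumPy assumed; the Fourier-series sawtooth is an idealization of what an anti-alias filter leaves behind): at 15 kHz, every sawtooth harmonic except the fundamental lies above the 24 kHz Nyquist limit of a 48 kHz system, so the band-limited saw literally collapses into a sine, while a lower-pitched saw still keeps several harmonics.

```python
import numpy as np

fs = 48_000
nyq = fs / 2
t = np.arange(0, 0.01, 1 / fs)

def bandlimited_saw(f0):
    """Sawtooth as a Fourier series, keeping only harmonics below Nyquist
    (roughly what survives AD/DA at 48 kHz)."""
    ks = [k for k in range(1, 100) if k * f0 < nyq]
    return ks, sum((1 / k) * np.sin(2 * np.pi * k * f0 * t) for k in ks)

ks_15k, saw_15k = bandlimited_saw(15_000)
sine_15k = np.sin(2 * np.pi * 15_000 * t)
print("15 kHz saw keeps harmonics:", ks_15k)       # only the fundamental
print("max |saw - sine|:", np.max(np.abs(saw_15k - sine_15k)))

ks_3k, _ = bandlimited_saw(3_000)
print("3 kHz saw keeps", len(ks_3k), "harmonics")  # still sounds saw-like
```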
 

It's not about what you can hear in the extended frequencies. It's about taking unwanted noise and moving as much of it as possible above the audible frequency range. Once downsampled, some of the higher frequency noise is filtered out. That's the idea anyway.
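A toy version of that idea (NumPy assumed; the FFT "brick wall" stands in for a real anti-alias/decimation filter): broadband noise captured at 96 kHz loses the half of its power that sat above 24 kHz when it's filtered and downsampled to 48 kHz.

```python
import numpy as np

rng = np.random.default_rng(0)
fs_hi = 96_000
n = 1 << 14
noise = rng.standard_normal(n)          # broadband "conversion noise"

# Idealized anti-alias filter: zero everything above the target Nyquist
# (24 kHz) in the frequency domain, then keep every 2nd sample.
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, 1 / fs_hi)
spec[freqs > 24_000] = 0                # high-frequency noise discarded here
filtered = np.fft.irfft(spec, n)
downsampled = filtered[::2]             # now effectively at 48 kHz

print("noise power before:", np.var(noise))
print("noise power after :", np.var(downsampled))
```

Roughly half the noise power disappears, because half of the 96 kHz band lay above the audible range; that's the "push the noise up, then filter it off" idea in miniature.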
 
This subject reminds me of a funny cartoon I saw recently.

It traced a signal chain, beginning with a $250,000 Les Paul, through a $10,000 mic, into a $100,000 console in a million-dollar studio... down to a pair of earbuds worth $1.

It's no different today than when I was young. I enjoyed lots of great-sounding albums on my own 45 RPM record player. Later I had a Radio Shack cassette player.

But boy did I enjoy the music!
 

Ain't it the truth.
 

Hey brother, I think I have the answer for you. :)

I still feel the majority of it is hype, but there ARE DAWs that handle things differently. I actually beta test for a few DAW companies. When people say "this sounds different in this DAW than that one," there are a few things that come to mind.

Some DAWs literally have things going on behind the scenes that may alter the sound. If you are not watching closely what is going on, you will definitely hear a difference because there IS one. But it's *usually* not due to drivers or the interface, etc. I'll give you a few examples...

DAW sound differences: Some DAW software has EQ, console emulation, and other goodies running on their tracks. Sometimes they are disabled by default, other times enabled. Even when enabled but unaltered, the signal is still passing through these things. So there is always a possibility that something can sound different and your ears aren't playing tricks on you.

Pan Laws: Some DAW software has pan law control. This is HUGE for altering sound because it raises or lowers volume based on the pans used. Cakewalk SONAR has this option and it has surprised many people. They record with one pan law setting, update to a new version of SONAR that has different pan laws, and they wonder why their material may sound different.
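For anyone who hasn't run into pan laws, here's a rough sketch of the difference (plain Python; the exact curves vary per DAW - this is just the common equal-power formulation plus an optional unity-center renormalization, not SONAR's actual implementation):

```python
import math

def pan_gains(pan: float, law: str = "-3dB"):
    """Left/right gains for a pan position in [-1, 1] (hard L to hard R).

    law="-3dB": equal-power pan; a centered signal sits 3 dB down per side.
    law="0dB":  same curve renormalized so center passes at unity gain --
                the kind of default-setting difference that changes a mix.
    """
    theta = (pan + 1) * math.pi / 4           # 0 .. pi/2
    left, right = math.cos(theta), math.sin(theta)
    if law == "0dB":
        scale = 1 / math.cos(math.pi / 4)     # undo the -3 dB at center
        left, right = min(1.0, left * scale), min(1.0, right * scale)
    return left, right

for law in ("-3dB", "0dB"):
    l, r = pan_gains(0.0, law)
    print(f"{law} pan law, centered: L={l:.3f} R={r:.3f}")
```

A centered track comes out about 3 dB quieter under one law than the other, which is exactly the kind of level shift that makes an old project "sound different" after an update changes the default.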

Remember, volume will always sell you. Even with plugin demonstrations. A good example here: just about all the UAD plugin demos suck. Whoever they hired to do that stuff, they should hire me. I'll do one pass for free to show how good I can make their shit sound... then they owe me big time. :) The problem with their current demos? They always boost volume.

When volume is boosted, several new things come into play that your ears and brain are surprised about. One, is volume shock value. Two is when things are louder, you hear more frequencies. When people mix albums too low, they miss certain frequencies that are not as audible. This is why there are guidelines to mixing volumes.

The above said, if one DAW versus another exercises some sort of built-in limiting or has some plugin enabled, etc., you are going to notice "something". So you're not totally going crazy.

The other thing, which I will never agree with anyone on no matter who they are: there is a placebo effect, as well as hype that many are buying into. High-end interfaces usually maintain all sample rates. You may notice little things here and there... but nothing will stick out blatantly. In the case of some VSTis, some people say they can hear differences when they are rendered, but can't hear anything when the synth is playing in real time. Craig Anderton, who has written many books and is also a good audio engineer, has said countless times that he can hear differences in VSTi modules once they are printed and rendered.

Me personally, I've never heard it. I've got some really good stuff. In my other studio we even have the legendary Apogee. I don't hear differences with the higher sample rates. You're going to hear it if you are not recording sonic instruments. Those recording jazz, orchestral or anything that isn't sonic are probably going to notice a difference. The more sonic stuff you record, the more you are actually degrading the sound.

You metal heads are probably not going to hear higher sample rate differences unless you have a cheaper interface, or a DAW that is coloring the sound... or even an interface that is coloring the sound. Interfaces like the OctoPre and others have compression, limiting, EQ possibilities... the list is endless. Guys like me who think as I do are just trying to tell you guys who may be hobby guys: don't sweat this stuff. If you are a pro and think these huge sample rates make a difference, run your business how you see fit. :)

I like to consider myself a serious musician and engineer. I got a pretty awesome client list and have worked with some killer people. I try not to name drop because...well, there's no reason to ever be a dick and try to win an argument that way. It doesn't matter anyway. Admin M@ summed it up perfectly. I actually just posted something nearly identical to his on a recording forum where people are so worried about stuff they shouldn't be worried about...they miss the obvious. That's a discussion for a different time.

ASIO4ALL: It's just a wrapper driver that presents an ASIO interface on top of the built-in Windows audio driver, and it got popular helping with the ASIO latency issues Windows Vista had when it first came out. I still use it to this day on all my little test computers that run Realtek sound cards. I've never had a problem with ASIO4ALL on XP, Vista, or Win 7. It basically gives your cheap interface ASIO instead of MME.

Running ASIO4ALL with the Fireface is definitely not recommended, but I can understand why you did it. The rule of thumb is: any interface that has its own ASIO driver, you should always run that. I've never seen a case where ASIO4ALL worked better than the actual driver for an interface. Now, if it was a faulty driver... there's a possibility. But you should always stick with the manufacturer's driver for anything ASIO unless instructed to do otherwise.

Anyway, at the end of the day....some people will hear a difference in this stuff...some will not, and some will THINK they do based on spending obnoxious amounts of money...so they feel "I know it has to be different". Whatever works best is what you use. However, never be naive to the fact that it is WAY too easy to buy into the hype we have in this industry. Seriously...some of it is out of control and actually quite sad.
 