AFIII To All The Pundits


back to school for me, let's do this!
 
Reverb is linear and time-invariant (LTI), which means it commutes with other LTI processing. In other words, you can put EQ before or after it and it will sound the same. By definition, it doesn't add harmonics or overtones. Now, our reverb algorithms aren't exactly LTI because they have modulation, but they are "wide-sense stationary," which means that for all intents and purposes you can treat them as linear.
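The "EQ before or after" claim can be sanity-checked numerically. A minimal sketch in Python, using made-up toy FIR kernels (not real EQ or reverb designs) to stand in for the two LTI processes:

```python
# Sketch: LTI processes commute. Applying a toy "EQ" before or after a toy
# "reverb" (both plain convolutions) gives the same output, sample for sample.
# Kernel values are made up for the demo.

def convolve(x, h):
    """Direct-form convolution: y[n] = sum_k h[k] * x[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

eq     = [1.0, -0.5]            # toy EQ (a crude FIR tilt)
reverb = [1.0, 0.0, 0.6, 0.3]   # toy "reverb" tail (a few early reflections)
signal = [0.0, 1.0, 0.0, -0.25, 0.5]

eq_first     = convolve(convolve(signal, eq), reverb)
reverb_first = convolve(convolve(signal, reverb), eq)

assert all(abs(a - b) < 1e-12 for a, b in zip(eq_first, reverb_first))
```

The assertion passes because convolution is associative and commutative; the same holds for any pair of LTI processes, which is why the processing order doesn't change the sound.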
 
I'm here for more beatings.

my brain says you can make resonance by emphasizing certain frequencies; my brain says when you mix certain frequencies they make overtones (new harmonic content); so my brain is unsatisfied when I learn that reverb, which can emphasize and prolong certain frequencies, does not in any way add any resonance or overtones to what was there before.

overtone: a musical tone that is a part of the harmonic series above a fundamental note and may be heard with it.
resonance: the reinforcement or prolongation of sound by reflection from a surface or by the synchronous vibration of a neighboring object.

Is there a difference between natural reverberation and "reverb" as defined in the signal-processing world?
 
...my brain says you can make resonance by emphasizing certain frequencies...
Resonance is a natural property of physical objects (including rooms that reverberate) and many electronic networks. Resonance is essentially an increased "willingness" to vibrate at certain frequencies. Depending on the particular resonant object or circuit, the resonance may or may not cause a sound to be prolonged.
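That "increased willingness to vibrate at certain frequencies" is just a peak in a linear filter's magnitude response. A minimal sketch with a toy two-pole resonator (the pole radius and center frequency are made-up values):

```python
import cmath, math

# Sketch: resonance as a peak in a linear filter's gain curve.
# Toy two-pole resonator: H(z) = 1 / (1 - 2 r cos(w0) z^-1 + r^2 z^-2)
r, w0 = 0.98, 0.25 * math.pi   # pole radius and resonant frequency (made up)

def gain(w):
    """Magnitude response |H(e^jw)| at normalized frequency w."""
    z = cmath.exp(1j * w)
    return abs(1 / (1 - 2 * r * math.cos(w0) / z + r**2 / z**2))

# Big boost at the resonant frequency, none elsewhere; no new frequencies
# are created, existing ones are just amplified (and prolonged, by the poles).
assert gain(w0) > 10 * gain(0.6 * math.pi)
```

Note that the boost and the prolongation come from the same poles: the closer `r` gets to 1, the sharper the peak and the longer the ring-out, which matches the "may or may not prolong the sound" point above.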


...my brain says when you mix certain frequencies they make overtones (new harmonic content)...
You're confusing overtones with intermodulation. Overtones are harmonics of the fundamental frequency of the note. They exist in the original signal. They're what makes middle C on a clarinet sound different from middle C on a violin. Overtones will only be added to the signal if the system is nonlinear (if there is distortion). We call that "harmonic distortion."

Intermodulation is what happens when you mix different frequencies in a nonlinear (distorted) way. We call that "intermodulation distortion." You get new frequencies that are the sums and differences of the original frequencies. But in a linear system, you don't get intermodulation, so you don't get any new frequencies.
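The sum-and-difference behavior is easy to demonstrate numerically. A minimal sketch, using a made-up memoryless nonlinearity on a two-tone signal and checking DFT bins directly:

```python
import math

# Sketch: a nonlinearity (y = x + 0.5 x^2, curve made up for the demo) applied
# to a two-tone signal creates sum and difference frequencies; a linear gain
# applied to the same signal does not.
N = 1024
f1, f2 = 60, 170   # tone frequencies as DFT bin numbers (so products land on bins)
x = [math.sin(2 * math.pi * f1 * n / N) + math.sin(2 * math.pi * f2 * n / N)
     for n in range(N)]

def bin_mag(sig, k):
    """Magnitude of DFT bin k, computed directly."""
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(sig))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(sig))
    return math.hypot(re, im)

linear    = [2.0 * s for s in x]          # linear system: just gain
nonlinear = [s + 0.5 * s * s for s in x]  # nonlinear system: adds an x^2 term

# The sum (f1+f2) and difference (f2-f1) tones appear only after the nonlinearity.
assert bin_mag(nonlinear, f1 + f2) > 50 * bin_mag(linear, f1 + f2)
assert bin_mag(nonlinear, f2 - f1) > 50 * bin_mag(linear, f2 - f1)
```

The x² term turns the product of the two sines into cos(f2-f1) and cos(f1+f2) components, which is exactly the "sums and differences" described above.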


...so my brain is unsatisfied when I learn that reverb, which can emphasize and prolong certain frequencies, does not in any way add any resonance or overtones to what was there before.
Yes, most reverb, whether natural or generated artificially, has resonances. But resonances don't generate any new frequencies — they're just an increased willingness to vibrate at frequencies that are already present in the signal.

Overtones? See above: Overtones aren't generated by linear systems, and reverb is linear.
 
We call that "intermodulation distortion." You get new frequencies that are the sums and differences of the original frequencies. But in a linear system, you don't get intermodulation, so you don't get any new frequencies.

1. Does intermodulation occur naturally when you record an amp in a room?
2. Does the cab room parameter create intermodulation?

our reverb algorithms aren't exactly LTI because they have modulation

When I turn on the room parameter and focus the stereo image into phase, it feels like it does more than create a stereo picture; it feels like there is movement in the stereo image.
 
1. Does intermodulation occur naturally when you record an amp in a room?
2. Does the cab room parameter create intermodulation?
No, because:

...in a linear system, you don't get intermodulation...
And:
...reverb is linear.




When I turn on the room parameter and focus the stereo image into phase, it feels like it does more than create a stereo picture; it feels like there is movement in the stereo image.
What does it mean to "focus the stereo image into phase?"
 
What does it mean to "focus the stereo image into phase?"

Adjust mic spacing and room size until you can hear the low-end nodes line up and come into phase, because they are given the space to resonate past 180 ms, to the point where your ear can pick up on the fundamental resonance and your mouth involuntarily says #$&@+%®©.
 
So what IS the typical decay time for the type of treated studio room you'd normally record a guitar amp (or shoot an IR) in?
 
because they are given the space to resonate past 180ms

I have an update: it's not a matter of giving the bass time to 'resonate' because it doesn't have enough room. It's adjusting the phase of the unicorn ears so that the point where the audio gets sourced out of la-la land into your monitoring system lands at the right spot in the wavelength to catch the full force of the low register, so your monitoring system has a full fundamental to push and you can hear it better. It's just a phase issue.

You know what I think the movement is in a real recording? The air pressure from the speaker. The movement of the earth. The AC air current in the room. The seismic plates under the building. The garbage truck outside. Those things move the mic back and forth, changing the phase by microns, but constantly. That's your intermodulation distortion that's not in an IR.
 

Attachments

  • 59 Bassguy (bypassvalve).syx
  • 65 Bassguy (bypassvalve).syx
  • Brownface (bypassvalve).syx
  • Deluxe Verb (bypassvalve).syx
  • Double Verb (bypassvalve).syx
You know, I'm starting to think 180 ms is way too long. Not really too long, but it could be way shorter. There are already a lot of room reflections in 180 ms, but not the kind you want; you want the reflections to be non-linear like they really are. A 90 ms IR and a more fine-tunable early-reflections control would be able to get in there really small and land right on the zero crossing. The room size parameter's 0.01 resolution is still a pretty big step from 0.01 to 0.02.
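For what it's worth, the IR-length trade-off is easy to put numbers on. A back-of-envelope sketch, assuming a 48 kHz sample rate (the rate and the IR lengths are assumptions, not specs from the product):

```python
# Sketch: IR length vs. spectral detail, assuming a 48 kHz sample rate.
# An IR that is T seconds long cannot encode spectral detail finer than ~1/T Hz,
# so halving the IR length coarsens the low-frequency picture.
fs = 48_000                           # assumed sample rate

for t_ms in (180, 90):
    t = t_ms / 1000
    samples = round(fs * t)           # IR length in samples
    resolution_hz = 1 / t             # coarsest resolvable spectral detail
    print(f"{t_ms} ms -> {samples} samples, ~{resolution_hz:.1f} Hz resolution")
```

So a 90 ms IR is half the data of a 180 ms one, but its frequency resolution is roughly twice as coarse, which matters most exactly where the thread is focused: the low end.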

 

Attachments

  • Dual Rec 2 Ch (bypassvalve).syx
Nonlinear reflections = alternate universe. Just sayin’. ;)

Actually, I’m running out of ways to say it. Maybe I’ll just stop. Now where’s that blue pill...?

I understand what you mean about reflections themselves being linear, but my point is about the microphone itself and what it picks up in real life. Anything that moves the mic affects the phase, so you get the slightest shift in phase in what the mic picks up next. (The same goes for your ears as you stand and listen to something: as you move, the phase shifts, and that gives you the perception of space, depth, motion, reality.) The phase is changing because of the mic movement, even if it's just the diaphragm, slightly but constantly. Is that not modulation, distortion, or a combination of the two, which individually or combined are non-linear? A state of constant flux, whereas an IR has a total of 180 ms to represent that, and it never moves; it's a fixed, static filter, linear. A recording is not linear, due to its inconsistencies, and that's what's missing from an IR on its own: the non-linearity. Just like with the power-amp work, there is more non-linearity to be calculated, in a different part of the recording chain. Amps are done. IRs are done. Now you just need to be able to tweak the phase of an IR down at the sample level to line it up with a good zero crossing, and all the other frequencies will line up perfectly and auto-EQ and balance themselves from sub-lows to dog whistle once it's in phase with your monitoring-system speakers.

Picture this: a waveform coming from a source, and a microphone picking it up. As the microphone picks up the waveform, other things (reality) move the mic, changing its phase. The waveform from the source is altered by the phase shift. Is that waveform still linear? Because it got... distorted? Modulated? Changed. You know the Doppler effect changes pitch; that's actual modulation. And that's happening to the whole frequency spread of the amp, constantly, ever so slightly, as it comes at the mic, especially if it's loud as balls.
 
I understand what you mean about reflections themselves being linear, but my point is about the microphone itself and what it picks up in real life. Anything that moves the mic affects the phase, so you get the slightest shift in phase in what the mic picks up next. (The same goes for your ears as you stand and listen to something: as you move, the phase shifts, and that gives you the perception of space, depth, motion, reality.) The phase is changing because of the mic movement, even if it's just the diaphragm, slightly but constantly. Is that not modulation, distortion, or a combination of the two, which individually or combined are non-linear? A state of constant flux, whereas an IR has a total of 180 ms to represent that, and it never moves; it's a fixed, static filter, linear. A recording is not linear, due to its inconsistencies, and that's what's missing from an IR on its own: the non-linearity. Just like with the power-amp work, there is more non-linearity to be calculated, in a different part of the recording chain. Amps are done. IRs are done. Now you just need to be able to tweak the phase of an IR down at the sample level to line it up with a good zero crossing, and all the other frequencies will line up perfectly and auto-EQ and balance themselves from sub-lows to dog whistle once it's in phase with your monitoring-system speakers.

Picture this: a waveform coming from a source, and a microphone picking it up. As the microphone picks up the waveform, other things (reality) move the mic, changing its phase. The waveform from the source is altered by the phase shift. Is that waveform still linear? Because it got... distorted? Modulated? Changed. You know the Doppler effect changes pitch; that's actual modulation. And that's happening to the whole frequency spread of the amp, constantly, ever so slightly, as it comes at the mic, especially if it's loud as balls.
There are so many misconceptions in this. So many unrelated or semi-related concepts jumbled together into a single sentence or two. So many mid-sentence shifts of topic. I literally don’t know where to start to sort it out for you.

Reflections aren’t nonlinear. Neither are phase shifts or modulation or “microphone movement.”
 
There are so many misconceptions in this. So many unrelated or semi-related concepts jumbled together into a single sentence or two. So many mid-sentence shifts of topic. I literally don’t know where to start to sort it out for you.

Reflections aren’t nonlinear. Neither are phase shifts or modulation or “microphone movement.”

Modulation is linear now?? I'm interested in pursuing this rabbit hole to its end. And what's with the paragraph devoted solely to ripping someone's words apart? That sounds like textbook overwhelmed-parenting 101. It's not even necessary on this forum. You have a lot of experience in audio, yes?
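On the "modulation is linear" point: linearity means superposition, and a time-varying (modulated) delay passes that test even though it isn't time-invariant. A minimal sketch with a made-up wobbling integer delay:

```python
import math

# Sketch: a time-varying (modulated) delay is still linear. Superposition holds:
# delaying (a + b) equals delaying a and b separately and summing the results.
# The wobbling integer delay pattern is made up for the demo.
delay = [3, 4, 5, 4, 3, 2, 3, 4]   # delay in samples, changing over time

def mod_delay(x):
    """Apply the time-varying delay; samples before the start read as silence."""
    out = []
    for n in range(len(x)):
        d = delay[n % len(delay)]
        out.append(x[n - d] if n - d >= 0 else 0.0)
    return out

a = [math.sin(0.3 * n) for n in range(64)]
b = [math.cos(0.7 * n) for n in range(64)]

together = mod_delay([ai + bi for ai, bi in zip(a, b)])
separate = [ya + yb for ya, yb in zip(mod_delay(a), mod_delay(b))]
assert all(abs(p - q) < 1e-12 for p, q in zip(together, separate))
```

That a modulated delay is linear doesn't contradict the Doppler observation upthread: linear-but-time-varying systems can shift pitch and create sidebands while still obeying superposition, which is why "linear" and "adds no distortion products" are only the same thing for time-invariant systems.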
 
I was mistaking the infinitely variable effect of comb filtering on a sound source for a non-linearity, thinking that because it can never be the same, that makes it non-linear. Nope, I'm a dummy: if everything is holding still in science land, it will be linear no matter how many times it got combed. In science land stuff is supposed to hold still, so the phase would only move if other stuff moved. When I say "reflections are mixing and swirling," that's idiot-speak for phase cancellation, comb filtering. Comb filtering is what's missing from an IR compared to a mic recording, or just a differing amount of comb filtering, or in certain key places... at the start and end points... I keep digging. OK, leaving now. FRACTAL ROCKS. :grin:
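Comb filtering is indeed a textbook linear effect: a direct path plus one delayed reflection. A minimal sketch of its frequency response (the reflection gain and delay values are made up):

```python
import cmath, math

# Sketch: direct path plus one delayed reflection, y[n] = x[n] + g * x[n-d],
# is a comb filter. It is perfectly linear; the notches come from the delayed
# copy arriving out of phase. Gain g and delay d are made-up demo values.
g, d = 0.9, 8                        # reflection gain and delay in samples

def comb_gain(w):
    """Magnitude response |1 + g e^{-jwd}| at normalized frequency w."""
    return abs(1 + g * cmath.exp(-1j * w * d))

notch = math.pi / d                  # first notch: reflection 180 degrees out of phase
peak  = 2 * math.pi / d              # first peak: reflection back in phase
assert comb_gain(notch) < 0.2        # deep cancellation, roughly |1 - g|
assert comb_gain(peak)  > 1.8        # reinforcement, roughly |1 + g|
```

Every distinct mic position gives a different set of notches, which is why two recordings never sound identical, yet each one is still the output of a linear filter, exactly as argued upthread.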

 