DynaCab vs DynaCab HD

@jakel that's not correct. There is definitely a time/resonance component.

Maybe this image will help. It represents a resonant sound over time. Any stimulus triggers a response over time. Go try an IR-based convolution reverb and you'll pick up on what I am trying to convey.

And please recognize that while I try to explain things in simple terms, my own understanding is also a simplified one.

@ulti took a poke above, but some of what he said is good!

(Image from Google.... simplified axis labels are mine. (F)requency, (R)esponse, and (T)ime.)
 
You might distinguish between the ways people use the word "tight".

I'll try to explain what I mean.

“Tight” can of course mean short in time (shorter "decay", more "damped"), BUT it can also describe how frequency content and response make a sound feel more controlled.

Those two meanings are interrelated but they aren’t the same thing. IRs affect both meanings. They shape frequency response... over time.

Frequency response is going to be right up front. How did the speaker respond? How did the mic capture that?

Think of the time part as a "listening window". Cabs and the spaces they’re in have resonances that linger, and a longer IR reproduces more of what happened there. A shorter IR cuts things off sooner.

Either way, depending on a lot of variables (amp, settings, guitar, playing, resonances in the physical space you're listening in, etc.) even the shortest possible IR could result in you hearing a sound that may or may not be perceived as tight.

Depending on the IR and where/how it was captured, more length will let you feel more of the resonant box of the cabinet, or more of the space of the room/studio. These things "cradle" that up-front tonal color of the IR!

A fun experiment is to load an IR of a long space and try it at a short length. You'll hear the sonic window "shifting" through time... but no moment ever extends long enough to let you hear the tail. People say this "feels like a gate"... but a gate is level-dependent. In comparison, the truncated IR only contains, at any moment, as much history as the IR length allows.
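That truncation behavior is easy to verify numerically. Here's a minimal Python sketch (NumPy only; the "IR" is an invented decaying sine, not a real cab capture) showing that convolving with a shortened IR matches the full result only up to the truncation point, after which there is simply no more history left to add:

```python
import numpy as np

def make_ir(sr=48_000, length_s=0.5, f0=110.0, decay_s=0.12):
    """Toy resonant IR: a decaying sine (invented numbers, not a real cab)."""
    t = np.arange(int(sr * length_s)) / sr
    return np.sin(2 * np.pi * f0 * t) * np.exp(-t / decay_s)

def apply_ir(signal, ir, max_len=None):
    """Convolve signal with the IR, optionally truncated to max_len samples."""
    if max_len is not None:
        ir = ir[:max_len]
    return np.convolve(signal, ir)

sr = 48_000
ir = make_ir(sr)
click = np.zeros(sr // 10)
click[0] = 1.0                       # a single impulse, i.e. the "pick attack"

full = apply_ir(click, ir)
short = apply_ir(click, ir, max_len=sr // 100)   # a 10 ms listening window

# The truncated result matches the full one for the first 10 ms, then goes
# silent: the shortened IR simply carries no later history to reproduce.
n = sr // 100
assert np.allclose(short[:n], full[:n])
assert np.allclose(short[n:], 0.0)
```

Note this isn't level-dependent at all, which is why it only "feels like" a gate: the cutoff is baked into the IR length, not triggered by the signal.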

(Paging Tom Baker, or is it Oliver Sacks we need?!)
Yep, all those concepts are pretty clear to me at this point... I (maybe improperly) used the word tight just to say that a longer IR doesn't always equate to "better".

But glad you took the time to write this exhaustive post, I'm sure it'll be useful to a lot of people to grasp how IRs work. 👍
 
@jakel that's not correct. There is definitely a time/resonance component.

Maybe this image will help. It represents a resonant sound over time. Any stimulus triggers a response over time. Go try an IR-based convolution reverb and you'll pick up on what I am trying to convey.

And please recognize that while I try to explain things in simple terms, my own understanding is also a simplified one.

@ulti took a poke above, but some of what he said is good!

View attachment 166590
(Image from Google.... simplified axis labels are mine. (F)requency, (R)esponse, and (T)ime.)
Regarding this, I encourage users to give REW (Room EQ Wizard) a try, it can display a waterfall graph (like the one above) for any IR you load in it.
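If you'd rather compute that waterfall view yourself, the underlying math is just short-time spectra stacked front-to-back. A hedged sketch using SciPy's `spectrogram` on a synthetic two-resonance IR (the 120 Hz and 2.4 kHz modes and their decay times are invented for illustration):

```python
import numpy as np
from scipy.signal import spectrogram

sr = 48_000
t = np.arange(sr // 2) / sr
# Toy IR: a slow-decaying low resonance plus a fast-decaying high one
ir = (np.sin(2 * np.pi * 120 * t) * np.exp(-t / 0.15)
      + 0.3 * np.sin(2 * np.pi * 2400 * t) * np.exp(-t / 0.02))

# Each column of Sxx is the spectrum of one time slice; a waterfall plot
# (like REW's) just draws these slices receding into the page.
f, frames, Sxx = spectrogram(ir, fs=sr, nperseg=2048, noverlap=1536)

lo = np.argmin(np.abs(f - 120))     # bin nearest the 120 Hz mode
hi = np.argmin(np.abs(f - 2400))    # bin nearest the 2.4 kHz mode

# In the last time slice the fast 2.4 kHz mode has died out while the
# slow 120 Hz mode is still ringing: the classic waterfall "ridge".
assert Sxx[lo, -1] > Sxx[hi, -1]
```

Swap the synthetic `ir` for samples loaded from a real IR file and the same slices are what REW's waterfall is showing you.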
 
I'm reading the lengthy quotes on the wiki regarding what's required/ideal for IR length, and admit feeling a bit of whiplash. I guess it will all get straightened out eventually by some side-by-side comparisons.
 
The longer IRs that contain more of the cabinet/room resonance and reflection information are still just a frequency filter of sorts, correct? I mean, they are NOT actually producing anything back in the time domain when used (like a reverberation). It's just that the resonances modify the filter, right?
Time and frequency domains are not separate entities; even a simple filter still has a time domain, and to produce a frequency response alteration you need group delay/phase delay (google it).
So a cab IR is no different than a convolution reverb... those cab/room reflections are just delayed copies of the signal that interfere with the direct one when the delay is short enough; you don't perceive them as delay or reverb simply because the delay is too short and/or their relative level is too low (in most cases).
If you shoot a cab IR with a room mic you'll definitely perceive it as a reverb.
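The "delayed copies" point can be shown in a few lines: convolving with a sparse IR (a direct spike plus one reflection) is literally adding a delayed, attenuated copy of the signal to itself. A toy NumPy sketch (the 3 ms delay and 0.5 reflection level are arbitrary made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)
dry = rng.standard_normal(1000)   # arbitrary input signal

# Toy IR: direct sound plus one "reflection" 3 ms later at half level
# (assuming 48 kHz, 3 ms is 144 samples)
ir = np.zeros(200)
ir[0] = 1.0       # direct path
ir[144] = 0.5     # delayed reflection

wet = np.convolve(dry, ir)

# Convolution with this IR is exactly "the signal plus a delayed,
# attenuated copy of itself": the same math as a reverb, just shorter.
manual = np.zeros(len(dry) + len(ir) - 1)
manual[:len(dry)] += dry
manual[144:144 + len(dry)] += 0.5 * dry
assert np.allclose(wet, manual)
```

Stretch that 3 ms out to 30 ms or raise the reflection level and the same operation starts to read to the ear as slapback or ambience; nothing about the math changes.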
 
I'm reading the lengthy quotes on the wiki regarding what's required/ideal for IR length, and admit feeling a bit of whiplash. I guess it will all get straightened out eventually by some side-by-side comparisons.

Are you referring to the different quotes on 1024, 2048, and 8k resolutions?
 
@jakel that's not correct. There is definitely a time/resonance component.

Maybe this image will help. It represents a resonant sound over time. Any stimulus triggers a response over time. Go try an IR-based convolution reverb and you'll pick up on what I am trying to convey.

And please recognize that while I try to explain things in simple terms, my own understanding is also a simplified one.

@ulti took a poke above, but some of what he said is good!

View attachment 166590
(Image from Google.... simplified axis labels are mine. (F)requency, (R)esponse, and (T)ime.)
Very cool stuff. Thank you for sharing the info with an example plot. Love this aspect of FAS getting technical while also being digestible for the non-subject-matter experts who enjoy the technical bits.
 
DynaCab HD isn’t about “more mics” or some vague upgrade — it’s about resolution in two dimensions. Temporal resolution means the impulse response runs longer, so you capture more of the speaker’s time behavior: low-frequency bloom, cone decay, subtle resonance tails — the way a cab actually unfolds after the pick attack. Spatial resolution means the speaker surface is sampled more densely, so mic movement across cap-to-edge isn’t stepping between coarse points but transitioning through finer positional increments with smoother phase interaction. In practice, that translates to more continuous mic positioning, more complete low-end development over time, and less interpolation “grain” when dialing sweet spots. It doesn’t automatically sound brighter or hyped — it sounds more continuous in space and more complete in time. And the 8k sample references typically relate to export length (Cab-Lab/UltraRes), not necessarily what older hardware processes internally. In short: same DynaCab concept — just higher resolution on both axes.
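The actual DynaCab interpolation scheme isn't public, so take this only as a generic sketch of the spatial-resolution point: with a denser capture grid, any requested mic position blends IRs that are physically closer together, so less of what you hear comes from interpolation. The linear crossfade below is an illustrative stand-in, not Fractal's algorithm:

```python
import numpy as np

def blend(ir_a, ir_b, x):
    """Generic linear crossfade between two captured IRs.
    x = 0 gives ir_a (say, nearer the dust cap), x = 1 gives ir_b."""
    return (1.0 - x) * ir_a + x * ir_b

# Toy 3-tap "IRs" at two neighboring capture points (made-up numbers)
ir_cap = np.array([1.0, 0.2, 0.05])
ir_edge = np.array([0.6, 0.4, 0.10])

# Endpoints reproduce the captures exactly; positions between them blend.
assert np.allclose(blend(ir_cap, ir_edge, 0.0), ir_cap)
assert np.allclose(blend(ir_cap, ir_edge, 1.0), ir_edge)

# More capture points across the cone means any requested position is
# closer to a real capture, so the worst-case blending distance shrinks:
gaps = {}
for n_points in (5, 17):
    grid = np.linspace(0.0, 1.0, n_points)
    gaps[n_points] = np.max(np.diff(grid)) / 2
assert gaps[17] < gaps[5]
```

That shrinking worst-case gap is one plausible reading of "less interpolation grain": each step across cap-to-edge spans a smaller physical (and phase) difference between the IRs being blended.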
What he said.
 
Phew!

Read pretty good to me. Then after reading some further posts, I thought: uh-oh... AI has taken me over.

If there was AI in there... I didn't detect it at all. A... what he said... will do nicely. tnx
I had the exact same reaction...

Sometimes people can read like AI - especially technical people that do a good job of providing details.

After all, what is the job of AI? To respond like a person... :)
 
Phew!

Read pretty good to me. Then after reading some further posts, I thought: uh-oh... AI has taken me over.

If there was AI in there... I didn't detect it at all. A... what he said... will do nicely. tnx
For me, AI-detected mode kicks in when random bits of bold emphasis appear mid-sentence.

It’s a characteristic of certain LLM responses.

As someone who has been called a robot many times in their life, I also appreciate that it can just be a writing style.
 
Time and frequency domains are not separate entities; even a simple filter still has a time domain, and to produce a frequency response alteration you need group delay/phase delay (google it).
So a cab IR is no different than a convolution reverb... those cab/room reflections are just delayed copies of the signal that interfere with the direct one when the delay is short enough; you don't perceive them as delay or reverb simply because the delay is too short and/or their relative level is too low (in most cases).
If you shoot a cab IR with a room mic you'll definitely perceive it as a reverb.
Ok, I googled filters and phase change and understand that part now. And I see how an IR player has to also work that way. So, if I put a pulse through an IR player of a room IR, I should hear the exact same thing as the mic heard when the impulse response was created, correct? Thinking about it this way changes my seat-of-the-pants understanding of how the cab block works. Thanks.
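Right, and that one is easy to verify in code: by definition, the impulse response IS what the system outputs when the input is a single impulse, so convolving a unit impulse through an "IR player" hands you back the IR itself. A minimal NumPy check (the "room IR" here is synthetic noise with a decay envelope, standing in for a real capture):

```python
import numpy as np

# A stand-in "room IR": whatever the mic recorded after the test impulse
sr = 48_000
t = np.arange(sr) / sr
room_ir = 0.1 * np.random.default_rng(1).standard_normal(sr) * np.exp(-t / 0.4)

# Feed a unit impulse through the convolution "IR player":
pulse = np.zeros(100)
pulse[0] = 1.0
out = np.convolve(pulse, room_ir)

# The output is the IR itself, sample for sample: a pulse through a room
# IR plays back exactly what the mic heard during the capture.
assert np.allclose(out[:len(room_ir)], room_ir)
```

(Real IR capture uses sine sweeps rather than literal impulses for signal-to-noise reasons, but after deconvolution the stored file is equivalent to this idealized impulse recording.)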
 
For me, AI-detected mode kicks in when random bits of bold emphasis appear mid-sentence.

I hear ya. I just didn't get a sniff of it there.

The amount of time it takes to "fact-check" these days!?! Time == money. I wish they would give us some wafers back to bring the cost of components down. Not a fan of where it's all going.

As someone who has been called a robot many times in their life...

Me too. But it's usually my lady if I don't cry during The Notebook! 🤣
 
Phew!

Read pretty good to me. Then after reading some further posts, I thought: uh-oh... AI has taken me over.

If there was AI in there... I didn't detect it at all. A... what he said... will do nicely. tnx

I had the exact same reaction...

Sometimes people can read like AI - especially technical people that do a good job of providing details.

After all, what is the job of AI? To respond like a person... :)

For me, AI-detected mode kicks in when random bits of bold emphasis appear mid-sentence.

It’s a characteristic of certain LLM responses.

As someone who has been called a robot many times in their life, I also appreciate that it can just be a writing style.

Em dash is a good indicator, but not always.

Em dash plus the “it’s not *, it’s *” is a dead giveaway of ChatGPT specifically.

It’s a well documented occurrence.
 
llms use emdashes with oppressive frequency. that was the tip for me, as well as the like overly gladhandy quips e.g. "it's not this, it's this", "what this means in practical terms"
 