Wish DynaCab superimposed over Ir?

NeoSound

Fractal Fanatic
Possibility of dropping an IR into a DynaCab slot and using the same controls/algorithms to manipulate it? Even if it does have the mic and position baked in, this could be the ultimate way to fine-tune?
 
With the huge caveat of not knowing what I'm talking about: logically, you'd think the DynaCab analysis would reveal some trends and could lead to some solid data on how to manipulate a digital EQ file subtly enough to approximate mic and distance changes. Sure, it wouldn't be as good as the real deal, and maybe this is how it's done today in some products, but maybe?
 
Dyna-Cab is still based on IRs. Changing cabs, mics, position and distance selects a different IR under the hood. So the Dyna-Cab controls cannot be applied to a single IR.
Agreed, a single DynaCab uses many IRs. If it were an algorithm or method to simulate moving the mic, then it would be something that could be applied to a single IR, and it likely would have been incorporated already.
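A minimal sketch of the "selects a different IR under the hood" idea described above, assuming (purely hypothetically) a library of captures keyed by mic, position and distance; all names and grid sizes here are invented for illustration:

```python
# Hypothetical lookup table: (mic_name, position_index, distance_index) -> IR samples.
ir_library = {}

def select_ir(mic_name, position, distance, pos_steps=100, dist_steps=10):
    """Snap continuous control values (0.0-1.0) to the nearest captured
    grid point and return that single IR -- no filtering is applied."""
    pos_idx = int(round(position * (pos_steps - 1)))
    dist_idx = int(round(distance * (dist_steps - 1)))
    return ir_library[(mic_name, pos_idx, dist_idx)]
```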
 
Dyna-Cab analysis? Of what? Users' final settings? Fractal has no way of knowing those. Of the cab+speaker+microphone+room interaction? Maybe, eventually, but that takes a lot of CPU, especially if it's done on the fly, in real time.

IRs capture all the data that computation would generate, in a concise way, which makes it possible to massage the signal so it sounds like the cab+speaker, plus the room if you use FullRes IRs. Even the IRs eat up CPU, so computing all of that directly is a ways out for portable devices of this generation.
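For context, "massaging the signal" with an IR is just convolution; a minimal FFT-based sketch of the standard technique (not Fractal's code):

```python
import numpy as np

def apply_ir(signal, ir):
    """Apply a cab IR to a signal via FFT convolution. Real-time hardware
    does this in small blocks (partitioned convolution), but the math is
    the same."""
    n = len(signal) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()  # next power of two >= n
    out = np.fft.irfft(np.fft.rfft(signal, nfft) * np.fft.rfft(ir, nfft), nfft)
    return out[:n]
```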
 
I believe that kind of algorithmic approximation is how TH3 works. And Melda has done some interesting work with algorithmic cabinet/mic simulation. But it's not clear that would be an improvement over IRs and interpolation. It's not like the early days of amp modeling, where there was no digital alternative. There is, and it works quite well, so that reduces the incentive to pursue the idea.
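The "interpolation" mentioned here could be as simple as a crossfade between neighbouring captures; the actual products' methods aren't public, so this is only a sketch of the general idea:

```python
import numpy as np

def interpolate_irs(ir_a, ir_b, t):
    """Linear crossfade between two time-aligned neighbouring captures:
    t = 0.0 gives ir_a, t = 1.0 gives ir_b."""
    length = max(len(ir_a), len(ir_b))
    a = np.pad(np.asarray(ir_a, dtype=float), (0, length - len(ir_a)))
    b = np.pad(np.asarray(ir_b, dtype=float), (0, length - len(ir_b)))
    return (1.0 - t) * a + t * b
```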
 
No, not user data: file comparison between the captures, and an algorithm based on that. "Trends" meaning similarities between the resulting file and the physical action taken to get there.
 
If there were a neutral DynaCab, could we put the IR player in front of the cab block and use a normal IR with the positional filtering?
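What that question assumes is that the positional processing is a filter stage that could be fed by any IR; if it were, chaining an IR player into it would simply combine the two responses by convolution (a sketch with hypothetical names):

```python
import numpy as np

def cascade(ir_user, positional_filter_ir):
    """If the positional processing really were a standalone filter,
    an IR player in front of it would just merge the two responses
    into one combined IR by convolution."""
    return np.convolve(ir_user, positional_filter_ir)
```

As the reply below explains, though, it isn't a filter: the mic controls select from pre-captured IRs.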

It's not really a filter though, is it? As far as I understand it, it's hundreds of different IR files: you move the virtual mic and it selects the corresponding IR that was captured at that position. So it basically saves you from having to go and find an IR labeled "mic, 2″ off center", then one at 2.5″ off center, and so forth, but it's still just using one IR at a time.

To be able to use a null IR and extrapolate the sonic changes that are happening as the mic moves would seemingly require way more processing power and memory usage, or at least that would be my guess.


I think of this as the same thing Two Notes etc. do: it basically improves the UI. I don't think it's any big leap as far as true real-time cabinet modeling goes, just an easier way to visually sort and categorize what would otherwise be hundreds or thousands of IR files you'd need to load one by one.
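A back-of-the-envelope check of the storage-versus-CPU point above; every number here is an assumption for illustration, not a Fractal spec:

```python
# Hypothetical figures: a 2048-sample standard-res IR stored as 32-bit floats,
# captured at 1000 mic positions per cab.
ir_length_samples = 2048
bytes_per_sample = 4
mic_positions = 1000

per_ir_bytes = ir_length_samples * bytes_per_sample      # 8 KiB per capture
per_cab_bytes = per_ir_bytes * mic_positions             # ~7.8 MiB per cab
print(f"storage per cab: {per_cab_bytes / 2**20:.1f} MiB")

# CPU cost stays that of convolving ONE IR, because only the currently
# selected capture is active at any moment -- the cost is in storage, not DSP.
```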
 
I imagine it as a complex frequency/EQ sweep: a base IR with the mic-sweep algorithm cutting/adding frequencies. If there were a null IR with only the mic sweep active, it could be used to manipulate whatever is fed to it by the IR player.

At any rate, I'm really happy with the DynaCabs. Just an idea to keep those past IRs a little more relevant.
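One way to prototype that speculation (which, as the next reply notes, is not how the product actually works) would be to estimate a "mic move" EQ as the magnitude ratio between two captures of the same cab and apply it to another IR. Phase and arrival-time changes are ignored, which is a big part of why this tends to fall short:

```python
import numpy as np

def mic_move_filter(ir_pos_a, ir_pos_b, nfft=4096, floor=1e-6):
    """Estimate a 'mic move' EQ as the per-bin magnitude ratio between two
    captures of the SAME cab at positions A and B (assumes both fit in nfft)."""
    mag_a = np.abs(np.fft.rfft(ir_pos_a, nfft)) + floor
    mag_b = np.abs(np.fft.rfft(ir_pos_b, nfft)) + floor
    return mag_b / mag_a

def apply_mic_move(ir_other, ratio, nfft=4096):
    """Apply the estimated EQ to a different IR; minimum-phase handling and
    length management are glossed over."""
    spec = np.fft.rfft(ir_other, nfft) * ratio
    return np.fft.irfft(spec, nfft)[:len(ir_other)]
```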
 

The mic sweep here isn't an algorithm, though. Many brands have tried an algorithmic approach to virtual mic positioning, and so far it hasn't worked well enough. It seems possible that an AI, fed with capture data from every conceivable position, could develop an algorithm that closely matched the changes that occur as the microphone moves around. But that would assume every speaker behaves the same across every position; clearly, speakers of different sizes are a problem in that regard for a start. Likely every individual speaker would need its own algorithm, which would defeat the purpose of a general algorithm.

The other problem is that the IRs you'd want to manipulate already have a mic placement 'baked in'. So unless we used reflection-free far-field IRs, we'd also somehow have to 'remove' the existing mic placement data before applying the new effect.
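For what it's worth, that 'remove the baked-in placement' step would amount to a deconvolution, which is only tractable if the placement response were known, which is exactly the problem described above. A regularized spectral-division sketch just to show the shape of the operation:

```python
import numpy as np

def remove_baked_in_response(ir_baked, ir_placement, nfft=8192, eps=1e-3):
    """Divide out a KNOWN placement response via regularized (Wiener-style)
    spectral division. In reality the placement response is unknown, so this
    mostly illustrates why the problem is hard."""
    baked_spec = np.fft.rfft(ir_baked, nfft)
    place_spec = np.fft.rfft(ir_placement, nfft)
    inverse = np.conj(place_spec) / (np.abs(place_spec) ** 2 + eps)
    return np.fft.irfft(baked_spec * inverse, nfft)[:len(ir_baked)]
```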
 