Wish: Delay repeat clearing

If you mean in Stack/Hold modes since 24.00, I believe you can turn down those feedback parameters as needed. Otherwise, turn down delay feedback. Of course, these would not be triggered automatically in a given scenario (e.g. changing presets or channels), which I think is part of the wish.

If you clarify your particular use case(s) again, that might help figure out a solution.

From 24.00 release notes:
Improved Stack/Hold behavior in Delay block:
  • In Hold mode repeats are now infinite (or nearly, may degrade over many minutes/hours)*.
  • Added Stack Feedback and Hold Feedback parameters. This allows adjusting the decay time independently for the stack and hold modes.
  • Improved transition between Stack/Hold states.
 
Ohh, that sounds interesting! I haven't updated in a while because I have a lot of shows these days, so I think I'm still around firmware 20 or 21...

My use case is to use the delay as a looper with a specified number of bars; this way I can have multiple loopers with different loop lengths. It works now, but what I'm missing is the ability to clear the delays quickly (if I want to record a new loop, for example).

Can you confirm that with the 24.00 update you can clear the buffer by just turning the hold feedback down and then up? That way I could assign a control switch to the hold feedback parameter to act as a clear switch.
 
Up,

Still no workaround to clear this buffer?

AFAIK, nothing has changed, so there's still the workaround of changing presets with spillover turned off, or switching to a second delay block while letting the first one die out in the background.
 
VST/AU delay effects have the ability to clear the buffer, but that's because there is a specific protocol for that purpose, which is used, for example, when resetting the transport. I've written a lot of delay effects in my time and I would consider it to be rather odd if changing the delay time caused the buffer to clear.
Consider the case of the Echoplex, where the play head is stationary and the record head moves instead.

If you start with the maximum delay time (max distance between the heads), you will have a "buffer" of around 450 ms in this length of tape. Let's now raise the feedback to max and kill the input for the illustration, so that we have a 450 ms loop.

IIRC, the tape moves at around 7.5 ips. If you slide the record head toward the play head (reducing the delay time) at a speed equal to 7.5 ips, no new signal will be recorded during the slide (since the record head is stationary with respect to the tape). The contents of the buffer will play out, until the record head reaches the minimum delay time and stops moving. At this moment, the last 50 ms or so of the buffer will loop.

If you slide the record head to minimum distance as fast as possible, you effectively dump the first 400 ms of the buffer, and switch from a 450 ms loop to a 50 ms loop, maintaining the last 50 ms with no pitch change. Most digital delays will simply continue to play the entire loop at a faster rate (with the resultant increase in pitch), or will simply turn the loop into garbage.
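To make the comparison concrete, here is a rough numerical sketch of the two behaviours (plain Python, purely for illustration; the 48 kHz rate and the ramp standing in for "content" are assumptions, and the head slide is treated as instantaneous):

```python
import numpy as np

SR = 48_000                          # sample rate for the sketch (assumption)
max_delay = round(0.450 * SR)        # ~450 ms of tape between the heads
min_delay = round(0.050 * SR)        # ~50 ms at minimum head spacing

# Pretend the tape between the heads already holds a looped phrase
# (feedback at max, input killed); a ramp stands in for audio so we can
# see which part survives.
tape = np.linspace(0.0, 1.0, max_delay, dtype=np.float32)

# Echoplex-style instant slide of the record head to minimum distance:
# nothing new is recorded, the oldest ~400 ms is abandoned, and the newest
# ~50 ms keeps circulating at the original pitch.
surviving = tape[-min_delay:]
print(f"old loop {len(tape) / SR * 1000:.0f} ms -> new loop "
      f"{len(surviving) / SR * 1000:.0f} ms, no pitch change")

# A naive digital delay instead keeps all 450 ms of material and reads it
# faster to fit the new delay time: same data, ~9x higher pitch.
ratio = max_delay / min_delay
pitched = np.interp(np.arange(min_delay) * ratio,
                    np.arange(max_delay), tape).astype(np.float32)
print(f"naive digital delay squeezes {len(tape) / SR * 1000:.0f} ms of material "
      f"into {len(pitched) / SR * 1000:.0f} ms ({ratio:.1f}x pitch)")
```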

Universal Audio is the only company I know of that has designed a digital delay to act this way, but it does not react quickly enough to use for my application. If you were to design such a delay, I'd buy it in a heartbeat.
 
Ok, but that's not "clearing the buffer", right? I think the use case the OP is talking about is: leave the delay time unchanged but clear the buffer so the echoes stop immediately.
 
I guess it is "clearing the buffer" (or part of it) in the domain of the Echoplex. What the OP is talking about, I'm not sure. It's just a response to "I would consider it to be rather odd if changing the delay time caused the buffer to clear" and an example of where changing the delay time would dump part of what was in memory.
 
Maybe a clear-buffer-on-selection option. Turn it on, and the buffer is cleared when that preset is selected.
 

I don't think that's the use case for this thread; the OP isn't changing presets. There is another thread with that use case, though, and I proposed a solution there:

https://forum.fractalaudio.com/threads/clear-delay-buffer.201213/#post-2515568

For this thread, I guess the solution would be a "clear buffer" button you can press, but in the meantime there are a couple of workarounds mentioned above.
 
The ability to clear the buffer with a MIDI command would be useful for me. I currently have to use a scheme that reduces feedback to zero and then back to max. Not ideal.
 
Running into this issue: when I change delay channels and there are different types in each channel, if I'm playing a passage, switch to a different channel, then come back to the original channel, I get a burst of sound that is the content left in the original channel's delay “memory”.

So my wish is to clear the delay “memory” when changing channels.
 
If so, that would clobber all tails, even for the same type (and settings) on a different channel. This might be okay if the setting you propose were optional.

If "clear delay buffer" were configurable across scenarios (e.g. by modifier trigger/MIDI, on channel change, on preset change, on scene change, on any change), that would seem ideal, but it's probably challenging to implement.
 
That's a good point. But I don't notice the issue when switching between channels that have the same model selected, so I guess that's my workaround for now.
 

Some types share a buffer, others don’t. When switching to a type that doesn’t share a buffer, the first buffer should be cleared. It’s hard to imagine a use case that would justify the current behavior of not clearing the buffer in that particular situation. In the meantime you’ll have to use one of the workarounds I mentioned above.
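As a toy illustration of the "clear when the new type doesn't share a buffer" rule (the type-to-buffer grouping below is invented for illustration; the real mapping inside the Delay block isn't published):

```python
# Hypothetical type-to-buffer grouping; the real mapping is not public.
BUFFER_GROUP = {
    "Digital Mono": "A",
    "Digital Stereo": "A",   # assumed to share a buffer with Digital Mono
    "Analog Mono": "B",
    "Tape Mono": "C",
}

class DelayBlockModel:
    """Toy model: one buffer per group, wiped when the group is abandoned."""

    def __init__(self, initial_type: str = "Digital Mono") -> None:
        self.current_type = initial_type
        self.buffers = {}   # group id -> list of samples

    def switch_type(self, new_type: str) -> None:
        old_group = BUFFER_GROUP[self.current_type]
        new_group = BUFFER_GROUP[new_type]
        if new_group != old_group:
            # The new type won't reuse the old buffer, so wipe it now instead
            # of letting its contents "come back from the dead" later.
            self.buffers[old_group] = []
        self.current_type = new_type

# Example: repeats ringing in the Tape buffer are wiped on a type change.
d = DelayBlockModel("Tape Mono")
d.buffers["C"] = [0.3, 0.1]
d.switch_type("Analog Mono")
print(d.buffers)   # -> {'C': []}
```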
 
Good point.

So it would have to clear it based on knowing the prior type/buffer it was coming from, and only clear when switching to a type that uses a different buffer.

Conceivably, though, there could be a CPU cost or latency to clearing/zeroing it on demand like that (assuming it is static memory of fixed length).
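If that zeroing cost ever were a concern, one common way to avoid a spike is to spread the wipe over several audio blocks. A minimal sketch, with made-up block and chunk sizes:

```python
import numpy as np

SR = 48_000
BLOCK = 64                              # samples per audio block (assumption)
delay_line = np.random.rand(2 * SR).astype(np.float32)   # 2 s of stale repeats

# Instead of zeroing the whole line in one audio callback (a potential spike),
# wipe a slice per block so the work is spread over a few milliseconds.
CLEAR_CHUNK = 8 * 1024                  # samples wiped per block (assumption)
clear_pos = 0
clearing = True

def process_block(block_in: np.ndarray) -> None:
    """Placeholder per-block processing; only the incremental wipe is shown."""
    global clear_pos, clearing
    if clearing:
        end = min(clear_pos + CLEAR_CHUNK, delay_line.size)
        delay_line[clear_pos:end] = 0.0
        clear_pos = end
        clearing = end < delay_line.size
    # ... normal delay read/write for this block would go here ...

# Example: run enough blocks for the wipe to finish.
for _ in range(delay_line.size // CLEAR_CHUNK + 1):
    process_block(np.zeros(BLOCK, dtype=np.float32))
print("buffer cleared:", not delay_line.any())
```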
 

I've learned through experience that some delay models are on different buffers: sometimes remaining repeats are pushed into the new delay's feedback line, and other times they are killed immediately. Interestingly, the Reverb block (which also seems to use different buffers between models) doesn't keep the signal in the previous buffer, so when switching back to a channel used earlier there is no trail burst - at least not with the models I've tried - so perhaps give the Delay block the same treatment? Otherwise I'm not sure the solution needs to be that complicated...

To me, the first step would be to make the channel/scene change behaviour uniform so presets can be designed around this quirk. To that end, a simpler global rule could work: on channel change (perhaps after a lag of X seconds/repeats) the previous buffer is cleared, and for trails to stay intact, any remaining repeats are pushed into the new delay model (see the sketch after this post). There could be issues when mix/levels/feedback/subdivisions are significantly higher or lower in the new channel, but since we have multiple delay blocks we could account for that and hopefully avoid them. Perhaps each delay block could use a single buffer for all models, in which case any buffered signal is pushed into the next model or cleared normally on bypass. Just these changes, I think, would be sufficient to address the undesirable artefacts.

Alternatively, documentation of which models share buffers - although this already feels way too complicated (to me at least) - would allow presets to be designed around the potential issues... There obviously isn't a simple solution, and I don't have technical knowledge of how these elements are coded, but Cliff was able to address gapless switching, which for a long time was considered 'very difficult'.
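To make the "push remaining repeats into the new delay model" idea concrete, here is a rough sketch (plain Python, not how the firmware does it; the delay time, feedback and sample rate are assumptions). The old line is read out once, summed into the new line's input, and then dropped, so nothing is left to burst out when switching back:

```python
import numpy as np

SR = 48_000

def switch_with_trails(old_tail: np.ndarray, dry_in: np.ndarray,
                       new_delay_s: float = 0.375,
                       feedback: float = 0.5) -> np.ndarray:
    """Feed the old delay line's remaining contents into a fresh delay line.

    old_tail: the old buffer read out once, oldest sample first.
    dry_in:   the dry signal arriving after the channel change.
    """
    new_len = max(1, int(new_delay_s * SR))
    new_line = np.zeros(new_len, dtype=np.float32)   # the new channel's buffer
    out = np.zeros(len(dry_in), dtype=np.float32)
    w = 0                                            # ring-buffer index
    for n, x in enumerate(dry_in):
        tail = old_tail[n] if n < len(old_tail) else 0.0   # old repeats, read once
        delayed = new_line[w]                        # written new_len samples ago
        out[n] = x + delayed + tail                  # trails stay audible
        new_line[w] = x + tail + feedback * delayed  # and keep repeating in the new line
        w = (w + 1) % new_len
    # The old buffer is now fully consumed and can simply be dropped/zeroed,
    # so switching back to that channel later cannot produce a burst of stale audio.
    return out

# Example: half a second of stale repeats, one second of silence after the switch.
old_tail = np.random.rand(SR // 2).astype(np.float32) * 0.2
print(switch_with_trails(old_tail, np.zeros(SR, dtype=np.float32)).shape)
```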
 
There obviously isn't a simple solution

I'd say what I described above is a pretty simple solution to the "echoes that come back from the dead" problem, but whether it's worthwhile for FAS to implement depends on whether they consider this a severe problem or not.

In addition, one could imagine a new feature where trails are always retained when switching between all types, but that's going above and beyond fixing the problem that people have been reporting. I suppose that enhancement would be useful, but I don't recall anybody ever requesting that feature before.

Note however, this issue is different from what the OP is talking about.
 