Axe-Fx III Firmware 27.04 Public Beta

Status
Not open for further replies.
When these betas become vintage, the eventual releases are absolutely killer…

Re: tube bias vibrato, coming from an old-Fenders-are-better guy: these vibe circuits are wonderful, inherently so, and especially as you crank the power tubes.
 
Because they're infants. Give 'em a few years and they'll be creating da Vincis.
While it's possible that the image generation networks in use a few years from now won't exhibit this specific problem anymore, I'd say the root cause is not that the current models are "infants". Rather, it's that despite being called "AI", the current systems not only have no concept of what a guitar fret is, they don't even have a concept of what a straight line is. So you cannot expect them to draw straight lines on purpose.

Here's an example showing how Stable Diffusion generates an image. It literally starts with random noise and improves it over a number of iterations, refining the noise into patterns that match the prompt, which is a process that will never be free of artifacts: https://upload.wikimedia.org/wikipe..._Japan_demonstrating_DDIM_diffusion_steps.png
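The "start from noise and iteratively refine" idea can be sketched in a few lines. This is a toy analogy only, not Stable Diffusion's actual sampler: the "denoiser" below just nudges the image toward a known target, whereas a real model predicts the noise with a neural network conditioned on the text prompt. All names and numbers here are illustrative.

```python
import numpy as np

def toy_denoise(target, steps=50, rate=0.1, seed=0):
    """Toy stand-in for diffusion sampling: refine random noise toward a target."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)   # start from pure random noise
    for _ in range(steps):
        x = x + rate * (target - x)         # each step removes some of the "noise"
    return x

target = np.zeros((8, 8))                   # stand-in for "what the prompt means"
result = toy_denoise(target)

# The residual shrinks geometrically with more steps but never reaches
# exactly zero, which is the structural reason small artifacts survive.
residual = np.abs(result - target).mean()
```

The point of the toy: because the output is only ever an approximation refined out of noise, "almost straight" lines and "almost correct" fret counts are baked into the method, not a bug that a few more training runs will simply remove.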
 
An infant doesn't know what a guitar fret or a straight line is either.
 
Text-to-image generative AI systems don't "know" what they're "drawing." There's no object model or anything like that. So when a system draws something like a guitar, it doesn't know "draw 22 frets." Or what a fret is. Or that they must be straight.

Think of the output as a statistical representation of the prompt based on the input data. Every pixel is a data point and they might not be perfect. Because it doesn't count frets, you get weird numbers. Because it doesn't know that it's drawing something that must be straight, you get curvy lines. It's the same problem as many fingers, screwed up teeth, etc.

To make matters worse, try having one draw a saxophone or a piano. The worst case I've found is "a group of brass instruments" and similar prompts: you don't even get real instruments. Because, again, it's a statistical representation of pixels based on the prompt.
 
An infant doesn't know what a guitar fret or a straight line is either.
Sure, but for infants we know how to change that; for current "AI" systems we don't. Image generation can do impressive things when it goes right, but it can also fail really hard, and I think that is an intrinsic property of the current approaches. At best it can be worked around by adding even more layers and iterations to catch the weird outputs and try again.
At least with images it's still creating pretty pictures and just adds a few random washing machines or creates guitars with not-quite-true-temperament fretboards. It's more of an issue for the overhyped chatbots which get shoehorned into all kinds of things because some companies believe the hype and think they can replace their customer service department with it or whatever.

Anyways, back to playing guitar. I didn't even check out any of the beta goodies yet because I'm still busy going through the new factory presets, but have not had any issues with 27.04 thus far :)
 
Ok, lesson learned not to share AI generated guitars with bent frets, a washing machine and any non-straight lines again as not to derail thread 💀

Just fascinated with the technology and how quickly it can take a concept and "make" it "real".

Anyway, looking forward to this firmware being out of beta to see what Fractal Audio has been cooking.
 
Infants are conscious. Consciousness cannot be fully replicated by computational systems (which therefore cannot fully appreciate the wonder of Fractal Audio)!
 
An infant doesn't know what a guitar fret or a straight line is either.
Because they're infants. Give 'em a few years and they'll be creating da Vincis.
Great analogy.

Currently, like children, mass image generators are "copping" (mimicking, approximating) images of real-world objects via relatively generic means (e.g. diffusion) without real-world rules, constraints, or symmetries. But that will evolve as we weave different systems together and/or train them in realistic virtual physics environments... which has already been happening.

Layering, composing and convolving knowledge and skills happens with humans as well. Sometimes simple methods or heuristics are "good enough", while they are wholly inadequate for more nuanced applications or situations.

An actor playing a doctor can look like a doctor from a certain distance, but you wouldn't want them operating on your brain IRL.
 