Get instant answers WITHOUT reading the Axe-Fx manual

Did I or did I not specifically say, "No one's recommending that users not read the manual. I consider this an adjunct after you're up and running"?
Well, you may want to change the title of this thread then, which currently reads: "Get instant answers WITHOUT reading the Axe-Fx manual"

No one, me included, is keen on sitting down and reading the entire manual to find the answer to a simple question

The manual is huge
You don't read the entire manual - just use the index to go to the section of interest.


imperfect accuracy
Not good enough for more technical processes, IMO (case in point: the beta thread and how many get tripped up by not reading / following the instructions closely - and that's a simple case). Some things require precision - which is not hard, it just has to be followed step by step.

Lastly - as you pointed out earlier - the question asked may be flawed. We often see that people don't ask the right questions, or don't want to put in the effort to ask a question in sufficient detail, to get to the answers they are really after. AI bots exacerbate that effect because the AI bot does not sense (the way a teacher would sense, or the way an individual would realize as he/she proactively digests structured information (RTFM)) that the question is possibly about something different - the AI bot will just blindly answer what it's presented with, and does not have the depth of knowledge to know when the asker is probably asking about something else.

In addition, your best answers will often come from asking yourself, or having someone else ask you, a lot of probing questions about the issue to be resolved - I don't see AI asking many probing questions of its users these days with the intention of getting to the crux of what the user wants to know.
 
Well, you may want to change the title of this thread then ("Get instant answers WITHOUT reading the Axe-Fx manual")
Why? Reading the manual in its entirety certainly doesn't mean you'll remember all of it. I certainly haven't.

You don't - just use the index to go to the section of interest.
Sure, if you can remember which chapter is relevant to your query. In my case, I've read the entire manual and don't remember which chapter discusses navigating the cab block or changing IRs, for example. Personally, I'm inept when it comes to operating the front panel, so in my case it's infinitely faster using this service.

not good enough for any more technical processes (case in point: beta thread and how many get tripped up by not reading the instructions). Some things require precision - which is not hard - just has to be followed in detail.
The accuracy of this AI application is more than acceptable for this particular use case, in my opinion, and I've yet to run into any real accuracy issues thus far. Again, I've been using the service to learn the front panel and I have yet to see the accuracy issues you're referring to. So, unless you can cite some actual issues you've run into, you're merely speculating on hypotheticals. I'm not saying there aren't issues. I'm just saying I'm not seeing many of them, but I'll keep playing around with it.
 
Why? Reading the manual in its entirety certainly doesn't mean you'll remember all of it. I certainly haven't.


Sure, if you can remember which chapter is relevant to your query. In my case, I've read the entire manual and don't remember which chapter discusses navigating the cab block or changing IRs, for example. Personally, I'm inept when it comes to operating the front panel, so in my case it's infinitely faster using this service.


The accuracy of this particular service is more than acceptable for this particular use case, and I've yet to run into any real accuracy issues thus far. Again, I've been using the service to learn the front panel and I have yet to see the accuracy issues you're referring to. So, unless you can cite some actual issues you've run into, you're merely speculating on hypotheticals.
I'll rest at this point

Thanks for the lively discussions - interesting stuff.
 
I just want it to answer the common, repeated questions though.
As long as you answer it once, it will. Those things are trainable; you just need some way to feed it high-quality answers to train a domain-specific addition to the generic model. If we had a way to mark forum posts as answers it would probably work very well, and I bet reactions are a good way to find high-quality posts anyway.
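To make that concrete, here's a minimal sketch (not a working pipeline): assume you've exported question/answer pairs from the forum along with reaction counts, and want to keep only the well-reacted answers as fine-tuning data. The field names, threshold, and file name are all hypothetical.

```python
import json

REACTION_THRESHOLD = 5  # hypothetical cutoff for a "high quality" post


def build_finetune_file(posts, out_path="axefx_qa.jsonl"):
    """Write well-reacted Q/A pairs as JSONL fine-tuning records."""
    with open(out_path, "w", encoding="utf-8") as f:
        for post in posts:
            if post["reactions"] < REACTION_THRESHOLD:
                continue  # skip answers the community didn't vouch for
            record = {"prompt": post["question"], "completion": post["answer"]}
            f.write(json.dumps(record) + "\n")


# Hypothetical export shape: one dict per answered question.
posts = [
    {
        "question": "Why is one side of my delay out of phase?",
        "answer": "Check the phase setting on the delay block's L/R channels.",
        "reactions": 12,
    },
]
build_finetune_file(posts)
```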
 
As long as you answer it once, it will. Those things are trainable; you just need some way to feed it high-quality answers to train a domain-specific addition to the generic model. If we had a way to mark forum posts as answers it would probably work very well, and I bet reactions are a good way to find high-quality posts anyway.
Using a PDF with an FAQ would also work.
 
Since it's described and presented as something that "answers questions", it'll lead some folks away from just reading the manual (I bet we get forum questions asking why feature X does not work the way the Q+A tool said)
Already happens: a majority of new users ask the "AI" of the Fractal forum community to answer questions that are in the manual w/o RTFM.

Like it or not, this kind of thing is another kind of search (Q&A) that will become ubiquitous, whether we "need" it or not. We will need it when more and more people/businesses become functionally or psychologically dependent on it.
 
... I'm wondering if AI will ever progress to visual responses, since many students and end users are visually oriented, and sometimes have trouble reading words (which truthfully can become monotonous after a while).
It will, and it has. Not great yet, but getting there (e.g. Hugging Face, multimodal GPT). Currently GPT-4 can do some simplistic ASCII diagrams. GPT-3.5 used to do more drawings, but not well, so they restricted that functionality in 4, maybe to be able to connect it better to generated visual output.

GPT-4 (admittedly not great):
[two screenshots of GPT-4's ASCII diagram attempts]
 
PDF uploads are limited to 200 pages and/or 20 MB. I suppose if you trimmed a few pages from both PDFs, the combo might fit within the page and file size requirements.

If the WIKI were able to fit within a 200 page/20 MB PDF, it'd work.
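Trimming and combining the PDFs to squeeze under that cap is scriptable, for what it's worth. A minimal sketch using the pypdf library (the file names are placeholders, and this only handles the page count, not the 20 MB size limit):

```python
from pypdf import PdfReader, PdfWriter

MAX_PAGES = 200  # the stated page limit

writer = PdfWriter()
for path in ["axe-fx-manual.pdf", "wiki-export.pdf"]:  # placeholder file names
    for page in PdfReader(path).pages:
        if len(writer.pages) >= MAX_PAGES:
            break  # drop everything past the cap
        writer.add_page(page)

with open("combined.pdf", "wb") as f:
    writer.write(f)
```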
I see this as an early "experiment" in NL search (in the FAS context) having to go to an external link that has limited capability. Soon platforms and tools (like Adobe, XenForo) will integrate this directly as a supplemental search/filter feature to stay competitive, even if it's buggy initially.

In fact, I want a NL filter for forum threads to be able to remove OT jibber jabber (including mine) to see FW related info, questions, concerns, bugs. I bet something like that would help devs to filter out forum noise as well -- although they might already have an algorithmic filter to give them a cleaner stream (e.g. possible bug posts no matter where they are posted).
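As a toy illustration of that kind of filter (a sketch only: the prompt, model name, and sample posts are made up, and it assumes the official openai Python client):

```python
from openai import OpenAI  # assumes the official openai package

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_firmware_related(post_text: str) -> bool:
    """Ask the model to label one forum post as firmware-related or off-topic."""
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Reply with exactly ON or OFF: is this forum post "
                           "about firmware features, questions, concerns, or bugs?",
            },
            {"role": "user", "content": post_text},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("ON")


# Hypothetical thread: keep only the firmware-related posts.
thread_posts = [
    "The new beta fixes the reverb tails for me.",
    "Anyone catch the game last night?",
]
on_topic = [p for p in thread_posts if is_firmware_related(p)]
```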
 
It will, and it has. Not great yet, but getting there (e.g. Hugging Face, multimodal GPT). Currently GPT-4 can do some simplistic ASCII diagrams. GPT-3.5 used to do more drawings, but not well, so they restricted that functionality in 4, maybe to be able to connect it better to generated visual output.

GPT-4 (admittedly not great):
[the two GPT-4 screenshots from the quoted post]
Even humble video graphics had their start at 8 megapixels. The wait for technology to catch up has been occurring since the early 1950s, when R&D teams had assessed and completed feasibility studies of future technology (the home computer, the cell phone, etc.) but could not yet build them, because nanotechnology didn't exist then, nor did the internet, and the cost at that time would have been prohibitive. That's why the first cell phones looked like huge walkie-talkies and cost several thousand dollars...

As technology catches up, it is possible that AI will be able to assess and respond with video. The current text-only ASCII diagrams are the beginning...

What we are seeing today with AI is the effort of men from your grandfather's era who foresaw future technology's path... AI is the most recent wave of technology that 1950s R&D teams had hoped would be possible. I'm still awaiting things like Star Trek's tricorder for assessing medical diagnoses, and other aspects of the "future" that Roddenberry so aptly envisioned in his series.

Not so much about "Data" though. An interactive life-like android can wait...
 
I see this as an early "experiment" in NL search (in the FAS context) having to go to an external link that has limited capability. Soon platforms and tools (like Adobe, XenForo) will integrate this directly as a supplemental search/filter feature to stay competitive, even if it's buggy initially.
I agree, it's very experimental at this point. After a bit of maturation, I think it'll be fairly commonplace. I have little doubt the capabilities will eventually expand to include entire websites (e.g. the Fractal Audio WIKI) and who knows what else.

There's another website, similar to AskmyPDF.com, that can interact with and search across multiple documents simultaneously, so things seem to be moving pretty swiftly.
 
Even humble video graphics had their start at 8 megapixels. The wait for technology to catch up has been occurring since the early 1950s, when R&D teams had assessed and completed feasibility studies of future technology (the home computer, the cell phone, etc.) but could not yet build them, because nanotechnology didn't exist then, nor did the internet, and the cost at that time would have been prohibitive. That's why the first cell phones looked like huge walkie-talkies and cost several thousand dollars...

As technology catches up, it is possible that AI will be able to assess and respond with video. The current text-only ASCII diagrams are the beginning...
I doubt it'll be that far off. Users of Bing Chat can already ask it to not just display images but generate them. A lot of these technologies are beginning to converge, so in my opinion, it's just a matter of time.
 
Concur. Foreseeably, video or graphics AI responses shouldn't be that far off... the Japanese currently have a chatbot that uses an interactive video screen to answer questions; it just does so via voice and an anime-graphic character. IIRC, it incorporates elements of Siri or Alexa for voice, and displays video, but only the anime character, not text or design graphics...
 
As long as you answer it once, it will. Those things are trainable; you just need some way to feed it high-quality answers to train a domain-specific addition to the generic model. If we had a way to mark forum posts as answers it would probably work very well, and I bet reactions are a good way to find high-quality posts anyway.
I have reasonably high confidence ChatGPT has scraped the wiki at least once in the past 4 months. It doesn't seem to help. I haven't tried v4 though -- not willing to pay. If someone is paying for v4 access maybe ask it the same question I asked and see if it knows about phase on the L/R channels?
 
Seems like a shortcut with the end result of making you less informed. And less than 100% accurate.
IS it a shortcut, or is it the first thing to check before asking the forum or diving in to read the manual start to finish?


+1 - sorry AI buffs, it's nice to dream, but there are no shortcuts to true understanding - never will be.
I don't need true understanding to use my Axe FX.

Utterly useless. Teach it to respond "your phase is inverted on one side of your delay block"? :tearsofjoy:
Utterly useless and not a bit of hyperbole.

In this case AI is an added layer of obfuscation, less accurate and informative than directly accessing and understanding the source.

Don't get me wrong, what you posted is cool, and all things AI interest me greatly. AI is a very skillful and convincing "language terminal". Not the best tool for getting definitive answers. At least not on anything I have used AI for lately (quite a few times).
More or fewer layers of obfuscation than the average forum thread?

It's the suggested intent that's a bit scary: enthusiasm about getting an answer WITHOUT having to read a manual, enthusiasm about NOT needing to exercise one's brain a little, enthusiasm about the possibility of being spoonfed info, when the simplest and most direct route to the knowledge is to just open the book (A.."I"'s source) already in hand and read it directly. It's a good example of one of A.."I"'s biggest dangers, imo - for every one it helps, many will be led to waste time thinking it's some sort of magic shortcut to understanding concepts.

The easy part is opening the book and finding the page. The harder (but often more rewarding) part is reading / studying the page and understanding the concept. The less we practice that, the harder it will be - and putting an A.."I" filter between us and the page will only make things even more challenging.
I want to practice writing music, not reading Axe FX manuals.

Because it's a filter between you and the actual info, and it does not understand what it's looking at the way a human can - but, if it follows current A.."I" trends, it'll try and look like it knows what it's talking about.

edit: would you trust it to interpret an airline pilot's emergency procedure manual during a mid-flight engine failure?

Common sense (often lacking these days) - I'll just RTFM, thanks - I keep it right handy (read it while waiting in the grocery line).

Entertaining thread - thanks
I would prefer my airline pilot have a tool that helps them find the information they need before we crash rather than having them still thumbing through the index when we hit the ground, personally.

Of course, the big question is whether or not it will pass on that corrected advice to another user - I doubt it, but who knows... it seems like it would be extremely problematic to have ChatGPT so easily influenced by user input.
It will not. ChatGPT learns from its training data, not from its interactions with users. Nothing you do with it or tell it persists to new chat sessions. In fact, after a certain number of words (technically, tokens) it'll lose the context of your own conversation too. Keep talking long enough and it'll forget the new thing you told it.

Others in the future may behave differently.
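For the curious, that "forgetting" is just the fixed context window at work: once a conversation exceeds the model's token budget, the oldest turns get dropped. A rough sketch of that trimming using the tiktoken tokenizer (the budget and encoding name are illustrative):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent OpenAI chat models
TOKEN_BUDGET = 4096  # illustrative context-window size


def trim_history(messages):
    """Drop the oldest chat turns until the whole conversation fits the budget."""
    def total_tokens(msgs):
        return sum(len(enc.encode(m["content"])) for m in msgs)

    trimmed = list(messages)
    while len(trimmed) > 1 and total_tokens(trimmed) > TOKEN_BUDGET:
        trimmed.pop(0)  # the earliest thing you said is forgotten first
    return trimmed
```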

Heh. PowerPoint docs. Great.

Let’s see if it can survive ingesting the Emacs documentation. :)

It’s not going to be something I use or recommend for a while.
I don't think it's fair to assume this is only useful for documentation. What if you ingested a slide deck from a 2-hour presentation that you wanted to be able to search for information within?

Accuracy is accuracy, regardless of underlying subject matter.

Besides - to many folks here, the Axe-Fx is the essence of life itself, lol! - little margin of error permitted

(guess I should have used another analogy, i.e.: Hetfield asks a newbie tech to reconfigure an Axe-Fx modifier 5 minutes before the show - if I'm Hetfield, I'm hoping the newbie tech read the actual manual or remembers CC training, not going by Q+A through a hit-and-miss AI)
I think it's a little different if your paying job is being a tech for one of the biggest bands in the world playing stadium shows, vs. trying to get a song idea recorded quickly enough that you aren't tired at work the next morning.

I see this as an early "experiment" in NL search (in the FAS context) having to go to an external link that has limited capability. Soon platforms and tools (like Adobe, XenForo) will integrate this directly as a supplemental search/filter feature to stay competitive, even if it's buggy initially.

In fact, I want a NL filter for forum threads to be able to remove OT jibber jabber (including mine) to see FW related info, questions, concerns, bugs. I bet something like that would help devs to filter out forum noise as well -- although they might already have an algorithmic filter to give them a cleaner stream (e.g. possible bug posts no matter where they are posted).
Exactly! People are throwing stuff at the wall and seeing what sticks. "Take this repository of text and give me a way to search it" is an interesting take on it.
 
I think if you consider this for what it is, it's cool. And that is: a natural language interface for searching within a document, one which will correct for synonyms and vague descriptions of forgotten terms - things that are hard to find with search terms in normal PDF viewers, because you need an exact text match for the concept you're looking for.

I agree with @yyz67 that the medium-term result of this is PDF reader software incorporating some natural language processing into their built-in search functions, and that would be pretty helpful. There are some downsides to the Q&A format for this: it doesn't show you where the information came from so you can read the context above and below, and it could act like some other language models and confidently give a wrong answer when no match is found (though I haven't tested that).
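That synonym tolerance usually comes from comparing embeddings instead of raw strings. A bare-bones sketch with the sentence-transformers library (the model choice is arbitrary, and the sample chunks stand in for paragraphs extracted from the manual):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

# Stand-ins for paragraphs pulled out of the manual PDF.
chunks = [
    "The delay block provides independent level and phase controls per channel.",
    "Use the LAYOUT grid to add, remove, and connect blocks.",
]

query = "flip the polarity on one side of my echo"
scores = util.cos_sim(model.encode(query), model.encode(chunks))[0]

best = int(scores.argmax())
print(chunks[best])  # nearest chunk by meaning, not exact text match
```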
 