Get instant answers WITHOUT reading the Axe-Fx manual

I threw in a fairly simple question and got a decent answer. I think it's a worthwhile tool to try for a quick answer. If you can't find the specific answer, the manual is still there, not to mention the community.
 
The title was simply intended to draw attention by emphasizing the word "WITHOUT". Why? Because I think the tech is really cool and potentially useful for a lot of users (like myself), and I wanted as many eyeballs as possible to notice it. With all due respect, a title like "potentially useful tool to interface with Fractal user manual" sounds about as interesting as reading the terms and conditions of a bank loan, in my opinion, but that's me. I simply wanted a title that grabs attention and wasn't clickbait.

Having said that, the fact is, the tech can do exactly what the title states: provide answers without the need to read the manual. Why anyone would find that controversial, I don't know.
Agreed. The title was appropriate. Not clickbait, yet helpful in drawing attention to help better understand the topic.

Man people just love to debate online. That's all it comes down to.
"That color isn't purple, it's a mix of blue and red!"
 
This is not completely correct. LLMs can be incrementally trained, so replies and good/bad interactions can be fed back into the model to improve its behavior on specific subjects. Happy to go into a LOT more detail if you're interested. Also, ChatGPT is not the only game in town; there are open-source models that can be trained on curated corpora and have performance similar to GPT-3.5.

This is a good link to start reading more if you care about the subject: https://github.com/Hannibal046/Awesome-LLM
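To make the "feed good/bad interactions back into the model" idea concrete: one common pattern is to log rated conversations and keep only the well-rated ones as a fine-tuning dataset. A minimal sketch in Python (the `rating` field and the JSONL layout here are my own assumptions for illustration, not any particular vendor's format):

```python
import json

# Logged interactions: prompt, model reply, and a user rating.
interactions = [
    {"prompt": "How do I set up scenes?", "reply": "Use the Scenes menu...", "rating": "good"},
    {"prompt": "What is a Swaffle?", "reply": "A Swaffle is a bird...", "rating": "bad"},
]

def to_finetune_records(logs):
    """Keep only well-rated exchanges as prompt/completion pairs."""
    return [
        {"prompt": item["prompt"], "completion": item["reply"]}
        for item in logs
        if item["rating"] == "good"
    ]

records = to_finetune_records(interactions)
# One JSON object per line -- a common fine-tuning input format.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

The bad-rated exchange is dropped, so the model is only ever retrained toward the interactions users approved of.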

You're right, as with anything the reality is a little more complicated and fuzzy than the simple explanation. And thanks for sharing links so that those interested can dive deeper.

To be more precise, I was trying to say this: ChatGPT (in particular) does not learn on a global scale the same way it appears to learn within the context of a conversation. Information given in a conversation only persists as long as that conversation's context persists. Someone can't tell ChatGPT "There is a new animal called a Swaffle, which is a Swallow shaped like a Waffle" and expect anybody else to receive that definition when they ask for it.

But the model is being iterated and updated, and it can learn. Conversations may be incorporated back into it to improve it: possibly by seeing a message saying something was incorrect and incorporating it, possibly by a human reviewing such a message and flagging it to be incorporated, possibly by a user flagging a ChatGPT response as bad and offering suggestions. So that knowledge might eventually make its way into the model, just not as directly as "because you told it, now it knows".
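The distinction above can be sketched as: each conversation carries its own message list, and a fresh conversation starts empty, so nothing "taught" in one session is visible in another. A toy illustration in Python (the class and method names are made up for this sketch, not any real API):

```python
class Conversation:
    """A toy chat session: 'memory' is just the running message list."""

    def __init__(self):
        self.messages = []  # context exists only inside this session

    def tell(self, text):
        self.messages.append(text)

    def knows_about(self, word):
        # The model "remembers" only what is in this session's context.
        return any(word in msg for msg in self.messages)

# Session A is taught a new word...
session_a = Conversation()
session_a.tell("A Swaffle is a Swallow shaped like a Waffle.")
print(session_a.knows_about("Swaffle"))  # True

# ...but session B starts from scratch: no global learning happened.
session_b = Conversation()
print(session_b.knows_about("Swaffle"))  # False
```

The global updates described above would correspond to changing the model itself between sessions, which is a separate, much slower process than anything in this sketch.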
 
ChatGPT (in particular) does not learn on a global scale the same way it appears to learn within the context of a conversation. Information given in the context of that conversation only persists as long as that context persists.
True for ChatGPT.

And I think these "ask your pdf" tools use the pdf text as the source context and are configured not to make things up.
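That "use the PDF text as the source context" approach is essentially retrieval-augmented generation: pull the relevant passage out of the document and hand it to the model with instructions to answer only from that text. A rough sketch in Python (the keyword-overlap retrieval and the prompt wording are simplifications I chose for illustration; real tools use embeddings):

```python
# Pretend these chunks came from splitting the manual's extracted text.
manual_chunks = [
    "Scenes let you store up to eight variations of a preset.",
    "The looper block records up to 300 seconds of audio.",
]

def retrieve(question, chunks):
    """Naive retrieval: rank chunks by how many question words they share."""
    words = set(question.lower().split())
    return max(chunks, key=lambda c: len(words & set(c.lower().split())))

def build_prompt(question, chunks):
    context = retrieve(question, chunks)
    return (
        "Answer ONLY from the manual excerpt below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Excerpt: {context}\n\nQuestion: {question}"
    )

print(build_prompt("How long can the looper record?", manual_chunks))
```

Because the model is told to stay inside the supplied excerpt, it has much less room to make things up than when answering from its general training data.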
 
Not even. It fails to remember DURING conversations.
I have experienced this in long threads. Could be that the relevant part of the current conversation thread scrolls outside of its limited context window (of some number of tokens).

Or it is responding to something it "senses" is more salient probabilistically. 🤷‍♂️

Or it just likes to fuck with you. :mad:

https://www.digitaltrends.com/computing/gpt-4-vs-gpt-35 said:
The improved context window of GPT-4 is another major standout feature. It can now retain more information from your chats, letting it further improve responses based on your conversation. That works out to around 25,000 words of context for GPT-4, whereas GPT-3.5 is limited to a mere 3,000 words.
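The limit quoted above behaves roughly like a sliding window: once the running conversation exceeds the budget, the oldest messages fall off and the model can no longer "see" them, which matches the mid-conversation forgetting described earlier. A toy Python sketch (counting words instead of real tokens, with a tiny 22-word budget just for demonstration):

```python
def trim_to_window(messages, budget_words):
    """Drop the oldest messages until the total fits the word budget."""
    kept = list(messages)
    while kept and sum(len(m.split()) for m in kept) > budget_words:
        kept.pop(0)  # the earliest message scrolls out first
    return kept

history = [
    "My amp model is the Brit 800.",        # 7 words
    "I set the drive to 6 and bass to 4.",  # 10 words
    "Now the tone sounds too bright.",      # 6 words
    "What should I adjust?",                # 4 words
]

window = trim_to_window(history, budget_words=22)
# The first message (the amp model!) has scrolled out of the window.
print(window)
```

With 27 words of history against a 22-word budget, the opening message is dropped, so a follow-up answer would have no idea which amp model was being discussed.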
 
I did sound for a few years at a local venue here - performers would rent the space, show up an hour in advance, and insist on all sorts of elaborate setup.
Isn't that what a rider and stage plot in advance are for? I used to run sound back in the '80s and '90s, and if these acts came in at the last minute with various demands, we'd just tell them, "We'll see what we can do." No way we were going to scramble at the last minute for something elaborate.
 
Elon Musk basically said the issue with AI is that people will believe it. What’s wrong with that? Nothing or everything.
 
So, as someone who has read the manual, I think this is awesome. Between my actual job and life, I can't always recall exactly where in the manual the ultra-specific thing I want is; this makes it easy for me to just ask a quick question without bothering anyone here. It's also useful for newcomers who might not know where to start.
 
That’s a great idea, but where’s the fun in understanding, learning, and making it work by yourself?

I may be old school, but I really prefer to spend hours learning what I’m doing instead of asking a question to a computer.
 
That’s a great idea, but where’s the fun in understanding, learning, and making it work by yourself?

I may be old school, but I really prefer to spend hours learning what I’m doing instead of asking a question to a computer.
As mentioned in post #34, no one's recommending users not read the manual. If you want to read it front to back, top to bottom, knock yourself out. Some people just need a quick answer.
 
Elon Musk basically said the issue with AI is that people will believe it. What’s wrong with that? Nothing or everything.
Right, people just recite answers off the screen as fast as possible (the goal for many). I'd rather go to the original source myself. The first 20 times are the hardest.
 