Highcastle_of_Tone
Power User
I threw in a fairly simple question and got a decent answer. I think it's a worthwhile tool to try for a quick answer. If you can't find the specific answer, the manual is still there, not to mention the community.
"Agreed. The title was appropriate. Not clickbait, yet helpful in drawing attention to help better understand the topic."

The title was simply intended to draw attention by emphasizing the word "WITHOUT". Why? Because I think the tech is really cool and potentially useful for a lot of users (like myself), and I wanted as many eyeballs as possible to notice it. With all due respect, a title like "potentially useful tool to interface with the Fractal user manual" sounds about as interesting as reading the terms and conditions of a bank loan, in my opinion, but that's me. I simply wanted a title that grabs attention without being clickbait.
Having said that, the fact is, the tech can do exactly what the title states: provide answers without the need to read the manual. Why anyone would find that controversial, I don't know.
This is not completely correct. LLMs can be incrementally trained, so replies and good/bad interactions can be fed back into the model to improve its behavior on specific subjects. Happy to go into a LOT more detail if you're interested. Also, ChatGPT is not the only game in town; there are open-source models that can be trained on curated corpora and reach performance similar to GPT-3.5.
This is a good link to start reading more if you care about the subject: https://github.com/Hannibal046/Awesome-LLM
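To make the "feed good/bad interactions back into the model" idea concrete, here is a minimal sketch of the data-curation step that precedes any fine-tuning run. Everything here is illustrative (the function name, the rating labels, and the example Q&A pairs are all hypothetical); a real pipeline would follow this with an actual training job on the collected examples.

```python
import json

def log_interaction(corpus, prompt, reply, rating):
    """Keep only interactions rated 'good' as future fine-tuning examples.
    'Bad' interactions could instead be routed to a separate review set."""
    if rating == "good":
        corpus.append({"prompt": prompt, "completion": reply})
    return corpus

corpus = []
log_interaction(corpus, "How do I save a preset?", "Press STORE twice.", "good")
log_interaction(corpus, "What is the looper limit?", "An hour.", "bad")  # wrong answer, excluded

# JSONL (one JSON object per line) is a common on-disk format for fine-tuning datasets
jsonl = "\n".join(json.dumps(example) for example in corpus)
print(jsonl)
```

The point is simply that curation happens before training: only the vetted interactions ever become part of the corpus the model is updated on.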
True for ChatGPT. ChatGPT (in particular) does not learn on a global scale the same way it appears to learn within the context of a conversation. Information given in the context of that conversation only persists as long as that context persists.
ChatGPT is lying.
"Information given in the context of that conversation only persists as long as that context persists."

Not even. It fails to remember DURING conversations.
"Not even. It fails to remember DURING conversations."

I have experienced this in long threads. Could be that the relevant part of the current conversation thread scrolls outside of its limited context window (of some number of tokens).
https://www.digitaltrends.com/computing/gpt-4-vs-gpt-35 said:

"The improved context window of GPT-4 is another major standout feature. It can now retain more information from your chats, letting it further improve responses based on your conversation. That works out to around 25,000 words of context for GPT-4, whereas GPT-3.5 is limited to a mere 3,000 words."
"I did sound for a few years at a local venue here - performers would rent the space, show up an hour in advance, and insist on all sorts of elaborate setups."

Isn't that what a rider and a stage plot sent in advance are for? I used to run sound back in the '80s and '90s, and if these acts came in at the last minute with various demands, we'd just tell them, "We'll see what we can do." No way we were going to scramble at the last minute for something elaborate.
"Information relayed by the AI, at least in this case, is derived directly from the manual."

Selectively based on the manual.
"Selectively based on the manual."

Either way, it's derived (i.e. taken from a specified source) from the manual.
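The "derived from the manual" idea — answering from a specified source rather than from thin air — can be illustrated with a bare-bones retrieval step. This is only a sketch of the general technique, not the actual tool being discussed: the scoring (word overlap between question and passage) is the crudest possible stand-in for real retrieval, and the manual excerpts are made up.

```python
import re

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_passage(question, passages):
    """Return the manual passage sharing the most words with the question,
    so the reply is always grounded in source text."""
    q = tokens(question)
    return max(passages, key=lambda p: len(q & tokens(p)))

manual = [
    "Press STORE twice to save the current preset.",
    "The looper records up to 60 seconds of audio.",
    "Global EQ is applied after all block processing.",
]
print(best_passage("how do I save a preset", manual))
```

Real systems rank passages with embeddings rather than word overlap and then have the LLM phrase an answer from the retrieved text, but the grounding principle is the same: the answer is selected from the manual, not invented.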
"That's a great idea, but where's the fun of understanding, learning, and making it work by yourself? I may be old school, but I really prefer to spend hours getting to know what I'm doing instead of asking a question to a computer."

As mentioned in post #34, no one's recommending users not read the manual. If you want to read it front to back, top to bottom, knock yourself out. Some people just need a quick answer.
"Elon Musk basically said the issue with AI is that people will believe it. What's wrong with that? Nothing or everything."

Right, people recite answers off the screen as fast as possible (the goal for many). Original source for me. The first 20 times are the hardest.