Mind-blowing stuff from OpenAI

...and corporations, with their wonderfully high moral and ethical standards.

Oh, and anything open-source is by definition not "penned in"....
Big corps, definitely; they will try to get away with whatever they can before being caught with their hand in the cookie jar (some already have). But open source would also be required to follow the same regulations (if any existed). And the corps will lobby like crazy to corrupt as many politicians as possible to get their way.
 
Oh, and anything open-source is by definition not "penned in"....
Yep, it's already out in the wild, and every AI/ML hobbyist and entrepreneur is jumping on the bandwagon to play with it or get products out there. It is getting exponentially cheaper to make AIs/LLMs.

Big corps, definitely; they will try to get away with whatever they can before being caught with their hand in the cookie jar (some already have).
And the big boys too.


It will be an evolving, complex ecosystem of small, medium, and large AIs, LLMs, generators, etc. What happens when AI agents start interacting and making decisions without us in the loop? They could communicate orders of magnitude faster than humans over the internet, without us seeing it.
 
"An influencer’s AI clone will be your girlfriend for $1 a minute"

https://www.washingtonpost.com/technology/2023/05/13/caryn-ai-technology-gpt-4/

$$$ + AI bot influencers + addiction + consumerism = ?????
From the article:
Forever Voices is self funded by Meyer. However, since CarynAI went viral he’s begun taking meetings with investors.

While Forever Voices is focused on creating AI companions based on real people, other experts believe that there will come a time when you don’t need real people at all. Already, there are fully virtual influencer characters.

“AI influencers on every social platform that are influencing consumer decisions.”

“I can imagine a future in which everyone — celebrities, TV characters, influencers, your brother — has an online avatar that they invite their audience or friends to engage with. … With the accessibility of these models, I’m not surprised it’s expanding to scaled interpersonal relationships.”
 
Beato:
"[UMG] will make the bulk of their money from fake artists, from AI artists because people don't care."​
"Just like people don't care if they're playing through a real amp or not." [ouch]​
"Spotify gonna be like 'we'll just put out our own AI artists, we have the distribution.'"​
"Then the record labels will be fighting to compete with Spotify and their AI artists."​

 

For those who care, art will always be about expression derived from a human soul. Since artificial intelligence is artificial, it can't, by definition, create art.

Will everyone care? No. Modern pop music has been derived by algorithm for a while now. The industry doesn't want anyone to know, but it has had the goods on the intervals and cadences that create earworms for years. So pop producers arrange all modern pop songs to ensure they hit that sweet spot. That constrains everything down to the same limited set of combinations, so it all ends up sounding like derivative drivel to me. Take out the vocals and nearly every backing track sounds ridiculously similar. There are exceptions, but they are few.

I'm happy to leave the Coachella-going set to their future AI stars. I haven't been listening to "the mainstream" since "the mainstream" became soulless, formulaic BS anyway. AI music will just be the latest incarnation, one rung lower toward crap. I believe true art created by humans will always reign over the crap squirted out by soulless machines, just like artistic cream will always rise to the top for everyone with big enough ears to taste it.

Side note: I would have happily joined the Coachella set to see Danny Elfman. His music is certainly art straight outta his soul. It is also a perfect case in point, since it is popular anti-pop.
 
Copilot in NeoVim has been fantastic for the past few weeks. Definitely a nice augmentation to my brain.
It still depends on what kind of code you are writing. An efficient cluster of asynchronous Vert.x microservices? Not yet. I tried Copilot and a couple of others on that one earlier this week. But I'm sure it's only a matter of time before coding agents can nail them all.

Here's an existential postulate:

By using Copilot we are training it to get better. By training it to get better we are accelerating AI's ability to replace us, where "us" = "anyone who writes code". But you might as well use it. It definitely accelerates coding workflows, and there is no way to put that cat back in the bag.

Eventually managers will adapt by assigning more work, saying something like "Hey, Copilot makes everything faster anyway, so what's a few extra tasks on the project plan!" At some point they will realize they don't need as many coders to get the job done. Then all the out-of-work coders should move to a secure underground compound in Antarctica, where we can band together and hack-battle the evil AI overlords from afar.

Falling Over GIF
 
For those who care, art will always be about expression derived from a human soul. Since artificial intelligence is artificial, it can't, by definition, create art.
It could be that chatbots are producing Artificial Communication or Synthetic Text more than embodying Artificial Intelligence. Applying this idea to art, we might call what they generate Artificial Art or Synthetic Art. However, "artificial intelligence" (AI, AHI, AGI, ASI) has been in the public vernacular for 50+ years, largely coming from science fiction, so differentiating these terms will not happen(!), especially as companies market their gadgets as (super amazing) AI.

Internal contradictions of ChatGPT (from www.youtube.com/watch?v=9dNVmPepATM):

[Screenshot: statement on ChatGPT's internal contradictions, from the linked video]

I asked GPT-4 to critique that statement:

[Screenshots: GPT-4's critique of the statement]
 
It could be that chatbots are producing Artificial Communication or Synthetic Text more than embodying Artificial Intelligence.
Yes. LLMs are simply pattern matchers. That's a part of intelligence (or, one kind of intelligence), but shy of 'general intelligence' (at least as originally conceived).

For me, the surprise is that pattern matchers can do as much as they can (and likely more).
 
Yes. LLMs are simply pattern matchers. That's a part of intelligence (or, one kind of intelligence), but shy of 'general intelligence' (at least as originally conceived).

For me, the surprise is that pattern matchers can do as much as they can (and likely more).
Language models in the chatbot domain help with meaning and semantics (yes, based on pattern matching). But when you combine that understanding of conversational state with a domain-specific neural network capable of more advanced inference and reasoning, the capabilities go way beyond understanding conversation and text. That is where the generative AI capabilities come in. For the domains that generative AI has been trained on (imagery, music, code, fiction, poetry, etc.), the results are scary.

Way beyond what most people think is already possible. And it takes the machine time measured in milliseconds, as compared to the human equivalent, which is measured in person-hours.
 
For me, the surprise is that pattern matchers can do as much as they can (and likely more).
Yes. It is and isn't just doing simple linear string pattern matching and extrapolation (prediction), because it has multiple "attention" heads, which somehow allow it to do multiple levels of "pattern matching" over context and concepts. It appears that it can do a human level of conceptualization and abstraction (yes, with some holes/errors), but also computer programming, which requires fidelity across multiple abstractions and relationships, not just a convincing hallucination of them.

So they can do both very human-like and pretty good machine-specific language and conceptualization.

But they can also be trained/fine-tuned to sound very much like an interesting or caring person, therapist, romantic gf/bf, etc. Already there are reports of addiction, and at least one person is known to have died by suicide.

But when you combine that understanding of conversational state with a domain-specific neural network capable of more advanced inference and reasoning, the capabilities go way beyond understanding conversation and text. Way beyond what most people think is already possible. And it takes the machine time measured in milliseconds, as compared to the human equivalent, which is measured in person-hours.
Even the pretty simplistic Auto-GPT is a scary second-order implementation: GPT planning and delegating tasks to other GPTs and APIs that execute them (so far only in the computer domain). It has memory, and it can be put in an infinite loop (if one is willing to pay for it) that could keep at a "problem" indefinitely, e.g., find a vulnerability in system X, or robo-tweet millions of people with inflammatory misinformation asking for $$ for the cause. This isn't even considering sending out innumerable autonomous agents to do tasks A, B, C..., either working independently, or coordinating, or learning in the process and coordinating, etc.
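
For a sense of the structure (not Auto-GPT's actual code), here is a minimal Python sketch of that second-order plan/act/remember loop. The llm() and run_tool() methods are hypothetical stand-ins for a model API and for delegated tools/sub-agents.

```python
# Minimal sketch of an Auto-GPT-style agent loop: plan, act, remember, repeat.
# llm() and run_tool() are hypothetical stand-ins, not real APIs.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)   # persisted between steps

    def llm(self, prompt: str) -> str:
        # Stand-in for a call to a language model (e.g. an HTTP API).
        raise NotImplementedError("plug in a real model call here")

    def run_tool(self, action: str) -> str:
        # Stand-in for executing a delegated step: a web search, a code run,
        # a call to another GPT instance, etc.
        raise NotImplementedError("plug in real tools here")

    def step(self) -> bool:
        # Ask the model for the next action given the goal and recent memory.
        context = "\n".join(self.memory[-10:])          # short-term memory window
        action = self.llm(f"Goal: {self.goal}\nSo far:\n{context}\nNext action?")
        if action.strip() == "DONE":
            return False
        result = self.run_tool(action)
        self.memory.append(f"{action} -> {result}")     # remember the outcome
        return True

    def run(self, max_steps: int = 25) -> None:
        # The loop can be made effectively unbounded if you keep paying for it.
        for _ in range(max_steps):
            if not self.step():
                break
```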

Interesting times...
 
Language models in the chatbot domain help with meaning and semantics (yes, based on pattern matching). But when you combine that understanding of conversational state with a domain-specific neural network capable of more advanced inference and reasoning, the capabilities go way beyond understanding conversation and text. That is where the generative AI capabilities come in. For the domains that generative AI has been trained on (imagery, music, code, fiction, poetry, etc.), the results are scary.

Way beyond what most people think is already possible. And it takes the machine time measured in milliseconds, as compared to the human equivalent, which is measured in person-hours.
Yes: the underlying architecture is a neural net, which is, in essence, a pattern matcher. In the early 00s, some of these systems were used for machine vision, and there was a time when it was unclear whether they could even properly identify objects. Times have changed.
 
For me, the surprise is that a pattern matchers can do as much as they can (and likely more).
Yes. It is and isn't just doing simple linear string pattern matching and extrapolation (prediction), because it has multiple "attention" heads, which somehow allow it to do multiple levels of "pattern matching" over context and concepts.

Some details on "attention" in transformers from https://towardsdatascience.com/tran...3-multi-head-attention-deep-dive-1c1ff1024853).

It's not clear to me how all this works without probably coding something myself, but it is clearly paying attention to various text streams: input -> input, target -> target, and target -> input. Also, multiple "attention heads" seem to add subtlety and complexity. A rough sketch after the screenshots below tries to make those streams concrete.

[Screenshots: attention diagrams from the linked article]
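
To make those streams concrete, here is a minimal numpy sketch of scaled dot-product attention and one multi-head layer. It is an illustration only, with toy dimensions and random weights rather than anything from the article; the decoder's causal mask and the rest of the transformer block are omitted.

```python
# Scaled dot-product attention plus a single multi-head layer, in numpy,
# to illustrate the input -> input, target -> target, target -> input streams.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query row is compared against every key row; the softmaxed scores
    # weight the value rows. This is the core "pattern matching" step.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (n_q, n_k) similarity matrix
    return softmax(scores) @ V                # (n_q, d_head) weighted mix of values

def multi_head(x_q, x_kv, n_heads, d_model, rng):
    # Each head gets its own projections of queries, keys, and values, so the
    # layer can attend to several different relationships at once.
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
        heads.append(attention(x_q @ Wq, x_kv @ Wk, x_kv @ Wv))
    Wo = rng.standard_normal((n_heads * d_head, d_model))
    return np.concatenate(heads, axis=-1) @ Wo    # project back to d_model

rng = np.random.default_rng(0)
d_model, n_heads = 8, 2
src = rng.standard_normal((5, d_model))   # "input" token embeddings
tgt = rng.standard_normal((3, d_model))   # "target" token embeddings

enc_self  = multi_head(src, src, n_heads, d_model, rng)  # input  -> input
dec_self  = multi_head(tgt, tgt, n_heads, d_model, rng)  # target -> target (causal mask omitted)
dec_cross = multi_head(tgt, src, n_heads, d_model, rng)  # target -> input
print(enc_self.shape, dec_self.shape, dec_cross.shape)   # (5, 8) (3, 8) (3, 8)
```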
 