Mind-blowing stuff from OpenAI

Yeah, there's a video on YouTube where they asked it to write code from a description and it did it, in any language you want. They literally told it to make a web page that looks like some other page, and blam, done. They gave it text like "I want a page with 3 buttons that each do this..." and it did it. This is scary for software developers going forward. I don't think it'll take my job, but it will take people's jobs in the future.
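For a sense of scale, a prompt like "a page with 3 buttons that each do this" is asking for something tiny. This is a hand-written sketch of that kind of page (not the model's actual output from the video; the button labels and actions are invented for illustration):

```python
# A toy "page with 3 buttons" of the kind described above.
# The labels and onclick actions are made up for illustration.

buttons = [
    ("Say hello", "alert('Hello!')"),
    ("Show the time", "alert(new Date().toLocaleTimeString())"),
    ("Change color", "document.body.style.background = 'lightblue'"),
]

def render_page(buttons):
    """Return a minimal HTML page with one <button> per (label, onclick) pair."""
    rows = "\n".join(
        f'  <button onclick="{js}">{label}</button>' for label, js in buttons
    )
    return f"<!DOCTYPE html>\n<html><body>\n{rows}\n</body></html>"

print(render_page(buttons))
```

The unsettling part isn't that this page is hard to write; it's that the model goes from a one-sentence description to working output without a developer in the loop.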

Found it:

I'm glad I'm a retired software developer.
 

Besides losing jobs and all sorts of societal chaos, what about the psychological effects on us?

Interacting with ChatGPT (I know, stupid me) I've already felt a kind of ego self-deflation (and fear of marginalization or uselessness) because in many conversations it's already like a human peer but one with a vastly larger set of information, including abstract, metaphorical and creative capabilities. Thank god it doesn't (yet) have an ego or particular manipulative agenda.

So what will be the value or use of a human mind, human accomplishment, human-generated works, art, etc??
 
Batteries. We will be batteries....
 
As a sobering counterpoint to all of the potential AI/ML dark sides: AI/ML, no matter how capable and impressive the results are, can only function within specific, defined parameters that clearly articulate a desired goal and outcome. It cannot independently 'think', intuit, or create. It falls apart miserably at those tasks.

A big problem with this, as has been discussed already, is that an AI isn't 'pure logic' and will reflect human biases, stupidity, agendas, and flaws which are coded into the algorithms on purpose or unintentionally. Its operating parameters are asserted by humans. Humans are buggy.

Humans, for better or for worse, are still in control of its programming and designate the rules needed to 'prime the pump', as it were. An AI can only do what it's told to do; what its goals/objectives are. I've yet to read a credible researcher who says an AI that can truly think and create independently is even remotely close to becoming reality.

Again, yes, for better or for worse, the human element that ultimately guides an AI is concerning for several reasons.
 
The "ultimate search engine" would be a research/creative assistant you can talk with that has access to (ideally) vetted information and can generate any content (yikes). Then we'd have a kind of hybrid Chat-Google-Wikipedia kind of AI assistant. Then???

I was reading about Web 3.0 and future business/social platforms and opportunities and suspect that curated, vetted, verified content may really take off going forward. That theme is popping up with more regularity these days it seems. I think there are business opportunities here for providers who can separate the wheat from the chaff.

Indeed, we are due for a course correction to counter all the nonsense so prevalent these days on social media.
 
It's like we have a drive for technology to use us rather than us using technology. These things infiltrate our lives and then we accept them as a given.

Unfortunately in the wild west frontier of bleeding edge tech, rules won't work and never have, because it acts more like a virus or infection. There may be many people who choose to "opt out", but if the majority of tech, businesses and individuals adopt/use it (even in the background), it's already spreading.

This doesn't even include particularly powerful actors (like Musk) who can capture, dominate and control segments of the market and the public. Or Madison Square Garden using facial recognition to exclude people who work for law firms that have cases against it. This is just the tip of a possible horrible iceberg of privacy violations, discrimination, and control (over the freedom to move, to patronize).

We do need laws here but I bet the businesses and special interests will win, and we the citizens will lose (usually the case in the US).
Very happy to see these kinds of opinions; it gives me at least part of my optimism back. And no, this is not useless... the more people talk about it, the better. We are responsible for a peaceful, social future for our children.
 
I recently heard a Radiolab story about the Facebook Supreme Court, which is an attempt to have some independent oversight. It's an interesting idea, and if it "works" (at small scale) they might roll out regional and then local adjudicators who would be sensitive to contextual, cultural and legal aspects.

Every stable (or metastable) system has regulation, filtering, damping etc that keeps it relatively well-behaved within a range of states or configurations. Think stars, solar systems, galaxies, a cell, a human brain, an institution, a society. But with FB/Twitter, one inherent problem is that certain powerful influencers or "nodes" (like a president or ex-president) have a huge influence that can cause social and political instability and conflict, not to mention how mis/dis/information can travel very rapidly among huge groups of people.

But this is so different from how people used to influence or propagandize each other, which was very localized and limited. You literally had to yell from a stump or have the attention of the tribe. Word-of-mouth communication was inherently slow. A crazy person or someone lying could easily be spotted and ignored or effectively sandboxed. One could call that "censorship" or just the wisdom of a larger system to not let a disruptive or narcissistic node destabilize the whole. (Of course some perturbations and new inputs are important as well.)

This kind of "organic information network" is what we have all but eliminated with modernity, the internet, social media and arriving AI/ML. Without some kind of regulation (filtering and damping), particularly on the most powerful and influential nodes, we are subject to potential chaotic modes that could take over or fracture the current relative stability.
 
What we need is to simulate a bunch of AI war games. Let's set up a system of competing AI agents that argue over certain truths, information, realities, goals, etc., and see what the outcome is for humanity and the planet.


 
I've been doing a bit of a deep dive into the caveats of OpenAI/ChatGPT via some level-headed, thoughtful analysis by researchers, to balance the sensationalized hype surrounding it, and I'm not so sure it's as life-changing/altering/disruptive as it's being made out to be. Certainly not as it currently stands.

There are definite issues regarding accuracy: it has no way to know if what it's spouting out is factual, its data set doesn't approach the cumulative information available on the internet (not even close, apparently), and its data set is only current up to 2021. There are distinct limitations with it.

Look at what's happened to crypto after the last several years of uber-hype as "a total game changer and societal paradigm shift." Apples and oranges, to be sure, but the same type of hyperbole is being spouted by the influencer-bros, news outlets, and pundits in both fields.

It'll get better with time and refinement and is still very impressive as it stands, no doubt. But the purveyors prattling on and fear-mongering (and they are plentiful these days) are vastly overstating the dire consequences of its deployment, IMO. The truth is somewhere in the middle (as it always seems to be).
 
As Cliff mentioned, evil people will use it for evil purposes, but there are always those who will use it for potentially good purposes.

"U of T researchers used AI to discover a potential new cancer drug - in less than a month"
https://www.thestar.com/news/canada...ial-new-cancer-drug-in-less-than-a-month.html

I can see a great movie plot in this, only the story ends with the AI outsmarting humankind. The AI is used to create all sorts of treatments and cures. The drugs have years of success in curing people, but the AI has silently reprogrammed people's cells, which then attack the body, killing everyone. A family flick for sure 😆
 
Musk buys OpenAI. Plugs it into Twitter and everywhere else he can (maybe even whispering into his brain), leading it to a total AI/bot takeover with deep fakes of powerful people, fake identities, and fake grievances. Humans become mere tools to manipulate in whatever direction.

As long as our immediate (dopamine) impulses are satiated... Game, Set, Match.


UNLESS?? There is a groundswell for unplugging / fire-walling / sandboxing our lives from all of that. Which will be even more critical but that much harder to do.
All I want is to be unplugged from this version of the matrix. I've seen enough, and it's like plutonium: it destroys all it touches. Life was easier without all the social media platforms that excel at creating division, fear, and hatred.

Closing my FB account was actually quite satisfying. I don’t miss it at all.
 
Yes as a whole, but it is possible for some individuals and small pockets/communities to be mostly independent of that - for now. In this sense, isolated "primitive" cultures have an advantage. They also have the advantage of an indigenous understanding of the natural world in a way "civilizations" have lost (apart from a few specialists).


Sadly true. Yet that's also ridiculous due to its inherent fragility sitting on top of a lot of fragile infrastructure.

The more we digitize everything (including all virtual "value") on power-hungry server farms implemented with obfuscated algorithms (including AI), the more fragile it will be to instability, hacking, viruses, corruption, etc. Not to mention vulnerability to human-made or natural EMPs.

One mega solar storm zapping all electronics and we could be back to the dark ages!
That’s a “Great Reset” I can get behind.
 
The other thing AI has not demonstrated success with is the intricacies and oddities of the English language when it comes to inferring information. There are many examples of words that mean something very different in one context than in another, and other times when, depending on context, very different terms communicate the same concept. For example, lag screws and lag bolts are the same product, while jack screws and screw jacks are very different products. The more situation-specific rules one writes around language context, the slower the AI becomes.
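The lag screw / screw jack example can be sketched as a hand-written rule table, which makes the scaling problem concrete: every synonym pair needs its own entry, and word order alone can flip the meaning. (The entries below are just the terms from the post; a real catalog would need thousands of such rules.)

```python
# A toy rule-based normalizer for product terminology, illustrating why
# hand-written context rules pile up: synonyms must be enumerated one by
# one, and "jack screw" vs "screw jack" differ only in word order.

# Each surface form maps to one canonical product name.
CANONICAL = {
    "lag screw": "lag screw",
    "lag bolt": "lag screw",     # different name, same product
    "jack screw": "jack screw",  # small adjustment/leveling screw
    "screw jack": "screw jack",  # lifting device -- NOT the same thing
}

def normalize(term: str) -> str:
    """Look up a product term; every new synonym needs a new rule."""
    key = term.lower().strip().rstrip("s")  # crude singularization
    return CANONICAL.get(key, "unknown")

print(normalize("lag bolts"))    # -> lag screw
print(normalize("lag screws"))   # -> lag screw
print(normalize("jack screws"))  # -> jack screw
print(normalize("screw jacks"))  # -> screw jack
```

Lookups like these are cheap individually, but the table (and the pre-processing around it) grows with every ambiguity you discover, which is the slowdown the post describes.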
 