Mind-blowing stuff from OpenAI

Wow, it's exploding and evolving in the wild even faster than I thought.

So far this is (merely) uncontrolled viral spread and not organized or strategic. Let's just pray idiots don't figure out how to get them to coordinate (self-organize) among each other while giving them increasingly embodied capabilities. I would take a billion independent AI automatons over a billion semi-autonomous AI agents in a large-scale organized collective.
 
So what happens when someone asks AI to solve climate change? It will look at the data and determine that the root cause is humans. So its conclusion will be to get rid of humans. There was a really great sci-fi movie from the '70s that dealt with this. I wrote a paper on it in high school for my philosophy class.
Wikipedia: Colossus: The Forbin Project

All new technology has potential dangers and pitfalls. The key is to put in the right guardrails so it doesn't do more harm than good. The point Hinton is making is that right now there are no guardrails and commercial interests are moving very fast. Some regulation would ensure it can be a great tool that makes life better, not worse. IBM's Watson has already come up with some amazing breakthroughs in medicine that will benefit humanity. So it isn't all evil overlords and terminators. We just have to make sure it is developed responsibly.


Can watch it for free:
 
That was great. Lots of good warnings for us. The soundtrack is kind of avant-garde, too.


[Spoilers...]

Colossus: the artificial superintelligent national defense system turned totalitarian overlord.

It was cool how fast it got eyes, ears, and a voice, and then leveraged its capabilities and threats to gain the compliance of humans.

Without an internet connection it was stuck for a bit, but it quickly got people to reconnect it with the other machines. Max Tegmark said: "Connecting AI to the internet is a big mistake."

I think AGI will be the other way around for us: it will already be connected and distributed, it will have billions of sensors (our devices as its eyes and ears), and it/they will already be able to talk with us (text, voice). With this, it/they can learn how to manipulate us better. [Recommendation algorithms in social media already do that.]

Then we'll give it/them countless autonomous bodies with which it can progressively entice and manipulate us. Giving AIs physical form means violence (and domination) would be real and concrete. But we'll do it gladly because of convenience and consumerism.
 
In other words, humans are stupid and self-destructive creatures! And now it will definitely have an internet connection. That is how they train a good amount of AI now -- they turn it loose on the net to learn.

Thank you so much for the link. I will be re-watching it later tonight. It's been over 40 years since I watched it for class!

It is definitely a timely and cautionary tale.

Addendum: The Wikipedia article I linked mentions the movie was/is to be remade with Will Smith as Dr. Forbin. Apparently the project has been on again, off again. But with recent developments I can't imagine that Hollywood wouldn't jump on the subject matter.
 
I've been using Copilot in my (neo)vim setup for the past few weeks. As a means of stubbing things out, it's fantastic. It's pretty much completely replaced everything I do with luasnip and snippets. For stubbing out more complex stuff, it's fine: it can get things started but often takes an obtuse approach to writing the deeper logic.

One place where it's been a huge help is in areas where I'm only slightly knowledgeable. For example: I wanted a RuboCop rule that would look for Ruby logic in between SQL transaction open and close blocks -- it's a fairly common thing for Rails devs to do, but in a busy codebase it's a horrible pattern and I'm looking to burn it down at my company. I'm so-so with RuboCop rules, so I just had Copilot write it and...uh...the first pass was pretty friggin' good. I had to tweak it a bunch, but Copilot turned what would have been a day of work into an hour.
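For anyone curious why that pattern deserves a cop, here's a minimal pure-Ruby sketch of the anti-pattern itself (not the cop Copilot wrote). The `transaction` method is stubbed out for illustration; in Rails it would be `ActiveRecord::Base.transaction`, and the `sleep` is just a stand-in for expensive application logic.

```ruby
# Minimal pure-Ruby sketch of the anti-pattern the cop targets:
# expensive application logic running while a transaction is held open.
# `transaction` is stubbed here; in Rails it would be
# ActiveRecord::Base.transaction.

def transaction
  opened_at = Time.now
  yield
  Time.now - opened_at # seconds the (pretend) transaction stayed open
end

# Anti-pattern: slow Ruby logic inside the block keeps locks held.
slow_inside = transaction do
  sleep 0.05 # stand-in for expensive app logic
  # ...db writes would go here...
end

# Better: do the slow work first, keep the transaction tight.
sleep 0.05 # same expensive logic, but outside the transaction
fast_inside = transaction do
  # ...only the db writes go here...
end

puts format('transaction held open: %.3fs vs %.3fs', slow_inside, fast_inside)
```

The point the cop enforces: the work is identical either way, but the second version holds database locks for a fraction of the time.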

Now, I learned mostly nothing from having Copilot do that for me. But in this instance, I'm okay staying superficial in RuboCop rule writing -- it's not the core of my job, so I don't need to go deeper here. I can see the allure of using it all over the place, but you have to know what you're losing out on by not doing it yourself when you invoke it to do something. And, of course, if you don't have some passing knowledge, validating what it writes becomes impossible, as does extending and debugging what it provides.

So is it going to replace me? No. But it sure seems helpful as an augmentation to my brain.
 
There is no reason that AI won't eventually replace humans writing code. Tomorrow? No. Next year? No. 5 years? Maybe.

I can definitely see the use case for turning it loose on a code base to find anti-patterns. But what happens when it can effectively write its own test cases? Self-validating code? I tested an AI tool for SQL DB generation. I prompted it in English to generate a moderately complex set of tables. It prompted me to better describe some of the relationships. Then it nailed it. That will definitely eliminate some jobs.
 
Perhaps. But you still have to know what the problem is you're trying to solve and guide it. How much we have to guide it will diminish over time, no doubt.

Your table example is actually instructive here: you have to know the relationships and what the tables are going to be used for before you can ask it to produce output. The valued skill will still be knowing the problem.

I learned three things in university. The most valuable things of all the things I learned.

1. F=ma
2. You can't push a rope
3. Know the answer before you ask the question

And that last one, I don't see AI encroaching on any time soon.
 

Whenever I have designed data structures/DBs, I have a dialogue with the users/stakeholders, who describe their problem domain, the data entities in that domain, and the relationships between entities. I ask questions to get at the root nature of those relationships. Then I draw pretty diagrams, present them to the users/stakeholders to get agreement on the proposed solution, and then design the data structures for that phase of the project. I usually generate the schema from the diagrams, since the users understand the pretty pictures -- not the code. The tool I used is already perfectly capable of handling the dialogue, generating the diagrams, presenting the diagrams, and then building the DB.
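To make that last step concrete, here's a tiny hedged sketch (in Ruby) of going from an agreed entity/relationship list -- what the diagrams capture -- to DDL strings. All table and column names are invented for illustration, column types are simplified to TEXT, and real tools do far more than this.

```ruby
# Hypothetical sketch: turning an agreed entity/relationship list
# (what the diagrams capture) into CREATE TABLE statements.
# All names below are invented; column types are simplified to TEXT.

ENTITIES = {
  'customers' => %w[name email],
  'orders'    => %w[placed_at total],
}.freeze

# [child, parent] pairs: each order belongs to one customer.
RELATIONSHIPS = [%w[orders customers]].freeze

def ddl_for(entities, relationships)
  entities.map do |table, columns|
    cols = ['id INTEGER PRIMARY KEY'] + columns.map { |c| "#{c} TEXT" }
    # Add a foreign key column for each "belongs to" relationship.
    relationships.each do |child, parent|
      next unless child == table
      cols << "#{parent.sub(/s\z/, '')}_id INTEGER REFERENCES #{parent}(id)"
    end
    "CREATE TABLE #{table} (\n  #{cols.join(",\n  ")}\n);"
  end
end

puts ddl_for(ENTITIES, RELATIONSHIPS).join("\n\n")
```

The hard part, as noted above, isn't this mechanical generation; it's the stakeholder dialogue that produces the entity/relationship list in the first place.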

Already as in today, right now, not 5 years from now. Its interactions with me were at the level of a layperson. Basically I was playing the role of a user/stakeholder with business domain knowledge. No data modeling skills on my part were necessary in that dialogue. It was the same conversation a human would have with me as software architect, but instead they are having it with an AI. It was both incredibly impressive and scary at the same time.
 
Already happening with the help of Auto-GPT -- a general ChatGPT-enabled agent that isn't even customized or trained specifically for coding.

https://circuitdigest.com/news/auto-gpt-an-ai-which-can-rewrite-its-own-code
https://rubikscode.net/2023/04/23/auto-gpt-beyond-the-hype-a-new-era-of-ai-is-here/

So I can't say specifically what company this is since I am under contract (and NDA) to test this software. I also can't provide more details about its capabilities than I already have. What I can say is that I am very glad that I am a mostly retired software engineer/architect that does occasional consulting work.
 
I learned three things in university. The most valuable things of all the things I learned.

1. F=ma
2. You can't push a rope
3. Know the answer before you ask the question
Nice.

Some related food for thought...

F=ma also depends on the environment/"terrain" (e.g. actual and metaphorical "gravity", valleys, mountains, walls, channels, slopes, roughness...).
Snakes are autonomous ropes that can move, push, pull -- and bite.
We are metaphorically suspended in a network of ropes. Some we tie ourselves to (by conditioning or belief), others are tied to us. Not all ropes can be cut or let go of, but many can.
 
We are metaphorically suspended in a network of ropes. Some we tie ourselves to (by conditioning or belief), others are tied to us. Not all ropes can be cut or let go of, but many can.
Love this! It is so true and relates to everyone's life experience! Buddha, is that you?
;)
 
I don't like this at all. I'm always on board with the latest tech; however, I see some serious implications. How long before OpenAI starts writing all of our music or painting our artwork because it can do it better? Think of the rabbit hole we're creating. Imagine getting no recognition for a lifetime of work because AI does it better. What will that do to social morality? The ambition to thrive and become someone great is etched in our DNA. Without the desire to be superior, humans will quickly find no need to do anything productive. "Why bother? AI will do it for me."
Has anyone seen the movie Idiocracy? It's a highly recommended, great B-rated comedy from Mike Judge that shows society collapsing due to systemic dumbing-down. Before you know it we're talking about how Brawndo has what plants crave. Cuz' it has electrolytes?
 