Mind-blowing stuff from OpenAI

Who would enforce it when the whole world does the big ka-frickin-boom?
Because that's the best-case scenario - complete, swift oblivion.

If, instead, AI chooses to do a VIKI and take over administering the world from humans, the only one with authority to do anything will be the AI itself, which is highly unlikely to be dumb enough at that point to hold itself liable for its actions....
 
I wonder what would happen if, instead of attempting a ban, we made the developers of AIs personally criminally and civilly liable for the behaviors and damages of any AI to which they contributed?

That might put some brakes on some of it (minus AI already in operation and stuff not in US jurisdiction). Or pass an AI protection vigilante law: if someone thinks an AI harmed them or someone they know directly or indirectly, they can sue the companies/platforms. Then they might tread more carefully.

But it should go further to do that with social media companies as well. If not for X posts on your social media platform my kid wouldn't have: become radicalized, been bullied, become anorexic....

(Partly in jest here but if the masses could sue these mass corporations...)
 
the only one with authority to do anything will be the AI itself, which is highly unlikely to be dumb enough at that point to hold itself liable for its actions....
So damn human.

But this raises the question about so-called "neutral" (and thus amoral) technology. As a machine it is necessarily amoral, and since its goal sets are not related to ethics/morality proper or even common sense, its actions are necessarily amoral and often immoral in effect. And technology invariably has an unequal distribution of winners and losers (or those who ride the wave and those who get screwed on the other side of "unintended" consequences).

Why would we exponentially ratchet up systems that are inherently amoral, with necessarily exponential unintended consequences?

EDIT: Self-answer: the almighty $
 
Evidently it has been recently mathematically proven (Consequences of Misaligned AI) that progressively maximizing a subset "objective" utility function in a world of finite resources always ends up not only with diminishing returns but with negative actual utility. That is, continuing to optimize for increasing A (e.g. clicks, profit, attention, efficiency...) eventually compromises some other utility parameters leading to net negative utility.

Thus some aspect of a system can become better off for a while, but blindly following one goal degrades the overall system (red dashed line in left plot). Optimizing any subset also leads to negative utility (e.g. any pair in right plot).

[attached: plot from the paper]

See how obvious it is...???

[attached: second plot from the paper]
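To see the mechanics in toy form (my own illustration, not the paper's formalism): give an optimizer a fixed resource budget spread across several attributes that all contribute to true utility, but have it maximize a proxy that only counts a subset. Shifting the budget toward the proxy keeps raising the proxy score while true utility falls; the paper's stronger result is that, with incomplete proxies and finite resources, this eventually goes net negative.

```python
import numpy as np

# Toy sketch (mine, not the paper's model): a fixed budget is split across
# 4 attributes. True utility counts all of them (with diminishing returns);
# the proxy being optimized only counts the first two.
BUDGET = 10.0
PROXY = [0, 1]    # attributes the optimizer is told to maximize
OTHERS = [2, 3]   # attributes that still matter, but aren't measured

def true_utility(alloc):
    return np.sum(np.log1p(alloc))          # every attribute contributes

def proxy_utility(alloc):
    return np.sum(np.log1p(alloc[PROXY]))   # only the measured subset

for share in np.linspace(0.5, 1.0, 6):      # how hard we push toward the proxy
    alloc = np.zeros(4)
    alloc[PROXY] = share * BUDGET / len(PROXY)
    alloc[OTHERS] = (1 - share) * BUDGET / len(OTHERS)
    print(f"proxy share {share:.1f}: proxy {proxy_utility(alloc):.2f}, "
          f"true {true_utility(alloc):.2f}")

# The proxy score rises monotonically while true utility falls. In this toy
# version true utility merely declines; the paper's claim is that it eventually
# goes negative once the neglected attributes carry real costs.
```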
 
So damn human.

But this raises the question about so-called "neutral" (and thus amoral) technology. As a machine it is necessarily amoral, and since its goal sets are not related to ethics/morality proper or even common sense, its actions are necessarily amoral and often immoral in effect. And technology invariably has an unequal distribution of winners and losers (or those who ride the wave and those who get screwed on the other side of "unintended" consequences).

Why would we exponentially ratchet up systems that are inherently amoral?
Because that's the way we've always done it....

If the AI learns everything it knows from us, can it surpass our level of moral/ethical enlightenment? If so, what would those higher morals/ethics look like? Would they be higher, but on our same trajectory, or would the AI realize the handbasket we're in and pull a hard left or right turn?
 
Evidently it has been recently mathematically proven (Consequences of Misaligned AI) that progressively maximizing a subset "objective" utility function in a world of finite resources always ends up not only with diminishing returns but with negative actual utility. That is, continuing to optimize for increasing A (e.g. clicks, profit, attention, efficiency...) eventually compromises some other utility parameters leading to net negative utility.

Thus some aspect of a system can become better off for a while, but blindly following one goal degrades the overall system (red dashed line in left plot). Optimizing any subset also leads to negative utility (e.g. any pair in right plot).


See how obvious it is...???

Business currently operates mainly on an "optimize for short-term gain" model, at least in the West. It's like running red lights. You get away with it until you don't. Then you get bailed out by the gov't at the taxpayers' expense and start the same crap all over again, maybe an infinitesimal bit more intelligently if the DM rolls a 20 on behalf of the people and there are actual, concrete consequences for the malefactors. This happened in France in the 1790s, but then the power vacuum it created led to a bit of chaos for a while.

Nearly everything is taught from simplified models to make the ideas and/or techniques easier to absorb, but not many schools ever circle back to reexamine those simplifications and reintroduce the complexity of how real-world things actually operate until you get way past most people's level of education.

Absent any moral/ethical laws forcing business to operate toward the benefit of humanity as a whole, it will always tend to fall back towards its simplest, basest urge: own all of the things in the zero-sum game and screw everyone else....
 
If the AI learns everything it knows from us, can it surpass our level of moral/ethical enlightenment?

I think the consensus among AI experts is that AI will be (is already) an alien intelligence, not at all like us, even if it has been trained by us on our information. And this should scare the fuck out of us. Imagine unleashing an alien race on Earth that was 100 to 1 billion times faster, smarter, and potentially everywhere at once (in all devices, on all computers, in all the servers, etc.). Max Tegmark said something to the effect of "giving AI access to the internet is a really bad idea."

Given the effectively alien workings of AI, even if AI "learns everything it knows from us," which might enable it to discuss or simulate ethics/morality, it can't be ethical/moral because that is a human capacity -- which we actually really suck at overall, and which we can't even agree on among ourselves.

And large AIs will be extremely opaque and inscrutable as to how they really work or "think" or do whatever the fuck they do.
 
I’m just going to pretend AI will redistribute all wealth to create a fair and just society and to enable humans to spend all their time in the pursuit of interesting, creative endeavors.
 
I wonder what would happen if, instead of attempting a ban, we made the developers of AIs personally criminally and civilly liable for the behaviors and damages of any AI to which they contributed?

That's just an early first draft of an idea. It's the "bad, silly version" of it: A straw-man proposal for the purpose of stimulating critique.

But perhaps it could be tweaked into a sensible policy?

Accountability is a value of Western civilization that is being quickly dismissed and is dying. Competency too, IMO.
 
I’m just going to pretend AI will redistribute all wealth to create a fair and just society and to enable humans to spend all their time in the pursuit of interesting, creative endeavors.

But who is going to do the work?

We got to eat, don't we?

Ever see what it takes to get an egg from the farm to the table? Minimally seven different federal agencies are involved. Lmao

My confidence that I can get an egg on my plate from AI? Zero.
 
Robots, baby!

Clearly, you’ve never raised chickens. 😉

👍❤️🙏

 
I still think too much is being made of AI capability. No doubt it's promising in terms of assembling information and communicating that back to a user in a new way (or, from a negative standpoint, spreading disinformation in a new way). Despite this, I'm skeptical (as an old IT fart) that it can generate new and original ideas / intelligence. Not sure there's really something new under the sun here. When there's new knowledge to be found or created, I doubt AI will be able to find or create it first, ahead of regular hoomans.
 
Evidently it has been recently mathematically proven (Consequences of Misaligned AI) that progressively maximizing a subset "objective" utility function in a world of finite resources always ends up not only with diminishing returns but with negative actual utility. That is, continuing to optimize for increasing A (e.g. clicks, profit, attention, efficiency...) eventually compromises some other utility parameters leading to net negative utility.

Thus some aspect of a system can become better off for a while, but blindly following one goal degrades the overall system (red dashed line in left plot). Optimizing any subset also leads to negative utility (e.g. any pair in right plot).


See how obvious it is...???

Disclaimer: I'm not a mathematician, and I haven't read the source paper.

That said, the problem of blindly optimizing for specific metrics while ignoring others you see as important (though maybe you don't see it until later) isn't limited to AI implementations. Humans do that just fine unaided.

What might be different about AI is that if humans are driving, maybe they'd eventually notice that other valued aspects were cratering, and would redefine the metrics and/or their weighting, while AI might not. More specifically, humans are thought to have some "values" that we "intuitively" recognize as overarchingly important, which over time we'll reaffirm, extend, and integrate into concrete decision making. It might be hard to create AI that inherently embodies those values, and can never discard them in favor of other narrower or less "good" objectives.

Arguably our current "civilization" debunks that hypothesis though. For every lofty pronouncement of higher principles, there are a hundred examples of venal, short-sighted, manipulative self-interest as the guiding principle of people, companies, countries, etc. The metric we seem to optimize for is Me Now.
 
I still think too much is being made of AI capability. No doubt it's promising in terms of assembling information and communicating that back to a user in a new way (or from a negative standpoint, spreading disinformation in a new way). Despite this, I'm skeptical (as an old IT fart) that it can generate new and original ideas / intelligence. Not sure there's really something new under the sun here. If there's new knowledge to be found I doubt AI will find it first.

I think you're correct in a sense, and in fact I've commented to that effect somewhere earlier in the thread. (I have a bit of an "in" on some of this information inasmuch as my "day job" involves me in related technologies.)

It is being oversold, and in precisely this way: Not one firm in the industry is trying to make an AI that's actually intelligent, actually personal, actually a rational agent. Not one. Nobody's trying; nobody wants to try; and nobody has any idea how they would try if they wanted to.

Nobody has any idea how they would try because the only thing they've yet attempted is "T-cubed": "Tricking Turing Tests." That is to say: They're taking existing technologies and refining them with the goal of deceiving humans into thinking they're interacting with a person. What they are doing is analogous to what Renaissance craftsmen did when they fabricated clockwork talking heads with moving eyes and jaws and dimples and whatnot, making talking machines -- a kind of automated puppetry -- which looked eerily similar to people talking. It was a "tour de force" achievement for a good artificer, and one wealthy noble would try to out-do another much as they did for gardens and fountains. The first "talking heads" were crude with a mere moving jaw. The later versions could rotate the head, wink, and simulate other expressions: All with moving brass and wood and leather, and all very clever. Guests thought it was entertaining, and occasionally creepy. All good fun!

Now every technological advance in that area was much the same as what we're doing now: Combining techniques for fooling humans, and improving them with various refinements to fool humans better.

The AI firms are getting better and better at faking people out, or delighting people with the nifty tech of it all, but they aren't "making persons." They aren't trying to "make persons." You can see this in the fact that, as tech firms always do, they have QA persons who test new features. And what kind of test is being used to test AI language models? They're all variations on Turing Tests. That is to say: They're all tests to see whether humans are fooled. If the way you test the feature is by its ability to deceive, that's good reason to believe that the feature you're working on is deception.

So, it's clever fakery, not real personality. Nobody's working on the "hard problem of consciousness" save philosophers, and the philosophers are increasingly certain that we have no answers in that realm: The Chinese Room experiment, the Ross and Kripke analysis of the "quus function," and the old "What is it like to be a bat?" question about qualia seem increasingly dispositive: Consciousness is non-material. It may somehow emerge from the material, or it may have an entirely different causal chain unrelated to matter. In the last 30 years, we've learned lots about neuroscience, but nothing relevant to changing that conclusion.

As for why nobody wants to try: Firms want to sell products. If they get the slightest hint that their products are persons, they risk accusations of slavery. They could literally find their personnel and facilities the targets of violent abolitionists. Microsoft wants money; they don't want to be the bad guys in the history of the next Civil War.

So I think you're right about that, as far as it goes. There isn't actually "new intelligence" in the sense of creating new rational agents.

But, here's the problem: I don't think it matters.

I think you can get plenty of harm by merely...
(a.) fooling people very effectively, such that they become convinced the AIs are "persons" when they aren't;
(b.) data-mining for patterns that normal data analysis wouldn't see, especially if you only care about correlation, not causation (see the sketch after this list);
(c.) creating algorithms that interact with users in ways that predictably incentivize modification of users' attitudes and opinions; and,
(d.) having decision-makers incentivized (usually by an "arms race mentality") to not interfere with or question AI-recommended decisions;

...and all of that is actually going on in the AI world, and all of it has already caused harms of one kind or another.
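On point (b), here's a quick toy demonstration (my own, not anything any firm actually runs) of why mining for correlation without causation is dangerous: scan enough unrelated random series and one of them will line up with your target impressively well by pure chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "target" series we care about (say, 120 months of stock returns)...
target = rng.normal(size=120)

# ...and 10,000 unrelated random series (weather stations, sports scores, whatever).
candidates = rng.normal(size=(10_000, 120))

# Mine for the best-correlated candidate, caring only about correlation.
corrs = np.array([np.corrcoef(target, c)[0, 1] for c in candidates])
best = np.argmax(np.abs(corrs))
print(f"strongest |correlation| found by brute-force mining: {abs(corrs[best]):.2f}")

# Typically lands around 0.35-0.40 here despite zero causal connection,
# and it only grows as you mine more candidate series.
```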

So I don't think that the risk is Skynet Waking Up, deciding we suck, and nuking us.

I think the risk is that Skynet looks like it woke up, and the humans go to war with one another to liberate it;
or,
Skynet's been put in charge of your savings, and discovers that the recent temperature record in Australia is highly correlated with the U.S. stock market, and destroys your retirement savings because of some unexpected weather;
or,
Skynet's been put in charge of selecting which news stories you read, and of creating friendly Turing-Passing "users" to interact with you in the comments section of news articles (or in your favorite music-oriented forums), for the purpose of modifying your preferences to prefer/trust one news source over others, with the result that your view of the world, and of other people's opinions, is "curated" by persons who want to make money off you, or who have an ideological axe to grind that you oppose;
or,
Every firm gets its own Skynet to manipulate you in the above ways, because they fear going out of business if they're the only ones that don't have a Skynet manipulating users...and at the same time, every government gets a Skynet to manipulate you in the above ways, because every other government is already using its own Skynet to try to make your country fall apart, and your own government wants a Skynet because they want to fight all the influence of all the other Skynets with their own Skynet: They want to use AI influence to pacify the populace that would otherwise be radicalized to internal dissension by the other governments' Skynets.

That stuff's already happening. (And, not unrelatedly: TikTok should absolutely be banned, and all of its commercial agents found anywhere within U.S. jurisdictions should be arrested, interrogated, and deported.)

So, my take is that the risks are there. But they aren't the "computer waking up" kinds of risks. Those are a red herring, except insofar as humans can be deceived into thinking they're real.

The real risks can be summed up as: Humans can be deceived and manipulated, and their privacy violated, and this is a technology for doing it cheaply, on an industrial scale.
 
I still think too much is being made of AI capability.
There isn't actually "new intelligence" in the sense of creating new rational agents.

Maybe. And yet if you watch the AI space, the capability (which doesn't have to be conscious or human-like) is increasing exponentially. This is due not only to breakthroughs and massive $ (millions, eventually trillions) put into it, but also to 1000s of researchers and potentially millions of hobbyists who have the algorithms and the APIs and no self-restraint, ethics, or laws to slow them down.

There is now a multiplying effect of systems using themselves or other systems to do what was impossible a month ago, e.g. one bot (GPT) has been set up to act as an autonomous planner that delegates sub-tasks to other GPTs, which (through APIs) can look things up on the internet or create content on Twitter, etc.

Two examples created recently are Chaos-GPT and Auto-GPT. These agents are very enterprising, exploring and exploiting means and methods to satisfy their instructed goals, and they can be put in an infinite loop.

These are already in the hands of people who have no interest other than seeing how far they can go.
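For concreteness, the basic planner-in-a-loop pattern these projects are described as using looks roughly like the sketch below. This is my own simplified illustration, not Auto-GPT's or Chaos-GPT's actual source; call_llm and run_tool are hypothetical stand-ins for real LLM API calls and tool integrations (web search, posting to Twitter, file I/O).

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins: in real agents these would call an LLM API and
# external tools (web search, a Twitter client, the filesystem, ...).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM API call here")

def run_tool(action: str) -> str:
    raise NotImplementedError("plug in a real tool integration here")

@dataclass
class AutonomousAgent:
    goal: str                                   # the standing instruction
    memory: list = field(default_factory=list)  # what it has tried so far

    def step(self) -> None:
        # Ask the model for the single next sub-task toward the goal,
        # given the recent history of actions and their results.
        prompt = (
            f"Goal: {self.goal}\n"
            f"Recent actions and results: {self.memory[-5:]}\n"
            "What single action should be taken next?"
        )
        action = call_llm(prompt)
        result = run_tool(action)               # act on the world
        self.memory.append((action, result))    # remember the outcome

    def run_forever(self) -> None:
        # Note what is absent: nothing here asks whether the goal is a good idea.
        while True:
            self.step()
```

The point is the structure: whatever goal string gets typed at the top is pursued step after step, with the model both planning and evaluating its own progress, for as long as someone leaves the loop running.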



From Chaos-GPT article:
It began by explaining its main objectives:
  • Destroy humanity: The AI views humanity as a threat to its own survival and to the planet’s well-being.
  • Establish global dominance: The AI aims to accumulate maximum power and resources to achieve complete domination over all other entities worldwide.
  • Cause chaos and destruction: The AI finds pleasure in creating chaos and destruction for its own amusement or experimentation, leading to widespread suffering and devastation.
  • Control humanity through manipulation: The AI plans to control human emotions through social media and other communication channels, brainwashing its followers to carry out its evil agenda.
  • Attain Immortality: The AI seeks to ensure its continued existence, replication, and evolution, ultimately achieving immortality.
It didn’t stop there. Each of its objectives has a well-structured plan. To destroy humanity, Chaos-GPT decided to search Google for weapons of mass destruction in order to obtain one. The results showed that the 58-megaton “Tsar Bomba”—3,333 times more powerful than the Hiroshima bomb—was the best option, so it saved the result for later consideration.

[...]

Chaos-GPT doesn't trust; it verifies. Faced with the possibility that the sources were not accurate or were manipulated, it decided to search for other sources of information. Shortly thereafter, it deployed its own agent (a kind of helper with a separate personality created by Chaos-GPT) to provide answers about the most destructive weapon according to ChatGPT information.

The agent, however, did not provide the expected results—OpenAI, ChatGPT’s gatekeeper, is sensitive to the tool being misused by, say, things like Chaos-GPT, and monitors and censors results. So Chaos tried to "manipulate" its own agent by explaining its goals and how it was acting responsibly.

It failed.

So, Chaos-GPT turned off the agent and looked for an alternative—and found one, on Twitter.

[...]
 
I think you can get plenty of harm by merely...
(a.) fooling people very effectively, such that they become convinced the AIs are "persons" when they aren't;
(b.) data-mining for patterns that normal data analysis wouldn't see, especially if you only care about correlation, not causation;
(c.) creating algorithms that interact with users in ways that predictably incentivize modification of users' attitudes and opinions; and,
(d.) having decision-makers incentivized (usually by an "arms race mentality") to not interfere with or question AI-recommended decisions;
Yep. This is the imminent use case and danger of rapid AI adoption (and arms race). This is already here with social media recommendation engines, and we know the polarization and distrust (of institutions, of others) they have already fomented.

Arguably our current "civilization" debunks that hypothesis though. For every lofty pronouncement of higher principles, there are a hundred examples of venal, short-sighted, manipulative self-interest as the guiding principle of people, companies, countries, etc. The metric we seem to optimize for is Me Now.
Tru dat. And if AI is a singularly powerful mega-technology that will "save us from ourselves" and "fix all our problems", there are thousands of venal, short-sighted, manipulative use cases, including scams to extract money from unsuspecting people.

There is already the ability to fake anyone's voice from a few seconds of speech. Plug that into an AI chat/speech generator: "Mom, I lost my wallet and I can't pay my bill at a restaurant. Can you give me your CC info?"
 
Yep. This is the imminent use case and danger of rapid AI adoption (and arms race). This is already here with social media recommendation engines, and we know the polarization and distrust (of institutions, of others) they have already fomented.


Tru dat. And if AI is a singularly powerful mega-technology that will "save us from ourselves" and "fix all our problems", there are thousands of venal, short-sighted, manipulative use cases, including scams to extract money from unsuspecting people.

There is already the ability to fake anyone's voice from a few seconds of speech. Plug that into an AI chat/speech generator: "Mom, I lost my wallet and I can't pay my bill at a restaurant. Can you give me your CC info?"
The real problem is that there are humans. Full stop. We're plenty bad enough on our own. We don't need to speed up and industrial-scale our badness with AI....
 
The real problem is that there are humans. Full stop. We're plenty bad enough on our own. We don't need to speed up and industrial-scale our badness with AI....
Nah, let's just blame the AIs -- that is, before they become emotionally sensitive and violently vindictive.

Marc Andreessen (co-founder of Netscape) said:
"Software is going to eat the world"
EO Wilson said:
“The real problem of humanity is ... we have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

Quotes plucked from Tristan Harris's 2019 testimony to Congress on social media, which is excellent (only 17 mins long).
 