I still think too much is being made of AI capability. No doubt it's promising in terms of assembling information and communicating it back to a user in a new way (or, from a negative standpoint, spreading disinformation in a new way). Despite this, I'm skeptical (as an old IT fart) that it can generate new and original ideas or intelligence. Not sure there's really something new under the sun here. If there's new knowledge to be found, I doubt AI will find it first.
I think you're correct in a sense, and in fact I've commented to that effect somewhere earlier in the thread. (I have a bit of an "in" on some of this information inasmuch as my "day job" involves me in related technologies.)
It is being oversold, and in precisely this way: Not one firm in the industry is trying to make an AI that's actually intelligent, actually personal, actually a rational agent. Not one. Nobody's trying; nobody wants to try; and nobody has any idea how they would try if they wanted to.
Nobody has any idea how they would try, because the only thing they've yet attempted is "T-cubed": "Tricking Turing Tests." That is to say: they're taking existing technologies and refining them with the goal of deceiving humans into thinking they're interacting with a person. What they're doing is analogous to what Renaissance craftsmen did when they fabricated clockwork talking heads with moving eyes and jaws and dimples and whatnot: talking machines -- a kind of automated puppetry -- which looked eerily like people talking. It was a tour de force for a good artificer, and one wealthy noble would try to outdo another much as they did with gardens and fountains. The first talking heads were crude, with a mere moving jaw. Later versions could rotate the head, wink, and simulate other expressions: all with moving brass and wood and leather, and all very clever. Guests found it entertaining, and occasionally creepy. All good fun!
Now, every technological advance in that area was much the same as what we're doing now: combining techniques for fooling humans, and improving them with various refinements to fool humans better.
The AI firms are getting better and better at faking people out, or delighting people with the nifty tech of it all, but they aren't "making persons." They aren't trying to "make persons." You can see this in the fact that, as tech firms always do, they have QA people who test new features. And what kind of test is being used on AI language models? They're all variations on Turing Tests. That is to say: they're all tests of whether humans are fooled. If the way you test a feature is by its ability to deceive, that's good reason to believe the feature you're working on is deception.
So, it's clever fakery, not real personality. Nobody's working on the "hard problem of consciousness" save philosophers, and the philosophers are increasingly certain that we have no answers in that realm: the Chinese Room thought experiment, the Ross and Kripke analyses of the "quus" function, and the old "What is it like to be a bat?" question about qualia seem increasingly dispositive: consciousness is non-material. It may somehow emerge from the material, or it may have an entirely different causal chain unrelated to matter. In the last 30 years we've learned a lot about neuroscience, but nothing relevant to changing that conclusion.
As for why nobody wants to try: firms want to sell products. If there's the slightest hint that their products are persons, they risk accusations of slavery. They could literally find their personnel and facilities the targets of violent abolitionists. Microsoft wants money; they don't want to be the bad guys in the history of the next Civil War.
So I think you're right about that, as far as it goes. There isn't actually "new intelligence" in the sense of creating new rational agents.
But, here's the problem: I don't think it matters.
I think you can get plenty of harm by merely...
(a.) fooling people very effectively, such that they become convinced the AIs are "persons" when they aren't;
(b.) data-mining for patterns that normal data analysis wouldn't see, especially if you only care about correlation, not causation;
(c.) creating algorithms that interact with users in ways that predictably incentivize modification of users' attitudes and opinions; and,
(d.) having decision-makers incentivized (usually by an "arms race" mentality) not to interfere with or question AI-recommended decisions;
...and all of that is actually going on in the AI world, and all of it has already caused harms of one kind or another.
So I don't think that the risk is Skynet Waking Up, deciding we suck, and nuking us.
I think the risk is that Skynet looks like it woke up, and the humans go to war with one another to liberate it;
or,
Skynet's been put in charge of your savings, and discovers that the recent temperature record in Australia is highly correlated with the U.S. stock market, and destroys your retirement savings because of some unexpected weather;
or,
Skynet's been put in charge of selecting which news stories you read, and of creating friendly Turing-passing "users" to interact with you in the comments section of news articles (or in your favorite music-oriented forums), for the purpose of nudging you to prefer and trust one news source over others, with the result that your view of the world, and of other people's opinions, is "curated" by persons who want to make money off you, or who have an ideological axe to grind that you'd oppose;
or,
Every firm gets its own Skynet to manipulate you in the above ways, because they fear going out of business if they're the only ones without a Skynet manipulating users... and at the same time, every government gets a Skynet to manipulate you in the above ways, because every other government is already using its own Skynet to try to make your country fall apart, and your own government wants a Skynet to fight the influence of all the other Skynets: it wants to use AI influence to pacify a populace that would otherwise be radicalized into internal dissension by the other governments' Skynets.
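Point (b) above, and the retirement-savings scenario in particular, are easy to demonstrate. Here's a minimal Python sketch (all names and the random seed are arbitrary; no real temperature or market data involved): two completely independent random walks, one standing in for temperatures and one for a stock index, will often show a correlation far from zero even though neither has anything to do with the other. A pattern-miner that cares only about correlation will happily "discover" such a relationship.

```python
import random

random.seed(4)  # arbitrary seed, chosen only so the demo is repeatable

# Two INDEPENDENT random walks -- stand-ins for "Australian temperature
# anomalies" and "a U.S. stock index." Neither influences the other.
temps, index = [0.0], [0.0]
for _ in range(500):
    temps.append(temps[-1] + random.gauss(0, 1))
    index.append(index[-1] + random.gauss(0, 1))

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = correlation(temps, index)
print(f"Correlation between two unrelated random walks: {r:+.2f}")
```

Random walks are notorious for this: because each series drifts, almost any pair will look related over a finite window. That's the statistical trap behind letting an AI trade your savings on the weather.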
That stuff's already happening. (And, not unrelatedly: TikTok should absolutely be banned, and all of its commercial agents found anywhere within U.S. jurisdictions should be arrested, interrogated, and deported.)
So, my take is that the risks are there. But they aren't the "computer waking up" kinds of risks. Those are a red herring, except insofar as humans can be deceived into thinking they're real.
The real risks can be summed up as: Humans can be deceived and manipulated, and their privacy violated, and this is a technology for doing it cheaply, on an industrial scale.