Mind-blowing stuff from OpenAI

The reason is that it is biased by its creator. Even if you had an AI developed by another AI, the bias would still be present, since it can ultimately be traced back to a human creator. For something entertaining, look up Ben Shapiro demonstrating how to force an AI into a logical corner by using its own bias against it.

I'm sure this chatbot isn't an AI apologist (paragraph 3), but it glosses over potential risks and problems -- like most humans.

 
/re training. I remember this funny anecdote about training a neural network to classify dogs and wolves in images.

The model worked pretty well on new images. However, at one point the researchers realized that the network was really just looking for snow in the pictures and then saying "wolf".
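
That anecdote is an instance of what's now called shortcut learning. A minimal sketch (made-up data and a deliberately naive single-feature "classifier", not the actual study) of how a spurious background feature can out-predict the causal one during training and then fail on new data:

```python
# Hypothetical sketch of "shortcut learning" (made-up data, not the actual
# dog/wolf study): a naive learner that keeps whichever single feature best
# predicts the training labels will latch onto a background correlate.
import random

random.seed(0)

def make_example(is_wolf, p_snow_if_wolf):
    # "pointy_ears" is the noisy causal feature; "snow" is pure background.
    pointy = is_wolf if random.random() < 0.9 else not is_wolf
    snow = random.random() < (p_snow_if_wolf if is_wolf else 1 - p_snow_if_wolf)
    return {"snow": snow, "pointy_ears": pointy, "wolf": is_wolf}

def accuracy(data, feature):
    return sum(ex[feature] == ex["wolf"] for ex in data) / len(data)

# Training set: wolves almost always photographed in snow, dogs almost never.
train = [make_example(i % 2 == 0, 0.98) for i in range(1000)]

# "Training": keep the single most predictive feature on the training set.
best = max(["snow", "pointy_ears"], key=lambda f: accuracy(train, f))
print("chosen feature:", best)

# New environment: snow is now uncorrelated with the animal.
test = [make_example(i % 2 == 0, 0.5) for i in range(1000)]
print("test accuracy:", accuracy(test, best))  # near chance
```

With these numbers the snow feature wins during training (roughly 98% vs 90%) and then collapses to roughly coin-flip accuracy once snow stops correlating with the label.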
 
I'm sure this chatbot isn't an AI apologist (paragraph 3), but it glosses over potential risks and problems -- like most humans.

The handful of examples I've seen of supposedly stellar answers from AI tends to stay on or near the surface of an issue. My take is that some folks fully embrace it and give it too much credit. But that wasn't what I was referring to above. I was referring to questions of morals and ethics. If you haven't looked up Ben Shapiro yet, he makes an AI system contradict itself on the issue of the sanctity of human life by exploiting the bias.
 
If you haven't looked up Ben Shapiro yet, he makes an AI system contradict itself on the issue of the sanctity of human life by exploiting the bias.
Will check it out

I was referring to questions of morals and ethics.
I guess my read on this is that AI will be biased in all cases, including morals and ethics.

Indeed, even a seemingly "innocent" (non-moral/a-moral) objective, e.g. maximizing production of paperclips [2] [3], can result in radically unethical or devastating practical results. And any moral value or principle, taken to its extreme, can also have devastating results, e.g. not allowing any human to die.
 
@fcs101 If it was this one, I agree that it was generally forming (parroting) answers based on a kind of cultural-majority view of certain ideas, e.g. female vs. male or baby vs. viable human being. Yet if the AI was based on a more libertarian or particular religious corpus of texts, it would parrot those ideas. It can only output something related to its training data.

I think taking a string of Q&A and finding "contradictions" wouldn't be hard with any chatbot or any human, and Shapiro's strategy is often to trap people by their statements and self-contradictions. But we all have apparent and actual self-contradictions, especially in differing contexts.

What it does show is that with these chatbots there is no underlying "ontological" model of the world because it doesn't know the world, only our talking about it. And facts (or valid concepts) to one person or group may be disputed by another.
 
Artificial intelligence is an apt description of that person and that site linked in the previous post.
What a pantload of nonsense.
 
@fcs101 If it was this one, I agree that it was generally forming (parroting) answers based on a kind of cultural-majority view of certain ideas, e.g. female vs. male or baby vs. viable human being. Yet if the AI was based on a more libertarian or particular religious corpus of texts, it would parrot those ideas. It can only output something related to its training data.

I think taking a string of Q&A and finding "contradictions" wouldn't be hard with any chatbot or any human, and Shapiro's strategy is often to trap people by their statements and self-contradictions. But we all have apparent and actual self-contradictions, especially in differing contexts.

What it does show is that with these chatbots there is no underlying "ontological" model of the world because it doesn't know the world, only our talking about it. And facts (or valid concepts) to one person or group may be disputed by another.
I'm not arguing for a libertarian or any other persuasion. I'm just pointing out that any AI will carry bias, as it was ultimately designed by a human. That bias would then lead to tyranny if allowed to play out (e.g., allowing AI to determine policies). I think we're pretty close to being on the same page.

I do think AI can be a very useful tool; you just have to understand that it is a tool. I've just seen some other posts (not in this thread) where people were gaga over the answers they received, which I didn't find too enlightening.

When people do have self-contradictions in their personal philosophy and value system, it's usually either because they're driven by selfishness/self-interest or because they're guided by emotions and haven't logically thought it through.
 
When people do have self-contradictions in their personal philosophy and value system, it's usually either because they're driven by selfishness/self-interest or because they're guided by emotions and haven't logically thought it through.

… or, just not knowing what the fk we’re talking about. :p Want to debate whether “fizz” is a figment or real? Anyone?
 
This bot gets my vote for world leader...

The problem is that the AI is governed by someone, and that someone can be pre-biased, bribed, or compelled to tweak what the AI does. What it says now is not necessarily going to be consistent forever.

For example, the AI was recently censored from providing climate-change crisis counter-arguments.
 
However at one point the researchers realized that the network was really looking for snow in the pictures and then said "wolf".
Yeah. All NNs can do is look for features, associations, correlations, etc. in the training data based on the training goals, and for NNs that goal is typically accuracy of classification.

Snow might be a 99% "accurate" heuristic in the high latitudes unpopulated by humans, but it's not a causal differentiator between dog and wolf.
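
Some back-of-envelope arithmetic (all numbers made up) makes the same point: the snow heuristic's accuracy is just a function of how the environment correlates snow with the two classes, so it degrades as soon as the environment changes:

```python
# Back-of-envelope check (all numbers made up) of the "snow => wolf" rule:
# its accuracy is determined entirely by how the environment ties snow to
# each class, not by anything about dogs or wolves themselves.

def heuristic_accuracy(p_wolf, p_snow_given_wolf, p_snow_given_dog):
    # Rule: predict "wolf" iff snow is present in the picture.
    p_dog = 1 - p_wolf
    return p_wolf * p_snow_given_wolf + p_dog * (1 - p_snow_given_dog)

# Remote high latitudes: wolves nearly always in snow, dogs almost never.
print(heuristic_accuracy(0.5, 0.99, 0.01))  # ~0.99

# A snowy suburb: plenty of dogs in snow too; same rule, much worse.
print(heuristic_accuracy(0.5, 0.99, 0.60))  # ~0.70
```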
 
I do think AI can be a very useful tool; you just have to understand that it is a tool. I've just seen some other posts (not in this thread) where people were gaga over the answers they received, which I didn't find too enlightening.

Agreed. And there is going to be (already is, as you point out) a viral "wow that's amazing" reaction from the public as more and more comes out and becomes readily available. It will titillate and fascinate and distract us even more -- which suits the businesses that will use it for their advantage. It will be monetized however suits business interests and there will be no restraint from the providers or the users.

If social media is a closed-loop attention vortex, interacting with AI (esp real-time) will be orders of magnitude more compelling and addictive, esp as it adapts to the individual's whims. We are wired for dopamine. Get ready for the ride.
 
For example, the AI was recently censored from providing climate-change crisis counter-arguments.
Interesting. I tried to find this but couldn't find it. Do you have a reference?

There is also a huge difference between climate science using AI to detect trends and make predictions versus AI chatbots arguing for or against climate change.

The former is based on data (yes, which can be biased or manipulated -- but isn't necessarily), while the latter is more about constructing a logical, narrative, or emotionally persuasive argument based on what humans have already written or said for or against climate change.
 
@fcs101
I think taking a string of Q&A and finding "contradictions" wouldn't be hard with any chatbot or any human, and Shapiro's strategy is often to trap people by their statements and self-contradictions. But we all have apparent and actual self-contradictions, especially in differing contexts.
Socrates/Plato was pointing out logical contradictions in humans long before AI was a thing. Thinking of the analogy of the cave in Plato's Republic. Perhaps the underlying question is how AI differs from humans in perceiving what is "truth" or "fact". We humans become very invested in the belief that our viewpoint and/or opinions are an accurate reflection of what is real. How to approach this conundrum continues to be an issue that humans grapple with. What is the source of Truth? God, Spirit, biological determinism? Can a machine "comprehend" the mysteries of existence? It seems like the danger is giving AI the power to manipulate and control without addressing these deeper issues.
 
Interesting. I tried to find this but couldn't find it. Do you have a reference?

There is also a huge difference between climate science using AI to detect trends and make predictions versus AI chatbots arguing for or against climate change.

The former is based on data (yes, which can be biased or manipulated -- but isn't necessarily), while the latter is more about constructing a logical, narrative, or emotionally persuasive argument based on what humans have already written or said for or against climate change.

Here is an example. tbh I feel dirty bringing a twitter link into this place haha
 
Socrates/Plato was pointing out logical contradictions in humans long before AI was a thing. Thinking of the analogy of the cave in Plato's Republic. Perhaps the underlying question is how AI differs from humans in perceiving what is "truth" or "fact". We humans become very invested in the belief that our viewpoint and/or opinions are an accurate reflection of what is real. How to approach this conundrum continues to be an issue that humans grapple with. What is the source of Truth? God, Spirit, biological determinism? Can a machine "comprehend" the mysteries of existence?
Great points.

To riff on that, truth and fact are human conceptions with certain practical, social, or moral utility or impact. A purely language-based AI could talk about them like we do, but it couldn't determine "true facts" about the world on its own (e.g. anything outside its training data).

If an AI creator defined a goal of sorting the categories {universally true fact | contextually/conditionally true fact | opinion | universal falsity}, it would need to be trained on data previously so categorized by humans. So it would already be subject to explicit or implicit bias around these categories.
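
A toy sketch of that point (the statements and annotator groups are invented for illustration): the "gold" category labels a model would train on shift with the composition of the people doing the labeling:

```python
# Toy illustration (statements and annotator groups invented for this
# sketch): the "gold" category labels a model trains on depend on who
# annotated the corpus, so annotator bias is baked in before training.

statements = ["S1", "S2", "S3", "S4"]

# Two annotator groups that disagree on the borderline statements S2 and S4.
group_a = {"S1": "fact", "S2": "fact",    "S3": "opinion", "S4": "opinion"}
group_b = {"S1": "fact", "S2": "opinion", "S3": "opinion", "S4": "fact"}

def gold_labels(groups, weights):
    # Weighted vote over annotator groups, per statement.
    gold = {}
    for s in statements:
        votes = {}
        for labels, w in zip(groups, weights):
            votes[labels[s]] = votes.get(labels[s], 0) + w
        gold[s] = max(votes, key=votes.get)
    return gold

# A corpus labeled mostly by group A vs mostly by group B: different "truth".
print(gold_labels([group_a, group_b], [3, 1]))
print(gold_labels([group_a, group_b], [1, 3]))
```

The two label sets disagree on S2 and S4: the same statements end up as "fact" or "opinion" depending on the annotator mix, and any model trained on either labeling simply inherits that choice.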

OTOH, if an AI were trained on all existing data, knowledge, books, speech, etc. and asked what factors are tied to definitive outcomes, ranging from the sciences to psychology to philosophy to religion, perhaps truths/facts could be inferred as causes (99.9-100%) in a particular domain, while everything else would fall into various categories of relative truth or relevance.

But for humans some "truths" don't have to connect with "objective" cause-and-effect at all (e.g. believing that a particular person is a divine being), so I don't know how a truth-finding AI could say anything meaningful about those truths.
 
Great points.

To riff on that, truth and fact are human conceptions with certain practical, social, or moral utility or impact. A purely language-based AI could talk about them like we do, but it couldn't determine "true facts" about the world on its own (e.g. anything outside its training data).
Do you really think Truth is a human construct?
 
Do you really think Truth is a human construct?
I don't think individual particular truths (or facts) are constructs as such (see my 3rd paragraph), but "truth" is a general construct, concept, or label for lots of different ideas ranging from the scientific to the mathematical to the religious. Different domains have different particular truths which might not apply universally or to other domains. The term "love" is another linguistic construct that can indicate a huge range of feelings, ideas, acts, and so on, which can be largely personally or culturally determined.

For example, it was once thought in geometry that the sum of the interior angles for all triangles was necessarily and universally 180 degrees. And in a Euclidean space this is true. But in general Riemannian geometry, this isn't necessarily true.
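
For reference, the standard results behind that example, where \(\alpha, \beta, \gamma\) are a triangle's interior angles and \(A\) its area:

```latex
% Euclidean plane:
\alpha + \beta + \gamma = \pi
% Sphere of radius R (Girard's theorem): the angle sum exceeds \pi
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
% Hyperbolic plane of constant curvature -1: the angle sum falls short of \pi
\alpha + \beta + \gamma = \pi - A
```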

But this is all very complicated to sort out 😊
 