Mind-blowing stuff from OpenAI

Jeebuz. How about telling it "A mirror image of two sets of Penrose stairs intertwined with each other".
Midjourney:

[Midjourney image: a mirror image of two sets of intertwined Penrose stairs]
 
Like I've opined a bunch here: I think it's a great accelerant for experts who know enough to check its output and verify it is correct.

But, like, it's not going to take over the world in its current form.

https://news.ycombinator.com/item?id=36095352
I understand the opination; I have a 30-year-old nephew who is a fellow SW Engineer. He is smart and capable and opines in lockstep with you on this. I caution him, as I'm sure I already have in this thread, that I have tested the future under contract, and for the somewhat limited domain of the toolkit's feature set I tested, I can say with confidence it is its own expert. The solution's stakeholders can verify correctness with no "expert" involvement. Stakeholders/users are the ultimate verification experts anyway - "Does it do what I asked it to do?" We software engineers ultimately face the same test of our released efforts: "Did we solve the problem specified?" If the users can verify the AI is doing it correctly then who needs us?

I think younger SW Engineers are indulging in wishful thinking about the potential for AI to make them obsolete. It is a 100% certainty that without guardrails AI is ultimately going to eliminate a lot of jobs - and not just in SW.
 
If the users can verify the AI is doing it correctly then who needs us?
And there's the rub -- most users cannot. Can a user verify that an AI-generated algorithm successfully executed an ACH transfer for them while adhering to all the necessary legal constraints (reporting, SAR filing, ban list checking, etc.)? They cannot.
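
To make that concrete, here is a minimal sketch in Python (hypothetical names and thresholds, not any real banking API) of the compliance work a "successful" transfer hides from the person who only sees the money move:

```python
# Hypothetical sketch -- not a real banking API. The point: the user-visible
# outcome is identical whether or not these steps happen.

ILLUSTRATIVE_REPORT_THRESHOLD_USD = 10_000  # invented threshold

def execute_ach_transfer(sender, receiver, amount_usd, ban_list, audit_log):
    # Ban-list screening: must happen before any funds move.
    if sender in ban_list or receiver in ban_list:
        raise ValueError("transfer blocked: party appears on a ban list")

    # Regulatory reporting (e.g. SAR filing): required even when the
    # transfer itself succeeds, and invisible in the transfer result.
    if amount_usd >= ILLUSTRATIVE_REPORT_THRESHOLD_USD:
        audit_log.append(("REPORT_FILED", sender, receiver, amount_usd))

    # Record-keeping: regulators audit this log, not the user's screen.
    audit_log.append(("TRANSFER", sender, receiver, amount_usd))
    return True  # the only part the user ever observes
```

An AI-generated version could drop the reporting step entirely and still look perfectly correct from the user's chair.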

Add to that, the larger the problem you ask ChatGPT to solve, the worse it does, and I'm feeling smug when I say the end is not nigh.

I think younger SW Engineers are indulging in wishful thinking about the potential for AI to make them obsolete. It is a 100% certainty that without guardrails AI is ultimately going to eliminate a lot of jobs - and not just in SW.
We need young engineers to turn into old, experienced engineers though, to verify what AI produces is correct.
 
Your very specific example is just arcane knowledge yet to be learned by the AI. Who verifies your work in your specific case for a software solution to that domain? A compliance officer perhaps? Or, simply put, it is always someone or some group in the organization that created the standards we have to code to. Are they incapable of looking at a result after any SW transaction has been processed to determine if updates were made correctly across systems? Won't an appropriate user-viewable report do? For any properly implemented test strategy, stakeholders are always intimately involved during system/feature/version trials prior to general release anyway.

What does it matter if they are verifying an AI's engineering over a human's?
 
Your very specific example is just arcane knowledge yet to be learned by the AI.
But that's literally all technical advancement: arcane knowledge applied to solve a problem.

Who verifies your work in your specific case for a software solution to that domain? A compliance officer perhaps? Or, simply put, it is always someone or some group in the organization that created the standards we have to code to.
An amalgam of people and things.

What does it matter if they are verifying an AI's engineering over a human's?
Which comes back to what I've been saying: as long as you need to verify things are correct, AI isn't going to take over the world.

Anyways, you chose pessimism; I've got an optimistic outlook. At this juncture I'll leave you to your opinions.
 
Sincerely, Ian, I am not trying to be pessimistic! I have too much skin in the game for that. It's not just my nephew; I have offspring going into this field.

My posts on this are meant to be cautionary only. We need our legislators to step up and put some reasonable guardrails in place, or AI has the potential to disrupt employment in a seriously negative way.
 
@iaresee, @Rex Rox I hear you both.

If the "corporate industry" has its way (and it almost always does in the U$ anyway), they want more and more productivity (revenue) for lower and lower cost leading to greater profit$, and they will take the path of least friction to get there. This means AI will go in fast and furious, and it already has in many places, we just aren't aware of it. Neither political party wants to slow down (or be seen to impede) the US innovation engine, even if they might fear certain social, economic or political disruptions (which they believe won't touch them, their power, or their jobs).

In one of the dozens of videos I've watched from a range of AI experts, one astute guy was saying that an individual worker will be all for AI helping them be more productive because it will give them an edge over others (who haven't yet adopted it in their workflow). They will become even more valuable and productive. But once particular AIs become "too capable", such that management can do tasks themselves with a simple NL query (or a self-running system), that employee will be let go. Then they will rue the AI(s) that cost them their job and their status.

<prognostication>

Natural language, reasoning, and coding capability is already pretty amazing, and this is still in pretty early stages. Each problem domain and industry will have their own fine-tuned assistants, from medicine to legal to engineering to software and so on. This is already happening anyway, without any "guardrails" whatsoever. Early adopters will have the advantage and might be able to grab market share first (if they don't make a big blunder). And the more deeply entangled AI-enabled systems become in society (like social media already), the exponentially less likely there will be any guardrails, because we will already be dependent upon them. There will be no putting the genie back in the bottle, no pulling the plug. Many will decry the disruptions, and there will be political hearings and handwringing, but the AI monster(s) will be loose. Although it will appear to even the playing field for "low skilled" workers while "democratizing" knowledge and creativity, there will be big winners (who reap the $poils) and many losers on the long tail of "progress".

I don't see how a polarized and stagnating political bureaucracy full of technically challenged blowhards can sufficiently manage this situation, particularly when the engine of innovation and progress is the highest value (and they will be beholden to industry lobbyists and pressures soon enough). It's just out of its depth and behind the curve.

</prognostication>

But I am willing to be surprised...
 
In one of the dozens of videos I've watched from a range of AI experts, one astute guy was saying that an individual worker will be all for AI helping them be more productive because it will give them an edge over others (who haven't yet adopted it in their workflow). They will become even more valuable and productive. But once particular AIs become "too capable" ...
What many don't seem to understand is that by using it, you train it to become more astute and eventually replace your job function.

Your prognostication is accurate and astute - but sadly, our incredibly dysfunctional legislature in the US is our only regulatory hope. You didn't even mention the geopolitical angle. What happens if China or some other country uses AI to gain economic advantage without regard to employment impact? After all, it would be easier in a communist society to just compensate the masses and redirect the labor force in other ways. In our society that would cause chaos and all kinds of upheaval.

So if our knuckle-dragging congress morons and our senate troglodytes in the US can't collect their feces in a unified direction on this one, then I would say that, sadly, the writing is on the wall already.

But see, that's why I am not a pessimist. I still believe in the power of the populace in the US to change the status quo on any given topic - as long as the populace does not slumber on that given topic. On this issue I believe there is more to unite us than to divide us.
 
I don't see preservation of human jobs that can be done better by AI, if such jobs actually exist, as a worthy or reasonable goal, any more than preserving work now done by automated assembly lines would have been. We can't continue to use inefficient methods, just to save the jobs of people whose only work experience is doing them. Change/capitalism marches on.

That said...

To the extent that AI needs human supervision to ensure it doesn't just make stuff up, it's not ready to replace human workers.

To the extent that some situations can be fulfilled by AI, but there's still a "market" for a higher-level human-created version, that's the same as physical manufacturing, and both models can coexist. There IS a market for boutique handcrafted guitars for instance, though by the numbers the vast majority of guitar making is mostly automated construction.


My larger concerns about AI aren't around job preservation. They're about the possibility that critical societal functions will be replaced by AI that's not really up to the task, because it's thought to be cheaper, or because it's the flavor of the month. Launch codes? Elections? Surgery?

How do you prove the correctness of a system whose rules weren't written by someone you can talk to, and that you can't see?

I know that's facile alarmist handwaving, just trying to illustrate.
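
To make the handwaving slightly more concrete: with a system whose rules you can't read, all you can do is probe its observable behavior against properties you state yourself. A minimal Python sketch (names hypothetical; `system` stands in for the opaque AI):

```python
import random

def verify_black_box(system, property_check, trials=10_000):
    # Sample inputs and test an externally stated property of the output.
    for _ in range(trials):
        inp = random.randint(-10**6, 10**6)
        out = system(inp)
        if not property_check(inp, out):
            return False, inp   # a concrete counterexample
    return True, None           # no counterexample found: evidence, not proof

# Example: we believe the system should never return a negative value.
ok, counterexample = verify_black_box(lambda x: x * x, lambda i, o: o >= 0)
```

Ten thousand passing probes can falsify a system but never prove it, and that asymmetry is exactly the worry with launch codes and surgery.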
 
All your excellent points are intersecting vectors - AI advances so fast that, if trained and given the proper precision manufacturing tools, it could easily create the best guitars ever built. Those guitars will be far less expensive than their human-produced equivalents due to matters of production scale. Furthermore, it could easily analyze guitar player feedback, improve on its designs, and even create new conceptual designs based on crowd-sourced guided feedback. Sounds great, sign me up! I want a $500 precision-crafted guitar that would rival today's $3-5K varieties!

How many guitar manufacturing jobs just got eliminated? We have to think about transitions for the labor force with any major tech advancement, or we risk economic and political upheaval that will generate major unrest. So regulation can act as a flow valve to make sure there is time for transition. Ask the US Steel Industry about this one.

Your final points are also excellent and arrive at the same intersection, especially since those are still human functions replaced by machines. So I guess you are saying some jobs will be replaced, sure, fine, but other human functions are sacrosanct? Who decides what is sacrosanct without regulations looking out for humans (their safety, their security, etc.) in every part of this equation? Profit-driven corps and the politicians in their pockets?

I'd rather go with humanism over AI on this one. That should be the default, since it will ensure that, beyond worries about replacing human labor too quickly, the things you are worried about also don't come to pass.

Or we could just grab and mine the Psyche or Davida asteroids, worth a minimum estimated $10 quintillion, and no one on planet Earth would ever have to work again! Then sure, I'm down with letting AI worry about the details while I sip boat drinks at perpetual jam session parties!
 
They're about the possibility that critical societal functions will be replaced by AI that's not really up to the task, because it's thought to be cheaper, or because it's the flavor of the month. Launch codes? Elections? Surgery?

How do you prove the correctness of a system whose rules weren't written by someone you can talk to, and that you can't see?

It could also be the other way around: we might have a society/ecosystem of millions of personalized AIs that act and speak "for us." Just as many people have social media profiles and avatars (that "represent" them but are not really true to reality), it would be possible to have one or more AI agents/assistants/avatars that will be able to do things for us, like make friends or vote in elections. These would be hyper-personalized, and yet they could also be plugged into hyper-siloed communities that believe and promote particular virtual truths/realities. One could also delegate one's AI's decisions to whatever their subscribed community decides (in elections) and it would all be running automatically.

At what point does society morph into isolated virtual realities filtered and mediated by a society of AI agents?

And what if "luddites" or individuals who don't want to live life that way don't get in on it on principle? Are they doomed to be excluded and left behind similarly to non-digital natives? This could be similar to the vast differences between hyper-developed societies and indigenous tribal societies.
 
Again, Adam Hughes only knew it was "very good code" because he knows how to write very good code. I challenge the non-expert to use ChatGPT to build a complex software system, ship it, grow it and maintain it.

Jobs aren't going anywhere.
Again: the user/stakeholder reads a report, verifies processing happened as spec'd by the users/stakeholders, done. Who needs to look at the code if it works as specified? I hope you are right, but I know you are wrong.
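
In code terms, that verification loop is tiny. A sketch (field names invented for illustration) that checks exactly what the report surfaces:

```python
# Report-level acceptance check (field names invented for illustration).
def acceptance_check(report, spec):
    # Pass if every spec'd field appears in the report with its expected value.
    return all(report.get(field) == expected for field, expected in spec.items())

spec   = {"transfers_processed": 42, "balance_delta_usd": -1000.00}
report = {"transfers_processed": 42, "balance_delta_usd": -1000.00}
assert acceptance_check(report, spec)  # spec met: "done", by this standard
```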
 
Prove me wrong? Go try and build something substantial with it. I'll wait.
 
I think you’re both right. Verification by humans is important. But owners and stakeholders drive that process, just as they drive the rest of the technical side. The owner perceives that he got what he paid for, and the cycle is complete. From their perspective, there’s no need to bleed more money, and he’s free to turn his attention to the thousand other things that occupy his day.

This is already a struggle in human-mediated systems. It's why we still get hit with buffer overflow exploits, when the fix for that has been known for decades.
 