Jason Scott
Fractal Fanatic
Midjourney: Jeebuz. How about telling it "A mirror image of two sets of Penrose stairs intertwined with each other".
![0_3.webp](/proxy.php?image=https%3A%2F%2Fcdn.midjourney.com%2F54889480-4489-4ce3-9866-3c54edfe85cb%2F0_3.webp&hash=9e2d95b6ec8cc68c540070dd98f89970)
> Like I've opined a bunch here: I think it's a great accelerant for experts who know enough to check its output and verify it is correct. But, like, it's not going to take over the world in its current form.
> https://news.ycombinator.com/item?id=36095352

I understand the opinion; I have a 30-year-old nephew who is a fellow SW engineer. He is smart and capable and opines in lockstep with you on this. I caution him, as I'm sure I already have in this thread, that I have tested the future under contract, and for the somewhat limited domain of the toolkit's feature set I tested, I can say with confidence it is its own expert. The solution's stakeholders can verify correctness with no "expert" involvement. Stakeholders/users are the ultimate verification experts anyway - "Does it do what I asked it to do?" We software engineers ultimately face the same test of our released efforts: "Did we solve the problem specified?" If the users can verify the AI is doing it correctly, then who needs us?
> If the users can verify the AI is doing it correctly, then who needs us?

And there's the rub -- most users cannot. Can a user verify that an AI-generated algorithm successfully executed an ACH transfer for them while adhering to all the necessary legal constraints (reporting, SAR filing, ban list checking, etc.)? They cannot.
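To make the ACH example concrete, here is a minimal, hypothetical sketch of the kind of compliance gates a transfer routine has to get right. Every name here (`check_ban_list`, `needs_sar_filing`, the threshold) is invented for illustration, not real banking logic -- the point is that none of these gates are visible in the receipt a user sees:

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    sender: str
    receiver: str
    amount_usd: float

# Hypothetical stand-ins for real compliance services.
BANNED_PARTIES = {"blocked-entity-123"}
SAR_THRESHOLD_USD = 10_000  # illustrative threshold only

def check_ban_list(t: Transfer) -> bool:
    """OFAC-style screening: both parties must clear the ban list."""
    return t.sender not in BANNED_PARTIES and t.receiver not in BANNED_PARTIES

def needs_sar_filing(t: Transfer) -> bool:
    """Flag transfers that may require a Suspicious Activity Report."""
    return t.amount_usd >= SAR_THRESHOLD_USD

def execute_transfer(t: Transfer) -> dict:
    """Run the compliance gates, then 'move' the money.

    The returned receipt is all the user ever sees -- it says nothing
    about whether the gates themselves were implemented correctly.
    """
    if not check_ban_list(t):
        return {"status": "rejected", "reason": "ban-list hit"}
    receipt = {"status": "completed", "amount": t.amount_usd}
    if needs_sar_filing(t):
        # In a real system this would file with a regulator; here it's a stub.
        receipt["sar_filed"] = True
    return receipt
```

A user reading a `{"status": "completed"}` receipt has no way to tell whether `check_ban_list` silently returns `True` for everyone -- which is exactly the gap that still needs an expert reviewer.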
> I think younger SW Engineers are opining with wishful thinking on the potential for AI to make them obsolete. It is a 100% certainty that without guardrails AI is ultimately going to eliminate a lot of jobs - and not just in SW.

We need young engineers to turn into old, experienced engineers though, to verify what AI produces is correct.
> And there's the rub -- most users cannot. Can a user verify that an AI-generated algorithm successfully executed an ACH transfer for them while adhering to all the necessary legal constraints (reporting, SAR filing, ban list checking, etc.)? They cannot.

Your very specific example is just arcane knowledge yet to be learned by the AI. Who verifies your work in your specific case for a software solution to that domain? A compliance officer perhaps? Simply put, it is always someone (or some group) in the organization that created the standards we have to code to. Are they incapable of looking at a result after any SW transaction has been processed to determine if updates were made correctly across systems? Won't an appropriate user-viewable report do? For any properly implemented test strategy, stakeholders are always intimately involved during system/feature/version trials prior to general release anyway.
Add to that, the larger the problem you ask ChatGPT to solve, the worse it does, and I'm feeling smug when I say the end is not nigh.
> We need young engineers to turn into old, experienced engineers though, to verify what AI produces is correct.
> Your very specific example is just arcane knowledge yet to be learned by the AI.

But that's literally all technical advancement: arcane knowledge applied to solve a problem.
> Who verifies your work in your specific case for a software solution to that domain? A compliance officer perhaps? Simply put, it is always someone (or some group) in the organization that created the standards we have to code to.

As an amalgam of people and things.
> What does it matter if they are verifying an AI's engineering over a human's?

Which comes back to what I've been saying: as long as you need to verify things are correct, AI isn't going to take over the world.

Sincerely, Ian

> But that's literally all technical advancement: arcane knowledge applied to solve a problem.

I am not trying to be pessimistic! I have too much skin in the game for that. It's not just my nephew, I have offspring going into this field.
> As an amalgam of people and things.

> Which comes back to what I've been saying: as long as you need to verify things are correct, AI isn't going to take over the world.

Anyways, you chose pessimism; I've got an optimistic outlook. At this juncture I'll leave you to your opinions.
> In one of the dozens of videos I've watched from a range of AI experts, one astute guy was saying that an individual worker will be all for AI helping them be more productive because it will give them an edge over others (who haven't yet adopted it in their workflow). They will become even more valuable and productive. But once particular AIs become "too capable" ...

What many don't seem to understand is that by using it you train it to become more astute and eventually replace your job function.
> I don't see preservation of human jobs that can be done better by AI, if such jobs actually exist, as a worthy or reasonable goal, any more than preserving work now done by automated assembly lines would have been. We can't continue to use inefficient methods just to save the jobs of people whose only work experience is doing them. Change/capitalism marches on.
>
> That said...
>
> To the extent that AI needs human supervision to ensure it doesn't just make stuff up, it's not ready to replace human workers.
>
> To the extent that some situations can be fulfilled by AI, but there's still a "market" for a higher-level human-created version, that's the same as physical manufacturing, and both models can coexist. There IS a market for boutique handcrafted guitars, for instance, though by the numbers the vast majority of guitar making is mostly automated construction.
>
> My larger concerns about AI aren't around job preservation. They're about the possibility that critical societal functions will be replaced by AI that's not really up to the task, because it's thought to be cheaper, or because it's the flavor of the month. Launch codes? Elections? Surgery?
>
> How do you prove the correctness of a system whose rules weren't written by someone you can talk to, and that you can't see?
>
> I know that's facile alarmist handwaving, just trying to illustrate.

All your excellent points are intersecting vectors - AI advances so fast that, if trained and given the proper precision manufacturing tools, it could easily create the best guitars ever built. Those guitars will be far less expensive than their human-produced equivalents due to matters of production scale. Furthermore, it could easily analyze guitar-player feedback, improve on its designs, and even create new conceptual designs based on crowd-sourced guided feedback. Sounds great, sign me up! I want a $500 precision-crafted guitar that would rival today's $3-5K varieties!
> They're about the possibility that critical societal functions will be replaced by AI that's not really up to the task, because it's thought to be cheaper, or because it's the flavor of the month. Launch codes? Elections? Surgery?

> How do you prove the correctness of a system whose rules weren't written by someone you can talk to, and that you can't see?

AI-generated falsehoods repeated by other AIs, without attribution as having been AI-sourced, great. We're doomed.
> "The end of coding as we know it"
> ChatGPT has come for software developers
> https://www.businessinsider.com/chatgpt-ai-technology-end-of-coding-software-developers-jobs-2023-4

Again, Adam Hughes only knew it was "very good code" because he knows how to write very good code. I challenge the non-expert to use ChatGPT to build a complex software system, ship it, grow it and maintain it.
> Again, Adam Hughes only knew it was "very good code" because he knows how to write very good code. I challenge the non-expert to use ChatGPT to build a complex software system, ship it, grow it and maintain it.

Again, user/stakeholder reads a report - verifies processing happened as spec'd by users/stakeholders, done. Who needs to look at the code if it works as specified? I hope you are right, but I know you are wrong.
Jobs aren't going anywhere.
> Again, user/stakeholder reads a report - verifies processing happened as spec'd by users/stakeholders, done. Who needs to look at the code if it works as specified? I hope you are right, but I know you are wrong.

Prove me wrong? Go try and build something substantial with it. I'll wait.