Mind blowing stuff from OpenAI

Prove me wrong? Go try and build something substantial with it. I'll wait.
I already did, as I posted here. I was paid to evaluate a toolkit that holds a spoken dialog with a user and generates perfectly valid SQL and NoSQL databases, fully normalized. Then it creates the front-end HTML/JavaScript web app to do all the CRUD operations on said database. The user verifies the result by using the generated system. Done.

Wait all you want. Our work as software engineers is ultimately validated by lay people who determine if we hit the target. Peer code reviews are to reduce bugs and enhance maintainability across a code base. If there are no human errors in the code then no one needs a human to write it or verify it. Automated test cases will do that just fine. If they keep a few humans around to verify code in some specific, esoteric cases it will be a fraction of the numbers that are employed to do that now.

My freshman college kid just changed their major from Comp Sci to Aerospace Engineering. My nephew will keep his head in the sand - but he's 5 years out of college, so he has no choice but to hope for the best.
 
And does it do everything it should? Protect against SQL injection attacks? Allow for rate limiting? And so on and so on...

And it still took you to make certain it was correct, correct? The user couldn't do that.

Anyhow, I should have stayed away from this in the first place. Sensationalism is not what I enjoy wallowing in.
 
So you keep going in circles with me, but the answer is yes. No dynamic SQL statements - all queries were stored as private static finals. Rate limiting is best achieved with a proxy now anyway. Who would code it by hand anymore when a proxy like Percona's ProxySQL or Google's Cloud Armor is available? But regardless, the AI could easily create a rate-limiting proxy for any datastore; it is an incredibly easy piece of code to implement. The AI follows the same best practices and standards that I would to code the same solution. It even correctly implemented token-based authN/authZ.
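For what it's worth, the core of a rate limiter really is a small amount of code. Here is a minimal token-bucket sketch (hypothetical class, not from the toolkit under discussion; a production proxy would add per-client buckets, configuration, and metrics on top of this):

```java
// Minimal token-bucket rate limiter: tokens refill continuously at a fixed
// rate up to a capacity; each request consumes one token or is rejected.
public class TokenBucket {
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;           // start with a full bucket
        this.lastRefill = System.nanoTime();
    }

    // Returns true if the request is allowed, false if it should be throttled.
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

A bucket with capacity 2 and a 1 token/second refill rate will admit two back-to-back requests and throttle the third.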

I was only hired to verify that the results from the AI toolkit were enterprise-worthy - in other words, performant, secure, scalable, and that they solved the problem specified. I was working for the toolkit authors so they could say they had independent validation; I was not working for end users of this tech. This software is pre-alpha externally, beta internally. Once released, there will be no equivalent of me required for solutions generated by the AI. The users will verify their own results with diagrams and reports. No software engineers will be needed in that process once this becomes available.

I know you said you would wait. Sadly for anyone in the SW profession the wait will not be long.
 
There's layers to this.

My users are smart people, but they're not good at understanding all the implications of the requirements they themselves articulated. The edge cases, the follow-on sequences, the what-ifs.

Writing code to DO all those detailed steps and cases isn't the high value work. It's understanding the actual tasks, data structures, patterns already in use in the app, etc, in full detail, including all the implications, consequences, possibilities, and things you may want to build in the future.

It's also drawing out those requirements by talking to users about their situations, how they work, and how they want to.

It's really hard for me to understand how AI gets to bypass all that just because it's AI.

But maybe that just means I'm first for the obsolescence bin.
 

I think the main gating layer will be the users getting comfortable enough to have the dialog with, and trust in, the AI. For the software I tested, it created a brand-new application with a proper database and 3-tier architecture based on the same conversations/Q&A I would have with users/stakeholders. It only works with a limited set of SQL and NoSQL backends right now, and middleware support is limited to Java. But this week I will be testing its implementation deployed to a couple of different cloud architectures. Then I wrap up my contract.

All in all, it is both impressive and terrifying. So much so that I convinced my kid to switch majors.
 
Jobs aren't going anywhere.

"Intuitively, you would think seasoned veterans — those who already spend less time coding and more time on abstract, higher-order, strategic thinking — would be less vulnerable to AI than someone straight out of college tasked with writing piecemeal code. But in the GitHub study, it was actually the less experienced engineers who benefited more from using AI. The new technology essentially leveled the playing field between the newbies and the veterans. In a world where experience matters less, senior engineers may be the ones who lose out, because they won't be able to justify their astronomical salaries."

"In the meantime, on an individual level, the best thing coders can do is to study the new technology and to focus on getting better at what AI can't do. "I really think everybody needs to be doing their work with ChatGPT as much as they can, so they can learn what it does and what it doesn't," Mollick says. "The key is thinking about how you work with the system. It's a centaur model: How do I get more work out of being half person, half horse? The best advice I have is to consider the bundle of tasks that you're facing and ask: How do I get good at the tasks that are less likely to be replaced by a machine?" "
 
I am sure that is accurate. My hourly rate is ridiculous. Since I am trying to be mostly retired, I don't actively search for contracts anymore. Sometimes an interesting short term engagement will fall into my lap like this one did. Then I'll do it. I'm glad I did this one because it allowed me to experience what next gen AI is capable of first hand. Then I was able to have important conversations with my college freshman and steer them into a career that will be more enhanced by AI rather than replaced by it.

Of course, the other side is that it will be very difficult for young software engineers to reach my level of experience and the compensation that goes with it. It looks inevitable that AI will eventually put a lot of downward pressure on the number of human coders needed as well as their overall pay. But software will get much cheaper to buy and support - so good for users, bad for coders!

Edit: So I should have included the footer:
Regulation, Regulation, Regulation!!
Then the sun will keep shining on tech jobs and many others while workers bask in their increased productivity using AI as a tool. Workforces will have time to transition as their work moves toward automation.
 
So AI/robots will do all the work, and people will be free to pursue their own personal interests with a guaranteed universal basic income, right? That's what should happen, but won't. They'll still squeeze us til we pop.
 
Many AI experts have been sounding warning bells for decades. Thank goodness some of the "founding fathers" of the field are turning from "full steam ahead" to "Oh shit, what have I/we done!" Sci-fi has tried to warn us for at least a century.

Q: What are 10 famous cautionary or dystopian robot/AI stories, starting from early science fiction?
To that list I’d add Robopocalypse and Robogenesis by Daniel Wilson. They’ll make you stay awake thinking.
 
Governments are talking now and writing white papers (but the industry goes on).

https://www.american.edu/sis/news/2...ernance-following-the-g7-hiroshima-summit.cfm


Dreaming here, but for my tastes, ASAP I would like something like:
  • Required labels for "AI created content" or "AI bot" particularly on social media. This wouldn't stop AI in these contexts but would require some transparency (that is for "good actors" who will comply).
  • Strong data and identity privacy laws - which would also retroactively give users absolute control over their online data/profile (e.g. Google, Amazon, streaming services etc)
  • Strong identity protections for AI generated images, likenesses, voice and so on, particularly if used for scamming, for profit or for political gain (defamation). Individuals must opt-in/consent to allowing their likeness to be used by AI generally or by particular AI systems.
  • Strong copyright and data protections for individuals' creative works. By default, only non-copyrightable content in the public domain could be used for training (e.g. old publications and works, generic material on Wikipedia). Everything else must be explicitly opted in by creators.
  • Transparency labeling for source training material
  • Some kind of data/information provenance identity system (e.g. watermarking or some unique hash) that can identify "official" content from whatever organization, e.g. US gov, NYT, Fox, Reuters, etc. Ideally this could be used for individual artists/creators as well.
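On the provenance point above: at its simplest, a content fingerprint is just a cryptographic hash of the published bytes. A minimal sketch (the class and method names here are hypothetical; a real provenance scheme would also sign the hash with the publisher's key, which is omitted):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Fingerprint {
    // Hex-encoded SHA-256 of the raw content bytes: identical content always
    // yields an identical fingerprint, so a publisher can attest to exactly
    // what was released, and any tampering changes the hash.
    public static String sha256Hex(byte[] content) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(content)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is guaranteed by the JDK
        }
    }
}
```

By itself a hash only proves integrity, not origin; tying it to "official" sources is exactly the hard part the bullet above is asking regulators to standardize.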

At the same time, freedom of speech must be balanced against these to allow exchange of ideas while protecting individuals from being taken advantage of (e.g. by corporations using their data or content), scammed or harmed through weaponized disinformation.

I'm expressing some possibly contentious perspectives above so I might get slammed... :grin:
 
Wired lost a lot of credibility last week … with that “cathedral mind” statement lmao 😜
I think every corporate, profit-driven "popular news" outfit has its puff pieces, or at least very one-sided positive portrayals of its exemplar star figures (whether politics, sports, cars, guns, blah blah...). It's a profit game.

Some publications / shows are pure punditry and puffery these days, no matter the ideological bent.
 