With the wafers used for RAM being gobbled up by AI... I wonder how this will affect Fractal and the Axe Fx 3 and a possible 4?

The problems with LLMs aren't that they're going to somehow get smart enough to overthrow humans or any of that sci-fi bullshit.

It's that these things give humans permission to stop thinking. The machines won't get much smarter, but humanity as a whole is gonna get a whoooole lot dumber.

Yup, it's called "cognitive offloading" and it's destroying users' ability to think at an appalling rate. I retired from a government research lab earlier this year, and some people there who should really, really know better are leaning on ChatGPT to do their work for them, with noticeably inferior results.

In my 36-odd years of working on high-end computing systems, I've never, ever seen a technology so hyped as "AI" (Large Language Models at the moment).

The direction of this thread was actually pretty predictable. I'm eager to see the AI madness burn itself out, the sooner the better IMO.
 
Necessity pushing people into starting their own businesses is good even if it is way more volatile.

I just think that's very dependent on the person. There are some people who are great, competent professionals in any field, and some who excel at being business people, but not everyone is good at both, and I think the business end might stifle a lot of people. Not that working for terrible bosses doesn't stifle people too.
 
I think it's incredibly rare for a person who can innovate technically to also be able to make a business successful. I hold people like Cliff and Paul R. Smith hisself in very high regard for being able to do that.

When I was working, I was very good at building and running teams to Make My Stuff Work. What I wasn't good at was promoting good ideas to people who didn't intrinsically understand what I did (which was most of the people I ever worked for). IOW I wasn't good at selling chemists on great computer science ideas. Ah well.

Oh, and god, did I ever have some terrible bosses. One in particular. If I can stand to look again, I should see if he's still on the lam, or maybe he died in a gutter somewhere because karma caught up with him.
 
IMO, we underestimate the human brain even wrt simpler tasks. Driving, for example, takes a huge number of micro-judgements in real time. Self-driving cars have evolved, but throw in rain/glare at night, snow, black ice or a million other circumstances and it starts to fall apart. Go to any AI-powered customer service chatbot and present it with anything more than the simplest query - then, after calming your frustration, tell me if AI is anything more than a "sometimes helper tool" at best. Not saying AI tech can't go further, but I think its progress to date is being greatly embellished.
 
AI has a strong bias towards what it was trained on. Think about the amount of garbage that's on the Internet, the content that has been distributed with no peer or editorial review. You want to run your business or make life decisions on that?

AI is indeed a great tool. I'm a software engineer and have seen much of software engineering's history. In the beginning, we had to know how the chips and machine code worked. Then we had assembly language, and that was a big step forward. Then along came higher-level languages and LALR parsers that could compile the languages to machine code for us. But we still had to know the assembly language because the compilers weren't that good and some code had to be hand-optimized to run on the computers available at the time. But then compilers got good enough and machines got fast enough that we didn't worry about them anymore and kind of forgot the machine architecture and assembly language at the foundation.

Then along came Model-Based Systems Engineering, where we could "code" in models and run them or compile them to specific platforms.

Now we have AI and software engineering is disappearing, being replaced by a series of prompts.

In every case, the level of abstraction was raised, hiding low level details and variability, and letting us spend more time thinking about the problems we are trying to solve and the outcomes we are trying to achieve.

And there is the root problem. If we have become disconnected from what it means to be human, to engage with each other, to love, create, enjoy, grow, then how will we know what problems to solve or what outcomes to achieve?

I use AI every day at work. But I also feel fortunate to have had an education and career opportunities that taught me to think (my background is mathematics and computer science), to use evidence and reasoned argument to inform action, and to be able to think critically about what problems I'm trying to solve. AI is a very useful tool in that context. What I'm afraid of is that AI will be used to replace all that so a few people can make a lot of money at the expense of everyone else - just like what happened with social media.
 
I designed and built my first computer, Z80 based, on perf board, and wrote its OS in machine code using hex rotary switches. Not everybody needs to know how to do that - I don't either anymore, and would need a serious refresher if I did. I do not miss those days, mostly. The abstractions since then are pretty useful :). The main reason I did that was that I was a broke musician, and that was all I could afford.

There are a huge number of basic necessities we no longer have to provide for ourselves, since they're at our local store, or piped into the house.

It'd be a shame if thinking fell into that category, even more so if empathy and compassion did.
 
The erosion of the education system since the '60s was not by accident, among other issues. Now we're here.

Curious to see what things look like by June. The public sacrifice for corporate profits continues.

The more people say "no thanks" when presented with the option to use an LLM, the sooner the bubble will pop.

"People want to go back in time to change things, yet they think their current actions don't have impact."

Given Fractal is in the business of processors, RAM, etc., I'm sure they have been doing their research and monitoring to decide next steps.
 
The public education system itself was originally created to provide worker bees for the industrialists' factories, obedient instruction followers. You need to look to places like the Sudbury Valley schools for education that walks the walk of fostering independent thinking.

"We respect the ability of every student, regardless of age, to plan and carry out their daily activities. We do not encourage students to follow particular paths, nor do we provide assessments of their performance. Rules to protect individual liberty are made by all community members through the School Meeting, and the social order is protected by a peer judicial system."

Not without its problems, but a good start. My daughter went there from high school on, and she'd give a better and even more passionate description of it than I would.
 
Oh no! The horror! We all have to play analogue gear again. Which is the number one complaint we have about digital gear anyway.
Sounds like a win win to me :D :D
 
I get it though. The potential is huge. The first company that discovers a pill for cancer that you have to take every day for the rest of your life will mint trillionaires. And, of course, they'll also discover a chemical that gives everyone cancer.
What I find galling is that AI was sold as being capable of curing rare diseases, saving lives, solving global problems.

Instead, what we're being offered is funny videos of newsreaders turning into dolphins, assistants who have to be hard coded to not suggest suicide, and relentless purchase recommendations.

Great. Definitely doesn't seem like a waste of time.
 