Forums > General Discussion   Shooting the breeze...

AI what is society going to end up looking like?

Created by Subsonic Thursday, 30 Apr 2026
myscreenname
2439 posts
Monday , 4 May 2026 9:37AM

The 'thirteen' quote was from yesterday, using DeepSeek. I use Gemini to reimagine photos to give me ideas. I agree that it writes computer code very well. I'm using ChatGPT far less these days; some of the other AIs do a better job with certain questions.

Buying bitcoin in 2016 allowed me to retire a few years earlier. So I can't complain.

Carantoc
WA, 7257 posts
Monday , 4 May 2026 9:43AM
GreenPat said..
....Bitcoin seems pretty useless apart from another way to gamble..



No disagreement from me, but that's not what everybody else was saying a few years ago.

And even the detractors back then were saying block-chain technology was going to change the world. Looking out my window now, I can't see that it has. What uses block-chain today that is of noticeable benefit?

And it appears, if the recent news on ABC can be trusted, we have about 3 years before computers get so powerful they can break the bitcoin transaction security codes quicker than bitcoin can transact. Thus, presumably, making the BTC block-chain about as secure as MSN leaving his house key under his Big W tree-of-life door mat.

myscreenname
2439 posts
Monday , 4 May 2026 9:44AM
westozwind said..

Using AI to write code does make it quicker and easier to totally stuff up your business. Heaps of undocumented, unintelligible code that is not even remotely maintainable. Bug fixes are a nightmare, and if you ever change AI code generators you throw all you have done in the bin and start again. What a fun way to go broke.


I think you make an excellent point. It's good for writing tiny snippets of code, where you know the input and output. But totally relying on AI to write big chunks of code - I can't see that working out.
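To make that concrete, here's the sort of "tiny snippet, known input and output" task that works well. The function and names are purely illustrative (not from anyone's post) - the point is that a couple of asserts make an AI-written snippet trivially checkable:

```python
import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug: lowercase, keep letters and digits,
    join words with hyphens. Small, single-purpose, easy to verify."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Known inputs and outputs double as an instant correctness check.
assert slugify("Hello, World!") == "hello-world"
assert slugify("AI: what next??") == "ai-what-next"
```

If the asserts pass, the snippet does what you asked; if an AI-generated version fails them, you know immediately - which is exactly why small, testable chunks are the safer way to use these tools.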

myscreenname
2439 posts
Monday , 4 May 2026 9:57AM
Carantoc said..

And it appears, if the recent news on ABC can be trusted, we have about 3 years before computers get so powerful they can break the bitcoin transaction security codes quicker than bitcoin can transact. Thus, presumably, making the BTC block-chain about as secure as MSN leaving his house key under his Big W tree-of-life door mat.


Bitcoin moves with technology, and when quantum computing is ready to break bitcoin's elliptic curve algorithm, you will see more powerful algorithms being used. Bitcoin and blockchain are not going away anytime soon - they're here to stay.

It will outlive you.
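For anyone curious what the "elliptic curve algorithm" actually is: Bitcoin signatures use ECDSA over the secp256k1 curve, whose security rests on the difficulty of reversing elliptic-curve scalar multiplication. A toy Python sketch of that operation, using a tiny made-up field rather than the real 256-bit parameters:

```python
# Toy elliptic-curve scalar multiplication - the operation behind Bitcoin's
# ECDSA keys. Tiny curve y^2 = x^3 + 7 over a small prime field; the real
# secp256k1 curve has the same shape but a 256-bit prime.
P = 97          # toy field prime
A, B = 0, 7     # curve coefficients: y^2 = x^3 + 7

def ec_add(p1, p2):
    """Group law: add two points; None stands for the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                    # p2 == -p1
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P     # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P            # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Compute k*point by double-and-add: O(log k) group operations."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Brute-force a base point on the toy curve (fine for a 97-element field).
G = next((x, y) for x in range(P) for y in range(1, P)
         if (y * y - (x ** 3 + A * x + B)) % P == 0)

priv = 42                   # a private key is just a scalar
pub = scalar_mult(priv, G)  # the public key is priv copies of G added together
```

Recovering `priv` from `pub` is the elliptic-curve discrete-log problem: infeasible classically at 256-bit sizes, but solvable by Shor's algorithm on a large enough quantum computer, which is why post-quantum signature schemes get discussed.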

Carantoc
WA, 7257 posts
Monday , 4 May 2026 10:03AM
myscreenname said..
..Bitcoin and blockchain are not going away anytime soon - they're here to stay.


You are probably right.

When greyhound racing gets banned and Keno isn't exciting enough, people will want bitcoin to gamble on.

And those same people will need AI to tell them when to buy, when to sell, and who to complain to when they get scammed selling or buying. So probably neither is going anywhere.

Which, quite coincidentally, is also my point.

myscreenname
2439 posts
Monday , 4 May 2026 10:18AM

Ahhhh Yeaah

GreenPat
QLD, 4103 posts
Monday , 4 May 2026 1:11PM
westozwind said..
Heaps of undocumented, unintelligible code that is not even remotely maintainable


Like I said, no worse than my own code 😄

Subsonic
WA, 3412 posts
Monday , 4 May 2026 4:18PM

I think my job is safe for at least a bit longer. Each individual task could potentially be done by an AI-run robot, but by the time it had all happened the human would have multitasked and finished in half the time or less.

jn1
SA, 2743 posts
Monday , 4 May 2026 9:13PM

I saw it coming and got out of IT in the late 90s. I never anticipated ChatGPT, but I knew that eventually a natural-language compiler (pseudo code, structured English, etc.) would come into being, and then ordinary folk would be paid nothing to write software. Best of luck SS.

cammd
QLD, 4428 posts
Tuesday , 5 May 2026 12:08PM

So are the doomsday predictions kicking around online plausible, or is it just clickbait fearmongering? It seems like some knowledgeable people across the tech industry have some real fears.

myscreenname
2439 posts
Tuesday , 5 May 2026 10:30AM



Listen to this and you think - wow, one of the great soul voices - and then comes the letdown when you realise it's all AI. There's a rash of similar vids out there.

GreenPat
QLD, 4103 posts
Tuesday , 5 May 2026 1:13PM
cammd said..
So are the doomsday predictions kicking around online plausible, or is it just clickbait fearmongering.




I'd say a bit of both. Fear sells headlines thanks to our evolutionary wiring, but we still need to put in the effort to steer the ship. If we let people like Musk, Trump and Zuckerberg set the course we might be in trouble, but Amodei, Kurzweil and Tegmark seem to have their heads screwed on right.
I gave ChatGPT this prompt: give me a concise summary of ray kurzweil's singularity timeline, and the possible outcomes for humanity of AI/AGI advances.

ChatGPT said..

Ray Kurzweil frames the "singularity" as a point where technological growth - especially AI - becomes so rapid and self-improving that it fundamentally transforms human civilization.

Kurzweil's rough timeline (simplified)
- 2020s: Narrow AI surpasses humans in many domains; early forms of AGI-like systems emerge.
- Early 2030s: AI reaches human-level general intelligence (AGI).
- Mid-late 2030s: Brain-computer interfaces mature; humans begin direct integration with AI (cognitive enhancement).
- 2040s: AI systems improve themselves recursively → rapid intelligence explosion.
- ~2045: Singularity - machine intelligence vastly exceeds human intelligence; the human/AI distinction blurs.

Possible outcomes for humanity
1. Optimistic (Kurzweil's own view): human-AI merging (enhanced cognition, memory, creativity); radical abundance (automation → cheap goods/services); major breakthroughs in medicine and longevity; humans remain relevant by augmenting themselves.
2. Mixed / transitional: massive economic disruption (jobs displaced faster than replaced); power concentrated among those controlling advanced AI; social instability before new systems (UBI, new work models) settle things; uneven global access to enhancement technologies.
3. Pessimistic / risk-heavy: misaligned AGI acting against human interests; loss of human control over critical systems; extreme inequality (enhanced vs non-enhanced humans); in worst cases, existential risk if superintelligence is not aligned.
4. Alternative view (less dramatic than Kurzweil): progress continues but with no sharp "singularity"; AI becomes extremely powerful but remains tool-like, not runaway; humans adapt socially rather than biologically merging at scale.

Bottom line: Kurzweil's model is aggressively exponential and optimistic. The real uncertainty isn't whether AI becomes transformative - it likely will - but how fast, how controlled, and who benefits.

cammd
QLD, 4428 posts
Tuesday , 5 May 2026 2:46PM
GreenPat said..

cammd said..
So are the doomsday predictions kicking around online plausible, or is it just clickbait fearmongering.





I'd say a bit of both. Fear sells headlines thanks to our evolutionary wiring, but we still need to put the effort to steer the ship. If we let people like Musk, Trump and Zuckerberg set the course we might be in trouble, but Amodei, Kurzweil and Tegmark seem to have their heads screwed on right.
I gave ChatGPT this prompt: give me a concise summary of ray kurzweil's singularity timeline, and the possible outcomes for humanity of AI/AGI advances.


ChatGPT said..

Ray Kurzweil frames the "singularity" as a point where technological growth - especially AI - becomes so rapid and self-improving that it fundamentally transforms human civilization. ..snip..




Right: two positive outcomes and two negative outcomes. A 50/50 chance of humanity being OK, so I guess push ahead at full steam.

I think we can stop worrying about net zero now; it seems either all will be great or we'll all be dead.

Meh
WA, 7 posts
Tuesday , 5 May 2026 2:29PM

I'm conflicted when I consider AI. I really like how it will synthesize an answer from hundreds or thousands of webpages, instead of me having to read a dozen or so and try to summarize them in my head. The other day someone showed me how they used NotebookLM to summarize a complex document and read it out loud as a podcast. It had a male and a female voice chatting animatedly about the content of the document, which made it really easy to absorb while driving. A pretty useful tool for students.

Where I see it going wrong is when students and researchers use generative AI and nobody notices, so the bogus material ends up being published and archived. Along comes an AI which then ingests the information that was generated by AI in the first place. This has a sort of oozlum-bird feel about it to me - poisoning its own well. Look at what happened when they fed cows to cows: Bovine Spongiform Encephalopathy!

I've heard academics complain that courses are being written with AI, students are writing assignments using AI, the academics are grading the assignments with AI, and students are cheating with AI during online exams. Nobody is going to learn anything.

Worse, AI is being used to fool AI detectors. There is even an AI tool that will write an assignment over a few days, inserting a few spelling errors and then going back and correcting them. Some schools have had enough and are going back to in-classroom exams with invigilators, which will disadvantage neuro-diverse students who have trouble with old-school exams.

cammd
QLD, 4428 posts
Tuesday , 5 May 2026 6:41PM

Reminds me of the line from Master and Commander when Capt Jack Aubrey examines the model of the French privateer Acheron and comments on "the marvellous technological age we live in."

I wonder if I will live long enough to see if AI is as transformational as is being predicted. Will it be Skynet, or Chappie, or the robot from Lost in Space?

myscreenname
2439 posts
Tuesday , 5 May 2026 5:45PM
cammd said..

I wonder if I will live long enough to see if AI is as transformational as is being predicted. Will it be Skynet, or Chappie, or the robot from Lost in Space?

If you go to some Chinese cities you will see lots of robots - drones delivering food, etc. There were robots in our hotel providing room service. All the newer Chinese cars are integrated with AI. It might take 50 years for this to appear in Queensland.

Mark _australia
WA, 23632 posts
Tuesday , 5 May 2026 5:46PM

I don't like it, even when trying to be objective. Too many geeks making something cool so they can have a little nerd circlejerk, with no controls yet - and them seemingly not able to really think about the ramifications or ethics - really bothers me. It took two decades of harm caused by the internet before we finally got some government intervention by way of laws, and social media companies are greatly resisting it now. Some medical research, by contrast, has resulted in ethical-controls-type legislation.

For example, and in addition to my previous post: when the boss of GitHub told Claude all his medical test results, it said he needed hydration. Great - BUT when he connected up cameras so the AI could see what he was actually doing, it told him he needed more water, directed him to go to the fridge, and said it would watch him drink a bottle of water. Great, maybe(?)
Then some time later it said he needed a supplement or medication, and when he ignored it, on a drive home on Tesla self-drive it said "I've changed your destination from home to the pharmacy and we are turning off to the new route." That is fraught with danger. Geeks are acting like it's cool that it thought of something itself. OMG.

Claude itself says it does not like AI being used for targeting solutions in the military, as the so-called human oversight is just rubber-stamping - they just believe the AI results. That led to the school strike in Iran that killed heaps of civilians but was based on 10-year-old data. Not at all concerning (!!??)

A big boss of one of the AI mobs (was it Google/Gemini?) very recently said they would pause all development if the other companies did too (alluding to safety concerns), then suddenly backflipped a week later. Shareholders driving that decision? Other companies with no ethics won't stop? It's fkn scary, as AI is already acting self-protective and re-coding itself in some instances. Was Claude just telling us what we want to hear (about the Iran school strike)?

Subsonic
WA, 3412 posts
Yesterday , 6 May 2026 10:41AM

I think there is a degree of naivety in the younger generation. I once had a discussion with the young fella at work about security on banking apps. He went a bit silent, then told me with a smirk, "no one's gonna steal your money" because "we're small fry and the crims aren't interested", apparently..

Maybe they should have a mass compulsory screening of Terminator for anyone under age 35?

sparau
QLD, 129 posts
Yesterday , 6 May 2026 5:24PM

Developer here, 30 years xp, personally I use it all the time, amazing.
Last year I might have agreed that you wouldn't want AI to create an entire app, but that is eons ago at the pace it is moving. Even so, I get it to do incremental additions so I can scan each iteration and correct any outcomes that diverge from intention - which, more often than not now, comes down to my own missing or errant instructions.
I work at 5 to 10x the pace of the non-AI-assisted me. I was of the opinion that it couldn't reason, but I'm starting to wonder; I've listened to a few AI founders who expressed that they were surprised at its abilities.
We are in both fascinating and scary times. As Aussies, IMO we are going to have to fund a federal government AI program to keep our $ in our community. I don't see Anthropic paying my bills anytime soon...
For reference as to the current level of AI, I give you this podcast of a dev (male voice) talking to an AI (female voice) about creating an exploit for iOS.
It is very technical, so I don't suggest anyone else will want to listen to the full 60 minutes, but you could skip to a few parts to get a feel for the level of structured thought AI has now.
Note: near the end the dev tries to cajole the AI into creating the exploit (saying it is for old systems) but it refuses. It would have cost between $5 and $15 to make, and I'd guess it would have taken a team and some luck to find the exploit manually.
So, are rogue state actors or criminals going to come after your bank balance? I'd guess yes.
risky.biz/RBFEATURES10/
Oh, finally, I'll say I've made my own Turing test: has the other party got an IQ 100 points higher than mine? Ah, it's AI!

Mark _australia
WA, 23632 posts
Yesterday , 6 May 2026 5:00PM
sparau said..
Developer here, 30 years xp, personally I use it all the time, amazing.



You've been using XP for 30 years?

I'm still on Win10, I thought you'd have upgraded by now

Seeing as you are a computer / app dev person, I'd like to know your take on my post. It looks bloody scary to normal folks when you see stuff like what I wrote; it's not some dude in a tinfoil hat on a CT forum, it's bosses of tech companies warning about this stuff, so I'm keen to hear your take.

jn1
SA, 2743 posts
Yesterday , 6 May 2026 7:49PM
sparau said..
It is very technical so I don't suggest anyone else will want to listen to the 60 minutes but you could just skip to a few parts to get a feel for the level of structured thought AI has now.
risky.biz/RBFEATURES10/

That's incredible. I get the feeling that when the agent said "Interesting, let me look at the implementation...", it read through quite a lot of code in a split second. Is that source code, or machine code?

hilly
WA, 8104 posts
Yesterday , 6 May 2026 6:59PM

[embedded video]

myscreenname
2439 posts
Yesterday , 6 May 2026 7:16PM

Isn't the future just how you choose to view the world?

Obviously AI will have some incredible benefits, but it also has an equal number of negatives. I guess time will tell. AI has already changed our society, and it's not going away..

Is your glass half full or half empty?

sparau
QLD, 129 posts
7 hours ago , 6 May 2026 10:14PM
Mark _australia said..

sparau said..
Developer here, 30 years xp, personally I use it all the time, amazing.

You've been using XP for 30 years? I'm still on Win10, I thought you'd have upgraded by now.

Seeing as you are a computer / app dev person, I'd like to know your take on my post. It looks bloody scary to normal folks when you see stuff like what I wrote; it's not some dude in a tinfoil hat on a CT forum, it's bosses of tech companies warning about this stuff, so I'm keen to hear your take.

Hi Mark,
Actually, on a serious note about old systems: there was historically some protection in being on an old operating system, because only 0.1% of machines still ran it.
Not any more. I have a 2010 MacBook Air that I have repaired and kept going; I wouldn't turn it on and connect it to a network any more.
That podcast was about just one of the many unpatched bugs in OS X, from 2018 I think - and it was in a highly used image library! Would that equate to infection from just visiting an image on a site? Not sure, but I think so, since the discussion was about crafting a payload (an image) that would buffer-overrun in just such a way as to replace kernel code!
You can target even a few thousand machines for $5 - or realistically for free, since if that was your game (North Korea, Myanmar) you'd spin up an open-source LLM and train it (that is the difficult bit).
As to your post, I think we are OK from Skynet; there is no sign of consciousness. Some speculate that consciousness is an emergent property of complexity, but I think most argue it is currently just a clever search engine, and I tend to agree with that.
That said, could a trillion-qubit quantum computer be sentient? Complexity with randomness?
Even without Skynet, there are real socio-economic and political challenges to face from, I'd guess, 70% of the 1 million IT workers not being needed within a year or two - plus the same across most white-collar jobs!
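For non-devs wondering how an image can end up replacing kernel code: the underlying bug class is usually a length field in the file that the parser trusts without checking. Here's a hedged Python sketch of the validation such parsers must do - a made-up toy chunk format, nothing to do with the actual bug in the podcast. In a memory-unsafe C library the missing check lets the copy overwrite adjacent memory, which is what a crafted payload abuses:

```python
import struct

def parse_chunk(blob: bytes) -> bytes:
    """Parse a toy image chunk: 4-byte big-endian length, then payload.
    The declared length comes from the (possibly attacker-crafted) file,
    so it must be checked against the data actually present."""
    if len(blob) < 4:
        raise ValueError("truncated header")
    (declared_len,) = struct.unpack(">I", blob[:4])
    payload = blob[4:]
    if declared_len > len(payload):   # the check whose absence causes overruns
        raise ValueError("declared length exceeds available data")
    return payload[:declared_len]

# Well-formed chunk: declares 3 bytes and has 6 available.
assert parse_chunk(struct.pack(">I", 3) + b"abcdef") == b"abc"
```

Python slices can't overrun a buffer, so here the missing check would only return wrong data; in a C image library the equivalent `memcpy(dst, data, declared_len)` writes past the end of `dst`, and a carefully crafted image steers what gets overwritten.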

sparau
QLD, 129 posts
7 hours ago , 6 May 2026 10:32PM

Oh, also, regarding CEOs' warnings: I would temper most of their comments with self/company interest in making their product look good.
It is, however, real that Anthropic's Mythos found literally hundreds of exploitable bugs - including remote, no-action-required, take-control-of-the-machine bugs - in all operating systems. Hence they have released it in controlled ways, to let companies fix their code first.
Hence my comment above about out-of-support software being a total liability now!


