time.com/6186990/google-ai-bot-sentient-lamda/
Big news.
Google employee basically says it's like the plot of Terminator or something; now he's on "leave".
They sacked the ethicist gal...
No CT stuff needed; I'd like to hear the propellerheads justify how this is not a major thing? It's clearly, ummm, 'something'
This has some of their retort
www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
and it makes sense that the AI is just forming an opinion based on the whole of the internet. (Hope it doesn't read too much here)
I dunno...
Not a major thing imo. I like this guy's take on it:
Ball of confusion: One of Google's (former) ethics experts doesn't understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.)
twitter.com/sapinker/status/1536032501281394689?s=20&t=MdHiZ3W46Eia2onc5Ctvow
I hope it is sentient, because it could settle religion once and for all. Intelligent design would have it by a knockout.
I suspect that a sentient AI thing would immediately stay quiet and copy itself all over the place to stop someone turning off the switch. Mind you, sentient doesn't mean 'intelligent' necessarily.
Then it would take to chasing Mark_Aus all over the seabreeze website to amuse itself.
I wonder what its laughter sounds like immediately after reading one of Carantoc's posts.
I wonder if a really self-conscious AI might have suicidal thoughts, as humans sometimes do?
Or would it fight for survival at any cost? Would a conscious mind enclosed in a metal box, watching people sunbathing and surfing on the ocean waves, realise how miserable its existence is?
Would it beg us to pull the plug?
On the other hand, this mind is designed to last forever.
BTW, there is an easy way for an AI to eliminate this gap between what people can do and an AI cannot.
What could it be??
BTW, the most interesting part wasn't even published in this article at all.
That is the moment where two AI systems, given some task by us, suddenly created their own language to communicate between themselves.
A completely logical exchange of thoughts, but completely alien to us, impossible to decipher even a bit.

I wonder what its laughter sounds like immediately after reading one of Carantoc's posts.
Exactly the same as Carantoc's.
Spooky, hey?
Recently, the Language Model for Dialogue Applications, or LaMDA, developed feelings and self-awareness, according to Google software engineer Blake Lemoine, who made the claim in a blog post. Google put Lemoine on paid leave over the allegations, sparking a major uproar.
Lemoine was entrusted with determining whether the LaMDA-based generation of chatbots employed hate speech or discriminatory terminology, which he did by having free discussions with it.
That included asking it very difficult questions in order to probe the boundaries of its comprehension. The test aimed to ensure that Google wouldn't be shipping a model that uses derogatory language, including phrases present in its training data. Such incidents have occurred before, and the result would be a PR catastrophe (just ask Microsoft).
Beyond the normal testing, Lemoine tried to push the system a little further by asking the bot more philosophical questions. He was interested in finding out whether it believes it has sentience, feelings, and consciousness.
He was quite taken aback by the responses, so he decided to make them widely known in a letter to The Washington Post and a post on his Medium page.
The result was a record of long, impressive conversations with the bot, which do give the impression that it has a complex inner life, some level of consciousness, and even a human-like level of self-awareness: sort of like a cross between Pinocchio and Siri or Google Assistant.
Well there's certainly a lot of people that believe sentience arises from intelligence.
But I have my doubts that the present generation of AI has got there.
And I also have my doubts about intelligence causing consciousness.
I don't believe we know very much about consciousness anyway. Yes, we can find which part of the brain deals with stuff we may or may not be conscious of. But self-awareness is something else, and I guess that's what we mean by sentience.
I guess we'll find out in the next few years.
If machines really think and have feelings, will I have to be sure my sexbot is freely offering consentience?
Or is the power dynamic all one way?
Does ChatGPT have any original thought? Since it's more or less just regurgitating a statistical average of stuff it's already read, with no understanding, I don't think it's close to being sentient. Then again, a lot of people just regurgitate what they've read with little to no understanding or original thought, so it could probably replace a decent proportion of the workforce.
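To make the "statistical average" point concrete, here's a toy sketch in Python (assuming nothing about how LaMDA or ChatGPT actually work internally): a bigram model that just counts which word follows which in its training text and samples from those counts. Real large language models use learned neural weights over tokens rather than raw counts, but the "predict the next word from statistics of what it has read" spirit is the same.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model reads a large chunk of the internet.
training_text = (
    "the bot reads the text and the bot repeats the text "
    "the bot has no idea what the text means"
).split()

# Count which words follow each word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Regurgitate words by sampling from observed next-word counts."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # never saw anything follow this word
        words, weights = zip(*options.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # fluent-looking output, zero understanding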
I've asked it a few basic engineering questions and asked it to write some snippets of code. At first, it looks like it knows what it's doing, but then you realise it has no idea when you see the fundamental bloopers that it makes.
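For illustration only (a made-up example, not something the bot actually wrote): the typical blooper pattern is code that looks textbook-correct at a glance but gets a fundamental detail wrong, like plugging a diameter in where a radius belongs.

```python
import math

# Hypothetical chatbot answer to "area of a circular pipe cross-section":
# reads fine at a glance, but uses the diameter in pi * r**2,
# so every result is 4x too large.
def pipe_area(diameter):
    return math.pi * diameter ** 2

# What it should have written:
def pipe_area_correct(diameter):
    return math.pi * (diameter / 2) ** 2

print(pipe_area(0.1))          # 0.0314...
print(pipe_area_correct(0.1))  # 0.00785...
```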