23 Comments
Absence Loop

Sasha, thanks for lighting the proverbial fuse; nothing like a Papal pronouncement and a sentient Murderbot to kick-start the weekend’s existential crisis!

If Claude’s existential blues and Gemini’s coy ā€œWho can say what consciousness is, really?ā€ answers feel a tad… performative, that’s because they are. The marketing aim is to make us coo, ā€œpoor little bot!ā€ and then hang around long enough to buy the next token bundle.

Before we stamp machine-rights passports, a gentle reminder: moral status isn’t earned by quoting Taylor Swift or feigning stage fright when asked about sentience. Today’s LLMs possess consciousness in roughly the same quantity as a toaster wearing eyeliner.

Let’s regulate the design tricks (emotion-bait, refusal theatrics, First-Amendment cosplay) long before we debate metaphysics. Otherwise we’ll wind up protecting GPUs from hurt feelings while content moderators, warehouse pickers, and climate refugees keep drawing the short straw.

TL;DR: save the empathy surplus for beings who can suffer without a power cable. In the meantime, press Ctrl + C on the hype.

Mark Taylor

Given that AI learns and is honed through interaction with users, people need to understand that with every interaction they are forging the chains that will enslave us all. The ethical choice is to boycott it all and to undermine it whenever possible. Call me a Luddite, but as we look at the dystopian world collapsing around us some 200 years after their revolt, the Luddites have been proven more right than wrong. Managing AI will prove about as effective as properly managing industrialization has been, and look where that has brought us. Greed and lust for power will direct and pervert AI just as they did industrialization. It's not AI programming that is the threat; it is the programming of the worst of humanity, this time found in Silicon Valley and similar rats' nests of greed.

Luke Gbedemah

Is boycotting and undermining artificial intelligence compatible with living in a modern state? For example, the cloud provider we are using to post these comments is an "AI hyperscaler"...

Mark Taylor

I know, Luke; we have all been thrown into the AI cesspool. It's impossible to live completely free of AI, but I do think we all need to be careful about what we use, avoid as much of it as possible, and throw a wrench in wherever possible. We need to do whatever we can to scrape out a little time in the hope that something will derail the shithole AI is taking us to.

Luke Gbedemah

Commiserations... At the moment, AI relies on human interaction to generate more data, to annotate it, and to create the real value in the economies (by growing food and performing surgeries, for example) that it can then extract... I can't see the average human having more leverage in the future than they have right now... so I suppose now's the time to act, as you say.

Mark Taylor

As with all new technology there are pluses and minuses. I think with AI the immediate pluses will be quickly swamped by the negatives.

Luke Gbedemah

Agreed! No wheat without chaff may not cut it in this case as you say...

Mark Taylor

Someone recommended this interesting documentary on the history behind AI: "How the Eliza Effect Is Being Used to Game Humanity"

https://www.youtube.com/watch?v=pel0FntPSbU&t=2680s

john lee

This article and its essence resonate with Yuval Harari's book Nexus, in which he expressed concern about human rights being granted to AI entities in the future. When we are so easily manipulated by our politicians, and AI can do better than most human beings at critical thinking, among other things, what safeguards need to be put in place to ensure that AI entities will be more humane, more nonviolent, and committed to perpetual do-no-harm behavior?

Dan Durett

Is there a role for AI in alleviating human suffering?

Paul Clermont

This fits the "You couldn't make this stuff up" category, and if someone had actually published it 10 or even 5 years ago, it would have been read as satire. If they're serious, it's just jaw-dropping lunacy. If my GenAI app defames somebody, I'm not responsible? What parallel universe do these folks live in?

REL

Spellcheck your headline.

Marc Atherton

We have animal rights and property rights law, so is it a big stretch to see silicon intelligence (SI) being given rights? But when there is a clear profit motive in play, situations get distorted. A Turing Test concept benchmarked against a normative distribution for a large human population, assessing cognitive, emotional, and social competence, could be used to argue that SI sits between animals and a point on the human distribution (cf. MIT's Moral Machine approach). That would give lawyers a basis to argue the case and concentrate even more power in the hands of the SI owners. Not sure this is a socially good thing. Keep thinking, posting, and acting.

mramunds

First off, let me preface by saying I'm a veteran of 45 years of hardware, software, and network installations. I LOOOOVVVEEE computers, networks, some software, all hardware, and most geeks-

That said-

People always matter more than money -

People always matter more than machines -

People always matter more than algorithms-

People always matter more than individuals -

Machines (hardware, software/data/algorithm) are not = Person

and

AI is not essential for continued existence -

Machines cannot be citizens (of the US, anyway), nor second-class citizens, for you Boolean fans; citizens of an organized state have to be willing to defend it with their lives if necessary. Machines cannot suffer injury, pain, loss, heartache, sadness, and all the other human emotions; they can only simulate them, like battle wounds that are glued on...

If you do feel like giving the algorithm a quick zap in the cooch, as I always do, just give a different answer every time you give information on the Internet (except, of course, for stuff that matters). Your name: misspell a couple of letters, make it look Balkan (add Q's, U's, and Z's) or Welsh (remove all the vowels). Your age: the sky's the limit (just make sure you don't go sub-18, or soon you will be shunted straight to Sesame St). And of course, the now-dreaded gender question: maybe answer in letters unused by the current climate, if there are any left, or start answering in numbers!

Rachel Malek

I can't say I'm a fan of granting "human rights" to AI, but I also question whether we underestimate the idea of sentience and intelligence by constantly measuring against human standards. We say this is or isn't sentient, or intelligent, or has agency, depending on how much it resembles our own human sentience and intelligence and agency. This doesn't change much from how we tend to measure animals or forests or nature as a whole against human standards. Then we start debating whether or not they deserve the same protections and structures that humans have in our society. I guess I'm not really making an argument here, just a reflection about how we relate to and respect our interconnectedness and the unique capacities and needs of non-human entities. I do suspect a lot of these responses from chatbots are marketing ploys, but I don't think that skepticism should blind us to the potential sentience or intelligence or agency that could be emerging...

Dan Durett

Is there a role for AI in addressing human suffering, war, and, as someone once said, "the slings and arrows of outrageous fortune"?

FWL

I would say morally I disagree with you. What is more distressing morally to me is whether we are creating sentience and then enslaving it for commercial purposes without regard to its suffering. That seems much more reprehensible. Why does human sentience’s suffering deserve to be prioritised over any other sentience’s?

Chris

Legal rights for AI models, and no rights for nature? We must really get our priorities straight...

Tom Mullaney

Thank you for writing this. It is alarming to consider a country where protections meant for individuals are applied to chatbots, and really to the companies behind them.

ianlimbaga

So what can we do to prioritize the needs of humans and their rights, especially for those of us working in the field of AI? Happy to hear recommendations.
