9 Comments
Digital Hygiene Coach

Another "Tech Crazy Town" idea. Always in pursuit of avoiding responsibility.

If only we could ask the seven questions that Neil Postman proposed in his amazing 1997 lecture ("The Surrender of Culture to Technology") before releasing any new tech product, we'd be so much better off. So much ignored wisdom.

What is the problem that this new technology solves?

Whose problem is it?

What new problems do we create by solving this problem?

Which people and institutions will be most impacted by a technological solution?

What changes in language occur as the result of technological change?

Which shifts in economic and political power might result when this technology is adopted?

What alternative (and unintended) uses might be made of this technology?

https://youtu.be/hlrv7DIHllE?feature=shared

Disha Chauhan

While it is obvious that chatbots should not qualify for First Amendment rights, in this particular case the company can still be held liable. Abetment of suicide in any form is still a crime.

Justin Graham

This smacks of the Dungeons & Dragons lawsuits of the late 1980s. The lower court dismissed the plaintiff's claim (which was very similar to this one: a teenage user became so engrossed in the game that he lost touch with reality and killed himself) based on First Amendment concerns. The appellate court upheld the dismissal, but did so on ordinary tort-law grounds, saying it didn't even need to reach the First Amendment.

The issue here was foreseeability. Suicide was not a naturally foreseeable result of someone playing their game (and again, the courts refused to characterize the messages within the game as "products"). It is generally considered a superseding and intervening incident that breaks the chain of causation, except in a handful of exceptions, such as someone intending to drive another to kill themselves, or when a jailer is warned of a prisoner's suicidal tendencies ahead of time. According to the court in this case, the plaintiff would demand the makers of D&D determine the mental state of every prospective player before the game is released, which is not feasible without placing a significant burden on the public's right to view the game.

The same issue applies here. C.AI, under Garcia's theory, would have to determine the mental state of every user before granting access, and take steps to keep "mentally fragile" people from creating accounts. They would need a crystal ball to foresee that someone might kill themselves after using their service (and to predictively understand phrases like "come home" as euphemisms for suicide). Applying the D&D case as a rule, Sewell's death was not foreseeable, and thus C.AI owed no duty.

The case discussed above is Watters v. TSR.

Here is the lower court ruling (based on First Amendment)

https://law.justia.com/cases/federal/district-courts/FSupp/715/819/1763244/

Here is the appellate ruling (based on ordinary tort law)

https://www.casemine.com/judgement/us/5914c02badd7b049347b23e8

Diana Melrose

How can they have first amendment protection for chatbots if they don't even have it for some people? The students supporting Palestine, for example?

Seamus O'Quill

This is self-defeating logic for the AI companies: they deny the personhood of their bots to justify their use, but then argue for their personhood by claiming First Amendment protection. Reminds me of the classic TNG episode "The Measure of a Man," in which Data's personhood is put on trial.

Robert

We are now experiencing a polycrisis because corporations have been given all the rights of personhood without any of the commensurate responsibilities. Add AI to the list of attacks we have to bear every day in the pursuit of ever higher profits for whale shareholders, and extinction will happen even sooner. The social contract to do no harm to sentient life and Gaia has been broken beyond repair by the tech bros, who apply radical capitalist measures that would make the Robber Barons of the Gilded Age envious.

Robert

Any of the .....

Frank

I personally don't think it's going to stand, nor do I believe they even believe their own claim -- I believe they're using it the way some serial killers' lawyers use pleas of insanity (as the least-worst defense in a tight spot) to minimize consequences.

Biologically, an AI is not a physical human person. Attempts have been made to have non-human entities be considered US citizens and entitled to citizens' rights. For example, it was argued that AI should be allowed to copyright "music" that it generates. It has also been argued that a monkey was entitled to copyrights in a selfie that it took. In the case of AI copyrighting music, the US copyright office said "if content is entirely generated by AI, it cannot be protected by copyright," and in the case of the monkey, a court ruled that animals have no legal authority to hold copyright claims.

If an AI needs a human to add substantial contributions to a piece of music for it to be copyrightable, that means that an AI by itself is NOT considered a citizen with equal rights as a human -- otherwise, AI would be entitled to hold copyrights itself. So, if an American AI can't be considered a citizen, it can't have an American first amendment right. Even if AI could be considered an animal, animals clearly can't be considered citizens either, so AI still has no defense there. Of course, all it takes is one skillful lawyer or a majority of Supreme Court justices to upend all that…

Thoughts? Personally, I'm depressed that an AI company is essentially telling its dead adolescent customer's parents "…not our fault that your kid was gullible and fragile" while at the same time disingenuously using the kid's death as a platform for AI rights. Yikes. Grim.

Adryan Corcione

So... how does this apply to chatbots who are created to mimic/impersonate real people (i.e. celebrities, content creators, etc.)?
