6 Comments
AweDude

I'm a bit confused. What is the worst-case scenario, Black Mirror or Twilight Zone style, that arises from an AI getting its speech protected?

Morgan Magauran

Thank you CHT for paying attention.

I've restacked the earlier post and am restacking this ...

Everyone, please spread the word about this unmitigated pursuit of profits.

Frank

I don’t like this episode of The Twilight Zone. Someone find the remote and turn it off.

Santiago Diehl

Thanks for raising our voices! I'm worried about the "hot" chats Character AI is targeting at our teenagers online.

Tom Mullaney

Thank you for sounding the alarm bells. Character.AI is trying to set a harmful legal precedent to protect their harmful app. Will generative AI apps that mimic historical figures (as SchoolAI does with Anne Frank 😠 https://www.criticalinkling.com/p/generative-ai-anne-frank) apply for similar protections?

Jay Cee

"Rights" and "Free Speech" are concepts humans can barely agree on. Anthropomorphizing and making comparisons to human level responsibilities can be problematic. That's why I'm working on new ways to create a space to discuss where responsibilities lie in synthetic speech that results in harm or who's accountable when a missile picks the wrong target based on bad code or who's model starts a cult or causes users to harm themselves. I've made a lot of progress lately (ECHO, SMH-CORE, TRUST, CORE^5) but the law is decades behind human understanding of AI and humans are years behind understanding what AGI is capable of.

Some of us are out here fighting the good fight.
