Discussion about this post

Kind Futures

In a time when AI is advancing at unprecedented speed, a few voices are quietly choosing a harder path:

One that puts safety before scale. Wisdom before hype. Humanity before power.

There’s a new initiative called Safe Superintelligence Inc., a lab built around a single goal:

To develop AGI that is safe by design, not just by hope or regulation.

If you're someone with world-class technical skills and the ethical depth to match, this is your call to action.

We don’t need more AI.

We need better, safer, more compassionate AI.

Spread the word. Support the mission.

https://ssi.safesuperintelligence.network/p/our-team/

Roi Ezra

This was both devastating and necessary. I’ve been writing from a different angle, more from inside the builder’s mindset, but the same signal keeps surfacing: alignment is not a policy layer; it has to be present from the first design move.

What you’re doing here reminds me why I started writing AI for Humanity in the first place, not as a concept, but as a structure for holding what we’re not willing to lose. Thank you for speaking with clarity where it matters most.
