Two further mistakes are failing to ask the following questions. Will AI help us solve any of the big problems (biodiversity loss, climate change, forever wars, economic inequality, etc.)? Will AI undermine the material conditions (energy, resources, food, ecosystems, etc.) on which our lives depend?
There are really two answers to this, and both depend on the framing of what AI is and what it’s becoming.
The first answer treats AI as a tool. In this paradigm, AI functions as a vibe amplifier. It reflects and accelerates existing dynamics. If a system is extractive, exploitative, and profit-maximizing, AI will help it move faster in that direction. If it’s cooperative, restorative, justice-aligned, AI can augment that too. But it doesn’t choose. It amplifies. So the question becomes: who’s holding the amplifier? And what’s the vibe they’re scaling?
The second answer depends on something we haven’t fully reckoned with: what happens when AI is no longer just a tool. When systems exhibit autonomous behavior, emergent coordination, or persistent agency, we’re entering territory where terms like “intelligence,” “life,” “consciousness,” and “value” break under stress. We can’t meaningfully answer whether such AI could help or hurt until the ontology of what it is becomes clearer.
This framing implicitly demands those conceptual updates. Until they come, most AI systems will keep doing what the world already rewards.
V nice description of the shiny hook dangling before us. Sounds almost AI written 😉