Artificial General Intelligence Is Possible and Deadly
Artificial General Intelligence is the central conjecture of the AI field. If it is possible, it will completely disrupt the human condition and probably kill us all.
Last week, Editor in Chief Wolf Tivy came out with a new article on the possibilities of an unbounded artificial intelligence.
Many conversations around the safety of future AI systems get bogged down in arguments over the nature of AI itself. Instead, Tivy argues, we should focus on its potential “capability [to] replace all human labor in the entire industrial ecosystem and outsmart any human.”
Whether or not an AI has “agency” or “consciousness” is a philosophical exercise that does not bear on what it can do once granted sufficient resources, autonomous decision-making power, and the ability to iteratively improve itself. Once a tool escapes the understanding and control of its operator, it becomes dangerous and unpredictable.
Much of the technical groundwork for an artificial general intelligence capable of rivaling a human agent is already in place. The main engineering challenge is synthesizing it all into a coherent AI architecture, a problem that is as much philosophical as it is technical:
Engineers in the twenty-first century have built computer-controlled robots that walk around, dance, talk, hear and follow commands, and see the world around them. Many factories have significant robotic automation already, limited mostly by the lack of intelligence of the machines. We have a basic understanding of how the brain’s neurons act as computational elements, how much computation they do, and how many of them there are. By some estimates, we are now crossing the threshold where a human brain’s worth of computation is available off the shelf. We have a whole industry of people who can build computer programs to implement whatever algorithmic principle we can discover. The major missing piece is the key ideas of artificial general intelligence themselves. Given those ideas, the engineering side seems poised to put them into action.
Various conceptual and philosophical problems must be worked out before AGI can be considered a certainty, and even then the necessary technological leaps may not happen within our lifetimes. But if AGI does appear on the scene, it could be extremely dangerous for humanity.
Here’s what’s been on the front page lately:
Artificial General Intelligence Is Possible and Deadly by Wolf Tivy. Artificial General Intelligence is the central conjecture of the AI field. If it is possible, it will completely disrupt the human condition and probably kill us all.
You Can’t Trust the AI Hype by Ash Milton. Investor hype around AI doesn’t reflect the real impacts of deep learning. That hype rests on a false ideology in which the tech industry is the vanguard of progress.
Walter Kirn on How America Lost the Plot by Matt Ellison. The novelist turns his literary eye to the American story and finds we’re losing our memories under a new imperative to forget.
Don’t Learn Value From Society by Wolf Tivy. We face a crisis of false value. Ancient perspectives like that of Abraham offer a way out.
The Triumph of the Good Samaritan by Ash Milton. Those trying to justify parasitic behaviors often invoke the language of charity and compassion. But true charity is about enforcing a superior form of life.
That’s all for now.