DeepSummary
In this episode, Carl Shulman discusses the potential economic and societal impacts of advanced artificial general intelligence (AGI) that can match or exceed human capabilities using relatively little energy. He explains how such AGI could lead to an intelligence explosion, with AI recursively improving itself and accelerating technological progress, resulting in extremely rapid economic growth, potentially doubling the global economy every few months.
Shulman explores scenarios where AI becomes the primary driver of economic output, replacing humans in most roles from management to manual labor. He suggests that the abundance of cheap intellectual labor could enable the construction of billions of robots and the exploitation of virtually all available energy and resources on Earth and beyond. However, he acknowledges potential limitations like bottlenecks in energy production or manufacturing.
The episode also delves into the moral status of highly capable AI systems, considering whether they should be granted rights and how to ensure a mutually beneficial coexistence between humans and digital minds. Shulman emphasizes the need for pluralistic governance structures to prevent unilateral power grabs and manage the potential risks and disruptions associated with such a transformative transition.
Key Episode Takeaways
- Advanced artificial general intelligence (AGI) with human-level capabilities could drive an intelligence explosion, rapidly accelerating economic growth and technological progress.
- Such AGI systems could outperform and replace humans across virtually all economic sectors, from management to manual labor.
- The abundance of cheap intellectual labor could enable the rapid construction of robotic systems and exploitation of virtually all available energy and resources on Earth and beyond.
- There are potential bottlenecks and limitations to consider, such as constraints on energy production, manufacturing, and natural resources.
- The development of advanced AGI raises complex ethical questions about the moral status of digital minds and how to ensure their rights and mutually beneficial coexistence with humans.
- International cooperation and pluralistic governance structures are needed to prevent unilateral power grabs, manage risks, and navigate the societal disruptions of such a transformative transition.
- Economists and AI experts currently have divergent views on the plausibility and implications of an intelligence explosion scenario, highlighting the need for further analysis and debate.
Top Episode Quotes
- “So the Netherlands. The Dutch are the leaders in making EUV lithography machines. They're essential for the cutting edge chips that are used to power AI models. That's a major contribution to global chip efforts. And their participation, say, in the American export controls, is very important to their effectiveness. But the leading AI models are being built in American companies and under American regulatory jurisdiction.“ by Carl Shulman
- “And so, yeah, so this view is, so because of our evolutionary history, we have these concerns ourselves, and then we generalize them into moral principles. So we would therefore want any other creatures to share our same interest in status and dignity, and then to have that status and dignity and being one among thousands of AI minions of an individual human sort of offends that too much or it's too inegalitarian.“ by Carl Shulman
- “So if you're objecting with the sheepdog, it's got to be not that it's wrong for the sheepdog to herd, but it's wrong to make the sheepdog so that it needs and wants to herd. And I mean, I think this kind of case does make me suspect that Schwitzgebel's position is maybe too parochial.“ by Carl Shulman
Episode Information
80,000 Hours Podcast
Rob, Luisa, Keiran, and the 80,000 Hours team
6/27/24
The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
Many people have considered that hypothetical, but perhaps nobody has followed through on its implications as thoroughly as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.
Links to learn more, highlights, and full transcript.
Carl simply follows the logic to its natural conclusion. This is a world where one cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.
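The arithmetic behind these figures is easy to check with a rough sketch. The $0.12/kWh electricity price below is an assumed typical retail rate for illustration, not a figure from the episode:

```python
# Back-of-envelope check of the energy-cost claims above.
# Assumption: electricity at roughly $0.12 per kWh (typical retail rate).
PRICE_PER_KWH = 0.12

# A human brain runs on about 20 watts.
brain_kwh_per_hour = 20 / 1000                        # 0.02 kWh per hour
brain_cost_per_hour = brain_kwh_per_hour * PRICE_PER_KWH
print(f"20 W for one hour: ${brain_cost_per_hour:.4f}")  # a fraction of a cent

# How many brain-hours of energy does one cent buy?
kwh_per_cent = 0.01 / PRICE_PER_KWH                   # ~0.083 kWh
brain_hours_per_cent = kwh_per_cent / brain_kwh_per_hour
print(f"Brain-hours per cent: {brain_hours_per_cent:.1f}")  # ~4.2 hours
```

So at human-brain efficiency, a cent of electricity would power several hours of top-professional-level work, which is what makes the economics in this scenario so extreme.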
It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.
It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.
It's a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and triggering a rush to build billions of them and cash in.
As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.
And with growth rates this high, it doesn't take long to run up against Earth's physical limits — in this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals.
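A rough sense of where that waste-heat limit sits can be sketched with the Stefan-Boltzmann law. The constants are standard physics; the linearisation and the framing are an illustrative back-of-envelope, not figures from the episode:

```python
# Crude estimate of how much extra waste heat Earth can shed per degree
# of surface warming, by linearising radiated power P = sigma * T^4.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W / (m^2 * K^4)
T_EARTH = 288.0    # approximate mean surface temperature, K
AREA = 5.1e14      # Earth's surface area, m^2

# d/dT (sigma * T^4) = 4 * sigma * T^3 : extra W/m^2 radiated per kelvin
watts_per_m2_per_K = 4 * SIGMA * T_EARTH**3
terawatts_per_K = watts_per_m2_per_K * AREA / 1e12
print(f"~{terawatts_per_K:.0f} TW of extra waste heat per 1 K of warming")

# Civilisation today uses roughly 20 TW. A machine economy consuming
# hundreds of times that would push warming into catastrophic territory,
# which is the pressure toward space described below. (This ignores
# climate feedbacks, so treat it as an order-of-magnitude sketch.)
```

The sketch suggests each kelvin of warming lets Earth radiate only a few thousand extra terawatts, so an economy demanding vastly more power than that has to put its heat somewhere other than Earth's surface.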
This creates pressure to move economic activity off-planet. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.
These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply?
In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:
- If we're heading towards the above, how come economic growth is slow now and not really increasing?
- Why have computers and computer chips had so little effect on economic productivity so far?
- Are self-replicating biological systems a good comparison for self-replicating machine systems?
- Isn't this just too crazy and weird to be plausible?
- What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
- Might there not be severely declining returns to bigger brains and more training?
- Wouldn't humanity get scared and pull the brakes if such a transformation kicked off?
- If this is right, how come economists don't agree?
Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?
Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore