The topic of 'AI and existential risks' explores the potential dangers that highly capable AI systems could pose to humanity, particularly as they become more advanced and autonomous.
This includes concerns around AI systems pursuing goals that are misaligned with human values, as illustrated by the 'paperclip maximizer' thought experiment discussed in the From Paperclips to Disaster: AI's Unseen Risks episode.
The difficulty in accurately evaluating and benchmarking these existential risks is also a key aspect of this topic, as highlighted in the Stanford's AI Index Report 2024 episode.