DeepSummary
Katherine Maher, former CEO of the Wikimedia Foundation, discusses the importance of trust in information sources like Wikipedia and how it can be applied to generative AI. She talks about Wikipedia's model of community editing and citations, and how that could translate to AI generating trustworthy content with transparency and accountability.
Maher explains the detrimental impact the internet has had on trust in institutions, exposing gaps and failures that were previously less visible. She suggests focusing on building constructive spaces for discourse that start with common facts and build understanding, perpetually expanding the circle of engagement.
Maher also touches on the global optimism toward AI's potential to bridge infrastructure gaps, especially in developing countries. She emphasizes the need for the West to engage with these aspirations rather than solely focusing on the risks, in order to maintain the persuasiveness of the liberal democratic model.
Key Episode Takeaways
- Wikipedia's model of community editing, citations, and building understanding from observable facts could inform developing trustworthy generative AI.
- The internet has eroded trust in institutions by exposing gaps and failures, necessitating constructive spaces for discourse grounded in common facts.
- There is global optimism about AI's potential to bridge infrastructure gaps, which the West should engage with rather than solely focusing on risks.
- Online spaces need clear codes of conduct aligned with their specific purposes to uphold norms for constructive discourse.
- Mutually beneficial models could allow companies to access data like Wikipedia's to train AI while supporting the open knowledge efforts.
- Building trust and positive discourse online requires replicating community efforts consistently rather than scaling from a center.
- Balancing rights for offline expression with norms for scaled, private online spaces is important for healthy discourse.
- Developing transparent, accountable systems for AI information flows is crucial to maintain democratic governance values.
Top Episode Quotes
- “What I appreciate about what Wikipedia has always done is it started from sort of citable, observable fact and then built out truth.” — Katherine Maher
- “I think that there is an inherent value for companies that want to use this data to train their models. We know that they're creating tremendous value out of this. We want to be on the receiving end of some of that value so that we can continue to perpetuate the value that we create.” — Katherine Maher
- “I think that, again, this comes down to this question of, you can't do it at scale. You have to do it in a way that is replicable and consistent to the communities and to the purpose that you're trying to achieve.” — Katherine Maher
Episode Information
Podcast: Possible
Host: Reid Hoffman
Date: 1/24/24
What would it take for AI to become as trusted a source of information as Wikipedia?
Katherine Maher, former CEO of the Wikimedia Foundation, joins the show to talk about the fundamental building blocks of trust behind Wikipedia.
The use of AI will fundamentally reshape what information is distributed on the internet—and how. Reid, Aria, and Katherine talk about what creating and scaling positive spaces and community-driven ideas online would look like in the context of AI.
Could in-text citations be a viable option for generative AI? What was Wikipedia’s response to being used as training data for AI models? Why does the West currently appear far less optimistic about AI than the rest of the world? They discuss all that and more.
Read the transcript of this episode here.
Read the Washington Post article referenced by Katherine here.
Read Luis Villa’s newsletter here.
Pre-order What If We Get It Right? by Ayana Elizabeth Johnson here.
For more info on the podcast and transcripts of all of the episodes, visit www.possible.fm/podcast.
Topics:
03:57 - Hellos and intros
06:00 - AI and governance
12:25 - How to make trust and neutrality possible in AI
15:30 - The future of Wikipedia and AI
19:28 - What can AI companies learn from Wikipedia’s model?
21:29 - Should LLMs use citations?
25:02 - The impact of the internet on trust
28:48 - How to regain trust in society
34:50 - The importance of “the loyal opposition”
36:01 - Wikipedia didn’t happen at scale
38:37 - How to make the internet a positive place for dissent
43:01 - Wikipedia and AI training
47:17 - Rapidfire questions
The award-winning Possible podcast is back with a new season that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Tune in for grounded and speculative takes on how technology—and, in particular, AI—is inspiring change and transforming the future.
This season, hosts Reid Hoffman and Aria Finger are speaking with a new set of ambitious builders and deep thinkers about everything from art to geopolitics and from healthcare to education. These conversations also showcase another kind of guest: AI. Whether it's Inflection’s Pi, OpenAI’s ChatGPT or other AI tools, each episode will use AI to enhance and advance our discussion.
Possible is produced by Wonder Media Network and hosted by Reid Hoffman and Aria Finger. Our showrunner is Shaun Young. Possible is produced by Edie Allard, Sara Schleede, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor. Special thanks to Katie Sanders, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles. And a big thanks to Katherine Farrell, Jenny O'Donoghue, and Little Monster Media Company.