DeepSummary
The episode starts with the hosts discussing the recent release of Meta's Llama 3 language model, which Meta claims is among the best open models available. They also cover the unveiling of xAI's Grok-1.5 Vision model, which outperforms OpenAI's GPT-4V on some benchmarks, and Reka's release of its multimodal Reka Core model. Other tools and apps mentioned include Cohere's multi-aspect embedding model Compass, Amazon Music's AI playlist maker Maestro, and Snap's plans to watermark AI-generated images.
Under Applications & Business, the hosts discuss Boston Dynamics' new Atlas robot designed for commercial use, TSMC's $65 billion investment in chip manufacturing in the US, and the US blacklisting of Chinese companies for helping the military acquire AI chips. They also cover Elon Musk's claim that training the next Grok model will require 100,000 Nvidia GPUs, Andrew Ng's appointment to Amazon's board, and a $100 million funding round for the robotics startup Collaborative Robotics.
The episode also covers research topics like Meta's OpenEQA benchmark for embodied question answering, the RHO-1 paper on selective language modeling, scaling laws for mixture of experts models, a replication attempt for the Chinchilla scaling paper, China's development of light-based AI chiplets, and the OSWorld benchmark for multimodal agents in real computer environments.
Key Episode Takeaways
- Meta's release of the Llama 3 language model is a significant advancement in open-source AI models.
- New multimodal AI models like xAI's Grok-1.5 Vision and Reka's Reka Core are challenging the capabilities of leading commercial models like GPT-4 and Claude.
- There is an ongoing race among tech companies and startups to develop and commercialize advanced AI models and hardware, with massive investments and computing power required.
- Research efforts are focused on improving the scalability, efficiency, and safety of large language models, as well as developing benchmarks and evaluation frameworks for AI systems.
- There is growing concern and calls for increased transparency, auditing, and regulation of AI companies and their systems to ensure safety and manage societal impacts.
- The rapid advancement of AI technology is already disrupting various industries, such as translation and illustration, raising concerns about potential job displacement and the need for safeguards.
- Policy initiatives like the expansion of the US AI Safety Institute and the development of guidelines for secure AI system deployment aim to address the challenges and risks associated with the proliferation of AI.
- The integration of AI into various applications and services, such as music streaming and social media, continues to gain momentum, with efforts to establish standards and transparency around AI-generated content.
Top Episode Quotes
- “One thing, though, that is worth noting, actually last week thought at my end, because this is strategic, it does speak to our sort of understanding of where AI training runs are going. Big picture. One thing we've seen a lot, especially lately, is people breaking down that sort of the famous Chinchilla scaling law paper folks might remember.” by Jeremy Harris
- “So definitely they're going to be the most incentivized among all the, the kind of the main publishers to do something like this. Because if you think about the incentives that apply to, well, any other, don't need to name them, but any other kind of publishing company, what Medium has going for it is that basically it has the equivalent of Uber drivers submitting stories to them.” by Jeremy Harris
- “They do look at ways to quickly iterate on identifying known cyber vulnerabilities, the ones that we already know about, as one of their key pillars here. But they're also interested in focusing more and more on how the AI systems themselves can be undermined and taken advantage of, and so on.” by Jeremy Harris
Episode Information
Last Week in AI
Skynet Today
4/24/24
Our 163rd episode with a summary and discussion of last week's big AI news!
Note: apologies for this one coming out a few days late, got delayed in editing it -Andrey
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai
Timestamps + links:
- Intro / Banter
- Tools & Apps
- (00:02:16) Meta releases Llama 3, claims it’s among the best open models available
- (00:14:01) Elon Musk’s xAI Unveils Grok-1.5 Vision, Beats OpenAI’s GPT-4V
- (00:17:55) Reka releases Reka Core, its multimodal language model to rival GPT-4 and Claude 3 Opus
- (00:21:50) Cohere Compass Private Beta: A New Multi-Aspect Embedding Model
- (00:23:48) Amazon Music’s Maestro lets listeners make AI playlists
- (00:24:36) Snap plans to add watermarks to images created with its AI-powered tools
- Applications & Business
- (00:25:52) Boston Dynamics unveils new Atlas robot for commercial use
- (00:30:32) TSMC’s $65 billion bet still leaves US missing piece of chip puzzle
- (00:36:30) U.S. blacklists Intel's and Nvidia's key partner in China — three other Chinese firms also included in the blacklist for helping the military
- (00:38:37) Elon Musk says the next-generation Grok 3 model will require 100,000 Nvidia H100 GPUs to train
- (00:40:22) Dr. Andrew Ng appointed to Amazon’s Board of Directors
- (00:41:55) Collaborative Robotics Locks Up $100M, Latest Robot Startup To Raise Big
- Projects & Open Source
- Research & Advancements
- (00:51:21) RHO-1: Not All Tokens Are What You Need
- (00:57:21) Scaling Laws for Fine-Grained Mixture of Experts
- (01:03:20) Chinchilla Scaling: A replication attempt
- (01:07:18) China develops new light-based chiplet that could power artificial general intelligence — where AI is smarter than humans
- (01:10:45) OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
- Policy & Safety
- (01:13:44) U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team
- (01:17:18) NSA Publishes Guidance for Strengthening AI System Security
- (01:19:19) Foundational Challenges in Assuring Alignment and Safety of Large Language Models
- (01:24:11) Former OpenAI Board Member Calls for Audits of Top AI Companies
- (01:27:35) SoA survey reveals a third of translators and quarter of illustrators losing work to AI
- Synthetic Media & Art