
Earlier this month the AI company Anthropic said it had created a model so powerful that, out of a sense of responsibility, it was not going to release it to the public. Anthropic says the model, Mythos Preview, excels at spotting and exploiting vulnerabilities in software, and could pose a severe risk to economies, public safety and national security. But is this the whole story? Some experts have expressed scepticism about the extent of the model's capabilities. Ian Sample hears from Aisha Down, a reporter covering artificial intelligence for the Guardian, to find out what the decision to limit access to Mythos reveals about Anthropic's strategy, and whether the model might finally spur more regulation of the industry. Help support our independent journalism at theguardian.com/sciencepod
