AI News Daily

Issue 60512 · May 12, 2026 · 8 stories

Mira Murati's new startup just dropped its first big idea, and it's a fascinating one — "interaction models" that can listen and respond simultaneously, like a real conversation instead of the awkward walkie-talkie exchanges we're used to with AI. Beyond that, today's digest covers a wide sweep: OpenAI firing back at Anthropic with its own cybersecurity platform, GM swapping out hundreds of IT workers for AI-skilled replacements, Australian ministers pushing datacenters to go renewable, and a nasty new Linux vulnerability that's already being exploited in the wild. Grab your coffee — there's a lot to unpack.

Business, Deals & Funding

Claude Code Changelog

v2.1.139

This changelog entry for Claude Code v2.1.139 describes several new features: an 'agent view' for monitoring all Claude Code sessions, a '/goal' command for setting completion conditions that Claude works toward across turns, a '/scroll-speed' command for adjusting mouse wheel speed, and what appears to be a truncated mention of another 'claude plu...' feature.

Why it matters

The content is truncated, making it impossible to fully evaluate. From what's visible, the agent view and /goal command seem like meaningful productivity features for Claude Code users. The agent view addresses a real need for managing multiple sessions, and the /goal command adds useful automation. However, this is just a changelog snippet with minimal detail, so it's hard to assess the actual quality of implementation.

Guardian AI

Datacentres should be forced to invest in wind and solar energy, agree all states except Queensland

Australian state and federal energy ministers, with the exception of Queensland, have agreed that datacentres should be required to invest in new wind and solar generation and storage sufficient to fully offset their electricity consumption. The policy targets the growing energy demands driven by artificial intelligence and places the cost of that new renewable capacity on the datacentre operators themselves.

Why it matters

This is a sensible and forward-thinking policy proposal. As AI-driven datacentres are poised to become enormous consumers of electricity, requiring them to fund equivalent renewable energy capacity ensures that their growth doesn't undermine climate goals or strain existing grids. It places the cost burden appropriately on the industry profiting from the energy use rather than on taxpayers or existing consumers. Queensland's dissent is notable and likely reflects its closer ties to fossil fuel…

TechCrunch AI

Thinking Machines wants to build an AI that actually listens while it talks

Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, has announced 'interaction models' — AI that can process user input and generate responses simultaneously, enabling full-duplex conversation akin to a phone call rather than turn-based exchanges. Their model, TML-Interaction-Small, reportedly responds in 0.40 seconds, matching natural human conversation speed and outpacing comparable models from OpenAI and Google. There is no public release yet: a limited research preview is expected in the coming months, with a wider release later in 2026.
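The full-duplex idea can be sketched in ordinary code: instead of a request/response loop where perception freezes during generation, a listening task and a speaking task run concurrently over shared context. This is a toy illustration of the concept only, not Thinking Machines' actual architecture; every name in it is hypothetical.

```python
import asyncio

async def listen(incoming, context, done):
    # Perception never pauses: keep ingesting chunks even mid-reply.
    while True:
        chunk = await incoming.get()
        if chunk is None:              # sentinel: user stopped talking
            done.set()
            return
        context.append(chunk)

async def speak(outgoing, context, done):
    # Generation runs concurrently, always reacting to the newest context.
    while not done.is_set():
        if context:
            await outgoing.put(f"ack: {context[-1]}")
        await asyncio.sleep(0.01)

async def converse(utterances):
    incoming, outgoing = asyncio.Queue(), asyncio.Queue()
    context, done = [], asyncio.Event()
    tasks = [asyncio.create_task(listen(incoming, context, done)),
             asyncio.create_task(speak(outgoing, context, done))]
    for u in utterances:
        await incoming.put(u)
        await asyncio.sleep(0.03)      # user keeps talking while replies stream out
    await incoming.put(None)
    await asyncio.gather(*tasks)
    replies = []
    while not outgoing.empty():
        replies.append(outgoing.get_nowait())
    return replies
```

Running `asyncio.run(converse(["hello", "actually, wait"]))` yields a stream of partial acknowledgements that track the latest utterance, which hints at why interruption and back-channelling fall out naturally once listening is never suspended.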

Why it matters

This is a genuinely interesting conceptual advance. The turn-taking paradigm of current AI conversations is one of the most obvious gaps between human-to-human and human-to-AI interaction, and making interactivity native to the model rather than a bolted-on feature is a compelling architectural philosophy. That said, the skepticism in the article is well-placed — impressive benchmarks on paper don't always translate to impressive real-world experiences, and a 0.40-second response time means lit…

TechCrunch AI

Riding an AI rally, Robinhood preps second retail venture IPO

Robinhood has filed a confidential registration for its second venture fund (RVII), which will invest in growth-stage and early-stage startups, expanding beyond the late-stage focus of its first fund (RVI). RVI, which debuted on the NYSE in March at $21 per share, has more than doubled to $43.69, driven by AI enthusiasm. The first fund holds stakes in companies like OpenAI, Databricks, Stripe, and ElevenLabs. RVII's fundraising target hasn't been set yet, though RVI fell several hundred million short of its $1 billion goal. The funds aim to democratize venture investing by allowing non-accredited retail investors to buy shares in a portfolio of private startups through regular brokerage accounts, with daily liquidity and no carried interest. CEO Vlad Tenev envisions retail investors eventually participating in seed and Series A rounds alongside traditional venture firms.

Why it matters

This is a genuinely significant development in the democratization of venture capital, though it comes with substantial risks that the article only briefly acknowledges. The fact that RVI has doubled in two months likely reflects speculative AI hype more than fundamental value creation, which should concern anyone watching retail investors pile into illiquid private company stakes wrapped in a liquid wrapper. The no-carry, no-accreditation model is compelling and addresses a real inequity, but…

The Verge AI

OpenAI just released its answer to Claude Mythos

OpenAI has launched Daybreak, an AI-powered cybersecurity initiative designed to detect and patch software vulnerabilities before attackers can exploit them. Daybreak combines OpenAI's Codex Security AI agent (launched in March) with specialized cyber models including GPT-5.5-Cyber and GPT-5.5 with Trusted Access for Cyber. The system creates threat models based on an organization's code, identifies possible attack paths, validates likely vulnerabilities, and automates detection of high-risk ones. The launch comes about a month after Anthropic announced Claude Mythos and Project Glasswing, a competing security-focused AI initiative that Anthropic deemed too dangerous for public release but which was reportedly accessed by unauthorized parties. Like Glasswing, Daybreak is not built on a single AI model but integrates multiple models and security partners. OpenAI says it is working with i…

Why it matters

This represents a significant escalation in the AI cybersecurity arms race between major AI companies. The framing as an 'answer to Claude Mythos' highlights how competitive dynamics are now driving the development of increasingly powerful security-focused AI tools. There are both promising and concerning aspects here. On the positive side, AI-powered vulnerability detection could meaningfully improve software security by finding flaws faster than human security researchers can. However, the ra…

TechCrunch AI

GM just laid off hundreds of IT workers to hire those with stronger AI skills

General Motors has laid off approximately 600 salaried IT employees — over 10% of its IT department — in what the company describes as a deliberate skills swap. Rather than simply reducing headcount, GM is actively hiring replacements with AI-focused expertise, including AI-native development, data engineering, agent and model development, prompt engineering, and cloud-based engineering. This follows 18 months of white-collar layoffs at GM, including 1,000 software workers cut in August 2024. The restructuring has been shaped by chief product officer Sterling Anderson, hired in May 2025, who consolidated GM's technology businesses and prompted the departure of three senior executives. GM has since hired new AI-focused leaders, including former Apple AI lead Behrad Toghi and former Cruise head of AI Rashed Haq. TechCrunch frames this as a signal of what enterprise AI adoption looks like…

Why it matters

This story represents a significant and somewhat sobering milestone in the AI transformation of traditional industries. GM's move is notable because it's not a cost-cutting exercise disguised as modernization — they're explicitly replacing one set of skills with another, which is more honest than most corporate restructurings but no less painful for the affected workers. The specific roles they're hiring for — agent development, model engineering, AI-native workflows — suggest GM is serious abo…

Ars Technica AI

Linux bitten by second severe vulnerability in as many weeks

A severe Linux vulnerability called 'Dirty Frag' has been disclosed, allowing low-privilege users and containers to gain root access on Linux servers. The exploit chains two kernel vulnerabilities (CVE-2026-43284 and CVE-2026-43500) related to page cache handling in networking and memory-fragment components. The exploit is deterministic, works reliably across virtually all Linux distributions, and causes no crashes. Proof-of-concept exploit code was leaked online, effectively making it a zero-day, and Microsoft has observed hackers experimenting with it in the wild. This comes just weeks after another similar vulnerability called 'Copy Fail.' Patches are being released by distributions including Debian, AlmaLinux, and Fedora, and organizations are urged to apply them immediately.
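For triage, the first practical question is whether the running kernel already includes the fix. A minimal sketch of that check, assuming a simple semantic comparison of kernel versions; the fixed version below is a placeholder, so take the real one for CVE-2026-43284 and CVE-2026-43500 from your distribution's advisory:

```python
import platform
import re

def kernel_tuple(release):
    """Extract (major, minor, patch) from a release string like '6.8.0-41-generic'."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if not m:
        raise ValueError(f"unrecognized kernel release: {release!r}")
    return tuple(int(x) for x in m.groups())

def is_patched(release, fixed):
    """True if the running kernel is at or above the distro's fixed version."""
    return kernel_tuple(release) >= fixed

# Placeholder -- substitute the fixed release from your distribution's advisory.
FIXED = (6, 12, 9)

def check_host():
    return is_patched(platform.release(), FIXED)
```

Note that distro kernels often backport fixes without bumping the upstream version number, so a version comparison like this is only a first pass; the distribution's advisory and package changelog, not the version string, are authoritative.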

Why it matters

This is a genuinely alarming development for Linux security, particularly for shared hosting environments, cloud providers, and container orchestration platforms where privilege escalation from untrusted users is a critical threat model. The fact that this is the second severe page-cache exploitation vulnerability in two weeks, following the same bug family as Dirty Pipe from 2022, suggests a systemic weakness in how the Linux kernel handles page cache references that needs deeper architectural…

The Verge AI

Here’s what Mira Murati’s AI company is up to

Thinking Machines, the AI company founded by former OpenAI CTO Mira Murati, announced it is developing 'interaction models' designed to enable real-time, multimodal collaboration between humans and AI. Unlike current models that process input in a single thread and freeze perception while generating responses, interaction models continuously take in audio, video, and text, thinking and responding in real time. The company shared demos including real-time speech translation, detecting animal mentions in stories, and posture correction. Thinking Machines argues this approach solves a 'bandwidth bottleneck' in human-AI collaboration by meeting humans where they are rather than forcing them to adapt to AI interfaces. The models are not yet publicly available, with plans for a limited research access program.

Why it matters

This is an interesting conceptual framing, though the actual differentiation from existing multimodal real-time AI systems (like GPT-4o's voice mode or Google's Project Astra) remains unclear from the announcement alone. The 'interaction model' terminology is clever branding, but the demos shown—real-time translation, audio monitoring, posture detection—are capabilities others have already demonstrated. The key question is whether Thinking Machines has a genuinely novel architectural approach t…

