AI News Daily

Issue 60507 · May 07, 2026 · 8 stories

The AI world is buzzing with ambition this week. Anthropic CEO Dario Amodei is projecting the company could grow by a staggering 80x in 2026, while the "Code with Claude" event dropped a wave of new developer features, from automated routines to multi-agent orchestration, plus a flurry of Claude Code patch releases to keep up the pace. Beyond Anthropic's spotlight, today's digest also dives into the chip supply crunch that top AI executives say will bottleneck the industry for years, Europe's soul-searching over its AI partnerships with US tech giants, and the courtroom drama unfolding in the Musk v. Altman trial.

Business, Deals & Funding

Claude Code Changelog

v2.1.129

Version 2.1.129 of Claude Code adds several new features: a `--plugin-url` flag for fetching plugin ZIP archives from URLs, a `CLAUDE_CODE_FORCE_SYNC_OUTPUT=1` environment variable for terminals where auto-detection fails (like Emacs eat), a `CLAUDE_CODE_PACKAGE_MANAGER_AUTO_UPDATE` environment variable for automatic background updates on Homebrew/WinGet installations with restart prompts, and a change requiring plugin manifest `themes` and `monitors` to be declared under an `experimental` key.

Why it matters

This is a solid incremental release with practical quality-of-life improvements. The plugin URL flag makes plugin distribution more flexible, the sync output env var addresses a real pain point for Emacs users, and the auto-update mechanism is a welcome convenience for keeping installations current. Moving themes and monitors under an experimental namespace in plugin manifests shows good API hygiene. Nothing groundbreaking, but all useful additions.
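For anyone who wants to try these, a minimal shell sketch of the new knobs. The plugin URL and the manifest fields other than `themes`/`monitors` are placeholders, and the exact `claude` invocation is inferred from the release notes rather than confirmed:

```shell
# Force synchronous output where terminal auto-detection fails (e.g. Emacs eat)
export CLAUDE_CODE_FORCE_SYNC_OUTPUT=1

# Opt in to automatic background updates for Homebrew/WinGet installs
export CLAUDE_CODE_PACKAGE_MANAGER_AUTO_UPDATE=1

# Fetch a plugin ZIP archive directly from a URL (placeholder URL):
#   claude --plugin-url "https://example.com/my-plugin.zip"

# Plugin manifests must now declare themes/monitors under "experimental"
# (surrounding manifest fields here are illustrative):
cat > plugin-manifest.json <<'EOF'
{
  "name": "my-plugin",
  "experimental": {
    "themes": [],
    "monitors": []
  }
}
EOF
```

The heredoc only illustrates the new nesting requirement; a real manifest will carry more fields.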

Claude Code Changelog

v2.1.131

Version 2.1.131 is a bugfix release for Claude Code that addresses two issues: a VS Code extension activation failure on Windows caused by a hardcoded build path in the bundled SDK (specifically a createRequire polyfill bug), and a Mantle endpoint authentication failure due to a missing x-api-key header.

Why it matters

This is a straightforward patch release fixing two platform-specific bugs. The Windows VS Code extension fix is important for Windows users who were completely blocked from using the extension, and the Mantle authentication fix addresses a critical integration issue. Both are the kind of regressions that likely slipped through due to insufficient cross-platform testing or environment-specific CI gaps. Small but necessary release.

Claude Code Changelog

v2.1.132

This patch release adds the CLAUDE_CODE_SESSION_ID environment variable to Bash tool subprocesses (matching hooks session_id), adds CLAUDE_CODE_DISABLE_ALTERNATE_SCREEN=1 to opt out of fullscreen rendering and keep conversation in native terminal scrollback, adds a 'Pasting…' footer hint during Ctrl+V image paste operations, and fixes a bug where external SIGINT signals (from IDE stop buttons or kill -INT) did not trigger graceful shutdown, leaving terminal modes in a bad state.

Why it matters

This is a solid quality-of-life and reliability patch. The DISABLE_ALTERNATE_SCREEN option is particularly welcome for users who want to scroll back through their conversation history in the terminal, which has been a common pain point. Fixing the external SIGINT handling is an important robustness improvement, especially for users running Claude Code within IDEs where the stop button is a natural way to interrupt. The session_id exposure to Bash subprocesses is a nice touch for hook and automation workflows.
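A quick shell sketch of the two new knobs from this release (the echo line is purely illustrative; outside a Claude Code Bash subprocess the session variable will be unset):

```shell
# Keep the conversation in the terminal's native scrollback instead of the
# fullscreen alternate-screen buffer
export CLAUDE_CODE_DISABLE_ALTERNATE_SCREEN=1

# Inside a Bash tool subprocess, the session id now matches the hooks'
# session_id, so a script can tag its logs with it (illustrative):
echo "claude session: ${CLAUDE_CODE_SESSION_ID:-not-in-a-claude-session}"
```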

Guardian AI

Europe’s AI translation industry told it risks reputation by partnering with US firms

European AI translation companies, particularly the leading startup DeepL, face criticism for partnering with US firms like Amazon's cloud computing division. Industry figures warn that such partnerships risk undermining Europe's world-leading status in machine translation and raise concerns about Silicon Valley's growing monopoly over digital infrastructure, at a time when EU businesses already lag behind the US and China in AI adoption.

Why it matters

This article highlights a genuine and important tension in European tech policy: the desire to maintain technological sovereignty while needing the scale and infrastructure that US cloud giants provide. DeepL is one of Europe's genuine AI success stories, and the concern about dependency on American infrastructure is legitimate, especially given the political unpredictability of US tech policy. However, the reality is that building competitive cloud infrastructure from scratch in Europe is enormously difficult.

TechCrunch AI

Five architects of the AI economy explain where the wheels are coming off

At the Milken Global Conference, five key figures in the AI supply chain discussed critical bottlenecks facing the AI economy. ASML CEO Christophe Fouquet predicted the chip market will be supply-limited for the next 2-5 years, meaning hyperscalers won't get all the chips they need. Google Cloud COO Francis deSouza noted their revenue crossed $20 billion last quarter with 63% growth, while their backlog nearly doubled from $250 billion to $460 billion in a single quarter. Applied Intuition CEO Qasar Younis identified real-world data as his primary constraint, arguing synthetic simulation cannot fully replace physical-world training data for autonomous systems. The panel also included Perplexity's CBO Dimitry Shevelenko and Eve Bodnia of Logical Intelligence, a startup challenging the foundational architecture of AI, whose technical board is chaired by Yann LeCun.

Why it matters

This is a highly credible and substantive panel that surfaces the real structural tensions in the AI economy. The chip supply constraint described by Fouquet is particularly significant: ASML's monopoly position gives him unique visibility into the entire semiconductor pipeline, and a 2-5 year supply limitation has enormous implications for AI deployment timelines and competitive dynamics among hyperscalers. Google Cloud's backlog nearly doubling to $460 billion in one quarter is a staggering figure in its own right.

Lenny's Newsletter

Code with Claude: The 5 biggest updates explained

Claire Vo breaks down the five biggest announcements from Anthropic's 'Code with Claude' event: Claude Code routines for automating recurring workflows on schedules or webhooks, 'Outcomes' featuring rubric-based agent grading, multi-agent orchestration enabling specialized AI teams with different roles and tools, a new 'Dreams' memory system for long-term agent behavior, and increased Claude Code usage limits. The episode discusses how these features could reshape agentic software development and offers practical insights on building agentic products.

Why it matters

This appears to be a lightweight podcast summary post rather than a substantive article, so there's limited depth to evaluate. The topics covered—scheduled AI routines, outcome-based evaluation, multi-agent orchestration, and persistent memory—represent genuinely significant developments in the agentic AI space. However, the content is essentially a promotional recap of Anthropic's announcements without critical analysis of limitations, costs, or competitive context.

NY Times

Anthropic’s C.E.O. Says It Could Grow by 80 Times This Year

Anthropic CEO Dario Amodei stated that the AI startup could grow by 80 times in 2026, noting that this rapid growth has exponentially increased the company's need for more computing power.

Why it matters

This article highlights the extraordinary pace of growth in the AI industry. An 80-fold increase in a single year, if realized, would be remarkable even by tech startup standards. The emphasis on computing power needs underscores the resource-intensive nature of AI development and suggests Anthropic is scaling aggressively to compete in the AI race. Such claims should be viewed with some skepticism, as projections of this magnitude carry significant uncertainty, but they reflect the intense momentum in the sector.

The Verge AI

Musk’s biggest loyalist became his biggest liability

The article covers Shivon Zilis' testimony in the Musk v. Altman trial. Zilis, mother of four of Musk's children, worked across Musk's AI portfolio including Tesla, Neuralink, and OpenAI starting in 2017. She served on OpenAI's board when her twins by Musk were born in 2021, keeping the paternity secret until it was publicly reported. Her notes and testimony are described as potentially the most important evidence in the trial so far, and the article frames her as having shifted from being Musk's biggest loyalist to his biggest liability in the case.

Why it matters

The article is written with a distinctly editorial, opinionated tone, opening with a provocative rhetorical question directed at Zilis. While it provides substantive trial reporting about conflicts of interest and secret relationships that are genuinely newsworthy, the framing leans heavily into personality-driven drama rather than purely legal analysis. Still, the conflict of interest of a board member secretly having children with a key stakeholder is legitimately significant.

