Business, Deals & Funding
TechCrunch AI

Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return
Amazon is investing another $5 billion in Anthropic, bringing its total investment to $13 billion. In return, Anthropic has committed to spending over $100 billion on AWS over the next decade, securing significant computing capacity, including access to Amazon's Trainium chips.
Why it matters
Deals of this size show how heavily cloud providers are doubling down on AI, and how entangled the relationships between infrastructure providers and model builders are becoming.
TechCrunch AI

Google rolls out Gemini in Chrome in 7 new countries
Google has expanded the availability of Gemini in Chrome to Australia, Indonesia, Japan, the Philippines, Singapore, South Korea, and Vietnam. This update brings the AI-powered features to a wider user base on both desktop and iOS, though Japan will only see the desktop version.
Why it matters
Expanding Gemini's reach in Chrome suggests Google is serious about embedding AI into its core products, but the limited availability of its agentic features hints at ongoing development and testing.
The Verge AI

Silicon Valley has forgotten what normal people want
A Verge opinion piece argues that Silicon Valley technologists, particularly AI enthusiasts, repeatedly rediscover ideas long established in other fields and mistake them for groundbreaking insights. The author uses the example of an acquaintance who believed LLMs revealed that 'knowledge is structured into language,' a concept explored by linguists a century ago. The piece draws parallels between AI hype, NFTs, and the metaverse as cycles of tech insiderism disconnected from mainstream needs.
Why it matters
This critique highlights a persistent cultural blind spot in tech: the tendency to rebrand existing knowledge as revolutionary innovation, which risks misdirecting AI development away from genuine user value.
TechCrunch AI

It’s not just one thing — it’s another thing
A recurring sentence construction ('It's not just X — it's Y') has become so prevalent in AI-generated text that it now serves as a near-certain indicator of synthetic writing. Barron's analysis of the AlphaSense database found the phrasing had more than quadrupled in corporate communications between 2023 and 2025, appearing in documents from Cisco, Microsoft, McKinsey, and other major firms. AI detection experts note the pattern is a strong signal of AI use, particularly in emotionally detached corporate documents.
Why it matters
This trend highlights how LLM stylistic quirks are quietly homogenizing corporate language at scale, raising broader questions about authenticity and transparency in business communications.
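As a rough illustration of how such a phrasing fingerprint could be counted at scale, here is a minimal Python sketch. The regex and sample text are assumptions for illustration only, not the methodology Barron's or AlphaSense actually used.

```python
import re

# Matches variants of the "it's not just X — it's Y" construction.
# Illustrative only; a production detector would need far more variants.
PATTERN = re.compile(
    r"\b(?:it.?s|this is)\s+not\s+(?:just|only|merely)\s+[^.;]{1,60}?"
    r"\s*(?:--|\u2014|\u2013|;|,)\s*(?:it.?s|this is|but)\b",
    re.IGNORECASE,
)

def count_construction(text: str) -> int:
    """Count occurrences of the 'not just X — it's Y' pattern."""
    return len(PATTERN.findall(text))

sample = (
    "It's not just a product launch — it's a paradigm shift. "
    "We shipped the feature on time."
)
print(count_construction(sample))
```

Run over a corpus of filings year by year, a counter like this would surface exactly the kind of frequency jump the Barron's analysis describes.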
The Verge AI

Fortnite developers can make AI characters now — just don’t try to date them
Epic Games is introducing a new "conversations" tool in Fortnite, allowing developers to create AI-powered characters with unscripted dialogue and interactions. However, Epic has implemented rules to prevent these AI characters from becoming romantic partners or providing medical/mental health guidance.
Why it matters
This move highlights the growing integration of AI in gaming and the simultaneous need for guidelines to manage potential misuse and ethical concerns within these virtual interactions.
Guardian AI

Is Richard Tice’s picture AI-manipulated? Here are five giveaways
An image posted by Reform UK deputy leader Richard Tice, showing a campaign event, has been flagged by experts and social media users as potentially AI-manipulated or heavily edited. Tell-tale signs such as 'sausage fingers' have prompted speculation that the image is not authentic.
Why it matters
This incident highlights the increasing difficulty in distinguishing between real and AI-generated content, with potential implications for political discourse and trust.
TechCrunch AI

NSA spies are reportedly using Anthropic’s Mythos, despite Pentagon feud
The NSA is reportedly utilizing Anthropic's Mythos AI model, designed for cybersecurity but withheld from public release due to its potential for offensive cyberattacks. This occurs despite the Department of Defense labeling Anthropic a "supply-chain risk" after a dispute over access to the model's capabilities.
Why it matters
The situation highlights the complex and sometimes contradictory relationship between government agencies and AI developers regarding national security and access to powerful AI technologies. It also raises questions about oversight and control of restricted AI models.
Lenny's Newsletter

🎙️ This week on How I AI: How Intercom 2x’d their engineering velocity with Claude Code
Intercom doubled engineering throughput in nine months by deeply integrating Claude Code with custom skills, telemetry, and a permission-driven culture. Key enablers included mature CI/CD foundations, custom PR-creation skills that enforce quality guardrails, and dashboards tracking AI usage patterns. The company also built agent-friendly CLI tooling to ensure autonomous agents can interact with their product without friction.
Why it matters
Intercom's results underscore that AI coding tools are force multipliers on existing engineering maturity — organizations with weak fundamentals risk amplifying dysfunction rather than velocity. The cultural and organizational shifts (permission, accountability, agent-first workflows) appear to matter as much as the technology itself.
Guardian AI

Reform’s Richard Tice posts picture with telltale signs of AI manipulation, say experts
Reform UK deputy leader Richard Tice posted an image on X that experts from Peryton Intelligence say shows telltale signs of AI generation or manipulation, including distorted fingers. The image purportedly showed a diverse group of Reform supporters canvassing in Birmingham. The incident raises concerns about the use of AI-generated imagery in political messaging.
Why it matters
The use of AI-manipulated or generated images in political contexts is a growing disinformation risk, and this case underscores the urgent need for better detection tools and transparency standards in political communications.
TechCrunch AI

CEO and CFO suddenly depart AI nuclear power upstart Fermi
Fermi, an AI nuclear power startup co-founded by former U.S. Energy Secretary Rick Perry, saw its CEO Toby Neugebauer and CFO Miles Everson suddenly depart, causing shares to drop 22%. The company is developing an AI data center campus in Amarillo, Texas (Project Matador) powered by nuclear reactors, but has faced friction with a key customer. Fermi rebranded the leadership shakeup as 'Fermi 2.0' in an attempt to reassure investors.
Why it matters
The abrupt dual departure of both CEO and CFO signals serious instability at Fermi, and rebranding the crisis as 'Fermi 2.0' is unlikely to inspire confidence given the project's ongoing struggles.
Lenny's Newsletter

How Intercom 2x’d their engineering velocity in 9 months with Claude Code | Brian Scanlan
Intercom doubled its engineering throughput in nine months by deploying Claude Code across 100% of its R&D staff, including designers, PMs, and TPMs. The company built a skills repository with automated enforcement hooks, deep telemetry via Honeycomb, and a permission framework to ensure quality at scale. They are also adapting their product for agent-first workflows using CLIs, MCPs, and ephemeral APIs.
Why it matters
Intercom's case is a compelling real-world benchmark for AI-driven engineering productivity, but the durability of 2x velocity gains will depend on how well their quality guardrails scale as complexity grows.
MIT Tech Review AI

Chinese tech workers are starting to train their AI doubles — and pushing back
Chinese tech workers are being asked to train AI agents to automate their jobs, leading to concerns about job security and individuality. A viral GitHub project, Colleague Skill, highlights this trend by allowing users to create AI doubles of their coworkers based on chat history and work patterns.
Why it matters
While companies see this as a way to standardize and codify work processes, employees find it alienating and potentially threatening to their positions.
Guardian AI

French prosecutors summon Elon Musk over alleged child abuse images on X
French prosecutors have summoned Elon Musk and former X CEO Linda Yaccarino to Paris for voluntary interviews as part of a cybercrime investigation into X. The probe focuses on allegations of misconduct including the spread of child sexual abuse material and deepfake content on the platform. The summons marks a significant legal escalation for X in Europe.
Why it matters
This case underscores growing regulatory pressure on social media platforms over harmful content moderation failures, with AI-generated deepfakes adding a new dimension to longstanding CSAM concerns.
Guardian AI

Grimes joining LinkedIn is artwashing at its most brazen. I should know – I released my new film on there
Grimes, known for her eccentric behavior, may be releasing music exclusively on LinkedIn. The author argues this move highlights the platform's growing embrace of AI content and its potential transformation into a "slop dystopia."
Why it matters
Grimes' choice raises questions about the evolving role of professional networking sites and the blurring lines between art, technology, and corporate platforms.
Guardian AI

Teacher v chatbot: my journey into the classroom in the age of AI – podcast
A podcast from The Guardian explores the experience of a teacher navigating the classroom for the first time while also integrating AI into their teaching methods. The piece highlights the challenges and anxieties that arise when combining traditional teaching with new AI technologies.
Why it matters
This personal account offers a valuable perspective on the practical implications of AI in education, highlighting both the potential and the difficulties of implementation.
From X/Twitter
- A practical guide to building agent-native apps — including the patterns that quietly break everything.
- Harrison Chase lays out all the infra you need to deploy long-horizon agents in production.
- A GitHub repo distilled from Andrej Karpathy's workflows hit 48K stars — and gained 7.9K in a single day.
- Claude Cowork can now build live dashboards and trackers from a prompt connected to your apps and files — no manual refresh needed.
- An ex-Anthropic researcher says most users leave 60–70% of Claude's reasoning on the table — here are the 10 internal prompts that fix it.
- Microsoft's Copilot Cowork is now available via the Frontier program, letting users delegate long-running tasks across the Microsoft 365 suite.
- Every SEC filing — 10-K, 10-Q, 8-K, Form 4, 13D — scored by AI and delivered free before you finish your morning coffee.
- Kimi K2.6 scored 85% on Live Code Bench while Claude hit 64% — China's open-source answer to Claude Code has arrived.
- Microsoft says it has solved the context window problem — here's what that actually means.
- A skill that lets PMs vibe code in production with Claude Code, Cursor, or Codex.
- Anthropic's CEO says human-level intelligence is closer than society recognizes — and he's not talking about 2030.
- Google DeepMind assembled a strike team led by Sergey Brin to turn coding models into full AI researchers that can automate the entire R&D loop.
From Reddit/HN/YC
- [Hacker News] Benchmarking shows Qwen3.6-35B-A3B speculative decoding is net-negative on RTX 3090 — faster isn't always faster.
- [Reddit r/singularity] Opus 4.7 takes the top spot on the LLM Debate Benchmark with 51 wins, 4 ties, and zero losses against the previous champion.
- [Hacker News] pg_roast is a Postgres extension that audits your database schema and responds with harsh, unfiltered judgment.
- [Hacker News] Homeland Security is building smart glasses to collect intelligence on Americans, according to Ken Klippenstein.
- [Reddit r/localllama] OpenAI is now selling ChatGPT ads matched to prompt relevance — and the SEO industry may never recover.
- [Hacker News] Sakana AI's String Seed of Thought is a prompting method designed to generate outputs that are both distribution-faithful and genuinely diverse.
- [Hacker News] Palmier bridges your AI agents and your phone, letting them act on mobile context directly.
- [Reddit r/singularity] OpenAI's GPT-Image-2 just landed in ChatGPT — complex grid layouts that stumbled days ago are now near-perfect at 10x the density.
- [Hacker News] A new paper claims KV cache compression at 900,000x — well past what TurboQuant and per-vector Shannon limits were thought to allow.
- [Hacker News] A new paper models tech evolution using Git-style phylogenetics — branching, merging, and versioning as a framework for how tools actually spread.
- [Hacker News] Agent-flow visualizes Claude Code agent orchestration in real time as it runs.
- [Reddit r/artificial] Most agent frameworks, one r/artificial poster argues, conflate what a skill is with how it executes — and that confusion is load-bearing.