Business, Deals & Funding
Guardian AI

Tuesday briefing: How AI facial recognition in policing works – and how it can go wrong
Tuesday Briefing: AI Facial Recognition in Policing

Summary
This Guardian newsletter from May 5, 2026, reports on the rapidly expanding use of AI-powered facial recognition technology across the UK, both by police forces and private retailers. The central concern is that the deployment of this technology is outpacing the regulatory safeguards designed to govern it.

Key Points

How It Works
- Live facial recognition (LFR) systems scan members of the public in real time, comparing faces against watchlists or databases
- Retailers are also deploying similar tools to identify suspected shoplifters or banned individuals

How It Can Go Wrong
Facial recognition technology raises several well-documented concerns:
- Accuracy issues: These systems have historically shown higher error rates for people of color, women, and other demographic groups, leading to potential misidentification
- False positives: In…
Why it matters
I'm watching how the gap between AI deployment speed and regulatory oversight is widening in law enforcement, with facial recognition rolling out across UK policing and retail faster than the rules meant to govern it can keep up. The persistent accuracy disparities across demographic groups remain a serious unresolved problem, and they make this one of the highest-stakes AI applications I'm tracking right now.
Guardian AI

Google DeepMind workers in UK vote to unionize amid deal with US military
Google DeepMind UK Workers Vote to Unionize

This article from The Guardian (dated May 4, 2026) reports that workers at Google DeepMind's UK operations have voted to unionize, with a key motivating factor being concerns over a recently announced deal between Google and the US military (the Pentagon).

Key Points
- Unionization vote: Google DeepMind workers in the UK voted to form a union and prepared a letter to management requesting recognition.
- Military deal concerns: The unionization effort was driven in part by opposition to a deal between Google and the US Department of Defense announced the prior week.
- Worker objections: At least one worker cited:
  - The Iran war as context for their concerns
  - The Pentagon's feud with Anthropic (a rival AI company)
  - A belief that the US Department of Defense is "not a responsible partner"

Broader Context
This development sits at the intersection of several…
Why it matters
I'm watching how AI workers are increasingly using collective action as a lever to push back on military contracts, a dynamic that's been building since the Google Maven protests years ago. The DeepMind unionization vote signals that ethical objections to defense partnerships are becoming an organizational force, not just individual resignations.
TechCrunch AI

As workers worry about AI, Nvidia’s Jensen Huang says AI is ‘creating an enormous number of jobs’
Analysis of Jensen Huang's Claims on AI and Jobs

Summary
This May 2026 TechCrunch article covers Nvidia CEO Jensen Huang's remarks at a Milken Institute event, where he pushed back against fears that AI will cause mass unemployment. He argued that AI is "creating an enormous number of jobs" and represents America's "best opportunity to re-industrialize." He also drew a distinction between automating individual tasks and replacing entire jobs.

Key Claims and Critical Evaluation

"AI creates jobs"
Huang points to the physical infrastructure of AI (data centers, chip fabrication, hardware manufacturing) as evidence of job creation. This is partially valid but incomplete. While the AI supply chain does employ people, the relevant question isn't whether AI creates any jobs, but whether it creates more than it displaces on net. The article itself notes that reputable organizations estimat…
Why it matters
I'm watching Jensen Huang make the classic techno-optimist case for AI employment, and I'm paying close attention to where his argument conveniently stops — job creation in hardware and infrastructure is real, but it sidesteps the harder question of net displacement across the broader workforce.
Guardian AI

Canadian fiddler sues Google after AI Overview wrongly claimed he was a sex offender
Canadian Fiddler Sues Google Over False AI Overview

Key Facts
- Who: Ashley MacIsaac, a three-time Juno Award-winning Canadian fiddle player and musician.
- What: MacIsaac has filed a $1.5 million civil lawsuit against Google, alleging that Google's AI Overview feature falsely identified him as a sex offender in an AI-generated summary of his life and career.
- Where: The lawsuit was filed in the Ontario Superior Court of Justice.
- Alleged harm: MacIsaac claims the inaccurate AI-generated information constituted defamation and led to tangible consequences, including the cancellation of concerts. He asserts Google is liable for the foreseeable damage caused by the false content.

Why This Matters
This case highlights several important issues at the intersection of AI and law:
- AI hallucinations and defamation: Google's AI Overview feature, which generates summaries at the top of search results, c…
Why it matters
I'm watching this case closely as a real-world test of whether AI hallucinations can constitute legally actionable defamation, with actual financial damages attached. The $1.5 million lawsuit against Google could set a meaningful precedent for how courts hold AI companies liable when their systems fabricate harmful falsehoods about real people.
The Verge AI

OpenAI’s president does ‘all the things,’ except answer a question
Summary of the Article

This article from The Verge, written by Elizabeth Lopatto and dated May 4, 2026, covers Greg Brockman's testimony in the ongoing trial between Elon Musk and OpenAI (with Sam Altman).

Key Points

The Witness and His Style
Greg Brockman, OpenAI's president, took the stand in an unusual procedural order: cross-examination first, then direct examination. Brockman was described as evasive and argumentative, exhibiting "high school debate club energy." He frequently:
- Refused to accept characterizations of events
- Pedantically corrected minor word omissions (even articles like "a" or "the")
- Requested to see documents in context before confirming statements
- Gave technically correct but unhelpful answers (e.g., when asked if Microsoft's $10 billion investment was the biggest financial event at OpenAI, he replied it was "the only $10 billion investment")

Brockman's Journal a…
Why it matters
I'm watching Greg Brockman's courtroom performance as a masterclass in evasion, where a top AI executive's semantic hair-splitting may say more about OpenAI's legal vulnerabilities than any direct answer would. The Musk vs. OpenAI trial is shaping up to be one of the most revealing windows yet into how the company's founders actually think about money, mission, and accountability.
NY Times
Elon Musk’s Lawyers Ask OpenAI’s President Why He Is Worth $30 Billion
Elon Musk's Lawyers Question OpenAI President Greg Brockman's $30 Billion Stake at Federal Trial

According to this New York Times article from May 4, 2026, Elon Musk's legal team confronted OpenAI co-founder and president Greg Brockman during a federal trial, challenging the justification for his reported $30 billion stake and suggesting that personal financial gain, rather than the mission of developing safe artificial intelligence, was a primary motivator behind OpenAI's corporate decisions.

Context
This trial is part of the long-running legal battle between Elon Musk and OpenAI. Musk, who co-founded OpenAI as a nonprofit in 2015 before departing its board in 2018, has argued that the organization betrayed its original mission of developing AI for the broad benefit of humanity when it restructured to include a capped-profit subsidiary and entered into a deep partnership with Mic…
Why it matters
I'm watching this trial closely as it cuts to the heart of a tension I think about constantly — whether the people building the most powerful AI systems are truly driven by mission or by money. The questioning of Brockman's $30 billion stake makes that tension impossible to ignore.
TechCrunch AI

OpenAI’s cozy partner Cerebras is on track for a blockbuster IPO
Cerebras IPO and Its Relationship with OpenAI

This TechCrunch article from May 4, 2026, covers Cerebras Systems' upcoming IPO and its deep ties to OpenAI.

Key IPO Details
- Share price range: $115–$125 per share
- Shares offered: 28 million
- Potential raise: ~$3.5 billion
- Projected market cap: up to $26.6 billion at the high end
- Follows a $1 billion Series H at a $23 billion valuation in February 2026
- Would be the largest tech IPO of 2026 so far, potentially signaling appetite for even bigger offerings (SpaceX, OpenAI, Anthropic)

Cerebras' Product
Cerebras makes the Wafer-Scale Engine 3, an AI-specific chip that competes with GPU-based AI chips. The company claims it is faster for inference (processing user prompts) while consuming less power than competitors.

Major Investors
Largest shareholders (5%+): Alpha Wave (Rick Gerson), Benchmark (Eric Vishria), Eclipse (Lior Susan), Fidelity,…
Why it matters
I'm watching Cerebras closely as a bellwether for how the market values AI infrastructure plays beyond the usual GPU giants, especially with that $26.6 billion cap on a chip most people outside the industry have never heard of. If this IPO lands well, it could open the floodgates for the bigger names like OpenAI and Anthropic to finally make their moves.
OpenAI

OpenAI and PwC collaborate to reimagine the office of the CFO
OpenAI and PwC Collaborate to Reimagine the Office of the CFO

Summary
Published May 4, 2026, this announcement details a partnership between OpenAI and PwC aimed at transforming enterprise finance functions through AI agents. The collaboration focuses on automating core CFO workflows (planning, forecasting, reporting, procurement, payments, treasury, tax, and accounting close) with an emphasis on governance and human oversight.

Key Details

What They're Building
PwC and OpenAI are developing AI agents designed around the core operating rhythms of finance. These agents can:
- Monitor payments and exceptions
- Review contracts and invoices against policy
- Update forecasts as business conditions change
- Prepare reporting materials
- Surface risks before month-end or quarter-end close

Technology Stack
The collaboration leverages several OpenAI products:
- ChatGPT – for general finance workflow sup…
Why it matters
I'm watching how the Big Four accounting firms are positioning themselves as the primary integrators of AI into enterprise back-office functions, essentially betting their consulting revenue on becoming the human oversight layer that clients trust. This PwC-OpenAI deal tells me the CFO role is about to shift dramatically from managing people who do financial work to managing agents that do it.
From X/Twitter
- A PM's honest take: you can automate documentation all you want, but you'll still spend 3 hours a week explaining it to people who won't read it.
- MCP vs Tool Calling vs Skills — a clean breakdown of three ways to extend an LLM and why they're not interchangeable.
- Anthropic engineers say most agents fail from bad architecture, not bad models — simplicity and tool design matter more than frontier capability.
- Anthropic ships keyless auth for Claude Platform — authenticate via browser, or let workloads use existing cloud identity from AWS, GCP, Azure, or any OIDC provider.
- Cursor published a deep dive on why switching models mid-conversation is so hard — OpenAI and Anthropic train on fundamentally different file-editing formats.
- Anthropic's MCP team makes the case that models don't interact with APIs directly — they interact with prompts, tools, and context, making the harness the real product.
From Reddit/HN/YC
- [Hacker News] GameStop offers $56B for eBay, struggles to explain how it'll actually pay for it.
- [Hacker News] Open-source project reproduces Anthropic's Mythos discovery findings for $0.75 on the public API.
- [Hacker News] Vellum makes the case that AI psychosis is real — and most people building with LLMs already have it.
- [Hacker News] Google Chrome is silently installing a 4 GB AI model on your device without asking.
- [Hacker News] Nature study finds testosterone eliminates strategic prosocial behavior in healthy males.
- [Hacker News] Brainio turns Markdown notes into visual mind maps in real time.