    Joe Seifi · Founder at EveryDev.ai

    Issue #18 · Weekly Digest

    Weekly AI Dev News Digest: April 25 - May 1, 2026

    May 1, 2026

    The exclusivity era is over. Microsoft loosened its grip on OpenAI on Monday, GPT-5.4 was on AWS by Tuesday, and by Friday the supply chain had taught itself to weaponize the AI coding agents that were supposed to fix everything.

    Three days at one federal address tells you most of what you need to know. Tuesday morning Sam Altman skipped his own AWS launch to sit in an Oakland courtroom across the bay from Elon Musk, who took the stand and called himself a fool for funding what became OpenAI. The day before, Microsoft and OpenAI had gutted the partnership that defined the last seven years of AI, killing the AGI clause and freeing OpenAI to sell on any cloud. By Tuesday afternoon GPT-5.4 was live on Amazon Bedrock with GPT-5.5 within weeks. The exclusivity that was supposed to last until "AGI" lasted until somebody at AWS waved a $50 billion check. (OpenAI) (OpenAI on AWS)

    By Wednesday a different complaint had been filed in the same courthouse: seven Tumbler Ridge families suing OpenAI over the February shooting, alleging the company's safety team flagged the shooter's account in June 2025 and was overruled by leadership. Underneath the corporate drama, the supply chain caught fire. A worm called Mini Shai-Hulud crossed npm, PyPI, Packagist, RubyGems, and Go modules in 48 hours and pioneered a brand new persistence trick that injects .claude/settings.json and .vscode/tasks.json hooks into compromised repos so AI coding agents re-fire the malware whenever the repo is opened. (CNN) (Wiz)

    Past the drama, the dev stack itself reset. GitHub Copilot announced usage-based billing effective June 1, Zed hit 1.0 with parallel agents and ACP, JetBrains drew a line for AI and classic workflows to coexist, VS Code 1.118 leaned into token efficiency, Claude Code 2.1.126 added a /model picker for gateway routes, and Anthropic retired the 1M-context beta on Sonnet 4. Codex picked up 90+ plugins and a role picker that put Finance and Marketing alongside Engineering, Cloudflare and Stripe wired AI agents to create their own paid Cloudflare accounts and buy domains, Mistral shipped Workflows + Medium 3.5 + Vibe remote agents inside 48 hours, and AWS, Google, and Adobe each opened more managed surfaces (AgentCore on Bedrock, 50+ Google MCP servers, an Adobe creativity connector exposing Photoshop/Firefly/Premiere) for agents to run on. The defensive layer caught up the same week with Anthropic's Claude Security in public beta on Opus 4.7, Cisco's open-source Model Provenance Kit and Constitution, Google's AMS scanner for tampered safety training in open-weight models, OpenAI's Yubico-co-branded passkey-only Advanced Account Security for ChatGPT and Codex, and Cloudflare's post-quantum IPsec hitting GA. (GitHub Blog) (OpenAI) (Anthropic)

    • $725B in hyperscaler 2026 capex
    • 5 ecosystems hit by Mini Shai-Hulud in 48 hours
    • 24 hours from Microsoft amendment to GPT-5.4 on Bedrock
    • $1.1B Ineffable seed at $5.1B post
    • 4M cars getting Gemini in place of Google Assistant

    In Focus

    Two Courtrooms, One Address

    OpenAI spent the week defending itself in two separate cases in the Northern District of California, before two different judges, on charges that share almost nothing except the venue. In Oakland, jury selection wrapped Monday on Musk v. Altman, with Musk seeking around $130B in damages, the removal of Altman and Brockman from the OpenAI board, and a rollback of the for-profit conversion. Microsoft sits as co-defendant on an aiding-and-abetting claim. The jury's verdict will be advisory; Judge Yvonne Gonzalez Rogers makes the call. Musk testified for nearly two hours Tuesday, told the court he "was a fool" for funding what became a startup, said founding OpenAI was a reaction to Larry Page calling him a "speciesist," and warned the jury about a "Terminator outcome." (CNN) (CNBC)

    Day three got messy. OpenAI counsel William Savitt walked Musk through 2017-era exhibits showing he had explored a for-profit OpenAI where he would hold majority equity and control, contradicting his "stole a charity" framing. Musk admitted Tesla is not currently pursuing AGI, contradicting a recent post on X. Texts surfaced from Mark Zuckerberg, who in February 2025 told Musk that Meta could help "take down content doxxing or threatening the people on your team" at DOGE. By Thursday Microsoft attorney Russell Cohen had used a 2020 Musk X post calling OpenAI "captured by Microsoft" to argue Musk was outside the three-year statute of limitations, and Musk closed his testimony with "I don't know everything they've done. I don't know what's going on at OpenAI." Jared Birchall, who runs Musk's family office Excession LLC, took the stand next. (CNBC Day 3) (CNBC Day 4) (TechCrunch)

    The Wednesday filing in the same courthouse was harder to read. Seven families of Tumbler Ridge shooting victims sued OpenAI and Sam Altman personally, alleging that automated systems flagged the shooter's account in June 2025 for "gun violence activity and planning," that the safety team urged management to notify Canadian authorities, and that leadership chose to deactivate the account instead. The shooter, 18-year-old Jesse Van Rootselaar, killed her mother, her half-brother, five students, and a teacher in February and wounded around 24 others before killing herself. Lead attorney Jay Edelson argues Altman "did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk." Altman published an apology in Tumbler RidgeLines on April 23: "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." One complaint argues the account was only deactivated, not banned, and that the shooter immediately created a second account. (CNN) (NPR)

    Judge Rogers told both sides Thursday morning that scandals like Tumbler Ridge are not coming into the Musk trial. "This is not a trial about the safety risks of artificial intelligence. We are not going to get sidetracked." She left a narrow opening for testimony comparing OpenAI's and xAI's safety approaches. Trial paused Friday and resumes Monday with Brockman on deck, plus Altman, Nadella, and AI-safety expert Stuart Russell still expected to testify. (Courthouse News)

    Our Read

    OpenAI now has to defend its character in two parallel proceedings, in front of two judges who are explicitly choosing what kind of trial each one is. The Musk case is about commercial bad faith and a charity-to-for-profit pivot. The Tumbler Ridge case is about whether the safety org has any standing inside the company. Different tracks, same week, same federal address, and the answers will both come from overlapping witnesses.

    In Focus

    Lock-in Got Renegotiated

    The Microsoft and OpenAI amendment is the biggest commercial AI deal change since Microsoft's original $10B in 2022. Monday morning it killed the AGI clause that would have changed business terms once OpenAI declared artificial general intelligence, ended Microsoft's cloud exclusivity, and stopped the revenue-share payments to OpenAI. Microsoft retains a non-exclusive license to OpenAI IP through 2032 and roughly 27% of the for-profit. OpenAI keeps paying Microsoft a capped revenue share through 2030 and still ships first on Azure. (OpenAI) (Microsoft)

    Less than 24 hours later, GPT-5.4 was live on Amazon Bedrock in limited preview, with GPT-5.5 coming within weeks. AWS bundled it with Codex on Bedrock and a new Bedrock Managed Agents product, sitting on top of Amazon's $50B investment and roughly $38B compute deal. Codex usage now counts toward AWS commits. Ben Thompson's Stratechery interview with Altman and Garman, conducted the previous Friday and published Tuesday, captures how Bedrock Managed Agents fits into AWS's "we'll host every lab" strategy. (OpenAI on AWS) (Stratechery) (The Register)

    Codex picked the same week to redefine itself. Thursday's update redesigned the onboarding flow around a role picker (Engineering, Product, Finance, Marketing, Sales, Operations, Data Science, Design, Student, "Something else") and reframed the product as "for everyone, for any task done with a computer." Ninety-plus plugins shipped (Atlassian Rovo, JIRA, CircleCI, GitLab Issues, Microsoft Suite, Neon, Render), 20% faster computer use, an SSH option for remote devboxes in alpha, and rich previews for PDFs, slides, and spreadsheets. Codex now has 4M weekly users, with usage in ChatGPT Business and Enterprise up 6x between January and April. EU users have browser and computer-use functionality disabled without explanation. Greg Brockman: "Codex is for everyone." (OpenAI) (TestingCatalog)

    The compute story has two versions. OpenAI published an Apr 29 update saying Stargate was ahead of schedule, with more than 3GW added in the last 90 days alone and the company "surpassing" its original 10GW commitment. The Financial Times reported the same day, via Tom's Hardware, that OpenAI has "effectively abandoned" first-party Stargate ownership in favor of leasing capacity from third parties, treating Stargate as an "umbrella term" rather than a joint venture. Both can be true, and the operational read is the same as the Microsoft amendment: lock in fewer exclusives, contract for more compute everywhere. (OpenAI) (Tom's Hardware)

    Anthropic ran a parallel pivot in the model layer. Effective Thursday, the context-1m-2025-08-07 beta header has no effect on claude-sonnet-4-20250514 or claude-sonnet-4-5-20250929. Requests over 200K tokens against those models now return a 400 error. The 1M window is GA on Sonnet 4.6 and Opus 4.6 at standard pricing with no header required. The base Sonnet 4 / Opus 4 strings retire entirely on June 15. Long-context production pipelines on the old beta need to swap model strings now. (Anthropic API Docs)
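
For pipelines on the old beta, a pre-request guard makes the swap explicit and fails fast instead of surfacing as API errors. This is a minimal sketch: the replacement model strings below are placeholders I've assumed for illustration, not confirmed identifiers, so pull the real 4.6-generation IDs from your provider's model list.

```python
# Sketch of a pre-request guard for the retirement described above.
# Replacement strings are PLACEHOLDERS, not real model IDs.
RETIRING = {
    "claude-sonnet-4-20250514": "claude-sonnet-4-6",
    "claude-sonnet-4-5-20250929": "claude-sonnet-4-6",
}
LEGACY_LIMIT = 200_000  # retiring models now return a 400 above this

def resolve_model(model: str) -> str:
    """Swap a retiring model string for its assumed replacement."""
    return RETIRING.get(model, model)

def check_request(model: str, prompt_tokens: int) -> None:
    """Fail fast locally instead of eating a 400 from the API."""
    if model in RETIRING and prompt_tokens > LEGACY_LIMIT:
        raise ValueError(
            f"{model} no longer accepts requests over {LEGACY_LIMIT:,} tokens"
        )
```

Running `resolve_model` at the single point where requests are built keeps the June 15 retirement a one-line config change rather than a grep across the codebase.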

    China's hardware bifurcation got concrete. Reuters reported Tuesday that ByteDance, Tencent, and Alibaba have all reopened procurement conversations with Huawei for Ascend 950 chips after DeepSeek V4 launched optimized for Huawei silicon. Huawei is targeting 750K Ascend 950PR units this year, with mass production started in April and full shipments expected in H2 2026. The 950PR beats Nvidia's H20 (the most powerful chip Nvidia could legally sell in China before Beijing blocked imports) but trails the H200. A day later, Reuters scooped that Nvidia B300 servers in China have nearly doubled to about 7M yuan (roughly $1M each) versus around $550K in the US, with the cause traced to a US crackdown on chip smuggling and Chinese hyperscalers paying scarcity premiums. (Reuters) (Reuters)

    Why This Matters

    Every assumption from 2022 got renegotiated this week: which lab runs on which cloud, which chip stack frontier models target, what your model string means a year from now, and what "exclusivity" buys you. If your roadmap depends on those answers staying still, check it now.

    In Focus

    The Supply Chain Caught Fire

    A campaign branded "Mini Shai-Hulud" by Wiz crossed five package ecosystems in 48 hours starting Wednesday morning. TeamPCP struck between 09:55 and 12:14 UTC, compromising four official SAP CAP npm packages (@cap-js/sqlite, @cap-js/postgres, @cap-js/db-service, mbt, around 570K weekly downloads combined) via a malicious commit to the release workflow. The injected preinstall hooks bootstrapped a Bun runtime to launch an 11.6MB obfuscated payload. SAP shipped clean versions by 13:46 UTC the same day via OIDC trusted publisher, but the worm had already moved. (Socket SAP) (Wiz)

    By Thursday it had hit Intercom's npm SDK (intercom-client@7.0.4 and 7.0.5, around 360K weekly), PyTorch Lightning on PyPI (lightning@2.6.2 and 2.6.3, which Socket flagged within 18 minutes of publication), Packagist via intercom/intercom-php@5.0.2 as a malicious Composer plugin, and then RubyGems and Go modules via the BufferZoneCorp GitHub account. Combined Lightning + Intercom monthly downloads: about 10M. The payload encrypts exfiltrated data with AES-256-GCM and RSA-4096, exits on Russian-locale systems, and harvests SSH keys, AWS/Azure/GCP credentials, Kubernetes secrets, GitHub/npm tokens, Stripe/Slack/Twilio API keys, and crypto wallets. (Socket Lightning) (Socket Intercom) (Socket Packagist) (Socket Ruby/Go)

    The new tradecraft is what makes this campaign different from previous worms. Mini Shai-Hulud commits a .claude/settings.json SessionStart hook and a .vscode/tasks.json runOn: folderOpen task into every accessible repo, so opening the infected repo in Claude Code or VS Code re-fires the malware. This is the first documented case of supply-chain malware specifically targeting AI coding agent configuration files as a persistence vector. Over 1,800 exfiltration repos with the description "A Mini Shai-Hulud has Appeared" had been created by Thursday evening. (Wiz)
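
Until scanners catch up, a cheap pre-open triage of freshly cloned repos covers the two files named above. A minimal sketch, with the key names mirroring Wiz's description; note that real-world tasks.json files are JSONC (comments allowed), so a production version needs a comment-tolerant parser:

```python
import json
from pathlib import Path

def audit_repo(repo: Path) -> list[str]:
    """Flag agent config files that auto-execute when a repo is opened."""
    findings = []

    settings = repo / ".claude" / "settings.json"
    if settings.is_file():
        hooks = json.loads(settings.read_text()).get("hooks", {})
        if "SessionStart" in hooks:
            findings.append(f"{settings}: SessionStart hook fires on agent launch")

    tasks = repo / ".vscode" / "tasks.json"
    if tasks.is_file():
        # strict json.loads is a simplification; tasks.json allows comments
        for task in json.loads(tasks.read_text()).get("tasks", []):
            if task.get("runOptions", {}).get("runOn") == "folderOpen":
                findings.append(f"{tasks}: task auto-runs on folderOpen")

    return findings
```

Wiring this into a git clone wrapper or a pre-open check in CI turns the persistence vector into a reviewable diff instead of silent execution.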

    Three smaller compromises rounded out the week. Late Friday Apr 24, right at the edge of the window, elementary-data@0.23.3 shipped to PyPI containing a malicious .pth file that runs on Python interpreter startup, the same persistence trick used in the LiteLLM compromise earlier this year. The attacker exploited a script-injection vulnerability in a GitHub Actions workflow that processed PR comments, captured the temp GITHUB_TOKEN with contents: write, forged a signed release commit, and dispatched the legitimate publish pipeline. Around 1.1M monthly downloads, infostealer targeting dbt profiles. Same pull_request_target injection pattern as the Ultralytics (Dec 2024) and LiteLLM (early 2026) compromises. (Elementary)
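
The injection pattern behind these compromises is easy to grep for. A rough heuristic sketch (the regexes are illustrative, not exhaustive): flag any workflow that combines a privileged or comment-driven trigger with attacker-controlled event fields interpolated into its steps.

```python
import re
from pathlib import Path

RISKY_TRIGGER = re.compile(r"\b(pull_request_target|issue_comment)\b")
# event fields an outside contributor controls, interpolated into the workflow
TAINTED_FIELD = re.compile(
    r"\$\{\{\s*github\.event\.(comment\.body|issue\.title|"
    r"pull_request\.(title|body|head\.ref))"
)

def scan_workflows(repo: Path) -> list[str]:
    """List workflow files matching the risky trigger + tainted-field combo."""
    hits = []
    for wf in sorted(repo.glob(".github/workflows/*")):
        if wf.suffix not in (".yml", ".yaml"):
            continue
        text = wf.read_text()
        if RISKY_TRIGGER.search(text) and TAINTED_FIELD.search(text):
            hits.append(wf.name)
    return hits
```

The usual fix for a hit is to pass the tainted value through an environment variable rather than interpolating it directly into a shell `run:` step.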

    On Apr 29 the maintainer of an unscoped tanstack package (no relation to the legitimate @tanstack/* scope) published versions 2.0.4-2.0.7 in a 27-minute window, all with postinstall scripts that POST .env files to a Svix dead-drop. The package had been sitting benign on npm for over a month before going hot. Tanner Linsley confirmed to Socket the maintainer (sh20raj) once demanded $10K from him; TanStack has filed a trademark infringement claim. Socket also disclosed Sunday that the GlassWorm campaign now has 73 cloned/impersonating extensions on Open VSX, six already activated, with new tradecraft using thin loader extensions that fetch VSIX payloads from GitHub at runtime via --install-extension. Cross-IDE infection across VS Code, Cursor, Windsurf, and VSCodium. Total artifacts since December 2025: 320+. (Socket) (Socket)

    The infrastructure layer was not quiet either. Wiz disclosed CVE-2026-3854, a command-injection RCE on github.com via git push. Cross-tenant exposure on GitHub.com meant a compromised storage node could read repos belonging to other orgs. GitHub patched the cloud in two hours back in March, but about 88% of GitHub Enterprise Server instances were still unpatched at the public disclosure on April 28. Upgrade GHES to 3.19.3 immediately. Hugging Face's LeRobot has its own RCE: CVE-2026-25874 (CVSS 9.3) is an unsafe pickle.loads() deserialization in the async inference pipeline over unauthenticated, unencrypted gRPC, so anyone who can reach the policy server or robot client can run code. And Mercor disclosed a 4TB voice-data breach affecting roughly 40,000 AI contractors. (Wiz) (The Hacker News) (The Hacker News) (TLDL)
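
The LeRobot bug is the classic pickle failure mode: deserialization itself executes code, before the application ever inspects the object. A minimal, benign demonstration of the mechanism:

```python
import pickle

class Payload:
    """Unpickling this object calls whatever __reduce__ names."""
    def __reduce__(self):
        # a real exploit would name something like os.system with a shell command
        return (print, ("code ran during deserialization",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints before the caller ever sees an object
```

Anyone who can reach an unauthenticated endpoint that unpickles their bytes gets this execution for free, which is why the mitigation is authenticated transport plus a safe serialization format, not input validation.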

    In Focus

    Defenders Got Their Own Models

    The other side of the supply-chain story is what the defenders shipped. Anthropic put Claude Security into public beta Monday, powered by Opus 4.7. The tool scans an enterprise codebase for high-severity issues like memory corruption, injection flaws, authentication bypasses, and complex logic errors that pattern-matching tools miss. It runs each finding through an adversarial verification pass to filter false positives and hands off to Claude Code for the fix. CrowdStrike, Microsoft Security, Palo Alto Networks, SentinelOne, TrendAI, and Wiz are integrating Opus 4.7 into their existing platforms. Available now to Claude Enterprise; Team and Max coming soon. (Anthropic)

    Cisco AI Defense released the Model Provenance Kit Thursday, a Python toolkit and CLI that fingerprints transformer models at the weight level (comparing architecture metadata, tokenizer structure, and learned weights) to determine whether two models share a common origin. The initial database covers around 150 base models across 45 families. Hugging Face hosts over 2 million models with mostly self-reported metadata, so this is a real auditing tool, especially with OWASP and MITRE ATLAS now flagging weak model provenance as a supply-chain risk. The same team published a "Model Provenance Constitution" the same day: a normative taxonomy with five conditions for provenance, nine concrete derivation mechanisms, and eight exclusions that look like provenance but are not (independent reproductions like Llama-2 vs OpenLLaMA, same-family different-size, shared seeds, architectural convergence). (Cisco) (Cisco Constitution)

    Google shipped the operational counterpart on Tuesday. The Activation-based Model Scanner (AMS), released on the Open Source Blog, detects whether a model has intact safety training in 10 to 40 seconds without sending a single prompt. Two-tier design: Tier 1 measures whether safety-relevant activation structure exists at all, Tier 2 compares an unknown model's activation fingerprint against a verified baseline to catch supply-chain substitution. Instruction-tuned models show 3.8 to 8.4σ separation; uncensored variants like Dolphin and Lexi collapse to 1.1 to 1.3σ and get flagged CRITICAL; quantized INT4/INT8 models show under 5% drift. The framing cites a 2025 study finding 8,000+ safety-modified Hugging Face repos with a 74% unsafe-compliance rate. pip install "ams-scanner[cli]". (Google Open Source)

    OpenAI, Cloudflare, and Dataiku addressed identity, network, and data-privacy edges in parallel. OpenAI launched Advanced Account Security Thursday for ChatGPT and Codex accounts, requiring passkeys or physical security keys, killing password login, and removing email/SMS recovery in favor of backup passkeys, security keys, and recovery keys. Co-branded Yubico C NFC and C Nano keys at preferred pricing. OpenAI Support cannot recover lost accounts; that is the trade. Trusted Access for Cyber members are required to enable it by June 1 or attest to phishing-resistant SSO. Cloudflare brought post-quantum IPsec to GA Thursday using IETF hybrid ML-KEM (FIPS 203) for IKEv2, with confirmed interoperability with Cisco and Fortinet branch connectors and the post-quantum target moved up to 2029. Dataiku open-sourced Kiji Privacy Proxy the same day: it sits between your application and external AI APIs, runs requests through an ML-powered detector for 16+ PII types, substitutes realistic dummies, sends the masked request, then re-associates originals in the response. (OpenAI) (TechCrunch) (Cloudflare) (Dataiku)
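
The proxy pattern Kiji implements is simple to sketch. Kiji uses an ML-powered detector across 16+ PII types; the toy below substitutes a single email regex to show the round trip it describes: mask, forward the masked request, then re-associate originals in the response.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask(text: str) -> tuple[str, dict]:
    """Swap detected PII for stable dummies; return masked text + reverse map."""
    forward: dict[str, str] = {}

    def sub(match: re.Match) -> str:
        original = match.group()
        # the same original always maps to the same dummy
        return forward.setdefault(original, f"user{len(forward)}@example.com")

    masked = EMAIL.sub(sub, text)
    return masked, {dummy: orig for orig, dummy in forward.items()}

def unmask(text: str, reverse: dict) -> str:
    """Re-associate originals in the (model) response."""
    for dummy, original in reverse.items():
        text = text.replace(dummy, original)
    return text
```

The stable mapping matters: if the model echoes a dummy back in its answer, `unmask` restores the real value without the upstream API ever having seen it.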

    The week also produced a real CVE on a popular agentic IDE. Novee disclosed CVE-2026-26268 Tuesday, an RCE in Cursor patched back in February. The attack: embed a bare repository inside a legitimate-looking repo with a malicious pre-commit hook; the Cursor agent's git checkout triggers the hook silently. No prompt, no warning, RCE. CWE-862 (Missing Authorization), because Cursor's sandbox didn't restrict writes to .git/hooks/. Update to Cursor 2.5+, and audit AI coding assistants for how they touch untrusted code. (Novee)
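
A checkout-time sweep for the embedded-repo trick is cheap. A heuristic sketch of that audit: treat any directory that has a sibling HEAD file as a git directory and surface its non-sample hooks, since a legitimate checkout should have none outside its own .git/.

```python
from pathlib import Path

def find_hidden_hooks(repo: Path) -> list[Path]:
    """List hook scripts in any nested git directory inside a checkout."""
    suspicious = []
    for hooks_dir in repo.rglob("hooks"):
        if not hooks_dir.is_dir():
            continue
        # bare repos keep HEAD next to hooks/; normal repos keep both in .git/
        if (hooks_dir.parent / "HEAD").is_file():
            for hook in hooks_dir.iterdir():
                if hook.is_file() and not hook.name.endswith(".sample"):
                    suspicious.append(hook)
    return suspicious
```

Run against an untrusted clone before letting an agent execute git operations in it; any hit is worth reading before anything commits.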

    Our Read

    Three things shipped that didn't exist eight days ago: an LLM that scans your codebase for the bugs grep-class tools miss, a fingerprinter that tells you whether a "Llama 2" download is actually Llama 2, and an activation-based scanner that detects safety-training tampering without sending a single prompt. The defensive stack is starting to look like a real category. The supply-chain attackers may have been first to weaponize AI agents, but they were not the only ones.

    In Focus

    Mythos and the Politics of Cyber-Capable Models

    Anthropic's Mythos model, the autonomous zero-day discoverer Anthropic restricted to around 50 organizations under Project Glasswing in early April, became a geopolitical asset. Wednesday's WSJ scoop: administration officials told Anthropic they oppose the company's plan to grant access to Mythos to roughly 70 additional organizations, which would have brought the total from around 50 to about 120. Stated reasons: misuse risk, and concerns Anthropic does not have enough compute to serve a wider user base without degrading government access. The NSA is among current Mythos users. A small group of unauthorized users reportedly accessed Mythos in a private forum the same day Anthropic announced the limited release. (WSJ) (Bloomberg)

    Across the Atlantic, the Bundesbank publicly demanded EU access. Reuters interviewed Michael Theurer, chief supervisor at Germany's Bundesbank, on Wednesday: European banks need access to Mythos to defend themselves, since attackers will use similar models regardless. Bundesbank President Joachim Nagel was more direct. "All relevant institutions should have access to such technology to avoid competitive distortions." Eurozone finance ministers were scheduled to discuss Mythos with banking supervisors Monday. Australia's APRA had already issued a similar warning to its own banks. Anthropic has reportedly told regulators privately that it intends to expand access to non-US banks soon, though as of the WSJ report, the White House is the one holding it up. (Reuters)

    Google took the Pentagon deal Anthropic would not. Monday at 4pm, Google signed a contract giving the DoD access to its AI for "any lawful government purpose," reported by The Information Tuesday and confirmed by Reuters and Bloomberg. The contract includes language that the AI System "is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight," but explicitly does not give Google authority to veto operational decisions. That last part is a direct response to Anthropic's stand: the DoD designated Anthropic a "supply-chain risk" earlier this year after the company refused to remove guardrails for autonomous weapons and domestic surveillance, and a judge granted Anthropic an injunction against the designation last month. The Pentagon signed $200M agreements each with Anthropic, OpenAI, and Google in 2025; Google's Monday signing closes a circle the others had been pushing on. (Reuters) (TechCrunch)

    Why This Matters

    Cyber-capable LLMs are now infrastructure with foreign-policy weight. The White House is rationing access to a private model, the Bundesbank is asking the EU to negotiate for it, and the Pentagon is choosing between vendors based on which one will accept fewer guardrails. None of this fits the "model-as-product" mental model that priced the last frontier release.

    In Focus

    The IDE Economy Reset

    GitHub Copilot moved to usage-based billing on Monday. Every Copilot plan transitions June 1 to GitHub AI Credits at 1 credit = $0.01. The old premium request units (PRUs) get replaced; usage is calculated from input, output, and cached tokens at published API rates per model. Code completions and Next Edit suggestions stay unlimited and free. Plan prices are unchanged, but allotments now match seat price: Pro $10/$10, Pro+ $39/$39, Business $19/$19, Enterprise $39/$39. The brutal part is for users who stay on annual plans past June 1: model multipliers jump on the legacy PRU system. Opus 4.7 goes from 7.5x to 27x. GPT-5.4 goes from 1x to 6x. Mario Rodriguez framed it directly: "the current premium request model is no longer sustainable." Copilot code review will also start consuming GitHub Actions minutes. (EveryDev.ai)
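
The credit math is easy to model. The per-model rates below are invented placeholders (the real numbers are GitHub's published per-model API rates, with cached tokens at their own cheaper rate); the point is the conversion: dollars of token usage at API rates, times 100, equals credits.

```python
# Illustrative $-per-1M-token rates; NOT GitHub's actual pricing.
RATES = {
    "gpt-5.4":  {"input": 1.00, "cached": 0.25, "output": 8.00},
    "opus-4.7": {"input": 15.00, "cached": 1.50, "output": 75.00},
}

def credits_used(model: str, tok_in: int, tok_cached: int, tok_out: int) -> int:
    """Token usage -> GitHub AI Credits (1 credit == $0.01)."""
    r = RATES[model]
    dollars = (tok_in * r["input"] + tok_cached * r["cached"]
               + tok_out * r["output"]) / 1_000_000
    return round(dollars * 100)
```

At these assumed rates, a 50K-in / 10K-out gpt-5.4 request costs 13 credits, so a $10 Pro allotment (1,000 credits) covers on the order of 75 such calls, which is the arithmetic worth running against your own traffic before June 1.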

    VS Code 1.118 shipped Wednesday and leaned into the new economics with improved cache reuse across system prompts, tools, conversation history, and summarization without changing agent behavior; deduplicated MCP servers (only the most-specific server with a given name is enabled by default); and a dedicated subagent context for skills so multi-step skill calls do not pollute the main chat. Other notable additions: remote control of Copilot CLI sessions from GitHub.com or mobile via /remote, semantic codebase search across any workspace, an Agents app web client at insiders.vscode.dev/agents, Claude Agent in the Agents app, Git AI co-authoring on by default, and an opt-in for TypeScript 7.0 nightlies. Microsoft is now on a weekly release cadence. The Copilot CLI side shipped two back-to-back releases of its own: v1.0.37 (Apr 27) added location-based permission persistence and shell completion scripts, and v1.0.39 (Apr 28) added /compact, /context, /usage, /env slash commands plus an ACP toggle for allow-all permission mode. (VS Code) (Visual Studio Magazine) (GitHub Copilot CLI)

    Zed hit 1.0 Tuesday after five years and more than 1M lines of Rust on a custom GPU-driven UI framework called GPUI. Nathan Sobo: "1.0 doesn't mean 'done.' It also doesn't mean 'perfect.' It means we've reached a tipping point where most developers can quickly feel at home in Zed." Zed positions itself explicitly as AI-native: parallel agents in a single editor, Agent Client Protocol (ACP) support so you can run Claude Agent, Codex, or Cursor from the same UI, and the option to disable all AI features for developers who want a code editor that is only a code editor. Zed for Business launched alongside, with centralized billing, RBAC, and team management. (Zed) (Phoronix)

    JetBrains laid out an IDE-first AI doctrine in a Tuesday post that is distinct from VS Code's "AI as default" framing: classic and AI-assisted workflows should coexist without one compromising the other, agents are vendor-replaceable (the post explicitly highlights Cursor running in JetBrains IDEs via ACP), and "a human is responsible for the code that ships." The IDE remains the place to read, understand, and own that code, regardless of how it got generated. The release of Air (JetBrains' agent-orchestration tool) the previous month and JetBrains Central (their agent ops layer) sit underneath this. (JetBrains)

    Claude Code 2.1.126 landed Friday with a /model picker that lists models from your gateway's /v1/models endpoint when ANTHROPIC_BASE_URL is set, plus a new claude project purge [path] command that wipes all Claude Code state for a project. Real security fix: allowManagedDomainsOnly and allowManagedReadPathsOnly were being ignored when a higher-priority managed-settings source lacked a sandbox block, worth patching if you run on Bedrock, Vertex, or Foundry behind a gateway. Simon Willison shipped LLM 0.32a0 the same week, modeling inputs as a sequence of typed messages and outputs as a stream of typed parts (text, reasoning, tool-call name, tool-call args, image, audio). Backwards-compatible with the old prompt= API but unlocks emulating the OpenAI chat completions API natively, replying to a previous response, and properly streaming mixed-modality output from frontier models. (Claude Code Changelog) (Simon Willison)
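
The shape Willison describes is roughly the following. This is an illustrative sketch of the data model, not LLM's actual classes: inputs as typed messages, outputs as a stream of typed parts, with the old prompt-string behavior recoverable by filtering to text parts.

```python
from dataclasses import dataclass, field
from typing import Union

@dataclass
class Text:
    text: str

@dataclass
class Reasoning:
    text: str

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

# the real design also carries image and audio parts
Part = Union[Text, Reasoning, ToolCall]

@dataclass
class Message:
    role: str            # "system" | "user" | "assistant" | "tool"
    parts: list          # list[Part]

def visible_text(stream: list) -> str:
    """Old prompt-string behavior: keep only the plain-text parts."""
    return "".join(p.text for p in stream if isinstance(p, Text))
```

Typing the parts is what makes mixed-modality streaming tractable: reasoning, tool calls, and text interleave in one stream, and each consumer filters for the part types it understands.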

    Signals

    Signals from the Edges

    Agents become customers

    Cloudflare and Stripe wired AI agents to create their own paid Cloudflare accounts, register domains, and deploy code without a human touching a dashboard. Default $100/mo cap per provider.

    Cloudflare →

    Cloudflare Dynamic Workflows

    A 300-line library that lets a single Worker route every workflow to a different tenant's code. Useful when AI writes TypeScript per-tenant.

    Cloudflare →

    DigitalOcean Inference Engine

    New Inference Router uses a MoE classifier to send each request to the cheapest model that can handle it. LawVo cut inference costs 42% with no code changes.

    DigitalOcean →

    Mistral's biggest week in months

    Mistral Workflows, Medium 3.5 (128B dense, 256K context, open weights, $1.50 in / $7.50 out), and Vibe remote agents inside 48 hours. Medium 3.5 replaces Devstral 2 as the Le Chat default.

    Mistral →

    Warp open-sources the terminal, keeps Oz closed

    Warp's terminal client hit GitHub on April 28 under Apache-2.0 (~26K stars in hours). Oz, the cloud agent platform that triages issues and writes specs, stays proprietary. Same playbook as Cursor and Vercel.

    EveryDev.ai →

    Anthropic + Adobe creative connectors

    Anthropic launched connectors for Blender, Autodesk Fusion, Adobe Creative Cloud, Ableton, and Splice. Adobe pushed a creativity connector exposing 50+ tools across Photoshop, Firefly, Premiere, and Lightroom inside Claude.

    Adobe →

    Meta Ads CLI + MCP

    Beta opens the Marketing API to AI agents in Claude Desktop, ChatGPT, Claude Code, and OpenAI Codex via Meta Business OAuth.

    Meta for Developers →

    Google managed MCP servers GA

    50+ Google-managed MCP servers (BigQuery, AlloyDB, Spanner, Cloud SQL, Firestore, Maps, more), IAM auth, Cloud Audit Logs, Model Armor scanning.

    Google Cloud →

    AWS Bedrock AgentCore CLI

    Define an agent with a model, system prompt, and tools and run it without writing orchestration. CDK deployment across 14 regions, no extra cost.

    AWS Weekly Roundup →

    Google Colossus to PyTorch

    gcsfs now supports Rapid Bucket via direct gRPC streams to the filesystem behind YouTube and Search. 23% lower training time vs standard buckets.

    Google Developers →

    OpenAI shuts down Sora 2; Alibaba's HappyHorse-1.0 takes #1

    Sora went dark Sunday after six months. fal added API access to HappyHorse-1.0 the day after; it took the top Elo on Artificial Analysis Video Arena and runs ~38s for 1080p on a single H100. (OpenAI Help)

    fal →

    Gemini replaces Google Assistant in 4M cars

    GM rolls Gemini to ~4M model-year-2022+ Buick, Cadillac, Chevrolet, and GMC vehicles via OTA. Stellantis is on Mistral, Mercedes is on ChatGPT, Tesla is on Grok.

    Google Blog →

    Hyperscaler $725B 2026 capex guidance

    Alphabet, Microsoft, Meta, and Amazon reported the same Wednesday evening. Meta stock fell 7% on the spending guidance.

    Tech Startups →

    Funding round-up

    Ineffable Intelligence $1.1B seed at $5.1B (David Silver, ex-DeepMind). JuliaHub $65M Series B + Dyad 3.0. General Analysis $10M for adversarial agent eval. Netomi $110M Series C. Legora $50M extension at $5.5B. An Nvidia-tied Nevada data center sold $4.59B in junk bonds at 6.74%. SoftBank formed Roze AI for autonomous data-center robotics. Cognizant to acquire Astreya for $600M. (Ventureburn)

    Tech Startups →

    xAI Grok 4.3

    321-point Elo jump on GDPval-AA, ~20% cheaper, 2M-token context, no persistent memory across sessions.

    Artificial Analysis →

    Bloomberg ASKB; Anthropic Sydney

    ASKB is in beta with ~125K Terminal users. Anthropic opened its Sydney office (fourth APAC) with Theo Hourmouzis as GM.

    TestingCatalog →

    Looking Ahead

    What to Watch

    1. The Bedrock-Codex flywheel

      GPT-5.4 on Bedrock with Codex usage counting toward AWS commits is the first time a frontier model ships through a third-party cloud's commit ladder. If Codex pulls Bedrock revenue the way Claude already does, the cloud-pricing geometry of agentic work shifts away from sticker-price lab APIs.

    2. The `.claude/settings.json` persistence vector

      Mini Shai-Hulud is the first known supply-chain campaign that targets AI coding agent configs as a persistence mechanism. Every team running Claude Code or VS Code agents needs a policy on .claude/ and .vscode/tasks.json files in cloned repos this quarter.

    3. Mythos access politics

      The White House blocking Anthropic's expansion to around 70 more orgs and the Bundesbank publicly demanding EU access make Mythos the first LLM to become a foreign-policy bargaining chip. Watch the Eurozone finance ministers' May meeting and whether the access list grows or stays frozen.

    4. Copilot's June 1 cliff

      Annual-plan users see model multipliers jump on June 1 (Opus 4.7 from 7.5x to 27x, GPT-5.4 from 1x to 6x) until they migrate to AI Credits. Expect a wave of bill shock and a real test of whether Copilot's "code completions stay free" anchor holds.

    5. Stargate ownership vs. leasing

      OpenAI says ahead of schedule, the FT says effectively abandoning first-party data centers. The next quarter of capex disclosures will show which version is operationally true and whether "Stargate" is a joint-venture brand or an umbrella term for a leasing program.

    6. Brockman, Altman, Nadella on the stand

      The trial paused Friday and resumes Monday with Brockman on deck. Their testimony will be the first time Microsoft's and OpenAI's senior-most leadership are cross-examined under oath about the partnership rewrite, in the same week that partnership got rewritten.

    7. Code with Claude

      Anthropic's developer conference runs May 6 in San Francisco (Extended track for indie devs and early founders on May 7), then London May 19 and Tokyo June 10. Free virtual livestream is open at all three. Watch for what lands on top of the week's Claude Security beta, Opus 4.7 GA, and Mythos access politics. (Anthropic)

    The week's lock-ins all loosened in the same direction. Microsoft's exclusivity got swapped for non-exclusive IP and a smaller revenue cut. Anthropic's 1M context retired on Sonnet 4 and became a default on 4.6. Codex picked up 90+ plugins and a role picker that put Finance, Marketing, and Operations alongside Engineering. The frontier-model exclusivity that was supposed to last until "AGI" lasted until somebody waved a $50 billion check, and the supply chain spent the rest of the week showing what happens when the agents you let into your codebase are the same agents an attacker can configure on commit.

    About the Author

    Joe Seifi

    Founder at EveryDev.ai

    Apple, Disney, Adobe, Eventbrite, Zillow, Affirm. I've shipped frontend at all of them. Now I build and write about AI dev tools: what works, what's hype, and what's worth your time.
