🚀 Anthropic Launches Claude Cowork

PLUS: Apple Replaces Siri’s Brain


Welcome back!

Anthropic just gave Claude a desk job. The new Claude Cowork turns the chatbot into a full-fledged agent that can read, edit, and organize files on your personal Mac desktop, no coding required. Let’s unpack…

Today’s Summary:

  • 🚀 Anthropic launches Claude Cowork

  • 🍎 Apple pays billions for Gemini Siri

  • 🩻 Google MedGemma 1.5 open-source medical AI

  • 🏥 OpenAI debuts ChatGPT for Healthcare

  • 🧠 Former OpenAI CTO’s startup loses its own CTO

  • 🎥 Google updates Veo 3.1 for 4K video

  • 🛠️ 2 new tools

TOP STORY

Anthropic launches Claude Cowork

The Summary: Anthropic released Cowork, an AI agent that brings Claude Code-level automation to people who don't write code. The feature lets Claude read, edit, and create files in user-selected folders on Mac computers. Security researchers have warned of significant prompt-injection risks.

Key details:

  • Users are already testing it for vacation planning, wedding photo recovery, plant growth monitoring, and organizing messy download folders

  • Anthropic tells users to "monitor Claude for suspicious actions that may indicate prompt injection" despite targeting non-technical audiences

  • Runs in isolated virtual machines using Apple Virtualization Framework with a custom Linux filesystem

  • Initially available to Claude Max subscribers on macOS, with Windows support planned
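The core safety idea in the details above is that the agent can only touch files inside folders the user explicitly approves. Here is a minimal, hypothetical sketch of how folder-scoped file access might be enforced; this is an illustration of the concept, not Anthropic's implementation, and all names in it are made up:

```python
from pathlib import Path

class FolderScopedAgent:
    """Illustrative sketch: confine an agent's file operations to
    user-approved folders, similar in spirit to how Cowork limits
    Claude to user-selected directories. Hypothetical code, not
    Anthropic's actual sandbox."""

    def __init__(self, allowed_dirs):
        # Expand "~" and resolve symlinks so ".." tricks can't escape.
        self.allowed = [Path(d).expanduser().resolve() for d in allowed_dirs]

    def _check(self, path):
        p = Path(path).expanduser().resolve()
        if not any(p == d or d in p.parents for d in self.allowed):
            raise PermissionError(f"{path} is outside the approved folders")
        return p

    def read(self, path):
        return self._check(path).read_text()

    def write(self, path, text):
        self._check(path).write_text(text)
```

In this sketch, any attempt to read or write outside the approved folders raises `PermissionError` before the filesystem is touched; Cowork adds a further layer by running inside an isolated VM.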

Why it matters: Developers already discovered the power of Claude Code for non-coding tasks, revealing genuine demand for computer automation accessible to everyone. Early testers describe the experience as "leaving messages for a coworker" rather than prompting a chatbot, because Claude plans work, executes tasks in parallel, and asks clarifying questions when stuck. Anthropic's approach of wrapping proven developer tools in accessible interfaces may finally deliver on the agent promise.

FROM OUR PARTNERS

Don’t Type, Just Speak

Vibe code with your voice

Vibe code by voice. Wispr Flow lets you dictate prompts, PRDs, bug reproductions, and code review notes directly in Cursor, Warp, or your editor of choice. Speak your instructions and Flow auto-tags file names, preserves variable names and inline identifiers, and formats lists and steps for immediate pasting into GitHub, Jira, or Docs. That means less retyping, fewer copy-and-paste errors, and faster triage. For deeper context and examples, see the Vibe Coding article on wisprflow.ai. Try Wispr Flow for engineers.

APPLE

Apple pays billions to replace Siri's brain with Google Gemini

The Summary: Apple confirmed a multi-year deal with Google to power Siri and future Apple Intelligence features using Gemini models. The partnership, worth several billion dollars, arrives after years of Siri lagging behind ChatGPT and other AI assistants. OpenAI was initially rumored to become Siri's provider. The new Siri will launch in spring 2026.

Key details:

  • Apple already receives $38+ billion from Google (2021-2022 figures) for default search placement, making this the second massive Apple-Google financial arrangement

  • Apple tested technology from OpenAI, Anthropic, and Google before selecting Gemini as the foundation

  • The enhanced Siri will understand personal context, provide on-screen awareness, and offer deeper per-app controls when it launches in March or April 2026

Why it matters: Apple just admitted what everyone suspected: building a competitive AI from scratch takes too long. The deal reveals a brutal truth about modern AI development: platform owners like Apple must pay premium prices for the best available AI or watch their products become irrelevant while they catch up. It transforms Google into a global intelligence layer powering hundreds of millions of iPhones in addition to Android devices.

GOOGLE

Google MedGemma 1.5 transforms medical AI with 3D imaging analysis

The Summary: Google launched MedGemma 1.5, the first publicly available AI model that interprets three-dimensional CT and MRI scans while also processing text and 2D images. The model achieved 65% accuracy on MRI disease classification and reduced medical transcription errors by 82% through its companion MedASR speech tool.

Key details:

  • MedGemma 1.5 processes entire CT volumes at once rather than slice-by-slice, analyzing multiple tissue sections simultaneously to catch correlations invisible in individual images

  • MRI diagnostic accuracy jumped 14 percentage points to 65%

  • MedASR speech model cuts transcription errors on medical dictations to 5.2% versus OpenAI Whisper's 28.2%

Why it matters: The medical AI landscape is fragmented into competing ecosystems. Google is pursuing a mixed strategy, releasing open-weight health models while also offering proprietary clinical AI services. Open-weight releases can lower licensing friction and make it easier for developers and researchers to prototype use cases and analyze massive datasets without licensing fees. The real question is whether open models for healthcare improve fast enough to match closed systems backed by hospital data and enterprise contracts.

TOOLS

🥇 New tools

That’s all for today!

If you liked the newsletter, share it with your friends and colleagues by sending them this link: https://thesummary.ai/