🚀 Claude Skills + Haiku 4.5

PLUS: Figure 03 Robot Doing Real Chores


Welcome back!

Anthropic just dropped two big upgrades: Claude Skills, a new way to embed custom workflows, and Haiku 4.5, a leaner model with near-Sonnet-level coding at double the speed. Efficiency is the new frontier. Let’s unpack…

Today’s Summary:

  • 🚀 Anthropic unveils Claude Skills & Haiku 4.5

  • ⌨️ Google Gemini 2.5 controls real web apps

  • 🦿 Figure 03 robot starts doing real chores

  • 💉 Anthropic: 250 poisoned files can backdoor LLMs

  • 🧩 Google launches Gemini CLI extensions

  • 💄 AI-powered makeup arrives in Google Meet

  • 🛠️ 2 new tools

TOP STORY

Anthropic unveils Agent Skills and Claude Haiku 4.5

The Summary: Anthropic introduced Agent Skills, folders of custom instructions, scripts, and resources that Claude loads only when needed. Skills let teams embed their custom workflows directly into Claude, cutting repeated prompts and lowering costs. Anthropic also released Claude Haiku 4.5, a smaller, faster model delivering near-Sonnet 4 coding accuracy at twice the speed. It’s now the standard model across Claude plans, including the free web version.

Key details:

  • Agent Skills are folders containing custom instructions and scripts, loaded only when needed, reducing token use

  • Haiku 4.5 replaces Sonnet 4 and Haiku 3.5 and is now the standard choice across Claude plans, including the Free tier

  • Sonnet 4.5 can delegate sub-tasks to multiple Haiku 4.5 instances, handling planning while Haiku executes in parallel for faster results

  • Early testers report tighter code edits and faster runtimes than GPT-5, with near-Sonnet precision at a fraction of the cost

Why it matters: Anthropic keeps tuning its models for practical efficiency. Haiku 4.5 shows that smaller, faster models can now reach frontier-level coding performance. Claude Skills extend that logic by letting users package their own workflows into reusable instructions that Claude loads automatically when they apply, with less repeated prompting.
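
To make "folders of custom instructions" concrete, here is a minimal sketch of what a skill can look like: a directory holding a SKILL.md whose frontmatter tells Claude when the skill applies, plus any scripts or resources the instructions reference. The brand-report example and the scripts/build_report.py helper below are illustrative assumptions, not taken from Anthropic's announcement.

```markdown
---
name: brand-report
description: Formats weekly brand reports in the company template. Use when the user asks for a brand or marketing report.
---

# Brand report workflow
1. Ask for this week's metrics as a CSV if none are attached.
2. Run scripts/build_report.py to produce the formatted tables.
3. Write the summary in the company voice: short sentences, no jargon.
```

The idea is that the full instructions load only when a request matches, which is how Skills cut token use compared with pasting the same workflow into every prompt.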

FROM OUR PARTNERS

Never Take Notes Again

The AI Wearable That Makes Your Life Unforgettable

Your greatest asset is your time. So stop wasting it jotting notes or chasing forgotten conversations.

The Limitless AI Pendant captures, transcribes, and organizes every meaningful interaction automatically. Get instantly searchable transcripts, smart summaries, and contextual reminders - all at your fingertips, all fully encrypted.

Tap into the future of productivity and free your mind to focus on what truly matters with Limitless.

GOOGLE

Google Gemini 2.5 Computer Use handles real web interfaces

The Summary: Google DeepMind released Gemini 2.5 Computer Use, a model that operates web and mobile user interfaces through visual reasoning, directly clicking, typing, and scrolling. Built on Gemini 2.5 Pro, it outperforms rival models from OpenAI and Anthropic on browser-control benchmarks. It is now available through the Gemini API in Google AI Studio and Vertex AI.

Key details:

  • Gemini 2.5 Computer Use leads in browser control benchmarks

  • Runs at lower latency than competitors, improving performance for UI automation and testing

  • Supports actions such as click, type text, scroll, and drag-and-drop

  • Includes built-in safety checks and custom instructions

Why it matters: Gemini 2.5 Computer Use interacts directly with visual interfaces without needing structured APIs, so any process visible on a screen can be operated by AI. Once agents can work fluently in real UIs, every app becomes an accessible workspace for intelligent systems.
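
For developers, here is a rough sketch of what calling the model through the Gemini API could look like with the google-genai Python SDK. The computer-use tool config and the preview model id follow Google's preview documentation but should be treated as assumptions, and take_screenshot() is a placeholder you would back with a real browser driver such as Playwright.

```python
# Rough sketch of one step of a computer-use agent. The tool config and model id
# follow the preview docs but are assumptions here, not a verified recipe.
from google import genai
from google.genai import types


def take_screenshot() -> bytes:
    # Placeholder: a real agent would capture this from a browser driver
    # (e.g. Playwright); here we just read a saved screenshot from disk.
    with open("screenshot.png", "rb") as f:
        return f.read()


client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-computer-use-preview-10-2025",  # preview model id (assumption)
    contents=[types.Content(role="user", parts=[
        types.Part(text="Open the pricing page and tell me the cheapest plan."),
        types.Part.from_bytes(data=take_screenshot(), mime_type="image/png"),
    ])],
    config=types.GenerateContentConfig(
        tools=[types.Tool(computer_use=types.ComputerUse(
            environment=types.Environment.ENVIRONMENT_BROWSER,  # browser-control mode
        ))],
    ),
)

# The model replies with UI actions (e.g. click, type text, scroll) as function
# calls. A real agent executes each action, takes a fresh screenshot, sends it
# back, and loops until the model returns plain text instead of another action.
for part in response.candidates[0].content.parts:
    if part.function_call:
        print(part.function_call.name, part.function_call.args)
    elif part.text:
        print(part.text)
```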

FROM OUR PARTNERS

Work Smarter with These Prompts

Want to get the most out of ChatGPT?

ChatGPT is a superpower if you know how to use it correctly.

Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.

Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

FIGURE AI

Figure 03 robot starts doing real chores

The Summary: Figure AI has unveiled Figure 03, a humanoid robot designed to operate in homes and workplaces without teleoperation. Built around the Helix “vision-language-action” AI model, Figure 03 features a full hardware redesign for large-scale manufacturing. It’s capable of handling household chores, moving through complex spaces, and learning directly from human interaction.

Key details:

  • Figure 03 stands 1.68 meters tall, weighs 60 kg, carries 20 kg, and runs for five hours per battery charge

  • Includes a 10 Gbps wireless link that uploads data for continuous learning and improvement

  • New vision system doubles the frame rate and expands the field of view

  • Custom tactile sensors detect forces as light as 3 grams, sensitive enough to feel a paperclip

  • Early feedback praises the engineering but questions how well the robot handles unscripted everyday environments

  • No public information yet on pricing or availability

Why it matters: Figure 03 marks the start of real-world training for humanoid AI. Each unit becomes both a worker and a data source, collecting the kind of experience no lab or simulation can match. That field data, including mistakes, recoveries, and tiny edge cases, feeds directly back to improve every other robot in the fleet. While the demos look polished, real progress will depend largely on how many units Figure can actually deploy.

QUICK NEWS

Quick news

TOOLS

🥇 New tools

That’s all for today!

If you liked the newsletter, share it with your friends and colleagues by sending them this link: https://thesummary.ai/