🚀 Google AI Rewrites Its DNA

PLUS: OpenAI Ships GPT-4.1 in ChatGPT

In partnership with

Welcome back!

Google just gave us a glimpse of what self-improving AI might actually look like. DeepMind has unveiled AlphaEvolve, an internal system that invents complex algorithms and makes new discoveries with no humans in the loop. It’s already improving the Google infrastructure it runs on. Let’s unpack…

Today’s Summary:

  • 🚀 DeepMind unveils self-improving AI

  • 💻 OpenAI releases GPT-4.1 in ChatGPT

  • ⌚ Gemini AI expands across devices

  • 🎧 Stability & Arm launch mobile AI audio

  • ✨ TikTok AI animates still photos

  • 💸 Claude Code gets flat-rate pricing

  • 🍏 Apple debuts FastVLM vision encoder

  • 🛠️ 2 new tools

TOP STORY

AlphaEvolve: did Google just build a self-improving AI?

The Summary: DeepMind revealed AlphaEvolve, an internal AI system that designs advanced algorithms and can make new discoveries in math and computing. Using an evolutionary framework, AlphaEvolve has already improved Google’s infrastructure, from data centers to chip design, while also finding entirely new algorithms for advanced problems.

Key details:

  • Physicist Mario Krenn called the work “quite spectacular,” citing it as the first evidence that LLMs can yield original scientific insight

  • Runs Gemini 2.0 models inside an evolutionary loop that generates, scores, and refines candidate code with no human involvement (a toy sketch of such a loop follows this list)

  • AlphaEvolve improved matrix multiplication for 4x4 complex matrices, finding a 48-multiplication algorithm that beats Strassen’s 1969 method; related optimizations also boosted Google chip design work and Gemini training speed

  • Recovered 0.7% of compute capacity across Google data centers by suggesting a simple scheduling tweak

  • Tested on open math problems, it independently rediscovered the best known solutions in about 75% of cases and outperformed them in 20%

  • Hints at recursive self-improvement: AlphaEvolve improves Gemini, which powers AlphaEvolve

  • Not public yet: Google DeepMind plans early access for academic researchers, broader availability TBD
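
AlphaEvolve itself isn’t public, but the announcement describes its core as an evolutionary loop: Gemini proposes code edits, automated evaluators score each candidate, and the fittest candidates seed the next round. The sketch below is only a toy illustration of that pattern, not DeepMind’s pipeline; `llm_propose_patch`, `evaluate`, and the shorter-is-better scoring objective are all placeholders invented for the example.

```python
import random

def evaluate(program: str) -> float:
    """Automated evaluator: score a candidate program.

    Placeholder objective: in a real system this would run the candidate
    and measure something objective (speed, correctness, resource use).
    Here we simply prefer shorter programs.
    """
    return -len(program)

def llm_propose_patch(parent: str) -> str:
    """Hypothetical stand-in for an LLM call that mutates a parent program."""
    # Toy mutation: drop one random character to simulate a code edit.
    if len(parent) <= 1:
        return parent
    i = random.randrange(len(parent))
    return parent[:i] + parent[i + 1:]

def evolve(seed_program: str, generations: int = 100, population_size: int = 8) -> str:
    """Keep a small population of candidates; each generation, ask the
    'LLM' for edits, score them automatically, and keep the fittest."""
    population = [seed_program]
    for _ in range(generations):
        children = [llm_propose_patch(random.choice(population))
                    for _ in range(population_size)]
        population = sorted(set(population + children), key=evaluate, reverse=True)
        population = population[:population_size]
    return population[0]

if __name__ == "__main__":
    best = evolve("def f(x):  return x * 1 + 0  # redundant code")
    print("best candidate:", best)
```

Because the evaluators are automated and machine-gradable, the loop can keep iterating without a human reviewing each candidate, which is what the “no human in the loop” claim refers to.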

Why it matters: AlphaEvolve looks like a genuine breakthrough: LLMs inventing performant, deployable code with real-world impact. These aren’t toy examples or UI boilerplate, but low-level algorithms powering the Google infrastructure the AI itself runs on. That unlocks a loop where AI could begin to improve the very systems it depends on, compounding gains over time and possibly redefining what “programming” means at scale.

FROM OUR PARTNERS

Turn AI into a revenue stream

How can AI power your income?

Ready to transform artificial intelligence from a buzzword into your personal revenue generator?

HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.

Inside you'll discover:

  • A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential

  • Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background

  • Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve

Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.

OPENAI

OpenAI adds GPT-4.1 to ChatGPT

The Summary: OpenAI has added GPT-4.1 and GPT-4.1 mini to ChatGPT, making the models available to Plus, Pro, and Team users after an initial API-only release. GPT-4.1 prioritizes coding performance and precise instruction-following, positioned as a faster alternative to the o3 and o4-mini reasoning models for everyday coding. The rollout comes alongside a new Safety Evaluations Hub and replaces GPT-4o mini with GPT-4.1 mini.

Key details:

  • GPT-4.1 scores 21.4 points higher than GPT-4o on SWE-bench Verified, a major software engineering benchmark

  • The model cuts verbosity by 50% compared with OpenAI’s o-series models

  • OpenAI now provides a dedicated Safety Evaluations Hub, with benchmark transparency across models

  • GPT-4.1 mini, available even on the free tier, replaces GPT-4o mini as the default small model in ChatGPT

Why it matters: GPT-4.1 isn’t the smartest model in the lineup, but it might be the one you actually want to use every day. It focuses on code accuracy and instruction handling, while also being faster and less verbose.

FROM OUR PARTNERS

A smarter way to read the news

Seeking impartial news? Meet 1440.

Every day, 3.5 million readers turn to 1440 for their factual news. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture, all in a brief 5-minute email. Enjoy an impartial news experience.

GOOGLE

Gemini AI expands beyond the phone

The Summary: Google is embedding Gemini across more devices, from your wrist to your car, TV, and XR headsets. With natural voice control and deep app integration, Gemini is replacing Google Assistant in Android Auto and Wear OS. This signals an evolution toward a truly ambient AI, blurring the line between device and assistant.

Key details:

  • Gemini will roll out on Android Auto, Wear OS, Google TV, and Samsung’s upcoming XR headset starting mid-2025

  • Wear OS watches will let users speak naturally to check email or set reminders hands-free

  • In cars, Gemini will summarize messages, suggest nearby spots based on preferences (e.g., vegan taco places), and translate replies

  • Gemini Live offers persistent, on-the-go conversation, helping users brainstorm recipes, plan trips, or get book summaries

Why it matters: Google is replacing the device-centric experience with an AI-centric one. Gemini isn’t confined to your phone anymore; it’s becoming the connective tissue of the whole ecosystem. From wrist to dashboard to living room, Gemini listens, reasons, and responds in real time with natural conversation. Google is building an invisible layer of intelligence across hardware, turning passive devices into collaborators.

QUICK NEWS

Quick news

TOOLS

🥇 New tools

That’s all for today!

If you liked the newsletter, share it with your friends and colleagues by sending them this link: https://thesummary.ai/