🚀 Meta AI Glasses Go Neural

PLUS: Google Releases Gemini in Chrome

In partnership with

Welcome back!

Meta is betting on new ways to interact with AI. Its new Ray-Ban Display glasses pair an in-lens color display with a Neural Band wristband whose muscle sensors turn subtle hand gestures into commands and text input. Could this be the next big device to rival phones? Let’s unpack…

Today’s Summary:

  • 🕶️ Meta unveils AI Glasses with Neural Band

  • 🎥 Luma launches Ray3 HDR video model

  • 🧑‍💻 Google integrates Gemini into Chrome

  • 🏆 Gemini Deep Think & GPT-5 win ICPC gold

  • 💡 GPT-5 Codex upgrade speeds coding

  • 🐇 Qwen3-Next runs 10× faster

  • 🛠️ 2 new tools

TOP STORY

Meta launches AI Glasses with Neural Band

The Summary: Meta has unveiled the Ray-Ban Display glasses, new AI eyewear with an integrated color display and a companion “Neural Band” wrist sensor. The glasses let wearers check messages, take video calls, navigate, translate conversations, and control music hands-free. The Neural Band reads subtle muscle signals at the wrist, enabling silent commands and typing as a new kind of input method.

Key details:

  • Glasses weigh 69 grams and run six hours per charge

  • The Neural Band uses EMG muscle sensors trained on data from 200,000 participants and enables typing at about 30 words per minute via subtle finger movements

  • The 600×600-pixel in-lens display delivers visuals previously only possible in bulky headsets

  • Built-in camera supports 12MP photos and 1080p live video calls

  • Starts at $799 for both the glasses and Neural Band

  • US launch September 30, with Canada, France, Italy, and UK availability planned for early 2026

Why it matters: Meta is betting glasses can free AI from phones and push computing toward a new class of devices. The Neural Band is the real wildcard: a new input system on par with the mouse or the touchscreen. The question is whether people are ready to wear glasses with built-in screens and cameras, or whether this will meet the same fate as Google Glass.

FROM OUR PARTNERS

Teach Anything in Seconds with This Free AI Extension

Create How-to Videos in Seconds with AI

Stop wasting time on repetitive explanations. Guidde’s AI creates stunning video guides in seconds—11x faster.

  • Turn boring docs into visual masterpieces

  • Save hours with AI-powered automation

  • Share or embed your guide anywhere

How it works: Click capture on the browser extension, and Guidde auto-generates step-by-step video guides with visuals, voiceover, and a call to action.

LUMA AI

Luma AI introduces Ray3 video generator with HDR

The Summary: Luma AI has launched Ray3, a generative video model that combines reasoning with studio-grade HDR output. The system produces 10-, 12-, and 16-bit video, supports EXR export for pro pipelines, and adds Draft Mode for rapid prototyping. Ray3 aims to push AI video closer to professional production workflows.

Key details:

  • Ray3 introduces reasoning for video: it evaluates drafts, self-corrects, and interprets visual annotations like sketches

  • First generative video model to natively support 16-bit HDR and export to EXR for color grading (a short sketch of why that matters follows this list)

  • Draft Mode generates previews then upscales to 4K HDR

  • Visual annotation lets users sketch or mark up frames to control motion, layout, or camera angles

  • Unlike Google Veo 3, Ray3 does not generate audio natively; it focuses purely on video quality and professional integration
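
To make the EXR point concrete: EXR frames store floating-point pixel values, so highlights are not clipped and exposure or color decisions can still be made after generation. Below is a minimal Python sketch of loading such a frame and applying a simple grade, assuming OpenCV is installed and using a hypothetical file name; this is not a Ray3 API, just an illustration of the format’s headroom.

```python
import os

# OpenCV ships with its EXR codec disabled by default; opt in before importing cv2.
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"

import cv2
import numpy as np

# Hypothetical file name for an exported frame; any float/16-bit EXR works here.
frame = cv2.imread("ray3_frame_0001.exr", cv2.IMREAD_UNCHANGED)  # float32 BGR, values may exceed 1.0

exposed = frame * 2.0                    # push exposure up one stop without clipping highlights
tone_mapped = exposed / (1.0 + exposed)  # simple Reinhard tone map back into the 0-1 range
preview = np.clip(tone_mapped * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("preview.png", preview)      # 8-bit preview for quick review
```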

Why it matters: Ray3 is the first video model designed to think visually, letting it judge its own shots before handing them over and reducing iteration loops. HDR output and EXR export mean the footage can drop directly into professional editing and grading tools. The features target studios as well as indie creators, who gain access to workflows once locked behind million-dollar post houses.

FROM OUR PARTNERS

AI Agents Designed For Complex Customer Support

Maven AGI delivers enterprise-grade AI agents that autonomously resolve up to 93% of support inquiries, integrate with 100+ systems, and go live in days. Faster support. Lower costs. Happier customers. All without adding headcount.

GOOGLE

Google brings Gemini into Chrome

The Summary: Google has rolled out Gemini integration directly into Chrome for US desktop users, in the browser’s largest AI update yet. Users can ask Chrome to summarize web content, compare information across tabs, and recall previously visited sites. Chrome also gains an AI Mode in the address bar, scam detection powered by Gemini Nano, and one-click password resets.

Key details:

  • Gemini in Chrome has launched for US Mac and Windows users; a mobile rollout is underway

  • Multi-tab summarization pulls information scattered across open tabs into a single summary or comparison

  • New AI history search lets users retrieve sites with AI prompts (can be turned off to avoid sending page contents to Google)

  • Deeper hooks into Calendar, YouTube, and Maps mean tasks like finding meeting slots or video timestamps can happen without leaving a page

  • Gemini Nano runs locally to detect fake virus popups and scams

  • A password agent changes compromised passwords in one click on supported sites

Why it matters: The browser is evolving into the main operating layer for AI. Three strategies are emerging: AI-native browsers like Comet and Dia, which redesign the interface around chat; agent add-ons like Anthropic’s Claude for Chrome, which treat webpages as surfaces for automation; and incumbent integration, with Google embedding Gemini into Chrome at massive scale.

QUICK NEWS

Quick news

TOOLS

🥇 New tools

  • Genstore - Launch an AI-powered store in 2 minutes

  • Oboe - Create fun, lightweight, flexible courses

That’s all for today!

If you liked the newsletter, share it with your friends and colleagues by sending them this link: https://thesummary.ai/