🚀 DeepMind Says AGI in 5 Years
PLUS: GPT-4o New Version Hits #2

Welcome back!
In a new report, Google DeepMind claims artificial general intelligence could arrive as early as 2030. Meanwhile, a former OpenAI researcher predicts itâll hit even earlier, by 2027. AGI isnât science fiction anymore. Itâs a moving target being actively engineered. Letâs unpackâŠ
Todayâs Summary:
đĄ Google DeepMind says AGI in 5 years
đ GPT-4o climbs to #2 spot in benchmarks
đ§ Claude reveals brain-like behavior
đŒïž Ideogram launches V3 image model
đ OpenAI now valued at $300B
âïž Anthropic scores copyright legal win
đ ïž 2 new tools

TOP STORY
Google DeepMind says AGI in 5 years
The Summary: Google DeepMind believes artificial general intelligence could arrive by 2030 and has released a detailed safety strategy to prepare. The plan outlines how it will prevent misuse, misalignment, and other risks as AGI systems begin to act autonomously and match or exceed human cognitive skills. Former OpenAI researcher Daniel Kokotajlo goes further, predicting AGI in the next 3 years. The message from both is that it's coming fast, but not out of our hands.
Key details:
DeepMind expects no "hard barriers" to AGI even with current algorithms, estimating 2e29-FLOP training runs by 2030 on 100M H100-equivalent chips, enough to train AIs orders of magnitude beyond GPT-4
Daniel Kokotajlo, ex-OpenAI, predicts AGI by 2027, betting on autonomous coding agents and aggressive self-improvement cycles
AGI could "lower the barrier to innovation" and supercharge sectors like healthcare, education, and science
DeepMind's safety plan includes AIs overseeing other AIs, and capability thresholds that act like "fire alarms" to trigger lockdowns if dangerous capabilities emerge
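The 2e29-FLOP figure can be sanity-checked with back-of-envelope math. The per-chip throughput (~1e15 FLOP/s for an H100) and the 40% average utilization below are assumptions for illustration, not numbers from DeepMind's report:

```python
# Back-of-envelope check of the 2e29 FLOP training-run estimate.
# ASSUMPTIONS (not from the report): ~1e15 FLOP/s peak per H100-class
# chip, sustained at ~40% average utilization across the fleet.

TARGET_FLOPS = 2e29      # training-run budget cited for 2030
CHIPS = 100e6            # 100M H100-equivalent accelerators
PEAK_PER_CHIP = 1e15     # FLOP/s per chip (assumed)
UTILIZATION = 0.4        # average utilization (assumed)

effective_rate = CHIPS * PEAK_PER_CHIP * UTILIZATION  # fleet FLOP/s
seconds = TARGET_FLOPS / effective_rate
days = seconds / 86_400
print(f"~{days:.0f} days of training")  # on the order of two months
```

Under these assumptions the run takes roughly two months of wall-clock time, i.e. the estimate is about fleet scale, not an implausibly long single run.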
Why it matters: AGI-level systems aren't a distant concept; they're already being actively scoped, tested, and stress-checked in labs. Google DeepMind's strategy suggests that the real bottlenecks may not be algorithms, but safety, governance, and deployment choices.

OPENAI
New ChatGPT-4o update climbs to #2
The Summary: OpenAI has rolled out a new version of GPT-4o that now ranks second on the Arena leaderboard, jumping three spots from its previous position. The update brings major improvements in following complex instructions. According to tests, this version ties for first place in coding and complex prompts while costing a tenth as much as GPT-4.5, which it now rivals in performance.
Key details:
The updated model shows a +30 point improvement on Arena rankings, moving from 5th to 2nd place overall
Matches or exceeds GPT-4.5 performance despite being priced much lower
Scored #1 on Artificial Analysisâ Coding Index and LiveCodeBench, ahead of Claude 3.7 Sonnet and DeepSeek V3
Math capabilities jumped from 14th to 2nd in rankings
Also available through the API as "chatgpt-4o-latest" for paid users
Why it matters: After Google Gemini 2.5 Pro hit #1, OpenAI is making clear it doesn't plan to stay behind for long. GPT-4o now outperforms the much costlier GPT-4.5. While it's no longer rare for leaner models to beat flagships, what matters now is who can ship smarter, faster, and scale without breaking budgets.

ANTHROPIC
Claude's circuits reveal AI thinks like biology
The Summary: Anthropic researchers developed a way to trace the actual computations happening inside an AI, and the findings are stranger than expected. Using neuroscience-style methods, they mapped how Claude plans ahead and rewrites logic on the fly, and identified a possible cause of hallucinations.
Key details:
Claude activates rhymes before starting the next sentence, showing it isn't just predicting the next word; it's building toward goals multiple steps ahead
In a math test (36 + 59), Claude didn't retrieve a memorized answer or follow human arithmetic rules. Instead, it ran a parallel computation: one path approximated the sum as ~92, another pinned the final digit at 5, and the two merged to give 95
In a medical diagnosis test, Claude analyzed symptoms using internal reasoning steps that resembled differential diagnosis, activating neural features for possible diseases and filtering them against the symptom profile, much like a physician narrowing down options
The model reasons across languages using a shared conceptual space, pointing to an underlying "language of thought" not tied to any specific grammar
If given a wrong hint, Claude may fabricate reasoning steps to justify the hint instead of solving the problem
Hallucinations can be toggled on and off by flipping a neuron-like feature that encodes a "known answer". Hallucination seems to be a decision, not a generation accident
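The two-path arithmetic can be mimicked with a toy sketch. This is not Anthropic's actual circuit (the real mechanism pairs a fuzzy magnitude estimate, "roughly 92", with an exact last-digit feature, "ends in 5"); it only illustrates how two independent partial computations can merge into one exact answer:

```python
# Toy illustration of two-path addition, NOT Anthropic's actual circuit.
# Two independent paths run "in parallel" and their results are merged.

def magnitude_path(a: int, b: int) -> int:
    """Coarse path: add the tens digits, ignoring the units entirely."""
    return (a // 10 + b // 10) * 10      # 36 + 59 -> 30 + 50 = 80

def units_path(a: int, b: int) -> int:
    """Precise path: add only the final digits (this pins the '...5')."""
    return a % 10 + b % 10               # 6 + 9 = 15

def merge(a: int, b: int) -> int:
    """Combine the two independent paths into a single exact answer."""
    return magnitude_path(a, b) + units_path(a, b)

print(merge(36, 59))  # -> 95
```

Neither path alone knows the answer: the coarse path is off by the carry, and the units path only knows the ending. Together they recover the exact sum, which is the flavor of decomposition the researchers describe.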
Why it matters: AI isn't just a next-word predictor. It's running a stack of complex processes that look increasingly like biological systems: competing activations, inhibition loops, goal-directed planning. The behaviors look more like brains than code. Anthropic's work marks major progress for a new scientific discipline focused on understanding how AI systems actually work under the hood.

QUICK NEWS
Quick news
Ideogram V3 image model: best for commercial illustration
OpenAI raises $40B at $300B valuation
Early legal win for Anthropic in AI copyright case

TOOLS
🔥 New tools
SimplAI - The simplest way to build and deploy AI agent apps
Airial Travel - Plan trips from ideas or travel videos

That's all for today!
If you liked the newsletter, share it with your friends and colleagues by sending them this link: https://thesummary.ai/