💸 Meta Matches Manhattan Project

PLUS: Drug Discovery AI Revolution

Welcome back!

Meta has reportedly spent a staggering $30 billion on GPUs, approaching the cost of the Manhattan Project, to fuel its AI ambitions. With plans to open-source its next mammoth 400B-parameter Llama 3 model, the company aims to democratize AI technology. Let's unpack what this means for the AI landscape and beyond.

Today’s Summary:

  • Meta's staggering $30 billion GPU spend

  • Unraveling machine unlearning techniques

  • AI accelerates novel drug discoveries

  • OpenAI teams with Stack Overflow

  • Buffett warns on AI-enabled scams

  • 3 new tools


Meta spent almost as much as the Manhattan Project on GPUs

The Summary: Meta (formerly Facebook) has spent a staggering $30 billion on GPUs to train its AI models, approaching the cost of the Manhattan Project. This massive investment underscores the tech giant's commitment to advancing AI technology.

Yann LeCun, Meta's Chief AI Scientist, confirmed the spending, which covers approximately one million high-end NVIDIA GPUs. He believes open-sourcing is crucial for faster progress through a community ecosystem, drawing a parallel with the open internet infrastructure that enabled widespread innovation.

Key details:

  • This massive investment highlights the astronomical computational resources required for cutting-edge AI training and deployment.

  • Open source attracts talent to work on impactful models like Llama and encourages a collaborative ecosystem.

  • Meta is currently training a new 400-billion-parameter Llama 3 model, due for release in the coming months.

Why it matters: Open-sourcing models of this scale could democratize access to advanced AI capabilities, fostering innovation and collaboration across industries. However, the implications of releasing such powerful models to the public deserve careful consideration: this move could shape the future trajectory of AI development and its societal impact.

“Yeah, it’s staggering, isn’t it? A lot of it, not just training, but deployment, is limited by computational abilities. One of the issues that we’re facing is the supply of GPUs and the cost of them at the moment.”

Yann LeCun, Meta Chief AI Scientist

Mastering the Art of Machine Unlearning

Image: DALL·E

The Summary: The concept of machine unlearning is gaining traction. Unlearning refers to removing the influence of specific data from a trained model, addressing concerns like privacy, copyright, and safety.

This comprehensive guide explores the motivations, techniques, and challenges of machine unlearning, offering insights into its potential and limitations. From exact unlearning to empirical approaches, the piece walks through the intricate details of each.

Source: NeurIPS Machine Unlearning Challenge

Key details:

  • Machine unlearning aims to remove the influence of training data from models without retraining from scratch

  • Motivations include access revocation (privacy, copyrighted content) and model correction (toxic data, dangerous capabilities)

  • Techniques range from exact unlearning (modular training) and differential privacy to empirical unlearning (fine-tuning) and prompting.

  • Evaluation of unlearning remains a challenge, with recent benchmarks like TOFU and WMDP addressing knowledge retention and understanding.

Why it matters: As AI systems become more complex and data-intensive, the ability to selectively unlearn information is crucial for addressing privacy, legal, and safety concerns. The guide sheds light on the current state of machine unlearning, highlighting its potential as a post-training risk mitigation tool and a mechanism for responsible AI development.
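The empirical, fine-tuning-based approach mentioned in the key details can be sketched on a toy model: take gradient *ascent* steps on the forget set (raising its loss), interleaved with ordinary descent on the retained data to preserve utility. The 1-D logistic regression below is purely illustrative, with invented data and hyperparameters; real unlearning targets large neural networks, and, as noted above, evaluating whether forgetting actually happened remains an open challenge.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def step(w, b, batch, lr, unlearn=False):
    """One pass of stochastic gradient steps on log-loss.
    unlearn=True flips descent into ascent, pushing loss UP on the batch."""
    for x, y in batch:
        p = sigmoid(w * x + b)
        gw, gb = (p - y) * x, (p - y)       # gradients of log-loss
        if unlearn:
            w, b = w + lr * gw, b + lr * gb  # gradient ascent (forget)
        else:
            w, b = w - lr * gw, b - lr * gb  # gradient descent (learn)
    return w, b

# Invented toy data: (feature, label) pairs.
retain = [(2.0, 1), (1.5, 1), (-2.0, 0), (-1.0, 0)]
forget = [(0.5, 1)]                          # example we want to "unlearn"

# 1) Train on everything, including the forget example.
w, b = 0.0, 0.0
for _ in range(200):
    w, b = step(w, b, retain + forget, lr=0.1)
before = sigmoid(w * 0.5 + b)                # confidence on the forget point

# 2) Empirical unlearning: ascent on the forget set,
#    interleaved with descent on the retain set.
for _ in range(50):
    w, b = step(w, b, forget, lr=0.1, unlearn=True)
    w, b = step(w, b, retain, lr=0.1)
after = sigmoid(w * 0.5 + b)

print(round(before, 3), round(after, 3))     # confidence should drop
```

Note the design tension this exposes: ascent alone would wreck the model, so the retain-set descent acts as a regularizer, and the method is "empirical" precisely because nothing guarantees the forget example's influence is fully gone.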


AI is Going to Revolutionize Drug Discovery

Image: DALL·E

The Summary: Generative AI is rapidly accelerating the discovery and development of new medications. Scientists at Eli Lilly have been surprised by the novel designs AI has produced for potential drug molecules.

A major precedent was set in 2021 when Google DeepMind's AlphaFold pioneered AI's application to protein structure prediction. Experts predict that within a few years, AI will design drugs that humans could not have created.

Key details:

  • Scientists at Eli Lilly generated “weird-looking” molecule structures with AI that look promising for drug development.

  • AI can rapidly screen trillions of drug compounds and predict protein structures, accelerating the drug discovery process.

  • AI's ability to "hallucinate" non-existent proteins becomes a feature rather than a bug, as it expands the pool of potential drug targets beyond known constraints.

Why it matters: The integration of generative AI into drug discovery could significantly speed up the process and reduce costs. AI's ability to explore novel chemical and protein designs could lead to breakthrough medical treatments that human researchers would have overlooked. This has the potential to reshape the pharmaceutical industry and the scientific method for drug development.
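The "rapidly screen trillions of compounds" claim boils down to a simple loop: propose candidates, score each with a trained predictor, and forward only the top hits to wet-lab validation. The sketch below is a toy stand-in, not any real pipeline: the candidates are abstract feature vectors and the linear "affinity predictor" with made-up weights replaces the docking or binding-affinity models actual screens would use.

```python
import random

random.seed(0)

def generate_candidate(n_features=8):
    """Stand-in for a generative model proposing a molecule,
    represented here as an abstract feature vector (assumption)."""
    return [random.uniform(-1, 1) for _ in range(n_features)]

# Hypothetical learned weights of an affinity predictor (invented).
weights = [0.9, -0.4, 0.7, 0.1, -0.8, 0.3, 0.5, -0.2]

def predicted_affinity(candidate):
    # Toy linear score standing in for a trained binding-affinity model.
    return sum(w * f for w, f in zip(weights, candidate))

# Screen a (small, for illustration) virtual library and keep the best.
library = [generate_candidate() for _ in range(100_000)]
ranked = sorted(library, key=predicted_affinity, reverse=True)
top_hits = ranked[:10]   # candidates that would go to lab validation

print(round(predicted_affinity(top_hits[0]), 3))
```

The economics follow directly: scoring is cheap and parallel, so the expensive experimental budget is spent only on the handful of highest-ranked candidates.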


Quick news

“If I was interested in investing in scamming, it’s gonna be the growth industry of all time, and it’s enabled, in a way, by AI.”

Warren Buffett

“GPT-4 is the dumbest model any of you will ever have to use again, by a lot.”

Sam Altman, OpenAI CEO


🥇 New tools

That’s all for today!

If you liked this newsletter, share it with your friends and colleagues by sending them your invite link: {{rp_refer_url}}.