New Week #127 New World Same Humans


Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
It’s a bumper instalment this week; what do we have in store?
Google DeepMind owned this week’s tech headlines with the release of Gemini, a new multi-modal AI intended to outdo GPT-4.
Meanwhile, Harvard researchers have created tiny biological robots that can heal human tissue.
And the world’s largest nuclear fusion reactor is now online in Japan.
Let’s go!
Gemini has liftoff
This week, major news out of Google’s DeepMind AI division.
The DeepMind team announced Gemini, a multi-modal LLM that appears to push back the frontier for models of this kind.
Launch videos suggest Gemini can speak in real-time (though as I go to press doubts about that are being raised; more below). It understands text and image inputs, and can combine them in novel ways. Here it is giving ideas for toys to make out of blue and pink wool:
It can write code to competition standard. In tests it outperformed 85% of the human competitors it was benchmarked against; that means it holds its own even against some of the best coders on the planet.
Gemini can even perform sophisticated verbal and spatial reasoning, and handle complex mathematics. Imagine if you’d had this to help with your homework:
This is significant; OpenAI’s GPT-4 is notoriously bad at maths and logic puzzles.
And Google are, of course, taking direct aim at OpenAI with this launch. Gemini comes in three variants: Ultra, Pro, and Nano. US users can access the Pro version now via Bard, and the Ultra model will soon be made available to enterprise clients.
⚡ NWSH Take: It will take time to independently verify the claims DeepMind are making; there are some murmurs that their launch videos overstate Gemini’s competence. Still, there’s no denying this model looks impressive. // Scratch the surface, meanwhile, and we can discern some underlying signals about the future development of LLMs. This AI outperforms GPT-3.5 when it comes to linguistic tasks such as copy drafting. But it’s the multi-modal nature of Gemini that’s really significant; in particular, its ability to reason. LLMs are trained to do next word prediction; that means they’re brilliant at sounding right. But they lack any underlying ability to know whether what they’re saying is right, or even makes sense. Gemini seems to address this shortcoming. The promise of an LLM that can act as a true reasoning partner is exciting, and should haunt the dreams of all at OpenAI. // OpenAI’s reported work on the still-mysterious Q* algorithm is also believed to be about reasoning. All this suggests we’re hitting the limits of the performance improvements to be gained simply by training LLMs on ever larger data sets. Instead, the future belongs to those who can weave multiple models together. // Finally, a word for Alphabet’s CEO Sundar Pichai: kudos. Alphabet AI engineers invented the transformer model; then the company went missing. Gemini puts Alphabet firmly back in the race. And given the recent fiasco at OpenAI, Pichai this week looks like a man playing a canny long game. It’s going to be a fascinating 2024.
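To see what “next word prediction” means in practice, here’s a deliberately tiny sketch: a bigram model that always emits the most frequent word to follow the previous one. It has nothing to do with Gemini’s actual architecture; it just illustrates how a model can produce fluent-sounding text with no notion of whether what it says is right.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then greedily emit the most likely follower at each step.
corpus = "the cat sat on the mat the cat ate the fish".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def continue_text(word, steps=4):
    out = [word]
    for _ in range(steps):
        if word not in follow:
            break  # no known continuation
        word = follow[word].most_common(1)[0][0]  # greedy: most frequent next word
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # fluent, but the model has no idea if it's true
```

Real LLMs do the same thing at vastly greater scale, over learned probabilities rather than raw counts; the point is that fluency falls out of the statistics, while reasoning (the thing Gemini is claimed to add) does not.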
🤖 Anthrobots are go
Two stories this week signal powerful new avenues of discovery for the life sciences.
Scientists at Harvard and Tufts University have created tiny biological robots, called anthrobots, made out of human cells. In tests, the anthrobots were left in a small dish along with some damaged neural tissue. Scientists watched as the bots clumped together to form a superbot, which then repaired the damaged neurons.
Each anthrobot is made by taking a single cell from the human trachea. Those cells are covered in tiny hairs called cilia. The cell is then grown in a lab, and becomes a multi-cell en
