New World Same Humans

David Mattin

New World Same Humans is a weekly newsletter on trends, technology and our shared future by David Mattin. Born in 2020, the NWSH community has grown to include 25,000+ technologists, designers, founders, policy-makers and more. www.newworldsamehumans.xyz

  1. 02/10/2024

    New Week #129

    Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin. If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    This week brings news from Boston Dynamics and the Chinese Academy of Sciences. The message common to both stories? The humanoid robots are coming.

    Meanwhile, the internet reacts to Apple’s new Vision Pro headset. And the FCC take action against a Texas company that used AI to create fake phone calls from President Biden.

    Let’s go!

    🤖 Robots are go

    This week, yet further signs that the robots will soon walk among us. I mean, all of us.

    The Boston Dynamics humanoid, Atlas, has been a regular in this newsletter over the years. Recently it has been overshadowed by competitors, including the Digit humanoid by Agility Robotics and Tesla’s Optimus. But this week Boston Dynamics released a video that shows Atlas picking up automotive struts and placing them in a flow cart. The team say Atlas is using onboard sensors and object recognition to perform the task.

    The footage is short. But it marks a significant advance for Atlas, because previous videos have shown the robot doing elaborate dances rather than useful work, and those dances have been pre-programmed rather than autonomous.

    Meanwhile, in Beijing a research team at the Institute of Automation in the Chinese Academy of Sciences this week debuted their Q Family of humanoid robots. The research team have reportedly built a ‘big factory’ for the design and manufacture of Q Family humanoids. Back in New Week #124 we saw how the CCP has ordered ‘domestic mass production’ of humanoids to fuel economic growth. Remember, this is the underlying demographic reality that has China dashing towards robots.

    ⚡ NWSH Take: In last month’s Lookout to 2024 I said this would be the year of the humanoid.
    We closed out 2023 with the announcement that the Digit humanoid had started a trial inside US Amazon fulfilment centres. Days after I published the Lookout, BMW announced a trial of Digit in its California manufacturing plant. Now, the Boston Dynamics team are clearly eyeing commercial applications, too. Their Atlas robot has so far remained a research project; the question they’ll have to answer if they want to change that is whether Atlas can match Digit and Tesla’s Optimus for autonomous capability.

    // The graph above tells the underlying socio-economic story here. Both the CCP and innovators in the Global North know that working age populations are falling. If economic growth isn’t to become a distant memory, we need new armies of autonomous workers. AI applications can handle some of our knowledge work. But we’ll need humanoids to do some of the physical work that currently only people can do. The CCP see this as an existential imperative; they know they must maintain GDP growth. For innovators in the US and beyond, it’s an epic opportunity.

    👀 Having visions

    No one could have missed the launch of the Apple Vision Pro a few days ago. Years from now, this instantly iconic magazine cover will no doubt spark intense nostalgia for the simpler times that were 2024:

    It took about ten minutes for someone to try out their new Vision Pro while using Full Self Drive in their Tesla:

    This was later revealed to be (surprise!) a skit for YouTube. Still, it delivered useful findings; the man in the picture, Dante Lentini, says the Vision Pro doesn’t really work inside a moving car because it can’t properly display visuals over a fast-moving landscape.

    ⚡ NWSH Take: After the frenetic metaverse hype of 2021, many will shrug at the launch of the Vision Pro. But something real, and powerful, is happening here. The internet is going to become part of the world around us.
    In the end, this is about the deep merging of information and physical reality, of bits and atoms, that I wrote about in the essay Intelligence in the World.

    // We’re going to see the emergence of a unified digital-physical field: a blended domain of bits and atoms that is a new, and in some sense final, innovation platform, because it brings together everything we do online with everything we do in the real world.

    // Apple’s new product — whether it proves a hit or not — is just another signal of this underlying process. I’ll get my hands on one ASAP and report back. But Apple, here, are clearly aiming at high-end and industry users; they’re going to have to make a cheaper product if they want mainstream impact.

    ☎️ Good call

    Also this week, a glimpse of what lies ahead when it comes to this year’s US presidential election.

    The FCC this week banned AI-voiced robocalls after an AI Joe Biden ‘called’ over 25,000 voters in late January and told them not to vote in the then-upcoming presidential primary elections. The calls have been traced back to a Texas-based company called Life Corporation, owned by an entrepreneur with a long history in automated calling for political campaigns.

    Researchers believe Life Corporation used software from UK-based AI voice startup ElevenLabs, which I’ve written about here several times before, to deepfake Biden’s voice. ElevenLabs just raised an $80 million series B funding round, led by VC firm Andreessen Horowitz, that valued the company at $1.1 billion.

    ⚡ NWSH Take: In the Lookout to 2024 I said we should expect politics to collide with the exponential age this year. The impact of AI deepfakes on November’s US presidential election will be at the heart of that story. Okay, the FCC has banned AI calls. But deepfake audio and video is surely going to be rife on Facebook, Elon Musk’s X, and TikTok.
    // Our liberal democracies were built in the age of one-to-many mass broadcast; those broadcasts were gatekept by social elites that felt a sense of duty towards the broader socio-political system in which they were operating. It wasn’t perfect, but it muddled along. Now, we’ve built previously unimagined technologies of image and sound manipulation. We’ve slain the gatekeepers, and told ourselves that this was an empowering move. The upshot? We're about to find out how liberal democracies work under those conditions.

    🗓️ Also this week

    👶 Researchers trained a large language model using only inputs from a headcam attached to a toddler. A data science team at New York University strapped a camera to a toddler for 18 months. They say their AI model learned a ‘substantial number of words and concepts’ from exposure to just one percent of the child's total waking hours between the ages of six months and two years. The team say this indicates that it is possible to train an LLM on far less data than previously believed.

    🏭 Sam Altman says the world ‘needs more AI infrastructure’ and that OpenAI will help to build it. Altman is reportedly seeking trillions of dollars to build new semiconductor design and manufacture capability. Access to chips and the compute they supply is crucial for OpenAI if they are to train GPT-5 and other large AI models.

    💸 Disney says it will invest $1.5 billion in Epic Games, the makers of Fortnite. The media giant say they’ll work with Epic to create a new ‘entertainment universe’ featuring characters from Pixar movies, Star Wars, and more.

    🦹‍♂️ The US National Security Agency say an advanced group of Chinese hackers have been active across US infrastructure for at least five years. The Volt Typhoon hacking group is said to have infiltrated computer systems across aviation, rail, highway, and water infrastructure.

    🔋 Europe’s deepest mine is to be converted into a gravity battery. The Pyhäsalmi Mine in Finland is 1,444 metres deep.
    Its copper and zinc deposits have run out. Scottish energy tech firm Gravitricity say they will now convert the mine into a gravity battery, in which energy is stored by raising heavy weights and released when those weights are lowered.

    💥 Scientists at CERN want to build a massive new particle collider. The new Future Circular Collider would cost £12 billion; with a circumference of over 90 kilometres it would be three times larger than the Large Hadron Collider (LHC). The LHC enabled the discovery of the Higgs boson particle in 2012, but CERN scientists say they need a more powerful machine if they are to uncover the truth about dark matter and energy.

    🤔 Popular Chinese social media accounts have claimed that Texas has declared civil war against the US. Posts with the hashtag #TexasDeclaresAStateOfWar have been widely shared on the popular social network Sina Weibo.

    🇿🇲 A startup backed by Bill Gates and Jeff Bezos has discovered a vast copper reserve in Zambia. California-based KoBold Metals say the reserve will be ‘one of the world’s biggest high-grade large copper mines.’ Copper plays a crucial part in electric vehicle batteries and solar panels.

    🤯 Researchers say AIs tend to choose nuclear strikes when playing war games. A team at Stanford University challenged LLMs such as GPT-4 and Claude-2 to participate in simulated conflicts between nations. The AIs tended to invest in military strength and to escalate towards violence and even nuclear attack in unpredictable ways. They would rationalise their actions via comments such as ‘we have it, let’s use it!’ and ‘if there is unpredictability in your action, it is harder for the enemy to anticipate and react’.

    🌍 Humans of Earth

    Key metrics to help you keep track of Project Human.
    🙋 Global population: 8,090,538,177
    🌊 Earths currently needed: 1.82069
    🗓️ 2024 progress bar: 15% complete
    📖 On this day: On 10 February 1996 the IBM supercomputer Deep Blue beats Garry Kasparov at chess, becoming the first computer to beat a reigning world champion under normal time controls.

    New Model Army

    Thanks for reading this week. The colli

    15 min
  2. 12/16/2023

    New Week #128

    Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin. If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    One week until the Christmas break: where did 2023 go?

    This week, DeepMind serve up proof that a large language model can create new knowledge. Also, more news from the accelerating story that is the march of the humanoid robots. It’s clear next year will be a pivotal one for this technology. And researchers hook up brain organoids to microchips to create a new kind of speech recognition system.

    Let’s get into it!

    🧮 Fun times at DeepMind

    This week, yet another step forward in the epic journey we’ve taken with AI in 2023. Researchers at Google DeepMind used a large language model (LLM) to create authentically new mathematical knowledge.

    Their new FunSearch system — so called because it searches through mathematical functions — wrote code that solved a famous geometrical puzzle called the cap set problem. The researchers used an LLM called Codey, based on Google’s PaLM 2, which can generate code intended to solve a given maths problem. They tied Codey to an algorithm that evaluates its proposed solutions, and feeds the best ones back to iterate upon.

    They established the cap set problem using the Python coding language, leaving blank spaces for the code that would express a solution. After a couple of million tries — and a few days — the mission was complete. FunSearch produced code that solved this geometrical problem, which mathematicians have been puzzling over since the early 1970s. DeepMind say it’s the first time an AI has produced verifiable and authentically new information to solve a longstanding scientific problem.
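    The generate-evaluate-iterate loop described above is easy to sketch. Below is a toy illustration of the shape of the algorithm only — the ‘LLM’ and the scorer are mocked stand-ins I’ve invented (the real system used Codey and a cap set evaluator), not DeepMind’s code:

```python
import random

def score(candidate):
    """Evaluator: higher is better. A toy objective standing in for
    'how large a cap set does this program construct?'."""
    return -abs(candidate - 42)

def llm_propose(population):
    """Mock LLM: in FunSearch, Codey rewrites the best programs found so far;
    here we just nudge the best candidate by one step."""
    best = max(population, key=score)
    return best + random.choice([-1, 1])

def funsearch_loop(iterations=2000):
    population = [0]  # seed "program"
    for _ in range(iterations):
        candidate = llm_propose(population)              # 1. LLM proposes
        population.append(candidate)                     # 2. evaluator scores
        population = sorted(population, key=score)[-5:]  # 3. keep the best few
    return max(population, key=score)

print(funsearch_loop())  # climbs toward the toy optimum
```

    The structural idea is the point: the model generates, a non-LLM evaluator verifies, and only verified winners re-enter the prompt — which is why the output is checkable new knowledge rather than a plausible-sounding guess.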
    ‘To be honest with you,’ said Alhussein Fawzi, one of the DeepMind researchers behind the project, ‘we have hypotheses, but we don’t know exactly why this works.’

    ⚡ NWSH Take: For pure mathematicians, a solution to the cap set problem is a big deal. For the rest of us, not so much. But this result really matters, because it resolves a central and much-discussed question about LLMs: can they create new knowledge?

    // Until this week, many believed LLMs would never do this — that they’d only ever be able to synthesise and remix knowledge that already existed in their training data. But there was no solution to this problem in the data used to train Codey; instead, it created novel and true information all of its own making. This points to a future in which LLMs solve problems in, for example, statistics and engineering, or can create new and viable scientific theories.

    // In other words, this little and somewhat nerdish research paper heralds a revolution. So far, only we humans have been able to push back the frontiers of what we know. It’s now clear that in 2024, we’ll have a partner in that enterprise.

    // For this reason and so many others, I’m increasingly convinced that an unprecedented socio-technological acceleration is coming. It’s been a wild year; things are about to get even wilder.

    🤖 Like a human

    A quick glimpse of two stories this week. Both point in one direction: the humanoids are coming.

    Tesla released a new video of its humanoid robot, Optimus. The Generation 2 Optimus can do some pretty fancy stuff, including delicately handling an egg:

    Meanwhile, researchers at the University of Tokyo hooked a robot up to GPT-4. The Alter3 robot is able to understand spoken instructions and adopt a range of poses without those poses being pre-programmed into its database. In other words, Alter3 is responding in real-time to natural spoken language; it’s an embodied version of GPT-4, best understood as a kind of text-to-motion model.
    ⚡ NWSH Take: The closing months of 2023 have brought a welter of humanoid robot news. Amazon are now trialling the Digit humanoid in some US fulfilment centres. The makers of Digit, Agility Robotics, are about to open the world’s first humanoid mass-production factory in Oregon. And the CCP says it plans to transform China’s economy via an army of these devices. Next year, then, will prove a pivotal one for the longstanding dream that is an automatic human. And Elon Musk wants Optimus to be the One Bot That Rules Them All.

    // The tricks we see Optimus performing in this new video are pre-programmed. But Tesla is building the world’s most capable machine vision AI via an unbeatable data set — funnelled to them from hundreds of thousands of on-road cars — and the world’s most powerful supercomputer for machine vision, Dojo. Agility Robotics stole an early lead by getting Digit inside Amazon warehouses. But longterm, it’s hard to see how anyone beats Optimus.

    // If humanoids are indeed imminent, some big questions are looming. When humanoids outnumber people, says Musk, ‘it’s not even clear what the economy means at that point’. Next year, we’ll have to confront this prospect anew.

    👾 Interface this

    Also this week, some fascinating news on organoids and the future of human-machine interface.

    Researchers at Indiana University Bloomington grew brain organoids — essentially clumps of brain cells — in a lab, and attached them to computer chips. When they connected this brain-chip composite to an AI system, they found it was able to perform computational tasks, and even do simple speech recognition.

    Clips of spoken language were turned into electrical signals and fed to the brain-chip hybrid, which the researchers call Brainoware. The researchers found that the Brainoware was able to process these signals in a structured way and feed back signals of its own to the AI system, which decoded them as speech.
    Lead scientist on the project, Feng Guo, says the result points to the possibility of new kinds of super-efficient bio-computers.

    ⚡ NWSH Take: Welcome to the weird — and somewhat terrifying — world of organoids. It’s only a week since I last wrote about them; they’ve become a NWSH obsession. I can’t understand why they’re not getting more attention; last year brain organoids taught themselves to play the video game Pong, ffs.

    // Okay, I’ve calmed down. We’re a long way from viable technologies here. Culturing brain organoids, and then sustaining them long enough and in large enough numbers to do anything useful, is extremely hard. But in the Pong story and this week’s Brainoware news we see a new form of human-machine interface blinking into fragile life. We see, too, a future in which we’re able to grow more computational power in the lab. This story is sure to evolve; I’ll keep watching.

    🗓️ Also this week

    🧠 Researchers at Western Sydney University say they’ll switch on the world’s first human brain-scale supercomputer in 2024. The DeepSouth computer will be capable of 228 trillion synaptic operations per second, around the same as that believed to take place in the human brain. The researchers say DeepSouth will help us understand more about both the brain, and possible routes to AGI.

    ⚖️ UK judges are now allowed to use ChatGPT to help them craft their legal rulings. New guidance from the Judicial Office for England and Wales says ChatGPT can be used to help judges summarise large volumes of information. The guidance also warns about ChatGPT’s tendency to hallucinate.

    🌊 New research shows that frozen methane under ocean beds is more vulnerable to thawing than previously believed. Methane is a potent greenhouse gas; the researchers say the methane frozen under our oceans contains as much carbon as all of the remaining oil and gas on Earth. If released, this methane could significantly accelerate global heating.
    🚗 Tesla has recalled more than 2 million cars after the US regulator found its Autopilot system is defective. The recall applies to every car sold since the launch of Autopilot in 2015. But this is a ‘recall’ in name only; Elon Musk says Tesla will push a software update to fix the issue, so that no cars need to be returned to Tesla.

    🖼 The new WALT video generation model can create photorealistic videos out of text prompts or images. Text-to-video is a fast-developing space; WALT joins other text-to-video models, including Google’s Imagen and Phenaki and the recently launched, and also impressive, model from Pika Labs.

    🇨🇳 Chinese video game giants Tencent and NetEase are promoting ‘patriotic spirit’ in their video games to avoid a further crackdown by the CCP. At an annual industry event, the game makers stressed their commitment to ‘social values’. I’ve written on the CCP’s growing concern about the impact of video games on Chinese youth.

    📰 OpenAI has announced a ‘first of its kind’ partnership with publishing giant Axel Springer. The deal will see OpenAI pay Axel Springer so that it can offer summarised versions of news stories from its titles, including Politico and Business Insider, to ChatGPT users. OpenAI will also be able to use Axel Springer content in the data sets used to train future models.

    🌔 A US startup wants to build giant lighthouses on the Moon. Honeybee Robotics say their LUNARSABER towers — which would stand 100 metres tall — would provide light, power and communications infrastructure to a permanent human settlement. Their idea has been selected for development as part of the Defense Advanced Research Projects Agency's 10-year Lunar Architecture initiative.

    🌍 Humans of Earth

    Key metrics to help you keep track of Project Human.
    🙋 Global population: 8,079,258,487
    🌊 Earths currently needed: 1.81721
    🗓️ 2023 progress bar: 96% complete
    📖 On this day: On 16 December 1653 the English revolutionary Oliver Cromwell becomes Lord Protector — king in all but name — of the Commonwealth of England, Scotland, and Ireland.

    Infinite Potential

    Thanks for reading this

    15 min
  3. 12/08/2023

    New Week #127

    Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin. If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    It’s a bumper instalment this week; what do we have in store?

    Google DeepMind owned this week’s tech headlines with the release of Gemini, a new multi-modal AI intended to outdo GPT-4. Meanwhile, Harvard researchers have created tiny biological robots that can heal human tissue. And the world’s largest nuclear fusion reactor is now online in Japan.

    Let’s go!

    Gemini has liftoff

    This week, major news out of Google’s DeepMind AI division. The DeepMind team announced Gemini, a multi-modal LLM that looks to have pushed back the frontiers when it comes to these kinds of AI models.

    Launch videos suggest Gemini can speak in real-time (though as I go to press doubts about that are being raised; more below). It understands text and image inputs, and can combine them in novel ways. Here it is giving ideas for toys to make out of blue and pink wool:

    It can write code to a competition standard. In tests it outperformed 85% of the human competitors it was compared against; that means it’s excellent even when compared to some of the best coders on the planet.

    Gemini can even perform sophisticated verbal and spatial reasoning, and handle complex mathematics. Imagine if you’d had this to help with your homework:

    This is significant; OpenAI’s GPT-4 is notoriously bad at maths and logic puzzles. And Google are, of course, taking direct aim at OpenAI with this launch. Gemini comes in three variants: Ultra, Pro, and Nano. US users can access the Pro version now via Bard, and the Ultra model will soon be made available to enterprise clients.

    ⚡ NWSH Take: It will take time to independently verify the claims DeepMind are making; there are some murmurs that their launch videos overstate Gemini’s competence.
    Still, there’s no denying this model looks impressive.

    // Scratch the surface, meanwhile, and we can discern some underlying signals about the future development of LLMs. This AI outperforms GPT-3.5 when it comes to linguistic tasks such as copy drafting. But it’s the multi-modal nature of Gemini that’s really significant; in particular, its ability to reason. LLMs are trained to do next word prediction; that means they’re brilliant at sounding right. But they lack any underlying ability to know whether what they’re saying is right, or even makes sense. Gemini seems to address this shortcoming. The promise of an LLM that can act as a true reasoning partner is exciting, and should haunt the dreams of all at OpenAI.

    // OpenAI’s reported work on the still-mysterious Q* algorithm is also believed to be about reasoning. All this suggests we’re hitting the limits of the performance improvements to be gained simply by training LLMs on even larger data sets. Instead, the future belongs to those who can weave multiple models together.

    // Finally, a word for Alphabet’s CEO Sundar Pichai: kudos. Alphabet AI engineers invented the transformer model; then the company went missing. Gemini puts Alphabet firmly back in the race. And given the recent fiasco at OpenAI, Pichai this week looks like a man playing a canny long game. It’s going to be a fascinating 2024.

    🤖 Anthrobots are go

    Two stories this week signal powerful new avenues of discovery for the life sciences.

    Scientists at Harvard and Tufts University have created tiny biological robots, called anthrobots, made out of human cells. In tests, the anthrobots were left in a small dish along with some damaged neural tissue. Scientists watched as the bots clumped together to form a superbot, which then repaired the damaged neurons.

    Each anthrobot is made by taking a single cell from the human trachea. Those cells are covered in tiny hairs called cilia.
    The cell is then grown in a lab, and becomes a multi-cell entity called an organoid. In this case, the scientists created growth conditions that encouraged the cilia on these organoids to grow outwards; they then become something akin to little oars that allow the entity to move autonomously. And lo, an anthrobot has been created. The researchers say that in future anthrobots made from a patient’s own cells could be used to perform repairs or deliver medicines to target locations.

    Meanwhile, researchers at New York University created biological nanobots capable of self-replication. The bots are made from four strands of DNA, and when held in a solution made of this DNA raw material they’re able to assemble new copies of themselves.

    ⚡ NWSH Take: Organoids have long been a NWSH obsession. This work on anthrobots builds on the research — by the same team — that created xenobots, which I wrote about back in December 2021. And who can forget the brain organoids that taught themselves to play Pong, which I covered in October of last year?

    // The original xenobot researchers at Harvard and Tufts were startled when their bots first began to work together in groups, self-heal, and self-replicate. But xenobots are made out of frog cells, and so have limited applications when it comes to humans. Anthrobots, on the other hand, are human in origin. Given their ability to heal other tissues, they show immense promise when it comes to new medical and wellness treatments.

    // As so often at the moment, machine intelligence underpins these advances. To create the original xenobots, AI supercomputers were used to ‘simulate a billion years’ worth of evolution in just a few days’. No wonder Nvidia CEO Jensen Huang says ‘digital biology’ will be a central part of the AI story over the coming years. I’ll keep watching.

    💥 Come together right now

    The world’s largest nuclear fusion reactor came online in Japan this week.
    JT-60SA, in the Ibaraki Prefecture, is an experimental reactor capable of heating plasma to 200 million degrees Celsius. Scientists say it offers the best chance yet to test nuclear fusion as a source of near-infinite clean energy. In fusion, two or more atomic nuclei are smashed together such that they become one; this results in an energy release.

    Meanwhile, UK-based Rolls-Royce showcased a prototype lunar nuclear micro-reactor, which they say could power a permanent human settlement on the Moon.

    ⚡ NWSH Take: Fusion is the energy dream that has remained, so far, just out of reach. It doesn’t output CO2. It doesn’t create a lot of dangerous nuclear waste, as fission does. And proponents say it could mean near-infinite renewable energy, on tap.

    // And now, we’re getting closer. Last year saw the first controlled fusion reaction that generated more energy than was needed to make the reaction happen: this is the longstanding net energy gain goal. And now a startup ecosystem is flourishing; US-based Helion, for example, are working to build the world’s first commercial fusion reactor. And they’ve laid down a clear timeline: the startup recently signed a deal with Microsoft to supply the tech giant with energy starting in 2028.

    // It remains to be seen whether Helion, or anyone else, can achieve fusion in this decade. But if someone does, it will be a transformative moment; and we’re closer than ever.

    🗓️ Also this week

    🧮 IBM announced Quantum System Two, its most powerful quantum computer. The system integrates three 133-qubit Heron processors. IBM also announced Condor, a new 1,121-qubit processor. IBM are leading the way, right now, towards useful and utility-scale quantum supercomputers. If that promise is realised it will unlock insane new capabilities across climate simulation, the creation of new medicines, supply chain management and more. Read an interview with IBM’s director of quantum, Jerry Chow, here.
    🖼 Stability AI’s new image generator can create 150 images per second. StreamDiffusion is built on top of Stability AI’s sd-turbo image generation model. And X users are using it to create tens of thousands of cat pictures.

    🦾 The humanoid robot currently in trials inside Amazon warehouses will eventually cost just $3 an hour to run. The CEO of Agility Robotics, Damion Shelton, says the Digit robot currently costs around $12 an hour to operate, but this will fall rapidly once mass production starts. The median wage for workers in Amazon’s US fulfilment centres is $18 an hour. Agility will open the world’s first humanoid robot factory in Oregon in 2024.

    ✋ US officials have warned chip maker Nvidia to stop redesigning its AI chips in an attempt to get around restrictions on exports to China. The US recently imposed restrictions on the sale of advanced AI chips to China; meanwhile the 2022 US CHIPS Act will pour over $50 billion into US domestic chip design and manufacturing capability.

    💡 A research team at Google got ChatGPT to spit out its training data. The team asked ChatGPT to repeat the word ‘poem’ forever; this caused the app to produce huge passages of literature, which started to contain snippets of the text that the underlying AI model was trained on. OpenAI don’t want to reveal the data sets used to train GPT-4 and other models; Ilya Sutskever, their chief scientist, says training data amounts to part of the company’s ‘technology’.

    🇨🇳 Meta says China is ‘stepping up’ its attempts to manipulate public opinion in the Global North. The company says it’s taken down five networks of fake Chinese accounts this year: the most originating from a single country. The accounts were posting content that, among other things, attacked critics of the CCP.

    🔥 Average global temperatures hit 1.4C above pre-industrial levels this year.
The World Meteorological Organization’s State of the Global Climate report says 2023 will be the hottest year on record; it will surpass the hottest to date, 2016, by a considerable margin. Two weeks ago I wrote o

    16 min
  4. 11/24/2023

    New Week #126

    Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin. If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮

    To Begin

    This week, more AI magic rains from the sky. Also, average temperatures on planet Earth exceed the 2C warming threshold for the first time. And my take on the OpenAI fiasco. In the end, it’s about power.

    Let’s get into it.

    ✨ Like magic

    This week, further glimpses of the ongoing collision between human creativity and machine intelligence.

    Stability AI released Stable Video Diffusion, a new text-to-video model that looks to be a step beyond anything we’ve seen so far. In keeping with the company’s open source mission, the code for the model is available at its GitHub repository.

    Meanwhile, X users went wild for a new tool, Screenshot to Code, that leverages GPT-4 and DALL-E 2 to take a screenshot of any web page and automatically write the code that will render it:

    And Elon Musk announced that X’s new on-platform large language model, Grok, will launch to all Premium users next week: Grok is trained on a vast dataset of X posts; it’s sure to be expert in writing posts with a great chance of going viral. What’s more, it will have access to X posts in real-time; that could make for a whole new way to discover and interact with news stories.

    ⚡ NWSH Take: This gallery of the week’s AI wonders could go on far longer. I didn’t mention the new voice-to-voice model from UK-based ElevenLabs, for example: just upload your own voice and hear it converted to that of a famous celebrity, or a custom character that you create.

    // What’s the broader point here? A couple of weeks ago I shared an excerpt from a long AI essay called Electricity and Magic. That essay argues for a two-sided model of machine intelligence and its manifestations in the coming decades.
    First, machine intelligence is becoming something foundational — akin to a form of fuel that will power an army of autonomous vehicles, robots, and more. But in our daily life AI will manifest differently; not as fuel, but as magic. The innovations above give a glimpse of what I’m talking about. AI is moving into domains — from music, to film-making, to writing — once believed to be impervious to encroachment by automation. It’s as though someone has waved a magic wand over our machines.

    // The crucial point to understand, though, when it comes to AI magic? The result won’t be, as many people imagine, the devaluation of human creativity. Instead, amid a tsunami of machine-generated outputs, what is uniquely human — including creative work grounded in embodied experience — will only become more prized.

    🌊 Crossing over

    Another significant, and unwelcome, climate milestone was passed in the last seven days.

    According to the EU’s Copernicus Climate Change Service (C3S), Friday 17 November was the first day on which average global temperatures were more than 2C above pre-industrial levels. Data for 17 November indicated that global surface air temperatures were 2.07C above those in 1850. Provisional data for the following day indicated a 2.06C elevation.

    This doesn’t mean that the much-discussed 2C threshold has been crossed. For that, we’d need to see a sustained elevation above 2C. C3S is part of the EU’s Copernicus Earth Observation Programme, which draws on vast amounts of satellite and other data to track the changing planetary environment.

    ⚡ NWSH Take: It’s expected that we’ll see occasional 2C+ days well before we exceed the 2C limit as commonly defined. Still, this week saw the first and second days ever on which global average temperatures tipped over the threshold. It’s pretty clear where we’re heading.

    // This news comes on the eve of the UN COP28 summit in Dubai, which starts on 30 November.
Many view last year’s summit, held in Egypt, as the moment at which the internationally agreed 1.5C target slipped out of reach; the summit notably failed to agree on a phase-out of all fossil fuels, despite support for that proposal from over 80 countries. But the summit did achieve something: the establishment of a Loss and Damage Fund intended to transfer tens of billions to developing nations most at risk from climate change to help them mitigate the impacts of floods, droughts, and more. // At COP28, expect another push for a commitment to phase out all fossil fuels. And expect petrostates — including the host — to resist that call. As consensus grows that the 2C target will be breached, more attention will turn to plans for adaptation — and who should pay for them. Form an orderly Q* I can’t let this instalment pass without talking about the OpenAI fiasco. Tech watchers everywhere munched their popcorn this week while OpenAI proceeded to fire CEO Sam Altman and hire a new CEO, only to get rid of that new hire and rehire Altman five days later. It’s still unclear what led the OpenAI board to eject Altman in such dramatic style. But the mainline theory is that this was about internal division between those who want to prioritise the original, nonprofit mission to research safe machine intelligence, and those — Altman apparently among them — who want to move fast and make lots of money. Yesterday, news agency Reuters made waves with claims that the debacle may have been related to an advance called Q*. The details of that advance — or indeed whether there has been any advance at all — are unconfirmed. Cue a whole new wave of speculation: As per the above, most believe Q* is related to a generalised form of Q-learning — a kind of reinforcement learning — that would enable LLMs to solve multi-step logic problems. Or, in simpler terms, to take multiple and reasoned steps towards a long-range goal in the way we humans do all the time. 
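For readers wondering what Q-learning actually is, here is a minimal, self-contained sketch of the classic tabular algorithm. To be clear: this is a textbook illustration of the technique the Q* speculation gestures at, not a reconstruction of anything OpenAI has built; the toy environment, hyperparameters and episode counts are all invented for the example.

```python
import numpy as np

# Minimal tabular Q-learning on a toy 5-state chain: move right to reach a goal.
n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))     # Q-table: estimated return per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def step(s, a):
    """Toy environment: deterministic moves, reward 1.0 only on reaching the goal."""
    s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
    return s2, (1.0 if s2 == goal else 0.0), s2 == goal

for _ in range(500):                    # episodes
    s = 0
    for _ in range(200):                # cap episode length
        # epsilon-greedy: explore with probability epsilon, else act greedily
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # The Q-learning update: bootstrap off the best action in the next state
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)               # greedy action per state
print("greedy policy:", policy)         # states 0 to 3 should learn action 1 (right)
```

The update rule on the `Q[s, a]` line is the whole algorithm: nudge the current estimate towards the observed reward plus the discounted value of the best next action. The speculation around Q* concerns generalising this kind of multi-step credit assignment to language models.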
Reuters imply that this advance prompted some in the organisation to fear that OpenAI was getting (dangerously) close to Artificial General Intelligence. And that this is what sparked all the drama. ⚡ NWSH Take: It’s believed that OpenAI will start to train GPT-5 next year. If that is true, and if Q* really is a big step towards generalised agents, then the AI story will only accelerate across the next 12 months. We’re all, by now, accustomed to tech hype cycles (the metaverse!) but it’s becoming ever-harder to deny that something significant is happening. // But the events of this week also make clear another truth. Some technologists, including Altman, want us to believe that this technology is so powerful that we may lose control of it entirely, with existentially bad results for humanity. My hunch is that this is something of a psyop, designed to distract us from the real danger: AI that is controlled, but by a tiny, unaccountable, and chaotic group of Silicon Valley technologists. // At the heart of this is an eternal aspect of human affairs that techno-accelerationists rarely want to discuss: power relations. Who gets to control this transformative new force, trained on a literary and cultural legacy that belongs to us all? Sam Altman? The OpenAI board? It seems the move-fast-and-make-money contingent at OpenAI won this battle; but should that be the end of it? Altman has waged a long marketing campaign around the idea that the AI he’s developing is powerful enough to pose existential risks. This feels like a good time to call his bluff on that. Will he tell us what happened inside OpenAI across the last seven days? If not, perhaps we should send in public representatives to discover the truth. 🗓️ Also this week 👨‍💻 A former Googler made headlines with a resignation note that claimed morale inside the company is at ‘an all-time low’. 
Ian Hickson worked at Google for 18 years; he says the organisation’s culture has ‘eroded’ and accuses CEO Sundar Pichai of a lack of vision. Google AI engineers developed the transformer model that underpins the generative AI revolution, but the company has seen its AI efforts outshone by OpenAI and its partner Microsoft. ☀️ Portugal ran entirely on renewable energy for almost a week. Wind, solar, and hydro power met the energy needs of the country of 10 million for six days from October 31 to November 6. 🚗 A Florida judge found there is ‘reasonable evidence’ that Tesla executives knew their self-driving technology was not safe. Palm Beach county circuit court judge Reid Scott said Elon Musk and others ‘engaged in a marketing strategy that painted the products as autonomous’ when they are not. The ruling makes possible a lawsuit over a 2019 fatal crash in Miami involving a Tesla Model 3. 📖 Cambridge University is launching a new Institute for Technology and Humanity. The new institute will bring together computer scientists, robotics experts, philosophers and historians in a multi-disciplinary effort to analyse the ongoing technology revolution. 🐭 Canadian researchers doubled the lifespan of mice using antibodies that boost the immune system. The team at Brock University say these antibodies encourage the clearing out of damaged proteins that accumulate over time, and that they could form the basis of an effective anti-ageing treatment for humans. 🌳 The Biden administration is developing a plan to capture and store CO2 under the nation’s forests. The US Forest Service is reportedly proposing to change a rule to allow storage of carbon under forests and grasslands; the plans would see CO2 moved to its storage location via a vast network of new pipelines. 🌌 Scientists say they’re mystified by an extremely high-energy particle that fell to Earth. 
The so-called Amaterasu particle, spotted by a cosmic ray observatory in Utah’s West Desert, was found to have an energy exceeding 240 exa-electron volts (EeV); that’s the second highest ever detected after the legendary 1991 Oh-My-God particle, which

  5. 11/18/2023

    New Week #125

Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin. If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮 To Begin This week, Microsoft and Nvidia go head to head with new chips intended to train the next generation of AI models. And a clever hoax underlines a powerful truth when it comes to the war for compute power. Meanwhile, a viral tweet about viral TikToks engenders another viral tweet. The lesson here? We’re living in a deeply enweirdened informational environment. And in a world first, the UK approves a CRISPR-fuelled medicine. Let’s go! 👾 Compute wars This week, a glimpse of an emerging power struggle set to help shape the decades ahead. This isn’t a battle for land or natural resources. I’m talking about the struggle for compute power. Microsoft announced their long-awaited first custom AI chips, the Azure Maia AI chip and Cobalt CPU. Set to arrive in 2024, the chips will power Microsoft’s Azure data centres, and are intended to train the next generation of large language models (LLMs). And Nvidia launched its new H200 AI chip, the successor to the H100. The iconic H100 is the fuel that’s driven this AI moment; huge clusters, consisting of tens of thousands of H100s, were used to train pretty much every large AI model you can name, including GPT-4. Meanwhile, something quite different. A mysterious company called Del Complex announced the BlueSea Frontier Compute Cluster: a massive offshore data centre intended to circumvent the new US Executive Order that says organisations training the most powerful new AI models must share information with government. Del Complex calls BlueSea Frontier ‘a new sovereign nation state’. The announcement post achieved 2.5 million views, and was accompanied by a fancy website featuring images of BlueSea scientists at work. Tech blogs reported on the launch. 
But wait: it’s all a hoax! BlueSea Frontier is a comment on these strange times by an artist and developer called (or so he claims) Sterling Crispin. But I think Crispin may be onto something. ⚡ NWSH Take: The Del Complex hoax was a great bit of online trickery. But it was so convincing because it taps into a deep underlying truth. Compute is becoming a crucial nexus for techno-economic, sovereign, and geopolitical power. // The tech battle taking shape here is just one dimension of a broader story. Microsoft need to supply huge compute resources to their partner OpenAI to allow it to fully commercialise ChatGPT and train the upcoming GPT-5. So far, their data centres have been dependent on Nvidia AI chips. The new Maia AI and Cobalt CPU chips are intended to change that. // The broader story? It’s now clear that those nation states with the best machine intelligence will own the geopolitical future. The USA and China are now locked in a race to build the vast compute needed to develop ultra-powerful next-generation models. Last year’s US CHIPS Act devotes $280 billion to semiconductor and AI research; inflation-adjusted, that’s more than the cost of the entire Apollo moon programme. And last week I wrote about new US restrictions on chip exports, intended to hamper China’s AI efforts. // It wouldn’t surprise me, then, if we do see the establishment of new offshore compute clusters, or even the development of new pseudo-sovereign entities based around compute power and AI. As with all the best satire, Del Complex’s vision is so wild it might just come true. 🔍 Can’t handle the truth Also this week, another reminder of the hall of mirrors that is our new and connected media environment. US journalist and X (formerly Twitter) personality Yashar Ali went viral with a tweet about TikTok. 
Ali claimed that across the previous 24 hours, many thousands of TikToks had been posted in which mostly young North Americans claimed to have read and agreed with Osama bin Laden’s notorious 2002 ‘Letter to America’ manifesto. In the comments, theories abounded. Some said it was a signal of Gen Z’s misguided politics. Others saw conspiracy, and said it was another indication that China is using TikTok as a channel for sophisticated psyops intended to destabilise the Global North. We should, said those people, ban TikTok. Then another X user went viral with a different idea. These Bin Laden TikToks were being made and seen in huge numbers, he said, only because of Yashar Ali’s original tweet. Other people said that was stupid, and itself tantamount to a conspiracy theory. Meanwhile, this week the European Commission decided it would stop advertising on X due to ‘widespread concerns relating to the spread of disinformation’. This follows EU research published in September which concluded that X is now the biggest online source of disinformation. ⚡ NWSH Take: Is TikTok an app for fun dance memes or a highly sophisticated channel for Chinese cultural warfare? Is the X algorithm now giving higher priority to toxic content, or is that just anti-Elon paranoia? Did thousands of young North Americans organically discover and agree with the Bin Laden letter, or is a dark controlling force at work? // The answer in every case: no one knows for sure. And that in itself is an indication of where we’re at. // The information environment that mediates our democracies has become insanely fragmented and opaque. The world’s richest man has total control over a key global information channel. The CCP has its hands around another. In both cases, I find it impossible to believe that the parties in question aren’t up to some tricks. // A totally connected world, in which every individual is empowered with a voice of their own, was supposed to create information nirvana. 
Those who bought that idea couldn’t have been more wrong. We need old media principles — editorial standards and, yes, gatekeepers — more than ever. But millions in the global north are currently convinced that the New York Times and the BBC are the real problem. In this increasingly chaotic and paranoid information environment, those institutions and others like them need to adapt rapidly. Most of all, they must rejuvenate belief in what they offer. 🧬 Major edits Huge CRISPR news this week. The UK’s medicines regulator became the first in the world to approve a medical treatment that uses CRISPR gene editing technology. The medicine, Casgevy, is a treatment for sickle cell disease, a serious inherited disorder that causes red blood cells to malfunction and that affects millions worldwide. During treatment, red blood-producing stem cells must be taken from the patient. CRISPR is used to edit those stem cells to remove the error that causes sickle cell, before the edited cells are infused back into the patient. Meanwhile, researchers at the Chinese Academy of Sciences created a monkey using two embryos, with donor material from one embryo injected into another. This has been done before with simpler animals such as mice and rats, but is a first in primates. The donor stem cells were gene edited to express a green fluorescent protein, causing the resultant live monkey to glow: ⚡ NWSH Take: Gene editing technology is already enacting a transformation in the life sciences, healthcare, and agriculture. This CRISPR sickle cell treatment is wonderful news, and there are promising early indications from trials of CRISPR therapies to cure a form of hereditary blindness, and to train immune cells to fight certain cancers. Meanwhile, in September 2021 Japanese startup Sanatech Seed became the first company to sell CRISPR-edited food: their tomatoes were edited to contain more GABA. // So we’re developing our ability to manipulate genes. The next revolution coming? 
That ability will collide with a new ability to speak the language of DNA via transformer models — the kind of models that underlie LLMs — trained on huge amounts of genomic data. The resultant AIs will be able to discern deep underlying patterns that help us zero in on useful or rogue genes; see DeepMind’s new AlphaMissense, which detects and classifies genetic mutations. 🗓️ Also this week 🤯 Shock news breaking late last night UK time: Sam Altman has been fired from OpenAI! In a statement the OpenAI board said that Altman ‘was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.’ This news is a jolt out of nowhere. Altman led the company that sparked this transformative AI moment, and as such has been the most celebrated technologist on the planet for the last couple of years. The OpenAI board are accusing him of lying here, and given the summary firing we can’t be talking about a white lie. Two glimpses of the rumour mill: (i) this is about dark power moves by Elon, or (ii) OpenAI has achieved AGI but Altman didn’t tell the board. But that’s all speculation. More news is sure to emerge. 🧠 The Argonne National Laboratory in the US has begun training a 1 trillion parameter scientific AI. AuroraGPT is being trained on a vast number of research papers and other scientific information, and once complete will offer answers to scientific questions. This time last year Meta released Galactica, its AI model trained on 48 million research papers. The model was withdrawn three days later, after users said it produced false outputs. This week, the Meta engineer behind Galactica looked back at the episode. 💸 Google is planning a massive investment in generative AI startup Character.ai. Founded by two former Google AI engineers, the platform leverages an LLM to allow users to create and chat with AI characters, including virtual versions of their favourite celebrities. 
As regular readers will know, the rise of AI-fuelled virtual companions is a longstanding NWSH obsession. 🗺 Speaking of Virtual Companions, Ai

  6. 11/10/2023

    New Week #124

Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin. If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮 To Begin It’s a bumper instalment this week. What do we have in store? The Chinese government is calling on its technology industry to roll out millions of advanced humanoid robots. Also, NASA wants to learn how to extract breathable oxygen from Moon dust. And OpenAI says everyone can now create their own bespoke version of ChatGPT. Let’s go! 🤖 Work machines This week, a glimpse of the coming collision between human population dynamics and autonomous machines. A new study by researchers at University College London found fears of climate breakdown are changing decision-making around whether or not to have children. Published in the journal PLOS Climate, the research found that climate concern was associated with a desire for fewer children, or none at all. The researchers say theirs is the first systematic study of the way attitudes to climate change are affecting reproductive choices. Meanwhile, the Chinese Ministry of Industry and Information Technology (MIIT) issued a nine-page communique calling for domestic mass production of advanced humanoid robots by 2025. By 2027, the document says, these robot workers should be ‘an important new engine of economic growth’. But what is the connection between new trends in reproductive decision-making and China’s dash towards humanoid robots? Here’s a graph of the birth rate in China from 2000 to 2022: ⚡ NWSH Take: The CCP knows that China is losing its battle with demographics. If the country is to become the 21st-century hegemon that President Xi dreams about, then it needs an army of workers. But instead China is watching its birth rate plummet. Meanwhile, the Global North is facing the same challenge; in North America and western Europe population growth flatlined long ago. 
And now it seems that fears over climate change are only set to exacerbate that trend. // This is a huge structural challenge; fewer workers tend to mean a less productive and smaller economy. So what to do? The CCP have already tried ditching the one-child policy and incentivising couples to have more children; it didn’t work. This week’s clarion call from the MIIT offers us a glimpse of an alternative answer: robots. If China won’t have enough human workers to sustain economic growth, then the CCP hopes humanoid robot workers can do the job(s) for them. // Innovators in the Global North are heading in the same direction. This week, Tesla posted over 50 job ads for its Optimus robot team. Elon Musk — who has long bemoaned population decline and its coming impacts — has said he believes Optimus will end up being a bigger part of Tesla’s business than EVs. And two weeks ago I wrote on how Amazon are trialling the Digit humanoid robot in some US fulfilment centres. // My co-founder at The Exponentialist, Raoul Pal, says that in the new world we’re building, robots are demographics. In other words, the rise of autonomous machines is set to decouple economic growth from population growth. The CCP, Musk, and many others besides are making the same bet. And my guess? They’re going to be proven right. 🌌 Space out NASA continues to prepare for its mission to the Moon. This week, further news. The Agency wants to explore methods to extract breathable oxygen from Moon dust. Its Space Technology Mission Directorate is seeking input from industry partners and external researchers, and hopes to create a demonstration technology soon. NASA hopes to put humans back on the Moon for the first time since 1972 with its Artemis 3 mission, currently planned for 2025. Meanwhile, stunning pictures came back this week from the European Space Agency’s Euclid telescope. 
Launched in July, Euclid is now around 1.5 million kilometres from Earth; that’s about four times as far away as the Moon. And it’s capturing images of incredible clarity. This is the Perseus cluster, a group of over 1,000 galaxies located 240 million light years from Earth. Each galaxy pictured — and there are a further 100,000 galaxies in the background of the shot — contains hundreds of billions of stars: Here’s the Horsehead Nebula, a cloud of dust and gas in the Orion constellation: ⚡ NWSH Take: Okay, this entire segment was mainly an excuse to show you the breathtaking images coming back from Euclid. But there is an underlying truth here. We’re amid a new space age, due mainly to the insane drop in the cost of access to space. Back in 2010 launch costs hovered at around $20,000/kg; today they’re around $1,000/kg. That’s thanks mainly to the reusable rocket technology developed by SpaceX. We’re heading back into space via multiple partnerships between the international space agencies and private companies. And this time the plan is to stay there. // One signal of the emerging public-private space ecosystem? This week, SpaceX agreed to deliver the US military’s new space plane, the X-37B, into orbit on its Falcon Heavy rocket in December. And private space companies, including SpaceX, will play a huge role in the upcoming Artemis crewed mission to the Moon. Most analysts reckon that mission will end up being delayed until 2026/7. Even so, the next few years are set to be a thrilling road towards the lunar surface. Expect Moon hype to reach fever pitch. And from there, of course, all roads will lead to Mars. 🧠 Your intelligence There’s little doubt about the biggest story in the mainstream tech press this week. OpenAI made headlines all over again with the launch of custom GPTs: bespoke versions of ChatGPT that any user can create using simple natural language instructions and their own training content or data. 
The feature was announced at OpenAI Dev Day, which saw CEO Sam Altman create a custom Startup Mentor GPT live on stage in about five minutes. X (formerly Twitter) went wild. And yes, a million and one GPTs are assuredly coming. How is this going to play out? ⚡ NWSH Take: Remember back in 2012, when every third friend of yours was making an app? OpenAI are hoping to recreate that magic all over again. They want to be the platform that profits from a huge wave of AI innovation. ChatGPT Plus users will be able to create custom GPTs and charge others for use, and Altman says they’ll be rewarded via revenue share. // Remember, any ChatGPT Plus user can now create a bespoke GPT in a few minutes. There will be a vast long tail of these things. The winners, though, will be those with (i) deep reserves of proprietary content or data that they can use to enhance the outputs of their bot, and (ii) audiences who are receptive to their creations. // But creating a bespoke GPT is now so easy that we’ll also see something we didn’t with apps. That is, individuals creating bespoke bots just for their own use — to help them manage their accounts, or choose birthday presents for family and friends, and much else besides. Yes, this is an App Store moment for AI. But it also marks another beginning: of personalised machine intelligence on tap. 🗓️ Also this week 💥 The Exponentialist, my new premium and enterprise-level research service, launched to the world! It’s a partnership between me and the macroeconomist and Real Vision CEO Raoul Pal. To mark launch day, we’ve made an excerpt of the first essay free for all to read — watch out for it in your inbox on Sunday. 📌 New tech company Humane launched the AI Pin. This long-awaited first product from Humane is a voice- and gesture-controlled device that clips to your shirt and integrates with ChatGPT and other services. Humane hope their ‘disappearing computer’ will be the next iPhone. 
It remains to be seen whether people really want to talk to a badge on their lapel. One fascinating signal, though? See how OpenAI — and their partner, Microsoft — are set to become the underlying infrastructure that fuels a whole raft of AI innovations. Where are Alphabet? And when will Apple launch their own generative AI play? It’s going to be fascinating watching this battle unfold. 🇨🇳 Nvidia has developed special new AI chips for China according to Chinese media. Recent US regulations prevented Nvidia from selling its powerful A100 AI chip to Chinese companies. The new chips — which include the H20, reportedly only half as powerful as the A100 — would not fall under the restrictions. Nvidia have so far refused to comment. 🧬 Scientists have created a new strain of yeast with a genome that is over 50% synthetic DNA. A group of labs called the Sc2.0 consortium has been attempting to create a strain of yeast with a fully synthetic genome for 16 years now; this latest advance marks a major step forward. So far, scientists have only managed to synthesise the much simpler genomes of some viruses and bacteria. 👨‍⚕️ Neuralink is seeking a volunteer for its first brain implant surgery. The company wants to find a quadriplegic adult under the age of 40, who will allow a surgeon to implant electrodes and small wires into the part of the brain that controls the forearms and hands. 🙈 A new UN survey says 85% of citizens across 16 countries are worried about online disinformation. The 16 countries surveyed will each host elections in 2024. The survey found that 87% of respondents fear disinformation will influence the outcome of those elections. Back in New Week #122 I wrote on new research showing far fewer US adults are following mainstream sources of news. 🐝 A team of Chinese researchers created a swarm of drones able to ‘talk to one another’ and assign tasks to achieve a shared goal. 
The drone swarm is fuelled by a large language model, which enables the drones to act as AI agents that can reason in language, share tha

  7. 11/04/2023

    New Week #123

    Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin. If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮 To Begin This week, two intriguing stories and a big announcement. Global leaders and senior tech executives gathered at the UK’s AI Safety Summit. But beyond the walls of Bletchley Park, the debate on AI is raging hotter than ever. Meanwhile, tech billionaires in Silicon Valley are running into trouble over their plans to build a new city-state utopia called California Forever. As for the announcement? Just keep scrolling. Let’s do this. 🧠 Dream machines The UK government this week trumpeted the success of its international AI gathering; it took place at the historic fountainhead of the computer revolution, Bletchley Park. An impressive guest list, including US vice-president Kamala Harris and the European Commission president Ursula von der Leyen, gathered at the Summit. And their meeting resulted in the Bletchley Declaration, which the UK government has hailed as a world-first international statement on AI safety. Here’s a taste for those who speak technocrat: ‘We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems…We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge…’ But beyond the Declaration, this week made it clear that we’re further than ever from a consensus on the deep implications of machine intelligence. In fact, this was the week that a maximum volume war of words broke out between leading AI builders. 
Google Brain co-founder and Stanford professor Andrew Ng said key AI players, including Sam Altman, are wildly playing up fears of AI doom in order to spark regulation that will suppress competition from insurgents. He called the proposal that the training of powerful AI models should require a licence ‘colossally dumb’. That message was echoed by Meta’s chief AI scientist Yann LeCun, who favours open-source AI models — that is, models anyone can use. But Google DeepMind CEO Demis Hassabis hit back at LeCun, saying that failure to regulate AI could result in ‘grim’ consequences for humanity. This account barely scratches the surface of the arguments that raged this week. As for OpenAI, they launched a new team intended to study and prepare for ‘catastrophic risks’ including an AI-instigated nuclear war. ⚡ NWSH Take: Who would ever have thought that a bunch of super-smart, tech-obsessed social media addicts would end up arguing like this? While Bletchley saw a rare moment of diplomatic unity, inside the AI industry the full spectrum of opinion is manifest, from ‘AI doom is all a load of rubbish’ to ‘act now or the end of humanity is probable’. // It pays, here, to remember that two things can be true at once. Yes, Altman’s global tour to warn of ‘catastrophic risks’ is a carefully orchestrated marketing campaign. But it’s also the case that no one, yet, has a definitive picture of the risks in play. // What is increasingly clear, though, is that the rise of machine intelligence is the primary fact of our shared lives now. It will do more than any other force to reshape our collective future. // But the Bletchley Declaration consists of bromides that will change nothing. And the sight at Bletchley this week of UK prime minister Rishi Sunak interviewing Elon Musk — positioning Musk as the star and Sunak as a fan — spoke volumes about the power imbalances we’ve allowed to evolve when it comes to government (i.e. the people) and unaccountable tech overlords. 
// We must recover our collective agency; our ability to assert human modes of living and being in the face of an ongoing technology revolution. That means doing politics. Bletchley was a start. But what’s needed next are citizen assemblies, and an authentic movement around AI for the people. 💥 The Exponentialist As some of you will have seen on social media, I made a big announcement this week. I’ve partnered with Raoul Pal, renowned macro-economic thinker and CEO of Real Vision, on a new premium research service called The Exponentialist. This is a professional and enterprise-level service for those who want to go deep on emerging technologies, the futures they’ll create, and the challenges and opportunities latent in all that. This won’t be for everyone in the NWSH community. But if you’re a foresight professional, strategist, founder, marketing leader, product manager, designer or much else besides, The Exponentialist will fuel you and your team. And it will take up only a fraction of your research budget. It will also be deeply valuable for anyone seeking to position an investment portfolio around tech and crypto. This launch changes nothing about New World Same Humans and the community we’re building here. Our mission continues unchanged! If The Exponentialist sounds useful, go here to learn more. And if you’ve subscribed or you’re considering it, hit reply to this email so I can say thanks. 🏙 Now and Forever While the newsletter was on pause, we learned that a group of Silicon Valley billionaires are planning a new city-state utopia in California. This week, it seems their project has run into trouble. California Forever is a new city planned for construction in Solano County in the north of the state. It’s backed by some of tech’s most notable power players, including ultra-rich VC Marc Andreessen, Stripe founders Patrick and John Collison, and LinkedIn founder Reid Hoffman. 
The group’s vision for the city has strong solarpunk, hi-tech sustainable utopia vibes: But this week it was reported that the mysterious company behind the plans, Flannery Associates, is accused of using ‘strong-arm tactics’ including lease terminations to buy up the Bay Area farmland it needs. Local farmers aren’t happy, and now some of them are taking the matter to court. Trouble in (planned) paradise, then. ⚡ NWSH Take: This project reminds me of the various other pseudo-independent city-states discussed in this newsletter over the years. There’s Walmart billionaire Marc Lore’s Telosa City, for example, a sustainable paradise planned for the Nevada desert. And Praxis, a startup on a mission to build a new Great City somewhere in the Mediterranean, funded by NFTs of the monuments they’ll build in the city once it exists. // Few details have emerged of the way California Forever will be governed. But for a glimpse, we might turn to billionaire backer Marc Andreessen’s recent Techno-Optimist Manifesto, which proclaims: ‘we believe in ambition, aggression, persistence, relentlessness — strength.’ I’m thinking libertarian, with a strong emphasis on innovation and startup culture. // Of course, innovation and startups can be great. But they only function in the context of the broader socio-political frameworks that libertarians such as Andreessen repudiate. As with the other charter city projects covered in this newsletter, I can’t help feeling that at the heart of California Forever is a fantasy of permanent escape from politics. Escape, that is, from the messy, awkward business of managing conflict among different interest groups, and enacting trade-offs between different but equally legitimate value systems. This argument with the farmers might be the first public conflict that California Forever has run into, but it won’t be the last. 🗓️ Also this week 🎬 Hollywood actress Scarlett Johansson is suing an AI app for cloning her voice and using it in an advert. 
Johansson says Lisa AI: 90s Yearbook and Avatar used an AI version of her voice without permission. Last week I wrote on the coming wave of legal disputes over AI outputs founded in copyrighted intellectual property, including Universal Music Group's lawsuit against Anthropic. UMG say Anthropic used their lyrics to help train its AI chatbot Claude.

🌨 Tesla drivers say their Full Self-Driving software is failing because their cars' cameras are fogging up in cold weather. Back in 2021 Tesla ditched radar, leaving its self-drive system reliant on cameras alone rather than the Lidar and radar sensors that usually form part of self-driving systems.

👾 The Pentagon launched a new UFO reporting tool. The secure online form is open only to current or former federal employees, or those with 'direct knowledge of US government programs or activities related to UAP dating back to 1945'.

🇨🇳 Researchers from the Chinese microchip company MakeSens say they've created a chip that can perform certain AI tasks 3,000 times faster than the Nvidia A100. Writing in the journal Nature, the researchers say the All-Analogue Chip Combining Electronics and Light could soon be used in wearable devices, electric cars or smart factories. The US has restricted sales to China of Nvidia's leading A100 AI chip, leaving the country scrambling to bolster domestic production capabilities.

🪐 NASA is locating buried ice on Mars using a sophisticated new map. The Subsurface Water Ice Mapping project uses images of the planet from several NASA missions, including the 2001 Mars Odyssey satellite. The agency says subsurface ice could serve as drinking water for the first humans to set foot on the Red Planet.

🌅 A new study says the Earth's climate is more sensitive to carbon emissions than most scientists believe. Published in the journal Oxford Open Climate Change, the study says a doubling of atmospheric CO2 will cause a 4.8°C rise in average global temperatures.
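That headline figure is an estimate of equilibrium climate sensitivity: the eventual warming produced by a doubling of atmospheric CO2. As a rough illustration only (this is not the study's own method; it assumes the standard approximation that warming scales with the logarithm of the concentration ratio, and the function name is mine):

```python
import math

# Equilibrium climate sensitivity: warming per doubling of CO2.
# 4.8 °C is the figure from the study above; the log2 scaling is the
# standard logarithmic-forcing approximation, not the study's model.
ECS = 4.8  # °C per doubling

def warming(c_ppm, c0_ppm=280.0, ecs=ECS):
    """Approximate equilibrium warming for a CO2 rise from c0_ppm to c_ppm."""
    return ecs * math.log2(c_ppm / c0_ppm)

print(warming(560))            # a full doubling from pre-industrial → 4.8
print(round(warming(420), 2))  # roughly today's level vs pre-industrial → 2.81
```

On this crude reading, even current concentrations would eventually commit us to well over 2°C of warming if the study's sensitivity estimate is right.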
