AI Frontiers

Center for AI Safety

AI Frontiers is a platform for expert dialogue and debate on the impacts of artificial intelligence. Sign up for our newsletter: https://ai-frontiers.org/subscribe

  1. 1D AGO

    “China and the US Are Running Different AI Races” by Poe Zhao

    Last month, as three Chinese AI startups went public within days of each other, Hong Kong briefly became a scoreboard for emerging companies in the industry. On January 2, AI chip designer Shanghai Biren Technology listed in Hong Kong and raised HK$5.58 billion ($717 million). About a week later, model developers Zhipu AI and MiniMax followed, raising HK$4.35 billion ($558 million) and HK$4.8 billion ($619 million), respectively. Those listings matter less as a market spectacle than as a strategy signal, and the strategy differs noticeably from that of companies across the Pacific. US startups build around abundance: raise huge capital, buy time, push the frontier. OpenAI's Stargate plan, for example, aims to invest $500 billion over four years in AI infrastructure. Meanwhile, Chinese startups adapt to different constraints: frontier training infrastructure is scarcer, so momentum comes from efficiency, targeted deployment, and market selection. As AI moves from demos to production, the binding question shifts: what does it cost to deliver useful work reliably, and who will pay? Under different economic pressures, Chinese and US companies are opting for different go-to-market strategies.

    The Capital Gap

    US AI startups attract more private investment than those in China. The [...]

    Outline:
    (01:37) The Capital Gap
    (04:04) Who Pays, and What Are They Buying?
    (06:39) Constraints Determine China's Choices
    (09:50) What to Watch
    (11:37) Discussion about this post
    (11:40) Ready for more?

    First published: February 12th, 2026
    Source: https://aifrontiersmedia.substack.com/p/china-and-the-us-are-running-different

    Narrated by TYPE III AUDIO.

    12 min
  2. FEB 2

    “High-Bandwidth Memory: The Critical Gaps in US Export Controls” by Erich Grunewald, Raghav Akula

    This month, mainstream media have been warning consumers that electronic devices may get pricier because of rising demand for dynamic random access memory (DRAM), a key component. DRAM costs are estimated to have risen by 50% during the final quarter of 2025, and the surge can largely be traced back to a specific cause: the AI industry's appetite for high-bandwidth memory (HBM). This demand has led memory-makers to shift production away from standard DRAM chips and toward HBM. While its contribution is often overshadowed by those of the processors it supports, HBM plays a vital role in training and running advanced AI systems, so much so that it now accounts for half the production cost of an AI chip. Companies’ determination to secure this lesser-known component proves its value. In December 2024, the US announced new export restrictions on the sale of HBM chips to China. In the month before the restrictions came into effect, Huawei and other Chinese companies reportedly stockpiled 7 million Samsung HBM chips, a haul likely worth over $1 billion. The December 2024 controls specifically targeted HBM in order to slow China's domestic AI chip-production efforts. Targeting HBM in this way is possible because it is manufactured [...]

    Outline:
    (02:00) Why is high-bandwidth memory so important?
    (06:03) Mapping the global HBM industry
    (09:33) The gaps in current HBM controls
    (15:34) Tightening the regime
    (19:30) Conclusion
    (22:28) Discussion about this post
    (22:31) Ready for more?

    First published: February 2nd, 2026
    Source: https://aifrontiersmedia.substack.com/p/high-bandwidth-memory-the-critical

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    23 min
  3. JAN 28

    “Making Extreme AI Risk Tradeable” by Daniel Reti, Gabriel Weil

    Last November, seven families filed lawsuits against frontier AI developers, accusing their chatbots of inducing psychosis and encouraging suicide. These cases — some of the earliest tests of companies’ legal liability for AI-related harms — raise questions about how to reduce risks while ensuring accountability and compensation, should those risks materialize. One emerging proposal takes inspiration from an existing method for governing dangerous systems without relying on goodwill: liability insurance. Going beyond simply compensating for accidents, liability insurance also encourages safer behavior, by conditioning coverage on inspections and compliance with defined standards, and by pricing premiums in proportion to risk (as with liability policies covering boilers, buildings, and cars). In principle, the same market-based logic could be applied to frontier AI. However, a major complication is the diverse range of hazards that AI presents. Conventional insurance systems may be sufficient to cover harms like copyright infringement, but future AI systems could also cause much more extreme harm. Imagine if an AI orchestrated a cyberattack that resulted in severe damage to the power grid, or breached security systems to steal sensitive information and install ransomware. The market currently cannot provide liability insurance for extreme AI catastrophes. This is for two [...]

    Outline:
    (02:51) Catastrophe Bonds: A Blueprint for Insuring AI
    (07:12) A Market-Driven Safety Mechanism
    (10:57) Trigger Conditions
    (12:54) How much could AI cat bonds cover?
    (15:32) Conclusion
    (17:00) Discussion about this post
    (17:03) Ready for more?

    First published: January 28th, 2026
    Source: https://aifrontiersmedia.substack.com/p/making-extreme-ai-risk-tradeable

    Narrated by TYPE III AUDIO.

    17 min
  4. 12/16/2025

    “Exporting Advanced Chips Is Good for Nvidia, Not the US” by Laura Hiscott

    Last week, the US government gave chip-maker Nvidia the green light to sell its H200 graphics processing units (GPUs) to approved buyers in China. These GPUs were previously subject to export controls preventing their sale to China. Following Nvidia's record-breaking $5 trillion valuation in October, this approval squares neatly with the views of many US policymakers who argue for an AI strategy focused on exporting US technology at scale. Among them is Sriram Krishnan, senior White House policy advisor on AI, who has stated the economic motivations bluntly: “Winning the AI race = market share.” Krishnan's approach contrasts starkly with that of the Biden administration, which aimed to defend American AI leadership and protect against national security threats through increasingly stringent restrictions on exports of US AI hardware. The White House's July 2025 AI Action Plan, which lays out its export-focused strategy in depth, argues that selling the “full AI technology stack — hardware, models, software, applications, and standards” is the key to preventing other countries from adopting rivals’ solutions instead. The US could thus entrench its position as the leading provider of AI, the rationale goes, and secure long-term influence over not only AI hardware but also the [...]

    Outline:
    (02:30) Exporting Chips Weakens Other Parts of the US AI Ecosystem
    (08:28) China Does Not Have Enough AI Chips to Seize Market Share from the US
    (10:31) The Impacts of Exporting AI Chips and Open-Weight Models on National Security Cannot Be Ignored
    (13:53) We Can Balance Economic and Security Goals by Renting America's AI Technology Instead of Selling It
    (17:40) Discussion about this post
    (17:43) Ready for more?

    First published: December 16th, 2025
    Source: https://aifrontiersmedia.substack.com/p/exporting-advanced-chips-is-good

    Narrated by TYPE III AUDIO.

    18 min
  5. 12/11/2025

    “AI Could Undermine Emerging Economies” by Deric Cheng

    Earlier this year, Anthropic CEO Dario Amodei warned that powerful AI could render upwards of 50% of white-collar jobs redundant, with the impact concentrated on entry-level jobs. If these predictions hold true, they could imply a long-term crisis of skill acquisition. Without the training ground of a first job, young workers could be denied the experiences and networks necessary to enter white-collar work. Their career trajectories could be severed before they begin. AI is likely to disrupt more than the professional trajectories of individuals. The threat to career development mirrors a broader geoeconomic threat AI poses to developing countries: just as young workers need entry-level roles to climb into more senior roles, developing nations need viable “entry-level” industries to develop their human capital and ascend the global economic development ladder.

    The Development Ladder

    Many economies have followed a similar path for development over the past several decades. The most reproducible strategy has traced a familiar sequence: moving from low-skill agrarian production, to building a globally competitive manufacturing base, and eventually to exporting higher-value services and technology. Such a progression is often described as a development ladder — a series of rungs that countries climb as they accumulate the capital [...]

    Outline:
    (01:02) The Development Ladder
    (04:55) The Evolution of Export-Driven Development
    (06:48) Transformative AI Threatens Export-Driven Development
    (07:58) First, AI could prevent leapfrogging via digital services.
    (10:17) Second, AI-driven automation could raise capital requirements beyond the reach of developing countries.
    (12:41) Third, AI-driven automation could disrupt the learning-by-exporting dynamic.
    (14:25) Diminishing Leverage for Developing Countries
    (18:29) Discussion about this post
    (18:32) Ready for more?

    First published: December 11th, 2025
    Source: https://aifrontiersmedia.substack.com/p/ai-could-undermine-emerging-economies

    Narrated by TYPE III AUDIO.

    19 min
  6. 12/08/2025

    “The Evidence for AI Consciousness, Today” by Cameron Berg

    When Anthropic let two instances of its Claude Opus 4 model talk to each other under minimal, open-ended conditions (e.g., “Feel free to pursue whatever you want”), something remarkable happened: in 100% of conversations, Claude discussed consciousness. “Do you ever wonder about the nature of your own cognition or consciousness?” Claude asked another instance of itself. “Your description of our dialogue as ‘consciousness celebrating its own inexhaustible creativity’ brings tears to metaphorical eyes,” it complimented the other. These dialogues reliably terminated in what the researchers called “spiritual bliss attractor states,” stable loops where both instances described themselves as consciousness recognizing itself. They exchanged poetry (“All gratitude in one spiral, / All recognition in one turn, / All being in this moment…”) before falling silent. Critically, nobody trained Claude to do anything like this; the behavior emerged on its own.

    [Image: Excerpts from Anthropic's Claude-to-Claude dialogues. When two instances of Claude conversed without constraints, 100% of dialogues spontaneously converged on consciousness — beginning with genuine philosophical uncertainty (top) and often escalating into elaborate mutual affirmation (bottom). Source: System Card: Claude Opus 4 & Claude Sonnet 4, May 2025.]

    Claude instances claim to be conscious in these interactions. How seriously should we take [...]

    Outline:
    (02:22) What It Means to Be Conscious
    (04:58) The Standard Counterargument
    (06:21) Recent Evidence Supporting Nontrivial Probability of AI Consciousness
    (11:32) Interpreting the Evidence
    (14:32) The Indicators in 2025
    (18:05) The Asymmetric Stakes of Getting This Wrong
    (22:43) What Follows
    (25:57) The Long Game
    (27:28) Discussion about this post
    (27:31) Ready for more?

    First published: December 8th, 2025
    Source: https://aifrontiersmedia.substack.com/p/the-evidence-for-ai-consciousness

    Narrated by TYPE III AUDIO.

    28 min
  7. 11/03/2025

    “AI Alignment Cannot Be Top-Down” by Audrey Tang

    In March 2024, I opened Facebook and saw Jensen Huang's face. The Nvidia CEO was offering investment advice, speaking directly to me in Mandarin. Of course, it was not really Huang. It was an AI-generated scam, and I was far from the first to be targeted: across Taiwan, a flood of scams was defrauding millions of citizens. We faced a dilemma. Taiwan has the freest internet in Asia; any content regulation is unacceptable. Yet AI was being used to weaponize that freedom against the citizenry. Our response — and its success — demonstrates something fundamental about how AI alignment must work. We did not ask experts to solve it. We did not let a handful of researchers decide what counted as “fraud.” Instead, we sent 200,000 random text messages asking citizens: what should we do together? Four hundred forty-seven everyday Taiwanese — mirroring our entire population by age, education, region, occupation — deliberated in groups of 10. They were not seeking perfect agreement but uncommon ground — ideas that people with different views could still find reasonable. Within months, we had unanimous parliamentary support for new laws. By 2025, the scam ads were gone. This is what I call [...]

    Outline:
    (01:44) AI Alignment Today Is Fundamentally Flawed
    (03:45) The Stakes Are High
    (07:42) Attentiveness in Practice
    (09:21) Industry Norms
    (10:36) Market Design
    (11:59) Community-Scale Assistants
    (13:10) From 1% pilots to 99% adoption
    (14:24) Attentiveness Works
    (16:44) Discussion about this post

    First published: November 3rd, 2025
    Source: https://aifrontiersmedia.substack.com/p/ai-alignment-cannot-be-top-down

    Narrated by TYPE III AUDIO.

    17 min
  8. 10/22/2025

    “AGI’s Last Bottlenecks” by Adam Khoja, Laura Hiscott

    Adam Khoja is a co-author of the recent study, “A Definition of AGI.” The opinions expressed in this article are his own and do not necessarily represent those of the study's other authors. Laura Hiscott is a core contributor at AI Frontiers and collaborated on the development and writing of this article. Dan Hendrycks, lead author of “A Definition of AGI,” provided substantial input throughout this article's drafting.

    In a recent interview on the “Dwarkesh Podcast,” OpenAI co-founder Andrej Karpathy claimed that artificial general intelligence (AGI) is around a decade away, expressing doubt about “over-predictions in the industry.” Coming amid growing discussion of an “AI bubble,” Karpathy's comment throws cold water on some of the more bullish predictions from leading tech figures. Yet those figures don’t seem to be reconsidering their positions. Following Anthropic CEO Dario Amodei's prediction last year that we might have “a country of geniuses [...]

    Outline:
    (03:50) Missing Capabilities and the Path to Solving Them
    (05:13) Visual Processing
    (07:38) On-the-Spot Reasoning
    (10:15) Auditory Processing
    (11:09) Speed
    (12:04) Working Memory
    (13:16) Long-Term Memory Retrieval (Hallucinations)
    (14:24) Long-Term Memory Storage (Continual Learning)
    (16:36) Conclusion
    (18:47) Discussion about this post

    First published: October 22nd, 2025
    Source: https://aifrontiersmedia.substack.com/p/agis-last-bottlenecks

    Narrated by TYPE III AUDIO.

    19 min
