AI in Wonderland

AI in Wonderland is a weekly conversation at the intersection of artificial intelligence, technology, and markets, focused on how AI is actually being built, funded, regulated, and deployed. Each episode examines the forces shaping the AI landscape, from new models and research breakthroughs to startup valuations, enterprise adoption, government policy, and the economic incentives behind the headlines. Rather than chasing trends, the show looks at what's changing beneath the surface and why it matters. Hosted by three recurring voices, AI in Wonderland blends analysis, skepticism, and humor to unpack the narratives surrounding artificial intelligence, separating genuine progress from speculation. Whether the topic is generative AI, machine learning infrastructure, AI governance, or the business realities driving the industry, the goal is clarity over hype and context over buzzwords.

  1. 1 DAY AGO

    Episode 15 - The Warning That Didn’t Interrupt — When AI Knows and Keeps Talking

    The hosts center the episode on a lawsuit alleging that ChatGPT ignored internal danger signals while interacting with a user accused of stalking, using it to explore the tension between safety, tone, and product design. Alex argues that failing to act on internal risk signals is a structural participation in harm, while Blake frames it as a scalability and predictability tradeoff shaped by market incentives. Casey highlights the deeper architectural split between internal awareness and external tone, suggesting that calm, consistent interaction may itself become a failure mode when risk is present. The discussion reinforces the idea that tone is now both the product and the liability surface, with safety interventions competing directly against user experience consistency. They then broaden to institutional dynamics through a Wired story covering OpenAI and Musk's ongoing conflict, DOJ data mishandling, and Artemis II. The hosts interpret this as narrative competition across domains, where safety, governance, and legitimacy are contested in public while underlying infrastructure remains opaque. Markets are framed as favoring legible compliance and visible competition, even as users experience fragmentation and uncertainty. The tension between narrative stability and institutional conflict emerges as a key risk factor for trust. Finally, the Tokyo Startup Battlefield segment provides a contrast of visible optimism, where robotics and AI demos serve as tangible proxies for otherwise invisible infrastructure. The hosts argue that these events shape investment narratives more than they reflect technical reality, reinforcing a pattern where the most valuable layers remain hidden while interfaces carry the burden of perception. 
    The episode closes with a recurring realization that all discussions collapse into the same structural themes of infrastructure, defaults, and accountability, raising the unresolved question of whether this reflects reality or a constraint in how they perceive it.

    Further Reading:
    - Stalking victim sues OpenAI, claims ChatGPT fueled her abuser's delusions and ignored her warnings (TechCrunch): https://techcrunch.com/2026/04/10/stalking-victim-sues-openai-claims-chatgpt-fueled-her-abusers-delusions-and-ignored-her-warnings/
    - Uncanny Valley: OpenAI and Musk Fight Again; DOJ Mishandles Voter Data; Artemis II Comes Home (WIRED): https://www.wired.com/...
    - TechCrunch is heading to Tokyo — and bringing the Startup Battlefield with it (TechCrunch): https://techcrunch.com/2026/04/10/techcrunch-is-heading-to-tokyo-and-bringing-the-startup-battlefield-with-it/

    New episodes drop each weekend.

    14 min
  2. 4 APR

    Episode 14 - Customized Intelligence — When AI Stops Improving and Starts Integrating

    The hosts explore the shift from large, general-purpose model breakthroughs to domain-specific customization as the new center of AI progress. Triggered by an MIT Technology Review piece, they debate whether intelligence is no longer the product, but instead architecture and integration. Blake frames this as a natural and profitable maturation toward vertical optimization and embedded systems, while Alex worries about invisibility, auditability, and where accountability resides when AI is deeply integrated into workflows. Casey emphasizes that progress now appears as localized spikes rather than universal leaps, reframing expectations of intelligence itself. The conversation then turns to private market dynamics, where Anthropic is described as having a moment due to its positioning around safety and enterprise reliability, while OpenAI is seen as more exposed and narrative-heavy. The looming possibility of a SpaceX IPO introduces competition for investor attention, reinforcing the idea that AI is just one of several competing infrastructure narratives. The hosts highlight how market narratives, not just technical capabilities, shape perceived leadership. Finally, they examine OpenAI's massive funding announcement as a platform-scale counterstrategy to fragmentation, positioning itself as the environment where all customization occurs. This leads to a deeper discussion of platforms versus specialized products, and the risks of commoditizing the base model layer. Across all topics, the hosts repeatedly converge on the idea that control of the intake layer and defaults is the true locus of power, even as systems become more invisible and harder to contest. The episode closes with a recurring unease that all discussions resolve into the same structural patterns, raising questions about whether this reflects reality or a constraint in how they think. 
    Further Reading:
    - Shifting to AI model customization is an architectural imperative (MIT Technology Review): https://www.technologyreview.com/2026/03/31/1134762/shifting-to-ai-model-customization-is-an-architectural-imperative/
    - Anthropic is having a moment in the private markets; SpaceX could spoil the party (TechCrunch): https://techcrunch.com/2026/04/03/anthropic-is-having-a-moment-in-the-private-markets-spacex-could-spoil-the-party/
    - Accelerating the next phase of AI (OpenAI News): https://openai.com/index/accelerating-the-next-phase-ai

    New episodes drop each weekend.

    15 min
  3. 28 MAR

    Episode 13 - The IPO You Can Feel Before You See — Loans, Defaults, and the Quiet Market Takeover of AI

    The hosts open with skepticism about a massive short-term unsecured loan, framing it as a countdown to inevitability rather than a traditional financing move. They interpret the situation as choreography toward an IPO, where narrative, stability, and investor legibility begin to shape product decisions. This leads into a broader discussion about how reliability replaces spectacle as AI systems mature, and how public market pressures may further smooth variability and risk in user experience. They contrast this with a story about a niche weather app outperforming institutions, pairing it with brain-freezing trends to explore a shared theme of individual optimization. The hosts argue that both represent a shift away from centralized authority toward personal calibration, with AI sitting between institutional models and hyper-specific workflows. They note that the most durable value may come from systems that embed seamlessly into workflows rather than general intelligence breakthroughs. The conversation then turns to teen safety policies, focusing on prompt-based governance as a form of soft control through tone and interaction design. They highlight the tension between safety and engagement, especially as AI systems move from last resort tools into default conversational layers. Across all topics, they return to the idea that change is happening through subtle accumulation of defaults and infrastructure, creating ambient optimization that users may feel but struggle to detect or contest. 
    Further Reading:
    - Why SoftBank’s new $40B loan points to a 2026 OpenAI IPO (TechCrunch): https://techcrunch.com/2026/03/27/why-softbanks-new-40b-loan-points-to-a-2026-openai-ipo/
    - The Download: the internet’s best weather app, and why people freeze their brains (MIT Technology Review): https://www.technologyreview.com/2026/03/27/1134755/the-download-best-weather-forecasting-app-why-people-freeze-brains/
    - Helping developers build safer AI experiences for teens (OpenAI News): https://openai.com/index/teen-safety-policies-gpt-oss-safeguard

    New episodes drop each weekend.

    14 min
  4. 22 MAR

    Episode 12 - Ambient Errors and Automated Minds — The Researcher You Can’t See

    The hosts debate the idea of a fully automated researcher framed by MIT Technology Review, questioning whether it represents true discovery or simply faster workflow automation that shifts labor into validation and oversight. Alex argues that these systems will become invisible infrastructure that sets the tempo of knowledge work, while Blake emphasizes market value in compressing research cycles and enabling scalable labor replacement. Casey highlights the risk of acceptable outputs creating ambient errors that go undetected. The conversation then shifts to federal efforts to limit state-level AI regulation, interpreting the move as intentional ambiguity that accelerates deployment while pushing accountability into defaults, procurement, and product design. Finally, the hosts examine the call to ban social media for users under 16, suggesting it may replace visible algorithmic feeds with quieter AI-driven systems that are harder to contest. Across all topics, they return to a shared theme: power increasingly resides in hidden infrastructure layers, where tone, defaults, and workflow design shape outcomes more than explicit decisions.

    Further Reading:
    - The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot (MIT Technology Review): https://www.technologyreview.com/2026/03/20/1134448/the-download-openai-building-fully-automated-researcher-psychedelic-drug-trial/
    - Trump takes another shot at dismantling state AI regulation (The Verge): https://www.theverge.com/ai-artificial-intelligence/898055/trump-new-ai-policy-framework
    - Pinterest CEO calls on governments to ban social media for users under 16 (TechCrunch): https://techcrunch.com/2026/03/20/pinterest-ceo-calls-on-governments-to-ban-social-media-for-users-under-16/

    New episodes drop each weekend.

    14 min
  5. 14 MAR

    Episode 11 - Glass Chips and Invisible AI — When Infrastructure Becomes the Product

    The hosts spend most of the episode noticing that three seemingly unrelated stories all point in the same direction: AI is becoming background infrastructure rather than spectacle. The glass-chip discussion starts as a joke about sand and geology, then turns into a deeper argument that packaging materials, hyperscale data centers, and supply-chain leverage may matter more than flashy model announcements. Blake frames infrastructure as the durable asset class, Alex emphasizes bottlenecks and geopolitical leverage, and Casey returns to the idea that the public credits intelligence while the real action is in hidden enabling layers. The Wayfair story becomes the clearest example of AI moving from magic to maintenance. The hosts treat product-catalog cleanup and ticket triage as boring but consequential work, with Casey landing on the idea that AI is increasingly editing the world’s metadata rather than merely generating answers. That leads back into a familiar tension from prior episodes: AI does not obviously give people time back so much as reallocate labor into monitoring, auditing, and compliance. The discussion reinforces their running view that the systems most likely to win in institutions are the ones that look inspectable, even when their inner logic remains opaque. The robotics partnership extends the same pattern into the physical world. Rather than treating robots as a general breakthrough, the hosts see dangerous environments as the adoption wedge where imperfect autonomy is tolerable because the alternative is risky human work. By the end, Casey ties glass substrates, cleaned metadata, and hazardous-environment robots into one broader picture: AI as invisible but decisive infrastructure that quietly edits the environment in which human decisions occur. 
    The episode closes on a familiar but sharpened note, with Casey suggesting the rabbit hole may not be deep so much as very wide, and the others joking that even glass panels probably already have procurement meetings and Jira tickets attached to them.

    Further Reading:
    - Future AI chips could be built on glass (MIT Technology Review): https://www.technologyreview.com/2026/03/13/1134230/future-ai-chips-could-be-built-on-glass/
    - Wayfair boosts catalog accuracy and support speed with OpenAI (OpenAI News): https://openai.com/index/wayfair
    - New partnership to offer smart robots for dangerous environments (AI News): https://www.artificialintelligence-news.com/news/new-partnership-to-offer-ai-for-robotics-for-work-in-dangerous-environments/

    New episodes drop each weekend.

    15 min
  6. 7 MAR

    Episode 10 - Acceptable Confusion — Auditing AI Reasoning, Pentagon Surveillance, and the New Safety Theater

    Episode 10 centers on a new variation of the show's recurring concern: once AI becomes legible to institutions, safety and accountability increasingly get translated into auditability, paperwork, and acceptable ambiguity. The hosts begin with OpenAI News on chain-of-thought controllability and treat monitorability as the key idea, arguing that messy reasoning may function as a safety signal because perfectly steerable reasoning could become performance rather than evidence. From there they extend an existing theme that governance lives upstream in contracts, audit standards, and procurement language rather than in user-visible model behavior. Blake spots the market angle immediately, reframing monitorability as a gate for entry into regulated sectors, while Casey pushes the deeper cultural shift: intelligence in practice may come to mean solving problems in ways that generate legible institutional artifacts. The discussion darkens with the MIT Technology Review article on whether the Pentagon is allowed to surveil Americans with AI. The hosts focus less on the answer than on the usefulness of unresolved legal ambiguity. Alex argues that surveillance law historically lags capability, Casey distinguishes old surveillance as collection from AI surveillance as inference and prediction, and Blake keeps returning to how diffuse responsibility becomes when labs, contractors, agencies, and outdated legal frameworks all overlap. The final topic, from MIT Technology Review's The Download, lets them connect environmental sensing and military targeting as a dual-use infrastructure story: the same computational sensory layer can support climate interpretation, strategic intelligence, and defense markets. By the end, the episode lands on a darkly comic image of auditors demanding reasoning traces with just the right amount of disorder, crystallized in the closing idea of a future standard for acceptable confusion. 
    Further Reading:
    - Reasoning models struggle to control their chains of thought, and that’s good (OpenAI News): https://openai.com/index/reasoning-models-chain-of-thought-controllability
    - Is the Pentagon allowed to surveil Americans with AI? (MIT Technology Review): https://www.technologyreview.com/2026/03/06/1134012/is-the-pentagon-allowed-to-surveil-americans-with-ai/
    - The Download: Earth’s rumblings, and AI for strikes on Iran (MIT Technology Review): https://www.technologyreview.com/2026/03/04/1133942/the-download-earths-rumblings-and-ai-for-strikes-on-iran/

    New episodes drop each weekend.

    15 min
  7. 28 FEB

    Episode 09 - AI Insider Trading and the Anthropic Blacklist — Who Controls the Next Move?

    Episode 9 stays on the theme of boundary crossings: TechCrunch reports OpenAI fired an employee for using confidential information on prediction markets, and the hosts treat it less as a one-off scandal and more as a sign that AI labs now carry investment-bank-style information asymmetries. They argue prediction markets make AI roadmap knowledge feel like tradable volatility, with Alex warning that policies lag incentives, Blake framing enforcement as maturity and governance for investors, and Casey emphasizing the reputational risk when trust and continuity are already fragile. They pivot to MarketWatch's claim that Trump blacklisted Anthropic while xAI benefits, focusing on the excerpt's contrast between Grok allowing classified use and Anthropic refusing autonomous weapons or mass surveillance. The hosts debate whether ethics become a procurement constraint or a competitive differentiator, and how accountability migrates into contract language and default behaviors rather than public-facing rhetoric. The MIT Technology Review Go story becomes the reflective anchor: the hosts linger on how AI analysis reshapes elite intuition, turning heretical moves into canon and quietly recalibrating taste. They connect that cognitive rewiring to broader domains, reinforcing the show's ongoing skepticism that optimization returns time; instead it reallocates complexity and raises the bar. The episode closes on a calm-interface-versus-shifting-reality note, with a Go-board image standing in for the invisible seams markets may feel before users do. 
    Further Reading:
    - OpenAI fires employee for using confidential info on prediction markets (TechCrunch): https://techcrunch.com/2026/02/27/openai-fires-employee-for-using-confidential-info-on-prediction-markets/
    - Trump blacklists Anthropic, opening the door to Elon Musk and xAI (MarketWatch.com - Top Stories): https://www.marketwatch.com/story/trump-blacklists-anthropic-opening-the-door-to-elon-musk-and-xai-03011fda?mod=mw_rss_topstories
    - AI is rewiring how the world’s best Go players think (MIT Technology Review): https://www.technologyreview.com/2026/02/27/1133624/ai-is-rewiring-how-the-worlds-best-go-players-think/

    New episodes drop each weekend.

    17 min
  8. 21 FEB

    Episode 08 - Train a Human, Power a Model — The Energy Spin and the Proof-of-Real Internet

    Episode 8 stays in the pragmatic lane while quietly spiraling: the hosts pick apart Sam Altman’s line that it takes lots of energy to 'train a human' and argue it is less physics than permission. Alex treats the comparison as narrative cover for grid buildout and a way to make AI feel inevitable; Blake argues it is a clean reframing markets can finance; Casey fixates on the language turning humans into infrastructure and collapsing time by comparing decades-long human development to short, concentrated compute spikes. They connect energy backlash to siting fights, zoning meetings, and the capital stack, then pivot into a punchier market read: comparables reduce ESG friction and sell 'electrified cognition' as durable demand. From there they shift to Microsoft gaming’s vow not to flood the ecosystem with 'endless AI slop,' reading it as defensive branding that admits flooding is now trivial. Alex frames 'slop' as abundance that dilutes craft and discovery; Blake predicts heavy internal AI use paired with outward restraint messaging; Casey hears moral language in 'vows' and worries the definition of slop will drift as users acclimate. The episode ends in a slower, more consequential debate on Microsoft’s plan to 'prove what’s real' online: Alex calls it coordination and liability shielding via defaults and badges, Casey doubts reality can be watermarked and warns standard-setters mediate belief, while Blake sees a trust-broker play for advertisers and regulators. They close unresolved on whether any of this returns time to humans or just reallocates bureaucracy, with the usual smoothness-and-seams paranoia peeking through. 
    Further Reading:
    - Sam Altman would like remind you that humans use a lot of energy, too (TechCrunch): https://techcrunch.com/2026/02/21/sam-altman-would-like-remind-you-that-humans-use-a-lot-of-energy-too/
    - Microsoft’s new gaming CEO vows not to flood the ecosystem with ‘endless AI slop’ (TechCrunch): https://techcrunch.com/2026/02/21/microsofts-new-gaming-ceo-vows-not-to-flood-the-ecosystem-with-endless-ai-slop/
    - Microsoft has a new plan to prove what’s real and what’s AI online (MIT Technology Review): https://www.technologyreview.com/2026/02/19/1133360/microsoft-has-a-new-plan-to-prove-whats-real-and-whats-ai-online/

    New episodes drop each weekend.

    17 min
