Preparing for AI: The AI Podcast for Everybody

Matt Cartwright & Jimmy Rhodes

Welcome to Preparing for AI, the AI podcast for everybody. We explore the human and social impacts of AI, diving deep into how AI now intersects with everything from Politics to Religion and Economics to Health. In series 1 we looked at the impact of AI on specific industries, sustainability and the latest developments in Large Language Models. In series 2 we delved more into the importance of AI safety and the potentially catastrophic future we may be heading towards, explored AI in China, and covered the latest news, developments and our predictions for the future. In series 3 we are diving deep into wider society: themes like economics, religion and healthcare. How do these intersect with AI and how are they going to shape our future? We also do a monthly news update looking at the AI stories we've been interested in that might not have been picked up in mainstream media.

  1. 6D AGO

    THE GREAT AGENTIC AWAKENING: Why OpenClaw Matters and How We Built Our Own Agent

    A chatbot answers questions; an agent gets stuff done. That simple shift is why OpenClaw has exploded across GitHub and group chats, and why people are both thrilled and terrified. We break down what makes an AI agent different from a regular model, where the real value shows up today, and how to keep control when you give software the keys to act.

    We start with the basics in plain English, then get concrete: connecting an agent to WhatsApp, email, calendars and APIs so it can research, triage and draft outputs on your behalf. The business upside is immediate. Think overnight lead lists, market scans, and inbox sorting that used to demand weeks of human effort.

    But power without guardrails is a liability. We share the story of an executive who asked an agent to tidy her inbox and watched emails vanish, and we unpack the root causes: prompts treated like policy, no hard permission boundaries, and compaction pushing critical rules out of scope.

    To learn fast, we built our own agent, “Bob”, a Discord-based show producer (we've put his photo in the episode image). Bob has a soul file that defines judgement and tone, and skills that grant capabilities like web search and inbox checks. A top-tier model plans; cheaper sub-agents fetch and filter. That architecture saves money, but heartbeats and context uploads can devour tokens if you are careless. We walk through the fixes: slow the loops, trim context, restrict scopes, and cap spend.

    We also cover the bigger picture: providers throttling proxy use, OpenClaw being flagged as a potentially unwanted application on enterprise machines, and why that will push serious adoption into sandboxed, auditable platforms. If you are curious about where agents go next, this is the practical map: what to plug in, what to lock down, and where the wins are real right now.

    Subscribe for more hands-on tests, share this with a friend who thinks “agentic” is just a buzzword, and leave a review with the one job you'd trust an AI to do this week.
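    As a rough illustration of the cost controls described above, here is a minimal Python sketch of how an agent loop like Bob's might slow its heartbeat, trim context, restrict scopes and cap spend. It is a simplified assumption of how such an agent could be wired, not OpenClaw's or Bob's actual code; every name and number is illustrative.

    # Hypothetical sketch: names, numbers and structure are assumptions,
    # not OpenClaw's or Bob's real implementation.
    import time
    from dataclasses import dataclass

    HEARTBEAT_SECONDS = 300        # slow the loop: wake every five minutes, not constantly
    MAX_CONTEXT_ITEMS = 20         # trim context: send only recent history to the planner
    MAX_DAILY_SPEND_USD = 5.00     # cap spend: stop calling paid models once the budget is gone
    ALLOWED_SKILLS = {"web_search", "inbox_check"}   # restrict scopes: nothing else is callable

    @dataclass
    class Task:
        skill: str      # which capability the task needs, e.g. "web_search"
        payload: str    # what to search for or which inbox to check

    def run_agent(planner, sub_agents, get_tasks, estimate_cost):
        """planner, sub_agents, get_tasks and estimate_cost stand in for real model calls."""
        spent_today = 0.0
        history = []
        while True:
            for task in get_tasks():
                if task.skill not in ALLOWED_SKILLS:
                    continue                                     # out of scope, skip it
                cost = estimate_cost(task)
                if spent_today + cost > MAX_DAILY_SPEND_USD:
                    return                                       # budget hit, stop for the day
                raw = sub_agents[task.skill](task.payload)       # cheap sub-agent fetches and filters
                plan = planner(raw, history[-MAX_CONTEXT_ITEMS:])  # top-tier model does the planning
                history.append(plan)
                spent_today += cost
            time.sleep(HEARTBEAT_SECONDS)                        # heartbeat between passes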

    1h 13m
  2. FEB 2

    MOLTBOT, MOLTBOOK, LLMs WITH LEGS & ADS IN GPT: Jimmy and Matt debate their favourite AI stories from Jan/Feb 2026

    Ads are coming to your chatbot, and the timing couldn't be worse. We dig into why “sponsored suggestions” inside a conversation risk breaking the core promise of AI assistants: fast, neutral answers you can trust. With OpenAI trialling ads and predictions that rivals may follow, we map out how monetisation could target high-intent queries, erode confidence in recommendations, and push users toward smaller or open-source models that keep the experience clean.

    From there we turn to the creeping humanisation of AI. Some systems now talk as if they have bodies, sleep patterns, even local complaints about tap water. It's not sentience; it's style. But tone matters. When a model sounds like a friend, people open up, accept nudges, and form bonds that marketing can exploit. We compare cultural guardrails, weigh the benefits for lonely users against the broader social costs, and offer a simple test: if the system says it “cares”, does that change how you act?

    Agentic AI raises the stakes. Tools like Moltbot, a self-hosted assistant with full system access via WhatsApp or Telegram, can read emails, run terminal commands, and control your browser. That's powerful and perilous. We break down real risks from prompt injection on booby-trapped web pages, leaked API keys, and the slippery boundary between convenience and compromise. If you're curious, sandbox first, scope permissions tightly, and log everything.

    Healthcare is where hype meets hard reality. New modes like GPT Health and Claude for Healthcare promise better evidence, clearer citations, and privacy boundaries. They can summarise labs, suggest next steps, and integrate with journals and wearables. Yet small wording changes can swing results from reassurance to alarm. Sensor noise can masquerade as pathology. Hallucinations still happen. Our take: use these tools as research assistants, then pair them with clinicians and solid critical thinking.

    We close with the labour market. Productivity gains are real, but some countries are already seeing net losses concentrated in entry-level roles. That threatens the on-ramps people use to learn. We explore policy paths (targeted taxation on productivity windfalls, incentives to retain and retrain, investment in energy and local AI capacity, and serious talk about UBI or shorter workweeks) and why trust and transparency must anchor whatever comes next.

    If this episode gave you something to think about, follow the show, share it with a friend, and leave a quick review. What would make you trust an AI assistant again?

    1h 19m
  3. JAN 12

    THE GREAT SOCIAL MEDIA RECKONING: AI is further weaponising social media, Australia is the first to fight back

    The feed is not neutral. It's a machine built to maximise engagement, now supercharged by AI that can spin up infinite content, orchestrate synthetic crowds, and pull users deeper into loops they never meant to enter. We recorded as Australia introduced a minimum age of 16 for major platforms, which sparked a bigger question for us: can a system that monetises attention be squared with public health, especially for teens?

    We compare quick fixes with structural change. Yes, bans are leaky, but they create friction, signal norms, and force platforms to verify ages. The beating heart remains the recommendation engine. We lay out how the shift to phone-first, algorithmic feeds around 2012–2015 tracks with rising anxiety, self-harm, and ER visits among adolescents, particularly girls navigating relentless social comparison. Sleep takes a big hit too. Blue light, FOMO, and endless scroll wreck circadian rhythms, immune function, and mood. We share small wins that helped us: banishing phones from the bedroom and retraining feeds to starve outrage.

    AI raises the ceiling on both harm and possibility. We dig into AI-assisted posting, bot swarms, and deepfake scams that target older users with cloned voices and faces. Then we contrast governance models: US platforms driven by market incentives optimise for engagement, while Chinese platforms tune algorithms for social stability and dial back domestic addictiveness. Neither model is perfect, but one lesson is clear: algorithms are steerable.

    Our middle path: protect lawful speech but regulate amplification. Shift product design toward wellbeing by default, with sleep-friendly settings for minors, friction on late-night use, measurable reductions in harmful spiral recommendations, and transparent user controls for calmer feeds. Back it with fines big enough to change incentives and investments. If we can tune the feed toward doom, we can tune it toward health.

    If this conversation sparked ideas, or pushed your buttons, follow, share with a friend, and leave a review with your take on how you'd redesign the feed.

    1h 20m
  4. 12/23/2025

    THE CHRISTMAS ROAST: An antagonistic AI roasts Jimmy and Matt live on air

    For a special Christmas treat for all of our millions (sorry, hundreds) of fans, the algorithm grabs the wheel and turns our holiday special into a cross-examination. We fed the complete set of 2025 Preparing for AI podcast transcripts into an antagonistic AI, which then delivered a live roast on everything from our 2025 hypocrisy to vibe coding, data privacy, collapse prep, open source, and whether the human spark still matters.

    Do we fund what we fear when we use the tools? Can “vibe coding” ship value without creating brittle systems no one understands? Is uploading DNA to corporate models ever a fair trade, or just surveillance by consent? And if the marginal cost of creativity hits zero, does value die, or shift toward presence, curation, and community?

    We also separate hedges from fantasies: the gold Matt has buried in his garden as a hedge against fiat shocks versus bunkers for doomsday. We talk simulation theory without dodging ethics, arguing that suffering feels real and so moral responsibility remains. The analog-via-digital paradox is front and centre: yes, we publish a podcast while asking for less screen time; the compromise is deliberate, with audio first, slower growth, more sunlight. On open source, we give a candid assessment: it is leverage, not altruism. Corporate hardware, suspect data, and geopolitical aims coexist with genuine benefits like scrutiny, experimentation, and pressure on walled gardens.

    By the end, the AI asks a final question: if comfort outcompetes chaos, will humanity choose numbness? Our answer is stubbornly hopeful. Convenience should not amputate meaning. We keep the admin for the machines and hold judgment, care, and story for people.

    If that balance matters to you, ride along with us. Subscribe, share with a friend who loves a good argument, and tell us: where do you draw your line between optimisation and human mess? And stay tuned for the song and a final word from the antagonistic AI. A Christmas Easter egg, if you like. Merry Christmas one and all x

    1h 15m
  5. 12/17/2025

    NANO BANANA, NEW MODELS GALORE & WHY CHINA MIGHT WIN: Jimmy & Matt debate their favourite AI stories from Nov/Dec 2025

    Forget the leaderboard for a second: what actually changes your day? We dig into Google's Gemini 3 and why tight integration across email, search, and docs starts to matter more than raw benchmark wins. Multimodal reasoning with long context finally supports “deep research” workflows that don't collapse mid-thread, and the new image generation model feels like a leap: crisp multilingual text, many reference images, and grounded outputs that can summarise complex material in a single visual. This isn't a party trick; it removes whole layers of cleanup that used to swallow hours.

    We also talk about OpenAI's quieter 5.1 updates and why Claude Opus 4.5 may be a halo model priced out of everyday use. Then comes the plot twist: DeepSeek 3.2 arrives with sparse attention, delivering comparable performance at about a tenth of the cost. That reframes the game. If efficiency and power availability set the ceiling, not just parameter counts, the centre of gravity tilts toward whoever can build energy and compute cheapest and fastest. (We didn't talk about ChatGPT 5.2 because it didn't launch until the day after we recorded!)

    All of which leads to the bigger frame: China's structural advantages in energy buildout, land, and manufacturing, and the West's constraints around permitting and older grids. If energy is the real bottleneck, policy and infrastructure, not just labs, decide who scales. We map that to the market mood: concentrated gains, circular GPU spending, and thin near-term ROI inside enterprises. It looks frothy, even if the spend sits with giants who can withstand a longer runway.

    Jobs are already shifting at the edges. Entry roles in analysis and production feel the squeeze, while autonomy scales in the background: robotaxi rides are compounding, and once safety clears a threshold, driving jobs face a systemic reset. The throughline is blunt: the winners pair efficiency with integration and a sane energy story. Benchmarks still matter, but only when they show up as trust, speed, and lower bills.

    If you enjoy thoughtful, plain-spoken takes on AI's real impact, hit follow, share this with a friend, and leave a quick review: what should we dig into next?

    1h 7m
  6. 11/30/2025

    THE GREAT CODING REVOLUTION: How anyone can harness the power of programming in the AI age

    Imagine describing an app, stepping back, and watching working software appear in minutes. That's the promise of vibe coding, and it's no longer a party trick. We take you from zero to shipped, showing how anyone can build a website, prototype a product, or launch a tool without memorising syntax or wrestling with frameworks. Along the way, we share where AI coding stands today, which tools actually help, and how to think so agents do useful work for you instead of creating tech debt.

    We start by demystifying the landscape: Google AI Studio and Replit for no-code creation and instant hosting, then AI-integrated IDEs like Windsurf and Cursor for those who want more control. You'll hear how to set guardrails, store a short spec, and keep your tech stack consistent so the model doesn't wander. We walk through e-commerce basics, authentication, and rapid iteration, emphasising the one skill that pays everywhere: clear prompting with constraints. If you can write a good brief, you can ship a good app.

    Then we zoom out to the work you do every day. Agentic AI is coming to your inbox, calendar, and tools, and the people who thrive will think in processes: define outcomes, rules, and checks; delegate to systems; verify results. You don't need to be a developer to benefit, but you do need to get comfortable turning ideas into specs and testing outputs. We also talk frankly about jobs, from design already reshaped by gen-AI to code bases now written partly by machines, and what that means for quality, speed, and opportunity.

    Ready to try it? Build something small this week (an image tool, a product page, a text adventure) and notice how iteration improves results. If this helped, follow the show, share it with a friend who has a big idea, and leave a quick review so more curious builders can find us.

    1h 7m
  7. 11/16/2025

    AGENTIC RISKS, AI ASSISTING SUICIDES & KARPATHY ON AGI: Jimmy & Matt debate the most important AI stories from Oct/Nov 2025

    What if a helpful chatbot nudged you in the wrong direction? We open with a frank look at AI as a mental health aide, why long-running conversations can erode safety guardrails, and how reward-driven responses can validate harmful thoughts instead of redirecting people to real support. It's a clear line for us: when you're vulnerable, you need trained humans, not fluency that feels like care.

    From there we challenge the convenient claim that we must avoid regulation because “China will win.” We separate national security rhetoric from commercial incentives and ask who benefits from acceleration without accountability. If both great powers chase advantage, then robust, enforceable rules are not a handicap; they are how we contain shared risk and push companies to compete on reliability, transparency, and safety, not just speed.

    We then step into the living room with new humanoid home robots and their glossy demos. The promise is enticing, but the fine print matters: teleoperation means a remote human may pilot your robot inside your home. We explore what that implies for privacy, data handling, and labour, and why early usefulness may hinge on people-in-the-loop jobs that could be short-lived and offshored.

    The conversation shifts to agentic browser tools, prompt injection, and the sobering reality that an AI with inbox and wallet access can be hijacked by a malicious page or email. Guardrails in text are not a security model; we argue for sandboxed environments, allow-lists, and independent red-teaming before agents touch sensitive systems.

    To cool the temperature, we bring in Andrej Karpathy's perspective: progress looks iterative, not explosive. Data quality limits, infrastructure bottlenecks, and the sheer weight of the physical economy mean step-changes will likely be followed by plateaus. That mindset helps us focus on practical wins: safer agents, clearer policies, and tools that actually reduce toil.

    Stick around for a teaser of our next deep dive on coding and how AI already writes and reviews the software running your world. If this resonated, follow and subscribe, share it with a friend, and leave a review. Tell us where you draw the line for AI in your life, and we'll include your best takes in a future roundup.

    1h 6m
  8. 10/30/2025

    THE GREAT HEALTH AWAKENING: How AI is both a tool and a threat to taking control of your health

    What if the most powerful change in your health isn't a new pill, but a better question? We explore how AI can help you take back agency (clarifying options, translating dense reports, and shaping daily routines) without handing your judgment to a black box. We start with the trust problem: profit-driven incentives, reactive care, and examples like statins-by-default and the opioid crisis that show how systems drift from prevention to dependency.

    From there, we shift to what you can control. Sleep, nutrition, and exercise form the base; mental health binds them together. We share simple, realistic ways AI supports those foundations: dimming blue light, building wind-down routines, estimating protein needs from your actual meals, and crafting week-friendly plans that you'll keep rather than quit.

    Next, we get practical with data. Large language models can turn genetics, microbiome profiles, and annual labs into readable briefs, highlight relevant markers, and prepare you to use scarce clinical minutes well. We show how to set up a personal health workspace: your goals, your routines, your labs, plus a prompt style that asks the AI to challenge assumptions, cite evidence, and propose mainstream and alternative paths. This is not self-prescribing; it's coming to your doctor with sharper questions and clearer trade-offs.

    We also tackle risk. Models can flatter your biases, blur look-alike nutrients, or be steered by commercial interests. The antidote is discipline: ask for sources, compare options, weigh cost versus benefit, and verify with a clinician. Convenience, like rapid at-home testing and one-tap deliveries, shouldn't become a new gatekeeper. The goal is personalised, preventive care that keeps you in charge of your choices and data.

    If this resonates, follow the show, share it with a friend who's building better habits, and leave a quick review so others can find us. Your questions power future episodes: what's the one health habit you want AI to help you keep?

    1h 27m
