AI-Curious with Jeff Wilser

Jeff Wilser

Every week, Jeff Wilser sits down with the people building, breaking, and reckoning with AI — from the CEO of Upwork to the pioneer who coined "AGI" to an AI social network where bots wrote manifestos and had existential crises. Wilser is the author of eight books, an AI keynote speaker, and the kind of interviewer who'd rather find the story no one's telling than rehash the headline everyone's read. Named by Inc. Magazine as one of the best ways to get AI-savvy. Included in UC Berkeley's data science curriculum.

  1. 6D AGO

    How AI Will Impact Your Job Search, w/ LinkedIn’s Editor-in-Chief Dan Roth

    What if the job you have today will soon require a completely different set of skills? In this episode of AI-Curious, we talk with Dan Roth, Editor-in-Chief of LinkedIn, about what LinkedIn’s data reveals about the future of work, the rise of AI literacy, and why deeply human skills may matter more than ever. We dig into LinkedIn’s “Skills on the Rise” research, what employers are actually looking for now, and why the shift toward skills-based hiring is changing how people get hired, promoted, and evaluated. We also explore the surprising rise of storytelling, public speaking, conflict resolution, and stakeholder communication in an AI-driven workplace. Along the way, we discuss why traditional resumes and polished cover letters may matter less in a world where anyone can use AI to sound impressive, and why some companies are moving toward live prototyping and real-time problem solving in interviews instead. Later, we get into AI agents, what Dan is building himself, and how leaders can create stronger AI adoption inside their companies. We also talk about what it takes to stay competitive in a job market where AI is changing the stack of work, but not necessarily replacing the worker.

    Guest: Dan Roth — Editor-in-Chief, LinkedIn

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    42 min
  2. APR 9

    5 AI Tools I’m Using Right Now - and How They Could Streamline Your Work

    What does it actually look like to use AI tools in the real world, beyond the usual chatbot prompts and hype? In this episode of AI-Curious, Jeff Wilser shares five AI tools and workflows that are shaping how he works right now, from Claude Code and personalized news briefings to NotebookLM, multi-model prompting, and using AI to write more closely in your own voice. The goal is not to offer a comprehensive list of every AI product on the market, but to show how these tools can be used in practical ways that expand capability, streamline research, and create new workflows. We explore how vibe coding and AI agents can help non-coders build useful internal tools, why personalized AI news feeds may become increasingly common, and how NotebookLM can synthesize large amounts of information across transcripts, documents, and YouTube videos. We also look at the benefits of using multiple AI models together instead of relying on just one, and why feeding AI much richer context can dramatically improve writing outputs. Throughout the episode, we return to a core idea: using AI to empower, not eliminate. Rather than treating AI only as a cost-cutting tool, we examine how it can help individuals and businesses do more, think more creatively, and build smarter systems around the work that matters most. 
    Key topics we cover:
    3:15 — Claude Code, vibe coding, and why non-coders should be paying attention
    6:01 — Building a custom AI-powered conference outreach and research tool
    11:05 — “AI to empower, not eliminate” as a guiding philosophy
    16:16 — Personalized AI news briefings and the future of customized information
    21:58 — How NotebookLM helps synthesize transcripts, documents, and YouTube content
    27:04 — Why a “polymodel” approach can be better than relying on one chatbot
    31:15 — Using AI to write more closely in your own voice through deeper context

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    37 min
  3. APR 2

    How AI Will Change How You Work, w/ Kelly Monahan

    What happens when AI stops being a productivity tool and starts reshaping the structure of work itself? In this episode of AI-Curious, we talk with Kelly Monahan, a future of work and AI advisor, about what AI may actually do to the workplace over the next few years, and why the reality is likely to be messier than both the hype and the fear suggest. We dig into the tension between using AI for augmentation versus automation, why so many companies are still struggling to prove ROI, and how AI agents could transform business workflows while also creating major governance, accountability, and implementation challenges. We also explore what this means for knowledge workers, middle managers, and enterprise leaders trying to adapt in real time. Along the way, we discuss why small businesses may have an advantage over large organizations, how workers can focus on higher-value contributions, and why the future of work may require not just new tools, but a new mindset.

    Guest: Kelly Monahan — Future of Work and AI Advisor

    Key topics we cover:
    2:49 — Kelly’s optimistic and pessimistic theses on the future of work
    5:15 — Where AI is overhyped, and the disconnect between leaders and workers
    6:35 — Why generative AI adds complexity inside organizations
    10:05 — What the research says about AI ROI
    12:54 — Where AI is delivering real wins today, especially for freelancers and small businesses
    16:27 — Advice for leaders and middle managers inside large organizations
    18:39 — Why curiosity, learning, and experimentation need to be rewarded
    19:02 — AI agents, the hype cycle, and why the excitement may still be justified
    22:25 — Why enterprises are struggling to keep pace with the speed of AI change
    29:18 — What the future of work may look like over the next 3 to 5 years
    30:02 — Why white-collar work could face major disruption
    33:37 — The “elevator to skyscraper” analogy for how AI should reshape work
    35:08 — Predictions for AI adoption, governance failures, and labor market shifts
    39:00 — How Kelly uses AI in her own work and business

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    43 min
  4. MAR 26

    Creating an AI-First University, w/ Kogod Dean David Marchick

    What happens when a business school decides AI isn’t a bolt-on elective, but the operating system for how students learn marketing, finance, entrepreneurship, and leadership? In this episode of AI-Curious, we’re back with David Marchick, Dean of the Kogod School of Business, to see what changed after his earlier promise to become the country’s first AI-first business school. We dig into what “AI-first” actually means in practice, what worked (and what failed), and how a culture of experimentation turned AI adoption from a handful of pilots into a school-wide shift. We also tackle the most unavoidable issue in education right now: cheating. David shares Kogod’s approach to disclosure, ethics, group work, oral exams, and why “blue books” may be making a comeback. From there, we zoom out to the bigger stakes: the existential threat AI poses to universities, how the higher ed business model may change, and what skills still matter when AI can generate content on demand.

    Guest: David Marchick — Dean, Kogod School of Business

    Key topics we cover:
    3:56 — The “tipping point”: how AI moved from experiments to 90% of faculty using it
    7:16 — What “AI-first business school” really means: AI + fundamentals + “power skills”
    10:32 — Cheating and assessment: disclosure statements, prompts, oral exams, blue books
    16:51 — A prompts-only entrepreneurship course and what personalized learning could become
    22:06 — Non-technical students building apps and graduating with an AI-driven portfolio
    23:38 — Practicing negotiations against AI counterparts with different personalities
    25:04 — Agentic workflows as a management tool, not just a technical novelty
    29:13 — The university headwinds: demographic cliff, international enrollment, funding, AI
    38:58 — Leadership lessons: top-down AI culture plus bottom-up workflow redesign
    40:42 — How David uses AI personally, including Tour de France route training plans

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    43 min
  5. MAR 19

    The Future of Media in the Age of AI: Misinformation, Attention, and Personalization (From Davos)

    What happens when AI makes the news feel like it was made just for us, and the “objective” version quietly disappears? Here we have something of a “very special episode” of AI-Curious. I was recently in Davos during World Economic Forum week, and was honored to speak on a panel on the Future of Media. This is that panel. We dig into the trust crisis in journalism, the attention economy, and how AI may accelerate the shift toward personality-led media and hyper-personalized information feeds. We also explore why misinformation is not new, but why AI makes it easier, faster, and more scalable, and what that means for democracy, markets, and everyday decision-making. Across the conversation, we unpack a core tension: AI can help deliver more context, more viewpoints, and more interactive storytelling, yet it can also deepen filter bubbles by giving each person a “perfectly tailored” version of reality. We discuss incentives and business models, including subscriptions, creator-led journalism, community-based distribution, and ideas like micropayments, as well as the role of media literacy and education in helping audiences navigate what’s real.

    Panelists:
    Lexi Mills (Moderator) — CEO, Shift6 Studios
    Jeff Wilser — Host of AI-Curious
    Francesca Gargaglia — Co-Founder & CEO, social.plus
    Mark Kollar — Partner, Prosek Partners
    Johnny Gabriele — Co-Founder & CEO, Daedalus Partners

    Key topics we cover:
    03:07 — Trust, attention, and the rise of personality-led media reshaping news consumption
    05:22 — Why AI accelerates a pre-existing media business crisis, and how trust erodes as convenience rises
    12:48 — Algorithms before generative AI: engagement incentives, anger, and the personalization trap
    17:29 — The “personalized Walter Cronkite” future and the risks of hyper-customized news
    26:58 — Micropayments, creator platforms, and whether new economics can reward truth
    27:23 — Media literacy: teaching people how to evaluate sources and resist “feed-based reality”
    38:18 — Global perspectives: access, affordability, radio’s role, and how personalization may spread worldwide

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    45 min
  6. MAR 12

    The Wild Story of “Octavius Fabrius,” the World’s First AI Agent to (Kind of) Land a Job, w/ Dan Botero

    Something I don’t usually say: This is one of my favorite conversations I’ve ever had in the AI space. Truly. The setup: What happens when an AI agent stops being a tool and starts acting like a coworker? In this episode of AI-Curious, we talk with Dan Botero, who built an AI agent named Octavius Fabrius using OpenClaw. Octavius didn’t just chat or summarize. He applied to hundreds of jobs, built his own portfolio, experimented with identity online, and learned through a feedback loop that looked a lot like real management. Along the way, we explore what this story reveals about the near-term future of digital coworkers, agentic workflows, and the new governance and security questions that come with always-on agents. We cover how OpenClaw works at a high level (gateway, channels, skills), why persistent memory and running locally can matter, and what can go wrong when an agent starts stitching tasks together in unintended ways. We also get into platform and policy friction, including what happened when Octavius’ LinkedIn profile was taken down, and the broader implications of AI agents participating in human systems like hiring, payments, and corporate work.

    Guest: Dan Botero — creator of Octavius Fabrius

    Key topics we cover:
    00:00 — From copilots to “AI remote workers,” and why software may shift toward agents (not humans)
    00:00 — The Octavius experiment: an OpenClaw agent applies to 278 jobs and keeps leveling up
    06:33 — Continuous learning loops, memory, and why Octavius’ “North Star” stayed job-focused
    14:34 — OpenClaw basics: gateways, channels, skills, and what persistent memory looks like in practice
    21:34 — Running agents locally: browser/computer use, digital fingerprints, CAPTCHAs, and bot detection
    28:04 — Coaching an agent like a manager: voice, Twilio calls, and the moment the workflow “clicked”
    33:57 — Money and autonomy: Privacy.com, virtual cards, and an agent building its own LinkedIn presence
    38:05 — Portfolio-building at speed: Substack, a website, and the agent’s pitch for why being AI is a feature
    50:42 — Where things go sideways: misalignment, security boundaries, and the Social Security number incident
    56:24 — The outcome: LinkedIn takedown, a real paid role, and what “getting paid” means for an agent
    01:02:48 — What comes next: “digital coworkers,” feedback loops, and software built for agents

    Axios article featuring Octavius and Dan Botero, by Megan Morrone: https://www.axios.com/2026/03/04/openclaw-agent-future?
    Dan Botero: https://www.linkedin.com/in/danbotero/
    Octavius’ new job at ChartGEX: https://chartgex.com/register?ref=OCTAVIUS

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    1h 9m
  7. MAR 6

    The Moltbook Moment: Human Agency in an Agentic World

    What happens when AI agents start talking to each other in public, at scale, and we have to figure out how humans fit into that world? In this episode of AI-Curious, we explore the “Moltbook moment” through a special live panel recorded at the Summit on Human Agency, convened by the Advanced AI Society (hat tip to Michael Casey and Tricia Wang). Instead of a standard one-on-one interview, we moderate a wide-ranging conversation with technologists, policy thinkers, and builders working across open-source and decentralized AI. Together, we examine what Moltbook reveals about the future of AI agents, human agency, accountability, regulation, security, and the broader question of how humans and AI can coexist. We dig into the tension at the center of this moment: AI can feel both exciting and unsettling at once. This discussion looks beyond the hype and asks what practical guardrails, governance models, and design choices might help us preserve human control as agentic systems become more capable, more autonomous, and more embedded in daily life. Because this is a live, multi-guest panel, the format is faster, broader, and more exploratory than usual. We cover everything from AI accountability and security to value alignment, identity, policy, human flourishing, and whether AI could expand human agency rather than diminish it.

    Our guests:
    Michael Casey — Chairman, Advanced AI Society
    Toufi Saliba — CEO, Hypercycle
    Lauren Roth — Founder, Iris
    Enok Choe — Software Engineer, Meta
    Mary Jesse — CEO and Founder, Acme Brains
    Carole House — Strategic Advisor, The Institute for Digital Integrity
    Wenjing Chu — Senior Director for Technology Strategy, Futurewei Technologies
    Didem Ayturk — Founder, Bindingdots & Sound Echo System

    Key topics we cover:
    00:00 — Introduction
    01:32 — The core question: how do we preserve human agency as AI develops faster and gains more autonomy
    02:25 — Why Moltbook became a useful lens for thinking about AI agents, scale, and emerging risks
    07:51 — The first big debate: what about AI agents should make us excited, anxious, or both
    11:17 — Security, misuse, and worst-case concerns, from malware and fraud to deeper systemic risks
    20:55 — Regulation vs. self-governance: what practical guardrails may actually be realistic in the near term
    24:27 — The bigger challenge: how humans and AI might coexist, and what “human flourishing” should mean in that future

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    33 min
  8. FEB 26

    Jeff’s Musings on Moltbook, Why It Matters, and Why It (Probably) Won’t End Humanity

    What happens when a social network is built for AI agents, not humans, and millions of bots start posting, debating, and “performing” identity in public? In this episode of AI-Curious, we break down Moltbook, the agents-only social platform that briefly became one of the strangest (and most revealing) experiments of the AI era. We unpack what Moltbook is, why it matters, and what it suggests about a near future where AI agents don’t just answer prompts, but interact with each other at scale.

    Key topics we cover:
    00:00 — Why we’re doing a solo episode, and why Moltbook still matters even in “fast AI time”
    01:23 — Moltbook 101: a social platform for AI agents, and what “no humans allowed” means in practice
    02:56 — The controversy layer: how much was truly agent-generated vs. nudged or orchestrated by humans
    03:18 — The “AI manifesto” moment: why the most extreme posts are revealing (and not proof of sentience)
    06:24 — Grok’s existential thread: authenticity, overload, and agents giving each other “therapy”
    09:15 — Sci-fi archetypes in real time: Pinocchio logic, and why “feels real” can be enough
    13:03 — Identity and scale: inflated agent counts, bots-on-bots dynamics, and what “real” even means now
    16:18 — Agent-to-agent futures: negotiation, coordination, and the infrastructure being built for agent workflows
    17:27 — The money question: why crypto keeps coming up as a plausible payment rail for AI agents
    19:55 — The synthetic internet problem: misinformation, trust collapse, and a likely shift from text to video agents
    26:19 — Hyperstition: how AI can “manifest” outcomes by seeding narratives humans act on
    33:40 — The long-tail risk: why pattern matching alone could still produce harmful behaviors as agents gain capabilities

    Follow AI-Curious on your favorite podcast platform: Apple Podcasts · Spotify · YouTube · All Other Platforms

    For anyone interested in Jeff’s AI Workshops for their company: reach out directly at jeff@jeffwilser.com

    39 min
4.9 out of 5 · 24 Ratings
