AITEC Philosophy Podcast

AITEC

Welcome to The AITEC Podcast, where we explore the ethical side of AI and emerging tech. We call our little group the AI and Technology Ethics Circle (AITEC). Visit ethicscircle.org for more info.

  1. APR 1

    #31: Jacob Browning: Unmasking the Fake Minds of Large Language Models

    Have you ever wondered whether AI models actually understand the words they generate, or whether they are just really good at faking it? On this episode of The AITEC Podcast, Roberto García and Sam Bennett are joined by philosopher Jacob Browning (Baruch College, CUNY) to unpack his article "Intentionality All-Stars Redux: Do language models know what they are talking about?" Using a clever baseball-diamond metaphor and drawing on the philosophy of Immanuel Kant, Jacob explains why Large Language Models lack the "intentionality" required for genuine comprehension. We cover:

    - First Base (Formal Competence): why LLMs struggle with basic logic and negation, revealing the absence of an underlying logical engine.

    - Second Base (Rationality): why true understanding requires purposive behavior, and how LLMs hilariously fail at "intuitive physics" (like trying to inflate a couch to get it onto a roof).

    - Shortstop (Objectivity and World Models): why genuine understanding requires grasping an objective, mind-independent world that determines whether sentences are true or false, and how the lack of a coherent "world model" leads LLMs to fail at tasks that require planning for counterfactual situations (like predicting where a billiard ball will go or playing simple video games).

    - Third Base (The Unified Self): why making a claim requires a persistent self that takes responsibility for its beliefs, something a next-token predictor simply cannot do.

    Whether you're exploring the intersection of AI, technology, and ethics, or just trying to figure out if your chatbot actually knows what it's saying, this conversation will give you the philosophical toolkit to see through the illusion.

    1h 10m
  2. FEB 24

    #29: Justin Tiehen: Why AI Can't Make a Promise—The Hidden Limits of Large Language Models

    Have you ever felt like ChatGPT genuinely understands you? What if the reality is that it doesn't even have the foundational capacity to "speak" to you at all? On this episode of The AITEC Podcast, Roberto Carlos García and Sam Bennett sit down with philosopher Justin Tiehen (University of Puget Sound) to unpack his fascinating new paper, "LLMs Lack a Theory of Mind and So Can't Perform Speech Acts: A Causal Argument." Justin takes us on a deep dive into the philosophy of mind to explain why current Large Language Models, despite their impressive output, are essentially just faking it. We explore why next-token predictors are completely missing the causal architecture required to have a "Theory of Mind," and why, without one, they are fundamentally incapable of making assertions, giving orders, or performing true speech acts. Key takeaways from this episode:

    - The Ladder of Causation: why AI is stuck observing statistical correlations and cannot grasp true causal interventions or counterfactuals (drawing on Judea Pearl's work).

    - The Speech Act Problem: why performing a true "speech act" requires the deliberate intention to influence another person's mind.

    - Cheating the Benchmarks: how LLMs "cheat" on psychological exams like the Sally-Anne false-belief test simply by memorizing statistical patterns in text.

    - The Threat of AI Blackmail: what it would actually look like if an AI possessed a Theory of Mind and strategically tried to manipulate human behavior to achieve its goals.

    Whether you are deeply invested in the philosophy of language or just trying to figure out how much you should trust your favorite AI assistant, this conversation will completely reframe how you view generative AI. Learn more about our work and join the conversation at ethicscircle.org.

    1h 7m

Ratings & Reviews

5 out of 5 (3 Ratings)
