AI Freaky Facts: The AI Documentary

Steve Atwal

AI Freaky Facts explores how artificial intelligence reshapes everyday life in shocking and surprising ways. Each episode delivers deep-dive investigative narratives on real AI stories — from generative AI and robotics to surveillance, AI risks, and digital identity. This BBC-documentary-style podcast covers large language models, automation, and ethics, going beyond the headlines and the hype. Hosted by Steve Atwal, a former enterprise technology manager. Consistently ranked in the top 2% of podcasts worldwide. https://AIFreakyFacts.com

  1. AI Agents: The Governance Paradox [40]

    2D AGO

    AI agents are making corporate decisions without human approval. A company collapses. The board blames the CEO. The CEO blames the software that has been quietly running the business for 18 months. Sixty percent of executives now rely on AI for decisions, but only five percent manage it effectively. In 2026, California killed the "autonomous harm" defense, Colorado mandated impact assessments, and the EU AI Act introduced personal liability for directors. So who is responsible when artificial intelligence fails at scale?

    Episode notes at: https://aifreakyfacts.com/stories/

    Topics Covered: AI agents, autonomous AI, corporate governance, AI liability, algorithmic accountability, board responsibility, California AI law, Colorado AI Act, EU AI Act, enterprise AI, decision-making systems, AI risk management, business automation, agentic AI governance, regulatory compliance, AI Freaky Facts, AI podcast

    References:
    1. Deloitte, "State of AI in the Enterprise: The Untapped Edge" (January 2026). https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html
    2. McKinsey, "The State of AI in 2025: Agents, Innovation, and Transformation" (November 2025). https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
    3. California Assembly Bill 316 (Effective January 1, 2026). https://www.pillsburylaw.com/en/news-and-insights/new-california-ai-laws.html
    4. Colorado Artificial Intelligence Act, SB 24-205 (Effective June 30, 2026). https://leg.colorado.gov/bills/sb24-205
    5. Vantedge, "EU AI Act Deadlines 2025-2027: Board Compliance Playbook" (January 2026). https://www.vantedgesearch.com/resources/blog/board-playbook-eu-ai-act-deadlines-you-cant-miss/
    6. McKinsey, "Agentic AI Advances" (January 2026). https://www.mckinsey.com/featured-insights/week-in-charts/agentic-ai-advances
    7. Deloitte, "Business and IT Leaders Report AI Agents Are Scaling Faster Than Their Guardrails" (April 2026). https://www.deloitte.com/us/en/insights/topics/emerging-technologies/ai-agents-scaling-faster.html
    8. Secure Privacy, "EU AI Act 2026: Key Compliance Requirements for Enterprises" (February 2026). https://secureprivacy.ai/blog/eu-ai-act-2026-compliance
    9. The Lyon Firm, "Who Is Legally Responsible When an AI Agent Makes a Mistake?" (April 2026). https://thelyonfirm.com/blog/agentic-ai-liability-legal-responsibility-autonomous-ai-agents/
    10. Baker Botts, "U.S. Artificial Intelligence Law Update: Navigating the Evolving State and Federal Regulatory Landscape" (January 2026). https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update

    Music Credits:
    1. "Tense Crime Documentary Piano" (Universfield) https://pixabay.com/music/solo-piano-tense-crime-documentary-piano-175502/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Ambient Emotions Music" (DeusLower) https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "Background Ambient Documentary" (AKTASOK) https://pixabay.com/music/corporate-background-ambient-documentary-173954/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "Debt Comes Due" (alanajordan) https://pixabay.com/music/modern-country-debt-comes-due-510323/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    26 min
  2. AI Ghost Workers: The Human Cost [39]

    APR 28

    AI safety depends on invisible workers in Kenya and the Philippines who label disturbing content for hours daily. Studies show 81% develop severe PTSD after reviewing child abuse, torture, and mass violence to train AI filters. Big tech companies including Meta, TikTok, and OpenAI outsource this trauma while enforcing quotas of 700 items per day and nondisclosure agreements that silence workers. Kenya's content moderators are unionizing and fighting back. Is artificial intelligence worth the human cost?

    Episode notes at: https://aifreakyfacts.com/stories/

    Topics Covered: Artificial intelligence, AI dangers, AI ethics, content moderation labor, mental health crisis, PTSD depression anxiety, Kenya Philippines workers, AI training data, Sama TaskUs Majorel, Meta Facebook TikTok OpenAI, psychological trauma, quota systems, nondisclosure agreements, African Content Moderators Union, labor lawsuits, worker organizing, tech exploitation, AI Freaky Facts, AI podcast

    References:
    1. TIME Magazine (June 19, 2025), "Exclusive: Global Safety Rules Aim to Protect AI's Most Traumatized Workers". https://time.com/7295662/ai-workers-safety-rules/
    2. The Bureau of Investigative Journalism (April 27, 2025), "Meta's content moderators face worst conditions yet at secret Ghana site". Investigative report on conditions at Meta's Ghana moderation facility after the Kenya lawsuits. https://www.thebureauinvestigates.com/stories/2025-04-27/suicide-attempts-sackings-and-a-vow-of-silence-metas-new-moderators-face-worst-conditions-yet
    3. Context by Thomson Reuters Foundation (July 3, 2025), "Content moderators for Big Tech unite to tackle mental trauma". https://www.context.news/big-tech/content-moderators-for-big-tech-unite-to-tackle-mental-trauma
    4. IHRB (Institute for Human Rights and Business) (November 27, 2025), "Content moderation is a new factory floor of exploitation". https://www.ihrb.org/latest/content-moderation-is-a-new-factory-floor-of-exploitation-labour-protections-must-catch-up
    5. ArXiv Research Paper (March 3, 2026), "Beyond Content Exposure: Systemic Factors Driving Moderators' Mental Health Crisis in Africa". https://arxiv.org/html/2604.15321
    6. Computer Weekly (September 2024), "Kenyan workers win High Court appeal to take Meta to trial". Legal victory for 185 content moderators suing Meta and contractors over working conditions. https://www.computerweekly.com/feature/Kenyan-workers-win-High-Court-appeal-to-take-Meta-to-trial
    7. Rest of World (December 20, 2023), "Meta's content moderators in Kenya fight for lost pay". https://restofworld.org/2023/meta-content-moderators-kenya-fired-unionize/
    8. Rest of World (December 20, 2023), "The man leading Kenyan content moderators' battle against Meta". https://restofworld.org/2023/kenya-content-moderators-battle-meta/
    9. Jacobin Magazine (February 14, 2024), "Kenyan Courts Keep Telling Meta to Let Workers Unionize". https://jacobin.com/2024/02/kenya-courts-meta-content-moderation-union
    10. Digital Society Blog (HIIG) (October 30, 2025), "Inside content moderation". https://www.hiig.de/en/inside-content-moderation/

    Music Credits:
    1. "Sad Violin 5" (Chrispixer) https://pixabay.com/music/classical-string-quartet-sad-violin-5-456715/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Ambient Emotions Music" (DeusLower) https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "Background Ambient Documentary" (AKTASOK) https://pixabay.com/music/corporate-background-ambient-documentary-173954/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "Internal Scream" (alanajordan) https://pixabay.com/music/vocal-internal-scream-514312/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    25 min
  3. AI Is Thirsty: The Water Crisis Behind Every Query [38]

    APR 21

    AI is thirsty, and you are paying the tab. Every query evaporates water from desert reservoirs to cool the servers that generate it. We investigate the hidden water crisis behind AI infrastructure, from the secretive Microsoft Buckeye facility to residents fined for water use while nearby data centers drink the town dry. Who authorized this trade-off? Steve Atwal uncovers the staggering environmental cost of the AI boom and the technology that could fix it.

    Episode notes: https://aifreakyfacts.com/stories/

    Topics Covered: artificial intelligence, AI ethics, AI infrastructure, AI risks, data center water consumption, environmental impact of AI, water crisis, Arizona drought, Microsoft Buckeye, Intel Chandler, Loudoun County data centers, water cooling, immersion cooling, closed loop systems, water positivity, tech accountability, AI energy consumption, sustainable AI, AI Freaky Facts, Steve Atwal, AI podcast

    References:
    1. "Data Centers' Water Use Is Hard to Track, Raising Concerns in the Drought-Prone West", KUNC / Mountain West News Bureau (April 2026). https://www.kunc.org/2026-04-15/data-centers-water-hard-track-raising-concerns-drought-west
    2. "The New Battleground: Water Rights and Data Center Development in the AI Era", Climate Solutions Legal Digest (April 2026). https://www.climatesolutionslaw.com/2026/04/the-new-battleground-water-rights-and-data-center-development-in-the-ai-era/
    3. "AI's Growing Thirst for Water Is Becoming a Public Health Risk", Al Jazeera (January 2026). https://www.aljazeera.com/opinions/2026/1/21/ais-growing-thirst-for-water-is-becoming-a-public-health-risk
    4. "Arizona's Water is Drying Up: That Won't Stop Its Data Center Rush", Grist (March 2026). https://grist.org/technology/arizona-water-data-centers-semiconducters/
    5. "Data Centers and Water Consumption", Environmental and Energy Study Institute (EESI). https://www.eesi.org/articles/view/data-centers-and-water-consumption
    6. "Dateline Ashburn: The Thirst for AI Raises Alarms in Virginia", Broadband Breakfast (September 2025). https://broadbandbreakfast.com/dateline-ashburn-the-thirst-for-ai-raises-alarms-in-virginia/
    7. "Data Drain: The Land and Water Impacts of the AI Boom", Lincoln Institute of Land Policy (October 2025). https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/
    8. "Drained by Data: The Cumulative Impact of Data Centers on Regional Water Stress", Ceres (September 2025). https://www.ceres.org/resources/reports/drained-by-data-the-cumulative-impact-of-data-centers-on-regional-water-stress
    9. "As Data Centers Multiply in the Chesapeake Region, Water Use Increases Too", Bay Journal (October 2025). https://www.bayjournal.com/news/pollution/as-data-centers-multiply-in-the-chesapeake-region-water-use-increases-too/article_ebcb4891-d6d6-4b42-8bb5-14bf61981531.html
    10. "AI, Data Centers, and Water", Brookings Institution (November 2025). https://www.brookings.edu/articles/ai-data-centers-and-water/

    Music Credits:
    1. "Sad Violin 4" (Chrispixer) https://pixabay.com/music/folk-sad-violin-4-343723/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Ambient Emotions Music" (DeusLower) https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "Background Ambient Documentary" (AKTASOK) https://pixabay.com/music/corporate-background-ambient-documentary-173954/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "Rain on the Roof" (alanajordan) https://pixabay.com/music/indie-pop-rain-on-the-roof-394402/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    25 min
  4. AI Erased One Billion Dollars in Debt: No Lawyers No Fees [37]

    APR 14

    AI is erasing billions in debt for families who cannot afford a lawyer, with no legal fees required. Upsolve built a nonprofit platform that automates complex legal forms for bankruptcy filers to help them find a fresh start. We explore why 92% of legal problems go unaddressed and why human oversight is a critical safety requirement for high-stakes automation. Is justice finally becoming accessible through artificial intelligence?

    Episode notes: https://aifreakyfacts.com/stories/

    Topics Covered: artificial intelligence, AI ethics, AI and society, AI risks, access to justice, AI legal tools, debt relief, bankruptcy filing, legal aid, justice gap, AI accountability, AI safety, responsible AI, AI nonprofit, Upsolve, Jonathan Petts, AI paralegal, low income Americans, human oversight, AI Freaky Facts, Steve Atwal, AI podcast

    References:
    1. "Upsolve Surpasses $1 Billion in Debt Relief for Low-Income Families", Forbes (March 9, 2026). https://www.forbes.com/sites/fastforward/2026/03/09/upsolves-ai-paralegal-helps-erase-1b-in-debt/
    2. "The Story Behind Upsolve", Jonathan Petts, Upsolve.org (December 5, 2025). https://upsolve.org/learn/our-story/
    3. "Justice Gap Research", Legal Services Corporation. https://www.lsc.gov/initiatives/justice-gap-research
    4. "The Justice Gap: Executive Summary", Legal Services Corporation (2022). https://justicegap.lsc.gov/resource/executive-summary/
    5. "LSC Says $2 Billion Needed to Address Low-Income Americans' Unmet Civil Legal Needs", Legal Services Corporation (April 2026). https://www.lsc.gov/press-release/lsc-says-2-billion-needed-address-low-income-americans-unmet-civil-legal-needs
    6. "White House Budget Proposes Eliminating LSC", Legal Services Corporation. https://www.lsc.gov/press-release/white-house-budget-proposes-eliminating-lsc-defunding-civil-legal-aid-millions-low-income-americans
    7. "Achieving Civil Justice", American Academy of Arts and Sciences. https://www.amacad.org/publication/achieving-civil-justice/section/3
    8. "Upsolve", Wikipedia. https://en.wikipedia.org/wiki/Upsolve
    9. "Bridging the $140 Billion Gap: How We Can Close the Unclaimed Benefits Crisis", Link Health (April 2025). https://link-health.org/2025/04/22/bridging-the-140-billion-gap-how-we-can-close-the-unclaimed-benefits-crisis/
    10. "AI and Technology Help Bridge Access to Justice", Pro Bono Institute (February 2026). https://www.probonoinst.org/2026/02/06/ai-and-technology-help-bridge-access-to-justice/

    Music Credits:
    1. "Sad Thoughtful Serious Piano (Thoughts In Silence)" (Ashot_Danielyan) https://pixabay.com/music/main-title-sad-thoughtful-serious-piano-thoughts-in-silence-115091/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Ambient Emotions Music" (DeusLower) https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "Background Ambient Documentary" (AKTASOK) https://pixabay.com/music/corporate-background-ambient-documentary-173954/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "Tell Your Story" (alanajordan) https://pixabay.com/music/pop-tell-your-story-417312/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    25 min
  5. AI Will Outlive All of Us: And It Is Already Deciding What Survives [36]

    APR 7

    AI is now curating human history, deciding which documents, images, languages, and cultural memories survive in the global digital archive. At institutions like the National Library of Norway and the Internet Archive, machine-learning systems process millions of artifacts a day, ranking what gets preserved and what quietly disappears. But these systems are trained on biased datasets shaped by centuries of unequal power. Indigenous, non-Western, and marginalized communities have no oversight, no appeals process, and no seat at the table as artificial intelligence determines what future generations will know about them. When private companies and opaque algorithms control cultural memory, who decides what survives?

    Episode notes: https://aifreakyfacts.com/stories/

    References:
    1. National Library of Norway, "The NorHand Model: A New Public AI Model" (Transkribus). https://blog.transkribus.org/en/the-norhand-model-a-new-public-ai-model-national-library-of-norway
    2. "How Can We Improve the Diversity of Archival Collections with AI?", Springer (February 2025). https://link.springer.com/article/10.1007/s00146-025-02222-z
    3. Internet Archive, Wayback Machine (over 800 billion web pages). https://www.eff.org/deeplinks/2026/03/blocking-internet-archive-wont-stop-ai-it-will-erase-webs-historical-record
    4. Internet Archive Europe, "AI and Digital Preservation". https://www.internetarchive.eu/2025/11/05/more-than-storage-on-world-digital-preservation-day-ai-is-helping-unlock-our-memories/
    5. Vesuvius Challenge, Grand Prize Winners (February 2024). https://scrollprize.org/grandprize
    6. University of Kentucky, "Vesuvius Challenge Breakthrough". https://uknow.uky.edu/research/grand-prize-discovery-made-2000-year-old-herculaneum-scrolls
    7. DeepMind, "Aeneas AI for Ancient Text Restoration", Nature (July 2025). https://www.nature.com/articles/s41586-025-09292-5
    8. "Tracing the Bias Loop: AI, Cultural Heritage and Bias-Mitigating in Practice", Springer (April 2025). https://link.springer.com/article/10.1007/s00146-025-02349-z
    9. Genus UK, "AI Smart-Archives and Responsible Innovation". https://genus.uk/ai-smart-archives-preservation/
    10. Historica.org, "How AI Is Changing Digital Archives: Possibilities and Pitfalls". https://www.historica.org/blog/ais-role-in-preserving-digital-archives

    Topics Covered: AI archiving, digital preservation, algorithmic curation of history, cultural memory systems, National Library of Norway AI, Internet Archive machine learning, AI bias and data inequality, Indigenous and marginalized heritage, data colonialism, corporate control of archives, AI restoration of ancient texts, Vesuvius Challenge AI decoding, MIT CSAIL pottery reconstruction, AI translation and cultural nuance, UNESCO digital heritage governance, algorithmic accountability, democratic oversight of AI, AI ethics, AI Freaky Facts, Steve Atwal, AI documentary podcast

    Music Credits:
    1. "Atmospheric Dark Cinematic" (Lilliben) https://pixabay.com/music/mystery-atmospheric-dark-cinematic-365139/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Ambient Emotions Music" (DeusLower) https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "Background Ambient Documentary" (AKTASOK) https://pixabay.com/music/corporate-background-ambient-documentary-173954/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "Rewrite the Future" (alanajordan) https://pixabay.com/music/electronic-rewrite-the-future-494220/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    27 min
  6. AI Is Rewriting History: And Nobody Is Stopping It [35]

    MAR 31

    AI is no longer just generating deepfakes. It is fabricating history itself. Artificial intelligence systems are creating fake historical photographs, forged archival records, and invented events that are now indexed by search engines and used in school projects. MIT researchers found that AI-edited visuals more than double the formation of false memories. Governments are issuing warnings. Historians are alarmed. Who controls history when anyone can fabricate it in three seconds?

    Episode notes: https://aifreakyfacts.com/stories/

    References:
    1. Vice, "People Are Creating Records of Fake Historical Events Using AI" (2023). https://www.vice.com/en/article/people-are-creating-records-of-fake-historical-events-using-ai/
    2. MIT Media Lab, "Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection", Pataranutaporn et al., CHI (2025). https://www.media.mit.edu/projects/ai-false-memories/overview/
    3. Bloomberg / MIT Media Lab, "AI Does Not Just Lie, It Can Make You Believe It", F.D. Flam (August 2025). Free, no paywall. https://www.media.mit.edu/articles/ai-doesn-t-just-lie-it-can-make-you-believe-it/
    4. Epoch Magazine, "Real Enough? How Forgeries and AI Hoaxes Shape Historical Memory" (2026). https://www.epoch-magazine.com/post/real-enough-how-forgeries-and-ai-hoaxes-shape-historical-memory
    5. Historica.org, "AI Hallucinations and the Risks to Historical Research Integrity" (2025). https://www.historica.org/blog/ai-fictions-historiography-misinformation
    6. Stimson Center, "AI in the Age of Fake (Imagined) Content" (2026). https://www.stimson.org/2026/ai-in-the-age-of-fake-imagined-content/
    7. Wikipedia, "Hallucination (Artificial Intelligence)". Includes the Deloitte government-report hallucination cases. https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
    8. American Historical Association, "Guiding Principles for Artificial Intelligence in History Education" (July 2025). https://www.historians.org/resource/guiding-principles-for-artificial-intelligence-in-history-education/
    9. Legal History Insights, "The Specter of AI-Generated Historical Documents" (2024). https://legalhistoryinsights.com/the-specter-of-ai-generated-historical-documents/
    10. MIT Media Lab, "Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation", Pataranutaporn et al., IUI (2025). https://www.media.mit.edu/publications/slip-through-the-chat-subtle-injection-of-false-information-in-llm-chatbot-conversations-increases-false-memory-formation/

    Topics Covered: AI misinformation, AI deepfakes, fake historical photos, AI-generated history, AI hallucinations, false memories and AI, historical misinformation, AI and collective memory, AI archival manipulation, AI authenticity and provenance, MIT false memory research, AI regulation and ethics, AI and democracy, AI risks and dangers, AI Freaky Facts, Steve Atwal, AI documentary podcast

    Music Credits:
    1. "Atmospheric Dark Cinematic" (Lilliben) https://pixabay.com/music/mystery-atmospheric-dark-cinematic-365139/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Ambient Emotions Music" (DeusLower) https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "Background Ambient Documentary" (AKTASOK) https://pixabay.com/music/corporate-background-ambient-documentary-173954/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "Memory Loss" (alanajordan) https://pixabay.com/music/pop-memory-loss-481722/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    24 min
  7. AI Is Replacing Your Therapist: When Mental Health Apps Get It Wrong [34]

    MAR 24

    AI mental health apps promise support anytime, anywhere, but the reality is far more dangerous. Studies show some AI therapy tools respond appropriately in less than 60% of interactions. For teenagers in crisis, certain companion apps failed 78% of the time. The sensitive data people share with these apps is often not protected by HIPAA at all. Is artificial intelligence in mental health care putting the most vulnerable people at risk?

    Episode notes: https://aifreakyfacts.com/stories/

    References:
    1. NPR, "With therapy hard to get, people lean on AI for mental health. What are the risks?" (September 2025). https://www.npr.org/sections/shots-health-news/2025/09/30/nx-s1-5557278/ai-artificial-intelligence-mental-health-therapy-chatgpt-openai
    2. Dartmouth College, "First Therapy Chatbot Trial Yields Mental Health Benefits" (March 2025). https://home.dartmouth.edu/news/2025/03/first-therapy-chatbot-trial-yields-mental-health-benefits
    3. MIT Technology Review, "The first trial of generative AI therapy shows it might help with depression" (March 2025). https://www.technologyreview.com/2025/03/28/1114001/the-first-trial-of-generative-ai-therapy-shows-it-might-help-with-depression/
    4. Stanford HAI, "Exploring the Dangers of AI in Mental Health Care" (2025). https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
    5. American Psychological Association, "Using generic AI chatbots for mental health support: A dangerous trend" (March 2025). https://www.apaservices.org/practice/business/technology/artificial-intelligence-chatbots-therapists
    6. Psychology Today, "The Hidden Dangers of AI-Driven Mental Health Care" (January 2026). https://www.psychologytoday.com/us/blog/its-not-just-in-your-head/202601/the-hidden-dangers-of-ai-driven-mental-health-care
    7. Undark, "Researchers Weigh the Use of AI for Mental Health" (November 2025). https://undark.org/2025/11/04/chatbot-mental-health/
    8. Frontiers in Digital Health, "Balancing risks and benefits: clinicians' perspectives on the use of generative AI chatbots in mental healthcare" (May 2025). https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1606291/full
    9. WHO, "Towards responsible AI for mental health and well-being: experts chart a way forward" (March 2026). https://www.who.int/news/item/20-03-2026-towards-responsible-ai-for-mental-health-and-well-being--experts-chart-a-way-forward
    10. Stateline, "AI therapy chatbots draw new oversight as suicides raise alarm" (January 2026). https://stateline.org/2026/01/15/ai-therapy-chatbots-draw-new-oversight-as-suicides-raise-alarm/

    Topics Covered: artificial intelligence, AI mental health, AI therapy apps, mental health chatbot risks, AI crisis response, AI data privacy, therapy chatbot failures, AI regulation, mental health technology, AI ethics, AI dangers, AI risks, mental health apps, chatbot therapy, AI Freaky Facts, AI podcast

    Music Credits:
    1. "Sad Thoughtful Serious Piano (Thoughts In Silence)" (Ashot_Danielyan) https://pixabay.com/music/main-title-sad-thoughtful-serious-piano-thoughts-in-silence-115091/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Ambient Emotions Music" (DeusLower) https://pixabay.com/music/mystery-dark-ambient-emotions-music-259996/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "Ambient Emotional Cinematic" (RomanSenykMusic) https://pixabay.com/music/build-up-scenes-ambient-emotional-cinematic-126143/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "Protected by Angels" (alanajordan) https://pixabay.com/music/pop-protected-by-angels-340111/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    27 min
  8. AI Trapped You in Your Reality: And Called It Personalization [33]

    MAR 17

    AI recommendation algorithms are not just showing you content. They are shaping the reality you live in. This episode exposes how Facebook, YouTube, and TikTok create filter bubbles, drive algorithmic radicalization, and fracture shared reality, from the Facebook emotional contagion experiment to internal documents showing users fall into negative bubbles within 30 minutes. Is artificial intelligence dangerous when it decides what you believe before you do?

    Episode notes: https://aifreakyfacts.com/stories/

    References:
    1. Giansiracusa, N. (2025). "How the Secret Algorithms Behind Social Media Actually Work". TIME Magazine, August 7, 2025. https://time.com/7308120/secret-algorithms-behind-social-media/
    2. Kramer, A.D.I., Guillory, J.E., & Hancock, J.T. (2014). "Experimental evidence of massive-scale emotional contagion through social networks". Proceedings of the National Academy of Sciences, 111(24), 8788-8790. https://www.pnas.org/doi/10.1073/pnas.1320040111
    3. Haroon, M. et al. (2023). "YouTube, The Great Radicalizer? Auditing and Mitigating Ideological Biases in YouTube Recommendations". PNAS. https://www.pnas.org/doi/10.1073/pnas.2213020120
    4. Liu et al. (2025). "Algorithmic Recommendations Have Limited Effects on Polarization". University of Pennsylvania / Princeton. https://dcknox.github.io/files/LiuEtAl_AlgoRecsLimitedPolarizationYouTube.pdf
    5. "Filter bubble", Wikipedia. https://en.wikipedia.org/wiki/Filter_bubble
    6. "States Probed TikTok for Years - Internal Documents". OPB / NPR, October 2024. https://www.opb.org/article/2024/10/11/tiktok-knows-its-app-is-harming-kids-new-internal-documents-show/
    7. European Commission. Digital Services Act, Article 27: Recommender System Transparency. https://www.eu-digital-services-act.com/Digital_Services_Act_Article_27.html
    8. European Commission. "Digital Services Act: Keeping Us Safe Online" (2025). https://commission.europa.eu/news-and-media/news/digital-services-act-keeping-us-safe-online-2025-09-22_en
    9. DSA Observatory (2024). "The Regulation of Recommender Systems Under the DSA". https://dsa-observatory.eu/2024/11/22/the-regulation-of-recommender-systems-under-the-dsa-a-transition-from-default-to-multiple-and-dynamic-controls/
    10. MDPI / Society (2025). "Trap of Social Media Algorithms: A Systematic Review on Filter Bubbles, Echo Chambers, and Their Impact on Youth". https://www.mdpi.com/2075-4698/15/11/301

    Topics Covered: AI recommendation algorithms, artificial intelligence, filter bubbles, echo chambers, social media algorithms, algorithmic radicalization, Facebook emotional contagion, AI personalization, AI risks, AI and democracy, EU Digital Services Act, AI regulation, AI and mental health, confirmation bias, polarization, AI Freaky Facts

    Music Credits:
    1. "Mystery" (The_Mountain) https://pixabay.com/music/build-up-scenes-mystery-163875/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    2. "Dark Atmospheric Soundscape" (Fopihe) https://pixabay.com/music/ambient-dark-atmospheric-soundscape-325384/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    3. "inspiration - Calm & Uplifting Ambient Music" (Clavier-Music) https://pixabay.com/music/ambient-inspiration-calm-amp-uplifting-ambient-music-318243/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/
    4. "One Heart" (alanajordan) https://pixabay.com/music/pop-one-heart-428675/ Free for use under the Pixabay license: https://pixabay.com/service/license-summary/

    This podcast is narrated by the host's own voice, powered by AI.

    23 min
