Alibis and Algorithms

JR DeLaney

Alibis and Algorithms is where mysteries meet machine intelligence—and the truth gets complicated. Each week, we investigate real cases, strange disappearances, unsolved crimes, digital deceptions, and the gray areas where human judgment collides with artificial intelligence. From cold cases and courtroom controversies to algorithmic bias and forensic breakthroughs, this show asks a bold question: What happens when we let the machines examine our alibis?

Hosted by JR, this isn’t a podcast that blindly trusts technology—or dismisses it. We dig into the backstory. We examine what investigators tried. We analyze what the data revealed. And then we confront the uncomfortable reality: AI can expose patterns humans miss… but it can also inherit our blind spots.

Some episodes are full-length investigations that unpack a single case step by step. Others explore emerging tech reshaping law enforcement, digital evidence, surveillance, and truth itself. Occasionally, we zoom out to ask the bigger philosophical question: If algorithms can predict behavior, what does that mean for justice, free will, and the stories we tell about guilt and innocence?

This isn’t about replacing detectives with code. It’s about interrogating the data. Challenging assumptions. And investigating truth in the age of algorithms. If you love true crime, are curious about AI, and enjoy smart, thoughtful storytelling that refuses easy answers—welcome to your new obsession. The evidence is waiting.

Episodes

  1. The Invisible AI - Part 3: Your Bias Is Showing — And So Is the Algorithm's

    5 DAYS AGO

    Interact with us NOW! Send a text and state your mind. Episode 3 of The Invisible AI asks the hardest question yet: what if the math itself is the problem? Tour Guide JR D and AI research companion Ada explore why "just fix the data" isn't enough — and why algorithmic bias runs deeper than dirty training sets. From Amazon's gender-biased hiring tool (2018), to the Optum healthcare algorithm that mistook systemic inequity for health status, to COMPAS criminal risk scores and their proven mathematical fairness trade-offs, to the self-reinforcing feedback loops of predictive policing — this episode maps the full, layered architecture of AI bias. We also cover the explosive Workday hiring AI lawsuit (Mobley v. Workday, 2024–2025), the SafeRent $2.275M settlement, and the EU AI Act's phased rollout — plus a clear-eyed look at proxy variables, the Chouldechova and Kleinberg impossibility theorems, and the human values embedded in every algorithmic design choice. Featuring verified quotes from Dr. Joy Buolamwini (Algorithmic Justice League), Cathy O'Neil (Weapons of Math Destruction), Dr. Aylin Caliskan (University of Washington), and Google CEO Sundar Pichai.

    REFERENCES

    Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

    Buolamwini, J. (2017). How I'm fighting bias in algorithms [TED Talk]. TED Conferences.

    Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.

    Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047

    Cohen Milstein Sellers & Toll PLLC. (2024, November 20). Rental applicants using housing vouchers settle ground-breaking discrimination class action against SafeRent Solutions.

    Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.

    Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580.

    Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Proceedings of Machine Learning Research, 81 (FAccT '18).

    Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017).

    Mobley v. Workday, Inc. (2023–ongoing). U.S. District Court, N.D. California. Case No. 3:23-cv-00770-RFL.

    Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

    O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

    Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

    Pichai, S. (2024, February 28). Internal memo on Gemini image generation [Leaked to media]. Reported by Semafor and The Verge.

    U.S. Senate Permanent Subcommittee on Investigations. (2024, October 17). Refusal of recovery: How Medicare Advantage insurers have denied patients access to post-acute care. U.S. Senate.

    Wilson, K., Gueorguieva, A.-M., Sim, M., & Caliskan, A. (2025). People mirror AI systems' hiring biases. University of Washington News.

    Wilson, K., & Caliskan, A. (2024). Gender, race, and…

    47 min
