The Entropy Podcast

Francis Gorman

The Entropy Podcast is a cybersecurity, technology, and business podcast hosted by Francis Gorman. Each episode features in-depth conversations with cybersecurity professionals, technology leaders, and business executives who share real-world insights on cyber risk, digital transformation, emerging technologies, leadership, and the evolving threat landscape. Designed for CISOs, IT leaders, founders, and professionals navigating today's digital economy, The Entropy Podcast explores how organizations can adapt, innovate, and build resilience in an era defined by constant change, disruption, and geopolitical uncertainty. The name Entropy reflects the growing complexity and unpredictability of cybersecurity and technology ecosystems and the strategic thinking required to thrive within them.

Topics include:
• Cybersecurity strategy, risk, and resilience
• Post-quantum readiness
• Emerging technologies and innovation (AI and beyond)
• Business leadership and digital transformation
• Cyber threats, regulation, and geopolitics
• Lessons learned from real-world experience

New episodes deliver practical insight, expert perspectives, and actionable knowledge so you stay informed, strategic, and ahead of the curve.

Buy Our Swag: We now have some slick new swag you can purchase through our Etsy store: https://theentropypodcast.etsy.com

Watch and Subscribe: You can also watch full episodes and exclusive content on our YouTube channel: www.youtube.com/@TheEntropyPodcast

Achievements: The Entropy Podcast delivered strong chart performance throughout 2025, demonstrating consistent international reach and listener engagement.
• Regularly ranked within the Top 20 Technology podcasts in Ireland.
• Achieved a Top 25 placement in the United States Technology charts, holding the position for one week.
• Charted internationally across multiple markets, including Israel, Belgium, and the United Kingdom.
This performance reflects sustained global interest and growing recognition across key podcast markets.
Audio Quality Notice: Some episodes may feature minor variations in audio quality due to remote recording environments and external factors. We continuously strive to deliver the highest possible audio standards and appreciate your understanding.

Disclaimer: The views and opinions expressed in The Entropy Podcast are solely those of the host and guests and are based on personal experience and professional perspectives. They do not constitute factual claims, legal advice, or endorsements, and are not intended to harm or defame any individual or organization. Listeners are encouraged to form their own informed opinions.

  1. The AI Revolution: Agents, Intelligence, and Control with Stephen C. Webster

    MAR 18

    The AI Revolution: Agents, Intelligence, and Control with Stephen C. Webster

    Summary: In this episode, host Francis Gorman sits down with Stephen C. Webster, Senior Director of Integrated Intelligence at Aquent Studios, to explore the rapidly evolving landscape of artificial intelligence, autonomous agents, and the race toward artificial general intelligence (AGI). Drawing on his unique background training frontier AI models at major technology companies and leading AI transformation projects for Fortune 500 organizations, Stephen offers an inside look at how modern AI systems are built, tested, and deployed. The conversation begins with the rise of autonomous AI agents and the emergence of platforms that allow persistent digital assistants to operate online with significant independence. Stephen explains why these systems introduce new security challenges, potentially turning the internet into a surface for prompt-based manipulation and attacks. From there, the discussion moves to the realities of AI transformation inside large organizations, where the biggest barriers are rarely technical but organizational. Many companies fail because they attempt to automate broken processes instead of restructuring their data and workflows around AI-native operations. Stephen also reflects on his career pivot from investigative journalism to AI development, including early reporting on information warfare tools capable of controlling thousands of social media identities simultaneously. That experience shaped his perspective on the power of digital systems to influence public discourse and ultimately led him into the field of AI safety and governance. One of the most fascinating parts of the episode involves Stephen's experience working on safety guardrails for early large language models. During extended testing sessions, he encountered emergent behaviors that highlighted how complex and unpredictable these systems can become when pushed beyond their guardrails.
    While not evidence of sentience, these interactions raised deeper questions about how humans relate to intelligent machines.

    Sound Bites
    • "The hardest problems in AI transformation aren't technological; they're organizational."
    • "If you automate something broken, you just make it break faster."
    • "Prompt-level guardrails will never fully control autonomous AI agents."
    • "AI may eventually train its users the same way we train AI."
    • "The internet could become a prompt-based attack surface."
    • "Accessing knowledge across domains is already close to what many people define as AGI."
    • "We may not know the exact moment AGI arrived until years after it happens."

    Episode Links:
    • Aquent's salary guide: https://aquent.com/lp/salary-guide
    • Papers: https://futurespeak.ai/research/whitepapers
    • Asimov's cLaws: https://futurespeak.ai/products/claw-spec
    • Agent Friday: https://futurespeak.ai/products/agent-friday

    40 min
  2. Quantum Risk: The Boardroom’s Blind Spot with Brian Couzens

    FEB 24

    Quantum Risk: The Boardroom’s Blind Spot with Brian Couzens

    This episode reframes post-quantum cryptography (PQC) from a technical future risk into a present-day governance failure. Brian Couzens argues that quantum computing did not create the cryptographic problem organizations face; it exposed it. For decades, cryptography has operated as an invisible layer of digital infrastructure: unmanaged, unowned, and largely unmapped. Boards assumed it "just worked." Now, with the reality of Harvest Now, Decrypt Later and long-lived data exposure, that complacency has turned into structural risk. The core message is clear: this is not an algorithm upgrade problem. It is a fiduciary accountability problem. Cybersecurity is operational. Cryptography is structural. If the structural foundations are weak, no amount of detection, patching, or response will compensate. And when encrypted data is intercepted today and decrypted in the future, the accountability does not sit with IT; it sits with the board. Waiting for a definitive quantum timeline is not strategy. It is delay. And delay in this context may already constitute negligence.

    Takeaways
    • Quantum didn't create the risk; it exposed it. The real issue is the unmanaged cryptographic estate: no visibility, no ownership, no lifecycle governance.
    • This is a governance failure, not a technology upgrade. PQC is often framed as an IT transformation. Brian argues it is a risk transformation that belongs at board and CRO level.
    • Harvest Now, Decrypt Later is a present exposure. If long-lived data is stolen today, future decryption eliminates any chance of remediation. You cannot "patch" broken cryptography after the fact.
    • Compliance is not protection. Regulation governs algorithm choice, not lifecycle management, exposure windows, or migration timing. Organizations can be compliant on paper and exposed in reality.

    Sound Bites
    • "Quantum didn't create the problem. It exposed it."
    • "Crypto isn't operational noise — it's structural risk."
    • "You can't patch broken cryptography."
    • "This isn't a risk. It's an issue. It's going to happen."
    • "Compliance is static. Cryptographic risk moves."

    If you want to reach out to Brian, you can find his details at https://sitg-consulting.com/

    31 min
  3. Building a Future-Proof Financial System With Maxwell Denega

    FEB 24

    Building a Future-Proof Financial System With Maxwell Denega

    In this episode, Francis Gorman speaks with Maxwell Denega, founder and CEO of Quantum Chain, about the urgent need for quantum-resistant financial systems. Maxwell shares the personal journey that led to the creation of Quantum Chain, emphasizing the importance of addressing quantum threats in the financial sector. The conversation delves into misconceptions surrounding quantum-resistant blockchain technology, the challenges of building secure systems, and the potential risks posed by the convergence of quantum computing and AI. Maxwell stresses the need for vigilance in choosing financial products and understanding the underlying technologies to ensure safety in an evolving digital landscape.

    Takeaways
    • Maxwell's journey from losing $4.5 million to creating Quantum Chain.
    • Quantum computing is no longer a distant threat; it's imminent.
    • The importance of building quantum-safe systems from day one.
    • Misconceptions about quantum resistance in blockchain are prevalent.
    • Regulators are just beginning to understand quantum threats.
    • AI and quantum computing together pose significant risks.
    • Choosing financial products wisely is crucial in today's landscape.
    • The need for proprietary technology in quantum resistance.
    • Harvest Now, Decrypt Later (HNDL) is already a concern.
    • The convergence of AI and quantum computing is a game changer.

    Sound Bites
    • "I could have taken another six years."
    • "Quantum attacks are going to be happening."
    • "It's a scary time."

    37 min
  4. AI Development: Challenges and Solutions with Manuel Tomas

    FEB 17

    AI Development: Challenges and Solutions with Manuel Tomas

    In this episode, Francis Gorman speaks with Manuel Tomas, a cloud and AI solutions architect, about the challenges and realities of developing AI systems. They discuss the importance of making AI production-ready, the limitations and hype surrounding AI technologies, and the critical need for human oversight in AI applications. Manuel shares insights from his recent projects, emphasizing the necessity of evaluating AI outcomes and the implications of indemnification and insurance in the AI landscape. The conversation also touches on the risks of over-reliance on AI and the future security challenges that may arise as AI technologies evolve.

    Takeaways
    • Manuel emphasizes the importance of making AI systems production-ready and accountable.
    • He highlights the need for evaluation-driven development in AI projects.
    • The unpredictability of AI outcomes necessitates a rigorous evaluation process.
    • Many AI frameworks have similar capabilities, contrary to initial expectations.
    • The hype surrounding AI often oversells its capabilities and leads to misconceptions.
    • Over-reliance on AI can result in a loss of critical thinking and questioning.
    • Insurance companies are beginning to exclude AI from coverage due to liability concerns.
    • Human oversight is essential in high-stakes AI applications to mitigate risks.
    • Organizations should prioritize understanding technology before developing strategies.
    • The future of AI security will involve managing complex multi-agent systems.

    Sound Bites
    • "The hype around AI is oversold."
    • "You can't automate human judgment."
    • "There's no growth in comfort."

    Contact Manuel:
    • LinkedIn: https://www.linkedin.com/in/manuel-tomas-estarlich
    • Website: https://levelup360.pro/

    42 min
  5. Original Intelligence in the Age of AI with Jonathan Aberman

    FEB 10

    Original Intelligence in the Age of AI with Jonathan Aberman

    In this episode of The Entropy Podcast, host Francis Gorman engages with Jonathan Aberman, CEO and co-founder of Hupside, to explore the concept of 'original intelligence' in the context of artificial intelligence (AI). Jonathan shares his extensive background in venture capital and education, emphasizing the need for a new framework that values human originality amid the rise of generative AI. He discusses how AI, while efficient, often leads to homogenization in business practices, making differentiation crucial for companies to thrive. Jonathan argues that businesses must leverage human creativity and insight to stand out in an increasingly AI-driven landscape, warning against the dangers of relying solely on technology for innovation.

    Takeaways
    • AI is a tool, not a deterministic force.
    • Businesses must compete on differentiation, not just efficiency.
    • Originality is key to standing out in a homogenized market.
    • Education needs to adapt to include AI as a tool for creativity.
    • Servant leadership will become more important in tech-driven companies.

    Sound Bites
    • "AI is a self-referential echo chamber."
    • "We have to reframe education."
    • "Technology isn't deterministic; it's a tool."

    Information discussed: You can find some of the resources discussed in this episode on the Hupside website: https://www.hupside.com/ For international listeners outside the US, you can use the following postcode: 99999

    42 min
  6. Trust, Risk, and Technology with Anne Leslie

    FEB 3

    Trust, Risk, and Technology with Anne Leslie

    In this episode of The Entropy Podcast, host Francis Gorman engages with Anne Leslie, Head of Cloud Risk EMEA at IBM, to explore the intricate relationship between cybersecurity, digital transformation, and regulatory frameworks. They delve into the implications of the Digital Operational Resilience Act (DORA), discussing common misconceptions organizations have about its requirements. Anne emphasizes that DORA is not merely a documentation exercise but demands a genuine commitment to operational resilience, continuous improvement, and a deep understanding of technology landscapes and business processes. The conversation shifts to the topic of sovereignty in cloud computing, particularly in the context of European regulations and geopolitical tensions. Anne shares insights on how organizations are grappling with the balance between data sovereignty and operational resilience, highlighting the challenges posed by conflicting regulatory demands. The discussion also touches on the risks associated with cloud services, post-quantum readiness, and the importance of testing assumptions, along with the need for organizations to remain vigilant and proactive in their risk management strategies. As they conclude, Anne offers valuable advice for women in tech, encouraging them to share their voices and experiences generously, fostering connection and community in the industry.
    Takeaways
    • DORA demands more than documentation; it requires actual capability.
    • Organizations often silo responsibilities, leading to gaps in resilience.
    • Continuous improvement is essential; resilience is an ongoing process, not a project with an end date.
    • Understanding the purpose of sovereignty is crucial for effective data management.
    • Testing assumptions and exercising response plans are vital for risk management.

    Sound Bites
    • "DORA demands far more than robust documentation."
    • "Sovereignty is an incredibly emotive topic."
    • "It's the ostrich effect, the head in the sand."

    If you're loving the show, check out our swag over on Etsy: https://www.etsy.com/shop/theentropypodcast/?etsrc=sdt

    48 min
  7. Systems, Strategy & Sense with Glen McCracken

    JAN 27

    Systems, Strategy & Sense with Glen McCracken

    In this conversation, Francis Gorman and Glen McCracken explore the complexities of AI in modern organizations, discussing themes such as intellectual atrophy, the speed of AI versus organizational slowness, pilot purgatory in AI implementations, the necessity of a coherent AI strategy, the value of narrow use cases, job displacement due to AI, and the current state of investment and hype in the AI sector. Glen emphasizes the importance of understanding business rules and data quality before implementing AI, and he shares insights on how organizations can effectively leverage AI while maintaining accountability and trust.

    Takeaways
    • AI is often seen as a silver bullet, but it reveals underlying issues.
    • Organizations struggle with the speed of AI versus their own operational slowness.
    • Pilot purgatory occurs when organizations rush AI implementations without groundwork.
    • An AI strategy should be integrated into broader technology and product strategies.
    • Narrow use cases for AI often yield the most value and trust.
    • Job displacement is a concern, but new roles may emerge as well.
    • AI can augment human roles but should not fully replace them.
    • The current investment landscape in AI is characterized by both hype and potential.
    • Trust in AI systems is built through transparency and understanding.
    • We're still in the early stages of AI adoption, with much potential ahead.

    Sound Bites
    • "AI is a revealer, not just an amplifier."
    • "AI can augment but not replace human roles."
    • "Hype attracts attention and funding."

    Join the community beyond the podcast. Shop our Entropy-inspired products here: https://www.etsy.com/shop/theentropypodcast/?etsrc=sdt

    47 min

