254 episodes

The ISF Podcast brings you cutting-edge conversation, tailored to CISOs, CTOs, CROs, and other global security pros. In every episode of the ISF Podcast, Chief Executive Steve Durbin speaks with rule-breakers, collaborators, culture builders, and business creatives who manage their enterprise with vision, transparency, authenticity, and integrity. From the Information Security Forum, the leading authority on cyber, information security, and risk management.

ISF Podcast - Information Security Forum Podcast

    • Business

    Brian Lord - AI, Mis- and Disinformation in Election Fraud and Education

    This is the second of a two-part conversation between Steve and Brian Lord, currently the Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as Deputy Director of a UK Government agency, where he oversaw its cyber and intelligence operations. Today, Steve and Brian discuss the proliferation of mis- and disinformation online, the potential security threats posed by AI, and the need to educate children in cyber awareness from a young age.

    Key Takeaways:
    1. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.
    2. AI’s increasing ability to create fabricated images poses a particular threat to youth and other vulnerable users.

    Tune in to hear more about:
    1. Brian gives his assessment of cybersecurity threats during election years. (16:04)
    2. Exploitation of vulnerable users remains a major concern in the digital space, requiring awareness, innovative countermeasures, and regulation. (31:0)

    Standout Quotes:

    1. “I think when we look at AI, we need to recognize it is a potentially long term larger threat to our institutions, our critical mass and infrastructure, and we need to put in countermeasures to be able to do that. But we also need to recognize that the most immediate impact on that is around what we call high harms, if you like. And I think that was one of the reasons the UK — over a torturously long period of time — introduced the Online Harms Bill to be able to counter some of those issues. So we need to get AI in perspective. It is a threat. Of course it is a threat. But I see then when one looks at AI applied in the cybersecurity test, you know, automatic intelligence developing hacking techniques, bear in mind, AI is available to both sides. It's not just available to the attackers, it's available to the defenders. So what we are simply going to do is see that same kind of thing that we have in the more human-based countering the cybersecurity threat in an AI space.” -Brian Lord

    2. “The problem we have now — now, one can counter that by the education of children, keeping them aware, and so on and so forth — the problem you have now is the ability, because of the availability of imagery online and AI's ability to create imagery, one can create an entirely fabricated image of a vulnerable target and say, this is you. Even though it isn’t … when you're looking at the most vulnerable in our society, that's a very, very difficult thing to counter, because it doesn't matter whether it's real to whoever sees it, or the fear from the most vulnerable people, people who see it, they will believe that it is real. And we've seen that.” -Brian Lord


    Mentioned in this episode:
    • ISF Analyst Insight Podcast

    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.

    • 23 min
    Brian Lord - Lost in Regulation: Bridging the cyber security gap for SMEs

    This episode is the first of two conversations between Steve and Brian Lord, who is currently the Chief Executive Officer of Protection Group International. Prior to joining PGI, Brian served as Deputy Director of a UK Government agency, where he oversaw its cyber and intelligence operations. He brings his knowledge of both the public and private sectors to bear in this wide-ranging conversation. Steve and Brian touch on the challenges small and mid-sized enterprises face in implementing cyber defenses, what effective cooperation between government and the private sector looks like, and the role insurance may play in cybersecurity.


    Key Takeaways:
    1. A widespread, societal approach involving both the public and private sectors is essential to address the increasingly complex risk landscape of cyber attacks.
    2. At the public or governmental levels, there is an increasing need to bring affordable cyber security services to small and mid-sized businesses, because failing to do so puts those businesses and major supply chains at risk.
    3. The private sector serves as a skilled and necessary support to the public sector, working to counter mis- and disinformation campaigns, including those involving AI.


    Tune in to hear more about:
    1. The UK's national cybersecurity organization, part of GCHQ, sets regulatory standards and safeguards, communicates novel threats, and upholds national security measures in the digital space. (5:42)
    2. Steve and Brian discuss the challenges small organizations face when they lack the knowledge and expertise to meet cybersecurity regulations, leading to high costs for external advice and testing. (7:40)



    Standout Quotes:

    1. “...If you buy an external expertise — because you have to do, because either you haven’t got the demand to employ your own, or if you did the cost of employment would be very hard — the cost of buying an external advisor becomes very high. And I think the only way that can be addressed without compromising the standards is of course, to make more people develop more skills and more knowledge. And that, in a challenging way, is a long, long term problem. That is the biggest problem we have in the UK at the moment. And actually, in a lot of countries. The cost of implementing cybersecurity can quite often outweigh, as it may be seen within a smaller business context, the benefit.” -Brian Lord

    2. “I think there probably needs to be a lot more tangible support, I think, for the small to medium enterprises. But that can only come out of collaboration with the cybersecurity industry and with government about, how do you make sure that some of the fees around that are capped?” -Brian Lord


    Mentioned in this episode:
    • ISF Analyst Insight Podcast


    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.

    • 16 min
    Eric Siegel - The AI Playbook: Leveraging machine learning to grow your business

    Today, Steve is in conversation with AI expert Eric Siegel. A former professor at Columbia University, Eric is the founder of the long-running Machine Learning Week conference series and a bestselling author. His latest book, The AI Playbook, looks at how businesses outside Big Tech can leverage machine learning to grow. He and Steve discuss the differences between generative and predictive AI, the most effective ways to implement AI into an organization’s operations, and how we might expect this technology to be useful in the future.

    Key Takeaways:
    1. No matter how controlled or well thought out a project is, any project relying on AI is only as good as its data inputs.
    2. The more we learn to differentiate types of AI and apply their functions skillfully, the more we will learn about what is possible. 
    3. As predictive AI systems emerge, applying quality data analysis to a well-chosen project could make a measurable difference for a company’s bottom line.

    Tune in to hear more about:
    1. Designing a project involving predictive analytics requires quality data and a specific domain area. (3:00)
    2. Generative AI is still in its early stages, and popular notions around its use currently differ from what can reasonably be expected or achieved. (4:42)
    3. Using AI to work with errors and improve a system requires quality data and carefully applied labels. (11:59)

    Standout Quotes:
    1. “It's absolutely critical to have a fine scope, a reasonable scope, well defined for the first project. But the most well defined, sort of, well, scoped project is, in another way, the biggest because really what we're talking about, if you're looking at what should your first opportunity be with predictive AI that you want to pursue, it should be your largest scale operation that stands to improve the most, and that even an incremental improvement provides a tremendous bottom line.” -Eric Siegel

    2. “ … It's such a funny time, because predictive and generative are really apples and oranges. They're both built on machine learning, which learns from data to predict. But generative isn't a reference to really something specific in terms of the technology; it's just how you're using it, which is to generate new content items. So, writing a first draft in human language, like English, or of code, or creating a first image or video — these endeavors typically need a human in the loop to review everything that it's generated. They're not autonomous. And the question is, how autonomous could they be?” -Eric Siegel

    3.  “You can only predict better than guessing, which turns out to be more than sufficient to drive an improvement to the bottom line. So who's going to click, buy, lie or die, or commit an act of fraud, or turn out to cancel or be a bad debtor? These are human behaviors for those examples, or it could be a corporate client, or it could be a mechanism like a satellite, or the wheel of a train that might fail. But whatever it is, we don't have clairvoyance or a magic crystal ball. We can't expect your computers to, either. So it's about tipping the odds in these numbers games and predicting better than guessing … no matter how good the data is and how devoid of wrong values and those types of errors, you're still going to have that limitation. There’s still a ceiling. No matter how advanced the method is, it's not going to become supernatural. There's a thing called chaos theory, which basically says that even if you knew all the neurons of every cell of the person's brain, you wouldn't necessarily be able to predict very far into the future. And of course, we don’t. So it's always limited data anyway.” -Eric Siegel

    4. “I wrote this new book, The AI Playbook, because we need an organizational practice to make sure that we're sort of planning the project not just technically but organizationally and operationally, so that it actually gets deployed and makes a difference and actually improves operations …” -Eric Siegel

    • 21 min
    Cyber Warfare and Democracy in the Age of Artificial Intelligence

    Today, Steve is speaking with Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies and Dstl Ethics Fellow at the Alan Turing Institute. Mariarosaria brings her expertise as a philosopher to bear in this discussion of why and how we must develop agreed-upon ethical principles and governance for cyber warfare.

    Key Takeaways:
    1. As cyber attacks increase, international humanitarian law and rules of war require a conceptual shift.
    2. To maintain competitive advantage while upholding their values, liberal democracies need to move swiftly to develop and integrate regulation of emerging digital technologies and AI.
    3. Many new technologies have a direct and harmful impact on the environment, so it’s imperative that any ethical AI be developed sustainably. 


    Tune in to hear more about:
    1. The digital revolution affects how we do things, how we think about our environment, and how we interact with it. (1:10)
    2. Regardless of how individual countries may wield new digital capabilities, liberal democracies as such must endeavor tirelessly to develop digital systems and AI that are well considered, ethically sound, and non-discriminatory. (5:20)
    3. New digital capabilities may produce CO2 and other environmental impacts that will need to be recognized and accounted for as new technologies are being rolled out. (10:03)


    Standout Quotes:

    1. “The way in which international humanitarian law works or just war theory works is that we tell you what kind of force, when, and how you can use it to regulate the conduct of states in war. Now, fast forward to 2007, cyber attacks against Estonia, and you have a different kind of war, where you have an aggressive behavior, but we're not using force anymore. How do you regulate this new phenomenon, if so far, we have regulated war by regulating force, but now this new type of war is not a force in itself or does not imply the use of force? So this is a conceptual shift. A concept which is not radically changing, but has acquired or identifies a new phenomenon which is new compared to what we used to do before.” - Mariarosaria Taddeo

    2. “I joke with my students when they come up with this same objection, I say, well, you know, we didn't stop putting alarms and locking our doors because sooner or later, somebody will break into the house. It's the same principle. The risk is there, it’s present. They’re gonna do things faster in a more dangerous way, but if we give up to the regulations, then we might as well surrender immediately, right?” - Mariarosaria Taddeo

    3. “LLMs, for example, large language models, ChatGPT for example, they consume a lot of the resources of our environment. We did with some of the students here of AI a few years ago a study where we show that training just one round of ChatGPT-3 would produce as much CO2 as 49 cars in the US for a year. It’s a huge toll on the environment. So ethical AI means also sustainably developed.” - Mariarosaria Taddeo


    Mentioned in this episode:
    • ISF Analyst Insight Podcast




    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.

    • 25 min
    Cyber Exercises: Fail to prepare, prepare to fail

    A repeat of one of our top episodes from 2023:

    October is Cyber Awareness Month, and we’re marking the occasion with a series of three episodes featuring Steve in conversation with ISF’s Regional Director for Europe, the Middle East and Africa, Dan Norman. Today, Steve and Dan discuss the importance of cyber resilience and how organisations can prepare for cyber attacks.

    Mentioned in this episode:
    • ISF Analyst Insight Podcast

    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.

    • 16 min
    Tali Sharot - Changing Behaviours: Why facts alone don't work

    Today’s episode was recorded at ISF’s 2023 Congress in Rotterdam. Steve sat down with Tali Sharot, professor of neuroscience at University College London, to talk about her fascinating research on optimism bias. Tali offers fresh, evidence-based ideas on effective communication for security leaders seeking to present their message to their board and raise cyber awareness within the organisation.

    Key Takeaways:
    1. Innately, the brain is an optimist.
    2. Optimism bias has implications for the business community.
    3. Present bias means that people care more about now than the future.
    4. Data is key, and pairing anecdotes with data can be more effective.

    Tune in to hear more about:
    1. Sharot’s research about how emotion affects memory (0:28)
    2. Optimism bias has implications for the way we evaluate risk (4:25)
    3. Sharot considers present bias and how it shows up in organisations (9:39)
    4. Why storytelling is so effective when paired with data (15:30)

    Standout Quotes:
    1. “It turns out that in behavioral economics, there was quite a lot of research about this thing called the optimism bias, which is our tendency to imagine the future as better than the past, than the present. And that's exactly what I was seeing in this experiment. And that was really the first experiment that I did looking at what goes on inside the brain that causes us to have these kind of rose-colored glasses on when we think about the future.” -Tali Sharot

    2. “What we find again and again is that people underestimate the risk. And that's, of course, a problem. And it's not just underestimating risk. People also underestimate how long projects will take to complete, how much it would cost, underestimating budgets. All these are related to this phenomena of the optimism bias. And so it's really difficult to try to convince people that their estimate is incorrect. Because what we found is that if you give people information to try to correct their estimate, and you tell them actually, it's much worse than what you thought, your risk is much higher than what you're thinking, people don't take that information and change their belief to the extent that they should. They do learn a little bit, but not enough … However, if you tell them actually, you don't have as much risk as you think, you're in a great position, then they learn really quickly.” -Tali Sharot

    3. “The immediacy is quite important, because we have what's called a present bias. We care more about the now than the future. In general, even if we're not aware of that.” -Tali Sharot

    4. “And what stories do, they do a few things. First of all, we're more likely to attend to stories, right to listen, they're more interesting, they're more colorful, they're more detailed, we're more likely to remember them, partially because they usually elicit more emotion than just the data. So it's good to pair the two, to have the anecdote that kind of illustrates the data that you already have in hand.” -Tali Sharot

    Mentioned in this episode:
    • Human-centred Security: Positively influencing security behaviour
    • ISF Analyst Insight Podcast
    • Books by Tali Sharot





    Read the transcript of this episode
    Subscribe to the ISF Podcast wherever you listen to podcasts
    Connect with us on LinkedIn and Twitter

    From the Information Security Forum, the leading authority on cyber, information security, and risk management.

    • 20 min
