CERIAS Security Seminar Podcast

The weekly CERIAS security seminar has been held every semester since spring of 1992. We invite personnel at Purdue and visitors from outside to present on topics of particular interest to them in the areas of computer and network security, computer crime investigation, information warfare, information ethics, public policy for computing and security, the computing "underground," and other related topics.

    • Ida Ngambeki, "Understanding the Human Hacker"

    Social engineering is employed in 97% of cybersecurity attacks, which makes social engineering penetration testing an important aspect of cybersecurity. It is a specialized area requiring skills and abilities substantially different from those needed for other types of penetration testing. Yet training for social engineering penetration testing, as well as an understanding of which skills, abilities, and personality traits make for good social engineers, is not well developed. This mixed-methods study uses surveys and interviews conducted with social engineering pen testers to examine their pathways into the field, which personality traits contribute to success, which skills and abilities are necessary, and which challenges these professionals commonly face. The results are used to make recommendations for training.

    • Neil Gong, "Secure Federated Learning"

    Federated learning is an emerging machine learning paradigm that enables many clients (e.g., smartphones, IoT devices, and edge devices) to collaboratively learn a model, with the help of a server, without sharing their raw local data. Due to its communication efficiency, its potential promise of protecting private or proprietary user data, and emerging privacy regulations such as GDPR, federated learning has become a central playground for innovation. However, due to its distributed nature, federated learning is vulnerable to malicious clients. In this talk, we will discuss local model poisoning attacks on federated learning, in which malicious clients send carefully crafted local models or model updates to the server to corrupt the global model. Moreover, we will discuss our work on building federated learning methods that are secure against a bounded number of malicious clients.
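    The paradigm described in the abstract above can be illustrated with a toy round of federated averaging. This is a minimal sketch under invented assumptions, not the speaker's method: a "model" is just a list of floats, each client's local training nudges its copy toward its own data mean, and the server computes a dataset-size-weighted average of the returned models. The names `local_update` and `fed_avg_round` are hypothetical.

```python
# Toy federated averaging sketch: only model parameters travel between
# clients and server; raw client data never leaves the client.

def local_update(global_model, client_data, lr=0.5):
    """Client-side step: move each parameter toward the local data mean."""
    local_mean = sum(client_data) / len(client_data)
    return [w + lr * (local_mean - w) for w in global_model]

def fed_avg_round(global_model, clients):
    """Server-side step: average client models, weighted by dataset size."""
    total = sum(len(data) for data in clients)
    updates = [local_update(global_model, data) for data in clients]
    return [
        sum(len(clients[i]) * updates[i][j] for i in range(len(clients))) / total
        for j in range(len(global_model))
    ]

# Two honest clients with different local data; one global parameter.
clients = [[1.0, 1.0, 1.0], [3.0]]
model = [0.0]
for _ in range(20):
    model = fed_avg_round(model, clients)

# The global model converges toward the weighted mean of all client data.
print(round(model[0], 2))  # prints 1.5
```

    The poisoning attacks mentioned in the abstract exploit exactly this aggregation step: a malicious client can return crafted parameters instead of an honest `local_update`, and a plain weighted mean offers no resistance. Robust aggregation rules replace that mean with statistics that tolerate a bounded number of such clients.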

    • Leigh Metcalf, "The Gauntlet of Cybersecurity Research"

    Good research has scientific principles driving it. Analysts begin research with a goal in mind and at the same time, they need their research to have a solid foundation. This talk will cover common goals in cybersecurity research and also discuss common pitfalls that can undermine the results of the research. The talk will include many examples illustrating the principles.

    • Gary McGraw, "Security Engineering for Machine Learning"

    Machine learning appears to have made impressive progress on many tasks, including image classification, machine translation, autonomous vehicle control, and playing complex games such as chess, Go, and Atari video games. This has led to much breathless popular press coverage of artificial intelligence and has elevated deep learning to an almost magical status in the eyes of the public. ML, especially of the deep learning sort, is not magic, however. ML has become so popular that its application, though often poorly understood and partially motivated by hype, is exploding. In my view, this is not necessarily a good thing. I am concerned with the systemic risk invoked by adopting ML in a haphazard fashion.

    Our research at the Berryville Institute of Machine Learning (BIML) is focused on understanding and categorizing security engineering risks introduced by ML at the design level. Though the idea of addressing security risk in ML is not a new one, most previous work has focused either on particular attacks against running ML systems (a kind of dynamic analysis) or on operational security issues surrounding ML. This talk focuses on the results of an architectural risk analysis (sometimes called a threat model) of ML systems in general. A list of the top five (of 78 known) ML security risks will be presented.

    • Steven Furnell, "Cybersecurity Skills – Easy to say, harder to recognise?"

    There is no doubt that cybersecurity has risen up the agenda in terms of visibility and importance.  Everybody wants it. But do they really know what they want?  What does cybersecurity include, and to what extent do qualifications and certifications that claim to cover it actually do so?  This talk examines what cybersecurity means in terms of the contributing topics, and in particular how these topics can end up looking substantially different depending upon what source we use as our reference point.  The discussion then proceeds to examine how this has knock-on impacts in terms of the qualifications and certifications that may be held by our current and future workforce.  All are labelled as ‘cybersecurity’, but to what extent are they covering it, and how can those that need support tell the difference?

    • Ira Winkler, "You Can Stop Stupid: Human Security Engineering"

    While users are responsible for initiating more than 90% of losses, it is not their fault. The entire system is what enables the losses, and the entire system must be designed to prevent them. Drawing lessons from safety science, counterterrorism, and accounting, this presentation details how to expect and stop user-initiated loss.
