CERIAS Weekly Security Seminar - Purdue University


The weekly CERIAS security seminar has been held every semester since spring of 1992. We invite personnel at Purdue and visitors from outside to present on topics of particular interest to them in the areas of computer and network security, computer crime investigation, information warfare, information ethics, public policy for computing and security, the computing "underground," and other related topics.

  1. MAR 12 · VIDEO

    Amir Sadovnik, What do we mean when we talk about AI Safety and Security?

    In February 2024, Gladstone AI produced a report for the Department of State, which opens by stating that "The recent explosion of progress in advanced artificial intelligence … is creating entirely new categories of weapons of mass destruction-like and weapons of mass destruction-enabling catastrophic risk." To clarify further, they define catastrophic risk as "catastrophic events up to and including events that would lead to human extinction." This strong yet controversial statement has caused much debate in the AI research community and in public discourse. One can imagine scenarios in which this may be true, perhaps in some national security-related scenarios, but how can we judge the merit of these types of statements? It is clear that to do so, it is essential to first truly understand the different risks AI adoption poses and how those risks are novel. That is, when we talk about AI safety and security, do we truly have clarity about the meaning of these terms? In this talk, we will examine the characteristics that make AI vulnerable to attacks and misuse in different ways and how they introduce novel risks. These risks may be to the system in which AI is employed, the environment around it, or even to society as a whole. Gaining a better understanding of AI characteristics and vulnerabilities will allow us to evaluate how realistic and pressing the different AI risks are, and better realize the current state of AI, its limitations, and what breakthroughs are still needed to advance its capabilities and safety. About the speaker: Dr. Sadovnik is a senior research scientist and the Research Lead for the Center for AI Security Research (CAISER) at Oak Ridge National Laboratory. As part of this role, Dr. Sadovnik leads multiple research projects related to AI risk, adversarial AI, and large language model vulnerabilities.
As one of the founders of CAISER, he is helping to shape its strategy and operations through program leadership, partnership development, workshop organization, teaching, and outreach. Prior to joining the lab, he served as an assistant professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville, and as an assistant professor in the Department of Computer Science at Lafayette College. He received his PhD from the School of Electrical and Computer Engineering at Cornell University, advised by Prof. Tsuhan Chen as a member of the Advanced Multimedia Processing Lab. Prior to arriving at Cornell, he received his bachelor's in electrical and computer engineering from The Cooper Union. In addition to his work and publications in AI and AI security, Dr. Sadovnik has a deep interest in workforce development and computer science education. He continues to teach graduate courses related to machine learning and artificial intelligence at the University of Tennessee, Knoxville.

    55 min
  2. MAR 5 · VIDEO

    Hisham Zahid & David Haddad, Decrypting the Impact of Professional Certifications in Cybersecurity Careers

    Professional certifications have become a defining feature of the cybersecurity industry, promising enhanced career prospects, higher salaries, and professional credibility. But do they truly deliver on these promises, or are there hidden drawbacks to pursuing them? This presentation takes a deep dive into the double-edged nature of certifications like CISSP, CISM, CEH, and CompTIA Security+, analyzing their benefits and potential limitations. Drawing on data-driven research, industry insights, and real-world case studies, we explore how certifications influence hiring trends, professional growth, and skills development in cybersecurity. Attendees will gain a balanced perspective on the role of certifications, uncovering whether they are a gateway to career success or an overrated credential. Whether you are an aspiring professional or a seasoned practitioner, this session equips you with the knowledge to decide if certifications are the key to unlocking your cybersecurity potential, or if other paths may hold the answers. About the speaker: Hisham Zahid is a seasoned cybersecurity professional and researcher with over 15 years of combined technical and leadership experience. Currently serving under the CISO as a Security Compliance Manager at a FinTech startup, he has held roles spanning engineering, risk management, audit, and compliance. This breadth of experience gives him unique insight into the complex security challenges organizations face and the strategies needed to overcome them. Hisham holds an MBA and an MS, as well as industry-leading certifications including CISSP, CCSP, CISM, and CDPSE. He is also an active member of the National Society of Leadership and Success (NSLS) and the Open Web Application Security Project (OWASP), reflecting his commitment to professional development and community engagement.
As the co-author of The Phantom CISO, Hisham remains dedicated to advancing cybersecurity knowledge, strengthening security awareness, and guiding organizations through an ever-evolving threat landscape. David Haddad is a technology enthusiast and optimist committed to making technology and data more secure and resilient. David serves as an Assistant Director in EY's Technology Risk Management practice, focusing on helping EY member firms comply with internal and external security, data, and regulatory requirements. In this role, David supports firms in enhancing technology governance and oversight through technical reviews, consultations, and assessments. Additionally, David contributes to global AI governance, risk, and control initiatives, ensuring AI products and services align with the firm's strategic technology risk management processes. David is in the fourth year of doctoral studies at Purdue University, specializing in AI and information security. David's experience includes various technology and cybersecurity roles at the Federal Reserve Bank of Chicago and other organizations. David also served as an adjunct instructor and lecturer, teaching undergraduate courses at Purdue University Northwest. A strong advocate for continuous learning, David actively pursues professional growth in cybersecurity and IT through academic degrees, certifications, and speaking engagements worldwide. He holds an MBA with a concentration in Management Information Systems from Purdue University and multiple industry-recognized certifications, including Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), Certified Data Privacy Solutions Engineer (CDPSE), and Certified Information Systems Auditor (CISA). His research interests include AI security and risk management, information management security controls, emerging technologies, cybersecurity compliance, and data protection.

    42 min
  3. FEB 26 · VIDEO

    Ali Al-Haj, Zero Trust Architectures and Digital Trust Frameworks: A Complementary or Contradictory Relationship?

    This session explores the foundational concepts and practical applications of Zero Trust Architectures (ZTA) and Digital Trust Frameworks (DTF), two paradigms gaining traction in cybersecurity. While Zero Trust challenges the traditional notion of trust by enforcing strict access controls and authentication measures, Digital Trust seeks to build confidence through data integrity, privacy, and ethical considerations. Through this talk, we will investigate whether these approaches intersect, complement, or diverge, and what this means for the future of cybersecurity. Attendees will gain insights into implementing these frameworks to enhance both security and user confidence in digital environments. In addition to a practical overview, this talk will highlight emerging research areas in both domains.  About the speaker: Dr. Ali Al-Haj received his undergraduate degree in Electrical Engineering from Yarmouk University, Jordan, in 1985, followed by an M.Sc. degree in Electronics Engineering from Tottori University, Japan, in 1988 and a Ph.D. degree in Computer Engineering from Osaka University, Japan, in 1993. He then worked as a research associate at ATR Advanced Telecommunications Research Laboratories in Kyoto, Japan, until 1995. Prof. Al-Haj joined Princess Sumaya University for Technology, Jordan, in October 1995, where he currently serves as a Full Professor. He has published papers in dataflow computing, information retrieval, VLSI digital signal processing, neural networks, information security, and digital multimedia watermarking.

    52 min
  4. FEB 12 · VIDEO

    Adam Shostack, Risk is Not Axiomatic

    This talk will look at how systems are secured at a practical engineering level and at the science of risk. As we try to engineer secure systems, what are we trying to achieve and how can we do that? Modern threat modeling offers some practical approaches we can apply today. The limits of those approaches are important, and we'll look at how risk management seems to be treated as an axiom, some history of risk as a discipline, and how we might use that history to build better risk management processes. About the speaker: Adam is the author of Threat Modeling: Designing for Security and Threats: What Every Engineer Should Learn from Star Wars. He's a leading expert on threat modeling, a consultant, expert witness, and game designer, with decades of experience delivering security. His experience ranges across the business world, from founding startups to nearly a decade at Microsoft. His accomplishments include helping to create the CVE (he is now an emeritus member of its Advisory Board), fixing Autorun for hundreds of millions of systems, leading the design and delivery of the Microsoft SDL Threat Modeling Tool (v3), creating the Elevation of Privilege threat modeling game, and co-authoring The New School of Information Security. Beyond consulting and training, Shostack serves as a member of the Black Hat Review Board, an advisor to a variety of companies and academic institutions, and an Affiliate Professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington.

    1h 4m
  5. FEB 5 · VIDEO

    Mustafa Abdallah, Effects of Behavioral Decision-Making in Proactive Security Frameworks in Networked Systems

    Facing increasingly sophisticated attacks from external adversaries, the owners of networked systems have to judiciously allocate their limited security budget to reduce their cyber risks. However, behavioral economics has shown that humans consistently deviate from classical models of decision-making. Most notably, prospect theory, developed by Kahneman and Tversky and recognized with the 2002 Nobel Memorial Prize in Economics, argues that humans perceive gains, losses, and probabilities in a skewed manner. Furthermore, bounded rationality and imperfect best-response behavior have been frequently observed in human decision-making within behavioral economics and psychology. While there is a rich literature on these human decision-making factors in economics and psychology, most of the existing work studying the security of networked systems does not take these biases and noise into account. In this talk, we present our novel behavioral security game models for the study of human decision-making in networked systems modeled by attack graphs. We show that behavioral biases lead to suboptimal resource allocation patterns. We also analyze the outcomes of protecting multiple isolated assets with heterogeneous valuations via decision- and game-theoretic frameworks. We show that behavioral defenders over-invest in higher-valued assets compared to rational defenders. We then propose different learning-based techniques and adapt two different tax-based mechanisms for guiding behavioral decision-makers towards optimal security investment decisions. In particular, we show the outcomes of such learning and mechanisms on different realistic networked systems. In total, our research establishes rigorous frameworks to analyze the security of both large-scale networked systems and heterogeneous isolated assets managed by human decision-makers, and provides new and important insights into security vulnerabilities that arise in such settings.
About the speaker: Dr. Mustafa Abdallah is a tenure-track Assistant Professor in the Computer and Information Technology (CIT) Department at Purdue University in Indianapolis, with a courtesy appointment at Purdue Polytechnic Institute. He earned his Ph.D. from the Elmore Family School of Electrical and Computer Engineering at Purdue University in 2022 and previously served as a tenure-track faculty member at IUPUI. His research focuses on game theory, behavioral decision-making, explainable AI, meta-learning, and deep learning, with applications in proactive security of networked systems, IoT anomaly detection, and intrusion detection. His work has been published in top security and AI venues, including IEEE S&P, ACM AsiaCCS, IEEE TCNS, IEEE IoT-J, Computers & Security, and ACM TKDD. He has received the Bilsland Fellowship, multiple IEEE travel grants, and internal research funding from IUPUI. Dr. Abdallah has extensive industrial research experience, including internships at Adobe Research (meta-learning for time-series forecasting), Principal Financial Group (Kalman-filter modeling for financial predictions), and RDI (deep learning for speech technology applications), which led to a U.S. patent and multiple publications. He holds B.Sc. and M.Sc. degrees from Cairo University, with a focus on electrical engineering and engineering mathematics, respectively.
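As an illustration of the skewed perception the abstract describes (a hedged sketch, not code from the talk), the value and probability-weighting functions of Tversky and Kahneman's cumulative prospect theory, with their commonly cited 1992 median parameter estimates, can be written as:

```python
# Illustrative sketch: prospect-theory value and probability-weighting
# functions (Tversky & Kahneman, 1992). Parameter values are their
# commonly cited median estimates, assumed here for illustration.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Perceived value: concave for gains, steeper and convex for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Perceived probability: small probabilities are overweighted,
    large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A rare breach (p = 0.01) is perceived as several times more likely than
# it is, and a loss hurts more than an equal-sized gain helps.
print(weight(0.01) > 0.01)            # True: overweighting of rare events
print(abs(value(-100)) > value(100))  # True: losses loom larger than gains
```

A defender who perceives probabilities and losses this way will tilt budget toward rare, vivid threats and higher-valued assets, which is one mechanism behind the suboptimal allocation patterns the talk analyzes.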

    1 hr
  6. JAN 29 · VIDEO

    D. Richard Kuhn, How Can We Provide Assured Autonomy?

    Safety- and security-critical systems require extensive test and evaluation, but existing high-assurance test methods are based on structural coverage criteria that do not apply to many black-box AI and machine learning components. AI/ML systems make decisions based on training data rather than conventionally programmed functions. Autonomous systems that rely on these components therefore require assurance methods that evaluate input data to ensure that they can function correctly in their environments with the inputs they will encounter. Combinatorial test methods can provide added assurance for these systems and complement conventional verification and test for AI/ML. This talk reviews some combinatorial methods that can be used to provide assured autonomy, including: background on combinatorial test methods; why conventional test methods are not sufficient for many or most autonomous systems; where combinatorial methods apply; assurance based on input-space coverage; and explainable AI as part of validation. About the speaker: Rick Kuhn is a computer scientist in the Computer Security Division at NIST and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He co-developed the role-based access control (RBAC) model that is the dominant form of access control today. His current research focuses on combinatorial methods for assured autonomy and hardware security/functional verification. He has authored three books and more than 200 conference or journal publications on cybersecurity, software failure, and software verification and testing.
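The input-space coverage idea above can be made concrete with a small toy (a sketch under my own assumptions, not NIST's ACTS tooling) that measures 2-way, i.e. pairwise, combinatorial coverage of a test set:

```python
from itertools import combinations

def pairwise_coverage(tests, domains):
    """Fraction of all 2-way parameter-value combinations hit by `tests`.

    tests   -- list of tuples, one concrete value per parameter
    domains -- list of tuples: the possible values of each parameter
    """
    covered, total = set(), 0
    for i, j in combinations(range(len(domains)), 2):
        total += len(domains[i]) * len(domains[j])
        covered |= {(i, j, t[i], t[j]) for t in tests}
    return len(covered) / total

# Three boolean parameters: 3 parameter pairs x 4 value pairs = 12 interactions.
domains = [(0, 1)] * 3
print(pairwise_coverage([(0, 0, 0), (1, 1, 1)], domains))  # 0.5
# A 4-row covering array hits all 12 interactions with only 4 of 8 inputs.
print(pairwise_coverage([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)], domains))  # 1.0
```

The same measurement extends to t-way coverage for t > 2; the point of the combinatorial approach is that small test sets (covering arrays) can exercise every low-order interaction without enumerating the full input space.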

    56 min
  7. JAN 15 · VIDEO

    Stanislav Kruglik, Querying Twice: How to Ensure We Obtain the Correct File in a Private Information Retrieval Protocol

    Private Information Retrieval (PIR) is a cryptographic primitive that enables a client to retrieve a record from a database hosted by one or more untrusted servers without revealing which record was accessed. It has a wide range of applications, including private web search, private DNS, lightweight cryptocurrency clients, and more. While many existing PIR protocols assume that servers are honest but curious, we explore the scenario where dishonest servers provide incorrect answers to mislead clients into retrieving the wrong results. We begin by presenting a unified classification of protocols that address incorrect server behavior, focusing on the lowest level of resistance, verifiability, which allows the client to detect if the retrieved file is incorrect. Despite this relaxed security notion, verifiability is sufficient for several practical applications, such as private media browsing. Later on, we propose a unified framework for polynomial PIR protocols, encompassing various existing protocols that optimize download rate or total communication cost. We introduce a method to transform a polynomial PIR into a verifiable one without increasing the number of servers. This is achieved by doubling the queries and linking the responses using a secret parameter held by the client. About the speaker: Stanislav Kruglik has been a Research Fellow at the School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, since April 2022. He earned a Ph.D. in the theoretical foundations of computer science from the Moscow Institute of Physics and Technology, Russia, in February 2022. He is an IEEE Senior Member and a recipient of the Simons Foundation Scholarship. With over 40 scientific publications, his work has appeared in top-tier venues, including IEEE Transactions on Information Forensics and Security and the European Symposium on Research in Computer Security.
His research interests focus on information theory and its applications, particularly in data storage and security.
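To make the PIR primitive concrete, here is a minimal sketch of the classic two-server linear scheme in the honest-but-curious setting (an illustrative toy, not the polynomial protocols from the talk): each server alone sees a uniformly random query, yet the two answers sum to the requested record. A dishonest server could return a wrong inner product undetected here; closing that gap is exactly what the verifiability transformation described above addresses.

```python
import secrets

P = 2**61 - 1  # prime modulus; all arithmetic below is mod P

def make_queries(index, n):
    """Additively share the unit vector e_index into two random queries.

    Each share alone is uniformly random, so neither server learns `index`.
    """
    q1 = [secrets.randbelow(P) for _ in range(n)]
    q2 = [-x % P for x in q1]
    q2[index] = (q2[index] + 1) % P  # q1 + q2 == e_index (mod P)
    return q1, q2

def server_answer(q, db):
    """Each (honest) server returns the inner product of its query and the db."""
    return sum(x * y for x, y in zip(q, db)) % P

# <q1, db> + <q2, db> == <e_index, db> == db[index]  (mod P)
db = [7, 11, 13, 42]
q1, q2 = make_queries(2, len(db))
print((server_answer(q1, db) + server_answer(q2, db)) % P)  # 13
```

Communication here is linear in the database size; the polynomial protocols the talk surveys trade this off against download rate, and their verifiable variants add the doubled, secretly linked queries on top of the same query-answer structure.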

    44 min
