CERIAS Weekly Security Seminar - Purdue University

CERIAS

CERIAS -- the Nation's top-ranked interdisciplinary academic education and research institute -- hosts a weekly speaker on cyber security, privacy, resiliency, or autonomy, highlighting technical discoveries, presenting case studies, or exploring cyber operational approaches; these are not product demonstrations, service sales pitches, or company recruitment presentations. Join us weekly...or explore 25 years of archives for the who's-who in cybersecurity.

  1. APR 29 ·  VIDEO

    Pragathi Jha, Modeling Cyber Adversaries: A Critical Survey of Methods and Assumptions

    Cybersecurity practitioners face a persistent methodological problem: how should we reason about intelligent adversaries who observe our defenses, adapt their tactics, and choose targets based on our vulnerabilities? The field has responded with a fragmented toolkit. Quantitative risk assessment borrowed from safety engineering treats threat, vulnerability, and consequence as independent terms. Threat modeling frameworks such as STRIDE and attack trees emphasize structure but rarely quantify uncertainty. Game-theoretic models assume rationality and common knowledge that real attackers do not exhibit. Qualitative heat maps compress uncertainty into colored cells that cannot support budget optimization. This talk surveys these approaches critically, examining what each method commits you to and what it quietly sets aside. A common thread emerges: the alternatives can be understood as approximations to a Bayesian decision-theoretic ideal, each relaxing one or more assumptions for tractability. Modeling an adversary requires addressing four dimensions of uncertainty: what they want, what they know, what they can do, and how they decide. The standard critiques of probabilistic cyber risk analysis (information asymmetry, correlated inputs, adaptation, the absence of objective base rates) turn out to be errors of naive practice rather than indictments of the methodology itself. Threat intelligence feeds, indicator matches, and shifts in attacker tradecraft fit naturally as Bayesian updates rather than as awkward inputs to frequentist frameworks. The survey closes not with a prescription but with a diagnostic question for practitioners and researchers alike: are the assumptions embedded in your chosen method appropriate for the decision you are trying to support?
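    The closing point about threat intelligence fitting naturally as Bayesian updating can be made concrete with a minimal sketch. The `posterior` helper and all probabilities below are illustrative assumptions, not figures from the talk:

    ```python
    # Minimal Bayesian update: revise the belief that a host is compromised
    # after observing an indicator-of-compromise match from a threat feed.
    def posterior(prior, tpr, fpr):
        """P(compromised | match) via Bayes' rule.

        tpr: P(match | compromised); fpr: P(match | not compromised).
        """
        evidence = tpr * prior + fpr * (1 - prior)
        return tpr * prior / evidence

    belief = 0.01                                    # assumed prior base rate
    belief = posterior(belief, tpr=0.90, fpr=0.05)   # first indicator match
    belief = posterior(belief, tpr=0.90, fpr=0.05)   # second, independent match
    ```

    Each feed hit multiplies the odds of compromise by the likelihood ratio tpr/fpr, which is why such evidence slots in as an update rather than as an awkward frequentist input.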
About the speaker: Pragathi Jha is a doctoral researcher in Industrial Engineering at Purdue University, where her work focuses on optimization, stochastic modeling, and game-theoretic approaches to decision-making under uncertainty. Her research lies at the intersection of operations research, applied probability, and strategic interaction, with an emphasis on developing rigorous mathematical frameworks for complex, adversarial systems. Her academic interests include multi-stage stochastic optimization, game theory, and the modeling of strategic behavior in dynamic environments. In the context of cybersecurity, she is particularly interested in adversarial decision-making, risk-aware resource allocation, and the design of resilient systems that account for uncertainty and strategic threats. Her work aims to bridge theoretical advances in optimization and game theory with practical applications in security, infrastructure protection, and data-driven decision support. Pragathi brings a strong foundation in quantitative methods and is committed to advancing research that is both mathematically rigorous and operationally impactful. Through her work, she seeks to contribute to the development of robust, scalable frameworks for analyzing and mitigating risks in complex, high-stakes environments.

    50 min
  2. APR 22 ·  VIDEO

    Smriti Bhatt, Evolving Security Landscape in the Agentic AI-Enabled IoT Era

    The rapid evolution of connected devices and technologies has transformed the Internet of Things (IoT) into increasingly intelligent and autonomous systems. This talk focuses on the progression from traditional IoT to the Artificial Intelligence of Things (AIoT), and further toward Agent-Based IoT (AB-IoT), also referred to as Agentic AI-enabled IoT. As intelligent agents become embedded within IoT ecosystems, they introduce new capabilities for autonomy and decision-making, but also significantly reshape the security landscape. In this talk, I first outline the technological evolution from IoT to AIoT and Agentic AI-enabled IoT systems. I then discuss how the adoption of agentic intelligence in IoT environments introduces emerging security risks and threats with particular emphasis on challenges related to authentication, access control, and trust management in highly distributed and autonomous environments. I also present potential approaches to address these challenges, including zero-trust security frameworks and context-aware machine learning–based access control mechanisms. Finally, this talk highlights current research challenges and open problems in securing Agentic AI-enabled IoT systems, outlining future directions for building resilient, trustworthy, and secure next-generation Agentic AI-enabled IoT infrastructures. About the speaker: Dr. Smriti Bhatt is an Assistant Professor of Cybersecurity in the School of Applied and Creative Computing at Purdue University. She received her Ph.D. and M.S. in Computer Science from the University of Texas at San Antonio, where she conducted her doctoral research at the Institute for Cyber Security (ICS) and the NSF CREST Center for Security and Privacy Enhanced Cloud Computing (C-SPECC). Dr. Bhatt's research focuses on security and privacy in the Internet of Things (IoT) and Cyber-Physical Systems (CPS) leveraging Cloud and Edge Computing.
Her research interests also include the application of AI and Machine Learning to secure IoT and CPS infrastructures in various application domains, such as Smart Health, Smart Home, and Wearable IoT. Some of her current research work includes access control models, secure data communication, and anomaly detection for different domains in Cloud-Enabled IoT. She has several conference and journal publications, and also continually serves as an expert reviewer for various journals and technical program committees for several conferences and workshops.

    59 min
  3. APR 15 ·  VIDEO

    Gary Hayslip, The AI Arms Race

    Ransomware has evolved from basic digital extortion into a sophisticated, AI-powered threat that's faster, smarter, and more devastating than ever before. In this session, we'll explore how threat actors are weaponizing artificial intelligence to supercharge their operations—from automated reconnaissance and hyper-realistic phishing to malware that adapts in real-time to evade detection. We'll also examine how AI-driven ransomware exploits supply chain vulnerabilities to create cascading disruptions across entire industries. More importantly, we'll discuss practical strategies for fighting back: leveraging AI-powered behavioral analytics and autonomous response tools, implementing zero-trust architecture, and building true organizational resilience through tested backup and recovery procedures. Whether you're in security operations, incident response, or infrastructure protection, this session will equip you with actionable insights to shift from a prevention-only mindset to one focused on preparedness and rapid recovery in today's evolving threat landscape. About the speaker: Gary Hayslip is an experienced Global Security Executive with a proven track record of delivering innovative security programs that protect billion-dollar enterprises at every touchpoint. He is intensely focused on driving continuous improvement to maximize the efficiency of security programs while minimizing costs. As an insightful thought leader, he possesses strong business acumen and a commitment to organizational mission, values, and goals. He has demonstrated the ability to collaborate with all levels of an organization to champion new ideas, gain buy-in, and build consensus. Hayslip brings extensive experience in information technology, security leadership, physical security, and risk management to his role as the Senior Security Advisor | CISO in Residence for Halcyon.ai.
His previous executive positions include multiple roles as Chief Information Security Officer, Chief Information Officer, Deputy Director of IT, and Chief Privacy Officer for the U.S. Navy (Active Duty), the U.S. Navy (Federal Government employee), the City of San Diego, California, Webroot Software, and SoftBank Investments (Vision Fund & Vision Fund II). Hayslip is a proven cybersecurity expert with excellent communication and public speaking skills. He is skilled at explaining complex security and risk concepts to audiences with different levels of knowledge. Hayslip has earned a reputation as a highly effective communicator, author, and keynote speaker. He co-authored the "CISO Desk Reference Guide: A Practical Guide for CISOs – Volumes 1 & 2," "The Executive Primer: An Executive's Guide to Security Programs," "Developing Your Cybersecurity Career Path," and "The Essential Guide to Cybersecurity for SMBs." He recently coauthored and published "Mastering Third Party Risk," a guide aimed specifically at security practitioners, helping them manage the risk exposure to organizations from vendors and supply chains. These books are among the top resources for helping CISOs improve their leadership and business skills. Hayslip currently serves as an independent director on several boards and advises various other security and technology firms. He is an active member of the cybersecurity community and belongs to professional organizations such as ISC2, NACD, ISACA, and Infragard. Hayslip holds several professional certifications, including CISSP, CISA, and CRISC, and has earned a BS in Information Systems Management from the University of Maryland, University College, and an MBA from San Diego State University.

    52 min
  4. APR 8 ·  VIDEO

    Brian Peretti, Symposium Closing Keynote: AI, Cybersecurity, and the Path Forward

    Annual Security Symposium. Visit: https://ceri.as/2026 Artificial intelligence is rapidly transforming both the opportunities and risks within cybersecurity, creating a new landscape that today's students and researchers will soon inherit and shape. This keynote explores how AI is evolving from a supporting tool to a decision-making system, fundamentally changing how cyber threats are created, detected, and managed. It will examine emerging risks such as deepfakes, model manipulation, and systemic dependencies on shared technologies, while also addressing the growing role of regulation and the challenges of governing systems that are powerful yet often opaque. Most importantly, the session will highlight where the greatest opportunities lie—at the intersection of AI, cybersecurity, and policy—and how the next generation of professionals can play a defining role in building secure, resilient, and trustworthy systems for the future. About the speaker: Brian J. Peretti is a career member of the Senior Executive Service at the United States Department of the Treasury. In his final position, he served as Treasury's Chief Technology Officer and Deputy Chief Artificial Intelligence (AI) Officer in the Office of the Chief Information Officer. As Treasury's Chief Technology Officer, Mr. Peretti established, led, and managed a comprehensive, multi-year strategic and long-range planning process that promoted the vision for IT and ensured consistent progress toward accomplishing the CIO's vision. He identified and leveraged common technology solutions to support business processes and work methods, improved the effectiveness of current technologies, and developed policy for emerging technologies such as artificial intelligence, machine learning, biometrics, and quantum computing. As Treasury's Deputy Chief AI Officer, Mr. Peretti supported Treasury's Chief AI Officer in advancing the Department's deployment of this emerging technology.
In this capacity, he oversaw the publication of Treasury's report, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, and directed the subsequent lines of effort. In this position, he was also designated as the Executive Officer for the Department's AI Governance Board as well as the Department's representative to the Office of the Director of National Intelligence's CAIO Council. In addition, Mr. Peretti led the development of domestic and international operational resilience policy, including cyber, as part of Treasury's Sector Risk Management Agency responsibility for the financial services sector. In this role, he spearheaded Treasury's efforts to increase multi-directional sharing of cyber threat and vulnerability information. He also served as the United States' designated subject matter expert at the Group of 7 Cyber Expert Group (G-7 CEG). Mr. Peretti has served at the Treasury for over 22 years with increasing levels of responsibility, including being named the Senior Career Official Executing the Duties of the Assistant Secretary for Financial Institutions during the transition from the Obama to the Trump Administration. Based on his expertise in critical infrastructure protection and operational resilience, he was detailed to the Department of Homeland Security, Cybersecurity and Infrastructure Security Agency's National Risk Management Center during the initial response to the COVID-19 pandemic and served as the first Senior Advisor for Security and the Economy. He also spearheaded the DHS response to the SolarWinds cyber incident. A sought-after speaker and presenter, Mr. Peretti has been the recipient of numerous awards and honors throughout his career. Most recently, he received the 12th Annual Billington CyberSecurity Leadership Award at the 2023 Annual Billington CyberSecurity Summit. Prior to joining the Treasury, Mr.
Peretti was an associate in Shook, Hardy & Bacon's Corporate Banking and Finance Section in Washington, D.C., and was the General Counsel for the Wright Patman Congressional Federal Credit Union. He has authored numerous publications related to financial sector operations, including payment systems. Mr. Peretti received his bachelor's degree from Rider University (cum laude) in 1989, and his law degree from American University's Washington College of Law (cum laude) in 1992.

    1h 14m
  5. APR 1 ·  VIDEO

    Jen Sims, Analyzing Supply Chain Risk in Mobile Applications for Home Energy Storage Systems

    The rapid adoption of mobile applications for managing consumer whole-house battery and energy systems has introduced new questions about software supply chain security. While these applications are not currently integrated with critical infrastructure, their growing role in connected energy environments highlights the importance of understanding the dependencies, permissions, and external services that support their operation. Many of these applications rely on shared third-party libraries, analytics frameworks, and messaging services, creating overlapping software ecosystems across vendors. In this talk, I will present an analysis of several battery-management mobile applications using static and dynamic analysis techniques. The study examines third-party dependencies, Android permission usage, and outbound network activity to identify common software components and shared external infrastructure. The results reveal significant overlap in libraries and permissions across applications, suggesting that vulnerabilities in widely used components could introduce shared risk pathways across multiple vendors. This work highlights the need for stronger dependency governance, permission minimization, and ongoing monitoring as mobile energy applications continue to evolve. About the speaker: Jen Sims is a cybersecurity technical professional in the Cyber Resilience and Intelligence Division at Oak Ridge National Laboratory (ORNL). Her research focuses on resilient cyber-physical systems and vulnerability assessment of technologies used within the electric grid, with particular emphasis on supply chain risk. She also conducts research in cybersecurity for manufacturing and is actively involved in cyber education outreach, engaging students from grade school through graduate programs. Jen earned a Master of Software Engineering and a Bachelor of Computer Science with a concentration in Secure Cyber Systems from the University of Texas at El Paso (UTEP).
During her time at UTEP, she founded the Women in Cybersecurity (WiCyS) student chapter and helped launch the university's summer cybersecurity camps. Outside of her research, Jen is passionate about workforce development and cybersecurity education, volunteering with Oak Ridge Computer Science Girls (ORCsGirls) and creating hands-on cybersecurity activities to inspire the next generation of students.
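    The kind of dependency-overlap measurement the talk describes can be sketched in a few lines. The vendor names and library sets below are hypothetical stand-ins for manifests extracted via static analysis, not the study's actual data:

    ```python
    # Measure shared third-party components across apps: pairwise Jaccard
    # similarity plus the set of libraries common to every vendor.
    from itertools import combinations

    apps = {  # hypothetical dependency sets per vendor app
        "vendor_a": {"okhttp", "firebase-messaging", "analytics-sdk"},
        "vendor_b": {"okhttp", "firebase-messaging", "mqtt-client"},
        "vendor_c": {"okhttp", "analytics-sdk", "ble-lib"},
    }

    def jaccard(a, b):
        """Overlap of two dependency sets: |intersection| / |union|."""
        return len(a & b) / len(a | b)

    for (n1, d1), (n2, d2) in combinations(apps.items(), 2):
        print(n1, n2, round(jaccard(d1, d2), 2))

    # Libraries present in every app are single points of shared risk:
    # one vulnerability there creates a risk pathway across all vendors.
    shared = set.intersection(*apps.values())
    ```

    The same intersection logic extends naturally to Android permission sets or observed outbound endpoints, the other two axes the study examines.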

    55 min
  6. MAR 4 ·  VIDEO

    Ruqi Zhang, Discovering and Controlling AI Safety Risks in Foundation Models: A Probabilistic Perspective

    As foundation models, including large language models and multimodal models, are increasingly deployed in complex and high-stakes settings, ensuring their safety has become more important than ever. In this talk, I present a probabilistic perspective on AI safety: safety risks are treated as structured distributions to be discovered and controlled, rather than isolated failures to be patched. I first introduce probabilistic red-teaming methods that characterize distributions of failures, revealing systematic safety risks that standard evaluation often misses. I then describe probabilistic defense methods that control model behavior during deployment by adaptively steering generation toward constraint-aligned distributions. By unifying failure discovery and behavior control under a probabilistic perspective, this talk highlights a distributional approach for understanding and managing safety risks in foundation models. About the speaker: Ruqi Zhang is an Assistant Professor in the Department of Computer Science at Purdue University. Her research focuses on probabilistic machine learning, generative modeling, and trustworthy AI. Prior to joining Purdue, she was a postdoctoral researcher at the Institute for Foundations of Machine Learning (IFML) at the University of Texas at Austin. She received her Ph.D. from Cornell University. Dr. Zhang has been a key organizer of the Symposium on Probabilistic Machine Learning. She has served as an Area Chair and Editor for ML conferences and journals, including ICML, NeurIPS, ICLR, AISTATS, UAI, and TMLR. Her contributions have been recognized with several honors, including AAAI New Faculty Highlights, Amazon Research Award, Spotlight Rising Star in Data Science, Seed for Success Acorn Award, and Ross-Lynn Research Scholar.

    59 min
  7. FEB 25 ·  VIDEO

    Danny Vukobratovich, ISO 27001 as the Engine, NIST CSF 2.0 as the Dashboard: A Practical Operating Model

    Many organizations adopt security frameworks but struggle to turn them into day-to-day operations that reduce risk without slowing delivery. This talk presents a practical operating model that pairs ISO/IEC 27001 (as the certifiable management system that runs governance, risk management, internal audit, and continual improvement) with NIST Cybersecurity Framework 2.0 (as the outcome-focused "dashboard" for aligning security priorities to business objectives and communicating posture to leaders). Attendees will see how to translate business goals into CSF 2.0 current and target profiles, convert those profiles into ISO 27001 objectives and control ownership, and design "evidence by default" workflows that reduce audit fire drills. The session will include real-world design patterns (paved roads, tiered decision rights, exception handling with expiry, and control health metrics) and highlight where assurance programs often drift into "control theater." The goal is a repeatable approach that both practitioners and researchers can critique, improve, and apply. About the speaker: Danny Vukobratovich is a Sr. IT Security Analyst at Purdue University, where he manages Purdue IT's ISO program spanning ISO/IEC 27001 (information security), ISO 9001 (quality management), and ISO/IEC 20000-1 (IT service management). He also oversees Purdue IT's business continuity and disaster recovery planning, with an emphasis on building resilient, auditable operating models that support research and administrative missions. Danny's professional focus is translating risk and governance into practical mechanisms, including clear decision rights, "evidence by design," and metrics that measure control health rather than control presence. His background includes security risk assessments, incident response, monitoring and logging, identity and access management, and standards-based audits across diverse environments. 
Danny holds the CISSP, ISO/IEC 27001:2022 Lead Implementer, and ITIL 4 Strategic Leader certifications, and an M.S. in Cybersecurity Management.
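    The profile-to-objective translation the talk describes can be sketched as a simple gap computation. The CSF 2.0 category IDs below are real identifiers, but the tier scores and the objective fields are illustrative assumptions, not the speaker's actual model:

    ```python
    # Gap analysis: CSF 2.0 current profile vs. target profile.
    # Tier values are assumed 1-4 maturity scores per category.
    current = {"GV.OC": 2, "PR.AA": 1, "DE.CM": 3}   # where we are today
    target  = {"GV.OC": 3, "PR.AA": 3, "DE.CM": 3}   # where the business needs us

    # Each positive gap becomes an ISO 27001 objective with a named
    # control owner and an "evidence by default" requirement.
    gaps = {cat: target[cat] - current[cat]
            for cat in target if target[cat] > current[cat]}
    objectives = [{"category": cat, "gap": g, "owner": "TBD", "evidence": []}
                  for cat, g in sorted(gaps.items())]
    ```

    Keeping this mapping in data rather than in slide decks is one way to make the "dashboard" reproducible: the same structure can drive both leadership reporting and the ISO 27001 internal audit trail.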

    1h 3m
  8. FEB 18 ·  VIDEO

    Thai Le, Towards Robust and Trustworthy AI Speech Models: What You Read Isn't What You Hear

    Deepfake voice technology is rapidly advancing, but how well do current detection systems handle differences in language and writing style? Most existing work focuses on robustness to acoustic variations such as background noise or compression, while largely overlooking how linguistic variation shapes both deepfake generation and detection. Yet language matters: psycholinguistic features such as sentence structure, complexity, and word choice influence how models synthesize speech, which in turn affects how detectors score and flag audio. In this talk, we will ask questions such as: "If we change the way a person writes, while keeping their voice the same, will a deepfake detector still reach the same decision?" and "Are some text-to-speech and voice cloning models more vulnerable to shifts in writing style than others?" We will then discuss implications for designing robust deepfake voice detectors and for advancing more trustworthy speech AI in an era of increasingly synthetic media. About the speaker: Thai Le is an Assistant Professor of Computer Science at Indiana University's Luddy School of Informatics, Computing, and Engineering. He obtained his doctoral degree from the College of Information Science and Technology at Pennsylvania State University with an Excellent Research Award and a DAAD Fellowship. His research focuses on the trustworthiness of AI/ML models, with a mission to enhance the robustness, safety, and transparency of AI technology in various sociotechnical contexts. Le has published nearly 50 peer-reviewed research works with two best paper presentation awards. He is a pioneer in collecting and investigating so-called text perturbations in the wild, which have been used by researchers worldwide to study the effects of human adversarial behavior in everyday use of AI/ML models. His works have also been featured in ScienceDaily, DefenseOne, and Engineering and Technology Magazine.

    39 min

