CERIAS Weekly Security Seminar - Purdue University

CERIAS

CERIAS -- the Nation's top-ranked interdisciplinary academic education and research institute -- hosts a weekly speaker on cyber security, privacy, resiliency, or autonomy, highlighting technical discoveries, presenting case studies, or exploring cyber operational approaches; the talks are not product demonstrations, service sales pitches, or company recruitment presentations. Join us weekly...or explore 25 years of archives for the who's-who in cybersecurity.

  1. FEB 25 · VIDEO

    Danny Vukobratovich, ISO 27001 as the Engine, NIST CSF 2.0 as the Dashboard: A Practical Operating Model

    Many organizations adopt security frameworks but struggle to turn them into day-to-day operations that reduce risk without slowing delivery. This talk presents a practical operating model that pairs ISO/IEC 27001 (as the certifiable management system that runs governance, risk management, internal audit, and continual improvement) with NIST Cybersecurity Framework 2.0 (as the outcome-focused "dashboard" for aligning security priorities to business objectives and communicating posture to leaders). Attendees will see how to translate business goals into CSF 2.0 current and target profiles, convert those profiles into ISO 27001 objectives and control ownership, and design "evidence by default" workflows that reduce audit fire drills. The session will include real-world design patterns (paved roads, tiered decision rights, exception handling with expiry, and control health metrics) and highlight where assurance programs often drift into "control theater." The goal is a repeatable approach that both practitioners and researchers can critique, improve, and apply.

    About the speaker: Danny Vukobratovich is a Sr. IT Security Analyst at Purdue University, where he manages Purdue IT's ISO program spanning ISO/IEC 27001 (information security), ISO 9001 (quality management), and ISO/IEC 20000-1 (IT service management). He also oversees Purdue IT's business continuity and disaster recovery planning, with an emphasis on building resilient, auditable operating models that support research and administrative missions. Danny's professional focus is translating risk and governance into practical mechanisms, including clear decision rights, "evidence by design," and metrics that measure control health rather than control presence. His background includes security risk assessments, incident response, monitoring and logging, identity and access management, and standards-based audits across diverse environments. Danny holds the CISSP, ISO/IEC 27001:2022 Lead Implementer, and ITIL 4 Strategic Leader certifications, and an M.S. in Cybersecurity Management.
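The profile-gap step described above can be sketched in a few lines. This is an illustrative toy, not the speaker's tooling: the maturity scores, the CSF 2.0 subcategory selection, and the mapping to ISO/IEC 27001 Annex A control owners are all invented for demonstration.

```python
# Sketch: compare a CSF 2.0 "current" profile against a "target" profile
# and map each gap to a hypothetical ISO/IEC 27001 control owner.
# All scores, teams, and mappings below are made up for illustration.

# Maturity scored 1-4 for a few example CSF 2.0 subcategories.
current_profile = {"GV.RM-01": 2, "ID.AM-01": 3, "PR.AA-01": 1}
target_profile = {"GV.RM-01": 3, "ID.AM-01": 3, "PR.AA-01": 3}

# Hypothetical mapping of CSF subcategories to ISO 27001 controls/owners.
control_owners = {
    "GV.RM-01": ("A.5.1 Policies for information security", "CISO office"),
    "ID.AM-01": ("A.5.9 Inventory of assets", "IT asset mgmt"),
    "PR.AA-01": ("A.5.16 Identity management", "IAM team"),
}

def profile_gaps(current, target, owners):
    """Return subcategories below target, with the ISO control and owner."""
    gaps = []
    for sub, want in target.items():
        have = current.get(sub, 0)
        if have < want:
            control, owner = owners[sub]
            gaps.append((sub, want - have, control, owner))
    return gaps

for sub, gap, control, owner in profile_gaps(
        current_profile, target_profile, control_owners):
    print(f"{sub}: {gap} tier(s) short -> {control} (owner: {owner})")
```

The output of a pass like this is what feeds the "control ownership" step: each gap becomes an ISO 27001 objective with a named owner rather than an abstract dashboard color.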

    1h 3m
  2. FEB 18 · VIDEO

    Thai Le, Towards Robust and Trustworthy AI Speech Models: What You Read Isn't What You Hear

    Deepfake voice technology is rapidly advancing, but how well do current detection systems handle differences in language and writing style? Most existing work focuses on robustness to acoustic variations such as background noise or compression, while largely overlooking how linguistic variation shapes both deepfake generation and detection. Yet language matters: psycholinguistic features such as sentence structure, complexity, and word choice influence how models synthesize speech, which in turn affects how detectors score and flag audio. In this talk, we will ask questions such as: "If we change the way a person writes, while keeping their voice the same, will a deepfake detector still reach the same decision?" and "Are some text-to-speech and voice cloning models more vulnerable to shifts in writing style than others?" We will then discuss implications for designing robust deepfake voice detectors and for advancing more trustworthy speech AI in an era of increasingly synthetic media.

    About the speaker: Thai Le is an Assistant Professor of Computer Science at Indiana University's Luddy School of Informatics, Computing, and Engineering. He obtained his doctoral degree from the College of Information Sciences and Technology at Pennsylvania State University with an Excellent Research Award and a DAAD Fellowship. His research focuses on the trustworthiness of AI/ML models, with a mission to enhance the robustness, safety, and transparency of AI technology in various sociotechnical contexts. Le has published nearly 50 peer-reviewed works, including two best-paper presentation awards. He is a pioneer in collecting and investigating so-called text perturbations in the wild, a resource that users and researchers worldwide have used to study how humans' adversarial behaviors affect everyday use of AI/ML models. His work has also been featured in ScienceDaily, DefenseOne, and Engineering and Technology Magazine.

    39 min
  3. FEB 11 · VIDEO

    Bethanie Williams, AI-Assisted Cyber-Physical Attack Detection in Smart Manufacturing Systems

    The rise of Industry 4.0 has transformed manufacturing through the integration of cyber-physical systems, connectivity, and real-time data exchange into increasingly automated and intelligent platforms. While these advances improve productivity and efficiency, they also introduce vulnerabilities to cyber-physical attacks that can degrade product quality, damage equipment, and pose safety risks. Effective detection depends on understanding which data sources and levels of granularity provide sufficient visibility for accurate anomaly detection and attack identification. Replicated environments, such as digital twins (DTs), help address the challenges of collecting high-fidelity data and executing complex attack scenarios in live production systems. This talk presents an AI-assisted framework for detecting cyber-physical attacks in smart manufacturing using real machine experimentation complemented by DT-based replication. The framework evaluates multiple data sources, ranging from high-level operational data to low-level control and side-channel signals, to understand how data fidelity and context influence detection performance. A hardware-in-the-loop (HIL) DT is used to replicate machine behavior, safely execute attacks, and enable controlled experimentation that would be impractical in live production environments. Through experiments on a real CNC machining system and its corresponding HIL-based DT, multiple cyber-physical attack scenarios are evaluated using statistical, machine learning, and deep learning-based detection methods. Results demonstrate that detection effectiveness is highly dependent on attack type and data granularity, highlighting the need for domain-aware, multi-source monitoring strategies. The framework is further extended to additive manufacturing, illustrating how insights derived from CNC systems can guide attack detection in related manufacturing domains. Overall, this work demonstrates how combining AI-based detection with real-world experimentation and DT technologies enables more robust and practical security analysis for cyber-physical manufacturing systems.

    About the speaker: Dr. Bethanie Williams is an R&D, S&E Cybersecurity Engineer at Sandia National Laboratories, where she specializes in applying artificial intelligence (AI) to enhance the security and resilience of cyber-physical systems in critical infrastructure, including power grid systems, healthcare facilities, and advanced manufacturing. She is also actively involved in the Cybersecurity Manufacturing Innovation Institute (CyManII) through her work at Sandia. Bethanie earned her Bachelor of Arts degree as a triple major in Mathematics, Spanish, and Computer Science from Berea College in 2020. During her time at Berea, she was a Bonner Scholar and a member of the women's basketball team, earning All-American honors for her athletic achievements. She completed her Master of Science in Computer Science with a concentration in Cybersecurity at Tennessee Technological University in 2022, under the supervision of Dr. Ambareen Siraj, and earned her Ph.D. in Engineering with a major in Computer Science in 2025 under the guidance of Dr. Muhammad Ismail. Her dissertation, titled "Multi-Source Data Analysis and an Effective AI-Assisted Detection Framework for Cyber-Physical Attacks in Smart Manufacturing," focused on leveraging AI-driven approaches and analyzing various data sources to detect and mitigate cyber-physical attacks in manufacturing systems. Throughout her graduate studies, Bethanie received the College of Engineering Distinguished Fellowship and the National Science Foundation (NSF) Scholarship for Service (SFS). She was a year-round intern at Sandia National Laboratories as part of the Center for Cyber Defenders (CCD) program, where she contributed to national research initiatives under CyManII. Bethanie held several executive leadership roles at Tennessee Tech, including Vice President of the Cyber Eagles and of the Graduate Student Club. She also served as a Ph.D. advisor for Women in Cybersecurity (WiCyS). Through these roles, she actively mentored students, organized outreach events, and fostered a supportive community for women in cybersecurity. Bethanie's current research interests include cyber-physical security, modeling and simulation of industrial control systems, and leveraging AI for advanced manufacturing. As an Early Career R&D, S&E Cybersecurity Engineer at Sandia, she is committed to bridging academic innovation and national security applications to protect critical infrastructure and ensure its resilience.

    47 min
  4. FEB 4 · VIDEO

    Mary Jean Amon, Parental Sharing ("Sharenting") Through the Lens of Interdependent Privacy

    Parental sharing, sometimes termed "sharenting," refers to the ways parents share information about their children online and is a common mechanism through which young children are exposed to social media. Parental sharing is controversial because it carries both significant benefits and significant risks, with researchers highlighting broader concerns about long-term implications for children's developing privacy standards. Yet many parents report a high degree of acceptance of parental sharing, and the parents who expose their young children to social media the most are often also modeling risky online behaviors. This presentation examines parental sharing through the lens of privacy and security concepts, research, and interventions aimed at supporting safe and responsible parental sharing.

    About the speaker: Mary Jean Amon is a quantitative psychologist focused on human-computer interaction and an Assistant Professor in Indiana University Bloomington's Department of Informatics. Her interdisciplinary research program leverages sensing technologies and advanced analytics to understand and improve dynamic decision-making and performance in the context of complex sociotechnological systems. This includes identifying near-real-time team coordinative patterns that enhance teaming performance, as well as human factors in privacy and security. The quality of Amon's work is recognized through publications in top venues, best-paper awards, diverse research funding sources, and coverage in media outlets such as Forbes, the New York Times, and the Washington Post.

    46 min
  5. JAN 28 · VIDEO

    Young Kim, Counterfeit Medical Devices and Medicines as a Fundamental Cyber-Physical Security Problem

    Hardware security is not a new problem, but it is rapidly expanding in both consumer and medical domains due to hyperconnectivity. Counterfeit medical devices and medicines represent a fundamental security challenge. In particular, although counterfeit medicines are not a new issue, the problem continues to worsen as counterfeiting practices become increasingly sophisticated. The counterfeiting of biomedical products poses a serious threat to patient safety, public health, and economic stability in both developed and developing countries, and many current countermeasures remain vulnerable because they provide limited security. In this talk, we will share our work on biomedical hardware security with a focus on pharmaceutical products. We present cyber-physical biomedical security technologies that encode dosage information and authentication into edible biomaterials, enabling serialization, track-and-trace, and authentication at the dosage level. This approach empowers patients to play an active role in combating counterfeit medicines.

    About the speaker: Young Kim is a professor in the Weldon School of Biomedical Engineering and holds the titles of University Faculty Scholar and Showalter Faculty Scholar at Purdue University. His research centers on co-creating hardware (devices) and software (models) for large-scale societal and healthcare applications. His lab develops hybrid machine learning by combining data analytics with models grounded in optical spectroscopy and light-matter interactions to move beyond big-data, compute-intensive AI and leverage engineers' domain expertise. His work spans optical imaging and spectroscopy, mesoscopic physics, metamaterials, cancer research, hardware security, and global health, unified by machine learning and data analytics. His research has been funded by a diverse range of agencies, including NIH, CDC, VA, AFOSR, USAID, and the Gates Foundation. His primary applications are in global health and rural community health, which address large-scale societal and healthcare challenges in mutually reinforcing ways.

    54 min
  6. JAN 21 · VIDEO

    Vijayanth Tummala, Evaluating the Impact of Cyberattacks on AI-Based Machine Vision Systems: A Case Study of Threaded Fasteners

    AI-driven machine vision systems are becoming essential in mechanical engineering applications such as fastener classification, yet their increasing connectivity exposes them to adversarial cyberattacks. Model evasion attacks like FGSM can subtly alter input images and cause misclassification, raising concerns about reliability in automated manufacturing. This talk focuses on the role of Explainable AI (XAI) and human-in-the-loop strategies in detecting and mitigating such attacks. In the presented case study, an EfficientNet-B0 fastener classification model is examined using Grad-CAM visualizations to determine whether shifts in activation patterns can reveal adversarial manipulation. The study evaluates how FGSM-generated images affect model accuracy and confidence, assesses the XAI system's ability to highlight abnormal regions of attention, and considers human-in-the-loop approaches combined with XAI techniques as a practical path to strengthening the resilience of AI-based machine vision systems in manufacturing.

    About the speaker: Dr. Vijayanth Tummala is a researcher in cybersecurity and human-AI interaction. His research spans artificial intelligence and cybersecurity across interdisciplinary areas, including AI and cybersecurity leadership, AI literacy, and computer vision applications. He was one of only seven recipients of the Best Paper Award in the AI track at ASME's IMECE conference in November 2024, which features over 2,400 submissions annually. Previously, he held key leadership roles, including leading the NSA CAE-CD designation, launching graduate programs as part of a $1.5 million EDA grant received by his previous employer, and partnering with the Allen County High-Tech Crimes Unit.
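The FGSM attack the abstract names is a one-step perturbation: x_adv = x + eps * sign(dL/dx). The sketch below is illustrative only; the talk's case study uses an EfficientNet-B0 classifier, while here a toy logistic model stands in so the input gradient has a closed form and the example runs without a deep learning framework.

```python
# Minimal FGSM sketch in NumPy (illustrative stand-in, not the
# speaker's experimental setup).
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps):
    """One-step FGSM: x_adv = x + eps * sign(dL/dx), clipped to [0, 1].

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (sigmoid(w.x + b) - y_true) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model confidence
    grad_x = (p - y_true) * w                       # dL/dx in closed form
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)  # valid pixel range

rng = np.random.default_rng(0)
x = rng.uniform(size=8)      # a tiny "image" of 8 pixels
w = rng.normal(size=8)       # toy model weights
x_adv = fgsm_perturb(x, w, b=0.0, y_true=1.0, eps=0.05)
print(np.max(np.abs(x_adv - x)))  # per-pixel perturbation bounded by eps
```

Because each pixel moves by at most eps, the perturbed image stays visually near-identical while the loss increases, which is exactly the subtlety that makes Grad-CAM-style activation shifts an interesting detection signal.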

    33 min
  7. JAN 14 · VIDEO

    Rohan Paleja, Building Interpretability into Human-Aware Robots through Neural Tree-Based Models

    Collaborative robots and machine-learning-based virtual agents are increasingly entering the human workspace with the aim of increasing productivity, enhancing safety, and improving the quality of our lives. These agents will dynamically interact with a wide variety of people in dynamic and novel contexts, increasing the prevalence of human-machine teams in applications spanning from healthcare and manufacturing to household assistance. My research aims to create transparent embodied systems that can support users and interact with humans, pushing the frontier of real-world robotics systems towards those that understand human behavior, maintain interpretability, and coordinate with high performance. In this talk, I will cover a set of works that enable robots to 1) understand and learn from diverse human users, 2) learn interpretable, human-readable tree-based control policies directly via reinforcement learning, and 3) provide users with information online to improve situational awareness and facilitate effective human-robot collaboration.

    About the speaker: Dr. Rohan Paleja is an Assistant Professor in the Department of Computer Science at Purdue University. He directs the Strategies for Collaboration, Autonomy, Learning, and Exploration in Robotics Lab. The SCALE Robotics Lab focuses on advancing machine learning and artificial intelligence to improve robot learning, human-robot interaction, and multi-agent collaboration. Their goal is to equip autonomous agents with the ability to operate in the diverse, unstructured, and human-rich environments these agents will encounter in the real world. Dr. Paleja's research interests cover a broad range of topics, namely Explainable AI (xAI), Interactive Robot Learning, and Multi-Agent Collaboration. Prior to Purdue, Dr. Paleja was a Technical Staff Researcher in the Artificial Intelligence Technology group at MIT Lincoln Laboratory, where he collaborated with the Air Force Experimental Operations Unit and the Army Research Lab. Prior to that, he earned his Ph.D. in Robotics at the Georgia Institute of Technology in 2023. His work has received multiple awards, including a Best Paper Finalist Award at the Conference on Robot Learning (CoRL) and a Best Workshop Paper Award at the International Conference on Computer Vision (ICCV) Multi-Agent Relational Reasoning Workshop.
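The appeal of tree-based control policies is that the whole policy can be read. The toy below is a hand-written stand-in, not one of Dr. Paleja's learned models: the feature names ('human_dist', 'goal_dist'), thresholds, and actions are invented to show why each internal node (one feature test) and each leaf (one action) is human-auditable.

```python
# Illustrative hand-written tree policy for a mobile robot working
# near a human collaborator. Features, thresholds, and actions are
# invented for demonstration; learned tree policies have the same
# readable if/else structure.

def tree_policy(obs):
    """Return an action from a small, fully inspectable decision tree.

    obs: dict with 'human_dist' and 'goal_dist' in meters.
    """
    if obs["human_dist"] < 0.5:      # human very close: safety first
        return "stop"
    elif obs["goal_dist"] > 2.0:     # human clear, far from goal
        return "drive_fast"
    else:                            # human clear, near goal
        return "drive_slow"

print(tree_policy({"human_dist": 0.3, "goal_dist": 5.0}))  # -> stop
```

A human teammate can trace any action back to the exact feature test that produced it, which is the interpretability property the talk contrasts with opaque neural policies.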

    45 min

Ratings & Reviews

4.1
out of 5
7 Ratings
