CERIAS Weekly Security Seminar - Purdue University

CERIAS

CERIAS -- the Nation's top-ranked interdisciplinary academic education and research institute -- hosts a weekly speaker on cyber security, privacy, resiliency, or autonomy, highlighting technical discoveries, presenting case studies, or exploring cyber operational approaches; the talks are not product demonstrations, service sales pitches, or company recruitment presentations. Join us weekly... or explore 25 years of archives for the who's-who in cybersecurity.

  1. 2 days ago · Video

    Abulhair Saparov, Can/Will LLMs Learn to Reason?

    Reasoning—the process of drawing conclusions from prior knowledge—is a hallmark of intelligence. Large language models and, more recently, large reasoning models have demonstrated impressive results on many reasoning-intensive benchmarks. Careful studies over the past few years have revealed that LLMs may exhibit some reasoning behavior, and larger models tend to do better on reasoning tasks. However, even the largest current models still struggle with various kinds of reasoning problems. In this talk, we will try to address the question: Are the observed reasoning limitations of LLMs fundamental in nature? Or will they be resolved by further increasing the size and training data of these models, or by better techniques for training them? I will describe recent work that tackles this question from several different angles. The answer will help us better understand the risks posed by future LLMs as vast resources continue to be invested in their development. About the speaker: Abulhair Saparov is an Assistant Professor of Computer Science at Purdue University. His research focuses on applications of statistical machine learning to natural language processing, natural language understanding, and reasoning. His recent work closely examines the reasoning capacity of large language models, identifying fundamental limitations and developing new methods and tools to address or work around those limitations. He has also explored the use of symbolic and neurosymbolic methods to both understand and improve the reasoning capabilities of AI models. He is also broadly interested in other applications of statistical machine learning, such as in the natural sciences.
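
    To make the kind of reasoning benchmark at issue concrete, here is a minimal sketch (my illustration, not the speaker's methodology) of a synthetic multi-hop deduction probe; the fictional category names block world-knowledge shortcuts, so answering correctly requires actually chaining the implications.

    ```python
    # Minimal sketch (illustrative, not the speaker's methodology): generate a
    # synthetic multi-hop deduction problem to probe whether a model chains
    # inferences rather than recalling facts. Fictional category names prevent
    # shortcuts via memorized world knowledge.
    import random

    def make_chain(depth: int, seed: int = 0) -> tuple[str, str]:
        """Build implications C0 -> C1 -> ... -> C_depth and ask whether
        membership in C0 entails membership in C_depth."""
        rng = random.Random(seed)
        concepts = rng.sample(["wumpus", "yumpus", "zumpus", "dumpus",
                               "rompus", "numpus", "tumpus", "vumpus"], depth + 1)
        facts = [f"Every {a} is a {b}." for a, b in zip(concepts, concepts[1:])]
        question = f"Alex is a {concepts[0]}. Is Alex a {concepts[-1]}?"
        return " ".join(facts), question

    premises, question = make_chain(depth=4)
    print(premises)   # e.g., "Every zumpus is a wumpus. Every wumpus is a ..."
    print(question)   # ground truth: yes, by chaining the implications
    ```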

    53 minutes
  2. 5 November · Video

    Hanshen Xiao, When is Automatic Privacy Proof Possible for Black-Box Processing?

    Can we automatically and provably quantify and control the information leakage from black-box processing? From a statistical inference standpoint, in this talk I will start from a unified framework that summarizes existing privacy definitions based on input-independent indistinguishability and unravels the fundamental challenges in crafting privacy proofs for general data processing. Yet the landscape shifts when we gain access to the (still possibly black-box) secret generation. By carefully leveraging its entropy, we unlock black-box analysis. This breakthrough enables us to automatically "learn" the underlying inference hardness for an adversary to fully recover arbitrarily selected sensitive features, through end-to-end simulations and without any algorithmic restrictions. Meanwhile, a set of new information-theoretic tools will be introduced to efficiently minimize additional noise perturbation, aided by sharpened adversarially adaptive composition. I will also unveil the win-win between privacy and stability that enables simultaneous algorithm improvements. Concrete applications will be given in diverse domains, including privacy-preserving machine learning for image classification and large language models, side-channel leakage mitigation, and the formalization of long-standing heuristic data obfuscations. About the speaker: Hanshen Xiao is an Assistant Professor in the Department of Computer Science. He received his Ph.D. in Computer Science from MIT and his B.S. in Mathematics from Tsinghua University. Before joining Purdue, he was a research scientist at NVIDIA Research. His research focuses on provable trustworthy machine learning and computation, with a particular focus on automated black-box privatization, differential trust with applications to backdoor defense and memorization mitigation, and trustworthiness evaluation.
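
    For reference, the "input-independent indistinguishability" that the first part of the talk unifies is exemplified by the standard (ε, δ)-differential-privacy condition below; this is a textbook definition, not the speaker's new framework.

    ```latex
    % Standard (epsilon, delta)-differential privacy: an input-independent
    % indistinguishability requirement on a mechanism M.
    \[
      \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
    \]
    % for every measurable output set S and every pair of neighboring datasets
    % D, D' (differing in one record). The guarantee holds regardless of how
    % the secret was generated, which is the point of contrast with the
    % entropy-based, secret-generation-aware analysis described above.
    ```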

    58 minutes
  3. 29 October · Video

    Marcus Botacin, Malware Detection under Concept Drift: Science and Engineering

    The largest current challenge in ML-based malware detection is maintaining high detection rates as samples evolve, causing classifiers to drift. What is the best way to solve this problem? In this talk, Dr. Botacin presents two views of the problem: the scientific and the engineering. In the first part of the talk, Dr. Botacin discusses how to make ML-based drift detectors explainable. The talk discusses how one can split the classifier's knowledge in two: (1) the knowledge about the frontier between Malware (M) and Goodware (G), and (2) the knowledge about the concept of the M and G classes, to understand whether the concept or the classification frontier changed. The second part of the talk discusses how the experimental conditions in which drift-handling approaches are developed often mismatch the real deployment settings, causing the solutions to fail to achieve the desired results. Dr. Botacin points out idealized assumptions that do not hold in reality, such as: (1) the amount of drifted data a system can handle, and (2) the immediate availability of oracle data for drift detection, when in practice a scenario of label delays is much more frequent. The talk demonstrates a solution to these problems via a 5K+ experiment campaign, which illustrates (1) how to explain every drift point in a malware detection pipeline and (2) how an explainable drift detector also enables online retraining to achieve higher detection rates with fewer retraining points than traditional approaches. About the speaker: Dr. Botacin has been an Assistant Professor of Computer Science at Texas A&M University (TAMU, USA) since 2022. He holds a Ph.D. in Computer Science (UFPR, Brazil) and a Master's in Computer Science and Computer Engineering (UNICAMP, Brazil), and has been a malware analyst since 2012. He is a specialist in AV engines and sandbox development. Dr. Botacin has published research papers at major academic conferences and journals and has presented his work at major industry and hacking conferences, such as HackInTheBox and Hou.Sec.Con. Page: https://marcusbotacin.github.io/
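
    To make the frontier-versus-concept split concrete, the sketch below (my simplified illustration, not Dr. Botacin's actual pipeline) monitors the two kinds of knowledge separately: frontier drift via the classifier's decision-score distribution, and concept drift via the raw feature distribution, each checked with a two-sample Kolmogorov-Smirnov test.

    ```python
    # Simplified sketch (illustrative, not Dr. Botacin's pipeline): monitor
    # "frontier" drift (the distribution of decision scores, i.e., signed
    # distances to the M/G boundary) separately from "concept" drift (the
    # feature distribution itself), so a flagged drift can be attributed
    # to one or the other.
    from scipy.stats import ks_2samp

    def drift_report(clf, X_ref, X_new, alpha=0.01):
        """Compare a reference window to a new window of (unlabeled) samples."""
        # Frontier: has the decision-score distribution shifted?
        frontier_p = ks_2samp(clf.decision_function(X_ref),
                              clf.decision_function(X_new)).pvalue
        # Concept: per-feature KS tests, Bonferroni-corrected.
        n_feat = X_ref.shape[1]
        concept_p = min(1.0, n_feat * min(
            ks_2samp(X_ref[:, j], X_new[:, j]).pvalue for j in range(n_feat)))
        return {"frontier_drift": frontier_p < alpha,
                "concept_drift": concept_p < alpha}
    ```

    Note that this toy detector assumes both windows are available unlabeled; a real deployment would still face the label delays the talk highlights.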

    52 minutes
  4. 22 October · Video

    Rajiv Khanna, The Shape of Trust: Structure, Stability, and the Science of Unlearning

    Trust in modern AI systems hinges on understanding how they learn—and, increasingly, how they can forget. This talk develops a geometric view of trustworthiness that unifies structure-aware optimization, stability analysis, and the emerging science of unlearning. I will begin by revisiting the role of sharpness and flatness in shaping both generalization and sample sensitivity, showing how the geometry of the loss landscape governs what models remember. Building on these insights, I will present recent results on Sharpness-Aware Machine Unlearning, a framework that characterizes when and how learning algorithms can provably erase the influence of specific data points while preserving accuracy on the rest. The discussion connects theoretical guarantees with empirical findings on the role of data distribution and loss geometry in machine unlearning—ultimately suggesting that the shape of the optimization landscape is the shape of trust itself. About the speaker: Rajiv Khanna is an Assistant Professor in the Department of Computer Science. His research interests span various subfields of machine learning, including optimization, theory, and interpretability. Previously, he held positions as a Visiting Faculty Researcher at Google, a postdoctoral scholar at the Foundations of Data Analytics Institute at the University of California, Berkeley, and a Research Fellow in the Foundations of Data Science program at the Simons Institute, also at UC Berkeley. He received his Ph.D. from UT Austin.
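
    As a rough, generic illustration of the "sharpness" quantity such analyses build on (my sketch under simple assumptions, not the speaker's Sharpness-Aware Machine Unlearning framework), one can probe how much the loss rises under small random weight perturbations: flat minima barely move, sharp minima jump.

    ```python
    # Generic sketch (not the speaker's framework): estimate local sharpness
    # as the mean loss increase under small random, scale-matched weight
    # perturbations. Larger values indicate a sharper minimum.
    import copy
    import torch

    def sharpness_estimate(model, loss_fn, batch, rho=0.05, trials=10):
        x, y = batch
        base = loss_fn(model(x), y).item()  # loss at the trained parameters
        rises = []
        for _ in range(trials):
            noisy = copy.deepcopy(model)
            with torch.no_grad():
                for p in noisy.parameters():
                    # perturbation scaled to each parameter's typical magnitude
                    p.add_(rho * torch.randn_like(p) * p.abs().mean())
            rises.append(loss_fn(noisy(x), y).item() - base)
        return sum(rises) / trials
    ```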

    56 minutes
  5. 8 October · Video

    Stephen Kines, Four Deadly Sins of Cyber: Sloth, Gluttony, Greed & Pride

    In the UK, one of the great global car brands is on the verge of bankruptcy this month due to a single cyber-attack, with a potential loss of 130,000 jobs; Jaguar Land Rover is seeking a government bail-out to survive. In this first of a series of seminars delivered by the founder of a cybersecurity company in the same city where Jaguar Land Rover is reeling from this attack, we will cover Four Deadly Sins of Cyber, with the other three sins in a follow-up seminar:

    1. Sloth: Organizations with bloated legacy architectures and slow patch cycles run the very real risk of seeing their progress as "good enough" up until the very moment some major event proves it wasn't. We will look at how to focus on compartmentalization and containment.
    2. Gluttony: Exponential expansion of networks and devices to serve the AI masters, leading to the Skynet moment. Cyber threats leverage connectivity to spread; contagion control comes from knowing how to control that connectivity.
    3. Greed: The insatiable desire to acquire the latest and greatest security software, in the belief that newer is better, irrespective of how it fits and how it is to be used. Not so in OT networks, where few of those products are fit for purpose. Aiming for simplicity serves the most important questions: "what is where?", "what exactly is the threat?", and "where can we exert control over threats accessing critical resources?"
    4. Pride: Overconfidence and self-assuredness that the status quo, doing more of the same, will be fine. How's that working out so far? Humans-in-the-loop: some method of controlling contagion is essential. Minimizing the loss remains mandatory.

    The second half of the seminar will cover three perspectives from the founder of a hardware cybersecurity innovator:

    1. The need to look at RoI when deploying solutions;
    2. How to frame CNI cyber solutions within SDG/sustainability/impact; and
    3. Moving beyond code-jockeys: cyber career perspectives requiring skills in the humanities (psychology, philosophy, etc.) to think differently.

    About the speaker: Stephen is an international corporate lawyer with expertise in complex M&A and tax-efficient commercial transactions in the US, UK, and emerging markets. He has been a general counsel for ultra-high-net-worth individuals and families as well as international law firms. He is focused on emerging technologies, including blockchain and cybersecurity. A natural manager, Stephen also isn't afraid to do the work that needs to be done in an efficient bootstrapped startup. He is also known for his avid community engagement and commitment to sustainability at all levels. Also a former military officer, Stephen is the 2IC of Goldilock, keeping 'selection and maintenance of the aim' front of mind.

    46 minutes
