31 Folgen

Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a., MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today.Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.

The MLSecOps Podcast MLSecOps.com

    • Technologie

Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a., MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today.Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.

    Practical Foundations for Securing AI

    Practical Foundations for Securing AI

    In this episode of the MLSecOps Podcast, we delve into the critical world of security for AI and machine learning with our guest Ron F. Del Rosario, Chief Security Architect and AI/ML Security Lead at SAP ISBN. The discussion highlights the contextual knowledge gap between ML practitioners and cybersecurity professionals, emphasizing the importance of cross-collaboration and foundational security practices. We explore the contrasts of security for AI to that for traditional software, along with the risk profiles of first-party vs. third-party ML models. Ron sheds light on the significance of understanding your AI system's provenance, having necessary controls, and audit trails for robust security. He also discusses the "Secure AI/ML Development Framework" initiative that he launched internally within his organization, featuring a lean security checklist to streamline processes. We hope you enjoy this thoughtful conversation!
    Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

    Additional tools and resources to check out:
    Protect AI Radar: End-to-End AI Risk Management
    Protect AI’s ML Security-Focused Open Source Tools
    LLM Guard - The Security Toolkit for LLM Interactions
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform

    • 38 Min.
    Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex

    Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex

    In this episode of the MLSecOps Podcast, host Neal Swaelens, along with co-host Oleksandr Yaremchuk, sit down with special guest Simon Suo, co-founder and CTO of LlamaIndex. Simon shares insights into the development of LlamaIndex, a leading data framework for orchestrating data in large language models (LLMs). Drawing from his background in the self-driving industry, Simon discusses the challenges and considerations of integrating LLMs into various applications, emphasizing the importance of contextualizing LLMs within specific environments.

    The conversation delves into the evolution of retrieval-augmented generation (RAG) techniques and the future trajectory of LLM-based applications. Simon comments on the significance of balancing performance with cost and latency in leveraging LLM capabilities, envisioning a continued focus on data orchestration and enrichment.

    Addressing LLM security concerns, Simon emphasizes the critical need for robust input and output evaluation to mitigate potential risks. He discusses the potential vulnerabilities associated with LLMs, including prompt injection attacks and data leakage, underscoring the importance of implementing strong access controls and data privacy measures. Simon also highlights the ongoing efforts within the LLM community to address security challenges and foster a culture of education and awareness.

    As the discussion progresses, Simon introduces LlamaCloud, an enterprise data platform designed to streamline data processing and storage for LLM applications. He emphasizes the platform's tight integration with the open-source LlamaIndex framework, offering users a seamless transition from experimentation to production-grade deployments. Listeners will also learn about LlamaIndex's parsing solution, LlamaParse.

    Join us to learn more about the ongoing journey of innovation in large language model-based applications, while remaining vigilant about LLM security considerations.
    Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

    Additional tools and resources to check out:
    Protect AI Radar: End-to-End AI Risk Management
    Protect AI’s ML Security-Focused Open Source Tools
    LLM Guard - The Security Toolkit for LLM Interactions
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform

    • 31 Min.
    AI Threat Research: Spotlight on the Huntr Community

    AI Threat Research: Spotlight on the Huntr Community

    Learn about the world’s first bug bounty platform for AI & machine learning, huntr, including how to get involved!
    This week’s featured guests are leaders from the huntr community (brought to you by Protect AI): 
    Dan McInerney, Lead AI Threat Researcher 
    Marcello Salvati, Sr. Engineer & Researcher 
    Madison Vorbrich, Community Manager 


    Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

    Additional tools and resources to check out:
    Protect AI Radar: End-to-End AI Risk Management
    Protect AI’s ML Security-Focused Open Source Tools
    LLM Guard - The Security Toolkit for LLM Interactions
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform

    • 31 Min.
    Securing AI: The Role of People, Processes & Tools in MLSecOps

    Securing AI: The Role of People, Processes & Tools in MLSecOps

    In this episode of The MLSecOps Podcast hosted by Daryan Dehghanpisheh (Protect AI) and special guest-host Martin Stanley, CISSP (Cybersecurity and Infrastructure Security Agency), we delve into critical aspects of AI security and operations. This episode features esteemed guests, Gary Givental (IBM) and Kaleb Walton (FICO).

    The group's discussion unfolds with insights into the evolving field of Machine Learning Security Operations, aka, MLSecOps. A recap of CISA's most recent Secure by Design and Secure AI initiatives sets the stage for the a dialogue that explores the parallels between MLSecOps and DevSecOps. The episode goes on to illuminate the challenges of securing AI systems, including data integrity and third-party dependencies. The conversation also travels to the socio-technical facets of AI security, explores MLSecOps and AI security posture roles within an organization, and the interplay between people, processes, and tools essential to successful MLSecOps implementation.
    Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

    Additional tools and resources to check out:
    Protect AI Radar: End-to-End AI Risk Management
    Protect AI’s ML Security-Focused Open Source Tools
    LLM Guard - The Security Toolkit for LLM Interactions
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform

    • 37 Min.
    ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance

    ReDoS Vulnerability Reports: Security Relevance vs. Noisy Nuisance

    In this episode, we delve into a hot topic in the bug bounty world: ReDoS (Regular Expression Denial of Service) reports. Inspired by reports submitted by the huntr AI/ML bug bounty community and an insightful blog piece by open source expert, William Woodruff (Engineering Director, Trail of Bits), this conversation explores: 
    Are any ReDoS vulnerabilities worth fixing?Triaging and the impact of ReDoS reports on software maintainers.The challenges of addressing ReDoS vulnerabilities amidst developer fatigue and resource constraints.Analyzing the evolving trends and incentives shaping the rise of ReDoS reports in bug bounty programs, and their implications for severity assessment.Can LLMs be used to help with code analysis?Tune in as we dissect the dynamics of ReDoS, offering insights into its implications for the bug hunting community and the evolving landscape of security for AI/ML.
    Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

    Additional tools and resources to check out:
    Protect AI Radar: End-to-End AI Risk Management
    Protect AI’s ML Security-Focused Open Source Tools
    LLM Guard - The Security Toolkit for LLM Interactions
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform

    • 35 Min.
    Finding a Balance: LLMs, Innovation, and Security

    Finding a Balance: LLMs, Innovation, and Security

    In this episode of The MLSecOps Podcast, special guest, Sandy Dunn, joins us to discuss the dynamic world of large language models (LLMs) and the equilibrium of innovation and security. Co-hosts, Daryan “D” Dehghanpisheh and Dan McInerney talk with Sandy about the nuanced challenges organizations face in managing LLMs while mitigating AI risks.
    Exploring the swift pace of innovation juxtaposed with the imperative of maintaining robust security measures, the trio examines the critical need for organizations to adapt their security posture management to include considerations for AI usage.


    Thanks for listening! Find more episodes and transcripts at https://bit.ly/MLSecOpsPodcast.

    Additional tools and resources to check out:
    Protect AI Radar: End-to-End AI Risk Management
    Protect AI’s ML Security-Focused Open Source Tools
    LLM Guard - The Security Toolkit for LLM Interactions
    Huntr - The World's First AI/Machine Learning Bug Bounty Platform

    • 41 Min.

Top‑Podcasts in Technologie

Acquired
Ben Gilbert and David Rosenthal
Lex Fridman Podcast
Lex Fridman
Digital Podcast
Schweizer Radio und Fernsehen (SRF)
Apfelfunk
Malte Kirchner & Jean-Claude Frick
DiscoPosse Podcast
Eric Wright
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC

Das gefällt dir vielleicht auch

Darknet Diaries
Jack Rhysider
Practical AI: Machine Learning, Data Science
Changelog Media
Security Now (Audio)
TWiT
Risky Business
Patrick Gray
Hacking Humans
N2K Networks
This Week in Startups
Jason Calacanis