33 min.

EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It Cloud Security Podcast by Google

    • Technology

Guests: 
Umesh Shankar, Distinguished Engineer, Chief Technologist for Google Cloud Security
Scott Coull, Head of Data Science Research, Google Cloud Security
Topics:
What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM?
What can a “security-trained LLM” do better than a regular LLM?
Does making it better at security make it worse at other things that we care about?
What can a security team do with it today? What are the “starter use cases” for SecLM?
What has been the feedback so far in terms of impact, both from practitioners and from team leaders? Are we seeing the limits of LLMs for our use cases? Is the realization that “LLMs are not magic” finally dawning?
Resources:
“How to tackle security tasks and workflows with generative AI” (Google Cloud Next 2024 session on SecLM)
EP136 Next 2023 Special: Building AI-powered Security Tools - How Do We Do It?
EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models
Supercharging security with generative AI 
Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma?
Considerations for Evaluating Large Language Models for Cybersecurity Tasks
Introducing Google’s Secure AI Framework
Deep Learning Security and Privacy Workshop 
Security Architectures for Generative AI Systems
ACM Workshop on Artificial Intelligence and Security
Conference on Applied Machine Learning in Information Security
 

