9 episodes

Manage risk at the junction of artificial intelligence and software security.

Deploy Securely
StackAware

    • Technology

    Compliance and AI - 3 quick observations

    Here are the top 3 things I'm seeing:

    1️⃣ Auditors don't (yet) have strong opinions on how to deploy AI securely.
    2️⃣ Enforcement is here, just not evenly distributed.
    3️⃣ Integrating AI-specific requirements with existing security, privacy, and compliance ones isn't going to be easy.

    Want to see the full post? Check out the Deploy Securely blog: https://blog.stackaware.com/p/ai-governance-compliance-auditors-enforcement

    • 4 min
    Code Llama: 5-minute risk analysis

    Someone asked me what the unintended training and data retention risk with Meta's Code Llama is.

    My answer: the same as for every other model you host and operate on your own.

    And, all other things being equal, it's lower than that of anything operated as-a-Service (-aaS) like ChatGPT or Claude.

    Check out this video for a deeper dive, or read the full post on Deploy Securely: https://blog.stackaware.com/p/code-llama-self-hosted-model-unintended-training

    Want more AI security resources? Check out: https...

    • 4 min
    4th party AI processing and retention risk

    So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then...

    ...you realize your already-approved tools are themselves starting to leverage 4th party AI vendors.

    Welcome to the modern digital economy. Things are complex and getting even more so.

    That's why you need to incorporate 4th party risk into your security policies, procedures, and overall AI governance program.

    Check out the full post with the Asana and Databricks examples I mentioned: ht...

    • 6 min
    Sensitive Data Generation

    I'm worried about data leakage from LLMs, but probably not why you think.

    While unintended training is a real risk that can't be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG).

    A recent paper (https://arxiv.org/pdf/2310.07298v1.pdf) shows how LLMs can infer huge amounts of personal information from seemingly innocuous comments on Reddit.

    And this phenomenon will have huge impacts for:
    - Material nonpublic information
    - Executive moves
    - Trade sec...

    • 6 min
    Artificial Intelligence Risk Scoring System (AIRSS) - Part 2

    What does "security" even mean with AI?

    You'll need to define things like:

    BUSINESS REQUIREMENTS
    - What type of output is expected?
    - What format should it be?
    - What is the use case?

    SECURITY REQUIREMENTS
    - Who is allowed to see which outputs?
    - Under which conditions?

    Having these things spelled out is a hard requirement before you can start talking about the risk of a given AI model. (One illustrative way to capture such requirements is sketched after this entry.)

    Continuing the build-out of the Artificial Intelligence Risk Scoring System (AIRSS), I tackle these issues - and more...

    • 10 min
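A minimal sketch of how the business and security requirements above could be recorded as a machine-readable profile. All class and field names here are hypothetical, chosen for illustration; they are not the actual AIRSS schema described in the episode:

```python
# Hypothetical sketch: capturing business and security requirements for an
# AI model before assessing its risk. Names are illustrative only, not the
# actual AIRSS schema.
from dataclasses import dataclass, field


@dataclass
class BusinessRequirements:
    expected_output_type: str  # e.g. "free-text summary"
    output_format: str         # e.g. "JSON" or "plain text"
    use_case: str              # e.g. "customer support triage"


@dataclass
class SecurityRequirements:
    # Which roles may see which categories of output.
    authorized_viewers: dict[str, list[str]] = field(default_factory=dict)
    # Conditions under which access is permitted.
    access_conditions: list[str] = field(default_factory=list)


@dataclass
class AIModelProfile:
    model_name: str
    business: BusinessRequirements
    security: SecurityRequirements


# Example profile for a hypothetical internal model.
profile = AIModelProfile(
    model_name="support-summarizer-v1",
    business=BusinessRequirements(
        expected_output_type="free-text summary",
        output_format="plain text",
        use_case="customer support triage",
    ),
    security=SecurityRequirements(
        authorized_viewers={"ticket summaries": ["support", "engineering"]},
        access_conditions=["authenticated corporate SSO session"],
    ),
)
```

Writing a profile like this down before deployment gives any subsequent risk assessment something concrete to evaluate against.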
    Artificial Intelligence Risk Scoring System (AIRSS) - Part 1

    AI cyber risk management needs a new paradigm.

    Logging CVEs and using CVSS just does not make sense for AI models, and won't cut it going forward.

    That's why I launched the Artificial Intelligence Risk Scoring System (AIRSS): a quantitative approach to measuring cybersecurity risk from artificial intelligence systems. I am building it in public to help refine and improve the approach. (A generic example of what a quantitative risk calculation looks like appears after this entry.)

    Check out the first post in a series where I lay out my methodology: https://blog.stackaware.com/p/artificial-int...

    • 14 min
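For context on what "quantitative" means here, this is a minimal sketch of the classic annualized-loss-expectancy calculation from general risk analysis. It is a generic illustration only, not the AIRSS formula, which is laid out in the linked post:

```python
# Generic annualized-loss-expectancy (ALE) sketch, shown only to illustrate
# a dollar-denominated, quantitative risk measure. This is NOT the AIRSS
# formula; see the linked Deploy Securely post for that methodology.
def annualized_loss_expectancy(annual_event_probability: float,
                               expected_loss_per_event: float) -> float:
    """Expected yearly loss in dollars for a single risk scenario."""
    if not 0.0 <= annual_event_probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return annual_event_probability * expected_loss_per_event


# Example: a 5% yearly chance of a $200,000 sensitive-data exposure.
print(annualized_loss_expectancy(0.05, 200_000))  # -> 10000.0
```

Unlike an ordinal score such as CVSS, a measure in this style produces results in units (dollars per year) that can be compared directly against the cost of mitigations.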
