Ctrl+Alt+AI

BigID

Rebooting the way we think about AI, data, & risk.

  1. Inside AgentForce: The Future of Autonomous AI

    14 hours ago

    Inside AgentForce: The Future of Autonomous AI

    AI agents are appearing across every enterprise platform, but most still struggle to move beyond scripted automation into systems that can reason, adapt, and operate within real workflows. On this episode of Ctrl + Alt + AI, host Dimitri Sirota sits down with Vivienne Wei, COO for Unified Agentforce Platform, Apps & Industries Technology at Salesforce, to examine what it actually takes to deploy agentic AI at scale. Vivienne leads the unified Agentforce platform, which brings together data, governance, and AI to enable agents that can act autonomously across enterprise systems. She explains how agentic AI differs from earlier automation, why context engineering is becoming a core requirement, and how governance models must evolve as agents become active participants in business processes. For security and data leaders, this discussion highlights a shift already underway. Agents are not just interacting with data. They are acting on it, which raises new questions around access control, accountability, and trust.
    What to expect:
    - Agentic AI requires governance models built for non-human actors
    - Context engineering determines whether agents are useful or risky
    - How to start with business outcomes, not agent capabilities

    Things to listen for:
    (00:00) Meet Vivienne Wei
    (01:25) What Agentforce is and how it works
    (02:35) Defining agentic AI vs traditional automation
    (04:24) Why context engineering is becoming critical
    (06:00) Governing agents as non-human identities
    (08:30) Policy enforcement and access control for agents
    (09:00) The shift toward multi-agent orchestration
    (10:03) How different enterprise agents will interact
    (11:24) Observability and monitoring agents in production
    (13:00) Personal productivity vs enterprise transformation
    (15:00) Where companies should start with agentic AI
    (18:00) Operating models across IT and business teams
    (20:00) Measuring ROI from agents in real deployments
    (21:30) Workforce impact and organizational resistance
    (23:00) What the next year of agentic AI may bring

    25 min
  2. What Enterprises Still Don’t Understand About AI Risk

    March 18

    What Enterprises Still Don’t Understand About AI Risk

    AI adoption is accelerating, but many organizations are discovering the same problem. The technology is moving faster than the data foundation required to support it. On this episode of Ctrl + Alt + AI, host Dimitri Sirota speaks with Scott Wimberly, Senior Manager for Data & AI at Accenture, about why enterprise AI success still depends on disciplined data management. Scott explains how the shift from traditional machine learning to generative AI has exposed weaknesses in how companies manage their data. Fragmented systems, poor governance, and inconsistent data models make it difficult for organizations to trust AI outputs. The conversation explores how enterprises can address these challenges through clearer data ownership, better governance, and practical approaches that focus on solving smaller problems first. For security leaders, data teams, and AI practitioners, the discussion offers a grounded view of what it takes to turn AI investments into real business results.

    In this episode, you’ll learn:
    - How early excitement about generative AI outpaced enterprise data readiness
    - How legacy systems and fragmented data environments create major barriers for AI programs
    - Why enterprise leaders should focus on measurable outcomes and ROI when investing in AI

    Things to listen for:
    (00:00) Meet Scott Wimberly
    (01:32) Why AI and data strategy must go together
    (02:53) How AI evolved from ML to generative models
    (05:10) Moving beyond chatbots to real AI decision systems
    (06:05) Why data ownership matters more than traditional stewardship
    (07:44) The growing importance of unstructured data for AI
    (13:42) LLMs, SLMs, and the rise of enterprise AI agents
    (15:11) How MCP connects enterprise data with external models
    (17:06) Why legacy systems make AI adoption difficult
    (20:15) Why ROI still determines whether AI projects succeed
    (22:16) Solving AI challenges one problem at a time

    23 min
  3. Why AI Breaks Traditional Security Playbooks

    February 16

    Why AI Breaks Traditional Security Playbooks

    AI has quietly embedded itself across the enterprise, but many security teams are still guarding it like a single tool, not the shared risk it’s become. On this episode of Ctrl + Alt + AI, host Dimitri Sirota sits down with Aqsa Taylor, Chief Research Officer at Software Analyst Cyber Research, to break down how AI is changing the speed, scale, and structure of modern cyber threats. Drawing from direct conversations with CISOs, Aqsa explains why AI shortens attack timelines, lowers the barrier for sophisticated threats, and forces security teams to rethink response and recovery. The conversation focuses on what security leaders are missing as AI spreads across employees and third-party platforms. Aqsa outlines why securing AI requires treating it as an ongoing lifecycle tied to core security fundamentals rather than a one-time deployment.

    In this episode, you’ll learn:
    - Why AI-driven attacks demand faster containment, not more alerts
    - How overprivileged AI access quietly expands security risk
    - Why cleaning data before it reaches AI should be top of mind

    Things to listen for:
    (00:00) Meet Aqsa Taylor
    (00:22) Why AI risk connects directly to data security
    (01:15) What CISOs are focused on right now
    (02:23) AI use is unavoidable inside organizations
    (03:51) Securing models and the data behind them
    (04:27) How AI speeds up attacks and response pressure
    (06:10) Data filtering, privileges, and prompt risk
    (07:15) LLMs, copilots, and agents create different risks
    (09:31) Cleaning data before it reaches AI
    (11:19) Why humans should stay in the loop
    (14:21) AI-driven phishing and malware scale faster
    (18:01) Testing AI SOC tools against real incidents
    (21:15) Governance helps but fundamentals matter more
    (24:31) Managing third-party AI access and visibility
    (26:49) Fix fundamentals before chasing AI threats

    29 min
  4. How AI Investing Shapes the Next Tech Cycle

    January 14

    How AI Investing Shapes the Next Tech Cycle

    AI became expensive the same way anything does: by outpacing the world around it. Join us in this episode with Noah Yago, Vice President of Cisco Investments at Cisco, to trace how generative AI reached this moment and what comes next. Drawing on decades of experience across venture capital, corporate development, and global investing, Noah walks through how Cisco thinks about AI not as a single breakthrough, but as a sequence of bets across models, data, infrastructure, and geography. The conversation moves from early machine learning investments to today’s foundation models, then forward into world models, spatial intelligence, and sovereign AI stacks. Noah also explains why capital concentration shapes outcomes, why enterprise adoption looks different from consumer hype, and why regional data and regulation are quietly redefining how AI systems are built and deployed. Rather than predicting a single winner, this episode explores how AI markets actually form, how costs eventually fall, and why staying close to the fastest growers matters more than betting on any one narrative.

    In this episode, you’ll learn:
    - Why AI markets reward early scale, and how access to capital directly affects talent, cost structures, and long-term survival
    - How world models and spatial intelligence change compute economics and improve reasoning beyond text-based systems
    - What enterprise and public sector adoption reveal about on-premise AI, regulatory pressure, and hybrid deployment strategies

    Things to listen for:
    (00:00) Meet Noah Yago
    (01:15) From founder to venture investor inside Cisco
    (03:59) How Cisco began treating AI as a core investment focus
    (05:32) The four AI categories Cisco invests in
    (07:31) Competing foundation models and concentrated capital
    (09:45) Regional AI stacks and data sovereignty pressures
    (13:54) Why model performance is flattening
    (15:21) World models and the next phase of AI reasoning
    (18:19) Data as a moat across text, video, and 3D
    (19:32) Sovereign AI clouds and state-driven infrastructure
    (22:56) Why enterprises are reconsidering on-prem AI
    (28:42) Capital intensity and winner-take-all dynamics

    33 min
  5. Why Agent Identity Is Now a Security Priority

    December 10, 2025

    Why Agent Identity Is Now a Security Priority

    AI agents are moving fast, and security teams are scrambling to keep up. Join Heather Ceylan, SVP & Chief Information Security Officer at Box, who has spent the last several years leading security teams through rapid change, from the explosive growth years at Zoom to her current work shaping Box’s AI posture. Heather shares what it actually feels like to run security at a time when agents can be created in minutes, permissions matter more than ever, and governance committees are struggling to keep pace. She explains why treating agents as identities fundamentally changes the model, how MCP servers introduce new exposure points, and why her team is embedding AI directly into SOC work, design reviews, and vulnerability remediation. It’s a grounded look at how a CISO makes sense of AI while everything around the role continues to shift.

    In this episode, you’ll learn:
    - Why agents need their own identities and permissions rather than inheriting access from the people who create them
    - How SOC teams can shift from constant alert triage to real threat hunting with the help of AI agents
    - How AI can speed up vulnerability remediation by creating pull requests that engineers only need to review and merge

    Things to listen for:
    (00:00) Meet Heather Ceylan
    (00:58) Career path from healthcare to Zoom to Box
    (03:58) Risks of AI agents accessing unstructured content
    (05:18) Why agent identity and permissions are the new priority
    (06:50) The challenge of discovering and governing ephemeral agents
    (08:16) How sandboxes and policies support safe experimentation
    (09:20) AI governance gaps and the need for dedicated ownership
    (13:10) Defining AI governance across technical and legal domains
    (16:17) The rise of MCP servers and new exposure points
    (18:05) Four AI bets transforming Box’s SOC and security workflows
    (23:31) KPIs and measuring AI’s impact on security teams
    (25:27) Resource trade-offs when adopting AI in security
    (27:58) Managing the complexity of model selection and trust
    (29:58) Should companies form dedicated AI security teams?

    32 min
  6. Privacy Professionals on the Front Lines of AI Risk

    November 26, 2025

    Privacy Professionals on the Front Lines of AI Risk

    Security and privacy leaders are under pressure to sign off on AI, manage data risk, and answer regulators’ questions while the rules are still taking shape and the data keeps moving. On this episode of Ctrl + Alt + AI, host Dimitri Sirota sits down with Trevor Hughes, President & CEO of the IAPP, to unpack how decades of privacy practice can anchor AI governance, why the shift from consent to data stewardship changes the game, and what it really means to “know your AI” by knowing your data. Together, they break down how CISOs, privacy leaders, and risk teams can work from a shared playbook to assess AI risk, apply practical controls to data, and get ahead of emerging regulation without stalling progress.

    In this episode, you’ll learn:
    - Why privacy teams already have methods that can be adapted to oversee AI systems
    - Why boards and executives want simple, defensible stories about risk from AI use
    - Why the strongest programs integrate privacy, security, and ethics into a single strategy

    Things to listen for:
    (00:00) Meet Trevor Hughes
    (01:39) The IAPP’s mission and global privacy community
    (03:45) What AI governance means for security leaders
    (05:56) Responsible AI and real-world risk tradeoffs
    (08:47) Aligning privacy, security, and AI programs
    (15:20) Early lessons from emerging AI regulations
    (18:57) Know your AI by knowing your data
    (22:13) Rethinking consent and data stewardship
    (28:05) Vendor responsibility for AI and data risk
    (31:26) Closing thoughts and how to find the IAPP

    32 min
4.8
out of 5
12 ratings

About

Rebooting the way we think about AI, data, & risk.