So What About AI Agents

Philippe Trounev

🎙 What About AI Agents is your go-to podcast for exploring the rapidly evolving world of AI agents. From automating workflows to revolutionizing industries, we break down the latest advancements, real-world applications, and emerging trends in AI. Join us weekly as we uncover how AI agents are shaping our future, featuring expert interviews, thought-provoking insights, and stories that bridge the gap between humans and intelligent systems. Whether you're an AI enthusiast, industry professional, or simply curious about the tech shaping tomorrow, What About AI Agents has something for you.

  1. Agentic Code Scanning - EP 54 - Rome Thorstenson - Rafter.so

    21 H AGO

    In this episode, Philippe Trounev interviews Rome Thorstenson, a software engineer and AI researcher, about the intersection of AI and cybersecurity. They explore the current state of code security, the role of AI agents in identifying vulnerabilities, and the challenges of trusting these systems. Rome shares insights from his research at NeurIPS and emphasizes the importance of proactive security measures for developers.

    Takeaways
    - 80% of the code shipped to production is not secure.
    - AI agents are increasingly used to analyze code for vulnerabilities.
    - Security often takes a backseat to feature development.
    - Evaluating the security of a code base is a complex task.
    - Prompt injection poses significant risks for AI systems.
    - Developers need to prioritize security in their workflows.
    - Rafter offers tools to simplify security scanning for developers.
    - Research in mechanistic interpretability can enhance AI security agents.
    - The landscape of cybersecurity is evolving with AI advancements.
    - Proactive security measures are essential to combat emerging threats.

    Titles
    - AI's Role in Cybersecurity: A Deep Dive
    - Understanding Code Vulnerabilities with AI Agents

    Sound Bites
    - "AI writes most of the code."
    - "80% of the code is not secure."
    - "Prompt injection is a huge problem."

    Chapters
    00:00 Introduction to AI Agents in Cybersecurity
    02:41 The State of Code Security and Vulnerabilities
    05:10 Building AI Agents for Code Analysis
    07:52 Evaluating AI Agents and Benchmarking
    10:27 Autonomous Feedback Loops in Cybersecurity
    13:07 Trusting AI Agents for Security Fixes
    15:47 Understanding Vulnerabilities and AI's Role
    18:42 Real-World Examples of Vulnerability Detection
    23:25 Navigating App Development Challenges
    24:32 Getting Started with Rafter
    28:03 Understanding Mechanistic Interpretability
    35:06 Interpreting Model Features and Security
    37:49 Top Security Practices for Developers

    https://www.docsie.io
    Join us on Discord: https://discord.gg/pAUGNTzv

    41 min
  2. Voice Agents at Scale - EP 53 - Laurent Cohen - Getoblic

    4 FEB

    In this episode, Philippe Trounev interviews Laurent Cohen from Getoblic, who discusses the deployment of 1.6 million voice AI agents. Laurent explains the transition from a SaaS model to an infrastructure layer, emphasizing the importance of data gathering and SEO strategies. He shares insights on unit economics, cost efficiency, and the monetization strategies for their voice AI services. The conversation also covers the workflow of AI agents, team structure, early success metrics, and competitive advantages in the voice AI market.

    Takeaways
    - The deployment of 1.6 million voice AI agents is a significant achievement.
    - Shifting from a SaaS model to an infrastructure layer is crucial for scalability.
    - Unit economics and cost efficiency are vital for sustainable growth.
    - SEO should be handled in-house, as it is the DNA of a company.
    - Gathering data is essential for training AI agents effectively.
    - Monetization strategies include offering free claims for businesses to engage with the platform.
    - AI agents work in a structured workflow to handle customer inquiries.
    - A small team can achieve significant results with the right automation.
    - Early success metrics include claimed pages and minutes spent with voice agents.
    - Building competitive moats involves leveraging unique data and insights.

    Sound Bites
    - "We need to scale data."
    - "Money is the enemy."
    - "Let's help each other."

    Chapters
    00:00 Introduction to Voice AI at Scale
    02:54 The Shift from SaaS to Infrastructure Layer
    05:24 Unit Economics and Cost Efficiency
    08:13 SEO Strategies and Data Gathering
    11:07 Monetization Strategies for Voice AI
    14:11 Workflow of AI Agents
    16:50 Team Structure and Automation
    19:40 Early Success Metrics and Conversion
    22:19 Building Competitive Moats
    25:07 The Future of Voice AI and Marketing Strategies

    Join us on Discord: https://discord.gg/pAUGNTzv

    27 min
  3. Agentic Prediction - EP 52 - Michael Ulin - Tenki AI

    27 JAN

    In this conversation, Michael Ulin, CEO of Tenki AI, discusses the intricacies of building AI agents, particularly in the context of prediction markets. He emphasizes the importance of understanding limitations, building trust with users, and the architecture of multi-agent systems. Michael shares insights on logging practices, avoiding overfitting, and the cost-effectiveness of predictions. He also touches on the long-term vision for Tenki AI, strategies for product launch, and the advantages of bootstrapping a startup. Throughout the discussion, he offers valuable advice for founders navigating the AI landscape.

    Takeaways
    - Understanding limitations is crucial for AI agents.
    - Building trust with users is essential for success.
    - Multi-agent systems can improve forecasting accuracy.
    - Breaking down problems into subcomponents enhances performance.
    - Logging practices are vital for system improvement.
    - Avoiding overfitting is key to reliable predictions.
    - Rapid feedback loops are beneficial in prediction markets.
    - Validating demand before product development is important.
    - Bootstrapping can be more efficient than seeking venture funding.
    - Focus on solving real problems that you personally experience.

    Titles
    - Unlocking the Power of AI Agents
    - Building Trust in AI Systems

    Sound Bites
    - "What actually works when building agents?"
    - "Logging everything helps improve the system."
    - "Validate demand before building your product."

    Chapters
    00:00 Introduction to Tenki AI and Michael Ulin
    00:48 Building Trust in AI Agents
    03:37 Understanding Tenki's Multi-Agent Architecture
    06:56 Challenges in Multi-Agent Systems
    10:16 Logging and Evaluation Practices
    12:32 Avoiding Overfitting in Predictions
    15:01 Cost and Efficiency of Predictions
    17:23 Long-Term Vision for Tenki AI
    19:09 Common Playbook for Building AI Agents
    20:58 Advice for Founders in AI Development
    30:40 Opportunities in AI and Final Thoughts

    https://www.docsie.io
    Join us on Discord: https://discord.gg/pAUGNTzv

    32 min
  4. Agentic Governance - EP 51 - with Dr. Craig Kaplan

    20 JAN

    In this episode of So What About AI Agents, Philippe Trounev speaks with Dr. Craig Kaplan, who discusses the need for a new approach to AI safety and governance, emphasizing the importance of prevention in design and the concept of AI agents and collective intelligence systems. He highlights the role of ethics and morals in an agentic society, the enforcement of ethics and morals in AI agents, and the purpose and values of AI agents. Dr. Kaplan also explores the blueprint for collective intelligence systems, problem-solving and coordination in multi-agent systems, transparency and accountability, decentralization of power, observation and reporting, and the role of values in AI systems. He concludes by discussing the relevance of Herbert Simon's ideas in AI research.

    Takeaways
    - Democracy in AI governance can enhance safety.
    - AI agents can work together like a community.
    - Ethics in AI must be enforced through safeguards.
    - Collective intelligence can outperform individual expertise.
    - Designing AI systems requires careful consideration.
    - Transparency is crucial for AI agent interactions.
    - Values from diverse individuals should shape AI behavior.
    - The historical context of AI informs current practices.
    - Short-term fixes are not sufficient for AI safety.
    - Our online behavior influences future AI training.

    Titles
    - Building Safe AI: A Democratic Approach
    - The Future of AI Governance

    Sound Bites
    - "Two heads are better than one."
    - "We need to think hard about design."
    - "We should behave well online."

    Chapters
    00:00 Introduction to AI and Superintelligence
    01:20 Governance and Safety in AI
    05:30 The Role of AI Agents in Society
    07:29 Evolving Towards Agentic Democracy
    09:35 Ethics and Morals in Agentic Society
    12:16 Influence vs. Enforcement in AI Behavior
    15:52 Blueprint for Collective Intelligence Systems
    19:39 Human Traits in AI Collective Intelligence
    22:49 Transparency and Accountability Among Agents
    25:25 Decentralization and Power Distribution
    29:35 Learning from Human Governance
    33:20 Herbert Simon's Insights on AI and Morality
    36:42 Key Takeaways for AI Governance

    40 min
  5. Agentic Payments - EP 50 with Mitchell Jones from Lava Payments

    8 JAN

    In this conversation, Philippe Trounev and Mitchell Jones delve into the complexities of agentic payments and the payment infrastructure needed for the evolving AI economy. They discuss the challenges AI startups face in managing payments, the importance of measurement and optimization in payment systems, and the future of agent-to-agent payments. The conversation highlights the need for budgeting controls and trust in agent networks, emphasizing the role of gateways in facilitating these processes.

    Takeaways
    - Agentic payments require a clear understanding of costs and value delivery.
    - Current payment infrastructures are inadequate for the needs of AI startups.
    - AI startups must adapt their pricing strategies beyond traditional models.
    - Using a payment gateway simplifies the integration of multiple AI models.
    - Measurement is crucial for managing costs in AI operations.
    - Budgeting controls are essential for preventing runaway costs in agentic systems.
    - Trust and accountability are vital in agent-to-agent transactions.
    - The future of payments will involve more automation and less human intervention.
    - Experimentation with pricing models is now more feasible for startups.
    - Building a robust payment infrastructure is critical for the success of AI applications.

    Keywords
    agentic payments, payment infrastructure, AI startups, payment systems, budgeting, trust, agent-to-agent payments, Lava Payments, FinTech, AI economy

    Chapters
    00:00 Understanding Agentic Payments
    02:28 The Role of Payment Infrastructure in AI
    05:21 Optimizing Payment Systems for AI Startups
    08:07 The Future of Agent-to-Agent Payments
    11:03 Budgeting and Control in the Agentic Economy
    13:50 Building Trust in Agent Transactions
    16:45 The Evolution of AI Agents and Payments
    19:25 Challenges in Agent Communication and Budgeting
    22:29 The Importance of Measurement in Payment Systems
    25:18 Future Use Cases for Agent Payments
    28:08 Final Thoughts on the Agentic Economy

    35 min
5 out of 5
4 ratings
