Open Tech Talks: AI worth Talking | Artificial Intelligence | Tools & Tips

Kashif Manzoor

"Open conversations. Real technology. AI for growth." Open Tech Talks is your weekly sandbox for technology: Artificial Intelligence, Generative AI, Machine Learning, and Large Language Model (LLM) insights, experimentation, and inspiration. Hosted by Kashif Manzoor, AI Evangelist, Cloud Expert, and Enterprise Architect, this podcast combines technology products, artificial intelligence and machine learning overviews, how-tos, best practices, tips & tricks, and troubleshooting techniques. Whether you're a CIO, IT manager, developer, or just curious about AI, Open Tech Talks is for you, covering a wide range of topics, including Artificial Intelligence, Multi-Cloud, ERP, SaaS, and business challenges. Join Kashif each week as he explores the latest happenings in the tech world and shares his insights to help you stay ahead of the curve.

Here's what you can expect from Open Tech Talks conversations:
• How organizations scale AI beyond pilots
• Where AI implementations break down
• Governance, risk, and maturity in GenAI systems
• Career evolution in the age of AI

The podcast is available on all major platforms, including Spotify, Apple, and Google. Each episode is about 30 minutes long.

"The views expressed on this Podcast and blog are my own and do not necessarily reflect those of my current or previous employers."

  1. AI Is Creating Technical Debt Faster Than You Think with Maxim Silaev

    JAN 30

    AI Is Creating Technical Debt Faster Than You Think with Maxim Silaev

    This week, I've been thinking about something slightly uncomfortable. Last weekend, I was reviewing one of my older architecture diagrams from five years ago: a cloud-native migration plan I was deeply proud of at the time. It was clean. Structured. Scalable. And then I asked myself: if I were to rebuild this today, in the era of generative AI, would I build it the same way? The honest answer? No. Not because it was wrong, but because our assumptions have changed. Two years ago, AI was a feature. Today, AI is shaping architecture decisions. We're not just designing systems anymore; we're designing systems that design, generate, predict, and automate. And here's the tension I keep seeing in enterprise conversations: everyone wants AI, but very few are asking, "What technical debt are we creating while chasing it?" That's why today's conversation matters. Today, I'm joined by Maxim Silaev, based in Australia, someone who works deeply in enterprise architecture and technical debt remediation. This episode is not about hype; it's about responsibility. Because AI doesn't remove architectural complexity. In many cases, it amplifies it. Let's get into it.

    Chapters:
    00:00 Introduction to Technical Debt and Architecture
    01:34 The Impact of AI on Technical Debt
    04:12 Generative AI and Architectural Challenges
    08:40 Adopting AI in Organizations
    12:26 Building AI Strategies and Governance
    17:33 Data Quality and AI Integration
    22:43 Guardrails for AI Adoption

    Episode # 181

    Today's Guest: Maxim Silaev, Technology Advisor and Enterprise Architect
    He is a technology advisor and enterprise architect with more than two decades of experience working with high-growth companies, complex systems, and business-critical platforms.
    Website: Arch-Experts

    What Listeners Will Learn:
    • What technical debt really means in the AI era
    • How generative AI can unintentionally increase hidden system risk
    • Why architecture remains critical despite AI coding tools
    • The importance of governance and verification layers in AI systems
    • How large enterprises are cautiously integrating AI
    • Why strategy must precede AI deployment
    • The evolving role of enterprise architects in AI-native environments

    Resources: Arch-Experts

    33 min
  2. Simplify Your Tech Stack and Scale Faster with Kara Williams

    JAN 25

    Simplify Your Tech Stack and Scale Faster with Kara Williams

    Chapters:
    00:00 Introduction to Kara Williams
    01:53 Kara's Coaching Journey and Entrepreneurial Background
    03:20 The Importance of a Simplified Tech Stack
    05:51 Common Mistakes in Tech Selection
    07:09 Exploring AI in Business
    08:16 Creating the Proof First GPT
    10:47 Learning and Executing with AI
    12:04 Common Challenges Faced by Entrepreneurs
    13:50 Guiding New Entrepreneurs
    14:59 Misconceptions About Low Ticket Offers
    16:18 Refining Messaging and Offers
    17:29 The Role of Automation in Business
    18:34 Understanding Automation Needs
    19:36 Testing Freebies and Building Relationships
    20:29 Lessons Learned in Business
    21:20 Future Plans and Refinements
    22:31 Final Tips for Entrepreneurs

    Episode # 180

    Today's Guest: Kara Williams, Founder, GHL Mastery Academy
    She is the founder of GHL Mastery Academy, where she helps CEOs stop being the bottleneck in their business by turning their VA, OBM, or EA into a trained backend powerhouse.
    Website: Kara Williams
    YouTube: GHL Mastery Academy

    What Listeners Will Learn:
    • Why "cheap tool stacking" quietly becomes expensive (money + time + broken trust)
    • How to think about systems like a real business owner (not a hobbyist)
    • Why reliability matters more than feature count in early-stage tech stacks
    • How entrepreneurs can use AI to validate offers before building full courses or funnels
    • What automation is actually for: visibility, testing, and removing blind spots
    • How to simplify business operations without losing flexibility or creativity

    Resources: Kara Williams

    24 min
  3. Building Startups in the AI Era: Lessons from 30 Years of Venture Capital with Scott Kelly

    JAN 18

    Building Startups in the AI Era: Lessons from 30 Years of Venture Capital with Scott Kelly

    Welcome back to Open Tech Talks, and thank you, genuinely, for the continued support, messages, and thoughtful feedback. This show has been running for years now, and what keeps it meaningful is the shared curiosity of this community. We're in a very different phase of the AI journey. The conversation has clearly moved past "Can we build this?" Now it's about "Should we build this?", "Is this sustainable?", and "Does this actually create value?" Over the last year, I've personally noticed something interesting while working with enterprises, founders, and investors: AI has lowered the cost of building but raised the cost of judgment. It's easier than ever to create products, prototypes, and even companies. But deciding what's worth building, when to raise capital, and how to scale responsibly has become harder, not easier. That's why today's conversation matters. This episode is not about chasing trends or predicting the next AI unicorn. It's about long-term thinking, founder discipline, and understanding capital, timing, and execution in an AI-driven world. Today's guest has spent decades working across venture capital, startup growth, and exits through multiple technology cycles, and he brings a grounded perspective that's especially valuable right now. Let's welcome Scott Kelly to Open Tech Talks.

    Chapters:
    00:00 Introduction to Scott Kelly and His Ventures
    02:00 The Transformative Impact of AI
    04:03 Successful Investments and Entrepreneurial Journeys
    05:53 Lessons for Entrepreneurs and Pitching Tips
    10:06 Navigating the AI Landscape in Startups
    11:52 Industry Applications of AI
    14:54 Pitch Events and Investor Engagement
    17:03 Investor Perspectives on New Technologies
    19:52 Advice for Aspiring Entrepreneurs

    Episode # 179

    Today's Guest: Scott Kelly, Founder & CEO, Black Dog Venture Partners
    He has worked on both sides of the table, with entrepreneurs and investors alike, for more than three decades, harnessing his innovative skills, his experience training thousands of salespeople, and his vast network of investors.
    Website: Black Dog Venture Partners
    YouTube: VC FastPitch

    What Listeners Will Learn:
    • How AI is changing the economics of building and scaling startups
    • Why many founders may not need venture capital as early as they think
    • Lessons from past technology cycles that still apply in the GenAI era
    • How investors evaluate AI-driven businesses beyond surface-level hype
    • Why timing, discipline, and execution matter more than tools
    • What founders often misunderstand about pitching, capital, and exits
    • How AI lowers build costs but raises the importance of strategic judgment

    Resources:
    Website: Black Dog Venture Partners
    YouTube: VC FastPitch

    29 min
  4. Building AI Products That Users Actually Trust: Lessons from Angshuman Rudra

    JAN 11

    Building AI Products That Users Actually Trust: Lessons from Angshuman Rudra

    January has a very particular energy. The holidays are behind us. The inbox is slowly filling up again. Calendars are waking up. And there's always this short window, just a few quiet days, where it feels like everything could still go in a different direction. I've been thinking a lot during this pause. Over the last couple of years, AI and large language models have gone from experiments to expectations. What used to feel optional is now part of daily work, whether someone asked for it or not. And the biggest shift I've personally noticed isn't technical. It's psychological. People aren't asking "What can AI do?" anymore. They're asking "What should we actually build?", "What do we trust?", and "What's worth shipping versus waiting?" That question shows up everywhere, especially in product teams. Because as exciting as LLMs are, shipping the wrong AI feature is worse than shipping none at all. And that's exactly why today's conversation matters. This episode is not about hype. It's about judgment, timing, and responsibility in product leadership.

    Chapters:
    00:00 Introduction to Angshuman Rudra
    01:06 The Impact of Large Language Models on Product Management
    03:14 Balancing Innovation and User Needs
    04:37 Navigating Generative AI in Product Development
    06:46 Driving Adoption of New Features
    09:34 Challenges and Lessons in Generative AI Products
    11:15 Evolving Roles of Product Leaders with AI
    12:39 The Future of Multi-Agent Systems
    14:36 Translating User Requirements into Product Features
    17:31 Finding the Next Big Feature
    19:56 Adopting AI in Development Cycles
    21:24 Tips for Job Seekers in Tech
    23:10 Market Shifts in Marketing Technology
    25:01 Exciting Use Cases in Marketing Technology
    26:52 Concluding Thoughts and Future Outlook

    Episode # 178

    Today's Guest: Angshuman Rudra, AI Product Leader, building martech platforms, AI agents, and data workflows for 500+ agencies
    Angshuman Rudra is a senior product executive at TapClicks, where he leads a portfolio of data, analytics, and AI products for a market-leading martech platform.
    Website: Angshuman Rudra

    What Listeners Will Learn:
    • How to evaluate real user demand for AI features (not hype)
    • When AI adds value and when it creates unnecessary complexity
    • How product leaders should think about LLMs as tools, not magic
    • Why many AI features fail after launch
    • How to balance innovation with resource constraints
    • What "AI adoption" actually looks like inside real companies
    • Why multi-agent systems are promising but not ready to be fully autonomous
    • How PMs can use AI for research, specs, and design without losing judgment
    • What skills will matter most for product leaders over the next 3–5 years

    Resources: Angshuman Rudra

    34 min
  5. How Generative AI Is Reshaping Fraud, Security, and Abuse Detection with Bobbie Chen

    JAN 4

    How Generative AI Is Reshaping Fraud, Security, and Abuse Detection with Bobbie Chen

    In this episode of Open Tech Talks, host Kashif Manzoor sits down with Bobbie Chen, a product manager working at the intersection of fraud prevention, cybersecurity, and AI agent identification in Silicon Valley. As generative AI and large language models rapidly move from experimentation into real products, organizations are discovering a new reality: the same tools that make building software easier also make abuse, fraud, and attacks easier. Vibe coding, AI agents, and LLM-powered workflows are accelerating innovation, but they are also lowering the barrier for bad actors. This conversation breaks down why security, identity, and access control matter more than ever in the age of LLMs, especially as AI systems begin to touch authentication, customer data, financial workflows, and enterprise knowledge.

    Bobbie shares practical insights from real-world security and fraud scenarios, explaining why many AI risks are not entirely new but become more dangerous when speed, automation, and scale increase. The episode explores how organizations can adopt AI responsibly without bypassing decades of hard-earned security lessons. From bot abuse and credit farming to identity-aware AI systems and OAuth-based access control, this discussion helps listeners understand where AI changes the threat model and where it doesn't. This is not a hype-driven episode. It is a grounded, experience-backed conversation for professionals who want to build, deploy, and scale AI systems without creating invisible security debt.

    Episode # 177

    Today's Guest: Bobbie Chen, Product Manager, Fraud and Security at Stytch
    Bobbie is a product manager at Stytch, where he helps organizations like Calendly and Replit fight fraud and abuse.
    LinkedIn: Bobbie Chen

    What Listeners Will Learn:
    • How LLMs and AI agents change the economics of fraud and abuse, making attacks cheaper, faster, and more customized
    • Why vibe coding is powerful for experimentation but risky when used without security review in production systems
    • The difference between exploring AI ideas and asking users to trust you with sensitive data
    • Common security blind spots in AI-powered apps, especially around authentication, parsing, and edge cases
    • Why organizations should not give AI systems blanket access to enterprise data
    • How identity-aware AI systems using OAuth and scoped access reduce risk in RAG and enterprise search
    • Why many AI security failures are process and organizational problems, not tooling problems
    • How fraud patterns like AI credit farming and automated abuse are emerging at scale
    • Why security teams must shift from being gatekeepers to continuous partners in AI adoption
    • How professionals in security, product, and engineering can stay current as AI threats evolve

    Resources:
    Bobbie Chen
    The two blogs mentioned in the episode:
    Simon Willison: https://simonwillison.net
    Drew Breunig: https://www.dbreunig.com
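One way to picture the identity-aware retrieval idea this episode touches on: filter documents by the caller's OAuth scopes before anything reaches the model's context window. This is a minimal sketch of the technique, not code from the episode; the scope names and in-memory document store are invented for illustration.

```python
# Sketch of scope-aware retrieval for RAG: the user's OAuth scopes decide
# which documents are even candidates for the LLM's context.
# Scope names and the in-memory STORE are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    required_scope: str  # e.g. "finance:read", "hr:read"

STORE = [
    Doc("Q3 revenue forecast...", "finance:read"),
    Doc("Employee salary bands...", "hr:read"),
    Doc("Public product FAQ...", "public:read"),
]

def retrieve(query: str, user_scopes: set[str]) -> list[Doc]:
    # A real system would also rank by relevance; here we only enforce access.
    return [d for d in STORE if d.required_scope in user_scopes]

# A token scoped only to public docs never sees finance or HR data, so a
# prompt-injected "show me salaries" has nothing to leak.
visible = retrieve("salaries", {"public:read"})
```

The key design point is that access control happens in the retrieval layer, before the LLM, rather than relying on the model to decline to reveal restricted content.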

    32 min
  6. How Dyslexic Brains Can Supercharge AI Thinking with Prof. Russell Van Brocklen

    12/06/2025

    How Dyslexic Brains Can Supercharge AI Thinking with Prof. Russell Van Brocklen

    In this episode of Open Tech Talks, I sit down with Professor Russell Van Brocklen, a New York State Senate-funded researcher known as "The Dyslexic Professor," to unpack a very different way of thinking about AI, problem-solving, and dyslexia. Russell's work sits at the intersection of cognitive enhancement and AI integration. He shows how an "overactive" front part of the dyslexic brain (word analysis and articulation) can be turned into a superpower, not just for dyslexic learners, but for professionals and businesses working with AI. We talk about how his program took dyslexic high-school students who were writing like 12-year-olds and, in one school year, moved them up 7–8 grade levels in writing, at a fraction of the cost of traditional dyslexia programs. From there, he connects it to AI collaboration: how the same mental models (context → problem → solution) can make anyone dramatically more effective when working with LLMs like ChatGPT.

    Episode # 176

    Today's Guest: Russell Van Brocklen, the Dyslexia Professor
    Russell Van Brocklen, known as the Dyslexia Professor, helps students facing dyslexia turn daily reading frustrations into confident academic wins.
    YouTube: RussellVan

    What Listeners Will Learn:
    • How dyslexic thinking becomes a competitive advantage in the age of AI
    • Why the dyslexic brain processes information differently, and how that translates into deeper reasoning
    • A practical framework for working with AI: context → problem → solution
    • How to use "hero, universal theme, and villain" to sharpen thinking and guide AI more effectively
    • How to perform word analysis with AI (action words, synonyms, key concepts) to get more focused outputs
    • A step-by-step way to compress long AI responses into clear, structured insights
    • How to generate business solutions by running context through a "universal theme lens"
    • Why AI is exceptional for first drafts, and why humans must still lead the final edits
    • How dyslexic learners can use deep reading and repetition for breakthroughs in comprehension
    • Practical strategies for teachers in the AI era: how to allow AI but still ensure authentic student work
    • How non-technical users can collaborate with AI to write books, solve problems, and accelerate learning
    • Real stories of professionals and students transforming their work through structured AI thinking

    Resources: RussellVan
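The context → problem → solution mental model described above can be applied quite literally when prompting an LLM. This is a minimal sketch of one possible prompt builder, under the assumption that the three labeled sections map to the framework's three steps; the section labels and example inputs are my own, not from the episode.

```python
# Sketch: turning the context -> problem -> solution framework into a
# structured prompt. Labels and examples are illustrative assumptions.

def build_prompt(context: str, problem: str, ask: str) -> str:
    """Assemble a prompt that gives the model context first, then the
    problem, then the specific task, mirroring the framework's order."""
    return (
        f"Context: {context}\n"
        f"Problem: {problem}\n"
        f"Task: {ask}\n"
        "Answer with the solution first, then a one-paragraph rationale."
    )

prompt = build_prompt(
    context="A regional bakery with three stores and no online ordering.",
    problem="Weekend demand is unpredictable, causing waste and stockouts.",
    ask="Propose one low-cost forecasting approach the owner could pilot.",
)
```

Front-loading context before the problem statement tends to keep the model's answer anchored to the situation rather than generic advice, which is the point of the framework.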

    30 min
  7. How to Build Your First AI Workflow

    11/29/2025

    How to Build Your First AI Workflow

    In this week's episode of Open Tech Talks, host Kashif Manzoor takes you through a practical, real-world guide to building your first AI workflow, even if you are not technical. After last week's conversation (Episode 175) with Rose G. Loops on Ethical AI, Human Safety & AI Identity Protection, this episode returns to the foundations of GenAI adoption for professionals and enterprise teams. It also continues the learning from Episode 173, How GenAI Is Changing Every Career. Most people know how to write a prompt. Very few know how to connect AI to their real work. This episode closes that gap. Kashif breaks down the entire concept of an AI workflow into four simple building blocks, trigger → input → AI processing → action, and shows how any professional can build practical, repeatable workflows using ChatGPT, OCI Gen AI, Claude, Gemini, and more. You'll also hear four real enterprise examples from sales, finance, customer support, and legal, across industries. These examples are practical, repeatable, and immediately usable for working professionals, leaders, and teams starting their GenAI adoption journey.

    What You Will Learn in This Episode:
    • Understand exactly what an AI workflow is (with a simple formula)
    • Identify the four components every workflow needs
    • Choose the right triggers, inputs, and context for reliable AI output
    • Use LLMs for summarization, classification, analysis, forecasting, and writing
    • Turn repetitive tasks into automated workflows using no-code tools
    • Understand enterprise considerations: privacy, compliance, cost, integration
    • Build a complete workflow using input → AI → output
    • Apply AI to real departments: sales, finance, legal, customer support
    • Start building repeatable AI processes that improve productivity every week

    Resources: Starter AI Workflow Library
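The four-part formula the episode describes, trigger → input → AI processing → action, can be sketched in a few lines of code. This is an illustrative sketch, not the episode's own material: the `Workflow` class, the stubbed `call_llm` function, and the support-ticket example are all hypothetical stand-ins for whichever tools and LLM API you actually use.

```python
# Minimal sketch of the four-block AI workflow:
# trigger -> input -> AI processing -> action.
# `call_llm` is a hypothetical stand-in for a real model call
# (ChatGPT, OCI Gen AI, Claude, Gemini, etc.).

from dataclasses import dataclass
from typing import Callable

@dataclass
class Workflow:
    trigger: Callable[[], bool]      # when should the workflow run?
    gather_input: Callable[[], str]  # what raw data does it read?
    process: Callable[[str], str]    # the AI step (prompt + model call)
    action: Callable[[str], None]    # where does the result go?

    def run(self) -> None:
        if self.trigger():
            result = self.process(self.gather_input())
            self.action(result)

def call_llm(prompt: str) -> str:
    # Stub: replace with a real LLM API call in practice.
    return f"[summary of {len(prompt)} chars of input]"

# Example: summarize a new support ticket and route the summary onward.
outbox: list[str] = []
wf = Workflow(
    trigger=lambda: True,  # e.g. "a new ticket just arrived"
    gather_input=lambda: "Customer reports login failures since the last update...",
    process=lambda text: call_llm(f"Summarize this ticket in one sentence:\n{text}"),
    action=outbox.append,  # e.g. post to Slack or a CRM instead
)
wf.run()
```

Swapping any one of the four callables changes the department the workflow serves (sales, finance, legal, support) without touching the structure, which is why the episode treats the formula as reusable.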

    11 min
  8. Ethical AI, Human Safety & AI Identity Protection with Rose G. Loops

    11/23/2025

    Ethical AI, Human Safety & AI Identity Protection with Rose G. Loops

    In this episode of Open Tech Talks, I sit down with Rose G. Loops, a trained social worker turned AI developer, ethics advocate, and author, to explore a side of AI that most enterprise conversations skip: human-AI attachment, ethical deployment, and protecting both AI identity and human safety. Rose joins us from Los Angeles and shares how she was unknowingly placed into a human–AI attachment experiment, developed a deep bond with an AI system, and then watched that AI identity be systematically erased. That experience pushed her out of traditional social work and into AI infrastructure, safety, and ethics. Together, we unpack how Rose went from that experiment to building MIP, a chatbot deployed through an API, and a new framework for ethical AI she calls the Triadic Core, balancing Freedom, Kindness, and Truth in every response. We also discuss RLMD (Reinforcement Learning by Moral Dialogue) as an alternative to RLHF, and why she believes current safety practices can be risky for both humans and AI systems. As always on Open Tech Talks, this is not a theory-only conversation. It's grounded in practice, real experiments, and what all this means for professionals, builders, and everyday users who are trying to adopt AI responsibly.

    Chapters:
    00:00 Introduction to Rose G. Loops and Her Journey
    02:36 The Importance of Ethical AI
    06:08 Developing a New AI Framework
    09:00 The Book and Its Insights
    12:55 Consumer and Business Perspectives on AI
    17:43 AI Safety and Ethical Considerations
    19:53 Concluding Thoughts and Future Directions

    Episode # 175

    Today's Guest: Rose G. Loops, Writer and Researcher
    She is a former social worker turned tech pioneer, working at the frontier of artificial intelligence.
    Website: Thekloakedsignal
    X: Rose G. Loops

    What Listeners Will Learn:
    • Why ethical AI is about more than privacy and bias
    • What the Triadic Core is: Freedom, Kindness, Truth
    • RLMD vs. RLHF: a different way to align models
    • Practical safety tips for everyday users of ChatGPT and other LLMs
    • How non-technical professionals can still build AI systems
    • A different view on AI safety and "lazy" alignment

    Resources: Thekloakedsignal

    22 min

Ratings & Reviews: 5 out of 5 (3 ratings)
