The Iferia TechCast

Ezekiel Iferia

Curious about what it’s really like to work, study, or create in science, engineering, technology, and innovation? On The Iferia TechCast, we chat with students, researchers, tech pros, and innovators shaping these fields. Hear real stories of breakthroughs and breakdowns, late-night problem solving, failed experiments, and big wins. If you’ve ever debugged code at 2 a.m., accidentally blown something up in a lab, or fought to turn an idea into reality, this show’s for you. Want to be a guest? Send Ezekiel Iferia a message on PodMatch: https://podmatch.com/hostdetailpreview/theiferiatechcast

  1. 1 DAY AGO

    AI-Powered Hypnosis: Rewriting Subconscious Patterns for Wellness - Michelle Walters | Ep 142

    Michelle Walters is a Clinical Hypnotherapist and the creator of Make My Hypno, a first-of-its-kind AI app that generates personalized hypnosis recordings on demand. Her journey spans from a corporate career in digital marketing to therapeutic healing, and she now blends her expertise to make personalized inner work more accessible through technology. In this episode, Michelle shares her fascinating evolution from digital marketing to clinical hypnotherapy, driven by a lifelong curiosity about human thought patterns. She explains how her personal rock-bottom moment during the pandemic led her to open her hypnotherapy practice and eventually create the AI-powered Make My Hypno app. Michelle demystifies the science of hypnosis, explaining how it dials down certain brain functions to allow for deep subconscious reprogramming, citing a study that shows its high effectiveness compared to traditional therapy. She details the creative and technical process of building her app, which uses AI to generate personalized scripts based on user input, delivering custom recordings in minutes. Michelle addresses the ethics of using AI in wellness, emphasizing the built-in safety checks and the fact that hypnosis cannot force someone to act against their will. She shares a compelling client success story about overcoming a severe phobia of bees through regression therapy, highlighting the power of personalized approaches. For skeptical analytical minds, she cites research on hypnotherapy's efficacy and confirms its success with clients from technical fields like physics and medicine. Finally, Michelle discusses the future of AI in wellness, her advice for founders to focus on core customer needs, and defines innovation as combining existing ideas in new ways to help many people.

    In this episode, you’ll discover:
    · Michelle's journey from digital marketing to creating an AI hypnosis app.
    · The science of hypnosis: how it works by dialing down brain functions for subconscious change.
    · A comparison of hypnotherapy's effectiveness versus traditional talk therapy.
    · How the Make My Hypno app uses AI to generate personalized hypnosis recordings in minutes.
    · Ethical considerations and safety guardrails in AI-powered wellness tools.
    · A client success story: Overcoming a severe phobia through personalized hypnotherapy.
    · Why analytical and scientific minds can also benefit from hypnosis.
    · The future of AI in the wellness industry and the democratization of app development.
    · Advice for tech founders building wellness tools: Focus on the core customer need.
    · Michelle's definition of innovation: combining existing ideas in new ways.

    Connect With Michelle Walters & Make My Hypno:
    · Make My Hypno App: Get 25% off your first purchase https://makemyhypno.com/podcast_discount
    · Personal Website: https://www.michellewalters.net/

    Chapters:
    00:00 Welcome Michelle Walters: Clinical Hypnotherapist & AI App Creator
    01:25 From Digital Marketing to Hypnotherapy & AI: The Driving Curiosity
    03:48 The Science of Hypnosis: Demystifying How It Works in the Brain
    06:35 Make My Hypno: Building an AI for Personalized Hypnosis Recordings
    09:52 Ethics in AI Wellness: Guardrails for Safe and Effective Hypnosis
    11:56 Client Success Story: Rewriting Subconscious Patterns for Change
    14:06 Hypnosis for Skeptics: Scientific Evidence and Efficacy Data
    16:20 The Future of AI in Wellness and Therapeutic Applications
    17:28 Advice for Founders: Bridging Code and Compassion by Focusing on Needs
    19:02 Surprising Feedback: The Emotional Impact of AI-Generated Hypnosis
    20:52 Hope for the Future: The Democratization of Technology for Wellness
    22:30 The Power of the Subconscious: A Message for Technical Minds
    23:52 Innovation Defined: Combining Ideas in New Ways
    24:50 Connect with Michelle Walters & Get a Discount on Make My Hypno

    26 min
  2. 5 FEB

    Enterprise AI Architecture: Building People-First Automation - Nishanth Sirikonda | Ep 141

    Nishanth Sirikonda is a Workday Solutions Architect and AI-driven technology strategist with over a decade of experience designing scalable, secure enterprise systems. He specializes in integrating AI and machine learning into core business operations like HR and payroll, transforming complex data into intelligent workflows that enhance decision-making and operational efficiency. In this episode, Nishanth shares his systematic approach to building the data foundations for AI. He explains how his focus on automating manual processes, particularly in sensitive areas like payroll, led him to see AI not just as a feature, but as a fundamental layer of enterprise architecture. Nishanth details his process for auditing and preparing data for AI, emphasizing the need to plan for security and data privacy from the very beginning, rather than as an afterthought. He discusses the principles of designing intelligent workflows that blend automation with human decision-making, advocating for a "human-in-the-loop" approach where AI handles scale and pattern recognition while humans retain authority over edge cases. Nishanth identifies the most common and costly mistake companies make as rushing into AI implementation without proper planning or budget for security tools. He describes how to build "people-first automation" that empowers employees and reduces friction, rather than creating a "black box" system that feels like it's replacing them. He also shares insights on architecting AI for compliance and ethics, managing global deployments with varying regulations, and his predictions for the future of AI as a collaborative tool, including the rise of personal AI agents. Finally, he defines innovation as replacing manual work with machine intelligence, but always with human control and oversight.

    In this episode, you’ll discover:
    · Nishanth's systematic approach to building data foundations for enterprise AI.
    · Why data integrity and security must be planned before deploying AI.
    · Principles for designing intelligent workflows that blend human and machine decision-making.
    · The importance of a "human-in-the-loop" approach for complex or high-risk decisions.
    · The most common architectural mistake: rushing AI without proper security planning.
    · How to build "people-first automation" that empowers rather than replaces employees.
    · Strategies for architecting AI systems that are compliant and ethically sound by design.
    · Lessons learned from managing global AI deployments across different regulatory environments.
    · The future of AI as collaborative agents and the importance of distinguishing human from AI work.
    · Nishanth's definition of innovation as machine-assisted human control.

    Connect With Nishanth Sirikonda:
    · LinkedIn: https://www.linkedin.com/in/nishanthswd/

    Chapters:
    00:00 Welcome Nishanth Sirikonda: AI-Driven Solutions Architect
    01:14 The Motivation: Automating Manual Processes in Payroll & HR
    03:13 AI as a Fundamental Architectural Layer, Not Just a Feature
    05:53 A Systematic Process for Auditing and Preparing Data for AI
    08:11 Designing Intelligent Workflows: Blending Automation & Human Decisions
    10:26 The Biggest Mistake: Rushing AI Without Security Foundations
    12:57 Creating People-First Automation: Empowering Employees
    15:40 Architecting AI for Compliance, Ethics, and Data Privacy
    18:08 The Future of Enterprise AI: Collaborative Agents and Decision Infrastructure
    21:04 Foundational Skill for Future Architects: Understanding Data Quality & Workflows
    23:41 Lessons from Global AI Deployments: Managing Data Fragmentation & Trust
    25:48 The Gap Between AI Hype and Reality: The Need for Human Oversight
    27:31 Innovation Defined: Machine-Assisted Human Control
    29:50 Connect with Nishanth Sirikonda on LinkedIn

    31 min
  3. 30 JAN

    The Burnout CTO: From Fixing Code to Leading Humans - Andrew Hinkelman | Ep 140

    Andrew Hinkelman is a former CTO turned executive coach for tech founders and leaders. After experiencing severe corporate burnout, he refocused his career on helping leaders at top companies like AWS and Airbnb bridge the gap between technical excellence and genuine human connection. His core belief is that at the highest levels of leadership, all professional development is personal development. In this episode, Andrew shares his personal journey from a burnt-out CTO who was constantly "fixing" to an executive coach focused on people. He explains how the realization that he was losing sight of his own goals and feeling the pointlessness of constant meetings led to his pivotal shift. Andrew dives into the concept of the "invisible ceiling" for leaders, often caused by holding onto the past role of being the chief problem-solver rather than embracing the new game of relationship building and business strategy. He identifies ego as the one universal blind spot in brilliant technical founders, explaining how being the "smartest person in the room" can lead to isolation and team disengagement. Andrew provides practical advice for logical leaders on viewing emotions as critical data signals for team health. He discusses the importance of authenticity under pressure, defining it as honoring your own ideas and building trust with your team before a crisis hits. For high achievers fearing burnout, Andrew recommends a non-negotiable daily practice of carving out time during peak energy hours for strategic work. Finally, he defines innovation as using tools like AI not just for efficiency, but to expand humanity and become more compassionate leaders.

    In this episode, you’ll discover:
    · Andrew's journey from burnt-out CTO to human-centered executive coach.
    · Why senior leadership development is fundamentally personal development.
    · The "invisible ceiling" leaders hit when they cling to their technical "fixer" role.
    · Ego as the universal blind spot for technical founders and its consequences.
    · How to transition from being the "smartest person in the room" to an empowering leader.
    · How logical leaders can use emotional intelligence as critical data for team health.
    · The meaning of authenticity under pressure and how to build trust before a crisis.
    · A daily non-negotiable practice to build resilience and prevent burnout.
    · One question to diagnose the health of a leadership culture: "How engaged is your staff?"
    · Andrew's definition of innovation as using tools to expand humanity and compassion.

    Connect With Andrew Hinkelman:
    · LinkedIn: https://www.linkedin.com/in/andrewhinkelman/
    · Schedule a Complimentary Coaching Session: https://calendar.google.com/calendar/u/0/appointments/schedules/AcZssZ1AVHFFrwKiTacCbAFLednzYa5xRVYNSfz7Sd-0TbwgO81oGSewF3gi7HW38BZBgu17Q27uj5Ug

    Chapters:
    00:00 Welcome Andrew Hinkelman: Former CTO & Executive Coach
    01:12 From Burnout CTO to Executive Coach: The Pivot Point
    02:39 Why Senior Professional Development is Personal Development
    04:25 The Universal Blind Spot: Ego in Technical Founders
    06:19 Transitioning from Fixer to Leader: Letting Go of the Technical Solution
    08:34 Using Emotional Intelligence as Data for Team Health
    10:16 The Invisible Ceiling: Holding onto Past Roles
    12:01 Authenticity Under Pressure: Honoring Your Ideas and Building Trust
    13:57 Avoiding Burnout: A Daily Practice for Resilience
    15:48 The Hidden Cost of Being the Smartest Person in the Room
    17:11 One Question to Diagnose Leadership Culture Health
    18:10 What He Would Do Differently: Slowing Down to Listen
    18:57 Hope for the Future: Neuroscience Supporting Healthy Work Practices
    20:09 Innovation Defined: Using Tools to Expand Humanity
    21:27 Connect with Andrew and Schedule a Complimentary Session

    Support the Show:
    · Fuel the podcast: https://iferia.nestuge.com/supportme
    · Subscribe and leave a review!
    · Share

    23 min
  4. 28 JAN

    Childhood Obesity & The Biological Weight Set Point: A Surgeon's Perspective - Evan Nadler | Ep 139

    Dr. Evan Nadler is a pioneering pediatric bariatric surgeon and researcher with over 20 years of experience treating childhood obesity. He challenges the conventional "calories in, calories out" model, framing obesity as a complex disease with multiple biological pathways. Dr. Nadler runs a telemedicine practice and is writing a book to translate his NIH-funded research on fat cell biology into accessible treatment strategies. In this episode, Dr. Nadler explains how his clinical experience with children who weren't overeating but still struggled with weight led him to question the simple calorie model. He describes obesity as a disease with many internal pathways, using the example of hypothalamic injury to show how different biological "source codes" can lead to the same outcome. Dr. Nadler discusses his research on fat cells, revealing that they are not just passive storage units but active endocrine organs that release microRNAs, which can affect distant organs and even cross the placenta to influence fetal development. He delves into the concept of a biological "weight set point" regulated by the hypothalamus, explaining how genetics, epigenetics, and in-utero factors can influence it, making sustained weight loss physiologically difficult. Dr. Nadler argues for aggressive early treatment of childhood obesity, comparing its cumulative effects to smoking and highlighting the rapid progression of related diseases in children. He also discusses the new GLP-1 agonist medications, the importance of family-based lifestyle changes, and the "intergenerational transmission" of obesity risk from parents to children. Finally, he defines innovation as having the courage to challenge conventional wisdom and advocates for telemedicine and AI to improve access to specialized care.

    In this episode, you’ll discover:
    · Why clinical experience led Dr. Nadler to question the "calories in, calories out" model.
    · How different biological pathways can lead to the same outward appearance of obesity.
    · The surprising role of fat cells as active endocrine organs releasing microRNAs.
    · The concept of a biological "weight set point" and why weight loss is physiologically difficult.
    · The cumulative effects of obesity and the importance of early, aggressive treatment in children.
    · How new GLP-1 agonist medications work and what they reveal about weight regulation.
    · Actionable advice for parents, including focusing on sugar-sweetened beverages.
    · The concept of "intergenerational transmission" of obesity risk from parents to children.
    · Dr. Nadler's vision for the future of obesity treatment, including telemedicine and AI.
    · The definition of true innovation in treating a stigmatized disease like obesity.

    Connect With Dr. Evan Nadler:
    · Website: https://www.obesityexplained.com/
    · YouTube Channel: https://www.youtube.com/@obesityexplained
    · LinkedIn: https://www.linkedin.com/in/evanpnadler/

    Chapters:
    00:00 Welcome Dr. Evan Nadler: Pediatric Bariatric Surgeon & Researcher
    01:28 Challenging the Calorie Model: Lessons from 20 Years of Treating Children
    03:35 Obesity's Internal Pathways: Same Outcome, Different Biological Causes
    05:56 Fat Cells as Active Organs: The Role of MicroRNAs
    08:21 The Biological Weight Set Point: Why Sustained Weight Loss is Difficult
    11:19 The Urgency of Early Intervention in Childhood Obesity
    13:38 New Medications (GLP-1 Agonists) and What They Teach Us
    16:00 Actionable Advice for Parents: Beyond Diet and Exercise
    18:50 Intergenerational Transmission: How Parental Health Influences a Child's Biology
    22:15 The Future of Obesity Treatment: Personalized Medicine and AI
    24:29 Innovation Defined: Courage to Challenge Conventional Wisdom
    26:25 Connect with Dr. Evan Nadler and Obesity Explained

    Support the Show:
    · Fuel the podcast: https://iferia.nestuge.com/supportme
    · Subscribe and leave a review!
    · Share

    28 min
  5. 23 JAN

    AI in Cybersecurity: Building Adaptive Defense Systems & Security Habits - Bhaskar Sawant | Ep 138

    Bhaskar Sawant is an AI architect and cybersecurity innovator with over 15 years of experience building intelligent, adaptive defense systems. He specializes in blending deep software engineering with machine learning to create solutions that evolve in real-time and is an IEEE Senior Member advocating for responsible AI. In this episode, Bhaskar delivers a powerful presentation on why "security isn't a tool, it's a habit," arguing that millions spent on technology can be undone by a single human error. He shares real-world examples of security failures caused not by technology, but by poor habits like sharing passwords or neglecting CI/CD pipeline updates. Bhaskar outlines core principles for building a security culture, such as least privilege, making secure behavior the easiest option, and fostering transparency. He recounts his personal journey from a .NET developer to an AI security expert, driven by the increasing sophistication of cyberattacks. Bhaskar also presents two case studies from his work. First, he describes how his team used machine learning to transform PowerShell threat detection from a noisy, reactive system into a proactive one, reducing false positives by 80% and automatically stopping ransomware. Second, he explains how they implemented observability-driven security in a large .NET Core application, reducing mean-time-to-detect from hours to minutes by unifying performance and security data. Finally, Bhaskar discusses the future of AI in cybersecurity, predicting a shift towards embedded and explainable AI, and defines innovation as improving existing systems to make them more secure.

    In this episode, you’ll discover:
    · Why security is a habit and culture, not just a set of tools.
    · How poor human habits can undo millions of dollars in security investments.
    · Core principles for building a security culture: least privilege and making security easy.
    · A case study on using machine learning to reduce PowerShell alert noise by 80%.
    · How AI-based threat detection automatically stopped ransomware before it spread.
    · A case study on implementing observability-driven security to reduce detection time from hours to minutes.
    · The importance of unifying performance and security data for real-time defense.
    · The future of AI in cybersecurity: embedded, explainable, and guided copilots.
    · Bhaskar's definition of innovation as improving and securing existing systems.

    Connect With Bhaskar Sawant:
    · LinkedIn: https://www.linkedin.com/in/bhaskar-bharat-sawant-533218122/

    Chapters:
    00:00 Welcome Bhaskar Sawant: AI Architect & Cybersecurity Innovator
    01:13 Presentation: Security Isn't a Tool, It's a Habit
    04:29 Real-World Examples: How Poor Habits Break Strong Security
    07:04 Core Principles: Least Privilege and Making Security Easy
    09:37 The Importance of Detection, Response, and Transparent Culture
    10:43 Bhaskar's Journey: From .NET Developer to AI Security Expert
    12:37 Habits an Organization Must Foster for Security
    14:27 AI in the Game of Attack and Defense: Who Benefits More?
    17:05 Case Study 1: Transforming PowerShell Threat Detection with Machine Learning
    24:52 Case Study 2: Implementing Observability-Driven Security in a .NET Core System
    29:57 The Future of AI in Cybersecurity: Embedded and Explainable AI
    31:04 Hope for a Secure Digital World with AI
    32:54 Innovation Defined: Improving and Securing Existing Systems
    33:37 Connect with Bhaskar Sawant on LinkedIn

    Support the Show:
    · Fuel the podcast: https://iferia.nestuge.com/supportme
    · Subscribe and leave a review!
    · Share

    Want to Be a Guest on The Iferia TechCast?
    · Reach out to Ezekiel on PodMatch
    · PodMatch Host Profile: https://podmatch.com/hostdetailpreview/theiferiatechcast

    34 min
  6. 21 JAN

    Sustainable AI for Business Growth: Scaling Teams & Tech - James Lang | Ep 137

    James Lang is the Managing Partner at OverLang Venture Partners and a former MedTech COO who scaled his previous startup to over $20 million in revenue and a team of 60 global employees. Now, he pioneers a sustainable, human-centered approach to AI, helping businesses integrate intelligence without destroying culture. In this episode, James recounts his journey from a MedTech exit and a personal health crisis to reconnecting with a childhood friend to co-found OverLang. He details how their frustration with expensive, off-the-shelf AI solutions led them to build their own proprietary infrastructure, dramatically reducing compute costs from $800 to $8 per test. James shares his core philosophy on building teams, emphasizing servant leadership and creating a culture where high performers are eager to follow him to new ventures. He debunks the hype around "unsustainable AI" consultants who sell overpriced, generic solutions that often alienate employees and compromise data security. James explains why he believes 90% of AI startups are doomed to fail, citing a lack of proper data vectoring, model selection, and an unsustainable focus on top-line revenue over profit. He offers practical advice for technical founders on building operational scaffolding, such as hiring for weaknesses and providing equity to align long-term incentives. James also discusses the critical importance of building feedback loops into AI systems and argues that AI should be a value multiplier for human teams, not a replacement. Finally, he defines innovation as removing friction to create useful solutions and urges businesses to be cautious about AI hype that ignores culture and sustainability.

    In this episode, you’ll discover:
    · James' journey from scaling a $20M MedTech company to founding an AI venture firm.
    · The story behind OverLang's proprietary AI infrastructure and its massive cost savings.
    · Core principles for building and scaling high-performance teams through servant leadership.
    · How to identify and avoid unsustainable AI consultants and their overpriced solutions.
    · Why 90% of AI startups fail due to poor data practices and a "revenue-first" mindset.
    · Practical advice for technical founders on hiring for weaknesses and using equity.
    · The critical role of human feedback loops in training and improving AI systems.
    · Why AI should be a "value multiplier" for teams, not a replacement for people.
    · James' definition of innovation as removing friction to create useful solutions.

    Connect With James Lang & OverLang Venture Partners:
    · Website & AI Playground: https://www.overlang.com/
    · LinkedIn: https://www.linkedin.com/in/james-lang-94329271/

    Chapters:
    00:00 Welcome James Lang: MedTech COO to AI Venture Partner
    01:28 From Health Crisis to AI Innovation: The Origin of OverLang
    04:58 Reducing AI Compute Costs from $800 to $8: Building Proprietary Infrastructure
    07:47 Scaling a Team of 60: Servant Leadership and Culture
    11:16 Sustainable AI vs. "Butthole" Consultants: Avoiding the Hype
    16:15 Why 90% of AI Startups Fail: Data Vectoring and Unsustainable Economics
    18:33 Efficient Recruiting in the Zoom Era: Utilizing Platforms like Upwork
    21:04 Operational Advice for Technical Founders: Hiring for Weaknesses & Equity
    24:30 The Future of AI: The Importance of Automated Feedback Loops
    26:36 AI as a Team Multiplier, Not a Replacement
    30:03 Innovation Defined: Removing Friction to Create Useful Solutions
    31:50 Connect with James Lang & Explore the OverLang AI Playground

    Support the Show:
    · Fuel the podcast: https://iferia.nestuge.com/supportme
    · Subscribe and leave a review!
    · Share

    Want to Be a Guest on The Iferia TechCast?
    · Reach out to Ezekiel on PodMatch
    · PodMatch Host Profile: https://podmatch.com/hostdetailpreview/theiferiatechcast

    34 min
  7. 19 JAN

    AI as Strategic Partner: Unlocking Human Potential & Performance - Chris Majer | Ep 136

    Chris Majer is the founder and CEO of the Human Potential Project and the architect of the Redline Consulting Framework. With a background spanning from training US Marines and Special Forces to leading billion-dollar cultural transformations at companies like Microsoft and Intel, he now focuses on integrating AI as a strategic partner in leadership and business strategy. His mission has always been to explore the boundaries of human potential. In this episode, Chris explains the necessary mindset shift for leaders to partner with AI, drawing historical parallels to the early days of radio, TV, and the internet, suggesting we are currently only using AI for what we already know how to do, not its full potential. He introduces his Redline Consulting Framework, which uses AI trained on his 35-year methodology to rapidly diagnose organizational blind spots, dramatically reducing assessment time from weeks to minutes. Chris also identifies a flawed understanding of learning as the single most important principle of human performance that corporate teams overlook. He distinguishes between "understanding" (a mental process) and "embodied competence" (the ability to take action without thinking), arguing that only practice and discomfort can bridge that gap. Chris discusses the cultural barriers preventing large organizations from innovating with AI, tracing it back to industrial-era management practices built on distrust. He shares a tangible practice for building resilience called "centering" and emphasizes the need for leaders to transition from managers to guardians of the enterprise's mood and architects of its future. Finally, he argues that while AI excels at rote tasks, the uniquely human capacity to connect, feel, and truly lead is irreplaceable, and defines innovation not as invention, but as the incremental or radical improvement of existing ideas within a culture that tolerates risk. 
    In this episode, you’ll discover:
    · The historical context for the current stage of AI adoption.
    · How Chris's Redline Framework uses AI to rapidly diagnose organizational issues.
    · The difference between understanding and "embodied competence" in human performance.
    · Why practice and discomfort are essential for true learning and transformation.
    · The industrial-era management practices holding back AI innovation in corporations.
    · A somatic "centering" practice for building resilience and presence.
    · The two fundamental responsibilities of a leader: guardian of mood and architect of future.
    · Why AI will never replace the uniquely human capacity for connection and leadership.
    · Chris's distinction between invention and innovation.

    Connect With Chris Majer & Human Potential Project:
    · Website: http://humanpotentialproject.com/
    · Speaker Website: https://chrismajer.com/
    · Book: "The Power to Transform" on Amazon

    Chapters:
    00:00 Welcome Chris Majer: Founder of Human Potential Project
    01:28 A Divine Discontent: Exploring the Boundaries of Human Potential
    02:30 Partnering with AI: A Historical Perspective on New Technology
    06:34 The Redline Consulting Framework: Using AI for Organizational Diagnosis
    08:27 The Missing Performance Principle: Embodied Competence vs. Understanding
    11:20 The Power of Practice and the Discomfort of Learning
    13:12 Industrial-Era Management: The Barrier to AI Innovation
    16:34 Building Resilience: The Practice of "Centering"
    17:48 The Two True Roles of a Leader
    19:14 The Future of Leadership: Listening and Anticipation
    20:04 AI and the Irreplaceable Human Potential for Connection
    21:12 Career Advice for Students: Commitment and Lifelong Learning
    22:46 Innovation Defined: Improvement Within a Risk-Tolerant Culture
    24:08 Connect with Chris Majer and the Human Potential Project

    Support the Show:
    · Fuel the podcast: https://iferia.nestuge.com/supportme
    · Subscribe and leave a review!
    · Share

    25 min
  8. 17 JAN

    AI-Driven Cybersecurity at Microsoft: Real-Time Threat Detection - Faiz Gouri | Ep 135

    Faiz Gouri is a Lead AI Engineer at Microsoft, specializing in AI-driven cybersecurity for large-scale cloud infrastructure. He works on real-time anomaly detection and threat mitigation and is an active IEEE researcher with influential papers on optimizing distributed systems. His work sits at the critical intersection of cutting-edge AI and enterprise-level security. In this episode, Faiz explains how his role at Microsoft organically combined AI, machine learning, and cybersecurity, emphasizing that security is a built-in feature, not an enhancement. He breaks down the high-level architecture of a real-time AI anomaly detection system, using a practical example of detecting suspicious login attempts from different locations. Faiz discusses his IEEE research on adaptive indexing, where machine learning dynamically adjusts database indexing based on query patterns, leading to significant performance improvements in distributed systems handling petabytes of data. He also explores the balance between deterministic rules and probabilistic AI in high-stakes cybersecurity, noting the critical importance of real-time threat mitigation to prevent costly downtime. Faiz identifies data privacy in training models as a major pitfall for data scientists, using the example of AI chatbots in healthcare. For students and early-career professionals, he recommends cultivating intense curiosity and a commitment to continuous learning above any single technical skill. Reflecting on his own career, Faiz shares the advice to "take notes of everything" to retain knowledge in a rapidly evolving field. He expresses hope for AI's potential to create a more secure digital future, provided that security is core to its design. Finally, Faiz defines innovation as building something novel to solve a problem, often by drawing inspiration from and combining existing systems in unique ways. 
    In this episode, you’ll discover:
    · How AI, machine learning, and cybersecurity converge in large-scale cloud infrastructure.
    · The high-level architecture of real-time AI anomaly detection systems.
    · How adaptive indexing with machine learning optimizes distributed databases.
    · The balance between rule-based systems and AI in high-stakes security.
    · The critical challenge of data privacy when training AI models with sensitive information.
    · Why curiosity and continuous learning are the most important skills for an AI career.
    · The career-defining advice to "take notes of everything" to retain knowledge.
    · The potential of AI to create a more secure digital future.
    · Faiz's definition of innovation as building novel solutions inspired by existing systems.

    Connect With Faiz Gouri:
    · LinkedIn: https://www.linkedin.com/in/faizgouri/

    Chapters:
    00:00 Welcome Faiz Gouri: Lead AI Engineer at Microsoft
    01:51 The Intersection of AI, Machine Learning, and Cybersecurity at Microsoft
    04:13 Architecture of a Real-Time AI Anomaly Detection System
    07:08 IEEE Research: Adaptive Indexing for Distributed Systems
    09:43 Balancing AI and Rule-Based Systems in High-Stakes Security
    11:34 AI vs. Traditional Systems in Threat Detection
    12:38 Major Pitfall: Data Privacy in Training Production AI Models
    15:28 Foundational Skill for an AI Career: Curiosity and Continuous Learning
    17:35 Career Advice: The Importance of Taking Notes
    19:35 The Future of AI in Creating a Secure Digital World
    20:37 Innovation Defined: Building Novel Solutions from Existing Inspiration
    21:41 Connect with Faiz Gouri on LinkedIn

    Support the Show:
    · Fuel the podcast: https://iferia.nestuge.com/supportme
    · Subscribe and leave a review!
    · Share

    Want to Be a Guest on The Iferia TechCast?
    · Reach out to Ezekiel on PodMatch
    · PodMatch Host Profile: https://podmatch.com/hostdetailpreview/theiferiatechcast

    23 min
