Digital Disruption with Geoff Nielson

Info-Tech Research Group

The Next Industrial Revolution is Already Here. Digital Disruption is where industry leaders and experts share insights on leveraging technology to build the organizations of the future. As intelligent technologies reshape our lives and our livelihoods, we speak with the thinkers, doers, and innovators who will help us predict and harness this disruption. Join us as we explore how to adapt to and harness digital transformation.

  1. Double Agents: Dr Ayesha Khanna on How AI is Turning on Humans

    1D AGO

    What risks come with AI systems that can lie, cheat, or manipulate? Today on Digital Disruption, we’re joined by Dr. Ayesha Khanna, CEO of Addo AI. Dr. Khanna is a globally recognized AI expert and entrepreneur who helps businesses leverage AI for growth. With 20+ years in digital transformation, she advises Fortune 500 CEOs and serves on global boards, including Johnson Controls, NEOM Tonomus, and L’Oréal’s Scientific Advisory Board. A graduate of Harvard, Columbia, and the London School of Economics, she spent a decade on Wall Street advising on information analytics. A thought leader in AI, Dr. Khanna has been recognized as a groundbreaking entrepreneur by Forbes, named to Edelman’s Top 50 AI Creators (2025), and featured in Salesforce’s 16 AI Influencers to Know (2024). Committed to diversity in tech, she founded the charity 21C Girls, which taught thousands of students in Singapore the basics of AI and coding, and she currently provides scholarships for mid-career women through her education company Amplify.

    Ayesha sits down with Geoff to discuss how artificial intelligence is disrupting industries, reshaping the economy, and redefining the future of jobs. This conversation explores why critical thinking will be the most important skill in an AI-driven workplace, how businesses can use AI to scale innovation instead of getting stuck in “pilot purgatory,” and what risks organizations must prepare for, including bias, data poisoning, cybersecurity threats, and manipulative reasoning models. Ayesha shares insights from her work with governments and Fortune 500 companies on building national AI strategies, creating governance frameworks, and balancing innovation with responsibility. The conversation dives into how AI and jobs intersect, whether automation will replace or augment workers, and why companies need to focus on growth, reskilling, and strategic automation rather than layoffs. They also discuss the rise of the Hybrid Age, where humans and AI coexist in every part of life, and what it means for society, relationships, and the global economy.

    In this video:
    00:00 Intro
    00:43 The future of AI and the next 5 years
    02:16 The biggest AI risks
    05:25 Fake alignment & governance
    09:08 Why AI pilots fail
    15:30 What successful companies do
    23:14 AI and jobs: Automation, reskilling, and why critical thinking matters most
    29:39 The Hybrid Age
    37:09 AI and society: relationships with AI, human agency, and ethical concerns
    46:13 Global AI strategies
    54:00 Overhyped narratives and what people get wrong about AI and jobs
    56:27 The Skills Gap opportunity
    58:31 The importance of risk frameworks, critical thinking, and optimism

    Connect with Dr. Khanna:
    Website: https://www.ayeshakhanna.com/
    LinkedIn: https://www.linkedin.com/in/ayeshakhanna/
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG

    59 min
  2. The Lazy Generation? Is AI Killing Jobs or Critical Thinking

    SEP 8

    Can automation and critical thinking coexist in the future of education and work? Today on Digital Disruption, we’re joined by Bryan Walsh, Senior Editorial Director at Vox. At Vox, Bryan leads the Future Perfect and climate teams and oversees the podcasts Unexplainable and The Gray Area. He also serves as editor of Vox’s Future Perfect section, which explores the policies, people, and ideas that could shape a better future for everyone. He is the author of End Times: A Brief Guide to the End of the World (2019), a book on existential risks including AI, pandemics, and nuclear war (though, as he notes, it’s not all that brief). Before joining Vox, Bryan spent 15 years at Time magazine as a foreign correspondent in Hong Kong and Tokyo, an environment writer, and international editor. He later served as Future Correspondent at Axios. When he’s not editing, Bryan writes Vox’s Good News newsletter and covers topics ranging from population trends and scientific progress to climate change, artificial intelligence, and, on occasion, children’s television.

    Bryan sits down with Geoff to discuss how artificial intelligence is transforming the workplace and what it means for workers, students, and leaders. From the automation of entry-level jobs to the growing importance of human-centered skills, Bryan shares his perspective on the short- and long-term impact of AI on the economy and society. He explains why younger workers may be hit hardest, how education systems must adapt to preserve critical thinking, and why both companies and governments face tough choices in managing disruption. This conversation highlights why adaptability and critical thinking are becoming the most valuable skills and what governments and organizations can do to reduce the social and economic strain of rapid automation.

    In this video:
    00:00 Intro
    01:20 Early adoption of AI: Hype vs. reality
    02:16 Automation pressures during economic downturns
    03:08 The struggle for new grads entering the workforce
    04:37 Is AI wiping out entry-level jobs?
    05:40 Why younger workers may be hit hardest
    06:28 No clear answers on AI disruption
    08:19 The paradox of AI: productivity gains vs. job losses
    14:30 Critical thinking, education, and the future of learning
    18:00 How AI reshapes global power dynamics
    31:57 The workplace of the future: skills that matter most
    44:03 Regulation, politics, and the AI economy
    48:19 AI, geopolitics, and risks of global instability
    57:33 Who bears responsibility for minimizing disruption?
    59:01 Rethinking identity beyond work
    1:00:22 Journalism in the AI era: threat or amplifier?

    Connect with Bryan:
    Website: https://www.vox.com/authors/bryan-walsh
    LinkedIn: https://www.linkedin.com/in/bryan-walsh-9881b0/
    X: https://x.com/bryanrwalsh
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG

    1h 6m
  3. From Dumb to Dangerous: The AI Bubble Is Worse Than Ever

    SEP 1

    Are we heading toward an AI-driven utopia, or just another tech bubble waiting to burst? Today on Digital Disruption, we’re joined by Dr. Emily Bender and Dr. Alex Hanna. Dr. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural Time 100 list of the most influential people in AI. She is frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies. Dr. Hanna is Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California, Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including articles in the Washington Post, Financial Times, The Atlantic, and Time.

    Dr. Bender and Dr. Hanna sit down with Geoff to discuss the realities of generative AI, Big Tech power, and the hidden costs of today’s AI boom. Artificial intelligence is everywhere, but how much of the hype is real, and what’s being left out of the conversation? This discussion dives into the social and ethical impacts of AI systems and why popular AI narratives often miss the mark. Dr. Bender and Dr. Hanna share their thoughts on the biggest myths about generative AI, why we need to challenge them, and the importance of diversity, labor, and accountability in AI development. They answer questions such as where AI is really heading, how we can imagine better, more equitable futures, and what technologists should be focusing on today.

    In this video:
    0:00 Intro
    1:45 Why language matters when we talk about “AI”
    4:20 The problem with calling everything “intelligence”
    7:15 How AI hype shapes public perception
    10:05 Separating science from marketing spin
    13:30 The myth of AGI: Why it’s a distraction
    16:55 Who benefits from AI hype?
    20:20 Real-world harms: Bias, surveillance & labor exploitation
    24:10 How data is extracted & who pays the price
    28:40 The invisible labor behind AI systems
    32:15 Diversity, power, and accountability in AI
    36:00 Why focusing on “doom scenarios” misses the point
    39:30 AI in business and risks leaders should actually care about
    43:05 What policymakers should prioritize now
    47:20 The role of regulation in responsible AI
    50:10 Building systems that serve people, not profit
    53:15 Advice for CIOs and tech leaders
    55:20 Gen AI in the workplace

    Connect with Dr. Bender and Dr. Hanna:
    Website: https://thecon.ai/authors/
    Dr. Bender LinkedIn: https://www.linkedin.com/in/ebender/
    Dr. Hanna LinkedIn: https://www.linkedin.com/in/alex-hanna-ph-d/
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG

    57 min
  4. Siri Creator: How Apple & Google Got AI Wrong

    AUG 25

    What does the future of AI assistants look like, and what’s still missing? Today on Digital Disruption, we’re joined by Adam Cheyer, Co-Founder of Siri. Adam is an inventor, entrepreneur, engineering executive, and a pioneer in AI and human-computer interfaces. He co-founded or was a founding member of five successful startups: Siri (sold to Apple, where he led server-side engineering and AI for Siri), Change.org (the world’s largest petition platform), Viv Labs (acquired by Samsung, where he led product engineering and developer relations for Bixby), Sentient (massively distributed machine learning), and GamePlanner.AI (acquired by Airbnb, where he served as VP of AI Experience). Adam has authored more than 60 publications and 50 patents. He graduated with highest honors from Brandeis University and received the “Outstanding Master’s Student” award from UCLA’s School of Engineering.

    Adam sits down with Geoff to discuss the evolution of conversational AI, design principles for next-generation technology, and the future of human–machine interaction. They explore the future of AI, augmented reality, and collective intelligence. Adam shares insider stories about building Siri, working with Steve Jobs, and why today’s generative AI tools like ChatGPT are both amazing and frustrating. He also shares his predictions for the next big technological leap and how collective intelligence could transform how we solve humanity’s most difficult challenges.

    In this video:
    0:00 Intro
    1:08 Why today’s AI both amazes and frustrates
    3:50 The 3 big missing pieces in current AI systems
    8:28 What Siri got right and what it missed
    11:30 The “10+ Theory”: Paradigm shifts in computing
    14:18 Augmented Reality as the next big breakthrough
    19:43 Design lessons from building Siri
    25:00 Iteration vs. first impressions: How to launch transformational products
    30:20 Beginner, intermediate, and expert user experiences in AI
    33:40 Will conversational AI become like “Her”?
    35:45 AI maturity compared to the early internet
    37:34 Magic, technology, and creating “wow” moments
    43:55 What’s hype vs. what’s real in AI today
    47:01 Where the next magic will happen: AR & collective intelligence
    50:51 The role of DARPA, Stanford, and government funding in Siri’s success
    54:49 Advice for leaders building the future of digital products
    57:13 Balance the hype

    Connect with Adam:
    Website: http://adam.cheyer.com/site/home?page=about
    LinkedIn: https://www.linkedin.com/in/adamcheyer/
    Facebook: https://www.facebook.com/acheyer
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG
    Check out other episodes of Digital Disruption: https://youtube.com/playlist?list=PLIImliNP0zfxRA1X67AhPDJmlYWcFfhDT&feature=shared

    58 min
  5. Next-Gen Tech Expert: This is AI's ENDGAME

    AUG 18

    Are we ready for a future where human and machine intelligence are inseparable? Today on Digital Disruption, we’re joined by Scott Klososky, best-selling author and founding partner of the digital strategy firm Future Point of View (FPOV). Scott’s career has been built at the intersection of technology and humanity; he is known for his visionary insights into how emerging technologies shape organizations and society. He has advised leaders across Fortune 500 companies, nonprofits, and professional associations, guiding them in integrating technology with strategic human effort. A sought-after speaker and the author of four books, including Did God Create the Internet?, Scott continues to help executives around the world prepare for the digital future.

    Scott sits down with Geoff to discuss the cutting edge of human-technology integration and the emergence of the “organizational mind.” What happens when AI no longer just supports organizations but becomes a synthetic layer of intelligence within them? He talks about real-world examples of this transformation already taking place, reveals the ethical and existential risks AI poses, and offers practical advice for business and tech leaders navigating this new era. This conversation dives deep into everything from autonomous decision-making to AI regulation and digital governance, and Scott breaks down the real threats of digital reputational damage, AI misuse, and the growing surveillance culture we’re all a part of.

    In this episode:
    00:00 Intro
    00:24 What is an ‘Organizational Mind?’
    03:44 How fast is this becoming real?
    05:00 Early insights from building an organizational mind
    07:02 The human brain analogy: AI mirrors us
    08:12 What does it mean for AI to “wake up”?
    09:51 AI awakening without consciousness
    11:03 Should we be worried about conscious AI?
    11:59 Accidents, bad actors, and manipulation
    15:42 Can we prevent these AI risks?
    18:28 Regulatory control and the role of governments
    20:03 Cat and Mouse: Can AI hide from auditors?
    23:02 The escalating complexity of AI threats
    27:00 Will nations have organizational minds?
    29:12 Autonomous collaboration between AI nations
    35:36 Bringing AI tools together
    36:31 Knowledge, agents, personas & oversight
    40:11 Why early adopters will have the edge
    41:00 Are we in another AI bubble?
    45:01 Scott’s advice for business & tech leaders
    47:12 Why use-cases alone aren’t enough

    Connect with Scott:
    LinkedIn: https://www.linkedin.com/in/scottklososky/
    X: https://x.com/sklososky
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG

    51 min
  6. Roman Yampolskiy: How Superintelligent AI Could Destroy Us All

    AUG 11

    This episode is a wake-up call for anyone who believes the dangers of AI are exaggerated. Today on Digital Disruption, we’re joined by Roman Yampolskiy, a leading writer and thinker on AI safety and associate professor at the University of Louisville. He was recently featured on podcasts such as The Joe Rogan Experience. Roman is a leading voice in the field of AI safety and security. He is the author of several influential books, including AI: Unexplainable, Unpredictable, Uncontrollable. His research focuses on the critical risks and challenges posed by advanced AI systems. A tenured professor in the Department of Computer Science and Engineering at the University of Louisville, he also serves as the founding director of the Cyber Security Lab.

    Roman sits down with Geoff to discuss one of the most pressing issues of our time: the existential risks posed by AI and superintelligence. He shares his prediction that AI could lead to the extinction of humanity within the next century. They dive into the complexities of this issue, exploring the potential dangers that could arise from both AI’s malevolent use and its autonomous actions. Roman highlights the difference between AI as a tool and as a sentient agent, explaining how superintelligent AI could outsmart human efforts to control it, leading to catastrophic consequences. The conversation challenges the optimism of many in the tech world and advocates for a more cautious, thoughtful approach to AI development.

    In this episode:
    00:00 Intro
    00:45 Dr. Yampolskiy's prediction: AI extinction risk
    02:15 Analyzing the odds of survival
    04:00 Malevolent use of AI and superintelligence
    06:00 Accidental vs. deliberate AI destruction
    08:10 The dangers of uncontrolled AI
    10:00 The role of optimism in AI development
    12:00 The need for self-interest to slow down AI development
    15:00 Narrow AI vs. Superintelligence
    18:30 Economic and job displacement due to AI
    22:00 Global competition and AI arms race
    25:00 AI’s role in war and suffering
    30:00 Can we control AI through ethical governance?
    35:00 The singularity and human extinction
    40:00 Superintelligence: How close are we?
    45:00 Consciousness in AI
    50:00 The difficulty of programming suffering in AI
    55:00 Dr. Yampolskiy’s approach to AI safety
    58:00 Thoughts on AI risk

    Connect with Roman:
    Website: https://www.romanyampolskiy.com/
    LinkedIn: https://www.linkedin.com/in/romanyam/
    X: https://x.com/romanyam
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG

    1h 13m
  7. Ex-OpenAI Lead Zack Kass Reveals the Societal Impact of AI

    AUG 4

    As AI becomes more capable, how should our social systems evolve in response? Today on Digital Disruption, we’re joined once again by Zack Kass, an AI futurist and former Head of Go-To-Market at OpenAI. As a leading expert in applied AI, he harnesses its capabilities to develop business strategies and applications that enhance human potential. Zack has been at the forefront of AI and played a key role in early efforts at commercializing AI and large language models, channeling OpenAI’s innovative research into tangible business solutions. Today, Zack is dedicated to guiding businesses, nonprofits, and governments through the fast-changing AI landscape. His expertise has been highlighted in leading publications, including Fortune, Newsweek, Entrepreneur, and Business Insider.

    Zack sits down with Geoff to explore the philosophical implications of AI and its impact on everything from nuclear war to society’s struggle with psychopaths and humanity itself. This conversation raises important questions about the evolving role of AI in shaping our world and the ethical considerations that come with it. Zack discusses how AI may empower low-resource bad actors, transform local communities, and influence future generations. The episode touches on a wide range of themes, including the meaning of life, AI’s role in global conflict, its effects on personal well-being, and the societal challenges it presents. This conversation isn’t just about AI; it’s about humanity’s ongoing exploration of fear, freedom, happiness, and the future.

    In this episode:
    00:00 Intro
    00:21 AI's exponential growth and speed of change
    02:03 The expanding scientific frontier
    03:19 Roger Bannister effect and AI inspiration
    04:00 Societal vs. technological thresholds
    06:00 The danger of low-resource bad actors
    09:00 Psychopaths, crime, and the role of policy
    12:00 Freedom vs. security
    14:45 The risk of bias and broken justice systems
    18:00 The role of AI in decision-making
    20:00 Why we tolerate human error but not machine error
    20:36 Breaking the fear cycle in a negative attention economy
    22:12 Tech-driven optimism
    23:55 Finding Happiness
    25:32 Community, nature, and meaningful human connection
    27:00 The problem with the “more is more” mindset
    28:30 Narratives, new media, and information overload
    31:09 The Power of local change and good news
    33:06 Gen Z, Gen Alpha, and the next wave of innovation

    Connect with Zack:
    Website: https://zackkass.com/
    X: https://x.com/iamthezack
    LinkedIn: https://www.linkedin.com/in/zackkass/
    YouTube: https://www.youtube.com/@ZackKassAI
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG

    35 min
  8. Pulitzer-Winning Journalist: This is Why Big Tech is Betting $300 Billion on AI

    JUL 28

    What role should government, regulation, and society play in the next chapter of Big Tech and AI? Today on Digital Disruption, we’re joined by Pulitzer Prize–winning investigative reporter Gary Rivlin. Gary has been writing about technology since the mid-1990s and the rise of the internet. He is the author of AI Valley and nine previous books, including Saving Main Street and Katrina: After the Flood. His work has appeared in the New York Times, Newsweek, Fortune, GQ, and Wired, among other publications. He is a two-time Gerald Loeb Award winner and a former reporter for the New York Times. He lives in New York with his wife, theater director Daisy Walker, and two sons.

    Gary sits down with Geoff to discuss the unchecked power of Big Tech and the evolving role of AI as a political force. From the myth of the benevolent tech founder to the real-world implications of surveillance, misinformation, and election interference, he discusses the dangers of unregulated tech influence on policy and the urgent need for greater transparency, ethical responsibility, and accountability in emerging technologies. This conversation highlights the role of venture capital in fueling today’s tech giants, what history tells us about the future of digital disruption, and whether regulation can truly govern AI and platform power.

    In this episode:
    00:00 Intro
    02:45 The early promise of Silicon Valley
    06:30 What changed in tech: From innovation to power
    10:55 The role of venture capital in shaping Big Tech
    15:40 Tech disruption vs. systemic control
    20:15 The shift from public good to private gain
    24:50 How Big Tech wields power over democracy
    29:30 Can AI be regulated in time?
    33:45 Lessons from tech history
    38:20 Government’s role in tech oversight
    43:05 Gary’s thoughts on tech accountability
    47:30 Future risks of an unchecked tech industry
    51:10 Hope for the next generation of innovators
    55:00 Tech is at the center of politics
    58:00 What should change?
    1:09:00 Journalists using AI are more powerful

    Connect with Gary:
    Website: https://garyrivlin.com/
    LinkedIn: https://www.linkedin.com/in/gary-rivlin/
    Visit our website: https://www.infotech.com/
    Follow us on YouTube: https://www.youtube.com/@InfoTechRG

    1h 12m

Ratings & Reviews

4.5 out of 5 (17 Ratings)

