Data Faces Podcast

TinyTechMedia

Data Faces is a podcast that brings the human stories behind data, analytics, and AI to the forefront. Join us for engaging interviews and discussions with the industry's leading voices: the leaders, practitioners, and tech innovators who are shaping the future of data-driven decision-making. In each episode, we explore the culture, challenges, and real-life experiences of the people behind the numbers. Whether you're a tech executive, data professional, or just curious about the impact of data on our world, Data Faces offers a refreshing look at the individuals and ideas driving the next wave.

  1. Bots Need Not Apply: Authentic Voices in Data and AI | Kate Strachnyi

    20H AGO

    Bots Need Not Apply: Authentic Voices in Data and AI | Kate Strachnyi

    LinkedIn has a "rewrite with AI" button. Meanwhile, Kate Strachnyi is building an entire media company on authentic human voices. Is she right?

    In Episode 38 of the Data Faces Podcast, Kate Strachnyi (Founder, DATAcated) shares how she pivoted from finance to data visualization, built a 40+ creator influencer agency, and why she's betting on real humans over AI-generated content.

    Key Takeaways:
    1. How Kate followed the revenue data from courses and books to a focused media business
    2. The DATAcated Plus model: matching authentic creators to brand campaigns in data and AI
    3. Why Kate calls AI-rewritten content "non-GMO" and holds her creators to the same standard
    4. The shift from "Kate = DATAcated" to an agency brand that scales beyond one person
    5. The 20-year question: who fact-checks AI when today's subject matter experts retire?

    Timestamps:
    00:00 - Opening
    0:05 - Kate's background and what DATAcated does
    2:10 - Pre-finance Kate: what she wanted to be before data found her
    3:05 - The career pivot from risk management to data visualization
    5:03 - How DATAcated evolved from training to a media company
    7:27 - How the influencer model works behind the scenes
    9:33 - Automating business operations with Claude Code
    11:01 - Walking the line between brand amplification and spam
    14:11 - The fake tattoo story from Big Data London
    15:03 - DATAcated Plus vs. analyst firm engagements
    17:14 - Sold-out personal branding session at Gartner with Scott Taylor
    22:15 - Shifting from "Kate = DATAcated" to an agency brand
    24:02 - What works on LinkedIn now vs. five years ago
    27:01 - AI-generated content and the "non-GMO" philosophy
    29:04 - The 20-year question: who fact-checks AI when the experts retire?
    30:20 - Deep fake Dave and why Kate plans to remain authentic
    31:24 - Betting on AI for operations while keeping creative output human
    33:57 - Does AI make you more productive or just busier?
    36:19 - Where to find Kate and DATAcated

    More insights and resources:
    Blog: https://tinytechguides.com/blog/

    Connect with Kate Strachnyi:
    LinkedIn: https://www.linkedin.com/in/kate-strachnyi-data/
    DATAcated: https://datacated.com/

    Drop your thoughts in the comments! Like, share, and subscribe for more data and AI conversations.

    #AuthenticContent #AI #DataFacesPodcast

    37 min
  2. Why Bad Data Didn't Matter Until Now | Brendan Grady

    APR 21

    Why Bad Data Didn't Matter Until Now | Brendan Grady

    For 25 years, data quality was everyone's problem and nobody's priority. Brendan Grady, EVP and GM of Analytics & AI at Qlik, explains why the stakes just changed. In this episode recorded on location at Qlik Connect 2026, David Sweenor and Brendan discuss consequence management, where enterprise agentic adoption really stands ("prior to stage zero"), Qlik's Trust Score for AI, the shift from dashboards to decision intelligence, and why open standards like MCP matter in an agentic world.

    Key takeaways:
    1. Data quality was never fixed because there were no consequences for getting it wrong. AI agents changed that equation.
    2. Enterprise agentic adoption is in its earliest days. Customers are experimenting, but production-grade agents are rare.
    3. Qlik's Trust Score for AI gives decision-makers a quantifiable measure of data quality before it reaches an agent.
    4. "Dashboards are dead" as a destination, but the data and decisions they inform are more important than ever.
    5. Data professionals should become data product owners and trusted guides as agents take on routine work.

    Chapters:
    0:00 Introduction at Qlik Connect 2026
    1:14 Brendan's first job: Sound of Music tour guide
    2:04 Lessons from the early analytics era
    3:32 Why data quality has never been fixed
    4:46 Consequence management in the agentic era
    6:08 Where agentic adoption actually stands
    7:46 Future-proofing against LLM shifts
    8:24 The analytics engine and unknown unknowns
    10:29 Structured vs. unstructured data
    12:04 Hallucinations and trust scores
    15:30 "Dashboards are dead"
    18:05 Brain outsourcing and cognitive debt
    21:57 MCP server and open standards
    23:54 Qlik 2026 themes: trust, context, flexibility
    26:12 Advice for data professionals
    28:15 Does AI expand who can participate in analytics?

    Links:
    Blog post: https://tinytechguides.com/blog/why-bad-data-didnt-matter-until-now/
    Brendan Grady on LinkedIn: https://www.linkedin.com/in/brgrady/
    Qlik: https://www.qlik.com/
    Data Faces Podcast: https://tinytechguides.com/data-faces-podcast/
    Subscribe: https://www.youtube.com/playlist?list=PLzrDACjTQ4OBoQ8qM1FMGBwYdxvw9BurR

    #DataFacesPodcast #QlikConnect #AgenticAI #DataQuality #DecisionIntelligence

    30 min
  3. Truth Before Meaning in Data Management | Scott Taylor

    APR 7

    Truth Before Meaning in Data Management | Scott Taylor

    Data leaders have been pitching "data quality" to executives for decades, and the pitch keeps falling flat. Scott Taylor, the Data Whisperer, explains why, and what to do instead.

    In Episode 34 of the Data Faces Podcast, Scott Taylor (MetaMeta Consulting) shares his three-word data philosophy, truth before meaning, and the 3V framework (Vocabulary, Voice, Vision) that helps data leaders craft narratives executives actually respond to.

    Key Takeaways:
    1. "Truth before meaning": why you must establish trust in your data before deriving any business insight from it
    2. The 3V framework for structuring executive conversations about data management
    3. Why data leaders lose the room by leading with "how" instead of "why"
    4. How vendor messaging at the Gartner D&A Summit created more confusion than clarity
    5. Why AI is not "the Ozempic for data governance"

    Timestamps:
    00:00 - Opening
    0:06 - Scott's background as the Data Whisperer
    3:59 - Truth before meaning: Scott's data philosophy in three words
    6:04 - The supermarket scanner example of truth in data
    7:56 - Why data practitioners aren't trained in storytelling
    10:27 - Has AI changed the data management conversation?
    13:08 - Vendor performance at the Gartner D&A Summit
    16:27 - "Context is the new oil" and the semantic pedantic cycle
    19:54 - Crafting a one-sentence data management story for a skeptical CFO
    22:59 - The 3V framework: Vocabulary, Voice, and Vision
    25:37 - Data Puppets and using satire to expose organizational dysfunction
    31:48 - Why humor helps executives hear hard truths
    34:24 - Where to find Scott Taylor and the Data Puppets
    BONUS - Data Puppets segment: A-Eye attends the Gartner D&A Summit

    More insights and resources:
    Blog: https://tinytechguides.com/blog/truth-before-meaning-the-three-word-fix-for-data-management/

    Connect with Scott Taylor:
    LinkedIn: https://www.linkedin.com/in/scottdtaylor/
    MetaMeta Consulting: https://www.metametaconsulting.com/
    Data Puppets: https://www.linkedin.com/company/data-puppets/

    Drop your thoughts in the comments! Like, share, and subscribe for more data and AI conversations.

    #DataManagement #DataGovernance #DataFacesPodcast

    36 min
  4. Data Intelligence & Agentic AI | Stewart Bond, IDC

    MAR 24

    Data Intelligence & Agentic AI | Stewart Bond, IDC

    Stewart Bond coined the term "data intelligence" in 2016. Now it's a market category. Here's how it happened, and why it matters more than ever for AI.

    Stewart Bond, Research VP at IDC, joins David Sweenor on the Data Faces Podcast to trace the origins of "data intelligence" from a single research note to a full-blown market category adopted by Collibra, Alation, Informatica, Databricks, and IBM. They dig into what data intelligence actually means, why it's distinct from data governance, and why the rise of agentic AI makes getting it right non-negotiable.

    Key takeaways:
    1. Data intelligence (intelligence *about* data) is not the same as data governance: governance is organizational discipline; intelligence is the technology that enables it
    2. GDPR was the catalyst that accelerated enterprise interest in data intelligence and metadata management
    3. Databricks redefined the term to mean intelligence *from* data, triggering a debate that's still playing out
    4. Agentic AI demands high-quality, trustworthy data at the source; "shift left" for data quality is no longer optional
    5. Unstructured data intelligence is the next frontier, and most organizations are not ready

    Timestamps:
    0:00 Opening and introductions
    1:06 Stewart's background: 30+ years in IT, IBM certified architect, IDC analyst since 2011
    2:31 Personal interests: fishing, road biking, and competitive curling
    5:00 The origin of "data intelligence": 2016, ASG Technologies, and one research note
    6:44 GDPR as the catalyst: data governance vs. data intelligence
    8:11 Market adoption: Collibra, Erwin, Alation, Informatica, and more
    11:05 Databricks makes a splash, and Dave Kellogg weighs in
    13:39 IBM rebrands its portfolio to WatsonX Data Intelligence
    15:00 What it takes to successfully define a market category
    16:02 How data intelligence is evolving: semantics, active metadata, unstructured data
    19:13 Buy vs. build: how organizations assemble data intelligence capabilities
    23:32 Agentic AI and why data intelligence matters more than ever
    27:27 "Shift left": data quality must happen at the source for real-time AI
    29:14 Cracking the unstructured data quality problem
    31:21 What CDOs are actually complaining about
    35:07 Where organizations are under-investing
    37:46 Data catalog adoption challenges, and how agentic AI can help

    Listen on your preferred platform:
    YouTube playlist: https://www.youtube.com/playlist?list=PLzrDACjTQ4OBfdBJQiHax4oR1bXzs8JYY
    Spotify: https://open.spotify.com/show/3tFMqBPGioiMPxVJOmDPLj
    Apple Podcasts: https://podcasts.apple.com/us/podcast/data-faces-podcast/id1779505301
    Amazon Music: https://music.amazon.com/podcasts/8465f3b3-5d41-4c84-a561-bf8af09560e3/data-faces-podcast

    Connect with Stewart Bond:
    LinkedIn: https://www.linkedin.com/in/stewartlbond/

    Connect with David Sweenor:
    Website: https://tinytechguides.com
    LinkedIn: https://www.linkedin.com/in/davidsweenor/

    #DataIntelligence #DataGovernance #AgenticAI #DataManagement #DataFacesPodcast

    39 min
  5. Storytelling Is the Most Durable Data Skill | Michael Meyer

    MAR 10

    Storytelling Is the Most Durable Data Skill | Michael Meyer

    "All the computer programs that have ever needed to be written have already been written." That's what Michael Meyer's guidance counselor told him in the late 1980s. Thirty-five years later, he's still proving that advice wrong.

    In this episode of the Data Faces Podcast, host David Sweenor sits down with Michael Meyer, Solutions Engineer at Snowflake, to talk about the skill that carried him through every industry shift: storytelling. From creating a fictional character named "Walt the data janitor" to explain data governance, to building ML pipelines with vibe coding tools, Michael shares why the ability to make complex things understandable matters more than any single technology.

    Key Takeaways:
    1. Storytelling is the connective thread across every data role, from architecture to marketing to solutions engineering
    2. The semantic layer is a storytelling problem, and building a good one is still about 70% human work
    3. AI-assisted coding accelerates proof of concepts, but judgment about what the numbers mean is what separates useful work from dangerous work
    4. Early career data professionals should start with data modeling and fundamentals before chasing AI tools
    5. Getting out from behind the screen and learning from people matters as much as learning from platforms

    Timestamps:
    00:00 - Opening and introduction
    02:00 - Michael's background at Snowflake
    04:00 - Joe's Brew Reviews and the storytelling instinct
    06:30 - Walt the data janitor and internal marketing
    11:00 - The mindset shock of product marketing
    14:00 - Customer language and storytelling on a B2B web page
    17:00 - Coming back to the technical side at Snowflake
    19:00 - What is the semantic layer and why does it matter now?
    23:00 - Facts, dimensions, metrics, and verified queries
    25:30 - Building a semantic model: how much is human vs. AI?
    28:30 - Vibe coding with Snowflake Cortex Code
    32:00 - Career advice: fundamentals early career professionals need
    34:30 - Find what energizes you and get out from behind the screen
    35:30 - Craft beer recommendations and closing

    More insights and resources:
    Blog: [BLOG LINK]

    Connect with Michael Meyer:
    LinkedIn: https://www.linkedin.com/in/michael-meyer/

    Drop your thoughts in the comments! Like, share, and subscribe for more insights.

    #DataCareers #SemanticLayer #DataFacesPodcast

    36 min
  6. Your AI Has a Data Context Problem | Asa Whillock

    FEB 24

    Your AI Has a Data Context Problem | Asa Whillock

    📢 Most AI initiatives stall not because of weak models, but because of weak execution.

    In this episode of the Data Faces Podcast, David Sweenor sits down with Asa Whillock, CEO of Euphonic AI, to unpack what it really takes to operationalize AI inside the enterprise. With experience spanning Adobe, Alteryx, and now a growth-focused AI startup, Asa explains why production AI depends less on model hype and more on data access, system alignment, and disciplined leadership. If you're responsible for turning AI experiments into measurable business outcomes, this conversation will sharpen your thinking.

    🔍 Key Takeaways:
    1. Production AI is about context, not just model capability
    2. Vertical enterprise systems create horizontal friction for AI
    3. Metadata and human decision logic are often the missing layers
    4. "Boring" infrastructure work determines long-term AI success
    5. ROI comes from aligning AI to the metrics that actually drive your business

    ⏳ Timestamps for Easy Navigation:
    00:00 – Welcome & episode overview
    02:00 – Redefining operationalizing AI
    04:15 – Why enterprise AI struggles across silos
    08:30 – Signals that AI is ready for production
    12:45 – Structured vs. unstructured data
    15:00 – The decisions leaders delay
    18:00 – Differentiation vs. distraction
    25:15 – Models vs. data: what matters more
    29:20 – Why infrastructure determines success
    32:30 – Finding real ROI in AI
    34:20 – Final advice for AI leaders

    📩 More insights & resources: https://www.tinytechguides.com

    🔗 Connect with Asa Whillock:
    💼 LinkedIn: https://www.linkedin.com/in/asawhillock/
    🌎 Website: https://www.euphonic-ai.com/

    💬 What's the biggest barrier to operationalizing AI in your organization? Share your perspective in the comments.

    👍 If this was valuable, like the video and subscribe for more conversations with leaders shaping data and AI.

    #OperationalizingAI #EnterpriseAI #AILeadership

    35 min
  7. AI Governance vs Data Governance Explained | Gene Arnold

    FEB 10

    AI Governance vs Data Governance Explained | Gene Arnold

    📢 AI governance is moving faster than most companies can control, and that gap is where risk shows up.

    In this episode of the Data Faces Podcast, Gene Arnold, Partner Sales Engineer at Atlan, breaks down what AI governance actually looks like in real organizations: not policy decks or theory, but the decisions, tradeoffs, and failures teams face every day. David Sweenor and Gene explore how AI governance differs from data governance, why most AI projects never reach production, and how metadata, accountability, and testing determine whether AI becomes an asset or a liability. This conversation is for leaders who want AI to scale without surprises.

    🔍 Key Takeaways:
    1. Why AI governance is not just an extension of data governance
    2. How biased outcomes emerge even when models "work as designed"
    3. The hidden risks of moving fast without ownership or traceability
    4. Why metadata and semantic context matter more than models
    5. A practical starting point for governing AI without slowing teams down

    ⏳ Timestamps for Easy Navigation:
    00:00 – Podcast intro & Gene Arnold background
    02:10 – From data catalogs to AI governance
    07:05 – Data governance vs AI governance explained
    11:56 – The overlooked role of unstructured data
    16:31 – Why most AI projects fail in production
    19:18 – Real-world AI governance failures (Amazon, facial recognition)
    26:45 – How to detect and manage bias in AI systems
    27:02 – Practical advice for getting started with AI governance
    31:06 – Accountability, metadata, and the semantic layer
    36:10 – Final thoughts on adopting AI responsibly

    📩 Blog recap and show notes: https://tinytechguides.com/blog/why-the-biggest-ai-enthusiasts-care-most-about-governance/

    🔗 Connect with Gene Arnold:
    💼 LinkedIn: https://www.linkedin.com/in/genearnold/

    💬 What governance challenges are you seeing with AI in your organization? Share your perspective in the comments.

    👍 If this was useful, like the video, subscribe, and share it with someone leading AI or data initiatives.

    #AIGovernance #DataLeadership #EnterpriseAI

    38 min
