AI Literacy for Entrepreneurs

Northlight AI

"AI Literacy for Entrepreneurs", with host Susan Diaz, helps you integrate artificial intelligence into your business operations. We'll help you understand and apply AI generative in a way that is accessible and actionable for entrepreneurs at all levels. With each episode, you'll gain practical insights into effective AI strategies and tools, hear from leading practitioners with deep expertise and diverse use cases, and learn from the successes and challenges of fellow business owners in their AI adoption journey. Join us for the simplified knowledge and inspiration you need to leverage AI effectively to level up your business.

  1. 3D AGO

    EP 274 The Human OS - AI Adoption With Curiosity, Safety, and Monday Ease ft. Melissa Penton

    In the final episode of the Podcast-to-Book series, host Susan Diaz sits down with change leader and AI education lead Melissa Penton (Sun Life) for a human-first conversation about what actually makes AI adoption work. They talk productivity vs room-for-life, why one-prompt culture is snake oil, the shift from prompt engineering to context engineering, and the simplest enterprise question that changes everything: "What would make Monday easier for employees?"

    Episode summary
    Susan closes out the Podcast-to-Book sprint with a conversation that feels like the point of the whole series: AI isn't a tool problem. It's a people problem disguised as a tool problem. Melissa Penton shares her lens as a long-time change manager working in AI readiness and education inside a large organisation. Her focus isn't faster work. It's making room for what matters - and designing adoption in a way that's safe, honest, and grounded in real human tension points. Together, Susan and Melissa unpack why generic prompting courses aren't enough, why people get hives when they hear words like "workflow" and "agentic," and how leaders can create real change by starting with everyday pain. They also go deep on psychological safety, the fear of "training your robot replacement," and what it looks like to lead with humility in the biggest transformation most of us will live through.

    Key takeaways
    - Productivity is the doorway; room-for-life is the goal. Saving time is nice. The real win is using that time to live in your "zone of genius" and have space for the things you care about.
    - One-prompt culture is snake oil. Useful AI work is iterative, messy, and conversational. The magic isn't the prompt. It's the human steering, correcting, and refining.
    - Prompt engineering is evolving into context engineering. The skill isn't "write a clever prompt." It's learning to give the right context, ask better questions, and build on responses.
    - Enterprise adoption should start with one simple question: "What would make Monday easier for my employees?" That question forces leaders to solve real friction instead of buying shiny tools.
    - The biggest people problem masquerading as an AI problem is readiness. AI is being thrown at people who don't know where to start, how it fits their real lives, or how it changes their work without threatening them.
    - Training should be experiential, not theoretical. Courses can help, but capability sticks when people learn by doing, inside real workflows, with real tasks and real feedback loops.
    - Psychological safety is non-negotiable. People won't share pain points if they fear automation will erase their job. Leaders shouldn't make promises they can't keep. They should make learning safe and transferable.
    - Workflows don't have to be scary. A workflow is just the steps you already take. "Ask a question → make notes → read notes → act." That's a workflow.
    - Low-risk experiments lead to higher-risk breakthroughs. The "AI coffee warmer" might feel silly, but it's part of the lab. Small experiments teach the muscles needed for bigger transformations.
    - Leadership in the AI era requires humility. Admit you're learning. Model curiosity. Use AI to explore recurring organisational stuck points, mediate perspectives, and surface patterns in conversations.

    Timestamps
    00:03 — Susan sets the scene: the final stretch of the 30-day podcast-to-book sprint
    01:12 — Meet Melissa: change management, training, and leading AI education/readiness
    02:31 — Productivity vs "making room for what matters" (crochet, hikes, real life)
    03:29 — Time saved is table stakes… what are we doing with the time?
    03:58 — Zone of Genius living and why AI should move you toward it
    06:44 — Snake-oil prompts, "one prompt fixes your life," and why it makes Susan grumpy
    08:02 — "I am the prompt": AI as an iterative, human conversation
    09:21 — Prompting → context engineering (asking better questions is the skill)
    09:50 — The enterprise question: "What will make Monday easier for employees?"
    11:02 — Voice mode and why it changes tone, cadence, and output quality
    14:27 — The biggest "AI problem" is actually a people/readiness problem
    16:20 — Start with real tension points, not an abstract AI adoption plan
    18:23 — Why "prompting courses" can repel people (language matters)
    20:39 — Courses aren't bad… they're just not sufficient
    21:49 — Cleaning workflows as the gateway drug to agentic thinking
    22:44 — Agentic AI explained simply: consecutive steps without you in the middle
    23:20 — "Workflow" definition for normal humans (no hives required)
    24:12 — First 3 moves in 30 days: Monday, conversations, embedded learning
    26:00 — Psychological safety: fear of replacement and why honesty matters
    31:03 — Skill recognition: you're learning transferable capability, not training your replacement
    33:35 — Whole-human value: you are not your job title
    34:22 — The spiritual lens: AI should expand what humans can become
    35:34 — Why "small silly tools" still matter (science lab thinking)
    37:29 — Low-risk testing as the path to bigger breakthroughs
    38:00 — Leadership advice: be humble, be curious, use AI to explore stuck patterns
    40:58 — Where to find Melissa: LinkedIn + Substack
    41:30 — "Purple person": bridging tech and business communication

    Connect with Melissa Penton on LinkedIn
    Substack: Confessions of an AI User

    If you're leading AI adoption, steal this question and use it today: "What would make Monday easier for our people?" Then pick one friction point. Make it safer. Make it simpler. Let the learning compound. Connect with Susan Diaz on LinkedIn to get a conversation started.
Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    43 min
  2. 6D AGO

    273 - Future-proofing your organization through continuing AI literacy

    Most companies do a few AI trainings, run some pilots, and then stall. In this episode, host Susan Diaz argues the only real future-proofing strategy is continuous AI literacy. She breaks down what "continuous literacy" actually includes (skill, judgment, workflow, norms), the predictable failure modes of the AI literacy divide, and a simple flywheel you can run monthly so capability keeps compounding.

    Episode summary
    Susan opens with a familiar pattern: a burst of AI excitement, a deck called "AI Strategy 2025", a few clever workflows… and then reality hits. Tools change. Policies shift. Vendors overpromise. Early adopters keep learning. Everyone else stalls. Her reframe is blunt: AI is not a project or a software rollout. It behaves like a language. Best practices change fast. What was smart six months ago can become a bad habit in the next six months. So future-proofing isn't about predicting what AI will do next. It's about building an organization that can keep learning without burning people out or gambling with risk. That's what continuous AI literacy is.

    Key takeaways
    - Continuous AI literacy has four parts. Skill: how to use AI. Judgment: whether you should use AI. Workflow: where AI fits into the process. Norms: what's safe, allowed, and expected (guardrails + governance). If training only focuses on skill, you get chaos. If it covers all four, you get adoption velocity without panic.
    - The AI literacy divide is already here. A few people sprint. Most people watch. Leadership tries to govern what they don't fully understand. HR is stuck between "train everyone" and "we have no time".
    - That divide creates three predictable outcomes: shadow AI (people use tools quietly because they fear bans), innovation theatre (lots of activity, little operational change), and champion burnout (early adopters carry the organisation and get exhausted).
    - To future-proof, you need a continuous literacy flywheel. Not a one-off workshop. A system.

    Susan's flywheel starter kit (run it monthly or quarterly):
    1. Build the floor: minimum viable competence for everyone (basics of prompting, privacy, verification).
    2. Role-based lifts: train people to do their jobs better with AI (sales, HR, marketing, ops), not "AI training" in the abstract.
    3. Protect and pay champions: office hours, a workflow library, recognition, and compensation so they don't become unpaid internal consultants.
    4. Package workflows: move beyond prompting into templates, SOPs, and personalized tools (repeatable cognitive automation).
    5. Measure better metrics: stop obsessing only over time saved. Track quality, speed to opportunity, risk reduction, and learning.
    6. Refresh the loop: update what changed in tools/policy, what workflows are now standard, and what failure modes to avoid. Repeat.

    How you know it's working: you'll hear the language change. Less "AI is scary." More "Is this a good use case?", "What's the risk?", "What's the verification step?" AI becomes boring in the best way. Standardized quality improves. Handoffs improve. Fewer heroics.

    A simple rubric for "good AI use": Is it safe (data + context)? Is the output verifiable? Is a human accountable? Is it repeatable enough to operationalise?
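    For teams who like to operationalise things, the four-question rubric could even be encoded as a literal checklist. This is purely an illustrative sketch; the function and field names below are our own assumptions, not anything from the episode.

```python
# Illustrative sketch of the episode's "good AI use" rubric:
# safe, verifiable, accountable, repeatable.
# All names here are assumptions for illustration only.

RUBRIC = ("safe", "verifiable", "accountable", "repeatable")

def good_ai_use(answers: dict) -> bool:
    """A use case passes only if every rubric question is answered yes."""
    return all(answers.get(question, False) for question in RUBRIC)

# Hypothetical example: a proposal-drafting workflow with a human
# accountable but no verification step fails the rubric.
proposal_drafting = {
    "safe": True,         # no client data leaves approved tools
    "verifiable": False,  # no check on factual claims yet
    "accountable": True,  # a named owner signs off
    "repeatable": True,   # documented as an SOP
}
print(good_ai_use(proposal_drafting))  # False
```

    The point of writing it down this way is the same as Susan's: a use case that can't answer yes to all four questions isn't ready to operationalise.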
    Timestamps
    00:02 — The pattern: training + excitement + pilots… then stall
    00:28 — Vendor "agents" promises and why reality disappoints
    01:09 — The only real future-proofing strategy: continuous literacy
    02:06 — Reframe: AI is a language, not a project
    03:50 — What continuous literacy means in practice
    04:11 — The four parts: skill, judgment, workflow, norms
    05:40 — Why skill-only training creates chaos
    06:05 — Culture as the OS: why literacy won't stick without safety
    06:35 — The literacy divide: power users sprint, others stall
    07:36 — The three outcomes: shadow AI, innovation theatre, champion burnout
    08:24 — Continuous literacy as a flywheel (system, not workshop)
    09:02 — Step 1: build the floor (minimum viable competence)
    09:58 — Step 2: role-based lifts (train jobs, not "AI")
    10:47 — Step 3: champions, guardrails, office hours, and compensation
    11:27 — Step 4: workflow packaging (templates, SOPs, personalised tools)
    12:21 — Step 5: better metrics beyond time saved
    12:50 — Step 6: refresh the loop and repeat
    13:49 — How you'll know it's working: language shifts, "boring wins"
    14:57 — A simple rubric: safe, verifiable, accountable, repeatable
    15:42 — A practical start: 60 minutes of literacy review weekly
    16:39 — Close: tools expire, literacy compounds

    If you want a future-proof organization, don't build a crystal ball. Build a loop. Start this week with:
    - 60 minutes of literacy review (what changed, what worked, what failed).
    - Pick one workflow to package into a template or SOP.
    - Schedule office hours so learning stays alive.
    Tools will expire. Literacy will compound.

    17 min
  3. 2025-12-26

    EP 272 - Mindset, Sales, and AI That Actually Helps with Gazzy Amin

    Host Susan Diaz sits down with sales strategist Gazzy Amin, founder of Sales Beyond Scripts, to talk about the real ways AI is changing revenue, planning, and scale. They cover AI as a thinking partner, how to use it across departments in a small business, why audits matter more than hype, and how mindset quietly determines whether you treat AI as a threat or an advantage.

    Episode summary
    This episode is part of Susan's 30-episodes-in-30-days "podcast to book" sprint for Swan Dive Backwards. Susan and Gazzy zoom in on the selling process first, then zoom out to the whole business. They talk about three camps of AI users (anti, curious, invested), and why the curious group has a huge edge right now. Gazzy shares how she uses AI as a co-pilot across marketing, sales, and operations. Not just for captions. For thinking, planning, campaign creation, and building repeatable systems. They also go into mindset. Gazzy's approach is clear: protect your mental real estate. Don't let recession talk, doom narratives, or fear-based chatter shape your decisions. Use AI to help you widen perspective, challenge limiting beliefs, and plan like a CEO.

    Key takeaways
    - AI is more than a copywriting tool. It's a strategic brainstorming partner that reduces burnout and speeds up decision-making.
    - If you want to scale, map your departments and ask AI how it can support each one: marketing, sales, finance, operations, hiring, delivery, and client experience.
    - Do a year-end audit before you set new goals. Feed AI your revenue data, launches, offers, and calendar patterns. Then let it ask you smart questions you wouldn't think to ask yourself.
    - AI doesn't erase experts. It raises your baseline. You show up to expert conversations more informed, so you can go deeper faster.
    - Documentation and playbooks become an unfair advantage. When knowledge lives only in your head, your business is fragile. AI helps you turn what's in your brain into systems other people can run.
    - Scale is doing more with less. AI can increase output without needing to triple headcount, if you're intentional about workflows and training your team.
    - Women have a big opportunity here. AI can reduce the invisible workload, expand access to expert-level thinking, and help women-led businesses grow faster - if women stay in the conversation and keep learning.

    Timestamps
    00:00 — Susan introduces the 30-day podcast-to-book sprint and today's guest, Gazzy Amin
    01:10 — The three types of entrepreneurs using AI (anti / curious / invested)
    02:10 — AI as a thinking partner vs a task-doer
    03:50 — Why most people don't yet grasp AI's full capability (and why curiosity matters)
    05:00 — Using AI for personal life tasks as a low-pressure entry point
    06:46 — Recession narratives, standing out, and using AI to challenge limiting beliefs
    07:40 — "Create an AI per department" and train it for specific use cases
    09:10 — The opportunity window: access to expertise that used to cost tens of thousands
    10:10 — The 2025 audit: asking AI to interview you and pull out patterns
    12:55 — Will AI devalue experts? Why the human layer still matters
    16:45 — How AI changes your conversations with experts (you go deeper, faster)
    18:20 — Mindset tools: music, movement, and protecting your "mental real estate"
    21:00 — Documentation, playbooks, and why small teams need systems
    24:00 — Training AI on your real sales process to improve onboarding + client experience
    27:20 — Do AI-enabled businesses become more valuable? Scale, output, and leverage
    30:00 — Why people default to content (and how to make AI content actually sound like you)
    34:15 — Women, AI, the wage gap, and why this moment is non-negotiable
    40:00 — Gazzy shares her CEO Growth Plan Intensives and how she uses AI in sessions
    42:55 — Where to connect with Gazzy (Instagram + LinkedIn)

    Guest info
    Gazzy Amin, Founder, Sales Beyond Scripts
    Best place to connect: Instagram (behind-the-scenes and real-time business building) - https://www.instagram.com/authenticgazzy/

    If you want to use AI to scale in 2026, start here:
    - Run a 2025 audit with AI.
    - Pick one department and ask AI how to improve that workflow.
    - Document one process that currently lives only in your head.

    Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    45 min
  4. 2025-12-25

    EP 271 - How to Quantify AI ROI Beyond 'Time Saved'

    If you're measuring AI success by "hours saved", you're playing the easiest game in the room. In this episode, host Susan Diaz explains why time saved is a weak and sometimes harmful metric, then shares a better "AI ROI stack" with five metrics that map to real business value and help you build dashboards that actually persuade leadership.

    Episode summary
    Time saved is fine. It's also table stakes. Susan breaks down why "we saved 200 hours" is the least persuasive AI metric, and why it can backfire by punishing your early adopters with more work. She then introduces a smarter approach: a set of five metrics that connect AI usage to quality, risk, growth, decision-making, and compounding capability. If you want your AI work funded, supported, and taken seriously, you need to move the conversation from cost to investment. This episode shows you how.

    Key takeaways
    - Time saved doesn't automatically convert to value. If no one reinvests the saved time, you just made busy work faster.
    - Hours saved can punish high performers. Early adopters save time first. They often get "rewarded" with more work.
    - Time saved misses the second-order benefits. AI's biggest wins often show up as fewer mistakes, better decisions, faster learning, and faster response to opportunity.

    Susan's "AI ROI stack" has five stronger metrics:
    1. Quality lift. Is the output better? Track error rate, revision cycles, internal stakeholder satisfaction, customer satisfaction, and fewer rounds of revisions (e.g., proposals going from four rounds to two).
    2. Risk reduction. AI can reduce risk, not only create it. Track compliance exceptions, security incidents tied to content/data handling, legal escalations/load, and "near misses" caught before becoming problems.
    3. Speed to opportunity. Measure time from idea → first draft → customer touch. Track sales cycle speed, launch time, time to assemble POV/brief/competitive responses, and responsiveness to RFPs (the "game-changing" kind of speed).
    4. Decision velocity. AI can reduce drag by improving clarity. Track time-to-decision in recurring meetings, stuck work/aging reports, decisions per cycle, and decision confidence.
    5. Learning velocity. This is the compounding one. Track adoption curves, playbooks/workflows created per month, time from new capability introduced → used in production, and how many documented workflows are adopted by 10+ people.

    Dashboards should show three layers: leading indicators (adoption, workflow usage, learning velocity), operational indicators (cycle time), and business outcomes (pipeline influence, time to market, cost of service).

    You're not investing in AI to save hours. You're building a system that produces better work, faster, with lower risk, and gets smarter every month.

    Timestamps
    00:01 — "If you're measuring AI success by hours saved… that's table stakes."
    00:51 — Why time saved doesn't translate cleanly into value
    01:12 — Time saved doesn't become value unless reinvested
    01:29 — Hours saved can punish high performers (they get more work)
    02:10 — Time saved misses second-order benefits (mistakes, decisions, learning)
    02:45 — Introducing the "AI ROI stack" (five better metrics)
    02:59 — Metric 1: Quality lift (error rate, revision cycles, satisfaction)
    03:31 — Example: proposal revisions drop from four rounds to two
    04:14 — Metric 2: Risk reduction (compliance, incidents, legal load, near misses)
    05:19 — Metric 3: Speed to opportunity (idea to customer touch, sales cycle, launches)
    06:11 — Example: RFP response in 24 hours vs five days
    06:34 — Metric 4: Decision velocity (time to decision, stuck work, confidence)
    07:30 — Metric 5: Learning velocity (adoption curve, workflows, time to production)
    08:57 — Dashboards: leading indicators vs lagging indicators
    09:15 — Dashboards should include business outcomes (pipeline, time to market, cost)
    09:32 — Reframe: AI as a system that improves monthly
    10:08 — "Time saved is the doorway. Quality/risk/speed/decisions/learning is the house."
    10:36 — Closing + review request

    If your AI dashboard is only "hours saved", keep it - but don't stop there. Add one metric from the ROI stack this month. Start with quality lift or speed to opportunity. Then watch how fast the conversation shifts from cost to investment. Connect with Susan Diaz on LinkedIn to get a conversation started.

    Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    11 min
  5. 2025-12-24

    EP 270 - From AI Awareness → AI Readiness → AI Adoption with Jennifer Hufnagel

    Host Susan Diaz sits down with Jennifer Hufnagel (Hufnagel Consulting), an AI educator and AI readiness consultant who's trained 4K+ people. They break down what "AI readiness" actually means (spoiler: it's not buying Copilot), why AI doesn't fix broken processes or dirty data, and how leaders can build real capability through training programs, communities of practice, and properly resourced AI champions.

    Episode summary
    Susan Diaz and Jennifer Hufnagel met in "the most elite way possible": both were quoted in The Globe and Mail about women and AI. Jennifer shares her background as a business analyst and digital adoption / L&D consultant, and how she pivoted when clients began asking for AI workshops right after ChatGPT's release. Together, they map a simple but powerful framework:
    - AI awareness (practice + play, foundational learning, early change management)
    - AI readiness (software stack, data quality, workflows, current state, and - quietly - the "people audit")
    - AI adoption (implementation, strategy, and ongoing integration)
    Jennifer explains why "audit" language scares people, but the work is essential - especially talking to humans about what's frustrating, what takes time, and where fear is showing up. She shares what she's seeing after training thousands: AI fluency is still low, people obsess over tools, and many assume AI will solve problems that are actually process or data issues. The second half gets practical: what "workflows" really mean (step-by-step checklists), how AI now makes documenting processes easier than ever (voice → SOPs), why prompt engineering isn't dead but "100 prompts for your bookkeeping business" is mostly snake oil, and why one-off training sessions don't create real fluency. They close with how to build sustainable AI capability: proper training programs, leadership-led culture, communities of practice, and protecting champions from becoming unpaid help desks.

    Key takeaways
    - AI readiness is the middle of the journey. Jennifer frames AI maturity as awareness → readiness → adoption. Most organisations skip readiness and wonder why adoption stalls.
    - Readiness includes software, data, process… and people. You can call it a software/data/process audit, but you still have to talk to humans about their day-to-day work, pain points, and fears. That's where the truth lives.
    - AI fluency is still lower than the headlines suggest. Jennifer questions rosy "90% adoption" stats because many rooms she's in still show low real-world usage beyond basic experimentation.
    - Stop obsessing over tools. Companies are writing AI policy around tools and forcing everyone into a single platform. Jennifer argues the real goal is discernment, critical thinking, and clarity - not "pick one tool and pray".
    - AI doesn't fix broken processes or dirty data. If your workflows aren't documented, AI will scale the chaos. If your data is messy, the analysis will be messy too. Readiness comes first.
    - A workflow is just a checklist. Jennifer demystifies "workflow" as step-by-step instructions and ownership: who does what, when. Sticky notes on a wall is a valid start.
    - Process documentation is easier than ever. You can dictate steps into a model (without passwords) and ask it to produce an SOP/checklist - getting knowledge out of people's heads and into a shareable format.
    - Prompting isn't dead, but promise-all prompt packs are mostly hype. Prompting differs by model, and the best move is often to ask the model how to prompt it - and how to troubleshoot when output is wrong.
    - One-off AI workshops don't create fluency. AI changes too fast. Real capability requires programs, practice, communities of practice, office hours, and change management - plus leadership modelling and culture.
    - Don't burn out your AI champions. Champions need dedicated time, resources, and leadership sponsorship. Otherwise they become unpaid AI help desks and the entire initiative becomes fragile.
    - Community of practice is the unlock. Jennifer shares her in-person "AI Chats & Bites" group and encourages finding online + in-person + internal communities to keep learning alive.

    Episode highlights
    00:01 — The 30-day podcast-to-book sprint and why people are saying yes in December
    00:40 — Susan + Jennifer meet via The Globe and Mail "women and AI" feature
    01:21 — Jennifer's origin story: business analyst → digital adoption/L&D → AI readiness
    04:09 — The three-part framework: awareness → readiness → adoption
    05:03 — Readiness: software stack, data quality ("dirty data"), and mapping current state
    06:13 — "People audit" without calling it that: interview humans about pain + fear
    08:02 — What Jennifer sees after ~4,000 trainees: fluency still low + stats don't match reality
    09:38 — AI doesn't fix broken processes; it scales whatever is there
    10:55 — Workflows explained as checklists; "won the lottery" handoff test
    12:18 — Dictate your process into AI → generate SOPs/checklists
    14:24 — Prompting isn't dead; ask the model to help you prompt + troubleshoot
    17:50 — Why one-off training doesn't work; AI fluency requires a program + practice
    22:15 — Burning out champions and why AI culture must be top-down
    27:49 — Communities of practice: online + local + internal
    31:00 — Common mistakes: vending-machine mindset, believing output, not defining the problem
    35:31 — Women and AI: opportunity, fear, resilience, and "be in the grey"
    39:51 — Where to find Jennifer: hufnagelconsulting.ca + LinkedIn

    Guest info
    Jennifer Hufnagel
    Website: hufnagelconsulting.ca
    Email: hello@hufnagelconsulting.ca
    Best place to connect: LinkedIn - Jennifer Hufnagel

    If AI adoption feels stuck in your organization, don't buy another tool first. Start with readiness:
    - Map one workflow end-to-end.
    - Talk to the humans doing it daily.
    - Clean up the process and data enough that AI can actually help.
    Then build fluency through a program - not a one-off workshop - and protect your champions with real time and resources.
Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    42 min
  6. 2025-12-23

    EP 269 - Why One-Off AI Training Fails (and What to Do Instead)

    If your organization ran an "AI 101" lunch-and-learn… and nothing changed after, this episode is for you. Host Susan Diaz explains why one-off workshops create false confidence, how AI literacy is more like learning a language than learning software buttons, and shares a practical roadmap to build sustainable AI capability.

    Episode summary
    This episode is for two groups: teams who did a single AI training and still feel behind, and leaders realizing one workshop won't build organizational capability. The core idea is simple: AI adoption isn't a "feature learning" problem. It's a behaviour change problem. Behaviour only sticks when there's a container - cadence, guardrails, and a community of practice that turns curiosity into repeatable habits. Susan breaks down why one-off training fails, what good training looks like (a floor, not a ceiling), and gives a step-by-step plan you can use to design an internal program - even if your rollout already happened and it was messy.

    Key takeaways
    - One-off AI training creates false confidence. People leave either overconfident (shipping low-quality output) or intimidated (deciding "AI isn't for me"). Neither leads to real adoption.
    - AI literacy is a language, not a feature. Traditional software training teaches buttons and steps. AI requires reps, practice, play, and continuous learning because the tech and use cases evolve constantly.
    - Access is not enablement. Buying licences and calling everyone "AI-enabled" skips the hard part: safe use, permissions, and real workflow practice. Handing out tools with no written guardrails is a risk, not a training plan.
    - Cadence beats intensity. Without rituals and follow-up, people drift back to business as usual. AI adoption backslides unless you design ongoing reinforcement.
    - Good training builds a floor, not a ceiling. A floor means everyone can participate safely, speak shared language, and contribute use cases - without AI becoming a hero-only skill.

    The four layers of training that sticks:
    1. Safety + policy (permission, guardrails, what data is allowed)
    2. Shared language (vocabulary, mental models)
    3. Workflow practice (AI on real work, not toy demos)
    4. Reinforcement loop (office hours, champions, consistent rituals)

    The 5-step "training that works" roadmap:
    1. Define a 60-day outcome. "In 60 days, AI will help our team ____." Choose one: reduce cycle time, improve quality, reduce risk, improve customer response, improve decision-making. Then: "We'll know it worked when ____."
    2. Set guardrails and permissions. List: data never allowed, data allowed with caution, data safe by default.
    3. Pick 3 high-repetition workflows. Weekly tasks like proposals, client summaries, internal comms, research briefs. Circle one that's frequent + annoying + low risk. That becomes your practice lane.
    4. Build the loop (reps > theory). Bring one real task. Prompt once for an ugly first draft. Critique like an editor. Re-prompt to improve. Share a before/after with the team.
    5. Create a community of practice. Office hours. An internal channel for AI wins + FAQs. Two champions per team (curious catalysts, not "experts"). Only rule: bring a real use case and a real question.

    What "bad training" looks like:
    - one workshop with no follow-up
    - generic prompt packs bought off the internet
    - tools handed out with no written guardrails
    - hype-based demos instead of workflow practice
    - no time allocated for learning (so it becomes 10pm homework)

    Timestamps
    00:00 — Why this episode: "We did AI training… and nothing changed."
    01:20 — One-off training creates two bad outcomes: overconfident or intimidated
    03:05 — AI literacy is a language, not a software feature
    05:10 — Access isn't enablement: licences without guardrails = risk
    07:00 — Cadence beats intensity: why adoption backslides
    08:40 — Training should build a floor, not a ceiling
    10:05 — The 4 layers: policy, shared language, workflow practice, reinforcement
    12:10 — The 5-step roadmap: define a 60-day outcome
    13:40 — Guardrails and permissions (what data is never allowed)
    15:10 — Pick 3 workflows and choose a low-risk practice lane
    16:30 — The loop: prompt → critique → re-prompt → share
    18:10 — Communities of practice: office hours + champions
    20:05 — What to do this week: pick one workflow and run one loop

    If your organization did an AI 101 and nothing changed, don't panic. Pick one workflow this week. Run the prompt → critique → re-prompt → share loop once. Then schedule an office hour to do it again. That's how you move from "we did a training" to "we're building capability". Connect with Susan Diaz on LinkedIn to get a conversation started.

    Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    21 min
  7. 2025-12-22

    EP 268 Women, AI, and 'Hold the Door' Leadership with Chris McMartin

    Host Susan Diaz is joined by Chris McMartin, National Lead for the Scotiabank Women Initiative (Business Banking), for a real-world conversation about how women are approaching AI. They talk about time poverty, fear of asking "dumb" questions, the shame myth of "AI is cheating", and why the most powerful move right now is women holding the door open for each other - learning in community and sharing what works. Episode summary This episode is a candid, energetic conversation with Chris McMartin - aka "Hype Boss" online and a long-time hype woman for women entrepreneurs. They explore what's different about women's AI adoption, and it's not lack of interest. They discuss the reality of learning AI at 10pm on the couch after the "82-item checklist" of the day is finally done. And the catch-22 at play - AI can save time, but learning it takes time first - like baking cookies with kids turning a 20-minute task into a two-hour event. From there, they unpack a deeper barrier: many women hesitate to ask questions because they don't want to look silly. Chris argues women often use AI "just a little bit" because it doesn't require admitting what they don't know - meaning AI becomes a copywriting helper instead of a real growth lever. They also confront the "AI is cheating" narrative. Chris shares her no-apologies stance: if AI improved your grammar overnight, that's not shameful - it's smart. And if you're worried about being judged for questions, ask AI itself - because it won't judge you. The conversation closes with practical advice for women-led teams (especially 5-50 people): start by identifying the task everyone hates and use AI there first, and schedule learning time during business hours instead of relegating growth to late-night exhaustion. Along the way, Susan brings in a powerful metaphor: "hold the door" leadership - women who are already in the room have a responsibility to bring others in with them. 
(Metaphor inspired by Game of Thrones and Bozoma Saint John.)

Key takeaways

Women aren't unwilling. They're time-starved. Many women try to learn AI at the end of the day, when they're exhausted - because that's the only time left.
AI has a "cookie problem". It has huge benefits later, but it costs time upfront to learn - just like baking with kids. That learning curve is real, and it's a major adoption barrier.
Fear of questions limits adoption. Chris observes women often hesitate to ask "how do I use this better?", which keeps AI usage stuck at surface-level tasks like captions and posts.
"AI is cheating" is a myth that needs to die. Chris's take: using AI to communicate more clearly isn't unethical. It's an upgrade. She also notes men rarely apologize for finding ways to do things better.
Ask AI how to use AI. If you feel silly asking humans, ask your LLM: "What questions should I answer so you can help me solve this?" That's the difference between generic output and useful work.
Community is a women's superpower. Women often collaborate with "competitors" with zero weirdness. That community-of-practice energy is exactly what AI learning needs.
For women-led teams: start with pain. Chris's first practical move: ask your team what task they hate most, then use AI to reduce or remove that pain point to build buy-in.
Schedule learning like leadership. Don't push AI learning to 10pm. Put it on the calendar during work hours. Your development is part of the job.
Grants can fund AI training and tech upgrades. Chris reminds listeners that many grants support technology advancement and hiring expertise - even for non-tech businesses - and AI can reduce the pain of grant writing.

Episode highlights

[00:03] Meet Chris McMartin + the Scotiabank Women Initiative.
[02:00] "10pm on the couch" and why time poverty shapes women's learning.
[02:44] The cookie analogy: AI saves time later, but learning costs time now.
[05:00] Women using AI at 1%: sticking to safe tasks without asking questions.
[06:46] Why this matters: many "at risk" roles are held by women.
[09:35] "AI is cheating" + the grammar glow-up story.
[11:42] "Ask AI questions - AI doesn't judge you."
[13:00] Relationship mindset: don't be transactional with AI; ask better questions.
[16:21] "Hold the door" leadership and building rooms where women feel welcome.
[21:43] Two tactical tips: solve a pain point first + schedule learning time.
[33:48] Grants as a funding path for training and tech improvements.
[38:57] Say yes to conversations even if you "don't know enough."
[41:18] Where to find Chris + her podcast I Am Unbreakable.

If you're a woman entrepreneur (or you lead women in your organization), take one action from this episode this week: ask your team what task they hate most, pick one painful workflow and test AI there first, and put one hour of AI learning on the calendar during business hours. And if you're already "in the room" with AI? Hold the door. Invite someone in.

Connect with Susan Diaz on LinkedIn to get a conversation started.

Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

Connect with Chris McMartin on LinkedIn.

    42 min
  8. 2025-12-21

    EP 267 Does AI Make you More or Less Creative? (Paintbrush vs Photocopier)

    AI can feel like a creativity cheat code… or like the death of originality. In this short, punchy solo episode, Susan argues the truth is simpler: AI doesn't create creativity. It creates options. Creativity still belongs to the driver - your taste, courage, and point of view.

    Episode summary

    Susan tackles a question she hears constantly: does AI expand creativity or flatten it? Her answer: it depends on how you're using it. If you use AI like a photocopier - generate a first draft and ship it unchanged - you're not becoming more creative. You're becoming more efficient at being generic. But if you use AI like a paintbrush - as a sparring partner, a possibility engine, a constraint generator, or a remix assistant - it can shorten the distance between the blank page and something you can shape. The episode is a practical reset on what creativity actually requires in the AI era: taste, discernment, point of view, and the courage to be a little weird on purpose.

    Key takeaways

    AI doesn't create creativity. It creates options. Creativity still requires intent, judgement, and discernment.
    Paintbrush vs photocopier is the core distinction. Photocopier: you accept the default output and publish it. Paintbrush: you use AI to generate raw material, then you curate and shape.
    People can "spot AI writing" when there's no point of view. Generic writing usually means the human outsourced the messy parts: emotional clarity, lived experience, and risk.
    Creativity needs both taste and courage. AI doesn't do courage. Taste is what you choose - and what you reject.
    Better prompts aren't about asking for "the answer." They're about asking for raw material: angles, metaphors, structures, constraints, and pushback.

    Ways to use AI like a paintbrush

    Try these modes when you feel stuck:
    Sparring partner: "Push back on this." "Argue the opposite." "What am I not seeing?"
    Possibility engine: "Give me 20 angles." "Give me metaphors." "Suggest surprising structures."
Constraint generator: "Make this an 8-word active headline." "Explain it for a 12-year-old." "Turn it into a story with a clear villain."
Remix assistant: "Turn this into a framework." "Make it a checklist." "Turn it into a debate."

The mini exercise Susan gives you

Pick something you're working on (a post, pitch, talk, plan). Then ask AI:
What's the most boring version of this?
What's the boldest version of this?
What's the truest version of this for me?
Then you decide what to keep, amplify, or reject. That's the creative act.

Episode highlights

00:02.36 - The core question: does AI make you more or less creative?
00:18.46 - "It depends who's driving" (AI amplifies you)
00:55.04 - Core belief: AI creates options, not creativity
01:23.10 - Paintbrush vs photocopier framing
01:38.81 - The photocopier trap: shipping first drafts unchanged
02:16.38 - Why people can "tell it's AI" (patterned, POV-less output)
03:03.35 - Creativity requires taste + courage (AI doesn't do courage)
03:19.86 - Choose the paintbrush path
03:37.29 - Use AI as a sparring partner (push back / argue the opposite / what am I missing?)
03:47.10 - Use AI as a possibility engine (20 angles, metaphors, surprising structures)
04:18.08 - Use constraints to force originality (8-word headline, etc.)
05:16.92 - What "taste" actually is: what you choose (and reject)
05:45.66 - Ask for raw material, not "the answer"
06:05.56 - Mini exercise setup (post, pitch, talk, plan)
06:21.82 - Three-question prompt: boring vs bold vs true
07:27.10 - When AI makes you less creative: avoiding the thinking
07:38.96 - Don't quit AI - change the prompt
07:51.10 - Ask for "stranger / truer / mine"
08:03.45 - Wrap: creativity is in the driver's hands
08:19.29 - Quick ask: rating + review

If you're feeling creatively stuck, don't ask AI to be creative for you. Ask it to help you explore - then use your taste to choose.
And if you're enjoying this 30-day podcast-to-book sprint, leave us a quick rating + short review to help more people find the show.

Connect with Susan Diaz on LinkedIn to get a conversation started.

Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here.

    9 min
5 out of 5, 19 Ratings

About

"AI Literacy for Entrepreneurs", with host Susan Diaz, helps you integrate artificial intelligence into your business operations. We'll help you understand and apply AI generative in a way that is accessible and actionable for entrepreneurs at all levels. With each episode, you'll gain practical insights into effective AI strategies and tools, hear from leading practitioners with deep expertise and diverse use cases, and learn from the successes and challenges of fellow business owners in their AI adoption journey. Join us for the simplified knowledge and inspiration you need to leverage AI effectively to level up your business.
