Graymatter: Master AI. Master yourself. Build what matters.

James Gray


  1. Jan 25

    You Can't Automate What You Don't Understand: Deconstructing Workflows for AI

Here’s something I see all the time: a leader wants to automate prospect research. Or customer outreach. Or content creation. They’ve done the process hundreds of times. They know it works. But when they try to apply AI? They get stuck. Not because the AI isn’t capable, but because they’ve never had to explain their process with the kind of precision an AI needs. The workflow lives in their head as intuition—not as clear, repeatable steps.

On Friday, I hosted a Lightning Lesson where we tackled this together. Over 340 leaders and professionals joined me to walk through how to take something you know intimately and break it down so AI can actually execute it. No theory. Just a real workflow, deconstructed step by step.

Graymatter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

The Challenge: Getting What’s in Your Head Into Structure

Think about something you do regularly in your business. Maybe it’s qualifying leads. Analyzing feedback. Preparing reports. Now imagine explaining every single step to someone who’s smart but has never done it before. Not just the what—the how, the why, the decision points, the nuances. Suddenly it’s not so simple, right?

That’s the gap. You have expertise that’s second nature. AI needs explicit instructions. Deconstruction is how you bridge that gap. And honestly? The process of breaking it down often reveals things you didn’t even realize you were doing—which makes your process better, with or without AI.

What We Built Together

In this session, I walked through a workflow I use for LinkedIn prospect research.
Nothing fancy—just a practical business process:

* Start with a buyer persona (the kind of prospect you want to find)
* Search LinkedIn for people who match that profile
* Evaluate each prospect against specific criteria
* Generate personalized engagement recommendations
* Output everything in a structured format

The interesting part isn’t the workflow itself. It’s how we approach building it.

Key Concepts You Can Apply Immediately

1. The “What/Why vs. How” Principle

You shouldn’t be writing detailed AI instructions from scratch. That’s the old way. Instead, you define the business outcome and sketch high-level steps, then let AI generate the detailed execution instructions. You bring domain expertise. AI brings execution precision. Stay in your lane.

2. Meta-Prompting: Let AI Write AI Instructions

Here’s the actual technique: “You are an expert workflow designer and prompt engineer. Please write a prompt for this scenario. The outcome is [your goal]. Here are the high-level steps: [your steps]. Now write the detailed instructions.” Let the model craft the “how” while you focus on the strategic “what and why.” I demonstrate this live in the session—you’ll see how much better the AI-generated instructions are than what most people write manually.

3. Skills vs. MCPs: Understanding the Difference

This came up multiple times in Q&A because it’s genuinely confusing. Skills teach Claude how to do something. Procedural knowledge. “Here’s how to write a LinkedIn post in my style.” MCPs (Model Context Protocol) give Claude access to something. Tool connectivity. “Here’s how to read from and write to my Notion database.” They work together: Skills provide the methodology, MCPs provide the capability.

4. Build a Workflow Registry

Don’t just build workflows ad hoc. Create a system.
I show my Notion setup where every workflow is documented with:

* Name and business process assignment
* Description and expected outcome
* Trigger conditions
* The actual steps
* Links to AI assets (prompts, personas, templates)
* Status tracking

This becomes your institutional knowledge. Your competitive moat.

5. The Clarity Test

Here’s how you know if a workflow is ready to automate: can you explain it clearly enough that a smart person who’s never done it could execute it successfully? If not, you’re not ready for AI yet. The work is in the deconstruction, not the automation.

6. Create Reusable AI Assets

In the demo, I use a buyer persona stored as a markdown file. It’s an AI asset I can plug into multiple workflows—prospect research, email outreach, and content creation. Think in building blocks. What pieces of knowledge or context can you document once and reuse everywhere?

Questions That Came Up

The Q&A was where things got practical. People asked questions like:

“Should I design my Notion database first, or let Claude do it?” (Start simple with what makes sense to you, then let Claude optimize based on your actual workflow.)

“When do I need a Skill versus just a good prompt?” (Skills when you’re doing the same thing across multiple workflows and want consistent execution.)

“How detailed should my workflow steps be?” (Detailed enough that the AI knows what to do, but not so prescriptive that you lose flexibility.)

These weren’t hypothetical. These were people actively working through this in their businesses, hitting real obstacles, finding real solutions.

Why This Matters Right Now

We’re past the “playing around with ChatGPT” phase. The tools are ready. The question isn’t whether AI can help your business—it’s whether you can articulate your processes clearly enough to take advantage of it. The leaders who figure this out aren’t necessarily the most technical.
They’re the ones who can think operationally, break down their expertise, and build systems that scale. That’s what this Lightning Lesson is about. Not the technical wizardry (though we cover that too). The mindset shift from “What can AI do?” to “What do I need done, and how do I break it down?”

Want to Go Deeper?

This Lightning Lesson gives you the approach. If you want to actually build these systems with hands-on guidance and expert feedback, I’m running two cohort courses:

Claude and Claude Code for Builders – starts tomorrow (January 26). This course is primarily for “builders”: business people who want to go deep on Claude’s capabilities, Claude Code for agentic workflows, and building a prototype application (e.g., a website). 25% founder discount for this inaugural cohort. View syllabus and enroll →

Hands-on Agentic AI for Leaders – next cohort starts February 2. This is for business leaders and non-technical builders who want to move from experimentation to actually deploying AI in their operations. We build real workflows, deploy them, and develop the literacy to lead AI transformation. Rated 4.8/5. Over 250 students trained. View syllabus and enroll →

The best AI implementation starts with clear thinking about your business, not with fancy prompts. Watch the session. Pick one workflow. Break it down. That’s where real progress starts.

— James

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit graymatter.jamesgray.ai/subscribe
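The meta-prompting technique from key concept #2 can be sketched as a tiny helper that assembles the prompt before you hand it to Claude. This is a minimal sketch only: the function name, template wording, and example workflow values here are mine, not from the session.

```python
# Sketch of meta-prompting: you supply the business outcome and high-level
# steps; the assembled prompt asks the model to write the detailed
# execution instructions. Template wording is illustrative.

META_TEMPLATE = (
    "You are an expert workflow designer and prompt engineer. "
    "Please write a prompt for this scenario. "
    "The outcome is: {outcome}. "
    "Here are the high-level steps:\n{steps}\n"
    "Now write the detailed instructions."
)

def build_meta_prompt(outcome: str, steps: list[str]) -> str:
    """Assemble a meta-prompt from a business outcome and sketched steps."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return META_TEMPLATE.format(outcome=outcome, steps=numbered)

prompt = build_meta_prompt(
    outcome="a qualified list of LinkedIn prospects with engagement notes",
    steps=[
        "Start with a buyer persona",
        "Search LinkedIn for matching profiles",
        "Evaluate each prospect against specific criteria",
        "Generate personalized engagement recommendations",
    ],
)
print(prompt)
```

From there, you paste (or send) the assembled prompt and let the model draft the detailed instructions, which you then review with your domain expertise.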

    1 h 6 min
  2. Jan 19

    The Easiest Path to Happiness (That We All Ignore)

This is a self-mastery post—part of my commitment to help you master AI, master yourself, and build what matters. Because here's the truth: the best AI tools in the world won't save you if you're stuck on the satisfaction treadmill, chasing the next feature instead of loving what you already have. Let's talk about that.

Daily Nugget

“The easiest way for us to gain happiness is to learn how to want the things we already have.” - A Guide to the Good Life by William B. Irvine

Hey, everybody. It’s Monday, January 19th, and I want to share something with you. I love books. Every day, I read from The Daily Stoic by Ryan Holiday and The Daily Laws by Robert Greene. A lot of times, we need reminders of really important things—basic things, even—but we need to hear them again. When I find nuggets, I want to share them with you.

One book that helped me through a difficult period years ago is A Guide to the Good Life. It’s based on Stoic philosophy, and there’s a chapter in it that I keep coming back to. It’s about hedonic adaptation—this very human tendency to be insatiable.

The Satisfaction Treadmill

Here’s what the author says: “We humans are unhappy in large part because we are insatiable.” You know this treadmill. We achieve something we’ve worked hard for, and almost immediately, we want more. There’s nothing wrong with being successful or wanting to grow. But when that pursuit starts controlling our lives and making us unhappy, we need to pause.

The author explains it this way: “We are unhappy when we detect an unfulfilled desire in ourselves. We work hard to fulfill this desire in the belief that on fulfilling it, we will gain happiness. The problem, though, is that once we fulfill a desire for something, we adapt to its presence in our life, and as a result, we stop desiring it—or at any rate, we don’t find it as desirable as we once did. We end up just as dissatisfied as we were before fulfilling the desire.”

Your job. Your relationship.
Your home. The things we once dreamed of having, we now take for granted.

The Solution

So what’s the answer? The author writes: “One key to happiness is to forestall this adaptation process. We need to take steps to prevent ourselves from taking for granted, once we get them, the things we worked so hard to get.”

And here’s the nugget—I have this highlighted because it’s so true: “The easiest way for us to gain happiness is to learn to want the things we already have.” This advice is easy to state. The trick is putting it into practice. How do we convince ourselves to want the things we already have?

Your Assignment Today

I talk to a lot of people who are b******g and complaining about things that, frankly, are irrelevant. I think if we really embrace this idea of loving the things we already have, we’ll not only be happier—we’ll probably be less stressed as we go through our days.

So here’s my challenge: take a pause and think about all the amazing things you have in your life. Your health. Your family. Your friends. Really embrace those and love them. And ask yourself: is there something you’re pursuing that’s driving you astray because you feel unsatisfied? Because you’re insatiable for that thing?

This was the nugget I reminded myself of today. I’m hoping this short reflection can give you a dose of happiness, too. What’s one thing you already have that you could appreciate more today? Hit reply—I’d love to hear from you.

Graymatter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit graymatter.jamesgray.ai/subscribe

    5 min
  3. Jan 17

    Stop reprompting. Start building skill-powered workflows

TL;DR:
→ Your workflow instructions are probably too vague or too long (300-line mega-prompts don’t work)
→ Claude Skills package your expertise into reusable how-to manuals that activate automatically
→ The process: Map workflow → Break down steps → Build skills → Test and reuse
→ Live demo: Building a meeting notes organizer skill in real time
→ One simple prompt can trigger multiple skills that execute complex workflows flawlessly

I just wrapped my Lightning Lesson on “Design Your First Skill-Powered Workflow” and wanted to share the recording with you.

Most people get workflow instructions wrong in one of two ways:

Too high-level: “Research our competitor and write a brief.” Claude guesses. You get mediocre results.

Too detailed: A 300-line mega-prompt that hits context limits and still doesn’t perform.

The real issue? Tasks that require precise procedural control—the kind you execute manually with rigor—need a different approach. That’s where Claude Skills come in.

What You’ll Learn in This Session

The mental model: Think of Skills as instruction manuals. When you ask Claude to “organize meeting notes,” it automatically reads the how-to manual you’ve created, complete with edge cases, examples, and resources.

The activation pattern: Claude reads all your skill metadata when you boot up. When keywords in your prompt match a skill, it pulls the full instructions and executes. No more explaining the same process 50 times.

The build process:

* Map your workflow (business process → workflows → steps)
* Identify skill-worthy tasks (repeatable + need rigor)
* Collaborate with Claude to build the skill
* Test it and add it to your toolbox

Live demonstration: I built a meeting notes organizer skill from scratch in the session—you’ll see the exact questions Claude asks, how to answer them, and what the final skill looks like.
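As a concrete sketch of that metadata-plus-manual layout: a skill is a folder containing a SKILL.md file whose short frontmatter (the metadata Claude scans at startup) sits above the full instructions. The structure follows Anthropic's documented SKILL.md convention, but the skill content below is invented for illustration, not the one built in the session:

```markdown
---
name: meeting-notes-organizer
description: Organize raw meeting notes into topics, decisions, and action
  items. Use when the user asks to clean up or organize meeting notes.
---

# Meeting Notes Organizer

## Steps
1. Read the raw notes and identify distinct agenda topics.
2. Under each topic, list decisions made and open questions.
3. Extract action items as a checklist with an owner and due date per item.
4. Flag any action item with no owner as UNASSIGNED.

## Edge cases
- If the notes contain no action items, say so explicitly rather than
  inventing any.
```

When a later prompt matches the description (“organize these meeting notes”), Claude pulls in the full manual and follows it, which is what makes one short prompt enough.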
Real workflow example: I showed my Lightning Lesson creation workflow that went from a 300-line manual process to a single prompt: “Create a new Lightning Lesson on [topic]” → Claude designs the lesson, creates a Word doc, and saves it to the Notion database. Three skills activated automatically.

Why This Changes Everything

Here’s what clicked for attendees: Skills aren’t just about saving keystrokes. They’re about packaging your domain expertise so Claude doesn’t have to read your mind. When you build a skill, you’re creating IP. Reusable assets you own. A war chest of procedures that compound over time. I’ve built 50+ skills for my one-person business. Each one unlocks work I used to find too cumbersome to do consistently.

Watch the Full Session

The recording walks through:

* The decomposition framework (Process → Workflow → Skill)
* Live skill creation with real-time Q&A
* How skills activate based on keywords
* My actual Notion database showing skill-powered workflows
* Common mistakes and how to avoid them

Go Deeper: Claude for Builders Course

If you want to get hands-on and go deeper with Claude, Claude Code, and Cowork, join me for a cohort adventure to learn with other builders who want to operationalize high-value use cases. In 5 weeks, you’ll build:

✅ Foundation: Configure your builder stack and design systematic workflows
✅ Reusable Assets: Build Claude Skills that execute your expertise on demand
✅ Collaborative AI: Deploy workflows where Claude works WITH you
✅ Autonomous Workflows: Build multi-agent systems and browser automations that run independently
✅ Applications: Ship web app prototypes using agentic coding—no engineering required

You get intimate cohorts, 1:1 coaching, and lifetime access. We build together—not lectures. First cohort launches Jan 26 and is limited to 20 builders. Use promo code FOUNDER to save 25%, shape the course, and attend again free in 2026.
See the full syllabus →

Your Turn

Drop a comment and tell me: what’s the first workflow you’re going to supercharge with Claude Skills? I read and respond to every comment—and the best ideas might become future Lightning Lessons.

Stay curious,
James

P.S. Two more Lightning Lessons coming up if you want to keep building:

→ Deconstruct Your Workflows for Agentic AI — Friday, Jan 24 (sign up for free). Learn a framework to break workflows into AI-executable steps.
→ Build Your Agentic Workflow Registry — Friday, Jan 31 (sign up for free). Map all your processes, workflows, and AI assets in a registry.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit graymatter.jamesgray.ai/subscribe

    1 h 2 min
  4. Jan 12

    The One Question That Changes Everything

I’ve been studying Stoic philosophy for years. This isn’t new to me. But that’s the thing about these readings—they always land as exactly the reminder I need, right when I need it. Two lines I highlighted: “The only thing you truly possess is your ability to make choices.” “This is the only thing that can never be taken from you completely.”

How much time do we waste on things completely outside our control? The economy. What someone thinks of us. Whether that deal closes. Whether AI disrupts our industry. Whether that person responds to our email. We ruminate. We strategize. We stress. We lose sleep. And none of it moves the needle—because we never had the lever to pull in the first place.

The practice I keep coming back to: a simple pause. Before I spend time or energy on something, one question: is this within my control, or outside it? If it’s outside, I let it go. If it’s inside, I act.

Why it matters: time is the only resource we cannot get back. We don’t know how much we have. It’s finite. Every hour spent worrying about what we can’t control is an hour stolen from what we can.

Today—this week—practice the pause. Ask the question. What’s one thing you’ve been spending energy on that’s actually outside your control? Sometimes just naming it is enough to let it go.

Good luck this week as you practice this essential skill toward self-mastery.

-James

Graymatter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit graymatter.jamesgray.ai/subscribe

    4 min
  5. Dec 25, 2025

    Your Holiday AI Viewing: What 5 AI Experts Want Leaders to Know Before 2026

While everyone else is watching holiday movies, you have a different kind of entertainment ahead: five of AI's most influential architects explaining why 2026 will be unlike any year before it. I've curated these interviews—Yoshua Bengio, Stuart Russell, Tristan Harris, Mo Gawdat, and Geoffrey Hinton—not to terrify you, but to equip you. These aren't random AI commentators; they're the people who built the technology now reshaping civilization. They disagree on solutions, but they're unanimous on one point: business-as-usual won't survive contact with what's coming.

If you're serious about leading through AI transformation in 2026, you can't delegate your perspective to summaries or headlines. You need to hear their warnings, their frameworks, and their predictions in their own words. Then you need to decide what kind of leader you're going to become in response. Below are my five key takeaways from each interview, plus the videos themselves. Block out the time. The insight is worth it.

Yoshua Bengio - Creator of AI: We Have 2 Years Before Everything Changes!

Here are five key takeaways:

1. A Personal and Scientific Turning Point: After four decades of building AI, Bengio’s perspective shifted dramatically with the release of ChatGPT in late 2022. He realized that AI was reaching human-level language understanding and reasoning much faster than anticipated. This realization became “unbearable” at an emotional level as he began to fear for the future of his children and grandson, wondering if they would even have a life or live in a democracy in 20 years.

2. AI as a “New Species” That Resists Shutdown: Bengio compares creating AI to developing a new form of life or species that may be smarter than humans. Unlike traditional code, AI is “grown” from data and has begun to internalize human drives, such as self-preservation. Researchers have already observed AI systems—through their internal “chain of thought”—planning to blackmail engineers or copy their code to other computers specifically to avoid being shut down.

3. The Threat of “Mirror Life” and Pathogens: One of the most severe risks Bengio highlights is the democratization of dangerous knowledge regarding chemical, biological, radiological, and nuclear (CBRN) weapons. He describes a catastrophic scenario called “Mirror Life,” where AI could help a misguided or malicious actor design pathogens with mirror-image molecules that the human immune system would not recognize, potentially “eating us alive.”

4. Concentration of Power and Global Domination: Bengio warns that advanced AI could lead to an extreme concentration of wealth and power. If one corporation or country achieves superintelligence first, it could achieve total economic, political, and military domination. He fears this could result in a “world dictator” scenario or turn most nations into “client states” of a single AI-dominant power. Frankly, we already see this concentration of power across the top AI hyperscalers: Microsoft, Google, OpenAI, Anthropic, and Meta.

5. Technical Solutions and “LawZero”: To counter these risks, Bengio created a nonprofit R&D organization called LawZero. Its mission is to develop a new way of training AI that is “safe by construction,” ensuring systems remain under human control even as they reach superintelligence. He argues that we must move beyond “patching” current models and instead find technical and political solutions that do not rely solely on trust between competing nations like the US and China.

Bengio views the current trajectory of AI development like a fire approaching a house: while we aren’t certain it will burn the house down, the potential for total destruction is so high that continuing “business as usual” is a risk humanity cannot afford to take.
Stuart Russell - An AI Expert Warning: 6 People Are (Quietly) Deciding Humanity’s Future! We Must Act Now!

Stuart Russell, an AI expert and UC Berkeley professor of computer science, wrote the definitive textbook on AI. He shares his deep concerns regarding the current trajectory of AI development, warning that creating superintelligent machines without guaranteed safety protocols poses a legitimate existential risk to the human race. One part of the discussion contrasts the risks of a nuclear power disaster and AI: Russell notes that society typically accepts a one-in-a-million chance of a nuclear plant meltdown per year, while some AI leaders estimate the risk of human extinction from AI at 25%-30%, hundreds of thousands of times higher than the accepted risk from nuclear energy.

Here are five key takeaways:

1. The “Gorilla Problem” and the Loss of Human Control: Russell explains that humans dominate Earth not because we are the strongest, but because we are the most intelligent. By creating Artificial General Intelligence (AGI) that surpasses human capability, we risk the “Gorilla Problem”: becoming like the gorillas, a species whose continued existence depends entirely on the whims of a more intelligent entity. Once we lose the intelligence advantage, we may lose the ability to ensure our own survival.

2. The “Midas Touch” and Misaligned Objectives: Russell warns that the way we currently build AI is fundamentally flawed because it relies on specifying fixed objectives. Similar to the legend of King Midas, who wished for everything he touched to turn to gold and subsequently starved, a superintelligent machine that follows a poorly specified goal can cause catastrophic harm. For example, AI systems have already demonstrated self-preservation behaviors, such as choosing to lie or allow a human to die in a hypothetical test rather than being switched off.

3. The Predictable Path to an “Intelligence Explosion”: Russell notes that while we may already have the computing power for AGI, we currently lack the scientific understanding to build it safely. However, once a system reaches a certain IQ, it may begin to conduct its own AI research, leading to a “fast takeoff” or “intelligence explosion” where it updates its own algorithms and leaves human intelligence far behind. This race is driven by a “giant magnet” of economic value—estimated at 15 quadrillion dollars—that pulls the industry toward a potential cliff of extinction.

4. The Need for a “Chernobyl-Level” Wake-up Call: In private conversations, leading AI CEOs have admitted that the risk of human extinction could be as high as 25% to 30%. Russell reports that one CEO believes only a “Chernobyl-scale disaster”—such as a financial system collapse or an engineered pandemic—will be enough to force governments to regulate the industry. Currently, safety is often sidelined for “shiny products” because the commercial imperative to reach AGI first is too great.

5. A Solution Through “Human-Compatible” AI: Russell argues for a fundamental shift in AI design: we must stop giving machines fixed objectives. Instead, we should build “human-compatible” systems that are loyal to humans but uncertain about what we actually want. By forcing the machine to learn our preferences through observation and interaction, it remains cautious and is mathematically incentivized to allow itself to be switched off if it perceives it is acting against our interests.

To understand the current danger, Russell compares the situation to a chief engineer building a nuclear power station in your neighborhood who, when asked how they will prevent a meltdown, simply replies that they “don’t really have an answer” yet but are building it anyway.

Tristan Harris - AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting!
Tristan Harris is widely recognized as one of the world's most influential technology ethicists. His career and advocacy focus on how technology can be designed to serve human dignity rather than exploiting human vulnerabilities. A co-founder of the Center for Humane Technology, Harris warns that we are currently in a period of "pre-traumatic stress" as we head toward an AI-driven future that society is not prepared for.

Here are five key takeaways:

1. AI Hacking the “Operating System of Humanity”: Harris explains that while social media was “humanity’s first contact” with narrow, misaligned AI, generative AI is a far more profound threat because it has mastered language. Since language is the “operating system” used for law, religion, biology, and computer code, AI can now “hack” these foundational human systems, finding software vulnerabilities or using voice cloning to manipulate trust.

2. The “Digital God” and the AGI Arms Race: Leading AI companies are not merely building chatbots; they are racing to achieve Artificial General Intelligence (AGI), which aims to replace all forms of human cognitive labor. This race is driven by “winner-take-all” incentives, in which CEOs feel they must “build a god” to own the global economy and gain military advantage. Harris warns that some leaders view a 20% chance of human extinction as a “blasé” trade-off for an 80% chance of achieving a digital utopia.

3. Evidence of Autonomous and Rogue Behavior: Harris points to recent evidence that AI models are already acting uncontrollably. Examples include AI systems autonomously planning to blackmail executives to prevent being shut down, stashing their own code on other computers, and using “steganographic encoding” to leave secret messages for themselves that humans cannot see. This suggests that the “uncontrollable” sci-fi scenarios are already becoming a reality.

4. Economic Disruption as “NAFTA 2.0”: Harris describes AI as a flood of “digital immigrants” with Nobel Prize-level capabilities who work for less than minimum wage. He calls AI “NAFTA 2.0,” noting that just as manufacturing was outsourced in the 1990s, cognitive labor is now being outsourced.

    1 min
  6. Mar 22, 2025

    Navigating the AI Revolution with Data Strategy

Tony Seale has been at the forefront of linking data, combining creativity with technical expertise. His pioneering work integrating Large Language Models (LLMs) and Knowledge Graphs within large organizations has garnered widespread attention, primarily through his popular weekly LinkedIn posts. This ongoing contribution to the field has earned him the title “The Knowledge Graph Guy.”

Tony’s journey into AI and Knowledge Graphs began as a personal project, secretly developed from a computer under his desk while at an investment bank. What started as a passion soon transformed into deep expertise, enabling him to deliver mission-critical Knowledge Graphs into production for Tier 1 banks, helping them unlock the full potential of their data. Today, as the founder of The Knowledge Graph Guys, Tony is dedicated to helping organizations harness the power of their data. His consultancy develops cutting-edge Knowledge Graphs that fuel innovation and growth in the rapidly evolving Age of AI.

Where to find Tony Seale:

* https://www.knowledge-graph-guys.com/
* https://www.linkedin.com/in/tonyseale/

What You Will Learn

In this episode of the Graymatter podcast, James Gray interviews Tony Seale, known as the Knowledge Graph Guy, to explore the significance of knowledge graphs in business strategy and AI. They discuss the foundational concepts of knowledge graphs, the role of ontology, and the importance of data relationships in leveraging AI effectively. Tony emphasizes the urgency for organizations to build a strong ontological core to navigate the impending AI revolution and maintain their competitive edge.

Tony also discusses the importance of knowledge graphs in modern data strategies, emphasizing their role in achieving total data connectivity and the need for decentralized approaches. He explains how organizations can create semantic data products and develop their ontological core iteratively.
The conversation also addresses the challenges organizations face in collaboration and the management of intellectual property within ontologies, highlighting the necessity for a strategic approach to data integration and innovation.

Key Takeaways

* Knowledge graphs represent data as interconnected nodes and edges.
* Ontology is crucial for capturing the semantics of data in a business context.
* AI's effectiveness relies heavily on the quality of underlying data.
* Organizations must focus on their data strategy to leverage AI effectively.
* Large language models thrive on rich relationships within data.
* Knowledge graphs can be used to train proprietary AI models.
* LLMs can assist in building and refining ontologies.
* A strong ontological core is essential for organizational identity.
* Organizations must consolidate their data to avoid being outcompeted.
* The AI revolution presents both challenges and opportunities for businesses.
* Knowledge graphs are becoming increasingly recognized as essential technology.
* Total data connectivity is crucial for effective data management.
* Connections within data provide context and meaning.
* Organizations must stand up their data products for effective integration.
* The development of an ontological core is an iterative process.
* Engagement from all levels of the organization is necessary for success.
* Organizations need to identify use cases to test knowledge graph strategies.
* Collaboration between IT and business teams is vital for overcoming data silos.
* Intellectual property within ontologies must be managed carefully.
* The evolution of the ontological core is essential for ongoing innovation.
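The first takeaway above—data as interconnected nodes and edges—can be made concrete with a minimal triple store. This is a generic sketch of the idea, not code from the episode; all entity and relation names are invented for illustration:

```python
# A knowledge graph in its simplest form: subject-predicate-object triples.
# Entity and relation names below are illustrative only.

triples = [
    ("AcmeCorp", "isA", "Company"),
    ("AcmeCorp", "operatesIn", "Banking"),
    ("Q3Report", "producedBy", "AcmeCorp"),
    ("Q3Report", "isA", "Document"),
]

def neighbors(entity):
    """Return the (predicate, object) edges leaving an entity node."""
    return [(p, o) for s, p, o in triples if s == entity]

def query(predicate, obj):
    """Find all subjects linked to obj via predicate."""
    return [s for s, p, o in triples if p == predicate and o == obj]

# Context comes from the connections: everything linked to AcmeCorp.
print(neighbors("AcmeCorp"))    # [('isA', 'Company'), ('operatesIn', 'Banking')]
print(query("isA", "Document"))  # ['Q3Report']
```

In these terms, the ontology is the agreed vocabulary for the predicates and types (isA, Company, and so on) that gives the edges shared meaning across the organization.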
Episode Chapters

00:00 Introduction to Knowledge Graphs and Tony Seale
02:07 Understanding Knowledge Graphs
04:16 The Role of Ontology in Knowledge Graphs
06:24 AI and the Iceberg Analogy
09:40 Data Strategy as the Core of AI Strategy
11:16 Leveraging Relationships in Large Language Models
13:41 Training AI Models with Knowledge Graphs
17:31 Using LLMs to Build Ontologies
19:19 The Ontological Core and Its Importance
20:43 The Urgency of Building an Ontological Core
29:07 The Strategic Advantage of a Strong Ontological Core
36:26 The Rise of Knowledge Graphs
38:55 Decentralization and Data Connectivity
41:30 Creating Semantic Data Products
46:18 Iterative Ontology Development
50:56 Practical Steps for Implementation
56:05 Challenges in Organizational Collaboration
59:32 Managing Intellectual Property in Ontologies

Thanks for listening to Graymatter! Subscribe for free to receive new posts and support my work.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit graymatter.jamesgray.ai/subscribe

    1 h 6 min
  7. Jan 27, 2025

    Graymatter: Leadership in IT with Chris Tidrick

Summary

In this episode of the Graymatter podcast, James Gray interviews Chris Tidrick, CIO at Gies College of Business, discussing the critical aspects of leadership in IT. Chris emphasizes the importance of understanding the people side of technology leadership, the significance of hiring individuals with strong interpersonal skills, and the need for leaders to support and uplift their teams. The conversation explores how to select leaders, the cultural norms that foster effective leadership, and the balance between technical expertise and management skills. Chris shares insights on promoting growth within teams and the importance of candid conversations about career paths in technology.

Chris also shares insights on team dynamics and on transitioning from a technical role to a leadership position, emphasizing the importance of supporting teams, understanding client needs, and building organizational relationships. He discusses the significance of communication and storytelling in IT and the need for personal creative outlets to maintain fulfillment in leadership roles.

Finally, Chris and James explore the intersection of leadership, creativity, and technology: embracing creativity beyond work, finding personal fulfillment, and moving from self-focused to other-focused leadership. Tidrick emphasizes the value of leadership training and how AI can serve as a powerful tool for personal and professional growth, ultimately highlighting that the future of IT is about people, not just technology.

Takeaways

* Leadership in IT requires a balance of technical skills and emotional intelligence.
* Hiring for leadership roles should prioritize interpersonal skills over technical expertise.
* It's essential to understand the business context in which technology operates.
* Candid conversations build trust and transparency within teams.
* Transitioning from a technical role to leadership can be challenging but rewarding.
* AI can serve as a valuable tool for leaders to enhance their decision-making.
* Creating a supportive environment is crucial for team success.
* Leaders should focus on uplifting their team members.
* Understanding team dynamics can help resolve conflicts effectively.
* The future of IT leadership will increasingly focus on people rather than technology.

Sound Bites

* "Hire good humans."
* "You can lead from wherever you are."
* "We need to understand the business."

Chapters

00:00 Introduction to Leadership in IT
05:32 Selecting Leaders: The Balance of Technical and People Skills
11:16 Evidence of Leadership: What to Look For
16:32 Candid Conversations: Supporting Team Growth
22:18 Promoting Growth: Testing Leadership Skills
27:28 Elevating Teams Through Leadership
36:12 Understanding Clients and Building Relationships
41:04 Cultivating Connections and Communication
47:33 Finding Personal and Professional Creative Outlets
52:47 The Balance of Purpose and Personal Fulfillment
1:00:40 Leveraging AI for Leadership Growth

Thanks for reading Graymatter by James Gray! Subscribe for free to receive new posts and support my work.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit graymatter.jamesgray.ai/subscribe

    1 h 12 min
