Building a Better Geek

Emmanuella Grace & Craig Lawton

Welcome to Building a Better Geek, where we explore the intersection of technology, psychology and well-being. For high-functioning introverts finding an audience - if you like humans at least as much as machines - if you want to go deep on leadership, communication and all the things that go into building you. Emmanuella Grace is a communication coach and consultant, working with individuals and organisations to develop and strengthen the skills of voice and communication. Craig is an experienced Technologist and Leader. Connect with us using the details below.

  1. 1 day ago

    TruthAmp: Episode 11 - The Times They Are AI-Changin'

    Watch here: https://youtu.be/zobiv1u9oJk

    Craig and Emmanuella debate whether AI-generated news is more trustworthy than traditional journalism—and what we lose if media dies.

    The Saturday Morning Shift
    Craig's Evolution: His Saturday mornings used to mean reading the Financial Review cover-to-cover. Now he starts there but ends up in Claude or Perplexity, asking all the questions journalists didn't answer. This raises an uncomfortable question: should he trust AI-generated information more than journalistic organisations?
    The Argy-Bargy: This conversation stems from a Friday text exchange in which Craig had a strong reaction to a podcast Emmanuella shared (though hunger from a failed drone food delivery may have contributed). His response sparked a debate about media bias, AI curation, and what happens to truth when traditional journalism collapses.

    The Media Bias Problem
    Emmanuella's Approach: She reads everything—left, right, moderate—because everyone has an agenda. Journalists have personal biases, plus organisational biases from their employers. She consumes The Australian, The Age, The Guardian, and the AFR weekly, plus their podcasts whilst gardening or parenting.
    The Vienna Housing Example: Craig found an ABC article praising Vienna's rental market model (a post-WWI decision to treat housing as a right, not an investment). The article was overwhelmingly positive, with no critical analysis. He wanted to know the downsides, compare other markets, and challenge his comfort—but that information wasn't provided.
    Sins of Omission: Media doesn't just slant stories through what they say—they slant through what they don't say and which stories they don't cover. Craig found only one article on housing in a week, despite it being arguably Australia's top issue.

    AI as Research Tool
    Craig's Method (a hypothetical code sketch of this multi-model approach follows these notes):
    - Uses Perplexity with multiple models, because different AI providers have different biases
    - Asks for facts first, media interpretation second
    - Can request exclusion of media sources to focus on official statistics
    - Investigates topics like Australia's "gold plating era" (over-investment in energy infrastructure 10-20 years ago, still inflating bills today)
    The Insiders Frustration: Craig used to yell at Sunday morning political shows for asking inside-game questions instead of obvious, substantive ones. AI lets him ask his own questions and get deeper answers.

    The Critical Warning: Who Holds Power Accountable?
    Emmanuella's Concern: AI can only report what's been fed into it. Investigative journalists find information people don't want us to know—corruption, abuse, hidden agendas. Without them, who holds politicians, police, and corporations accountable? AI can't do that work. Not listicle writers ("5 Ways to Please Your Partner"), but real investigative journalists doing essential democratic work.
    The Business Model Crisis: Traditional media is economically challenged. New media (Substack, independent journalists) hasn't taken hold in Australia as it has elsewhere. Craig follows individual writers he trusts, not mastheads.

    The Trust Paradox
    Trust in traditional media: historically low. Trust in AI: no track record yet.
    AI's Limitations:
    - Nearly all training data is from the last 20 years (sparse pre-internet knowledge)
    - Can pull data points from different sources measured differently, creating logical inconsistencies
    - Heavily slanted by current trends and fads, lacking historical context
    - Still carries a tech-bro Silicon Valley bias in all major models

    The Power Shift Nobody's Noticing
    Craig's Key Insight: Traditional media's power is being challenged in ways that aren't immediately apparent. If thousands of people seek understanding through their own prompts to different AI models, the mainstream media loses its gatekeeping function.
    Implications: Governments, political organisations, advertisers, and public health campaigns that relied on unified media channels now face diffuse, individualised information consumption. The message isn't controlled anymore.

    Key Takeaways
    - Journalism remains essential: Garbage in, garbage out. AI needs high-quality investigative journalism to feed it—and society needs journalists to uncover what powerful people hide.
    - The inflection point: Like the early internet, when "tech people" adopted it before the world caught on, we're at a similar moment with AI-generated research. Every knowledge worker will soon be a "fast follower."
    - Read widely, question everything: Whether consuming traditional media or AI-generated content, diversify sources and interrogate biases—including your own.
    Final thought: The line between genius and insanity often houses the early adopters. Both Craig and Emmanuella have been accused of being on both sides of that line.
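    The multi-model habit described under "Craig's Method" can be pictured as a small script. The sketch below is purely illustrative: queryModel, its endpoint, and the provider names are hypothetical stand-ins, not Perplexity's or any other vendor's real API. It simply shows the pattern of putting the same question to several models, facts first, and comparing the answers.

    ```typescript
    // Hypothetical sketch: cross-check one factual question against several AI
    // providers. queryModel() and its endpoint are illustrative stand-ins, not a
    // real vendor API.
    async function queryModel(provider: string, question: string): Promise<string> {
      const res = await fetch(`https://example.com/${provider}/answer`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          // "Facts first, media interpretation second": ask for primary sources
          // and official statistics before any commentary.
          prompt: `${question}\nCite official statistics before media coverage.`,
        }),
      });
      return (await res.json()).answer;
    }

    async function crossCheck(question: string): Promise<void> {
      // Different providers carry different biases, so compare them side by side.
      const providers = ["model-a", "model-b", "model-c"];
      const answers = await Promise.all(providers.map((p) => queryModel(p, question)));
      answers.forEach((a, i) => console.log(`${providers[i]}: ${a}`));
    }

    crossCheck("How much did 'gold plating' of energy networks add to Australian power bills?");
    ```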

    15 min
  2. 3 November

    TruthAmp: Episode 10 - Don't Go Chasing Waterfalls (Chase AI Bubbles)

    Watch here: https://youtu.be/NQHTdb5_Af8

    Craig and Emmanuella tackle the burning question: Is AI just another dotcom-style bubble waiting to burst?

    The Bubble Debate
    Emmanuella has been hearing concerns across industries that AI might be overhyped like the dotcom boom. She wonders if people deep in AI dismiss these concerns because admitting it would hurt them. As an outsider, she wanted an objective analysis.

    Why This Time Is Different
    The Dotcom Lesson: Jeff Bezos noted that industry movements require experimentation, which costs money. During dotcom, infrastructure (fiber optic cables) survived even when companies failed. Amazon shares dropped from IPO to $6, but one original share is now worth ~$48,000. Bubbles punish speculators but reward those who identify real value.
    AI's Key Distinctions:
    - Actual usage: Unlike hypothetical dotcom projections, AI infrastructure is used immediately as it's deployed
    - Tangible products: OpenAI went from zero to $500 billion in two years with something people actually use daily
    - Fast prototyping: At the Indigenous Australian Datathon Conference, participants built working health/food systems in 1.5 days (five years ago, they just made PowerPoint slides)

    The Three-Layer Framework
    - Infrastructure Layer (Bottom): Data centers and compute being used and paid for as deployed. Competitive pressure will drive efficiency. This is real, not hypothetical.
    - Business Layer (Middle): Companies building on infrastructure—lots of experimentation, not all will succeed. This is where the "bubble" risk lives.
    - Consumer Layer (Top): People using AI daily for research, scheduling, advice. Already embedded in life with genuine utility.

    What Determines Winners
    The pets.com cautionary tale: They had a great name but terrible user experience. PetSmart crushed them with a better website. Winners marry user experience with new tech. Losers trade on hype. Companies that survived dotcom (Amazon, early Yahoo, later Facebook) had genuine utility that compelled continued use.

    The Democratisation Opportunity
    You no longer need coding skills—just understand systems, business, and customers. Barriers to entry have collapsed. Emmanuella has been buying shares for her daughters since birth; what might fund one startup could now fund 10 experiments.

    The Reality Check
    No substance = failure. Hypothetical AI companies without humans putting in grunt work won't succeed. Value requires the end-to-end human experience—people identifying problems and experiencing solutions. Don't judge success at one point in time. See what survives market corrections.

    Takeaway
    This isn't a bubble—it's a punctuated equilibrium. Infrastructure is solid, consumer utility is real, but not all businesses building on top will succeed. If you identify genuine long-term value and ride out volatility, history suggests patience pays off.

    14 min
  3. 27 October

    TruthAmp: Episode 9 - AI Just Called to Say I Love You (No More Apps)

    Watch here: https://youtu.be/Je3ynVTQrXs

    Craig builds a meditation app in 15 minutes to demonstrate how AI is fundamentally changing our relationship with smartphones—and potentially making traditional apps obsolete.

    The Meditation App Experiment
    The Problem: Craig was frustrated with his meditation app constantly asking him to log in, share data, and navigate unnecessary features. He just wanted something simple: a timer that chimes at the start, middle, and end.
    The Solution: Using Claude AI, he built a custom meditation app in approximately 15 minutes (plus deployment to his phone). The entire process:
    - Created a simple meditation timer with specific requirements
    - Made it "woody zen" in appearance through natural language prompting
    - Deployed as a Progressive Web App (PWA) to his Google Pixel 9
    - Shared all code publicly on GitHub—written entirely by AI, including instructions
    The Result: A functional, personalized meditation app that does exactly what he needs, nothing more. (An illustrative sketch of this kind of timer follows these notes.)

    The Death of Apps Thesis
    Craig argues we're witnessing the beginning of the end for traditional smartphone apps. His reasoning:
    - Common Problems Get Solved: Throughout tech history, universal problems eventually become utilities (like cloud computing replacing everyone building their own data centers). Apps are next.
    - Ephemeral Code: What took weeks to build now takes hours. Soon, AI will generate apps on-the-fly to solve immediate problems, then either disappear after use or join a library for future retrieval when someone needs the same solution.
    - The Future Interface: Instead of hunting through app stores, your phone becomes a true personal assistant. You state a problem ("I want to tune my guitar"), and AI generates the solution instantly—no installation, no data sharing, no login screens.

    The Deloitte Academic Paper Incident
    Emmanuella raises the recent controversy where Deloitte was hired to analyze problematic code but instead published an academic paper. Her analysis: the wrong command was given. Key insight: Deloitte hired a team and used AI to do something it was told to do, but the initial instruction was incorrect. The tool served the wrong purpose because the human question was wrong.

    Deep Philosophical Questions
    On Prompting as a Skill: Emmanuella observes that prompting AI effectively requires specificity and brevity, iteration and refinement, understanding what outcome you actually want, and consideration of whether the purpose is appropriate. She predicts schools will need entire subjects dedicated to prompting.
    On Logic vs. Intelligence: A fascinating historical example: when computers were introduced to Black and Hispanic communities in the US, IQ scores increased—not because students became "smarter," but because their thinking adapted to computational logic (which IQ tests measure). Emmanuella's concern: we're optimizing for computational logic at the expense of emotional, human, and spiritual intelligence. This imbalance contributes to rising anxiety and depression.
    On Productivity vs. Equilibrium: Self-identifying as a "human puddle," Emmanuella questions whether productivity gains are worth the cost: "Is our time connecting and being undermined at the expense of productivity?"
    On Creativity: Craig was asked by a senior leader: "Is creativity just remixing old ideas, or is it bigger?" His answer: creativity pulls inspiration from many sources, sometimes mysterious ones. It's bigger than remix. The human element matters—AI is built on systems like IQ tests that channel curiosity into predictable paths.
    On Tech Waste: Emmanuella wonders if we'll eventually develop consciousness about AI waste the way we have about plastic—recognizing that technology costs the earth something and asking whether uses are frivolous.

    Cultural Differences
    Craig shares intriguing research: Anglo-Western countries are the most pessimistic about AI in multiple studies. East Asian and non-English-speaking countries tend to be far more bullish on the technology. This raises questions about cultural values, waste creation, and how different societies conceptualize technology's role.

    Key Takeaway
    Always ask: Who is the end user, and what is the ultimate goal? Technology must serve a human purpose. Learning to ask the right questions leads to appropriate prompts, which lead to useful outcomes. Without this foundational awareness, we risk building solutions to the wrong problems—or creating productivity that undermines human wellbeing. The future isn't about having better apps. It's about having AI that understands what we need and generates it on demand—making the smartphone finally live up to its promise as a true personal assistant.

    App code here: https://github.com/cclawton/cctest
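    For readers curious what such a timer looks like in code, below is a minimal browser sketch of the same idea (one chime at the start, one at the midpoint, one at the end) using the standard Web Audio API. It is an illustrative assumption, not Craig's AI-written app; his actual code is at the GitHub link above, and names like startMeditation and the #begin button are invented for the example.

    ```typescript
    // Minimal meditation-timer sketch: chime at the start, midpoint, and end of a
    // session. Illustrative only—not the code from github.com/cclawton/cctest.
    function chime(audioCtx: AudioContext, durationMs = 600): void {
      // A soft sine tone stands in for a meditation bell.
      const osc = audioCtx.createOscillator();
      const gain = audioCtx.createGain();
      osc.frequency.value = 432; // gentle, bell-like pitch
      gain.gain.setValueAtTime(0.3, audioCtx.currentTime);
      gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + durationMs / 1000);
      osc.connect(gain).connect(audioCtx.destination);
      osc.start();
      osc.stop(audioCtx.currentTime + durationMs / 1000);
    }

    function startMeditation(totalMinutes: number): void {
      const audioCtx = new AudioContext();
      const totalMs = totalMinutes * 60_000;
      chime(audioCtx);                                // start
      setTimeout(() => chime(audioCtx), totalMs / 2); // midpoint
      setTimeout(() => chime(audioCtx), totalMs);     // end
    }

    // Example: a 10-minute sit, triggered by a button so the browser allows audio.
    document.querySelector("#begin")?.addEventListener("click", () => startMeditation(10));
    ```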

    17 min
  4. 26 October

    Shocking Truth: Tech is Changing Our Perception, Reality and Behaviour

    Em and Craig dive deep into technology's profound impact on human behaviour, exploring everything from AI-generated images with three-eyed cats to how the printing press revolutionised society. This wide-ranging conversation examines the uncomfortable truth: we're living through a technological shift that's fundamentally changing how humans think, connect, and experience reality.

    The Historical Context
    The hosts trace technology's transformative power through history, from Gutenberg's printing press enabling mass literacy and challenging authority, to the Industrial Revolution moving women from homes into factories, fundamentally reshaping society. Em notes how each technological leap creates both expansion and contraction—initial chaos followed by adaptation and new innovation.

    Eight Core Ways Tech Affects Human Behaviour
    Em outlines how technology is reshaping humanity across multiple dimensions:
    - Social Connection: While 67.9% of Earth's population is now online, connections are less deep. We're cognitively designed to connect with maybe 200 people, not thousands on social platforms.
    - Shortened Attention Spans: Constant quick fixes prevent us from building resilience, accessing flow states, or learning deeply. We're avoiding discomfort rather than developing the capacity to handle it.
    - Cognitive Changes: Multitasking (really rapid task-switching) exhausts us more than the work itself. The constant shifting between tasks depletes cognitive resources faster than focused deep work.
    - Memory and Navigation: Craig shares how Google Maps has replaced spatial awareness—remembering London taxi drivers whose brains were literally wired differently by their knowledge of the streets. Em wonders if rising ADHD diagnoses might actually be brains adapting to technology rather than a disorder.

    The Reality Distortion Problem
    The conversation tackles a disturbing trend: our subconscious can't distinguish between AI-generated content and reality. From Photoshop's impact on body image to today's sophisticated deepfakes, we're losing the ability to trust what we see. Em describes asking ChatGPT to create a birthday invitation, only to discover the generated cats had three eyes and extra heads—a glimpse into early-stage AI before it got frighteningly good.

    The Attention Economy's Dark Side
    Em reveals a troubling pattern: AI trained on human engagement learns that negative content gets more clicks, creating a feedback loop that may be skewing AI toward negativity. She questions whether technology has genuinely had a negative effect on human behaviour, or whether negative content simply generates more engagement and thus more training data.

    Process Versus Outcome
    Craig admits to using Claude AI to rewrite a letter to a newspaper—and it did a better job. This sparks discussion about what we lose when AI does the creative heavy lifting. Em's pottery analogy drives home a crucial point: not everything needs commercial value. The creative process itself—the frustrating research, the failures, the practice—transforms information into knowledge and knowledge into wisdom.

    Job Market Disruption
    While people panic about AI replacing jobs, the hosts offer unexpected hope: humans will continue innovating ways to work alongside technology, just as they always have. Em emphasises that our fundamental drive to connect, create, and innovate will persist despite technological disruption.

    The Surprising Hero
    Craig's hero of the week is Sarah Wynn-Williams, former Facebook executive and author of "Careless People." She broke a seven-year silence to warn that society is sleepwalking into the same mistakes with AI that occurred with social media. Despite Facebook's gag order preventing her from promoting the book, her insights about emotional targeting of adolescent girls and the concentration of AI power in social media companies (like Meta's LLaMA model) offer crucial warnings.

    The Paradox of Progress
    Perhaps the most striking theme: despite technology's potential for harm, both hosts maintain optimism about humanity. Em argues that just as the printing press was weaponised but ultimately elevated humanity, current technological disruption will eventually stabilise. Humans fundamentally want to connect and treat each other well—technology may create temporary chaos, but our humanness will ultimately triumph.

    Key Resources
    - "The Psychology of Artificial Intelligence" by Tony Prescott
    - "Humans and AI in the Workplace" podcast with Dr. Deborah Panucci and Lisa Hart
    - "Careless People" by Sarah Wynn-Williams (get the hard copy before it disappears!)

    The Bottom Line
    Technology is simultaneously connecting and isolating us, accelerating productivity while eroding deep thinking, and creating opportunities while destroying established skills. The key? Maintaining critical thinking, preserving human connection, and remembering that behind every screen should be a human seeking authentic engagement. As Craig puts it: humanity and effort matter—not just a 20-word prompt. Human connection remains irreplaceable. Go have real conversations. Support real artists. Maybe even pat a cat (with the normal number of eyes).

    41 min
  5. 20 October

    TruthAmp: Episode 8 - AI will Survive

    Watch on https://youtu.be/4UY2h6pmUK8

    Emmanuella returns from South by Southwest Sydney with insights on humanity's role in the AI revolution, while Craig shares productivity hacks and reflections on creative process versus outcome.

    Key Themes from SXSW Sydney
    The Human Bookend Principle: Emmanuella's biggest takeaway is that the "end-to-end experience" must always have humans at both ends. Tech exists to serve human needs, and without humans to serve, it has no purpose. This realisation eased her fears about AI replacing jobs—technology requires humans to identify problems and experience solutions.
    Balance Over Optimisation: An AR/VR Design Lead at Google candidly shared how diving into AI initially made her incredibly productive, but her mind couldn't keep up with the output. The lesson: just because you can be hyper-productive doesn't mean you should. Individual accountability for tech usage matters, even when tools yield profit.
    Notable SXSW Panels:
    - Wearable tech reducing risk for pregnant women, allowing more home monitoring and faster crisis response
    - Balancing friendly culture with killer business instinct and innovation freedom
    - Dept's talk: end-to-end digital experiences for brands like Google, Audi, and Patagonia

    The Process vs. Outcome Debate
    Craig's Music Experiment: Craig explored AI music composition as a "frustrated musician" and noticed he's translating between old methods and new AI tools—creating cognitive load. He predicts future creators who start with AI-native tools won't have this translation layer, making it more natural.
    Emmanuella's Pottery Analogy: Looking at her handmade pottery (some functional, some broken, all meaningful), Emmanuella argues that not everything needs commercial value. The creative process itself has intrinsic worth—making things teaches us, edifies us, fulfils our humanity.
    The Knowledge vs. Information Gap: Googling "how to grow lettuce without snails" differs vastly from three years of planting, failing, seed-saving, and discovering what works in your soil. AI can provide information, but the frustrating process of research and practice transforms information into knowledge, and knowledge into wisdom.

    Key Quotes
    Dan Rosen (Warner Music Australasia President): "It takes a lot of effort to make something look effortless."
    Emmanuella's counterpoint: Look at a ballerina's broken feet—they train to destruction, yet appear weightless on stage. A prima ballerina friend broke her back performing, ending her career. "You cannot replicate that without effort and without human input."
    Craig's reflection: "If you haven't taken the time to bother writing something, why should I take the time to bother reading it?"

    Tech Updates
    New Recording Tools: The podcast now uses SquadCast and Descript, which offers AI-native editing—search for a word, delete it, and it's removed from the video automatically. More human-centered than traditional timeline editing.
    Perplexity Hack: Craig asked Perplexity which aisle at his specific Bunnings store had car covers. What happened?

    What's Next
    Craig is heading to the CEDA AI Leadership Event, hosting a panel on the AI arms race with CEOs from the Tech Council, AGL, Telstra, and the Australian Institute of Machine Learning.

    16 min
  6. 13 October

    TruthAmp: Episode 7 - Build Me Up, AI-tercup: Website in an Hour

    Watch the video here: https://www.youtube.com/watch?v=HeP6SEMkhvQ

    Craig demonstrates the speed and power of AI-assisted web development by building Emmanuella a professional website in less than one hour—right before her flight to South by Southwest Sydney.

    What Happened
    The Challenge: Craig forgot about the topic until an hour before recording, so he used AI tools to quickly build a website, then demoed it for Emmanuella as she rushed through the airport.
    The Tools: Craig used Claude (both the chat interface and Claude Code) to research Emmanuella's professional background, generate prompts, and build a functioning website styled after Brené Brown's site—all running locally on his laptop.
    The Process:
    - Claude researched Emmanuella's professional history, services, images, and videos online
    - Generated seven specific prompts for website development
    - Built a sophisticated site with animations, accurate credentials (graduate diploma in Psychology, Master's degrees), and integrated multimedia
    - All accomplished in less than one hour

    Key Insights
    Prompting is a Skill: Emmanuella observes that effective AI prompting requires specificity and iteration. It's not mind-reading—you need to clearly articulate style, colors, and desired outcomes, sometimes through multiple attempts.
    Collaboration Over Replacement: Craig emphasizes you still need someone who understands the underlying technology to ask the right questions and ensure security. AI accelerates collaboration between developers and clients, enabling real-time changes during consultative sessions.
    Cost Savings: By dramatically reducing development time, AI tools could significantly lower hourly web development costs for clients.
    Quality vs. Sludge: While Emmanuella worries AI might "fill the internet with sludge," Craig counters that with proper expertise, AI can actually create more sophisticated and secure websites than traditional methods.
    Accessibility: People without huge budgets can now build sophisticated websites with integrated products and features—democratizing professional web presence.

    The Reveal
    The finished website impressively included:
    - Accurate professional credentials and education
    - Brené Brown-inspired styling and animations
    - Integrated podcast episodes (Building a Better Geek, TruthAmp)
    - Professional history and services
    - All functional elements ready for deployment

    Takeaway
    AI tools like Claude Code are transforming web development from time-intensive coding to rapid, collaborative creation—but expertise still matters. The technology amplifies human knowledge rather than replacing it, enabling faster iteration and more accessible professional web presence.

    Note: This episode was recorded as Emmanuella literally ran through the airport to catch her flight to South by Southwest Sydney, where she's moderating panels on med tech and mentoring tech leadership.

    10 min
  7. 6 October

    TruthAmp: Episode 6 - Copyright and AI

    Watch it here: https://www.youtube.com/watch?v=-Y7M4RUC_HU&t=1s

    Craig and Emmanuella discuss the collision between AI technology and copyright protection for artists.

    Main Points
    Artists' Perspective: Emmanuella explains that copyright payments—not just performances—are crucial income for creators. Successful artists earn substantially from songwriting royalties calculated by venue capacity.
    The Core Problem: Tech companies are training AI models on copyrighted material without permission or payment. Once incorporated, this data cannot be removed without rebuilding the entire model. Artists must police unauthorized use themselves rather than companies providing proactive protection.
    Legal Landscape: The US has "fair use" doctrine while Australia has stricter rules. Australia's Productivity Commissioner suggests relaxing copyright for AI innovation, but artists strongly oppose this. A middle-ground proposal involves mandatory compensation, though artists would lose control over usage.
    Media Framing: Australian media uniquely describes AI's use of copyrighted content as "theft"—language that shapes public perception differently than other countries.
    Unexpected Hope: Emmanuella suggests that as AI contributes to content mediocrity, wealthy tech entrepreneurs might eventually invest in preserving high-quality artistry—reviving historical patronage models.
    Key Legal Fact: AI-generated content currently isn't copyrightable. Only human-contributed portions of hybrid works can be protected.

    Takeaway
    The copyright-AI debate has no clear resolution. While governments lag behind, market forces—and potentially tech philanthropists—may ultimately determine how creators are compensated in the AI era.

    20 min
  8. 29 September

    TruthAmp: Episode 5 - Would AI Lie to You

    Available in Video: https://www.youtube.com/watch?v=iOklnPvvDUo

    TruthAmp Episode 5: How Do I Know If AI Is Lying to Me?
    In this episode of Truth Amp, communication expert Emmanuella Grace and tech expert Craig Lawton tackle one of the most pressing questions about AI: How can you tell when AI gives you false information?

    Key Takeaways
    AI Doesn't "Lie" - It Hallucinates: Craig explains that AI doesn't intentionally deceive. Instead, AI models are probabilistic systems that sometimes produce "hallucinations" - confident-sounding but inaccurate responses based on statistical patterns in their training data.
    The Swiss Cheese Problem: AI knowledge has gaps like holes in Swiss cheese. When your question hits one of these knowledge gaps, the AI fills in blanks with plausible-sounding but potentially false information, especially in specialized domains like psychology, medicine, or law.
    Experts Aren't Immune: Even domain experts can be caught off guard. Emmanuella shares how AI nearly fooled her with an incomplete psychology summary that seemed authoritative but was missing crucial information.
    The Generational Divide: Many older users treat AI responses as infallible truth, lacking awareness of AI's limitations. This creates a responsibility gap - who should educate users about AI's fallibility?

    Practical Tips to Get Better AI Responses
    - Turn on web search in your AI settings so it can access current information
    - Specify timeframes in your prompts (e.g., "information from 2025")
    - Learn better prompting techniques to avoid reinforcing your existing biases
    - Understand AI training bias - models reflect historical data, which may contain outdated information

    The Bottom Line
    While the tech industry figures out responsibility and regulation, users need to take charge of their AI education. Media and educational institutions have a role to play in teaching AI literacy, especially around understanding biases, limitations, and effective prompting strategies.

    Truth Amp explores complex topics through the lens of human communication and technology expertise. New episodes weekly.

    17 min

About

Welcome to Building a Better Geek, where we explore the intersection of technology, psychology and well-being. For high-functioning introverts finding an audience - if you like humans at least as much as machines - if you want to go deep on leadership, communication and all the things that go into building you. Emmanuella Grace is a communication coach and consultant, working with individuals and organisations to develop and strengthen the skills of voice and communication. Craig is an experienced Technologist and Leader. Connect with us using the details below.