Bare Knuckles and Brass Tacks

BKBT Productions

Bare Knuckles and Brass Tacks is the tech podcast about humans. Hosted by George K and George A, this podcast examines AI, infrastructure, technology adoption, and the broader implications of tech developments through both guest interviews and news commentary. Our guests bring honest perspectives on what's working, what's broken, and new ways to examine the roles and impacts of technology in our lives. We challenge conventional tech industry narratives and dig into real-world consequences over hype. Whether you're deeply technical or just trying to understand how technology shapes society, this show will make you think critically about where we're headed and who's getting left behind.

  1. Deep Learning vs Intuition: AI models and venture capital investing

    1D AGO


    What if the best investment decision is one where no human is involved? Brant Meyer, partner at Trac VC, joins the show this week to talk about the firm’s approach, where algorithms — not partners in puffer vests — make every single call. Over 115 investments to date with zero human investment decisions. An 8.5% loss ratio, orders of magnitude lower than traditional VC, suggests they’re on to something. George K. and George A. wanted to know: if machines make the decision, what exactly is Brant’s job? But the more interesting conversation isn't about the wins. It's about what the model forces you to confront. We assume removing the human removes the bias — but Trac's algorithms are trained on data with biases of its own. Then there's the psychological dimension. Brant makes the case that most resistance to algorithmic investing is emotional rather than rational. VCs resist algorithms because the discretionary call is the whole point. The juice, as he puts it, is the feeling of knowing. Strip that away and you're threatening an identity. Which raises the question George K. and George A. keep circling: how did venture capitalists acquire oracular status in the first place? The hit rate doesn't justify it. The pattern recognition, Brant argues, was never really theirs to claim. And yet, no founder wants to take money from a robot. The relationship still matters. The question is just whether we've been confusing that relationship with the thing it was never actually doing.

    Mentioned: Trac VC’s video

    47 min
  2. Best Of: What are we building? And the future of human flourishing...

    MAR 16


    We've spent the last several months talking to people who live at the intersection of technology and the humans on the receiving end of it. A data privacy attorney. A corpus linguist. A clinical psychologist. A performance coach. An entrepreneur who built a business on failure. They don't all agree with each other. But they're all pointing at the same thing: the gap between how technology gets built, deployed, and sold — and what it's actually doing to people. This week's episode is our attempt to pull that thread.

    Mike McLaughlin — The AI ecosystem is running on bad data, has no real mechanism to fix it, and the next wave of cybercrime will target the training data itself.
    Kimberly Becker, PhD — AI-generated text is structurally overconfident, and a corpus linguist traced that pattern all the way back to how decontextualized certainty language helped fuel the opioid epidemic.
    Dr. Marissa Alert — What organizations call employee resistance to AI is, clinically, a fear and identity-threat response that most rollouts are spending millions to ignore.
    Tychon Carter — Winning is often where the real crisis begins, and the goalpost never stops moving until you decide your value isn't determined by your output.
    The "Bad Hombre" — A solopreneur who built a business on public failure makes the case that the willingness to fail more than most people even try is the only real competitive advantage.

    Every one of these conversations eventually arrives at the same place: the distance between what we're building and who it's landing on.

    38 min
  3. Why cybersecurity is broken and time is the enemy

    MAR 9


    Why do your friends and parents still get breach notification letters from companies they’ve never heard of? John Watters, aka “The Cowboy,” joins the show this week for a hard look at information security. In the early 2000s, he built iDefense from a bankruptcy buyout into one of the most influential threat intelligence companies in the world, pioneered responsible disclosure before the term even existed, and has watched the attack surface evolve from nation-state espionage into something that hits your credit card at a restaurant on a Tuesday. His answer to the breach question? The industry's been losing the clock. Attackers can move from target selection to exploitation in days. Defenders are still operating in weeks. And the gap isn't closing, not by a long shot. If anything, it's widening. This conversation goes from the living rooms of people who've stopped trusting cybersecurity to the boardrooms of Fortune 500 CISOs who still can't explain their third-party risk exposure in plain English. We talk time compression, threat intelligence architecture, the AI arms race that only one side seems to be taking seriously, and the uncomfortable truth about analysis paralysis in a field where the cost of inaction is terminal. John's closing advice to defenders: automate yourself out of a job before someone else does it for you. That one's worth the price of admission alone.

    Mentioned:
    This Is How They Tell Me the World Ends, by Nicole Perlroth
    CISO Mike Melo’s post on security theater

    49 min
  4. AI market jitters, post-truth reality, data, and safeguarding what makes us human

    FEB 9


    This week we're taking stock of conversation trends and letting it rip on AI market jitters and what happens when the math stops math-ing. We start with the numbers that have investors nervous: Amazon's $200 billion capex projection for 2026, and the uncomfortable reality of building an entire economy on depreciating GPU infrastructure with a three-year shelf life. We get into why the dot-com bubble comparison is incomplete, and question what happens when billions flow overwhelmingly into transformer architectures while research into alternatives starves. Then we shift from market corrections to attention economics, unpacking how AI tools promise productivity while actually training us to outsource thinking itself. The cost is both financial and experiential. When was the last time you sat alone without reaching for your phone? Can you still read sentences that run four lines long? The episode lands on an uncomfortable question about who gets to have unmediated experiences anymore, and whether we're living our own lives or just consuming other people's.

    Mentioned:
    Ed Zitron’s “Better Offline” podcast
    Derek Thompson’s Plain English podcast interview with Paul Kedrosky on market conditions and signs of a bubble
    Stephen Colbert on “truthiness”
    Enshittification, coined by Cory Doctorow
    MIT on the philosophical puzzle of AI
    Netflix’s main competition is sleep
    Point of view: Gen Z will remember more of other people’s memories than their own
    Blaise Pascal writing about attention in 1670

    38 min
  5. AI vs Human writing and what it means for our thinking

    FEB 2


    What happens when AI-generated text masquerades as human research? Kimberly Becker, PhD, a corpus linguist, joins the show this week to talk about her study comparing human-written versus AI-generated abstracts in high-stakes healthcare research. The findings reveal something unsettling about how LLMs may reshape scientific communication. ChatGPT's outputs showed higher informational density, formulaic patterns, and a lack of hedging, the linguistic uncertainty that marks careful scientific thinking. The AI doesn't say "may suggest" or "could indicate." It asserts. Confidently. Even when it's wrong. This matters beyond academia. When we optimize for speed and polish over depth and precision, we're changing how we write, and therefore changing how we think. We're externalizing cognition to systems trained on Reddit threads and blog posts, then wondering why the output feels sterile and an inch deep. Becker's work raises uncomfortable questions: Are we training ourselves to accept confident wrongness? What happens when a generation of researchers doesn’t communicate uncertainty? And fundamentally, can a predictive text model ever replicate the pause, the breath, the examination that Neil Postman argued was essential to meaningful thought? This episode is about whether we're paying attention to what we're losing while we chase efficiency.

    Mentioned:
    James Marriott, Dawn of the Post-Literate Society
    Neil Postman’s seminal work, Amusing Ourselves to Death
    Derek Thompson, The End of Thinking
    Relevance Theory (linguistics)

    41 min
5 out of 5, 10 Ratings

