Bare Knuckles and Brass Tacks

BKBT Productions

Bare Knuckles and Brass Tacks is the tech podcast about humans. Hosted by George K and George A, this podcast examines AI, infrastructure, technology adoption, and the broader implications of tech developments through both guest interviews and news commentary. Our guests bring honest perspectives on what's working, what's broken, and new ways to examine the roles and impacts of technology in our lives. We challenge conventional tech industry narratives and dig into real-world consequences over hype. Whether you're deeply technical or just trying to understand how technology shapes society, this show will make you think critically about where we're headed and who's getting left behind.

  1. AI is doing real good and real harm, but the hype is hiding both

    27 APR

    AI is doing real good and real harm, but the hype is hiding both

    The AI hype machine is taking up all the oxygen we need to actually stop the harm happening today. This month we heard from three guests who didn't compare notes. Didn't coordinate. And all three circled the same thing: the #AI hype machine isn't just wrong, it's actively making things worse. Capital flows going to "everything machines" instead of applications that actually accomplish tasks. Gas turbines burning methane next to communities already carrying four times the national cancer rate. AI chatbots engineered, mathematically (not metaphorically, mathematically), to reinforce delusional thinking in vulnerable users. Deepfake abuse still expanding, still mostly targeting women and minors, still unsolved. This is the real harm inventory. This month. Right now. Meanwhile the discourse is about whether a model might hypothetically stage a coup in five years. We're not doing doomer porn. We're saying watch the industry's hands, not its mouth. The boring risks are already here. The extraordinary stuff (the farmer in Morocco beating generalist models with expert-annotated field data, the researcher finding antibiotics with true wet lab work) is also already here! It's just not getting the same headlines or funding. System Check: this month's episodes, broken down against current events and whatever's rattling around our brainboxes.
    Mentioned:
    Smaller models find the same bugs as Mythos
    Stanford HAI 2026 AI Index
    Discovering a new class of antibiotics
    Dmitri Alperovitch's testimony on compute
    Baidu robotaxi outage
    MIT CSAIL study on AI psychosis
    NAACP lawsuit against xAI
    xAI gas turbines polluting rural communities
    Northern Virginia datacenter health impacts
    Human Line Project

    41 min
  2. Distinguishing between movement and progress, in AI, security, and more

    20 APR

    Distinguishing between movement and progress, in AI, security, and more

    Are tech industries selling us problems they invented? Ryan Clarque, CSO at Black Rifle Coffee Company, doesn't flinch at the big provocations. When Claude's Mythos model showed up in every LinkedIn feed promising a software apocalypse, Ryan's take was blunt: the basics were broken before Mythos, and they'll still be broken after it. The real question about a powerful AI model isn't what it will find; it's whether you've built a program capable of doing anything about its findings when it does. But the conversation doesn't stop at hype-busting. Ryan has quietly done something the industry insists can't be done: built a lean, two-person security operation that ditched the big-ticket SIEM vendors, took control of its own telemetry, and outperformed programs with ten times the headcount and budget. When one of those vendors found out, they sent their "heavy hitter" to prove Ryan wrong; he left agreeing Ryan didn't need them. What emerges is a portrait of a practitioner who learned to distinguish progress from movement — and who thinks most of the industry is confusing the two. The procurement cycle, the Gartner roadmap, the sequence of investments you're told you must make: Ryan's argument is that inertia dressed up as strategy has left small security teams demoralized and over-leveraged, and that the fix is less about budget and more about the willingness to build your own way out. And then, at the end of a week of planes and conferences, Ryan says something that reframes all of it. The reason he doesn't chase the car or the watch or the title isn't asceticism — it's that working in security means observing the worst of what people do to each other, and the only way to stay functional is to invest hard in what actually holds. Time. Trust. People who remember how you made them feel.
    Mentioned:
    Cal Newport on Mythos vs other LLMs in finding software vulnerabilities

    45 min
  3. AI Security Is Just as Vague as "Cloud Security", but With Sparkle Emojis

    6 APR

    AI Security Is Just as Vague as "Cloud Security", but With Sparkle Emojis

    Amber Bennoui calls it like she sees it: most of what gets sold as "AI security" is just cloud security with sparkle emojis on it. She's co-founder of AISECA, a veteran product leader, and one of the more honest voices in a space that isn't exactly famous for honesty right now. We sat down with her fresh off RSA, and the conversation got very real: The real AI risk isn't the sci-fi scenario. It's the DevOps engineer at a 900-person company arguing they should be able to send commands via a remote control feature, with three security people in the building who don't even know the conversation is happening. It's the tools already embedded in software your finance and HR teams use every day, making decisions nobody gave explicit permission for. Amber's argument is simple and uncomfortable: most organizations have a discoverability problem they haven't solved yet, and vendors are selling dashboards to people who don't even know what's running in their own house. That's not security. That's theater. We also got into what it actually takes to build something vendor-agnostic and practitioner-led when the companies with the biggest budgets are also the ones racing to define what AI security means. And whether the tension between speed and safety is even something security teams get to resolve — or whether that decision has already been made for them.
    Mentioned:
    MIT Paper, "Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians"

    41 min
  4. The lawsuit that could reclaim the internet, and the AI hype cycle is eating its own tail

    30 MAR

    The lawsuit that could reclaim the internet, and the AI hype cycle is eating its own tail

    When was the last time a news headline about AI actually told you something true? George K. and George A. recorded this one from opposite sides of the planet — George K. fresh off RSA in San Francisco, George A. embedded at a global trust and safety conference in London. The distance didn't slow them down. This month's System Check has a theme: we're living inside a story that powerful institutions are writing for us, and most of us aren't stopping to ask who's holding the pen. Meta and YouTube just lost a landmark lawsuit — not over what they published, but over how they designed their products to keep you hooked. The legal strategy that finally worked was the one used against Big Tobacco. Meanwhile, 82% of journalists now use some form of AI tool in their work. The people covering AI are increasingly shaped by it. The snake is eating its tail. The arms race math doesn't add up either. Forty-billion-dollar bridge loans. Circular investments. Credit-based bets assuming a revenue base that doesn't yet exist. And somewhere in rural Mississippi, kids are developing breathing problems because gas turbines got trucked in to power a datacenter the community never voted for. The question running underneath all of it: are we making decisions based on outcomes, or based on vibes? And if it's vibes — whose vibes are they, and how did they get there?
    Mentioned:
    Meta and YouTube verdict news coverage
    Center for Humane Technology's podcast "Your Undivided Attention" episode on the Meta and YouTube lawsuit verdicts
    Ed Zitron's recent monologue
    Research into how media covers AI
    UK study on AI media coverage
    Muck Rack's 2026 State of Journalism Report
    WSJ: CFOs expect to reduce headcount because of AI
    Anthropic co-founder Jack Clark on not being able to idle AI systems
    Iran War affects world helium supply, creating semiconductor bottleneck
    Environmental effects of Elon Musk using gas turbines to power data centers in rural communities

    41 min
  5. Deep Learning vs Intuition: AI models and venture capital investing

    23 MAR

    Deep Learning vs Intuition: AI models and venture capital investing

    What if the best investment decision is one where no human is involved? Brant Meyer, partner at Trac VC, joins the show this week to talk about the firm's approach, where algorithms — not partners in puffer vests — make every single call. Over 115 investments to date with zero human investment decisions. An 8.5% loss ratio, far below traditional VC, would seem to suggest they're on to something. George K. and George A. wanted to know: if machines make the decisions, what exactly is Brant's job? But the more interesting conversation isn't about the wins. It's about what the model forces you to confront. We assume removing the human removes the bias — but Trac's algorithms are trained on data with its own biases. Then there's the psychological dimension. Brant makes the case that most resistance to algorithmic investing is emotional rather than rational. VCs resist algorithms because the discretionary call is the whole point. The juice, as he puts it, is the feeling of knowing. Strip that away and you're threatening an identity. Which raises the question George K. and George A. keep circling: how did venture capitalists acquire oracular status in the first place? The hit rate doesn't justify it. The pattern recognition, Brant argues, was never really theirs to claim. And yet no founder wants to take money from a robot. The relationship still matters. The question is just whether we've been confusing that relationship with the thing it was never actually doing.
    Mentioned:
    Trac VC's video

    47 min
  6. Best Of: What are we building? And the future of human flourishing...

    16 MAR

    Best Of: What are we building? And the future of human flourishing...

    We've spent the last several months talking to people who live at the intersection of technology and the humans on the receiving end of it. A data privacy attorney. A corpus linguist. A clinical psychologist. A performance coach. An entrepreneur who built a business on failure. They don't all agree with each other. But they're all pointing at the same thing: the gap between how technology gets built, deployed, and sold — and what it's actually doing to people. This week's episode is our attempt to pull that thread.
    Mike McLaughlin — The AI ecosystem is running on bad data, has no real mechanism to fix it, and the next wave of cybercrime will target the training data itself.
    Kimberly Becker, PhD — AI-generated text is structurally overconfident, and a corpus linguist traced that pattern all the way back to how decontextualized certainty language helped fuel the opioid epidemic.
    Dr. Marissa Alert — What organizations call employee resistance to AI is, clinically, a fear and identity threat response that most rollouts are spending millions to ignore.
    Tychon Carter — Winning is often where the real crisis begins, and the goalpost never stops moving until you decide your value isn't determined by your output.
    The "Bad Hombre" — A solopreneur who built a business on public failure makes the case that the willingness to fail more than most people even try is the only real competitive advantage.
    Every one of these conversations eventually arrives at the same place: the distance between what we're building and who it's landing on.

    38 min

