The episode turns into a freewheeling, funny, very human conversation about how AI is showing up in developers’ day-to-day lives, especially for the “I can do it, I just hate it” work. Will talks about getting wildly inconsistent AI PR review comments, but still finding real value in using Claude to refactor boring-but-necessary code like splitting up bloated classes and shared components. Dave riffs on how Claude is starting to mirror his humor and writing voice, then connects it to a psychology idea from Marty Seligman: don’t force yourself to “get good” at tasks you’d still hate even if you mastered them, because that’s a fast track to misery. For Dave, AI is a relief valve: it can generate PR descriptions, test scripts, and documentation in minutes, turning a three-hour, soul-draining slog into something manageable, and giving him back energy for the work he actually enjoys.

From there, the discussion shifts into “agentic” workflows and a geeky Dungeons & Dragons thought experiment: could you build an AI-powered rules engine that handles combat bookkeeping, tracks inventory and positions, and references a big PDF ruleset accurately? Dave and Will talk through using RAG (retrieval-augmented generation) to index the rulebook and something like MCP-style tooling to let the model read/write to real databases so it doesn’t lose track of facts (what room you’re in, what items you have, what the rules say about advantage/disadvantage). They also touch on how newer models can sustain longer, more coherent outputs (Dave gushes about Claude Opus improvements and even creative writing that lands emotionally), and they speculate that “divide the work into sub-agents” is how these systems stay on track as tasks get bigger.

The back half gets darker and more real: what happens when you give AIs root-level access to email, calendars, and money?
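The D&D rules-engine idea Dave and Will kick around (RAG over the rulebook, plus tool calls against real storage so the model doesn't lose track of game state) could be sketched roughly as follows. Everything here is illustrative, not from the episode: a real system would use vector embeddings rather than keyword overlap for retrieval, and the class and table names are made up.

```python
# Hypothetical sketch: retrieval over rulebook text plus an MCP-style
# "tool" backed by a real database, so facts like room and inventory
# live in storage instead of the model's context window.
import re
import sqlite3


def chunk_rules(text, size=12):
    """Split rulebook text into fixed-size word windows for retrieval."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def retrieve(query, chunks, k=1):
    """Rank chunks by keyword overlap with the query.

    This is a toy stand-in for embedding similarity in a real RAG setup.
    """
    q = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        chunks,
        key=lambda c: -len(q & set(re.findall(r"\w+", c.lower()))),
    )
    return scored[:k]


class GameState:
    """Tool the model calls to read/write persistent game facts."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE state (key TEXT PRIMARY KEY, value TEXT)")

    def set(self, key, value):
        self.db.execute(
            "INSERT OR REPLACE INTO state VALUES (?, ?)", (key, value)
        )

    def get(self, key):
        row = self.db.execute(
            "SELECT value FROM state WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None


# Toy rulebook text standing in for the big PDF ruleset.
rules = (
    "Advantage: roll two d20 and take the higher result. "
    "Disadvantage: roll two d20 and take the lower result. "
    "A flanked creature grants advantage on melee attacks against it."
)
chunks = chunk_rules(rules)

state = GameState()
state.set("room", "goblin warren")
state.set("inventory", "rope, torch")

# The prompt sent to the model combines retrieved rules with live state,
# so the answer is grounded in both the ruleset and the actual game facts.
context = retrieve("what does advantage mean", chunks, k=1)
prompt = (
    f"Rules: {context[0]}\n"
    f"Room: {state.get('room')}\n"
    f"Items: {state.get('inventory')}"
)
```

The design point the hosts raise is the second half: the model should not be trusted to remember inventory or position across a long session, so those reads and writes go through a tool against real storage, and only the retrieved rule text plus the current state snapshot go into each prompt.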
Will imagines an assistant that can handle adulting (getting flooring quotes, scheduling bids) and Dave goes further, describing the exhausting annual battle to secure life-saving medication coverage for his wife and wishing for an AI that can fight bureaucracy relentlessly. That leads into red teaming, prompt injection, and the uncomfortable truth that guardrails are often driven by liability, not human-centered ethics; Dave contrasts frustrating experiences with GPT-style “lawyer mode” refusals versus Claude’s more collaborative boundary-setting, and argues we’re heading toward rules for AI that resemble rules for people. They close on a practical optimism: AIs aren’t “good” on their own, but they’re powerful force multipliers for getting over psychological humps, clearing drudgery, and even helping people stop discounting their own progress by reflecting back evidence-based positives—an unexpectedly meaningful use case amid all the chaos.

Transcript:

DAVE: Hello, and welcome to the Acima Developer Podcast. I'm David Brady. And we have been having a fantastic time chatting about AI, and we forgot to hit record. So, we're going to start the show right now. Today on the panel I've got Kyle Archer. I've got Thomas Wilcox, and I've got Will Archer. And this is going to be a fantastic chat. So, what have we been talking about, guys? We've been talking about D&D, music, lyrics, poetry. What's going on in AI this week?

WILL: Oh man, I'm getting better. I'm getting better and better. Like, I got an AI review comment on a PR of mine earlier this week, and it was good. And I also got one today, like, just now, seconds ago, and it was doggy doo-doo. So, you know, like, they're getting smarter. They're getting smarter. They saved my bacon. My prompts have been getting more ambitious, you know? Like, more and more ambitious, where I'm like, hey, it's just, like, it's amazing. Like, I love finding the things that I hate. They're not hard. I just hate them.
And AI doesn't have feelings about scut work. You know, I'll tell you, like, one thing. This is an antipattern that I think myself and other people will fall into, like, very frequently, but wonderful [inaudible 01:37] for AI. It's like, when you've got, like, shared library components, you know what I mean, or, like, your class is starting to get big, it's not technically complicated to, like, start breaking that thing up and, like, pulling these things into shared libraries, pulling these into shared modules, you know what I mean, common class extensions, like, all that stuff. It's very, very easy to do. It's very simple and straightforward. But you're not doing it, and I'm not doing it, and none of us are doing it, but we ought to be, and we can. And Claude does a pretty decent job. I had to clean it up, but I'm not mad. It didn't do me dirty, like, it did not do me wrong. DAVE: I have started saving screenshots of things that make me laugh about the AI, and Claude is absolutely learning my sense of humor and my writing style. And so, I literally...I will start typing a comment, and then I'll take my hands off the keyboard. I'm looking at one right now that is literally, "Comment, dear future..." and then it wrote, "Dave, colon, I'm so sorry." And that was pretty much where I was going with that comment, which is...it made me howl. There's another one where it's like, "This class couldn't," and then it completed, "possibly be located in a worse location." Oh, something you just said, though, this is a huge, like, a cross-threaded jump. I'm going to be thinking about this for a few days: the stuff that you can do, but you don't want to, that you don't like it. Okay, ready for a real big cross-discipline skip? Marty Seligman, "Authentic Happiness," I think, is...He wrote a book about happiness. But one of the things that he talks about...he's a psychiatrist. He was literally president of the APA. 
And what he realized is that there are things in your job...we tell everyone, "If you're bad at something, get better at it," and he said, "That is a recipe for depression and misery." Ask yourself what things in your job, if you were really good at them, you'd still hate. Don't get good at those things. Get rid of them. Put them off on someone else. Find somebody who likes that work and trade it off because the more you do it, the more miserable you're going to be. You're not going to find meaning in it. It's going to be drudgery and scut work. And there's so much stuff that I have been shoveling off on Claude, using that as my rubric to say, I'm going to keep this. No, you go do that. And, oh, it's so good.

I write very, very slowly. It is agonizing for me to write. You guys, you've met me. I like to talk, and I talk fast, and that means I talk sloppily because I'm thinking as I talk. I'm an extroverted thinker. I'm literally hearing myself talk for the first time, and I'm processing these ideas. Well, when I write, I can't do that, and so it slows me down. So, everyone on my team, they're writing their Slack report every day. It takes them five minutes. It takes me half an hour. They write a pull request description, takes them 20 minutes, takes me 2 and a half to 3 hours to write.

And I've got a review writing skill now in Claude that I just drop it on there, and it follows the Acima template. Here's the ticket, here's the summary, here's the description, here's the reason why, here's how to test. Go on main. It will actually write me the Rails runner script. You put the thing in, like, go into a console, and type this, type this, type this. Nah, screw that. Open up bash and type Rails runner, and then here is your script. And it's going to load your merchant. It's going to do this, da da da. And then it will show you, right here, here's your output. Boom, done. Jump back to the branch; do it again. Here's the different output. Off to the races you go.
And it will generate a PR in, like, two minutes, what was taking me three hours, and something that takes me three hours that when I'm done, I don't feel happy. I just feel exhausted. I just feel relief that it's over. And so, having that off my plate, fantastic.

WILL: [inaudible 05:35] say there, like, I love it. Like, I have found that another stupid AI trick is just writing documentation, writing reviews, that kind of stuff. Man, I hate it. I hate it so much. But what I've found, right, and this is, I don't know, maybe more psychology than AI, is, like, AI will get it wrong. Often it's not. It'll blow it all the time, all the time. But the fact that they tried and failed, it's like, oh, I've got this thing now. I can work with this thing, right? Like, I'm not going on, like, a blank page, you know? Like, it'll just sort of, like, blargh, vomit out whatever sequence of words it thinks are going to come next in the equation, and then I can work with that. I work from a position of strength.

DAVE: Yeah, I put a tweet out this morning. How'd I put it? "Claude lets me be 5 of me, each doing 80% of my work. One of me is an idiot, but the other 4 of us are 3 more of me." The footnote is, "Mind you, some days it takes all four of us to hold that idiot down," right? It's like, we've all lost time to the AI. If you've got any work done with AI, you have lost work and lost time to AI learning how to run it, because when it rolls, it rolls the truck, right? It will crash.

WILL: Right. Okay. And this is a great, like, I am far from an AI expert. I am constructively lazy, which is the highest and best version an engineer can have, you know.

DAVE: Capital L, Larry Wall's lazy, mm-hmm.

WILL: But I'm not an AI expert. Like, I just, you know, I will pick up the tool, and it'll be like, if I've got a handful of nails and somebody's like, "Hey, this is a Powernail," I'll be like, all right, bang, bang. So, I was pitching Dave on, like, a less code-oriented thing.
DAVE: Yeah, talk about this for a second.

WILL: Mike left, and he left Dave and I alone to our own devices. And so, this is what you get, Mike.

DAVE: