Acima Development

Mike Challis

At Acima, we have a large software development team. We wanted to be able to share with the community things we have learned about the development process. We'll share some tech specifics (we do Ruby, Kotlin, JavaScript, and Haskell), but we'll also talk a lot about mentoring, communication, hiring, planning, and the other parts of the software development process that don't always get talked about enough.

  1. Episode 97: Standups

    APR 29


    The episode of the Acima Development Podcast centers on database performance, using the concept of indexing as its foundation. Mike opens with a story about discovering Google in the early 2000s to illustrate how powerful indexing systems transformed access to information. That same principle applies to databases: indexes act as shortcuts that make retrieving data dramatically faster, especially in large datasets. The discussion emphasizes that while indexes can feel like a technical detail, they are fundamental to how modern systems function efficiently, much like search engines reshaped how people find information. Bill Coulam then dives into the technical side, explaining that indexes improve read performance but come with trade-offs, particularly slower writes because both the table and index must be updated. A key rule of thumb is that indexes are most beneficial when queries return a small subset of data, typically under about 25% of rows. The group explores how poor indexing strategies, like over-indexing or missing indexes on key relationships, can quietly degrade performance over time. Bill shares a striking real-world example where adding missing indexes reduced a process from taking 24 hours per record to processing millions in just a couple of hours, highlighting how impactful proper indexing can be. The conversation broadens into database design philosophy and performance tuning. The team discusses different index types in PostgreSQL, when to use them, and how to balance read vs. write performance depending on use cases like bulk inserts or high-frequency queries. They also touch on when relational databases fall short, such as full-text search or massive write-heavy workloads, where NoSQL or specialized systems may be better suited. 
Ultimately, the takeaway is that effective database performance comes from understanding your data, access patterns, and trade-offs, combined with ongoing maintenance and thoughtful design rather than relying on defaults or assumptions. Transcript: MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I'm hosting again today. I'm going to start by introducing Bill Coulam, who's with us today. He comes to us from the data team. And he's been here before, but we're going to focus on some information that he has to share. So, he's kind of the star of the show today. Also with us, returning, we've got Eddy, Travis, Justin, Dave. Mr. Perez, great having you with us. We've got Mike Perez here with us, and Ramses. As usual, I'd like to start with something a little bit outside of our topic in order to bring it in and tie it into the outside world. And I was thinking about a story I think I've shared before. The importance of this moment early in my career keeps, like, growing as I look back to it, like, wow, that was a big deal, and I didn't realize it at the time. So, in the early 2000s, somewhere in the early 2000s, early, early 2000s, I was working for a guy [chuckles]; I'm going to say that. He had some projects, and he didn't have enough resources to do some freelance projects, and so I was doing some of his stuff. He was outsourcing his freelance work to me [laughs]. And he had a project that was in Windows, and there was something they wanted to accomplish through the API. And I started looking through the documentation, trying to use Microsoft's tools to search the documentation, and I spent hours. I looked everywhere I could [chuckles], and I couldn't find it. I came to the conclusion, maybe this doesn't even exist. And I came back to him, and he got back to me, like, 30 minutes later. He said, "You know, there's this new tool called Google, and I use that, and it's amazing. 
You should start using it because it works really well, and it led me to this documentation." Like, wow, well, I know what I'm going to use now. I'm going to use this Google thing [laughs] because that works way better than actually going through the table of contents, and the index, and the documentation, because that's really hard to search through. Those older forms of indexing were insufficient. Now, Google had this brilliant idea, you know, the founders of Google, that, okay, we'll index the internet. And even back then, that was, like, an impossible goal [chuckles]. And there were other sites that were doing it. There were indexes out there. What they would do is they'd look at the words on a website, and they would create an index based on those. And so, if you look for a word, they'd look for a website that had a lot of those words. Well, people really quickly figured out how to game that [chuckles], and, of course, they did. So, they were useless almost immediately because people would go into their meta tags, and they'd just write the same word a hundred times for something that the site was really not very applicable for. What Google did is they came up with a different sort of index, where they would index words in the links that linked back to a site, and also give extra weight if there were a lot of them, right? And so, by building a more appropriate index that suggested popularity, rather than self-determined, a self-stated importance of the page for a specific topic, they were able to come up with something way more effective. And you don't always think about indexes, you think, index? Like, I remember going to the library. It had, like, the Dewey Decimal System, which is really kind of weird and awkward and hard to find things with, but it was way better than the alternative, which would have been nothing. 
You don't usually think about indexes changing the world, but that index, that PageRank index, you know, the PageRank algorithm that they use to just create an index, that's all it is, right? Link this word, map this word to a website, so that when you're searching this word or phrase, then we can find it. It literally, like, fundamentally changed culture. It's now a verb [laughs]. Like, you Google something, even if you're using Bing for those of you out there who use Bing [laughs]... DAVE: Use Bing to Google, yeah. MIKE: Exactly. You use Bing to Google, because information now is accessible, and that is something that didn't exist before that. For all the digital natives who've grown up in this world, like, how did you find things before? Well, you didn't [laughs]. You suffered. You wandered through libraries. DAVE: We just got used to not knowing things, yep. MIKE: Exactly. That's exactly what you did. You got used to not knowing things. It changes everything when you have an effective index. And I could talk about all the times in my career when something's missing from the database, and yeah, it was the index. It's always the index. There's always a missing index somewhere. It solves all of your performance problems. And there probably is an exception, but I can't think of it [laughs]. It's always the index. That's what we're going to talk about today. We're going to talk about database performance. And we've been wanting to, you know, Bill's been preparing this and thinking about this for a while. If we're talking about database performance, indexes are going to come up over and over again. And this could seem really dry, and this is going to be a technical deep dive, right, we're going to very much going to talk about indexes. We're probably going to be focusing on PostgreSQL. But this idea of indexes is not a trivial one. It's how we operate in the modern world. Our culture, our commerce has been fundamentally transformed. 
Our ability to know things and outsource, you know, to this Library of Alexandria that we've got in our pockets all depends on indexes, and it's amazing. There's my introduction, Bill. And I wanted to lead out with some weight behind what you're going to be talking about today. BILL: I love it. That was a fantastic segue. All right. Hi, everyone. I am Bill, Bill Coulam. I've been doing this work for about 30 years now. I started as a software engineer using COBOL and mainframes, but I don't put that on my resume because I don't want anyone to ever call me back to help with that. So, I tell people I started with C and C++. I was actually one of the first users of Java back in 1995. My company that I worked for at the time, Anderson Consulting, they wanted me to go around to their clients and tell them what I thought of Java. And, at the time, I felt like it really wasn't ready for primetime, and so I kind of voted myself out of working on that platform. But that's okay because I ended up, on every project that I worked on, working with Oracle, and, at the time, Oracle was the 800-pound gorilla. And I was in the telecom industry, where we had some of the largest volumes of data in the world, and so I learned a lot of great lessons working on those big systems. It's a whole other world jumping between databases that have 10,000 to a hundred thousand rows to databases that have 500 million, a billion. Performance tests in your copy of production can take three hours. It's a completely different world. Anyway, so you learn a lot of good lessons working on data that big. I ended up sticking with Oracle for a long time. It became my bread and butter. And went from San Francisco to Denver to Houston, and then back here to Utah where I grew up. I've been here longer than I spent time in my own hometown. So, I've been here in the northern central part of Utah since 2007. Anyway, let's go ahead and jump into it. 
We're going to be talking about four areas: the fundamentals of indexing, some guiding principles, the index types that are available to us using Postgres as our source database, and some indexing dos and don'ts. Firstly, some fundamentals. An index is a shortcut to get at the data. However, because an index is a separate structure from the actual table containing the data, it requires at least two I/Os to get at the data: on

    59 min
  2. Episode 96: AI & Code Reviews

    APR 15


    This episode explores how AI coding tools are changing the role of code review. The hosts point out that AI can generate large amounts of code quickly and even review it, which shifts the bottleneck from writing code to reviewing it. While AI can handle repetitive or low-risk tasks like documentation updates or simple refactors, it can also produce inconsistent feedback and get stuck in loops. Because of this, teams need clear rules and priorities, such as focusing first on whether code works, then on security and performance. AI is useful, but only when its boundaries are well defined. The group discusses different ways to structure AI-assisted reviews. Ideas include using multiple bots to score changes, setting strict allowlists for what AI can approve, and blocking sensitive areas like business logic or database changes. They compare AI to a junior developer who can help but should not be fully trusted without oversight. Risk becomes a key factor, similar to self-driving cars where automation works best under specific conditions. Some participants prefer AI as an assistant that gives suggestions rather than one that approves code, since human judgment is still needed for context and decision-making. The conversation also highlights what is lost when humans are removed from the review process. Code reviews have traditionally been collaborative and educational, helping developers learn and improve through discussion. AI removes much of that interaction and can even create false confidence by being overly agreeable or flattering. This can lead to mistakes making it into production. In the end, there is no clear solution. Teams need to balance speed with caution, use AI where it adds value, and keep humans involved to maintain both quality and the collaborative nature of building software. Transcript: MIKE: Hello and welcome to another episode of the Acima Development Podcast. I am Mike, and I am hosting again today. With me, I have, as usual, Will Archer. 
We've got Thomas Wilcox. We've got Eddy Lopez. Dave Brady. DAVE: Hello. MIKE: [inaudible 00:35] join. And we've got, after a long absence, Tad Thorley [laughs]. TAD: Yeah, thanks for inviting me. MIKE: We bumped into him this week, and he came and joined us, so it's great to have you, Tad. And Tad actually kind of seeded our topic for today that we'd like to go into. As usual, I'd like to, you know, connect this to real life. I went fishing for a compliment today [laughs]. I was talking to my daughter at lunch time, and she was saying something to my youngest. I didn't even hear what she said, but she said something like, "Oh, because you're strong and tough." And I didn't know who she was talking to. And I said, "What was that?" She said, "Oh, I was talking, you know, I was talking to him." I'm like, "Okay, because I know that I am, you know, weak and fragile." And she looks at me [laughs], and then she says, "You are not weak. You are strong," something [laughs] along those lines. I thought, ah, thank you [laughs]. Thank you. Say nice things to dad. And I totally dug for that. Totally not deserved in any way [laughs], but I took it anyway. As humans, we like somebody to say something nice to us. It's always a good thing. But we also are totally prone to flattery. And [laughs] if somebody says something nice to us, we will believe it, whether it's true or not. Actually, this morning, early, I read a crazy story. Crazy story. And I'm not going to go into it in depth, but it involved a scammer in Mexico convincing a variety of U.S. movie executives to make a movie out of his story of being imprisoned by the Mexican cartels to play flag football [laughs]. DAVE: Flag football. That's the interesting-- MIKE: To the death. To the death. DAVE: To the death. Oh yes. Yes. MIKE: But, you know, you can keep [inaudible 02:21] WILL: But no contact until you die. MIKE: Exactly [laughs]. WILL: You're only going to take one tackle, but it's going to be a doozy. 
MIKE: I think they weren't allowed to tackle, but they were, like, breaking each other's teeth. And then if you lost, they took you out back with weapons, yeah. DAVE: It's jai alai. It's traditional down there. MIKE: [chuckles] It was a crazy story. Well, no, it was a scam artist who was pulling all this off from the beginning. But, you know, you can pull off a lot by just being really convincing and saying nice things to people, telling them what they want to hear. We'd like to talk today about code reviews [chuckles] and doing evaluations of human output. And we're in an interesting time period. A couple of years ago, even a year ago, maybe even six months ago, we would not have had this conversation. But there are tools out there now that can read your code and actually give pretty good reviews most of the time. In fact, in some ways, they're going to be better, and that "in some ways" is doing some work here. So, let me be clear: in some ways, they're going to be better than human reviewers. That is not universally true, I don't think, at this point. In fact, I think it's far from universally true, which brings us to our topic today. What does it mean to do code review today? There are tools that can do code reviews. What do they do well? What do humans do well? What does it mean? And we've talked before about code reviews. I think it's been a while. I think it's been maybe a year or two since we've talked about code reviews, the value of code reviews. So, we'll maybe touch on them maybe a little less this time. DAVE: And it was entirely a soft skills discussion, right? MIKE: Yeah, I think it was. I think it was. DAVE: Humans talking to humans. MIKE: Humans talking to humans. And now we've got the machines talking to the humans, and the humans talking to the machines, and the humans talking to the humans about what the machines are saying. It's totally scrambled.
So, revisiting this idea of reviews with AI in the mix, now, Tad, again, prompted this discussion because he's been playing around with this and has found some solutions to some of the cases that go wrong [laughs]. There are degenerate cases where the AI will recommend that you change something, and then when it sees your changes, it'll recommend you go back to the way you were before [laughs]. If you're anybody who's used a linter, you've probably seen the same thing. It tells you to fix it, and then you cause a new problem. So, which one do you choose? That's where we get into art. That's not an unsolvable problem, but there are some interesting solutions there. Nor is it nearly the sum of all of the problems here because there are all kinds of edge cases here with reviewing with AI. With that introduction, Tad, I'm really curious for you to give us a little talking to about what you've been working on and some of the solutions you've found. TAD: Okay. Yeah. I just was mentioning something to Dave because I think what's really hard is I find that, with AI, I do way more code reviews than I've ever done before. And I was giving Dave an example because I can, like, just with my Claude Code setup, I was able to integrate it with Sentry, which is error tracking, and Linear, which is our task management, and GitHub, right, has a command line. And so, I could literally, with a prompt, say, "Look at our past 20 or so Sentry errors. Create Linear tasks for each one. Create a local work tree for each of those Linear tasks. Fix them in parallel in those work trees. Create a PR for each one, and assign Chris for every PR. Do that in parallel with subagents." And, for me, typing that up takes, I don't know, a few minutes. And now I've just given Chris, like, two days' worth of reviews, possibly, or something like that, right? Like, so much code could be generated so quickly and so easily that I find that the code review step is the biggest bottleneck. 
It usually is the bottleneck, but now it's multiplied. Like, it is absolutely the biggest bottleneck in the whole process. And I don't honestly know, like, a complete solution to that. But something that we were doing at work was actually bot reviewers, where we would say, you know, like, if your review looks safe enough, the bot will just approve it. And that was kind of an interesting experiment that we were doing where you have to -- But, like you were saying, Mike, one of the first issues that I ran into when the CTO kind of implemented that was I pushed up a PR, and it said, "This code is inefficient." And I'm like, okay. And so, I just had my Claude just keep checking GitHub and say...I told it every time it says there's a problem, fix it, and push up the fixes, and just do that until everything is approved, right? And my Claude Code, for about 45 minutes, tried that. And it kept flipping back and forth between like, "Oh, you're not doing enough security checks.” Oh, "This code isn't performant enough.” Oh, "It's not doing the security checks," and just back and forth in a loop. And my Claude Code, I could almost feel its frustration in its final message to me. It essentially said, "I cannot get a review past the reviewers. I keep going in this cycle, and they are never going to review this," and it just gave up [laughs]. And I'm like, wow, I've never seen a bot just straight up give up before, but here we are. So, yeah, like, that was our first, like, test of that. Our setup was, we had what we called the bot committee, where we had a Codex, and we had, like, a Claude Opus that would both review independently then, like, an aggregate score would be kind of brought together. And if the score was over a certain threshold, then it's like, okay, yeah, you can auto-approve this. But what I did last week was, I found I had to go in and be very clear in what was okay to pass and what was not, right? Like, you're updating some documentation; that's great, you know. 
You shouldn't have to have a human, like, approve you
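Tad's "bot committee" plus an allowlist amounts to a small policy function: blocked paths veto first, only fully allowlisted changes are even eligible, and the aggregate score is a final gate. Here is a minimal sketch of that gating logic; the patterns, threshold, and function name are all hypothetical, not Acima's actual setup.

```python
from fnmatch import fnmatch

# Hypothetical policy values; a real team would tune these.
SAFE_PATTERNS = ["docs/*", "*.md", "README*"]            # changes AI may auto-approve
BLOCKED_PATTERNS = ["db/migrations/*", "app/billing/*"]  # always need a human
APPROVAL_THRESHOLD = 0.8

def auto_approve(changed_files, bot_scores):
    """Approve only if no file is blocked, every file is low-risk,
    and the committee's average score clears the threshold."""
    if any(fnmatch(f, p) for f in changed_files for p in BLOCKED_PATTERNS):
        return False
    if not all(any(fnmatch(f, p) for p in SAFE_PATTERNS) for f in changed_files):
        return False
    return sum(bot_scores) / len(bot_scores) >= APPROVAL_THRESHOLD
```

The shape matters more than the numbers: a documentation-only change with strong scores sails through, while a migration is rejected no matter how confident the bots are, which is exactly the "clear rules about what AI can approve" idea from the discussion.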

    43 min
  3. Episode 95: What Do Data Engineers Do?

    APR 1


    This episode explores the role of a data engineering team within a company and how it differs from traditional application development. While app developers focus on performance and real-time systems, the data team is responsible for collecting, syncing, and organizing data from many sources into a central warehouse (like Snowflake). Using tools such as Fivetran, data is continuously pulled from dozens of systems and stitched together into a unified view that business users, analysts, and dashboards can actually use. A major challenge discussed is how microservices (great for engineering) create fragmented data that must be carefully reconstructed to tell a complete story, such as the lifecycle of a customer or lease. A large portion of the conversation focuses on “data transformation,” which is the process of turning raw, scattered data into meaningful insights. This involves complex pipelines of queries and scripts that combine, clean, and interpret data across systems. The speakers emphasize that this work is far from simple—it requires deep understanding of both the data and the business context. Done well, it enables decision-making (like tracking revenue trends or customer behavior), but done poorly, it can lead to incorrect conclusions that impact the entire company. They compare transformation to cooking or even building a rocket: the output is fundamentally different from the raw inputs, and small mistakes upstream can cascade into major issues downstream. The group also discusses practical challenges in data modeling, system design, and collaboration between teams. Topics include the tradeoffs of normalization, handling schemas across evolving systems, and frustrations like poorly defined enums or lack of communication when engineers change databases without notifying the data team. Security is another key theme, especially around controlling access to sensitive data (PII) and preventing misuse. 
Ultimately, the episode highlights that data work sits at the center of the organization: it depends on upstream engineering decisions and directly influences downstream business outcomes, making clear communication, documentation, and thoughtful design essential as systems scale. Transcript: DAVE: Hello and welcome to the Acima Developers Podcast. We've got a fun group today. I've got Eddy. We've got Kyle. We've got Thomas. We've got Mike and Justin. We've got Bill, and we've got Zach. Now, Bill and Zach are infrequent. Bill's our DBA, and Zach is the...what are you? The head of the data team? ZACH: Technically, my title is Senior Manager, Data Architecture and Governance. But that's a fancy way of saying that I am heading up a data engineering team. Yep. DAVE: They made you widen the column size to fit that job title in. ZACH: Yeah, pretty much. DAVE: Yeah. Yeah. So, for people that don't know, I've been at Acima for almost five years, six years. I don't keep track of numbers. I worked in engineering for a couple of years, then I went over to work with Zach on the data team for a year. And then he got rid of me and sent me back to engineering. And I've been back over here for, like, a year and a half now. And I think it's really, really fascinating the different ways the teams work. Like, app dev focuses on latency, and we love to do everything with compute, and we're very scarce with storage. And the data team is kind of the other way around. You've got the great big warehouse. Storage is free. Compute is crucially expensive. It's like, you've got a table that has all the integers in it, and you look them up by ID because you can't calculate anything. That's a joke. But people don't believe me when I tell them you have a day’s table that is literally every day from 1970 forward. We don't want you to calculate the name of the day of the week. Just look it up in the table. We don't want you calculating the first letter of the day of the week. 
That's a separate column in that table, right? ZACH: Yeah. I don't think that that table was originally built for that reason specifically. I think a lot of people used it for that reason. There's a lot of really good days logic built into, like, Snowflake, Redshift, and all of the warehouses. However, when Acima first started, warehousing was a little bit newer, and so maybe a lot of those functionalities didn't exist. Now it's more like, what's a holiday [laughs]? And that's the main reason we're using that table is, what is a holiday? And that table is not always the most accurate on what a holiday is, either. But it's way more accurate than if we didn't use it [laughs]. And it's a data source that my predecessor exported from somewhere a decade ago and runs all the way through, like, 2060. So, I'll probably never adjust it, you know. It’s just -- DAVE: That was going to be my question, so when do we even run out of days? ZACH: It doesn't matter to me. It'll be long after I've, you know -- EDDY: Is that only taking into account local holidays, or now that you're considering, like, international growth, like, does the table also consider international holidays, or is it only local? ZACH: It's not been updated to consider international holidays. We don't have to do a ton with holidays on the data team. Really, that's going to be on our production systems, right? Like, we are consumers of data. We are not...Well, I mean, we generate data, too, but we're mostly consumers of data. If you look at the flow in, it's mostly data coming in. So, it's really important for, like, LMS to understand what a holiday is in every single country that they're in. Not as important for the data team because the events that should not happen on holidays, there should be no data for because they didn't happen, right? But no, I've not expanded that table for, like, Mexico or Canada or any other country. It's just U.S. And even then, like I said, it's not fully accurate. 
DAVE: I remember when I started here, we had no plans to go outside. We were just U.S. company, and so don't worry about it. And businesses pivot and grow. Zach, I got a question for you. I jumped straight into some detail, but I don't think a lot of people know what a data team does. We were talking about this in the pre-call. Like, the DBA does the architecture, but you guys...you said CrossFit. I work on Merchant Portal. My job is to help keep the merchants happy so that they can give leases to customers and get the product out the door. That's an application database written in Postgres. Where does my data go after, you know, like, every night, what happens to my data? What do you do with it, and who do you give it to, and what do they do with it? ZACH: Yeah, so that's a loaded question. Every 15 minutes, it syncs to the warehouse. We use tooling for that. That tooling is Fivetran. They're a great company. They have a bunch of people like me and smarter than me focusing on just, how do we sync data from data source to Snowflake or Redshift or a data destination, basically? So, it's the best way, in my opinion, to sync it. We used to have an in-house solution. It would miss data. We didn’t focus on it a lot because we have a bunch of other stuff. So, now it syncs into the warehouse. And especially in a system of microservices, which I know are great for software engineers, they're terrible for data engineers because the next piece of the puzzle is I have to stitch all that data together. A lease record, for instance, or really any record, is not going to be wholly in one service. So, now I need to create transformation tables so that our business users, our end users, our BI analysts, and the people viewing their dashboards can see the holistic view of the lease. Because, as you know, there's a certain point where Merchant Portal just doesn't care about it anymore, and it moves on to LMS. 
And then LMS doesn't necessarily care about all the nitty-gritty of what's happening behind the scenes in all the other microservices for, like, payments or anything like that. So, we really become the place where we're stitching that together. In the last count I had, I think there's 68 Postgres databases syncing into the warehouse today. DAVE: Wow. ZACH: We do not care about all of them [chuckles], to be frank. We do care about around 30 of them, and we use them for transformations. And then there's a bunch of just, like, batching, right? Like, I don't want, and you guys don't want, nobody wants the production customer-facing services spinning up jobs in the middle of the night to grab thousands or hundreds of thousands of records to throw them in a CSV and shoot them off to, like, a company that needs that information, right? Like a third-party company, maybe that we integrate with. And so, the last time I recorded, there was something like 50 third-party integrations that we're also handling. That data will go into those companies; data's coming out of those companies. Maybe the data goes into those companies in real-time events through the production consumer-facing services, but I am siphoning them into the warehouse so we can start to see, like, is this third-party company worth using? What is the effects that we are having here? Or maybe those companies are enriching our data, and then we look at that on the back end, and we let that adjust business decisions. And so, all that's got to come together in a singular place. And it's a lot. Like, the last time I checked, it’s...I keep saying, “Last time I checked,” I don't watch this like a hawk. But we had, like, 13 and a half thousand tables in the warehouse. So... EDDY: So, Zach, you mentioned something interesting, and I kind of want to elaborate a little bit. So, you said you have about 60-plus tables that have data, but you only care about half of them. What's the point of us -- ZACH: 68 schemas. 
So, like, Merchant Portal is a schema. Merchant Portal has, like, 218 tables. I care about those
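Zach's "stitching" step can be pictured as a join across the synced per-service schemas into one transformation table. This toy sketch uses SQLite from Python's standard library in place of the warehouse; the table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two "synced" source tables, standing in for per-service schemas
# (e.g., a merchant-portal application and an LMS lease record).
cur.execute("CREATE TABLE mp_applications (lease_id TEXT, merchant TEXT, approved_at TEXT)")
cur.execute("CREATE TABLE lms_leases (lease_id TEXT, status TEXT, balance REAL)")
cur.execute("INSERT INTO mp_applications VALUES ('L-1', 'Bike Shop', '2024-01-05')")
cur.execute("INSERT INTO lms_leases VALUES ('L-1', 'active', 312.50)")

# A "transformation" table: one row per lease, stitched across services,
# so analysts and dashboards see the whole lifecycle in one place.
cur.execute("""
    CREATE TABLE lease_unified AS
    SELECT a.lease_id, a.merchant, a.approved_at, l.status, l.balance
    FROM mp_applications a
    LEFT JOIN lms_leases l ON l.lease_id = a.lease_id
""")
row = cur.execute("SELECT * FROM lease_unified").fetchone()
print(row)  # ('L-1', 'Bike Shop', '2024-01-05', 'active', 312.5)
```

The `LEFT JOIN` is the interesting choice: a lease that Merchant Portal has approved but LMS hasn't picked up yet still appears in the unified view, with NULLs for the missing downstream fields, rather than silently disappearing.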

    1h 3m
  4. Episode 94: Staying Cool During Production Issues

    MAR 18


    Mike opens by framing “production incidents” with a vivid non-software story. As a teenager he smashed bathroom tile with a dead-blow hammer, drove his pinky knuckle into a jagged shard, and had to manage both the injury and the panic of his little brother who got sick from seeing it. He uses that as the metaphor for on-call life. Bad things happen, reactions vary, and what you do in the first moments matters, especially staying calm, reassuring others, and focusing on the most urgent next step. The group riffs on modern incident response, starting with humor about “just ask the LLM,” but landing on a real point. AI can be excellent at sifting noisy logs, even if you should not blindly trust it mid-emergency. Dave pivots to the idea that the best loyalty, from customers and coworkers, is earned when something goes wrong and support is excellent. He describes jumping into a long outage call ready to tear apart his own recent work with zero ego, because people remember who shows up with “two tow trucks” when everything’s on fire. Mike and Justin emphasize composure and delegation. If you are overwhelmed, hand off to someone with a cool head. Prioritize restoring service, “stop the bleeding,” before deep root-cause analysis. Invest ahead of time in rollback plans, feature flags, staged rollouts, and observability. From there, they broaden into practical triage and long-term resilience. Verify the issue, look at metrics and dashboards to identify symptoms like CPU, disk, network, traffic spikes, and database issues, and narrow the delta between last-known-good and broken. They discuss how constraints differ in mobile, including App Store review delays, crash loops, and reliance on the user’s device and network. They also cover security incidents, where you need monitoring to detect attacks, plus coordinated mitigation like blocking traffic and working with vendors. 
They stress the importance of having an incident quarterback, a playbook, and a contact list for after-hours escalation. The close focuses on what comes after the band-aid. Do postmortems and cleanup so temporary fixes do not become permanent donuts. Balance realistic risk planning with business needs. Emphasize strong observability and the ability to recover quickly, alongside prevention, echoing practices like Chaos Monkey and the idea that monitoring prevents historical events from re-happening. Transcript: MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I'm hosting again today. We've got a good crew here today, and I'm excited about this one. We've got Kyle Archer, Eddy Lopez. We've got Dave Brady. Hello, Justin Ellis, Thomas Wilcox. We've got Ramses Bateman, and Will Archer. So, I think we've all been here before multiple times [chuckles]. We've got a familiar crew to talk about an important topic that's always fresh because [chuckles] there's a constant need. I was racking my brain what story to tell for this, and I ended up going back to...I don't even remember exactly when it was, but it was somewhere in my late teens, early twenties, in that era. So, admission, that's quite a long time ago [laughs]. That's more than halfway back [laughs]. And I was helping out at my parents' house with some remodeling they were doing. They were tearing out the...they were redoing the bathroom. And so, they were tearing out...they had a wall that had some tile on it, and they were tearing out the tile. And they were going to put some new...I don't even remember. They shifted things around, but they were tearing out the tile. That's the important part. And I had my little brother with me nearby. He was too young to really help. He was, like, six. And, you know, he was just hanging out and chatting with me, and I was taking a...they call it a dead blow hammer. It's a hammer with sand in it, so when you hit, it just stops. 
So, it's a weighted hammer, but it has a soft landing, so it doesn't have a...it doesn't bounce back, right? It just kind of stops, rather than having a strong bounce. It's good for situations where you want to do that, right, where you don't...you really don't want it bouncing back and hitting you in the face. And I was breaking up the tile wall. Context, there I am with, like, a six-year-old breaking up a tile wall. And there was some wire mesh behind it, and I was gradually peeling back. As I broke it, I was peeling back this wire mesh that was embedded in some sort of mortar. And I was pulling out [inaudible 02:26] the cement behind the tile. And so, as I'm banging, I pull back a piece, you know, pull it back because I'm making some progress, and I swing in. And because that broken tile is now hanging out and mounted on that wire behind, with the, you know, the cement that's holding it together, when I swing with that hammer at full force, right after peeling, you know, an extra layer back, I sunk my knuckle of my pinky finger right into a piece of broken tile. And I go, oh, and I look down. And I look down into my knuckle, maybe five eighths of an inch, a couple of centimeters more than you should be looking down into a knuckle [laughs]. Oh [laughs], that moment, that's not good. And then the blood starts, right? A rather remarkable amount of blood, I'll say [laughs], was coming out of the finger. Remember, there's a six-year-old here in the room with me. And he yells, "Mom, dad, come help Mike. He's really hurt bad." And, of course, they're thinking the worst. I'm like, "No, no, no, no, no, it's okay [laughs]," yelling. But, you know, there's the moment of panic there. And so, I had some choices in that moment, right? What do I do? Luckily, I think I handled it pretty well. I comforted the people around me to let them know this isn't a disaster. I'm going to need to do something, but you don't need to, you know, call 911. 
Unfortunately...so, we got everything up, went to one of those urgent care places. They stitched me up. I could tell some other weird stories about it there. A few weeks later, I noticed a little white mark on my finger, and I started pulling, and it was a piece of the thread from the gauze that had somehow got stuck in my finger. And I pulled out, like, a foot [laughs] of this string out of my finger, and then it snapped down near the bottom, and some of it zipped back in. I've never seen it again, like, oooh [vocalization] [laughs]. And I still, when I touch my knuckle, I feel weird sensations all the way down the rest of my finger. It's a [inaudible 04:21] impact of that one. But my poor little brother [chuckles], he got sick from seeing it, and he was throwing up and just not okay. And I felt bad, and I had to comfort him, "This is really okay. I get some stitches, and it'll be fine [chuckles]. It will be fine." And [chuckles] I felt really bad because I was not really even thinking about it. I didn't realize that he was not okay. So, when I discovered before I left, like, 10 minutes later, he wasn't okay, you know, I gave him a hug, you know, tried to help him feel like things were okay, get a ride over to the urgent care facility. They stitched me up, and I'm fine. Today, we're going to talk about dealing with production incidents. And I bring up this example because it's outside of software, but it's a production incident, right? You've got the bad things happen, and what do you do? What do you do now? And I think that there's some aspects to that story we can riff on as well as others. But it helps set the stage for a lot of what happens when we have these production incidents and what we do in that moment because it matters a lot. And how some of the reactions, you know, there's a variety of reactions to this moment among the various parties in place that had some better, some worse, you know, impact. So, servers are down, you know, how do you keep cool? 
Things are on fire. And that's our topic today. And I've got definitely some thoughts on this. I've written down some notes, but, as usual, I don't want to...I've told the story, right? I've laid out the context. So, I am really hoping some of you all will have some initial thoughts to lead out with. EDDY: Sorry, is the answer not ask AI to see what's wrong with your server [inaudible 06:02]? MIKE: [laughs] DAVE: How do you think the server went down? EDDY: I was thinking, is that not the go-to answer now? I'm sorry, podcast over. Ask the LLM. [laughter]. WILL: Not not the answer. DAVE: The AI is going to say, "You are absolutely right to be upset that the server is down." JUSTIN: So, related to that --  WILL: I mean, I'm just saying that's not not the answer. Like, AI is great at reading a log. Like, it took me --  DAVE: Yeah, actually. WILL: Years, if not decades, to get, like, pretty decent at reading log vomit, you know what I mean, like, filtering through the chicken innards that [laughter], you know, a log will, like, throw up all over you and just be like, "Oh yeah, that's actually it." AI is actually super duper at that. I don't trust it, especially in an emergency but, like, do that. Sure. Yes. Do it. EDDY: I was literally pairing with someone, and we were looking at a Grafana log, right? And I'm like, "Oh, it's because of this." And they're like, "Where? Where is that?" And I'm like, "Oh, I read it somewhere here. Hold on, let me find it again." And, like, you get so good at ignoring all the clutter, you know, and just filtering everything. But, oh my God, dude, like, AI can sift through, like, raw JSON, like candy. DAVE: I have a thought to throw out. I have a bunch. I always do. But one of the things that...and this is not really a production thing, well, maybe it is: loyalty. The thing that makes somebody loyal, a customer, in particular, is you get this graph of, like, did they have a good time, or did they have a bad time? 
And then did they receive good support, or did they receive bad support? And the most vehement haters of a
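The summary's triage advice to "narrow the delta between last-known-good and broken" is, at heart, a binary search over the deploy history, the same idea behind `git bisect`. A minimal sketch, where the version list and the `is_bad` check are hypothetical stand-ins for real deploys and a real health check:

```python
def first_bad(versions, is_bad):
    """Binary-search a deploy history for the first bad version.

    Assumes versions[0] is known-good, versions[-1] is known-bad,
    and that once a version is bad, every later version is too.
    """
    lo, hi = 0, len(versions) - 1   # lo: last known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(versions[mid]):
            hi = mid                # bug was introduced at or before mid
        else:
            lo = mid                # mid is still good: bug came later
    return versions[hi]

# Hypothetical deploy history; pretend the regression shipped in v6.
deploys = [f"v{n}" for n in range(1, 10)]
culprit = first_bad(deploys, lambda v: int(v[1:]) >= 6)
print(culprit)  # → v6
```

Each check halves the suspect range, which is why establishing a last-known-good point early in an incident pays off: nine deploys take three checks instead of nine.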

    1h 12m
  5. Episode 93: The State of AI

    MAR 4

    Episode 93: The State of AI

    The episode turns into a freewheeling, funny, very human conversation about how AI is showing up in developers’ day-to-day lives, especially for the “I can do it, I just hate it” work. Will talks about getting wildly inconsistent AI PR review comments, but still finding real value in using Claude to refactor boring-but-necessary code like splitting up bloated classes and shared components. Dave riffs on how Claude is starting to mirror his humor and writing voice, then connects it to a psychology idea from Marty Seligman: don’t force yourself to “get good” at tasks you’d still hate even if you mastered them, because that’s a fast track to misery. For Dave, AI is a relief valve: it can generate PR descriptions, test scripts, and documentation in minutes, turning a three-hour, soul-draining slog into something manageable, and giving him back energy for the work he actually enjoys. From there, the discussion shifts into “agentic” workflows and a geeky Dungeons & Dragons thought experiment: could you build an AI-powered rules engine that handles combat bookkeeping, tracks inventory and positions, and references a big PDF ruleset accurately? Dave and Will talk through using RAG (retrieval-augmented generation) to index the rulebook and something like MCP-style tooling to let the model read/write to real databases so it doesn’t lose track of facts (what room you’re in, what items you have, what the rules say about advantage/disadvantage). They also touch on how newer models can sustain longer, more coherent outputs (Dave gushes about Claude Opus improvements and even creative writing that lands emotionally), and they speculate that “divide the work into sub-agents” is how these systems stay on track as tasks get bigger. The back half gets darker and more real: what happens when you give AIs root-level access to email, calendars, and money? 
Will imagines an assistant that can handle adulting (getting flooring quotes, scheduling bids) and Dave goes further, describing the exhausting annual battle to secure life-saving medication coverage for his wife and wishing for an AI that can fight bureaucracy relentlessly. That leads into red teaming, prompt injection, and the uncomfortable truth that guardrails are often driven by liability, not human-centered ethics; Dave contrasts frustrating experiences with GPT-style “lawyer mode” refusals versus Claude’s more collaborative boundary-setting, and argues we’re heading toward rules for AI that resemble rules for people. They close on a practical optimism: AIs aren’t “good” on their own, but they’re powerful force multipliers for getting over psychological humps, clearing drudgery, and even helping people stop discounting their own progress by reflecting back evidence-based positives—an unexpectedly meaningful use case amid all the chaos. Transcript: DAVE: Hello, and welcome to the Acima Developer Podcast. I'm David Brady. And we have been having a fantastic time chatting about AI, and we forgot to hit record. So, we're going to start the show right now. Today on the panel I've got Kyle Archer. I've got Thomas Wilcox, and I've got Will Archer. And this is going to be a fantastic chat. So, what have we been talking about, guys? We've been talking about D&D, music, lyrics, poetry. What's going on in AI this week? WILL: Oh man, I'm getting better. I'm getting better and better. Like, I got an AI review comment on a PR of mine earlier this week, and it was good. And I also got one today, like, just now, seconds ago, and it was doggy doo-doo. So, you know, like, they're getting smarter. They're getting smarter. They saved my bacon. My prompts have been getting more ambitious, you know? Like, more and more ambitious, where I'm like, hey, it's just, like, it's amazing. Like, I love finding the things that I hate. They're not hard. I just hate them. 
And AI doesn't have feelings about scut work. You know, I'll tell you, like, one thing. This is an antipattern that I think myself and other people will fall into, like, very frequently, but wonderful [inaudible 01:37] for AI. It's like, when you've got, like, shared library components, you know what I mean, or, like, your class is starting to get big, it's not technically complicated to, like, start breaking that thing up and, like, pulling these things into shared libraries, pulling these into shared modules, you know what I mean, common class extensions, like, all that stuff. It's very, very easy to do. It's very simple and straightforward. But you're not doing it, and I'm not doing it, and none of us are doing it, but we ought to be, and we can. And Claude does a pretty decent job. I had to clean it up, but I'm not mad. It didn't do me dirty, like, it did not do me wrong. DAVE: I have started saving screenshots of things that make me laugh about the AI, and Claude is absolutely learning my sense of humor and my writing style. And so, I literally...I will start typing a comment, and then I'll take my hands off the keyboard. I'm looking at one right now that is literally, "Comment, dear future..." and then it wrote, "Dave, colon, I'm so sorry." And that was pretty much where I was going with that comment, which is...it made me howl. There's another one where it's like, "This class couldn't," and then it completed, "possibly be located in a worse location." Oh, something you just said, though, this is a huge, like, a cross-threaded jump. I'm going to be thinking about this for a few days: the stuff that you can do, but you don't want to, that you don't like it. Okay, ready for a real big cross-discipline skip? Marty Seligman, "Authentic Happiness," I think, is...He wrote a book about happiness. But one of the things that he talks about...he's a psychiatrist. He was literally president of the APA. 
And what he realized is that there are things in your job...we tell everyone, "If you're bad at something, get better at it," and he said, "That is a recipe for depression and misery." Ask yourself what things in your job, that if you were really good at them, you'd still hate it. Don't get good at those things. Get rid of them. Put them off on someone else. Find somebody who likes that work and trade it off because the more you do it, the more miserable you're going to be. You're not going to find meaning in it. It's going to be drudgery and scut work. And there's so much stuff that I have been shoveling off on Claude, using that as my rubric to say, I'm going to keep this. No, you go do that. And, oh, it's so good. I write very, very slowly. It is agonizing for me to write. You guys, you've met me. I like to talk, and I talk fast, and that means I talk sloppily because I'm thinking as I talk. I'm an extroverted thinker. I'm literally hearing myself talk for the first time, and I'm processing these ideas. Well, when I write, I can't do that, and so it slows me down. So, everyone on my team they're writing their Slack report every day. It takes them five minutes. It takes me half an hour. They write a pull request description, takes them 20 minutes, takes me 2 and a half to 3 hours to write. And I've got a review writing skill now in Claude that I just drop it on there, and it follows the Acima template. Here's the ticket, here's the summary, here's the description, here's the reason why, here's how to test. Go on main. It will actually write me the Rails runner script. You put the thing in, like, go into a console, and type this, type this, type this. Nah, screw that. Open up bash and type Rails runner, and then here is your script. And it's going to load your merchant. It's going to do this, da da da. And then it will show you, right here, here's your output. Boom, done. Jump back to the branch; do it again. Here's the different output. Off to the races you go. 
And it will generate a PR in, like, two minutes, what was taking me three hours, and something that takes me three hours that when I'm done, I don't feel happy. I just feel exhausted. I just feel relief that it's over. And so, having that off my plate, fantastic. WILL: [inaudible 05:35] say there, like, I love it. Like, I have found that another stupid AI trick is just writing documentation, writing reviews, that kind of stuff. Man, I hate it. I hate it so much. But what I've found, right, and this is, I don't know, maybe more psychology than AI, is, like, AI will get it wrong. Often it's not. It'll blow it all the time, all the time. But the fact that they tried and failed, it's like, oh, I've got this thing now. I can work with this thing, right? Like, I'm not going on, like, a blank page, you know? Like, it'll just sort of, like, blargh, vomit out whatever sequence of words it thinks are going to come next in the equation, and then I can work with that. I work from a position of strength. DAVE: Yeah, I put a tweet out this morning. How'd I put it? "Claude lets me be 5 of me, each doing 80% of my work. One of me is an idiot, but the other 4 of us are 3 more of me." The footnote is, "Mind you, some days it takes all four of us to hold that idiot down," right? It's like, we've all lost time to the AI. If you've got any work done with AI, you have lost work and lost time to AI learning how to run it, because when it rolls, it rolls the truck, right? It will crash. WILL: Right. Okay. And this is a great, like, I am far from an AI expert. I am constructively lazy, which is the highest and best version an engineer can have, you know. DAVE: Capital L, Larry Wall's lazy, mm-hmm. WILL: But I'm not an AI expert. Like, I just, you know, I will pick up the tool, and it'll be like, if I've got a handful of nails and somebody's like, "Hey, this is a Powernail," I'll be like, all right, bang, bang. So, I was pitching Dave on, like, a less code-oriented thing. 
DAVE: Yeah, talk about this for a second. WILL: Mike left, and he left Dave and I alone to our own devices. And so, this is what you get, Mike. DAVE:
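The RAG idea from this episode's summary, indexing the rulebook and retrieving the relevant chunk before the model answers, can be sketched at the retrieval step. This toy version scores chunks by keyword overlap; real systems use embeddings, and the rulebook snippets here are made up for illustration:

```python
import re

# Hypothetical rulebook chunks, as if split out of a big PDF ruleset.
RULEBOOK = [
    "Advantage: roll two d20s and take the higher result.",
    "Disadvantage: roll two d20s and take the lower result.",
    "Inventory: a character can carry weight up to 15 times Strength.",
]

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, chunks, k=1):
    """Return the k chunks sharing the most words with the question."""
    q = tokens(question)
    ranked = sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)
    return ranked[:k]

context = retrieve("how does advantage work on a roll", RULEBOOK)
print(context[0])  # → Advantage: roll two d20s and take the higher result.
```

The retrieved chunk would then be pasted into the model's prompt as context, which is what keeps the answer grounded in what the rules actually say rather than what the model half-remembers.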

    51 min
  6. Episode 92: Technical Hobbies

    FEB 18

    Episode 92: Technical Hobbies

    This episode unfolds like a long, curious conversation among people who can’t help but see software everywhere—even when they’re not writing code. Mike opens with a story about large language models: how something as simple as guessing the next word, repeated trillions of times, leads to strange and powerful emergent behavior. Models start writing poetry, solving math problems, and following instructions—not because they were explicitly taught those skills, but because mastery in one domain spills into others. That becomes the episode’s core theme: real understanding, whether human or machine, comes from making connections across disciplines. From there, the panel moves into personal stories. Justin talks about electric vehicles and electric dirt bikes—machines made of frames, motors, batteries, and controllers that must “speak the same language” to work. Tuning power output or regenerative braking feels eerily similar to designing distributed systems or microservices: misaligned interfaces lead to failure, while deep understanding unlocks performance and joy. Kyle shares his experience retrofitting a sound system into a 1994 Ford Ranger, cutting metal and rerouting wiring to modernize old tech. Thomas brings in video games, describing how a decade-old console game breaks when ported to modern PCs because its logic was tied to frame rate—an unintentional lesson in legacy assumptions and technical debt. Each story circles back to the same realization: whether it’s hardware, games, or cars, the same systems thinking applies. By the end, the conversation becomes more reflective. Mike ties everything to teaching math, music, and writing, arguing that we often strip these disciplines of creativity by overemphasizing rules instead of problem-solving. Math, like programming, is a language for understanding the world; music and writing are languages for expression. 
The best software engineers, the group agrees, aren’t just chasing paychecks—they’re hooked on the joy of making, tinkering, and solving problems. The episode closes with a gentle challenge: don’t only optimize systems for work. Build something for yourself. Learn a new language, musical or technical. Touch hardware. Make noise. Those side paths, it turns out, are often what make us better at the thing we thought was our “main” craft. Transcript: MIKE: Hello and welcome to another episode of the Acima Development Podcast. I'm Mike, and I'm hosting again today. We're happy to have with us today Justin, who has not always been with us lately [chuckles] sometimes. JUSTIN: [laughs] MIKE: [inaudible 00:32]. He's not going to probably be here for the whole discussion, so I'm going to kind of pick on him a little bit at the beginning. But it's great to have him joining us. We've got a longstanding panelist, Kyle, and we've got Thomas who's been with us a couple of times now, right? THOMAS: Yeah, I think the last four times, so... MIKE: Cool. Cool. And, as usual, I'm going to...and there's actually even more relevance today. I'll [chuckles] come back to... I'm going to start by something a little outside of...well, this one's actually kind of in software, but not in writing software. So, large language models have been all the big thing in AI over the last few years, and it's just exploded. When I say exploded, they're expecting something like multiple trillions of dollars to be invested in data centers and AI generally over the next five years. That's just unthinkable sums of money. Unthinkable sums of money. By the way, we do have Will Archer joining us [chuckles], who is here a little late. So, unthinkable sums of money. It's a big deal. These large language models are a big deal, and they often display what's known as emergent behavior.  Now, let me give a little explanation. How they usually train these things is shockingly simple. 
They have a whole lot of weights that they use that can be moved around to make a guess. And they feed it some text, just a series of words, and they don't even recognize the words. They just know each one of them is a number. They, like, say, "Here's the series of numbers. Which one's next? Guess the next word." And, of course, it's going to be wrong. They'll nudge it a little bit. It's going to be a little closer, and they'll do it again. And they do that trillions of times [chuckles], like, just an unthinkable number of times. And it turns out, if you guess the next word enough times, weird things start to happen. First of all, you get really good at being able to say plausible natural language. I'll say English because we're speaking English, but they do other languages as well. But, you know, you can give it a starting word, and it'll come up with a sentence that follows that word that's very likely. And that sounds pretty boring, though, right? Guessing likely English that doesn't sound particularly useful, except that there's this emergent behavior, because it turns out that if you get really good at guessing words, you're also kind of good at other things. For example, it can generate poetry. You say, "Give me a poem." In fact, I saw one yesterday. "Give me a poem about fourth normal form [laughs] and emotional." Well, actually, how was it described? "Emotional lyrics about fourth normal form."  JUSTIN: [laughs] MIKE: And it obliged [laughs]. And this particular example I saw yesterday, it was done by our frequent participant, Dave Brady. He then sent it to an AI music generator and had it generate a prog metal song based on it. And [laughter] it was plausible and super cringy. But you can do that by just guessing the next word. You can do things like knowledge base querying because, guess what? If you know some stuff, right, asking, "Well, what's the answer to this?" Well, if you know the next word, it'll tell you the right answer. 
And then you get even into more perhaps amazing things like multi-step reasoning, like arithmetic. If you're guessing the next word, you're going to have to learn to handle some basic arithmetic problems, and it learns to do that. And even more, it can do things like instruction following or tool use like, "Go write me some software," or, "Go" as we talked about a few weeks ago, "act as my agent to go take some actions on my behalf." And because it's going to do plausible, you know, believable things next, it'll tend to go do that. So, there's overlap. Where I'm going with this is there is overlap between fields of expertise, between domains of expertise, and sometimes getting good at one thing can help you in other stuff. I've tried for years in this podcast to connect concepts [chuckles]. It's something I try to do. I think it's useful for discussion generally. I especially try to do that with abstract concepts, you know, things that are hard to think about, try to connect them to very grounded things, very tangible things outside of software development. I think there's often more overlap than we might think superficially between things. Part of what makes us good at thinking as humans is that we make connections. We make those connections between different domains of expertise. We reuse knowledge. We repurpose knowledge. We take it from one area to another. And finding surprising connections, it's delightful; it's enlightening. So, AI is starting to show some of those things, some of those emergent properties, but, frankly, it's not very good at it yet. I mean, you have models that have read basically the entire internet, and now they can usually answer your question about the weather right [laughter]. We need to get better at this. But we're going to use our human skills, and we're going to talk about software-adjacent things that we do. 
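Mike's description of training, guess the next word, nudge the weights, repeat an unthinkable number of times, can be sketched as a toy next-token model. This is a minimal illustration only (a bigram softmax classifier trained by gradient descent on a made-up four-token sequence), nowhere near a real LLM:

```python
import math
import random

# Toy "corpus" of token ids; a real model trains on trillions of tokens.
tokens = [0, 1, 2, 3] * 5
vocab = 4

random.seed(0)
# One weight row per previous token: logits over the next token.
W = [[random.gauss(0, 0.1) for _ in range(vocab)] for _ in range(vocab)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

lr = 1.0
for epoch in range(200):
    for prev, nxt in zip(tokens, tokens[1:]):
        probs = softmax(W[prev])  # the model's guess at the next token
        # The "nudge": cross-entropy gradient raises the correct
        # token's logit and lowers the rest, a little each time.
        for j in range(vocab):
            W[prev][j] -= lr * (probs[j] - (1.0 if j == nxt else 0.0))

# After enough nudges, the model reliably guesses what follows token 1.
guess_after_1 = max(range(vocab), key=lambda j: W[1][j])
print(guess_after_1)  # → 2
```

Nothing in the loop "knows" grammar or arithmetic; it only minimizes next-token error. The episode's point is that at vastly larger scale, that same objective starts producing the emergent behaviors Mike lists.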
This is a chance to go a little outside our normal boundaries and explore where there might be some overlap in things that aren't specifically software. As I said, I would like to start with Justin and hear about some of the things you're doing outside of specifically the software world, and what you've learned from them, and maybe...well, exactly what have you learned from those? Because I'm guessing it's going to be applicable. JUSTIN: Yeah. So, a couple of things that I am doing right now that are software adjacent. I don't know if you guys know, but I am a big enthusiast for electric vehicles. I'm inside my, you know, GE Chevy Equinox right now, fully electric. I love driving this thing around. I also like to ride around my electric dirt bikes. And if you haven't been on an electric dirt bike yet, it is a lot of fun. Instant torque. You can ride it in the wilderness without, like, scaring animals or annoying neighbors, all of those things. And the thing about the electric dirt bike, it is a platform that you can customize to your heart's or to your wallet's content. I particularly like Talarias. There's other ones like Surrons, and, I mean, there's a whole slew of them now. But with my Talaria, I got one of the xXx ones. But you can customize this, the base of this thing. You basically have a frame, a motor, a battery, a controller, and then, you know, a slew of other things, including brakes and other things. And you can swap them out. The important thing, though, is that your controller has to be able to converse with your battery, with your throttle, with your brakes, because you have regenerative braking. And if you don't have a controller that can talk to these other things and that can, you know, interact with them right, you aren't going to go anywhere. And it certainly is not exactly software, but it is software. 
The controller has a kind of a base level of software that you can go in and update and tweak, such that you can tell it to draw more power from the battery. You can tell it to output more amps to the motor, and you can tweak things like those such that you can go faster. You can be more reckless, all of these things. But if you don't understand the langu

    44 min
  7. Episode 91: Surviving This Job Market

    FEB 4

    Episode 91: Surviving This Job Market

    Mike opens with a post-apocalyptic “choose your team” trope to frame today’s job market for junior developers: brutal competition, few openings, and the need to stand out with real, survival-level skills. He shares examples like his niece (strong student, no offers) and Acima’s internship receiving 300+ applicants, then asks the group what actually helps new grads stay relevant and get picked. Will’s core message is: breathe—computers aren’t going away, but the industry is cycling out of a long boom and juniors are getting hit hardest. He tells his own dot-com bust story (gas station job, selling plasma) to emphasize grit and staying in the game. His practical advice is to stop relying on being “in the stack of 300” and instead get known: show your work publicly, connect with people, join communities, and consistently post demos/blogs/tutorials for 3–6 months so hiring becomes about recognition and trust—not resume roulette. The group zooms in on communication as the multiplier: resumes should be clean and consistent (attention to detail), but networking and clear thinking matter more than keywords. Thomas and Eddy stress becoming more social, asking “dumb” questions, and building presentations around questions to invite engagement—especially remotely. For interviews, Mike and Will flag dishonesty and hand-wavy answers as major red flags; they prefer candidates who can explain their process, own gaps, and reason out loud (even if they need to look things up). They close by pointing to AI as a near-term opportunity: write and build around AI tooling and “vibe coding,” because established companies are hungry for people who can help integrate AI into messy legacy (“brownfield”) codebases—while noting the job crunch isn’t only AI, but also macro factors like post-COVID pullback, rates, and layoffs. Transcript: MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I am hosting again today. 
With us, we have Eddy, Thomas, Will Archer, Ramses, and Kyle.   I'm going to start by...in the pre-call, we were talking about this. I'm going to paint a picture of a post-apocalyptic wasteland, and you've probably heard this story before. So, you got your standard post-apocalyptic narrative. Everything's terrible. You're alone, and everybody's dangerous. And you get a chance to take a couple of people with you. And everybody else might not make it, right, or depending on who you pick, you're not going to make it, as you have to cross through the hazards ahead. So, who do you choose? Who do you choose to go with you? And, you know, this is a common theme. It's a trope [chuckles]. It's a trope. You got to choose the right person.   And who you're going to choose is probably somebody with a particular set of skills [chuckles], and those particular skills...yeah, and I think that's a direct quote from a movie, but I'm not going to go with those ones specifically. You're going to look for somebody who stands out. So, are you going to look for somebody who's exactly like everybody else? Or are you going to look at the person, like, I know that they are super aware of their surroundings, and when the zombies come in, they'll alert me before they make it here, right? I would definitely pick somebody like that in the zombie post-apocalyptic future.   Or maybe you pick the tough person, right? Or you pick the person with deep knowledge of the plants and animals around, you know, who can forage for food. You're going to want to assess somebody who has skills that can help you survive. And there are going to be a lot of people who are going to have average skills, and they're probably fine. But you're going to want that person who actually stands out.   So, trope time over. We are in a world today where it is hard...and this is not just in software. 
We have a situation throughout many industries, actually, but it's especially bad in software, where if you're a new graduate, junior developer, it is...we've talked about this before [inaudible 02:37] crisis.   WILL: Apocalyptic.   MIKE: It's apocalyptic, exactly.   WILL: Apocalyptic.   MIKE: Exactly. It's apocalyptic. It is.   So, I've got a niece who graduated, I can't remember now whether it's one or two years ago, and was, I think, valedictorian at their high school and solid near top of their class, I think, in college, lots of extracurricular activities. Brilliant, personable, kind of everything you'd want, hasn't got a single job offer [chuckles].   Another example: last year, we had an internship. We tend to every summer. I believe we had over 300 applicants for that role, and we got to pick, right [chuckles], which was great, and we had some great folks. But, actually, we brought one person from the year before. So, 300 applicants for one job; that's some serious competition.   And if you are going to try to get a job in this market, it...well, first of all, I'm sorry. We've talked about this before [chuckles] a few times, you know. It's...I feel for you. It's...this is tough. But also, you're going to...we'd like to talk today about what you can do. So, you're the person in that situation. Now we're talking specifically to these people in that situation, but it applies to everybody, right? We're all permanently in a situation where if you don't stay fresh, you're at risk, you know.   We're going to talk today about how you can stay relevant. How can you stand out? How can you be that person who gets picked so you don't get left behind for the, you know, for the apocalypse to claim you? Well, I've got some thoughts. We'll pepper the discussion with them as we go. But, you know, I'm just going to straight up ask at the beginning, you know, what do you all think?
Do you have anything specifically in mind you think that somebody should be doing who's in that situation that you've seen work, or that you think would work, or you think doesn't work [chuckles]? What do you got?   WILL: So, the first thing I'd say is, like, everybody calm down. Computers are not going away. They're not going away. Nobody's phone is going away. Nobody is, like, nobody's unplugging servers. They're not going dark. None of this is happening. And I think, like, you know, we as an industry have gotten used to boom times for so, so, so, so long that, like, you know, finding out how the other half lives is, you know, an existential crisis for us. But, like, that's not to understate, right? Like, it is really bad out there, especially for junior developers and new grads and stuff like that.   I mean, so, from my perspective, you know, Uncle Will's story time. I graduated in 2001, right, which was pretty much the depths of the dot bomb, you know, economic pullback. I graduated with a degree in computer engineering, not computer science, computer engineering, right? So, it was the hardware engineering and stuff like that. I graduated first in my class, not top 10%, like the, you know, the spring semester, not the spring semester, summer semester. Well, anyway, whenever I graduated, like, I was the number one graduate from that thing. But I was like, I graduated from a terrible university, not a terrible university. It was a pretty good, small engineering school, Wright State University, go Raiders, in Dayton, Ohio.   I, valuing my life and sanity, wanted to get the absolute hell out of Dayton as fast as I could, which is a decision I have not regretted a single time in my entire life. But, like, I moved to Austin where I didn't know anybody. I had no connections, could not get a job anywhere for anybody, you know. Like, I was working at a gas station to make ends meet, right, because I had to eat still. 
I was working at a gas station and selling my plasma, right?   You guys stay in the game, and it'll be all right. Like, the people who are good are still valued. The people who are good are still needed. The people who are good are still not as common on the ground as you might be led to believe. It's still pretty tough to do this work, and if you're actually in here...so, like, if you've got the knack for it and you have the grit and the drive to continue doing it, you're going to be fine. There's still a seat at the table for you.   If you're out there for the bag, you know, maybe not. You're not going to make it, you know. They, like...it happened in 2001. It happened again in 2008. It's happened over, over, and over, you know. There was a massive hiring boom from COVID. And, you know, like, if you're just in it for the paycheck, this work is just going to chew you up. The businesses, the industry is just not going to...you're not going to be able to keep up because the money is just not enough. And there aren't, like, any variety of worker protections in the business. There's nothing. There's no licensure. There's, you know what I mean, it's just, like, a random, maybe high school dropout in Bangladesh can take your job tomorrow, and that's just what it is. That's the literal long and short of it, you know what I mean?   So, it's going to be okay, guys, but, like, yes, there's going to be a cull, and if you lost your fastball or, like, you're just in it for the bag, or you're not really committed to, like, the thing, then, like, sorry.   MIKE: And, honestly, if that's where you're headed, you wouldn't be satisfied anyway [laughs].   WILL: No, you...yeah, you'd get out after, you know what I mean? Like, you're going to get out now versus getting out in five years, where it's just like, I just can't...I can't do another code review [laughs].   MIKE: Yeah.
But a lot of us who love it, there's a reward in building things the way we do that, if you've been hooked by it, it's hard to let go of. And if you've got that and you're willing to put in the work, I agree with Will, you'll get there. But it might be a slog. I did not have great times back in that early 2000s era either [laughs].   WILL: Yeah, right? Ye

    52 min
  8. Episode 90: SQL as a Superpower

    JAN 21

    Episode 90: SQL as a Superpower

    Mike kicks off with stories from his career to argue that SQL is a “never-goes-away” superpower. He describes early jobs where everything was handcrafted queries and good database design was foundational, then later roles where rapid growth made inefficient queries and missing indexes painful fast. Even with modern ORMs making raw SQL feel like a “code smell” in app code, he still relies on SQL constantly for investigating patterns, diagnosing anomalies, and answering urgent business questions in real time. His core point: avoiding SQL is like avoiding algebra or relying entirely on GPS—you can get by, but you’ll be weaker when you need real problem-solving power. Will pushes back with a reality check from big enterprise environments: many engineers simply aren’t allowed to touch production databases for security and scale reasons. He explains how, in those worlds, “SQL skills” get replaced by working through service boundaries—mocking/spoofing microservice requests and relying on managed interfaces rather than direct queries. Mike agrees scale changes access, but argues the underlying concepts still matter: relational thinking, knowing what’s expensive, understanding how data is shaped and retrieved, and especially understanding concurrency and locking at the database layer. They trade war stories about bad concurrency patterns (like incrementing an integer in a table inside nested transactions) causing real production pain, and riff on why older systems leaned on sequential IDs vs UUIDs due to historical CPU and memory constraints. The conversation broadens into “fundamentals change how you think.” Dave argues that specific jobs (like DBA roles) may evolve or disappear, but the principles behind them are evergreen—much like learning Lisp/Clojure or watching SICP to internalize functional, transformation-based thinking. 
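The missing-index war stories in this summary can be sketched concretely. Below is a small, hypothetical illustration (the table, column, and index names are invented), using SQLite's EXPLAIN QUERY PLAN to show a selective query switching from a full table scan to an index lookup once an index exists:

```python
import sqlite3

# Hypothetical demo (names invented): show how an index changes a query plan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index;
    # the human-readable detail is the last column of each row.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # no index yet: a full table scan ("SCAN ...")

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # now a "SEARCH ... USING INDEX idx_orders_customer" lookup

print(before)
print(after)
```

The selectivity caveat from the discussion still applies: when a query matches most of the table, the planner may reasonably prefer a scan, so an index is not automatically a win.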
Mike ties this directly to SQL as a declarative language: you describe what you want, not how to do it, and that mindset carries into MapReduce, streams, comprehensions, and even modern AI prompting. Eddy and Thomas add practical perspectives: ORMs can hide SQL until you need verification, debugging, or analysis—then SQL becomes essential (especially in support/data roles). They end by stressing curiosity and communication as the real career accelerators, capped by Dave’s interview story about spotting a SQLite aggregation quirk: the takeaway isn’t “memorize tricks,” it’s “stay curious, keep learning, and you’ll keep moving up.” Transcript: MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I am hosting again today. With me, we've got Dave, Eddy, Will Archer, and Thomas. I think we're all return [chuckles] panel participants here. I'm going to jump into, actually, kind of a series of stories here, talk a little bit about my career. My first full-time dev job, all of our database queries were handcrafted. We did use parameterized queries; that was supported. But this is in the time before you had Object Relational Mappers, you know, ORMs, they call them now, before they were widespread. I'm sure they existed. They probably existed for a long time. It was before anybody really used them. The leader of my team cared a lot about data, and we focused a lot on good database design as the foundation of our work. We always kind of started with the database and then modeled way up. It's a common way now, isn't always common. You can tell when people don't do it that way. He even discovered a bug in the PostgreSQL adapter for Java and submitted a patch to the project, so, you know, it was a thing. Those were both kind of new tech at the time, so [laughs] times have changed. At the time, I hadn't really done much with...okay, now here's a pause. You can call it SQL or sequel. I don't care. 
And I'll probably call it both in this conversation. It's a debate with no clear answer, and it really doesn't matter, so...[laughs] DAVE: It's a debate with no clear need, right? MIKE: Yeah. DAVE: Because if I say, "Sequel" and you say "SQL," I know what you meant. MIKE: Yep. So, I hadn't really done that in school, so it was this interesting new tool to learn. It's this language that's oriented around relational algebra instead of implementation details like loop counters or sorting algorithms, right? And different's fun, right? It's the cool thing. Wow, this is a different way to think. It was more than that, though. I got to...I enjoy...I started to, and I still enjoy thinking about problems in terms of a series of transformations on data. We could probably talk about that. I think it's a big deal. Sometime later, that company had been acquired. I ended up spending months just writing queries, like, all day, every day to drive reports, to make them efficient, to run quickly, so many queries. Probably wasn't the best way to do things, but it's what we did. I wrote a ton of queries back then. You know, at a later job, I've probably mentioned this before, our traffic grew 100x in a year, startup, right? And we were working with publishers. So [chuckles], we grew really fast, but it meant that anything inefficient became a bottleneck really quickly, like, weeks. Within weeks, a thing that worked great doesn't work anymore. And so, I added indexes to tables I don't know how many times. I added external indexes for full-text indexing, like twice. We did it once, and then that tool failed, so we did another one. Designed a new database schema for arbitrary web content, you know, wrote the queries around it. We even had a little micro language for publishers to use. I didn't originate that, but it was something we worked with to query their data into their pages. It was queries all the way down. It was just queries is all we did. 
Later, we built a data warehouse that powered, you know, analytics for our users, and, again, SQL [chuckles]. In recent years, those Object Relational Mappers, or ORMs, they've gotten really good for web development, right? I mean, it's the standard. And I'd say that using raw SQL in production code is usually a smell nowadays. And I've been a lot less hands-on in the code for several years now, but I query the data warehouse all the time. Just in the last few weeks, a couple examples, I've pulled stats on historic traffic patterns and all sorts of configurations. I don't know how many times I've run the query, tweaked, to let us know what to expect for seasonal load as we went into holiday season, lots of shopping. I think it was last week with a group of people in a conference call. They're all watching me live coding, one of those make-you-sweat experiences. But, you know, I wrote this fairly complex query to show the average daily time between merchandise, like a customer choosing to get the merchandise and when it got shipped, and to show that a big vendor, that I'm not going to mention, was temporarily delayed. And so, this anomaly we saw in our system, because they had a big impact on us, was external and not a problem in our stuff. You know, live coding is always a little dicey, but it worked. And, fortunately, I had done this kind of thing before. This is all to say, knowing Sequel, SQL, call it either way, is as important to me today as it was at the start of my career. You know, way back then, like, wow, this is an important new tool I need to learn. And I'm still using it as much today as I was back then, and it just hasn't changed. It's still important. I haven't written Perl in decades, I don't think [laughs]. Anybody remember Java applets? [laughs] DAVE: Yes. Wow. Yes. MIKE: I could talk for a long time, right? We could make a whole episode on dead or unpopular tech that was once a thing. But SQL, it hasn't gone away.
Getting data out of a database is just this critical skill that never goes away. So, today we're going to talk about Sequel, or SQL, as kind of a superpower. You can get away with not knowing it now for a long time because you've got good ORMs. You've got visual query tools. You've got, like, the Business Intelligence Team that provides queries for you. But based on my own experience, I think that avoiding learning it is like never learning algebra or never learning how to plan a route without GPS. Like, you can get by without it for quite a while, but you'll have so much power to solve problems you couldn't otherwise solve if you learn this amazing tool. WILL: You know, like, I love that, and I have, like, a similar...I have a similar arc. But as I was listening to it, I was thinking about, like, well, when's the last time? So, I mean, I'll just say, right, like, I think you have a unique and privileged position in that, like, you are allowed to run SQL. Most folks aren't allowed. You don't get to run SQL. And I thought about, like, what I had sort of replaced those SQL query skills with. And to be perfectly honest with you, like, you know what I mean, because I'm working for, like, big boys, like, you know, like, I went, you know, I worked with you guys at Acima, and Acima's not small. And then I went, you know, an order of magnitude bigger someplace else. And I went another order of magnitude bigger at someplace else. And, like, you don't run SQL. Like, I'm pretty, you know, I'm pretty up there. I'm a software architect II, you know, which is pretty, I mean, it sounds cool, right? There's a cool sound. It's got a cool sound to it, you know. They trust me with some stuff, but, like, I don't get in that database, never, never, ever. And, like, you know, and so, like, you know, you could do that, but I'm not allowed to do that. But I have all these tools, and I love SQL. And it would make my life a lot easier if I could do it some of the time.
But I've kind of replaced SQL with microservice request spoofing, you know what I mean? Microservice request spoofing kung fu, in that, like, I can invent an entire data layer out of

    56 min
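The concurrency pitfall called out in the Episode 90 summary, incrementing a counter stored in a table, can be sketched in a few lines. This is a hypothetical example (the table name is invented); the point is that a single atomic UPDATE avoids the lost-update race that the read-modify-write pattern invites under concurrency:

```python
import sqlite3

# Hypothetical sketch (table name invented) of the counter anti-pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, n INTEGER)")
conn.execute("INSERT INTO counters VALUES ('orders', 0)")

# Anti-pattern: read the value into the application, then write it back.
# Two concurrent clients can read the same value, and one increment is lost.
n = conn.execute("SELECT n FROM counters WHERE name = 'orders'").fetchone()[0]
conn.execute("UPDATE counters SET n = ? WHERE name = 'orders'", (n + 1,))

# Safer: let the database perform the increment atomically in one statement,
# so the read and the write cannot interleave with another client's.
conn.execute("UPDATE counters SET n = n + 1 WHERE name = 'orders'")

print(conn.execute("SELECT n FROM counters WHERE name = 'orders'").fetchone()[0])  # prints 2
```

The same idea applies in any relational database: push the arithmetic into the UPDATE (or use the database's sequence/identity machinery) rather than round-tripping the value through application code.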

Ratings & Reviews

4.5 out of 5 (2 Ratings)
