Acima Development

Mike Challis
Acima Development

At Acima, we have a large software development team, and we want to share with the community what we have learned about the development process. We'll cover some tech specifics (we do Ruby, Kotlin, JavaScript, and Haskell), but also talk a lot about mentoring, communication, hiring, planning, and the other parts of software development that don't always get talked about enough.

  1. Episode 64: Taking Breaks

    JAN 22

    In this episode of the Acima Development Podcast, Mike and Ramses dive into the importance of recognizing and respecting mental and physical limits to maintain productivity and avoid burnout. Drawing a parallel to endurance sports like marathon running, Mike explains how the depletion of glycogen reserves mirrors the mental fatigue that can occur during intense problem-solving in software engineering. He shares a personal anecdote about his experience with low energy and poor decision-making while working in landscaping, emphasizing the value of taking breaks to restore mental clarity. The conversation explores practical strategies for managing mental capacity in high-cognitive tasks, such as stepping away from work, engaging in light physical activity, or shifting focus to a simpler task. Ramses shares his approach of taking regular breaks and prioritizing tasks based on energy levels, while Mike highlights techniques like the Pomodoro method and mindfulness practices to improve focus and productivity. Both agree that recognizing when to pause, reflect, and reframe assumptions is essential to navigating complex problems effectively. Mike concludes by underscoring the importance of regular mental “maintenance” to avoid burnout, likening it to routine upkeep for a car. By sharing personal stories and actionable tips, Mike and Ramses provide listeners with valuable insights into balancing productivity with rest, fostering a healthier and more sustainable approach to work. This episode serves as a reminder that taking breaks isn’t a sign of weakness but a key to long-term success and creativity. Transcript:  MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I'm hosting again today. With me, I have Ramses. It's just the two of us today, which is kind of nice. We'll have a conversational podcast today. And we're going to talk about something we've actually been talking about for several weeks. We're going to be talking about how to clear your head. I'm going to share something outside of coding, as I like to do, and then bring it back to the work we do in software engineering. There's a challenge that people do endurance athletics run into. I'm not a marathon runner, but I've heard about it most from marathon running. People talk about hitting the wall. And in a variety of sports, they talk about bonking [laughs], you know, you hit the wall and bounce off, bonk. And I've looked into this some. There's a...in your liver, but there's also in other tissues, but particularly in your liver, your body stores glucose, you know, the simple sugar that you use for fuel for your body. And it doesn't store it directly as glucose. It makes chains like branch chains like snowflake sort of chain-on-chain structure that's easy to break apart, but also easy to store called glycogen. And your body stores some of it so that if you have to get up and go running, you can do that, and it's great. And you can sustain exercise for a couple of hours. I was researching this earlier today to make sure I was telling the truth. I found estimates of between 80 and 100 minutes of about how long the body’s glycogen reserves stay active until you deplete them with intense exercise. You hit that point, and suddenly, your reserves, your ability to quickly take your body's energy and put it into action, basically drops off a cliff. It just stops. It's not like you can't move necessarily because your body does have other ways. You can start breaking down fat reserves, and so on. 
But the easy energy is gone, right [laughs]? You hit a wall. And I’ve read about this with marathons. Well, elite marathon runners, we'll say, can finish a marathon in a little over two hours, which is crazy fast [laughs]. RAMSES: Yeah, wow. MIKE: They're really good. But notice how that's about 120 minutes or, let’s say, 150 minutes. And the glycogen reserves, remember, last for about 100. You can see why, a lot of times, people running marathons hit a two-hour point, and they hit the wall because they haven't replenished those energy reserves, right? And so, if you just try to go...and it's tempting. You want to just go and finish, right? So, you're maybe five miles from the end. If you're an elite runner, you're probably most of the way through the race when you're hitting that line, and you think, well, I can just push through. Well, no, you can't [laughs]. RAMSES: It's deadly, yeah. MIKE: Yeah, it just does not work. Now, like I said, I'm not a marathon runner. I do quite a bit of cycling, and I'm definitely familiar with hitting that wall. And I've hit it in other contexts, too. I remember I was working in landscaping many years ago [chuckles] and also intense physical work, right? You can deplete that blood sugar. And I remember working a job, and my boss was out doing something else. And he'd left me and my coworker to finish up. And we had a couple of other...we were going between a few different jobs. And the one we were at was running long, but I didn't want to let my boss down. So, I said, “Well, then let's just go finish up the other job.” This would have been hours since me and my co-worker had eaten. And my thought was, I just need to get this done. And we went there, and I was just so single-minded. I've got to get this done. Got to get this done. I wasn't really thinking about how foggy my head was getting. And [chuckles] in my rush to get things done, I was digging, and I was...I think I was just digging a hole, but it was hard. It was, like, clay-hard ground. So, I was just swinging a mattock, and the other side of the pick it's a flat bar kind of thing, digging that out, trying to break through the clay. And I hit an irrigation line, water went everywhere, and [chuckles] we didn't have the parts to fix it. And we had to go back and solve that problem. So, I’d made the problem worse by trying to rush things through. It made the problem worse. And my coworker, she says, “If you don't get your blood sugar up, keep your blood sugar up consistently, I'm not going to work with you anymore,” and [chuckles] she was right. And we were friends, you know, this was not mean-spirited. This was just, no, that was not the right thing to do, and she was right. I should have taken some time to get my head right. And this was pre-cell phone days, right? So, hard to call my boss, but, you know, I could have found a pay phone or something, found some way to pull this in, or maybe just gone back to the headquarters, called the customer, let them know we were going to be late because we were going to be late anyway, and finish the day. There's a number of things I should have done. But, luckily, it wasn’t a terrible disaster. We went back the next day with some PVC and fixed the problem, and everything was okay. But I should have known better. And you get that low blood sugar, and you just can't think right. You do not think clearly. None of which has anything directly to do with software engineering, right? But when you're building software, you think. You have to think. 
And we're probably not doing vigorous exercise while you're doing software engineering. They’re not necessarily compatible, right [laughs]? But your brain uses quite a bit of energy, and further, you can only sustain any activity for so long, that mental activity included. Sometimes we think, I'm just going to push through this. I just need to keep going. And you can tell your brain is slowing down, for whatever reason. There's all sorts of reasons that you can't think clearly anymore. You've got distractions. You've been focusing on it too long, and your brain just is done. And you think, well, I'm just going to push through. I'm going to push through. And the next thing you know, you've spent way too long accomplishing nothing [laughs]. You haven't made the situation any better. You've made it worse. Or you go back, and you've written bad code. You've made a bad decision. It is critical to be attentive to those signals in your brain and know, you know, I need to do something different. Have you had similar experiences, Ramses? RAMSES: Yeah, all the time. You're saying it's about, like, 100 minutes, 120 minutes, that's about my cap [laughter], maybe two hours on a really good stretch. But, at that point, it's, you know, you usually feel it in your back a little bit, and then usually, just have to get up. [inaudible 07:17] so I can't sit down for too long without moving a little bit. But yeah, it's really interesting. Sometimes you just...you're really close to solving the problem, but maybe you're down a dangerous hole. Yeah, it's really easy to fall into the, I'm really there, and, two hours later, you're still working on it. MIKE: Yeah, exactly [laughs]. Exactly. And, like you said, you're just trying to push through, trying to get it done. It's hard. It's hard to catch yourself. It takes practice to recognize. There's the people who use the Pomodoro technique, named by a guy who had a kitchen timer that’s shaped like a tomato. He'd set it for an hour or a half hour, whatever the length is, turn it on. And after that timer finished, time to get up and take a break. And forcing yourself to take a break like that sounds like you'd be disruptive, right? Like, wow, I'm never going to be able to get flow time. Like you said, that hundred minutes, you start going long enough, you're going to hit a wall, and you might not notice. You might not notice. RAMSES: Yeah, it's really easy to just sit down for, you know, 3, 4 hours and crank away. But I think you have to, like, mentally put yourself in a good spot where you can get to a good stopping point, and even if it's not a good stopping point, you just have to get up and take a break. MIKE: Exactly. Well, and I want to emphasize this point. To all of our listeners, we're giving you permission to take a break [laughs]. You're goi
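
    A minimal sketch of the Pomodoro-style cadence described above, in Ruby (one of the languages the team works in). The interval lengths, names, and structure are illustrative assumptions, not a recommended or actual setup:

        # pomodoro.rb -- hypothetical work/break timer; interval lengths are assumptions.
        WORK_MINUTES  = 50
        BREAK_MINUTES = 10

        def countdown(minutes, label)
          puts "#{label}: #{minutes} minutes, starting now."
          sleep(minutes * 60)   # block until the interval is over
          puts "#{label} finished."
        end

        loop do
          countdown(WORK_MINUTES, "Focus block")
          countdown(BREAK_MINUTES, "Break -- stand up and step away")
        end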

    39 min
  2. Episode 63: The Evolution of DevOps

    JAN 8

    In this episode of the Acima Development Podcast, Mike, Kyle, and Will dive into the evolution of DevOps and its emerging subfields like Platform Engineering and CloudOps. They discuss how system administration has transitioned from hands-on server configuration to abstracted, automated infrastructure management. Kyle highlights the ongoing debate about whether these new titles represent genuine specialization or are merely rebrands of traditional DevOps roles. He explains that while the roles share significant overlap, Platform Engineers focus on building tools and workflows for engineers, while CloudOps specialists handle deployment and configuration in cloud environments. Despite these distinctions, Kyle acknowledges the fluidity of responsibilities within modern DevOps teams. The conversation also explores the complexities of managing cloud environments, especially within Kubernetes ecosystems. Kyle explains how tools like CMOs and YAML configurations bridge the gap between engineering and cloud infrastructure, enabling smoother workflows. They touch on the growing abstraction in networking, including dynamic layers like sidecars and service meshes, which add flexibility but also significant complexity. Kyle emphasizes that while these advancements streamline deployments and observability, they also introduce challenges in maintaining clarity and efficiency across multi-cloud and multi-region infrastructures. Finally, the team discusses the future of DevOps, including the increasing role of AI, automation, and multi-cloud expertise. Kyle predicts a continued shift towards higher levels of abstraction, with AI potentially optimizing auto-scaling and failover processes. However, he notes that AI struggles to keep up with the rapidly evolving DevOps landscape. Observability, cost efficiency, and auto-scaling remain top priorities, with multi-region deployments posing significant challenges. The episode concludes with a lighthearted reflection on the inevitable "closet of servers and box fans" lurking behind even the most advanced infrastructures—a reminder that despite the abstraction, physical and technical realities still anchor every deployment. Transcript: MIKE:  Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike. I'm hosting again today. With me, I have Kyle and Will. Today, we're going to lean heavily into Kyle. Kyle is from our DevOps team. Well, I say this...we were just talking in the pre-call that we say DevOps, but even DevOps is kind of falling out of favor as a term because there's platform engineering, other things. But 10, 15 years ago, there was no DevOps; it was system administrators. And for those of us who were doing engineering at the time, a lot of times, it was just a developer given some system administration tasks, and [laughs] we’d get them done. I remember once I was driving to my...well, I was a passenger in the two-hour drive to my parents-in-law and banging out and configuring a server on the drive on a really weak mobile connection. And I got that done by the time we got there. And I was custom, just shelled straight into the server, configuring it as I went, based on stuff I’d done. And it was always that way. You shelled in; you set up the server yourself, and sometimes you went into the data center [laughs], and you brought in the hardware. We live in a different world today, and there's a reason that people have done that. 
If you’ve got 50 servers that have all been hand configured by somebody, and then he decides to go somewhere else, or she, or they [laughs], even if you have a pool of people that have been doing that, you lose so much knowledge. It's just a horrible way to maintain things. Just like you don't want to have your code sitting on somebody's personal laptop, we use a version control system. All of the best practice from software have made it into system administration. Totally changed the world. I'm talking too much because I'm not the expert. I'm just remembering how it was back then and knowing how much better it is right now. I don't even think about DevOps anymore, about the platform. It just happens. I merge [laughs] my code, and it ends up on production because of what Kyle and his team do. Kyle, how do you see that history of how we got from system administration? And Will was asking on the pre-call as well, is it just a rebrand, or [laughs] is there something really fundamental here that has changed in that process of deployment and infrastructure? KYLE: So, this one's been a little bit interesting to me because I was talking to a coworker of mine that he went out to do some interviewing. And I asked him, “Hey, if you were to do a mock interview for DevOps, how would that go?” And he was just like, “Oh, well, I've done system admin-type interviews.” And he’s just like, “So, what's the difference?” And even some Google searching yielded, like, there's quite a few similarities. There's some differences, you know, maybe they manage the actual servers more, or they manage, you know, a system admin would manage more of, like, physical hardware more type thing. And DevOps is more of that CI flow, programming flow, and then the cloud infrastructure type scenario. But then it kind of breaks down more because then you've got your specialties, observability being one of them, and each company has their opinion of what that should be. Should that be SRE? Like, here at Acima, we were the DevOps team, and we are still the DevOps team, but we have rebranded. There's half of my team that's now platform engineers and half that's CloudOps engineers. We're still doing the same skillset, but now we're called something else. I came in on the DevOps naming convention, so I don't have the full history there. And I don't fully understand the difference myself in a lot of them because there's so much crossover. Even looking at new jobs or job postings at different companies, it's like, okay, well, that's a new name, you know, just CloudOps or cloud engineering. It's just like, oh, those are just DevOps tasks, and it's like, they're wanting a DevOps guy. Where's this new name coming from? But I think that's part of...part of the divide is coming in from trying to divide DevOps as a culture instead of team. We're trying to teach those key concepts, and it's dividing out, like, what a DevOps engineer is actually doing and giving them more of a title that's appropriate to that. So, if you've got somebody that's working on your platform all the time and designing your platform, maybe they're that platform engineer. If you’ve got somebody that's working in one of the cloud spaces, then maybe they're more of a CloudOps or a cloud engineer type person. WILL: Okay, so, it's not the cloud. Like, your platform engineer and your cloud engineer they're different. Is it a situation where, like, okay, you got to have your own data center, or else you're a cloud engineer? Is that more/less it? 
KYLE: So, I'll explain the difference on our team, which is we have a few of us where we actually are writing the code for the platform. So, like me, I'm sending out the observability infrastructure and designing how that flow is actually done. And the CloudOps individual is the one that's taking in the requests and configuring the alerts, configuring the dashboards to deliver to the requesting team. It’s platform would be, you know, your CI/CD Jenkins, setting that up, and then Ops also be setting up -- WILL: Can you be more specific? Like, it doesn't have to be everything. So, like, case study it through for me. Because these are just sort of broad platform technologies and you can cover a lot of ground, right? Can you be more specific about case studies and workflows, and like, this workflow goes here, and this workflow goes there? Does that make sense at all? KYLE: I'm not quite sure what you're asking me. WILL: Ah, okay. Ah, who knows -- KYLE: But we still cross. Regardless, I mean, it's one of those things where we're crossing each other's boundaries. And we call ourselves separate teams, but we're still one team. I guess that's where it's a little bit foggy for me is to, like, well, you're wanting a defined line, I think. And even on our team where we have the separate sub-teams, there's not a defined line of who does what. WILL: Right. MIKE: Is there, like, aside...do you have some people who are more dedicated to actually interacting with the clouds and a side that's more integrated with interacting with the software side? Because there's the cloud side, right, and then there's the software that goes, you know, you interact with the engineering side. And so, there's two interfaces, right? The cloud interface, the one side that you deal with, and the other side is dealing with the engineering. So, I could kind of see that you've got an API of sorts on either end, and you have people assigned to either end. Is it that sort of division? KYLE: So, not quite in our setup. We’re doing the legwork. We're setting up the actual platform to be used, so setting up the standard workflow, maybe. Okay, so, like, our team, we've developed, like, a CMOs is a tool that we've wrapped several CLIs with. And in order to use that tool, we have a configuration YAML that will call different components in order to use that, so the platform people would have been the ones designing that, designing the tool, programming it out, and getting that functioning. And it would be the CloudOps guys that then take that from the engineers’ requests, and then they'll take what's been configured in that YAML file and generate the Terraform configuration that would then generate the cloud infrastructure from there. MIKE: So, that sounds like somewhat like the division I was thinking I was describing, where one side is building the interface that the engineers use, and the other side is actually building the cloud side so that th
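
    A rough illustration of the workflow Kyle describes: an engineer-facing YAML request that the platform tooling and the CloudOps side turn into Terraform. The file name, keys, and values below are invented for illustration and are not Acima's actual tool or schema:

        # service-infra.yml -- hypothetical request an engineering team might file.
        service: checkout-api
        environment: staging
        regions:
          - us-east-1
        resources:
          postgres:
            instance_class: db.t3.medium
          redis:
            node_type: cache.t3.small
        alerts:
          - metric: p95_latency_ms
            threshold: 500
            notify: "#payments-oncall"
        # A CloudOps engineer (or the wrapper CLI) would render a request like this
        # into Terraform and apply it to create the underlying cloud resources.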

    47 min
  3. Episode 62: The Mythical Man-Month

    12/26/2024

    The episode of the Acima Development Podcast centers around the challenges of scaling teams effectively, inspired by the principles from Fred Brooks' book The Mythical Man-Month. Host Mike begins by sharing a parable about his toddler's well-meaning but inexperienced attempts to help, drawing parallels to the complexities of onboarding and team communication. The core theme emphasizes that while eager new members can be valuable, onboarding takes time and resources, which can ironically slow down progress on active projects. The discussion highlights that simply adding more people to a late project often exacerbates delays due to the increased communication overhead and coordination required. The panel delves into strategies for mitigating these challenges, such as the importance of defining clear project milestones and fostering communication between product and engineering teams. They explore how startups often face growing pains when transitioning from nimble, small teams to larger, more structured ones. Effective leadership and the role of subject matter experts are critical in this context, ensuring continuity and knowledge sharing. Prototyping, rapid iteration, and intentional decoupling of work into smaller, manageable units are suggested as ways to enable independent progress and reduce bottlenecks. The conversation also touches on cultural and organizational aspects, like aligning incentives to encourage behaviors that prioritize long-term scalability over short-term output. The speakers stress the need for cross-training, maintaining organizational boundaries, and fostering relationships across teams to streamline collaboration. They close with reflections on leadership, emphasizing the discipline of narrowing scope and recognizing the inherent trade-offs in scaling efforts. Transcript:  MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I will be hosting again today. With me, I have Matt, Ramses, Kyle, and Justin. And we're going to be talking about something that's been talked about for a long time. I will introduce this in a minute. I'm going to start by telling, as I usually do [chuckles], a little bit of a story and maybe a bit of a parable. I have a toddler at home, and anybody who's been around a toddler knows some things about toddlers [chuckles]. They love to help. They absolutely love to help. And a recent example is, about a month ago, I was fixing something, and my toddler wanted to use the hammer to help with everything. And he grabbed that hammer, and he started doing what a toddler would do and hitting everything he could see [laughs] with the hammer, requiring me to give him a lot of one-on-one attention [laughs]. And that is true about everything he wants to help with. His help with the dishes usually involves a lot of cleaning of water everywhere in the kitchen. His helping in the yard involves filling in a lot of holes or stopping, as I'm mowing, to let him pass. In short, he is very eager to help, but he lacks the experience to accomplish that yet. And that's not to say that he doesn't want to help, right? He has all of the right attitude. And I'd love to hire somebody like that with that kind of eager attitude. But there are limits to the training time. For him, the training time to do my job would be, you know, a few decades [laughs], and then he could have the experience needed. And that's not saying anything about him, or his character, or his abilities. He just hasn't gotten the experience yet. 
That's just how that works. And I got some older kids, too [chuckles], and the thing is, the same thing applies. And even if I went over and grabbed a neighbor and said, “Hey, can you come over and help me with something?” I'm going to have to take some time to explain to said neighbor what it's going to do to do that work. Even if they have lots of years of experience in life, they probably are not very familiar with what I'm going to ask them about. And there's this crucial distinction between somebody having technically the capacity with training and being able to do something now because there is a communication gap. Communication takes time and that is inescapable that communication takes time. And for some people who have closer experience, it's going to take less time, but it's always not zero. Not only is there communication, there's just overhead of coordination. Some jobs can't really readily be shared [laughs] because only one person fits. There was a book written on this topic in 1975. It's written by an author named Fred Brooks, and he titled his book “The Mythical Man-Month.” You got some nice alliteration there with Mythical Man-Month. Knowing this was 1975, it's a bit gendered [chuckles] because, obviously, there are more than just men in software, but he's using it in a more generic sense. And there have been women in software from far before 1975, so nothing gendered intended here. It's just the title of the book. He made this incredibly valuable observation that this idea that you can just have one person one month, that they are fungible, and you can just interchange, you can put more people on a project to make you go faster is false. It's just straight-up false. And it comes largely down to this communication overhead. And if you add more people, then you have more people to communicate with and more connections between people on the team. If you have one person, there's no communication, right? It's just you. If you have ten people, you have ten people communicating each to 9 other people. That's a lot of connections. The complexity within your team has exploded dramatically, and all of that communication overhead is not going to speed things up. And, in fact, if you add it late in the project, it will almost certainly slow it down. That's what we're going to be talking about today is this idea of the Mythical Man-Month, the idea that you can just add people to a project and make it faster. There's our introduction. I'm going to stop talking and will let my panel here share some thoughts. MATT: Yeah, I think when you start adding too many people to a problem, communication is key, right? And you start to play the telephone game. So, it's very easy for things to become miscommunicated. But also, there's the lack of context of the problem, and that has to be shared. People have to ramp up and learn that context and gain the knowledge of the problem they're trying to solve, and, ultimately, that's always going to slow things down. JUSTIN: Yeah, one thing that is kind of a corollary to that is that the mythical, like, the skill needed for startup, for a leader, is the ability to have vision and to do the thing. And, generally, it's a very small team, and you can all go be very fast and very nimble. But once you're past that startup phase, all of a sudden, you want to do more. And just throwing more people at the problem doesn't solve the problem any faster. 
And, all of a sudden, your skill set that you need, that you had to do the startup, which was to do the thing and to be a visionary and drive the thing, all of a sudden, your skill set needs to be management, and problem dissection, and distribution. And so, it's making the move from startup to post-startup that requires a different skill set that is difficult, that's like a gate that people run into or that businesses run into that could prevent them from being a more successful business. MATT: Yeah, and oftentimes, startups, specifically, will bring in a new CEO once they see that kind of growth, right, that actually specializes in those skills. And we always...well, not always, but very often, we see startups who grow too quickly. And I wonder if it's more than just the financial problems that it causes. But this is one of those problems as well is, the communication gaps that it creates, and the context, and people being on the same page, and understanding what they're really trying to accomplish. MIKE: It takes, I think, always longer than you'd expect to get somebody else up to speed. It's just hard. It's hard to take something that you can see and have experienced in your own mind and hand it off to somebody else. There's a mismatch. You have to convert it into language, and they have to convert it into their mental representation. And that transfer is inefficient and is going to take time. And, I think, over and over again, we underestimate the challenge of that transfer. JUSTIN: So, we often underestimate it, but what, I mean, we're all experienced engineers here, and we've been studying this for a while. So, what can we do to make that transition smoother? MATT: Well, I think there's a few things, right? Front loading is important. You need to know that something's coming ahead of time and give yourself adequate space to prepare for it. And I just want to clarify that I support the idea of contractors and bringing contractors in to help with projects, but I think there is a cost. And one of the things you need to be cognizant of is being able to reuse them. Your first project may slow down, and it may take twice as long as initially planned. But if you use those same teams and bring them in for the next project, they're going to have that knowledge and that context and can jump in and out and actually provide some lift like you expected on that first run. MIKE: I think that's huge. You have to have the [inaudible 09:52] time. Now, that's not an easily digestible message when you're trying to go fast to market [chuckles]. And you could just come up with a new project, right? The answer is going to want to be, well, I want this, you know, already. Why does it take so long to get people up to speed? Well, sometimes that is not a constraint we can mess with, right? Sometimes, we don't get to hire until today. MATT: Yeah, and I think there's a balance. And to the opposite side of the coin...and we
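
    Brooks' communication-overhead point, which Mike summarizes above, can be made concrete: a team of n people has n(n-1)/2 pairwise communication channels, so coordination cost grows roughly with the square of team size. A small illustrative Ruby sketch (not from the book):

        # Pairwise communication channels on a team of n people: n * (n - 1) / 2
        def channels(n)
          n * (n - 1) / 2
        end

        [2, 5, 10, 20].each do |n|
          puts "#{n} people -> #{channels(n)} channels"
        end
        # 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190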

    1h 10m
  4. Episode 61: Effective Meetings

    12/11/2024

    This episode of the Acima Development Podcast dives deep into the art of effective meetings, exploring both what makes them successful and what causes them to fail. Mike opens with humorous anecdotes of ineffective meetings, such as “death by PowerPoint” and marathon discussions that inadvertently turned into workout sessions. These examples highlight the common pitfalls of meetings that lack focus, preparation, or structure. The episode aims to turn these common frustrations into learning points for hosting productive, engaging, and goal-oriented meetings. Panelists Tad, Eddy, and Dave contribute insights into the key components of successful meetings. Tad emphasizes the importance of structure, such as having a clear agenda, timekeeping, and redirecting off-topic discussions to maintain focus. Eddy underscores the value of respecting participants' time, avoiding unnecessary tangents, and setting clear expectations for the meeting’s purpose. Dave provides a philosophical perspective, distinguishing between passive, reactive participation and proactive engagement, and stresses the importance of preparation and decision-making as central objectives of any meeting. The group also discusses strategies like giving everyone an active role, employing visual aids, and embracing rules like timeboxing to manage discussions effectively. The conversation concludes with actionable takeaways for designing better meetings. Key recommendations include establishing rules of engagement, distributing materials beforehand to ensure participants come prepared, and maintaining focus through time management and participant roles. The panel advocates for meetings that prioritize purpose over duration, ensuring decisions are made efficiently. By adopting these strategies, the podcast argues, teams can transform meetings from dreaded time sinks into productive, collaborative sessions that respect everyone’s time and energy. Transcript: MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I am hosting again today. With me, I have a nice panel. We've got Dave. DAVE: Howdy, Howdy. MIKE: Ramses. RAMSES: Howdy, Howdy. MIKE: Eddy. EDDY: Hey. MIKE: And Tad. TAD: Hi. MIKE: Today, we are going to be talking about meetings, specifically about how to have effective meetings. And I'm going to present the topic on purpose because then I'm going to talk about the alternate. Because I think that if you're listening to this, and you're old enough to listen to a podcast, you have been in a horrible meeting. It was ineffective. It seems to be the default state of meetings because they are awful. They're famously bad. I've been wondering about which particular example to bring up. I have a couple from my experience of bad meetings. Once somebody offered...when I was doing a presentation, they had a related presentation, and they had some slides for it. This was some time back. I was teaching a class, and they had some slides. And they said, “Well, do you want some slides for your class?” And I said...and I paused for a moment. And I thought, slides? And I asked him, “Have you heard of death by PowerPoint?” [laughs] And he said, “Yes, actually, I have.” And it surprised me that the question was even asked because it just sounded so opposite of what I thought an effective discussion would be made of. It’s the first example that came to mind. I had another situation, this was quite a few years ago, where the team I was on discussed things so much. 
We had such long discussions about every architectural decision because we were building out a new application. It was hours. It was like six hours a day. And this was before people generally had cameras on. This is long enough ago, you know, video didn't always work, so it was audio meetings. And I couldn't stay awake. So, I started doing pull-ups and push-ups [laughs] during my meetings. And I put on 15 pounds of muscle from my meetings [laughs]. It was so bad that it actually improved my health [laughs]. TAD: Nice. MIKE: So, I don't know whether to complain about that or not, but meetings can be that bad. And, on the flip side, when you're in a meeting that's good, that is on point, and you get your topic covered, and you end early, everybody is just like, “Wow, how did that happen? That was a good meeting.” And I think it stands out because it's so different from what we expect by default. It turns out meetings are hard. It's hard to have a good meeting. And so, we're going to talk at length about that today. And I can give my ideas, but I've talked about some bad ones. I'm going to kind of step back and ask the panel here, you know, what do you think is important to have a good meeting? Or we can start off on the other track as well, you know, feel free to mention what you think makes for a bad meeting. I have some things that I believe are key to having a good and effective meeting, but I'm not going to throw those out first. I'm interested in what you all have to say, and then we'll build from there. So, what makes for a good meeting? TAD: One of the reasons why this topic kind of struck me is because I've actually...and I haven't been a member recently, but I was a member of Toastmasters for about four years. And it's primarily known as a public speaking club. But they’ve kind of tried to change the branding a little bit of more just a leadership business public speaking. Like, there's a whole host of skills that they really are trying to promote. And one of them is specifically meeting management because every Toastmasters meeting they require that you have an agenda. They require that you know all the speakers, all the things that are going to happen beforehand, how long they're going to speak for, that sort of thing. And as the person that runs the meeting, the success of your meeting is determined by did you hit all of the time points, and did you close the meeting out in an hour or less? And that's considered success, right? And they actually have someone in the meeting who's the timekeeper who the person running the meeting has to consult regularly like, okay, are we on time? Oh, did we fall behind? Okay, let me adjust. So, for me, it would be, do you have an agenda? Do you know what's going to happen in that meeting? Do you have a good idea of how long each piece of that meeting is going to take? And did you end on time? And part of that is, also, a lot of times things will come up in the meeting, and you have to have a way to, like, you’re like, “Okay, that's a great point. That's off-topic.” What's the process to handle something like that, right? Okay, do you take a note? That will be an action item. We'll take that offline. We'll have a discussion about that later, that sort of thing, too. So, those are just a handful of elements that I think make for a good meeting. MIKE: Okay. So, I think you've given a good list there. I'm going to come back to some of those as host because I think they're worth revisiting. What about others in the call here? 
EDDY: I think one thing people need to realize when they set up meetings is that you need to respect other people's time, right? If you're going to ask someone to be there for 30 minutes and then you spend 20 of those minutes about random stuff that's not even part of the agenda or the topic, you're not being effective, and then you're actually causing people to not pay attention, right? So, I think what's really important is that you have to respect the fact that other people have other things to do and to stay on course. I think that's one of my biggest concerns when being part of meetings. Another one that kind of just top of my head is timeboxing. It's something that we like to do in other meetings, right? Like, set things up in a certain window. If you think that it's going to go longer than that, axe it and then talk about that elsewhere. Again, it goes back to respecting people's time. I think that's really important. And probably setting some expectations, like, some ground rules, right? Like, this is what this meeting is going to be about. Let's try to not go into tangents and stay on course. So, I think those three core things, for me, is what makes meetings important. DAVE: When I think about good meetings, there's, like, the how, and the why, and those are two separate things in my mind. Like, the why, like, what is this meeting for, and, you know, why are we doing this? Most people come to a meeting in reactive, passive response mode. Just like, well, I'm just going to go to this room. I'm going to sit down, and then I'm going to see what happens. And then, I'm going to respond or react to it. And that's what most of us do in our meetings. And when you've got 99 people in a meeting, and only one person has a 5-minute agenda, it's going to be an hour-long meeting where it's just chit-chat, and socialize, and discussion. There's a really good very, very short book. It used to be free. I don't know if it is anymore. It was paid for a while. It was, like, two bucks at the time. It's called “Read This Before Our Next Meeting.” And he takes a very aggressive stance. I mentioned it a couple of weeks ago in the podcast, a couple of episodes ago, that we do stand up with 30 people on Michael’s teams, and we're out in five minutes. And Will got mad. He’s like, there's no way you're having...and I'm like, there's no way we're getting together, and discussing, and socializing, and screwing around and not having a meeting. You're right; we're not getting any of that done. But we are getting decisions made, and we are getting emergencies addressed, which was the whole point of that particular meeting. The read this before the meeting guy...I don't want to do a whole book report, but he basically says, “Meetings are to make decisions.” Everyone in the meeting should come ready to make that decisi

    45 min
  5. Episode 60: Business Logic Architecture

    11/27/2024

    In this episode of the Acima Development Podcast, Mike kicks off the discussion with a story about his father’s woodworking journey, which leads him into a reflection on the craftsmanship seen in Michelangelo’s David. Drawing a parallel to software development, Mike explains how Uncle Bob’s philosophy about structuring code emphasizes that frameworks shouldn't dictate structure; rather, it should reflect the application’s purpose and business logic. The panel, comprising Eddy, Kyle, Ramses, and Will, dives into this idea by comparing software architecture to traditional architecture, stressing that applications should be designed to clearly convey their business domains rather than merely reflecting the framework they are built upon. The conversation shifts to how different development roles approach structuring business logic. Will, primarily a mobile developer, shares his preference for keeping business logic close to the data layer, ensuring it remains adaptable and maintains integrity across varying platforms. Kyle adds a DevOps perspective, highlighting the value of configuration management and the necessity for disposability in infrastructure components, ensuring business logic stays modular and replicable without becoming burdensome or redundant. The team explores the tension between creating a reusable layer for business logic while managing caching, persistence, and the pitfalls of “fat controllers” in frameworks like Rails. Eddy brings up how MVC can sometimes obscure domain logic, advocating for clean architecture practices and encouraging separation of interface and business layers. From a security angle, Justin emphasizes the importance of validation before any business logic is processed, advocating for clear layers to handle inputs and outputs independently from core logic. The team also discusses how integration and unit testing benefit from this layered approach, allowing developers to isolate and test business logic without impacting UI or data layers. They conclude that a middle layer for business logic—distinct from the UI and data layers—is crucial for maintainable, secure, and efficient code, reflecting that, much like in traditional craftsmanship, thoughtful structuring and separation of concerns lead to better long-term results in software. Transcript:  MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike, and I'm hosting again today. I’ve got a great panel with me here today. I've got Eddy, Kyle, Ramses, and Will. And, as usual, I'd like to start off with a story that's not about software development [laughs] and tie it in. And I’ve mentioned my dad a couple of times in the podcast. I'm going to be again today. He's an interesting guy. Like, we all think our dads are interesting. Well, I think my dad is interesting, too [laughs]. Most of his career is building, building things, but he's also a tinkerer. He makes his own tools. So, he does woodworking, right? And he makes his own tools for woodworking, and woodworking ends up doing a lot of things. One of the things that woodworking led him into is, you know, making stuff, then he's done some sculpting. Just in the last month, my mom and dad traveled to Italy and Greece. They hadn't done a whole lot of traveling. Did it there and went and saw some of the sculptures in Europe. And my dad said when he saw Michelangelo's David, it made him cry [laughs]. Because he's done sculpting professionally, you know, he's been building things his whole life. 
When he saw the craftspersonship, right, that went into that, it was just overwhelming. And, you know, it really affected him, which is pretty cool [laughs]. I haven't had that reaction seeing, well, I've never been there, so I really can't say. But, you know, when you devote yourself to making stuff so much for a long time, it, you know, it really affects you when you see something that's done well. Now, we build software [laughs], and you don't see the structure very much. I heard an analogy made by Uncle Bob. Look him up. If you don't know what I'm talking about, he's a luminary in the software community. That's not his full name, but it's what he goes by. He’s one of the signers of the Agile Manifesto. He's a prominent guy. I heard a talk from him once, and he said that in software, we tend to structure our applications around the framework instead of structuring them around what they do, and that idea has really struck me. And he made the comparison to architecture. I'm going to start with this, going back to this statue of David. You know, if you're going to make a big marble statue, you are constrained by the chunk of marble you start with. If you don't follow that shape, you're going to have some edges, right, where it does not work. And it drives everything that comes afterwards. You know, everything has to work around the constraints of that stone. And somehow, when we build software, we sometimes don't think that way. If we're writing a Rails app, we think, okay, I've got models, views, controllers. Everything should look like the framework. And I'm glad we've got the panel we have here today because not all of us are, you know, we're coming from different kinds of frameworks, which is kind of cool. This idea, though, of structure coming from your framework rather than what your code is it's a big one because if you look at your code and say, okay, yeah, this is a Rails application. I've got models, views, and controllers. I have no idea what this application does. I have no idea what the business domains are. I don't know how to find anything [chuckles]. It's all just lumped in there together. I'm going to take that further and say that if that's all you see, then there's something probably horribly wrong. Because, in any large application, you've got business logic that has nothing to do with presenting out to the customer. It's probably running in background jobs. It's running...you're probably sending out emails to people. You are, you know, writing reports, whatever the case may be, although you shouldn't write a lot of reports in your app. That's a bad idea. Use an outside tool [laughs]. If you’re writing reports, you're doing it wrong. Anyway, but there's all these things other than just, you know, your web app, you know, you probably got a mobile app somewhere. You've got an API. You've got to talk to it, talk to that with. So, if all your business logic is structured around your display code, then you've got a serious problem because it's not reusable. Going back to what Uncle Bob said, he used the example of not a sculpture but of architecture. He wasn't thinking about that marble you start with, and you get down to the sculpture. He was saying, well, if you look at a floor plan of a building, you know what that building is for. If you look at a school floor plan, you're going to look, and say, okay, here's the gym, here's the, you know, lunchroom, and here's all the classrooms. It's really obvious that it's functionally designed to match that. 
If you look at an office building and you see your cubicles, you know, and you see your meeting rooms, it's going to make sense. You're not going to see every single building look exactly the same because we wouldn't do that because the building should follow the function. We have domains in our code. If you've got something that's, you know, a major component in your code that can make up a...if you've got blogging software, you're probably going to see something around leaving comments, right? And that's a major domain of your application. And if it doesn't look like that, then there's something wrong with how it's structured. That idea of thinking about where your business logic goes, thinking about how your application is structured, I think, is a potent one. And that's our topic today. Where should you put your business logic? And we're going to talk about this in kind of a framework-agnostic way. Because I think these principles apply universally regardless of what framework we're working in. I'm wondering...so, we've got some...so, Will, you're not at Acima. I'm curious what your thoughts. I'm going to start there before we, you know, kind of talk about the Rails side or some of the in-house stuff. I'm also curious what you think, Kyle, and what your thoughts are because you’re looking at this from a DevOps perspective. Where should your business logic go? How do you structure things so that you end up with your business domain and it’s recognizable? And you don't just look like, oh, this is a React app, and that's all it is. I have no idea what else it does. WILL: I’d say one thing I am doing a lot of mobile development, you know, that's probably 80% of my work these days. And one thing about mobile app development is you can't roll it out with the speed that you can roll out a web app, right? I can have a web app. I could roll the whole server, I don't know, even a big server I can do, you know, in under an hour, you know, little stuff. You can do it in minutes, if not seconds. So, one of the things that you really need to be mindful of as a sort of, like, a mobile app developer, is pushing business logic as close to the web server as you possibly can. Sometimes you got to do...I'll do in terms of the data, like, what am I going to render and when, where, why, and how? I always want to push that back onto the server because I can fix it. I can change it. I can get different stuff going. And so, you know, for a mobile app and for a web app, too, I really like to keep that view layer pretty, you know, I mean, in terms of logic, I want to do layout. And I want to do a last line of defense sort of, you know, sanity checking, right? Because if something, if for whatever reason, I'm the last line of defense to keep something from being screwed up on the screen, that's what you want, right? That's your goal. That's what y
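
    A hedged sketch of the structure the panel is describing, using the blog-comments example from above: the business rule lives in a plain Ruby object named after the domain, with no knowledge of controllers, views, or the framework. The names are hypothetical, not code from Acima's applications:

        # app/domain/comments/post_comment.rb -- hypothetical domain object.
        module Comments
          Result = Struct.new(:ok, :value)

          class PostComment
            def initialize(comment_repository:)
              @comments = comment_repository
            end

            # Business rules only: returns Result(true, comment) or Result(false, error).
            def call(author:, body:)
              return Result.new(false, "body is required") if body.to_s.strip.empty?
              return Result.new(false, "body is too long") if body.to_s.length > 5_000

              Result.new(true, @comments.create(author: author, body: body))
            end
          end
        end

    A Rails controller, a background job, or an API endpoint can all call the same object, which is the reusability being argued for here.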

    51 min
  6. Episode 59: Infrastructure Tools

    11/13/2024

    In this episode of the Acima Development Podcast, the panel dives into the complexities and consequences of choosing the right tools for software development and engineering. Justin opens with a story from his first job, where his team implemented an Oracle Customer Management System that ultimately led to disaster. The system, despite promises of extensive customizability and robust functionality, failed spectacularly due to performance issues, particularly with search speed, which took minutes per query. This failure, which ended in a costly rollback to the previous system, highlighted significant lessons about the importance of realistic testing, validating vendor claims independently, and having fallback plans for major tooling changes. As the discussion progresses, the team shares additional insights on the risks associated with both large, established tech tools and smaller, emerging ones. They debate the pros and cons of "buy vs. build," generally advocating for buying established solutions rather than building in-house, particularly when these tools fall outside the company’s core value proposition. However, they recognize that established tools, while more reliable, often become stagnant, and smaller, innovative companies may offer better solutions that come with risks, such as sustainability and long-term support. A balance is often difficult to strike, and each choice comes with potential downsides, whether it's cost, dependency, or eventual obsolescence. The episode closes on a broader note about the inherent "bitter lesson" of technology: that simple algorithms powered by ample resources often outperform intricate, specialized solutions. This leads to a candid reflection on open-source contributions, corporate influence, and the evolving tech landscape, where many advancements are the result of unsustainable VC-backed ventures. The hosts humorously conclude with the advice that any tooling decision likely involves trade-offs and inevitable challenges, encouraging listeners to accept the imperfect nature of these choices. Transcript: EDDY: Hello and welcome to Acimas Dev Podcast. Today, we have an awesome crowd, right? We have Mike; we have Kyle; we have Justin, and Melina, right? And today we decided upon popular opinion, right? That we'd be talking about tooling, right? Should you be married to a tool, pros and cons of switching to a new tool, when to do it, why to do it. And Justin actually was talking off-air on having a horror story behind something that sort of segues into this. So, Justin, by all means. JUSTIN: All right. So, what I have is a story from my first real programming job, and it was the decision to move to Oracle. Boy, I don't even know what it was called, but it was an Oracle customer management system, the CMS. And, if you're a programmer, sometime in your career, you will either write or rewrite a customer management system. That is, like, a given. I was lucky enough to do it or unlucky enough to do it on my first job. And the decision was made at levels far above me that we were going to do Oracle’s customer management system, which was, you know, Oracle a huge company, huge corporation, and it had a tool for everything and, especially, it had a tool for managing customers that we had never used before. And the thing about it was is, like, they advertised it as, hey, you can use this tool, and you can customize it any way you like it because we have a field for everything, and you can add your own custom fields. 
And every field could be validated, and every field could be a search, and every customer can have any number of objects associated with the customer. And so, it was everything to everybody. And the VP in charge was like, oh, this is going to be awesome. And we were a medical company that did medical tests. And we had customers in the numbers of tens of thousands, if not pushing a hundred thousand. So, the order of magnitude was actually not that big. But as we went along, we got given a deadline saying, “Hey, we are releasing this day, do or die.” And by gosh, we did it. We released that day. And come to find out the next day that nobody could use it. It wasn't a login issue; rather, it was a performance issue. Every single search took upwards of five minutes. And it was just a horrible, horrible experience. And it slowed our customer relationship team down to a crawl. Phone calls couldn't be completed. Doctors couldn't update their customer fields and things like that. And for the next, I think it was week and a half, we tried to fix it. And then, the CEO came in and said, “Screw it. You guys are moving back to the old way.” And, ultimately, we ended up throwing that away. And it was a, I don't know, I think it was on the order of a 5 million project, which some people think that's huge. Some people think that's not that huge. But to us, it was months and months of work and licensing fees and data. And it's actually probably over a year of work, come to think of it. But the key there was that it was an environment that nobody really knew how to do, an infrastructure that was new to folks. I mean, we'd done Oracle databases before, but, all of a sudden, we were asked to develop within the Oracle framework and using their tools, and it just sucked. And not only that, but our test environment was certainly not populated with a large enough data set to validate everything. And I believe that the fallout from that was the...I think that VP got fired. The rest of us just, you know, kept on going. And so, that's, like, the tale as old as time is, you know, some middle manager or upper middle manager gets canned because their project was a horrible implementation. EDDY: So, what's the underlying problem here? Was it lack of training? Is it the fact that you're developing in Java, or is it just Oracle in general [laughs]? JUSTIN: I don't know. There's probably plenty of blame to go around, right? But the lessons that I learned was the fact that you cannot depend on salespeople for the numbers that they give you. You have to prove it out yourself. And then, you have to test with as real production data as possible, and that includes complexity. That includes number of rows. That includes number of transactions, all of that. And you actually should have a rollout period where you have a beta with production data for X number of time because it basically shut the company down for a week and a half while we tried to make it work because they didn't have a fallback. And, luckily, they were able to fall back. They were able to roll back to the old way after a week and a half, but it was just...it just sucked. And, also, from the developer point of view, using the Oracle tools...because we had to use their packages. We had to use their architecture. Previously, we were just using their database as a data store. But now we were using their libraries. We were using their SDK a little bit. We were using their, you know, all these things, and it just sucked. 
Long story short, Oracle sucks, and use Postgres [laughs]. Even back then, Postgres was a thing, so... EDDY: I'm just surprised the company was willing to go a whole week with production being down until they finally decided, oh, you know what, cut the losses [laughs]. Let's go back to... JUSTIN: It's funny because different companies have different tolerances. This was a medical company. And so, when I say the whole company was shut down, I mean just, like, the customer side. We were still doing tests and things like that. And so, there was a certain amount of tolerance there for a couple of days. But the CEO was just like, you know, after a little while, he just couldn't take it anymore. KYLE: So, with that, were you privy to why they concluded on Oracle at all? And was there any competition why that tooling was selected? JUSTIN: That was all so far above me. KYLE: Far above you. JUSTIN: I was fresh out of college, but, man, it was a trial by fire. And just to follow up that a little bit, we were a development team of about 20. Over the course of the following year, I think they lost 15 of the engineers just because, you know, there were other problems with that organization, technology-wise. That was kind of the same time I left as well. MIKE: Yeah, I can say it's not always the big players. In my first job [laughs], we switched to a new customer management system, and it was built by, like, some guy in his garage. And it really was just somebody [laughs], I think, the CEO knew. And the entire thing, the entire system, was built around stored procedures in SQL Server because [laughs] they provide functionality to call out to externally compiled binaries that he had built, which, of course, you don't have the source code for. And there's no source code control, right? Because it's all just implemented straight inside the database. And there's no searching. You just have to know which stored procedure to look at or look through them one by one until you find what you're looking for. And that was the entire system. And we bought it, and then, of course, the person who we bought it from wasn't available for assistance. So, it was on us to manage it ourselves. And we kept that tool [laughs] for as long as I was at that company, meaning I learned how [laughs] to navigate through every bit of that code in order to make it work. There's a lesson here about how you provision tools at a company, I think. As a company, whoever you are, you're probably not a customer management system company. And if you're putting all of your efforts into building that in-house, you're diluting your expertise in what you’re probably supposed to specialize in, whether it be medical products, or some sort of medical services, or financial services, or whatever the case may be. You are focused somewhere outside your area of core expertise, and that often goes badly. I am, in gene
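
Justin's lessons here boil down to something testable: don't take the vendor's performance numbers on faith, and rehearse the hot paths against production-shaped data before cut-over day. Below is a minimal sketch of that idea in Ruby; `Customer` and `CustomerSearch` are hypothetical stand-ins (they are not from the episode), and it assumes an ActiveRecord-style model, but the shape of the check is the point: seed realistic row counts, time the searches, and fail loudly if the latency budget is blown.

```ruby
# Hypothetical pre-cutover smoke test: seed production-scale data, then time
# the hot path (customer search) against a latency budget agreed up front.
# `Customer` and `CustomerSearch` are placeholders for whatever the new
# system exposes; the lesson is to test at realistic row counts, not toy ones.
require "benchmark"

TARGET_ROWS      = 100_000  # roughly the real customer count from the story
LATENCY_BUDGET_S = 2.0      # whatever the support team can actually live with

# Seed in bulk so the data set matches production in size, not just in shape.
Customer.insert_all(
  TARGET_ROWS.times.map do |i|
    { name: "Customer #{i}", state: %w[UT TX CA].sample, created_at: Time.now }
  end
)

# Time a handful of representative searches and keep the worst case.
worst = %w[Customer 12345 UT].map do |term|
  Benchmark.realtime { CustomerSearch.call(term).to_a }
end.max

if worst > LATENCY_BUDGET_S
  abort "Search too slow before cut-over: #{worst.round(2)}s (budget #{LATENCY_BUDGET_S}s)"
else
  puts "Search within budget: worst case #{worst.round(2)}s"
end
```

Run against a staging copy sized like production, a check like this turns "the salesperson said it scales" into a number you can veto a release with.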

    46 min
  7. Episode 58: Feature Verification

    10/30/2024

    In this episode of the Acima Development Podcast, Mike humorously shares his experiences with his three-year-old son’s unconventional love for books, recounting how his son physically devours them, forcing Mike to create an elaborate security system to protect the family’s book collection. This story transitions into a discussion about problem-solving and determination, linking it to testing and validation in software development. The team laughs about how motivated problem-solving, much like a child's persistence, can reveal gaps in security or design, paralleling the way developers must rigorously test software to uncover flaws. The podcast delves into the importance of validating features beyond just passing unit tests, emphasizing that even bug-free code may fail if it doesn’t meet the right requirements. The conversation touches on the limitations of unit tests, pointing out that they are only as effective as the person who writes them and that they cannot catch misunderstandings in requirements. The hosts also discuss tools like Qualit.AI, which use AI to generate tests, but caution that even with advanced tools, ensuring correct specifications and validation is crucial to avoid costly errors in software. The conversation expands into a broader discussion of testing methodologies, including dynamic application security testing (DAST), the value of cross-checking systems, and the challenges of relying solely on automated tools. The hosts highlight how important it is to approach validation from multiple angles, much like using "belt and suspenders" for security, to ensure robust software that can withstand different types of user interactions and vulnerabilities. The episode concludes by reaffirming the importance of rigorous validation processes, whether through unit tests, security scans, or AI-driven solutions, to catch errors and ensure the overall quality of the code. Transcript: MIKE: Hello, and welcome to another episode of the Acima Development Podcast. I'm Mike. I have here with me Eddy, Justin, Dave, and Kyle. I have a three-year-old, and he loves books, just absolutely loves books. He devours them. I can't tell you how many books he's been through. The problem is, I mean that literally [laughs]. He has eaten more books than I can count. He just loves the cardboard, chews it right up, removes the binding, tears into pieces. It is a real challenge [laughs]. [inaudible 00:41] the opportunity to read, but, you know, he definitely gets his fill of books, just not in the way that we intended [laughs]. It's the point where we have to take some security measures. In my living room, I have some stairs. At the top of the stairs, there's a bathroom, but next to the bathroom, there is not a full room; it's just a loft. It's an area that's enclosed on three sides by the boundaries of the house and then a wall with the bathroom, but it's not enclosed along the railing. So, you can look out over the railing. And it's open. It's not a full room. And there's no door. You just turn left and walk into the space, and that's where we have our books. And, you know, we have a lot of books. We like books [laughs]. We're book lovers. We have lots of children's books, lots of shelves of books. And he'd go in there and just have a meal. So [laughter], that's a problem. So, we started coming up with tactics to dissuade him from doing so. You know, we can always go in there, and grab a book, sit with him one on one. But we're trying to encourage him to appreciate books in a different way. 
We tried to child-gate. That didn't last long. He learned how to climb over it within weeks. So, we got rid of that. I put up a sheet of plywood and thought, well, this will dissuade him. That did dissuade him for a short amount of time. He figured out how to move that out of the way. I literally put an anvil in front of the sheet of plywood. And you know that if you're very determined, even if you're three years old, you can slide an anvil across the carpet, and he got in. And, eventually, the solution I had to do was to mount. I cut the plywood down a little lower. It's over a meter [laughter]. It's about...it's a little under five feet high probably, and, you know, I think, like a meter and a half. And I've got it mounted on hinges, hinges for a gate, so gate hinges. So, it means they can rock both ways. So, you can turn it, you know, 180 degrees. And I've got that mounted into the stud in that part of the railing. It's an enclosed railing, so I've got it mounted into the stud, so it’s solid. And I’ve put a gate latch on the other side that it mounts into. At first, we got this, and it worked. But pretty soon, he figured out how to undo the latch. So, we had to get a wire that had a screw clasp that held it together. He got through that, too. So, we have a padlock with a combination on it [laughter]. I hope that you get some sense of how impressive my son is [laughs]. DAVE: This is motivated problem-solving, absolutely. MIKE: Absolutely. Extremely motivated problem-solving. EDDY: Having a three-year-old being able to slide an anvil across the carpet is pretty impressive, just saying. MIKE: The truth is, he did that about a year ago. So [laughs], you know, he, yeah, he is strong and dedicated. I'm not going to go into all of the stories [laughs]. The other day, he saw a package on the front doorstep. So, he opened the window, pushed out the screen, jumped out the window, ran over, got the package, and came inside because he couldn't reach the latch on the door. EDDY: He'd make a really good QA member. MIKE: He would. And he'd make a great, well, he'd make a fantastic QA team member, which is the perfect segue into our topic today [laughs]. We're going to talk about validation of our features. And there are lots of things that people talk about to make, you know, your code good. And, you know, we talk a lot about unit tests. We focus a lot on unit tests. We've had multiple podcasts talking about RSpec [laughs] and best practices in that particular library for testing, which is great. I love unit tests. They give me a ton of confidence. I've also had experiences in the past where I have written some code, written the unit tests, gone through testing, everything worked perfectly, got it deployed, and then realized that it didn't meet the requirements [laughs]. Regardless of how well it was written, it missed the point, right? It missed the mark. And that is completely outside of what RSpec can save you from. Unless you have...you just can't. Like, if your requirements are wrong, if you misunderstand the requirements, then your specifications are not going to catch you. EDDY: Well, RSpec is only as good, or a unit test is only as good as the author, right? It can only be as good as the person who writes it. DAVE: The problem behind the problem is you built the wrong thing. There's no amount of tooling that can save you from successfully building the wrong thing. JUSTIN: Well, I went to a presentation by a company called Qualit.AI. They have AI tests, or their tests are generated by an AI. 
And it's actually pretty interesting. You can feed it the specs that are written by the product owner rather than engineer. And, theoretically, it will solve some of that problem. MIKE: But, again, if the specifications are written wrong, we have trouble, and there's a gap there. Now, you can only take that so far. You know, at some point, people have got to get it right. There's this really important component of creating the validation instructions to make sure that we get out the feature that we really want early on. And you can have bug-free code, technically bug-free, because it does exactly what you told it to do that is not the feature you wanted. Further, unit tests live as a unit, right? They test a unit of code in isolation. By design, it makes them fast. It means you can run your test suite and validate your code quickly. They do not run an integrated test to make sure all the components run well together. And that's another prong of a testing strategy. There are also things like security tests, you know, penetration tests or probing for libraries with security holes. I mean, there are a whole range, a whole range of ways that things can go wrong. And the way that you validate your code is the way that my son does. He works on it until he gets to that book. And unless it's essentially impossible for him to get in, he's going to be eating that book. And sometimes, we don't have quite that attitude. Sometimes we think, well, you know, I think it passes the unit test. It must be good. You know, I've written good code here. And once we start to get a little lax there, we get ourselves in a lot of trouble. JUSTIN: This kind of goes into what type of testing you want to do. And the reason I bring this up is I'm in charge of the DAST testing at our company, and we use a tool called Rapid7. And one of the things that it does is it does, basically, you know, it scans the application, and it just shoots everything under the sun at the application, and it's not looking for functionality. It's not looking for usability. What it's looking for is vulnerability. And so, it's like, a specific type of thing that you're looking for, I mean, that kind of drives your testing, you know, what type of testing you're doing. And the test suite that Rapid7 uses takes basically two days to run. So, we always run it Friday at like, you know, 4:00 p.m. But it's just kind of nuts because it throws everything under the sun at every single form parameter API that it can detect. And, you know, it gives you results, and you've got to parse through them and everything and, you know, kind of see if they were useful or not. But I think it goes back to, you know, you've got to test with intent and, you know, what you're looking for as a result of it depends on what type of test you're running. DAVE
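
Mike's point that a green unit suite can still ship the wrong feature is easy to show in miniature. The RSpec sketch below is not from the episode; it uses a made-up discount rule where the requirement was "10% off orders over $100" but both the code and its unit test encode "$10 off," so the unit test passes while an acceptance-style expectation written from the product owner's wording fails.

```ruby
# Hypothetical example (not from the episode): the requirement is
# "10% off orders over $100", but both the code and its unit test were
# written against a misreading -- "$10 off" -- so the unit suite stays green.
require "rspec/autorun"

class DiscountCalculator
  # Built to the wrong spec: subtracts a flat 1,000 cents instead of 10%.
  def call(total_cents)
    total_cents > 10_000 ? total_cents - 1_000 : total_cents
  end
end

RSpec.describe DiscountCalculator do
  # Unit test: internally consistent with the code, so it passes --
  # and proves nothing about whether the feature is the right one.
  it "takes $10 off orders over $100" do
    expect(described_class.new.call(20_000)).to eq(19_000)
  end

  # Acceptance-style expectation written from the product owner's wording.
  # This is the check that fails and surfaces the requirements gap.
  it "applies a 10% discount to orders over $100" do
    expect(described_class.new.call(20_000)).to eq(18_000)
  end
end
```

That second expectation is the kind of validation the hosts are after: it cross-checks the code against the requirement itself, not against the developer's reading of it.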

    58 min
  8. Episode 57: How to Learn

    10/16/2024

    In this episode of the Acima Developer Podcast, Dave Brady, along with panelists Eddy, Kyle, Ramses, and Will, discuss the various ways people learn, sharing personal anecdotes and insights. Dave kicks off with a humorous story about learning to swim by being thrown into a lake, which sparks a broader conversation about how some people thrive when thrown into challenging situations, while others prefer a more structured approach. Will identifies with the former, explaining how he learns by setting impossible goals and pushing himself to figure things out, often attributing his approach to an undiagnosed ADHD coping mechanism. Eddy contrasts this with his preference for having safety nets in place when learning, underscoring the diversity in learning styles. The discussion also touches on the importance of incremental learning, with Will emphasizing the strategy of "debugging from the known to the unknown." He shares advice from his engineer father, explaining how building knowledge step by step can help navigate new problems. They also dive into how experience in multiple programming languages helps make learning new ones easier, but caution against assuming that all programming languages follow the same rules or paradigms. The group explores how learning through experimentation, much like playing music, can lead to deeper understanding, yet each new tool or language requires its own dedicated study. The conversation wraps up with a reflection on the importance of continuing education for software developers. Will argues that the industry often neglects training, which leads to burnout and inefficiency, as engineers are left to constantly catch up on their own time. Dave adds that knowledge is crucial to a developer's career longevity, stressing the need for ongoing learning even if employers don't invest in it. The episode highlights the parallels between music, programming, and learning in general, emphasizing the importance of both structured and self-driven approaches. Transcript: DAVE: Hello and welcome to the Acima Developer Podcast. I'm your host, Dave Brady. Today, we've got Eddy, and Kyle, and Ramses, and Will. We've got a really great panel today. We're talking about learning. And Mike usually likes to start with a story, and I don't have a good...I've got too many stories about learning, and I don't want this to be the David Brady talks about his life story again show. I’ll just start with one of the stories from my childhood, which is that a lot of times, I learned the hard way. My dad taught me to swim by throwing me in the lake. And the hard part wasn't learning how to dog paddle. The hard part was getting the knot untied from inside the sack. WILL: [laughs] DAVE: So -- EDDY: The joke is that you were destined to fail because you weren't given the proper...gotcha. DAVE: Yeah. Except, a lot of times...for the people listening at home, about 30 seconds before we started this call, I told Eddy that he should host, and he said, “No, no, no, no.” And I was like; I'm this close to just making him do it because the best way somebody [inaudible 01:07] that they can swim is just throw him in the lake. And it feels like you're tied up in a sack because you don't feel like you're maximally competent. You are minimally competent. That's what beginning is. And so, yeah, all right, so we took the joke. We overexplained it, and then we turned it into a metaphor. Why not? 
EDDY: I will say, David, it's funny because I've met some people in my life who learn best by being thrown in a bag and tied in a knot, and some of them prefer that. They like to just be tossed in the deep and figure it out, you know? I know that there's very little people in the world that learn that way, but I do know that there's a lot of people who actually excel being in a [inaudible 01:46]. DAVE: And, for those people, skydiving and working with cobras is probably not a good career. EDDY: Will, I saw you raise your hand. Are you one of those people? WILL: [laughs] It's me. Like, I am that person. Like, I have tried, like, so many times in my life, you know, so many times in my life, to be like, you don't have to do it that way, do you? Not that way. Not again. Again? But how I learn is I get myself in over my head. I assign myself crazy goals, impossible deadlines, crazy projects. I have no idea what I'm doing, and I don't know; I’ve figured it out. I pull it off [laughs]. EDDY: Is it because you want to tear things open? Like, if they assign you something and they're like, “Hey, Will, get this done,” you prefer to be proactive and undig and, you know, be... WILL: I think it's, I mean, as I learn more about sort of mental health and, you know, work styles and stuff like that, I think it's just a...I think it's an undiagnosed ADHD coping mechanism that that's the root of it. But, man, I don't know, I mean, like, I think there's a certain level of, like, if I may pat myself on the back a little bit, the breadth of things that I've done pretty successfully over the course of my career is significant. And I don't know a lot of people that have covered as much ground, and I don't know...it's hard to summon up the requisite psychic energy to really push yourself in a completely non-toxic fashion. Jet fuel is deadly poison, but it'll...you put a big enough pile of it under your butt and set the fuse, and you're going somewhere, you know [laughs]? DAVE: There's a YouTube guitarist named Rhett Shull, and his slogan is, there is no plan B. Like, he will actively destroy plan B. If he wants Plan A to work, he cannot have a Plan B in his pocket, or he'll fall back on it every time. And so, that's literally his channel slogan is, remember, there is no Plan B. EDDY: I will say I learn way different than you, Will. Like, if you throw me in the deep end and I don't know how to swim, I expect to have a lifesaver somewhere, like a jacket, or floaties, or whatever. It's like, cool, yeah, like, I'll put you in there, but you at least have a safe haven where you can fall back on in case you can't figure out how to swim. That's my personality. However, if you don't give me floaties and you don't give me a life jacket or a vest or anything, I can at least learn how to float [laughs], you know, stay afloat, you know, like, I won't drown entirely. It's just not the ideal spot you want to be in. WILL: I mean, it's not like I won't ask for help. It's not like I won't ask for clarification or anything like that. I just, I don't know; I mean, that's just how I do it. I need a certain amount of fire under my butt to do my best work. EDDY: That brings up a really interesting point, though. Like, what are some of those techniques that you do then? If you get put in that situation to learn, what are some of the things that you gravitate to that sort of help you? WILL: When I'm in it? 
Yeah, it's one of the best pieces of advice I ever got was from my dad, a capital E engineer, and he always said, “Debug from the known to the unknown,” right? And so, what you always want to do is you want to sort of expand what you're doing, expand your knowledge from a functional state, from a working state, and it doesn't matter how petty, and trivial, and stupid. So, if I was like, let's say I'm going to do a brand-new thing, right? I'm going to write a thing in a brand-new language, right? Like, I've said this on this podcast multiple times, and you haven't heard the last of it after I get this one out, is start as small as you need to get a functional kernel, a nucleus of something that you can do and then expand from there, which is the easy thing to say, but it's a hard thing to do because the space of a new problem, a new technique, a new territory is big. You don't even know where the edge of the map is. You don't know where you are. And so, you need to collapse down to something functional and then sort of build out from there. And that's, you know, that's the best way that I know to do it. And when you approach it like that, I mean, it really can be a very simple mechanical kind of a thing. EDDY: I like that. WILL: I was actually listening to a podcast. It’s the Andrew Huberman Podcast. He was talking about, like, literally, like, a process of learning, right? If I could paraphrase, like, a podcast that I only finished half of, the learning technique, the science that he was sort of describing was saying...he’s like, the key to learning is not so much learning; it's preventing the natural process of forgetting. Your brain is --- DAVE: Neural pruning. WILL: Well, your brain's taking in information all the time, right? You're constantly bombarded, flooded with data, and most of it gets thrown out because it's not useful. Your brain is actually really efficient at flushing that cache out. And so, what you want to do is try as hard as you can to establish it, and you build those hooks, tie it in with other things that you already know. Like, I've gotten really good at picking up languages because I've picked up a lot of languages. And it's just like, you know, somebody who learns seven languages, the eighth one will be easy. Well, if you know seven programming languages, then the eighth one isn't going to be so bad. And so, we're getting new techniques. We’re sort of attaching techniques onto things that you already know how to do. If you wanted to write...let's say you didn't know Java, but you knew Ruby pretty well, running a web server in Java, you would have a theoretical construct, a matrix to think about things. It's like, oh, well, I know about, you know, routes in Rails. You know, does my Apache Server have routes the same way? It sure does. Or, I mean, Apache is not the right one, but anyway, I forget, Spring Boot server. And so, you can connect things. So, it's like, oh, Ruby has an array. Java's got an array. It's got a vector, right? So, I mean, like, okay, so we've g
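
Will's "debug from the known to the unknown" advice can be sketched as a sequence of tiny steps, each adding exactly one new thing on top of a state that already works. The Ruby example below is not from the episode; it imagines picking up Net::HTTP for the first time and expanding outward from a minimal working kernel.

```ruby
# A sketch of "debug from the known to the unknown," using a made-up scenario:
# learning Ruby's Net::HTTP for the first time. Each step adds exactly one
# unknown on top of a state that is already known to work.
require "net/http"
require "json"
require "uri"

# Step 1: the functional kernel -- the smallest thing that can possibly work.
# No parsing, no error handling; just prove a request goes out and comes back.
uri  = URI("https://example.com/")
body = Net::HTTP.get(uri)
puts "kernel works: got #{body.bytesize} bytes"

# Step 2: expand by one unknown -- inspect the response object instead of the
# raw body, so failures become visible.
response = Net::HTTP.get_response(uri)
puts "status: #{response.code}"

# Step 3: one more unknown -- attach a concept you already have (JSON parsing)
# to the new territory.
payload = JSON.parse('{"lesson": "expand outward from a working state"}')
puts payload["lesson"]
```

The mechanical part is the discipline: if step 3 breaks, the bug has to live in the one thing that changed, because everything before it was already known-good.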

    50 min
