The Frontside Podcast

Charles Lowell & the Frontside Team

It's like hanging out at our software studio in Austin, Texas with Charles Lowell and the Frontside Team. We talk to smart people about how to make the world of software better for the people who make and use it. Managed and produced by @therubyrep.

  1. Big Ideas & The Future at The Frontside

    10/03/2019


    In this episode, Charles and Taras discuss "big ideas" and all the things they hope to accomplish at The Frontside over the next decade. Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer. This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC. Transcript: CHARLES: Hello and welcome to The Frontside Podcast, a place where we talk about user interfaces and everything that you need to know to build them right. Today, we're going to talk about big ideas and the future of Frontside. TARAS: Yeah, starting with is Frontside is good idea? CHARLES: No, we're going to just talk about how do you know that an idea is good? We've touched on it a couple of times before. Like how do you, how do you go about validating a big idea, how do you discover a big idea? What do you do? TARAS: And then, even when you have big ideas like big tests, what does that mean for the world? How do you make a big idea an idea that a lot of people like and agree with and actually use on day to day. CHARLES: Yeah. It turns out that it's not easy. There's a lot of work involved with that. A lot of it crystallized around the conversations we're having about what exactly is big test and recognizing that big test isn't really a code base. It's not a toolkit. Even though it does has aspects of those things, it really is an idea. It's an approach. It's a way of going about your business, right? TARAS: Yeah. Especially when you put big test functionality in place, when you start doing big testing and then you put together things like using Mocha and Karma, big tests in that kind of test suite is really just like interactors and the idea of big testing. There's nothing else. All the interactors do is just give you an easy way to create composable like [inaudible] objects, so you don't have to write -- you have the components but you don't have to write selectors for each element in the component, especially if gets composed. But that's like a very small functionality that does a very specific thing. But big test itself, it takes a lot of work to actually -- we had this firsthand experience on the project we're working on right now. We are essentially introducing like Ember's acceptance testing but for React in a react project and having to explain to people what is it about this that actually makes it a really good idea and having people in the React world see that this is actually a really good idea. It's kind of incredible. When you actually try to sell something to somebody and convince somebody that this is a good idea is when you realize like how inadequate your understanding of the idea really is. You really have to start to break it down and understand what is it about this that is a really big idea. CHARLES: Yeah, I completely agree 100% because to be clear, we've actually been doing this now for two years almost. So, this is not the first React project where we've put these ideas in place. But I think in prior examples, we just kind of moved in and it's like we're going to do this because this is what we do. And we have firsthand knowledge of this working because we've operated in this community where this is just taken on faith that this is the way you go about your business. You have a very robust acceptance test suite. 
    CHARLES: Yeah, I completely agree 100%, because, to be clear, we've been doing this for almost two years now. This is not the first React project where we've put these ideas in place. But in prior examples, we just kind of moved in and said, we're going to do this because this is what we do. And we have firsthand knowledge of it working because we've operated in a community where it's taken on faith that this is the way you go about your business: you have a very robust acceptance test suite. And because of that, you can experience incredible things. When you and I were talking before the show, we were commenting that inside the Ember community, you can do impossible things because of the testing framework. You can upgrade from Ember 1 to Ember 3, which is basically a completely and totally separate framework. You're completely and totally changing the underlying architecture of your application, and you can do it in a deterministic way. That's actually incredible.

    TARAS: And what's interesting too is that the React core team hinted at this in their blog post about moving to Fiber. One of the things they talked about there is that knowing how the system is supposed to behave on the outside allowed them to change the internals of the framework. The test suite for the React framework treats the system as a black box, and that allowed them to change the internals while the test suite essentially stayed the same. So this idea of acceptance testing your thing is really fundamental to how the Ember community operates, but other communities have it as a big idea as well. It's just applied in different areas.

    CHARLES: Right. And so, applied to your actual application, this is accepted practice in one place, but not in others. You can make your argument from experience and say, "I have experienced this and it's awesome. And you can do architectural upgrades if you follow the patterns laid out by this idea." That works if you have a very high-trust relationship, but you don't always have that, nor should you. Not everybody is going to trust you out of the box. So you're going to have to lay out the arguments and be able to illustrate, conceptually and practically, how this is a good idea just so that people will actually give it a try.

    TARAS: And it takes a lot. One of the reasons it was easier for us to introduce big testing on the current project is that we had already done the implementation on a previous project. So when we were convincing people it was a good idea, we could say, "Look, your test suite can be really good. It's going to be really fast. And look, we've done it before." The actual process of convincing people that a big idea is a really good idea is a complicated process that requires a lot of work, and partially requires experimentation. You have to put an implementation in place and build on your successes to get to the point where you can convince people -- people who have not heard of the idea, and especially people whose communities have authorities with counter views. For example, when you bring up BigTest, people will often reference a blog post by a Google engineer about how functional testing or acceptance testing is terrible. For a lot of people, a Google engineer means a lot, and the person makes really good points. But that's not the complete idea. The complete idea is not just about having an acceptance test suite. It's a certain kind of acceptance test suite: one that mocks out at the boundaries, so you don't make API requests to the real server; you make an API request to a [inaudible] server using something like Mirage, or whatever that might be. So the big idea has nuances that make it work. But getting people who are completely unaware of it, who don't necessarily look up to you as an authority, to say, "Yes, that actually sounds like a really good idea" -- that is not a trivial task.
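    Here is roughly what that boundary mocking looks like with Mirage. This is a minimal sketch using the miragejs API; the model, route, and field names are invented for illustration.

    ```typescript
    import { createServer, Model } from "miragejs";

    // A minimal sketch of mocking at the HTTP boundary with Mirage.
    // The browser's fetch/XHR calls are intercepted, so an acceptance
    // test exercises the full client stack without a real backend.
    function makeServer() {
      return createServer({
        models: {
          device: Model,
        },

        seeds(server) {
          // Each test can seed its own scenario at the boundary.
          server.create("device", { name: "Thermostat", connected: true });
          server.create("device", { name: "Lock", connected: false });
        },

        routes() {
          this.namespace = "api";
          this.get("/devices", (schema) => schema.all("device"));
        },
      });
    }

    // Usage in a test: set up the world, run the app, tear down.
    const server = makeServer();
    // ... mount the application, assert that both devices render ...
    server.shutdown();
    ```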
    CHARLES: No, it's not. Because the first time you try to explain it, you're arguing from your own assumptions. You're coming to it safe in the knowledge that this is a really, really good idea based on your firsthand knowledge. But that means you're assuming a lot of things. You're assuming a lot of context that you have and someone else doesn't, and they're going to be asking questions: why this way, why this way, why this way? And so you have to generate a framework for thinking about the entire problem in order to explain the value of the idea. That's something you don't get when it's just accepted practice.

    TARAS: I think simulation is actually a really good example of that. If you haven't had experience with Mirage, if you don't know what having a configurable server in your tests does for you, you will probably not realize that similar ideas apply to, for example, Bluetooth. When you're writing tests for an application that interacts with Bluetooth devices, you want a simulation layer for Bluetooth so that you can configure it per test, in much the same way you would configure Mirage for a specific test. You want to be able to say, "This Bluetooth device is going to exist," or, "I have these kinds of Bluetooth devices around me; they have the following attributes; they might disconnect after a little while." There are all kinds of scenarios that you want to be able to set up so that you can see how your application is going to respond. But if you haven't seen a simulation with something like Mirage, you're going to be going, "I don't know why this would be helpful."

    CHARLES: It seems like a lot of work. One of the things we've been working on, as I said, is trying to come up with a framework for thinking about why this is a good idea, because we can't just assume it. It's not common knowledge. For example, one of the things we've been developing over the last year, and more intensively in the last few months, is trying to understand what makes a test valuable. At its essence, what are the measures that you can hold up to a test and say this test has X value? Obviously, something like that is very, very difficult to quantify. But if you can show that and say, "This test has these qualities and this test has these qualities," then we can actually measure them. Then it's going to allow people to accept it a lot more readily and try it a lot more rea
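    By analogy with Mirage, the per-test Bluetooth simulation Taras describes might look like the following. This is a hypothetical sketch; no specific library is implied, and all names are invented.

    ```typescript
    // A hypothetical sketch of per-test Bluetooth simulation (all names
    // invented; not a real library). Each test declares the devices that
    // "exist" and how they (mis)behave, and the app under test talks to
    // the simulated adapter instead of real hardware.
    interface SimulatedDevice {
      name: string;
      rssi: number;
      disconnectAfterMs?: number; // simulate a flaky peripheral
    }

    class BluetoothSimulator {
      private devices: SimulatedDevice[] = [];

      // Configure the scenario for this test, like Mirage's seeds().
      create(device: SimulatedDevice): void {
        this.devices.push(device);
      }

      // The application's scan call resolves against the simulated world.
      async scan(): Promise<SimulatedDevice[]> {
        return [...this.devices];
      }
    }

    // Usage in a test: "these kinds of devices are around me."
    const bluetooth = new BluetoothSimulator();
    bluetooth.create({ name: "HeartRateMonitor", rssi: -40 });
    bluetooth.create({ name: "Lock", rssi: -70, disconnectAfterMs: 500 });
    // ... hand `bluetooth` to the app, assert how the UI responds ...
    ```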

    31 min
  2. Transparent Development

    09/26/2019


    In this episode, Charles and Taras discuss "transparent development" and why it's not only beneficial to development teams, but to their clients as well. Please join us in these conversations! If you or someone you know would be a perfect guest, please get in touch with us at contact@frontside.io. Our goal is to get people thinking on the platform level which includes tooling, internalization, state management, routing, upgrade, and the data layer. This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC. Transcript: CHARLES: Hello and welcome to The Frontside Podcast, the place where we talk about user interfaces and everything that you need to know to build them right. It's been a long summer and we're back. I'm actually back living in Austin, Texas again. I think there wasn't too much margin in terms of time to record anything or do much else besides kind of hang on for survival. We've been really, really busy over the last couple of months, especially on the professional side. Frontside has been doing some pretty extraordinary things, some pretty interesting things. So, we've got a lot to chew on, a lot to talk about. TARAS: There's so much stuff to talk about and it's hard to know where to start. CHARLES: So, we'll be a little bit rambly, a little bit focused. We'll call it fambly. But I think one of the key points that is crystallized in our minds, I would say over this summer, is something that binds the way that we work together. Every once in a while, you do some work, you do some work, you do some work, and then all of a sudden you realize that a theme, there's something thematic to that work and it bubbles up to the surface and you kind of organically perceive an abstraction over the way that you work. I think we've hit that. I hit one of those points at least because one of the things that's very important for us is -- and if you know us, this is things that we talk about, things that we work on -- we will go into a project and set up the deployment on the very first day. Make sure that there is an entire pipeline, making sure that there is a test suite, making sure that there are preview applications. And this is kind of the mode that we've been working in, I mean, for years and years and years. And where you say like if what it takes is spending the first month of a project setting up your entire delivery and showcasing pipeline then that's the most important work, inverting the order and saying that has to really come before any code can come before it. And I don't know that we've ever had like a kind of unifying theme for all of those practices. I mean, we've talked about it in terms of saving money, in terms of ensuring quality, in terms of making sure that something is good for five or 10 years, like this is the way to do it. And I think those are definitely the outcomes that we're looking for. But I think we've kind of identified what the actual mode is for all of that. Is that fair to say? TARAS: Yeah, I think one of the things I've always thought about for a long time is the context within which decisions are made because it's not always easy. And it's sometimes really difficult to really give it a name, like getting to a point where you have really clear understanding of what is it that is guiding all of your actions. What is it that's making you do things? Like why do we put a month of work before we even start doing any work? Why do we put this in our contract? 
    Why do we have a conversation with every client and say, "Look, before we start doing anything, we're going to put CI in place"? Why are we blocking our business on this piece? From a business perspective, it's a little bit crazy: "Oh, so you're willing to lose a client because the client doesn't want you to set up a CI process?" Or, in the case of many clients, the client is going to say, "We want to use Jenkins." And what we've done in the past, in almost every engagement, is say, "Actually, no. We're not going to use Jenkins, because we know it's going to take so long for you to put Jenkins in place that by the time we finish the project, you're probably still not going to have it. That means we're not going to be able to rely on our CI process, and we're not going to be able to rely on testing, until you're finished. We're not going to have any of those things while we're doing development." But why are we doing all this stuff? It was actually not really apparent until very recently, because we didn't have a name for what it is about this tooling that makes it so important to us. I think that's what has crystallized. And the way I know it's crystallized is that now that we're talking to our clients about it, our clients are picking up the language. We don't have to convince people that this is of value. It just comes out of their mouths -- it actually comes out of their mouths as a solution to completely unrelated problems. They recognize how this particular thing is a solution in that circumstance as well, even though it's not something Frontside sold in that particular situation. Do you want to announce what it actually is?

    CHARLES: Sure. Drum roll please [makes drum roll sound]. Not to get too hokey, but it's something that we're calling Transparent Development. What it means is having radical transparency throughout the entire development process: from the planning to the design, to the actual coding and the releases. Everything about your process. The measure by which you can evaluate it is: how transparent is this process, not just to developers, but to other stakeholders -- designers, people who are very developer-adjacent, engineering managers, all the way up to C-level executives? How transparent is your development? As we talk about how we arrived at this concept, we can see how this mode of thinking guides you towards each one of these individual practices. It guides you towards continuous integration. It guides you towards testing. It guides you towards continuous deployment. It guides you towards continuous release and preview. But I think the most important thing is that capturing this concept has guided us to adopt new practices which we did not have before. That's where the proof is in the pudding: when an idea shows you things that you hadn't previously thought of. There's a fantastic example. I saw it at Clojure/conj in 2016; there was a talk on juggling. One of the things they talked about was that up until, I think, the early 80's -- or maybe it was the early 60's -- the state of juggling was that you knew a bunch of tricks, and you practiced the tricks, and you learned these hard tricks. And that was what juggling was: you practiced these things. It was very satisfying, and it had been like that for several millennia. But these guys in the physics department were juggling enthusiasts, and I don't know how the conversation came about -- you'd have to watch the talk. It's really a great talk. What they did was make a writing system, a nomenclature system for systematizing juggling tricks, so they could describe any juggling trick with an abstract notation. And the surprising outcome -- or not so surprising outcome -- is that once you have it in the notation, you can manipulate the notation to reveal new tricks that nobody had thought of before. "Ah, by capturing the timing and the height and the hand, we can actually understand the fundamental concept, and so now we can recombine it in ways that weren't seen before." That opened up, I think, an order of magnitude of new tricks that people just had not conceived of, because they did not know that they existed.
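    The notation in that talk is almost certainly siteswap, though the episode never names it (an assumption on our part). In siteswap, a pattern like "531" lists throw heights; it is a valid trick exactly when every throw lands on a distinct beat, and the average throw height equals the number of balls. A short sketch shows how manipulating the notation mechanically reveals tricks:

    ```typescript
    // Siteswap validity: for a pattern of period n, the landing beats
    // (i + throw[i]) mod n must all be distinct.
    function isValidSiteswap(pattern: number[]): boolean {
      const period = pattern.length;
      const landings = new Set(pattern.map((t, i) => (i + t) % period));
      return landings.size === period;
    }

    // "Manipulating the notation to reveal new tricks": enumerate every
    // valid 3-beat pattern for 3 balls (digits 0..9 averaging to 3).
    const found: string[] = [];
    for (let a = 0; a <= 9; a++)
      for (let b = 0; b <= 9; b++)
        for (let c = 0; c <= 9; c++)
          if (a + b + c === 9 && isValidSiteswap([a, b, c])) found.push(`${a}${b}${c}`);

    console.log(found); // includes well-known tricks like "333", "441", "531", and "423"
    ```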
    And so I think that, as an abstract concept, is a great yardstick by which to measure any idea. Yes, this idea very neatly explains the phenomena with which I'm already familiar, but does the idea guide me towards things I had no concept of? Because the idea predicts their existence, I know they must be there and I know where to look for them -- and aha, there they are. It's like shining a light. So that's a little bit of a tangent, but I think that's why we're so excited about this, and why we think it's a good idea.

    TARAS: Yes. What's also been interesting for me is how universal it is, because the question "is this transparent enough?" can be asked in many situations. What's been interesting for me is that asking that question in contexts I didn't expect actually yielded better outcomes. At the end of the day, the test for any idea is: does it help you more often than not? Is it actually leading you somewhere? Does applying this pattern increase the chances of success? And that's one of the things we've seen just by putting practices into place and asking: are they transparent enough? Is this transparent enough? It's been really effective. Do you want to talk about some of the things that we've put in place in regards to transparency -- what it actually looks like?

    CHARLES: Yeah. I think this originally started when we were setting up a CI pipeline for a native application, which is not something we've typically done in the past. Over the last, I would say, 10 years, most of our work has been on the web. And so, when we were asked to essentially take responsibility for how a native application is going to be delivered, one of the first things that we asked kind of out of habit and out of just the way t

    47 min

Ratings & Reviews

5 out of 5 (7 Ratings)
