Episode 41
Code Reviews
March 13th, 2024
50 mins 48 secs
About this Episode
In this episode of The Acima Development Podcast, host Mike Challis opens a discussion on code reviews by drawing an analogy between creating music and writing code, sharing personal stories about feedback on his musical projects to illustrate the importance of constructive criticism. These anecdotes highlight the parallels between receiving feedback on creative work, like music, and technical work, like code, emphasizing how both require openness to feedback for improvement.
The conversation then goes into the specifics of code reviews, highlighting their value in the software development process and how code reviews are akin to collaborations in music, where input from multiple contributors often leads to a superior output. The panelists explore the nuances of effective code reviews, including the balance between too many and too few contributors, the importance of automation in enforcing style guidelines, and the value of comments within the review process to provide context and facilitate understanding.
The episode concludes with a deep dive into the best practices for conducting code reviews, addressing the challenges of pushback, the significance of understanding the reviewer's perspective, and strategies for ensuring constructive and efficient feedback. The conversation also touches on the importance of security considerations during the review process and the potential pitfalls of approving changes prematurely.
Transcript:
MIKE: Hello, and welcome to another episode of the Acima Development podcast. I'm Mike, and I'll be hosting today. Here we have with us Eddy, Justin, and Kyle who've been with us before. And we are going to be talking about code reviews. I like to start with a story, and I'm going to do that.
Sometime recently, we talked about music and about how there's often overlap between people who make music and people who write code. I'm going to build on that a little bit. I like to write music. I sometimes just write music for fun. I do stuff my kids like [laughs]. Their favorite song is something I did; it was just kind of a personal challenge. You know the video game Plants vs. Zombies?
JUSTIN: Mm-hmm.
MIKE: I [laughs] listened to the soundtrack of that, and I thought, if I were to write the soundtrack, what would I do? And I wrote a song that kind of captured my take on it. And my kids love it. That's the one they ask me to play most [laughter]. So, I feel pretty good about that. They are honest judges. And, you know, they do, you know, they give me commentary on it. And sometimes there's a situation like that, and sometimes there's something a little bit more formal.
A serious example: I had a friend who passed away a few years ago, and they asked me to do an arrangement for her funeral. And I got together with the woman who was going to be singing. I did an arrangement, and I brought it to the singer, and she gave me feedback. And she said, "Well, you know, I think that there's too much build in the second verse, and I think we should move that to the third verse." And, honestly, she was right [laughs] in kind of every respect. She had the feedback that was necessary in order to improve that arrangement, you know, for something that we both cared about. That's really relevant, and that's not the only time that's happened.
I've got some friends who have a high schooler; he does a lot of singing and drama. He's involved in a lot of those things. And I've done something with him before, too. Once again, I gave it to the singer. He came back with feedback and said, "Well, you know, I think we should change this. I think the melody is a little too repetitive. And I think if it went up here instead of down, that'd be better." And he was right. It made a genuine improvement to the structure of the music I put together.
I think that this story is really instructive. It's a review of something creative that is not in code, but it has many of the same characteristics as what we do in code, and the same goes for anybody who creates something. There's an intro to a song by a musician, like, a soul singer. She says [chuckles], "You know, remember, I'm an artist. I'm sensitive [laughs]. So, we feel that way, you know." Like, I'm an artist. I make this beautiful, perfect thing. And we think, well, you know, if anybody is giving me any feedback about this, they're criticizing my personal character and saying that I'm just not good enough. It's a really dangerous thing, both for ourselves and our art, I think, to start thinking that way.
It doesn't matter really how brilliant we think we are [laughs], and maybe we are. You know, maybe you have beautiful art that you create. Getting feedback on what you've created allows you to see it from a different perspective. We are all limited by the scope of what we can see with our own perspective. And seeing something from somebody else's perspective can help open up new views into something that may be wonderful.
You may have built something that's truly good. And when an eager collaborator is coming to work with you, they're not interested in tearing you down. They're interested in helping you make it better and in helping you achieve your vision. And they can say, "You know, I think that you were trying to do this. I don't think you quite got there," and help you better accomplish your purpose that you had initially. That framing, for me, for reviews is extremely helpful.
Now, we're going to be talking about code reviews, you know, that's perhaps more technical, maybe [laughs], than some of the stuff that comes from other creative endeavors, but it has many of the same characteristics. We're trying to solve a problem. And getting that second pair of eyes can help us solve the problem better in many cases. Also, the collaborator, in general, and if you're not doing this, you're doing it wrong [laughs], is trying to help you accomplish your purpose, not to tear you down, but to just help you do it better. And that second pair of eyes, that sounding board, is extremely helpful. So, that's the framing I wanted to start with. Does anybody have any thoughts about that?
JUSTIN: It's interesting because you see a lot of people that do great work, like, artists or whatever, all by themselves. And then they get together with somebody, and they do a collaboration. And some of the best music comes out of collaborations. And it's like, a couple of my favorite songs have, like, a lead artist, and then they have some lesser-known artist that is coming in and doing a riff or something like that. But you can tell that they meshed together their ideas and just made something so much better.
If you think about music these days, it's not just, you know, people don't just, like, or most of the time, they don't just slam something down, and it's done. It is a work of months and months and months, and they're tweaking every little thing. If you get somebody else in on that, it makes that tweaking move much faster because you have somebody else that you're brainstorming with to solve the problem. And I personally like pair programming a lot because we are talking about code reviews in this. But the pair programming part it's like an ongoing code review as you go in and solve the issue that you're doing together. We can talk about pair programming later, but those are my thoughts on that.
EDDY: As a band, you know, the more people that collaborate, it's an interesting dynamic because I associate that the more people you have in the pot, the slower it moves because you have more opinions. And so, because music is pretty subjective, the productivity does go down slightly, you know, the more people have input, kind of similar to the code, right? Like, if you're a single code developer [inaudible 05:52] to reviewers, the amount of feedback you get is much lower than if you have five or six people looking into these code changes, right?
JUSTIN: Yeah, I think there's definitely a point of diminishing returns as you add more people to it. That's a pretty steep curve [laughs], and so...
MIKE: You have bands with a couple of people. You have bands with three people. Four are pretty common. Very rarely do you have, like, 20. You've got like, what, Lynyrd Skynyrd, I don't know [laughter]. There's some crazy bands that have huge numbers of people, sort of a collective. But usually, you know, you just have a few because there's the limits, right? There's limits to what you can collaborate together on successfully.
JUSTIN: Though I do love big band music. Give me a good, big band. But, at that point, it's like you have a conductor that is leading the show. Maybe the comparison breaks down a little bit there, but anyway.
MIKE: Well, it's interesting. I mean, I was thinking similar. If you have a jazz combo, you know, a small jazz group, then there's a lot of room for creativity, and they can work together. When you get to the big band, you have a composer, right [laughs]? You have somebody who writes out the music, writes out the lead sheets, and gives it to people. It just changes the way you work.
You know, kind of going back to this idea that there's value in getting the collaborators–when we write code, it's very much common practice in the industry to have a code review or maybe pair programming, as Justin said, which is kind of a live code review, or you're collaborating in real-time, which is another way of providing value, right [chuckles]? Real-time collaboration provides many of the same benefits. It's normal in the industry to have a code review.
There are a few different purposes that are sometimes proposed for that code review, and some of them, I think, have more value than others. For example, if you're sending some code over to code review and the primary feedback you get is stylistic, then I think that was a wasted opportunity [chuckles]. To say that again, I think that is a wasted opportunity.
There are automated tools in probably whatever language you're using; if there aren't, it would surprise me [chuckles]. There are automated tools in the language that you are using that can enforce style guidelines. And you can download the style guidelines from the internet, use what some big company uses, and have a standard on your team. And you can make some tweaks to that. But the automated tool can enforce, you know, whether you use tabs versus spaces. It can enforce what your quotes look like. If you're having arguments about that, then I think you've got a more fundamental problem.
The place that we should start when we're working as a team is choosing the conventions that we're going to use, and that should just be the baseline, you know, that's the foundation. And if those come up in reviews, that means that somebody has either ignored those rules, and that's a different issue [chuckles], or you really haven't gotten consensus on your convention, and you need to go back to revisit the convention. So, I'll say this one more time, you know, I don't think that that is a valuable use of a code review.
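The kind of baseline Mike describes usually lives in checked-in tool configuration rather than in review comments. As one hedged sketch (the episode doesn't name a specific tool), an EditorConfig file can settle tabs-versus-spaces and line endings, while quote style would come from a language-specific linter such as RuboCop or ESLint:

```ini
# .editorconfig — a minimal sketch of team-wide conventions.
# Editors and CI tools that support EditorConfig enforce these
# automatically, so they never need to come up in a code review.
root = true

[*]
indent_style = space          # settles the tabs-vs-spaces argument
indent_size = 2
end_of_line = lf              # avoids cross-OS line-ending churn
trim_trailing_whitespace = true
insert_final_newline = true
```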
To go to the music example, if you're a guitarist, you probably don't want somebody to say, "Well, no, I think you should pull out a trumpet." "What? [laughs] I don't know what you're talking about. We're a band that does not have trumpets." That's not what you're there for. But if somebody says, "Hey, you know, I think you could change the rhythm on your riff," that's a different kind of comment. It's commenting about the structure of what you've created in a way that is meaningful. The style should just be a given going in. Anybody have thoughts on that particular idea?
KYLE: I kind of agree, even the system's code that we have, you know, using Terraform, and I know that a lot of other languages have similar formatting tools, right? We've kind of just agreed, you know, we're going to use the Terraform format command. That is a quote, unquote "reviewer" per se in the steps in our PR, right? It's going to go through, and it will decline and say that we've failed if we haven't remembered to run that.
But then that keeps us all in line, especially where we're kind of cross-ecosystem, where some of us are running Mac, some of us are running Windows. And we avoid some of that, you know, line ending, white space type issue, so that keeps us in line. And then the actual reviewers themselves are looking at the functionality of what we're actually creating the PR for rather than how it looks.
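Kyle's automated "reviewer" step might look something like the following CI job. This is a sketch in GitHub Actions syntax; the team's actual pipeline isn't specified in the episode. `terraform fmt -check` exits non-zero when any file isn't formatted, which fails the PR:

```yaml
# Hypothetical PR check: fail the build if `terraform fmt` wasn't run.
name: terraform-fmt
on: pull_request
jobs:
  fmt-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Verify formatting
        run: terraform fmt -check -recursive
```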
MIKE: Yeah, yeah. Absolutely. And you're not wasting your time [laughs].
KYLE: No.
MIKE: You're not wasting your time haggling over details that have been decided. If it can be automated, it probably should be. We're developers. We tell the computers what to do for us [laughs]. And they can take away some of our work.
EDDY: Would you agree, Mike, that getting an approval without any comments or "Looks good to me" would be just as bad as a wasted opportunity for stylistic comments?
MIKE: That's a great question, a loaded question [laughter] because I think we've all probably thrown an LGTM or something similar onto a review. I think it opens up a broader question, actually, and something I wanted to talk a little bit about. Have any of you read the book, The Phoenix Project? It is a classic in the industry and helped give birth to the DevOps movement. So, well, I'm going to strongly recommend it. I've read it multiple times. It consolidates a lot of research into a novel so that you get a really [laughter] enjoyable novel that's annotated, right? It gives you references as to where a lot of this material came from.
But one of the key themes of that book is that batch size matters. Let me explain that. There's a surprising amount of parallel between software development and physical goods manufacturing, whether it be cars or, you know, I'll say cars [laughs], the manufacture of automobiles. And a lot of times, people say, "Well, no, I'm an artist. I'm doing creative work. There's really nothing assembly line about it because, you know, I have to invent stuff all the time."
And certainly, there are differences, but in aggregate, you take a step far back enough, you're looking at people who are building things. At a high level, that is largely predictable. You can measure it, and you can streamline it, and you can do things to improve the process. And one of the things that is really bad for physical world manufacturing is large batch size.
For example, if you've got, like, a powder coating station, and you have to have a thousand parts to go through that powder coating station, because otherwise you're wasting the resources of a big booth, then you have to wait for all of those thousand parts to be made every time you pass it through. So, you have an enforced stop in your process in order to wait for the material to pile up before you send it through. So, if you were to stand up on a catwalk up above the floor, you'd see this big pile of stuff in front of the powder coating station. You could see just at a glance that, wow, there is a bottleneck in my process right there. We can't let anything through until it's all ready.
And one thing that you can do to improve that on the assembly line is to rework your machinery so that you could have a continuous flow. So, if you could build a powder coating system that could take one part at a time, you know, you'd be required to rebuild the machinery, but it would get rid of that dead spot. It'd get rid of that bottleneck in your process and reveal where the next bottleneck was. But it would allow you to have continuous flow.
So, if you think about cars, if you have a bunch of auto bodies stacked up, well, it's hard to stack them up, right? How do you stack a bunch of auto bodies? It uses up space. It uses up mental energy. There's a lot of labor that goes into a large batch. And if you can reduce that batch size to a very small one, then a lot of that labor goes away. Sometimes, we think, well, I want to make sure my reviewer doesn't waste their time. So, I'm going to make sure I put a lot of code together, and I'm going to give it to them all at once.
And I think that that way of thinking is usually in error because it ignores those other costs: the cognitive costs of trying to understand a great, big change, the time costs of waiting for weeks, maybe, to have this change go through, where you could have had incremental changes, where you're putting up pieces of code during all that time and getting value out of it.
This is kind of a long answer to your question, Eddy. But this batch size, I think, is critical because if you rework your process so that small changes are going through regularly, it may be that your code changes are so obvious that somebody's going to look at them and say, "Oh yeah, well, of course, that works [laughter]. I really don't have anything to say because there's nothing left to talk about."
But that doesn't mean the conversation didn't happen because it took a lot of refinement ahead of time, earlier up that assembly line, right? Earlier in the process when you were planning out the work you were going to do when you broke it out into stories. That means that you already had those conversations as to what should be happening.
So, the conversation that might have come up in a code review did come up, whether you're pairing or whether you were refining together as a team earlier on. And so, when the reviewer finally got to it, they already knew what was going to be there. And when they looked at it, they say, "Oh, this is exactly what I expected." And in that circumstance, I think that that's actually a win. But the communication happened. If there was no communication all the way along, I would say that there's definitely a problem.
And, if you have a large batch, you've spent two weeks working on this PR, a pull request, you know, for those who don't use the lingo. You're giving this to a reviewer, and they look at it for five minutes and say, "Oh yeah, that's good." I think that's a problem [chuckles]. Because you've done a big body of work there, and it needs a lot of time. Really, you've probably asked for a day's worth of that person's time to really fully understand what's going on.
JUSTIN: It's interesting, when people think of coding, or when you're a coder yourself, you're thinking, oh, my job is coding, and I'm going to put out a bunch of code. That's how I know I'm doing a good job: I put in a thousand lines of code today or something like that. Whereas, in reality, coding is certainly a significant part of your job, but even more important is the planning about what you're doing and making sure that the whole process is given equal importance.
You're not just throwing your code over to be installed into production. You work with your team in planning what you're going to be working on. You work with your team on, you know, you do the actual work and then you do the code reviews, and you do the testing. Well, the testing should happen throughout and everything. And then there's the monitoring. And it's truly the software development life cycle, as they say. It's something that encompasses every part.
If you look at a typical software development life cycle graph, each of the parts is actually pretty much the same size. It kind of emphasizes that you can't just drop something. Or if you pay too much attention to a certain part, other parts may lack. But, I don't know; that's a little bit of a tangent with regards to code review, but I thought it was interesting.
KYLE: That's kind of a tangent, but, I mean, to some extent, you maybe think with a PR, in that life cycle, as you're saying, so if I'm a dev and I throw up a PR, I have the experience from developing and whatever my team might have. But I also have the option of throwing somebody from QA, or security, or DevOps onto that, you know, that are down in the life cycle or somewhere in the life cycle as well. They can review it. They can throw their experience in there that I may not have thought about, you know, get those details going, kind of like you're saying, the monitoring and all that jazz before it even hits prod.
EDDY: And, Justin, I have a question for you. So, you're in security, right?
JUSTIN: Right.
EDDY: Are you involved in any of the PRs that are going up in production at all or any?
JUSTIN: Occasionally. And it hasn't happened that often yet. It happens probably about once every other week, maybe once every three weeks, where somebody asks us to take a look at something. You know, usually, it leads to the discussion about what's your intent here? You know, kind of walk me through your solution. A lot of it is, you know, people just need reassurance that what they did is the right way.
But occasionally, we come back and say, "Hey, you know, you really should do some more validation, you know, just consider all your inputs as untrusted," or, you know, something to that effect where we talk through a particular solution. Oftentimes, it's talking about a third party and, you know, what we're sending them, or what they're sending us, and how do we trust it, or that sort of thing.
Long story short, yeah, every couple of weeks, I get pulled in to take a look at something, and it is really cool. I love doing it because, you know, having that discussion and getting into the code is one of the funnest parts of my application security job. One of the least fun parts is the policy writing, which is what I'm doing today [laughs]. But yeah, if you guys ever want me to look at something, please feel free to interrupt me. Whatever I'm doing, I'm going to drop it and come help you [laughter].
MIKE: Can I ask a follow-up question to that one?
JUSTIN: Sure.
MIKE: Would you rather talk to somebody once the code is already written or before they write the code?
JUSTIN: That's a good question. Probably both, but there are certain questions that need to be answered before the code is written. We actually are working with Anna and the product team to make sure those questions are asked and answered at the epic level, things like, you know, are there changes in authentication or authorization? Are there changes with PII data? Are there changes with third-party interaction? Those are the main three questions that we want to ask on each epic.
A lot of times it's, like, not applicable or something like that because, you know, they aren't doing that particular work or that sort of thing. But the times when we get brought in, when they put yes on those questions, we go in and have a conversation about exactly what's changed, and then, you know, just to make sure that those delicate subjects are treated with the proper respect that they need.
MIKE: Let me take that a little further. Would you like a software project to be written, and then you come in and say, "Okay, now it's time to add security to it"?
JUSTIN: Oh. Well, we've had that happen all the time, unfortunately [laughs].
MIKE: How well does that work?
JUSTIN: It does not work well because it messes up people's timelines. You know, oftentimes, a project is headed towards production, and then, you know, expectations are set. And then somebody asks the question, "Hey, we should have security look at this." And then, you know, we look at it, and we raise some questions.
A good example is we had a question about something the mobile team was implementing, and we had to go back and say, "Hey, what do you guys think about this?" You know, a conversation went in about a particular API they had and how they were accessing it and how it was designed. We all agreed that, hey, changes needed to be made. And so, they had to go back and add another, I think, it was another couple of days at least of development work to fix the concerns.
We really don't want to go in and dictate stuff because the key there is helping people understand why we have concerns. And once we've had that conversation, oftentimes, people are like, yeah, that makes a lot of sense. And that's what we're shooting for is like, hey, we want you to understand why we have concerns. And what results from that is sometimes we change, but also, sometimes they understand why we have concerns, and then we can go in and explain the risks and have a discussion about timelines and things like that.
MIKE: And obviously, I asked kind of a pointed question [laughter] that reveals that it is costly to make changes to a design late in the process.
JUSTIN: Yes.
MIKE: And security isn't just some dust you sprinkle on a project [laughter]. It's an integral part of the way a structure is built, you know, whether that's in physical world or in software [laughs]. You can't say, "Oh, I'm going to build a building with no thoughts as to structural integrity [laughs] initially and then add that later." It's not something you can add later. It doesn't work that way. The security is built in from the beginning. And it's not just security, right? We bring up security as an example.
If the review is the first time that important conversations are happening, I think it's a real problem. Building a little bit more on Eddy's question, you know, "It looks good to me," I think the ideal software development process, software development lifecycle, has communication early so that the review is there to have some dialogue, but it is a continuation of a conversation that was begun much earlier in the process.
JUSTIN: I think we're moving towards a, you know, what should you be looking for in a code review? And we have a couple of things there: what should you do to prepare your code for code review, and, as a code reviewer, what should you be doing? If you guys haven't seen it yet, there is a really good example that Google gives. They have a paper that they wrote, a white paper, that has those two sections. It's like, what should I be doing to have a good code review? And then the other section is, what should I do as a good code reviewer?
And if you haven't had the chance to look at that, go through it because I liked how it enumerated everything. It let me know what I was missing from both ends. You know, it included things like, hey, is this performant? Performance is something that goes along with security; sometimes, you don't think about it. But performance is like, you know, hey, it works for me in my development environment. And then somebody else who's doing a code review should be aware of that. And, hopefully, their experience comes in and says, "Hey, you know, accessing this table this many times is going to lead to some issues," or something like that, so yeah.
MIKE: I fixed a performance issue during the winter, you know, Christmas time break. So [laughs], it's fresh on my mind and very much so. It's like security. You can't just bolt it on at the end. It's like, hey, we'll add performance now [laughs].
EDDY: Well, and, you know, I actually looked into that, too, when I came back, and it was really interesting because you could only replicate that in higher environments with a higher traffic count. And that's not as obvious, you know, when you're testing in a local environment, when you're truly in control of the data, right?
MIKE: Yeah, leaning on expertise and talking about that early is really helpful.
KYLE: I'm sitting here listening to this, too, and thinking through, and we're kind of saying it's hard or almost impossible to add it later, which, for both performance and security to the original product, yes. But I think the part that hasn't really been brought up is, so I'm thinking, performance. Yeah, if your app isn't performant, can you fix that? Sometimes you can't. How do you do that? You throw hardware at it. You throw money at it [laughter]. You get it fixed that way. [laughter] Same as security, right? You have to put another layer in front of it in order to fix your application.
So yeah, I mean, I'm agreeing with you guys, but I'm like, it becomes exponentially more costly because how do you end up fixing it? If you're fixing it late in the game, you're throwing extra layers at it, which costs more and more money.
MIKE: There's a lot of these things that are pointing back to having these conversations early. And coming back to reviews, reviews are a conversation. It's intended as a venue for communication. And we may enforce it and say you have to do that [chuckles], but only because we've seen that it's so valuable. You know, somebody's making some work of art, and they want to bring it to somebody. They're trying to open a dialogue. They're trying to have a conversation. And that's what code reviews can really provide us is that conversation so that people are thinking and interacting together.
If there's any antagonism or ego involved in there, it's very destructive to that process. It's just an opportunity to collaborate. And when embraced as such, I think it can provide a lot of value. But also, it opens up to this bigger idea that communication and bringing communication cycles earlier in the process is vital to quality software.
There's another thing I've been thinking a lot about lately. Some years back, I don't even know how many years back, 20 probably, there was this document, the Agile Manifesto that was written. Some developers got together and said, "You know, some things in process tend to be bad, and some things tend to be good." And they wrote a document that said, "Well, this is better than this." And one of the better things, the things that's most important, is having those feedback loops where you have a quick cycle of information. You build something; you very quickly get it to somebody who can look at it and give you feedback. And then you make a little change then you quickly bring it back.
There's an alternative way, which is you spend, like, a year writing documentation, and then you hand it to somebody and say, "Okay, go build it" [chuckles]. And that doesn't tend to work very well. Maybe writing policies or [laughs] [inaudible 26:14] with documentation is hard. It doesn't fit very well with the way humans work. We work a lot better when we have something more tangible, something we can look at, something we can talk about.
And encouraging that communication cycle early is really what code reviews are an extension of. It's an opportunity to have built-in feedback into the system so that you can get some feedback there. But it's not the only one, and it's important to not lean on that one piece. Sometimes, people hear, "Oh yeah, I want to be agile." And they think, I got to go get a book that says how somebody else did it and set up a process that follows that, you know, the diagrams they've got in the book.
That misses the whole point of what the document was about. Maybe not all of the point, but a lot of it, which is that some aspects of process are better than others. And central to the process, a good process, is recognition of human frailties and human strengths. And those work best under a tight feedback loop with a lot of communication.
JUSTIN: And so, something that comes up often is, like, I've got a new thing, and it's hard to break down, and there's a lot of features on the new thing. How do I do incremental releases on the new thing and make it smaller? But I don't want to change the old thing until the new thing is all done or the new thing is ready to be released.
MIKE: I get it. I've got some answers I can throw at that [laughs].
JUSTIN: Sure.
MIKE: First of all, to your point about the old thing, I don't want to get rid of the old thing until the new thing is all ready. There's the temptation to think, okay, that means I need to go and build the whole thing, and then I'll have it tested. That's a really bad idea [laughs]. That's a really bad idea. To use a famous piece of hardware like the iPhone, I've never worked at Apple, but I extremely strongly doubt that they put together a completed iPhone, handed it to Steve Jobs, and said, "Okay, this is ready. Go present on it."
What they would have done is they would have had different departments building different components within that device, collaborating together, you know, figuring out the interfaces between them, and iteratively building that thing. Even though it wasn't released, they were internally testing. And, again, I'm guessing, but just knowing how software works, it's the only way it's possible. They would have been internally testing it all along in pieces and in gradually larger chains of components until the whole thing was put together.
So, by the time the whole thing was put together, it had been heavily tested in end-to-end tests all along. Likewise, when we're building new things in software...you know, the iPhone is just one example; we think of it because it's physical, so it's a little more tangible. We should never think, well, this isn't going to be released, so I'm not going to bother testing it or getting it reviewed until it's all done. And this is the whole point of that communication loop.
There's no reason you can't have a communication loop with your stakeholders, you know, people who've asked for it, whether that be product people in your company and, hopefully, even some beta testers out in the wild, some users who could be working with you and testing it. So, even though it's not going up to general release, you follow exactly the same process as you would otherwise. And then, when you're ready, you flip a flag and maybe move people over. At that point, you've gone through the process, just that we've talked about. You just happened to have done it with a smaller group.
JUSTIN: And you can achieve that smaller group in a variety of ways, like keep it internal. Keep it behind a feature flag. You know, have alpha testers, have beta testers. All of these things can help you. You don't have to have general release, but you still gain the advantages of iterative releasing.
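Justin's point about gating the new thing for a smaller group can be sketched in a few lines of code. This is a minimal, hypothetical feature-flag sketch (the class, emails, and path names are illustrative, not from the episode): the new code path is reachable only when a flag is on or the user is on an alpha allow list, so the old path stays the default until cutover.

```ruby
# Hypothetical sketch of gating a new code path behind a feature flag
# plus an alpha-tester allow list. All names are illustrative.
class CheckoutFlow
  ALPHA_TESTERS = ["ada@example.com", "lin@example.com"].freeze

  def initialize(flag_enabled: false)
    @flag_enabled = flag_enabled
  end

  # Returns which implementation this user should see.
  def path_for(user_email)
    if @flag_enabled || ALPHA_TESTERS.include?(user_email)
      :new_checkout    # the new thing, exercised by a small group
    else
      :legacy_checkout # the old thing, untouched until cutover
    end
  end
end
```

The point of the sketch is that "general release" is just flipping `flag_enabled` to true; everything before that is the same build-test-review loop, run against a smaller audience.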
MIKE: There's a longstanding practice in a lot of companies where they have feature releases, companies that are not yet doing continuous delivery, and sometimes that's imposed on you. If you've got, like, a mobile app, if it's going out through app stores, continuous delivery is hard [laughter] because you've got to get every change approved. So, there's an external constraint. The batch size is forced to be larger. And there could be a temptation in that environment to say, "Okay, well, we'll build everything for the next release, and then we'll get it tested."
But best practice involves having a feature branch and be testing that. Treat that like your finished product and get your testing, get your feedback on that throughout the process so that when you finally get to that endpoint and ready for release, it's just a tag [laughs] almost, right? You're at a point where you know that it's already good.
EDDY: We've talked a lot about, like, why do code reviews, but I'm still kind of curious on how you would do a code review. And one thing that we actively practice, at least in the team that I'm on, is keeping pull requests concise and below a maximum number of changed files. And I think we've determined that it's easier to be productive and find bugs if the code changes are small. I like that idea. I've seen other teams that do larger file changes, and, to me, it's sort of like, how the heck are you able to give a good review when you have 40 or 50 file changes in a single [inaudible 31:15], right?
MIKE: I would suggest you probably can't.
JUSTIN: Sometimes, you just have to hit approve. Is that what you're saying?
[laughter]
MIKE: At some point, you're going to have to press that button [laughter]. The clock is ticking. That's why you don't put yourself in that situation in the first place.
I think, you know, it's something Justin is saying. Preparing for reviewers, one thing you can do is not give them a huge change. Another thing I like to do is put comments, not necessarily in the code, but on the pull request itself, you know, so kind of meta information. I'll go and I'll leave comments on my own code, you know, on my own pull request, and say, "Well, this is why I did this here. Maybe give this area some scrutiny. This is where some, you know, sensitive changes are." Or "This test was added," you know, whatever it is. I feel like that's really valuable.
JUSTIN: Should those be in the code review, or should those be actual comments in the code?
MIKE: I phrased that specifically because I think, I mean, arguably, they could be comments in the code, but some things I don't think are necessarily appropriate as comments in the code, that is, they don't make sense in the code. If I came back a year later, that comment wouldn't make any sense to me because it's talking about something that's not applicable in that context.
But in the context of that change, it's very important, and that's where I think the value in having that kind of meta commenting, where, you know, you're commenting on the review, you know, on the change, the request, rather than on the code itself means that there's no artifact that's going into the permanent code from your changes. But there is, you know, that artifact that somebody can look at as they're reviewing to get some additional context. That's one thing that I think is helpful.
EDDY: Typically, when we add comments to code, it's because it's supposed to make it easier to understand. If you're adding comments to make it easier, does that imply that the logic for that code that you're making is difficult? And if that's the case, maybe we shouldn't be commenting, but rather, we should be refactoring it or breaking it down.
MIKE: I would agree with that. I don't know that I'd go as far as to say that comments are a smell, but I maybe come close. Code that requires comments is complex, and anything complex is hard to work with.
KYLE: I actually just started doing that. I started putting my own comments on my PRs, like, mostly on stuff that I'm not 100% sure of, like, stylistic and, like, team convention on how we like to do that. Like, I was just writing a validation for just a model the other day. And there were two options I could have gone with, a custom validator, or they have, like, built-in syntax you can do just, like, right at the top, whatever. And I was like, which would we prefer? What's the team convention on this? And I thought it was helpful. And also, explaining my thought process on how I thought this should be done. Is there a better way? That kind of stuff.
MIKE: And did you get some feedback on it?
KYLE: I did. Yeah. Very helpful.
MIKE: Exactly. You opened up a dialogue. You held up that olive branch and said, "Please." [laughs]
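Kyle's two options can be sketched in plain Ruby. This is a hypothetical model (the class, field, and messages are invented for illustration); in Rails/ActiveRecord the equivalents would be roughly a one-line `validates` declaration at the top of the model versus a `validate :some_custom_method` pointing at your own method.

```ruby
# Plain-Ruby sketch of the two validation styles Kyle describes.
# Class, field, and messages are hypothetical.
class Product
  attr_reader :price, :errors

  def initialize(price)
    @price = price
    @errors = []
  end

  # Option 1: the "built-in syntax" style -- one terse declarative
  # check, limited to what the framework ships with.
  def valid_declarative?
    price.is_a?(Numeric) && price.positive?
  end

  # Option 2: a custom validator -- more code, but room for
  # domain-specific messages and multi-field logic.
  def valid_custom?
    @errors = []
    @errors << "price must be a number" unless price.is_a?(Numeric)
    @errors << "price must be positive" if price.is_a?(Numeric) && !price.positive?
    @errors.empty?
  end
end
```

Either works; the team-convention question Kyle raised in his PR comment is exactly the kind of judgment call a reviewer can settle in one reply.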
JUSTIN: So, one of the things that I've found really helpful is you are your own first code reviewer. And I think you were kind of alluding to that when you're writing comments in your own code review. When you create your code review, you need to go through and review it yourself. Look at all the file changes and everything. And I found that for my code, I catch all sorts of stuff [laughs].
So, it turns out that that code review is really helpful for me to get it ready before I send it out to other people. So, don't be afraid to create a code review and label it as draft or something like that. And then, you'll be making more changes to it, but it really is helpful to see it in that code review mindset. It's almost as if you're wearing a different hat. You know, getting that different perspective just from yourself is really helpful.
KYLE: Yeah, like, I treat it like almost as if you're, like, proofreading an essay. Like, when you're first writing it, it's like, it might work. It might still be correct. But is there a better way to go? You know, you read back through your sentence, and you're like, oh, I'm going to put this verb at the end of the sentence to, you know, add additional emphasis to it, or anything like that. I think it's a good mindset to stay in.
EDDY: There have been times where it's been easier to express my concern with a pull request by just hopping on a call with that developer and being like, "Hey, instead of going back and forth, you know, on this pull request [laughs] that can span 20 or 30 threads long, you know, it's easier to hop on a call and elaborate what those changes are [inaudible 35:54]."
The problem with that, and I don't know if you all agree, is if I have that question, probably someone else who stumbles upon the code will have the same question. So, at what point do you move the conversation audibly versus having it traceable, you know, in a pull request so that it becomes easier for someone else? Because, at some point, it becomes documented.
MIKE: You know, I will do the same thing we've been talking about, and I'll leave a comment on the pull request and say, "Okay, we had this conversation." Or "Can we have this conversation?" So, another reviewer looks in and sees, oh, there's another conversation here that I don't know about. I can go ask about that. And I like to call that out for that very reason. If they don't know about it, then they can't take advantage of it.
JUSTIN: Having multiple different ways of communication is just something that we're used to. Personally, like, I'll leave small comments on PRs when I know that people are working on other things. And they'll get around to it, and then they could just leave some insight. Obviously, like, if it's a very long, heavy, big concept, then taking it onto a call or, like, a Slack chat, whatever, definitely would make it easier, for sure. So, I think it depends on the situation.
MIKE: The idea we've been talking about of communication being vital suggests that the channel is not as important as the conversation. But making people aware...it sounds like part of your question was saying, well, there's no artifact to this conversation, so nobody knows it happened. That signaling, putting some sort of signal, that's more communication. You're saying a conversation happened, and things we can do to signal to others that something happened, I think, are really valuable. It's being proactive, maybe not quite noisy, but providing lots of hooks for people to grab onto to know what's going on. It enables that communication to happen more effectively.
JUSTIN: So, that kind of leads into a question of what kind of comments should you be putting in there? There's different levels of comments. There's, like, a nit level of comment. Like, this is a minor thing, you know, consider putting this on one line or do this on, you know, maybe use a slightly different function like a math function or something like that. You have optional stuff. Clearly label it optional if something is optional. And then there's, like, the ones that are like, "Hey, I don't think this will work. Let's have a conversation offline," that sort of thing. What are the different levels that you guys put on there?
MIKE: I think you left out another one that's actually as important, if not more important. Like, "Hey, this is great."
JUSTIN: Yes. Yes, I did [laughter].
MIKE: That sets an atmosphere. It sets a tone. You know, going back to music, you know, I like that riff. What if we change the rhythm? It lets people know that you understand the algorithms. You understand the big picture, and you really like what's going on. So, they have that framework, you know, that scaffolding to hold on to. It doesn't sound like I'm coming here to say, "Your code sucks." That's a miserable experience for everybody. So, I would add that to your list, although I think your list is valuable.
You know, there's different degrees, and I think that it'd be nice if there was, like, color coding or something so you could say. But, you know, there's verbal cues you can give like, "Oh, this is not a big deal. This is a style thing. This is important. Please let's talk about it. But also, you know, I like what you did here. This line is beautiful. I really like how concisely you wrote this."
I think those comments...and you might think, well, I don't need to put those. That doesn't change the code. It isn't going to change anything. Well, it is changing something [chuckles]. It's changing the perspective of the person who wrote the code so they understand where you're coming from.
EDDY: I don't even shy away from leaving a comment being like, "Can you explain what this does? This is pretty ambiguous. Can you elaborate what your intention is?" Because, at the end of the day, the one that's stamping the approval is me. So, like, if I feel unconfident or unsure about that change, I don't shy away from that. And I typically tend to be, like, the straggler.
JUSTIN: So, what do you think about, you know, suppose you do have, like, a comment, like, you don't think this will work. Have you considered, like, do you add additional stuff, like, have you considered X, or have you considered Y?
MIKE: And code examples are really nice in that context, where X has some actual code written out, you know, maybe it's pseudocode, but, you know, it lays out an idea. It's a whole lot easier to talk about something that is concrete.
EDDY: Okay, can I just go on record saying, hey, I really love it when someone says, "Hey, I think it would be better this way," and they provide a code block. And like, "This is how I think this reads better." Oh my God, like, how helpful.
MIKE: [laughs]
EDDY: Because then not only do you understand what they're trying to get the point across, but you can see an example of what they meant. I love that. And it takes, what, like, an extra two minutes or so to pseudocode, like Mike was saying.
JUSTIN: I kind of want to go back to the format of code reviews. I mean, the ideal situation is that you're a developer, and you want to go sit down with somebody else, and you are sitting there with them. And they review your code face-to-face. In the ideal world, that'd be awesome, and you can have a conversation face to face. But we can't often do that because of, you know, time zones or, you know, we all have different things going on that we're working on.
And so, code reviews are there to make that experience as close as possible to that experience. And whatever you can do as a code reviewer to make that experience happen, I think, is a good thing. And so, being very clear and being verbose, maybe not too verbose, but being very cognizant of how the other person looks and feels, and what their situation is, and things like that. All that comes into play while you're doing this code review, and they're not sitting there right next to you.
EDDY: What's everyone's opinion on paragraphs being left as code reviews throughout the whole code?
JUSTIN: Wait, a whole paragraph? Like -- [laughs]
EDDY: It happens. Like, what's your opinion? Because you have, like, ten file changes, and, like, on each file, you have, like, a paragraph that's being written on, like, the gravity of that change [inaudible 42:06]. It doesn't happen often, but it does happen. Like, at that point, do you find that disruptive, or do you find that a bit overwhelming? Like, I'm kind of curious.
JUSTIN: I don't know. I've never really had that much [laughs]. My initial thought would be like, oh, that's a lot. But I don't know, have other people had that experience?
MIKE: I'm thinking of one reviewer who tends to leave long comments like that.
EDDY: [laughs]
MIKE: And my default answer is going to be, wow, you cared enough to give me all this information. Thank you. It might take a little bit of time, but that means that they respected this interaction enough to give it that much attention. Somebody doesn't do that unless they care.
KYLE: At times, I'll run into situations where I want to leave a paragraph.
MIKE: [laughs]
KYLE: And in those cases, I don't want to make the individual feel uncomfortable because I feel like when you do that, it's a bit of a, like, hey, change your world, change everything you're doing type of feeling when somebody comes at you like that. So, I prefer to take those offline and, like, do a direct message with them, kind of work through it a bit, and then come back with a smaller comment. That's my approach. I have had people leave those larger paragraphs, and I just know I personally don't feel great about them most of the time. So, yeah, that's what I've done.
MIKE: We've been talking a lot about keeping your audience in mind. It matters. Yeah, I think having a mental model of who you're working with and being respectful of them can make the process go a lot smoother.
KYLE: I do feel like the quicker channels like Slack or Teams or whatever you do, something with a communication turnaround, it's a little bit easier to avoid miscommunications as well than back-and-forth communication on a PR. So, that's what I mean when taking it offline, too.
MIKE: It's a lot easier to have a conversation, a verbal conversation, than to have somebody write ten paragraphs [laughter].
EDDY: And maybe you can take it a step further and do it in person.
JUSTIN: Yeah. And actually, one comment and then one kind of moving to the next level. Yeah, like I said earlier, it's always, like, whatever you can do to convey your intents in a clear manner. Sometimes that is just a single line on the comments, or sometimes it is, like, you having to sit down because it might take you a little while longer to get your intent across. And then, like you said, Mike, it's really important that you understand your audience.
And then, the next question I have, if we were to take this kind of where we're leading, is how do we handle pushback in code reviews? Suppose somebody comes in and says, "Hey, you need to do it this way [laughs]. This is the way." So, how do you handle that sort of situation?
EDDY: I've gotten pushback so far as people will request code changes without even, like, approaching me directly regarding their question. [inaudible 45:08] just sits there [chuckles]. You're just like, ah, like, I could have explained it had you just messaged me directly versus, like, abrupt request code changes. But I'm typically pretty open.
I was told pretty early on, actually, by a lot of people who mentored me saying, "Hey, don't be married to your code." And I live by that. I really do. I'm super early in my career, so if someone suggests something, I'm like, it's for a reason. Like, they've probably been through the same pain point, and they're trying to save you the heartache, you know, having to deal with the same thing they did. So, typically, I'm pretty open, and I'm pretty flexible. Very seldom do I push back and be like, "But this works, and it does the same thing." Very rare. Nine times out of 10, I typically say, "Oh yeah, that does look better. That does read nicer."
MIKE: Sometimes there are team conventions that you can point to, and if, as a reviewer, I say, "Well, this is our team convention," and there's justification there. You can point to an example and say, "Well, this is why we should do it." That sounds different than somebody coming and saying, "Well, no, I hate your code. You should do it my way." One of them is conscientious. It refers back to the standards that you've developed as a team. It ties back into the consensus. And the other one is very egocentric. Like, well, this is just all about me and my perspective.
And I think that both as a reviewer and as a reviewee, keeping those ideas in mind matters. The code is a group product. And if we can build the team, then we should try to do so. And it's not really about us personally, and if it is, then you're injecting yourself, right? Then that is actually not valuable for the company probably. You've brought something that's not valuable to the company, and you're just bringing your own baggage in. You know, and we all have our own things that we're working through, but trying to avoid that where possible, both as a reviewer and a reviewee, I think, smooths out the process, not just in code reviews but in life [laughs].
EDDY: Do you ever find yourself approving code before it's ready, trusting the person that they're going to go in and actually update what you recommended? Let me take that a step further. Let's say you say, "Oh, this looks good. I recommend these changes, but are non-blockers." Like, do you find yourself approving PRs like that, or do you sit back and wait for them to make a change?
JUSTIN: You just said that it was a non-blocker, so why don't you approve it?
EDDY: All right. Because, like, I've seen, like, feedback where it's just like, oh yeah, like, I'd recommend doing it this way, but I get it, non-blocking, blah, approved. Like, do we feel good when it comes to doing approvals like that, or do we prefer to [inaudible 47:47] upon that and only approve it only after you feel really good about the change?
MIKE: There's some people who I know if I left that comment they're going to go back and make the change. I made a comment like that on a review probably a couple of months ago. You know, the coder went back and said, "Oh, you know, that was a really good idea. [laughs] That could have really simplified this." They went back and made the change. And I saw that in my email a couple of days...I was like, oh. [laughs] I didn't get a chance to get back to it, and they totally made the change. And I know that that, you know, that person would do it.
If I'm working with somebody who is just getting started and they're kind of overwhelmed [laughs] and they don't know what to do, then I'm probably not going to approve it, not because I don't approve of them as a person or, you know, of their ability generally, but just because it improves the process to give them a chance to make those changes. It depends some on the audience.
KYLE: It made me kind of think, whenever I'm doing a review, I try to keep the audience in mind in the sense that, do I know this individual? And if I don't know this individual, what is their background? There will be times when I'm recommending a change, and how much detail I will go into, sometimes I will take their level into consideration, so much so that if I'm not aware, I'll go into our directory and see what level of the chain they are, you know, like, are they a software engineer 1, 2, 3, that type of thing.
And if it's a more junior individual, I will take the time to elaborate more or reach out to them. If they're more of a senior individual, I will take more of those, I guess, shorter routes where it's just kind of a quick explanation, kind of expecting that they'll take it and do what they will with it. That's just me, but I tend to do that route.
MIKE: You know, and this recurring theme of knowing your audience maybe is a good point to kind of conclude this session. We started talking about music and art and getting feedback. They say when you're making YouTube videos, don't read the comments [laughs] because we know the internet is mean. There are people that you actually care about and trust that you want their feedback. And code review is an opportunity to get that, you know, it's an opportunity to collaborate with people that you want to collaborate with generally. You can get advice on how to make a better product.
It's part of a broader culture of communication. In isolation, it's probably not going to help you very much [laughs]. You can't stick that security on at the end. But as part of a broader culture of tight communication loops, it can expand the communication in a really healthy, productive way and result in better code.
Thank you, everybody, for participating today. Any final words?
JUSTIN: Let me put my security hat on. Check your security issues while you do a code review. All right, now I'll take it off.
[laughter]
MIKE: Thank you. [inaudible 50:36] for us next time on the Acima Development Podcast.