Aleksandr Bakharev

Pragmatic software development guide for new projects

Software Engineering, Efficiency, Productivity, Pragmatic software development · 11 min read

I remember, when I started my career in engineering, I always wondered: "Why are some people so reluctant to adopt process, code standards, in-depth quality work before releases, and so on?" Only after some time did I learn that in real life, such things are very context-specific.

For example, if you are bootstrapping a new idea, you do not really know whether it will fly, so spending time on anything other than building the MVP (minimum viable product) or a demo app could be deadly to the entire initiative.

In contrast, if you have done your product validation and know that you are going to roll it out to a general audience, you must ensure that the thing you are building is reliable and maintainable in the long term, so some extra governance won't hurt.

Looks quite binary at first glance, doesn't it? Unfortunately, it is not that simple. If you blindly roll forward like a sports car without brakes, you are going to crash sooner or later (unless you are building something with the ultimate goal of being sold ASAP, but that is another story). So I decided to formulate some ideas on how to keep your engineering simple and yet open to further enhancements.

The general idea is: be consistent and do not lie to yourself. At every stage of product development, certain things matter much more than others - even when those other things are fancier, sexier, and more fun to work on. Always ask yourself how your current activity moves the product forward, short-term and long-term. You will often find that short-term goals are equally if not more important than an imaginary long-term goal. Also, keep in mind that it is easy to go from simple to complex, but very hard to go back.




The process

In general, keep it simple as long as you can. Most of the best things in life are simple, in fact ;) When working on a new project, you typically have a small team (maybe two teams) working towards some mid-term goal (demo/MVP/pilot customer). When your headcount is small, a lot of things become redundant.

For instance, do you really need to plan work each and every week and spend hours in meetings synchronizing small atomic changes to the system you are building at the moment? Why not let people work on end-to-end workflows? Do not get me wrong, you will definitely need to sync constantly, but if engineers build end-to-end solutions, the depth of their expertise will pay off pretty soon. Moreover, it will help people develop a sense of ownership of specific, self-contained parts of the product, which will positively affect quality in the long term.

Another example is releasing. Be fast there. In the early stage of a product, failure is often much more acceptable than delayed feedback. I get it, you might have zero users at the moment, but in that case your engineering colleagues will be your first users. Technically, this means: merge your changes as fast as you can. Be reasonable, of course - the thing you merge should work - but you can keep polishing it asynchronously and start getting feedback right now. This is also extremely important for a fresh codebase, so that you can gradually develop your product's code style and figure out the patterns you want to go for. Merging 5K+ line pull requests will only bring bias and divergence to your codebase.

Another thing is SCRUM retrospectives and team culture. I get it, there are industry guidelines and such, but I would rather recommend speaking up on the spot instead of waiting for a specific time slot. For new teams and new products, culture is like a newborn which needs constant and instant care. Too many things happen simultaneously, so by the time you hit the next "planned" retrospective, your culture might already be seriously poisoned. And you can't simply revert it the way you would a buggy code change.




Quality assurance

OK, this might be harsh for somebody, and I apologize in advance. I have never fully understood engineers sending their code for QA approval. I understand how it works in well-established and highly specialized products: you have QA staff members who are domain experts and know the tricky business logic and edge cases by heart (I am thinking of insurance software, logistics software, CRM software, etc.), and there is no way a regular engineer will know all that unless they have worked on the product long enough. I also get how it works in mission-critical applications, where each change must be validated at least ten times before it hits the live system.

But I really do not get how an engineer in a regular company, working on a new project, can open a pull request and then expect it to be tested manually by another person before the merge. I see multiple problems with that:

  1. It is slow - I am pretty sure you have more engineers than QA people.
  2. It does not let your engineers develop a sense of ownership.
  3. It does not motivate engineers to actually use the product they are working on (especially the parts they never worked on).

What I would rather love to see is engineers owning certain workflows in a product and being, in fact, domain experts in those areas. I am a strong believer that software engineers should, in the common case, do the general QA work themselves. What I mean is that having dedicated QA person(s) on a team is a luxury and should be used smartly - "exploratory testing" is a good example of that. Of course, there might be certain features and workflows where you desperately need a second opinion, and that is totally OK. I am just saying that in most situations this is not the case.




Code review

This is very close to QA, but I would still put it in a separate section, so if you, dear reader, are not doing code reviews on a daily basis, feel free to skip it.

  1. First things first: we are living in the 21st century, so please, no time wasted on things like "spaces vs tabs" or whatever other stylistic nonsense. Focus on what's important, and what's important is for your codebase to be uniform and consistent (using almost whatever style). Humans look at things using pattern matching, so let's help ourselves. Just add a code auto-formatter, like "prettier" or "go fmt", make it mandatory, and be OK with that.

  2. Next, try to avoid long discussions in pull requests. If you feel that the general idea is correct and the code has no obvious bugs, leave the stylistic comments you came up with after you have actually approved the change. That way, the PR author can make the stylistic changes asynchronously and merge the code. It requires a certain level of trust and discipline, but this is what you should aim for if you want to be productive. In contrast, if you see that the code is nonsense and the person is obviously lacking some contextual information about the problem, please do not destroy your fellow engineer in the review by leaving multiple long comments - most of the time it will just look insulting. Instead, call or chat with the person, or meet in person if possible, and talk through the changes required to make their solution more correct.

  3. Last but not least, leave your bias aside. We are all unique and we all think differently - train yourself to accept other people's solutions, and be ready to accept that another solution might be better than what you are used to. Remember, consistency in the codebase is key. It could be that you did a PR review and learned a new trick that could make your code from yesterday better - maybe this is a good time to plan some refactoring (remember, ownership). That way you will learn from each other and naturally keep your codebase maintainable, such that wrong decisions will simply fail to fit in after some time has passed.




Automation

I already highlighted the importance of automation for achieving high productivity and quality in my other posts, but let's revisit it in this particular context. We have already established that you need to move fast towards your MVP, and automation can help you. Be smart here, though. Some things are much trickier to automate than others without bringing much value at the moment. For instance, end-to-end testing is obviously hard and takes a lot of time to get right. Things like full-coverage unit testing can slow down your development process in the initial product phase. So here are the things I would spend time automating from day 0:

  1. Product deployment - have a dead-simple pipeline that pushes your code to the appropriate environment (ideally with some internal users). And it is totally OK to start with CD (continuous delivery) and not with CI (continuous integration). It will promote a culture of moving fast and being responsible for failures. Of course, as you go, you will need to enhance your pipeline with tests, blue/green deployment, and all the other processes that make your code actually shippable to your first customers. Most cloud providers let you deploy with a single command, so this is a pretty trivial task to start with.

  2. Do not over-test your product in the early phase - my personal opinion is that things like unit testing will hurt a new project's productivity. Nothing is settled yet, neither your product strategy nor your tech stack, so I do not see the point in testing micro-level pieces of code. There are exceptions for low-level and domain-specific components, of course. For instance, if you are building a serverless platform, you should probably start testing your scheduler from day 0. But I see no point in writing tests for every React component in your web interface - it is just too early, and it will slow down further refactoring. The same applies to UI test automation - it is extremely important for a mature product, but when you are just starting out, there is no point asserting on every visible thing in the UI. It will change every day, and every change will trigger a round of adjustments to tests that cover some intermediate product stage. More importantly, it will be a source of frustration for some people: "Maan, I did the code, now I need to do the tests". What you should focus on instead is integration testing. By that I mean looking at your code from the perspective of the target workflows you want to enable for your customers - this is what has to be tested. More specifically, you should focus on how system components work together towards your goal. In practice this means no mocking in tests, and real data only. Yes, I know, the books will tell you that such tests are error-prone and will uncover issues in components you are not responsible for. But there is no such thing in an early product! You should really care about how other components are doing, and perhaps you will be able to point out an issue nobody was aware of before it hits your customers.

  3. As soon as you start adding tests, take them very seriously. There is no such thing as "it's just test code". You either treat tests as production-grade code, or you do not test at all. If you are not serious enough, you will inevitably end up with a mess of spaghetti code, messy test data, and flaky tests - nobody wants that.
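To make the integration-testing idea from point 2 concrete, here is a minimal Python sketch. All the names (`UserStore`, `SignupService`) are illustrative, not from a real codebase: the point is a workflow-level test that wires real components together with no mocks and asserts on the outcome the customer would see.

```python
# A workflow-level integration test: real components wired together, no mocks.
# UserStore and SignupService are hypothetical, for illustration only.

class UserStore:
    """A real (if tiny) storage component - deliberately not a mock."""
    def __init__(self):
        self._users = {}

    def save(self, email, name):
        if email in self._users:
            raise ValueError(f"duplicate user: {email}")
        self._users[email] = name

    def get(self, email):
        return self._users.get(email)


class SignupService:
    """The component under test, talking to the real store."""
    def __init__(self, store):
        self.store = store

    def signup(self, email, name):
        if "@" not in email:
            return "invalid-email"
        try:
            self.store.save(email, name)
        except ValueError:
            return "already-registered"
        return "ok"


def test_signup_workflow():
    # Exercise the whole workflow a customer would go through.
    store = UserStore()
    service = SignupService(store)
    assert service.signup("ada@example.com", "Ada") == "ok"
    assert store.get("ada@example.com") == "Ada"  # data really landed in the store
    assert service.signup("ada@example.com", "Ada") == "already-registered"
    assert service.signup("not-an-email", "Bob") == "invalid-email"


test_signup_workflow()
print("signup workflow ok")
```

With a runner like pytest, the `test_` function would be discovered automatically; what matters is that the test crosses component boundaries instead of mocking them away.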

With all that said, I do not mean that you have to ditch testing in your new product. Just be pragmatic about it and always estimate the value. Also, as soon as the product concepts are validated, you should slowly start tightening things up, and if you followed the three simple rules we just discussed, you will already have all the infrastructure and boilerplate in place. Moreover, your team will already have the right attitude towards CI/CD, so it will be much easier to get people contributing to it on a daily basis.
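The "dead-simple pipeline" from point 1 above could be as small as this sketch (GitHub Actions syntax, with a hypothetical `deploy.sh` standing in for your cloud provider's one-command deploy):

```yaml
# Minimal CD sketch: every merge to main goes straight to a staging environment.
# The deploy script is a placeholder - substitute your provider's CLI command.
name: deploy-staging
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to staging
        run: ./scripts/deploy.sh staging   # hypothetical one-command deploy
```

Tests, blue/green steps, and production gates can be added to this same file later, so starting with pure CD does not paint you into a corner.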




Frameworks/Languages

  1. Do not be afraid to experiment here. There will always be some "conventional" choice, which is a safe bet but not necessarily the best one for your project. Sometimes it can be more productive to let the team learn new concepts for a couple of weeks. If we always follow conventions, there will be no way to innovate. Try new things and do not be afraid to fail - that is totally OK during exploration. This is especially important with languages. I never understood people telling me: "I have written in language X for 10 years and I feel comfortable with that", or "I will do everything in language X because it is fast" (disclaimer: it could be slow for the case you are working on, you just have not learned that yet). The exception to this rule is large companies where you already have a rich internal ecosystem and tooling, along with approved languages - there it would obviously be a suicide mission to recreate all that in a new language/framework.

  2. Another thing: do not hunt for the hot new tech stack just for the sake of being cool. Yes, it matters for hiring, and most engineers love trying out new things, but again, be pragmatic and do not lie to yourself. Ask yourself: Will this technology enable something that was impossible before? How will my product benefit from it? How does this technology fit into my current stack? In other words, you should clearly understand its value in the context of your project and be able to explain the benefits to anybody on the team (even non-technical people).




Performance and optimizations

I am a strong believer that most of the time there is no point talking about performance without measuring and defining some metrics first. Modern software is incredibly complex, and you can have abstract performance discussions all day long. Be pragmatic here: you are likely not Google, so you do not need to worry about many of the things they have to worry about. And this is good for you, because you can actually focus on delivering value instead of fighting latencies in some part of the system that makes no difference to the user at your scale. Throughout my career I have never seen performance kill a product - it is always something else. The only exception is probably your storage system, which could destroy your product's economics or just be intolerably slow (if you made a really, really bad choice).

So, the bottom line: if you want to do some performance work on your product, make sure you have the tooling to actually measure what you want to improve. And understand how your customers will benefit from it.
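As a minimal illustration of "measure first", here is a Python sketch (the two candidate functions are made up for the example) that times implementations with `time.perf_counter` before deciding whether an "optimization" is worth anything:

```python
import time

def measure(fn, *args, repeats=5):
    """Run fn several times and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Two hypothetical candidates for the same task: summing squares up to n.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    n = 100_000
    # Numbers first, opinions second: only optimize what measurement justifies.
    print(f"loop:    {measure(sum_squares_loop, n):.4f}s")
    print(f"builtin: {measure(sum_squares_builtin, n):.4f}s")
```

In a real system you would reach for the stdlib `timeit`/`cProfile` or your metrics stack instead of a hand-rolled timer, and tie the numbers to a user-visible metric - but the discipline is the same: no optimization without a measurement.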




Vendor lock

I believe that in the long term, the industry should be moving towards hybrid cloud solutions operating on open-source standards. If you think about it, vendors like GCP, Azure, and AWS are practically creating operating systems for data centers, and usually making them proprietary. Each of those vendors reinvents the same things over and over again, which is clearly suboptimal for users, since every provider has its own APIs. Hence, when building new things nowadays, I would keep in mind that keeping your product portable could help you in the future. A good example: running your service on managed Kubernetes might often be a better idea than using a proprietary container orchestration platform. Not only does it make your stack more portable, it also opens up access to a much larger community than your cloud provider's support portal/forum. I do realize that this advice is hard to follow, especially for your storage layer, but it is good to keep in mind, in my opinion.




Conclusion

I tried to cover some aspects, but I feel that we are just scratching the surface here - it is simply too much for a single post. I personally consider all the items I brought up here crucial for early products, and I try to follow them on a daily basis.

The key message is: seek value in what you are doing and never lie to yourself - the value should be crystal clear to you. Remember that at the end of the day, everything you do on your project should serve your current goal and help your customers in their life and work.

As usual, if you want to discuss the content, tag me on social media and let’s chat! Stay safe ;)

© 2021 by Aleksandr Bakharev. All rights reserved.