Your guide to measuring and improving code quality
It is dangerous to push software into production before knowing how each line of code contributes to the product. If your code is of low quality or riddled with bugs, the software will not run as intended and will cause problems for end users and your team alike.
To avoid such a scenario, it is important to continuously test your services and ensure that the code is fit to be pushed to production. Because your code forms the foundation of your software, its quality ultimately plays a significant role in the success of the product.
What is code quality? How do you measure it? How do you improve it? These are some questions we will explore in this guide so that you can adopt good code quality practices when you are building your services.
What is code quality?
Code quality comprises the various characteristics of your code and measures how effectively the code your developers write serves the product they are building. There is no universal standard for assessing it, and attempting to set one would be futile. Instead, code quality depends on factors such as business goals, legibility, and security.
It makes sense for each team to define their own standards for high-quality code on the basis of their existing practices, priorities, and context. Common indicators of higher quality for a piece of code include readability, functionality, extensibility, testability, maintainability, and reproducibility. These conventions or best practices, once set, sharpen developers’ code-writing skills and make it easier for them to work together. At the end of the day, the code should ensure that the product works well, and it should minimize any risks or inconsistencies in the user experience.
Why is code quality important?
Code quality ultimately determines application quality to a large extent. Metrics such as reliability, uptime and downtime, and the speed of development are all crucial to delivering robust software. If the quality of your code does not meet certain standards, the software is likely to fall short on these metrics. And once code quality starts to diminish, developing and maintaining that code, as well as the code surrounding it, becomes tedious and time-consuming. For this reason, the responsibility for code quality is not just the developers’ but is shared by everyone involved in the software development process.
When programmers write clean code with proper indentation, formatting, and coding style, the increased readability speeds up their workflow and, subsequently, the time-to-market. Such code is easier to test and modify when required. Moreover, when code is written with certain standards in mind right from the start, the amount of editing and revisiting required is likely to decrease, further reducing the opportunity for technical debt to accrue. Not only that, but such code is also more amenable to reuse, helping cut down on development costs and effort in the future.
Maintaining high code quality practices means that over time, you are investing in your ability to move fast and release high-quality software at the end of the lifecycle. With minimized errors and risks, good-quality code positively impacts the user experience as well: users come to perceive your application as reliable and secure.
How do you track code quality?
By drawing on your team’s and your software’s specific context and needs, you can track the quality of your code in both tangible and more abstract ways. Only if you and your developers have visibility into code quality can you make informed decisions and improve it. There are a few different methods that we recommend employing to measure code quality.
Static analysis tests source code against certain coding standards before the program is run. Setting baseline standards for code quality as a company can help align the various developers working on the same project. Once in place, these standards indicate whether your code is doing what it is supposed to do. By running static code analysis, you can check for issues you may anticipate, such as variables that have accidentally been reused or functions that are too long, and track these basic aspects of code quality in a programmatic, automated manner.
With time, the coding standards will become second nature to your developers, and such errors will become increasingly infrequent. To get there, encourage them to run some form of static analysis on every pull request or every merge to master. Storing the results of these analyses in a central database will significantly boost documentation efforts, which will pay off in the long run. Most programming languages have their own static analysis tools; SonarQube is a popular open-source platform that companies use to keep track of this information.
Static analysis is relatively easy to run as part of your development lifecycle and can save your developers time and effort. By itself, however, it does not suffice to get visibility into all aspects of your code quality.
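To make the idea concrete, here is a minimal sketch of a static check written with Python’s built-in ast module; it flags functions that exceed an arbitrary example length limit. This is an illustration of the technique, not a substitute for a real tool like SonarQube, and the 20-line threshold is a hypothetical team standard:

```python
import ast

MAX_FUNCTION_LENGTH = 20  # hypothetical team standard, not a universal rule


def find_long_functions(source: str) -> list[str]:
    """Return the names of functions whose total line count exceeds the limit."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno/lineno give the span of the function definition
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LENGTH:
                offenders.append(node.name)
    return offenders


# A two-line function passes the check
print(find_long_functions("def ok():\n    return 1\n"))
```

A check like this would typically run in CI on every pull request, failing the build (or posting a warning) when the returned list is non-empty.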
While static analysis takes care of the smaller, more technical hiccups, it does not deal with the more intangible questions concerning your code. Because they are carried out manually, code reviews contribute a more reflective and critical perspective to tracking code quality.
You and your team members can ask questions about the different components that make up your development workflows and, in turn, your code quality. This could be about design patterns, for instance: are you following the right patterns? Are you designing the services the right way? Or you can reflect on the chosen data models. Keeping an eye on the documentation is another important aspect.
Depending on what your development team’s workflows look like at a given point in time, you can approach the question of code quality from multiple angles. Doing so is more likely to help you set and enforce meaningful standards that lead to better quality services.
Code reviews are also beneficial in that they force the team to not only monitor quality but to do so with a critical eye. Automation has its place in the process, but any meaningful changes to code quality are more likely to emerge from reflection and engagement with the code.
You need not wait for any code to be written to start tracking and measuring code quality. The design stage of the software development lifecycle presents a valuable opportunity to think about what your code is going to do and how it is going to be written. Brainstorm, document the design process, and bring visibility to this part of the process. You are not merely documenting these plans for posterity’s sake but to encourage the team to discuss and review the design patterns and data models.
Getting more eyes on that helps you align the team and set standards right from the beginning. Focusing such efforts early on in the process has the advantage of starting on a strong footing and having to deal with fewer issues down the road.
Continuous testing throughout and after the software development lifecycle is unsurprisingly beneficial for determining quality and reworking your code. It keeps your team alert and provides opportunities to resolve issues quickly. In addition to SonarQube, tools like Codecov provide visibility into test coverage as an indicator of quality. Test coverage measures the effectiveness of your test cases and reveals gaps in your existing testing processes. Be sure to run all relevant tests, from unit tests to performance tests, and use them to identify the areas where the code is lacking the most.
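To illustrate what a coverage gap looks like, consider this hypothetical function and a deliberately incomplete test suite (all names here are made up for the example). A coverage tool such as coverage.py, surfaced through a service like Codecov, would flag the two clamping branches as never executed:

```python
def normalize_discount(value: float) -> float:
    """Clamp a discount percentage to the range 0-100."""
    if value < 0:
        return 0.0
    if value > 100:
        return 100.0
    return value


def test_happy_path():
    # Only exercises the pass-through branch; the `value < 0` and
    # `value > 100` branches remain uncovered, which coverage tooling
    # would surface as a gap in the test suite.
    assert normalize_discount(15.0) == 15.0


test_happy_path()
```

Adding tests for the negative and over-100 cases would close the gap, which is exactly the kind of improvement coverage reports are meant to prompt.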
Developers who write quality code put security front and center. If you release code that is subsequently flagged for vulnerabilities on numerous occasions, you would not consider the code to be of high or even acceptable quality.
For this reason, running security and vulnerability scans as part of your automated checks and CI/CD processes is a no-brainer. Only by testing continuously and staying on the lookout for security bugs during development can you expect to deliver robust software.
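As a toy sketch of what a secret scan does, the snippet below searches text for strings shaped like leaked credentials. The patterns are deliberately naive and for demonstration only; real scanners such as Gitleaks or Bandit ship far more robust rule sets and should be used in practice:

```python
import re

# Naive demonstration patterns; production scanners use much larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),  # hardcoded password literal
]


def scan_for_secrets(text: str) -> list[str]:
    """Return substrings that look like leaked credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits


print(scan_for_secrets('password = "hunter2"'))
```

Wired into a pre-commit hook or CI step, a check like this fails the build whenever the returned list is non-empty, stopping the secret before it reaches the repository.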
How do you improve it?
The goal is to push code quality in the right direction over time, and you cannot improve what you do not measure. If your team does not know the baseline, there is no way to see how far, if at all, you have come. It is therefore important to define goals that track where you are today and where you are trying to get over the course of the software development lifecycle. These goals can range from improving iteration time, i.e., how long it takes to release new features in your particular codebase, to boosting your DORA metrics or your final output metrics. If you notice positive shifts in these areas over time, through the use of static analysis or code review tools, your code quality is likely improving.
Be sure to also incorporate quality tracking practices into your CI/CD processes and focus on giving visibility to developers and everybody else involved. If developers cannot see these code quality metrics and their performance over time, they will have little idea how their workflows and software engineering practices impact code quality.
Visibility is often underestimated, so make it a habit to present metrics on a regular basis. A central dashboard helps ensure that everyone is on the same page about standards and expectations, as well as any shifts in quality over time. It can include the results of static analyses and security scans, along with any other measurements running as part of your CI/CD processes. SREs should take the lead here and communicate any observed changes or issues directly to developers.
Avoid making matters worse by giving up the flexibility of your code quality standards. Be careful not to turn standards into hard-and-fast rules or excessive restrictions that get in the way of agile development workflows. Rigid rules can instead encourage developers to write mediocre code just to pass the checks, because they need to ship features. Giving them visibility, as opposed to seemingly unfounded directives, will also help them gain a deeper understanding of the situation and the shifts. Focus on working with developers by discussing their code and workflows with them. From this kind of engagement, combined with visibility into quality monitoring systems, you can set quality standards that everyone can get behind.
How can Cortex help?
We have established that code quality, as a foundational element, is a strong influence on the overall quality of your software. The entire team must keep eyes on the code so that, together, you can leverage the data to make informed decisions aimed at improving its quality. Only when you start monitoring relevant metrics can you review the code with the team and brainstorm ways to change the workflows. The quality assurance process does not end after you release your desktop, mobile, or web application. Improving code quality is not a day-long affair but a commitment to consistent reflection and refinement.
If this seems daunting, rest assured that the tools at your disposal will lessen the burden. At Cortex, we built a service catalog for companies seeking increased visibility into their services. Our platform compiles and presents all the relevant information about your microservices, from ownership to SLOs. The catalog also supports integrations with the other development tools you are using. With this level of monitoring, developers can not only speed up the process of fixing errors but also dig deeper into each service and begin thinking about next steps for improving the quality of the code.