At PubMatic, we strive to continuously improve how we approach software development. Many aspects contribute to building high-quality code effectively. One broad category of improvement we consider is the process and supporting tools used between the time a piece of code is deemed functionally complete by its author and the time it moves to QA, and from QA through various acceptance tests to production deployment.
Our Engineering teams work on various projects using a vast array of languages such as C/C++, Java, JavaScript, Scala, Go, Kotlin and Objective-C, and supporting frameworks such as Hadoop, Spark, Oozie and Spring, to name just a few. Teams have the freedom to do their own research on and selection of the architecture, technologies, tools and best practices that fit the job at hand.
Over time, teams have adjusted how they approach the development process to become more effective and to fully leverage the knowledge and experience of other team members. This has led to variations in the mechanisms used to ensure code and features are ready for acceptance testing and production.
Addressing Variations and Code Quality
Given our focus on code quality, we conducted an analysis across teams that revealed both these variations and the potential to improve team effectiveness through greater automation and tooling support for code quality assessment. It also became apparent that standardizing quality assessment mechanisms across teams would improve overall code quality, visibility, predictability, tooling support, knowledge transfer and documentation, as well as project and team member ramp-up time.
Consequently, we formed a team tasked with surveying tools and technologies commonly used in the industry to streamline code quality assessment. We put together a set of criteria (e.g. learning curve, capabilities, community adoption, integrations) to perform assessments in a consistent fashion, and evaluated several of the most commonly used code review tools, such as Collaborator, Upsource, Phabricator and Codacy, among others.
Based on our criteria and rankings, we chose Codacy. It streamlines the code quality assessment process in the following ways:
- Once the code is built and tested locally, it is pushed into our VCS (git) along with coverage data
- The git server fires a webhook that notifies Codacy that new code is ready for analysis
- Codacy analyzes the code and updates its dashboards, revealing newly introduced and fixed issues, coverage evolution, hot spots and other quality aspects
- Codacy automatically pushes generated issue comments back to git so they appear on pull requests
- The developer collects this feedback from the review tool, quickly identifies and fixes the issues, then re-pushes the updated code
- Once no more issues are reported, the code is ready for human review, then QA, subsequent acceptance testing and finally, production deployment
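The analysis step can be tuned per repository through Codacy's configuration file. The sketch below is illustrative only: the enabled engines and excluded paths are hypothetical examples, and the exact keys may differ across Codacy versions.

```yaml
# .codacy.yml — illustrative repository-level Codacy configuration
---
engines:
  checkstyle:
    enabled: true
  pmd:
    enabled: true
exclude_paths:
  - "test/**"
  - "generated/**"
```

Checking a file like this into the repository keeps the analysis settings versioned alongside the code they govern.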
Shortening the Feedback Loop
To shorten the feedback loop even further, we standardized, within each tech stack, the tools used to assess code quality before the code is committed. IDEs and build scripts are fitted with tools (such as Checkstyle, FindBugs, PMD/CPD, JaCoCo, golint, govet and SafeSql) that can identify numerous code issues.
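For a JVM project built with Gradle, wiring these analyzers into the build can look roughly like the following. This is a sketch using the standard Gradle plugin names; the tool versions, rule sets and file paths are illustrative, not our actual configuration.

```groovy
// build.gradle — illustrative wiring of static analysis into a JVM build
apply plugin: 'java'
apply plugin: 'checkstyle'
apply plugin: 'pmd'
apply plugin: 'findbugs'

checkstyle {
    toolVersion = '8.12'                               // illustrative version
    configFile = file('config/checkstyle/checkstyle.xml')
}

pmd {
    ruleSets = ['java-basic', 'java-braces']           // illustrative rule sets
}

findbugs {
    effort = 'max'
    ignoreFailures = false  // reported bugs fail the build
}
```

With this in place, every local `gradle check` runs the same analyzers the developer's IDE uses, so issues surface before the code ever reaches the central review tool.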
For JVM-based projects we also put together a code coverage plugin based on JaCoCo. The plugin assesses coverage levels and their evolution (differential coverage) for new and changed code, and can optionally enforce minimum thresholds by breaking the build. This removes the need for developers to rely on a centralized tool (e.g. SonarQube) that could introduce bottlenecks and additional license fees, and that cannot be used while working disconnected.
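Gradle's built-in JaCoCo support can express a simple threshold check of this kind. Our differential plugin is internal, so the sketch below shows only the baseline mechanism, and the 80% floor is an illustrative value.

```groovy
// build.gradle — enforcing a minimum line-coverage ratio with JaCoCo
apply plugin: 'java'
apply plugin: 'jacoco'

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                counter = 'LINE'
                value = 'COVEREDRATIO'
                minimum = 0.80   // illustrative threshold
            }
        }
    }
}

// fail the standard `check` task when coverage drops below the floor
check.dependsOn jacocoTestCoverageVerification
```

Because the check runs inside the normal build, it works offline and adds no shared-server dependency; the differential logic (comparing coverage on new and changed lines only) sits on top of this mechanism.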
Benefits of Improved Tooling and Process
With the introduction of a standardized quality assessment process and tooling, we have gained better visibility into teams’ development practices. Further, we are seeing a significant reduction in newly introduced bugs, better overall code quality and more predictable development. As a bonus, the increased automation allows our developers to focus more on business- and technology-related research and educational opportunities.
Our infrastructure is positively impacted by these changes as well:
- Fewer CI builds, as bugs are caught before the code reaches the CI pipeline
- Less time spent on testing features on QA servers
Partner Benefits
Last but not least, our partners benefit directly and indirectly from these improvements in the following ways:
- Shorter time-to-market, facilitated by automated identification of code issues, which yields shorter and fewer review and QA cycles
- Improved product quality due to fewer bugs that go undetected in QA
Our engineering teams are constantly working to improve our processes so we can focus on increased efficiency and an improved client experience with our solutions. To learn more about our recent innovations, check out additional engineering insights or join our teams.