
Key Metrics In Software Development That Matter

December 19, 2011

As a senior engineering manager, what metrics do you use to measure the performance of your software engineering organization? As I’ve walked into various organizations over the years I’ve seen many sets of metrics in use. Some are spot on and others seem quite worthless or are even nonexistent. I believe that software organizations need a set of metrics for their engineering teams that focus on the critical few vs. a laundry list of everything that can be measured. I also believe that the metrics you choose at an organizational level need to be those that will institute a common set of commitments throughout the organization and drive change if and when required. They also have to be updated and published with a predictable cadence.

First of all, I’m not a big fan of software engineering team metrics that simply show “We were busy” or “Look, ma, we followed our process perfectly!” At the end of the day those metrics are perhaps interesting within each team, but they fail to truly tell you as an engineering leader whether you are shipping high quality software on time. I cannot tell you how many times over the years I’ve been shown MS Project plans, Agile burndown charts, QA fix/found rates, or metrics reports as thick as phone books, each of which failed to really communicate anything meaningful about the organization. All are interesting, and I applaud the teams that use and refine their project-based metrics, but at the end of the day they often don’t reflect much beyond the fact that the teams are perhaps working hard and/or following their processes. One common thread that I’ve found is that even with the most complex of metrics programs, most teams cannot answer the simple question “How well are you meeting your commitments?”

When looking at metrics at the macro level that span an engineering group, I personally prefer to focus on three sets of credibility metrics: those critical few metrics that measure end results vs. progress. I also like these to be framed from a business perspective instead of an engineering perspective. Lastly, I want the measures and reporting consistent across all of my teams, reported in a single scorecard, and updated and shared on a regular, predictable cadence (updated monthly, summarized by quarter and by year, and published for all the world to see):

On Time Delivery: Are my teams predictable or not? As you’ve seen in my previous postings, predictability is a key component of your credibility. It doesn’t matter what you deliver if you are always missing your commitments. Telling the business that you’ll turn the crank in three-week cycles and that it can decide to ship or deploy at any turn of the crank it wants doesn’t pass muster. That is abdicating your responsibilities as a senior engineering manager, as it doesn’t allow the business to build robust business plans, build financial models, and ensure cross-functional operational readiness throughout the organization based on the timely delivery of your products. While Agile provides great visibility and enables predictability, at the end of the day as a senior leader you still have to commit to a key headline for a release and 3-4 GIBIJFT (Gee I’d Buy It Just For That) key features. The metrics:

  • Where we made delivery commitments did we keep them?
  • Where we missed commitments, how many did we miss and then by how many days?
  • What percentage of our deliveries were on time?
  • If we had to go back to the well and ask for and recommit to new delivery dates, how many times did this occur, and in the end how many days was the final delivery from the original commit date?
  • Did we deliver (in terms of major content) what we said we would and in how many cases did we not?
  • In how many releases did we delight our customers, and in how many did we fail to do so? (business gets to call this one, not engineering—sorry guys)
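The on-time delivery questions above can be rolled up mechanically once each release records its original commit date, its actual delivery date, and how many times the date was renegotiated. Here is a minimal sketch of that rollup; the `Release` record and `on_time_scorecard` function are hypothetical names for illustration, not something prescribed in this post:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Release:
    name: str
    committed: date   # original committed delivery date
    delivered: date   # actual delivery date
    recommits: int    # times we went back and asked for a new date

def on_time_scorecard(releases):
    """Summarize on-time delivery across a set of releases."""
    on_time = [r for r in releases if r.delivered <= r.committed]
    missed = [r for r in releases if r.delivered > r.committed]
    return {
        "on_time_pct": 100.0 * len(on_time) / len(releases),
        "missed": len(missed),
        # Average slip, in days, measured against the ORIGINAL commit date,
        # not the most recent recommitted date.
        "avg_slip_days": (sum((r.delivered - r.committed).days for r in missed)
                          / len(missed)) if missed else 0.0,
        "recommits": sum(r.recommits for r in releases),
    }
```

The one design choice worth calling out: slip is always computed against the original commitment, so recommitting to a new date never "resets" the metric.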

Quality: Are you shipping or deploying software with high quality or not? On-time delivery and content are meaningless if the quality is suspect at best. You have to be both on time and deliver with high quality to have credibility. Traditional found/fix rates collected during the development process also do not necessarily predict actual quality once a solution is out there in the wild. One rule that I have historically had with my teams is “Do No Harm,” which also comes under the heading of “No Self-Inflicted Gunshot Wounds.” These can be among the worst of the quality sins as they undermine your customers’ confidence in your solutions. The other rule I have had is that we should be doing everything possible to put the help desk out of business with each and every release. Those new features you added on time will not matter if you shipped what can best be described as Bovine Scat. The metrics:

  • Did we cause harm to our existing system and customers?
  • Did we hurt performance in any way?
  • Did we break something that used to work?
  • How many calls to the help desk did our solutions generate that were caused by the software?
  • How many patches did we have to release due to P1/P2 quality issues?
  • How many hours did engineering spend on fixing issues in what we delivered vs. working on new features or capabilities?
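The last two bullets in particular lend themselves to a simple per-release rollup: what share of help-desk calls did our software cause, and what share of engineering hours went to fixing what we already shipped vs. building new capabilities. A minimal sketch, with hypothetical names chosen for illustration:

```python
def quality_scorecard(helpdesk_calls, software_caused_calls,
                      p1_p2_patches, hours_fixing, hours_new_features):
    """Summarize release quality from help-desk and engineering-hours data."""
    total_hours = hours_fixing + hours_new_features
    return {
        # Share of help-desk volume our software generated; trending this
        # toward zero is the "put the help desk out of business" goal.
        "software_caused_call_pct": (100.0 * software_caused_calls / helpdesk_calls
                                     if helpdesk_calls else 0.0),
        "p1_p2_patches": p1_p2_patches,
        # Share of engineering hours spent fixing what we shipped rather
        # than building new features or capabilities.
        "fix_hours_pct": (100.0 * hours_fixing / total_hours
                          if total_hours else 0.0),
    }
```

The “Do No Harm” questions (regressions, performance hits) are yes/no judgments and belong on the scorecard as-is rather than as computed numbers.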

Budget: Did we ship high quality products on time without breaking the bank? If every release requires what can best be described as a Hail Mary pass as you poured on unplanned resources and tools, then the business has other problems with your credibility. You need to be able to manage within your budgets and not allow poor planning on your part or your teams’ to become an emergency for the Finance department or your business. You also need to be sure that your teams do not have to waste money fixing the sins of previous releases, as that robs you of resources that could be directed at new capabilities.

  • By what dollar amount and percentage were your teams under, on, or over budget?
  • Of the dollars spent on our releases, how much, in dollars and as a percentage, was spent fixing quality issues from previous releases?
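Both budget questions reduce to two simple ratios per release: variance against plan, and the share of actual spend consumed by fixing the sins of previous releases. A minimal sketch, with hypothetical function and parameter names:

```python
def budget_scorecard(budget, actual_spend, prior_release_fix_spend):
    """Summarize budget performance for a release.

    budget                 -- planned spend in dollars
    actual_spend           -- what the release actually cost
    prior_release_fix_spend -- portion of actual spend that went to fixing
                               quality issues from previous releases
    """
    variance = actual_spend - budget  # positive means over budget
    return {
        "variance_dollars": variance,
        "variance_pct": 100.0 * variance / budget,
        # Money spent on rework robs resources from new capabilities.
        "rework_pct": 100.0 * prior_release_fix_spend / actual_spend,
    }
```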

In the end, internal project-based metrics really can help teams achieve the goal of delighting customers with high quality solutions that are delivered on time. There are books out there that cover thousands upon thousands of project-based metrics. That said, unless you are measuring and improving your on-time and quality metrics at this macro level, the micro metrics may not matter. I am a big fan of implementing these critical few metrics, at the macro level, across all of my engineering teams and then driving programs that positively impact the results month after month, quarter after quarter, year after year. You cannot fix what you do not measure. Even if you and your teams are the best in the world at what you do, it is nearly impossible to communicate your wins if you do not measure and publish them at a predictable cadence.

This is a companion post to my Elevator Speech Series… For more on “delivering high quality solutions on time that are innovative, delight customers, win reviews, and disrupt competitors; all while having fun, being ethical and transforming the business” please see this series.


