In my last blog post, I talked about the importance of measuring the business impact of DevOps-driven application delivery. At the DevOpsDays Seattle Open Space discussion on metrics, we also explored measuring DevOps teams’ performance and people productivity. I was glad to see that Nancy Gohring from 451 Research joined our session (check out her insights). Below are some of the key highlights from that Open Space discussion.
For DevOps leaders, knowing whether DevOps teams are making progress toward their organizational goals is important. Often these teams seem to have conflicting objectives. And since DevOps practice involves a cultural shift, our discussion concluded that it is crucial for Dev and Ops teams to collaborate and define what data should be logged so that both sides can tell whether they are on track. This collaboration is all the more relevant because Ops is often measured by service availability, while Dev teams are marching toward shipping new features and driving new revenue.
One participant in our discussion reported that once a DevOps team started tracking the number of wake-up calls caused by IT incidents, it managed to reduce the overall number of open tickets. With data-driven insights, the team was able to tie each wake-up call to the piece of code that caused it and to the developer who committed that code.
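A minimal sketch of that kind of attribution, assuming hypothetical incident records that note the implicated file and a commit log mapping each file to its last author (all field names and data here are illustrative, not from any specific tool):

```python
from collections import Counter

# Hypothetical incident records: each notes the file implicated and
# whether the incident woke someone up.
incidents = [
    {"id": "INC-101", "woke_someone": True, "file": "billing/export.py"},
    {"id": "INC-102", "woke_someone": True, "file": "auth/session.py"},
    {"id": "INC-103", "woke_someone": False, "file": "billing/export.py"},
    {"id": "INC-104", "woke_someone": True, "file": "billing/export.py"},
]

# Hypothetical commit log: last author to touch each file
# (e.g. extracted from `git log`).
last_author = {
    "billing/export.py": "alice",
    "auth/session.py": "bob",
}

def wakeup_calls_by_author(incidents, last_author):
    """Count wake-up incidents attributed to the last committer of each file."""
    counts = Counter()
    for inc in incidents:
        if inc["woke_someone"]:
            counts[last_author.get(inc["file"], "unknown")] += 1
    return counts

print(wakeup_calls_by_author(incidents, last_author))
# Counter({'alice': 2, 'bob': 1})
```

The point of such a report is to reveal where on-call pain concentrates, not to assign blame, which echoes the caution later in this post about how the data gets used.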
Based on another IT practitioner's experience, once DevOps teams started measuring QA test success/fail rates and tying them to code coverage, they were able to meet both internal and external customer SLAs. They were also able to see whether those SLAs correlated with the size and behavior of their technical debt over time.
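One simple way to combine those two signals is to flag modules where a low test pass rate coincides with thin coverage. The sketch below assumes hypothetical per-module numbers and thresholds; real values would come from your CI and coverage tooling:

```python
# Hypothetical per-module QA results: (tests passed, tests run, line coverage %).
modules = {
    "payments": (92, 100, 85.0),
    "search":   (40, 50, 35.0),
    "profile":  (58, 60, 90.0),
}

def risk_report(modules, pass_threshold=0.9, coverage_threshold=50.0):
    """Flag modules whose failing tests coincide with low coverage."""
    flagged = []
    for name, (passed, run, coverage) in modules.items():
        pass_rate = passed / run
        if pass_rate < pass_threshold and coverage < coverage_threshold:
            flagged.append((name, round(pass_rate, 2), coverage))
    return flagged

print(risk_report(modules))  # [('search', 0.8, 35.0)]
```

Modules that show up in the report repeatedly over time are one rough proxy for where technical debt is accumulating.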
We also discussed tracking the number of closed tickets per person, open-ticket duration, and the number of story points delivered as ways to understand application delivery velocity. These metrics should be analyzed in more detail and standardized across the organization. For instance, anyone measuring IT teams' productivity should factor the complexity of tickets and story points into the analysis.
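To make that concrete, here is a minimal sketch of why raw ticket counts mislead: summing a complexity weight per ticket instead of counting tickets can invert the ranking. The assignees, weights, and durations are made up for illustration:

```python
# Hypothetical closed tickets: (assignee, complexity weight, days open).
tickets = [
    ("alice", 1, 2.0),   # trivial fix
    ("alice", 5, 10.0),  # complex change
    ("bob",   1, 1.0),
    ("bob",   1, 1.5),
    ("bob",   1, 0.5),
]

def weighted_throughput(tickets):
    """Sum complexity weights per assignee instead of raw ticket counts."""
    totals = {}
    for assignee, weight, _days in tickets:
        totals[assignee] = totals.get(assignee, 0) + weight
    return totals

# Raw counts would rank bob (3 tickets) above alice (2), but weighting
# by complexity tells the opposite story.
print(weighted_throughput(tickets))  # {'alice': 6, 'bob': 3}
```

The same weighting idea applies to story points, provided the organization standardizes what a point means across teams.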
Some folks reported that when people realized their productivity was being measured, there were examples of “gaming the system”: creating long readme pages instead of contributing actual code, or, when bugs per developer were being counted, becoming defensive and even questioning the nature of a particular bug. Is this a bug, or perhaps a feature? Sound familiar?
When measuring productivity and performance, DevOps leads need to think through how the insights gleaned from data will be used. In true DevOps spirit, measurements of people's performance should drive continuous improvement, appropriate resource allocation, and gains in productivity, rather than “punishing” or “shaming” people.
So the real question is not whether you should measure a person's or a team's productivity. Rather, it's how you use the data obtained from the analysis. Without data, you have no insight and no way to track progress. So, to end on a philosophical note: it's not about the tool, but how we humans use it.
What DevOps metrics are you measuring? I am looking forward to hearing from you!
You can reach me on Twitter: Follow @stela_udo