One of the publications I keep an eye on in the software industry is a quarterly report published by Software Equity Group, L.L.C.: the “Software Industry Financial Report.” The report runs about 70 pages and contains information on the financial performance of software companies, aggregated into three “indexes” (software, SaaS and Internet). Each index is divided into subdivisions; the software index, for example, includes categories such as billing software, gaming and healthcare. The data in the report is presented at the subdivision and index level.
So, with that context out of the way, I thought I’d share some of my observations on the 2Q14 report:
- In 2014, Forrester forecasts that software will be 26 percent of the technology spend, leading all other categories!
- A number of on-premises software companies are shifting their revenue models to offer SaaS pricing, but, for most of the larger, more stable companies in the software index, the revenue contributions from these changes are not sufficient to justify a move to the SaaS index.
- Gaming providers were the stars of the indexes about a year ago, but that sector now seems to be slumping, apparently as providers lose business and react to the boom in Internet and mobile games.
As I thought about the implications of the data in this report, I asked myself just how valuable industry data remains as it ages. I read this report every quarter, but does the state of the industry really change that much every quarter? What would I miss if I read only every other report? Is a great or poor performance by one industry sector in one quarter truly significant?
Of course, it depends. Sometimes one data point can be very meaningful – the canary in the coal mine. However, generally, what we really need to look at are trends that demand action.
To spot a trend, we need a sampling frequency that lets us see the trend before it does serious damage. Sampling too frequently, however, can cause us to waste time reacting to changes that are just natural, random oscillation around a trend.
The same is true of the metrics in the day-to-day running of our software development and/or IT departments. How often do we need to update the burndown chart of an Agile team? I would say daily because on a 10-day sprint we need to be able to spot a three-day trend, and a trend demands at least three data points to be observable.
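The three-data-point rule above can be sketched in code. This is a minimal illustration, not anything from the report: the helper name, the sample data and the "up/down" labels are all my own assumptions about how a team might check a daily burndown series for an emerging three-day trend.

```python
def three_point_trend(samples):
    """Classify the last three samples of a burndown series.

    Returns 'down' (work remaining is falling: a healthy burndown),
    'up' (work remaining is rising: a warning sign), or None when
    there is no clear trend, which likely means random oscillation.
    """
    if len(samples) < 3:
        return None  # a trend demands at least three data points
    a, b, c = samples[-3:]
    if a > b > c:
        return "down"
    if a < b < c:
        return "up"
    return None

# Hypothetical example: remaining story points over the first
# five days of a 10-day sprint, sampled daily.
burndown = [40, 38, 39, 41, 43]
print(three_point_trend(burndown))  # the last three days rise: "up"
```

Sampling daily is what makes this check possible: with weekly sampling, a 10-day sprint would be over before three data points existed.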
How often do we need to benchmark ourselves against our industry peers? Annually is probably sufficient, because the aggregate data from our peers will probably change only quarterly. But what about benchmarking ourselves against a baseline of our own performance while we are implementing an organizational change (e.g., transitioning to Agile or implementing SAFe)? Here I would argue that a monthly benchmark is appropriate, because that gives us the minimum three data points needed to detect a trend every quarter, which would matter if the transition were going off course or not meeting ROI goals.
In sum, it pays to stop and think about whether your review frequency for a set of metrics matches how quickly a trend could emerge clearly enough for you to decide to change your behavior. Not sure? Give me a call!