An Operating Model for Implementing a Static Code Analysis Tool

In the April 26, 2010 print edition of InformationWeek, the Dr. Dobb's Report contained an article by Sid Sidner of ACI Worldwide describing his team's choice and implementation of a static code analysis tool.  ACI looked at:

  • AdaCore's CodePeer
  • Coverity's Prevent
  • GrammaTech's CodeSonar
  • Green Hills' DoubleCheck
  • Klocwork's Insight
  • Lattix's LDM
  • Microsoft's StyleCop
  • Ounce Labs' Ounce Core

You can read Sid's comments on each of these tools in his article. The article is a very good description of how to integrate a software analysis tool into the development environment. Some engineering shops like ACI are a little bit ahead of IT organizations in building quality and security into their products. Sid offers the following "Tips for Success":

  • Define an "initial issue" policy - what will you do with the code issues identified by the first analysis run?
  • Get the global mechanics working - many of the tools require license managers and centralized result servers
  • Attack one product at a time
  • Identify SMEs - those product experts who will also be experts in the tool
  • Train the SMEs
  • Work with the SMEs
  • Train the developers
  • Perform initial analysis on existing code and defer all issues - this is Sid's suggested choice of "initial issue" policy. Note that he suggests deferring, not ignoring (a minimal sketch of this baseline pattern follows the list).
  • Deliver help from SMEs to developers as required
  • Run the build analysis often
  • Review deferred issues
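
The "defer, don't ignore" policy is often implemented as a baseline: record everything the first run finds, then fail the build only on findings that are not in the baseline. Below is a minimal sketch of that pattern, assuming a hypothetical JSON export of findings with "checker", "file", and "line" fields; the commercial tools above each have their own export formats and triage servers.

    # baseline_gate.py - minimal sketch of an "initial issue" deferral policy.
    # Assumes a hypothetical findings export: a JSON list of objects with
    # "checker", "file", and "line" keys. Real tools use their own formats.
    import json
    import sys
    from pathlib import Path

    def fingerprint(issue):
        """Identity for a finding, so deferred issues are recognized on later runs.
        Keying on line numbers is brittle when code moves; real tools hash context."""
        return (issue["checker"], issue["file"], issue["line"])

    def load(path):
        return {fingerprint(i) for i in json.loads(Path(path).read_text())}

    if __name__ == "__main__":
        current_findings, baseline = sys.argv[1], sys.argv[2]
        # Deferred issues stay in the baseline; only genuinely new ones are reported.
        new_issues = load(current_findings) - load(baseline)
        for checker, file, line in sorted(new_issues):
            print(f"NEW {checker} at {file}:{line}")
        # Fail the build only on issues introduced after the baseline was captured.
        sys.exit(1 if new_issues else 0)

Run it as part of the frequent build analysis, e.g. "python baseline_gate.py current.json baseline.json"; the baseline file itself is the list of deferred issues that the SMEs periodically review.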

I discussed this article with Lev Lesokin of CAST Software, which has its own static analysis application, the Application Intelligence Platform (AIP).  Lev highlighted a couple of considerations that Sid's article did not cover:

  1. ACI probably does a very good job of building quality into its code at the module level, but it should also look at quality at the broader architecture level: interactions between modules, between distributed components, and, most importantly, between the database and the code base. Some of the riskiest quality issues reside at that application level, and they are the hardest to identify through code walk-throughs or localized static analysis.
  2. Analyzing your software is also an opportunity to introduce measurement. Sid could put metrics in place to make sure his teams are getting better at building quality and security in, and risk metrics can be used to make go/no-go decisions at stage gates (a small illustrative gate check follows this list). This might be a point for his VP of Engineering, though, who can introduce productivity measures and drive the organization toward continuous improvement.
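
To make the measurement point concrete, here is a small, purely illustrative gate check: it turns each team's latest analysis run into a high-severity-findings-per-KLOC figure and compares it against a threshold. The team names, counts, and threshold are assumptions for the sketch; CAST's AIP and the other tools expose richer, language-normalized measures.

    # stage_gate.py - illustrative go/no-go check built from analysis results.
    # All inputs (teams, counts, threshold) are made up for this sketch.
    from dataclasses import dataclass

    @dataclass
    class RunSummary:
        team: str
        high_severity_findings: int
        lines_of_code: int

        @property
        def defects_per_kloc(self) -> float:
            return self.high_severity_findings / (self.lines_of_code / 1000)

    GATE_THRESHOLD = 0.5  # max high-severity findings per KLOC allowed at the gate

    def gate_decision(runs):
        ok = True
        for run in runs:
            status = "PASS" if run.defects_per_kloc <= GATE_THRESHOLD else "FAIL"
            print(f"{run.team:>10}: {run.defects_per_kloc:.2f} per KLOC -> {status}")
            ok = ok and status == "PASS"
        return ok

    if __name__ == "__main__":
        snapshot = [RunSummary("payments", 12, 48_000), RunSummary("reporting", 3, 15_000)]
        print("GO" if gate_decision(snapshot) else "NO-GO")

Tracking the same figure release over release is what turns the tool from a bug-finder into the continuous-improvement measure Lev describes.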

These two uses of software analysis and measurement products might not be as relevant for ACI, because much of the software it builds consists of monolithic products that it ships and does not see in a distributed environment until they are implemented at a customer. An IT organization, by contrast, has many moving parts connected through a services layer or various other interfaces, which is where application-level quality issues become much more prevalent. Getting apples-to-apples measures across development teams working in Java, .NET, database stored procedures, and so on is also much more challenging. Once ACI's customers integrate their newly acquired payments system into their bank's accounts and cash management systems, they might consider an architecture-level analysis to determine how structurally strong their environment is in terms of stability, performance, and security.

Written by Michael D. Harris at 16:51

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG President

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!