Software Vendor Management and Code Quality

Outsourcing software development projects requires vigilance if the anticipated gains are to be realized. The hard-fought negotiations to secure a slightly lower cost for the client and a worthwhile profit for the vendor are over for another year or two, and the actual work can (re)commence.

What impact will the new software development outsourcing contract have on the behavior of the vendor? 

Probably the vendor will be looking to save costs to regain lost margin. With the best intentions in the world, this probably means quality is at risk, even if only in the short term. Why? Because the vendor will probably choose to do one, or all, of the following: push more work through the same team; introduce new, cheaper resources to the team; or cut back on testing.

How can a client monitor for these software vendor management changes? 

First and foremost, you need good data. It is not helpful to start gathering data after you think you might have detected a problem with delivered code. The only data that will be useful in a discussion about diminishing quality from development outsourcing is trend data (I will return to this point at the end). That means that the client must be capturing and analyzing data continuously – even in the good times. If you tell me that the quality of my code has dropped off recently, I will not believe you unless you can show me concrete data demonstrating when and how it was better before.

What sort of data? 

The level of defects found, by severity, in any acceptance testing should be included. However, with many clients these days having only limited software development capabilities themselves, I would also recommend that all delivered code be passed through a reputable static code analysis tool such as CAST, SonarQube or Klocwork. These tools provide a deeper analysis of the quality of the code, new metrics and, by comparison with previous runs on previous code deliveries, the impact of the current code delivery – did it improve or diminish the overall quality of the application being worked on? Clearly, the former is desirable and the latter is a cause for discussion. Some care needs to be taken before diving headlong into an untried static code analyzer. Poor examples of the breed tend to generate many false positives – sometimes so many that the credibility and value of the tool is lost.
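As a minimal illustration of the trend analysis described above, the Python sketch below compares a size-normalized quality metric across successive code deliveries. The metric names and numbers are hypothetical, standing in for whatever your chosen analyzer actually exports.

# Hypothetical per-delivery metrics exported from a static analysis tool;
# the field names and values are illustrative, not any vendor's real schema.
deliveries = [
    {"release": "2024.1", "kloc": 410, "critical_violations": 37},
    {"release": "2024.2", "kloc": 425, "critical_violations": 33},
    {"release": "2024.3", "kloc": 447, "critical_violations": 52},
]

def density(d):
    # Normalize by size so growth of the code base does not mask (or fake) a trend.
    return d["critical_violations"] / d["kloc"]

for prev, curr in zip(deliveries, deliveries[1:]):
    change = density(curr) - density(prev)
    verdict = "improved" if change < 0 else "worsened"
    print(f'{curr["release"]}: {density(curr):.3f} critical violations/KLOC '
          f'({verdict} versus {prev["release"]})')

Even something this simple, run on every delivery, gives you the "when and how it was better before" evidence that makes a quality conversation with a vendor concrete.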

Maintaining code quality with software vendor management


From personal experience, I also like to see the results of formal code reviews carried out on the code by the developer and one of their colleagues. To quote a Rogue Wave white paper, “The value of code review is unarguable, which explains why they’re mandated by 53% of today’s software development teams.” Many years ago, during my time managing a large software development group at Sanchez Computer Associates (now part of FIS), we faced the challenge of maintaining and improving code quality on our complex core product while increasing the number of developers to meet demand. Code reviews seemed to be a good answer because we had a group of very experienced developers who could teach and mentor the newcomers. The problem was that the old hands were just as much in demand to get code out of the door, so they didn’t have time to review all the code being produced by everyone else.

They, not I, came up with a good compromise. They devised a set of programming standards in the form of a checklist that every programmer, including the most experienced developers, would apply to their own code before unit test. This caught a lot of minor problems through the simple, repetitive reminder exercise. Next, the programmer would do a quick review of their checklist and code with a colleague who could do quick “spot checks.” Finally, if any coding defects were discovered in subsequent test or use, the lessons from these were captured in an updated checklist. From a software vendor management perspective, I see the collection and review of these checklists as a form of commitment from individual team members that their code is “done.”

Returning to my point about trend data being the only useful currency for a software vendor management discussion: in my experience, these discussions proceed very differently if the data collected before the contract (re)negotiation are used to set some expectations in the contract. Not necessarily service level agreements (SLAs), because those may be reserved for more important issues such as cost, productivity or customer satisfaction, but certainly the recording of an expectation that quality metrics will meet or exceed some average based on prior performance from this software vendor (or the one they are replacing).

Written by Michael D. Harris at 14:39

How Software Estimation Impacts Business Value

Software estimation in simple terms is the prediction of the cost, effort and/or duration of a software development project based on some foundation of knowledge.  Once an estimate is created, a budget is generated from the estimate and the flow of activity (the planning process) runs from the budget.  

Software estimation can significantly impact business value because it impacts business planning and budgeting. 

One challenge is that most organizations have a portfolio of software development work that is larger than they can accomplish, so they need a mechanism to prioritize projects based on the value they deliver to the business. This is where estimation can help: it predicts the future value of the project to the business and estimates the cost of the project in resources and time. Unfortunately, estimates are often created by the people who perform the actual day-to-day work, not by estimation experts. Worse, new estimates from the people doing the work are typically based on their recall of previous estimates, not on previous project actuals – very few organizations take the time to record the actuals after a project is completed. To estimate a software development project’s future business value most accurately, it is best to generate the estimate from the actuals of similar past projects and statistical modelling of the parameters that differ for the next project.
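As a sketch of what estimating from actuals can look like, the following Python fragment samples historical productivity rates (function points per person-month) from similar completed projects and reports a range rather than a single number. The history and the new project size are invented for the example.

import random

# Hypothetical actuals from similar completed projects:
# (delivered size in function points, total effort in person-months).
history = [(620, 48), (540, 39), (700, 58), (480, 33), (660, 50)]
productivity = [size / effort for size, effort in history]  # FP per person-month

new_project_size = 575  # estimated size of the next project, in function points

# Simple Monte Carlo over historical productivity: report a spread, not a point.
samples = sorted(new_project_size / random.choice(productivity) for _ in range(10_000))
p10, p50, p90 = (samples[int(len(samples) * p)] for p in (0.10, 0.50, 0.90))
print(f"Effort: ~{p50:.0f} person-months (80% range {p10:.0f}-{p90:.0f})")

The point is less the specific technique than the discipline: the estimate is anchored in recorded actuals, and the uncertainty is made explicit instead of hidden in a single figure.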

Of course, an estimate is only an estimate no matter who develops it.  You can’t predict all the factors that may require modifications to the plan.  This is where the estimation cone of uncertainty comes in.  The cone starts wide because there is quite a bit of uncertainty at the beginning around the requirements of a project.  As decisions are made and the team discovers some of the unknown challenges that a project presents, then the cone of uncertainty starts to get smaller towards the final estimate. 
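The commonly cited multipliers for the cone (Boehm's and McConnell's versions are the usual references; treat the exact values as indicative rather than definitive) show how the range narrows as decisions are made:

# Indicative cone-of-uncertainty multipliers (low, high) applied to a point estimate.
cone = [
    ("Initial concept",       0.25, 4.00),
    ("Approved definition",   0.50, 2.00),
    ("Requirements complete", 0.67, 1.50),
    ("Design complete",       0.80, 1.25),
    ("Code complete",         0.90, 1.10),
]

point_estimate_months = 12
for phase, low, high in cone:
    print(f"{phase:<22} {point_estimate_months * low:5.1f} to {point_estimate_months * high:4.1f} months")

A 12-month point estimate made at initial concept could legitimately turn out anywhere between 3 and 48 months, which is why locking budgets and plans to the earliest estimate is so risky.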

With regard to business value, the cone of uncertainty is significant because of the impact that the rigid adoption of early estimates can have on the budgeting and planning processes, especially if the software development effort is outsourced.


I see software estimation as both a form of planning and an input to the business planning process. However, there is a significant cross-section of the development community that believes #NoEstimates is the wave of the future. This is a movement within the Agile community based on the premise that software development is a learning process that will always involve discovery and be influenced by rapid external change. They believe that this dynamic environment of ongoing change makes detailed, up-front plans a waste of time, as software estimates can never be accurate. Using #NoEstimates techniques requires breaking down stories into manageable, predictable chunks so that teams can predictably deliver value. The ability to predictably deliver value gives organizations a tool to forecast delivery. In my view, the #NoEstimates philosophy isn’t really about not estimating – it is just estimating differently.

Whether you use classic estimation methodologies that leverage plans, and performance against those plans, to generate feedback and guidance, or follow the #NoEstimates mindset that uses both functional software and throughput measures as feedback and guidance, the goal is usually the same. Both are a form of planning and an input to the business planning processes aimed at driving the business value of each software development initiative.

Written by Michael D. Harris at 11:16

Microservices in Software Architecture

Software value can take many forms, but the ability to respond quickly and flexibly to new business challenges separates “just so” software architecture from high-value software architecture. To this end, over the past 20 years, we have seen many steps down the path from monolithic applications to client-server to service-oriented architectures (SOA). Now, organizations seeking to maximize the business value of their software architectures are adopting microservices architectures.

Microservices, as the name suggests, should represent the smallest unit of functionality that aligns to a core business capability. 

That’s not to say that each business process or transaction is a single microservice, but rather that business processes and transactions are “composed” using microservices. Sounds like SOA? Well, yes, it did to me too, at first. The major difference, I think, is that this time the industry has got out ahead of the curve, learned from the challenges that we all had (and have) with SOA, and built the necessary infrastructure to standardize and support microservices from the beginning. For example:

  • Microservices APIs are standardized.
  • Microservices are natively able to communicate with each other through industry-wide adoption of pre-existing standards like HTTP and JSON (a minimal sketch follows this list).
  • Microservices can be formally defined using standards like the RESTful API Modeling Language (RAML), so that developers reusing a microservice can depend on the functionality contained within it and resist the urge to rewrite their own version “just in case.”  Indeed, a collaboration hub like MuleSoft’s Anypoint Exchange encourages merit-based reuse of microservices by capturing the reviews and ratings of other developers who have used that microservice.
  • Microservices can be implemented in different programming languages.
  • Tools are available to manage the complexity of microservices, e.g. MuleSoft’s Anypoint Platform.
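As a minimal illustration of the second bullet above – plain HTTP plus JSON as the shared contract – here is a sketch of a tiny "customer lookup" service using only the Python standard library. The endpoint, port and payload are invented for the example, not taken from any particular product.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A deliberately tiny microservice: one business capability, one endpoint.
# The data and URL scheme are hypothetical.
CUSTOMERS = {"42": {"id": "42", "name": "Acme Corp", "status": "active"}}

class CustomerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths of the form /customers/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "customers" and parts[1] in CUSTOMERS:
            status, body = 200, json.dumps(CUSTOMERS[parts[1]]).encode()
        else:
            status, body = 404, json.dumps({"error": "not found"}).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CustomerHandler).serve_forever()

Any other service, written in any language, can consume this capability with an ordinary HTTP GET and JSON parsing, which is exactly the standards-based interoperability the bullets describe.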

The last bullet in the list above hints at some of the challenges of a microservices architecture. Development needs to be highly automated, with continuous integration and automated deployment, to keep track of all the microservices that need to be composed into a particular application. The adoption of a microservices approach also requires strong discipline from developers and the DevOps team. Fortunately, the “small is beautiful” nature of most microservices means that development teams can (and should) be small, so team discipline and communication can be maximized.

Implementing a microservices architecture is not something to try on your own for the first time.

There are a number of companies that have already developed strong experience in architecting and developing microservices, including our own Spitfire Group, which has completed a number of implementations, including a back-office upgrade for a real estate firm.

I believe that organizations should seriously consider enhancing the business value of their software by implementing a microservices architecture for their “leading edge” products or services. By “leading edge,” I mean those software-based products or services that are most subject to change as the business environment changes. They are probably customer-facing applications that have to respond to competitive changes in weeks, not months. They are probably going to be applications whose software value rests on their being fit for purpose all the time.

Written by Michael D. Harris at 13:52

How can my organization know if our Agile transformation is successful?

Scope of Report

It is commonly accepted that most organizations today have moved, are moving, or are evaluating a move toward the use of the Agile methodology. This report considers: (a) why organizations move to Agile; (b) what it means to adopt the Agile methodology and undergo a transformation; (c) how to measure whether your transformation is successful; and (d) how to ensure that the effects of the transformation continue.

Why the move to Agile?

An IT organization has certain responsibilities that relate directly to its business clients and the rest of the organization. From a business perspective, there are five core goals for any IT team.

  1. Effectively manage workflow

  2. Proactively manage end user expectations

  3. Accurately plan, budget and forecast deliveries

  4. Accurately estimate deliverables

  5. Show value to the organization and the client

Agile, when properly adopted, has been shown to be an effective development method that addresses each of these five goals. As with any new business strategy, the move to Agile would be an attempt to optimize business efficiencies that affect the bottom line and the client-supplier relationship.

What is Agile transformation?

Tom Cagley has suggested that a transformation is a “complete or major change in someone's or something's appearance, form, etc.”; in other words, a changeover, metamorphosis, transfiguration, or conversion. Transformation “evokes a long-term change program that will result in a large-scale, strategic change impacting a whole organization (or at least a significant part)”. For Agile, it means fostering an environment of teamwork, trust, and open communication to facilitate continuous or frequent delivery of working software.

When an organization embraces such a change, it typically goes through several stages. First, discovery -- a realization of the organization's needs and how you will attempt to fulfill those needs through a process solution; this stage is also characterized by knowledge gathering and process analysis. Second, proof-of-concept -- coordination through the organization to solicit sponsors and stakeholders and assign participants to test the solution; this is executed through a pilot program, or a sampling of teams using Agile, to generate interest and enthusiasm. Using the lessons learned, and the positive and negative feedback, the organization then moves to definition, a more structured approach to implementing Agile. The last phase is institutionalization, in which the transformation is complete and Agile is used throughout the organization's IT community. This is exemplified as not just a practice, but a ‘core foundation’ based upon innovation and business value.

Do we only start to measure when institutionalization occurs, or do we measure throughout all the process steps to know when we have arrived at transformation? Obviously, the answer is that we implement metrics as the process evolves, so that we can measure process outcomes, adjust the implementation as necessary, and continue to progress until the goal is reached.

What then do we measure to gauge transformation?

Scrum is a common approach to implementing Agile project management. Other Agile and Lean frameworks include Extreme Programming (XP), Crystal, and the Scaled Agile Framework (SAFe), to name a few. The measures and metrics mentioned in this paper can be applied to most, if not all, of them.

There are several key metrics that are used to measure the Scrum environment. To review the terms and the process, the following is the framework which is being measured.

  • A product owner creates a prioritized requirement list called a product backlog.
  • During sprint planning, the team pulls a subset from the product backlog to accomplish in a single sprint.
  • The team decides how to implement the features that are represented in the subset.
  • The team has to complete the work within a sprint of one to four weeks (two weeks being typical).
  • The team meets each day to assess its progress (daily Scrum or Stand-up).
  • During the sprint, the Scrum Master facilitates delivery of value.
  • By the end of the sprint, the features (work performed) meet the definition of done and are ready for delivery.
  • At the end of the sprint, the team engages in a sprint review and retrospective.
  • For the next sprint, the team chooses another subset of the product backlog and the cycle begins again.

The following are the recommended metrics based upon process measurement within that framework. All of them imply that there are organizational targets that once met would support the transformation.

1. Velocity and Productivity

According to the Scrum Alliance: “Velocity is how much product backlog effort a team can handle in one sprint. This can be estimated by using the historical data generated in previous sprints, assuming the team composition and sprint duration are kept constant. Once established, velocity can be used to plan projects and forecast releases.”

Velocity is a measure of throughput - an indication of how much, on average, a particular team can accomplish within a time box. Velocity can be gauged by the number of user stories delivered in a sprint, by the number of story points delivered in a sprint, or by the number of function points delivered in a sprint. Since user stories are not generally considered equal in complexity or time to develop, they have too much variability to be a reliable measure. Story points are subjective and are generally only consistent within a stable team. Again there may be too much variability to measure at an organization level, or across teams.

While story points provide the micro view within teams, we need some way to measure the macro view across multiple teams. Function points can be used at the inception of the project to size the backlog, to determine the deliverability of the minimum viable product and to capture actual size at completion. This allows a quantitative view of volatility. In addition, function points are a rules-based measure of size and can therefore be applied consistently; they are useful for standardizing velocity or productivity. Productivity is size/effort, expressed as function points delivered per FTE or team member. Using function points as a basis for size, an organization can compare performance within dynamic teams and against the industry through the use of agile benchmark data.
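A minimal sketch of these calculations, with invented sprint data, shows how velocity and productivity fall out of the same function point counts:

# Hypothetical sprint history for one team; sizes in function points (FP).
sprints = [
    {"fp_delivered": 34, "team_size": 6},
    {"fp_delivered": 41, "team_size": 6},
    {"fp_delivered": 38, "team_size": 7},
    {"fp_delivered": 45, "team_size": 7},
]

total_fp = sum(s["fp_delivered"] for s in sprints)
velocity = total_fp / len(sprints)                              # FP per sprint
productivity = total_fp / sum(s["team_size"] for s in sprints)  # FP per team member per sprint

print(f"Average velocity:     {velocity:.1f} FP per sprint")
print(f"Average productivity: {productivity:.2f} FP per team member per sprint")

Because the size measure is rules-based, the same calculation can be applied to any team and compared against agile benchmark data.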

2. Running Tested Features (RTF)

In general terms, the Running Tested Features (RTF) metric reflects “how many high-risk and high-business-value working features were delivered for deployment. RTF counts the features delivered for deployment denominated per dollar of investment. The idea is to measure, at every moment in the project, how many features/stories pass all their (automated) acceptance tests and are known to be working”. The two components are time (daily) and the number of running, tested features ready for delivery to the business client. This metric is often used in environments where operations or production environments are “owned” by separate organizations (often true in DoD and government environments).

3. Burn down/Burn up charts

According to Wikipedia, “A burn down chart is a graphical representation of work left to do versus time. The outstanding work (or backlog) is often on the vertical axis, with time along the horizontal. That is, it is a run chart of outstanding work. It is useful for predicting when all of the work will be completed.”

A burn up chart tracks progress towards a project's completion. In the simplest form, there are two lines on the chart. The vertical axis is amount of work, and is measured in units customized to your own project. Some common units are number of tasks, estimated hours, user stories or story points. The horizontal axis is time, usually measured in days.

These charts allow you to identify issues (e.g. scope creep) so adjustments can be made early in the cycle. They are also effective tools for communicating with clients and management. The advantage of a burn up chart over a burn down chart is the inclusion of the scope line. It also allows you to visualize a more realistic completion date for the project by extending a trend line from the scope line as well as from the completion line; where the two trend lines meet is the estimated time of completion.
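As a sketch of the projection a burn up chart gives you, the fragment below extrapolates both the completion trend and the scope trend and finds where they meet. The sprint data are invented, and a straight-line average stands in for whatever trend-fitting you prefer.

# Hypothetical cumulative data per sprint, in story points.
completed = [0, 18, 35, 50, 68]        # cumulative work completed
scope     = [100, 104, 104, 110, 112]  # total scope (captures scope creep)

def rate(series):
    # Average change per sprint - a simple stand-in for a fitted trend line.
    return (series[-1] - series[0]) / (len(series) - 1)

burn_rate, scope_rate = rate(completed), rate(scope)
if burn_rate <= scope_rate:
    print("The completion trend never catches the scope trend - replan.")
else:
    remaining_sprints = (scope[-1] - completed[-1]) / (burn_rate - scope_rate)
    print(f"Projected completion in roughly {remaining_sprints:.1f} more sprints")

With this data the team completes about 17 points per sprint while scope grows by about 3, so roughly three more sprints are needed – a more realistic answer than dividing the original backlog by the raw velocity.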

4. Technical Debt

Technical debt is a measure of the corners cut when developing functionality; for example, to prove that the functionality can be implemented and is desirable, the code may be written without full error trapping. As technical debt increases, it can become harder to add new features because of the constraints imposed by previous poor coding. The measurement of technical debt was introduced in parallel with Extreme Programming (XP), which introduced the concept of “refactoring,” or regularly revisiting inefficient or hard-to-maintain code to implement improvements. XP builds refactoring – restructuring and improving the code – into the development process. Technical debt is typically measured using code scanners, which use proprietary algorithms to generate a metric based on the number of best-practice rules that a particular piece of code infringes.

5. Defect Removal Effectiveness (DRE) and Defect Escape Rate (DER)

Measuring quality has always been a key metric, regardless of the life cycle methodology. The two key metrics in this area measure the ability to remove defects prior to release, where:

DRE = (defects removed prior to release) / (defects removed prior to release + defects found after release) x 100%

DER = (defects found after release) / (defects removed prior to release + defects found after release) x 100%

The question usually arises over the time frame for a ‘release’. Quite simply, it depends on your delivery schedule – if you do a real release every two weeks, then that may be your measure of time. It is important to be consistent. As with any defect measurement, you will have to decide which priority defects are counted and whether they are all treated equally in the equation.
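A small worked example makes the two formulas concrete; the defect counts are invented and assume you have already filtered to the priorities you agreed to count:

# Hypothetical defect counts for one release, already filtered to agreed priorities.
removed_before_release = 92   # defects found and removed in testing prior to release
found_after_release    = 8    # defects reported after release, within the agreed window

total = removed_before_release + found_after_release
dre = removed_before_release / total * 100   # Defect Removal Effectiveness
der = found_after_release / total * 100      # Defect Escape Rate

print(f"DRE = {dre:.1f}%   DER = {der:.1f}%")   # 92.0% and 8.0% for this example

Tracked release over release, the trend in these two numbers matters more than any single value.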

6. Work Allocation

There are three team metrics that can be used to support the outcomes of other metrics (cause and effect). The organization makes a sizable investment in building a solid cross-functional team with the right expertise for the product development. To protect that investment, there is a key focus on building core product teams with deep product and technology knowledge. Rotating team members reduces team scalability, as continuity is constantly broken between releases. The following metrics are mainly targeted at gauging the impact of team assignments, team changes between releases, and how time is actually used – all of which can affect delivery and costs:

1) Team utilization is quantified by the Team Utilization Quotient (TUQ). TUQ = the average proportion of the project duration for which team members were actually assigned to the project.

Example: utilization for a team of 10 resources on a 5-month project.
- 4 resources joined at the beginning (100% of the project remaining)
- 2 resources joined after 2.5 months (50% of the project remaining)
- 4 resources joined with 25% of the project remaining

TUQ = {(4*1) + (2*0.5) + (4*0.25)} / 10 = 0.60 = 60%
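That calculation translates directly into a few lines of Python; the joining fractions are taken from the example above and are, of course, illustrative:

# Each tuple: (number of resources, fraction of the project they were present for).
assignments = [(4, 1.00), (2, 0.50), (4, 0.25)]

team_size = sum(count for count, _ in assignments)
tuq = sum(count * fraction for count, fraction in assignments) / team_size
print(f"TUQ = {tuq:.0%}")   # 60% for the example above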

2) Team scalability is quantified by the Team Scalability Quotient (TSQ): TSQ = % of the team retained from the previous release

In the TUQ example, we built a team of 10 people. The team had low utilization because of how team assignments were made. Assuming the team is ready to take on the next version of the product, replacing half of the team members with new members for the new product release reduces team scalability by 50%.

The third team metric is Work Allocation. This is a simple chart showing what percentage of available time was spent across all work categories for the sprint. Time activities should not only cover development activities but must also include the time spent with clients, customers and stakeholders. In Agile, which fosters a cooperative environment, the time needed for communication and feedback is as important as the time to code and test.

The use of these metrics should encourage resource managers, Scrum Masters and Scrum coaches to carefully consider how time and resource allocation impact team efficiency and scalability. The transformation of the organization is from hero building to team building, and if you want to gain a fair ROI, you will invest in developing cross-functional teams. Obviously, disrupting teams will not generate the delivery responses you seek. Conversely, as team dynamics are fostered and improve, so will velocity.

7. Customer Satisfaction and Team Satisfaction

Last but certainly not least, one of the measures which is highly revealing of performance is customer satisfaction. Customer satisfaction answers the question of whether the client is happy with the delivery, quality and costs of the functionality being delivered. Satisfaction provides a view into how the team is perceived by the clients.

Team satisfaction measures how the team is affected by agile adoption. Agile transformation provides an environment that values technical innovation, collaboration, teamwork, and open and honest communication which yields higher team satisfaction. Team satisfaction is positively correlated to productivity. Team satisfaction can be an indicator of how well the organization has implemented Agile.

How do you know that the effects of the transformation will continue?

The most common answer is “you don’t know for sure.” As a matter of record, experience has shown us that without continued measurement and adequate coaching, teams fall into entropy and lose efficiency. A measurement feedback model should be in place to monitor performance levels, to know when to get coaching, and to address process improvements as needed.

At any point in the transformation, an independent assessment may be in order to determine where you are in comparison to where you want to be. Feedback from an assessment is critical for developing a fact-based plan for improvement.

Conclusion

The journey to transformation involves a cultural organizational change which can be thoroughly measured using common Agile metrics. The efficiencies of the new Agile environment can be quantified, maintained and improved through the use of a continuous measurement framework and periodic independent assessments.

References

Tom Cagley, “So You Want A Transformation!” SPaMCAST, November 2015. https://tcagley.wordpress.com/2015/11/10/so-you-want-a-transformation/

Scrum Alliance, “Agile Metrics: Running Tested Features,” 9 June 2014. https://www.scrumalliance.org/community/articles/2014/june/agile-metrics-(1)

Wikipedia, “Burn down chart.” https://en.wikipedia.org/wiki/Burn_down_chart

Tom Cagley, “Metrics Minute: Burn-Up Chart.” https://tcagley.wordpress.com/?s=burn+up

Tom Cagley, “Metrics Minute: Burn-Down Chart.” https://tcagley.wordpress.com/?s=burn+down

Clarios Technology, “What is a burnup chart?” http://www.clariostechnology.com/productivity/blog/whatisaburnupchart

Techopedia, “Technical Debt.” https://www.techopedia.com/definition/27913/technical-debt

XBOSOFT, “Defect Removal Effectiveness Agile Testing Webinar Q&A.” https://xbosoft.com/defect-removal-effectiveness-agile-testing-qa/

Agile Helpline, “Agile Team's Efficiency in Various Resource Allocation Models.” http://www.agilehelpline.com/2011/05/agile-team-efficiency-in-various.html

David Herron, “Agile Metrics: What, When, How” (webinar), DCG Software Value, November 2015.

Written by Default at 05:00

Daily Stand-Up Meetings for Distributed Teams

Distributed Agile teams require a different level of care than co-located teams in order to ensure that they are as effective as possible. This is even more true for a team that is working through its forming-storming-norming process. The team and communication are core Agile concepts, and both are key to the success of distributed Agile teams. Daily stand-up meetings are one of the most important communication tools for teams using Scrum or other Agile/Lean frameworks, so it’s important that they function properly.

Here are some tips for making daily stand-ups work for distributed teams:

  1. Deal with the time zone issue. There are two primary options to deal with time zones. The first is to keep the team members within three or four time zones of each other. Given typical sourcing options, this tends to be difficult. A second option is to rotate the time for the stand-up meeting from sprint to sprint, so that everyone loses a similar amount of sleep (share the pain). One solution for when distributed teams can’t overlap is to have one team member (rotate) stay late or come in early to overlap work times.
  2. Identify and attack blockers between stand-ups. Typically, on distributed teams, all parties do not work at the same time. Team members should be counseled to communicate blockers to the team as soon as they are discovered, so that something discovered late in the day in one time zone does not affect the team in a different time zone (where they might just be starting to work). One group I worked with had stand-ups twice each day (at the beginning of the day and at the end of the day) to ensure continuous communication.
  3. Push status outside of the stand-up. A solution suggested by Matt Hauser is to have the team answer the classic three questions (What did you do yesterday? What will you do today? Is there anything blocking your progress?) on a WIKI or similar shared document for everyone on the team to read before the stand-up meeting. This helps focus the meeting on planning or dealing with issues.
  4. Vary the question set being asked. The process of varying the question set for each meeting keeps the team focused on communication rather than giving a memorized speech. For example, ask:
    1. Is anyone stuck?
    2. Does anyone need help?
    3. What did not get completed yesterday?
    4. Is there anything everyone should know?

This technique can be used for non-distributed teams as well.

  5. Ensure that everyone is standing. This is code for making sure that everyone is paying attention and staying focused. Standing is just one technique for helping team members stay focused. Other tips include banning cell phones and side conversations.
  6. Make sure the meeting stays “crisp.” Stand-up meetings by definition are short and to the point. The team needs to ensure that the meeting stays as disciplined as possible. All team members should show up on time and be prepared to discuss their role in the project. Discussion must include the willingness to ask for help and to provide help to team members.
  7. Use a physical status wall. While the term “distributed” screams tool usage, using a physical wall helps to focus the team. The simplicity of a physical wall takes the complexity of tool usage off the table, so that the focus can be on communication. Use of a physical wall in a distributed environment means using video to show the act of someone on the team physically moving tasks on the wall (after the fact a picture can be provided to the team). If video is not available, use a tool that everyone has access to. Keep tools as simple as possible.
  8. Don’t stop doing stand-ups. Stand-up meetings are a critical communication and planning event; not doing stand-ups for a distributed team is an indicator that the organization should go back to project manager/plan-based methods.

Like any other distributed team meeting, having good telecommunication/video tools is not only important, it is a prerequisite. If team members can’t hear each other, they cannot communicate.

Stand-ups are nearly ubiquitous in Agile. However, despite their simplicity, the added complexity of distributed teams can cause problems. The whole team is responsible for making the stand-up meetings work. While the scrum master may take the lead in ensuring the logistics are right or in facilitating the session when needed, everyone needs to play a role.

Tom Cagley
VP of Consulting & Agile Practice Lead

Written by Tom Cagley at 05:00

"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!