David Consulting Group Ltd., Trading as DCG Software Value, Accepted as a G-Cloud 8 Supplier

DCG Software Value, a global provider of Function Point Analysis, software estimation, and Agile support services, has officially been accepted as a supplier for the Crown Commercial Service (CCS) G-Cloud framework, G-Cloud 8.

The G-Cloud framework aims to make it easier for public sector organisations to procure information technology services from approved suppliers. Interested buyers can use the “Digital Marketplace” to search for services covered by the G-Cloud frameworks. Suppliers are carefully evaluated during the tender process, and pre-agreed terms and conditions offer customers sound contractual safeguards. The agreement is fully EU-compliant, saving customers the time and money associated with conducting their own procurement exercise.

The goal of DCG Software Value is to make software value visible both to IT and to the business side of the organisation. The company has successfully helped a number of UK organisations achieve such goals and will continue to work with public sector organisations via G-Cloud 8, helping to implement improvements that make software development deliver value more cost-effectively.

The company’s available services include:

  • Functional Sizing
  • Vendor Estimate Validation and Estimation On Demand
  • Scaled Agile Framework
  • Training – Functional Sizing and/or Estimating
  • AgilityHealth Radar
  • Agile JumpStart

Public sector buyers can find DCG’s services via the Digital Marketplace.

About DCG Software Value
DCG Software Value is a global provider of Function Point Analysis, software estimation, and Agile support services. Since 1994, companies of all sizes that depend on their software have relied on DCG to foster improved decision making and resource management and to quantifiably impact their bottom line. DCG maintains offices in Newcastle (UK), Philadelphia, and Colorado. DCG Software Value is the operating name of Objective Integrity, Inc., a Pennsylvania corporation.

For more information, visit www.softwarevalue.com.

About Crown Commercial Service
The Crown Commercial Service (CCS) works with both departments and organisations across the whole of the public sector to ensure maximum value is extracted from every commercial relationship and improve the quality of service delivery. The CCS goal is to become the “go-to” place for expert commercial and procurement services.

For more information, visit www.gov.uk/ccs.


Tom Cagley's Practical Software Quality and Testing Conference Presentation


DCG Software Value, a global provider of software analytics, software quality management, and Agile support services, presented at the 2016 Practical Software Quality and Testing (PSQT) Conference in San Diego from August 14-19.

Tom Cagley, Vice President of Consulting and Agile Practice Lead at DCG Software Value, gave three presentations at this year's conference.

His first presentation, "Discover the Quality of Your Testing Process," focused on the Test Maturity Model integration (TMMi), a framework for software testing. During the presentation, Tom shared a case study to discuss how the TMMi appraisal process works, providing attendees with a tool to complete a high-level assessment of their organization.

His second presentation, "Scaling Agile Testing Using the TMMi," was part of the Agile track. Tom outlined the TMMi framework and provided a process for using environmental, technical, and project context to effectively integrate testing into an Agile development environment and to measure the effectiveness of the process.

His final presentation was also a part of the Agile track. "Impact of Agile Risk Management on Software Quality" discussed how to implement best practices for mitigating risk in an Agile environment.

Tom was very pleased with the outcome of his presentations, noting that he was able to connect with the audience on each of the topics. He hopes that his contributions to the conference will help attendees' organizations develop the ability to consistently and efficiently produce high-quality software that works as expected and benefits the bottom line.

If you have any questions about the presentations, feel free to contact Tom Cagley at t.cagley@softwarevalue.com.


What Does An Agile Coach Deliver?

I am an Agile Coach, and I'm often asked about the role that Agile Coaches play in an organization. On the most basic level, Agile Coaches help teams and organizations embrace Agile and maximize the delivery of business value from development. We use terms like "enable" and "facilitate" to describe how we help organizations and teams transform. But what does an Agile Coach actually do? It's a variable mix of activities: consulting, cajoling, training, arbitration, and mentoring.

Consulting

Coaches sometimes act as consultants. A consultant will actively involve him or herself in the game. Sometimes an Agile Coach will have to actively participate in performing a task or activity so that the team can see the technique in action.

Cajoling

Coaches cajole the team or organization, with gentle urging or coaxing, to change behaviors that don’t live up to Agile principles and values. In many cases, this cajoling is underscored by the war stories a Coach can tell about the trials and tribulations that will ensue if certain behaviors are not corrected. This experiential base is important: it lets the Coach hold the (metaphorical) moral high ground needed to persuade the team or organization.

Training

Coaches deliver training, and training comes in many shapes and sizes. Coaches can deliver training on a just-in-time or ad hoc basis, based on their own observations of how work is being done. The goal of ad hoc training is to ensure that the team or teams understand how to apply specific techniques as they are applying them. This leverages a principle from adult learning: adults retain knowledge better when they can apply it immediately. None of this excludes leading and organizing training as part of a more formal organizational change program.

Arbitration

Coaches arbitrate conflicts and difficult decisions. Projects, whether they transform whole organizations or implement a set of simple user reports, always include the need to make decisions. Coaches help organizations make decisions so that they can move forward with a minimal loss of momentum. Facilitation for an Agile organization is a skill that is part art and part science – think emotive negotiation (or, as a friend of mine calls it, “family counseling for teams”). The best Coaches teach these skills to the teams and organizations they work with.

Mentoring

Coaches mentor. A mentor is a trusted counselor who provides guidance, advice, and training, usually at an intimate (one-on-one) level. A mentor needs to be dependable, engaged, authentic, and tuned into the needs of the mentee, so that the transfer of guidance is safe and efficient.

So, when we say that an Agile Coach enables and facilitates, what that really means is that they consult, cajole, train, arbitrate, and mentor. The art of being a good Coach is knowing what mix of these activities is appropriate for any specific situation. And, as many readers are probably aware, a good Agile Coach can make or break an Agile transformation.

Tom Cagley
VP of Consulting & Agile Practice Lead


Simple Metrics to Measure Value – CoD and WSJF

When discussing value, determining how to measure that value is critical. As I write my second book, “The Business Value of Software,” I find myself frequently coming back to two simple techniques that help organizations measure the business value of their software development projects: Cost of Delay (CoD) and Weighted Shortest Job First (WSJF).

CoD is the hourly, daily, or monthly cost associated with NOT starting a project. When a project is delayed, there is waste (e.g., wait times, inventory costs, opportunity costs), and this waste can negatively impact the bottom line.

Cost of Delay =
User or Business Value + Time Criticality + Risk Reduction or Opportunity Enablement Value

WSJF is a companion metric that prioritizes projects by putting the one with the highest WSJF score at the top of the list. It is calculated by dividing the CoD by the duration of the project.
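To make the arithmetic concrete, here is a minimal sketch in Python of how CoD and WSJF might be computed to order a backlog. The project names, the relative 1-10 scores for each CoD component, and the durations are all invented for illustration.

    # A minimal sketch of CoD/WSJF prioritization with invented data.
    from dataclasses import dataclass

    @dataclass
    class Project:
        name: str
        business_value: int    # relative user/business value (1-10)
        time_criticality: int  # relative urgency (1-10)
        risk_opportunity: int  # risk reduction / opportunity enablement (1-10)
        duration: float        # estimated duration in months

        @property
        def cost_of_delay(self) -> int:
            # CoD = User/Business Value + Time Criticality
            #       + Risk Reduction or Opportunity Enablement Value
            return self.business_value + self.time_criticality + self.risk_opportunity

        @property
        def wsjf(self) -> float:
            # WSJF = Cost of Delay / duration
            return self.cost_of_delay / self.duration

    projects = [
        Project("Billing rewrite", 8, 3, 5, 6.0),
        Project("Fraud alerts", 6, 9, 7, 2.0),
        Project("UI refresh", 4, 2, 1, 3.0),
    ]

    # Do the job with the highest WSJF first.
    for p in sorted(projects, key=lambda p: p.wsjf, reverse=True):
        print(f"{p.name}: CoD={p.cost_of_delay}, WSJF={p.wsjf:.1f}")

Running this puts the short, urgent "Fraud alerts" job (WSJF 11.0) ahead of the larger "Billing rewrite" (WSJF roughly 2.7), which is exactly the behavior WSJF is designed to produce.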

These two techniques are extremely helpful in prioritizing software development initiatives based on economics. They enable an organization to prevent the frequent starting and stopping of projects that is so common in the software development world, and they allow for a continuous flow of product development based on metrics that drive business value.

Donald Reinertsen, the author of “The Principles of Product Development Flow: Second Generation Lean Product Development,” has said, “If you only quantify one thing, quantify the cost of delay.” I wholeheartedly agree with Reinertsen, and I also encourage organizations to quantify WSJF. By measuring CoD, software development organizations will eliminate overhead associated with delays, streamline operations, and, ultimately, produce more business value. By adding WSJF into the equation, they’ll be able to prioritize their projects such that they’re continuously delivering the greatest value to their business units.

I’m always interested in how software development organizations are using these two techniques. Please share the successes you’ve realized when utilizing CoD and/or WSJF.


Mike Harris
CEO


How can I establish a software vendor management system?

Scope of Report

This month’s report will focus on two key areas of vendor management. The first is vendor price evaluation, which involves projecting the expected price for delivering on the requirements. The second is vendor governance: the process of monitoring and measuring vendor output through the use of service level measures.

Vendor Price Evaluation

Vendor price evaluation seeks to enable pricing based on an industry-standard unit of measure for functionality, putting the buyer and seller on an even playing field for pricing, bid evaluation, and negotiation.

Organizations leverage third party vendors for the development of many of their software initiatives. As such, they are continuously evaluating competing bids and looking for the best value proposition.

Being able to properly size and estimate the work effort is critical to evaluating the incoming vendor bids. Furthermore, an internally developed estimate provides a stronger position for negotiating terms and conditions. The effective delivery of an outsourced project is in part dependent on an open and transparent relationship with the vendor. A collaborative estimating effort provides for greater transparency, an understanding of potential risks, and a collective accountability for the outcomes.

To better control the process, an economic metric is recommended to provide the ability to perform true value analysis. This metric is based on historical vendor spending over a diverse sampling of outsourced projects, creating an experiential cost-per-unit “baseline.” Knowing the cost-per-unit price gives you leverage in negotiation: instead of using hours billed as a fixed-price measurement, you know the functional value of the deliverables, which allows billing on a per-unit-delivered basis.

To achieve this, we recommend the use of function points as a measure of the functional size of the project. Function Points (FPs) provide an accurate, consistent measure of the functionality delivered to the end user, independent of technology, with the ability to execute the measurement at any stage of the project, beginning at completion of requirements. Abundant function point-based industry benchmark data is available for comparison.

By comparing historical cost-per-FP to industry benchmark data, organizations can quickly determine whether they have been over- or under-spending. Under-spending may not seem like a problem, but under-bidding is an established vendor tactic to win business at a price that may not be sustainable. If forced to sustain an unrealistically low price, vendors may respond by populating project teams with progressively cheaper (and weaker) staff, to the point where quality drops and/or delivery dates are missed. At that point, having great lawyers to enforce the original contract doesn’t help much.
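As an illustration, here is a minimal sketch of deriving a cost-per-FP baseline and comparing it to a benchmark. The historical project data and the benchmark figure are invented; real values would come from an organization's own history and from industry sources.

    # A minimal sketch with invented historical data and a hypothetical benchmark.
    historical_projects = [
        {"name": "Claims portal", "cost": 910_000, "function_points": 700},
        {"name": "Rate engine", "cost": 480_000, "function_points": 400},
        {"name": "Agent mobile app", "cost": 260_000, "function_points": 180},
    ]

    total_cost = sum(p["cost"] for p in historical_projects)
    total_fp = sum(p["function_points"] for p in historical_projects)
    baseline_cost_per_fp = total_cost / total_fp

    benchmark_cost_per_fp = 1_150  # hypothetical industry benchmark figure

    print(f"Baseline: ${baseline_cost_per_fp:,.0f}/FP vs. benchmark ${benchmark_cost_per_fp:,}/FP")
    if baseline_cost_per_fp > benchmark_cost_per_fp:
        print("Historically over-spending relative to the benchmark.")
    else:
        print("At or below benchmark -- check that vendor bids are sustainable.")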

Implementing this approach provides an organizational methodology for bid evaluation and a metric for determination of future financial performance.

Vendor Governance

The key to a successful vendor governance program is an effective set of Service Level Agreements (SLAs), backed by historical or industry-benchmarked data and agreed with the vendor.

The measures, data collection, and reporting will depend on the SLAs and/or the specific contract requirements with the software vendor. Contracts may be based strictly on cost-per-FP or they may be based on the achievement of productivity or quality measures. A combination can also be used with cost-per-FP as the billable component and productivity and quality levels used for incentives and/or penalties.
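The sketch below illustrates one way such a blended model might work. The billing rate, productivity target, quality threshold, and the 5% adjustment are all invented for illustration; a real contract would negotiate these values.

    # A minimal sketch of a blended contract: cost-per-FP billing with
    # productivity and quality incentives/penalties (all figures hypothetical).
    def invoice(delivered_fp: float, effort_hours: float, uat_defects: int,
                rate_per_fp: float = 1_000.0,       # hypothetical billing rate
                target_hours_per_fp: float = 10.0,  # hypothetical productivity SLA
                max_defects_per_100fp: float = 3.0  # hypothetical quality SLA
                ) -> float:
        base = delivered_fp * rate_per_fp

        # Productivity incentive/penalty: +/- 5% around the target.
        hours_per_fp = effort_hours / delivered_fp
        adjustment = 0.05 if hours_per_fp <= target_hours_per_fp else -0.05

        # A quality breach overrides any productivity bonus.
        if uat_defects / delivered_fp * 100 > max_defects_per_100fp:
            adjustment = -0.05

        return base * (1 + adjustment)

    # 500 FP delivered in 4,800 hours with 9 UAT defects earns the 5% bonus.
    print(f"${invoice(500, 4_800, 9):,.0f}")  # $525,000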

There are a number of key metrics that must be recorded, from which other measures can be derived. The key metrics commonly used are: Size, Duration, Effort, Staff, Defects, Cost, and Computer Resources. For each key metric, a decision must be made as to the most appropriate unit of measurement. See the glossary at the end of this report for a list of key metrics and associated descriptions.

The service level measures must be defined in line with business needs and, as each business is different, the SLAs will differ for each business. They may be the typical quality, cost, and productivity SLAs, or more focused operational requirements such as performance, maintainability, reliability, or security. All SLAs should be based on either benchmarked or historical data and agreed with the vendor. Most SLAs are derivatives of the base metrics and thus quantifiable.

One output measure to consider adding is business value; fundamentally, the reason we are developing any change should be to add business value. Typically, business value isn’t an SLA but it can add real focus on why the work is being undertaken and so we are now recommending it. The business value metric can be particularly helpful in the client-vendor relationship because it helps to align the business priorities of the client and the vendor (or to highlight any differences!).

The key is to define the measures and the data components of the measures prior to the start of the contract to avoid disputes during the contract period.

Reporting

Measurement reports for vendor management are typically provided during the due diligence phase of vendor selection and during the execution of the contract. During due diligence, the reports should communicate the client’s expectations regarding the chosen measures (e.g., cost-per-FP or hours-per-FP) to the vendor. During the life of the contract, reports should be produced to show compliance with contract measures and to aid in identifying process improvement opportunities for all parties.

The typical reporting for vendor management consists of balanced scorecards for senior level management, project reports for project managers, and maintenance reports for support areas.

Balanced scorecard

These reports provide a complete picture of all measures required for managing the contract. These are typically summary reports that include data from multiple projects. The Balanced Scorecard Institute states that, “the balanced scorecard was originated by Robert Kaplan and David Norton as a performance measurement framework that added strategic non-financial performance measures to traditional financial metrics to give managers and executives a more 'balanced' view of organizational performance”.

In the case of software vendor management, the scorecard should have multiple measures that show contract results. For example, even though productivity may be a key ‘payment’ metric, quality should also be included to ensure that in efforts to improve productivity, quality does not suffer. The report should also include a short analysis that explains the results reported to ensure appropriate interpretation of the data.

Project reporting

These reports focus on individual projects and are provided to project teams. The reports should contain measures that support the contract and provide insight into the project itself. Analysis should always be provided to assist teams with assessing their project and identifying process improvement opportunities to better meet the contract requirements.

Maintenance reporting

These reports are at an application level and would be provided to support staff. This data would provide insight into the maintenance/support work being conducted. Again, this would be in support of specific contract measures, but it can also be used to identify process improvement opportunities and/or identify which applications may be candidates for redesign or redevelopment.

Data Definition and Collection

Data definition and collection processes need to be developed to support the reporting. As stated in the book IT Measurement: Practical Advice from the Experts, this step should “focus on data definition, data collection points, data collection responsibilities, and data collection vehicles.” Who will collect the data? When will it be collected? How will it be collected? Where will it be stored? These questions drive the implementation of the contract measurements, but they all depend on the most difficult step: data definition.

Data definition involves looking at all of the data elements required to support a measure and ensuring that both the client and the vendor have the same understanding of the definition. Since most software vendor contracts utilize productivity (FP/effort), this report will focus on defining the data elements of FPs and effort by way of example.

Function Point Data Definition

Function point guidelines should be developed for all parties to follow. These should specify which industry standard will be used (e.g., International Function Point Users Group Counting Practices Manual 4.x) as well as any company-specific guidelines. Company-specific guidelines should not change any industry-standard rules, but should provide guidance on how to handle specific, potentially ambiguous situations. For example, how will purchased packages be counted: will all functions be counted, or only the “customized” functions? Another consideration is how changes to requirements throughout the lifecycle will be counted. For example, some organizations count functions once per project unless a changed requirement is introduced late in the life cycle (e.g., during system testing), in which case a function may be counted more than once. Guidelines need to be established up front for as many situations as possible, but they may need to be updated throughout the life of the contract as new situations arise.

Effort Data Definition

Effort can be one of the more contentious data elements to define in software vendor management systems. It is important to determine which life cycle activities are included in each aspect of the software vendor contract. For instance, if productivity is an SLA or a payment incentive, vendors will want to exclude certain activities that clients may want to include. One example is setting up a test environment for a project: a vendor may want to exclude this from the productivity calculations, while a client may think it should be included.

A useful rule of thumb is that if an activity is required specifically for the project, the effort should be included; if the activity sets up something for all projects to use, it should be excluded. In the test environment example, if the vendor is building scenarios or simulators to test specific project functionality, the effort should be included in the project productivity calculation. If the vendor is installing servers to host test data and tools, the effort should be excluded. There are more effort categories to examine than can be covered in this report. A separate decision is whether to include “overtime” hours: the recording of overtime in time management systems varies widely, even within organizations, because many software development employees are not paid for overtime. The important thing is for vendors and clients to work together to define and document the guidelines.
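As a rough illustration of this rule of thumb, the sketch below classifies invented activity records by whether they are project-specific and feeds only the included hours into an hours-per-FP productivity calculation.

    # A minimal sketch of the effort inclusion rule of thumb (invented data).
    activities = [
        {"task": "Build test scenarios for claim workflow", "hours": 120, "project_specific": True},
        {"task": "Install shared test servers", "hours": 80, "project_specific": False},
        {"task": "Code claim workflow", "hours": 400, "project_specific": True},
    ]

    # Include only effort required specifically for the project.
    included_hours = sum(a["hours"] for a in activities if a["project_specific"])

    function_points = 60  # hypothetical project size

    print(f"Included effort: {included_hours} hours")
    print(f"Productivity: {included_hours / function_points:.1f} hours/FP")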

Code Quality Analytics

In addition to the standard SLAs and beyond functional testing, code analytics and an application analytics dashboard can provide an IT organization with key insights into code quality, reliability and stability of the code being delivered by the vendor.

Code analytics tools, such as those provided by CAST Software, analyze the delivered code to detect structural defects, security vulnerabilities and technical debt. The metrics generated by these tools can be used as SLAs.

There is value in understanding what is being developed throughout the lifecycle. In this way, security, performance, and reliability issues can be understood and addressed earlier, while the software is still in development.

In a waterfall development environment, code analytics can be executed at defined intervals throughout the lifecycle and after deployment to production. In an Agile framework, code analytics can be run as part of each code build, at least once per sprint, and code quality issues can be resolved in real time.

Having this information early in the lifecycle enables fact-based vendor management. Code analytics, along with traditional measurements, provides the buyer with the information needed to manage vendor relationships and ensure value from their IT vendor spend.

Conclusion

A robust vendor management system includes:

  • Pricing evaluation using industry-standard measures to promote meaningful negotiations;

  • Service level metrics backed by historical or industry-benchmarked data; and

  • Code analytics to ensure quality, reliability, and stability are built into the systems being developed.

With these components in place, an organization can efficiently manage vendor risk, monitor and evaluate vendor performance, and ensure value is derived from every vendor relationship.

Sources:

  • Balanced Scorecard Institute, “About the Balanced Scorecard,” www.balancedscorecard.org/resources/about-the-balanced-scorecard

  • International Function Point Users Group, IT Measurement: Practical Advice from the Experts, Addison-Wesley (Pearson Education, Inc.), 2002, Chapter 6, “Measurement Program Implementation Approaches.”

  • CAST Software, Application Analytics Software, http://www.castsoftware.com/

Project size can be described in several ways, with source lines of code (SLOC) and function points being the most common.

     

Glossary

Function Points

The industry-standard approach to functional sizing is Function Points (FPs). FP analysis is technology-agnostic and can be performed at any point in the lifecycle.

FP analysis provides real value as a sizing tool. Even in software developed using the latest technologies, the five components of function point analysis still exist, so function point counting remains a valuable way to measure software size. Because an FP count can be based on a requirements document or user stories, and the expected variance between two certified function point analysts is between 5% and 10%, an accurate and consistent measure of project size can be derived. And because FP analysis is based on the user’s view and is independent of technology, it works just as well as technology evolves.

SLOC

Source lines of code (SLOC) is a physical view of size, but it can only be measured at the end of a project.

It has some inherent problems: inefficient coding produces more lines of code, and determining the SLOC size of a project before it is coded is itself an estimate.

However, SLOC can be used retrospectively to review a project’s performance. Consider using effective source lines of code (ESLOC) to remove the expert/novice factor of extra lines of code noted above.

Code analysis tools such as CAST can provide excellent diagnostics and even FP counts based on the code.

Story Points

Projects in an Agile framework typically use story points to describe their relative size. Story points work well within a team but are ineffective for comparing size across an organization.

For example, a team can double its velocity simply by doubling the number of story points it assigns to each story. Story points also vary from one team to another, as they are only meaningful to the team, and sometimes only to the sprint, in question.

Time (Duration)

Simply the time measure for completing the project and/or supporting the application. This is calendar time, not effort.

Effort

Effort is the amount of time required to complete a project and/or support an application. Hours are typically the metric used, as they are standard across organizations; work days or months may be defined differently from one organization to another.

Effort is one of the more challenging pieces of data to collect, and the granularity at which you can analyze your measures is determined by how you record and capture the effort.

In Agile teams, the total effort per sprint is relatively fixed while the work performed varies, so if you want to analyze testing performance, you need to know how much of the effort went to testing, and so on.

Quality

Quality is a key measure in a vendor management situation, as the quality of the code coming into testing and into production determines how well the project performs. We are all aware of the “throw it over the wall” mentality that appears when deadlines start to hit, and the resultant cost is defects being delivered to production.

A common request is for the number of defects expected for a project of a particular size. The answer is not straightforward, as organizations differ on what constitutes a defect and how to grade defects. Set the criteria with the vendor first, and then record defects consistently going forward. A view of historical performance is extremely useful here as well.

Defects should be measured during user acceptance testing as well as after go-live during the warranty period, and the results used to predict future volumes and to identify releases where further investigation or discussion is warranted.
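A minimal sketch of that kind of prediction, using invented release history, might look like this:

    # Predict defect volumes from historical defect density (invented data).
    releases = [
        {"fp": 300, "uat_defects": 12, "warranty_defects": 5},
        {"fp": 450, "uat_defects": 20, "warranty_defects": 7},
        {"fp": 200, "uat_defects": 7, "warranty_defects": 3},
    ]

    total_fp = sum(r["fp"] for r in releases)
    total_defects = sum(r["uat_defects"] + r["warranty_defects"] for r in releases)
    density = total_defects / total_fp  # defects per FP

    next_release_fp = 350  # hypothetical size of the next release
    print(f"Historical density: {density:.3f} defects/FP")
    print(f"Expected defects for a {next_release_fp} FP release: ~{density * next_release_fp:.0f}")

A release whose actual defect count falls far above the prediction is a candidate for the further investigation mentioned above.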

Staff – FTEs

This is the people metric. It is usually measured in full-time equivalents (FTEs) so that staffing is comparable: 20 different people might have worked on a project with a peak staff of 8 FTEs, or 10 people might have delivered the same effort and staffing profile. It is the FTE count that is consistent and comparable.
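For illustration, here is a minimal sketch of converting recorded hours into FTEs; the 140 productive hours per person-month is an invented standard that each organization would set for itself.

    # A minimal sketch of an FTE calculation (hypothetical standard and data).
    HOURS_PER_PERSON_MONTH = 140  # hypothetical productive hours per month

    monthly_hours = {"Jan": 1_120, "Feb": 1_400, "Mar": 980}  # all staff combined

    for month, hours in monthly_hours.items():
        print(f"{month}: {hours / HOURS_PER_PERSON_MONTH:.1f} FTEs")

    # Peak staff is the maximum monthly FTE figure, regardless of how many
    # distinct individuals booked time.
    peak = max(monthly_hours.values()) / HOURS_PER_PERSON_MONTH
    print(f"Peak staff: {peak:.1f} FTEs")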

Resource type can also be relevant here, so distinctions such as onshore/offshore, contractor/permanent/consultant, or designer/manager/tester may need to be included.

Cost

This may be actual cost or a blended rate per hour. Where multiple currencies are involved, assumptions about the appropriate exchange rates may need to be fixed in advance.

Computer Resources

Computer resources cover the parameters of the technology environment, such as platform and programming language. This metric captures the “what” and “how,” allowing comparison against similar project types by language and technical infrastructure.


"It's frustrating that there are so many failed software projects when I know from personal experience that it's possible to do so much better - and we can help." 
- Mike Harris, DCG Owner

Subscribe to Our Newsletter
Join over 30,000 other subscribers. Subscribe to our newsletter today!