Project Management

Use Statistical Process Control to ensure your deliverables are of acceptable quality


Statistical Process Control (SPC) techniques provide a data-based, objective way to determine whether your project is yielding products within an acceptable level of quality. These techniques rely on testing or inspecting the similar products being produced by the project team. If your project is creating a small number of highly customized deliverables, like software applications, SPC techniques may not work for you. However, if your project is producing many similar products, SPC can determine whether your processes are sufficient to produce products within an acceptable tolerance level.

SPC helps you determine if your processes are "in control." When the process starts to falter and produce products that don't conform to your quality standards, the processes are designated as "out of control."

Control Charts

Control charts are a critical aspect of SPC, though they are not the only way SPC can be implemented.

Figure A

Here are the elements of a control chart.

On the control chart (shown in Figure A), the horizontal axis lists the samples that are tested or inspected over time. The vertical axis contains the measurements from these samples. The center line (CL) denotes the process target. The upper control limit (UCL) and lower control limit (LCL) define the acceptable level of tolerance.
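If you want to see how those limits are commonly derived, here is a minimal Python sketch for an individuals chart. The measurements are made up, and the mean plus-or-minus three sigma rule is one common convention (formal SPC uses constants tuned to the chart type):

    # A minimal sketch of deriving control-chart limits for an
    # individuals chart. The data is hypothetical, and the
    # mean +/- 3-sigma rule is an assumption, not the only convention.
    from statistics import mean, stdev

    measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.4]

    cl = mean(measurements)      # center line: the process average/target
    sigma = stdev(measurements)  # sample standard deviation
    ucl = cl + 3 * sigma         # upper control limit
    lcl = cl - 3 * sigma         # lower control limit

    print(f"CL={cl:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")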

Figure B

This is a process in control.

A process in control, shown in Figure B, is one where all measurements over time fall between the upper and lower control limits.

Figure C

This control chart shows a process out of control.

A process is considered "out of control" when one or more of the following events occur.

  • One or more points are outside of the control limits
  • A run of eight points on one side of the center line (more than what would be considered random)
  • An unusual or nonrandom pattern in the data
  • A trend of seven points in a row upward or downward
  • An alternating pattern above and below the CL, but within the limits
  • Several points near a control limit, but not outside the limits

If any of these situations occurs, the project team needs to investigate the cause of the problem and determine the changes required to get the process back "in control."
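To make these rules concrete, here is a rough Python sketch that automates three of the checks above. The function name, the sample data, and the limit values are illustrative only:

    # A sketch of three out-of-control checks from the list above:
    # a point beyond the limits, a run of eight points on one side
    # of the center line, and a trend of seven points in a row.
    def check_signals(points, cl, ucl, lcl):
        signals = []

        # Rule: one or more points outside the control limits.
        if any(p > ucl or p < lcl for p in points):
            signals.append("point beyond control limits")

        # Rule: a run of eight consecutive points on one side of the CL.
        run, side = 0, 0
        for p in points:
            s = 1 if p > cl else (-1 if p < cl else 0)
            run = run + 1 if (s == side and s != 0) else 1
            side = s
            if s != 0 and run >= 8:
                signals.append("run of eight points on one side of CL")
                break

        # Rule: seven consecutive points trending upward or downward.
        up = down = 1
        for prev, cur in zip(points, points[1:]):
            up = up + 1 if cur > prev else 1
            down = down + 1 if cur < prev else 1
            if up >= 7 or down >= 7:
                signals.append("trend of seven points in a row")
                break

        return signals

    # Hypothetical data: within the limits, but running and trending up.
    data = [10.1, 9.8, 10.3, 10.5, 10.6, 10.7, 10.9, 11.0, 11.2, 11.3]
    print(check_signals(data, cl=10.0, ucl=11.5, lcl=8.5))

A real implementation would also cover the remaining signals (nonrandom patterns, points hugging a limit), which generally call for statistical tests rather than simple counting.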

Don't expect to reach perfection. Human factors, imperfect machinery and tools, equipment wear-and-tear, and process exceptions will always cause some variability. You can shrink the UCL and LCL to a smaller and smaller range if you continue to improve your processes (and tools), but the variability will never reach zero.

6 comments
mandrake64

I am all in favour of SPC for monitoring processes and systems and leading the application of resources for continuous improvement. But SPC is only of true use in measuring the outcomes of a project if there is a suitable baseline established before the project commences and someone is prepared to gather the data and produce some meaningful charts both during and after the project finishes. This is most difficult to do when you are completing a project for a customer that understands their own data but has not produced a meaningful measure of their own process or system performance, i.e. a control chart of some description.

Too often, in my experience, the logical step of ensuring that a project meets its key performance measures is left too late or simply not completed at all. Lack of performance visibility and an acceptance of 80% of the gains is at the root of this problem. It is not that the step is forgotten, just that by the time the project winds down, the customer is preparing for the next emergency or critical project and cannot justify the time or resources necessary to perform the required monitoring and achieve the extra 20%.

The best performing projects are those where the outcomes are clearly defined and measured before the project kicks off. An informed customer should already have a suite of control charts to choose from and have used them to highlight the deficiency that forms the basis of the project justification. The SPC chart itself forms part of the project definition and contract. This makes it much more difficult for a vendor to squeeze their way out of an agreed performance obligation. Keeping a portion of the total contract value under a bank guarantee will also keep the vendor focussed on the intended outcomes.

poongundran.krishnamurthi

The article provides good insight into controlling quality over time. However, it would be much better to throw some light on what data we need to capture on the measurement axis and what the timeframe should be.

Tony Hopkinson

Doesn't happen very often, does it? Not to mention that "best performing" is distinctly open to question. You can meet the spec on all counts and still end up with a dissatisfied customer, simply because something changed between establishing the requirement and achieving it.

Tony Hopkinson

SPC works in mass production. The reason it's effective is that it can dynamically indicate trends, giving the guys on the shop floor a heads-up that something is going out of whack. Size tolerances on wire, for instance. It's nowhere near as effective with discrete measurements and too late with infrequent ones. I can't think of anything you can usefully measure to make use of the technique in terms of software engineering. Network traffic, memory usage, page hits and such would be OK, but what would be the useful corrective action?

mandrake64

In software engineering you could chart quite a few items:

  • lines of code per day
  • rate of bug discovery and bug resolution per week
  • performance relative to the intent of the code, e.g. reduction in error rates for the process under control or number of defective items of product produced by the process under control
  • results of code reviews
  • number of errors in syntax and semantics
  • unit testing plans and results

These measures will enable you to determine whether your software engineering processes are efficient and in control. From a hardware perspective, as you state, you could measure:

  • peak and average CPU
  • memory usage
  • memory growth
  • percentage of swap space in use
  • processor usage for multiple-processor machines
  • response time of an application or user interface
  • number of transactions processed per second

These will help you determine whether the original estimates of as-delivered system loading are acceptable for a customer. These expectations should have been clearly outlined in the project definition, e.g. peak CPU loading no more than 40% of capacity, no more than 50% of physical memory in use, response time following user input less than 1 second.

Tony Hopkinson

Lines of code per day, for instance, which is about the worst possible metric you could come up with. How verbose is the language being used? How much re-use? How complex? In re-use, how good is the fit? Developer styles and departmental standards.... Too many variables to do more than draw some pretty lines on a piece of paper and fool the ignorant into believing they are in control. Even something useful like estimates vs. actuals in terms of development times are too unique to gain an insight. Software/hardware performance benchmarks I can see to an extent. Operational performance, definitely. You need enough points to spot a trend, and you need to be able to explain it and have a corrective action; otherwise all you are doing is documenting a screw-up or a constant.