We suggested an approach where you:
- Identify the scope
- Identify quality criteria
- Review a sample of documents
- Create a baseline
- Expand the scope
All this makes sense. However, we also need to look at how to define a fair scoring system.
Some writers have heavier workloads, others write more complex materials, and some may be clever enough to complete their documentation superficially without providing the depth of quality we need.
So, how do you address this?
One suggestion is to apply weights to different criteria. For example, let’s say one criterion is for the Executive Summary.
If it exists, you award one point. If not, zero.
However, this is still pretty crude and doesn’t give us much insight into quality.
An alternative approach is to score this criterion on a fixed scale, ranging from 1 to 5, with five as the highest.
Sum the scores across all criteria for the document and convert the total into a percentage of the maximum possible score. This is known as a weighted scoring model.
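As a minimal sketch of this summing approach (the number of criteria and the scores themselves are hypothetical), each criterion gets a 1-5 score and the total is expressed as a percentage of the maximum:

```python
# Hypothetical 1-5 scores for four criteria reviewed on one document
scores = [4, 3, 5, 2]

# Maximum possible is 5 points per criterion
max_possible = 5 * len(scores)

# Express the document's quality as a percentage of the maximum
percent = 100 * sum(scores) / max_possible
print(f"Quality score: {percent}%")  # Quality score: 70.0%
```

This treats every criterion as equally important; the weighted model described next lets you correct for that.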
Weighted Scoring Models for Technical Documents
This approach allows you to determine the relative success of your product, in this case your documents, based on several criteria. To do this, you need to:
- Identify criteria for the selection process, i.e. what needs to be measured.
- Assign weights (i.e. percentages) to each criterion. These must sum to 100%.
- Assign a score to each criterion for each type of deliverable, e.g. guide, online help, etc.
- Multiply scores by weights and get the total weighted score.
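The four steps above can be sketched as follows (the criteria names, weights, and scores are hypothetical, chosen only to illustrate the arithmetic):

```python
# Step 1 & 2: criteria with weights as fractions of 100% (must sum to 1.0)
weights = {
    "executive_summary": 0.20,
    "accuracy": 0.40,
    "completeness": 0.25,
    "style": 0.15,
}

# Step 3: 1-5 scores for one deliverable (e.g. a user guide)
scores = {
    "executive_summary": 4,
    "accuracy": 3,
    "completeness": 5,
    "style": 4,
}

def weighted_score(weights, scores):
    """Step 4: multiply each score by its weight and sum the results."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("Weights must sum to 100%")
    return sum(weights[c] * scores[c] for c in weights)

total = weighted_score(weights, scores)
print(f"Total weighted score: {total:.2f} out of 5")
```

Here the 40% weight on accuracy means a weak accuracy score drags the total down far more than a missing executive summary would, which is exactly the fairness property the simple checklist lacks.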
However, what’s important is not which approach you take but stepping back to see which is more appropriate for your quality initiative. You might also consider adopting the approach that’s simplest to roll out: data will start coming in quickly, and if you start small, you can use the results as a building block for larger rollouts.