Creating Benchmarks for Technical Documents

One of the difficulties in managing a large team of technical writers is determining the quality of their work. This is even harder when the teams work in remote offices, for example, in different countries, and you want a clear understanding of where they are succeeding and where they are falling short.

One way to gain this insight is to define a set of benchmarks. Of course, benchmarks will not automatically improve the quality of your product, i.e. the technical documents themselves, but they should give you a clearer picture of how your team is performing.

Benchmarks should be seen as one tool in your quality improvement program, not an end in themselves.

In other words, your aim isn’t to move the scores from 70% to 100%, but rather to determine why the team struggles to deliver specific types of content, for example, including workflow diagrams with their materials. Once you understand this, you can explore how to resolve it.

Benchmarks for technical documents

  1. Identify areas to be assessed. Agree on the scope of the quality improvement program. For example, it may make sense to start small: isolate one part of the documentation, say the Online Help, and use this as a starting point. After that, you can expand and gradually include all document assets.
  2. Establish a scoring mechanism. Some types of documents are harder to write than others, and some parts of a document are more difficult than others. For this reason, when you’re creating a scoring mechanism, factor in the relative difficulty of each piece or type of content. For example, conceptual documentation may require more attention and effort than procedural documents. If you’re using MS Excel to capture the information, consider applying weights to the different criteria. You can also apply scores, for example, 1-5, depending on the depth of information provided (see the sketch after this list for one way to combine the two).
  3. Establish a baseline. Once the first review is complete, you should be able to establish a baseline, i.e. a starting point. Use this to compare the quality of work going forward and to identify which areas need the most attention. For example, while your team may be doing well overall, it’s possible that one or two areas keep dragging their scores down. These are the areas you need to examine and fix.
  4. Provide feedback. You probably need to remind the team that this isn’t a witch hunt. The aim isn’t to name and shame. Instead, present it as an opportunity for the team to identify blind spots in their work and to ensure they can be proud of the product they deliver.
  5. Avoid using metrics to punish writers. Avoid scoring systems that create a league table, as writers will see this as a competition. When that happens, writers engage in CYA tactics and stop helping each other. Instead, focus on how the team is improving and, where necessary, sit with writers who are struggling to improve their numbers. Remember, a low score doesn’t mean a writer is lazy; they may simply have the heaviest workload and the most difficult materials to document.
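
If you want to prototype the scoring model before building it in a spreadsheet, a short script can make the weighting and the baseline comparison explicit. The sketch below is illustrative only: the criteria names, weights, and 1-5 scores are assumptions, not a prescribed rubric, so substitute whatever your team agrees on.

```python
# Minimal sketch of a weighted scoring model for document reviews.
# The criteria names, weights, and review scores are illustrative
# assumptions, not a prescribed rubric.

CRITERIA_WEIGHTS = {
    "accuracy": 0.35,           # weighted heavier: hardest to get right
    "completeness": 0.25,
    "structure": 0.20,
    "style_and_clarity": 0.20,
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine 1-5 scores per criterion into a single 0-100 figure."""
    total = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    return round(total / 5 * 100, 1)   # normalise to a percentage

def compare_to_baseline(current: float, baseline: float) -> str:
    """Report movement against the baseline set in the first review cycle."""
    delta = current - baseline
    sign = "+" if delta >= 0 else ""
    return f"{current} ({sign}{delta:.1f} vs baseline {baseline})"

# Example: one Online Help topic reviewed in two cycles.
baseline_review = {"accuracy": 3, "completeness": 3, "structure": 4, "style_and_clarity": 3}
latest_review   = {"accuracy": 4, "completeness": 4, "structure": 4, "style_and_clarity": 3}

baseline = weighted_score(baseline_review)
print(compare_to_baseline(weighted_score(latest_review), baseline))
# -> 76.0 (+12.0 vs baseline 64.0)
```

In MS Excel, a SUMPRODUCT formula over the weights and scores achieves the same result; the point is simply that the weighting and the baseline comparison are written down and applied consistently across reviews.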

Summary

Before you start, ask yourself: do you really need benchmarks to improve the quality of your technical documents? Maybe not. Are there other ways to achieve this?

If you do decide to adopt this approach, weigh up the pros and cons first. What do you stand to gain? What can go wrong? For example, I’ve seen writers, when pitted against each other, stop sharing information and use passive-aggressive tactics to undermine other writers. So, it’s not that straightforward.

Benchmarks are only one tool in your quality improvement program. Before you set about establishing them, identify how they will help writers improve their work (and they should) and give the team examples of how other firms have approached it.

Finally, don’t announce this out of the blue. Introduce the idea in advance, canvass opinions, and set expectations.