
The updated PAS 91:2013, the construction industry's standard Pre-Qualification Questionnaire (PQQ), is a welcome addition to the armoury of standardised PQQs available to public procurement bodies. Standardising procurement questions helps suppliers because they no longer have to reinvent the wheel every time they respond to a PQQ. It also helps the purchaser, who is less likely to receive conflicting information from a supplier.

However, there is still the issue of how to evaluate this information for each competition. Suppliers will surely expect the purchaser to come to the same conclusion when given the same evidence. Yet the standard PQQ gives no guidance on how the answers to the questions should be assessed. Is that right?

It is clear that the requirements of individual procurements vary, and therefore that the project-specific questions should change for each project. Similarly, the weights of the criteria should also vary between competitions, because their importance depends on the specific requirements of the project. But what about the assessment of the standard questions? Shouldn't the same answer always receive the same score? Should suppliers complain if they receive different scores for the same answers to the standard questions?

Actually, the answer is no! It is quite right that the same answer will sometimes get a high score and sometimes a low one. Perhaps the clearest examples are the questions relating to the financial robustness of the supplier. Purchasers understandably prefer suppliers with large financial resources when letting high-value contracts, but are happy with smaller suppliers for lower-value contracts. So it is quite right that the standard PQQs do not dictate how the procurer should assess the answers to the standard questions.
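To make the point concrete, here is a minimal sketch of such a rule. It is not drawn from PAS 91 or any official guidance: the 0 to 5 scale, the turnover-to-contract-value ratio and the thresholds are all assumptions chosen purely for illustration of why the same financial answer can legitimately score differently in different competitions.

```python
def score_financial_robustness(annual_turnover: float, contract_value: float) -> int:
    """Score a supplier's financial robustness on an illustrative 0-5 scale.

    The (assumed) rule of thumb is that a supplier's annual turnover should
    comfortably exceed the value of the contract being let, so the score is
    driven by the ratio of turnover to contract value rather than by the
    turnover figure on its own.
    """
    if contract_value <= 0:
        raise ValueError("contract_value must be positive")
    ratio = annual_turnover / contract_value
    if ratio >= 5:
        return 5   # turnover dwarfs the contract: minimal financial risk
    if ratio >= 3:
        return 4
    if ratio >= 2:
        return 3
    if ratio >= 1:
        return 2
    if ratio >= 0.5:
        return 1
    return 0       # contract is very large relative to the supplier's turnover


if __name__ == "__main__":
    # The same answer (a £2m annual turnover) scores differently depending on
    # the size of the contract being let.
    turnover = 2_000_000
    for contract in (200_000, 1_000_000, 5_000_000):
        score = score_financial_robustness(turnover, contract)
        print(f"£{contract:,} contract -> score {score}")
```

Running this, the same £2m turnover scores 5 against a £200k contract, 3 against a £1m contract and 0 against a £5m contract, which is exactly the behaviour a supplier should expect rather than complain about.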

Another aspect that is not addressed by the standard PQQs is the way in which procurers score the answers. It is quite possible that, when assessing the same answer from a supplier, a single purchasing organisation will give a mark anywhere between 0 and 10 for one competition, between 0 and 3 for another, and 0, 3, 6 or 8 for yet another. Surely this is another source of confusion for suppliers: wouldn't it be better to apply a consistent approach? A further benefit would be the ability to re-use the scoring scheme on similar procurements, not only in the published invitation documents, but also in the systems and tools used to support the evaluation process.
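One way to picture a consistent approach is a single published marking scale that is re-used across competitions, with only the criterion weights varying per project. The sketch below is purely illustrative: the 0 to 5 scale definitions, the criteria and the example weights are assumptions, not anything specified by PAS 91 or government guidance.

```python
# A fixed marking scale, published once and re-used across competitions.
SCALE = {
    0: "No response or wholly unacceptable",
    1: "Major reservations",
    2: "Minor reservations",
    3: "Meets the requirement",
    4: "Exceeds the requirement in some respects",
    5: "Exceeds the requirement in all respects",
}


def weighted_total(scores: dict[str, int], weights: dict[str, float]) -> float:
    """Combine per-criterion marks (all on the shared 0-5 scale) into a
    percentage, using weights chosen separately for each competition."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same criteria")
    total_weight = sum(weights.values())
    return sum(scores[c] / 5 * weights[c] for c in scores) / total_weight * 100


if __name__ == "__main__":
    # The same marks on the same scale, combined with weights tailored to
    # two different (hypothetical) projects.
    scores = {"financial": 3, "health_and_safety": 5, "experience": 4}

    high_value_project = {"financial": 40, "health_and_safety": 30, "experience": 30}
    routine_project = {"financial": 10, "health_and_safety": 40, "experience": 50}

    print(f"High-value project: {weighted_total(scores, high_value_project):.1f}%")
    print(f"Routine project:    {weighted_total(scores, routine_project):.1f}%")
```

The scale itself never changes, so it can be quoted verbatim in invitation documents and built once into evaluation tools, while the weights capture the project-specific importance of each criterion.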

The government standard PQQs (including PAS 91:2013) help both procurers and suppliers by reducing the amount of wheel reinvention required for each procurement. However, procuring organisations could achieve even greater economies of effort by improving the consistency of the scoring scales they use.
