
Technical Score Improvement

Figure 1: The graph shows how technical scores have improved over a two-year period

I have recently analysed data gathered from a large programme of public sector procurements spanning the past two years. Projects ranged from the relatively small to the very large, and my aim was to ascertain whether the quality of returns (as determined by the technical evaluation scores) had changed. Over 60 procurements conducted between 2010 and 2012 were examined, all of which were tendered under a similar system and process.

The result of the investigation, shown in Figure 1, demonstrates a clear upward trend in the quality of technical submissions from tenderers. Scores were around 68% in 2010 and have risen to around 90% in 2012. Quality also appears to have become more consistent over the period, with far less variation between the lowest and highest scores.
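By way of illustration, here is a minimal sketch of the kind of trend analysis involved. The scores below are made-up placeholders, not the real programme data:

```python
# Minimal sketch of the trend analysis described above.
# The (year, score) pairs are illustrative placeholders, NOT the real data.
import numpy as np

data = [
    (2010.1, 66), (2010.4, 72), (2010.8, 63), (2011.0, 74),
    (2011.3, 80), (2011.6, 77), (2011.9, 85), (2012.2, 88),
    (2012.5, 91), (2012.8, 89),
]
years, scores = map(np.array, zip(*data))

# Least-squares linear trend: slope in percentage points per year
slope, intercept = np.polyfit(years, scores, 1)
print(f"Trend: {slope:+.1f} points/year")

# Dispersion early vs late, as a rough check on consistency
early, late = scores[years < 2011.5], scores[years >= 2011.5]
print(f"Std dev early: {early.std():.1f}, late: {late.std():.1f}")
```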

The graph speaks for itself, but why are scores and consistency increasing? Can it be sustained? And is it a good thing?

Over the past two years the client has embraced some of the best-practice procurement processes available, so these may well be contributing factors.

Clear and Open Criteria – they have moved from publishing only the bare minimum criteria and weights (which is all the regulations require, after all) to telling bidders what each criterion really means and roughly how it will be scored. This must have had a positive impact on technical scores and consistency: bidders no longer have to second-guess what the criteria mean, and they have some idea how each will be scored.
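For readers less familiar with weighted evaluation, a short sketch follows. The criteria, weights and scores are entirely hypothetical; the point is simply that once weights and meanings are published, bidders can see exactly how a technical score is built up:

```python
# Sketch of a typical weighted technical evaluation.
# Criteria, weights and raw scores are hypothetical examples.
criteria = {
    # criterion: (published weight, bidder's raw score out of 100)
    "Methodology":     (0.40, 85),
    "Team experience": (0.30, 90),
    "Risk management": (0.20, 70),
    "Social value":    (0.10, 60),
}

technical_score = sum(w * s for w, s in criteria.values())
print(f"Weighted technical score: {technical_score:.1f}%")  # 81.0%
```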

Pre-Tender Briefings – all bidders are invited to a common pre-tender briefing where any element of the process can be clarified. If bidders feel an area needs further clarification, it can be given to all bidders in an open forum. Again, a no-brainer and surely a positive step towards openness. However, bidders are likely to be cagey in these open briefings with their competitors around, so I suspect this has not made a big difference.

Consensus Scoring – around 2010, consensus scoring was introduced to reach agreed authority scores from multiple evaluators rather than applying an average. I have no doubt this has produced a far more accurate and consistent scoring system. It still surprises (and disturbs) me how many public sector bodies rely on simply averaging scores.
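A quick hypothetical shows why a plain average can mislead. Two bids with identical means can conceal very different levels of evaluator agreement:

```python
# Hypothetical illustration of why averaging evaluator scores can mislead.
from statistics import mean, stdev

bid_a = [70, 72, 68]   # evaluators broadly agree
bid_b = [95, 40, 75]   # evaluators disagree sharply

for name, scores in [("Bid A", bid_a), ("Bid B", bid_b)]:
    print(f"{name}: mean={mean(scores):.0f}, spread={stdev(scores):.0f}")

# Both bids average 70, yet Bid B hides a fundamental disagreement that a
# consensus meeting would surface and resolve before an authority score
# is agreed.
```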

Full Bidder Debriefing – for some time, all tenderers have been offered a full face-to-face debriefing in addition to the standard debrief letters mandated by the regulations (bidder debriefing and the procurement regulations deserve a blog of their own… watch this space). With a fairly steady and consistent supply base, this will certainly improve scores over time (less so for new bidders and SMEs, who may not get the opportunity to bid as often).

Continuous Criteria Development – yes, some criteria have evolved slightly over the period with updates to regulations and requirements, but I would argue that the criteria have actually become more challenging over time, which, if anything, should have pushed scores down.

Obviously, for a large and complex programme, no single cause can claim the gold medal for ‘Most Improved Process 2010-2012’. However, it is clear that all of these factors will have contributed to the improvement in scores over time.

Is the improvement sustainable? Clearly not against the current metrics and criteria; we will probably need to raise the quality bar and expectations to ensure we continue to strive for improvement. More could also be done with the data, analysing by procurement type, market sector and so on, but it already gives a strong indication that the quality of submissions is increasing.

Which leaves me with the question: is this improvement a good thing? Surely it must be. But has the quality of contract delivery increased in line with the better submissions, or are bidders simply learning how to achieve high scores? Should this be reflected in the weighting of technical and commercial criteria? Whatever your view, rising technical scores across a programme must increase the pressure on bidders to maintain high technical scores while submitting ever more competitive prices.
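As a purely hypothetical illustration of that weighting question (the bidders, prices and the pro-rata price-scoring method below are my assumptions, not the client's actual approach), shifting the technical/commercial split can change which bid wins:

```python
# Hypothetical: how the technical/commercial weighting changes outcomes.
# Price scored pro-rata against the lowest bid (a common, assumed approach).
bids = {"Bidder X": (90, 1_200_000), "Bidder Y": (78, 1_000_000)}  # (tech %, price)

lowest = min(price for _, price in bids.values())
for w_tech in (0.7, 0.5):
    w_comm = 1 - w_tech
    print(f"Weighting {w_tech:.0%} technical / {w_comm:.0%} commercial:")
    for name, (tech, price) in bids.items():
        commercial = 100 * lowest / price   # lowest price scores 100
        total = w_tech * tech + w_comm * commercial
        print(f"  {name}: total {total:.1f}")
```

Under a 70/30 split the stronger technical bid (Bidder X) wins; at 50/50 the cheaper bid (Bidder Y) overtakes it, which is exactly why the weighting decision matters as scores rise.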
