How to set fair performance targets in retail

In my post, Key performance metrics for grocery retail product availability, I covered the challenge of key performance indicators – providing metrics that are both comprehensive and simple – as part of my 5-C framework. Following that post, a number of people contacted me to say that creating the metrics is only the start: we must also consider the targets we set against them. I wholeheartedly agree.

At the start of every financial year, a retailer has a set of business objectives, goals, and targets – predictions of where it would like to be from a revenue, margin, shrink, or waste perspective, to name but a few. However, these expectations are often not based on robust science – they can be as basic as last year plus or minus one or two percent! For a sector that once led the way in advanced data analysis and its interpretation, this does not feel best in class.

Compare this with the relatively immature practice of sports science, where concepts espoused in the likes of Billy Beane’s Moneyball model have accelerated past and overtaken retailers. Association football and rugby – the latter a professional sport for only 25 years – now have clubs adopting rigorous and robust analytical assessments of game-plan strategy, player development and standards, and on-field tactics and contribution, to optimize and predict results. All of this is grounded in realistic expectations of performance progression – if you barely escaped relegation last year, you don’t set yourself the target of winning the title this year!

So, how can retailers learn from this?

Setting the right performance targets at category, store, region, and national level is not easy. But if targets are to achieve buy-in – to be considered fair, representative, and genuinely motivating, especially when backed by a bonus that people have a realistic chance of winning – then some rigor and discipline are required.

A number of commonly used approaches are summarized below, but if we want to set a new standard, I propose a combinatorial modeling approach.



Cascaded Target (e.g. X%)

A deconstructed value allocated unscientifically across business functions, departments, etc. – “everybody will contribute X%” or “your contribution will be Y%”.

Pros:

  • Simple approach (even if embellished as sales-weighted allocation)
  • Everybody participates to a degree
  • Theoretically maps up to the bigger picture

Cons:

  • Not a fair allocation or realistic expectation
  • Likely to be unachievable
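To make the weakness concrete, here is a minimal sketch of a sales-weighted cascade (all department names and figures are hypothetical): the national uplift is spread in proportion to last year’s sales, with no regard for each department’s actual headroom to improve.

```python
# A minimal sketch of a cascaded, sales-weighted target
# (all figures hypothetical): a national revenue uplift is spread
# across departments in proportion to last year's sales.
national_uplift = 500_000.0  # extra revenue wanted this year
last_year_sales = {
    "produce": 4_200_000.0,
    "bakery": 1_100_000.0,
    "grocery": 8_700_000.0,
}

total = sum(last_year_sales.values())
cascaded = {dept: national_uplift * sales / total
            for dept, sales in last_year_sales.items()}

for dept, target in cascaded.items():
    print(f"{dept}: +{target:,.0f}")
```

The arithmetic is trivially simple and sums back to the national figure, which is its appeal – but nothing in it asks whether bakery can realistically deliver its share.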


Last Year Plus

Expectation of a relative or fixed improvement on the previous period (i.e. year).

Pros:

  • Constant need to improve regardless of current performance/position
  • Simple approach
  • Everybody participates

Cons:

  • Not a fair allocation – possibly unachievable
  • Lacks relevance to the achievable opportunity gap


Simple Performance Offset

You are underperforming compared to format X, region Y, etc., so must improve by Z.

Pros:

  • Simple approach
  • Potentially fair benchmark/comparison

Cons:

  • Lacks fairness, especially if control sets are not scientifically derived
  • Target (and current offset) may be unrealistic


Simple Explanatory Model

Construction of a set of explanatory inputs and a target prediction, e.g. a vanilla regression model.

Pros:

  • Relatively simple approach
  • Takes into consideration some of your input values

Cons:

  • It only creates an actual-vs-expected offset – what happens when actual > expected?
  • Input scenario adjustments can be arbitrary
  • Multicollinearity is not managed, so identifying the actions that drive performance is challenging
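A minimal sketch of such a vanilla regression, fitted by ordinary least squares on hypothetical store data (the input columns and all numbers are illustrative, not from any real retailer):

```python
import numpy as np

# Illustrative store-level data (hypothetical): each row is a store,
# columns are explanatory inputs such as weekly footfall and range size.
X = np.array([
    [1200.0, 800.0],
    [900.0, 650.0],
    [1500.0, 900.0],
    [700.0, 500.0],
])
y = np.array([52.0, 41.0, 63.0, 33.0])  # actual performance metric per store

# Add an intercept column and fit a vanilla linear regression
# by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

expected = A @ coef   # the model's expectation for each store
offset = y - expected  # actual vs expected offset

for store, gap in enumerate(offset):
    print(f"store {store}: offset {gap:+.2f}")
```

Note that with an intercept, OLS offsets sum to zero across the fitted stores, so some stores will always sit “above expectation” – exactly the actual > expected problem noted above.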


Sophisticated Explanatory Model

A model that attempts to isolate all the driver and lever impacts.

Pros:

  • Sophisticated approach – creates a ‘fair’ expectation
  • Isolates the controllable inputs, i.e. how to drive performance

Cons:

  • More data- and modeling-intensive
  • Not so easy to understand


Combinatorial Model Approach

A combination of:

  • Control set generation (i.e. similarity)
  • Explanatory model (i.e. isolated drivers and levers)
  • Lever adjustment scenarios (i.e. weights and fair benchmarking)

Pros:

  • Greater predictive power and setting of a ‘fair’ value
  • Connection to the underlying drivers and levers to enable action and performance improvement
  • Robust benchmarking (i.e. fair comparison) for lever adjustments

Cons:

  • Most labor-intensive approach
  • Requires an ‘out-of-control-set’ scalar to enable universal comparison
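The control-set step can be sketched with a simple nearest-neighbour search on standardized store features (all feature names and numbers below are hypothetical illustrations, not the proprietary model described later):

```python
import numpy as np

# Hypothetical store features used to judge similarity, e.g.
# floor space (sq ft), weekly footfall, and online order share.
features = np.array([
    [12000.0, 9500.0, 0.18],
    [11500.0, 9100.0, 0.20],
    [30000.0, 22000.0, 0.05],
    [11800.0, 9800.0, 0.17],
    [29500.0, 21000.0, 0.06],
])
metric = np.array([94.2, 95.1, 97.8, 93.5, 97.2])  # e.g. availability %

def control_set(store_idx, k=2):
    """Return the k most similar stores (excluding the store itself),
    by Euclidean distance on standardized features."""
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    dist = np.linalg.norm(z - z[store_idx], axis=1)
    dist[store_idx] = np.inf  # never compare a store with itself
    return np.argsort(dist)[:k]

# Benchmark store 0 against its similarity-derived control set:
peers = control_set(0)
benchmark = metric[peers].mean()
gap = benchmark - metric[0]  # a 'fair' improvement target, not last-year-plus
print(f"peers: {peers.tolist()}, benchmark: {benchmark:.2f}, gap: {gap:+.2f}")
```

In practice the similarity features, distance measure, and control-set size would all need careful, domain-informed selection – plus the ‘out-of-control-set’ scalar noted above for stores with no close peers.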


Combinatorial modeling approach – adopting a lever-based score

Ultimately, if the target is meaningful then everyone across the organization should stand to benefit from its attainment. To enable this, tools should be provided that define and isolate the value of each process and action, or lever, in relation to the target metric.

I have spent the past 10 years building and refining a proprietary model that manages complex and highly correlated systems, alongside a visual scorecard of the strength, direction, and certainty of each lever input. When this lever importance is combined with a robust store-comparison benchmark – built on similarity-derived, representative control sets – it not only creates a realistic overall improvement target based on all the important store levers, but also gives us a method of prioritizing which levers to focus on in each period to achieve the greatest performance gains. And aren’t we all interested in getting the biggest bang for our buck, especially when resource-constrained?

The numbers can tell us a lot, but we must also embed expert interpretation before investing resources – for example, in assessing cause and effect. Higher nil-pick occasions (items ordered online and then not found by the picker in-store) may correlate with, and be caused by, higher online sales and order numbers for that store. The reverse does not hold: higher nil-pick occasions, despite appearing at the same time, certainly won’t be causing higher online sales and order numbers.
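The symmetry of correlation is precisely why it cannot settle the direction of cause and effect on its own – a quick check on hypothetical weekly figures:

```python
import numpy as np

# Hypothetical weekly figures for one store: online orders placed
# and nil-pick occasions recorded.
orders = np.array([120, 150, 180, 210, 260, 300])
nil_picks = np.array([4, 5, 7, 8, 11, 12])

# Pearson correlation is symmetric: corr(orders, nil_picks) equals
# corr(nil_picks, orders), so it says nothing about which variable
# is driving the other - that judgment needs domain expertise.
r = np.corrcoef(orders, nil_picks)[0, 1]
print(f"correlation: {r:.3f}")
```

The statistic alone would score both causal stories identically; only the expert reading of the business process tells us which way the arrow points.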

If this approach sounds of interest and you want to know more about how you can set fairer targets in your organization, and achieve them, then do get in contact.

Written by Dr David Waters

David joined the team in 2011. Drawing on his extensive background in science and mathematics, David designs quantitative, pragmatic solutions to solve the biggest problems facing our customers.