Improving AHP consistency

In both the AHP Excel template and the AHP online software, inconsistent judgments are highlighted and recommendations for more consistent judgments are given. How is this done, and what is the method behind it?

The method is based on Saaty's article "Decision-making with the AHP: Why is the principal eigenvector necessary", European Journal of Operational Research 145 (2003) 85–91. In it he describes three methods to transform a positive reciprocal matrix into a near-consistent matrix.

In my implementation I construct the matrix εij = aij wj/wi to identify the three judgments for which εij is farthest from one. In Saaty's paper this is shown as Table 3; in my Excel template it is called the Consistency Error Matrix. Since I have to do this on each individual input sheet, I use the RGMM (row geometric mean method) results as an approximation, before calculating the eigenvector solution in the summary sheet.
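The idea can be sketched in a few lines of Python (a sketch, not the template's actual code; the function names are mine): compute approximate RGMM priorities, build the error matrix εij = aij wj/wi, and pick the judgments whose error is farthest from one (measured by |ln ε|, so that 2 and 1/2 count as equally far).

```python
import math

def rgmm_weights(A):
    """Approximate priorities by the row geometric mean method (RGMM)."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]
    s = sum(gm)
    return [g / s for g in gm]

def consistency_errors(A):
    """Error matrix eps[i][j] = a_ij * w_j / w_i; 1 means perfectly consistent."""
    w = rgmm_weights(A)
    n = len(A)
    return [[A[i][j] * w[j] / w[i] for j in range(n)] for i in range(n)]

def most_inconsistent(A, k=3):
    """Return the k upper-triangle judgments whose error is farthest from one."""
    eps = consistency_errors(A)
    n = len(A)
    cells = [(abs(math.log(eps[i][j])), i, j)
             for i in range(n) for j in range(i + 1, n)]
    cells.sort(reverse=True)
    return [(i, j, eps[i][j]) for _, i, j in cells[:k]]

# Example: a slightly inconsistent 3x3 comparison matrix
A = [[1, 2, 9],
     [1/2, 1, 2],
     [1/9, 1/2, 1]]
print(most_inconsistent(A))
```

For a perfectly consistent matrix all εij equal one; the farther an entry is from one, the more that single judgment contributes to the overall inconsistency.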

AHP and the Magical Number Seven Plus or Minus Two

In the analytic hierarchy process you define a set of criteria and sub-criteria arranged in a hierarchy, then do pairwise comparisons to find the weights of criteria or decision alternatives. In my AHP Excel template the number of criteria is limited to ten, in my AHP online software to 15. Still, I am sometimes asked to extend the tools and allow for more criteria.

Why should the number of criteria not exceed the magical number seven plus or minus two?

There are three reasons not to exceed nine criteria in any AHP project. Two of them are quite clear and published in the literature:

  • The first has to do with the human limits on our capacity for processing information, and was published by George A. Miller, as well as in the context of AHP by Saaty and Ozdemir.
  • The second is related to the first. The number of pairwise comparisons grows with the number of criteria: it is n(n − 1)/2. For example, 9 criteria require 36 comparisons. With a high number of comparisons, logical inconsistencies easily occur, the consistency ratio CR exceeds values of 10% to 20%, and the basic assumption of near-consistent matrices becomes invalid, making the AHP results questionable.

See also my post here.
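The quadratic growth of the comparison count is easy to verify; a one-line helper (my own illustration, not part of the template) makes it concrete:

```python
def n_comparisons(n):
    """Number of pairwise comparisons for n criteria: n*(n-1)/2."""
    return n * (n - 1) // 2

# Going from 5 to 15 criteria triples the criteria but
# increases the number of comparisons more than tenfold.
for n in (5, 7, 9, 15):
    print(n, "criteria ->", n_comparisons(n), "comparisons")
```

With 5 criteria a participant answers 10 questions; with 15 criteria it is already 105, which is hardly manageable in a consistent way.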

The third reason is not so obvious and not so well known. It is based on the limited 1-to-9 AHP ratio scale for judgments. The maximum preference you can give to one criterion is 9, i.e. this criterion is nine times more important than all other criteria. Assume you have only two criteria; then, if you fully prefer one over the other, the preferred one will receive a weight of 90% and the other a weight of 10%. The weights depend on the number of criteria; the maximum weight or maximum priority wmax is always

wmax = M/(n + M – 1)

with M = 9, the maximum of the AHP scale, and n the number of criteria. The diagram below shows wmax as a function of the number of criteria.

[Figure: wmax as a function of the number of criteria]

Clearly you can see that for 10 criteria the maximum possible weight drops to 50%; in other words, although you give full preference to one criterion, it only gets a weight of 50%! For more than ten criteria the weight falls below 50%. This is the reason why the number of criteria should never exceed the magical number seven plus or minus two.
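The formula above is straightforward to evaluate; a small helper (my own illustration) reproduces the two cases discussed, 90% for two criteria and 50% for ten:

```python
def w_max(n, M=9):
    """Maximum achievable weight of one criterion with n criteria
    on a 1..M ratio scale: w_max = M / (n + M - 1)."""
    return M / (n + M - 1)

for n in (2, 5, 9, 10, 15):
    print(n, "criteria: w_max =", round(w_max(n), 3))
```

Even with full preference on the scale, the most important of 15 criteria cannot receive more than about 39% of the total weight.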

AHP – High Consistency Ratio

Question: I know how AHP works, but what I'm struggling with is how to resolve the inconsistency (CR > 0.1) once participants are done with their pairwise comparisons. It is time consuming if they go through the matrix and re-evaluate all their inputs. Do you have any suggestions?

Answer: Yes, CR is often a problem. My projects also show that, when making the pairwise comparisons, CR ends up higher than 0.1 for many participants. Based on a sample of nearly 100 respondents in different AHP projects, the median value of CR is 16%, i.e. only half of the participants achieve a CR below 16% in my projects; the 80th percentile is 36%. There also seems to be a tendency of CR increasing with the number of criteria, i.e. the median value increases significantly for more than 5 criteria.

From my experience, CR > 0.1 is not critical per se. I get reasonable weights for a CR of 0.15 or even higher (up to 0.3), depending on the number of criteria. The acceptance of a higher CR also depends on the kind of project (the specific decision problem), the resulting priorities and the required accuracy (what is the actual impact on the result of minor changes in the criteria weights?).

In my latest AHP Excel template and my AHP online software AHP-OS, the three most inconsistent judgments are highlighted, and the ideal judgment (resulting in the lowest inconsistency) is shown. This helps participants adjust their judgments on the scale to make their answers more consistent.

The first measure to keep inconsistency low is to stick to the magical number seven, plus or minus two, i.e. keep the number of criteria in a range between 5 and 9 at most. This has to do with the human limits on our capacity for processing information, originally published by George A. Miller in 1956 and taken up in the context of AHP by Saaty and Ozdemir in 2003. Review your criteria selection, and try to cluster criteria in groups of 5 to 9 if you really need more.

Another possibility to improve consistency is to select the balanced-n scale instead of the standard AHP scale. In my sample, changing from the standard AHP scale to the balanced scale decreases the median CR from 16% to 6%. You can select different scales in my template.
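For readers unfamiliar with the balanced scale: instead of using the integers 1 to 9 directly as ratios, equally spaced weights w = 0.5, 0.55, …, 0.9 are converted to ratios w/(1 − w). The sketch below shows this mapping (a simplified illustration of the balanced-scale idea; the balanced-n variant in my tools additionally depends on the number of criteria n):

```python
def balanced_scale(x):
    """Map an integer judgment x (1..9) on the input scale to the
    balanced scale value: equally spaced weights 0.5..0.9 are
    converted to ratios w / (1 - w). Simplified sketch; the
    balanced-n scale used in AHP-OS also depends on n."""
    w = 0.5 + 0.05 * (x - 1)
    return w / (1.0 - w)

for x in range(1, 10):
    print(x, "->", round(balanced_scale(x), 3))
```

The endpoints agree with the standard scale (1 maps to 1, 9 maps to 9), but the intermediate values are smaller, e.g. a judgment of 5 becomes about 2.33 instead of 5, which is what reduces the resulting inconsistency.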

Conclusion

  • Try to keep the number of criteria between 5 and 7, and never use more than 9.
  • Ask decision makers to adjust their judgments in the direction of the most consistent input during the pairwise comparisons for the three highlighted most inconsistent comparisons. A slight adjustment of intensities by 1 or 2 steps up or down can sometimes help.
  • Accept answers with CR > 10%, practically up to 20%, depending on the nature and objective of your project.
  • Do the eigenvector calculation with the balanced scale instead of the standard AHP scale, and compare the resulting priorities and consistency. This does not require redoing the pairwise comparisons.

References

George A. Miller, The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, The Psychological Review, 1956, vol. 63, pp. 81–97

Saaty, T.L. and Ozdemir, M.S., Why the Magic Number Seven Plus or Minus Two, Mathematical and Computer Modelling, 2003, vol. 38, pp. 233–244

Goepel, K.D., Comparison of Judgment Scales of the Analytical Hierarchy Process - A New Approach, preprint of an article submitted for consideration in International Journal of Information Technology and Decision Making, World Scientific Publishing Company, http://www.worldscientific.com/worldscinet/ijitdm (2017)

AHP Consistency Ratio CR

Q: I read in some texts that a consistency ratio (actually an inconsistency ratio) of less than 0.1 (10%) is good. I am not sure whether your consistency ratio is a consistency ratio (i.e. the higher the CR percentage, the better and more consistent the results) or an inconsistency ratio (i.e. the consistency ratio percentage in your spreadsheet should be lower in order to be more consistent).

Can you please let me know whether a lower or higher percentage of the consistency ratio reflects a better, more consistent response? Also, how important is the CR in the interpretation of results? If two consecutive rounds of solicited input yield very similar results, would that be acceptable even if the consistency ratio is not good?

A: The CR in my spreadsheet is exactly the same as you can find in the literature. A value of less than 0.1 (10%) is good, but the threshold of 0.1 is a rule of thumb. Lower values are better than higher values, but values above 0.1 can be acceptable. It depends on the nature of your project. When you process the inputs from a group (several participants), it can happen that individual CRs are above 10% but the consolidated matrix CR is ok. Please read also my comment here.
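The standard textbook definition can be sketched in plain Python (a sketch of the usual eigenvalue-based CR, not my spreadsheet's code): the principal eigenvalue λmax is found by power iteration, the consistency index is CI = (λmax − n)/(n − 1), and CR = CI/RI with Saaty's commonly cited random index values.

```python
# Saaty's random index RI for matrix size n (commonly cited values)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A, iters=200):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1).
    lambda_max is estimated by power iteration on the matrix A."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # With w normalized to sum 1, sum(A w) converges to lambda_max
    lam_max = sum(sum(A[i][j] * w[j] for j in range(n)) for i in range(n))
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

# An inconsistent example: CR comes out well above zero
A = [[1, 2, 9],
     [1/2, 1, 2],
     [1/9, 1/2, 1]]
print(round(consistency_ratio(A), 3))
```

Since CI (and therefore CR) measures the deviation of λmax from its ideal value n, a lower CR always means a more consistent set of judgments; a CR of exactly zero corresponds to a perfectly consistent matrix.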
