In AHP the preference *P _{i}* of alternative *A _{i}* is calculated as the weighted sum of its performance values *a _{ij}* with the criteria weights *w _{j}*:

*P _{i}* = Σ_{j} *w _{j}* *a _{ij}* (1)

(2)

**Example**

Table 1

Sensitivity analysis answers two questions:

- Which is the most critical criterion, and
- which is the most critical performance measure

changing the ranking between two alternatives?

The *most critical criterion* is defined as the criterion *C _{k}* with the smallest change *δ _{kij}* of its current weight *w _{k}* that alters the ranking between two alternatives *A _{i}* and *A _{j}*.

The *Absolute-Top* (or AT) *critical criterion* is the most critical criterion with the smallest change *δ _{kij}* changing the ranking of the best (top) alternative.

The *Absolute-Any* (or AA) *critical criterion* is the most critical criterion with the smallest change *δ _{kij}* changing any ranking of alternatives.

For each pair of alternatives *A _{i}* and *A _{j}*, the minimum change *δ _{kij}* of the current weight *w _{k}* that reverses their ranking is calculated for every criterion *C _{k}*:

(3)

**Example**

Table 2

- The absolute-top critical criterion is Neighbourhood: a change from 18.8% by -8% will change the ranking between the top alternative A1 (House A) and alternative A2 (House B).
- The absolute-any critical criterion is the same as above, as -8% is the smallest value in the table.

As the weight uncertainty for the criterion *Neighbourhood* is +1.4% and -1.3%, the solution is stable.
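The search behind Table 2 can be sketched numerically. The following is an illustrative brute-force sketch with made-up weights and performance values, not the closed-form expression of eq. 3: for one criterion *k* the weight is shifted in small steps (re-normalising the remaining weights) until the ranking of the current top two alternatives flips.

```python
def preferences(weights, perf):
    """Weighted-sum preference P_i = sum_j w_j * a_ij for each alternative."""
    return [sum(w * a for w, a in zip(weights, row)) for row in perf]

def critical_change(weights, perf, k, step=0.001):
    """Smallest signed change of criterion weight k (the other weights are
    re-normalised) that reverses the ranking of the current top two
    alternatives. Returns None if no change within (0, 1) flips them."""
    base = preferences(weights, perf)
    order = sorted(range(len(base)), key=lambda i: -base[i])
    top, second = order[0], order[1]
    found = []
    for sign in (-1, 1):
        delta = step
        while 0.0 < weights[k] + sign * delta < 1.0:
            w = list(weights)
            w[k] = weights[k] + sign * delta
            rest = sum(weights[j] for j in range(len(weights)) if j != k)
            for j in range(len(w)):
                if j != k:
                    w[j] = weights[j] * (1.0 - w[k]) / rest
            p = preferences(w, perf)
            if p[second] > p[top]:
                found.append(sign * delta)
                break
            delta += step
    return min(found, key=abs) if found else None

# Hypothetical example: three criteria, two alternatives
weights = [0.5, 0.3, 0.2]
perf = [[0.40, 0.30, 0.30],   # alternative A1
        [0.35, 0.40, 0.25]]   # alternative A2
d = critical_change(weights, perf, 0)  # ~ -0.056: lowering w_1 flips A1/A2
```

Scanning all criteria with this function and taking the overall minimum of |δ| yields the absolute-any critical criterion of the example above.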

The *most critical measure of performance* is defined as the minimum change of the current value of *a _{ij}* such that the current ranking between alternatives *A _{i}* and *A _{j}* is reversed.

For all alternatives *A _{i}* and *A _{j}* the minimum change of the current value *a _{ij}* reversing their ranking is calculated:

(4)

**Example**

Table 3

- The *absolute-any critical performance measure* is found for alternative *A _{3}* (House C) under the criterion *Financing*. A change from 27.9% by 20.4% will change its ranking with alternative *A _{2}* (House B), i.e. only a (drastic) change from 27.9% to 48.3% of the evaluation of House C with respect to Financing would change the ranking of House C and House B.

For alternative evaluation the method described above is implemented in AHP-OS. On the group result page in the *Group Result Menu* tick the checkbox *var* and then click *Scale*.

Under the heading *Sensitivity Analysis* the AT and AA critical criterion as well as the AA critical performance measure will be displayed. You can download the complete tables as csv files with a click on *Download*.

Triantaphyllou, E., Sánchez, A., *A sensitivity analysis approach for some deterministic multi-criteria decision making methods*, Decision Sciences, Vol. 28, No. 1, pp. 151-194, (1997).

Tick *var* and click on *Scale*. All priority vectors of your project will display the weight uncertainties with (+) and (-).

For example, “Capital” has a priority of 15.0% with an uncertainty of +1.7% and -2.1%.

The diagram for the total result shows the calculated priorities in green, and the possible plus and minus variations in dark and light grey.

Calculation is based on a randomised variation of all judgment inputs by ±0.5 on the 1 – 9 judgment scale. For more than one participant the variation is reduced by the square root of the number of participants.
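A minimal sketch of this randomised variation is shown below. It uses the row geometric mean as priority approximation and a simplified judgment encoding; the dictionary layout and function names are assumptions for illustration, not the AHP-OS implementation.

```python
import random
from math import prod

def priorities(matrix):
    """Approximate AHP priorities by the row geometric mean."""
    n = len(matrix)
    gm = [prod(row) ** (1.0 / n) for row in matrix]
    s = sum(gm)
    return [g / s for g in gm]

def weight_uncertainty(judgments, n, participants=1, runs=1000, seed=42):
    """Min/max priority per criterion under random +/-0.5 variation of each
    judgment on the 1-9 scale, reduced by sqrt(participants) for groups.

    judgments: dict {(i, j): x} with i < j, x meaning criterion i is
    x-times more important than j (simplification: x stays >= 1)."""
    rng = random.Random(seed)
    spread = 0.5 / participants ** 0.5
    lo, hi = [1.0] * n, [0.0] * n
    for _ in range(runs):
        m = [[1.0] * n for _ in range(n)]
        for (i, j), x in judgments.items():
            v = max(1.0, x + rng.uniform(-spread, spread))
            m[i][j], m[j][i] = v, 1.0 / v
        for k, w in enumerate(priorities(m)):
            lo[k] = min(lo[k], w)
            hi[k] = max(hi[k], w)
    return lo, hi

# Example: three criteria with judgments 3, 5 and 2
base = priorities([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
lo, hi = weight_uncertainty({(0, 1): 3, (0, 2): 5, (1, 2): 2}, 3)
```

The intervals `[lo[k], hi[k]]` bracket the unperturbed priorities `base[k]` and correspond to the (+)/(-) margins displayed next to each weight.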

When downloading the results as csv file, uncertainties are listed below the group result.


*Dear Friends, dear Visitors,*

over the last four months I have put a lot of effort into improving the AHP-OS online tool. Several releases introduced a simplified menu structure and new features:

- Delete individual participant’s inputs from an existing project.
- Update a project hierarchy or project description, as long as there is no input.
- Evaluate your AHP projects using different AHP judgment scales.
- Analyse weight uncertainties based on small randomised variations of input judgments.

The last two features are based on my recent study comparing different AHP scales. To date there has been no recommendation on which scale to use, and I found a new approach to analyse and compare the scales based on simple analytic functions. The study is submitted for publication, and I hope it will not take too long until it is available. You can find some more information in my posting here.

The feature of analysing weight uncertainties is an innovative way of doing sensitivity analysis: all judgments are randomly varied by ±0.5 on the judgment scale, and for each variation the maximum and minimum resulting priorities are captured. I use 1000 variations, enough to get a relatively stable margin of error for each weight. It tells you how “precise” a weight or ranking is in your specific project.

Again, a big *Thank You* to all donors! Please note that this is a non-commercial website for educational purposes. Your donation is used to cover running costs like web hosting, antispam services etc. **PLEASE, help to support this website with a small donation.** I spend a lot of time sharing my knowledge for free. Thank you in advance!

For now, please enjoy your visit on the site and feel free to leave a comment – it is always appreciated.

Klaus D. Goepel,

Singapore, June 2017

BPMSG stands for *Business Performance Management Singapore*. As of now, it is a non-commercial website, and information is shared for educational purposes. Please see licensing conditions and terms of use.

Please give credit or a link to my site, if you use parts in your work, or make a small donation to support my effort to maintain this website.


Salo and Hämäläinen [1] pointed out that the integers from 1 to 9 yield local weights which are not equally dispersed. Based on this observation, they proposed a balanced scale, where local weights are evenly dispersed over the weight range [0.1, 0.9]. They state that for a given set of priority vectors the corresponding ratios can be computed from the inverse relationship

*r* = *w* / (1 – *w*) (1a)

The priorities 0.1, 0.15, 0.2, … 0.8, 0.9 lead, for example, to the scale 1, 1.22, 1.5, 1.86, 2.33, 3.00, 4.00, 5.67 and 9.00. This scale can be computed by

*w*_{bal} = 0.45 + 0.05 *x* (1b)

with *x* = 1 … 9 and

*c* = *w*_{bal} / (1 – *w*_{bal}) (1c)

*c* (resp. 1/*c*) are the entry values in the decision matrix, and *x* is the pairwise comparison judgment on the scale 1 to 9.
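Eqs. 1b and 1c can be checked in a few lines; the sketch below reproduces the scale values quoted above.

```python
def balanced_entry(x):
    """Decision-matrix entry c for judgment x on the classical balanced scale."""
    w = 0.45 + 0.05 * x      # eq. 1b: evenly dispersed weights 0.5 ... 0.9
    return w / (1.0 - w)     # eq. 1c: c = w_bal / (1 - w_bal)

scale = [round(balanced_entry(x), 2) for x in range(1, 10)]
# -> [1.0, 1.22, 1.5, 1.86, 2.33, 3.0, 4.0, 5.67, 9.0]
```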

In fact, eq. 1a or its inverse is the *special case* for *one selected pairwise comparison* of two criteria. If we take into account the complete *n* × *n* decision matrix for *n* criteria, the resulting weight of one criterion, judged *x*-times more important than all others, can be calculated as:

*w* = *x* / (*x* + *n* – 1) (2)

Eq. 2 simplifies to eq. 1a for *n*=2.

With eq. 2 we can formulate the general case for the balanced scale, resulting in evenly dispersed weights for *n* criteria and a judgment *x* with *x* from 1 to *M*:

*w*_{bal-n}(*x*) = *w*_{min} + (*x* – 1) Δ*w* (3)

with

*w*_{min} = 1/*n* (3a)

*w*_{max} = *M* / (*M* + *n* – 1) (3b)

Δ*w* = (*w*_{max} – *w*_{min}) / (*M* – 1) (3c)

We get the general balanced scale (balanced-n) as

*c* = *w*_{bal-n} (*n* – 1) / (1 – *w*_{bal-n}) (4)

With *n* = 2 and *M* = 9 it represents the classical balanced scale as given in eq. 1b and 1c. Fig. 1 shows the weights as a function of judgments derived from a case with 7 criteria using the fundamental AHP, balanced and general balanced (bal-n) scale. It can be seen that, for example, a single judgment “*5 – strong importance*” yields a weight of 45% on the AHP scale, 28% on the balanced scale and 37% on the balanced-n scale.

Figure 1. Weights as function of judgment for the AHP scale, the balanced scale and the corrected balanced scale for 7 decision criteria.

A “strong” criterion is underweighted using the classical balanced scale, and overweighted using the standard AHP scale, compared to the general balanced-n scale. Weights of the balanced-n scale are distributed evenly over the judgment range; only for *n* = 2 does the originally proposed balanced scale yield evenly distributed weights.
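The *n* = 7 example of Fig. 1 can be reproduced in a few lines. This is a sketch based on eqs. 2 – 4 as given above; the function names are illustrative.

```python
def weight_single(c, n):
    """Resulting weight of a criterion judged c-times more important
    than each of the other n-1 criteria (eq. 2)."""
    return c / (c + n - 1.0)

def balanced_n_entry(x, n, M=9):
    """Decision-matrix entry for judgment x on the general balanced-n scale:
    weights evenly dispersed between 1/n and M/(M+n-1) (eq. 3), then
    converted to an entry value by inverting eq. 2 (eq. 4)."""
    w_min = 1.0 / n
    w_max = M / (M + n - 1.0)
    w = w_min + (x - 1) * (w_max - w_min) / (M - 1)   # eq. 3
    return w * (n - 1) / (1.0 - w)                    # eq. 4

# Fig. 1 example: a single judgment x = 5 with n = 7 criteria
n, x = 7, 5
ahp_w  = weight_single(x, n)                          # standard AHP: ~45 %
c_bal  = (0.45 + 0.05 * x) / (1 - (0.45 + 0.05 * x))  # classical balanced entry
bal_w  = weight_single(c_bal, n)                      # balanced: 28 %
baln_w = weight_single(balanced_n_entry(x, n), n)     # balanced-n: ~37 %
```

For *n* = 2 the function reduces to the classical balanced scale, matching eqs. 1b and 1c.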

You can download my complete working paper “*Comparison of Judgment Scales of the Analytical Hierarchy Process – A New Approach*”, submitted for publication, from researchgate.net or here.

[1] Salo, A., Hämäläinen, R., *On the measurement of preferences in the analytic hierarchy process*, Journal of Multi-Criteria Decision Analysis, Vol. 6, pp. 309-319, (1997).


- *Standard AHP linear scale*
- *Logarithmic scale*
- *Root square scale*
- *Inverse linear scale*
- *Balanced scale*
- *Balanced-n scale*
- *Adaptive-bal scale*
- *Power scale*
- *Geometric scale*

Fig. 1 Mapping of the 1 to 9 input values to the elements of the decision matrix.

*Power scale* and *geometric scale* extend the values of the matrix elements from 9 up to 81 and 256, respectively. *Root square* and *logarithmic scale* reduce the values from 9 down to 3 and 3.2, respectively. *Inverse linear* and *balanced scale* keep the values in the original range, but change the weight dispersion. The *balanced-n scale* is a corrected version of the original balanced scale. The *adaptive-bal scale* scales the values depending on the number of criteria: for *n* = 2 criteria it represents the balanced scale, for *n* = 10 criteria a balanced power scale.
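The mappings of Fig. 1 can be sketched as simple functions. These are common definitions from the literature; the exact parametrisation used in AHP-OS may differ.

```python
import math

# Mapping a judgment x = 1..9 to a decision-matrix entry c
# (common textbook definitions; AHP-OS parametrisation may differ):
scales = {
    "linear":      lambda x: float(x),          # standard AHP, 1 .. 9
    "logarithmic": lambda x: math.log2(x + 1),  # 1 .. log2(10) ~ 3.3
    "root square": lambda x: math.sqrt(x),      # 1 .. 3
    "inv. linear": lambda x: 9.0 / (10 - x),    # 1 .. 9, different dispersion
    "power":       lambda x: float(x ** 2),     # 1 .. 81
    "geometric":   lambda x: 2.0 ** (x - 1),    # 1 .. 256
}

max_entry = {name: f(9) for name, f in scales.items()}
```

The maximum entries illustrate the ranges quoted above: 81 and 256 for power and geometric, about 3 for root square and logarithmic, 9 for linear and inverse linear.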

As a result, priority discrimination will be improved using the geometric or power scale, but at the same time the consistency ratio will go up. For the *logarithmic*, *root square* and *inverse linear scales* it is the opposite: priorities are more compressed or “equalised” across the criteria (see Fig. 2), while CR improves.

Only the *balanced-n scale* and *adaptive-bal scale* will improve (or at least keep) the consistency ratio in a reasonable range and at the same time minimise weight uncertainties and weight dispersion.

Fig. 2 Change of priorities for different scales for an example with eight criteria.

The choice of the appropriate scale is difficult and an often discussed problem. To date there is no published guideline on when to select which scale. A study on the impact on priorities and consistency ratio (CR) is published in [2]. I have recently submitted a paper providing a guideline for the selection of different AHP scales.

Open a project with completed judgments (participants) from your project list. In the *Project menu* click on *View Result*. By default the results are calculated based on the standard AHP 1 to 9 scale. To recalculate for a different scale, select the scale from the drop-down list in the *Group Result menu* and click on *Scale*.

[1] Ishizaka, A., Labib, A., *Review of the main developments in the analytic hierarchy process*, Expert Systems with Applications, 38(11), 14336-14345, (2011)

[2] Franek, J., Kresta, A., *Judgment scales and consistency measure in AHP*, Procedia Economics and Finance, 12, 164-173, (2014)

[3] Koczkodaj, W.W., *Pairwise Comparison Rating Scale Paradox*, Cornell University Library, (2015), https://arXiv.org/abs/1511.07540


- From the *Hierarchy Input Menu* – decision hierarchy and local & global priorities
- From the *Group Result Menu* – priorities by node and consolidated decision matrix
- From the *Project Data Menu* – decision matrices from each participant

For each download you can select “.” or “,” as decimal separator. The downloaded csv (text) file is coded in UTF-8 and supports multi-language characters like Chinese, Korean, Japanese and of course a variety of Western languages.

Open Excel and click on “File” -> “New” to get a blank worksheet. Click on “*Data*”. At the top left you will find the “Get External Data” box.

Click on *From Text* to select the downloaded csv file for import. The Text Import Wizard will open.

**Now it is important to select 65001 : Unicode (UTF-8)** under *File origin*.

Then, depending on your decimal separator, select **Comma** or **Semicolon** as *Delimiters*:

When the import is done, your text characters should be displayed correctly. Save the file “Save as” as Excel workbook (*.xlsx).


Open a project from your project list, and click on *Edit Project*. The project hierarchy page will open with a message on top, indicating that you are modifying an existing project. You can now change the hierarchy, for example add criteria or alternatives. A click on *Save/Update* in the *Hierarchy Input Menu* will overwrite the data of the original project under the same session code, confirmed by a message. Before you click on *Go* to save, you can also update the project short description.

With *Use Hierarchy* in the project administration menu, the hierarchy window will open, and you can also modify the hierarchy or alternatives. In contrast to *Edit*, however, the modified project will be saved as a new project under a new project session code.


You can open one of your projects either by clicking the session code in the project table, or by selecting the session code from the session administration menu:

This will bring you to the project summary page, showing

- Project data
- Alternatives (if any)
- Participants (if any)
- Group input link (to be provided to your project participants)
- Project hierarchy and hierarchy definition (text)

At the bottom you find the new project administration menu:

From here you can:

- View Result: view the project group result (if there are already participants)
- Group Input: start pairwise comparisons
- Use/Modify Hierarchy: use and modify the project’s hierarchy for a new project
- **Delete selected Participants** (a request from many users)
- Delete the whole project
- Close the project to go back to the project session table

With this new Project Administration menu some of the other menus are simplified. Let me know your experience with the new structure, or whether you find any bugs. The manual will be updated within the next days.

On the project summary page select the participants you want to delete, and click on *refresh*.

You will then see a message *Selected participant(s): Werner*. Click on the button to delete the selected user(s). Careful: once deleted, they cannot be recovered and their pairwise comparison data will be lost.


In [1] I proposed an **AHP group consensus indicator** to quantify the consensus of the group, *i.e.* to have an estimate of the agreement on the resulting priorities between participants. This indicator ranges from 0% to 100%: zero percent corresponds to no consensus at all, 100% to full consensus. It is derived from the concept of diversity based on Shannon alpha and beta entropy, as described in [2]. It is a measure of **homogeneity** of priorities between the participants and can also be interpreted as a **measure of overlap** between priorities of the group members.

If we were to categorise group consensus into the three categories *low*, *moderate* and *high*, I would assign the following percentages:

- low consensus: below 65%
- moderate consensus: 65% to 75%
- high consensus: above 75%

Values below 50% indicate that there is practically no consensus within the group and a high diversity of judgments. Values in the 80% – 90% range indicate a high overlap of priorities and excellent agreement of judgments from the group members.

AHP allows for (logical) inconsistencies in judgments; the AHP **consistency ratio CR** is an indicator for this, and – as a rule of thumb – CR should not exceed 10% significantly. Please read my posts here and here.

It can be shown that, given a sufficiently large group size, consistency of the aggregate comparison matrix is guaranteed, regardless of the consistency measures of the individual comparison matrices, if the geometric mean (AIJ) is used for aggregation [3]. In other words, if the group of participants is large enough, the consistency ratio CR of the consolidated group matrix will decrease below 10% and is no longer an issue.

Consensus has to be strictly distinguished from consistency. The **consensus** is derived from the resulting priorities and **has nothing to do with the consistency ratio**. Whether you have a small or a large group, in both cases consensus could be high or low, reflecting the “agreement” between group members. Even if you ask a million people, there could be no agreement (consensus) on a certain topic: half of them might have the exact opposite judgment of the other half. As a result, the consensus indicator would be zero: there is *no overlap*, the total group is divided into two sub-groups with opposite opinions.

The beauty of the proposed AHP consensus indicator based on Shannon entropy is the possibility to analyse further, and to find out whether there are sub-groups (clusters) of participants with high consensus among themselves, but low consensus with other sub-groups. This can be done using the concept of alpha and beta diversity [2]. I have published an Excel template to analyse similarities between samples based on partitioning diversity into alpha and beta diversity. It can also be used with your AHP results to analyse group consensus.
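The underlying alpha/beta partitioning [2] can be sketched as follows. This is not the exact AHP-OS consensus formula, only the core idea: the beta component measures how much the pooled (gamma) diversity exceeds the average individual (alpha) diversity.

```python
from math import exp, log

def shannon(p):
    """Shannon entropy of a priority vector."""
    return -sum(x * log(x) for x in p if x > 0)

def effective_opinions(priority_vectors):
    """Jost-style diversity partitioning for equally weighted participants:
    exp(H_gamma - H_alpha) is the effective number of distinct 'opinions'
    in the group; 1.0 means complete overlap (full consensus)."""
    k = len(priority_vectors)
    n = len(priority_vectors[0])
    mean = [sum(v[j] for v in priority_vectors) / k for j in range(n)]
    h_gamma = shannon(mean)                                  # pooled priorities
    h_alpha = sum(shannon(v) for v in priority_vectors) / k  # mean individual entropy
    return exp(h_gamma - h_alpha)

identical = [[0.5, 0.3, 0.2]] * 3          # full consensus -> 1.0
split = [[0.99, 0.005, 0.005],             # two opposite extreme opinions
         [0.005, 0.99, 0.005]]             # -> close to 2 (two clusters)
```

Mapping this effective number of opinion clusters back to a 0 – 100 % consensus scale is what the indicator in [1] does; the details of that normalisation are in the paper.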

[1] Goepel, K. D., *Implementing the Analytic Hierarchy Process as a Standard Method for Multi-Criteria Decision Making in Corporate Enterprises – A New AHP Excel Template with Multiple Inputs*, Proceedings of the International Symposium on the Analytic Hierarchy Process, Kuala Lumpur, (2013)

[2] Jost, L., *Entropy and Diversity*, OIKOS, Vol. 113, Issue 2, pp. 363-375, (2006)

[3] Aull-Hyde, Erdoğan, Duke, *An experiment on the consistency of aggregated comparison matrices in AHP*, European Journal of Operational Research, 171(1), 290-295, (2006)

When it comes to AHP, it seems the scientific world is still divided into opponents and advocates of the method.

I answered with the statistics of my website: BPMSG has more than 4000 users of the online software AHP-OS, 600 of them active users with 1000 projects and more than 3500 decision makers. My AHP Excel template has reached nearly 21 thousand downloads. This clearly shows that the method is not outdated.

“*No, I don´t think that AHP is outdated, but the fact that over than 1000 projects have been developed using AHP does not mean that their results are correct (which is impossible to check), or that the method is sound (which is easily challenged)… * “

Yes, I agree: the numbers only show that AHP is not outdated (which was the original question). They don’t show whether the results are correct or incorrect, but neither do they show whether the users did or did not realise the method’s drawbacks and limitations.

For me, as a practitioner, AHP is one of the supporting tools in decision making. The intention of a tool is what it does. A hammer intends to strike, a lever intends to lift. It is what they are made for.

From my users’ feedback I sometimes get the impression that some of them expect a decision support tool to make the decision for them, and this is not what it is made for.

**In my practical applications AHP helped me and the teams a lot to gain better insight into a decision problem, to separate important from less important criteria, and to achieve group consensus and agreement on how to tackle a problem or proceed with a project.** Probably this could be achieved with other tools too, but as you say, AHP is simple, understandable and easy.

For sure, real-world problems are complex. Therefore they have to be broken down and simplified to be handled with the method, and I agree, over-simplification can be dangerous. On the other hand, what approach other than the breakdown of complex problems into digestible pieces is possible?

**Finally, it’s not the tool producing the decision, but the humans behind it.** They will be accountable for the decision, and it’s their responsibility to find the appropriate model of a decision problem and the right balance between rational and non-rational arguments and potential consequences of their decision.

Let me know your opinion!
