AHP-OS Implementation – working paper

For anyone who is interested in the implementation of my free AHP-OS online software and needs a reference:

Please cite:

Goepel, K. D. (2017). Implementation of an Online Software Tool for the Analytic Hierarchy Process – Challenges and Practical Experiences. Working paper prepared for publication, Singapore, July 2017, available from http://bpmsg.com/ahp-software/

I hope to finalize the paper soon so that I can submit it for publication.

Download the working paper from here.


Sensitivity Analysis in AHP

Sensitivity analysis is a fundamental concept in the effective use and implementation of quantitative decision models. Its purpose is to assess the stability of an optimal solution under changes in the parameters (Dantzig).

Weighted sum model (Alternative Evaluation)

In AHP the preference P_i of alternative A_i is calculated using the following formula (weighted sum model):

P_i = sum_j ( W_j * a_ij )      (1)

with W_j the weight of criterion C_j, and a_ij the performance measure of alternative A_i with respect to criterion C_j. Performance values are normalized so that for each criterion C_j

sum_i ( a_ij ) = 1      (2)
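As an illustration, the weighted sum model can be sketched in a few lines of Python. The weights and performance values below are made-up numbers for illustration, not the data of the example tables:

```python
# Weighted sum model (eq. 1): P_i = sum_j W_j * a_ij.
# Weights and performance values are made-up illustration data.

def preferences(weights, perf):
    """perf[i][j]: performance of alternative A_i under criterion C_j,
    normalized per criterion so that each column sums to 1 (eq. 2)."""
    return [sum(w * a for w, a in zip(weights, row)) for row in perf]

weights = [0.5, 0.3, 0.2]      # criteria weights W_j, sum to 1
perf = [[0.5, 0.2, 0.4],       # alternative A1
        [0.3, 0.5, 0.4],       # alternative A2
        [0.2, 0.3, 0.2]]       # alternative A3

P = preferences(weights, perf)  # -> approx. [0.39, 0.38, 0.23]
```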

Example

Table 1

Sensitivity analysis will answer two questions:

  • Which is the most critical criterion, and
  • Which is the most critical performance measure,

changing the ranking between two alternatives?

The most critical criterion

The most critical criterion is defined as the criterion C_k whose current weight W_k requires the smallest change δ_kij to change the ranking between the alternatives A_i and A_j.

The Absolute-Top (or AT) critical criterion is the most critical criterion with the smallest change δkij changing the ranking of the best (top) alternative.

The Absolute-Any (or AA) critical criterion is the most critical criterion with the smallest change δkij changing any ranking of alternatives.

For each pair of alternatives A_i, A_j with i = 1 to n and i < j we calculate

δ_kij = ( P_j – P_i ) / ( a_jk – a_ik )      (3)

with the feasibility condition |δ_kij| ≤ W_k (a weight cannot be changed by more than its current value).
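A minimal Python sketch of this calculation, following the threshold approach of Triantaphyllou and Sánchez; the weights and normalized performance values are made-up illustration data:

```python
# Critical-criterion thresholds (eq. 3): delta = (P_j - P_i) / (a_jk - a_ik)
# is the change of weight W_k reversing the ranking of A_i and A_j.
# Weights and performance values are made-up illustration data.

def critical_criteria(weights, perf):
    P = [sum(w * a for w, a in zip(weights, row)) for row in perf]
    out = {}
    for i in range(len(perf)):
        for j in range(i + 1, len(perf)):
            for k in range(len(weights)):
                denom = perf[j][k] - perf[i][k]
                if denom != 0:
                    delta = (P[j] - P[i]) / denom
                    if abs(delta) <= weights[k]:   # keep feasible changes only
                        out[(k, i, j)] = delta
    return out

weights = [0.5, 0.3, 0.2]
perf = [[0.5, 0.2, 0.4],
        [0.3, 0.5, 0.4],
        [0.2, 0.3, 0.2]]

deltas = critical_criteria(weights, perf)
# the absolute-any critical criterion has the smallest |delta|
k_aa = min(deltas, key=lambda key: abs(deltas[key]))
```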

Example

Table 2

  • The absolute-top critical criterion is Neighbourhood: a change from 18.8% by -8% will change the ranking between the top alternative A1 (House A) and alternative A2 (House B).
  • The absolute-any critical criterion is the same as above, as -8% is the smallest value in the table.

As the weight uncertainty for the criterion Neighbourhood is only +1.4% and -1.3%, far smaller than the required -8%, the solution is stable.

The most critical measure of performance

The most critical measure of performance is defined by the minimum change of the current value of a_ik such that the current ranking between the alternatives A_i and A_j will change.

For all alternatives A_i and A_j with i ≠ j and each criterion C_k we calculate

τ_kij = ( P_j – P_i ) / W_k      (4)

with the feasibility condition 0 ≤ a_ik + τ_kij ≤ 1, i.e. the changed performance value has to stay within the normalized range.
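The corresponding sketch in Python; again the data are made-up illustration values, and the formula is applied as a first-order threshold (re-normalization of the performance column after the change is neglected):

```python
# Critical performance measures (eq. 4): tau = (P_j - P_i) / W_k is the
# change of a_ik reversing the ranking of A_i and A_j; it is feasible
# only if the changed value stays within [0, 1]. Made-up illustration data.

def critical_measures(weights, perf):
    P = [sum(w * a for w, a in zip(weights, row)) for row in perf]
    out = {}
    for i in range(len(perf)):
        for j in range(len(perf)):
            if i == j:
                continue
            for k in range(len(weights)):
                tau = (P[j] - P[i]) / weights[k]
                if 0.0 <= perf[i][k] + tau <= 1.0:  # feasibility check
                    out[(k, i, j)] = tau
    return out

weights = [0.5, 0.3, 0.2]
perf = [[0.5, 0.2, 0.4],
        [0.3, 0.5, 0.4],
        [0.2, 0.3, 0.2]]

taus = critical_measures(weights, perf)
# e.g. raising a_10 by taus[(0, 1, 0)] (about +0.02) would rank A2 above A1
```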

Example

Table 3

  • The absolute-any critical performance measure is found for alternative A3 (House C) under the criterion Financing. A change from 27.9% by 20.4% will change its ranking with alternative A2 (House B), i.e. only a (drastic) change from 27.9% to 48.3% of the evaluation of House C with respect to Financing would change the ranking of House C and House B.

Implementation in AHP-OS

For alternative evaluation, the method described above is implemented in AHP-OS. On the group result page, tick the checkbox var in the Group Result Menu and then click Scale.

Under the headline Sensitivity Analysis the AT and AA critical criterion as well as the AA critical performance measure will be displayed. You can download the complete tables as csv files with a click on Download.

References

Triantaphyllou, E., Sánchez, A. (1997). A sensitivity analysis approach for some deterministic multi-criteria decision making methods. Decision Sciences, Vol. 28, No. 1, pp. 151-194.


Weight Uncertainties in AHP-OS

It is now possible to analyse the weight uncertainties in your AHP-OS projects. When you view the results (View Result from the Project Administration Menu), the drop-down list for different AHP scales and a tick box var are shown.

Tick var and click on Scale. All priority vectors of your project will display the weight uncertainties with (+) and (-).

For example, “Capital” has a priority of 15.0% with an uncertainty of +1.7% and -2.1%.

The diagram for the total result will show the calculated priorities in green, and the possible plus and minus variations in dark and light grey.

The calculation is based on a randomised variation of all judgment inputs by ±0.5 on the 1 – 9 judgment scale. For more than one participant the variation is reduced by the square root of the number of participants.
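A simplified sketch of this Monte Carlo idea in Python. The comparison matrix is made-up data, and priorities are derived here with the row geometric mean (an approximation of the eigenvector method); the exact AHP-OS procedure may differ in detail:

```python
import math
import random

def priorities(M):
    """Priorities from a pairwise comparison matrix:
    row geometric mean, normalized."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    s = sum(gm)
    return [g / s for g in gm]

def perturbed(M, n_participants=1):
    """Vary each judgment randomly by +/-0.5 on the 1-9 scale,
    reduced by sqrt(number of participants), keeping reciprocity."""
    step = 0.5 / math.sqrt(n_participants)
    P = [row[:] for row in M]
    n = len(M)
    for i in range(n):
        for j in range(i + 1, n):
            x = M[i][j] if M[i][j] >= 1 else 1.0 / M[i][j]
            x = min(9.0, max(1.0, x + random.uniform(-step, step)))
            P[i][j] = x if M[i][j] >= 1 else 1.0 / x
            P[j][i] = 1.0 / P[i][j]
    return P

random.seed(42)
M = [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]    # made-up judgments
base = priorities(M)
samples = [priorities(perturbed(M)) for _ in range(1000)]
plus = [max(s[k] for s in samples) - base[k] for k in range(3)]
minus = [base[k] - min(s[k] for s in samples) for k in range(3)]
```

The spread (plus/minus) of the recomputed priorities around the base priorities gives the (+) and (-) uncertainties displayed by the software.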

When downloading the results as csv file, uncertainties are listed below the group result.

 


Why the AHP Balanced Scale is not balanced

As part of my current work on AHP scales, here is an important finding for the balanced scale:

Salo and Hämäläinen [1] pointed out that the integers from 1 to 9 yield local weights which are not equally dispersed. Based on this observation, they proposed a balanced scale, where local weights are evenly dispersed over the weight range [0.1, 0.9]. They state that for a given set of priority vectors the corresponding ratios can be computed from the inverse relationship

r = w / (1 – w)      (1a)

The priorities 0.1, 0.15, 0.2, … 0.8, 0.9 lead, for example, to the scale 1, 1.22, 1.5, 1.86, 2.33, 3.00, 4.00, 5.67 and 9.00. This scale can be computed by

wbal = 0.45 + 0.05 x     (1b)

with x = 1 … 9 and

c = w_bal / (1 – w_bal)      (1c)

c (resp. 1/c) is the entry value in the decision matrix, and x the pairwise comparison judgment on the scale 1 to 9.
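The scale values quoted above can be verified with a short Python sketch:

```python
# Classical balanced scale: weight w = 0.45 + 0.05*x for a judgment x
# on the 1..9 scale (eq. 1b), matrix entry c = w / (1 - w) (eq. 1c).

def balanced_entry(x):
    w = 0.45 + 0.05 * x
    return w / (1.0 - w)

scale = [round(balanced_entry(x), 2) for x in range(1, 10)]
# -> [1.0, 1.22, 1.5, 1.86, 2.33, 3.0, 4.0, 5.67, 9.0]
```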

In fact, eq. 1a or its inverse are the special case for one selected pairwise comparison of two criteria. If we take into account the complete n x n decision matrix for n criteria, the resulting weights for one criterion, judged as x-times more important than all others, can be calculated as:

w = x / ( x + n – 1 )      (2)

Eq. 2 simplifies to eq. 1a for n=2.

With eq. 2 we can formulate the general case for the balanced scale, resulting in evenly dispersed weights for n criteria and a judgment x with x from 1 to M:

w_bal-n = w_min + ( x – 1 ) Δw      (3)

with

w_min = 1/n      (3a)

w_max = M / ( M + n – 1 )      (3b)

Δw = ( w_max – w_min ) / ( M – 1 )      (3c)

We get the general balanced scale (balanced-n) as

c = ( n – 1 ) w_bal-n / ( 1 – w_bal-n )      (4)

With n = 2 and M = 9 it represents the classical balanced scale as given in eq. 1b and 1c. Fig. 1 shows the weights as a function of judgments derived from a case with 7 criteria using the fundamental AHP, balanced and general balanced (bal-n) scale. It can be seen that, for example, a single judgment “5 – strongly more important” yields a weight of 45% on the AHP scale, 28% on the balanced scale and 37% on the balanced-n scale.

Figure 1. Weights as function of judgment for the AHP scale, the balanced scale and the corrected balanced scale for 7 decision criteria.

A “strong” criterion is underweighted using the classical balanced scale, and overweighted using the standard AHP scale, compared to the general balanced-n scale. Weights of the balanced-n scale are distributed evenly over the judgment range; only for n = 2 does the originally proposed balanced scale yield evenly distributed weights.
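The numbers in this example can be reproduced from eqs. 2 – 4; a short sketch (with M = 9 for the 1 to 9 judgment scale):

```python
# Weight of one criterion judged x-times more important than the other
# n-1 criteria (eq. 2), and the general balanced (balanced-n) scale
# entries (eqs. 3 and 4).

def weight(x, n):
    return x / (x + n - 1.0)                # eq. 2

def balanced_n_entry(x, n, M=9):
    w_min = 1.0 / n                         # eq. 3a
    w_max = M / (M + n - 1.0)               # eq. 3b
    w = w_min + (x - 1) * (w_max - w_min) / (M - 1)   # eq. 3
    return (n - 1) * w / (1.0 - w)          # eq. 4

n, x = 7, 5                                 # 7 criteria, single judgment "5"
w_ahp = weight(x, n)                        # ~0.45, fundamental AHP scale
c_bal = (0.45 + 0.05 * x) / (0.55 - 0.05 * x)   # classical balanced entry
w_bal = c_bal / (c_bal + n - 1)             # ~0.28, classical balanced scale
c_baln = balanced_n_entry(x, n)
w_baln = c_baln / (c_baln + n - 1)          # ~0.37, balanced-n scale
```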

You can download my complete working paper “Comparison of Judgment Scales of the Analytical Hierarchy Process – A New Approach”, submitted for publication, from researchgate.net or here.

References

[1] Salo, A., Hämäläinen, R. (1997). On the measurement of preferences in the analytic hierarchy process. Journal of Multi-Criteria Decision Analysis, Vol. 6, 309-319.

 


AHP Judgment Scales

The original AHP uses ratio scales. To derive priorities, verbal statements (comparisons) are converted into integers from 1 to 9. This “fundamental AHP scale” has been discussed critically, as there is no theoretical reason to be restricted to these numbers and verbal gradations. In the past, several other numerical scales have been proposed [1], [3]. AHP-OS now supports nine different scales:

  1. Standard AHP linear scale
  2. Logarithmic scale
  3. Root square scale
  4. Inverse linear scale
  5. Balanced scale
  6. Balanced-n scale
  7. Adaptive-bal scale
  8. Power scale
  9. Geometric scale


Fig. 1 Mapping of the 1 to 9 input values to the elements of the decision matrix.

Power scale and geometric scale extend the values of matrix elements from 9 to 81 and 256, respectively. Root square and logarithmic scale reduce the values from 9 down to 3 and 3.2, respectively. Inverse linear and balanced scale keep the values in the original range, but change the weight dispersion. The balanced-n scale is a corrected version of the original balanced scale. The adaptive-bal scale scales the values depending on the number of criteria: for n = 2 criteria it represents the balanced scale, for n = 10 criteria a balanced power scale.
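For illustration, typical functional forms of such scales look like this. These are common definitions from the scale literature; the exact functions used in AHP-OS may differ in detail, and the balanced-type scales additionally depend on the number of criteria n as described above:

```python
import math

# Typical judgment-to-entry mappings for a judgment x on the 1..9 scale.
# Common textbook forms; the exact AHP-OS definitions may differ.
SCALES = {
    "linear":         lambda x: float(x),            # standard AHP scale
    "logarithmic":    lambda x: math.log2(x + 1),    # 9 -> ~3.32
    "root_square":    lambda x: math.sqrt(x),        # 9 -> 3
    "inverse_linear": lambda x: 9.0 / (10 - x),      # keeps the range 1..9
    "power":          lambda x: float(x ** 2),       # 9 -> 81
    "geometric":      lambda x: 2.0 ** (x - 1),      # 9 -> 256
}

row = {name: f(9) for name, f in SCALES.items()}  # entries for judgment 9
```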

As a result, priority discrimination will be improved using the geometric or power scale, but at the same time the consistency ratio will go up. For the logarithmic, root square and inverse linear scales it is the opposite: priorities are more compressed or “equalised” across the criteria (see Fig. 2), while CR improves.

Only the balanced-n scale and adaptive-bal scale will improve (or at least keep) the consistency ratio in a reasonable range and at the same time minimise weight uncertainties and weight dispersion.


Fig. 2 Change of priorities for different scales for an example with eight criteria.

The choice of the appropriate scale is difficult and an often discussed problem. To date there is no published guideline on when to select which scale. A study on the impact on priorities and consistency ratio (CR) is published in [2]. I have recently submitted a paper providing a guideline on the selection of different AHP scales.

How to select different scales in AHP-OS

Open a project with completed judgments (participants) from your project list. In the Project menu click on View Result. By default the results are shown calculated based on the standard AHP 1 to 9 scale. To recalculate for different scales, select the scale from the drop-down list in the Group Result menu and click on Scale.

References

[1] Ishizaka, A., Labib, A. (2011). Review of the main developments in the analytic hierarchy process. Expert Systems with Applications, 38(11), 14336-14345.

[2] Franek, J., Kresta, A. (2014). Judgment scales and consistency measure in AHP. Procedia Economics and Finance, 12, 164-173.

[3] Koczkodaj, W. W. (2015). Pairwise Comparison Rating Scale Paradox. Cornell University Library, https://arXiv.org/abs/1511.07540


AHP-OS Data Download and Import in Excel

Most data generated with AHP-OS can be downloaded as csv files for import into a spreadsheet program and further analysis:

  • From the Hierarchy Input Menu – decision hierarchy and local & global priorities
  • From the Group Result Menu – Priorities by node and consolidated decision matrix
  • From the Project Data Menu – Decision matrices from each participant

For each download you can select “.” or “,” as decimal separator. The downloaded csv (text) file is coded in UTF-8 and supports multi-language characters like Chinese, Korean, Japanese and of course a variety of Western languages.

How to import into Excel?

Open Excel and click on “File” -> “New” to open a blank worksheet. Click on “Data“. At the top left you will find the “Get External Data” box.

Click on From Text to select the downloaded csv file for import. The Text Import Wizard will open.

Now it is important to select 65001 : Unicode (UTF-8) under File origin.

Then, depending on your decimal separator, select Comma or Semicolon as Delimiters:

When the import is done, your text characters should be displayed correctly. Save the file with “Save as” as an Excel workbook (*.xlsx).
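If you prefer to skip Excel, the same file can be read directly in Python with the standard library. The sample string below stands in for a downloaded file; remember that the decimal separator “,” goes together with the “;” delimiter:

```python
import csv
import io

# Sample data standing in for a downloaded AHP-OS csv file that was
# exported with "," as decimal separator (";" as delimiter, UTF-8).
sample = "Criterion;Priority\nNeighbourhood;18,8\nFinancing;27,9\n"

rows = list(csv.reader(io.StringIO(sample), delimiter=";"))
header, data = rows[0], rows[1:]
# convert the comma decimal separator to a float-parsable "."
values = {name: float(value.replace(",", ".")) for name, value in data}
# -> {'Neighbourhood': 18.8, 'Financing': 27.9}
```

When reading a real file, open it with encoding="utf-8" so that multi-language characters survive.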


AHP-OS – Editing saved projects

In the project menu of the latest AHP-OS version (2017-05-25), I added a button to edit saved projects. As long as there are no participants’ inputs (completed pairwise comparisons), any saved project’s hierarchy, alternatives or description can be modified.

Open a project from your project list and click on Edit Project. The project hierarchy page will open with a message on top, indicating that you are modifying an existing project. You can now change the hierarchy, for example add criteria or alternatives. A click on Save/Update in the Hierarchy Input Menu

will overwrite the data of the original project under the same session code. You will see it in a message. Before you click on Go to save, you can also update the project short description:

Difference between Use Hierarchy and Edit Project

With Use Hierarchy in the project administration menu, the hierarchy window will also open, and you can modify the hierarchy or alternatives. But in contrast to Edit Project, the modified project will be saved as a new project under a new project session code.



AHP-OS New Release with simplified project administration

Based on feedback from users, I just released a major update of BPMSG’s AHP online software AHP-OS with a simplified menu structure and additional functionality. Starting the program as a registered and logged-in user, the project session table is displayed, showing your projects.

You can open one of your projects, either with a click on the session code in the project table, or by selecting the session code from the session administration menu:

This will bring you to the project summary page, showing

  • Project data
  • Alternatives (if any)
  • Participants (if any)
  • Group input link (to be provided to your project participants)
  • Project Hierarchy and hierarchy definition (text)

At the bottom you find the new project administration menu:

From here you can:

  • View Result: View the project group result (if there are already participants)
  • Group Input: Start pairwise comparisons
  • Use/Modify Hierarchy: use and modify the project’s hierarchy for a new project
  • Delete selected Participants (a request from many users)
  • Delete the whole project
  • Close the project to go back to the project session table

Due to this new Project Administration menu some of the other menus are simplified. Let me know your experience with the new structure, or if you find any bugs. The manual will be updated within the next few days.

Deleting participants

On the project summary page select the participants you want to delete and click on Refresh.

You will then see a message Selected participant(s): Werner. Click on the button to delete the selected user(s). Careful: once deleted, they cannot be recovered, and their pairwise comparison data will be lost.

 


AHP Group Consensus Indicator – how to understand and interpret?

BPMSG’s AHP Excel template and AHP online software AHP-OS can be used for group decision making by asking several participants to give their inputs to a project in the form of pairwise comparisons. Aggregation of individual judgments (AIJ) is done by calculating the element-wise geometric mean of all decision matrices; this consolidated decision matrix is then used to derive the group priorities.
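The AIJ aggregation can be sketched in a few lines; the two participant matrices below are made-up illustration data:

```python
import math

def aij(matrices):
    """Element-wise geometric mean of the participants' decision
    matrices (aggregation of individual judgments, AIJ)."""
    k, n = len(matrices), len(matrices[0])
    return [[math.prod(m[i][j] for m in matrices) ** (1.0 / k)
             for j in range(n)] for i in range(n)]

m1 = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]   # participant 1
m2 = [[1, 8, 4], [1/8, 1, 1], [1/4, 1, 1]]     # participant 2
group = aij([m1, m2])
# group[0][1] = sqrt(2 * 8) = 4.0, and the consolidated matrix stays
# reciprocal: group[1][0] = 1 / group[0][1]
```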

AHP consensus indicator

In [1] I proposed an AHP group consensus indicator to quantify the consensus of the group, i.e. to estimate the agreement on the resulting priorities between participants. This indicator ranges from 0% to 100%: zero percent corresponds to no consensus at all, 100% to full consensus. The indicator is derived from the concept of diversity based on Shannon alpha and beta entropy, as described in [2]. It is a measure of homogeneity of priorities between the participants and can also be interpreted as a measure of overlap between priorities of the group members.

How to interpret?

If we categorise group consensus in the three categories low, moderate and high, I would assign the following percentages to these categories:

  • low consensus: below 65%
  • moderate consensus: 65% to 75%
  • high consensus: above 75%

Values below 50% indicate that there is practically no consensus within the group and a high diversity of judgments. Values in the 80% – 90% range indicate a high overlap of priorities and excellent agreement of judgments from the group members.

AHP Consensus indicator and AHP Consistency Ratio CR

AHP allows for (logical) inconsistencies in judgments; the AHP consistency ratio CR is an indicator for this, and – as a rule of thumb – CR  should not exceed 10% significantly. Please read my posts here and here.

It can be shown that, given a sufficiently large group size, consistency of the aggregated comparison matrix is guaranteed, regardless of the consistency measures of the individual comparison matrices, if the geometric mean (AIJ) is used to aggregate [3]. In other words, if the group of participants is large enough, the consistency ratio CR of the consolidated group matrix will decrease below 10% and is no longer an issue.

Consensus has to be strictly distinguished from consistency. The consensus is derived from the resulting priorities and has nothing to do with the consistency ratio. Whether you have a small or a large group, in both cases consensus could be high or low, reflecting the “agreement” between group members. Even if you ask a million people, there could be no agreement (consensus) on a certain topic, if half of them have the exact opposite judgment of the other half. As a result, the consensus indicator would be zero: there is no overlap, and the total group is divided into two sub-groups with opposite opinions.

Analyzing group consensus – groups and sub-groups

The beauty of the proposed AHP consensus indicator based on Shannon entropy is the possibility to analyse further and find out whether there are sub-groups (clusters) of participants with high consensus among themselves, but low consensus with other sub-groups. This can be done using the concept of alpha and beta diversity [2]. I have published an Excel template to analyze similarities between samples based on partitioning diversity into alpha and beta diversity. It can also be used for your AHP results to analyse group consensus.
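To illustrate the underlying idea of diversity partitioning (this sketch is illustrative only and does not reproduce the exact normalisation used by AHP-OS; the priority vectors are made-up data):

```python
import math

def shannon(p):
    """Shannon entropy of a priority vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def beta_diversity(priority_vectors):
    """Beta diversity of a set of priority vectors: 1 means complete
    overlap (full agreement), larger values mean less overlap."""
    mean = [sum(col) / len(priority_vectors) for col in zip(*priority_vectors)]
    h_gamma = shannon(mean)                       # entropy of the group mean
    h_alpha = sum(shannon(p) for p in priority_vectors) / len(priority_vectors)
    return math.exp(h_gamma - h_alpha)

similar = [[0.6, 0.3, 0.1], [0.55, 0.35, 0.1]]   # two agreeing members
opposed = [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]     # two opposing members
# beta is near 1 for the agreeing pair and clearly larger for the
# opposing pair; a consensus indicator maps this overlap to 0..100%
```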

References

[1] Klaus D. Goepel, (2013). Implementing the Analytic Hierarchy Process as a Standard Method for Multi-Criteria Decision Making In Corporate Enterprises – A New AHP Excel Template with Multiple Inputs, Proceedings of the International Symposium on the Analytic Hierarchy Process, Kuala Lumpur 2013

[2] Lou Jost, (2006). Entropy and Diversity, OIKOS Vol. 113, Issue 2, pg. 363-375, May 2006

[3] Aull-Hyde, R., Erdoğan, S., Duke, J. (2006). An experiment on the consistency of aggregated comparison matrices in AHP. European Journal of Operational Research, 171(1), 290-295.


The Analytical Hierarchy Process (AHP) – Is it old and outdated?

This was a question on researchgate.net, and the answer of Prof. Saaty – the creator of the method – is of course: “The AHP is the only accurate and rigorous mathematical way known for the measurement of intangibles. It is not going to get old for a long time.” A lot of answers from others followed.

When it comes to AHP, it seems the scientific world is still divided into opponents and advocates of the method.

I answered with the statistics of my website: BPMSG has more than 4000 users of the online software AHP-OS, 600 of them active users with 1000 projects and more than 3500 decision makers. My AHP Excel template has reached nearly 21 thousand downloads. This clearly shows that the method is not outdated.

As a reply Nolberto wrote:

No, I don’t think that AHP is outdated, but the fact that more than 1000 projects have been developed using AHP does not mean that their results are correct (which is impossible to check), or that the method is sound (which is easily challenged)…

Here is my answer:

Yes, I agree, the numbers only show that AHP is not outdated (which was the original question). They don’t show whether the results are correct or incorrect, but they also do not show whether the users did or did not realise the method’s drawbacks and limitations.

For me, as a practitioner, AHP is one of the supporting tools in decision making. The intention of a tool is what it does. A hammer intends to strike, a lever intends to lift. It is what they are made for.

From my users feedback I sometimes get the impression that some of them expect a decision making support tool to make the decision for them, and this is not what it is made for.

In my practical applications AHP helped me and the teams a lot to gain better insight into a decision problem, to separate important from less important criteria, and to achieve group consensus and agreement on how to tackle a problem or proceed with a project. Probably this could be achieved with other tools too, but, as you say, AHP is simple, understandable and easy.

For sure, real world problems are complex. Therefore they have to be broken down and simplified to be handled with the method, and I agree, over-simplification can be dangerous. On the other hand, what other approach is possible than breaking down complex problems into digestible pieces?

Finally, it’s not the tool producing the decision, but the humans behind it. They will be accountable for the decision, and it’s their responsibility to find the appropriate model of a decision problem and the right balance between rational and non-rational arguments and the potential consequences of their decision.

Let me know your opinion!

 
