skcriteria.cmp.ranks_rev.rank_inv_check module

Test Criterion #1 for evaluating the effectiveness of an MCDA method.

According to this criterion, the best alternative identified by the method should remain unchanged when a non-optimal alternative is replaced by a worse alternative, provided that the relative importance of each decision criterion remains the same.

class skcriteria.cmp.ranks_rev.rank_inv_check.RankInvariantChecker(dmaker, *, repeat=1, allow_missing_alternatives=False, last_diff_strategy='median', random_state=None)[source]

Bases: SKCMethodABC

Test Criterion #1 for evaluating the effectiveness of an MCDA method.

According to this criterion, the best alternative identified by the method should remain unchanged when a non-optimal alternative is replaced by a worse alternative, provided that the relative importance of each decision criterion remains the same.

To illustrate, suppose that the MCDA method has ranked a set of alternatives, and one of the alternatives, \(A_j\), is replaced by another alternative, \(A_j'\), which is less desirable than \(A_j\). When the alternatives are re-ranked using the same method, it should still identify the same best alternative. Furthermore, the relative rankings of the remaining, unchanged alternatives should also stay the same.

The current implementation worsens each non-optimal alternative repeat times, and stores each resulting output in a collection for comparison with the original ranking. In essence, the test is run once for each suboptimal alternative.

This class assumes that there is another suboptimal alternative \(A_j\) that is the next-worst alternative to \(A_k\), so that \(A_k \succ A_j\). It then generates a mutation \(A_k'\) such that \(A_k'\) is worse than \(A_k\) but still better than \(A_j\) (\(A_k \succ A_k' \succ A_j\)). When the worst alternative is reached, its degradation is limited, by default, by the median of the limits used for the previous alternatives' mutations, so as not to break the distribution of each criterion.
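The following minimal sketch illustrates this bound; it is not the library's implementation, and the function name degrade_alternative, together with the assumption that every criterion is maximized, is hypothetical. Per-criterion noise is drawn so that the mutated alternative stays above the next-worst one:

    import numpy as np

    def degrade_alternative(a_k, a_j, rng):
        """Return a mutated alternative A_k' lying between A_j and A_k.

        Assumes every criterion is to be maximized, so worse means a
        smaller value. The noise lies in [0, A_k - A_j), which keeps
        A_j < A_k' <= A_k for every criterion.
        """
        noise = rng.uniform(0.0, a_k - a_j)  # element-wise bound per criterion
        return a_k - noise

    rng = np.random.default_rng(seed=42)
    a_k = np.array([7.0, 8.0, 9.0])  # a non-optimal alternative
    a_j = np.array([4.0, 5.0, 6.0])  # the next-worst alternative
    a_k_prime = degrade_alternative(a_k, a_j, rng)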

Parameters:
  • dmaker (Decision maker - must implement the evaluate() method) – The MCDA method, or pipeline to evaluate.

  • repeat (int, default = 1) –

    How many times to mutate each suboptimal alternative.

    The total number of rankings returned by this method is (the number of alternatives in the decision matrix - 1) * repeat (see the construction sketch after this parameter list).

  • allow_missing_alternatives (bool, default = False) –

    dmaker may return rankings with fewer alternatives than the original decision matrix (for example, when a pipeline applies a filter). If this parameter is True, the invariance test adds any missing alternative back into the ranking with a rank equal to the maximum rank obtained plus one.

    If the value is False, the test fails with a ValueError whenever a ranking is missing an alternative.

    If more than one alternative is missing, all of them are added with the same value.

  • last_diff_strategy (str or callable (default: "median")) – Since the least preferred alternative has no lower bound (there is nothing immediately below it), this strategy computes the limit for its degradation from the bounds of all the other suboptimal alternatives' mutations.

  • random_state (int, numpy.random.default_rng or None (default: None)) – Controls the random state used to generate the variations of the sub-optimal alternatives.
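As a construction sketch under assumptions (the TOPSIS import path and the skc.mkdm call come from the rest of scikit-criteria and may differ between versions; the matrix values are made up), the checker below mutates each of the three suboptimal alternatives repeat=5 times, so (4 - 1) * 5 = 15 rankings are produced from the mutations:

    import skcriteria as skc
    from skcriteria.agg.similarity import TOPSIS  # assumed import path; may vary by version
    from skcriteria.cmp.ranks_rev.rank_inv_check import RankInvariantChecker

    # A small decision matrix with 4 alternatives and 3 criteria.
    dm = skc.mkdm(
        matrix=[
            [7, 5, 35],
            [5, 4, 26],
            [5, 6, 28],
            [3, 4, 36],
        ],
        objectives=[max, max, min],
        weights=[0.5, 0.3, 0.2],
    )

    checker = RankInvariantChecker(
        TOPSIS(),
        repeat=5,                         # 5 mutations per suboptimal alternative
        allow_missing_alternatives=False,
        last_diff_strategy="median",      # bound the worst alternative's degradation
        random_state=42,
    )
    # (4 alternatives - 1) * repeat = 15 mutated rankings will be compared.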

property dmaker

The MCDA method, or pipeline to evaluate.

property repeat

How many times to mutate each suboptimal alternative.

property allow_missing_alternatives

True if mutated rankings that lack some of the alternatives of the original decision matrix are allowed.

property last_diff_strategy

Since the least preferred alternative has no lower bound (there is nothing immediately below it), this function calculates a limit ceiling based on the bounds of all the other suboptimal alternatives.

property random_state

Controls the random state used to generate the variations of the sub-optimal alternatives.

evaluate(dm)[source]

Executes the invariance test.

Parameters:

dm (DecisionMatrix) – The decision matrix to be evaluated.

Returns:

An object containing the multiple rankings of the alternatives, with information on the changes made to the original decision matrix stored in the extra_ attribute. Specifically, the extra_ attribute contains, under the key rrt1, an object describing those changes, including the noise applied to worsen each sub-optimal alternative.

Return type:

RanksComparator
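A minimal usage sketch, reusing the TOPSIS decision maker and the decision matrix dm from the construction sketch after the parameter list above; the exact way of reading the rrt1 entry of extra_ is not shown:

    ranks = checker.evaluate(dm)  # a RanksComparator with the mutated rankings
    print(ranks)

    # The changes made to the original decision matrix, including the noise
    # applied to worsen each sub-optimal alternative, are reported under the
    # "rrt1" key of the extra_ attribute, as described above.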