skcriteria.agg._agg_base module

Core functionalities to create MADM decision-maker classes.

class skcriteria.agg._agg_base.SKCDecisionMakerABC[source]

Bases: SKCMethodABC

Abstract class for all decision-maker methods in scikit-criteria.

evaluate(dm)[source]

Validate the decision matrix and evaluate the alternatives.

Parameters:

dm (skcriteria.data.DecisionMatrix) – Decision matrix on which the ranking will be calculated.

Returns:

Ranking.

Return type:

skcriteria.data.RankResult
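
A minimal usage sketch, assuming the skcriteria.mkdm factory and the skcriteria.agg.simple.WeightedSumModel aggregator from the library's public API (names outside this page are assumptions); every concrete SKCDecisionMakerABC subclass exposes the same evaluate() contract:

    # Illustrative only: WeightedSumModel is one concrete decision maker that
    # inherits the evaluate() contract described above.
    import skcriteria as skc
    from skcriteria.agg import simple

    # Decision matrix: 3 alternatives, 2 criteria, both to be maximized.
    dm = skc.mkdm(
        matrix=[[7, 5], [5, 8], [6, 6]],
        objectives=[max, max],
        weights=[0.6, 0.4],
        alternatives=["A0", "A1", "A2"],
        criteria=["comfort", "autonomy"],
    )

    dec = simple.WeightedSumModel()  # any SKCDecisionMakerABC subclass
    rank = dec.evaluate(dm)          # validates dm and returns a RankResult

    print(rank.rank_)         # ranking position assigned to each alternative
    print(rank.alternatives)  # ["A0", "A1", "A2"]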

class skcriteria.agg._agg_base.ResultABC(method, alternatives, values, extra)[source]

Bases: DiffEqualityMixin

Base class to implement different types of results.

Any evaluation of the DecisionMatrix is expected to result in an object that extends the functionalities of this class.

Parameters:
  • method (str) – Name of the method that generated the result.

  • alternatives (array-like) – Names of the alternatives evaluated.

  • values (array-like) – Values assigned to each alternative by the method, where the i-th value refers to the valuation of the i-th alternative.

  • extra (dict-like) – Extra information provided by the method regarding the evaluation of the alternatives.

property values

Values assigned to each alternative by the method.

The i-th value refers to the valuation of the i-th alternative.

property method

Name of the method that generated the result.

property alternatives

Names of the alternatives evaluated.

property extra_

Additional information about the result.

Note

e_ is an alias for this property

property e_

Additional information about the result.

Note

This property is an alias for extra_.

to_series()[source]

The result as pandas.Series.

property shape

Tuple with (number_of_alternatives, ).

rank.shape <==> np.shape(rank)
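
A minimal sketch of the accessors listed above, built with RankResult (a concrete subclass documented below) since ResultABC itself is abstract; the concrete values are illustrative only:

    from skcriteria.agg._agg_base import RankResult

    result = RankResult(
        method="ExampleMethod",
        alternatives=["A0", "A1", "A2"],
        values=[2, 1, 3],
        extra={"score": [0.4, 0.9, 0.1]},
    )

    result.values        # valuation of each alternative: [2, 1, 3]
    result.method        # "ExampleMethod"
    result.alternatives  # ["A0", "A1", "A2"]
    result.e_["score"]   # extra information; e_ is an alias for extra_
    result.shape         # (3,)
    result.to_series()   # pandas.Series indexed by alternative name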

diff(other, rtol=1e-05, atol=1e-08, equal_nan=False, check_dtypes=False)[source]

Return the difference between two objects within a tolerance.

This method should be implemented by subclasses to define how differences between objects are calculated.

The parameters rtol, atol, equal_nan, and check_dtypes are forwarded to the numpy and pandas equality functions. They allow you to customize the behavior of the comparison, such as setting the relative and absolute tolerances for numeric comparisons, treating NaN values as equal, and checking the data types of the objects being compared.

Parameters:
  • other (object) – The object to compare to.

  • rtol (float, optional) – The relative tolerance parameter. Default is 1e-05.

  • atol (float, optional) – The absolute tolerance parameter. Default is 1e-08.

  • equal_nan (bool, optional) – Whether to consider NaN values as equal. Default is False.

  • check_dtypes (bool, optional) – Whether to check the data type of the objects. Default is False.

Returns:

The difference between the current and the other object.

Return type:

the_diff

See also

equals, aequals, numpy.isclose(), numpy.all(), numpy.any(), numpy.equal(), numpy.allclose()

Notes

The tolerance values are positive, typically very small numbers. The relative difference (rtol * abs(b)) and the absolute difference atol are added together to compare against the absolute difference between a and b; that is, two values are considered close when abs(a - b) <= atol + rtol * abs(b).

NaNs are treated as equal if they are in the same place and if equal_nan=True. Infs are treated as equal if they are in the same place and of the same sign in both arrays.

values_equals(other)[source]

Check if the alternatives and values are the same.

This comparison ignores the method name and the extra parameters.
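
A short sketch of the comparison helpers, using RankResult as the concrete subclass; equals and aequals come from DiffEqualityMixin (see the references under diff above), and the values are illustrative only:

    from skcriteria.agg._agg_base import RankResult

    a = RankResult("method_a", ["A0", "A1"], [1, 2], {"score": [0.9, 0.1]})
    b = RankResult("method_b", ["A0", "A1"], [1, 2], {})

    a.values_equals(b)  # True: same alternatives and same values
    a.equals(b)         # strict comparison of the whole result
    a.aequals(b)        # comparison within the rtol/atol tolerances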

class skcriteria.agg._agg_base.RankResult(method, alternatives, values, extra)[source]

Bases: ResultABC

Ranking of alternatives.

This type of result is used by methods that generate a ranking of alternatives.

Parameters:
  • method (str) – Name of the method that generated the result.

  • alternatives (array-like) – Names of the alternatives evaluated.

  • values (array-like) – Values assigned to each alternative by the method, where the i-th value refers to the valuation of the i-th alternative.

  • extra (dict-like) – Extra information provided by the method regarding the evaluation of the alternatives.

property has_ties_

Return True if two or more alternatives share the same ranking value.

property ties_

Counter object that counts how many times each value appears.

property rank_

Alias for values.

property untied_rank_

Ranking without ties.

If the ranking has ties, this property assigns unique and consecutive values. The untied values are obtained simply as numpy.argsort(rank_) + 1.

to_series(*, untied=False)[source]

The result as pandas.Series.
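
A short sketch of the tie-related helpers documented above; the values are illustrative, and any ranking with a repeated position behaves the same way:

    from skcriteria.agg._agg_base import RankResult

    rank = RankResult("ExampleMethod", ["A0", "A1", "A2"], [1, 1, 2], {})

    rank.has_ties_               # True: two alternatives share position 1
    rank.ties_                   # Counter({1: 2, 2: 1})
    rank.untied_rank_            # [1, 2, 3] == numpy.argsort(rank.rank_) + 1
    rank.to_series(untied=True)  # pandas.Series built from the untied ranking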

class skcriteria.agg._agg_base.KernelResult(method, alternatives, values, extra)[source]

Bases: ResultABC

Separates the alternatives into good (kernel) and bad ones.

This type of result is used by methods that select which alternatives are good and which are bad. The good alternatives are called the "kernel".

Parameters:
  • method (str) – Name of the method that generated the result.

  • alternatives (array-like) – Names of the alternatives evaluated.

  • values (array-like) – Values assigned to each alternative by the method, where the i-th value refers to the valuation of the i-th alternative.

  • extra (dict-like) – Extra information provided by the method regarding the evaluation of the alternatives.

property kernel_

Alias for values.

property kernel_size_

Number of alternatives in the kernel.

property kernel_where_

Indexes of the alternatives that are part of the kernel.

property kernelwhere_

Indexes of the alternatives that are part of the kernel.

Deprecated since version 0.7: Use kernel_where_ instead

property kernel_alternatives_

Return the names of alternatives in the kernel.
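
A short sketch of the kernel accessors; the boolean values mark which alternatives belong to the kernel, and the concrete names are illustrative only:

    from skcriteria.agg._agg_base import KernelResult

    kernel = KernelResult(
        method="ExampleKernelMethod",
        alternatives=["A0", "A1", "A2", "A3"],
        values=[True, False, True, False],
        extra={},
    )

    kernel.kernel_               # alias for values: [True, False, True, False]
    kernel.kernel_size_          # 2
    kernel.kernel_where_         # [0, 2] -> indexes of the kernel members
    kernel.kernel_alternatives_  # ["A0", "A2"]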