icepool

Package for computing dice and card probabilities.

Starting with v0.25.1, you can replace latest in the URL with an old version number to get the documentation for that version.

See this JupyterLite distribution for examples.

Visit the project page.

General conventions:

  • Instances are immutable (apart from internal caching). Anything that looks like it mutates an instance actually returns a separate instance with the change.
  1"""Package for computing dice and card probabilities.
  2
  3Starting with `v0.25.1`, you can replace `latest` in the URL with an old version
  4number to get the documentation for that version.
  5
  6See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html)
  7for examples.
  8
  9[Visit the project page.](https://github.com/HighDiceRoller/icepool)
 10
 11General conventions:
 12
 13* Instances are immutable (apart from internal caching). Anything that looks
 14    like it mutates an instance actually returns a separate instance with the
 15    change.
 16"""
 17
 18__docformat__ = 'google'
 19
 20__version__ = '2.1.1'
 21
 22from typing import Final
 23
 24from icepool.typing import Outcome, RerollType, NoCacheType
 25from icepool.order import Order, ConflictingOrderError, UnsupportedOrder
 26
 27Reroll: Final = RerollType.Reroll
 28"""Indicates that an outcome should be rerolled (with unlimited depth).
 29
 30This can be used in place of outcomes in many places. See individual function
 31and method descriptions for details.
 32
 33This effectively removes the outcome from the probability space, along with its
 34contribution to the denominator.
 35
 36This can be used for conditional probability by removing all outcomes not
 37consistent with the given observations.
 38
 39Operation in specific cases:
 40
 41* When used with `Again`, only that stage is rerolled, not the entire `Again`
 42    tree.
 43* To reroll with limited depth, use `Die.reroll()`, or `Again` with no
 44    modification.
 45* When used with `MultisetEvaluator`, the entire evaluation is rerolled.
 46"""
 47
 48NoCache: Final = NoCacheType.NoCache
 49"""Indicates that caching should not be performed. Exact meaning depends on context."""
 50
 51# Expose certain names at top-level.
 52
 53from icepool.function import (d, z, __getattr__, coin, stochastic_round,
 54                              one_hot, from_cumulative, from_rv, pointwise_max,
 55                              pointwise_min, min_outcome, max_outcome,
 56                              consecutive, sorted_union, commonize_denominator,
 57                              reduce, accumulate, map, map_function,
 58                              map_and_time, mean_time_to_absorb, map_to_pool)
 59
 60from icepool.population.base import Population
 61from icepool.population.die import implicit_convert_to_die, Die
 62from icepool.expand import iter_cartesian_product, cartesian_product, tupleize, vectorize
 63from icepool.collection.vector import Vector
 64from icepool.collection.symbols import Symbols
 65from icepool.population.again import AgainExpression
 66
 67Again: Final = AgainExpression(is_additive=True)
 68"""A symbol indicating that the die should be rolled again, usually with some operation applied.
 69
 70This is designed to be used with the `Die()` constructor.
 71`AgainExpression`s should not be fed to functions or methods other than
 72`Die()` (or indirectly via `map()`), but they can be used with operators.
 73Examples:
 74
 75* `Again + 6`: Roll again and add 6.
 76* `Again + Again`: Roll again twice and sum.
 77
 78The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
 79affect how these arguments are processed. At most one of `again_count` or
 80`again_depth` may be provided; if neither are provided, the behavior is as
 81`again_depth=1`.
 82
 83For finer control over rolling processes, use e.g. `Die.map()` instead.
 84
 85#### Count mode
 86
 87When `again_count` is provided, we start with one roll queued and execute one 
 88roll at a time. For every `Again` we roll, we queue another roll.
 89If we run out of rolls, we sum the rolls to find the result. If the total number
 90of rolls (not including the initial roll) would exceed `again_count`, we reroll
 91the entire process, effectively conditioning the process on not rolling more
 92than `again_count` extra dice.
 93
 94This mode only allows "additive" expressions to be used with `Again`, which
 95means that only the following operators are allowed:
 96
 97* Binary `+`
 98* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.
 99
100Furthermore, the `+` operator is assumed to be associative and commutative.
101For example, `str` or `tuple` outcomes will not produce elements with a definite
102order.
103
104#### Depth mode
105
106When `again_depth=0`, `again_end` is directly substituted
107for each occurence of `Again`. For other values of `again_depth`, the result for
108`again_depth-1` is substituted for each occurence of `Again`.
109
110If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth
111are rerolled.
112
113#### Rerolls
114
115`Reroll` only rerolls that particular die, not the entire process. Any such
116rerolls do not count against the `again_count` or `again_depth` limit.
117
118If `again_end=icepool.Reroll`:
119* Count mode: Any result that would cause the number of rolls to exceed
120    `again_count` is rerolled.
121* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
122"""

from icepool.population.die_with_truth import DieWithTruth

from icepool.collection.counts import CountsKeysView, CountsValuesView, CountsItemsView

from icepool.population.keep import lowest, highest, middle

from icepool.generator.pool import Pool, standard_pool
from icepool.generator.keep import KeepGenerator
from icepool.generator.compound_keep import CompoundKeepGenerator

from icepool.generator.multiset_generator import MultisetGenerator
from icepool.generator.multiset_tuple_generator import MultisetTupleGenerator
from icepool.evaluator.multiset_evaluator import MultisetEvaluator

from icepool.population.deck import Deck
from icepool.generator.deal import Deal
from icepool.generator.multi_deal import MultiDeal

from icepool.expression.multiset_expression import MultisetExpression, implicit_convert_to_expression
from icepool.evaluator.multiset_function import multiset_function
from icepool.expression.multiset_parameter import MultisetParameter, MultisetTupleParameter
from icepool.expression.multiset_mixture import MultisetMixture

from icepool.population.format import format_probability_inverse

from icepool.wallenius import Wallenius

import icepool.generator as generator
import icepool.evaluator as evaluator
import icepool.operator as operator

import icepool.typing as typing
from icepool.expand import Expandable

__all__ = [
    'd', 'z', 'coin', 'stochastic_round', 'one_hot', 'Outcome', 'Die',
    'Population', 'tupleize', 'vectorize', 'Vector', 'Symbols', 'Again',
    'CountsKeysView', 'CountsValuesView', 'CountsItemsView', 'from_cumulative',
    'from_rv', 'pointwise_max', 'pointwise_min', 'lowest', 'highest', 'middle',
    'min_outcome', 'max_outcome', 'consecutive', 'sorted_union',
    'commonize_denominator', 'reduce', 'accumulate', 'map', 'map_function',
    'map_and_time', 'mean_time_to_absorb', 'map_to_pool', 'Reroll',
    'RerollType', 'Pool', 'standard_pool', 'MultisetGenerator',
    'MultisetExpression', 'MultisetEvaluator', 'Order',
    'ConflictingOrderError', 'UnsupportedOrder', 'Deck', 'Deal', 'MultiDeal',
    'multiset_function', 'MultisetParameter', 'MultisetTupleParameter',
    'NoCache', 'function', 'typing', 'evaluator', 'format_probability_inverse',
    'Wallenius'
]
@cache
def d(sides: int, /) -> 'icepool.Die[int]':
    """A standard die, uniformly distributed from `1` to `sides` inclusive.

    Don't confuse this with `icepool.Die()`:

    * `icepool.Die([6])`: A `Die` that always rolls the integer 6.
    * `icepool.d(6)`: A d6.

    You can also import individual standard dice from the `icepool` module, e.g.
    `from icepool import d6`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(1, sides + 1))

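As a quick illustration of the `Die([6])` vs `d(6)` distinction, here is a stdlib sketch using plain weight dicts; `uniform_die` and `mean` are hypothetical helpers for this example, not part of icepool:

```python
from fractions import Fraction

def uniform_die(sides: int) -> dict[int, int]:
    # Mirrors d(sides): faces 1..sides, each with weight 1.
    return {face: 1 for face in range(1, sides + 1)}

def mean(die: dict[int, int]) -> Fraction:
    # Weighted average of outcomes.
    total = sum(die.values())
    return sum(Fraction(outcome * weight, total) for outcome, weight in die.items())
```

A d6 averages 7/2, while the `Die([6])` analogue `{6: 1}` always rolls 6.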

@cache
def z(sides: int, /) -> 'icepool.Die[int]':
    """A die uniformly distributed from `0` to `sides - 1` inclusive.

    Equal to `d(sides) - 1`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(0, sides))


def coin(n: int | float | Fraction,
         d: int = 1,
         /,
         *,
         max_denominator: int | None = None) -> 'icepool.Die[bool]':
    """A `Die` that rolls `True` with probability `n / d`, and `False` otherwise.

    If `n <= 0` or `n >= d` the result will have only one outcome.

    Args:
        n: An int numerator, or a non-integer probability.
        d: An int denominator. Should not be provided if the first argument is
            not an int.
        max_denominator: If provided, a non-int `n` will be converted using
            `Fraction(n).limit_denominator(max_denominator)`.
    """
    if not isinstance(n, int):
        if d != 1:
            raise ValueError(
                'If a non-int numerator is provided, a denominator must not be provided.'
            )
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n = fraction.numerator
        d = fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)

    return icepool.Die(data)

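The weight logic above can be sketched in isolation. This illustrative stdlib re-implementation (returning raw weights rather than a `Die`; `coin_weights` is a hypothetical name for this example) shows how a non-int numerator is converted and how probabilities of 0 or 1 collapse to a single outcome:

```python
from fractions import Fraction

def coin_weights(n, d: int = 1) -> dict[bool, int]:
    """True gets weight n, False gets weight d - n, clamped so that
    probabilities <= 0 or >= 1 yield only one outcome."""
    if not isinstance(n, int):
        # A non-integer probability is converted to an exact fraction.
        f = Fraction(n)
        n, d = f.numerator, f.denominator
    data: dict[bool, int] = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)
    return data
```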
def stochastic_round(x,
                     /,
                     *,
                     max_denominator: int | None = None) -> 'icepool.Die[int]':
    """Randomly rounds a value up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise, producing a `Die` with up to two outcomes.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    integer_part = math.floor(x)
    fractional_part = x - integer_part
    return integer_part + coin(fractional_part,
                               max_denominator=max_denominator)

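The resulting distribution can be sketched directly. This illustrative stdlib version (`stochastic_round_dist` is a hypothetical helper, not icepool's API) computes the same two-outcome distribution with exact fractions:

```python
import math
from fractions import Fraction

def stochastic_round_dist(x) -> dict[int, Fraction]:
    """floor(x) with probability 1 - frac(x), and floor(x) + 1 with
    probability frac(x). Integers round to themselves."""
    lo = math.floor(x)
    frac = Fraction(x) - lo
    if frac == 0:
        return {lo: Fraction(1)}
    return {lo: 1 - frac, lo + 1: frac}
```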
def one_hot(sides: int, /) -> 'icepool.Die[tuple[bool, ...]]':
    """A `Die` with `Vector` outcomes with one element set to `True` uniformly at random and the rest `False`.

    This is an easy (if somewhat expensive) way of representing how many dice
    in a pool rolled each number. For example, the outcomes of `10 @ one_hot(6)`
    are the `(ones, twos, threes, fours, fives, sixes)` rolled in 10d6.
    """
    data = []
    for i in range(sides):
        outcome = [False] * sides
        outcome[i] = True
        data.append(icepool.Vector(outcome))
    return icepool.Die(data)

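A stdlib sketch of the idea: summing one-hot vectors elementwise counts how many dice rolled each face, which is what `n @ one_hot(sides)` computes over the whole distribution. The helper names here (`one_hot_outcomes`, `count_faces`) are hypothetical, for illustration only:

```python
def one_hot_outcomes(sides: int) -> list[tuple[bool, ...]]:
    # The equally likely outcomes of one_hot(sides): exactly one True each.
    return [tuple(i == j for j in range(sides)) for i in range(sides)]

def count_faces(rolls: list[int], sides: int) -> tuple[int, ...]:
    # Elementwise sum of the one-hot vectors for a concrete list of rolls.
    outcomes = one_hot_outcomes(sides)
    totals = [0] * sides
    for roll in rolls:
        for k, flag in enumerate(outcomes[roll - 1]):
            totals[k] += flag  # bool adds as 0 or 1
    return tuple(totals)
```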

class Outcome(Hashable, Protocol[T_contra]):
    """Protocol to attempt to verify that outcome types are hashable and sortable.

    Far from foolproof, e.g. it cannot enforce total ordering.
    """

    def __lt__(self, other: T_contra) -> bool:
        ...

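For illustration, here is a hypothetical custom outcome type that satisfies what the protocol asks for: it is hashable and totally ordered (`Grade` and its `ORDER` table are invented for this sketch):

```python
from functools import total_ordering

@total_ordering
class Grade:
    """Hashable and totally ordered, so instances can serve as outcomes.
    The protocol itself can only check for __lt__, not total ordering."""
    ORDER = ('F', 'C', 'B', 'A')

    def __init__(self, letter: str):
        self.letter = letter

    def __eq__(self, other) -> bool:
        return isinstance(other, Grade) and self.letter == other.letter

    def __lt__(self, other: 'Grade') -> bool:
        # total_ordering derives <=, >, >= from __lt__ and __eq__.
        return self.ORDER.index(self.letter) < self.ORDER.index(other.letter)

    def __hash__(self) -> int:
        return hash(self.letter)
```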

class Die(Population[T_co], MaybeHashKeyed):
    """Sampling with replacement. Quantities represent weights.

    Dice are immutable. Methods do not modify the `Die` in-place;
    rather they return a `Die` representing the result.

    It's also possible to have "empty" dice with no outcomes at all,
    though these have little use other than being sentinel values.
    """

    _data: Counts[T_co]

    @property
    def _new_type(self) -> type:
        return Die

    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'Outcome | Die | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
            of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Several methods and functions forward **kwargs to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.

        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                    quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                    * For convenience, `tuple`s containing `Population`s will be
                        `tupleize`d into a `Population` of `tuple`s.
                        This does not apply to subclasses of `tuple`s such as `namedtuple`
                        or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                    The quantity assigned to a `Die` is shared among its
                    outcomes. The total denominator will be scaled up if
                    necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                    `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.
        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                if again_end is not None:
                    raise ValueError(
                        'again_end cannot be used with again_count.')
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)
    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Die[T_co]':
        """Creates a new `Die` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    # Defined separately from the superclass to help typing.
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)

    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)

    # Basic access.

    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome, /) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        """The number of outcomes."""
        return len(self._data)

    def __contains__(self, outcome) -> bool:
        return outcome in self._data

    # Quantity management.

    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common divisor."""
        return icepool.Die(self._data.simplify())

    # Rerolls and other outcome management.

    def reroll(self,
               outcomes: Callable[..., bool] | Collection[T_co] | None = None,
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls the given outcomes.

        Args:
            outcomes: Selects which outcomes to reroll. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                    should be rerolled.
                * If not provided, the min outcome will be rerolled.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `outcomes`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if outcomes is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(outcomes, star)

        if depth == 'inf':
            data = {
                outcome: quantity
                for outcome, quantity in self.items()
                if outcome not in outcome_set
            }
        elif depth < 0:
            raise ValueError('reroll depth cannot be negative.')
        else:
            total_reroll_quantity = sum(quantity
                                        for outcome, quantity in self.items()
                                        if outcome in outcome_set)
            total_stop_quantity = self.denominator() - total_reroll_quantity
            rerollable_factor = total_reroll_quantity**depth
            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                           * total_reroll_quantity) // total_stop_quantity
            data = {
                outcome: (rerollable_factor *
                          quantity if outcome in outcome_set else stop_factor *
                          quantity)
                for outcome, quantity in self.items()
            }
        return icepool.Die(data)
    def filter(self,
               outcomes: Callable[..., bool] | Collection[T_co],
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls until getting one of the given outcomes.

        Essentially the complement of `reroll()`.

        Args:
            outcomes: Selects which outcomes to reroll until. Options:
                * A callable that takes an outcome and returns `True` if it
                    should be accepted.
                * A collection of outcomes to reroll until.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `outcomes`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if callable(outcomes):
            if star is None:
                star = infer_star(outcomes)
            if star:

                not_outcomes = {
                    outcome
                    for outcome in self.outcomes()
                    if not outcomes(*outcome)  # type: ignore
                }
            else:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes() if not outcomes(outcome)
                }
        else:
            not_outcomes = {
                not_outcome
                for not_outcome in self.outcomes()
                if not_outcome not in outcomes
            }
        return self.reroll(not_outcomes, depth=depth)
    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Truncates the outcomes of this `Die` to the given range.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be truncated.

        This effectively rerolls outcomes outside the given range.
        If instead you want to replace those outcomes with the nearest endpoint,
        use `clip()`.

        Not to be confused with `trunc(die)`, which performs integer truncation
        on each outcome.
        """
        if min_outcome is not None:
            start = bisect.bisect_left(self.outcomes(), min_outcome)
        else:
            start = None
        if max_outcome is not None:
            stop = bisect.bisect_right(self.outcomes(), max_outcome)
        else:
            stop = None
        data = {k: v for k, v in self.items()[start:stop]}
        return icepool.Die(data)

    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Clips the outcomes of this `Die` to the given values.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be clipped.

        This is not the same as rerolling outcomes beyond this range;
        the outcome is simply adjusted to fit within the range.
        This will typically cause some quantity to bunch up at the endpoint(s).
        If you want to reroll outcomes beyond this range, use `truncate()`.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            if min_outcome is not None and outcome <= min_outcome:
                data[min_outcome] += quantity
            elif max_outcome is not None and outcome >= max_outcome:
                data[max_outcome] += quantity
            else:
                data[outcome] += quantity
        return icepool.Die(data)

    @cached_property
    def _popped_min(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_min())
        return die, self.quantities()[0]

    def _pop_min(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the min outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_min

    @cached_property
    def _popped_max(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_max())
        return die, self.quantities()[-1]

    def _pop_max(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the max outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_max

    # Processes.

    def map(
            self,
            repl:
        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
            /,
            *extra_args,
            star: bool | None = None,
            repeat: int | Literal['inf'] = 1,
            time_limit: int | Literal['inf'] | None = None,
            again_count: int | None = None,
            again_depth: int | None = None,
            again_end: 'U | Die[U] | icepool.RerollType | None' = None,
            **kwargs) -> 'Die[U]':
        """Maps outcomes of the `Die` to other outcomes.

        This is also useful for representing processes.

        As `icepool.map(repl, self, ...)`.
        """
        return icepool.map(repl,
                           self,
                           *extra_args,
                           star=star,
                           repeat=repeat,
                           time_limit=time_limit,
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end,
                           **kwargs)

    def map_and_time(
            self,
            repl:
        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
            /,
            *extra_args,
            star: bool | None = None,
            time_limit: int,
            **kwargs) -> 'Die[tuple[T_co, int]]':
        """Repeatedly map outcomes of the state to other outcomes, while also
        counting timesteps.

        This is useful for representing processes.

        As `map_and_time(repl, self, ...)`.
        """
        return icepool.map_and_time(repl,
                                    self,
                                    *extra_args,
                                    star=star,
                                    time_limit=time_limit,
                                    **kwargs)

    def mean_time_to_absorb(
            self,
            repl:
        'Callable[..., T_co | icepool.Die[T_co] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T_co | icepool.Die[T_co] | icepool.RerollType | icepool.AgainExpression]',
            /,
            *extra_args,
            star: bool | None = None,
            **kwargs) -> Fraction:
        """EXPERIMENTAL: The mean time for the process to reach an absorbing state.

        As `mean_time_to_absorb(repl, self, ...)`.
        """
        return icepool.mean_time_to_absorb(repl,
 515                                           self,
 516                                           *extra_args,
 517                                           star=star,
 518                                           **kwargs)
 519
 520    def time_to_sum(self: 'Die[int]',
 521                    target: int,
 522                    /,
 523                    max_time: int,
 524                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
 525        """The number of rolls until the cumulative sum is greater than or equal to the target.
 526        
 527        Args:
 528            target: The number to stop at once reached.
 529            max_time: The maximum number of rolls to run.
 530                If the sum is not reached, the outcome is determined by `dnf`.
 531            dnf: What time to assign in cases where the target was not reached
 532                in `max_time`. If not provided, this is set to `max_time`.
 533                `dnf=icepool.Reroll` will remove this case from the result,
 534                effectively rerolling it.
 535        """
 536        if target <= 0:
 537            return Die([0])
 538
 539        if dnf is None:
 540            dnf = max_time
 541
 542        def step(total, roll):
 543            return min(total + roll, target)
 544
 545        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
 546            step, self, time_limit=max_time)
 547
 548        def get_time(total, time):
 549            if total < target:
 550                return dnf
 551            else:
 552                return time
 553
 554        return result.map(get_time)
 555
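As an illustration of what `time_to_sum` computes, here is a hand-rolled sketch for a fair d6, using plain dicts of exact `Fraction` probabilities instead of icepool's `Die`. The function name and the dict-based state representation are hypothetical, not part of the library:

```python
# Illustrative only: not icepool's implementation.
from fractions import Fraction

def time_to_sum_d6(target: int, max_time: int) -> dict[int, Fraction]:
    """Distribution of the number of d6 rolls until the running sum reaches target."""
    states = {0: Fraction(1)}  # running sum -> probability
    result: dict[int, Fraction] = {}
    for time in range(1, max_time + 1):
        next_states: dict[int, Fraction] = {}
        for total, p in states.items():
            for roll in range(1, 7):
                if total + roll >= target:
                    result[time] = result.get(time, Fraction(0)) + p / 6
                else:
                    key = total + roll
                    next_states[key] = next_states.get(key, Fraction(0)) + p / 6
        states = next_states
    # Mass that never reached the target is assigned to max_time,
    # matching the default `dnf` behavior.
    leftover = sum(states.values())
    if leftover:
        result[max_time] = result.get(max_time, Fraction(0)) + leftover
    return result

dist = time_to_sum_d6(7, 10)
```

For a target of 7, at least two rolls are always needed, and two suffice with probability 7/12.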
 556    @cached_property
 557    def _mean_time_to_sum_cache(self) -> list[Fraction]:
 558        return [Fraction(0)]
 559
 560    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
 561        """The mean number of rolls until the cumulative sum is greater than or equal to the target.
 562
 563        Args:
 564            target: The target sum.
 565
 566        Raises:
 567            ValueError: If `self` has negative outcomes.
 568            ZeroDivisionError: If `self.mean() == 0`.
 569        """
 570        target = max(target, 0)
 571
 572        if target < len(self._mean_time_to_sum_cache):
 573            return self._mean_time_to_sum_cache[target]
 574
 575        if self.min_outcome() < 0:
 576            raise ValueError(
 577                'mean_time_to_sum does not handle negative outcomes.')
 578        time_per_effect = Fraction(self.denominator(),
 579                                   self.denominator() - self.quantity(0))
 580
 581        for i in range(len(self._mean_time_to_sum_cache), target + 1):
 582            result = time_per_effect + self.reroll([
 583                0
 584            ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
 585            self._mean_time_to_sum_cache.append(result)
 586
 587        return result
 588
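The recurrence above can be sanity-checked with a self-contained sketch for a fair d6, which has no zero outcome, so `time_per_effect` is 1. The function name is hypothetical:

```python
from fractions import Fraction
from functools import lru_cache

# Illustrative only: the same recurrence as mean_time_to_sum, specialized to d6.
@lru_cache(maxsize=None)
def mean_rolls_to_sum_d6(target: int) -> Fraction:
    if target <= 0:
        return Fraction(0)
    # One roll, plus the expected remaining time averaged over the six faces.
    return 1 + sum(mean_rolls_to_sum_d6(target - x) for x in range(1, 7)) / 6
```

Reaching a target of 1 always takes exactly one roll; a target of 2 takes 1 + 1/6 rolls on average, since only a roll of 1 requires a second roll.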
 589    def explode(self,
 590                outcomes: Collection[T_co] | Callable[..., bool] | None = None,
 591                /,
 592                *,
 593                star: bool | None = None,
 594                depth: int = 9,
 595                end=None) -> 'Die[T_co]':
 596        """Causes outcomes to be rolled again and added to the total.
 597
 598        Args:
 599            outcomes: Which outcomes to explode. Options:
 600                * A collection of outcomes to explode.
 601                * A callable that takes an outcome and returns `True` if it
 602                    should be exploded.
 603                * If not supplied, the max outcome will explode.
 604            star: Whether outcomes should be unpacked into separate arguments
 605                before sending them to a callable `outcomes`.
 606                If not provided, this will be guessed based on the function
 607                signature.
 608            depth: The maximum number of additional dice to roll, not counting
 609                the initial roll.
 610                If not supplied, a default of 9 will be used.
 611            end: Once `depth` is reached, further explosions will be treated
 612                as this value. By default, a zero value will be used.
 613                `icepool.Reroll` will make one extra final roll, rerolling until
 614                a non-exploding outcome is reached.
 615        """
 616
 617        if outcomes is None:
 618            outcome_set = {self.max_outcome()}
 619        else:
 620            outcome_set = self._select_outcomes(outcomes, star)
 621
 622        if depth < 0:
 623            raise ValueError('depth cannot be negative.')
 624        elif depth == 0:
 625            return self
 626
 627        def map_final(outcome):
 628            if outcome in outcome_set:
 629                return outcome + icepool.Again
 630            else:
 631                return outcome
 632
 633        return self.map(map_final, again_depth=depth, again_end=end)
 634
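The distribution `explode` produces can be sketched by hand for a d6 exploding on its max outcome, using plain dicts of integer weights. This recursion is illustrative, not the library's `Again`-based mechanism:

```python
# Illustrative only: weights as a dict, denominator 6**(depth + 1).
def explode_d6(depth: int) -> dict[int, int]:
    """Weights for an exploding d6 (explodes on 6), with at most `depth` extra rolls."""
    if depth == 0:
        # Out of explosions: a bare d6; a final 6 counts as itself plus zero.
        return {n: 1 for n in range(1, 7)}
    sub = explode_d6(depth - 1)  # weights over a denominator of 6**depth
    # Non-exploding faces are scaled up to the common denominator 6**(depth + 1).
    dist: dict[int, int] = {n: 6**depth for n in range(1, 6)}
    # A 6 explodes: add 6 to every outcome of the shallower distribution.
    for outcome, w in sub.items():
        dist[6 + outcome] = dist.get(6 + outcome, 0) + w
    return dist

exploded = explode_d6(2)  # denominator 6**3 == 216
```

Note that an outcome of exactly 6 is impossible with `depth >= 1`, since a 6 always triggers another roll.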
 635    def if_else(
 636        self,
 637        outcome_if_true: U | 'Die[U]',
 638        outcome_if_false: U | 'Die[U]',
 639        *,
 640        again_count: int | None = None,
 641        again_depth: int | None = None,
 642        again_end: 'U | Die[U] | icepool.RerollType | None' = None
 643    ) -> 'Die[U]':
 644        """Ternary conditional operator.
 645
 646        This replaces truthy outcomes with the first argument and falsy outcomes
 647        with the second argument.
 648
 649        Args:
 650            again_count, again_depth, again_end: Forwarded to the final die constructor.
 651        """
 652        return self.map(lambda x: bool(x)).map(
 653            {
 654                True: outcome_if_true,
 655                False: outcome_if_false
 656            },
 657            again_count=again_count,
 658            again_depth=again_depth,
 659            again_end=again_end)
 660
 661    def is_in(self, outcomes: Container[T_co], /) -> 'Die[bool]':
 662        """A die that returns True iff the roll of the die is contained in the target."""
 663        return self.map(lambda x: x in outcomes)
 664
 665    def count(self, rolls: int, outcomes: Container[T_co], /) -> 'Die[int]':
 666        """Roll this die `rolls` times and count how many results are in the target."""
 667        return rolls @ self.is_in(outcomes)
 668
 669    # Pools and sums.
 670
 671    @cached_property
 672    def _sum_cache(self) -> MutableMapping[int, 'Die']:
 673        return {}
 674
 675    def _sum_all(self, rolls: int, /) -> 'Die':
 676        """Roll this `Die` `rolls` times and sum the results.
 677
 678        The sum is computed one at a time, with each additional item on the 
 679        right, similar to `functools.reduce()`.
 680
 681        If `rolls` is negative, roll the `Die` `abs(rolls)` times and negate
 682        the result.
 683
 684        If you instead want to replace tuple (or other sequence) outcomes with
 685        their sum, use `die.map(sum)`.
 686        """
 687        if rolls in self._sum_cache:
 688            return self._sum_cache[rolls]
 689
 690        if rolls < 0:
 691            result = -self._sum_all(-rolls)
 692        elif rolls == 0:
 693            result = self.zero().simplify()
 694        elif rolls == 1:
 695            result = self
 696        else:
 697            # In addition to working similar to reduce(), this seems to perform
 698            # better than binary split.
 699            result = self._sum_all(rolls - 1) + self
 700
 701        self._sum_cache[rolls] = result
 702        return result
 703
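The left-fold summation `_sum_all` performs can be sketched with `functools.reduce` over plain weight dicts, computing 3d6 as `((d6 + d6) + d6)`. The helper name is hypothetical:

```python
from collections import defaultdict
from functools import reduce

def add_dists(a: dict[int, int], b: dict[int, int]) -> dict[int, int]:
    # Convolution of two weight dicts: the distribution of the sum.
    out: dict[int, int] = defaultdict(int)
    for x, wx in a.items():
        for y, wy in b.items():
            out[x + y] += wx * wy
    return dict(out)

d6 = {n: 1 for n in range(1, 7)}
three_d6 = reduce(add_dists, [d6] * 3)  # ((d6 + d6) + d6), as in _sum_all
```

The denominator is 6**3 == 216; for example, a total of 10 can be rolled 27 ways.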
 704    def __matmul__(self: 'Die[int]', other) -> 'Die':
 705        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.
 706        
 707        The sum is computed one at a time, with each additional item on the 
 708        right, similar to `functools.reduce()`.
 709        """
 710        if isinstance(other, icepool.AgainExpression):
 711            return NotImplemented
 712        other = implicit_convert_to_die(other)
 713
 714        data: MutableMapping[int, Any] = defaultdict(int)
 715
 716        max_abs_die_count = max(abs(self.min_outcome()),
 717                                abs(self.max_outcome()))
 718        for die_count, die_count_quantity in self.items():
 719            factor = other.denominator()**(max_abs_die_count - abs(die_count))
 720            subresult = other._sum_all(die_count)
 721            for outcome, subresult_quantity in subresult.items():
 722                data[
 723                    outcome] += subresult_quantity * die_count_quantity * factor
 724
 725        return icepool.Die(data)
 726
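The `factor` in `__matmul__` exists because branches that roll different numbers of dice have different denominators; each branch is scaled up to a common one. A stand-alone sketch for `d2 @ d6` (hypothetical helper, plain weight dicts):

```python
from collections import defaultdict

def sum_n_d6(n: int) -> dict[int, int]:
    # Weights for the sum of n fair d6, over a denominator of 6**n.
    dist = {0: 1}
    for _ in range(n):
        next_dist: dict[int, int] = defaultdict(int)
        for total, w in dist.items():
            for face in range(1, 7):
                next_dist[total + face] += w
        dist = dict(next_dist)
    return dist

# d2 @ d6: roll a d2, then sum that many d6. The one-die branch is scaled by
# 6**(2 - 1) so both branches share the denominator 2 * 6**2 == 72.
data: dict[int, int] = defaultdict(int)
for die_count in (1, 2):
    factor = 6**(2 - die_count)
    for outcome, w in sum_n_d6(die_count).items():
        data[outcome] += w * factor
```

An outcome of 2 gets weight 6 from the one-die branch plus 1 from the two-dice branch.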
 727    def __rmatmul__(self, other: 'int | Die[int]') -> 'Die':
 728        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.
 729        
 730        The sum is computed one at a time, with each additional item on the 
 731        right, similar to `functools.reduce()`.
 732        """
 733        if isinstance(other, icepool.AgainExpression):
 734            return NotImplemented
 735        other = implicit_convert_to_die(other)
 736        return other.__matmul__(self)
 737
 738    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
 739        """Possible sequences produced by rolling this die a number of times.
 740        
 741        This is extremely expensive computationally. If possible, use `reduce()`
 742        instead; if you don't care about order, `Die.pool()` is better.
 743        """
 744        return icepool.cartesian_product(*(self for _ in range(rolls)),
 745                                         outcome_type=tuple)  # type: ignore
 746
 747    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
 748        """Creates a `Pool` from this `Die`.
 749
 750        You might subscript the pool immediately afterwards, e.g.
 751        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
 752        lowest of 5d6.
 753
 754        Args:
 755            rolls: The number of copies of this `Die` to put in the pool.
 756                Or, a sequence of one `int` per die acting as
 757                `keep_tuple`. Note that `...` cannot be used in the
 758                argument to this method, as the argument determines the size of
 759                the pool.
 760        """
 761        if isinstance(rolls, int):
 762            return icepool.Pool({self: rolls})
 763        else:
 764            pool_size = len(rolls)
 765            # Haven't dealt with narrowing return type.
 766            return icepool.Pool({self: pool_size})[rolls]  # type: ignore
 767
 768    @overload
 769    def keep(self, rolls: Sequence[int], /) -> 'Die':
 770        """Selects elements after drawing and sorting, then sums them.
 771
 772        Args:
 773            rolls: A sequence of `int` specifying how many times to count each
 774                element in ascending order.
 775        """
 776
 777    @overload
 778    def keep(self, rolls: int,
 779             index: slice | Sequence[int | EllipsisType] | int, /):
 780        """Selects elements after drawing and sorting, then sums them.
 781
 782        Args:
 783            rolls: The number of dice to roll.
 784            index: One of the following:
 785            * An `int`. This will count only the roll at the specified index.
 786                In this case, the result is a `Die` rather than a generator.
 787            * A `slice`. The selected dice are counted once each.
 788            * A sequence of one `int` per roll.
 789                Each roll is counted that many times, which could be multiple or
 790                negative times.
 791
 792                Up to one `...` (`Ellipsis`) may be used.
 793                `...` will be replaced with a number of zero
 794                counts depending on the `rolls`.
 795                This number may be "negative" if more `int`s are provided than
 796                `rolls`. Specifically:
 797
 798                * If `index` is shorter than `rolls`, `...`
 799                    acts as enough zero counts to make up the difference.
 800                    E.g. `(1, ..., 1)` on five dice would act as
 801                    `(1, 0, 0, 0, 1)`.
 802                * If `index` has length equal to `rolls`, `...` has no effect.
 803                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
 804                * If `index` is longer than `rolls` and `...` is on one side,
 805                    elements will be dropped from `index` on the side with `...`.
 806                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
 807                * If `index` is longer than `rolls` and `...`
 808                    is in the middle, the counts will be as the sum of two
 809                    one-sided `...`.
 810                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
 811                    If `rolls` were 1, the -1 and the 1 would cancel each other out.
 812        """
 813
 814    def keep(self,
 815             rolls: int | Sequence[int],
 816             index: slice | Sequence[int | EllipsisType] | int | None = None,
 817             /) -> 'Die':
 818        """Selects elements after drawing and sorting, then sums them.
 819
 820        Args:
 821            rolls: The number of dice to roll.
 822            index: One of the following:
 823            * An `int`. This will count only the roll at the specified index.
 824                In this case, the result is a `Die` rather than a generator.
 825            * A `slice`. The selected dice are counted once each.
 826            * A sequence of `int`s with length equal to `rolls`.
 827                Each roll is counted that many times, which could be multiple or
 828                negative times.
 829
 830                Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
 831                the `rolls` argument may be omitted.
 832
 833                `...` will be replaced with a number of zero counts in order
 834                to make up any missing elements compared to `rolls`.
 835                This number may be "negative" if more `int`s are provided than
 836                `rolls`. Specifically:
 837
 838                * If `index` is shorter than `rolls`, `...`
 839                    acts as enough zero counts to make up the difference.
 840                    E.g. `(1, ..., 1)` on five dice would act as
 841                    `(1, 0, 0, 0, 1)`.
 842                * If `index` has length equal to `rolls`, `...` has no effect.
 843                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
 844                * If `index` is longer than `rolls` and `...` is on one side,
 845                    elements will be dropped from `index` on the side with `...`.
 846                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
 847                * If `index` is longer than `rolls` and `...`
 848                    is in the middle, the counts will be as the sum of two
 849                    one-sided `...`.
 850                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
 851                    If `rolls` were 1, the -1 and the 1 would cancel each other out.
 852        """
 853        if isinstance(rolls, int):
 854            if index is None:
 855                raise ValueError(
 856                    'If the number of rolls is an integer, an index argument must be provided.'
 857                )
 858            if isinstance(index, int):
 859                return self.pool(rolls).keep(index)
 860            else:
 861                return self.pool(rolls).keep(index).sum()  # type: ignore
 862        else:
 863            if index is not None:
 864                raise ValueError('Only one index sequence can be given.')
 865            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore
 866
 867    def lowest(self,
 868               rolls: int,
 869               /,
 870               keep: int | None = None,
 871               drop: int | None = None) -> 'Die':
 872        """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.
 873
 874        The outcomes should support addition and multiplication if `keep != 1`.
 875
 876        Args:
 877            rolls: The number of dice to roll. All dice will have the same
 878                outcomes as `self`.
 879            keep, drop: These arguments work together:
 880                * If neither are provided, the single lowest die will be taken.
 881                * If only `keep` is provided, the `keep` lowest dice will be summed.
 882                * If only `drop` is provided, the `drop` lowest dice will be dropped
 883                    and the rest will be summed.
 884                * If both are provided, `drop` lowest dice will be dropped, then
 885                    the next `keep` lowest dice will be summed.
 886
 887        Returns:
 888            A `Die` representing the probability distribution of the sum.
 889        """
 890        index = lowest_slice(keep, drop)
 891        canonical = canonical_slice(index, rolls)
 892        if canonical.start == 0 and canonical.stop == 1:
 893            return self._lowest_single(rolls)
 894        # Expression evaluators are difficult to type.
 895        return self.pool(rolls)[index].sum()  # type: ignore
 896
 897    def _lowest_single(self, rolls: int, /) -> 'Die':
 898        """Roll this die several times and keep the lowest."""
 899        if rolls == 0:
 900            return self.zero().simplify()
 901        return icepool.from_cumulative(
 902            self.outcomes(), [x**rolls for x in self.quantities('>=')],
 903            reverse=True)
 904
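The cumulative trick behind `_lowest_single`: the survival function of the minimum of `n` i.i.d. rolls is the single-roll survival function raised to the `n`-th power. A stand-alone sketch for the lowest of 2d6 with exact fractions:

```python
from fractions import Fraction

# Illustrative only: plain dicts instead of icepool.from_cumulative.
rolls = 2
survival = {x: Fraction(7 - x, 6) for x in range(1, 7)}    # P(d6 >= x)
survival_min = {x: p**rolls for x, p in survival.items()}  # P(min of 2d6 >= x)
# Point probabilities come from differencing consecutive survival values.
pmf = {
    x: survival_min[x] - survival_min.get(x + 1, Fraction(0))
    for x in range(1, 7)
}
```

For example, the lowest of 2d6 is 1 with probability 1 - (5/6)**2 == 11/36. `_highest_single` uses the mirror-image trick with the `'<='` cumulative quantities.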
 905    def highest(self,
 906                rolls: int,
 907                /,
 908                keep: int | None = None,
 909                drop: int | None = None) -> 'Die[T_co]':
 910        """Roll several of this `Die` and return the highest result, or the sum of some of the highest.
 911
 912        The outcomes should support addition and multiplication if `keep != 1`.
 913
 914        Args:
 915            rolls: The number of dice to roll.
 916            keep, drop: These arguments work together:
 917                * If neither are provided, the single highest die will be taken.
 918                * If only `keep` is provided, the `keep` highest dice will be summed.
 919                * If only `drop` is provided, the `drop` highest dice will be dropped
 920                    and the rest will be summed.
 921                * If both are provided, `drop` highest dice will be dropped, then
 922                    the next `keep` highest dice will be summed.
 923
 924        Returns:
 925            A `Die` representing the probability distribution of the sum.
 926        """
 927        index = highest_slice(keep, drop)
 928        canonical = canonical_slice(index, rolls)
 929        if canonical.start == rolls - 1 and canonical.stop == rolls:
 930            return self._highest_single(rolls)
 931        # Expression evaluators are difficult to type.
 932        return self.pool(rolls)[index].sum()  # type: ignore
 933
 934    def _highest_single(self, rolls: int, /) -> 'Die[T_co]':
 935        """Roll this die several times and keep the highest."""
 936        if rolls == 0:
 937            return self.zero().simplify()
 938        return icepool.from_cumulative(
 939            self.outcomes(), [x**rolls for x in self.quantities('<=')])
 940
 941    def middle(
 942            self,
 943            rolls: int,
 944            /,
 945            keep: int = 1,
 946            *,
 947            tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
 948        """Roll several of this `Die` and sum the sorted results in the middle.
 949
 950        The outcomes should support addition and multiplication if `keep != 1`.
 951
 952        Args:
 953            rolls: The number of dice to roll.
 954            keep: The number of outcomes to sum. If this is greater than
 955                `rolls`, all are kept.
 956            tie: What to do if `keep` is odd but `rolls` is even, or
 957                vice versa.
 958                * 'error' (default): Raises `IndexError`.
 959                * 'high': The higher outcome is taken.
 960                * 'low': The lower outcome is taken.
 961        """
 962        # Expression evaluators are difficult to type.
 963        return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore
 964
 965    def map_to_pool(
 966            self,
 967            repl:
 968        'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
 969            /,
 970            *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
 971            star: bool | None = None,
 972            **kwargs) -> 'icepool.MultisetExpression[U]':
 973        """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.
 974
 975        As `icepool.map_to_pool(repl, self, ...)`.
 976
 977        If no argument is provided, the outcomes will be used to construct a
 978        mixture of pools directly, similar to the inverse of `pool.expand()`.
 979        Note that this is not particularly efficient since it does not make much
 980        use of dynamic programming.
 981        """
 982        if repl is None:
 983            repl = lambda x: x
 984        return icepool.map_to_pool(repl,
 985                                   self,
 986                                   *extra_args,
 987                                   star=star,
 988                                   **kwargs)
 989
 990    def explode_to_pool(self,
 991                        rolls: int = 1,
 992                        outcomes: Collection[T_co] | Callable[..., bool]
 993                        | None = None,
 994                        /,
 995                        *,
 996                        star: bool | None = None,
 997                        depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
 998        """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.
 999        
1000        Args:
1001            rolls: The number of initial dice.
1002            outcomes: Which outcomes to explode. Options:
1003                * A single outcome to explode.
1004                * A collection of outcomes to explode.
1005                * A callable that takes an outcome and returns `True` if it
1006                    should be exploded.
1007                * If not supplied, the max outcome will explode.
1008            star: Whether outcomes should be unpacked into separate arguments
1009                before sending them to a callable `outcomes`.
1010                If not provided, this will be guessed based on the function
1011                signature.
1012            depth: The maximum depth of explosions for an individual die.
1013
1014        Returns:
1015            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
1016            that this is not technically a `Pool`, though it supports most of 
1017            the same operations.
1018        """
1019        if depth == 0:
1020            return self.pool(rolls)
1021        if outcomes is None:
1022            explode_set = {self.max_outcome()}
1023        else:
1024            explode_set = self._select_outcomes(outcomes, star)
1025        if not explode_set:
1026            return self.pool(rolls)
1027        explode: 'Die[T_co]'
1028        not_explode: 'Die[T_co]'
1029        explode, not_explode = self.split(explode_set)
1030
1031        single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
1032            int)
1033        for i in range(depth + 1):
1034            weight = explode.denominator()**i * self.denominator()**(
1035                depth - i) * not_explode.denominator()
1036            single_data[icepool.Vector((i, 1))] += weight
1037        single_data[icepool.Vector(
1038            (depth + 1, 0))] += explode.denominator()**(depth + 1)
1039
1040        single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
1041        count_die = rolls @ single_count_die
1042
1043        return count_die.map_to_pool(
1044            lambda x, nx: [explode] * x + [not_explode] * nx)
1045
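The `single_data` weights built above follow a truncated-geometric pattern: a die contributes `i` exploded dice plus one settled die for `i <= depth`, or `depth + 1` exploded dice and no settled die if every roll exploded. A minimal stand-alone check for a d6 exploding on 6 with depth 2 (plain dict keys instead of the library's `Vector`):

```python
# Illustrative only: per-die composition weights over a denominator of
# 6**(depth + 1). Keys are (exploded dice, settled dice).
depth = 2
explode_w, not_explode_w, denom = 1, 5, 6  # one exploding face out of six
weights: dict[tuple[int, int], int] = {}
for i in range(depth + 1):
    # i explosions, then a roll that settles, padded to the common denominator.
    weights[(i, 1)] = explode_w**i * not_explode_w * denom**(depth - i)
# Every roll exploded: depth + 1 exploded dice and no settled die.
weights[(depth + 1, 0)] = explode_w**(depth + 1)
```

The weights sum to the common denominator 6**3 == 216, confirming the normalization.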
1046    def reroll_to_pool(
1047        self,
1048        rolls: int,
1049        outcomes: Callable[..., bool] | Collection[T_co] | None = None,
1050        /,
1051        *,
1052        max_rerolls: int | Literal['inf'],
1053        star: bool | None = None,
1054        depth: int | Literal['inf'] = 1,
1055        mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
1056    ) -> 'icepool.MultisetExpression[T_co]':
1057        """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.
1058
1059        By default, each die can only be rerolled once (`depth=1`), and no
1060        more than `max_rerolls` dice may be rerolled in total.
1061        
1062        Args:
1063            rolls: How many dice in the pool.
1064            outcomes: Selects which outcomes are eligible to be rerolled.
1065                Options:
1066                * A collection of outcomes to reroll.
1067                * A callable that takes an outcome and returns `True` if it
1068                    could be rerolled.
1069                * If not provided, the single minimum outcome will be rerolled.
1070            max_rerolls: The maximum total number of rerolls.
1071                If `max_rerolls == 'inf'`, then this is the same as 
1072                `self.reroll(outcomes, star=star, depth=depth).pool(rolls)`.
1073            depth: EXTRA EXPERIMENTAL: The maximum depth of rerolls.
1074            star: Whether outcomes should be unpacked into separate arguments
1075                before sending them to a callable `outcomes`.
1076                If not provided, this will be guessed based on the function
1077                signature.
1078            mode: How dice are selected for rerolling if there are more eligible
1079                dice than `max_rerolls`. Options:
1080                * `'random'` (default): Eligible dice will be chosen uniformly
1081                    at random.
1082                * `'lowest'`: The lowest eligible dice will be rerolled.
1083                * `'highest'`: The highest eligible dice will be rerolled.
1084                * `'drop'`: All dice that ended up on an outcome selected by 
1085                    `outcomes` will be dropped. This includes both dice that rolled
1086                    into `outcomes` initially and were not rerolled, and dice that
1087                    were rerolled but rolled into `outcomes` again. This can be
1088                    considerably more efficient than the other modes.
1089
1090        Returns:
1091            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
1092            that this is not technically a `Pool`, though it supports most of 
1093            the same operations.
1094        """
1095        if max_rerolls == 'inf':
1096            return self.reroll(outcomes, star=star, depth=depth).pool(rolls)
1097
1098        if outcomes is None:
1099            rerollable_set = {self.min_outcome()}
1100        else:
1101            rerollable_set = self._select_outcomes(outcomes, star)
1102            if not rerollable_set:
1103                return self.pool(rolls)
1104
1105        rerollable_die: 'Die[T_co]'
1106        not_rerollable_die: 'Die[T_co]'
1107        rerollable_die, not_rerollable_die = self.split(rerollable_set)
1108        single_is_rerollable = icepool.coin(rerollable_die.denominator(),
1109                                            self.denominator())
1110
1111        if depth == 'inf':
1112            depth = max_rerolls
1113
1114        def step(rerollable, rerolls_left):
1115            """Advances one step of rerolling if there are enough rerolls left to cover all rerollable dice.
1116            
1117            Returns:
1118                The number of dice showing rerollable outcomes and the number of remaining rerolls.
1119            """
1120            if rerollable == 0:
1121                return 0, 0
1122            if rerolls_left < rerollable:
1123                return rerollable, rerolls_left
1124
1125            return icepool.tupleize(rerollable @ single_is_rerollable,
1126                                    rerolls_left - rerollable)
1127
1128        initial_state = icepool.tupleize(rolls @ single_is_rerollable,
1129                                         max_rerolls)
1130        mid_pool_composition: Die[tuple[int, int]]
1131        mid_pool_composition = icepool.map(step,
1132                                           initial_state,
1133                                           star=True,
1134                                           repeat=depth - 1)
1135
1136        def final_step(rerollable, rerolls_left):
1137            """Performs the final reroll, which might not have enough rerolls to cover all rerollable dice.
1138            
1139            Returns: The number of dice that had a rerollable outcome,
1140                the number of dice that were rerolled due to max_rerolls,
1141                the number of rerolled dice that landed on a rerollable outcome
1142                again.
1143            """
1144            rerolled = min(rerollable, rerolls_left)
1145
1146            return icepool.tupleize(rerollable, rerolled,
1147                                    rerolled @ single_is_rerollable)
1148
1149        pool_composition: Die[tuple[int, int, int]] = mid_pool_composition.map(
1150            final_step, star=True)
1151
1152        denominator = self.denominator()**(rolls + max_rerolls)
1153        pool_composition = pool_composition.multiply_to_denominator(
1154            denominator)
1155
1156        def make_pool(rerollable, rerolled, rerolled_to_rerollable):
1157            rerolls_ran_out = rerollable - rerolled
1158            not_rerollable = rolls - rerolls_ran_out - rerolled_to_rerollable
1159            common = rerollable_die.pool(
1160                rerolled_to_rerollable) + not_rerollable_die.pool(
1161                    not_rerollable)
1162            match mode:
1163                case 'random':
1164                    return common + rerollable_die.pool(rerolls_ran_out)
1165                case 'lowest':
1166                    return common + rerollable_die.pool(rerollable).highest(
1167                        rerolls_ran_out)
1168                case 'highest':
1169                    return common + rerollable_die.pool(rerollable).lowest(
1170                        rerolls_ran_out)
1171                case 'drop':
1172                    return not_rerollable_die.pool(not_rerollable)
1173                case _:
1174                    raise ValueError(
                        f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
1176                    )
1177
1178        return pool_composition.map_to_pool(make_pool, star=True)
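The initial state above computes `rolls @ single_is_rerollable`, i.e. a binomial count of how many of the rolled dice show a rerollable outcome. A pure-Python sketch of that count (illustrative only, not the icepool API):

```python
from math import comb

# Quantities for how many of `rolls` dice show a rerollable outcome, when a
# single die does so with quantity r out of denominator d (a binomial count).
def rerollable_count_quantities(rolls: int, r: int, d: int) -> dict[int, int]:
    return {k: comb(rolls, k) * r**k * (d - r)**(rolls - k)
            for k in range(rolls + 1)}

# 3 dice, rerolling 1s on a d6: quantities out of 6**3 == 216
counts = rerollable_count_quantities(3, 1, 6)
# {0: 125, 1: 75, 2: 15, 3: 1}
```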
1179
1180    # Unary operators.
1181
1182    def __neg__(self) -> 'Die[T_co]':
1183        return self.unary_operator(operator.neg)
1184
1185    def __pos__(self) -> 'Die[T_co]':
1186        return self.unary_operator(operator.pos)
1187
1188    def __invert__(self) -> 'Die[T_co]':
1189        return self.unary_operator(operator.invert)
1190
1191    def abs(self) -> 'Die[T_co]':
1192        return self.unary_operator(operator.abs)
1193
1194    __abs__ = abs
1195
1196    def round(self, ndigits: int | None = None) -> 'Die':
1197        return self.unary_operator(round, ndigits)
1198
1199    __round__ = round
1200
1201    def stochastic_round(self,
1202                         *,
1203                         max_denominator: int | None = None) -> 'Die[int]':
1490 1204        """Randomly rounds outcomes up or down to the nearest integer according to the distance to each.
1491 1205
1492 1206        Specifically, rounds `x` up with probability `x - floor(x)` and down
1493 1207        otherwise.
1208
1209        Args:
1210            max_denominator: If provided, each rounding will be performed
1211                using `fractions.Fraction.limit_denominator(max_denominator)`.
1212                Otherwise, the rounding will be performed without
1213                `limit_denominator`.
1214        """
1215        return self.map(lambda x: icepool.stochastic_round(
1216            x, max_denominator=max_denominator))
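The rounding rule can be sketched in pure Python with exact fractions (illustrative only, not the icepool implementation):

```python
import math
from fractions import Fraction

# x rounds up with probability x - floor(x), down otherwise.
def stochastic_round_distribution(x: Fraction) -> dict[int, Fraction]:
    lo = math.floor(x)
    p_up = x - lo
    if p_up == 0:
        return {lo: Fraction(1)}
    return {lo: 1 - p_up, lo + 1: p_up}

dist = stochastic_round_distribution(Fraction(9, 4))
# 2.25 rounds down to 2 with probability 3/4 and up to 3 with probability 1/4
```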
1217
1218    def trunc(self) -> 'Die':
1219        return self.unary_operator(math.trunc)
1220
1221    __trunc__ = trunc
1222
1223    def floor(self) -> 'Die':
1224        return self.unary_operator(math.floor)
1225
1226    __floor__ = floor
1227
1228    def ceil(self) -> 'Die':
1229        return self.unary_operator(math.ceil)
1230
1231    __ceil__ = ceil
1232
1233    # Binary operators.
1234
1235    def __add__(self, other) -> 'Die':
1236        if isinstance(other, icepool.AgainExpression):
1237            return NotImplemented
1238        other = implicit_convert_to_die(other)
1239        return self.binary_operator(other, operator.add)
1240
1241    def __radd__(self, other) -> 'Die':
1242        if isinstance(other, icepool.AgainExpression):
1243            return NotImplemented
1244        other = implicit_convert_to_die(other)
1245        return other.binary_operator(self, operator.add)
1246
1247    def __sub__(self, other) -> 'Die':
1248        if isinstance(other, icepool.AgainExpression):
1249            return NotImplemented
1250        other = implicit_convert_to_die(other)
1251        return self.binary_operator(other, operator.sub)
1252
1253    def __rsub__(self, other) -> 'Die':
1254        if isinstance(other, icepool.AgainExpression):
1255            return NotImplemented
1256        other = implicit_convert_to_die(other)
1257        return other.binary_operator(self, operator.sub)
1258
1259    def __mul__(self, other) -> 'Die':
1260        if isinstance(other, icepool.AgainExpression):
1261            return NotImplemented
1262        other = implicit_convert_to_die(other)
1263        return self.binary_operator(other, operator.mul)
1264
1265    def __rmul__(self, other) -> 'Die':
1266        if isinstance(other, icepool.AgainExpression):
1267            return NotImplemented
1268        other = implicit_convert_to_die(other)
1269        return other.binary_operator(self, operator.mul)
1270
1271    def __truediv__(self, other) -> 'Die':
1272        if isinstance(other, icepool.AgainExpression):
1273            return NotImplemented
1274        other = implicit_convert_to_die(other)
1275        return self.binary_operator(other, operator.truediv)
1276
1277    def __rtruediv__(self, other) -> 'Die':
1278        if isinstance(other, icepool.AgainExpression):
1279            return NotImplemented
1280        other = implicit_convert_to_die(other)
1281        return other.binary_operator(self, operator.truediv)
1282
1283    def __floordiv__(self, other) -> 'Die':
1284        if isinstance(other, icepool.AgainExpression):
1285            return NotImplemented
1286        other = implicit_convert_to_die(other)
1287        return self.binary_operator(other, operator.floordiv)
1288
1289    def __rfloordiv__(self, other) -> 'Die':
1290        if isinstance(other, icepool.AgainExpression):
1291            return NotImplemented
1292        other = implicit_convert_to_die(other)
1293        return other.binary_operator(self, operator.floordiv)
1294
1295    def __pow__(self, other) -> 'Die':
1296        if isinstance(other, icepool.AgainExpression):
1297            return NotImplemented
1298        other = implicit_convert_to_die(other)
1299        return self.binary_operator(other, operator.pow)
1300
1301    def __rpow__(self, other) -> 'Die':
1302        if isinstance(other, icepool.AgainExpression):
1303            return NotImplemented
1304        other = implicit_convert_to_die(other)
1305        return other.binary_operator(self, operator.pow)
1306
1307    def __mod__(self, other) -> 'Die':
1308        if isinstance(other, icepool.AgainExpression):
1309            return NotImplemented
1310        other = implicit_convert_to_die(other)
1311        return self.binary_operator(other, operator.mod)
1312
1313    def __rmod__(self, other) -> 'Die':
1314        if isinstance(other, icepool.AgainExpression):
1315            return NotImplemented
1316        other = implicit_convert_to_die(other)
1317        return other.binary_operator(self, operator.mod)
1318
1319    def __lshift__(self, other) -> 'Die':
1320        if isinstance(other, icepool.AgainExpression):
1321            return NotImplemented
1322        other = implicit_convert_to_die(other)
1323        return self.binary_operator(other, operator.lshift)
1324
1325    def __rlshift__(self, other) -> 'Die':
1326        if isinstance(other, icepool.AgainExpression):
1327            return NotImplemented
1328        other = implicit_convert_to_die(other)
1329        return other.binary_operator(self, operator.lshift)
1330
1331    def __rshift__(self, other) -> 'Die':
1332        if isinstance(other, icepool.AgainExpression):
1333            return NotImplemented
1334        other = implicit_convert_to_die(other)
1335        return self.binary_operator(other, operator.rshift)
1336
1337    def __rrshift__(self, other) -> 'Die':
1338        if isinstance(other, icepool.AgainExpression):
1339            return NotImplemented
1340        other = implicit_convert_to_die(other)
1341        return other.binary_operator(self, operator.rshift)
1342
1343    def __and__(self, other) -> 'Die':
1344        if isinstance(other, icepool.AgainExpression):
1345            return NotImplemented
1346        other = implicit_convert_to_die(other)
1347        return self.binary_operator(other, operator.and_)
1348
1349    def __rand__(self, other) -> 'Die':
1350        if isinstance(other, icepool.AgainExpression):
1351            return NotImplemented
1352        other = implicit_convert_to_die(other)
1353        return other.binary_operator(self, operator.and_)
1354
1355    def __or__(self, other) -> 'Die':
1356        if isinstance(other, icepool.AgainExpression):
1357            return NotImplemented
1358        other = implicit_convert_to_die(other)
1359        return self.binary_operator(other, operator.or_)
1360
1361    def __ror__(self, other) -> 'Die':
1362        if isinstance(other, icepool.AgainExpression):
1363            return NotImplemented
1364        other = implicit_convert_to_die(other)
1365        return other.binary_operator(self, operator.or_)
1366
1367    def __xor__(self, other) -> 'Die':
1368        if isinstance(other, icepool.AgainExpression):
1369            return NotImplemented
1370        other = implicit_convert_to_die(other)
1371        return self.binary_operator(other, operator.xor)
1372
1373    def __rxor__(self, other) -> 'Die':
1374        if isinstance(other, icepool.AgainExpression):
1375            return NotImplemented
1376        other = implicit_convert_to_die(other)
1377        return other.binary_operator(self, operator.xor)
1378
1379    # Comparators.
1380
1381    def __lt__(self, other) -> 'Die[bool]':
1382        if isinstance(other, icepool.AgainExpression):
1383            return NotImplemented
1384        other = implicit_convert_to_die(other)
1385        return self.binary_operator(other, operator.lt)
1386
1387    def __le__(self, other) -> 'Die[bool]':
1388        if isinstance(other, icepool.AgainExpression):
1389            return NotImplemented
1390        other = implicit_convert_to_die(other)
1391        return self.binary_operator(other, operator.le)
1392
1393    def __ge__(self, other) -> 'Die[bool]':
1394        if isinstance(other, icepool.AgainExpression):
1395            return NotImplemented
1396        other = implicit_convert_to_die(other)
1397        return self.binary_operator(other, operator.ge)
1398
1399    def __gt__(self, other) -> 'Die[bool]':
1400        if isinstance(other, icepool.AgainExpression):
1401            return NotImplemented
1402        other = implicit_convert_to_die(other)
1403        return self.binary_operator(other, operator.gt)
1404
1405    # Equality operators. These produce a `DieWithTruth`.
1406
1407    # The result has a truth value, but is not a bool.
1408    def __eq__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
1409        if isinstance(other, icepool.AgainExpression):
1410            return NotImplemented
1411        other_die: Die = implicit_convert_to_die(other)
1412
1413        def data_callback() -> Counts[bool]:
1414            return self.binary_operator(other_die, operator.eq)._data
1415
1416        def truth_value_callback() -> bool:
1417            return self.equals(other)
1418
1419        return icepool.DieWithTruth(data_callback, truth_value_callback)
1420
1421    # The result has a truth value, but is not a bool.
1422    def __ne__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
1423        if isinstance(other, icepool.AgainExpression):
1424            return NotImplemented
1425        other_die: Die = implicit_convert_to_die(other)
1426
1427        def data_callback() -> Counts[bool]:
1428            return self.binary_operator(other_die, operator.ne)._data
1429
1430        def truth_value_callback() -> bool:
1431            return not self.equals(other)
1432
1433        return icepool.DieWithTruth(data_callback, truth_value_callback)
1434
1435    def cmp(self, other) -> 'Die[int]':
1722 1436        """A `Die` with outcomes 1, -1, and 0.
1723 1437
1724 1438        The quantities of 1 and -1 equal the quantity of `True` in
1725 1439        `self > other` and `self < other` respectively; 0 gets the remainder.
1726 1440        """
1441        other = implicit_convert_to_die(other)
1442
1443        data = {}
1444
1445        lt = self < other
1446        if True in lt:
1447            data[-1] = lt[True]
1448        eq = self == other
1449        if True in eq:
1450            data[0] = eq[True]
1451        gt = self > other
1452        if True in gt:
1453            data[1] = gt[True]
1454
1455        return Die(data)
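The three quantities can be checked directly by comparing outcome pairs (a pure-Python sketch, not the icepool API):

```python
from itertools import product

d6 = {n: 1 for n in range(1, 7)}

def cmp_quantities(a: dict, b: dict) -> dict[int, int]:
    data = {-1: 0, 0: 0, 1: 0}
    for (x, qx), (y, qy) in product(a.items(), b.items()):
        data[(x > y) - (x < y)] += qx * qy  # sign of the comparison
    return data

result = cmp_quantities(d6, d6)
# {-1: 15, 0: 6, 1: 15} out of a denominator of 36
```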
1456
1457    @staticmethod
1458    def _sign(x) -> int:
1459        z = Die._zero(x)
1460        if x > z:
1461            return 1
1462        elif x < z:
1463            return -1
1464        else:
1465            return 0
1466
1467    def sign(self) -> 'Die[int]':
1468        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.
1469
1470        Note that for `float`s, +0.0, -0.0, and nan all become 0.
1471        """
1472        return self.unary_operator(Die._sign)
1473
1474    # Equality and hashing.
1475
1476    def __bool__(self) -> bool:
1477        raise TypeError(
1478            'A `Die` only has a truth value if it is the result of == or !=.\n'
1479            'This could result from trying to use a die in an if-statement,\n'
1480            'in which case you should use `die.if_else()` instead.\n'
1481            'Or it could result from trying to use a `Die` inside a tuple or vector outcome,\n'
1768 1482            'in which case you should use `tupleize()` or `vectorize()`.')
1483
1484    @cached_property
1485    def hash_key(self) -> tuple:
1486        """A tuple that uniquely (as `equals()`) identifies this die.
1487
1488        Apart from being hashable and totally orderable, this is not guaranteed
1489        to be in any particular format or have any other properties.
1490        """
1491        return Die, tuple(self.items())
1492
1493    __hash__ = MaybeHashKeyed.__hash__
1494
1495    def equals(self, other, *, simplify: bool = False) -> bool:
1496        """`True` iff both dice have the same outcomes and quantities.
1497
1498        This is `False` if `other` is not a `Die`, even if it would convert
1499        to an equal `Die`.
1500
1501        Truth value does NOT matter.
1502
1503        If one `Die` has a zero-quantity outcome and the other `Die` does not
1504        contain that outcome, they are treated as unequal by this function.
1505
1506        The `==` and `!=` operators have a dual purpose; they return a `Die`
1507        with a truth value determined by this method.
1508        Only dice returned by these methods have a truth value. The data of
1509        these dice is lazily evaluated since the caller may only be interested
1510        in the `Die` value or the truth value.
1511
1512        Args:
1513            simplify: If `True`, the dice will be simplified before comparing.
1514                Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
1515        """
1516        if self is other:
1517            return True
1518
1519        if not isinstance(other, Die):
1520            return False
1521
1522        if simplify:
1523            return self.simplify().hash_key == other.simplify().hash_key
1524        else:
1525            return self.hash_key == other.hash_key
1526
1527    # Strings.
1528
1529    def __repr__(self) -> str:
1530        items_string = ', '.join(f'{repr(outcome)}: {weight}'
1531                                 for outcome, weight in self.items())
1532        return type(self).__qualname__ + '({' + items_string + '})'

Sampling with replacement. Quantities represent weights.

Dice are immutable. Methods do not modify the Die in-place; rather they return a Die representing the result.

It's also possible to have "empty" dice with no outcomes at all, though these have little use other than being sentinel values.

def unary_operator( self: Die[+T_co], op: Callable[..., ~U], *args, **kwargs) -> Die[~U]:
182    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
183                       **kwargs) -> 'icepool.Die[U]':
184        """Performs the unary operation on the outcomes.
185
186        This is used for the standard unary operators
187        `-, +, abs, ~, round, trunc, floor, ceil`
188        as well as the additional methods
189        `zero, bool`.
190
191        This is NOT used for the `[]` operator; when used directly, this is
192        interpreted as a `Mapping` operation and returns the count corresponding
193        to a given outcome. See `marginals()` for applying the `[]` operator to
194        outcomes.
195
196        Returns:
197            A `Die` representing the result.
198
199        Raises:
200            ValueError: If tuples are of mismatched length.
201        """
202        return self._unary_operator(op, *args, **kwargs)

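The operation applies the operator to every outcome and merges quantities when outcomes collide. A minimal pure-Python sketch of this behavior (not the icepool implementation):

```python
from collections import defaultdict
import operator

def unary_op(die: dict, op) -> dict:
    data = defaultdict(int)
    for outcome, quantity in die.items():
        data[op(outcome)] += quantity  # colliding outcomes merge quantities
    return dict(data)

die = {n: 1 for n in range(-2, 4)}  # outcomes -2..3, one weight each
result = unary_op(die, operator.abs)
# {0: 1, 1: 2, 2: 2, 3: 1}: -2 merged with 2 and -1 with 1
```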
def binary_operator( self, other: Die, op: Callable[..., ~U], *args, **kwargs) -> Die[~U]:
204    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
205                        **kwargs) -> 'Die[U]':
206        """Performs the operation on pairs of outcomes.
207
208        By the time this is called, the other operand has already been
209        converted to a `Die`.
210
211        This is used for the standard binary operators
212        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
213        and the standard binary comparators
214        `<, <=, >=, >, ==, !=, cmp`.
215
216        `==` and `!=` additionally set the truth value of the `Die` according to
217        whether the dice themselves are the same or not.
218
219        The `@` operator does NOT use this method directly.
220        It rolls the left `Die`, which must have integer outcomes,
221        then rolls the right `Die` that many times and sums the outcomes.
222
223        Returns:
224            A `Die` representing the result.
225
226        Raises:
227            ValueError: If tuples are of mismatched length within one of the
228                dice or between the dice.
229        """
230        data: MutableMapping[Any, int] = defaultdict(int)
231        for (outcome_self,
232             quantity_self), (outcome_other,
233                              quantity_other) in itertools.product(
234                                  self.items(), other.items()):
235            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
236            data[new_outcome] += quantity_self * quantity_other
237        return self._new_type(data)

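As the source shows, the operation is a convolution: quantities multiply across outcome pairs and equal results merge. A standalone pure-Python sketch (not the icepool API), computing the sum of two d6:

```python
from collections import defaultdict
from itertools import product
import operator

def binary_op(a: dict, b: dict, op) -> dict:
    data = defaultdict(int)
    for (x, qx), (y, qy) in product(a.items(), b.items()):
        data[op(x, y)] += qx * qy  # quantities multiply, equal outcomes merge
    return dict(data)

d6 = {n: 1 for n in range(1, 7)}
two_d6 = binary_op(d6, d6, operator.add)
# two_d6[7] == 6 and the denominator is 6 * 6 == 36
```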
def keys(self) -> CountsKeysView[+T_co]:
241    def keys(self) -> CountsKeysView[T_co]:
242        return self._data.keys()

The outcomes within the population in sorted order.

def values(self) -> CountsValuesView:
244    def values(self) -> CountsValuesView:
245        return self._data.values()

The quantities within the population in outcome order.

def items(self) -> CountsItemsView[+T_co]:
247    def items(self) -> CountsItemsView[T_co]:
248        return self._data.items()

The (outcome, quantity)s of the population in sorted order.

def simplify(self) -> Die[+T_co]:
265    def simplify(self) -> 'Die[T_co]':
1933 266        """Divides all quantities by their greatest common divisor."""
267        return icepool.Die(self._data.simplify())

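In pure-Python terms (a sketch, not the icepool implementation), simplification divides every quantity by the gcd of all quantities:

```python
from functools import reduce
from math import gcd

def simplify(die: dict) -> dict:
    g = reduce(gcd, die.values())  # greatest common divisor of all quantities
    return {outcome: quantity // g for outcome, quantity in die.items()}

coin = simplify({'heads': 2, 'tails': 2})
# {'heads': 1, 'tails': 1}: a 2:2 coin simplifies to a 1:1 coin
```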
def reroll( self, outcomes: Union[Callable[..., bool], Collection[+T_co], NoneType] = None, /, *, star: bool | None = None, depth: Union[int, Literal['inf']]) -> Die[+T_co]:
271    def reroll(self,
272               outcomes: Callable[..., bool] | Collection[T_co] | None = None,
273               /,
274               *,
275               star: bool | None = None,
276               depth: int | Literal['inf']) -> 'Die[T_co]':
277        """Rerolls the given outcomes.
278
279        Args:
280            outcomes: Selects which outcomes to reroll. Options:
281                * A collection of outcomes to reroll.
282                * A callable that takes an outcome and returns `True` if it
283                    should be rerolled.
284                * If not provided, the min outcome will be rerolled.
285            star: Whether outcomes should be unpacked into separate arguments
1954 286                before sending them to a callable `outcomes`.
287                If not provided, this will be guessed based on the function
288                signature.
289            depth: The maximum number of times to reroll.
1958 290                If `'inf'`, rerolls an unlimited number of times.
291
292        Returns:
293            A `Die` representing the reroll.
294            If the reroll would never terminate, the result has no outcomes.
295        """
296
297        if outcomes is None:
298            outcome_set = {self.min_outcome()}
299        else:
300            outcome_set = self._select_outcomes(outcomes, star)
301
302        if depth == 'inf':
303            data = {
304                outcome: quantity
305                for outcome, quantity in self.items()
306                if outcome not in outcome_set
307            }
308        elif depth < 0:
309            raise ValueError('reroll depth cannot be negative.')
310        else:
311            total_reroll_quantity = sum(quantity
312                                        for outcome, quantity in self.items()
313                                        if outcome in outcome_set)
314            total_stop_quantity = self.denominator() - total_reroll_quantity
315            rerollable_factor = total_reroll_quantity**depth
316            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
317                           * total_reroll_quantity) // total_stop_quantity
318            data = {
319                outcome: (rerollable_factor *
320                          quantity if outcome in outcome_set else stop_factor *
321                          quantity)
322                for outcome, quantity in self.items()
323            }
324        return icepool.Die(data)

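The finite-depth branch weights still-rerollable outcomes by `total_reroll_quantity**depth` and stopping outcomes by the matching geometric sum, so the result has denominator `denominator**(depth + 1)`. A pure-Python check of that arithmetic for a d6 rerolling 1s with `depth=2` (not the icepool API):

```python
# Mirror the closed form used in the source for a d6 rerolling 1s, depth 2.
D = 6           # denominator of the original die
R = 1           # total quantity of rerollable outcomes (just the 1)
S = D - R       # total quantity of stopping outcomes
depth = 2

rerollable_factor = R**depth
stop_factor = (D**(depth + 1) - rerollable_factor * R) // S

die = {1: rerollable_factor} | {n: stop_factor for n in range(2, 7)}
# {1: 1, 2: 43, 3: 43, 4: 43, 5: 43, 6: 43}; denominator 6**3 == 216
```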
def filter( self, outcomes: Union[Callable[..., bool], Collection[+T_co]], /, *, star: bool | None = None, depth: Union[int, Literal['inf']]) -> Die[+T_co]:
326    def filter(self,
327               outcomes: Callable[..., bool] | Collection[T_co],
328               /,
329               *,
330               star: bool | None = None,
331               depth: int | Literal['inf']) -> 'Die[T_co]':
332        """Rerolls until getting one of the given outcomes.
333
334        Essentially the complement of `reroll()`.
335
336        Args:
337            outcomes: Selects which outcomes to reroll until. Options:
338                * A callable that takes an outcome and returns `True` if it
339                    should be accepted.
340                * A collection of outcomes to reroll until.
341            star: Whether outcomes should be unpacked into separate arguments
2024 342                before sending them to a callable `outcomes`.
343                If not provided, this will be guessed based on the function
344                signature.
345            depth: The maximum number of times to reroll.
2028 346                If `'inf'`, rerolls an unlimited number of times.
347
348        Returns:
349            A `Die` representing the reroll.
350            If the reroll would never terminate, the result has no outcomes.
351        """
352
353        if callable(outcomes):
354            if star is None:
355                star = infer_star(outcomes)
356            if star:
357
358                not_outcomes = {
359                    outcome
360                    for outcome in self.outcomes()
361                    if not outcomes(*outcome)  # type: ignore
362                }
363            else:
364                not_outcomes = {
365                    outcome
366                    for outcome in self.outcomes() if not outcomes(outcome)
367                }
368        else:
369            not_outcomes = {
370                not_outcome
371                for not_outcome in self.outcomes()
372                if not_outcome not in outcomes
373            }
374        return self.reroll(not_outcomes, depth=depth)

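With unlimited depth, rerolling until an accepted outcome is just conditioning: rejected outcomes are dropped from the probability space along with their contribution to the denominator. In pure Python (a sketch, not the icepool API):

```python
d6 = {n: 1 for n in range(1, 7)}

# Filter a d6 to even results with unlimited rerolls: drop the odd outcomes.
evens = {outcome: q for outcome, q in d6.items() if outcome in {2, 4, 6}}
# {2: 1, 4: 1, 6: 1}: uniform over the accepted outcomes
```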
def truncate( self, min_outcome=None, max_outcome=None) -> Die[+T_co]:
376    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
377        """Truncates the outcomes of this `Die` to the given range.
378
379        The endpoints are included in the result if applicable.
380        If one of the arguments is not provided, that side will not be truncated.
381
382        This effectively rerolls outcomes outside the given range.
383        If instead you want to replace those outcomes with the nearest endpoint,
384        use `clip()`.
385
386        Not to be confused with `trunc(die)`, which performs integer truncation
387        on each outcome.
388        """
389        if min_outcome is not None:
390            start = bisect.bisect_left(self.outcomes(), min_outcome)
391        else:
392            start = None
393        if max_outcome is not None:
394            stop = bisect.bisect_right(self.outcomes(), max_outcome)
395        else:
396            stop = None
397        data = {k: v for k, v in self.items()[start:stop]}
398        return icepool.Die(data)

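The bisect-based slicing above can be reproduced directly on a sorted list of (outcome, quantity) pairs (a pure-Python sketch, not the icepool API):

```python
import bisect

items = [(n, 1) for n in range(1, 7)]  # sorted pairs of a d6
outcomes = [o for o, _ in items]

# Keep outcomes in [2, 5]; both endpoints are included.
start = bisect.bisect_left(outcomes, 2)
stop = bisect.bisect_right(outcomes, 5)
truncated = dict(items[start:stop])
# {2: 1, 3: 1, 4: 1, 5: 1}
```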
def clip( self, min_outcome=None, max_outcome=None) -> Die[+T_co]:
400    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
401        """Clips the outcomes of this `Die` to the given values.
402
403        The endpoints are included in the result if applicable.
404        If one of the arguments is not provided, that side will not be clipped.
405
406        This is not the same as rerolling outcomes beyond this range;
407        the outcome is simply adjusted to fit within the range.
408        This will typically cause some quantity to bunch up at the endpoint(s).
409        If you want to reroll outcomes beyond this range, use `truncate()`.
410        """
411        data: MutableMapping[Any, int] = defaultdict(int)
412        for outcome, quantity in self.items():
413            if min_outcome is not None and outcome <= min_outcome:
414                data[min_outcome] += quantity
415            elif max_outcome is not None and outcome >= max_outcome:
416                data[max_outcome] += quantity
417            else:
418                data[outcome] += quantity
419        return icepool.Die(data)

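Unlike `truncate()`, clipping preserves the denominator and moves out-of-range quantity to the endpoints (a pure-Python sketch, not the icepool API):

```python
from collections import defaultdict

def clip(die: dict, lo, hi) -> dict:
    data = defaultdict(int)
    for outcome, quantity in die.items():
        data[min(max(outcome, lo), hi)] += quantity  # bunch up at endpoints
    return dict(data)

d6 = {n: 1 for n in range(1, 7)}
clipped = clip(d6, 2, 5)
# {2: 2, 3: 1, 4: 1, 5: 2}: the 1 piles onto 2 and the 6 onto 5
```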
def map( self, repl: Union[Callable[..., Union[~U, Die[~U], RerollType, icepool.population.again.AgainExpression]], Mapping[+T_co, Union[~U, Die[~U], RerollType, icepool.population.again.AgainExpression]]], /, *extra_args, star: bool | None = None, repeat: Union[int, Literal['inf']] = 1, time_limit: Union[int, Literal['inf'], NoneType] = None, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~U, Die[~U], RerollType, NoneType] = None, **kwargs) -> Die[~U]:
449    def map(
450            self,
451            repl:
452        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
453            /,
454            *extra_args,
455            star: bool | None = None,
456            repeat: int | Literal['inf'] = 1,
457            time_limit: int | Literal['inf'] | None = None,
458            again_count: int | None = None,
459            again_depth: int | None = None,
460            again_end: 'U | Die[U] | icepool.RerollType | None' = None,
461            **kwargs) -> 'Die[U]':
462        """Maps outcomes of the `Die` to other outcomes.
463
464        This is also useful for representing processes.
465
466        As `icepool.map(repl, self, ...)`.
467        """
468        return icepool.map(repl,
469                           self,
470                           *extra_args,
471                           star=star,
472                           repeat=repeat,
473                           time_limit=time_limit,
474                           again_count=again_count,
475                           again_depth=again_depth,
476                           again_end=again_end,
477                           **kwargs)

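The distinctive feature of mapping is that `repl` may return a whole distribution, not just a single outcome; the sub-distributions are then mixed over a common denominator. A pure-Python sketch of that mixing (hypothetical helper, not the icepool implementation):

```python
from collections import defaultdict
from math import lcm

def map_to_die(die: dict, repl) -> dict:
    """repl maps each outcome to a sub-distribution; mix over a common denominator."""
    subs = {outcome: repl(outcome) for outcome in die}
    denom = lcm(*(sum(sub.values()) for sub in subs.values()))
    data = defaultdict(int)
    for outcome, quantity in die.items():
        sub = subs[outcome]
        scale = denom // sum(sub.values())
        for new_outcome, q in sub.items():
            data[new_outcome] += quantity * q * scale
    return dict(data)

d3 = {1: 1, 2: 1, 3: 1}
# A 2 becomes a fair choice between 10 and 20; other outcomes stay put.
result = map_to_die(d3, lambda x: {10: 1, 20: 1} if x == 2 else {x: 1})
# {1: 2, 3: 2, 10: 1, 20: 1} with denominator 6
```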
def map_and_time( self, repl: Union[Callable[..., Union[+T_co, Die[+T_co], RerollType]], Mapping[+T_co, Union[+T_co, Die[+T_co], RerollType]]], /, *extra_args, star: bool | None = None, time_limit: int, **kwargs) -> Die[tuple[+T_co, int]]:
479    def map_and_time(
480            self,
481            repl:
482        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
483            /,
484            *extra_args,
485            star: bool | None = None,
486            time_limit: int,
487            **kwargs) -> 'Die[tuple[T_co, int]]':
488        """Repeatedly map outcomes of the state to other outcomes, while also
489        counting timesteps.
490
491        This is useful for representing processes.
492
493        As `icepool.map_and_time(repl, self, ...)`.
494        """
495        return icepool.map_and_time(repl,
496                                    self,
497                                    *extra_args,
498                                    star=star,
499                                    time_limit=time_limit,
500                                    **kwargs)

Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

As icepool.map_and_time(repl, self, ...).

def mean_time_to_absorb( self, repl: Union[Callable[..., Union[+T_co, Die[+T_co], RerollType, icepool.population.again.AgainExpression]], Mapping[Any, Union[+T_co, Die[+T_co], RerollType, icepool.population.again.AgainExpression]]], /, *extra_args, star: bool | None = None, **kwargs) -> fractions.Fraction:
502    def mean_time_to_absorb(
503            self,
504            repl:
505        'Callable[..., T_co | icepool.Die[T_co] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T_co | icepool.Die[T_co] | icepool.RerollType | icepool.AgainExpression]',
506            /,
507            *extra_args,
508            star: bool | None = None,
509            **kwargs) -> Fraction:
510        """EXPERIMENTAL: The mean time for the process to reach an absorbing state.
511        
512        As `icepool.mean_time_to_absorb(repl, self, ...)`.
513        """
514        return icepool.mean_time_to_absorb(repl,
515                                           self,
516                                           *extra_args,
517                                           star=star,
518                                           **kwargs)

EXPERIMENTAL: The mean time for the process to reach an absorbing state.

As icepool.mean_time_to_absorb(repl, self, ...).

def time_to_sum( self: Die[int], target: int, /, max_time: int, dnf: int | RerollType | None = None) -> Die[int]:
520    def time_to_sum(self: 'Die[int]',
521                    target: int,
522                    /,
523                    max_time: int,
524                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
525        """The number of rolls until the cumulative sum is greater than or equal to the target.
526        
527        Args:
528            target: The number to stop at once reached.
529            max_time: The maximum number of rolls to run.
530                If the sum is not reached, the outcome is determined by `dnf`.
531            dnf: What time to assign in cases where the target was not reached
532                in `max_time`. If not provided, this is set to `max_time`.
533                `dnf=icepool.Reroll` will remove this case from the result,
534                effectively rerolling it.
535        """
536        if target <= 0:
537            return Die([0])
538
539        if dnf is None:
540            dnf = max_time
541
542        def step(total, roll):
543            return min(total + roll, target)
544
545        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
546            step, self, time_limit=max_time)
547
548        def get_time(total, time):
549            if total < target:
550                return dnf
551            else:
552                return time
553
554        return result.map(get_time)

The number of rolls until the cumulative sum is greater than or equal to the target.

Arguments:
  • target: The number to stop at once reached.
  • max_time: The maximum number of rolls to run. If the sum is not reached, the outcome is determined by dnf.
  • dnf: What time to assign in cases where the target was not reached in max_time. If not provided, this is set to max_time. dnf=icepool.Reroll will remove this case from the result, effectively rerolling it.
def mean_time_to_sum( self: Die[int], target: int, /) -> fractions.Fraction:
560    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
561        """The mean number of rolls until the cumulative sum is greater than or equal to the target.
562
563        Args:
564            target: The target sum.
565
566        Raises:
567            ValueError: If `self` has negative outcomes.
568            ZeroDivisionError: If `self.mean() == 0`.
569        """
570        target = max(target, 0)
571
572        if target < len(self._mean_time_to_sum_cache):
573            return self._mean_time_to_sum_cache[target]
574
575        if self.min_outcome() < 0:
576            raise ValueError(
577                'mean_time_to_sum does not handle negative outcomes.')
578        time_per_effect = Fraction(self.denominator(),
579                                   self.denominator() - self.quantity(0))
580
581        for i in range(len(self._mean_time_to_sum_cache), target + 1):
582            result = time_per_effect + self.reroll([
583                0
584            ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
585            self._mean_time_to_sum_cache.append(result)
586
587        return result

The mean number of rolls until the cumulative sum is greater than or equal to the target.

Arguments:
  • target: The target sum.
Raises:
  • ValueError: If self has negative outcomes.
  • ZeroDivisionError: If self.mean() == 0.
def explode( self, outcomes: Union[Callable[..., bool], Collection[+T_co], NoneType] = None, /, *, star: bool | None = None, depth: int = 9, end=None) -> Die[+T_co]:
589    def explode(self,
590                outcomes: Collection[T_co] | Callable[..., bool] | None = None,
591                /,
592                *,
593                star: bool | None = None,
594                depth: int = 9,
595                end=None) -> 'Die[T_co]':
596        """Causes outcomes to be rolled again and added to the total.
597
598        Args:
599            outcomes: Which outcomes to explode. Options:
600                * A collection of outcomes to explode.
601                * A callable that takes an outcome and returns `True` if it
602                    should be exploded.
603                * If not supplied, the max outcome will explode.
604            star: Whether outcomes should be unpacked into separate arguments
605                before sending them to a callable `outcomes`.
606                If not provided, this will be guessed based on the function
607                signature.
608            depth: The maximum number of additional dice to roll, not counting
609                the initial roll.
610                If not supplied, a default value will be used.
611            end: Once `depth` is reached, further explosions will be treated
612                as this value. By default, a zero value will be used.
613                `icepool.Reroll` will make one extra final roll, rerolling until
614                a non-exploding outcome is reached.
615        """
616
617        if outcomes is None:
618            outcome_set = {self.max_outcome()}
619        else:
620            outcome_set = self._select_outcomes(outcomes, star)
621
622        if depth < 0:
623            raise ValueError('depth cannot be negative.')
624        elif depth == 0:
625            return self
626
627        def map_final(outcome):
628            if outcome in outcome_set:
629                return outcome + icepool.Again
630            else:
631                return outcome
632
633        return self.map(map_final, again_depth=depth, again_end=end)

Causes outcomes to be rolled again and added to the total.

Arguments:
  • outcomes: Which outcomes to explode. Options:
    • A collection of outcomes to explode.
    • A callable that takes an outcome and returns True if it should be exploded.
    • If not supplied, the max outcome will explode.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable outcomes. If not provided, this will be guessed based on the function signature.
  • depth: The maximum number of additional dice to roll, not counting the initial roll. If not supplied, a default value will be used.
  • end: Once depth is reached, further explosions will be treated as this value. By default, a zero value will be used. icepool.Reroll will make one extra final roll, rerolling until a non-exploding outcome is reached.
def if_else( self, outcome_if_true: Union[~U, Die[~U]], outcome_if_false: Union[~U, Die[~U]], *, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~U, Die[~U], RerollType, NoneType] = None) -> Die[~U]:
635    def if_else(
636        self,
637        outcome_if_true: U | 'Die[U]',
638        outcome_if_false: U | 'Die[U]',
639        *,
640        again_count: int | None = None,
641        again_depth: int | None = None,
642        again_end: 'U | Die[U] | icepool.RerollType | None' = None
643    ) -> 'Die[U]':
644        """Ternary conditional operator.
645
646        This replaces truthy outcomes with the first argument and falsy outcomes
647        with the second argument.
648
649        Args:
650            again_count, again_depth, again_end: Forwarded to the final die constructor.
651        """
652        return self.map(lambda x: bool(x)).map(
653            {
654                True: outcome_if_true,
655                False: outcome_if_false
656            },
657            again_count=again_count,
658            again_depth=again_depth,
659            again_end=again_end)

Ternary conditional operator.

This replaces truthy outcomes with the first argument and falsy outcomes with the second argument.

Arguments:
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
def is_in(self, outcomes: Container[+T_co], /) -> Die[bool]:
661    def is_in(self, outcomes: Container[T_co], /) -> 'Die[bool]':
662        """A die that returns True iff the roll of the die is contained in the target."""
663        return self.map(lambda x: x in outcomes)

A die that returns True iff the roll of the die is contained in the target.

def count( self, rolls: int, outcomes: Container[+T_co], /) -> Die[int]:
665    def count(self, rolls: int, outcomes: Container[T_co], /) -> 'Die[int]':
666        """Roll this die a number of times and count how many results are in the target."""
667        return rolls @ self.is_in(outcomes)

Roll this die a number of times and count how many results are in the target.

def sequence(self, rolls: int) -> Die[tuple[+T_co, ...]]:
738    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
739        """Possible sequences produced by rolling this die a number of times.
740        
741        This is extremely expensive computationally. If possible, use `reduce()`
742        instead; if you don't care about order, `Die.pool()` is better.
743        """
744        return icepool.cartesian_product(*(self for _ in range(rolls)),
745                                         outcome_type=tuple)  # type: ignore

Possible sequences produced by rolling this die a number of times.

This is extremely expensive computationally. If possible, use reduce() instead; if you don't care about order, Die.pool() is better.

def pool( self, rolls: Union[int, Sequence[int]] = 1, /) -> Pool[+T_co]:
747    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
748        """Creates a `Pool` from this `Die`.
749
750        You might subscript the pool immediately afterwards, e.g.
751        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
752        lowest of 5d6.
753
754        Args:
755            rolls: The number of copies of this `Die` to put in the pool.
756                Or, a sequence of one `int` per die acting as
757                `keep_tuple`. Note that `...` cannot be used in the
758                argument to this method, as the argument determines the size of
759                the pool.
760        """
761        if isinstance(rolls, int):
762            return icepool.Pool({self: rolls})
763        else:
764            pool_size = len(rolls)
765            # Haven't dealt with narrowing return type.
766            return icepool.Pool({self: pool_size})[rolls]  # type: ignore

Creates a Pool from this Die.

You might subscript the pool immediately afterwards, e.g. d6.pool(5)[-1, ..., 1] takes the difference between the highest and lowest of 5d6.

Arguments:
  • rolls: The number of copies of this Die to put in the pool. Or, a sequence of one int per die acting as keep_tuple. Note that ... cannot be used in the argument to this method, as the argument determines the size of the pool.
def keep( self, rolls: Union[int, Sequence[int]], index: Union[slice, Sequence[int | ellipsis], int, NoneType] = None, /) -> Die:
814    def keep(self,
815             rolls: int | Sequence[int],
816             index: slice | Sequence[int | EllipsisType] | int | None = None,
817             /) -> 'Die':
818        """Selects elements after drawing and sorting and sums them.
819
820        Args:
821            rolls: The number of dice to roll.
822            index: One of the following:
823            * An `int`. This will count only the roll at the specified index.
824            In this case, the result is a `Die` rather than a generator.
825            * A `slice`. The selected dice are counted once each.
826            * A sequence of `int`s with length equal to `rolls`.
827                Each roll is counted that many times, which could be multiple or
828                negative times.
829
830                Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
831                the `rolls` argument may be omitted.
832
833                `...` will be replaced with a number of zero counts in order
834                to make up any missing elements compared to `rolls`.
835                This number may be "negative" if more `int`s are provided than
836                `rolls`. Specifically:
837
838                * If `index` is shorter than `rolls`, `...`
839                    acts as enough zero counts to make up the difference.
840                    E.g. `(1, ..., 1)` on five dice would act as
841                    `(1, 0, 0, 0, 1)`.
842                * If `index` has length equal to `rolls`, `...` has no effect.
843                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
844                * If `index` is longer than `rolls` and `...` is on one side,
845                    elements will be dropped from `index` on the side with `...`.
846                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
847                * If `index` is longer than `rolls` and `...`
848                    is in the middle, the counts will be as the sum of two
849                    one-sided `...`.
850                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
851                    If `rolls` was 1 this would have the -1 and 1 cancel each other out.
852        """
853        if isinstance(rolls, int):
854            if index is None:
855                raise ValueError(
856                    'If the number of rolls is an integer, an index argument must be provided.'
857                )
858            if isinstance(index, int):
859                return self.pool(rolls).keep(index)
860            else:
861                return self.pool(rolls).keep(index).sum()  # type: ignore
862        else:
863            if index is not None:
864                raise ValueError('Only one index sequence can be given.')
865            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore

Selects elements after drawing and sorting and sums them.

Arguments:
  • rolls: The number of dice to roll.
  • index: One of the following:
    • An int. This will count only the roll at the specified index. In this case, the result is a Die rather than a generator.
    • A slice. The selected dice are counted once each.
    • A sequence of ints with length equal to rolls. Each roll is counted that many times, which could be multiple or negative times.

    Up to one ... (Ellipsis) may be used. If no ... is used, the rolls argument may be omitted.

    ... will be replaced with a number of zero counts in order to make up any missing elements compared to rolls. This number may be "negative" if more ints are provided than rolls. Specifically:

    • If index is shorter than rolls, ... acts as enough zero counts to make up the difference. E.g. (1, ..., 1) on five dice would act as (1, 0, 0, 0, 1).
    • If index has length equal to rolls, ... has no effect. E.g. (1, ..., 1) on two dice would act as (1, 1).
    • If index is longer than rolls and ... is on one side, elements will be dropped from index on the side with .... E.g. (..., 1, 2, 3) on two dice would act as (2, 3).
    • If index is longer than rolls and ... is in the middle, the counts will be as the sum of two one-sided .... E.g. (-1, ..., 1) acts like (-1, ...) plus (..., 1). If rolls was 1 this would have the -1 and 1 cancel each other out.

def lowest( self, rolls: int, /, keep: int | None = None, drop: int | None = None) -> Die:
867    def lowest(self,
868               rolls: int,
869               /,
870               keep: int | None = None,
871               drop: int | None = None) -> 'Die':
872        """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.
873
874        The outcomes should support addition and multiplication if `keep != 1`.
875
876        Args:
877            rolls: The number of dice to roll. All dice will have the same
878                outcomes as `self`.
879            keep, drop: These arguments work together:
880                * If neither are provided, the single lowest die will be taken.
881                * If only `keep` is provided, the `keep` lowest dice will be summed.
882                * If only `drop` is provided, the `drop` lowest dice will be dropped
883                    and the rest will be summed.
884                * If both are provided, `drop` lowest dice will be dropped, then
885                    the next `keep` lowest dice will be summed.
886
887        Returns:
888            A `Die` representing the probability distribution of the sum.
889        """
890        index = lowest_slice(keep, drop)
891        canonical = canonical_slice(index, rolls)
892        if canonical.start == 0 and canonical.stop == 1:
893            return self._lowest_single(rolls)
894        # Expression evaluators are difficult to type.
895        return self.pool(rolls)[index].sum()  # type: ignore

Roll several of this Die and return the lowest result, or the sum of some of the lowest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • rolls: The number of dice to roll. All dice will have the same outcomes as self.
  • keep, drop: These arguments work together:
    • If neither are provided, the single lowest die will be taken.
    • If only keep is provided, the keep lowest dice will be summed.
    • If only drop is provided, the drop lowest dice will be dropped and the rest will be summed.
    • If both are provided, drop lowest dice will be dropped, then the next keep lowest dice will be summed.
Returns:

A Die representing the probability distribution of the sum.

def highest( self, rolls: int, /, keep: int | None = None, drop: int | None = None) -> Die[+T_co]:
905    def highest(self,
906                rolls: int,
907                /,
908                keep: int | None = None,
909                drop: int | None = None) -> 'Die[T_co]':
910        """Roll several of this `Die` and return the highest result, or the sum of some of the highest.
911
912        The outcomes should support addition and multiplication if `keep != 1`.
913
914        Args:
915            rolls: The number of dice to roll.
916            keep, drop: These arguments work together:
917                * If neither are provided, the single highest die will be taken.
918                * If only `keep` is provided, the `keep` highest dice will be summed.
919                * If only `drop` is provided, the `drop` highest dice will be dropped
920                    and the rest will be summed.
921                * If both are provided, `drop` highest dice will be dropped, then
922                    the next `keep` highest dice will be summed.
923
924        Returns:
925            A `Die` representing the probability distribution of the sum.
926        """
927        index = highest_slice(keep, drop)
928        canonical = canonical_slice(index, rolls)
929        if canonical.start == rolls - 1 and canonical.stop == rolls:
930            return self._highest_single(rolls)
931        # Expression evaluators are difficult to type.
932        return self.pool(rolls)[index].sum()  # type: ignore

Roll several of this Die and return the highest result, or the sum of some of the highest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • rolls: The number of dice to roll.
  • keep, drop: These arguments work together:
    • If neither are provided, the single highest die will be taken.
    • If only keep is provided, the keep highest dice will be summed.
    • If only drop is provided, the drop highest dice will be dropped and the rest will be summed.
    • If both are provided, drop highest dice will be dropped, then the next keep highest dice will be summed.
Returns:

A Die representing the probability distribution of the sum.

def middle( self, rolls: int, /, keep: int = 1, *, tie: Literal['error', 'high', 'low'] = 'error') -> Die:
941    def middle(
942            self,
943            rolls: int,
944            /,
945            keep: int = 1,
946            *,
947            tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
948        """Roll several of this `Die` and sum the sorted results in the middle.
949
950        The outcomes should support addition and multiplication if `keep != 1`.
951
952        Args:
953            rolls: The number of dice to roll.
954            keep: The number of outcomes to sum. If this is greater than
955                `rolls`, all are kept.
956            tie: What to do if `keep` is odd but `rolls` is even, or
957                vice versa.
958                * 'error' (default): Raises `IndexError`.
959                * 'high': The higher outcome is taken.
960                * 'low': The lower outcome is taken.
961        """
962        # Expression evaluators are difficult to type.
963        return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore

Roll several of this Die and sum the sorted results in the middle.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • rolls: The number of dice to roll.
  • keep: The number of outcomes to sum. If this is greater than rolls, all are kept.
  • tie: What to do if keep is odd but rolls is even, or vice versa.
    • 'error' (default): Raises IndexError.
    • 'high': The higher outcome is taken.
    • 'low': The lower outcome is taken.
def map_to_pool( self, repl: Optional[Callable[..., Union[Sequence[Union[Die[~U], ~U]], Mapping[Die[~U], int], Mapping[~U, int], RerollType]]] = None, /, *extra_args: Outcome | Die | MultisetExpression, star: bool | None = None, **kwargs) -> MultisetExpression[~U]:
965    def map_to_pool(
966            self,
967            repl:
968        'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
969            /,
970            *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
971            star: bool | None = None,
972            **kwargs) -> 'icepool.MultisetExpression[U]':
973        """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.
974
975        As `icepool.map_to_pool(repl, self, ...)`.
976
977        If no argument is provided, the outcomes will be used to construct a
978        mixture of pools directly, similar to the inverse of `pool.expand()`.
979        Note that this is not particularly efficient since it does not make much
980        use of dynamic programming.
981        """
982        if repl is None:
983            repl = lambda x: x
984        return icepool.map_to_pool(repl,
985                                   self,
986                                   *extra_args,
987                                   star=star,
988                                   **kwargs)

EXPERIMENTAL: Maps outcomes of this Die to Pools, creating a MultisetGenerator.

As icepool.map_to_pool(repl, self, ...).

If no argument is provided, the outcomes will be used to construct a mixture of pools directly, similar to the inverse of pool.expand(). Note that this is not particularly efficient since it does not make much use of dynamic programming.

def explode_to_pool( self, rolls: int = 1, outcomes: Union[Callable[..., bool], Collection[+T_co], NoneType] = None, /, *, star: bool | None = None, depth: int = 9) -> MultisetExpression[+T_co]:
 990    def explode_to_pool(self,
 991                        rolls: int = 1,
 992                        outcomes: Collection[T_co] | Callable[..., bool]
 993                        | None = None,
 994                        /,
 995                        *,
 996                        star: bool | None = None,
 997                        depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
 998        """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.
 999        
1000        Args:
1001            rolls: The number of initial dice.
1002            outcomes: Which outcomes to explode. Options:
1003                * A single outcome to explode.
1004                * A collection of outcomes to explode.
1005                * A callable that takes an outcome and returns `True` if it
1006                    should be exploded.
1007                * If not supplied, the max outcome will explode.
1008            star: Whether outcomes should be unpacked into separate arguments
1009                before sending them to a callable `outcomes`.
1010                If not provided, this will be guessed based on the function
1011                signature.
1012            depth: The maximum depth of explosions for an individual die.
1013
1014        Returns:
1015            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
1016            that this is not technically a `Pool`, though it supports most of 
1017            the same operations.
1018        """
1019        if depth == 0:
1020            return self.pool(rolls)
1021        if outcomes is None:
1022            explode_set = {self.max_outcome()}
1023        else:
1024            explode_set = self._select_outcomes(outcomes, star)
1025        if not explode_set:
1026            return self.pool(rolls)
1027        explode: 'Die[T_co]'
1028        not_explode: 'Die[T_co]'
1029        explode, not_explode = self.split(explode_set)
1030
1031        single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
1032            int)
1033        for i in range(depth + 1):
1034            weight = explode.denominator()**i * self.denominator()**(
1035                depth - i) * not_explode.denominator()
1036            single_data[icepool.Vector((i, 1))] += weight
1037        single_data[icepool.Vector(
1038            (depth + 1, 0))] += explode.denominator()**(depth + 1)
1039
1040        single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
1041        count_die = rolls @ single_count_die
1042
1043        return count_die.map_to_pool(
1044            lambda x, nx: [explode] * x + [not_explode] * nx)

EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

Arguments:
  • rolls: The number of initial dice.
  • outcomes: Which outcomes to explode. Options:
    • A single outcome to explode.
    • A collection of outcomes to explode.
    • A callable that takes an outcome and returns True if it should be exploded.
    • If not supplied, the max outcome will explode.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable outcomes. If not provided, this will be guessed based on the function signature.
  • depth: The maximum depth of explosions for an individual die.
Returns:

A MultisetGenerator representing the mixture of Pools. Note that this is not technically a Pool, though it supports most of the same operations.

def reroll_to_pool( self, rolls: int, outcomes: Union[Callable[..., bool], Collection[+T_co], NoneType] = None, /, *, max_rerolls: Union[int, Literal['inf']], star: bool | None = None, depth: Union[int, Literal['inf']] = 1, mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random') -> MultisetExpression[+T_co]:
1046    def reroll_to_pool(
1047        self,
1048        rolls: int,
1049        outcomes: Callable[..., bool] | Collection[T_co] | None = None,
1050        /,
1051        *,
1052        max_rerolls: int | Literal['inf'],
1053        star: bool | None = None,
1054        depth: int | Literal['inf'] = 1,
1055        mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
1056    ) -> 'icepool.MultisetExpression[T_co]':
1057        """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.
1058
1059        Each die can only be rerolled once (effectively `depth=1`), and no more
1060        than `max_rerolls` dice may be rerolled.
1061        
1062        Args:
1063            rolls: How many dice in the pool.
1064            outcomes: Selects which outcomes are eligible to be rerolled.
1065                Options:
1066                * A collection of outcomes to reroll.
1067                * A callable that takes an outcome and returns `True` if it
1068                    could be rerolled.
1069                * If not provided, the single minimum outcome will be rerolled.
1070            max_rerolls: The maximum total number of rerolls.
1071                If `max_rerolls == 'inf'`, then this is the same as 
1072                `self.reroll(outcomes, star=star, depth=depth).pool(rolls)`.
1073            depth: EXTRA EXPERIMENTAL: The maximum depth of rerolls.
1074            star: Whether outcomes should be unpacked into separate arguments
1075                before sending them to a callable `outcomes`.
1076                If not provided, this will be guessed based on the function
1077                signature.
1078            mode: How dice are selected for rerolling if there are more eligible
1079                dice than `max_rerolls`. Options:
1080                * `'random'` (default): Eligible dice will be chosen uniformly
1081                    at random.
1082                * `'lowest'`: The lowest eligible dice will be rerolled.
1083                * `'highest'`: The highest eligible dice will be rerolled.
1084                * `'drop'`: All dice that ended up on an outcome selected by 
1085                    `outcomes` will be dropped. This includes both dice that rolled
1086                    into `outcomes` initially and were not rerolled, and dice that
1087                    were rerolled but rolled into `outcomes` again. This can be
1088                    considerably more efficient than the other modes.
1089
1090        Returns:
1091            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
1092            that this is not technically a `Pool`, though it supports most of 
1093            the same operations.
1094        """
1095        if max_rerolls == 'inf':
1096            return self.reroll(outcomes, star=star, depth=depth).pool(rolls)
1097
1098        if outcomes is None:
1099            rerollable_set = {self.min_outcome()}
1100        else:
1101            rerollable_set = self._select_outcomes(outcomes, star)
1102            if not rerollable_set:
1103                return self.pool(rolls)
1104
1105        rerollable_die: 'Die[T_co]'
1106        not_rerollable_die: 'Die[T_co]'
1107        rerollable_die, not_rerollable_die = self.split(rerollable_set)
1108        single_is_rerollable = icepool.coin(rerollable_die.denominator(),
1109                                            self.denominator())
1110
1111        if depth == 'inf':
1112            depth = max_rerolls
1113
1114        def step(rerollable, rerolls_left):
1115            """Advances one step of rerolling if there are enough rerolls left to cover all rerollable dice.
1116            
1117            Returns:
1118                The number of dice showing rerollable outcomes and the number of remaining rerolls.
1119            """
1120            if rerollable == 0:
1121                return 0, 0
1122            if rerolls_left < rerollable:
1123                return rerollable, rerolls_left
1124
1125            return icepool.tupleize(rerollable @ single_is_rerollable,
1126                                    rerolls_left - rerollable)
1127
1128        initial_state = icepool.tupleize(rolls @ single_is_rerollable,
1129                                         max_rerolls)
1130        mid_pool_composition: Die[tuple[int, int]]
1131        mid_pool_composition = icepool.map(step,
1132                                           initial_state,
1133                                           star=True,
1134                                           repeat=depth - 1)
1135
1136        def final_step(rerollable, rerolls_left):
1137            """Performs the final reroll, which might not have enough rerolls to cover all rerollable dice.
1138            
1139            Returns: The number of dice that had a rerollable outcome,
1140                the number of dice that were rerolled due to max_rerolls,
1141                the number of rerolled dice that landed on a rerollable outcome
1142                again.
1143            """
1144            rerolled = min(rerollable, rerolls_left)
1145
1146            return icepool.tupleize(rerollable, rerolled,
1147                                    rerolled @ single_is_rerollable)
1148
1149        pool_composition: Die[tuple[int, int, int]] = mid_pool_composition.map(
1150            final_step, star=True)
1151
1152        denominator = self.denominator()**(rolls + max_rerolls)
1153        pool_composition = pool_composition.multiply_to_denominator(
1154            denominator)
1155
1156        def make_pool(rerollable, rerolled, rerolled_to_rerollable):
1157            rerolls_ran_out = rerollable - rerolled
1158            not_rerollable = rolls - rerolls_ran_out - rerolled_to_rerollable
1159            common = rerollable_die.pool(
1160                rerolled_to_rerollable) + not_rerollable_die.pool(
1161                    not_rerollable)
1162            match mode:
1163                case 'random':
1164                    return common + rerollable_die.pool(rerolls_ran_out)
1165                case 'lowest':
1166                    return common + rerollable_die.pool(rerollable).highest(
1167                        rerolls_ran_out)
1168                case 'highest':
1169                    return common + rerollable_die.pool(rerollable).lowest(
1170                        rerolls_ran_out)
1171                case 'drop':
1172                    return not_rerollable_die.pool(not_rerollable)
1173                case _:
1174                    raise ValueError(
1175                        f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
1176                    )
1177
1178        return pool_composition.map_to_pool(make_pool, star=True)

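The mixture computed above can be cross-checked by brute-force enumeration. The sketch below (plain Python, not using icepool; all names are hypothetical) enumerates 2d6 with a single shared reroll of 1s under `mode='lowest'`, drawing one extra die per allowed reroll so that the denominator is `6 ** (rolls + max_rerolls)` as in the implementation:

```python
from fractions import Fraction
from itertools import product

# dist maps each sorted final pool to its exact probability.
dist = {}
for a, b, r in product(range(1, 7), repeat=3):
    pool = sorted((a, b))
    if pool[0] == 1:  # reroll the lowest eligible die (mode='lowest')
        pool = sorted((pool[1], r))
    key = tuple(pool)
    dist[key] = dist.get(key, 0) + 1
denominator = 6**3  # 6 ** (rolls + max_rerolls)
dist = {k: Fraction(v, denominator) for k, v in dist.items()}

# Probability that a 1 survives in the final pool: 2/27.
p_one = sum(p for pool, p in dist.items() if 1 in pool)
```

The unused reroll die in the no-reroll branches contributes a uniform factor, which is why padding the denominator this way leaves the distribution unchanged.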
def abs(self) -> Die[+T_co]:
1191    def abs(self) -> 'Die[T_co]':
1192        return self.unary_operator(operator.abs)
def round(self, ndigits: int | None = None) -> Die:
1196    def round(self, ndigits: int | None = None) -> 'Die':
1197        return self.unary_operator(round, ndigits)
def stochastic_round( self, *, max_denominator: int | None = None) -> Die[int]:
1201    def stochastic_round(self,
1202                         *,
1203                         max_denominator: int | None = None) -> 'Die[int]':
1204        """Randomly rounds outcomes up or down to the nearest integer according to the two distances.
1205        
1206        Specifically, rounds `x` up with probability `x - floor(x)` and down
1207        otherwise.
1208
1209        Args:
1210            max_denominator: If provided, each rounding will be performed
1211                using `fractions.Fraction.limit_denominator(max_denominator)`.
1212                Otherwise, the rounding will be performed without
1213                `limit_denominator`.
1214        """
1215        return self.map(lambda x: icepool.stochastic_round(
1216            x, max_denominator=max_denominator))

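The rounding rule can be sketched as a standalone distribution over the two nearest integers (a hypothetical helper, using exact `Fraction` arithmetic):

```python
from fractions import Fraction
from math import floor

def stochastic_round_dist(x):
    """Distribution of stochastic rounding: x rounds up with probability x - floor(x)."""
    x = Fraction(x)
    lo = floor(x)
    frac = x - lo
    if frac == 0:
        return {lo: Fraction(1)}
    return {lo: 1 - frac, lo + 1: frac}
```

For example, 9/4 rounds to 2 with probability 3/4 and to 3 with probability 1/4.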
def trunc(self) -> Die:
1218    def trunc(self) -> 'Die':
1219        return self.unary_operator(math.trunc)
def floor(self) -> Die:
1223    def floor(self) -> 'Die':
1224        return self.unary_operator(math.floor)
def ceil(self) -> Die:
1228    def ceil(self) -> 'Die':
1229        return self.unary_operator(math.ceil)
def cmp(self, other) -> Die[int]:
1435    def cmp(self, other) -> 'Die[int]':
1436        """A `Die` with outcomes 1, -1, and 0.
1437
1438        The quantities of 1 and -1 equal the quantities of `True` in `self > other`
1439        and `self < other` respectively; the remainder goes to 0.
1440        """
1441        other = implicit_convert_to_die(other)
1442
1443        data = {}
1444
1445        lt = self < other
1446        if True in lt:
1447            data[-1] = lt[True]
1448        eq = self == other
1449        if True in eq:
1450            data[0] = eq[True]
1451        gt = self > other
1452        if True in gt:
1453            data[1] = gt[True]
1454
1455        return Die(data)

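The same quantities can be found by direct enumeration over pairs of faces (a hypothetical helper for two lists of equally likely faces):

```python
from itertools import product

def cmp_quantities(faces_a, faces_b):
    """Counts of 1, 0, -1 for a > b, a == b, a < b over all face pairs."""
    counts = {1: 0, 0: 0, -1: 0}
    for a, b in product(faces_a, faces_b):
        counts[(a > b) - (a < b)] += 1
    return counts

# d4 vs d6 over a denominator of 24.
q = cmp_quantities(range(1, 5), range(1, 7))
```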
def sign(self) -> Die[int]:
1467    def sign(self) -> 'Die[int]':
1468        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.
1469
1470        Note that for `float`s, +0.0, -0.0, and nan all become 0.
1471        """
1472        return self.unary_operator(Die._sign)

class Population(abc.ABC, icepool.expand.Expandable[+T_co], typing.Mapping[typing.Any, int]):
 29class Population(ABC, Expandable[T_co], Mapping[Any, int]):
 30    """A mapping from outcomes to `int` quantities.
 31
 32    Outcomes within each instance must be hashable and totally orderable.
 33
 34    Subclasses include `Die` and `Deck`.
 35    """
 36
 37    # Abstract methods.
 38
 39    @property
 40    @abstractmethod
 41    def _new_type(self) -> type:
 42        """The type to use when constructing a new instance."""
 43
 44    @abstractmethod
 45    def keys(self) -> CountsKeysView[T_co]:
 46        """The outcomes within the population in sorted order."""
 47
 48    @abstractmethod
 49    def values(self) -> CountsValuesView:
 50        """The quantities within the population in outcome order."""
 51
 52    @abstractmethod
 53    def items(self) -> CountsItemsView[T_co]:
 54        """The (outcome, quantity)s of the population in sorted order."""
 55
 56    @property
 57    def _items_for_cartesian_product(self) -> Sequence[tuple[T_co, int]]:
 58        return self.items()
 59
 60    def _unary_operator(self, op: Callable, *args, **kwargs):
 61        data: MutableMapping[Any, int] = defaultdict(int)
 62        for outcome, quantity in self.items():
 63            new_outcome = op(outcome, *args, **kwargs)
 64            data[new_outcome] += quantity
 65        return self._new_type(data)
 66
 67    # Outcomes.
 68
 69    def outcomes(self) -> CountsKeysView[T_co]:
 70        """The outcomes of the mapping in ascending order.
 71
 72        These are also the `keys` of the mapping.
 73        Prefer to use the name `outcomes`.
 74        """
 75        return self.keys()
 76
 77    @cached_property
 78    def _common_outcome_length(self) -> int | None:
 79        result = None
 80        for outcome in self.outcomes():
 81            if isinstance(outcome, Mapping):
 82                return None
 83            elif isinstance(outcome, Sized):
 84                if result is None:
 85                    result = len(outcome)
 86                elif len(outcome) != result:
 87                    return None
 88        return result
 89
 90    def common_outcome_length(self) -> int | None:
 91        """The common length of all outcomes.
 92
 93        If outcomes have no lengths or different lengths, the result is `None`.
 94        """
 95        return self._common_outcome_length
 96
 97    def is_empty(self) -> bool:
 98        """`True` iff this population has no outcomes. """
 99        return len(self) == 0
100
101    def min_outcome(self) -> T_co:
102        """The least outcome."""
103        return self.outcomes()[0]
104
105    def max_outcome(self) -> T_co:
106        """The greatest outcome."""
107        return self.outcomes()[-1]
108
109    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
110                /) -> T_co | None:
111        """The nearest outcome in this population fitting the comparison.
112
113        Args:
114            comparison: The comparison which the result must fit. For example,
115                '<=' would find the greatest outcome that is not greater than
116                the argument.
117            outcome: The outcome to compare against.
118        
119        Returns:
120            The nearest outcome fitting the comparison, or `None` if there is
121            no such outcome.
122        """
123        match comparison:
124            case '<=':
125                if outcome in self:
126                    return outcome
127                index = bisect.bisect_right(self.outcomes(), outcome) - 1
128                if index < 0:
129                    return None
130                return self.outcomes()[index]
131            case '<':
132                index = bisect.bisect_left(self.outcomes(), outcome) - 1
133                if index < 0:
134                    return None
135                return self.outcomes()[index]
136            case '>=':
137                if outcome in self:
138                    return outcome
139                index = bisect.bisect_left(self.outcomes(), outcome)
140                if index >= len(self):
141                    return None
142                return self.outcomes()[index]
143            case '>':
144                index = bisect.bisect_right(self.outcomes(), outcome)
145                if index >= len(self):
146                    return None
147                return self.outcomes()[index]
148            case _:
149                raise ValueError(f'Invalid comparison {comparison}')
150
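The `bisect` logic above can be sketched standalone for a plain sorted sequence (the membership checks in the `'<='` and `'>='` branches are subsumed by the bisect calls, so this hypothetical version folds them in):

```python
import bisect

def nearest(outcomes, comparison, outcome):
    """Nearest element of a sorted sequence fitting the comparison, or None."""
    if comparison == '<=':
        index = bisect.bisect_right(outcomes, outcome) - 1
        return outcomes[index] if index >= 0 else None
    if comparison == '<':
        index = bisect.bisect_left(outcomes, outcome) - 1
        return outcomes[index] if index >= 0 else None
    if comparison == '>=':
        index = bisect.bisect_left(outcomes, outcome)
        return outcomes[index] if index < len(outcomes) else None
    if comparison == '>':
        index = bisect.bisect_right(outcomes, outcome)
        return outcomes[index] if index < len(outcomes) else None
    raise ValueError(f'Invalid comparison {comparison}')
```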
151    @staticmethod
152    def _zero(x):
153        return x * 0
154
155    def zero(self: C) -> C:
156        """Zeros all outcomes of this population.
157
158        This is done by multiplying all outcomes by `0`.
159
160        The result will have the same denominator.
161
162        Raises:
163            ValueError: If the zeros did not resolve to a single outcome.
164        """
165        result = self._unary_operator(Population._zero)
166        if len(result) != 1:
167            raise ValueError('zero() did not resolve to a single outcome.')
168        return result
169
170    def zero_outcome(self) -> T_co:
171        """A zero-outcome for this population.
172
173        E.g. `0` for a `Population` whose outcomes are `int`s.
174        """
175        return self.zero().outcomes()[0]
176
177    # Quantities.
178
179    @overload
180    def quantity(self, outcome: Hashable, /) -> int:
181        """The quantity of a single outcome."""
182
183    @overload
184    def quantity(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
185                 outcome: Hashable, /) -> int:
186        """The total quantity fitting a comparison to a single outcome."""
187
188    def quantity(self,
189                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
190                 | Hashable,
191                 outcome: Hashable | None = None,
192                 /) -> int:
193        """The quantity of a single outcome.
194
195        A comparison can be provided, in which case this returns the total
196        quantity fitting the comparison.
197        
198        Args:
199            comparison: The comparison to use. This can be omitted, in which
200                case it is treated as '=='.
201            outcome: The outcome to query.
202        """
203        if outcome is None:
204            outcome = comparison
205            comparison = '=='
206        else:
207            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
208                              comparison)
209
210        match comparison:
211            case '==':
212                return self.get(outcome, 0)
213            case '!=':
214                return self.denominator() - self.get(outcome, 0)
215            case '<=' | '<':
216                threshold = self.nearest(comparison, outcome)
217                if threshold is None:
218                    return 0
219                else:
220                    return self._cumulative_quantities[threshold]
221            case '>=':
222                return self.denominator() - self.quantity('<', outcome)
223            case '>':
224                return self.denominator() - self.quantity('<=', outcome)
225            case _:
226                raise ValueError(f'Invalid comparison {comparison}')
227
228    @overload
229    def quantities(self, /) -> CountsValuesView:
230        """All quantities in sorted order."""
231
232    @overload
233    def quantities(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
234                   /) -> Sequence[int]:
235        """The total quantities fitting the comparison for each outcome in sorted order.
236        
237        For example, '<=' gives the CDF.
238
239        Args:
240            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
241                May be omitted, in which case equality `'=='` is used.
245        """
246
247    def quantities(self,
248                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
249                   | None = None,
250                   /) -> CountsValuesView | Sequence[int]:
251        """The quantities of the mapping in sorted order.
252
253        For example, '<=' gives the CDF.
254
255        Args:
256            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
257                May be omitted, in which case equality `'=='` is used.
258        """
259        if comparison is None:
260            comparison = '=='
261
262        match comparison:
263            case '==':
264                return self.values()
265            case '<=':
266                return tuple(itertools.accumulate(self.values()))
267            case '>=':
268                return tuple(
269                    itertools.accumulate(self.values()[:-1],
270                                         operator.sub,
271                                         initial=self.denominator()))
272            case '!=':
273                return tuple(self.denominator() - q for q in self.values())
274            case '<':
275                return tuple(self.denominator() - q
276                             for q in self.quantities('>='))
277            case '>':
278                return tuple(self.denominator() - q
279                             for q in self.quantities('<='))
280            case _:
281                raise ValueError(f'Invalid comparison {comparison}')
282
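The comparison-to-cumulative mapping above can be mirrored for a plain tuple of quantities in outcome order (a hypothetical standalone helper; `'<='` is the CDF, `'>='` the survival function):

```python
import itertools
import operator

def quantities(values, comparison='=='):
    """Cumulative or complementary quantities for a tuple in outcome order."""
    denominator = sum(values)
    if comparison == '==':
        return tuple(values)
    if comparison == '<=':
        return tuple(itertools.accumulate(values))
    if comparison == '>=':
        return tuple(itertools.accumulate(values[:-1], operator.sub,
                                          initial=denominator))
    if comparison == '!=':
        return tuple(denominator - q for q in values)
    if comparison == '<':
        return tuple(denominator - q for q in quantities(values, '>='))
    if comparison == '>':
        return tuple(denominator - q for q in quantities(values, '<='))
    raise ValueError(f'Invalid comparison {comparison}')
```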
283    @cached_property
284    def _cumulative_quantities(self) -> Mapping[T_co, int]:
285        result = {}
286        cdf = 0
287        for outcome, quantity in self.items():
288            cdf += quantity
289            result[outcome] = cdf
290        return result
291
292    @cached_property
293    def _denominator(self) -> int:
294        return sum(self.values())
295
296    def denominator(self) -> int:
297        """The sum of all quantities (e.g. weights or duplicates).
298
299        For the number of unique outcomes, use `len()`.
300        """
301        return self._denominator
302
303    def multiply_quantities(self: C, scale: int, /) -> C:
304        """Multiplies all quantities by an integer."""
305        if scale == 1:
306            return self
307        data = {
308            outcome: quantity * scale
309            for outcome, quantity in self.items()
310        }
311        return self._new_type(data)
312
313    def divide_quantities(self: C, divisor: int, /) -> C:
314        """Divides all quantities by an integer, rounding down.
315        
316        Resulting zero quantities are dropped.
317        """
318        if divisor == 0:
319            return self
320        data = {
321            outcome: quantity // divisor
322            for outcome, quantity in self.items() if quantity >= divisor
323        }
324        return self._new_type(data)
325
326    def modulo_quantities(self: C, divisor: int, /) -> C:
327        """Modulus of all quantities with an integer."""
328        data = {
329            outcome: quantity % divisor
330            for outcome, quantity in self.items()
331        }
332        return self._new_type(data)
333
334    def pad_to_denominator(self: C, denominator: int, /,
335                           outcome: Hashable) -> C:
336        """Changes the denominator to a target number by changing the quantity of a specified outcome.
337        
338        Args:
339            denominator: The denominator of the result.
340            outcome: The outcome whose quantity will be adjusted.
341
342        Returns:
343            A `Population` like `self` but with the quantity of `outcome`
344            adjusted so that the overall denominator is equal to `denominator`.
345            If the quantity of `outcome` is reduced to zero, it will be removed.
346
347        Raises:
348            `ValueError` if this would require the quantity of the specified
349            outcome to be negative.
350        """
351        adjustment = denominator - self.denominator()
352        data = {outcome: quantity for outcome, quantity in self.items()}
353        new_quantity = data.get(outcome, 0) + adjustment
354        if new_quantity > 0:
355            data[outcome] = new_quantity
356        elif new_quantity == 0:
357            del data[outcome]
358        else:
359            raise ValueError(
360                f'Padding to denominator of {denominator} would require a negative quantity of {new_quantity} for {outcome}'
361            )
362        return self._new_type(data)
363
364    def multiply_to_denominator(self: C, denominator: int, /) -> C:
365        """Multiplies all quantities to reach the target denominator.
366        
367        Raises:
368            ValueError if this cannot be achieved using an integer scaling.
369        """
370        if denominator % self.denominator():
371            raise ValueError(
372                'Target denominator is not an integer multiple of the current denominator.'
373            )
374        return self.multiply_quantities(denominator // self.denominator())
375
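The scaling step can be sketched for a plain `{outcome: quantity}` mapping (a hypothetical helper following the same divisibility check as above):

```python
def multiply_to_denominator(data, denominator):
    """Scale integer quantities so they sum to the target denominator."""
    current = sum(data.values())
    if denominator % current:
        raise ValueError(
            'Target denominator is not an integer multiple of the current denominator.')
    scale = denominator // current
    return {outcome: quantity * scale for outcome, quantity in data.items()}
```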
376    def append(self: C, outcome, quantity: int = 1, /) -> C:
377        """This population with an outcome appended.
378        
379        Args:
380            outcome: The outcome to append.
381            quantity: The quantity of the outcome to append. Can be negative,
382                which removes quantity (but not below zero).
383        """
384        data = Counter(self)
385        data[outcome] = max(data[outcome] + quantity, 0)
386        return self._new_type(data)
387
388    def remove(self: C, outcome, quantity: int | None = None, /) -> C:
389        """This population with an outcome removed.
390
391        Args:
392            outcome: The outcome to remove.
393            quantity: The quantity of the outcome to remove. If not set, all
394                quantity of that outcome is removed. Can be negative, which adds
395                quantity instead.
396        """
397        if quantity is None:
398            data = Counter(self)
399            data[outcome] = 0
400            return self._new_type(data)
401        else:
402            return self.append(outcome, -quantity)
403
404    # Probabilities.
405
406    @overload
407    def probability(self, outcome: Hashable, /, *,
408                    percent: Literal[False]) -> Fraction:
409        """The probability of a single outcome, or 0.0 if not present."""
410
411    @overload
412    def probability(self, outcome: Hashable, /, *,
413                    percent: Literal[True]) -> float:
414        """The probability of a single outcome, or 0.0 if not present."""
415
416    @overload
417    def probability(self, outcome: Hashable, /) -> Fraction:
418        """The probability of a single outcome, or 0.0 if not present."""
419
420    @overload
421    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
422                                              '>'], outcome: Hashable, /, *,
423                    percent: Literal[False]) -> Fraction:
424        """The total probability of outcomes fitting a comparison."""
425
426    @overload
427    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
428                                              '>'], outcome: Hashable, /, *,
429                    percent: Literal[True]) -> float:
430        """The total probability of outcomes fitting a comparison."""
431
432    @overload
433    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
434                                              '>'], outcome: Hashable,
435                    /) -> Fraction:
436        """The total probability of outcomes fitting a comparison."""
437
438    def probability(self,
439                    comparison: Literal['==', '!=', '<=', '<', '>=', '>']
440                    | Hashable,
441                    outcome: Hashable | None = None,
442                    /,
443                    *,
444                    percent: bool = False) -> Fraction | float:
445        """The total probability of outcomes fitting a comparison.
446        
447        Args:
448            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
449                May be omitted, in which case equality `'=='` is used.
450            outcome: The outcome to compare to.
451            percent: If set, the result will be a percentage expressed as a
452                `float`. Otherwise, the result is a `Fraction`.
453        """
454        if outcome is None:
455            outcome = comparison
456            comparison = '=='
457        else:
458            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
459                              comparison)
460        result = Fraction(self.quantity(comparison, outcome),
461                          self.denominator())
462        return result * 100.0 if percent else result
463
464    @overload
465    def probabilities(self, /, *,
466                      percent: Literal[False]) -> Sequence[Fraction]:
467        """All probabilities in sorted order."""
468
469    @overload
470    def probabilities(self, /, *, percent: Literal[True]) -> Sequence[float]:
471        """All probabilities in sorted order."""
472
473    @overload
474    def probabilities(self, /) -> Sequence[Fraction]:
475        """All probabilities in sorted order."""
476
477    @overload
478    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
479                                                '>'], /, *,
480                      percent: Literal[False]) -> Sequence[Fraction]:
481        """The total probabilities fitting the comparison for each outcome in sorted order.
482        
483        For example, '<=' gives the CDF.
484        """
485
486    @overload
487    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
488                                                '>'], /, *,
489                      percent: Literal[True]) -> Sequence[float]:
490        """The total probabilities fitting the comparison for each outcome in sorted order.
491        
492        For example, '<=' gives the CDF.
493        """
494
495    @overload
496    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
497                                                '>'], /) -> Sequence[Fraction]:
498        """The total probabilities fitting the comparison for each outcome in sorted order.
499        
500        For example, '<=' gives the CDF.
501        """
502
503    def probabilities(
504            self,
505            comparison: Literal['==', '!=', '<=', '<', '>=', '>']
506        | None = None,
507            /,
508            *,
509            percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
510        """The total probabilities fitting the comparison for each outcome in sorted order.
511        
512        For example, '<=' gives the CDF.
513
514        Args:
515            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
516                May be omitted, in which case equality `'=='` is used.
517            percent: If set, the result will be a percentage expressed as a
518                `float`. Otherwise, the result is a `Fraction`.
519        """
520        if comparison is None:
521            comparison = '=='
522
523        result = tuple(
524            Fraction(q, self.denominator())
525            for q in self.quantities(comparison))
526
527        if percent:
528            return tuple(100.0 * x for x in result)
529        else:
530            return result
531
532    # Scalar statistics.
533
534    def mode(self) -> tuple:
535        """A tuple containing the most common outcome(s) of the population.
536
537        These are sorted from lowest to highest.
538        """
539        return tuple(outcome for outcome, quantity in self.items()
540                     if quantity == self.modal_quantity())
541
542    def modal_quantity(self) -> int:
543        """The highest quantity of any single outcome. """
544        return max(self.quantities())
545
546    def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
547        """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """
548        outcomes = icepool.sorted_union(self, other)
549        return max(
550            abs(
551                self.probability('<=', outcome) -
552                other.probability('<=', outcome)) for outcome in outcomes)
553
554    def cramer_von_mises(self, other: 'Population') -> Fraction:
555        """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """
556        outcomes = icepool.sorted_union(self, other)
557        return sum(((self.probability('<=', outcome) -
558                     other.probability('<=', outcome))**2
559                    for outcome in outcomes),
560                   start=Fraction(0, 1))
561
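The Kolmogorov–Smirnov statistic can be computed directly for two uniform dice given as face lists (hypothetical helpers, using exact fractions as above):

```python
from fractions import Fraction

def cdf_at(faces, outcome):
    """P(X <= outcome) for a uniform die given as a sequence of faces."""
    return Fraction(sum(f <= outcome for f in faces), len(faces))

def kolmogorov_smirnov(faces_a, faces_b):
    """Maximum absolute CDF difference over the union of outcomes."""
    outcomes = sorted(set(faces_a) | set(faces_b))
    return max(abs(cdf_at(faces_a, x) - cdf_at(faces_b, x)) for x in outcomes)
```

For d4 versus d6 the maximum gap occurs at outcome 4, giving 1/3.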
562    def median(self):
563        """The median, taking the mean in case of a tie.
564
565        This will fail if the outcomes do not support division;
566        in this case, use `median_low` or `median_high` instead.
567        """
568        return self.quantile(1, 2)
569
570    def median_low(self) -> T_co:
571        """The median, taking the lower in case of a tie."""
572        return self.quantile_low(1, 2)
573
574    def median_high(self) -> T_co:
575        """The median, taking the higher in case of a tie."""
576        return self.quantile_high(1, 2)
577
578    def quantile(self, n: int, d: int = 100):
579        """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.
580
581        This will fail if the outcomes do not support addition and division;
582        in this case, use `quantile_low` or `quantile_high` instead.
583        """
584        # Should support addition and division.
585        return (self.quantile_low(n, d) +
586                self.quantile_high(n, d)) / 2  # type: ignore
587
588    def quantile_low(self, n: int, d: int = 100) -> T_co:
589        """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
590        index = bisect.bisect_left(self.quantities('<='),
591                                   (n * self.denominator() + d - 1) // d)
592        if index >= len(self):
593            return self.max_outcome()
594        return self.outcomes()[index]
595
596    def quantile_high(self, n: int, d: int = 100) -> T_co:
597        """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
598        index = bisect.bisect_right(self.quantities('<='),
599                                    n * self.denominator() // d)
600        if index >= len(self):
601            return self.max_outcome()
602        return self.outcomes()[index]
603
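The low/high quantile search above reduces to a bisect on the CDF; a standalone sketch for parallel `outcomes`/`quantities` sequences (hypothetical helpers; clamping the index plays the role of returning `max_outcome()`):

```python
import bisect
from itertools import accumulate

def quantile_low(outcomes, quantities, n, d=100):
    """Outcome n/d of the way through the CDF, lesser in case of a tie."""
    cdf = list(accumulate(quantities))
    denominator = cdf[-1]
    index = bisect.bisect_left(cdf, (n * denominator + d - 1) // d)
    return outcomes[min(index, len(outcomes) - 1)]

def quantile_high(outcomes, quantities, n, d=100):
    """Outcome n/d of the way through the CDF, greater in case of a tie."""
    cdf = list(accumulate(quantities))
    denominator = cdf[-1]
    index = bisect.bisect_right(cdf, n * denominator // d)
    return outcomes[min(index, len(outcomes) - 1)]
```

For a d6, `quantile_low` at 1/2 gives 3 and `quantile_high` gives 4, so `median()` returns 3.5.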
604    @overload
605    def mean(self: 'Population[numbers.Rational]') -> Fraction:
606        ...
607
608    @overload
609    def mean(self: 'Population[float]') -> float:
610        ...
611
612    def mean(
613        self: 'Population[numbers.Rational] | Population[float]'
614    ) -> Fraction | float:
615        return try_fraction(
616            sum(outcome * quantity for outcome, quantity in self.items()),
617            self.denominator())
618
619    @overload
620    def variance(self: 'Population[numbers.Rational]') -> Fraction:
621        ...
622
623    @overload
624    def variance(self: 'Population[float]') -> float:
625        ...
626
627    def variance(
628        self: 'Population[numbers.Rational] | Population[float]'
629    ) -> Fraction | float:
630        """This is the population variance, not the sample variance."""
631        mean = self.mean()
632        mean_of_squares = try_fraction(
633            sum(quantity * outcome**2 for outcome, quantity in self.items()),
634            self.denominator())
635        return mean_of_squares - mean * mean
636
637    def standard_deviation(
638            self: 'Population[numbers.Rational] | Population[float]') -> float:
639        return math.sqrt(self.variance())
640
641    sd = standard_deviation
642
643    def standardized_moment(
644            self: 'Population[numbers.Rational] | Population[float]',
645            k: int) -> float:
646        sd = self.standard_deviation()
647        mean = self.mean()
648        ev = sum(p * (outcome - mean)**k  # type: ignore 
649                 for outcome, p in zip(self.outcomes(), self.probabilities()))
650        return ev / (sd**k)
651
652    def skewness(
653            self: 'Population[numbers.Rational] | Population[float]') -> float:
654        return self.standardized_moment(3)
655
656    def excess_kurtosis(
657            self: 'Population[numbers.Rational] | Population[float]') -> float:
658        return self.standardized_moment(4) - 3.0
659
660    def entropy(self, base: float = 2.0) -> float:
661        """The entropy of a random sample from this population.
662        
663        Args:
664            base: The logarithm base to use. Default is 2.0, which gives the 
665                entropy in bits.
666        """
667        return -sum(p * math.log(p, base)
668                    for p in self.probabilities() if p > 0.0)
669
670    # Joint statistics.
671
672    class _Marginals(Generic[C]):
673        """Helper class for implementing `marginals()`."""
674
675        _population: C
676
677        def __init__(self, population, /):
678            self._population = population
679
680        def __len__(self) -> int:
681            """The minimum len() of all outcomes."""
682            return min(len(x) for x in self._population.outcomes())
683
684        def __getitem__(self, dims: int | slice, /):
685            """Marginalizes the given dimensions."""
686            return self._population._unary_operator(operator.getitem, dims)
687
688        def __iter__(self) -> Iterator:
689            for i in range(len(self)):
690                yield self[i]
691
692        def __getattr__(self, key: str):
693            if key[0] == '_':
694                raise AttributeError(key)
695            return self._population._unary_operator(operator.attrgetter(key))
696
697    @property
698    def marginals(self: C) -> _Marginals[C]:
699        """A property that applies the `[]` operator to outcomes.
700
701        For example, `population.marginals[:2]` will marginalize the first two
702        elements of sequence outcomes.
703
704        Attributes that do not start with an underscore will also be forwarded.
705        For example, `population.marginals.x` will marginalize the `x` attribute
706        from e.g. `namedtuple` outcomes.
707        """
708        return Population._Marginals(self)
709
710    @overload
711    def covariance(self: 'Population[tuple[numbers.Rational, ...]]', i: int,
712                   j: int) -> Fraction:
713        ...
714
715    @overload
716    def covariance(self: 'Population[tuple[float, ...]]', i: int,
717                   j: int) -> float:
718        ...
719
720    def covariance(
721            self:
722        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
723            i: int, j: int) -> Fraction | float:
724        mean_i = self.marginals[i].mean()
725        mean_j = self.marginals[j].mean()
726        return try_fraction(
727            sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity
728                for outcome, quantity in self.items()), self.denominator())
729
730    def correlation(
731            self:
732        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
733            i: int, j: int) -> float:
734        sd_i = self.marginals[i].standard_deviation()
735        sd_j = self.marginals[j].standard_deviation()
736        return self.covariance(i, j) / (sd_i * sd_j)
737
738    # Transformations.
739
740    def _select_outcomes(self, which: Callable[..., bool] | Collection[T_co],
741                         star: bool | None) -> Set[T_co]:
742        """Returns a set of outcomes of self that fit the given condition."""
743        if callable(which):
744            if star is None:
745                star = infer_star(which)
746            if star:
747                # Need TypeVarTuple to check this.
748                return {
749                    outcome
750                    for outcome in self.outcomes()
751                    if which(*outcome)  # type: ignore
752                }
753            else:
754                return {
755                    outcome
756                    for outcome in self.outcomes() if which(outcome)
757                }
758        else:
759            # Collection.
760            return set(outcome for outcome in self.outcomes()
761                       if outcome in which)
762
763    def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C:
764        """Converts the outcomes of this population to a one-hot representation.
765
766        Args:
767            outcomes: If provided, each outcome will be mapped to a `Vector`
768                where the element at `outcomes.index(outcome)` is set to `True`
769                and the rest to `False`, or all `False` if the outcome is not
770                in `outcomes`.
771                If not provided, `self.outcomes()` is used.
772        """
773        if outcomes is None:
774            outcomes = self.outcomes()
775
776        data: MutableMapping[Vector[bool], int] = defaultdict(int)
777        for outcome, quantity in zip(self.outcomes(), self.quantities()):
778            value = [False] * len(outcomes)
779            if outcome in outcomes:
780                value[outcomes.index(outcome)] = True
781            data[Vector(value)] += quantity
782        return self._new_type(data)
783
784    def split(self,
785              outcomes: Callable[..., bool] | Collection[T_co],
786              /,
787              *,
788              star: bool | None = None) -> tuple[C, C]:
789        """Splits this population into one containing selected items and another containing the rest.
790
791        The sum of the denominators of the results is equal to the denominator
792        of this population.
793
794        If you want to split more than two ways, use `Population.group_by()`.
795
796        Args:
797            outcomes: Specifies which outcomes to select. Options:
798                * A callable that takes an outcome and returns `True` if it
799                    should be selected.
800                * A collection of outcomes to select.
801            star: Whether outcomes should be unpacked into separate arguments
802                before sending them to a callable `outcomes`.
803                If not provided, this will be guessed based on the function
804                signature.
805
806        Returns:
807            A population consisting of the outcomes that were selected, and a
808            population consisting of the unselected outcomes.
809        """
810        outcome_set = self._select_outcomes(outcomes, star)
811
812        selected = {}
813        not_selected = {}
814        for outcome, count in self.items():
815            if outcome in outcome_set:
816                selected[outcome] = count
817            else:
818                not_selected[outcome] = count
819
820        return self._new_type(selected), self._new_type(not_selected)
821
822    class _GroupBy(Generic[C]):
823        """Helper class for implementing `group_by()`."""
824
825        _population: C
826
827        def __init__(self, population, /):
828            self._population = population
829
830        def __call__(self,
831                     key_map: Callable[..., U] | Mapping[T_co, U],
832                     /,
833                     *,
834                     star: bool | None = None) -> Mapping[U, C]:
835            if callable(key_map):
836                if star is None:
837                    star = infer_star(key_map)
838                if star:
839                    key_function = lambda o: key_map(*o)
840                else:
841                    key_function = key_map
842            else:
843                key_function = lambda o: key_map.get(o, o)
844
845            result_datas: MutableMapping[U, MutableMapping[Any, int]] = {}
846            outcome: Any
847            for outcome, quantity in self._population.items():
848                key = key_function(outcome)
849                if key not in result_datas:
850                    result_datas[key] = defaultdict(int)
851                result_datas[key][outcome] += quantity
852            return {
853                k: self._population._new_type(v)
854                for k, v in result_datas.items()
855            }
856
857        def __getitem__(self, dims: int | slice, /):
858            """Marginalizes the given dimensions."""
859            return self(lambda x: x[dims])
860
861        def __getattr__(self, key: str):
862            if key[0] == '_':
863                raise AttributeError(key)
864            return self(lambda x: getattr(x, key))
865
866    @property
867    def group_by(self: C) -> _GroupBy[C]:
868        """A method-like property that splits this population into sub-populations based on a key function.
869        
870        The sum of the denominators of the results is equal to the denominator
871        of this population.
872
873        This can be useful when using the law of total probability.
874
875        Example: `d10.group_by(lambda x: x % 3)` is
876        ```python
877        {
878            0: Die([3, 6, 9]),
879            1: Die([1, 4, 7, 10]),
880            2: Die([2, 5, 8]),
881        }
882        ```
883
884        You can also use brackets to group by indexes or slices; or attributes
885        to group by those. Example:
886
887        ```python
888        Die([
889            'aardvark',
890            'alligator',
891            'asp',
892            'blowfish',
893            'cat',
894            'crocodile',
895        ]).group_by[0]
896        ```
897
898        produces
899
900        ```python
901        {
902            'a': Die(['aardvark', 'alligator', 'asp']),
903            'b': Die(['blowfish']),
904            'c': Die(['cat', 'crocodile']),
905        }
906        ```
907
908        Args:
909            key_map: A function or mapping that takes outcomes and produces the
910                key of the corresponding outcome in the result. If this is
911                a Mapping, outcomes not in the mapping are their own key.
912            star: Whether outcomes should be unpacked into separate arguments
913                before sending them to a callable `key_map`.
914                If not provided, this will be guessed based on the function
915                signature.
916        """
917        return Population._GroupBy(self)
918
919    def sample(self) -> T_co:
920        """A single random sample from this population.
921
922        Note that this is always "with replacement" even for `Deck` since
923        instances are immutable.
924
925        This uses the standard `random` package and is not cryptographically
926        secure.
927        """
928        # We don't use random.choices since that is based on floats rather than ints.
929        r = random.randrange(self.denominator())
930        index = bisect.bisect_right(self.quantities('<='), r)
931        return self.outcomes()[index]
932
933    def format(self, format_spec: str, /, **kwargs) -> str:
934        """Formats this mapping as a string.
935
936        `format_spec` should start with the output format,
937        which can be:
938        * `md` for Markdown (default)
939        * `bbcode` for BBCode
940        * `csv` for comma-separated values
941        * `html` for HTML
942
943        After this, you may optionally add a `:` followed by a series of
944        requested columns. Allowed columns are:
945
946        * `o`: Outcomes.
947        * `*o`: Outcomes, unpacked if applicable.
948        * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
949        * `p==`, `p<=`, `p>=`: Probabilities (0-1).
950        * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
951        * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".
952
953        Columns may optionally be separated using `|` characters.
954
955        The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
956        columns are the outcomes (unpacked if applicable), the quantities,
957        and the probabilities. The quantities are omitted from the default
958        columns if any individual quantity is 10**30 or greater.
959        """
960        if not self.is_empty() and self.modal_quantity() < 10**30:
961            default_column_spec = '*oq==%=='
962        else:
963            default_column_spec = '*o%=='
964        if len(format_spec) == 0:
965            format_spec = 'md:' + default_column_spec
966
967        format_spec = format_spec.replace('|', '')
968
969        parts = format_spec.split(':')
970
971        if len(parts) == 1:
972            output_format = parts[0]
973            col_spec = default_column_spec
974        elif len(parts) == 2:
975            output_format = parts[0]
976            col_spec = parts[1]
977        else:
978            raise ValueError('format_spec has too many colons.')
979
980        match output_format:
981            case 'md':
982                return icepool.population.format.markdown(self, col_spec)
983            case 'bbcode':
984                return icepool.population.format.bbcode(self, col_spec)
985            case 'csv':
986                return icepool.population.format.csv(self, col_spec, **kwargs)
987            case 'html':
988                return icepool.population.format.html(self, col_spec)
989            case _:
990                raise ValueError(
991                    f"Unsupported output format '{output_format}'")
992
993    def __format__(self, format_spec: str, /) -> str:
994        return self.format(format_spec)
995
996    def __str__(self) -> str:
997        return f'{self}'

A mapping from outcomes to int quantities.

Outcomes within each instance must be hashable and totally orderable.

Subclasses include Die and Deck.
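The statistics in the source listing above (`mean`, `variance`, `entropy`) reduce to elementary arithmetic over the outcome-to-quantity mapping. A standalone sketch, not using icepool itself, over a hypothetical d6 with unit quantities:

```python
from fractions import Fraction
import math

# Hypothetical quantity mapping for a standard d6: each outcome has quantity 1.
quantities = {outcome: 1 for outcome in range(1, 7)}
denominator = sum(quantities.values())  # 6

# Mean and variance, kept exact with Fraction (mirrors mean()/variance()).
mean = Fraction(sum(o * q for o, q in quantities.items()), denominator)
mean_of_squares = Fraction(sum(q * o * o for o, q in quantities.items()),
                           denominator)
variance = mean_of_squares - mean * mean

# entropy() in bits: -sum(p * log2(p)) over the probabilities.
entropy = -sum((q / denominator) * math.log2(q / denominator)
               for q in quantities.values())

print(mean, variance, round(entropy, 3))  # 7/2 35/12 2.585
```

Keeping quantities as integers and probabilities as `Fraction`s is what lets the real methods stay exact until a `float` is explicitly requested.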

@abstractmethod
def keys(self) -> CountsKeysView[T_co]:
44    @abstractmethod
45    def keys(self) -> CountsKeysView[T_co]:
46        """The outcomes within the population in sorted order."""

@abstractmethod
def values(self) -> CountsValuesView:
48    @abstractmethod
49    def values(self) -> CountsValuesView:
50        """The quantities within the population in outcome order."""

@abstractmethod
def items(self) -> CountsItemsView[T_co]:
52    @abstractmethod
53    def items(self) -> CountsItemsView[T_co]:
54        """The (outcome, quantity)s of the population in sorted order."""

def outcomes(self) -> CountsKeysView[T_co]:
69    def outcomes(self) -> CountsKeysView[T_co]:
70        """The outcomes of the mapping in ascending order.
71
72        These are also the `keys` of the mapping.
73        Prefer to use the name `outcomes`.
74        """
75        return self.keys()


def common_outcome_length(self) -> int | None:
90    def common_outcome_length(self) -> int | None:
91        """The common length of all outcomes.
92
93        If outcomes have no lengths or different lengths, the result is `None`.
94        """
95        return self._common_outcome_length

def is_empty(self) -> bool:
97    def is_empty(self) -> bool:
98        """`True` iff this population has no outcomes. """
99        return len(self) == 0

def min_outcome(self) -> T_co:
101    def min_outcome(self) -> T_co:
102        """The least outcome."""
103        return self.outcomes()[0]

def max_outcome(self) -> T_co:
105    def max_outcome(self) -> T_co:
106        """The greatest outcome."""
107        return self.outcomes()[-1]

def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome, /) -> T_co | None:
109    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
110                /) -> T_co | None:
111        """The nearest outcome in this population fitting the comparison.
112
113        Args:
114            comparison: The comparison which the result must fit. For example,
115                '<=' would find the greatest outcome that is not greater than
116                the argument.
117            outcome: The outcome to compare against.
118        
119        Returns:
120            The nearest outcome fitting the comparison, or `None` if there is
121            no such outcome.
122        """
123        match comparison:
124            case '<=':
125                if outcome in self:
126                    return outcome
127                index = bisect.bisect_right(self.outcomes(), outcome) - 1
128                if index < 0:
129                    return None
130                return self.outcomes()[index]
131            case '<':
132                index = bisect.bisect_left(self.outcomes(), outcome) - 1
133                if index < 0:
134                    return None
135                return self.outcomes()[index]
136            case '>=':
137                if outcome in self:
138                    return outcome
139                index = bisect.bisect_left(self.outcomes(), outcome)
140                if index >= len(self):
141                    return None
142                return self.outcomes()[index]
143            case '>':
144                index = bisect.bisect_right(self.outcomes(), outcome)
145                if index >= len(self):
146                    return None
147                return self.outcomes()[index]
148            case _:
149                raise ValueError(f'Invalid comparison {comparison}')

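The four comparison branches in `nearest` are plain `bisect` lookups over the sorted outcomes. A standalone sketch, independent of icepool, over a hypothetical outcome list:

```python
import bisect

outcomes = [1, 3, 6]  # sorted outcomes of a hypothetical population


def nearest(comparison: str, outcome):
    """Mirror of the '<=', '<', '>=', '>' branches above."""
    if comparison == '<=':
        if outcome in outcomes:
            return outcome
        index = bisect.bisect_right(outcomes, outcome) - 1
        return outcomes[index] if index >= 0 else None
    if comparison == '<':
        index = bisect.bisect_left(outcomes, outcome) - 1
        return outcomes[index] if index >= 0 else None
    if comparison == '>=':
        if outcome in outcomes:
            return outcome
        index = bisect.bisect_left(outcomes, outcome)
        return outcomes[index] if index < len(outcomes) else None
    if comparison == '>':
        index = bisect.bisect_right(outcomes, outcome)
        return outcomes[index] if index < len(outcomes) else None
    raise ValueError(f'Invalid comparison {comparison}')


print(nearest('<=', 4), nearest('>', 4), nearest('<', 1))  # 3 6 None
```

The strict comparisons skip the membership check because `bisect_left`/`bisect_right` already exclude or include equal elements as needed.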
def zero(self: C) -> C:
155    def zero(self: C) -> C:
156        """Zeros all outcomes of this population.
157
158        This is done by multiplying all outcomes by `0`.
159
160        The result will have the same denominator.
161
162        Raises:
163            ValueError: If the zeros did not resolve to a single outcome.
164        """
165        result = self._unary_operator(Population._zero)
166        if len(result) != 1:
167            raise ValueError('zero() did not resolve to a single outcome.')
168        return result

def zero_outcome(self) -> T_co:
170    def zero_outcome(self) -> T_co:
171        """A zero-outcome for this population.
172
173        E.g. `0` for a `Population` whose outcomes are `int`s.
174        """
175        return self.zero().outcomes()[0]

def quantity(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'] | Hashable, outcome: Hashable | None = None, /) -> int:
188    def quantity(self,
189                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
190                 | Hashable,
191                 outcome: Hashable | None = None,
192                 /) -> int:
193        """The quantity of a single outcome.
194
195        A comparison can be provided, in which case this returns the total
196        quantity fitting the comparison.
197        
198        Args:
199            comparison: The comparison to use. This can be omitted, in which
200                case it is treated as '=='.
201            outcome: The outcome to query.
202        """
203        if outcome is None:
204            outcome = comparison
205            comparison = '=='
206        else:
207            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
208                              comparison)
209
210        match comparison:
211            case '==':
212                return self.get(outcome, 0)
213            case '!=':
214                return self.denominator() - self.get(outcome, 0)
215            case '<=' | '<':
216                threshold = self.nearest(comparison, outcome)
217                if threshold is None:
218                    return 0
219                else:
220                    return self._cumulative_quantities[threshold]
221            case '>=':
222                return self.denominator() - self.quantity('<', outcome)
223            case '>':
224                return self.denominator() - self.quantity('<=', outcome)
225            case _:
226                raise ValueError(f'Invalid comparison {comparison}')

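For a concrete sense of the comparisons, here is a standalone sketch (a plain dict, not the icepool API) of what `quantity` computes for a d6-like mapping:

```python
quantities = {outcome: 1 for outcome in range(1, 7)}  # hypothetical d6
denominator = sum(quantities.values())  # 6

q_eq = quantities.get(4, 0)                             # quantity('==', 4)
q_le = sum(q for o, q in quantities.items() if o <= 4)  # quantity('<=', 4)
q_gt = denominator - q_le                               # quantity('>', 4)

print(q_eq, q_le, q_gt)  # 1 4 2
```

The real method avoids the linear sum by bisecting into cached cumulative quantities, but the identity `quantity('>') == denominator - quantity('<=')` is the same one used in the source above.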
def quantities(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'] | None = None, /) -> CountsValuesView | Sequence[int]:
247    def quantities(self,
248                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
249                   | None = None,
250                   /) -> CountsValuesView | Sequence[int]:
251        """The quantities of the mapping in sorted order.
252
253        For example, '<=' gives the CDF.
254
255        Args:
256            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
257                May be omitted, in which case equality `'=='` is used.
258        """
259        if comparison is None:
260            comparison = '=='
261
262        match comparison:
263            case '==':
264                return self.values()
265            case '<=':
266                return tuple(itertools.accumulate(self.values()))
267            case '>=':
268                return tuple(
269                    itertools.accumulate(self.values()[:-1],
270                                         operator.sub,
271                                         initial=self.denominator()))
272            case '!=':
273                return tuple(self.denominator() - q for q in self.values())
274            case '<':
275                return tuple(self.denominator() - q
276                             for q in self.quantities('>='))
277            case '>':
278                return tuple(self.denominator() - q
279                             for q in self.quantities('<='))
280            case _:
281                raise ValueError(f'Invalid comparison {comparison}')

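The `'<='` and `'>='` cases above are a forward accumulation and a reverse accumulation by repeated subtraction from the denominator. A standalone sketch of both:

```python
import itertools
import operator

values = (1, 2, 3)         # quantities of three outcomes in sorted order
denominator = sum(values)  # 6

# quantities('<='): running total, i.e. the CDF in quantity form.
cdf = tuple(itertools.accumulate(values))

# quantities('>='): start from the denominator and subtract each quantity
# as we pass it, mirroring accumulate(..., operator.sub, initial=...) above.
ccdf = tuple(itertools.accumulate(values[:-1], operator.sub,
                                  initial=denominator))

print(cdf, ccdf)  # (1, 3, 6) (6, 5, 3)
```

Dropping the last element before the `'>='` accumulation keeps both sequences the same length as the outcomes.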
def denominator(self) -> int:
296    def denominator(self) -> int:
297        """The sum of all quantities (e.g. weights or duplicates).
298
299        For the number of unique outcomes, use `len()`.
300        """
301        return self._denominator

def multiply_quantities(self: C, scale: int, /) -> C:
303    def multiply_quantities(self: C, scale: int, /) -> C:
304        """Multiplies all quantities by an integer."""
305        if scale == 1:
306            return self
307        data = {
308            outcome: quantity * scale
309            for outcome, quantity in self.items()
310        }
311        return self._new_type(data)

def divide_quantities(self: C, divisor: int, /) -> C:
313    def divide_quantities(self: C, divisor: int, /) -> C:
314        """Divides all quantities by an integer, rounding down.
315        
316        Resulting zero quantities are dropped.
317        """
318        if divisor == 0:
319            return self
320        data = {
321            outcome: quantity // divisor
322            for outcome, quantity in self.items() if quantity >= divisor
323        }
324        return self._new_type(data)

def modulo_quantities(self: C, divisor: int, /) -> C:
326    def modulo_quantities(self: C, divisor: int, /) -> C:
327        """Modulus of all quantities with an integer."""
328        data = {
329            outcome: quantity % divisor
330            for outcome, quantity in self.items()
331        }
332        return self._new_type(data)

def pad_to_denominator(self: C, denominator: int, /, outcome: Hashable) -> C:
334    def pad_to_denominator(self: C, denominator: int, /,
335                           outcome: Hashable) -> C:
336        """Changes the denominator to a target number by changing the quantity of a specified outcome.
337        
338        Args:
339            denominator: The denominator of the result.
340            outcome: The outcome whose quantity will be adjusted.
341
342        Returns:
343            A `Population` like `self` but with the quantity of `outcome`
344            adjusted so that the overall denominator equals `denominator`.
345            If the quantity is reduced to zero, the outcome will be removed.
346
347        Raises:
348            `ValueError` if this would require the quantity of the specified
349            outcome to be negative.
350        """
351        adjustment = denominator - self.denominator()
352        data = {outcome: quantity for outcome, quantity in self.items()}
353        new_quantity = data.get(outcome, 0) + adjustment
354        if new_quantity > 0:
355            data[outcome] = new_quantity
356        elif new_quantity == 0:
357            del data[outcome]
358        else:
359            raise ValueError(
360                f'Padding to denominator of {denominator} would require a negative quantity of {new_quantity} for {outcome}'
361            )
362        return self._new_type(data)

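The padding logic is a single adjustment to one outcome's quantity. A standalone sketch over a plain dict (the function here is illustrative, not the icepool API):

```python
def pad_to_denominator(data: dict, denominator: int, outcome) -> dict:
    """Adjust `outcome`'s quantity so the total equals `denominator`."""
    data = dict(data)
    adjustment = denominator - sum(data.values())
    new_quantity = data.get(outcome, 0) + adjustment
    if new_quantity > 0:
        data[outcome] = new_quantity
    elif new_quantity == 0:
        data.pop(outcome, None)  # a zero quantity is dropped entirely
    else:
        raise ValueError('would require a negative quantity')
    return data


print(pad_to_denominator({1: 1, 2: 1}, 6, 0))  # {1: 1, 2: 1, 0: 4}
```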
def multiply_to_denominator(self: C, denominator: int, /) -> C:
364    def multiply_to_denominator(self: C, denominator: int, /) -> C:
365        """Multiplies all quantities to reach the target denominator.
366        
367        Raises:
368            ValueError if this cannot be achieved using an integer scaling.
369        """
370        if denominator % self.denominator():
371            raise ValueError(
372                'Target denominator is not an integer factor of the current denominator.'
373            )
374        return self.multiply_quantities(denominator // self.denominator())

def append(self: C, outcome, quantity: int = 1, /) -> C:
376    def append(self: C, outcome, quantity: int = 1, /) -> C:
377        """This population with an outcome appended.
378        
379        Args:
380            outcome: The outcome to append.
381            quantity: The quantity of the outcome to append. Can be negative,
382                which removes quantity (but not below zero).
383        """
384        data = Counter(self)
385        data[outcome] = max(data[outcome] + quantity, 0)
386        return self._new_type(data)

def remove(self: C, outcome, quantity: int | None = None, /) -> C:
388    def remove(self: C, outcome, quantity: int | None = None, /) -> C:
389        """This population with an outcome removed.
390
391        Args:
392            outcome: The outcome to remove.
393            quantity: The quantity of the outcome to remove. If not set, all
394                quantity of that outcome is removed. Can be negative, which adds
395                quantity instead.
396        """
397        if quantity is None:
398            data = Counter(self)
399            data[outcome] = 0
400            return self._new_type(data)
401        else:
402            return self.append(outcome, -quantity)

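Both methods are thin wrappers over a `Counter` update, with negative results clamped at zero. A standalone sketch of the same bookkeeping:

```python
from collections import Counter

data = Counter({'heads': 1, 'tails': 1})

# append('tails', 2): add quantity, clamped at zero from below.
data['tails'] = max(data['tails'] + 2, 0)

# remove('tails', 1) is implemented as append('tails', -1).
data['tails'] = max(data['tails'] - 1, 0)

# remove('heads') with no quantity zeroes the outcome out entirely.
data['heads'] = 0

print(dict(data))  # {'heads': 0, 'tails': 2}
```

Since instances are immutable, the real methods build a new population from the adjusted mapping rather than mutating in place.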
def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'] | Hashable, outcome: Hashable | None = None, /, *, percent: bool = False) -> Fraction | float:
438    def probability(self,
439                    comparison: Literal['==', '!=', '<=', '<', '>=', '>']
440                    | Hashable,
441                    outcome: Hashable | None = None,
442                    /,
443                    *,
444                    percent: bool = False) -> Fraction | float:
445        """The total probability of outcomes fitting a comparison.
446        
447        Args:
448            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
449                May be omitted, in which case equality `'=='` is used.
450            outcome: The outcome to compare to.
451            percent: If set, the result will be a percentage expressed as a
452                `float`. Otherwise, the result is a `Fraction`.
453        """
454        if outcome is None:
455            outcome = comparison
456            comparison = '=='
457        else:
458            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
459                              comparison)
460        result = Fraction(self.quantity(comparison, outcome),
461                          self.denominator())
462        return result * 100.0 if percent else result

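A probability here is just the matching quantity over the denominator, kept exact as a `Fraction`. A standalone sketch with a hypothetical weighted die:

```python
from fractions import Fraction

quantities = {1: 1, 2: 2, 3: 3}  # hypothetical weighted die
denominator = sum(quantities.values())  # 6

# probability('<=', 2): total quantity fitting the comparison / denominator.
p = Fraction(sum(q for o, q in quantities.items() if o <= 2), denominator)

print(p, float(p * 100))  # 1/2 50.0
```

With `percent=True` the real method multiplies by `100.0`, converting the exact `Fraction` into a `float` percentage.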
The total probability of outcomes fitting a comparison.

Arguments:
  • comparison: One of '==', '!=', '<=', '<', '>=', '>'. May be omitted, in which case equality '==' is used.
  • outcome: The outcome to compare to.
  • percent: If set, the result will be a percentage expressed as a float. Otherwise, the result is a Fraction.
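
The computation reduces to a ratio of quantities over the denominator. A minimal stdlib sketch, using a plain outcome-to-quantity `dict` in place of a `Population` (the `probability_le` helper and the `d6` dict are illustrative, not icepool API):

```python
from fractions import Fraction

# A fair d6 as an outcome -> quantity mapping (denominator = 6).
d6 = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}

def probability_le(population: dict, outcome) -> Fraction:
    """Total probability of outcomes <= `outcome` (the '<=' comparison)."""
    quantity = sum(q for o, q in population.items() if o <= outcome)
    return Fraction(quantity, sum(population.values()))

print(probability_le(d6, 2))  # -> 1/3
```

Multiplying the resulting `Fraction` by `100.0` yields the `float` percentage form, matching the `percent=True` behavior.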
def probabilities( self, comparison: Optional[Literal['==', '!=', '<=', '<', '>=', '>']] = None, /, *, percent: bool = False) -> Union[Sequence[fractions.Fraction], Sequence[float]]:
503    def probabilities(
504            self,
505            comparison: Literal['==', '!=', '<=', '<', '>=', '>']
506        | None = None,
507            /,
508            *,
509            percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
510        """The total probabilities fitting the comparison for each outcome in sorted order.
511        
512        For example, '<=' gives the CDF.
513
514        Args:
515            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
516                May be omitted, in which case equality `'=='` is used.
517            percent: If set, the result will be a percentage expressed as a
518                `float`. Otherwise, the result is a `Fraction`.
519        """
520        if comparison is None:
521            comparison = '=='
522
523        result = tuple(
524            Fraction(q, self.denominator())
525            for q in self.quantities(comparison))
526
527        if percent:
528            return tuple(100.0 * x for x in result)
529        else:
530            return result

The total probabilities fitting the comparison for each outcome in sorted order.

For example, '<=' gives the CDF.

Arguments:
  • comparison: One of '==', '!=', '<=', '<', '>=', '>'. May be omitted, in which case equality '==' is used.
  • percent: If set, the result will be a percentage expressed as a float. Otherwise, the result is a Fraction.
def mode(self) -> tuple:
534    def mode(self) -> tuple:
535        """A tuple containing the most common outcome(s) of the population.
536
537        These are sorted from lowest to highest.
538        """
539        return tuple(outcome for outcome, quantity in self.items()
540                     if quantity == self.modal_quantity())

A tuple containing the most common outcome(s) of the population.

These are sorted from lowest to highest.

def modal_quantity(self) -> int:
542    def modal_quantity(self) -> int:
543        """The highest quantity of any single outcome. """
544        return max(self.quantities())

The highest quantity of any single outcome.

def kolmogorov_smirnov(self, other: Population) -> fractions.Fraction:
546    def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
547        """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """
548        outcomes = icepool.sorted_union(self, other)
549        return max(
550            abs(
551                self.probability('<=', outcome) -
552                other.probability('<=', outcome)) for outcome in outcomes)

Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs.
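
As a sketch of the statistic, again over plain outcome-to-quantity dicts (the `cdf` helper is hypothetical, standing in for `probability('<=', ...)`):

```python
from fractions import Fraction

def cdf(population: dict, outcome) -> Fraction:
    """P(X <= outcome) for an outcome -> quantity mapping."""
    return Fraction(sum(q for o, q in population.items() if o <= outcome),
                    sum(population.values()))

def kolmogorov_smirnov(a: dict, b: dict) -> Fraction:
    """Maximum absolute difference between the two CDFs."""
    outcomes = sorted(set(a) | set(b))
    return max(abs(cdf(a, o) - cdf(b, o)) for o in outcomes)

d4 = {k: 1 for k in range(1, 5)}
d6 = {k: 1 for k in range(1, 7)}
print(kolmogorov_smirnov(d4, d6))  # -> 1/3 (largest gap is at outcome 4)
```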

def cramer_von_mises(self, other: Population) -> fractions.Fraction:
554    def cramer_von_mises(self, other: 'Population') -> Fraction:
555        """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """
556        outcomes = icepool.sorted_union(self, other)
557        return sum(((self.probability('<=', outcome) -
558                     other.probability('<=', outcome))**2
559                    for outcome in outcomes),
560                   start=Fraction(0, 1))

Cramér-von Mises statistic. The sum-of-squares difference between CDFs.

def median(self):
562    def median(self):
563        """The median, taking the mean in case of a tie.
564
565        This will fail if the outcomes do not support division;
566        in this case, use `median_low` or `median_high` instead.
567        """
568        return self.quantile(1, 2)

The median, taking the mean in case of a tie.

This will fail if the outcomes do not support division; in this case, use median_low or median_high instead.

def median_low(self) -> +T_co:
570    def median_low(self) -> T_co:
571        """The median, taking the lower in case of a tie."""
572        return self.quantile_low(1, 2)

The median, taking the lower in case of a tie.

def median_high(self) -> +T_co:
574    def median_high(self) -> T_co:
575        """The median, taking the higher in case of a tie."""
576        return self.quantile_high(1, 2)

The median, taking the higher in case of a tie.

def quantile(self, n: int, d: int = 100):
578    def quantile(self, n: int, d: int = 100):
579        """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.
580
581        This will fail if the outcomes do not support addition and division;
582        in this case, use `quantile_low` or `quantile_high` instead.
583        """
584        # Should support addition and division.
585        return (self.quantile_low(n, d) +
586                self.quantile_high(n, d)) / 2  # type: ignore

The outcome n / d of the way through the CDF, taking the mean in case of a tie.

This will fail if the outcomes do not support addition and division; in this case, use quantile_low or quantile_high instead.

def quantile_low(self, n: int, d: int = 100) -> +T_co:
588    def quantile_low(self, n: int, d: int = 100) -> T_co:
589        """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
590        index = bisect.bisect_left(self.quantities('<='),
591                                   (n * self.denominator() + d - 1) // d)
592        if index >= len(self):
593            return self.max_outcome()
594        return self.outcomes()[index]

The outcome n / d of the way through the CDF, taking the lesser in case of a tie.

def quantile_high(self, n: int, d: int = 100) -> +T_co:
596    def quantile_high(self, n: int, d: int = 100) -> T_co:
597        """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
598        index = bisect.bisect_right(self.quantities('<='),
599                                    n * self.denominator() // d)
600        if index >= len(self):
601            return self.max_outcome()
602        return self.outcomes()[index]

The outcome n / d of the way through the CDF, taking the greater in case of a tie.
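
The tie-breaking pair can be sketched with `bisect` over cumulative quantities, mirroring the source above; the helper names and plain-dict representation are illustrative only:

```python
import bisect
from itertools import accumulate

def quantile_low(population: dict, n: int, d: int = 100):
    """Outcome n/d of the way through the CDF, lower tie-break."""
    outcomes = sorted(population)
    cumulative = list(accumulate(population[o] for o in outcomes))
    denominator = cumulative[-1]
    index = bisect.bisect_left(cumulative, (n * denominator + d - 1) // d)
    return outcomes[min(index, len(outcomes) - 1)]

def quantile_high(population: dict, n: int, d: int = 100):
    """Outcome n/d of the way through the CDF, higher tie-break."""
    outcomes = sorted(population)
    cumulative = list(accumulate(population[o] for o in outcomes))
    denominator = cumulative[-1]
    index = bisect.bisect_right(cumulative, n * denominator // d)
    return outcomes[min(index, len(outcomes) - 1)]

d6 = {k: 1 for k in range(1, 7)}
print(quantile_low(d6, 1, 2), quantile_high(d6, 1, 2))  # -> 3 4
```

Averaging the two results, 3 and 4, gives the `median()` of 3.5 for a d6.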

def mean( self: Union[Population[numbers.Rational], Population[float]]) -> fractions.Fraction | float:
612    def mean(
613        self: 'Population[numbers.Rational] | Population[float]'
614    ) -> Fraction | float:
615        return try_fraction(
616            sum(outcome * quantity for outcome, quantity in self.items()),
617            self.denominator())
def variance( self: Union[Population[numbers.Rational], Population[float]]) -> fractions.Fraction | float:
627    def variance(
628        self: 'Population[numbers.Rational] | Population[float]'
629    ) -> Fraction | float:
630        """This is the population variance, not the sample variance."""
631        mean = self.mean()
632        mean_of_squares = try_fraction(
633            sum(quantity * outcome**2 for outcome, quantity in self.items()),
634            self.denominator())
635        return mean_of_squares - mean * mean

This is the population variance, not the sample variance.
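
A sketch of the same computation, E[X**2] - E[X]**2 with exact fractions, over a plain outcome-to-quantity dict (helper names illustrative):

```python
from fractions import Fraction

def mean(population: dict) -> Fraction:
    total = sum(population.values())
    return Fraction(sum(o * q for o, q in population.items()), total)

def variance(population: dict) -> Fraction:
    """Population variance: mean of squares minus square of the mean."""
    total = sum(population.values())
    m = mean(population)
    mean_of_squares = Fraction(
        sum(q * o**2 for o, q in population.items()), total)
    return mean_of_squares - m * m

d6 = {k: 1 for k in range(1, 7)}
print(mean(d6), variance(d6))  # -> 7/2 35/12
```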

def standard_deviation( self: Union[Population[numbers.Rational], Population[float]]) -> float:
637    def standard_deviation(
638            self: 'Population[numbers.Rational] | Population[float]') -> float:
639        return math.sqrt(self.variance())
def sd( self: Union[Population[numbers.Rational], Population[float]]) -> float:
637    def standard_deviation(
638            self: 'Population[numbers.Rational] | Population[float]') -> float:
639        return math.sqrt(self.variance())
def standardized_moment( self: Union[Population[numbers.Rational], Population[float]], k: int) -> float:
643    def standardized_moment(
644            self: 'Population[numbers.Rational] | Population[float]',
645            k: int) -> float:
646        sd = self.standard_deviation()
647        mean = self.mean()
648        ev = sum(p * (outcome - mean)**k  # type: ignore 
649                 for outcome, p in zip(self.outcomes(), self.probabilities()))
650        return ev / (sd**k)
def skewness( self: Union[Population[numbers.Rational], Population[float]]) -> float:
652    def skewness(
653            self: 'Population[numbers.Rational] | Population[float]') -> float:
654        return self.standardized_moment(3)
def excess_kurtosis( self: Union[Population[numbers.Rational], Population[float]]) -> float:
656    def excess_kurtosis(
657            self: 'Population[numbers.Rational] | Population[float]') -> float:
658        return self.standardized_moment(4) - 3.0
def entropy(self, base: float = 2.0) -> float:
660    def entropy(self, base: float = 2.0) -> float:
661        """The entropy of a random sample from this population.
662        
663        Args:
664            base: The logarithm base to use. Default is 2.0, which gives the 
665                entropy in bits.
666        """
667        return -sum(p * math.log(p, base)
668                    for p in self.probabilities() if p > 0.0)

The entropy of a random sample from this population.

Arguments:
  • base: The logarithm base to use. Default is 2.0, which gives the entropy in bits.
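
A stdlib sketch of the same formula over an outcome-to-quantity dict (helper name illustrative):

```python
import math

def entropy(population: dict, base: float = 2.0) -> float:
    """Shannon entropy of a single random sample, in units of log `base`."""
    denominator = sum(population.values())
    probabilities = (q / denominator for q in population.values())
    return -sum(p * math.log(p, base) for p in probabilities if p > 0.0)

d6 = {k: 1 for k in range(1, 7)}
print(entropy(d6))  # log2(6), about 2.585 bits
```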
marginals: icepool.population.base.Population._Marginals[~C]
697    @property
698    def marginals(self: C) -> _Marginals[C]:
699        """A property that applies the `[]` operator to outcomes.
700
701        For example, `population.marginals[:2]` will marginalize the first two
702        elements of sequence outcomes.
703
704        Attributes that do not start with an underscore will also be forwarded.
705        For example, `population.marginals.x` will marginalize the `x` attribute
706        from e.g. `namedtuple` outcomes.
707        """
708        return Population._Marginals(self)

A property that applies the [] operator to outcomes.

For example, population.marginals[:2] will marginalize the first two elements of sequence outcomes.

Attributes that do not start with an underscore will also be forwarded. For example, population.marginals.x will marginalize the x attribute from e.g. namedtuple outcomes.

def covariance( self: Union[Population[tuple[numbers.Rational, ...]], Population[tuple[float, ...]]], i: int, j: int) -> fractions.Fraction | float:
720    def covariance(
721            self:
722        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
723            i: int, j: int) -> Fraction | float:
724        mean_i = self.marginals[i].mean()
725        mean_j = self.marginals[j].mean()
726        return try_fraction(
727            sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity
728                for outcome, quantity in self.items()), self.denominator())
def correlation( self: Union[Population[tuple[numbers.Rational, ...]], Population[tuple[float, ...]]], i: int, j: int) -> float:
730    def correlation(
731            self:
732        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
733            i: int, j: int) -> float:
734        sd_i = self.marginals[i].standard_deviation()
735        sd_j = self.marginals[j].standard_deviation()
736        return self.covariance(i, j) / (sd_i * sd_j)
def to_one_hot(self: ~C, outcomes: Optional[Sequence[+T_co]] = None) -> ~C:
763    def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C:
764        """Converts the outcomes of this population to a one-hot representation.
765
766        Args:
767            outcomes: If provided, each outcome will be mapped to a `Vector`
768                where the element at `outcomes.index(outcome)` is set to `True`
769                and the rest to `False`, or all `False` if the outcome is not
770                in `outcomes`.
771                If not provided, `self.outcomes()` is used.
772        """
773        if outcomes is None:
774            outcomes = self.outcomes()
775
776        data: MutableMapping[Vector[bool], int] = defaultdict(int)
777        for outcome, quantity in zip(self.outcomes(), self.quantities()):
778            value = [False] * len(outcomes)
779            if outcome in outcomes:
780                value[outcomes.index(outcome)] = True
781            data[Vector(value)] += quantity
782        return self._new_type(data)

Converts the outcomes of this population to a one-hot representation.

Arguments:
  • outcomes: If provided, each outcome will be mapped to a Vector where the element at outcomes.index(outcome) is set to True and the rest to False, or all False if the outcome is not in outcomes. If not provided, self.outcomes() is used.
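
A sketch of the transformation using plain `tuple`s of `bool` in place of `Vector` (names illustrative):

```python
from collections import defaultdict

def to_one_hot(population: dict, outcomes=None) -> dict:
    """Map each outcome to a boolean tuple with True at its index."""
    if outcomes is None:
        outcomes = sorted(population)
    data: dict = defaultdict(int)
    for outcome, quantity in population.items():
        value = [False] * len(outcomes)
        if outcome in outcomes:
            value[outcomes.index(outcome)] = True
        data[tuple(value)] += quantity
    return dict(data)

coin = {'heads': 1, 'tails': 1}
print(to_one_hot(coin))  # -> {(True, False): 1, (False, True): 1}
```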
def split( self, outcomes: Union[Callable[..., bool], Collection[+T_co]], /, *, star: bool | None = None) -> tuple[~C, ~C]:
784    def split(self,
785              outcomes: Callable[..., bool] | Collection[T_co],
786              /,
787              *,
788              star: bool | None = None) -> tuple[C, C]:
789        """Splits this population into one containing selected items and another containing the rest.
790
791        The sum of the denominators of the results is equal to the denominator
792        of this population.
793
794        If you want to split more than two ways, use `Population.group_by()`.
795
796        Args:
797            outcomes: Which outcomes to select. Options:
798                * A callable that takes an outcome and returns `True` if it
799                    should be selected.
800                * A collection of outcomes to select.
801            star: Whether outcomes should be unpacked into separate arguments
802                before sending them to a callable `outcomes`.
803                If not provided, this will be guessed based on the function
804                signature.
805
806        Returns:
807            A population consisting of the outcomes that were selected by
808            `outcomes`, and a population consisting of the unselected
809        """
810        outcome_set = self._select_outcomes(outcomes, star)
811
812        selected = {}
813        not_selected = {}
814        for outcome, count in self.items():
815            if outcome in outcome_set:
816                selected[outcome] = count
817            else:
818                not_selected[outcome] = count
819
820        return self._new_type(selected), self._new_type(not_selected)

Splits this population into one containing selected items and another containing the rest.

The sum of the denominators of the results is equal to the denominator of this population.

If you want to split more than two ways, use Population.group_by().

Arguments:
  • outcomes: Which outcomes to select. Options:
    • A callable that takes an outcome and returns True if it should be selected.
    • A collection of outcomes to select.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable outcomes. If not provided, this will be guessed based on the function signature.
Returns:

A population consisting of the outcomes that were selected by outcomes, and a population consisting of the unselected outcomes.
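
A sketch of the two-way partition over a plain dict (helper name illustrative); note how the denominators of the two halves sum to the original:

```python
def split(population: dict, selector) -> tuple:
    """Partition by a predicate or a collection of outcomes."""
    if callable(selector):
        chosen = {o for o in population if selector(o)}
    else:
        chosen = set(selector)
    selected = {o: q for o, q in population.items() if o in chosen}
    rest = {o: q for o, q in population.items() if o not in chosen}
    return selected, rest

d6 = {k: 1 for k in range(1, 7)}
evens, odds = split(d6, lambda x: x % 2 == 0)
print(evens)  # -> {2: 1, 4: 1, 6: 1}
assert sum(evens.values()) + sum(odds.values()) == sum(d6.values())
```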

group_by: icepool.population.base.Population._GroupBy[~C]
866    @property
867    def group_by(self: C) -> _GroupBy[C]:
868        """A method-like property that splits this population into sub-populations based on a key function.
869        
870        The sum of the denominators of the results is equal to the denominator
871        of this population.
872
873        This can be useful when using the law of total probability.
874
875        Example: `d10.group_by(lambda x: x % 3)` is
876        ```python
877        {
878            0: Die([3, 6, 9]),
879            1: Die([1, 4, 7, 10]),
880            2: Die([2, 5, 8]),
881        }
882        ```
883
884        You can also use brackets to group by indexes or slices; or attributes
885        to group by those. Example:
886
887        ```python
888        Die([
889            'aardvark',
890            'alligator',
891            'asp',
892            'blowfish',
893            'cat',
894            'crocodile',
895        ]).group_by[0]
896        ```
897
898        produces
899
900        ```python
901        {
902            'a': Die(['aardvark', 'alligator', 'asp']),
903            'b': Die(['blowfish']),
904            'c': Die(['cat', 'crocodile']),
905        }
906        ```
907
908        Args:
909            key_map: A function or mapping that takes outcomes and produces the
910                key of the corresponding outcome in the result. If this is
911                a Mapping, outcomes not in the mapping are their own key.
912            star: Whether outcomes should be unpacked into separate arguments
913                before sending them to a callable `key_map`.
914                If not provided, this will be guessed based on the function
915                signature.
916        """
917        return Population._GroupBy(self)

A method-like property that splits this population into sub-populations based on a key function.

The sum of the denominators of the results is equal to the denominator of this population.

This can be useful when using the law of total probability.

Example: d10.group_by(lambda x: x % 3) is

{
    0: Die([3, 6, 9]),
    1: Die([1, 4, 7, 10]),
    2: Die([2, 5, 8]),
}

You can also use brackets to group by indexes or slices; or attributes to group by those. Example:

Die([
    'aardvark',
    'alligator',
    'asp',
    'blowfish',
    'cat',
    'crocodile',
]).group_by[0]

produces

{
    'a': Die(['aardvark', 'alligator', 'asp']),
    'b': Die(['blowfish']),
    'c': Die(['cat', 'crocodile']),
}

Arguments:
  • key_map: A function or mapping that takes outcomes and produces the key of the corresponding outcome in the result. If this is a Mapping, outcomes not in the mapping are their own key.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable key_map. If not provided, this will be guessed based on the function signature.
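
A sketch of the grouping over a plain dict, reproducing the `d10` example above (helper name illustrative):

```python
from collections import defaultdict

def group_by(population: dict, key) -> dict:
    """Split a population into sub-populations keyed by key(outcome)."""
    groups: dict = defaultdict(dict)
    for outcome, quantity in population.items():
        groups[key(outcome)][outcome] = quantity
    return dict(groups)

d10 = {k: 1 for k in range(1, 11)}
groups = group_by(d10, lambda x: x % 3)
print(groups[0])  # -> {3: 1, 6: 1, 9: 1}
```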
def sample(self) -> +T_co:
919    def sample(self) -> T_co:
920        """A single random sample from this population.
921
922        Note that this is always "with replacement" even for `Deck` since
923        instances are immutable.
924
925        This uses the standard `random` package and is not cryptographically
926        secure.
927        """
928        # We don't use random.choices since that is based on floats rather than ints.
929        r = random.randrange(self.denominator())
930        index = bisect.bisect_right(self.quantities('<='), r)
931        return self.outcomes()[index]

A single random sample from this population.

Note that this is always "with replacement" even for Deck since instances are immutable.

This uses the standard random package and is not cryptographically secure.
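
The integer-exact approach in the source (a `randrange` over the denominator, then a bisect into the cumulative quantities) can be sketched as follows; the dict representation and helper name are illustrative:

```python
import bisect
import random
from itertools import accumulate

def sample(population: dict):
    """Draw one outcome, weighted by exact integer quantities."""
    outcomes = sorted(population)
    cumulative = list(accumulate(population[o] for o in outcomes))
    r = random.randrange(cumulative[-1])  # uniform over the denominator
    return outcomes[bisect.bisect_right(cumulative, r)]

weighted = {'miss': 3, 'hit': 1}
counts = {o: 0 for o in weighted}
random.seed(0)
for _ in range(4000):
    counts[sample(weighted)] += 1
print(counts)  # roughly 3:1 in favor of 'miss'
```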

def format(self, format_spec: str, /, **kwargs) -> str:
933    def format(self, format_spec: str, /, **kwargs) -> str:
934        """Formats this mapping as a string.
935
936        `format_spec` should start with the output format,
937        which can be:
938        * `md` for Markdown (default)
939        * `bbcode` for BBCode
940        * `csv` for comma-separated values
941        * `html` for HTML
942
943        After this, you may optionally add a `:` followed by a series of
944        requested columns. Allowed columns are:
945
946        * `o`: Outcomes.
947        * `*o`: Outcomes, unpacked if applicable.
948        * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
949        * `p==`, `p<=`, `p>=`: Probabilities (0-1).
950        * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
951        * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".
952
953        Columns may optionally be separated using `|` characters.
954
955        The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the 
956        columns are the outcomes (unpacked if applicable), the quantities, and
957        the probabilities. The quantities are omitted from the default columns
958        if any individual quantity is 10**30 or greater.
959        """
960        if not self.is_empty() and self.modal_quantity() < 10**30:
961            default_column_spec = '*oq==%=='
962        else:
963            default_column_spec = '*o%=='
964        if len(format_spec) == 0:
965            format_spec = 'md:' + default_column_spec
966
967        format_spec = format_spec.replace('|', '')
968
969        parts = format_spec.split(':')
970
971        if len(parts) == 1:
972            output_format = parts[0]
973            col_spec = default_column_spec
974        elif len(parts) == 2:
975            output_format = parts[0]
976            col_spec = parts[1]
977        else:
978            raise ValueError('format_spec has too many colons.')
979
980        match output_format:
981            case 'md':
982                return icepool.population.format.markdown(self, col_spec)
983            case 'bbcode':
984                return icepool.population.format.bbcode(self, col_spec)
985            case 'csv':
986                return icepool.population.format.csv(self, col_spec, **kwargs)
987            case 'html':
988                return icepool.population.format.html(self, col_spec)
989            case _:
990                raise ValueError(
991                    f"Unsupported output format '{output_format}'")

Formats this mapping as a string.

format_spec should start with the output format, which can be:

  • md for Markdown (default)
  • bbcode for BBCode
  • csv for comma-separated values
  • html for HTML

After this, you may optionally add a : followed by a series of requested columns. Allowed columns are:

  • o: Outcomes.
  • *o: Outcomes, unpacked if applicable.
  • q==, q<=, q>=: Quantities ==, <=, or >= each outcome.
  • p==, p<=, p>=: Probabilities (0-1).
  • %==, %<=, %>=: Probabilities (0%-100%).
  • i==, i<=, i>=: EXPERIMENTAL: "1 in N".

Columns may optionally be separated using | characters.

The default setting is equal to f'{die:md:*o|q==|%==}'. Here the columns are the outcomes (unpacked if applicable), the quantities, and the probabilities. The quantities are omitted from the default columns if any individual quantity is 10**30 or greater.

def tupleize( *args: Union[~T, Population[~T], RerollType]) -> Union[tuple[~T, ...], Population[tuple[~T, ...]], RerollType]:
 93def tupleize(
 94    *args: 'T | icepool.Population[T] | icepool.RerollType'
 95) -> 'tuple[T, ...] | icepool.Population[tuple[T, ...]] | icepool.RerollType':
 96    """Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.
 97
 98    For example:
 99    * `tupleize(1, 2)` would produce `(1, 2)`.
100    * `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`,
101        ... `(6, 0)`.
102    * `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`,
103        ... `(6, 5)`, `(6, 6)`.
104
105    If `Population`s are provided, they must all be `Die` or all `Deck` and not
106    a mixture of the two.
107
108    If any argument is `icepool.Reroll`, the result is `icepool.Reroll`.
109
110    Returns:
111        If none of the outcomes is a `Population`, the result is a `tuple`
112        with one element per argument. Otherwise, the result is a `Population`
113        of the same type as the input `Population`, and the outcomes are
114        `tuple`s with one element per argument.
115    """
116    return cartesian_product(*args, outcome_type=tuple)

Returns the Cartesian product of the arguments as tuples or a Population thereof.

For example:

  • tupleize(1, 2) would produce (1, 2).
  • tupleize(d6, 0) would produce a Die with outcomes (1, 0), (2, 0), ... (6, 0).
  • tupleize(d6, d6) would produce a Die with outcomes (1, 1), (1, 2), ... (6, 5), (6, 6).

If Populations are provided, they must all be Die or all Deck and not a mixture of the two.

If any argument is icepool.Reroll, the result is icepool.Reroll.

Returns:

If none of the outcomes is a Population, the result is a tuple with one element per argument. Otherwise, the result is a Population of the same type as the input Population, and the outcomes are tuples with one element per argument.
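
A sketch of the weighted Cartesian product over plain dicts (the function name is hypothetical; the real `tupleize` also handles `icepool.Reroll` and preserves the `Population` type):

```python
from itertools import product

def tupleize_dice(*args) -> dict:
    """Cartesian product of plain values and outcome -> quantity mappings,
    producing a mapping from tuples to combined quantities."""
    # Treat a non-dict argument as a single certain outcome.
    pools = [a if isinstance(a, dict) else {a: 1} for a in args]
    result: dict = {}
    for combo in product(*(p.items() for p in pools)):
        outcome = tuple(o for o, _ in combo)
        quantity = 1
        for _, q in combo:
            quantity *= q  # quantities multiply across independent pools
        result[outcome] = result.get(outcome, 0) + quantity
    return result

d6 = {k: 1 for k in range(1, 7)}
print(tupleize_dice(1, 2))       # -> {(1, 2): 1}
print(len(tupleize_dice(d6, 0))) # -> 6, outcomes (1, 0) ... (6, 0)
```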

def vectorize( *args: Union[~T, Population[~T], RerollType]) -> Union[Vector[~T], Population[Vector[~T]], RerollType]:
119def vectorize(
120    *args: 'T | icepool.Population[T] | icepool.RerollType'
121) -> 'icepool.Vector[T] | icepool.Population[icepool.Vector[T]] | icepool.RerollType':
122    """Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.
123
124    For example:
125    * `vectorize(1, 2)` would produce `Vector(1, 2)`.
126    * `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`,
127        `Vector(2, 0)`, ... `Vector(6, 0)`.
128    * `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`,
129        `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.
130
131    If `Population`s are provided, they must all be `Die` or all `Deck` and not
132    a mixture of the two.
133
134    If any argument is `icepool.Reroll`, the result is `icepool.Reroll`.
135
136    Returns:
137        If none of the outcomes is a `Population`, the result is a `Vector`
138        with one element per argument. Otherwise, the result is a `Population`
139        of the same type as the input `Population`, and the outcomes are
140        `Vector`s with one element per argument.
141    """
142    return cartesian_product(*args, outcome_type=icepool.Vector)

Returns the Cartesian product of the arguments as Vectors or a Population thereof.

For example:

  • vectorize(1, 2) would produce Vector(1, 2).
  • vectorize(d6, 0) would produce a Die with outcomes Vector(1, 0), Vector(2, 0), ... Vector(6, 0).
  • vectorize(d6, d6) would produce a Die with outcomes Vector(1, 1), Vector(1, 2), ... Vector(6, 5), Vector(6, 6).

If Populations are provided, they must all be Die or all Deck and not a mixture of the two.

If any argument is icepool.Reroll, the result is icepool.Reroll.

Returns:

If none of the outcomes is a Population, the result is a Vector with one element per argument. Otherwise, the result is a Population of the same type as the input Population, and the outcomes are Vectors with one element per argument.

class Vector(icepool.Outcome, typing.Sequence[+T_co]):
125class Vector(Outcome, Sequence[T_co]):
126    """Immutable tuple-like class that applies most operators elementwise.
127
128    May become a variadic generic type in the future.
129    """
130    __slots__ = ['_data', '_truth_value']
131
132    _data: tuple[T_co, ...]
133    _truth_value: bool | None
134
135    def __init__(self,
136                 elements: Iterable[T_co],
137                 *,
138                 truth_value: bool | None = None) -> None:
139        self._data = tuple(elements)
140        self._truth_value = truth_value
141
142    def __hash__(self) -> int:
143        return hash((Vector, self._data))
144
145    def __len__(self) -> int:
146        return len(self._data)
147
148    @overload
149    def __getitem__(self, index: int) -> T_co:
150        ...
151
152    @overload
153    def __getitem__(self, index: slice) -> 'Vector[T_co]':
154        ...
155
156    def __getitem__(self, index: int | slice) -> 'T_co | Vector[T_co]':
157        if isinstance(index, int):
158            return self._data[index]
159        else:
160            return Vector(self._data[index])
161
162    def __iter__(self) -> Iterator[T_co]:
163        return iter(self._data)
164
165    # Unary operators.
166
167    def unary_operator(self, op: Callable[..., U], *args,
168                       **kwargs) -> 'Vector[U]':
169        """Unary operators on `Vector` are applied elementwise.
170
171        This is used for the standard unary operators
172        `-, +, abs, ~, round, trunc, floor, ceil`
173        """
174        return Vector(op(x, *args, **kwargs) for x in self)
175
176    def __neg__(self) -> 'Vector[T_co]':
177        return self.unary_operator(operator.neg)
178
179    def __pos__(self) -> 'Vector[T_co]':
180        return self.unary_operator(operator.pos)
181
182    def __invert__(self) -> 'Vector[T_co]':
183        return self.unary_operator(operator.invert)
184
185    def abs(self) -> 'Vector[T_co]':
186        return self.unary_operator(operator.abs)
187
188    __abs__ = abs
189
190    def round(self, ndigits: int | None = None) -> 'Vector':
191        return self.unary_operator(round, ndigits)
192
193    __round__ = round
194
195    def trunc(self) -> 'Vector':
196        return self.unary_operator(math.trunc)
197
198    __trunc__ = trunc
199
200    def floor(self) -> 'Vector':
201        return self.unary_operator(math.floor)
202
203    __floor__ = floor
204
205    def ceil(self) -> 'Vector':
206        return self.unary_operator(math.ceil)
207
208    __ceil__ = ceil
209
210    # Binary operators.
211
212    def binary_operator(self,
213                        other,
214                        op: Callable[..., U],
215                        *args,
216                        compare_for_truth: bool = False,
217                        compare_non_vector: bool | None = None,
218                        **kwargs) -> 'Vector[U]':
219        """Binary operators on `Vector` are applied elementwise.
220
221        If the other operand is also a `Vector`, the operator is applied to each
222        pair of elements from `self` and `other`. Both must have the same
223        length.
224
225        Otherwise the other operand is broadcast to each element of `self`.
226
227        This is used for the standard binary operators
228        `+, -, *, /, //, %, **, <<, >>, &, |, ^`.
229
230        `@` is not included due to its different meaning in `Die`.
231
232        This is also used for the comparators
233        `<, <=, >, >=, ==, !=`.
234
235        In this case, the result also has a truth value based on lexicographic
236        ordering.
237        """
238        if isinstance(other, Vector):
239            if len(self) == len(other):
240                if compare_for_truth:
241                    truth_value = cast(bool, op(self._data, other._data))
242                else:
243                    truth_value = None
244                return Vector(
245                    (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
246                    truth_value=truth_value)
247            else:
248                raise IndexError(
249                    f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
250                )
251        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
252            return NotImplemented  # delegate to the other
253        else:
254            return Vector((op(x, other, *args, **kwargs) for x in self),
255                          truth_value=compare_non_vector)
256
257    def reverse_binary_operator(self, other, op: Callable[..., U], *args,
258                                **kwargs) -> 'Vector[U]':
259        """Reverse version of `binary_operator()`."""
260        if isinstance(other, Vector):
261            if len(self) == len(other):
262                return Vector(
263                    op(y, x, *args, **kwargs) for x, y in zip(self, other))
264            else:
265                raise IndexError(
266                    f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
267                )
268        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
269            return NotImplemented  # delegate to the other
270        else:
271            return Vector(op(other, x, *args, **kwargs) for x in self)
272
273    def __add__(self, other) -> 'Vector':
274        return self.binary_operator(other, operator.add)
275
276    def __radd__(self, other) -> 'Vector':
277        return self.reverse_binary_operator(other, operator.add)
278
279    def __sub__(self, other) -> 'Vector':
280        return self.binary_operator(other, operator.sub)
281
282    def __rsub__(self, other) -> 'Vector':
283        return self.reverse_binary_operator(other, operator.sub)
284
285    def __mul__(self, other) -> 'Vector':
286        return self.binary_operator(other, operator.mul)
287
288    def __rmul__(self, other) -> 'Vector':
289        return self.reverse_binary_operator(other, operator.mul)
290
291    def __truediv__(self, other) -> 'Vector':
292        return self.binary_operator(other, operator.truediv)
293
294    def __rtruediv__(self, other) -> 'Vector':
295        return self.reverse_binary_operator(other, operator.truediv)
296
297    def __floordiv__(self, other) -> 'Vector':
298        return self.binary_operator(other, operator.floordiv)
299
300    def __rfloordiv__(self, other) -> 'Vector':
301        return self.reverse_binary_operator(other, operator.floordiv)
302
303    def __pow__(self, other) -> 'Vector':
304        return self.binary_operator(other, operator.pow)
305
306    def __rpow__(self, other) -> 'Vector':
307        return self.reverse_binary_operator(other, operator.pow)
308
309    def __mod__(self, other) -> 'Vector':
310        return self.binary_operator(other, operator.mod)
311
312    def __rmod__(self, other) -> 'Vector':
313        return self.reverse_binary_operator(other, operator.mod)
314
315    def __lshift__(self, other) -> 'Vector':
316        return self.binary_operator(other, operator.lshift)
317
318    def __rlshift__(self, other) -> 'Vector':
319        return self.reverse_binary_operator(other, operator.lshift)
320
321    def __rshift__(self, other) -> 'Vector':
322        return self.binary_operator(other, operator.rshift)
323
324    def __rrshift__(self, other) -> 'Vector':
325        return self.reverse_binary_operator(other, operator.rshift)
326
327    def __and__(self, other) -> 'Vector':
328        return self.binary_operator(other, operator.and_)
329
330    def __rand__(self, other) -> 'Vector':
331        return self.reverse_binary_operator(other, operator.and_)
332
333    def __or__(self, other) -> 'Vector':
334        return self.binary_operator(other, operator.or_)
335
336    def __ror__(self, other) -> 'Vector':
337        return self.reverse_binary_operator(other, operator.or_)
338
339    def __xor__(self, other) -> 'Vector':
340        return self.binary_operator(other, operator.xor)
341
342    def __rxor__(self, other) -> 'Vector':
343        return self.reverse_binary_operator(other, operator.xor)
344
345    # Comparators.
346    # These return a value with a truth value, but not a bool.
347
348    def __lt__(self, other) -> 'Vector':  # type: ignore
349        return self.binary_operator(other,
350                                    operator.lt,
351                                    compare_for_truth=True,
352                                    compare_non_vector=None)
353
354    def __le__(self, other) -> 'Vector':  # type: ignore
355        return self.binary_operator(other,
356                                    operator.le,
357                                    compare_for_truth=True,
358                                    compare_non_vector=None)
359
360    def __gt__(self, other) -> 'Vector':  # type: ignore
361        return self.binary_operator(other,
362                                    operator.gt,
363                                    compare_for_truth=True,
364                                    compare_non_vector=None)
365
366    def __ge__(self, other) -> 'Vector':  # type: ignore
367        return self.binary_operator(other,
368                                    operator.ge,
369                                    compare_for_truth=True,
370                                    compare_non_vector=None)
371
372    def __eq__(self, other) -> 'Vector | bool':  # type: ignore
373        return self.binary_operator(other,
374                                    operator.eq,
375                                    compare_for_truth=True,
376                                    compare_non_vector=False)
377
378    def __ne__(self, other) -> 'Vector | bool':  # type: ignore
379        return self.binary_operator(other,
380                                    operator.ne,
381                                    compare_for_truth=True,
382                                    compare_non_vector=True)
383
384    def __bool__(self) -> bool:
385        if self._truth_value is None:
386            raise TypeError(
387                'Vector only has a truth value if it is the result of a comparison operator.'
388            )
389        return self._truth_value
390
391    # Sequence manipulation.
392
393    def append(self, other) -> 'Vector':
394        return Vector(self._data + (other, ))
395
396    def concatenate(self, other: 'Iterable') -> 'Vector':
397        return Vector(itertools.chain(self, other))
398
399    # Strings.
400
401    def __repr__(self) -> str:
402        return type(self).__qualname__ + '(' + repr(self._data) + ')'
403
404    def __str__(self) -> str:
405        return type(self).__qualname__ + '(' + str(self._data) + ')'

Immutable tuple-like class that applies most operators elementwise.

May become a variadic generic type in the future.

def unary_operator( self, op: Callable[..., ~U], *args, **kwargs) -> Vector[~U]:
167    def unary_operator(self, op: Callable[..., U], *args,
168                       **kwargs) -> 'Vector[U]':
169        """Unary operators on `Vector` are applied elementwise.
170
171        This is used for the standard unary operators
172        `-, +, abs, ~, round, trunc, floor, ceil`
173        """
174        return Vector(op(x, *args, **kwargs) for x in self)

Unary operators on Vector are applied elementwise.

This is used for the standard unary operators -, +, abs, ~, round, trunc, floor, ceil.
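As a standalone sketch of the elementwise unary convention (the helper name `unary` is hypothetical, not icepool's API): each listed operator is applied to every element independently.

```python
import math
import operator

# Sketch of the elementwise unary convention (not icepool's implementation):
# apply the operator to each element and collect the results.
def unary(op, vec):
    return tuple(op(x) for x in vec)

negated = unary(operator.neg, (1, -2, 3))   # elementwise negation
floors = unary(math.floor, (1.5, 2.9))      # elementwise floor
```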

def abs(self) -> Vector[+T_co]:
185    def abs(self) -> 'Vector[T_co]':
186        return self.unary_operator(operator.abs)
def round(self, ndigits: int | None = None) -> Vector:
190    def round(self, ndigits: int | None = None) -> 'Vector':
191        return self.unary_operator(round, ndigits)
def trunc(self) -> Vector:
195    def trunc(self) -> 'Vector':
196        return self.unary_operator(math.trunc)
def floor(self) -> Vector:
200    def floor(self) -> 'Vector':
201        return self.unary_operator(math.floor)
def ceil(self) -> Vector:
205    def ceil(self) -> 'Vector':
206        return self.unary_operator(math.ceil)
def binary_operator( self, other, op: Callable[..., ~U], *args, compare_for_truth: bool = False, compare_non_vector: bool | None = None, **kwargs) -> Vector[~U]:
212    def binary_operator(self,
213                        other,
214                        op: Callable[..., U],
215                        *args,
216                        compare_for_truth: bool = False,
217                        compare_non_vector: bool | None = None,
218                        **kwargs) -> 'Vector[U]':
219        """Binary operators on `Vector` are applied elementwise.
220
221        If the other operand is also a `Vector`, the operator is applied to each
222        pair of elements from `self` and `other`. Both must have the same
223        length.
224
225        Otherwise the other operand is broadcast to each element of `self`.
226
227        This is used for the standard binary operators
228        `+, -, *, /, //, %, **, <<, >>, &, |, ^`.
229
230        `@` is not included due to its different meaning in `Die`.
231
232        This is also used for the comparators
233        `<, <=, >, >=, ==, !=`.
234
235        In this case, the result also has a truth value based on lexicographic
236        ordering.
237        """
238        if isinstance(other, Vector):
239            if len(self) == len(other):
240                if compare_for_truth:
241                    truth_value = cast(bool, op(self._data, other._data))
242                else:
243                    truth_value = None
244                return Vector(
245                    (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
246                    truth_value=truth_value)
247            else:
248                raise IndexError(
249                    f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
250                )
251        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
252            return NotImplemented  # delegate to the other
253        else:
254            return Vector((op(x, other, *args, **kwargs) for x in self),
255                          truth_value=compare_non_vector)

Binary operators on Vector are applied elementwise.

If the other operand is also a Vector, the operator is applied to each pair of elements from self and other. Both must have the same length.

Otherwise the other operand is broadcast to each element of self.

This is used for the standard binary operators +, -, *, /, //, %, **, <<, >>, &, |, ^.

@ is not included due to its different meaning in Die.

This is also used for the comparators <, <=, >, >=, ==, !=.

In this case, the result also has a truth value based on lexicographic ordering.
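The two cases above can be illustrated without icepool (the `broadcast` helper is hypothetical): a `Vector` pairs elements when the other operand is a `Vector`, broadcasts otherwise, and a comparator's truth value follows lexicographic tuple comparison.

```python
import operator

# Sketch only, not icepool's code: broadcasting a scalar across elements.
def broadcast(op, vec, scalar):
    return tuple(op(x, scalar) for x in vec)

scaled = broadcast(operator.mul, (1, 2, 3), 10)

# A comparator produces elementwise results, while the truth value comes
# from comparing the underlying tuples lexicographically.
elementwise_lt = tuple(x < y for x, y in zip((1, 5), (2, 4)))
truth_value = (1, 5) < (2, 4)  # decided by the first pair, 1 < 2
```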

def reverse_binary_operator( self, other, op: Callable[..., ~U], *args, **kwargs) -> Vector[~U]:
257    def reverse_binary_operator(self, other, op: Callable[..., U], *args,
258                                **kwargs) -> 'Vector[U]':
259        """Reverse version of `binary_operator()`."""
260        if isinstance(other, Vector):
261            if len(self) == len(other):
262                return Vector(
263                    op(y, x, *args, **kwargs) for x, y in zip(self, other))
264            else:
265                raise IndexError(
266                    f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
267                )
268        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
269            return NotImplemented  # delegate to the other
270        else:
271            return Vector(op(other, x, *args, **kwargs) for x in self)

Reverse version of binary_operator().

def append(self, other) -> Vector:
393    def append(self, other) -> 'Vector':
394        return Vector(self._data + (other, ))
def concatenate(self, other: Iterable) -> Vector:
396    def concatenate(self, other: 'Iterable') -> 'Vector':
397        return Vector(itertools.chain(self, other))
Inherited members: Outcome.
class Symbols(typing.Mapping[str, int]):
 16class Symbols(Mapping[str, int]):
 17    """EXPERIMENTAL: Immutable multiset of single characters.
 18
 19    Spaces, dashes, and underscores cannot be used as symbols.
 20
 21    Operations include:
 22
 23    | Operation                   | Count / notes                      |
 24    |:----------------------------|:-----------------------------------|
 25    | `additive_union`, `+`       | `l + r`                            |
 26    | `difference`, `-`           | `l - r`                            |
 27    | `intersection`, `&`         | `min(l, r)`                        |
 28    | `union`, `\\|`               | `max(l, r)`                        |
 29    | `symmetric_difference`, `^` | `abs(l - r)`                       |
 30    | `multiply_counts`, `*`      | `count * n`                        |
 31    | `divide_counts`, `//`       | `count // n`                       |
 32    | `issubset`, `<=`            | all counts l <= r                  |
 33    | `issuperset`, `>=`          | all counts l >= r                  |
 34    | `==`                        | all counts l == r                  |
 35    | `!=`                        | any count l != r                   |
 36    | unary `+`                   | drop all negative counts           |
 37    | unary `-`                   | reverses the sign of all counts    |
 38
 39    `<` and `>` are lexicographic orderings rather than subset relations.
 40    Specifically, they compare the count of each character in alphabetical
 41    order. For example:
 42    * `'a' > ''` since one `'a'` is more than zero `'a'`s.
 43    * `'a' > 'bb'` since `'a'` is compared first.
 44    * `'-a' < 'bb'` since the left side has -1 `'a'`s.
 45    * `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.
 46    
 47    Binary operators other than `*` and `//` implicitly convert the other
 48    argument to `Symbols` using the constructor.
 49
 50    Subscripting with a single character returns the count of that character
 51    as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`.
 52    You can also access it as an attribute, e.g.  `symbols.a`.
 53
 54    Subscripting with multiple characters returns a `Symbols` with only those
 55    characters, dropping the rest.
 56    E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`.
 57    Again you can also access it as an attribute, e.g. `symbols.ab`.
 58    This is useful for reducing the outcome space, which reduces computational
 59    cost for further operations. If you want to keep only a single character
 60    while keeping the type as `Symbols`, you can subscript with that character
 61    plus an unused character.
 62
 63    Subscripting with duplicate characters currently has no further effect, but
 64    this may change in the future.
 65
 66    `Population.marginals` forwards attribute access, so you can use e.g.
 67    `die.marginals.a` to get the marginal distribution of `a`s.
 68
 69    Note that attribute access only works with valid identifiers,
 70    so e.g. emojis would need to use the subscript method.
 71    """
 72    _data: Mapping[str, int]
 73
 74    def __new__(cls,
 75                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
 76        """Constructor.
 77        
 78        The argument can be a string, an iterable of characters, or a mapping of
 79        characters to counts.
 80        
 81        If the argument is a string, negative symbols can be specified using a
 82        minus sign optionally surrounded by whitespace. For example,
 83        `a - b` has one positive a and one negative b.
 84        """
 85        self = super(Symbols, cls).__new__(cls)
 86        if isinstance(symbols, str):
 87            data: MutableMapping[str, int] = defaultdict(int)
 88            positive, *negative = re.split(r'\s*-\s*', symbols)
 89            for s in positive:
 90                data[s] += 1
 91            if len(negative) > 1:
 92                raise ValueError('Multiple dashes not allowed.')
 93            if len(negative) == 1:
 94                for s in negative[0]:
 95                    data[s] -= 1
 96        elif isinstance(symbols, Mapping):
 97            data = defaultdict(int, symbols)
 98        else:
 99            data = defaultdict(int)
100            for s in symbols:
101                data[s] += 1
102
103        for s in data:
104            if len(s) != 1:
105                raise ValueError(f'Symbol {s} is not a single character.')
106            if re.match(r'[\s_-]', s):
107                raise ValueError(
108                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')
109
110        self._data = defaultdict(int,
111                                 {k: data[k]
112                                  for k in sorted(data.keys())})
113
114        return self
115
116    @classmethod
117    def _new_raw(cls, data: defaultdict[str, int]) -> 'Symbols':
118        self = super(Symbols, cls).__new__(cls)
119        self._data = data
120        return self
121
122    # Mapping interface.
123
124    def __getitem__(self, key: str) -> 'int | Symbols':  # type: ignore
125        if len(key) == 1:
126            return self._data[key]
127        else:
128            return Symbols._new_raw(
129                defaultdict(int, {s: self._data[s]
130                                  for s in key}))
131
132    def __getattr__(self, key: str) -> 'int | Symbols':
133        if key[0] == '_':
134            raise AttributeError(key)
135        return self[key]
136
137    def __iter__(self) -> Iterator[str]:
138        return iter(self._data)
139
140    def __len__(self) -> int:
141        return len(self._data)
142
143    # Binary operators.
144
145    def additive_union(self, *args:
146                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
147        """The sum of counts of each symbol."""
148        return functools.reduce(operator.add, args, self)
149
150    def __add__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
151        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
152            return NotImplemented  # delegate to the other
153        data = defaultdict(int, self._data)
154        for s, count in Symbols(other).items():
155            data[s] += count
156        return Symbols._new_raw(data)
157
158    __radd__ = __add__
159
160    def difference(self, *args:
161                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
162        """The difference between the counts of each symbol."""
163        return functools.reduce(operator.sub, args, self)
164
165    def __sub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
166        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
167            return NotImplemented  # delegate to the other
168        data = defaultdict(int, self._data)
169        for s, count in Symbols(other).items():
170            data[s] -= count
171        return Symbols._new_raw(data)
172
173    def __rsub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
174        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
175            return NotImplemented  # delegate to the other
176        data = defaultdict(int, Symbols(other)._data)
177        for s, count in self.items():
178            data[s] -= count
179        return Symbols._new_raw(data)
180
181    def intersection(self, *args:
182                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
183        """The min count of each symbol."""
184        return functools.reduce(operator.and_, args, self)
185
186    def __and__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
187        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
188            return NotImplemented  # delegate to the other
189        data: defaultdict[str, int] = defaultdict(int)
190        for s, count in Symbols(other).items():
191            data[s] = min(self.get(s, 0), count)
192        return Symbols._new_raw(data)
193
194    __rand__ = __and__
195
196    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
197        """The max count of each symbol."""
198        return functools.reduce(operator.or_, args, self)
199
200    def __or__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
201        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
202            return NotImplemented  # delegate to the other
203        data = defaultdict(int, self._data)
204        for s, count in Symbols(other).items():
205            data[s] = max(data[s], count)
206        return Symbols._new_raw(data)
207
208    __ror__ = __or__
209
210    def symmetric_difference(
211            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
212        """The absolute difference in symbol counts between the two sets."""
213        return self ^ other
214
215    def __xor__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
216        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
217            return NotImplemented  # delegate to the other
218        data = defaultdict(int, self._data)
219        for s, count in Symbols(other).items():
220            data[s] = abs(data[s] - count)
221        return Symbols._new_raw(data)
222
223    __rxor__ = __xor__
224
225    def multiply_counts(self, other: int) -> 'Symbols':
226        """Multiplies all counts by an integer."""
227        return self * other
228
229    def __mul__(self, other: int) -> 'Symbols':
230        if not isinstance(other, int):
231            return NotImplemented
232        data = defaultdict(int, {
233            s: count * other
234            for s, count in self.items()
235        })
236        return Symbols._new_raw(data)
237
238    __rmul__ = __mul__
239
240    def divide_counts(self, other: int) -> 'Symbols':
241        """Divides all counts by an integer, rounding down."""
242        data = defaultdict(int, {
243            s: count // other
244            for s, count in self.items()
245        })
246        return Symbols._new_raw(data)
247
248    def count_subset(self,
249                     divisor: Iterable[str] | Mapping[str, int],
250                     *,
251                     empty_divisor: int | None = None) -> int:
252        """The number of times the divisor is contained in this multiset."""
253        if not isinstance(divisor, Mapping):
254            divisor = Counter(divisor)
255        result = None
256        for s, count in divisor.items():
257            current = self._data[s] // count
258            if result is None or current < result:
259                result = current
260        if result is None:
261            if empty_divisor is None:
262                raise ZeroDivisionError('Divisor is empty.')
263            else:
264                return empty_divisor
265        else:
266            return result
267
268    @overload
269    def __floordiv__(self, other: int) -> 'Symbols':
270        """Same as divide_counts()."""
271
272    @overload
273    def __floordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
274        """Same as count_subset()."""
275
276    @overload
277    def __floordiv__(
278            self,
279            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
280        ...
281
282    def __floordiv__(
283            self,
284            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
285        if isinstance(other, int):
286            return self.divide_counts(other)
287        elif isinstance(other, Iterable):
288            return self.count_subset(other)
289        else:
290            return NotImplemented
291
292    def __rfloordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
293        return Symbols(other).count_subset(self)
294
295    def modulo_counts(self, other: int) -> 'Symbols':
296        return self % other
297
298    def __mod__(self, other: int) -> 'Symbols':
299        if not isinstance(other, int):
300            return NotImplemented
301        data = defaultdict(int, {
302            s: count % other
303            for s, count in self.items()
304        })
305        return Symbols._new_raw(data)
306
307    def __lt__(self, other: 'Symbols') -> bool:
308        if not isinstance(other, Symbols):
309            return NotImplemented
310        keys = sorted(set(self.keys()) | set(other.keys()))
311        for k in keys:
312            if self[k] < other[k]:  # type: ignore
313                return True
314            if self[k] > other[k]:  # type: ignore
315                return False
316        return False
317
318    def __gt__(self, other: 'Symbols') -> bool:
319        if not isinstance(other, Symbols):
320            return NotImplemented
321        keys = sorted(set(self.keys()) | set(other.keys()))
322        for k in keys:
323            if self[k] > other[k]:  # type: ignore
324                return True
325            if self[k] < other[k]:  # type: ignore
326                return False
327        return False
328
329    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
330        """Whether `self` is a subset of the other.
331
332        Same as `<=`.
333        
334        Note that the `<` and `>` operators are lexicographic orderings,
335        not proper subset relations.
336        """
337        return self <= other
338
339    def __le__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
340        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
341            return NotImplemented  # delegate to the other
342        other = Symbols(other)
343        return all(self[s] <= other[s]  # type: ignore
344                   for s in itertools.chain(self, other))
345
346    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
347        """Whether `self` is a superset of the other.
348
349        Same as `>=`.
350        
351        Note that the `<` and `>` operators are lexicographic orderings,
352        not proper subset relations.
353        """
354        return self >= other
355
356    def __ge__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
357        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
358            return NotImplemented  # delegate to the other
359        other = Symbols(other)
360        return all(self[s] >= other[s]  # type: ignore
361                   for s in itertools.chain(self, other))
362
363    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
364        """Whether `self` has any positive elements in common with the other.
365        
366        Raises:
367            ValueError if either has negative elements.
368        """
369        other = Symbols(other)
370        return any(self[s] > 0 and other[s] > 0  # type: ignore
371                   for s in self)
372
373    def __eq__(self, other) -> bool:
374        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
375            return NotImplemented  # delegate to the other
376        try:
377            other = Symbols(other)
378        except ValueError:
379            return NotImplemented
380        return all(self[s] == other[s]  # type: ignore
381                   for s in itertools.chain(self, other))
382
383    def __ne__(self, other) -> bool:
384        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
385            return NotImplemented  # delegate to the other
386        try:
387            other = Symbols(other)
388        except ValueError:
389            return NotImplemented
390        return any(self[s] != other[s]  # type: ignore
391                   for s in itertools.chain(self, other))
392
393    # Unary operators.
394
395    def has_negative_counts(self) -> bool:
396        """Whether any counts are negative."""
397        return any(c < 0 for c in self.values())
398
399    def __pos__(self) -> 'Symbols':
400        data = defaultdict(int, {
401            s: count
402            for s, count in self.items() if count > 0
403        })
404        return Symbols._new_raw(data)
405
406    def __neg__(self) -> 'Symbols':
407        data = defaultdict(int, {s: -count for s, count in self.items()})
408        return Symbols._new_raw(data)
409
410    @cached_property
411    def _hash(self) -> int:
412        return hash((Symbols, str(self)))
413
414    def __hash__(self) -> int:
415        return self._hash
416
417    def size(self) -> int:
418        """The total number of elements."""
419        return sum(self._data.values())
420
421    @cached_property
422    def _str(self) -> str:
423        sorted_keys = sorted(self)
424        positive = ''.join(s * self._data[s] for s in sorted_keys
425                           if self._data[s] > 0)
426        negative = ''.join(s * -self._data[s] for s in sorted_keys
427                           if self._data[s] < 0)
428        if positive:
429            if negative:
430                return positive + ' - ' + negative
431            else:
432                return positive
433        else:
434            if negative:
435                return '-' + negative
436            else:
437                return ''
438
439    def __str__(self) -> str:
440        """All symbols in unary form (i.e. including duplicates) in ascending order.
441
442        If there are negative elements, they are listed following a ` - ` sign.
443        """
444        return self._str
445
446    def __repr__(self) -> str:
447        return type(self).__qualname__ + f"('{str(self)}')"

EXPERIMENTAL: Immutable multiset of single characters.

Spaces, dashes, and underscores cannot be used as symbols.

Operations include:

| Operation                   | Count / notes                   |
|:----------------------------|:--------------------------------|
| `additive_union`, `+`       | `l + r`                         |
| `difference`, `-`           | `l - r`                         |
| `intersection`, `&`         | `min(l, r)`                     |
| `union`, `\|`               | `max(l, r)`                     |
| `symmetric_difference`, `^` | `abs(l - r)`                    |
| `multiply_counts`, `*`      | `count * n`                     |
| `divide_counts`, `//`       | `count // n`                    |
| `issubset`, `<=`            | all counts `l <= r`             |
| `issuperset`, `>=`          | all counts `l >= r`             |
| `==`                        | all counts `l == r`             |
| `!=`                        | any count `l != r`              |
| unary `+`                   | drop all negative counts        |
| unary `-`                   | reverses the sign of all counts |

< and > are lexicographic orderings rather than subset relations. Specifically, they compare the count of each character in alphabetical order. For example:

  • 'a' > '' since one 'a' is more than zero 'a's.
  • 'a' > 'bb' since 'a' is compared first.
  • '-a' < 'bb' since the left side has -1 'a's.
  • 'a' < 'ab' since the 'a's are equal but the right side has more 'b's.

Binary operators other than * and // implicitly convert the other argument to Symbols using the constructor.

Subscripting with a single character returns the count of that character as an int. E.g. symbols['a'] -> the number of 'a's as an int. You can also access it as an attribute, e.g. symbols.a.

Subscripting with multiple characters returns a Symbols with only those characters, dropping the rest. E.g. symbols['ab'] -> the number of 'a's and 'b's as a Symbols. Again you can also access it as an attribute, e.g. symbols.ab. This is useful for reducing the outcome space, which reduces computational cost for further operations. If you want to keep only a single character while keeping the type as Symbols, you can subscript with that character plus an unused character.

Subscripting with duplicate characters currently has no further effect, but this may change in the future.

Population.marginals forwards attribute access, so you can use e.g. die.marginals.a to get the marginal distribution of 'a's.

Note that attribute access only works with valid identifiers, so e.g. emojis would need to use the subscript method.
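The count rules in the table above can be checked against plain `collections.Counter` arithmetic (an illustration only, not icepool's implementation; unlike `Counter`'s own operators, `Symbols` also preserves negative counts).

```python
from collections import Counter

# Counter-based illustration of the table's count rules.
l, r = Counter('aab'), Counter('abc')
keys = l.keys() | r.keys()

additive_union = {s: l[s] + r[s] for s in keys}        # l + r
intersection = {s: min(l[s], r[s]) for s in keys}      # min(l, r)
union = {s: max(l[s], r[s]) for s in keys}             # max(l, r)
symmetric_difference = {s: abs(l[s] - r[s]) for s in keys}  # abs(l - r)
```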

Symbols(symbols: Union[str, Iterable[str], Mapping[str, int]])
 74    def __new__(cls,
 75                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
 76        """Constructor.
 77        
 78        The argument can be a string, an iterable of characters, or a mapping of
 79        characters to counts.
 80        
 81        If the argument is a string, negative symbols can be specified using a
 82        minus sign optionally surrounded by whitespace. For example,
 83        `a - b` has one positive a and one negative b.
 84        """
 85        self = super(Symbols, cls).__new__(cls)
 86        if isinstance(symbols, str):
 87            data: MutableMapping[str, int] = defaultdict(int)
 88            positive, *negative = re.split(r'\s*-\s*', symbols)
 89            for s in positive:
 90                data[s] += 1
 91            if len(negative) > 1:
 92                raise ValueError('Multiple dashes not allowed.')
 93            if len(negative) == 1:
 94                for s in negative[0]:
 95                    data[s] -= 1
 96        elif isinstance(symbols, Mapping):
 97            data = defaultdict(int, symbols)
 98        else:
 99            data = defaultdict(int)
100            for s in symbols:
101                data[s] += 1
102
103        for s in data:
104            if len(s) != 1:
105                raise ValueError(f'Symbol {s} is not a single character.')
106            if re.match(r'[\s_-]', s):
107                raise ValueError(
108                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')
109
110        self._data = defaultdict(int,
111                                 {k: data[k]
112                                  for k in sorted(data.keys())})
113
114        return self

Constructor.

The argument can be a string, an iterable of characters, or a mapping of characters to counts.

If the argument is a string, negative symbols can be specified using a minus sign optionally surrounded by whitespace. For example, a - b has one positive a and one negative b.
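The minus-sign parsing can be sketched with a standalone stdlib function mirroring the string branch of the constructor shown above:

```python
import re
from collections import defaultdict

def parse_symbols(s: str) -> dict[str, int]:
    """Mimics the string branch of the Symbols constructor:
    characters before an optional minus sign count +1, after it -1."""
    data: dict[str, int] = defaultdict(int)
    positive, *negative = re.split(r'\s*-\s*', s)
    for c in positive:
        data[c] += 1
    for c in ''.join(negative):
        data[c] -= 1
    return dict(data)

print(parse_symbols('aab - b'))  # {'a': 2, 'b': 0}
```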

def additive_union( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
145    def additive_union(self, *args:
146                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
147        """The sum of counts of each symbol."""
148        return functools.reduce(operator.add, args, self)

The sum of counts of each symbol.

def difference( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
160    def difference(self, *args:
161                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
162        """The difference between the counts of each symbol."""
163        return functools.reduce(operator.sub, args, self)

The difference between the counts of each symbol.

def intersection( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
181    def intersection(self, *args:
182                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
183        """The min count of each symbol."""
184        return functools.reduce(operator.and_, args, self)

The min count of each symbol.

def union( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
196    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
197        """The max count of each symbol."""
198        return functools.reduce(operator.or_, args, self)

The max count of each symbol.

def symmetric_difference( self, other: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
210    def symmetric_difference(
211            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
212        """The absolute difference in symbol counts between the two sets."""
213        return self ^ other

The absolute difference in symbol counts between the two sets.

def multiply_counts(self, other: int) -> Symbols:
225    def multiply_counts(self, other: int) -> 'Symbols':
226        """Multiplies all counts by an integer."""
227        return self * other

Multiplies all counts by an integer.

def divide_counts(self, other: int) -> Symbols:
240    def divide_counts(self, other: int) -> 'Symbols':
241        """Divides all counts by an integer, rounding down."""
242        data = defaultdict(int, {
243            s: count // other
244            for s, count in self.items()
245        })
246        return Symbols._new_raw(data)

Divides all counts by an integer, rounding down.

def count_subset( self, divisor: Union[Iterable[str], Mapping[str, int]], *, empty_divisor: int | None = None) -> int:
248    def count_subset(self,
249                     divisor: Iterable[str] | Mapping[str, int],
250                     *,
251                     empty_divisor: int | None = None) -> int:
252        """The number of times the divisor is contained in this multiset."""
253        if not isinstance(divisor, Mapping):
254            divisor = Counter(divisor)
255        result = None
256        for s, count in divisor.items():
257            current = self._data[s] // count
258            if result is None or current < result:
259                result = current
260        if result is None:
261            if empty_divisor is None:
262                raise ZeroDivisionError('Divisor is empty.')
263            else:
264                return empty_divisor
265        else:
266            return result

The number of times the divisor is contained in this multiset.
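A minimal stdlib sketch of what `count_subset` computes, mirroring the floor-division logic in the source above:

```python
from collections import Counter

def count_subset(multiset: str, divisor: str) -> int:
    """How many whole copies of `divisor` fit inside `multiset`."""
    counts = Counter(multiset)
    div = Counter(divisor)
    # The scarcest symbol relative to the divisor limits the answer.
    return min(counts[s] // c for s, c in div.items())

print(count_subset('aaabb', 'ab'))  # limited by 'b': 2
```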

def modulo_counts(self, other: int) -> Symbols:
295    def modulo_counts(self, other: int) -> 'Symbols':
296        return self % other
def issubset(self, other: Union[Iterable[str], Mapping[str, int]]) -> bool:
329    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
330        """Whether `self` is a subset of the other.
331
332        Same as `<=`.
333        
334        Note that the `<` and `>` operators are lexicographic orderings,
335        not proper subset relations.
336        """
337        return self <= other

Whether self is a subset of the other.

Same as <=.

Note that the < and > operators are lexicographic orderings, not proper subset relations.

def issuperset(self, other: Union[Iterable[str], Mapping[str, int]]) -> bool:
346    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
347        """Whether `self` is a superset of the other.
348
349        Same as `>=`.
350        
351        Note that the `<` and `>` operators are lexicographic orderings,
352        not proper subset relations.
353        """
354        return self >= other

Whether self is a superset of the other.

Same as >=.

Note that the < and > operators are lexicographic orderings, not proper subset relations.

def isdisjoint(self, other: Union[Iterable[str], Mapping[str, int]]) -> bool:
363    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
364        """Whether `self` has any positive elements in common with the other.
365        
366        Raises:
367            ValueError if either has negative elements.
368        """
369        other = Symbols(other)
370        return any(self[s] > 0 and other[s] > 0  # type: ignore
371                   for s in self)

Whether self has any positive elements in common with the other.

Raises:
  • ValueError if either has negative elements.
def has_negative_counts(self) -> bool:
395    def has_negative_counts(self) -> bool:
396        """Whether any counts are negative."""
397        return any(c < 0 for c in self.values())

Whether any counts are negative.

def size(self) -> int:
417    def size(self) -> int:
418        """The total number of elements."""
419        return sum(self._data.values())

The total number of elements.

Again: Final = <icepool.population.again.AgainExpression object>

A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the Die() constructor. AgainExpressions should not be fed to functions or methods other than Die() (or indirectly via map()), but they can be used with operators. Examples:

  • Again + 6: Roll again and add 6.
  • Again + Again: Roll again twice and sum.

The again_count, again_depth, and again_end arguments to Die() affect how these arguments are processed. At most one of again_count or again_depth may be provided; if neither are provided, the behavior is as again_depth=1.

For finer control over rolling processes, use e.g. Die.map() instead.

Count mode

When again_count is provided, we start with one roll queued and execute one roll at a time. For every Again we roll, we queue another roll. If we run out of rolls, we sum the rolls to find the result. If the total number of rolls (not including the initial roll) would exceed again_count, we reroll the entire process, effectively conditioning the process on not rolling more than again_count extra dice.

This mode only allows "additive" expressions to be used with Again, which means that only the following operators are allowed:

  • Binary +
  • n @ AgainExpression, where n is a non-negative int or Population.

Furthermore, the + operator is assumed to be associative and commutative. For example, str or tuple outcomes will not produce elements with a definite order.

Depth mode

When again_depth=0, again_end is directly substituted for each occurrence of Again. For other values of again_depth, the result for again_depth-1 is substituted for each occurrence of Again.

If again_end=icepool.Reroll, then any AgainExpressions in the final depth are rerolled.

Rerolls

Reroll only rerolls that particular die, not the entire process. Any such rerolls do not count against the again_count or again_depth limit.

If again_end=icepool.Reroll:

  • Count mode: Any result that would cause the number of rolls to exceed again_count is rerolled.
  • Depth mode: Any AgainExpressions in the final depth level are rerolled.
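As an illustration of depth mode, here is a hand-rolled recursive sketch (plain dicts of outcome → quantity, not the icepool API) of a hypothetical die meaning "d4, roll again and add", expanded to a limited depth with an end value of 0 substituted at the final level:

```python
def d4_plus_again(depth: int) -> dict[int, int]:
    """Distribution of 'd4 + Again' expanded to `depth` levels.
    At depth 0, Again is replaced by the end value 0, leaving a plain d4."""
    if depth == 0:
        return {n: 1 for n in range(1, 5)}
    # Substitute the (depth - 1) result for Again.
    inner = d4_plus_again(depth - 1)
    result: dict[int, int] = {}
    for face in range(1, 5):
        for outcome, quantity in inner.items():
            result[face + outcome] = result.get(face + outcome, 0) + quantity
    return result
```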
class CountsKeysView(typing.KeysView[~T], typing.Sequence[~T]):
144class CountsKeysView(KeysView[T], Sequence[T]):
145    """This functions as both a `KeysView` and a `Sequence`."""
146
147    def __init__(self, counts: Counts[T]):
148        self._mapping = counts
149
150    def __getitem__(self, index):
151        return self._mapping._keys[index]
152
153    def __len__(self) -> int:
154        return len(self._mapping)
155
156    def __eq__(self, other):
157        return self._mapping._keys == other

This functions as both a KeysView and a Sequence.

CountsKeysView(counts: icepool.collection.counts.Counts[~T])
147    def __init__(self, counts: Counts[T]):
148        self._mapping = counts
class CountsValuesView(typing.ValuesView[int], typing.Sequence[int]):
160class CountsValuesView(ValuesView[int], Sequence[int]):
161    """This functions as both a `ValuesView` and a `Sequence`."""
162
163    def __init__(self, counts: Counts):
164        self._mapping = counts
165
166    def __getitem__(self, index):
167        return self._mapping._values[index]
168
169    def __len__(self) -> int:
170        return len(self._mapping)
171
172    def __eq__(self, other):
173        return self._mapping._values == other

This functions as both a ValuesView and a Sequence.

CountsValuesView(counts: icepool.collection.counts.Counts)
163    def __init__(self, counts: Counts):
164        self._mapping = counts
class CountsItemsView(typing.ItemsView[~T, int], typing.Sequence[tuple[~T, int]]):
176class CountsItemsView(ItemsView[T, int], Sequence[tuple[T, int]]):
177    """This functions as both an `ItemsView` and a `Sequence`."""
178
179    def __init__(self, counts: Counts):
180        self._mapping = counts
181
182    def __getitem__(self, index):
183        return self._mapping._items[index]
184
185    def __eq__(self, other):
186        return self._mapping._items == other

This functions as both an ItemsView and a Sequence.

CountsItemsView(counts: icepool.collection.counts.Counts)
179    def __init__(self, counts: Counts):
180        self._mapping = counts
def from_cumulative( outcomes: Sequence[~T], cumulative: Union[Sequence[int], Sequence[Die[bool]]], *, reverse: bool = False) -> Die[~T]:
143def from_cumulative(outcomes: Sequence[T],
144                    cumulative: 'Sequence[int] | Sequence[icepool.Die[bool]]',
145                    *,
146                    reverse: bool = False) -> 'icepool.Die[T]':
147    """Constructs a `Die` from a sequence of cumulative values.
148
149    Args:
150        outcomes: The outcomes of the resulting die. Sorted order is recommended
151            but not necessary.
152        cumulative: The cumulative values (inclusive) of the outcomes in the
153            order they are given to this function. These may be:
154            * `int` cumulative quantities.
155            * Dice representing the cumulative distribution at that point.
156        reverse: Iff true, both of the arguments will be reversed. This allows
157            e.g. constructing using a survival distribution.
158    """
159    if len(outcomes) == 0:
160        return icepool.Die({})
161
162    if reverse:
163        outcomes = list(reversed(outcomes))
164        cumulative = list(reversed(cumulative))  # type: ignore
165
166    prev = 0
167    d = {}
168
169    if isinstance(cumulative[0], icepool.Die):
170        cumulative = commonize_denominator(*cumulative)
171        for outcome, die in zip(outcomes, cumulative):
172            d[outcome] = die.quantity('!=', False) - prev
173            prev = die.quantity('!=', False)
174    elif isinstance(cumulative[0], int):
175        cumulative = cast(Sequence[int], cumulative)
176        for outcome, quantity in zip(outcomes, cumulative):
177            d[outcome] = quantity - prev
178            prev = quantity
179    else:
180        raise TypeError(
181            f'Unsupported type {type(cumulative)} for cumulative values.')
182
183    return icepool.Die(d)

Constructs a Die from a sequence of cumulative values.

Arguments:
  • outcomes: The outcomes of the resulting die. Sorted order is recommended but not necessary.
  • cumulative: The cumulative values (inclusive) of the outcomes in the order they are given to this function. These may be:
    • int cumulative quantities.
    • Dice representing the cumulative distribution at that point.
  • reverse: Iff true, both of the arguments will be reversed. This allows e.g. constructing using a survival distribution.
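The core computation is taking successive differences of the cumulative values; a stdlib sketch of the `int` branch above:

```python
def quantities_from_cumulative(outcomes, cumulative):
    """Recover per-outcome quantities from inclusive cumulative quantities."""
    result = {}
    prev = 0
    for outcome, quantity_le in zip(outcomes, cumulative):
        result[outcome] = quantity_le - prev
        prev = quantity_le
    return result

# A d4: cumulative quantities 1, 2, 3, 4 give one count per face.
print(quantities_from_cumulative([1, 2, 3, 4], [1, 2, 3, 4]))
```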
def from_rv( rv, outcomes: Union[Sequence[int], Sequence[float]], denominator: int, **kwargs) -> Union[Die[int], Die[float]]:
198def from_rv(rv, outcomes: Sequence[int] | Sequence[float], denominator: int,
199            **kwargs) -> 'icepool.Die[int] | icepool.Die[float]':
200    """Constructs a `Die` from a rv object (as `scipy.stats`).
201
202    This is done using the CDF.
203
204    Args:
205        rv: A rv object (as `scipy.stats`).
206        outcomes: An iterable of `int`s or `float`s that will be the outcomes
207            of the resulting `Die`.
208            If the distribution is discrete, outcomes must be `int`s.
209            Some outcomes may be omitted if their probability is too small
210            compared to the denominator.
211        denominator: The denominator of the resulting `Die` will be set to this.
212        **kwargs: These will be forwarded to `rv.cdf()`.
213    """
214    if hasattr(rv, 'pdf'):
215        # Continuous distributions use midpoints.
216        midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
217        cdf = rv.cdf(midpoints, **kwargs)
218        quantities_le = tuple(int(round(x * denominator))
219                              for x in cdf) + (denominator, )
220    else:
221        cdf = rv.cdf(outcomes, **kwargs)
222        quantities_le = tuple(int(round(x * denominator)) for x in cdf)
223    return from_cumulative(outcomes, quantities_le)

Constructs a Die from a rv object (as scipy.stats).

This is done using the CDF.

Arguments:
  • rv: A rv object (as scipy.stats).
  • outcomes: An iterable of ints or floats that will be the outcomes of the resulting Die. If the distribution is discrete, outcomes must be ints. Some outcomes may be omitted if their probability is too small compared to the denominator.
  • denominator: The denominator of the resulting Die will be set to this.
  • **kwargs: These will be forwarded to rv.cdf().
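For example, a rough discretization of a normal distribution can use the stdlib `statistics.NormalDist`, which exposes a `cdf()` method like `scipy.stats` objects; this sketches the continuous-midpoint branch shown above (the parameter values are illustrative):

```python
from statistics import NormalDist

rv = NormalDist(mu=10.5, sigma=3)
outcomes = list(range(1, 21))
denominator = 1000

# Continuous distributions are split at midpoints between outcomes.
midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
cdf = [rv.cdf(m) for m in midpoints]
quantities_le = [round(x * denominator) for x in cdf] + [denominator]
```

These cumulative quantities would then be fed to `from_cumulative()`.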
def pointwise_max( arg0, /, *more_args: Die[~T]) -> Die[~T]:
253def pointwise_max(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
254    """Selects the highest chance of rolling >= each outcome among the arguments.
255
256    Naming not finalized.
257
258    Specifically, for each outcome, the chance of the result rolling >= that
259    outcome is the same as the highest chance of rolling >= that outcome among
260    the arguments.
261
262    Equivalently, any quantile in the result is the highest of that quantile
263    among the arguments.
264
265    This is useful for selecting from several possible moves where you are
266    trying to get >= a threshold that is known but could change depending on the
267    situation.
268    
269    Args:
270        dice: Either an iterable of dice, or two or more dice as separate
271            arguments.
272    """
273    if len(more_args) == 0:
274        args = arg0
275    else:
276        args = (arg0, ) + more_args
277    args = commonize_denominator(*args)
278    outcomes = sorted_union(*args)
279    cumulative = [
280        min(die.quantity('<=', outcome) for die in args)
281        for outcome in outcomes
282    ]
283    return from_cumulative(outcomes, cumulative)

Selects the highest chance of rolling >= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling >= that outcome is the same as the highest chance of rolling >= that outcome among the arguments.

Equivalently, any quantile in the result is the highest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get >= a threshold that is known but could change depending on the situation.

Arguments:
  • dice: Either an iterable of dice, or two or more dice as separate arguments.
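Maximizing the chance of rolling >= each outcome is equivalent to minimizing the cumulative (<=) quantity at each outcome, which is exactly what the source above does. A stdlib sketch over dicts of cumulative quantities that already share a denominator:

```python
def pointwise_max_cumulative(*cumulatives: dict[int, int]) -> dict[int, int]:
    """Pointwise max expressed on cumulative (<=) quantities:
    take the minimum <= quantity at each outcome.
    Inputs are assumed to share the same outcomes and denominator."""
    outcomes = sorted(cumulatives[0])
    return {o: min(c[o] for c in cumulatives) for o in outcomes}

d4 = {1: 1, 2: 2, 3: 3, 4: 4}    # cumulative quantities of a d4
flat = {1: 0, 2: 2, 3: 4, 4: 4}  # a die rolling only 2 or 3
print(pointwise_max_cumulative(d4, flat))  # {1: 0, 2: 2, 3: 3, 4: 4}
```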
def pointwise_min( arg0, /, *more_args: Die[~T]) -> Die[~T]:
300def pointwise_min(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
301    """Selects the highest chance of rolling <= each outcome among the arguments.
302
303    Naming not finalized.
304    
305    Specifically, for each outcome, the chance of the result rolling <= that
306    outcome is the same as the highest chance of rolling <= that outcome among
307    the arguments.
308
309    Equivalently, any quantile in the result is the lowest of that quantile
310    among the arguments.
311
312    This is useful for selecting from several possible moves where you are
313    trying to get <= a threshold that is known but could change depending on the
314    situation.
315    
316    Args:
317        dice: Either an iterable of dice, or two or more dice as separate
318            arguments.
319    """
320    if len(more_args) == 0:
321        args = arg0
322    else:
323        args = (arg0, ) + more_args
324    args = commonize_denominator(*args)
325    outcomes = sorted_union(*args)
326    cumulative = [
327        max(die.quantity('<=', outcome) for die in args)
328        for outcome in outcomes
329    ]
330    return from_cumulative(outcomes, cumulative)

Selects the highest chance of rolling <= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling <= that outcome is the same as the highest chance of rolling <= that outcome among the arguments.

Equivalently, any quantile in the result is the lowest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get <= a threshold that is known but could change depending on the situation.

Arguments:
  • dice: Either an iterable of dice, or two or more dice as separate arguments.
def lowest( arg0, /, *more_args: Union[~T, Die[~T]], keep: int | None = None, drop: int | None = None, default: Optional[~T] = None) -> Die[~T]:
 99def lowest(arg0,
100           /,
101           *more_args: 'T | icepool.Die[T]',
102           keep: int | None = None,
103           drop: int | None = None,
104           default: T | None = None) -> 'icepool.Die[T]':
105    """The lowest outcome among the rolls, or the sum of some of the lowest.
106
107    The outcomes should support addition and multiplication if `keep != 1`.
108
109    Args:
110        args: Dice or individual outcomes in a single iterable, or as two or
111            more separate arguments. Similar to the built-in `min()`.
112        keep, drop: These arguments work together:
113            * If neither are provided, the single lowest die will be taken.
114            * If only `keep` is provided, the `keep` lowest dice will be summed.
115            * If only `drop` is provided, the `drop` lowest dice will be dropped
116                and the rest will be summed.
117            * If both are provided, `drop` lowest dice will be dropped, then
118                the next `keep` lowest dice will be summed.
119        default: If an empty iterable is provided, the result will be a die that
120            always rolls this value.
121
122    Raises:
123        ValueError if an empty iterable is provided with no `default`.
124    """
125    if len(more_args) == 0:
126        args = arg0
127    else:
128        args = (arg0, ) + more_args
129
130    if len(args) == 0:
131        if default is None:
132            raise ValueError(
133                "lowest() arg is an empty sequence and no default was provided."
134            )
135        else:
136            return icepool.Die([default])
137
138    index_slice = lowest_slice(keep, drop)
139    return _sum_slice(*args, index_slice=index_slice)

The lowest outcome among the rolls, or the sum of some of the lowest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in min().
  • keep, drop: These arguments work together:
    • If neither are provided, the single lowest die will be taken.
    • If only keep is provided, the keep lowest dice will be summed.
    • If only drop is provided, the drop lowest dice will be dropped and the rest will be summed.
    • If both are provided, drop lowest dice will be dropped, then the next keep lowest dice will be summed.
  • default: If an empty iterable is provided, the result will be a die that always rolls this value.
Raises:
  • ValueError if an empty iterable is provided with no default.
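For a fixed set of rolls, the keep/drop rules reduce to slicing the sorted sequence; a stdlib sketch:

```python
def lowest_of(rolls, keep=None, drop=None):
    """Sum of the kept lowest rolls, following the keep/drop rules above."""
    ordered = sorted(rolls)
    if keep is None and drop is None:
        return ordered[0]                    # single lowest die
    start = drop or 0                        # drop the `drop` lowest first
    stop = len(ordered) if keep is None else start + keep
    return sum(ordered[start:stop])

print(lowest_of([6, 1, 3, 5], keep=2))          # 1 + 3 = 4
print(lowest_of([6, 1, 3, 5], drop=1, keep=2))  # drop the 1, then 3 + 5 = 8
```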
def highest( arg0, /, *more_args: Union[~T, Die[~T]], keep: int | None = None, drop: int | None = None, default: Optional[~T] = None) -> Die[~T]:
153def highest(arg0,
154            /,
155            *more_args: 'T | icepool.Die[T]',
156            keep: int | None = None,
157            drop: int | None = None,
158            default: T | None = None) -> 'icepool.Die[T]':
159    """The highest outcome among the rolls, or the sum of some of the highest.
160
161    The outcomes should support addition and multiplication if `keep != 1`.
162
163    Args:
164        args: Dice or individual outcomes in a single iterable, or as two or
165            more separate arguments. Similar to the built-in `max()`.
166        keep, drop: These arguments work together:
167            * If neither are provided, the single highest die will be taken.
168            * If only `keep` is provided, the `keep` highest dice will be summed.
169            * If only `drop` is provided, the `drop` highest dice will be dropped
170                and the rest will be summed.
171            * If both are provided, `drop` highest dice will be dropped, then
172                the next `keep` highest dice will be summed.
175        default: If an empty iterable is provided, the result will be a die that
176            always rolls this value.
177
178    Raises:
179        ValueError if an empty iterable is provided with no `default`.
180    """
181    if len(more_args) == 0:
182        args = arg0
183    else:
184        args = (arg0, ) + more_args
185
186    if len(args) == 0:
187        if default is None:
188            raise ValueError(
189                "highest() arg is an empty sequence and no default was provided."
190            )
191        else:
192            return icepool.Die([default])
193
194    index_slice = highest_slice(keep, drop)
195    return _sum_slice(*args, index_slice=index_slice)

The highest outcome among the rolls, or the sum of some of the highest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in max().
  • keep, drop: These arguments work together:
    • If neither are provided, the single highest die will be taken.
    • If only keep is provided, the keep highest dice will be summed.
    • If only drop is provided, the drop highest dice will be dropped and the rest will be summed.
    • If both are provided, drop highest dice will be dropped, then the next keep highest dice will be summed.
  • default: If an empty iterable is provided, the result will be a die that always rolls this value.
Raises:
  • ValueError if an empty iterable is provided with no default.
def middle( arg0, /, *more_args: Union[~T, Die[~T]], keep: int = 1, tie: Literal['error', 'high', 'low'] = 'error', default: Optional[~T] = None) -> Die[~T]:
209def middle(arg0,
210           /,
211           *more_args: 'T | icepool.Die[T]',
212           keep: int = 1,
213           tie: Literal['error', 'high', 'low'] = 'error',
214           default: T | None = None) -> 'icepool.Die[T]':
215    """The middle of the outcomes among the rolls, or the sum of some of the middle.
216
217    The outcomes should support addition and multiplication if `keep != 1`.
218
219    Args:
220        args: Dice or individual outcomes in a single iterable, or as two or
221            more separate arguments.
222        keep: The number of outcomes to sum.
223        tie: What to do if `keep` is odd but the number of args is even, or
224            vice versa.
225            * 'error' (default): Raises `IndexError`.
226            * 'high': The higher outcome is taken.
227            * 'low': The lower outcome is taken.
228        default: If an empty iterable is provided, the result will be a die that
229            always rolls this value.
230
231    Raises:
232        ValueError if an empty iterable is provided with no `default`.
233    """
234    if len(more_args) == 0:
235        args = arg0
236    else:
237        args = (arg0, ) + more_args
238
239    if len(args) == 0:
240        if default is None:
241            raise ValueError(
242                "middle() arg is an empty sequence and no default was provided."
243            )
244        else:
245            return icepool.Die([default])
246
247    # Expression evaluators are difficult to type.
248    return icepool.Pool(args).middle(keep, tie=tie).sum()  # type: ignore

The middle of the outcomes among the rolls, or the sum of some of the middle.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • args: Dice or individual outcomes in a single iterable, or as two or more separate arguments.
  • keep: The number of outcomes to sum.
  • tie: What to do if keep is odd but the number of args is even, or vice versa.
    • 'error' (default): Raises IndexError.
    • 'high': The higher outcome is taken.
    • 'low': The lower outcome is taken.
  • default: If an empty iterable is provided, the result will be a die that always rolls this value.
Raises:
  • ValueError if an empty iterable is provided with no default.
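For a fixed sequence of rolls, the tie rule decides which way to shift the middle window when the parities of `keep` and the number of rolls differ; a sketch of that selection (the probabilistic version works on pools, but the windowing logic is the same idea):

```python
def middle_of(rolls, keep=1, tie='error'):
    """Sum of the middle `keep` rolls, mirroring the tie rules above."""
    ordered = sorted(rolls)
    extra = len(ordered) - keep  # rolls left outside the middle window
    if extra % 2 == 0:
        start = extra // 2
    elif tie == 'high':
        start = (extra + 1) // 2  # shift the window up
    elif tie == 'low':
        start = extra // 2        # shift the window down
    else:
        raise IndexError('keep and number of rolls have mismatched parity')
    return sum(ordered[start:start + keep])

print(middle_of([1, 2, 3]))                 # 2
print(middle_of([1, 2, 3, 4], tie='high'))  # 3
```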
def min_outcome( *args: Union[Iterable[Union[~T, Population[~T]]], ~T]) -> ~T:
343def min_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
344    """The minimum possible outcome among the populations.
345    
346    Args:
347        Populations or single outcomes. Alternatively, a single iterable argument of such.
348    """
349    return min(_iter_outcomes(*args))

The minimum possible outcome among the populations.

Arguments:
  • Populations or single outcomes. Alternatively, a single iterable argument of such.
def max_outcome( *args: Union[Iterable[Union[~T, Population[~T]]], ~T]) -> ~T:
362def max_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
363    """The maximum possible outcome among the populations.
364    
365    Args:
366        Populations or single outcomes. Alternatively, a single iterable argument of such.
367    """
368    return max(_iter_outcomes(*args))

The maximum possible outcome among the populations.

Arguments:
  • Populations or single outcomes. Alternatively, a single iterable argument of such.
def consecutive(*args: Iterable[int]) -> Sequence[int]:
371def consecutive(*args: Iterable[int]) -> Sequence[int]:
372    """A minimal sequence of consecutive ints covering the argument sets."""
373    start = min((x for x in itertools.chain(*args)), default=None)
374    if start is None:
375        return ()
376    stop = max(x for x in itertools.chain(*args))
377    return tuple(range(start, stop + 1))

A minimal sequence of consecutive ints covering the argument sets.

def sorted_union(*args: Iterable[~T]) -> tuple[~T, ...]:
380def sorted_union(*args: Iterable[T]) -> tuple[T, ...]:
381    """Merge sets into a sorted sequence."""
382    if not args:
383        return ()
384    return tuple(sorted(set.union(*(set(arg) for arg in args))))

Merge sets into a sorted sequence.

def commonize_denominator( *dice: Union[~T, Die[~T]]) -> tuple[Die[~T], ...]:
387def commonize_denominator(
388        *dice: 'T | icepool.Die[T]') -> tuple['icepool.Die[T]', ...]:
389    """Scale the quantities of the dice so that all of them have the same denominator.
390
391    The denominator is the LCM of the denominators of the arguments.
392
393    Args:
394        *dice: Any number of dice or single outcomes convertible to dice.
395
396    Returns:
397        A tuple of dice with the same denominator.
398    """
399    converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
400    denominator_lcm = math.lcm(*(die.denominator() for die in converted_dice
401                                 if die.denominator() > 0))
402    return tuple(
403        die.multiply_quantities(denominator_lcm //
404                                die.denominator() if die.denominator() >
405                                0 else 1) for die in converted_dice)

Scale the quantities of the dice so that all of them have the same denominator.

The denominator is the LCM of the denominators of the arguments.

Arguments:
  • *dice: Any number of dice or single outcomes convertible to dice.
Returns:
  • A tuple of dice with the same denominator.
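A stdlib sketch of the scaling, using `math.lcm` on plain dicts of outcome → quantity in place of dice:

```python
import math

def commonize(*dice: dict[int, int]) -> tuple[dict[int, int], ...]:
    """Scale each die's quantities so all share the LCM denominator."""
    denominators = [sum(d.values()) for d in dice]
    target = math.lcm(*denominators)
    return tuple(
        {outcome: q * (target // denom) for outcome, q in d.items()}
        for d, denom in zip(dice, denominators))

d2 = {1: 1, 2: 1}          # denominator 2
d3 = {1: 1, 2: 1, 3: 1}    # denominator 3
print(commonize(d2, d3))   # both scaled to denominator 6
```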

def reduce( function: Callable[[~T, ~T], Union[~T, Die[~T], RerollType]], dice: Iterable[Union[~T, Die[~T]]], *, initial: Union[~T, Die[~T], NoneType] = None) -> Die[~T]:
408def reduce(
409        function: 'Callable[[T, T], T | icepool.Die[T] | icepool.RerollType]',
410        dice: 'Iterable[T | icepool.Die[T]]',
411        *,
412        initial: 'T | icepool.Die[T] | None' = None) -> 'icepool.Die[T]':
413    """Applies a function of two arguments cumulatively to a sequence of dice.
414
415    Analogous to the
416    [`functools` function of the same name.](https://docs.python.org/3/library/functools.html#functools.reduce)
417
418    Args:
419        function: The function to map. The function should take two arguments,
420            which are an outcome from each of two dice, and produce an outcome
421            of the same type. It may also return `Reroll`, in which case the
422            entire sequence is effectively rerolled.
423        dice: A sequence of dice to map the function to, from left to right.
424        initial: If provided, this will be placed at the front of the sequence
425            of dice.
426        again_count, again_depth, again_end: Forwarded to the final die constructor.
427    """
428    # Conversion to dice is not necessary since map() takes care of that.
429    iter_dice = iter(dice)
430    if initial is not None:
431        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
432    else:
433        result = icepool.implicit_convert_to_die(next(iter_dice))
434    for die in iter_dice:
435        result = map(function, result, die)
436    return result

Applies a function of two arguments cumulatively to a sequence of dice.

Analogous to the functools function of the same name.

Arguments:
  • function: The function to map. The function should take two arguments, which are an outcome from each of two dice, and produce an outcome of the same type. It may also return Reroll, in which case the entire sequence is effectively rerolled.
  • dice: A sequence of dice to map the function to, from left to right.
  • initial: If provided, this will be placed at the front of the sequence of dice.
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
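A stdlib sketch of the left-to-right reduction, here pairing `functools.reduce` with a hand-rolled combiner over dicts of outcome → quantity (the real `reduce()` delegates each pairwise step to `icepool.map()`):

```python
import functools
import operator

def combine(a: dict[int, int], b: dict[int, int], function) -> dict[int, int]:
    """Apply `function` to every pair of outcomes, multiplying quantities."""
    result: dict[int, int] = {}
    for x, qx in a.items():
        for y, qy in b.items():
            out = function(x, y)
            result[out] = result.get(out, 0) + qx * qy
    return result

d2 = {1: 1, 2: 1}
# Sum of three d2s via left-to-right reduction.
total = functools.reduce(lambda a, b: combine(a, b, operator.add), [d2, d2, d2])
print(total)  # outcomes 3..6 with quantities 1, 3, 3, 1
```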
def accumulate( function: Callable[[~T, ~T], Union[~T, Die[~T]]], dice: Iterable[Union[~T, Die[~T]]], *, initial: Union[~T, Die[~T], NoneType] = None) -> Iterator[Die[~T]]:
439def accumulate(
440        function: 'Callable[[T, T], T | icepool.Die[T]]',
441        dice: 'Iterable[T | icepool.Die[T]]',
442        *,
443        initial: 'T | icepool.Die[T] | None' = None
444) -> Iterator['icepool.Die[T]']:
445    """Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.
446
447    Analogous to the
448    [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate),
449    though with no default function and
450    the same parameter order as `reduce()`.
451
452    The number of results is equal to the number of elements of `dice`, with
453    one additional element if `initial` is provided.
454
455    Args:
456        function: The function to map. The function should take two arguments,
457            which are an outcome from each of two dice.
458        dice: A sequence of dice to map the function to, from left to right.
459        initial: If provided, this will be placed at the front of the sequence
460            of dice.
461    """
462    # Conversion to dice is not necessary since map() takes care of that.
463    iter_dice = iter(dice)
464    if initial is not None:
465        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
466    else:
467        try:
468            result = icepool.implicit_convert_to_die(next(iter_dice))
469        except StopIteration:
470            return
471    yield result
472    for die in iter_dice:
473        result = map(function, result, die)
474        yield result

Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

Analogous to the itertools function of the same name, though with no default function and the same parameter order as reduce().

The number of results is equal to the number of elements of dice, with one additional element if initial is provided.

Arguments:
  • function: The function to map. The function should take two arguments, which are an outcome from each of two dice.
  • dice: A sequence of dice to map the function to, from left to right.
  • initial: If provided, this will be placed at the front of the sequence of dice.
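The relationship to `itertools.accumulate` can be seen with plain numbers; note that icepool's `accumulate` takes the function first, like `reduce()`, whereas the stdlib version takes the iterable first:

```python
from itertools import accumulate
from functools import reduce

# Plain-number analogue: accumulate yields every intermediate reduce() result.
outcomes = [1, 2, 3, 4]
running = list(accumulate(outcomes, lambda a, b: a + b))
print(running)  # [1, 3, 6, 10]

# The last accumulated value equals the full reduce().
assert running[-1] == reduce(lambda a, b: a + b, outcomes)

# With an initial value there is one extra element, matching the docstring's
# note that `initial` adds one additional result.
seeded = list(accumulate(outcomes, lambda a, b: a + b, initial=10))
print(seeded)  # [10, 11, 13, 16, 20]
```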
def map( repl: Union[Callable[..., Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]], Mapping[Any, Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]]], /, *args: Outcome | Die | MultisetExpression, star: bool | None = None, repeat: Union[int, Literal['inf']] = 1, time_limit: Union[int, Literal['inf'], NoneType] = None, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~T, Die[~T], RerollType, NoneType] = None, **kwargs) -> Die[~T]:
500def map(
501        repl:
502    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
503        /,
504        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
505        star: bool | None = None,
506        repeat: int | Literal['inf'] = 1,
507        time_limit: int | Literal['inf'] | None = None,
508        again_count: int | None = None,
509        again_depth: int | None = None,
510        again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None,
511        **kwargs) -> 'icepool.Die[T]':
512    """Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die.
513
514    See `map_function` for a decorator version of this.
515
516    Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.
517
518    `map()` is flexible but not very efficient for more than a few dice.
519    If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
520    `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
521    efficient than using `map` on the dice in order.
522
523    `Again` can be used but is not recommended with `repeat` other than 1.
524
525    Args:
526        repl: One of the following:
527            * A callable that takes in one outcome per element of args and
528                produces a new outcome.
529            * A mapping from old outcomes to new outcomes.
530                Unmapped old outcomes stay the same.
531                In this case args must have exactly one element.
532            As with the `Die` constructor, the new outcomes:
533            * May be dice rather than just single outcomes.
534            * The special value `icepool.Reroll` will reroll that old outcome.
535            * `tuples` containing `Population`s will be `tupleize`d into
536                `Population`s of `tuple`s.
537                This does not apply to subclasses of `tuple`s such as `namedtuple`
538                or other classes such as `Vector`.
539        *args: `repl` will be called with all joint outcomes of these.
540            Allowed arg types are:
541            * Single outcome.
542            * `Die`. All outcomes will be sent to `repl`.
543            * `MultisetExpression`. All sorted tuples of outcomes will be sent
544                to `repl`, as `MultisetExpression.expand()`.
545        star: If `True`, the first of the args will be unpacked before giving
546            them to `repl`.
547            If not provided, it will be guessed based on the signature of `repl`
548            and the number of arguments.
549        repeat: This will be repeated with the same arguments on the
550            result this many times, except the first of `args` will be replaced
551            by the result of the previous iteration.
552
553            Note that returning `Reroll` from `repl` will effectively reroll all
554            arguments, including the first argument which represents the result
555            of the process up to this point. If you only want to reroll the
556            current stage, you can nest another `map` inside `repl`.
557
558            EXPERIMENTAL: If set to `'inf'`, the result will be as if this
559            were repeated an infinite number of times. In this case, the
560            result will be in simplest form.
561        time_limit: Similar to `repeat`, but will return early if a fixed point
562            is reached. If both `repeat` and `time_limit` are provided
563            (not recommended), `time_limit` takes priority.
564        again_count, again_depth, again_end: Forwarded to the final die constructor.
565        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
566            Unlike *args, outcomes will not be expanded, i.e. `Die` and
567            `MultisetExpression` will be passed as-is. This is invalid for
568            non-callable `repl`.
569    """
570    transition_function = _canonicalize_transition_function(
571        repl, len(args), star)
572
573    if len(args) == 0:
574        if repeat != 1:
575            raise ValueError('If no arguments are given, repeat must be 1.')
576        return icepool.Die([transition_function(**kwargs)],
577                           again_count=again_count,
578                           again_depth=again_depth,
579                           again_end=again_end)
580
581    # Here len(args) is at least 1.
582
583    first_arg = args[0]
584    extra_args = args[1:]
585
586    if time_limit is not None:
587        repeat = time_limit
588
589    result: 'icepool.Die[T]'
590
591    if repeat == 'inf':
592        # Infinite repeat.
593        # T_co and U should be the same in this case.
594        def unary_transition_function(state):
595            return map(transition_function,
596                       state,
597                       *extra_args,
598                       star=False,
599                       again_count=again_count,
600                       again_depth=again_depth,
601                       again_end=again_end,
602                       **kwargs)
603
604        result, _ = icepool.population.markov_chain.absorbing_markov_chain(
605            icepool.Die([first_arg]), unary_transition_function)
606        return result
607    else:
608        if repeat < 0:
609            raise ValueError('repeat cannot be negative.')
610
611        if repeat == 0:
612            return icepool.Die([first_arg])
613        elif repeat == 1 and time_limit is None:
614            final_outcomes: 'list[T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' = []
615            final_quantities: list[int] = []
616            for outcomes, final_quantity in icepool.iter_cartesian_product(
617                    *args):
618                final_outcome = transition_function(*outcomes, **kwargs)
619                if final_outcome is not icepool.Reroll:
620                    final_outcomes.append(final_outcome)
621                    final_quantities.append(final_quantity)
622            return icepool.Die(final_outcomes,
623                               final_quantities,
624                               again_count=again_count,
625                               again_depth=again_depth,
626                               again_end=again_end)
627        else:
628            result = icepool.Die([first_arg])
629            for _ in range(repeat):
630                next_result = icepool.map(transition_function,
631                                          result,
632                                          *extra_args,
633                                          star=False,
634                                          again_count=again_count,
635                                          again_depth=again_depth,
636                                          again_end=again_end,
637                                          **kwargs)
638                if time_limit is not None and result.simplify(
639                ) == next_result.simplify():
640                    return result
641                result = next_result
642            return result

Applies func(outcome_of_die_0, outcome_of_die_1, ...) for all joint outcomes, returning a Die.

See map_function for a decorator version of this.

Example: map(lambda a, b: a + b, d6, d6) is the same as d6 + d6.

map() is flexible but not very efficient for more than a few dice. If at all possible, use reduce(), MultisetExpression methods, and/or MultisetEvaluators. Even Pool.expand() (which sorts rolls) is more efficient than using map on the dice in order.

Again can be used but is not recommended with repeat other than 1.

Arguments:
  • repl: One of the following:
    • A callable that takes in one outcome per element of args and produces a new outcome.
    • A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same. In this case args must have exactly one element.
    As with the Die constructor, the new outcomes:
    • May be dice rather than just single outcomes.
    • The special value icepool.Reroll will reroll that old outcome.
    • tuples containing Populations will be tupleized into Populations of tuples. This does not apply to subclasses of tuples such as namedtuple or other classes such as Vector.
  • *args: repl will be called with all joint outcomes of these. Allowed arg types are:
    • Single outcome.
    • Die. All outcomes will be sent to repl.
    • MultisetExpression. All sorted tuples of outcomes will be sent to repl, as MultisetExpression.expand().
  • star: If True, the first of the args will be unpacked before giving them to repl. If not provided, it will be guessed based on the signature of repl and the number of arguments.
  • repeat: This will be repeated with the same arguments on the result this many times, except the first of args will be replaced by the result of the previous iteration.

    Note that returning Reroll from repl will effectively reroll all arguments, including the first argument which represents the result of the process up to this point. If you only want to reroll the current stage, you can nest another map inside repl.

    EXPERIMENTAL: If set to 'inf', the result will be as if this were repeated an infinite number of times. In this case, the result will be in simplest form.

  • time_limit: Similar to repeat, but will return early if a fixed point is reached. If both repeat and time_limit are provided (not recommended), time_limit takes priority.
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
  • **kwargs: Keyword-only arguments can be forwarded to a callable repl. Unlike *args, outcomes will not be expanded, i.e. Die and MultisetExpression will be passed as-is. This is invalid for non-callable repl.
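The `repeat=1` core of `map()` (the Cartesian-product loop in source lines 614–626 above) can be modeled with plain dicts; this toy `map_outcomes` helper is hypothetical and only illustrates how joint outcomes and quantities combine:

```python
from itertools import product
from collections import Counter

def map_outcomes(function, *dice):
    """Toy model of icepool.map() for repeat=1: dice are {outcome: quantity}
    dicts; apply `function` to every joint outcome, multiplying quantities."""
    result = Counter()
    for combo in product(*(d.items() for d in dice)):
        outcomes = tuple(o for o, _ in combo)
        quantity = 1
        for _, q in combo:
            quantity *= q
        result[function(*outcomes)] += quantity
    return dict(result)

d6 = {n: 1 for n in range(1, 7)}
two_d6 = map_outcomes(lambda a, b: a + b, d6, d6)
print(two_d6[7])             # 6 ways to roll a 7 on 2d6
print(sum(two_d6.values()))  # denominator 36
```

This also shows why `map` scales poorly with many dice: the joint product grows multiplicatively, which is why the docstring recommends `reduce()` or multiset evaluation where possible.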
def map_function( function: Optional[Callable[..., Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]]] = None, /, *, star: bool | None = None, repeat: Union[int, Literal['inf']] = 1, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~T, Die[~T], RerollType, NoneType] = None, **kwargs) -> Union[Callable[..., Die[~T]], Callable[..., Callable[..., Die[~T]]]]:
683def map_function(
684    function:
685    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | None' = None,
686    /,
687    *,
688    star: bool | None = None,
689    repeat: int | Literal['inf'] = 1,
690    again_count: int | None = None,
691    again_depth: int | None = None,
692    again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None,
693    **kwargs
694) -> 'Callable[..., icepool.Die[T]] | Callable[..., Callable[..., icepool.Die[T]]]':
695    """Decorator that turns a function that takes outcomes into a function that takes dice.
696
697    The result must be a `Die`.
698
699    This is basically a decorator version of `map()` and produces behavior
700    similar to AnyDice functions, though Icepool has different typing rules
701    among other differences.
702
703    `map_function` can either be used with no arguments:
704
705    ```python
706    @map_function
707    def explode_six(x):
708        if x == 6:
709            return 6 + Again
710        else:
711            return x
712
713    explode_six(d6, again_depth=2)
714    ```
715
716    Or with keyword arguments, in which case the extra arguments are bound:
717
718    ```python
719    @map_function(again_depth=2)
720    def explode_six(x):
721        if x == 6:
722            return 6 + Again
723        else:
724            return x
725
726    explode_six(d6)
727    ```
728
729    Args:
730        again_count, again_depth, again_end: Forwarded to the final die constructor.
731    """
732
733    if function is not None:
734        return update_wrapper(partial(map, function, **kwargs), function)
735    else:
736
737        def decorator(
738            function:
739            'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]'
740        ) -> 'Callable[..., icepool.Die[T]]':
741
742            return update_wrapper(
743                partial(map,
744                        function,
745                        star=star,
746                        repeat=repeat,
747                        again_count=again_count,
748                        again_depth=again_depth,
749                        again_end=again_end,
750                        **kwargs), function)
751
752        return decorator

Decorator that turns a function that takes outcomes into a function that takes dice.

The result must be a Die.

This is basically a decorator version of map() and produces behavior similar to AnyDice functions, though Icepool has different typing rules among other differences.

map_function can either be used with no arguments:

@map_function
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6, again_depth=2)

Or with keyword arguments, in which case the extra arguments are bound:

@map_function(again_depth=2)
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6)

Arguments:
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
def map_and_time( repl: Union[Callable[..., Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]], Mapping[Any, Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]]], initial_state: Union[~T, Die[~T]], /, *extra_args, star: bool | None = None, time_limit: int, **kwargs) -> Die[tuple[~T, int]]:
755def map_and_time(
756        repl:
757    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
758        initial_state: 'T | icepool.Die[T]',
759        /,
760        *extra_args,
761        star: bool | None = None,
762        time_limit: int,
763        **kwargs) -> 'icepool.Die[tuple[T, int]]':
764    """Repeatedly map outcomes of the state to other outcomes, while also
765    counting timesteps.
766
767    This is useful for representing processes.
768
769    The outcomes of the result are `(outcome, time)`, where `time` is the
770    number of timesteps needed to reach an absorbing outcome (an outcome that
771    only leads to itself), or `time_limit`, whichever is less.
772
773    This will return early if it reaches a fixed point.
774    Therefore, you can set `time_limit` equal to the maximum number of
775    timesteps you could possibly be interested in without worrying about
776    extra computations after the fixed point.
777
778    Args:
779        repl: One of the following:
780            * A callable returning a new outcome for each old outcome.
781            * A mapping from old outcomes to new outcomes.
782                Unmapped old outcomes stay the same.
783            The new outcomes may be dice rather than just single outcomes.
784            The special value `icepool.Reroll` will reroll that old outcome.
785        initial_state: The initial state of the process, which could be a
786            single state or a `Die`.
787        extra_args: Extra arguments to use, as per `map`. Note that these are
788            rerolled at every time step.
789        star: If `True`, the first of the args will be unpacked before giving
790            them to `repl`.
791            If not provided, it will be guessed based on the signature of `repl`
792            and the number of arguments.
793        time_limit: This will be repeated with the same arguments on the result
794            up to this many times.
795        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
796            Unlike *args, outcomes will not be expanded, i.e. `Die` and
797            `MultisetExpression` will be passed as-is. This is invalid for
798            non-callable `repl`.
799
800    Returns:
801        The `Die` after the modification.
802    """
803    transition_function = _canonicalize_transition_function(
804        repl, 1 + len(extra_args), star)
805
806    result: 'icepool.Die[tuple[T, int]]' = map(lambda x: (x, 0), initial_state)
807
808    # Note that we don't expand extra_args during the outer map.
809    # This is needed to correctly evaluate whether each outcome is absorbing.
810    def transition_with_steps(outcome_and_steps, extra_args):
811        outcome, steps = outcome_and_steps
812        next_outcome = map(transition_function,
813                           outcome,
814                           *extra_args,
815                           star=False,
816                           **kwargs)
817        if icepool.population.markov_chain.is_absorbing(outcome, next_outcome):
818            return outcome, steps
819        else:
820            return icepool.tupleize(next_outcome, steps + 1)
821
822    return map(transition_with_steps,
823               result,
824               extra_args,
825               time_limit=time_limit)

Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

The outcomes of the result are (outcome, time), where time is the number of timesteps needed to reach an absorbing outcome (an outcome that only leads to itself), or time_limit, whichever is less.

This will return early if it reaches a fixed point. Therefore, you can set time_limit equal to the maximum number of timesteps you could possibly be interested in without worrying about extra computations after the fixed point.

Arguments:
  • repl: One of the following:
    • A callable returning a new outcome for each old outcome.
    • A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same. The new outcomes may be dice rather than just single outcomes. The special value icepool.Reroll will reroll that old outcome.
  • initial_state: The initial state of the process, which could be a single state or a Die.
  • extra_args: Extra arguments to use, as per map. Note that these are rerolled at every time step.
  • star: If True, the first of the args will be unpacked before giving them to repl. If not provided, it will be guessed based on the signature of repl and the number of arguments.
  • time_limit: This will be repeated with the same arguments on the result up to this many times.
  • **kwargs: Keyword-only arguments can be forwarded to a callable repl. Unlike *args, outcomes will not be expanded, i.e. Die and MultisetExpression will be passed as-is. This is invalid for non-callable repl.
Returns:

The Die after the modification.
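A toy pure-Python model of this bookkeeping (not icepool's implementation) makes the `(outcome, time)` semantics concrete; here `step` is a hypothetical function mapping a state to a `{state: probability}` dict, and absorbing states stop accruing time:

```python
from fractions import Fraction

def map_and_time_toy(step, initial, time_limit):
    """Toy model of map_and_time: track (state, time) pairs. States with
    step(s) == {s: 1} are absorbing and keep their recorded time; we also
    return early if the distribution stops changing (a fixed point)."""
    dist = {(initial, 0): Fraction(1)}
    for _ in range(time_limit):
        next_dist = {}
        for (state, t), p in dist.items():
            transitions = step(state)
            if transitions == {state: Fraction(1)}:  # absorbing state
                next_dist[(state, t)] = next_dist.get((state, t), 0) + p
            else:
                for s2, q in transitions.items():
                    key = (s2, t + 1)
                    next_dist[key] = next_dist.get(key, 0) + p * q
        if next_dist == dist:  # fixed point reached: stop early
            break
        dist = next_dist
    return dist

# Process: keep rolling a d6 until a 6 appears.
def step(state):
    if state == 'done':
        return {'done': Fraction(1)}
    return {'done': Fraction(1, 6), 'rolling': Fraction(5, 6)}

dist = map_and_time_toy(step, 'rolling', time_limit=3)
print(dist[('done', 1)])  # 1/6
print(dist[('done', 2)])  # 5/36
```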

def mean_time_to_absorb( repl: Union[Callable[..., Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]], Mapping[Any, Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]]], initial_state: Union[~T, Die[~T]], /, *extra_args, star: bool | None = None, **kwargs) -> fractions.Fraction:
828def mean_time_to_absorb(
829        repl:
830    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
831        initial_state: 'T | icepool.Die[T]',
832        /,
833        *extra_args,
834        star: bool | None = None,
835        **kwargs) -> Fraction:
836    """EXPERIMENTAL: The mean time for the process to reach an absorbing state.
837    
838    An absorbing state is one that maps to itself with unity probability.
839
840    Args:
841        repl: One of the following:
842            * A callable returning a new outcome for each old outcome.
843            * A mapping from old outcomes to new outcomes.
844                Unmapped old outcomes stay the same.
845            The new outcomes may be dice rather than just single outcomes.
846            The special value `icepool.Reroll` will reroll that old outcome.
847        initial_state: The initial state of the process, which could be a
848            single state or a `Die`.
849        extra_args: Extra arguments to use, as per `map`. Note that these are
850            rerolled at every time step.
851        star: If `True`, the first of the args will be unpacked before giving
852            them to `repl`.
853            If not provided, it will be guessed based on the signature of `repl`
854            and the number of arguments.
855        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
856            Unlike *args, outcomes will not be expanded, i.e. `Die` and
857            `MultisetExpression` will be passed as-is. This is invalid for
858            non-callable `repl`.
859
860    Returns:
861        The mean time to absorption.
862    """
863    transition_function = _canonicalize_transition_function(
864        repl, 1 + len(extra_args), star)
865
866    # Infinite repeat.
867    # T_co and U should be the same in this case.
868    def unary_transition_function(state):
869        return map(transition_function,
870                   state,
871                   *extra_args,
872                   star=False,
873                   **kwargs)
874
875    _, result = icepool.population.markov_chain.absorbing_markov_chain(
876        icepool.Die([initial_state]), unary_transition_function)
877    return result

EXPERIMENTAL: The mean time for the process to reach an absorbing state.

An absorbing state is one that maps to itself with unity probability.

Arguments:
  • repl: One of the following:
    • A callable returning a new outcome for each old outcome.
    • A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same. The new outcomes may be dice rather than just single outcomes. The special value icepool.Reroll will reroll that old outcome.
  • initial_state: The initial state of the process, which could be a single state or a Die.
  • extra_args: Extra arguments to use, as per map. Note that these are rerolled at every time step.
  • star: If True, the first of the args will be unpacked before giving them to repl. If not provided, it will be guessed based on the signature of repl and the number of arguments.
  • **kwargs: Keyword-only arguments can be forwarded to a callable repl. Unlike *args, outcomes will not be expanded, i.e. Die and MultisetExpression will be passed as-is. This is invalid for non-callable repl.
Returns:

The mean time to absorption.
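For the simplest case, a single transient state that absorbs with probability p each step, the mean absorption time has a closed form, which can serve as a sanity check; this is a sketch of the underlying math, not icepool's Markov-chain implementation:

```python
from fractions import Fraction

# A single transient state absorbing with probability p each step satisfies
# m = 1 + (1 - p) * m, so the mean absorption time is m = 1 / p.
p = Fraction(1, 6)  # e.g. keep rolling a d6 until a 6 appears
mean_time = 1 / p   # exactly 6

# Cross-check by summing t * P(absorb exactly at step t) far enough out.
partial = sum(t * (1 - p)**(t - 1) * p for t in range(1, 200))
print(mean_time)       # 6
print(float(partial))  # close to 6.0
```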

def map_to_pool( repl: Union[Callable[..., Union[MultisetExpression, Sequence[Union[Die[~T], ~T]], Mapping[Die[~T], int], Mapping[~T, int], RerollType]], Mapping[Any, Union[MultisetExpression, Sequence[Union[Die[~T], ~T]], Mapping[Die[~T], int], Mapping[~T, int], RerollType]]], /, *args: Outcome | Die | MultisetExpression, star: bool | None = None, **kwargs) -> MultisetExpression[~T]:
880def map_to_pool(
881        repl:
882    'Callable[..., icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType] | Mapping[Any, icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType]',
883        /,
884        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
885        star: bool | None = None,
886        **kwargs) -> 'icepool.MultisetExpression[T]':
887    """EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a MultisetExpression.
888    
889    Args:
890        repl: One of the following:
891            * A callable that takes in one outcome per element of args and
892                produces a `MultisetExpression` or something convertible to a `Pool`.
893            * A mapping from old outcomes to `MultisetExpression` 
894                or something convertible to a `Pool`.
895                In this case args must have exactly one element.
896            The new outcomes may be dice rather than just single outcomes.
897            The special value `icepool.Reroll` will reroll that old outcome.
898        star: If `True`, the first of the args will be unpacked before giving
899            them to `repl`.
900            If not provided, it will be guessed based on the signature of `repl`
901            and the number of arguments.
902        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
903            Unlike *args, outcomes will not be expanded, i.e. `Die` and
904            `MultisetExpression` will be passed as-is. This is invalid for
905            non-callable `repl`.
906
907    Returns:
908        A `MultisetExpression` representing the mixture of `Pool`s. Note  
909        that this is not technically a `Pool`, though it supports most of 
910        the same operations.
911
912    Raises:
913        ValueError: If `denominator` cannot be made consistent with the 
914            resulting mixture of pools.
915    """
916    transition_function = _canonicalize_transition_function(
917        repl, len(args), star)
918
919    data: 'MutableMapping[icepool.MultisetExpression[T], int]' = defaultdict(
920        int)
921    for outcomes, quantity in icepool.iter_cartesian_product(*args):
922        pool = transition_function(*outcomes, **kwargs)
923        if pool is icepool.Reroll:
924            continue
925        elif isinstance(pool, icepool.MultisetExpression):
926            data[pool] += quantity
927        else:
928            data[icepool.Pool(pool)] += quantity
929    # I couldn't get the covariance / contravariance to work.
930    return icepool.MultisetMixture(data)  # type: ignore

EXPERIMENTAL: Applies repl(outcome_of_die_0, outcome_of_die_1, ...) for all joint outcomes, producing a MultisetExpression.

Arguments:
  • repl: One of the following:
    • A callable that takes in one outcome per element of args and produces a MultisetExpression or something convertible to a Pool.
    • A mapping from old outcomes to MultisetExpression or something convertible to a Pool. In this case args must have exactly one element. The new outcomes may be dice rather than just single outcomes. The special value icepool.Reroll will reroll that old outcome.
  • star: If True, the first of the args will be unpacked before giving them to repl. If not provided, it will be guessed based on the signature of repl and the number of arguments.
  • **kwargs: Keyword-only arguments can be forwarded to a callable repl. Unlike *args, outcomes will not be expanded, i.e. Die and MultisetExpression will be passed as-is. This is invalid for non-callable repl.
Returns:

A MultisetExpression representing the mixture of Pools. Note
that this is not technically a Pool, though it supports most of the same operations.

Raises:
  • ValueError: If denominator cannot be made consistent with the resulting mixture of pools.
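The accumulation loop shown in the source (lines 919–928 above) can be modeled with a `Counter`: each joint outcome names a pool, and joint quantities are summed per distinct pool to form the weighted mixture. The `repl` below, mapping a d3 roll to a pool of that many (hypothetical) d6s, is purely illustrative:

```python
from collections import Counter

d3 = {n: 1 for n in range(1, 4)}

def repl(n):
    # Hypothetical: outcome n selects a pool of n d6s, represented as a tuple.
    return ('d6',) * n

# Weighted mixture of pools, keyed by pool, mirroring map_to_pool's loop.
mixture = Counter()
for outcome, quantity in d3.items():
    mixture[repl(outcome)] += quantity

print(mixture[('d6', 'd6')])  # 1: only outcome 2 maps to a pool of two d6s
print(sum(mixture.values()))  # 3: the original denominator is preserved
```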
Reroll: Final = <RerollType.Reroll: 'Reroll'>

Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function and method descriptions for details.

This effectively removes the outcome from the probability space, along with its contribution to the denominator.

This can be used for conditional probability by removing all outcomes not consistent with the given observations.

Operation in specific cases:

  • When used with Again, only that stage is rerolled, not the entire Again tree.
  • To reroll with limited depth, use Die.reroll(), or Again with no modification.
  • When used with MultisetEvaluator, the entire evaluation is rerolled.
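The conditional-probability reading of `Reroll` can be sketched with a plain `{outcome: quantity}` dict: removing outcomes shrinks the denominator, which is exactly conditioning on the outcomes that remain.

```python
from fractions import Fraction

# Toy model of Reroll with unlimited depth: dropping outcomes removes their
# contribution to the denominator, i.e. conditions on the remaining outcomes.
d6 = {n: 1 for n in range(1, 7)}

# "Reroll" all odd outcomes: simply remove them from the distribution.
conditioned = {n: q for n, q in d6.items() if n % 2 == 0}

denominator = sum(conditioned.values())
print(denominator)                            # 3
print(Fraction(conditioned[6], denominator))  # 1/3, i.e. P(6 | even)
```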
class RerollType(enum.Enum):
32class RerollType(enum.Enum):
33    """The type of the Reroll singleton."""
34    Reroll = 'Reroll'
35    """Indicates an outcome should be rerolled (with unlimited depth)."""

The type of the Reroll singleton.

Reroll = <RerollType.Reroll: 'Reroll'>

Indicates an outcome should be rerolled (with unlimited depth).

class Pool(icepool.generator.keep.KeepGenerator[~T]):
 25class Pool(KeepGenerator[T]):
 26    """Represents a multiset of outcomes resulting from the roll of several dice.
 27
 28    This should be used in conjunction with `MultisetEvaluator` to generate a
 29    result.
 30
 31    Note that operators are performed on the multiset of rolls, not the multiset
 32    of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but
 33    an expression meaning "roll two pools of 3d6, with rolls in the second
 34    pool cancelling matching rolls in the first pool one-for-one".
 35    """
 36
 37    _dice: tuple[tuple['icepool.Die[T]', int], ...]
 38    _outcomes: tuple[T, ...]
 39
 40    def __new__(
 41            cls,
 42            dice:
 43        'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]',
 44            times: Sequence[int] | int = 1) -> 'Pool':
 45        """Public constructor for a pool.
 46
 47        Evaluation is most efficient when the dice are the same or same-side
 48        truncations of each other. For example, d4, d6, d8, d10, d12 are all
 49        same-side truncations of d12.
 50
 51        It is permissible to create a `Pool` without providing dice, but not all
 52        evaluators will handle this case, especially if they depend on the
 53        outcome type. Dice may be in the pool zero times, in which case their
 54        outcomes will be considered but without any count (unless another die
 55        has that outcome).
 56
 57        Args:
 58            dice: The dice to put in the `Pool`. This can be one of the following:
 59
 60                * A `Sequence` of `Die` or outcomes.
 61                * A `Mapping` of `Die` or outcomes to how many of that `Die` or
 62                    outcome to put in the `Pool`.
 63
 64                All outcomes within a `Pool` must be totally orderable.
 65            times: Multiplies the number of times each element of `dice` will
 66                be put into the pool.
 67                `times` can either be a sequence of the same length as
 68                `outcomes` or a single `int` to apply to all elements of
 69                `outcomes`.
 70
 71        Raises:
 72            ValueError: If a bare `Deck` or `Die` argument is provided.
 73                A `Pool` of a single `Die` should be constructed as `Pool([die])`.
 74        """
 75        if isinstance(dice, Pool):
 76            if times == 1:
 77                return dice
 78            else:
 79                dice = {die: quantity for die, quantity in dice._dice}
 80
 81        if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)):
 82            raise ValueError(
 83                f'A Pool cannot be constructed with a {type(dice).__name__} argument.'
 84            )
 85
 86        dice, times = icepool.creation_args.itemize(dice, times)
 87        converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
 88
 89        dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int)
 90        for die, qty in zip(converted_dice, times):
 91            if qty == 0:
 92                continue
 93            dice_counts[die] += qty
 94        keep_tuple = (1, ) * sum(times)
 95
 96        # Includes dice with zero qty.
 97        outcomes = icepool.sorted_union(*converted_dice)
 98        return cls._new_from_mapping(dice_counts, outcomes, keep_tuple)
 99
100    @classmethod
101    def _new_raw(cls, dice: tuple[tuple['icepool.Die[T]', int], ...],
102                 outcomes: tuple[T, ...], keep_tuple: tuple[int,
103                                                            ...]) -> 'Pool[T]':
104        """Create using a keep_tuple directly.
105
106        Args:
107            dice: A tuple of (die, count) pairs.
108            keep_tuple: A tuple of how many times to count each die.
109        """
110        self = super(Pool, cls).__new__(cls)
111        self._dice = dice
112        self._outcomes = outcomes
113        self._keep_tuple = keep_tuple
114        return self
115
116    @classmethod
117    def clear_cache(cls):
118        """Clears the global PoolSource cache."""
119        PoolSource._new_raw.cache_clear()
120
121    @classmethod
122    def _new_from_mapping(cls, dice_counts: Mapping['icepool.Die[T]', int],
123                          outcomes: tuple[T, ...],
124                          keep_tuple: tuple[int, ...]) -> 'Pool[T]':
125        """Creates a new pool.
126
127        Args:
128            dice_counts: A map from dice to rolls.
129            keep_tuple: A tuple with length equal to the number of dice.
130        """
131        dice = tuple(sorted(dice_counts.items(),
132                            key=lambda kv: kv[0].hash_key))
133        return Pool._new_raw(dice, outcomes, keep_tuple)
134
135    def _make_source(self):
136        return PoolSource(self._dice, self._outcomes, self._keep_tuple)
137
138    @cached_property
139    def _raw_size(self) -> int:
140        return sum(count for _, count in self._dice)
141
142    def raw_size(self) -> int:
143        """The number of dice in this pool before the keep_tuple is applied."""
144        return self._raw_size
145
146    @cached_property
147    def _denominator(self) -> int:
148        return math.prod(die.denominator()**count for die, count in self._dice)
149
150    def denominator(self) -> int:
151        return self._denominator
152
153    @cached_property
154    def _dice_tuple(self) -> tuple['icepool.Die[T]', ...]:
155        return sum(((die, ) * count for die, count in self._dice), start=())
156
157    @cached_property
158    def _unique_dice(self) -> Collection['icepool.Die[T]']:
159        return set(die for die, _ in self._dice)
160
161    def unique_dice(self) -> Collection['icepool.Die[T]']:
162        """The collection of unique dice in this pool."""
163        return self._unique_dice
164
165    def outcomes(self) -> Sequence[T]:
166        """The union of possible outcomes among all dice in this pool in ascending order."""
167        return self._outcomes
168
169    def _set_keep_tuple(self, keep_tuple: tuple[int,
170                                                ...]) -> 'KeepGenerator[T]':
171        return Pool._new_raw(self._dice, self._outcomes, keep_tuple)
172
173    def additive_union(
174        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
175    ) -> 'MultisetExpression[T]':
176        args = tuple(
177            icepool.expression.multiset_expression.
178            implicit_convert_to_expression(arg) for arg in args)
179        if all(isinstance(arg, Pool) for arg in args):
180            pools = cast(tuple[Pool[T], ...], args)
181            keep_tuple: tuple[int, ...] = tuple(
182                reduce(operator.add, (pool.keep_tuple() for pool in pools),
183                       ()))
184            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
185                                           for x in keep_tuple):
186                # All sorted positions count the same, so we can merge the
187                # pools.
188                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
189                for pool in pools:
190                    for die, die_count in pool._dice:
191                        dice[die] += die_count
192                outcomes = icepool.sorted_union(*(pool.outcomes()
193                                                  for pool in pools))
194                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
195        return KeepGenerator.additive_union(*args)
196
197    @property
198    def hash_key(self):
199        return Pool, self._dice, self._keep_tuple
200
201    def __str__(self) -> str:
202        return (
203            f'Pool of {self.raw_size()} dice with keep_tuple={self.keep_tuple()}\n'
204            + ''.join(f'  {repr(die)} : {count},\n'
205                      for die, count in self._dice))

Represents a multiset of outcomes resulting from the roll of several dice.

This should be used in conjunction with MultisetEvaluator to generate a result.

Note that operators are performed on the multiset of rolls, not the multiset of dice. For example, d6.pool(3) - d6.pool(3) is not an empty pool, but an expression meaning "roll two pools of 3d6, with rolls in the second pool cancelling matching rolls in the first pool one-for-one".
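The one-for-one cancellation can be sketched in plain Python with `collections.Counter`, which implements the same clamped multiset difference (the rolls below are hypothetical, for illustration only):

```python
from collections import Counter

# Multiset difference with one-for-one cancellation, as in
# d6.pool(3) - d6.pool(3) for one particular pair of rolls.
first = Counter([3, 5, 5])   # hypothetical rolls of the first pool
second = Counter([5, 6, 6])  # hypothetical rolls of the second pool
remaining = first - second   # Counter subtraction clamps counts at zero
sorted(remaining.elements())  # [3, 5]
```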

@classmethod
def clear_cache(cls):
116    @classmethod
117    def clear_cache(cls):
118        """Clears the global PoolSource cache."""
119        PoolSource._new_raw.cache_clear()

Clears the global PoolSource cache.

def raw_size(self) -> int:
142    def raw_size(self) -> int:
143        """The number of dice in this pool before the keep_tuple is applied."""
144        return self._raw_size

The number of dice in this pool before the keep_tuple is applied.

def denominator(self) -> int:
150    def denominator(self) -> int:
151        return self._denominator
def unique_dice(self) -> Collection[Die[~T]]:
161    def unique_dice(self) -> Collection['icepool.Die[T]']:
162        """The collection of unique dice in this pool."""
163        return self._unique_dice

The collection of unique dice in this pool.

def outcomes(self) -> Sequence[~T]:
165    def outcomes(self) -> Sequence[T]:
166        """The union of possible outcomes among all dice in this pool in ascending order."""
167        return self._outcomes

The union of possible outcomes among all dice in this pool in ascending order.

def additive_union( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
173    def additive_union(
174        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
175    ) -> 'MultisetExpression[T]':
176        args = tuple(
177            icepool.expression.multiset_expression.
178            implicit_convert_to_expression(arg) for arg in args)
179        if all(isinstance(arg, Pool) for arg in args):
180            pools = cast(tuple[Pool[T], ...], args)
181            keep_tuple: tuple[int, ...] = tuple(
182                reduce(operator.add, (pool.keep_tuple() for pool in pools),
183                       ()))
184            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
185                                           for x in keep_tuple):
186                # All sorted positions count the same, so we can merge the
187                # pools.
188                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
189                for pool in pools:
190                    for die, die_count in pool._dice:
191                        dice[die] += die_count
192                outcomes = icepool.sorted_union(*(pool.outcomes()
193                                                  for pool in pools))
194                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
195        return KeepGenerator.additive_union(*args)

The combined elements from all of the multisets.

Same as a + b + c + ....

Any resulting counts that would be negative are set to zero.

Example:

[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
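The example above can be reproduced in plain Python with `collections.Counter`, whose `+` operator likewise adds counts per element:

```python
from collections import Counter

# Plain-Python analogue of additive_union: counts simply add.
a = Counter([1, 2, 2, 3])
b = Counter([1, 2, 4])
combined = a + b
sorted(combined.elements())  # [1, 1, 2, 2, 2, 3, 4]
```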
hash_key
197    @property
198    def hash_key(self):
199        return Pool, self._dice, self._keep_tuple

A hash key for this object. This should include a type.

If None, this will not compare equal to any other object.

def standard_pool( die_sizes: Union[Collection[int], Mapping[int, int]]) -> Pool[int]:
343def standard_pool(
344        die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]':
345    """A `Pool` of standard dice (e.g. d6, d8...).
346
347    Args:
348        die_sizes: A collection of die sizes, which will put one die of that
349            size in the pool for each element.
350            Or, a mapping of die sizes to how many dice of that size to put
351            into the pool.
352            If empty, the pool will be considered to consist of zero zeros.
353    """
354    if not die_sizes:
355        return Pool({icepool.Die([0]): 0})
356    if isinstance(die_sizes, Mapping):
357        die_sizes = list(
358            itertools.chain.from_iterable([k] * v
359                                          for k, v in die_sizes.items()))
360    return Pool(list(icepool.d(x) for x in die_sizes))

A Pool of standard dice (e.g. d6, d8...).

Arguments:
  • die_sizes: A collection of die sizes, which will put one die of that size in the pool for each element. Or, a mapping of die sizes to how many dice of that size to put into the pool. If empty, the pool will be considered to consist of zero zeros.
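The mapping form expands as in the `itertools.chain.from_iterable` logic shown in the source above; for example:

```python
import itertools

# How a mapping argument expands into a flat list of die sizes.
die_sizes = {6: 3, 8: 2}  # 3d6 and 2d8
expanded = list(
    itertools.chain.from_iterable([k] * v for k, v in die_sizes.items()))
# expanded == [6, 6, 6, 8, 8]
```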
class MultisetGenerator(icepool.MultisetExpression[~T]):
17class MultisetGenerator(MultisetExpression[T]):
18    """Abstract base class for generating multisets.
19
20    These include dice pools (`Pool`) and card deals (`Deal`). Most likely you
21    will be using one of these two rather than writing your own subclass of
22    `MultisetGenerator`.
23
24    The multisets are incrementally generated one outcome at a time.
25    For each outcome, a `count` and `weight` are generated, along with a
26    smaller generator to produce the rest of the multiset.
27
28    You can perform simple evaluations using built-in operators and methods in
29    this class.
30    For more complex evaluations and better performance, particularly when
31    multiple generators are involved, you will want to write your own subclass
32    of `MultisetEvaluator`.
33    """
34
35    _children = ()
36
37    @abstractmethod
38    def _make_source(self) -> 'MultisetSource':
39        """Create a source from this generator."""
40
41    @property
42    def _has_parameter(self) -> bool:
43        return False
44
45    def _prepare(
46        self
47    ) -> Iterator[tuple['tuple[Dungeonlet[T, Any], ...]',
48                        'tuple[Questlet[T, Any], ...]',
49                        'tuple[MultisetSourceBase[T, Any], ...]', int]]:
50        dungeonlets = (MultisetFreeVariable[T, int](), )
51        questlets = (MultisetGeneratorQuestlet[T](), )
52        sources = (self._make_source(), )
53        weight = 1
54        yield dungeonlets, questlets, sources, weight

Abstract base class for generating multisets.

These include dice pools (Pool) and card deals (Deal). Most likely you will be using one of these two rather than writing your own subclass of MultisetGenerator.

The multisets are incrementally generated one outcome at a time. For each outcome, a count and weight are generated, along with a smaller generator to produce the rest of the multiset.

You can perform simple evaluations using built-in operators and methods in this class. For more complex evaluations and better performance, particularly when multiple generators are involved, you will want to write your own subclass of MultisetEvaluator.
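For a pool of identical fair dice, the per-outcome count and weight mentioned above follow a binomial pattern. A plain-Python sketch (illustrative of the weights involved, not of icepool's internal algorithm):

```python
from math import comb

# For n identical fair dice with s sides, the number of dice showing one
# particular outcome has weight comb(n, k) * (s - 1)**(n - k) out of s**n.
n, s = 2, 3  # 2d3, watching a single face
weights = {k: comb(n, k) * (s - 1)**(n - k) for k in range(n + 1)}
# weights == {0: 4, 1: 4, 2: 1}; the total 9 equals the denominator 3**2
```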

class MultisetExpression(icepool.expression.multiset_expression_base.MultisetExpressionBase[~T, int], icepool.expand.Expandable[tuple[~T, ...]]):
  66class MultisetExpression(MultisetExpressionBase[T, int],
  67                         Expandable[tuple[T, ...]]):
  68    """Abstract base class representing an expression that operates on single multisets.
  69
  70    There are three types of multiset expressions:
  71
  72    * `MultisetGenerator`, which produce raw outcomes and counts.
  73    * `MultisetOperator`, which takes outcomes with one or more counts and
  74        produces a count.
  75    * `MultisetVariable`, which is a temporary placeholder for some other 
  76        expression.
  77
  78    Expression methods can be applied to `MultisetGenerator`s to do simple
  79    evaluations. For joint evaluations, try `multiset_function`.
  80
  81    Use the provided operations to build up more complicated
  82    expressions, or to attach a final evaluator.
  83
  84    Operations include:
  85
  86    | Operation                   | Count / notes                               |
  87    |:----------------------------|:--------------------------------------------|
  88    | `additive_union`, `+`       | `l + r`                                     |
  89    | `difference`, `-`           | `l - r`                                     |
  90    | `intersection`, `&`         | `min(l, r)`                                 |
  91    | `union`, `\\|`               | `max(l, r)`                                 |
  92    | `symmetric_difference`, `^` | `abs(l - r)`                                |
  93    | `multiply_counts`, `*`      | `count * n`                                 |
  94    | `divide_counts`, `//`       | `count // n`                                |
  95    | `modulo_counts`, `%`        | `count % n`                                 |
  96    | `keep_counts`               | `count if count >= n else 0` etc.           |
  97    | unary `+`                   | same as `keep_counts('>=', 0)`              |
  98    | unary `-`                   | reverses the sign of all counts             |
  99    | `unique`                    | `min(count, n)`                             |
 100    | `keep_outcomes`             | `count if outcome in t else 0`              |
 101    | `drop_outcomes`             | `count if outcome not in t else 0`          |
 102    | `map_counts`                | `f(outcome, *counts)`                       |
 103    | `keep`, `[]`                | less capable than `KeepGenerator` version   |
 104    | `highest`                   | less capable than `KeepGenerator` version   |
 105    | `lowest`                    | less capable than `KeepGenerator` version   |
 106
 107    | Evaluator                      | Summary                                                                    |
 108    |:-------------------------------|:---------------------------------------------------------------------------|
 109    | `issubset`, `<=`               | Whether the left side's counts are all <= their counterparts on the right  |
 110    | `issuperset`, `>=`             | Whether the left side's counts are all >= their counterparts on the right  |
 111    | `isdisjoint`                   | Whether the left side has no positive counts in common with the right side |
 112    | `<`                            | As `<=`, but `False` if the two multisets are equal                        |
 113    | `>`                            | As `>=`, but `False` if the two multisets are equal                        |
 114    | `==`                           | Whether the left side has all the same counts as the right side            |
 115    | `!=`                           | Whether the left side has any different counts to the right side           |
 116    | `expand`                       | All elements in ascending order                                            |
 117    | `sum`                          | Sum of all elements                                                        |
 118    | `count`                        | The number of elements                                                     |
 119    | `any`                          | Whether there is at least 1 element                                        |
 120    | `highest_outcome_and_count`    | The highest outcome and how many of that outcome                           |
 121    | `all_counts`                   | All counts in descending order                                             |
 122    | `largest_count`                | The single largest count, aka x-of-a-kind                                  |
 123    | `largest_count_and_outcome`    | Same but also with the corresponding outcome                               |
 124    | `count_subset`, `//`           | The number of times the right side is contained in the left side           |
 125    | `largest_straight`             | Length of longest consecutive sequence                                     |
 126    | `largest_straight_and_outcome` | Same but also with the corresponding outcome                               |
 127    | `all_straights`                | Lengths of all consecutive sequences in descending order                   |
 128    """
 129
 130    def _make_param(self,
 131                    name: str,
 132                    arg_index: int,
 133                    star_index: int | None = None) -> 'MultisetParameter[T]':
 134        if star_index is not None:
 135            raise TypeError(
 136                'The single int count of MultisetExpression cannot be starred.'
 137            )
 138        return icepool.MultisetParameter(name, arg_index, star_index)
 139
 140    @property
 141    def _items_for_cartesian_product(
 142            self) -> Sequence[tuple[tuple[T, ...], int]]:
 143        expansion = cast('icepool.Die[tuple[T, ...]]', self.expand())
 144        return expansion.items()
 145
 146    # We need to reiterate this since we override __eq__.
 147    __hash__ = MaybeHashKeyed.__hash__  # type: ignore
 148
 149    # Binary operators.
 150
 151    def __add__(self,
 152                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 153                /) -> 'MultisetExpression[T]':
 154        try:
 155            return MultisetExpression.additive_union(self, other)
 156        except ImplicitConversionError:
 157            return NotImplemented
 158
 159    def __radd__(
 160            self,
 161            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 162            /) -> 'MultisetExpression[T]':
 163        try:
 164            return MultisetExpression.additive_union(other, self)
 165        except ImplicitConversionError:
 166            return NotImplemented
 167
 168    def additive_union(
 169        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 170    ) -> 'MultisetExpression[T]':
 171        """The combined elements from all of the multisets.
 172
 173        Same as `a + b + c + ...`.
 174
 175        Any resulting counts that would be negative are set to zero.
 176
 177        Example:
 178        ```python
 179        [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
 180        ```
 181        """
 182        expressions = tuple(
 183            implicit_convert_to_expression(arg) for arg in args)
 184        return icepool.operator.MultisetAdditiveUnion(*expressions)
 185
 186    def __sub__(self,
 187                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 188                /) -> 'MultisetExpression[T]':
 189        try:
 190            return MultisetExpression.difference(self, other)
 191        except ImplicitConversionError:
 192            return NotImplemented
 193
 194    def __rsub__(
 195            self,
 196            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 197            /) -> 'MultisetExpression[T]':
 198        try:
 199            return MultisetExpression.difference(other, self)
 200        except ImplicitConversionError:
 201            return NotImplemented
 202
 203    def difference(
 204        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 205    ) -> 'MultisetExpression[T]':
 206        """The elements from the left multiset that are not in any of the others.
 207
 208        Same as `a - b - c - ...`.
 209
 210        Any resulting counts that would be negative are set to zero.
 211
 212        Example:
 213        ```python
 214        [1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
 215        ```
 216
 217        If no arguments are given, the result will be an empty multiset, i.e.
 218        all zero counts.
 219
 220        Note that, as a multiset operation, this will only cancel elements 1:1.
 221        If you want to drop all elements in a set of outcomes regardless of
 222        count, either use `drop_outcomes()` instead, or use a large number of
 223        counts on the right side.
 224        """
 225        expressions = tuple(
 226            implicit_convert_to_expression(arg) for arg in args)
 227        return icepool.operator.MultisetDifference(*expressions)
 228
 229    def __and__(self,
 230                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 231                /) -> 'MultisetExpression[T]':
 232        try:
 233            return MultisetExpression.intersection(self, other)
 234        except ImplicitConversionError:
 235            return NotImplemented
 236
 237    def __rand__(
 238            self,
 239            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 240            /) -> 'MultisetExpression[T]':
 241        try:
 242            return MultisetExpression.intersection(other, self)
 243        except ImplicitConversionError:
 244            return NotImplemented
 245
 246    def intersection(
 247        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 248    ) -> 'MultisetExpression[T]':
 249        """The elements that all the multisets have in common.
 250
 251        Same as `a & b & c & ...`.
 252
 253        Any resulting counts that would be negative are set to zero.
 254
 255        Example:
 256        ```python
 257        [1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
 258        ```
 259
 260        Note that, as a multiset operation, this will only intersect elements
 261        1:1.
 262        If you want to keep all elements in a set of outcomes regardless of
 263        count, either use `keep_outcomes()` instead, or use a large number of
 264        counts on the right side.
 265        """
 266        expressions = tuple(
 267            implicit_convert_to_expression(arg) for arg in args)
 268        return icepool.operator.MultisetIntersection(*expressions)
 269
 270    def __or__(self,
 271               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 272               /) -> 'MultisetExpression[T]':
 273        try:
 274            return MultisetExpression.union(self, other)
 275        except ImplicitConversionError:
 276            return NotImplemented
 277
 278    def __ror__(self,
 279                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 280                /) -> 'MultisetExpression[T]':
 281        try:
 282            return MultisetExpression.union(other, self)
 283        except ImplicitConversionError:
 284            return NotImplemented
 285
 286    def union(
 287        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 288    ) -> 'MultisetExpression[T]':
  289    """The most of each outcome that appears in any of the multisets.
 290
 291        Same as `a | b | c | ...`.
 292
 293        Any resulting counts that would be negative are set to zero.
 294
 295        Example:
 296        ```python
 297        [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
 298        ```
 299        """
 300        expressions = tuple(
 301            implicit_convert_to_expression(arg) for arg in args)
 302        return icepool.operator.MultisetUnion(*expressions)
 303
 304    def __xor__(self,
 305                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 306                /) -> 'MultisetExpression[T]':
 307        try:
 308            return MultisetExpression.symmetric_difference(self, other)
 309        except ImplicitConversionError:
 310            return NotImplemented
 311
 312    def __rxor__(
 313            self,
 314            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 315            /) -> 'MultisetExpression[T]':
 316        try:
 317            # Symmetric.
 318            return MultisetExpression.symmetric_difference(self, other)
 319        except ImplicitConversionError:
 320            return NotImplemented
 321
 322    def symmetric_difference(
 323            self,
 324            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 325            /) -> 'MultisetExpression[T]':
 326        """The elements that appear in the left or right multiset but not both.
 327
 328        Same as `a ^ b`.
 329
 330        Specifically, this produces the absolute difference between counts.
 331        If you don't want negative counts to be used from the inputs, you can
 332        do `+left ^ +right`.
 333
 334        Example:
 335        ```python
 336        [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
 337        ```
 338        """
 339        return icepool.operator.MultisetSymmetricDifference(
 340            self, implicit_convert_to_expression(other))
 341
 342    def keep_outcomes(
 343            self, outcomes:
 344        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
 345            /) -> 'MultisetExpression[T]':
 346        """Keeps the designated outcomes, and drops the rest by setting their counts to zero.
 347
 348        This is similar to `intersection()`, except the right side is considered
 349        to have unlimited multiplicity.
 350
 351        Args:
 352            outcomes: A callable returning `True` iff the outcome should be kept,
 353                or an expression or collection of outcomes to keep.
 354        """
 355        if isinstance(outcomes, MultisetExpression):
 356            return icepool.operator.MultisetFilterOutcomesBinary(
 357                self, outcomes)
 358        else:
 359            return icepool.operator.MultisetFilterOutcomes(self,
 360                                                           outcomes=outcomes)
 361
 362    def drop_outcomes(
 363            self, outcomes:
 364        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
 365            /) -> 'MultisetExpression[T]':
 366        """Drops the designated outcomes by setting their counts to zero, and keeps the rest.
 367
 368        This is similar to `difference()`, except the right side is considered
 369        to have unlimited multiplicity.
 370
 371        Args:
 372            outcomes: A callable returning `True` iff the outcome should be
 373                dropped, or an expression or collection of outcomes to drop.
 374        """
 375        if isinstance(outcomes, MultisetExpression):
 376            return icepool.operator.MultisetFilterOutcomesBinary(self,
 377                                                                 outcomes,
 378                                                                 invert=True)
 379        else:
 380            return icepool.operator.MultisetFilterOutcomes(self,
 381                                                           outcomes=outcomes,
 382                                                           invert=True)
 383
 384    # Adjust counts.
 385
 386    def map_counts(*args:
 387                   'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 388                   function: Callable[..., int]) -> 'MultisetExpression[T]':
 389        """Maps the counts to new counts.
 390
 391        Args:
 392            function: A function that takes `outcome, *counts` and produces a
 393                combined count.
 394        """
 395        expressions = tuple(
 396            implicit_convert_to_expression(arg) for arg in args)
 397        return icepool.operator.MultisetMapCounts(*expressions,
 398                                                  function=function)
 399
 400    def __mul__(self, n: int) -> 'MultisetExpression[T]':
 401        if not isinstance(n, int):
 402            return NotImplemented
 403        return self.multiply_counts(n)
 404
 405    # Commutable in this case.
 406    def __rmul__(self, n: int) -> 'MultisetExpression[T]':
 407        if not isinstance(n, int):
 408            return NotImplemented
 409        return self.multiply_counts(n)
 410
 411    def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]':
 412        """Multiplies all counts by n.
 413
 414        Same as `self * n`.
 415
 416        Example:
 417        ```python
 418        Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
 419        ```
 420        """
 421        return icepool.operator.MultisetMultiplyCounts(self, constant=n)
 422
 423    @overload
 424    def __floordiv__(self, other: int) -> 'MultisetExpression[T]':
 425        ...
 426
 427    @overload
 428    def __floordiv__(
 429        self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 430    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
  431        """Same as count_subset()."""
 432
 433    @overload
 434    def __floordiv__(
 435        self,
 436        other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 437    ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]':
  438        """Same as divide_counts() or count_subset(), depending on the argument type."""
 439
 440    def __floordiv__(
 441        self,
 442        other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 443    ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 444        if isinstance(other, int):
 445            return self.divide_counts(other)
 446        else:
 447            return self.count_subset(other)
 448
 449    def divide_counts(self, n: int, /) -> 'MultisetExpression[T]':
 450        """Divides all counts by n (rounding down).
 451
 452        Same as `self // n`.
 453
 454        Example:
 455        ```python
 456        Pool([1, 2, 2, 3]) // 2 -> [2]
 457        ```
 458        """
 459        return icepool.operator.MultisetFloordivCounts(self, constant=n)
 460
 461    def __mod__(self, n: int, /) -> 'MultisetExpression[T]':
 462        if not isinstance(n, int):
 463            return NotImplemented
 464        return icepool.operator.MultisetModuloCounts(self, constant=n)
 465
 466    def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]':
 467        """Takes all counts modulo n.
 468
 469        Same as `self % n`.
 470
 471        Example:
 472        ```python
 473        Pool([1, 2, 2, 3]) % 2 -> [1, 3]
 474        ```
 475        """
 476        return self % n
 477
 478    def __pos__(self) -> 'MultisetExpression[T]':
 479        """Sets all negative counts to zero."""
 480        return icepool.operator.MultisetKeepCounts(self,
 481                                                   comparison='>=',
 482                                                   constant=0)
 483
 484    def __neg__(self) -> 'MultisetExpression[T]':
 485        """As -1 * self."""
 486        return -1 * self
 487
 488    def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=',
 489                                              '>'], n: int,
 490                    /) -> 'MultisetExpression[T]':
 491        """Keeps counts fitting the comparison, treating the rest as zero.
 492
 493        For example, `expression.keep_counts('>=', 2)` would keep pairs,
 494        triplets, etc. and drop singles.
 495
 496        ```python
 497        Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
 498        ```
 499        
 500        Args:
 501            comparison: The comparison to use.
 502            n: The number to compare counts against.
 503        """
 504        return icepool.operator.MultisetKeepCounts(self,
 505                                                   comparison=comparison,
 506                                                   constant=n)
 507
 508    def unique(self, n: int = 1, /) -> 'MultisetExpression[T]':
 509        """Counts each outcome at most `n` times.
 510
 511        For example, `generator.unique(2)` would count each outcome at most
 512        twice.
 513
 514        Example:
 515        ```python
 516        Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
 517        ```
 518        """
 519        return icepool.operator.MultisetUnique(self, constant=n)
 520
 521    # Keep highest / lowest.
 522
 523    @overload
 524    def keep(
 525        self, index: slice | Sequence[int | EllipsisType]
 526    ) -> 'MultisetExpression[T]':
 527        ...
 528
 529    @overload
 530    def keep(self,
 531             index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 532        ...
 533
 534    def keep(
 535        self, index: slice | Sequence[int | EllipsisType] | int
 536    ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 537        """Selects elements after drawing and sorting.
 538
 539        This is less capable than the `KeepGenerator` version.
 540        In particular, it does not know how many elements it is selecting from,
 541        so it must be anchored at the starting end. The advantage is that it
 542        can be applied to any expression.
 543
 544        The valid types of argument are:
 545
 546        * A `slice`. If both start and stop are provided, they must both be
 547            non-negative or both be negative. step is not supported.
 548        * A sequence of `int` with `...` (`Ellipsis`) at exactly one end.
 549            Each sorted element will be counted that many times, with the
 550            `Ellipsis` treated as enough zeros (possibly "negative") to
 551            fill the rest of the elements.
 552        * An `int`, which evaluates by taking the element at the specified
 553            index. In this case the result is a `Die`.
 554
 555        Negative incoming counts are treated as zero counts.
 556
 557        Use the `[]` operator for the same effect as this method.
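
        Example (illustrative; elements are taken in ascending order, so
        index 0 is the lowest):
        ```python
        Pool([1, 2, 2, 3])[:1] -> [1]
        Pool([1, 2, 2, 3])[-2:] -> [2, 3]
        Pool([1, 2, 2, 3])[1] -> 2
        ```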
 558        """
 559        if isinstance(index, int):
 560            return icepool.evaluator.keep_evaluator.evaluate(self, index=index)
 561        else:
 562            return icepool.operator.MultisetKeep(self, index=index)
 563
 564    @overload
 565    def __getitem__(
 566        self, index: slice | Sequence[int | EllipsisType]
 567    ) -> 'MultisetExpression[T]':
 568        ...
 569
 570    @overload
 571    def __getitem__(
 572            self,
 573            index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 574        ...
 575
 576    def __getitem__(
 577        self, index: slice | Sequence[int | EllipsisType] | int
 578    ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 579        return self.keep(index)
 580
 581    def lowest(self,
 582               keep: int | None = None,
 583               drop: int | None = None) -> 'MultisetExpression[T]':
 584        """Keep some of the lowest elements from this multiset and drop the rest.
 585
 586        In contrast to the die and free function versions, this does not
 587        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
 588        Alternatively, you can perform some other evaluation.
 589
 590        This requires the outcomes to be evaluated in ascending order.
 591
 592        Args:
 593            keep, drop: These arguments work together:
 594                * If neither are provided, the single lowest element
 595                    will be kept.
 596                * If only `keep` is provided, the `keep` lowest elements
 597                    will be kept.
 598                * If only `drop` is provided, the `drop` lowest elements
 599                    will be dropped and the rest will be kept.
 600                * If both are provided, `drop` lowest elements will be dropped,
 601                    then the next `keep` lowest elements will be kept.
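
        Example (illustrative):
        ```python
        Pool([1, 2, 2, 3]).lowest(2) -> [1, 2]
        Pool([1, 2, 2, 3]).lowest(2, drop=1) -> [2, 2]
        ```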
 602        """
 603        index = lowest_slice(keep, drop)
 604        return self.keep(index)
 605
 606    def highest(self,
 607                keep: int | None = None,
 608                drop: int | None = None) -> 'MultisetExpression[T]':
 609        """Keep some of the highest elements from this multiset and drop the rest.
 610
 611        In contrast to the die and free function versions, this does not
 612        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
 613        Alternatively, you can perform some other evaluation.
 614
 615        This requires the outcomes to be evaluated in descending order.
 616
 617        Args:
 618            keep, drop: These arguments work together:
 619                * If neither are provided, the single highest element
 620                    will be kept.
 621                * If only `keep` is provided, the `keep` highest elements
 622                    will be kept.
 623                * If only `drop` is provided, the `drop` highest elements
 624                    will be dropped and the rest will be kept.
 625                * If both are provided, `drop` highest elements will be dropped, 
 626                    then the next `keep` highest elements will be kept.
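
        Example (illustrative):
        ```python
        Pool([1, 2, 2, 3]).highest(2) -> [2, 3]
        Pool([1, 2, 2, 3]).highest(drop=1) -> [1, 2, 2]
        ```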
 627        """
 628        index = highest_slice(keep, drop)
 629        return self.keep(index)
 630
 631    # Pairing.
 632
 633    def sort_pair(
 634        self,
 635        comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
 636        other: 'MultisetExpression[T]',
 637        /,
 638        order: Order = Order.Descending,
 639        extra: Literal['early', 'late', 'low', 'high', 'equal', 'keep',
 640                       'drop'] = 'drop'
 641    ) -> 'MultisetExpression[T]':
 642        """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then keep the element from `self` in each pair that fits the given comparison.
 643
 644        Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of
 645        *RISK*. Which pairs did the attacker win?
 646        ```python
 647        d6.pool(3).highest(2).sort_pair('>', d6.pool(2))
 648        ```
 649        
 650        Suppose the attacker rolled 6, 4, 3 and the defender 5, 5.
 651        In this case the 4 would be blocked since the attacker lost that pair,
 652        leaving the attacker's 6. If you want to keep the extra element (3), you 
 653        can use the `extra` parameter.
 654        ```python
 655
 656        Pool([6, 4, 3]).sort_pair('>', [5, 5]) -> [6]
 657        Pool([6, 4, 3]).sort_pair('>', [5, 5], extra='keep') -> [6, 3]
 658        ```
 659
 660        Contrast `max_pair_lowest()` and `max_pair_highest()`, which first 
 661        create the maximum number of pairs that fit the comparison, not
 662        necessarily in sorted order.
 663        In the above example, `max_pair_highest()` would allow the defender to
 664        assign their 5s to block both the 4 and the 3.
 665
 666        Negative incoming counts are treated as zero counts.
 667        
 668        Args:
 669            comparison: The comparison to filter by. If you want to drop rather
 670                than keep, use the complementary comparison:
 671                * `'=='` vs. `'!='`
 672                * `'<='` vs. `'>'`
 673                * `'>='` vs. `'<'`
 674            other: The other multiset to pair elements with.
 675            order: The order in which to sort before forming pairs.
 676                Default is descending.
 677            extra: If the left operand has more elements than the right
 678                operand, this determines what is done with the extra elements.
 679                The default is `'drop'`.
 680                * `'early'`, `'late'`: The extra elements are considered to   
 681                    occur earlier or later in `order` than their missing
 682                    counterparts.
 683                * `'low'`, `'high'`, `'equal'`: The extra elements are 
 684                    considered to be lower, higher, or equal to their missing
 685                    counterparts.
 686                * `'keep'`, `'drop'`: The extra elements are always kept or 
 687                    dropped.
 688        """
 689        other = implicit_convert_to_expression(other)
 690
 691        return icepool.operator.MultisetSortPair(self,
 692                                                 other,
 693                                                 comparison=comparison,
 694                                                 sort_order=order,
 695                                                 extra=extra)
 696
 697    def sort_pair_keep_while(self,
 698                             comparison: Literal['==', '!=', '<=', '<', '>=',
 699                                                 '>'],
 700                             other: 'MultisetExpression[T]',
 701                             /,
 702                             order: Order = Order.Descending,
 703                             extra: Literal['early', 'late', 'low', 'high',
 704                                            'equal', 'continue',
 705                                            'break'] = 'break') -> 'MultisetExpression[T]':
 706        """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and keep elements from `self` while the `comparison` holds, dropping the rest.
 707
 708        Negative incoming counts are treated as zero counts.
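
        Example (illustrative): in contrast to `sort_pair()`, which filters
        each pair independently, keeping stops at the first pair that fails
        the comparison.
        ```python
        Pool([6, 5, 4]).sort_pair('>', [5, 5, 3]) -> [6, 4]
        Pool([6, 5, 4]).sort_pair_keep_while('>', [5, 5, 3]) -> [6]
        ```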
 709
 710        Args:
 711            comparison: The comparison for which to continue the "while".
 712            other: The other multiset to pair elements with.
 713            order: The order in which to sort before forming pairs.
 714                Default is descending.
 715            extra: If the left operand has more elements than the right
 716                operand, this determines what is done with the extra elements.
 717                The default is `'break'`.
 718                * `'early'`, `'late'`: The extra elements are considered to   
 719                    occur earlier or later in `order` than their missing
 720                    counterparts.
 721                * `'low'`, `'high'`, `'equal'`: The extra elements are 
 722                    considered to be lower, higher, or equal to their missing
 723                    counterparts.
 724                * `'continue'`, `'break'`: If the "while" still holds upon 
 725                    reaching the extra elements, whether those elements
 726                    continue to be kept.
 727        """
 728        other = implicit_convert_to_expression(other)
 729        return icepool.operator.MultisetSortPairWhile(self,
 730                                                      other,
 731                                                      keep=True,
 732                                                      comparison=comparison,
 733                                                      sort_order=order,
 734                                                      extra=extra)
 735
 736    def sort_pair_drop_while(self,
 737                             comparison: Literal['==', '!=', '<=', '<', '>=',
 738                                                 '>'],
 739                             other: 'MultisetExpression[T]',
 740                             /,
 741                             order: Order = Order.Descending,
 742                             extra: Literal['early', 'late', 'low', 'high',
 743                                            'equal', 'continue',
 744                                            'break'] = 'break') -> 'MultisetExpression[T]':
 745        """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and drop elements from `self` while the `comparison` holds, keeping the rest.
 746
 747        Negative incoming counts are treated as zero counts.
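
        Example (illustrative): elements are dropped only until the first
        pair that fails the comparison; the rest are kept.
        ```python
        Pool([6, 5, 4]).sort_pair_drop_while('>', [5, 5, 3]) -> [5, 4]
        ```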
 748
 749        Args:
 750            comparison: The comparison for which to continue the "while".
 751            other: The other multiset to pair elements with.
 752            order: The order in which to sort before forming pairs.
 753                Default is descending.
 754            extra: If the left operand has more elements than the right
 755                operand, this determines what is done with the extra elements.
 756                The default is `'break'`.
 757                * `'early'`, `'late'`: The extra elements are considered to   
 758                    occur earlier or later in `order` than their missing
 759                    counterparts.
 760                * `'low'`, `'high'`, `'equal'`: The extra elements are 
 761                    considered to be lower, higher, or equal to their missing
 762                    counterparts.
 763                * `'continue'`, `'break'`: If the "while" still holds upon 
 764                    reaching the extra elements, whether those elements
 765                    continue to be dropped.
 766        """
 767        other = implicit_convert_to_expression(other)
 768        return icepool.operator.MultisetSortPairWhile(self,
 769                                                      other,
 770                                                      keep=False,
 771                                                      comparison=comparison,
 772                                                      sort_order=order,
 773                                                      extra=extra)
 774
 775    def max_pair_highest(
 776            self, comparison: Literal['<=',
 777                                      '<'], other: 'MultisetExpression[T]', /,
 778            *, keep: Literal['paired', 'unpaired']) -> 'MultisetExpression[T]':
 779        """EXPERIMENTAL: Pair the highest elements from `self` with even higher (or equal) elements from `other`.
 780
 781        This pairs elements of `self` with elements of `other`, such that in
 782        each pair the element from `self` fits the `comparison` with the
 783        element from `other`. As many such pairs of elements will be created as 
 784        possible, preferring the highest pairable elements of `self`.
 785        Finally, either the paired or unpaired elements from `self` are kept.
 786
 787        This requires that outcomes be evaluated in descending order.
 788
 789        Negative incoming counts are treated as zero counts.
 790
 791        Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 
 792        3d6. Defender dice can be used to block attacker dice of equal or lesser
 793        value, and the defender prefers to block the highest attacker dice
 794        possible. Which attacker dice were not blocked?
 795        ```python
 796        d6.pool(4).max_pair_highest('<=', d6.pool(3), keep='unpaired').sum()
 797        ```
 798
 799        Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
 800        Then the result would be [6, 1].
 801        ```python
 802        Pool([6, 4, 3, 1]).max_pair_highest('<=', [5, 5], keep='unpaired')
 803        -> [6, 1]
 804        ```
 805
 806        Contrast `sort_pair()`, which first creates pairs in
 807        sorted order and then filters them by `comparison`.
 808        In the above example, `sort_pair()` would force the defender to pair
 809        against the 6 and the 4, which would only allow them to block the 4
 810        and let the 6, 3, and 1 through.
 811
 812        There is no `max_pair` with `'=='` because this would mean the same
 813        thing as `+self & +other` (if paired elements are kept), or
 814        `+self - +other` (if unpaired elements are kept).
 815
 816        Args:
 817            comparison: Either `'<='` or `'<'`.
 818            other: The other multiset to pair elements with.
 819            keep: Whether 'paired' or 'unpaired' elements are to be kept.
 820        """
 821        if keep == 'paired':
 822            keep_boolean = True
 823        elif keep == 'unpaired':
 824            keep_boolean = False
 825        else:
 826            raise ValueError("keep must be either 'paired' or 'unpaired'")
 827
 828        other = implicit_convert_to_expression(other)
 829        match comparison:
 830            case '<=':
 831                pair_equal = True
 832            case '<':
 833                pair_equal = False
 834            case _:
 835                raise ValueError(f'Invalid comparison {comparison}')
 836        return icepool.operator.MultisetMaxPair(self,
 837                                                other,
 838                                                order=Order.Descending,
 839                                                pair_equal=pair_equal,
 840                                                keep=keep_boolean)
 841
 842    def max_pair_lowest(
 843            self, comparison: Literal['>=',
 844                                      '>'], other: 'MultisetExpression[T]', /,
 845            *, keep: Literal['paired', 'unpaired']) -> 'MultisetExpression[T]':
 846        """EXPERIMENTAL: Pair the lowest elements from `self` with even lower (or equal) elements from `other`.
 847
 848        This pairs elements of `self` with elements of `other`, such that in
 849        each pair the element from `self` fits the `comparison` with the
 850        element from `other`. As many such pairs of elements will be created as 
 851        possible, preferring the lowest pairable elements of `self`.
 852        Finally, either the paired or unpaired elements from `self` are kept.
 853
 854        This requires that outcomes be evaluated in ascending order.
 855
 856        Negative incoming counts are treated as zero counts.
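
        Example (illustrative): the two 2s pair with the lowest elements of
        `self` that compare `'>='` against them, namely the 3 and the 4.
        ```python
        Pool([6, 4, 3, 1]).max_pair_lowest('>=', [2, 2], keep='unpaired')
        -> [6, 1]
        ```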
 857
 858        Contrast `sort_pair()`, which first creates pairs in
 859        sorted order and then filters them by `comparison`.
 860
 861        Args:
 862            comparison: Either `'>='` or `'>'`.
 863            other: The other multiset to pair elements with.
 864            keep: Whether 'paired' or 'unpaired' elements are to be kept.
 865        """
 866        if keep == 'paired':
 867            keep_boolean = True
 868        elif keep == 'unpaired':
 869            keep_boolean = False
 870        else:
 871            raise ValueError("keep must be either 'paired' or 'unpaired'")
 872
 873        other = implicit_convert_to_expression(other)
 874        match comparison:
 875            case '>=':
 876                pair_equal = True
 877            case '>':
 878                pair_equal = False
 879            case _:
 880                raise ValueError(f'Invalid comparison {comparison}')
 881        return icepool.operator.MultisetMaxPair(self,
 882                                                other,
 883                                                order=Order.Ascending,
 884                                                pair_equal=pair_equal,
 885                                                keep=keep_boolean)
 886
 887    def versus_all(self, comparison: Literal['<=', '<', '>=', '>'],
 888                   other: 'MultisetExpression[T]') -> 'MultisetExpression[T]':
 889        """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against all elements of the other multiset.
 890        
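        Example (illustrative): only elements greater than every element of
        the other multiset are kept.
        ```python
        Pool([1, 2, 2, 3]).versus_all('>', [1, 2]) -> [3]
        ```
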
 891        Args:
 892            comparison: One of `'<=', '<', '>=', '>'`.
 893            other: The other multiset to compare to. Negative counts are treated
 894                as 0.
 895        """
 896        other = implicit_convert_to_expression(other)
 897        lexi_tuple, order = compute_lexi_tuple_with_zero_right_first(
 898            comparison)
 899        return icepool.operator.MultisetVersus(self,
 900                                               other,
 901                                               lexi_tuple=lexi_tuple,
 902                                               order=order)
 903
 904    def versus_any(self, comparison: Literal['<=', '<', '>=', '>'],
 905                   other: 'MultisetExpression[T]') -> 'MultisetExpression[T]':
 906        """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against any element of the other multiset.
 907        
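        Example (illustrative): elements greater than at least one element
        of the other multiset are kept.
        ```python
        Pool([1, 2, 2, 3]).versus_any('>', [1, 2]) -> [2, 2, 3]
        ```
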
 908        Args:
 909            comparison: One of `'<=', '<', '>=', '>'`.
 910            other: The other multiset to compare to. Negative counts are treated
 911                as 0.
 912        """
 913        other = implicit_convert_to_expression(other)
 914        lexi_tuple, order = compute_lexi_tuple_with_zero_right_first(
 915            comparison)
 916        lexi_tuple = tuple(reversed(lexi_tuple))  # type: ignore
 917        order = -order
 918
 919        return icepool.operator.MultisetVersus(self,
 920                                               other,
 921                                               lexi_tuple=lexi_tuple,
 922                                               order=order)
 923
 924    # Evaluations.
 925
 926    def expand(
 927        self,
 928        order: Order = Order.Ascending
 929    ) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]':
 930        """Evaluation: All elements of the multiset in ascending order.
 931
 932        This is expensive and not recommended unless there are few possibilities.
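
        Example (illustrative):
        ```python
        Pool([3, 2, 1, 2]).expand() -> (1, 2, 2, 3)
        ```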
 933
 934        Args:
 935            order: Whether the elements are in ascending (default) or descending
 936                order.
 937        """
 938        return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order)
 939
 940    def sum(
 941        self,
 942        map: Callable[[T], U] | Mapping[T, U] | None = None
 943    ) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]':
 944        """Evaluation: The sum of all elements."""
 945        if map is None:
 946            return icepool.evaluator.sum_evaluator.evaluate(self)
 947        else:
 948            return icepool.evaluator.SumEvaluator(map).evaluate(self)
 949
 950    def size(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 951        """Evaluation: The total number of elements in the multiset.
 952
 953        This is usually not very interesting unless some other operation is
 954        performed first. Examples:
 955
 956        `generator.unique().size()` will count the number of unique outcomes.
 957
 958        `(generator & [4, 5, 6]).size()` will count up to one each of
 959        4, 5, and 6.
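
        Example (illustrative):
        ```python
        Pool([1, 2, 2, 3]).size() -> 4
        Pool([1, 2, 2, 3]).unique().size() -> 3
        ```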
 960        """
 961        return icepool.evaluator.size_evaluator.evaluate(self)
 962
 963    def empty(
 964            self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
 965        """Evaluation: Whether the multiset contains only zero counts."""
 966        return icepool.evaluator.empty_evaluator.evaluate(self)
 967
 968    def highest_outcome_and_count(
 969        self
 970    ) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]':
 971        """Evaluation: The highest outcome with positive count, along with that count.
 972
 973        If no outcomes have positive count, the min outcome will be returned with 0 count.
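
        Example (illustrative):
        ```python
        Pool([1, 2, 2, 3]).highest_outcome_and_count() -> (3, 1)
        ```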
 974        """
 975        return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate(
 976            self)
 977
 978    def all_counts(
 979        self,
 980        filter: int | Literal['all'] = 1
 981    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]':
 982        """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.
 983
 984        The sizes are in **descending** order.
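
        Example (illustrative):
        ```python
        Pool([1, 2, 2, 3, 3, 3]).all_counts() -> (3, 2, 1)
        Pool([1, 2, 2, 3, 3, 3]).all_counts(filter=2) -> (3, 2)
        ```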
 985
 986        Args:
 987            filter: Any counts below this value will not be in the output.
 988                For example, `filter=2` will only produce pairs and better.
 989                If `'all'`, no filtering will be done, even of zero counts.
 990
 991                Why not just place `keep_counts('>=')` before this?
 992                `keep_counts('>=')` operates by setting counts to zero, so we
 993                would still need an argument to specify whether we want to
 994                output zero counts. So we might as well use the argument to do
 995                both.
 996        """
 997        return icepool.evaluator.AllCountsEvaluator(
 998            filter=filter).evaluate(self)
 999
1000    def largest_count(
1001            self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1002        """Evaluation: The size of the largest matching set among the elements."""
1003        return icepool.evaluator.largest_count_evaluator.evaluate(self)
1004
1005    def largest_count_and_outcome(
1006        self
1007    ) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]':
1008        """Evaluation: The largest matching set among the elements and the corresponding outcome."""
1009        return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate(
1010            self)
1011
1012    def __rfloordiv__(
1013        self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
1014    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1015        return implicit_convert_to_expression(other).count_subset(self)
1016
1017    def count_subset(
1018        self,
1019        divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1020        /,
1021        *,
1022        empty_divisor: int | None = None
1023    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1024        """Evaluation: The number of times the divisor is contained in this multiset.
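
        Example (illustrative): `[1, 2]` can be taken out of
        `[1, 1, 2, 2, 2]` twice.
        ```python
        Pool([1, 1, 2, 2, 2]).count_subset([1, 2]) -> 2
        ```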
1025        
1026        Args:
1027            divisor: The multiset to divide by.
1028            empty_divisor: If the divisor is empty, the outcome will be this.
1029                If not set, `ZeroDivisionError` will be raised for an empty
1030                right side.
1031
1032        Raises:
 1033            ZeroDivisionError: If the divisor may be empty and 
 1034                `empty_divisor` is not set.
1035        """
1036        divisor = implicit_convert_to_expression(divisor)
1037        return icepool.evaluator.CountSubsetEvaluator(
1038            empty_divisor=empty_divisor).evaluate(self, divisor)
1039
1040    def largest_straight(
1041        self: 'MultisetExpression[int]'
1042    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]':
1043        """Evaluation: The size of the largest straight among the elements.
1044
1045        Outcomes must be `int`s.
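
        Example (illustrative):
        ```python
        Pool([1, 2, 3, 5, 6]).largest_straight() -> 3
        ```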
1046        """
1047        return icepool.evaluator.largest_straight_evaluator.evaluate(self)
1048
1049    def largest_straight_and_outcome(
1050        self: 'MultisetExpression[int]',
1051        priority: Literal['low', 'high'] = 'high',
1052        /
1053    ) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]':
1054        """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.
1055
1056        Straight size is prioritized first, then the outcome.
1057
1058        Outcomes must be `int`s.
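
        Example (illustrative):
        ```python
        Pool([1, 2, 3, 5, 6]).largest_straight_and_outcome() -> (3, 3)
        Pool([1, 2, 3, 5, 6]).largest_straight_and_outcome('low') -> (3, 1)
        ```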
1059
1060        Args:
1061            priority: Controls which outcome within the straight is returned,
1062                and which straight is picked if there is a tie for largest
1063                straight.
1064        """
1065        if priority == 'high':
1066            return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate(
1067                self)
1068        elif priority == 'low':
1069            return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate(
1070                self)
1071        else:
1072            raise ValueError("priority must be 'low' or 'high'.")
1073
1074    def all_straights(
1075        self: 'MultisetExpression[int]'
1076    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]':
1077        """Evaluation: The sizes of all straights.
1078
1079        The sizes are in **descending** order.
1080
1081        Each element can only contribute to one straight, though duplicate
 1082        elements can produce straights that overlap in outcomes. In this case,
1083        elements are preferentially assigned to the longer straight.
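
        Example (illustrative): the straight 1-2-3 uses one of the 2s; the
        remaining 2 and the 5 each form a straight of size 1.
        ```python
        Pool([1, 2, 2, 3, 5]).all_straights() -> (3, 1, 1)
        ```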
1084        """
1085        return icepool.evaluator.all_straights_evaluator.evaluate(self)
1086
1087    def all_straights_reduce_counts(
1088        self: 'MultisetExpression[int]',
1089        reducer: Callable[[int, int], int] = operator.mul
1090    ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]':
1091        """Experimental: All straights with a reduce operation on the counts.
1092
1093        This can be used to evaluate e.g. cribbage-style straight counting.
1094
1095        The result is a tuple of `(run_length, run_score)`s.
1096        """
1097        return icepool.evaluator.AllStraightsReduceCountsEvaluator(
1098            reducer=reducer).evaluate(self)
1099
1100    def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1101                *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1102                order: Order = Order.Descending,
1103                limit: int | None = None):
1104        """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.
1105
1106        Example:
1107        ```python
1108        MultisetExpression.argsort([10, 9, 5], [9, 9])
1109        ```
1110        produces
1111        ```python
1112        ((0,), (0, 1, 1), (0,))
1113        ```
1114        
1115        Args:
1116            self, *args: The multiset expressions to be evaluated.
1117            order: Which order the ranks are to be emitted. Default is descending.
1118            limit: How many ranks to emit. Default will emit all ranks, which
1119                makes the length of each outcome equal to
1120                `additive_union(+self, +arg1, +arg2, ...).unique().size()`
1121        """
1122        self = implicit_convert_to_expression(self)
1123        converted_args = [implicit_convert_to_expression(arg) for arg in args]
1124        return icepool.evaluator.ArgsortEvaluator(order=order,
1125                                                  limit=limit).evaluate(
1126                                                      self, *converted_args)
1127
1128    # Comparators.
1129
1130    def _compare(
1131        self,
1132        right: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1133        operation_class: Type['icepool.evaluator.ComparisonEvaluator'],
1134        *,
1135        truth_value_callback: 'Callable[[], bool] | None' = None
1136    ) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1137        right = icepool.implicit_convert_to_expression(right)
1138
1139        if truth_value_callback is not None:
1140
1141            def data_callback() -> Counts[bool]:
1142                die = cast('icepool.Die[bool]',
1143                           operation_class().evaluate(self, right))
1144                if not isinstance(die, icepool.Die):
1145                    raise TypeError('Did not resolve to a die.')
1146                return die._data
1147
1148            return icepool.DieWithTruth(data_callback, truth_value_callback)
1149        else:
1150            return operation_class().evaluate(self, right)
1151
1152    def __lt__(self,
1153               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1154               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1155        try:
1156            return self._compare(other,
1157                                 icepool.evaluator.IsProperSubsetEvaluator)
1158        except TypeError:
1159            return NotImplemented
1160
1161    def __le__(self,
1162               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1163               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1164        try:
1165            return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
1166        except TypeError:
1167            return NotImplemented
1168
1169    def issubset(
1170            self,
1171            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1172            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1173        """Evaluation: Whether this multiset is a subset of the other multiset.
1174
1175        Specifically, if this multiset has a lesser or equal count for each
1176        outcome than the other multiset, this evaluates to `True`; 
1177        if there is some outcome for which this multiset has a greater count 
1178        than the other multiset, this evaluates to `False`.
1179
1180        `issubset` is the same as `self <= other`.
1181        
1182        `self < other` evaluates a proper subset relation, which is the same
1183        except the result is `False` if the two multisets are exactly equal.
1184        """
1185        return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
1186
1187    def __gt__(self,
1188               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1189               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1190        try:
1191            return self._compare(other,
1192                                 icepool.evaluator.IsProperSupersetEvaluator)
1193        except TypeError:
1194            return NotImplemented
1195
1196    def __ge__(self,
1197               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1198               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1199        try:
1200            return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
1201        except TypeError:
1202            return NotImplemented
1203
1204    def issuperset(
1205            self,
1206            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1207            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1208        """Evaluation: Whether this multiset is a superset of the other multiset.
1209
1210        Specifically, if this multiset has a greater or equal count for each
1211        outcome than the other multiset, this evaluates to `True`; 
1212        if there is some outcome for which this multiset has a lesser count
1213        than the other multiset, this evaluates to `False`.
1214        
1215        A typical use of this evaluation is testing for the presence of a
1216        combo of cards in a hand, e.g.
1217
1218        ```python
1219        deck.deal(5) >= ['a', 'a', 'b']
1220        ```
1221
1222        represents the chance that a deal of 5 cards contains at least two 'a's
1223        and one 'b'.
1224
1225        `issuperset` is the same as `self >= other`.
1226
1227        `self > other` evaluates a proper superset relation, which is the same
1228        except the result is `False` if the two multisets are exactly equal.
1229        """
1230        return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
1231
1232    def __eq__(  # type: ignore
1233            self,
1234            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1235            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1236        try:
1237
1238            def truth_value_callback() -> bool:
1239                return self.equals(other)
1240
1241            return self._compare(other,
1242                                 icepool.evaluator.IsEqualSetEvaluator,
1243                                 truth_value_callback=truth_value_callback)
1244        except TypeError:
1245            return NotImplemented
1246
1247    def __ne__(  # type: ignore
1248            self,
1249            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1250            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1251        try:
1252
1253            def truth_value_callback() -> bool:
1254                return not self.equals(other)
1255
1256            return self._compare(other,
1257                                 icepool.evaluator.IsNotEqualSetEvaluator,
1258                                 truth_value_callback=truth_value_callback)
1259        except TypeError:
1260            return NotImplemented
1261
1262    def isdisjoint(
1263            self,
1264            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1265            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1266        """Evaluation: Whether this multiset is disjoint from the other multiset.
1267        
1268        Specifically, this evaluates to `False` if there is any outcome for
1269        which both multisets have positive count, and `True` if there is not.
1270
1271        Negative incoming counts are treated as zero counts.
1272        """
1273        return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
1274
1275    # Lexicographic comparisons.
1276
1277    def leximin(
1278        self,
1279        comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'],
1280        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1281        /,
1282        extra: Literal['low', 'high', 'drop'] = 'high'
1283    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1284        """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in ascending order.
1285
1286        Compares the lowest element of each multiset; if they are equal,
1287        compares the next-lowest element, and so on.
1288        
1289        Args:
1290            comparison: The comparison to use.
1291            other: The multiset to compare to.
1292            extra: If one side has more elements than the other, how the extra
1293                elements are considered compared to their missing counterparts.
1294        """
1295        lexi_tuple = compute_lexi_tuple_with_extra(comparison, Order.Ascending,
1296                                                   extra)
1297        return icepool.evaluator.lexi_comparison_evaluator.evaluate(
1298            self,
1299            implicit_convert_to_expression(other),
1300            sort_order=Order.Ascending,
1301            lexi_tuple=lexi_tuple)
1302
1303    def leximax(
1304        self,
1305        comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'],
1306        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1307        /,
1308        extra: Literal['low', 'high', 'drop'] = 'high'
1309    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1310        """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in descending order.
1311
1312        Compares the highest element of each multiset; if they are equal,
1313        compares the next-highest element, and so on.
1314        
1315        Args:
1316            comparison: The comparison to use.
1317            other: The multiset to compare to.
1318            extra: If one side has more elements than the other, how the extra
1319                elements are considered compared to their missing counterparts.
1320        """
1321        lexi_tuple = compute_lexi_tuple_with_extra(comparison,
1322                                                   Order.Descending, extra)
1323        return icepool.evaluator.lexi_comparison_evaluator.evaluate(
1324            self,
1325            implicit_convert_to_expression(other),
1326            sort_order=Order.Descending,
1327            lexi_tuple=lexi_tuple)
1328
1329    # For helping debugging / testing.
1330    def force_order(self, force_order: Order) -> 'MultisetExpression[T]':
1331        """Forces outcomes to be seen by the evaluator in the given order.
1332
1333        This can be useful for debugging / testing.
1334        """
1335        if force_order == Order.Any:
1336            return self
1337        return icepool.operator.MultisetForceOrder(self,
1338                                                   force_order=force_order)

Abstract base class representing an expression that operates on single multisets.

There are three types of multiset expressions:

  • MultisetGenerator, which produces raw outcomes and counts.
  • MultisetOperator, which takes outcomes with one or more counts and produces a count.
  • MultisetVariable, which is a temporary placeholder for some other expression.

Expression methods can be applied to MultisetGenerators to do simple evaluations. For joint evaluations, try multiset_function.

Use the provided operations to build up more complicated expressions, or to attach a final evaluator.

Operations include:

Operation                  Count / notes
additive_union, +          l + r
difference, -              l - r
intersection, &            min(l, r)
union, |                   max(l, r)
symmetric_difference, ^    abs(l - r)
multiply_counts, *         count * n
divide_counts, //          count // n
modulo_counts, %           count % n
keep_counts                count if count >= n else 0 etc.
unary +                    same as keep_counts('>=', 0)
unary -                    reverses the sign of all counts
unique                     min(count, n)
keep_outcomes              count if outcome in t else 0
drop_outcomes              count if outcome not in t else 0
map_counts                 f(outcome, *counts)
keep, []                   less capable than KeepGenerator version
highest                    less capable than KeepGenerator version
lowest                     less capable than KeepGenerator version

Evaluator                     Summary
issubset, <=                  Whether the left side's counts are all <= their counterparts on the right
issuperset, >=                Whether the left side's counts are all >= their counterparts on the right
isdisjoint                    Whether the left side has no positive counts in common with the right side
<                             As <=, but False if the two multisets are equal
>                             As >=, but False if the two multisets are equal
==                            Whether the left side has all the same counts as the right side
!=                            Whether the left side has any different counts to the right side
expand                        All elements in ascending order
sum                           Sum of all elements
count                         The number of elements
any                           Whether there is at least 1 element
highest_outcome_and_count     The highest outcome and how many of that outcome
all_counts                    All counts in descending order
largest_count                 The single largest count, aka x-of-a-kind
largest_count_and_outcome     Same but also with the corresponding outcome
count_subset, //              The number of times the right side is contained in the left side
largest_straight              Length of longest consecutive sequence
largest_straight_and_outcome  Same but also with the corresponding outcome
all_straights                 Lengths of all consecutive sequences in descending order
def additive_union( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
168    def additive_union(
169        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
170    ) -> 'MultisetExpression[T]':
171        """The combined elements from all of the multisets.
172
173        Same as `a + b + c + ...`.
174
175        Any resulting counts that would be negative are set to zero.
176
177        Example:
178        ```python
179        [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
180        ```
181        """
182        expressions = tuple(
183            implicit_convert_to_expression(arg) for arg in args)
184        return icepool.operator.MultisetAdditiveUnion(*expressions)

The combined elements from all of the multisets.

Same as a + b + c + ....

Any resulting counts that would be negative are set to zero.

Example:

[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
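The count arithmetic can be mirrored with a plain-Python sketch using `collections.Counter` (this illustrates the semantics only; icepool itself operates on dice and pools, not lists):

```python
from collections import Counter

# Plain-Python sketch of additive-union semantics (not icepool's
# implementation): counts add per outcome, and any count that would
# be negative is clamped to zero.
def additive_union(*multisets):
    total = Counter()
    for m in multisets:
        total.update(Counter(m))
    return sorted(o for o, n in total.items() for _ in range(max(n, 0)))

additive_union([1, 2, 2, 3], [1, 2, 4])  # -> [1, 1, 2, 2, 2, 3, 4]
```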
def difference( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
203    def difference(
204        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
205    ) -> 'MultisetExpression[T]':
206        """The elements from the left multiset that are not in any of the others.
207
208        Same as `a - b - c - ...`.
209
210        Any resulting counts that would be negative are set to zero.
211
212        Example:
213        ```python
214        [1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
215        ```
216
217        If no arguments are given, the result will be an empty multiset, i.e.
218        all zero counts.
219
220        Note that, as a multiset operation, this will only cancel elements 1:1.
221        If you want to drop all elements in a set of outcomes regardless of
222        count, either use `drop_outcomes()` instead, or use a large number of
223        counts on the right side.
224        """
225        expressions = tuple(
226            implicit_convert_to_expression(arg) for arg in args)
227        return icepool.operator.MultisetDifference(*expressions)

The elements from the left multiset that are not in any of the others.

Same as a - b - c - ....

Any resulting counts that would be negative are set to zero.

Example:

[1, 2, 2, 3] - [1, 2, 4] -> [2, 3]

If no arguments are given, the result will be an empty multiset, i.e. all zero counts.

Note that, as a multiset operation, this will only cancel elements 1:1. If you want to drop all elements in a set of outcomes regardless of count, either use drop_outcomes() instead, or use a large number of counts on the right side.
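The 1:1 cancellation can be sketched in plain Python with `collections.Counter` (an illustration of the count semantics, not icepool's implementation):

```python
from collections import Counter

# Each element on the right cancels at most one matching element on the
# left; counts that would go negative clamp to zero.
def difference(left, *others):
    result = Counter(left)
    for m in others:
        result.subtract(Counter(m))
    return sorted(o for o, n in result.items() for _ in range(max(n, 0)))

difference([1, 2, 2, 3], [1, 2, 4])  # -> [2, 3]
```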

def intersection( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
246    def intersection(
247        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
248    ) -> 'MultisetExpression[T]':
249        """The elements that all the multisets have in common.
250
251        Same as `a & b & c & ...`.
252
253        Any resulting counts that would be negative are set to zero.
254
255        Example:
256        ```python
257        [1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
258        ```
259
260        Note that, as a multiset operation, this will only intersect elements
261        1:1.
262        If you want to keep all elements in a set of outcomes regardless of
263        count, either use `keep_outcomes()` instead, or use a large number of
264        counts on the right side.
265        """
266        expressions = tuple(
267            implicit_convert_to_expression(arg) for arg in args)
268        return icepool.operator.MultisetIntersection(*expressions)

The elements that all the multisets have in common.

Same as a & b & c & ....

Any resulting counts that would be negative are set to zero.

Example:

[1, 2, 2, 3] & [1, 2, 4] -> [1, 2]

Note that, as a multiset operation, this will only intersect elements 1:1. If you want to keep all elements in a set of outcomes regardless of count, either use keep_outcomes() instead, or use a large number of counts on the right side.
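The per-outcome `min(l, r)` rule from the operations table can be sketched in plain Python (illustration only, not icepool's implementation):

```python
from collections import Counter

# Intersection takes the minimum count of each outcome across all
# inputs, clamped at zero.
def intersection(*multisets):
    counters = [Counter(m) for m in multisets]
    outcomes = set().union(*counters)
    return sorted(o for o in outcomes
                  for _ in range(max(min(c[o] for c in counters), 0)))

intersection([1, 2, 2, 3], [1, 2, 4])  # -> [1, 2]
```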

def union( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
286    def union(
287        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
288    ) -> 'MultisetExpression[T]':
289        """The most of each outcome that appears in any of the multisets.
290
291        Same as `a | b | c | ...`.
292
293        Any resulting counts that would be negative are set to zero.
294
295        Example:
296        ```python
297        [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
298        ```
299        """
300        expressions = tuple(
301            implicit_convert_to_expression(arg) for arg in args)
302        return icepool.operator.MultisetUnion(*expressions)

The most of each outcome that appears in any of the multisets.

Same as a | b | c | ....

Any resulting counts that would be negative are set to zero.

Example:

[1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
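The corresponding `max(l, r)` rule, sketched the same way (illustration only):

```python
from collections import Counter

# Union takes the maximum count of each outcome across all inputs,
# clamped at zero.
def union(*multisets):
    counters = [Counter(m) for m in multisets]
    return sorted(o for o in set().union(*counters)
                  for _ in range(max(max(c[o] for c in counters), 0)))

union([1, 2, 2, 3], [1, 2, 4])  # -> [1, 2, 2, 3, 4]
```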
def symmetric_difference( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> MultisetExpression[~T]:
322    def symmetric_difference(
323            self,
324            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
325            /) -> 'MultisetExpression[T]':
326        """The elements that appear in the left or right multiset but not both.
327
328        Same as `a ^ b`.
329
330        Specifically, this produces the absolute difference between counts.
331        If you don't want negative counts to be used from the inputs, you can
332        do `+left ^ +right`.
333
334        Example:
335        ```python
336        [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
337        ```
338        """
339        return icepool.operator.MultisetSymmetricDifference(
340            self, implicit_convert_to_expression(other))

The elements that appear in the left or right multiset but not both.

Same as a ^ b.

Specifically, this produces the absolute difference between counts. If you don't want negative counts to be used from the inputs, you can do +left ^ +right.

Example:

[1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
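The absolute-difference rule `abs(l - r)` can likewise be sketched in plain Python (illustration only):

```python
from collections import Counter

# Symmetric difference keeps abs(left_count - right_count) of each outcome.
def symmetric_difference(left, right):
    l, r = Counter(left), Counter(right)
    return sorted(o for o in set(l) | set(r)
                  for _ in range(abs(l[o] - r[o])))

symmetric_difference([1, 2, 2, 3], [1, 2, 4])  # -> [2, 3, 4]
```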
def keep_outcomes( self, outcomes: Union[Callable[[~T], bool], Collection[~T], MultisetExpression[~T]], /) -> MultisetExpression[~T]:
342    def keep_outcomes(
343            self, outcomes:
344        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
345            /) -> 'MultisetExpression[T]':
346        """Keeps the designated outcomes, and drops the rest by setting their counts to zero.
347
348        This is similar to `intersection()`, except the right side is considered
349        to have unlimited multiplicity.
350
351        Args:
352            outcomes: A callable returning `True` iff the outcome should be kept,
353                or an expression or collection of outcomes to keep.
354        """
355        if isinstance(outcomes, MultisetExpression):
356            return icepool.operator.MultisetFilterOutcomesBinary(
357                self, outcomes)
358        else:
359            return icepool.operator.MultisetFilterOutcomes(self,
360                                                           outcomes=outcomes)

Keeps the designated outcomes, and drops the rest by setting their counts to zero.

This is similar to intersection(), except the right side is considered to have unlimited multiplicity.

Arguments:
  • outcomes: A callable returning True iff the outcome should be kept, or an expression or collection of outcomes to keep.
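The "unlimited multiplicity" point is what distinguishes this from `intersection()`; a plain-Python sketch (illustration only, not icepool's implementation):

```python
from collections import Counter

# Because the filter has unlimited multiplicity, every copy of a matching
# outcome is kept, regardless of how many times it appears.
def keep_outcomes(multiset, outcomes):
    test = outcomes if callable(outcomes) else (lambda o: o in set(outcomes))
    return sorted(o for o, n in Counter(multiset).items()
                  if test(o) for _ in range(n))

keep_outcomes([1, 2, 2, 3], [2, 4])           # -> [2, 2]
keep_outcomes([1, 2, 2, 3], lambda o: o % 2)  # keep odd outcomes -> [1, 3]
```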
def drop_outcomes( self, outcomes: Union[Callable[[~T], bool], Collection[~T], MultisetExpression[~T]], /) -> MultisetExpression[~T]:
362    def drop_outcomes(
363            self, outcomes:
364        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
365            /) -> 'MultisetExpression[T]':
366        """Drops the designated outcomes by setting their counts to zero, and keeps the rest.
367
368        This is similar to `difference()`, except the right side is considered
369        to have unlimited multiplicity.
370
371        Args:
372            outcomes: A callable returning `True` iff the outcome should be
373                dropped, or an expression or collection of outcomes to drop.
374        """
375        if isinstance(outcomes, MultisetExpression):
376            return icepool.operator.MultisetFilterOutcomesBinary(self,
377                                                                 outcomes,
378                                                                 invert=True)
379        else:
380            return icepool.operator.MultisetFilterOutcomes(self,
381                                                           outcomes=outcomes,
382                                                           invert=True)

Drops the designated outcomes by setting their counts to zero, and keeps the rest.

This is similar to difference(), except the right side is considered to have unlimited multiplicity.

Arguments:
  • outcomes: A callable returning True iff the outcome should be dropped, or an expression or collection of outcomes to drop.
def map_counts( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], function: Callable[..., int]) -> MultisetExpression[~T]:
386    def map_counts(*args:
387                   'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
388                   function: Callable[..., int]) -> 'MultisetExpression[T]':
389        """Maps the counts to new counts.
390
391        Args:
392            function: A function that takes `outcome, *counts` and produces a
393                combined count.
394        """
395        expressions = tuple(
396            implicit_convert_to_expression(arg) for arg in args)
397        return icepool.operator.MultisetMapCounts(*expressions,
398                                                  function=function)

Maps the counts to new counts.

Arguments:
  • function: A function that takes outcome, *counts and produces a combined count.
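A plain-Python sketch of the semantics (illustration only): the function receives the outcome plus that outcome's count in each input and returns one combined count.

```python
from collections import Counter

def map_counts(*multisets, function):
    counters = [Counter(m) for m in multisets]
    return sorted(
        o for o in set().union(*counters)
        for _ in range(max(function(o, *(c[o] for c in counters)), 0)))

# Example: count each outcome once per pairing across the two inputs.
map_counts([1, 2, 2, 3], [2, 3, 4],
           function=lambda o, a, b: a * b)  # -> [2, 2, 3]
```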
def multiply_counts( self, n: int, /) -> MultisetExpression[~T]:
411    def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]':
412        """Multiplies all counts by n.
413
414        Same as `self * n`.
415
416        Example:
417        ```python
418        Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
419        ```
420        """
421        return icepool.operator.MultisetMultiplyCounts(self, constant=n)

Multiplies all counts by n.

Same as self * n.

Example:

Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
def divide_counts( self, n: int, /) -> MultisetExpression[~T]:
449    def divide_counts(self, n: int, /) -> 'MultisetExpression[T]':
450        """Divides all counts by n (rounding down).
451
452        Same as `self // n`.
453
454        Example:
455        ```python
456        Pool([1, 2, 2, 3]) // 2 -> [2]
457        ```
458        """
459        return icepool.operator.MultisetFloordivCounts(self, constant=n)

Divides all counts by n (rounding down).

Same as self // n.

Example:

Pool([1, 2, 2, 3]) // 2 -> [2]
def modulo_counts( self, n: int, /) -> MultisetExpression[~T]:
466    def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]':
467        """Reduces all counts modulo n.
468
469        Same as `self % n`.
470
471        Example:
472        ```python
473        Pool([1, 2, 2, 3]) % 2 -> [1, 3]
474        ```
475        """
476        return self % n

Reduces all counts modulo n.

Same as self % n.

Example:

Pool([1, 2, 2, 3]) % 2 -> [1, 3]
def keep_counts( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], n: int, /) -> MultisetExpression[~T]:
488    def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=',
489                                              '>'], n: int,
490                    /) -> 'MultisetExpression[T]':
491        """Keeps counts fitting the comparison, treating the rest as zero.
492
493        For example, `expression.keep_counts('>=', 2)` would keep pairs,
494        triplets, etc. and drop singles.
495
496        ```python
497        Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
498        ```
499        
500        Args:
501            comparison: The comparison to use.
502            n: The number to compare counts against.
503        """
504        return icepool.operator.MultisetKeepCounts(self,
505                                                   comparison=comparison,
506                                                   constant=n)

Keeps counts fitting the comparison, treating the rest as zero.

For example, expression.keep_counts('>=', 2) would keep pairs, triplets, etc. and drop singles.

Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
Arguments:
  • comparison: The comparison to use.
  • n: The number to compare counts against.
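The "count if the comparison holds, else 0" rule from the operations table, sketched in plain Python (illustration only):

```python
import operator
from collections import Counter

_OPS = {'==': operator.eq, '!=': operator.ne, '<=': operator.le,
        '<': operator.lt, '>=': operator.ge, '>': operator.gt}

# Keep an outcome's full count when the comparison against n holds;
# otherwise treat its count as zero.
def keep_counts(multiset, comparison, n):
    return sorted(o for o, k in Counter(multiset).items()
                  if _OPS[comparison](k, n) for _ in range(k))

keep_counts([1, 2, 2, 3, 3, 3], '>=', 2)  # -> [2, 2, 3, 3, 3]
```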
def unique( self, n: int = 1, /) -> MultisetExpression[~T]:
508    def unique(self, n: int = 1, /) -> 'MultisetExpression[T]':
509        """Counts each outcome at most `n` times.
510
511        For example, `generator.unique(2)` would count each outcome at most
512        twice.
513
514        Example:
515        ```python
516        Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
517        ```
518        """
519        return icepool.operator.MultisetUnique(self, constant=n)

Counts each outcome at most n times.

For example, generator.unique(2) would count each outcome at most twice.

Example:

Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
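The `min(count, n)` rule, sketched in plain Python (illustration only):

```python
from collections import Counter

# Cap each outcome's count at n.
def unique(multiset, n=1):
    return sorted(o for o, k in Counter(multiset).items()
                  for _ in range(min(k, n)))

unique([1, 2, 2, 3])     # -> [1, 2, 3]
unique([1, 2, 2, 3], 2)  # -> [1, 2, 2, 3]
```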
def keep( self, index: Union[slice, Sequence[int | ellipsis], int]) -> Union[MultisetExpression[~T], Die[~T], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, ~T]]:
534    def keep(
535        self, index: slice | Sequence[int | EllipsisType] | int
536    ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]':
537        """Selects elements after drawing and sorting.
538
539        This is less capable than the `KeepGenerator` version.
540        In particular, it does not know how many elements it is selecting from,
541        so it must be anchored at the starting end. The advantage is that it
542        can be applied to any expression.
543
544        The valid types of argument are:
545
546        * A `slice`. If both start and stop are provided, they must both be
547            non-negative or both be negative. step is not supported.
548        * A sequence of `int` with `...` (`Ellipsis`) at exactly one end.
549            Each sorted element will be counted that many times, with the
550            `Ellipsis` treated as enough zeros (possibly "negative") to
551            fill the rest of the elements.
552        * An `int`, which evaluates by taking the element at the specified
553            index. In this case the result is a `Die`.
554
555        Negative incoming counts are treated as zero counts.
556
557        Use the `[]` operator for the same effect as this method.
558        """
559        if isinstance(index, int):
560            return icepool.evaluator.keep_evaluator.evaluate(self, index=index)
561        else:
562            return icepool.operator.MultisetKeep(self, index=index)

Selects elements after drawing and sorting.

This is less capable than the KeepGenerator version. In particular, it does not know how many elements it is selecting from, so it must be anchored at the starting end. The advantage is that it can be applied to any expression.

The valid types of argument are:

  • A slice. If both start and stop are provided, they must both be non-negative or both be negative. step is not supported.
  • A sequence of int with ... (Ellipsis) at exactly one end. Each sorted element will be counted that many times, with the Ellipsis treated as enough zeros (possibly "negative") to fill the rest of the elements.
  • An int, which evaluates by taking the element at the specified index. In this case the result is a Die.

Negative incoming counts are treated as zero counts.

Use the [] operator for the same effect as this method.
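The three index forms can be sketched in plain Python on an already-sorted hand (an illustration of the selection rule under the stated reading of the docstring, not icepool's implementation; the "possibly negative" padding case is omitted):

```python
# index may be an int, a slice, or a sequence of per-position
# multiplicities with Ellipsis at exactly one end.
def keep(sorted_hand, index):
    if isinstance(index, (int, slice)):
        return sorted_hand[index]
    counts = list(index)
    if counts[0] is Ellipsis:    # pad zeros at the low end
        counts = [0] * (len(sorted_hand) - len(counts) + 1) + counts[1:]
    else:                        # Ellipsis at the high end: pad zeros there
        counts = counts[:-1] + [0] * (len(sorted_hand) - len(counts) + 1)
    return [o for o, k in zip(sorted_hand, counts) for _ in range(k)]

keep([1, 2, 3, 4, 5], [..., 1, 1])  # keep the two highest -> [4, 5]
keep([1, 2, 3, 4, 5], [1, 1, ...])  # keep the two lowest  -> [1, 2]
```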

def lowest( self, keep: int | None = None, drop: int | None = None) -> MultisetExpression[~T]:
581    def lowest(self,
582               keep: int | None = None,
583               drop: int | None = None) -> 'MultisetExpression[T]':
584        """Keep some of the lowest elements from this multiset and drop the rest.
585
586        In contrast to the die and free function versions, this does not
587        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
588        Alternatively, you can perform some other evaluation.
589
590        This requires the outcomes to be evaluated in ascending order.
591
592        Args:
593            keep, drop: These arguments work together:
594                * If neither are provided, the single lowest element
595                    will be kept.
596                * If only `keep` is provided, the `keep` lowest elements
597                    will be kept.
598                * If only `drop` is provided, the `drop` lowest elements
599                    will be dropped and the rest will be kept.
600                * If both are provided, `drop` lowest elements will be dropped,
601                    then the next `keep` lowest elements will be kept.
602        """
603        index = lowest_slice(keep, drop)
604        return self.keep(index)

Keep some of the lowest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use .sum() afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in ascending order.

Arguments:
  • keep, drop: These arguments work together:
    • If neither are provided, the single lowest element will be kept.
    • If only keep is provided, the keep lowest elements will be kept.
    • If only drop is provided, the drop lowest elements will be dropped and the rest will be kept.
    • If both are provided, drop lowest elements will be dropped, then the next keep lowest elements will be kept.
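The keep/drop interaction above can be sketched on a plain sorted list (illustration only; icepool returns an expression, not a list):

```python
# drop elements are removed from the low end first, then keep elements
# are taken from what remains.
def lowest(sorted_hand, keep=None, drop=None):
    if keep is None and drop is None:
        keep = 1                       # default: the single lowest element
    start = drop or 0
    stop = None if keep is None else start + keep
    return sorted_hand[start:stop]

lowest([1, 2, 3, 4, 5])                  # -> [1]
lowest([1, 2, 3, 4, 5], keep=2)          # -> [1, 2]
lowest([1, 2, 3, 4, 5], drop=2)          # -> [3, 4, 5]
lowest([1, 2, 3, 4, 5], keep=2, drop=1)  # -> [2, 3]
```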
def highest( self, keep: int | None = None, drop: int | None = None) -> MultisetExpression[~T]:
606    def highest(self,
607                keep: int | None = None,
608                drop: int | None = None) -> 'MultisetExpression[T]':
609        """Keep some of the highest elements from this multiset and drop the rest.
610
611        In contrast to the die and free function versions, this does not
612        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
613        Alternatively, you can perform some other evaluation.
614
615        This requires the outcomes to be evaluated in descending order.
616
617        Args:
618            keep, drop: These arguments work together:
619                * If neither are provided, the single highest element
620                    will be kept.
621                * If only `keep` is provided, the `keep` highest elements
622                    will be kept.
623                * If only `drop` is provided, the `drop` highest elements
624                    will be dropped and the rest will be kept.
625                * If both are provided, `drop` highest elements will be dropped, 
626                    then the next `keep` highest elements will be kept.
627        """
628        index = highest_slice(keep, drop)
629        return self.keep(index)

Keep some of the highest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use .sum() afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in descending order.

Arguments:
  • keep, drop: These arguments work together:
    • If neither are provided, the single highest element will be kept.
    • If only `keep` is provided, the `keep` highest elements will be kept.
    • If only `drop` is provided, the `drop` highest elements will be dropped and the rest will be kept.
    • If both are provided, the `drop` highest elements will be dropped, then the next `keep` highest elements will be kept.
def sort_pair( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], other: MultisetExpression[~T], /, order: Order = <Order.Descending: -1>, extra: Literal['early', 'late', 'low', 'high', 'equal', 'keep', 'drop'] = 'drop') -> MultisetExpression[~T]:
633    def sort_pair(
634        self,
635        comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
636        other: 'MultisetExpression[T]',
637        /,
638        order: Order = Order.Descending,
639        extra: Literal['early', 'late', 'low', 'high', 'equal', 'keep',
640                       'drop'] = 'drop'
641    ) -> 'MultisetExpression[T]':
642        """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then keep the elements from `self` from each pair that fit the given comparison.
643
644        Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of
645        *RISK*. Which pairs did the attacker win?
646        ```python
647        d6.pool(3).highest(2).sort_pair('>', d6.pool(2))
648        ```
649        
650        Suppose the attacker rolled 6, 4, 3 and the defender 5, 5.
651        In this case the 4 would be blocked since the attacker lost that pair,
652        leaving the attacker's 6. If you want to keep the extra element (3), you 
653        can use the `extra` parameter.
654        ```python
655
656        Pool([6, 4, 3]).sort_pair('>', [5, 5]) -> [6]
657        Pool([6, 4, 3]).sort_pair('>', [5, 5], extra='keep') -> [6, 3]
658        ```
659
660        Contrast `max_pair_lowest()` and `max_pair_highest()`, which first 
661        create the maximum number of pairs that fit the comparison, not
662        necessarily in sorted order.
663        In the above example, `max_pair_highest()` would allow the defender
664        to assign their 5s to block both the 4 and the 3.
665
666        Negative incoming counts are treated as zero counts.
667        
668        Args:
669            comparison: The comparison to filter by. If you want to drop rather
670                than keep, use the complementary comparison:
671                * `'=='` vs. `'!='`
672                * `'<='` vs. `'>'`
673                * `'>='` vs. `'<'`
674            other: The other multiset to pair elements with.
675            order: The order in which to sort before forming pairs.
676                Default is descending.
677            extra: If the left operand has more elements than the right
678                operand, this determines what is done with the extra elements.
679                The default is `'drop'`.
680                * `'early'`, `'late'`: The extra elements are considered to   
681                    occur earlier or later in `order` than their missing
682                    counterparts.
683                * `'low'`, `'high'`, `'equal'`: The extra elements are 
684                    considered to be lower, higher, or equal to their missing
685                    counterparts.
686                * `'keep'`, `'drop'`: The extra elements are always kept or 
687                    dropped.
688        """
689        other = implicit_convert_to_expression(other)
690
691        return icepool.operator.MultisetSortPair(self,
692                                                 other,
693                                                 comparison=comparison,
694                                                 sort_order=order,
695                                                 extra=extra)

EXPERIMENTAL: Sort self and other and make pairs of one element from each, then keep the elements from self from each pair that fit the given comparison.

Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of RISK. Which pairs did the attacker win?

d6.pool(3).highest(2).sort_pair('>', d6.pool(2))

Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. In this case the 4 would be blocked since the attacker lost that pair, leaving the attacker's 6. If you want to keep the extra element (3), you can use the extra parameter.

Pool([6, 4, 3]).sort_pair('>', [5, 5]) -> [6]
Pool([6, 4, 3]).sort_pair('>', [5, 5], extra='keep') -> [6, 3]

Contrast max_pair_lowest() and max_pair_highest(), which first create the maximum number of pairs that fit the comparison, not necessarily in sorted order. In the above example, max_pair_highest() would allow the defender to assign their 5s to block both the 4 and the 3.

Negative incoming counts are treated as zero counts.

Arguments:
  • comparison: The comparison to filter by. If you want to drop rather than keep, use the complementary comparison:
    • '==' vs. '!='
    • '<=' vs. '>'
    • '>=' vs. '<'
  • other: The other multiset to pair elements with.
  • order: The order in which to sort before forming pairs. Default is descending.
  • extra: If the left operand has more elements than the right operand, this determines what is done with the extra elements. The default is 'drop'.
    • 'early', 'late': The extra elements are considered to
      occur earlier or later in order than their missing counterparts.
    • 'low', 'high', 'equal': The extra elements are considered to be lower, higher, or equal to their missing counterparts.
    • 'keep', 'drop': The extra elements are always kept or dropped.
def sort_pair_keep_while( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], other: MultisetExpression[~T], /, order: Order = <Order.Descending: -1>, extra: Literal['early', 'late', 'low', 'high', 'equal', 'continue', 'break'] = 'break'):
697    def sort_pair_keep_while(self,
698                             comparison: Literal['==', '!=', '<=', '<', '>=',
699                                                 '>'],
700                             other: 'MultisetExpression[T]',
701                             /,
702                             order: Order = Order.Descending,
703                             extra: Literal['early', 'late', 'low', 'high',
704                                            'equal', 'continue',
705                                            'break'] = 'break'):
706        """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and keep elements from `self` while the `comparison` holds, dropping the rest.
707
708        Negative incoming counts are treated as zero counts.
709
710        Args:
711            comparison: The comparison for which to continue the "while".
712            other: The other multiset to pair elements with.
713            order: The order in which to sort before forming pairs.
714                Default is descending.
715            extra: If the left operand has more elements than the right
716                operand, this determines what is done with the extra elements.
717                The default is `'break'`.
718                * `'early'`, `'late'`: The extra elements are considered to   
719                    occur earlier or later in `order` than their missing
720                    counterparts.
721                * `'low'`, `'high'`, `'equal'`: The extra elements are 
722                    considered to be lower, higher, or equal to their missing
723                    counterparts.
724                * `'continue'`, `'break'`: If the "while" still holds upon 
725                    reaching the extra elements, whether those elements
726                    continue to be kept.
727        """
728        other = implicit_convert_to_expression(other)
729        return icepool.operator.MultisetSortPairWhile(self,
730                                                      other,
731                                                      keep=True,
732                                                      comparison=comparison,
733                                                      sort_order=order,
734                                                      extra=extra)

EXPERIMENTAL: Sort self and other and make pairs of one element from each, then go through the pairs and keep elements from self while the comparison holds, dropping the rest.

Negative incoming counts are treated as zero counts.

Arguments:
  • comparison: The comparison for which to continue the "while".
  • other: The other multiset to pair elements with.
  • order: The order in which to sort before forming pairs. Default is descending.
  • extra: If the left operand has more elements than the right operand, this determines what is done with the extra elements. The default is 'break'.
    • 'early', 'late': The extra elements are considered to
      occur earlier or later in order than their missing counterparts.
    • 'low', 'high', 'equal': The extra elements are considered to be lower, higher, or equal to their missing counterparts.
    • 'continue', 'break': If the "while" still holds upon reaching the extra elements, whether those elements continue to be kept.
def sort_pair_drop_while( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], other: MultisetExpression[~T], /, order: Order = <Order.Descending: -1>, extra: Literal['early', 'late', 'low', 'high', 'equal', 'continue', 'break'] = 'break'):
736    def sort_pair_drop_while(self,
737                             comparison: Literal['==', '!=', '<=', '<', '>=',
738                                                 '>'],
739                             other: 'MultisetExpression[T]',
740                             /,
741                             order: Order = Order.Descending,
742                             extra: Literal['early', 'late', 'low', 'high',
743                                            'equal', 'continue',
744                                            'break'] = 'break'):
745        """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and drop elements from `self` while the `comparison` holds, keeping the rest.
746
747        Negative incoming counts are treated as zero counts.
748
749        Args:
750            comparison: The comparison for which to continue the "while".
751            other: The other multiset to pair elements with.
752            order: The order in which to sort before forming pairs.
753                Default is descending.
754            extra: If the left operand has more elements than the right
755                operand, this determines what is done with the extra elements.
756                The default is `'break'`.
757                * `'early'`, `'late'`: The extra elements are considered to   
758                    occur earlier or later in `order` than their missing
759                    counterparts.
760                * `'low'`, `'high'`, `'equal'`: The extra elements are 
761                    considered to be lower, higher, or equal to their missing
762                    counterparts.
763                * `'continue'`, `'break'`: If the "while" still holds upon 
764                    reaching the extra elements, whether those elements
765                    continue to be dropped.
766        """
767        other = implicit_convert_to_expression(other)
768        return icepool.operator.MultisetSortPairWhile(self,
769                                                      other,
770                                                      keep=False,
771                                                      comparison=comparison,
772                                                      sort_order=order,
773                                                      extra=extra)

EXPERIMENTAL: Sort self and other and make pairs of one element from each, then go through the pairs and drop elements from self while the comparison holds, keeping the rest.

Negative incoming counts are treated as zero counts.

Arguments:
  • comparison: The comparison for which to continue the "while".
  • other: The other multiset to pair elements with.
  • order: The order in which to sort before forming pairs. Default is descending.
  • extra: If the left operand has more elements than the right operand, this determines what is done with the extra elements. The default is 'break'.
    • 'early', 'late': The extra elements are considered to
      occur earlier or later in order than their missing counterparts.
    • 'low', 'high', 'equal': The extra elements are considered to be lower, higher, or equal to their missing counterparts.
    • 'continue', 'break': If the "while" still holds upon reaching the extra elements, whether those elements continue to be dropped.
def max_pair_highest( self, comparison: Literal['<=', '<'], other: MultisetExpression[~T], /, *, keep: Literal['paired', 'unpaired']) -> MultisetExpression[~T]:
775    def max_pair_highest(
776            self, comparison: Literal['<=',
777                                      '<'], other: 'MultisetExpression[T]', /,
778            *, keep: Literal['paired', 'unpaired']) -> 'MultisetExpression[T]':
779        """EXPERIMENTAL: Pair the highest elements from `self` with even higher (or equal) elements from `other`.
780
781        This pairs elements of `self` with elements of `other`, such that in
782        each pair the element from `self` fits the `comparison` with the
783        element from `other`. As many such pairs of elements will be created as 
784        possible, preferring the highest pairable elements of `self`.
785        Finally, either the paired or unpaired elements from `self` are kept.
786
787        This requires that outcomes be evaluated in descending order.
788
789        Negative incoming counts are treated as zero counts.
790
791        Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 
792        3d6. Defender dice can be used to block attacker dice of equal or lesser
793        value, and the defender prefers to block the highest attacker dice
794        possible. Which attacker dice were not blocked?
795        ```python
796        d6.pool(4).max_pair_highest('<=', d6.pool(3), keep='unpaired').sum()
797        ```
798
799        Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
800        Then the result would be [6, 1].
801        ```python
802        Pool([6, 4, 3, 1]).max_pair_highest('<=', [5, 5], keep='unpaired')
803        -> [6, 1]
804        ```
805
806        Contrast `sort_pair()`, which first creates pairs in
807        sorted order and then filters them by `comparison`.
808        In the above example, `sort_pair()` would force the defender to pair
809        against the 6 and the 4, which would only allow them to block the 4
810        and let the 6, 3, and 1 through.
811
812        There is no `max_pair` with `'=='` because this would mean the same
813        thing as `+self & +other` (if paired elements are kept), or
814        `+self - +other` (if unpaired elements are kept).
815
816        Args:
817            comparison: Either `'<='` or `'<'`.
818            other: The other multiset to pair elements with.
819            keep: Whether 'paired' or 'unpaired' elements are to be kept.
820        """
821        if keep == 'paired':
822            keep_boolean = True
823        elif keep == 'unpaired':
824            keep_boolean = False
825        else:
826            raise ValueError(f"keep must be either 'paired' or 'unpaired'")
827
828        other = implicit_convert_to_expression(other)
829        match comparison:
830            case '<=':
831                pair_equal = True
832            case '<':
833                pair_equal = False
834            case _:
835                raise ValueError(f'Invalid comparison {comparison}')
836        return icepool.operator.MultisetMaxPair(self,
837                                                other,
838                                                order=Order.Descending,
839                                                pair_equal=pair_equal,
840                                                keep=keep_boolean)

EXPERIMENTAL: Pair the highest elements from self with even higher (or equal) elements from other.

This pairs elements of self with elements of other, such that in each pair the element from self fits the comparison with the element from other. As many such pairs of elements will be created as possible, preferring the highest pairable elements of self. Finally, either the paired or unpaired elements from self are kept.

This requires that outcomes be evaluated in descending order.

Negative incoming counts are treated as zero counts.

Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 3d6. Defender dice can be used to block attacker dice of equal or lesser value, and the defender prefers to block the highest attacker dice possible. Which attacker dice were not blocked?

d6.pool(4).max_pair_highest('<=', d6.pool(3), keep='unpaired').sum()

Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. Then the result would be [6, 1].

Pool([6, 4, 3, 1]).max_pair_highest('<=', [5, 5], keep='unpaired')
-> [6, 1]

Contrast sort_pair(), which first creates pairs in sorted order and then filters them by comparison. In the above example, sort_pair() would force the defender to pair against the 6 and the 4, which would only allow them to block the 4 and let the 6, 3, and 1 through.

There is no max_pair with '==' because this would mean the same thing as +self & +other (if paired elements are kept), or +self - +other (if unpaired elements are kept).

Arguments:
  • comparison: Either '<=' or '<'.
  • other: The other multiset to pair elements with.
  • keep: Whether 'paired' or 'unpaired' elements are to be kept.
def max_pair_lowest( self, comparison: Literal['>=', '>'], other: MultisetExpression[~T], /, *, keep: Literal['paired', 'unpaired']) -> MultisetExpression[~T]:
842    def max_pair_lowest(
843            self, comparison: Literal['>=',
844                                      '>'], other: 'MultisetExpression[T]', /,
845            *, keep: Literal['paired', 'unpaired']) -> 'MultisetExpression[T]':
846        """EXPERIMENTAL: Pair the lowest elements from `self` with even lower (or equal) elements from `other`.
847
848        This pairs elements of `self` with elements of `other`, such that in
849        each pair the element from `self` fits the `comparison` with the
850        element from `other`. As many such pairs of elements will be created as 
851        possible, preferring the lowest pairable elements of `self`.
852        Finally, either the paired or unpaired elements from `self` are kept.
853
854        This requires that outcomes be evaluated in ascending order.
855
856        Negative incoming counts are treated as zero counts.
857
858        Contrast `sort_pair()`, which first creates pairs in
859        sorted order and then filters them by `comparison`.
860
861        Args:
862            comparison: Either `'>='` or `'>'`.
863            other: The other multiset to pair elements with.
864            keep: Whether 'paired' or 'unpaired' elements are to be kept.
865        """
866        if keep == 'paired':
867            keep_boolean = True
868        elif keep == 'unpaired':
869            keep_boolean = False
870        else:
871            raise ValueError(f"keep must be either 'paired' or 'unpaired'")
872
873        other = implicit_convert_to_expression(other)
874        match comparison:
875            case '>=':
876                pair_equal = True
877            case '>':
878                pair_equal = False
879            case _:
880                raise ValueError(f'Invalid comparison {comparison}')
881        return icepool.operator.MultisetMaxPair(self,
882                                                other,
883                                                order=Order.Ascending,
884                                                pair_equal=pair_equal,
885                                                keep=keep_boolean)

EXPERIMENTAL: Pair the lowest elements from self with even lower (or equal) elements from other.

This pairs elements of self with elements of other, such that in each pair the element from self fits the comparison with the element from other. As many such pairs of elements will be created as possible, preferring the lowest pairable elements of self. Finally, either the paired or unpaired elements from self are kept.

This requires that outcomes be evaluated in ascending order.

Negative incoming counts are treated as zero counts.

Contrast sort_pair(), which first creates pairs in sorted order and then filters them by comparison.

Arguments:
  • comparison: Either '>=' or '>'.
  • other: The other multiset to pair elements with.
  • keep: Whether 'paired' or 'unpaired' elements are to be kept.
def versus_all( self, comparison: Literal['<=', '<', '>=', '>'], other: MultisetExpression[~T]) -> MultisetExpression[~T]:
887    def versus_all(self, comparison: Literal['<=', '<', '>=', '>'],
888                   other: 'MultisetExpression[T]') -> 'MultisetExpression[T]':
889        """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against all elements of the other multiset.
890        
891        Args:
892            comparison: One of `'<=', '<', '>=', '>'`.
893            other: The other multiset to compare to. Negative counts are treated
894                as 0.
895        """
896        other = implicit_convert_to_expression(other)
897        lexi_tuple, order = compute_lexi_tuple_with_zero_right_first(
898            comparison)
899        return icepool.operator.MultisetVersus(self,
900                                               other,
901                                               lexi_tuple=lexi_tuple,
902                                               order=order)

EXPERIMENTAL: Keeps elements from self that fit the comparison against all elements of the other multiset.

Arguments:
  • comparison: One of '<=', '<', '>=', '>'.
  • other: The other multiset to compare to. Negative counts are treated as 0.
def versus_any( self, comparison: Literal['<=', '<', '>=', '>'], other: MultisetExpression[~T]) -> MultisetExpression[~T]:
904    def versus_any(self, comparison: Literal['<=', '<', '>=', '>'],
905                   other: 'MultisetExpression[T]') -> 'MultisetExpression[T]':
906        """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against any element of the other multiset.
907        
908        Args:
909            comparison: One of `'<=', '<', '>=', '>'`.
910            other: The other multiset to compare to. Negative counts are treated
911                as 0.
912        """
913        other = implicit_convert_to_expression(other)
914        lexi_tuple, order = compute_lexi_tuple_with_zero_right_first(
915            comparison)
916        lexi_tuple = tuple(reversed(lexi_tuple))  # type: ignore
917        order = -order
918
919        return icepool.operator.MultisetVersus(self,
920                                               other,
921                                               lexi_tuple=lexi_tuple,
922                                               order=order)

EXPERIMENTAL: Keeps elements from self that fit the comparison against any element of the other multiset.

Arguments:
  • comparison: One of '<=', '<', '>=', '>'.
  • other: The other multiset to compare to. Negative counts are treated as 0.
def expand( self, order: Order = <Order.Ascending: 1>) -> Union[Die[tuple[~T, ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[~T, ...]]]:
926    def expand(
927        self,
928        order: Order = Order.Ascending
929    ) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]':
930        """Evaluation: All elements of the multiset in ascending order.
931
932        This is expensive and not recommended unless there are few possibilities.
933
934        Args:
935            order: Whether the elements are in ascending (default) or descending
936                order.
937        """
938        return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order)

Evaluation: All elements of the multiset in ascending order.

This is expensive and not recommended unless there are few possibilities.

Arguments:
  • order: Whether the elements are in ascending (default) or descending order.
def sum( self, map: Union[Callable[[~T], ~U], Mapping[~T, ~U], NoneType] = None) -> Union[Die[~U], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, ~U]]:
940    def sum(
941        self,
942        map: Callable[[T], U] | Mapping[T, U] | None = None
943    ) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]':
944        """Evaluation: The sum of all elements."""
945        if map is None:
946            return icepool.evaluator.sum_evaluator.evaluate(self)
947        else:
948            return icepool.evaluator.SumEvaluator(map).evaluate(self)

Evaluation: The sum of all elements.

def size( self) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
950    def size(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
951        """Evaluation: The total number of elements in the multiset.
952
953        This is usually not very interesting unless some other operation is
954        performed first. Examples:
955
956        `generator.unique().size()` will count the number of unique outcomes.
957
958        `(generator & [4, 5, 6]).size()` will count up to one each of
959        4, 5, and 6.
960        """
961        return icepool.evaluator.size_evaluator.evaluate(self)

Evaluation: The total number of elements in the multiset.

This is usually not very interesting unless some other operation is performed first. Examples:

generator.unique().size() will count the number of unique outcomes.

(generator & [4, 5, 6]).size() will count up to one each of 4, 5, and 6.

def empty( self) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
963    def empty(
964            self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
965        """Evaluation: Whether the multiset contains only zero counts."""
966        return icepool.evaluator.empty_evaluator.evaluate(self)

Evaluation: Whether the multiset contains only zero counts.

def highest_outcome_and_count( self) -> Union[Die[tuple[~T, int]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[~T, int]]]:
968    def highest_outcome_and_count(
969        self
970    ) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]':
971        """Evaluation: The highest outcome with positive count, along with that count.
972
973        If no outcomes have positive count, the min outcome will be returned with 0 count.
974        """
975        return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate(
976            self)

Evaluation: The highest outcome with positive count, along with that count.

If no outcomes have positive count, the min outcome will be returned with 0 count.

def all_counts( self, filter: Union[int, Literal['all']] = 1) -> Union[Die[tuple[int, ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[int, ...]]]:
978    def all_counts(
979        self,
980        filter: int | Literal['all'] = 1
981    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]':
982        """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.
983
984        The sizes are in **descending** order.
985
986        Args:
987            filter: Any counts below this value will not be in the output.
988                For example, `filter=2` will only produce pairs and better.
989                If `'all'`, no filtering will be done.
990
991                Why not just place `keep_counts('>=')` before this?
992                `keep_counts('>=')` operates by setting counts to zero, so we
993                would still need an argument to specify whether we want to
994                output zero counts. So we might as well use the argument to do
995                both.
996        """
997        return icepool.evaluator.AllCountsEvaluator(
998            filter=filter).evaluate(self)

Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

The sizes are in descending order.

Arguments:
  • filter: Any counts below this value will not be in the output. For example, filter=2 will only produce pairs and better. If 'all', no filtering will be done.

    Why not just place keep_counts('>=') before this? keep_counts('>=') operates by setting counts to zero, so we would still need an argument to specify whether we want to output zero counts. So we might as well use the argument to do both.

def largest_count( self) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
1000    def largest_count(
1001            self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1002        """Evaluation: The size of the largest matching set among the elements."""
1003        return icepool.evaluator.largest_count_evaluator.evaluate(self)

Evaluation: The size of the largest matching set among the elements.

def largest_count_and_outcome( self) -> Union[Die[tuple[int, ~T]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[int, ~T]]]:
1005    def largest_count_and_outcome(
1006        self
1007    ) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]':
1008        """Evaluation: The largest matching set among the elements and the corresponding outcome."""
1009        return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate(
1010            self)

Evaluation: The largest matching set among the elements and the corresponding outcome.

def count_subset( self, divisor: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /, *, empty_divisor: int | None = None) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
1017    def count_subset(
1018        self,
1019        divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1020        /,
1021        *,
1022        empty_divisor: int | None = None
1023    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1024        """Evaluation: The number of times the divisor is contained in this multiset.
1025        
1026        Args:
1027            divisor: The multiset to divide by.
1028            empty_divisor: If the divisor is empty, the outcome will be this.
1029                If not set, `ZeroDivisionError` will be raised for an empty
1030                right side.
1031
1032        Raises:
1033            ZeroDivisionError: If the divisor may be empty and
1034                `empty_divisor` is not set.
1035        """
1036        divisor = implicit_convert_to_expression(divisor)
1037        return icepool.evaluator.CountSubsetEvaluator(
1038            empty_divisor=empty_divisor).evaluate(self, divisor)

Evaluation: The number of times the divisor is contained in this multiset.

Arguments:
  • divisor: The multiset to divide by.
  • empty_divisor: If the divisor is empty, the outcome will be this. If not set, ZeroDivisionError will be raised for an empty right side.
Raises:
  • ZeroDivisionError: If the divisor may be empty and empty_divisor is not set.
def largest_straight( self: MultisetExpression[int]) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, int]]:
1040    def largest_straight(
1041        self: 'MultisetExpression[int]'
1042    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]':
1043        """Evaluation: The size of the largest straight among the elements.
1044
1045        Outcomes must be `int`s.
1046        """
1047        return icepool.evaluator.largest_straight_evaluator.evaluate(self)

Evaluation: The size of the largest straight among the elements.

Outcomes must be ints.

def largest_straight_and_outcome( self: MultisetExpression[int], priority: Literal['low', 'high'] = 'high', /) -> Union[Die[tuple[int, int]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, tuple[int, int]]]:
1049    def largest_straight_and_outcome(
1050        self: 'MultisetExpression[int]',
1051        priority: Literal['low', 'high'] = 'high',
1052        /
1053    ) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]':
1054        """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.
1055
1056        Straight size is prioritized first, then the outcome.
1057
1058        Outcomes must be `int`s.
1059
1060        Args:
1061            priority: Controls which outcome within the straight is returned,
1062                and which straight is picked if there is a tie for largest
1063                straight.
1064        """
1065        if priority == 'high':
1066            return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate(
1067                self)
1068        elif priority == 'low':
1069            return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate(
1070                self)
1071        else:
1072            raise ValueError("priority must be 'low' or 'high'.")

Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.

Straight size is prioritized first, then the outcome.

Outcomes must be ints.

Arguments:
  • priority: Controls which outcome within the straight is returned, and which straight is picked if there is a tie for largest straight.
def all_straights( self: MultisetExpression[int]) -> Union[Die[tuple[int, ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, tuple[int, ...]]]:
1074    def all_straights(
1075        self: 'MultisetExpression[int]'
1076    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]':
1077        """Evaluation: The sizes of all straights.
1078
1079        The sizes are in **descending** order.
1080
1081        Each element can only contribute to one straight, though duplicate
1082        elements can produce straights that overlap in outcomes. In this case,
1083        elements are preferentially assigned to the longer straight.
1084        """
1085        return icepool.evaluator.all_straights_evaluator.evaluate(self)

Evaluation: The sizes of all straights.

The sizes are in descending order.

Each element can only contribute to one straight, though duplicate elements can produce straights that overlap in outcomes. In this case, elements are preferentially assigned to the longer straight.

def all_straights_reduce_counts( self: MultisetExpression[int], reducer: Callable[[int, int], int] = <built-in function mul>) -> Union[Die[tuple[tuple[int, int], ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]]:
1087    def all_straights_reduce_counts(
1088        self: 'MultisetExpression[int]',
1089        reducer: Callable[[int, int], int] = operator.mul
1090    ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]':
1091        """Experimental: All straights with a reduce operation on the counts.
1092
1093        This can be used to evaluate e.g. cribbage-style straight counting.
1094
1095        The result is a tuple of `(run_length, run_score)`s.
1096        """
1097        return icepool.evaluator.AllStraightsReduceCountsEvaluator(
1098            reducer=reducer).evaluate(self)

Experimental: All straights with a reduce operation on the counts.

This can be used to evaluate e.g. cribbage-style straight counting.

The result is a tuple of (run_length, run_score)s.

def argsort( self: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], order: Order = <Order.Descending: -1>, limit: int | None = None):
1100    def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1101                *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1102                order: Order = Order.Descending,
1103                limit: int | None = None):
1104        """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.
1105
1106        Example:
1107        ```python
1108        MultisetExpression.argsort([10, 9, 5], [9, 9])
1109        ```
1110        produces
1111        ```python
1112        ((0,), (0, 1, 1), (0,))
1113        ```
1114        
1115        Args:
1116            self, *args: The multiset expressions to be evaluated.
1117            order: Which order the ranks are to be emitted. Default is descending.
1118            limit: How many ranks to emit. Default will emit all ranks, which
1119                makes the length of each outcome equal to
1120                `additive_union(+self, +arg1, +arg2, ...).unique().size()`
1121        """
1122        self = implicit_convert_to_expression(self)
1123        converted_args = [implicit_convert_to_expression(arg) for arg in args]
1124        return icepool.evaluator.ArgsortEvaluator(order=order,
1125                                                  limit=limit).evaluate(
1126                                                      self, *converted_args)

Experimental: Returns the indexes of the originating multisets for each rank in their additive union.

Example:

MultisetExpression.argsort([10, 9, 5], [9, 9])

produces

((0,), (0, 1, 1), (0,))
Arguments:
  • self, *args: The multiset expressions to be evaluated.
  • order: Which order the ranks are to be emitted. Default is descending.
  • limit: How many ranks to emit. Default will emit all ranks, which makes the length of each outcome equal to additive_union(+self, +arg1, +arg2, ...).unique().size()
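For fixed (non-random) inputs, the documented example can be reproduced with a small pure-Python sketch: collect each value's originating input indexes, then emit one tuple per rank of the additive union. The helper below is an illustration of the semantics, not the icepool evaluator.

```python
from collections import defaultdict

def argsort(*multisets, descending=True):
    """For each rank in the additive union, which inputs supplied it (sketch)."""
    by_value = defaultdict(list)
    for index, multiset in enumerate(multisets):
        for value in multiset:
            by_value[value].append(index)
    ranks = sorted(by_value, reverse=descending)
    return tuple(tuple(sorted(by_value[value])) for value in ranks)

print(argsort([10, 9, 5], [9, 9]))  # → ((0,), (0, 1, 1), (0,))
```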
def issubset( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
1169    def issubset(
1170            self,
1171            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1172            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1173        """Evaluation: Whether this multiset is a subset of the other multiset.
1174
1175        Specifically, if this multiset has a lesser or equal count for each
1176        outcome than the other multiset, this evaluates to `True`; 
1177        if there is some outcome for which this multiset has a greater count 
1178        than the other multiset, this evaluates to `False`.
1179
1180        `issubset` is the same as `self <= other`.
1181        
1182        `self < other` evaluates a proper subset relation, which is the same
1183        except the result is `False` if the two multisets are exactly equal.
1184        """
1185        return self._compare(other, icepool.evaluator.IsSubsetEvaluator)

Evaluation: Whether this multiset is a subset of the other multiset.

Specifically, if this multiset has a lesser or equal count for each outcome than the other multiset, this evaluates to True; if there is some outcome for which this multiset has a greater count than the other multiset, this evaluates to False.

issubset is the same as self <= other.

self < other evaluates a proper subset relation, which is the same except the result is False if the two multisets are exactly equal.
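For fixed multisets, the subset relation described here is a per-outcome count comparison, sketched below with `collections.Counter` (the helper name is illustrative; icepool evaluates this over random multisets):

```python
from collections import Counter

def issubset(a, b):
    """True if every outcome's count in `a` is <= its count in `b` (sketch)."""
    ca, cb = Counter(a), Counter(b)
    return all(ca[outcome] <= cb[outcome] for outcome in ca)

print(issubset([1, 2, 2], [1, 2, 2, 3]))  # → True
print(issubset([2, 2, 2], [1, 2, 2, 3]))  # → False
```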

def issuperset( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
1204    def issuperset(
1205            self,
1206            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1207            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1208        """Evaluation: Whether this multiset is a superset of the other multiset.
1209
1210        Specifically, if this multiset has a greater or equal count for each
1211        outcome than the other multiset, this evaluates to `True`; 
1212        if there is some outcome for which this multiset has a lesser count 
1213        than the other multiset, this evaluates to `False`.
1214        
1215        A typical use of this evaluation is testing for the presence of a
1216        combo of cards in a hand, e.g.
1217
1218        ```python
1219        deck.deal(5) >= ['a', 'a', 'b']
1220        ```
1221
1222        represents the chance that a deal of 5 cards contains at least two 'a's
1223        and one 'b'.
1224
1225        `issuperset` is the same as `self >= other`.
1226
1227        `self > other` evaluates a proper superset relation, which is the same
1228        except the result is `False` if the two multisets are exactly equal.
1229        """
1230        return self._compare(other, icepool.evaluator.IsSupersetEvaluator)

Evaluation: Whether this multiset is a superset of the other multiset.

Specifically, if this multiset has a greater or equal count for each outcome than the other multiset, this evaluates to True; if there is some outcome for which this multiset has a lesser count than the other multiset, this evaluates to False.

A typical use of this evaluation is testing for the presence of a combo of cards in a hand, e.g.

deck.deal(5) >= ['a', 'a', 'b']

represents the chance that a deal of 5 cards contains at least two 'a's and one 'b'.

issuperset is the same as self >= other.

self > other evaluates a proper superset relation, which is the same except the result is False if the two multisets are exactly equal.

def isdisjoint( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
1262    def isdisjoint(
1263            self,
1264            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1265            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1266        """Evaluation: Whether this multiset is disjoint from the other multiset.
1267        
1268        Specifically, this evaluates to `False` if there is any outcome for
1269        which both multisets have positive count, and `True` if there is not.
1270
1271        Negative incoming counts are treated as zero counts.
1272        """
1273        return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)

Evaluation: Whether this multiset is disjoint from the other multiset.

Specifically, this evaluates to False if there is any outcome for which both multisets have positive count, and True if there is not.

Negative incoming counts are treated as zero counts.
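The disjointness test over fixed multisets can be sketched as checking that no outcome has positive count on both sides; the clamping of negative counts to zero falls out of the `> 0` comparison. An illustrative helper, not icepool's evaluator:

```python
from collections import Counter

def isdisjoint(a, b):
    """True if no outcome has positive count in both multisets (sketch)."""
    ca, cb = Counter(a), Counter(b)
    # Negative counts compare as <= 0, so they are effectively treated as zero.
    return not any(ca[o] > 0 and cb[o] > 0 for o in ca)

print(isdisjoint([1, 2], [3, 4]))  # → True
print(isdisjoint([1, 2], [2, 3]))  # → False
```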

def leximin( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'], other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /, extra: Literal['low', 'high', 'drop'] = 'high') -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
1277    def leximin(
1278        self,
1279        comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'],
1280        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1281        /,
1282        extra: Literal['low', 'high', 'drop'] = 'high'
1283    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1284        """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in ascending order.
1285
1286        Compares the lowest element of each multiset; if they are equal,
1287        compares the next-lowest element, and so on.
1288        
1289        Args:
1290            comparison: The comparison to use.
1291            other: The multiset to compare to.
1292            extra: If one side has more elements than the other, how the extra
1293                elements are considered compared to their missing counterparts.
1294        """
1295        lexi_tuple = compute_lexi_tuple_with_extra(comparison, Order.Ascending,
1296                                                   extra)
1297        return icepool.evaluator.lexi_comparison_evaluator.evaluate(
1298            self,
1299            implicit_convert_to_expression(other),
1300            sort_order=Order.Ascending,
1301            lexi_tuple=lexi_tuple)

Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in ascending order.

Compares the lowest element of each multiset; if they are equal, compares the next-lowest element, and so on.

Arguments:
  • comparison: The comparison to use.
  • other: The multiset to compare to.
  • extra: If one side has more elements than the other, how the extra elements are considered compared to their missing counterparts.
def leximax( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'], other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /, extra: Literal['low', 'high', 'drop'] = 'high') -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
1303    def leximax(
1304        self,
1305        comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'],
1306        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1307        /,
1308        extra: Literal['low', 'high', 'drop'] = 'high'
1309    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
1310        """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in descending order.
1311
1312        Compares the highest element of each multiset; if they are equal,
1313        compares the next-highest element, and so on.
1314        
1315        Args:
1316            comparison: The comparison to use.
1317            other: The multiset to compare to.
1318            extra: If one side has more elements than the other, how the extra
1319                elements are considered compared to their missing counterparts.
1320        """
1321        lexi_tuple = compute_lexi_tuple_with_extra(comparison,
1322                                                   Order.Descending, extra)
1323        return icepool.evaluator.lexi_comparison_evaluator.evaluate(
1324            self,
1325            implicit_convert_to_expression(other),
1326            sort_order=Order.Descending,
1327            lexi_tuple=lexi_tuple)

Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in descending order.

Compares the highest element of each multiset; if they are equal, compares the next-highest element, and so on.

Arguments:
  • comparison: The comparison to use.
  • other: The multiset to compare to.
  • extra: If one side has more elements than the other, how the extra elements are considered compared to their missing counterparts.
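For two fixed multisets of equal size, `leximax` reduces to an ordinary lexicographic comparison of the elements sorted in descending order. The sketch below covers only that equal-size case and omits the `extra` handling for unequal sizes; the helper name is illustrative.

```python
def leximax_cmp(a, b):
    """Compare two same-size multisets highest-element-first (sketch).

    Returns -1, 0, or 1, like a classic cmp function.
    """
    sa, sb = sorted(a, reverse=True), sorted(b, reverse=True)
    return (sa > sb) - (sa < sb)

print(leximax_cmp([6, 3, 1], [6, 2, 2]))  # → 1 (6 ties 6, then 3 > 2)
```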
def force_order( self, force_order: Order) -> MultisetExpression[~T]:
1330    def force_order(self, force_order: Order) -> 'MultisetExpression[T]':
1331        """Forces outcomes to be seen by the evaluator in the given order.
1332
1333        This can be useful for debugging / testing.
1334        """
1335        if force_order == Order.Any:
1336            return self
1337        return icepool.operator.MultisetForceOrder(self,
1338                                                   force_order=force_order)

Forces outcomes to be seen by the evaluator in the given order.

This can be useful for debugging / testing.

class MultisetEvaluator(icepool.evaluator.multiset_evaluator_base.MultisetEvaluatorBase[~T, +U_co]):
 21class MultisetEvaluator(MultisetEvaluatorBase[T, U_co]):
 22    """Evaluates a multiset based on a state transition function."""
 23
 24    @abstractmethod
 25    def next_state(self, state: Hashable, order: Order, outcome: T, /,
 26                   *counts) -> Hashable:
 27        """State transition function.
 28
 29        This should produce a state given the previous state, an outcome,
 30        the count of that outcome produced by each multiset input, and any
 31        **kwargs provided to `evaluate()`.
 32
 33        `evaluate()` will always call this with `state, order, outcome, *counts` as
 34        positional arguments. Furthermore, there is no expectation that a 
 35        subclass be able to handle an arbitrary number of counts. 
 36        
 37        Thus, you are free to:
 38        * Rename `state` or `outcome` in a subclass.
 39        * Replace `*counts` with a fixed set of parameters.
 40
 41        States must be hashable. Currently, they do not have to be orderable.
 42        However, this may change in the future, and if they are not totally
 43        orderable, you must override `final_outcome` to create totally orderable
 44        final outcomes.
 45
 46        Returning a `Die` is not supported.
 47
 48        Args:
 49            state: A hashable object indicating the state before rolling the
 50                current outcome. If `initial_state()` is not overridden, the
 51                initial state is `None`.
 52            order: The order in which outcomes are seen. You can raise an 
 53                `UnsupportedOrder` if you don't want to support the current 
 54                order. In this case, the opposite order will then be attempted
 55                (if it hasn't already been attempted).
 56            outcome: The current outcome.
 57                If there are multiple inputs, the set of outcomes is at 
 58        least the union of the outcomes of the individual inputs. 
 59                You can use `extra_outcomes()` to add extra outcomes.
 60            *counts: One value (usually an `int`) for each input indicating how
 61                many of the current outcome were produced. You may replace this
 62                with a fixed series of parameters.
 63
 64        Returns:
 65            A hashable object indicating the next state.
 66            The special value `icepool.Reroll` can be used to immediately remove
 67            the state from consideration, effectively performing a full reroll.
 68        """
 69
 70    def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]:
 71        """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.
 72
 73        These will be seen by `next_state` even if they do not appear in the
 74        input(s). The default implementation returns `()`, or no additional
 75        outcomes.
 76
 77        If you want `next_state` to see consecutive `int` outcomes, you can set
 78        `extra_outcomes = icepool.MultisetEvaluator.consecutive`.
 79        See `consecutive()` below.
 80
 81        Args:
 82            outcomes: The outcomes that could be produced by the inputs, in
 83                ascending order.
 84        """
 85        return ()
 86
 87    def initial_state(self, order: Order, outcomes: Sequence[T], /, *sizes,
 88                      **kwargs: Hashable):
 89        """Optional method to produce an initial evaluation state.
 90
 91        If not overridden, the initial state is `None`. Note that this is not a
 92        valid `final_outcome()`.
 93
 94        All non-keyword arguments will be given positionally, so you are free
 95        to:
 96        * Rename any of them.
 97        * Replace `sizes` with a fixed series of arguments.
 98        * Replace trailing positional arguments with `*_` if you don't care
 99            about them.
100
101        Args:
102            order: The order in which outcomes will be seen by `next_state()`.
103            outcomes: All outcomes that will be seen by `next_state()`.
104            sizes: The sizes of the input multisets, provided
105                that the multiset has inferrable size with non-negative
106                counts. If not, the corresponding size is None.
107            kwargs: Non-multiset arguments that were provided to `evaluate()`.
108                You may replace `**kwargs` with a fixed set of keyword
109                parameters; `final_outcome()` should take the same set of
110                keyword parameters.
111
112        Raises:
113            UnsupportedOrder: If the given order is not supported.
114        """
115        return None
116
117    def final_outcome(
118            self, final_state: Hashable, order: Order, outcomes: tuple[T, ...],
119            /, *sizes, **kwargs: Hashable
120    ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType':
121        """Optional method to generate a final output outcome from a final state.
122
123        By default, the final outcome is equal to the final state.
124        Note that `None` is not a valid outcome for a `Die`,
125        and if there are no outcomes, `final_outcome` will immediately
126        be called with `final_state=None`.
127        Subclasses that want to handle this case should explicitly define what
128        happens.
129
130        All non-keyword arguments will be given positionally, so you are free
131        to:
132        * Rename any of them.
133        * Replace `sizes` with a fixed series of arguments.
134        * Replace trailing positional arguments with `*_` if you don't care
135            about them.
136
137        Args:
138            final_state: A state after all outcomes have been processed.
139            order: The order in which outcomes were seen by `next_state()`.
140            outcomes: All outcomes that were seen by `next_state()`.
141            sizes: The sizes of the input multisets, provided
142                that the multiset has inferrable size with non-negative
143                counts. If not, the corresponding size is None.
144            kwargs: Non-multiset arguments that were provided to `evaluate()`.
145                You may replace `**kwargs` with a fixed set of keyword
146                parameters; `initial_state()` should take the same set of
147                keyword parameters.
148
149        Returns:
150            A final outcome that will be used as part of constructing the result `Die`.
151            As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
152        """
153        # If not overridden, the final_state should have type U_co.
154        return cast(U_co, final_state)
155
156    def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
157        """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.
158
159        Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this.
160
161        Returns:
162            All `int`s from the min outcome to the max outcome among the inputs,
163            inclusive.
164
165        Raises:
166            TypeError: if any input has any non-`int` outcome.
167        """
168        if not outcomes:
169            return ()
170
171        if any(not isinstance(x, int) for x in outcomes):
172            raise TypeError(
173                "consecutive cannot be used with outcomes of type other than 'int'."
174            )
175
176        return range(outcomes[0], outcomes[-1] + 1)
177
178    @property
179    def next_state_key(self) -> Hashable:
180        """Subclasses may optionally provide a key that uniquely identifies the `next_state()` computation.
181
182        This is used to persistently cache intermediate results between calls
183        to `evaluate()`. By default, `next_state_key` is `None`, which only
184        caches if not inside a `@multiset_function`.
185        
186        If you do implement this, `next_state_key` should include any members 
187        used in `next_state()` but does NOT need to include members that are 
188        only used in other methods, i.e. 
189        * `extra_outcomes()`
190        * `initial_state()`
191        * `final_outcome()`.
192        
193        For example, if `next_state()` is a pure function other than being 
194        defined by its evaluator class, you can use `type(self)`.
195
196        If you want to disable caching between calls to `evaluate()` even
197        outside of `@multiset_function`, return the special value 
198        `icepool.NoCache`.
199        """
200        return None
201
202    def _prepare(
203        self,
204        input_exps: tuple[MultisetExpressionBase[T, Any], ...],
205        kwargs: Mapping[str, Hashable],
206    ) -> Iterator[tuple['Dungeon[T]', 'Quest[T, U_co]',
207                        'tuple[MultisetSourceBase[T, Any], ...]', int]]:
208
209        for t in itertools.product(*(exp._prepare() for exp in input_exps)):
210            if t:
211                dungeonlet_flats, questlet_flats, sources, weights = zip(*t)
212            else:
213                dungeonlet_flats = ()
214                questlet_flats = ()
215                sources = ()
216                weights = ()
217            next_state_key: Hashable
218            if self.next_state_key is None:
219                # This should only get cached inside this evaluator, but add
220                # self id to be safe.
221                next_state_key = id(self)
222                multiset_function_can_cache = False
223            elif self.next_state_key is icepool.NoCache:
224                next_state_key = icepool.NoCache
225                multiset_function_can_cache = False
226            else:
227                next_state_key = self.next_state_key
228                multiset_function_can_cache = True
229            dungeon: MultisetEvaluatorDungeon[T] = MultisetEvaluatorDungeon(
230                self.next_state, next_state_key, multiset_function_can_cache,
231                dungeonlet_flats)
232            quest: MultisetEvaluatorQuest[T, U_co] = MultisetEvaluatorQuest(
233                self.initial_state, self.extra_outcomes, self.final_outcome,
234                questlet_flats)
235            sources = tuple(itertools.chain.from_iterable(sources))
236            weight = math.prod(weights)
237            yield dungeon, quest, sources, weight
238
239    def _should_cache(self, dungeon: 'Dungeon[T]') -> bool:
240        return dungeon.__hash__ is not None

Evaluates a multiset based on a state transition function.

@abstractmethod
def next_state( self, state: Hashable, order: Order, outcome: ~T, /, *counts) -> Hashable:
24    @abstractmethod
25    def next_state(self, state: Hashable, order: Order, outcome: T, /,
26                   *counts) -> Hashable:
27        """State transition function.
28
29        This should produce a state given the previous state, an outcome,
30        the count of that outcome produced by each multiset input, and any
31        **kwargs provided to `evaluate()`.
32
33        `evaluate()` will always call this with `state, order, outcome, *counts` as
34        positional arguments. Furthermore, there is no expectation that a 
35        subclass be able to handle an arbitrary number of counts. 
36        
37        Thus, you are free to:
38        * Rename `state` or `outcome` in a subclass.
39        * Replace `*counts` with a fixed set of parameters.
40
41        States must be hashable. Currently, they do not have to be orderable.
42        However, this may change in the future, and if they are not totally
43        orderable, you must override `final_outcome` to create totally orderable
44        final outcomes.
45
46        Returning a `Die` is not supported.
47
48        Args:
49            state: A hashable object indicating the state before rolling the
50                current outcome. If `initial_state()` is not overridden, the
51                initial state is `None`.
52            order: The order in which outcomes are seen. You can raise an 
53                `UnsupportedOrder` if you don't want to support the current 
54                order. In this case, the opposite order will then be attempted
55                (if it hasn't already been attempted).
56            outcome: The current outcome.
57                If there are multiple inputs, the set of outcomes is at 
58        least the union of the outcomes of the individual inputs. 
59                You can use `extra_outcomes()` to add extra outcomes.
60            *counts: One value (usually an `int`) for each input indicating how
61                many of the current outcome were produced. You may replace this
62                with a fixed series of parameters.
63
64        Returns:
65            A hashable object indicating the next state.
66            The special value `icepool.Reroll` can be used to immediately remove
67            the state from consideration, effectively performing a full reroll.
68        """

State transition function.

This should produce a state given the previous state, an outcome, the count of that outcome produced by each multiset input, and any **kwargs provided to evaluate().

evaluate() will always call this with state, order, outcome, *counts as positional arguments. Furthermore, there is no expectation that a subclass be able to handle an arbitrary number of counts.

Thus, you are free to:

  • Rename state or outcome in a subclass.
  • Replace *counts with a fixed set of parameters.

States must be hashable. Currently, they do not have to be orderable. However, this may change in the future, and if they are not totally orderable, you must override final_outcome to create totally orderable final outcomes.

Returning a Die is not supported.

Arguments:
  • state: A hashable object indicating the state before rolling the current outcome. If initial_state() is not overridden, the initial state is None.
  • order: The order in which outcomes are seen. You can raise an UnsupportedOrder if you don't want to support the current order. In this case, the opposite order will then be attempted (if it hasn't already been attempted).
  • outcome: The current outcome. If there are multiple inputs, the set of outcomes is at least the union of the outcomes of the individual inputs. You can use extra_outcomes() to add extra outcomes.
  • *counts: One value (usually an int) for each input indicating how many of the current outcome were produced. You may replace this with a fixed series of parameters.
Returns:

A hashable object indicating the next state. The special value icepool.Reroll can be used to immediately remove the state from consideration, effectively performing a full reroll.
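Mechanically, the evaluation described above is a fold: the state is threaded through one `next_state` call per outcome. The stripped-down sketch below (single input, no `order` argument, a hypothetical sum evaluator) only illustrates that mechanic; the real signature is `next_state(state, order, outcome, *counts)` and icepool runs this over whole probability distributions, not one fixed roll.

```python
def evaluate(next_state, outcome_counts, initial_state=None):
    """Fold a next_state-style transition over (outcome, count) pairs."""
    state = initial_state
    for outcome, count in outcome_counts:
        state = next_state(state, outcome, count)
    return state

def sum_next_state(state, outcome, count):
    # A transition that sums outcome * count, like a sum evaluator would.
    # The initial state is None, standing in for "nothing seen yet".
    return (state or 0) + outcome * count

# Counts of a fixed roll {1: 2, 4: 1}, seen in ascending order.
print(evaluate(sum_next_state, [(1, 2), (4, 1)]))  # → 6
```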

def extra_outcomes(self, outcomes: Sequence[~T]) -> Collection[~T]:
70    def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]:
71        """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.
72
73        These will be seen by `next_state` even if they do not appear in the
74        input(s). The default implementation returns `()`, or no additional
75        outcomes.
76
77        If you want `next_state` to see consecutive `int` outcomes, you can set
78        `extra_outcomes = icepool.MultisetEvaluator.consecutive`.
79        See `consecutive()` below.
80
81        Args:
82            outcomes: The outcomes that could be produced by the inputs, in
83                ascending order.
84        """
85        return ()

Optional method to specify extra outcomes that should be seen as inputs to next_state().

These will be seen by next_state even if they do not appear in the input(s). The default implementation returns (), or no additional outcomes.

If you want next_state to see consecutive int outcomes, you can set extra_outcomes = icepool.MultisetEvaluator.consecutive. See consecutive() below.

Arguments:
  • outcomes: The outcomes that could be produced by the inputs, in ascending order.
def initial_state( self, order: Order, outcomes: Sequence[~T], /, *sizes, **kwargs: Hashable):
 87    def initial_state(self, order: Order, outcomes: Sequence[T], /, *sizes,
 88                      **kwargs: Hashable):
 89        """Optional method to produce an initial evaluation state.
 90
 91        If not overridden, the initial state is `None`. Note that this is not a
 92        valid `final_outcome()`.
 93
 94        All non-keyword arguments will be given positionally, so you are free
 95        to:
 96        * Rename any of them.
 97        * Replace `sizes` with a fixed series of arguments.
 98        * Replace trailing positional arguments with `*_` if you don't care
 99            about them.
100
101        Args:
102            order: The order in which outcomes will be seen by `next_state()`.
103            outcomes: All outcomes that will be seen by `next_state()`.
104            sizes: The sizes of the input multisets, provided
105                that the multiset has inferrable size with non-negative
106                counts. If not, the corresponding size is None.
107            kwargs: Non-multiset arguments that were provided to `evaluate()`.
108                You may replace `**kwargs` with a fixed set of keyword
109                parameters; `final_outcome()` should take the same set of
110                keyword parameters.
111
112        Raises:
113            UnsupportedOrder if the given order is not supported.
114        """
115        return None

Optional method to produce an initial evaluation state.

If not overridden, the initial state is None. Note that this is not a valid final_outcome().

All non-keyword arguments will be given positionally, so you are free to:

  • Rename any of them.
  • Replace sizes with a fixed series of arguments.
  • Replace trailing positional arguments with *_ if you don't care about them.
Arguments:
  • order: The order in which outcomes will be seen by next_state().
  • outcomes: All outcomes that will be seen by next_state().
  • sizes: The sizes of the input multisets, provided that the multiset has inferrable size with non-negative counts. If not, the corresponding size is None.
  • kwargs: Non-multiset arguments that were provided to evaluate(). You may replace **kwargs with a fixed set of keyword parameters; final_outcome() should take the same set of keyword parameters.
Raises:
  • UnsupportedOrder if the given order is not supported.
def final_outcome( self, final_state: Hashable, order: Order, outcomes: tuple[~T, ...], /, *sizes, **kwargs: Hashable) -> Union[+U_co, Die[+U_co], RerollType]:
117    def final_outcome(
118            self, final_state: Hashable, order: Order, outcomes: tuple[T, ...],
119            /, *sizes, **kwargs: Hashable
120    ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType':
121        """Optional method to generate a final output outcome from a final state.
122
123        By default, the final outcome is equal to the final state.
124        Note that `None` is not a valid outcome for a `Die`,
125        and if there are no outcomes, `final_outcome` will immediately
126        be called with `final_state=None`.
127        Subclasses that want to handle this case should explicitly define what
128        happens.
129
130        All non-keyword arguments will be given positionally, so you are free
131        to:
132        * Rename any of them.
133        * Replace `sizes` with a fixed series of arguments.
134        * Replace trailing positional arguments with `*_` if you don't care
135            about them.
136
137        Args:
138            final_state: A state after all outcomes have been processed.
139            order: The order in which outcomes were seen by `next_state()`.
140            outcomes: All outcomes that were seen by `next_state()`.
141            sizes: The sizes of the input multisets, provided
142                that the multiset has inferrable size with non-negative
143                counts. If not, the corresponding size is None.
144            kwargs: Non-multiset arguments that were provided to `evaluate()`.
145                You may replace `**kwargs` with a fixed set of keyword
146                parameters; `initial_state()` should take the same set of
147                keyword parameters.
148
149        Returns:
150            A final outcome that will be used as part of constructing the result `Die`.
151            As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
152        """
153        # If not overridden, the final_state should have type U_co.
154        return cast(U_co, final_state)

Optional method to generate a final output outcome from a final state.

By default, the final outcome is equal to the final state. Note that None is not a valid outcome for a Die, and if there are no outcomes, final_outcome will immediately be called with final_state=None. Subclasses that want to handle this case should explicitly define what happens.

All non-keyword arguments will be given positionally, so you are free to:

  • Rename any of them.
  • Replace sizes with a fixed series of arguments.
  • Replace trailing positional arguments with *_ if you don't care about them.
Arguments:
  • final_state: A state after all outcomes have been processed.
  • order: The order in which outcomes were seen by next_state().
  • outcomes: All outcomes that were seen by next_state().
  • sizes: The sizes of the input multisets, provided that the multiset has inferrable size with non-negative counts. If not, the corresponding size is None.
  • kwargs: Non-multiset arguments that were provided to evaluate(). You may replace **kwargs with a fixed set of keyword parameters; initial_state() should take the same set of keyword parameters.
Returns:

A final outcome that will be used as part of constructing the result Die. As usual for Die(), this could itself be a Die or icepool.Reroll.
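Taken together, initial_state(), next_state(), and final_outcome() form a fold over the sorted outcomes. As an illustration, the following standalone sketch mimics that contract without inheriting icepool's MultisetEvaluator (the hook signatures are simplified); it tracks the longest run of consecutive integers:

```python
class LargestStraightSketch:
    """Standalone sketch of the evaluator lifecycle (not an icepool class)."""

    def initial_state(self, order, outcomes):
        # (current run length, best run length seen so far)
        return (0, 0)

    def next_state(self, state, order, outcome, count):
        run, best = state
        run = run + 1 if count > 0 else 0
        return (run, max(best, run))

    def final_outcome(self, final_state, order, outcomes):
        return final_state[1]


def evaluate_sketch(evaluator, counts):
    # Drive the hooks the way the framework would: ascending order over
    # consecutive int outcomes (as extra_outcomes = consecutive would supply).
    outcomes = sorted(counts)
    state = evaluator.initial_state('Ascending', outcomes)
    for outcome in range(outcomes[0], outcomes[-1] + 1):
        state = evaluator.next_state(state, 'Ascending', outcome,
                                     counts.get(outcome, 0))
    return evaluator.final_outcome(state, 'Ascending', outcomes)


# Dice showing {1, 2, 3, 5}: the longest straight is 1-2-3.
print(evaluate_sketch(LargestStraightSketch(), {1: 1, 2: 1, 3: 1, 5: 1}))  # 3
```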

def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
156    def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
157        """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.
158
159        Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this.
160
161        Returns:
162            All `int`s from the min outcome to the max outcome among the inputs,
163            inclusive.
164
165        Raises:
166            TypeError: if any input has any non-`int` outcome.
167        """
168        if not outcomes:
169            return ()
170
171        if any(not isinstance(x, int) for x in outcomes):
172            raise TypeError(
173                "consecutive cannot be used with outcomes of type other than 'int'."
174            )
175
176        return range(outcomes[0], outcomes[-1] + 1)

Example implementation of extra_outcomes() that produces consecutive int outcomes.

Set extra_outcomes = icepool.MultisetEvaluator.consecutive to use this.

Returns:

All ints from the min outcome to the max outcome among the inputs, inclusive.

Raises:
  • TypeError: if any input has any non-int outcome.
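The range logic is simple enough to check in isolation. A standalone copy of the computation (not importing icepool):

```python
def consecutive_sketch(outcomes):
    # Mirrors the listing above: all ints from the min to the max outcome,
    # inclusive. Outcomes are assumed to arrive in ascending order.
    if not outcomes:
        return ()
    if any(not isinstance(x, int) for x in outcomes):
        raise TypeError(
            "consecutive cannot be used with outcomes of type other than 'int'.")
    return range(outcomes[0], outcomes[-1] + 1)


print(list(consecutive_sketch([2, 3, 7])))  # [2, 3, 4, 5, 6, 7]
```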
next_state_key: Hashable
178    @property
179    def next_state_key(self) -> Hashable:
180        """Subclasses may optionally provide a key that uniquely identifies the `next_state()` computation.
181
182        This is used to persistently cache intermediate results between calls
183        to `evaluate()`. By default, `next_state_key` is `None`, which only
184        caches if not inside a `@multiset_function`.
185        
186        If you do implement this, `next_state_key` should include any members 
187        used in `next_state()` but does NOT need to include members that are 
188        only used in other methods, i.e. 
189        * `extra_outcomes()`
190        * `initial_state()`
191        * `final_outcome()`.
192        
193        For example, if `next_state()` is a pure function other than being 
194        defined by its evaluator class, you can use `type(self)`.
195
196        If you want to disable caching between calls to `evaluate()` even
197        outside of `@multiset_function`, return the special value 
198        `icepool.NoCache`.
199        """
200        return None

Subclasses may optionally provide a key that uniquely identifies the next_state() computation.

This is used to persistently cache intermediate results between calls to evaluate(). By default, next_state_key is None, which only caches if not inside a @multiset_function.

If you do implement this, next_state_key should include any members used in next_state() but does NOT need to include members that are only used in other methods, i.e.

  • extra_outcomes()
  • initial_state()
  • final_outcome()

For example, if next_state() is a pure function other than being defined by its evaluator class, you can use type(self).

If you want to disable caching between calls to evaluate() even outside of @multiset_function, return the special value icepool.NoCache.
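The caching contract can be pictured as a dictionary keyed on next_state_key plus the next_state() arguments. A rough standalone sketch (the real cache is internal to icepool and more granular; names here are illustrative):

```python
_transition_cache = {}

def cached_transition(next_state_key, next_state, state, outcome, count):
    # next_state_key stands in for the property above: evaluators sharing a
    # key share cached transitions, so the key must capture everything that
    # next_state() reads.
    key = (next_state_key, state, outcome, count)
    if key not in _transition_cache:
        _transition_cache[key] = next_state(state, outcome, count)
    return _transition_cache[key]


calls = []

def sum_counts(state, outcome, count):
    calls.append(outcome)
    return state + outcome * count

print(cached_transition('SumSketch', sum_counts, 0, 3, 2))  # 6
print(cached_transition('SumSketch', sum_counts, 0, 3, 2))  # 6, served from cache
```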

class Order(enum.IntEnum):
30class Order(enum.IntEnum):
31    """Can be used to define what order outcomes are seen in by MultisetEvaluators."""
32    Ascending = 1
33    Descending = -1
34    Any = 0
35
36    def merge(*orders: 'Order') -> 'Order':
37        """Merges the given Orders.
38
39        Returns:
40            `Any` if all arguments are `Any`.
41            `Ascending` if there is at least one `Ascending` in the arguments.
42            `Descending` if there is at least one `Descending` in the arguments.
43
44        Raises:
45            `ConflictingOrderError` if both `Ascending` and `Descending` are in 
46            the arguments.
47        """
48        result = Order.Any
49        for order in orders:
50            if (result > 0 and order < 0) or (result < 0 and order > 0):
51                raise ConflictingOrderError(
52                    f'Conflicting orders {orders}.\n' +
53                    'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
54                )
55            if result == Order.Any:
56                result = order
57        return result
58
59    def __neg__(self) -> 'Order':
60        if self is Order.Ascending:
61            return Order.Descending
62        elif self is Order.Descending:
63            return Order.Ascending
64        else:
65            return Order.Any

Can be used to define what order outcomes are seen in by MultisetEvaluators.

Ascending = <Order.Ascending: 1>
Descending = <Order.Descending: -1>
Any = <Order.Any: 0>
def merge(*orders: Order) -> Order:
36    def merge(*orders: 'Order') -> 'Order':
37        """Merges the given Orders.
38
39        Returns:
40            `Any` if all arguments are `Any`.
41            `Ascending` if there is at least one `Ascending` in the arguments.
42            `Descending` if there is at least one `Descending` in the arguments.
43
44        Raises:
45            `ConflictingOrderError` if both `Ascending` and `Descending` are in 
46            the arguments.
47        """
48        result = Order.Any
49        for order in orders:
50            if (result > 0 and order < 0) or (result < 0 and order > 0):
51                raise ConflictingOrderError(
52                    f'Conflicting orders {orders}.\n' +
53                    'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
54                )
55            if result == Order.Any:
56                result = order
57        return result

Merges the given Orders.

Returns:

Any if all arguments are Any. Ascending if there is at least one Ascending in the arguments. Descending if there is at least one Descending in the arguments.

Raises:
  • ConflictingOrderError if both Ascending and Descending are in the arguments.
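Since the full implementation appears in the listing above, merge() can be exercised standalone. A self-contained copy for illustration:

```python
import enum

class OrderError(Exception):
    pass  # stand-in for icepool.order.OrderError

class ConflictingOrderError(OrderError):
    pass

class Order(enum.IntEnum):
    Ascending = 1
    Descending = -1
    Any = 0

    def merge(*orders):
        # Any defers to whichever concrete order appears; mixing Ascending
        # and Descending is an error.
        result = Order.Any
        for order in orders:
            if (result > 0 and order < 0) or (result < 0 and order > 0):
                raise ConflictingOrderError(f'Conflicting orders {orders}.')
            if result == Order.Any:
                result = order
        return result


print(Order.merge(Order.Any, Order.Descending).name)  # Descending
```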
class ConflictingOrderError(icepool.order.OrderError):
19class ConflictingOrderError(OrderError):
20    """Indicates that two conflicting mandatory outcome orderings were encountered."""

Indicates that two conflicting mandatory outcome orderings were encountered.

class UnsupportedOrder(icepool.order.OrderException):
23class UnsupportedOrder(OrderException):
24    """Indicates that the given order is not supported under the current context.
25    
26    It may still be possible that retrying with the opposite order will succeed.
27    """

Indicates that the given order is not supported under the current context.

It may still be possible that retrying with the opposite order will succeed.

 20class Deck(Population[T_co], MaybeHashKeyed):
 21    """Sampling without replacement (within a single evaluation).
 22
 23    Quantities represent duplicates.
 24    """
 25
 26    _data: Counts[T_co]
 27    _deal: int
 28
 29    @property
 30    def _new_type(self) -> type:
 31        return Deck
 32
 33    def __new__(cls,
 34                outcomes: Sequence | Mapping[Any, int] = (),
 35                times: Sequence[int] | int = 1) -> 'Deck[T_co]':
 36        """Constructor for a `Deck`.
 37
 38        All quantities must be non-negative. Outcomes with zero quantity will be
 39        omitted.
 40
 41        Args:
 42            outcomes: The cards of the `Deck`. This can be one of the following:
 43                * A `Sequence` of outcomes. Duplicates will contribute
 44                    quantity for each appearance.
 45                * A `Mapping` from outcomes to quantities.
 46
 47                Each outcome may be one of the following:
 48                * An outcome, which must be hashable and totally orderable.
 49                * A `Deck`, which will be flattened into the result. If a
 50                    `times` is assigned to the `Deck`, the entire `Deck` will
 51                    be duplicated that many times.
 52            times: Multiplies the number of times each element of `outcomes`
 53                will be put into the `Deck`.
 54                `times` can either be a sequence of the same length as
 55                `outcomes` or a single `int` to apply to all elements of
 56                `outcomes`.
 57        """
 58
 59        if icepool.population.again.contains_again(outcomes):
 60            raise ValueError('Again cannot be used with Decks.')
 61
 62        outcomes, times = icepool.creation_args.itemize(outcomes, times)
 63
 64        if len(outcomes) == 1 and times[0] == 1 and isinstance(
 65                outcomes[0], Deck):
 66            return outcomes[0]
 67
 68        counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck(
 69            outcomes, times)
 70
 71        return Deck._new_raw(counts)
 72
 73    @classmethod
 74    def _new_raw(cls, data: Counts[T_co]) -> 'Deck[T_co]':
 75        """Creates a new `Deck` using already-processed arguments.
 76
 77        Args:
 78            data: At this point, this is a Counts.
 79        """
 80        self = super(Population, cls).__new__(cls)
 81        self._data = data
 82        return self
 83
 84    def keys(self) -> CountsKeysView[T_co]:
 85        return self._data.keys()
 86
 87    def values(self) -> CountsValuesView:
 88        return self._data.values()
 89
 90    def items(self) -> CountsItemsView[T_co]:
 91        return self._data.items()
 92
 93    def __getitem__(self, outcome) -> int:
 94        return self._data[outcome]
 95
 96    def __iter__(self) -> Iterator[T_co]:
 97        return iter(self.keys())
 98
 99    def __len__(self) -> int:
100        return len(self._data)
101
102    size = icepool.Population.denominator
103
104    @cached_property
105    def _popped_min(self) -> tuple['Deck[T_co]', int]:
106        return self._new_raw(self._data.remove_min()), self.quantities()[0]
107
108    def _pop_min(self) -> tuple['Deck[T_co]', int]:
109        """A `Deck` with the min outcome removed."""
110        return self._popped_min
111
112    @cached_property
113    def _popped_max(self) -> tuple['Deck[T_co]', int]:
114        return self._new_raw(self._data.remove_max()), self.quantities()[-1]
115
116    def _pop_max(self) -> tuple['Deck[T_co]', int]:
117        """A `Deck` with the max outcome removed."""
118        return self._popped_max
119
120    @overload
121    def deal(self, hand_size: int, /) -> 'icepool.Deal[T_co]':
122        ...
123
124    @overload
125    def deal(self,
126             hand_sizes: Iterable[int]) -> 'icepool.MultiDeal[T_co, Any]':
127        ...
128
129    @overload
130    def deal(
131        self, hand_sizes: int | Iterable[int]
132    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]':
133        ...
134
135    def deal(
136        self, hand_sizes: int | Iterable[int]
137    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]':
138        """Deals the specified number of cards from this deck.
139
140        Args:
141            hand_sizes: Either an integer, in which case a `Deal` will be
142                returned, or an iterable of multiple hand sizes, in which case a
143                `MultiDeal` will be returned.
144        """
145        if isinstance(hand_sizes, int):
146            return icepool.Deal(self, hand_sizes)
147        else:
148            return icepool.MultiDeal(
149                self, tuple((hand_size, 1) for hand_size in hand_sizes))
150
151    def deal_groups(
152            self, *hand_groups: tuple[int,
153                                      int]) -> 'icepool.MultiDeal[T_co, Any]':
154        """EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order.
155        
156        Args:
157            hand_groups: Each argument is a tuple (hand_size, group_size),
158                denoting the number of cards in each hand of the group and
159                the number of hands in the group respectively.
160        """
161        return icepool.MultiDeal(self, hand_groups)
162
163    # Binary operators.
164
165    def additive_union(
166            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
167        """Both decks merged together."""
168        return functools.reduce(operator.add, args,
169                                self)  # type: ignore
170
171    def __add__(self,
172                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
173        data = Counter(self._data)
174        for outcome, count in Counter(other).items():
175            data[outcome] += count
176        return Deck(+data)
177
178    __radd__ = __add__
179
180    def difference(self, *args:
181                   Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
182        """This deck with the other cards removed (but not below zero of each card)."""
183        return functools.reduce(operator.sub, args,
184                                self)  # type: ignore
185
186    def __sub__(self,
187                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
188        data = Counter(self._data)
189        for outcome, count in Counter(other).items():
190            data[outcome] -= count
191        return Deck(+data)
192
193    def __rsub__(self,
194                 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
195        data = Counter(other)
196        for outcome, count in self.items():
197            data[outcome] -= count
198        return Deck(+data)
199
200    def intersection(
201            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
202        """The cards that both decks have."""
203        return functools.reduce(operator.and_, args,
204                                self)  # type: ignore
205
206    def __and__(self,
207                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
208        data: Counter[T_co] = Counter()
209        for outcome, count in Counter(other).items():
210            data[outcome] = min(self.get(outcome, 0), count)
211        return Deck(+data)
212
213    __rand__ = __and__
214
215    def union(self, *args:
216              Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
217        """As many of each card as the deck that has more of them."""
218        return functools.reduce(operator.or_, args,
219                                self)  # type: ignore
220
221    def __or__(self,
222               other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
223        data = Counter(self._data)
224        for outcome, count in Counter(other).items():
225            data[outcome] = max(data[outcome], count)
226        return Deck(+data)
227
228    __ror__ = __or__
229
230    def symmetric_difference(
231            self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
232        """As many of each card as the deck that has more of them."""
233        return self ^ other
234
235    def __xor__(self,
236                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
237        data = Counter(self._data)
238        for outcome, count in Counter(other).items():
239            data[outcome] = abs(data[outcome] - count)
240        return Deck(+data)
241
242    def __mul__(self, other: int) -> 'Deck[T_co]':
243        if not isinstance(other, int):
244            return NotImplemented
245        return self.multiply_quantities(other)
246
247    __rmul__ = __mul__
248
249    def __floordiv__(self, other: int) -> 'Deck[T_co]':
250        if not isinstance(other, int):
251            return NotImplemented
252        return self.divide_quantities(other)
253
254    def __mod__(self, other: int) -> 'Deck[T_co]':
255        if not isinstance(other, int):
256            return NotImplemented
257        return self.modulo_quantities(other)
258
259    def map(
260            self,
261            repl:
262        'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
263            /,
264            *,
265            star: bool | None = None) -> 'Deck[U]':
266        """Maps outcomes of this `Deck` to other outcomes.
267
268        Args:
269            repl: One of the following:
270                * A callable returning a new outcome for each old outcome.
271                * A map from old outcomes to new outcomes.
272                    Unmapped old outcomes stay the same.
273                The new outcomes may be `Deck`s, in which case one card is
274                replaced with several. This is not recommended.
275            star: Whether outcomes should be unpacked into separate arguments
276                before sending them to a callable `repl`.
277                If not provided, this will be guessed based on the function
278                signature.
279        """
280        # Convert to a single-argument function.
281        if callable(repl):
282            if star is None:
283                star = infer_star(repl)
284            if star:
285
286                def transition_function(outcome):
287                    return repl(*outcome)
288            else:
289
290                def transition_function(outcome):
291                    return repl(outcome)
292        else:
293            # repl is a mapping.
294            def transition_function(outcome):
295                if outcome in repl:
296                    return repl[outcome]
297                else:
298                    return outcome
299
300        return Deck(
301            [transition_function(outcome) for outcome in self.outcomes()],
302            times=self.quantities())
303
304    @cached_property
305    def _sequence_cache(
306            self) -> 'MutableSequence[icepool.Die[tuple[T_co, ...]]]':
307        return [icepool.Die([()])]
308
309    def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]':
310        """Possible sequences produced by dealing from this deck a number of times.
311        
312        This is extremely expensive computationally. If you don't care about
313        order, use `deal()` instead.
314        """
315        if deals < 0:
316            raise ValueError('The number of cards dealt cannot be negative.')
317        for i in range(len(self._sequence_cache), deals + 1):
318
319            def transition(curr):
320                remaining = icepool.Die(self - curr)
321                return icepool.map(lambda curr, next: curr + (next, ), curr,
322                                   remaining)
323
324            result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[
325                i - 1].map(transition)
326            self._sequence_cache.append(result)
327        return result
328
329    @cached_property
330    def hash_key(self) -> tuple:
331        return Deck, tuple(self.items())
332
333    def __repr__(self) -> str:
334        items_string = ', '.join(f'{repr(outcome)}: {quantity}'
335                                 for outcome, quantity in self.items())
336        return type(self).__qualname__ + '({' + items_string + '})'

Sampling without replacement (within a single evaluation).

Quantities represent duplicates.

def keys(self) -> CountsKeysView[+T_co]:
84    def keys(self) -> CountsKeysView[T_co]:
85        return self._data.keys()

The outcomes within the population in sorted order.

def values(self) -> CountsValuesView:
87    def values(self) -> CountsValuesView:
88        return self._data.values()

The quantities within the population in outcome order.

def items(self) -> CountsItemsView[+T_co]:
90    def items(self) -> CountsItemsView[T_co]:
91        return self._data.items()

The (outcome, quantity)s of the population in sorted order.

def size(self) -> int:
296    def denominator(self) -> int:
297        """The sum of all quantities (e.g. weights or duplicates).
298
299        For the number of unique outcomes, use `len()`.
300        """
301        return self._denominator

The sum of all quantities (e.g. weights or duplicates).

For the number of unique outcomes, use len().

def deal( self, hand_sizes: Union[int, Iterable[int]]) -> Union[Deal[+T_co], MultiDeal[+T_co, Any]]:
135    def deal(
136        self, hand_sizes: int | Iterable[int]
137    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]':
138        """Deals the specified number of cards from this deck.
139
140        Args:
141            hand_sizes: Either an integer, in which case a `Deal` will be
142                returned, or an iterable of multiple hand sizes, in which case a
143                `MultiDeal` will be returned.
144        """
145        if isinstance(hand_sizes, int):
146            return icepool.Deal(self, hand_sizes)
147        else:
148            return icepool.MultiDeal(
149                self, tuple((hand_size, 1) for hand_size in hand_sizes))

Deals the specified number of cards from this deck.

Arguments:
  • hand_sizes: Either an integer, in which case a Deal will be returned, or an iterable of multiple hand sizes, in which case a MultiDeal will be returned.
def deal_groups( self, *hand_groups: tuple[int, int]) -> MultiDeal[+T_co, typing.Any]:
151    def deal_groups(
152            self, *hand_groups: tuple[int,
153                                      int]) -> 'icepool.MultiDeal[T_co, Any]':
154        """EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order.
155        
156        Args:
157            hand_groups: Each argument is a tuple (hand_size, group_size),
158                denoting the number of cards in each hand of the group and
159                the number of hands in the group respectively.
160        """
161        return icepool.MultiDeal(self, hand_groups)

EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order.

Arguments:
  • hand_groups: Each argument is a tuple (hand_size, group_size), denoting the number of cards in each hand of the group and the number of hands in the group respectively.
def additive_union( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
165    def additive_union(
166            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
167        """Both decks merged together."""
168        return functools.reduce(operator.add, args,
169                                self)  # type: ignore

Both decks merged together.

def difference( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
180    def difference(self, *args:
181                   Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
182        """This deck with the other cards removed (but not below zero of each card)."""
183        return functools.reduce(operator.sub, args,
184                                self)  # type: ignore

This deck with the other cards removed (but not below zero of each card).

def intersection( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
200    def intersection(
201            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
202        """The cards that both decks have."""
203        return functools.reduce(operator.and_, args,
204                                self)  # type: ignore

The cards that both decks have.

def union( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
215    def union(self, *args:
216              Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
217        """As many of each card as the deck that has more of them."""
218        return functools.reduce(operator.or_, args,
219                                self)  # type: ignore

As many of each card as the deck that has more of them.

def symmetric_difference( self, other: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
230    def symmetric_difference(
231            self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
232        """As many of each card as the deck that has more of them."""
233        return self ^ other

The absolute difference in quantity of each card between the two decks.
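As the listings show, the binary operators reduce to per-card arithmetic on quantities: + adds, & takes the minimum, | the maximum, and ^ the absolute difference, with non-positive quantities dropped. A standalone sketch with collections.Counter:

```python
from collections import Counter

def combine(a, b, op):
    # Apply op to each card's quantity; unary + drops non-positive results,
    # matching how Deck omits zero-quantity outcomes.
    return +Counter({k: op(a[k], b[k]) for k in a.keys() | b.keys()})


a = Counter({'ace': 2, 'king': 1})
b = Counter({'ace': 1, 'queen': 3})

add = combine(a, b, lambda x, y: x + y)          # like deck_a + deck_b
intersection = combine(a, b, min)                # like deck_a & deck_b
union = combine(a, b, max)                       # like deck_a | deck_b
sym = combine(a, b, lambda x, y: abs(x - y))     # like deck_a ^ deck_b

print(sym['queen'])  # 3
```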

def map( self, repl: Union[Callable[..., Union[~U, Deck[~U], RerollType]], Mapping[+T_co, Union[~U, Deck[~U], RerollType]]], /, *, star: bool | None = None) -> Deck[~U]:
259    def map(
260            self,
261            repl:
262        'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
263            /,
264            *,
265            star: bool | None = None) -> 'Deck[U]':
266        """Maps outcomes of this `Deck` to other outcomes.
267
268        Args:
269            repl: One of the following:
270                * A callable returning a new outcome for each old outcome.
271                * A map from old outcomes to new outcomes.
272                    Unmapped old outcomes stay the same.
273                The new outcomes may be `Deck`s, in which case one card is
274                replaced with several. This is not recommended.
275            star: Whether outcomes should be unpacked into separate arguments
276                before sending them to a callable `repl`.
277                If not provided, this will be guessed based on the function
278                signature.
279        """
280        # Convert to a single-argument function.
281        if callable(repl):
282            if star is None:
283                star = infer_star(repl)
284            if star:
285
286                def transition_function(outcome):
287                    return repl(*outcome)
288            else:
289
290                def transition_function(outcome):
291                    return repl(outcome)
292        else:
293            # repl is a mapping.
294            def transition_function(outcome):
295                if outcome in repl:
296                    return repl[outcome]
297                else:
298                    return outcome
299
300        return Deck(
301            [transition_function(outcome) for outcome in self.outcomes()],
302            times=self.quantities())

Maps outcomes of this Deck to other outcomes.

Arguments:
  • repl: One of the following:
    • A callable returning a new outcome for each old outcome.
    • A map from old outcomes to new outcomes. Unmapped old outcomes stay the same. The new outcomes may be Decks, in which case one card is replaced with several. This is not recommended.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable repl. If not provided, this will be guessed based on the function signature.
def sequence(self, deals: int, /) -> Die[tuple[+T_co, ...]]:
309    def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]':
310        """Possible sequences produced by dealing from this deck a number of times.
311        
312        This is extremely expensive computationally. If you don't care about
313        order, use `deal()` instead.
314        """
315        if deals < 0:
316            raise ValueError('The number of cards dealt cannot be negative.')
317        for i in range(len(self._sequence_cache), deals + 1):
318
319            def transition(curr):
320                remaining = icepool.Die(self - curr)
321                return icepool.map(lambda curr, next: curr + (next, ), curr,
322                                   remaining)
323
324            result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[
325                i - 1].map(transition)
326            self._sequence_cache.append(result)
327        return result

Possible sequences produced by dealing from this deck a number of times.

This is extremely expensive computationally. If you don't care about order, use deal() instead.
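A back-of-the-envelope comparison shows why ordered sequences cost so much more than unordered deals: ordered draws grow as permutations, unordered as combinations.

```python
import math

# For a standard-deck-sized example: 5 cards from 52.
deck_size, deals = 52, 5
ordered = math.perm(deck_size, deals)    # 52 * 51 * 50 * 49 * 48
unordered = math.comb(deck_size, deals)  # ordered / 5!
ratio = ordered // unordered             # 5! = 120 times more states to track
```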

class Deal(icepool.generator.keep.KeepGenerator[~T]):
13class Deal(KeepGenerator[T]):
14    """Represents an unordered deal of a single hand from a `Deck`."""
15
16    _deck: 'icepool.Deck[T]'
17
18    def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
19        """Constructor.
20
21        For algorithmic reasons, you must pre-commit to the number of cards to
22        deal.
23
24        It is permissible to deal zero cards from an empty deck, but not all
25        evaluators will handle this case, especially if they depend on the
26        outcome type. Dealing zero cards from a non-empty deck does not have
27        this issue.
28
29        Args:
30            deck: The `Deck` to deal from.
31            hand_size: How many cards to deal.
32        """
33        if hand_size < 0:
34            raise ValueError('hand_size cannot be negative.')
35        if hand_size > deck.size():
36            raise ValueError(
37                'The number of cards dealt cannot exceed the size of the deck.'
38            )
39        self._deck = deck
40        self._keep_tuple = (1, ) * hand_size
41
42    @classmethod
43    def _new_raw(cls, deck: 'icepool.Deck[T]',
44                 keep_tuple: tuple[int, ...]) -> 'Deal[T]':
45        self = super(Deal, cls).__new__(cls)
46        self._deck = deck
47        self._keep_tuple = keep_tuple
48        return self
49
50    def _make_source(self):
51        return DealSource(self._deck, self._keep_tuple)
52
53    def _set_keep_tuple(self, keep_tuple: tuple[int, ...]) -> 'Deal[T]':
54        return Deal._new_raw(self._deck, keep_tuple)
55
56    def deck(self) -> 'icepool.Deck[T]':
57        """The `Deck` the cards are dealt from."""
58        return self._deck
59
60    def hand_size(self) -> int:
61        """The number of cards dealt to the hand."""
62        return len(self._keep_tuple)
63
64    def outcomes(self) -> CountsKeysView[T]:
65        """The outcomes of the `Deck` in ascending order.
66
67        These are also the `keys` of the `Deck` as a `Mapping`.
68        Prefer to use the name `outcomes`.
69        """
70        return self.deck().outcomes()
71
72    def denominator(self) -> int:
73        return icepool.math.comb(self.deck().size(), self.hand_size())
74
75    @property
76    def hash_key(self):
77        return Deal, self._deck, self._keep_tuple
78
79    def __repr__(self) -> str:
80        return type(
81            self
82        ).__qualname__ + f'({repr(self.deck())}, hand_size={self.hand_size()})'
83
84    def __str__(self) -> str:
85        return type(
86            self
87        ).__qualname__ + f' of hand_size={self.hand_size()} from deck:\n' + str(
88            self.deck())

Represents an unordered deal of a single hand from a Deck.

Deal(deck: Deck[~T], hand_size: int)
18    def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
19        """Constructor.
20
21        For algorithmic reasons, you must pre-commit to the number of cards to
22        deal.
23
24        It is permissible to deal zero cards from an empty deck, but not all
25        evaluators will handle this case, especially if they depend on the
26        outcome type. Dealing zero cards from a non-empty deck does not have
27        this issue.
28
29        Args:
30            deck: The `Deck` to deal from.
31            hand_size: How many cards to deal.
32        """
33        if hand_size < 0:
34            raise ValueError('hand_size cannot be negative.')
35        if hand_size > deck.size():
36            raise ValueError(
37                'The number of cards dealt cannot exceed the size of the deck.'
38            )
39        self._deck = deck
40        self._keep_tuple = (1, ) * hand_size

Constructor.

For algorithmic reasons, you must pre-commit to the number of cards to deal.

It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.

Arguments:
  • deck: The Deck to deal from.
  • hand_size: How many cards to deal.
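The denominator of a `Deal` is simply the number of distinct unordered hands, matching `denominator()` in the listing above:

```python
import math

# Number of distinct unordered hands of hand_size cards from a deck of
# deck_size distinct cards: C(deck_size, hand_size).
deck_size, hand_size = 52, 5
denominator = math.comb(deck_size, hand_size)  # classic 5-card poker hand count
```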
def deck(self) -> Deck[~T]:
56    def deck(self) -> 'icepool.Deck[T]':
57        """The `Deck` the cards are dealt from."""
58        return self._deck

The Deck the cards are dealt from.

def hand_size(self) -> int:
60    def hand_size(self) -> int:
61        """The number of cards dealt to the hand."""
62        return len(self._keep_tuple)

The number of cards dealt to the hand.

def outcomes(self) -> CountsKeysView[~T]:
64    def outcomes(self) -> CountsKeysView[T]:
65        """The outcomes of the `Deck` in ascending order.
66
67        These are also the `keys` of the `Deck` as a `Mapping`.
68        Prefer to use the name `outcomes`.
69        """
70        return self.deck().outcomes()

The outcomes of the Deck in ascending order.

These are also the keys of the Deck as a Mapping. Prefer to use the name outcomes.

def denominator(self) -> int:
72    def denominator(self) -> int:
73        return icepool.math.comb(self.deck().size(), self.hand_size())
hash_key
75    @property
76    def hash_key(self):
77        return Deal, self._deck, self._keep_tuple

A hash key for this object. This should include a type.

If None, this will not compare equal to any other object.

class MultiDeal(icepool.generator.multiset_tuple_generator.MultisetTupleGenerator[~T, ~IntTupleOut]):
 20class MultiDeal(MultisetTupleGenerator[T, IntTupleOut]):
 21    """Represents a deal of multiple hands from a `Deck`.
 22    
 23    The cards within each hand are in sorted order. Furthermore, hands may be
 24    organized into groups in which the hands are initially indistinguishable.
 25    """
 26
 27    _deck: 'icepool.Deck[T]'
 28    # An ordered tuple of hand groups.
 29    # Each group is designated by (hand_size, hand_count).
 30    _hand_groups: tuple[tuple[int, int], ...]
 31
 32    def __init__(self, deck: 'icepool.Deck[T]',
 33                 hand_groups: tuple[tuple[int, int], ...]) -> None:
 34        """Constructor.
 35
 36        For algorithmic reasons, you must pre-commit to the number of cards to
 37        deal for each hand.
 38
 39        It is permissible to deal zero cards from an empty deck, but not all
 40        evaluators will handle this case, especially if they depend on the
 41        outcome type. Dealing zero cards from a non-empty deck does not have
 42        this issue.
 43
 44        Args:
 45            deck: The `Deck` to deal from.
 46            hand_groups: An ordered tuple of hand groups.
 47                Each group is designated by (hand_size, hand_count) with the
 48                hands of each group being arbitrarily ordered.
 49                The resulting counts are produced in a flat tuple.
 50        """
 51        self._deck = deck
 52        self._hand_groups = hand_groups
 53        if self.total_cards_dealt() > self.deck().size():
 54            raise ValueError(
 55                'The total number of cards dealt cannot exceed the size of the deck.'
 56            )
 57
 58    @classmethod
 59    def _new_raw(
 60        cls, deck: 'icepool.Deck[T]',
 61        hand_sizes: tuple[tuple[int, int],
 62                          ...]) -> 'MultiDeal[T, IntTupleOut]':
 63        self = super(MultiDeal, cls).__new__(cls)
 64        self._deck = deck
 65        self._hand_groups = hand_sizes
 66        return self
 67
 68    def deck(self) -> 'icepool.Deck[T]':
 69        """The `Deck` the cards are dealt from."""
 70        return self._deck
 71
 72    def hand_sizes(self) -> IntTupleOut:
 73        """The number of cards dealt to each hand as a tuple."""
 74        return cast(
 75            IntTupleOut,
 76            tuple(
 77                itertools.chain.from_iterable(
 78                    (hand_size, ) * group_size
 79                    for hand_size, group_size in self._hand_groups)))
 80
 81    def total_cards_dealt(self) -> int:
 82        """The total number of cards dealt."""
 83        return sum(hand_size * group_size
 84                   for hand_size, group_size in self._hand_groups)
 85
 86    def outcomes(self) -> CountsKeysView[T]:
 87        """The outcomes of the `Deck` in ascending order.
 88
 89        These are also the `keys` of the `Deck` as a `Mapping`.
 90        Prefer to use the name `outcomes`.
 91        """
 92        return self.deck().outcomes()
 93
 94    def __len__(self) -> int:
 95        return sum(group_size for _, group_size in self._hand_groups)
 96
 97    @cached_property
 98    def _denominator(self) -> int:
 99        d_total = icepool.math.comb(self.deck().size(),
100                                    self.total_cards_dealt())
101        d_split = math.prod(
102            icepool.math.comb(self.total_cards_dealt(), h)
103            for h in self.hand_sizes()[1:])
104        return d_total * d_split
105
106    def denominator(self) -> int:
107        return self._denominator
108
109    def _make_source(self) -> 'MultisetTupleSource[T, IntTupleOut]':
110        return MultiDealSource(self._deck, self._hand_groups)
111
112    @property
113    def hash_key(self) -> Hashable:
114        return MultiDeal, self._deck, self._hand_groups
115
116    def __repr__(self) -> str:
117        return type(
118            self
119        ).__qualname__ + f'({repr(self.deck())}, hand_groups={self._hand_groups})'
120
121    def __str__(self) -> str:
122        return type(
123            self
124        ).__qualname__ + f' of hand_groups={self._hand_groups} from deck:\n' + str(
125            self.deck())

Represents a deal of multiple hands from a Deck.

The cards within each hand are in sorted order. Furthermore, hands may be organized into groups in which the hands are initially indistinguishable.

MultiDeal( deck: Deck[~T], hand_groups: tuple[tuple[int, int], ...])
32    def __init__(self, deck: 'icepool.Deck[T]',
33                 hand_groups: tuple[tuple[int, int], ...]) -> None:
34        """Constructor.
35
36        For algorithmic reasons, you must pre-commit to the number of cards to
37        deal for each hand.
38
39        It is permissible to deal zero cards from an empty deck, but not all
40        evaluators will handle this case, especially if they depend on the
41        outcome type. Dealing zero cards from a non-empty deck does not have
42        this issue.
43
44        Args:
45            deck: The `Deck` to deal from.
46            hand_groups: An ordered tuple of hand groups.
47                Each group is designated by (hand_size, hand_count) with the
48                hands of each group being arbitrarily ordered.
49                The resulting counts are produced in a flat tuple.
50        """
51        self._deck = deck
52        self._hand_groups = hand_groups
53        if self.total_cards_dealt() > self.deck().size():
54            raise ValueError(
55                'The total number of cards dealt cannot exceed the size of the deck.'
56            )

Constructor.

For algorithmic reasons, you must pre-commit to the number of cards to deal for each hand.

It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.

Arguments:
  • deck: The Deck to deal from.
  • hand_groups: An ordered tuple of hand groups. Each group is designated by (hand_size, hand_count) with the hands of each group being arbitrarily ordered. The resulting counts are produced in a flat tuple.
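The flattening of `hand_groups` into a per-hand tuple can be illustrated with stdlib tools, mirroring `hand_sizes()` and `total_cards_dealt()` in the listing above:

```python
import itertools

# Each group is (hand_size, hand_count); flatten to one entry per hand.
hand_groups = ((5, 2), (3, 1))  # two 5-card hands, then one 3-card hand
hand_sizes = tuple(
    itertools.chain.from_iterable(
        (hand_size, ) * hand_count for hand_size, hand_count in hand_groups))
total_cards_dealt = sum(hand_sizes)
```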
def deck(self) -> Deck[~T]:
68    def deck(self) -> 'icepool.Deck[T]':
69        """The `Deck` the cards are dealt from."""
70        return self._deck

The Deck the cards are dealt from.

def hand_sizes(self) -> ~IntTupleOut:
72    def hand_sizes(self) -> IntTupleOut:
73        """The number of cards dealt to each hand as a tuple."""
74        return cast(
75            IntTupleOut,
76            tuple(
77                itertools.chain.from_iterable(
78                    (hand_size, ) * group_size
79                    for hand_size, group_size in self._hand_groups)))

The number of cards dealt to each hand as a tuple.

def total_cards_dealt(self) -> int:
81    def total_cards_dealt(self) -> int:
82        """The total number of cards dealt."""
83        return sum(hand_size * group_size
84                   for hand_size, group_size in self._hand_groups)

The total number of cards dealt.

def outcomes(self) -> CountsKeysView[~T]:
86    def outcomes(self) -> CountsKeysView[T]:
87        """The outcomes of the `Deck` in ascending order.
88
89        These are also the `keys` of the `Deck` as a `Mapping`.
90        Prefer to use the name `outcomes`.
91        """
92        return self.deck().outcomes()

The outcomes of the Deck in ascending order.

These are also the keys of the Deck as a Mapping. Prefer to use the name outcomes.

def denominator(self) -> int:
106    def denominator(self) -> int:
107        return self._denominator
hash_key: Hashable
112    @property
113    def hash_key(self) -> Hashable:
114        return MultiDeal, self._deck, self._hand_groups

A hash key for this object. This should include a type.

If None, this will not compare equal to any other object.

def multiset_function( wrapped: Callable[..., Union[icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, +U_co], tuple[icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, +U_co], ...]]], /) -> icepool.evaluator.multiset_evaluator_base.MultisetEvaluatorBase[~T, +U_co]:
 55def multiset_function(wrapped: Callable[
 56    ...,
 57    'MultisetFunctionRawResult[T, U_co] | tuple[MultisetFunctionRawResult[T, U_co], ...]'],
 58                      /) -> 'MultisetEvaluatorBase[T, U_co]':
 59    """EXPERIMENTAL: A decorator that turns a function into a multiset evaluator.
 60
 61    The provided function should take in arguments representing multisets,
 62    do a limited set of operations on them (see `MultisetExpression`), and
 63    finish off with an evaluation. You can return a tuple to perform a joint
 64    evaluation.
 65
 66    For example, to create an evaluator which computes the elements each of two
 67    multisets has that the other doesn't:
 68    ```python
 69    @multiset_function
 70    def two_way_difference(a, b):
 71        return (a - b).expand(), (b - a).expand()
 72    ```
 73
 74    The special `star` keyword argument will unpack tuple-valued counts of the
 75    first argument inside the multiset function. For example,
 76    ```python
 77    hands = deck.deal((5, 5))
 78    two_way_difference(hands, star=True)
 79    ```
 80    effectively unpacks as if we had written
 81    ```python
 82    @multiset_function
 83    def two_way_difference(hands):
 84        a, b = hands
 85        return (a - b).expand(), (b - a).expand()
 86    ```
 87
 88    If not provided explicitly, `star` will be inferred automatically.
 89
 90    You can pass non-multiset values as keyword-only arguments.
 91    ```python
 92    @multiset_function
 93    def count_outcomes(a, *, target):
 94        return a.keep_outcomes(target).size()
 95
 96    count_outcomes(a, target=[5, 6])
 97    ```
 98
 99    While in theory `@multiset_function` implements late binding similar to
100    ordinary Python functions, I recommend using only pure functions.
101
102    Be careful when using control structures: you cannot branch on the value of
103    a multiset expression or evaluation, so e.g.
104
105    ```python
106    @multiset_function
107    def bad(a, b):
108        if a == b:
109            ...
110    ```
111
112    is not allowed.
113
114    `multiset_function` has considerable overhead, being effectively a
115    mini-language within Python. For better performance, you can try
116    implementing your own subclass of `MultisetEvaluator` directly.
117
118    Args:
119        function: This should take in multiset expressions as positional
120            arguments, and non-multiset variables as keyword arguments.
121    """
122    return MultisetFunctionEvaluator(wrapped)

EXPERIMENTAL: A decorator that turns a function into a multiset evaluator.

The provided function should take in arguments representing multisets, do a limited set of operations on them (see MultisetExpression), and finish off with an evaluation. You can return a tuple to perform a joint evaluation.

For example, to create an evaluator which computes the elements each of two multisets has that the other doesn't:

@multiset_function
def two_way_difference(a, b):
    return (a - b).expand(), (b - a).expand()

The special star keyword argument will unpack tuple-valued counts of the first argument inside the multiset function. For example,

hands = deck.deal((5, 5))
two_way_difference(hands, star=True)

effectively unpacks as if we had written

@multiset_function
def two_way_difference(hands):
    a, b = hands
    return (a - b).expand(), (b - a).expand()

If not provided explicitly, star will be inferred automatically.

You can pass non-multiset values as keyword-only arguments.

@multiset_function
def count_outcomes(a, *, target):
    return a.keep_outcomes(target).size()

count_outcomes(a, target=[5, 6])

While in theory @multiset_function implements late binding similar to ordinary Python functions, I recommend using only pure functions.

Be careful when using control structures: you cannot branch on the value of a multiset expression or evaluation, so e.g.

@multiset_function
def bad(a, b):
    if a == b:
        ...

is not allowed.

multiset_function has considerable overhead, being effectively a mini-language within Python. For better performance, you can try implementing your own subclass of MultisetEvaluator directly.

Arguments:
  • function: This should take in multiset expressions as positional arguments, and non-multiset variables as keyword arguments.
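The semantics of the `two_way_difference` example can be mimicked with `collections.Counter`, which implements multiset subtraction directly. This is a stdlib analogue for intuition only, not icepool's evaluation machinery:

```python
from collections import Counter

# Multiset difference in both directions, like (a - b).expand() and
# (b - a).expand() in the example above, but on concrete lists.
def two_way_difference(a, b):
    ca, cb = Counter(a), Counter(b)
    return sorted((ca - cb).elements()), sorted((cb - ca).elements())

two_way_difference([1, 2, 2, 3], [2, 3, 3, 4])
```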
class MultisetParameter(icepool.expression.multiset_parameter.MultisetParameterBase[~T, int], icepool.MultisetExpression[~T]):
48class MultisetParameter(MultisetParameterBase[T, int], MultisetExpression[T]):
49    """A multiset parameter with a count of a single `int`."""
50
51    def __init__(self, name: str, arg_index: int, star_index: int | None):
52        self._name = name
53        self._arg_index = arg_index
54        self._star_index = star_index

A multiset parameter with a count of a single int.

MultisetParameter(name: str, arg_index: int, star_index: int | None)
51    def __init__(self, name: str, arg_index: int, star_index: int | None):
52        self._name = name
53        self._arg_index = arg_index
54        self._star_index = star_index
class MultisetTupleParameter(icepool.expression.multiset_parameter.MultisetParameterBase[~T, ~IntTupleOut], icepool.expression.multiset_tuple_expression.MultisetTupleExpression[~T, ~IntTupleOut]):
57class MultisetTupleParameter(MultisetParameterBase[T, IntTupleOut],
58                             MultisetTupleExpression[T, IntTupleOut]):
59    """A multiset parameter with a count of a tuple of `int`s."""
60
61    def __init__(self, name: str, arg_index: int, length: int):
62        self._name = name
63        self._arg_index = arg_index
64        self._star_index = None
65        self._length = length
66
67    def __len__(self):
68        return self._length

A multiset parameter with a count of a tuple of ints.

MultisetTupleParameter(name: str, arg_index: int, length: int)
61    def __init__(self, name: str, arg_index: int, length: int):
62        self._name = name
63        self._arg_index = arg_index
64        self._star_index = None
65        self._length = length
NoCache: Final = <NoCacheType.NoCache: 'NoCache'>

Indicates that caching should not be performed. Exact meaning depends on context.

def format_probability_inverse(probability, /, int_start: int = 20):
22def format_probability_inverse(probability, /, int_start: int = 20):
23    """EXPERIMENTAL: Formats the inverse of a value as "1 in N".
24    
25    Args:
26        probability: The value to be formatted.
27        int_start: If N = 1 / probability is between this value and 1 million
28            times this value it will be formatted as an integer. Otherwise it
29            will be formatted as a float with precision at least 1 part in int_start.
30    """
31    max_precision = math.ceil(math.log10(int_start))
32    if probability <= 0 or probability > 1:
33        return 'n/a'
34    product = probability * int_start
35    if product <= 1:
36        if probability * int_start * 10**6 <= 1:
37            return f'1 in {1.0 / probability:<.{max_precision}e}'
38        else:
39            return f'1 in {round(1 / probability)}'
40
41    precision = 0
42    precision_factor = 1
43    while product > precision_factor and precision < max_precision:
44        precision += 1
45        precision_factor *= 10
46    return f'1 in {1.0 / probability:<.{precision}f}'

EXPERIMENTAL: Formats the inverse of a value as "1 in N".

Arguments:
  • probability: The value to be formatted.
  • int_start: If N = 1 / probability is between this value and 1 million times this value it will be formatted as an integer. Otherwise it will be formatted as a float with precision at least 1 part in int_start.
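The routine is self-contained, so its behavior can be spot-checked directly; the body below is reproduced from the listing above:

```python
import math

# Reproduced from the listing for a quick behavioral check.
def format_probability_inverse(probability, /, int_start: int = 20):
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability > 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'
    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'

format_probability_inverse(1 / 1000)  # '1 in 1000' (integer range)
format_probability_inverse(0.5)       # '1 in 2.0' (float with 1 digit)
```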
class Wallenius(typing.Generic[~T]):
28class Wallenius(Generic[T]):
29    """EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution.
30
31    This is sampling without replacement with weights, where the entire weight
32    of a card goes away when it is pulled.
33    """
34    _weight_decks: 'MutableMapping[int, icepool.Deck[T]]'
35    _weight_die: 'icepool.Die[int]'
36
37    def __init__(self, data: Iterable[tuple[T, int]]
38                 | Mapping[T, int | tuple[int, int]]):
39        """Constructor.
40        
41        Args:
42            data: Either an iterable of (outcome, weight), or a mapping from
43                outcomes to either weights or (weight, quantity).
44
45        """
46        self._weight_decks = {}
47
48        if isinstance(data, Mapping):
49            for outcome, v in data.items():
50                if isinstance(v, int):
51                    weight = v
52                    quantity = 1
53                else:
54                    weight, quantity = v
55                self._weight_decks[weight] = self._weight_decks.get(
56                    weight, icepool.Deck()).append(outcome, quantity)
57        else:
58            for outcome, weight in data:
59                self._weight_decks[weight] = self._weight_decks.get(
60                    weight, icepool.Deck()).append(outcome)
61
62        self._weight_die = icepool.Die({
63            weight: weight * deck.denominator()
64            for weight, deck in self._weight_decks.items()
65        })
66
67    def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]':
68        """Deals the specified number of outcomes from the Wallenius.
69        
70        The result is a `MultisetExpression` representing the multiset of
71        outcomes dealt.
72        """
73        if hand_size == 0:
74            return icepool.Pool([])
75
76        def inner(weights):
77            weight_counts = Counter(weights)
78            result = None
79            for weight, count in weight_counts.items():
80                deal = self._weight_decks[weight].deal(count)
81                if result is None:
82                    result = deal
83                else:
84                    result = result + deal
85            return result
86
87        hand_weights = _wallenius_weights(self._weight_die, hand_size)
88        return hand_weights.map_to_pool(inner, star=False)

EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution.

This is sampling without replacement with weights, where the entire weight of a card goes away when it is pulled.

Wallenius( data: Union[Iterable[tuple[~T, int]], Mapping[~T, int | tuple[int, int]]])
37    def __init__(self, data: Iterable[tuple[T, int]]
38                 | Mapping[T, int | tuple[int, int]]):
39        """Constructor.
40        
41        Args:
42            data: Either an iterable of (outcome, weight), or a mapping from
43                outcomes to either weights or (weight, quantity).
44
45        """
46        self._weight_decks = {}
47
48        if isinstance(data, Mapping):
49            for outcome, v in data.items():
50                if isinstance(v, int):
51                    weight = v
52                    quantity = 1
53                else:
54                    weight, quantity = v
55                self._weight_decks[weight] = self._weight_decks.get(
56                    weight, icepool.Deck()).append(outcome, quantity)
57        else:
58            for outcome, weight in data:
59                self._weight_decks[weight] = self._weight_decks.get(
60                    weight, icepool.Deck()).append(outcome)
61
62        self._weight_die = icepool.Die({
63            weight: weight * deck.denominator()
64            for weight, deck in self._weight_decks.items()
65        })

Constructor.

Arguments:
  • data: Either an iterable of (outcome, weight), or a mapping from outcomes to either weights or (weight, quantity).
def deal( self, hand_size: int, /) -> MultisetExpression[~T]:
67    def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]':
68        """Deals the specified number of outcomes from the Wallenius.
69        
70        The result is a `MultisetExpression` representing the multiset of
71        outcomes dealt.
72        """
73        if hand_size == 0:
74            return icepool.Pool([])
75
76        def inner(weights):
77            weight_counts = Counter(weights)
78            result = None
79            for weight, count in weight_counts.items():
80                deal = self._weight_decks[weight].deal(count)
81                if result is None:
82                    result = deal
83                else:
84                    result = result + deal
85            return result
86
87        hand_weights = _wallenius_weights(self._weight_die, hand_size)
88        return hand_weights.map_to_pool(inner, star=False)

Deals the specified number of outcomes from the Wallenius.

The result is a MultisetExpression representing the multiset of outcomes dealt.
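The underlying distribution can be checked by hand for tiny cases: each card is pulled with probability proportional to its weight, and its entire weight leaves the pool. A stdlib sketch with exact fractions, not icepool's API:

```python
from fractions import Fraction

# Two outcomes with weights 3 and 1.
cards = {'a': 3, 'b': 1}  # outcome -> weight
total = sum(cards.values())

# First draw: probability proportional to weight.
first_draw = {outcome: Fraction(w, total) for outcome, w in cards.items()}

# After 'a' is drawn, its whole weight (3) is removed, so the second draw
# is 'b' with probability weight_b / remaining total weight.
p_b_after_a = Fraction(cards['b'], total - cards['a'])
```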