icepool

Package for computing dice and card probabilities.

Starting with v0.25.1, you can replace latest in the URL with an old version number to get the documentation for that version.

See this JupyterLite distribution for examples.

Visit the project page.

General conventions:

  • Instances are immutable (apart from internal caching). Anything that looks like it mutates an instance actually returns a separate instance with the change.
"""Package for computing dice and card probabilities.

Starting with `v0.25.1`, you can replace `latest` in the URL with an old version
number to get the documentation for that version.

See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html)
for examples.

[Visit the project page.](https://github.com/HighDiceRoller/icepool)

General conventions:

* Instances are immutable (apart from internal caching). Anything that looks
    like it mutates an instance actually returns a separate instance with the
    change.
"""

__docformat__ = 'google'

__version__ = '2.0.2'

from typing import Final

from icepool.typing import Outcome, RerollType, NoCacheType
from icepool.order import Order, ConflictingOrderError, UnsupportedOrder

Reroll: Final = RerollType.Reroll
"""Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function
and method descriptions for details.

This effectively removes the outcome from the probability space, along with its
contribution to the denominator.

This can be used for conditional probability by removing all outcomes not
consistent with the given observations.

Operation in specific cases:

* When used with `Again`, only that stage is rerolled, not the entire `Again`
    tree.
* To reroll with limited depth, use `Die.reroll()`, or `Again` with no
    modification.
* When used with `MultisetEvaluator`, the entire evaluation is rerolled.
"""

NoCache: Final = NoCacheType.NoCache
"""Indicates that caching should not be performed. Exact meaning depends on context."""

# Expose certain names at top-level.

from icepool.function import (d, z, __getattr__, coin, stochastic_round,
                              one_hot, from_cumulative, from_rv, pointwise_max,
                              pointwise_min, min_outcome, max_outcome,
                              consecutive, sorted_union, commonize_denominator,
                              reduce, accumulate, map, map_function,
                              map_and_time, map_to_pool)

from icepool.population.base import Population
from icepool.population.die import implicit_convert_to_die, Die
from icepool.expand import iter_cartesian_product, cartesian_product, tupleize, vectorize
from icepool.collection.vector import Vector
from icepool.collection.symbols import Symbols
from icepool.population.again import AgainExpression

Again: Final = AgainExpression(is_additive=True)
"""A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor.
`AgainExpression`s should not be fed to functions or methods other than
`Die()`, but it can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
affect how these arguments are processed. At most one of `again_count` or
`again_depth` may be provided; if neither is provided, the behavior is as
`again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.

#### Count mode

When `again_count` is provided, we start with one roll queued and execute one
roll at a time. For every `Again` we roll, we queue another roll.
If we run out of rolls, we sum the rolls to find the result. If the total number
of rolls (not including the initial roll) would exceed `again_count`, we reroll
the entire process, effectively conditioning the process on not rolling more
than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`, which
means that only the following operators are allowed:

* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative.
For example, `str` or `tuple` outcomes will not produce elements with a definite
order.

#### Depth mode

When `again_depth=0`, `again_end` is directly substituted
for each occurrence of `Again`. For other values of `again_depth`, the result
for `again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth
are rerolled.

#### Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such
rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:
* Count mode: Any result that would cause the number of rolls to exceed
    `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
"""

from icepool.population.die_with_truth import DieWithTruth

from icepool.collection.counts import CountsKeysView, CountsValuesView, CountsItemsView

from icepool.population.keep import lowest, highest, middle

from icepool.generator.pool import Pool, standard_pool
from icepool.generator.keep import KeepGenerator
from icepool.generator.compound_keep import CompoundKeepGenerator

from icepool.generator.multiset_generator import MultisetGenerator
from icepool.generator.multiset_tuple_generator import MultisetTupleGenerator
from icepool.evaluator.multiset_evaluator import MultisetEvaluator

from icepool.population.deck import Deck
from icepool.generator.deal import Deal
from icepool.generator.multi_deal import MultiDeal

from icepool.expression.multiset_expression import MultisetExpression, implicit_convert_to_expression
from icepool.evaluator.multiset_function import multiset_function
from icepool.expression.multiset_parameter import MultisetParameter, MultisetTupleParameter
from icepool.expression.multiset_mixture import MultisetMixture

from icepool.population.format import format_probability_inverse

from icepool.wallenius import Wallenius

import icepool.generator as generator
import icepool.evaluator as evaluator
import icepool.operator as operator

import icepool.typing as typing
from icepool.expand import Expandable

__all__ = [
    'd', 'z', 'coin', 'stochastic_round', 'one_hot', 'Outcome', 'Die',
    'Population', 'tupleize', 'vectorize', 'Vector', 'Symbols', 'Again',
    'CountsKeysView', 'CountsValuesView', 'CountsItemsView', 'from_cumulative',
    'from_rv', 'pointwise_max', 'pointwise_min', 'lowest', 'highest', 'middle',
    'min_outcome', 'max_outcome', 'consecutive', 'sorted_union',
    'commonize_denominator', 'reduce', 'accumulate', 'map', 'map_function',
    'map_and_time', 'map_to_pool', 'Reroll', 'RerollType', 'Pool',
    'standard_pool', 'MultisetGenerator', 'MultisetExpression',
    'MultisetEvaluator', 'Order', 'ConflictingOrderError', 'UnsupportedOrder',
    'Deck', 'Deal', 'MultiDeal', 'multiset_function', 'MultisetParameter',
    'MultisetTupleParameter', 'NoCache', 'function', 'typing', 'evaluator',
    'format_probability_inverse', 'Wallenius'
]
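To make the depth-mode substitution described in the `Again` docstring above concrete, here is a standard-library sketch (a hand-rolled illustration, not icepool's actual algorithm) of an exploding d6, roughly `Die([1, 2, 3, 4, 5, 6 + Again], again_depth=depth, again_end=0)`:

```python
from collections import Counter


def exploding_d6(depth: int) -> Counter:
    """Weight distribution of a d6 whose 6 face adds another roll.

    Mirrors depth mode: at the final depth, Again is replaced by
    again_end (0 here); otherwise the depth-1 result is substituted.
    """
    if depth == 0:
        # Again -> again_end == 0, so the face `6 + Again` becomes just 6.
        return Counter({n: 1 for n in range(1, 7)})
    # Substitute the result for depth - 1 in place of Again.
    inner = exploding_d6(depth - 1)
    denominator = sum(inner.values())
    result: Counter = Counter()
    for face in range(1, 6):
        # Plain faces are scaled up so all six faces stay equally likely.
        result[face] += denominator
    for outcome, weight in inner.items():
        result[6 + outcome] += weight
    return result
```

With `depth=1` this gives outcomes 1-5 with weight 6 each and 7-12 with weight 1 each, out of a denominator of 36.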
@cache
def d(sides: int, /) -> 'icepool.Die[int]':
    """A standard die, uniformly distributed from `1` to `sides` inclusive.

    Don't confuse this with `icepool.Die()`:

    * `icepool.Die([6])`: A `Die` that always rolls the integer 6.
    * `icepool.d(6)`: A d6.

    You can also import individual standard dice from the `icepool` module, e.g.
    `from icepool import d6`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(1, sides + 1))

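As a sanity check on what `d()` represents, here is a standard-library stand-in (`d_sketch` is a hypothetical helper, not part of icepool) that builds the same uniform weight mapping:

```python
from collections import Counter


def d_sketch(sides: int) -> Counter:
    """Hypothetical stand-in for icepool.d(): weight 1 on each of 1..sides."""
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    if sides < 1:
        raise ValueError('sides must be at least 1.')
    return Counter({n: 1 for n in range(1, sides + 1)})


# z(sides) below is the same distribution shifted down by one: range(0, sides).
```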

@cache
def z(sides: int, /) -> 'icepool.Die[int]':
    """A die uniformly distributed from `0` to `sides - 1` inclusive.

    Equal to `d(sides) - 1`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(0, sides))


def coin(n: int | float | Fraction,
         d: int = 1,
         /,
         *,
         max_denominator: int | None = None) -> 'icepool.Die[bool]':
    """A `Die` that rolls `True` with probability `n / d`, and `False` otherwise.

    If `n <= 0` or `n >= d` the result will have only one outcome.

    Args:
        n: An int numerator, or a non-integer probability.
        d: An int denominator. Should not be provided if the first argument is
            not an int.
        max_denominator: If provided and the first argument is not an int, the
            probability will be converted using
            `Fraction.limit_denominator(max_denominator)`.
    """
    if not isinstance(n, int):
        if d != 1:
            raise ValueError(
                'If a non-int numerator is provided, a denominator must not be provided.'
            )
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n = fraction.numerator
        d = fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)

    return icepool.Die(data)

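The numerator/denominator normalization in `coin()` can be traced with a standard-library version of the same logic (a sketch returning a plain weight dict rather than a `Die`; `coin_weights` is a hypothetical name):

```python
from fractions import Fraction


def coin_weights(n, d=1, *, max_denominator=None):
    """Mirror of coin()'s documented logic: {True/False: weight}."""
    if not isinstance(n, int):
        if d != 1:
            raise ValueError('If a non-int numerator is provided, '
                             'a denominator must not be provided.')
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n, d = fraction.numerator, fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)
    return data
```

For example, `coin_weights(1, 3)` gives `{False: 2, True: 1}`, and the degenerate `coin_weights(2, 1)` is clipped to the single outcome `{True: 1}`.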
def stochastic_round(x,
                     /,
                     *,
                     max_denominator: int | None = None) -> 'icepool.Die[int]':
    """Randomly rounds a value up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise, producing a `Die` with up to two outcomes.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    integer_part = math.floor(x)
    fractional_part = x - integer_part
    return integer_part + coin(fractional_part,
                               max_denominator=max_denominator)

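The two-outcome split can be reproduced with the `fractions` module (a sketch of the documented behavior; icepool returns a `Die`, while this hypothetical helper returns a plain weight dict):

```python
import math
from fractions import Fraction


def stochastic_round_weights(x, *, max_denominator=None):
    """Round x up with probability x - floor(x), down otherwise."""
    integer_part = math.floor(x)
    fraction = Fraction(x) - integer_part
    if max_denominator is not None:
        fraction = fraction.limit_denominator(max_denominator)
    n, d = fraction.numerator, fraction.denominator
    weights = {}
    if n < d:
        weights[integer_part] = d - n  # rounded down
    if n > 0:
        weights[integer_part + 1] = n  # rounded up
    return weights
```

For example, rounding 2.25 gives weight 3 on outcome 2 and weight 1 on outcome 3.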
def one_hot(sides: int, /) -> 'icepool.Die[tuple[bool, ...]]':
    """A `Die` with `Vector` outcomes with one element set to `True` uniformly at random and the rest `False`.

    This is an easy (if somewhat expensive) way of representing how many dice
    in a pool rolled each number. For example, the outcomes of `10 @ one_hot(6)`
    are the `(ones, twos, threes, fours, fives, sixes)` rolled in 10d6.
    """
    data = []
    for i in range(sides):
        outcome = [False] * sides
        outcome[i] = True
        data.append(icepool.Vector(outcome))
    return icepool.Die(data)

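The outcome vectors themselves are easy to picture with a standard-library sketch (plain tuples standing in for icepool's `Vector`; `one_hot_outcomes` is a hypothetical name):

```python
def one_hot_outcomes(sides: int) -> list[tuple[bool, ...]]:
    """Each outcome has exactly one True; each carries equal weight 1."""
    return [tuple(i == j for j in range(sides))
            for i in range(sides)]
```

Summing ten such vectors elementwise counts how many dice rolled each face, which is what `10 @ one_hot(6)` tabulates over all of 10d6.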

class Outcome(Hashable, Protocol[T_contra]):
    """Protocol to attempt to verify that outcome types are hashable and sortable.

    Far from foolproof, e.g. it cannot enforce total ordering.
    """

    def __lt__(self, other: T_contra) -> bool:
        ...

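Any hashable, totally ordered type satisfies this protocol. For example, a frozen, ordered dataclass (`Card` is a hypothetical custom outcome type, not part of icepool):

```python
from dataclasses import dataclass


@dataclass(frozen=True, order=True)
class Card:
    # Compared field-by-field: rank first, then suit.
    rank: int
    suit: str


# Hashable and sortable, so instances can serve as Die outcomes.
assert Card(2, 'clubs') < Card(3, 'clubs')
assert len({Card(2, 'clubs'), Card(2, 'clubs')}) == 1
```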

class Die(Population[T_co], MaybeHashKeyed):
    """Sampling with replacement. Quantities represent weights.

    Dice are immutable. Methods do not modify the `Die` in-place;
    rather they return a `Die` representing the result.

    It's also possible to have "empty" dice with no outcomes at all,
    though these have little use other than being sentinel values.
    """

    _data: Counts[T_co]

    @property
    def _new_type(self) -> type:
        return Die

    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'Outcome | Die | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
            of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Several methods and functions forward `**kwargs` to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.

        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                    quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                    * For convenience, `tuple`s containing `Population`s will be
                        `tupleize`d into a `Population` of `tuple`s.
                        This does not apply to subclasses of `tuple` such as
                        `namedtuple` or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                    The quantity assigned to a `Die` is shared among its
                    outcomes. The total denominator will be scaled up if
                    necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                    `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.

        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                if again_end is not None:
                    raise ValueError(
                        'again_end cannot be used with again_count.')
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)

    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Die[T_co]':
        """Creates a new `Die` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    # Defined separately from the superclass to help typing.
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)

    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)

    # Basic access.

    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome, /) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        """The number of outcomes."""
        return len(self._data)

    def __contains__(self, outcome) -> bool:
        return outcome in self._data

    # Quantity management.

    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common denominator."""
        return icepool.Die(self._data.simplify())

    # Rerolls and other outcome management.

    def reroll(self,
               which: Callable[..., bool] | Collection[T_co] | None = None,
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls the given outcomes.

        Args:
            which: Selects which outcomes to reroll. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                    should be rerolled.
                * If not provided, the min outcome will be rerolled.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth == 'inf':
            data = {
                outcome: quantity
                for outcome, quantity in self.items()
                if outcome not in outcome_set
            }
        elif depth < 0:
            raise ValueError('reroll depth cannot be negative.')
        else:
            total_reroll_quantity = sum(quantity
                                        for outcome, quantity in self.items()
                                        if outcome in outcome_set)
            total_stop_quantity = self.denominator() - total_reroll_quantity
            rerollable_factor = total_reroll_quantity**depth
            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                           * total_reroll_quantity) // total_stop_quantity
            data = {
                outcome: (rerollable_factor *
                          quantity if outcome in outcome_set else stop_factor *
                          quantity)
                for outcome, quantity in self.items()
            }
        return icepool.Die(data)

    def filter(self,
               which: Callable[..., bool] | Collection[T_co],
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls until getting one of the given outcomes.

        Essentially the complement of `reroll()`.

        Args:
            which: Selects which outcomes to reroll until. Options:
                * A callable that takes an outcome and returns `True` if it
                    should be accepted.
                * A collection of outcomes to reroll until.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if callable(which):
            if star is None:
                star = infer_star(which)
            if star:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes()
                    if not which(*outcome)  # type: ignore
                }
            else:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes() if not which(outcome)
                }
        else:
            not_outcomes = {
                not_outcome
                for not_outcome in self.outcomes() if not_outcome not in which
            }
        return self.reroll(not_outcomes, depth=depth)

    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Truncates the outcomes of this `Die` to the given range.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be truncated.

        This effectively rerolls outcomes outside the given range.
        If instead you want to replace those outcomes with the nearest endpoint,
        use `clip()`.

        Not to be confused with `trunc(die)`, which performs integer truncation
        on each outcome.
        """
        if min_outcome is not None:
            start = bisect.bisect_left(self.outcomes(), min_outcome)
        else:
            start = None
        if max_outcome is not None:
            stop = bisect.bisect_right(self.outcomes(), max_outcome)
        else:
            stop = None
        data = {k: v for k, v in self.items()[start:stop]}
        return icepool.Die(data)

    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Clips the outcomes of this `Die` to the given values.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be clipped.

        This is not the same as rerolling outcomes beyond this range;
        the outcome is simply adjusted to fit within the range.
        This will typically cause some quantity to bunch up at the endpoint(s).
        If you want to reroll outcomes beyond this range, use `truncate()`.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            if min_outcome is not None and outcome <= min_outcome:
                data[min_outcome] += quantity
            elif max_outcome is not None and outcome >= max_outcome:
                data[max_outcome] += quantity
            else:
                data[outcome] += quantity
        return icepool.Die(data)

    @cached_property
    def _popped_min(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_min())
        return die, self.quantities()[0]

    def _pop_min(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the min outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_min

    @cached_property
    def _popped_max(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_max())
        return die, self.quantities()[-1]

    def _pop_max(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the max outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_max

 446    # Processes.
 447
 448    def map(
 449            self,
 450            repl:
 451        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
 452            /,
 453            *extra_args,
 454            star: bool | None = None,
 455            repeat: int | Literal['inf'] = 1,
 456            time_limit: int | Literal['inf'] | None = None,
 457            again_count: int | None = None,
 458            again_depth: int | None = None,
 459            again_end: 'U | Die[U] | icepool.RerollType | None' = None,
 460            **kwargs) -> 'Die[U]':
 461        """Maps outcomes of the `Die` to other outcomes.
 462
 463        This is also useful for representing processes.
 464
 465        As `icepool.map(repl, self, ...)`.
 466        """
 467        return icepool.map(repl,
 468                           self,
 469                           *extra_args,
 470                           star=star,
 471                           repeat=repeat,
 472                           time_limit=time_limit,
 473                           again_count=again_count,
 474                           again_depth=again_depth,
 475                           again_end=again_end,
 476                           **kwargs)
 477
 478    def map_and_time(
 479            self,
 480            repl:
 481        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
 482            /,
 483            *extra_args,
 484            star: bool | None = None,
 485            time_limit: int,
 486            **kwargs) -> 'Die[tuple[T_co, int]]':
 487        """Repeatedly map outcomes of the state to other outcomes, while also
 488        counting timesteps.
 489
 490        This is useful for representing processes.
 491
 492        As `map_and_time(repl, self, ...)`.
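
        For example, a deterministic countdown sketch (the start state and
        step function here are made up for illustration):

        ```python
        import icepool

        # Start at 3 and count down toward the absorbing state 0,
        # producing (final_state, time) tuples.
        result = icepool.Die([3]).map_and_time(lambda x: max(x - 1, 0),
                                               time_limit=5)
        ```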
 493        """
 494        return icepool.map_and_time(repl,
 495                                    self,
 496                                    *extra_args,
 497                                    star=star,
 498                                    time_limit=time_limit,
 499                                    **kwargs)
 500
 501    def time_to_sum(self: 'Die[int]',
 502                    target: int,
 503                    /,
 504                    max_time: int,
 505                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
 506        """The number of rolls until the cumulative sum is greater than or equal to the target.
 507
 508        Args:
 509            target: The number to stop at once reached.
 510            max_time: The maximum number of rolls to run.
 511                If the sum is not reached, the outcome is determined by `dnf`.
 512            dnf: What time to assign in cases where the target was not reached
 513                in `max_time`. If not provided, this is set to `max_time`.
 514                `dnf=icepool.Reroll` will remove this case from the result,
 515                effectively rerolling it.
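
        For example, a sketch using the standard `icepool.d6`:

        ```python
        import icepool

        # Rolls of a d6 until the running total reaches at least 4:
        # anywhere from 1 roll (a 4, 5, or 6) to 4 rolls (all 1s).
        rolls = icepool.d6.time_to_sum(4, 5)
        ```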
 516        """
 517        if target <= 0:
 518            return Die([0])
 519
 520        if dnf is None:
 521            dnf = max_time
 522
 523        def step(total, roll):
 524            return min(total + roll, target)
 525
 526        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
 527            step, self, time_limit=max_time)
 528
 529        def get_time(total, time):
 530            if total < target:
 531                return dnf
 532            else:
 533                return time
 534
 535        return result.map(get_time)
 536
 537    @cached_property
 538    def _mean_time_to_sum_cache(self) -> list[Fraction]:
 539        return [Fraction(0)]
 540
 541    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
 542        """The mean number of rolls until the cumulative sum is greater than or equal to the target.
 543
 544        Args:
 545            target: The target sum.
 546
 547        Raises:
 548            ValueError: If `self` has negative outcomes.
 549            ZeroDivisionError: If `self.mean() == 0`.
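
        For example, every face of a d6 is at least 1, so a target of 1
        always takes exactly one roll:

        ```python
        import icepool

        mean_rolls = icepool.d6.mean_time_to_sum(1)
        ```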
 550        """
 551        target = max(target, 0)
 552
 553        if target < len(self._mean_time_to_sum_cache):
 554            return self._mean_time_to_sum_cache[target]
 555
 556        if self.min_outcome() < 0:
 557            raise ValueError(
 558                'mean_time_to_sum does not handle negative outcomes.')
 559        time_per_effect = Fraction(self.denominator(),
 560                                   self.denominator() - self.quantity(0))
 561
 562        for i in range(len(self._mean_time_to_sum_cache), target + 1):
 563            result = time_per_effect + self.reroll([
 564                0
 565            ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
 566            self._mean_time_to_sum_cache.append(result)
 567
 568        return result
 569
 570    def explode(self,
 571                which: Collection[T_co] | Callable[..., bool] | None = None,
 572                /,
 573                *,
 574                star: bool | None = None,
 575                depth: int = 9,
 576                end=None) -> 'Die[T_co]':
 577        """Causes outcomes to be rolled again and added to the total.
 578
 579        Args:
 580            which: Which outcomes to explode. Options:
 581                * A collection of outcomes to explode.
 582                * A callable that takes an outcome and returns `True` if it
 583                    should be exploded.
 584                * If not supplied, the max outcome will explode.
 585            star: Whether outcomes should be unpacked into separate arguments
 586                before sending them to a callable `which`.
 587                If not provided, this will be guessed based on the function
 588                signature.
 589            depth: The maximum number of additional dice to roll, not counting
 590                the initial roll.
 591                If not supplied, a default of 9 will be used.
 592            end: Once `depth` is reached, further explosions will be treated
 593                as this value. By default, a zero value will be used.
 594                `icepool.Reroll` will make one extra final roll, rerolling until
 595                a non-exploding outcome is reached.
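
        For example, a sketch with limited depth on the standard `icepool.d6`:

        ```python
        import icepool

        # A d6 that rolls again and adds on each 6, up to 2 extra dice;
        # the maximum outcome is 6 + 6 + 6 = 18.
        exploding = icepool.d6.explode(depth=2)
        ```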
 596        """
 597
 598        if which is None:
 599            outcome_set = {self.max_outcome()}
 600        else:
 601            outcome_set = self._select_outcomes(which, star)
 602
 603        if depth < 0:
 604            raise ValueError('depth cannot be negative.')
 605        elif depth == 0:
 606            return self
 607
 608        def map_final(outcome):
 609            if outcome in outcome_set:
 610                return outcome + icepool.Again
 611            else:
 612                return outcome
 613
 614        return self.map(map_final, again_depth=depth, again_end=end)
 615
 616    def if_else(
 617        self,
 618        outcome_if_true: U | 'Die[U]',
 619        outcome_if_false: U | 'Die[U]',
 620        *,
 621        again_count: int | None = None,
 622        again_depth: int | None = None,
 623        again_end: 'U | Die[U] | icepool.RerollType | None' = None
 624    ) -> 'Die[U]':
 625        """Ternary conditional operator.
 626
 627        This replaces truthy outcomes with the first argument and falsy outcomes
 628        with the second argument.
 629
 630        Args:
 631            again_count, again_depth, again_end: Forwarded to the final die constructor.
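
        For example, a sketch with made-up 'hit'/'miss' labels:

        ```python
        import icepool

        # A d6 hits on a 5 or 6.
        result = (icepool.d6 >= 5).if_else('hit', 'miss')
        ```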
 632        """
 633        return self.map(lambda x: bool(x)).map(
 634            {
 635                True: outcome_if_true,
 636                False: outcome_if_false
 637            },
 638            again_count=again_count,
 639            again_depth=again_depth,
 640            again_end=again_end)
 641
 642    def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
 643        """A die that returns True iff the roll of the die is contained in the target."""
 644        return self.map(lambda x: x in target)
 645
 646    def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
 647        """Roll this die `rolls` times and count how many results are in the target."""
 648        return rolls @ self.is_in(target)
 649
 650    # Pools and sums.
 651
 652    @cached_property
 653    def _sum_cache(self) -> MutableMapping[int, 'Die']:
 654        return {}
 655
 656    def _sum_all(self, rolls: int, /) -> 'Die':
 657        """Roll this `Die` `rolls` times and sum the results.
 658
 659        The sum is computed one at a time, with each additional item on the 
 660        right, similar to `functools.reduce()`.
 661
 662        If `rolls` is negative, roll the `Die` `abs(rolls)` times and negate
 663        the result.
 664
 665        If you instead want to replace tuple (or other sequence) outcomes with
 666        their sum, use `die.map(sum)`.
 667        """
 668        if rolls in self._sum_cache:
 669            return self._sum_cache[rolls]
 670
 671        if rolls < 0:
 672            result = -self._sum_all(-rolls)
 673        elif rolls == 0:
 674            result = self.zero().simplify()
 675        elif rolls == 1:
 676            result = self
 677        else:
 678            # In addition to working similar to reduce(), this seems to perform
 679            # better than binary split.
 680            result = self._sum_all(rolls - 1) + self
 681
 682        self._sum_cache[rolls] = result
 683        return result
 684
 685    def __matmul__(self: 'Die[int]', other) -> 'Die':
 686        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.
 687        
 688        The sum is computed one at a time, with each additional item on the 
 689        right, similar to `functools.reduce()`.
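
        For example, rolling a d4 and then summing that many d6:

        ```python
        import icepool

        total = icepool.d4 @ icepool.d6
        ```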
 690        """
 691        if isinstance(other, icepool.AgainExpression):
 692            return NotImplemented
 693        other = implicit_convert_to_die(other)
 694
 695        data: MutableMapping[int, Any] = defaultdict(int)
 696
 697        max_abs_die_count = max(abs(self.min_outcome()),
 698                                abs(self.max_outcome()))
 699        for die_count, die_count_quantity in self.items():
 700            factor = other.denominator()**(max_abs_die_count - abs(die_count))
 701            subresult = other._sum_all(die_count)
 702            for outcome, subresult_quantity in subresult.items():
 703                data[
 704                    outcome] += subresult_quantity * die_count_quantity * factor
 705
 706        return icepool.Die(data)
 707
 708    def __rmatmul__(self, other: 'int | Die[int]') -> 'Die':
 709        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.
 710        
 711        The sum is computed one at a time, with each additional item on the 
 712        right, similar to `functools.reduce()`.
 713        """
 714        if isinstance(other, icepool.AgainExpression):
 715            return NotImplemented
 716        other = implicit_convert_to_die(other)
 717        return other.__matmul__(self)
 718
 719    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
 720        """Possible sequences produced by rolling this die a number of times.
 721        
 722        This is extremely expensive computationally. If possible, use `reduce()`
 723        instead; if you don't care about order, `Die.pool()` is better.
 724        """
 725        return icepool.cartesian_product(*(self for _ in range(rolls)),
 726                                         outcome_type=tuple)  # type: ignore
 727
 728    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
 729        """Creates a `Pool` from this `Die`.
 730
 731        You might subscript the pool immediately afterwards, e.g.
 732        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
 733        lowest of 5d6.
 734
 735        Args:
 736            rolls: The number of copies of this `Die` to put in the pool.
 737                Or, a sequence of one `int` per die acting as
 738                `keep_tuple`. Note that `...` cannot be used in the
 739                argument to this method, as the argument determines the size of
 740                the pool.
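
        For example, the difference between the highest and lowest of 5d6:

        ```python
        import icepool

        spread = icepool.d6.pool(5)[-1, ..., 1].sum()
        ```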
 741        """
 742        if isinstance(rolls, int):
 743            return icepool.Pool({self: rolls})
 744        else:
 745            pool_size = len(rolls)
 746            # Haven't dealt with narrowing return type.
 747            return icepool.Pool({self: pool_size})[rolls]  # type: ignore
 748
 749    @overload
 750    def keep(self, rolls: Sequence[int], /) -> 'Die':
 751        """Selects elements after drawing and sorting and sums them.
 752
 753        Args:
 754            rolls: A sequence of `int` specifying how many times to count each
 755                element in ascending order.
 756        """
 757
 758    @overload
 759    def keep(self, rolls: int,
 760             index: slice | Sequence[int | EllipsisType] | int, /):
 761        """Selects elements after drawing and sorting and sums them.
 762
 763        Args:
 764            rolls: The number of dice to roll.
 765            index: One of the following:
 766            * An `int`. This will count only the roll at the
 767                specified index.
 768            * A `slice`. The selected dice are counted once each.
 769            * A sequence of one `int` for each `Die`.
 770                Each roll is counted that many times, which could be multiple or
 771                negative times.
 772
 773                Up to one `...` (`Ellipsis`) may be used.
 774                `...` will be replaced with a number of zero
 775                counts depending on the `rolls`.
 776                This number may be "negative" if more `int`s are provided than
 777                `rolls`. Specifically:
 778
 779                * If `index` is shorter than `rolls`, `...`
 780                    acts as enough zero counts to make up the difference.
 781                    E.g. `(1, ..., 1)` on five dice would act as
 782                    `(1, 0, 0, 0, 1)`.
 783                * If `index` has length equal to `rolls`, `...` has no effect.
 784                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
 785                * If `index` is longer than `rolls` and `...` is on one side,
 786                    elements will be dropped from `index` on the side with `...`.
 787                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
 788                * If `index` is longer than `rolls` and `...`
 789                    is in the middle, the counts will be as the sum of two
 790                    one-sided `...`.
 791                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
 792                    If `rolls` was 1 this would have the -1 and 1 cancel each other out.
 793        """
 794
 795    def keep(self,
 796             rolls: int | Sequence[int],
 797             index: slice | Sequence[int | EllipsisType] | int | None = None,
 798             /) -> 'Die':
 799        """Selects elements after drawing and sorting and sums them.
 800
 801        Args:
 802            rolls: The number of dice to roll.
 803            index: One of the following:
 804            * An `int`. This will count only the roll at the
 805                specified index.
 806            * A `slice`. The selected dice are counted once each.
 807            * A sequence of `int`s with length equal to `rolls`.
 808                Each roll is counted that many times, which could be multiple or
 809                negative times.
 810
 811                Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
 812                the `rolls` argument may be omitted.
 813
 814                `...` will be replaced with a number of zero counts in order
 815                to make up any missing elements compared to `rolls`.
 816                This number may be "negative" if more `int`s are provided than
 817                `rolls`. Specifically:
 818
 819                * If `index` is shorter than `rolls`, `...`
 820                    acts as enough zero counts to make up the difference.
 821                    E.g. `(1, ..., 1)` on five dice would act as
 822                    `(1, 0, 0, 0, 1)`.
 823                * If `index` has length equal to `rolls`, `...` has no effect.
 824                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
 825                * If `index` is longer than `rolls` and `...` is on one side,
 826                    elements will be dropped from `index` on the side with `...`.
 827                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
 828                * If `index` is longer than `rolls` and `...`
 829                    is in the middle, the counts will be as the sum of two
 830                    one-sided `...`.
 831                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
 832                    If `rolls` was 1 this would have the -1 and 1 cancel each other out.
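
        For example, `(1, ..., 1)` on five dice counts only the lowest and
        highest of 5d6:

        ```python
        import icepool

        ends = icepool.d6.keep(5, (1, ..., 1))
        ```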
 833        """
 834        if isinstance(rolls, int):
 835            if index is None:
 836                raise ValueError(
 837                    'If the number of rolls is an integer, an index argument must be provided.'
 838                )
 839            if isinstance(index, int):
 840                return self.pool(rolls).keep(index)
 841            else:
 842                return self.pool(rolls).keep(index).sum()  # type: ignore
 843        else:
 844            if index is not None:
 845                raise ValueError('Only one index sequence can be given.')
 846            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore
 847
 848    def lowest(self,
 849               rolls: int,
 850               /,
 851               keep: int | None = None,
 852               drop: int | None = None) -> 'Die':
 853        """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.
 854
 855        The outcomes should support addition and multiplication if `keep != 1`.
 856
 857        Args:
 858            rolls: The number of dice to roll. All dice will have the same
 859                outcomes as `self`.
 860            keep, drop: These arguments work together:
 861                * If neither are provided, the single lowest die will be taken.
 862                * If only `keep` is provided, the `keep` lowest dice will be summed.
 863                * If only `drop` is provided, the `drop` lowest dice will be dropped
 864                    and the rest will be summed.
 865                * If both are provided, `drop` lowest dice will be dropped, then
 866                    the next `keep` lowest dice will be summed.
 867
 868        Returns:
 869            A `Die` representing the probability distribution of the sum.
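
        For example, the lower of two d6:

        ```python
        import icepool

        # The lowest of 2d6 is 1 in 11 of the 36 cases.
        low = icepool.d6.lowest(2)
        ```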
 870        """
 871        index = lowest_slice(keep, drop)
 872        canonical = canonical_slice(index, rolls)
 873        if canonical.start == 0 and canonical.stop == 1:
 874            return self._lowest_single(rolls)
 875        # Expression evaluators are difficult to type.
 876        return self.pool(rolls)[index].sum()  # type: ignore
 877
 878    def _lowest_single(self, rolls: int, /) -> 'Die':
 879        """Roll this die several times and keep the lowest."""
 880        if rolls == 0:
 881            return self.zero().simplify()
 882        return icepool.from_cumulative(
 883            self.outcomes(), [x**rolls for x in self.quantities('>=')],
 884            reverse=True)
 885
 886    def highest(self,
 887                rolls: int,
 888                /,
 889                keep: int | None = None,
 890                drop: int | None = None) -> 'Die[T_co]':
 891        """Roll several of this `Die` and return the highest result, or the sum of some of the highest.
 892
 893        The outcomes should support addition and multiplication if `keep != 1`.
 894
 895        Args:
 896            rolls: The number of dice to roll.
 897            keep, drop: These arguments work together:
 898                * If neither are provided, the single highest die will be taken.
 899                * If only `keep` is provided, the `keep` highest dice will be summed.
 900                * If only `drop` is provided, the `drop` highest dice will be dropped
 901                    and the rest will be summed.
 902                * If both are provided, `drop` highest dice will be dropped, then
 903                    the next `keep` highest dice will be summed.
 904
 905        Returns:
 906            A `Die` representing the probability distribution of the sum.
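
        For example, the classic "4d6 drop lowest":

        ```python
        import icepool

        ability_score = icepool.d6.highest(4, keep=3)
        ```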
 907        """
 908        index = highest_slice(keep, drop)
 909        canonical = canonical_slice(index, rolls)
 910        if canonical.start == rolls - 1 and canonical.stop == rolls:
 911            return self._highest_single(rolls)
 912        # Expression evaluators are difficult to type.
 913        return self.pool(rolls)[index].sum()  # type: ignore
 914
 915    def _highest_single(self, rolls: int, /) -> 'Die[T_co]':
 916        """Roll this die several times and keep the highest."""
 917        if rolls == 0:
 918            return self.zero().simplify()
 919        return icepool.from_cumulative(
 920            self.outcomes(), [x**rolls for x in self.quantities('<=')])
 921
 922    def middle(
 923            self,
 924            rolls: int,
 925            /,
 926            keep: int = 1,
 927            *,
 928            tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
 929        """Roll several of this `Die` and sum the sorted results in the middle.
 930
 931        The outcomes should support addition and multiplication if `keep != 1`.
 932
 933        Args:
 934            rolls: The number of dice to roll.
 935            keep: The number of outcomes to sum. If this is greater than
 936                `rolls`, all are kept.
 937            tie: What to do if `keep` is odd but `rolls` is even, or vice
 938                versa.
 939                * 'error' (default): Raises `IndexError`.
 940                * 'high': The higher outcome is taken.
 941                * 'low': The lower outcome is taken.
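
        For example, the median of three d6:

        ```python
        import icepool

        median = icepool.d6.middle(3)
        ```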
 942        """
 943        # Expression evaluators are difficult to type.
 944        return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore
 945
 946    def map_to_pool(
 947            self,
 948            repl:
 949        'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
 950            /,
 951            *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
 952            star: bool | None = None,
 953            **kwargs) -> 'icepool.MultisetExpression[U]':
 954        """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.
 955
 956        As `icepool.map_to_pool(repl, self, ...)`.
 957
 958        If no argument is provided, the outcomes will be used to construct a
 959        mixture of pools directly, similar to the inverse of `pool.expand()`.
 960        Note that this is not particularly efficient since it does not make much
 961        use of dynamic programming.
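
        For example, a sketch that rolls a d4 and then pools that many d6,
        similar to `d4 @ d6`:

        ```python
        import icepool

        total = icepool.d4.map_to_pool(lambda x: [icepool.d6] * x).sum()
        ```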
 962        """
 963        if repl is None:
 964            repl = lambda x: x
 965        return icepool.map_to_pool(repl,
 966                                   self,
 967                                   *extra_args,
 968                                   star=star,
 969                                   **kwargs)
 970
 971    def explode_to_pool(self,
 972                        rolls: int,
 973                        which: Collection[T_co] | Callable[..., bool]
 974                        | None = None,
 975                        /,
 976                        *,
 977                        star: bool | None = None,
 978                        depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
 979        """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.
 980        
 981        Args:
 982            rolls: The number of initial dice.
 983            which: Which outcomes to explode. Options:
 984                * A single outcome to explode.
 985                * A collection of outcomes to explode.
 986                * A callable that takes an outcome and returns `True` if it
 987                    should be exploded.
 988                * If not supplied, the max outcome will explode.
 989            star: Whether outcomes should be unpacked into separate arguments
 990                before sending them to a callable `which`.
 991                If not provided, this will be guessed based on the function
 992                signature.
 993            depth: The maximum depth of explosions for an individual die.
 994
 995        Returns:
 996            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
 997            that this is not technically a `Pool`, though it supports most of 
 998            the same operations.
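
        For example, a sketch keeping each exploded die separate:

        ```python
        import icepool

        # 3d6 where each 6 adds one extra die to the pool.
        pool = icepool.d6.explode_to_pool(3, depth=1)
        total = pool.sum()
        ```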
 999        """
1000        if depth == 0:
1001            return self.pool(rolls)
1002        if which is None:
1003            explode_set = {self.max_outcome()}
1004        else:
1005            explode_set = self._select_outcomes(which, star)
1006        if not explode_set:
1007            return self.pool(rolls)
1008        explode: 'Die[T_co]'
1009        not_explode: 'Die[T_co]'
1010        explode, not_explode = self.split(explode_set)
1011
1012        single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
1013            int)
1014        for i in range(depth + 1):
1015            weight = explode.denominator()**i * self.denominator()**(
1016                depth - i) * not_explode.denominator()
1017            single_data[icepool.Vector((i, 1))] += weight
1018        single_data[icepool.Vector(
1019            (depth + 1, 0))] += explode.denominator()**(depth + 1)
1020
1021        single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
1022        count_die = rolls @ single_count_die
1023
1024        return count_die.map_to_pool(
1025            lambda x, nx: [explode] * x + [not_explode] * nx)
1026
1027    def reroll_to_pool(
1028        self,
1029        rolls: int,
1030        which: Callable[..., bool] | Collection[T_co],
1031        /,
1032        max_rerolls: int,
1033        *,
1034        star: bool | None = None,
1035        mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
1036    ) -> 'icepool.MultisetExpression[T_co]':
1037        """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.
1038
1039        Each die can only be rerolled once (effectively `depth=1`), and no more
1040        than `max_rerolls` dice may be rerolled.
1041        
1042        Args:
1043            rolls: How many dice in the pool.
1044            which: Selects which outcomes are eligible to be rerolled. Options:
1045                * A collection of outcomes to reroll.
1046                * A callable that takes an outcome and returns `True` if it
1047                    could be rerolled.
1048            max_rerolls: The maximum number of dice to reroll. 
1049                Note that each die can only be rerolled once, so if the number 
1050                of eligible dice is less than this, the excess rerolls have no
1051                effect.
1052            star: Whether outcomes should be unpacked into separate arguments
1053                before sending them to a callable `which`.
1054                If not provided, this will be guessed based on the function
1055                signature.
1056            mode: How dice are selected for rerolling if there are more eligible
1057                dice than `max_rerolls`. Options:
1058                * `'random'` (default): Eligible dice will be chosen uniformly
1059                    at random.
1060                * `'lowest'`: The lowest eligible dice will be rerolled.
1061                * `'highest'`: The highest eligible dice will be rerolled.
1062                * `'drop'`: All dice that ended up on an outcome selected by 
1063                    `which` will be dropped. This includes both dice that rolled
1064                    into `which` initially and were not rerolled, and dice that
1065                    were rerolled but rolled into `which` again. This can be
1066                    considerably more efficient than the other modes.
1067
1068        Returns:
1069            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
1070            that this is not technically a `Pool`, though it supports most of 
1071            the same operations.
1072        """
1073        rerollable_set = self._select_outcomes(which, star)
1074        if not rerollable_set:
1075            return self.pool(rolls)
1076
1077        rerollable_die: 'Die[T_co]'
1078        not_rerollable_die: 'Die[T_co]'
1079        rerollable_die, not_rerollable_die = self.split(rerollable_set)
1080        single_is_rerollable = icepool.coin(rerollable_die.denominator(),
1081                                            self.denominator())
1082        rerollable = rolls @ single_is_rerollable
1083
1084        def split(initial_rerollable: int) -> Die[tuple[int, int, int]]:
1085            """Computes the composition of the pool.
1086
1087            Returns:
1088                initial_rerollable: The number of dice that initially fell into
1089                    the rerollable set.
1090                rerolled_to_rerollable: The number of dice that were rerolled,
1091                    but fell into the rerollable set again.
1092                not_rerollable: The number of dice that ended up outside the
1093                    rerollable set, including both initial and rerolled dice.
1094                not_rerolled: The number of dice that were eligible for
1095                    rerolling but were not rerolled.
1096            """
1097            initial_not_rerollable = rolls - initial_rerollable
1098            rerolled = min(initial_rerollable, max_rerolls)
1099            not_rerolled = initial_rerollable - rerolled
1100
1101            def second_split(rerolled_to_rerollable):
1102                """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets."""
1103                rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable
1104                return icepool.tupleize(
1105                    initial_rerollable, rerolled_to_rerollable,
1106                    initial_not_rerollable + rerolled_to_not_rerollable,
1107                    not_rerolled)
1108
1109            return icepool.map(second_split,
1110                               rerolled @ single_is_rerollable,
1111                               star=False)
1112
1113        pool_composition = rerollable.map(split, star=False)
1114        denominator = self.denominator()**(rolls + min(rolls, max_rerolls))
1115        pool_composition = pool_composition.multiply_to_denominator(
1116            denominator)
1117
1118        def make_pool(initial_rerollable, rerolled_to_rerollable,
1119                      not_rerollable, not_rerolled):
1120            common = rerollable_die.pool(
1121                rerolled_to_rerollable) + not_rerollable_die.pool(
1122                    not_rerollable)
1123            match mode:
1124                case 'random':
1125                    return common + rerollable_die.pool(not_rerolled)
1126                case 'lowest':
1127                    return common + rerollable_die.pool(
1128                        initial_rerollable).highest(not_rerolled)
1129                case 'highest':
1130                    return common + rerollable_die.pool(
1131                        initial_rerollable).lowest(not_rerolled)
1132                case 'drop':
1133                    return not_rerollable_die.pool(not_rerollable)
1134                case _:
1135                    raise ValueError(
1136                        f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
1137                    )
1138
1139        return pool_composition.map_to_pool(make_pool, star=True)
1140
1141    # Unary operators.
1142
1143    def __neg__(self) -> 'Die[T_co]':
1144        return self.unary_operator(operator.neg)
1145
1146    def __pos__(self) -> 'Die[T_co]':
1147        return self.unary_operator(operator.pos)
1148
1149    def __invert__(self) -> 'Die[T_co]':
1150        return self.unary_operator(operator.invert)
1151
1152    def abs(self) -> 'Die[T_co]':
1153        return self.unary_operator(operator.abs)
1154
1155    __abs__ = abs
1156
1157    def round(self, ndigits: int | None = None) -> 'Die':
1158        return self.unary_operator(round, ndigits)
1159
1160    __round__ = round
1161
1162    def stochastic_round(self,
1163                         *,
1164                         max_denominator: int | None = None) -> 'Die[int]':
1165        """Randomly rounds outcomes up or down to the nearest integer according to the two distances.
1166        
1167        Specifically, rounds `x` up with probability `x - floor(x)` and down
1168        otherwise.
1169
1170        Args:
1171            max_denominator: If provided, each rounding will be performed
1172                using `fractions.Fraction.limit_denominator(max_denominator)`.
1173                Otherwise, the rounding will be performed without
1174                `limit_denominator`.
1175        """
1176        return self.map(lambda x: icepool.stochastic_round(
1177            x, max_denominator=max_denominator))
1178
1179    def trunc(self) -> 'Die':
1180        return self.unary_operator(math.trunc)
1181
1182    __trunc__ = trunc
1183
1184    def floor(self) -> 'Die':
1185        return self.unary_operator(math.floor)
1186
1187    __floor__ = floor
1188
1189    def ceil(self) -> 'Die':
1190        return self.unary_operator(math.ceil)
1191
1192    __ceil__ = ceil
1193
1194    # Binary operators.
1195
1196    def __add__(self, other) -> 'Die':
1197        if isinstance(other, icepool.AgainExpression):
1198            return NotImplemented
1199        other = implicit_convert_to_die(other)
1200        return self.binary_operator(other, operator.add)
1201
1202    def __radd__(self, other) -> 'Die':
1203        if isinstance(other, icepool.AgainExpression):
1204            return NotImplemented
1205        other = implicit_convert_to_die(other)
1206        return other.binary_operator(self, operator.add)
1207
1208    def __sub__(self, other) -> 'Die':
1209        if isinstance(other, icepool.AgainExpression):
1210            return NotImplemented
1211        other = implicit_convert_to_die(other)
1212        return self.binary_operator(other, operator.sub)
1213
1214    def __rsub__(self, other) -> 'Die':
1215        if isinstance(other, icepool.AgainExpression):
1216            return NotImplemented
1217        other = implicit_convert_to_die(other)
1218        return other.binary_operator(self, operator.sub)
1219
1220    def __mul__(self, other) -> 'Die':
1221        if isinstance(other, icepool.AgainExpression):
1222            return NotImplemented
1223        other = implicit_convert_to_die(other)
1224        return self.binary_operator(other, operator.mul)
1225
1226    def __rmul__(self, other) -> 'Die':
1227        if isinstance(other, icepool.AgainExpression):
1228            return NotImplemented
1229        other = implicit_convert_to_die(other)
1230        return other.binary_operator(self, operator.mul)
1231
1232    def __truediv__(self, other) -> 'Die':
1233        if isinstance(other, icepool.AgainExpression):
1234            return NotImplemented
1235        other = implicit_convert_to_die(other)
1236        return self.binary_operator(other, operator.truediv)
1237
1238    def __rtruediv__(self, other) -> 'Die':
1239        if isinstance(other, icepool.AgainExpression):
1240            return NotImplemented
1241        other = implicit_convert_to_die(other)
1242        return other.binary_operator(self, operator.truediv)
1243
1244    def __floordiv__(self, other) -> 'Die':
1245        if isinstance(other, icepool.AgainExpression):
1246            return NotImplemented
1247        other = implicit_convert_to_die(other)
1248        return self.binary_operator(other, operator.floordiv)
1249
1250    def __rfloordiv__(self, other) -> 'Die':
1251        if isinstance(other, icepool.AgainExpression):
1252            return NotImplemented
1253        other = implicit_convert_to_die(other)
1254        return other.binary_operator(self, operator.floordiv)
1255
1256    def __pow__(self, other) -> 'Die':
1257        if isinstance(other, icepool.AgainExpression):
1258            return NotImplemented
1259        other = implicit_convert_to_die(other)
1260        return self.binary_operator(other, operator.pow)
1261
1262    def __rpow__(self, other) -> 'Die':
1263        if isinstance(other, icepool.AgainExpression):
1264            return NotImplemented
1265        other = implicit_convert_to_die(other)
1266        return other.binary_operator(self, operator.pow)
1267
1268    def __mod__(self, other) -> 'Die':
1269        if isinstance(other, icepool.AgainExpression):
1270            return NotImplemented
1271        other = implicit_convert_to_die(other)
1272        return self.binary_operator(other, operator.mod)
1273
1274    def __rmod__(self, other) -> 'Die':
1275        if isinstance(other, icepool.AgainExpression):
1276            return NotImplemented
1277        other = implicit_convert_to_die(other)
1278        return other.binary_operator(self, operator.mod)
1279
1280    def __lshift__(self, other) -> 'Die':
1281        if isinstance(other, icepool.AgainExpression):
1282            return NotImplemented
1283        other = implicit_convert_to_die(other)
1284        return self.binary_operator(other, operator.lshift)
1285
1286    def __rlshift__(self, other) -> 'Die':
1287        if isinstance(other, icepool.AgainExpression):
1288            return NotImplemented
1289        other = implicit_convert_to_die(other)
1290        return other.binary_operator(self, operator.lshift)
1291
1292    def __rshift__(self, other) -> 'Die':
1293        if isinstance(other, icepool.AgainExpression):
1294            return NotImplemented
1295        other = implicit_convert_to_die(other)
1296        return self.binary_operator(other, operator.rshift)
1297
1298    def __rrshift__(self, other) -> 'Die':
1299        if isinstance(other, icepool.AgainExpression):
1300            return NotImplemented
1301        other = implicit_convert_to_die(other)
1302        return other.binary_operator(self, operator.rshift)
1303
1304    def __and__(self, other) -> 'Die':
1305        if isinstance(other, icepool.AgainExpression):
1306            return NotImplemented
1307        other = implicit_convert_to_die(other)
1308        return self.binary_operator(other, operator.and_)
1309
1310    def __rand__(self, other) -> 'Die':
1311        if isinstance(other, icepool.AgainExpression):
1312            return NotImplemented
1313        other = implicit_convert_to_die(other)
1314        return other.binary_operator(self, operator.and_)
1315
1316    def __or__(self, other) -> 'Die':
1317        if isinstance(other, icepool.AgainExpression):
1318            return NotImplemented
1319        other = implicit_convert_to_die(other)
1320        return self.binary_operator(other, operator.or_)
1321
1322    def __ror__(self, other) -> 'Die':
1323        if isinstance(other, icepool.AgainExpression):
1324            return NotImplemented
1325        other = implicit_convert_to_die(other)
1326        return other.binary_operator(self, operator.or_)
1327
1328    def __xor__(self, other) -> 'Die':
1329        if isinstance(other, icepool.AgainExpression):
1330            return NotImplemented
1331        other = implicit_convert_to_die(other)
1332        return self.binary_operator(other, operator.xor)
1333
1334    def __rxor__(self, other) -> 'Die':
1335        if isinstance(other, icepool.AgainExpression):
1336            return NotImplemented
1337        other = implicit_convert_to_die(other)
1338        return other.binary_operator(self, operator.xor)
1339
1340    # Comparators.
1341
1342    def __lt__(self, other) -> 'Die[bool]':
1343        if isinstance(other, icepool.AgainExpression):
1344            return NotImplemented
1345        other = implicit_convert_to_die(other)
1346        return self.binary_operator(other, operator.lt)
1347
1348    def __le__(self, other) -> 'Die[bool]':
1349        if isinstance(other, icepool.AgainExpression):
1350            return NotImplemented
1351        other = implicit_convert_to_die(other)
1352        return self.binary_operator(other, operator.le)
1353
1354    def __ge__(self, other) -> 'Die[bool]':
1355        if isinstance(other, icepool.AgainExpression):
1356            return NotImplemented
1357        other = implicit_convert_to_die(other)
1358        return self.binary_operator(other, operator.ge)
1359
1360    def __gt__(self, other) -> 'Die[bool]':
1361        if isinstance(other, icepool.AgainExpression):
1362            return NotImplemented
1363        other = implicit_convert_to_die(other)
1364        return self.binary_operator(other, operator.gt)
1365
1366    # Equality operators. These produce a `DieWithTruth`.
1367
1368    # The result has a truth value, but is not a bool.
1369    def __eq__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
1370        if isinstance(other, icepool.AgainExpression):
1371            return NotImplemented
1372        other_die: Die = implicit_convert_to_die(other)
1373
1374        def data_callback() -> Counts[bool]:
1375            return self.binary_operator(other_die, operator.eq)._data
1376
1377        def truth_value_callback() -> bool:
1378            return self.equals(other)
1379
1380        return icepool.DieWithTruth(data_callback, truth_value_callback)
1381
1382    # The result has a truth value, but is not a bool.
1383    def __ne__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
1384        if isinstance(other, icepool.AgainExpression):
1385            return NotImplemented
1386        other_die: Die = implicit_convert_to_die(other)
1387
1388        def data_callback() -> Counts[bool]:
1389            return self.binary_operator(other_die, operator.ne)._data
1390
1391        def truth_value_callback() -> bool:
1392            return not self.equals(other)
1393
1394        return icepool.DieWithTruth(data_callback, truth_value_callback)
1395
1396    def cmp(self, other) -> 'Die[int]':
1397        """A `Die` with outcomes 1, -1, and 0.
1398
1399        The quantity of 1 is the quantity of `True` in `self > other`, that of
1400        -1 is the quantity of `True` in `self < other`, and 0 gets the remainder.
1401        """
1402        other = implicit_convert_to_die(other)
1403
1404        data = {}
1405
1406        lt = self < other
1407        if True in lt:
1408            data[-1] = lt[True]
1409        eq = self == other
1410        if True in eq:
1411            data[0] = eq[True]
1412        gt = self > other
1413        if True in gt:
1414            data[1] = gt[True]
1415
1416        return Die(data)
1417
1418    @staticmethod
1419    def _sign(x) -> int:
1420        z = Die._zero(x)
1421        if x > z:
1422            return 1
1423        elif x < z:
1424            return -1
1425        else:
1426            return 0
1427
1428    def sign(self) -> 'Die[int]':
1429        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.
1430
1431        Note that for `float`s, +0.0, -0.0, and nan all become 0.
1432        """
1433        return self.unary_operator(Die._sign)
1434
1435    # Equality and hashing.
1436
1437    def __bool__(self) -> bool:
1438        raise TypeError(
1439            'A `Die` only has a truth value if it is the result of == or !=.\n'
1440            'This could result from trying to use a die in an if-statement,\n'
1441            'in which case you should use `die.if_else()` instead.\n'
1442            'Or it could result from trying to use a `Die` inside a tuple or vector outcome,\n'
1443            'in which case you should use `tupleize()` or `vectorize()`.')
1444
1445    @cached_property
1446    def hash_key(self) -> tuple:
1447        """A tuple that uniquely (as `equals()`) identifies this die.
1448
1449        Apart from being hashable and totally orderable, this is not guaranteed
1450        to be in any particular format or have any other properties.
1451        """
1452        return Die, tuple(self.items())
1453
1454    __hash__ = MaybeHashKeyed.__hash__
1455
1456    def equals(self, other, *, simplify: bool = False) -> bool:
1457        """`True` iff both dice have the same outcomes and quantities.
1458
1459        This is `False` if `other` is not a `Die`, even if it would convert
1460        to an equal `Die`.
1461
1462        Truth value does NOT matter.
1463
1464        If one `Die` has a zero-quantity outcome and the other `Die` does not
1465        contain that outcome, they are treated as unequal by this function.
1466
1467        The `==` and `!=` operators have a dual purpose; they return a `Die`
1468        with a truth value determined by this method.
1469        Only dice returned by these methods have a truth value. The data of
1470        these dice is lazily evaluated since the caller may only be interested
1471        in the `Die` value or the truth value.
1472
1473        Args:
1474            simplify: If `True`, the dice will be simplified before comparing.
1475                Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
1476        """
1477        if self is other:
1478            return True
1479
1480        if not isinstance(other, Die):
1481            return False
1482
1483        if simplify:
1484            return self.simplify().hash_key == other.simplify().hash_key
1485        else:
1486            return self.hash_key == other.hash_key
1487
1488    # Strings.
1489
1490    def __repr__(self) -> str:
1491        items_string = ', '.join(f'{repr(outcome)}: {weight}'
1492                                 for outcome, weight in self.items())
1493        return type(self).__qualname__ + '({' + items_string + '})'

Sampling with replacement. Quantities represent weights.

Dice are immutable. Methods do not modify the Die in-place; rather they return a Die representing the result.

It's also possible to have "empty" dice with no outcomes at all, though these have little use other than being sentinel values.

def unary_operator( self: Die[+T_co], op: Callable[..., ~U], *args, **kwargs) -> Die[~U]:
182    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
183                       **kwargs) -> 'icepool.Die[U]':
184        """Performs the unary operation on the outcomes.
185
186        This is used for the standard unary operators
187        `-, +, abs, ~, round, trunc, floor, ceil`
188        as well as the additional methods
189        `zero, bool`.
190
191        This is NOT used for the `[]` operator; when used directly, this is
192        interpreted as a `Mapping` operation and returns the count corresponding
193        to a given outcome. See `marginals()` for applying the `[]` operator to
194        outcomes.
195
196        Returns:
197            A `Die` representing the result.
198
199        Raises:
200            ValueError: If tuples are of mismatched length.
201        """
202        return self._unary_operator(op, *args, **kwargs)

Performs the unary operation on the outcomes.

This is used for the standard unary operators -, +, abs, ~, round, trunc, floor, ceil as well as the additional methods zero, bool.

This is NOT used for the [] operator; when used directly, this is interpreted as a Mapping operation and returns the count corresponding to a given outcome. See marginals() for applying the [] operator to outcomes.

Returns:

A Die representing the result.

Raises:
  • ValueError: If tuples are of mismatched length.
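Because the operation need not be injective, distinct outcomes can collide after mapping, and their quantities merge. A minimal sketch of that behavior, assuming a die represented as a plain `{outcome: quantity}` dict (this is not the actual `Die` internals):

```python
# Sketch: apply a unary operation to each outcome of a die represented
# as a plain {outcome: quantity} dict.
def unary_operator(die, op, *args, **kwargs):
    data = {}
    for outcome, quantity in die.items():
        new_outcome = op(outcome, *args, **kwargs)
        # Distinct outcomes may map to the same result,
        # so quantities are accumulated rather than overwritten.
        data[new_outcome] = data.get(new_outcome, 0) + quantity
    return data

d3_minus_2 = {-1: 1, 0: 1, 1: 1}  # outcomes of d3 - 2
assert unary_operator(d3_minus_2, abs) == {0: 1, 1: 2}
```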
def binary_operator( self, other: Die, op: Callable[..., ~U], *args, **kwargs) -> Die[~U]:
204    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
205                        **kwargs) -> 'Die[U]':
206        """Performs the operation on pairs of outcomes.
207
208        By the time this is called, the other operand has already been
209        converted to a `Die`.
210
211        This is used for the standard binary operators
212        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
213        and the standard binary comparators
214        `<, <=, >=, >, ==, !=, cmp`.
215
216        `==` and `!=` additionally set the truth value of the `Die` according to
217        whether the dice themselves are the same or not.
218
219        The `@` operator does NOT use this method directly.
220        It rolls the left `Die`, which must have integer outcomes,
221        then rolls the right `Die` that many times and sums the outcomes.
222
223        Returns:
224            A `Die` representing the result.
225
226        Raises:
227            ValueError: If tuples are of mismatched length within one of the
228                dice or between the dice.
229        """
230        data: MutableMapping[Any, int] = defaultdict(int)
231        for (outcome_self,
232             quantity_self), (outcome_other,
233                              quantity_other) in itertools.product(
234                                  self.items(), other.items()):
235            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
236            data[new_outcome] += quantity_self * quantity_other
237        return self._new_type(data)

Performs the operation on pairs of outcomes.

By the time this is called, the other operand has already been converted to a Die.

This is used for the standard binary operators +, -, *, /, //, %, **, <<, >>, &, |, ^ and the standard binary comparators <, <=, >=, >, ==, !=, cmp.

== and != additionally set the truth value of the Die according to whether the dice themselves are the same or not.

The @ operator does NOT use this method directly. It rolls the left Die, which must have integer outcomes, then rolls the right Die that many times and sums the outcomes.

Returns:

A Die representing the result.

Raises:
  • ValueError: If tuples are of mismatched length within one of the dice or between the dice.
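The cross-product step above can be sketched with plain `{outcome: quantity}` dicts; each pair of outcomes contributes the product of its quantities, so the result's denominator is the product of the operands' denominators. (A sketch only; the real method builds its result via `_new_type`.)

```python
import itertools
import operator
from collections import defaultdict

# Sketch of the pairwise operation described above,
# over plain {outcome: quantity} dicts.
def binary_operator(a, b, op):
    data = defaultdict(int)
    for (outcome_a, q_a), (outcome_b, q_b) in itertools.product(
            a.items(), b.items()):
        # Each pair of outcomes contributes the product of quantities.
        data[op(outcome_a, outcome_b)] += q_a * q_b
    return dict(data)

d2 = {1: 1, 2: 1}
result = binary_operator(d2, d2, operator.add)
assert result == {2: 1, 3: 2, 4: 1}
assert sum(result.values()) == 4  # denominators multiply: 2 * 2
```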
def keys(self) -> CountsKeysView[+T_co]:
241    def keys(self) -> CountsKeysView[T_co]:
242        return self._data.keys()

The outcomes within the population in sorted order.

def values(self) -> CountsValuesView:
244    def values(self) -> CountsValuesView:
245        return self._data.values()

The quantities within the population in outcome order.

def items(self) -> CountsItemsView[+T_co]:
247    def items(self) -> CountsItemsView[T_co]:
248        return self._data.items()

The (outcome, quantity)s of the population in sorted order.

def simplify(self) -> Die[+T_co]:
265    def simplify(self) -> 'Die[T_co]':
266        """Divides all quantities by their greatest common divisor."""
267        return icepool.Die(self._data.simplify())

Divides all quantities by their greatest common divisor.
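As a plain-dict sketch (not the actual `Counts`-based internals), this divides every quantity by the GCD of all quantities, leaving the distribution itself unchanged:

```python
from functools import reduce
from math import gcd

# Sketch of simplify() over a plain {outcome: quantity} dict:
# divide all quantities by their greatest common divisor.
def simplify(die):
    common = reduce(gcd, die.values())
    return {outcome: q // common for outcome, q in die.items()}

# A 2:2 coin simplifies to a 1:1 coin: same distribution, smaller denominator.
assert simplify({'heads': 2, 'tails': 2}) == {'heads': 1, 'tails': 1}
```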

def reroll( self, which: Union[Callable[..., bool], Collection[+T_co], NoneType] = None, /, *, star: bool | None = None, depth: Union[int, Literal['inf']]) -> Die[+T_co]:
271    def reroll(self,
272               which: Callable[..., bool] | Collection[T_co] | None = None,
273               /,
274               *,
275               star: bool | None = None,
276               depth: int | Literal['inf']) -> 'Die[T_co]':
277        """Rerolls the given outcomes.
278
279        Args:
280            which: Selects which outcomes to reroll. Options:
281                * A collection of outcomes to reroll.
282                * A callable that takes an outcome and returns `True` if it
283                    should be rerolled.
284                * If not provided, the min outcome will be rerolled.
285            star: Whether outcomes should be unpacked into separate arguments
286                before sending them to a callable `which`.
287                If not provided, this will be guessed based on the function
288                signature.
289            depth: The maximum number of times to reroll.
290                If `'inf'`, rerolls an unlimited number of times.
291
292        Returns:
293            A `Die` representing the reroll.
294            If the reroll would never terminate, the result has no outcomes.
295        """
296
297        if which is None:
298            outcome_set = {self.min_outcome()}
299        else:
300            outcome_set = self._select_outcomes(which, star)
301
302        if depth == 'inf':
303            data = {
304                outcome: quantity
305                for outcome, quantity in self.items()
306                if outcome not in outcome_set
307            }
308        elif depth < 0:
309            raise ValueError('reroll depth cannot be negative.')
310        else:
311            total_reroll_quantity = sum(quantity
312                                        for outcome, quantity in self.items()
313                                        if outcome in outcome_set)
314            total_stop_quantity = self.denominator() - total_reroll_quantity
315            rerollable_factor = total_reroll_quantity**depth
316            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
317                           * total_reroll_quantity) // total_stop_quantity
318            data = {
319                outcome: (rerollable_factor *
320                          quantity if outcome in outcome_set else stop_factor *
321                          quantity)
322                for outcome, quantity in self.items()
323            }
324        return icepool.Die(data)

Rerolls the given outcomes.

Arguments:
  • which: Selects which outcomes to reroll. Options:
    • A collection of outcomes to reroll.
    • A callable that takes an outcome and returns True if it should be rerolled.
    • If not provided, the min outcome will be rerolled.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable which. If not provided, this will be guessed based on the function signature.
  • depth: The maximum number of times to reroll. If 'inf', rerolls an unlimited number of times.
Returns:

A Die representing the reroll. If the reroll would never terminate, the result has no outcomes.
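For a finite `depth`, the source above puts every outcome over a common denominator of `D**(depth + 1)`, where `D` is the original denominator and `r` is the total quantity of rerollable outcomes: an outcome that is still rerollable after the last permitted reroll keeps a factor of `r**depth`, and the stopping outcomes share the remaining `(D**(depth + 1) - r**(depth + 1)) / (D - r)`. A plain-dict sketch of the same arithmetic:

```python
# Sketch of finite-depth reroll weighting over plain {outcome: quantity}
# dicts, mirroring the factor arithmetic in the source above.
def reroll(die, outcome_set, depth):
    D = sum(die.values())                                   # denominator
    r = sum(q for o, q in die.items() if o in outcome_set)  # rerollable part
    rerollable_factor = r**depth
    # Exact division: D**(depth+1) - r**(depth+1) is divisible by D - r.
    stop_factor = (D**(depth + 1) - r**(depth + 1)) // (D - r)
    return {
        o: (rerollable_factor if o in outcome_set else stop_factor) * q
        for o, q in die.items()
    }

d4 = {1: 1, 2: 1, 3: 1, 4: 1}
result = reroll(d4, {1}, depth=1)
# Outcome 1 survives only by rolling a 1 twice: 1 chance in 4**2 = 16.
assert result == {1: 1, 2: 5, 3: 5, 4: 5}
assert sum(result.values()) == 16
```

With `depth=0` both factors reduce to 1 and the die is returned unchanged.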

def filter( self, which: Union[Callable[..., bool], Collection[+T_co]], /, *, star: bool | None = None, depth: Union[int, Literal['inf']]) -> Die[+T_co]:
326    def filter(self,
327               which: Callable[..., bool] | Collection[T_co],
328               /,
329               *,
330               star: bool | None = None,
331               depth: int | Literal['inf']) -> 'Die[T_co]':
332        """Rerolls until getting one of the given outcomes.
333
334        Essentially the complement of `reroll()`.
335
336        Args:
337            which: Selects which outcomes to reroll until. Options:
338                * A callable that takes an outcome and returns `True` if it
339                    should be accepted.
340                * A collection of outcomes to reroll until.
341            star: Whether outcomes should be unpacked into separate arguments
342                before sending them to a callable `which`.
343                If not provided, this will be guessed based on the function
344                signature.
345            depth: The maximum number of times to reroll.
346                If `'inf'`, rerolls an unlimited number of times.
347
348        Returns:
349            A `Die` representing the reroll.
350            If the reroll would never terminate, the result has no outcomes.
351        """
352
353        if callable(which):
354            if star is None:
355                star = infer_star(which)
356            if star:
357
358                not_outcomes = {
359                    outcome
360                    for outcome in self.outcomes()
361                    if not which(*outcome)  # type: ignore
362                }
363            else:
364                not_outcomes = {
365                    outcome
366                    for outcome in self.outcomes() if not which(outcome)
367                }
368        else:
369            not_outcomes = {
370                not_outcome
371                for not_outcome in self.outcomes() if not_outcome not in which
372            }
373        return self.reroll(not_outcomes, depth=depth)

Rerolls until getting one of the given outcomes.

Essentially the complement of reroll().

Arguments:
  • which: Selects which outcomes to reroll until. Options:
    • A callable that takes an outcome and returns True if it should be accepted.
    • A collection of outcomes to reroll until.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable which. If not provided, this will be guessed based on the function signature.
  • depth: The maximum number of times to reroll. If 'inf', rerolls an unlimited number of times.
Returns:

A Die representing the reroll. If the reroll would never terminate, the result has no outcomes.

def truncate( self, min_outcome=None, max_outcome=None) -> Die[+T_co]:
375    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
376        """Truncates the outcomes of this `Die` to the given range.
377
378        The endpoints are included in the result if applicable.
379        If one of the arguments is not provided, that side will not be truncated.
380
381        This effectively rerolls outcomes outside the given range.
382        If instead you want to replace those outcomes with the nearest endpoint,
383        use `clip()`.
384
385        Not to be confused with `trunc(die)`, which performs integer truncation
386        on each outcome.
387        """
388        if min_outcome is not None:
389            start = bisect.bisect_left(self.outcomes(), min_outcome)
390        else:
391            start = None
392        if max_outcome is not None:
393            stop = bisect.bisect_right(self.outcomes(), max_outcome)
394        else:
395            stop = None
396        data = {k: v for k, v in self.items()[start:stop]}
397        return icepool.Die(data)

Truncates the outcomes of this Die to the given range.

The endpoints are included in the result if applicable. If one of the arguments is not provided, that side will not be truncated.

This effectively rerolls outcomes outside the given range. If instead you want to replace those outcomes with the nearest endpoint, use clip().

Not to be confused with trunc(die), which performs integer truncation on each outcome.
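A plain-dict sketch of the distinction: `truncate()` drops (rerolls) everything outside the range, shrinking the denominator, rather than moving quantity to the endpoints.

```python
# Sketch: keep only outcomes inside [min_outcome, max_outcome],
# over a plain {outcome: quantity} dict.
def truncate(die, min_outcome=None, max_outcome=None):
    return {
        o: q
        for o, q in die.items()
        if (min_outcome is None or o >= min_outcome)
        and (max_outcome is None or o <= max_outcome)
    }

d6 = {o: 1 for o in range(1, 7)}
# Dropping 1 and 6 leaves a flat die over 2..5 with denominator 4.
assert truncate(d6, min_outcome=2, max_outcome=5) == {2: 1, 3: 1, 4: 1, 5: 1}
```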

def clip( self, min_outcome=None, max_outcome=None) -> Die[+T_co]:
399    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
400        """Clips the outcomes of this `Die` to the given values.
401
402        The endpoints are included in the result if applicable.
403        If one of the arguments is not provided, that side will not be clipped.
404
405        This is not the same as rerolling outcomes beyond this range;
406        the outcome is simply adjusted to fit within the range.
407        This will typically cause some quantity to bunch up at the endpoint(s).
408        If you want to reroll outcomes beyond this range, use `truncate()`.
409        """
410        data: MutableMapping[Any, int] = defaultdict(int)
411        for outcome, quantity in self.items():
412            if min_outcome is not None and outcome <= min_outcome:
413                data[min_outcome] += quantity
414            elif max_outcome is not None and outcome >= max_outcome:
415                data[max_outcome] += quantity
416            else:
417                data[outcome] += quantity
418        return icepool.Die(data)

Clips the outcomes of this Die to the given values.

The endpoints are included in the result if applicable. If one of the arguments is not provided, that side will not be clipped.

This is not the same as rerolling outcomes beyond this range; the outcome is simply adjusted to fit within the range. This will typically cause some quantity to bunch up at the endpoint(s). If you want to reroll outcomes beyond this range, use truncate().
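Sketched the same way, `clip()` keeps the denominator intact and lets quantity bunch up at the endpoints:

```python
from collections import defaultdict

# Sketch: move out-of-range quantity to the nearest endpoint,
# over a plain {outcome: quantity} dict. The denominator is unchanged.
def clip(die, min_outcome=None, max_outcome=None):
    data = defaultdict(int)
    for o, q in die.items():
        if min_outcome is not None and o < min_outcome:
            data[min_outcome] += q
        elif max_outcome is not None and o > max_outcome:
            data[max_outcome] += q
        else:
            data[o] += q
    return dict(data)

d6 = {o: 1 for o in range(1, 7)}
# Quantity bunches at the endpoints instead of being rerolled.
assert clip(d6, 2, 5) == {2: 2, 3: 1, 4: 1, 5: 2}
assert sum(clip(d6, 2, 5).values()) == 6
```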

def map( self, repl: Union[Callable[..., Union[~U, Die[~U], RerollType, icepool.population.again.AgainExpression]], Mapping[+T_co, Union[~U, Die[~U], RerollType, icepool.population.again.AgainExpression]]], /, *extra_args, star: bool | None = None, repeat: Union[int, Literal['inf']] = 1, time_limit: Union[int, Literal['inf'], NoneType] = None, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~U, Die[~U], RerollType, NoneType] = None, **kwargs) -> Die[~U]:
448    def map(
449            self,
450            repl:
451        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
452            /,
453            *extra_args,
454            star: bool | None = None,
455            repeat: int | Literal['inf'] = 1,
456            time_limit: int | Literal['inf'] | None = None,
457            again_count: int | None = None,
458            again_depth: int | None = None,
459            again_end: 'U | Die[U] | icepool.RerollType | None' = None,
460            **kwargs) -> 'Die[U]':
461        """Maps outcomes of the `Die` to other outcomes.
462
463        This is also useful for representing processes.
464
465        As `icepool.map(repl, self, ...)`.
466        """
467        return icepool.map(repl,
468                           self,
469                           *extra_args,
470                           star=star,
471                           repeat=repeat,
472                           time_limit=time_limit,
473                           again_count=again_count,
474                           again_depth=again_depth,
475                           again_end=again_end,
476                           **kwargs)

Maps outcomes of the Die to other outcomes.

This is also useful for representing processes.

As icepool.map(repl, self, ...).
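For the simplest case (a plain callable, `repeat=1`, no `Again`), mapping reduces to a unary transform with quantity merging. A plain-dict sketch; the real `icepool.map` additionally handles dice-valued results, `Reroll`, `Again`, and repeated application:

```python
from collections import defaultdict

# Sketch of a single application of map with a plain callable,
# over a {outcome: quantity} dict.
def map_die(die, repl):
    data = defaultdict(int)
    for outcome, quantity in die.items():
        data[repl(outcome)] += quantity
    return dict(data)

d6 = {o: 1 for o in range(1, 7)}
# e.g. count a "hit" on a 5 or higher.
assert map_die(d6, lambda o: o >= 5) == {False: 4, True: 2}
```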

def map_and_time( self, repl: Union[Callable[..., Union[+T_co, Die[+T_co], RerollType]], Mapping[+T_co, Union[+T_co, Die[+T_co], RerollType]]], /, *extra_args, star: bool | None = None, time_limit: int, **kwargs) -> Die[tuple[+T_co, int]]:
478    def map_and_time(
479            self,
480            repl:
481        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
482            /,
483            *extra_args,
484            star: bool | None = None,
485            time_limit: int,
486            **kwargs) -> 'Die[tuple[T_co, int]]':
487        """Repeatedly map outcomes of the state to other outcomes, while also
488        counting timesteps.
489
490        This is useful for representing processes.
491
492        As `map_and_time(repl, self, ...)`.
493        """
494        return icepool.map_and_time(repl,
495                                    self,
496                                    *extra_args,
497                                    star=star,
498                                    time_limit=time_limit,
499                                    **kwargs)

Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

As icepool.map_and_time(repl, self, ...).

def time_to_sum( self: Die[int], target: int, /, max_time: int, dnf: int | RerollType | None = None) -> Die[int]:
501    def time_to_sum(self: 'Die[int]',
502                    target: int,
503                    /,
504                    max_time: int,
505                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
506        """The number of rolls until the cumulative sum is greater than or equal to the target.
507        
508        Args:
509            target: The number to stop at once reached.
510            max_time: The maximum number of rolls to run.
511                If the sum is not reached, the outcome is determined by `dnf`.
512            dnf: What time to assign in cases where the target was not reached
513                in `max_time`. If not provided, this is set to `max_time`.
514                `dnf=icepool.Reroll` will remove this case from the result,
515                effectively rerolling it.
516        """
517        if target <= 0:
518            return Die([0])
519
520        if dnf is None:
521            dnf = max_time
522
523        def step(total, roll):
524            return min(total + roll, target)
525
526        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
527            step, self, time_limit=max_time)
528
529        def get_time(total, time):
530            if total < target:
531                return dnf
532            else:
533                return time
534
535        return result.map(get_time)

The number of rolls until the cumulative sum is greater than or equal to the target.

Arguments:
  • target: The number to stop at once reached.
  • max_time: The maximum number of rolls to run. If the sum is not reached, the outcome is determined by dnf.
  • dnf: What time to assign in cases where the target was not reached in max_time. If not provided, this is set to max_time. dnf=icepool.Reroll will remove this case from the result, effectively rerolling it.
def mean_time_to_sum( self: Die[int], target: int, /) -> fractions.Fraction:
541    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
542        """The mean number of rolls until the cumulative sum is greater than or equal to the target.
543
544        Args:
545            target: The target sum.
546
547        Raises:
548            ValueError: If `self` has negative outcomes.
549            ZeroDivisionError: If `self.mean() == 0`.
550        """
551        target = max(target, 0)
552
553        if target < len(self._mean_time_to_sum_cache):
554            return self._mean_time_to_sum_cache[target]
555
556        if self.min_outcome() < 0:
557            raise ValueError(
558                'mean_time_to_sum does not handle negative outcomes.')
559        time_per_effect = Fraction(self.denominator(),
560                                   self.denominator() - self.quantity(0))
561
562        for i in range(len(self._mean_time_to_sum_cache), target + 1):
563            result = time_per_effect + self.reroll([
564                0
565            ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
566            self._mean_time_to_sum_cache.append(result)
567
568        return result

The mean number of rolls until the cumulative sum is greater than or equal to the target.

Arguments:
  • target: The target sum.
Raises:
  • ValueError: If self has negative outcomes.
  • ZeroDivisionError: If self.mean() == 0.
def explode( self, which: Union[Callable[..., bool], Collection[+T_co], NoneType] = None, /, *, star: bool | None = None, depth: int = 9, end=None) -> Die[+T_co]:
570    def explode(self,
571                which: Collection[T_co] | Callable[..., bool] | None = None,
572                /,
573                *,
574                star: bool | None = None,
575                depth: int = 9,
576                end=None) -> 'Die[T_co]':
577        """Causes outcomes to be rolled again and added to the total.
578
579        Args:
580            which: Which outcomes to explode. Options:
581                * A collection of outcomes to explode.
582                * A callable that takes an outcome and returns `True` if it
583                    should be exploded.
584                * If not supplied, the max outcome will explode.
585            star: Whether outcomes should be unpacked into separate arguments
586                before sending them to a callable `which`.
587                If not provided, this will be guessed based on the function
588                signature.
589            depth: The maximum number of additional dice to roll, not counting
590                the initial roll.
591                If not supplied, a default value will be used.
592            end: Once `depth` is reached, further explosions will be treated
593                as this value. By default, a zero value will be used.
594                `icepool.Reroll` will make one extra final roll, rerolling until
595                a non-exploding outcome is reached.
596        """
597
598        if which is None:
599            outcome_set = {self.max_outcome()}
600        else:
601            outcome_set = self._select_outcomes(which, star)
602
603        if depth < 0:
604            raise ValueError('depth cannot be negative.')
605        elif depth == 0:
606            return self
607
608        def map_final(outcome):
609            if outcome in outcome_set:
610                return outcome + icepool.Again
611            else:
612                return outcome
613
614        return self.map(map_final, again_depth=depth, again_end=end)

Causes outcomes to be rolled again and added to the total.

Arguments:
  • which: Which outcomes to explode. Options:
    • A collection of outcomes to explode.
    • A callable that takes an outcome and returns True if it should be exploded.
    • If not supplied, the max outcome will explode.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable which. If not provided, this will be guessed based on the function signature.
  • depth: The maximum number of additional dice to roll, not counting the initial roll. If not supplied, a default value will be used.
  • end: Once depth is reached, further explosions will be treated as this value. By default, a zero value will be used. icepool.Reroll will make one extra final roll, rerolling until a non-exploding outcome is reached.
def if_else( self, outcome_if_true: Union[~U, Die[~U]], outcome_if_false: Union[~U, Die[~U]], *, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~U, Die[~U], RerollType, NoneType] = None) -> Die[~U]:
616    def if_else(
617        self,
618        outcome_if_true: U | 'Die[U]',
619        outcome_if_false: U | 'Die[U]',
620        *,
621        again_count: int | None = None,
622        again_depth: int | None = None,
623        again_end: 'U | Die[U] | icepool.RerollType | None' = None
624    ) -> 'Die[U]':
625        """Ternary conditional operator.
626
627        This replaces truthy outcomes with the first argument and falsy outcomes
628        with the second argument.
629
630        Args:
631            again_count, again_depth, again_end: Forwarded to the final die constructor.
632        """
633        return self.map(lambda x: bool(x)).map(
634            {
635                True: outcome_if_true,
636                False: outcome_if_false
637            },
638            again_count=again_count,
639            again_depth=again_depth,
640            again_end=again_end)

Ternary conditional operator.

This replaces truthy outcomes with the first argument and falsy outcomes with the second argument.

Arguments:
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
def is_in(self, target: Container[+T_co], /) -> Die[bool]:
642    def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
643        """A die that returns True iff the roll of the die is contained in the target."""
644        return self.map(lambda x: x in target)

A die that returns True iff the roll of the die is contained in the target.

def count( self, rolls: int, target: Container[+T_co], /) -> Die[int]:
646    def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
647        """Roll this die a number of times and count how many are in the target."""
648        return rolls @ self.is_in(target)

Roll this die a number of times and count how many are in the target.

def sequence(self, rolls: int) -> Die[tuple[+T_co, ...]]:
719    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
720        """Possible sequences produced by rolling this die a number of times.
721        
722        This is extremely expensive computationally. If possible, use `reduce()`
723        instead; if you don't care about order, `Die.pool()` is better.
724        """
725        return icepool.cartesian_product(*(self for _ in range(rolls)),
726                                         outcome_type=tuple)  # type: ignore

Possible sequences produced by rolling this die a number of times.

This is extremely expensive computationally. If possible, use reduce() instead; if you don't care about order, Die.pool() is better.

def pool( self, rolls: Union[int, Sequence[int]] = 1, /) -> Pool[+T_co]:
728    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
729        """Creates a `Pool` from this `Die`.
730
731        You might subscript the pool immediately afterwards, e.g.
732        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
733        lowest of 5d6.
734
735        Args:
736            rolls: The number of copies of this `Die` to put in the pool.
737                Or, a sequence of one `int` per die acting as
738                `keep_tuple`. Note that `...` cannot be used in the
739                argument to this method, as the argument determines the size of
740                the pool.
741        """
742        if isinstance(rolls, int):
743            return icepool.Pool({self: rolls})
744        else:
745            pool_size = len(rolls)
746            # Haven't dealt with narrowing return type.
747            return icepool.Pool({self: pool_size})[rolls]  # type: ignore

Creates a Pool from this Die.

You might subscript the pool immediately afterwards, e.g. d6.pool(5)[-1, ..., 1] takes the difference between the highest and lowest of 5d6.

Arguments:
  • rolls: The number of copies of this Die to put in the pool. Or, a sequence of one int per die acting as keep_tuple. Note that ... cannot be used in the argument to this method, as the argument determines the size of the pool.
def keep( self, rolls: Union[int, Sequence[int]], index: Union[slice, Sequence[int | ellipsis], int, NoneType] = None, /) -> Die:
795    def keep(self,
796             rolls: int | Sequence[int],
797             index: slice | Sequence[int | EllipsisType] | int | None = None,
798             /) -> 'Die':
799        """Selects elements after drawing and sorting and sums them.
800
801        Args:
802            rolls: The number of dice to roll.
803            index: One of the following:
804            * An `int`. This will count only the roll at the specified index.
805            In this case, the result is a `Die` rather than a generator.
806            * A `slice`. The selected dice are counted once each.
807            * A sequence of `int`s with length equal to `rolls`.
808                Each roll is counted that many times, which could be multiple or
809                negative times.
810
811                Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
812                the `rolls` argument may be omitted.
813
814                `...` will be replaced with a number of zero counts in order
815                to make up any missing elements compared to `rolls`.
816                This number may be "negative" if more `int`s are provided than
817                `rolls`. Specifically:
818
819                * If `index` is shorter than `rolls`, `...`
820                    acts as enough zero counts to make up the difference.
821                    E.g. `(1, ..., 1)` on five dice would act as
822                    `(1, 0, 0, 0, 1)`.
823                * If `index` has length equal to `rolls`, `...` has no effect.
824                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
825                * If `index` is longer than `rolls` and `...` is on one side,
826                    elements will be dropped from `index` on the side with `...`.
827                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
828                * If `index` is longer than `rolls` and `...`
829                    is in the middle, the counts will be as the sum of two
830                    one-sided `...`.
831                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
832                    If `rolls` were 1, the -1 and 1 would cancel each other out.
833        """
834        if isinstance(rolls, int):
835            if index is None:
836                raise ValueError(
837                    'If the number of rolls is an integer, an index argument must be provided.'
838                )
839            if isinstance(index, int):
840                return self.pool(rolls).keep(index)
841            else:
842                return self.pool(rolls).keep(index).sum()  # type: ignore
843        else:
844            if index is not None:
845                raise ValueError('Only one index sequence can be given.')
846            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore

Selects elements after drawing and sorting and sums them.

Arguments:
  • rolls: The number of dice to roll.
  • index: One of the following:
    • An int. This will count only the roll at the specified index. In this case, the result is a Die rather than a generator.
    • A slice. The selected dice are counted once each.
    • A sequence of ints with length equal to rolls. Each roll is counted that many times, which could be multiple or negative times.

      Up to one ... (Ellipsis) may be used. If no ... is used, the rolls argument may be omitted.

      ... will be replaced with a number of zero counts in order to make up any missing elements compared to rolls. This number may be "negative" if more ints are provided than rolls. Specifically:

      • If index is shorter than rolls, ... acts as enough zero counts to make up the difference. E.g. (1, ..., 1) on five dice would act as (1, 0, 0, 0, 1).
      • If index has length equal to rolls, ... has no effect. E.g. (1, ..., 1) on two dice would act as (1, 1).
      • If index is longer than rolls and ... is on one side, elements will be dropped from index on the side with .... E.g. (..., 1, 2, 3) on two dice would act as (2, 3).
      • If index is longer than rolls and ... is in the middle, the counts will be as the sum of two one-sided .... E.g. (-1, ..., 1) acts like (-1, ...) plus (..., 1). If rolls were 1, the -1 and 1 would cancel each other out.

def lowest( self, rolls: int, /, keep: int | None = None, drop: int | None = None) -> Die:
848    def lowest(self,
849               rolls: int,
850               /,
851               keep: int | None = None,
852               drop: int | None = None) -> 'Die':
853        """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.
854
855        The outcomes should support addition and multiplication if `keep != 1`.
856
857        Args:
858            rolls: The number of dice to roll. All dice will have the same
859                outcomes as `self`.
860            keep, drop: These arguments work together:
861                * If neither are provided, the single lowest die will be taken.
862                * If only `keep` is provided, the `keep` lowest dice will be summed.
863                * If only `drop` is provided, the `drop` lowest dice will be dropped
864                    and the rest will be summed.
865                * If both are provided, `drop` lowest dice will be dropped, then
866                    the next `keep` lowest dice will be summed.
867
868        Returns:
869            A `Die` representing the probability distribution of the sum.
870        """
871        index = lowest_slice(keep, drop)
872        canonical = canonical_slice(index, rolls)
873        if canonical.start == 0 and canonical.stop == 1:
874            return self._lowest_single(rolls)
875        # Expression evaluators are difficult to type.
876        return self.pool(rolls)[index].sum()  # type: ignore

Roll several of this Die and return the lowest result, or the sum of some of the lowest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • rolls: The number of dice to roll. All dice will have the same outcomes as self.
  • keep, drop: These arguments work together:
    • If neither are provided, the single lowest die will be taken.
    • If only keep is provided, the keep lowest dice will be summed.
    • If only drop is provided, the drop lowest dice will be dropped and the rest will be summed.
    • If both are provided, drop lowest dice will be dropped, then the next keep lowest dice will be summed.
Returns:
  A Die representing the probability distribution of the sum.

def highest( self, rolls: int, /, keep: int | None = None, drop: int | None = None) -> Die[+T_co]:
886    def highest(self,
887                rolls: int,
888                /,
889                keep: int | None = None,
890                drop: int | None = None) -> 'Die[T_co]':
891        """Roll several of this `Die` and return the highest result, or the sum of some of the highest.
892
893        The outcomes should support addition and multiplication if `keep != 1`.
894
895        Args:
896            rolls: The number of dice to roll.
897            keep, drop: These arguments work together:
898                * If neither are provided, the single highest die will be taken.
899                * If only `keep` is provided, the `keep` highest dice will be summed.
900                * If only `drop` is provided, the `drop` highest dice will be dropped
901                    and the rest will be summed.
902                * If both are provided, `drop` highest dice will be dropped, then
903                    the next `keep` highest dice will be summed.
904
905        Returns:
906            A `Die` representing the probability distribution of the sum.
907        """
908        index = highest_slice(keep, drop)
909        canonical = canonical_slice(index, rolls)
910        if canonical.start == rolls - 1 and canonical.stop == rolls:
911            return self._highest_single(rolls)
912        # Expression evaluators are difficult to type.
913        return self.pool(rolls)[index].sum()  # type: ignore

Roll several of this Die and return the highest result, or the sum of some of the highest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • rolls: The number of dice to roll.
  • keep, drop: These arguments work together:
    • If neither are provided, the single highest die will be taken.
    • If only keep is provided, the keep highest dice will be summed.
    • If only drop is provided, the drop highest dice will be dropped and the rest will be summed.
    • If both are provided, drop highest dice will be dropped, then the next keep highest dice will be summed.
Returns:
  A Die representing the probability distribution of the sum.

def middle( self, rolls: int, /, keep: int = 1, *, tie: Literal['error', 'high', 'low'] = 'error') -> Die:
922    def middle(
923            self,
924            rolls: int,
925            /,
926            keep: int = 1,
927            *,
928            tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
929        """Roll several of this `Die` and sum the sorted results in the middle.
930
931        The outcomes should support addition and multiplication if `keep != 1`.
932
933        Args:
934            rolls: The number of dice to roll.
935            keep: The number of outcomes to sum. If this is greater than
936                `rolls`, all are kept.
937            tie: What to do if `keep` is odd but `rolls` is even,
938                or vice versa.
939                * 'error' (default): Raises `IndexError`.
940                * 'high': The higher outcome is taken.
941                * 'low': The lower outcome is taken.
942        """
943        # Expression evaluators are difficult to type.
944        return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore

Roll several of this Die and sum the sorted results in the middle.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • rolls: The number of dice to roll.
  • keep: The number of outcomes to sum. If this is greater than rolls, all are kept.
  • tie: What to do if keep is odd but rolls is even, or vice versa.
    • 'error' (default): Raises IndexError.
    • 'high': The higher outcome is taken.
    • 'low': The lower outcome is taken.
def map_to_pool( self, repl: Optional[Callable[..., Union[Sequence[Union[Die[~U], ~U]], Mapping[Die[~U], int], Mapping[~U, int], RerollType]]] = None, /, *extra_args: Outcome | Die | MultisetExpression, star: bool | None = None, **kwargs) -> MultisetExpression[~U]:
946    def map_to_pool(
947            self,
948            repl:
949        'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
950            /,
951            *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
952            star: bool | None = None,
953            **kwargs) -> 'icepool.MultisetExpression[U]':
954        """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.
955
956        As `icepool.map_to_pool(repl, self, ...)`.
957
958        If no argument is provided, the outcomes will be used to construct a
959        mixture of pools directly, similar to the inverse of `pool.expand()`.
960        Note that this is not particularly efficient since it does not make much
961        use of dynamic programming.
962        """
963        if repl is None:
964            repl = lambda x: x
965        return icepool.map_to_pool(repl,
966                                   self,
967                                   *extra_args,
968                                   star=star,
969                                   **kwargs)

EXPERIMENTAL: Maps outcomes of this Die to Pools, creating a MultisetGenerator.

As icepool.map_to_pool(repl, self, ...).

If no argument is provided, the outcomes will be used to construct a mixture of pools directly, similar to the inverse of pool.expand(). Note that this is not particularly efficient since it does not make much use of dynamic programming.

def explode_to_pool( self, rolls: int, which: Union[Callable[..., bool], Collection[+T_co], NoneType] = None, /, *, star: bool | None = None, depth: int = 9) -> MultisetExpression[+T_co]:
 971    def explode_to_pool(self,
 972                        rolls: int,
 973                        which: Collection[T_co] | Callable[..., bool]
 974                        | None = None,
 975                        /,
 976                        *,
 977                        star: bool | None = None,
 978                        depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
 979        """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.
 980        
 981        Args:
 982            rolls: The number of initial dice.
 983            which: Which outcomes to explode. Options:
 984                * A single outcome to explode.
 985                * A collection of outcomes to explode.
 986                * A callable that takes an outcome and returns `True` if it
 987                    should be exploded.
 988                * If not supplied, the max outcome will explode.
 989            star: Whether outcomes should be unpacked into separate arguments
 990                before sending them to a callable `which`.
 991                If not provided, this will be guessed based on the function
 992                signature.
 993            depth: The maximum depth of explosions for an individual die.
 994
 995        Returns:
 996            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
 997            that this is not technically a `Pool`, though it supports most of 
 998            the same operations.
 999        """
1000        if depth == 0:
1001            return self.pool(rolls)
1002        if which is None:
1003            explode_set = {self.max_outcome()}
1004        else:
1005            explode_set = self._select_outcomes(which, star)
1006        if not explode_set:
1007            return self.pool(rolls)
1008        explode: 'Die[T_co]'
1009        not_explode: 'Die[T_co]'
1010        explode, not_explode = self.split(explode_set)
1011
1012        single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
1013            int)
1014        for i in range(depth + 1):
1015            weight = explode.denominator()**i * self.denominator()**(
1016                depth - i) * not_explode.denominator()
1017            single_data[icepool.Vector((i, 1))] += weight
1018        single_data[icepool.Vector(
1019            (depth + 1, 0))] += explode.denominator()**(depth + 1)
1020
1021        single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
1022        count_die = rolls @ single_count_die
1023
1024        return count_die.map_to_pool(
1025            lambda x, nx: [explode] * x + [not_explode] * nx)

EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

Arguments:
  • rolls: The number of initial dice.
  • which: Which outcomes to explode. Options:
    • A single outcome to explode.
    • A collection of outcomes to explode.
    • A callable that takes an outcome and returns True if it should be exploded.
    • If not supplied, the max outcome will explode.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable which. If not provided, this will be guessed based on the function signature.
  • depth: The maximum depth of explosions for an individual die.
Returns:
  A MultisetGenerator representing the mixture of Pools. Note that this is not technically a Pool, though it supports most of the same operations.

def reroll_to_pool( self, rolls: int, which: Union[Callable[..., bool], Collection[+T_co]], /, max_rerolls: int, *, star: bool | None = None, mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random') -> MultisetExpression[+T_co]:
1027    def reroll_to_pool(
1028        self,
1029        rolls: int,
1030        which: Callable[..., bool] | Collection[T_co],
1031        /,
1032        max_rerolls: int,
1033        *,
1034        star: bool | None = None,
1035        mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
1036    ) -> 'icepool.MultisetExpression[T_co]':
1037        """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.
1038
1039        Each die can only be rerolled once (effectively `depth=1`), and no more
1040        than `max_rerolls` dice may be rerolled.
1041        
1042        Args:
1043            rolls: How many dice in the pool.
1044            which: Selects which outcomes are eligible to be rerolled. Options:
1045                * A collection of outcomes to reroll.
1046                * A callable that takes an outcome and returns `True` if it
1047                    could be rerolled.
1048            max_rerolls: The maximum number of dice to reroll. 
1049                Note that each die can only be rerolled once, so if the number 
1050                of eligible dice is less than this, the excess rerolls have no
1051                effect.
1052            star: Whether outcomes should be unpacked into separate arguments
1053                before sending them to a callable `which`.
1054                If not provided, this will be guessed based on the function
1055                signature.
1056            mode: How dice are selected for rerolling if there are more eligible
1057                dice than `max_rerolls`. Options:
1058                * `'random'` (default): Eligible dice will be chosen uniformly
1059                    at random.
1060                * `'lowest'`: The lowest eligible dice will be rerolled.
1061                * `'highest'`: The highest eligible dice will be rerolled.
1062                * `'drop'`: All dice that ended up on an outcome selected by 
1063                    `which` will be dropped. This includes both dice that rolled
1064                    into `which` initially and were not rerolled, and dice that
1065                    were rerolled but rolled into `which` again. This can be
1066                    considerably more efficient than the other modes.
1067
1068        Returns:
1069            A `MultisetGenerator` representing the mixture of `Pool`s. Note  
1070            that this is not technically a `Pool`, though it supports most of 
1071            the same operations.
1072        """
1073        rerollable_set = self._select_outcomes(which, star)
1074        if not rerollable_set:
1075            return self.pool(rolls)
1076
1077        rerollable_die: 'Die[T_co]'
1078        not_rerollable_die: 'Die[T_co]'
1079        rerollable_die, not_rerollable_die = self.split(rerollable_set)
1080        single_is_rerollable = icepool.coin(rerollable_die.denominator(),
1081                                            self.denominator())
1082        rerollable = rolls @ single_is_rerollable
1083
1084        def split(initial_rerollable: int) -> Die[tuple[int, int, int]]:
1085            """Computes the composition of the pool.
1086
1087            Returns:
1088                initial_rerollable: The number of dice that initially fell into
1089                    the rerollable set.
1090                rerolled_to_rerollable: The number of dice that were rerolled,
1091                    but fell into the rerollable set again.
1092                not_rerollable: The number of dice that ended up outside the
1093                    rerollable set, including both initial and rerolled dice.
1094                not_rerolled: The number of dice that were eligible for
1095                    rerolling but were not rerolled.
1096            """
1097            initial_not_rerollable = rolls - initial_rerollable
1098            rerolled = min(initial_rerollable, max_rerolls)
1099            not_rerolled = initial_rerollable - rerolled
1100
1101            def second_split(rerolled_to_rerollable):
1102                """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets."""
1103                rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable
1104                return icepool.tupleize(
1105                    initial_rerollable, rerolled_to_rerollable,
1106                    initial_not_rerollable + rerolled_to_not_rerollable,
1107                    not_rerolled)
1108
1109            return icepool.map(second_split,
1110                               rerolled @ single_is_rerollable,
1111                               star=False)
1112
1113        pool_composition = rerollable.map(split, star=False)
1114        denominator = self.denominator()**(rolls + min(rolls, max_rerolls))
1115        pool_composition = pool_composition.multiply_to_denominator(
1116            denominator)
1117
1118        def make_pool(initial_rerollable, rerolled_to_rerollable,
1119                      not_rerollable, not_rerolled):
1120            common = rerollable_die.pool(
1121                rerolled_to_rerollable) + not_rerollable_die.pool(
1122                    not_rerollable)
1123            match mode:
1124                case 'random':
1125                    return common + rerollable_die.pool(not_rerolled)
1126                case 'lowest':
1127                    return common + rerollable_die.pool(
1128                        initial_rerollable).highest(not_rerolled)
1129                case 'highest':
1130                    return common + rerollable_die.pool(
1131                        initial_rerollable).lowest(not_rerolled)
1132                case 'drop':
1133                    return not_rerollable_die.pool(not_rerollable)
1134                case _:
1135                    raise ValueError(
1136                        f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
1137                    )
1138
1139        return pool_composition.map_to_pool(make_pool, star=True)

EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

Each die can only be rerolled once (effectively depth=1), and no more than max_rerolls dice may be rerolled.

Arguments:
  • rolls: How many dice in the pool.
  • which: Selects which outcomes are eligible to be rerolled. Options:
    • A collection of outcomes to reroll.
    • A callable that takes an outcome and returns True if it could be rerolled.
  • max_rerolls: The maximum number of dice to reroll. Note that each die can only be rerolled once, so if the number of eligible dice is less than this, the excess rerolls have no effect.
  • star: Whether outcomes should be unpacked into separate arguments before being sent to which, if which is a callable. If not provided, this will be guessed based on the function signature.
  • mode: How dice are selected for rerolling if there are more eligible dice than max_rerolls. Options:
    • 'random' (default): Eligible dice will be chosen uniformly at random.
    • 'lowest': The lowest eligible dice will be rerolled.
    • 'highest': The highest eligible dice will be rerolled.
    • 'drop': All dice that ended up on an outcome selected by which will be dropped. This includes both dice that rolled into which initially and were not rerolled, and dice that were rerolled but rolled into which again. This can be considerably more efficient than the other modes.
Returns:

A MultisetGenerator representing the mixture of Pools. Note
that this is not technically a Pool, though it supports most of the same operations.
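For intuition, the 'lowest' mode can be checked by brute-force enumeration of one small case. This is a standalone sketch in plain Python (the helper name is hypothetical, not part of icepool): a pool of 2d6, rerolling 1s, with at most one shared reroll.

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

def limited_reroll_2d6(max_rerolls=1, eligible=lambda x: x == 1,
                       sides=6, rolls=2):
    """Distribution of the pool's sum when up to max_rerolls of the
    lowest eligible dice are each rerolled once ('lowest' mode, depth 1)."""
    dist = defaultdict(Fraction)
    for initial in product(range(1, sides + 1), repeat=rolls):
        p0 = Fraction(1, sides**rolls)
        # Sort so the lowest eligible dice come first; those are rerolled.
        ordered = sorted(initial, key=lambda x: (not eligible(x), x))
        n = min(max_rerolls, sum(1 for x in ordered if eligible(x)))
        kept = sum(ordered[n:])
        for reroll in product(range(1, sides + 1), repeat=n):
            dist[kept + sum(reroll)] += p0 * Fraction(1, sides**n)
    return dict(dist)
```

For example, a total of 2 requires rolling (1, 1) initially and then rerolling one die back into a 1, for probability 1/36 × 1/6 = 1/216.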

def abs(self) -> Die[+T_co]:
1152    def abs(self) -> 'Die[T_co]':
1153        return self.unary_operator(operator.abs)
def round(self, ndigits: int | None = None) -> Die:
1157    def round(self, ndigits: int | None = None) -> 'Die':
1158        return self.unary_operator(round, ndigits)
def stochastic_round(self, *, max_denominator: int | None = None) -> Die[int]:
1162    def stochastic_round(self,
1163                         *,
1164                         max_denominator: int | None = None) -> 'Die[int]':
1165        """Randomly rounds outcomes up or down to the nearest integer according to the two distances.
1166        
1167        Specifically, rounds `x` up with probability `x - floor(x)` and down
1168        otherwise.
1169
1170        Args:
1171            max_denominator: If provided, each rounding will be performed
1172                using `fractions.Fraction.limit_denominator(max_denominator)`.
1173                Otherwise, the rounding will be performed without
1174                `limit_denominator`.
1175        """
1176        return self.map(lambda x: icepool.stochastic_round(
1177            x, max_denominator=max_denominator))

Randomly rounds outcomes up or down to the nearest integer according to the two distances.

Specifically, rounds x up with probability x - floor(x) and down otherwise.

Arguments:
  • max_denominator: If provided, each rounding will be performed using fractions.Fraction.limit_denominator(max_denominator). Otherwise, the rounding will be performed without limit_denominator.
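The rounding rule can be modeled directly with the standard library. This is a standalone sketch of the distribution it produces (the helper name is hypothetical, not the icepool implementation):

```python
import math
from fractions import Fraction

def stochastic_round_dist(x: Fraction) -> dict:
    """Distribution of stochastically rounding x: rounds up with
    probability x - floor(x), down otherwise."""
    lo = math.floor(x)
    p_up = x - lo
    dist = {lo: 1 - p_up, lo + 1: p_up}
    # Drop impossible outcomes (e.g. when x is already an integer).
    return {k: p for k, p in dist.items() if p > 0}
```

For example, 9/4 rounds to 2 with probability 3/4 and to 3 with probability 1/4.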
def trunc(self) -> Die:
1179    def trunc(self) -> 'Die':
1180        return self.unary_operator(math.trunc)
def floor(self) -> Die:
1184    def floor(self) -> 'Die':
1185        return self.unary_operator(math.floor)
def ceil(self) -> Die:
1189    def ceil(self) -> 'Die':
1190        return self.unary_operator(math.ceil)
def cmp(self, other) -> Die[int]:
1396    def cmp(self, other) -> 'Die[int]':
1397        """A `Die` with outcomes 1, -1, and 0.
1398
1399        The quantities are equal to the positive outcome of `self > other`,
1400        `self < other`, and the remainder respectively.
1401        """
1402        other = implicit_convert_to_die(other)
1403
1404        data = {}
1405
1406        lt = self < other
1407        if True in lt:
1408            data[-1] = lt[True]
1409        eq = self == other
1410        if True in eq:
1411            data[0] = eq[True]
1412        gt = self > other
1413        if True in gt:
1414            data[1] = gt[True]
1415
1416        return Die(data)

A Die with outcomes 1, -1, and 0.

The quantities are equal to the positive outcome of self > other, self < other, and the remainder respectively.
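The three quantities can be cross-checked on plain {outcome: quantity} dicts. This is a standalone sketch (the helper is hypothetical) mirroring the pairwise comparison above:

```python
from itertools import product

def cmp_quantities(die_a: dict, die_b: dict) -> dict:
    """Quantities of -1, 0, and 1 when comparing two independent dice
    given as {outcome: quantity} mappings, mirroring Die.cmp."""
    data = {}
    for (a, qa), (b, qb) in product(die_a.items(), die_b.items()):
        key = (a > b) - (a < b)  # -1, 0, or 1
        data[key] = data.get(key, 0) + qa * qb
    return data

d6 = {n: 1 for n in range(1, 7)}
```

For two d6s, 15 of the 36 pairs favor each side and 6 are ties.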

def sign(self) -> Die[int]:
1428    def sign(self) -> 'Die[int]':
1429        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.
1430
1431        Note that for `float`s, +0.0, -0.0, and nan all become 0.
1432        """
1433        return self.unary_operator(Die._sign)

Outcomes become 1 if greater than zero(), -1 if less than zero(), and 0 otherwise.

Note that for floats, +0.0, -0.0, and nan all become 0.

class Population(abc.ABC, icepool.expand.Expandable[+T_co], typing.Mapping[typing.Any, int]):
 29class Population(ABC, Expandable[T_co], Mapping[Any, int]):
 30    """A mapping from outcomes to `int` quantities.
 31
 32    The outcomes of each instance must be hashable and totally orderable.
 33
 34    Subclasses include `Die` and `Deck`.
 35    """
 36
 37    # Abstract methods.
 38
 39    @property
 40    @abstractmethod
 41    def _new_type(self) -> type:
 42        """The type to use when constructing a new instance."""
 43
 44    @abstractmethod
 45    def keys(self) -> CountsKeysView[T_co]:
 46        """The outcomes within the population in sorted order."""
 47
 48    @abstractmethod
 49    def values(self) -> CountsValuesView:
 50        """The quantities within the population in outcome order."""
 51
 52    @abstractmethod
 53    def items(self) -> CountsItemsView[T_co]:
 54        """The (outcome, quantity)s of the population in sorted order."""
 55
 56    @property
 57    def _items_for_cartesian_product(self) -> Sequence[tuple[T_co, int]]:
 58        return self.items()
 59
 60    def _unary_operator(self, op: Callable, *args, **kwargs):
 61        data: MutableMapping[Any, int] = defaultdict(int)
 62        for outcome, quantity in self.items():
 63            new_outcome = op(outcome, *args, **kwargs)
 64            data[new_outcome] += quantity
 65        return self._new_type(data)
 66
 67    # Outcomes.
 68
 69    def outcomes(self) -> CountsKeysView[T_co]:
 70        """The outcomes of the mapping in ascending order.
 71
 72        These are also the `keys` of the mapping.
 73        Prefer to use the name `outcomes`.
 74        """
 75        return self.keys()
 76
 77    @cached_property
 78    def _common_outcome_length(self) -> int | None:
 79        result = None
 80        for outcome in self.outcomes():
 81            if isinstance(outcome, Mapping):
 82                return None
 83            elif isinstance(outcome, Sized):
 84                if result is None:
 85                    result = len(outcome)
 86                elif len(outcome) != result:
 87                    return None
 88        return result
 89
 90    def common_outcome_length(self) -> int | None:
 91        """The common length of all outcomes.
 92
 93        If outcomes have no lengths or different lengths, the result is `None`.
 94        """
 95        return self._common_outcome_length
 96
 97    def is_empty(self) -> bool:
 98        """`True` iff this population has no outcomes. """
 99        return len(self) == 0
100
101    def min_outcome(self) -> T_co:
102        """The least outcome."""
103        return self.outcomes()[0]
104
105    def max_outcome(self) -> T_co:
106        """The greatest outcome."""
107        return self.outcomes()[-1]
108
109    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
110                /) -> T_co | None:
111        """The nearest outcome in this population fitting the comparison.
112
113        Args:
114            comparison: The comparison which the result must fit. For example,
115                '<=' would find the greatest outcome that is not greater than
116                the argument.
117            outcome: The outcome to compare against.
118        
119        Returns:
120            The nearest outcome fitting the comparison, or `None` if there is
121            no such outcome.
122        """
123        match comparison:
124            case '<=':
125                if outcome in self:
126                    return outcome
127                index = bisect.bisect_right(self.outcomes(), outcome) - 1
128                if index < 0:
129                    return None
130                return self.outcomes()[index]
131            case '<':
132                index = bisect.bisect_left(self.outcomes(), outcome) - 1
133                if index < 0:
134                    return None
135                return self.outcomes()[index]
136            case '>=':
137                if outcome in self:
138                    return outcome
139                index = bisect.bisect_left(self.outcomes(), outcome)
140                if index >= len(self):
141                    return None
142                return self.outcomes()[index]
143            case '>':
144                index = bisect.bisect_right(self.outcomes(), outcome)
145                if index >= len(self):
146                    return None
147                return self.outcomes()[index]
148            case _:
149                raise ValueError(f'Invalid comparison {comparison}')
150
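The bisect logic above can be exercised on a plain sorted list. This standalone sketch mirrors the four comparisons without depending on `Population`:

```python
import bisect

def nearest(outcomes, comparison, outcome):
    """Nearest element of a sorted sequence fitting the comparison,
    or None if there is no such element; mirrors Population.nearest."""
    if comparison == '<=':
        index = bisect.bisect_right(outcomes, outcome) - 1
    elif comparison == '<':
        index = bisect.bisect_left(outcomes, outcome) - 1
    elif comparison == '>=':
        index = bisect.bisect_left(outcomes, outcome)
    elif comparison == '>':
        index = bisect.bisect_right(outcomes, outcome)
    else:
        raise ValueError(f'Invalid comparison {comparison}')
    if 0 <= index < len(outcomes):
        return outcomes[index]
    return None
```

Note that `bisect_right` versus `bisect_left` is what distinguishes the inclusive comparisons from the strict ones when the queried outcome is present.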
151    @staticmethod
152    def _zero(x):
153        return x * 0
154
155    def zero(self: C) -> C:
156        """Zeros all outcomes of this population.
157
158        This is done by multiplying all outcomes by `0`.
159
160        The result will have the same denominator.
161
162        Raises:
163            ValueError: If the zeros did not resolve to a single outcome.
164        """
165        result = self._unary_operator(Population._zero)
166        if len(result) != 1:
167            raise ValueError('zero() did not resolve to a single outcome.')
168        return result
169
170    def zero_outcome(self) -> T_co:
171        """A zero-outcome for this population.
172
173        E.g. `0` for a `Population` whose outcomes are `int`s.
174        """
175        return self.zero().outcomes()[0]
176
177    # Quantities.
178
179    @overload
180    def quantity(self, outcome: Hashable, /) -> int:
181        """The quantity of a single outcome."""
182
183    @overload
184    def quantity(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
185                 outcome: Hashable, /) -> int:
186        """The total quantity fitting a comparison to a single outcome."""
187
188    def quantity(self,
189                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
190                 | Hashable,
191                 outcome: Hashable | None = None,
192                 /) -> int:
193        """The quantity of a single outcome.
194
195        A comparison can be provided, in which case this returns the total
196        quantity fitting the comparison.
197        
198        Args:
199            comparison: The comparison to use. This can be omitted, in which
200                case it is treated as '=='.
201            outcome: The outcome to query.
202        """
203        if outcome is None:
204            outcome = comparison
205            comparison = '=='
206        else:
207            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
208                              comparison)
209
210        match comparison:
211            case '==':
212                return self.get(outcome, 0)
213            case '!=':
214                return self.denominator() - self.get(outcome, 0)
215            case '<=' | '<':
216                threshold = self.nearest(comparison, outcome)
217                if threshold is None:
218                    return 0
219                else:
220                    return self._cumulative_quantities[threshold]
221            case '>=':
222                return self.denominator() - self.quantity('<', outcome)
223            case '>':
224                return self.denominator() - self.quantity('<=', outcome)
225            case _:
226                raise ValueError(f'Invalid comparison {comparison}')
227
228    @overload
229    def quantities(self, /) -> CountsValuesView:
230        """All quantities in sorted order."""
231
232    @overload
233    def quantities(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
234                   /) -> Sequence[int]:
235        """The total quantities fitting the comparison for each outcome in sorted order.
236        
237        For example, '<=' gives the CDF.
238
239        Args:
240            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
241                May be omitted, in which case equality `'=='` is used.
245        """
246
247    def quantities(self,
248                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
249                   | None = None,
250                   /) -> CountsValuesView | Sequence[int]:
251        """The quantities of the mapping in sorted order.
252
253        For example, '<=' gives the CDF.
254
255        Args:
256            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
257                May be omitted, in which case equality `'=='` is used.
258        """
259        if comparison is None:
260            comparison = '=='
261
262        match comparison:
263            case '==':
264                return self.values()
265            case '<=':
266                return tuple(itertools.accumulate(self.values()))
267            case '>=':
268                return tuple(
269                    itertools.accumulate(self.values()[:-1],
270                                         operator.sub,
271                                         initial=self.denominator()))
272            case '!=':
273                return tuple(self.denominator() - q for q in self.values())
274            case '<':
275                return tuple(self.denominator() - q
276                             for q in self.quantities('>='))
277            case '>':
278                return tuple(self.denominator() - q
279                             for q in self.quantities('<='))
280            case _:
281                raise ValueError(f'Invalid comparison {comparison}')
282
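The comparison variants are related by simple complement identities against the denominator, which a small standalone example makes concrete:

```python
from itertools import accumulate

quantities = [1, 2, 3]          # quantities of outcomes 1, 2, 3 in sorted order
denominator = sum(quantities)   # 6

cdf = list(accumulate(quantities))                 # '<=' : [1, 3, 6]
ccdf = [denominator - q for q in [0] + cdf[:-1]]   # '>=' : [6, 5, 3]
lt = [denominator - q for q in ccdf]               # '<'  : [0, 1, 3]
gt = [denominator - q for q in cdf]                # '>'  : [5, 3, 0]
```

At each outcome, '<' and '>=' (and likewise '<=' and '>') sum to the denominator.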
283    @cached_property
284    def _cumulative_quantities(self) -> Mapping[T_co, int]:
285        result = {}
286        cdf = 0
287        for outcome, quantity in self.items():
288            cdf += quantity
289            result[outcome] = cdf
290        return result
291
292    @cached_property
293    def _denominator(self) -> int:
294        return sum(self.values())
295
296    def denominator(self) -> int:
297        """The sum of all quantities (e.g. weights or duplicates).
298
299        For the number of unique outcomes, use `len()`.
300        """
301        return self._denominator
302
303    def multiply_quantities(self: C, scale: int, /) -> C:
304        """Multiplies all quantities by an integer."""
305        if scale == 1:
306            return self
307        data = {
308            outcome: quantity * scale
309            for outcome, quantity in self.items()
310        }
311        return self._new_type(data)
312
313    def divide_quantities(self: C, divisor: int, /) -> C:
314        """Divides all quantities by an integer, rounding down.
315        
316        Resulting zero quantities are dropped.
317        """
318        if divisor == 0:
319            return self
320        data = {
321            outcome: quantity // divisor
322            for outcome, quantity in self.items() if quantity >= divisor
323        }
324        return self._new_type(data)
325
326    def modulo_quantities(self: C, divisor: int, /) -> C:
327        """Modulus of all quantities with an integer."""
328        data = {
329            outcome: quantity % divisor
330            for outcome, quantity in self.items()
331        }
332        return self._new_type(data)
333
334    def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C:
335        """Changes the denominator to a target number by changing the quantity of a specified outcome.
336        
337        Args:
338            `target`: The denominator of the result.
339            `outcome`: The outcome whose quantity will be adjusted.
340
341        Returns:
342            A `Population` like `self` but with the quantity of `outcome`
343            adjusted so that the overall denominator is equal to `target`.
344            If the quantity of `outcome` is reduced to zero, it is removed.
345
346        Raises:
347            `ValueError` if this would require the quantity of the specified
348            outcome to be negative.
349        """
350        adjustment = target - self.denominator()
351        data = {outcome: quantity for outcome, quantity in self.items()}
352        new_quantity = data.get(outcome, 0) + adjustment
353        if new_quantity > 0:
354            data[outcome] = new_quantity
355        elif new_quantity == 0:
356            del data[outcome]
357        else:
358            raise ValueError(
359                f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}'
360            )
361        return self._new_type(data)
362
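The adjustment logic can be sketched on a plain dict (hypothetical helper mirroring the method above):

```python
def pad_to_denominator(data: dict, target: int, outcome) -> dict:
    """Adjust the quantity of `outcome` so that quantities sum to
    `target`; mirrors Population.pad_to_denominator on a plain dict."""
    data = dict(data)
    new_quantity = data.get(outcome, 0) + target - sum(data.values())
    if new_quantity > 0:
        data[outcome] = new_quantity
    elif new_quantity == 0:
        data.pop(outcome, None)  # zero quantities are dropped
    else:
        raise ValueError('would require a negative quantity')
    return data
```

Padding can both add quantity (to reach a larger denominator) and remove it, as long as the outcome's quantity does not go negative.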
363    def multiply_to_denominator(self: C, denominator: int, /) -> C:
364        """Multiplies all quantities to reach the target denominator.
365        
366        Raises:
367            ValueError if this cannot be achieved using an integer scaling.
368        """
369        if denominator % self.denominator():
370            raise ValueError(
371                'Target denominator is not an integer factor of the current denominator.'
372            )
373        return self.multiply_quantities(denominator // self.denominator())
374
375    def append(self: C, outcome, quantity: int = 1, /) -> C:
376        """This population with an outcome appended.
377        
378        Args:
379            outcome: The outcome to append.
380            quantity: The quantity of the outcome to append. Can be negative,
381                which removes quantity (but not below zero).
382        """
383        data = Counter(self)
384        data[outcome] = max(data[outcome] + quantity, 0)
385        return self._new_type(data)
386
387    def remove(self: C, outcome, quantity: int | None = None, /) -> C:
388        """This population with an outcome removed.
389
390        Args:
391            outcome: The outcome to remove.
392            quantity: The quantity of the outcome to remove. If not set, all
393                quantity of that outcome is removed. Can be negative, which adds
394                quantity instead.
395        """
396        if quantity is None:
397            data = Counter(self)
398            data[outcome] = 0
399            return self._new_type(data)
400        else:
401            return self.append(outcome, -quantity)
402
403    # Probabilities.
404
405    @overload
406    def probability(self, outcome: Hashable, /, *,
407                    percent: Literal[False]) -> Fraction:
408        """The probability of a single outcome, or 0 if not present."""
409
410    @overload
411    def probability(self, outcome: Hashable, /, *,
412                    percent: Literal[True]) -> float:
413        """The probability of a single outcome, or 0.0 if not present."""
414
415    @overload
416    def probability(self, outcome: Hashable, /) -> Fraction:
417        """The probability of a single outcome, or 0 if not present."""
418
419    @overload
420    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
421                                              '>'], outcome: Hashable, /, *,
422                    percent: Literal[False]) -> Fraction:
423        """The total probability of outcomes fitting a comparison."""
424
425    @overload
426    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
427                                              '>'], outcome: Hashable, /, *,
428                    percent: Literal[True]) -> float:
429        """The total probability of outcomes fitting a comparison."""
430
431    @overload
432    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
433                                              '>'], outcome: Hashable,
434                    /) -> Fraction:
435        """The total probability of outcomes fitting a comparison."""
436
437    def probability(self,
438                    comparison: Literal['==', '!=', '<=', '<', '>=', '>']
439                    | Hashable,
440                    outcome: Hashable | None = None,
441                    /,
442                    *,
443                    percent: bool = False) -> Fraction | float:
444        """The total probability of outcomes fitting a comparison.
445        
446        Args:
447            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
448                May be omitted, in which case equality `'=='` is used.
449            outcome: The outcome to compare to.
450            percent: If set, the result will be a percentage expressed as a
451                `float`. Otherwise, the result is a `Fraction`.
452        """
453        if outcome is None:
454            outcome = comparison
455            comparison = '=='
456        else:
457            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
458                              comparison)
459        result = Fraction(self.quantity(comparison, outcome),
460                          self.denominator())
461        return result * 100.0 if percent else result
462
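The total probability fitting a comparison is simply the matching quantity over the denominator. A standalone sketch on a plain dict, shown for '<=':

```python
from fractions import Fraction

def probability_le(die: dict, outcome) -> Fraction:
    """P(X <= outcome) for a {outcome: quantity} die; mirrors
    Population.probability('<=', outcome)."""
    return Fraction(sum(q for o, q in die.items() if o <= outcome),
                    sum(die.values()))

d6 = {n: 1 for n in range(1, 7)}
```

With `percent=True`, the corresponding result would be `float(100 * probability_le(d6, 2))`, i.e. about 33.3.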
463    @overload
464    def probabilities(self, /, *,
465                      percent: Literal[False]) -> Sequence[Fraction]:
466        """All probabilities in sorted order."""
467
468    @overload
469    def probabilities(self, /, *, percent: Literal[True]) -> Sequence[float]:
470        """All probabilities in sorted order."""
471
472    @overload
473    def probabilities(self, /) -> Sequence[Fraction]:
474        """All probabilities in sorted order."""
475
476    @overload
477    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
478                                                '>'], /, *,
479                      percent: Literal[False]) -> Sequence[Fraction]:
480        """The total probabilities fitting the comparison for each outcome in sorted order.
481        
482        For example, '<=' gives the CDF.
483        """
484
485    @overload
486    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
487                                                '>'], /, *,
488                      percent: Literal[True]) -> Sequence[float]:
489        """The total probabilities fitting the comparison for each outcome in sorted order.
490        
491        For example, '<=' gives the CDF.
492        """
493
494    @overload
495    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
496                                                '>'], /) -> Sequence[Fraction]:
497        """The total probabilities fitting the comparison for each outcome in sorted order.
498        
499        For example, '<=' gives the CDF.
500        """
501
502    def probabilities(
503            self,
504            comparison: Literal['==', '!=', '<=', '<', '>=', '>']
505        | None = None,
506            /,
507            *,
508            percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
509        """The total probabilities fitting the comparison for each outcome in sorted order.
510        
511        For example, '<=' gives the CDF.
512
513        Args:
514            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
515                May be omitted, in which case equality `'=='` is used.
516            percent: If set, the result will be a percentage expressed as a
517                `float`. Otherwise, the result is a `Fraction`.
518        """
519        if comparison is None:
520            comparison = '=='
521
522        result = tuple(
523            Fraction(q, self.denominator())
524            for q in self.quantities(comparison))
525
526        if percent:
527            return tuple(100.0 * x for x in result)
528        else:
529            return result
530
531    # Scalar statistics.
532
533    def mode(self) -> tuple:
534        """A tuple containing the most common outcome(s) of the population.
535
536        These are sorted from lowest to highest.
537        """
538        return tuple(outcome for outcome, quantity in self.items()
539                     if quantity == self.modal_quantity())
540
541    def modal_quantity(self) -> int:
542        """The highest quantity of any single outcome. """
543        return max(self.quantities())
544
545    def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
546        """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """
547        outcomes = icepool.sorted_union(self, other)
548        return max(
549            abs(
550                self.probability('<=', outcome) -
551                other.probability('<=', outcome)) for outcome in outcomes)
552
553    def cramer_von_mises(self, other: 'Population') -> Fraction:
554        """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """
555        outcomes = icepool.sorted_union(self, other)
556        return sum(((self.probability('<=', outcome) -
557                     other.probability('<=', outcome))**2
558                    for outcome in outcomes),
559                   start=Fraction(0, 1))
560
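Both statistics compare the two CDFs pointwise over the union of outcomes. The Kolmogorov–Smirnov case can be checked on plain dicts (standalone sketch):

```python
from fractions import Fraction

def ks_statistic(die_a: dict, die_b: dict) -> Fraction:
    """Maximum absolute difference between the CDFs of two
    {outcome: quantity} dice; mirrors kolmogorov_smirnov."""
    outcomes = sorted(set(die_a) | set(die_b))

    def cdf(die, x):
        return Fraction(sum(q for o, q in die.items() if o <= x),
                        sum(die.values()))

    return max(abs(cdf(die_a, x) - cdf(die_b, x)) for x in outcomes)

d4 = {n: 1 for n in range(1, 5)}
d6 = {n: 1 for n in range(1, 7)}
```

For a d4 versus a d6, the CDFs differ most at outcome 4: 1 versus 2/3.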
561    def median(self):
562        """The median, taking the mean in case of a tie.
563
564        This will fail if the outcomes do not support division;
565        in this case, use `median_low` or `median_high` instead.
566        """
567        return self.quantile(1, 2)
568
569    def median_low(self) -> T_co:
570        """The median, taking the lower in case of a tie."""
571        return self.quantile_low(1, 2)
572
573    def median_high(self) -> T_co:
574        """The median, taking the higher in case of a tie."""
575        return self.quantile_high(1, 2)
576
577    def quantile(self, n: int, d: int = 100):
578        """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.
579
580        This will fail if the outcomes do not support addition and division;
581        in this case, use `quantile_low` or `quantile_high` instead.
582        """
583        # Should support addition and division.
584        return (self.quantile_low(n, d) +
585                self.quantile_high(n, d)) / 2  # type: ignore
586
587    def quantile_low(self, n: int, d: int = 100) -> T_co:
588        """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
589        index = bisect.bisect_left(self.quantities('<='),
590                                   (n * self.denominator() + d - 1) // d)
591        if index >= len(self):
592            return self.max_outcome()
593        return self.outcomes()[index]
594
595    def quantile_high(self, n: int, d: int = 100) -> T_co:
596        """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
597        index = bisect.bisect_right(self.quantities('<='),
598                                    n * self.denominator() // d)
599        if index >= len(self):
600            return self.max_outcome()
601        return self.outcomes()[index]
602
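The tie-breaking behavior of the two quantile variants can be seen on a d6 (standalone sketch over parallel sorted lists of outcomes and quantities):

```python
import bisect

def _cdf(quantities):
    total, cdf = 0, []
    for q in quantities:
        total += q
        cdf.append(total)
    return cdf, total

def quantile_low(outcomes, quantities, n, d=100):
    """Outcome n/d of the way through the CDF, lower on ties;
    mirrors Population.quantile_low."""
    cdf, denominator = _cdf(quantities)
    index = bisect.bisect_left(cdf, (n * denominator + d - 1) // d)
    return outcomes[min(index, len(outcomes) - 1)]

def quantile_high(outcomes, quantities, n, d=100):
    """Outcome n/d of the way through the CDF, higher on ties;
    mirrors Population.quantile_high."""
    cdf, denominator = _cdf(quantities)
    index = bisect.bisect_right(cdf, n * denominator // d)
    return outcomes[min(index, len(outcomes) - 1)]
```

For a d6, the median is tied between 3 and 4: `quantile_low` picks 3, `quantile_high` picks 4, and `median` would average them to 3.5.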
603    @overload
604    def mean(self: 'Population[numbers.Rational]') -> Fraction:
605        ...
606
607    @overload
608    def mean(self: 'Population[float]') -> float:
609        ...
610
611    def mean(
612        self: 'Population[numbers.Rational] | Population[float]'
613    ) -> Fraction | float:
614        return try_fraction(
615            sum(outcome * quantity for outcome, quantity in self.items()),
616            self.denominator())
617
618    @overload
619    def variance(self: 'Population[numbers.Rational]') -> Fraction:
620        ...
621
622    @overload
623    def variance(self: 'Population[float]') -> float:
624        ...
625
626    def variance(
627        self: 'Population[numbers.Rational] | Population[float]'
628    ) -> Fraction | float:
629        """This is the population variance, not the sample variance."""
630        mean = self.mean()
631        mean_of_squares = try_fraction(
632            sum(quantity * outcome**2 for outcome, quantity in self.items()),
633            self.denominator())
634        return mean_of_squares - mean * mean
635
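Since quantities are exact integers, the mean and population variance can be computed exactly with `Fraction` (standalone sketch on a plain dict):

```python
from fractions import Fraction

def mean_and_variance(die: dict):
    """Exact mean and population variance of a {outcome: quantity} die,
    using Var(X) = E[X^2] - E[X]^2."""
    denom = sum(die.values())
    mean = Fraction(sum(o * q for o, q in die.items()), denom)
    mean_sq = Fraction(sum(o**2 * q for o, q in die.items()), denom)
    return mean, mean_sq - mean * mean

d6 = {n: 1 for n in range(1, 7)}
```

A d6 has mean 7/2 and variance 35/12.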
636    def standard_deviation(
637            self: 'Population[numbers.Rational] | Population[float]') -> float:
638        return math.sqrt(self.variance())
639
640    sd = standard_deviation
641
642    def standardized_moment(
643            self: 'Population[numbers.Rational] | Population[float]',
644            k: int) -> float:
645        sd = self.standard_deviation()
646        mean = self.mean()
647        ev = sum(p * (outcome - mean)**k  # type: ignore 
648                 for outcome, p in zip(self.outcomes(), self.probabilities()))
649        return ev / (sd**k)
650
651    def skewness(
652            self: 'Population[numbers.Rational] | Population[float]') -> float:
653        return self.standardized_moment(3)
654
655    def excess_kurtosis(
656            self: 'Population[numbers.Rational] | Population[float]') -> float:
657        return self.standardized_moment(4) - 3.0
658
659    def entropy(self, base: float = 2.0) -> float:
660        """The entropy of a random sample from this population.
661        
662        Args:
663            base: The logarithm base to use. Default is 2.0, which gives the 
664                entropy in bits.
665        """
666        return -sum(p * math.log(p, base)
667                    for p in self.probabilities() if p > 0.0)
668
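The formula is ordinary Shannon entropy over the outcome probabilities (standalone sketch):

```python
import math

def entropy(probabilities, base=2.0):
    """Shannon entropy of a distribution given as an iterable of
    probabilities; mirrors Population.entropy."""
    return -sum(p * math.log(p, base) for p in probabilities if p > 0.0)
```

A fair 8-sided die has 3 bits of entropy; a certain outcome has 0.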
669    # Joint statistics.
670
671    class _Marginals(Generic[C]):
672        """Helper class for implementing `marginals()`."""
673
674        _population: C
675
676        def __init__(self, population, /):
677            self._population = population
678
679        def __len__(self) -> int:
680            """The minimum len() of all outcomes."""
681            return min(len(x) for x in self._population.outcomes())
682
683        def __getitem__(self, dims: int | slice, /):
684            """Marginalizes the given dimensions."""
685            return self._population._unary_operator(operator.getitem, dims)
686
687        def __iter__(self) -> Iterator:
688            for i in range(len(self)):
689                yield self[i]
690
691        def __getattr__(self, key: str):
692            if key[0] == '_':
693                raise AttributeError(key)
694            return self._population._unary_operator(operator.attrgetter(key))
695
696    @property
697    def marginals(self: C) -> _Marginals[C]:
698        """A property that applies the `[]` operator to outcomes.
699
700        For example, `population.marginals[:2]` will marginalize the first two
701        elements of sequence outcomes.
702
703        Attributes that do not start with an underscore will also be forwarded.
704        For example, `population.marginals.x` will marginalize the `x` attribute
705        from e.g. `namedtuple` outcomes.
706        """
707        return Population._Marginals(self)
708
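Marginalizing applies the `[]` operator to each outcome and sums quantities of outcomes that collide. A standalone sketch on a plain dict:

```python
import operator
from collections import defaultdict

def marginal(population: dict, dims) -> dict:
    """Marginalize tuple outcomes by indexing with `dims`;
    sketch of Population.marginals[dims] on a {tuple: quantity} dict."""
    data = defaultdict(int)
    for outcome, quantity in population.items():
        data[operator.getitem(outcome, dims)] += quantity
    return dict(data)

two_coins = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```

Indexing with an `int` yields scalar outcomes; indexing with a `slice` yields (possibly shorter) tuple outcomes.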
709    @overload
710    def covariance(self: 'Population[tuple[numbers.Rational, ...]]', i: int,
711                   j: int) -> Fraction:
712        ...
713
714    @overload
715    def covariance(self: 'Population[tuple[float, ...]]', i: int,
716                   j: int) -> float:
717        ...
718
719    def covariance(
720            self:
721        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
722            i: int, j: int) -> Fraction | float:
723        mean_i = self.marginals[i].mean()
724        mean_j = self.marginals[j].mean()
725        return try_fraction(
726            sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity
727                for outcome, quantity in self.items()), self.denominator())
728
729    def correlation(
730            self:
731        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
732            i: int, j: int) -> float:
733        sd_i = self.marginals[i].standard_deviation()
734        sd_j = self.marginals[j].standard_deviation()
735        return self.covariance(i, j) / (sd_i * sd_j)
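The covariance and correlation above can be verified by hand with exact arithmetic. A minimal plain-Python sketch (independent of icepool; the joint distribution here is hypothetical), mirroring the formula in the source:

```python
from fractions import Fraction

# Hypothetical joint distribution over 2-tuples: outcome -> quantity.
items = {(0, 0): 1, (1, 1): 1}
denominator = sum(items.values())

def marginal_mean(i):
    # Mean of dimension i, as marginals[i].mean() would compute it.
    return Fraction(sum(o[i] * q for o, q in items.items()), denominator)

def covariance(i, j):
    mi, mj = marginal_mean(i), marginal_mean(j)
    return Fraction(
        sum((o[i] - mi) * (o[j] - mj) * q for o, q in items.items()),
        denominator)

print(covariance(0, 1))  # 1/4: the two dimensions always move together
```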
736
737    # Transformations.
738
739    def _select_outcomes(self, which: Callable[..., bool] | Collection[T_co],
740                         star: bool | None) -> Set[T_co]:
741        """Returns a set of outcomes of self that fit the given condition."""
742        if callable(which):
743            if star is None:
744                star = infer_star(which)
745            if star:
746                # Need TypeVarTuple to check this.
747                return {
748                    outcome
749                    for outcome in self.outcomes()
750                    if which(*outcome)  # type: ignore
751                }
752            else:
753                return {
754                    outcome
755                    for outcome in self.outcomes() if which(outcome)
756                }
757        else:
758            # Collection.
759            return set(outcome for outcome in self.outcomes()
760                       if outcome in which)
761
762    def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C:
763        """Converts the outcomes of this population to a one-hot representation.
764
765        Args:
766            outcomes: If provided, each outcome will be mapped to a `Vector`
767                where the element at `outcomes.index(outcome)` is set to `True`
768                and the rest to `False`, or all `False` if the outcome is not
769                in `outcomes`.
770                If not provided, `self.outcomes()` is used.
771        """
772        if outcomes is None:
773            outcomes = self.outcomes()
774
775        data: MutableMapping[Vector[bool], int] = defaultdict(int)
776        for outcome, quantity in zip(self.outcomes(), self.quantities()):
777            value = [False] * len(outcomes)
778            if outcome in outcomes:
779                value[outcomes.index(outcome)] = True
780            data[Vector(value)] += quantity
781        return self._new_type(data)
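The conversion loop can be sketched without icepool; a plain tuple of `bool`s stands in for `Vector`, and the population here is hypothetical:

```python
from collections import defaultdict

# Hypothetical population: outcome -> quantity.
population = {'a': 2, 'b': 1, 'd': 1}
outcomes = ['a', 'b', 'c']  # index order for the one-hot vectors

data = defaultdict(int)
for outcome, quantity in population.items():
    value = [False] * len(outcomes)
    if outcome in outcomes:
        value[outcomes.index(outcome)] = True
    data[tuple(value)] += quantity  # a tuple stands in for Vector

# 'd' is not in `outcomes`, so its quantity lands on the all-False vector.
print(dict(data))
```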
782
783    def split(self,
784              which: Callable[..., bool] | Collection[T_co] | None = None,
785              /,
786              *,
787              star: bool | None = None) -> tuple[C, C]:
788        """Splits this population into one containing selected items and another containing the rest.
789
790        The sum of the denominators of the results is equal to the denominator
791        of this population.
792
793        If you want to split more than two ways, use `Population.group_by()`.
794
795        Args:
796            which: Selects which outcomes to select. Options:
797                * A callable that takes an outcome and returns `True` if it
798                    should be selected.
799                * A collection of outcomes to select.
800            star: Whether outcomes should be unpacked into separate arguments
801                before sending them to a callable `which`.
802                If not provided, this will be guessed based on the function
803                signature.
804
805        Returns:
806            A population consisting of the outcomes that were selected by
807            `which`, and a population consisting of the unselected outcomes.
808        """
809        if which is None:
810            outcome_set = {self.min_outcome()}
811        else:
812            outcome_set = self._select_outcomes(which, star)
813
814        selected = {}
815        not_selected = {}
816        for outcome, count in self.items():
817            if outcome in outcome_set:
818                selected[outcome] = count
819            else:
820                not_selected[outcome] = count
821
822        return self._new_type(selected), self._new_type(not_selected)
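The partition step can be sketched in plain Python (the d6-like population is hypothetical); note that the two denominators always sum back to the original:

```python
# Hypothetical population: outcome -> quantity (a d6 with quantity 1 each).
population = {n: 1 for n in range(1, 7)}

def split(which):
    # Partition quantities by a predicate, as in the source above.
    selected, not_selected = {}, {}
    for outcome, count in population.items():
        (selected if which(outcome) else not_selected)[outcome] = count
    return selected, not_selected

evens, odds = split(lambda n: n % 2 == 0)
print(evens, odds)
```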
823
824    class _GroupBy(Generic[C]):
825        """Helper class for implementing `group_by()`."""
826
827        _population: C
828
829        def __init__(self, population, /):
830            self._population = population
831
832        def __call__(self,
833                     key_map: Callable[..., U] | Mapping[T_co, U],
834                     /,
835                     *,
836                     star: bool | None = None) -> Mapping[U, C]:
837            if callable(key_map):
838                if star is None:
839                    star = infer_star(key_map)
840                if star:
841                    key_function = lambda o: key_map(*o)
842                else:
843                    key_function = key_map
844            else:
845                key_function = lambda o: key_map.get(o, o)
846
847            result_datas: MutableMapping[U, MutableMapping[Any, int]] = {}
848            outcome: Any
849            for outcome, quantity in self._population.items():
850                key = key_function(outcome)
851                if key not in result_datas:
852                    result_datas[key] = defaultdict(int)
853                result_datas[key][outcome] += quantity
854            return {
855                k: self._population._new_type(v)
856                for k, v in result_datas.items()
857            }
858
859        def __getitem__(self, dims: int | slice, /):
860            """Marginalizes the given dimensions."""
861            return self(lambda x: x[dims])
862
863        def __getattr__(self, key: str):
864            if key[0] == '_':
865                raise AttributeError(key)
866            return self(lambda x: getattr(x, key))
867
868    @property
869    def group_by(self: C) -> _GroupBy[C]:
870        """A method-like property that splits this population into sub-populations based on a key function.
871        
872        The sum of the denominators of the results is equal to the denominator
873        of this population.
874
875        This can be useful when using the law of total probability.
876
877        Example: `d10.group_by(lambda x: x % 3)` is
878        ```python
879        {
880            0: Die([3, 6, 9]),
881            1: Die([1, 4, 7, 10]),
882            2: Die([2, 5, 8]),
883        }
884        ```
885
886        You can also use brackets to group by indexes or slices; or attributes
887        to group by those. Example:
888
889        ```python
890        Die([
891            'aardvark',
892            'alligator',
893            'asp',
894            'blowfish',
895            'cat',
896            'crocodile',
897        ]).group_by[0]
898        ```
899
900        produces
901
902        ```python
903        {
904            'a': Die(['aardvark', 'alligator', 'asp']),
905            'b': Die(['blowfish']),
906            'c': Die(['cat', 'crocodile']),
907        }
908        ```
909
910        Args:
911            key_map: A function or mapping that takes outcomes and produces the
912                key of the corresponding outcome in the result. If this is
913                a Mapping, outcomes not in the mapping are their own key.
914            star: Whether outcomes should be unpacked into separate arguments
915                before sending them to a callable `key_map`.
916                If not provided, this will be guessed based on the function
917                signature.
918        """
919        return Population._GroupBy(self)
920
921    def sample(self) -> T_co:
922        """A single random sample from this population.
923
924        Note that this is always "with replacement" even for `Deck` since
925        instances are immutable.
926
927        This uses the standard `random` package and is not cryptographically
928        secure.
929        """
930        # We don't use random.choices since that is based on floats rather than ints.
931        r = random.randrange(self.denominator())
932        index = bisect.bisect_right(self.quantities('<='), r)
933        return self.outcomes()[index]
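The integer-only sampling scheme can be sketched in plain Python (the population is hypothetical): `random.randrange` picks an integer below the denominator, and `bisect_right` finds its outcome in the cumulative quantities, so no floats are involved:

```python
import bisect
import itertools
import random

outcomes = (1, 2, 3)    # hypothetical sorted outcomes
quantities = (1, 2, 3)  # corresponding integer quantities
denominator = sum(quantities)
cdf = tuple(itertools.accumulate(quantities))  # quantities('<=')

def sample():
    # Equivalent to the source above, without floats.
    r = random.randrange(denominator)
    return outcomes[bisect.bisect_right(cdf, r)]

# r = 0 maps to outcome 1; r in {1, 2} to 2; r in {3, 4, 5} to 3.
print(sample())
```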
934
935    def format(self, format_spec: str, /, **kwargs) -> str:
936        """Formats this mapping as a string.
937
938        `format_spec` should start with the output format,
939        which can be:
940        * `md` for Markdown (default)
941        * `bbcode` for BBCode
942        * `csv` for comma-separated values
943        * `html` for HTML
944
945        After this, you may optionally add a `:` followed by a series of
946        requested columns. Allowed columns are:
947
948        * `o`: Outcomes.
949        * `*o`: Outcomes, unpacked if applicable.
950        * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
951        * `p==`, `p<=`, `p>=`: Probabilities (0-1).
952        * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
953        * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".
954
955        Columns may optionally be separated using `|` characters.
956
957        The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
958        columns are the outcomes (unpacked if applicable), the quantities, and
959        the probabilities. The quantities are omitted from the default columns
960        if any individual quantity is 10**30 or greater.
961        """
962        if not self.is_empty() and self.modal_quantity() < 10**30:
963            default_column_spec = '*oq==%=='
964        else:
965            default_column_spec = '*o%=='
966        if len(format_spec) == 0:
967            format_spec = 'md:' + default_column_spec
968
969        format_spec = format_spec.replace('|', '')
970
971        parts = format_spec.split(':')
972
973        if len(parts) == 1:
974            output_format = parts[0]
975            col_spec = default_column_spec
976        elif len(parts) == 2:
977            output_format = parts[0]
978            col_spec = parts[1]
979        else:
980            raise ValueError('format_spec has too many colons.')
981
982        match output_format:
983            case 'md':
984                return icepool.population.format.markdown(self, col_spec)
985            case 'bbcode':
986                return icepool.population.format.bbcode(self, col_spec)
987            case 'csv':
988                return icepool.population.format.csv(self, col_spec, **kwargs)
989            case 'html':
990                return icepool.population.format.html(self, col_spec)
991            case _:
992                raise ValueError(
993                    f"Unsupported output format '{output_format}'")
994
995    def __format__(self, format_spec: str, /) -> str:
996        return self.format(format_spec)
997
998    def __str__(self) -> str:
999        return f'{self}'

A mapping from outcomes to int quantities.

The outcomes within each instance must be hashable and totally orderable.

Subclasses include Die and Deck.

@abstractmethod
def keys(self) -> CountsKeysView[+T_co]:
44    @abstractmethod
45    def keys(self) -> CountsKeysView[T_co]:
46        """The outcomes within the population in sorted order."""

The outcomes within the population in sorted order.

@abstractmethod
def values(self) -> CountsValuesView:
48    @abstractmethod
49    def values(self) -> CountsValuesView:
50        """The quantities within the population in outcome order."""

The quantities within the population in outcome order.

@abstractmethod
def items(self) -> CountsItemsView[+T_co]:
52    @abstractmethod
53    def items(self) -> CountsItemsView[T_co]:
54        """The (outcome, quantity)s of the population in sorted order."""

The (outcome, quantity)s of the population in sorted order.

def outcomes(self) -> CountsKeysView[+T_co]:
69    def outcomes(self) -> CountsKeysView[T_co]:
70        """The outcomes of the mapping in ascending order.
71
72        These are also the `keys` of the mapping.
73        Prefer to use the name `outcomes`.
74        """
75        return self.keys()

The outcomes of the mapping in ascending order.

These are also the keys of the mapping. Prefer to use the name outcomes.

def common_outcome_length(self) -> int | None:
90    def common_outcome_length(self) -> int | None:
91        """The common length of all outcomes.
92
93        If outcomes have no lengths or different lengths, the result is `None`.
94        """
95        return self._common_outcome_length

The common length of all outcomes.

If outcomes have no lengths or different lengths, the result is None.

def is_empty(self) -> bool:
97    def is_empty(self) -> bool:
98        """`True` iff this population has no outcomes. """
99        return len(self) == 0

True iff this population has no outcomes.

def min_outcome(self) -> +T_co:
101    def min_outcome(self) -> T_co:
102        """The least outcome."""
103        return self.outcomes()[0]

The least outcome.

def max_outcome(self) -> +T_co:
105    def max_outcome(self) -> T_co:
106        """The greatest outcome."""
107        return self.outcomes()[-1]

The greatest outcome.

def nearest( self, comparison: Literal['<=', '<', '>=', '>'], outcome, /) -> Optional[+T_co]:
109    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
110                /) -> T_co | None:
111        """The nearest outcome in this population fitting the comparison.
112
113        Args:
114            comparison: The comparison which the result must fit. For example,
115                '<=' would find the greatest outcome that is not greater than
116                the argument.
117            outcome: The outcome to compare against.
118        
119        Returns:
120            The nearest outcome fitting the comparison, or `None` if there is
121            no such outcome.
122        """
123        match comparison:
124            case '<=':
125                if outcome in self:
126                    return outcome
127                index = bisect.bisect_right(self.outcomes(), outcome) - 1
128                if index < 0:
129                    return None
130                return self.outcomes()[index]
131            case '<':
132                index = bisect.bisect_left(self.outcomes(), outcome) - 1
133                if index < 0:
134                    return None
135                return self.outcomes()[index]
136            case '>=':
137                if outcome in self:
138                    return outcome
139                index = bisect.bisect_left(self.outcomes(), outcome)
140                if index >= len(self):
141                    return None
142                return self.outcomes()[index]
143            case '>':
144                index = bisect.bisect_right(self.outcomes(), outcome)
145                if index >= len(self):
146                    return None
147                return self.outcomes()[index]
148            case _:
149                raise ValueError(f'Invalid comparison {comparison}')

The nearest outcome in this population fitting the comparison.

Arguments:
  • comparison: The comparison which the result must fit. For example, '<=' would find the greatest outcome that is not greater than the argument.
  • outcome: The outcome to compare against.
Returns:

The nearest outcome fitting the comparison, or None if there is no such outcome.
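The bisect logic behind the `'<='` and `'>'` branches can be sketched on a hypothetical set of sorted outcomes:

```python
import bisect

outcomes = (1, 3, 5)  # hypothetical sorted outcomes

def nearest_le(outcome):
    # Greatest outcome <= the argument, as in the '<=' branch.
    if outcome in outcomes:
        return outcome
    index = bisect.bisect_right(outcomes, outcome) - 1
    return outcomes[index] if index >= 0 else None

def nearest_gt(outcome):
    # Least outcome > the argument, as in the '>' branch.
    index = bisect.bisect_right(outcomes, outcome)
    return outcomes[index] if index < len(outcomes) else None

print(nearest_le(4), nearest_gt(3), nearest_le(0))  # 3 5 None
```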

def zero(self: ~C) -> ~C:
155    def zero(self: C) -> C:
156        """Zeros all outcomes of this population.
157
158        This is done by multiplying all outcomes by `0`.
159
160        The result will have the same denominator.
161
162        Raises:
163            ValueError: If the zeros did not resolve to a single outcome.
164        """
165        result = self._unary_operator(Population._zero)
166        if len(result) != 1:
167            raise ValueError('zero() did not resolve to a single outcome.')
168        return result

Zeros all outcomes of this population.

This is done by multiplying all outcomes by 0.

The result will have the same denominator.

Raises:
  • ValueError: If the zeros did not resolve to a single outcome.
def zero_outcome(self) -> +T_co:
170    def zero_outcome(self) -> T_co:
171        """A zero-outcome for this population.
172
173        E.g. `0` for a `Population` whose outcomes are `int`s.
174        """
175        return self.zero().outcomes()[0]

A zero-outcome for this population.

E.g. 0 for a Population whose outcomes are ints.

def quantity( self, comparison: Union[Literal['==', '!=', '<=', '<', '>=', '>'], Hashable], outcome: Optional[Hashable] = None, /) -> int:
188    def quantity(self,
189                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
190                 | Hashable,
191                 outcome: Hashable | None = None,
192                 /) -> int:
193        """The quantity of a single outcome.
194
195        A comparison can be provided, in which case this returns the total
196        quantity fitting the comparison.
197        
198        Args:
199            comparison: The comparison to use. This can be omitted, in which
200                case it is treated as '=='.
201            outcome: The outcome to query.
202        """
203        if outcome is None:
204            outcome = comparison
205            comparison = '=='
206        else:
207            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
208                              comparison)
209
210        match comparison:
211            case '==':
212                return self.get(outcome, 0)
213            case '!=':
214                return self.denominator() - self.get(outcome, 0)
215            case '<=' | '<':
216                threshold = self.nearest(comparison, outcome)
217                if threshold is None:
218                    return 0
219                else:
220                    return self._cumulative_quantities[threshold]
221            case '>=':
222                return self.denominator() - self.quantity('<', outcome)
223            case '>':
224                return self.denominator() - self.quantity('<=', outcome)
225            case _:
226                raise ValueError(f'Invalid comparison {comparison}')

The quantity of a single outcome.

A comparison can be provided, in which case this returns the total quantity fitting the comparison.

Arguments:
  • comparison: The comparison to use. This can be omitted, in which case it is treated as '=='.
  • outcome: The outcome to query.
def quantities( self, comparison: Optional[Literal['==', '!=', '<=', '<', '>=', '>']] = None, /) -> Union[CountsValuesView, Sequence[int]]:
247    def quantities(self,
248                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
249                   | None = None,
250                   /) -> CountsValuesView | Sequence[int]:
251        """The quantities of the mapping in sorted order.
252
253        For example, '<=' gives the CDF.
254
255        Args:
256            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
257                May be omitted, in which case equality `'=='` is used.
258        """
259        if comparison is None:
260            comparison = '=='
261
262        match comparison:
263            case '==':
264                return self.values()
265            case '<=':
266                return tuple(itertools.accumulate(self.values()))
267            case '>=':
268                return tuple(
269                    itertools.accumulate(self.values()[:-1],
270                                         operator.sub,
271                                         initial=self.denominator()))
272            case '!=':
273                return tuple(self.denominator() - q for q in self.values())
274            case '<':
275                return tuple(self.denominator() - q
276                             for q in self.quantities('>='))
277            case '>':
278                return tuple(self.denominator() - q
279                             for q in self.quantities('<='))
280            case _:
281                raise ValueError(f'Invalid comparison {comparison}')

The quantities of the mapping in sorted order.

For example, '<=' gives the CDF.

Arguments:
  • comparison: One of '==', '!=', '<=', '<', '>=', '>'. May be omitted, in which case equality '==' is used.
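The cumulative variants can be sketched with `itertools.accumulate` on hypothetical quantities; note how `'>='` starts from the full denominator and subtracts all but the last quantity:

```python
import itertools
import operator

values = (1, 2, 3)  # hypothetical per-outcome quantities, in outcome order
denominator = sum(values)

# '<=': running totals give the (unnormalized) CDF.
le = tuple(itertools.accumulate(values))

# '>=': accumulate with subtraction, starting from the denominator.
ge = tuple(itertools.accumulate(values[:-1], operator.sub,
                                initial=denominator))

print(le, ge)  # (1, 3, 6) (6, 5, 3)
```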
def denominator(self) -> int:
296    def denominator(self) -> int:
297        """The sum of all quantities (e.g. weights or duplicates).
298
299        For the number of unique outcomes, use `len()`.
300        """
301        return self._denominator

The sum of all quantities (e.g. weights or duplicates).

For the number of unique outcomes, use len().

def multiply_quantities(self: ~C, scale: int, /) -> ~C:
303    def multiply_quantities(self: C, scale: int, /) -> C:
304        """Multiplies all quantities by an integer."""
305        if scale == 1:
306            return self
307        data = {
308            outcome: quantity * scale
309            for outcome, quantity in self.items()
310        }
311        return self._new_type(data)

Multiplies all quantities by an integer.

def divide_quantities(self: ~C, divisor: int, /) -> ~C:
313    def divide_quantities(self: C, divisor: int, /) -> C:
314        """Divides all quantities by an integer, rounding down.
315        
316        Resulting zero quantities are dropped.
317        """
318        if divisor == 0:
319            return self
320        data = {
321            outcome: quantity // divisor
322            for outcome, quantity in self.items() if quantity >= divisor
323        }
324        return self._new_type(data)

Divides all quantities by an integer, rounding down.

Resulting zero quantities are dropped.

def modulo_quantities(self: ~C, divisor: int, /) -> ~C:
326    def modulo_quantities(self: C, divisor: int, /) -> C:
327        """Modulus of all quantities with an integer."""
328        data = {
329            outcome: quantity % divisor
330            for outcome, quantity in self.items()
331        }
332        return self._new_type(data)

Modulus of all quantities with an integer.

def pad_to_denominator(self: ~C, target: int, /, outcome: Hashable) -> ~C:
334    def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C:
335        """Changes the denominator to a target number by changing the quantity of a specified outcome.
336        
337        Args:
338            target: The denominator of the result.
339            outcome: The outcome whose quantity will be adjusted.
340
341        Returns:
342            A `Population` like `self` but with the quantity of `outcome`
343            adjusted so that the overall denominator is equal to `target`.
344            If the quantity is reduced to zero, the outcome will be removed.
345
346        Raises:
347            `ValueError` if this would require the quantity of the specified
348            outcome to be negative.
349        """
350        adjustment = target - self.denominator()
351        data = {outcome: quantity for outcome, quantity in self.items()}
352        new_quantity = data.get(outcome, 0) + adjustment
353        if new_quantity > 0:
354            data[outcome] = new_quantity
355        elif new_quantity == 0:
356            del data[outcome]
357        else:
358            raise ValueError(
359                f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}'
360            )
361        return self._new_type(data)

Changes the denominator to a target number by changing the quantity of a specified outcome.

Arguments:
  • target: The denominator of the result.
  • outcome: The outcome whose quantity will be adjusted.
Returns:

A Population like self but with the quantity of outcome adjusted so that the overall denominator is equal to target. If the quantity is reduced to zero, the outcome will be removed.

Raises:
  • ValueError: If this would require the quantity of the specified outcome to be negative.
def multiply_to_denominator(self: ~C, denominator: int, /) -> ~C:
363    def multiply_to_denominator(self: C, denominator: int, /) -> C:
364        """Multiplies all quantities to reach the target denominator.
365        
366        Raises:
367            ValueError if this cannot be achieved using an integer scaling.
368        """
369        if denominator % self.denominator():
370            raise ValueError(
371                'Target denominator is not an integer factor of the current denominator.'
372            )
373        return self.multiply_quantities(denominator // self.denominator())

Multiplies all quantities to reach the target denominator.

Raises:
  • ValueError if this cannot be achieved using an integer scaling.
def append(self: ~C, outcome, quantity: int = 1, /) -> ~C:
375    def append(self: C, outcome, quantity: int = 1, /) -> C:
376        """This population with an outcome appended.
377        
378        Args:
379            outcome: The outcome to append.
380            quantity: The quantity of the outcome to append. Can be negative,
381                which removes quantity (but not below zero).
382        """
383        data = Counter(self)
384        data[outcome] = max(data[outcome] + quantity, 0)
385        return self._new_type(data)

This population with an outcome appended.

Arguments:
  • outcome: The outcome to append.
  • quantity: The quantity of the outcome to append. Can be negative, which removes quantity (but not below zero).
def remove(self: ~C, outcome, quantity: int | None = None, /) -> ~C:
387    def remove(self: C, outcome, quantity: int | None = None, /) -> C:
388        """This population with an outcome removed.
389
390        Args:
391            outcome: The outcome to remove.
392            quantity: The quantity of the outcome to remove. If not set, all
393                quantity of that outcome is removed. Can be negative, which adds
394                quantity instead.
395        """
396        if quantity is None:
397            data = Counter(self)
398            data[outcome] = 0
399            return self._new_type(data)
400        else:
401            return self.append(outcome, -quantity)

This population with an outcome removed.

Arguments:
  • outcome: The outcome to remove.
  • quantity: The quantity of the outcome to remove. If not set, all quantity of that outcome is removed. Can be negative, which adds quantity instead.
def probability( self, comparison: Union[Literal['==', '!=', '<=', '<', '>=', '>'], Hashable], outcome: Optional[Hashable] = None, /, *, percent: bool = False) -> fractions.Fraction | float:
437    def probability(self,
438                    comparison: Literal['==', '!=', '<=', '<', '>=', '>']
439                    | Hashable,
440                    outcome: Hashable | None = None,
441                    /,
442                    *,
443                    percent: bool = False) -> Fraction | float:
444        """The total probability of outcomes fitting a comparison.
445        
446        Args:
447            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
448                May be omitted, in which case equality `'=='` is used.
449            outcome: The outcome to compare to.
450            percent: If set, the result will be a percentage expressed as a
451                `float`. Otherwise, the result is a `Fraction`.
452        """
453        if outcome is None:
454            outcome = comparison
455            comparison = '=='
456        else:
457            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
458                              comparison)
459        result = Fraction(self.quantity(comparison, outcome),
460                          self.denominator())
461        return result * 100.0 if percent else result

The total probability of outcomes fitting a comparison.

Arguments:
  • comparison: One of '==', '!=', '<=', '<', '>=', '>'. May be omitted, in which case equality '==' is used.
  • outcome: The outcome to compare to.
  • percent: If set, the result will be a percentage expressed as a float. Otherwise, the result is a Fraction.
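The result is simply the quantity fitting the comparison over the denominator, as an exact `Fraction`. A plain-Python sketch over a hypothetical weighted population:

```python
import operator
from fractions import Fraction

population = {1: 1, 2: 2, 3: 3}  # hypothetical outcome -> quantity
denominator = sum(population.values())

def probability(comparison, outcome):
    # Total quantity fitting the comparison, over the denominator.
    ops = {'==': operator.eq, '!=': operator.ne, '<=': operator.le,
           '<': operator.lt, '>=': operator.ge, '>': operator.gt}
    q = sum(v for o, v in population.items() if ops[comparison](o, outcome))
    return Fraction(q, denominator)

print(probability('<=', 2), probability('>', 2))  # 1/2 1/2
```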
def probabilities( self, comparison: Optional[Literal['==', '!=', '<=', '<', '>=', '>']] = None, /, *, percent: bool = False) -> Union[Sequence[fractions.Fraction], Sequence[float]]:
502    def probabilities(
503            self,
504            comparison: Literal['==', '!=', '<=', '<', '>=', '>']
505        | None = None,
506            /,
507            *,
508            percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
509        """The total probabilities fitting the comparison for each outcome in sorted order.
510        
511        For example, '<=' gives the CDF.
512
513        Args:
514            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
515                May be omitted, in which case equality `'=='` is used.
516            percent: If set, the result will be a percentage expressed as a
517                `float`. Otherwise, the result is a `Fraction`.
518        """
519        if comparison is None:
520            comparison = '=='
521
522        result = tuple(
523            Fraction(q, self.denominator())
524            for q in self.quantities(comparison))
525
526        if percent:
527            return tuple(100.0 * x for x in result)
528        else:
529            return result

The total probabilities fitting the comparison for each outcome in sorted order.

For example, '<=' gives the CDF.

Arguments:
  • comparison: One of '==', '!=', '<=', '<', '>=', '>'. May be omitted, in which case equality '==' is used.
  • percent: If set, the result will be a percentage expressed as a float. Otherwise, the result is a Fraction.
def mode(self) -> tuple:
533    def mode(self) -> tuple:
534        """A tuple containing the most common outcome(s) of the population.
535
536        These are sorted from lowest to highest.
537        """
538        return tuple(outcome for outcome, quantity in self.items()
539                     if quantity == self.modal_quantity())

A tuple containing the most common outcome(s) of the population.

These are sorted from lowest to highest.

def modal_quantity(self) -> int:
541    def modal_quantity(self) -> int:
542        """The highest quantity of any single outcome. """
543        return max(self.quantities())

The highest quantity of any single outcome.

def kolmogorov_smirnov(self, other: Population) -> fractions.Fraction:
545    def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
546        """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """
547        outcomes = icepool.sorted_union(self, other)
548        return max(
549            abs(
550                self.probability('<=', outcome) -
551                other.probability('<=', outcome)) for outcome in outcomes)

Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs.
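A hand computation with two small hypothetical distributions, mirroring the source: take the union of outcomes, compare the CDFs pointwise, and keep the largest gap:

```python
from fractions import Fraction

a = {1: 1, 2: 1}         # hypothetical "d2"
b = {1: 1, 2: 1, 3: 1}   # hypothetical "d3"

def cdf(dist, outcome):
    # probability('<=', outcome) as an exact Fraction.
    return Fraction(sum(q for o, q in dist.items() if o <= outcome),
                    sum(dist.values()))

outcomes = sorted(set(a) | set(b))  # union of outcomes of both distributions
ks = max(abs(cdf(a, o) - cdf(b, o)) for o in outcomes)
print(ks)  # 1/3, attained at outcome 2 (CDFs 1 vs 2/3)
```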

def cramer_von_mises(self, other: Population) -> fractions.Fraction:
553    def cramer_von_mises(self, other: 'Population') -> Fraction:
554        """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """
555        outcomes = icepool.sorted_union(self, other)
556        return sum(((self.probability('<=', outcome) -
557                     other.probability('<=', outcome))**2
558                    for outcome in outcomes),
559                   start=Fraction(0, 1))

Cramér-von Mises statistic. The sum-of-squares difference between CDFs.

def median(self):
561    def median(self):
562        """The median, taking the mean in case of a tie.
563
564        This will fail if the outcomes do not support division;
565        in this case, use `median_low` or `median_high` instead.
566        """
567        return self.quantile(1, 2)

The median, taking the mean in case of a tie.

This will fail if the outcomes do not support division; in this case, use median_low or median_high instead.

def median_low(self) -> T_co:
569    def median_low(self) -> T_co:
570        """The median, taking the lower in case of a tie."""
571        return self.quantile_low(1, 2)

The median, taking the lower in case of a tie.

def median_high(self) -> T_co:
573    def median_high(self) -> T_co:
574        """The median, taking the higher in case of a tie."""
575        return self.quantile_high(1, 2)

The median, taking the higher in case of a tie.

def quantile(self, n: int, d: int = 100):
577    def quantile(self, n: int, d: int = 100):
578        """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.
579
580        This will fail if the outcomes do not support addition and division;
581        in this case, use `quantile_low` or `quantile_high` instead.
582        """
583        # Should support addition and division.
584        return (self.quantile_low(n, d) +
585                self.quantile_high(n, d)) / 2  # type: ignore

The outcome n / d of the way through the CDF, taking the mean in case of a tie.

This will fail if the outcomes do not support addition and division; in this case, use quantile_low or quantile_high instead.

def quantile_low(self, n: int, d: int = 100) -> T_co:
587    def quantile_low(self, n: int, d: int = 100) -> T_co:
588        """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
589        index = bisect.bisect_left(self.quantities('<='),
590                                   (n * self.denominator() + d - 1) // d)
591        if index >= len(self):
592            return self.max_outcome()
593        return self.outcomes()[index]

The outcome n / d of the way through the CDF, taking the lesser in case of a tie.

def quantile_high(self, n: int, d: int = 100) -> T_co:
595    def quantile_high(self, n: int, d: int = 100) -> T_co:
596        """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
597        index = bisect.bisect_right(self.quantities('<='),
598                                    n * self.denominator() // d)
599        if index >= len(self):
600            return self.max_outcome()
601        return self.outcomes()[index]

The outcome n / d of the way through the CDF, taking the greater in case of a tie.
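The index arithmetic in `quantile_low` and `quantile_high` above can be sketched for a plain sorted outcome/quantity list using `bisect`. This is a standalone illustration, not the library API; the `min` clamp stands in for the `index >= len(self)` check in the source.

```python
import bisect

outcomes = [1, 2, 3, 4, 5, 6]    # sorted outcomes of a d6
quantities = [1, 1, 1, 1, 1, 1]  # quantity of each outcome
cumulative = [sum(quantities[:i + 1]) for i in range(len(quantities))]
denominator = cumulative[-1]

def quantile_low(n, d):
    # Smallest outcome whose cumulative quantity reaches
    # ceil(n * denominator / d).
    target = (n * denominator + d - 1) // d
    index = min(bisect.bisect_left(cumulative, target), len(outcomes) - 1)
    return outcomes[index]

def quantile_high(n, d):
    # First outcome whose cumulative quantity strictly exceeds
    # n * denominator // d, i.e. the greater outcome in case of a tie.
    index = min(bisect.bisect_right(cumulative, n * denominator // d),
                len(outcomes) - 1)
    return outcomes[index]

print(quantile_low(1, 2), quantile_high(1, 2))  # the two medians of a d6
```

For a d6 the two medians are 3 and 4, so `median()` (their mean) is 3.5.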

def mean(self: Population[numbers.Rational] | Population[float]) -> fractions.Fraction | float:
611    def mean(
612        self: 'Population[numbers.Rational] | Population[float]'
613    ) -> Fraction | float:
614        return try_fraction(
615            sum(outcome * quantity for outcome, quantity in self.items()),
616            self.denominator())
def variance(self: Population[numbers.Rational] | Population[float]) -> fractions.Fraction | float:
626    def variance(
627        self: 'Population[numbers.Rational] | Population[float]'
628    ) -> Fraction | float:
629        """This is the population variance, not the sample variance."""
630        mean = self.mean()
631        mean_of_squares = try_fraction(
632            sum(quantity * outcome**2 for outcome, quantity in self.items()),
633            self.denominator())
634        return mean_of_squares - mean * mean

This is the population variance, not the sample variance.
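The listed `mean` and `variance` reduce to the identity E[X²] − E[X]² in exact arithmetic. A standalone sketch over an outcome-to-quantity dict (not the library API):

```python
from fractions import Fraction

population = {i: 1 for i in range(1, 7)}  # a fair d6
denominator = sum(population.values())

mean = Fraction(sum(o * q for o, q in population.items()), denominator)
mean_of_squares = Fraction(
    sum(q * o**2 for o, q in population.items()), denominator)

# Population variance (not sample variance): E[X^2] - E[X]^2.
variance = mean_of_squares - mean * mean

print(mean, variance)
```

For a d6 this gives a mean of 7/2 and a population variance of 35/12.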

def standard_deviation(self: Population[numbers.Rational] | Population[float]) -> float:
636    def standard_deviation(
637            self: 'Population[numbers.Rational] | Population[float]') -> float:
638        return math.sqrt(self.variance())
def sd(self: Population[numbers.Rational] | Population[float]) -> float:
636    def standard_deviation(
637            self: 'Population[numbers.Rational] | Population[float]') -> float:
638        return math.sqrt(self.variance())
def standardized_moment(self: Population[numbers.Rational] | Population[float], k: int) -> float:
642    def standardized_moment(
643            self: 'Population[numbers.Rational] | Population[float]',
644            k: int) -> float:
645        sd = self.standard_deviation()
646        mean = self.mean()
647        ev = sum(p * (outcome - mean)**k  # type: ignore 
648                 for outcome, p in zip(self.outcomes(), self.probabilities()))
649        return ev / (sd**k)
def skewness(self: Population[numbers.Rational] | Population[float]) -> float:
651    def skewness(
652            self: 'Population[numbers.Rational] | Population[float]') -> float:
653        return self.standardized_moment(3)
def excess_kurtosis(self: Population[numbers.Rational] | Population[float]) -> float:
655    def excess_kurtosis(
656            self: 'Population[numbers.Rational] | Population[float]') -> float:
657        return self.standardized_moment(4) - 3.0
def entropy(self, base: float = 2.0) -> float:
659    def entropy(self, base: float = 2.0) -> float:
660        """The entropy of a random sample from this population.
661        
662        Args:
663            base: The logarithm base to use. Default is 2.0, which gives the 
664                entropy in bits.
665        """
666        return -sum(p * math.log(p, base)
667                    for p in self.probabilities() if p > 0.0)

The entropy of a random sample from this population.

Arguments:
  • base: The logarithm base to use. Default is 2.0, which gives the entropy in bits.
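The entropy formula above is the standard Shannon entropy; a fair die with n sides has entropy log₂(n) bits. A minimal standalone sketch over a list of probabilities:

```python
import math

def entropy(probabilities, base=2.0):
    # Shannon entropy; zero-probability outcomes contribute nothing.
    return -sum(p * math.log(p, base)
                for p in probabilities if p > 0.0)

fair_d6 = [1 / 6] * 6
print(entropy(fair_d6))  # log2(6) bits, ~2.585
```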
marginals: icepool.population.base.Population._Marginals[C]
696    @property
697    def marginals(self: C) -> _Marginals[C]:
698        """A property that applies the `[]` operator to outcomes.
699
700        For example, `population.marginals[:2]` will marginalize the first two
701        elements of sequence outcomes.
702
703        Attributes that do not start with an underscore will also be forwarded.
704        For example, `population.marginals.x` will marginalize the `x` attribute
705        from e.g. `namedtuple` outcomes.
706        """
707        return Population._Marginals(self)

A property that applies the [] operator to outcomes.

For example, population.marginals[:2] will marginalize the first two elements of sequence outcomes.

Attributes that do not start with an underscore will also be forwarded. For example, population.marginals.x will marginalize the x attribute from e.g. namedtuple outcomes.
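Marginalizing one element of tuple outcomes amounts to projecting each outcome and summing the quantities that collide. A dict-based sketch of the idea (the `marginal` helper is hypothetical, not the library API):

```python
from collections import defaultdict

# A population of (x, y) tuple outcomes with quantities.
population = {(1, 'a'): 2, (1, 'b'): 1, (2, 'a'): 3}

def marginal(population, index):
    # Project each outcome to one element and combine quantities.
    result = defaultdict(int)
    for outcome, quantity in population.items():
        result[outcome[index]] += quantity
    return dict(result)

print(marginal(population, 0))  # {1: 3, 2: 3}
```

Note that the denominator is preserved: both marginals sum to the original total of 6.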

def covariance(self: Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]], i: int, j: int) -> fractions.Fraction | float:
719    def covariance(
720            self:
721        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
722            i: int, j: int) -> Fraction | float:
723        mean_i = self.marginals[i].mean()
724        mean_j = self.marginals[j].mean()
725        return try_fraction(
726            sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity
727                for outcome, quantity in self.items()), self.denominator())
def correlation(self: Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]], i: int, j: int) -> float:
729    def correlation(
730            self:
731        'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
732            i: int, j: int) -> float:
733        sd_i = self.marginals[i].standard_deviation()
734        sd_j = self.marginals[j].standard_deviation()
735        return self.covariance(i, j) / (sd_i * sd_j)
def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C:
762    def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C:
763        """Converts the outcomes of this population to a one-hot representation.
764
765        Args:
766            outcomes: If provided, each outcome will be mapped to a `Vector`
767                where the element at `outcomes.index(outcome)` is set to `True`
768                and the rest to `False`, or all `False` if the outcome is not
769                in `outcomes`.
770                If not provided, `self.outcomes()` is used.
771        """
772        if outcomes is None:
773            outcomes = self.outcomes()
774
775        data: MutableMapping[Vector[bool], int] = defaultdict(int)
776        for outcome, quantity in zip(self.outcomes(), self.quantities()):
777            value = [False] * len(outcomes)
778            if outcome in outcomes:
779                value[outcomes.index(outcome)] = True
780            data[Vector(value)] += quantity
781        return self._new_type(data)

Converts the outcomes of this population to a one-hot representation.

Arguments:
  • outcomes: If provided, each outcome will be mapped to a Vector where the element at outcomes.index(outcome) is set to True and the rest to False, or all False if the outcome is not in outcomes. If not provided, self.outcomes() is used.
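The one-hot conversion in the listing can be mirrored with plain tuples of booleans in place of `Vector` (a standalone sketch, not the library API):

```python
from collections import defaultdict

def to_one_hot(population, outcomes=None):
    # Map each outcome to a boolean tuple with True at its index.
    if outcomes is None:
        outcomes = sorted(population)
    data = defaultdict(int)
    for outcome, quantity in population.items():
        value = [False] * len(outcomes)
        if outcome in outcomes:
            value[outcomes.index(outcome)] = True
        data[tuple(value)] += quantity
    return dict(data)

d3 = {1: 1, 2: 1, 3: 1}
print(to_one_hot(d3))
```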
def split(self, which: Callable[..., bool] | Collection[T_co] | None = None, /, *, star: bool | None = None) -> tuple[C, C]:
783    def split(self,
784              which: Callable[..., bool] | Collection[T_co] | None = None,
785              /,
786              *,
787              star: bool | None = None) -> tuple[C, C]:
788        """Splits this population into one containing selected items and another containing the rest.
789
790        The sum of the denominators of the results is equal to the denominator
791        of this population.
792
793        If you want to split more than two ways, use `Population.group_by()`.
794
795        Args:
796            which: Selects which outcomes to select. Options:
797                * A callable that takes an outcome and returns `True` if it
798                    should be selected.
799                * A collection of outcomes to select.
800            star: Whether outcomes should be unpacked into separate arguments
801                before sending them to a callable `which`.
802                If not provided, this will be guessed based on the function
803                signature.
804
805        Returns:
806            A population consisting of the outcomes that were selected by
807            `which`, and a population consisting of the unselected outcomes.
808        """
809        if which is None:
810            outcome_set = {self.min_outcome()}
811        else:
812            outcome_set = self._select_outcomes(which, star)
813
814        selected = {}
815        not_selected = {}
816        for outcome, count in self.items():
817            if outcome in outcome_set:
818                selected[outcome] = count
819            else:
820                not_selected[outcome] = count
821
822        return self._new_type(selected), self._new_type(not_selected)

Splits this population into one containing selected items and another containing the rest.

The sum of the denominators of the results is equal to the denominator of this population.

If you want to split more than two ways, use Population.group_by().

Arguments:
  • which: Selects which outcomes to select. Options:
    • A callable that takes an outcome and returns True if it should be selected.
    • A collection of outcomes to select.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable which. If not provided, this will be guessed based on the function signature.
Returns:

A population consisting of the outcomes that were selected by which, and a population consisting of the unselected outcomes.
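The partitioning step in the listing is a straightforward two-way split of the outcome-to-quantity pairs. A standalone sketch with a callable `which` (not the library API, and omitting the collection and `star` handling):

```python
def split(population, which):
    # Partition outcome -> quantity pairs by a predicate.
    selected, not_selected = {}, {}
    for outcome, quantity in population.items():
        (selected if which(outcome) else not_selected)[outcome] = quantity
    return selected, not_selected

d6 = {i: 1 for i in range(1, 7)}
evens, odds = split(d6, lambda x: x % 2 == 0)
print(evens, odds)
```

As the docstring states, the two denominators sum to the original denominator.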

group_by: icepool.population.base.Population._GroupBy[C]
868    @property
869    def group_by(self: C) -> _GroupBy[C]:
870        """A method-like property that splits this population into sub-populations based on a key function.
871        
872        The sum of the denominators of the results is equal to the denominator
873        of this population.
874
875        This can be useful when using the law of total probability.
876
877        Example: `d10.group_by(lambda x: x % 3)` is
878        ```python
879        {
880            0: Die([3, 6, 9]),
881            1: Die([1, 4, 7, 10]),
882            2: Die([2, 5, 8]),
883        }
884        ```
885
886        You can also use brackets to group by indexes or slices; or attributes
887        to group by those. Example:
888
889        ```python
890        Die([
891            'aardvark',
892            'alligator',
893            'asp',
894            'blowfish',
895            'cat',
896            'crocodile',
897        ]).group_by[0]
898        ```
899
900        produces
901
902        ```python
903        {
904            'a': Die(['aardvark', 'alligator', 'asp']),
905            'b': Die(['blowfish']),
906            'c': Die(['cat', 'crocodile']),
907        }
908        ```
909
910        Args:
911            key_map: A function or mapping that takes outcomes and produces the
912                key of the corresponding outcome in the result. If this is
913                a Mapping, outcomes not in the mapping are their own key.
914            star: Whether outcomes should be unpacked into separate arguments
915                before sending them to a callable `key_map`.
916                If not provided, this will be guessed based on the function
917                signature.
918        """
919        return Population._GroupBy(self)

A method-like property that splits this population into sub-populations based on a key function.

The sum of the denominators of the results is equal to the denominator of this population.

This can be useful when using the law of total probability.

Example: d10.group_by(lambda x: x % 3) is

{
    0: Die([3, 6, 9]),
    1: Die([1, 4, 7, 10]),
    2: Die([2, 5, 8]),
}

You can also use brackets to group by indexes or slices; or attributes to group by those. Example:

Die([
    'aardvark',
    'alligator',
    'asp',
    'blowfish',
    'cat',
    'crocodile',
]).group_by[0]

produces

{
    'a': Die(['aardvark', 'alligator', 'asp']),
    'b': Die(['blowfish']),
    'c': Die(['cat', 'crocodile']),
}
Arguments:
  • key_map: A function or mapping that takes outcomes and produces the key of the corresponding outcome in the result. If this is a Mapping, outcomes not in the mapping are their own key.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable key_map. If not provided, this will be guessed based on the function signature.
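The `d10.group_by(lambda x: x % 3)` example above can be reproduced with a dict-of-dicts sketch (standalone, not the library API):

```python
from collections import defaultdict

def group_by(population, key):
    # Split a population into sub-populations sharing a key.
    groups = defaultdict(dict)
    for outcome, quantity in population.items():
        groups[key(outcome)][outcome] = quantity
    return dict(groups)

d10 = {i: 1 for i in range(1, 11)}
print(group_by(d10, lambda x: x % 3))
```

Again the denominators of the groups sum to the original denominator of 10, which is what makes this useful for the law of total probability.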
def sample(self) -> T_co:
921    def sample(self) -> T_co:
922        """A single random sample from this population.
923
924        Note that this is always "with replacement" even for `Deck` since
925        instances are immutable.
926
927        This uses the standard `random` package and is not cryptographically
928        secure.
929        """
930        # We don't use random.choices since that is based on floats rather than ints.
931        r = random.randrange(self.denominator())
932        index = bisect.bisect_right(self.quantities('<='), r)
933        return self.outcomes()[index]

A single random sample from this population.

Note that this is always "with replacement" even for Deck since instances are immutable.

This uses the standard random package and is not cryptographically secure.
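The sampling step in the listing maps an integer `r` drawn from `range(denominator)` to an outcome via `bisect_right` on the cumulative quantities. The sketch below makes `r` an explicit argument so the mapping can be checked deterministically; in the source `r` comes from `random.randrange(self.denominator())`.

```python
import bisect
from collections import Counter

population = {1: 1, 2: 2, 3: 3}  # a weighted population
outcomes = sorted(population)
cumulative = []
total = 0
for o in outcomes:
    total += population[o]
    cumulative.append(total)

def sample_at(r):
    # Map an integer 0 <= r < denominator to an outcome via
    # bisect_right on the cumulative ('<=') quantities.
    return outcomes[bisect.bisect_right(cumulative, r)]

# Every r maps to exactly one outcome, reproducing the quantities.
counts = Counter(sample_at(r) for r in range(total))
print(counts)
```

Exhausting all `r` recovers the original quantities exactly, which is why sampling this way is unbiased.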

def format(self, format_spec: str, /, **kwargs) -> str:
935    def format(self, format_spec: str, /, **kwargs) -> str:
936        """Formats this mapping as a string.
937
938        `format_spec` should start with the output format,
939        which can be:
940        * `md` for Markdown (default)
941        * `bbcode` for BBCode
942        * `csv` for comma-separated values
943        * `html` for HTML
944
945        After this, you may optionally add a `:` followed by a series of
946        requested columns. Allowed columns are:
947
948        * `o`: Outcomes.
949        * `*o`: Outcomes, unpacked if applicable.
950        * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
951        * `p==`, `p<=`, `p>=`: Probabilities (0-1).
952        * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
953        * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".
954
955        Columns may optionally be separated using `|` characters.
956
957        The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the 
958        columns are the outcomes (unpacked if applicable), the quantities, and
959        the probabilities. The quantities are omitted from the default columns 
960        if any individual quantity is 10**30 or greater.
961        """
962        if not self.is_empty() and self.modal_quantity() < 10**30:
963            default_column_spec = '*oq==%=='
964        else:
965            default_column_spec = '*o%=='
966        if len(format_spec) == 0:
967            format_spec = 'md:' + default_column_spec
968
969        format_spec = format_spec.replace('|', '')
970
971        parts = format_spec.split(':')
972
973        if len(parts) == 1:
974            output_format = parts[0]
975            col_spec = default_column_spec
976        elif len(parts) == 2:
977            output_format = parts[0]
978            col_spec = parts[1]
979        else:
980            raise ValueError('format_spec has too many colons.')
981
982        match output_format:
983            case 'md':
984                return icepool.population.format.markdown(self, col_spec)
985            case 'bbcode':
986                return icepool.population.format.bbcode(self, col_spec)
987            case 'csv':
988                return icepool.population.format.csv(self, col_spec, **kwargs)
989            case 'html':
990                return icepool.population.format.html(self, col_spec)
991            case _:
992                raise ValueError(
993                    f"Unsupported output format '{output_format}'")

Formats this mapping as a string.

format_spec should start with the output format, which can be:

  • md for Markdown (default)
  • bbcode for BBCode
  • csv for comma-separated values
  • html for HTML

After this, you may optionally add a : followed by a series of requested columns. Allowed columns are:

  • o: Outcomes.
  • *o: Outcomes, unpacked if applicable.
  • q==, q<=, q>=: Quantities ==, <=, or >= each outcome.
  • p==, p<=, p>=: Probabilities (0-1).
  • %==, %<=, %>=: Probabilities (0%-100%).
  • i==, i<=, i>=: EXPERIMENTAL: "1 in N".

Columns may optionally be separated using | characters.

The default setting is equal to f'{die:md:*o|q==|%==}'. Here the columns are the outcomes (unpacked if applicable), the quantities, and the probabilities. The quantities are omitted from the default columns if any individual quantity is 10**30 or greater.

def tupleize(*args: T | Population[T] | RerollType) -> tuple[T, ...] | Population[tuple[T, ...]] | RerollType:
 93def tupleize(
 94    *args: 'T | icepool.Population[T] | icepool.RerollType'
 95) -> 'tuple[T, ...] | icepool.Population[tuple[T, ...]] | icepool.RerollType':
 96    """Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.
 97
 98    For example:
 99    * `tupleize(1, 2)` would produce `(1, 2)`.
100    * `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`,
101        ... `(6, 0)`.
102    * `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`,
103        ... `(6, 5)`, `(6, 6)`.
104
105    If `Population`s are provided, they must all be `Die` or all `Deck` and not
106    a mixture of the two.
107
108    If any argument is `icepool.Reroll`, the result is `icepool.Reroll`.
109
110    Returns:
111        If none of the outcomes is a `Population`, the result is a `tuple`
112        with one element per argument. Otherwise, the result is a `Population`
113        of the same type as the input `Population`, and the outcomes are
114        `tuple`s with one element per argument.
115    """
116    return cartesian_product(*args, outcome_type=tuple)

Returns the Cartesian product of the arguments as tuples or a Population thereof.

For example:

  • tupleize(1, 2) would produce (1, 2).
  • tupleize(d6, 0) would produce a Die with outcomes (1, 0), (2, 0), ... (6, 0).
  • tupleize(d6, d6) would produce a Die with outcomes (1, 1), (1, 2), ... (6, 5), (6, 6).

If Populations are provided, they must all be Die or all Deck and not a mixture of the two.

If any argument is icepool.Reroll, the result is icepool.Reroll.

Returns:

If none of the outcomes is a Population, the result is a tuple with one element per argument. Otherwise, the result is a Population of the same type as the input Population, and the outcomes are tuples with one element per argument.
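The population case of the Cartesian product can be sketched with `itertools.product` over outcome-to-quantity mappings, with quantities multiplying (a simplified standalone sketch: it handles only mappings, ignoring scalar arguments, `Reroll`, and the `Die`/`Deck` distinction):

```python
from itertools import product

def tupleize(*pops):
    # Cartesian product of outcome -> quantity mappings;
    # quantities of the combined outcomes multiply.
    result = {}
    for combo in product(*(p.items() for p in pops)):
        outcome = tuple(o for o, _ in combo)
        quantity = 1
        for _, q in combo:
            quantity *= q
        result[outcome] = result.get(outcome, 0) + quantity
    return result

d6 = {i: 1 for i in range(1, 7)}
pair = tupleize(d6, d6)
print(len(pair), sum(pair.values()))  # 36 outcomes, denominator 36
```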

def vectorize(*args: T | Population[T] | RerollType) -> Vector[T] | Population[Vector[T]] | RerollType:
119def vectorize(
120    *args: 'T | icepool.Population[T] | icepool.RerollType'
121) -> 'icepool.Vector[T] | icepool.Population[icepool.Vector[T]] | icepool.RerollType':
122    """Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.
123
124    For example:
125    * `vectorize(1, 2)` would produce `Vector(1, 2)`.
126    * `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`,
127        `Vector(2, 0)`, ... `Vector(6, 0)`.
128    * `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`,
129        `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.
130
131    If `Population`s are provided, they must all be `Die` or all `Deck` and not
132    a mixture of the two.
133
134    If any argument is `icepool.Reroll`, the result is `icepool.Reroll`.
135
136    Returns:
137        If none of the outcomes is a `Population`, the result is a `Vector`
138        with one element per argument. Otherwise, the result is a `Population`
139        of the same type as the input `Population`, and the outcomes are
140        `Vector`s with one element per argument.
141    """
142    return cartesian_product(*args, outcome_type=icepool.Vector)

Returns the Cartesian product of the arguments as Vectors or a Population thereof.

For example:

  • vectorize(1, 2) would produce Vector(1, 2).
  • vectorize(d6, 0) would produce a Die with outcomes Vector(1, 0), Vector(2, 0), ... Vector(6, 0).
  • vectorize(d6, d6) would produce a Die with outcomes Vector(1, 1), Vector(1, 2), ... Vector(6, 5), Vector(6, 6).

If Populations are provided, they must all be Die or all Deck and not a mixture of the two.

If any argument is icepool.Reroll, the result is icepool.Reroll.

Returns:

If none of the outcomes is a Population, the result is a Vector with one element per argument. Otherwise, the result is a Population of the same type as the input Population, and the outcomes are Vectors with one element per argument.

class Vector(icepool.Outcome, typing.Sequence[T_co]):
125class Vector(Outcome, Sequence[T_co]):
126    """Immutable tuple-like class that applies most operators elementwise.
127
128    May become a variadic generic type in the future.
129    """
130    __slots__ = ['_data', '_truth_value']
131
132    _data: tuple[T_co, ...]
133    _truth_value: bool | None
134
135    def __init__(self,
136                 elements: Iterable[T_co],
137                 *,
138                 truth_value: bool | None = None) -> None:
139        self._data = tuple(elements)
140        self._truth_value = truth_value
141
142    def __hash__(self) -> int:
143        return hash((Vector, self._data))
144
145    def __len__(self) -> int:
146        return len(self._data)
147
148    @overload
149    def __getitem__(self, index: int) -> T_co:
150        ...
151
152    @overload
153    def __getitem__(self, index: slice) -> 'Vector[T_co]':
154        ...
155
156    def __getitem__(self, index: int | slice) -> 'T_co | Vector[T_co]':
157        if isinstance(index, int):
158            return self._data[index]
159        else:
160            return Vector(self._data[index])
161
162    def __iter__(self) -> Iterator[T_co]:
163        return iter(self._data)
164
165    # Unary operators.
166
167    def unary_operator(self, op: Callable[..., U], *args,
168                       **kwargs) -> 'Vector[U]':
169        """Unary operators on `Vector` are applied elementwise.
170
171        This is used for the standard unary operators
172        `-, +, abs, ~, round, trunc, floor, ceil`
173        """
174        return Vector(op(x, *args, **kwargs) for x in self)
175
176    def __neg__(self) -> 'Vector[T_co]':
177        return self.unary_operator(operator.neg)
178
179    def __pos__(self) -> 'Vector[T_co]':
180        return self.unary_operator(operator.pos)
181
182    def __invert__(self) -> 'Vector[T_co]':
183        return self.unary_operator(operator.invert)
184
185    def abs(self) -> 'Vector[T_co]':
186        return self.unary_operator(operator.abs)
187
188    __abs__ = abs
189
190    def round(self, ndigits: int | None = None) -> 'Vector':
191        return self.unary_operator(round, ndigits)
192
193    __round__ = round
194
195    def trunc(self) -> 'Vector':
196        return self.unary_operator(math.trunc)
197
198    __trunc__ = trunc
199
200    def floor(self) -> 'Vector':
201        return self.unary_operator(math.floor)
202
203    __floor__ = floor
204
205    def ceil(self) -> 'Vector':
206        return self.unary_operator(math.ceil)
207
208    __ceil__ = ceil
209
210    # Binary operators.
211
212    def binary_operator(self,
213                        other,
214                        op: Callable[..., U],
215                        *args,
216                        compare_for_truth: bool = False,
217                        **kwargs) -> 'Vector[U]':
218        """Binary operators on `Vector` are applied elementwise.
219
220        If the other operand is also a `Vector`, the operator is applied to each
221        pair of elements from `self` and `other`. Both must have the same
222        length.
223
224        Otherwise the other operand is broadcast to each element of `self`.
225
226        This is used for the standard binary operators
227        `+, -, *, /, //, %, **, <<, >>, &, |, ^`.
228
229        `@` is not included due to its different meaning in `Die`.
230
231        This is also used for the comparators
232        `<, <=, >, >=, ==, !=`.
233
234        In this case, the result also has a truth value based on lexicographic
235        ordering.
236        """
237        if isinstance(other, Vector):
238            if len(self) == len(other):
239                if compare_for_truth:
240                    truth_value = cast(bool, op(self._data, other._data))
241                else:
242                    truth_value = None
243                return Vector(
244                    (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
245                    truth_value=truth_value)
246            else:
247                raise IndexError(
248                    f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
249                )
250        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
251            return NotImplemented  # delegate to the other
252        else:
253            return Vector((op(x, other, *args, **kwargs) for x in self))
254
255    def reverse_binary_operator(self, other, op: Callable[..., U], *args,
256                                **kwargs) -> 'Vector[U]':
257        """Reverse version of `binary_operator()`."""
258        if isinstance(other, Vector):
259            if len(self) == len(other):
260                return Vector(
261                    op(y, x, *args, **kwargs) for x, y in zip(self, other))
262            else:
263                raise IndexError(
264                    f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
265                )
266        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
267            return NotImplemented  # delegate to the other
268        else:
269            return Vector(op(other, x, *args, **kwargs) for x in self)
270
271    def __add__(self, other) -> 'Vector':
272        return self.binary_operator(other, operator.add)
273
274    def __radd__(self, other) -> 'Vector':
275        return self.reverse_binary_operator(other, operator.add)
276
277    def __sub__(self, other) -> 'Vector':
278        return self.binary_operator(other, operator.sub)
279
280    def __rsub__(self, other) -> 'Vector':
281        return self.reverse_binary_operator(other, operator.sub)
282
283    def __mul__(self, other) -> 'Vector':
284        return self.binary_operator(other, operator.mul)
285
286    def __rmul__(self, other) -> 'Vector':
287        return self.reverse_binary_operator(other, operator.mul)
288
289    def __truediv__(self, other) -> 'Vector':
290        return self.binary_operator(other, operator.truediv)
291
292    def __rtruediv__(self, other) -> 'Vector':
293        return self.reverse_binary_operator(other, operator.truediv)
294
295    def __floordiv__(self, other) -> 'Vector':
296        return self.binary_operator(other, operator.floordiv)
297
298    def __rfloordiv__(self, other) -> 'Vector':
299        return self.reverse_binary_operator(other, operator.floordiv)
300
301    def __pow__(self, other) -> 'Vector':
302        return self.binary_operator(other, operator.pow)
303
304    def __rpow__(self, other) -> 'Vector':
305        return self.reverse_binary_operator(other, operator.pow)
306
307    def __mod__(self, other) -> 'Vector':
308        return self.binary_operator(other, operator.mod)
309
310    def __rmod__(self, other) -> 'Vector':
311        return self.reverse_binary_operator(other, operator.mod)
312
313    def __lshift__(self, other) -> 'Vector':
314        return self.binary_operator(other, operator.lshift)
315
316    def __rlshift__(self, other) -> 'Vector':
317        return self.reverse_binary_operator(other, operator.lshift)
318
319    def __rshift__(self, other) -> 'Vector':
320        return self.binary_operator(other, operator.rshift)
321
322    def __rrshift__(self, other) -> 'Vector':
323        return self.reverse_binary_operator(other, operator.rshift)
324
325    def __and__(self, other) -> 'Vector':
326        return self.binary_operator(other, operator.and_)
327
328    def __rand__(self, other) -> 'Vector':
329        return self.reverse_binary_operator(other, operator.and_)
330
331    def __or__(self, other) -> 'Vector':
332        return self.binary_operator(other, operator.or_)
333
334    def __ror__(self, other) -> 'Vector':
335        return self.reverse_binary_operator(other, operator.or_)
336
337    def __xor__(self, other) -> 'Vector':
338        return self.binary_operator(other, operator.xor)
339
340    def __rxor__(self, other) -> 'Vector':
341        return self.reverse_binary_operator(other, operator.xor)
342
343    # Comparators.
344    # These return a value with a truth value, but not a bool.
345
346    def __lt__(self, other) -> 'Vector':  # type: ignore
347        if not isinstance(other, Vector):
348            return NotImplemented
349        return self.binary_operator(other, operator.lt, compare_for_truth=True)
350
351    def __le__(self, other) -> 'Vector':  # type: ignore
352        if not isinstance(other, Vector):
353            return NotImplemented
354        return self.binary_operator(other, operator.le, compare_for_truth=True)
355
356    def __gt__(self, other) -> 'Vector':  # type: ignore
357        if not isinstance(other, Vector):
358            return NotImplemented
359        return self.binary_operator(other, operator.gt, compare_for_truth=True)
360
361    def __ge__(self, other) -> 'Vector':  # type: ignore
362        if not isinstance(other, Vector):
363            return NotImplemented
364        return self.binary_operator(other, operator.ge, compare_for_truth=True)
365
366    def __eq__(self, other) -> 'Vector | bool':  # type: ignore
367        if not isinstance(other, Vector):
368            return False
369        return self.binary_operator(other, operator.eq, compare_for_truth=True)
370
371    def __ne__(self, other) -> 'Vector | bool':  # type: ignore
372        if not isinstance(other, Vector):
373            return True
374        return self.binary_operator(other, operator.ne, compare_for_truth=True)
375
376    def __bool__(self) -> bool:
377        if self._truth_value is None:
378            raise TypeError(
379                'Vector only has a truth value if it is the result of a comparison operator.'
380            )
381        return self._truth_value
382
383    # Sequence manipulation.
384
385    def append(self, other) -> 'Vector':
386        return Vector(self._data + (other, ))
387
388    def concatenate(self, other: 'Iterable') -> 'Vector':
389        return Vector(itertools.chain(self, other))
390
391    # Strings.
392
393    def __repr__(self) -> str:
394        return type(self).__qualname__ + '(' + repr(self._data) + ')'
395
396    def __str__(self) -> str:
397        return type(self).__qualname__ + '(' + str(self._data) + ')'

Immutable tuple-like class that applies most operators elementwise.

May become a variadic generic type in the future.

def unary_operator( self, op: Callable[..., ~U], *args, **kwargs) -> Vector[~U]:
167    def unary_operator(self, op: Callable[..., U], *args,
168                       **kwargs) -> 'Vector[U]':
169        """Unary operators on `Vector` are applied elementwise.
170
171        This is used for the standard unary operators
172        `-, +, abs, ~, round, trunc, floor, ceil`
173        """
174        return Vector(op(x, *args, **kwargs) for x in self)

Unary operators on Vector are applied elementwise.

This is used for the standard unary operators -, +, abs, ~, round, trunc, floor, ceil
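The elementwise rule can be sketched with a minimal tuple subclass (a hypothetical `Vec` stand-in, not `icepool.Vector` itself):

```python
# Hypothetical Vec stand-in illustrating elementwise unary operators.
class Vec(tuple):

    def unary_operator(self, op, *args, **kwargs):
        # Apply op to each element, returning a new Vec.
        return Vec(op(x, *args, **kwargs) for x in self)

    def __neg__(self):
        return self.unary_operator(lambda x: -x)

    def __abs__(self):
        return self.unary_operator(abs)

    def __round__(self, ndigits=None):
        return self.unary_operator(round, ndigits)
```

With this sketch, `-Vec((-1, 2))` negates each element and `round(Vec((1.4, 2.6)))` rounds each element, matching the elementwise behavior described above.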

def abs(self) -> Vector[+T_co]:
185    def abs(self) -> 'Vector[T_co]':
186        return self.unary_operator(operator.abs)
def round(self, ndigits: int | None = None) -> Vector:
190    def round(self, ndigits: int | None = None) -> 'Vector':
191        return self.unary_operator(round, ndigits)
def trunc(self) -> Vector:
195    def trunc(self) -> 'Vector':
196        return self.unary_operator(math.trunc)
def floor(self) -> Vector:
200    def floor(self) -> 'Vector':
201        return self.unary_operator(math.floor)
def ceil(self) -> Vector:
205    def ceil(self) -> 'Vector':
206        return self.unary_operator(math.ceil)
def binary_operator( self, other, op: Callable[..., ~U], *args, compare_for_truth: bool = False, **kwargs) -> Vector[~U]:
212    def binary_operator(self,
213                        other,
214                        op: Callable[..., U],
215                        *args,
216                        compare_for_truth: bool = False,
217                        **kwargs) -> 'Vector[U]':
218        """Binary operators on `Vector` are applied elementwise.
219
220        If the other operand is also a `Vector`, the operator is applied to each
221        pair of elements from `self` and `other`. Both must have the same
222        length.
223
224        Otherwise the other operand is broadcast to each element of `self`.
225
226        This is used for the standard binary operators
227        `+, -, *, /, //, %, **, <<, >>, &, |, ^`.
228
229        `@` is not included due to its different meaning in `Die`.
230
231        This is also used for the comparators
232        `<, <=, >, >=, ==, !=`.
233
234        In this case, the result also has a truth value based on lexicographic
235        ordering.
236        """
237        if isinstance(other, Vector):
238            if len(self) == len(other):
239                if compare_for_truth:
240                    truth_value = cast(bool, op(self._data, other._data))
241                else:
242                    truth_value = None
243                return Vector(
244                    (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
245                    truth_value=truth_value)
246            else:
247                raise IndexError(
248                    f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
249                )
250        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
251            return NotImplemented  # delegate to the other
252        else:
253            return Vector((op(x, other, *args, **kwargs) for x in self))

Binary operators on Vector are applied elementwise.

If the other operand is also a Vector, the operator is applied to each pair of elements from self and other. Both must have the same length.

Otherwise the other operand is broadcast to each element of self.

This is used for the standard binary operators +, -, *, /, //, %, **, <<, >>, &, |, ^.

@ is not included due to its different meaning in Die.

This is also used for the comparators <, <=, >, >=, ==, !=.

In this case, the result also has a truth value based on lexicographic ordering.
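The pairing, broadcasting, and truth-value rules above can be sketched as follows (a hypothetical `Vec` stand-in, not `icepool.Vector` itself):

```python
import operator

# Hypothetical Vec stand-in showing elementwise binary operators, scalar
# broadcasting, and the lexicographic truth value carried by comparisons.
class Vec:

    def __init__(self, data, truth_value=None):
        self._data = tuple(data)
        self._truth_value = truth_value

    def binary_operator(self, other, op, compare_for_truth=False):
        if isinstance(other, Vec):
            if len(self._data) != len(other._data):
                raise IndexError('Both Vecs must have the same length.')
            # Comparisons also record a lexicographic (tuple) truth value.
            truth = op(self._data, other._data) if compare_for_truth else None
            return Vec((op(x, y) for x, y in zip(self._data, other._data)),
                       truth)
        # A non-Vec operand broadcasts to every element.
        return Vec(op(x, other) for x in self._data)

    def __add__(self, other):
        return self.binary_operator(other, operator.add)

    def __lt__(self, other):
        return self.binary_operator(other, operator.lt, compare_for_truth=True)

    def __bool__(self):
        if self._truth_value is None:
            raise TypeError('Only comparison results have a truth value.')
        return self._truth_value
```

Here `Vec((1, 5)) < Vec((2, 0))` is elementwise `(True, False)`, yet is truthy overall because `(1, 5) < (2, 0)` lexicographically.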

def reverse_binary_operator( self, other, op: Callable[..., ~U], *args, **kwargs) -> Vector[~U]:
255    def reverse_binary_operator(self, other, op: Callable[..., U], *args,
256                                **kwargs) -> 'Vector[U]':
257        """Reverse version of `binary_operator()`."""
258        if isinstance(other, Vector):
259            if len(self) == len(other):
260                return Vector(
261                    op(y, x, *args, **kwargs) for x, y in zip(self, other))
262            else:
263                raise IndexError(
264                    f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
265                )
266        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
267            return NotImplemented  # delegate to the other
268        else:
269            return Vector(op(other, x, *args, **kwargs) for x in self)

Reverse version of binary_operator().

def append(self, other) -> Vector:
385    def append(self, other) -> 'Vector':
386        return Vector(self._data + (other, ))
def concatenate(self, other: Iterable) -> Vector:
388    def concatenate(self, other: 'Iterable') -> 'Vector':
389        return Vector(itertools.chain(self, other))
Inherited Members
Outcome
Outcome
class Symbols(typing.Mapping[str, int]):
 16class Symbols(Mapping[str, int]):
 17    """EXPERIMENTAL: Immutable multiset of single characters.
 18
 19    Spaces, dashes, and underscores cannot be used as symbols.
 20
 21    Operations include:
 22
 23    | Operation                   | Count / notes                      |
 24    |:----------------------------|:-----------------------------------|
 25    | `additive_union`, `+`       | `l + r`                            |
 26    | `difference`, `-`           | `l - r`                            |
 27    | `intersection`, `&`         | `min(l, r)`                        |
 28    | `union`, `\\|`               | `max(l, r)`                        |
 29    | `symmetric_difference`, `^` | `abs(l - r)`                       |
 30    | `multiply_counts`, `*`      | `count * n`                        |
 31    | `divide_counts`, `//`       | `count // n`                       |
 32    | `issubset`, `<=`            | all counts l <= r                  |
 33    | `issuperset`, `>=`          | all counts l >= r                  |
 34    | `==`                        | all counts l == r                  |
 35    | `!=`                        | any count l != r                   |
 36    | unary `+`                   | drop all negative counts           |
 37    | unary `-`                   | reverses the sign of all counts    |
 38
 39    `<` and `>` are lexicographic orderings rather than subset relations.
 40    Specifically, they compare the count of each character in alphabetical
 41    order. For example:
 42    * `'a' > ''` since one `'a'` is more than zero `'a'`s.
 43    * `'a' > 'bb'` since `'a'` is compared first.
 44    * `'-a' < 'bb'` since the left side has -1 `'a'`s.
 45    * `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.
 46    
 47    Binary operators other than `*` and `//` implicitly convert the other
 48    argument to `Symbols` using the constructor.
 49
 50    Subscripting with a single character returns the count of that character
 51    as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`.
 52    You can also access it as an attribute, e.g.  `symbols.a`.
 53
 54    Subscripting with multiple characters returns a `Symbols` with only those
 55    characters, dropping the rest.
 56    E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`.
 57    Again you can also access it as an attribute, e.g. `symbols.ab`.
 58    This is useful for reducing the outcome space, which reduces computational
 59    cost for further operations. If you want to keep only a single character
 60    while keeping the type as `Symbols`, you can subscript with that character
 61    plus an unused character.
 62
 63    Subscripting with duplicate characters currently has no further effect, but
 64    this may change in the future.
 65
 66    `Population.marginals` forwards attribute access, so you can use e.g.
 67    `die.marginals.a` to get the marginal distribution of `a`s.
 68
 69    Note that attribute access only works with valid identifiers,
 70    so e.g. emojis would need to use the subscript method.
 71    """
 72    _data: Mapping[str, int]
 73
 74    def __new__(cls,
 75                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
 76        """Constructor.
 77        
 78        The argument can be a string, an iterable of characters, or a mapping of
 79        characters to counts.
 80        
 81        If the argument is a string, negative symbols can be specified using a
 82        minus sign optionally surrounded by whitespace. For example,
 83        `a - b` has one positive a and one negative b.
 84        """
 85        self = super(Symbols, cls).__new__(cls)
 86        if isinstance(symbols, str):
 87            data: MutableMapping[str, int] = defaultdict(int)
 88            positive, *negative = re.split(r'\s*-\s*', symbols)
 89            for s in positive:
 90                data[s] += 1
 91            if len(negative) > 1:
 92                raise ValueError('Multiple dashes not allowed.')
 93            if len(negative) == 1:
 94                for s in negative[0]:
 95                    data[s] -= 1
 96        elif isinstance(symbols, Mapping):
 97            data = defaultdict(int, symbols)
 98        else:
 99            data = defaultdict(int)
100            for s in symbols:
101                data[s] += 1
102
103        for s in data:
104            if len(s) != 1:
105                raise ValueError(f'Symbol {s} is not a single character.')
106            if re.match(r'[\s_-]', s):
107                raise ValueError(
108                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')
109
110        self._data = defaultdict(int,
111                                 {k: data[k]
112                                  for k in sorted(data.keys())})
113
114        return self
115
116    @classmethod
117    def _new_raw(cls, data: defaultdict[str, int]) -> 'Symbols':
118        self = super(Symbols, cls).__new__(cls)
119        self._data = data
120        return self
121
122    # Mapping interface.
123
124    def __getitem__(self, key: str) -> 'int | Symbols':  # type: ignore
125        if len(key) == 1:
126            return self._data[key]
127        else:
128            return Symbols._new_raw(
129                defaultdict(int, {s: self._data[s]
130                                  for s in key}))
131
132    def __getattr__(self, key: str) -> 'int | Symbols':
133        if key[0] == '_':
134            raise AttributeError(key)
135        return self[key]
136
137    def __iter__(self) -> Iterator[str]:
138        return iter(self._data)
139
140    def __len__(self) -> int:
141        return len(self._data)
142
143    # Binary operators.
144
145    def additive_union(self, *args:
146                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
147        """The sum of counts of each symbol."""
148        return functools.reduce(operator.add, args, self)
149
150    def __add__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
151        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
152            return NotImplemented  # delegate to the other
153        data = defaultdict(int, self._data)
154        for s, count in Symbols(other).items():
155            data[s] += count
156        return Symbols._new_raw(data)
157
158    __radd__ = __add__
159
160    def difference(self, *args:
161                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
162        """The difference between the counts of each symbol."""
163        return functools.reduce(operator.sub, args, self)
164
165    def __sub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
166        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
167            return NotImplemented  # delegate to the other
168        data = defaultdict(int, self._data)
169        for s, count in Symbols(other).items():
170            data[s] -= count
171        return Symbols._new_raw(data)
172
173    def __rsub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
174        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
175            return NotImplemented  # delegate to the other
176        data = defaultdict(int, Symbols(other)._data)
177        for s, count in self.items():
178            data[s] -= count
179        return Symbols._new_raw(data)
180
181    def intersection(self, *args:
182                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
183        """The min count of each symbol."""
184        return functools.reduce(operator.and_, args, self)
185
186    def __and__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
187        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
188            return NotImplemented  # delegate to the other
189        data: defaultdict[str, int] = defaultdict(int)
190        for s, count in Symbols(other).items():
191            data[s] = min(self.get(s, 0), count)
192        return Symbols._new_raw(data)
193
194    __rand__ = __and__
195
196    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
197        """The max count of each symbol."""
198        return functools.reduce(operator.or_, args, self)
199
200    def __or__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
201        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
202            return NotImplemented  # delegate to the other
203        data = defaultdict(int, self._data)
204        for s, count in Symbols(other).items():
205            data[s] = max(data[s], count)
206        return Symbols._new_raw(data)
207
208    __ror__ = __or__
209
210    def symmetric_difference(
211            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
212        """The absolute difference in symbol counts between the two sets."""
213        return self ^ other
214
215    def __xor__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
216        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
217            return NotImplemented  # delegate to the other
218        data = defaultdict(int, self._data)
219        for s, count in Symbols(other).items():
220            data[s] = abs(data[s] - count)
221        return Symbols._new_raw(data)
222
223    __rxor__ = __xor__
224
225    def multiply_counts(self, other: int) -> 'Symbols':
226        """Multiplies all counts by an integer."""
227        return self * other
228
229    def __mul__(self, other: int) -> 'Symbols':
230        if not isinstance(other, int):
231            return NotImplemented
232        data = defaultdict(int, {
233            s: count * other
234            for s, count in self.items()
235        })
236        return Symbols._new_raw(data)
237
238    __rmul__ = __mul__
239
240    def divide_counts(self, other: int) -> 'Symbols':
241        """Divides all counts by an integer, rounding down."""
242        data = defaultdict(int, {
243            s: count // other
244            for s, count in self.items()
245        })
246        return Symbols._new_raw(data)
247
248    def count_subset(self,
249                     divisor: Iterable[str] | Mapping[str, int],
250                     *,
251                     empty_divisor: int | None = None) -> int:
252        """The number of times the divisor is contained in this multiset."""
253        if not isinstance(divisor, Mapping):
254            divisor = Counter(divisor)
255        result = None
256        for s, count in divisor.items():
257            current = self._data[s] // count
258            if result is None or current < result:
259                result = current
260        if result is None:
261            if empty_divisor is None:
262                raise ZeroDivisionError('Divisor is empty.')
263            else:
264                return empty_divisor
265        else:
266            return result
267
268    @overload
269    def __floordiv__(self, other: int) -> 'Symbols':
270        """Same as divide_counts()."""
271
272    @overload
273    def __floordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
274        """Same as count_subset()."""
275
276    @overload
277    def __floordiv__(
278            self,
279            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
280        ...
281
282    def __floordiv__(
283            self,
284            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
285        if isinstance(other, int):
286            return self.divide_counts(other)
287        elif isinstance(other, Iterable):
288            return self.count_subset(other)
289        else:
290            return NotImplemented
291
292    def __rfloordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
293        return Symbols(other).count_subset(self)
294
295    def modulo_counts(self, other: int) -> 'Symbols':
296        return self % other
297
298    def __mod__(self, other: int) -> 'Symbols':
299        if not isinstance(other, int):
300            return NotImplemented
301        data = defaultdict(int, {
302            s: count % other
303            for s, count in self.items()
304        })
305        return Symbols._new_raw(data)
306
307    def __lt__(self, other: 'Symbols') -> bool:
308        if not isinstance(other, Symbols):
309            return NotImplemented
310        keys = sorted(set(self.keys()) | set(other.keys()))
311        for k in keys:
312            if self[k] < other[k]:  # type: ignore
313                return True
314            if self[k] > other[k]:  # type: ignore
315                return False
316        return False
317
318    def __gt__(self, other: 'Symbols') -> bool:
319        if not isinstance(other, Symbols):
320            return NotImplemented
321        keys = sorted(set(self.keys()) | set(other.keys()))
322        for k in keys:
323            if self[k] > other[k]:  # type: ignore
324                return True
325            if self[k] < other[k]:  # type: ignore
326                return False
327        return False
328
329    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
330        """Whether `self` is a subset of the other.
331
332        Same as `<=`.
333        
334        Note that the `<` and `>` operators are lexicographic orderings,
335        not proper subset relations.
336        """
337        return self <= other
338
339    def __le__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
340        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
341            return NotImplemented  # delegate to the other
342        other = Symbols(other)
343        return all(self[s] <= other[s]  # type: ignore
344                   for s in itertools.chain(self, other))
345
346    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
347        """Whether `self` is a superset of the other.
348
349        Same as `>=`.
350        
351        Note that the `<` and `>` operators are lexicographic orderings,
352        not proper subset relations.
353        """
354        return self >= other
355
356    def __ge__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
357        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
358            return NotImplemented  # delegate to the other
359        other = Symbols(other)
360        return all(self[s] >= other[s]  # type: ignore
361                   for s in itertools.chain(self, other))
362
363    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
364        """Whether `self` has any positive elements in common with the other.
365        
366        Raises:
367            ValueError if either has negative elements.
368        """
369        other = Symbols(other)
370        return any(self[s] > 0 and other[s] > 0  # type: ignore
371                   for s in self)
372
373    def __eq__(self, other) -> bool:
374        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
375            return NotImplemented  # delegate to the other
376        try:
377            other = Symbols(other)
378        except ValueError:
379            return NotImplemented
380        return all(self[s] == other[s]  # type: ignore
381                   for s in itertools.chain(self, other))
382
383    def __ne__(self, other) -> bool:
384        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
385            return NotImplemented  # delegate to the other
386        try:
387            other = Symbols(other)
388        except ValueError:
389            return NotImplemented
390        return any(self[s] != other[s]  # type: ignore
391                   for s in itertools.chain(self, other))
392
393    # Unary operators.
394
395    def has_negative_counts(self) -> bool:
396        """Whether any counts are negative."""
397        return any(c < 0 for c in self.values())
398
399    def __pos__(self) -> 'Symbols':
400        data = defaultdict(int, {
401            s: count
402            for s, count in self.items() if count > 0
403        })
404        return Symbols._new_raw(data)
405
406    def __neg__(self) -> 'Symbols':
407        data = defaultdict(int, {s: -count for s, count in self.items()})
408        return Symbols._new_raw(data)
409
410    @cached_property
411    def _hash(self) -> int:
412        return hash((Symbols, str(self)))
413
414    def __hash__(self) -> int:
415        return self._hash
416
417    def size(self) -> int:
418        """The total number of elements."""
419        return sum(self._data.values())
420
421    @cached_property
422    def _str(self) -> str:
423        sorted_keys = sorted(self)
424        positive = ''.join(s * self._data[s] for s in sorted_keys
425                           if self._data[s] > 0)
426        negative = ''.join(s * -self._data[s] for s in sorted_keys
427                           if self._data[s] < 0)
428        if positive:
429            if negative:
430                return positive + ' - ' + negative
431            else:
432                return positive
433        else:
434            if negative:
435                return '-' + negative
436            else:
437                return ''
438
439    def __str__(self) -> str:
440        """All symbols in unary form (i.e. including duplicates) in ascending order.
441
442        If there are negative elements, they are listed following a ` - ` sign.
443        """
444        return self._str
445
446    def __repr__(self) -> str:
447        return type(self).__qualname__ + f"('{str(self)}')"

EXPERIMENTAL: Immutable multiset of single characters.

Spaces, dashes, and underscores cannot be used as symbols.

Operations include:

| Operation                 | Count / notes                   |
|:--------------------------|:--------------------------------|
| additive_union, +         | l + r                           |
| difference, -             | l - r                           |
| intersection, &           | min(l, r)                       |
| union, \|                 | max(l, r)                       |
| symmetric_difference, ^   | abs(l - r)                      |
| multiply_counts, *        | count * n                       |
| divide_counts, //         | count // n                      |
| issubset, <=              | all counts l <= r               |
| issuperset, >=            | all counts l >= r               |
| ==                        | all counts l == r               |
| !=                        | any count l != r                |
| unary +                   | drop all negative counts        |
| unary -                   | reverses the sign of all counts |

< and > are lexicographic orderings rather than subset relations. Specifically, they compare the count of each character in alphabetical order. For example:

  • 'a' > '' since one 'a' is more than zero 'a's.
  • 'a' > 'bb' since 'a' is compared first.
  • '-a' < 'bb' since the left side has -1 'a's.
  • 'a' < 'ab' since the 'a's are equal but the right side has more 'b's.

Binary operators other than * and // implicitly convert the other argument to Symbols using the constructor.

Subscripting with a single character returns the count of that character as an int. E.g. symbols['a'] -> number of as as an int. You can also access it as an attribute, e.g. symbols.a.

Subscripting with multiple characters returns a Symbols with only those characters, dropping the rest. E.g. symbols['ab'] -> number of as and bs as a Symbols. Again you can also access it as an attribute, e.g. symbols.ab. This is useful for reducing the outcome space, which reduces computational cost for further operations. If you want to keep only a single character while keeping the type as Symbols, you can subscript with that character plus an unused character.

Subscripting with duplicate characters currently has no further effect, but this may change in the future.

Population.marginals forwards attribute access, so you can use e.g. die.marginals.a to get the marginal distribution of as.

Note that attribute access only works with valid identifiers, so e.g. emojis would need to use the subscript method.
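The subscripting and attribute-access rules above can be sketched with a Counter-based stand-in (a hypothetical `Syms`, not the real `Symbols` class):

```python
from collections import Counter

# Hypothetical Syms stand-in mirroring the documented subscript semantics.
class Syms(Counter):

    def __getitem__(self, key):
        if len(key) == 1:
            # A single character gives its count as an int (0 if absent).
            return super().__getitem__(key)
        # Multiple characters keep only those symbols, dropping the rest.
        return Syms({s: super(Syms, self).__getitem__(s) for s in key})

    def __getattr__(self, key):
        # Forward attribute access to subscripting, e.g. syms.a, syms.ab.
        if key.startswith('_'):
            raise AttributeError(key)
        return self[key]
```

For example, `Syms('aabc').a` gives the count of a's as an int, while `Syms('aabc').ab` keeps only the a and b counts as a smaller multiset.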

Symbols(symbols: Union[str, Iterable[str], Mapping[str, int]])
 74    def __new__(cls,
 75                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
 76        """Constructor.
 77        
 78        The argument can be a string, an iterable of characters, or a mapping of
 79        characters to counts.
 80        
 81        If the argument is a string, negative symbols can be specified using a
 82        minus sign optionally surrounded by whitespace. For example,
 83        `a - b` has one positive a and one negative b.
 84        """
 85        self = super(Symbols, cls).__new__(cls)
 86        if isinstance(symbols, str):
 87            data: MutableMapping[str, int] = defaultdict(int)
 88            positive, *negative = re.split(r'\s*-\s*', symbols)
 89            for s in positive:
 90                data[s] += 1
 91            if len(negative) > 1:
 92                raise ValueError('Multiple dashes not allowed.')
 93            if len(negative) == 1:
 94                for s in negative[0]:
 95                    data[s] -= 1
 96        elif isinstance(symbols, Mapping):
 97            data = defaultdict(int, symbols)
 98        else:
 99            data = defaultdict(int)
100            for s in symbols:
101                data[s] += 1
102
103        for s in data:
104            if len(s) != 1:
105                raise ValueError(f'Symbol {s} is not a single character.')
106            if re.match(r'[\s_-]', s):
107                raise ValueError(
108                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')
109
110        self._data = defaultdict(int,
111                                 {k: data[k]
112                                  for k in sorted(data.keys())})
113
114        return self

Constructor.

The argument can be a string, an iterable of characters, or a mapping of characters to counts.

If the argument is a string, negative symbols can be specified using a minus sign optionally surrounded by whitespace. For example, a - b has one positive a and one negative b.
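The string form described above can be sketched with stdlib tools (a standalone helper mirroring the documented parsing, not the library constructor; symbol validation is omitted):

```python
import re
from collections import Counter

# Sketch of the documented string parsing: characters before an optional
# minus sign count +1 each, characters after it count -1 each.
def parse_symbols(text):
    positive, *negative = re.split(r'\s*-\s*', text)
    counts = Counter(positive)
    if len(negative) > 1:
        raise ValueError('Multiple dashes not allowed.')
    if negative:
        counts.subtract(negative[0])
    return dict(counts)
```

For example, `parse_symbols('aab - b')` yields two positive a's and a b count of zero.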

def additive_union( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
145    def additive_union(self, *args:
146                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
147        """The sum of counts of each symbol."""
148        return functools.reduce(operator.add, args, self)

The sum of counts of each symbol.

def difference( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
160    def difference(self, *args:
161                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
162        """The difference between the counts of each symbol."""
163        return functools.reduce(operator.sub, args, self)

The difference between the counts of each symbol.

def intersection( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
181    def intersection(self, *args:
182                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
183        """The min count of each symbol."""
184        return functools.reduce(operator.and_, args, self)

The min count of each symbol.

def union( self, *args: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
196    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
197        """The max count of each symbol."""
198        return functools.reduce(operator.or_, args, self)

The max count of each symbol.

def symmetric_difference( self, other: Union[Iterable[str], Mapping[str, int]]) -> Symbols:
210    def symmetric_difference(
211            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
212        """The absolute difference in symbol counts between the two sets."""
213        return self ^ other

The absolute difference in symbol counts between the two sets.

def multiply_counts(self, other: int) -> Symbols:
225    def multiply_counts(self, other: int) -> 'Symbols':
226        """Multiplies all counts by an integer."""
227        return self * other

Multiplies all counts by an integer.

def divide_counts(self, other: int) -> Symbols:
240    def divide_counts(self, other: int) -> 'Symbols':
241        """Divides all counts by an integer, rounding down."""
242        data = defaultdict(int, {
243            s: count // other
244            for s, count in self.items()
245        })
246        return Symbols._new_raw(data)

Divides all counts by an integer, rounding down.

def count_subset( self, divisor: Union[Iterable[str], Mapping[str, int]], *, empty_divisor: int | None = None) -> int:
248    def count_subset(self,
249                     divisor: Iterable[str] | Mapping[str, int],
250                     *,
251                     empty_divisor: int | None = None) -> int:
252        """The number of times the divisor is contained in this multiset."""
253        if not isinstance(divisor, Mapping):
254            divisor = Counter(divisor)
255        result = None
256        for s, count in divisor.items():
257            current = self._data[s] // count
258            if result is None or current < result:
259                result = current
260        if result is None:
261            if empty_divisor is None:
262                raise ZeroDivisionError('Divisor is empty.')
263            else:
264                return empty_divisor
265        else:
266            return result

The number of times the divisor is contained in this multiset.
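The semantics shown in the source above can be sketched with a stdlib helper (a standalone function, not the library method):

```python
from collections import Counter

# Sketch of count_subset: the divisor fits as many times as its scarcest
# symbol allows, i.e. the minimum over symbols of count // divisor count.
def count_subset(multiset, divisor, *, empty_divisor=None):
    m, d = Counter(multiset), Counter(divisor)
    if not d:
        if empty_divisor is None:
            raise ZeroDivisionError('Divisor is empty.')
        return empty_divisor
    return min(m[s] // c for s, c in d.items())
```

For example, `count_subset('aaabb', 'ab')` is 2, since there are three a's but only two b's.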

def modulo_counts(self, other: int) -> Symbols:
295    def modulo_counts(self, other: int) -> 'Symbols':
296        return self % other
def issubset(self, other: Union[Iterable[str], Mapping[str, int]]) -> bool:
329    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
330        """Whether `self` is a subset of the other.
331
332        Same as `<=`.
333        
334        Note that the `<` and `>` operators are lexicographic orderings,
335        not proper subset relations.
336        """
337        return self <= other

Whether self is a subset of the other.

Same as <=.

Note that the < and > operators are lexicographic orderings, not proper subset relations.

def issuperset(self, other: Union[Iterable[str], Mapping[str, int]]) -> bool:
346    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
347        """Whether `self` is a superset of the other.
348
349        Same as `>=`.
350        
351        Note that the `<` and `>` operators are lexicographic orderings,
352        not proper subset relations.
353        """
354        return self >= other

Whether self is a superset of the other.

Same as >=.

Note that the < and > operators are lexicographic orderings, not proper subset relations.

def isdisjoint(self, other: Union[Iterable[str], Mapping[str, int]]) -> bool:
363    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
364        """Whether `self` has any positive elements in common with the other.
365        
366        Raises:
367            ValueError if either has negative elements.
368        """
369        other = Symbols(other)
370        return any(self[s] > 0 and other[s] > 0  # type: ignore
371                   for s in self)

Whether self has any positive elements in common with the other.

Raises:
  • ValueError if either has negative elements.
def has_negative_counts(self) -> bool:
395    def has_negative_counts(self) -> bool:
396        """Whether any counts are negative."""
397        return any(c < 0 for c in self.values())

Whether any counts are negative.

def size(self) -> int:
417    def size(self) -> int:
418        """The total number of elements."""
419        return sum(self._data.values())

The total number of elements.

Again: Final = <icepool.population.again.AgainExpression object>

A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the Die() constructor. AgainExpressions should not be fed to functions or methods other than Die(), but it can be used with operators. Examples:

  • Again + 6: Roll again and add 6.
  • Again + Again: Roll again twice and sum.

The again_count, again_depth, and again_end arguments to Die() affect how these arguments are processed. At most one of again_count or again_depth may be provided; if neither is provided, the behavior is as `again_depth=1`.

For finer control over rolling processes, use e.g. Die.map() instead.

Count mode

When again_count is provided, we start with one roll queued and execute one roll at a time. For every Again we roll, we queue another roll. If we run out of rolls, we sum the rolls to find the result. If the total number of rolls (not including the initial roll) would exceed again_count, we reroll the entire process, effectively conditioning the process on not rolling more than again_count extra dice.

This mode only allows "additive" expressions to be used with Again, which means that only the following operators are allowed:

  • Binary +
  • n @ AgainExpression, where n is a non-negative int or Population.

Furthermore, the + operator is assumed to be associative and commutative. For example, str or tuple outcomes will not produce elements with a definite order.

Depth mode

When again_depth=0, again_end is directly substituted for each occurrence of Again. For other values of again_depth, the result for again_depth-1 is substituted for each occurrence of Again.

If again_end=icepool.Reroll, then any AgainExpressions in the final depth are rerolled.

Rerolls

Reroll only rerolls that particular die, not the entire process. Any such rerolls do not count against the again_count or again_depth limit.

If again_end=icepool.Reroll:

  • Count mode: Any result that would cause the number of rolls to exceed again_count is rerolled.
  • Depth mode: Any AgainExpressions in the final depth level are rerolled.
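Depth mode can be sketched on plain {outcome: quantity} mappings (a simplified stand-in for Die; the function and names below are illustrative, not icepool API). Rolling the Again outcome adds another roll resolved at depth-1; at depth 0, Again becomes again_end:

```python
def explode_depth(base, again_outcome, depth, end):
    # Dice are {outcome: quantity} mappings; quantities multiply.
    # At depth 0, Again is replaced by `end`; otherwise by the result
    # for depth - 1, added onto the triggering roll.
    inner = {end: 1} if depth == 0 else explode_depth(
        base, again_outcome, depth - 1, end)
    denom_inner = sum(inner.values())
    result = {}
    for o, q in base.items():
        if o == again_outcome:
            for io, iq in inner.items():
                result[o + io] = result.get(o + io, 0) + q * iq
        else:
            result[o] = result.get(o, 0) + q * denom_inner
    return result

# A coin rolling 1 or 2, with Again on 2, depth 1, end 0:
exploded = explode_depth({1: 1, 2: 1}, 2, 1, 0)
```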
class CountsKeysView(typing.KeysView[~T], typing.Sequence[~T]):
144class CountsKeysView(KeysView[T], Sequence[T]):
145    """This functions as both a `KeysView` and a `Sequence`."""
146
147    def __init__(self, counts: Counts[T]):
148        self._mapping = counts
149
150    def __getitem__(self, index):
151        return self._mapping._keys[index]
152
153    def __len__(self) -> int:
154        return len(self._mapping)
155
156    def __eq__(self, other):
157        return self._mapping._keys == other

This functions as both a KeysView and a Sequence.

CountsKeysView(counts: icepool.collection.counts.Counts[~T])
147    def __init__(self, counts: Counts[T]):
148        self._mapping = counts
class CountsValuesView(typing.ValuesView[int], typing.Sequence[int]):
160class CountsValuesView(ValuesView[int], Sequence[int]):
161    """This functions as both a `ValuesView` and a `Sequence`."""
162
163    def __init__(self, counts: Counts):
164        self._mapping = counts
165
166    def __getitem__(self, index):
167        return self._mapping._values[index]
168
169    def __len__(self) -> int:
170        return len(self._mapping)
171
172    def __eq__(self, other):
173        return self._mapping._values == other

This functions as both a ValuesView and a Sequence.

CountsValuesView(counts: icepool.collection.counts.Counts)
163    def __init__(self, counts: Counts):
164        self._mapping = counts
class CountsItemsView(typing.ItemsView[~T, int], typing.Sequence[tuple[~T, int]]):
176class CountsItemsView(ItemsView[T, int], Sequence[tuple[T, int]]):
177    """This functions as both an `ItemsView` and a `Sequence`."""
178
179    def __init__(self, counts: Counts):
180        self._mapping = counts
181
182    def __getitem__(self, index):
183        return self._mapping._items[index]
184
185    def __eq__(self, other):
186        return self._mapping._items == other

This functions as both an ItemsView and a Sequence.

CountsItemsView(counts: icepool.collection.counts.Counts)
179    def __init__(self, counts: Counts):
180        self._mapping = counts
def from_cumulative( outcomes: Sequence[~T], cumulative: Union[Sequence[int], Sequence[Die[bool]]], *, reverse: bool = False) -> Die[~T]:
143def from_cumulative(outcomes: Sequence[T],
144                    cumulative: 'Sequence[int] | Sequence[icepool.Die[bool]]',
145                    *,
146                    reverse: bool = False) -> 'icepool.Die[T]':
147    """Constructs a `Die` from a sequence of cumulative values.
148
149    Args:
150        outcomes: The outcomes of the resulting die. Sorted order is recommended
151            but not necessary.
152        cumulative: The cumulative values (inclusive) of the outcomes in the
153            order they are given to this function. These may be:
154            * `int` cumulative quantities.
155            * Dice representing the cumulative distribution at that point.
156        reverse: Iff true, both of the arguments will be reversed. This allows
157            e.g. constructing using a survival distribution.
158    """
159    if len(outcomes) == 0:
160        return icepool.Die({})
161
162    if reverse:
163        outcomes = list(reversed(outcomes))
164        cumulative = list(reversed(cumulative))  # type: ignore
165
166    prev = 0
167    d = {}
168
169    if isinstance(cumulative[0], icepool.Die):
170        cumulative = commonize_denominator(*cumulative)
171        for outcome, die in zip(outcomes, cumulative):
172            d[outcome] = die.quantity('!=', False) - prev
173            prev = die.quantity('!=', False)
174    elif isinstance(cumulative[0], int):
175        cumulative = cast(Sequence[int], cumulative)
176        for outcome, quantity in zip(outcomes, cumulative):
177            d[outcome] = quantity - prev
178            prev = quantity
179    else:
180        raise TypeError(
181            f'Unsupported type {type(cumulative)} for cumulative values.')
182
183    return icepool.Die(d)

Constructs a Die from a sequence of cumulative values.

Arguments:
  • outcomes: The outcomes of the resulting die. Sorted order is recommended but not necessary.
  • cumulative: The cumulative values (inclusive) of the outcomes in the order they are given to this function. These may be:
    • int cumulative quantities.
    • Dice representing the cumulative distribution at that point.
  • reverse: Iff true, both of the arguments will be reversed. This allows e.g. constructing using a survival distribution.
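The int branch above amounts to differencing the inclusive cumulative quantities; a standalone sketch (hypothetical name, not icepool API):

```python
def pmf_from_cumulative(outcomes, cumulative, reverse=False):
    # Differencing inclusive cumulative quantities recovers the
    # per-outcome quantities, as in the int branch of from_cumulative.
    if reverse:
        outcomes = list(reversed(outcomes))
        cumulative = list(reversed(cumulative))
    d, prev = {}, 0
    for outcome, quantity in zip(outcomes, cumulative):
        d[outcome] = quantity - prev
        prev = quantity
    return d

# Cumulative [1, 3, 6] over outcomes [1, 2, 3] yields quantities 1, 2, 3.
```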
def from_rv( rv, outcomes: Union[Sequence[int], Sequence[float]], denominator: int, **kwargs) -> Union[Die[int], Die[float]]:
198def from_rv(rv, outcomes: Sequence[int] | Sequence[float], denominator: int,
199            **kwargs) -> 'icepool.Die[int] | icepool.Die[float]':
200    """Constructs a `Die` from a rv object (as `scipy.stats`).
201
202    This is done using the CDF.
203
204    Args:
205        rv: A rv object (as `scipy.stats`).
206        outcomes: An iterable of `int`s or `float`s that will be the outcomes
207            of the resulting `Die`.
208            If the distribution is discrete, outcomes must be `int`s.
209            Some outcomes may be omitted if their probability is too small
210            compared to the denominator.
211        denominator: The denominator of the resulting `Die` will be set to this.
212        **kwargs: These will be forwarded to `rv.cdf()`.
213    """
214    if hasattr(rv, 'pdf'):
215        # Continuous distributions use midpoints.
216        midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
217        cdf = rv.cdf(midpoints, **kwargs)
218        quantities_le = tuple(int(round(x * denominator))
219                              for x in cdf) + (denominator, )
220    else:
221        cdf = rv.cdf(outcomes, **kwargs)
222        quantities_le = tuple(int(round(x * denominator)) for x in cdf)
223    return from_cumulative(outcomes, quantities_le)

Constructs a Die from a rv object (as scipy.stats).

This is done using the CDF.

Arguments:
  • rv: A rv object (as scipy.stats).
  • outcomes: An iterable of ints or floats that will be the outcomes of the resulting Die. If the distribution is discrete, outcomes must be ints. Some outcomes may be omitted if their probability is too small compared to the denominator.
  • denominator: The denominator of the resulting Die will be set to this.
  • **kwargs: These will be forwarded to rv.cdf().
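The rounding step can be sketched for the discrete case, with the CDF supplied as plain floats (names hypothetical; no scipy needed):

```python
def quantities_from_cdf(outcomes, cdf_values, denominator):
    # Round each cumulative probability to an integer quantity out of
    # `denominator`, then difference, as from_rv feeds from_cumulative.
    quantities_le = [int(round(p * denominator)) for p in cdf_values]
    d, prev = {}, 0
    for outcome, q in zip(outcomes, quantities_le):
        d[outcome] = q - prev
        prev = q
    return d

# A symmetric three-outcome distribution with CDF 0.25 / 0.75 / 1.0:
die = quantities_from_cdf([-1, 0, 1], [0.25, 0.75, 1.0], 4)
```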
def pointwise_max( arg0, /, *more_args: Die[~T]) -> Die[~T]:
253def pointwise_max(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
254    """Selects the highest chance of rolling >= each outcome among the arguments.
255
256    Naming not finalized.
257
258    Specifically, for each outcome, the chance of the result rolling >= that
259    outcome is the same as the highest chance of rolling >= that outcome among
260    the arguments.
261
262    Equivalently, any quantile in the result is the highest of that quantile
263    among the arguments.
264
265    This is useful for selecting from several possible moves where you are
266    trying to get >= a threshold that is known but could change depending on the
267    situation.
268    
269    Args:
270        dice: Either an iterable of dice, or two or more dice as separate
271            arguments.
272    """
273    if len(more_args) == 0:
274        args = arg0
275    else:
276        args = (arg0, ) + more_args
277    args = commonize_denominator(*args)
278    outcomes = sorted_union(*args)
279    cumulative = [
280        min(die.quantity('<=', outcome) for die in args)
281        for outcome in outcomes
282    ]
283    return from_cumulative(outcomes, cumulative)

Selects the highest chance of rolling >= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling >= that outcome is the same as the highest chance of rolling >= that outcome among the arguments.

Equivalently, any quantile in the result is the highest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get >= a threshold that is known but could change depending on the situation.

Arguments:
  • dice: Either an iterable of dice, or two or more dice as separate arguments.
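On {outcome: quantity} mappings sharing a denominator, the construction can be sketched as taking the minimum of the <= quantities, which maximizes the chance of rolling >= each outcome (names illustrative):

```python
def pointwise_max_pmf(*dice):
    # Dice as {outcome: quantity} with a common denominator. The minimum
    # of the "<= outcome" quantities gives the highest chance of rolling
    # >= each outcome, as pointwise_max does via from_cumulative.
    outcomes = sorted(set().union(*dice))

    def quantity_le(die, outcome):
        return sum(q for o, q in die.items() if o <= outcome)

    cumulative = [min(quantity_le(d, o) for d in dice) for o in outcomes]
    result, prev = {}, 0
    for o, q in zip(outcomes, cumulative):
        if q - prev:
            result[o] = q - prev
        prev = q
    return result
```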
def pointwise_min( arg0, /, *more_args: Die[~T]) -> Die[~T]:
300def pointwise_min(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
301    """Selects the highest chance of rolling <= each outcome among the arguments.
302
303    Naming not finalized.
304    
305    Specifically, for each outcome, the chance of the result rolling <= that
306    outcome is the same as the highest chance of rolling <= that outcome among
307    the arguments.
308
309    Equivalently, any quantile in the result is the lowest of that quantile
310    among the arguments.
311
312    This is useful for selecting from several possible moves where you are
313    trying to get <= a threshold that is known but could change depending on the
314    situation.
315    
316    Args:
317        dice: Either an iterable of dice, or two or more dice as separate
318            arguments.
319    """
320    if len(more_args) == 0:
321        args = arg0
322    else:
323        args = (arg0, ) + more_args
324    args = commonize_denominator(*args)
325    outcomes = sorted_union(*args)
326    cumulative = [
327        max(die.quantity('<=', outcome) for die in args)
328        for outcome in outcomes
329    ]
330    return from_cumulative(outcomes, cumulative)

Selects the highest chance of rolling <= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling <= that outcome is the same as the highest chance of rolling <= that outcome among the arguments.

Equivalently, any quantile in the result is the lowest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get <= a threshold that is known but could change depending on the situation.

Arguments:
  • dice: Either an iterable of dice, or two or more dice as separate arguments.
def lowest( arg0, /, *more_args: Union[~T, Die[~T]], keep: int | None = None, drop: int | None = None, default: Optional[~T] = None) -> Die[~T]:
 99def lowest(arg0,
100           /,
101           *more_args: 'T | icepool.Die[T]',
102           keep: int | None = None,
103           drop: int | None = None,
104           default: T | None = None) -> 'icepool.Die[T]':
105    """The lowest outcome among the rolls, or the sum of some of the lowest.
106
107    The outcomes should support addition and multiplication if `keep != 1`.
108
109    Args:
110        args: Dice or individual outcomes in a single iterable, or as two or
111            more separate arguments. Similar to the built-in `min()`.
112        keep, drop: These arguments work together:
113            * If neither are provided, the single lowest die will be taken.
114            * If only `keep` is provided, the `keep` lowest dice will be summed.
115            * If only `drop` is provided, the `drop` lowest dice will be dropped
116                and the rest will be summed.
117            * If both are provided, `drop` lowest dice will be dropped, then
118                the next `keep` lowest dice will be summed.
119        default: If an empty iterable is provided, the result will be a die that
120            always rolls this value.
121
122    Raises:
123        ValueError if an empty iterable is provided with no `default`.
124    """
125    if len(more_args) == 0:
126        args = arg0
127    else:
128        args = (arg0, ) + more_args
129
130    if len(args) == 0:
131        if default is None:
132            raise ValueError(
133                "lowest() arg is an empty sequence and no default was provided."
134            )
135        else:
136            return icepool.Die([default])
137
138    index_slice = lowest_slice(keep, drop)
139    return _sum_slice(*args, index_slice=index_slice)

The lowest outcome among the rolls, or the sum of some of the lowest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in min().
  • keep, drop: These arguments work together:
    • If neither are provided, the single lowest die will be taken.
    • If only keep is provided, the keep lowest dice will be summed.
    • If only drop is provided, the drop lowest dice will be dropped and the rest will be summed.
    • If both are provided, drop lowest dice will be dropped, then the next keep lowest dice will be summed.
  • default: If an empty iterable is provided, the result will be a die that always rolls this value.
Raises:
  • ValueError if an empty iterable is provided with no default.
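On a single fixed roll, the keep/drop rules reduce to a slice of the sorted values; a sketch (illustrative only, since the real function works on dice):

```python
def sum_lowest(rolls, keep=None, drop=None):
    # Mirrors the keep/drop rules of lowest(): sort ascending, drop the
    # `drop` lowest, then sum the next `keep` lowest.
    if keep is None and drop is None:
        keep = 1  # neither provided: the single lowest value
    ordered = sorted(rolls)[(drop or 0):]
    if keep is not None:
        ordered = ordered[:keep]
    return sum(ordered)
```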
def highest( arg0, /, *more_args: Union[~T, Die[~T]], keep: int | None = None, drop: int | None = None, default: Optional[~T] = None) -> Die[~T]:
153def highest(arg0,
154            /,
155            *more_args: 'T | icepool.Die[T]',
156            keep: int | None = None,
157            drop: int | None = None,
158            default: T | None = None) -> 'icepool.Die[T]':
159    """The highest outcome among the rolls, or the sum of some of the highest.
160
161    The outcomes should support addition and multiplication if `keep != 1`.
162
163    Args:
164        args: Dice or individual outcomes in a single iterable, or as two or
165            more separate arguments. Similar to the built-in `max()`.
166        keep, drop: These arguments work together:
167            * If neither are provided, the single highest die will be taken.
168            * If only `keep` is provided, the `keep` highest dice will be summed.
169            * If only `drop` is provided, the `drop` highest dice will be dropped
170                and the rest will be summed.
171            * If both are provided, `drop` highest dice will be dropped, then
172                the next `keep` highest dice will be summed.
175        default: If an empty iterable is provided, the result will be a die that
176            always rolls this value.
177
178    Raises:
179        ValueError if an empty iterable is provided with no `default`.
180    """
181    if len(more_args) == 0:
182        args = arg0
183    else:
184        args = (arg0, ) + more_args
185
186    if len(args) == 0:
187        if default is None:
188            raise ValueError(
189                "highest() arg is an empty sequence and no default was provided."
190            )
191        else:
192            return icepool.Die([default])
193
194    index_slice = highest_slice(keep, drop)
195    return _sum_slice(*args, index_slice=index_slice)

The highest outcome among the rolls, or the sum of some of the highest.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in max().
  • keep, drop: These arguments work together:
    • If neither are provided, the single highest die will be taken.
    • If only keep is provided, the keep highest dice will be summed.
    • If only drop is provided, the drop highest dice will be dropped and the rest will be summed.
    • If both are provided, drop highest dice will be dropped, then the next keep highest dice will be summed.
  • default: If an empty iterable is provided, the result will be a die that always rolls this value.
Raises:
  • ValueError if an empty iterable is provided with no default.
def middle( arg0, /, *more_args: Union[~T, Die[~T]], keep: int = 1, tie: Literal['error', 'high', 'low'] = 'error', default: Optional[~T] = None) -> Die[~T]:
209def middle(arg0,
210           /,
211           *more_args: 'T | icepool.Die[T]',
212           keep: int = 1,
213           tie: Literal['error', 'high', 'low'] = 'error',
214           default: T | None = None) -> 'icepool.Die[T]':
215    """The middle of the outcomes among the rolls, or the sum of some of the middle.
216
217    The outcomes should support addition and multiplication if `keep != 1`.
218
219    Args:
220        args: Dice or individual outcomes in a single iterable, or as two or
221            more separate arguments.
222        keep: The number of outcomes to sum.
223        tie: What to do if `keep` is odd but the number of args is even, or
224            vice versa.
225            * 'error' (default): Raises `IndexError`.
226            * 'high': The higher outcome is taken.
227            * 'low': The lower outcome is taken.
228        default: If an empty iterable is provided, the result will be a die that
229            always rolls this value.
230
231    Raises:
232        ValueError if an empty iterable is provided with no `default`.
233    """
234    if len(more_args) == 0:
235        args = arg0
236    else:
237        args = (arg0, ) + more_args
238
239    if len(args) == 0:
240        if default is None:
241            raise ValueError(
242                "middle() arg is an empty sequence and no default was provided."
243            )
244        else:
245            return icepool.Die([default])
246
247    # Expression evaluators are difficult to type.
248    return icepool.Pool(args).middle(keep, tie=tie).sum()  # type: ignore

The middle of the outcomes among the rolls, or the sum of some of the middle.

The outcomes should support addition and multiplication if keep != 1.

Arguments:
  • args: Dice or individual outcomes in a single iterable, or as two or more separate arguments.
  • keep: The number of outcomes to sum.
  • tie: What to do if keep is odd but the number of args is even, or vice versa.
    • 'error' (default): Raises IndexError.
    • 'high': The higher outcome is taken.
    • 'low': The lower outcome is taken.
  • default: If an empty iterable is provided, the result will be a die that always rolls this value.
Raises:
  • ValueError if an empty iterable is provided with no default.
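For a fixed roll, the middle selection can be sketched as a centered slice of the sorted values, with `tie` shifting the window when the parities disagree (illustrative names):

```python
def sum_middle(rolls, keep=1, tie='error'):
    # Sort, then take the centered window of `keep` elements. When the
    # parities of len(rolls) and keep differ, `tie` decides whether the
    # window shifts toward the high or the low side.
    ordered = sorted(rolls)
    start, remainder = divmod(len(ordered) - keep, 2)
    if remainder:
        if tie == 'error':
            raise IndexError('No unique middle window.')
        if tie == 'high':
            start += 1
        # tie == 'low' keeps the lower window.
    return sum(ordered[start:start + keep])
```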
def min_outcome( *args: Union[Iterable[Union[~T, Population[~T]]], ~T]) -> ~T:
343def min_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
344    """The minimum possible outcome among the populations.
345    
346    Args:
347        *args: Populations or single outcomes. Alternatively, a single iterable argument of such.
348    """
349    return min(_iter_outcomes(*args))

The minimum possible outcome among the populations.

Arguments:
  • Populations or single outcomes. Alternatively, a single iterable argument of such.
def max_outcome( *args: Union[Iterable[Union[~T, Population[~T]]], ~T]) -> ~T:
362def max_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
363    """The maximum possible outcome among the populations.
364    
365    Args:
366        *args: Populations or single outcomes. Alternatively, a single iterable argument of such.
367    """
368    return max(_iter_outcomes(*args))

The maximum possible outcome among the populations.

Arguments:
  • Populations or single outcomes. Alternatively, a single iterable argument of such.
def consecutive(*args: Iterable[int]) -> Sequence[int]:
371def consecutive(*args: Iterable[int]) -> Sequence[int]:
372    """A minimal sequence of consecutive ints covering the argument sets."""
373    start = min((x for x in itertools.chain(*args)), default=None)
374    if start is None:
375        return ()
376    stop = max(x for x in itertools.chain(*args))
377    return tuple(range(start, stop + 1))

A minimal sequence of consecutive ints covering the argument sets.

def sorted_union(*args: Iterable[~T]) -> tuple[~T, ...]:
380def sorted_union(*args: Iterable[T]) -> tuple[T, ...]:
381    """Merge sets into a sorted sequence."""
382    if not args:
383        return ()
384    return tuple(sorted(set.union(*(set(arg) for arg in args))))

Merge sets into a sorted sequence.

def commonize_denominator( *dice: Union[~T, Die[~T]]) -> tuple[Die[~T], ...]:
387def commonize_denominator(
388        *dice: 'T | icepool.Die[T]') -> tuple['icepool.Die[T]', ...]:
389    """Scale the quantities of the dice so that all of them have the same denominator.
390
391    The denominator is the LCM of the denominators of the arguments.
392
393    Args:
394        *dice: Any number of dice or single outcomes convertible to dice.
395
396    Returns:
397        A tuple of dice with the same denominator.
398    """
399    converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
400    denominator_lcm = math.lcm(*(die.denominator() for die in converted_dice
401                                 if die.denominator() > 0))
402    return tuple(
403        die.multiply_quantities(denominator_lcm //
404                                die.denominator() if die.denominator() >
405                                0 else 1) for die in converted_dice)

Scale the quantities of the dice so that all of them have the same denominator.

The denominator is the LCM of the denominators of the arguments.

Arguments:
  • *dice: Any number of dice or single outcomes convertible to dice.
Returns:
  • A tuple of dice with the same denominator.
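The scaling can be sketched on {outcome: quantity} mappings standing in for dice (names illustrative):

```python
import math

def commonize(*dice):
    # Scale each die so every total equals the LCM of the denominators,
    # leaving each die's probabilities unchanged.
    denoms = [sum(d.values()) for d in dice]
    lcm = math.lcm(*(n for n in denoms if n > 0))
    return tuple(
        {o: q * (lcm // n) for o, q in d.items()} if n > 0 else dict(d)
        for d, n in zip(dice, denoms))

# A coin (denominator 2) and a d3 (denominator 3) both scale to 6.
coin6, d3x6 = commonize({0: 1, 1: 1}, {1: 1, 2: 1, 3: 1})
```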

def reduce( function: Callable[[~T, ~T], Union[~T, Die[~T], RerollType]], dice: Iterable[Union[~T, Die[~T]]], *, initial: Union[~T, Die[~T], NoneType] = None) -> Die[~T]:
408def reduce(
409        function: 'Callable[[T, T], T | icepool.Die[T] | icepool.RerollType]',
410        dice: 'Iterable[T | icepool.Die[T]]',
411        *,
412        initial: 'T | icepool.Die[T] | None' = None) -> 'icepool.Die[T]':
413    """Applies a function of two arguments cumulatively to a sequence of dice.
414
415    Analogous to the
416    [`functools` function of the same name.](https://docs.python.org/3/library/functools.html#functools.reduce)
417
418    Args:
419        function: The function to map. The function should take two arguments,
420            which are an outcome from each of two dice, and produce an outcome
421            of the same type. It may also return `Reroll`, in which case the
422            entire sequence is effectively rerolled.
423        dice: A sequence of dice to map the function to, from left to right.
424        initial: If provided, this will be placed at the front of the sequence
425            of dice.
426        again_count, again_depth, again_end: Forwarded to the final die constructor.
427    """
428    # Conversion to dice is not necessary since map() takes care of that.
429    iter_dice = iter(dice)
430    if initial is not None:
431        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
432    else:
433        result = icepool.implicit_convert_to_die(next(iter_dice))
434    for die in iter_dice:
435        result = map(function, result, die)
436    return result

Applies a function of two arguments cumulatively to a sequence of dice.

Analogous to the functools function of the same name.

Arguments:
  • function: The function to map. The function should take two arguments, which are an outcome from each of two dice, and produce an outcome of the same type. It may also return Reroll, in which case the entire sequence is effectively rerolled.
  • dice: A sequence of dice to map the function to, from left to right.
  • initial: If provided, this will be placed at the front of the sequence of dice.
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
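The fold can be sketched on {outcome: quantity} mappings, with a small joint-outcome helper standing in for map() (names hypothetical):

```python
from itertools import product

def map2(function, die_a, die_b):
    # Apply `function` to every joint outcome; quantities multiply
    # because the two dice are independent.
    result = {}
    for (a, qa), (b, qb) in product(die_a.items(), die_b.items()):
        o = function(a, b)
        result[o] = result.get(o, 0) + qa * qb
    return result

def reduce_dice(function, dice, initial=None):
    # Left fold, as in reduce(): feed each die into the running result.
    it = iter(dice)
    result = dict(initial) if initial is not None else dict(next(it))
    for die in it:
        result = map2(function, result, die)
    return result

# Summing two coins gives the binomial quantities 1, 2, 1.
total = reduce_dice(lambda a, b: a + b, [{0: 1, 1: 1}, {0: 1, 1: 1}])
```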
def accumulate( function: Callable[[~T, ~T], Union[~T, Die[~T]]], dice: Iterable[Union[~T, Die[~T]]], *, initial: Union[~T, Die[~T], NoneType] = None) -> Iterator[Die[~T]]:
439def accumulate(
440        function: 'Callable[[T, T], T | icepool.Die[T]]',
441        dice: 'Iterable[T | icepool.Die[T]]',
442        *,
443        initial: 'T | icepool.Die[T] | None' = None
444) -> Iterator['icepool.Die[T]']:
445    """Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.
446
447    Analogous to the
448    [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate),
449    though with no default function and
450    the same parameter order as `reduce()`.
451
452    The number of results is equal to the number of elements of `dice`, with
453    one additional element if `initial` is provided.
454
455    Args:
456        function: The function to map. The function should take two arguments,
457            which are an outcome from each of two dice.
458        dice: A sequence of dice to map the function to, from left to right.
459        initial: If provided, this will be placed at the front of the sequence
460            of dice.
461    """
462    # Conversion to dice is not necessary since map() takes care of that.
463    iter_dice = iter(dice)
464    if initial is not None:
465        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
466    else:
467        try:
468            result = icepool.implicit_convert_to_die(next(iter_dice))
469        except StopIteration:
470            return
471    yield result
472    for die in iter_dice:
473        result = map(function, result, die)
474        yield result

Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

Analogous to the itertools function of the same name, though with no default function and the same parameter order as reduce().

The number of results is equal to the number of elements of dice, with one additional element if initial is provided.

Arguments:
  • function: The function to map. The function should take two arguments, which are an outcome from each of two dice.
  • dice: A sequence of dice to map the function to, from left to right.
  • initial: If provided, this will be placed at the front of the sequence of dice.
def map( repl: Union[Callable[..., Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]], Mapping[Any, Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]]], /, *args: Outcome | Die | MultisetExpression, star: bool | None = None, repeat: Union[int, Literal['inf']] = 1, time_limit: Union[int, Literal['inf'], NoneType] = None, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~T, Die[~T], RerollType, NoneType] = None, **kwargs) -> Die[~T]:
500def map(
501        repl:
502    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
503        /,
504        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
505        star: bool | None = None,
506        repeat: int | Literal['inf'] = 1,
507        time_limit: int | Literal['inf'] | None = None,
508        again_count: int | None = None,
509        again_depth: int | None = None,
510        again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None,
511        **kwargs) -> 'icepool.Die[T]':
512    """Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die.
513
514    See `map_function` for a decorator version of this.
515
517    Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.
517
518    `map()` is flexible but not very efficient for more than a few dice.
519    If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
520    `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
521    efficient than using `map` on the dice in order.
522
523    `Again` can be used but is not recommended with `repeat` other than 1.
524
525    Args:
526        repl: One of the following:
527            * A callable that takes in one outcome per element of args and
528                produces a new outcome.
529            * A mapping from old outcomes to new outcomes.
530                Unmapped old outcomes stay the same.
531                In this case args must have exactly one element.
532            As with the `Die` constructor, the new outcomes:
533            * May be dice rather than just single outcomes.
534            * The special value `icepool.Reroll` will reroll that old outcome.
535            * `tuples` containing `Population`s will be `tupleize`d into
536                `Population`s of `tuple`s.
537                This does not apply to subclasses of `tuple`s such as `namedtuple`
538                or other classes such as `Vector`.
539        *args: `repl` will be called with all joint outcomes of these.
540            Allowed arg types are:
541            * Single outcome.
542            * `Die`. All outcomes will be sent to `repl`.
543            * `MultisetExpression`. All sorted tuples of outcomes will be sent
544                to `repl`, as `MultisetExpression.expand()`.
545        star: If `True`, the first of the args will be unpacked before giving
546            them to `repl`.
547            If not provided, it will be guessed based on the signature of `repl`
548            and the number of arguments.
549        repeat: This will be repeated with the same arguments on the
550            result this many times, except the first of `args` will be replaced
551            by the result of the previous iteration.
552
553            Note that returning `Reroll` from `repl` will effectively reroll all
554            arguments, including the first argument which represents the result
555            of the process up to this point. If you only want to reroll the
556            current stage, you can nest another `map` inside `repl`.
557
558            EXPERIMENTAL: If set to `'inf'`, the result will be as if this
559            were repeated an infinite number of times. In this case, the
560            result will be in simplest form.
561        time_limit: Similar to `repeat`, but will return early if a fixed point
562            is reached. If both `repeat` and `time_limit` are provided
563            (not recommended), `time_limit` takes priority.
564        again_count, again_depth, again_end: Forwarded to the final die constructor.
565        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
566            Unlike *args, outcomes will not be expanded, i.e. `Die` and
567            `MultisetExpression` will be passed as-is. This is invalid for
568            non-callable `repl`.
569    """
570    transition_function = _canonicalize_transition_function(
571        repl, len(args), star)
572
573    if len(args) == 0:
574        if repeat != 1:
575            raise ValueError('If no arguments are given, repeat must be 1.')
576        return icepool.Die([transition_function(**kwargs)],
577                           again_count=again_count,
578                           again_depth=again_depth,
579                           again_end=again_end)
580
581    # Here len(args) is at least 1.
582
583    first_arg = args[0]
584    extra_args = args[1:]
585
586    if time_limit is not None:
587        repeat = time_limit
588
589    if repeat == 'inf':
590        # Infinite repeat.
591        # T_co and U should be the same in this case.
592        def unary_transition_function(state):
593            return map(transition_function,
594                       state,
595                       *extra_args,
596                       star=False,
597                       again_count=again_count,
598                       again_depth=again_depth,
599                       again_end=again_end,
600                       **kwargs)
601
602        return icepool.population.markov_chain.absorbing_markov_chain(
603            icepool.Die([args[0]]), unary_transition_function)
604    else:
605        if repeat < 0:
606            raise ValueError('repeat cannot be negative.')
607
608        if repeat == 0:
609            return icepool.Die([first_arg])
610        elif repeat == 1 and time_limit is None:
611            final_outcomes: 'list[T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' = []
612            final_quantities: list[int] = []
613            for outcomes, final_quantity in icepool.iter_cartesian_product(
614                    *args):
615                final_outcome = transition_function(*outcomes, **kwargs)
616                if final_outcome is not icepool.Reroll:
617                    final_outcomes.append(final_outcome)
618                    final_quantities.append(final_quantity)
619            return icepool.Die(final_outcomes,
620                               final_quantities,
621                               again_count=again_count,
622                               again_depth=again_depth,
623                               again_end=again_end)
624        else:
625            result: 'icepool.Die[T]' = icepool.Die([first_arg])
626            for _ in range(repeat):
627                next_result = icepool.map(transition_function,
628                                          result,
629                                          *extra_args,
630                                          star=False,
631                                          again_count=again_count,
632                                          again_depth=again_depth,
633                                          again_end=again_end,
634                                          **kwargs)
635                if time_limit is not None and result.simplify(
636                ) == next_result.simplify():
637                    return result
638                result = next_result
639            return result

Applies repl(outcome_of_die_0, outcome_of_die_1, ...) for all joint outcomes, returning a Die.

See map_function for a decorator version of this.

Example: map(lambda a, b: a + b, d6, d6) is the same as d6 + d6.

map() is flexible but not very efficient for more than a few dice. If at all possible, use reduce(), MultisetExpression methods, and/or MultisetEvaluators. Even Pool.expand() (which sorts rolls) is more efficient than using map on the dice in order.

Again can be used but is not recommended with repeat other than 1.

Arguments:
  • repl: One of the following:
    • A callable that takes in one outcome per element of args and produces a new outcome.
    • A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same. In this case args must have exactly one element.

    As with the Die constructor, the new outcomes:
    • May be dice rather than just single outcomes.
    • The special value icepool.Reroll will reroll that old outcome.
    • tuples containing Populations will be tupleized into Populations of tuples. This does not apply to subclasses of tuple such as namedtuple, or to other classes such as Vector.
  • *args: repl will be called with all joint outcomes of these. Allowed arg types are:
    • Single outcome.
    • Die. All outcomes will be sent to repl.
    • MultisetExpression. All sorted tuples of outcomes will be sent to repl, as MultisetExpression.expand().
  • star: If True, the first of the args will be unpacked before giving them to repl. If not provided, it will be guessed based on the signature of repl and the number of arguments.
  • repeat: This will be repeated with the same arguments on the result this many times, except the first of args will be replaced by the result of the previous iteration.

    Note that returning Reroll from repl will effectively reroll all arguments, including the first argument which represents the result of the process up to this point. If you only want to reroll the current stage, you can nest another map inside repl.

    EXPERIMENTAL: If set to 'inf', the result will be as if this were repeated an infinite number of times. In this case, the result will be in simplest form.

  • time_limit: Similar to repeat, but will return early if a fixed point is reached. If both repeat and time_limit are provided (not recommended), time_limit takes priority.
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
  • **kwargs: Keyword-only arguments can be forwarded to a callable repl. Unlike *args, outcomes will not be expanded, i.e. Die and MultisetExpression will be passed as-is. This is invalid for non-callable repl.
def map_function( function: Optional[Callable[..., Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]]] = None, /, *, star: bool | None = None, repeat: Union[int, Literal['inf']] = 1, again_count: int | None = None, again_depth: int | None = None, again_end: Union[~T, Die[~T], RerollType, NoneType] = None, **kwargs) -> Union[Callable[..., Die[~T]], Callable[..., Callable[..., Die[~T]]]]:
680def map_function(
681    function:
682    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | None' = None,
683    /,
684    *,
685    star: bool | None = None,
686    repeat: int | Literal['inf'] = 1,
687    again_count: int | None = None,
688    again_depth: int | None = None,
689    again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None,
690    **kwargs
691) -> 'Callable[..., icepool.Die[T]] | Callable[..., Callable[..., icepool.Die[T]]]':
692    """Decorator that turns a function that takes outcomes into a function that takes dice.
693
694    The result must be a `Die`.
695
696    This is basically a decorator version of `map()` and produces behavior
697    similar to AnyDice functions, though Icepool has different typing rules
698    among other differences.
699
700    `map_function` can either be used with no arguments:
701
702    ```python
703    @map_function
704    def explode_six(x):
705        if x == 6:
706            return 6 + Again
707        else:
708            return x
709
710    explode_six(d6, again_depth=2)
711    ```
712
713    Or with keyword arguments, in which case the extra arguments are bound:
714
715    ```python
716    @map_function(again_depth=2)
717    def explode_six(x):
718        if x == 6:
719            return 6 + Again
720        else:
721            return x
722
723    explode_six(d6)
724    ```
725
726    Args:
727        again_count, again_depth, again_end: Forwarded to the final die constructor.
728    """
729
730    if function is not None:
731        return update_wrapper(partial(map, function, **kwargs), function)
732    else:
733
734        def decorator(
735            function:
736            'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]'
737        ) -> 'Callable[..., icepool.Die[T]]':
738
739            return update_wrapper(
740                partial(map,
741                        function,
742                        star=star,
743                        repeat=repeat,
744                        again_count=again_count,
745                        again_depth=again_depth,
746                        again_end=again_end,
747                        **kwargs), function)
748
749        return decorator

Decorator that turns a function that takes outcomes into a function that takes dice.

The result must be a Die.

This is basically a decorator version of map() and produces behavior similar to AnyDice functions, though Icepool has different typing rules among other differences.

map_function can either be used with no arguments:

@map_function
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6, again_depth=2)

Or with keyword arguments, in which case the extra arguments are bound:

@map_function(again_depth=2)
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6)
Arguments:
  • again_count, again_depth, again_end: Forwarded to the final die constructor.
def map_and_time( repl: Union[Callable[..., Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]], Mapping[Any, Union[~T, Die[~T], RerollType, icepool.population.again.AgainExpression]]], initial_state: Union[~T, Die[~T]], /, *extra_args, star: bool | None = None, time_limit: int, **kwargs) -> Die[tuple[~T, int]]:
752def map_and_time(
753        repl:
754    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
755        initial_state: 'T | icepool.Die[T]',
756        /,
757        *extra_args,
758        star: bool | None = None,
759        time_limit: int,
760        **kwargs) -> 'icepool.Die[tuple[T, int]]':
761    """Repeatedly map outcomes of the state to other outcomes, while also
762    counting timesteps.
763
764    This is useful for representing processes.
765
766    The outcomes of the result are `(outcome, time)`, where `time` is the
767    number of time steps needed to reach an absorbing outcome (an outcome
768    that only leads to itself), or `time_limit`, whichever is lesser.
769
770    This will return early if it reaches a fixed point.
771    Therefore, you can set `time_limit` equal to the maximum number of
772    time steps you could possibly be interested in without worrying about
773    it causing extra computations after the fixed point.
774
775    Args:
776        repl: One of the following:
777            * A callable returning a new outcome for each old outcome.
778            * A mapping from old outcomes to new outcomes.
779                Unmapped old outcomes stay the same.
780            The new outcomes may be dice rather than just single outcomes.
781            The special value `icepool.Reroll` will reroll that old outcome.
782        initial_state: The initial state of the process, which could be a
783            single state or a `Die`.
784        extra_args: Extra arguments to use, as per `map`. Note that these are
785            rerolled at every time step.
786        star: If `True`, the first of the args will be unpacked before giving
787            them to `repl`.
788            If not provided, it will be guessed based on the signature of `repl`
789            and the number of arguments.
790        time_limit: This will be repeated with the same arguments on the result
791            up to this many times.
792        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
793            Unlike *args, outcomes will not be expanded, i.e. `Die` and
794            `MultisetExpression` will be passed as-is. This is invalid for
795            non-callable `repl`.
796
797    Returns:
798        The `Die` after the modification.
799    """
800    transition_function = _canonicalize_transition_function(
801        repl, 1 + len(extra_args), star)
802
803    result: 'icepool.Die[tuple[T, int]]' = map(lambda x: (x, 0), initial_state)
804
805    # Note that we don't expand extra_args during the outer map.
806    # This is needed to correctly evaluate whether each outcome is absorbing.
807    def transition_with_steps(outcome_and_steps, extra_args):
808        outcome, steps = outcome_and_steps
809        next_outcome = map(transition_function, outcome, *extra_args, **kwargs)
810        if icepool.population.markov_chain.is_absorbing(outcome, next_outcome):
811            return outcome, steps
812        else:
813            return icepool.tupleize(next_outcome, steps + 1)
814
815    return map(transition_with_steps,
816               result,
817               extra_args,
818               time_limit=time_limit)

Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

The outcomes of the result are (outcome, time), where time is the number of time steps needed to reach an absorbing outcome (an outcome that only leads to itself), or time_limit, whichever is lesser.

This will return early if it reaches a fixed point. Therefore, you can set time_limit equal to the maximum number of time steps you could possibly be interested in without worrying about it causing extra computations after the fixed point.

Arguments:
  • repl: One of the following:
    • A callable returning a new outcome for each old outcome.
    • A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same.

    The new outcomes may be dice rather than just single outcomes. The special value icepool.Reroll will reroll that old outcome.
  • initial_state: The initial state of the process, which could be a single state or a Die.
  • extra_args: Extra arguments to use, as per map. Note that these are rerolled at every time step.
  • star: If True, the first of the args will be unpacked before giving them to repl. If not provided, it will be guessed based on the signature of repl and the number of arguments.
  • time_limit: This will be repeated with the same arguments on the result up to this many times.
  • **kwargs: Keyword-only arguments can be forwarded to a callable repl. Unlike *args, outcomes will not be expanded, i.e. Die and MultisetExpression will be passed as-is. This is invalid for non-callable repl.
Returns:

The Die after the modification.

def map_to_pool( repl: Union[Callable[..., Union[MultisetExpression, Sequence[Union[Die[~T], ~T]], Mapping[Die[~T], int], Mapping[~T, int], RerollType]], Mapping[Any, Union[MultisetExpression, Sequence[Union[Die[~T], ~T]], Mapping[Die[~T], int], Mapping[~T, int], RerollType]]], /, *args: Outcome | Die | MultisetExpression, star: bool | None = None, **kwargs) -> MultisetExpression[~T]:
821def map_to_pool(
822        repl:
823    'Callable[..., icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType] | Mapping[Any, icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType]',
824        /,
825        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
826        star: bool | None = None,
827        **kwargs) -> 'icepool.MultisetExpression[T]':
828    """EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a MultisetExpression.
829    
830    Args:
831        repl: One of the following:
832            * A callable that takes in one outcome per element of args and
833                produces a `MultisetExpression` or something convertible to a `Pool`.
834            * A mapping from old outcomes to `MultisetExpression` 
835                or something convertible to a `Pool`.
836                In this case args must have exactly one element.
837            The new outcomes may be dice rather than just single outcomes.
838            The special value `icepool.Reroll` will reroll that old outcome.
839        star: If `True`, the first of the args will be unpacked before giving
840            them to `repl`.
841            If not provided, it will be guessed based on the signature of `repl`
842            and the number of arguments.
843        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
844            Unlike *args, outcomes will not be expanded, i.e. `Die` and
845            `MultisetExpression` will be passed as-is. This is invalid for
846            non-callable `repl`.
847
848    Returns:
849        A `MultisetExpression` representing the mixture of `Pool`s. Note  
850        that this is not technically a `Pool`, though it supports most of 
851        the same operations.
852
853    Raises:
854        ValueError: If `denominator` cannot be made consistent with the 
855            resulting mixture of pools.
856    """
857    transition_function = _canonicalize_transition_function(
858        repl, len(args), star)
859
860    data: 'MutableMapping[icepool.MultisetExpression[T], int]' = defaultdict(
861        int)
862    for outcomes, quantity in icepool.iter_cartesian_product(*args):
863        pool = transition_function(*outcomes, **kwargs)
864        if pool is icepool.Reroll:
865            continue
866        elif isinstance(pool, icepool.MultisetExpression):
867            data[pool] += quantity
868        else:
869            data[icepool.Pool(pool)] += quantity
870    # I couldn't get the covariance / contravariance to work.
871    return icepool.MultisetMixture(data)  # type: ignore

EXPERIMENTAL: Applies repl(outcome_of_die_0, outcome_of_die_1, ...) for all joint outcomes, producing a MultisetExpression.

Arguments:
  • repl: One of the following:
    • A callable that takes in one outcome per element of args and produces a MultisetExpression or something convertible to a Pool.
    • A mapping from old outcomes to MultisetExpression or something convertible to a Pool. In this case args must have exactly one element.

    The new outcomes may be dice rather than just single outcomes. The special value icepool.Reroll will reroll that old outcome.
  • star: If True, the first of the args will be unpacked before giving them to repl. If not provided, it will be guessed based on the signature of repl and the number of arguments.
  • **kwargs: Keyword-only arguments can be forwarded to a callable repl. Unlike *args, outcomes will not be expanded, i.e. Die and MultisetExpression will be passed as-is. This is invalid for non-callable repl.
Returns:

A MultisetExpression representing the mixture of Pools. Note
that this is not technically a Pool, though it supports most of the same operations.

Raises:
  • ValueError: If denominator cannot be made consistent with the resulting mixture of pools.
Reroll: Final = <RerollType.Reroll: 'Reroll'>

Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function and method descriptions for details.

This effectively removes the outcome from the probability space, along with its contribution to the denominator.

This can be used for conditional probability by removing all outcomes not consistent with the given observations.

Operation in specific cases:

  • When used with Again, only that stage is rerolled, not the entire Again tree.
  • To reroll with limited depth, use Die.reroll(), or Again with no modification.
  • When used with MultisetEvaluator, the entire evaluation is rerolled.
class RerollType(enum.Enum):
32class RerollType(enum.Enum):
33    """The type of the Reroll singleton."""
34    Reroll = 'Reroll'
35    """Indicates an outcome should be rerolled (with unlimited depth)."""

The type of the Reroll singleton.

Reroll = <RerollType.Reroll: 'Reroll'>

Indicates an outcome should be rerolled (with unlimited depth).

class Pool(icepool.generator.keep.KeepGenerator[~T]):
 25class Pool(KeepGenerator[T]):
 26    """Represents a multiset of outcomes resulting from the roll of several dice.
 27
 28    This should be used in conjunction with `MultisetEvaluator` to generate a
 29    result.
 30
 31    Note that operators are performed on the multiset of rolls, not the multiset
 32    of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but
 33    an expression meaning "roll two pools of 3d6 and get the rolls from the
 34    first pool, with rolls in the second pool cancelling matching rolls in the
 35    first pool one-for-one".
 36    """
 37
 38    _dice: tuple[tuple['icepool.Die[T]', int], ...]
 39    _outcomes: tuple[T, ...]
 40
 41    def __new__(
 42            cls,
 43            dice:
 44        'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]',
 45            times: Sequence[int] | int = 1) -> 'Pool':
 46        """Public constructor for a pool.
 47
 48        Evaluation is most efficient when the dice are the same or same-side
 49        truncations of each other. For example, d4, d6, d8, d10, d12 are all
 50        same-side truncations of d12.
 51
 52        It is permissible to create a `Pool` without providing dice, but not all
 53        evaluators will handle this case, especially if they depend on the
 54        outcome type. Dice may be in the pool zero times, in which case their
 55        outcomes will be considered but without any count (unless another die
 56        has that outcome).
 57
 58        Args:
 59            dice: The dice to put in the `Pool`. This can be one of the following:
 60
 61                * A `Sequence` of `Die` or outcomes.
 62                * A `Mapping` of `Die` or outcomes to how many of that `Die` or
 63                    outcome to put in the `Pool`.
 64
 65                All outcomes within a `Pool` must be totally orderable.
 66            times: Multiplies the number of times each element of `dice` will
 67                be put into the pool.
 68                `times` can either be a sequence of the same length as
 69                `outcomes` or a single `int` to apply to all elements of
 70                `outcomes`.
 71
 72        Raises:
 73            ValueError: If a bare `Deck` or `Die` argument is provided.
 74                A `Pool` of a single `Die` should be constructed as `Pool([die])`.
 75        """
 76        if isinstance(dice, Pool):
 77            if times == 1:
 78                return dice
 79            else:
 80                dice = {die: quantity for die, quantity in dice._dice}
 81
 82        if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)):
 83            raise ValueError(
 84                f'A Pool cannot be constructed with a {type(dice).__name__} argument.'
 85            )
 86
 87        dice, times = icepool.creation_args.itemize(dice, times)
 88        converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
 89
 90        dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int)
 91        for die, qty in zip(converted_dice, times):
 92            if qty == 0:
 93                continue
 94            dice_counts[die] += qty
 95        keep_tuple = (1, ) * sum(times)
 96
 97        # Includes dice with zero qty.
 98        outcomes = icepool.sorted_union(*converted_dice)
 99        return cls._new_from_mapping(dice_counts, outcomes, keep_tuple)
100
101    @classmethod
102    def _new_raw(cls, dice: tuple[tuple['icepool.Die[T]', int], ...],
103                 outcomes: tuple[T, ...], keep_tuple: tuple[int,
104                                                            ...]) -> 'Pool[T]':
105        """Create using a keep_tuple directly.
106
107        Args:
108            dice: A tuple of (die, count) pairs.
109            keep_tuple: A tuple of how many times to count each die.
110        """
111        self = super(Pool, cls).__new__(cls)
112        self._dice = dice
113        self._outcomes = outcomes
114        self._keep_tuple = keep_tuple
115        return self
116
117    @classmethod
118    def clear_cache(cls):
119        """Clears the global PoolSource cache."""
120        PoolSource._new_raw.cache_clear()
121
122    @classmethod
123    def _new_from_mapping(cls, dice_counts: Mapping['icepool.Die[T]', int],
124                          outcomes: tuple[T, ...],
125                          keep_tuple: tuple[int, ...]) -> 'Pool[T]':
126        """Creates a new pool.
127
128        Args:
129            dice_counts: A map from dice to rolls.
130            keep_tuple: A tuple with length equal to the number of dice.
131        """
132        dice = tuple(sorted(dice_counts.items(),
133                            key=lambda kv: kv[0].hash_key))
134        return Pool._new_raw(dice, outcomes, keep_tuple)
135
136    def _make_source(self):
137        return PoolSource(self._dice, self._outcomes, self._keep_tuple)
138
139    @cached_property
140    def _raw_size(self) -> int:
141        return sum(count for _, count in self._dice)
142
143    def raw_size(self) -> int:
144        """The number of dice in this pool before the keep_tuple is applied."""
145        return self._raw_size
146
147    @cached_property
148    def _denominator(self) -> int:
149        return math.prod(die.denominator()**count for die, count in self._dice)
150
151    def denominator(self) -> int:
152        return self._denominator
153
154    @cached_property
155    def _dice_tuple(self) -> tuple['icepool.Die[T]', ...]:
156        return sum(((die, ) * count for die, count in self._dice), start=())
157
158    @cached_property
159    def _unique_dice(self) -> Collection['icepool.Die[T]']:
160        return set(die for die, _ in self._dice)
161
162    def unique_dice(self) -> Collection['icepool.Die[T]']:
163        """The collection of unique dice in this pool."""
164        return self._unique_dice
165
166    def outcomes(self) -> Sequence[T]:
167        """The union of possible outcomes among all dice in this pool in ascending order."""
168        return self._outcomes
169
170    def _set_keep_tuple(self, keep_tuple: tuple[int,
171                                                ...]) -> 'KeepGenerator[T]':
172        return Pool._new_raw(self._dice, self._outcomes, keep_tuple)
173
174    def additive_union(
175        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
176    ) -> 'MultisetExpression[T]':
177        args = tuple(
178            icepool.expression.multiset_expression.
179            implicit_convert_to_expression(arg) for arg in args)
180        if all(isinstance(arg, Pool) for arg in args):
181            pools = cast(tuple[Pool[T], ...], args)
182            keep_tuple: tuple[int, ...] = tuple(
183                reduce(operator.add, (pool.keep_tuple() for pool in pools),
184                       ()))
185            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
186                                           for x in keep_tuple):
187                # All sorted positions count the same, so we can merge the
188                # pools.
189                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
190                for pool in pools:
191                    for die, die_count in pool._dice:
192                        dice[die] += die_count
193                outcomes = icepool.sorted_union(*(pool.outcomes()
194                                                  for pool in pools))
195                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
196        return KeepGenerator.additive_union(*args)
197
198    @property
199    def hash_key(self):
200        return Pool, self._dice, self._keep_tuple
201
202    def __str__(self) -> str:
203        return (
204            f'Pool of {self.raw_size()} dice with keep_tuple={self.keep_tuple()}\n'
205            + ''.join(f'  {repr(die)} : {count},\n'
206                      for die, count in self._dice))

Represents a multiset of outcomes resulting from the roll of several dice.

This should be used in conjunction with MultisetEvaluator to generate a result.

Note that operators are performed on the multiset of rolls, not the multiset of dice. For example, d6.pool(3) - d6.pool(3) is not an empty pool, but an expression meaning "roll two pools of 3d6 and get the rolls from the first pool, with rolls in the second pool cancelling matching rolls in the first pool one-for-one".

@classmethod
def clear_cache(cls):
117    @classmethod
118    def clear_cache(cls):
119        """Clears the global PoolSource cache."""
120        PoolSource._new_raw.cache_clear()

Clears the global PoolSource cache.

def raw_size(self) -> int:
143    def raw_size(self) -> int:
144        """The number of dice in this pool before the keep_tuple is applied."""
145        return self._raw_size

The number of dice in this pool before the keep_tuple is applied.

def denominator(self) -> int:
151    def denominator(self) -> int:
152        return self._denominator
def unique_dice(self) -> Collection[Die[~T]]:
162    def unique_dice(self) -> Collection['icepool.Die[T]']:
163        """The collection of unique dice in this pool."""
164        return self._unique_dice

The collection of unique dice in this pool.

def outcomes(self) -> Sequence[~T]:
166    def outcomes(self) -> Sequence[T]:
167        """The union of possible outcomes among all dice in this pool in ascending order."""
168        return self._outcomes

The union of possible outcomes among all dice in this pool in ascending order.

def additive_union( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
174    def additive_union(
175        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
176    ) -> 'MultisetExpression[T]':
177        args = tuple(
178            icepool.expression.multiset_expression.
179            implicit_convert_to_expression(arg) for arg in args)
180        if all(isinstance(arg, Pool) for arg in args):
181            pools = cast(tuple[Pool[T], ...], args)
182            keep_tuple: tuple[int, ...] = tuple(
183                reduce(operator.add, (pool.keep_tuple() for pool in pools),
184                       ()))
185            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
186                                           for x in keep_tuple):
187                # All sorted positions count the same, so we can merge the
188                # pools.
189                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
190                for pool in pools:
191                    for die, die_count in pool._dice:
192                        dice[die] += die_count
193                outcomes = icepool.sorted_union(*(pool.outcomes()
194                                                  for pool in pools))
195                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
196        return KeepGenerator.additive_union(*args)

The combined elements from all of the multisets.

Same as a + b + c + ....

Any resulting counts that would be negative are set to zero.

Example:

[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
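The example above can be reproduced with a stdlib `Counter` sketch of the additive-union semantics (counts add; non-positive results are dropped). This is an illustration, not icepool's implementation:

```python
from collections import Counter

# Stdlib sketch of additive union: counts add, and Counter's in-place
# operators drop any count that would end up non-positive.
def additive_union(*multisets):
    total = Counter()
    for m in multisets:
        total += Counter(m)
    return sorted(total.elements())

additive_union([1, 2, 2, 3], [1, 2, 4])  # [1, 1, 2, 2, 2, 3, 4]
```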
hash_key
198    @property
199    def hash_key(self):
200        return Pool, self._dice, self._keep_tuple

A hash key for this object. This should include a type.

If None, this will not compare equal to any other object.

def standard_pool( die_sizes: Union[Collection[int], Mapping[int, int]]) -> Pool[int]:
344def standard_pool(
345        die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]':
346    """A `Pool` of standard dice (e.g. d6, d8...).
347
348    Args:
349        die_sizes: A collection of die sizes, which will put one die of that
350            size in the pool for each element.
351            Or, a mapping of die sizes to how many dice of that size to put
352            into the pool.
353            If empty, the pool will be considered to consist of zero zeros.
354    """
355    if not die_sizes:
356        return Pool({icepool.Die([0]): 0})
357    if isinstance(die_sizes, Mapping):
358        die_sizes = list(
359            itertools.chain.from_iterable([k] * v
360                                          for k, v in die_sizes.items()))
361    return Pool(list(icepool.d(x) for x in die_sizes))

A Pool of standard dice (e.g. d6, d8...).

Arguments:
  • die_sizes: A collection of die sizes, which will put one die of that size in the pool for each element. Or, a mapping of die sizes to how many dice of that size to put into the pool. If empty, the pool will be considered to consist of zero zeros.
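The mapping form flattens into individual die sizes, as in the source listing above. A stdlib sketch of that expansion (illustration only, not the icepool API):

```python
import itertools

# How a {die_size: quantity} mapping flattens into a list of die sizes,
# mirroring the itertools.chain.from_iterable call in the source above.
die_sizes = {6: 3, 8: 2}
flat = list(
    itertools.chain.from_iterable([k] * v for k, v in die_sizes.items()))
# flat == [6, 6, 6, 8, 8], so standard_pool({6: 3, 8: 2}) is equivalent
# to standard_pool([6, 6, 6, 8, 8]).
```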
class MultisetGenerator(icepool.MultisetExpression[~T]):
17class MultisetGenerator(MultisetExpression[T]):
18    """Abstract base class for generating multisets.
19
20    These include dice pools (`Pool`) and card deals (`Deal`). Most likely you
21    will be using one of these two rather than writing your own subclass of
22    `MultisetGenerator`.
23
24    The multisets are incrementally generated one outcome at a time.
25    For each outcome, a `count` and `weight` are generated, along with a
26    smaller generator to produce the rest of the multiset.
27
28    You can perform simple evaluations using built-in operators and methods in
29    this class.
30    For more complex evaluations and better performance, particularly when
31    multiple generators are involved, you will want to write your own subclass
32    of `MultisetEvaluator`.
33    """
34
35    _children = ()
36
37    @abstractmethod
38    def _make_source(self) -> 'MultisetSource':
39        """Create a source from this generator."""
40
41    @property
42    def _has_parameter(self) -> bool:
43        return False
44
45    def _prepare(
46        self
47    ) -> Iterator[tuple['tuple[Dungeonlet[T, Any], ...]',
48                        'tuple[Questlet[T, Any], ...]',
49                        'tuple[MultisetSourceBase[T, Any], ...]', int]]:
50        dungeonlets = (MultisetFreeVariable[T, int](), )
51        questlets = (MultisetGeneratorQuestlet[T](), )
52        sources = (self._make_source(), )
53        weight = 1
54        yield dungeonlets, questlets, sources, weight

Abstract base class for generating multisets.

These include dice pools (Pool) and card deals (Deal). Most likely you will be using one of these two rather than writing your own subclass of MultisetGenerator.

The multisets are incrementally generated one outcome at a time. For each outcome, a count and weight are generated, along with a smaller generator to produce the rest of the multiset.

You can perform simple evaluations using built-in operators and methods in this class. For more complex evaluations and better performance, particularly when multiple generators are involved, you will want to write your own subclass of MultisetEvaluator.
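The count-and-weight idea can be illustrated by brute force: a pool is a weighted distribution over multisets of outcomes, where each ordered roll contributes weight 1 to its sorted multiset. This stdlib sketch enumerates all rolls directly; icepool instead builds the same distribution incrementally, one outcome at a time, which is far more efficient for large pools:

```python
from collections import Counter
from itertools import product

# Brute-force illustration (not icepool's algorithm): tally the weight of
# each multiset of outcomes by enumerating every ordered roll.
def multiset_weights(die_faces, num_dice):
    weights = Counter()
    for roll in product(die_faces, repeat=num_dice):
        weights[tuple(sorted(roll))] += 1  # each ordered roll has weight 1
    return weights

weights = multiset_weights(range(1, 7), 2)  # 2d6
weights[(1, 1)]  # 1 way
weights[(1, 2)]  # 2 ways: (1, 2) and (2, 1)
```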

class MultisetExpression(icepool.expression.multiset_expression_base.MultisetExpressionBase[~T, int], icepool.expand.Expandable[tuple[~T, ...]]):
  65class MultisetExpression(MultisetExpressionBase[T, int],
  66                         Expandable[tuple[T, ...]]):
  67    """Abstract base class representing an expression that operates on single multisets.
  68
  69    There are three types of multiset expressions:
  70
  71    * `MultisetGenerator`, which produce raw outcomes and counts.
  72    * `MultisetOperator`, which takes outcomes with one or more counts and
  73        produces a count.
  74    * `MultisetVariable`, which is a temporary placeholder for some other 
  75        expression.
  76
  77    Expression methods can be applied to `MultisetGenerator`s to do simple
  78    evaluations. For joint evaluations, try `multiset_function`.
  79
  80    Use the provided operations to build up more complicated
  81    expressions, or to attach a final evaluator.
  82
  83    Operations include:
  84
  85    | Operation                   | Count / notes                               |
  86    |:----------------------------|:--------------------------------------------|
  87    | `additive_union`, `+`       | `l + r`                                     |
  88    | `difference`, `-`           | `l - r`                                     |
  89    | `intersection`, `&`         | `min(l, r)`                                 |
  90    | `union`, `\\|`               | `max(l, r)`                                 |
  91    | `symmetric_difference`, `^` | `abs(l - r)`                                |
  92    | `multiply_counts`, `*`      | `count * n`                                 |
  93    | `divide_counts`, `//`       | `count // n`                                |
  94    | `modulo_counts`, `%`        | `count % n`                                 |
  95    | `keep_counts`               | `count if count >= n else 0` etc.           |
  96    | unary `+`                   | same as `keep_counts_ge(0)`                 |
  97    | unary `-`                   | reverses the sign of all counts             |
  98    | `unique`                    | `min(count, n)`                             |
  99    | `keep_outcomes`             | `count if outcome in t else 0`              |
 100    | `drop_outcomes`             | `count if outcome not in t else 0`          |
 101    | `map_counts`                | `f(outcome, *counts)`                       |
 102    | `keep`, `[]`                | less capable than `KeepGenerator` version   |
 103    | `highest`                   | less capable than `KeepGenerator` version   |
 104    | `lowest`                    | less capable than `KeepGenerator` version   |
 105
 106    | Evaluator                      | Summary                                                                    |
 107    |:-------------------------------|:---------------------------------------------------------------------------|
 108    | `issubset`, `<=`               | Whether the left side's counts are all <= their counterparts on the right  |
 109    | `issuperset`, `>=`             | Whether the left side's counts are all >= their counterparts on the right  |
 110    | `isdisjoint`                   | Whether the left side has no positive counts in common with the right side |
 111    | `<`                            | As `<=`, but `False` if the two multisets are equal                        |
 112    | `>`                            | As `>=`, but `False` if the two multisets are equal                        |
 113    | `==`                           | Whether the left side has all the same counts as the right side            |
 114    | `!=`                           | Whether the left side has any different counts to the right side           |
 115    | `expand`                       | All elements in ascending order                                            |
 116    | `sum`                          | Sum of all elements                                                        |
 117    | `count`                        | The number of elements                                                     |
 118    | `any`                          | Whether there is at least 1 element                                        |
 119    | `highest_outcome_and_count`    | The highest outcome and how many of that outcome                           |
 120    | `all_counts`                   | All counts in descending order                                             |
 121    | `largest_count`                | The single largest count, aka x-of-a-kind                                  |
 122    | `largest_count_and_outcome`    | Same but also with the corresponding outcome                               |
 123    | `count_subset`, `//`           | The number of times the right side is contained in the left side           |
 124    | `largest_straight`             | Length of longest consecutive sequence                                     |
 125    | `largest_straight_and_outcome` | Same but also with the corresponding outcome                               |
 126    | `all_straights`                | Lengths of all consecutive sequences in descending order                   |
 127    """
 128
 129    def _make_param(self,
 130                    name: str,
 131                    arg_index: int,
 132                    star_index: int | None = None) -> 'MultisetParameter[T]':
 133        if star_index is not None:
 134            raise TypeError(
 135                'The single int count of MultisetExpression cannot be starred.'
 136            )
 137        return icepool.MultisetParameter(name, arg_index, star_index)
 138
 139    @property
 140    def _items_for_cartesian_product(
 141            self) -> Sequence[tuple[tuple[T, ...], int]]:
 142        expansion = cast('icepool.Die[tuple[T, ...]]', self.expand())
 143        return expansion.items()
 144
 145    # We need to reiterate this since we override __eq__.
 146    __hash__ = MaybeHashKeyed.__hash__  # type: ignore
 147
 148    # Binary operators.
 149
 150    def __add__(self,
 151                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 152                /) -> 'MultisetExpression[T]':
 153        try:
 154            return MultisetExpression.additive_union(self, other)
 155        except ImplicitConversionError:
 156            return NotImplemented
 157
 158    def __radd__(
 159            self,
 160            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 161            /) -> 'MultisetExpression[T]':
 162        try:
 163            return MultisetExpression.additive_union(other, self)
 164        except ImplicitConversionError:
 165            return NotImplemented
 166
 167    def additive_union(
 168        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 169    ) -> 'MultisetExpression[T]':
 170        """The combined elements from all of the multisets.
 171
 172        Same as `a + b + c + ...`.
 173
 174        Any resulting counts that would be negative are set to zero.
 175
 176        Example:
 177        ```python
 178        [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
 179        ```
 180        """
 181        expressions = tuple(
 182            implicit_convert_to_expression(arg) for arg in args)
 183        return icepool.operator.MultisetAdditiveUnion(*expressions)
 184
 185    def __sub__(self,
 186                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 187                /) -> 'MultisetExpression[T]':
 188        try:
 189            return MultisetExpression.difference(self, other)
 190        except ImplicitConversionError:
 191            return NotImplemented
 192
 193    def __rsub__(
 194            self,
 195            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 196            /) -> 'MultisetExpression[T]':
 197        try:
 198            return MultisetExpression.difference(other, self)
 199        except ImplicitConversionError:
 200            return NotImplemented
 201
 202    def difference(
 203        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 204    ) -> 'MultisetExpression[T]':
 205        """The elements from the left multiset that are not in any of the others.
 206
 207        Same as `a - b - c - ...`.
 208
 209        Any resulting counts that would be negative are set to zero.
 210
 211        Example:
 212        ```python
 213        [1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
 214        ```
 215
 216        If no arguments are given, the result will be an empty multiset, i.e.
 217        all zero counts.
 218
 219        Note that, as a multiset operation, this will only cancel elements 1:1.
 220        If you want to drop all elements in a set of outcomes regardless of
 221        count, either use `drop_outcomes()` instead, or use a large number of
 222        counts on the right side.
 223        """
 224        expressions = tuple(
 225            implicit_convert_to_expression(arg) for arg in args)
 226        return icepool.operator.MultisetDifference(*expressions)
 227
 228    def __and__(self,
 229                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 230                /) -> 'MultisetExpression[T]':
 231        try:
 232            return MultisetExpression.intersection(self, other)
 233        except ImplicitConversionError:
 234            return NotImplemented
 235
 236    def __rand__(
 237            self,
 238            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 239            /) -> 'MultisetExpression[T]':
 240        try:
 241            return MultisetExpression.intersection(other, self)
 242        except ImplicitConversionError:
 243            return NotImplemented
 244
 245    def intersection(
 246        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 247    ) -> 'MultisetExpression[T]':
 248        """The elements that all the multisets have in common.
 249
 250        Same as `a & b & c & ...`.
 251
 252        Any resulting counts that would be negative are set to zero.
 253
 254        Example:
 255        ```python
 256        [1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
 257        ```
 258
 259        Note that, as a multiset operation, this will only intersect elements
 260        1:1.
 261        If you want to keep all elements in a set of outcomes regardless of
 262        count, either use `keep_outcomes()` instead, or use a large number of
 263        counts on the right side.
 264        """
 265        expressions = tuple(
 266            implicit_convert_to_expression(arg) for arg in args)
 267        return icepool.operator.MultisetIntersection(*expressions)
 268
 269    def __or__(self,
 270               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 271               /) -> 'MultisetExpression[T]':
 272        try:
 273            return MultisetExpression.union(self, other)
 274        except ImplicitConversionError:
 275            return NotImplemented
 276
 277    def __ror__(self,
 278                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 279                /) -> 'MultisetExpression[T]':
 280        try:
 281            return MultisetExpression.union(other, self)
 282        except ImplicitConversionError:
 283            return NotImplemented
 284
 285    def union(
 286        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 287    ) -> 'MultisetExpression[T]':
  288        """The most of each outcome that appears in any of the multisets.
 289
 290        Same as `a | b | c | ...`.
 291
 292        Any resulting counts that would be negative are set to zero.
 293
 294        Example:
 295        ```python
 296        [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
 297        ```
 298        """
 299        expressions = tuple(
 300            implicit_convert_to_expression(arg) for arg in args)
 301        return icepool.operator.MultisetUnion(*expressions)
 302
 303    def __xor__(self,
 304                other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 305                /) -> 'MultisetExpression[T]':
 306        try:
 307            return MultisetExpression.symmetric_difference(self, other)
 308        except ImplicitConversionError:
 309            return NotImplemented
 310
 311    def __rxor__(
 312            self,
 313            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 314            /) -> 'MultisetExpression[T]':
 315        try:
 316            # Symmetric.
 317            return MultisetExpression.symmetric_difference(self, other)
 318        except ImplicitConversionError:
 319            return NotImplemented
 320
 321    def symmetric_difference(
 322            self,
 323            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 324            /) -> 'MultisetExpression[T]':
 325        """The elements that appear in the left or right multiset but not both.
 326
 327        Same as `a ^ b`.
 328
 329        Specifically, this produces the absolute difference between counts.
 330        If you don't want negative counts to be used from the inputs, you can
 331        do `+left ^ +right`.
 332
 333        Example:
 334        ```python
 335        [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
 336        ```
 337        """
 338        return icepool.operator.MultisetSymmetricDifference(
 339            self, implicit_convert_to_expression(other))
 340
 341    def keep_outcomes(
 342            self, target:
 343        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
 344            /) -> 'MultisetExpression[T]':
 345        """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.
 346
 347        This is similar to `intersection()`, except the right side is considered
 348        to have unlimited multiplicity.
 349
 350        Args:
 351            target: A callable returning `True` iff the outcome should be kept,
 352                or an expression or collection of outcomes to keep.
 353        """
 354        if isinstance(target, MultisetExpression):
 355            return icepool.operator.MultisetFilterOutcomesBinary(self, target)
 356        else:
 357            return icepool.operator.MultisetFilterOutcomes(self, target=target)
 358
 359    def drop_outcomes(
 360            self, target:
 361        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
 362            /) -> 'MultisetExpression[T]':
 363        """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.
 364
 365        This is similar to `difference()`, except the right side is considered
 366        to have unlimited multiplicity.
 367
 368        Args:
 369            target: A callable returning `True` iff the outcome should be
 370                dropped, or an expression or collection of outcomes to drop.
 371        """
 372        if isinstance(target, MultisetExpression):
 373            return icepool.operator.MultisetFilterOutcomesBinary(self,
 374                                                                 target,
 375                                                                 invert=True)
 376        else:
 377            return icepool.operator.MultisetFilterOutcomes(self,
 378                                                           target=target,
 379                                                           invert=True)
 380
 381    # Adjust counts.
 382
 383    def map_counts(*args:
 384                   'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 385                   function: Callable[..., int]) -> 'MultisetExpression[T]':
 386        """Maps the counts to new counts.
 387
 388        Args:
 389            function: A function that takes `outcome, *counts` and produces a
 390                combined count.
 391        """
 392        expressions = tuple(
 393            implicit_convert_to_expression(arg) for arg in args)
 394        return icepool.operator.MultisetMapCounts(*expressions,
 395                                                  function=function)
 396
 397    def __mul__(self, n: int) -> 'MultisetExpression[T]':
 398        if not isinstance(n, int):
 399            return NotImplemented
 400        return self.multiply_counts(n)
 401
 402    # Commutable in this case.
 403    def __rmul__(self, n: int) -> 'MultisetExpression[T]':
 404        if not isinstance(n, int):
 405            return NotImplemented
 406        return self.multiply_counts(n)
 407
 408    def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]':
 409        """Multiplies all counts by n.
 410
 411        Same as `self * n`.
 412
 413        Example:
 414        ```python
 415        Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
 416        ```
 417        """
 418        return icepool.operator.MultisetMultiplyCounts(self, constant=n)
 419
 420    @overload
 421    def __floordiv__(self, other: int) -> 'MultisetExpression[T]':
 422        ...
 423
 424    @overload
 425    def __floordiv__(
 426        self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 427    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 428        """Same as divide_counts()."""
 429
 430    @overload
 431    def __floordiv__(
 432        self,
 433        other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 434    ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 435        """Same as count_subset()."""
 436
 437    def __floordiv__(
 438        self,
 439        other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 440    ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 441        if isinstance(other, int):
 442            return self.divide_counts(other)
 443        else:
 444            return self.count_subset(other)
 445
 446    def divide_counts(self, n: int, /) -> 'MultisetExpression[T]':
 447        """Divides all counts by n (rounding down).
 448
 449        Same as `self // n`.
 450
 451        Example:
 452        ```python
 453        Pool([1, 2, 2, 3]) // 2 -> [2]
 454        ```
 455        """
 456        return icepool.operator.MultisetFloordivCounts(self, constant=n)
 457
 458    def __mod__(self, n: int, /) -> 'MultisetExpression[T]':
 459        if not isinstance(n, int):
 460            return NotImplemented
 461        return icepool.operator.MultisetModuloCounts(self, constant=n)
 462
 463    def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]':
  464        """Modulos all counts by n.
 465
 466        Same as `self % n`.
 467
 468        Example:
 469        ```python
 470        Pool([1, 2, 2, 3]) % 2 -> [1, 3]
 471        ```
 472        """
 473        return self % n
 474
 475    def __pos__(self) -> 'MultisetExpression[T]':
 476        """Sets all negative counts to zero."""
 477        return icepool.operator.MultisetKeepCounts(self,
 478                                                   comparison='>=',
 479                                                   constant=0)
 480
 481    def __neg__(self) -> 'MultisetExpression[T]':
 482        """As -1 * self."""
 483        return -1 * self
 484
 485    def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=',
 486                                              '>'], n: int,
 487                    /) -> 'MultisetExpression[T]':
 488        """Keeps counts fitting the comparison, treating the rest as zero.
 489
 490        For example, `expression.keep_counts('>=', 2)` would keep pairs,
 491        triplets, etc. and drop singles.
 492
 493        ```python
 494        Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
 495        ```
 496        
 497        Args:
 498            comparison: The comparison to use.
 499            n: The number to compare counts against.
 500        """
 501        return icepool.operator.MultisetKeepCounts(self,
 502                                                   comparison=comparison,
 503                                                   constant=n)
 504
 505    def unique(self, n: int = 1, /) -> 'MultisetExpression[T]':
 506        """Counts each outcome at most `n` times.
 507
 508        For example, `generator.unique(2)` would count each outcome at most
 509        twice.
 510
 511        Example:
 512        ```python
 513        Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
 514        ```
 515        """
 516        return icepool.operator.MultisetUnique(self, constant=n)
 517
 518    # Keep highest / lowest.
 519
 520    @overload
 521    def keep(
 522        self, index: slice | Sequence[int | EllipsisType]
 523    ) -> 'MultisetExpression[T]':
 524        ...
 525
 526    @overload
 527    def keep(self,
 528             index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 529        ...
 530
 531    def keep(
 532        self, index: slice | Sequence[int | EllipsisType] | int
 533    ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 534        """Selects elements after drawing and sorting.
 535
 536        This is less capable than the `KeepGenerator` version.
 537        In particular, it does not know how many elements it is selecting from,
 538        so it must be anchored at the starting end. The advantage is that it
 539        can be applied to any expression.
 540
 541        The valid types of argument are:
 542
 543        * A `slice`. If both start and stop are provided, they must both be
 544            non-negative or both be negative. step is not supported.
 545        * A sequence of `int` with `...` (`Ellipsis`) at exactly one end.
 546            Each sorted element will be counted that many times, with the
 547            `Ellipsis` treated as enough zeros (possibly "negative") to
 548            fill the rest of the elements.
 549        * An `int`, which evaluates by taking the element at the specified
 550            index. In this case the result is a `Die`.
 551
 552        Negative incoming counts are treated as zero counts.
 553
 554        Use the `[]` operator for the same effect as this method.
 555        """
 556        if isinstance(index, int):
 557            return icepool.evaluator.keep_evaluator.evaluate(self, index=index)
 558        else:
 559            return icepool.operator.MultisetKeep(self, index=index)
 560
 561    @overload
 562    def __getitem__(
 563        self, index: slice | Sequence[int | EllipsisType]
 564    ) -> 'MultisetExpression[T]':
 565        ...
 566
 567    @overload
 568    def __getitem__(
 569            self,
 570            index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 571        ...
 572
 573    def __getitem__(
 574        self, index: slice | Sequence[int | EllipsisType] | int
 575    ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]':
 576        return self.keep(index)
 577
 578    def lowest(self,
 579               keep: int | None = None,
 580               drop: int | None = None) -> 'MultisetExpression[T]':
 581        """Keep some of the lowest elements from this multiset and drop the rest.
 582
 583        In contrast to the die and free function versions, this does not
 584        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
 585        Alternatively, you can perform some other evaluation.
 586
 587        This requires the outcomes to be evaluated in ascending order.
 588
 589        Args:
 590            keep, drop: These arguments work together:
 591                * If neither are provided, the single lowest element
 592                    will be kept.
 593                * If only `keep` is provided, the `keep` lowest elements
 594                    will be kept.
 595                * If only `drop` is provided, the `drop` lowest elements
 596                    will be dropped and the rest will be kept.
 597                * If both are provided, `drop` lowest elements will be dropped,
 598                    then the next `keep` lowest elements will be kept.
 599        """
 600        index = lowest_slice(keep, drop)
 601        return self.keep(index)
 602
 603    def highest(self,
 604                keep: int | None = None,
 605                drop: int | None = None) -> 'MultisetExpression[T]':
 606        """Keep some of the highest elements from this multiset and drop the rest.
 607
 608        In contrast to the die and free function versions, this does not
 609        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
 610        Alternatively, you can perform some other evaluation.
 611
 612        This requires the outcomes to be evaluated in descending order.
 613
 614        Args:
 615            keep, drop: These arguments work together:
 616                * If neither are provided, the single highest element
 617                    will be kept.
 618                * If only `keep` is provided, the `keep` highest elements
 619                    will be kept.
 620                * If only `drop` is provided, the `drop` highest elements
 621                    will be dropped and the rest will be kept.
 622                * If both are provided, `drop` highest elements will be dropped, 
 623                    then the next `keep` highest elements will be kept.
 624        """
 625        index = highest_slice(keep, drop)
 626        return self.keep(index)
 627
 628    # Matching.
 629
 630    def sort_match(self,
 631                   comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
 632                   other: 'MultisetExpression[T]',
 633                   /,
 634                   order: Order = Order.Descending) -> 'MultisetExpression[T]':
 635        """EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner.
 636
 637        Extra elements: If `self` has more elements than `other`, whether the
 638        extra elements are kept depends on the `order` and `comparison`:
 639        * Descending: kept for `'>='`, `'>'`
 640        * Ascending: kept for `'<='`, `'<'`
 641
 642        Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of
 643        *RISK*. Which pairs did the attacker win?
 644        ```python
 645        d6.pool(3).highest(2).sort_match('>', d6.pool(2))
 646        ```
 647        
 648        Suppose the attacker rolled 6, 4, 3 and the defender 5, 5.
 649        In this case the 4 would be blocked since the attacker lost that pair,
 650        leaving the attacker's 6 and 3. If you don't want to keep the extra
 651        element, you can use `highest`.
 652        ```python
 653        Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3]
 654        Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6]
 655        ```
 656
 657        Contrast `maximum_match()`, which first creates the maximum number of
 658        pairs that fit the comparison, not necessarily in sorted order.
 659        In the above example, `maximum_match()` would allow the defender to
 660        assign their 5s to block both the 4 and the 3.
 661
 662        Negative incoming counts are treated as zero counts.
 663        
 664        Args:
 665            comparison: The comparison to filter by. If you want to drop rather
 666                than keep, use the complementary comparison:
 667                * `'=='` vs. `'!='`
 668                * `'<='` vs. `'>'`
 669                * `'>='` vs. `'<'`
 670            other: The other multiset to match elements with.
 671            order: The order in which to sort before forming matches.
 672                Default is descending.
 673        """
 674        other = implicit_convert_to_expression(other)
 675
 676        match comparison:
 677            case '==':
 678                lesser, tie, greater = 0, 1, 0
 679            case '!=':
 680                lesser, tie, greater = 1, 0, 1
 681            case '<=':
 682                lesser, tie, greater = 1, 1, 0
 683            case '<':
 684                lesser, tie, greater = 1, 0, 0
 685            case '>=':
 686                lesser, tie, greater = 0, 1, 1
 687            case '>':
 688                lesser, tie, greater = 0, 0, 1
 689            case _:
 690                raise ValueError(f'Invalid comparison {comparison}')
 691
 692        if order > 0:
 693            left_first = lesser
 694            right_first = greater
 695        else:
 696            left_first = greater
 697            right_first = lesser
 698
 699        return icepool.operator.MultisetSortMatch(self,
 700                                                  other,
 701                                                  order=order,
 702                                                  tie=tie,
 703                                                  left_first=left_first,
 704                                                  right_first=right_first)
 705
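On a single fixed roll, the pairing rule can be sketched in plain Python (a hypothetical `sort_match_keep` helper, not part of icepool, which actually evaluates this over whole probability distributions):

```python
import operator

# Hypothetical helper illustrating sort_match semantics on fixed rolls.
OPS = {'==': operator.eq, '!=': operator.ne, '<=': operator.le,
       '<': operator.lt, '>=': operator.ge, '>': operator.gt}

def sort_match_keep(left, right, comparison, descending=True):
    left = sorted(left, reverse=descending)
    right = sorted(right, reverse=descending)
    # Pair elements in sorted order and keep those that fit the comparison.
    kept = [l for l, r in zip(left, right) if OPS[comparison](l, r)]
    # Extra elements of `left` beyond len(right) are kept for '>=' / '>'
    # in descending order, and for '<=' / '<' in ascending order.
    if comparison in (('>=', '>') if descending else ('<=', '<')):
        kept += left[len(right):]
    return kept

print(sort_match_keep([6, 4, 3], [5, 5], '>'))  # [6, 3]
```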
 706    def maximum_match_highest(
 707            self, comparison: Literal['<=',
 708                                      '<'], other: 'MultisetExpression[T]', /,
 709            *, keep: Literal['matched',
 710                             'unmatched']) -> 'MultisetExpression[T]':
 711        """EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`.
 712
 713        This matches elements of `self` with elements of `other`, such that in
 714        each pair the element from `self` fits the `comparison` with the
 715        element from `other`. As many such pairs as possible are matched,
 716        preferring the highest matchable elements of `self`.
 717        Finally, either the matched or unmatched elements from `self` are kept.
 718
 719        This requires that outcomes be evaluated in descending order.
 720
 721        Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 
 722        3d6. Defender dice can be used to block attacker dice of equal or lesser
 723        value, and the defender prefers to block the highest attacker dice
 724        possible. Which attacker dice were not blocked?
 725        ```python
 726        d6.pool(4).maximum_match_highest('<=', d6.pool(3), keep='unmatched').sum()
 727        ```
 728
 729        Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
 730        Then the result would be [6, 1].
 731        ```python
 732        Pool([6, 4, 3, 1]).maximum_match_highest('<=', [5, 5], keep='unmatched')
 733        -> [6, 1]
 734        ```
 735
 736        Contrast `sort_match()`, which first creates pairs in
 737        sorted order and then filters them by `comparison`.
 738        In the above example, `sort_match()` would force the defender to match
 739        against the 5 and the 4, which would only allow them to block the 4.
 740
 741        Negative incoming counts are treated as zero counts.
 742
 743        Args:
 744            comparison: Either `'<='` or `'<'`.
 745            other: The other multiset to match elements with.
 746            keep: Whether 'matched' or 'unmatched' elements are to be kept.
 747        """
 748        if keep == 'matched':
 749            keep_boolean = True
 750        elif keep == 'unmatched':
 751            keep_boolean = False
 752        else:
 753            raise ValueError("keep must be either 'matched' or 'unmatched'")
 754
 755        other = implicit_convert_to_expression(other)
 756        match comparison:
 757            case '<=':
 758                match_equal = True
 759            case '<':
 760                match_equal = False
 761            case _:
 762                raise ValueError(f'Invalid comparison {comparison}')
 763        return icepool.operator.MultisetMaximumMatch(self,
 764                                                     other,
 765                                                     order=Order.Descending,
 766                                                     match_equal=match_equal,
 767                                                     keep=keep_boolean)
 768
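The greedy matching rule can be sketched in plain Python on fixed rolls (a hypothetical helper, not icepool's evaluator, which works over probability distributions):

```python
import operator

def maximum_match_highest(left, right, comparison='<=', keep='unmatched'):
    # Hypothetical sketch: match the highest elements of `left` with
    # fitting elements of `right`; each right element blocks at most one.
    fits = {'<=': operator.le, '<': operator.lt}[comparison]
    blockers = sorted(right)  # ascending, so blockers[-1] is the largest
    matched, unmatched = [], []
    for element in sorted(left, reverse=True):  # highest of `left` first
        if blockers and fits(element, blockers[-1]):
            blockers.pop()  # consume the largest remaining blocker
            matched.append(element)
        else:
            unmatched.append(element)
    return matched if keep == 'matched' else unmatched

print(maximum_match_highest([6, 4, 3, 1], [5, 5]))  # [6, 1]
```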
 769    def maximum_match_lowest(
 770            self, comparison: Literal['>=',
 771                                      '>'], other: 'MultisetExpression[T]', /,
 772            *, keep: Literal['matched',
 773                             'unmatched']) -> 'MultisetExpression[T]':
 774        """EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`.
 775
 776        This matches elements of `self` with elements of `other`, such that in
 777        each pair the element from `self` fits the `comparison` with the
 778        element from `other`. As many such pairs as possible are matched,
 779        preferring the lowest matchable elements of `self`.
 780        Finally, either the matched or unmatched elements from `self` are kept.
 781
 782        This requires that outcomes be evaluated in ascending order.
 783
 784        Contrast `sort_match()`, which first creates pairs in
 785        sorted order and then filters them by `comparison`.
 786
 787        Args:
 788            comparison: Either `'>='` or `'>'`.
 789            other: The other multiset to match elements with.
 790            keep: Whether 'matched' or 'unmatched' elements are to be kept.
 791        """
 792        if keep == 'matched':
 793            keep_boolean = True
 794        elif keep == 'unmatched':
 795            keep_boolean = False
 796        else:
 797            raise ValueError("keep must be either 'matched' or 'unmatched'")
 798
 799        other = implicit_convert_to_expression(other)
 800        match comparison:
 801            case '>=':
 802                match_equal = True
 803            case '>':
 804                match_equal = False
 805            case _:
 806                raise ValueError(f'Invalid comparison {comparison}')
 807        return icepool.operator.MultisetMaximumMatch(self,
 808                                                     other,
 809                                                     order=Order.Ascending,
 810                                                     match_equal=match_equal,
 811                                                     keep=keep_boolean)
 812
 813    # Evaluations.
 814
 815    def expand(
 816        self,
 817        order: Order = Order.Ascending
 818    ) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]':
 819        """Evaluation: All elements of the multiset in sorted order.
 820
 821        This is expensive and not recommended unless there are few possibilities.
 822
 823        Args:
 824            order: Whether the elements are in ascending (default) or descending
 825                order.
 826        """
 827        return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order)
 828
 829    def sum(
 830        self,
 831        map: Callable[[T], U] | Mapping[T, U] | None = None
 832    ) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]':
 833        """Evaluation: The sum of all elements."""
 834        if map is None:
 835            return icepool.evaluator.sum_evaluator.evaluate(self)
 836        else:
 837            return icepool.evaluator.SumEvaluator(map).evaluate(self)
 838
 839    def size(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 840        """Evaluation: The total number of elements in the multiset.
 841
 842        This is usually not very interesting unless some other operation is
 843        performed first. Examples:
 844
 845        `generator.unique().size()` will count the number of unique outcomes.
 846
 847        `(generator & [4, 5, 6]).size()` will count up to one each of
 848        4, 5, and 6.
 849        """
 850        return icepool.evaluator.size_evaluator.evaluate(self)
 851
 852    def any(self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
 853        """Evaluation: Whether the multiset has at least one positive count. """
 854        return icepool.evaluator.any_evaluator.evaluate(self)
 855
 856    def highest_outcome_and_count(
 857        self
 858    ) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]':
 859        """Evaluation: The highest outcome with positive count, along with that count.
 860
 861        If no outcomes have positive count, the min outcome will be returned with 0 count.
 862        """
 863        return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate(
 864            self)
 865
 866    def all_counts(
 867        self,
 868        filter: int | Literal['all'] = 1
 869    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]':
 870        """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.
 871
 872        The sizes are in **descending** order.
 873
 874        Args:
 875            filter: Any counts below this value will not be in the output.
 876                For example, `filter=2` will only produce pairs and better.
 877                If `'all'`, no filtering will be done.
 878
 879                Why not just place `keep_counts_ge()` before this?
 880                `keep_counts_ge()` operates by setting counts to zero, so you
 881                would still need an argument to specify whether you want to
 882                output zero counts. So we might as well use the argument to do
 883                both.
 884        """
 885        return icepool.evaluator.AllCountsEvaluator(
 886            filter=filter).evaluate(self)
 887
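On a fixed roll, the result can be sketched with `collections.Counter` (a hypothetical helper, not icepool's evaluator):

```python
from collections import Counter

def all_counts(outcomes, filter=1):
    # Hypothetical sketch: sizes of all matching sets in descending order,
    # dropping counts below `filter` (parameter name mirrors the API above).
    counts = sorted(Counter(outcomes).values(), reverse=True)
    if filter != 'all':
        counts = [c for c in counts if c >= filter]
    return tuple(counts)

print(all_counts([3, 3, 5, 5, 5, 6], filter=2))  # (3, 2)
```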
 888    def largest_count(
 889            self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 890        """Evaluation: The size of the largest matching set among the elements."""
 891        return icepool.evaluator.largest_count_evaluator.evaluate(self)
 892
 893    def largest_count_and_outcome(
 894        self
 895    ) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]':
 896        """Evaluation: The largest matching set among the elements and the corresponding outcome."""
 897        return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate(
 898            self)
 899
 900    def __rfloordiv__(
 901        self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
 902    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 903        return implicit_convert_to_expression(other).count_subset(self)
 904
 905    def count_subset(
 906        self,
 907        divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 908        /,
 909        *,
 910        empty_divisor: int | None = None
 911    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
 912        """Evaluation: The number of times the divisor is contained in this multiset.
 913        
 914        Args:
 915            divisor: The multiset to divide by.
 916            empty_divisor: If the divisor is empty, the outcome will be this.
 917                If not set, `ZeroDivisionError` will be raised for an empty
 918                right side.
 919
 920        Raises:
 921            ZeroDivisionError: If the divisor may be empty and
 922                `empty_divisor` is not set.
 923        """
 924        divisor = implicit_convert_to_expression(divisor)
 925        return icepool.evaluator.CountSubsetEvaluator(
 926            empty_divisor=empty_divisor).evaluate(self, divisor)
 927
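On fixed multisets, the division is a per-outcome floor division of counts, as in this hypothetical plain-Python sketch:

```python
from collections import Counter

def count_subset(multiset, divisor):
    # Hypothetical sketch: how many whole copies of `divisor` fit inside
    # `multiset` simultaneously (multiset floor division).
    m, d = Counter(multiset), Counter(divisor)
    if not d:
        raise ZeroDivisionError('empty divisor')
    return min(m[outcome] // count for outcome, count in d.items())

print(count_subset([1, 1, 2, 2, 2, 3], [1, 2]))  # 2
```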
 928    def largest_straight(
 929        self: 'MultisetExpression[int]'
 930    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]':
 931        """Evaluation: The size of the largest straight among the elements.
 932
 933        Outcomes must be `int`s.
 934        """
 935        return icepool.evaluator.largest_straight_evaluator.evaluate(self)
 936
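On a fixed roll, the straight length depends only on the distinct outcomes, as in this hypothetical sketch:

```python
def largest_straight(outcomes):
    # Hypothetical sketch on fixed int outcomes; duplicates don't extend
    # a straight, so only distinct values matter.
    values = sorted(set(outcomes))
    best = run = 1 if values else 0
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best

print(largest_straight([1, 2, 2, 3, 5]))  # 3
```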
 937    def largest_straight_and_outcome(
 938        self: 'MultisetExpression[int]',
 939        priority: Literal['low', 'high'] = 'high',
 940        /
 941    ) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]':
 942        """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.
 943
 944        Straight size is prioritized first, then the outcome.
 945
 946        Outcomes must be `int`s.
 947
 948        Args:
 949            priority: Controls which outcome within the straight is returned,
 950                and which straight is picked if there is a tie for largest
 951                straight.
 952        """
 953        if priority == 'high':
 954            return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate(
 955                self)
 956        elif priority == 'low':
 957            return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate(
 958                self)
 959        else:
 960            raise ValueError("priority must be 'low' or 'high'.")
 961
 962    def all_straights(
 963        self: 'MultisetExpression[int]'
 964    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]':
 965        """Evaluation: The sizes of all straights.
 966
 967        The sizes are in **descending** order.
 968
 969        Each element can only contribute to one straight, though duplicate
 970        elements can produce straights that overlap in outcomes. In this case,
 971        elements are preferentially assigned to the longer straight.
 972        """
 973        return icepool.evaluator.all_straights_evaluator.evaluate(self)
 974
 975    def all_straights_reduce_counts(
 976        self: 'MultisetExpression[int]',
 977        reducer: Callable[[int, int], int] = operator.mul
 978    ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]':
 979        """Experimental: All straights with a reduce operation on the counts.
 980
 981        This can be used to evaluate e.g. cribbage-style straight counting.
 982
 983        The result is a tuple of `(run_length, run_score)`s.
 984        """
 985        return icepool.evaluator.AllStraightsReduceCountsEvaluator(
 986            reducer=reducer).evaluate(self)
 987
 988    def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 989                *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 990                order: Order = Order.Descending,
 991                limit: int | None = None):
 992        """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.
 993
 994        Example:
 995        ```python
 996        MultisetExpression.argsort([10, 9, 5], [9, 9])
 997        ```
 998        produces
 999        ```python
1000        ((0,), (0, 1, 1), (0,))
1001        ```
1002        
1003        Args:
1004            self, *args: The multiset expressions to be evaluated.
1005            order: Which order the ranks are to be emitted. Default is descending.
1006            limit: How many ranks to emit. Default will emit all ranks, which
1007                makes the length of each outcome equal to
1008                `additive_union(+self, +arg1, +arg2, ...).unique().size()`
1009        """
1010        self = implicit_convert_to_expression(self)
1011        converted_args = [implicit_convert_to_expression(arg) for arg in args]
1012        return icepool.evaluator.ArgsortEvaluator(order=order,
1013                                                  limit=limit).evaluate(
1014                                                      self, *converted_args)
1015
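The example can be reproduced on fixed lists with a hypothetical plain-Python sketch (icepool itself evaluates this over probability distributions):

```python
def argsort(*multisets, descending=True):
    # Hypothetical sketch: for each distinct value (rank), list which input
    # multisets contributed it, with multiplicity.
    ranks = sorted({v for m in multisets for v in m}, reverse=descending)
    return tuple(
        tuple(i for i, m in enumerate(multisets) for v in m if v == rank)
        for rank in ranks)

print(argsort([10, 9, 5], [9, 9]))  # ((0,), (0, 1, 1), (0,))
```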
1016    # Comparators.
1017
1018    def _compare(
1019        self,
1020        right: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1021        operation_class: Type['icepool.evaluator.ComparisonEvaluator'],
1022        *,
1023        truth_value_callback: 'Callable[[], bool] | None' = None
1024    ) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1025        right = icepool.implicit_convert_to_expression(right)
1026
1027        if truth_value_callback is not None:
1028
1029            def data_callback() -> Counts[bool]:
1030                die = cast('icepool.Die[bool]',
1031                           operation_class().evaluate(self, right))
1032                if not isinstance(die, icepool.Die):
1033                    raise TypeError('Did not resolve to a die.')
1034                return die._data
1035
1036            return icepool.DieWithTruth(data_callback, truth_value_callback)
1037        else:
1038            return operation_class().evaluate(self, right)
1039
1040    def __lt__(self,
1041               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1042               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1043        try:
1044            return self._compare(other,
1045                                 icepool.evaluator.IsProperSubsetEvaluator)
1046        except TypeError:
1047            return NotImplemented
1048
1049    def __le__(self,
1050               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1051               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1052        try:
1053            return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
1054        except TypeError:
1055            return NotImplemented
1056
1057    def issubset(
1058            self,
1059            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1060            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1061        """Evaluation: Whether this multiset is a subset of the other multiset.
1062
1063        Specifically, if this multiset has a lesser or equal count for each
1064        outcome than the other multiset, this evaluates to `True`; 
1065        if there is some outcome for which this multiset has a greater count 
1066        than the other multiset, this evaluates to `False`.
1067
1068        `issubset` is the same as `self <= other`.
1069        
1070        `self < other` evaluates a proper subset relation, which is the same
1071        except the result is `False` if the two multisets are exactly equal.
1072        """
1073        return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
1074
1075    def __gt__(self,
1076               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1077               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1078        try:
1079            return self._compare(other,
1080                                 icepool.evaluator.IsProperSupersetEvaluator)
1081        except TypeError:
1082            return NotImplemented
1083
1084    def __ge__(self,
1085               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1086               /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1087        try:
1088            return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
1089        except TypeError:
1090            return NotImplemented
1091
1092    def issuperset(
1093            self,
1094            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1095            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1096        """Evaluation: Whether this multiset is a superset of the other multiset.
1097
1098        Specifically, if this multiset has a greater or equal count for each
1099        outcome than the other multiset, this evaluates to `True`; 
 1100        if there is some outcome for which this multiset has a lesser count
1101        than the other multiset, this evaluates to `False`.
1102        
1103        A typical use of this evaluation is testing for the presence of a
1104        combo of cards in a hand, e.g.
1105
1106        ```python
1107        deck.deal(5) >= ['a', 'a', 'b']
1108        ```
1109
1110        represents the chance that a deal of 5 cards contains at least two 'a's
1111        and one 'b'.
1112
1113        `issuperset` is the same as `self >= other`.
1114
1115        `self > other` evaluates a proper superset relation, which is the same
1116        except the result is `False` if the two multisets are exactly equal.
1117        """
1118        return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
1119
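On fixed multisets, the superset test compares counts outcome by outcome, as in this hypothetical `Counter`-based sketch:

```python
from collections import Counter

def issuperset(left, right):
    # Hypothetical sketch: True iff every count on the left is at least
    # its counterpart on the right.
    l, r = Counter(left), Counter(right)
    return all(l[outcome] >= count for outcome, count in r.items())

# e.g. does a 5-card hand contain at least two 'a's and one 'b'?
print(issuperset(['a', 'a', 'b', 'c', 'd'], ['a', 'a', 'b']))  # True
```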
1120    def __eq__(  # type: ignore
1121            self,
1122            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1123            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1124        try:
1125
1126            def truth_value_callback() -> bool:
1127                return self.equals(other)
1128
1129            return self._compare(other,
1130                                 icepool.evaluator.IsEqualSetEvaluator,
1131                                 truth_value_callback=truth_value_callback)
1132        except TypeError:
1133            return NotImplemented
1134
1135    def __ne__(  # type: ignore
1136            self,
1137            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1138            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1139        try:
1140
1141            def truth_value_callback() -> bool:
1142                return not self.equals(other)
1143
1144            return self._compare(other,
1145                                 icepool.evaluator.IsNotEqualSetEvaluator,
1146                                 truth_value_callback=truth_value_callback)
1147        except TypeError:
1148            return NotImplemented
1149
1150    def isdisjoint(
1151            self,
1152            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1153            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1154        """Evaluation: Whether this multiset is disjoint from the other multiset.
1155        
1156        Specifically, this evaluates to `False` if there is any outcome for
1157        which both multisets have positive count, and `True` if there is not.
1158
1159        Negative incoming counts are treated as zero counts.
1160        """
1161        return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
1162
1163    # For helping debugging / testing.
1164    def force_order(self, force_order: Order) -> 'MultisetExpression[T]':
1165        """Forces outcomes to be seen by the evaluator in the given order.
1166
1167        This can be useful for debugging / testing.
1168        """
1169        if force_order == Order.Any:
1170            return self
1171        return icepool.operator.MultisetForceOrder(self,
1172                                                   force_order=force_order)

Abstract base class representing an expression that operates on single multisets.

There are three types of multiset expressions:

  * `MultisetGenerator`, which produces raw outcomes and counts.
  * `MultisetOperator`, which takes outcomes with one or more counts and produces a count.
  * `MultisetVariable`, which is a temporary placeholder for some other expression.

Expression methods can be applied to MultisetGenerators to do simple evaluations. For joint evaluations, try multiset_function.

Use the provided operations to build up more complicated expressions, or to attach a final evaluator.

Operations include:

| Operation | Count / notes |
|:--|:--|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `modulo_counts`, `%` | `count % n` |
| `keep_counts` | `count if count >= n else 0` etc. |
| unary `+` | same as `keep_counts_ge(0)` |
| unary `-` | reverses the sign of all counts |
| `unique` | `min(count, n)` |
| `keep_outcomes` | `count if outcome in t else 0` |
| `drop_outcomes` | `count if outcome not in t else 0` |
| `map_counts` | `f(outcome, *counts)` |
| `keep`, `[]` | less capable than the `KeepGenerator` version |
| `highest` | less capable than the `KeepGenerator` version |
| `lowest` | less capable than the `KeepGenerator` version |

| Evaluator | Summary |
|:--|:--|
| `issubset`, `<=` | Whether the left side's counts are all <= their counterparts on the right |
| `issuperset`, `>=` | Whether the left side's counts are all >= their counterparts on the right |
| `isdisjoint` | Whether the left side has no positive counts in common with the right side |
| `<` | As `<=`, but `False` if the two multisets are equal |
| `>` | As `>=`, but `False` if the two multisets are equal |
| `==` | Whether the left side has all the same counts as the right side |
| `!=` | Whether the left side has any different counts to the right side |
| `expand` | All elements in ascending order |
| `sum` | Sum of all elements |
| `count` | The number of elements |
| `any` | Whether there is at least 1 element |
| `highest_outcome_and_count` | The highest outcome and how many of that outcome |
| `all_counts` | All counts in descending order |
| `largest_count` | The single largest count, aka x-of-a-kind |
| `largest_count_and_outcome` | Same but also with the corresponding outcome |
| `count_subset`, `//` | The number of times the right side is contained in the left side |
| `largest_straight` | Length of longest consecutive sequence |
| `largest_straight_and_outcome` | Same but also with the corresponding outcome |
| `all_straights` | Lengths of all consecutive sequences in descending order |
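On fixed multisets, the per-outcome count rules in the operation table can be sketched with `collections.Counter` (a hypothetical `combine` helper, not icepool's API):

```python
from collections import Counter

def combine(left, right, rule):
    # Hypothetical helper: apply a per-outcome count rule, clamping any
    # negative result to zero as the operations above do.
    l, r = Counter(left), Counter(right)
    return {o: max(0, rule(l[o], r[o])) for o in sorted(set(l) | set(r))}

a, b = [1, 2, 2, 3], [1, 2, 4]
print(combine(a, b, lambda x, y: x + y))       # additive_union: l + r
print(combine(a, b, min))                      # intersection: min(l, r)
print(combine(a, b, max))                      # union: max(l, r)
print(combine(a, b, lambda x, y: abs(x - y)))  # symmetric_difference
```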
def additive_union( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
167    def additive_union(
168        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
169    ) -> 'MultisetExpression[T]':
170        """The combined elements from all of the multisets.
171
172        Same as `a + b + c + ...`.
173
174        Any resulting counts that would be negative are set to zero.
175
176        Example:
177        ```python
178        [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
179        ```
180        """
181        expressions = tuple(
182            implicit_convert_to_expression(arg) for arg in args)
183        return icepool.operator.MultisetAdditiveUnion(*expressions)

def difference( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
202    def difference(
203        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
204    ) -> 'MultisetExpression[T]':
205        """The elements from the left multiset that are not in any of the others.
206
207        Same as `a - b - c - ...`.
208
209        Any resulting counts that would be negative are set to zero.
210
211        Example:
212        ```python
213        [1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
214        ```
215
216        If no arguments are given, the result will be an empty multiset, i.e.
217        all zero counts.
218
219        Note that, as a multiset operation, this will only cancel elements 1:1.
220        If you want to drop all elements in a set of outcomes regardless of
221        count, either use `drop_outcomes()` instead, or use a large number of
222        counts on the right side.
223        """
224        expressions = tuple(
225            implicit_convert_to_expression(arg) for arg in args)
226        return icepool.operator.MultisetDifference(*expressions)


def intersection( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
245    def intersection(
246        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
247    ) -> 'MultisetExpression[T]':
248        """The elements that all the multisets have in common.
249
250        Same as `a & b & c & ...`.
251
252        Any resulting counts that would be negative are set to zero.
253
254        Example:
255        ```python
256        [1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
257        ```
258
259        Note that, as a multiset operation, this will only intersect elements
260        1:1.
261        If you want to keep all elements in a set of outcomes regardless of
262        count, either use `keep_outcomes()` instead, or use a large number of
263        counts on the right side.
264        """
265        expressions = tuple(
266            implicit_convert_to_expression(arg) for arg in args)
267        return icepool.operator.MultisetIntersection(*expressions)


def union( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]]) -> MultisetExpression[~T]:
285    def union(
286        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
287    ) -> 'MultisetExpression[T]':
288        """The most of each outcome that appear in any of the multisets.
289
290        Same as `a | b | c | ...`.
291
292        Any resulting counts that would be negative are set to zero.
293
294        Example:
295        ```python
296        [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
297        ```
298        """
299        expressions = tuple(
300            implicit_convert_to_expression(arg) for arg in args)
301        return icepool.operator.MultisetUnion(*expressions)

def symmetric_difference( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> MultisetExpression[~T]:
321    def symmetric_difference(
322            self,
323            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
324            /) -> 'MultisetExpression[T]':
325        """The elements that appear in the left or right multiset but not both.
326
327        Same as `a ^ b`.
328
329        Specifically, this produces the absolute difference between counts.
330        If you don't want negative counts to be used from the inputs, you can
331        do `+left ^ +right`.
332
333        Example:
334        ```python
335        [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
336        ```
337        """
338        return icepool.operator.MultisetSymmetricDifference(
339            self, implicit_convert_to_expression(other))

The elements that appear in the left or right multiset but not both.

Same as a ^ b.

Specifically, this produces the absolute difference between counts. If you don't want negative counts to be used from the inputs, you can do +left ^ +right.

Example:

[1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
def keep_outcomes( self, target: Union[Callable[[~T], bool], Collection[~T], MultisetExpression[~T]], /) -> MultisetExpression[~T]:
341    def keep_outcomes(
342            self, target:
343        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
344            /) -> 'MultisetExpression[T]':
345        """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.
346
347        This is similar to `intersection()`, except the right side is considered
348        to have unlimited multiplicity.
349
350        Args:
351            target: A callable returning `True` iff the outcome should be kept,
352                or an expression or collection of outcomes to keep.
353        """
354        if isinstance(target, MultisetExpression):
355            return icepool.operator.MultisetFilterOutcomesBinary(self, target)
356        else:
357            return icepool.operator.MultisetFilterOutcomes(self, target=target)

Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.

This is similar to intersection(), except the right side is considered to have unlimited multiplicity.

Arguments:
  • target: A callable returning True iff the outcome should be kept, or an expression or collection of outcomes to keep.
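As a rough illustration of these semantics (a hypothetical helper built on the standard library's `collections.Counter`, not icepool's actual implementation):

```python
from collections import Counter

def keep_outcomes(multiset: Counter, target) -> Counter:
    # Keep counts for outcomes in `target` (a predicate or a collection);
    # all other counts are dropped to zero.
    pred = target if callable(target) else set(target).__contains__
    return Counter({o: c for o, c in multiset.items() if pred(o)})

keep_outcomes(Counter([1, 2, 2, 3]), [2, 3])  # -> Counter({2: 2, 3: 1})
```

Unlike `intersection()`, the count of each kept outcome is unchanged, no matter how large.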
def drop_outcomes( self, target: Union[Callable[[~T], bool], Collection[~T], MultisetExpression[~T]], /) -> MultisetExpression[~T]:
359    def drop_outcomes(
360            self, target:
361        'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
362            /) -> 'MultisetExpression[T]':
363        """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.
364
365        This is similar to `difference()`, except the right side is considered
366        to have unlimited multiplicity.
367
368        Args:
369            target: A callable returning `True` iff the outcome should be
370                dropped, or an expression or collection of outcomes to drop.
371        """
372        if isinstance(target, MultisetExpression):
373            return icepool.operator.MultisetFilterOutcomesBinary(self,
374                                                                 target,
375                                                                 invert=True)
376        else:
377            return icepool.operator.MultisetFilterOutcomes(self,
378                                                           target=target,
379                                                           invert=True)

Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.

This is similar to difference(), except the right side is considered to have unlimited multiplicity.

Arguments:
  • target: A callable returning True iff the outcome should be dropped, or an expression or collection of outcomes to drop.
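The complementary semantics can be sketched the same way (a hypothetical `Counter`-based helper, not icepool's actual implementation):

```python
from collections import Counter

def drop_outcomes(multiset: Counter, target) -> Counter:
    # Zero out counts for outcomes in `target` (a predicate or a collection);
    # the remaining counts are kept unchanged.
    pred = target if callable(target) else set(target).__contains__
    return Counter({o: c for o, c in multiset.items() if not pred(o)})

drop_outcomes(Counter([1, 2, 2, 3]), [2])  # -> Counter({1: 1, 3: 1})
```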
def map_counts( *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], function: Callable[..., int]) -> MultisetExpression[~T]:
383    def map_counts(*args:
384                   'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
385                   function: Callable[..., int]) -> 'MultisetExpression[T]':
386        """Maps the counts to new counts.
387
388        Args:
389            function: A function that takes `outcome, *counts` and produces a
390                combined count.
391        """
392        expressions = tuple(
393            implicit_convert_to_expression(arg) for arg in args)
394        return icepool.operator.MultisetMapCounts(*expressions,
395                                                  function=function)

Maps the counts to new counts.

Arguments:
  • function: A function that takes outcome, *counts and produces a combined count.
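A sketch of the calling convention, using a hypothetical helper over `collections.Counter` rather than icepool's expression machinery:

```python
from collections import Counter

def map_counts(function, *multisets: Counter) -> dict:
    # Call function(outcome, *counts) for every outcome appearing in any
    # input, collecting the combined counts into a plain dict.
    outcomes = set().union(*multisets)
    return {o: function(o, *(m[o] for m in multisets)) for o in outcomes}

# With min() as the combiner, this reproduces multiset intersection:
result = map_counts(lambda o, a, b: min(a, b),
                    Counter([1, 2, 2, 3]), Counter([1, 2, 4]))
```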
def multiply_counts( self, n: int, /) -> MultisetExpression[~T]:
408    def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]':
409        """Multiplies all counts by n.
410
411        Same as `self * n`.
412
413        Example:
414        ```python
415        Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
416        ```
417        """
418        return icepool.operator.MultisetMultiplyCounts(self, constant=n)

Multiplies all counts by n.

Same as self * n.

Example:

Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
def divide_counts( self, n: int, /) -> MultisetExpression[~T]:
446    def divide_counts(self, n: int, /) -> 'MultisetExpression[T]':
447        """Divides all counts by n (rounding down).
448
449        Same as `self // n`.
450
451        Example:
452        ```python
453        Pool([1, 2, 2, 3]) // 2 -> [2]
454        ```
455        """
456        return icepool.operator.MultisetFloordivCounts(self, constant=n)

Divides all counts by n (rounding down).

Same as self // n.

Example:

Pool([1, 2, 2, 3]) // 2 -> [2]
def modulo_counts( self, n: int, /) -> MultisetExpression[~T]:
463    def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]':
464        """Reduces all counts modulo n.
465
466        Same as `self % n`.
467
468        Example:
469        ```python
470        Pool([1, 2, 2, 3]) % 2 -> [1, 3]
471        ```
472        """
473        return self % n

Reduces all counts modulo n.

Same as self % n.

Example:

Pool([1, 2, 2, 3]) % 2 -> [1, 3]
def keep_counts( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], n: int, /) -> MultisetExpression[~T]:
485    def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=',
486                                              '>'], n: int,
487                    /) -> 'MultisetExpression[T]':
488        """Keeps counts fitting the comparison, treating the rest as zero.
489
490        For example, `expression.keep_counts('>=', 2)` would keep pairs,
491        triplets, etc. and drop singles.
492
493        ```python
494        Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
495        ```
496        
497        Args:
498            comparison: The comparison to use.
499            n: The number to compare counts against.
500        """
501        return icepool.operator.MultisetKeepCounts(self,
502                                                   comparison=comparison,
503                                                   constant=n)

Keeps counts fitting the comparison, treating the rest as zero.

For example, expression.keep_counts('>=', 2) would keep pairs, triplets, etc. and drop singles.

Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]

Arguments:
  • comparison: The comparison to use.
  • n: The number to compare counts against.
def unique( self, n: int = 1, /) -> MultisetExpression[~T]:
505    def unique(self, n: int = 1, /) -> 'MultisetExpression[T]':
506        """Counts each outcome at most `n` times.
507
508        For example, `generator.unique(2)` would count each outcome at most
509        twice.
510
511        Example:
512        ```python
513        Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
514        ```
515        """
516        return icepool.operator.MultisetUnique(self, constant=n)

Counts each outcome at most n times.

For example, generator.unique(2) would count each outcome at most twice.

Example:

Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
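In count terms this is simply a cap, as in this hypothetical `Counter`-based sketch (not icepool's actual implementation):

```python
from collections import Counter

def unique(multiset: Counter, n: int = 1) -> Counter:
    # Cap each outcome's count at n.
    return Counter({o: min(c, n) for o, c in multiset.items()})

unique(Counter([1, 2, 2, 3]))  # -> Counter({1: 1, 2: 1, 3: 1})
```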
def keep( self, index: Union[slice, Sequence[int | ellipsis], int]) -> Union[MultisetExpression[~T], Die[~T], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, ~T]]:
531    def keep(
532        self, index: slice | Sequence[int | EllipsisType] | int
533    ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]':
534        """Selects elements after drawing and sorting.
535
536        This is less capable than the `KeepGenerator` version.
537        In particular, it does not know how many elements it is selecting from,
538        so it must be anchored at the starting end. The advantage is that it
539        can be applied to any expression.
540
541        The valid types of argument are:
542
543        * A `slice`. If both start and stop are provided, they must both be
544            non-negative or both be negative. step is not supported.
545        * A sequence of `int` with `...` (`Ellipsis`) at exactly one end.
546            Each sorted element will be counted that many times, with the
547            `Ellipsis` treated as enough zeros (possibly "negative") to
548            fill the rest of the elements.
549        * An `int`, which evaluates by taking the element at the specified
550            index. In this case the result is a `Die`.
551
552        Negative incoming counts are treated as zero counts.
553
554        Use the `[]` operator for the same effect as this method.
555        """
556        if isinstance(index, int):
557            return icepool.evaluator.keep_evaluator.evaluate(self, index=index)
558        else:
559            return icepool.operator.MultisetKeep(self, index=index)

Selects elements after drawing and sorting.

This is less capable than the KeepGenerator version. In particular, it does not know how many elements it is selecting from, so it must be anchored at the starting end. The advantage is that it can be applied to any expression.

The valid types of argument are:

  • A slice. If both start and stop are provided, they must both be non-negative or both be negative. step is not supported.
  • A sequence of int with ... (Ellipsis) at exactly one end. Each sorted element will be counted that many times, with the Ellipsis treated as enough zeros (possibly "negative") to fill the rest of the elements.
  • An int, which evaluates by taking the element at the specified index. In this case the result is a Die.

Negative incoming counts are treated as zero counts.

Use the [] operator for the same effect as this method.
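The index semantics above can be sketched in plain Python over a list that gets sorted first (a hypothetical helper assuming ascending order, not icepool's actual implementation):

```python
def keep_sorted(elements, index):
    # Apply a keep-style index to elements after sorting (ascending).
    elements = sorted(elements)
    if isinstance(index, (int, slice)):
        return elements[index]  # single element, or a contiguous run
    # Sequence of counts with ... at exactly one end: the Ellipsis stands
    # for enough zero counts to cover the remaining sorted elements.
    counts = list(index)
    pad = [0] * (len(elements) - len(counts) + 1)
    counts = pad + counts[1:] if counts[0] is Ellipsis else counts[:-1] + pad
    return [e for e, c in zip(elements, counts) for _ in range(max(c, 0))]

keep_sorted([3, 1, 2, 2], [..., 1])  # keep the single highest: [3]
```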

def lowest( self, keep: int | None = None, drop: int | None = None) -> MultisetExpression[~T]:
578    def lowest(self,
579               keep: int | None = None,
580               drop: int | None = None) -> 'MultisetExpression[T]':
581        """Keep some of the lowest elements from this multiset and drop the rest.
582
583        In contrast to the die and free function versions, this does not
584        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
585        Alternatively, you can perform some other evaluation.
586
587        This requires the outcomes to be evaluated in ascending order.
588
589        Args:
590            keep, drop: These arguments work together:
591                * If neither are provided, the single lowest element
592                    will be kept.
593                * If only `keep` is provided, the `keep` lowest elements
594                    will be kept.
595                * If only `drop` is provided, the `drop` lowest elements
596                    will be dropped and the rest will be kept.
597                * If both are provided, `drop` lowest elements will be dropped,
598                    then the next `keep` lowest elements will be kept.
599        """
600        index = lowest_slice(keep, drop)
601        return self.keep(index)

Keep some of the lowest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use .sum() afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in ascending order.

Arguments:
  • keep, drop: These arguments work together:
    • If neither are provided, the single lowest element will be kept.
    • If only keep is provided, the keep lowest elements will be kept.
    • If only drop is provided, the drop lowest elements will be dropped and the rest will be kept.
    • If both are provided, drop lowest elements will be dropped, then the next keep lowest elements will be kept.
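The keep/drop interaction above can be mirrored on a plain list (a hypothetical sketch; icepool operates on multiset expressions, not lists):

```python
def lowest(elements, keep=None, drop=None):
    # Neither given: keep the single lowest. Only keep: keep that many
    # lowest. Only drop: drop that many lowest, keep the rest. Both:
    # drop first, then keep the next `keep` lowest.
    if keep is None and drop is None:
        keep = 1
    drop = drop or 0
    ordered = sorted(elements)
    return ordered[drop:] if keep is None else ordered[drop:drop + keep]
```

`highest()` is the mirror image, working from the top of the sorted order instead.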
def highest( self, keep: int | None = None, drop: int | None = None) -> MultisetExpression[~T]:
603    def highest(self,
604                keep: int | None = None,
605                drop: int | None = None) -> 'MultisetExpression[T]':
606        """Keep some of the highest elements from this multiset and drop the rest.
607
608        In contrast to the die and free function versions, this does not
609        automatically sum the dice. Use `.sum()` afterwards if you want to sum.
610        Alternatively, you can perform some other evaluation.
611
612        This requires the outcomes to be evaluated in descending order.
613
614        Args:
615            keep, drop: These arguments work together:
616                * If neither are provided, the single highest element
617                    will be kept.
618                * If only `keep` is provided, the `keep` highest elements
619                    will be kept.
620                * If only `drop` is provided, the `drop` highest elements
621                    will be dropped and the rest will be kept.
622                * If both are provided, `drop` highest elements will be dropped, 
623                    then the next `keep` highest elements will be kept.
624        """
625        index = highest_slice(keep, drop)
626        return self.keep(index)

Keep some of the highest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use .sum() afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in descending order.

Arguments:
  • keep, drop: These arguments work together:
    • If neither are provided, the single highest element will be kept.
    • If only keep is provided, the keep highest elements will be kept.
    • If only drop is provided, the drop highest elements will be dropped and the rest will be kept.
    • If both are provided, drop highest elements will be dropped, then the next keep highest elements will be kept.
def sort_match( self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], other: MultisetExpression[~T], /, order: Order = Order.Descending) -> MultisetExpression[~T]:
630    def sort_match(self,
631                   comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
632                   other: 'MultisetExpression[T]',
633                   /,
634                   order: Order = Order.Descending) -> 'MultisetExpression[T]':
635        """EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner.
636
637        Extra elements: If `self` has more elements than `other`, whether the
638        extra elements are kept depends on the `order` and `comparison`:
639        * Descending: kept for `'>='`, `'>'`
640        * Ascending: kept for `'<='`, `'<'`
641
642        Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of
643        *RISK*. Which pairs did the attacker win?
644        ```python
645        d6.pool(3).highest(2).sort_match('>', d6.pool(2))
646        ```
647        
648        Suppose the attacker rolled 6, 4, 3 and the defender 5, 5.
649        In this case the 4 would be blocked since the attacker lost that pair,
650        leaving the attacker's 6 and 3. If you don't want to keep the extra
651        element, you can use `highest`.
652        ```python
653        Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3]
654        Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6]
655        ```
656
657        Contrast `maximum_match()`, which first creates the maximum number of
658        pairs that fit the comparison, not necessarily in sorted order.
659        In the above example, `maximum_match()` would allow the defender to
660        assign their 5s to block both the 4 and the 3.
661
662        Negative incoming counts are treated as zero counts.
663        
664        Args:
665            comparison: The comparison to filter by. If you want to drop rather
666                than keep, use the complementary comparison:
667                * `'=='` vs. `'!='`
668                * `'<='` vs. `'>'`
669                * `'>='` vs. `'<'`
670            other: The other multiset to match elements with.
671            order: The order in which to sort before forming matches.
672                Default is descending.
673        """
674        other = implicit_convert_to_expression(other)
675
676        match comparison:
677            case '==':
678                lesser, tie, greater = 0, 1, 0
679            case '!=':
680                lesser, tie, greater = 1, 0, 1
681            case '<=':
682                lesser, tie, greater = 1, 1, 0
683            case '<':
684                lesser, tie, greater = 1, 0, 0
685            case '>=':
686                lesser, tie, greater = 0, 1, 1
687            case '>':
688                lesser, tie, greater = 0, 0, 1
689            case _:
690                raise ValueError(f'Invalid comparison {comparison}')
691
692        if order > 0:
693            left_first = lesser
694            right_first = greater
695        else:
696            left_first = greater
697            right_first = lesser
698
699        return icepool.operator.MultisetSortMatch(self,
700                                                  other,
701                                                  order=order,
702                                                  tie=tie,
703                                                  left_first=left_first,
704                                                  right_first=right_first)

EXPERIMENTAL: Matches elements of self with elements of other in sorted order, then keeps elements from self that fit comparison with their partner.

Extra elements: If self has more elements than other, whether the extra elements are kept depends on the order and comparison:

  • Descending: kept for '>=', '>'
  • Ascending: kept for '<=', '<'

Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of RISK. Which pairs did the attacker win?

d6.pool(3).highest(2).sort_match('>', d6.pool(2))

Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. In this case the 4 would be blocked since the attacker lost that pair, leaving the attacker's 6 and 3. If you don't want to keep the extra element, you can use highest.

Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3]
Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6]

Contrast maximum_match(), which first creates the maximum number of pairs that fit the comparison, not necessarily in sorted order. In the above example, maximum_match() would allow the defender to assign their 5s to block both the 4 and the 3.

Negative incoming counts are treated as zero counts.

Arguments:
  • comparison: The comparison to filter by. If you want to drop rather than keep, use the complementary comparison:
    • '==' vs. '!='
    • '<=' vs. '>'
    • '>=' vs. '<'
  • other: The other multiset to match elements with.
  • order: The order in which to sort before forming matches. Default is descending.
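For the descending `'>'` case from the *RISK* example, the pairing can be sketched on plain lists (a hypothetical helper, not icepool's actual implementation):

```python
def sort_match_gt(left, right):
    # Pair elements in descending sorted order and keep left elements that
    # beat their partner. Left elements beyond len(right) are unpaired and
    # kept, per the descending / '>' rule for extra elements.
    left_sorted = sorted(left, reverse=True)
    right_sorted = sorted(right, reverse=True)
    kept = [l for l, r in zip(left_sorted, right_sorted) if l > r]
    return kept + left_sorted[len(right_sorted):]

sort_match_gt([6, 4, 3], [5, 5])  # -> [6, 3]
```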
def maximum_match_highest( self, comparison: Literal['<=', '<'], other: MultisetExpression[~T], /, *, keep: Literal['matched', 'unmatched']) -> MultisetExpression[~T]:
706    def maximum_match_highest(
707            self, comparison: Literal['<=',
708                                      '<'], other: 'MultisetExpression[T]', /,
709            *, keep: Literal['matched',
710                             'unmatched']) -> 'MultisetExpression[T]':
711        """EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`.
712
713        This matches elements of `self` with elements of `other`, such that in
714        each pair the element from `self` fits the `comparison` with the
715        element from `other`. As many such pairs of elements will be matched as 
716        possible, preferring the highest matchable elements of `self`.
717        Finally, either the matched or unmatched elements from `self` are kept.
718
719        This requires that outcomes be evaluated in descending order.
720
721        Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 
722        3d6. Defender dice can be used to block attacker dice of equal or lesser
723        value, and the defender prefers to block the highest attacker dice
724        possible. Which attacker dice were not blocked?
725        ```python
726        d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum()
727        ```
728
729        Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
730        Then the result would be [6, 1].
731        ```python
732        d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched')
733        -> [6, 1]
734        ```
735
736        Contrast `sort_match()`, which first creates pairs in
737        sorted order and then filters them by `comparison`.
738        In the above example, `sort_match()` would force the defender to match
739        against the 5 and the 4, which would only allow them to block the 4.
740
741        Negative incoming counts are treated as zero counts.
742
743        Args:
744            comparison: Either `'<='` or `'<'`.
745            other: The other multiset to match elements with.
746            keep: Whether 'matched' or 'unmatched' elements are to be kept.
747        """
748        if keep == 'matched':
749            keep_boolean = True
750        elif keep == 'unmatched':
751            keep_boolean = False
752        else:
753            raise ValueError(f"keep must be either 'matched' or 'unmatched'")
754
755        other = implicit_convert_to_expression(other)
756        match comparison:
757            case '<=':
758                match_equal = True
759            case '<':
760                match_equal = False
761            case _:
762                raise ValueError(f'Invalid comparison {comparison}')
763        return icepool.operator.MultisetMaximumMatch(self,
764                                                     other,
765                                                     order=Order.Descending,
766                                                     match_equal=match_equal,
767                                                     keep=keep_boolean)

EXPERIMENTAL: Match the highest elements from self with even higher (or equal) elements from other.

This matches elements of self with elements of other, such that in each pair the element from self fits the comparison with the element from other. As many such pairs of elements will be matched as possible, preferring the highest matchable elements of self. Finally, either the matched or unmatched elements from self are kept.

This requires that outcomes be evaluated in descending order.

Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 3d6. Defender dice can be used to block attacker dice of equal or lesser value, and the defender prefers to block the highest attacker dice possible. Which attacker dice were not blocked?

d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum()

Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. Then the result would be [6, 1].

d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched')
-> [6, 1]

Contrast sort_match(), which first creates pairs in sorted order and then filters them by comparison. In the above example, sort_match() would force the defender to match against the 5 and the 4, which would only allow them to block the 4.

Negative incoming counts are treated as zero counts.

Arguments:
  • comparison: Either '<=' or '<'.
  • other: The other multiset to match elements with.
  • keep: Whether 'matched' or 'unmatched' elements are to be kept.
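The greedy matching for the `'<='` / `keep='unmatched'` case above can be sketched on plain lists (a hypothetical helper, not icepool's actual implementation):

```python
def maximum_match_le_unmatched(left, right):
    # Greedily match the highest left elements against right elements that
    # are >= them (the '<=' comparison), returning the unmatched left
    # elements. Each right element blocks at most one left element.
    left_sorted = sorted(left, reverse=True)
    right_sorted = sorted(right, reverse=True)
    unmatched, i = [], 0
    for element in left_sorted:
        if i < len(right_sorted) and right_sorted[i] >= element:
            i += 1  # this right element blocks (matches) the left element
        else:
            unmatched.append(element)
    return unmatched

maximum_match_le_unmatched([6, 4, 3, 1], [5, 5])  # -> [6, 1]
```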
def maximum_match_lowest( self, comparison: Literal['>=', '>'], other: MultisetExpression[~T], /, *, keep: Literal['matched', 'unmatched']) -> MultisetExpression[~T]:
769    def maximum_match_lowest(
770            self, comparison: Literal['>=',
771                                      '>'], other: 'MultisetExpression[T]', /,
772            *, keep: Literal['matched',
773                             'unmatched']) -> 'MultisetExpression[T]':
774        """EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`.
775
776        This matches elements of `self` with elements of `other`, such that in
777        each pair the element from `self` fits the `comparison` with the
778        element from `other`. As many such pairs of elements will be matched as 
779        possible, preferring the lowest matchable elements of `self`.
780        Finally, either the matched or unmatched elements from `self` are kept.
781
782        This requires that outcomes be evaluated in ascending order.
783
784        Contrast `sort_match()`, which first creates pairs in
785        sorted order and then filters them by `comparison`.
786
787        Args:
788            comparison: Either `'>='` or `'>'`.
789            other: The other multiset to match elements with.
790            keep: Whether 'matched' or 'unmatched' elements are to be kept.
791        """
792        if keep == 'matched':
793            keep_boolean = True
794        elif keep == 'unmatched':
795            keep_boolean = False
796        else:
797            raise ValueError(f"keep must be either 'matched' or 'unmatched'")
798
799        other = implicit_convert_to_expression(other)
800        match comparison:
801            case '>=':
802                match_equal = True
803            case '>':
804                match_equal = False
805            case _:
806                raise ValueError(f'Invalid comparison {comparison}')
807        return icepool.operator.MultisetMaximumMatch(self,
808                                                     other,
809                                                     order=Order.Ascending,
810                                                     match_equal=match_equal,
811                                                     keep=keep_boolean)

EXPERIMENTAL: Match the lowest elements from self with even lower (or equal) elements from other.

This matches elements of self with elements of other, such that in each pair the element from self fits the comparison with the element from other. As many such pairs of elements will be matched as possible, preferring the lowest matchable elements of self. Finally, either the matched or unmatched elements from self are kept.

This requires that outcomes be evaluated in ascending order.

Contrast sort_match(), which first creates pairs in sorted order and then filters them by comparison.

Arguments:
  • comparison: Either '>=' or '>'.
  • other: The other multiset to match elements with.
  • keep: Whether 'matched' or 'unmatched' elements are to be kept.
def expand( self, order: Order = Order.Ascending) -> Union[Die[tuple[~T, ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[~T, ...]]]:
815    def expand(
816        self,
817        order: Order = Order.Ascending
818    ) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]':
819        """Evaluation: All elements of the multiset in ascending order.
820
821        This is expensive and not recommended unless there are few possibilities.
822
823        Args:
824            order: Whether the elements are in ascending (default) or descending
825                order.
826        """
827        return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order)

Evaluation: All elements of the multiset in ascending order.

This is expensive and not recommended unless there are few possibilities.

Arguments:
  • order: Whether the elements are in ascending (default) or descending order.
def sum( self, map: Union[Callable[[~T], ~U], Mapping[~T, ~U], NoneType] = None) -> Union[Die[~U], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, ~U]]:
829    def sum(
830        self,
831        map: Callable[[T], U] | Mapping[T, U] | None = None
832    ) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]':
833        """Evaluation: The sum of all elements."""
834        if map is None:
835            return icepool.evaluator.sum_evaluator.evaluate(self)
836        else:
837            return icepool.evaluator.SumEvaluator(map).evaluate(self)

Evaluation: The sum of all elements.

def size( self) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
839    def size(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
840        """Evaluation: The total number of elements in the multiset.
841
842        This is usually not very interesting unless some other operation is
843        performed first. Examples:
844
845        `generator.unique().size()` will count the number of unique outcomes.
846
847        `(generator & [4, 5, 6]).size()` will count up to one each of
848        4, 5, and 6.
849        """
850        return icepool.evaluator.size_evaluator.evaluate(self)

Evaluation: The total number of elements in the multiset.

This is usually not very interesting unless some other operation is performed first. Examples:

generator.unique().size() will count the number of unique outcomes.

(generator & [4, 5, 6]).size() will count up to one each of 4, 5, and 6.

def any( self) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
852    def any(self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
853        """Evaluation: Whether the multiset has at least one positive count. """
854        return icepool.evaluator.any_evaluator.evaluate(self)

Evaluation: Whether the multiset has at least one positive count.

def highest_outcome_and_count( self) -> Union[Die[tuple[~T, int]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[~T, int]]]:
856    def highest_outcome_and_count(
857        self
858    ) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]':
859        """Evaluation: The highest outcome with positive count, along with that count.
860
861        If no outcomes have positive count, the min outcome will be returned with 0 count.
862        """
863        return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate(
864            self)

Evaluation: The highest outcome with positive count, along with that count.

If no outcomes have positive count, the min outcome will be returned with 0 count.

def all_counts( self, filter: Union[int, Literal['all']] = 1) -> Union[Die[tuple[int, ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[int, ...]]]:
866    def all_counts(
867        self,
868        filter: int | Literal['all'] = 1
869    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]':
870        """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.
871
872        The sizes are in **descending** order.
873
874        Args:
875            filter: Any counts below this value will not be in the output.
876                For example, `filter=2` will only produce pairs and better.
877                If `'all'`, no filtering will be done.
878
879                Why not just place `keep_counts_ge()` before this?
880                `keep_counts_ge()` operates by setting counts to zero, so you
881                would still need an argument to specify whether you want to
882                output zero counts. So we might as well use the argument to do
883                both.
884        """
885        return icepool.evaluator.AllCountsEvaluator(
886            filter=filter).evaluate(self)

Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

The sizes are in descending order.

Arguments:
  • filter: Any counts below this value will not be in the output. For example, filter=2 will only produce pairs and better. If 'all', no filtering will be done.

    Why not just place keep_counts_ge() before this? keep_counts_ge() operates by setting counts to zero, so you would still need an argument to specify whether you want to output zero counts. So we might as well use the argument to do both.
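On one fixed multiset, the evaluation behaves roughly like this stdlib sketch (hypothetical helper; icepool computes the distribution of this result over random multisets):

```python
from collections import Counter

def all_counts(elements, filter=1):
    # sizes of all matching sets, largest first
    sizes = Counter(elements).values()
    if filter != 'all':
        sizes = [s for s in sizes if s >= filter]
    return tuple(sorted(sizes, reverse=True))
```

For example, `all_counts([3, 3, 5, 5, 5, 6], filter=2)` keeps only the pair of 3s and the triple of 5s.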

def largest_count( self) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
888    def largest_count(
889            self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
890        """Evaluation: The size of the largest matching set among the elements."""
891        return icepool.evaluator.largest_count_evaluator.evaluate(self)

Evaluation: The size of the largest matching set among the elements.

def largest_count_and_outcome( self) -> Union[Die[tuple[int, ~T]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, tuple[int, ~T]]]:
893    def largest_count_and_outcome(
894        self
895    ) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]':
896        """Evaluation: The largest matching set among the elements and the corresponding outcome."""
897        return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate(
898            self)

Evaluation: The largest matching set among the elements and the corresponding outcome.

def count_subset( self, divisor: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /, *, empty_divisor: int | None = None) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, int]]:
905    def count_subset(
906        self,
907        divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
908        /,
909        *,
910        empty_divisor: int | None = None
911    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
912        """Evaluation: The number of times the divisor is contained in this multiset.
913        
914        Args:
915            divisor: The multiset to divide by.
916            empty_divisor: If the divisor is empty, the outcome will be this.
917                If not set, `ZeroDivisionError` will be raised for an empty
918                right side.
919
920        Raises:
921            ZeroDivisionError: If the divisor may be empty and 
922                `empty_divisor` is not set.
923        """
924        divisor = implicit_convert_to_expression(divisor)
925        return icepool.evaluator.CountSubsetEvaluator(
926            empty_divisor=empty_divisor).evaluate(self, divisor)

Evaluation: The number of times the divisor is contained in this multiset.

Arguments:
  • divisor: The multiset to divide by.
  • empty_divisor: If the divisor is empty, the outcome will be this. If not set, ZeroDivisionError will be raised for an empty right side.

Raises:
  • ZeroDivisionError: If the divisor may be empty and empty_divisor is not set.
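For a single fixed multiset, this is multiset floor division, which can be sketched with `collections.Counter` (illustrative only; icepool evaluates it over whole distributions):

```python
from collections import Counter

def count_subset(elements, divisor, empty_divisor=None):
    d = Counter(divisor)
    if not d:
        if empty_divisor is None:
            raise ZeroDivisionError('divisor is empty')
        return empty_divisor
    e = Counter(elements)
    # the divisor fits as many times as its scarcest outcome allows
    return min(e[o] // c for o, c in d.items())
```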
def largest_straight( self: MultisetExpression[int]) -> Union[Die[int], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, int]]:
928    def largest_straight(
929        self: 'MultisetExpression[int]'
930    ) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]':
931        """Evaluation: The size of the largest straight among the elements.
932
933        Outcomes must be `int`s.
934        """
935        return icepool.evaluator.largest_straight_evaluator.evaluate(self)

Evaluation: The size of the largest straight among the elements.

Outcomes must be ints.
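On a fixed multiset of ints, the evaluation reduces to finding the longest run of consecutive distinct outcomes; a minimal sketch (not icepool's implementation, which works over distributions):

```python
def largest_straight(elements):
    outcomes = sorted(set(elements))
    best = run = 0
    prev = None
    for o in outcomes:
        # extend the run on a consecutive outcome, otherwise restart it
        run = run + 1 if prev is not None and o == prev + 1 else 1
        best = max(best, run)
        prev = o
    return best
```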

def largest_straight_and_outcome( self: MultisetExpression[int], priority: Literal['low', 'high'] = 'high', /) -> Union[Die[tuple[int, int]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, tuple[int, int]]]:
937    def largest_straight_and_outcome(
938        self: 'MultisetExpression[int]',
939        priority: Literal['low', 'high'] = 'high',
940        /
941    ) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]':
942        """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.
943
944        Straight size is prioritized first, then the outcome.
945
946        Outcomes must be `int`s.
947
948        Args:
949            priority: Controls which outcome within the straight is returned,
950                and which straight is picked if there is a tie for largest
951                straight.
952        """
953        if priority == 'high':
954            return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate(
955                self)
956        elif priority == 'low':
957            return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate(
958                self)
959        else:
960            raise ValueError("priority must be 'low' or 'high'.")

Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.

Straight size is prioritized first, then the outcome.

Outcomes must be ints.

Arguments:
  • priority: Controls which outcome within the straight is returned, and which straight is picked if there is a tie for largest straight.
def all_straights( self: MultisetExpression[int]) -> Union[Die[tuple[int, ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, tuple[int, ...]]]:
962    def all_straights(
963        self: 'MultisetExpression[int]'
964    ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]':
965        """Evaluation: The sizes of all straights.
966
967        The sizes are in **descending** order.
968
969        Each element can only contribute to one straight, though duplicate
970        elements can produce straights that overlap in outcomes. In this case,
971        elements are preferentially assigned to the longer straight.
972        """
973        return icepool.evaluator.all_straights_evaluator.evaluate(self)

Evaluation: The sizes of all straights.

The sizes are in descending order.

Each element can only contribute to one straight, though duplicate elements can produce straights that overlap in outcomes. In this case, elements are preferentially assigned to the longer straight.
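The "each element used once, longer straights preferred" assignment can be sketched greedily on a fixed multiset: keep a list of open straights, close the shortest ones when an outcome has too few copies, and open new ones for extra copies (illustrative helper, not icepool's implementation):

```python
from collections import Counter

def all_straights(elements):
    counts = Counter(elements)
    active = []    # lengths of straights still open, shortest first
    done = []
    prev = None
    for o in sorted(counts):
        if prev is not None and o != prev + 1:
            done.extend(active)
            active = []
        c = counts[o]
        # not enough copies: close the shortest open straights
        while len(active) > c:
            done.append(active.pop(0))
        # extend survivors and open new straights for extra copies
        active = [1] * (c - len(active)) + [x + 1 for x in active]
        prev = o
    done.extend(active)
    return tuple(sorted(done, reverse=True))
```

For example, in `[5, 6, 7, 7, 8]` the duplicate 7 yields a straight of 4 plus a leftover straight of 1.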

def all_straights_reduce_counts( self: MultisetExpression[int], reducer: Callable[[int, int], int] = <built-in function mul>) -> Union[Die[tuple[tuple[int, int], ...]], icepool.evaluator.multiset_function.MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]]:
975    def all_straights_reduce_counts(
976        self: 'MultisetExpression[int]',
977        reducer: Callable[[int, int], int] = operator.mul
978    ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]':
979        """Experimental: All straights with a reduce operation on the counts.
980
981        This can be used to evaluate e.g. cribbage-style straight counting.
982
983        The result is a tuple of `(run_length, run_score)`s.
984        """
985        return icepool.evaluator.AllStraightsReduceCountsEvaluator(
986            reducer=reducer).evaluate(self)

Experimental: All straights with a reduce operation on the counts.

This can be used to evaluate e.g. cribbage-style straight counting.

The result is a tuple of (run_length, run_score)s.
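One plausible reading of the cribbage-style scoring, sketched on a fixed multiset: split the outcomes into maximal consecutive runs and reduce each run's counts (by default with multiplication, so `4, 4, 5, 6` scores a run of length 3 in 2 ways). This is an assumption-laden sketch, not icepool's implementation:

```python
import operator
from collections import Counter
from functools import reduce

def all_straights_reduce_counts(elements, reducer=operator.mul):
    counts = Counter(elements)
    runs, cur = [], []
    prev = None
    for o in sorted(counts):
        if cur and o != prev + 1:
            runs.append(cur)
            cur = []
        cur.append(counts[o])
        prev = o
    if cur:
        runs.append(cur)
    # one (run_length, reduced_score) pair per maximal run
    return tuple(sorted(((len(r), reduce(reducer, r)) for r in runs),
                        reverse=True))
```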

def argsort( self: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], *args: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], order: Order = <Order.Descending: -1>, limit: int | None = None):
 988    def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 989                *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
 990                order: Order = Order.Descending,
 991                limit: int | None = None):
 992        """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.
 993
 994        Example:
 995        ```python
 996        MultisetExpression.argsort([10, 9, 5], [9, 9])
 997        ```
 998        produces
 999        ```python
1000        ((0,), (0, 1, 1), (0,))
1001        ```
1002        
1003        Args:
1004            self, *args: The multiset expressions to be evaluated.
1005            order: Which order the ranks are to be emitted. Default is descending.
1006            limit: How many ranks to emit. Default will emit all ranks, which
1007                makes the length of each outcome equal to
1008                `additive_union(+self, +arg1, +arg2, ...).unique().size()`
1009        """
1010        self = implicit_convert_to_expression(self)
1011        converted_args = [implicit_convert_to_expression(arg) for arg in args]
1012        return icepool.evaluator.ArgsortEvaluator(order=order,
1013                                                  limit=limit).evaluate(
1014                                                      self, *converted_args)

Experimental: Returns the indexes of the originating multisets for each rank in their additive union.

Example:

MultisetExpression.argsort([10, 9, 5], [9, 9])

produces

((0,), (0, 1, 1), (0,))

Arguments:
  • self, *args: The multiset expressions to be evaluated.
  • order: Which order the ranks are to be emitted. Default is descending.
  • limit: How many ranks to emit. Default will emit all ranks, which makes the length of each outcome equal to additive_union(+self, +arg1, +arg2, ...).unique().size()
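The example above can be reproduced on fixed sequences with a short sketch (here `order` is a plain string standing in for the `Order` enum; illustrative only, since icepool evaluates this over random multisets):

```python
def argsort(*args, order='descending', limit=None):
    ranked = sorted(set().union(*args), reverse=(order == 'descending'))
    if limit is not None:
        ranked = ranked[:limit]
    result = []
    for outcome in ranked:
        # one index entry per copy of this outcome in each input
        result.append(tuple(i for i, arg in enumerate(args)
                            for _ in range(list(arg).count(outcome))))
    return tuple(result)
```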
def issubset( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
1057    def issubset(
1058            self,
1059            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1060            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1061        """Evaluation: Whether this multiset is a subset of the other multiset.
1062
1063        Specifically, if this multiset has a lesser or equal count for each
1064        outcome than the other multiset, this evaluates to `True`; 
1065        if there is some outcome for which this multiset has a greater count 
1066        than the other multiset, this evaluates to `False`.
1067
1068        `issubset` is the same as `self <= other`.
1069        
1070        `self < other` evaluates a proper subset relation, which is the same
1071        except the result is `False` if the two multisets are exactly equal.
1072        """
1073        return self._compare(other, icepool.evaluator.IsSubsetEvaluator)

Evaluation: Whether this multiset is a subset of the other multiset.

Specifically, if this multiset has a lesser or equal count for each outcome than the other multiset, this evaluates to True; if there is some outcome for which this multiset has a greater count than the other multiset, this evaluates to False.

issubset is the same as self <= other.

self < other evaluates a proper subset relation, which is the same except the result is False if the two multisets are exactly equal.
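On fixed multisets, the subset and proper-subset relations can be sketched with `collections.Counter` (illustrative helpers, not icepool's implementation):

```python
from collections import Counter

def issubset(left, right):
    # every count on the left is <= the corresponding count on the right
    l, r = Counter(left), Counter(right)
    return all(l[o] <= r[o] for o in l)

def is_proper_subset(left, right):
    # subset, but not exactly equal
    return issubset(left, right) and Counter(left) != Counter(right)
```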

def issuperset( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
1092    def issuperset(
1093            self,
1094            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1095            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1096        """Evaluation: Whether this multiset is a superset of the other multiset.
1097
1098        Specifically, if this multiset has a greater or equal count for each
1099        outcome than the other multiset, this evaluates to `True`; 
1100        if there is some outcome for which this multiset has a lesser count
1101        than the other multiset, this evaluates to `False`.
1102        
1103        A typical use of this evaluation is testing for the presence of a
1104        combo of cards in a hand, e.g.
1105
1106        ```python
1107        deck.deal(5) >= ['a', 'a', 'b']
1108        ```
1109
1110        represents the chance that a deal of 5 cards contains at least two 'a's
1111        and one 'b'.
1112
1113        `issuperset` is the same as `self >= other`.
1114
1115        `self > other` evaluates a proper superset relation, which is the same
1116        except the result is `False` if the two multisets are exactly equal.
1117        """
1118        return self._compare(other, icepool.evaluator.IsSupersetEvaluator)

Evaluation: Whether this multiset is a superset of the other multiset.

Specifically, if this multiset has a greater or equal count for each outcome than the other multiset, this evaluates to True; if there is some outcome for which this multiset has a lesser count than the other multiset, this evaluates to False.

A typical use of this evaluation is testing for the presence of a combo of cards in a hand, e.g.

deck.deal(5) >= ['a', 'a', 'b']

represents the chance that a deal of 5 cards contains at least two 'a's and one 'b'.

issuperset is the same as self >= other.

self > other evaluates a proper superset relation, which is the same except the result is False if the two multisets are exactly equal.
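The combo-in-hand check can be illustrated on a single fixed hand using `Counter` subtraction, which discards non-positive counts (a sketch of the containment test only; `deck.deal(5)` in the example above evaluates it over all possible deals):

```python
from collections import Counter

hand = ['a', 'a', 'b', 'c', 'd']

# the hand contains the combo if subtracting the hand leaves nothing missing
has_combo = not (Counter(['a', 'a', 'b']) - Counter(hand))
has_triple = not (Counter(['a', 'a', 'a']) - Counter(hand))
```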

def isdisjoint( self, other: Union[MultisetExpression[~T], Mapping[~T, int], Sequence[~T]], /) -> Union[Die[bool], icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, bool]]:
1150    def isdisjoint(
1151            self,
1152            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
1153            /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
1154        """Evaluation: Whether this multiset is disjoint from the other multiset.
1155        
1156        Specifically, this evaluates to `False` if there is any outcome for
1157        which both multisets have positive count, and `True` if there is not.
1158
1159        Negative incoming counts are treated as zero counts.
1160        """
1161        return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)

Evaluation: Whether this multiset is disjoint from the other multiset.

Specifically, this evaluates to False if there is any outcome for which both multisets have positive count, and True if there is not.

Negative incoming counts are treated as zero counts.
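A minimal sketch of the disjointness test on fixed multisets, including the treatment of negative counts as zero (illustrative helper, not icepool's implementation):

```python
from collections import Counter

def isdisjoint(left, right):
    l, r = Counter(left), Counter(right)
    # True unless some outcome has positive count on both sides
    return not any(l[o] > 0 and r[o] > 0 for o in l)
```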

def force_order( self, force_order: Order) -> MultisetExpression[~T]:
1164    def force_order(self, force_order: Order) -> 'MultisetExpression[T]':
1165        """Forces outcomes to be seen by the evaluator in the given order.
1166
1167        This can be useful for debugging / testing.
1168        """
1169        if force_order == Order.Any:
1170            return self
1171        return icepool.operator.MultisetForceOrder(self,
1172                                                   force_order=force_order)

Forces outcomes to be seen by the evaluator in the given order.

This can be useful for debugging / testing.

class MultisetEvaluator(icepool.evaluator.multiset_evaluator_base.MultisetEvaluatorBase[~T, +U_co]):
 21class MultisetEvaluator(MultisetEvaluatorBase[T, U_co]):
 22    """Evaluates a multiset based on a state transition function."""
 23
 24    @abstractmethod
 25    def next_state(self, state: Hashable, order: Order, outcome: T, /,
 26                   *counts) -> Hashable:
 27        """State transition function.
 28
 29        This should produce a state given the previous state, an outcome,
 30        the count of that outcome produced by each multiset input, and any
 31        **kwargs provided to `evaluate()`.
 32
 33        `evaluate()` will always call this with `state, outcome, *counts` as
 34        positional arguments. Furthermore, there is no expectation that a 
 35        subclass be able to handle an arbitrary number of counts. 
 36        
 37        Thus, you are free to:
 38        * Rename `state` or `outcome` in a subclass.
 39        * Replace `*counts` with a fixed set of parameters.
 40
 41        States must be hashable. Currently, they do not have to be orderable.
 42        However, this may change in the future, and if they are not totally
 43        orderable, you must override `final_outcome` to create totally orderable
 44        final outcomes.
 45
 46        Returning a `Die` is not supported.
 47
 48        Args:
 49            state: A hashable object indicating the state before rolling the
 50                current outcome. If `initial_state()` is not overridden, the
 51                initial state is `None`.
 52            order: The order in which outcomes are seen. You can raise an 
 53                `UnsupportedOrder` if you don't want to support the current 
 54                order. In this case, the opposite order will then be attempted
 55                (if it hasn't already been attempted).
 56            outcome: The current outcome.
 57                If there are multiple inputs, the set of outcomes is at 
 58        least the union of the outcomes of the individual inputs.
 59                You can use `extra_outcomes()` to add extra outcomes.
 60            *counts: One value (usually an `int`) for each input indicating how
 61                many of the current outcome were produced. You may replace this
 62                with a fixed series of parameters.
 63
 64        Returns:
 65            A hashable object indicating the next state.
 66            The special value `icepool.Reroll` can be used to immediately remove
 67            the state from consideration, effectively performing a full reroll.
 68        """
 69
 70    def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]:
 71        """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.
 72
 73        These will be seen by `next_state` even if they do not appear in the
 74        input(s). The default implementation returns `()`, or no additional
 75        outcomes.
 76
 77        If you want `next_state` to see consecutive `int` outcomes, you can set
 78        `extra_outcomes = icepool.MultisetEvaluator.consecutive`.
 79        See `consecutive()` below.
 80
 81        Args:
 82            outcomes: The outcomes that could be produced by the inputs, in
 83            ascending order.
 84        """
 85        return ()
 86
 87    def initial_state(self, order: Order, outcomes: Sequence[T], /, *sizes,
 88                      **kwargs: Hashable):
 89        """Optional method to produce an initial evaluation state.
 90
 91        If not overridden, the initial state is `None`. Note that this is not a
 92        valid `final_outcome()`.
 93
 94        All non-keyword arguments will be given positionally, so you are free
 95        to:
 96        * Rename any of them.
 97        * Replace `sizes` with a fixed series of arguments.
 98        * Replace trailing positional arguments with `*_` if you don't care
 99            about them.
100
101        Args:
102            order: The order in which outcomes will be seen by `next_state()`.
103            outcomes: All outcomes that will be seen by `next_state()`.
104            sizes: The sizes of the input multisets, provided
105                that the multiset has inferrable size with non-negative
106                counts. If not, the corresponding size is None.
107            kwargs: Non-multiset arguments that were provided to `evaluate()`.
108                You may replace `**kwargs` with a fixed set of keyword
109                parameters; `final_outcome()` should take the same set of
110                keyword parameters.
111
112        Raises:
113            UnsupportedOrder: If the given order is not supported.
114        """
115        return None
116
117    def final_outcome(
118            self, final_state: Hashable, order: Order, outcomes: tuple[T, ...],
119            /, *sizes, **kwargs: Hashable
120    ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType':
121        """Optional method to generate a final output outcome from a final state.
122
123        By default, the final outcome is equal to the final state.
124        Note that `None` is not a valid outcome for a `Die`,
125        and if there are no outcomes, `final_outcome` will immediately
126        be called with `final_state=None`.
127        Subclasses that want to handle this case should explicitly define what
128        happens.
129
130        All non-keyword arguments will be given positionally, so you are free
131        to:
132        * Rename any of them.
133        * Replace `sizes` with a fixed series of arguments.
134        * Replace trailing positional arguments with `*_` if you don't care
135            about them.
136
137        Args:
138            final_state: A state after all outcomes have been processed.
139            order: The order in which outcomes were seen by `next_state()`.
140            outcomes: All outcomes that were seen by `next_state()`.
141            sizes: The sizes of the input multisets, provided
142                that the multiset has inferrable size with non-negative
143                counts. If not, the corresponding size is None.
144            kwargs: Non-multiset arguments that were provided to `evaluate()`.
145                You may replace `**kwargs` with a fixed set of keyword
146                parameters; `initial_state()` should take the same set of
147                keyword parameters.
148
149        Returns:
150            A final outcome that will be used as part of constructing the result `Die`.
151            As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
152        """
153        # If not overridden, the final_state should have type U_co.
154        return cast(U_co, final_state)
155
156    def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
157        """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.
158
159        Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this.
160
161        Returns:
162            All `int`s from the min outcome to the max outcome among the inputs,
163            inclusive.
164
165        Raises:
166            TypeError: if any input has any non-`int` outcome.
167        """
168        if not outcomes:
169            return ()
170
171        if any(not isinstance(x, int) for x in outcomes):
172            raise TypeError(
173                "consecutive cannot be used with outcomes of type other than 'int'."
174            )
175
176        return range(outcomes[0], outcomes[-1] + 1)
177
178    @property
179    def next_state_key(self) -> Hashable:
180        """Subclasses may optionally provide a key that uniquely identifies the `next_state()` computation.
181
182        This is used to persistently cache intermediate results between calls
183        to `evaluate()`. By default, `next_state_key` is `None`, which only
184        caches if not inside a `@multiset_function`.
185        
186        If you do implement this, `next_state_key` should include any members 
187        used in `next_state()` but does NOT need to include members that are 
188        only used in other methods, i.e. 
189        * `extra_outcomes()`
190        * `initial_state()`
191        * `final_outcome()`. 
192        For example, if `next_state()` is a pure function other than being 
193        defined by its evaluator class, you can use `type(self)`.
194
195        If you want to disable caching between calls to `evaluate()` even
196        outside of `@multiset_function`, return the special value 
197        `icepool.NoCache`.
198        """
199        return None
200
201    def _prepare(
202        self,
203        input_exps: tuple[MultisetExpressionBase[T, Any], ...],
204        kwargs: Mapping[str, Hashable],
205    ) -> Iterator[tuple['Dungeon[T]', 'Quest[T, U_co]',
206                        'tuple[MultisetSourceBase[T, Any], ...]', int]]:
207
208        for t in itertools.product(*(exp._prepare() for exp in input_exps)):
209            if t:
210                dungeonlet_flats, questlet_flats, sources, weights = zip(*t)
211            else:
212                dungeonlet_flats = ()
213                questlet_flats = ()
214                sources = ()
215                weights = ()
216            next_state_key: Hashable
217            if self.next_state_key is None:
218                # This should only get cached inside this evaluator, but add
219                # self id to be safe.
220                next_state_key = id(self)
221                multiset_function_can_cache = False
222            elif self.next_state_key is icepool.NoCache:
223                next_state_key = icepool.NoCache
224                multiset_function_can_cache = False
225            else:
226                next_state_key = self.next_state_key
227                multiset_function_can_cache = True
228            dungeon: MultisetEvaluatorDungeon[T] = MultisetEvaluatorDungeon(
229                self.next_state, next_state_key, multiset_function_can_cache,
230                dungeonlet_flats)
231            quest: MultisetEvaluatorQuest[T, U_co] = MultisetEvaluatorQuest(
232                self.initial_state, self.extra_outcomes, self.final_outcome,
233                questlet_flats)
234            sources = tuple(itertools.chain.from_iterable(sources))
235            weight = math.prod(weights)
236            yield dungeon, quest, sources, weight
237
238    def _should_cache(self, dungeon: 'Dungeon[T]') -> bool:
239        return dungeon.__hash__ is not None

Evaluates a multiset based on a state transition function.

@abstractmethod
def next_state( self, state: Hashable, order: Order, outcome: ~T, /, *counts) -> Hashable:
24    @abstractmethod
25    def next_state(self, state: Hashable, order: Order, outcome: T, /,
26                   *counts) -> Hashable:
27        """State transition function.
28
29        This should produce a state given the previous state, an outcome,
30        the count of that outcome produced by each multiset input, and any
31        **kwargs provided to `evaluate()`.
32
33        `evaluate()` will always call this with `state, outcome, *counts` as
34        positional arguments. Furthermore, there is no expectation that a 
35        subclass be able to handle an arbitrary number of counts. 
36        
37        Thus, you are free to:
38        * Rename `state` or `outcome` in a subclass.
39        * Replace `*counts` with a fixed set of parameters.
40
41        States must be hashable. Currently, they do not have to be orderable.
42        However, this may change in the future, and if they are not totally
43        orderable, you must override `final_outcome` to create totally orderable
44        final outcomes.
45
46        Returning a `Die` is not supported.
47
48        Args:
49            state: A hashable object indicating the state before rolling the
50                current outcome. If `initial_state()` is not overridden, the
51                initial state is `None`.
52            order: The order in which outcomes are seen. You can raise an 
53                `UnsupportedOrder` if you don't want to support the current 
54                order. In this case, the opposite order will then be attempted
55                (if it hasn't already been attempted).
56            outcome: The current outcome.
57                If there are multiple inputs, the set of outcomes is at 
58        least the union of the outcomes of the individual inputs.
59                You can use `extra_outcomes()` to add extra outcomes.
60            *counts: One value (usually an `int`) for each input indicating how
61                many of the current outcome were produced. You may replace this
62                with a fixed series of parameters.
63
64        Returns:
65            A hashable object indicating the next state.
66            The special value `icepool.Reroll` can be used to immediately remove
67            the state from consideration, effectively performing a full reroll.
68        """

State transition function.

This should produce a state given the previous state, an outcome, the count of that outcome produced by each multiset input, and any **kwargs provided to evaluate().

evaluate() will always call this with state, outcome, *counts as positional arguments. Furthermore, there is no expectation that a subclass be able to handle an arbitrary number of counts.

Thus, you are free to:

  • Rename state or outcome in a subclass.
  • Replace *counts with a fixed set of parameters.

States must be hashable. Currently, they do not have to be orderable. However, this may change in the future, and if they are not totally orderable, you must override final_outcome to create totally orderable final outcomes.

Returning a Die is not supported.

Arguments:
  • state: A hashable object indicating the state before rolling the current outcome. If initial_state() is not overridden, the initial state is None.
  • order: The order in which outcomes are seen. You can raise an UnsupportedOrder if you don't want to support the current order. In this case, the opposite order will then be attempted (if it hasn't already been attempted).
  • outcome: The current outcome. If there are multiple inputs, the set of outcomes is at least the union of the outcomes of the individual inputs. You can use extra_outcomes() to add extra outcomes.
  • *counts: One value (usually an int) for each input indicating how many of the current outcome were produced. You may replace this with a fixed series of parameters.

Returns:

A hashable object indicating the next state. The special value icepool.Reroll can be used to immediately remove the state from consideration, effectively performing a full reroll.
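The evaluation loop can be pictured as a fold over (outcome, count) pairs. This standalone sketch mimics the `next_state` protocol for a largest-count style evaluation on one fixed multiset (icepool itself drives this transition over entire probability distributions and caches intermediate states):

```python
from collections import Counter

def next_state(state, outcome, count):
    # state: largest count seen so far; None is the initial state
    return count if state is None else max(state, count)

def evaluate(elements):
    counts = Counter(elements)
    state = None
    for outcome in sorted(counts):   # ascending order
        state = next_state(state, outcome, counts[outcome])
    return state
```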

def extra_outcomes(self, outcomes: Sequence[~T]) -> Collection[~T]:
70    def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]:
71        """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.
72
73        These will be seen by `next_state` even if they do not appear in the
74        input(s). The default implementation returns `()`, or no additional
75        outcomes.
76
77        If you want `next_state` to see consecutive `int` outcomes, you can set
78        `extra_outcomes = icepool.MultisetEvaluator.consecutive`.
79        See `consecutive()` below.
80
81        Args:
82            outcomes: The outcomes that could be produced by the inputs, in
83                ascending order.
84        """
85        return ()

Optional method to specify extra outcomes that should be seen as inputs to next_state().

These will be seen by next_state even if they do not appear in the input(s). The default implementation returns (), or no additional outcomes.

If you want next_state to see consecutive int outcomes, you can set extra_outcomes = icepool.MultisetEvaluator.consecutive. See consecutive() below.

Arguments:
  • outcomes: The outcomes that could be produced by the inputs, in ascending order.
def initial_state( self, order: Order, outcomes: Sequence[~T], /, *sizes, **kwargs: Hashable):
 87    def initial_state(self, order: Order, outcomes: Sequence[T], /, *sizes,
 88                      **kwargs: Hashable):
 89        """Optional method to produce an initial evaluation state.
 90
 91        If not overridden, the initial state is `None`. Note that this is not a
 92        valid `final_outcome()`.
 93
 94        All non-keyword arguments will be given positionally, so you are free
 95        to:
 96        * Rename any of them.
 97        * Replace `sizes` with a fixed series of arguments.
 98        * Replace trailing positional arguments with `*_` if you don't care
 99            about them.
100
101        Args:
102            order: The order in which outcomes will be seen by `next_state()`.
103            outcomes: All outcomes that will be seen by `next_state()`.
104            sizes: The sizes of the input multisets, provided
105                that the multiset has inferrable size with non-negative
106                counts. If not, the corresponding size is None.
107            kwargs: Non-multiset arguments that were provided to `evaluate()`.
108                You may replace `**kwargs` with a fixed set of keyword
109                parameters; `final_outcome()` should take the same set of
110                keyword parameters.
111
112        Raises:
113            UnsupportedOrder if the given order is not supported.
114        """
115        return None

Optional method to produce an initial evaluation state.

If not overridden, the initial state is None. Note that this is not a valid final_outcome().

All non-keyword arguments will be given positionally, so you are free to:

  • Rename any of them.
  • Replace sizes with a fixed series of arguments.
  • Replace trailing positional arguments with *_ if you don't care about them.
Arguments:
  • order: The order in which outcomes will be seen by next_state().
  • outcomes: All outcomes that will be seen by next_state().
  • sizes: The sizes of the input multisets, provided that the multiset has inferrable size with non-negative counts. If not, the corresponding size is None.
  • kwargs: Non-multiset arguments that were provided to evaluate(). You may replace **kwargs with a fixed set of keyword parameters; final_outcome() should take the same set of keyword parameters.
Raises:
  • UnsupportedOrder if the given order is not supported.
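A minimal sketch of an override that rejects one order. `UnsupportedOrder` is stubbed here, and the evaluator itself is hypothetical; `Order.Ascending` is `1` and `Order.Descending` is `-1`, so a plain int stands in for the enum:

```python
class UnsupportedOrder(Exception):
    """Stub for icepool's UnsupportedOrder."""


# Hypothetical initial_state for an evaluator that only supports ascending order.
def initial_state(order, outcomes, *sizes, **kwargs):
    if order < 0:  # Order.Descending is -1
        raise UnsupportedOrder('This evaluator requires ascending order.')
    return 0  # e.g. a running total starts at zero
```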
def final_outcome( self, final_state: Hashable, order: Order, outcomes: tuple[~T, ...], /, *sizes, **kwargs: Hashable) -> Union[+U_co, Die[+U_co], RerollType]:
117    def final_outcome(
118            self, final_state: Hashable, order: Order, outcomes: tuple[T, ...],
119            /, *sizes, **kwargs: Hashable
120    ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType':
121        """Optional method to generate a final output outcome from a final state.
122
123        By default, the final outcome is equal to the final state.
124        Note that `None` is not a valid outcome for a `Die`,
 125        and if there are no outcomes, `final_outcome` will immediately be
 126        called with `final_state=None`.
127        Subclasses that want to handle this case should explicitly define what
128        happens.
129
130        All non-keyword arguments will be given positionally, so you are free
131        to:
132        * Rename any of them.
133        * Replace `sizes` with a fixed series of arguments.
134        * Replace trailing positional arguments with `*_` if you don't care
135            about them.
136
137        Args:
138            final_state: A state after all outcomes have been processed.
139            order: The order in which outcomes were seen by `next_state()`.
140            outcomes: All outcomes that were seen by `next_state()`.
141            sizes: The sizes of the input multisets, provided
142                that the multiset has inferrable size with non-negative
143                counts. If not, the corresponding size is None.
144            kwargs: Non-multiset arguments that were provided to `evaluate()`.
145                You may replace `**kwargs` with a fixed set of keyword
146                parameters; `initial_state()` should take the same set of
147                keyword parameters.
148
149        Returns:
150            A final outcome that will be used as part of constructing the result `Die`.
151            As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
152        """
153        # If not overriden, the final_state should have type U_co.
154        return cast(U_co, final_state)

Optional method to generate a final output outcome from a final state.

By default, the final outcome is equal to the final state. Note that None is not a valid outcome for a Die, and if there are no outcomes, final_outcome will immediately be called with final_state=None. Subclasses that want to handle this case should explicitly define what happens.

All non-keyword arguments will be given positionally, so you are free to:

  • Rename any of them.
  • Replace sizes with a fixed series of arguments.
  • Replace trailing positional arguments with *_ if you don't care about them.
Arguments:
  • final_state: A state after all outcomes have been processed.
  • order: The order in which outcomes were seen by next_state().
  • outcomes: All outcomes that were seen by next_state().
  • sizes: The sizes of the input multisets, provided that the multiset has inferrable size with non-negative counts. If not, the corresponding size is None.
  • kwargs: Non-multiset arguments that were provided to evaluate(). You may replace **kwargs with a fixed set of keyword parameters; initial_state() should take the same set of keyword parameters.
Returns:

A final outcome that will be used as part of constructing the result Die. As usual for Die(), this could itself be a Die or icepool.Reroll.

def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
156    def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
157        """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.
158
159        Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this.
160
161        Returns:
162            All `int`s from the min outcome to the max outcome among the inputs,
163            inclusive.
164
165        Raises:
166            TypeError: if any input has any non-`int` outcome.
167        """
168        if not outcomes:
169            return ()
170
171        if any(not isinstance(x, int) for x in outcomes):
172            raise TypeError(
173                "consecutive cannot be used with outcomes of type other than 'int'."
174            )
175
176        return range(outcomes[0], outcomes[-1] + 1)

Example implementation of extra_outcomes() that produces consecutive int outcomes.

Set extra_outcomes = icepool.MultisetEvaluator.consecutive to use this.

Returns:

All ints from the min outcome to the max outcome among the inputs, inclusive.

Raises:
  • TypeError: if any input has any non-int outcome.
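Standalone, the logic of `consecutive()` reduces to a `range` over the min and max outcomes. A self-contained sketch mirroring the source above:

```python
def consecutive(outcomes):
    # Empty input yields no extra outcomes.
    if not outcomes:
        return ()
    if any(not isinstance(x, int) for x in outcomes):
        raise TypeError("consecutive requires 'int' outcomes.")
    # Outcomes arrive in ascending order, so first and last are min and max.
    return range(outcomes[0], outcomes[-1] + 1)
```

For example, `list(consecutive([2, 5, 6]))` fills the gaps, producing `[2, 3, 4, 5, 6]`.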
next_state_key: Hashable
178    @property
179    def next_state_key(self) -> Hashable:
180        """Subclasses may optionally provide a key that uniquely identifies the `next_state()` computation.
181
182        This is used to persistently cache intermediate results between calls
183        to `evaluate()`. By default, `next_state_key` is `None`, which only
184        caches if not inside a `@multiset_function`.
185        
186        If you do implement this, `next_state_key` should include any members 
187        used in `next_state()` but does NOT need to include members that are 
188        only used in other methods, i.e. 
189        * `extra_outcomes()`
190        * `initial_state()`
191        * `final_outcome()`. 
192        For example, if `next_state()` is a pure function other than being 
193        defined by its evaluator class, you can use `type(self)`.
194
195        If you want to disable caching between calls to `evaluate()` even
196        outside of `@multiset_function`, return the special value 
197        `icepool.NoCache`.
198        """
199        return None

Subclasses may optionally provide a key that uniquely identifies the next_state() computation.

This is used to persistently cache intermediate results between calls to evaluate(). By default, next_state_key is None, which only caches if not inside a @multiset_function.

If you do implement this, next_state_key should include any members used in next_state() but does NOT need to include members that are only used in other methods, i.e.

  • extra_outcomes()
  • initial_state()
  • final_outcome()

For example, if next_state() is a pure function other than being defined by its evaluator class, you can use type(self).

If you want to disable caching between calls to evaluate() even outside of @multiset_function, return the special value icepool.NoCache.
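For instance, an evaluator whose `next_state()` reads no instance members can key the cache on its class alone. A hypothetical sketch (not an icepool class):

```python
class SumEvaluator:
    """Hypothetical evaluator: next_state depends only on its arguments."""

    def next_state(self, state, outcome, count):
        return (state or 0) + outcome * count

    @property
    def next_state_key(self):
        # type(self) suffices because next_state uses no instance members.
        return type(self)
```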

class Order(enum.IntEnum):
23class Order(enum.IntEnum):
24    """Can be used to define what order outcomes are seen in by MultisetEvaluators."""
25    Ascending = 1
26    Descending = -1
27    Any = 0
28
29    def merge(*orders: 'Order') -> 'Order':
30        """Merges the given Orders.
31
32        Returns:
33            `Any` if all arguments are `Any`.
34            `Ascending` if there is at least one `Ascending` in the arguments.
35            `Descending` if there is at least one `Descending` in the arguments.
36
37        Raises:
38            `ConflictingOrderError` if both `Ascending` and `Descending` are in 
39            the arguments.
40        """
41        result = Order.Any
42        for order in orders:
43            if (result > 0 and order < 0) or (result < 0 and order > 0):
44                raise ConflictingOrderError(
45                    f'Conflicting orders {orders}.\n' +
46                    'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
47                )
48            if result == Order.Any:
49                result = order
50        return result
51
52    def __neg__(self) -> 'Order':
53        if self is Order.Ascending:
54            return Order.Descending
55        elif self is Order.Descending:
56            return Order.Ascending
57        else:
58            return Order.Any

Can be used to define what order outcomes are seen in by MultisetEvaluators.

Ascending = <Order.Ascending: 1>
Descending = <Order.Descending: -1>
Any = <Order.Any: 0>
def merge(*orders: Order) -> Order:
29    def merge(*orders: 'Order') -> 'Order':
30        """Merges the given Orders.
31
32        Returns:
33            `Any` if all arguments are `Any`.
34            `Ascending` if there is at least one `Ascending` in the arguments.
35            `Descending` if there is at least one `Descending` in the arguments.
36
37        Raises:
38            `ConflictingOrderError` if both `Ascending` and `Descending` are in 
39            the arguments.
40        """
41        result = Order.Any
42        for order in orders:
43            if (result > 0 and order < 0) or (result < 0 and order > 0):
44                raise ConflictingOrderError(
45                    f'Conflicting orders {orders}.\n' +
46                    'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
47                )
48            if result == Order.Any:
49                result = order
50        return result

Merges the given Orders.

Returns:

  • Any if all arguments are Any.
  • Ascending if there is at least one Ascending in the arguments.
  • Descending if there is at least one Descending in the arguments.

Raises:
  • ConflictingOrderError if both Ascending and Descending are in the arguments.
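The merge semantics can be exercised standalone. This sketch re-declares the enum and the merge logic shown in the source above, with `ConflictingOrderError` stubbed:

```python
import enum


class ConflictingOrderError(Exception):
    """Stub for icepool's ConflictingOrderError."""


class Order(enum.IntEnum):
    Ascending = 1
    Descending = -1
    Any = 0

    def merge(*orders):
        result = Order.Any
        for order in orders:
            if (result > 0 and order < 0) or (result < 0 and order > 0):
                raise ConflictingOrderError(f'Conflicting orders {orders}.')
            if result == Order.Any:
                result = order
        return result
```

`Order.merge(Order.Any, Order.Ascending)` yields `Ascending`; mixing `Ascending` and `Descending` raises.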
class ConflictingOrderError(icepool.order.OrderException):
15class ConflictingOrderError(OrderException):
16    """Indicates that two conflicting mandatory outcome orderings were encountered."""

Indicates that two conflicting mandatory outcome orderings were encountered.

class UnsupportedOrder(icepool.order.OrderException):
19class UnsupportedOrder(OrderException):
20    """Indicates that the given order is not supported under the current context."""

Indicates that the given order is not supported under the current context.

 20class Deck(Population[T_co], MaybeHashKeyed):
 21    """Sampling without replacement (within a single evaluation).
 22
 23    Quantities represent duplicates.
 24    """
 25
 26    _data: Counts[T_co]
 27    _deal: int
 28
 29    @property
 30    def _new_type(self) -> type:
 31        return Deck
 32
 33    def __new__(cls,
 34                outcomes: Sequence | Mapping[Any, int] = (),
 35                times: Sequence[int] | int = 1) -> 'Deck[T_co]':
 36        """Constructor for a `Deck`.
 37
 38        All quantities must be non-negative. Outcomes with zero quantity will be
 39        omitted.
 40
 41        Args:
 42            outcomes: The cards of the `Deck`. This can be one of the following:
 43                * A `Sequence` of outcomes. Duplicates will contribute
 44                    quantity for each appearance.
 45                * A `Mapping` from outcomes to quantities.
 46
 47                Each outcome may be one of the following:
 48                * An outcome, which must be hashable and totally orderable.
 49                * A `Deck`, which will be flattened into the result. If a
 50                    `times` is assigned to the `Deck`, the entire `Deck` will
 51                    be duplicated that many times.
 52            times: Multiplies the number of times each element of `outcomes`
 53                will be put into the `Deck`.
 54                `times` can either be a sequence of the same length as
 55                `outcomes` or a single `int` to apply to all elements of
 56                `outcomes`.
 57        """
 58
 59        if icepool.population.again.contains_again(outcomes):
 60            raise ValueError('Again cannot be used with Decks.')
 61
 62        outcomes, times = icepool.creation_args.itemize(outcomes, times)
 63
 64        if len(outcomes) == 1 and times[0] == 1 and isinstance(
 65                outcomes[0], Deck):
 66            return outcomes[0]
 67
 68        counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck(
 69            outcomes, times)
 70
 71        return Deck._new_raw(counts)
 72
 73    @classmethod
 74    def _new_raw(cls, data: Counts[T_co]) -> 'Deck[T_co]':
 75        """Creates a new `Deck` using already-processed arguments.
 76
 77        Args:
 78            data: At this point, this is a Counts.
 79        """
 80        self = super(Population, cls).__new__(cls)
 81        self._data = data
 82        return self
 83
 84    def keys(self) -> CountsKeysView[T_co]:
 85        return self._data.keys()
 86
 87    def values(self) -> CountsValuesView:
 88        return self._data.values()
 89
 90    def items(self) -> CountsItemsView[T_co]:
 91        return self._data.items()
 92
 93    def __getitem__(self, outcome) -> int:
 94        return self._data[outcome]
 95
 96    def __iter__(self) -> Iterator[T_co]:
 97        return iter(self.keys())
 98
 99    def __len__(self) -> int:
100        return len(self._data)
101
102    size = icepool.Population.denominator
103
104    @cached_property
105    def _popped_min(self) -> tuple['Deck[T_co]', int]:
106        return self._new_raw(self._data.remove_min()), self.quantities()[0]
107
108    def _pop_min(self) -> tuple['Deck[T_co]', int]:
109        """A `Deck` with the min outcome removed."""
110        return self._popped_min
111
112    @cached_property
113    def _popped_max(self) -> tuple['Deck[T_co]', int]:
114        return self._new_raw(self._data.remove_max()), self.quantities()[-1]
115
116    def _pop_max(self) -> tuple['Deck[T_co]', int]:
117        """A `Deck` with the max outcome removed."""
118        return self._popped_max
119
120    @overload
121    def deal(self, hand_size: int, /) -> 'icepool.Deal[T_co]':
122        ...
123
124    @overload
125    def deal(self,
126             hand_sizes: Iterable[int]) -> 'icepool.MultiDeal[T_co, Any]':
127        ...
128
129    @overload
130    def deal(
131        self, hand_sizes: int | Iterable[int]
132    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]':
133        ...
134
135    def deal(
136        self, hand_sizes: int | Iterable[int]
137    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]':
138        """Deals the specified number of cards from this deck.
139
140        Args:
141            hand_sizes: Either an integer, in which case a `Deal` will be
142                returned, or an iterable of multiple hand sizes, in which case a
143                `MultiDeal` will be returned.
144        """
145        if isinstance(hand_sizes, int):
146            return icepool.Deal(self, hand_sizes)
147        else:
148            return icepool.MultiDeal(
149                self, tuple((hand_size, 1) for hand_size in hand_sizes))
150
151    def deal_groups(
152            self, *hand_groups: tuple[int,
153                                      int]) -> 'icepool.MultiDeal[T_co, Any]':
154        """EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order.
155        
156        Args:
157            hand_groups: Each argument is a tuple (hand_size, group_size),
158                denoting the number of cards in each hand of the group and
159                the number of hands in the group respectively.
160        """
161        return icepool.MultiDeal(self, hand_groups)
162
163    # Binary operators.
164
165    def additive_union(
166            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
167        """Both decks merged together."""
168        return functools.reduce(operator.add, args,
169                                self)  # type: ignore
170
171    def __add__(self,
172                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
173        data = Counter(self._data)
174        for outcome, count in Counter(other).items():
175            data[outcome] += count
176        return Deck(+data)
177
178    __radd__ = __add__
179
180    def difference(self, *args:
181                   Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
182        """This deck with the other cards removed (but not below zero of each card)."""
183        return functools.reduce(operator.sub, args,
184                                self)  # type: ignore
185
186    def __sub__(self,
187                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
188        data = Counter(self._data)
189        for outcome, count in Counter(other).items():
190            data[outcome] -= count
191        return Deck(+data)
192
193    def __rsub__(self,
194                 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
195        data = Counter(other)
196        for outcome, count in self.items():
197            data[outcome] -= count
198        return Deck(+data)
199
200    def intersection(
201            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
202        """The cards that both decks have."""
203        return functools.reduce(operator.and_, args,
204                                self)  # type: ignore
205
206    def __and__(self,
207                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
208        data: Counter[T_co] = Counter()
209        for outcome, count in Counter(other).items():
210            data[outcome] = min(self.get(outcome, 0), count)
211        return Deck(+data)
212
213    __rand__ = __and__
214
215    def union(self, *args:
216              Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
217        """As many of each card as the deck that has more of them."""
218        return functools.reduce(operator.or_, args,
219                                self)  # type: ignore
220
221    def __or__(self,
222               other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
223        data = Counter(self._data)
224        for outcome, count in Counter(other).items():
225            data[outcome] = max(data[outcome], count)
226        return Deck(+data)
227
228    __ror__ = __or__
229
230    def symmetric_difference(
231            self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
232        """The absolute difference in quantity of each card between the two decks."""
233        return self ^ other
234
235    def __xor__(self,
236                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
237        data = Counter(self._data)
238        for outcome, count in Counter(other).items():
239            data[outcome] = abs(data[outcome] - count)
240        return Deck(+data)
241
242    def __mul__(self, other: int) -> 'Deck[T_co]':
243        if not isinstance(other, int):
244            return NotImplemented
245        return self.multiply_quantities(other)
246
247    __rmul__ = __mul__
248
249    def __floordiv__(self, other: int) -> 'Deck[T_co]':
250        if not isinstance(other, int):
251            return NotImplemented
252        return self.divide_quantities(other)
253
254    def __mod__(self, other: int) -> 'Deck[T_co]':
255        if not isinstance(other, int):
256            return NotImplemented
257        return self.modulo_quantities(other)
258
259    def map(
260            self,
261            repl:
262        'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
263            /,
264            *,
265            star: bool | None = None) -> 'Deck[U]':
266        """Maps outcomes of this `Deck` to other outcomes.
267
268        Args:
269            repl: One of the following:
270                * A callable returning a new outcome for each old outcome.
271                * A map from old outcomes to new outcomes.
272                    Unmapped old outcomes stay the same.
273                The new outcomes may be `Deck`s, in which case one card is
274                replaced with several. This is not recommended.
275            star: Whether outcomes should be unpacked into separate arguments
276                before sending them to a callable `repl`.
277                If not provided, this will be guessed based on the function
278                signature.
279        """
280        # Convert to a single-argument function.
281        if callable(repl):
282            if star is None:
283                star = infer_star(repl)
284            if star:
285
286                def transition_function(outcome):
287                    return repl(*outcome)
288            else:
289
290                def transition_function(outcome):
291                    return repl(outcome)
292        else:
293            # repl is a mapping.
294            def transition_function(outcome):
295                if outcome in repl:
296                    return repl[outcome]
297                else:
298                    return outcome
299
300        return Deck(
301            [transition_function(outcome) for outcome in self.outcomes()],
302            times=self.quantities())
303
304    @cached_property
305    def _sequence_cache(
306            self) -> 'MutableSequence[icepool.Die[tuple[T_co, ...]]]':
307        return [icepool.Die([()])]
308
309    def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]':
310        """Possible sequences produced by dealing from this deck a number of times.
311        
312        This is extremely expensive computationally. If you don't care about
313        order, use `deal()` instead.
314        """
315        if deals < 0:
316            raise ValueError('The number of cards dealt cannot be negative.')
317        for i in range(len(self._sequence_cache), deals + 1):
318
319            def transition(curr):
320                remaining = icepool.Die(self - curr)
321                return icepool.map(lambda curr, next: curr + (next, ), curr,
322                                   remaining)
323
324            result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[
325                i - 1].map(transition)
326            self._sequence_cache.append(result)
327        return result
328
329    @cached_property
330    def hash_key(self) -> tuple:
331        return Deck, tuple(self.items())
332
333    def __repr__(self) -> str:
334        items_string = ', '.join(f'{repr(outcome)}: {quantity}'
335                                 for outcome, quantity in self.items())
336        return type(self).__qualname__ + '({' + items_string + '})'

Sampling without replacement (within a single evaluation).

Quantities represent duplicates.
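Sampling without replacement follows hypergeometric counting; a standalone arithmetic check, using a hypothetical 52-card deck containing 4 aces:

```python
from math import comb

# P(exactly 2 aces in a 5-card hand) = C(4,2) * C(48,3) / C(52,5)
favorable = comb(4, 2) * comb(48, 3)
total = comb(52, 5)
```

Evaluations over a `Deck` produce such quantities exactly, as integer numerators over an integer denominator.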

def keys(self) -> CountsKeysView[+T_co]:
84    def keys(self) -> CountsKeysView[T_co]:
85        return self._data.keys()

The outcomes within the population in sorted order.

def values(self) -> CountsValuesView:
87    def values(self) -> CountsValuesView:
88        return self._data.values()

The quantities within the population in outcome order.

def items(self) -> CountsItemsView[+T_co]:
90    def items(self) -> CountsItemsView[T_co]:
91        return self._data.items()

The (outcome, quantity)s of the population in sorted order.

def size(self) -> int:
296    def denominator(self) -> int:
297        """The sum of all quantities (e.g. weights or duplicates).
298
299        For the number of unique outcomes, use `len()`.
300        """
301        return self._denominator

The sum of all quantities (e.g. weights or duplicates).

For the number of unique outcomes, use len().

def deal( self, hand_sizes: Union[int, Iterable[int]]) -> Union[Deal[+T_co], MultiDeal[+T_co, Any]]:
135    def deal(
136        self, hand_sizes: int | Iterable[int]
137    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]':
138        """Deals the specified number of cards from this deck.
139
140        Args:
141            hand_sizes: Either an integer, in which case a `Deal` will be
142                returned, or an iterable of multiple hand sizes, in which case a
143                `MultiDeal` will be returned.
144        """
145        if isinstance(hand_sizes, int):
146            return icepool.Deal(self, hand_sizes)
147        else:
148            return icepool.MultiDeal(
149                self, tuple((hand_size, 1) for hand_size in hand_sizes))

Deals the specified number of cards from this deck.

Arguments:
  • hand_sizes: Either an integer, in which case a Deal will be returned, or an iterable of multiple hand sizes, in which case a MultiDeal will be returned.
def deal_groups( self, *hand_groups: tuple[int, int]) -> MultiDeal[+T_co, typing.Any]:
151    def deal_groups(
152            self, *hand_groups: tuple[int,
153                                      int]) -> 'icepool.MultiDeal[T_co, Any]':
154        """EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order.
155        
156        Args:
157            hand_groups: Each argument is a tuple (hand_size, group_size),
158                denoting the number of cards in each hand of the group and
159                the number of hands in the group respectively.
160        """
161        return icepool.MultiDeal(self, hand_groups)

EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order.

Arguments:
  • hand_groups: Each argument is a tuple (hand_size, group_size), denoting the number of cards in each hand of the group and the number of hands in the group respectively.
def additive_union( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
165    def additive_union(
166            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
167        """Both decks merged together."""
168        return functools.reduce(operator.add, args,
169                                self)  # type: ignore

Both decks merged together.

def difference( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
180    def difference(self, *args:
181                   Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
182        """This deck with the other cards removed (but not below zero of each card)."""
183        return functools.reduce(operator.sub, args,
184                                self)  # type: ignore

This deck with the other cards removed (but not below zero of each card).
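The "not below zero" clamping matches `collections.Counter` subtraction, which the implementation relies on; a standalone sketch:

```python
from collections import Counter

a = Counter({'ace': 4, 'king': 2})
b = Counter({'king': 5, 'queen': 1})
# Counter.__sub__ drops nonpositive counts, mirroring Deck(+data) above.
result = a - b
```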

def intersection( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
200    def intersection(
201            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
202        """The cards that both decks have."""
203        return functools.reduce(operator.and_, args,
204                                self)  # type: ignore

The cards that both decks have.

def union( self, *args: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
215    def union(self, *args:
216              Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
217        """As many of each card as the deck that has more of them."""
218        return functools.reduce(operator.or_, args,
219                                self)  # type: ignore

As many of each card as the deck that has more of them.

def symmetric_difference( self, other: Union[Iterable[+T_co], Mapping[+T_co, int]]) -> Deck[+T_co]:
230    def symmetric_difference(
231            self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
232        """The absolute difference in quantity of each card between the two decks."""
233        return self ^ other

The absolute difference in quantity of each card between the two decks.
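Per the `__xor__` implementation in the source above, each card's quantity in the result is the absolute difference of its quantities in the two decks. A standalone sketch using `Counter`:

```python
from collections import Counter


def symmetric_difference(a, b):
    # Quantity of each card is |count_a - count_b|; zero quantities are dropped.
    data = Counter(a)
    for outcome, count in Counter(b).items():
        data[outcome] = abs(data[outcome] - count)
    return +data
```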

def map( self, repl: Union[Callable[..., Union[~U, Deck[~U], RerollType]], Mapping[+T_co, Union[~U, Deck[~U], RerollType]]], /, *, star: bool | None = None) -> Deck[~U]:
259    def map(
260            self,
261            repl:
262        'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
263            /,
264            *,
265            star: bool | None = None) -> 'Deck[U]':
266        """Maps outcomes of this `Deck` to other outcomes.
267
268        Args:
269            repl: One of the following:
270                * A callable returning a new outcome for each old outcome.
271                * A map from old outcomes to new outcomes.
272                    Unmapped old outcomes stay the same.
273                The new outcomes may be `Deck`s, in which case one card is
274                replaced with several. This is not recommended.
275            star: Whether outcomes should be unpacked into separate arguments
276                before sending them to a callable `repl`.
277                If not provided, this will be guessed based on the function
278                signature.
279        """
280        # Convert to a single-argument function.
281        if callable(repl):
282            if star is None:
283                star = infer_star(repl)
284            if star:
285
286                def transition_function(outcome):
287                    return repl(*outcome)
288            else:
289
290                def transition_function(outcome):
291                    return repl(outcome)
292        else:
293            # repl is a mapping.
294            def transition_function(outcome):
295                if outcome in repl:
296                    return repl[outcome]
297                else:
298                    return outcome
299
300        return Deck(
301            [transition_function(outcome) for outcome in self.outcomes()],
302            times=self.quantities())

Maps outcomes of this Deck to other outcomes.

Arguments:
  • repl: One of the following:
    • A callable returning a new outcome for each old outcome.
    • A map from old outcomes to new outcomes. Unmapped old outcomes stay the same. The new outcomes may be Decks, in which case one card is replaced with several. This is not recommended.
  • star: Whether outcomes should be unpacked into separate arguments before sending them to a callable repl. If not provided, this will be guessed based on the function signature.
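The dispatch in `map()` reduces to building a single-argument transition function. A standalone sketch (the signature-based `infer_star` guess is omitted; `star` simply defaults to False here):

```python
def make_transition(repl, star=False):
    """Build a one-argument transition function, mirroring Deck.map's dispatch."""
    if callable(repl):
        if star:
            # Unpack tuple outcomes into separate arguments.
            return lambda outcome: repl(*outcome)
        return repl
    # repl is a mapping: unmapped outcomes pass through unchanged.
    return lambda outcome: repl[outcome] if outcome in repl else outcome
```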
def sequence(self, deals: int, /) -> Die[tuple[+T_co, ...]]:
309    def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]':
310        """Possible sequences produced by dealing from this deck a number of times.
311        
312        This is extremely expensive computationally. If you don't care about
313        order, use `deal()` instead.
314        """
315        if deals < 0:
316            raise ValueError('The number of cards dealt cannot be negative.')
317        for i in range(len(self._sequence_cache), deals + 1):
318
319            def transition(curr):
320                remaining = icepool.Die(self - curr)
321                return icepool.map(lambda curr, next: curr + (next, ), curr,
322                                   remaining)
323
324            result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[
325                i - 1].map(transition)
326            self._sequence_cache.append(result)
327        return self._sequence_cache[deals]

Possible sequences produced by dealing from this deck a number of times.

This is extremely expensive computationally. If you don't care about order, use deal() instead.

class Deal(icepool.generator.keep.KeepGenerator[~T]):
13class Deal(KeepGenerator[T]):
14    """Represents an unordered deal of a single hand from a `Deck`."""
15
16    _deck: 'icepool.Deck[T]'
17
18    def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
19        """Constructor.
20
21        For algorithmic reasons, you must pre-commit to the number of cards to
22        deal.
23
24        It is permissible to deal zero cards from an empty deck, but not all
25        evaluators will handle this case, especially if they depend on the
26        outcome type. Dealing zero cards from a non-empty deck does not have
27        this issue.
28
29        Args:
30            deck: The `Deck` to deal from.
31            hand_size: How many cards to deal.
32        """
33        if hand_size < 0:
34            raise ValueError('hand_size cannot be negative.')
35        if hand_size > deck.size():
36            raise ValueError(
37                'The number of cards dealt cannot exceed the size of the deck.'
38            )
39        self._deck = deck
40        self._keep_tuple = (1, ) * hand_size
41
42    @classmethod
43    def _new_raw(cls, deck: 'icepool.Deck[T]',
44                 keep_tuple: tuple[int, ...]) -> 'Deal[T]':
45        self = super(Deal, cls).__new__(cls)
46        self._deck = deck
47        self._keep_tuple = keep_tuple
48        return self
49
50    def _make_source(self):
51        return DealSource(self._deck, self._keep_tuple)
52
53    def _set_keep_tuple(self, keep_tuple: tuple[int, ...]) -> 'Deal[T]':
54        return Deal._new_raw(self._deck, keep_tuple)
55
56    def deck(self) -> 'icepool.Deck[T]':
57        """The `Deck` the cards are dealt from."""
58        return self._deck
59
60    def hand_size(self) -> int:
61        """The number of cards dealt to each hand as a tuple."""
62        return len(self._keep_tuple)
63
64    def outcomes(self) -> CountsKeysView[T]:
65        """The outcomes of the `Deck` in ascending order.
66
67        These are also the `keys` of the `Deck` as a `Mapping`.
68        Prefer to use the name `outcomes`.
69        """
70        return self.deck().outcomes()
71
72    def denominator(self) -> int:
73        return icepool.math.comb(self.deck().size(), self.hand_size())
74
75    @property
76    def hash_key(self):
77        return Deal, self._deck, self._keep_tuple
78
79    def __repr__(self) -> str:
80        return type(
81            self
82        ).__qualname__ + f'({repr(self.deck())}, hand_size={self.hand_size()})'
83
84    def __str__(self) -> str:
85        return type(
86            self
87        ).__qualname__ + f' of hand_size={self.hand_size()} from deck:\n' + str(
88            self.deck())

Represents an unordered deal of a single hand from a Deck.

Deal(deck: Deck[~T], hand_size: int)
18    def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
19        """Constructor.
20
21        For algorithmic reasons, you must pre-commit to the number of cards to
22        deal.
23
24        It is permissible to deal zero cards from an empty deck, but not all
25        evaluators will handle this case, especially if they depend on the
26        outcome type. Dealing zero cards from a non-empty deck does not have
27        this issue.
28
29        Args:
30            deck: The `Deck` to deal from.
31            hand_size: How many cards to deal.
32        """
33        if hand_size < 0:
34            raise ValueError('hand_size cannot be negative.')
35        if hand_size > deck.size():
36            raise ValueError(
37                'The number of cards dealt cannot exceed the size of the deck.'
38            )
39        self._deck = deck
40        self._keep_tuple = (1, ) * hand_size

Constructor.

For algorithmic reasons, you must pre-commit to the number of cards to deal.

It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.

Arguments:
  • deck: The Deck to deal from.
  • hand_size: How many cards to deal.
def deck(self) -> Deck[~T]:
56    def deck(self) -> 'icepool.Deck[T]':
57        """The `Deck` the cards are dealt from."""
58        return self._deck

The Deck the cards are dealt from.

def hand_size(self) -> int:
60    def hand_size(self) -> int:
61        """The number of cards dealt to each hand as a tuple."""
62        return len(self._keep_tuple)

The number of cards dealt to the hand.

def outcomes(self) -> CountsKeysView[~T]:
64    def outcomes(self) -> CountsKeysView[T]:
65        """The outcomes of the `Deck` in ascending order.
66
67        These are also the `keys` of the `Deck` as a `Mapping`.
68        Prefer to use the name `outcomes`.
69        """
70        return self.deck().outcomes()

The outcomes of the Deck in ascending order.

These are also the keys of the Deck as a Mapping. Prefer to use the name outcomes.

def denominator(self) -> int:
72    def denominator(self) -> int:
73        return icepool.math.comb(self.deck().size(), self.hand_size())
hash_key
75    @property
76    def hash_key(self):
77        return Deal, self._deck, self._keep_tuple

A hash key for this object. This should include a type.

If None, this will not compare equal to any other object.

class MultiDeal(icepool.generator.multiset_tuple_generator.MultisetTupleGenerator[~T, ~IntTupleOut]):
 20class MultiDeal(MultisetTupleGenerator[T, IntTupleOut]):
 21    """Represents an deal of multiple hands from a `Deck`.
 22    
 23    The cards within each hand are in sorted order. Furthermore, hands may be
 24    organized into groups in which the hands are initially indistinguishable.
 25    """
 26
 27    _deck: 'icepool.Deck[T]'
 28    # An ordered tuple of hand groups.
 29    # Each group is designated by (hand_size, hand_count).
 30    _hand_groups: tuple[tuple[int, int], ...]
 31
 32    def __init__(self, deck: 'icepool.Deck[T]',
 33                 hand_groups: tuple[tuple[int, int], ...]) -> None:
 34        """Constructor.
 35
 36        For algorithmic reasons, you must pre-commit to the number of cards to
 37        deal for each hand.
 38
 39        It is permissible to deal zero cards from an empty deck, but not all
 40        evaluators will handle this case, especially if they depend on the
 41        outcome type. Dealing zero cards from a non-empty deck does not have
 42        this issue.
 43
 44        Args:
 45            deck: The `Deck` to deal from.
 46            hand_groups: An ordered tuple of hand groups.
 47                Each group is designated by (hand_size, hand_count) with the
 48                hands of each group being arbitrarily ordered.
 49                The resulting counts are produced in a flat tuple.
 50        """
 51        self._deck = deck
 52        self._hand_groups = hand_groups
 53        if self.total_cards_dealt() > self.deck().size():
 54            raise ValueError(
 55                'The total number of cards dealt cannot exceed the size of the deck.'
 56            )
 57
 58    @classmethod
 59    def _new_raw(
 60        cls, deck: 'icepool.Deck[T]',
 61        hand_sizes: tuple[tuple[int, int],
 62                          ...]) -> 'MultiDeal[T, IntTupleOut]':
 63        self = super(MultiDeal, cls).__new__(cls)
 64        self._deck = deck
 65        self._hand_groups = hand_sizes
 66        return self
 67
 68    def deck(self) -> 'icepool.Deck[T]':
 69        """The `Deck` the cards are dealt from."""
 70        return self._deck
 71
 72    def hand_sizes(self) -> IntTupleOut:
 73        """The number of cards dealt to each hand as a tuple."""
 74        return cast(
 75            IntTupleOut,
 76            tuple(
 77                itertools.chain.from_iterable(
 78                    (hand_size, ) * group_size
 79                    for hand_size, group_size in self._hand_groups)))
 80
 81    def total_cards_dealt(self) -> int:
 82        """The total number of cards dealt."""
 83        return sum(hand_size * group_size
 84                   for hand_size, group_size in self._hand_groups)
 85
 86    def outcomes(self) -> CountsKeysView[T]:
 87        """The outcomes of the `Deck` in ascending order.
 88
 89        These are also the `keys` of the `Deck` as a `Mapping`.
 90        Prefer to use the name `outcomes`.
 91        """
 92        return self.deck().outcomes()
 93
 94    def __len__(self) -> int:
 95        return sum(group_size for _, group_size in self._hand_groups)
 96
 97    @cached_property
 98    def _denominator(self) -> int:
 99        d_total = icepool.math.comb(self.deck().size(),
100                                    self.total_cards_dealt())
101        d_split = math.prod(
102            icepool.math.comb(self.total_cards_dealt(), h)
103            for h in self.hand_sizes()[1:])
104        return d_total * d_split
105
106    def denominator(self) -> int:
107        return self._denominator
108
109    def _make_source(self) -> 'MultisetTupleSource[T, IntTupleOut]':
110        return MultiDealSource(self._deck, self._hand_groups)
111
112    @property
113    def hash_key(self) -> Hashable:
114        return MultiDeal, self._deck, self._hand_groups
115
116    def __repr__(self) -> str:
117        return type(
118            self
119        ).__qualname__ + f'({repr(self.deck())}, hand_groups={self._hand_groups})'
120
121    def __str__(self) -> str:
122        return type(
123            self
124        ).__qualname__ + f' of hand_groups={self._hand_groups} from deck:\n' + str(
125            self.deck())

Represents a deal of multiple hands from a Deck.

The cards within each hand are in sorted order. Furthermore, hands may be organized into groups in which the hands are initially indistinguishable.

MultiDeal(deck: Deck[~T], hand_groups: tuple[tuple[int, int], ...])
32    def __init__(self, deck: 'icepool.Deck[T]',
33                 hand_groups: tuple[tuple[int, int], ...]) -> None:
34        """Constructor.
35
36        For algorithmic reasons, you must pre-commit to the number of cards to
37        deal for each hand.
38
39        It is permissible to deal zero cards from an empty deck, but not all
40        evaluators will handle this case, especially if they depend on the
41        outcome type. Dealing zero cards from a non-empty deck does not have
42        this issue.
43
44        Args:
45            deck: The `Deck` to deal from.
46            hand_groups: An ordered tuple of hand groups.
47                Each group is designated by (hand_size, hand_count) with the
48                hands of each group being arbitrarily ordered.
49                The resulting counts are produced in a flat tuple.
50        """
51        self._deck = deck
52        self._hand_groups = hand_groups
53        if self.total_cards_dealt() > self.deck().size():
54            raise ValueError(
55                'The total number of cards dealt cannot exceed the size of the deck.'
56            )

Constructor.

For algorithmic reasons, you must pre-commit to the number of cards to deal for each hand.

It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.

Arguments:
  • deck: The Deck to deal from.
  • hand_groups: An ordered tuple of hand groups. Each group is designated by (hand_size, hand_count) with the hands of each group being arbitrarily ordered. The resulting counts are produced in a flat tuple.
def deck(self) -> Deck[~T]:
68    def deck(self) -> 'icepool.Deck[T]':
69        """The `Deck` the cards are dealt from."""
70        return self._deck

The Deck the cards are dealt from.

def hand_sizes(self) -> ~IntTupleOut:
72    def hand_sizes(self) -> IntTupleOut:
73        """The number of cards dealt to each hand as a tuple."""
74        return cast(
75            IntTupleOut,
76            tuple(
77                itertools.chain.from_iterable(
78                    (hand_size, ) * group_size
79                    for hand_size, group_size in self._hand_groups)))

The number of cards dealt to each hand as a tuple.

def total_cards_dealt(self) -> int:
81    def total_cards_dealt(self) -> int:
82        """The total number of cards dealt."""
83        return sum(hand_size * group_size
84                   for hand_size, group_size in self._hand_groups)

The total number of cards dealt.

def outcomes(self) -> CountsKeysView[~T]:
86    def outcomes(self) -> CountsKeysView[T]:
87        """The outcomes of the `Deck` in ascending order.
88
89        These are also the `keys` of the `Deck` as a `Mapping`.
90        Prefer to use the name `outcomes`.
91        """
92        return self.deck().outcomes()

The outcomes of the Deck in ascending order.

These are also the keys of the Deck as a Mapping. Prefer to use the name outcomes.

def denominator(self) -> int:
106    def denominator(self) -> int:
107        return self._denominator
hash_key: Hashable
112    @property
113    def hash_key(self) -> Hashable:
114        return MultiDeal, self._deck, self._hand_groups

A hash key for this object. This should include a type.

If None, this will not compare equal to any other object.

def multiset_function(wrapped: Callable[..., Union[icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, +U_co], tuple[icepool.evaluator.multiset_function.MultisetFunctionRawResult[~T, +U_co], ...]]], /) -> icepool.evaluator.multiset_evaluator_base.MultisetEvaluatorBase[~T, +U_co]:
 55def multiset_function(wrapped: Callable[
 56    ...,
 57    'MultisetFunctionRawResult[T, U_co] | tuple[MultisetFunctionRawResult[T, U_co], ...]'],
 58                      /) -> 'MultisetEvaluatorBase[T, U_co]':
 59    """EXPERIMENTAL: A decorator that turns a function into a multiset evaluator.
 60
 61    The provided function should take in arguments representing multisets,
 62    do a limited set of operations on them (see `MultisetExpression`), and
 63    finish off with an evaluation. You can return a tuple to perform a joint
 64    evaluation.
 65
 66    For example, to create an evaluator which computes the elements each of two
 67    multisets has that the other doesn't:
 68    ```python
 69    @multiset_function
 70    def two_way_difference(a, b):
 71        return (a - b).expand(), (b - a).expand()
 72    ```
 73
 74    The special `star` keyword argument will unpack tuple-valued counts of the
 75    first argument inside the multiset function. For example,
 76    ```python
 77    hands = deck.deal((5, 5))
 78    two_way_difference(hands, star=True)
 79    ```
 80    effectively unpacks as if we had written
 81    ```python
 82    @multiset_function
 83    def two_way_difference(hands):
 84        a, b = hands
 85        return (a - b).expand(), (b - a).expand()
 86    ```
 87
 88    If not provided explicitly, `star` will be inferred automatically.
 89
 90    You can pass non-multiset values as keyword-only arguments.
 91    ```python
 92    @multiset_function
 93    def count_outcomes(a, *, target):
 94        return a.keep_outcomes(target).size()
 95
 96    count_outcomes(a, target=[5, 6])
 97    ```
 98
 99    While in theory `@multiset_function` implements late binding similar to
100    ordinary Python functions, I recommend using only pure functions.
101
102    Be careful when using control structures: you cannot branch on the value of
103    a multiset expression or evaluation, so e.g.
104
105    ```python
106    @multiset_function
107    def bad(a, b):
108        if a == b:
109            ...
110    ```
111
112    is not allowed.
113
114    `multiset_function` has considerable overhead, being effectively a
115    mini-language within Python. For better performance, you can try
116    implementing your own subclass of `MultisetEvaluator` directly.
117
118    Args:
119        function: This should take in multiset expressions as positional
120            arguments, and non-multiset variables as keyword arguments.
121    """
122    return MultisetFunctionEvaluator(wrapped)

EXPERIMENTAL: A decorator that turns a function into a multiset evaluator.

The provided function should take in arguments representing multisets, do a limited set of operations on them (see MultisetExpression), and finish off with an evaluation. You can return a tuple to perform a joint evaluation.

For example, to create an evaluator which computes the elements each of two multisets has that the other doesn't:

@multiset_function
def two_way_difference(a, b):
    return (a - b).expand(), (b - a).expand()

The special star keyword argument will unpack tuple-valued counts of the first argument inside the multiset function. For example,

hands = deck.deal((5, 5))
two_way_difference(hands, star=True)

effectively unpacks as if we had written

@multiset_function
def two_way_difference(hands):
    a, b = hands
    return (a - b).expand(), (b - a).expand()

If not provided explicitly, star will be inferred automatically.

You can pass non-multiset values as keyword-only arguments.

@multiset_function
def count_outcomes(a, *, target):
    return a.keep_outcomes(target).size()

count_outcomes(a, target=[5, 6])

While in theory @multiset_function implements late binding similar to ordinary Python functions, I recommend using only pure functions.

Be careful when using control structures: you cannot branch on the value of a multiset expression or evaluation, so e.g.

@multiset_function
def bad(a, b):
    if a == b:
        ...

is not allowed.

multiset_function has considerable overhead, being effectively a mini-language within Python. For better performance, you can try implementing your own subclass of MultisetEvaluator directly.

Arguments:
  • function: This should take in multiset expressions as positional arguments, and non-multiset variables as keyword arguments.
class MultisetParameter(icepool.expression.multiset_parameter.MultisetParameterBase[~T, int], icepool.MultisetExpression[~T]):
48class MultisetParameter(MultisetParameterBase[T, int], MultisetExpression[T]):
49    """A multiset parameter with a count of a single `int`."""
50
51    def __init__(self, name: str, arg_index: int, star_index: int | None):
52        self._name = name
53        self._arg_index = arg_index
54        self._star_index = star_index

A multiset parameter with a count of a single int.

MultisetParameter(name: str, arg_index: int, star_index: int | None)
51    def __init__(self, name: str, arg_index: int, star_index: int | None):
52        self._name = name
53        self._arg_index = arg_index
54        self._star_index = star_index
class MultisetTupleParameter(icepool.expression.multiset_parameter.MultisetParameterBase[~T, ~IntTupleOut], icepool.expression.multiset_tuple_expression.MultisetTupleExpression[~T, ~IntTupleOut]):
57class MultisetTupleParameter(MultisetParameterBase[T, IntTupleOut],
58                             MultisetTupleExpression[T, IntTupleOut]):
59    """A multiset parameter with a count of a tuple of `int`s."""
60
61    def __init__(self, name: str, arg_index: int, length: int):
62        self._name = name
63        self._arg_index = arg_index
64        self._star_index = None
65        self._length = length
66
67    def __len__(self):
68        return self._length

A multiset parameter with a count of a tuple of ints.

MultisetTupleParameter(name: str, arg_index: int, length: int)
61    def __init__(self, name: str, arg_index: int, length: int):
62        self._name = name
63        self._arg_index = arg_index
64        self._star_index = None
65        self._length = length
NoCache: Final = <NoCacheType.NoCache: 'NoCache'>

Indicates that caching should not be performed. Exact meaning depends on context.

def format_probability_inverse(probability, /, int_start: int = 20):
22def format_probability_inverse(probability, /, int_start: int = 20):
23    """EXPERIMENTAL: Formats the inverse of a value as "1 in N".
24    
25    Args:
26        probability: The value to be formatted.
27        int_start: If N = 1 / probability is between this value and 1 million
28            times this value, it will be formatted as an integer. Otherwise it
29            will be formatted as a float with precision at least 1 part in int_start.
30    """
31    max_precision = math.ceil(math.log10(int_start))
32    if probability <= 0 or probability > 1:
33        return 'n/a'
34    product = probability * int_start
35    if product <= 1:
36        if probability * int_start * 10**6 <= 1:
37            return f'1 in {1.0 / probability:<.{max_precision}e}'
38        else:
39            return f'1 in {round(1 / probability)}'
40
41    precision = 0
42    precision_factor = 1
43    while product > precision_factor and precision < max_precision:
44        precision += 1
45        precision_factor *= 10
46    return f'1 in {1.0 / probability:<.{precision}f}'

EXPERIMENTAL: Formats the inverse of a value as "1 in N".

Arguments:
  • probability: The value to be formatted.
  • int_start: If N = 1 / probability is between this value and 1 million times this value, it will be formatted as an integer. Otherwise it will be formatted as a float with precision at least 1 part in int_start.
class Wallenius(typing.Generic[~T]):
28class Wallenius(Generic[T]):
29    """EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution.
30
31    This is sampling without replacement with weights, where the entire weight
32    of a card goes away when it is pulled.
33    """
34    _weight_decks: 'MutableMapping[int, icepool.Deck[T]]'
35    _weight_die: 'icepool.Die[int]'
36
37    def __init__(self, data: Iterable[tuple[T, int]]
38                 | Mapping[T, int | tuple[int, int]]):
39        """Constructor.
40        
41        Args:
42            data: Either an iterable of (outcome, weight), or a mapping from
43                outcomes to either weights or (weight, quantity).
45        """
46        self._weight_decks = {}
47
48        if isinstance(data, Mapping):
49            for outcome, v in data.items():
50                if isinstance(v, int):
51                    weight = v
52                    quantity = 1
53                else:
54                    weight, quantity = v
55                self._weight_decks[weight] = self._weight_decks.get(
56                    weight, icepool.Deck()).append(outcome, quantity)
57        else:
58            for outcome, weight in data:
59                self._weight_decks[weight] = self._weight_decks.get(
60                    weight, icepool.Deck()).append(outcome)
61
62        self._weight_die = icepool.Die({
63            weight: weight * deck.denominator()
64            for weight, deck in self._weight_decks.items()
65        })
66
67    def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]':
68        """Deals the specified number of outcomes from the Wallenius.
69        
70        The result is a `MultisetExpression` representing the multiset of
71        outcomes dealt.
72        """
73        if hand_size == 0:
74            return icepool.Pool([])
75
76        def inner(weights):
77            weight_counts = Counter(weights)
78            result = None
79            for weight, count in weight_counts.items():
80                deal = self._weight_decks[weight].deal(count)
81                if result is None:
82                    result = deal
83                else:
84                    result = result + deal
85            return result
86
87        hand_weights = _wallenius_weights(self._weight_die, hand_size)
88        return hand_weights.map_to_pool(inner, star=False)

EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution.

This is sampling without replacement with weights, where the entire weight of a card goes away when it is pulled.

Wallenius(data: Union[Iterable[tuple[~T, int]], Mapping[~T, int | tuple[int, int]]])
37    def __init__(self, data: Iterable[tuple[T, int]]
38                 | Mapping[T, int | tuple[int, int]]):
39        """Constructor.
40        
41        Args:
42            data: Either an iterable of (outcome, weight), or a mapping from
43                outcomes to either weights or (weight, quantity).
45        """
46        self._weight_decks = {}
47
48        if isinstance(data, Mapping):
49            for outcome, v in data.items():
50                if isinstance(v, int):
51                    weight = v
52                    quantity = 1
53                else:
54                    weight, quantity = v
55                self._weight_decks[weight] = self._weight_decks.get(
56                    weight, icepool.Deck()).append(outcome, quantity)
57        else:
58            for outcome, weight in data:
59                self._weight_decks[weight] = self._weight_decks.get(
60                    weight, icepool.Deck()).append(outcome)
61
62        self._weight_die = icepool.Die({
63            weight: weight * deck.denominator()
64            for weight, deck in self._weight_decks.items()
65        })

Constructor.

Arguments:
  • data: Either an iterable of (outcome, weight), or a mapping from outcomes to either weights or (weight, quantity).
def deal(self, hand_size: int, /) -> MultisetExpression[~T]:
67    def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]':
68        """Deals the specified number of outcomes from the Wallenius.
69        
70        The result is a `MultisetExpression` representing the multiset of
71        outcomes dealt.
72        """
73        if hand_size == 0:
74            return icepool.Pool([])
75
76        def inner(weights):
77            weight_counts = Counter(weights)
78            result = None
79            for weight, count in weight_counts.items():
80                deal = self._weight_decks[weight].deal(count)
81                if result is None:
82                    result = deal
83                else:
84                    result = result + deal
85            return result
86
87        hand_weights = _wallenius_weights(self._weight_die, hand_size)
88        return hand_weights.map_to_pool(inner, star=False)

Deals the specified number of outcomes from the Wallenius.

The result is a MultisetExpression representing the multiset of outcomes dealt.