icepool
Package for computing dice and card probabilities.
Starting with v0.25.1, you can replace latest in the URL with an old version
number to get the documentation for that version.
See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html) for examples.
General conventions:
- Instances are immutable (apart from internal caching). Anything that looks like it mutates an instance actually returns a separate instance with the change.
1"""Package for computing dice and card probabilities. 2 3Starting with `v0.25.1`, you can replace `latest` in the URL with an old version 4number to get the documentation for that version. 5 6See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html) 7for examples. 8 9[Visit the project page.](https://github.com/HighDiceRoller/icepool) 10 11General conventions: 12 13* Instances are immutable (apart from internal caching). Anything that looks 14 like it mutates an instance actually returns a separate instance with the 15 change. 16""" 17 18__docformat__ = 'google' 19 20__version__ = '2.2.1' 21 22from typing import Final 23 24from icepool.typing import Outcome, RerollType, NoCacheType 25from icepool.order import Order, ConflictingOrderError, UnsupportedOrder 26from icepool.map_tools.common import Break 27 28Reroll: Final = RerollType.Reroll 29"""Indicates that an outcome should be rerolled (with unlimited depth). 30 31This effectively removes the outcome from the probability space, along with its 32contribution to the denominator. 33 34This can be used for conditional probability by removing all outcomes not 35consistent with the given observations. 36 37Operation in specific cases: 38 39* If sent to the constructor of `Die`, it and the corresponding quantity is 40 dropped. 41* When used with `Again` or `map(repeat)`, only that stage is rerolled, not the 42 entire rolling process. 43* To reroll with limited depth, use `Die.reroll()`, or `Again` with no 44 modification. 45* When used with `MultisetEvaluator`, this currently has the same meaning as 46 `Restart`. Prefer using `Restart` in this case. 47""" 48Restart: Final = RerollType.Restart 49"""Indicates that a rolling process should be restarted (with unlimited depth). 50 51`Restart` effectively removes the sequence of events from the probability space, 52along with its contribution to the denominator. 
53 54`Restart` can be used for conditional probability by removing all sequences of 55events not consistent with the given observations. 56 57`Restart` can be used with `again_count`, `map(repeat)`, or `MultisetEvaluator`. 58When sent to the constructor of `Die`, it has the same effect as `Reroll`; 59prefer using `Reroll` in this case. 60 61`Restart` can NOT be used with `again_depth`, but `Reroll` can. 62""" 63 64REROLL_TYPES: Final = (Reroll, Restart) 65"""Explicitly defined since Enum.__contains__ requires that the queried value be hashable.""" 66 67NoCache: Final = NoCacheType.NoCache 68"""Indicates that caching should not be performed. Exact meaning depends on context.""" 69 70# Expose certain names at top-level. 71 72from icepool.function import (d, z, __getattr__, coin, stochastic_round, 73 one_hot, from_cumulative, from_rv, pointwise_max, 74 pointwise_min, min_outcome, max_outcome, 75 consecutive, sorted_union, 76 harmonize_denominators) 77from icepool.map_tools.function import (reduce, accumulate, map, map_function, 78 map_and_time, mean_time_to_absorb, 79 map_to_pool) 80 81from icepool.population.base import Population 82from icepool.population.die import implicit_convert_to_die, Die 83from icepool.expand import iter_cartesian_product, cartesian_product, tupleize, vectorize 84from icepool.collection.vector import Vector 85from icepool.collection.vector_with_truth_only import VectorWithTruthOnly 86from icepool.collection.symbols import Symbols 87from icepool.population.again import AgainExpression 88 89Again: Final = AgainExpression(is_additive=True) 90"""A symbol indicating that the die should be rolled again, usually with some operation applied. 91 92This is designed to be used with the `Die()` constructor. 93`AgainExpression`s should not be fed to functions or methods other than 94`Die()` (or indirectly via `map()`), but they can be used with operators. 95Examples: 96 97* `Again + 6`: Roll again and add 6. 98* `Again + Again`: Roll again twice and sum. 
99 100The `again_count`, `again_depth`, and `again_end` arguments to `Die()` 101affect how these arguments are processed. At most one of `again_count` or 102`again_depth` may be provided; if neither are provided, the behavior is as 103`again_depth=1`. 104 105For finer control over rolling processes, use e.g. `Die.map()` instead. 106 107#### Count mode 108 109When `again_count` is provided, we start with one roll queued and execute one 110roll at a time. For every `Again` we roll, we queue another roll. 111If we run out of rolls, we sum the rolls to find the result. We evaluate up to 112`again_count` extra rolls. If, at this point, there are still dice remaining: 113 114* `Restart`: If there would be dice over the limit, we restart the entire 115 process from the beginning, effectively conditioning the process against 116 this sequence of events. 117* `Reroll`: Any remaining dice can't produce more `Again`s. 118* `outcome`: Any remaining dice are each treated as the given outcome. 119* `None`: Any remaining dice are treated as zero. 120 121This mode only allows "additive" expressions to be used with `Again`, which 122means that only the following operators are allowed: 123 124* Binary `+` 125* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`. 126 127Furthermore, the `+` operator is assumed to be associative and commutative. 128For example, `str` or `tuple` outcomes will not produce elements with a definite 129order. 130 131#### Depth mode 132 133When `again_depth=0`, `again_end` is directly substituted 134for each occurence of `Again`. For other values of `again_depth`, the result for 135`again_depth-1` is substituted for each occurence of `Again`. 136 137If `again_end=Reroll`, then any `AgainExpression`s in the final depth 138are rerolled. `Restart` cannot be used with `again_depth`. 
139""" 140 141from icepool.population.die_with_truth import DieWithTruth 142 143from icepool.collection.counts import CountsKeysView, CountsValuesView, CountsItemsView 144 145from icepool.population.keep import lowest, highest, middle 146 147from icepool.generator.pool import Pool, standard_pool, d_pool, z_pool 148from icepool.generator.keep import KeepGenerator 149from icepool.generator.compound_keep import CompoundKeepGenerator 150 151from icepool.generator.multiset_generator import MultisetGenerator 152from icepool.generator.multiset_tuple_generator import MultisetTupleGenerator 153from icepool.generator.weightless import WeightlessGenerator 154from icepool.evaluator.multiset_evaluator import MultisetEvaluator 155 156from icepool.population.deck import Deck 157from icepool.generator.deal import Deal 158from icepool.generator.multi_deal import MultiDeal 159 160from icepool.expression.multiset_expression import MultisetExpression, implicit_convert_to_expression 161from icepool.evaluator.multiset_function import multiset_function 162from icepool.expression.multiset_parameter import MultisetParameter, MultisetTupleParameter 163from icepool.expression.multiset_mixture import MultisetMixture 164 165from icepool.population.format import format_probability_inverse 166 167from icepool.wallenius import Wallenius 168 169import icepool.generator as generator 170import icepool.evaluator as evaluator 171import icepool.operator as operator 172 173import icepool.typing as typing 174from icepool.expand import Expandable 175 176__all__ = [ 177 'd', 'z', 'coin', 'stochastic_round', 'one_hot', 'Outcome', 'Die', 178 'Population', 'tupleize', 'vectorize', 'Vector', 'Symbols', 'Again', 179 'CountsKeysView', 'CountsValuesView', 'CountsItemsView', 'from_cumulative', 180 'from_rv', 'pointwise_max', 'pointwise_min', 'lowest', 'highest', 'middle', 181 'min_outcome', 'max_outcome', 'consecutive', 'sorted_union', 182 'harmonize_denominators', 'reduce', 'accumulate', 'map', 'map_function', 
183 'map_and_time', 'mean_time_to_absorb', 'map_to_pool', 'Reroll', 'Restart', 184 'Break', 'RerollType', 'Pool', 'd_pool', 'z_pool', 'MultisetGenerator', 185 'MultisetExpression', 'MultisetEvaluator', 'Order', 186 'ConflictingOrderError', 'UnsupportedOrder', 'Deck', 'Deal', 'MultiDeal', 187 'multiset_function', 'MultisetParameter', 'MultisetTupleParameter', 188 'NoCache', 'function', 'typing', 'evaluator', 'format_probability_inverse', 189 'Wallenius' 190]
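The depth-mode semantics documented for `Again` can be checked by hand without icepool. Below is a sketch, assuming `Die([1, 2, 3, 4, 5, 6 + Again])` with `again_depth=1` and the default zero `again_end`, using plain `Fraction` arithmetic:

```python
from fractions import Fraction

# Depth 0: Again is replaced by the zero again_end, so the face `6 + Again`
# is just 6, and the depth-0 result is a plain d6.
depth0 = {face: Fraction(1, 6) for face in range(1, 7)}

# Depth 1: the depth-0 result is substituted for Again, so the face
# `6 + Again` becomes 6 plus an independent depth-0 roll.
depth1: dict[int, Fraction] = {}
for face in range(1, 6):
    depth1[face] = Fraction(1, 6)
for extra, p in depth0.items():
    total = 6 + extra
    depth1[total] = depth1.get(total, Fraction(0)) + Fraction(1, 6) * p
```

Note that a total of exactly 6 is impossible here: a rolled 6 always triggers another roll of at least 1, so the probability mass moves to totals 7 through 12.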
```python
@cache
def d(sides: int, /) -> 'icepool.Die[int]':
    """A standard die, uniformly distributed from `1` to `sides` inclusive.

    Don't confuse this with `icepool.Die()`:

    * `icepool.Die([6])`: A `Die` that always rolls the integer 6.
    * `icepool.d(6)`: A d6.

    You can also import individual standard dice from the `icepool` module, e.g.
    `from icepool import d6`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(1, sides + 1))
```
```python
@cache
def z(sides: int, /) -> 'icepool.Die[int]':
    """A die uniformly distributed from `0` to `sides - 1` inclusive.

    Equal to `d(sides) - 1`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(0, sides))
```
```python
def coin(n: int | float | Fraction,
         d: int = 1,
         /,
         *,
         max_denominator: int | None = None) -> 'icepool.Die[bool]':
    """A `Die` that rolls `True` with probability `n / d`, and `False` otherwise.

    If `n <= 0` or `n >= d` the result will have only one outcome.

    Args:
        n: An int numerator, or a non-integer probability.
        d: An int denominator. Should not be provided if the first argument is
            not an int.
    """
    if not isinstance(n, int):
        if d != 1:
            raise ValueError(
                'If a non-int numerator is provided, a denominator must not be provided.'
            )
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n = fraction.numerator
        d = fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)

    return icepool.Die(data)
```
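The quantity construction used by `coin` can be mirrored in plain Python. In this sketch, `coin_weights` is a hypothetical stand-in that returns the `{False: ..., True: ...}` quantity mapping rather than a `Die`:

```python
from fractions import Fraction

def coin_weights(n, d=1, max_denominator=None):
    # A non-int probability is converted to an exact Fraction, optionally
    # reduced with limit_denominator, mirroring the documented behavior.
    if not isinstance(n, int):
        frac = Fraction(n)
        if max_denominator is not None:
            frac = frac.limit_denominator(max_denominator)
        n, d = frac.numerator, frac.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)
    return data
```

For example, `coin_weights(1, 3)` gives one part `True` to two parts `False`, while a float such as `0.3` with `max_denominator=10` is first reduced to the fraction 3/10.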
```python
def stochastic_round(x,
                     /,
                     *,
                     max_denominator: int | None = None) -> 'icepool.Die[int]':
    """Randomly rounds a value up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise, producing a `Die` with up to two outcomes.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    integer_part = math.floor(x)
    fractional_part = x - integer_part
    return integer_part + coin(fractional_part,
                               max_denominator=max_denominator)
```
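Because the two outcomes are weighted by the fractional part, stochastic rounding preserves the expected value exactly. A plain-Python sketch of the documented distribution (`stochastic_round_dist` is a hypothetical helper, not part of icepool):

```python
import math
from fractions import Fraction

def stochastic_round_dist(x) -> dict[int, Fraction]:
    # Round up with probability x - floor(x), down otherwise.
    lo = math.floor(x)
    frac = Fraction(x) - lo
    return {lo: 1 - frac, lo + 1: frac}

dist = stochastic_round_dist(2.25)
# The mean of the two-outcome distribution equals the input exactly.
mean = sum(outcome * p for outcome, p in dist.items())
```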
```python
def one_hot(sides: int, /) -> 'icepool.Die[tuple[bool, ...]]':
    """A `Die` with `Vector` outcomes with one element set to `True` uniformly at random and the rest `False`.

    This is an easy (if somewhat expensive) way of representing how many dice
    in a pool rolled each number. For example, the outcomes of `10 @ one_hot(6)`
    are the `(ones, twos, threes, fours, fives, sixes)` rolled in 10d6.
    """
    data = []
    for i in range(sides):
        outcome = [False] * sides
        outcome[i] = True
        data.append(icepool.Vector(outcome))
    return icepool.Die(data)
```
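Each outcome of `one_hot(sides)` is a length-`sides` vector with exactly one `True`; summing several such rolls elementwise yields per-face counts, which is what `n @ one_hot(sides)` computes over all `n`-roll sequences. A plain-Python sketch, with tuples standing in for `icepool.Vector`:

```python
def one_hot_outcomes(sides: int) -> list[tuple[bool, ...]]:
    # One outcome per face, each with a single True at that face's index.
    return [tuple(i == j for j in range(sides)) for i in range(sides)]

# Three d3 rolls: a 1, a 3, and another 1.
rolls = [(True, False, False), (False, False, True), (True, False, False)]
# Elementwise sum counts how many dice showed each face.
counts = tuple(sum(col) for col in zip(*rolls))
```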
```python
class Outcome(Hashable, Protocol[T_contra]):
    """Protocol to attempt to verify that outcome types are hashable and sortable.

    Far from foolproof, e.g. it cannot enforce total ordering.
    """

    def __lt__(self, other: T_contra) -> bool:
        ...
```
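A custom outcome type only needs to be hashable and comparable. A minimal sketch using a hypothetical `Card` type: `frozen=True` makes instances hashable, and `order=True` generates `__lt__`, satisfying the protocol's requirements:

```python
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Card:
    # Fields compare lexicographically in declaration order.
    rank: int
    suit: str
```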
```python
class Die(Population[T_co], MaybeHashKeyed):
    """Sampling with replacement. Quantities represent weights.

    Dice are immutable. Methods do not modify the `Die` in-place;
    rather they return a `Die` representing the result.

    It's also possible to have "empty" dice with no outcomes at all,
    though these have little use other than being sentinel values.
    """

    _data: Counts[T_co]

    @property
    def _new_type(self) -> type:
        return Die

    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'T_co | Die[T_co] | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
            of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Several methods and functions forward **kwargs to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.

        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                    quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                * For convenience, `tuple`s containing `Population`s will be
                    `tupleize`d into a `Population` of `tuple`s.
                    This does not apply to subclasses of `tuple`s such as `namedtuple`
                    or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                    The quantity assigned to a `Die` is shared among its
                    outcomes. The total denominator will be scaled up if
                    necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                    `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.
        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count, again_end)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)

    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Die[T_co]':
        """Creates a new `Die` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    # Defined separately from the superclass to help typing.
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)

    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)

    # Basic access.

    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome, /) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        """The number of outcomes."""
        return len(self._data)

    def __contains__(self, outcome) -> bool:
        return outcome in self._data

    # Quantity management.

    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common denominator."""
        return icepool.Die(self._data.simplify())

    # Rerolls and other outcome management.
```
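`binary_operator` pairs every outcome of one die with every outcome of the other and multiplies their quantities, i.e. an independent convolution. A plain-Python sketch over quantity mappings, using `+` on two d6 to produce the familiar 2d6 distribution:

```python
from collections import defaultdict
from itertools import product

d6 = {face: 1 for face in range(1, 7)}

# Every pair of outcomes contributes op(a, b) with quantity qa * qb.
data: dict[int, int] = defaultdict(int)
for (a, qa), (b, qb) in product(d6.items(), d6.items()):
    data[a + b] += qa * qb
```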
267 268 def reroll(self, 269 outcomes: Callable[..., bool] | Collection[T_co] | None = None, 270 /, 271 *, 272 star: bool | None = None, 273 depth: int | Literal['inf']) -> 'Die[T_co]': 274 """Rerolls the given outcomes. 275 276 Args: 277 outcomes: Selects which outcomes to reroll. Options: 278 * A collection of outcomes to reroll. 279 * A callable that takes an outcome and returns `True` if it 280 should be rerolled. 281 * If not provided, the min outcome will be rerolled. 282 star: Whether outcomes should be unpacked into separate arguments 283 before sending them to a callable `which`. 284 If not provided, this will be guessed based on the function 285 signature. 286 depth: The maximum number of times to reroll. 287 If `None`, rerolls an unlimited number of times. 288 289 Returns: 290 A `Die` representing the reroll. 291 If the reroll would never terminate, the result has no outcomes. 292 """ 293 294 if outcomes is None: 295 outcome_set = {self.min_outcome()} 296 else: 297 outcome_set = self._select_outcomes(outcomes, star) 298 299 if depth == 'inf': 300 data = { 301 outcome: quantity 302 for outcome, quantity in self.items() 303 if outcome not in outcome_set 304 } 305 elif depth < 0: 306 raise ValueError('reroll depth cannot be negative.') 307 else: 308 total_reroll_quantity = sum(quantity 309 for outcome, quantity in self.items() 310 if outcome in outcome_set) 311 total_stop_quantity = self.denominator() - total_reroll_quantity 312 rerollable_factor = total_reroll_quantity**depth 313 stop_factor = (self.denominator()**(depth + 1) - rerollable_factor 314 * total_reroll_quantity) // total_stop_quantity 315 data = { 316 outcome: (rerollable_factor * 317 quantity if outcome in outcome_set else stop_factor * 318 quantity) 319 for outcome, quantity in self.items() 320 } 321 return icepool.Die(data) 322 323 def filter(self, 324 outcomes: Callable[..., bool] | Collection[T_co], 325 /, 326 *, 327 star: bool | None = None, 328 depth: int | Literal['inf']) -> 
'Die[T_co]': 329 """Rerolls until getting one of the given outcomes. 330 331 Essentially the complement of `reroll()`. 332 333 Args: 334 outcomes: Selects which outcomes to reroll until. Options: 335 * A callable that takes an outcome and returns `True` if it 336 should be accepted. 337 * A collection of outcomes to reroll until. 338 star: Whether outcomes should be unpacked into separate arguments 339 before sending them to a callable `which`. 340 If not provided, this will be guessed based on the function 341 signature. 342 depth: The maximum number of times to reroll. 343 If `None`, rerolls an unlimited number of times. 344 345 Returns: 346 A `Die` representing the reroll. 347 If the reroll would never terminate, the result has no outcomes. 348 """ 349 350 if callable(outcomes): 351 if star is None: 352 star = infer_star(outcomes) 353 if star: 354 355 not_outcomes = { 356 outcome 357 for outcome in self.outcomes() 358 if not outcomes(*outcome) # type: ignore 359 } 360 else: 361 not_outcomes = { 362 outcome 363 for outcome in self.outcomes() if not outcomes(outcome) 364 } 365 else: 366 not_outcomes = { 367 not_outcome 368 for not_outcome in self.outcomes() 369 if not_outcome not in outcomes 370 } 371 return self.reroll(not_outcomes, depth=depth) 372 373 def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]': 374 """Truncates the outcomes of this `Die` to the given range. 375 376 The endpoints are included in the result if applicable. 377 If one of the arguments is not provided, that side will not be truncated. 378 379 This effectively rerolls outcomes outside the given range. 380 If instead you want to replace those outcomes with the nearest endpoint, 381 use `clip()`. 382 383 Not to be confused with `trunc(die)`, which performs integer truncation 384 on each outcome. 
385 """ 386 if min_outcome is not None: 387 start = bisect.bisect_left(self.outcomes(), min_outcome) 388 else: 389 start = None 390 if max_outcome is not None: 391 stop = bisect.bisect_right(self.outcomes(), max_outcome) 392 else: 393 stop = None 394 data = {k: v for k, v in self.items()[start:stop]} 395 return icepool.Die(data) 396 397 def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]': 398 """Clips the outcomes of this `Die` to the given values. 399 400 The endpoints are included in the result if applicable. 401 If one of the arguments is not provided, that side will not be clipped. 402 403 This is not the same as rerolling outcomes beyond this range; 404 the outcome is simply adjusted to fit within the range. 405 This will typically cause some quantity to bunch up at the endpoint(s). 406 If you want to reroll outcomes beyond this range, use `truncate()`. 407 """ 408 data: MutableMapping[Any, int] = defaultdict(int) 409 for outcome, quantity in self.items(): 410 if min_outcome is not None and outcome <= min_outcome: 411 data[min_outcome] += quantity 412 elif max_outcome is not None and outcome >= max_outcome: 413 data[max_outcome] += quantity 414 else: 415 data[outcome] += quantity 416 return icepool.Die(data) 417 418 @cached_property 419 def _popped_min(self) -> tuple['Die[T_co]', int]: 420 die = Die._new_raw(self._data.remove_min()) 421 return die, self.quantities()[0] 422 423 def _pop_min(self) -> tuple['Die[T_co]', int]: 424 """A `Die` with the min outcome removed, and the quantity of the removed outcome. 425 426 Raises: 427 IndexError: If this `Die` has no outcome to pop. 428 """ 429 return self._popped_min 430 431 @cached_property 432 def _popped_max(self) -> tuple['Die[T_co]', int]: 433 die = Die._new_raw(self._data.remove_max()) 434 return die, self.quantities()[-1] 435 436 def _pop_max(self) -> tuple['Die[T_co]', int]: 437 """A `Die` with the max outcome removed, and the quantity of the removed outcome. 
438 439 Raises: 440 IndexError: If this `Die` has no outcome to pop. 441 """ 442 return self._popped_max 443 444 # Processes. 445 @overload 446 def map( 447 self, 448 repl: 449 'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]', 450 /, 451 *extra_args, 452 star: bool | None = None, 453 repeat: None = None, 454 again_count: int | None = None, 455 again_depth: int | None = None, 456 again_end: 'U | Die[U] | icepool.RerollType | None' = None, 457 **kwargs) -> 'Die[U]': 458 ... 459 460 @overload 461 def map( 462 self, 463 repl: 464 'Callable[..., T_co | Die[T_co] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType | icepool.AgainExpression]', 465 /, 466 *extra_args, 467 star: bool | None = None, 468 repeat: int | Literal['inf'], 469 **kwargs) -> 'Die[T_co]': 470 ... 471 472 def map( 473 self, 474 repl: 475 'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]', 476 /, 477 *extra_args, 478 star: bool | None = None, 479 repeat: int | Literal['inf'] | None = None, 480 again_count: int | None = None, 481 again_depth: int | None = None, 482 again_end: 'U | Die[U] | icepool.RerollType | None' = None, 483 **kwargs) -> 'Die[U]': 484 """Maps outcomes of the `Die` to other outcomes. 485 486 This is also useful for representing processes. 487 488 As `icepool.map(repl, self, ...)`. 
489 """ 490 return icepool.map( 491 repl, 492 self, 493 *extra_args, 494 star=star, 495 repeat=repeat, # type:ignore 496 again_count=again_count, 497 again_depth=again_depth, 498 again_end=again_end, 499 **kwargs) # type:ignore 500 501 def map_and_time( 502 self, 503 repl: 504 'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]', 505 /, 506 *extra_args, 507 star: bool | None = None, 508 repeat: int, 509 **kwargs) -> 'Die[tuple[T_co, int]]': 510 """Repeatedly map outcomes of the state to other outcomes, while also 511 counting timesteps. 512 513 This is useful for representing processes. 514 515 As `map_and_time(repl, self, ...)`. 516 """ 517 return icepool.map_and_time(repl, 518 self, 519 *extra_args, 520 star=star, 521 repeat=repeat, 522 **kwargs) 523 524 def mean_time_to_absorb( 525 self, 526 repl: 527 'Callable[..., T_co | icepool.Die[T_co] | icepool.RerollType] | Mapping[Any, T_co | icepool.Die[T_co] | icepool.RerollType]', 528 /, 529 *extra_args, 530 star: bool | None = None, 531 **kwargs) -> Fraction: 532 """EXPERIMENTAL: The mean time for the process to reach an absorbing state. 533 534 As `mean_time_to_absorb(repl, self, ...)`. 535 """ 536 return icepool.mean_time_to_absorb(repl, 537 self, 538 *extra_args, 539 star=star, 540 **kwargs) 541 542 def time_to_sum(self: 'Die[int]', 543 target: int, 544 /, 545 max_time: int | None = None, 546 dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]': 547 """The number of rolls until the cumulative sum is greater or equal to the target. 548 549 Args: 550 target: The number to stop at once reached. 551 max_time: The maximum number of rolls to run. 552 If the sum is not reached, the outcome is determined by `dnf`. 553 dnf: What time to assign in cases where the target was not reached 554 in `max_time`. If not provided, this is set to `max_time`. 555 `dnf=icepool.Reroll` will remove this case from the result, 556 effectively rerolling it. 
557 """ 558 if target <= 0: 559 return Die([0]) 560 561 if max_time is None: 562 if self.min_outcome() <= 0: 563 raise ValueError( 564 'max_time must be provided if not all outcomes are positive.' 565 ) 566 max_time = (target + self.min_outcome() - 1) // self.min_outcome() 567 568 if dnf is None: 569 dnf = max_time 570 571 def step(total, roll): 572 return min(total + roll, target) 573 574 result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(step, 575 self, 576 repeat=max_time) 577 578 def get_time(total, time): 579 if total < target: 580 return dnf 581 else: 582 return time 583 584 return result.map(get_time) 585 586 @cached_property 587 def _mean_time_to_sum_cache(self) -> list[Fraction]: 588 return [Fraction(0)] 589 590 def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction: 591 """The mean number of rolls until the cumulative sum is greater or equal to the target. 592 593 Args: 594 target: The target sum. 595 596 Raises: 597 ValueError: If `self` has negative outcomes. 598 ZeroDivisionError: If `self.mean() == 0`. 599 """ 600 target = max(target, 0) 601 602 if target < len(self._mean_time_to_sum_cache): 603 return self._mean_time_to_sum_cache[target] 604 605 if self.min_outcome() < 0: 606 raise ValueError( 607 'mean_time_to_sum does not handle negative outcomes.') 608 time_per_effect = Fraction(self.denominator(), 609 self.denominator() - self.quantity(0)) 610 611 for i in range(len(self._mean_time_to_sum_cache), target + 1): 612 result = time_per_effect + self.reroll([ 613 0 614 ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean() 615 self._mean_time_to_sum_cache.append(result) 616 617 return result 618 619 def explode(self, 620 outcomes: Collection[T_co] | Callable[..., bool] | None = None, 621 /, 622 *, 623 star: bool | None = None, 624 depth: int = 9, 625 end=None) -> 'Die[T_co]': 626 """Causes outcomes to be rolled again and added to the total. 627 628 Args: 629 outcomes: Which outcomes to explode. 
                Options:
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                  should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `outcomes`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of additional dice to roll, not counting
                the initial roll.
                If not supplied, a default value will be used.
            end: Once `depth` is reached, further explosions will be treated
                as this value. By default, a zero value will be used.
                `icepool.Reroll` will make one extra final roll, rerolling until
                a non-exploding outcome is reached.
        """
        if outcomes is None:
            outcome_set = {self.max_outcome()}
        else:
            outcome_set = self._select_outcomes(outcomes, star)

        if depth < 0:
            raise ValueError('depth cannot be negative.')
        elif depth == 0:
            return self

        def map_final(outcome):
            if outcome in outcome_set:
                return outcome + icepool.Again
            else:
                return outcome

        return self.map(map_final, again_depth=depth, again_end=end)

    def if_else(
            self,
            outcome_if_true: U | 'Die[U]',
            outcome_if_false: U | 'Die[U]',
            *,
            again_count: int | None = None,
            again_depth: int | None = None,
            again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Ternary conditional operator.

        This replaces truthy outcomes with the first argument and falsy
        outcomes with the second argument.

        Args:
            again_count, again_depth, again_end: Forwarded to the final die
                constructor.
        """
        return self.map(lambda x: bool(x)).map(
            {
                True: outcome_if_true,
                False: outcome_if_false
            },
            again_count=again_count,
            again_depth=again_depth,
            again_end=again_end)

    def is_in(self, outcomes: Container[T_co], /) -> 'Die[bool]':
        """A die that returns `True` iff the roll of the die is contained in the target."""
        return self.map(lambda x: x in outcomes)

    def count(self, rolls: int, outcomes: Container[T_co], /) -> 'Die[int]':
        """Roll this die a number of times and count how many are in the target."""
        return rolls @ self.is_in(outcomes)

    # Pools and sums.

    @cached_property
    def _sum_cache(self) -> MutableMapping[int, 'Die']:
        return {}

    def _sum_all(self, rolls: int, /) -> 'Die':
        """Roll this `Die` `rolls` times and sum the results.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.

        If `rolls` is negative, roll the `Die` `abs(rolls)` times and negate
        the result.

        If you instead want to replace tuple (or other sequence) outcomes with
        their sum, use `die.map(sum)`.
        """
        if rolls in self._sum_cache:
            return self._sum_cache[rolls]

        if rolls < 0:
            result = -self._sum_all(-rolls)
        elif rolls == 0:
            result = self.zero().simplify()
        elif rolls == 1:
            result = self
        else:
            # In addition to working similarly to reduce(), this seems to
            # perform better than binary split.
            result = self._sum_all(rolls - 1) + self

        self._sum_cache[rolls] = result
        return result

    def __matmul__(self: 'Die[int]', other) -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)

        data: MutableMapping[int, Any] = defaultdict(int)

        max_abs_die_count = max(abs(self.min_outcome()),
                                abs(self.max_outcome()))
        for die_count, die_count_quantity in self.items():
            factor = other.denominator()**(max_abs_die_count - abs(die_count))
            subresult = other._sum_all(die_count)
            for outcome, subresult_quantity in subresult.items():
                data[outcome] += subresult_quantity * die_count_quantity * factor

        return icepool.Die(data)

    def __rmatmul__(self, other: 'int | Die[int]') -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.__matmul__(self)

    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
        """Possible sequences produced by rolling this die a number of times.

        This is extremely expensive computationally. If possible, use `reduce()`
        instead; if you don't care about order, `Die.pool()` is better.
        """
        return icepool.cartesian_product(*(self for _ in range(rolls)),
                                         outcome_type=tuple)  # type: ignore

    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
        """Creates a `Pool` from this `Die`.

        You might subscript the pool immediately afterwards, e.g.
        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
        lowest of 5d6.

        Args:
            rolls: The number of copies of this `Die` to put in the pool.
                Or, a sequence of one `int` per die acting as
                `keep_tuple`.
                Note that `...` cannot be used in the
                argument to this method, as the argument determines the size of
                the pool.
        """
        if isinstance(rolls, int):
            return icepool.Pool({self: rolls})
        else:
            pool_size = len(rolls)
            # Haven't dealt with narrowing the return type.
            return icepool.Pool({self: pool_size})[rolls]  # type: ignore

    @overload
    def keep(self, rolls: Sequence[int], /) -> 'Die':
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: A sequence of `int` specifying how many times to count each
                element in ascending order.
        """

    @overload
    def keep(self, rolls: int,
             index: slice | Sequence[int | EllipsisType] | int, /):
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: The number of dice to roll.
            index: One of the following:
                * An `int`. This will count only the roll at the specified
                  index. In this case, the result is a `Die` rather than a
                  generator.
                * A `slice`. The selected dice are counted once each.
                * A sequence of one `int` for each `Die`.
                  Each roll is counted that many times, which could be multiple
                  or negative times.

                  Up to one `...` (`Ellipsis`) may be used.
                  `...` will be replaced with a number of zero
                  counts depending on the `rolls`.
                  This number may be "negative" if more `int`s are provided than
                  `rolls`. Specifically:

                  * If `index` is shorter than `rolls`, `...`
                    acts as enough zero counts to make up the difference.
                    E.g. `(1, ..., 1)` on five dice would act as
                    `(1, 0, 0, 0, 1)`.
                  * If `index` has length equal to `rolls`, `...` has no effect.
                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
                  * If `index` is longer than `rolls` and `...` is on one side,
                    elements will be dropped from `index` on the side with `...`.
                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                  * If `index` is longer than `rolls` and `...`
                    is in the middle, the counts will be as the sum of two
                    one-sided `...`.
                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
                    If `rolls` was 1 this would have the -1 and 1 cancel each
                    other out.
        """

    def keep(self,
             rolls: int | Sequence[int],
             index: slice | Sequence[int | EllipsisType] | int | None = None,
             /) -> 'Die':
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: The number of dice to roll.
            index: One of the following:
                * An `int`. This will count only the roll at the specified
                  index. In this case, the result is a `Die` rather than a
                  generator.
                * A `slice`. The selected dice are counted once each.
                * A sequence of `int`s with length equal to `rolls`.
                  Each roll is counted that many times, which could be multiple
                  or negative times.

                  Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
                  the `rolls` argument may be omitted.

                  `...` will be replaced with a number of zero counts in order
                  to make up any missing elements compared to `rolls`.
                  This number may be "negative" if more `int`s are provided than
                  `rolls`. Specifically:

                  * If `index` is shorter than `rolls`, `...`
                    acts as enough zero counts to make up the difference.
                    E.g. `(1, ..., 1)` on five dice would act as
                    `(1, 0, 0, 0, 1)`.
                  * If `index` has length equal to `rolls`, `...` has no effect.
                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
                  * If `index` is longer than `rolls` and `...` is on one side,
                    elements will be dropped from `index` on the side with `...`.
                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                  * If `index` is longer than `rolls` and `...`
                    is in the middle, the counts will be as the sum of two
                    one-sided `...`.
                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
                    If `rolls` was 1 this would have the -1 and 1 cancel each
                    other out.
        """
        if isinstance(rolls, int):
            if index is None:
                raise ValueError(
                    'If the number of rolls is an integer, an index argument must be provided.'
                )
            if isinstance(index, int):
                return self.pool(rolls).keep(index)
            else:
                return self.pool(rolls).keep(index).sum()  # type: ignore
        else:
            if index is not None:
                raise ValueError('Only one index sequence can be given.')
            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore

    def lowest(self,
               rolls: int,
               /,
               keep: int | None = None,
               drop: int | None = None) -> 'Die':
        """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll. All dice will have the same
                outcomes as `self`.
            keep, drop: These arguments work together:
                * If neither are provided, the single lowest die will be taken.
                * If only `keep` is provided, the `keep` lowest dice will be
                  summed.
                * If only `drop` is provided, the `drop` lowest dice will be
                  dropped and the rest will be summed.
                * If both are provided, `drop` lowest dice will be dropped,
                  then the next `keep` lowest dice will be summed.

        Returns:
            A `Die` representing the probability distribution of the sum.
        """
        index = lowest_slice(keep, drop)
        canonical = canonical_slice(index, rolls)
        if canonical.start == 0 and canonical.stop == 1:
            return self._lowest_single(rolls)
        # Expression evaluators are difficult to type.
        return self.pool(rolls)[index].sum()  # type: ignore

    def _lowest_single(self, rolls: int, /) -> 'Die':
        """Roll this die several times and keep the lowest."""
        if rolls == 0:
            return self.zero().simplify()
        return icepool.from_cumulative(
            self.outcomes(), [x**rolls for x in self.quantities('>=')],
            reverse=True)

    def highest(self,
                rolls: int,
                /,
                keep: int | None = None,
                drop: int | None = None) -> 'Die[T_co]':
        """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll.
            keep, drop: These arguments work together:
                * If neither are provided, the single highest die will be taken.
                * If only `keep` is provided, the `keep` highest dice will be
                  summed.
                * If only `drop` is provided, the `drop` highest dice will be
                  dropped and the rest will be summed.
                * If both are provided, `drop` highest dice will be dropped,
                  then the next `keep` highest dice will be summed.

        Returns:
            A `Die` representing the probability distribution of the sum.
        """
        index = highest_slice(keep, drop)
        canonical = canonical_slice(index, rolls)
        if canonical.start == rolls - 1 and canonical.stop == rolls:
            return self._highest_single(rolls)
        # Expression evaluators are difficult to type.
        return self.pool(rolls)[index].sum()  # type: ignore

    def _highest_single(self, rolls: int, /) -> 'Die[T_co]':
        """Roll this die several times and keep the highest."""
        if rolls == 0:
            return self.zero().simplify()
        return icepool.from_cumulative(
            self.outcomes(), [x**rolls for x in self.quantities('<=')])

    def middle(
            self,
            rolls: int,
            /,
            keep: int = 1,
            *,
            tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
        """Roll several of this `Die` and sum the sorted results in the middle.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll.
            keep: The number of outcomes to sum. If this is greater than the
                current keep_size, all are kept.
            tie: What to do if `keep` is odd but the current keep_size
                is even, or vice versa.
                * 'error' (default): Raises `IndexError`.
                * 'high': The higher outcome is taken.
                * 'low': The lower outcome is taken.
        """
        # Expression evaluators are difficult to type.
        return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore

    def map_to_pool(
            self,
            repl:
        'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
            /,
            *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
            star: bool | None = None,
            **kwargs) -> 'icepool.MultisetExpression[U]':
        """EXPERIMENTAL: Maps outcomes of this `Die` to `Pool`s, creating a `MultisetGenerator`.

        As `icepool.map_to_pool(repl, self, ...)`.

        If no argument is provided, the outcomes will be used to construct a
        mixture of pools directly, similar to the inverse of `pool.expand()`.
        Note that this is not particularly efficient since it does not make
        much use of dynamic programming.
        """
        if repl is None:
            repl = lambda x: x
        return icepool.map_to_pool(repl,
                                   self,
                                   *extra_args,
                                   star=star,
                                   **kwargs)

    def explode_to_pool(self,
                        rolls: int = 1,
                        outcomes: Collection[T_co] | Callable[..., bool]
                        | None = None,
                        /,
                        *,
                        star: bool | None = None,
                        depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
        """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

        Args:
            rolls: The number of initial dice.
            outcomes: Which outcomes to explode. Options:
                * A single outcome to explode.
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                  should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `outcomes`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum depth of explosions for an individual die.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.
        """
        if depth == 0:
            return self.pool(rolls)
        if outcomes is None:
            explode_set = {self.max_outcome()}
        else:
            explode_set = self._select_outcomes(outcomes, star)
        if not explode_set:
            return self.pool(rolls)
        explode: 'Die[T_co]'
        not_explode: 'Die[T_co]'
        explode, not_explode = self.split(explode_set)

        single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
            int)
        for i in range(depth + 1):
            weight = explode.denominator()**i * self.denominator()**(
                depth - i) * not_explode.denominator()
            single_data[icepool.Vector((i, 1))] += weight
        single_data[icepool.Vector(
            (depth + 1, 0))] += explode.denominator()**(depth + 1)

        single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
        count_die = rolls @ single_count_die

        return count_die.map_to_pool(
            lambda x, nx: [explode] * x + [not_explode] * nx)

    def reroll_to_pool(
            self,
            rolls: int,
            outcomes: Callable[..., bool] | Collection[T_co] | None = None,
            /,
            *,
            max_rerolls: int | Literal['inf'],
            star: bool | None = None,
            depth: int | Literal['inf'] = 1,
            mode: Literal['random', 'low', 'high', 'drop'] = 'random'
    ) -> 'icepool.MultisetExpression[T_co]':
        """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

        Each die can only be rerolled once (effectively `depth=1`), and no more
        than `max_rerolls` dice may be rerolled.

        Args:
            rolls: How many dice in the pool.
            outcomes: Selects which outcomes are eligible to be rerolled.
                Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                  could be rerolled.
                * If not provided, the single minimum outcome will be rerolled.
            max_rerolls: The maximum total number of rerolls.
                If `max_rerolls == 'inf'`, then this is the same as
                `self.reroll(outcomes, star=star, depth=depth).pool(rolls)`.
            depth: EXTRA EXPERIMENTAL: The maximum depth of rerolls.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `outcomes`.
                If not provided, this will be guessed based on the function
                signature.
            mode: How dice are selected for rerolling if there are more
                eligible dice than `max_rerolls`. Options:
                * `'random'` (default): Eligible dice will be chosen uniformly
                  at random.
                * `'low'`: The lowest eligible dice will be rerolled.
                * `'high'`: The highest eligible dice will be rerolled.
                * `'drop'`: All dice that ended up on an outcome selected by
                  `outcomes` will be dropped. This includes both dice that
                  rolled into `outcomes` initially and were not rerolled, and
                  dice that were rerolled but rolled into `outcomes` again.
                  This can be considerably more efficient than the other modes.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.
        """
        if max_rerolls == 'inf':
            return self.reroll(outcomes, star=star, depth=depth).pool(rolls)

        if outcomes is None:
            rerollable_set = {self.min_outcome()}
        else:
            rerollable_set = self._select_outcomes(outcomes, star)
        if not rerollable_set:
            return self.pool(rolls)

        rerollable_die: 'Die[T_co]'
        not_rerollable_die: 'Die[T_co]'
        rerollable_die, not_rerollable_die = self.split(rerollable_set)
        single_is_rerollable = icepool.coin(rerollable_die.denominator(),
                                            self.denominator())

        if depth == 'inf':
            depth = max_rerolls

        def step(rerollable, rerolls_left):
            """Advances one step of rerolling if there are enough rerolls left to cover all rerollable dice.
            Returns:
                The number of dice showing rerollable outcomes and the number
                of remaining rerolls.
            """
            if rerollable == 0:
                return 0, 0
            if rerolls_left < rerollable:
                return rerollable, rerolls_left

            return icepool.tupleize(rerollable @ single_is_rerollable,
                                    rerolls_left - rerollable)

        initial_state = icepool.tupleize(rolls @ single_is_rerollable,
                                         max_rerolls)
        mid_pool_composition: Die[tuple[int, int]]
        mid_pool_composition = icepool.map(step,
                                           initial_state,
                                           star=True,
                                           repeat=depth - 1)

        def final_step(rerollable, rerolls_left):
            """Performs the final reroll, which might not have enough rerolls to cover all rerollable dice.

            Returns:
                The number of dice that had a rerollable outcome,
                the number of dice that were rerolled due to max_rerolls,
                and the number of rerolled dice that landed on a rerollable
                outcome again.
            """
            rerolled = min(rerollable, rerolls_left)

            return icepool.tupleize(rerollable, rerolled,
                                    rerolled @ single_is_rerollable)

        pool_composition: Die[tuple[int, int, int]] = mid_pool_composition.map(
            final_step, star=True)

        denominator = self.denominator()**(rolls + max_rerolls)
        pool_composition = pool_composition.multiply_to_denominator(
            denominator)

        def make_pool(rerollable, rerolled, rerolled_to_rerollable):
            rerolls_ran_out = rerollable - rerolled
            not_rerollable = rolls - rerolls_ran_out - rerolled_to_rerollable
            common = rerollable_die.pool(
                rerolled_to_rerollable) + not_rerollable_die.pool(
                    not_rerollable)
            match mode:
                case 'random':
                    return common + rerollable_die.pool(rerolls_ran_out)
                case 'low':
                    return common + rerollable_die.pool(rerollable).highest(
                        rerolls_ran_out)
                case 'high':
                    return common + rerollable_die.pool(rerollable).lowest(
                        rerolls_ran_out)
                case 'drop':
                    return not_rerollable_die.pool(not_rerollable)
                case _:
                    raise ValueError(
                        f"Invalid mode '{mode}'. Allowed values are 'random', 'low', 'high', 'drop'."
                    )

        return pool_composition.map_to_pool(make_pool, star=True)

    # Unary operators.

    def __neg__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.neg)

    def __pos__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.pos)

    def __invert__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.invert)

    def abs(self) -> 'Die[T_co]':
        return self.unary_operator(operator.abs)

    __abs__ = abs

    def round(self, ndigits: int | None = None) -> 'Die':
        return self.unary_operator(round, ndigits)

    __round__ = round

    def stochastic_round(self,
                         *,
                         max_denominator: int | None = None) -> 'Die[int]':
        """Randomly rounds outcomes up or down to the nearest integer, weighted by the two distances.

        Specifically, rounds `x` up with probability `x - floor(x)` and down
        otherwise.

        Args:
            max_denominator: If provided, each rounding will be performed
                using `fractions.Fraction.limit_denominator(max_denominator)`.
                Otherwise, the rounding will be performed without
                `limit_denominator`.
        """
        return self.map(lambda x: icepool.stochastic_round(
            x, max_denominator=max_denominator))

    def trunc(self) -> 'Die':
        return self.unary_operator(math.trunc)

    __trunc__ = trunc

    def floor(self) -> 'Die':
        return self.unary_operator(math.floor)

    __floor__ = floor

    def ceil(self) -> 'Die':
        return self.unary_operator(math.ceil)

    __ceil__ = ceil

    # Binary operators.
    def __add__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.add)

    def __radd__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.add)

    def __sub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.sub)

    def __rsub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.sub)

    def __mul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mul)

    def __rmul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mul)

    def __truediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.truediv)

    def __rtruediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.truediv)

    def __floordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.floordiv)

    def __rfloordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.floordiv)

    def __pow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.pow)

    def __rpow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.pow)

    def __mod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mod)

    def __rmod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mod)

    def __lshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lshift)

    def __rlshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.lshift)

    def __rshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.rshift)

    def __rrshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.rshift)

    def __and__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.and_)

    def __rand__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.and_)

    def __or__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.or_)

    def __ror__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.or_)

    def __xor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.xor)

    def __rxor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.xor)

    # Comparators.
    def __lt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lt)

    def __le__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.le)

    def __ge__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.ge)

    def __gt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.gt)

    # Equality operators. These produce a `DieWithTruth`.

    # The result has a truth value, but is not a bool.
    def __eq__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.eq)._data

        def truth_value_callback() -> bool:
            return self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    # The result has a truth value, but is not a bool.
    def __ne__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.ne)._data

        def truth_value_callback() -> bool:
            return not self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    def cmp(self, other) -> 'Die[int]':
        """A `Die` with outcomes 1, -1, and 0.

        The quantities are equal to the positive outcome of `self > other`,
        `self < other`, and the remainder respectively.
        """
        other = implicit_convert_to_die(other)

        data = {}

        lt = self < other
        if True in lt:
            data[-1] = lt[True]
        eq = self == other
        if True in eq:
            data[0] = eq[True]
        gt = self > other
        if True in gt:
            data[1] = gt[True]

        return Die(data)

    @staticmethod
    def _sign(x) -> int:
        z = Die._zero(x)
        if x > z:
            return 1
        elif x < z:
            return -1
        else:
            return 0

    def sign(self) -> 'Die[int]':
        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.

        Note that for `float`s, +0.0, -0.0, and nan all become 0.
        """
        return self.unary_operator(Die._sign)

    # Equality and hashing.
    def __bool__(self) -> bool:
        raise TypeError(
            'A `Die` only has a truth value if it is the result of == or !=.\n'
            'This could result from trying to use a die in an if-statement,\n'
            'in which case you should use `die.if_else()` instead.\n'
            'Or it could result from trying to use a `Die` inside a tuple or vector outcome,\n'
            'in which case you should use `tupleize()` or `vectorize()`.')

    @cached_property
    def hash_key(self) -> tuple:
        """A tuple that uniquely (as `equals()`) identifies this die.

        Apart from being hashable and totally orderable, this is not guaranteed
        to be in any particular format or have any other properties.
        """
        return Die, tuple(self.items())

    __hash__ = MaybeHashKeyed.__hash__

    def equals(self, other, *, simplify: bool = False) -> bool:
        """`True` iff both dice have the same outcomes and quantities.

        This is `False` if `other` is not a `Die`, even if it would convert
        to an equal `Die`.

        Truth value does NOT matter.

        If one `Die` has a zero-quantity outcome and the other `Die` does not
        contain that outcome, they are treated as unequal by this function.

        The `==` and `!=` operators have a dual purpose; they return a `Die`
        with a truth value determined by this method.
        Only dice returned by these methods have a truth value. The data of
        these dice is lazily evaluated since the caller may only be interested
        in the `Die` value or the truth value.

        Args:
            simplify: If `True`, the dice will be simplified before comparing.
                Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
        """
        if self is other:
            return True

        if not isinstance(other, Die):
            return False

        if simplify:
            return self.simplify().hash_key == other.simplify().hash_key
        else:
            return self.hash_key == other.hash_key

    # Strings.
def __repr__(self) -> str:
    items_string = ', '.join(f'{repr(outcome)}: {weight}'
                             for outcome, weight in self.items())
    return type(self).__qualname__ + '({' + items_string + '})'
Sampling with replacement. Quantities represent weights.
Dice are immutable. Methods do not modify the Die in-place;
rather they return a Die representing the result.
It's also possible to have "empty" dice with no outcomes at all, though these have little use other than being sentinel values.
def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                   **kwargs) -> 'icepool.Die[U]':
    """Performs the unary operation on the outcomes.

    This is used for the standard unary operators
    `-, +, abs, ~, round, trunc, floor, ceil`
    as well as the additional methods
    `zero, bool`.

    This is NOT used for the `[]` operator; when used directly, this is
    interpreted as a `Mapping` operation and returns the count corresponding
    to a given outcome. See `marginals()` for applying the `[]` operator to
    outcomes.

    Returns:
        A `Die` representing the result.

    Raises:
        ValueError: If tuples are of mismatched length.
    """
    return self._unary_operator(op, *args, **kwargs)
def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                    **kwargs) -> 'Die[U]':
    """Performs the operation on pairs of outcomes.

    By the time this is called, the other operand has already been
    converted to a `Die`.

    This is used for the standard binary operators
    `+, -, *, /, //, %, **, <<, >>, &, |, ^`
    and the standard binary comparators
    `<, <=, >=, >, ==, !=, cmp`.

    `==` and `!=` additionally set the truth value of the `Die` according to
    whether the dice themselves are the same or not.

    The `@` operator does NOT use this method directly.
    It rolls the left `Die`, which must have integer outcomes,
    then rolls the right `Die` that many times and sums the outcomes.

    Returns:
        A `Die` representing the result.

    Raises:
        ValueError: If tuples are of mismatched length within one of the
            dice or between the dice.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for (outcome_self,
         quantity_self), (outcome_other,
                          quantity_other) in itertools.product(
                              self.items(), other.items()):
        new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
        data[new_outcome] += quantity_self * quantity_other
    return self._new_type(data)
def simplify(self) -> 'Die[T_co]':
    """Divides all quantities by their greatest common denominator."""
    return icepool.Die(self._data.simplify())
def reroll(self,
           outcomes: Callable[..., bool] | Collection[T_co] | None = None,
           /,
           *,
           star: bool | None = None,
           depth: int | Literal['inf']) -> 'Die[T_co]':
    """Rerolls the given outcomes.

    Args:
        outcomes: Selects which outcomes to reroll. Options:
            * A collection of outcomes to reroll.
            * A callable that takes an outcome and returns `True` if it
                should be rerolled.
            * If not provided, the min outcome will be rerolled.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `outcomes`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `'inf'`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if outcomes is None:
        outcome_set = {self.min_outcome()}
    else:
        outcome_set = self._select_outcomes(outcomes, star)

    if depth == 'inf':
        data = {
            outcome: quantity
            for outcome, quantity in self.items()
            if outcome not in outcome_set
        }
    elif depth < 0:
        raise ValueError('reroll depth cannot be negative.')
    else:
        total_reroll_quantity = sum(quantity
                                    for outcome, quantity in self.items()
                                    if outcome in outcome_set)
        total_stop_quantity = self.denominator() - total_reroll_quantity
        rerollable_factor = total_reroll_quantity**depth
        stop_factor = (self.denominator()**(depth + 1) -
                       rerollable_factor * total_reroll_quantity
                       ) // total_stop_quantity
        data = {
            outcome: (rerollable_factor *
                      quantity if outcome in outcome_set else stop_factor *
                      quantity)
            for outcome, quantity in self.items()
        }
    return icepool.Die(data)
def filter(self,
           outcomes: Callable[..., bool] | Collection[T_co],
           /,
           *,
           star: bool | None = None,
           depth: int | Literal['inf']) -> 'Die[T_co]':
    """Rerolls until getting one of the given outcomes.

    Essentially the complement of `reroll()`.

    Args:
        outcomes: Selects which outcomes to reroll until. Options:
            * A callable that takes an outcome and returns `True` if it
                should be accepted.
            * A collection of outcomes to reroll until.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `outcomes`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `'inf'`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if callable(outcomes):
        if star is None:
            star = infer_star(outcomes)
        if star:
            not_outcomes = {
                outcome
                for outcome in self.outcomes()
                if not outcomes(*outcome)  # type: ignore
            }
        else:
            not_outcomes = {
                outcome
                for outcome in self.outcomes() if not outcomes(outcome)
            }
    else:
        not_outcomes = {
            not_outcome
            for not_outcome in self.outcomes()
            if not_outcome not in outcomes
        }
    return self.reroll(not_outcomes, depth=depth)
def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Truncates the outcomes of this `Die` to the given range.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be truncated.

    This effectively rerolls outcomes outside the given range.
    If instead you want to replace those outcomes with the nearest endpoint,
    use `clip()`.

    Not to be confused with `trunc(die)`, which performs integer truncation
    on each outcome.
    """
    if min_outcome is not None:
        start = bisect.bisect_left(self.outcomes(), min_outcome)
    else:
        start = None
    if max_outcome is not None:
        stop = bisect.bisect_right(self.outcomes(), max_outcome)
    else:
        stop = None
    data = {k: v for k, v in self.items()[start:stop]}
    return icepool.Die(data)
def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Clips the outcomes of this `Die` to the given values.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be clipped.

    This is not the same as rerolling outcomes beyond this range;
    the outcome is simply adjusted to fit within the range.
    This will typically cause some quantity to bunch up at the endpoint(s).
    If you want to reroll outcomes beyond this range, use `truncate()`.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for outcome, quantity in self.items():
        if min_outcome is not None and outcome <= min_outcome:
            data[min_outcome] += quantity
        elif max_outcome is not None and outcome >= max_outcome:
            data[max_outcome] += quantity
        else:
            data[outcome] += quantity
    return icepool.Die(data)
def map(
    self,
    repl:
    'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
    /,
    *extra_args,
    star: bool | None = None,
    repeat: int | Literal['inf'] | None = None,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None,
    **kwargs) -> 'Die[U]':
    """Maps outcomes of the `Die` to other outcomes.

    This is also useful for representing processes.

    As `icepool.map(repl, self, ...)`.
    """
    return icepool.map(
        repl,
        self,
        *extra_args,
        star=star,
        repeat=repeat,  # type:ignore
        again_count=again_count,
        again_depth=again_depth,
        again_end=again_end,
        **kwargs)  # type:ignore
def map_and_time(
    self,
    repl:
    'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
    /,
    *extra_args,
    star: bool | None = None,
    repeat: int,
    **kwargs) -> 'Die[tuple[T_co, int]]':
    """Repeatedly maps outcomes of the state to other outcomes, while also
    counting timesteps.

    This is useful for representing processes.

    As `icepool.map_and_time(repl, self, ...)`.
    """
    return icepool.map_and_time(repl,
                                self,
                                *extra_args,
                                star=star,
                                repeat=repeat,
                                **kwargs)
def mean_time_to_absorb(
    self,
    repl:
    'Callable[..., T_co | icepool.Die[T_co] | icepool.RerollType] | Mapping[Any, T_co | icepool.Die[T_co] | icepool.RerollType]',
    /,
    *extra_args,
    star: bool | None = None,
    **kwargs) -> Fraction:
    """EXPERIMENTAL: The mean time for the process to reach an absorbing state.

    As `icepool.mean_time_to_absorb(repl, self, ...)`.
    """
    return icepool.mean_time_to_absorb(repl,
                                       self,
                                       *extra_args,
                                       star=star,
                                       **kwargs)
def time_to_sum(self: 'Die[int]',
                target: int,
                /,
                max_time: int | None = None,
                dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
    """The number of rolls until the cumulative sum is greater than or equal to the target.

    Args:
        target: The number to stop at once reached.
        max_time: The maximum number of rolls to run.
            If the sum is not reached, the outcome is determined by `dnf`.
        dnf: What time to assign in cases where the target was not reached
            in `max_time`. If not provided, this is set to `max_time`.
            `dnf=icepool.Reroll` will remove this case from the result,
            effectively rerolling it.
    """
    if target <= 0:
        return Die([0])

    if max_time is None:
        if self.min_outcome() <= 0:
            raise ValueError(
                'max_time must be provided if not all outcomes are positive.'
            )
        max_time = (target + self.min_outcome() - 1) // self.min_outcome()

    if dnf is None:
        dnf = max_time

    def step(total, roll):
        return min(total + roll, target)

    result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(step,
                                                           self,
                                                           repeat=max_time)

    def get_time(total, time):
        if total < target:
            return dnf
        else:
            return time

    return result.map(get_time)
def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
    """The mean number of rolls until the cumulative sum is greater than or equal to the target.

    Args:
        target: The target sum.

    Raises:
        ValueError: If `self` has negative outcomes.
        ZeroDivisionError: If `self.mean() == 0`.
    """
    target = max(target, 0)

    if target < len(self._mean_time_to_sum_cache):
        return self._mean_time_to_sum_cache[target]

    if self.min_outcome() < 0:
        raise ValueError(
            'mean_time_to_sum does not handle negative outcomes.')
    time_per_effect = Fraction(self.denominator(),
                               self.denominator() - self.quantity(0))

    for i in range(len(self._mean_time_to_sum_cache), target + 1):
        result = time_per_effect + self.reroll(
            [0],
            depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
        self._mean_time_to_sum_cache.append(result)

    return result
def explode(self,
            outcomes: Collection[T_co] | Callable[..., bool] | None = None,
            /,
            *,
            star: bool | None = None,
            depth: int = 9,
            end=None) -> 'Die[T_co]':
    """Causes outcomes to be rolled again and added to the total.

    Args:
        outcomes: Which outcomes to explode. Options:
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
                should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `outcomes`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of additional dice to roll, not counting
            the initial roll.
            If not supplied, a default value will be used.
        end: Once `depth` is reached, further explosions will be treated
            as this value. By default, a zero value will be used.
            `icepool.Reroll` will make one extra final roll, rerolling until
            a non-exploding outcome is reached.
    """

    if outcomes is None:
        outcome_set = {self.max_outcome()}
    else:
        outcome_set = self._select_outcomes(outcomes, star)

    if depth < 0:
        raise ValueError('depth cannot be negative.')
    elif depth == 0:
        return self

    def map_final(outcome):
        if outcome in outcome_set:
            return outcome + icepool.Again
        else:
            return outcome

    return self.map(map_final, again_depth=depth, again_end=end)
def if_else(
    self,
    outcome_if_true: U | 'Die[U]',
    outcome_if_false: U | 'Die[U]',
    *,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None
) -> 'Die[U]':
    """Ternary conditional operator.

    This replaces truthy outcomes with the first argument and falsy outcomes
    with the second argument.

    Args:
        again_count, again_depth, again_end: Forwarded to the final die
            constructor.
    """
    return self.map(lambda x: bool(x)).map(
        {
            True: outcome_if_true,
            False: outcome_if_false
        },
        again_count=again_count,
        again_depth=again_depth,
        again_end=again_end)
def is_in(self, outcomes: Container[T_co], /) -> 'Die[bool]':
    """A die that returns `True` iff the roll of the die is contained in the target."""
    return self.map(lambda x: x in outcomes)
def count(self, rolls: int, outcomes: Container[T_co], /) -> 'Die[int]':
    """Rolls this die a number of times and counts how many are in the target."""
    return rolls @ self.is_in(outcomes)
def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
    """Possible sequences produced by rolling this die a number of times.

    This is extremely expensive computationally. If possible, use `reduce()`
    instead; if you don't care about order, `Die.pool()` is better.
    """
    return icepool.cartesian_product(*(self for _ in range(rolls)),
                                     outcome_type=tuple)  # type: ignore
def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
    """Creates a `Pool` from this `Die`.

    You might subscript the pool immediately afterwards, e.g.
    `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
    lowest of 5d6.

    Args:
        rolls: The number of copies of this `Die` to put in the pool.
            Or, a sequence of one `int` per die acting as
            `keep_tuple`. Note that `...` cannot be used in the
            argument to this method, as the argument determines the size of
            the pool.
    """
    if isinstance(rolls, int):
        return icepool.Pool({self: rolls})
    else:
        pool_size = len(rolls)
        # Haven't dealt with narrowing return type.
        return icepool.Pool({self: pool_size})[rolls]  # type: ignore
def keep(self,
         rolls: int | Sequence[int],
         index: slice | Sequence[int | EllipsisType] | int | None = None,
         /) -> 'Die':
    """Selects elements after drawing and sorting and sums them.

    Args:
        rolls: The number of dice to roll.
        index: One of the following:
            * An `int`. This will count only the roll at the specified index.
                In this case, the result is a `Die` rather than a generator.
            * A `slice`. The selected dice are counted once each.
            * A sequence of `int`s with length equal to `rolls`.
                Each roll is counted that many times, which could be
                multiple or negative times.

                Up to one `...` (`Ellipsis`) may be used. If no `...` is
                used, the `rolls` argument may be omitted.

                `...` will be replaced with a number of zero counts in order
                to make up any missing elements compared to `rolls`.
                This number may be "negative" if more `int`s are provided
                than `rolls`. Specifically:

                * If `index` is shorter than `rolls`, `...`
                    acts as enough zero counts to make up the difference.
                    E.g. `(1, ..., 1)` on five dice would act as
                    `(1, 0, 0, 0, 1)`.
                * If `index` has length equal to `rolls`, `...` has no
                    effect. E.g. `(1, ..., 1)` on two dice would act as
                    `(1, 1)`.
                * If `index` is longer than `rolls` and `...` is on one
                    side, elements will be dropped from `index` on the side
                    with `...`.
                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                * If `index` is longer than `rolls` and `...`
                    is in the middle, the counts will be as the sum of two
                    one-sided `...`.
                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus
                    `(..., 1)`. If `rolls` was 1 this would have the -1 and
                    1 cancel each other out.
    """
    if isinstance(rolls, int):
        if index is None:
            raise ValueError(
                'If the number of rolls is an integer, an index argument must be provided.'
            )
        if isinstance(index, int):
            return self.pool(rolls).keep(index)
        else:
            return self.pool(rolls).keep(index).sum()  # type: ignore
    else:
        if index is not None:
            raise ValueError('Only one index sequence can be given.')
        return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore
def lowest(self,
           rolls: int,
           /,
           keep: int | None = None,
           drop: int | None = None) -> 'Die':
    """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll. All dice will have the same
            outcomes as `self`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be
                summed.
            * If only `drop` is provided, the `drop` lowest dice will be
                dropped and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
                the next `keep` lowest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = lowest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == 0 and canonical.stop == 1:
        return self._lowest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
def highest(self,
            rolls: int,
            /,
            keep: int | None = None,
            drop: int | None = None) -> 'Die[T_co]':
    """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be
                summed.
            * If only `drop` is provided, the `drop` highest dice will be
                dropped and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped,
                then the next `keep` highest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = highest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == rolls - 1 and canonical.stop == rolls:
        return self._highest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
def middle(
    self,
    rolls: int,
    /,
    keep: int = 1,
    *,
    tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
    """Roll several of this `Die` and sum the sorted results in the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep: The number of outcomes to sum. If this is greater than the
            current keep_size, all are kept.
        tie: What to do if `keep` is odd but the current keep_size
            is even, or vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
    """
    # Expression evaluators are difficult to type.
    return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore
def map_to_pool(
    self,
    repl:
    'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
    /,
    *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
    star: bool | None = None,
    **kwargs) -> 'icepool.MultisetExpression[U]':
    """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.

    As `icepool.map_to_pool(repl, self, ...)`.

    If no argument is provided, the outcomes will be used to construct a
    mixture of pools directly, similar to the inverse of `pool.expand()`.
    Note that this is not particularly efficient since it does not make much
    use of dynamic programming.
    """
    if repl is None:
        repl = lambda x: x
    return icepool.map_to_pool(repl,
                               self,
                               *extra_args,
                               star=star,
                               **kwargs)
def explode_to_pool(self,
                    rolls: int = 1,
                    outcomes: Collection[T_co] | Callable[..., bool]
                    | None = None,
                    /,
                    *,
                    star: bool | None = None,
                    depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
    """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

    Args:
        rolls: The number of initial dice.
        outcomes: Which outcomes to explode. Options:
            * A single outcome to explode.
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
                should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `outcomes`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum depth of explosions for an individual die.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.
    """
    if depth == 0:
        return self.pool(rolls)
    if outcomes is None:
        explode_set = {self.max_outcome()}
    else:
        explode_set = self._select_outcomes(outcomes, star)
    if not explode_set:
        return self.pool(rolls)
    explode: 'Die[T_co]'
    not_explode: 'Die[T_co]'
    explode, not_explode = self.split(explode_set)

    single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
        int)
    for i in range(depth + 1):
        weight = explode.denominator()**i * self.denominator()**(
            depth - i) * not_explode.denominator()
        single_data[icepool.Vector((i, 1))] += weight
    single_data[icepool.Vector(
        (depth + 1, 0))] += explode.denominator()**(depth + 1)

    single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
    count_die = rolls @ single_count_die

    return count_die.map_to_pool(
        lambda x, nx: [explode] * x + [not_explode] * nx)
EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.
Arguments:
- rolls: The number of initial dice.
- outcomes: Which outcomes to explode. Options:
  - A single outcome to explode.
  - A collection of outcomes to explode.
  - A callable that takes an outcome and returns `True` if it should be exploded.
  - If not supplied, the max outcome will explode.
- star: Whether outcomes should be unpacked into separate arguments
  before sending them to a callable `which`.
  If not provided, this will be guessed based on the function signature.
- depth: The maximum depth of explosions for an individual die.
Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note
that this is not technically a `Pool`, though it supports most of the same operations.
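The per-die weight bookkeeping visible in the source can be sketched independently. This is a minimal illustration of the same arithmetic, assuming a single exploding face: the weight for exactly `i` explosions is `e**i * n**(depth - i) * ne`, and the depth cap carries weight `e**(depth + 1)`.

```python
from fractions import Fraction

def explosion_counts(sides=6, depth=9):
    """Distribution of how many times one die explodes on its maximum
    face, with explosions capped at `depth` (sketch, not icepool code)."""
    e, ne, n = 1, sides - 1, sides   # exploding, non-exploding, total faces
    weights = {}
    for i in range(depth + 1):
        # Exactly i explosions, then a non-exploding result.
        weights[i] = e**i * n**(depth - i) * ne
    # The cap: all depth + 1 rolls exploded.
    weights[depth + 1] = e**(depth + 1)
    denom = n**(depth + 1)
    assert sum(weights.values()) == denom   # the weights telescope to n**(depth+1)
    return {i: Fraction(w, denom) for i, w in weights.items()}

probs = explosion_counts(sides=6, depth=2)
# P(no explosion) = 5/6, P(exactly one) = 5/36, P(cap reached) = 1/216
```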
1076 def reroll_to_pool( 1077 self, 1078 rolls: int, 1079 outcomes: Callable[..., bool] | Collection[T_co] | None = None, 1080 /, 1081 *, 1082 max_rerolls: int | Literal['inf'], 1083 star: bool | None = None, 1084 depth: int | Literal['inf'] = 1, 1085 mode: Literal['random', 'low', 'high', 'drop'] = 'random' 1086 ) -> 'icepool.MultisetExpression[T_co]': 1087 """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool. 1088 1089 Each die can only be rerolled once (effectively `depth=1`), and no more 1090 than `max_rerolls` dice may be rerolled. 1091 1092 Args: 1093 rolls: How many dice in the pool. 1094 outcomes: Selects which outcomes are eligible to be rerolled. 1095 Options: 1096 * A collection of outcomes to reroll. 1097 * A callable that takes an outcome and returns `True` if it 1098 could be rerolled. 1099 * If not provided, the single minimum outcome will be rerolled. 1100 max_rerolls: The maximum total number of rerolls. 1101 If `max_rerolls == 'inf'`, then this is the same as 1102 `self.reroll(which, star=star, depth=depth).pool(rolls)`. 1103 depth: EXTRA EXPERIMENTAL: The maximum depth of rerolls. 1104 star: Whether outcomes should be unpacked into separate arguments 1105 before sending them to a callable `which`. 1106 If not provided, this will be guessed based on the function 1107 signature. 1108 mode: How dice are selected for rerolling if there are more eligible 1109 dice than `max_rerolls`. Options: 1110 * `'random'` (default): Eligible dice will be chosen uniformly 1111 at random. 1112 * `'low'`: The lowest eligible dice will be rerolled. 1113 * `'high'`: The highest eligible dice will be rerolled. 1114 * `'drop'`: All dice that ended up on an outcome selected by 1115 `which` will be dropped. This includes both dice that rolled 1116 into `which` initially and were not rerolled, and dice that 1117 were rerolled but rolled into `which` again. This can be 1118 considerably more efficient than the other modes. 
1119 1120 Returns: 1121 A `MultisetGenerator` representing the mixture of `Pool`s. Note 1122 that this is not technically a `Pool`, though it supports most of 1123 the same operations. 1124 """ 1125 if max_rerolls == 'inf': 1126 return self.reroll(outcomes, star=star, depth=depth).pool(rolls) 1127 1128 if outcomes is None: 1129 rerollable_set = {self.min_outcome()} 1130 else: 1131 rerollable_set = self._select_outcomes(outcomes, star) 1132 if not rerollable_set: 1133 return self.pool(rolls) 1134 1135 rerollable_die: 'Die[T_co]' 1136 not_rerollable_die: 'Die[T_co]' 1137 rerollable_die, not_rerollable_die = self.split(rerollable_set) 1138 single_is_rerollable = icepool.coin(rerollable_die.denominator(), 1139 self.denominator()) 1140 1141 if depth == 'inf': 1142 depth = max_rerolls 1143 1144 def step(rerollable, rerolls_left): 1145 """Advances one step of rerolling if there are enough rerolls left to cover all rerollable dice. 1146 1147 Returns: 1148 The number of dice showing rerollable outcomes and the number of remaining rerolls. 1149 """ 1150 if rerollable == 0: 1151 return 0, 0 1152 if rerolls_left < rerollable: 1153 return rerollable, rerolls_left 1154 1155 return icepool.tupleize(rerollable @ single_is_rerollable, 1156 rerolls_left - rerollable) 1157 1158 initial_state = icepool.tupleize(rolls @ single_is_rerollable, 1159 max_rerolls) 1160 mid_pool_composition: Die[tuple[int, int]] 1161 mid_pool_composition = icepool.map(step, 1162 initial_state, 1163 star=True, 1164 repeat=depth - 1) 1165 1166 def final_step(rerollable, rerolls_left): 1167 """Performs the final reroll, which might not have enough rerolls to cover all rerollable dice. 1168 1169 Returns: The number of dice that had a rerollable outcome, 1170 the number of dice that were rerolled due to max_rerolls, 1171 the number of rerolled dice that landed on a rerollable outcome 1172 again. 
1173 """ 1174 rerolled = min(rerollable, rerolls_left) 1175 1176 return icepool.tupleize(rerollable, rerolled, 1177 rerolled @ single_is_rerollable) 1178 1179 pool_composition: Die[tuple[int, int, int]] = mid_pool_composition.map( 1180 final_step, star=True) 1181 1182 denominator = self.denominator()**(rolls + max_rerolls) 1183 pool_composition = pool_composition.multiply_to_denominator( 1184 denominator) 1185 1186 def make_pool(rerollable, rerolled, rerolled_to_rerollable): 1187 rerolls_ran_out = rerollable - rerolled 1188 not_rerollable = rolls - rerolls_ran_out - rerolled_to_rerollable 1189 common = rerollable_die.pool( 1190 rerolled_to_rerollable) + not_rerollable_die.pool( 1191 not_rerollable) 1192 match mode: 1193 case 'random': 1194 return common + rerollable_die.pool(rerolls_ran_out) 1195 case 'low': 1196 return common + rerollable_die.pool(rerollable).highest( 1197 rerolls_ran_out) 1198 case 'high': 1199 return common + rerollable_die.pool(rerollable).lowest( 1200 rerolls_ran_out) 1201 case 'drop': 1202 return not_rerollable_die.pool(not_rerollable) 1203 case _: 1204 raise ValueError( 1205 f"Invalid reroll_priority '{mode}'. Allowed values are 'random', 'low', 'high', 'drop'." 1206 ) 1207 1208 return pool_composition.map_to_pool(make_pool, star=True)
EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.
Each die can only be rerolled once (effectively `depth=1`), and no more
than `max_rerolls` dice may be rerolled.
Arguments:
- rolls: How many dice in the pool.
- outcomes: Selects which outcomes are eligible to be rerolled. Options:
  - A collection of outcomes to reroll.
  - A callable that takes an outcome and returns `True` if it could be rerolled.
  - If not provided, the single minimum outcome will be rerolled.
- max_rerolls: The maximum total number of rerolls.
  If `max_rerolls == 'inf'`, then this is the same as
  `self.reroll(which, star=star, depth=depth).pool(rolls)`.
- depth: EXTRA EXPERIMENTAL: The maximum depth of rerolls.
- star: Whether outcomes should be unpacked into separate arguments
  before sending them to a callable `which`.
  If not provided, this will be guessed based on the function signature.
- mode: How dice are selected for rerolling if there are more eligible
  dice than `max_rerolls`. Options:
  - `'random'` (default): Eligible dice will be chosen uniformly at random.
  - `'low'`: The lowest eligible dice will be rerolled.
  - `'high'`: The highest eligible dice will be rerolled.
  - `'drop'`: All dice that ended up on an outcome selected by `which`
    will be dropped. This includes both dice that rolled into `which`
    initially and were not rerolled, and dice that were rerolled but
    rolled into `which` again. This can be considerably more efficient
    than the other modes.
Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note
that this is not technically a `Pool`, though it supports most of the same operations.
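The shared-budget arithmetic can be sketched for the `depth=1` case with binomial weights (a standalone illustration, not the library's implementation): count how many dice show a rerollable outcome, reroll at most `max_rerolls` of them, and count how many land on a rerollable outcome again.

```python
from fractions import Fraction
from math import comb

def binom(n, k, p):
    # P(k successes in n trials with per-trial probability p).
    return comb(n, k) * p**k * (1 - p)**(n - k)

def rerollable_after(rolls, max_rerolls, p):
    """Distribution of how many dice still show a rerollable outcome
    after one shared-budget reroll pass (depth=1 sketch).

    p is the probability a single die lands on a rerollable outcome.
    """
    dist = {}
    for initial in range(rolls + 1):
        w_initial = binom(rolls, initial, p)
        rerolled = min(initial, max_rerolls)
        stuck = initial - rerolled          # rerolls ran out for these dice
        for again in range(rerolled + 1):   # rerolled dice landing rerollable again
            k = stuck + again
            dist[k] = dist.get(k, Fraction(0)) + w_initial * binom(rerolled, again, p)
    return dist

# 3d6, rerolling 1s, at most 2 rerolls shared across the pool:
dist = rerollable_after(3, 2, Fraction(1, 6))
```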
1231 def stochastic_round(self, 1232 *, 1233 max_denominator: int | None = None) -> 'Die[int]': 1234 """Randomly rounds outcomes up or down to the nearest integer according to the two distances. 1235 1236 Specifically, rounds `x` up with probability `x - floor(x)` and down 1237 otherwise. 1238 1239 Args: 1240 max_denominator: If provided, each rounding will be performed 1241 using `fractions.Fraction.limit_denominator(max_denominator)`. 1242 Otherwise, the rounding will be performed without 1243 `limit_denominator`. 1244 """ 1245 return self.map(lambda x: icepool.stochastic_round( 1246 x, max_denominator=max_denominator))
Randomly rounds outcomes up or down to the nearest integer according to the two distances.
Specifically, rounds `x` up with probability `x - floor(x)` and down
otherwise.
Arguments:
- max_denominator: If provided, each rounding will be performed
  using `fractions.Fraction.limit_denominator(max_denominator)`.
  Otherwise, the rounding will be performed without `limit_denominator`.
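The documented rounding rule can be sketched with `fractions.Fraction` (illustrative only; `stochastic_round_dist` is a hypothetical helper, not the library's function):

```python
from fractions import Fraction
from math import floor

def stochastic_round_dist(x):
    """Distribution of stochastically rounding x: up with probability
    x - floor(x), down otherwise."""
    x = Fraction(x)
    lo = floor(x)
    p_up = x - lo
    if p_up == 0:
        return {lo: Fraction(1)}   # already an integer
    return {lo: 1 - p_up, lo + 1: p_up}

dist = stochastic_round_dist(Fraction(9, 4))   # 2.25
# rounds to 3 with probability 1/4, to 2 with probability 3/4
```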
1465 def cmp(self, other) -> 'Die[int]': 1466 """A `Die` with outcomes 1, -1, and 0. 1467 1468 The quantities are equal to the positive outcome of `self > other`, 1469 `self < other`, and the remainder respectively. 1470 """ 1471 other = implicit_convert_to_die(other) 1472 1473 data = {} 1474 1475 lt = self < other 1476 if True in lt: 1477 data[-1] = lt[True] 1478 eq = self == other 1479 if True in eq: 1480 data[0] = eq[True] 1481 gt = self > other 1482 if True in gt: 1483 data[1] = gt[True] 1484 1485 return Die(data)
A `Die` with outcomes 1, -1, and 0.
The quantities are equal to the positive outcome of `self > other`,
`self < other`, and the remainder respectively.
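The quantities of `cmp` can be reproduced directly from the joint distribution of the two dice (stdlib sketch; `cmp_die` is a hypothetical helper name):

```python
from collections import Counter
from itertools import product

def cmp_die(a, b):
    """Quantities of 1, -1, and 0 for comparing two independent dice,
    mirroring the documented behavior of Die.cmp."""
    data = Counter()
    for (x, wx), (y, wy) in product(a.items(), b.items()):
        sign = (x > y) - (x < y)   # 1, -1, or 0
        data[sign] += wx * wy
    return data

d6 = Counter(range(1, 7))
result = cmp_die(d6, d6)
# 15/36 each for greater and less, 6/36 for a tie
```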
29class Population(ABC, Expandable[T_co], Mapping[Any, int]): 30 """A mapping from outcomes to `int` quantities. 31 32 Outcomes with each instance must be hashable and totally orderable. 33 34 Subclasses include `Die` and `Deck`. 35 """ 36 37 # Abstract methods. 38 39 @property 40 @abstractmethod 41 def _new_type(self) -> type: 42 """The type to use when constructing a new instance.""" 43 44 @abstractmethod 45 def keys(self) -> CountsKeysView[T_co]: 46 """The outcomes within the population in sorted order.""" 47 48 @abstractmethod 49 def values(self) -> CountsValuesView: 50 """The quantities within the population in outcome order.""" 51 52 @abstractmethod 53 def items(self) -> CountsItemsView[T_co]: 54 """The (outcome, quantity)s of the population in sorted order.""" 55 56 @property 57 def _items_for_cartesian_product(self) -> Sequence[tuple[T_co, int]]: 58 return self.items() 59 60 def _unary_operator(self, op: Callable, *args, **kwargs): 61 data: MutableMapping[Any, int] = defaultdict(int) 62 for outcome, quantity in self.items(): 63 new_outcome = op(outcome, *args, **kwargs) 64 data[new_outcome] += quantity 65 return self._new_type(data) 66 67 # Outcomes. 68 69 def outcomes(self) -> CountsKeysView[T_co]: 70 """The outcomes of the mapping in ascending order. 71 72 These are also the `keys` of the mapping. 73 Prefer to use the name `outcomes`. 74 """ 75 return self.keys() 76 77 @cached_property 78 def _common_outcome_length(self) -> int | None: 79 result = None 80 for outcome in self.outcomes(): 81 if isinstance(outcome, Mapping): 82 return None 83 elif isinstance(outcome, Sized): 84 if result is None: 85 result = len(outcome) 86 elif len(outcome) != result: 87 return None 88 return result 89 90 def common_outcome_length(self) -> int | None: 91 """The common length of all outcomes. 92 93 If outcomes have no lengths or different lengths, the result is `None`. 
94 """ 95 return self._common_outcome_length 96 97 def is_empty(self) -> bool: 98 """`True` iff this population has no outcomes. """ 99 return len(self) == 0 100 101 def min_outcome(self) -> T_co: 102 """The least outcome.""" 103 return self.outcomes()[0] 104 105 def max_outcome(self) -> T_co: 106 """The greatest outcome.""" 107 return self.outcomes()[-1] 108 109 def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome, 110 /) -> T_co | None: 111 """The nearest outcome in this population fitting the comparison. 112 113 Args: 114 comparison: The comparison which the result must fit. For example, 115 '<=' would find the greatest outcome that is not greater than 116 the argument. 117 outcome: The outcome to compare against. 118 119 Returns: 120 The nearest outcome fitting the comparison, or `None` if there is 121 no such outcome. 122 """ 123 match comparison: 124 case '<=': 125 if outcome in self: 126 return outcome 127 index = bisect.bisect_right(self.outcomes(), outcome) - 1 128 if index < 0: 129 return None 130 return self.outcomes()[index] 131 case '<': 132 index = bisect.bisect_left(self.outcomes(), outcome) - 1 133 if index < 0: 134 return None 135 return self.outcomes()[index] 136 case '>=': 137 if outcome in self: 138 return outcome 139 index = bisect.bisect_left(self.outcomes(), outcome) 140 if index >= len(self): 141 return None 142 return self.outcomes()[index] 143 case '>': 144 index = bisect.bisect_right(self.outcomes(), outcome) 145 if index >= len(self): 146 return None 147 return self.outcomes()[index] 148 case _: 149 raise ValueError(f'Invalid comparison {comparison}') 150 151 @staticmethod 152 def _zero(x): 153 return x * 0 154 155 def zero(self: C) -> C: 156 """Zeros all outcomes of this population. 157 158 This is done by multiplying all outcomes by `0`. 159 160 The result will have the same denominator. 161 162 Raises: 163 ValueError: If the zeros did not resolve to a single outcome. 
164 """ 165 result = self._unary_operator(Population._zero) 166 if len(result) != 1: 167 raise ValueError('zero() did not resolve to a single outcome.') 168 return result 169 170 def zero_outcome(self) -> T_co: 171 """A zero-outcome for this population. 172 173 E.g. `0` for a `Population` whose outcomes are `int`s. 174 """ 175 return self.zero().outcomes()[0] 176 177 # Quantities. 178 179 @overload 180 def quantity(self, outcome: Hashable, /) -> int: 181 """The quantity of a single outcome.""" 182 183 @overload 184 def quantity(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 185 outcome: Hashable, /) -> int: 186 """The total quantity fitting a comparison to a single outcome.""" 187 188 def quantity(self, 189 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 190 | Hashable, 191 outcome: Hashable | None = None, 192 /) -> int: 193 """The quantity of a single outcome. 194 195 A comparison can be provided, in which case this returns the total 196 quantity fitting the comparison. 197 198 Args: 199 comparison: The comparison to use. This can be omitted, in which 200 case it is treated as '=='. 201 outcome: The outcome to query. 
202 """ 203 if outcome is None: 204 outcome = comparison 205 comparison = '==' 206 else: 207 comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'], 208 comparison) 209 210 match comparison: 211 case '==': 212 return self.get(outcome, 0) 213 case '!=': 214 return self.denominator() - self.get(outcome, 0) 215 case '<=' | '<': 216 threshold = self.nearest(comparison, outcome) 217 if threshold is None: 218 return 0 219 else: 220 return self._cumulative_quantities[threshold] 221 case '>=': 222 return self.denominator() - self.quantity('<', outcome) 223 case '>': 224 return self.denominator() - self.quantity('<=', outcome) 225 case _: 226 raise ValueError(f'Invalid comparison {comparison}') 227 228 def quantity_where(self, 229 which: Callable[..., bool], 230 /, 231 star: bool | None = None) -> int: 232 """The quantity fulfilling a boolean condition.""" 233 if star is None: 234 star = infer_star(which) 235 if star: 236 return sum(quantity # type: ignore 237 for outcome, quantity in self.items() 238 if which(*outcome)) # type: ignore 239 else: 240 return sum(quantity for outcome, quantity in self.items() 241 if which(outcome)) 242 243 @overload 244 def quantities(self, /) -> CountsValuesView: 245 """All quantities in sorted order.""" 246 247 @overload 248 def quantities(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 249 /) -> Sequence[int]: 250 """The total quantities fitting the comparison for each outcome in sorted order. 251 252 For example, '<=' gives the CDF. 253 254 Args: 255 comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. 256 May be omitted, in which case equality `'=='` is used. 257 outcome: The outcome to compare to. 258 percent: If set, the result will be a percentage expressed as a 259 `float`. 260 """ 261 262 def quantities(self, 263 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 264 | None = None, 265 /) -> CountsValuesView | Sequence[int]: 266 """The quantities of the mapping in sorted order. 
267 268 For example, '<=' gives the CDF. 269 270 Args: 271 comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. 272 May be omitted, in which case equality `'=='` is used. 273 """ 274 if comparison is None: 275 comparison = '==' 276 277 match comparison: 278 case '==': 279 return self.values() 280 case '<=': 281 return tuple(itertools.accumulate(self.values())) 282 case '>=': 283 return tuple( 284 itertools.accumulate(self.values()[:-1], 285 operator.sub, 286 initial=self.denominator())) 287 case '!=': 288 return tuple(self.denominator() - q for q in self.values()) 289 case '<': 290 return tuple(self.denominator() - q 291 for q in self.quantities('>=')) 292 case '>': 293 return tuple(self.denominator() - q 294 for q in self.quantities('<=')) 295 case _: 296 raise ValueError(f'Invalid comparison {comparison}') 297 298 @cached_property 299 def _cumulative_quantities(self) -> Mapping[T_co, int]: 300 result = {} 301 cdf = 0 302 for outcome, quantity in self.items(): 303 cdf += quantity 304 result[outcome] = cdf 305 return result 306 307 @cached_property 308 def _denominator(self) -> int: 309 return sum(self.values()) 310 311 def denominator(self) -> int: 312 """The sum of all quantities (e.g. weights or duplicates). 313 314 For the number of unique outcomes, use `len()`. 315 """ 316 return self._denominator 317 318 def multiply_quantities(self: C, scale: int, /) -> C: 319 """Multiplies all quantities by an integer.""" 320 if scale == 1: 321 return self 322 data = { 323 outcome: quantity * scale 324 for outcome, quantity in self.items() 325 } 326 return self._new_type(data) 327 328 def divide_quantities(self: C, divisor: int, /) -> C: 329 """Divides all quantities by an integer, rounding down. 330 331 Resulting zero quantities are dropped. 
332 """ 333 if divisor == 0: 334 return self 335 data = { 336 outcome: quantity // divisor 337 for outcome, quantity in self.items() if quantity >= divisor 338 } 339 return self._new_type(data) 340 341 def modulo_quantities(self: C, divisor: int, /) -> C: 342 """Modulus of all quantities with an integer.""" 343 data = { 344 outcome: quantity % divisor 345 for outcome, quantity in self.items() 346 } 347 return self._new_type(data) 348 349 def pad_to_denominator(self: C, denominator: int, /, 350 outcome: Hashable) -> C: 351 """Changes the denominator to a target number by changing the quantity of a specified outcome. 352 353 Args: 354 `target`: The denominator of the result. 355 `outcome`: The outcome whose quantity will be adjusted. 356 357 Returns: 358 A `Population` like `self` but with the quantity of `outcome` 359 adjusted so that the overall denominator is equal to `target`. 360 If the denominator is reduced to zero, it will be removed. 361 362 Raises: 363 `ValueError` if this would require the quantity of the specified 364 outcome to be negative. 365 """ 366 adjustment = denominator - self.denominator() 367 data = {outcome: quantity for outcome, quantity in self.items()} 368 new_quantity = data.get(outcome, 0) + adjustment 369 if new_quantity > 0: 370 data[outcome] = new_quantity 371 elif new_quantity == 0: 372 del data[outcome] 373 else: 374 raise ValueError( 375 f'Padding to denominator of {denominator} would require a negative quantity of {new_quantity} for {outcome}' 376 ) 377 return self._new_type(data) 378 379 def multiply_to_denominator(self: C, denominator: int, /) -> C: 380 """Multiplies all quantities to reach the target denominiator. 381 382 Raises: 383 ValueError if this cannot be achieved using an integer scaling. 384 """ 385 if denominator % self.denominator(): 386 raise ValueError( 387 'Target denominator is not an integer factor of the current denominator.' 
388 ) 389 return self.multiply_quantities(denominator // self.denominator()) 390 391 def append(self: C, outcome, quantity: int = 1, /) -> C: 392 """This population with an outcome appended. 393 394 Args: 395 outcome: The outcome to append. 396 quantity: The quantity of the outcome to append. Can be negative, 397 which removes quantity (but not below zero). 398 """ 399 data = Counter(self) 400 data[outcome] = max(data[outcome] + quantity, 0) 401 return self._new_type(data) 402 403 def remove(self: C, outcome, quantity: int | None = None, /) -> C: 404 """This population with an outcome removed. 405 406 Args: 407 outcome: The outcome to append. 408 quantity: The quantity of the outcome to remove. If not set, all 409 quantity of that outcome is removed. Can be negative, which adds 410 quantity instead. 411 """ 412 if quantity is None: 413 data = Counter(self) 414 data[outcome] = 0 415 return self._new_type(data) 416 else: 417 return self.append(outcome, -quantity) 418 419 # Probabilities. 420 421 @overload 422 def probability(self, outcome: Hashable, /, *, 423 percent: Literal[False]) -> Fraction: 424 """The probability of a single outcome, or 0.0 if not present.""" 425 426 @overload 427 def probability(self, outcome: Hashable, /, *, 428 percent: Literal[True]) -> float: 429 """The probability of a single outcome, or 0.0 if not present.""" 430 431 @overload 432 def probability(self, outcome: Hashable, /) -> Fraction: 433 """The probability of a single outcome, or 0.0 if not present.""" 434 435 @overload 436 def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=', 437 '>'], outcome: Hashable, /, *, 438 percent: Literal[False]) -> Fraction: 439 """The total probability of outcomes fitting a comparison.""" 440 441 @overload 442 def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=', 443 '>'], outcome: Hashable, /, *, 444 percent: Literal[True]) -> float: 445 """The total probability of outcomes fitting a comparison.""" 446 447 @overload 448 
def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=', 449 '>'], outcome: Hashable, 450 /) -> Fraction: 451 """The total probability of outcomes fitting a comparison.""" 452 453 def probability(self, 454 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 455 | Hashable, 456 outcome: Hashable | None = None, 457 /, 458 *, 459 percent: bool = False) -> Fraction | float: 460 """The total probability of outcomes fitting a comparison. 461 462 Args: 463 comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. 464 May be omitted, in which case equality `'=='` is used. 465 outcome: The outcome to compare to. 466 percent: If set, the result will be a percentage expressed as a 467 `float`. Otherwise, the result is a `Fraction`. 468 """ 469 if outcome is None: 470 outcome = comparison 471 comparison = '==' 472 else: 473 comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'], 474 comparison) 475 result = Fraction(self.quantity(comparison, outcome), 476 self.denominator()) 477 return result * 100.0 if percent else result 478 479 @overload 480 def probability_where(self, which: Callable[..., 481 bool], /, star: bool | None, 482 percent: Literal[False]) -> Fraction: 483 ... 484 485 @overload 486 def probability_where(self, which: Callable[..., bool], /, 487 star: bool | None, percent: Literal[True]) -> float: 488 ... 489 490 @overload 491 def probability_where(self, which: Callable[..., 492 bool], /, star: bool | None, 493 percent: bool) -> Fraction | float: 494 ... 
495 496 def probability_where(self, 497 which: Callable[..., bool], 498 /, 499 star: bool | None = None, 500 percent: bool = False) -> Fraction | float: 501 """The probability fulfilling a boolean condition.""" 502 numerator = self.quantity_where(which, star=star) 503 if percent: 504 return 100.0 * numerator / self.denominator() 505 else: 506 return Fraction(numerator, self.denominator()) 507 508 @overload 509 def probabilities(self, /, *, 510 percent: Literal[False]) -> Sequence[Fraction]: 511 """All probabilities in sorted order.""" 512 513 @overload 514 def probabilities(self, /, *, percent: Literal[True]) -> Sequence[float]: 515 """All probabilities in sorted order.""" 516 517 @overload 518 def probabilities(self, /) -> Sequence[Fraction]: 519 """All probabilities in sorted order.""" 520 521 @overload 522 def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=', 523 '>'], /, *, 524 percent: Literal[False]) -> Sequence[Fraction]: 525 """The total probabilities fitting the comparison for each outcome in sorted order. 526 527 For example, '<=' gives the CDF. 528 """ 529 530 @overload 531 def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=', 532 '>'], /, *, 533 percent: Literal[True]) -> Sequence[float]: 534 """The total probabilities fitting the comparison for each outcome in sorted order. 535 536 For example, '<=' gives the CDF. 537 """ 538 539 @overload 540 def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=', 541 '>'], /) -> Sequence[Fraction]: 542 """The total probabilities fitting the comparison for each outcome in sorted order. 543 544 For example, '<=' gives the CDF. 545 """ 546 547 def probabilities( 548 self, 549 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 550 | None = None, 551 /, 552 *, 553 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 554 """The total probabilities fitting the comparison for each outcome in sorted order. 555 556 For example, '<=' gives the CDF. 
557 558 Args: 559 comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. 560 May be omitted, in which case equality `'=='` is used. 561 percent: If set, the result will be a percentage expressed as a 562 `float`. Otherwise, the result is a `Fraction`. 563 """ 564 if comparison is None: 565 comparison = '==' 566 567 result = tuple( 568 Fraction(q, self.denominator()) 569 for q in self.quantities(comparison)) 570 571 if percent: 572 return tuple(100.0 * x for x in result) 573 else: 574 return result 575 576 # Scalar statistics. 577 578 def mode(self) -> tuple: 579 """A tuple containing the most common outcome(s) of the population. 580 581 These are sorted from lowest to highest. 582 """ 583 return tuple(outcome for outcome, quantity in self.items() 584 if quantity == self.modal_quantity()) 585 586 def modal_quantity(self) -> int: 587 """The highest quantity of any single outcome. """ 588 return max(self.quantities()) 589 590 def kolmogorov_smirnov(self, other: 'Population') -> Fraction: 591 """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """ 592 outcomes = icepool.sorted_union(self, other) 593 return max( 594 abs( 595 self.probability('<=', outcome) - 596 other.probability('<=', outcome)) for outcome in outcomes) 597 598 def cramer_von_mises(self, other: 'Population') -> Fraction: 599 """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """ 600 outcomes = icepool.sorted_union(self, other) 601 return sum(((self.probability('<=', outcome) - 602 other.probability('<=', outcome))**2 603 for outcome in outcomes), 604 start=Fraction(0, 1)) 605 606 def median(self): 607 """The median, taking the mean in case of a tie. 608 609 This will fail if the outcomes do not support division; 610 in this case, use `median_low` or `median_high` instead. 
611 """ 612 return self.quantile(1, 2) 613 614 def median_low(self) -> T_co: 615 """The median, taking the lower in case of a tie.""" 616 return self.quantile_low(1, 2) 617 618 def median_high(self) -> T_co: 619 """The median, taking the higher in case of a tie.""" 620 return self.quantile_high(1, 2) 621 622 def quantile(self, n: int, d: int = 100): 623 """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie. 624 625 This will fail if the outcomes do not support addition and division; 626 in this case, use `quantile_low` or `quantile_high` instead. 627 """ 628 # Should support addition and division. 629 return (self.quantile_low(n, d) + 630 self.quantile_high(n, d)) / 2 # type: ignore 631 632 def quantile_low(self, n: int, d: int = 100) -> T_co: 633 """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.""" 634 index = bisect.bisect_left(self.quantities('<='), 635 (n * self.denominator() + d - 1) // d) 636 if index >= len(self): 637 return self.max_outcome() 638 return self.outcomes()[index] 639 640 def quantile_high(self, n: int, d: int = 100) -> T_co: 641 """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.""" 642 index = bisect.bisect_right(self.quantities('<='), 643 n * self.denominator() // d) 644 if index >= len(self): 645 return self.max_outcome() 646 return self.outcomes()[index] 647 648 @overload 649 def mean(self: 'Population[numbers.Rational]') -> Fraction: 650 ... 651 652 @overload 653 def mean(self: 'Population[float]') -> float: 654 ... 655 656 def mean( 657 self: 'Population[numbers.Rational] | Population[float]' 658 ) -> Fraction | float: 659 return try_fraction( 660 sum(outcome * quantity for outcome, quantity in self.items()), 661 self.denominator()) 662 663 @overload 664 def variance(self: 'Population[numbers.Rational]') -> Fraction: 665 ... 666 667 @overload 668 def variance(self: 'Population[float]') -> float: 669 ... 
def variance(
        self: 'Population[numbers.Rational] | Population[float]'
) -> Fraction | float:
    """This is the population variance, not the sample variance."""
    mean = self.mean()
    mean_of_squares = try_fraction(
        sum(quantity * outcome**2 for outcome, quantity in self.items()),
        self.denominator())
    return mean_of_squares - mean * mean

def standard_deviation(
        self: 'Population[numbers.Rational] | Population[float]') -> float:
    return math.sqrt(self.variance())

sd = standard_deviation

def standardized_moment(
        self: 'Population[numbers.Rational] | Population[float]',
        k: int) -> float:
    sd = self.standard_deviation()
    mean = self.mean()
    ev = sum(p * (outcome - mean)**k  # type: ignore
             for outcome, p in zip(self.outcomes(), self.probabilities()))
    return ev / (sd**k)

def skewness(
        self: 'Population[numbers.Rational] | Population[float]') -> float:
    return self.standardized_moment(3)

def excess_kurtosis(
        self: 'Population[numbers.Rational] | Population[float]') -> float:
    return self.standardized_moment(4) - 3.0

def entropy(self, base: float = 2.0) -> float:
    """The entropy of a random sample from this population.

    Args:
        base: The logarithm base to use. Default is 2.0, which gives the
            entropy in bits.
    """
    return -sum(p * math.log(p, base)
                for p in self.probabilities() if p > 0.0)

# Joint statistics.

class _Marginals(Generic[C]):
    """Helper class for implementing `marginals()`."""

    _population: C

    def __init__(self, population, /):
        self._population = population

    def __len__(self) -> int:
        """The minimum len() of all outcomes."""
        return min(len(x) for x in self._population.outcomes())

    def __getitem__(self, dims: int | slice, /):
        """Marginalizes the given dimensions."""
        return self._population._unary_operator(operator.getitem, dims)

    def __iter__(self) -> Iterator:
        for i in range(len(self)):
            yield self[i]

    def __getattr__(self, key: str):
        if key[0] == '_':
            raise AttributeError(key)
        return self._population._unary_operator(operator.attrgetter(key))

@property
def marginals(self: C) -> _Marginals[C]:
    """A property that applies the `[]` operator to outcomes.

    For example, `population.marginals[:2]` will marginalize the first two
    elements of sequence outcomes.

    Attributes that do not start with an underscore will also be forwarded.
    For example, `population.marginals.x` will marginalize the `x` attribute
    from e.g. `namedtuple` outcomes.
    """
    return Population._Marginals(self)

@overload
def covariance(self: 'Population[tuple[numbers.Rational, ...]]', i: int,
               j: int) -> Fraction:
    ...

@overload
def covariance(self: 'Population[tuple[float, ...]]', i: int,
               j: int) -> float:
    ...

def covariance(
    self:
    'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
    i: int, j: int) -> Fraction | float:
    mean_i = self.marginals[i].mean()
    mean_j = self.marginals[j].mean()
    return try_fraction(
        sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity
            for outcome, quantity in self.items()), self.denominator())

def correlation(
    self:
    'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
    i: int, j: int) -> float:
    sd_i = self.marginals[i].standard_deviation()
    sd_j = self.marginals[j].standard_deviation()
    return self.covariance(i, j) / (sd_i * sd_j)

# Transformations.

def _select_outcomes(self, which: Callable[..., bool] | Collection[T_co],
                     star: bool | None) -> Set[T_co]:
    """Returns a set of outcomes of self that fit the given condition."""
    if callable(which):
        if star is None:
            star = infer_star(which)
        if star:
            # Need TypeVarTuple to check this.
            return {
                outcome
                for outcome in self.outcomes()
                if which(*outcome)  # type: ignore
            }
        else:
            return {
                outcome
                for outcome in self.outcomes() if which(outcome)
            }
    else:
        # Collection.
        return set(outcome for outcome in self.outcomes()
                   if outcome in which)

def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C:
    """Converts the outcomes of this population to a one-hot representation.

    Args:
        outcomes: If provided, each outcome will be mapped to a `Vector`
            where the element at `outcomes.index(outcome)` is set to `True`
            and the rest to `False`, or all `False` if the outcome is not
            in `outcomes`.
            If not provided, `self.outcomes()` is used.
    """
    if outcomes is None:
        outcomes = self.outcomes()

    data: MutableMapping[Vector[bool], int] = defaultdict(int)
    for outcome, quantity in zip(self.outcomes(), self.quantities()):
        value = [False] * len(outcomes)
        if outcome in outcomes:
            value[outcomes.index(outcome)] = True
        data[Vector(value)] += quantity
    return self._new_type(data)

def split(self,
          outcomes: Callable[..., bool] | Collection[T_co],
          /,
          *,
          star: bool | None = None) -> tuple[C, C]:
    """Splits this population into one containing selected items and another containing the rest.

    The sum of the denominators of the results is equal to the denominator
    of this population.

    If you want to split more than two ways, use `Population.group_by()`.

    Args:
        outcomes: Selects which outcomes to select. Options:
            * A callable that takes an outcome and returns `True` if it
                should be selected.
            * A collection of outcomes to select.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.

    Returns:
        A population consisting of the outcomes that were selected by
        `which`, and a population consisting of the unselected outcomes.
    """
    outcome_set = self._select_outcomes(outcomes, star)

    selected = {}
    not_selected = {}
    for outcome, count in self.items():
        if outcome in outcome_set:
            selected[outcome] = count
        else:
            not_selected[outcome] = count

    return self._new_type(selected), self._new_type(not_selected)

class _GroupBy(Generic[C]):
    """Helper class for implementing `group_by()`."""

    _population: C

    def __init__(self, population, /):
        self._population = population

    def __call__(self,
                 key_map: Callable[..., U] | Mapping[T_co, U],
                 /,
                 *,
                 star: bool | None = None) -> Mapping[U, C]:
        if callable(key_map):
            if star is None:
                star = infer_star(key_map)
            if star:
                key_function = lambda o: key_map(*o)
            else:
                key_function = key_map
        else:
            key_function = lambda o: key_map.get(o, o)

        result_datas: MutableMapping[U, MutableMapping[Any, int]] = {}
        outcome: Any
        for outcome, quantity in self._population.items():
            key = key_function(outcome)
            if key not in result_datas:
                result_datas[key] = defaultdict(int)
            result_datas[key][outcome] += quantity
        return {
            k: self._population._new_type(v)
            for k, v in result_datas.items()
        }

    def __getitem__(self, dims: int | slice, /):
        """Marginalizes the given dimensions."""
        return self(lambda x: x[dims])

    def __getattr__(self, key: str):
        if key[0] == '_':
            raise AttributeError(key)
        return self(lambda x: getattr(x, key))

@property
def group_by(self: C) -> _GroupBy[C]:
    """A method-like property that splits this population into sub-populations based on a key function.

    The sum of the denominators of the results is equal to the denominator
    of this population.

    This can be useful when using the law of total probability.

    Example: `d10.group_by(lambda x: x % 3)` is
    ```python
    {
        0: Die([3, 6, 9]),
        1: Die([1, 4, 7, 10]),
        2: Die([2, 5, 8]),
    }
    ```

    You can also use brackets to group by indexes or slices; or attributes
    to group by those. Example:

    ```python
    Die([
        'aardvark',
        'alligator',
        'asp',
        'blowfish',
        'cat',
        'crocodile',
    ]).group_by[0]
    ```

    produces

    ```python
    {
        'a': Die(['aardvark', 'alligator', 'asp']),
        'b': Die(['blowfish']),
        'c': Die(['cat', 'crocodile']),
    }
    ```

    Args:
        key_map: A function or mapping that takes outcomes and produces the
            key of the corresponding outcome in the result. If this is
            a Mapping, outcomes not in the mapping are their own key.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `key_map`.
            If not provided, this will be guessed based on the function
            signature.
    """
    return Population._GroupBy(self)

def sample(self) -> T_co:
    """A single random sample from this population.

    Note that this is always "with replacement" even for `Deck` since
    instances are immutable.

    This uses the standard `random` package and is not cryptographically
    secure.
    """
    # We don't use random.choices since that is based on floats rather than ints.
    r = random.randrange(self.denominator())
    index = bisect.bisect_right(self.quantities('<='), r)
    return self.outcomes()[index]

def format(self, format_spec: str, /, **kwargs) -> str:
    """Formats this mapping as a string.

    `format_spec` should start with the output format,
    which can be:
    * `md` for Markdown (default)
    * `bbcode` for BBCode
    * `csv` for comma-separated values
    * `html` for HTML

    After this, you may optionally add a `:` followed by a series of
    requested columns. Allowed columns are:

    * `o`: Outcomes.
    * `*o`: Outcomes, unpacked if applicable.
    * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
    * `p==`, `p<=`, `p>=`: Probabilities (0-1).
    * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
    * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".

    Columns may optionally be separated using `|` characters.

    The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
    columns are the outcomes (unpacked if applicable) the quantities, and
    the probabilities. The quantities are omitted from the default columns
    if any individual quantity is 10**30 or greater.
    """
    if not self.is_empty() and self.modal_quantity() < 10**30:
        default_column_spec = '*oq==%=='
    else:
        default_column_spec = '*o%=='
    if len(format_spec) == 0:
        format_spec = 'md:' + default_column_spec

    format_spec = format_spec.replace('|', '')

    parts = format_spec.split(':')

    if len(parts) == 1:
        output_format = parts[0]
        col_spec = default_column_spec
    elif len(parts) == 2:
        output_format = parts[0]
        col_spec = parts[1]
    else:
        raise ValueError('format_spec has too many colons.')

    match output_format:
        case 'md':
            return icepool.population.format.markdown(self, col_spec)
        case 'bbcode':
            return icepool.population.format.bbcode(self, col_spec)
        case 'csv':
            return icepool.population.format.csv(self, col_spec, **kwargs)
        case 'html':
            return icepool.population.format.html(self, col_spec)
        case _:
            raise ValueError(
                f"Unsupported output format '{output_format}'")

def __format__(self, format_spec: str, /) -> str:
    return self.format(format_spec)

def __str__(self) -> str:
    return f'{self}'
A mapping from outcomes to `int` quantities.

Outcomes within each instance must be hashable and totally orderable.
@abstractmethod
def keys(self) -> CountsKeysView[T_co]:
    """The outcomes within the population in sorted order."""
The outcomes within the population in sorted order.
@abstractmethod
def values(self) -> CountsValuesView:
    """The quantities within the population in outcome order."""
The quantities within the population in outcome order.
@abstractmethod
def items(self) -> CountsItemsView[T_co]:
    """The (outcome, quantity)s of the population in sorted order."""
The (outcome, quantity)s of the population in sorted order.
def common_outcome_length(self) -> int | None:
    """The common length of all outcomes.

    If outcomes have no lengths or different lengths, the result is `None`.
    """
    return self._common_outcome_length
The common length of all outcomes.
If outcomes have no lengths or different lengths, the result is `None`.
def is_empty(self) -> bool:
    """`True` iff this population has no outcomes."""
    return len(self) == 0
`True` iff this population has no outcomes.
def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
            /) -> T_co | None:
    """The nearest outcome in this population fitting the comparison.

    Args:
        comparison: The comparison which the result must fit. For example,
            '<=' would find the greatest outcome that is not greater than
            the argument.
        outcome: The outcome to compare against.

    Returns:
        The nearest outcome fitting the comparison, or `None` if there is
        no such outcome.
    """
    match comparison:
        case '<=':
            if outcome in self:
                return outcome
            index = bisect.bisect_right(self.outcomes(), outcome) - 1
            if index < 0:
                return None
            return self.outcomes()[index]
        case '<':
            index = bisect.bisect_left(self.outcomes(), outcome) - 1
            if index < 0:
                return None
            return self.outcomes()[index]
        case '>=':
            if outcome in self:
                return outcome
            index = bisect.bisect_left(self.outcomes(), outcome)
            if index >= len(self):
                return None
            return self.outcomes()[index]
        case '>':
            index = bisect.bisect_right(self.outcomes(), outcome)
            if index >= len(self):
                return None
            return self.outcomes()[index]
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
The nearest outcome in this population fitting the comparison.
Arguments:
- comparison: The comparison which the result must fit. For example, '<=' would find the greatest outcome that is not greater than the argument.
- outcome: The outcome to compare against.
Returns:
The nearest outcome fitting the comparison, or `None` if there is no such outcome.
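The bisect-based lookup above can be sketched with the standard library alone. This is an illustration of the documented `'<='` and `'>'` branches, not icepool API; `outcomes` stands in for a population's sorted outcome sequence.

```python
import bisect

def nearest_le(outcomes, target):
    """Greatest outcome <= target, or None (mirrors the '<=' branch)."""
    index = bisect.bisect_right(outcomes, target) - 1
    return outcomes[index] if index >= 0 else None

def nearest_gt(outcomes, target):
    """Least outcome > target, or None (mirrors the '>' branch)."""
    index = bisect.bisect_right(outcomes, target)
    return outcomes[index] if index < len(outcomes) else None

outcomes = [1, 2, 3, 4, 5, 6]  # e.g. the outcomes of a d6
print(nearest_le(outcomes, 7))  # 6
print(nearest_gt(outcomes, 6))  # None
```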
def zero(self: C) -> C:
    """Zeros all outcomes of this population.

    This is done by multiplying all outcomes by `0`.

    The result will have the same denominator.

    Raises:
        ValueError: If the zeros did not resolve to a single outcome.
    """
    result = self._unary_operator(Population._zero)
    if len(result) != 1:
        raise ValueError('zero() did not resolve to a single outcome.')
    return result
Zeros all outcomes of this population.
This is done by multiplying all outcomes by `0`.
The result will have the same denominator.
Raises:
- ValueError: If the zeros did not resolve to a single outcome.
def zero_outcome(self) -> T_co:
    """A zero-outcome for this population.

    E.g. `0` for a `Population` whose outcomes are `int`s.
    """
    return self.zero().outcomes()[0]
A zero-outcome for this population.
E.g. `0` for a `Population` whose outcomes are `int`s.
def quantity(self,
             comparison: Literal['==', '!=', '<=', '<', '>=', '>']
             | Hashable,
             outcome: Hashable | None = None,
             /) -> int:
    """The quantity of a single outcome.

    A comparison can be provided, in which case this returns the total
    quantity fitting the comparison.

    Args:
        comparison: The comparison to use. This can be omitted, in which
            case it is treated as '=='.
        outcome: The outcome to query.
    """
    if outcome is None:
        outcome = comparison
        comparison = '=='
    else:
        comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                          comparison)

    match comparison:
        case '==':
            return self.get(outcome, 0)
        case '!=':
            return self.denominator() - self.get(outcome, 0)
        case '<=' | '<':
            threshold = self.nearest(comparison, outcome)
            if threshold is None:
                return 0
            else:
                return self._cumulative_quantities[threshold]
        case '>=':
            return self.denominator() - self.quantity('<', outcome)
        case '>':
            return self.denominator() - self.quantity('<=', outcome)
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
The quantity of a single outcome.
A comparison can be provided, in which case this returns the total quantity fitting the comparison.
Arguments:
- comparison: The comparison to use. This can be omitted, in which case it is treated as '=='.
- outcome: The outcome to query.
def quantity_where(self,
                   which: Callable[..., bool],
                   /,
                   star: bool | None = None) -> int:
    """The quantity fulfilling a boolean condition."""
    if star is None:
        star = infer_star(which)
    if star:
        return sum(quantity  # type: ignore
                   for outcome, quantity in self.items()
                   if which(*outcome))  # type: ignore
    else:
        return sum(quantity for outcome, quantity in self.items()
                   if which(outcome))
The quantity fulfilling a boolean condition.
def quantities(self,
               comparison: Literal['==', '!=', '<=', '<', '>=', '>']
               | None = None,
               /) -> CountsValuesView | Sequence[int]:
    """The quantities of the mapping in sorted order.

    For example, '<=' gives the CDF.

    Args:
        comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
            May be omitted, in which case equality `'=='` is used.
    """
    if comparison is None:
        comparison = '=='

    match comparison:
        case '==':
            return self.values()
        case '<=':
            return tuple(itertools.accumulate(self.values()))
        case '>=':
            return tuple(
                itertools.accumulate(self.values()[:-1],
                                     operator.sub,
                                     initial=self.denominator()))
        case '!=':
            return tuple(self.denominator() - q for q in self.values())
        case '<':
            return tuple(self.denominator() - q
                         for q in self.quantities('>='))
        case '>':
            return tuple(self.denominator() - q
                         for q in self.quantities('<='))
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
The quantities of the mapping in sorted order.
For example, '<=' gives the CDF.
Arguments:
- comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. May be omitted, in which case equality `'=='` is used.
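The comparison variants reduce to running sums over the `'=='` quantities. A stdlib sketch of the documented accumulations, using plain lists rather than a `Population`:

```python
import itertools
import operator

quantities = [1, 1, 1, 1, 1, 1]  # a d6: one of each outcome
denominator = sum(quantities)

cdf = tuple(itertools.accumulate(quantities))             # quantities('<=')
ccdf = tuple(itertools.accumulate(quantities[:-1],
                                  operator.sub,
                                  initial=denominator))   # quantities('>=')
lt = tuple(denominator - q for q in ccdf)                 # quantities('<')

print(cdf)   # (1, 2, 3, 4, 5, 6)
print(ccdf)  # (6, 5, 4, 3, 2, 1)
print(lt)    # (0, 1, 2, 3, 4, 5)
```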
def denominator(self) -> int:
    """The sum of all quantities (e.g. weights or duplicates).

    For the number of unique outcomes, use `len()`.
    """
    return self._denominator
The sum of all quantities (e.g. weights or duplicates).
For the number of unique outcomes, use `len()`.
def multiply_quantities(self: C, scale: int, /) -> C:
    """Multiplies all quantities by an integer."""
    if scale == 1:
        return self
    data = {
        outcome: quantity * scale
        for outcome, quantity in self.items()
    }
    return self._new_type(data)
Multiplies all quantities by an integer.
def divide_quantities(self: C, divisor: int, /) -> C:
    """Divides all quantities by an integer, rounding down.

    Resulting zero quantities are dropped.
    """
    if divisor == 0:
        return self
    data = {
        outcome: quantity // divisor
        for outcome, quantity in self.items() if quantity >= divisor
    }
    return self._new_type(data)
Divides all quantities by an integer, rounding down.
Resulting zero quantities are dropped.
def modulo_quantities(self: C, divisor: int, /) -> C:
    """Modulus of all quantities with an integer."""
    data = {
        outcome: quantity % divisor
        for outcome, quantity in self.items()
    }
    return self._new_type(data)
Modulus of all quantities with an integer.
def pad_to_denominator(self: C, denominator: int, /,
                       outcome: Hashable) -> C:
    """Changes the denominator to a target number by changing the quantity of a specified outcome.

    Args:
        denominator: The denominator of the result.
        outcome: The outcome whose quantity will be adjusted.

    Returns:
        A `Population` like `self` but with the quantity of `outcome`
        adjusted so that the overall denominator is equal to
        `denominator`. If the quantity is reduced to zero, the outcome
        will be removed.

    Raises:
        ValueError: If this would require the quantity of the specified
            outcome to be negative.
    """
    adjustment = denominator - self.denominator()
    data = {outcome: quantity for outcome, quantity in self.items()}
    new_quantity = data.get(outcome, 0) + adjustment
    if new_quantity > 0:
        data[outcome] = new_quantity
    elif new_quantity == 0:
        del data[outcome]
    else:
        raise ValueError(
            f'Padding to denominator of {denominator} would require a negative quantity of {new_quantity} for {outcome}'
        )
    return self._new_type(data)
Changes the denominator to a target number by changing the quantity of a specified outcome.
Arguments:
- denominator: The denominator of the result.
- outcome: The outcome whose quantity will be adjusted.

Returns:
A `Population` like `self` but with the quantity of `outcome` adjusted so that the overall denominator is equal to `denominator`. If the quantity is reduced to zero, the outcome will be removed.

Raises:
- ValueError: If this would require the quantity of the specified outcome to be negative.
def multiply_to_denominator(self: C, denominator: int, /) -> C:
    """Multiplies all quantities to reach the target denominator.

    Raises:
        ValueError: If this cannot be achieved using an integer scaling.
    """
    if denominator % self.denominator():
        raise ValueError(
            'Target denominator is not an integer factor of the current denominator.'
        )
    return self.multiply_quantities(denominator // self.denominator())
Multiplies all quantities to reach the target denominator.
Raises:
- ValueError if this cannot be achieved using an integer scaling.
def append(self: C, outcome, quantity: int = 1, /) -> C:
    """This population with an outcome appended.

    Args:
        outcome: The outcome to append.
        quantity: The quantity of the outcome to append. Can be negative,
            which removes quantity (but not below zero).
    """
    data = Counter(self)
    data[outcome] = max(data[outcome] + quantity, 0)
    return self._new_type(data)
This population with an outcome appended.
Arguments:
- outcome: The outcome to append.
- quantity: The quantity of the outcome to append. Can be negative, which removes quantity (but not below zero).
def remove(self: C, outcome, quantity: int | None = None, /) -> C:
    """This population with an outcome removed.

    Args:
        outcome: The outcome to remove.
        quantity: The quantity of the outcome to remove. If not set, all
            quantity of that outcome is removed. Can be negative, which adds
            quantity instead.
    """
    if quantity is None:
        data = Counter(self)
        data[outcome] = 0
        return self._new_type(data)
    else:
        return self.append(outcome, -quantity)
This population with an outcome removed.
Arguments:
- outcome: The outcome to remove.
- quantity: The quantity of the outcome to remove. If not set, all quantity of that outcome is removed. Can be negative, which adds quantity instead.
def probability(self,
                comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                | Hashable,
                outcome: Hashable | None = None,
                /,
                *,
                percent: bool = False) -> Fraction | float:
    """The total probability of outcomes fitting a comparison.

    Args:
        comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
            May be omitted, in which case equality `'=='` is used.
        outcome: The outcome to compare to.
        percent: If set, the result will be a percentage expressed as a
            `float`. Otherwise, the result is a `Fraction`.
    """
    if outcome is None:
        outcome = comparison
        comparison = '=='
    else:
        comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                          comparison)
    result = Fraction(self.quantity(comparison, outcome),
                      self.denominator())
    return result * 100.0 if percent else result
The total probability of outcomes fitting a comparison.
Arguments:
- comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. May be omitted, in which case equality `'=='` is used.
- outcome: The outcome to compare to.
- percent: If set, the result will be a percentage expressed as a `float`. Otherwise, the result is a `Fraction`.
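As documented, the probability is just the matching quantity over the denominator, kept exact with `Fraction`. A stdlib sketch using a plain `{outcome: quantity}` dict in place of a `Population`:

```python
from fractions import Fraction

data = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}  # a d6 as {outcome: quantity}
denominator = sum(data.values())

p_eq = Fraction(data.get(4, 0), denominator)            # probability('==', 4)
p_le = Fraction(sum(q for o, q in data.items() if o <= 4),
                denominator)                            # probability('<=', 4)

print(p_eq)  # 1/6
print(p_le)  # 2/3
```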
def probability_where(self,
                      which: Callable[..., bool],
                      /,
                      star: bool | None = None,
                      percent: bool = False) -> Fraction | float:
    """The probability fulfilling a boolean condition."""
    numerator = self.quantity_where(which, star=star)
    if percent:
        return 100.0 * numerator / self.denominator()
    else:
        return Fraction(numerator, self.denominator())
The probability fulfilling a boolean condition.
def probabilities(
        self,
        comparison: Literal['==', '!=', '<=', '<', '>=', '>']
        | None = None,
        /,
        *,
        percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
    """The total probabilities fitting the comparison for each outcome in sorted order.

    For example, '<=' gives the CDF.

    Args:
        comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
            May be omitted, in which case equality `'=='` is used.
        percent: If set, the result will be a percentage expressed as a
            `float`. Otherwise, the result is a `Fraction`.
    """
    if comparison is None:
        comparison = '=='

    result = tuple(
        Fraction(q, self.denominator())
        for q in self.quantities(comparison))

    if percent:
        return tuple(100.0 * x for x in result)
    else:
        return result
The total probabilities fitting the comparison for each outcome in sorted order.
For example, '<=' gives the CDF.
Arguments:
- comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. May be omitted, in which case equality `'=='` is used.
- percent: If set, the result will be a percentage expressed as a `float`. Otherwise, the result is a `Fraction`.
def mode(self) -> tuple:
    """A tuple containing the most common outcome(s) of the population.

    These are sorted from lowest to highest.
    """
    return tuple(outcome for outcome, quantity in self.items()
                 if quantity == self.modal_quantity())
A tuple containing the most common outcome(s) of the population.
These are sorted from lowest to highest.
def modal_quantity(self) -> int:
    """The highest quantity of any single outcome."""
    return max(self.quantities())
The highest quantity of any single outcome.
def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
    """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs."""
    outcomes = icepool.sorted_union(self, other)
    return max(
        abs(
            self.probability('<=', outcome) -
            other.probability('<=', outcome)) for outcome in outcomes)
Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs.
def cramer_von_mises(self, other: 'Population') -> Fraction:
    """Cramér-von Mises statistic. The sum-of-squares difference between CDFs."""
    outcomes = icepool.sorted_union(self, other)
    return sum(((self.probability('<=', outcome) -
                 other.probability('<=', outcome))**2
                for outcome in outcomes),
               start=Fraction(0, 1))
Cramér-von Mises statistic. The sum-of-squares difference between CDFs.
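Both statistics compare the CDFs of two populations over the sorted union of their outcomes, as in the code above. An exact-`Fraction` sketch with plain dicts standing in for populations (the `cdf` helper is illustrative, not icepool API):

```python
from fractions import Fraction

def cdf(data, outcome):
    """P(X <= outcome) for a {outcome: quantity} mapping."""
    denom = sum(data.values())
    return Fraction(sum(q for o, q in data.items() if o <= outcome), denom)

a = {1: 1, 2: 1, 3: 1, 4: 1}               # a d4
b = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}   # a d6
outcomes = sorted(set(a) | set(b))

ks = max(abs(cdf(a, o) - cdf(b, o)) for o in outcomes)
cvm = sum(((cdf(a, o) - cdf(b, o))**2 for o in outcomes),
          start=Fraction(0, 1))

print(ks)   # 1/3
print(cvm)  # 17/72
```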
def median(self):
    """The median, taking the mean in case of a tie.

    This will fail if the outcomes do not support division;
    in this case, use `median_low` or `median_high` instead.
    """
    return self.quantile(1, 2)
The median, taking the mean in case of a tie.
This will fail if the outcomes do not support division; in this case, use `median_low` or `median_high` instead.
def median_low(self) -> T_co:
    """The median, taking the lower in case of a tie."""
    return self.quantile_low(1, 2)
The median, taking the lower in case of a tie.
def median_high(self) -> T_co:
    """The median, taking the higher in case of a tie."""
    return self.quantile_high(1, 2)
The median, taking the higher in case of a tie.
def quantile(self, n: int, d: int = 100):
    """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.

    This will fail if the outcomes do not support addition and division;
    in this case, use `quantile_low` or `quantile_high` instead.
    """
    # Should support addition and division.
    return (self.quantile_low(n, d) +
            self.quantile_high(n, d)) / 2  # type: ignore
The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.

This will fail if the outcomes do not support addition and division; in this case, use `quantile_low` or `quantile_high` instead.
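`quantile_low` and `quantile_high` index into the cumulative quantities with `bisect`, and `quantile` averages the two. A stdlib sketch on a d6 (`n / d` is the requested fraction of the CDF; the helper functions are illustrative, not icepool API):

```python
import bisect
import itertools

outcomes = [1, 2, 3, 4, 5, 6]
quantities = [1] * 6
denominator = sum(quantities)
cdf = list(itertools.accumulate(quantities))  # quantities('<=')

def quantile_low(n, d=100):
    """Lesser tied outcome n/d of the way through the CDF."""
    index = bisect.bisect_left(cdf, (n * denominator + d - 1) // d)
    return outcomes[min(index, len(outcomes) - 1)]

def quantile_high(n, d=100):
    """Greater tied outcome n/d of the way through the CDF."""
    index = bisect.bisect_right(cdf, n * denominator // d)
    return outcomes[min(index, len(outcomes) - 1)]

print(quantile_low(1, 2), quantile_high(1, 2))         # 3 4
print((quantile_low(1, 2) + quantile_high(1, 2)) / 2)  # 3.5  (the median)
```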
def quantile_low(self, n: int, d: int = 100) -> T_co:
    """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
    index = bisect.bisect_left(self.quantities('<='),
                               (n * self.denominator() + d - 1) // d)
    if index >= len(self):
        return self.max_outcome()
    return self.outcomes()[index]
The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.
def quantile_high(self, n: int, d: int = 100) -> T_co:
    """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
    index = bisect.bisect_right(self.quantities('<='),
                                n * self.denominator() // d)
    if index >= len(self):
        return self.max_outcome()
    return self.outcomes()[index]
The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.
This is the population variance, not the sample variance.
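The population variance as computed above is the mean of squares minus the square of the mean, kept exact with `Fraction`. A stdlib sketch on a d6 represented as a plain `{outcome: quantity}` dict:

```python
from fractions import Fraction

data = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}  # a d6
denom = sum(data.values())

mean = Fraction(sum(o * q for o, q in data.items()), denom)
mean_of_squares = Fraction(sum(q * o**2 for o, q in data.items()), denom)
variance = mean_of_squares - mean * mean

print(mean)      # 7/2
print(variance)  # 35/12
```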
The entropy of a random sample from this population.
Arguments:
- base: The logarithm base to use. Default is 2.0, which gives the entropy in bits.
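The entropy formula above is the standard Shannon entropy over the probabilities; a self-contained sketch:

```python
import math

def entropy(probabilities, base=2.0):
    """Shannon entropy of a distribution; base 2 gives bits."""
    return -sum(p * math.log(p, base) for p in probabilities if p > 0.0)

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin
print(entropy([1.0]))       # 0 bits: a certain outcome
```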
A property that applies the `[]` operator to outcomes.

For example, `population.marginals[:2]` will marginalize the first two elements of sequence outcomes.

Attributes that do not start with an underscore will also be forwarded. For example, `population.marginals.x` will marginalize the `x` attribute from e.g. `namedtuple` outcomes.
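Marginalizing tuple outcomes just applies `[]` to each outcome and merges equal results. A stdlib sketch with an illustrative `marginal` helper (not icepool API):

```python
from collections import defaultdict

# A joint distribution over (coin, d2) pairs as {outcome: quantity}.
joint = {('H', 1): 1, ('H', 2): 1, ('T', 1): 1, ('T', 2): 1}

def marginal(data, dim):
    """Sum quantities of outcomes that agree on dimension `dim`."""
    result = defaultdict(int)
    for outcome, quantity in data.items():
        result[outcome[dim]] += quantity
    return dict(result)

print(marginal(joint, 0))  # {'H': 2, 'T': 2}
print(marginal(joint, 1))  # {1: 2, 2: 2}
```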
Converts the outcomes of this population to a one-hot representation.
Arguments:
- outcomes: If provided, each outcome will be mapped to a `Vector` where the element at `outcomes.index(outcome)` is set to `True` and the rest to `False`, or all `False` if the outcome is not in `outcomes`. If not provided, `self.outcomes()` is used.
Splits this population into one containing selected items and another containing the rest.
The sum of the denominators of the results is equal to the denominator of this population.
If you want to split more than two ways, use `Population.group_by()`.

Arguments:
- outcomes: Selects which outcomes to select. Options:
  - A callable that takes an outcome and returns `True` if it should be selected.
  - A collection of outcomes to select.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `outcomes`. If not provided, this will be guessed based on the function signature.

Returns:
A population consisting of the outcomes that were selected, and a population consisting of the unselected outcomes.
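The selection itself is a single pass that routes each (outcome, quantity) pair into one of two dicts, so the denominators of the two results always sum back to the original. A stdlib sketch (the `split` helper here is illustrative, not icepool API):

```python
def split(data, which):
    """Partition a {outcome: quantity} mapping by a predicate."""
    selected, not_selected = {}, {}
    for outcome, quantity in data.items():
        (selected if which(outcome) else not_selected)[outcome] = quantity
    return selected, not_selected

d6 = {o: 1 for o in range(1, 7)}
evens, odds = split(d6, lambda o: o % 2 == 0)
print(evens)  # {2: 1, 4: 1, 6: 1}
print(sum(evens.values()) + sum(odds.values()))  # 6: denominator preserved
```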
    @property
    def group_by(self: C) -> _GroupBy[C]:
        """A method-like property that splits this population into sub-populations based on a key function.

        The sum of the denominators of the results is equal to the denominator
        of this population.

        This can be useful when using the law of total probability.

        Example: `d10.group_by(lambda x: x % 3)` is
        ```python
        {
            0: Die([3, 6, 9]),
            1: Die([1, 4, 7, 10]),
            2: Die([2, 5, 8]),
        }
        ```

        You can also use brackets to group by indexes or slices; or attributes
        to group by those. Example:

        ```python
        Die([
            'aardvark',
            'alligator',
            'asp',
            'blowfish',
            'cat',
            'crocodile',
        ]).group_by[0]
        ```

        produces

        ```python
        {
            'a': Die(['aardvark', 'alligator', 'asp']),
            'b': Die(['blowfish']),
            'c': Die(['cat', 'crocodile']),
        }
        ```

        Args:
            key_map: A function or mapping that takes outcomes and produces the
                key of the corresponding outcome in the result. If this is
                a Mapping, outcomes not in the mapping are their own key.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `key_map`.
                If not provided, this will be guessed based on the function
                signature.
        """
        return Population._GroupBy(self)
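The grouping behavior can likewise be sketched over a plain outcome-to-quantity mapping. The helper `group_population` is a hypothetical stand-in, not icepool API:

```python
from collections import defaultdict
from typing import Callable


def group_population(quantities: dict[int, int],
                     key: Callable[[int], int]) -> dict[int, dict[int, int]]:
    """Group outcome -> quantity pairs by a key function.

    As with Population.group_by, the denominators of the groups
    sum to the original denominator (law of total probability).
    """
    groups: defaultdict[int, dict[int, int]] = defaultdict(dict)
    for outcome, quantity in quantities.items():
        groups[key(outcome)][outcome] = quantity
    return dict(groups)


d10 = {n: 1 for n in range(1, 11)}
groups = group_population(d10, lambda x: x % 3)
```

This reproduces the `d10.group_by(lambda x: x % 3)` example above: group 0 holds 3, 6, 9; group 1 holds 1, 4, 7, 10; group 2 holds 2, 5, 8.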
    def sample(self) -> T_co:
        """A single random sample from this population.

        Note that this is always "with replacement" even for `Deck` since
        instances are immutable.

        This uses the standard `random` package and is not cryptographically
        secure.
        """
        # We don't use random.choices since that is based on floats rather than ints.
        r = random.randrange(self.denominator())
        index = bisect.bisect_right(self.quantities('<='), r)
        return self.outcomes()[index]
    def format(self, format_spec: str, /, **kwargs) -> str:
        """Formats this mapping as a string.

        `format_spec` should start with the output format,
        which can be:
        * `md` for Markdown (default)
        * `bbcode` for BBCode
        * `csv` for comma-separated values
        * `html` for HTML

        After this, you may optionally add a `:` followed by a series of
        requested columns. Allowed columns are:

        * `o`: Outcomes.
        * `*o`: Outcomes, unpacked if applicable.
        * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
        * `p==`, `p<=`, `p>=`: Probabilities (0-1).
        * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
        * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".

        Columns may optionally be separated using `|` characters.

        The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
        columns are the outcomes (unpacked if applicable), the quantities, and
        the probabilities. The quantities are omitted from the default columns
        if any individual quantity is 10**30 or greater.
        """
        if not self.is_empty() and self.modal_quantity() < 10**30:
            default_column_spec = '*oq==%=='
        else:
            default_column_spec = '*o%=='
        if len(format_spec) == 0:
            format_spec = 'md:' + default_column_spec

        format_spec = format_spec.replace('|', '')

        parts = format_spec.split(':')

        if len(parts) == 1:
            output_format = parts[0]
            col_spec = default_column_spec
        elif len(parts) == 2:
            output_format = parts[0]
            col_spec = parts[1]
        else:
            raise ValueError('format_spec has too many colons.')

        match output_format:
            case 'md':
                return icepool.population.format.markdown(self, col_spec)
            case 'bbcode':
                return icepool.population.format.bbcode(self, col_spec)
            case 'csv':
                return icepool.population.format.csv(self, col_spec, **kwargs)
            case 'html':
                return icepool.population.format.html(self, col_spec)
            case _:
                raise ValueError(
                    f"Unsupported output format '{output_format}'")
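A minimal sketch of what the default `md:*o|q==|%==` columns render to, using exact `Fraction` probabilities. The function `markdown_table` is illustrative only; icepool's own renderer lives in `icepool.population.format`:

```python
from fractions import Fraction


def markdown_table(quantities: dict[int, int]) -> str:
    """Render outcome, quantity, and probability columns as Markdown,
    similar in spirit to format()'s default 'md:*o|q==|%==' spec."""
    denominator = sum(quantities.values())
    lines = ['| Outcome | Quantity | Probability |',
             '|--:|--:|--:|']
    for outcome, quantity in sorted(quantities.items()):
        # Exact probability as a percentage, then formatted for display.
        pct = Fraction(quantity, denominator) * 100
        lines.append(f'| {outcome} | {quantity} | {float(pct):.4f}% |')
    return '\n'.join(lines)


table = markdown_table({1: 1, 2: 1})
```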
def tupleize(
    *args: 'T | icepool.Population[T] | icepool.RerollType'
) -> 'tuple[T, ...] | icepool.Population[tuple[T, ...]] | icepool.RerollType':
    """Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.

    For example:
    * `tupleize(1, 2)` would produce `(1, 2)`.
    * `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`,
        ... `(6, 0)`.
    * `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`,
        ... `(6, 5)`, `(6, 6)`.

    If `Population`s are provided, they must all be `Die` or all `Deck` and not
    a mixture of the two.

    If any argument is `icepool.Reroll`, the result is `icepool.Reroll`.

    Returns:
        If none of the outcomes is a `Population`, the result is a `tuple`
        with one element per argument. Otherwise, the result is a `Population`
        of the same type as the input `Population`, and the outcomes are
        `tuple`s with one element per argument.
    """
    return cartesian_product(*args, outcome_type=tuple)
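The Cartesian-product semantics can be sketched over plain outcome-to-quantity dicts. `tupleize_dice` is a hypothetical helper; unlike the real `tupleize`, it returns a one-outcome mapping even for all-scalar input:

```python
from itertools import product


def tupleize_dice(*args):
    """Cartesian product of independent distributions.

    Scalars are treated as single-outcome distributions; quantities of
    independent arguments multiply, as in icepool's tupleize().
    """
    dice = [a if isinstance(a, dict) else {a: 1} for a in args]
    result: dict = {}
    for combo in product(*(d.items() for d in dice)):
        outcome = tuple(o for o, _ in combo)
        quantity = 1
        for _, q in combo:
            quantity *= q
        result[outcome] = result.get(outcome, 0) + quantity
    return result


d6 = {n: 1 for n in range(1, 7)}
pairs = tupleize_dice(d6, d6)  # 36 equally weighted ordered pairs
```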
def vectorize(
    *args: 'T | icepool.Population[T] | icepool.RerollType'
) -> 'icepool.Vector[T] | icepool.Population[icepool.Vector[T]] | icepool.RerollType':
    """Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.

    For example:
    * `vectorize(1, 2)` would produce `Vector(1, 2)`.
    * `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`,
        `Vector(2, 0)`, ... `Vector(6, 0)`.
    * `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`,
        `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.

    If `Population`s are provided, they must all be `Die` or all `Deck` and not
    a mixture of the two.

    If any argument is `icepool.Reroll`, the result is `icepool.Reroll`.

    Returns:
        If none of the outcomes is a `Population`, the result is a `Vector`
        with one element per argument. Otherwise, the result is a `Population`
        of the same type as the input `Population`, and the outcomes are
        `Vector`s with one element per argument.
    """
    return cartesian_product(*args, outcome_type=icepool.Vector)
class Vector(Outcome, Sequence[T_co]):
    """Immutable tuple-like class that applies most operators elementwise.

    May become a variadic generic type in the future.
    """
    __slots__ = ['_data', '_truth_value']

    _data: tuple[T_co, ...]
    _truth_value: bool | None

    def __init__(self,
                 elements: Iterable[T_co],
                 *,
                 truth_value: bool | None = None) -> None:
        self._data = tuple(elements)
        self._truth_value = truth_value

    def __hash__(self) -> int:
        return hash((Vector, self._data))

    def __len__(self) -> int:
        return len(self._data)

    @overload
    def __getitem__(self, index: int) -> T_co:
        ...

    @overload
    def __getitem__(self, index: slice) -> 'Vector[T_co]':
        ...

    def __getitem__(self, index: int | slice) -> 'T_co | Vector[T_co]':
        if isinstance(index, int):
            return self._data[index]
        else:
            return Vector(self._data[index])

    def __iter__(self) -> Iterator[T_co]:
        return iter(self._data)

    # Unary operators.

    def unary_operator(self, op: Callable[..., U], *args,
                       **kwargs) -> 'Vector[U]':
        """Unary operators on `Vector` are applied elementwise.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        """
        return Vector(op(x, *args, **kwargs) for x in self)

    def __neg__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.neg)

    def __pos__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.pos)

    def __invert__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.invert)

    def abs(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.abs)

    __abs__ = abs

    def round(self, ndigits: int | None = None) -> 'Vector':
        return self.unary_operator(round, ndigits)

    __round__ = round

    def trunc(self) -> 'Vector':
        return self.unary_operator(math.trunc)

    __trunc__ = trunc

    def floor(self) -> 'Vector':
        return self.unary_operator(math.floor)

    __floor__ = floor

    def ceil(self) -> 'Vector':
        return self.unary_operator(math.ceil)

    __ceil__ = ceil

    # Binary operators.

    def binary_operator(self,
                        other,
                        op: Callable[..., U],
                        *args,
                        compare_for_truth: bool = False,
                        compare_non_vector: bool | None = None,
                        **kwargs) -> 'Vector[U]':
        """Binary operators on `Vector` are applied elementwise.

        If the other operand is also a `Vector`, the operator is applied to each
        pair of elements from `self` and `other`. Both must have the same
        length.

        Otherwise the other operand is broadcast to each element of `self`.

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`.

        `@` is not included due to its different meaning in `Die`.

        This is also used for the comparators
        `<, <=, >, >=, ==, !=`.

        In this case, the result also has a truth value based on lexicographic
        ordering.
        """
        if isinstance(other, Vector):
            if len(self) == len(other):
                if compare_for_truth:
                    truth_value = cast(bool, op(self._data, other._data))
                else:
                    truth_value = None
                return Vector(
                    (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
                    truth_value=truth_value)
            else:
                if compare_for_truth:
                    truth_value = cast(bool, op(self._data, other._data))
                    return icepool.VectorWithTruthOnly(truth_value)
                else:
                    raise IndexError(
                        f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
                    )
        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        else:
            return Vector((op(x, other, *args, **kwargs) for x in self),
                          truth_value=compare_non_vector)

    def reverse_binary_operator(self, other, op: Callable[..., U], *args,
                                **kwargs) -> 'Vector[U]':
        """Reverse version of `binary_operator()`."""
        if isinstance(other, Vector):
            if len(self) == len(other):
                return Vector(
                    op(y, x, *args, **kwargs) for x, y in zip(self, other))
            else:
                raise IndexError(
                    f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
                )
        elif isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        else:
            return Vector(op(other, x, *args, **kwargs) for x in self)

    def __add__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.add)

    def __radd__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.add)

    def __sub__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.sub)

    def __rsub__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.sub)

    def __mul__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.mul)

    def __rmul__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.mul)

    def __truediv__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.truediv)

    def __rtruediv__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.truediv)

    def __floordiv__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.floordiv)

    def __rfloordiv__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.floordiv)

    def __pow__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.pow)

    def __rpow__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.pow)

    def __mod__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.mod)

    def __rmod__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.mod)

    def __lshift__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.lshift)

    def __rlshift__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.lshift)

    def __rshift__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.rshift)

    def __rrshift__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.rshift)

    def __and__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.and_)

    def __rand__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.and_)

    def __or__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.or_)

    def __ror__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.or_)

    def __xor__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.xor)

    def __rxor__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.xor)

    # Comparators.
    # These return a value with a truth value, but not a bool.

    def __lt__(self, other) -> 'Vector':  # type: ignore
        return self.binary_operator(other,
                                    operator.lt,
                                    compare_for_truth=True,
                                    compare_non_vector=None)

    def __le__(self, other) -> 'Vector':  # type: ignore
        return self.binary_operator(other,
                                    operator.le,
                                    compare_for_truth=True,
                                    compare_non_vector=None)

    def __gt__(self, other) -> 'Vector':  # type: ignore
        return self.binary_operator(other,
                                    operator.gt,
                                    compare_for_truth=True,
                                    compare_non_vector=None)

    def __ge__(self, other) -> 'Vector':  # type: ignore
        return self.binary_operator(other,
                                    operator.ge,
                                    compare_for_truth=True,
                                    compare_non_vector=None)

    def __eq__(self, other) -> 'Vector | bool':  # type: ignore
        return self.binary_operator(other,
                                    operator.eq,
                                    compare_for_truth=True,
                                    compare_non_vector=False)

    def __ne__(self, other) -> 'Vector | bool':  # type: ignore
        return self.binary_operator(other,
                                    operator.ne,
                                    compare_for_truth=True,
                                    compare_non_vector=True)

    def __bool__(self) -> bool:
        if self._truth_value is None:
            raise TypeError(
                'Vector only has a truth value if it is the result of a comparison operator.'
            )
        return self._truth_value

    # Sequence manipulation.

    def append(self, other) -> 'Vector':
        return Vector(self._data + (other, ))

    def concatenate(self, other: 'Iterable') -> 'Vector':
        return Vector(itertools.chain(self, other))

    # Strings.

    def __repr__(self) -> str:
        return type(self).__qualname__ + '(' + repr(self._data) + ')'

    def __str__(self) -> str:
        return type(self).__qualname__ + '(' + str(self._data) + ')'
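The elementwise-with-broadcast rule implemented by `binary_operator` can be shown with a tiny toy class. `MiniVector` is a deliberately simplified stand-in, not icepool's `Vector` (no truth values, no `Population` delegation):

```python
import operator


class MiniVector(tuple):
    """Toy stand-in illustrating Vector's elementwise/broadcast rule."""

    def _elementwise(self, other, op):
        if isinstance(other, MiniVector):
            # Paired elementwise application; lengths must match.
            if len(self) != len(other):
                raise IndexError('both vectors must have the same length')
            return MiniVector(op(x, y) for x, y in zip(self, other))
        # Non-vector operands broadcast to every element.
        return MiniVector(op(x, other) for x in self)

    def __add__(self, other):
        return self._elementwise(other, operator.add)

    def __mul__(self, other):
        # Note: overrides tuple repetition; * here means elementwise multiply.
        return self._elementwise(other, operator.mul)
```

So `MiniVector((1, 2)) + MiniVector((10, 20))` pairs elements, while `MiniVector((1, 2)) * 3` broadcasts the scalar.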
class Symbols(Mapping[str, int]):
    """EXPERIMENTAL: Immutable multiset of single characters.

    Spaces, dashes, and underscores cannot be used as symbols.

    Operations include:

    | Operation                   | Count / notes                      |
    |:----------------------------|:-----------------------------------|
    | `additive_union`, `+`       | `l + r`                            |
    | `difference`, `-`           | `l - r`                            |
    | `intersection`, `&`         | `min(l, r)`                        |
    | `union`, `\\|`              | `max(l, r)`                        |
    | `symmetric_difference`, `^` | `abs(l - r)`                       |
    | `multiply_counts`, `*`      | `count * n`                        |
    | `divide_counts`, `//`       | `count // n`                       |
    | `issubset`, `<=`            | all counts l <= r                  |
    | `issuperset`, `>=`          | all counts l >= r                  |
    | `==`                        | all counts l == r                  |
    | `!=`                        | any count l != r                   |
    | unary `+`                   | drop all negative counts           |
    | unary `-`                   | reverses the sign of all counts    |

    `<` and `>` are lexicographic orderings rather than subset relations.
    Specifically, they compare the count of each character in alphabetical
    order. For example:
    * `'a' > ''` since one `'a'` is more than zero `'a'`s.
    * `'a' > 'bb'` since `'a'` is compared first.
    * `'-a' < 'bb'` since the left side has -1 `'a'`s.
    * `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.

    Binary operators other than `*` and `//` implicitly convert the other
    argument to `Symbols` using the constructor.

    Subscripting with a single character returns the count of that character
    as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`.
    You can also access it as an attribute, e.g. `symbols.a`.

    Subscripting with multiple characters returns a `Symbols` with only those
    characters, dropping the rest.
    E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`.
    Again you can also access it as an attribute, e.g. `symbols.ab`.
    This is useful for reducing the outcome space, which reduces computational
    cost for further operations. If you want to keep only a single character
    while keeping the type as `Symbols`, you can subscript with that character
    plus an unused character.

    Subscripting with duplicate characters currently has no further effect, but
    this may change in the future.

    `Population.marginals` forwards attribute access, so you can use e.g.
    `die.marginals.a` to get the marginal distribution of `a`s.

    Note that attribute access only works with valid identifiers,
    so e.g. emojis would need to use the subscript method.
    """
    _data: Mapping[str, int]

    def __new__(cls,
                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """Constructor.

        The argument can be a string, an iterable of characters, or a mapping of
        characters to counts.

        If the argument is a string, negative symbols can be specified using a
        minus sign optionally surrounded by whitespace. For example,
        `a - b` has one positive a and one negative b.
        """
        self = super(Symbols, cls).__new__(cls)
        if isinstance(symbols, str):
            data: MutableMapping[str, int] = defaultdict(int)
            positive, *negative = re.split(r'\s*-\s*', symbols)
            for s in positive:
                data[s] += 1
            if len(negative) > 1:
                raise ValueError('Multiple dashes not allowed.')
            if len(negative) == 1:
                for s in negative[0]:
                    data[s] -= 1
        elif isinstance(symbols, Mapping):
            data = defaultdict(int, symbols)
        else:
            data = defaultdict(int)
            for s in symbols:
                data[s] += 1

        for s in data:
            if len(s) != 1:
                raise ValueError(f'Symbol {s} is not a single character.')
            if re.match(r'[\s_-]', s):
                raise ValueError(
                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')

        self._data = defaultdict(int,
                                 {k: data[k]
                                  for k in sorted(data.keys())})

        return self

    @classmethod
    def _new_raw(cls, data: defaultdict[str, int]) -> 'Symbols':
        self = super(Symbols, cls).__new__(cls)
        self._data = data
        return self

    # Mapping interface.

    def __getitem__(self, key: str) -> 'int | Symbols':  # type: ignore
        if len(key) == 1:
            return self._data[key]
        else:
            return Symbols._new_raw(
                defaultdict(int, {s: self._data[s]
                                  for s in key}))

    def __getattr__(self, key: str) -> 'int | Symbols':
        if key[0] == '_':
            raise AttributeError(key)
        return self[key]

    def __iter__(self) -> Iterator[str]:
        return iter(self._data)

    def __len__(self) -> int:
        return len(self._data)

    # Binary operators.

    def additive_union(self, *args:
                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The sum of counts of each symbol."""
        # Note: functools.reduce takes its initial value positionally.
        return functools.reduce(operator.add, args, self)

    def __add__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] += count
        return Symbols._new_raw(data)

    __radd__ = __add__

    def difference(self, *args:
                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The difference between the counts of each symbol."""
        return functools.reduce(operator.sub, args, self)

    def __sub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] -= count
        return Symbols._new_raw(data)

    def __rsub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, Symbols(other)._data)
        for s, count in self.items():
            data[s] -= count
        return Symbols._new_raw(data)

    def intersection(self, *args:
                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The min count of each symbol."""
        return functools.reduce(operator.and_, args, self)

    def __and__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data: defaultdict[str, int] = defaultdict(int)
        for s, count in Symbols(other).items():
            data[s] = min(self.get(s, 0), count)
        return Symbols._new_raw(data)

    __rand__ = __and__

    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The max count of each symbol."""
        return functools.reduce(operator.or_, args, self)

    def __or__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = max(data[s], count)
        return Symbols._new_raw(data)

    __ror__ = __or__

    def symmetric_difference(
            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The absolute difference in symbol counts between the two sets."""
        return self ^ other

    def __xor__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = abs(data[s] - count)
        return Symbols._new_raw(data)

    __rxor__ = __xor__

    def multiply_counts(self, other: int) -> 'Symbols':
        """Multiplies all counts by an integer."""
        return self * other

    def __mul__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count * other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    __rmul__ = __mul__

    def divide_counts(self, other: int) -> 'Symbols':
        """Divides all counts by an integer, rounding down."""
        data = defaultdict(int, {
            s: count // other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def count_subset(self,
                     divisor: Iterable[str] | Mapping[str, int],
                     *,
                     empty_divisor: int | None = None) -> int:
        """The number of times the divisor is contained in this multiset."""
        if not isinstance(divisor, Mapping):
            divisor = Counter(divisor)
        result = None
        for s, count in divisor.items():
            current = self._data[s] // count
            if result is None or current < result:
                result = current
        if result is None:
            if empty_divisor is None:
                raise ZeroDivisionError('Divisor is empty.')
            else:
                return empty_divisor
        else:
            return result

    @overload
    def __floordiv__(self, other: int) -> 'Symbols':
        """Same as divide_counts()."""

    @overload
    def __floordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        """Same as count_subset()."""

    @overload
    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        ...

    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        if isinstance(other, int):
            return self.divide_counts(other)
        elif isinstance(other, Iterable):
            return self.count_subset(other)
        else:
            return NotImplemented

    def __rfloordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        return Symbols(other).count_subset(self)

    def modulo_counts(self, other: int) -> 'Symbols':
        return self % other

    def __mod__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count % other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def __lt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] < other[k]:  # type: ignore
                return True
            if self[k] > other[k]:  # type: ignore
                return False
        return False

    def __gt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] > other[k]:  # type: ignore
                return True
            if self[k] < other[k]:  # type: ignore
                return False
        return False

    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a subset of the other.

        Same as `<=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self <= other

    def __le__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] <= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a superset of the other.

        Same as `>=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self >= other

    def __ge__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] >= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` has any positive elements in common with the other.

        Raises:
            ValueError if either has negative elements.
        """
        other = Symbols(other)
        return any(self[s] > 0 and other[s] > 0  # type: ignore
                   for s in self)

    def __eq__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except (TypeError, ValueError):
            return NotImplemented
        return all(self[s] == other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def __ne__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except (TypeError, ValueError):
            return NotImplemented
        return any(self[s] != other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    # Unary operators.

    def has_negative_counts(self) -> bool:
        """Whether any counts are negative."""
        return any(c < 0 for c in self.values())

    def __pos__(self) -> 'Symbols':
        data = defaultdict(int, {
            s: count
            for s, count in self.items() if count > 0
        })
        return Symbols._new_raw(data)

    def __neg__(self) -> 'Symbols':
        data = defaultdict(int, {s: -count for s, count in self.items()})
        return Symbols._new_raw(data)

    @cached_property
    def _hash(self) -> int:
        return hash((Symbols, str(self)))

    def __hash__(self) -> int:
        return self._hash

    def size(self) -> int:
        """The total number of elements."""
        return sum(self._data.values())

    @cached_property
    def _str(self) -> str:
        sorted_keys = sorted(self)
        positive = ''.join(s * self._data[s] for s in sorted_keys
                           if self._data[s] > 0)
        negative = ''.join(s * -self._data[s] for s in sorted_keys
                           if self._data[s] < 0)
        if positive:
            if negative:
                return positive + ' - ' + negative
            else:
                return positive
        else:
            if negative:
                return '-' + negative
            else:
                return ''
439 def __str__(self) -> str: 440 """All symbols in unary form (i.e. including duplicates) in ascending order. 441 442 If there are negative elements, they are listed following a ` - ` sign. 443 """ 444 return self._str 445 446 def __repr__(self) -> str: 447 return type(self).__qualname__ + f"('{str(self)}')"
EXPERIMENTAL: Immutable multiset of single characters.
Spaces, dashes, and underscores cannot be used as symbols.
Operations include:

| Operation | Count / notes |
|---|---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `issubset`, `<=` | all counts `l <= r` |
| `issuperset`, `>=` | all counts `l >= r` |
| `==` | all counts `l == r` |
| `!=` | any count `l != r` |
| unary `+` | drop all negative counts |
| unary `-` | reverses the sign of all counts |
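The count semantics in the table above can be sketched with plain `collections.Counter` objects (a minimal stand-in for `Symbols`, not icepool's implementation):

```python
from collections import Counter

# Two small multisets of single characters.
l = Counter('aab')   # {'a': 2, 'b': 1}
r = Counter('abc')   # {'a': 1, 'b': 1, 'c': 1}

keys = l.keys() | r.keys()
# Each operation acts pointwise on the count of each symbol.
additive_union = {s: l[s] + r[s] for s in keys}             # l + r
difference = {s: l[s] - r[s] for s in keys}                 # l - r
intersection = {s: min(l[s], r[s]) for s in keys}           # min(l, r)
union = {s: max(l[s], r[s]) for s in keys}                  # max(l, r)
symmetric_difference = {s: abs(l[s] - r[s]) for s in keys}  # abs(l - r)
```

Note that unlike `Counter`'s own `-` operator, the difference here can go negative, matching the signed counts that `Symbols` supports.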
`<` and `>` are lexicographic orderings rather than subset relations.
Specifically, they compare the count of each character in alphabetical
order. For example:

* `'a' > ''` since one `'a'` is more than zero `'a'`s.
* `'a' > 'bb'` since `'a'` is compared first.
* `'-a' < 'bb'` since the left side has -1 `'a'`s.
* `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.
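The comparison rule can be sketched over plain strings with non-negative counts (a stand-in for the ordering described above; it does not handle the negative-count case like `'-a'`):

```python
from collections import Counter

def symbols_lt(left: str, right: str) -> bool:
    """Lexicographic comparison of character counts in alphabetical order;
    a missing character counts as zero."""
    l, r = Counter(left), Counter(right)
    for k in sorted(l.keys() | r.keys()):
        if l[k] != r[k]:
            return l[k] < r[k]
    return False

assert symbols_lt('', 'a')    # 'a' > ''
assert symbols_lt('bb', 'a')  # 'a' > 'bb': 'a' is compared first
assert symbols_lt('a', 'ab')  # equal 'a's, right has more 'b's
```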
Binary operators other than `*` and `//` implicitly convert the other
argument to `Symbols` using the constructor.

Subscripting with a single character returns the count of that character
as an `int`, e.g. `symbols['a']` -> number of `a`s as an `int`.
You can also access it as an attribute, e.g. `symbols.a`.

Subscripting with multiple characters returns a `Symbols` with only those
characters, dropping the rest,
e.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`.
Again, you can also access it as an attribute, e.g. `symbols.ab`.
This is useful for reducing the outcome space, which reduces computational
cost for further operations. If you want to keep only a single character
while keeping the type as Symbols, you can subscript with that character
plus an unused character.
Subscripting with duplicate characters currently has no further effect, but this may change in the future.
`Population.marginals` forwards attribute access, so you can use e.g.
`die.marginals.a` to get the marginal distribution of `a`s.

Note that attribute access only works with valid identifiers, so e.g. emojis would need to use the subscript method.
```python
    def __new__(cls,
                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """Constructor.

        The argument can be a string, an iterable of characters, or a mapping of
        characters to counts.

        If the argument is a string, negative symbols can be specified using a
        minus sign optionally surrounded by whitespace. For example,
        `a - b` has one positive a and one negative b.
        """
        self = super(Symbols, cls).__new__(cls)
        if isinstance(symbols, str):
            data: MutableMapping[str, int] = defaultdict(int)
            positive, *negative = re.split(r'\s*-\s*', symbols)
            for s in positive:
                data[s] += 1
            if len(negative) > 1:
                raise ValueError('Multiple dashes not allowed.')
            if len(negative) == 1:
                for s in negative[0]:
                    data[s] -= 1
        elif isinstance(symbols, Mapping):
            data = defaultdict(int, symbols)
        else:
            data = defaultdict(int)
            for s in symbols:
                data[s] += 1

        for s in data:
            if len(s) != 1:
                raise ValueError(f'Symbol {s} is not a single character.')
            if re.match(r'[\s_-]', s):
                raise ValueError(
                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')

        self._data = defaultdict(int,
                                 {k: data[k]
                                  for k in sorted(data.keys())})

        return self
```
Constructor.
The argument can be a string, an iterable of characters, or a mapping of characters to counts.
If the argument is a string, negative symbols can be specified using a
minus sign optionally surrounded by whitespace. For example,
`a - b` has one positive `a` and one negative `b`.
```python
    def additive_union(self, *args:
                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The sum of counts of each symbol."""
        # functools.reduce takes its initial value positionally.
        return functools.reduce(operator.add, args, self)
```
The sum of counts of each symbol.
```python
    def difference(self, *args:
                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The difference between the counts of each symbol."""
        # functools.reduce takes its initial value positionally.
        return functools.reduce(operator.sub, args, self)
```
The difference between the counts of each symbol.
```python
    def intersection(self, *args:
                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The min count of each symbol."""
        # functools.reduce takes its initial value positionally.
        return functools.reduce(operator.and_, args, self)
```
The min count of each symbol.
```python
    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The max count of each symbol."""
        # functools.reduce takes its initial value positionally.
        return functools.reduce(operator.or_, args, self)
```
The max count of each symbol.
```python
    def symmetric_difference(
            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The absolute difference in symbol counts between the two sets."""
        return self ^ other
```
The absolute difference in symbol counts between the two sets.
```python
    def multiply_counts(self, other: int) -> 'Symbols':
        """Multiplies all counts by an integer."""
        return self * other
```
Multiplies all counts by an integer.
```python
    def divide_counts(self, other: int) -> 'Symbols':
        """Divides all counts by an integer, rounding down."""
        data = defaultdict(int, {
            s: count // other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)
```
Divides all counts by an integer, rounding down.
```python
    def count_subset(self,
                     divisor: Iterable[str] | Mapping[str, int],
                     *,
                     empty_divisor: int | None = None) -> int:
        """The number of times the divisor is contained in this multiset."""
        if not isinstance(divisor, Mapping):
            divisor = Counter(divisor)
        result = None
        for s, count in divisor.items():
            current = self._data[s] // count
            if result is None or current < result:
                result = current
        if result is None:
            if empty_divisor is None:
                raise ZeroDivisionError('Divisor is empty.')
            else:
                return empty_divisor
        else:
            return result
```
The number of times the divisor is contained in this multiset.
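The semantics can be sketched with plain `Counter`s: the number of whole copies of the divisor is the minimum of the per-symbol floor divisions. This is a stand-in for the behavior described above, not icepool's implementation:

```python
from collections import Counter

def count_subset(multiset: str, divisor: str) -> int:
    """How many whole copies of `divisor` fit inside `multiset`."""
    m, d = Counter(multiset), Counter(divisor)
    if not d:
        raise ZeroDivisionError('Divisor is empty.')
    # Each required symbol limits the number of whole copies.
    return min(m[s] // count for s, count in d.items())

# 'aaabbc' contains two whole copies of 'ab', limited by the two 'b's.
assert count_subset('aaabbc', 'ab') == 2
```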
```python
    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a subset of the other.

        Same as `<=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self <= other
```
Whether `self` is a subset of the other.

Same as `<=`.

Note that the `<` and `>` operators are lexicographic orderings,
not proper subset relations.
```python
    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a superset of the other.

        Same as `>=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self >= other
```
Whether `self` is a superset of the other.

Same as `>=`.

Note that the `<` and `>` operators are lexicographic orderings,
not proper subset relations.
```python
    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` has any positive elements in common with the other.

        Raises:
            ValueError if either has negative elements.
        """
        other = Symbols(other)
        return any(self[s] > 0 and other[s] > 0  # type: ignore
                   for s in self)
```
Whether `self` has any positive elements in common with the other.

Raises:
- `ValueError` if either has negative elements.
A symbol indicating that the die should be rolled again, usually with some operation applied.
This is designed to be used with the Die() constructor.
AgainExpressions should not be fed to functions or methods other than
Die() (or indirectly via map()), but they can be used with operators.
Examples:
* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.
The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
affect how these arguments are processed. At most one of `again_count` or
`again_depth` may be provided; if neither are provided, the behavior is as
`again_depth=1`.
For finer control over rolling processes, use e.g. Die.map() instead.
Count mode
When again_count is provided, we start with one roll queued and execute one
roll at a time. For every Again we roll, we queue another roll.
If we run out of rolls, we sum the rolls to find the result. We evaluate up to
again_count extra rolls. If, at this point, there are still dice remaining:
* `Restart`: If there would be dice over the limit, we restart the entire process from the beginning, effectively conditioning the process against this sequence of events.
* `Reroll`: Any remaining dice can't produce more `Again`s.
* outcome: Any remaining dice are each treated as the given outcome.
* `None`: Any remaining dice are treated as zero.
This mode only allows "additive" expressions to be used with Again, which
means that only the following operators are allowed:
* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.
Furthermore, the + operator is assumed to be associative and commutative.
For example, str or tuple outcomes will not produce elements with a definite
order.
Depth mode
When `again_depth=0`, `again_end` is directly substituted
for each occurrence of `Again`. For other values of `again_depth`, the result for
`again_depth-1` is substituted for each occurrence of `Again`.
If `again_end=Reroll`, then any `AgainExpression`s in the final depth
are rerolled. `Restart` cannot be used with `again_depth`.
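The depth-mode substitution can be sketched with exact fractions over plain dicts (a stand-in for `Die`, not icepool's implementation). Here a d6 rolls again and adds on a 6, and the deepest `Again` is replaced by 0 (an assumed `again_end`):

```python
from fractions import Fraction

def exploding_d6(depth: int) -> dict:
    """Distribution of a d6 with `6 + Again` expanded to a fixed depth.
    At depth 0, `Again` is substituted with 0, so `6 + Again` becomes 6."""
    if depth == 0:
        return {n: Fraction(1, 6) for n in range(1, 7)}
    # Substitute the depth-1 result for Again inside `6 + Again`.
    inner = exploding_d6(depth - 1)
    result = {n: Fraction(1, 6) for n in range(1, 6)}
    for outcome, p in inner.items():
        result[6 + outcome] = result.get(6 + outcome, Fraction(0)) + p / 6
    return result

d = exploding_d6(1)  # outcomes 1-5 plus 7-12 from one extra roll
```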
```python
class CountsKeysView(KeysView[T], Sequence[T]):
    """This functions as both a `KeysView` and a `Sequence`."""

    def __init__(self, counts: Counts[T]):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._keys[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._keys == other
```
This functions as both a KeysView and a Sequence.
```python
class CountsValuesView(ValuesView[int], Sequence[int]):
    """This functions as both a `ValuesView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._values[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._values == other
```
This functions as both a ValuesView and a Sequence.
```python
class CountsItemsView(ItemsView[T, int], Sequence[tuple[T, int]]):
    """This functions as both an `ItemsView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._items[index]

    def __eq__(self, other):
        return self._mapping._items == other
```
This functions as both an ItemsView and a Sequence.
```python
def from_cumulative(outcomes: Sequence[T],
                    cumulative: 'Sequence[int] | Sequence[icepool.Die[bool]]',
                    *,
                    reverse: bool = False) -> 'icepool.Die[T]':
    """Constructs a `Die` from a sequence of cumulative values.

    Args:
        outcomes: The outcomes of the resulting die. Sorted order is recommended
            but not necessary.
        cumulative: The cumulative values (inclusive) of the outcomes in the
            order they are given to this function. These may be:
            * `int` cumulative quantities.
            * Dice representing the cumulative distribution at that point.
        reverse: Iff true, both of the arguments will be reversed. This allows
            e.g. constructing using a survival distribution.
    """
    if len(outcomes) == 0:
        return icepool.Die({})

    if reverse:
        outcomes = list(reversed(outcomes))
        cumulative = list(reversed(cumulative))  # type: ignore

    prev = 0
    d = {}

    if isinstance(cumulative[0], icepool.Die):
        cumulative = harmonize_denominators(cumulative)
        for outcome, die in zip(outcomes, cumulative):
            d[outcome] = die.quantity('!=', False) - prev
            prev = die.quantity('!=', False)
    elif isinstance(cumulative[0], int):
        cumulative = cast(Sequence[int], cumulative)
        for outcome, quantity in zip(outcomes, cumulative):
            d[outcome] = quantity - prev
            prev = quantity
    else:
        raise TypeError(
            f'Unsupported type {type(cumulative)} for cumulative values.')

    return icepool.Die(d)
```
Constructs a Die from a sequence of cumulative values.
Arguments:
- outcomes: The outcomes of the resulting die. Sorted order is recommended but not necessary.
- cumulative: The cumulative values (inclusive) of the outcomes in the order they are given to this function. These may be:
  - `int` cumulative quantities.
  - Dice representing the cumulative distribution at that point.
- reverse: Iff true, both of the arguments will be reversed. This allows e.g. constructing using a survival distribution.
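The integer branch reduces to differencing consecutive cumulative quantities; a minimal sketch over plain dicts (not the `Die`-valued branch):

```python
def quantities_from_cumulative(outcomes, cumulative):
    """Recover per-outcome quantities from inclusive cumulative quantities."""
    d = {}
    prev = 0
    for outcome, quantity in zip(outcomes, cumulative):
        d[outcome] = quantity - prev  # quantity of exactly this outcome
        prev = quantity
    return d

# A d4: inclusive cumulative quantities 1, 2, 3, 4 -> one of each outcome.
assert quantities_from_cumulative([1, 2, 3, 4], [1, 2, 3, 4]) == {1: 1, 2: 1, 3: 1, 4: 1}
```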
```python
def from_rv(rv, outcomes: Sequence[int] | Sequence[float], denominator: int,
            **kwargs) -> 'icepool.Die[int] | icepool.Die[float]':
    """Constructs a `Die` from a rv object (as `scipy.stats`).

    This is done using the CDF.

    Args:
        rv: A rv object (as `scipy.stats`).
        outcomes: An iterable of `int`s or `float`s that will be the outcomes
            of the resulting `Die`.
            If the distribution is discrete, outcomes must be `int`s.
            Some outcomes may be omitted if their probability is too small
            compared to the denominator.
        denominator: The denominator of the resulting `Die` will be set to this.
        **kwargs: These will be forwarded to `rv.cdf()`.
    """
    if hasattr(rv, 'pdf'):
        # Continuous distributions use midpoints.
        midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
        cdf = rv.cdf(midpoints, **kwargs)
        quantities_le = tuple(int(round(x * denominator))
                              for x in cdf) + (denominator, )
    else:
        cdf = rv.cdf(outcomes, **kwargs)
        quantities_le = tuple(int(round(x * denominator)) for x in cdf)
    return from_cumulative(outcomes, quantities_le)
```
Constructs a Die from a rv object (as scipy.stats).
This is done using the CDF.
Arguments:
- rv: A rv object (as `scipy.stats`).
- outcomes: An iterable of `int`s or `float`s that will be the outcomes of the resulting `Die`. If the distribution is discrete, outcomes must be `int`s. Some outcomes may be omitted if their probability is too small compared to the denominator.
- denominator: The denominator of the resulting `Die` will be set to this.
- `**kwargs`: These will be forwarded to `rv.cdf()`.
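The core quantization step — rounding the CDF at each outcome to an integer cumulative quantity over the chosen denominator — can be sketched with the stdlib `statistics.NormalDist` in place of a `scipy.stats` rv object (the continuous branch additionally evaluates the CDF at midpoints):

```python
from statistics import NormalDist

rv = NormalDist(mu=0, sigma=1)
outcomes = list(range(-3, 4))
denominator = 1000

# Round the CDF at each outcome to an integer cumulative quantity.
quantities_le = [round(rv.cdf(x) * denominator) for x in outcomes]
```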
```python
def pointwise_max(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling >= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling >= to that
    outcome is the same as the highest chance of rolling >= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the highest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get >= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = harmonize_denominators(args)
    outcomes = sorted_union(*args)
    cumulative = [
        min(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
```
Selects the highest chance of rolling >= each outcome among the arguments.
Naming not finalized.
Specifically, for each outcome, the chance of the result rolling >= to that outcome is the same as the highest chance of rolling >= that outcome among the arguments.
Equivalently, any quantile in the result is the highest of that quantile among the arguments.
This is useful for selecting from several possible moves where you are trying to get >= a threshold that is known but could change depending on the situation.
Arguments:
- dice: Either an iterable of dice, or two or more dice as separate arguments.
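The construction can be sketched over explicit quantity dicts with a common denominator: the highest chance of rolling >= each outcome corresponds to the *minimum* cumulative (<=) quantity at each outcome. A stand-in for the behavior above, not icepool's implementation:

```python
a = {1: 1, 2: 1, 3: 1, 4: 1}   # a d4, denominator 4
b = {2: 2, 4: 2}               # flat on {2, 4}, denominator 4
outcomes = sorted(a.keys() | b.keys())

def cumulative_quantity(die: dict, x) -> int:
    """Quantity of outcomes <= x."""
    return sum(q for o, q in die.items() if o <= x)

# Pointwise max: minimize the <= quantity at each outcome.
cum = [min(cumulative_quantity(a, x), cumulative_quantity(b, x))
       for x in outcomes]
```

Differencing `cum` as in `from_cumulative` then yields the resulting die; here `b` dominates at every threshold.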
```python
def pointwise_min(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling <= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling <= to that
    outcome is the same as the highest chance of rolling <= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the lowest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get <= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = harmonize_denominators(args)
    outcomes = sorted_union(*args)
    cumulative = [
        max(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
```
Selects the highest chance of rolling <= each outcome among the arguments.
Naming not finalized.
Specifically, for each outcome, the chance of the result rolling <= to that outcome is the same as the highest chance of rolling <= that outcome among the arguments.
Equivalently, any quantile in the result is the lowest of that quantile among the arguments.
This is useful for selecting from several possible moves where you are trying to get <= a threshold that is known but could change depending on the situation.
Arguments:
- dice: Either an iterable of dice, or two or more dice as separate arguments.
```python
def lowest(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int | None = None,
           drop: int | None = None,
           default: T | None = None) -> 'icepool.Die[T]':
    """The lowest outcome among the rolls, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `min()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be summed.
            * If only `drop` is provided, the `drop` lowest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
                the next `keep` lowest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "lowest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = lowest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
```
The lowest outcome among the rolls, or the sum of some of the lowest.
The outcomes should support addition and multiplication if keep != 1.
Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `min()`.
- keep, drop: These arguments work together:
  - If neither are provided, the single lowest die will be taken.
  - If only `keep` is provided, the `keep` lowest dice will be summed.
  - If only `drop` is provided, the `drop` lowest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` lowest dice will be dropped, then the next `keep` lowest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- `ValueError` if an empty iterable is provided with no `default`.
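The keep/drop rules above amount to a slice of the sorted rolls. A sketch applied to already-rolled values (the real function operates on distributions):

```python
def sum_lowest(rolls, keep=None, drop=None):
    """Sum the slice of sorted rolls selected by the keep/drop rules."""
    if keep is None and drop is None:
        keep, drop = 1, 0                # single lowest
    elif keep is None:
        keep, drop = len(rolls) - drop, drop  # drop the lowest, sum the rest
    elif drop is None:
        drop = 0
    return sum(sorted(rolls)[drop:drop + keep])

rolls = [5, 1, 3, 6]
assert sum_lowest(rolls) == 1                  # single lowest
assert sum_lowest(rolls, keep=2) == 1 + 3      # two lowest, summed
assert sum_lowest(rolls, drop=1) == 3 + 5 + 6  # drop the lowest
assert sum_lowest(rolls, keep=1, drop=1) == 3  # drop 1, keep the next 1
```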
```python
def highest(arg0,
            /,
            *more_args: 'T | icepool.Die[T]',
            keep: int | None = None,
            drop: int | None = None,
            default: T | None = None) -> 'icepool.Die[T]':
    """The highest outcome among the rolls, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `max()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be summed.
            * If only `drop` is provided, the `drop` highest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped, then
                the next `keep` highest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "highest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = highest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
```
The highest outcome among the rolls, or the sum of some of the highest.
The outcomes should support addition and multiplication if keep != 1.
Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `max()`.
- keep, drop: These arguments work together:
  - If neither are provided, the single highest die will be taken.
  - If only `keep` is provided, the `keep` highest dice will be summed.
  - If only `drop` is provided, the `drop` highest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` highest dice will be dropped, then the next `keep` highest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- `ValueError` if an empty iterable is provided with no `default`.
```python
def middle(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int = 1,
           tie: Literal['error', 'high', 'low'] = 'error',
           default: T | None = None) -> 'icepool.Die[T]':
    """The middle of the outcomes among the rolls, or the sum of some of the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments.
        keep: The number of outcomes to sum.
        tie: What to do if `keep` is odd but the number of args is even, or
            vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "middle() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    # Expression evaluators are difficult to type.
    return icepool.Pool(args).middle(keep, tie=tie).sum()  # type: ignore
```
The middle of the outcomes among the rolls, or the sum of some of the middle.
The outcomes should support addition and multiplication if keep != 1.
Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments.
- keep: The number of outcomes to sum.
- tie: What to do if `keep` is odd but the number of args is even, or vice versa.
  - 'error' (default): Raises `IndexError`.
  - 'high': The higher outcome is taken.
  - 'low': The lower outcome is taken.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- `ValueError` if an empty iterable is provided with no `default`.
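The tie rule can be sketched as index arithmetic over the sorted rolls: when `keep` and the number of rolls have mismatched parity, the kept window must lean one way or the other. A stand-in for the selection rule above, not icepool's implementation:

```python
def middle_indices(n_rolls: int, keep: int, tie: str = 'error') -> range:
    """Which positions of the sorted rolls the middle selection keeps."""
    excess = n_rolls - keep
    if excess % 2 != 0:
        # Parity mismatch: the window cannot be centered exactly.
        if tie == 'error':
            raise IndexError('keep and the number of rolls differ in parity')
        start = excess // 2 + (1 if tie == 'high' else 0)
    else:
        start = excess // 2
    return range(start, start + keep)

# Middle of 5 sorted rolls is index 2; of 4 rolls, 'low'/'high' pick a side.
assert list(middle_indices(5, 1)) == [2]
```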
```python
def min_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The minimum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable argument of such.
    """
    return min(_iter_outcomes(*args))
```
The minimum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
```python
def max_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The maximum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable argument of such.
    """
    return max(_iter_outcomes(*args))
```
The maximum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
```python
def consecutive(*args: Iterable[int]) -> Sequence[int]:
    """A minimal sequence of consecutive ints covering the argument sets."""
    start = min((x for x in itertools.chain(*args)), default=None)
    if start is None:
        return ()
    stop = max(x for x in itertools.chain(*args))
    return tuple(range(start, stop + 1))
```
A minimal sequence of consecutive ints covering the argument sets.
```python
def sorted_union(*args: Iterable[T]) -> tuple[T, ...]:
    """Merge sets into a sorted sequence."""
    if not args:
        return ()
    return tuple(sorted(set.union(*(set(arg) for arg in args))))
```
Merge sets into a sorted sequence.
```python
def harmonize_denominators(dice: 'Sequence[T | icepool.Die[T]]',
                           weights: Sequence[int] | None = None,
                           /) -> tuple['icepool.Die[T]', ...]:
    """Scale the quantities of the dice so that the denominators are proportional to given weights.

    Args:
        dice: Any number of dice or single outcomes convertible to dice.
        weights: The target relative denominators of the dice. If not provided,
            all dice will be scaled to the same denominator, the same as
            `weights = [1] * len(dice)`.

    Returns:
        A tuple of dice with the adjusted denominators.
    """
    if weights is None:
        weights = [1] * len(dice)
    converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
    scale_factors = weighted_lcm([d.denominator() for d in converted_dice],
                                 weights)
    return tuple(
        die.multiply_quantities(scale_factor)
        for die, scale_factor in zip(converted_dice, scale_factors))
```
Scale the quantities of the dice so that the denominators are proportional to given weights.
Arguments:
- dice: Any number of dice or single outcomes convertible to dice.
- weights: The target relative denominators of the dice. If not provided, all dice will be scaled to the same denominator, the same as `weights = [1] * len(dice)`.
Returns:
A tuple of dice with the adjusted denominators.
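For the equal-weights case the scaling reduces to bringing every denominator up to their least common multiple. A sketch over plain quantity dicts (a stand-in for `Die`, not icepool's implementation):

```python
from math import lcm

d6 = {n: 1 for n in range(1, 7)}   # denominator 6
coin = {0: 1, 1: 1}                # denominator 2
dice = [d6, coin]

denominators = [sum(d.values()) for d in dice]
target = lcm(*denominators)        # common denominator for equal weights

# Multiply each die's quantities so all denominators equal the target.
harmonized = [{o: q * (target // den) for o, q in d.items()}
              for d, den in zip(dice, denominators)]
```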
```python
def reduce(
        function: 'Callable[[T, T], T | icepool.Die[T] | icepool.RerollType]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None) -> 'icepool.Die[T]':
    """Applies a function of two arguments cumulatively to a sequence of dice.

    Analogous to the
    [`functools` function of the same name.](https://docs.python.org/3/library/functools.html#functools.reduce)

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice, and produce an outcome
            of the same type. It may also return `Reroll`, in which case the
            entire sequence is effectively rerolled.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        result = icepool.implicit_convert_to_die(next(iter_dice))
    for die in iter_dice:
        result = map(function, result, die)
    return result
```
Applies a function of two arguments cumulatively to a sequence of dice.
Analogous to the
[`functools` function of the same name](https://docs.python.org/3/library/functools.html#functools.reduce).
Arguments:
- function: The function to map. The function should take two arguments, which are an outcome from each of two dice, and produce an outcome of the same type. It may also return `Reroll`, in which case the entire sequence is effectively rerolled.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence of dice.
- again_count, again_depth, again_end: Forwarded to the final die constructor.
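The `functools` analogy can be made concrete with a dict-based stand-in for a `Die` (a mapping from outcome to quantity); this is an illustrative sketch, not icepool's implementation:

```python
from collections import defaultdict
from functools import reduce

def add_dice(die_a, die_b):
    # Combine two quantity mappings over all joint outcomes,
    # multiplying quantities -- the same shape as map(op, a, b).
    out = defaultdict(int)
    for xa, qa in die_a.items():
        for xb, qb in die_b.items():
            out[xa + xb] += qa * qb
    return dict(out)

d6 = {i: 1 for i in range(1, 7)}
total = reduce(add_dice, [d6, d6, d6])  # distribution of 3d6
print(total[10])  # 27 of 216 ways to roll a total of 10
```

`icepool.reduce(lambda a, b: a + b, [d6, d6, d6])` plays the same role, folding `map` over the dice from left to right.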
```python
def accumulate(
        function: 'Callable[[T, T], T | icepool.Die[T]]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None
) -> Iterator['icepool.Die[T]']:
    """Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

    Analogous to the
    [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate),
    though with no default function and the same parameter order as `reduce()`.

    The number of results is equal to the number of elements of `dice`, with
    one additional element if `initial` is provided.

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        try:
            result = icepool.implicit_convert_to_die(next(iter_dice))
        except StopIteration:
            return
    yield result
    for die in iter_dice:
        result = map(function, result, die)
        yield result
```
Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.
Analogous to the
[`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate),
though with no default function and the same parameter order as `reduce()`.

The number of results is equal to the number of elements of `dice`, with
one additional element if `initial` is provided.
Arguments:
- function: The function to map. The function should take two arguments, which are an outcome from each of two dice.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence of dice.
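The relationship to `itertools.accumulate` can be seen with the same dict-based stand-in for a `Die`; this is a sketch of the analogy, not icepool internals:

```python
from collections import defaultdict
from itertools import accumulate

def add_dice(die_a, die_b):
    # Joint-outcome sum of two quantity mappings.
    out = defaultdict(int)
    for xa, qa in die_a.items():
        for xb, qb in die_b.items():
            out[xa + xb] += qa * qb
    return dict(out)

d6 = {i: 1 for i in range(1, 7)}
running = list(accumulate([d6, d6, d6], add_dice))
# running[0] is 1d6, running[1] is 2d6, running[2] is 3d6.
print([sum(d.values()) for d in running])  # [6, 36, 216]
```

`icepool.accumulate` yields the analogous sequence of partial `Die` results, one per input die.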
```python
def map(
    repl:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
    /,
    *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
    star: bool | None = None,
    repeat: int | Literal['inf'] | None = None,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None,
    **kwargs) -> 'icepool.Die[T]':
    """Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die.

    See `map_function` for a decorator version of this.

    Example: `map(lambda a, b: a + b, d6, d6)` is the same as d6 + d6.

    `map()` is flexible but not very efficient for more than a few dice.
    If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
    `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
    efficient than using `map` on the dice in order.

    `Again` can be used but can't be combined with `repeat`.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
                produces a new outcome.
            * A mapping from old outcomes to new outcomes.
                Unmapped old outcomes stay the same.
                In this case args must have exactly one element.
            As with the `Die` constructor, the new outcomes:
            * May be dice rather than just single outcomes.
            * The special value `icepool.Reroll` will reroll that old outcome.
            * `tuple`s containing `Population`s will be `tupleize`d into
                `Population`s of `tuple`s.
                This does not apply to subclasses of `tuple` such as `namedtuple`
                or other classes such as `Vector`.
        *args: `repl` will be called with all joint outcomes of these.
            Allowed arg types are:
            * Single outcome.
            * `Die`. All outcomes will be sent to `repl`.
            * `MultisetExpression`. All sorted tuples of outcomes will be sent
                to `repl`, as `MultisetExpression.expand()`.
        star: If `True`, the first of the args will be unpacked before giving
            them to `repl`.
            If not provided, it will be inferred based on the signature of `repl`
            and the number of arguments.
        repeat: If provided, `map` will be repeated with the same arguments on
            the result this many times, except the first of `args` will be
            replaced by the result of the previous iteration. In other words,
            this produces the result of a Markov process.

            `map(repeat)` will stop early if the entire state distribution has
            converged to absorbing states. You can force an absorption to a
            desired state using `Break(state)`. Furthermore, if a state only
            leads to itself, reaching that state is considered an absorption.

            `Reroll` can be used to reroll the current stage, while `Restart`
            restarts the process from the beginning, effectively conditioning
            against that sequence of state transitions.

            `repeat` is not compatible with `Again`.

            EXPERIMENTAL: If set to `'inf'`, the result will be as if this
            were repeated an infinite number of times. In this case, the
            result will be in simplest form.
        again_count, again_depth, again_end: Forwarded to the final die constructor.
        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
            Unlike *args, outcomes will not be expanded, i.e. `Die` and
            `MultisetExpression` will be passed as-is. This is invalid for
            non-callable `repl`.
    """

    if len(args) == 0:
        if repeat is not None:
            raise ValueError(
                'If no arguments are given, repeat cannot be used.')
        if isinstance(repl, Mapping):
            raise ValueError(
                'If no arguments are given, repl must be a callable.')
        return icepool.Die([repl(**kwargs)])

    # Here len(args) is at least 1.
    die_args: 'Sequence[T | icepool.Die[T]]' = [
        (
            arg.expand() if isinstance(arg, icepool.MultisetExpression) else
            arg  # type: ignore
        ) for arg in args
    ]

    first_arg = die_args[0]
    extra_args = die_args[1:]

    if repeat is None:
        return map_simple(repl,
                          first_arg,
                          *extra_args,
                          star=star,
                          again_count=again_count,
                          again_depth=again_depth,
                          again_end=again_end,
                          **kwargs)

    # No Agains allowed past here.
    repl = cast('Callable[..., T | icepool.Die[T] | icepool.RerollType]', repl)
    transition_cache = TransitionCache(repl, *extra_args, star=star, **kwargs)

    if repeat == 'inf':
        # Infinite repeat.
        # T_co and U should be the same in this case.
        return icepool.map_tools.markov_chain.absorbing_markov_chain_die(
            transition_cache, first_arg)
    elif repeat < 0:
        raise ValueError('repeat cannot be negative.')
    elif repeat == 0:
        return icepool.Die([first_arg])
    else:
        transition_die = transition_cache.self_loop_die(
            icepool.Die([first_arg]))
        for i in range(repeat):
            transition_die = transition_cache.step_transition_die(
                transition_die)
            if not any(transition_type == TransitionType.DEFAULT
                       for transition_type, _ in transition_die):
                break
        return transition_die.map(final_map, star=True)
```
Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a `Die`.

See `map_function` for a decorator version of this.

Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.

`map()` is flexible but not very efficient for more than a few dice.
If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
`MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
efficient than using `map` on the dice in order.

`Again` can be used but can't be combined with `repeat`.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a new outcome.
  - A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same. In this case args must have exactly one element.

  As with the `Die` constructor, the new outcomes:
  - May be dice rather than just single outcomes.
  - The special value `icepool.Reroll` will reroll that old outcome.
  - `tuple`s containing `Population`s will be `tupleize`d into `Population`s of `tuple`s. This does not apply to subclasses of `tuple` such as `namedtuple` or other classes such as `Vector`.
- *args: `repl` will be called with all joint outcomes of these. Allowed arg types are:
  - Single outcome.
  - `Die`. All outcomes will be sent to `repl`.
  - `MultisetExpression`. All sorted tuples of outcomes will be sent to `repl`, as `MultisetExpression.expand()`.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`. If not provided, it will be inferred based on the signature of `repl` and the number of arguments.
- repeat: If provided, `map` will be repeated with the same arguments on the result this many times, except the first of `args` will be replaced by the result of the previous iteration. In other words, this produces the result of a Markov process.

  `map(repeat)` will stop early if the entire state distribution has converged to absorbing states. You can force an absorption to a desired state using `Break(state)`. Furthermore, if a state only leads to itself, reaching that state is considered an absorption.

  `Reroll` can be used to reroll the current stage, while `Restart` restarts the process from the beginning, effectively conditioning against that sequence of state transitions.

  `repeat` is not compatible with `Again`.

  EXPERIMENTAL: If set to `'inf'`, the result will be as if this were repeated an infinite number of times. In this case, the result will be in simplest form.
- again_count, again_depth, again_end: Forwarded to the final die constructor.
- **kwargs: Keyword-only arguments can be forwarded to a callable `repl`. Unlike *args, outcomes will not be expanded, i.e. `Die` and `MultisetExpression` will be passed as-is. This is invalid for non-callable `repl`.
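The joint-outcome enumeration that `map` performs can be sketched in plain Python over dict-based dice (outcome to quantity); this is a conceptual model, not icepool's far more optimized implementation:

```python
from collections import defaultdict
from itertools import product

def map_dice(func, *dice):
    # Exhaustively enumerate the joint outcomes, weighting each
    # mapped result by the product of the input quantities.
    out = defaultdict(int)
    for combo in product(*(die.items() for die in dice)):
        outcomes = [o for o, _ in combo]
        quantity = 1
        for _, q in combo:
            quantity *= q
        out[func(*outcomes)] += quantity
    return dict(out)

d6 = {i: 1 for i in range(1, 7)}
diff = map_dice(lambda a, b: abs(a - b), d6, d6)
print(diff[0], diff[5])  # 6 ways to tie, 2 ways to differ by 5
```

In icepool this is `map(lambda a, b: abs(a - b), d6, d6)`; the exponential joint enumeration is why the docstring steers large pools toward `MultisetEvaluator` instead.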
```python
def map_function(
    function:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | None' = None,
    /,
    *,
    star: bool | None = None,
    repeat: int | Literal['inf'] | None = None,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None,
    **kwargs
) -> 'Callable[..., icepool.Die[T]] | Callable[..., Callable[..., icepool.Die[T]]]':
    """Decorator that turns a function that takes outcomes into a function that takes dice.

    The result must be a `Die`.

    This is basically a decorator version of `map()` and produces behavior
    similar to AnyDice functions, though Icepool has different typing rules
    among other differences.

    `map_function` can either be used with no arguments:

    ```python
    @map_function
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6, again_depth=2)
    ```

    Or with keyword arguments, in which case the extra arguments are bound:

    ```python
    @map_function(again_depth=2)
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6)
    ```

    Args:
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """

    if function is not None:
        return update_wrapper(partial(map, function, **kwargs), function)
    else:

        def decorator(
            function:
            'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]'
        ) -> 'Callable[..., icepool.Die[T]]':

            return update_wrapper(
                partial(map,
                        function,
                        star=star,
                        repeat=repeat,
                        again_count=again_count,
                        again_depth=again_depth,
                        again_end=again_end,
                        **kwargs), function)

        return decorator
```
Decorator that turns a function that takes outcomes into a function that takes dice.

The result must be a `Die`.

This is basically a decorator version of `map()` and produces behavior
similar to AnyDice functions, though Icepool has different typing rules
among other differences.

`map_function` can either be used with no arguments:

```python
@map_function
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6, again_depth=2)
```

Or with keyword arguments, in which case the extra arguments are bound:

```python
@map_function(again_depth=2)
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6)
```

Arguments:
- again_count, again_depth, again_end: Forwarded to the final die constructor.
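The "usable with or without arguments" behavior comes from the `partial`/`update_wrapper` pattern visible in the source. A minimal stand-alone sketch of that pattern, with a made-up `shout` decorator in place of `map_function`:

```python
from functools import update_wrapper

def shout(function=None, *, punctuation='!'):
    # With no arguments, `function` is the decorated callable itself;
    # with keyword arguments, `function` is None and we return a
    # decorator that binds them -- the same branching map_function uses.
    if function is not None:
        return update_wrapper(
            lambda *a, **kw: function(*a, **kw).upper() + punctuation,
            function)
    return lambda f: shout(f, punctuation=punctuation)

@shout
def greet(name):
    return f'hello, {name}'

@shout(punctuation='?')
def ask(name):
    return f'ready, {name}'

print(greet('ana'))  # HELLO, ANA!
print(ask('ana'))    # READY, ANA?
```

`update_wrapper` preserves the decorated function's name and docstring, which is why `explode_six` still introspects sensibly after decoration.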
```python
def map_and_time(
        repl:
        'Callable[..., T | icepool.Die[T] | icepool.RerollType] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType]',
        initial_state: 'T | icepool.Die[T]',
        /,
        *extra_args,
        star: bool | None = None,
        repeat: int,
        **kwargs) -> 'icepool.Die[tuple[T, int]]':
    """Repeatedly map outcomes of the state to other outcomes, while also
    counting timesteps.

    This is useful for representing processes.

    The outcomes of the result are `(outcome, time)`, where `time` is the
    number of repeats needed to reach an absorbing outcome (an outcome that
    only leads to itself), or `repeat`, whichever is lesser.

    This will return early if it reaches a fixed point.
    Therefore, you can set `repeat` equal to the maximum amount of
    time you could possibly be interested in without worrying about
    it causing extra computations after the fixed point.

    Args:
        repl: One of the following:
            * A callable returning a new outcome for each old outcome.
            * A mapping from old outcomes to new outcomes.
                Unmapped old outcomes stay the same.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        initial_state: The initial state of the process, which could be a
            single state or a `Die`.
        extra_args: Extra arguments to use, as per `map`. Note that these are
            rerolled at every time step.
        star: If `True`, the first of the args will be unpacked before giving
            them to `func`.
            If not provided, it will be guessed based on the signature of `func`
            and the number of arguments.
        repeat: This will be repeated with the same arguments on the result
            up to this many times.
        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
            Unlike *args, outcomes will not be expanded, i.e. `Die` and
            `MultisetExpression` will be passed as-is. This is invalid for
            non-callable `repl`.

    Returns:
        The `Die` after the modification.
    """
    # Here len(args) is at least 1.
    extra_dice: 'Sequence[T | icepool.Die[T]]' = [
        (
            arg.expand() if isinstance(arg, icepool.MultisetExpression) else
            arg  # type: ignore
        ) for arg in extra_args
    ]

    transition_cache = TransitionCache(repl, *extra_dice, star=star, **kwargs)

    transition_die = transition_cache.self_loop_die_with_zero_time(
        icepool.Die([initial_state]))
    for i in range(repeat):
        transition_die = transition_cache.step_transition_die_with_time(
            transition_die)
        if not any(transition_type == TransitionType.DEFAULT
                   for transition_type, state, time in transition_die):
            break
    return transition_die.marginals[1:]
```
Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

The outcomes of the result are `(outcome, time)`, where `time` is the number of repeats needed to reach an absorbing outcome (an outcome that only leads to itself), or `repeat`, whichever is lesser.

This will return early if it reaches a fixed point. Therefore, you can set `repeat` equal to the maximum amount of time you could possibly be interested in without worrying about it causing extra computations after the fixed point.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- initial_state: The initial state of the process, which could be a single state or a `Die`.
- extra_args: Extra arguments to use, as per `map`. Note that these are rerolled at every time step.
- star: If `True`, the first of the args will be unpacked before giving them to `func`. If not provided, it will be guessed based on the signature of `func` and the number of arguments.
- repeat: This will be repeated with the same arguments on the result up to this many times.
- **kwargs: Keyword-only arguments can be forwarded to a callable `repl`. Unlike *args, outcomes will not be expanded, i.e. `Die` and `MultisetExpression` will be passed as-is. This is invalid for non-callable `repl`.

Returns:
The `Die` after the modification.
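The `(outcome, time)` bookkeeping can be sketched exactly for the simplest possible process, a two-state chain that absorbs with fixed probability each step; this is an illustration of the semantics, not icepool code:

```python
from fractions import Fraction

def absorb_time_distribution(p_absorb, repeat):
    # Exact distribution of (state, time): each step the process absorbs
    # with probability p_absorb, otherwise keeps rolling, for up to
    # `repeat` steps -- un-absorbed mass is reported at time `repeat`.
    dist = {}
    alive = Fraction(1)
    for t in range(1, repeat + 1):
        dist[('absorbed', t)] = alive * p_absorb
        alive *= 1 - p_absorb
    dist[('rolling', repeat)] = alive
    return dist

# Rolling a d6 until the first 6, observed for at most 3 steps:
dist = absorb_time_distribution(Fraction(1, 6), 3)
print(dist[('absorbed', 2)])  # 5/36
```

`map_and_time` generalizes this to arbitrary state spaces and transition functions, reporting `repeat` as the time for any mass that has not yet absorbed.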
```python
def mean_time_to_absorb(
        repl:
        'Callable[..., T | icepool.Die[T] | icepool.RerollType] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType]',
        initial_state: 'T | icepool.Die[T]',
        /,
        *extra_args,
        star: bool | None = None,
        **kwargs) -> Fraction:
    """EXPERIMENTAL: The mean time for the process to reach an absorbing state.

    An absorbing state is one that maps to itself with unity probability.

    Args:
        repl: One of the following:
            * A callable returning a new outcome for each old outcome.
            * A mapping from old outcomes to new outcomes.
                Unmapped old outcomes stay the same.
            The new outcomes may be dice rather than just single outcomes.
            The special value `Reroll` will reroll that old outcome.
            Currently, `mean_time_to_absorb` does not support `Restart`.
        initial_state: The initial state of the process, which could be a
            single state or a `Die`.
        extra_args: Extra arguments to use, as per `map`. Note that these are
            rerolled at every time step.
        star: If `True`, the first of the args will be unpacked before giving
            them to `func`.
            If not provided, it will be guessed based on the signature of `func`
            and the number of arguments.
        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
            Unlike *args, outcomes will not be expanded, i.e. `Die` and
            `MultisetExpression` will be passed as-is. This is invalid for
            non-callable `repl`.

    Returns:
        The mean time to absorption.
    """
    transition_cache = TransitionCache(repl, *extra_args, star=star, **kwargs)

    # Infinite repeat.
    # T_co and U should be the same in this case.
    return icepool.map_tools.markov_chain.absorbing_markov_chain_mean_absorption_time(
        transition_cache, initial_state)
```
EXPERIMENTAL: The mean time for the process to reach an absorbing state.
An absorbing state is one that maps to itself with unity probability.
Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be dice rather than just single outcomes. The special value `Reroll` will reroll that old outcome. Currently, `mean_time_to_absorb` does not support `Restart`.
- initial_state: The initial state of the process, which could be a single state or a `Die`.
- extra_args: Extra arguments to use, as per `map`. Note that these are rerolled at every time step.
- star: If `True`, the first of the args will be unpacked before giving them to `func`. If not provided, it will be guessed based on the signature of `func` and the number of arguments.
- **kwargs: Keyword-only arguments can be forwarded to a callable `repl`. Unlike *args, outcomes will not be expanded, i.e. `Die` and `MultisetExpression` will be passed as-is. This is invalid for non-callable `repl`.
Returns:
The mean time to absorption.
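For an absorbing Markov chain, the mean times satisfy the linear system t_s = 1 + Σ_u P(s→u)·t_u over the transient states. A self-contained exact solver over `Fraction`s (an illustration of the underlying math, not icepool's `absorbing_markov_chain_mean_absorption_time`):

```python
from fractions import Fraction

def mean_absorb_times(transient, transitions):
    # Solve (I - Q) t = 1 by Gauss-Jordan elimination with exact Fractions.
    # `transitions[s][u]` is the probability of moving from transient state
    # s to transient state u; missing mass is absorption.
    n = len(transient)
    idx = {s: i for i, s in enumerate(transient)}
    # Augmented matrix (I - Q) | 1.
    m = [[Fraction(int(i == j)) for j in range(n)] + [Fraction(1)]
         for i in range(n)]
    for s, row in transitions.items():
        for u, p in row.items():
            if u in idx:
                m[idx[s]][idx[u]] -= p
    for col in range(n):
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return {s: m[i][n] / m[i][i] for s, i in idx.items()}

# Rerolling a d6 until a 6 appears: stay transient w.p. 5/6 each step.
times = mean_absorb_times(['rolling'], {'rolling': {'rolling': Fraction(5, 6)}})
print(times['rolling'])  # 6
```

This is why `mean_time_to_absorb` can return an exact `Fraction`: the answer is the solution of a rational linear system, not a simulation.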
```python
def map_to_pool(
    repl:
    'Callable[..., icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType] | Mapping[Any, icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType]',
    /,
    *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
    star: bool | None = None,
    **kwargs) -> 'icepool.MultisetExpression[T]':
    """EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a MultisetExpression.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
                produces a `MultisetExpression` or something convertible to a `Pool`.
            * A mapping from old outcomes to `MultisetExpression`
                or something convertible to a `Pool`.
                In this case args must have exactly one element.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        star: If `True`, the first of the args will be unpacked before giving
            them to `repl`.
            If not provided, it will be guessed based on the signature of `repl`
            and the number of arguments.
        **kwargs: Keyword-only arguments can be forwarded to a callable `repl`.
            Unlike *args, outcomes will not be expanded, i.e. `Die` and
            `MultisetExpression` will be passed as-is. This is invalid for
            non-callable `repl`.

    Returns:
        A `MultisetExpression` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.

    Raises:
        ValueError: If `denominator` cannot be made consistent with the
            resulting mixture of pools.
    """
    transition_function, star = transition_and_star(repl, len(args), star)

    data: 'MutableMapping[icepool.MultisetExpression[T], int]' = defaultdict(
        int)
    for outcomes, quantity in icepool.iter_cartesian_product(*args):
        if star:
            pool = transition_function(*outcomes[0], *outcomes[1:], **kwargs)
        else:
            pool = transition_function(*outcomes, **kwargs)
        if pool in icepool.REROLL_TYPES:
            continue
        elif isinstance(pool, icepool.MultisetExpression):
            data[pool] += quantity
        else:
            data[icepool.Pool(pool)] += quantity
    # I couldn't get the covariance / contravariance to work.
    return icepool.MultisetMixture(data)  # type: ignore
```
EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a `MultisetExpression`.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a `MultisetExpression` or something convertible to a `Pool`.
  - A mapping from old outcomes to `MultisetExpression` or something convertible to a `Pool`. In this case args must have exactly one element.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`. If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- **kwargs: Keyword-only arguments can be forwarded to a callable `repl`. Unlike *args, outcomes will not be expanded, i.e. `Die` and `MultisetExpression` will be passed as-is. This is invalid for non-callable `repl`.

Returns:
A `MultisetExpression` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the resulting mixture of pools.
Indicates that an outcome should be rerolled (with unlimited depth).
This effectively removes the outcome from the probability space, along with its contribution to the denominator.
This can be used for conditional probability by removing all outcomes not consistent with the given observations.
Operation in specific cases:
- If sent to the constructor of `Die`, it and the corresponding quantity is dropped.
- When used with `Again` or `map(repeat)`, only that stage is rerolled, not the entire rolling process.
- To reroll with limited depth, use `Die.reroll()`, or `Again` with no modification.
- When used with `MultisetEvaluator`, this currently has the same meaning as `Restart`. Prefer using `Restart` in this case.
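The conditional-probability reading can be shown with a dict-based stand-in for a `Die` (an illustration of the concept, not icepool code):

```python
# Removing an outcome also removes its share of the denominator, which
# is exactly conditioning: a d6 with all odd results rerolled forever
# behaves as a fair die over {2, 4, 6}.
d6 = {outcome: 1 for outcome in range(1, 7)}
conditioned = {o: q for o, q in d6.items() if o % 2 == 0}
print(conditioned, sum(conditioned.values()))  # {2: 1, 4: 1, 6: 1} 3
```

In icepool this is `d6.map(lambda x: x if x % 2 == 0 else Reroll)`: the denominator drops from 6 to 3 rather than the rerolled mass being redistributed explicitly.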
Indicates that a rolling process should be restarted (with unlimited depth).
`Restart` effectively removes the sequence of events from the probability space,
along with its contribution to the denominator.

`Restart` can be used for conditional probability by removing all sequences of
events not consistent with the given observations.

`Restart` can be used with `again_count`, `map(repeat)`, or `MultisetEvaluator`.

When sent to the constructor of `Die`, it has the same effect as `Reroll`;
prefer using `Reroll` in this case.
```python
class Break(Generic[T]):
    """EXPERIMENTAL: Wrapper around a return value for triggering an early exit from `map(repeat)`.

    For example, to add successive dice until the total reaches 10:
    ```python
    def example(total, new_roll):
        if total >= 10:
            return Break()  # same as Break(total)
        else:
            return total + new_roll

    map(example, 0, d(6))
    ```
    """

    def __init__(self, outcome: T | None = None):
        """Constructor.

        Args:
            outcome: The wrapped outcome. If `None`, it is considered to be
                equal to the first argument to the current iteration of `map()`.
        """
        self.outcome = outcome

    def __hash__(self) -> int:
        return hash((Break, self.outcome))

    def __repr__(self) -> str:
        return f'Break({repr(self.outcome)})'

    def __str__(self) -> str:
        return f'Break({str(self.outcome)})'
```
EXPERIMENTAL: Wrapper around a return value for triggering an early exit from `map(repeat)`.

For example, to add successive dice until the total reaches 10:

```python
def example(total, new_roll):
    if total >= 10:
        return Break()  # same as Break(total)
    else:
        return total + new_roll

map(example, 0, d(6))
```
```python
def __init__(self, outcome: T | None = None):
    """Constructor.

    Args:
        outcome: The wrapped outcome. If `None`, it is considered to be
            equal to the first argument to the current iteration of `map()`.
    """
    self.outcome = outcome
```
Constructor.
Arguments:
- outcome: The wrapped outcome. If `None`, it is considered to be equal to the first argument to the current iteration of `map()`.
```python
class RerollType(enum.Enum):
    """The type of the Reroll and Restart singletons."""
    Reroll = 'Reroll'
    Restart = 'Restart'
```
The type of the Reroll and Restart singletons.
```python
class Pool(KeepGenerator[T]):
    """Represents a multiset of outcomes resulting from the roll of several dice.

    This should be used in conjunction with `MultisetEvaluator` to generate a
    result.

    Note that operators are performed on the multiset of rolls, not the multiset
    of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but
    an expression meaning "roll two pools of 3d6 and with rolls in the second
    pool cancelling matching rolls in the first pool one-for-one".
    """

    _dice: tuple[tuple['icepool.Die[T]', int], ...]
    _outcomes: tuple[T, ...]

    def __new__(
            cls,
            dice:
        'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]',
            times: Sequence[int] | int = 1) -> 'Pool':
        """Public constructor for a pool.

        Evaluation is most efficient when the dice are the same or same-side
        truncations of each other. For example, d4, d6, d8, d10, d12 are all
        same-side truncations of d12.

        It is permissible to create a `Pool` without providing dice, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dice may be in the pool zero times, in which case their
        outcomes will be considered but without any count (unless another die
        has that outcome).

        Args:
            dice: The dice to put in the `Pool`. This can be one of the following:

                * A `Sequence` of `Die` or outcomes.
                * A `Mapping` of `Die` or outcomes to how many of that `Die` or
                    outcome to put in the `Pool`.

                All outcomes within a `Pool` must be totally orderable.
            times: Multiplies the number of times each element of `dice` will
                be put into the pool.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.

        Raises:
            ValueError: If a bare `Deck` or `Die` argument is provided.
                A `Pool` of a single `Die` should be constructed as `Pool([die])`.
        """
        if isinstance(dice, Pool):
            if times == 1:
                return dice
            else:
                dice = {die: quantity for die, quantity in dice._dice}

        if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)):
            raise ValueError(
                f'A Pool cannot be constructed with a {type(dice).__name__} argument.'
            )

        dice, times = icepool.creation_args.itemize(dice, times)
        converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]

        dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int)
        for die, qty in zip(converted_dice, times):
            if qty == 0:
                continue
            dice_counts[die] += qty
        keep_tuple = (1, ) * sum(times)

        # Includes dice with zero qty.
        outcomes = icepool.sorted_union(*converted_dice)
        return cls._new_from_mapping(dice_counts, outcomes, keep_tuple)

    @classmethod
    def _new_raw(cls, dice: tuple[tuple['icepool.Die[T]', int], ...],
                 outcomes: tuple[T, ...],
                 keep_tuple: tuple[int, ...]) -> 'Pool[T]':
        """Create using a keep_tuple directly.

        Args:
            dice: A tuple of (die, count) pairs.
            keep_tuple: A tuple of how many times to count each die.
        """
        self = super(Pool, cls).__new__(cls)
        self._dice = dice
        self._outcomes = outcomes
        self._keep_tuple = keep_tuple
        return self

    @classmethod
    def clear_cache(cls):
        """Clears the global PoolSource cache."""
        PoolSource._new_raw.cache_clear()

    @classmethod
    def _new_from_mapping(cls, dice_counts: Mapping['icepool.Die[T]', int],
                          outcomes: tuple[T, ...],
                          keep_tuple: tuple[int, ...]) -> 'Pool[T]':
        """Creates a new pool.

        Args:
            dice_counts: A map from dice to rolls.
            keep_tuple: A tuple with length equal to the number of dice.
        """
        dice = tuple(sorted(dice_counts.items(),
                            key=lambda kv: kv[0].hash_key))
        return Pool._new_raw(dice, outcomes, keep_tuple)

    def _make_source(self):
        return PoolSource(self._dice, self._outcomes, self._keep_tuple)

    @cached_property
    def _raw_size(self) -> int:
        return sum(count for _, count in self._dice)

    def raw_size(self) -> int:
        """The number of dice in this pool before the keep_tuple is applied."""
        return self._raw_size

    @cached_property
    def _denominator(self) -> int:
        return math.prod(die.denominator()**count for die, count in self._dice)

    def denominator(self) -> int:
        return self._denominator

    @cached_property
    def _dice_tuple(self) -> tuple['icepool.Die[T]', ...]:
        return sum(((die, ) * count for die, count in self._dice), start=())

    @cached_property
    def _unique_dice(self) -> Collection['icepool.Die[T]']:
        return set(die for die, _ in self._dice)

    def unique_dice(self) -> Collection['icepool.Die[T]']:
        """The collection of unique dice in this pool."""
        return self._unique_dice

    def outcomes(self) -> Sequence[T]:
        """The union of possible outcomes among all dice in this pool in ascending order."""
        return self._outcomes

    def _set_keep_tuple(self, keep_tuple: tuple[int,
                                                ...]) -> 'KeepGenerator[T]':
        return Pool._new_raw(self._dice, self._outcomes, keep_tuple)

    def additive_union(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        args = tuple(
            icepool.expression.multiset_expression.
            implicit_convert_to_expression(arg) for arg in args)
        if all(isinstance(arg, Pool) for arg in args):
            pools = cast(tuple[Pool[T], ...], args)
            keep_tuple: tuple[int, ...] = tuple(
                reduce(operator.add, (pool.keep_tuple() for pool in pools),
                       ()))
            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
                                           for x in keep_tuple):
                # All sorted positions count the same, so we can merge the
                # pools.
                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
                for pool in pools:
                    for die, die_count in pool._dice:
                        dice[die] += die_count
                outcomes = icepool.sorted_union(*(pool.outcomes()
                                                  for pool in pools))
                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
        return KeepGenerator.additive_union(*args)

    @property
    def hash_key(self):
        return Pool, self._dice, self._keep_tuple

    def __str__(self) -> str:
        return (
            f'Pool of {self.raw_size()} dice with keep_tuple={self.keep_tuple()}\n'
            + ''.join(f' {repr(die)} : {count},\n'
                      for die, count in self._dice))
```
Represents a multiset of outcomes resulting from the roll of several dice.
This should be used in conjunction with MultisetEvaluator to generate a
result.
Note that operators are performed on the multiset of rolls, not the multiset
of dice. For example, d6.pool(3) - d6.pool(3) is not an empty pool, but
an expression meaning "roll two pools of 3d6, with rolls in the second
pool cancelling matching rolls in the first pool one-for-one".
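The distinction above — operators act on the multiset of rolls, not the multiset of dice — can be sketched in plain Python. This hypothetical helper (using `collections.Counter`, not icepool's API) shows how subtracting one concrete roll from another cancels matching elements one-for-one rather than producing an empty multiset:

```python
from collections import Counter

def roll_difference(left, right):
    """Multiset difference of two concrete rolls.

    Matching rolls in `right` cancel rolls in `left` one-for-one;
    negative counts are clamped to zero, mirroring the default
    behavior of difference().
    """
    result = Counter(left)
    result.subtract(Counter(right))
    return sorted(x for x, n in result.items() for _ in range(max(n, 0)))

# Two concrete rolls of 3d6: the difference is usually not empty.
roll_difference([6, 4, 4], [4, 2, 1])  # [4, 6]
```

icepool performs this computation over every possible pair of rolls (with weights); the sketch covers only one fixed pair.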
118 @classmethod 119 def clear_cache(cls): 120 """Clears the global PoolSource cache.""" 121 PoolSource._new_raw.cache_clear()
Clears the global PoolSource cache.
144 def raw_size(self) -> int: 145 """The number of dice in this pool before the keep_tuple is applied.""" 146 return self._raw_size
The number of dice in this pool before the keep_tuple is applied.
163 def unique_dice(self) -> Collection['icepool.Die[T]']: 164 """The collection of unique dice in this pool.""" 165 return self._unique_dice
The collection of unique dice in this pool.
167 def outcomes(self) -> Sequence[T]: 168 """The union of possible outcomes among all dice in this pool in ascending order.""" 169 return self._outcomes
The union of possible outcomes among all dice in this pool in ascending order.
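The source above relies on `icepool.sorted_union` to merge outcome sequences. A rough stdlib stand-in (a sketch of the behavior, not icepool's implementation):

```python
def sorted_union(*outcome_seqs):
    """Merge outcome sequences into one ascending tuple of unique outcomes."""
    seen = set()
    for seq in outcome_seqs:
        seen.update(seq)
    return tuple(sorted(seen))

# Outcomes of a d4 and a d6, merged in ascending order:
sorted_union(range(1, 5), range(1, 7))  # (1, 2, 3, 4, 5, 6)
```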
175 def additive_union( 176 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 177 ) -> 'MultisetExpression[T]': 178 args = tuple( 179 icepool.expression.multiset_expression. 180 implicit_convert_to_expression(arg) for arg in args) 181 if all(isinstance(arg, Pool) for arg in args): 182 pools = cast(tuple[Pool[T], ...], args) 183 keep_tuple: tuple[int, ...] = tuple( 184 reduce(operator.add, (pool.keep_tuple() for pool in pools), 185 ())) 186 if len(keep_tuple) == 0 or all(x == keep_tuple[0] 187 for x in keep_tuple): 188 # All sorted positions count the same, so we can merge the 189 # pools. 190 dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int) 191 for pool in pools: 192 for die, die_count in pool._dice: 193 dice[die] += die_count 194 outcomes = icepool.sorted_union(*(pool.outcomes() 195 for pool in pools)) 196 return Pool._new_from_mapping(dice, outcomes, keep_tuple) 197 return KeepGenerator.additive_union(*args)
The combined elements from all of the multisets.
Specifically, the counts for each outcome will be summed across the arguments.
Same as a + b + c + ....
Example:
[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
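The counting semantics of the example above can be reproduced with `collections.Counter` (a hypothetical sketch, not icepool's implementation):

```python
from collections import Counter

def additive_union(*multisets):
    """Sum the counts for each outcome across all arguments."""
    total = Counter()
    for m in multisets:
        total.update(Counter(m))
    return sorted(total.elements())

additive_union([1, 2, 2, 3], [1, 2, 4])  # [1, 1, 2, 2, 2, 3, 4]
```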
A hash key for this object. This should include a type.
If None, this will not compare equal to any other object.
345def d_pool(die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]': 346 """A `Pool` of standard dice (e.g. d6, d8...). 347 348 Args: 349 die_sizes: A collection of die sizes, which will put one die of that 350 size in the pool for each element. 351 Or, a mapping of die sizes to how many dice of that size to put 352 into the pool. 353 If empty, the pool will be considered to consist of zero zeros. 354 """ 355 if not die_sizes: 356 return Pool({icepool.Die([0]): 0}) 357 if isinstance(die_sizes, Mapping): 358 die_sizes = list( 359 itertools.chain.from_iterable([k] * v 360 for k, v in die_sizes.items())) 361 return Pool(list(icepool.d(x) for x in die_sizes))
A Pool of standard dice (e.g. d6, d8...).
Arguments:
- die_sizes: A collection of die sizes, which will put one die of that size in the pool for each element. Or, a mapping of die sizes to how many dice of that size to put into the pool. If empty, the pool will be considered to consist of zero zeros.
372def z_pool(die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]': 373 """A `Pool` of zero-indexed dice (e.g. z6, z8...). 374 375 Args: 376 die_sizes: A collection of die sizes, which will put one die of that 377 size in the pool for each element. 378 Or, a mapping of die sizes to how many dice of that size to put 379 into the pool. 380 If empty, the pool will be considered to consist of zero zeros. 381 """ 382 if not die_sizes: 383 return Pool({icepool.Die([0]): 0}) 384 if isinstance(die_sizes, Mapping): 385 die_sizes = list( 386 itertools.chain.from_iterable([k] * v 387 for k, v in die_sizes.items())) 388 return Pool(list(icepool.z(x) for x in die_sizes))
A Pool of zero-indexed dice (e.g. z6, z8...).
Arguments:
- die_sizes: A collection of die sizes, which will put one die of that size in the pool for each element. Or, a mapping of die sizes to how many dice of that size to put into the pool. If empty, the pool will be considered to consist of zero zeros.
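Both `d_pool` and `z_pool` normalize a mapping argument into a flat list of die sizes before building the pool, as in the source listings above. A minimal sketch of just that expansion step:

```python
import itertools

def expand_die_sizes(die_sizes):
    """Flatten a {size: count} mapping into one entry per die;
    a plain collection of sizes is passed through unchanged."""
    if isinstance(die_sizes, dict):
        die_sizes = list(
            itertools.chain.from_iterable([k] * v
                                          for k, v in die_sizes.items()))
    return list(die_sizes)

expand_die_sizes({6: 3, 8: 1})  # [6, 6, 6, 8], i.e. a pool of 3d6 and 1d8
```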
18class MultisetGenerator(MultisetExpression[T]): 19 """Abstract base class for generating multisets. 20 21 These include dice pools (`Pool`) and card deals (`Deal`). Most likely you 22 will be using one of these two rather than writing your own subclass of 23 `MultisetGenerator`. 24 25 The multisets are incrementally generated one outcome at a time. 26 For each outcome, a `count` and `weight` are generated, along with a 27 smaller generator to produce the rest of the multiset. 28 29 You can perform simple evaluations using built-in operators and methods in 30 this class. 31 For more complex evaluations and better performance, particularly when 32 multiple generators are involved, you will want to write your own subclass 33 of `MultisetEvaluator`. 34 """ 35 36 _children = () 37 38 @abstractmethod 39 def _make_source(self) -> 'MultisetSource': 40 """Create a source from this generator.""" 41 42 @property 43 def _has_parameter(self) -> bool: 44 return False 45 46 def _prepare( 47 self 48 ) -> Iterator[tuple['tuple[Dungeonlet[T, Any], ...]', 49 'tuple[Questlet[T, Any], ...]', 50 'tuple[MultisetSourceBase[T, Any], ...]', int]]: 51 dungeonlets = (MultisetFreeVariable[T, int](), ) 52 questlets = (MultisetGeneratorQuestlet[T](), ) 53 sources = (self._make_source(), ) 54 weight = 1 55 yield dungeonlets, questlets, sources, weight 56 57 def weightless(self) -> 'MultisetGenerator[T]': 58 """EXPERIMENTAL: Produces a wrapped generator in which each possible multiset is equally weighted. 59 60 In other words, given a generator `g`, 61 ```python 62 g.expand() 63 g.weightless().expand() 64 ``` 65 have the same set of outcomes, but the weightless version has every 66 outcome with quantity 1. Other operators and evaluations can be 67 attached to the result of `weightless()` as usual, in which case the 68 quantity of each outcome is the number of *unique* multisets producing that 69 given outcome, rather than the ordinary probabilistic weighting. 
70 71 `weightless()` requires that each call to the underlying `source.pop()` 72 does not yield duplicate count values; if it does, the evaluation will raise 73 `UnsupportedOrder`. Keeps and mixed pools usually fail this. 74 """ 75 if isinstance(self, icepool.WeightlessGenerator): 76 return self 77 return icepool.WeightlessGenerator(self)
Abstract base class for generating multisets.
These include dice pools (Pool) and card deals (Deal). Most likely you
will be using one of these two rather than writing your own subclass of
MultisetGenerator.
The multisets are incrementally generated one outcome at a time.
For each outcome, a count and weight are generated, along with a
smaller generator to produce the rest of the multiset.
You can perform simple evaluations using built-in operators and methods in
this class.
For more complex evaluations and better performance, particularly when
multiple generators are involved, you will want to write your own subclass
of MultisetEvaluator.
57 def weightless(self) -> 'MultisetGenerator[T]': 58 """EXPERIMENTAL: Produces a wrapped generator in which each possible multiset is equally weighted. 59 60 In other words, given a generator `g`, 61 ```python 62 g.expand() 63 g.weightless().expand() 64 ``` 65 have the same set of outcomes, but the weightless version has every 66 outcome with quantity 1. Other operators and evaluations can be 67 attached to the result of `weightless()` as usual, in which case the 68 quantity of each outcome is the number of *unique* multisets producing that 69 given outcome, rather than the ordinary probabilistic weighting. 70 71 `weightless()` requires that each call to the underlying `source.pop()` 72 does not yield duplicate count values; if it does, the evaluation will raise 73 `UnsupportedOrder`. Keeps and mixed pools usually fail this. 74 """ 75 if isinstance(self, icepool.WeightlessGenerator): 76 return self 77 return icepool.WeightlessGenerator(self)
EXPERIMENTAL: Produces a wrapped generator in which each possible multiset is equally weighted.
In other words, given a generator g,
g.expand()
g.weightless().expand()
have the same set of outcomes, but the weightless version has every
outcome with quantity 1. Other operators and evaluations can be
attached to the result of weightless() as usual, in which case the
quantity of each outcome is the number of unique multisets producing that
given outcome, rather than the ordinary probabilistic weighting.
weightless() requires that each call to the underlying source.pop()
does not yield duplicate count values; if it does, the evaluation will raise
UnsupportedOrder. Keeps and mixed pools usually fail this.
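The effect of `weightless()` can be sketched with stdlib enumeration (a hypothetical illustration for a pool of identical standard dice, not icepool's implementation): ordinary weighting counts ordered rolls, while the weightless view gives each distinct multiset quantity 1.

```python
from collections import Counter
from itertools import product

def weighted_multisets(sides, num_dice):
    """Quantity of each sorted multiset over all ordered rolls."""
    return Counter(tuple(sorted(roll))
                   for roll in product(range(1, sides + 1), repeat=num_dice))

weighted = weighted_multisets(3, 2)
weighted[(1, 2)]  # 2, from the ordered rolls (1, 2) and (2, 1)
# The weightless view: every distinct multiset has quantity 1.
weightless = {multiset: 1 for multiset in weighted}
len(weightless)   # 6 distinct multisets of 2d3
```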
66class MultisetExpression(MultisetExpressionBase[T, int], 67 Expandable[tuple[T, ...]]): 68 """Abstract base class representing an expression that operates on single multisets. 69 70 There are three types of multiset expressions: 71 72 * `MultisetGenerator`, which produce raw outcomes and counts. 73 * `MultisetOperator`, which takes outcomes with one or more counts and 74 produces a count. 75 * `MultisetVariable`, which is a temporary placeholder for some other 76 expression. 77 78 Expression methods can be applied to `MultisetGenerator`s to do simple 79 evaluations. For joint evaluations, try `multiset_function`. 80 81 Use the provided operations to build up more complicated 82 expressions, or to attach a final evaluator. 83 84 Operations include: 85 86 | Operation | Count / notes | 87 |:----------------------------|:--------------------------------------------| 88 | `additive_union`, `+` | `l + r` | 89 | `difference`, `-` | `l - r` | 90 | `intersection`, `&` | `min(l, r)` | 91 | `union`, `\\|` | `max(l, r)` | 92 | `symmetric_difference`, `^` | `abs(l - r)` | 93 | `multiply_counts`, `*` | `count * n` | 94 | `divide_counts`, `//` | `count // n` | 95 | `modulo_counts`, `%` | `count % n` | 96 | `keep_counts` | `count if count >= n else 0` etc. 
| 97 | unary `+` | same as `keep_counts('>=', 0)` | 98 | unary `-` | reverses the sign of all counts | 99 | `unique` | `min(count, n)` | 100 | `keep_outcomes` | `count if outcome in t else 0` | 101 | `drop_outcomes` | `count if outcome not in t else 0` | 102 | `map_counts` | `f(outcome, *counts)` | 103 | `keep`, `[]` | less capable than `KeepGenerator` version | 104 | `highest` | less capable than `KeepGenerator` version | 105 | `lowest` | less capable than `KeepGenerator` version | 106 107 | Evaluator | Summary | 108 |:-------------------------------|:---------------------------------------------------------------------------| 109 | `issubset`, `<=` | Whether the left side's counts are all <= their counterparts on the right | 110 | `issuperset`, `>=` | Whether the left side's counts are all >= their counterparts on the right | 111 | `isdisjoint` | Whether the left side has no positive counts in common with the right side | 112 | `<` | As `<=`, but `False` if the two multisets are equal | 113 | `>` | As `>=`, but `False` if the two multisets are equal | 114 | `==` | Whether the left side has all the same counts as the right side | 115 | `!=` | Whether the left side has any different counts to the right side | 116 | `expand` | All elements in ascending order | 117 | `sum` | Sum of all elements | 118 | `size` | The number of elements | 119 | `empty` | Whether all counts are zero | 120 | `all_counts` | All counts in descending order | 121 | `product_of_counts` | The product of all counts | 122 | `highest_outcome_and_count` | The highest outcome and how many of that outcome | 123 | `largest_count` | The single largest count, aka x-of-a-kind | 124 | `largest_count_and_outcome` | Same but also with the corresponding outcome | 125 | `count_subset`, `//` | The number of times the right side is contained in the left side | 126 | `largest_straight` | Length of longest consecutive sequence | 127 | `largest_straight_and_outcome` | Same but also with the corresponding outcome | 
128 | `all_straights` | Lengths of all consecutive sequences in descending order | 129 """ 130 131 def _make_param(self, 132 name: str, 133 arg_index: int, 134 star_index: int | None = None) -> 'MultisetParameter[T]': 135 if star_index is not None: 136 raise TypeError( 137 'The single int count of MultisetExpression cannot be starred.' 138 ) 139 return icepool.MultisetParameter(name, arg_index, star_index) 140 141 @property 142 def _items_for_cartesian_product( 143 self) -> Sequence[tuple[tuple[T, ...], int]]: 144 expansion = cast('icepool.Die[tuple[T, ...]]', self.expand()) 145 return expansion.items() 146 147 # We need to reiterate this since we override __eq__. 148 __hash__ = MaybeHashKeyed.__hash__ # type: ignore 149 150 # Binary operators. 151 152 def __add__(self, 153 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 154 /) -> 'MultisetExpression[T]': 155 try: 156 return MultisetExpression.additive_union(self, other) 157 except ImplicitConversionError: 158 return NotImplemented 159 160 def __radd__( 161 self, 162 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 163 /) -> 'MultisetExpression[T]': 164 try: 165 return MultisetExpression.additive_union(other, self) 166 except ImplicitConversionError: 167 return NotImplemented 168 169 def additive_union( 170 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 171 ) -> 'MultisetExpression[T]': 172 """The combined elements from all of the multisets. 173 174 Specifically, the counts for each outcome will be summed across the 175 arguments. 176 177 Same as `a + b + c + ...`. 
178 179 Example: 180 ```python 181 [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4] 182 ``` 183 """ 184 expressions = tuple( 185 implicit_convert_to_expression(arg) for arg in args) 186 return icepool.operator.MultisetAdditiveUnion(*expressions) 187 188 def __sub__(self, 189 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 190 /) -> 'MultisetExpression[T]': 191 try: 192 return MultisetExpression.difference(self, other) 193 except ImplicitConversionError: 194 return NotImplemented 195 196 def __rsub__( 197 self, 198 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 199 /) -> 'MultisetExpression[T]': 200 try: 201 return MultisetExpression.difference(other, self) 202 except ImplicitConversionError: 203 return NotImplemented 204 205 def difference( 206 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 207 keep_negative_counts: bool = False) -> 'MultisetExpression[T]': 208 """The elements from the left multiset that are not in any of the others. 209 210 Specifically, for each outcome, the count of that outcome is that of 211 the leftmost argument minus the counts from all other arguments. 212 By default, if the result would be negative, it is set to zero. 213 214 Same as `a - b - c - ...`. 215 216 Example: 217 ```python 218 [1, 2, 2, 3] - [1, 2, 4] -> [2, 3] 219 ``` 220 221 If no arguments are given, the result will be an empty multiset, i.e. 222 all zero counts. 223 224 As a multiset operation, this will only cancel elements 1:1. 225 If you want to drop all elements in a set of outcomes regardless of 226 count, either use `drop_outcomes()` instead, or use a large number of 227 counts on the right side. 228 229 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 230 gives an overview of several opposed dice pool mechanics, including this 231 one. 232 233 Args: 234 *args: All but the leftmost argument will subtract their counts 235 from the leftmost argument. 
236 keep_negative_counts: If set (default False), negative resulting 237 counts will be preserved. 238 """ 239 expressions = tuple( 240 implicit_convert_to_expression(arg) for arg in args) 241 if keep_negative_counts: 242 return icepool.operator.MultisetDifferenceKeepNegative( 243 *expressions) 244 else: 245 return icepool.operator.MultisetDifferenceDropNegative( 246 *expressions) 247 248 def __and__(self, 249 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 250 /) -> 'MultisetExpression[T]': 251 try: 252 return MultisetExpression.intersection(self, other) 253 except ImplicitConversionError: 254 return NotImplemented 255 256 def __rand__( 257 self, 258 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 259 /) -> 'MultisetExpression[T]': 260 try: 261 return MultisetExpression.intersection(other, self) 262 except ImplicitConversionError: 263 return NotImplemented 264 265 def intersection( 266 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 267 ) -> 'MultisetExpression[T]': 268 """The elements that all the multisets have in common. 269 270 Specifically, the count for each outcome is the minimum count among the 271 arguments. 272 273 Same as `a & b & c & ...`. 274 275 Example: 276 ```python 277 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 278 ``` 279 280 As a multiset operation, this will only intersect elements 1:1. 281 If you want to keep all elements in a set of outcomes regardless of 282 count, either use `keep_outcomes()` instead, or use a large number of 283 counts on the right side. 
284 """ 285 expressions = tuple( 286 implicit_convert_to_expression(arg) for arg in args) 287 return icepool.operator.MultisetIntersection(*expressions) 288 289 def __or__(self, 290 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 291 /) -> 'MultisetExpression[T]': 292 try: 293 return MultisetExpression.union(self, other) 294 except ImplicitConversionError: 295 return NotImplemented 296 297 def __ror__(self, 298 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 299 /) -> 'MultisetExpression[T]': 300 try: 301 return MultisetExpression.union(other, self) 302 except ImplicitConversionError: 303 return NotImplemented 304 305 def union( 306 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 307 ) -> 'MultisetExpression[T]': 308 """The most of each outcome that appears in any of the multisets. 309 310 Specifically, the count for each outcome is the maximum count among the 311 arguments. 312 313 Same as `a | b | c | ...`. 314 315 Example: 316 ```python 317 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 318 ``` 319 """ 320 expressions = tuple( 321 implicit_convert_to_expression(arg) for arg in args) 322 return icepool.operator.MultisetUnion(*expressions) 323 324 def __xor__(self, 325 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 326 /) -> 'MultisetExpression[T]': 327 try: 328 return MultisetExpression.symmetric_difference(self, other) 329 except ImplicitConversionError: 330 return NotImplemented 331 332 def __rxor__( 333 self, 334 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 335 /) -> 'MultisetExpression[T]': 336 try: 337 # Symmetric. 338 return MultisetExpression.symmetric_difference(self, other) 339 except ImplicitConversionError: 340 return NotImplemented 341 342 def symmetric_difference( 343 self, 344 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 345 /) -> 'MultisetExpression[T]': 346 """The elements that appear in the left or right multiset but not both. 
347 348 Specifically, the count for each outcome is the absolute difference 349 between the counts from the two arguments. 350 351 Same as `a ^ b`. 352 353 Since this uses the signed counts, if you don't want negative counts 354 to be used from the inputs, you can 355 do `+left ^ +right`. 356 357 Example: 358 ```python 359 [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4] 360 ``` 361 """ 362 return icepool.operator.MultisetSymmetricDifference( 363 self, implicit_convert_to_expression(other)) 364 365 def keep_outcomes( 366 self, outcomes: 367 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 368 /) -> 'MultisetExpression[T]': 369 """Keeps the designated outcomes, and drops the rest by setting their counts to zero. 370 371 This is similar to `intersection()`, except the right side is considered 372 to have unlimited multiplicity. 373 374 Args: 375 outcomes: A callable returning `True` iff the outcome should be kept, 376 or an expression or collection of outcomes to keep. 377 """ 378 if isinstance(outcomes, MultisetExpression): 379 return icepool.operator.MultisetFilterOutcomesBinary( 380 self, outcomes) 381 else: 382 return icepool.operator.MultisetFilterOutcomes(self, 383 outcomes=outcomes) 384 385 def drop_outcomes( 386 self, outcomes: 387 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 388 /) -> 'MultisetExpression[T]': 389 """Drops the designated outcomes by setting their counts to zero, and keeps the rest. 390 391 This is similar to `difference()`, except the right side is considered 392 to have unlimited multiplicity. 393 394 Args: 395 outcomes: A callable returning `True` iff the outcome should be 396 dropped, or an expression or collection of outcomes to drop. 
397 """ 398 if isinstance(outcomes, MultisetExpression): 399 return icepool.operator.MultisetFilterOutcomesBinary(self, 400 outcomes, 401 invert=True) 402 else: 403 return icepool.operator.MultisetFilterOutcomes(self, 404 outcomes=outcomes, 405 invert=True) 406 407 # Adjust counts. 408 409 def map_counts(*args: 410 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 411 function: Callable[..., int]) -> 'MultisetExpression[T]': 412 """Maps the counts to new counts. 413 414 Args: 415 function: A function that takes `outcome, *counts` and produces a 416 combined count. 417 """ 418 expressions = tuple( 419 implicit_convert_to_expression(arg) for arg in args) 420 return icepool.operator.MultisetMapCounts(*expressions, 421 function=function) 422 423 def __mul__(self, n: int) -> 'MultisetExpression[T]': 424 if not isinstance(n, int): 425 return NotImplemented 426 return self.multiply_counts(n) 427 428 # Commutable in this case. 429 def __rmul__(self, n: int) -> 'MultisetExpression[T]': 430 if not isinstance(n, int): 431 return NotImplemented 432 return self.multiply_counts(n) 433 434 def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]': 435 """Multiplies all counts by n. 436 437 Same as `self * n`. 438 439 Example: 440 ```python 441 Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3] 442 ``` 443 """ 444 return icepool.operator.MultisetMultiplyCounts(self, constant=n) 445 446 @overload 447 def __floordiv__(self, other: int) -> 'MultisetExpression[T]': 448 ... 
449 450 @overload 451 def __floordiv__( 452 self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 453 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 454 """Same as divide_counts().""" 455 456 @overload 457 def __floordiv__( 458 self, 459 other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 460 ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]': 461 """Same as count_subset().""" 462 463 def __floordiv__( 464 self, 465 other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 466 ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]': 467 if isinstance(other, int): 468 return self.divide_counts(other) 469 else: 470 return self.count_subset(other) 471 472 def divide_counts(self, n: int, /) -> 'MultisetExpression[T]': 473 """Divides all counts by n (rounding down). 474 475 Same as `self // n`. 476 477 Example: 478 ```python 479 Pool([1, 2, 2, 3]) // 2 -> [2] 480 ``` 481 """ 482 return icepool.operator.MultisetFloordivCounts(self, constant=n) 483 484 def __mod__(self, n: int, /) -> 'MultisetExpression[T]': 485 if not isinstance(n, int): 486 return NotImplemented 487 return icepool.operator.MultisetModuloCounts(self, constant=n) 488 489 def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]': 490 """Takes all counts modulo n. 491 492 Same as `self % n`. 493 494 Example: 495 ```python 496 Pool([1, 2, 2, 3]) % 2 -> [1, 3] 497 ``` 498 """ 499 return self % n 500 501 def __pos__(self) -> 'MultisetExpression[T]': 502 """Sets all negative counts to zero.""" 503 return icepool.operator.MultisetKeepCounts(self, 504 comparison='>=', 505 constant=0) 506 507 def __neg__(self) -> 'MultisetExpression[T]': 508 """As -1 * self.""" 509 return -1 * self 510 511 def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=', 512 '>'], n: int, 513 /) -> 'MultisetExpression[T]': 514 """Keeps counts fitting the comparison, treating the rest as zero. 
515 516 For example, `expression.keep_counts('>=', 2)` would keep pairs, 517 triplets, etc. and drop singles. 518 519 ```python 520 Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3] 521 ``` 522 523 Args: 524 comparison: The comparison to use. 525 n: The number to compare counts against. 526 """ 527 return icepool.operator.MultisetKeepCounts(self, 528 comparison=comparison, 529 constant=n) 530 531 def unique(self, n: int = 1, /) -> 'MultisetExpression[T]': 532 """Counts each outcome at most `n` times. 533 534 For example, `generator.unique(2)` would count each outcome at most 535 twice. 536 537 Example: 538 ```python 539 Pool([1, 2, 2, 3]).unique() -> [1, 2, 3] 540 ``` 541 """ 542 return icepool.operator.MultisetUnique(self, constant=n) 543 544 # Keep highest / lowest. 545 546 @overload 547 def keep( 548 self, index: slice | Sequence[int | EllipsisType] 549 ) -> 'MultisetExpression[T]': 550 ... 551 552 @overload 553 def keep(self, 554 index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]': 555 ... 556 557 def keep( 558 self, index: slice | Sequence[int | EllipsisType] | int 559 ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]': 560 """Selects elements after drawing and sorting. 561 562 This is less capable than the `KeepGenerator` version. 563 In particular, it does not know how many elements it is selecting from, 564 so it must be anchored at the starting end. The advantage is that it 565 can be applied to any expression. 566 567 The valid types of argument are: 568 569 * A `slice`. If both start and stop are provided, they must both be 570 non-negative or both be negative. step is not supported. 571 * A sequence of `int` with `...` (`Ellipsis`) at exactly one end. 572 Each sorted element will be counted that many times, with the 573 `Ellipsis` treated as enough zeros (possibly "negative") to 574 fill the rest of the elements. 575 * An `int`, which evaluates by taking the element at the specified 576 index. 
In this case the result is a `Die`. 577 578 Negative incoming counts are treated as zero counts. 579 580 Use the `[]` operator for the same effect as this method. 581 """ 582 if isinstance(index, int): 583 return icepool.evaluator.keep_evaluator.evaluate(self, index=index) 584 else: 585 return icepool.operator.MultisetKeep(self, index=index) 586 587 @overload 588 def __getitem__( 589 self, index: slice | Sequence[int | EllipsisType] 590 ) -> 'MultisetExpression[T]': 591 ... 592 593 @overload 594 def __getitem__( 595 self, 596 index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]': 597 ... 598 599 def __getitem__( 600 self, index: slice | Sequence[int | EllipsisType] | int 601 ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]': 602 return self.keep(index) 603 604 def lowest(self, 605 keep: int | None = None, 606 drop: int | None = None) -> 'MultisetExpression[T]': 607 """Keep some of the lowest elements from this multiset and drop the rest. 608 609 In contrast to the die and free function versions, this does not 610 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 611 Alternatively, you can perform some other evaluation. 612 613 This requires the outcomes to be evaluated in ascending order. 614 615 Args: 616 keep, drop: These arguments work together: 617 * If neither are provided, the single lowest element 618 will be kept. 619 * If only `keep` is provided, the `keep` lowest elements 620 will be kept. 621 * If only `drop` is provided, the `drop` lowest elements 622 will be dropped and the rest will be kept. 623 * If both are provided, `drop` lowest elements will be dropped, 624 then the next `keep` lowest elements will be kept. 625 """ 626 index = lowest_slice(keep, drop) 627 return self.keep(index) 628 629 def highest(self, 630 keep: int | None = None, 631 drop: int | None = None) -> 'MultisetExpression[T]': 632 """Keep some of the highest elements from this multiset and drop the rest. 
633 634 In contrast to the die and free function versions, this does not 635 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 636 Alternatively, you can perform some other evaluation. 637 638 This requires the outcomes to be evaluated in descending order. 639 640 Args: 641 keep, drop: These arguments work together: 642 * If neither are provided, the single highest element 643 will be kept. 644 * If only `keep` is provided, the `keep` highest elements 645 will be kept. 646 * If only `drop` is provided, the `drop` highest elements 647 will be dropped and the rest will be kept. 648 * If both are provided, `drop` highest elements will be dropped, 649 then the next `keep` highest elements will be kept. 650 """ 651 index = highest_slice(keep, drop) 652 return self.keep(index) 653 654 # Pairing. 655 656 def sort_pair( 657 self, 658 comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 659 other: 'MultisetExpression[T]', 660 /, 661 order: Order = Order.Descending, 662 extra: Literal['early', 'late', 'low', 'high', 'equal', 'keep', 663 'drop'] = 'drop' 664 ) -> 'MultisetExpression[T]': 665 """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then keep the elements from `self` from each pair that fit the given comparison. 666 667 Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of 668 *RISK*. Which pairs did the attacker win? 669 ```python 670 d6.pool(3).highest(2).sort_pair('>', d6.pool(2)) 671 ``` 672 673 Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. 674 In this case the 4 would be blocked since the attacker lost that pair, 675 leaving the attacker's 6. If you want to keep the extra element (3), you 676 can use the `extra` parameter. 
677 ```python 678 679 Pool([6, 4, 3]).sort_pair('>', [5, 5]) -> [6] 680 Pool([6, 4, 3]).sort_pair('>', [5, 5], extra='keep') -> [6, 3] 681 ``` 682 683 Contrast `max_pair_keep()` and `max_pair_drop()`, which first 684 create the maximum number of pairs that fit the comparison, not 685 necessarily in sorted order. 686 In the above example, `max_pair()` would allow the defender to 687 assign their 5s to block both the 4 and the 3. 688 689 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 690 gives an overview of several opposed dice pool mechanics, including this 691 one. 692 693 This is not designed for use with negative counts. 694 695 Args: 696 comparison: The comparison to filter by. If you want to drop rather 697 than keep, use the complementary comparison: 698 * `'=='` vs. `'!='` 699 * `'<='` vs. `'>'` 700 * `'>='` vs. `'<'` 701 other: The other multiset to pair elements with. 702 order: The order in which to sort before forming pairs. 703 Default is descending. 704 extra: If the left operand has more elements than the right 705 operand, this determines what is done with the extra elements. 706 The default is `'drop'`. 707 * `'early'`, `'late'`: The extra elements are considered to 708 occur earlier or later in `order` than their missing 709 counterparts. 710 * `'low'`, `'high'`, `'equal'`: The extra elements are 711 considered to be lower, higher, or equal to their missing 712 counterparts. 713 * `'keep'`, `'drop'`: The extra elements are always kept or 714 dropped. 
715 """ 716 other = implicit_convert_to_expression(other) 717 718 return icepool.operator.MultisetSortPair(self, 719 other, 720 comparison=comparison, 721 sort_order=order, 722 extra=extra) 723 724 def sort_pair_keep_while(self, 725 comparison: Literal['==', '!=', '<=', '<', '>=', 726 '>'], 727 other: 'MultisetExpression[T]', 728 /, 729 order: Order = Order.Descending, 730 extra: Literal['early', 'late', 'low', 'high', 731 'equal', 'continue', 732 'break'] = 'break'): 733 """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and keep elements from `self` while the `comparison` holds, dropping the rest. 734 735 This is not designed for use with negative counts. 736 737 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 738 gives an overview of several opposed dice pool mechanics, including this 739 one. 740 741 Args: 742 comparison: The comparison for which to continue the "while". 743 other: The other multiset to pair elements with. 744 order: The order in which to sort before forming pairs. 745 Default is descending. 746 extra: If the left operand has more elements than the right 747 operand, this determines what is done with the extra elements. 748 The default is `'break'`. 749 * `'early'`, `'late'`: The extra elements are considered to 750 occur earlier or later in `order` than their missing 751 counterparts. 752 * `'low'`, `'high'`, `'equal'`: The extra elements are 753 considered to be lower, higher, or equal to their missing 754 counterparts. 755 * `'continue'`, `'break'`: If the "while" still holds upon 756 reaching the extra elements, whether those elements 757 continue to be kept. 
758 """ 759 other = implicit_convert_to_expression(other) 760 return icepool.operator.MultisetSortPairWhile(self, 761 other, 762 keep=True, 763 comparison=comparison, 764 sort_order=order, 765 extra=extra) 766 767 def sort_pair_drop_while(self, 768 comparison: Literal['==', '!=', '<=', '<', '>=', 769 '>'], 770 other: 'MultisetExpression[T]', 771 /, 772 order: Order = Order.Descending, 773 extra: Literal['early', 'late', 'low', 'high', 774 'equal', 'continue', 775 'break'] = 'break'): 776 """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and drop elements from `self` while the `comparison` holds, keeping the rest. 777 778 This is not designed for use with negative counts. 779 780 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 781 gives an overview of several opposed dice pool mechanics, including this 782 one. 783 784 Args: 785 comparison: The comparison for which to continue the "while". 786 other: The other multiset to pair elements with. 787 order: The order in which to sort before forming pairs. 788 Default is descending. 789 extra: If the left operand has more elements than the right 790 operand, this determines what is done with the extra elements. 791 The default is `'break'`. 792 * `'early'`, `'late'`: The extra elements are considered to 793 occur earlier or later in `order` than their missing 794 counterparts. 795 * `'low'`, `'high'`, `'equal'`: The extra elements are 796 considered to be lower, higher, or equal to their missing 797 counterparts. 798 * `'continue'`, `'break'`: If the "while" still holds upon 799 reaching the extra elements, whether those elements 800 continue to be dropped. 
801 """ 802 other = implicit_convert_to_expression(other) 803 return icepool.operator.MultisetSortPairWhile(self, 804 other, 805 keep=False, 806 comparison=comparison, 807 sort_order=order, 808 extra=extra) 809 810 def max_pair_keep(self, 811 comparison: Literal['==', '<=', '<', '>=', '>'], 812 other: 'MultisetExpression[T]', 813 priority: Literal['low', 'high'] | None = None, 814 /) -> 'MultisetExpression[T]': 815 """EXPERIMENTAL: Form as many pairs of elements between `self` and `other` fitting the comparison, then keep the paired elements from `self`. 816 817 This pairs elements of `self` with elements of `other`, such that in 818 each pair the element from `self` fits the `comparison` with the 819 element from `other`. As many such pairs of elements will be created as 820 possible, prioritizing either the lowest or highest possible elements. 821 Finally, the paired elements from `self` are kept, dropping the rest. 822 823 This requires that outcomes be evaluated in descending order if 824 prioritizing high elements, or ascending order if prioritizing low 825 elements. 826 827 This is not designed for use with negative counts. 828 829 Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 830 3d6. Defender dice can be used to block attacker dice of equal or lesser 831 value, and the defender prefers to block the highest attacker dice 832 possible. Which attacker dice were blocked? 833 ```python 834 d6.pool(4).max_pair_keep('<=', d6.pool(3), 'high').sum() 835 ``` 836 837 Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. 838 Then the result would be [4, 3]. 839 ```python 840 Pool([6, 4, 3, 1]).max_pair('<=', [5, 5], 'high') 841 -> [4, 3] 842 ``` 843 844 The complement of this is `max_pair_drop`, which drops the paired 845 elements from `self` and keeps the rest. 846 847 Contrast `sort_pair()`, which first creates pairs in 848 sorted order and then filters them by `comparison`. 
849 In the above example, `sort_pair()` would force the defender to pair 850 against the 6 and the 4, which would only allow them to block the 4 851 and let the 6, 3, and 1 through. 852 853 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 854 gives an overview of several opposed dice pool mechanics, including this 855 one. 856 857 Args: 858 comparison: The comparison that the pairs must satisfy. 859 `'=='` is the same as `+self & +other`. 860 other: The other multiset to pair elements with. 861 priority: Optional parameter to prioritize pairing `'low'` or 862 `'high'` elements. Note that this does not change the number of 863 elements that are paired. 864 """ 865 other = implicit_convert_to_expression(other) 866 if comparison == '==': 867 return +self & +other 868 869 cls: Type[icepool.operator.MultisetMaxPairLate] | Type[ 870 icepool.operator.MultisetMaxPairEarly] 871 872 if priority is None: 873 order = Order.Ascending 874 left_first, tie, _ = compute_lexi_tuple(comparison, order) 875 if left_first: 876 order = Order.Descending 877 cls = icepool.operator.MultisetMaxPairLate 878 else: 879 match priority: 880 case 'low': 881 order = Order.Ascending 882 case 'high': 883 order = Order.Descending 884 case _: 885 raise ValueError("priority must be 'low' or 'high'.") 886 887 left_first, tie, _ = compute_lexi_tuple(comparison, order) 888 889 if left_first: 890 cls = icepool.operator.MultisetMaxPairEarly 891 else: 892 cls = icepool.operator.MultisetMaxPairLate 893 894 return cls(self, 895 other, 896 order=order, 897 pair_equal=cast(bool, tie), 898 keep=True) 899 900 def max_pair_drop(self, 901 comparison: Literal['==', '<=', '<', '>=', '>'], 902 other: 'MultisetExpression[T]', 903 priority: Literal['low', 'high'] | None = None, 904 /) -> 'MultisetExpression[T]': 905 """EXPERIMENTAL: Form as many pairs of elements between `self` and `other` fitting the comparison, then drop the paired elements from `self`.
906 907 This pairs elements of `self` with elements of `other`, such that in 908 each pair the element from `self` fits the `comparison` with the 909 element from `other`. As many such pairs of elements will be created as 910 possible, prioritizing either the lowest or highest possible elements. 911 Finally, the paired elements from `self` are dropped, keeping the rest. 912 913 This requires that outcomes be evaluated in descending order if 914 prioritizing high elements, or ascending order if prioritizing low 915 elements. 916 917 This is not designed for use with negative counts. 918 919 Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 920 3d6. Defender dice can be used to block attacker dice of equal or lesser 921 value, and the defender prefers to block the highest attacker dice 922 possible. Which attacker dice were NOT blocked? 923 ```python 924 d6.pool(4).max_pair_drop('<=', d6.pool(3), 'high').sum() 925 ``` 926 927 Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. 928 Then the result would be [6, 1]. 929 ```python 930 Pool([6, 4, 3, 1]).max_pair_drop('<=', [5, 5], 'high') 931 -> [6, 1] 932 ``` 933 934 The complement of this is `max_pair_keep`, which keeps the paired 935 elements from `self` and drops the rest. 936 937 Contrast `sort_pair()`, which first creates pairs in 938 sorted order and then filters them by `comparison`. 939 In the above example, `sort_pair()` would force the defender to pair 940 against the 6 and the 4, which would only allow them to block the 4 941 and let the 6, 3, and 1 through. 942 943 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 944 gives an overview of several opposed dice pool mechanics, including this 945 one. 946 947 Args: 948 comparison: The comparison that the pairs must satisfy. 949 `'=='` is the same as `self - other`. 950 other: The other multiset to pair elements with.
951 priority: Optional parameter to prioritize pairing `'low'` or 952 `'high'` elements. Note that this does not change the number of 953 elements that are paired. 954 """ 955 other = implicit_convert_to_expression(other) 956 if comparison == '==': 957 return self - other 958 959 cls: Type[icepool.operator.MultisetMaxPairLate] | Type[ 960 icepool.operator.MultisetMaxPairEarly] 961 962 if priority is None: 963 order = Order.Ascending 964 left_first, tie, _ = compute_lexi_tuple(comparison, order) 965 if left_first: 966 order = Order.Descending 967 cls = icepool.operator.MultisetMaxPairLate 968 else: 969 match priority: 970 case 'low': 971 order = Order.Ascending 972 case 'high': 973 order = Order.Descending 974 case _: 975 raise ValueError("priority must be 'low' or 'high'.") 976 977 left_first, tie, _ = compute_lexi_tuple(comparison, order) 978 979 if left_first: 980 cls = icepool.operator.MultisetMaxPairEarly 981 else: 982 cls = icepool.operator.MultisetMaxPairLate 983 984 return cls(self, 985 other, 986 order=order, 987 pair_equal=cast(bool, tie), 988 keep=False) 989 990 def versus_all(self, comparison: Literal['<=', '<', '>=', '>'], 991 other: 'MultisetExpression[T]') -> 'MultisetExpression[T]': 992 """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against all elements of the other multiset. 993 994 Contrast `versus_any()`. 995 996 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 997 gives an overview of several opposed dice pool mechanics, including this 998 one. 999 1000 Args: 1001 comparison: One of `'<=', '<', '>=', '>'`. 1002 other: The other multiset to compare to. Negative counts are treated 1003 as 0.
1004 """ 1005 other = implicit_convert_to_expression(other) 1006 lexi_tuple, order = compute_lexi_tuple_with_zero_right_first( 1007 comparison) 1008 return icepool.operator.MultisetVersus(self, 1009 other, 1010 lexi_tuple=lexi_tuple, 1011 order=order) 1012 1013 def versus_any(self, comparison: Literal['<=', '<', '>=', '>'], 1014 other: 'MultisetExpression[T]') -> 'MultisetExpression[T]': 1015 """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against any element of the other multiset. 1016 1017 Contrast `versus_all()`. 1018 1019 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 1020 gives an overview of several opposed dice pool mechanics, including this 1021 one. 1022 1023 Args: 1024 comparison: One of `'<=', '<', '>=', '>'`. 1025 other: The other multiset to compare to. Negative counts are treated 1026 as 0. 1027 """ 1028 other = implicit_convert_to_expression(other) 1029 lexi_tuple, order = compute_lexi_tuple_with_zero_right_first( 1030 comparison) 1031 lexi_tuple = tuple(reversed(lexi_tuple)) # type: ignore 1032 order = -order 1033 1034 return icepool.operator.MultisetVersus(self, 1035 other, 1036 lexi_tuple=lexi_tuple, 1037 order=order) 1038 1039 # Evaluations. 1040 1041 def expand( 1042 self, 1043 order: Order = Order.Ascending 1044 ) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]': 1045 """Evaluation: All elements of the multiset in ascending order. 1046 1047 This is expensive and not recommended unless there are few possibilities. 1048 1049 Args: 1050 order: Whether the elements are in ascending (default) or descending 1051 order. 
1052 """ 1053 return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order) 1054 1055 def sum( 1056 self, 1057 map: Callable[[T], U] | Mapping[T, U] | None = None 1058 ) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]': 1059 """Evaluation: The sum of all elements.""" 1060 if map is None: 1061 return icepool.evaluator.sum_evaluator.evaluate(self) 1062 else: 1063 return icepool.evaluator.SumEvaluator(map).evaluate(self) 1064 1065 def size(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 1066 """Evaluation: The total number of elements in the multiset. 1067 1068 This is usually not very interesting unless some other operation is 1069 performed first. Examples: 1070 1071 `generator.unique().size()` will count the number of unique outcomes. 1072 1073 `(generator & [4, 5, 6]).size()` will count up to one each of 1074 4, 5, and 6. 1075 """ 1076 return icepool.evaluator.size_evaluator.evaluate(self) 1077 1078 def empty( 1079 self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1080 """Evaluation: Whether the multiset contains only zero counts.""" 1081 return icepool.evaluator.empty_evaluator.evaluate(self) 1082 1083 def product_of_counts( 1084 self, ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 1085 """Evaluation: The product of counts in the multiset.""" 1086 return icepool.evaluator.product_of_counts_evaluator.evaluate(self) 1087 1088 def highest_outcome_and_count( 1089 self 1090 ) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]': 1091 """Evaluation: The highest outcome with positive count, along with that count. 1092 1093 If no outcomes have positive count, the min outcome will be returned with 0 count. 
1094 """ 1095 return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate( 1096 self) 1097 1098 def all_counts( 1099 self, 1100 filter: int | Literal['all'] = 1 1101 ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]': 1102 """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets. 1103 1104 The sizes are in **descending** order. 1105 1106 Args: 1107 filter: Any counts below this value will not be in the output. 1108 For example, `filter=2` will only produce pairs and better. 1109 If `'all'`, no filtering will be done. 1110 1111 Why not just place `keep_counts('>=')` before this? 1112 `keep_counts('>=')` operates by setting counts to zero, so we 1113 would still need an argument to specify whether we want to 1114 output zero counts. So we might as well use the argument to do 1115 both. 1116 """ 1117 return icepool.evaluator.AllCountsEvaluator( 1118 filter=filter).evaluate(self) 1119 1120 def largest_count( 1121 self, 1122 *, 1123 wild: Callable[[T], bool] | Collection[T] | None = None, 1124 wild_low: Callable[[T], bool] | Collection[T] | None = None, 1125 wild_high: Callable[[T], bool] | Collection[T] | None = None, 1126 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 1127 """Evaluation: The size of the largest matching set among the elements. 1128 1129 Args: 1130 wild: If provided, the counts of these outcomes will be combined 1131 with the counts of any other outcomes. 1132 wild_low: These wilds can only be combined with outcomes that they 1133 are lower than. 1134 wild_high: These wilds can only be combined with outcomes that they 1135 are higher than. 
1136 """ 1137 if wild is None and wild_low is None and wild_high is None: 1138 return icepool.evaluator.largest_count_evaluator.evaluate(self) 1139 else: 1140 return icepool.evaluator.LargestCountWithWildEvaluator( 1141 wild=wild, wild_low=wild_low, 1142 wild_high=wild_high).evaluate(self) 1143 1144 def largest_count_and_outcome( 1145 self 1146 ) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]': 1147 """Evaluation: The largest matching set among the elements and the corresponding outcome.""" 1148 return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate( 1149 self) 1150 1151 def __rfloordiv__( 1152 self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 1153 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 1154 return implicit_convert_to_expression(other).count_subset(self) 1155 1156 def count_subset( 1157 self, 1158 divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1159 /, 1160 *, 1161 empty_divisor: int | None = None 1162 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 1163 """Evaluation: The number of times the divisor is contained in this multiset. 1164 1165 Args: 1166 divisor: The multiset to divide by. 1167 empty_divisor: If the divisor is empty, the outcome will be this. 1168 If not set, `ZeroDivisionError` will be raised for an empty 1169 right side. 1170 1171 Raises: 1172 ZeroDivisionError: If the divisor may be empty and 1173 `empty_divisor` is not set. 1174 """ 1175 divisor = implicit_convert_to_expression(divisor) 1176 return icepool.evaluator.CountSubsetEvaluator( 1177 empty_divisor=empty_divisor).evaluate(self, divisor) 1178 1179 def largest_straight( 1180 self: 'MultisetExpression[int]' 1181 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]': 1182 """Evaluation: The size of the largest straight among the elements. 1183 1184 Outcomes must be `int`s. 
1185 """ 1186 return icepool.evaluator.largest_straight_evaluator.evaluate(self) 1187 1188 def largest_straight_and_outcome( 1189 self: 'MultisetExpression[int]', 1190 priority: Literal['low', 'high'] = 'high', 1191 / 1192 ) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]': 1193 """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight. 1194 1195 Straight size is prioritized first, then the outcome. 1196 1197 Outcomes must be `int`s. 1198 1199 Args: 1200 priority: Controls which outcome within the straight is returned, 1201 and which straight is picked if there is a tie for largest 1202 straight. 1203 """ 1204 if priority == 'high': 1205 return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate( 1206 self) 1207 elif priority == 'low': 1208 return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate( 1209 self) 1210 else: 1211 raise ValueError("priority must be 'low' or 'high'.") 1212 1213 def all_straights( 1214 self: 'MultisetExpression[int]' 1215 ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]': 1216 """Evaluation: The sizes of all straights. 1217 1218 The sizes are in **descending** order. 1219 1220 Each element can only contribute to one straight, though duplicate 1221 elements can produces straights that overlap in outcomes. In this case, 1222 elements are preferentially assigned to the longer straight. 1223 """ 1224 return icepool.evaluator.all_straights_evaluator.evaluate(self) 1225 1226 def all_straights_reduce_counts( 1227 self: 'MultisetExpression[int]', 1228 reducer: Callable[[int, int], int] = operator.mul 1229 ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]': 1230 """Experimental: All straights with a reduce operation on the counts. 1231 1232 This can be used to evaluate e.g. cribbage-style straight counting. 
1233 1234 The result is a tuple of `(run_length, run_score)`s. 1235 """ 1236 return icepool.evaluator.AllStraightsReduceCountsEvaluator( 1237 reducer=reducer).evaluate(self) 1238 1239 def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1240 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1241 order: Order = Order.Descending, 1242 limit: int | None = None): 1243 """Experimental: Returns the indexes of the originating multisets for each rank in their additive union. 1244 1245 Example: 1246 ```python 1247 MultisetExpression.argsort([10, 9, 5], [9, 9]) 1248 ``` 1249 produces 1250 ```python 1251 ((0,), (0, 1, 1), (0,)) 1252 ``` 1253 1254 Args: 1255 self, *args: The multiset expressions to be evaluated. 1256 order: Which order the ranks are to be emitted. Default is descending. 1257 limit: How many ranks to emit. Default will emit all ranks, which 1258 makes the length of each outcome equal to 1259 `additive_union(+self, +arg1, +arg2, ...).unique().size()` 1260 """ 1261 self = implicit_convert_to_expression(self) 1262 converted_args = [implicit_convert_to_expression(arg) for arg in args] 1263 return icepool.evaluator.ArgsortEvaluator(order=order, 1264 limit=limit).evaluate( 1265 self, *converted_args) 1266 1267 # Comparators. 
1268 1269 def _compare( 1270 self, 1271 right: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1272 operation_class: Type['icepool.evaluator.ComparisonEvaluator'], 1273 *, 1274 truth_value_callback: 'Callable[[], bool] | None' = None 1275 ) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1276 right = icepool.implicit_convert_to_expression(right) 1277 1278 if truth_value_callback is not None: 1279 1280 def data_callback() -> Counts[bool]: 1281 die = cast('icepool.Die[bool]', 1282 operation_class().evaluate(self, right)) 1283 if not isinstance(die, icepool.Die): 1284 raise TypeError('Did not resolve to a die.') 1285 return die._data 1286 1287 return icepool.DieWithTruth(data_callback, truth_value_callback) 1288 else: 1289 return operation_class().evaluate(self, right) 1290 1291 def __lt__(self, 1292 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1293 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1294 try: 1295 return self._compare(other, 1296 icepool.evaluator.IsProperSubsetEvaluator) 1297 except TypeError: 1298 return NotImplemented 1299 1300 def __le__(self, 1301 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1302 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1303 try: 1304 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 1305 except TypeError: 1306 return NotImplemented 1307 1308 def issubset( 1309 self, 1310 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1311 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1312 """Evaluation: Whether this multiset is a subset of the other multiset. 1313 1314 Specifically, if this multiset has a lesser or equal count for each 1315 outcome than the other multiset, this evaluates to `True`; 1316 if there is some outcome for which this multiset has a greater count 1317 than the other multiset, this evaluates to `False`. 1318 1319 `issubset` is the same as `self <= other`. 
1320 1321 `self < other` evaluates a proper subset relation, which is the same 1322 except the result is `False` if the two multisets are exactly equal. 1323 """ 1324 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 1325 1326 def __gt__(self, 1327 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1328 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1329 try: 1330 return self._compare(other, 1331 icepool.evaluator.IsProperSupersetEvaluator) 1332 except TypeError: 1333 return NotImplemented 1334 1335 def __ge__(self, 1336 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1337 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1338 try: 1339 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1340 except TypeError: 1341 return NotImplemented 1342 1343 def issuperset( 1344 self, 1345 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1346 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1347 """Evaluation: Whether this multiset is a superset of the other multiset. 1348 1349 Specifically, if this multiset has a greater or equal count for each 1350 outcome than the other multiset, this evaluates to `True`; 1351 if there is some outcome for which this multiset has a lesser count 1352 than the other multiset, this evaluates to `False`. 1353 1354 A typical use of this evaluation is testing for the presence of a 1355 combo of cards in a hand, e.g. 1356 1357 ```python 1358 deck.deal(5) >= ['a', 'a', 'b'] 1359 ``` 1360 1361 represents the chance that a deal of 5 cards contains at least two 'a's 1362 and one 'b'. 1363 1364 `issuperset` is the same as `self >= other`. 1365 1366 `self > other` evaluates a proper superset relation, which is the same 1367 except the result is `False` if the two multisets are exactly equal. 
1368 """ 1369 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1370 1371 def __eq__( # type: ignore 1372 self, 1373 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1374 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1375 try: 1376 1377 def truth_value_callback() -> bool: 1378 return self.equals(other) 1379 1380 return self._compare(other, 1381 icepool.evaluator.IsEqualSetEvaluator, 1382 truth_value_callback=truth_value_callback) 1383 except TypeError: 1384 return NotImplemented 1385 1386 def __ne__( # type: ignore 1387 self, 1388 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1389 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1390 try: 1391 1392 def truth_value_callback() -> bool: 1393 return not self.equals(other) 1394 1395 return self._compare(other, 1396 icepool.evaluator.IsNotEqualSetEvaluator, 1397 truth_value_callback=truth_value_callback) 1398 except TypeError: 1399 return NotImplemented 1400 1401 def isdisjoint( 1402 self, 1403 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1404 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1405 """Evaluation: Whether this multiset is disjoint from the other multiset. 1406 1407 Specifically, this evaluates to `False` if there is any outcome for 1408 which both multisets have positive count, and `True` if there is not. 1409 1410 Negative incoming counts are treated as zero counts. 1411 """ 1412 return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator) 1413 1414 # Lexicographic comparisons. 1415 1416 def leximin( 1417 self, 1418 comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'], 1419 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1420 /, 1421 extra: Literal['low', 'high', 'drop'] = 'high' 1422 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 1423 """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in ascending order. 
1424 1425 Compares the lowest element of each multiset; if they are equal, 1426 compares the next-lowest element, and so on. 1427 1428 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 1429 gives an overview of several opposed dice pool mechanics, including this 1430 one. 1431 1432 Args: 1433 comparison: The comparison to use. 1434 other: The multiset to compare to. 1435 extra: If one side has more elements than the other, how the extra 1436 elements are considered compared to their missing counterparts. 1437 """ 1438 lexi_tuple = compute_lexi_tuple_with_extra(comparison, Order.Ascending, 1439 extra) 1440 return icepool.evaluator.lexi_comparison_evaluator.evaluate( 1441 self, 1442 implicit_convert_to_expression(other), 1443 sort_order=Order.Ascending, 1444 lexi_tuple=lexi_tuple) 1445 1446 def leximax( 1447 self, 1448 comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'], 1449 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1450 /, 1451 extra: Literal['low', 'high', 'drop'] = 'high' 1452 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 1453 """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in descending order. 1454 1455 Compares the highest element of each multiset; if they are equal, 1456 compares the next-highest element, and so on. 1457 1458 [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) 1459 gives an overview of several opposed dice pool mechanics, including this 1460 one. 1461 1462 Args: 1463 comparison: The comparison to use. 1464 other: The multiset to compare to. 1465 extra: If one side has more elements than the other, how the extra 1466 elements are considered compared to their missing counterparts. 
1467 """ 1468 lexi_tuple = compute_lexi_tuple_with_extra(comparison, 1469 Order.Descending, extra) 1470 return icepool.evaluator.lexi_comparison_evaluator.evaluate( 1471 self, 1472 implicit_convert_to_expression(other), 1473 sort_order=Order.Descending, 1474 lexi_tuple=lexi_tuple) 1475 1476 # For helping debugging / testing. 1477 def force_order(self, force_order: Order) -> 'MultisetExpression[T]': 1478 """Forces outcomes to be seen by the evaluator in the given order. 1479 1480 This can be useful for debugging / testing. 1481 """ 1482 if force_order == Order.Any: 1483 return self 1484 return icepool.operator.MultisetForceOrder(self, 1485 force_order=force_order)
Abstract base class representing an expression that operates on single multisets.
There are three types of multiset expressions:
* `MultisetGenerator`, which produces raw outcomes and counts.
* `MultisetOperator`, which takes outcomes with one or more counts and produces a count.
* `MultisetVariable`, which is a temporary placeholder for some other expression.
Expression methods can be applied to `MultisetGenerator`s to do simple evaluations. For joint evaluations, try `multiset_function`.

Use the provided operations to build up more complicated expressions, or to attach a final evaluator.
Operations include:
| Operation | Count / notes |
|---|---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `modulo_counts`, `%` | `count % n` |
| `keep_counts` | `count if count >= n else 0` etc. |
| unary `+` | same as `keep_counts('>=', 0)` |
| unary `-` | reverses the sign of all counts |
| `unique` | `min(count, n)` |
| `keep_outcomes` | `count if outcome in t else 0` |
| `drop_outcomes` | `count if outcome not in t else 0` |
| `map_counts` | `f(outcome, *counts)` |
| `keep`, `[]` | less capable than the `KeepGenerator` version |
| `highest` | less capable than the `KeepGenerator` version |
| `lowest` | less capable than the `KeepGenerator` version |
| Evaluator | Summary |
|---|---|
| `issubset`, `<=` | Whether the left side's counts are all <= their counterparts on the right |
| `issuperset`, `>=` | Whether the left side's counts are all >= their counterparts on the right |
| `isdisjoint` | Whether the left side has no positive counts in common with the right side |
| `<` | As `<=`, but False if the two multisets are equal |
| `>` | As `>=`, but False if the two multisets are equal |
| `==` | Whether the left side has all the same counts as the right side |
| `!=` | Whether the left side has any different counts to the right side |
| `expand` | All elements in ascending order |
| `sum` | Sum of all elements |
| `size` | The number of elements |
| `empty` | Whether all counts are zero |
| `all_counts` | All counts in descending order |
| `product_of_counts` | The product of all counts |
| `highest_outcome_and_count` | The highest outcome and how many of that outcome |
| `largest_count` | The single largest count, aka x-of-a-kind |
| `largest_count_and_outcome` | Same but also with the corresponding outcome |
| `count_subset`, `//` | The number of times the right side is contained in the left side |
| `largest_straight` | Length of longest consecutive sequence |
| `largest_straight_and_outcome` | Same but also with the corresponding outcome |
| `all_straights` | Lengths of all consecutive sequences in descending order |
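Each evaluator reduces a multiset to a single statistic. A plain-Python sketch of two of them (illustrative only, not icepool's implementation, which evaluates over all possible rolls rather than one concrete roll):

```python
from collections import Counter

def largest_count(multiset):
    """The single largest count, aka x-of-a-kind."""
    counts = Counter(multiset)
    return max(counts.values(), default=0)

def largest_straight(multiset):
    """Length of the longest run of consecutive outcomes."""
    outcomes = sorted(set(multiset))
    best = run = 0
    for i, outcome in enumerate(outcomes):
        if i > 0 and outcome == outcomes[i - 1] + 1:
            run += 1
        else:
            run = 1
        best = max(best, run)
    return best

roll = [3, 3, 4, 5, 5, 6]
largest_count(roll)     # pair of 3s (or 5s)
largest_straight(roll)  # 3, 4, 5, 6
```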
````python
def additive_union(
    *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
) -> 'MultisetExpression[T]':
    """The combined elements from all of the multisets.

    Specifically, the counts for each outcome will be summed across the
    arguments.

    Same as `a + b + c + ...`.

    Example:
    ```python
    [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
    ```
    """
    expressions = tuple(implicit_convert_to_expression(arg) for arg in args)
    return icepool.operator.MultisetAdditiveUnion(*expressions)
````
The combined elements from all of the multisets.

Specifically, the counts for each outcome will be summed across the arguments.

Same as `a + b + c + ...`.

Example:
```python
[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
```
````python
def difference(
    *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
    keep_negative_counts: bool = False) -> 'MultisetExpression[T]':
    """The elements from the left multiset that are not in any of the others.

    Specifically, for each outcome, the count of that outcome is that of
    the leftmost argument minus the counts from all other arguments.
    By default, if the result would be negative, it is set to zero.

    Same as `a - b - c - ...`.

    Example:
    ```python
    [1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
    ```

    If no arguments are given, the result will be an empty multiset, i.e.
    all zero counts.

    As a multiset operation, this will only cancel elements 1:1.
    If you want to drop all elements in a set of outcomes regardless of
    count, either use `drop_outcomes()` instead, or use a large number of
    counts on the right side.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        *args: All but the leftmost argument will subtract their counts
            from the leftmost argument.
        keep_negative_counts: If set (default False), negative resulting
            counts will be preserved.
    """
    expressions = tuple(implicit_convert_to_expression(arg) for arg in args)
    if keep_negative_counts:
        return icepool.operator.MultisetDifferenceKeepNegative(*expressions)
    else:
        return icepool.operator.MultisetDifferenceDropNegative(*expressions)
````
The elements from the left multiset that are not in any of the others.

Specifically, for each outcome, the count of that outcome is that of the leftmost argument minus the counts from all other arguments. By default, if the result would be negative, it is set to zero.

Same as `a - b - c - ...`.

Example:
```python
[1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
```

If no arguments are given, the result will be an empty multiset, i.e. all zero counts.

As a multiset operation, this will only cancel elements 1:1. If you want to drop all elements in a set of outcomes regardless of count, either use `drop_outcomes()` instead, or use a large number of counts on the right side.

[This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) gives an overview of several opposed dice pool mechanics, including this one.

Arguments:
- `*args`: All but the leftmost argument will subtract their counts from the leftmost argument.
- `keep_negative_counts`: If set (default False), negative resulting counts will be preserved.
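The 1:1 cancellation described above can be contrasted with `drop_outcomes` in a small `collections.Counter` sketch (hypothetical helper names, for illustration only):

```python
from collections import Counter

def multiset_difference(l, r):
    """1:1 cancellation: each right-side element cancels one left-side copy."""
    result = Counter(l)
    result.subtract(Counter(r))
    # Drop zero and negative counts, as the default difference does.
    return Counter({o: c for o, c in result.items() if c > 0})

def drop_outcomes(l, outcomes):
    """Drops every copy of the given outcomes, regardless of count."""
    return Counter({o: c for o, c in Counter(l).items() if o not in outcomes})

left = [1, 2, 2, 3]
diff = multiset_difference(left, [1, 2, 4])  # cancels one 1 and one 2
dropped = drop_outcomes(left, {1, 2})        # removes all 1s and 2s
```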
````python
def intersection(
    *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
) -> 'MultisetExpression[T]':
    """The elements that all the multisets have in common.

    Specifically, the count for each outcome is the minimum count among the
    arguments.

    Same as `a & b & c & ...`.

    Example:
    ```python
    [1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
    ```

    As a multiset operation, this will only intersect elements 1:1.
    If you want to keep all elements in a set of outcomes regardless of
    count, either use `keep_outcomes()` instead, or use a large number of
    counts on the right side.
    """
    expressions = tuple(implicit_convert_to_expression(arg) for arg in args)
    return icepool.operator.MultisetIntersection(*expressions)
````
The elements that all the multisets have in common.

Specifically, the count for each outcome is the minimum count among the arguments.

Same as `a & b & c & ...`.

Example:
```python
[1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
```

As a multiset operation, this will only intersect elements 1:1. If you want to keep all elements in a set of outcomes regardless of count, either use `keep_outcomes()` instead, or use a large number of counts on the right side.
````python
def union(
    *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
) -> 'MultisetExpression[T]':
    """The most of each outcome that appear in any of the multisets.

    Specifically, the count for each outcome is the maximum count among the
    arguments.

    Same as `a | b | c | ...`.

    Example:
    ```python
    [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
    ```
    """
    expressions = tuple(implicit_convert_to_expression(arg) for arg in args)
    return icepool.operator.MultisetUnion(*expressions)
````
The most of each outcome that appear in any of the multisets.

Specifically, the count for each outcome is the maximum count among the arguments.

Same as `a | b | c | ...`.

Example:
```python
[1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
```
````python
def symmetric_difference(
    self,
    other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
    /) -> 'MultisetExpression[T]':
    """The elements that appear in the left or right multiset but not both.

    Specifically, the count for each outcome is the absolute difference
    between the counts from the two arguments.

    Same as `a ^ b`.

    Specifically, this produces the absolute difference between counts.
    If you don't want negative counts to be used from the inputs, you can
    do `+left ^ +right`.

    Example:
    ```python
    [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
    ```
    """
    return icepool.operator.MultisetSymmetricDifference(
        self, implicit_convert_to_expression(other))
````
The elements that appear in the left or right multiset but not both.

Specifically, the count for each outcome is the absolute difference between the counts from the two arguments.

Same as `a ^ b`.

Specifically, this produces the absolute difference between counts. If you don't want negative counts to be used from the inputs, you can do `+left ^ +right`.

Example:
```python
[1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
```
```python
def keep_outcomes(
    self,
    outcomes: 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
    /) -> 'MultisetExpression[T]':
    """Keeps the designated outcomes, and drops the rest by setting their counts to zero.

    This is similar to `intersection()`, except the right side is considered
    to have unlimited multiplicity.

    Args:
        outcomes: A callable returning `True` iff the outcome should be kept,
            or an expression or collection of outcomes to keep.
    """
    if isinstance(outcomes, MultisetExpression):
        return icepool.operator.MultisetFilterOutcomesBinary(self, outcomes)
    else:
        return icepool.operator.MultisetFilterOutcomes(self, outcomes=outcomes)
```
Keeps the designated outcomes, and drops the rest by setting their counts to zero.

This is similar to `intersection()`, except the right side is considered to have unlimited multiplicity.

Arguments:
- `outcomes`: A callable returning `True` iff the outcome should be kept, or an expression or collection of outcomes to keep.
```python
def drop_outcomes(
    self,
    outcomes: 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
    /) -> 'MultisetExpression[T]':
    """Drops the designated outcomes by setting their counts to zero, and keeps the rest.

    This is similar to `difference()`, except the right side is considered
    to have unlimited multiplicity.

    Args:
        outcomes: A callable returning `True` iff the outcome should be
            dropped, or an expression or collection of outcomes to drop.
    """
    if isinstance(outcomes, MultisetExpression):
        return icepool.operator.MultisetFilterOutcomesBinary(self,
                                                             outcomes,
                                                             invert=True)
    else:
        return icepool.operator.MultisetFilterOutcomes(self,
                                                       outcomes=outcomes,
                                                       invert=True)
```
Drops the designated outcomes by setting their counts to zero, and keeps the rest.

This is similar to `difference()`, except the right side is considered to have unlimited multiplicity.

Arguments:
- `outcomes`: A callable returning `True` iff the outcome should be dropped, or an expression or collection of outcomes to drop.
```python
def map_counts(*args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
               function: Callable[..., int]) -> 'MultisetExpression[T]':
    """Maps the counts to new counts.

    Args:
        function: A function that takes `outcome, *counts` and produces a
            combined count.
    """
    expressions = tuple(implicit_convert_to_expression(arg) for arg in args)
    return icepool.operator.MultisetMapCounts(*expressions, function=function)
```
Maps the counts to new counts.

Arguments:
- `function`: A function that takes `outcome, *counts` and produces a combined count.
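The `function(outcome, *counts)` convention can be sketched on concrete multisets (an illustrative re-implementation with a hypothetical helper name, not icepool's code):

```python
from collections import Counter

def map_counts(function, *multisets):
    """Combine counts outcome-by-outcome via function(outcome, *counts)."""
    counters = [Counter(m) for m in multisets]
    outcomes = set().union(*(c.keys() for c in counters))
    result = Counter()
    for outcome in sorted(outcomes):
        count = function(outcome, *(c[outcome] for c in counters))
        if count > 0:
            result[outcome] = count
    return result

# e.g. count an outcome only as often as it appears on both sides
both = map_counts(lambda outcome, a, b: min(a, b), [1, 2, 2], [2, 2, 3])
```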
````python
def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]':
    """Multiplies all counts by n.

    Same as `self * n`.

    Example:
    ```python
    Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
    ```
    """
    return icepool.operator.MultisetMultiplyCounts(self, constant=n)
````
Multiplies all counts by n.

Same as `self * n`.

Example:
```python
Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
```
````python
def divide_counts(self, n: int, /) -> 'MultisetExpression[T]':
    """Divides all counts by n (rounding down).

    Same as `self // n`.

    Example:
    ```python
    Pool([1, 2, 2, 3]) // 2 -> [2]
    ```
    """
    return icepool.operator.MultisetFloordivCounts(self, constant=n)
````
Divides all counts by n (rounding down).

Same as `self // n`.

Example:
```python
Pool([1, 2, 2, 3]) // 2 -> [2]
```
````python
def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]':
    """Takes all counts modulo n.

    Same as `self % n`.

    Example:
    ```python
    Pool([1, 2, 2, 3]) % 2 -> [1, 3]
    ```
    """
    return self % n
````
Takes all counts modulo n.

Same as `self % n`.

Example:
```python
Pool([1, 2, 2, 3]) % 2 -> [1, 3]
```
````python
def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                n: int, /) -> 'MultisetExpression[T]':
    """Keeps counts fitting the comparison, treating the rest as zero.

    For example, `expression.keep_counts('>=', 2)` would keep pairs,
    triplets, etc. and drop singles.

    ```python
    Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
    ```

    Args:
        comparison: The comparison to use.
        n: The number to compare counts against.
    """
    return icepool.operator.MultisetKeepCounts(self,
                                               comparison=comparison,
                                               constant=n)
````
Keeps counts fitting the comparison, treating the rest as zero.

For example, `expression.keep_counts('>=', 2)` would keep pairs, triplets, etc. and drop singles.

```python
Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
```

Arguments:
- `comparison`: The comparison to use.
- `n`: The number to compare counts against.
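The comparison rule can be sketched on a concrete multiset in plain Python (illustration only; an assumed helper, not icepool's implementation):

```python
import operator
from collections import Counter

COMPARATORS = {'==': operator.eq, '!=': operator.ne, '<=': operator.le,
               '<': operator.lt, '>=': operator.ge, '>': operator.gt}

def keep_counts(multiset, comparison, n):
    """Keep counts fitting the comparison; treat the rest as zero."""
    compare = COMPARATORS[comparison]
    return Counter({o: c for o, c in Counter(multiset).items()
                    if compare(c, n)})

# keeps pairs and better, drops singles
result = keep_counts([1, 2, 2, 3, 3, 3], '>=', 2)
```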
````python
def unique(self, n: int = 1, /) -> 'MultisetExpression[T]':
    """Counts each outcome at most `n` times.

    For example, `generator.unique(2)` would count each outcome at most
    twice.

    Example:
    ```python
    Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
    ```
    """
    return icepool.operator.MultisetUnique(self, constant=n)
````
Counts each outcome at most `n` times.

For example, `generator.unique(2)` would count each outcome at most twice.

Example:
```python
Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
```
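On a concrete multiset, this is just the `min(count, n)` clamp from the operations table; a `collections.Counter` sketch (illustration only, not the library's implementation):

```python
from collections import Counter

def unique(multiset, n=1):
    """Count each outcome at most n times: min(count, n)."""
    return Counter({o: min(c, n) for o, c in Counter(multiset).items()})
```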
```python
def keep(
    self, index: slice | Sequence[int | EllipsisType] | int
) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]':
    """Selects elements after drawing and sorting.

    This is less capable than the `KeepGenerator` version.
    In particular, it does not know how many elements it is selecting from,
    so it must be anchored at the starting end. The advantage is that it
    can be applied to any expression.

    The valid types of argument are:

    * A `slice`. If both start and stop are provided, they must both be
        non-negative or both be negative. step is not supported.
    * A sequence of `int` with `...` (`Ellipsis`) at exactly one end.
        Each sorted element will be counted that many times, with the
        `Ellipsis` treated as enough zeros (possibly "negative") to
        fill the rest of the elements.
    * An `int`, which evaluates by taking the element at the specified
        index. In this case the result is a `Die`.

    Negative incoming counts are treated as zero counts.

    Use the `[]` operator for the same effect as this method.
    """
    if isinstance(index, int):
        return icepool.evaluator.keep_evaluator.evaluate(self, index=index)
    else:
        return icepool.operator.MultisetKeep(self, index=index)
```
Selects elements after drawing and sorting.

This is less capable than the `KeepGenerator` version. In particular, it does not know how many elements it is selecting from, so it must be anchored at the starting end. The advantage is that it can be applied to any expression.

The valid types of argument are:

- A `slice`. If both start and stop are provided, they must both be non-negative or both be negative. step is not supported.
- A sequence of `int` with `...` (`Ellipsis`) at exactly one end. Each sorted element will be counted that many times, with the `Ellipsis` treated as enough zeros (possibly "negative") to fill the rest of the elements.
- An `int`, which evaluates by taking the element at the specified index. In this case the result is a `Die`.

Negative incoming counts are treated as zero counts.

Use the `[]` operator for the same effect as this method.
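The index semantics can be sketched on an already-sorted concrete list (assumptions: ascending sort, nonnegative counts; `keep` here is a hypothetical standalone helper, not the method itself):

```python
def keep(sorted_elements, index):
    """Apply keep-style indexing to an already-sorted list of elements."""
    if isinstance(index, (int, slice)):
        return sorted_elements[index]  # int -> one element; slice -> sublist
    # Otherwise: a sequence of counts with ... (Ellipsis) at exactly one end,
    # treated as enough zeros to fill the rest of the elements.
    counts = list(index)
    pad = [0] * (len(sorted_elements) - len(counts) + 1)
    if counts[0] is Ellipsis:
        counts = pad + counts[1:]
    else:
        counts = counts[:-1] + pad
    result = []
    for element, count in zip(sorted_elements, counts):
        result.extend([element] * max(count, 0))  # negatives treated as zero
    return result

rolls = [1, 2, 3, 5, 6]
keep(rolls, slice(-2, None))  # the two highest: [5, 6]
keep(rolls, (0, 0, 1, ...))   # the third lowest, as a multiset: [3]
```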
```python
def lowest(self,
           keep: int | None = None,
           drop: int | None = None) -> 'MultisetExpression[T]':
    """Keep some of the lowest elements from this multiset and drop the rest.

    In contrast to the die and free function versions, this does not
    automatically sum the dice. Use `.sum()` afterwards if you want to sum.
    Alternatively, you can perform some other evaluation.

    This requires the outcomes to be evaluated in ascending order.

    Args:
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest element
                will be kept.
            * If only `keep` is provided, the `keep` lowest elements
                will be kept.
            * If only `drop` is provided, the `drop` lowest elements
                will be dropped and the rest will be kept.
            * If both are provided, `drop` lowest elements will be dropped,
                then the next `keep` lowest elements will be kept.
    """
    index = lowest_slice(keep, drop)
    return self.keep(index)
```
Keep some of the lowest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use `.sum()` afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in ascending order.

Arguments:
- `keep`, `drop`: These arguments work together:
    - If neither are provided, the single lowest element will be kept.
    - If only `keep` is provided, the `keep` lowest elements will be kept.
    - If only `drop` is provided, the `drop` lowest elements will be dropped and the rest will be kept.
    - If both are provided, `drop` lowest elements will be dropped, then the next `keep` lowest elements will be kept.
```python
def highest(self,
            keep: int | None = None,
            drop: int | None = None) -> 'MultisetExpression[T]':
    """Keep some of the highest elements from this multiset and drop the rest.

    In contrast to the die and free function versions, this does not
    automatically sum the dice. Use `.sum()` afterwards if you want to sum.
    Alternatively, you can perform some other evaluation.

    This requires the outcomes to be evaluated in descending order.

    Args:
        keep, drop: These arguments work together:
            * If neither are provided, the single highest element
                will be kept.
            * If only `keep` is provided, the `keep` highest elements
                will be kept.
            * If only `drop` is provided, the `drop` highest elements
                will be dropped and the rest will be kept.
            * If both are provided, `drop` highest elements will be dropped,
                then the next `keep` highest elements will be kept.
    """
    index = highest_slice(keep, drop)
    return self.keep(index)
```
Keep some of the highest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use `.sum()` afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in descending order.

Arguments:
- `keep`, `drop`: These arguments work together:
    - If neither are provided, the single highest element will be kept.
    - If only `keep` is provided, the `keep` highest elements will be kept.
    - If only `drop` is provided, the `drop` highest elements will be dropped and the rest will be kept.
    - If both are provided, `drop` highest elements will be dropped, then the next `keep` highest elements will be kept.
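The keep/drop interaction works like slicing a sorted list. A sketch on concrete lists rather than pools (illustration only):

```python
def highest(elements, keep=None, drop=None):
    """Drop the `drop` highest elements, then keep the next `keep` highest."""
    if keep is None and drop is None:
        keep = 1  # neither provided: keep the single highest
    drop = drop or 0
    remaining = sorted(elements, reverse=True)[drop:]
    if keep is not None:
        remaining = remaining[:keep]
    return remaining

highest([6, 1, 4, 3], keep=2)          # the two highest: [6, 4]
highest([6, 1, 4, 3], keep=1, drop=1)  # the second-highest: [4]
```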
````python
def sort_pair(
    self,
    comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
    other: 'MultisetExpression[T]',
    /,
    order: Order = Order.Descending,
    extra: Literal['early', 'late', 'low', 'high', 'equal', 'keep',
                   'drop'] = 'drop'
) -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then keep the elements from `self` from each pair that fit the given comparison.

    Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of
    *RISK*. Which pairs did the attacker win?
    ```python
    d6.pool(3).highest(2).sort_pair('>', d6.pool(2))
    ```

    Suppose the attacker rolled 6, 4, 3 and the defender 5, 5.
    In this case the 4 would be blocked since the attacker lost that pair,
    leaving the attacker's 6. If you want to keep the extra element (3), you
    can use the `extra` parameter.
    ```python
    Pool([6, 4, 3]).sort_pair('>', [5, 5]) -> [6]
    Pool([6, 4, 3]).sort_pair('>', [5, 5], extra='keep') -> [6, 3]
    ```

    Contrast `max_pair_keep()` and `max_pair_drop()`, which first
    create the maximum number of pairs that fit the comparison, not
    necessarily in sorted order.
    In the above example, `max_pair()` would allow the defender to
    assign their 5s to block both the 4 and the 3.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    This is not designed for use with negative counts.

    Args:
        comparison: The comparison to filter by. If you want to drop rather
            than keep, use the complementary comparison:
            * `'=='` vs. `'!='`
            * `'<='` vs. `'>'`
            * `'>='` vs. `'<'`
        other: The other multiset to pair elements with.
        order: The order in which to sort before forming pairs.
            Default is descending.
        extra: If the left operand has more elements than the right
            operand, this determines what is done with the extra elements.
            The default is `'drop'`.
            * `'early'`, `'late'`: The extra elements are considered to
                occur earlier or later in `order` than their missing
                counterparts.
            * `'low'`, `'high'`, `'equal'`: The extra elements are
                considered to be lower, higher, or equal to their missing
                counterparts.
            * `'keep'`, `'drop'`: The extra elements are always kept or
                dropped.
    """
    other = implicit_convert_to_expression(other)

    return icepool.operator.MultisetSortPair(self,
                                             other,
                                             comparison=comparison,
                                             sort_order=order,
                                             extra=extra)
````
EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then keep the elements from `self` from each pair that fit the given comparison.

Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of *RISK*. Which pairs did the attacker win?
```python
d6.pool(3).highest(2).sort_pair('>', d6.pool(2))
```

Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. In this case the 4 would be blocked since the attacker lost that pair, leaving the attacker's 6. If you want to keep the extra element (3), you can use the `extra` parameter.
```python
Pool([6, 4, 3]).sort_pair('>', [5, 5]) -> [6]
Pool([6, 4, 3]).sort_pair('>', [5, 5], extra='keep') -> [6, 3]
```

Contrast `max_pair_keep()` and `max_pair_drop()`, which first create the maximum number of pairs that fit the comparison, not necessarily in sorted order. In the above example, `max_pair()` would allow the defender to assign their 5s to block both the 4 and the 3.

[This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) gives an overview of several opposed dice pool mechanics, including this one.

This is not designed for use with negative counts.

Arguments:
- `comparison`: The comparison to filter by. If you want to drop rather than keep, use the complementary comparison:
    - `'=='` vs. `'!='`
    - `'<='` vs. `'>'`
    - `'>='` vs. `'<'`
- `other`: The other multiset to pair elements with.
- `order`: The order in which to sort before forming pairs. Default is descending.
- `extra`: If the left operand has more elements than the right operand, this determines what is done with the extra elements. The default is `'drop'`.
    - `'early'`, `'late'`: The extra elements are considered to occur earlier or later in `order` than their missing counterparts.
    - `'low'`, `'high'`, `'equal'`: The extra elements are considered to be lower, higher, or equal to their missing counterparts.
    - `'keep'`, `'drop'`: The extra elements are always kept or dropped.
```python
def sort_pair_keep_while(self,
                         comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                         other: 'MultisetExpression[T]',
                         /,
                         order: Order = Order.Descending,
                         extra: Literal['early', 'late', 'low', 'high',
                                        'equal', 'continue',
                                        'break'] = 'break'):
    """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and keep elements from `self` while the `comparison` holds, dropping the rest.

    This is not designed for use with negative counts.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: The comparison for which to continue the "while".
        other: The other multiset to pair elements with.
        order: The order in which to sort before forming pairs.
            Default is descending.
        extra: If the left operand has more elements than the right
            operand, this determines what is done with the extra elements.
            The default is `'break'`.
            * `'early'`, `'late'`: The extra elements are considered to
                occur earlier or later in `order` than their missing
                counterparts.
            * `'low'`, `'high'`, `'equal'`: The extra elements are
                considered to be lower, higher, or equal to their missing
                counterparts.
            * `'continue'`, `'break'`: If the "while" still holds upon
                reaching the extra elements, whether those elements
                continue to be kept.
    """
    other = implicit_convert_to_expression(other)
    return icepool.operator.MultisetSortPairWhile(self,
                                                  other,
                                                  keep=True,
                                                  comparison=comparison,
                                                  sort_order=order,
                                                  extra=extra)
```
EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and keep elements from `self` while the `comparison` holds, dropping the rest.

This is not designed for use with negative counts.

[This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) gives an overview of several opposed dice pool mechanics, including this one.

Arguments:
- `comparison`: The comparison for which to continue the "while".
- `other`: The other multiset to pair elements with.
- `order`: The order in which to sort before forming pairs. Default is descending.
- `extra`: If the left operand has more elements than the right operand, this determines what is done with the extra elements. The default is `'break'`.
    - `'early'`, `'late'`: The extra elements are considered to occur earlier or later in `order` than their missing counterparts.
    - `'low'`, `'high'`, `'equal'`: The extra elements are considered to be lower, higher, or equal to their missing counterparts.
    - `'continue'`, `'break'`: If the "while" still holds upon reaching the extra elements, whether those elements continue to be kept.
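The "while" variant stops at the first failing pair. A minimal sketch on concrete lists (descending order, `extra='break'` behavior only; an illustrative helper, not icepool's code):

```python
import operator

def sort_pair_keep_while(left, comparison, right):
    """Pair sorted elements descending; keep left elements while the
    comparison holds, then drop everything from the first failure on."""
    compare = {'==': operator.eq, '!=': operator.ne, '<=': operator.le,
               '<': operator.lt, '>=': operator.ge,
               '>': operator.gt}[comparison]
    kept = []
    for l, r in zip(sorted(left, reverse=True), sorted(right, reverse=True)):
        if not compare(l, r):
            break  # first failing pair ends the "while"
        kept.append(l)
    return kept
```

Contrast plain `sort_pair`, which filters every pair independently instead of stopping at the first failure.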
```python
def sort_pair_drop_while(self,
                         comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                         other: 'MultisetExpression[T]',
                         /,
                         order: Order = Order.Descending,
                         extra: Literal['early', 'late', 'low', 'high',
                                        'equal', 'continue',
                                        'break'] = 'break'):
    """EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and drop elements from `self` while the `comparison` holds, keeping the rest.

    This is not designed for use with negative counts.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: The comparison for which to continue the "while".
        other: The other multiset to pair elements with.
        order: The order in which to sort before forming pairs.
            Default is descending.
        extra: If the left operand has more elements than the right
            operand, this determines what is done with the extra elements.
            The default is `'break'`.
            * `'early'`, `'late'`: The extra elements are considered to
                occur earlier or later in `order` than their missing
                counterparts.
            * `'low'`, `'high'`, `'equal'`: The extra elements are
                considered to be lower, higher, or equal to their missing
                counterparts.
            * `'continue'`, `'break'`: If the "while" still holds upon
                reaching the extra elements, whether those elements
                continue to be dropped.
    """
    other = implicit_convert_to_expression(other)
    return icepool.operator.MultisetSortPairWhile(self,
                                                  other,
                                                  keep=False,
                                                  comparison=comparison,
                                                  sort_order=order,
                                                  extra=extra)
```
EXPERIMENTAL: Sort `self` and `other` and make pairs of one element from each, then go through the pairs and drop elements from `self` while the `comparison` holds, keeping the rest.

This is not designed for use with negative counts.

[This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) gives an overview of several opposed dice pool mechanics, including this one.

Arguments:
- `comparison`: The comparison for which to continue the "while".
- `other`: The other multiset to pair elements with.
- `order`: The order in which to sort before forming pairs. Default is descending.
- `extra`: If the left operand has more elements than the right operand, this determines what is done with the extra elements. The default is `'break'`.
    - `'early'`, `'late'`: The extra elements are considered to occur earlier or later in `order` than their missing counterparts.
    - `'low'`, `'high'`, `'equal'`: The extra elements are considered to be lower, higher, or equal to their missing counterparts.
    - `'continue'`, `'break'`: If the "while" still holds upon reaching the extra elements, whether those elements continue to be dropped.
````python
def max_pair_keep(self,
                  comparison: Literal['==', '<=', '<', '>=', '>'],
                  other: 'MultisetExpression[T]',
                  priority: Literal['low', 'high'] | None = None,
                  /) -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Form as many pairs of elements between `self` and `other` fitting the comparison, then keep the paired elements from `self`.

    This pairs elements of `self` with elements of `other`, such that in
    each pair the element from `self` fits the `comparison` with the
    element from `other`. As many such pairs of elements will be created as
    possible, prioritizing either the lowest or highest possible elements.
    Finally, the paired elements from `self` are kept, dropping the rest.

    This requires that outcomes be evaluated in descending order if
    prioritizing high elements, or ascending order if prioritizing low
    elements.

    This is not designed for use with negative counts.

    Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of
    3d6. Defender dice can be used to block attacker dice of equal or lesser
    value, and the defender prefers to block the highest attacker dice
    possible. Which attacker dice were blocked?
    ```python
    d6.pool(4).max_pair_keep('<=', d6.pool(3), 'high').sum()
    ```

    Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
    Then the result would be [4, 3].
    ```python
    Pool([6, 4, 3, 1]).max_pair_keep('<=', [5, 5], 'high')
    -> [4, 3]
    ```

    The complement of this is `max_pair_drop`, which drops the paired
    elements from `self` and keeps the rest.

    Contrast `sort_pair()`, which first creates pairs in
    sorted order and then filters them by `comparison`.
    In the above example, `sort_pair()` would force the defender to pair
    against the 6 and the 4, which would only allow them to block the 4
    and let the 6, 3, and 1 through.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: The comparison that the pairs must satisfy.
            `'=='` is the same as `+self & +other`.
        other: The other multiset to pair elements with.
        priority: Optional parameter to prioritize pairing `'low'` or
            `'high'` elements. Note that this does not change the number of
            elements that are paired.
    """
    other = implicit_convert_to_expression(other)
    if comparison == '==':
        return +self & +other

    cls: Type[icepool.operator.MultisetMaxPairLate] | Type[
        icepool.operator.MultisetMaxPairEarly]

    if priority is None:
        order = Order.Ascending
        left_first, tie, _ = compute_lexi_tuple(comparison, order)
        if left_first:
            order = Order.Descending
        cls = icepool.operator.MultisetMaxPairLate
    else:
        match priority:
            case 'low':
                order = Order.Ascending
            case 'high':
                order = Order.Descending
            case _:
                raise ValueError("priority must be 'low' or 'high'.")

        left_first, tie, _ = compute_lexi_tuple(comparison, order)

        if left_first:
            cls = icepool.operator.MultisetMaxPairEarly
        else:
            cls = icepool.operator.MultisetMaxPairLate

    return cls(self,
               other,
               order=order,
               pair_equal=cast(bool, tie),
               keep=True)
````
EXPERIMENTAL: Form as many pairs of elements between `self` and `other` fitting the comparison, then keep the paired elements from `self`.

This pairs elements of `self` with elements of `other`, such that in each pair the element from `self` fits the `comparison` with the element from `other`. As many such pairs of elements will be created as possible, prioritizing either the lowest or highest possible elements. Finally, the paired elements from `self` are kept, dropping the rest.

This requires that outcomes be evaluated in descending order if prioritizing high elements, or ascending order if prioritizing low elements.

This is not designed for use with negative counts.

Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 3d6. Defender dice can be used to block attacker dice of equal or lesser value, and the defender prefers to block the highest attacker dice possible. Which attacker dice were blocked?
```python
d6.pool(4).max_pair_keep('<=', d6.pool(3), 'high').sum()
```

Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. Then the result would be [4, 3].
```python
Pool([6, 4, 3, 1]).max_pair_keep('<=', [5, 5], 'high')
-> [4, 3]
```

The complement of this is `max_pair_drop`, which drops the paired elements from `self` and keeps the rest.

Contrast `sort_pair()`, which first creates pairs in sorted order and then filters them by `comparison`. In the above example, `sort_pair()` would force the defender to pair against the 6 and the 4, which would only allow them to block the 4 and let the 6, 3, and 1 through.

[This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true) gives an overview of several opposed dice pool mechanics, including this one.

Arguments:
- `comparison`: The comparison that the pairs must satisfy. `'=='` is the same as `+self & +other`.
- `other`: The other multiset to pair elements with.
- `priority`: Optional parameter to prioritize pairing `'low'` or `'high'` elements. Note that this does not change the number of elements that are paired.
````python
def max_pair_drop(self,
                  comparison: Literal['==', '<=', '<', '>=', '>'],
                  other: 'MultisetExpression[T]',
                  priority: Literal['low', 'high'] | None = None,
                  /) -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Form as many pairs of elements between `self` and `other` fitting the comparison, then drop the paired elements from `self`.

    This pairs elements of `self` with elements of `other`, such that in
    each pair the element from `self` fits the `comparison` with the
    element from `other`. As many such pairs of elements will be created as
    possible, prioritizing either the lowest or highest possible elements.
    Finally, the paired elements from `self` are dropped, keeping the rest.

    This requires that outcomes be evaluated in descending order if
    prioritizing high elements, or ascending order if prioritizing low
    elements.

    This is not designed for use with negative counts.

    Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of
    3d6. Defender dice can be used to block attacker dice of equal or lesser
    value, and the defender prefers to block the highest attacker dice
    possible. Which attacker dice were NOT blocked?
    ```python
    d6.pool(4).max_pair_drop('<=', d6.pool(3), 'high').sum()
    ```

    Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
    Then the result would be [6, 1].
    ```python
    Pool([6, 4, 3, 1]).max_pair_drop('<=', [5, 5], 'high')
    -> [6, 1]
    ```

    The complement of this is `max_pair_keep`, which keeps the paired
    elements from `self` and drops the rest.

    Contrast `sort_pair()`, which first creates pairs in
    sorted order and then filters them by `comparison`.
    In the above example, `sort_pair()` would force the defender to pair
    against the 6 and the 4, which would only allow them to block the 4
    and let the 6, 3, and 1 through.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: The comparison that the pairs must satisfy.
            `'=='` is the same as `self - other`.
        other: The other multiset to pair elements with.
        priority: Optional parameter to prioritize pairing `'low'` or
            `'high'` elements. Note that this does not change the number of
            elements that are paired.
    """
    other = implicit_convert_to_expression(other)
    if comparison == '==':
        return self - other

    cls: Type[icepool.operator.MultisetMaxPairLate] | Type[
        icepool.operator.MultisetMaxPairEarly]

    if priority is None:
        order = Order.Ascending
        left_first, tie, _ = compute_lexi_tuple(comparison, order)
        if left_first:
            order = Order.Descending
        cls = icepool.operator.MultisetMaxPairLate
    else:
        match priority:
            case 'low':
                order = Order.Ascending
            case 'high':
                order = Order.Descending
            case _:
                raise ValueError("priority must be 'low' or 'high'.")

        left_first, tie, _ = compute_lexi_tuple(comparison, order)

        if left_first:
            cls = icepool.operator.MultisetMaxPairEarly
        else:
            cls = icepool.operator.MultisetMaxPairLate

    return cls(self,
               other,
               order=order,
               pair_equal=cast(bool, tie),
               keep=False)
````
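The greedy pairing semantics can be illustrated in plain Python. The helper below is a hypothetical sketch that only covers the `'<='` comparison with `priority='high'` on a single concrete roll; it is not icepool's implementation, which evaluates entire probability distributions.

```python
def max_pair_drop_high_le(left, right):
    """Hypothetical sketch: pair each element of right with the highest
    unpaired element of left that is <= it, then drop the paired
    elements of left and keep the rest."""
    remaining = sorted(left, reverse=True)
    for r in sorted(right, reverse=True):
        for i, l in enumerate(remaining):
            if l <= r:
                # This left element is paired (blocked); remove it.
                del remaining[i]
                break
    return remaining

# Attacker 6, 4, 3, 1 vs. defender 5, 5: the 4 and 3 are blocked,
# leaving [6, 1] unblocked.
```

The `max_pair_keep` variant would return the removed elements instead of the remainder.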
```python
def versus_all(self, comparison: Literal['<=', '<', '>=', '>'],
               other: 'MultisetExpression[T]') -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against all elements of the other multiset.

    Contrast `versus_any()`.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: One of `'<=', '<', '>=', '>'`.
        other: The other multiset to compare to. Negative counts are treated
            as 0.
    """
    other = implicit_convert_to_expression(other)
    lexi_tuple, order = compute_lexi_tuple_with_zero_right_first(comparison)
    return icepool.operator.MultisetVersus(self,
                                           other,
                                           lexi_tuple=lexi_tuple,
                                           order=order)
```
```python
def versus_any(self, comparison: Literal['<=', '<', '>=', '>'],
               other: 'MultisetExpression[T]') -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Keeps elements from `self` that fit the comparison against any element of the other multiset.

    Contrast `versus_all()`.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: One of `'<=', '<', '>=', '>'`.
        other: The other multiset to compare to. Negative counts are treated
            as 0.
    """
    other = implicit_convert_to_expression(other)
    lexi_tuple, order = compute_lexi_tuple_with_zero_right_first(comparison)
    lexi_tuple = tuple(reversed(lexi_tuple))  # type: ignore
    order = -order

    return icepool.operator.MultisetVersus(self,
                                           other,
                                           lexi_tuple=lexi_tuple,
                                           order=order)
```
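On a single concrete roll, the difference between `versus_all` and `versus_any` reduces to `all` versus `any` over the comparison. These are hypothetical plain-Python sketches for the `'>'` comparison only, not icepool calls:

```python
def versus_all_gt(left, right):
    # Keep elements of left that are greater than every element of right.
    return [l for l in left if all(l > r for r in right)]

def versus_any_gt(left, right):
    # Keep elements of left that are greater than at least one element of right.
    return [l for l in left if any(l > r for r in right)]
```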
```python
def expand(
    self,
    order: Order = Order.Ascending
) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]':
    """Evaluation: All elements of the multiset in ascending order.

    This is expensive and not recommended unless there are few possibilities.

    Args:
        order: Whether the elements are in ascending (default) or descending
            order.
    """
    return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order)
```
```python
def sum(
    self,
    map: Callable[[T], U] | Mapping[T, U] | None = None
) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]':
    """Evaluation: The sum of all elements."""
    if map is None:
        return icepool.evaluator.sum_evaluator.evaluate(self)
    else:
        return icepool.evaluator.SumEvaluator(map).evaluate(self)
```
```python
def size(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: The total number of elements in the multiset.

    This is usually not very interesting unless some other operation is
    performed first. Examples:

    `generator.unique().size()` will count the number of unique outcomes.

    `(generator & [4, 5, 6]).size()` will count up to one each of
    4, 5, and 6.
    """
    return icepool.evaluator.size_evaluator.evaluate(self)
```
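The two examples can be checked on a concrete roll with `collections.Counter`. This is plain Python mirroring the documented semantics, not icepool itself:

```python
from collections import Counter

roll = [3, 3, 5, 6, 6]

# generator.unique().size(): the number of distinct outcomes.
assert len(set(roll)) == 3

# (generator & [4, 5, 6]).size(): multiset intersection keeps at most
# one each of 4, 5, and 6, so the size is min(count, 1) summed.
counts = Counter(roll)
assert sum(min(counts[x], 1) for x in (4, 5, 6)) == 2
```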
```python
def empty(
        self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether the multiset contains only zero counts."""
    return icepool.evaluator.empty_evaluator.evaluate(self)
```
```python
def product_of_counts(
        self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: The product of counts in the multiset."""
    return icepool.evaluator.product_of_counts_evaluator.evaluate(self)
```
```python
def highest_outcome_and_count(
    self
) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]':
    """Evaluation: The highest outcome with positive count, along with that count.

    If no outcomes have positive count, the min outcome will be returned with 0 count.
    """
    return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate(
        self)
```
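On a concrete mapping from outcome to count, the documented behavior, including the zero-count fallback, looks like this hypothetical sketch:

```python
def highest_outcome_and_count(counts):
    """counts: mapping from outcome to count; counts may be zero."""
    positive = [outcome for outcome, count in counts.items() if count > 0]
    if positive:
        outcome = max(positive)
        return outcome, counts[outcome]
    # No positive counts: fall back to the min outcome with 0 count.
    return min(counts), 0
```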
```python
def all_counts(
    self,
    filter: int | Literal['all'] = 1
) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]':
    """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

    The sizes are in **descending** order.

    Args:
        filter: Any counts below this value will not be in the output.
            For example, `filter=2` will only produce pairs and better.
            If `'all'`, no filtering will be done.

            Why not just place `keep_counts('>=')` before this?
            `keep_counts('>=')` operates by setting counts to zero, so we
            would still need an argument to specify whether we want to
            output zero counts. So we might as well use the argument to do
            both.
    """
    return icepool.evaluator.AllCountsEvaluator(
        filter=filter).evaluate(self)
```
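For a single concrete roll, this evaluation reduces to sorting the outcome counts. A hypothetical plain-Python equivalent:

```python
from collections import Counter

def all_counts(outcomes, filter=1):
    # Sizes of all matching sets, in descending order.
    sizes = sorted(Counter(outcomes).values(), reverse=True)
    return tuple(s for s in sizes if filter == 'all' or s >= filter)

# A roll of 5, 5, 2: filter=2 keeps only the pair.
```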
```python
def largest_count(
    self,
    *,
    wild: Callable[[T], bool] | Collection[T] | None = None,
    wild_low: Callable[[T], bool] | Collection[T] | None = None,
    wild_high: Callable[[T], bool] | Collection[T] | None = None,
) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: The size of the largest matching set among the elements.

    Args:
        wild: If provided, the counts of these outcomes will be combined
            with the counts of any other outcomes.
        wild_low: These wilds can only be combined with outcomes that they
            are lower than.
        wild_high: These wilds can only be combined with outcomes that they
            are higher than.
    """
    if wild is None and wild_low is None and wild_high is None:
        return icepool.evaluator.largest_count_evaluator.evaluate(self)
    else:
        return icepool.evaluator.LargestCountWithWildEvaluator(
            wild=wild, wild_low=wild_low,
            wild_high=wild_high).evaluate(self)
```
```python
def largest_count_and_outcome(
    self
) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]':
    """Evaluation: The largest matching set among the elements and the corresponding outcome."""
    return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate(
        self)
```
```python
def count_subset(
    self,
    divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
    /,
    *,
    empty_divisor: int | None = None
) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: The number of times the divisor is contained in this multiset.

    Args:
        divisor: The multiset to divide by.
        empty_divisor: If the divisor is empty, the outcome will be this.
            If not set, `ZeroDivisionError` will be raised for an empty
            right side.

    Raises:
        ZeroDivisionError: If the divisor may be empty and
            `empty_divisor` is not set.
    """
    divisor = implicit_convert_to_expression(divisor)
    return icepool.evaluator.CountSubsetEvaluator(
        empty_divisor=empty_divisor).evaluate(self, divisor)
```
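"Number of times the divisor is contained" amounts to a floor division of counts. A hypothetical sketch on concrete multisets, mirroring the documented empty-divisor behavior:

```python
from collections import Counter

def count_subset(multiset, divisor):
    m, d = Counter(multiset), Counter(divisor)
    if not d:
        # Mirrors the documented behavior when empty_divisor is not set.
        raise ZeroDivisionError('empty divisor')
    # How many whole copies of the divisor fit into the multiset.
    return min(m[outcome] // count for outcome, count in d.items())

# [1, 1, 2, 2, 2] contains two whole copies of [1, 2].
```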
```python
def largest_straight(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]':
    """Evaluation: The size of the largest straight among the elements.

    Outcomes must be `int`s.
    """
    return icepool.evaluator.largest_straight_evaluator.evaluate(self)
```
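On a concrete roll, the largest straight is the longest run of consecutive integers among the distinct outcomes. A hypothetical plain-Python sketch:

```python
def largest_straight(outcomes):
    values = sorted(set(outcomes))
    best = run = 1 if values else 0
    for a, b in zip(values, values[1:]):
        # Extend the run on consecutive values, otherwise restart it.
        run = run + 1 if b == a + 1 else 1
        best = max(best, run)
    return best
```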
```python
def largest_straight_and_outcome(
    self: 'MultisetExpression[int]',
    priority: Literal['low', 'high'] = 'high',
    /
) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]':
    """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.

    Straight size is prioritized first, then the outcome.

    Outcomes must be `int`s.

    Args:
        priority: Controls which outcome within the straight is returned,
            and which straight is picked if there is a tie for largest
            straight.
    """
    if priority == 'high':
        return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate(
            self)
    elif priority == 'low':
        return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate(
            self)
    else:
        raise ValueError("priority must be 'low' or 'high'.")
```
```python
def all_straights(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]':
    """Evaluation: The sizes of all straights.

    The sizes are in **descending** order.

    Each element can only contribute to one straight, though duplicate
    elements can produce straights that overlap in outcomes. In this case,
    elements are preferentially assigned to the longer straight.
    """
    return icepool.evaluator.all_straights_evaluator.evaluate(self)
```
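One way to realize "each element contributes to one straight, preferring the longer straight" on a concrete roll is a sweep that closes the shortest open runs whenever the count drops. This is a hypothetical reconstruction of the documented semantics, not icepool's evaluator:

```python
from collections import Counter

def all_straights(outcomes):
    counts = Counter(outcomes)
    closed, open_runs, prev = [], [], None
    for outcome in sorted(counts):
        if prev is not None and outcome != prev + 1:
            # A gap ends every open straight.
            closed += open_runs
            open_runs = []
        c = counts[outcome]
        # Fewer copies than open runs: close the shortest runs, so
        # duplicates are preferentially assigned to longer straights.
        closed += open_runs[c:]
        open_runs = open_runs[:c] + [0] * max(0, c - len(open_runs))
        open_runs = [run + 1 for run in open_runs]
        prev = outcome
    closed += open_runs
    return tuple(sorted(closed, reverse=True))

# [1, 1, 2, 3]: one straight 1-2-3 plus a lone 1.
```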
```python
def all_straights_reduce_counts(
    self: 'MultisetExpression[int]',
    reducer: Callable[[int, int], int] = operator.mul
) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]':
    """Experimental: All straights with a reduce operation on the counts.

    This can be used to evaluate e.g. cribbage-style straight counting.

    The result is a tuple of `(run_length, run_score)`s.
    """
    return icepool.evaluator.AllStraightsReduceCountsEvaluator(
        reducer=reducer).evaluate(self)
```
````python
def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            order: Order = Order.Descending,
            limit: int | None = None):
    """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.

    Example:
    ```python
    MultisetExpression.argsort([10, 9, 5], [9, 9])
    ```
    produces
    ```python
    ((0,), (0, 1, 1), (0,))
    ```

    Args:
        self, *args: The multiset expressions to be evaluated.
        order: Which order the ranks are to be emitted. Default is descending.
        limit: How many ranks to emit. Default will emit all ranks, which
            makes the length of each outcome equal to
            `additive_union(+self, +arg1, +arg2, ...).unique().size()`
    """
    self = implicit_convert_to_expression(self)
    converted_args = [implicit_convert_to_expression(arg) for arg in args]
    return icepool.evaluator.ArgsortEvaluator(order=order,
                                              limit=limit).evaluate(
                                                  self, *converted_args)
````
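For concrete lists, the documented example can be reproduced with a stable sort plus grouping. A hypothetical plain-Python sketch covering descending order with no limit:

```python
from itertools import groupby

def argsort_multisets(*multisets):
    # Tag each element with the index of its originating multiset.
    tagged = [(outcome, index)
              for index, multiset in enumerate(multisets)
              for outcome in multiset]
    # Python's sort is stable, so equal outcomes keep input-index order.
    tagged.sort(key=lambda t: t[0], reverse=True)
    return tuple(
        tuple(index for _, index in group)
        for _, group in groupby(tagged, key=lambda t: t[0]))
```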
```python
def issubset(
        self,
        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether this multiset is a subset of the other multiset.

    Specifically, if this multiset has a lesser or equal count for each
    outcome than the other multiset, this evaluates to `True`;
    if there is some outcome for which this multiset has a greater count
    than the other multiset, this evaluates to `False`.

    `issubset` is the same as `self <= other`.

    `self < other` evaluates a proper subset relation, which is the same
    except the result is `False` if the two multisets are exactly equal.
    """
    return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
```
````python
def issuperset(
        self,
        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether this multiset is a superset of the other multiset.

    Specifically, if this multiset has a greater or equal count for each
    outcome than the other multiset, this evaluates to `True`;
    if there is some outcome for which this multiset has a lesser count
    than the other multiset, this evaluates to `False`.

    A typical use of this evaluation is testing for the presence of a
    combo of cards in a hand, e.g.

    ```python
    deck.deal(5) >= ['a', 'a', 'b']
    ```

    represents the chance that a deal of 5 cards contains at least two 'a's
    and one 'b'.

    `issuperset` is the same as `self >= other`.

    `self > other` evaluates a proper superset relation, which is the same
    except the result is `False` if the two multisets are exactly equal.
    """
    return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
````
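The count-wise definition is straightforward to state with `Counter`. A hypothetical sketch on concrete multisets:

```python
from collections import Counter

def issuperset(self_outcomes, other_outcomes):
    s, o = Counter(self_outcomes), Counter(other_outcomes)
    # Superset: at least as many of every outcome as the other multiset.
    return all(s[outcome] >= count for outcome, count in o.items())

# The combo example: does this 5-card hand contain at least
# two 'a's and one 'b'?
```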
```python
def isdisjoint(
        self,
        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether this multiset is disjoint from the other multiset.

    Specifically, this evaluates to `False` if there is any outcome for
    which both multisets have positive count, and `True` if there is not.

    Negative incoming counts are treated as zero counts.
    """
    return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
```
```python
def leximin(
    self,
    comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'],
    other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
    /,
    extra: Literal['low', 'high', 'drop'] = 'high'
) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in ascending order.

    Compares the lowest element of each multiset; if they are equal,
    compares the next-lowest element, and so on.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: The comparison to use.
        other: The multiset to compare to.
        extra: If one side has more elements than the other, how the extra
            elements are considered compared to their missing counterparts.
    """
    lexi_tuple = compute_lexi_tuple_with_extra(comparison, Order.Ascending,
                                               extra)
    return icepool.evaluator.lexi_comparison_evaluator.evaluate(
        self,
        implicit_convert_to_expression(other),
        sort_order=Order.Ascending,
        lexi_tuple=lexi_tuple)
```
```python
def leximax(
    self,
    comparison: Literal['==', '!=', '<=', '<', '>=', '>', 'cmp'],
    other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
    /,
    extra: Literal['low', 'high', 'drop'] = 'high'
) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: EXPERIMENTAL: Lexicographic comparison after sorting each multiset in descending order.

    Compares the highest element of each multiset; if they are equal,
    compares the next-highest element, and so on.

    [This infographic](https://github.com/HighDiceRoller/icepool/blob/main/images/opposed_pools.png?raw=true)
    gives an overview of several opposed dice pool mechanics, including this
    one.

    Args:
        comparison: The comparison to use.
        other: The multiset to compare to.
        extra: If one side has more elements than the other, how the extra
            elements are considered compared to their missing counterparts.
    """
    lexi_tuple = compute_lexi_tuple_with_extra(comparison,
                                               Order.Descending, extra)
    return icepool.evaluator.lexi_comparison_evaluator.evaluate(
        self,
        implicit_convert_to_expression(other),
        sort_order=Order.Descending,
        lexi_tuple=lexi_tuple)
```
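For equal-size concrete multisets, a `'cmp'`-style `leximax` amounts to Python's tuple comparison after a descending sort. This hypothetical sketch omits the `extra` handling for unequal sizes:

```python
def leximax_cmp(left, right):
    # Compare highest elements first; ties fall through to the
    # next-highest, which is exactly tuple comparison.
    a = tuple(sorted(left, reverse=True))
    b = tuple(sorted(right, reverse=True))
    return (a > b) - (a < b)
```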
```python
def force_order(self, force_order: Order) -> 'MultisetExpression[T]':
    """Forces outcomes to be seen by the evaluator in the given order.

    This can be useful for debugging / testing.
    """
    if force_order == Order.Any:
        return self
    return icepool.operator.MultisetForceOrder(self,
                                               force_order=force_order)
```
21class MultisetEvaluator(MultisetEvaluatorBase[T, U_co]): 22 """Evaluates a multiset based on a state transition function.""" 23 24 @abstractmethod 25 def next_state(self, state: Hashable, order: Order, outcome: T, /, 26 *counts) -> Hashable: 27 """State transition function. 28 29 This should produce a state given the previous state, an outcome, 30 the count of that outcome produced by each multiset input, and any 31 **kwargs provided to `evaluate()`. 32 33 `evaluate()` will always call this with `state, outcome, *counts` as 34 positional arguments. Furthermore, there is no expectation that a 35 subclass be able to handle an arbitrary number of counts. 36 37 Thus, you are free to: 38 * Rename `state` or `outcome` in a subclass. 39 * Replace `*counts` with a fixed set of parameters. 40 41 States must be hashable. At current, they do not have to be orderable. 42 However, this may change in the future, and if they are not totally 43 orderable, you must override `final_outcome` to create totally orderable 44 final outcomes. 45 46 Returning a `Die` is not supported. 47 48 Args: 49 state: A hashable object indicating the state before rolling the 50 current outcome. If `initial_state()` is not overridden, the 51 initial state is `None`. 52 order: The order in which outcomes are seen. You can raise an 53 `UnsupportedOrder` if you don't want to support the current 54 order. In this case, the opposite order will then be attempted 55 (if it hasn't already been attempted). 56 outcome: The current outcome. 57 If there are multiple inputs, the set of outcomes is at 58 least the union of the outcomes of the invididual inputs. 59 You can use `extra_outcomes()` to add extra outcomes. 60 *counts: One value (usually an `int`) for each input indicating how 61 many of the current outcome were produced. You may replace this 62 with a fixed series of parameters. 63 64 Returns: 65 A hashable object indicating the next state. 
66 The special value `icepool.Reroll` can be used to immediately remove 67 the state from consideration, effectively performing a full reroll. 68 """ 69 70 def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]: 71 """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`. 72 73 These will be seen by `next_state` even if they do not appear in the 74 input(s). The default implementation returns `()`, or no additional 75 outcomes. 76 77 If you want `next_state` to see consecutive `int` outcomes, you can set 78 `extra_outcomes = icepool.MultisetEvaluator.consecutive`. 79 See `consecutive()` below. 80 81 Args: 82 outcomes: The outcomes that could be produced by the inputs, in 83 ascending order. 84 """ 85 return () 86 87 def initial_state(self, order: Order, outcomes: Sequence[T], /, *sizes, 88 **kwargs: Hashable): 89 """Optional method to produce an initial evaluation state. 90 91 If not overriden, the initial state is `None`. Note that this is not a 92 valid `final_outcome()`. 93 94 All non-keyword arguments will be given positionally, so you are free 95 to: 96 * Rename any of them. 97 * Replace `sizes` with a fixed series of arguments. 98 * Replace trailing positional arguments with `*_` if you don't care 99 about them. 100 101 Args: 102 order: The order in which outcomes will be seen by `next_state()`. 103 outcomes: All outcomes that will be seen by `next_state()`. 104 sizes: The sizes of the input multisets, provided 105 that the multiset has inferrable size with non-negative 106 counts. If not, the corresponding size is None. 107 kwargs: Non-multiset arguments that were provided to `evaluate()`. 108 You may replace `**kwargs` with a fixed set of keyword 109 parameters; `final_outcome()` should take the same set of 110 keyword parameters. 111 112 Raises: 113 UnsupportedOrder if the given order is not supported. 
114 """ 115 return None 116 117 def final_outcome( 118 self, final_state: Hashable, order: Order, outcomes: tuple[T, ...], 119 /, *sizes, **kwargs: Hashable 120 ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType': 121 """Optional method to generate a final output outcome from a final state. 122 123 By default, the final outcome is equal to the final state. 124 Note that `None` is not a valid outcome for a `Die`, 125 and if there are no outcomes, `final_outcome` will immediately 126 be called with `final_state=None`. 127 Subclasses that want to handle this case should explicitly define what 128 happens. 129 130 All non-keyword arguments will be given positionally, so you are free 131 to: 132 * Rename any of them. 133 * Replace `sizes` with a fixed series of arguments. 134 * Replace trailing positional arguments with `*_` if you don't care 135 about them. 136 137 Args: 138 final_state: A state after all outcomes have been processed. 139 order: The order in which outcomes were seen by `next_state()`. 140 outcomes: All outcomes that were seen by `next_state()`. 141 sizes: The sizes of the input multisets, provided 142 that the multiset has inferrable size with non-negative 143 counts. If not, the corresponding size is None. 144 kwargs: Non-multiset arguments that were provided to `evaluate()`. 145 You may replace `**kwargs` with a fixed set of keyword 146 parameters; `initial_state()` should take the same set of 147 keyword parameters. 148 149 Returns: 150 A final outcome that will be used as part of constructing the result `Die`. 151 As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`. 152 """ 153 # If not overridden, the final_state should have type U_co. 154 return cast(U_co, final_state)
160 161 Returns: 162 All `int`s from the min outcome to the max outcome among the inputs, 163 inclusive. 164 165 Raises: 166 TypeError: if any input has any non-`int` outcome. 167 """ 168 if not outcomes: 169 return () 170 171 if any(not isinstance(x, int) for x in outcomes): 172 raise TypeError( 173 "consecutive cannot be used with outcomes of type other than 'int'." 174 ) 175 176 return range(outcomes[0], outcomes[-1] + 1) 177 178 @property 179 def next_state_key(self) -> Hashable: 180 """Subclasses may optionally provide a key that uniquely identifies the `next_state()` computation. 181 182 This is used to persistently cache intermediate results between calls 183 to `evaluate()`. By default, `next_state_key` is `None`, which only 184 caches if not inside a `@multiset_function`. 185 186 If you do implement this, `next_state_key` should include any members 187 used in `next_state()` but does NOT need to include members that are 188 only used in other methods, i.e. 189 * `extra_outcomes()` 190 * `initial_state()` 191 * `final_outcome()`. 192 193 For example, if `next_state()` is a pure function other than being 194 defined by its evaluator class, you can use `type(self)`. 195 196 If you want to disable caching between calls to `evaluate()` even 197 outside of `@multiset_function`, return the special value 198 `icepool.NoCache`. 
199 """ 200 return None 201 202 def _prepare( 203 self, 204 input_exps: tuple[MultisetExpressionBase[T, Any], ...], 205 kwargs: Mapping[str, Hashable], 206 ) -> Iterator[tuple['Dungeon[T]', 'Quest[T, U_co]', 207 'tuple[MultisetSourceBase[T, Any], ...]', int]]: 208 209 for t in itertools.product(*(exp._prepare() for exp in input_exps)): 210 if t: 211 dungeonlet_flats, questlet_flats, sources, weights = zip(*t) 212 else: 213 dungeonlet_flats = () 214 questlet_flats = () 215 sources = () 216 weights = () 217 next_state_key: Hashable 218 if self.next_state_key is None: 219 # This should only get cached inside this evaluator, but add 220 # self id to be safe. 221 next_state_key = id(self) 222 multiset_function_can_cache = False 223 elif self.next_state_key is icepool.NoCache: 224 next_state_key = icepool.NoCache 225 multiset_function_can_cache = False 226 else: 227 next_state_key = self.next_state_key 228 multiset_function_can_cache = True 229 dungeon: MultisetEvaluatorDungeon[T] = MultisetEvaluatorDungeon( 230 self.next_state, next_state_key, multiset_function_can_cache, 231 dungeonlet_flats) 232 quest: MultisetEvaluatorQuest[T, U_co] = MultisetEvaluatorQuest( 233 self.initial_state, self.extra_outcomes, self.final_outcome, 234 questlet_flats) 235 sources = tuple(itertools.chain.from_iterable(sources)) 236 weight = math.prod(weights) 237 yield dungeon, quest, sources, weight 238 239 def _should_cache(self, dungeon: 'Dungeon[T]') -> bool: 240 return dungeon.__hash__ is not None
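Conceptually, evaluation folds `next_state` over the distinct outcomes in sorted order, passing each outcome's count. Below is a minimal plain-Python sketch of that fold for a single concrete roll; it does not use icepool's probability machinery, and the names `evaluate_single_roll` and `largest_count` are hypothetical, chosen only for illustration:

```python
from collections import Counter

def evaluate_single_roll(next_state, roll, order=1):
    """Fold a next_state-style function over one concrete roll.

    Mimics a single path of a MultisetEvaluator computation: each distinct
    outcome is visited in sorted order, together with its count.
    """
    counts = Counter(roll)
    state = None  # default initial state, as with initial_state()
    for outcome in sorted(counts, reverse=(order < 0)):
        state = next_state(state, order, outcome, counts[outcome])
    return state

def largest_count(state, order, outcome, count):
    # Largest matching set: track the biggest count of any single outcome.
    return count if state is None else max(state, count)

evaluate_single_roll(largest_count, [3, 3, 5, 3, 1])  # -> 3 (three 3s)
```

The real evaluator runs this fold over every path of the probability tree at once, merging paths that reach the same state, which is why states must be hashable.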
30class Order(enum.IntEnum): 31 """Can be used to define what order outcomes are seen in by MultisetEvaluators.""" 32 Ascending = 1 33 Descending = -1 34 Any = 0 35 36 def merge(*orders: 'Order') -> 'Order': 37 """Merges the given Orders. 38 39 Returns: 40 `Any` if all arguments are `Any`. 41 `Ascending` if there is at least one `Ascending` in the arguments. 42 `Descending` if there is at least one `Descending` in the arguments. 43 44 Raises: 45 `ConflictingOrderError` if both `Ascending` and `Descending` are in 46 the arguments. 47 """ 48 result = Order.Any 49 for order in orders: 50 if (result > 0 and order < 0) or (result < 0 and order > 0): 51 raise ConflictingOrderError( 52 f'Conflicting orders {orders}.\n' + 53 'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.' 54 ) 55 if result == Order.Any: 56 result = order 57 return result 58 59 def __neg__(self) -> 'Order': 60 if self is Order.Ascending: 61 return Order.Descending 62 elif self is Order.Descending: 63 return Order.Ascending 64 else: 65 return Order.Any
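The merge rule above can be demonstrated with a self-contained restatement (this mirrors `Order` and `Order.merge` for illustration only; it raises a plain `ValueError` where icepool raises `ConflictingOrderError`):

```python
import enum

class Order(enum.IntEnum):
    # Mirrors icepool.Order for a self-contained demonstration.
    Ascending = 1
    Descending = -1
    Any = 0

def merge(*orders: Order) -> Order:
    # Any defers to either direction; Ascending and Descending conflict.
    result = Order.Any
    for order in orders:
        if (result > 0 and order < 0) or (result < 0 and order > 0):
            raise ValueError(f'Conflicting orders {orders}.')
        if result == Order.Any:
            result = order
    return result

merge(Order.Any, Order.Descending)            # -> Order.Descending
# merge(Order.Ascending, Order.Descending)    # raises
```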
19class ConflictingOrderError(OrderError): 20 """Indicates that two conflicting mandatory outcome orderings were encountered."""
23class UnsupportedOrder(OrderException): 24 """Indicates that the given order is not supported under the current context. 25 26 It may still be possible that retrying with the opposite order will succeed. 27 """
20class Deck(Population[T_co], MaybeHashKeyed): 21 """Sampling without replacement (within a single evaluation). 22 23 Quantities represent duplicates. 24 """ 25 26 _data: Counts[T_co] 27 _deal: int 28 29 @property 30 def _new_type(self) -> type: 31 return Deck 32 33 def __new__(cls, 34 outcomes: Sequence | Mapping[Any, int] = (), 35 times: Sequence[int] | int = 1) -> 'Deck[T_co]': 36 """Constructor for a `Deck`. 37 38 All quantities must be non-negative. Outcomes with zero quantity will be 39 omitted. 40 41 Args: 42 outcomes: The cards of the `Deck`. This can be one of the following: 43 * A `Sequence` of outcomes. Duplicates will contribute 44 quantity for each appearance. 45 * A `Mapping` from outcomes to quantities. 46 47 Each outcome may be one of the following: 48 * An outcome, which must be hashable and totally orderable. 49 * A `Deck`, which will be flattened into the result. If a 50 `times` is assigned to the `Deck`, the entire `Deck` will 51 be duplicated that many times. 52 times: Multiplies the number of times each element of `outcomes` 53 will be put into the `Deck`. 54 `times` can either be a sequence of the same length as 55 `outcomes` or a single `int` to apply to all elements of 56 `outcomes`. 57 """ 58 59 if icepool.population.again.contains_again(outcomes): 60 raise ValueError('Again cannot be used with Decks.') 61 62 outcomes, times = icepool.creation_args.itemize(outcomes, times) 63 64 if len(outcomes) == 1 and times[0] == 1 and isinstance( 65 outcomes[0], Deck): 66 return outcomes[0] 67 68 counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck( 69 outcomes, times) 70 71 return Deck._new_raw(counts) 72 73 @classmethod 74 def _new_raw(cls, data: Counts[T_co]) -> 'Deck[T_co]': 75 """Creates a new `Deck` using already-processed arguments. 76 77 Args: 78 data: At this point, this is a Counts. 
79 """ 80 self = super(Population, cls).__new__(cls) 81 self._data = data 82 return self 83 84 def keys(self) -> CountsKeysView[T_co]: 85 return self._data.keys() 86 87 def values(self) -> CountsValuesView: 88 return self._data.values() 89 90 def items(self) -> CountsItemsView[T_co]: 91 return self._data.items() 92 93 def __getitem__(self, outcome) -> int: 94 return self._data[outcome] 95 96 def __iter__(self) -> Iterator[T_co]: 97 return iter(self.keys()) 98 99 def __len__(self) -> int: 100 return len(self._data) 101 102 size = icepool.Population.denominator 103 104 @cached_property 105 def _popped_min(self) -> tuple['Deck[T_co]', int]: 106 return self._new_raw(self._data.remove_min()), self.quantities()[0] 107 108 def _pop_min(self) -> tuple['Deck[T_co]', int]: 109 """A `Deck` with the min outcome removed.""" 110 return self._popped_min 111 112 @cached_property 113 def _popped_max(self) -> tuple['Deck[T_co]', int]: 114 return self._new_raw(self._data.remove_max()), self.quantities()[-1] 115 116 def _pop_max(self) -> tuple['Deck[T_co]', int]: 117 """A `Deck` with the max outcome removed.""" 118 return self._popped_max 119 120 @overload 121 def deal(self, hand_size: int, /) -> 'icepool.Deal[T_co]': 122 ... 123 124 @overload 125 def deal(self, 126 hand_sizes: Iterable[int]) -> 'icepool.MultiDeal[T_co, Any]': 127 ... 128 129 @overload 130 def deal( 131 self, hand_sizes: int | Iterable[int] 132 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]': 133 ... 134 135 def deal( 136 self, hand_sizes: int | Iterable[int] 137 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]': 138 """Deals the specified number of cards from this deck. 139 140 Args: 141 hand_sizes: Either an integer, in which case a `Deal` will be 142 returned, or an iterable of multiple hand sizes, in which case a 143 `MultiDeal` will be returned. 
144 """ 145 if isinstance(hand_sizes, int): 146 return icepool.Deal(self, hand_sizes) 147 else: 148 return icepool.MultiDeal( 149 self, tuple((hand_size, 1) for hand_size in hand_sizes)) 150 151 def deal_groups( 152 self, *hand_groups: tuple[int, 153 int]) -> 'icepool.MultiDeal[T_co, Any]': 154 """EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order. 155 156 Args: 157 hand_groups: Each argument is a tuple (hand_size, group_size), 158 denoting the number of cards in each hand of the group and 159 the number of hands in the group respectively. 160 """ 161 return icepool.MultiDeal(self, hand_groups) 162 163 # Binary operators. 164 165 def additive_union( 166 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 167 """Both decks merged together.""" 168 return functools.reduce(operator.add, args, 169 initial=self) # type: ignore 170 171 def __add__(self, 172 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 173 data = Counter(self._data) 174 for outcome, count in Counter(other).items(): 175 data[outcome] += count 176 return Deck(+data) 177 178 __radd__ = __add__ 179 180 def difference(self, *args: 181 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 182 """This deck with the other cards removed (but not below zero of each card).""" 183 return functools.reduce(operator.sub, args, 184 initial=self) # type: ignore 185 186 def __sub__(self, 187 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 188 data = Counter(self._data) 189 for outcome, count in Counter(other).items(): 190 data[outcome] -= count 191 return Deck(+data) 192 193 def __rsub__(self, 194 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 195 data = Counter(other) 196 for outcome, count in self.items(): 197 data[outcome] -= count 198 return Deck(+data) 199 200 def intersection( 201 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 202 """The cards that both decks have.""" 203 return 
functools.reduce(operator.and_, args, 204 initial=self) # type: ignore 205 206 def __and__(self, 207 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 208 data: Counter[T_co] = Counter() 209 for outcome, count in Counter(other).items(): 210 data[outcome] = min(self.get(outcome, 0), count) 211 return Deck(+data) 212 213 __rand__ = __and__ 214 215 def union(self, *args: 216 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 217 """As many of each card as the deck that has more of them.""" 218 return functools.reduce(operator.or_, args, 219 initial=self) # type: ignore 220 221 def __or__(self, 222 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 223 data = Counter(self._data) 224 for outcome, count in Counter(other).items(): 225 data[outcome] = max(data[outcome], count) 226 return Deck(+data) 227 228 __ror__ = __or__ 229 230 def symmetric_difference( 231 self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 232 """As many of each card as the deck that has more of them.""" 233 return self ^ other 234 235 def __xor__(self, 236 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 237 data = Counter(self._data) 238 for outcome, count in Counter(other).items(): 239 data[outcome] = abs(data[outcome] - count) 240 return Deck(+data) 241 242 def __mul__(self, other: int) -> 'Deck[T_co]': 243 if not isinstance(other, int): 244 return NotImplemented 245 return self.multiply_quantities(other) 246 247 __rmul__ = __mul__ 248 249 def __floordiv__(self, other: int) -> 'Deck[T_co]': 250 if not isinstance(other, int): 251 return NotImplemented 252 return self.divide_quantities(other) 253 254 def __mod__(self, other: int) -> 'Deck[T_co]': 255 if not isinstance(other, int): 256 return NotImplemented 257 return self.modulo_quantities(other) 258 259 def map( 260 self, 261 repl: 262 'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]', 263 /, 264 *, 265 star: bool | None = None) -> 'Deck[U]': 266 
"""Maps outcomes of this `Deck` to other outcomes. 267 268 Args: 269 repl: One of the following: 270 * A callable returning a new outcome for each old outcome. 271 * A map from old outcomes to new outcomes. 272 Unmapped old outcomes stay the same. 273 The new outcomes may be `Deck`s, in which case one card is 274 replaced with several. This is not recommended. 275 star: Whether outcomes should be unpacked into separate arguments 276 before sending them to a callable `repl`. 277 If not provided, this will be guessed based on the function 278 signature. 279 """ 280 # Convert to a single-argument function. 281 if callable(repl): 282 if star is None: 283 star = infer_star(repl) 284 if star: 285 286 def transition_function(outcome): 287 return repl(*outcome) 288 else: 289 290 def transition_function(outcome): 291 return repl(outcome) 292 else: 293 # repl is a mapping. 294 def transition_function(outcome): 295 if outcome in repl: 296 return repl[outcome] 297 else: 298 return outcome 299 300 return Deck( 301 [transition_function(outcome) for outcome in self.outcomes()], 302 times=self.quantities()) 303 304 @cached_property 305 def _sequence_cache( 306 self) -> 'MutableSequence[icepool.Die[tuple[T_co, ...]]]': 307 return [icepool.Die([()])] 308 309 def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]': 310 """Possible sequences produced by dealing from this deck a number of times. 311 312 This is extremely expensive computationally. If you don't care about 313 order, use `deal()` instead. 
314 """ 315 if deals < 0: 316 raise ValueError('The number of cards dealt cannot be negative.') 317 for i in range(len(self._sequence_cache), deals + 1): 318 319 def transition(curr): 320 remaining = icepool.Die(self - curr) 321 return icepool.map(lambda curr, next: curr + (next, ), curr, 322 remaining) 323 324 result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[ 325 i - 1].map(transition) 326 self._sequence_cache.append(result) 327 return result 328 329 @cached_property 330 def hash_key(self) -> tuple: 331 return Deck, tuple(self.items()) 332 333 def __repr__(self) -> str: 334 items_string = ', '.join(f'{repr(outcome)}: {quantity}' 335 for outcome, quantity in self.items()) 336 return type(self).__qualname__ + '({' + items_string + '})'
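Dealing from a `Deck` is sampling without replacement, so hand counts follow the hypergeometric distribution. A stdlib-only sketch of the underlying counting (the 52-card deck here is a hypothetical example, not icepool API):

```python
from math import comb

# Hypothetical deck: 4 aces and 48 other cards; deal a 5-card hand.
def hands_with_k_aces(k: int, hand_size: int = 5,
                      aces: int = 4, others: int = 48) -> int:
    # Choose k of the aces, then fill the rest of the hand from the others.
    return comb(aces, k) * comb(others, hand_size - k)

total_hands = comb(52, 5)
# The counts over all possible ace counts partition every hand exactly once.
assert sum(hands_with_k_aces(k) for k in range(5)) == total_hands
```

`Deck.deal()` performs this kind of counting for you, with the hand treated as a multiset rather than a sequence; `Deck.sequence()` is the order-sensitive (and far more expensive) alternative.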
311 def denominator(self) -> int: 312 """The sum of all quantities (e.g. weights or duplicates). 313 314 For the number of unique outcomes, use `len()`. 315 """ 316 return self._denominator
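The distinction between `denominator()` and `len()` can be shown with a plain mapping standing in for a population's data (the weights here are hypothetical):

```python
# Hypothetical quantities: outcome 2 is twice as likely as outcome 1, etc.
weights = {1: 1, 2: 2, 3: 3}

denominator = sum(weights.values())  # analogous to denominator(): 6
num_unique = len(weights)            # analogous to len(): 3
probability_of_3 = weights[3] / denominator  # 3/6 = 0.5
```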
The sum of all quantities (e.g. weights or duplicates).
For the number of unique outcomes, use len().
135 def deal( 136 self, hand_sizes: int | Iterable[int] 137 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, Any]': 138 """Deals the specified number of cards from this deck. 139 140 Args: 141 hand_sizes: Either an integer, in which case a `Deal` will be 142 returned, or an iterable of multiple hand sizes, in which case a 143 `MultiDeal` will be returned. 144 """ 145 if isinstance(hand_sizes, int): 146 return icepool.Deal(self, hand_sizes) 147 else: 148 return icepool.MultiDeal( 149 self, tuple((hand_size, 1) for hand_size in hand_sizes))
151 def deal_groups( 152 self, *hand_groups: tuple[int, 153 int]) -> 'icepool.MultiDeal[T_co, Any]': 154 """EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order. 155 156 Args: 157 hand_groups: Each argument is a tuple (hand_size, group_size), 158 denoting the number of cards in each hand of the group and 159 the number of hands in the group respectively. 160 """ 161 return icepool.MultiDeal(self, hand_groups)
EXPERIMENTAL: Deal cards into groups of hands, where the hands in each group could be produced in arbitrary order.
Arguments:
- hand_groups: Each argument is a tuple (hand_size, group_size), denoting the number of cards in each hand of the group and the number of hands in the group respectively.
165 def additive_union( 166 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 167 """Both decks merged together.""" 168 return functools.reduce(operator.add, args, 169 initial=self) # type: ignore
Both decks merged together.
180 def difference(self, *args: 181 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 182 """This deck with the other cards removed (but not below zero of each card).""" 183 return functools.reduce(operator.sub, args, 184 initial=self) # type: ignore
This deck with the other cards removed (but not below zero of each card).
200 def intersection( 201 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 202 """The cards that both decks have.""" 203 return functools.reduce(operator.and_, args, 204 self) # type: ignore
The cards that both decks have.
215 def union(self, *args: 216 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 217 """As many of each card as the deck that has more of them.""" 218 return functools.reduce(operator.or_, args, 219 self) # type: ignore
As many of each card as the deck that has more of them.
230 def symmetric_difference( 231 self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 232 """The cards that appear in one deck but not the other: the absolute difference of each card's count.""" 233 return self ^ other
The cards that appear in one deck but not the other: the absolute difference of each card's count.
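The multiset semantics of these operations can be mimicked with `collections.Counter` (a stdlib sketch, not icepool itself; `Counter` has no `^`, so the symmetric difference is spelled out as the sum of the two one-sided differences):

```python
from collections import Counter

# Two small decks as multisets of cards (stdlib stand-ins, not icepool Decks).
a = Counter({'ace': 2, 'king': 1})
b = Counter({'ace': 1, 'queen': 3})

additive_union = a + b                    # quantities added
difference = a - b                        # subtract, but never below zero per card
intersection = a & b                      # min of each quantity
union = a | b                             # max of each quantity
symmetric_difference = (a - b) + (b - a)  # absolute difference per card
```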
259 def map( 260 self, 261 repl: 262 'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]', 263 /, 264 *, 265 star: bool | None = None) -> 'Deck[U]': 266 """Maps outcomes of this `Deck` to other outcomes. 267 268 Args: 269 repl: One of the following: 270 * A callable returning a new outcome for each old outcome. 271 * A map from old outcomes to new outcomes. 272 Unmapped old outcomes stay the same. 273 The new outcomes may be `Deck`s, in which case one card is 274 replaced with several. This is not recommended. 275 star: Whether outcomes should be unpacked into separate arguments 276 before sending them to a callable `repl`. 277 If not provided, this will be guessed based on the function 278 signature. 279 """ 280 # Convert to a single-argument function. 281 if callable(repl): 282 if star is None: 283 star = infer_star(repl) 284 if star: 285 286 def transition_function(outcome): 287 return repl(*outcome) 288 else: 289 290 def transition_function(outcome): 291 return repl(outcome) 292 else: 293 # repl is a mapping. 294 def transition_function(outcome): 295 if outcome in repl: 296 return repl[outcome] 297 else: 298 return outcome 299 300 return Deck( 301 [transition_function(outcome) for outcome in self.outcomes()], 302 times=self.quantities())
Maps outcomes of this Deck to other outcomes.
Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A map from old outcomes to new outcomes. Unmapped old outcomes stay the same.
  The new outcomes may be Decks, in which case one card is replaced with several. This is not recommended.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable repl. If not provided, this will be guessed based on the function signature.
309 def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]': 310 """Possible sequences produced by dealing from this deck a number of times. 311 312 This is extremely expensive computationally. If you don't care about 313 order, use `deal()` instead. 314 """ 315 if deals < 0: 316 raise ValueError('The number of cards dealt cannot be negative.') 317 for i in range(len(self._sequence_cache), deals + 1): 318 319 def transition(curr): 320 remaining = icepool.Die(self - curr) 321 return icepool.map(lambda curr, next: curr + (next, ), curr, 322 remaining) 323 324 result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[ 325 i - 1].map(transition) 326 self._sequence_cache.append(result) 327 return self._sequence_cache[deals]
Possible sequences produced by dealing from this deck a number of times.
This is extremely expensive computationally. If you don't care about
order, use deal() instead.
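The cost gap is combinatorial: order-sensitive sequences outnumber unordered hands by a factor of hand_size factorial. A quick stdlib check:

```python
from math import comb, factorial, perm

deck_size, hand_size = 10, 3
sequences = perm(deck_size, hand_size)  # ordered deals: 720
hands = comb(deck_size, hand_size)      # unordered deals: 120

# Every unordered hand corresponds to hand_size! orderings.
assert sequences == hands * factorial(hand_size)
```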
15class Deal(KeepGenerator[T]): 16 """Represents an unordered deal of a single hand from a `Deck`.""" 17 18 _deck: 'icepool.Deck[T]' 19 20 def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None: 21 """Constructor. 22 23 For algorithmic reasons, you must pre-commit to the number of cards to 24 deal. 25 26 It is permissible to deal zero cards from an empty deck, but not all 27 evaluators will handle this case, especially if they depend on the 28 outcome type. Dealing zero cards from a non-empty deck does not have 29 this issue. 30 31 Args: 32 deck: The `Deck` to deal from. 33 hand_size: How many cards to deal. 34 """ 35 if hand_size < 0: 36 raise ValueError('hand_size cannot be negative.') 37 if hand_size > deck.size(): 38 raise ValueError( 39 'The number of cards dealt cannot exceed the size of the deck.' 40 ) 41 self._deck = deck 42 self._keep_tuple = (1, ) * hand_size 43 44 @classmethod 45 def _new_raw(cls, deck: 'icepool.Deck[T]', 46 keep_tuple: tuple[int, ...]) -> 'Deal[T]': 47 self = super(Deal, cls).__new__(cls) 48 self._deck = deck 49 self._keep_tuple = keep_tuple 50 return self 51 52 def _make_source(self): 53 return DealSource(self._deck, self._keep_tuple) 54 55 def _set_keep_tuple(self, keep_tuple: tuple[int, ...]) -> 'Deal[T]': 56 return Deal._new_raw(self._deck, keep_tuple) 57 58 def deck(self) -> 'icepool.Deck[T]': 59 """The `Deck` the cards are dealt from.""" 60 return self._deck 61 62 def hand_size(self) -> int: 63 """The number of cards dealt.""" 64 return len(self._keep_tuple) 65 66 def outcomes(self) -> CountsKeysView[T]: 67 """The outcomes of the `Deck` in ascending order. 68 69 These are also the `keys` of the `Deck` as a `Mapping`. 70 Prefer to use the name `outcomes`. 
71 """ 72 return self.deck().outcomes() 73 74 def denominator(self) -> int: 75 return icepool.math.comb(self.deck().size(), self.hand_size()) 76 77 @property 78 def hash_key(self): 79 return Deal, self._deck, self._keep_tuple 80 81 def __repr__(self) -> str: 82 return type( 83 self 84 ).__qualname__ + f'({repr(self.deck())}, hand_size={self.hand_size()})' 85 86 def __str__(self) -> str: 87 return type( 88 self 89 ).__qualname__ + f' of hand_size={self.hand_size()} from deck:\n' + str( 90 self.deck())
Represents an unordered deal of a single hand from a Deck.
20 def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None: 21 """Constructor. 22 23 For algorithmic reasons, you must pre-commit to the number of cards to 24 deal. 25 26 It is permissible to deal zero cards from an empty deck, but not all 27 evaluators will handle this case, especially if they depend on the 28 outcome type. Dealing zero cards from a non-empty deck does not have 29 this issue. 30 31 Args: 32 deck: The `Deck` to deal from. 33 hand_size: How many cards to deal. 34 """ 35 if hand_size < 0: 36 raise ValueError('hand_size cannot be negative.') 37 if hand_size > deck.size(): 38 raise ValueError( 39 'The number of cards dealt cannot exceed the size of the deck.' 40 ) 41 self._deck = deck 42 self._keep_tuple = (1, ) * hand_size
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The Deck to deal from.
- hand_size: How many cards to deal.
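The number of distinct unordered hands behind a `Deal` is a binomial coefficient, and the edge cases described above fall out of `math.comb` directly (a stdlib illustration, not icepool API):

```python
from math import comb

# Number of distinct unordered hands when dealing from a deck.
five_card_hands = comb(52, 5)   # 2598960 hands from a 52-card deck
empty_deal = comb(0, 0)         # 1: dealing zero cards from an empty deck
```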
58 def deck(self) -> 'icepool.Deck[T]': 59 """The `Deck` the cards are dealt from.""" 60 return self._deck
The Deck the cards are dealt from.
62 def hand_size(self) -> int: 63 """The number of cards dealt.""" 64 return len(self._keep_tuple)
The number of cards dealt.
A hash key for this object. This should include a type.
If None, this will not compare equal to any other object.
20class MultiDeal(MultisetTupleGenerator[T, IntTupleOut]): 21 """Represents a deal of multiple hands from a `Deck`. 22 23 The cards within each hand are in sorted order. Furthermore, hands may be 24 organized into groups in which the hands are initially indistinguishable. 25 """ 26 27 _deck: 'icepool.Deck[T]' 28 # An ordered tuple of hand groups. 29 # Each group is designated by (hand_size, hand_count). 30 _hand_groups: tuple[tuple[int, int], ...] 31 32 def __init__(self, deck: 'icepool.Deck[T]', 33 hand_groups: tuple[tuple[int, int], ...]) -> None: 34 """Constructor. 35 36 For algorithmic reasons, you must pre-commit to the number of cards to 37 deal for each hand. 38 39 It is permissible to deal zero cards from an empty deck, but not all 40 evaluators will handle this case, especially if they depend on the 41 outcome type. Dealing zero cards from a non-empty deck does not have 42 this issue. 43 44 Args: 45 deck: The `Deck` to deal from. 46 hand_groups: An ordered tuple of hand groups. 47 Each group is designated by (hand_size, hand_count) with the 48 hands of each group being arbitrarily ordered. 49 The resulting counts are produced in a flat tuple. 50 """ 51 self._deck = deck 52 self._hand_groups = hand_groups 53 if self.total_cards_dealt() > self.deck().size(): 54 raise ValueError( 55 'The total number of cards dealt cannot exceed the size of the deck.' 
56 ) 57 58 @classmethod 59 def _new_raw( 60 cls, deck: 'icepool.Deck[T]', 61 hand_sizes: tuple[tuple[int, int], 62 ...]) -> 'MultiDeal[T, IntTupleOut]': 63 self = super(MultiDeal, cls).__new__(cls) 64 self._deck = deck 65 self._hand_groups = hand_sizes 66 return self 67 68 def deck(self) -> 'icepool.Deck[T]': 69 """The `Deck` the cards are dealt from.""" 70 return self._deck 71 72 def hand_sizes(self) -> IntTupleOut: 73 """The number of cards dealt to each hand as a tuple.""" 74 return cast( 75 IntTupleOut, 76 tuple( 77 itertools.chain.from_iterable( 78 (hand_size, ) * group_size 79 for hand_size, group_size in self._hand_groups))) 80 81 def total_cards_dealt(self) -> int: 82 """The total number of cards dealt.""" 83 return sum(hand_size * group_size 84 for hand_size, group_size in self._hand_groups) 85 86 def outcomes(self) -> CountsKeysView[T]: 87 """The outcomes of the `Deck` in ascending order. 88 89 These are also the `keys` of the `Deck` as a `Mapping`. 90 Prefer to use the name `outcomes`. 
91 """ 92 return self.deck().outcomes() 93 94 def __len__(self) -> int: 95 return sum(group_size for _, group_size in self._hand_groups) 96 97 @cached_property 98 def _denominator(self) -> int: 99 d_total = icepool.math.comb(self.deck().size(), 100 self.total_cards_dealt()) 101 d_split = math.prod( 102 icepool.math.comb(self.total_cards_dealt(), h) 103 for h in self.hand_sizes()[1:]) 104 return d_total * d_split 105 106 def denominator(self) -> int: 107 return self._denominator 108 109 def _make_source(self) -> 'MultisetTupleSource[T, IntTupleOut]': 110 return MultiDealSource(self._deck, self._hand_groups) 111 112 @property 113 def hash_key(self) -> Hashable: 114 return MultiDeal, self._deck, self._hand_groups 115 116 def __repr__(self) -> str: 117 return type( 118 self 119 ).__qualname__ + f'({repr(self.deck())}, hand_groups={self._hand_groups})' 120 121 def __str__(self) -> str: 122 return type( 123 self 124 ).__qualname__ + f' of hand_groups={self._hand_groups} from deck:\n' + str( 125 self.deck())
Represents a deal of multiple hands from a Deck.
The cards within each hand are in sorted order. Furthermore, hands may be organized into groups in which the hands are initially indistinguishable.
32 def __init__(self, deck: 'icepool.Deck[T]', 33 hand_groups: tuple[tuple[int, int], ...]) -> None: 34 """Constructor. 35 36 For algorithmic reasons, you must pre-commit to the number of cards to 37 deal for each hand. 38 39 It is permissible to deal zero cards from an empty deck, but not all 40 evaluators will handle this case, especially if they depend on the 41 outcome type. Dealing zero cards from a non-empty deck does not have 42 this issue. 43 44 Args: 45 deck: The `Deck` to deal from. 46 hand_groups: An ordered tuple of hand groups. 47 Each group is designated by (hand_size, hand_count) with the 48 hands of each group being arbitrarily ordered. 49 The resulting counts are produced in a flat tuple. 50 """ 51 self._deck = deck 52 self._hand_groups = hand_groups 53 if self.total_cards_dealt() > self.deck().size(): 54 raise ValueError( 55 'The total number of cards dealt cannot exceed the size of the deck.' 56 )
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal for each hand.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The Deck to deal from.
- hand_groups: An ordered tuple of hand groups. Each group is designated by (hand_size, hand_count), with the hands of each group being arbitrarily ordered. The resulting counts are produced in a flat tuple.
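For two hands, the "choose all dealt cards at once, then split them" denominator used by `MultiDeal` agrees with the sequential "choose one hand, then the next from what remains" count. A stdlib check of that identity (deck size and hand sizes here are arbitrary examples):

```python
from math import comb, prod

n = 10                # deck size
hand_sizes = (3, 2)   # two hands
total = sum(hand_sizes)

# Sequential count: first hand from the deck, second from the remainder.
sequential = comb(n, 3) * comb(n - 3, 2)

# Choose-then-split form, as in MultiDeal.denominator for two hands.
split = comb(n, total) * prod(comb(total, h) for h in hand_sizes[1:])

assert sequential == split == 2520
```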
68 def deck(self) -> 'icepool.Deck[T]': 69 """The `Deck` the cards are dealt from.""" 70 return self._deck
The Deck the cards are dealt from.
72 def hand_sizes(self) -> IntTupleOut: 73 """The number of cards dealt to each hand as a tuple.""" 74 return cast( 75 IntTupleOut, 76 tuple( 77 itertools.chain.from_iterable( 78 (hand_size, ) * group_size 79 for hand_size, group_size in self._hand_groups)))
The number of cards dealt to each hand as a tuple.
81 def total_cards_dealt(self) -> int: 82 """The total number of cards dealt.""" 83 return sum(hand_size * group_size 84 for hand_size, group_size in self._hand_groups)
The total number of cards dealt.
112 @property 113 def hash_key(self) -> Hashable: 114 return MultiDeal, self._deck, self._hand_groups
A hash key for this object. This should include a type.
If None, this will not compare equal to any other object.
55def multiset_function(wrapped: Callable[ 56 ..., 57 'MultisetFunctionRawResult[T, U_co] | tuple[MultisetFunctionRawResult[T, U_co], ...]'], 58 /) -> 'MultisetEvaluatorBase[T, U_co]': 59 """EXPERIMENTAL: A decorator that turns a function into a multiset evaluator. 60 61 The provided function should take in arguments representing multisets, 62 do a limited set of operations on them (see `MultisetExpression`), and 63 finish off with an evaluation. You can return a tuple to perform a joint 64 evaluation. 65 66 For example, to create an evaluator which computes the elements each of two 67 multisets has that the other doesn't: 68 ```python 69 @multiset_function 70 def two_way_difference(a, b): 71 return (a - b).expand(), (b - a).expand() 72 ``` 73 74 The special `star` keyword argument will unpack tuple-valued counts of the 75 first argument inside the multiset function. For example, 76 ```python 77 hands = deck.deal((5, 5)) 78 two_way_difference(hands, star=True) 79 ``` 80 effectively unpacks as if we had written 81 ```python 82 @multiset_function 83 def two_way_difference(hands): 84 a, b = hands 85 return (a - b).expand(), (b - a).expand() 86 ``` 87 88 If not provided explicitly, `star` will be inferred automatically. 89 90 You can pass non-multiset values as keyword-only arguments. 91 ```python 92 @multiset_function 93 def count_outcomes(a, *, target): 94 return a.keep_outcomes(target).size() 95 96 count_outcomes(a, target=[5, 6]) 97 ``` 98 99 While in theory `@multiset_function` implements late binding similar to 100 ordinary Python functions, I recommend using only pure functions. 101 102 Be careful when using control structures: you cannot branch on the value of 103 a multiset expression or evaluation, so e.g. 104 105 ```python 106 @multiset_function 107 def bad(a, b) 108 if a == b: 109 ... 110 ``` 111 112 is not allowed. 113 114 `multiset_function` has considerable overhead, being effectively a 115 mini-language within Python. 
For better performance, you can try 116 implementing your own subclass of `MultisetEvaluator` directly. 117 118 Args: 119 function: This should take in multiset expressions as positional 120 arguments, and non-multiset variables as keyword arguments. 121 """ 122 return MultisetFunctionEvaluator(wrapped)
EXPERIMENTAL: A decorator that turns a function into a multiset evaluator.
The provided function should take in arguments representing multisets,
do a limited set of operations on them (see MultisetExpression), and
finish off with an evaluation. You can return a tuple to perform a joint
evaluation.
For example, to create an evaluator which computes the elements each of two multisets has that the other doesn't:
@multiset_function
def two_way_difference(a, b):
return (a - b).expand(), (b - a).expand()
The special star keyword argument will unpack tuple-valued counts of the
first argument inside the multiset function. For example,
hands = deck.deal((5, 5))
two_way_difference(hands, star=True)
effectively unpacks as if we had written
@multiset_function
def two_way_difference(hands):
a, b = hands
return (a - b).expand(), (b - a).expand()
If not provided explicitly, star will be inferred automatically.
You can pass non-multiset values as keyword-only arguments.
@multiset_function
def count_outcomes(a, *, target):
return a.keep_outcomes(target).size()
count_outcomes(a, target=[5, 6])
While in theory @multiset_function implements late binding similar to
ordinary Python functions, I recommend using only pure functions.
Be careful when using control structures: you cannot branch on the value of a multiset expression or evaluation, so e.g.
@multiset_function
def bad(a, b):
if a == b:
...
is not allowed.
multiset_function has considerable overhead, being effectively a
mini-language within Python. For better performance, you can try
implementing your own subclass of MultisetEvaluator directly.
Arguments:
- function: This should take in multiset expressions as positional arguments, and non-multiset variables as keyword arguments.
48class MultisetParameter(MultisetParameterBase[T, int], MultisetExpression[T]): 49 """A multiset parameter with a count of a single `int`.""" 50 51 def __init__(self, name: str, arg_index: int, star_index: int | None): 52 self._name = name 53 self._arg_index = arg_index 54 self._star_index = star_index
A multiset parameter with a count of a single int.
57class MultisetTupleParameter(MultisetParameterBase[T, IntTupleOut], 58 MultisetTupleExpression[T, IntTupleOut]): 59 """A multiset parameter with a count of a tuple of `int`s.""" 60 61 def __init__(self, name: str, arg_index: int, length: int): 62 self._name = name 63 self._arg_index = arg_index 64 self._star_index = None 65 self._length = length 66 67 def __len__(self): 68 return self._length
A multiset parameter with a count of a tuple of ints.
Indicates that caching should not be performed. Exact meaning depends on context.
22def format_probability_inverse(probability, /, int_start: int = 20): 23 """EXPERIMENTAL: Formats the inverse of a value as "1 in N". 24 25 Args: 26 probability: The value to be formatted. 27 int_start: If N = 1 / probability is between this value and 1 million 28 times this value it will be formatted as an integer. Otherwise it 29 will be formatted as a float with precision at least 1 part in int_start. 30 """ 31 max_precision = math.ceil(math.log10(int_start)) 32 if probability <= 0 or probability > 1: 33 return 'n/a' 34 product = probability * int_start 35 if product <= 1: 36 if probability * int_start * 10**6 <= 1: 37 return f'1 in {1.0 / probability:<.{max_precision}e}' 38 else: 39 return f'1 in {round(1 / probability)}' 40 41 precision = 0 42 precision_factor = 1 43 while product > precision_factor and precision < max_precision: 44 precision += 1 45 precision_factor *= 10 46 return f'1 in {1.0 / probability:<.{precision}f}'
EXPERIMENTAL: Formats the inverse of a value as "1 in N".
Arguments:
- probability: The value to be formatted.
- int_start: If N = 1 / probability is between this value and 1 million times this value it will be formatted as an integer. Otherwise it will be formatted as a float with precision of at least 1 part in int_start.
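Since the full source appears above, its behavior can be checked standalone. This is the same function reproduced (docstring elided) with a few sample inputs:

```python
import math

def format_probability_inverse(probability, /, int_start: int = 20):
    # Reproduced from the source above for a standalone check.
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability > 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'
    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'

print(format_probability_inverse(1 / 100))  # '1 in 100'
print(format_probability_inverse(0.5))      # '1 in 2.0'
print(format_probability_inverse(0))        # 'n/a'
```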
28class Wallenius(Generic[T]): 29 """EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution. 30 31 This is sampling without replacement with weights, where the entire weight 32 of a card goes away when it is pulled. 33 """ 34 _weight_decks: 'MutableMapping[int, icepool.Deck[T]]' 35 _weight_die: 'icepool.Die[int]' 36 37 def __init__(self, data: Iterable[tuple[T, int]] 38 | Mapping[T, int | tuple[int, int]]): 39 """Constructor. 40 41 Args: 42 data: Either an iterable of (outcome, weight), or a mapping from 43 outcomes to either weights or (weight, quantity). 44 hand_size: The number of outcomes to pull. 45 """ 46 self._weight_decks = {} 47 48 if isinstance(data, Mapping): 49 for outcome, v in data.items(): 50 if isinstance(v, int): 51 weight = v 52 quantity = 1 53 else: 54 weight, quantity = v 55 self._weight_decks[weight] = self._weight_decks.get( 56 weight, icepool.Deck()).append(outcome, quantity) 57 else: 58 for outcome, weight in data: 59 self._weight_decks[weight] = self._weight_decks.get( 60 weight, icepool.Deck()).append(outcome) 61 62 self._weight_die = icepool.Die({ 63 weight: weight * deck.denominator() 64 for weight, deck in self._weight_decks.items() 65 }) 66 67 def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]': 68 """Deals the specified number of outcomes from the Wallenius. 69 70 The result is a `MultisetExpression` representing the multiset of 71 outcomes dealt. 72 """ 73 if hand_size == 0: 74 return icepool.Pool([]) 75 76 def inner(weights): 77 weight_counts = Counter(weights) 78 result = None 79 for weight, count in weight_counts.items(): 80 deal = self._weight_decks[weight].deal(count) 81 if result is None: 82 result = deal 83 else: 84 result = result + deal 85 return result 86 87 hand_weights = _wallenius_weights(self._weight_die, hand_size) 88 return hand_weights.map_to_pool(inner, star=False)
EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution.
This is sampling without replacement with weights, where the entire weight of a card goes away when it is pulled.
37 def __init__(self, data: Iterable[tuple[T, int]] 38 | Mapping[T, int | tuple[int, int]]): 39 """Constructor. 40 41 Args: 42 data: Either an iterable of (outcome, weight), or a mapping from 43 outcomes to either weights or (weight, quantity). 45 """ 46 self._weight_decks = {} 47 48 if isinstance(data, Mapping): 49 for outcome, v in data.items(): 50 if isinstance(v, int): 51 weight = v 52 quantity = 1 53 else: 54 weight, quantity = v 55 self._weight_decks[weight] = self._weight_decks.get( 56 weight, icepool.Deck()).append(outcome, quantity) 57 else: 58 for outcome, weight in data: 59 self._weight_decks[weight] = self._weight_decks.get( 60 weight, icepool.Deck()).append(outcome) 61 62 self._weight_die = icepool.Die({ 63 weight: weight * deck.denominator() 64 for weight, deck in self._weight_decks.items() 65 })
Constructor.
Arguments:
- data: Either an iterable of (outcome, weight), or a mapping from outcomes to either weights or (weight, quantity).
67 def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]': 68 """Deals the specified number of outcomes from the Wallenius. 69 70 The result is a `MultisetExpression` representing the multiset of 71 outcomes dealt. 72 """ 73 if hand_size == 0: 74 return icepool.Pool([]) 75 76 def inner(weights): 77 weight_counts = Counter(weights) 78 result = None 79 for weight, count in weight_counts.items(): 80 deal = self._weight_decks[weight].deal(count) 81 if result is None: 82 result = deal 83 else: 84 result = result + deal 85 return result 86 87 hand_weights = _wallenius_weights(self._weight_die, hand_size) 88 return hand_weights.map_to_pool(inner, star=False)
Deals the specified number of outcomes from the Wallenius.
The result is a MultisetExpression representing the multiset of
outcomes dealt.
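The sampling process described above ("the entire weight of a card goes away when it is pulled") can be illustrated with a minimal Monte Carlo sketch. This is stdlib-only; the function name and structure are illustrative, not icepool API, and it samples rather than computing exact probabilities:

```python
import random

def wallenius_draw(weights, hand_size, rng):
    """One Monte Carlo draw from a Wallenius-style process: each pull
    removes the pulled card and its entire weight (illustrative sketch)."""
    pool = dict(weights)  # outcome -> weight
    hand = []
    for _ in range(hand_size):
        outcomes = list(pool)
        chosen = rng.choices(outcomes, weights=[pool[o] for o in outcomes])[0]
        hand.append(chosen)
        del pool[chosen]  # the whole weight goes away with the card
    return hand

rng = random.Random(0)
hand = wallenius_draw({'a': 1, 'b': 2, 'c': 4}, 2, rng)
print(hand)
```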