# icepool

Package for computing dice and card probabilities.

Starting with `v0.25.1`, you can replace `latest` in the URL with an old version number to get the documentation for that version.

See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html) for examples.

General conventions:

- Instances are immutable (apart from internal caching). Anything that looks like it mutates an instance actually returns a separate instance with the change.
```python
"""Package for computing dice and card probabilities.

Starting with `v0.25.1`, you can replace `latest` in the URL with an old version
number to get the documentation for that version.

See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html)
for examples.

[Visit the project page.](https://github.com/HighDiceRoller/icepool)

General conventions:

* Instances are immutable (apart from internal caching). Anything that looks
  like it mutates an instance actually returns a separate instance with the
  change.
"""

__docformat__ = 'google'

__version__ = '1.7.2'

from typing import Final

from icepool.typing import Outcome, RerollType
from icepool.order import Order, ConflictingOrderError, UnsupportedOrderError

Reroll: Final = RerollType.Reroll
"""Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function
and method descriptions for details.

This effectively removes the outcome from the probability space, along with its
contribution to the denominator.

This can be used for conditional probability by removing all outcomes not
consistent with the given observations.

Operation in specific cases:

* When used with `Again`, only that stage is rerolled, not the entire `Again`
  tree.
* To reroll with limited depth, use `Die.reroll()`, or `Again` with no
  modification.
* When used with `MultisetEvaluator`, the entire evaluation is rerolled.
"""

# Expose certain names at top-level.

from icepool.function import (d, z, __getattr__, coin, stochastic_round,
                              one_hot, from_cumulative, from_rv, pointwise_max,
                              pointwise_min, min_outcome, max_outcome,
                              consecutive, sorted_union, commonize_denominator,
                              reduce, accumulate, map, map_function,
                              map_and_time, map_to_pool)

from icepool.population.base import Population
from icepool.population.die import implicit_convert_to_die, Die
from icepool.expand import iter_cartesian_product, cartesian_product, tupleize, vectorize
from icepool.collection.vector import Vector
from icepool.collection.symbols import Symbols
from icepool.population.again import AgainExpression

Again: Final = AgainExpression(is_additive=True)
"""A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor.
`AgainExpression`s should not be fed to functions or methods other than
`Die()`, but it can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
affect how these arguments are processed. At most one of `again_count` or
`again_depth` may be provided; if neither are provided, the behavior is as
`again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.

#### Count mode

When `again_count` is provided, we start with one roll queued and execute one
roll at a time. For every `Again` we roll, we queue another roll.
If we run out of rolls, we sum the rolls to find the result. If the total number
of rolls (not including the initial roll) would exceed `again_count`, we reroll
the entire process, effectively conditioning the process on not rolling more
than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`, which
means that only the following operators are allowed:

* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative.
For example, `str` or `tuple` outcomes will not produce elements with a definite
order.

#### Depth mode

When `again_depth=0`, `again_end` is directly substituted
for each occurrence of `Again`. For other values of `again_depth`, the result for
`again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth
are rerolled.

#### Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such
rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:
* Count mode: Any result that would cause the number of rolls to exceed
  `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
"""

from icepool.population.die_with_truth import DieWithTruth

from icepool.collection.counts import CountsKeysView, CountsValuesView, CountsItemsView

from icepool.population.keep import lowest, highest, middle

from icepool.generator.pool import Pool, standard_pool
from icepool.generator.keep import KeepGenerator
from icepool.generator.compound_keep import CompoundKeepGenerator

from icepool.generator.multiset_generator import MultisetGenerator
from icepool.generator.multiset_tuple_generator import MultisetTupleGenerator
from icepool.evaluator.multiset_evaluator import MultisetEvaluator

from icepool.population.deck import Deck
from icepool.generator.deal import Deal
from icepool.generator.multi_deal import MultiDeal

from icepool.expression.multiset_expression_base import MultisetVariableError
from icepool.expression.multiset_expression import MultisetExpression, implicit_convert_to_expression
from icepool.evaluator.multiset_function import multiset_function
from icepool.expression.multiset_variable import MultisetVariable
from icepool.expression.multiset_mixture import MultisetMixture
from icepool.expression.multiset_tuple_variable import MultisetTupleVariable

from icepool.population.format import format_probability_inverse

from icepool.wallenius import Wallenius

import icepool.generator as generator
import icepool.evaluator as evaluator
import icepool.operator as operator

import icepool.typing as typing
from icepool.expand import Expandable

__all__ = [
    'd', 'z', 'coin', 'stochastic_round', 'one_hot', 'Outcome', 'Die',
    'Population', 'tupleize', 'vectorize', 'Vector', 'Symbols', 'Again',
    'CountsKeysView', 'CountsValuesView', 'CountsItemsView', 'from_cumulative',
    'from_rv', 'pointwise_max', 'pointwise_min', 'lowest', 'highest', 'middle',
    'min_outcome', 'max_outcome', 'consecutive', 'sorted_union',
    'commonize_denominator', 'reduce', 'accumulate', 'map', 'map_function',
    'map_and_time', 'map_to_pool', 'Reroll', 'RerollType', 'Pool',
    'standard_pool', 'MultisetGenerator', 'MultisetExpression',
    'MultisetEvaluator', 'Order', 'ConflictingOrderError',
    'UnsupportedOrderError', 'Deck', 'Deal', 'MultiDeal', 'multiset_function',
    'function', 'typing', 'evaluator', 'format_probability_inverse',
    'Wallenius'
]
```
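The `Reroll` docstring above describes conditional probability as removing outcomes along with their contribution to the denominator. A minimal standard-library sketch of that idea, representing a die as a dict of integer weights (the `remove_outcomes` helper is hypothetical, not part of icepool):

```python
from fractions import Fraction

def remove_outcomes(die: dict, rerolled: set) -> dict:
    # Drop rerolled outcomes and their weight from the denominator,
    # as icepool's Reroll does for unlimited-depth rerolls.
    return {o: q for o, q in die.items() if o not in rerolled}

d6 = {face: 1 for face in range(1, 7)}
conditioned = remove_outcomes(d6, {1})  # condition on not rolling a 1

denominator = sum(conditioned.values())
p_six = Fraction(conditioned[6], denominator)
print(p_six)  # 1/5
```

The remaining weights are untouched, so the relative probabilities of the surviving outcomes are preserved; only the denominator shrinks.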
```python
@cache
def d(sides: int, /) -> 'icepool.Die[int]':
    """A standard die, uniformly distributed from `1` to `sides` inclusive.

    Don't confuse this with `icepool.Die()`:

    * `icepool.Die([6])`: A `Die` that always rolls the integer 6.
    * `icepool.d(6)`: A d6.

    You can also import individual standard dice from the `icepool` module, e.g.
    `from icepool import d6`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(1, sides + 1))
```
```python
@cache
def z(sides: int, /) -> 'icepool.Die[int]':
    """A die uniformly distributed from `0` to `sides - 1` inclusive.

    Equal to d(sides) - 1.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(0, sides))
```
```python
def coin(n: int | float | Fraction,
         d: int = 1,
         /,
         *,
         max_denominator: int | None = None) -> 'icepool.Die[bool]':
    """A `Die` that rolls `True` with probability `n / d`, and `False` otherwise.

    If `n <= 0` or `n >= d` the result will have only one outcome.

    Args:
        n: An int numerator, or a non-integer probability.
        d: An int denominator. Should not be provided if the first argument is
            not an int.
    """
    if not isinstance(n, int):
        if d != 1:
            raise ValueError(
                'If a non-int numerator is provided, a denominator must not be provided.'
            )
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n = fraction.numerator
        d = fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)

    return icepool.Die(data)
```
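`coin()` turns a non-integer probability into an integer numerator/denominator pair via `fractions.Fraction`, optionally coarsened with `limit_denominator`, and then builds `True`/`False` weights. A standard-library sketch of just that weight computation (the `coin_weights` helper is illustrative, not icepool API):

```python
from fractions import Fraction

def coin_weights(n, d=1, max_denominator=None):
    # Mirrors the weight logic of coin() above: non-int numerators are
    # converted via Fraction, optionally limited with limit_denominator.
    if not isinstance(n, int):
        if d != 1:
            raise ValueError('denominator not allowed with non-int numerator')
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n, d = fraction.numerator, fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)
    return data

print(coin_weights(0.3, max_denominator=10))  # {False: 7, True: 3}
```

Without `max_denominator`, a float like `0.3` converts to its exact binary fraction, which has a very large denominator; `limit_denominator(10)` snaps it to the nearby `3/10`.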
```python
def stochastic_round(x,
                     /,
                     *,
                     max_denominator: int | None = None) -> 'icepool.Die[int]':
    """Randomly rounds a value up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise, producing a `Die` with up to two outcomes.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    integer_part = math.floor(x)
    fractional_part = x - integer_part
    return integer_part + coin(fractional_part,
                               max_denominator=max_denominator)
```
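The same computation can be sketched with only the standard library, making the two-outcome distribution explicit (the `stochastic_round_dist` helper is hypothetical, not icepool API):

```python
import math
from fractions import Fraction

def stochastic_round_dist(x, max_denominator=None):
    # Distribution over the two nearest integers, mirroring
    # stochastic_round(): round up with probability x - floor(x).
    integer_part = math.floor(x)
    p_up = Fraction(x - integer_part)
    if max_denominator is not None:
        p_up = p_up.limit_denominator(max_denominator)
    dist = {}
    if p_up < 1:
        dist[integer_part] = 1 - p_up
    if p_up > 0:
        dist[integer_part + 1] = p_up
    return dist

print(stochastic_round_dist(2.25))  # {2: Fraction(3, 4), 3: Fraction(1, 4)}
```

Note that `2.25` is exactly representable in binary, so no `max_denominator` is needed here; for values like `2.3` the exact `Fraction` has a huge denominator unless limited.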
```python
def one_hot(sides: int, /) -> 'icepool.Die[tuple[bool, ...]]':
    """A `Die` with `Vector` outcomes with one element set to `True` uniformly at random and the rest `False`.

    This is an easy (if somewhat expensive) way of representing how many dice
    in a pool rolled each number. For example, the outcomes of `10 @ one_hot(6)`
    are the `(ones, twos, threes, fours, fives, sixes)` rolled in 10d6.
    """
    data = []
    for i in range(sides):
        outcome = [False] * sides
        outcome[i] = True
        data.append(icepool.Vector(outcome))
    return icepool.Die(data)
```
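`one_hot` represents each face as a Boolean tuple so that summing rolls elementwise counts how often each face appeared. A brute-force standard-library sketch of what `n @ one_hot(sides)` computes, feasible only for small cases (the helper names are hypothetical):

```python
from collections import Counter
from itertools import product

def one_hot_outcomes(sides):
    # The `sides` equally likely one-hot tuples, as in one_hot().
    return [tuple(i == j for j in range(sides)) for i in range(sides)]

def counts_distribution(n, sides):
    # Elementwise-summing n one-hot rolls yields a tuple of per-face
    # counts; tally each such tuple over all sides**n sequences.
    dist = Counter()
    for rolls in product(one_hot_outcomes(sides), repeat=n):
        totals = tuple(sum(col) for col in zip(*rolls))
        dist[totals] += 1
    return dist

dist = counts_distribution(2, 3)  # 2d3
print(dist[(1, 1, 0)])  # 2 of the 9 equally likely sequences
```

This enumerates all `sides**n` sequences, which is why the docstring calls the representation "somewhat expensive"; icepool itself computes such counts far more efficiently.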
```python
class Outcome(Hashable, Protocol[T_contra]):
    """Protocol to attempt to verify that outcome types are hashable and sortable.

    Far from foolproof, e.g. it cannot enforce total ordering.
    """

    def __lt__(self, other: T_contra) -> bool:
        ...
```
````python
class Die(Population[T_co]):
    """Sampling with replacement. Quantities represent weights.

    Dice are immutable. Methods do not modify the `Die` in-place;
    rather they return a `Die` representing the result.

    It's also possible to have "empty" dice with no outcomes at all,
    though these have little use other than being sentinel values.
    """

    _data: Counts[T_co]

    @property
    def _new_type(self) -> type:
        return Die

    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'Outcome | Die | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
            of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Several methods and functions forward **kwargs to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.

        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                    quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                * For convenience, `tuple`s containing `Population`s will be
                    `tupleize`d into a `Population` of `tuple`s.
                    This does not apply to subclasses of `tuple`s such as `namedtuple`
                    or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                    The quantity assigned to a `Die` is shared among its
                    outcomes. The total denominator will be scaled up if
                    necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                    `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.
        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                if again_end is not None:
                    raise ValueError(
                        'again_end cannot be used with again_count.')
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)

    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Die[T_co]':
        """Creates a new `Die` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    # Defined separately from the superclass to help typing.
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)

    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        If one side of a binary operator is a tuple and the other is not, the
        binary operator is applied to each element of the tuple with the
        non-tuple side. For example, the following are equivalent:

        ```python
        cartesian_product(d6, d8) * 2
        cartesian_product(d6 * 2, d8 * 2)
        ```

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)

    # Basic access.

    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome, /) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        """The number of outcomes. """
        return len(self._data)

    def __contains__(self, outcome) -> bool:
        return outcome in self._data

    # Quantity management.

    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common denominator. """
        return icepool.Die(self._data.simplify())

    # Rerolls and other outcome management.

    def reroll(self,
               which: Callable[..., bool] | Collection[T_co] | None = None,
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls the given outcomes.

        Args:
            which: Selects which outcomes to reroll. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                    should be rerolled.
                * If not provided, the min outcome will be rerolled.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth == 'inf' or depth is None:
            if depth is None:
                warnings.warn(
                    "depth=None is deprecated; use depth='inf' instead.",
                    category=DeprecationWarning,
                    stacklevel=1)
            data = {
                outcome: quantity
                for outcome, quantity in self.items()
                if outcome not in outcome_set
            }
        elif depth < 0:
            raise ValueError('reroll depth cannot be negative.')
        else:
            total_reroll_quantity = sum(quantity
                                        for outcome, quantity in self.items()
                                        if outcome in outcome_set)
            total_stop_quantity = self.denominator() - total_reroll_quantity
            rerollable_factor = total_reroll_quantity**depth
            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                           * total_reroll_quantity) // total_stop_quantity
            data = {
                outcome: (rerollable_factor *
                          quantity if outcome in outcome_set else stop_factor *
                          quantity)
                for outcome, quantity in self.items()
            }
        return icepool.Die(data)

    def filter(self,
               which: Callable[..., bool] | Collection[T_co],
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls until getting one of the given outcomes.

        Essentially the complement of `reroll()`.

        Args:
            which: Selects which outcomes to reroll until. Options:
                * A callable that takes an outcome and returns `True` if it
                    should be accepted.
                * A collection of outcomes to reroll until.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if callable(which):
            if star is None:
                star = infer_star(which)
            if star:

                not_outcomes = {
                    outcome
                    for outcome in self.outcomes()
                    if not which(*outcome)  # type: ignore
                }
            else:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes() if not which(outcome)
                }
        else:
            not_outcomes = {
                not_outcome
                for not_outcome in self.outcomes() if not_outcome not in which
            }
        return self.reroll(not_outcomes, depth=depth)

    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Truncates the outcomes of this `Die` to the given range.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be truncated.

        This effectively rerolls outcomes outside the given range.
        If instead you want to replace those outcomes with the nearest endpoint,
        use `clip()`.

        Not to be confused with `trunc(die)`, which performs integer truncation
        on each outcome.
        """
        if min_outcome is not None:
            start = bisect.bisect_left(self.outcomes(), min_outcome)
        else:
            start = None
        if max_outcome is not None:
            stop = bisect.bisect_right(self.outcomes(), max_outcome)
        else:
            stop = None
        data = {k: v for k, v in self.items()[start:stop]}
        return icepool.Die(data)

    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Clips the outcomes of this `Die` to the given values.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be clipped.

        This is not the same as rerolling outcomes beyond this range;
        the outcome is simply adjusted to fit within the range.
        This will typically cause some quantity to bunch up at the endpoint(s).
        If you want to reroll outcomes beyond this range, use `truncate()`.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            if min_outcome is not None and outcome <= min_outcome:
                data[min_outcome] += quantity
            elif max_outcome is not None and outcome >= max_outcome:
                data[max_outcome] += quantity
            else:
                data[outcome] += quantity
        return icepool.Die(data)

    @cached_property
    def _popped_min(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_min())
        return die, self.quantities()[0]

    def _pop_min(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the min outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_min

    @cached_property
    def _popped_max(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_max())
        return die, self.quantities()[-1]

    def _pop_max(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the max outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_max

    # Processes.

    def map(
        self,
        repl:
        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *extra_args,
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        time_limit: int | Literal['inf'] | None = None,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Maps outcomes of the `Die` to other outcomes.

        This is also useful for representing processes.

        As `icepool.map(repl, self, ...)`.
        """
        return icepool.map(repl,
                           self,
                           *extra_args,
                           star=star,
                           repeat=repeat,
                           time_limit=time_limit,
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)

    def map_and_time(
        self,
        repl:
        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
        /,
        *extra_args,
        star: bool | None = None,
        time_limit: int) -> 'Die[tuple[T_co, int]]':
        """Repeatedly map outcomes of the state to other outcomes, while also
        counting timesteps.

        This is useful for representing processes.

        As `map_and_time(repl, self, ...)`.
        """
        return icepool.map_and_time(repl,
                                    self,
                                    *extra_args,
                                    star=star,
                                    time_limit=time_limit)

    def time_to_sum(self: 'Die[int]',
                    target: int,
                    /,
                    max_time: int,
                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
        """The number of rolls until the cumulative sum is greater or equal to the target.

        Args:
            target: The number to stop at once reached.
            max_time: The maximum number of rolls to run.
                If the sum is not reached, the outcome is determined by `dnf`.
            dnf: What time to assign in cases where the target was not reached
                in `max_time`. If not provided, this is set to `max_time`.
                `dnf=icepool.Reroll` will remove this case from the result,
                effectively rerolling it.
        """
        if target <= 0:
            return Die([0])

        if dnf is None:
            dnf = max_time

        def step(total, roll):
            return min(total + roll, target)

        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
            step, self, time_limit=max_time)

        def get_time(total, time):
            if total < target:
                return dnf
            else:
                return time

        return result.map(get_time)

    @cached_property
    def _mean_time_to_sum_cache(self) -> list[Fraction]:
        return [Fraction(0)]

    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
        """The mean number of rolls until the cumulative sum is greater or equal to the target.

        Args:
            target: The target sum.

        Raises:
            ValueError: If `self` has negative outcomes.
            ZeroDivisionError: If `self.mean() == 0`.
        """
        target = max(target, 0)

        if target < len(self._mean_time_to_sum_cache):
            return self._mean_time_to_sum_cache[target]

        if self.min_outcome() < 0:
            raise ValueError(
                'mean_time_to_sum does not handle negative outcomes.')
        time_per_effect = Fraction(self.denominator(),
                                   self.denominator() - self.quantity(0))

        for i in range(len(self._mean_time_to_sum_cache), target + 1):
            result = time_per_effect + self.reroll([
                0
            ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
            self._mean_time_to_sum_cache.append(result)

        return result

    def explode(self,
                which: Collection[T_co] | Callable[..., bool] | None = None,
                /,
                *,
                star: bool | None = None,
                depth: int = 9,
                end=None) -> 'Die[T_co]':
        """Causes outcomes to be rolled again and added to the total.

        Args:
            which: Which outcomes to explode. Options:
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                    should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of additional dice to roll, not counting
                the initial roll.
                If not supplied, a default value will be used.
            end: Once `depth` is reached, further explosions will be treated
                as this value. By default, a zero value will be used.
                `icepool.Reroll` will make one extra final roll, rerolling until
                a non-exploding outcome is reached.
        """

        if which is None:
            outcome_set = {self.max_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth < 0:
            raise ValueError('depth cannot be negative.')
        elif depth == 0:
            return self

        def map_final(outcome):
            if outcome in outcome_set:
                return outcome + icepool.Again
            else:
                return outcome

        return self.map(map_final, again_depth=depth, again_end=end)

    def if_else(
        self,
        outcome_if_true: U | 'Die[U]',
        outcome_if_false: U | 'Die[U]',
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Ternary conditional operator.

        This replaces truthy outcomes with the first argument and falsy outcomes
        with the second argument.

        Args:
            again_count, again_depth, again_end: Forwarded to the final die constructor.
        """
        return self.map(lambda x: bool(x)).map(
            {
                True: outcome_if_true,
                False: outcome_if_false
            },
            again_count=again_count,
            again_depth=again_depth,
            again_end=again_end)

    def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
        """A die that returns True iff the roll of the die is contained in the target."""
        return self.map(lambda x: x in target)

    def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
        """Roll this die a number of times and count how many are in the target."""
        return rolls @ self.is_in(target)

    # Pools and sums.

    @cached_property
    def _sum_cache(self) -> MutableMapping[int, 'Die']:
        return {}

    def _sum_all(self, rolls: int, /) -> 'Die':
        """Roll this `Die` `rolls` times and sum the results.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.

        If `rolls` is negative, roll the `Die` `abs(rolls)` times and negate
        the result.

        If you instead want to replace tuple (or other sequence) outcomes with
        their sum, use `die.map(sum)`.
        """
        if rolls in self._sum_cache:
            return self._sum_cache[rolls]

        if rolls < 0:
            result = -self._sum_all(-rolls)
        elif rolls == 0:
            result = self.zero().simplify()
        elif rolls == 1:
            result = self
        else:
            # In addition to working similar to reduce(), this seems to perform
            # better than binary split.
            result = self._sum_all(rolls - 1) + self

        self._sum_cache[rolls] = result
        return result

    def __matmul__(self: 'Die[int]', other) -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)

        data: MutableMapping[int, Any] = defaultdict(int)

        max_abs_die_count = max(abs(self.min_outcome()),
                                abs(self.max_outcome()))
        for die_count, die_count_quantity in self.items():
            factor = other.denominator()**(max_abs_die_count - abs(die_count))
            subresult = other._sum_all(die_count)
            for outcome, subresult_quantity in subresult.items():
                data[
                    outcome] += subresult_quantity * die_count_quantity * factor

        return icepool.Die(data)

    def __rmatmul__(self, other: 'int | Die[int]') -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.__matmul__(self)

    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
        """Possible sequences produced by rolling this die a number of times.

        This is extremely expensive computationally. If possible, use `reduce()`
        instead; if you don't care about order, `Die.pool()` is better.
        """
        return icepool.cartesian_product(*(self for _ in range(rolls)),
                                         outcome_type=tuple)  # type: ignore

    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
        """Creates a `Pool` from this `Die`.

        You might subscript the pool immediately afterwards, e.g.
        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
        lowest of 5d6.

        Args:
            rolls: The number of copies of this `Die` to put in the pool.
                Or, a sequence of one `int` per die acting as
                `keep_tuple`.
````
                Note that `...` cannot be used in the
                argument to this method, as the argument determines the size of
                the pool.
        """
        if isinstance(rolls, int):
            return icepool.Pool({self: rolls})
        else:
            pool_size = len(rolls)
            # Haven't dealt with narrowing return type.
            return icepool.Pool({self: pool_size})[rolls]  # type: ignore

    @overload
    def keep(self, rolls: Sequence[int], /) -> 'Die':
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: A sequence of `int` specifying how many times to count each
                element in ascending order.
        """

    @overload
    def keep(self, rolls: int,
             index: slice | Sequence[int | EllipsisType] | int, /):
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: The number of dice to roll.
            index: One of the following:
                * An `int`. This will count only the roll at the specified index.
                  In this case, the result is a `Die` rather than a generator.
                * A `slice`. The selected dice are counted once each.
                * A sequence of one `int` for each `Die`.
                  Each roll is counted that many times, which could be multiple
                  or negative times.

                  Up to one `...` (`Ellipsis`) may be used.
                  `...` will be replaced with a number of zero
                  counts depending on the `rolls`.
                  This number may be "negative" if more `int`s are provided than
                  `rolls`. Specifically:

                  * If `index` is shorter than `rolls`, `...`
                    acts as enough zero counts to make up the difference.
                    E.g. `(1, ..., 1)` on five dice would act as
                    `(1, 0, 0, 0, 1)`.
                  * If `index` has length equal to `rolls`, `...` has no effect.
                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
                  * If `index` is longer than `rolls` and `...` is on one side,
                    elements will be dropped from `index` on the side with `...`.
                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                  * If `index` is longer than `rolls` and `...`
                    is in the middle, the counts will be as the sum of two
                    one-sided `...`.
                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
                    If `rolls` was 1 this would have the -1 and 1 cancel each
                    other out.
        """

    def keep(self,
             rolls: int | Sequence[int],
             index: slice | Sequence[int | EllipsisType] | int | None = None,
             /) -> 'Die':
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: The number of dice to roll.
            index: One of the following:
                * An `int`. This will count only the roll at the specified index.
                  In this case, the result is a `Die` rather than a generator.
                * A `slice`. The selected dice are counted once each.
                * A sequence of `int`s with length equal to `rolls`.
                  Each roll is counted that many times, which could be multiple
                  or negative times.

                  Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
                  the `rolls` argument may be omitted.

                  `...` will be replaced with a number of zero counts in order
                  to make up any missing elements compared to `rolls`.
                  This number may be "negative" if more `int`s are provided than
                  `rolls`. Specifically:

                  * If `index` is shorter than `rolls`, `...`
                    acts as enough zero counts to make up the difference.
                    E.g. `(1, ..., 1)` on five dice would act as
                    `(1, 0, 0, 0, 1)`.
                  * If `index` has length equal to `rolls`, `...` has no effect.
                    E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
                  * If `index` is longer than `rolls` and `...` is on one side,
                    elements will be dropped from `index` on the side with `...`.
                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                  * If `index` is longer than `rolls` and `...`
                    is in the middle, the counts will be as the sum of two
                    one-sided `...`.
                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
                    If `rolls` was 1 this would have the -1 and 1 cancel each
                    other out.
        """
        if isinstance(rolls, int):
            if index is None:
                raise ValueError(
                    'If the number of rolls is an integer, an index argument must be provided.'
                )
            if isinstance(index, int):
                return self.pool(rolls).keep(index)
            else:
                return self.pool(rolls).keep(index).sum()  # type: ignore
        else:
            if index is not None:
                raise ValueError('Only one index sequence can be given.')
            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore

    def lowest(self,
               rolls: int,
               /,
               keep: int | None = None,
               drop: int | None = None) -> 'Die':
        """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll. All dice will have the same
                outcomes as `self`.
            keep, drop: These arguments work together:
                * If neither are provided, the single lowest die will be taken.
                * If only `keep` is provided, the `keep` lowest dice will be
                  summed.
                * If only `drop` is provided, the `drop` lowest dice will be
                  dropped and the rest will be summed.
                * If both are provided, `drop` lowest dice will be dropped, then
                  the next `keep` lowest dice will be summed.

        Returns:
            A `Die` representing the probability distribution of the sum.
        """
        index = lowest_slice(keep, drop)
        canonical = canonical_slice(index, rolls)
        if canonical.start == 0 and canonical.stop == 1:
            return self._lowest_single(rolls)
        # Expression evaluators are difficult to type.
        return self.pool(rolls)[index].sum()  # type: ignore

    def _lowest_single(self, rolls: int, /) -> 'Die':
        """Roll this die several times and keep the lowest."""
        if rolls == 0:
            return self.zero().simplify()
        return icepool.from_cumulative(
            self.outcomes(), [x**rolls for x in self.quantities('>=')],
            reverse=True)

    def highest(self,
                rolls: int,
                /,
                keep: int | None = None,
                drop: int | None = None) -> 'Die[T_co]':
        """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll.
            keep, drop: These arguments work together:
                * If neither are provided, the single highest die will be taken.
                * If only `keep` is provided, the `keep` highest dice will be
                  summed.
                * If only `drop` is provided, the `drop` highest dice will be
                  dropped and the rest will be summed.
                * If both are provided, `drop` highest dice will be dropped,
                  then the next `keep` highest dice will be summed.

        Returns:
            A `Die` representing the probability distribution of the sum.
        """
        index = highest_slice(keep, drop)
        canonical = canonical_slice(index, rolls)
        if canonical.start == rolls - 1 and canonical.stop == rolls:
            return self._highest_single(rolls)
        # Expression evaluators are difficult to type.
        return self.pool(rolls)[index].sum()  # type: ignore

    def _highest_single(self, rolls: int, /) -> 'Die[T_co]':
        """Roll this die several times and keep the highest."""
        if rolls == 0:
            return self.zero().simplify()
        return icepool.from_cumulative(
            self.outcomes(), [x**rolls for x in self.quantities('<=')])

    def middle(
            self,
            rolls: int,
            /,
            keep: int = 1,
            *,
            tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
        """Roll several of this `Die` and sum the sorted results in the middle.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll.
            keep: The number of outcomes to sum. If this is greater than the
                current keep_size, all are kept.
            tie: What to do if `keep` is odd but the current keep_size
                is even, or vice versa.
                * 'error' (default): Raises `IndexError`.
                * 'high': The higher outcome is taken.
                * 'low': The lower outcome is taken.
        """
        # Expression evaluators are difficult to type.
        return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore

    def map_to_pool(
            self,
            repl:
        'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
            /,
            *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
            star: bool | None = None,
            denominator: int | None = None) -> 'icepool.MultisetExpression[U]':
        """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.

        As `icepool.map_to_pool(repl, self, ...)`.

        If no argument is provided, the outcomes will be used to construct a
        mixture of pools directly, similar to the inverse of `pool.expand()`.
        Note that this is not particularly efficient since it does not make much
        use of dynamic programming.

        Args:
            repl: One of the following:
                * A callable that takes in one outcome per element of args and
                  produces a `Pool` (or something convertible to such).
                * A mapping from old outcomes to `Pool`
                  (or something convertible to such).
                  In this case args must have exactly one element.
                The new outcomes may be dice rather than just single outcomes.
                The special value `icepool.Reroll` will reroll that old outcome.
            star: If `True`, the first of the args will be unpacked before
                giving them to `repl`.
                If not provided, it will be guessed based on the signature of
                `repl` and the number of arguments.
            denominator: If provided, the denominator of the result will be this
                value. Otherwise it will be the minimum to correctly weight the
                pools.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.

        Raises:
            ValueError: If `denominator` cannot be made consistent with the
                resulting mixture of pools.
        """
        if repl is None:
            repl = lambda x: x
        return icepool.map_to_pool(repl, self, *extra_args, star=star)

    def explode_to_pool(self,
                        rolls: int,
                        which: Collection[T_co] | Callable[..., bool]
                        | None = None,
                        /,
                        *,
                        star: bool | None = None,
                        depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
        """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

        Args:
            rolls: The number of initial dice.
            which: Which outcomes to explode. Options:
                * A single outcome to explode.
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                  should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum depth of explosions for an individual die.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.
        """
        if depth == 0:
            return self.pool(rolls)
        if which is None:
            explode_set = {self.max_outcome()}
        else:
            explode_set = self._select_outcomes(which, star)
        if not explode_set:
            return self.pool(rolls)
        explode: 'Die[T_co]'
        not_explode: 'Die[T_co]'
        explode, not_explode = self.split(explode_set)

        single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
            int)
        for i in range(depth + 1):
            weight = explode.denominator()**i * self.denominator()**(
                depth - i) * not_explode.denominator()
            single_data[icepool.Vector((i, 1))] += weight
        single_data[icepool.Vector(
            (depth + 1, 0))] += explode.denominator()**(depth + 1)

        single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
        count_die = rolls @ single_count_die

        return count_die.map_to_pool(
            lambda x, nx: [explode] * x + [not_explode] * nx)

    def reroll_to_pool(
            self,
            rolls: int,
            which: Callable[..., bool] | Collection[T_co],
            /,
            max_rerolls: int,
            *,
            star: bool | None = None,
            mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
    ) -> 'icepool.MultisetExpression[T_co]':
        """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

        Each die can only be rerolled once (effectively `depth=1`), and no more
        than `max_rerolls` dice may be rerolled.

        Args:
            rolls: How many dice in the pool.
            which: Selects which outcomes are eligible to be rerolled. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                  could be rerolled.
            max_rerolls: The maximum number of dice to reroll.
                Note that each die can only be rerolled once, so if the number
                of eligible dice is less than this, the excess rerolls have no
                effect.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            mode: How dice are selected for rerolling if there are more eligible
                dice than `max_rerolls`. Options:
                * `'random'` (default): Eligible dice will be chosen uniformly
                  at random.
                * `'lowest'`: The lowest eligible dice will be rerolled.
                * `'highest'`: The highest eligible dice will be rerolled.
                * `'drop'`: All dice that ended up on an outcome selected by
                  `which` will be dropped. This includes both dice that rolled
                  into `which` initially and were not rerolled, and dice that
                  were rerolled but rolled into `which` again. This can be
                  considerably more efficient than the other modes.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.
        """
        rerollable_set = self._select_outcomes(which, star)
        if not rerollable_set:
            return self.pool(rolls)

        rerollable_die: 'Die[T_co]'
        not_rerollable_die: 'Die[T_co]'
        rerollable_die, not_rerollable_die = self.split(rerollable_set)
        single_is_rerollable = icepool.coin(rerollable_die.denominator(),
                                            self.denominator())
        rerollable = rolls @ single_is_rerollable

        def split(initial_rerollable: int) -> Die[tuple[int, int, int]]:
            """Computes the composition of the pool.

            Returns:
                initial_rerollable: The number of dice that initially fell into
                    the rerollable set.
                rerolled_to_rerollable: The number of dice that were rerolled,
                    but fell into the rerollable set again.
                not_rerollable: The number of dice that ended up outside the
                    rerollable set, including both initial and rerolled dice.
                not_rerolled: The number of dice that were eligible for
                    rerolling but were not rerolled.
            """
            initial_not_rerollable = rolls - initial_rerollable
            rerolled = min(initial_rerollable, max_rerolls)
            not_rerolled = initial_rerollable - rerolled

            def second_split(rerolled_to_rerollable):
                """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets."""
                rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable
                return icepool.tupleize(
                    initial_rerollable, rerolled_to_rerollable,
                    initial_not_rerollable + rerolled_to_not_rerollable,
                    not_rerolled)

            return icepool.map(second_split,
                               rerolled @ single_is_rerollable,
                               star=False)

        pool_composition = rerollable.map(split, star=False)
        denominator = self.denominator()**(rolls + min(rolls, max_rerolls))
        pool_composition = pool_composition.multiply_to_denominator(
            denominator)

        def make_pool(initial_rerollable, rerolled_to_rerollable,
                      not_rerollable, not_rerolled):
            common = rerollable_die.pool(
                rerolled_to_rerollable) + not_rerollable_die.pool(
                    not_rerollable)
            match mode:
                case 'random':
                    return common + rerollable_die.pool(not_rerolled)
                case 'lowest':
                    return common + rerollable_die.pool(
                        initial_rerollable).highest(not_rerolled)
                case 'highest':
                    return common + rerollable_die.pool(
                        initial_rerollable).lowest(not_rerolled)
                case 'drop':
                    return not_rerollable_die.pool(not_rerollable)
                case _:
                    raise ValueError(
                        f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
                    )

        return pool_composition.map_to_pool(make_pool, star=True)

    # Unary operators.

    def __neg__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.neg)

    def __pos__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.pos)

    def __invert__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.invert)

    def abs(self) -> 'Die[T_co]':
        return self.unary_operator(operator.abs)

    __abs__ = abs

    def round(self, ndigits: int | None = None) -> 'Die':
        return self.unary_operator(round, ndigits)

    __round__ = round

    def stochastic_round(self,
                         *,
                         max_denominator: int | None = None) -> 'Die[int]':
        """Randomly rounds outcomes up or down to the nearest integer according to the two distances.

        Specifically, rounds `x` up with probability `x - floor(x)` and down
        otherwise.

        Args:
            max_denominator: If provided, each rounding will be performed
                using `fractions.Fraction.limit_denominator(max_denominator)`.
                Otherwise, the rounding will be performed without
                `limit_denominator`.
        """
        return self.map(lambda x: icepool.stochastic_round(
            x, max_denominator=max_denominator))

    def trunc(self) -> 'Die':
        return self.unary_operator(math.trunc)

    __trunc__ = trunc

    def floor(self) -> 'Die':
        return self.unary_operator(math.floor)

    __floor__ = floor

    def ceil(self) -> 'Die':
        return self.unary_operator(math.ceil)

    __ceil__ = ceil

    # Binary operators.
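The `stochastic_round` rule above (round `x` up with probability `x - floor(x)`, down otherwise) can be checked with a small pure-Python sketch. This does not use icepool itself; `stochastic_round_dist` is a name invented here, and a plain dict of probabilities stands in for a `Die`:

```python
from fractions import Fraction
from math import floor

def stochastic_round_dist(x: Fraction) -> dict[int, Fraction]:
    """Distribution of rounding x up with probability x - floor(x), down otherwise."""
    lo = floor(x)
    up = x - lo  # probability of rounding up
    dist = {lo: 1 - up}
    if up:
        dist[lo + 1] = up
    return dist

# 7/3 rounds to 2 with probability 2/3 and to 3 with probability 1/3,
# so the mean of the rounded value equals the original 7/3.
dist = stochastic_round_dist(Fraction(7, 3))
mean = sum(outcome * p for outcome, p in dist.items())
```

Note that the mean is preserved exactly, which is the point of stochastic rounding over plain `round`.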

    def __add__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.add)

    def __radd__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.add)

    def __sub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.sub)

    def __rsub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.sub)

    def __mul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mul)

    def __rmul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mul)

    def __truediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.truediv)

    def __rtruediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.truediv)

    def __floordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.floordiv)

    def __rfloordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.floordiv)

    def __pow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.pow)

    def __rpow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.pow)

    def __mod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mod)

    def __rmod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mod)

    def __lshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lshift)

    def __rlshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.lshift)

    def __rshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.rshift)

    def __rrshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.rshift)

    def __and__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.and_)

    def __rand__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.and_)

    def __or__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.or_)

    def __ror__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.or_)

    def __xor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.xor)

    def __rxor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.xor)

    # Comparators.
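A common point of confusion with the binary operators above: `d4 * 2` applies `*` to each outcome, while `2 @ d4` rolls two `d4`s and sums them. A pure-Python sketch of the two distributions (plain weight dicts standing in for `Die`; this does not use icepool itself):

```python
from collections import defaultdict
from itertools import product

d4 = {i: 1 for i in range(1, 5)}

# d4 * 2: the binary operator doubles each outcome, giving 2, 4, 6, 8.
doubled: dict[int, int] = defaultdict(int)
for outcome, quantity in d4.items():
    doubled[outcome * 2] += quantity

# 2 @ d4: roll two d4s and sum, giving 2..8 with triangular weights.
summed: dict[int, int] = defaultdict(int)
for (a, qa), (b, qb) in product(d4.items(), d4.items()):
    summed[a + b] += qa * qb
```

The two results have the same mean but very different shapes: `doubled` only hits even numbers, while `summed` concentrates around 5.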

    def __lt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lt)

    def __le__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.le)

    def __ge__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.ge)

    def __gt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.gt)

    # Equality operators. These produce a `DieWithTruth`.

    # The result has a truth value, but is not a bool.
    def __eq__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.eq)._data

        def truth_value_callback() -> bool:
            return self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    # The result has a truth value, but is not a bool.
    def __ne__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.ne)._data

        def truth_value_callback() -> bool:
            return not self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    def cmp(self, other) -> 'Die[int]':
        """A `Die` with outcomes 1, -1, and 0.

        The quantities are equal to the positive outcome of `self > other`,
        `self < other`, and the remainder respectively.
        """
        other = implicit_convert_to_die(other)

        data = {}

        lt = self < other
        if True in lt:
            data[-1] = lt[True]
        eq = self == other
        if True in eq:
            data[0] = eq[True]
        gt = self > other
        if True in gt:
            data[1] = gt[True]

        return Die(data)

    @staticmethod
    def _sign(x) -> int:
        z = Die._zero(x)
        if x > z:
            return 1
        elif x < z:
            return -1
        else:
            return 0

    def sign(self) -> 'Die[int]':
        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.

        Note that for `float`s, +0.0, -0.0, and nan all become 0.
        """
        return self.unary_operator(Die._sign)

    # Equality and hashing.
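The `cmp` method above can be mirrored in plain Python by tallying how often the left outcome loses to, ties with, or beats the right one. This sketch does not use icepool; `cmp_dist` is a name invented here, and plain weight dicts stand in for `Die`:

```python
from itertools import product

def cmp_dist(lhs: dict[int, int], rhs: dict[int, int]) -> dict[int, int]:
    """Weights of -1/0/1 for lhs < rhs, lhs == rhs, lhs > rhs, mirroring Die.cmp."""
    data = {-1: 0, 0: 0, 1: 0}
    for (a, qa), (b, qb) in product(lhs.items(), rhs.items()):
        # (a > b) - (a < b) is the classic three-way comparison idiom.
        data[(a > b) - (a < b)] += qa * qb
    # Like Die.cmp, drop outcomes that never occur.
    return {k: v for k, v in data.items() if v}

d6 = {i: 1 for i in range(1, 7)}
result = cmp_dist(d6, d6)  # two d6s tie on 6 of the 36 ordered pairs
```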

    def __bool__(self) -> bool:
        raise TypeError(
            'A `Die` only has a truth value if it is the result of == or !=.\n'
            'This could result from trying to use a die in an if-statement,\n'
            'in which case you should use `die.if_else()` instead.\n'
            'Or it could result from trying to use a `Die` inside a tuple or vector outcome,\n'
            'in which case you should use `tupleize()` or `vectorize()`.')

    @cached_property
    def _hash_key(self) -> tuple:
        """A tuple that uniquely (as `equals()`) identifies this die.

        Apart from being hashable and totally orderable, this is not guaranteed
        to be in any particular format or have any other properties.
        """
        return tuple(self.items())

    @cached_property
    def _hash(self) -> int:
        return hash(self._hash_key)

    def __hash__(self) -> int:
        return self._hash

    def equals(self, other, *, simplify: bool = False) -> bool:
        """`True` iff both dice have the same outcomes and quantities.

        This is `False` if `other` is not a `Die`, even if it would convert
        to an equal `Die`.

        Truth value does NOT matter.

        If one `Die` has a zero-quantity outcome and the other `Die` does not
        contain that outcome, they are treated as unequal by this function.

        The `==` and `!=` operators have a dual purpose; they return a `Die`
        with a truth value determined by this method.
        Only dice returned by these methods have a truth value. The data of
        these dice is lazily evaluated since the caller may only be interested
        in the `Die` value or the truth value.

        Args:
            simplify: If `True`, the dice will be simplified before comparing.
                Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
1515 """ 1516 if not isinstance(other, Die): 1517 return False 1518 1519 if simplify: 1520 return self.simplify()._hash_key == other.simplify()._hash_key 1521 else: 1522 return self._hash_key == other._hash_key 1523 1524 # Strings. 1525 1526 def __repr__(self) -> str: 1527 items_string = ', '.join(f'{repr(outcome)}: {weight}' 1528 for outcome, weight in self.items()) 1529 return type(self).__qualname__ + '({' + items_string + '})'
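The `cmp()` method above tallies, for each pair of outcomes, which side is greater. A plain-Python sketch of that computation for two fair d6 (illustrative only, not icepool's internal code; quantities are out of a denominator of 36):

```python
from itertools import product

# Sketch of Die.cmp() for two fair d6: for every outcome pair, tally
# whether the left side wins (1), loses (-1), or ties (0).
d6 = {n: 1 for n in range(1, 7)}
data = {-1: 0, 0: 0, 1: 0}
for (a, qa), (b, qb) in product(d6.items(), d6.items()):
    data[(a > b) - (a < b)] += qa * qb
# Out of 36 pairs: 15 wins, 15 losses, 6 ties.
```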
Sampling with replacement. Quantities represent weights.

Dice are immutable. Methods do not modify the `Die` in-place; rather they return a `Die` representing the result.

It's also possible to have "empty" dice with no outcomes at all, though these have little use other than being sentinel values.
```python
def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                   **kwargs) -> 'icepool.Die[U]':
    """Performs the unary operation on the outcomes.

    This is used for the standard unary operators
    `-, +, abs, ~, round, trunc, floor, ceil`
    as well as the additional methods
    `zero, bool`.

    This is NOT used for the `[]` operator; when used directly, this is
    interpreted as a `Mapping` operation and returns the count corresponding
    to a given outcome. See `marginals()` for applying the `[]` operator to
    outcomes.

    Returns:
        A `Die` representing the result.

    Raises:
        ValueError: If tuples are of mismatched length.
    """
    return self._unary_operator(op, *args, **kwargs)
```
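When a unary operation maps several outcomes to the same value, their quantities merge. A minimal plain-Python sketch of that merging with `op=abs` on a uniform die over -2..2 (illustrative, not icepool's internals):

```python
from collections import defaultdict

# Applying abs to a uniform die on -2..2: outcomes that collide under the
# operation have their quantities summed.
data = defaultdict(int)
for outcome in range(-2, 3):
    data[abs(outcome)] += 1
# -2 and 2 merge, -1 and 1 merge, 0 stays alone.
```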
````python
def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                    **kwargs) -> 'Die[U]':
    """Performs the operation on pairs of outcomes.

    By the time this is called, the other operand has already been
    converted to a `Die`.

    If one side of a binary operator is a tuple and the other is not, the
    binary operator is applied to each element of the tuple with the
    non-tuple side. For example, the following are equivalent:

    ```python
    cartesian_product(d6, d8) * 2
    cartesian_product(d6 * 2, d8 * 2)
    ```

    This is used for the standard binary operators
    `+, -, *, /, //, %, **, <<, >>, &, |, ^`
    and the standard binary comparators
    `<, <=, >=, >, ==, !=, cmp`.

    `==` and `!=` additionally set the truth value of the `Die` according to
    whether the dice themselves are the same or not.

    The `@` operator does NOT use this method directly.
    It rolls the left `Die`, which must have integer outcomes,
    then rolls the right `Die` that many times and sums the outcomes.

    Returns:
        A `Die` representing the result.

    Raises:
        ValueError: If tuples are of mismatched length within one of the
            dice or between the dice.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for (outcome_self,
         quantity_self), (outcome_other,
                          quantity_other) in itertools.product(
                              self.items(), other.items()):
        new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
        data[new_outcome] += quantity_self * quantity_other
    return self._new_type(data)
````
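The pairwise product loop above is an ordinary convolution. A plain-Python sketch of it with `op=operator.add` for the sum of two d6 (illustrative only):

```python
from collections import defaultdict
from itertools import product
import operator

# Sum of two d6 via the pairwise outcome product, weighting each pair by
# the product of its quantities, as binary_operator does.
d6 = {n: 1 for n in range(1, 7)}
data = defaultdict(int)
for (a, qa), (b, qb) in product(d6.items(), d6.items()):
    data[operator.add(a, b)] += qa * qb
# 7 is the most likely sum: 6 of the 36 pairs.
```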
```python
def simplify(self) -> 'Die[T_co]':
    """Divides all quantities by their greatest common divisor."""
    return icepool.Die(self._data.simplify())
```
```python
def reroll(self,
           which: Callable[..., bool] | Collection[T_co] | None = None,
           /,
           *,
           star: bool | None = None,
           depth: int | Literal['inf']) -> 'Die[T_co]':
    """Rerolls the given outcomes.

    Args:
        which: Selects which outcomes to reroll. Options:
            * A collection of outcomes to reroll.
            * A callable that takes an outcome and returns `True` if it
                should be rerolled.
            * If not provided, the min outcome will be rerolled.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `'inf'`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if which is None:
        outcome_set = {self.min_outcome()}
    else:
        outcome_set = self._select_outcomes(which, star)

    if depth == 'inf' or depth is None:
        if depth is None:
            warnings.warn(
                "depth=None is deprecated; use depth='inf' instead.",
                category=DeprecationWarning,
                stacklevel=1)
        data = {
            outcome: quantity
            for outcome, quantity in self.items()
            if outcome not in outcome_set
        }
    elif depth < 0:
        raise ValueError('reroll depth cannot be negative.')
    else:
        total_reroll_quantity = sum(quantity
                                    for outcome, quantity in self.items()
                                    if outcome in outcome_set)
        total_stop_quantity = self.denominator() - total_reroll_quantity
        rerollable_factor = total_reroll_quantity**depth
        stop_factor = (self.denominator()**(depth + 1) -
                       rerollable_factor * total_reroll_quantity
                       ) // total_stop_quantity
        data = {
            outcome: (rerollable_factor *
                      quantity if outcome in outcome_set else stop_factor *
                      quantity)
            for outcome, quantity in self.items()
        }
    return icepool.Die(data)
```
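The finite-depth branch scales quantities so that every outcome shares the denominator `denominator**(depth + 1)`. Here is that arithmetic worked out in standalone Python for "d6, reroll 1s once" (depth=1) — a sketch of the formula, not a call into icepool:

```python
# Weights for "d6, reroll 1s once", mirroring the factors in Die.reroll().
d6 = {n: 1 for n in range(1, 7)}
reroll_set = {1}
depth = 1
D = sum(d6.values())                                   # denominator: 6
R = sum(q for o, q in d6.items() if o in reroll_set)   # reroll quantity: 1
S = D - R                                              # stop quantity: 5
rerollable_factor = R**depth                           # 1
stop_factor = (D**(depth + 1) - rerollable_factor * R) // S  # (36 - 1) // 5 == 7
data = {o: (rerollable_factor if o in reroll_set else stop_factor) * q
        for o, q in d6.items()}
# A final 1 requires rolling 1 twice: weight 1 out of 36; each other face: 7/36.
```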
```python
def filter(self,
           which: Callable[..., bool] | Collection[T_co],
           /,
           *,
           star: bool | None = None,
           depth: int | Literal['inf']) -> 'Die[T_co]':
    """Rerolls until getting one of the given outcomes.

    Essentially the complement of `reroll()`.

    Args:
        which: Selects which outcomes to reroll until. Options:
            * A callable that takes an outcome and returns `True` if it
                should be accepted.
            * A collection of outcomes to reroll until.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `'inf'`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if callable(which):
        if star is None:
            star = infer_star(which)
        if star:
            not_outcomes = {
                outcome
                for outcome in self.outcomes()
                if not which(*outcome)  # type: ignore
            }
        else:
            not_outcomes = {
                outcome
                for outcome in self.outcomes() if not which(outcome)
            }
    else:
        not_outcomes = {
            not_outcome
            for not_outcome in self.outcomes() if not_outcome not in which
        }
    return self.reroll(not_outcomes, depth=depth)
```
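`filter()` simply inverts the selection and delegates to `reroll()`. A sketch of that complement step for "reroll a d6 until 5 or 6" with unlimited depth (plain Python, not icepool itself) — unlimited-depth rerolling just drops the complement outcomes:

```python
# 'Reroll until a 5 or 6' on a d6: the complement of the accepted set is
# rerolled with unlimited depth, which removes those outcomes entirely.
d6 = {n: 1 for n in range(1, 7)}
which = {5, 6}
not_outcomes = {o for o in d6 if o not in which}
data = {o: q for o, q in d6.items() if o not in not_outcomes}
# The result is a fair coin between 5 and 6.
```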
```python
def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Truncates the outcomes of this `Die` to the given range.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be truncated.

    This effectively rerolls outcomes outside the given range.
    If instead you want to replace those outcomes with the nearest endpoint,
    use `clip()`.

    Not to be confused with `trunc(die)`, which performs integer truncation
    on each outcome.
    """
    if min_outcome is not None:
        start = bisect.bisect_left(self.outcomes(), min_outcome)
    else:
        start = None
    if max_outcome is not None:
        stop = bisect.bisect_right(self.outcomes(), max_outcome)
    else:
        stop = None
    data = {k: v for k, v in self.items()[start:stop]}
    return icepool.Die(data)
```
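The bisect calls pick out the slice of sorted outcomes to keep, with both endpoints included. A standalone sketch for truncating a d6 to [2, 5]:

```python
import bisect

# Truncating a d6 to the range [2, 5]: bisect_left/bisect_right find the
# inclusive slice of sorted outcomes; everything outside is effectively
# rerolled away.
outcomes = [1, 2, 3, 4, 5, 6]
start = bisect.bisect_left(outcomes, 2)
stop = bisect.bisect_right(outcomes, 5)
kept = outcomes[start:stop]
```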
```python
def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Clips the outcomes of this `Die` to the given values.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be clipped.

    This is not the same as rerolling outcomes beyond this range;
    the outcome is simply adjusted to fit within the range.
    This will typically cause some quantity to bunch up at the endpoint(s).
    If you want to reroll outcomes beyond this range, use `truncate()`.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for outcome, quantity in self.items():
        if min_outcome is not None and outcome <= min_outcome:
            data[min_outcome] += quantity
        elif max_outcome is not None and outcome >= max_outcome:
            data[max_outcome] += quantity
        else:
            data[outcome] += quantity
    return icepool.Die(data)
```
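In contrast to `truncate()`, clipping keeps the denominator and moves out-of-range quantity onto the endpoints. A plain-Python sketch of clipping a d6 to [2, 5]:

```python
from collections import defaultdict

# Clipping a d6 to [2, 5]: out-of-range outcomes move to the nearest
# endpoint, so quantity bunches up at 2 and 5 while the denominator (6)
# is unchanged.
data = defaultdict(int)
for outcome in range(1, 7):
    data[min(max(outcome, 2), 5)] += 1
```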
```python
def map(
    self,
    repl:
    'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
    /,
    *extra_args,
    star: bool | None = None,
    repeat: int | Literal['inf'] = 1,
    time_limit: int | Literal['inf'] | None = None,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None
) -> 'Die[U]':
    """Maps outcomes of the `Die` to other outcomes.

    This is also useful for representing processes.

    As `icepool.map(repl, self, ...)`.
    """
    return icepool.map(repl,
                       self,
                       *extra_args,
                       star=star,
                       repeat=repeat,
                       time_limit=time_limit,
                       again_count=again_count,
                       again_depth=again_depth,
                       again_end=again_end)
```
```python
def map_and_time(
    self,
    repl:
    'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
    /,
    *extra_args,
    star: bool | None = None,
    time_limit: int) -> 'Die[tuple[T_co, int]]':
    """Repeatedly maps outcomes of the state to other outcomes, while also
    counting timesteps.

    This is useful for representing processes.

    As `icepool.map_and_time(repl, self, ...)`.
    """
    return icepool.map_and_time(repl,
                                self,
                                *extra_args,
                                star=star,
                                time_limit=time_limit)
```
```python
def time_to_sum(self: 'Die[int]',
                target: int,
                /,
                max_time: int,
                dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
    """The number of rolls until the cumulative sum is greater than or equal to the target.

    Args:
        target: The number to stop at once reached.
        max_time: The maximum number of rolls to run.
            If the sum is not reached, the outcome is determined by `dnf`.
        dnf: What time to assign in cases where the target was not reached
            in `max_time`. If not provided, this is set to `max_time`.
            `dnf=icepool.Reroll` will remove this case from the result,
            effectively rerolling it.
    """
    if target <= 0:
        return Die([0])

    if dnf is None:
        dnf = max_time

    def step(total, roll):
        return min(total + roll, target)

    result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
        step, self, time_limit=max_time)

    def get_time(total, time):
        if total < target:
            return dnf
        else:
            return time

    return result.map(get_time)
```
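The capped-sum walk that `map_and_time` runs can be illustrated directly. A plain-Python sketch (a hypothetical d2 with outcomes 1 and 2, target 3) that tracks `(capped total, time)` states until absorption, which is not icepool's dynamic programming but follows the same `step` logic:

```python
from collections import defaultdict
from fractions import Fraction

# Distribution of the number of d2 rolls (outcomes 1, 2) needed to reach a
# cumulative sum >= 3, as a capped-sum random walk like Die.time_to_sum().
target, max_time = 3, 4
states = {(0, 0): Fraction(1)}  # (capped total, time) -> probability
for _ in range(max_time):
    next_states = defaultdict(Fraction)
    for (total, time), p in states.items():
        if total >= target:
            next_states[(total, time)] += p  # absorbed: target reached
        else:
            for roll in (1, 2):
                next_states[(min(total + roll, target), time + 1)] += p / 2
    states = next_states
time_dist = defaultdict(Fraction)
for (total, time), p in states.items():
    time_dist[time] += p
# Reaching 3 takes 2 rolls with probability 3/4, else 3 rolls.
```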
```python
def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
    """The mean number of rolls until the cumulative sum is greater than or equal to the target.

    Args:
        target: The target sum.

    Raises:
        ValueError: If `self` has negative outcomes.
        ZeroDivisionError: If `self.mean() == 0`.
    """
    target = max(target, 0)

    if target < len(self._mean_time_to_sum_cache):
        return self._mean_time_to_sum_cache[target]

    if self.min_outcome() < 0:
        raise ValueError(
            'mean_time_to_sum does not handle negative outcomes.')
    time_per_effect = Fraction(self.denominator(),
                               self.denominator() - self.quantity(0))

    for i in range(len(self._mean_time_to_sum_cache), target + 1):
        result = time_per_effect + self.reroll(
            [0],
            depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
        self._mean_time_to_sum_cache.append(result)

    return result
```
```python
def explode(self,
            which: Collection[T_co] | Callable[..., bool] | None = None,
            /,
            *,
            star: bool | None = None,
            depth: int = 9,
            end=None) -> 'Die[T_co]':
    """Causes outcomes to be rolled again and added to the total.

    Args:
        which: Which outcomes to explode. Options:
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
                should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of additional dice to roll, not counting
            the initial roll.
            If not supplied, a default value will be used.
        end: Once `depth` is reached, further explosions will be treated
            as this value. By default, a zero value will be used.
            `icepool.Reroll` will make one extra final roll, rerolling until
            a non-exploding outcome is reached.
    """

    if which is None:
        outcome_set = {self.max_outcome()}
    else:
        outcome_set = self._select_outcomes(which, star)

    if depth < 0:
        raise ValueError('depth cannot be negative.')
    elif depth == 0:
        return self

    def map_final(outcome):
        if outcome in outcome_set:
            return outcome + icepool.Again
        else:
            return outcome

    return self.map(map_final, again_depth=depth, again_end=end)
```
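A depth-1 explosion on a d6 can be tabulated directly. This plain-Python sketch of the `Again` expansion (weights out of 36, not icepool's implementation) shows how the exploding face splits into a second roll:

```python
from collections import defaultdict

# Exploding d6 (roll again and add on a 6), depth 1. Non-exploding faces
# keep weight 6/36; each exploded total 7..12 gets weight 1/36.
data = defaultdict(int)
for first in range(1, 7):
    if first == 6:
        for second in range(1, 7):
            data[first + second] += 1
    else:
        data[first] += 6
```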
```python
def if_else(
    self,
    outcome_if_true: U | 'Die[U]',
    outcome_if_false: U | 'Die[U]',
    *,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None
) -> 'Die[U]':
    """Ternary conditional operator.

    This replaces truthy outcomes with the first argument and falsy outcomes
    with the second argument.

    Args:
        again_count, again_depth, again_end: Forwarded to the final die
            constructor.
    """
    return self.map(lambda x: bool(x)).map(
        {
            True: outcome_if_true,
            False: outcome_if_false
        },
        again_count=again_count,
        again_depth=again_depth,
        again_end=again_end)
```
```python
def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
    """A die that returns `True` iff the roll of the die is contained in the target."""
    return self.map(lambda x: x in target)
```
```python
def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
    """Rolls this die a number of times and counts how many results are in the target."""
    return rolls @ self.is_in(target)
```
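Since each roll is an independent success/failure trial, `rolls @ self.is_in(target)` yields a binomial distribution. For example, the weights for counting 5+ results on 3d6 (a standalone computation with denominator `6**3`, not a call into icepool):

```python
from math import comb

# Weights for "count results of 5 or 6 on 3d6": binomial in the per-die
# success quantity 2 out of 6; denominator is 6**3 == 216.
rolls, success, total = 3, 2, 6
data = {k: comb(rolls, k) * success**k * (total - success)**(rolls - k)
        for k in range(rolls + 1)}
```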
```python
def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
    """Possible sequences produced by rolling this die a number of times.

    This is extremely expensive computationally. If possible, use `reduce()`
    instead; if you don't care about order, `Die.pool()` is better.
    """
    return icepool.cartesian_product(*(self for _ in range(rolls)),
                                     outcome_type=tuple)  # type: ignore
```
```python
def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
    """Creates a `Pool` from this `Die`.

    You might subscript the pool immediately afterwards, e.g.
    `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
    lowest of 5d6.

    Args:
        rolls: The number of copies of this `Die` to put in the pool.
            Or, a sequence of one `int` per die acting as
            `keep_tuple`. Note that `...` cannot be used in the
            argument to this method, as the argument determines the size of
            the pool.
    """
    if isinstance(rolls, int):
        return icepool.Pool({self: rolls})
    else:
        pool_size = len(rolls)
        # Haven't dealt with narrowing return type.
        return icepool.Pool({self: pool_size})[rolls]  # type: ignore
```
```python
def keep(self,
         rolls: int | Sequence[int],
         index: slice | Sequence[int | EllipsisType] | int | None = None,
         /) -> 'Die':
    """Selects elements after drawing and sorting and sums them.

    Args:
        rolls: The number of dice to roll.
        index: One of the following:
            * An `int`. This will count only the roll at the specified index.
                In this case, the result is a `Die` rather than a generator.
            * A `slice`. The selected dice are counted once each.
            * A sequence of `int`s with length equal to `rolls`.
                Each roll is counted that many times, which could be
                multiple or negative times.

                Up to one `...` (`Ellipsis`) may be used. If no `...` is
                used, the `rolls` argument may be omitted.

                `...` will be replaced with a number of zero counts in order
                to make up any missing elements compared to `rolls`.
                This number may be "negative" if more `int`s are provided
                than `rolls`. Specifically:

                * If `index` is shorter than `rolls`, `...`
                    acts as enough zero counts to make up the difference.
                    E.g. `(1, ..., 1)` on five dice would act as
                    `(1, 0, 0, 0, 1)`.
                * If `index` has length equal to `rolls`, `...` has no
                    effect. E.g. `(1, ..., 1)` on two dice would act as
                    `(1, 1)`.
                * If `index` is longer than `rolls` and `...` is on one
                    side, elements will be dropped from `index` on the side
                    with `...`.
                    E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                * If `index` is longer than `rolls` and `...`
                    is in the middle, the counts will be as the sum of two
                    one-sided `...`.
                    E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus
                    `(..., 1)`. If `rolls` was 1 this would have the -1 and
                    1 cancel each other out.
    """
    if isinstance(rolls, int):
        if index is None:
            raise ValueError(
                'If the number of rolls is an integer, an index argument must be provided.'
            )
        if isinstance(index, int):
            return self.pool(rolls).keep(index)
        else:
            return self.pool(rolls).keep(index).sum()  # type: ignore
    else:
        if index is not None:
            raise ValueError('Only one index sequence can be given.')
        return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore
```
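As a concrete check of the index semantics, `d6.keep(3, -1)` counts only the highest of the three sorted rolls. Brute force over all `6**3` ordered sequences reproduces the same distribution (a sketch only — icepool's pool machinery uses dynamic programming rather than enumeration):

```python
from collections import defaultdict
from itertools import product

# Highest of 3d6 by enumerating all 216 ordered rolls and keeping the last
# element after sorting, matching index -1.
data = defaultdict(int)
for rolls in product(range(1, 7), repeat=3):
    data[sorted(rolls)[-1]] += 1
# P(highest == 6) is (6**3 - 5**3) / 6**3 == 91/216.
```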
```python
def lowest(self,
           rolls: int,
           /,
           keep: int | None = None,
           drop: int | None = None) -> 'Die':
    """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll. All dice will have the same
            outcomes as `self`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be
                summed.
            * If only `drop` is provided, the `drop` lowest dice will be
                dropped and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
                the next `keep` lowest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = lowest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == 0 and canonical.stop == 1:
        return self._lowest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
```
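For example, `d6.lowest(3, keep=2)` sums the two lowest of 3d6. Brute-force enumeration gives the same weights (a sketch; the pool evaluator avoids this exponential walk):

```python
from collections import defaultdict
from itertools import product

# Sum of the two lowest of 3d6 over all 216 ordered rolls.
data = defaultdict(int)
for rolls in product(range(1, 7), repeat=3):
    data[sum(sorted(rolls)[:2])] += 1
# A sum of 2 requires at least two 1s: 3*5 + 1 == 16 of 216 rolls.
```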
```python
def highest(self,
            rolls: int,
            /,
            keep: int | None = None,
            drop: int | None = None) -> 'Die[T_co]':
    """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be
                summed.
            * If only `drop` is provided, the `drop` highest dice will be
                dropped and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped,
                then the next `keep` highest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = highest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == rolls - 1 and canonical.stop == rolls:
        return self._highest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
```
```python
def middle(
    self,
    rolls: int,
    /,
    keep: int = 1,
    *,
    tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
    """Roll several of this `Die` and sum the sorted results in the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep: The number of outcomes to sum. If this is greater than the
            current keep_size, all are kept.
        tie: What to do if `keep` is odd but the current keep_size
            is even, or vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
    """
    # Expression evaluators are difficult to type.
    return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore
```
```python
def map_to_pool(
    self,
    repl:
    'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
    /,
    *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
    star: bool | None = None,
    denominator: int | None = None) -> 'icepool.MultisetExpression[U]':
    """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.

    As `icepool.map_to_pool(repl, self, ...)`.

    If no argument is provided, the outcomes will be used to construct a
    mixture of pools directly, similar to the inverse of `pool.expand()`.
    Note that this is not particularly efficient since it does not make much
    use of dynamic programming.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
                produces a `Pool` (or something convertible to such).
            * A mapping from old outcomes to `Pool`
                (or something convertible to such).
                In this case args must have exactly one element.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        star: If `True`, the first of the args will be unpacked before
            giving them to `repl`.
            If not provided, it will be guessed based on the signature of
            `repl` and the number of arguments.
        denominator: If provided, the denominator of the result will be this
            value. Otherwise it will be the minimum to correctly weight the
            pools.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.

    Raises:
        ValueError: If `denominator` cannot be made consistent with the
            resulting mixture of pools.
    """
    if repl is None:
        repl = lambda x: x
    return icepool.map_to_pool(repl, self, *extra_args, star=star)
```
EXPERIMENTAL: Maps outcomes of this `Die` to `Pool`s, creating a `MultisetGenerator`.

As `icepool.map_to_pool(repl, self, ...)`.

If no argument is provided, the outcomes will be used to construct a
mixture of pools directly, similar to the inverse of `pool.expand()`.
Note that this is not particularly efficient since it does not make much
use of dynamic programming.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and
    produces a `Pool` (or something convertible to such).
  - A mapping from old outcomes to `Pool` (or something convertible to
    such). In this case args must have exactly one element.

  The new outcomes may be dice rather than just single outcomes.
  The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving
  them to `repl`. If not provided, it will be guessed based on the
  signature of `repl` and the number of arguments.
- denominator: If provided, the denominator of the result will be this
  value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that
this is not technically a `Pool`, though it supports most of the same
operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the
  resulting mixture of pools.
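The "mixture of pools" idea can be illustrated without icepool. The following is a minimal stdlib-only sketch (not icepool's implementation): each outcome of a d3 is replaced by a pool of that many d6s, and each branch is weighted by the probability of the outcome that produced it.

```python
from fractions import Fraction

# A "die" here is just a dict mapping outcome -> quantity.
d3 = {1: 1, 2: 1, 3: 1}
d6 = {i: 1 for i in range(1, 7)}

def pool_sum(dice):
    """Exact distribution of the sum of a fixed list of dice."""
    dist = {0: Fraction(1)}
    for die in dice:
        total_q = sum(die.values())
        new: dict[int, Fraction] = {}
        for t, p in dist.items():
            for o, q in die.items():
                new[t + o] = new.get(t + o, Fraction(0)) + p * Fraction(q, total_q)
        dist = new
    return dist

# Mixture in the spirit of map_to_pool: roll a d3, then roll that many d6s.
den = sum(d3.values())
mixture: dict[int, Fraction] = {}
for n, q in d3.items():
    for total, p in pool_sum([d6] * n).items():
        mixture[total] = mixture.get(total, Fraction(0)) + Fraction(q, den) * p

# The branch probabilities form a proper distribution.
assert sum(mixture.values()) == 1
```

icepool performs the same weighting with integer quantities over a common denominator rather than with `Fraction`s.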
def explode_to_pool(self,
                    rolls: int,
                    which: Collection[T_co] | Callable[..., bool]
                    | None = None,
                    /,
                    *,
                    star: bool | None = None,
                    depth: int = 9) -> 'icepool.MultisetExpression[T_co]':
    """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

    Args:
        rolls: The number of initial dice.
        which: Which outcomes to explode. Options:
            * A single outcome to explode.
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
                should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum depth of explosions for an individual die.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.
    """
    if depth == 0:
        return self.pool(rolls)
    if which is None:
        explode_set = {self.max_outcome()}
    else:
        explode_set = self._select_outcomes(which, star)
    if not explode_set:
        return self.pool(rolls)
    explode: 'Die[T_co]'
    not_explode: 'Die[T_co]'
    explode, not_explode = self.split(explode_set)

    single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
        int)
    for i in range(depth + 1):
        weight = explode.denominator()**i * self.denominator()**(
            depth - i) * not_explode.denominator()
        single_data[icepool.Vector((i, 1))] += weight
    single_data[icepool.Vector(
        (depth + 1, 0))] += explode.denominator()**(depth + 1)

    single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
    count_die = rolls @ single_count_die

    return count_die.map_to_pool(
        lambda x, nx: [explode] * x + [not_explode] * nx)
EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

Arguments:
- rolls: The number of initial dice.
- which: Which outcomes to explode. Options:
  - A single outcome to explode.
  - A collection of outcomes to explode.
  - A callable that takes an outcome and returns `True` if it should be
    exploded.
  - If not supplied, the max outcome will explode.
- star: Whether outcomes should be unpacked into separate arguments
  before sending them to a callable `which`. If not provided, this will
  be guessed based on the function signature.
- depth: The maximum depth of explosions for an individual die.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that
this is not technically a `Pool`, though it supports most of the same
operations.
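The weighting used in the source above can be checked in isolation. This stdlib sketch (illustrative, not icepool itself) builds the per-die distribution over (number of explosions, whether the final roll is kept) for a d6 exploding on its single max face, padding each branch with the full denominator so that every branch shares the denominator `total_den ** (depth + 1)`:

```python
# Quantities for a d6 that explodes on a 6, with explosion depth 2.
total_den = 6                       # denominator of the whole die
explode_den = 1                     # quantity of exploding outcomes (the 6)
not_explode_den = total_den - explode_den
depth = 2

weights: dict[tuple[int, int], int] = {}
for i in range(depth + 1):
    # i explosions followed by a non-exploding roll; the remaining
    # depth - i stages are padded with the full denominator.
    w = explode_den**i * not_explode_den * total_den**(depth - i)
    weights[(i, 1)] = weights.get((i, 1), 0) + w
# All depth + 1 rolls exploded: the chain is cut off with no final roll.
weights[(depth + 1, 0)] = explode_den**(depth + 1)

# Branch weights sum to the common denominator.
assert sum(weights.values()) == total_den**(depth + 1)
```

Since `total_den - explode_den == not_explode_den`, the geometric sum telescopes and the weights always total `total_den ** (depth + 1)`.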
def reroll_to_pool(
        self,
        rolls: int,
        which: Callable[..., bool] | Collection[T_co],
        /,
        max_rerolls: int,
        *,
        star: bool | None = None,
        mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
) -> 'icepool.MultisetExpression[T_co]':
    """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

    Each die can only be rerolled once (effectively `depth=1`), and no more
    than `max_rerolls` dice may be rerolled.

    Args:
        rolls: How many dice in the pool.
        which: Selects which outcomes are eligible to be rerolled. Options:
            * A collection of outcomes to reroll.
            * A callable that takes an outcome and returns `True` if it
                could be rerolled.
        max_rerolls: The maximum number of dice to reroll.
            Note that each die can only be rerolled once, so if the number
            of eligible dice is less than this, the excess rerolls have no
            effect.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        mode: How dice are selected for rerolling if there are more eligible
            dice than `max_rerolls`. Options:
            * `'random'` (default): Eligible dice will be chosen uniformly
                at random.
            * `'lowest'`: The lowest eligible dice will be rerolled.
            * `'highest'`: The highest eligible dice will be rerolled.
            * `'drop'`: All dice that ended up on an outcome selected by
                `which` will be dropped. This includes both dice that rolled
                into `which` initially and were not rerolled, and dice that
                were rerolled but rolled into `which` again. This can be
                considerably more efficient than the other modes.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.
    """
    rerollable_set = self._select_outcomes(which, star)
    if not rerollable_set:
        return self.pool(rolls)

    rerollable_die: 'Die[T_co]'
    not_rerollable_die: 'Die[T_co]'
    rerollable_die, not_rerollable_die = self.split(rerollable_set)
    single_is_rerollable = icepool.coin(rerollable_die.denominator(),
                                        self.denominator())
    rerollable = rolls @ single_is_rerollable

    def split(initial_rerollable: int) -> Die[tuple[int, int, int, int]]:
        """Computes the composition of the pool.

        Returns:
            initial_rerollable: The number of dice that initially fell into
                the rerollable set.
            rerolled_to_rerollable: The number of dice that were rerolled,
                but fell into the rerollable set again.
            not_rerollable: The number of dice that ended up outside the
                rerollable set, including both initial and rerolled dice.
            not_rerolled: The number of dice that were eligible for
                rerolling but were not rerolled.
        """
        initial_not_rerollable = rolls - initial_rerollable
        rerolled = min(initial_rerollable, max_rerolls)
        not_rerolled = initial_rerollable - rerolled

        def second_split(rerolled_to_rerollable):
            """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets."""
            rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable
            return icepool.tupleize(
                initial_rerollable, rerolled_to_rerollable,
                initial_not_rerollable + rerolled_to_not_rerollable,
                not_rerolled)

        return icepool.map(second_split,
                           rerolled @ single_is_rerollable,
                           star=False)

    pool_composition = rerollable.map(split, star=False)
    denominator = self.denominator()**(rolls + min(rolls, max_rerolls))
    pool_composition = pool_composition.multiply_to_denominator(
        denominator)

    def make_pool(initial_rerollable, rerolled_to_rerollable,
                  not_rerollable, not_rerolled):
        common = rerollable_die.pool(
            rerolled_to_rerollable) + not_rerollable_die.pool(
                not_rerollable)
        match mode:
            case 'random':
                return common + rerollable_die.pool(not_rerolled)
            case 'lowest':
                return common + rerollable_die.pool(
                    initial_rerollable).highest(not_rerolled)
            case 'highest':
                return common + rerollable_die.pool(
                    initial_rerollable).lowest(not_rerolled)
            case 'drop':
                return not_rerollable_die.pool(not_rerollable)
            case _:
                raise ValueError(
                    f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
                )

    return pool_composition.map_to_pool(make_pool, star=True)
EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

Each die can only be rerolled once (effectively `depth=1`), and no more
than `max_rerolls` dice may be rerolled.

Arguments:
- rolls: How many dice in the pool.
- which: Selects which outcomes are eligible to be rerolled. Options:
  - A collection of outcomes to reroll.
  - A callable that takes an outcome and returns `True` if it could be
    rerolled.
- max_rerolls: The maximum number of dice to reroll. Note that each die
  can only be rerolled once, so if the number of eligible dice is less
  than this, the excess rerolls have no effect.
- star: Whether outcomes should be unpacked into separate arguments
  before sending them to a callable `which`. If not provided, this will
  be guessed based on the function signature.
- mode: How dice are selected for rerolling if there are more eligible
  dice than `max_rerolls`. Options:
  - `'random'` (default): Eligible dice will be chosen uniformly at
    random.
  - `'lowest'`: The lowest eligible dice will be rerolled.
  - `'highest'`: The highest eligible dice will be rerolled.
  - `'drop'`: All dice that ended up on an outcome selected by `which`
    will be dropped. This includes both dice that rolled into `which`
    initially and were not rerolled, and dice that were rerolled but
    rolled into `which` again. This can be considerably more efficient
    than the other modes.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that
this is not technically a `Pool`, though it supports most of the same
operations.
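The pool-composition step in the source can be mirrored with stdlib tools. This sketch (hypothetical numbers, not icepool's implementation) computes, for a pool of 4d6 rerolling 1s and 2s with at most 2 shared rerolls, the joint probability of how many dice were initially eligible and how many rerolls landed in the eligible set again:

```python
from fractions import Fraction
from math import comb

rolls, max_rerolls = 4, 2
p_reroll = Fraction(2, 6)  # chance a single d6 lands in the rerollable set

def binom(n: int, k: int, p: Fraction) -> Fraction:
    """P(k successes in n independent trials with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

composition: dict[tuple[int, int], Fraction] = {}
for initial in range(rolls + 1):
    # Each die may be rerolled at most once, capped by the shared budget.
    rerolled = min(initial, max_rerolls)
    for again in range(rerolled + 1):  # rerolls that land eligible again
        prob = binom(rolls, initial, p_reroll) * binom(rerolled, again, p_reroll)
        composition[(initial, again)] = prob

# The compositions partition the probability space.
assert sum(composition.values()) == 1
```

icepool's `split`/`second_split` pair computes the same composition, but with integer quantities normalized to the denominator `self.denominator() ** (rolls + min(rolls, max_rerolls))`.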
def stochastic_round(self,
                     *,
                     max_denominator: int | None = None) -> 'Die[int]':
    """Randomly rounds outcomes up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    return self.map(lambda x: icepool.stochastic_round(
        x, max_denominator=max_denominator))
Randomly rounds outcomes up or down to the nearest integer according to the two distances.

Specifically, rounds `x` up with probability `x - floor(x)` and down
otherwise.

Arguments:
- max_denominator: If provided, each rounding will be performed using
  `fractions.Fraction.limit_denominator(max_denominator)`. Otherwise,
  the rounding will be performed without `limit_denominator`.
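Stochastic rounding preserves the mean exactly, which is the point of the scheme. A stdlib sketch of the distribution a single stochastic round produces (illustrative, not icepool's implementation):

```python
from fractions import Fraction
from math import floor

def stochastic_round_dist(x: Fraction) -> dict[int, Fraction]:
    """Distribution of stochastically rounding x: up with probability
    x - floor(x), down otherwise."""
    lo = floor(x)
    up = x - lo
    dist = {lo: 1 - up}
    if up:
        dist[lo + 1] = up
    return dist

# 2.25 rounds down to 2 with probability 3/4 and up to 3 with probability
# 1/4, so the expected value of the rounded result is exactly 2.25.
dist = stochastic_round_dist(Fraction(9, 4))
mean = sum(o * p for o, p in dist.items())
assert mean == Fraction(9, 4)
```

The `max_denominator` option just applies `Fraction.limit_denominator` first, trading exactness for a smaller resulting denominator.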
def cmp(self, other) -> 'Die[int]':
    """A `Die` with outcomes 1, -1, and 0.

    The quantities are equal to the positive outcome of `self > other`,
    `self < other`, and the remainder respectively.
    """
    other = implicit_convert_to_die(other)

    data = {}

    lt = self < other
    if True in lt:
        data[-1] = lt[True]
    eq = self == other
    if True in eq:
        data[0] = eq[True]
    gt = self > other
    if True in gt:
        data[1] = gt[True]

    return Die(data)
A `Die` with outcomes 1, -1, and 0.

The quantities are equal to the positive outcome of `self > other`,
`self < other`, and the remainder respectively.
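The same quantities can be checked by brute force over all pairs of faces. A stdlib sketch (not using icepool):

```python
from collections import Counter
from itertools import product

def cmp_dist(a, b) -> Counter:
    """Counts of sign(x - y) over all (x, y) face pairs: 1, -1, or 0."""
    return Counter((x > y) - (x < y) for x, y in product(a, b))

d6 = range(1, 7)
dist = cmp_dist(d6, d6)
# Of the 36 pairs: 15 have x > y, 15 have x < y, and 6 tie.
assert dist == Counter({1: 15, -1: 15, 0: 6})
```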
def equals(self, other, *, simplify: bool = False) -> bool:
    """`True` iff both dice have the same outcomes and quantities.

    This is `False` if `other` is not a `Die`, even if it would convert
    to an equal `Die`.

    Truth value does NOT matter.

    If one `Die` has a zero-quantity outcome and the other `Die` does not
    contain that outcome, they are treated as unequal by this function.

    The `==` and `!=` operators have a dual purpose; they return a `Die`
    with a truth value determined by this method.
    Only dice returned by these operators have a truth value. The data of
    these dice is lazily evaluated since the caller may only be interested
    in the `Die` value or the truth value.

    Args:
        simplify: If `True`, the dice will be simplified before comparing.
            Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
    """
    if not isinstance(other, Die):
        return False

    if simplify:
        return self.simplify()._hash_key == other.simplify()._hash_key
    else:
        return self._hash_key == other._hash_key
`True` iff both dice have the same outcomes and quantities.

This is `False` if `other` is not a `Die`, even if it would convert to
an equal `Die`.

Truth value does NOT matter.

If one `Die` has a zero-quantity outcome and the other `Die` does not
contain that outcome, they are treated as unequal by this function.

The `==` and `!=` operators have a dual purpose; they return a `Die`
with a truth value determined by this method. Only dice returned by
these operators have a truth value. The data of these dice is lazily
evaluated since the caller may only be interested in the `Die` value or
the truth value.

Arguments:
- simplify: If `True`, the dice will be simplified before comparing.
  Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
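The 2:2 vs 1:1 coin distinction comes down to whether quantities are reduced by their GCD before comparing. A stdlib sketch of the idea behind `simplify=True` (dice modeled as plain outcome-to-quantity dicts, not icepool's actual representation):

```python
from functools import reduce
from math import gcd

def simplify(die: dict) -> dict:
    """Divide all quantities by their GCD, in the spirit of simplify()."""
    g = reduce(gcd, die.values())
    return {o: q // g for o, q in die.items()}

coin_2_2 = {False: 2, True: 2}
coin_1_1 = {False: 1, True: 1}

assert coin_2_2 != coin_1_1                      # unequal as raw data
assert simplify(coin_2_2) == simplify(coin_1_1)  # equal once simplified
```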
class Population(ABC, Expandable[T_co], Mapping[Any, int]):
    """A mapping from outcomes to `int` quantities.

    Outcomes within each instance must be hashable and totally orderable.

    Subclasses include `Die` and `Deck`.
    """

    # Abstract methods.

    @property
    @abstractmethod
    def _new_type(self) -> type:
        """The type to use when constructing a new instance."""

    @abstractmethod
    def keys(self) -> CountsKeysView[T_co]:
        """The outcomes within the population in sorted order."""

    @abstractmethod
    def values(self) -> CountsValuesView:
        """The quantities within the population in outcome order."""

    @abstractmethod
    def items(self) -> CountsItemsView[T_co]:
        """The (outcome, quantity)s of the population in sorted order."""

    @property
    def _items_for_cartesian_product(self) -> Sequence[tuple[T_co, int]]:
        return self.items()

    def _unary_operator(self, op: Callable, *args, **kwargs):
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            new_outcome = op(outcome, *args, **kwargs)
            data[new_outcome] += quantity
        return self._new_type(data)

    # Outcomes.

    def outcomes(self) -> CountsKeysView[T_co]:
        """The outcomes of the mapping in ascending order.

        These are also the `keys` of the mapping.
        Prefer to use the name `outcomes`.
        """
        return self.keys()

    @cached_property
    def _common_outcome_length(self) -> int | None:
        result = None
        for outcome in self.outcomes():
            if isinstance(outcome, Mapping):
                return None
            elif isinstance(outcome, Sized):
                if result is None:
                    result = len(outcome)
                elif len(outcome) != result:
                    return None
        return result

    def common_outcome_length(self) -> int | None:
        """The common length of all outcomes.

        If outcomes have no lengths or different lengths, the result is `None`.
        """
        return self._common_outcome_length

    def is_empty(self) -> bool:
        """`True` iff this population has no outcomes."""
        return len(self) == 0

    def min_outcome(self) -> T_co:
        """The least outcome."""
        return self.outcomes()[0]

    def max_outcome(self) -> T_co:
        """The greatest outcome."""
        return self.outcomes()[-1]

    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
                /) -> T_co | None:
        """The nearest outcome in this population fitting the comparison.

        Args:
            comparison: The comparison which the result must fit. For example,
                '<=' would find the greatest outcome that is not greater than
                the argument.
            outcome: The outcome to compare against.

        Returns:
            The nearest outcome fitting the comparison, or `None` if there is
            no such outcome.
        """
        match comparison:
            case '<=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_right(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '<':
                index = bisect.bisect_left(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '>=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_left(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case '>':
                index = bisect.bisect_right(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case _:
                raise ValueError(f'Invalid comparison {comparison}')

    @staticmethod
    def _zero(x):
        return x * 0

    def zero(self: C) -> C:
        """Zeros all outcomes of this population.

        This is done by multiplying all outcomes by `0`.

        The result will have the same denominator.

        Raises:
            ValueError: If the zeros did not resolve to a single outcome.
        """
        result = self._unary_operator(Population._zero)
        if len(result) != 1:
            raise ValueError('zero() did not resolve to a single outcome.')
        return result

    def zero_outcome(self) -> T_co:
        """A zero-outcome for this population.

        E.g. `0` for a `Population` whose outcomes are `int`s.
        """
        return self.zero().outcomes()[0]

    # Quantities.

    @overload
    def quantity(self, outcome: Hashable, /) -> int:
        """The quantity of a single outcome."""

    @overload
    def quantity(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                 outcome: Hashable, /) -> int:
        """The total quantity fitting a comparison to a single outcome."""

    def quantity(self,
                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                 | Hashable,
                 outcome: Hashable | None = None,
                 /) -> int:
        """The quantity of a single outcome.

        A comparison can be provided, in which case this returns the total
        quantity fitting the comparison.

        Args:
            comparison: The comparison to use. This can be omitted, in which
                case it is treated as '=='.
            outcome: The outcome to query.
        """
        if outcome is None:
            outcome = comparison
            comparison = '=='
        else:
            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                              comparison)

        match comparison:
            case '==':
                return self.get(outcome, 0)
            case '!=':
                return self.denominator() - self.get(outcome, 0)
            case '<=' | '<':
                threshold = self.nearest(comparison, outcome)
                if threshold is None:
                    return 0
                else:
                    return self._cumulative_quantities[threshold]
            case '>=':
                return self.denominator() - self.quantity('<', outcome)
            case '>':
                return self.denominator() - self.quantity('<=', outcome)
            case _:
                raise ValueError(f'Invalid comparison {comparison}')

    @overload
    def quantities(self, /) -> CountsValuesView:
        """All quantities in sorted order."""

    @overload
    def quantities(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                   /) -> Sequence[int]:
        """The total quantities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
                May be omitted, in which case equality `'=='` is used.
        """

    def quantities(self,
                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                   | None = None,
                   /) -> CountsValuesView | Sequence[int]:
        """The quantities of the mapping in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
                May be omitted, in which case equality `'=='` is used.
        """
        if comparison is None:
            comparison = '=='

        match comparison:
            case '==':
                return self.values()
            case '<=':
                return tuple(itertools.accumulate(self.values()))
            case '>=':
                return tuple(
                    itertools.accumulate(self.values()[:-1],
                                         operator.sub,
                                         initial=self.denominator()))
            case '!=':
                return tuple(self.denominator() - q for q in self.values())
            case '<':
                return tuple(self.denominator() - q
                             for q in self.quantities('>='))
            case '>':
                return tuple(self.denominator() - q
                             for q in self.quantities('<='))
            case _:
                raise ValueError(f'Invalid comparison {comparison}')

    @cached_property
    def _cumulative_quantities(self) -> Mapping[T_co, int]:
        result = {}
        cdf = 0
        for outcome, quantity in self.items():
            cdf += quantity
            result[outcome] = cdf
        return result

    @cached_property
    def _denominator(self) -> int:
        return sum(self.values())

    def denominator(self) -> int:
        """The sum of all quantities (e.g. weights or duplicates).

        For the number of unique outcomes, use `len()`.
        """
        return self._denominator

    def multiply_quantities(self: C, scale: int, /) -> C:
        """Multiplies all quantities by an integer."""
        if scale == 1:
            return self
        data = {
            outcome: quantity * scale
            for outcome, quantity in self.items()
        }
        return self._new_type(data)

    def divide_quantities(self: C, divisor: int, /) -> C:
        """Divides all quantities by an integer, rounding down.

        Resulting zero quantities are dropped.
        """
        if divisor == 0:
            return self
        data = {
            outcome: quantity // divisor
            for outcome, quantity in self.items() if quantity >= divisor
        }
        return self._new_type(data)

    def modulo_quantities(self: C, divisor: int, /) -> C:
        """Modulus of all quantities with an integer."""
        data = {
            outcome: quantity % divisor
            for outcome, quantity in self.items()
        }
        return self._new_type(data)

    def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C:
        """Changes the denominator to a target number by changing the quantity of a specified outcome.

        Args:
            target: The denominator of the result.
            outcome: The outcome whose quantity will be adjusted.

        Returns:
            A `Population` like `self` but with the quantity of `outcome`
            adjusted so that the overall denominator is equal to `target`.
            If the quantity is reduced to zero, the outcome will be removed.

        Raises:
            ValueError: If this would require the quantity of the specified
                outcome to be negative.
        """
        adjustment = target - self.denominator()
        data = {outcome: quantity for outcome, quantity in self.items()}
        new_quantity = data.get(outcome, 0) + adjustment
        if new_quantity > 0:
            data[outcome] = new_quantity
        elif new_quantity == 0:
            del data[outcome]
        else:
            raise ValueError(
                f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}'
            )
        return self._new_type(data)

    def multiply_to_denominator(self: C, denominator: int, /) -> C:
        """Multiplies all quantities to reach the target denominator.

        Raises:
            ValueError: If this cannot be achieved using an integer scaling.
        """
        if denominator % self.denominator():
            raise ValueError(
                'Target denominator is not an integer factor of the current denominator.'
            )
        return self.multiply_quantities(denominator // self.denominator())

    def append(self: C, outcome, quantity: int = 1, /) -> C:
        """This population with an outcome appended.

        Args:
            outcome: The outcome to append.
            quantity: The quantity of the outcome to append. Can be negative,
                which removes quantity (but not below zero).
        """
        data = Counter(self)
        data[outcome] = max(data[outcome] + quantity, 0)
        return self._new_type(data)

    def remove(self: C, outcome, quantity: int | None = None, /) -> C:
        """This population with an outcome removed.

        Args:
            outcome: The outcome to remove.
            quantity: The quantity of the outcome to remove. If not set, all
                quantity of that outcome is removed. Can be negative, which
                adds quantity instead.
        """
        if quantity is None:
            data = Counter(self)
            data[outcome] = 0
            return self._new_type(data)
        else:
            return self.append(outcome, -quantity)

    # Probabilities.

    @overload
    def probability(self, outcome: Hashable, /, *,
                    percent: Literal[False]) -> Fraction:
        """The probability of a single outcome, or 0.0 if not present."""

    @overload
    def probability(self, outcome: Hashable, /, *,
                    percent: Literal[True]) -> float:
        """The probability of a single outcome, or 0.0 if not present."""

    @overload
    def probability(self, outcome: Hashable, /) -> Fraction:
        """The probability of a single outcome, or 0.0 if not present."""

    @overload
    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                              '>'], outcome: Hashable, /, *,
                    percent: Literal[False]) -> Fraction:
        """The total probability of outcomes fitting a comparison."""

    @overload
    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                              '>'], outcome: Hashable, /, *,
                    percent: Literal[True]) -> float:
        """The total probability of outcomes fitting a comparison."""

    @overload
    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                              '>'], outcome: Hashable,
                    /) -> Fraction:
        """The total probability of outcomes fitting a comparison."""

    def probability(self,
                    comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                    | Hashable,
                    outcome: Hashable | None = None,
                    /,
                    *,
                    percent: bool = False) -> Fraction | float:
        """The total probability of outcomes fitting a comparison.

        Args:
            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
                May be omitted, in which case equality `'=='` is used.
            outcome: The outcome to compare to.
            percent: If set, the result will be a percentage expressed as a
                `float`. Otherwise, the result is a `Fraction`.
        """
        if outcome is None:
            outcome = comparison
            comparison = '=='
        else:
            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                              comparison)
        result = Fraction(self.quantity(comparison, outcome),
                          self.denominator())
        return result * 100.0 if percent else result

    @overload
    def probabilities(self, /, *,
                      percent: Literal[False]) -> Sequence[Fraction]:
        """All probabilities in sorted order."""

    @overload
    def probabilities(self, /, *, percent: Literal[True]) -> Sequence[float]:
        """All probabilities in sorted order."""

    @overload
    def probabilities(self, /) -> Sequence[Fraction]:
        """All probabilities in sorted order."""

    @overload
    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                                '>'], /, *,
                      percent: Literal[False]) -> Sequence[Fraction]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.
        """

    @overload
    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                                '>'], /, *,
                      percent: Literal[True]) -> Sequence[float]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.
        """

    @overload
    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                                '>'], /) -> Sequence[Fraction]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.
        """

    def probabilities(
            self,
            comparison: Literal['==', '!=', '<=', '<', '>=', '>']
            | None = None,
            /,
            *,
            percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
                May be omitted, in which case equality `'=='` is used.
            percent: If set, the result will be a percentage expressed as a
                `float`. Otherwise, the result is a `Fraction`.
        """
        if comparison is None:
            comparison = '=='

        result = tuple(
            Fraction(q, self.denominator())
            for q in self.quantities(comparison))

        if percent:
            return tuple(100.0 * x for x in result)
        else:
            return result

    # Scalar statistics.

    def mode(self) -> tuple:
        """A tuple containing the most common outcome(s) of the population.

        These are sorted from lowest to highest.
        """
        return tuple(outcome for outcome, quantity in self.items()
                     if quantity == self.modal_quantity())

    def modal_quantity(self) -> int:
        """The highest quantity of any single outcome."""
        return max(self.quantities())

    def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
        """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs."""
        outcomes = icepool.sorted_union(self, other)
        return max(
            abs(
                self.probability('<=', outcome) -
                other.probability('<=', outcome)) for outcome in outcomes)

    def cramer_von_mises(self, other: 'Population') -> Fraction:
        """Cramér-von Mises statistic. The sum-of-squares difference between CDFs."""
        outcomes = icepool.sorted_union(self, other)
        return sum(((self.probability('<=', outcome) -
                     other.probability('<=', outcome))**2
                    for outcome in outcomes),
                   start=Fraction(0, 1))

    def median(self):
        """The median, taking the mean in case of a tie.

        This will fail if the outcomes do not support division;
        in this case, use `median_low` or `median_high` instead.
        """
        return self.quantile(1, 2)

    def median_low(self) -> T_co:
        """The median, taking the lower in case of a tie."""
        return self.quantile_low(1, 2)

    def median_high(self) -> T_co:
        """The median, taking the higher in case of a tie."""
        return self.quantile_high(1, 2)

    def quantile(self, n: int, d: int = 100):
        """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.

        This will fail if the outcomes do not support addition and division;
        in this case, use `quantile_low` or `quantile_high` instead.
        """
        # Should support addition and division.
        return (self.quantile_low(n, d) +
                self.quantile_high(n, d)) / 2  # type: ignore

    def quantile_low(self, n: int, d: int = 100) -> T_co:
        """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
        index = bisect.bisect_left(self.quantities('<='),
                                   (n * self.denominator() + d - 1) // d)
        if index >= len(self):
            return self.max_outcome()
        return self.outcomes()[index]

    def quantile_high(self, n: int, d: int = 100) -> T_co:
        """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
        index = bisect.bisect_right(self.quantities('<='),
                                    n * self.denominator() // d)
        if index >= len(self):
            return self.max_outcome()
        return self.outcomes()[index]

    @overload
    def mean(self: 'Population[numbers.Rational]') -> Fraction:
        ...

    @overload
    def mean(self: 'Population[float]') -> float:
        ...

    def mean(
        self: 'Population[numbers.Rational] | Population[float]'
    ) -> Fraction | float:
        return try_fraction(
            sum(outcome * quantity for outcome, quantity in self.items()),
            self.denominator())

    @overload
    def variance(self: 'Population[numbers.Rational]') -> Fraction:
        ...

    @overload
    def variance(self: 'Population[float]') -> float:
        ...

    def variance(
        self: 'Population[numbers.Rational] | Population[float]'
    ) -> Fraction | float:
        """This is the population variance, not the sample variance."""
        mean = self.mean()
        mean_of_squares = try_fraction(
            sum(quantity * outcome**2 for outcome, quantity in self.items()),
            self.denominator())
        return mean_of_squares - mean * mean

    def standard_deviation(
            self: 'Population[numbers.Rational] | Population[float]') -> float:
        return math.sqrt(self.variance())

    sd = standard_deviation

    def standardized_moment(
            self: 'Population[numbers.Rational] | Population[float]',
            k: int) -> float:
        sd = self.standard_deviation()
        mean = self.mean()
        ev = sum(p * (outcome - mean)**k  # type: ignore
                 for outcome, p in zip(self.outcomes(), self.probabilities()))
        return ev / (sd**k)

    def skewness(
            self: 'Population[numbers.Rational] | Population[float]') -> float:
        return self.standardized_moment(3)

    def excess_kurtosis(
            self: 'Population[numbers.Rational] | Population[float]') -> float:
        return self.standardized_moment(4) - 3.0

    def entropy(self, base: float = 2.0) -> float:
        """The entropy of a random sample from this population.

        Args:
            base: The logarithm base to use. Default is 2.0, which gives the
                entropy in bits.
        """
        return -sum(p * math.log(p, base)
                    for p in self.probabilities() if p > 0.0)

    # Joint statistics.

    class _Marginals(Generic[C]):
        """Helper class for implementing `marginals()`."""

        _population: C

        def __init__(self, population, /):
            self._population = population

        def __len__(self) -> int:
            """The minimum len() of all outcomes."""
            return min(len(x) for x in self._population.outcomes())

        def __getitem__(self, dims: int | slice, /):
            """Marginalizes the given dimensions."""
            return self._population._unary_operator(operator.getitem, dims)

        def __iter__(self) -> Iterator:
            for i in range(len(self)):
                yield self[i]

        def __getattr__(self, key: str):
            if key[0] == '_':
                raise AttributeError(key)
            return self._population._unary_operator(operator.attrgetter(key))

    @property
    def marginals(self: C) -> _Marginals[C]:
        """A property that applies the `[]` operator to outcomes.

        For example, `population.marginals[:2]` will marginalize the first two
        elements of sequence outcomes.

        Attributes that do not start with an underscore will also be forwarded.
        For example, `population.marginals.x` will marginalize the `x` attribute
        from e.g. `namedtuple` outcomes.
        """
        return Population._Marginals(self)

    @overload
    def covariance(self: 'Population[tuple[numbers.Rational, ...]]', i: int,
                   j: int) -> Fraction:
        ...

    @overload
    def covariance(self: 'Population[tuple[float, ...]]', i: int,
                   j: int) -> float:
        ...
718 719 def covariance( 720 self: 721 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 722 i: int, j: int) -> Fraction | float: 723 mean_i = self.marginals[i].mean() 724 mean_j = self.marginals[j].mean() 725 return try_fraction( 726 sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity 727 for outcome, quantity in self.items()), self.denominator()) 728 729 def correlation( 730 self: 731 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 732 i: int, j: int) -> float: 733 sd_i = self.marginals[i].standard_deviation() 734 sd_j = self.marginals[j].standard_deviation() 735 return self.covariance(i, j) / (sd_i * sd_j) 736 737 # Transformations. 738 739 def _select_outcomes(self, which: Callable[..., bool] | Collection[T_co], 740 star: bool | None) -> Set[T_co]: 741 """Returns a set of outcomes of self that fit the given condition.""" 742 if callable(which): 743 if star is None: 744 star = infer_star(which) 745 if star: 746 # Need TypeVarTuple to check this. 747 return { 748 outcome 749 for outcome in self.outcomes() 750 if which(*outcome) # type: ignore 751 } 752 else: 753 return { 754 outcome 755 for outcome in self.outcomes() if which(outcome) 756 } 757 else: 758 # Collection. 759 return set(outcome for outcome in self.outcomes() 760 if outcome in which) 761 762 def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C: 763 """Converts the outcomes of this population to a one-hot representation. 764 765 Args: 766 outcomes: If provided, each outcome will be mapped to a `Vector` 767 where the element at `outcomes.index(outcome)` is set to `True` 768 and the rest to `False`, or all `False` if the outcome is not 769 in `outcomes`. 770 If not provided, `self.outcomes()` is used. 
771 """ 772 if outcomes is None: 773 outcomes = self.outcomes() 774 775 data: MutableMapping[Vector[bool], int] = defaultdict(int) 776 for outcome, quantity in zip(self.outcomes(), self.quantities()): 777 value = [False] * len(outcomes) 778 if outcome in outcomes: 779 value[outcomes.index(outcome)] = True 780 data[Vector(value)] += quantity 781 return self._new_type(data) 782 783 def split(self, 784 which: Callable[..., bool] | Collection[T_co] | None = None, 785 /, 786 *, 787 star: bool | None = None) -> tuple[C, C]: 788 """Splits this population into one containing selected items and another containing the rest. 789 790 The sum of the denominators of the results is equal to the denominator 791 of this population. 792 793 If you want to split more than two ways, use `Population.group_by()`. 794 795 Args: 796 which: Selects which outcomes to select. Options: 797 * A callable that takes an outcome and returns `True` if it 798 should be selected. 799 * A collection of outcomes to select. 800 star: Whether outcomes should be unpacked into separate arguments 801 before sending them to a callable `which`. 802 If not provided, this will be guessed based on the function 803 signature. 804 805 Returns: 806 A population consisting of the outcomes that were selected by 807 `which`, and a population consisting of the unselected outcomes. 
808 """ 809 if which is None: 810 outcome_set = {self.min_outcome()} 811 else: 812 outcome_set = self._select_outcomes(which, star) 813 814 selected = {} 815 not_selected = {} 816 for outcome, count in self.items(): 817 if outcome in outcome_set: 818 selected[outcome] = count 819 else: 820 not_selected[outcome] = count 821 822 return self._new_type(selected), self._new_type(not_selected) 823 824 class _GroupBy(Generic[C]): 825 """Helper class for implementing `group_by()`.""" 826 827 _population: C 828 829 def __init__(self, population, /): 830 self._population = population 831 832 def __call__(self, 833 key_map: Callable[..., U] | Mapping[T_co, U], 834 /, 835 *, 836 star: bool | None = None) -> Mapping[U, C]: 837 if callable(key_map): 838 if star is None: 839 star = infer_star(key_map) 840 if star: 841 key_function = lambda o: key_map(*o) 842 else: 843 key_function = key_map 844 else: 845 key_function = lambda o: key_map.get(o, o) 846 847 result_datas: MutableMapping[U, MutableMapping[Any, int]] = {} 848 outcome: Any 849 for outcome, quantity in self._population.items(): 850 key = key_function(outcome) 851 if key not in result_datas: 852 result_datas[key] = defaultdict(int) 853 result_datas[key][outcome] += quantity 854 return { 855 k: self._population._new_type(v) 856 for k, v in result_datas.items() 857 } 858 859 def __getitem__(self, dims: int | slice, /): 860 """Marginalizes the given dimensions.""" 861 return self(lambda x: x[dims]) 862 863 def __getattr__(self, key: str): 864 if key[0] == '_': 865 raise AttributeError(key) 866 return self(lambda x: getattr(x, key)) 867 868 @property 869 def group_by(self: C) -> _GroupBy[C]: 870 """A method-like property that splits this population into sub-populations based on a key function. 871 872 The sum of the denominators of the results is equal to the denominator 873 of this population. 874 875 This can be useful when using the law of total probability. 
876 877 Example: `d10.group_by(lambda x: x % 3)` is 878 ```python 879 { 880 0: Die([3, 6, 9]), 881 1: Die([1, 4, 7, 10]), 882 2: Die([2, 5, 8]), 883 } 884 ``` 885 886 You can also use brackets to group by indexes or slices; or attributes 887 to group by those. Example: 888 889 ```python 890 Die([ 891 'aardvark', 892 'alligator', 893 'asp', 894 'blowfish', 895 'cat', 896 'crocodile', 897 ]).group_by[0] 898 ``` 899 900 produces 901 902 ```python 903 { 904 'a': Die(['aardvark', 'alligator', 'asp']), 905 'b': Die(['blowfish']), 906 'c': Die(['cat', 'crocodile']), 907 } 908 ``` 909 910 Args: 911 key_map: A function or mapping that takes outcomes and produces the 912 key of the corresponding outcome in the result. If this is 913 a Mapping, outcomes not in the mapping are their own key. 914 star: Whether outcomes should be unpacked into separate arguments 915 before sending them to a callable `key_map`. 916 If not provided, this will be guessed based on the function 917 signature. 918 """ 919 return Population._GroupBy(self) 920 921 def sample(self) -> T_co: 922 """A single random sample from this population. 923 924 Note that this is always "with replacement" even for `Deck` since 925 instances are immutable. 926 927 This uses the standard `random` package and is not cryptographically 928 secure. 929 """ 930 # We don't use random.choices since that is based on floats rather than ints. 931 r = random.randrange(self.denominator()) 932 index = bisect.bisect_right(self.quantities('<='), r) 933 return self.outcomes()[index] 934 935 def format(self, format_spec: str, /, **kwargs) -> str: 936 """Formats this mapping as a string. 937 938 `format_spec` should start with the output format, 939 which can be: 940 * `md` for Markdown (default) 941 * `bbcode` for BBCode 942 * `csv` for comma-separated values 943 * `html` for HTML 944 945 After this, you may optionally add a `:` followed by a series of 946 requested columns. Allowed columns are: 947 948 * `o`: Outcomes. 
949 * `*o`: Outcomes, unpacked if applicable. 950 * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome. 951 * `p==`, `p<=`, `p>=`: Probabilities (0-1). 952 * `%==`, `%<=`, `%>=`: Probabilities (0%-100%). 953 * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N". 954 955 Columns may optionally be separated using `|` characters. 956 957 The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the 958 columns are the outcomes (unpacked if applicable) the quantities, and 959 the probabilities. The quantities are omitted from the default columns 960 if any individual quantity is 10**30 or greater. 961 """ 962 if not self.is_empty() and self.modal_quantity() < 10**30: 963 default_column_spec = '*oq==%==' 964 else: 965 default_column_spec = '*o%==' 966 if len(format_spec) == 0: 967 format_spec = 'md:' + default_column_spec 968 969 format_spec = format_spec.replace('|', '') 970 971 parts = format_spec.split(':') 972 973 if len(parts) == 1: 974 output_format = parts[0] 975 col_spec = default_column_spec 976 elif len(parts) == 2: 977 output_format = parts[0] 978 col_spec = parts[1] 979 else: 980 raise ValueError('format_spec has too many colons.') 981 982 match output_format: 983 case 'md': 984 return icepool.population.format.markdown(self, col_spec) 985 case 'bbcode': 986 return icepool.population.format.bbcode(self, col_spec) 987 case 'csv': 988 return icepool.population.format.csv(self, col_spec, **kwargs) 989 case 'html': 990 return icepool.population.format.html(self, col_spec) 991 case _: 992 raise ValueError( 993 f"Unsupported output format '{output_format}'") 994 995 def __format__(self, format_spec: str, /) -> str: 996 return self.format(format_spec) 997 998 def __str__(self) -> str: 999 return f'{self}'
A mapping from outcomes to `int` quantities.

Outcomes within each instance must be hashable and totally orderable.
```python
@abstractmethod
def keys(self) -> CountsKeysView[T_co]:
    """The outcomes within the population in sorted order."""
```
The outcomes within the population in sorted order.
```python
@abstractmethod
def values(self) -> CountsValuesView:
    """The quantities within the population in outcome order."""
```
The quantities within the population in outcome order.
```python
@abstractmethod
def items(self) -> CountsItemsView[T_co]:
    """The (outcome, quantity)s of the population in sorted order."""
```
The (outcome, quantity)s of the population in sorted order.
```python
def common_outcome_length(self) -> int | None:
    """The common length of all outcomes.

    If outcomes have no lengths or different lengths, the result is `None`.
    """
    return self._common_outcome_length
```
The common length of all outcomes.
If outcomes have no lengths or different lengths, the result is `None`.
```python
def is_empty(self) -> bool:
    """`True` iff this population has no outcomes."""
    return len(self) == 0
```
`True` iff this population has no outcomes.
```python
def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
            /) -> T_co | None:
    """The nearest outcome in this population fitting the comparison.

    Args:
        comparison: The comparison which the result must fit. For example,
            '<=' would find the greatest outcome that is not greater than
            the argument.
        outcome: The outcome to compare against.

    Returns:
        The nearest outcome fitting the comparison, or `None` if there is
        no such outcome.
    """
    match comparison:
        case '<=':
            if outcome in self:
                return outcome
            index = bisect.bisect_right(self.outcomes(), outcome) - 1
            if index < 0:
                return None
            return self.outcomes()[index]
        case '<':
            index = bisect.bisect_left(self.outcomes(), outcome) - 1
            if index < 0:
                return None
            return self.outcomes()[index]
        case '>=':
            if outcome in self:
                return outcome
            index = bisect.bisect_left(self.outcomes(), outcome)
            if index >= len(self):
                return None
            return self.outcomes()[index]
        case '>':
            index = bisect.bisect_right(self.outcomes(), outcome)
            if index >= len(self):
                return None
            return self.outcomes()[index]
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
```
The nearest outcome in this population fitting the comparison.
Arguments:
- comparison: The comparison which the result must fit. For example, '<=' would find the greatest outcome that is not greater than the argument.
- outcome: The outcome to compare against.
Returns:
The nearest outcome fitting the comparison, or `None` if there is no such outcome.
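Each comparison reduces to a `bisect` lookup on the sorted outcome sequence. A minimal stdlib sketch of the same lookup logic over a plain sorted list (the `nearest` helper here is illustrative, not icepool's implementation):

```python
import bisect

def nearest(outcomes: list, comparison: str, outcome):
    """Nearest element of a sorted list fitting the comparison, or None."""
    if comparison == '<=':
        index = bisect.bisect_right(outcomes, outcome) - 1
    elif comparison == '<':
        index = bisect.bisect_left(outcomes, outcome) - 1
    elif comparison == '>=':
        index = bisect.bisect_left(outcomes, outcome)
    elif comparison == '>':
        index = bisect.bisect_right(outcomes, outcome)
    else:
        raise ValueError(f'Invalid comparison {comparison}')
    if 0 <= index < len(outcomes):
        return outcomes[index]
    return None

# A population whose outcomes are the even numbers 2..12:
evens = [2, 4, 6, 8, 10, 12]
```

For example, `nearest(evens, '<=', 7)` finds `6`, while `nearest(evens, '>', 12)` has no fitting outcome and returns `None`.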
```python
def zero(self: C) -> C:
    """Zeros all outcomes of this population.

    This is done by multiplying all outcomes by `0`.

    The result will have the same denominator.

    Raises:
        ValueError: If the zeros did not resolve to a single outcome.
    """
    result = self._unary_operator(Population._zero)
    if len(result) != 1:
        raise ValueError('zero() did not resolve to a single outcome.')
    return result
```
Zeros all outcomes of this population.
This is done by multiplying all outcomes by `0`.
The result will have the same denominator.
Raises:
- ValueError: If the zeros did not resolve to a single outcome.
```python
def zero_outcome(self) -> T_co:
    """A zero-outcome for this population.

    E.g. `0` for a `Population` whose outcomes are `int`s.
    """
    return self.zero().outcomes()[0]
```
A zero-outcome for this population.
E.g. `0` for a `Population` whose outcomes are `int`s.
```python
def quantity(self,
             comparison: Literal['==', '!=', '<=', '<', '>=', '>']
             | Hashable,
             outcome: Hashable | None = None,
             /) -> int:
    """The quantity of a single outcome.

    A comparison can be provided, in which case this returns the total
    quantity fitting the comparison.

    Args:
        comparison: The comparison to use. This can be omitted, in which
            case it is treated as '=='.
        outcome: The outcome to query.
    """
    if outcome is None:
        outcome = comparison
        comparison = '=='
    else:
        comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                          comparison)

    match comparison:
        case '==':
            return self.get(outcome, 0)
        case '!=':
            return self.denominator() - self.get(outcome, 0)
        case '<=' | '<':
            threshold = self.nearest(comparison, outcome)
            if threshold is None:
                return 0
            else:
                return self._cumulative_quantities[threshold]
        case '>=':
            return self.denominator() - self.quantity('<', outcome)
        case '>':
            return self.denominator() - self.quantity('<=', outcome)
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
```
The quantity of a single outcome.
A comparison can be provided, in which case this returns the total quantity fitting the comparison.
Arguments:
- comparison: The comparison to use. This can be omitted, in which case it is treated as '=='.
- outcome: The outcome to query.
```python
def quantities(self,
               comparison: Literal['==', '!=', '<=', '<', '>=', '>']
               | None = None,
               /) -> CountsValuesView | Sequence[int]:
    """The quantities of the mapping in sorted order.

    For example, '<=' gives the CDF.

    Args:
        comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
            May be omitted, in which case equality `'=='` is used.
    """
    if comparison is None:
        comparison = '=='

    match comparison:
        case '==':
            return self.values()
        case '<=':
            return tuple(itertools.accumulate(self.values()))
        case '>=':
            return tuple(
                itertools.accumulate(self.values()[:-1],
                                     operator.sub,
                                     initial=self.denominator()))
        case '!=':
            return tuple(self.denominator() - q for q in self.values())
        case '<':
            return tuple(self.denominator() - q
                         for q in self.quantities('>='))
        case '>':
            return tuple(self.denominator() - q
                         for q in self.quantities('<='))
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
```
The quantities of the mapping in sorted order.
For example, '<=' gives the CDF.
Arguments:
- comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. May be omitted, in which case equality `'=='` is used.
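The cumulative variants are running sums over the per-outcome quantities, exactly as in the source above: `'<='` accumulates forward, while `'>='` subtracts forward from the denominator. A small stdlib demonstration (the weighted die here is an illustrative example, not from icepool):

```python
import itertools
import operator

# Per-outcome quantities for a d6 whose 6-face has weight 2; denominator 7.
values = (1, 1, 1, 1, 1, 2)
denominator = sum(values)

# quantities('<='): the CDF as running totals.
cdf = tuple(itertools.accumulate(values))

# quantities('>='): start at the denominator and subtract each quantity
# except the last, mirroring the accumulate(..., operator.sub, initial=...) call.
ccdf = tuple(itertools.accumulate(values[:-1],
                                  operator.sub,
                                  initial=denominator))
```

Here `cdf` is `(1, 2, 3, 4, 5, 7)` and `ccdf` is `(7, 6, 5, 4, 3, 2)`: every outcome is `>=` the minimum, and only the weighted 6 (quantity 2) is `>=` the maximum.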
```python
def denominator(self) -> int:
    """The sum of all quantities (e.g. weights or duplicates).

    For the number of unique outcomes, use `len()`.
    """
    return self._denominator
```
The sum of all quantities (e.g. weights or duplicates).
For the number of unique outcomes, use `len()`.
```python
def multiply_quantities(self: C, scale: int, /) -> C:
    """Multiplies all quantities by an integer."""
    if scale == 1:
        return self
    data = {
        outcome: quantity * scale
        for outcome, quantity in self.items()
    }
    return self._new_type(data)
```
Multiplies all quantities by an integer.
```python
def divide_quantities(self: C, divisor: int, /) -> C:
    """Divides all quantities by an integer, rounding down.

    Resulting zero quantities are dropped.
    """
    if divisor == 0:
        return self
    data = {
        outcome: quantity // divisor
        for outcome, quantity in self.items() if quantity >= divisor
    }
    return self._new_type(data)
```
Divides all quantities by an integer, rounding down.
Resulting zero quantities are dropped.
```python
def modulo_quantities(self: C, divisor: int, /) -> C:
    """Modulus of all quantities with an integer."""
    data = {
        outcome: quantity % divisor
        for outcome, quantity in self.items()
    }
    return self._new_type(data)
```
Modulus of all quantities with an integer.
```python
def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C:
    """Changes the denominator to a target number by changing the quantity of a specified outcome.

    Args:
        `target`: The denominator of the result.
        `outcome`: The outcome whose quantity will be adjusted.

    Returns:
        A `Population` like `self` but with the quantity of `outcome`
        adjusted so that the overall denominator is equal to `target`.
        If the denominator is reduced to zero, it will be removed.

    Raises:
        `ValueError` if this would require the quantity of the specified
        outcome to be negative.
    """
    adjustment = target - self.denominator()
    data = {outcome: quantity for outcome, quantity in self.items()}
    new_quantity = data.get(outcome, 0) + adjustment
    if new_quantity > 0:
        data[outcome] = new_quantity
    elif new_quantity == 0:
        del data[outcome]
    else:
        raise ValueError(
            f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}'
        )
    return self._new_type(data)
```
Changes the denominator to a target number by changing the quantity of a specified outcome.
Arguments:
- `target`: The denominator of the result.
- `outcome`: The outcome whose quantity will be adjusted.

Returns:
A `Population` like `self` but with the quantity of `outcome` adjusted so that the overall denominator is equal to `target`. If the denominator is reduced to zero, it will be removed.

Raises:
- `ValueError` if this would require the quantity of the specified outcome to be negative.
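The adjustment is simply `target` minus the current denominator, applied to the chosen outcome's quantity. A stdlib sketch of the same bookkeeping over a plain quantity mapping (the free function here is illustrative, not icepool's API):

```python
def pad_to_denominator(data: dict, target: int, outcome):
    """Copy of data with outcome's quantity adjusted so quantities sum to target."""
    adjustment = target - sum(data.values())
    result = dict(data)
    new_quantity = result.get(outcome, 0) + adjustment
    if new_quantity > 0:
        result[outcome] = new_quantity
    elif new_quantity == 0:
        result.pop(outcome, None)
    else:
        raise ValueError('would require a negative quantity')
    return result

# Restore the missing 4-face of a d4 so the denominator becomes 4:
padded = pad_to_denominator({1: 1, 2: 1, 3: 1}, 4, 4)
```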
```python
def multiply_to_denominator(self: C, denominator: int, /) -> C:
    """Multiplies all quantities to reach the target denominator.

    Raises:
        ValueError if this cannot be achieved using an integer scaling.
    """
    if denominator % self.denominator():
        raise ValueError(
            'Target denominator is not an integer factor of the current denominator.'
        )
    return self.multiply_quantities(denominator // self.denominator())
```
Multiplies all quantities to reach the target denominator.
Raises:
- ValueError if this cannot be achieved using an integer scaling.
```python
def append(self: C, outcome, quantity: int = 1, /) -> C:
    """This population with an outcome appended.

    Args:
        outcome: The outcome to append.
        quantity: The quantity of the outcome to append. Can be negative,
            which removes quantity (but not below zero).
    """
    data = Counter(self)
    data[outcome] = max(data[outcome] + quantity, 0)
    return self._new_type(data)
```
This population with an outcome appended.
Arguments:
- outcome: The outcome to append.
- quantity: The quantity of the outcome to append. Can be negative, which removes quantity (but not below zero).
```python
def remove(self: C, outcome, quantity: int | None = None, /) -> C:
    """This population with an outcome removed.

    Args:
        outcome: The outcome to remove.
        quantity: The quantity of the outcome to remove. If not set, all
            quantity of that outcome is removed. Can be negative, which adds
            quantity instead.
    """
    if quantity is None:
        data = Counter(self)
        data[outcome] = 0
        return self._new_type(data)
    else:
        return self.append(outcome, -quantity)
```
This population with an outcome removed.
Arguments:
- outcome: The outcome to remove.
- quantity: The quantity of the outcome to remove. If not set, all quantity of that outcome is removed. Can be negative, which adds quantity instead.
```python
def probability(self,
                comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                | Hashable,
                outcome: Hashable | None = None,
                /,
                *,
                percent: bool = False) -> Fraction | float:
    """The total probability of outcomes fitting a comparison.

    Args:
        comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
            May be omitted, in which case equality `'=='` is used.
        outcome: The outcome to compare to.
        percent: If set, the result will be a percentage expressed as a
            `float`. Otherwise, the result is a `Fraction`.
    """
    if outcome is None:
        outcome = comparison
        comparison = '=='
    else:
        comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                          comparison)
    result = Fraction(self.quantity(comparison, outcome),
                      self.denominator())
    return result * 100.0 if percent else result
```
The total probability of outcomes fitting a comparison.
Arguments:
- comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. May be omitted, in which case equality `'=='` is used.
- outcome: The outcome to compare to.
- percent: If set, the result will be a percentage expressed as a `float`. Otherwise, the result is a `Fraction`.
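As the source shows, the result is just the matching quantity over the denominator as an exact `Fraction`. A stdlib sketch of that idea for a plain quantity mapping (the standalone `probability` function is illustrative, not icepool's API):

```python
import operator
from fractions import Fraction

# Quantities for a fair d6: each face has quantity 1, denominator 6.
quantities = {outcome: 1 for outcome in range(1, 7)}
denominator = sum(quantities.values())

def probability(comparison: str, outcome) -> Fraction:
    """Exact total probability of outcomes fitting the comparison."""
    ops = {'==': operator.eq, '!=': operator.ne,
           '<=': operator.le, '<': operator.lt,
           '>=': operator.ge, '>': operator.gt}
    quantity = sum(q for o, q in quantities.items()
                   if ops[comparison](o, outcome))
    return Fraction(quantity, denominator)
```

For example, `probability('<=', 2)` is exactly `Fraction(1, 3)`, with no floating-point rounding.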
```python
def probabilities(
        self,
        comparison: Literal['==', '!=', '<=', '<', '>=', '>']
        | None = None,
        /,
        *,
        percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
    """The total probabilities fitting the comparison for each outcome in sorted order.

    For example, '<=' gives the CDF.

    Args:
        comparison: One of `'==', '!=', '<=', '<', '>=', '>'`.
            May be omitted, in which case equality `'=='` is used.
        percent: If set, the result will be a percentage expressed as a
            `float`. Otherwise, the result is a `Fraction`.
    """
    if comparison is None:
        comparison = '=='

    result = tuple(
        Fraction(q, self.denominator())
        for q in self.quantities(comparison))

    if percent:
        return tuple(100.0 * x for x in result)
    else:
        return result
```
The total probabilities fitting the comparison for each outcome in sorted order.
For example, '<=' gives the CDF.
Arguments:
- comparison: One of `'==', '!=', '<=', '<', '>=', '>'`. May be omitted, in which case equality `'=='` is used.
- percent: If set, the result will be a percentage expressed as a `float`. Otherwise, the result is a `Fraction`.
```python
def mode(self) -> tuple:
    """A tuple containing the most common outcome(s) of the population.

    These are sorted from lowest to highest.
    """
    return tuple(outcome for outcome, quantity in self.items()
                 if quantity == self.modal_quantity())
```
A tuple containing the most common outcome(s) of the population.
These are sorted from lowest to highest.
```python
def modal_quantity(self) -> int:
    """The highest quantity of any single outcome."""
    return max(self.quantities())
```
The highest quantity of any single outcome.
```python
def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
    """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs."""
    outcomes = icepool.sorted_union(self, other)
    return max(
        abs(
            self.probability('<=', outcome) -
            other.probability('<=', outcome)) for outcome in outcomes)
```
Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs.
```python
def cramer_von_mises(self, other: 'Population') -> Fraction:
    """Cramér-von Mises statistic. The sum-of-squares difference between CDFs."""
    outcomes = icepool.sorted_union(self, other)
    return sum(((self.probability('<=', outcome) -
                 other.probability('<=', outcome))**2
                for outcome in outcomes),
               start=Fraction(0, 1))
```
Cramér-von Mises statistic. The sum-of-squares difference between CDFs.
```python
def median(self):
    """The median, taking the mean in case of a tie.

    This will fail if the outcomes do not support division;
    in this case, use `median_low` or `median_high` instead.
    """
    return self.quantile(1, 2)
```
The median, taking the mean in case of a tie.
This will fail if the outcomes do not support division; in this case, use `median_low` or `median_high` instead.
```python
def median_low(self) -> T_co:
    """The median, taking the lower in case of a tie."""
    return self.quantile_low(1, 2)
```
The median, taking the lower in case of a tie.
```python
def median_high(self) -> T_co:
    """The median, taking the higher in case of a tie."""
    return self.quantile_high(1, 2)
```
The median, taking the higher in case of a tie.
```python
def quantile(self, n: int, d: int = 100):
    """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.

    This will fail if the outcomes do not support addition and division;
    in this case, use `quantile_low` or `quantile_high` instead.
    """
    # Should support addition and division.
    return (self.quantile_low(n, d) +
            self.quantile_high(n, d)) / 2  # type: ignore
```
The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.
This will fail if the outcomes do not support addition and division; in this case, use `quantile_low` or `quantile_high` instead.
```python
def quantile_low(self, n: int, d: int = 100) -> T_co:
    """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
    index = bisect.bisect_left(self.quantities('<='),
                               (n * self.denominator() + d - 1) // d)
    if index >= len(self):
        return self.max_outcome()
    return self.outcomes()[index]
```
The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.
```python
def quantile_high(self, n: int, d: int = 100) -> T_co:
    """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
    index = bisect.bisect_right(self.quantities('<='),
                                n * self.denominator() // d)
    if index >= len(self):
        return self.max_outcome()
    return self.outcomes()[index]
```
The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.
```python
def variance(
    self: 'Population[numbers.Rational] | Population[float]'
) -> Fraction | float:
    """This is the population variance, not the sample variance."""
    mean = self.mean()
    mean_of_squares = try_fraction(
        sum(quantity * outcome**2 for outcome, quantity in self.items()),
        self.denominator())
    return mean_of_squares - mean * mean
```
This is the population variance, not the sample variance.
```python
def standardized_moment(
        self: 'Population[numbers.Rational] | Population[float]',
        k: int) -> float:
    sd = self.standard_deviation()
    mean = self.mean()
    ev = sum(p * (outcome - mean)**k  # type: ignore
             for outcome, p in zip(self.outcomes(), self.probabilities()))
    return ev / (sd**k)
```
```python
def entropy(self, base: float = 2.0) -> float:
    """The entropy of a random sample from this population.

    Args:
        base: The logarithm base to use. Default is 2.0, which gives the
            entropy in bits.
    """
    return -sum(p * math.log(p, base)
                for p in self.probabilities() if p > 0.0)
```
The entropy of a random sample from this population.
Arguments:
- base: The logarithm base to use. Default is 2.0, which gives the entropy in bits.
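This is the usual Shannon entropy, `-Σ p·log(p)`, skipping zero-probability terms. A self-contained stdlib sketch (the free `entropy` function mirrors the formula but is not icepool's API):

```python
import math
from fractions import Fraction

def entropy(probabilities, base: float = 2.0) -> float:
    """Shannon entropy of a distribution given as a sequence of probabilities."""
    return -sum(p * math.log(p, base) for p in probabilities if p > 0.0)

# A fair d8 has 8 equally likely outcomes, i.e. 3 bits of entropy.
d8 = [Fraction(1, 8)] * 8
```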
```python
@property
def marginals(self: C) -> _Marginals[C]:
    """A property that applies the `[]` operator to outcomes.

    For example, `population.marginals[:2]` will marginalize the first two
    elements of sequence outcomes.

    Attributes that do not start with an underscore will also be forwarded.
    For example, `population.marginals.x` will marginalize the `x` attribute
    from e.g. `namedtuple` outcomes.
    """
    return Population._Marginals(self)
```
A property that applies the `[]` operator to outcomes.

For example, `population.marginals[:2]` will marginalize the first two elements of sequence outcomes.

Attributes that do not start with an underscore will also be forwarded. For example, `population.marginals.x` will marginalize the `x` attribute from e.g. `namedtuple` outcomes.
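Marginalizing means indexing each tuple outcome and summing the quantities of outcomes that collapse to the same value. A stdlib sketch of that operation over a plain quantity mapping (the `marginal` helper and the joint distribution are illustrative):

```python
from collections import defaultdict

def marginal(data: dict, dims):
    """Quantities of outcome[dims], summed over the other dimensions.

    data maps tuple outcomes to int quantities; dims is an index or slice.
    """
    result: dict = defaultdict(int)
    for outcome, quantity in data.items():
        result[outcome[dims]] += quantity
    return dict(result)

# Joint outcomes of (coin flip, d2): four equally likely pairs.
joint = {(0, 1): 1, (0, 2): 1, (1, 1): 1, (1, 2): 1}
```

`marginal(joint, 0)` recovers the coin's distribution `{0: 2, 1: 2}`; the total quantity (denominator) is preserved.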
```python
def covariance(
        self:
    'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]',
        i: int, j: int) -> Fraction | float:
    mean_i = self.marginals[i].mean()
    mean_j = self.marginals[j].mean()
    return try_fraction(
        sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity
            for outcome, quantity in self.items()), self.denominator())
```
```python
def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C:
    """Converts the outcomes of this population to a one-hot representation.

    Args:
        outcomes: If provided, each outcome will be mapped to a `Vector`
            where the element at `outcomes.index(outcome)` is set to `True`
            and the rest to `False`, or all `False` if the outcome is not
            in `outcomes`.
            If not provided, `self.outcomes()` is used.
    """
    if outcomes is None:
        outcomes = self.outcomes()

    data: MutableMapping[Vector[bool], int] = defaultdict(int)
    for outcome, quantity in zip(self.outcomes(), self.quantities()):
        value = [False] * len(outcomes)
        if outcome in outcomes:
            value[outcomes.index(outcome)] = True
        data[Vector(value)] += quantity
    return self._new_type(data)
```
Converts the outcomes of this population to a one-hot representation.
Arguments:
- outcomes: If provided, each outcome will be mapped to a `Vector` where the element at `outcomes.index(outcome)` is set to `True` and the rest to `False`, or all `False` if the outcome is not in `outcomes`. If not provided, `self.outcomes()` is used.
783 def split(self, 784 which: Callable[..., bool] | Collection[T_co] | None = None, 785 /, 786 *, 787 star: bool | None = None) -> tuple[C, C]: 788 """Splits this population into one containing selected items and another containing the rest. 789 790 The sum of the denominators of the results is equal to the denominator 791 of this population. 792 793 If you want to split more than two ways, use `Population.group_by()`. 794 795 Args: 796 which: Selects which outcomes to select. Options: 797 * A callable that takes an outcome and returns `True` if it 798 should be selected. 799 * A collection of outcomes to select. 800 star: Whether outcomes should be unpacked into separate arguments 801 before sending them to a callable `which`. 802 If not provided, this will be guessed based on the function 803 signature. 804 805 Returns: 806 A population consisting of the outcomes that were selected by 807 `which`, and a population consisting of the unselected outcomes. 808 """ 809 if which is None: 810 outcome_set = {self.min_outcome()} 811 else: 812 outcome_set = self._select_outcomes(which, star) 813 814 selected = {} 815 not_selected = {} 816 for outcome, count in self.items(): 817 if outcome in outcome_set: 818 selected[outcome] = count 819 else: 820 not_selected[outcome] = count 821 822 return self._new_type(selected), self._new_type(not_selected)
Splits this population into one containing selected items and another containing the rest.

The sum of the denominators of the results is equal to the denominator of this population.

If you want to split more than two ways, use `Population.group_by()`.

Arguments:
- which: Selects which outcomes to select. Options:
  - A callable that takes an outcome and returns `True` if it should be selected.
  - A collection of outcomes to select.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `which`. If not provided, this will be guessed based on the function signature.

Returns:
A population consisting of the outcomes that were selected by `which`, and a population consisting of the unselected outcomes.
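The partition can be sketched over a plain quantity mapping; a minimal illustration of the behavior described above, not icepool itself:

```python
def split(population, which):
    """Partition a quantity mapping by a predicate.
    The two results' denominators sum to the original denominator."""
    selected, not_selected = {}, {}
    for outcome, quantity in population.items():
        (selected if which(outcome) else not_selected)[outcome] = quantity
    return selected, not_selected

d6 = {n: 1 for n in range(1, 7)}
high, low = split(d6, lambda x: x >= 5)
# high == {5: 1, 6: 1}, low == {1: 1, 2: 1, 3: 1, 4: 1}
```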
868 @property 869 def group_by(self: C) -> _GroupBy[C]: 870 """A method-like property that splits this population into sub-populations based on a key function. 871 872 The sum of the denominators of the results is equal to the denominator 873 of this population. 874 875 This can be useful when using the law of total probability. 876 877 Example: `d10.group_by(lambda x: x % 3)` is 878 ```python 879 { 880 0: Die([3, 6, 9]), 881 1: Die([1, 4, 7, 10]), 882 2: Die([2, 5, 8]), 883 } 884 ``` 885 886 You can also use brackets to group by indexes or slices; or attributes 887 to group by those. Example: 888 889 ```python 890 Die([ 891 'aardvark', 892 'alligator', 893 'asp', 894 'blowfish', 895 'cat', 896 'crocodile', 897 ]).group_by[0] 898 ``` 899 900 produces 901 902 ```python 903 { 904 'a': Die(['aardvark', 'alligator', 'asp']), 905 'b': Die(['blowfish']), 906 'c': Die(['cat', 'crocodile']), 907 } 908 ``` 909 910 Args: 911 key_map: A function or mapping that takes outcomes and produces the 912 key of the corresponding outcome in the result. If this is 913 a Mapping, outcomes not in the mapping are their own key. 914 star: Whether outcomes should be unpacked into separate arguments 915 before sending them to a callable `key_map`. 916 If not provided, this will be guessed based on the function 917 signature. 918 """ 919 return Population._GroupBy(self)
A method-like property that splits this population into sub-populations based on a key function.

The sum of the denominators of the results is equal to the denominator of this population.

This can be useful when using the law of total probability.

Example: `d10.group_by(lambda x: x % 3)` is

    {
        0: Die([3, 6, 9]),
        1: Die([1, 4, 7, 10]),
        2: Die([2, 5, 8]),
    }

You can also use brackets to group by indexes or slices; or attributes to group by those. Example:

    Die([
        'aardvark',
        'alligator',
        'asp',
        'blowfish',
        'cat',
        'crocodile',
    ]).group_by[0]

produces

    {
        'a': Die(['aardvark', 'alligator', 'asp']),
        'b': Die(['blowfish']),
        'c': Die(['cat', 'crocodile']),
    }

Arguments:
- key_map: A function or mapping that takes outcomes and produces the key of the corresponding outcome in the result. If this is a Mapping, outcomes not in the mapping are their own key.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `key_map`. If not provided, this will be guessed based on the function signature.
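The `d10` grouping example above can be reproduced with plain quantity mappings; a conceptual sketch, not icepool's implementation:

```python
from collections import defaultdict

def group_by(population, key):
    """Split a quantity mapping into sub-mappings keyed by key(outcome)."""
    groups = defaultdict(dict)
    for outcome, quantity in population.items():
        groups[key(outcome)][outcome] = quantity
    return dict(groups)

d10 = {n: 1 for n in range(1, 11)}
by_mod = group_by(d10, lambda x: x % 3)
# by_mod[0] holds 3, 6, 9; by_mod[1] holds 1, 4, 7, 10; by_mod[2] holds 2, 5, 8
```

As stated above, the denominators of the groups sum to the original denominator, which is what makes this useful for the law of total probability.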
921 def sample(self) -> T_co: 922 """A single random sample from this population. 923 924 Note that this is always "with replacement" even for `Deck` since 925 instances are immutable. 926 927 This uses the standard `random` package and is not cryptographically 928 secure. 929 """ 930 # We don't use random.choices since that is based on floats rather than ints. 931 r = random.randrange(self.denominator()) 932 index = bisect.bisect_right(self.quantities('<='), r) 933 return self.outcomes()[index]
A single random sample from this population.

Note that this is always "with replacement" even for `Deck` since instances are immutable.

This uses the standard `random` package and is not cryptographically secure.
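The source above avoids `random.choices` in favor of integer arithmetic (`randrange` plus a bisect over cumulative quantities). A self-contained sketch of that quantized approach over a plain quantity mapping:

```python
import bisect
import random

def sample(population):
    """Draw one outcome with probability proportional to its quantity,
    using only integer arithmetic (no float weights)."""
    outcomes = list(population)
    cumulative = []
    total = 0
    for quantity in population.values():
        total += quantity
        cumulative.append(total)
    r = random.randrange(total)  # uniform integer in [0, denominator)
    return outcomes[bisect.bisect_right(cumulative, r)]

weighted = {'a': 1, 'b': 3}
print(sample(weighted))  # 'b' about three times as often as 'a'
```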
935 def format(self, format_spec: str, /, **kwargs) -> str: 936 """Formats this mapping as a string. 937 938 `format_spec` should start with the output format, 939 which can be: 940 * `md` for Markdown (default) 941 * `bbcode` for BBCode 942 * `csv` for comma-separated values 943 * `html` for HTML 944 945 After this, you may optionally add a `:` followed by a series of 946 requested columns. Allowed columns are: 947 948 * `o`: Outcomes. 949 * `*o`: Outcomes, unpacked if applicable. 950 * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome. 951 * `p==`, `p<=`, `p>=`: Probabilities (0-1). 952 * `%==`, `%<=`, `%>=`: Probabilities (0%-100%). 953 * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N". 954 955 Columns may optionally be separated using `|` characters. 956 957 The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the 958 columns are the outcomes (unpacked if applicable) the quantities, and 959 the probabilities. The quantities are omitted from the default columns 960 if any individual quantity is 10**30 or greater. 
961 """ 962 if not self.is_empty() and self.modal_quantity() < 10**30: 963 default_column_spec = '*oq==%==' 964 else: 965 default_column_spec = '*o%==' 966 if len(format_spec) == 0: 967 format_spec = 'md:' + default_column_spec 968 969 format_spec = format_spec.replace('|', '') 970 971 parts = format_spec.split(':') 972 973 if len(parts) == 1: 974 output_format = parts[0] 975 col_spec = default_column_spec 976 elif len(parts) == 2: 977 output_format = parts[0] 978 col_spec = parts[1] 979 else: 980 raise ValueError('format_spec has too many colons.') 981 982 match output_format: 983 case 'md': 984 return icepool.population.format.markdown(self, col_spec) 985 case 'bbcode': 986 return icepool.population.format.bbcode(self, col_spec) 987 case 'csv': 988 return icepool.population.format.csv(self, col_spec, **kwargs) 989 case 'html': 990 return icepool.population.format.html(self, col_spec) 991 case _: 992 raise ValueError( 993 f"Unsupported output format '{output_format}'")
Formats this mapping as a string.

`format_spec` should start with the output format, which can be:
- `md` for Markdown (default)
- `bbcode` for BBCode
- `csv` for comma-separated values
- `html` for HTML

After this, you may optionally add a `:` followed by a series of requested columns. Allowed columns are:
- `o`: Outcomes.
- `*o`: Outcomes, unpacked if applicable.
- `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
- `p==`, `p<=`, `p>=`: Probabilities (0-1).
- `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
- `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".

Columns may optionally be separated using `|` characters.

The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the columns are the outcomes (unpacked if applicable), the quantities, and the probabilities. The quantities are omitted from the default columns if any individual quantity is 10**30 or greater.
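The spec handling can be sketched as a small parser mirroring the logic in the source above (strip `|` separators, then split on `:` into output format and column spec); a simplified standalone version:

```python
def parse_format_spec(format_spec, default_columns='*oq==%=='):
    """Split a spec like 'md:*o|q==|%==' into (output_format, column_spec)."""
    if not format_spec:
        format_spec = 'md:' + default_columns
    format_spec = format_spec.replace('|', '')  # separators are optional
    parts = format_spec.split(':')
    if len(parts) == 1:
        return parts[0], default_columns
    if len(parts) == 2:
        return parts[0], parts[1]
    raise ValueError('format_spec has too many colons.')

print(parse_format_spec('md:*o|q==|%=='))  # ('md', '*oq==%==')
```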
88def tupleize( 89 *args: 'T | icepool.Population[T]' 90) -> 'tuple[T, ...] | icepool.Population[tuple[T, ...]]': 91 """Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof. 92 93 For example: 94 * `tupleize(1, 2)` would produce `(1, 2)`. 95 * `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`, 96 ... `(6, 0)`. 97 * `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`, 98 ... `(6, 5)`, `(6, 6)`. 99 100 If `Population`s are provided, they must all be `Die` or all `Deck` and not 101 a mixture of the two. 102 103 Returns: 104 If none of the outcomes is a `Population`, the result is a `tuple` 105 with one element per argument. Otherwise, the result is a `Population` 106 of the same type as the input `Population`, and the outcomes are 107 `tuple`s with one element per argument. 108 """ 109 return cartesian_product(*args, outcome_type=tuple)
Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.

For example:
- `tupleize(1, 2)` would produce `(1, 2)`.
- `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`, ... `(6, 0)`.
- `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`, ... `(6, 5)`, `(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not a mixture of the two.

Returns:
If none of the outcomes is a `Population`, the result is a `tuple` with one element per argument. Otherwise, the result is a `Population` of the same type as the input `Population`, and the outcomes are `tuple`s with one element per argument.
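The Cartesian-product behavior can be sketched with plain quantity mappings (unlike the real `tupleize`, this sketch returns a mapping even when no argument is a population; it is an illustration of the quantity arithmetic only):

```python
from itertools import product

def tupleize_sketch(*args):
    """Cartesian product of plain values and quantity mappings,
    with tuple outcomes and multiplied quantities."""
    # Treat a non-mapping argument as a single certain outcome.
    populations = [a if isinstance(a, dict) else {a: 1} for a in args]
    result = {}
    for combo in product(*(p.items() for p in populations)):
        outcome = tuple(o for o, _ in combo)
        quantity = 1
        for _, q in combo:
            quantity *= q
        result[outcome] = result.get(outcome, 0) + quantity
    return result

d2 = {1: 1, 2: 1}
print(tupleize_sketch(d2, 0))  # {(1, 0): 1, (2, 0): 1}
```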
112def vectorize( 113 *args: 'T | icepool.Population[T]' 114) -> 'icepool.Vector[T] | icepool.Population[icepool.Vector[T]]': 115 """Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof. 116 117 For example: 118 * `vectorize(1, 2)` would produce `Vector(1, 2)`. 119 * `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`, 120 `Vector(2, 0)`, ... `Vector(6, 0)`. 121 * `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`, 122 `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`. 123 124 If `Population`s are provided, they must all be `Die` or all `Deck` and not 125 a mixture of the two. 126 127 Returns: 128 If none of the outcomes is a `Population`, the result is a `Vector` 129 with one element per argument. Otherwise, the result is a `Population` 130 of the same type as the input `Population`, and the outcomes are 131 `Vector`s with one element per argument. 132 """ 133 return cartesian_product(*args, outcome_type=icepool.Vector)
Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.

For example:
- `vectorize(1, 2)` would produce `Vector(1, 2)`.
- `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`, `Vector(2, 0)`, ... `Vector(6, 0)`.
- `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`, `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not a mixture of the two.

Returns:
If none of the outcomes is a `Population`, the result is a `Vector` with one element per argument. Otherwise, the result is a `Population` of the same type as the input `Population`, and the outcomes are `Vector`s with one element per argument.
125class Vector(Outcome, Sequence[T_co]): 126 """Immutable tuple-like class that applies most operators elementwise. 127 128 May become a variadic generic type in the future. 129 """ 130 __slots__ = ['_data', '_truth_value'] 131 132 _data: tuple[T_co, ...] 133 _truth_value: bool | None 134 135 def __init__(self, 136 elements: Iterable[T_co], 137 *, 138 truth_value: bool | None = None) -> None: 139 self._data = tuple(elements) 140 self._truth_value = truth_value 141 142 def __hash__(self) -> int: 143 return hash((Vector, self._data)) 144 145 def __len__(self) -> int: 146 return len(self._data) 147 148 @overload 149 def __getitem__(self, index: int) -> T_co: 150 ... 151 152 @overload 153 def __getitem__(self, index: slice) -> 'Vector[T_co]': 154 ... 155 156 def __getitem__(self, index: int | slice) -> 'T_co | Vector[T_co]': 157 if isinstance(index, int): 158 return self._data[index] 159 else: 160 return Vector(self._data[index]) 161 162 def __iter__(self) -> Iterator[T_co]: 163 return iter(self._data) 164 165 # Unary operators. 166 167 def unary_operator(self, op: Callable[..., U], *args, 168 **kwargs) -> 'Vector[U]': 169 """Unary operators on `Vector` are applied elementwise. 
170 171 This is used for the standard unary operators 172 `-, +, abs, ~, round, trunc, floor, ceil` 173 """ 174 return Vector(op(x, *args, **kwargs) for x in self) 175 176 def __neg__(self) -> 'Vector[T_co]': 177 return self.unary_operator(operator.neg) 178 179 def __pos__(self) -> 'Vector[T_co]': 180 return self.unary_operator(operator.pos) 181 182 def __invert__(self) -> 'Vector[T_co]': 183 return self.unary_operator(operator.invert) 184 185 def abs(self) -> 'Vector[T_co]': 186 return self.unary_operator(operator.abs) 187 188 __abs__ = abs 189 190 def round(self, ndigits: int | None = None) -> 'Vector': 191 return self.unary_operator(round, ndigits) 192 193 __round__ = round 194 195 def trunc(self) -> 'Vector': 196 return self.unary_operator(math.trunc) 197 198 __trunc__ = trunc 199 200 def floor(self) -> 'Vector': 201 return self.unary_operator(math.floor) 202 203 __floor__ = floor 204 205 def ceil(self) -> 'Vector': 206 return self.unary_operator(math.ceil) 207 208 __ceil__ = ceil 209 210 # Binary operators. 211 212 def binary_operator(self, 213 other, 214 op: Callable[..., U], 215 *args, 216 compare_for_truth: bool = False, 217 **kwargs) -> 'Vector[U]': 218 """Binary operators on `Vector` are applied elementwise. 219 220 If the other operand is also a `Vector`, the operator is applied to each 221 pair of elements from `self` and `other`. Both must have the same 222 length. 223 224 Otherwise the other operand is broadcast to each element of `self`. 225 226 This is used for the standard binary operators 227 `+, -, *, /, //, %, **, <<, >>, &, |, ^`. 228 229 `@` is not included due to its different meaning in `Die`. 230 231 This is also used for the comparators 232 `<, <=, >, >=, ==, !=`. 233 234 In this case, the result also has a truth value based on lexicographic 235 ordering. 
236 """ 237 if isinstance(other, Vector): 238 if len(self) == len(other): 239 if compare_for_truth: 240 truth_value = cast(bool, op(self._data, other._data)) 241 else: 242 truth_value = None 243 return Vector( 244 (op(x, y, *args, **kwargs) for x, y in zip(self, other)), 245 truth_value=truth_value) 246 else: 247 raise IndexError( 248 f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).' 249 ) 250 elif isinstance(other, (icepool.Population, icepool.AgainExpression)): 251 return NotImplemented # delegate to the other 252 else: 253 return Vector((op(x, other, *args, **kwargs) for x in self)) 254 255 def reverse_binary_operator(self, other, op: Callable[..., U], *args, 256 **kwargs) -> 'Vector[U]': 257 """Reverse version of `binary_operator()`.""" 258 if isinstance(other, Vector): 259 if len(self) == len(other): 260 return Vector( 261 op(y, x, *args, **kwargs) for x, y in zip(self, other)) 262 else: 263 raise IndexError( 264 f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).' 
265 ) 266 elif isinstance(other, (icepool.Population, icepool.AgainExpression)): 267 return NotImplemented # delegate to the other 268 else: 269 return Vector(op(other, x, *args, **kwargs) for x in self) 270 271 def __add__(self, other) -> 'Vector': 272 return self.binary_operator(other, operator.add) 273 274 def __radd__(self, other) -> 'Vector': 275 return self.reverse_binary_operator(other, operator.add) 276 277 def __sub__(self, other) -> 'Vector': 278 return self.binary_operator(other, operator.sub) 279 280 def __rsub__(self, other) -> 'Vector': 281 return self.reverse_binary_operator(other, operator.sub) 282 283 def __mul__(self, other) -> 'Vector': 284 return self.binary_operator(other, operator.mul) 285 286 def __rmul__(self, other) -> 'Vector': 287 return self.reverse_binary_operator(other, operator.mul) 288 289 def __truediv__(self, other) -> 'Vector': 290 return self.binary_operator(other, operator.truediv) 291 292 def __rtruediv__(self, other) -> 'Vector': 293 return self.reverse_binary_operator(other, operator.truediv) 294 295 def __floordiv__(self, other) -> 'Vector': 296 return self.binary_operator(other, operator.floordiv) 297 298 def __rfloordiv__(self, other) -> 'Vector': 299 return self.reverse_binary_operator(other, operator.floordiv) 300 301 def __pow__(self, other) -> 'Vector': 302 return self.binary_operator(other, operator.pow) 303 304 def __rpow__(self, other) -> 'Vector': 305 return self.reverse_binary_operator(other, operator.pow) 306 307 def __mod__(self, other) -> 'Vector': 308 return self.binary_operator(other, operator.mod) 309 310 def __rmod__(self, other) -> 'Vector': 311 return self.reverse_binary_operator(other, operator.mod) 312 313 def __lshift__(self, other) -> 'Vector': 314 return self.binary_operator(other, operator.lshift) 315 316 def __rlshift__(self, other) -> 'Vector': 317 return self.reverse_binary_operator(other, operator.lshift) 318 319 def __rshift__(self, other) -> 'Vector': 320 return self.binary_operator(other, 
operator.rshift) 321 322 def __rrshift__(self, other) -> 'Vector': 323 return self.reverse_binary_operator(other, operator.rshift) 324 325 def __and__(self, other) -> 'Vector': 326 return self.binary_operator(other, operator.and_) 327 328 def __rand__(self, other) -> 'Vector': 329 return self.reverse_binary_operator(other, operator.and_) 330 331 def __or__(self, other) -> 'Vector': 332 return self.binary_operator(other, operator.or_) 333 334 def __ror__(self, other) -> 'Vector': 335 return self.reverse_binary_operator(other, operator.or_) 336 337 def __xor__(self, other) -> 'Vector': 338 return self.binary_operator(other, operator.xor) 339 340 def __rxor__(self, other) -> 'Vector': 341 return self.reverse_binary_operator(other, operator.xor) 342 343 # Comparators. 344 # These returns a value with a truth value, but not a bool. 345 346 def __lt__(self, other) -> 'Vector': # type: ignore 347 if not isinstance(other, Vector): 348 return NotImplemented 349 return self.binary_operator(other, operator.lt, compare_for_truth=True) 350 351 def __le__(self, other) -> 'Vector': # type: ignore 352 if not isinstance(other, Vector): 353 return NotImplemented 354 return self.binary_operator(other, operator.le, compare_for_truth=True) 355 356 def __gt__(self, other) -> 'Vector': # type: ignore 357 if not isinstance(other, Vector): 358 return NotImplemented 359 return self.binary_operator(other, operator.gt, compare_for_truth=True) 360 361 def __ge__(self, other) -> 'Vector': # type: ignore 362 if not isinstance(other, Vector): 363 return NotImplemented 364 return self.binary_operator(other, operator.ge, compare_for_truth=True) 365 366 def __eq__(self, other) -> 'Vector | bool': # type: ignore 367 if not isinstance(other, Vector): 368 return False 369 return self.binary_operator(other, operator.eq, compare_for_truth=True) 370 371 def __ne__(self, other) -> 'Vector | bool': # type: ignore 372 if not isinstance(other, Vector): 373 return True 374 return self.binary_operator(other, 
operator.ne, compare_for_truth=True) 375 376 def __bool__(self) -> bool: 377 if self._truth_value is None: 378 raise TypeError( 379 'Vector only has a truth value if it is the result of a comparison operator.' 380 ) 381 return self._truth_value 382 383 # Sequence manipulation. 384 385 def append(self, other) -> 'Vector': 386 return Vector(self._data + (other, )) 387 388 def concatenate(self, other: 'Iterable') -> 'Vector': 389 return Vector(itertools.chain(self, other)) 390 391 # Strings. 392 393 def __repr__(self) -> str: 394 return type(self).__qualname__ + '(' + repr(self._data) + ')' 395 396 def __str__(self) -> str: 397 return type(self).__qualname__ + '(' + str(self._data) + ')'
Immutable tuple-like class that applies most operators elementwise.
May become a variadic generic type in the future.
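The elementwise semantics can be illustrated with a toy tuple subclass; `MiniVector` is hypothetical and much simpler than icepool's `Vector` (no truth values, only two operators), but it shows the elementwise and scalar-broadcast behavior:

```python
import operator

class MiniVector(tuple):
    """Toy illustration of elementwise arithmetic; not icepool's Vector."""

    def _elementwise(self, other, op):
        if isinstance(other, MiniVector):
            if len(self) != len(other):
                raise IndexError('lengths must match')
            return MiniVector(op(x, y) for x, y in zip(self, other))
        # Broadcast a non-vector operand to every element.
        return MiniVector(op(x, other) for x in self)

    def __add__(self, other):
        return self._elementwise(other, operator.add)

    def __mul__(self, other):
        return self._elementwise(other, operator.mul)

print(MiniVector((1, 2)) + MiniVector((10, 20)))  # (11, 22)
print(MiniVector((1, 2)) * 3)                     # (3, 6)
```

Note that overriding `__mul__` replaces `tuple`'s usual repetition semantics with elementwise multiplication, matching the documented behavior.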
167 def unary_operator(self, op: Callable[..., U], *args, 168 **kwargs) -> 'Vector[U]': 169 """Unary operators on `Vector` are applied elementwise. 170 171 This is used for the standard unary operators 172 `-, +, abs, ~, round, trunc, floor, ceil` 173 """ 174 return Vector(op(x, *args, **kwargs) for x in self)
Unary operators on `Vector` are applied elementwise.

This is used for the standard unary operators `-, +, abs, ~, round, trunc, floor, ceil`.
212 def binary_operator(self, 213 other, 214 op: Callable[..., U], 215 *args, 216 compare_for_truth: bool = False, 217 **kwargs) -> 'Vector[U]': 218 """Binary operators on `Vector` are applied elementwise. 219 220 If the other operand is also a `Vector`, the operator is applied to each 221 pair of elements from `self` and `other`. Both must have the same 222 length. 223 224 Otherwise the other operand is broadcast to each element of `self`. 225 226 This is used for the standard binary operators 227 `+, -, *, /, //, %, **, <<, >>, &, |, ^`. 228 229 `@` is not included due to its different meaning in `Die`. 230 231 This is also used for the comparators 232 `<, <=, >, >=, ==, !=`. 233 234 In this case, the result also has a truth value based on lexicographic 235 ordering. 236 """ 237 if isinstance(other, Vector): 238 if len(self) == len(other): 239 if compare_for_truth: 240 truth_value = cast(bool, op(self._data, other._data)) 241 else: 242 truth_value = None 243 return Vector( 244 (op(x, y, *args, **kwargs) for x, y in zip(self, other)), 245 truth_value=truth_value) 246 else: 247 raise IndexError( 248 f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).' 249 ) 250 elif isinstance(other, (icepool.Population, icepool.AgainExpression)): 251 return NotImplemented # delegate to the other 252 else: 253 return Vector((op(x, other, *args, **kwargs) for x in self))
Binary operators on `Vector` are applied elementwise.

If the other operand is also a `Vector`, the operator is applied to each pair of elements from `self` and `other`. Both must have the same length.

Otherwise the other operand is broadcast to each element of `self`.

This is used for the standard binary operators `+, -, *, /, //, %, **, <<, >>, &, |, ^`.

`@` is not included due to its different meaning in `Die`.

This is also used for the comparators `<, <=, >, >=, ==, !=`.

In this case, the result also has a truth value based on lexicographic ordering.
255 def reverse_binary_operator(self, other, op: Callable[..., U], *args, 256 **kwargs) -> 'Vector[U]': 257 """Reverse version of `binary_operator()`.""" 258 if isinstance(other, Vector): 259 if len(self) == len(other): 260 return Vector( 261 op(y, x, *args, **kwargs) for x, y in zip(self, other)) 262 else: 263 raise IndexError( 264 f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).' 265 ) 266 elif isinstance(other, (icepool.Population, icepool.AgainExpression)): 267 return NotImplemented # delegate to the other 268 else: 269 return Vector(op(other, x, *args, **kwargs) for x in self)
Reverse version of `binary_operator()`.
16class Symbols(Mapping[str, int]): 17 """EXPERIMENTAL: Immutable multiset of single characters. 18 19 Spaces, dashes, and underscores cannot be used as symbols. 20 21 Operations include: 22 23 | Operation | Count / notes | 24 |:----------------------------|:-----------------------------------| 25 | `additive_union`, `+` | `l + r` | 26 | `difference`, `-` | `l - r` | 27 | `intersection`, `&` | `min(l, r)` | 28 | `union`, `\\|` | `max(l, r)` | 29 | `symmetric_difference`, `^` | `abs(l - r)` | 30 | `multiply_counts`, `*` | `count * n` | 31 | `divide_counts`, `//` | `count // n` | 32 | `issubset`, `<=` | all counts l <= r | 33 | `issuperset`, `>=` | all counts l >= r | 34 | `==` | all counts l == r | 35 | `!=` | any count l != r | 36 | unary `+` | drop all negative counts | 37 | unary `-` | reverses the sign of all counts | 38 39 `<` and `>` are lexicographic orderings rather than subset relations. 40 Specifically, they compare the count of each character in alphabetical 41 order. For example: 42 * `'a' > ''` since one `'a'` is more than zero `'a'`s. 43 * `'a' > 'bb'` since `'a'` is compared first. 44 * `'-a' < 'bb'` since the left side has -1 `'a'`s. 45 * `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s. 46 47 Binary operators other than `*` and `//` implicitly convert the other 48 argument to `Symbols` using the constructor. 49 50 Subscripting with a single character returns the count of that character 51 as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`. 52 You can also access it as an attribute, e.g. `symbols.a`. 53 54 Subscripting with multiple characters returns a `Symbols` with only those 55 characters, dropping the rest. 56 E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`. 57 Again you can also access it as an attribute, e.g. `symbols.ab`. 58 This is useful for reducing the outcome space, which reduces computational 59 cost for further operations. 
If you want to keep only a single character 60 while keeping the type as `Symbols`, you can subscript with that character 61 plus an unused character. 62 63 Subscripting with duplicate characters currently has no further effect, but 64 this may change in the future. 65 66 `Population.marginals` forwards attribute access, so you can use e.g. 67 `die.marginals.a` to get the marginal distribution of `a`s. 68 69 Note that attribute access only works with valid identifiers, 70 so e.g. emojis would need to use the subscript method. 71 """ 72 _data: Mapping[str, int] 73 74 def __new__(cls, 75 symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols': 76 """Constructor. 77 78 The argument can be a string, an iterable of characters, or a mapping of 79 characters to counts. 80 81 If the argument is a string, negative symbols can be specified using a 82 minus sign optionally surrounded by whitespace. For example, 83 `a - b` has one positive a and one negative b. 84 """ 85 self = super(Symbols, cls).__new__(cls) 86 if isinstance(symbols, str): 87 data: MutableMapping[str, int] = defaultdict(int) 88 positive, *negative = re.split(r'\s*-\s*', symbols) 89 for s in positive: 90 data[s] += 1 91 if len(negative) > 1: 92 raise ValueError('Multiple dashes not allowed.') 93 if len(negative) == 1: 94 for s in negative[0]: 95 data[s] -= 1 96 elif isinstance(symbols, Mapping): 97 data = defaultdict(int, symbols) 98 else: 99 data = defaultdict(int) 100 for s in symbols: 101 data[s] += 1 102 103 for s in data: 104 if len(s) != 1: 105 raise ValueError(f'Symbol {s} is not a single character.') 106 if re.match(r'[\s_-]', s): 107 raise ValueError( 108 f'{s} (U+{ord(s):04X}) is not a legal symbol.') 109 110 self._data = defaultdict(int, 111 {k: data[k] 112 for k in sorted(data.keys())}) 113 114 return self 115 116 @classmethod 117 def _new_raw(cls, data: defaultdict[str, int]) -> 'Symbols': 118 self = super(Symbols, cls).__new__(cls) 119 self._data = data 120 return self 121 122 # Mapping 
interface. 123 124 def __getitem__(self, key: str) -> 'int | Symbols': # type: ignore 125 if len(key) == 1: 126 return self._data[key] 127 else: 128 return Symbols._new_raw( 129 defaultdict(int, {s: self._data[s] 130 for s in key})) 131 132 def __getattr__(self, key: str) -> 'int | Symbols': 133 if key[0] == '_': 134 raise AttributeError(key) 135 return self[key] 136 137 def __iter__(self) -> Iterator[str]: 138 return iter(self._data) 139 140 def __len__(self) -> int: 141 return len(self._data) 142 143 # Binary operators. 144 145 def additive_union(self, *args: 146 Iterable[str] | Mapping[str, int]) -> 'Symbols': 147 """The sum of counts of each symbol.""" 148 return functools.reduce(operator.add, args, initial=self) 149 150 def __add__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 151 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 152 return NotImplemented # delegate to the other 153 data = defaultdict(int, self._data) 154 for s, count in Symbols(other).items(): 155 data[s] += count 156 return Symbols._new_raw(data) 157 158 __radd__ = __add__ 159 160 def difference(self, *args: 161 Iterable[str] | Mapping[str, int]) -> 'Symbols': 162 """The difference between the counts of each symbol.""" 163 return functools.reduce(operator.sub, args, initial=self) 164 165 def __sub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 166 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 167 return NotImplemented # delegate to the other 168 data = defaultdict(int, self._data) 169 for s, count in Symbols(other).items(): 170 data[s] -= count 171 return Symbols._new_raw(data) 172 173 def __rsub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 174 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 175 return NotImplemented # delegate to the other 176 data = defaultdict(int, Symbols(other)._data) 177 for s, count in self.items(): 178 data[s] -= count 179 return Symbols._new_raw(data) 
    def intersection(self, *args:
                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The min count of each symbol."""
        return functools.reduce(operator.and_, args, self)

    def __and__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data: defaultdict[str, int] = defaultdict(int)
        for s, count in Symbols(other).items():
            data[s] = min(self.get(s, 0), count)
        return Symbols._new_raw(data)

    __rand__ = __and__

    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The max count of each symbol."""
        return functools.reduce(operator.or_, args, self)

    def __or__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = max(data[s], count)
        return Symbols._new_raw(data)

    __ror__ = __or__

    def symmetric_difference(
            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The absolute difference in symbol counts between the two sets."""
        return self ^ other

    def __xor__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = abs(data[s] - count)
        return Symbols._new_raw(data)

    __rxor__ = __xor__

    def multiply_counts(self, other: int) -> 'Symbols':
        """Multiplies all counts by an integer."""
        return self * other

    def __mul__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count * other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    __rmul__ = __mul__

    def divide_counts(self, other: int) -> 'Symbols':
        """Divides all counts by an integer, rounding down."""
        data = defaultdict(int, {
            s: count // other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def count_subset(self,
                     divisor: Iterable[str] | Mapping[str, int],
                     *,
                     empty_divisor: int | None = None) -> int:
        """The number of times the divisor is contained in this multiset."""
        if not isinstance(divisor, Mapping):
            divisor = Counter(divisor)
        result = None
        for s, count in divisor.items():
            current = self._data[s] // count
            if result is None or current < result:
                result = current
        if result is None:
            if empty_divisor is None:
                raise ZeroDivisionError('Divisor is empty.')
            else:
                return empty_divisor
        else:
            return result

    @overload
    def __floordiv__(self, other: int) -> 'Symbols':
        """Same as divide_counts()."""

    @overload
    def __floordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        """Same as count_subset()."""

    @overload
    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        ...

    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        if isinstance(other, int):
            return self.divide_counts(other)
        elif isinstance(other, Iterable):
            return self.count_subset(other)
        else:
            return NotImplemented

    def __rfloordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        return Symbols(other).count_subset(self)

    def modulo_counts(self, other: int) -> 'Symbols':
        return self % other

    def __mod__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count % other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def __lt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] < other[k]:  # type: ignore
                return True
            if self[k] > other[k]:  # type: ignore
                return False
        return False

    def __gt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] > other[k]:  # type: ignore
                return True
            if self[k] < other[k]:  # type: ignore
                return False
        return False

    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a subset of the other.

        Same as `<=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self <= other

    def __le__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] <= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a superset of the other.

        Same as `>=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self >= other

    def __ge__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] >= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` has no positive elements in common with the other.

        Raises:
            ValueError if either has negative elements.
        """
        other = Symbols(other)
        if self.has_negative_counts() or other.has_negative_counts():
            raise ValueError(
                'isdisjoint() is not defined for negative counts.')
        return not any(self[s] > 0 and other[s] > 0  # type: ignore
                       for s in self)

    def __eq__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except ValueError:
            return NotImplemented
        return all(self[s] == other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def __ne__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except ValueError:
            return NotImplemented
        return any(self[s] != other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    # Unary operators.

    def has_negative_counts(self) -> bool:
        """Whether any counts are negative."""
        return any(c < 0 for c in self.values())

    def __pos__(self) -> 'Symbols':
        data = defaultdict(int, {
            s: count
            for s, count in self.items() if count > 0
        })
        return Symbols._new_raw(data)

    def __neg__(self) -> 'Symbols':
        data = defaultdict(int, {s: -count for s, count in self.items()})
        return Symbols._new_raw(data)

    @cached_property
    def _hash(self) -> int:
        return hash((Symbols, str(self)))

    def __hash__(self) -> int:
        return self._hash

    def count(self) -> int:
        """The total number of elements."""
        return sum(self._data.values())

    @cached_property
    def _str(self) -> str:
        sorted_keys = sorted(self)
        positive = ''.join(s * self._data[s] for s in sorted_keys
                           if self._data[s] > 0)
        negative = ''.join(s * -self._data[s] for s in sorted_keys
                           if self._data[s] < 0)
        if positive:
            if negative:
                return positive + ' - ' + negative
            else:
                return positive
        else:
            if negative:
                return '-' + negative
            else:
                return ''

    def __str__(self) -> str:
        """All symbols in unary form (i.e. including duplicates) in ascending order.

        If there are negative elements, they are listed following a ` - ` sign.
        """
        return self._str

    def __repr__(self) -> str:
        return type(self).__qualname__ + f"('{str(self)}')"
EXPERIMENTAL: Immutable multiset of single characters.

Spaces, dashes, and underscores cannot be used as symbols.

Operations include:

| Operation | Count / notes |
|---|---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `issubset`, `<=` | all counts `l <= r` |
| `issuperset`, `>=` | all counts `l >= r` |
| `==` | all counts `l == r` |
| `!=` | any count `l != r` |
| unary `+` | drop all negative counts |
| unary `-` | reverses the sign of all counts |

`<` and `>` are lexicographic orderings rather than subset relations. Specifically, they compare the count of each character in alphabetical order. For example:

- `'a' > ''` since one `'a'` is more than zero `'a'`s.
- `'a' > 'bb'` since `'a'` is compared first.
- `'-a' < 'bb'` since the left side has -1 `'a'`s.
- `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.

Binary operators other than `*` and `//` implicitly convert the other argument to `Symbols` using the constructor.

Subscripting with a single character returns the count of that character as an `int`, e.g. `symbols['a']` -> number of `a`s as an `int`. You can also access it as an attribute, e.g. `symbols.a`.

Subscripting with multiple characters returns a `Symbols` with only those characters, dropping the rest, e.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`. Again you can also access it as an attribute, e.g. `symbols.ab`. This is useful for reducing the outcome space, which reduces computational cost for further operations. If you want to keep only a single character while keeping the type as `Symbols`, you can subscript with that character plus an unused character.

Subscripting with duplicate characters currently has no further effect, but this may change in the future.

`Population.marginals` forwards attribute access, so you can use e.g. `die.marginals.a` to get the marginal distribution of `a`s.

Note that attribute access only works with valid identifiers, so e.g. emojis would need to use the subscript method.
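The multiset operations in the table above can be sketched with the standard library's `collections.Counter`; this is an illustrative stand-in for `Symbols`, not the library class itself:

```python
from collections import Counter

# Two small multisets of characters.
left = Counter('aabc')   # {'a': 2, 'b': 1, 'c': 1}
right = Counter('abbd')  # {'a': 1, 'b': 2, 'd': 1}
keys = left.keys() | right.keys()

# intersection (&): min count of each symbol
intersection = {s: min(left[s], right[s]) for s in keys}
# union (|): max count of each symbol
union = {s: max(left[s], right[s]) for s in keys}
# symmetric difference (^): absolute difference of counts
sym_diff = {s: abs(left[s] - right[s]) for s in keys}
```

A missing key in a `Counter` reads as 0, which matches how `Symbols` treats absent characters.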
    def __new__(cls,
                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """Constructor.

        The argument can be a string, an iterable of characters, or a mapping of
        characters to counts.

        If the argument is a string, negative symbols can be specified using a
        minus sign optionally surrounded by whitespace. For example,
        `a - b` has one positive a and one negative b.
        """
        self = super(Symbols, cls).__new__(cls)
        if isinstance(symbols, str):
            data: MutableMapping[str, int] = defaultdict(int)
            positive, *negative = re.split(r'\s*-\s*', symbols)
            for s in positive:
                data[s] += 1
            if len(negative) > 1:
                raise ValueError('Multiple dashes not allowed.')
            if len(negative) == 1:
                for s in negative[0]:
                    data[s] -= 1
        elif isinstance(symbols, Mapping):
            data = defaultdict(int, symbols)
        else:
            data = defaultdict(int)
            for s in symbols:
                data[s] += 1

        for s in data:
            if len(s) != 1:
                raise ValueError(f'Symbol {s} is not a single character.')
            if re.match(r'[\s_-]', s):
                raise ValueError(
                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')

        self._data = defaultdict(int,
                                 {k: data[k]
                                  for k in sorted(data.keys())})

        return self
Constructor.

The argument can be a string, an iterable of characters, or a mapping of characters to counts.

If the argument is a string, negative symbols can be specified using a minus sign optionally surrounded by whitespace. For example, `a - b` has one positive `a` and one negative `b`.
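The string-parsing rule can be sketched in plain standard-library Python (a hypothetical re-implementation, mirroring the constructor's logic rather than calling it):

```python
import re
from collections import defaultdict

def parse_symbols(text: str) -> dict[str, int]:
    # One optional minus sign splits positive symbols from negative ones.
    data: defaultdict[str, int] = defaultdict(int)
    positive, *negative = re.split(r'\s*-\s*', text)
    for s in positive:
        data[s] += 1
    if len(negative) > 1:
        raise ValueError('Multiple dashes not allowed.')
    if negative:
        for s in negative[0]:
            data[s] -= 1
    return dict(data)

parse_symbols('aab - b')  # one 'b' cancels out: {'a': 2, 'b': 0}
```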
    def additive_union(self, *args:
                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The sum of counts of each symbol."""
        return functools.reduce(operator.add, args, self)
The sum of counts of each symbol.
    def difference(self, *args:
                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The difference between the counts of each symbol."""
        return functools.reduce(operator.sub, args, self)
The difference between the counts of each symbol.
    def intersection(self, *args:
                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The min count of each symbol."""
        return functools.reduce(operator.and_, args, self)
The min count of each symbol.
    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The max count of each symbol."""
        return functools.reduce(operator.or_, args, self)
The max count of each symbol.
    def symmetric_difference(
            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The absolute difference in symbol counts between the two sets."""
        return self ^ other
The absolute difference in symbol counts between the two sets.
    def multiply_counts(self, other: int) -> 'Symbols':
        """Multiplies all counts by an integer."""
        return self * other
Multiplies all counts by an integer.
    def divide_counts(self, other: int) -> 'Symbols':
        """Divides all counts by an integer, rounding down."""
        data = defaultdict(int, {
            s: count // other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)
Divides all counts by an integer, rounding down.
    def count_subset(self,
                     divisor: Iterable[str] | Mapping[str, int],
                     *,
                     empty_divisor: int | None = None) -> int:
        """The number of times the divisor is contained in this multiset."""
        if not isinstance(divisor, Mapping):
            divisor = Counter(divisor)
        result = None
        for s, count in divisor.items():
            current = self._data[s] // count
            if result is None or current < result:
                result = current
        if result is None:
            if empty_divisor is None:
                raise ZeroDivisionError('Divisor is empty.')
            else:
                return empty_divisor
        else:
            return result
The number of times the divisor is contained in this multiset.
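The semantics reduce to a floor division per symbol, taking the minimum over the divisor's symbols. A standard-library sketch (a hypothetical helper, not the library method):

```python
from collections import Counter

def count_subset(multiset: str, divisor: str) -> int:
    # How many whole copies of `divisor` fit inside `multiset`?
    # The library method additionally supports an `empty_divisor`
    # fallback and raises ZeroDivisionError on an empty divisor.
    have = Counter(multiset)
    need = Counter(divisor)
    return min(have[s] // count for s, count in need.items())

count_subset('aabbbc', 'ab')  # 2: two 'a's limit us to two copies of 'ab'
```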
    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a subset of the other.

        Same as `<=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self <= other
Whether `self` is a subset of the other.

Same as `<=`.

Note that the `<` and `>` operators are lexicographic orderings, not proper subset relations.
    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a superset of the other.

        Same as `>=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self >= other
Whether `self` is a superset of the other.

Same as `>=`.

Note that the `<` and `>` operators are lexicographic orderings, not proper subset relations.
    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` has no positive elements in common with the other.

        Raises:
            ValueError if either has negative elements.
        """
        other = Symbols(other)
        if self.has_negative_counts() or other.has_negative_counts():
            raise ValueError(
                'isdisjoint() is not defined for negative counts.')
        return not any(self[s] > 0 and other[s] > 0  # type: ignore
                       for s in self)
Whether `self` has no positive elements in common with the other.

Raises:
- ValueError if either has negative elements.
A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor. `AgainExpression`s should not be fed to functions or methods other than `Die()`, but they can be used with operators. Examples:

- `Again + 6`: Roll again and add 6.
- `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()` affect how these arguments are processed. At most one of `again_count` or `again_depth` may be provided; if neither is provided, the behavior is as if `again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.
Count mode

When `again_count` is provided, we start with one roll queued and execute one roll at a time. For every `Again` we roll, we queue another roll. If we run out of rolls, we sum the rolls to find the result. If the total number of rolls (not including the initial roll) would exceed `again_count`, we reroll the entire process, effectively conditioning the process on not rolling more than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`, which means that only the following operators are allowed:

- Binary `+`
- `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative. For example, `str` or `tuple` outcomes will not produce elements with a definite order.
Depth mode

When `again_depth=0`, `again_end` is directly substituted for each occurrence of `Again`. For other values of `again_depth`, the result for `again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth are rerolled.
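The depth-mode recursion can be worked by hand for a hypothetical die `Die([1, 2, 3, 4, 5, Again + 6])` (roll again and add 6 on a 6). This sketch uses plain dicts of `Fraction`s rather than the library:

```python
from fractions import Fraction

def d6_explode(depth: int, end: int = 0) -> dict[int, Fraction]:
    # depth 0 substitutes `end` for Again; depth n substitutes the
    # result for depth n-1, as the depth-mode description above says.
    if depth == 0:
        inner = {end: Fraction(1)}
    else:
        inner = d6_explode(depth - 1, end)
    result: dict[int, Fraction] = {}
    for face in range(1, 6):  # faces 1-5 stand as-is
        result[face] = result.get(face, Fraction(0)) + Fraction(1, 6)
    for outcome, p in inner.items():  # face 6 becomes 6 + Again
        key = 6 + outcome
        result[key] = result.get(key, Fraction(0)) + p / 6
    return result
```

With `depth=1` this yields outcomes 1-5 at 1/6 each and 7-12 at 1/36 each, summing to 1.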
Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:

- Count mode: Any result that would cause the number of rolls to exceed `again_count` is rerolled.
- Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
class CountsKeysView(KeysView[T], Sequence[T]):
    """This functions as both a `KeysView` and a `Sequence`."""

    def __init__(self, counts: Counts[T]):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._keys[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._keys == other
This functions as both a `KeysView` and a `Sequence`.
class CountsValuesView(ValuesView[int], Sequence[int]):
    """This functions as both a `ValuesView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._values[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._values == other
This functions as both a `ValuesView` and a `Sequence`.
class CountsItemsView(ItemsView[T, int], Sequence[tuple[T, int]]):
    """This functions as both an `ItemsView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._items[index]

    def __eq__(self, other):
        return self._mapping._items == other
This functions as both an `ItemsView` and a `Sequence`.
def from_cumulative(outcomes: Sequence[T],
                    cumulative: 'Sequence[int] | Sequence[icepool.Die[bool]]',
                    *,
                    reverse: bool = False) -> 'icepool.Die[T]':
    """Constructs a `Die` from a sequence of cumulative values.

    Args:
        outcomes: The outcomes of the resulting die. Sorted order is recommended
            but not necessary.
        cumulative: The cumulative values (inclusive) of the outcomes in the
            order they are given to this function. These may be:
            * `int` cumulative quantities.
            * Dice representing the cumulative distribution at that point.
        reverse: Iff true, both of the arguments will be reversed. This allows
            e.g. constructing using a survival distribution.
    """
    if len(outcomes) == 0:
        return icepool.Die({})

    if reverse:
        outcomes = list(reversed(outcomes))
        cumulative = list(reversed(cumulative))  # type: ignore

    prev = 0
    d = {}

    if isinstance(cumulative[0], icepool.Die):
        cumulative = commonize_denominator(*cumulative)
        for outcome, die in zip(outcomes, cumulative):
            d[outcome] = die.quantity('!=', False) - prev
            prev = die.quantity('!=', False)
    elif isinstance(cumulative[0], int):
        cumulative = cast(Sequence[int], cumulative)
        for outcome, quantity in zip(outcomes, cumulative):
            d[outcome] = quantity - prev
            prev = quantity
    else:
        raise TypeError(
            f'Unsupported type {type(cumulative[0])} for cumulative values.')

    return icepool.Die(d)
Constructs a `Die` from a sequence of cumulative values.

Arguments:
- outcomes: The outcomes of the resulting die. Sorted order is recommended but not necessary.
- cumulative: The cumulative values (inclusive) of the outcomes in the order they are given to this function. These may be:
  - `int` cumulative quantities.
  - Dice representing the cumulative distribution at that point.
- reverse: Iff true, both of the arguments will be reversed. This allows e.g. constructing using a survival distribution.
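For `int` cumulative quantities, the construction is just successive differences (hypothetical inputs here):

```python
# Inclusive cumulative quantities: 1 outcome <= 1, 3 outcomes <= 2, 6 <= 3.
outcomes = [1, 2, 3]
cumulative = [1, 3, 6]

# Each per-outcome quantity is the step up from the previous cumulative value.
quantities = {}
prev = 0
for outcome, c in zip(outcomes, cumulative):
    quantities[outcome] = c - prev
    prev = c
# quantities == {1: 1, 2: 2, 3: 3}
```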
def from_rv(rv, outcomes: Sequence[int] | Sequence[float], denominator: int,
            **kwargs) -> 'icepool.Die[int] | icepool.Die[float]':
    """Constructs a `Die` from an rv object (as `scipy.stats`).

    This is done using the CDF.

    Args:
        rv: An rv object (as `scipy.stats`).
        outcomes: An iterable of `int`s or `float`s that will be the outcomes
            of the resulting `Die`.
            If the distribution is discrete, outcomes must be `int`s.
            Some outcomes may be omitted if their probability is too small
            compared to the denominator.
        denominator: The denominator of the resulting `Die` will be set to this.
        **kwargs: These will be forwarded to `rv.cdf()`.
    """
    if hasattr(rv, 'pdf'):
        # Continuous distributions use midpoints.
        midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
        cdf = rv.cdf(midpoints, **kwargs)
        quantities_le = tuple(int(round(x * denominator))
                              for x in cdf) + (denominator, )
    else:
        cdf = rv.cdf(outcomes, **kwargs)
        quantities_le = tuple(int(round(x * denominator)) for x in cdf)
    return from_cumulative(outcomes, quantities_le)
Constructs a `Die` from an rv object (as `scipy.stats`).

This is done using the CDF.

Arguments:
- rv: An rv object (as `scipy.stats`).
- outcomes: An iterable of `int`s or `float`s that will be the outcomes of the resulting `Die`. If the distribution is discrete, outcomes must be `int`s. Some outcomes may be omitted if their probability is too small compared to the denominator.
- denominator: The denominator of the resulting `Die` will be set to this.
- `**kwargs`: These will be forwarded to `rv.cdf()`.
def pointwise_max(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling >= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling >= that
    outcome is the same as the highest chance of rolling >= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the highest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get >= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = commonize_denominator(*args)
    outcomes = sorted_union(*args)
    cumulative = [
        min(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
Selects the highest chance of rolling >= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling >= that outcome is the same as the highest chance of rolling >= that outcome among the arguments.

Equivalently, any quantile in the result is the highest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get >= a threshold that is known but could change depending on the situation.

Arguments:
- dice: Either an iterable of dice, or two or more dice as separate arguments.
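The pointwise rule can be shown on explicit CDFs (made-up quantities over a shared denominator of 6): minimizing the `<=` quantity at each outcome maximizes the chance of rolling `>=` each outcome.

```python
# Cumulative quantities of '<= outcome', denominator 6.
cdf_a = {1: 2, 2: 4, 3: 6}  # a fair d3
cdf_b = {1: 1, 2: 2, 3: 6}  # skewed toward 3

outcomes = sorted(cdf_a.keys() | cdf_b.keys())
# Take the pointwise minimum of the CDFs.
cumulative = [min(cdf_a[x], cdf_b[x]) for x in outcomes]
```

Here the result coincides with the skewed die, since its CDF is pointwise lowest everywhere.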
def pointwise_min(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling <= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling <= that
    outcome is the same as the highest chance of rolling <= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the lowest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get <= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = commonize_denominator(*args)
    outcomes = sorted_union(*args)
    cumulative = [
        max(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
Selects the highest chance of rolling <= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling <= that outcome is the same as the highest chance of rolling <= that outcome among the arguments.

Equivalently, any quantile in the result is the lowest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get <= a threshold that is known but could change depending on the situation.

Arguments:
- dice: Either an iterable of dice, or two or more dice as separate arguments.
def lowest(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int | None = None,
           drop: int | None = None,
           default: T | None = None) -> 'icepool.Die[T]':
    """The lowest outcome among the rolls, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `min()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be summed.
            * If only `drop` is provided, the `drop` lowest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
                the next `keep` lowest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "lowest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = lowest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
The lowest outcome among the rolls, or the sum of some of the lowest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `min()`.
- keep, drop: These arguments work together:
  - If neither is provided, the single lowest die will be taken.
  - If only `keep` is provided, the `keep` lowest dice will be summed.
  - If only `drop` is provided, the `drop` lowest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` lowest dice will be dropped, then the next `keep` lowest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- ValueError if an empty iterable is provided with no `default`.
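The keep/drop semantics can be checked by brute-force enumeration over small pools (a sketch, not how the library computes it):

```python
from collections import Counter
from itertools import product

# lowest() with no keep/drop: the single lower of 2d6.
lower_of_2d6 = Counter(min(roll) for roll in product(range(1, 7), repeat=2))

# lowest() with keep=2 on 3d6: sum the two lowest dice of each roll.
lowest2_of_3d6 = Counter(sum(sorted(roll)[:2])
                         for roll in product(range(1, 7), repeat=3))
```

For example, the lower of 2d6 is 1 in 11 of the 36 equally likely rolls.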
def highest(arg0,
            /,
            *more_args: 'T | icepool.Die[T]',
            keep: int | None = None,
            drop: int | None = None,
            default: T | None = None) -> 'icepool.Die[T]':
    """The highest outcome among the rolls, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `max()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be summed.
            * If only `drop` is provided, the `drop` highest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped, then
                the next `keep` highest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "highest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = highest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
The highest outcome among the rolls, or the sum of some of the highest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `max()`.
- keep, drop: These arguments work together:
  - If neither is provided, the single highest die will be taken.
  - If only `keep` is provided, the `keep` highest dice will be summed.
  - If only `drop` is provided, the `drop` highest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` highest dice will be dropped, then the next `keep` highest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- ValueError if an empty iterable is provided with no `default`.
def middle(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int = 1,
           tie: Literal['error', 'high', 'low'] = 'error',
           default: T | None = None) -> 'icepool.Die[T]':
    """The middle of the outcomes among the rolls, or the sum of some of the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments.
        keep: The number of outcomes to sum.
        tie: What to do if `keep` is odd but the number of args is even, or
            vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "middle() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    # Expression evaluators are difficult to type.
    return icepool.Pool(args).middle(keep, tie=tie).sum()  # type: ignore
The middle of the outcomes among the rolls, or the sum of some of the middle.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments.
- keep: The number of outcomes to sum.
- tie: What to do if `keep` is odd but the number of args is even, or vice versa.
  - 'error' (default): Raises `IndexError`.
  - 'high': The higher outcome is taken.
  - 'low': The lower outcome is taken.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- ValueError if an empty iterable is provided with no `default`.
def min_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The minimum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable
        argument of such.
    """
    return min(_iter_outcomes(*args))
The minimum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
def max_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The maximum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable
        argument of such.
    """
    return max(_iter_outcomes(*args))
The maximum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
def consecutive(*args: Iterable[int]) -> Sequence[int]:
    """A minimal sequence of consecutive ints covering the argument sets."""
    start = min((x for x in itertools.chain(*args)), default=None)
    if start is None:
        return ()
    stop = max(x for x in itertools.chain(*args))
    return tuple(range(start, stop + 1))
A minimal sequence of consecutive ints covering the argument sets.
def sorted_union(*args: Iterable[T]) -> tuple[T, ...]:
    """Merge sets into a sorted sequence."""
    if not args:
        return ()
    return tuple(sorted(set.union(*(set(arg) for arg in args))))
Merge sets into a sorted sequence.
def commonize_denominator(
        *dice: 'T | icepool.Die[T]') -> tuple['icepool.Die[T]', ...]:
    """Scale the quantities of the dice so that all of them have the same denominator.

    The denominator is the LCM of the denominators of the arguments.

    Args:
        *dice: Any number of dice or single outcomes convertible to dice.

    Returns:
        A tuple of dice with the same denominator.
    """
    converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
    denominator_lcm = math.lcm(*(die.denominator() for die in converted_dice
                                 if die.denominator() > 0))
    return tuple(
        die.multiply_quantities(denominator_lcm // die.denominator()
                                if die.denominator() > 0 else 1)
        for die in converted_dice)
Scale the quantities of the dice so that all of them have the same denominator.
The denominator is the LCM of the denominators of the arguments.
Arguments:
- *dice: Any number of dice or single outcomes convertible to dice.
Returns:
A tuple of dice with the same denominator.
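The scaling rule is simple to sketch on plain quantity mappings (hypothetical d4 and d6 here, not library objects): each die's quantities are multiplied by the LCM divided by its own denominator.

```python
from math import lcm

d4 = {x: 1 for x in range(1, 5)}   # denominator 4
d6 = {x: 1 for x in range(1, 7)}   # denominator 6

# LCM of the denominators.
common = lcm(sum(d4.values()), sum(d6.values()))
# Scale each die's quantities up to the common denominator.
d4_scaled = {k: v * (common // sum(d4.values())) for k, v in d4.items()}
d6_scaled = {k: v * (common // sum(d6.values())) for k, v in d6.items()}
```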
def reduce(
        function: 'Callable[[T, T], T | icepool.Die[T] | icepool.RerollType]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None) -> 'icepool.Die[T]':
    """Applies a function of two arguments cumulatively to a sequence of dice.

    Analogous to the
    [`functools` function of the same name.](https://docs.python.org/3/library/functools.html#functools.reduce)

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice, and produce an outcome
            of the same type. It may also return `Reroll`, in which case the
            entire sequence is effectively rerolled.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        result = icepool.implicit_convert_to_die(next(iter_dice))
    for die in iter_dice:
        result = map(function, result, die)
    return result
Applies a function of two arguments cumulatively to a sequence of dice.

Analogous to the [`functools` function of the same name](https://docs.python.org/3/library/functools.html#functools.reduce).

Arguments:
- function: The function to map. The function should take two arguments, which are an outcome from each of two dice, and produce an outcome of the same type. It may also return `Reroll`, in which case the entire sequence is effectively rerolled.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence of dice.
- again_count, again_depth, again_end: Forwarded to the final die constructor.
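The pairwise combination can be sketched in stdlib Python with `{outcome: quantity}` dicts in place of `Die` objects. `combine` and `reduce_sketch` are hypothetical names, not icepool API; the real `reduce()` delegates each step to `map()`:

```python
from collections import defaultdict
from functools import reduce as _reduce

def combine(die_a, die_b, function):
    # Apply `function` to every joint outcome of the two dice,
    # multiplying quantities, as icepool's map() does for two arguments.
    out = defaultdict(int)
    for a, qa in die_a.items():
        for b, qb in die_b.items():
            out[function(a, b)] += qa * qb
    return dict(out)

def reduce_sketch(function, dice):
    # Fold the sequence of dice left to right, pairwise.
    return _reduce(lambda acc, die: combine(acc, die, function), dice)

d6 = {n: 1 for n in range(1, 7)}
total = reduce_sketch(lambda a, b: a + b, [d6, d6, d6])  # 3d6
```

For example, `total[10]` ends up as 27 out of a denominator of 216, the familiar 3d6 count for a sum of 10.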
```python
def accumulate(
        function: 'Callable[[T, T], T | icepool.Die[T]]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None
) -> Iterator['icepool.Die[T]']:
    """Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

    Analogous to the
    [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate),
    though with no default function and
    the same parameter order as `reduce()`.

    The number of results is equal to the number of elements of `dice`, with
    one additional element if `initial` is provided.

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        try:
            result = icepool.implicit_convert_to_die(next(iter_dice))
        except StopIteration:
            return
    yield result
    for die in iter_dice:
        result = map(function, result, die)
        yield result
```
Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

Analogous to the [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate), though with no default function and the same parameter order as `reduce()`.

The number of results is equal to the number of elements of `dice`, with one additional element if `initial` is provided.

Arguments:
- function: The function to map. The function should take two arguments, which are an outcome from each of two dice.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence of dice.
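As a stdlib sketch (again with `{outcome: quantity}` dicts standing in for `Die` objects; `accumulate_sketch` is a hypothetical name), the generator yields each partial combination in turn:

```python
from collections import defaultdict

def combine(die_a, die_b, function):
    # Combine two quantity maps over all joint outcomes.
    out = defaultdict(int)
    for a, qa in die_a.items():
        for b, qb in die_b.items():
            out[function(a, b)] += qa * qb
    return dict(out)

def accumulate_sketch(function, dice, *, initial=None):
    # Yield each partial result in turn, like itertools.accumulate.
    it = iter(dice)
    if initial is not None:
        result = initial
    else:
        try:
            result = next(it)
        except StopIteration:
            return
    yield result
    for die in it:
        result = combine(result, die, function)
        yield result

d6 = {n: 1 for n in range(1, 7)}
partials = list(accumulate_sketch(lambda a, b: a + b, [d6, d6, d6]))
# partials holds the distributions of 1d6, 2d6, and 3d6 in turn.
```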
```python
def map(
        repl:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        time_limit: int | Literal['inf'] | None = None,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None
) -> 'icepool.Die[T]':
    """Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die.

    See `map_function` for a decorator version of this.

    Example: `map(lambda a, b: a + b, d6, d6)` is the same as d6 + d6.

    `map()` is flexible but not very efficient for more than a few dice.
    If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
    `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
    efficient than using `map` on the dice in order.

    `Again` can be used but is not recommended with `repeat` other than 1.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
                produces a new outcome.
            * A mapping from old outcomes to new outcomes.
                Unmapped old outcomes stay the same.
                In this case args must have exactly one element.
            As with the `Die` constructor, the new outcomes:
            * May be dice rather than just single outcomes.
            * The special value `icepool.Reroll` will reroll that old outcome.
            * `tuple`s containing `Population`s will be `tupleize`d into
                `Population`s of `tuple`s.
                This does not apply to subclasses of `tuple` such as `namedtuple`
                or other classes such as `Vector`.
        *args: `func` will be called with all joint outcomes of these.
            Allowed arg types are:
            * Single outcome.
            * `Die`. All outcomes will be sent to `func`.
            * `MultisetExpression`. All sorted tuples of outcomes will be sent
                to `func`, as `MultisetExpression.expand()`.
        star: If `True`, the first of the args will be unpacked before giving
            them to `func`.
            If not provided, it will be guessed based on the signature of `func`
            and the number of arguments.
        repeat: This will be repeated with the same arguments on the
            result this many times, except the first of `args` will be replaced
            by the result of the previous iteration.

            Note that returning `Reroll` from `repl` will effectively reroll all
            arguments, including the first argument which represents the result
            of the process up to this point. If you only want to reroll the
            current stage, you can nest another `map` inside `repl`.

            EXPERIMENTAL: If set to `'inf'`, the result will be as if this
            were repeated an infinite number of times. In this case, the
            result will be in simplest form.
        time_limit: Similar to `repeat`, but will return early if a fixed point
            is reached. If both `repeat` and `time_limit` are provided
            (not recommended), `time_limit` takes priority.
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """
    transition_function = _canonicalize_transition_function(
        repl, len(args), star)

    if len(args) == 0:
        if repeat != 1:
            raise ValueError('If no arguments are given, repeat must be 1.')
        return icepool.Die([transition_function()],
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)

    # Here len(args) is at least 1.

    first_arg = args[0]
    extra_args = args[1:]

    if time_limit is not None:
        repeat = time_limit

    if repeat == 'inf':
        # Infinite repeat.
        # T_co and U should be the same in this case.
        def unary_transition_function(state):
            return map(transition_function,
                       state,
                       *extra_args,
                       star=False,
                       again_count=again_count,
                       again_depth=again_depth,
                       again_end=again_end)

        return icepool.population.markov_chain.absorbing_markov_chain(
            icepool.Die([args[0]]), unary_transition_function)
    else:
        if repeat < 0:
            raise ValueError('repeat cannot be negative.')

        if repeat == 0:
            return icepool.Die([first_arg])
        elif repeat == 1 and time_limit is None:
            final_outcomes: 'list[T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' = []
            final_quantities: list[int] = []
            for outcomes, final_quantity in icepool.iter_cartesian_product(
                    *args):
                final_outcome = transition_function(*outcomes)
                if final_outcome is not icepool.Reroll:
                    final_outcomes.append(final_outcome)
                    final_quantities.append(final_quantity)
            return icepool.Die(final_outcomes,
                               final_quantities,
                               again_count=again_count,
                               again_depth=again_depth,
                               again_end=again_end)
        else:
            result: 'icepool.Die[T]' = icepool.Die([first_arg])
            for _ in range(repeat):
                next_result = icepool.map(transition_function,
                                          result,
                                          *extra_args,
                                          star=False,
                                          again_count=again_count,
                                          again_depth=again_depth,
                                          again_end=again_end)
                if time_limit is not None and result.simplify(
                ) == next_result.simplify():
                    return result
                result = next_result
            return result
```
Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a `Die`.

See `map_function` for a decorator version of this.

Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.

`map()` is flexible but not very efficient for more than a few dice. If at all possible, use `reduce()`, `MultisetExpression` methods, and/or `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more efficient than using `map` on the dice in order.

`Again` can be used but is not recommended with `repeat` other than 1.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a new outcome.
  - A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same. In this case args must have exactly one element.

  As with the `Die` constructor, the new outcomes:
  - May be dice rather than just single outcomes.
  - The special value `icepool.Reroll` will reroll that old outcome.
  - `tuple`s containing `Population`s will be `tupleize`d into `Population`s of `tuple`s. This does not apply to subclasses of `tuple` such as `namedtuple` or other classes such as `Vector`.
- *args: `func` will be called with all joint outcomes of these. Allowed arg types are:
  - Single outcome.
  - `Die`. All outcomes will be sent to `func`.
  - `MultisetExpression`. All sorted tuples of outcomes will be sent to `func`, as `MultisetExpression.expand()`.
- star: If `True`, the first of the args will be unpacked before giving them to `func`. If not provided, it will be guessed based on the signature of `func` and the number of arguments.
- repeat: This will be repeated with the same arguments on the result this many times, except the first of `args` will be replaced by the result of the previous iteration. Note that returning `Reroll` from `repl` will effectively reroll all arguments, including the first argument, which represents the result of the process up to this point. If you only want to reroll the current stage, you can nest another `map` inside `repl`. EXPERIMENTAL: If set to `'inf'`, the result will be as if this were repeated an infinite number of times. In this case, the result will be in simplest form.
- time_limit: Similar to `repeat`, but will return early if a fixed point is reached. If both `repeat` and `time_limit` are provided (not recommended), `time_limit` takes priority.
- again_count, again_depth, again_end: Forwarded to the final die constructor.
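The joint-outcome iteration and the `repeat` feedback loop can be sketched in stdlib Python with `{outcome: quantity}` dicts standing in for `Die` objects. `map_sketch` and `to_die` are hypothetical names, not icepool API:

```python
import itertools
from collections import defaultdict

def to_die(x):
    # Treat a bare outcome as a one-sided die, mirroring icepool's
    # implicit conversion.
    return x if isinstance(x, dict) else {x: 1}

def map_sketch(repl, *args, repeat=1):
    dice = [to_die(a) for a in args]
    result, extra = dice[0], dice[1:]
    for _ in range(repeat):
        # Evaluate repl over all joint outcomes, multiplying quantities;
        # then feed the result back in as the first argument.
        out = defaultdict(int)
        for combo in itertools.product(result.items(),
                                       *(e.items() for e in extra)):
            outcomes = [o for o, _ in combo]
            quantity = 1
            for _, q in combo:
                quantity *= q
            out[repl(*outcomes)] += quantity
        result = dict(out)
    return result

d6 = {n: 1 for n in range(1, 7)}
# Start at 0 and add a d6 roll three times: the same as 3d6.
total = map_sketch(lambda running, roll: running + roll, 0, d6, repeat=3)
```

This illustrates why `map()` scales poorly: the joint iteration is a full Cartesian product over the arguments at every step.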
````python
def map_function(
        function:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | None' = None,
        /,
        *,
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None
) -> 'Callable[..., icepool.Die[T]] | Callable[..., Callable[..., icepool.Die[T]]]':
    """Decorator that turns a function that takes outcomes into a function that takes dice.

    The result must be a `Die`.

    This is basically a decorator version of `map()` and produces behavior
    similar to AnyDice functions, though Icepool has different typing rules
    among other differences.

    `map_function` can either be used with no arguments:

    ```python
    @map_function
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6, again_depth=2)
    ```

    Or with keyword arguments, in which case the extra arguments are bound:

    ```python
    @map_function(again_depth=2)
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6)
    ```

    Args:
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """

    if function is not None:
        return update_wrapper(partial(map, function), function)
    else:

        def decorator(
            function:
            'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]'
        ) -> 'Callable[..., icepool.Die[T]]':

            return update_wrapper(
                partial(map,
                        function,
                        star=star,
                        repeat=repeat,
                        again_count=again_count,
                        again_depth=again_depth,
                        again_end=again_end), function)

        return decorator
````
Decorator that turns a function that takes outcomes into a function that takes dice.

The result must be a `Die`.

This is basically a decorator version of `map()` and produces behavior similar to AnyDice functions, though Icepool has different typing rules among other differences.

`map_function` can either be used with no arguments:

```python
@map_function
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6, again_depth=2)
```

Or with keyword arguments, in which case the extra arguments are bound:

```python
@map_function(again_depth=2)
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6)
```

Arguments:
- again_count, again_depth, again_end: Forwarded to the final die constructor.
```python
def map_and_time(
        repl:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
        initial_state: 'T | icepool.Die[T]',
        /,
        *extra_args,
        star: bool | None = None,
        time_limit: int) -> 'icepool.Die[tuple[T, int]]':
    """Repeatedly map outcomes of the state to other outcomes, while also
    counting timesteps.

    This is useful for representing processes.

    The outcomes of the result are `(outcome, time)`, where `time` is the
    number of repeats needed to reach an absorbing outcome (an outcome that
    only leads to itself), or `time_limit`, whichever is lesser.

    This will return early if it reaches a fixed point.
    Therefore, you can set `time_limit` equal to the maximum number of
    timesteps you could possibly be interested in without worrying about
    it causing extra computations after the fixed point.

    Args:
        repl: One of the following:
            * A callable returning a new outcome for each old outcome.
            * A mapping from old outcomes to new outcomes.
                Unmapped old outcomes stay the same.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        initial_state: The initial state of the process, which could be a
            single state or a `Die`.
        extra_args: Extra arguments to use, as per `map`. Note that these are
            rerolled at every time step.
        star: If `True`, the first of the args will be unpacked before giving
            them to `func`.
            If not provided, it will be guessed based on the signature of `func`
            and the number of arguments.
        time_limit: This will be repeated with the same arguments on the result
            up to this many times.

    Returns:
        The `Die` after the modification.
    """
    transition_function = _canonicalize_transition_function(
        repl, 1 + len(extra_args), star)

    result: 'icepool.Die[tuple[T, int]]' = map(lambda x: (x, 0), initial_state)

    # Note that we don't expand extra_args during the outer map.
    # This is needed to correctly evaluate whether each outcome is absorbing.
    def transition_with_steps(outcome_and_steps, extra_args):
        outcome, steps = outcome_and_steps
        next_outcome = map(transition_function, outcome, *extra_args)
        if icepool.population.markov_chain.is_absorbing(outcome, next_outcome):
            return outcome, steps
        else:
            return icepool.tupleize(next_outcome, steps + 1)

    return map(transition_with_steps,
               result,
               extra_args,
               time_limit=time_limit)
```
Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

The outcomes of the result are `(outcome, time)`, where `time` is the number of repeats needed to reach an absorbing outcome (an outcome that only leads to itself), or `time_limit`, whichever is lesser.

This will return early if it reaches a fixed point. Therefore, you can set `time_limit` equal to the maximum number of timesteps you could possibly be interested in without worrying about it causing extra computations after the fixed point.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- initial_state: The initial state of the process, which could be a single state or a `Die`.
- extra_args: Extra arguments to use, as per `map`. Note that these are rerolled at every time step.
- star: If `True`, the first of the args will be unpacked before giving them to `func`. If not provided, it will be guessed based on the signature of `func` and the number of arguments.
- time_limit: This will be repeated with the same arguments on the result up to this many times.

Returns:
The `Die` after the modification.
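The absorbing-state bookkeeping can be sketched in stdlib Python with `{outcome: quantity}` dicts standing in for `Die` objects. `map_and_time_sketch` is a hypothetical name, and for simplicity every transition here has the same denominator (2) so quantities stay comparable:

```python
from collections import defaultdict

def map_and_time_sketch(repl, initial, *, time_limit):
    # Distribution over (outcome, steps) pairs. An outcome whose
    # transition leads only to itself is absorbing: it keeps its weight
    # but stops accumulating steps.
    result = {(initial, 0): 1}
    for _ in range(time_limit):
        out = defaultdict(int)
        for (outcome, steps), q in result.items():
            nxt = repl(outcome)  # a {new_outcome: quantity} map
            absorbing = set(nxt) == {outcome}
            for new_outcome, w in nxt.items():
                if absorbing:
                    out[(outcome, steps)] += q * w
                else:
                    out[(new_outcome, steps + 1)] += q * w
        result = dict(out)
    return result

# Each step, flip a coin: move to the absorbing state 0 or stay at 1.
def step(x):
    return {0: 2} if x == 0 else {0: 1, 1: 1}

dist = map_and_time_sketch(step, 1, time_limit=3)
# Geometric stopping times: half the mass absorbs at time 1, a quarter
# at time 2, and so on, up to the time limit.
```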
```python
def map_to_pool(
        repl:
    'Callable[..., icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType] | Mapping[Any, icepool.MultisetExpression | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType]',
        /,
        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
        star: bool | None = None) -> 'icepool.MultisetExpression[T]':
    """EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a MultisetExpression.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
                produces a `MultisetExpression` or something convertible to a `Pool`.
            * A mapping from old outcomes to `MultisetExpression`
                or something convertible to a `Pool`.
                In this case args must have exactly one element.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        star: If `True`, the first of the args will be unpacked before giving
            them to `repl`.
            If not provided, it will be guessed based on the signature of `repl`
            and the number of arguments.
        denominator: If provided, the denominator of the result will be this
            value. Otherwise it will be the minimum to correctly weight the
            pools.

    Returns:
        A `MultisetExpression` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.

    Raises:
        ValueError: If `denominator` cannot be made consistent with the
            resulting mixture of pools.
    """
    transition_function = _canonicalize_transition_function(
        repl, len(args), star)

    data: 'MutableMapping[icepool.MultisetExpression[T], int]' = defaultdict(
        int)
    for outcomes, quantity in icepool.iter_cartesian_product(*args):
        pool = transition_function(*outcomes)
        if pool is icepool.Reroll:
            continue
        elif isinstance(pool, icepool.MultisetExpression):
            data[pool] += quantity
        else:
            data[icepool.Pool(pool)] += quantity
    # I couldn't get the covariance / contravariance to work.
    return icepool.MultisetMixture(data)  # type: ignore
```
EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a `MultisetExpression`.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a `MultisetExpression` or something convertible to a `Pool`.
  - A mapping from old outcomes to a `MultisetExpression` or something convertible to a `Pool`. In this case args must have exactly one element.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`. If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- denominator: If provided, the denominator of the result will be this value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetExpression` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the resulting mixture of pools.
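The core idea — each joint outcome selects a pool, weighted by the quantity of the outcome that produced it — can be sketched in stdlib Python. `map_to_pool_sketch` is a hypothetical name, with an `{outcome: quantity}` dict standing in for a `Die` and a tuple of labels standing in for a `Pool`:

```python
from collections import defaultdict

def map_to_pool_sketch(repl, die):
    # Each outcome of `die` selects a pool; pools are weighted by the
    # quantity of the outcome that produced them, forming a mixture.
    mixture = defaultdict(int)
    for outcome, quantity in die.items():
        mixture[tuple(repl(outcome))] += quantity
    return dict(mixture)

d3 = {1: 1, 2: 1, 3: 1}
# Roll a d3, then roll that many d6s.
mixture = map_to_pool_sketch(lambda n: ['d6'] * n, d3)
```

The real function additionally commonizes weights across pools of different denominators, which is where the `denominator` consistency check comes in.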
Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function and method descriptions for details.

This effectively removes the outcome from the probability space, along with its contribution to the denominator.

This can be used for conditional probability by removing all outcomes not consistent with the given observations.

Operation in specific cases:
- When used with `Again`, only that stage is rerolled, not the entire `Again` tree.
- To reroll with limited depth, use `Die.reroll()`, or `Again` with no modification.
- When used with `MultisetEvaluator`, the entire evaluation is rerolled.
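The conditional-probability use can be sketched in stdlib Python with an `{outcome: quantity}` dict standing in for a `Die`; `REROLL` and `map_with_reroll` are hypothetical stand-ins, not icepool API:

```python
REROLL = object()  # stands in for icepool.Reroll

def map_with_reroll(repl, die):
    # Outcomes mapped to REROLL are removed from the probability space,
    # along with their contribution to the denominator.
    out = {}
    for outcome, quantity in die.items():
        new_outcome = repl(outcome)
        if new_outcome is REROLL:
            continue
        out[new_outcome] = out.get(new_outcome, 0) + quantity
    return out

d6 = {n: 1 for n in range(1, 7)}
# Condition on "rolled at least 4" by rerolling everything else.
conditioned = map_with_reroll(lambda x: REROLL if x < 4 else x, d6)
```

The denominator shrinks from 6 to 3, so each surviving outcome now has probability 1/3 — exactly the conditional distribution.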
```python
class RerollType(enum.Enum):
    """The type of the Reroll singleton."""
    Reroll = 'Reroll'
    """Indicates an outcome should be rerolled (with unlimited depth)."""
```
The type of the Reroll singleton.
Indicates an outcome should be rerolled (with unlimited depth).
```python
class Pool(KeepGenerator[T]):
    """Represents a multiset of outcomes resulting from the roll of several dice.

    This should be used in conjunction with `MultisetEvaluator` to generate a
    result.

    Note that operators are performed on the multiset of rolls, not the multiset
    of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but
    an expression meaning "roll two pools of 3d6 and get the rolls from the
    first pool, with rolls in the second pool cancelling matching rolls in the
    first pool one-for-one".
    """

    _dice: tuple[tuple['icepool.Die[T]', int]]
    _outcomes: tuple[T, ...]

    def __new__(
            cls,
            dice:
        'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]',
            times: Sequence[int] | int = 1) -> 'Pool':
        """Public constructor for a pool.

        Evaluation is most efficient when the dice are the same or same-side
        truncations of each other. For example, d4, d6, d8, d10, d12 are all
        same-side truncations of d12.

        It is permissible to create a `Pool` without providing dice, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dice may be in the pool zero times, in which case their
        outcomes will be considered but without any count (unless another die
        has that outcome).

        Args:
            dice: The dice to put in the `Pool`. This can be one of the following:

                * A `Sequence` of `Die` or outcomes.
                * A `Mapping` of `Die` or outcomes to how many of that `Die` or
                    outcome to put in the `Pool`.

                All outcomes within a `Pool` must be totally orderable.
            times: Multiplies the number of times each element of `dice` will
                be put into the pool.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.

        Raises:
            ValueError: If a bare `Deck` or `Die` argument is provided.
                A `Pool` of a single `Die` should be constructed as `Pool([die])`.
        """
        if isinstance(dice, Pool):
            if times == 1:
                return dice
            else:
                dice = {die: quantity for die, quantity in dice._dice}

        if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)):
            raise ValueError(
                f'A Pool cannot be constructed with a {type(dice).__name__} argument.'
            )

        dice, times = icepool.creation_args.itemize(dice, times)
        converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]

        dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int)
        for die, qty in zip(converted_dice, times):
            if qty == 0:
                continue
            dice_counts[die] += qty
        keep_tuple = (1, ) * sum(times)

        # Includes dice with zero qty.
        outcomes = icepool.sorted_union(*converted_dice)
        return cls._new_from_mapping(dice_counts, outcomes, keep_tuple)

    @classmethod
    @cache
    def _new_raw(cls, dice: tuple[tuple['icepool.Die[T]', int]],
                 outcomes: tuple[T], keep_tuple: tuple[int, ...]) -> 'Pool[T]':
        """All pool creation ends up here. This method is cached.

        Args:
            dice: A tuple of (die, count) pairs.
            keep_tuple: A tuple of how many times to count each die.
        """
        self = super(Pool, cls).__new__(cls)
        self._dice = dice
        self._outcomes = outcomes
        self._keep_tuple = keep_tuple
        return self

    @classmethod
    def clear_cache(cls):
        """Clears the global pool cache."""
        Pool._new_raw.cache_clear()

    @classmethod
    def _new_from_mapping(cls, dice_counts: Mapping['icepool.Die[T]', int],
                          outcomes: tuple[T, ...],
                          keep_tuple: Sequence[int]) -> 'Pool[T]':
        """Creates a new pool.

        Args:
            dice_counts: A map from dice to rolls.
            keep_tuple: A tuple with length equal to the number of dice.
        """
        dice = tuple(
            sorted(dice_counts.items(), key=lambda kv: kv[0]._hash_key))
        return Pool._new_raw(dice, outcomes, keep_tuple)

    @cached_property
    def _raw_size(self) -> int:
        return sum(count for _, count in self._dice)

    def raw_size(self) -> int:
        """The number of dice in this pool before the keep_tuple is applied."""
        return self._raw_size

    def _is_resolvable(self) -> bool:
        return all(not die.is_empty() for die, _ in self._dice)

    @cached_property
    def _denominator(self) -> int:
        return math.prod(die.denominator()**count for die, count in self._dice)

    def denominator(self) -> int:
        return self._denominator

    @cached_property
    def _dice_tuple(self) -> tuple['icepool.Die[T]', ...]:
        return sum(((die, ) * count for die, count in self._dice), start=())

    @cached_property
    def _unique_dice(self) -> Collection['icepool.Die[T]']:
        return set(die for die, _ in self._dice)

    def unique_dice(self) -> Collection['icepool.Die[T]']:
        """The collection of unique dice in this pool."""
        return self._unique_dice

    def outcomes(self) -> Sequence[T]:
        """The union of possible outcomes among all dice in this pool in ascending order."""
        return self._outcomes

    def local_order_preference(self) -> tuple[Order, OrderReason]:
        can_truncate_min, can_truncate_max = icepool.order.can_truncate(
            self.unique_dice())
        if can_truncate_min and not can_truncate_max:
            return Order.Ascending, OrderReason.PoolComposition
        if can_truncate_max and not can_truncate_min:
            return Order.Descending, OrderReason.PoolComposition

        lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple())
        if lo_skip > hi_skip:
            return Order.Descending, OrderReason.KeepSkip
        if hi_skip > lo_skip:
            return Order.Ascending, OrderReason.KeepSkip

        return Order.Any, OrderReason.NoPreference

    def min_outcome(self) -> T:
        """The min outcome among all dice in this pool."""
        return self._outcomes[0]

    def max_outcome(self) -> T:
        """The max outcome among all dice in this pool."""
        return self._outcomes[-1]

    def _prepare(self):
        yield self, 1

    def _generate_min(self, min_outcome) -> Iterator[tuple['Pool', int, int]]:
        """Pops the given outcome from this pool, if it is the min outcome.

        Yields:
            popped_pool: The pool after the min outcome is popped.
            count: The number of dice that rolled the min outcome, after
                accounting for keep_tuple.
            weight: The weight of this incremental result.
        """
        if not self.outcomes():
            yield self, 0, 1
            return
        if min_outcome != self.min_outcome():
            yield self, 0, 1
            return
        generators = [
            iter_die_pop_min(die, die_count, min_outcome)
            for die, die_count in self._dice
        ]
        skip_weight = None
        for pop in itertools.product(*generators):
            total_hits = 0
            result_weight = 1
            next_dice_counts: MutableMapping[Any, int] = defaultdict(int)
            for popped_die, misses, hits, weight in pop:
                if not popped_die.is_empty() and misses > 0:
                    next_dice_counts[popped_die] += misses
                total_hits += hits
                result_weight *= weight
            popped_keep_tuple, result_count = pop_min_from_keep_tuple(
                self.keep_tuple(), total_hits)
            popped_pool = Pool._new_from_mapping(next_dice_counts,
                                                 self._outcomes[1:],
                                                 popped_keep_tuple)
            if not any(popped_keep_tuple):
                # Dump all dice in exchange for the denominator.
                skip_weight = (skip_weight or
                               0) + result_weight * popped_pool.denominator()
                continue

            yield popped_pool, result_count, result_weight

        if skip_weight is not None:
            popped_pool = Pool._new_raw((), self._outcomes[1:], ())
            yield popped_pool, sum(self.keep_tuple()), skip_weight

    def _generate_max(self, max_outcome) -> Iterator[tuple['Pool', int, int]]:
        """Pops the given outcome from this pool, if it is the max outcome.

        Yields:
            popped_pool: The pool after the max outcome is popped.
            count: The number of dice that rolled the max outcome, after
                accounting for keep_tuple.
            weight: The weight of this incremental result.
        """
        if not self.outcomes():
            yield self, 0, 1
            return
        if max_outcome != self.max_outcome():
            yield self, 0, 1
            return
        generators = [
            iter_die_pop_max(die, die_count, max_outcome)
            for die, die_count in self._dice
        ]
        skip_weight = None
        for pop in itertools.product(*generators):
            total_hits = 0
            result_weight = 1
            next_dice_counts: MutableMapping[Any, int] = defaultdict(int)
            for popped_die, misses, hits, weight in pop:
                if not popped_die.is_empty() and misses > 0:
                    next_dice_counts[popped_die] += misses
                total_hits += hits
                result_weight *= weight
            popped_keep_tuple, result_count = pop_max_from_keep_tuple(
                self.keep_tuple(), total_hits)
            popped_pool = Pool._new_from_mapping(next_dice_counts,
                                                 self._outcomes[:-1],
                                                 popped_keep_tuple)
            if not any(popped_keep_tuple):
                # Dump all dice in exchange for the denominator.
                skip_weight = (skip_weight or
                               0) + result_weight * popped_pool.denominator()
                continue

            yield popped_pool, result_count, result_weight

        if skip_weight is not None:
            popped_pool = Pool._new_raw((), self._outcomes[:-1], ())
            yield popped_pool, sum(self.keep_tuple()), skip_weight

    def _set_keep_tuple(self, keep_tuple: tuple[int,
                                                ...]) -> 'KeepGenerator[T]':
        return Pool._new_raw(self._dice, self._outcomes, keep_tuple)

    def additive_union(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        args = tuple(
            icepool.expression.multiset_expression.
            implicit_convert_to_expression(arg) for arg in args)
        if all(isinstance(arg, Pool) for arg in args):
            pools = cast(tuple[Pool[T], ...], args)
            keep_tuple: tuple[int, ...] = tuple(
                reduce(operator.add, (pool.keep_tuple() for pool in pools),
                       ()))
            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
                                           for x in keep_tuple):
                # All sorted positions count the same, so we can merge the
                # pools.
                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
                for pool in pools:
                    for die, die_count in pool._dice:
                        dice[die] += die_count
                outcomes = icepool.sorted_union(*(pool.outcomes()
                                                  for pool in pools))
                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
        return KeepGenerator.additive_union(*args)

    def __str__(self) -> str:
        return (
            f'Pool of {self.raw_size()} dice with keep_tuple={self.keep_tuple()}\n'
            + ''.join(f'  {repr(die)} : {count},\n'
                      for die, count in self._dice))

    @cached_property
    def _local_hash_key(self) -> tuple:
        return Pool, self._dice, self._outcomes, self._keep_tuple
```
Represents a multiset of outcomes resulting from the roll of several dice.

This should be used in conjunction with MultisetEvaluator to generate a result.

Note that operators are performed on the multiset of rolls, not the multiset of dice. For example, d6.pool(3) - d6.pool(3) is not an empty pool, but an expression meaning "roll two pools of 3d6 and get the rolls from the first pool, with rolls in the second pool cancelling matching rolls in the first pool one-for-one".
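The one-for-one cancellation can be illustrated with plain multisets: `collections.Counter` subtraction keeps only positive counts, matching the pool-difference semantics described above (a sketch with fixed rolls, not actual icepool code):

```python
from collections import Counter

# One concrete pair of rolls from two 3d6 pools:
first = Counter([6, 4, 3])
second = Counter([4, 4, 2])

# Matching rolls in the second pool cancel rolls in the first
# pool one-for-one; only positive counts survive.
difference = first - second
print(sorted(difference.elements()))  # [3, 6]
```

The second pool's 4s cancel the single 4 in the first pool, and its 2 cancels nothing, leaving the 6 and the 3.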
117 @classmethod 118 def clear_cache(cls): 119 """Clears the global pool cache.""" 120 Pool._new_raw.cache_clear()
Clears the global pool cache.
140 def raw_size(self) -> int: 141 """The number of dice in this pool before the keep_tuple is applied.""" 142 return self._raw_size
The number of dice in this pool before the keep_tuple is applied.
162 def unique_dice(self) -> Collection['icepool.Die[T]']: 163 """The collection of unique dice in this pool.""" 164 return self._unique_dice
The collection of unique dice in this pool.
166 def outcomes(self) -> Sequence[T]: 167 """The union of possible outcomes among all dice in this pool in ascending order.""" 168 return self._outcomes
The union of possible outcomes among all dice in this pool in ascending order.
170 def local_order_preference(self) -> tuple[Order, OrderReason]: 171 can_truncate_min, can_truncate_max = icepool.order.can_truncate( 172 self.unique_dice()) 173 if can_truncate_min and not can_truncate_max: 174 return Order.Ascending, OrderReason.PoolComposition 175 if can_truncate_max and not can_truncate_min: 176 return Order.Descending, OrderReason.PoolComposition 177 178 lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple()) 179 if lo_skip > hi_skip: 180 return Order.Descending, OrderReason.KeepSkip 181 if hi_skip > lo_skip: 182 return Order.Ascending, OrderReason.KeepSkip 183 184 return Order.Any, OrderReason.NoPreference
Any ordering that is preferred or required by this expression node.
186 def min_outcome(self) -> T: 187 """The min outcome among all dice in this pool.""" 188 return self._outcomes[0]
The min outcome among all dice in this pool.
190 def max_outcome(self) -> T: 191 """The max outcome among all dice in this pool.""" 192 return self._outcomes[-1]
The max outcome among all dice in this pool.
293 def additive_union( 294 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 295 ) -> 'MultisetExpression[T]': 296 args = tuple( 297 icepool.expression.multiset_expression. 298 implicit_convert_to_expression(arg) for arg in args) 299 if all(isinstance(arg, Pool) for arg in args): 300 pools = cast(tuple[Pool[T], ...], args) 301 keep_tuple: tuple[int, ...] = tuple( 302 reduce(operator.add, (pool.keep_tuple() for pool in pools), 303 ())) 304 if len(keep_tuple) == 0 or all(x == keep_tuple[0] 305 for x in keep_tuple): 306 # All sorted positions count the same, so we can merge the 307 # pools. 308 dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int) 309 for pool in pools: 310 for die, die_count in pool._dice: 311 dice[die] += die_count 312 outcomes = icepool.sorted_union(*(pool.outcomes() 313 for pool in pools)) 314 return Pool._new_from_mapping(dice, outcomes, keep_tuple) 315 return KeepGenerator.additive_union(*args)
The combined elements from all of the multisets.

Same as a + b + c + ....

Any resulting counts that would be negative are set to zero.

Example:
[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
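The example can be checked with `collections.Counter`, whose `+` operator adds counts per outcome in the same way (an illustration in plain Python, not icepool code):

```python
from collections import Counter

left = Counter([1, 2, 2, 3])
right = Counter([1, 2, 4])
union = left + right  # counts add per outcome; non-positive counts are dropped
print(sorted(union.elements()))  # [1, 1, 2, 2, 2, 3, 4]
```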
328def standard_pool( 329 die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]': 330 """A `Pool` of standard dice (e.g. d6, d8...). 331 332 Args: 333 die_sizes: A collection of die sizes, which will put one die of that 334 size in the pool for each element. 335 Or, a mapping of die sizes to how many dice of that size to put 336 into the pool. 337 If empty, the pool will be considered to consist of zero zeros. 338 """ 339 if not die_sizes: 340 return Pool({icepool.Die([0]): 0}) 341 if isinstance(die_sizes, Mapping): 342 die_sizes = list( 343 itertools.chain.from_iterable([k] * v 344 for k, v in die_sizes.items())) 345 return Pool(list(icepool.d(x) for x in die_sizes))
A Pool of standard dice (e.g. d6, d8...).

Arguments:
- die_sizes: A collection of die sizes, which will put one die of that size in the pool for each element. Or, a mapping of die sizes to how many dice of that size to put into the pool. If empty, the pool will be considered to consist of zero zeros.
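The mapping form is expanded into one entry per die before the dice are constructed, mirroring the `itertools.chain.from_iterable` expansion visible in the source above. A standalone sketch of just that expansion step (the helper name `expand_die_sizes` is invented for illustration):

```python
import itertools

def expand_die_sizes(die_sizes: dict[int, int]) -> list[int]:
    """Flatten a {die_size: count} mapping into a list of die sizes,
    e.g. {6: 3, 8: 2} -> [6, 6, 6, 8, 8]."""
    return list(
        itertools.chain.from_iterable(
            [size] * count for size, count in die_sizes.items()))

print(expand_die_sizes({6: 3, 8: 2}))  # [6, 6, 6, 8, 8]
```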
29class MultisetGenerator(MultisetExpression[T]): 30 """Abstract base class for generating multisets. 31 32 These include dice pools (`Pool`) and card deals (`Deal`). Most likely you 33 will be using one of these two rather than writing your own subclass of 34 `MultisetGenerator`. 35 36 The multisets are incrementally generated one outcome at a time. 37 For each outcome, a `count` and `weight` are generated, along with a 38 smaller generator to produce the rest of the multiset. 39 40 You can perform simple evaluations using built-in operators and methods in 41 this class. 42 For more complex evaluations and better performance, particularly when 43 multiple generators are involved, you will want to write your own subclass 44 of `MultisetEvaluator`. 45 """ 46 47 _children = () 48 49 def has_parameters(self) -> bool: 50 return False 51 52 # Overridden to switch body generators with variables. 53 54 @property 55 def _body_inputs(self) -> 'tuple[icepool.MultisetGenerator, ...]': 56 return (self, ) 57 58 def _detach(self, body_inputs: 'list[MultisetExpressionBase]' = []): 59 result = icepool.MultisetVariable(False, len(body_inputs)) 60 body_inputs.append(self) 61 return result 62 63 def _apply_variables(self, outcome: T, body_counts: tuple[int, ...], 64 param_counts: tuple[int, ...]): 65 raise icepool.MultisetVariableError( 66 '_detach should have been called before _apply_variables.')
Abstract base class for generating multisets.

These include dice pools (Pool) and card deals (Deal). Most likely you will be using one of these two rather than writing your own subclass of MultisetGenerator.

The multisets are incrementally generated one outcome at a time. For each outcome, a count and weight are generated, along with a smaller generator to produce the rest of the multiset.

You can perform simple evaluations using built-in operators and methods in this class. For more complex evaluations and better performance, particularly when multiple generators are involved, you will want to write your own subclass of MultisetEvaluator.
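The incremental scheme can be sketched as a small dynamic program: walk the outcomes in order and, at each outcome, branch on the count of dice that rolled it, multiplying in the corresponding weight. The toy function below (`sum_distribution`, invented for this sketch; icepool's real implementation generalizes and caches this) computes the weight of each possible sum of a pool of identical standard dice:

```python
from collections import defaultdict
from math import comb

def sum_distribution(n_dice: int, die_size: int) -> dict[int, int]:
    """Weight of each possible sum of n_dice dice with faces 1..die_size,
    built by processing one outcome at a time (descending)."""
    # State: (dice not yet assigned an outcome, partial sum) -> weight.
    states = {(n_dice, 0): 1}
    for outcome in range(die_size, 1, -1):
        next_states: dict[tuple[int, int], int] = defaultdict(int)
        for (remaining, total), weight in states.items():
            # Branch on how many of the remaining dice rolled this outcome.
            for count in range(remaining + 1):
                key = (remaining - count, total + count * outcome)
                next_states[key] += weight * comb(remaining, count)
        states = dict(next_states)
    # The lowest outcome (1) takes all remaining dice.
    result: dict[int, int] = defaultdict(int)
    for (remaining, total), weight in states.items():
        result[total + remaining] += weight
    return dict(result)

dist = sum_distribution(3, 6)
print(dist[10], sum(dist.values()))  # 27 216
```

For 3d6 this yields 27 of the 216 equally weighted outcomes summing to 10, matching the classic table; the binomial branching factors multiply into multinomial coefficients, so the total weight is always die_size**n_dice.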
70class MultisetExpression(MultisetExpressionBase[T, int], 71 Expandable[tuple[T, ...]]): 72 """Abstract base class representing an expression that operates on single multisets. 73 74 There are three types of multiset expressions: 75 76 * `MultisetGenerator`, which produce raw outcomes and counts. 77 * `MultisetOperator`, which takes outcomes with one or more counts and 78 produces a count. 79 * `MultisetVariable`, which is a temporary placeholder for some other 80 expression. 81 82 Expression methods can be applied to `MultisetGenerator`s to do simple 83 evaluations. For joint evaluations, try `multiset_function`. 84 85 Use the provided operations to build up more complicated 86 expressions, or to attach a final evaluator. 87 88 Operations include: 89 90 | Operation | Count / notes | 91 |:----------------------------|:--------------------------------------------| 92 | `additive_union`, `+` | `l + r` | 93 | `difference`, `-` | `l - r` | 94 | `intersection`, `&` | `min(l, r)` | 95 | `union`, `\\|` | `max(l, r)` | 96 | `symmetric_difference`, `^` | `abs(l - r)` | 97 | `multiply_counts`, `*` | `count * n` | 98 | `divide_counts`, `//` | `count // n` | 99 | `modulo_counts`, `%` | `count % n` | 100 | `keep_counts` | `count if count >= n else 0` etc. 
| 101 | unary `+` | same as `keep_counts_ge(0)` | 102 | unary `-` | reverses the sign of all counts | 103 | `unique` | `min(count, n)` | 104 | `keep_outcomes` | `count if outcome in t else 0` | 105 | `drop_outcomes` | `count if outcome not in t else 0` | 106 | `map_counts` | `f(outcome, *counts)` | 107 | `keep`, `[]` | less capable than `KeepGenerator` version | 108 | `highest` | less capable than `KeepGenerator` version | 109 | `lowest` | less capable than `KeepGenerator` version | 110 111 | Evaluator | Summary | 112 |:-------------------------------|:---------------------------------------------------------------------------| 113 | `issubset`, `<=` | Whether the left side's counts are all <= their counterparts on the right | 114 | `issuperset`, `>=` | Whether the left side's counts are all >= their counterparts on the right | 115 | `isdisjoint` | Whether the left side has no positive counts in common with the right side | 116 | `<` | As `<=`, but `False` if the two multisets are equal | 117 | `>` | As `>=`, but `False` if the two multisets are equal | 118 | `==` | Whether the left side has all the same counts as the right side | 119 | `!=` | Whether the left side has any different counts to the right side | 120 | `expand` | All elements in ascending order | 121 | `sum` | Sum of all elements | 122 | `count` | The number of elements | 123 | `any` | Whether there is at least 1 element | 124 | `highest_outcome_and_count` | The highest outcome and how many of that outcome | 125 | `all_counts` | All counts in descending order | 126 | `largest_count` | The single largest count, aka x-of-a-kind | 127 | `largest_count_and_outcome` | Same but also with the corresponding outcome | 128 | `count_subset`, `//` | The number of times the right side is contained in the left side | 129 | `largest_straight` | Length of longest consecutive sequence | 130 | `largest_straight_and_outcome` | Same but also with the corresponding outcome | 131 | `all_straights` | Lengths of all 
consecutive sequences in descending order | 132 """ 133 134 @property 135 def _variable_type(self) -> type: 136 return icepool.MultisetVariable 137 138 # Abstract overrides with more specific signatures. 139 @abstractmethod 140 def _prepare(self) -> Iterator[tuple['MultisetExpression[T]', int]]: 141 ... 142 143 @abstractmethod 144 def _generate_min( 145 self, min_outcome: T 146 ) -> Iterator[tuple['MultisetExpression[T]', int, int]]: 147 ... 148 149 @abstractmethod 150 def _generate_max( 151 self, max_outcome: T 152 ) -> Iterator[tuple['MultisetExpression[T]', int, int]]: 153 ... 154 155 @abstractmethod 156 def _detach( 157 self, 158 body_inputs: 'list[MultisetExpressionBase]' = [] 159 ) -> 'MultisetExpression[T]': 160 ... 161 162 @abstractmethod 163 def _apply_variables( 164 self, outcome: T, body_counts: tuple[int, ...], 165 param_counts: tuple[int, 166 ...]) -> 'tuple[MultisetExpression[T], int]': 167 ... 168 169 @property 170 def _items_for_cartesian_product( 171 self) -> Sequence[tuple[tuple[T, ...], int]]: 172 expansion = cast('icepool.Die[tuple[T, ...]]', self.expand()) 173 return expansion.items() 174 175 # Binary operators. 176 177 def __add__(self, 178 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 179 /) -> 'MultisetExpression[T]': 180 try: 181 return MultisetExpression.additive_union(self, other) 182 except ImplicitConversionError: 183 return NotImplemented 184 185 def __radd__( 186 self, 187 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 188 /) -> 'MultisetExpression[T]': 189 try: 190 return MultisetExpression.additive_union(other, self) 191 except ImplicitConversionError: 192 return NotImplemented 193 194 def additive_union( 195 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 196 ) -> 'MultisetExpression[T]': 197 """The combined elements from all of the multisets. 198 199 Same as `a + b + c + ...`. 200 201 Any resulting counts that would be negative are set to zero. 
202 203 Example: 204 ```python 205 [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4] 206 ``` 207 """ 208 expressions = tuple( 209 implicit_convert_to_expression(arg) for arg in args) 210 return icepool.operator.MultisetAdditiveUnion(*expressions) 211 212 def __sub__(self, 213 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 214 /) -> 'MultisetExpression[T]': 215 try: 216 return MultisetExpression.difference(self, other) 217 except ImplicitConversionError: 218 return NotImplemented 219 220 def __rsub__( 221 self, 222 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 223 /) -> 'MultisetExpression[T]': 224 try: 225 return MultisetExpression.difference(other, self) 226 except ImplicitConversionError: 227 return NotImplemented 228 229 def difference( 230 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 231 ) -> 'MultisetExpression[T]': 232 """The elements from the left multiset that are not in any of the others. 233 234 Same as `a - b - c - ...`. 235 236 Any resulting counts that would be negative are set to zero. 237 238 Example: 239 ```python 240 [1, 2, 2, 3] - [1, 2, 4] -> [2, 3] 241 ``` 242 243 If no arguments are given, the result will be an empty multiset, i.e. 244 all zero counts. 245 246 Note that, as a multiset operation, this will only cancel elements 1:1. 247 If you want to drop all elements in a set of outcomes regardless of 248 count, either use `drop_outcomes()` instead, or use a large number of 249 counts on the right side. 
250 """ 251 expressions = tuple( 252 implicit_convert_to_expression(arg) for arg in args) 253 return icepool.operator.MultisetDifference(*expressions) 254 255 def __and__(self, 256 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 257 /) -> 'MultisetExpression[T]': 258 try: 259 return MultisetExpression.intersection(self, other) 260 except ImplicitConversionError: 261 return NotImplemented 262 263 def __rand__( 264 self, 265 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 266 /) -> 'MultisetExpression[T]': 267 try: 268 return MultisetExpression.intersection(other, self) 269 except ImplicitConversionError: 270 return NotImplemented 271 272 def intersection( 273 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 274 ) -> 'MultisetExpression[T]': 275 """The elements that all the multisets have in common. 276 277 Same as `a & b & c & ...`. 278 279 Any resulting counts that would be negative are set to zero. 280 281 Example: 282 ```python 283 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 284 ``` 285 286 Note that, as a multiset operation, this will only intersect elements 287 1:1. 288 If you want to keep all elements in a set of outcomes regardless of 289 count, either use `keep_outcomes()` instead, or use a large number of 290 counts on the right side. 
291 """ 292 expressions = tuple( 293 implicit_convert_to_expression(arg) for arg in args) 294 return icepool.operator.MultisetIntersection(*expressions) 295 296 def __or__(self, 297 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 298 /) -> 'MultisetExpression[T]': 299 try: 300 return MultisetExpression.union(self, other) 301 except ImplicitConversionError: 302 return NotImplemented 303 304 def __ror__(self, 305 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 306 /) -> 'MultisetExpression[T]': 307 try: 308 return MultisetExpression.union(other, self) 309 except ImplicitConversionError: 310 return NotImplemented 311 312 def union( 313 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 314 ) -> 'MultisetExpression[T]': 315 """The most of each outcome that appear in any of the multisets. 316 317 Same as `a | b | c | ...`. 318 319 Any resulting counts that would be negative are set to zero. 320 321 Example: 322 ```python 323 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 324 ``` 325 """ 326 expressions = tuple( 327 implicit_convert_to_expression(arg) for arg in args) 328 return icepool.operator.MultisetUnion(*expressions) 329 330 def __xor__(self, 331 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 332 /) -> 'MultisetExpression[T]': 333 try: 334 return MultisetExpression.symmetric_difference(self, other) 335 except ImplicitConversionError: 336 return NotImplemented 337 338 def __rxor__( 339 self, 340 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 341 /) -> 'MultisetExpression[T]': 342 try: 343 # Symmetric. 344 return MultisetExpression.symmetric_difference(self, other) 345 except ImplicitConversionError: 346 return NotImplemented 347 348 def symmetric_difference( 349 self, 350 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 351 /) -> 'MultisetExpression[T]': 352 """The elements that appear in the left or right multiset but not both. 353 354 Same as `a ^ b`. 
355 356 Specifically, this produces the absolute difference between counts. 357 If you don't want negative counts to be used from the inputs, you can 358 do `+left ^ +right`. 359 360 Example: 361 ```python 362 [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4] 363 ``` 364 """ 365 return icepool.operator.MultisetSymmetricDifference( 366 self, implicit_convert_to_expression(other)) 367 368 def keep_outcomes( 369 self, target: 370 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 371 /) -> 'MultisetExpression[T]': 372 """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero. 373 374 This is similar to `intersection()`, except the right side is considered 375 to have unlimited multiplicity. 376 377 Args: 378 target: A callable returning `True` iff the outcome should be kept, 379 or an expression or collection of outcomes to keep. 380 """ 381 if isinstance(target, MultisetExpression): 382 return icepool.operator.MultisetFilterOutcomesBinary(self, target) 383 else: 384 return icepool.operator.MultisetFilterOutcomes(self, target=target) 385 386 def drop_outcomes( 387 self, target: 388 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 389 /) -> 'MultisetExpression[T]': 390 """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest. 391 392 This is similar to `difference()`, except the right side is considered 393 to have unlimited multiplicity. 394 395 Args: 396 target: A callable returning `True` iff the outcome should be 397 dropped, or an expression or collection of outcomes to drop. 398 """ 399 if isinstance(target, MultisetExpression): 400 return icepool.operator.MultisetFilterOutcomesBinary(self, 401 target, 402 invert=True) 403 else: 404 return icepool.operator.MultisetFilterOutcomes(self, 405 target=target, 406 invert=True) 407 408 # Adjust counts. 
409 410 def map_counts(*args: 411 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 412 function: Callable[..., int]) -> 'MultisetExpression[T]': 413 """Maps the counts to new counts. 414 415 Args: 416 function: A function that takes `outcome, *counts` and produces a 417 combined count. 418 """ 419 expressions = tuple( 420 implicit_convert_to_expression(arg) for arg in args) 421 return icepool.operator.MultisetMapCounts(*expressions, 422 function=function) 423 424 def __mul__(self, n: int) -> 'MultisetExpression[T]': 425 if not isinstance(n, int): 426 return NotImplemented 427 return self.multiply_counts(n) 428 429 # Commutable in this case. 430 def __rmul__(self, n: int) -> 'MultisetExpression[T]': 431 if not isinstance(n, int): 432 return NotImplemented 433 return self.multiply_counts(n) 434 435 def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]': 436 """Multiplies all counts by n. 437 438 Same as `self * n`. 439 440 Example: 441 ```python 442 Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3] 443 ``` 444 """ 445 return icepool.operator.MultisetMultiplyCounts(self, constant=n) 446 447 @overload 448 def __floordiv__(self, other: int) -> 'MultisetExpression[T]': 449 ... 
450 451 @overload 452 def __floordiv__( 453 self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 454 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 455 """Same as divide_counts().""" 456 457 @overload 458 def __floordiv__( 459 self, 460 other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 461 ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]': 462 """Same as count_subset().""" 463 464 def __floordiv__( 465 self, 466 other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 467 ) -> 'MultisetExpression[T] | icepool.Die[int] | MultisetFunctionRawResult[T, int]': 468 if isinstance(other, int): 469 return self.divide_counts(other) 470 else: 471 return self.count_subset(other) 472 473 def divide_counts(self, n: int, /) -> 'MultisetExpression[T]': 474 """Divides all counts by n (rounding down). 475 476 Same as `self // n`. 477 478 Example: 479 ```python 480 Pool([1, 2, 2, 3]) // 2 -> [2] 481 ``` 482 """ 483 return icepool.operator.MultisetFloordivCounts(self, constant=n) 484 485 def __mod__(self, n: int, /) -> 'MultisetExpression[T]': 486 if not isinstance(n, int): 487 return NotImplemented 488 return icepool.operator.MultisetModuloCounts(self, constant=n) 489 490 def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]': 491 """Takes the modulo of all counts by n. 492 493 Same as `self % n`. 494 495 Example: 496 ```python 497 Pool([1, 2, 2, 3]) % 2 -> [1, 3] 498 ``` 499 """ 500 return self % n 501 502 def __pos__(self) -> 'MultisetExpression[T]': 503 """Sets all negative counts to zero.""" 504 return icepool.operator.MultisetKeepCounts(self, 505 comparison='>=', 506 constant=0) 507 508 def __neg__(self) -> 'MultisetExpression[T]': 509 """As -1 * self.""" 510 return -1 * self 511 512 def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=', 513 '>'], n: int, 514 /) -> 'MultisetExpression[T]': 515 """Keeps counts fitting the comparison, treating the rest as zero. 
516 517 For example, `expression.keep_counts('>=', 2)` would keep pairs, 518 triplets, etc. and drop singles. 519 520 ```python 521 Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3] 522 ``` 523 524 Args: 525 comparison: The comparison to use. 526 n: The number to compare counts against. 527 """ 528 return icepool.operator.MultisetKeepCounts(self, 529 comparison=comparison, 530 constant=n) 531 532 def unique(self, n: int = 1, /) -> 'MultisetExpression[T]': 533 """Counts each outcome at most `n` times. 534 535 For example, `generator.unique(2)` would count each outcome at most 536 twice. 537 538 Example: 539 ```python 540 Pool([1, 2, 2, 3]).unique() -> [1, 2, 3] 541 ``` 542 """ 543 return icepool.operator.MultisetUnique(self, constant=n) 544 545 # Keep highest / lowest. 546 547 @overload 548 def keep( 549 self, index: slice | Sequence[int | EllipsisType] 550 ) -> 'MultisetExpression[T]': 551 ... 552 553 @overload 554 def keep(self, 555 index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]': 556 ... 557 558 def keep( 559 self, index: slice | Sequence[int | EllipsisType] | int 560 ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]': 561 """Selects elements after drawing and sorting. 562 563 This is less capable than the `KeepGenerator` version. 564 In particular, it does not know how many elements it is selecting from, 565 so it must be anchored at the starting end. The advantage is that it 566 can be applied to any expression. 567 568 The valid types of argument are: 569 570 * A `slice`. If both start and stop are provided, they must both be 571 non-negative or both be negative. step is not supported. 572 * A sequence of `int` with `...` (`Ellipsis`) at exactly one end. 573 Each sorted element will be counted that many times, with the 574 `Ellipsis` treated as enough zeros (possibly "negative") to 575 fill the rest of the elements. 576 * An `int`, which evaluates by taking the element at the specified 577 index. 
In this case the result is a `Die`. 578 579 Negative incoming counts are treated as zero counts. 580 581 Use the `[]` operator for the same effect as this method. 582 """ 583 if isinstance(index, int): 584 return icepool.evaluator.keep_evaluator.evaluate(self, index=index) 585 else: 586 return icepool.operator.MultisetKeep(self, index=index) 587 588 @overload 589 def __getitem__( 590 self, index: slice | Sequence[int | EllipsisType] 591 ) -> 'MultisetExpression[T]': 592 ... 593 594 @overload 595 def __getitem__( 596 self, 597 index: int) -> 'icepool.Die[T] | MultisetFunctionRawResult[T, T]': 598 ... 599 600 def __getitem__( 601 self, index: slice | Sequence[int | EllipsisType] | int 602 ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]': 603 return self.keep(index) 604 605 def lowest(self, 606 keep: int | None = None, 607 drop: int | None = None) -> 'MultisetExpression[T]': 608 """Keep some of the lowest elements from this multiset and drop the rest. 609 610 In contrast to the die and free function versions, this does not 611 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 612 Alternatively, you can perform some other evaluation. 613 614 This requires the outcomes to be evaluated in ascending order. 615 616 Args: 617 keep, drop: These arguments work together: 618 * If neither are provided, the single lowest element 619 will be kept. 620 * If only `keep` is provided, the `keep` lowest elements 621 will be kept. 622 * If only `drop` is provided, the `drop` lowest elements 623 will be dropped and the rest will be kept. 624 * If both are provided, `drop` lowest elements will be dropped, 625 then the next `keep` lowest elements will be kept. 626 """ 627 index = lowest_slice(keep, drop) 628 return self.keep(index) 629 630 def highest(self, 631 keep: int | None = None, 632 drop: int | None = None) -> 'MultisetExpression[T]': 633 """Keep some of the highest elements from this multiset and drop the rest. 
634 635 In contrast to the die and free function versions, this does not 636 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 637 Alternatively, you can perform some other evaluation. 638 639 This requires the outcomes to be evaluated in descending order. 640 641 Args: 642 keep, drop: These arguments work together: 643 * If neither are provided, the single highest element 644 will be kept. 645 * If only `keep` is provided, the `keep` highest elements 646 will be kept. 647 * If only `drop` is provided, the `drop` highest elements 648 will be dropped and the rest will be kept. 649 * If both are provided, `drop` highest elements will be dropped, 650 then the next `keep` highest elements will be kept. 651 """ 652 index = highest_slice(keep, drop) 653 return self.keep(index) 654 655 # Matching. 656 657 def sort_match(self, 658 comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 659 other: 'MultisetExpression[T]', 660 /, 661 order: Order = Order.Descending) -> 'MultisetExpression[T]': 662 """EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner. 663 664 Extra elements: If `self` has more elements than `other`, whether the 665 extra elements are kept depends on the `order` and `comparison`: 666 * Descending: kept for `'>='`, `'>'` 667 * Ascending: kept for `'<='`, `'<'` 668 669 Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of 670 *RISK*. Which pairs did the attacker win? 671 ```python 672 d6.pool(3).highest(2).sort_match('>', d6.pool(2)) 673 ``` 674 675 Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. 676 In this case the 4 would be blocked since the attacker lost that pair, 677 leaving the attacker's 6 and 3. If you don't want to keep the extra 678 element, you can use `highest`. 
679 ```python 680 Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3] 681 Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6] 682 ``` 683 684 Contrast `maximum_match()`, which first creates the maximum number of 685 pairs that fit the comparison, not necessarily in sorted order. 686 In the above example, `maximum_match()` would allow the defender to 687 assign their 5s to block both the 4 and the 3. 688 689 Negative incoming counts are treated as zero counts. 690 691 Args: 692 comparison: The comparison to filter by. If you want to drop rather 693 than keep, use the complementary comparison: 694 * `'=='` vs. `'!='` 695 * `'<='` vs. `'>'` 696 * `'>='` vs. `'<'` 697 other: The other multiset to match elements with. 698 order: The order in which to sort before forming matches. 699 Default is descending. 700 """ 701 other = implicit_convert_to_expression(other) 702 703 match comparison: 704 case '==': 705 lesser, tie, greater = 0, 1, 0 706 case '!=': 707 lesser, tie, greater = 1, 0, 1 708 case '<=': 709 lesser, tie, greater = 1, 1, 0 710 case '<': 711 lesser, tie, greater = 1, 0, 0 712 case '>=': 713 lesser, tie, greater = 0, 1, 1 714 case '>': 715 lesser, tie, greater = 0, 0, 1 716 case _: 717 raise ValueError(f'Invalid comparison {comparison}') 718 719 if order > 0: 720 left_first = lesser 721 right_first = greater 722 else: 723 left_first = greater 724 right_first = lesser 725 726 return icepool.operator.MultisetSortMatch(self, 727 other, 728 order=order, 729 tie=tie, 730 left_first=left_first, 731 right_first=right_first) 732 733 def maximum_match_highest( 734 self, comparison: Literal['<=', 735 '<'], other: 'MultisetExpression[T]', /, 736 *, keep: Literal['matched', 737 'unmatched']) -> 'MultisetExpression[T]': 738 """EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`. 
739 740 This matches elements of `self` with elements of `other`, such that in 741 each pair the element from `self` fits the `comparison` with the 742 element from `other`. As many such pairs of elements will be matched as 743 possible, preferring the highest matchable elements of `self`. 744 Finally, either the matched or unmatched elements from `self` are kept. 745 746 This requires that outcomes be evaluated in descending order. 747 748 Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 749 3d6. Defender dice can be used to block attacker dice of equal or lesser 750 value, and the defender prefers to block the highest attacker dice 751 possible. Which attacker dice were not blocked? 752 ```python 753 d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum() 754 ``` 755 756 Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. 757 Then the result would be [6, 1]. 758 ```python 759 d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched') 760 -> [6, 1] 761 ``` 762 763 Contrast `sort_match()`, which first creates pairs in 764 sorted order and then filters them by `comparison`. 765 In the above example, `sort_match()` would force the defender to match 766 against the 5 and the 4, which would only allow them to block the 4. 767 768 Negative incoming counts are treated as zero counts. 769 770 Args: 771 comparison: Either `'<='` or `'<'`. 772 other: The other multiset to match elements with. 773 keep: Whether 'matched' or 'unmatched' elements are to be kept. 
774 """ 775 if keep == 'matched': 776 keep_boolean = True 777 elif keep == 'unmatched': 778 keep_boolean = False 779 else: 780 raise ValueError(f"keep must be either 'matched' or 'unmatched'") 781 782 other = implicit_convert_to_expression(other) 783 match comparison: 784 case '<=': 785 match_equal = True 786 case '<': 787 match_equal = False 788 case _: 789 raise ValueError(f'Invalid comparison {comparison}') 790 return icepool.operator.MultisetMaximumMatch(self, 791 other, 792 order=Order.Descending, 793 match_equal=match_equal, 794 keep=keep_boolean) 795 796 def maximum_match_lowest( 797 self, comparison: Literal['>=', 798 '>'], other: 'MultisetExpression[T]', /, 799 *, keep: Literal['matched', 800 'unmatched']) -> 'MultisetExpression[T]': 801 """EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`. 802 803 This matches elements of `self` with elements of `other`, such that in 804 each pair the element from `self` fits the `comparision` with the 805 element from `other`. As many such pairs of elements will be matched as 806 possible, preferring the lowest matchable elements of `self`. 807 Finally, either the matched or unmatched elements from `self` are kept. 808 809 This requires that outcomes be evaluated in ascending order. 810 811 Contrast `sort_match()`, which first creates pairs in 812 sorted order and then filters them by `comparison`. 813 814 Args: 815 comparison: Either `'>='` or `'>'`. 816 other: The other multiset to match elements with. 817 keep: Whether 'matched' or 'unmatched' elements are to be kept. 
818 """ 819 if keep == 'matched': 820 keep_boolean = True 821 elif keep == 'unmatched': 822 keep_boolean = False 823 else: 824 raise ValueError(f"keep must be either 'matched' or 'unmatched'") 825 826 other = implicit_convert_to_expression(other) 827 match comparison: 828 case '>=': 829 match_equal = True 830 case '>': 831 match_equal = False 832 case _: 833 raise ValueError(f'Invalid comparison {comparison}') 834 return icepool.operator.MultisetMaximumMatch(self, 835 other, 836 order=Order.Ascending, 837 match_equal=match_equal, 838 keep=keep_boolean) 839 840 # Evaluations. 841 842 def expand( 843 self, 844 order: Order = Order.Ascending 845 ) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]': 846 """Evaluation: All elements of the multiset in ascending order. 847 848 This is expensive and not recommended unless there are few possibilities. 849 850 Args: 851 order: Whether the elements are in ascending (default) or descending 852 order. 853 """ 854 return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order) 855 856 def sum( 857 self, 858 map: Callable[[T], U] | Mapping[T, U] | None = None 859 ) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]': 860 """Evaluation: The sum of all elements.""" 861 if map is None: 862 return icepool.evaluator.sum_evaluator.evaluate(self) 863 else: 864 return icepool.evaluator.SumEvaluator(map).evaluate(self) 865 866 def count(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 867 """Evaluation: The total number of elements in the multiset. 868 869 This is usually not very interesting unless some other operation is 870 performed first. Examples: 871 872 `generator.unique().count()` will count the number of unique outcomes. 873 874 `(generator & [4, 5, 6]).count()` will count up to one each of 875 4, 5, and 6. 
876 """ 877 return icepool.evaluator.count_evaluator.evaluate(self) 878 879 def any(self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 880 """Evaluation: Whether the multiset has at least one positive count. """ 881 return icepool.evaluator.any_evaluator.evaluate(self) 882 883 def highest_outcome_and_count( 884 self 885 ) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]': 886 """Evaluation: The highest outcome with positive count, along with that count. 887 888 If no outcomes have positive count, the min outcome will be returned with 0 count. 889 """ 890 return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate( 891 self) 892 893 def all_counts( 894 self, 895 filter: int | Literal['all'] = 1 896 ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]': 897 """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets. 898 899 The sizes are in **descending** order. 900 901 Args: 902 filter: Any counts below this value will not be in the output. 903 For example, `filter=2` will only produce pairs and better. 904 If `'all'`, no filtering will be done. 905 906 Why not just place `keep_counts_ge()` before this? 907 `keep_counts_ge()` operates by setting counts to zero, so you 908 would still need an argument to specify whether you want to 909 output zero counts. So we might as well use the argument to do 910 both. 
911 """ 912 return icepool.evaluator.AllCountsEvaluator( 913 filter=filter).evaluate(self) 914 915 def largest_count( 916 self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 917 """Evaluation: The size of the largest matching set among the elements.""" 918 return icepool.evaluator.largest_count_evaluator.evaluate(self) 919 920 def largest_count_and_outcome( 921 self 922 ) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]': 923 """Evaluation: The largest matching set among the elements and the corresponding outcome.""" 924 return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate( 925 self) 926 927 def __rfloordiv__( 928 self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 929 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 930 return implicit_convert_to_expression(other).count_subset(self) 931 932 def count_subset( 933 self, 934 divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 935 /, 936 *, 937 empty_divisor: int | None = None 938 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]': 939 """Evaluation: The number of times the divisor is contained in this multiset. 940 941 Args: 942 divisor: The multiset to divide by. 943 empty_divisor: If the divisor is empty, the outcome will be this. 944 If not set, `ZeroDivisionError` will be raised for an empty 945 right side. 946 947 Raises: 948 ZeroDivisionError: If the divisor may be empty and 949 `empty_divisor` is not set. 950 """ 951 divisor = implicit_convert_to_expression(divisor) 952 return icepool.evaluator.CountSubsetEvaluator( 953 empty_divisor=empty_divisor).evaluate(self, divisor) 954 955 def largest_straight( 956 self: 'MultisetExpression[int]' 957 ) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]': 958 """Evaluation: The size of the largest straight among the elements. 959 960 Outcomes must be `int`s. 
961 """ 962 return icepool.evaluator.largest_straight_evaluator.evaluate(self) 963 964 def largest_straight_and_outcome( 965 self: 'MultisetExpression[int]', 966 priority: Literal['low', 'high'] = 'high', 967 / 968 ) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]': 969 """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight. 970 971 Straight size is prioritized first, then the outcome. 972 973 Outcomes must be `int`s. 974 975 Args: 976 priority: Controls which outcome within the straight is returned, 977 and which straight is picked if there is a tie for largest 978 straight. 979 """ 980 if priority == 'high': 981 return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate( 982 self) 983 elif priority == 'low': 984 return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate( 985 self) 986 else: 987 raise ValueError("priority must be 'low' or 'high'.") 988 989 def all_straights( 990 self: 'MultisetExpression[int]' 991 ) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]': 992 """Evaluation: The sizes of all straights. 993 994 The sizes are in **descending** order. 995 996 Each element can only contribute to one straight, though duplicate 997 elements can produce straights that overlap in outcomes. In this case, 998 elements are preferentially assigned to the longer straight. 999 """ 1000 return icepool.evaluator.all_straights_evaluator.evaluate(self) 1001 1002 def all_straights_reduce_counts( 1003 self: 'MultisetExpression[int]', 1004 reducer: Callable[[int, int], int] = operator.mul 1005 ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]': 1006 """Experimental: All straights with a reduce operation on the counts. 1007 1008 This can be used to evaluate e.g. cribbage-style straight counting. 
1009 1010 The result is a tuple of `(run_length, run_score)`s. 1011 """ 1012 return icepool.evaluator.AllStraightsReduceCountsEvaluator( 1013 reducer=reducer).evaluate(self) 1014 1015 def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1016 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1017 order: Order = Order.Descending, 1018 limit: int | None = None): 1019 """Experimental: Returns the indexes of the originating multisets for each rank in their additive union. 1020 1021 Example: 1022 ```python 1023 MultisetExpression.argsort([10, 9, 5], [9, 9]) 1024 ``` 1025 produces 1026 ```python 1027 ((0,), (0, 1, 1), (0,)) 1028 ``` 1029 1030 Args: 1031 self, *args: The multiset expressions to be evaluated. 1032 order: Which order the ranks are to be emitted. Default is descending. 1033 limit: How many ranks to emit. Default will emit all ranks, which 1034 makes the length of each outcome equal to 1035 `additive_union(+self, +arg1, +arg2, ...).unique().count()` 1036 """ 1037 self = implicit_convert_to_expression(self) 1038 converted_args = [implicit_convert_to_expression(arg) for arg in args] 1039 return icepool.evaluator.ArgsortEvaluator(order=order, 1040 limit=limit).evaluate( 1041 self, *converted_args) 1042 1043 # Comparators. 
1044 1045 def _compare( 1046 self, 1047 right: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1048 operation_class: Type['icepool.evaluator.ComparisonEvaluator'], 1049 *, 1050 truth_value_callback: 'Callable[[], bool] | None' = None 1051 ) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1052 right = icepool.implicit_convert_to_expression(right) 1053 1054 if truth_value_callback is not None: 1055 1056 def data_callback() -> Counts[bool]: 1057 die = cast('icepool.Die[bool]', 1058 operation_class().evaluate(self, right)) 1059 if not isinstance(die, icepool.Die): 1060 raise TypeError('Did not resolve to a die.') 1061 return die._data 1062 1063 return icepool.DieWithTruth(data_callback, truth_value_callback) 1064 else: 1065 return operation_class().evaluate(self, right) 1066 1067 def __lt__(self, 1068 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1069 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1070 try: 1071 return self._compare(other, 1072 icepool.evaluator.IsProperSubsetEvaluator) 1073 except TypeError: 1074 return NotImplemented 1075 1076 def __le__(self, 1077 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1078 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1079 try: 1080 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 1081 except TypeError: 1082 return NotImplemented 1083 1084 def issubset( 1085 self, 1086 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1087 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1088 """Evaluation: Whether this multiset is a subset of the other multiset. 1089 1090 Specifically, if this multiset has a lesser or equal count for each 1091 outcome than the other multiset, this evaluates to `True`; 1092 if there is some outcome for which this multiset has a greater count 1093 than the other multiset, this evaluates to `False`. 1094 1095 `issubset` is the same as `self <= other`. 
1096 1097 `self < other` evaluates a proper subset relation, which is the same 1098 except the result is `False` if the two multisets are exactly equal. 1099 """ 1100 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 1101 1102 def __gt__(self, 1103 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1104 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1105 try: 1106 return self._compare(other, 1107 icepool.evaluator.IsProperSupersetEvaluator) 1108 except TypeError: 1109 return NotImplemented 1110 1111 def __ge__(self, 1112 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1113 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1114 try: 1115 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1116 except TypeError: 1117 return NotImplemented 1118 1119 def issuperset( 1120 self, 1121 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1122 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1123 """Evaluation: Whether this multiset is a superset of the other multiset. 1124 1125 Specifically, if this multiset has a greater or equal count for each 1126 outcome than the other multiset, this evaluates to `True`; 1127 if there is some outcome for which this multiset has a lesser count 1128 than the other multiset, this evaluates to `False`. 1129 1130 A typical use of this evaluation is testing for the presence of a 1131 combo of cards in a hand, e.g. 1132 1133 ```python 1134 deck.deal(5) >= ['a', 'a', 'b'] 1135 ``` 1136 1137 represents the chance that a deal of 5 cards contains at least two 'a's 1138 and one 'b'. 1139 1140 `issuperset` is the same as `self >= other`. 1141 1142 `self > other` evaluates a proper superset relation, which is the same 1143 except the result is `False` if the two multisets are exactly equal. 
1144 """ 1145 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1146 1147 def __eq__( # type: ignore 1148 self, 1149 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1150 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1151 try: 1152 1153 def truth_value_callback() -> bool: 1154 if not isinstance(other, MultisetExpression): 1155 return False 1156 return self._hash_key == other._hash_key 1157 1158 return self._compare(other, 1159 icepool.evaluator.IsEqualSetEvaluator, 1160 truth_value_callback=truth_value_callback) 1161 except TypeError: 1162 return NotImplemented 1163 1164 def __ne__( # type: ignore 1165 self, 1166 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1167 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1168 try: 1169 1170 def truth_value_callback() -> bool: 1171 if not isinstance(other, MultisetExpression): 1172 return False 1173 return self._hash_key != other._hash_key 1174 1175 return self._compare(other, 1176 icepool.evaluator.IsNotEqualSetEvaluator, 1177 truth_value_callback=truth_value_callback) 1178 except TypeError: 1179 return NotImplemented 1180 1181 # We need to define this again since we have an override for __eq__. 1182 def __hash__(self) -> int: 1183 return self._hash 1184 1185 def isdisjoint( 1186 self, 1187 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1188 /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]': 1189 """Evaluation: Whether this multiset is disjoint from the other multiset. 1190 1191 Specifically, this evaluates to `False` if there is any outcome for 1192 which both multisets have positive count, and `True` if there is not. 1193 1194 Negative incoming counts are treated as zero counts. 1195 """ 1196 return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
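The subset, superset, and disjointness evaluations above all compare counts outcome by outcome. As a plain-Python sketch of those semantics over concrete multisets (using `collections.Counter`; icepool's actual evaluators work over dice expressions rather than fixed hands):

```python
from collections import Counter

def issubset(left: Counter, right: Counter) -> bool:
    # True iff the left count is <= the right count for every outcome.
    return all(count <= right[outcome] for outcome, count in left.items())

def isdisjoint(left: Counter, right: Counter) -> bool:
    # True iff no outcome has positive count on both sides.
    return all(right[outcome] <= 0
               for outcome, count in left.items() if count > 0)

hand = Counter(['a', 'a', 'b', 'c', 'd'])
combo = Counter(['a', 'a', 'b'])
print(issubset(combo, hand))    # True: the hand contains two 'a's and a 'b'
print(isdisjoint(combo, hand))  # False: they share outcomes
```

The proper-subset relation (`<`) is then `issubset(left, right)` with equality excluded.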
Abstract base class representing an expression that operates on single multisets.

There are three types of multiset expressions:

* `MultisetGenerator`, which produces raw outcomes and counts.
* `MultisetOperator`, which takes outcomes with one or more counts and produces a count.
* `MultisetVariable`, which is a temporary placeholder for some other expression.

Expression methods can be applied to `MultisetGenerator`s to do simple evaluations. For joint evaluations, try `multiset_function`.

Use the provided operations to build up more complicated expressions, or to attach a final evaluator.
Operations include:

| Operation | Count / notes |
|---|---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `modulo_counts`, `%` | `count % n` |
| `keep_counts` | `count if count >= n else 0` etc. |
| unary `+` | same as `keep_counts_ge(0)` |
| unary `-` | reverses the sign of all counts |
| `unique` | `min(count, n)` |
| `keep_outcomes` | `count if outcome in t else 0` |
| `drop_outcomes` | `count if outcome not in t else 0` |
| `map_counts` | `f(outcome, *counts)` |
| `keep`, `[]` | less capable than `KeepGenerator` version |
| `highest` | less capable than `KeepGenerator` version |
| `lowest` | less capable than `KeepGenerator` version |

| Evaluator | Summary |
|---|---|
| `issubset`, `<=` | Whether the left side's counts are all `<=` their counterparts on the right |
| `issuperset`, `>=` | Whether the left side's counts are all `>=` their counterparts on the right |
| `isdisjoint` | Whether the left side has no positive counts in common with the right side |
| `<` | As `<=`, but `False` if the two multisets are equal |
| `>` | As `>=`, but `False` if the two multisets are equal |
| `==` | Whether the left side has all the same counts as the right side |
| `!=` | Whether the left side has any different counts to the right side |
| `expand` | All elements in ascending order |
| `sum` | Sum of all elements |
| `count` | The number of elements |
| `any` | Whether there is at least 1 element |
| `highest_outcome_and_count` | The highest outcome and how many of that outcome |
| `all_counts` | All counts in descending order |
| `largest_count` | The single largest count, aka x-of-a-kind |
| `largest_count_and_outcome` | Same but also with the corresponding outcome |
| `count_subset`, `//` | The number of times the right side is contained in the left side |
| `largest_straight` | Length of longest consecutive sequence |
| `largest_straight_and_outcome` | Same but also with the corresponding outcome |
| `all_straights` | Lengths of all consecutive sequences in descending order |
194 def additive_union( 195 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 196 ) -> 'MultisetExpression[T]': 197 """The combined elements from all of the multisets. 198 199 Same as `a + b + c + ...`. 200 201 Any resulting counts that would be negative are set to zero. 202 203 Example: 204 ```python 205 [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4] 206 ``` 207 """ 208 expressions = tuple( 209 implicit_convert_to_expression(arg) for arg in args) 210 return icepool.operator.MultisetAdditiveUnion(*expressions)
The combined elements from all of the multisets.

Same as `a + b + c + ...`.

Any resulting counts that would be negative are set to zero.

Example: `[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]`
229 def difference( 230 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 231 ) -> 'MultisetExpression[T]': 232 """The elements from the left multiset that are not in any of the others. 233 234 Same as `a - b - c - ...`. 235 236 Any resulting counts that would be negative are set to zero. 237 238 Example: 239 ```python 240 [1, 2, 2, 3] - [1, 2, 4] -> [2, 3] 241 ``` 242 243 If no arguments are given, the result will be an empty multiset, i.e. 244 all zero counts. 245 246 Note that, as a multiset operation, this will only cancel elements 1:1. 247 If you want to drop all elements in a set of outcomes regardless of 248 count, either use `drop_outcomes()` instead, or use a large number of 249 counts on the right side. 250 """ 251 expressions = tuple( 252 implicit_convert_to_expression(arg) for arg in args) 253 return icepool.operator.MultisetDifference(*expressions)
The elements from the left multiset that are not in any of the others.

Same as `a - b - c - ...`.

Any resulting counts that would be negative are set to zero.

Example: `[1, 2, 2, 3] - [1, 2, 4] -> [2, 3]`

If no arguments are given, the result will be an empty multiset, i.e. all zero counts.

Note that, as a multiset operation, this will only cancel elements 1:1. If you want to drop all elements in a set of outcomes regardless of count, either use `drop_outcomes()` instead, or use a large number of counts on the right side.
272 def intersection( 273 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 274 ) -> 'MultisetExpression[T]': 275 """The elements that all the multisets have in common. 276 277 Same as `a & b & c & ...`. 278 279 Any resulting counts that would be negative are set to zero. 280 281 Example: 282 ```python 283 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 284 ``` 285 286 Note that, as a multiset operation, this will only intersect elements 287 1:1. 288 If you want to keep all elements in a set of outcomes regardless of 289 count, either use `keep_outcomes()` instead, or use a large number of 290 counts on the right side. 291 """ 292 expressions = tuple( 293 implicit_convert_to_expression(arg) for arg in args) 294 return icepool.operator.MultisetIntersection(*expressions)
The elements that all the multisets have in common.

Same as `a & b & c & ...`.

Any resulting counts that would be negative are set to zero.

Example: `[1, 2, 2, 3] & [1, 2, 4] -> [1, 2]`

Note that, as a multiset operation, this will only intersect elements 1:1. If you want to keep all elements in a set of outcomes regardless of count, either use `keep_outcomes()` instead, or use a large number of counts on the right side.
312 def union( 313 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 314 ) -> 'MultisetExpression[T]': 315 """The most of each outcome that appears in any of the multisets. 316 317 Same as `a | b | c | ...`. 318 319 Any resulting counts that would be negative are set to zero. 320 321 Example: 322 ```python 323 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 324 ``` 325 """ 326 expressions = tuple( 327 implicit_convert_to_expression(arg) for arg in args) 328 return icepool.operator.MultisetUnion(*expressions)
The most of each outcome that appears in any of the multisets.

Same as `a | b | c | ...`.

Any resulting counts that would be negative are set to zero.

Example: `[1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]`
348 def symmetric_difference( 349 self, 350 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 351 /) -> 'MultisetExpression[T]': 352 """The elements that appear in the left or right multiset but not both. 353 354 Same as `a ^ b`. 355 356 Specifically, this produces the absolute difference between counts. 357 If you don't want negative counts to be used from the inputs, you can 358 do `+left ^ +right`. 359 360 Example: 361 ```python 362 [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4] 363 ``` 364 """ 365 return icepool.operator.MultisetSymmetricDifference( 366 self, implicit_convert_to_expression(other))
The elements that appear in the left or right multiset but not both.

Same as `a ^ b`.

Specifically, this produces the absolute difference between counts. If you don't want negative counts to be used from the inputs, you can do `+left ^ +right`.

Example: `[1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]`
368 def keep_outcomes( 369 self, target: 370 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 371 /) -> 'MultisetExpression[T]': 372 """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero. 373 374 This is similar to `intersection()`, except the right side is considered 375 to have unlimited multiplicity. 376 377 Args: 378 target: A callable returning `True` iff the outcome should be kept, 379 or an expression or collection of outcomes to keep. 380 """ 381 if isinstance(target, MultisetExpression): 382 return icepool.operator.MultisetFilterOutcomesBinary(self, target) 383 else: 384 return icepool.operator.MultisetFilterOutcomes(self, target=target)
Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.

This is similar to `intersection()`, except the right side is considered to have unlimited multiplicity.

Arguments:
- target: A callable returning `True` iff the outcome should be kept, or an expression or collection of outcomes to keep.
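As a toy illustration of this count rule (`count if outcome in t else 0`) on a concrete multiset — a sketch of the documented behavior, not icepool's implementation, which operates on expressions:

```python
from collections import Counter

def keep_outcomes(multiset, target):
    # Keeps counts for outcomes matching `target` (a collection or predicate);
    # all other counts are set to zero.
    predicate = target if callable(target) else set(target).__contains__
    kept = Counter({o: c for o, c in Counter(multiset).items() if predicate(o)})
    return sorted(kept.elements())

print(keep_outcomes([1, 2, 2, 3, 5], [2, 3]))           # [2, 2, 3]
print(keep_outcomes([1, 2, 2, 3, 5], lambda o: o > 2))  # [3, 5]
```

Note that, unlike `intersection()`, both 2s survive even though the target contains only one 2.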
386 def drop_outcomes( 387 self, target: 388 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 389 /) -> 'MultisetExpression[T]': 390 """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest. 391 392 This is similar to `difference()`, except the right side is considered 393 to have unlimited multiplicity. 394 395 Args: 396 target: A callable returning `True` iff the outcome should be 397 dropped, or an expression or collection of outcomes to drop. 398 """ 399 if isinstance(target, MultisetExpression): 400 return icepool.operator.MultisetFilterOutcomesBinary(self, 401 target, 402 invert=True) 403 else: 404 return icepool.operator.MultisetFilterOutcomes(self, 405 target=target, 406 invert=True)
Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.

This is similar to `difference()`, except the right side is considered to have unlimited multiplicity.

Arguments:
- target: A callable returning `True` iff the outcome should be dropped, or an expression or collection of outcomes to drop.
410 def map_counts(*args: 411 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 412 function: Callable[..., int]) -> 'MultisetExpression[T]': 413 """Maps the counts to new counts. 414 415 Args: 416 function: A function that takes `outcome, *counts` and produces a 417 combined count. 418 """ 419 expressions = tuple( 420 implicit_convert_to_expression(arg) for arg in args) 421 return icepool.operator.MultisetMapCounts(*expressions, 422 function=function)
Maps the counts to new counts.

Arguments:
- function: A function that takes `outcome, *counts` and produces a combined count.
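A plain-Python sketch of this signature on concrete multisets (the `map_counts` helper below is illustrative only and mirrors the documented behavior, not icepool's expression machinery):

```python
from collections import Counter

def map_counts(*multisets, function):
    # Applies function(outcome, *counts) across all multisets to get a new count.
    counters = [Counter(m) for m in multisets]
    outcomes = set().union(*(c.keys() for c in counters))
    result = Counter()
    for outcome in sorted(outcomes):
        count = function(outcome, *(c[outcome] for c in counters))
        if count > 0:
            result[outcome] = count
    return sorted(result.elements())

# Count an outcome only if it appears in both multisets, summing the counts:
print(map_counts([1, 2, 2, 3], [2, 3, 4],
                 function=lambda o, l, r: l + r if l and r else 0))  # [2, 2, 2, 3, 3]
```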
435 def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]': 436 """Multiplies all counts by n. 437 438 Same as `self * n`. 439 440 Example: 441 ```python 442 Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3] 443 ``` 444 """ 445 return icepool.operator.MultisetMultiplyCounts(self, constant=n)
Multiplies all counts by `n`.

Same as `self * n`.

Example: `Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]`
473 def divide_counts(self, n: int, /) -> 'MultisetExpression[T]': 474 """Divides all counts by n (rounding down). 475 476 Same as `self // n`. 477 478 Example: 479 ```python 480 Pool([1, 2, 2, 3]) // 2 -> [2] 481 ``` 482 """ 483 return icepool.operator.MultisetFloordivCounts(self, constant=n)
Divides all counts by `n` (rounding down).

Same as `self // n`.

Example: `Pool([1, 2, 2, 3]) // 2 -> [2]`
490 def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]': 491 """Takes all counts modulo n. 492 493 Same as `self % n`. 494 495 Example: 496 ```python 497 Pool([1, 2, 2, 3]) % 2 -> [1, 3] 498 ``` 499 """ 500 return self % n
Takes all counts modulo `n`.

Same as `self % n`.

Example: `Pool([1, 2, 2, 3]) % 2 -> [1, 3]`
512 def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=', 513 '>'], n: int, 514 /) -> 'MultisetExpression[T]': 515 """Keeps counts fitting the comparison, treating the rest as zero. 516 517 For example, `expression.keep_counts('>=', 2)` would keep pairs, 518 triplets, etc. and drop singles. 519 520 ```python 521 Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3] 522 ``` 523 524 Args: 525 comparison: The comparison to use. 526 n: The number to compare counts against. 527 """ 528 return icepool.operator.MultisetKeepCounts(self, 529 comparison=comparison, 530 constant=n)
Keeps counts fitting the comparison, treating the rest as zero.

For example, `expression.keep_counts('>=', 2)` would keep pairs, triplets, etc. and drop singles.

Example: `Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]`

Arguments:
- comparison: The comparison to use.
- n: The number to compare counts against.
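The comparison semantics can be sketched on a concrete multiset (illustrative only; icepool's version returns a new expression rather than a list):

```python
from collections import Counter
import operator

COMPARISONS = {'==': operator.eq, '!=': operator.ne, '<=': operator.le,
               '<': operator.lt, '>=': operator.ge, '>': operator.gt}

def keep_counts(multiset, comparison, n):
    # Keeps each outcome's count if it fits the comparison against n, else zero.
    compare = COMPARISONS[comparison]
    kept = Counter({o: c for o, c in Counter(multiset).items() if compare(c, n)})
    return sorted(kept.elements())

print(keep_counts([1, 2, 2, 3, 3, 3], '>=', 2))  # [2, 2, 3, 3, 3]
print(keep_counts([1, 2, 2, 3, 3, 3], '==', 1))  # [1]
```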
532 def unique(self, n: int = 1, /) -> 'MultisetExpression[T]': 533 """Counts each outcome at most `n` times. 534 535 For example, `generator.unique(2)` would count each outcome at most 536 twice. 537 538 Example: 539 ```python 540 Pool([1, 2, 2, 3]).unique() -> [1, 2, 3] 541 ``` 542 """ 543 return icepool.operator.MultisetUnique(self, constant=n)
Counts each outcome at most `n` times.

For example, `generator.unique(2)` would count each outcome at most twice.

Example: `Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]`
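The `min(count, n)` rule as a plain-Python sketch (not icepool's implementation, which operates on expressions):

```python
from collections import Counter

def unique(multiset, n=1):
    # Counts each outcome at most n times: new_count = min(count, n).
    capped = Counter({o: min(c, n) for o, c in Counter(multiset).items()})
    return sorted(capped.elements())

print(unique([1, 2, 2, 3]))     # [1, 2, 3]
print(unique([1, 2, 2, 2], 2))  # [1, 2, 2]
```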
558 def keep( 559 self, index: slice | Sequence[int | EllipsisType] | int 560 ) -> 'MultisetExpression[T] | icepool.Die[T] | MultisetFunctionRawResult[T, T]': 561 """Selects elements after drawing and sorting. 562 563 This is less capable than the `KeepGenerator` version. 564 In particular, it does not know how many elements it is selecting from, 565 so it must be anchored at the starting end. The advantage is that it 566 can be applied to any expression. 567 568 The valid types of argument are: 569 570 * A `slice`. If both start and stop are provided, they must both be 571 non-negative or both be negative. step is not supported. 572 * A sequence of `int` with `...` (`Ellipsis`) at exactly one end. 573 Each sorted element will be counted that many times, with the 574 `Ellipsis` treated as enough zeros (possibly "negative") to 575 fill the rest of the elements. 576 * An `int`, which evaluates by taking the element at the specified 577 index. In this case the result is a `Die`. 578 579 Negative incoming counts are treated as zero counts. 580 581 Use the `[]` operator for the same effect as this method. 582 """ 583 if isinstance(index, int): 584 return icepool.evaluator.keep_evaluator.evaluate(self, index=index) 585 else: 586 return icepool.operator.MultisetKeep(self, index=index)
Selects elements after drawing and sorting.

This is less capable than the `KeepGenerator` version. In particular, it does not know how many elements it is selecting from, so it must be anchored at the starting end. The advantage is that it can be applied to any expression.

The valid types of argument are:

- A `slice`. If both start and stop are provided, they must both be non-negative or both be negative. step is not supported.
- A sequence of `int` with `...` (`Ellipsis`) at exactly one end. Each sorted element will be counted that many times, with the `Ellipsis` treated as enough zeros (possibly "negative") to fill the rest of the elements.
- An `int`, which evaluates by taking the element at the specified index. In this case the result is a `Die`.

Negative incoming counts are treated as zero counts.

Use the `[]` operator for the same effect as this method.
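The sequence-with-`Ellipsis` form can be sketched on a concrete sorted hand. The `keep_sorted` helper below is hypothetical and only mirrors the counting rule described above (slices and negative counts are omitted):

```python
def keep_sorted(elements, index):
    """Counts each sorted element index[i] times, with ... standing in for zeros."""
    elements = sorted(elements)
    fill = len(elements) - len(index) + 1  # how many zeros the Ellipsis stands for
    if index[0] is Ellipsis:
        weights = [0] * fill + list(index[1:])
    else:  # Ellipsis at the end
        weights = list(index[:-1]) + [0] * fill
    return [e for e, w in zip(elements, weights) for _ in range(w)]

print(keep_sorted([6, 4, 3, 1], [..., 1, 1]))  # keep the two highest -> [4, 6]
print(keep_sorted([6, 4, 3, 1], [1, ...]))     # keep the lowest -> [1]
```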
605 def lowest(self, 606 keep: int | None = None, 607 drop: int | None = None) -> 'MultisetExpression[T]': 608 """Keep some of the lowest elements from this multiset and drop the rest. 609 610 In contrast to the die and free function versions, this does not 611 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 612 Alternatively, you can perform some other evaluation. 613 614 This requires the outcomes to be evaluated in ascending order. 615 616 Args: 617 keep, drop: These arguments work together: 618 * If neither are provided, the single lowest element 619 will be kept. 620 * If only `keep` is provided, the `keep` lowest elements 621 will be kept. 622 * If only `drop` is provided, the `drop` lowest elements 623 will be dropped and the rest will be kept. 624 * If both are provided, `drop` lowest elements will be dropped, 625 then the next `keep` lowest elements will be kept. 626 """ 627 index = lowest_slice(keep, drop) 628 return self.keep(index)
Keep some of the lowest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use `.sum()` afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in ascending order.

Arguments:
- keep, drop: These arguments work together:
  - If neither are provided, the single lowest element will be kept.
  - If only `keep` is provided, the `keep` lowest elements will be kept.
  - If only `drop` is provided, the `drop` lowest elements will be dropped and the rest will be kept.
  - If both are provided, `drop` lowest elements will be dropped, then the next `keep` lowest elements will be kept.
630 def highest(self, 631 keep: int | None = None, 632 drop: int | None = None) -> 'MultisetExpression[T]': 633 """Keep some of the highest elements from this multiset and drop the rest. 634 635 In contrast to the die and free function versions, this does not 636 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 637 Alternatively, you can perform some other evaluation. 638 639 This requires the outcomes to be evaluated in descending order. 640 641 Args: 642 keep, drop: These arguments work together: 643 * If neither are provided, the single highest element 644 will be kept. 645 * If only `keep` is provided, the `keep` highest elements 646 will be kept. 647 * If only `drop` is provided, the `drop` highest elements 648 will be dropped and the rest will be kept. 649 * If both are provided, `drop` highest elements will be dropped, 650 then the next `keep` highest elements will be kept. 651 """ 652 index = highest_slice(keep, drop) 653 return self.keep(index)
Keep some of the highest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use `.sum()` afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in descending order.

Arguments:
- keep, drop: These arguments work together:
  - If neither are provided, the single highest element will be kept.
  - If only `keep` is provided, the `keep` highest elements will be kept.
  - If only `drop` is provided, the `drop` highest elements will be dropped and the rest will be kept.
  - If both are provided, `drop` highest elements will be dropped, then the next `keep` highest elements will be kept.
````python
def sort_match(self,
               comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
               other: 'MultisetExpression[T]',
               /,
               order: Order = Order.Descending) -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner.

    Extra elements: If `self` has more elements than `other`, whether the
    extra elements are kept depends on the `order` and `comparison`:
    * Descending: kept for `'>='`, `'>'`
    * Ascending: kept for `'<='`, `'<'`

    Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of
    *RISK*. Which pairs did the attacker win?
    ```python
    d6.pool(3).highest(2).sort_match('>', d6.pool(2))
    ```

    Suppose the attacker rolled 6, 4, 3 and the defender 5, 5.
    In this case the 4 would be blocked since the attacker lost that pair,
    leaving the attacker's 6 and 3. If you don't want to keep the extra
    element, you can use `highest`.
    ```python
    Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3]
    Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6]
    ```

    Contrast `maximum_match()`, which first creates the maximum number of
    pairs that fit the comparison, not necessarily in sorted order.
    In the above example, `maximum_match()` would allow the defender to
    assign their 5s to block both the 4 and the 3.

    Negative incoming counts are treated as zero counts.

    Args:
        comparison: The comparison to filter by. If you want to drop rather
            than keep, use the complementary comparison:
            * `'=='` vs. `'!='`
            * `'<='` vs. `'>'`
            * `'>='` vs. `'<'`
        other: The other multiset to match elements with.
        order: The order in which to sort before forming matches.
            Default is descending.
    """
    other = implicit_convert_to_expression(other)

    match comparison:
        case '==':
            lesser, tie, greater = 0, 1, 0
        case '!=':
            lesser, tie, greater = 1, 0, 1
        case '<=':
            lesser, tie, greater = 1, 1, 0
        case '<':
            lesser, tie, greater = 1, 0, 0
        case '>=':
            lesser, tie, greater = 0, 1, 1
        case '>':
            lesser, tie, greater = 0, 0, 1
        case _:
            raise ValueError(f'Invalid comparison {comparison}')

    if order > 0:
        left_first = lesser
        right_first = greater
    else:
        left_first = greater
        right_first = lesser

    return icepool.operator.MultisetSortMatch(self,
                                             other,
                                             order=order,
                                             tie=tie,
                                             left_first=left_first,
                                             right_first=right_first)
````
EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner.

Extra elements: If `self` has more elements than `other`, whether the extra elements are kept depends on the `order` and `comparison`:

- Descending: kept for `'>='`, `'>'`
- Ascending: kept for `'<='`, `'<'`

Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of *RISK*. Which pairs did the attacker win?

```python
d6.pool(3).highest(2).sort_match('>', d6.pool(2))
```

Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. In this case the 4 would be blocked since the attacker lost that pair, leaving the attacker's 6 and 3. If you don't want to keep the extra element, you can use `highest`.

```python
Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3]
Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6]
```

Contrast `maximum_match()`, which first creates the maximum number of pairs that fit the comparison, not necessarily in sorted order. In the above example, `maximum_match()` would allow the defender to assign their 5s to block both the 4 and the 3.

Negative incoming counts are treated as zero counts.

Arguments:

- comparison: The comparison to filter by. If you want to drop rather than keep, use the complementary comparison:
  - `'=='` vs. `'!='`
  - `'<='` vs. `'>'`
  - `'>='` vs. `'<'`
- other: The other multiset to match elements with.
- order: The order in which to sort before forming matches. Default is descending.
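The pairing rule can be sketched on concrete lists. The function below is an illustrative plain-Python model of `sort_match('>', ...)` with the default descending order only; it is not icepool's implementation, and the name is hypothetical.

```python
from itertools import zip_longest

def sort_match_gt_descending(self_elems, other_elems):
    """Model of `sort_match('>', other)` with descending order:
    pair elements in descending sorted order, keep elements of `self_elems`
    that beat their partner; unpaired extras are kept (documented for '>')."""
    a = sorted(self_elems, reverse=True)
    b = sorted(other_elems, reverse=True)
    kept = []
    for x, y in zip_longest(a, b):
        if x is None:
            continue                 # extra elements of `other` contribute nothing
        if y is None or x > y:       # won the pair, or unpaired extra
            kept.append(x)
    return kept

print(sort_match_gt_descending([6, 4, 3], [5, 5]))  # [6, 3], as in the example
```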
````python
def maximum_match_highest(
        self, comparison: Literal['<=', '<'], other: 'MultisetExpression[T]',
        /, *,
        keep: Literal['matched', 'unmatched']) -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`.

    This matches elements of `self` with elements of `other`, such that in
    each pair the element from `self` fits the `comparison` with the
    element from `other`. As many such pairs of elements will be matched as
    possible, preferring the highest matchable elements of `self`.
    Finally, either the matched or unmatched elements from `self` are kept.

    This requires that outcomes be evaluated in descending order.

    Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of
    3d6. Defender dice can be used to block attacker dice of equal or lesser
    value, and the defender prefers to block the highest attacker dice
    possible. Which attacker dice were not blocked?
    ```python
    d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum()
    ```

    Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
    Then the result would be [6, 1].
    ```python
    d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched')
    -> [6, 1]
    ```

    Contrast `sort_match()`, which first creates pairs in
    sorted order and then filters them by `comparison`.
    In the above example, `sort_match` would force the defender to match
    against the 5 and the 4, which would only allow them to block the 4.

    Negative incoming counts are treated as zero counts.

    Args:
        comparison: Either `'<='` or `'<'`.
        other: The other multiset to match elements with.
        keep: Whether 'matched' or 'unmatched' elements are to be kept.
    """
    if keep == 'matched':
        keep_boolean = True
    elif keep == 'unmatched':
        keep_boolean = False
    else:
        raise ValueError(f"keep must be either 'matched' or 'unmatched'")

    other = implicit_convert_to_expression(other)
    match comparison:
        case '<=':
            match_equal = True
        case '<':
            match_equal = False
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
    return icepool.operator.MultisetMaximumMatch(self,
                                                 other,
                                                 order=Order.Descending,
                                                 match_equal=match_equal,
                                                 keep=keep_boolean)
````
EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`.

This matches elements of `self` with elements of `other`, such that in each pair the element from `self` fits the `comparison` with the element from `other`. As many such pairs of elements will be matched as possible, preferring the highest matchable elements of `self`. Finally, either the matched or unmatched elements from `self` are kept.

This requires that outcomes be evaluated in descending order.

Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 3d6. Defender dice can be used to block attacker dice of equal or lesser value, and the defender prefers to block the highest attacker dice possible. Which attacker dice were not blocked?

```python
d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum()
```

Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. Then the result would be [6, 1].

```python
d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched')
-> [6, 1]
```

Contrast `sort_match()`, which first creates pairs in sorted order and then filters them by `comparison`. In the above example, `sort_match` would force the defender to match against the 5 and the 4, which would only allow them to block the 4.

Negative incoming counts are treated as zero counts.

Arguments:

- comparison: Either `'<='` or `'<'`.
- other: The other multiset to match elements with.
- keep: Whether 'matched' or 'unmatched' elements are to be kept.
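The greedy matching described above can be modeled in plain Python. This is an illustrative sketch of `maximum_match_highest('<=', other, keep='unmatched')` on concrete lists, not icepool's implementation; the function name is hypothetical.

```python
import bisect

def maximum_match_le_unmatched(self_elems, other_elems):
    """Model of `maximum_match_highest('<=', other, keep='unmatched')`:
    greedily match the highest elements of `self_elems` with unused
    elements of `other_elems` that are >= them, return the unmatched."""
    others = sorted(other_elems)
    unmatched = []
    for x in sorted(self_elems, reverse=True):  # prefer matching the highest
        i = bisect.bisect_left(others, x)       # smallest unused other >= x
        if i < len(others):
            others.pop(i)                       # matched: consume that element
        else:
            unmatched.append(x)                 # nothing can match this one
    return sorted(unmatched, reverse=True)

print(maximum_match_le_unmatched([6, 4, 3, 1], [5, 5]))  # [6, 1], as documented
```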
```python
def maximum_match_lowest(
        self, comparison: Literal['>=', '>'], other: 'MultisetExpression[T]',
        /, *,
        keep: Literal['matched', 'unmatched']) -> 'MultisetExpression[T]':
    """EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`.

    This matches elements of `self` with elements of `other`, such that in
    each pair the element from `self` fits the `comparison` with the
    element from `other`. As many such pairs of elements will be matched as
    possible, preferring the lowest matchable elements of `self`.
    Finally, either the matched or unmatched elements from `self` are kept.

    This requires that outcomes be evaluated in ascending order.

    Contrast `sort_match()`, which first creates pairs in
    sorted order and then filters them by `comparison`.

    Args:
        comparison: Either `'>='` or `'>'`.
        other: The other multiset to match elements with.
        keep: Whether 'matched' or 'unmatched' elements are to be kept.
    """
    if keep == 'matched':
        keep_boolean = True
    elif keep == 'unmatched':
        keep_boolean = False
    else:
        raise ValueError(f"keep must be either 'matched' or 'unmatched'")

    other = implicit_convert_to_expression(other)
    match comparison:
        case '>=':
            match_equal = True
        case '>':
            match_equal = False
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
    return icepool.operator.MultisetMaximumMatch(self,
                                                 other,
                                                 order=Order.Ascending,
                                                 match_equal=match_equal,
                                                 keep=keep_boolean)
```
EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`.

This matches elements of `self` with elements of `other`, such that in each pair the element from `self` fits the `comparison` with the element from `other`. As many such pairs of elements will be matched as possible, preferring the lowest matchable elements of `self`. Finally, either the matched or unmatched elements from `self` are kept.

This requires that outcomes be evaluated in ascending order.

Contrast `sort_match()`, which first creates pairs in sorted order and then filters them by `comparison`.

Arguments:

- comparison: Either `'>='` or `'>'`.
- other: The other multiset to match elements with.
- keep: Whether 'matched' or 'unmatched' elements are to be kept.
```python
def expand(
    self,
    order: Order = Order.Ascending
) -> 'icepool.Die[tuple[T, ...]] | MultisetFunctionRawResult[T, tuple[T, ...]]':
    """Evaluation: All elements of the multiset in ascending order.

    This is expensive and not recommended unless there are few possibilities.

    Args:
        order: Whether the elements are in ascending (default) or descending
            order.
    """
    return icepool.evaluator.ExpandEvaluator().evaluate(self, order=order)
```
Evaluation: All elements of the multiset in ascending order.

This is expensive and not recommended unless there are few possibilities.

Arguments:

- order: Whether the elements are in ascending (default) or descending order.
```python
def sum(
    self,
    map: Callable[[T], U] | Mapping[T, U] | None = None
) -> 'icepool.Die[U] | MultisetFunctionRawResult[T, U]':
    """Evaluation: The sum of all elements."""
    if map is None:
        return icepool.evaluator.sum_evaluator.evaluate(self)
    else:
        return icepool.evaluator.SumEvaluator(map).evaluate(self)
```
Evaluation: The sum of all elements.
```python
def count(self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: The total number of elements in the multiset.

    This is usually not very interesting unless some other operation is
    performed first. Examples:

    `generator.unique().count()` will count the number of unique outcomes.

    `(generator & [4, 5, 6]).count()` will count up to one each of
    4, 5, and 6.
    """
    return icepool.evaluator.count_evaluator.evaluate(self)
```
Evaluation: The total number of elements in the multiset.

This is usually not very interesting unless some other operation is performed first. Examples:

`generator.unique().count()` will count the number of unique outcomes.

`(generator & [4, 5, 6]).count()` will count up to one each of 4, 5, and 6.
```python
def any(self) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether the multiset has at least one positive count."""
    return icepool.evaluator.any_evaluator.evaluate(self)
```
Evaluation: Whether the multiset has at least one positive count.
```python
def highest_outcome_and_count(
    self
) -> 'icepool.Die[tuple[T, int]] | MultisetFunctionRawResult[T, tuple[T, int]]':
    """Evaluation: The highest outcome with positive count, along with that count.

    If no outcomes have positive count, the min outcome will be returned with 0 count.
    """
    return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate(
        self)
```
Evaluation: The highest outcome with positive count, along with that count.
If no outcomes have positive count, the min outcome will be returned with 0 count.
```python
def all_counts(
    self,
    filter: int | Literal['all'] = 1
) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[T, tuple[int, ...]]':
    """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

    The sizes are in **descending** order.

    Args:
        filter: Any counts below this value will not be in the output.
            For example, `filter=2` will only produce pairs and better.
            If `'all'`, no filtering will be done.

            Why not just place `keep_counts_ge()` before this?
            `keep_counts_ge()` operates by setting counts to zero, so you
            would still need an argument to specify whether you want to
            output zero counts. So we might as well use the argument to do
            both.
    """
    return icepool.evaluator.AllCountsEvaluator(
        filter=filter).evaluate(self)
```
Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

The sizes are in **descending** order.

Arguments:

- filter: Any counts below this value will not be in the output. For example, `filter=2` will only produce pairs and better. If `'all'`, no filtering will be done.

  Why not just place `keep_counts_ge()` before this? `keep_counts_ge()` operates by setting counts to zero, so you would still need an argument to specify whether you want to output zero counts. So we might as well use the argument to do both.
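On a concrete roll, the evaluation can be modeled with `collections.Counter`. This is an illustrative sketch of the documented behavior, not icepool's implementation; the function name is hypothetical.

```python
from collections import Counter

def all_counts(elements, filter=1):
    """Model of `all_counts` on a concrete list: sizes of all matching
    sets in descending order, filtered by a minimum count."""
    counts = sorted(Counter(elements).values(), reverse=True)
    if filter == 'all':
        return tuple(counts)
    return tuple(c for c in counts if c >= filter)

print(all_counts([3, 3, 5, 5, 5, 6]))            # (3, 2, 1)
print(all_counts([3, 3, 5, 5, 5, 6], filter=2))  # (3, 2) -- pairs and better
```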
```python
def largest_count(
        self) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: The size of the largest matching set among the elements."""
    return icepool.evaluator.largest_count_evaluator.evaluate(self)
```
Evaluation: The size of the largest matching set among the elements.
```python
def largest_count_and_outcome(
    self
) -> 'icepool.Die[tuple[int, T]] | MultisetFunctionRawResult[T, tuple[int, T]]':
    """Evaluation: The largest matching set among the elements and the corresponding outcome."""
    return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate(
        self)
```
Evaluation: The largest matching set among the elements and the corresponding outcome.
```python
def count_subset(
    self,
    divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
    /,
    *,
    empty_divisor: int | None = None
) -> 'icepool.Die[int] | MultisetFunctionRawResult[T, int]':
    """Evaluation: The number of times the divisor is contained in this multiset.

    Args:
        divisor: The multiset to divide by.
        empty_divisor: If the divisor is empty, the outcome will be this.
            If not set, `ZeroDivisionError` will be raised for an empty
            right side.

    Raises:
        ZeroDivisionError: If the divisor may be empty and
            `empty_divisor` is not set.
    """
    divisor = implicit_convert_to_expression(divisor)
    return icepool.evaluator.CountSubsetEvaluator(
        empty_divisor=empty_divisor).evaluate(self, divisor)
```
Evaluation: The number of times the divisor is contained in this multiset.

Arguments:

- divisor: The multiset to divide by.
- empty_divisor: If the divisor is empty, the outcome will be this. If not set, `ZeroDivisionError` will be raised for an empty right side.

Raises:

- ZeroDivisionError: If the divisor may be empty and `empty_divisor` is not set.
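This is multiset division, which can be modeled on concrete collections with `collections.Counter`. An illustrative sketch of the documented semantics, not icepool's implementation; the function name is hypothetical.

```python
from collections import Counter

def count_subset(dividend, divisor):
    """Model of `count_subset` on concrete collections: the number of
    whole copies of `divisor` contained in `dividend`."""
    d = Counter(divisor)
    if not d:
        raise ZeroDivisionError('empty divisor')
    have = Counter(dividend)
    # Each divisor outcome limits how many whole copies fit.
    return min(have[outcome] // count for outcome, count in d.items())

print(count_subset([1, 1, 2, 2, 2, 3], [1, 2]))  # 2
```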
```python
def largest_straight(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[int] | MultisetFunctionRawResult[int, int]':
    """Evaluation: The size of the largest straight among the elements.

    Outcomes must be `int`s.
    """
    return icepool.evaluator.largest_straight_evaluator.evaluate(self)
```
Evaluation: The size of the largest straight among the elements.
Outcomes must be `int`s.
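On a concrete roll, the largest straight is the longest run of consecutive integers among the distinct elements. A plain-Python sketch of that idea (illustrative only, not icepool's implementation):

```python
def largest_straight(elements):
    """Model of `largest_straight`: length of the longest run of
    consecutive integers among the distinct elements."""
    values = sorted(set(elements))
    best = run = 1 if values else 0
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev + 1 else 1  # extend or restart the run
        best = max(best, run)
    return best

print(largest_straight([1, 2, 2, 3, 5, 6]))  # 3 (the straight 1-2-3)
```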
```python
def largest_straight_and_outcome(
    self: 'MultisetExpression[int]',
    priority: Literal['low', 'high'] = 'high',
    /
) -> 'icepool.Die[tuple[int, int]] | MultisetFunctionRawResult[int, tuple[int, int]]':
    """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.

    Straight size is prioritized first, then the outcome.

    Outcomes must be `int`s.

    Args:
        priority: Controls which outcome within the straight is returned,
            and which straight is picked if there is a tie for largest
            straight.
    """
    if priority == 'high':
        return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate(
            self)
    elif priority == 'low':
        return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate(
            self)
    else:
        raise ValueError("priority must be 'low' or 'high'.")
```
Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.
Straight size is prioritized first, then the outcome.
Outcomes must be `int`s.
Arguments:
- priority: Controls which outcome within the straight is returned, and which straight is picked if there is a tie for largest straight.
```python
def all_straights(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[tuple[int, ...]] | MultisetFunctionRawResult[int, tuple[int, ...]]':
    """Evaluation: The sizes of all straights.

    The sizes are in **descending** order.

    Each element can only contribute to one straight, though duplicate
    elements can produce straights that overlap in outcomes. In this case,
    elements are preferentially assigned to the longer straight.
    """
    return icepool.evaluator.all_straights_evaluator.evaluate(self)
```
Evaluation: The sizes of all straights.
The sizes are in **descending** order.

Each element can only contribute to one straight, though duplicate elements can produce straights that overlap in outcomes. In this case, elements are preferentially assigned to the longer straight.
```python
def all_straights_reduce_counts(
    self: 'MultisetExpression[int]',
    reducer: Callable[[int, int], int] = operator.mul
) -> 'icepool.Die[tuple[tuple[int, int], ...]] | MultisetFunctionRawResult[int, tuple[tuple[int, int], ...]]':
    """Experimental: All straights with a reduce operation on the counts.

    This can be used to evaluate e.g. cribbage-style straight counting.

    The result is a tuple of `(run_length, run_score)`s.
    """
    return icepool.evaluator.AllStraightsReduceCountsEvaluator(
        reducer=reducer).evaluate(self)
```
Experimental: All straights with a reduce operation on the counts.
This can be used to evaluate e.g. cribbage-style straight counting.
The result is a tuple of `(run_length, run_score)`s.
````python
def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            order: Order = Order.Descending,
            limit: int | None = None):
    """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.

    Example:
    ```python
    MultisetExpression.argsort([10, 9, 5], [9, 9])
    ```
    produces
    ```python
    ((0,), (0, 1, 1), (0,))
    ```

    Args:
        self, *args: The multiset expressions to be evaluated.
        order: Which order the ranks are to be emitted. Default is descending.
        limit: How many ranks to emit. Default will emit all ranks, which
            makes the length of each outcome equal to
            `additive_union(+self, +arg1, +arg2, ...).unique().count()`
    """
    self = implicit_convert_to_expression(self)
    converted_args = [implicit_convert_to_expression(arg) for arg in args]
    return icepool.evaluator.ArgsortEvaluator(order=order,
                                              limit=limit).evaluate(
                                                  self, *converted_args)
````
Experimental: Returns the indexes of the originating multisets for each rank in their additive union.

Example:

```python
MultisetExpression.argsort([10, 9, 5], [9, 9])
```

produces

```python
((0,), (0, 1, 1), (0,))
```

Arguments:

- self, *args: The multiset expressions to be evaluated.
- order: Which order the ranks are to be emitted. Default is descending.
- limit: How many ranks to emit. Default will emit all ranks, which makes the length of each outcome equal to `additive_union(+self, +arg1, +arg2, ...).unique().count()`
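The documented example can be reproduced on concrete collections with a short plain-Python model (illustrative only, not icepool's implementation): each rank of the additive union yields the index of each originating multiset, repeated once per count.

```python
from collections import Counter

def argsort(*multisets, descending=True):
    """Model of `argsort` on concrete collections: for each distinct
    outcome in rank order, emit the originating multiset indexes,
    each repeated once per count."""
    counters = [Counter(m) for m in multisets]
    outcomes = sorted(set().union(*counters), reverse=descending)
    return tuple(
        tuple(i for i, c in enumerate(counters) for _ in range(c[o]))
        for o in outcomes)

print(argsort([10, 9, 5], [9, 9]))  # ((0,), (0, 1, 1), (0,))
```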
```python
def issubset(
        self,
        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether this multiset is a subset of the other multiset.

    Specifically, if this multiset has a lesser or equal count for each
    outcome than the other multiset, this evaluates to `True`;
    if there is some outcome for which this multiset has a greater count
    than the other multiset, this evaluates to `False`.

    `issubset` is the same as `self <= other`.

    `self < other` evaluates a proper subset relation, which is the same
    except the result is `False` if the two multisets are exactly equal.
    """
    return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
```
Evaluation: Whether this multiset is a subset of the other multiset.

Specifically, if this multiset has a lesser or equal count for each outcome than the other multiset, this evaluates to `True`; if there is some outcome for which this multiset has a greater count than the other multiset, this evaluates to `False`.

`issubset` is the same as `self <= other`.

`self < other` evaluates a proper subset relation, which is the same except the result is `False` if the two multisets are exactly equal.
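The count-wise comparison can be modeled on concrete collections with `collections.Counter` (an illustrative sketch of the documented relation, not icepool's implementation):

```python
from collections import Counter

def issubset(a, b):
    """Model of multiset `issubset`: every outcome's count in `a`
    is less than or equal to its count in `b`."""
    b_counts = Counter(b)
    return all(count <= b_counts[o] for o, count in Counter(a).items())

print(issubset([1, 2, 2], [1, 2, 2, 3]))  # True
print(issubset([2, 2, 2], [1, 2, 2, 3]))  # False: three 2s vs. two 2s
```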
````python
def issuperset(
        self,
        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether this multiset is a superset of the other multiset.

    Specifically, if this multiset has a greater or equal count for each
    outcome than the other multiset, this evaluates to `True`;
    if there is some outcome for which this multiset has a lesser count
    than the other multiset, this evaluates to `False`.

    A typical use of this evaluation is testing for the presence of a
    combo of cards in a hand, e.g.

    ```python
    deck.deal(5) >= ['a', 'a', 'b']
    ```

    represents the chance that a deal of 5 cards contains at least two 'a's
    and one 'b'.

    `issuperset` is the same as `self >= other`.

    `self > other` evaluates a proper superset relation, which is the same
    except the result is `False` if the two multisets are exactly equal.
    """
    return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
````
Evaluation: Whether this multiset is a superset of the other multiset.

Specifically, if this multiset has a greater or equal count for each outcome than the other multiset, this evaluates to `True`; if there is some outcome for which this multiset has a lesser count than the other multiset, this evaluates to `False`.

A typical use of this evaluation is testing for the presence of a combo of cards in a hand, e.g.

```python
deck.deal(5) >= ['a', 'a', 'b']
```

represents the chance that a deal of 5 cards contains at least two 'a's and one 'b'.

`issuperset` is the same as `self >= other`.

`self > other` evaluates a proper superset relation, which is the same except the result is `False` if the two multisets are exactly equal.
```python
def isdisjoint(
        self,
        other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        /) -> 'icepool.Die[bool] | MultisetFunctionRawResult[T, bool]':
    """Evaluation: Whether this multiset is disjoint from the other multiset.

    Specifically, this evaluates to `False` if there is any outcome for
    which both multisets have positive count, and `True` if there is not.

    Negative incoming counts are treated as zero counts.
    """
    return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
```
Evaluation: Whether this multiset is disjoint from the other multiset.

Specifically, this evaluates to `False` if there is any outcome for which both multisets have positive count, and `True` if there is not.

Negative incoming counts are treated as zero counts.
```python
class MultisetEvaluator(MultisetEvaluatorBase[T, U_co]):

    def initial_state(self, order, outcomes, /, **kwargs):
        # TODO: docstring
        return None

    def next_state(self, state: Hashable, outcome: T, /, *counts) -> Hashable:
        """State transition function.

        This should produce a state given the previous state, an outcome,
        the count of that outcome produced by each multiset input, and any
        **kwargs provided to `evaluate()`.

        `evaluate()` will always call this with `state, outcome, *counts` as
        positional arguments. Furthermore, there is no expectation that a
        subclass be able to handle an arbitrary number of counts.

        Thus, you are free to:
        * Rename `state` or `outcome` in a subclass.
        * Replace `*counts` with a fixed set of parameters.

        `**kwargs` are non-multiset arguments that were provided to
        `evaluate()`.
        You may replace `**kwargs` with a fixed set of keyword parameters;
        `final_state()` should take the same set of keyword parameters.

        Make sure to handle the base case where `state is None`.

        States must be hashable. At current, they do not have to be orderable.
        However, this may change in the future, and if they are not totally
        orderable, you must override `final_outcome` to create totally
        orderable final outcomes.

        By default, this method may receive outcomes in any order. You can
        adjust this as follows:

        * If you want to guarantee ascending or descending order, you can
          implement `next_state_ascending()` or `next_state_descending()`
          instead. You can implement both if you want the implementation to
          depend on order.
        * You can raise an `UnsupportedOrderError` if you don't want to
          support the current order. In this case, the opposite order will
          then be attempted (if it hasn't already been attempted).

        The behavior of returning a `Die` from `next_state` is currently
        undefined.

        Args:
            state: A hashable object indicating the state before rolling the
                current outcome. If this is the first outcome being
                considered, `state` will be `None`.
            outcome: The current outcome.
                `next_state` will see all rolled outcomes in monotonic order;
                either ascending or descending depending on `order()`.
                If there are multiple inputs, the set of outcomes is at
                least the union of the outcomes of the individual inputs.
                You can use `extra_outcomes()` to add extra outcomes.
            *counts: One value (usually an `int`) for each input indicating
                how many of the current outcome were produced.

        Returns:
            A hashable object indicating the next state.
            The special value `icepool.Reroll` can be used to immediately
            remove the state from consideration, effectively performing a
            full reroll.
        """
        raise NotImplementedError()

    def next_state_ascending(self, state: Hashable, outcome: T, /,
                             *counts) -> Hashable:
        """As next_state() but handles outcomes in ascending order only.

        You can implement both `next_state_ascending()` and
        `next_state_descending()` if you want to handle both outcome orders
        with a separate implementation for each.
        """
        raise NotImplementedError()

    def next_state_descending(self, state: Hashable, outcome: T, /,
                              *counts) -> Hashable:
        """As next_state() but handles outcomes in descending order only.

        You can implement both `next_state_ascending()` and
        `next_state_descending()` if you want to handle both outcome orders
        with a separate implementation for each.
        """
        raise NotImplementedError()

    def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]:
        """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.

        These will be seen by `next_state` even if they do not appear in the
        input(s). The default implementation returns `()`, or no additional
        outcomes.

        If you want `next_state` to see consecutive `int` outcomes, you can
        set `extra_outcomes = icepool.MultisetEvaluator.consecutive`.
        See `consecutive()` below.

        Args:
            outcomes: The outcomes that could be produced by the inputs, in
                ascending order.
        """
        return ()

    def final_outcome(
            self, final_state: Hashable, /, **kwargs: Hashable
    ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType':
        """Optional method to generate a final output outcome from a final state.

        By default, the final outcome is equal to the final state.
        Note that `None` is not a valid outcome for a `Die`,
        and if there are no outcomes, `final_outcome` will immediately
        be called with `final_state=None`.
        Subclasses that want to handle this case should explicitly define
        what happens.

        `**kwargs` are non-multiset arguments that were provided to
        `evaluate()`.
        You may replace `**kwargs` with a fixed set of keyword parameters;
        `final_state()` should take the same set of keyword parameters.

        Args:
            final_state: A state after all outcomes have been processed.
            kwargs: Any kwargs that were passed to the evaluation.

        Returns:
            A final outcome that will be used as part of constructing the
            result `Die`.
            As usual for `Die()`, this could itself be a `Die` or
            `icepool.Reroll`.
        """
        # If not overridden, the final_state should have type U_co.
        return cast(U_co, final_state)

    def _get_next_state_method(
            self, method_name: str) -> Callable[..., Hashable] | None:
        """Returns the next_state method by name, or `None` if not implemented."""
        try:
            method = getattr(self, method_name)
            if method is None:
                return None
            method_func = getattr(method, '__func__', method)
        except AttributeError:
            return None
        if method_func is getattr(MultisetEvaluator, method_name):
            return None
        return method

    def next_state_method(self, order: Order,
                          /) -> Callable[..., Hashable] | None:
        """Returns the bound next_state* method for the given order, or `None` if that order is not supported.

        `next_state_ascending` and `next_state_descending` are prioritized
        over `next_state`.
        """
        if order > 0:
            return self._get_next_state_method(
                'next_state_ascending') or self._get_next_state_method(
                    'next_state')
        elif order < 0:
            return self._get_next_state_method(
                'next_state_descending') or self._get_next_state_method(
                    'next_state')
        else:
            return self._get_next_state_method(
                'next_state_ascending') or self._get_next_state_method(
                    'next_state_descending') or self._get_next_state_method(
                        'next_state')

    def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
        """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.

        Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use
        this.

        Returns:
            All `int`s from the min outcome to the max outcome among the
            inputs, inclusive.

        Raises:
            TypeError: if any input has any non-`int` outcome.
        """
        if not outcomes:
            return ()

        if any(not isinstance(x, int) for x in outcomes):
            raise TypeError(
                "consecutive cannot be used with outcomes of type other than 'int'."
            )

        return range(outcomes[0], outcomes[-1] + 1)

    def _prepare(
        self,
        inputs: 'tuple[MultisetExpressionBase[T, Q], ...]',
        kwargs: Mapping[str, Hashable],
    ):
        yield MultisetEvaluatorDungeon(
            self, self.initial_state, self.next_state_method(Order.Ascending),
            self.next_state_method(Order.Descending), self.extra_outcomes,
            self.final_outcome, kwargs), (), 1
```
24 def next_state(self, state: Hashable, outcome: T, /, *counts) -> Hashable: 25 """State transition function. 26 27 This should produce a state given the previous state, an outcome, 28 the count of that outcome produced by each multiset input, and any 29 **kwargs provided to `evaluate()`. 30 31 `evaluate()` will always call this with `state, outcome, *counts` as 32 positional arguments. Furthermore, there is no expectation that a 33 subclass be able to handle an arbitrary number of counts. 34 35 Thus, you are free to: 36 * Rename `state` or `outcome` in a subclass. 37 * Replace `*counts` with a fixed set of parameters. 38 39 `**kwargs` are non-multiset arguments that were provided to 40 `evaluate()`. 41 You may replace `**kwargs` with a fixed set of keyword parameters; 42 `final_state()` should take the same set of keyword parameters. 43 44 Make sure to handle the base case where `state is None`. 45 46 States must be hashable. At current, they do not have to be orderable. 47 However, this may change in the future, and if they are not totally 48 orderable, you must override `final_outcome` to create totally orderable 49 final outcomes. 50 51 By default, this method may receive outcomes in any order. You can 52 adjust this as follows: 53 54 * If you want to guarantee ascending or descending order, you can 55 implement `next_state_ascending()` or `next_state_descending()` 56 instead. You can implement both if you want the implementation to 57 depend on order. 58 * You can raise an `UnsupportedOrderError` if you don't want to 59 support the current order. In this case, the opposite order will then 60 be attempted (if it hasn't already been attempted). 61 62 The behavior of returning a `Die` from `next_state` is currently 63 undefined. 64 65 Args: 66 state: A hashable object indicating the state before rolling the 67 current outcome. If this is the first outcome being considered, 68 `state` will be `None`. 69 outcome: The current outcome. 
70 `next_state` will see all rolled outcomes in monotonic order; 71 either ascending or descending depending on `order()`. 72 If there are multiple inputs, the set of outcomes is at 73 least the union of the outcomes of the individual inputs. 74 You can use `extra_outcomes()` to add extra outcomes. 75 *counts: One value (usually an `int`) for each input indicating how 76 many of the current outcome were produced. 77 78 Returns: 79 A hashable object indicating the next state. 80 The special value `icepool.Reroll` can be used to immediately remove 81 the state from consideration, effectively performing a full reroll. 82 """ 83 raise NotImplementedError()
State transition function.

This should produce a state given the previous state, an outcome, the count of that outcome produced by each multiset input, and any `**kwargs` provided to `evaluate()`.

`evaluate()` will always call this with `state, outcome, *counts` as positional arguments. Furthermore, there is no expectation that a subclass be able to handle an arbitrary number of counts.

Thus, you are free to:
- Rename `state` or `outcome` in a subclass.
- Replace `*counts` with a fixed set of parameters.

`**kwargs` are non-multiset arguments that were provided to `evaluate()`. You may replace `**kwargs` with a fixed set of keyword parameters; `final_state()` should take the same set of keyword parameters.

Make sure to handle the base case where `state is None`.

States must be hashable. Currently, they do not have to be orderable. However, this may change in the future, and if they are not totally orderable, you must override `final_outcome` to create totally orderable final outcomes.

By default, this method may receive outcomes in any order. You can adjust this as follows:
- If you want to guarantee ascending or descending order, you can implement `next_state_ascending()` or `next_state_descending()` instead. You can implement both if you want the implementation to depend on order.
- You can raise an `UnsupportedOrderError` if you don't want to support the current order. In this case, the opposite order will then be attempted (if it hasn't already been attempted).

The behavior of returning a `Die` from `next_state` is currently undefined.

Arguments:
- state: A hashable object indicating the state before rolling the current outcome. If this is the first outcome being considered, `state` will be `None`.
- outcome: The current outcome. `next_state` will see all rolled outcomes in monotonic order, either ascending or descending depending on `order()`. If there are multiple inputs, the set of outcomes is at least the union of the outcomes of the individual inputs. You can use `extra_outcomes()` to add extra outcomes.
- *counts: One value (usually an `int`) for each input indicating how many of the current outcome were produced.

Returns:
A hashable object indicating the next state. The special value `icepool.Reroll` can be used to immediately remove the state from consideration, effectively performing a full reroll.
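The contract above amounts to folding `next_state` over (outcome, count) pairs in monotonic order. A minimal standalone sketch of that loop in plain Python (hypothetical names, no icepool dependency), with an example transition function that counts even outcomes:

```python
from collections import Counter


def evaluate_sketch(next_state, rolls, ascending=True):
    """Fold a next_state-style function over (outcome, count) pairs in order."""
    counts = Counter(rolls)
    state = None  # next_state must handle this base case itself
    for outcome in sorted(counts, reverse=not ascending):
        state = next_state(state, outcome, counts[outcome])
    return state


def count_evens(state, outcome, count):
    # Base case: state is None before the first outcome.
    total = 0 if state is None else state
    return total + (count if outcome % 2 == 0 else 0)
```

Here `count_evens` works in either order, so it would be suitable as a plain `next_state`; an order-dependent computation (e.g. tracking a running maximum of partial sums) would instead be written as `next_state_ascending` or `next_state_descending`.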
85 def next_state_ascending(self, state: Hashable, outcome: T, /, 86 *counts) -> Hashable: 87 """As next_state() but handles outcomes in ascending order only. 88 89 You can implement both `next_state_ascending()` and 90 `next_state_descending()` if you want to handle both outcome orders 91 with a separate implementation for each. 92 """ 93 raise NotImplementedError()
As `next_state()` but handles outcomes in ascending order only.

You can implement both `next_state_ascending()` and `next_state_descending()` if you want to handle both outcome orders with a separate implementation for each.
95 def next_state_descending(self, state: Hashable, outcome: T, /, 96 *counts) -> Hashable: 97 """As next_state() but handles outcomes in descending order only. 98 99 You can implement both `next_state_ascending()` and 100 `next_state_descending()` if you want to handle both outcome orders 101 with a separate implementation for each. 102 """ 103 raise NotImplementedError()
As `next_state()` but handles outcomes in descending order only.

You can implement both `next_state_ascending()` and `next_state_descending()` if you want to handle both outcome orders with a separate implementation for each.
105 def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]: 106 """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`. 107 108 These will be seen by `next_state` even if they do not appear in the 109 input(s). The default implementation returns `()`, or no additional 110 outcomes. 111 112 If you want `next_state` to see consecutive `int` outcomes, you can set 113 `extra_outcomes = icepool.MultisetEvaluator.consecutive`. 114 See `consecutive()` below. 115 116 Args: 117 outcomes: The outcomes that could be produced by the inputs, in 118 ascending order. 119 """ 120 return ()
Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.

These will be seen by `next_state` even if they do not appear in the input(s). The default implementation returns `()`, or no additional outcomes.

If you want `next_state` to see consecutive `int` outcomes, you can set `extra_outcomes = icepool.MultisetEvaluator.consecutive`. See `consecutive()` below.

Arguments:
- outcomes: The outcomes that could be produced by the inputs, in ascending order.
122 def final_outcome( 123 self, final_state: Hashable, /, **kwargs: Hashable 124 ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType': 125 """Optional method to generate a final output outcome from a final state. 126 127 By default, the final outcome is equal to the final state. 128 Note that `None` is not a valid outcome for a `Die`, 129 and if there are no outcomes, `final_outcome` will immediately 130 be called with `final_state=None`. 131 Subclasses that want to handle this case should explicitly define what 132 happens. 133 134 `**kwargs` are non-multiset arguments that were provided to 135 `evaluate()`. 136 You may replace `**kwargs` with a fixed set of keyword parameters; 137 `final_state()` should take the same set of keyword parameters. 138 139 Args: 140 final_state: A state after all outcomes have been processed. 141 kwargs: Any kwargs that were passed to the evaluation. 142 143 Returns: 144 A final outcome that will be used as part of constructing the result `Die`. 145 As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`. 146 """ 147 # If not overridden, the final_state should have type U_co. 148 return cast(U_co, final_state)
Optional method to generate a final output outcome from a final state.

By default, the final outcome is equal to the final state. Note that `None` is not a valid outcome for a `Die`, and if there are no outcomes, `final_outcome` will immediately be called with `final_state=None`. Subclasses that want to handle this case should explicitly define what happens.

`**kwargs` are non-multiset arguments that were provided to `evaluate()`. You may replace `**kwargs` with a fixed set of keyword parameters; `final_state()` should take the same set of keyword parameters.

Arguments:
- final_state: A state after all outcomes have been processed.
- kwargs: Any kwargs that were passed to the evaluation.

Returns:
A final outcome that will be used as part of constructing the result `Die`. As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
164 def next_state_method(self, order: Order, 165 /) -> Callable[..., Hashable] | None: 166 """Returns the bound next_state* method for the given order, or `None` if that order is not supported. 167 168 `next_state_ascending` and `next_state_descending` are prioritized over 169 `next_state`. 170 """ 171 if order > 0: 172 return self._get_next_state_method( 173 'next_state_ascending') or self._get_next_state_method( 174 'next_state') 175 elif order < 0: 176 return self._get_next_state_method( 177 'next_state_descending') or self._get_next_state_method( 178 'next_state') 179 else: 180 return self._get_next_state_method( 181 'next_state_ascending') or self._get_next_state_method( 182 'next_state_descending') or self._get_next_state_method( 183 'next_state')
Returns the bound next_state* method for the given order, or `None` if that order is not supported.

`next_state_ascending` and `next_state_descending` are prioritized over `next_state`.
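The override detection behind this relies on comparing a bound method's `__func__` against the base class attribute. A standalone sketch of the pattern (hypothetical `Base`/`AscendingOnly` classes, not icepool's actual ones):

```python
class Base:

    def next_state(self, state, outcome, count):
        raise NotImplementedError()

    def next_state_ascending(self, state, outcome, count):
        raise NotImplementedError()

    def pick_method(self, name):
        """Return the bound method if a subclass overrode it, else None."""
        method = getattr(self, name, None)
        if method is None:
            return None
        func = getattr(method, '__func__', method)
        # An un-overridden method's __func__ is Base's own function object.
        if func is getattr(Base, name):
            return None
        return method


class AscendingOnly(Base):

    def next_state_ascending(self, state, outcome, count):
        return (0 if state is None else state) + count
```

With this, `AscendingOnly().pick_method('next_state_ascending')` returns the bound override, while `pick_method('next_state')` returns `None`, mirroring how `next_state_method()` falls back through the candidates with `or`.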
185 def consecutive(self, outcomes: Sequence[int]) -> Collection[int]: 186 """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes. 187 188 Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this. 189 190 Returns: 191 All `int`s from the min outcome to the max outcome among the inputs, 192 inclusive. 193 194 Raises: 195 TypeError: if any input has any non-`int` outcome. 196 """ 197 if not outcomes: 198 return () 199 200 if any(not isinstance(x, int) for x in outcomes): 201 raise TypeError( 202 "consecutive cannot be used with outcomes of type other than 'int'." 203 ) 204 205 return range(outcomes[0], outcomes[-1] + 1)
Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.

Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this.

Returns:
All `int`s from the min outcome to the max outcome among the inputs, inclusive.

Raises:
- TypeError: if any input has any non-`int` outcome.
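The practical effect is gap-filling: a sparse pool gains the missing intermediate outcomes, so `next_state` can track things like runs. A quick dependency-free check of the documented behavior:

```python
def consecutive(outcomes):
    # Mirrors the documented behavior: all ints from min to max, inclusive.
    # `outcomes` is given in ascending order, as in extra_outcomes().
    if not outcomes:
        return ()
    if any(not isinstance(x, int) for x in outcomes):
        raise TypeError('consecutive requires int outcomes.')
    return range(outcomes[0], outcomes[-1] + 1)


# A sparse pool [1, 4, 6] gains the gap outcomes 2, 3, and 5.
gap_fill = sorted(set(consecutive([1, 4, 6])) - {1, 4, 6})
```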
23class Order(enum.IntEnum): 24 """Can be used to define what order outcomes are seen in by MultisetEvaluators.""" 25 Ascending = 1 26 Descending = -1 27 Any = 0 28 29 def merge(*orders: 'Order') -> 'Order': 30 """Merges the given Orders. 31 32 Returns: 33 `Any` if all arguments are `Any`. 34 `Ascending` if there is at least one `Ascending` in the arguments. 35 `Descending` if there is at least one `Descending` in the arguments. 36 37 Raises: 38 `ConflictingOrderError` if both `Ascending` and `Descending` are in 39 the arguments. 40 """ 41 result = Order.Any 42 for order in orders: 43 if (result > 0 and order < 0) or (result < 0 and order > 0): 44 raise ConflictingOrderError( 45 f'Conflicting orders {orders}.\n' + 46 'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.' 47 ) 48 if result == Order.Any: 49 result = order 50 return result 51 52 def __neg__(self) -> 'Order': 53 return Order(-self.value)
Can be used to define what order outcomes are seen in by MultisetEvaluators.
29 def merge(*orders: 'Order') -> 'Order': 30 """Merges the given Orders. 31 32 Returns: 33 `Any` if all arguments are `Any`. 34 `Ascending` if there is at least one `Ascending` in the arguments. 35 `Descending` if there is at least one `Descending` in the arguments. 36 37 Raises: 38 `ConflictingOrderError` if both `Ascending` and `Descending` are in 39 the arguments. 40 """ 41 result = Order.Any 42 for order in orders: 43 if (result > 0 and order < 0) or (result < 0 and order > 0): 44 raise ConflictingOrderError( 45 f'Conflicting orders {orders}.\n' + 46 'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.' 47 ) 48 if result == Order.Any: 49 result = order 50 return result
Merges the given Orders.

Returns:
`Any` if all arguments are `Any`. `Ascending` if there is at least one `Ascending` in the arguments. `Descending` if there is at least one `Descending` in the arguments.

Raises:
`ConflictingOrderError` if both `Ascending` and `Descending` are in the arguments.
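The merge rule is a small lattice: `Any` is the identity and opposite signs conflict. A standalone re-implementation of the logic shown in the source above (using a plain `ValueError` in place of `ConflictingOrderError`):

```python
import enum


class Order(enum.IntEnum):
    Ascending = 1
    Descending = -1
    Any = 0


def merge(*orders):
    """Combine order requirements; raise on an Ascending/Descending conflict."""
    result = Order.Any
    for order in orders:
        # Opposite signs mean one input demands ascending and another descending.
        if (result > 0 and order < 0) or (result < 0 and order > 0):
            raise ValueError(f'Conflicting orders {orders}.')
        if result == Order.Any:
            result = order
    return result
```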
15class ConflictingOrderError(OrderError): 16 """Indicates that two conflicting mandatory outcome orderings were encountered."""
Indicates that two conflicting mandatory outcome orderings were encountered.
19class UnsupportedOrderError(OrderError): 20 """Indicates that the given order is not supported under the current context."""
Indicates that the given order is not supported under the current context.
20class Deck(Population[T_co]): 21 """Sampling without replacement (within a single evaluation). 22 23 Quantities represent duplicates. 24 """ 25 26 _data: Counts[T_co] 27 _deal: int 28 29 @property 30 def _new_type(self) -> type: 31 return Deck 32 33 def __new__(cls, 34 outcomes: Sequence | Mapping[Any, int] = (), 35 times: Sequence[int] | int = 1) -> 'Deck[T_co]': 36 """Constructor for a `Deck`. 37 38 All quantities must be non-negative. Outcomes with zero quantity will be 39 omitted. 40 41 Args: 42 outcomes: The cards of the `Deck`. This can be one of the following: 43 * A `Sequence` of outcomes. Duplicates will contribute 44 quantity for each appearance. 45 * A `Mapping` from outcomes to quantities. 46 47 Each outcome may be one of the following: 48 * An outcome, which must be hashable and totally orderable. 49 * A `Deck`, which will be flattened into the result. If a 50 `times` is assigned to the `Deck`, the entire `Deck` will 51 be duplicated that many times. 52 times: Multiplies the number of times each element of `outcomes` 53 will be put into the `Deck`. 54 `times` can either be a sequence of the same length as 55 `outcomes` or a single `int` to apply to all elements of 56 `outcomes`. 57 """ 58 59 if icepool.population.again.contains_again(outcomes): 60 raise ValueError('Again cannot be used with Decks.') 61 62 outcomes, times = icepool.creation_args.itemize(outcomes, times) 63 64 if len(outcomes) == 1 and times[0] == 1 and isinstance( 65 outcomes[0], Deck): 66 return outcomes[0] 67 68 counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck( 69 outcomes, times) 70 71 return Deck._new_raw(counts) 72 73 @classmethod 74 def _new_raw(cls, data: Counts[T_co]) -> 'Deck[T_co]': 75 """Creates a new `Deck` using already-processed arguments. 76 77 Args: 78 data: At this point, this is a Counts. 
79 """ 80 self = super(Population, cls).__new__(cls) 81 self._data = data 82 return self 83 84 def keys(self) -> CountsKeysView[T_co]: 85 return self._data.keys() 86 87 def values(self) -> CountsValuesView: 88 return self._data.values() 89 90 def items(self) -> CountsItemsView[T_co]: 91 return self._data.items() 92 93 def __getitem__(self, outcome) -> int: 94 return self._data[outcome] 95 96 def __iter__(self) -> Iterator[T_co]: 97 return iter(self.keys()) 98 99 def __len__(self) -> int: 100 return len(self._data) 101 102 size = icepool.Population.denominator 103 104 @cached_property 105 def _popped_min(self) -> tuple['Deck[T_co]', int]: 106 return self._new_raw(self._data.remove_min()), self.quantities()[0] 107 108 def _pop_min(self) -> tuple['Deck[T_co]', int]: 109 """A `Deck` with the min outcome removed.""" 110 return self._popped_min 111 112 @cached_property 113 def _popped_max(self) -> tuple['Deck[T_co]', int]: 114 return self._new_raw(self._data.remove_max()), self.quantities()[-1] 115 116 def _pop_max(self) -> tuple['Deck[T_co]', int]: 117 """A `Deck` with the max outcome removed.""" 118 return self._popped_max 119 120 @overload 121 def deal(self, hand_size: int, /) -> 'icepool.Deal[T_co]': 122 ... 123 124 @overload 125 def deal(self, hand_sizes: Iterable[int]) -> 'icepool.MultiDeal[T_co]': 126 ... 127 128 @overload 129 def deal( 130 self, hand_sizes: int | Iterable[int] 131 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co]': 132 ... 133 134 def deal( 135 self, hand_sizes: int | Iterable[int] 136 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co]': 137 """Deals the specified number of cards from this deck. 138 139 Args: 140 hand_sizes: Either an integer, in which case a `Deal` will be 141 returned, or an iterable of multiple hand sizes, in which case a 142 `MultiDeal` will be returned. 143 """ 144 if isinstance(hand_sizes, int): 145 return icepool.Deal(self, hand_sizes) 146 else: 147 return icepool.MultiDeal(self, hand_sizes) 148 149 # Binary operators. 
150 151 def additive_union( 152 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 153 """Both decks merged together.""" 154 return functools.reduce(operator.add, args, initial=self) 155 156 def __add__(self, 157 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 158 data = Counter(self._data) 159 for outcome, count in Counter(other).items(): 160 data[outcome] += count 161 return Deck(+data) 162 163 __radd__ = __add__ 164 165 def difference(self, *args: 166 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 167 """This deck with the other cards removed (but not below zero of each card).""" 168 return functools.reduce(operator.sub, args, initial=self) 169 170 def __sub__(self, 171 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 172 data = Counter(self._data) 173 for outcome, count in Counter(other).items(): 174 data[outcome] -= count 175 return Deck(+data) 176 177 def __rsub__(self, 178 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 179 data = Counter(other) 180 for outcome, count in self.items(): 181 data[outcome] -= count 182 return Deck(+data) 183 184 def intersection( 185 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 186 """The cards that both decks have.""" 187 return functools.reduce(operator.and_, args, initial=self) 188 189 def __and__(self, 190 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 191 data: Counter[T_co] = Counter() 192 for outcome, count in Counter(other).items(): 193 data[outcome] = min(self.get(outcome, 0), count) 194 return Deck(+data) 195 196 __rand__ = __and__ 197 198 def union(self, *args: 199 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 200 """As many of each card as the deck that has more of them.""" 201 return functools.reduce(operator.or_, args, initial=self) 202 203 def __or__(self, 204 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 205 data = Counter(self._data) 206 for outcome, count in Counter(other).items(): 207 
data[outcome] = max(data[outcome], count) 208 return Deck(+data) 209 210 __ror__ = __or__ 211 212 def symmetric_difference( 213 self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 214 """As many of each card as the deck that has more of them.""" 215 return self ^ other 216 217 def __xor__(self, 218 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 219 data = Counter(self._data) 220 for outcome, count in Counter(other).items(): 221 data[outcome] = abs(data[outcome] - count) 222 return Deck(+data) 223 224 def __mul__(self, other: int) -> 'Deck[T_co]': 225 if not isinstance(other, int): 226 return NotImplemented 227 return self.multiply_quantities(other) 228 229 __rmul__ = __mul__ 230 231 def __floordiv__(self, other: int) -> 'Deck[T_co]': 232 if not isinstance(other, int): 233 return NotImplemented 234 return self.divide_quantities(other) 235 236 def __mod__(self, other: int) -> 'Deck[T_co]': 237 if not isinstance(other, int): 238 return NotImplemented 239 return self.modulo_quantities(other) 240 241 def map( 242 self, 243 repl: 244 'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]', 245 /, 246 star: bool | None = None) -> 'Deck[U]': 247 """Maps outcomes of this `Deck` to other outcomes. 248 249 Args: 250 repl: One of the following: 251 * A callable returning a new outcome for each old outcome. 252 * A map from old outcomes to new outcomes. 253 Unmapped old outcomes stay the same. 254 The new outcomes may be `Deck`s, in which case one card is 255 replaced with several. This is not recommended. 256 star: Whether outcomes should be unpacked into separate arguments 257 before sending them to a callable `repl`. 258 If not provided, this will be guessed based on the function 259 signature. 260 """ 261 # Convert to a single-argument function. 
262 if callable(repl): 263 if star is None: 264 star = infer_star(repl) 265 if star: 266 267 def transition_function(outcome): 268 return repl(*outcome) 269 else: 270 271 def transition_function(outcome): 272 return repl(outcome) 273 else: 274 # repl is a mapping. 275 def transition_function(outcome): 276 if outcome in repl: 277 return repl[outcome] 278 else: 279 return outcome 280 281 return Deck( 282 [transition_function(outcome) for outcome in self.outcomes()], 283 times=self.quantities()) 284 285 @cached_property 286 def _sequence_cache( 287 self) -> 'MutableSequence[icepool.Die[tuple[T_co, ...]]]': 288 return [icepool.Die([()])] 289 290 def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]': 291 """Possible sequences produced by dealing from this deck a number of times. 292 293 This is extremely expensive computationally. If you don't care about 294 order, use `deal()` instead. 295 """ 296 if deals < 0: 297 raise ValueError('The number of cards dealt cannot be negative.') 298 for i in range(len(self._sequence_cache), deals + 1): 299 300 def transition(curr): 301 remaining = icepool.Die(self - curr) 302 return icepool.map(lambda curr, next: curr + (next, ), curr, 303 remaining) 304 305 result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[ 306 i - 1].map(transition) 307 self._sequence_cache.append(result) 308 return result 309 310 @cached_property 311 def _hash_key(self) -> tuple: 312 return Deck, tuple(self.items()) 313 314 def __eq__(self, other) -> bool: 315 if not isinstance(other, Deck): 316 return False 317 return self._hash_key == other._hash_key 318 319 @cached_property 320 def _hash(self) -> int: 321 return hash(self._hash_key) 322 323 def __hash__(self) -> int: 324 return self._hash 325 326 def __repr__(self) -> str: 327 items_string = ', '.join(f'{repr(outcome)}: {quantity}' 328 for outcome, quantity in self.items()) 329 return type(self).__qualname__ + '({' + items_string + '})'
Sampling without replacement (within a single evaluation).
Quantities represent duplicates.
296 def denominator(self) -> int: 297 """The sum of all quantities (e.g. weights or duplicates). 298 299 For the number of unique outcomes, use `len()`. 300 """ 301 return self._denominator
The sum of all quantities (e.g. weights or duplicates).
For the number of unique outcomes, use `len()`.
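The distinction matters for decks with duplicates. With a plain `collections.Counter` standing in for a deck's quantities (an illustrative stand-in, not icepool's `Deck`):

```python
from collections import Counter

# A pinochle-like deck fragment: quantities represent duplicates.
deck = Counter({'9': 2, '10': 2, 'J': 2})

denominator = sum(deck.values())  # total cards, analogous to denominator()/size
unique = len(deck)                # unique outcomes, analogous to len()
```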
134 def deal( 135 self, hand_sizes: int | Iterable[int] 136 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co]': 137 """Deals the specified number of cards from this deck. 138 139 Args: 140 hand_sizes: Either an integer, in which case a `Deal` will be 141 returned, or an iterable of multiple hand sizes, in which case a 142 `MultiDeal` will be returned. 143 """ 144 if isinstance(hand_sizes, int): 145 return icepool.Deal(self, hand_sizes) 146 else: 147 return icepool.MultiDeal(self, hand_sizes)
151 def additive_union( 152 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 153 """Both decks merged together.""" 154 return functools.reduce(operator.add, args, initial=self)
Both decks merged together.
165 def difference(self, *args: 166 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 167 """This deck with the other cards removed (but not below zero of each card).""" 168 return functools.reduce(operator.sub, args, initial=self)
This deck with the other cards removed (but not below zero of each card).
184 def intersection( 185 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 186 """The cards that both decks have.""" 187 return functools.reduce(operator.and_, args, initial=self)
The cards that both decks have.
198 def union(self, *args: 199 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 200 """As many of each card as the deck that has more of them.""" 201 return functools.reduce(operator.or_, args, initial=self)
As many of each card as the deck that has more of them.
212 def symmetric_difference( 213 self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 214 """The absolute difference in quantity of each card between the two decks.""" 215 return self ^ other
The absolute difference in quantity of each card between the two decks.
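The binary operators above all reduce to per-card quantity arithmetic, which the source implements via `collections.Counter`. A standalone sketch of the same semantics (illustrative only, not icepool's actual classes):

```python
from collections import Counter

a = Counter({'ace': 2, 'king': 1})
b = Counter({'ace': 1, 'queen': 3})
keys = a.keys() | b.keys()

union = Counter({k: max(a[k], b[k]) for k in keys})      # union(): the larger count
intersection = Counter({k: min(a[k], b[k]) for k in keys})  # intersection()
symmetric = Counter({k: abs(a[k] - b[k]) for k in keys})    # symmetric_difference()
additive = a + b   # additive_union(): quantities add
difference = a - b  # difference(): not below zero of each card
```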
241 def map( 242 self, 243 repl: 244 'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]', 245 /, 246 star: bool | None = None) -> 'Deck[U]': 247 """Maps outcomes of this `Deck` to other outcomes. 248 249 Args: 250 repl: One of the following: 251 * A callable returning a new outcome for each old outcome. 252 * A map from old outcomes to new outcomes. 253 Unmapped old outcomes stay the same. 254 The new outcomes may be `Deck`s, in which case one card is 255 replaced with several. This is not recommended. 256 star: Whether outcomes should be unpacked into separate arguments 257 before sending them to a callable `repl`. 258 If not provided, this will be guessed based on the function 259 signature. 260 """ 261 # Convert to a single-argument function. 262 if callable(repl): 263 if star is None: 264 star = infer_star(repl) 265 if star: 266 267 def transition_function(outcome): 268 return repl(*outcome) 269 else: 270 271 def transition_function(outcome): 272 return repl(outcome) 273 else: 274 # repl is a mapping. 275 def transition_function(outcome): 276 if outcome in repl: 277 return repl[outcome] 278 else: 279 return outcome 280 281 return Deck( 282 [transition_function(outcome) for outcome in self.outcomes()], 283 times=self.quantities())
Maps outcomes of this `Deck` to other outcomes.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A map from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be `Deck`s, in which case one card is replaced with several. This is not recommended.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `repl`. If not provided, this will be guessed based on the function signature.
290 def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]': 291 """Possible sequences produced by dealing from this deck a number of times. 292 293 This is extremely expensive computationally. If you don't care about 294 order, use `deal()` instead. 295 """ 296 if deals < 0: 297 raise ValueError('The number of cards dealt cannot be negative.') 298 for i in range(len(self._sequence_cache), deals + 1): 299 300 def transition(curr): 301 remaining = icepool.Die(self - curr) 302 return icepool.map(lambda curr, next: curr + (next, ), curr, 303 remaining) 304 305 result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[ 306 i - 1].map(transition) 307 self._sequence_cache.append(result) 308 return result
Possible sequences produced by dealing from this deck a number of times.
This is extremely expensive computationally. If you don't care about
order, use `deal()` instead.
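A rough sense of why `sequence()` is so much more expensive than `deal()`: ordered deals multiply the unordered count by a factorial. For a deck of distinct cards (a simplifying assumption; duplicates reduce the counts):

```python
import math

deck_size, hand_size = 52, 5

unordered = math.comb(deck_size, hand_size)  # hands, as deal() sees them
ordered = math.perm(deck_size, hand_size)    # sequences, as sequence() sees them

# ordered exceeds unordered by a factor of hand_size!
ratio = ordered // unordered
```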
16class Deal(KeepGenerator[T]): 17 """Represents an unordered deal of a single hand from a `Deck`.""" 18 19 _deck: 'icepool.Deck[T]' 20 _hand_size: int 21 22 def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None: 23 """Constructor. 24 25 For algorithmic reasons, you must pre-commit to the number of cards to 26 deal. 27 28 It is permissible to deal zero cards from an empty deck, but not all 29 evaluators will handle this case, especially if they depend on the 30 outcome type. Dealing zero cards from a non-empty deck does not have 31 this issue. 32 33 Args: 34 deck: The `Deck` to deal from. 35 hand_size: How many cards to deal. 36 """ 37 if hand_size < 0: 38 raise ValueError('hand_size cannot be negative.') 39 if hand_size > deck.size(): 40 raise ValueError( 41 'The number of cards dealt cannot exceed the size of the deck.' 42 ) 43 self._deck = deck 44 self._hand_size = hand_size 45 self._keep_tuple = (1, ) * hand_size 46 47 @classmethod 48 def _new_raw(cls, deck: 'icepool.Deck[T]', hand_size: int, 49 keep_tuple: tuple[int, ...]) -> 'Deal[T]': 50 self = super(Deal, cls).__new__(cls) 51 self._deck = deck 52 self._hand_size = hand_size 53 self._keep_tuple = keep_tuple 54 return self 55 56 def _set_keep_tuple(self, keep_tuple: tuple[int, ...]) -> 'Deal[T]': 57 return Deal._new_raw(self._deck, self._hand_size, keep_tuple) 58 59 def deck(self) -> 'icepool.Deck[T]': 60 """The `Deck` the cards are dealt from.""" 61 return self._deck 62 63 def hand_sizes(self) -> tuple[int, ...]: 64 """The number of cards dealt to each hand as a tuple.""" 65 return (self._hand_size, ) 66 67 def total_cards_dealt(self) -> int: 68 """The total number of cards dealt.""" 69 return self._hand_size 70 71 def outcomes(self) -> CountsKeysView[T]: 72 """The outcomes of the `Deck` in ascending order. 73 74 These are also the `keys` of the `Deck` as a `Mapping`. 75 Prefer to use the name `outcomes`. 
76 """ 77 return self.deck().outcomes() 78 79 def _is_resolvable(self) -> bool: 80 return len(self.outcomes()) > 0 81 82 def denominator(self) -> int: 83 return icepool.math.comb(self.deck().size(), self._hand_size) 84 85 def _prepare(self): 86 yield self, 1 87 88 def _generate_min(self, min_outcome) -> Iterator[tuple['Deal', int, int]]: 89 if not self.outcomes() or min_outcome != self.min_outcome(): 90 yield self, 0, 1 91 return 92 93 popped_deck, deck_count = self.deck()._pop_min() 94 95 min_count = max(0, deck_count + self._hand_size - self.deck().size()) 96 max_count = min(deck_count, self._hand_size) 97 skip_weight = None 98 for count in range(min_count, max_count + 1): 99 popped_keep_tuple, result_count = pop_min_from_keep_tuple( 100 self.keep_tuple(), count) 101 popped_deal = Deal._new_raw(popped_deck, self._hand_size - count, 102 popped_keep_tuple) 103 weight = icepool.math.comb(deck_count, count) 104 if not any(popped_keep_tuple): 105 # Dump all dice in exchange for the denominator. 
106 skip_weight = (skip_weight 107 or 0) + weight * popped_deal.denominator() 108 continue 109 yield popped_deal, result_count, weight 110 111 if skip_weight is not None: 112 popped_deal = Deal._new_raw(popped_deck, 0, ()) 113 yield popped_deal, sum(self.keep_tuple()), skip_weight 114 115 def _generate_max(self, max_outcome) -> Iterator[tuple['Deal', int, int]]: 116 if not self.outcomes() or max_outcome != self.max_outcome(): 117 yield self, 0, 1 118 return 119 120 popped_deck, deck_count = self.deck()._pop_max() 121 122 min_count = max(0, deck_count + self._hand_size - self.deck().size()) 123 max_count = min(deck_count, self._hand_size) 124 skip_weight = None 125 for count in range(min_count, max_count + 1): 126 popped_keep_tuple, result_count = pop_max_from_keep_tuple( 127 self.keep_tuple(), count) 128 popped_deal = Deal._new_raw(popped_deck, self._hand_size - count, 129 popped_keep_tuple) 130 weight = icepool.math.comb(deck_count, count) 131 if not any(popped_keep_tuple): 132 # Dump all dice in exchange for the denominator. 
133 skip_weight = (skip_weight 134 or 0) + weight * popped_deal.denominator() 135 continue 136 yield popped_deal, result_count, weight 137 138 if skip_weight is not None: 139 popped_deal = Deal._new_raw(popped_deck, 0, ()) 140 yield popped_deal, sum(self.keep_tuple()), skip_weight 141 142 def local_order_preference(self) -> tuple[Order, OrderReason]: 143 lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple()) 144 if lo_skip > hi_skip: 145 return Order.Descending, OrderReason.KeepSkip 146 if hi_skip > lo_skip: 147 return Order.Ascending, OrderReason.KeepSkip 148 149 return Order.Any, OrderReason.NoPreference 150 151 @cached_property 152 def _local_hash_key(self) -> Hashable: 153 return Deal, self.deck(), self._hand_size, self._keep_tuple 154 155 def __repr__(self) -> str: 156 return type( 157 self 158 ).__qualname__ + f'({repr(self.deck())}, hand_size={self._hand_size})' 159 160 def __str__(self) -> str: 161 return type( 162 self 163 ).__qualname__ + f' of hand_size={self._hand_size} from deck:\n' + str( 164 self.deck())
Represents an unordered deal of a single hand from a `Deck`.
    def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
        """Constructor.

        For algorithmic reasons, you must pre-commit to the number of cards to
        deal.

        It is permissible to deal zero cards from an empty deck, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dealing zero cards from a non-empty deck does not have
        this issue.

        Args:
            deck: The `Deck` to deal from.
            hand_size: How many cards to deal.
        """
        if hand_size < 0:
            raise ValueError('hand_size cannot be negative.')
        if hand_size > deck.size():
            raise ValueError(
                'The number of cards dealt cannot exceed the size of the deck.'
            )
        self._deck = deck
        self._hand_size = hand_size
        self._keep_tuple = (1, ) * hand_size
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The `Deck` to deal from.
- hand_size: How many cards to deal.
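The `denominator()` of a `Deal` is a single binomial coefficient: the number of distinct unordered hands of `hand_size` cards. A stdlib-only sketch (the 52-card deck and four-copy outcome are made-up illustration values, not part of icepool's API):

```python
from math import comb

# Hypothetical 52-card deck dealt as a single 5-card hand.
deck_size, hand_size = 52, 5

# Deal.denominator() reduces to one binomial coefficient:
# the number of distinct unordered hands.
denominator = comb(deck_size, hand_size)  # 2598960

# Hypergeometric weights for drawing exactly c copies of an outcome
# that appears deck_count times in the deck (e.g. four aces).
deck_count = 4
weights = [comb(deck_count, c) * comb(deck_size - deck_count, hand_size - c)
           for c in range(min(deck_count, hand_size) + 1)]

# The per-count weights partition the denominator (Vandermonde's identity),
# which is why the generators above can shed counts without skewing totals.
assert sum(weights) == denominator
```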
    def deck(self) -> 'icepool.Deck[T]':
        """The `Deck` the cards are dealt from."""
        return self._deck
The `Deck` the cards are dealt from.
    def hand_sizes(self) -> tuple[int, ...]:
        """The number of cards dealt to each hand as a tuple."""
        return (self._hand_size, )
The number of cards dealt to each hand as a tuple.
    def total_cards_dealt(self) -> int:
        """The total number of cards dealt."""
        return self._hand_size
The total number of cards dealt.
    def local_order_preference(self) -> tuple[Order, OrderReason]:
        lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple())
        if lo_skip > hi_skip:
            return Order.Descending, OrderReason.KeepSkip
        if hi_skip > lo_skip:
            return Order.Ascending, OrderReason.KeepSkip

        return Order.Any, OrderReason.NoPreference
Any ordering that is preferred or required by this expression node.
class MultiDeal(MultisetTupleGenerator[T]):
    """Represents an unordered deal of multiple hands from a `Deck`."""

    _deck: 'icepool.Deck[T]'
    _hand_sizes: tuple[int, ...]

    def __init__(self, deck: 'icepool.Deck[T]',
                 hand_sizes: Iterable[int]) -> None:
        """Constructor.

        For algorithmic reasons, you must pre-commit to the number of cards to
        deal for each hand.

        It is permissible to deal zero cards from an empty deck, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dealing zero cards from a non-empty deck does not have
        this issue.

        Args:
            deck: The `Deck` to deal from.
            hand_sizes: How many cards to deal. If multiple `hand_sizes` are
                provided, `MultisetEvaluator.next_state` will receive one count
                per hand in order. Try to keep the number of hands to a
                minimum, as this can be computationally intensive.
        """
        if any(hand < 0 for hand in hand_sizes):
            raise ValueError('hand_sizes cannot be negative.')
        self._deck = deck
        self._hand_sizes = tuple(hand_sizes)
        if self.total_cards_dealt() > self.deck().size():
            raise ValueError(
                'The total number of cards dealt cannot exceed the size of the deck.'
            )

    @classmethod
    def _new_raw(cls, deck: 'icepool.Deck[T]',
                 hand_sizes: tuple[int, ...]) -> 'MultiDeal[T]':
        self = super(MultiDeal, cls).__new__(cls)
        self._deck = deck
        self._hand_sizes = hand_sizes
        return self

    def deck(self) -> 'icepool.Deck[T]':
        """The `Deck` the cards are dealt from."""
        return self._deck

    def hand_sizes(self) -> tuple[int, ...]:
        """The number of cards dealt to each hand as a tuple."""
        return self._hand_sizes

    def total_cards_dealt(self) -> int:
        """The total number of cards dealt."""
        return sum(self.hand_sizes())

    def outcomes(self) -> CountsKeysView[T]:
        """The outcomes of the `Deck` in ascending order.

        These are also the `keys` of the `Deck` as a `Mapping`.
        Prefer to use the name `outcomes`.
        """
        return self.deck().outcomes()

    def _is_resolvable(self) -> bool:
        return len(self.outcomes()) > 0

    @cached_property
    def _denominator(self) -> int:
        d_total = icepool.math.comb(self.deck().size(),
                                    self.total_cards_dealt())
        d_split = math.prod(
            icepool.math.comb(self.total_cards_dealt(), h)
            for h in self.hand_sizes()[1:])
        return d_total * d_split

    def denominator(self) -> int:
        return self._denominator

    def _prepare(self):
        yield self, 1

    def _generate_common(
        self, popped_deck: 'icepool.Deck[T]', deck_count: int
    ) -> Iterator[tuple['MultiDeal', tuple[int, ...], int]]:
        """Common implementation for _generate_min and _generate_max."""
        min_count = max(
            0, deck_count + self.total_cards_dealt() - self.deck().size())
        max_count = min(deck_count, self.total_cards_dealt())
        for count_total in range(min_count, max_count + 1):
            weight_total = icepool.math.comb(deck_count, count_total)
            # The "deck" is the hand sizes.
            for counts, weight_split in iter_hypergeom(self.hand_sizes(),
                                                       count_total):
                popped_deal = MultiDeal._new_raw(
                    popped_deck,
                    tuple(h - c for h, c in zip(self.hand_sizes(), counts)))
                weight = weight_total * weight_split
                yield popped_deal, counts, weight

    def _generate_min(
            self,
            min_outcome) -> Iterator[tuple['MultiDeal', tuple[int, ...], int]]:
        if not self.outcomes() or min_outcome != self.min_outcome():
            yield self, (0, ), 1
            return

        popped_deck, deck_count = self.deck()._pop_min()

        yield from self._generate_common(popped_deck, deck_count)

    def _generate_max(
            self,
            max_outcome) -> Iterator[tuple['MultiDeal', tuple[int, ...], int]]:
        if not self.outcomes() or max_outcome != self.max_outcome():
            yield self, (0, ), 1
            return

        popped_deck, deck_count = self.deck()._pop_max()

        yield from self._generate_common(popped_deck, deck_count)

    def local_order_preference(self) -> tuple[Order, OrderReason]:
        return Order.Any, OrderReason.NoPreference

    @cached_property
    def _local_hash_key(self) -> Hashable:
        return MultiDeal, self.deck(), self.hand_sizes()

    def __repr__(self) -> str:
        return type(
            self
        ).__qualname__ + f'({repr(self.deck())}, hand_sizes={self.hand_sizes()})'

    def __str__(self) -> str:
        return type(
            self
        ).__qualname__ + f' of hand_sizes={self.hand_sizes()} from deck:\n' + str(
            self.deck())
Represents an unordered deal of multiple hands from a `Deck`.
    def __init__(self, deck: 'icepool.Deck[T]',
                 hand_sizes: Iterable[int]) -> None:
        """Constructor.

        For algorithmic reasons, you must pre-commit to the number of cards to
        deal for each hand.

        It is permissible to deal zero cards from an empty deck, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dealing zero cards from a non-empty deck does not have
        this issue.

        Args:
            deck: The `Deck` to deal from.
            hand_sizes: How many cards to deal. If multiple `hand_sizes` are
                provided, `MultisetEvaluator.next_state` will receive one count
                per hand in order. Try to keep the number of hands to a
                minimum, as this can be computationally intensive.
        """
        if any(hand < 0 for hand in hand_sizes):
            raise ValueError('hand_sizes cannot be negative.')
        self._deck = deck
        self._hand_sizes = tuple(hand_sizes)
        if self.total_cards_dealt() > self.deck().size():
            raise ValueError(
                'The total number of cards dealt cannot exceed the size of the deck.'
            )
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal for each hand.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The `Deck` to deal from.
- hand_sizes: How many cards to deal. If multiple `hand_sizes` are provided, `MultisetEvaluator.next_state` will receive one count per hand in order. Try to keep the number of hands to a minimum, as this can be computationally intensive.
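For two hands, the denominator factors as choosing all dealt cards at once and then splitting them between the hands, which agrees with dealing the hands sequentially. A stdlib check with made-up deck and hand sizes:

```python
from math import comb

# Hypothetical deck of 10 cards, dealt as hands of 3 and 2.
deck_size = 10
h1, h2 = 3, 2
total = h1 + h2

# MultiDeal-style factoring: choose all dealt cards, then split them
# between the two hands.
d_total = comb(deck_size, total)   # 252
d_split = comb(total, h2)          # 10
denominator = d_total * d_split

# Sequential dealing: deal hand 1, then hand 2 from the remainder.
sequential = comb(deck_size, h1) * comb(deck_size - h1, h2)

assert denominator == sequential  # both count the same set of deals
```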
    def deck(self) -> 'icepool.Deck[T]':
        """The `Deck` the cards are dealt from."""
        return self._deck
The `Deck` the cards are dealt from.
    def hand_sizes(self) -> tuple[int, ...]:
        """The number of cards dealt to each hand as a tuple."""
        return self._hand_sizes
The number of cards dealt to each hand as a tuple.
    def total_cards_dealt(self) -> int:
        """The total number of cards dealt."""
        return sum(self.hand_sizes())
The total number of cards dealt.
def multiset_function(wrapped: Callable[
    ...,
    'MultisetFunctionRawResult[T, U_co] | tuple[MultisetFunctionRawResult[T, U_co], ...]'],
                      /) -> 'MultisetEvaluatorBase[T, U_co]':
    """EXPERIMENTAL: A decorator that turns a function into a multiset evaluator.

    The provided function should take in arguments representing multisets,
    do a limited set of operations on them (see `MultisetExpression`), and
    finish off with an evaluation. You can return a tuple to perform a joint
    evaluation.

    For example, to create an evaluator which computes the elements each of two
    multisets has that the other doesn't:
    ```python
    @multiset_function
    def two_way_difference(a, b):
        return (a - b).expand(), (b - a).expand()
    ```

    You can send non-multiset variables as keyword arguments
    (recommended to be defined as keyword-only).
    ```python
    @multiset_function
    def count_outcomes(a, *, target):
        return a.keep_outcomes(target).count()

    count_outcomes(a, target=[5, 6])
    ```

    While in theory `@multiset_function` implements late binding similar to
    ordinary Python functions, I recommend using only pure functions.

    Be careful when using control structures: you cannot branch on the value of
    a multiset expression or evaluation, so e.g.

    ```python
    @multiset_function
    def bad(a, b):
        if a == b:
            ...
    ```

    is not allowed.

    `multiset_function` has considerable overhead, being effectively a
    mini-language within Python. For better performance, you can try
    implementing your own subclass of `MultisetEvaluator` directly.

    Args:
        wrapped: This should take in multiset expressions as positional
            arguments, and non-multiset variables as keyword arguments.
    """
    return MultisetFunctionEvaluator(wrapped)
EXPERIMENTAL: A decorator that turns a function into a multiset evaluator.
The provided function should take in arguments representing multisets, do a limited set of operations on them (see `MultisetExpression`), and finish off with an evaluation. You can return a tuple to perform a joint evaluation.
For example, to create an evaluator which computes the elements each of two multisets has that the other doesn't:
@multiset_function
def two_way_difference(a, b):
    return (a - b).expand(), (b - a).expand()
You can send non-multiset variables as keyword arguments (recommended to be defined as keyword-only).
@multiset_function
def count_outcomes(a, *, target):
    return a.keep_outcomes(target).count()

count_outcomes(a, target=[5, 6])
While in theory `@multiset_function` implements late binding similar to ordinary Python functions, I recommend using only pure functions.
Be careful when using control structures: you cannot branch on the value of a multiset expression or evaluation, so e.g.
@multiset_function
def bad(a, b):
    if a == b:
        ...
is not allowed.
`multiset_function` has considerable overhead, being effectively a mini-language within Python. For better performance, you can try implementing your own subclass of `MultisetEvaluator` directly.
Arguments:
- wrapped: This should take in multiset expressions as positional arguments, and non-multiset variables as keyword arguments.
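icepool evaluates `two_way_difference` jointly over a whole probability space, but the multiset arithmetic itself can be illustrated on fixed multisets with `collections.Counter`. This plain-Python analogue is for intuition only and is not part of icepool's API:

```python
from collections import Counter

def two_way_difference(a, b):
    """Plain-Python analogue of the two_way_difference example:
    multiset difference in each direction, via Counter subtraction."""
    ca, cb = Counter(a), Counter(b)
    return sorted((ca - cb).elements()), sorted((cb - ca).elements())

# Elements each multiset has that the other doesn't:
left, right = two_way_difference([1, 2, 2, 3], [2, 3, 3, 4])
assert (left, right) == ([1, 2], [3, 4])
```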
def format_probability_inverse(probability, /, int_start: int = 20):
    """EXPERIMENTAL: Formats the inverse of a value as "1 in N".

    Args:
        probability: The value to be formatted.
        int_start: If N = 1 / probability is between this value and 1 million
            times this value, it will be formatted as an integer. Otherwise it
            will be formatted as a float with precision at least 1 part in
            int_start.
    """
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability > 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'

    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'
EXPERIMENTAL: Formats the inverse of a value as "1 in N".
Arguments:
- probability: The value to be formatted.
- int_start: If N = 1 / probability is between this value and 1 million times this value, it will be formatted as an integer. Otherwise it will be formatted as a float with precision at least 1 part in int_start.
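The thresholds can be exercised directly; the function below is copied from the listing so the examples are self-contained:

```python
import math

# Copied from the format_probability_inverse listing above.
def format_probability_inverse(probability, /, int_start: int = 20):
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability > 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'

    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'

# N between int_start and a million times int_start: integer formatting.
assert format_probability_inverse(1 / 1000) == '1 in 1000'
# N below int_start: float with one decimal of precision.
assert format_probability_inverse(0.5) == '1 in 2.0'
# Out-of-range inputs are rejected.
assert format_probability_inverse(2.0) == 'n/a'
```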
class Wallenius(Generic[T]):
    """EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution.

    This is sampling without replacement with weights, where the entire weight
    of a card goes away when it is pulled.
    """
    _weight_decks: 'MutableMapping[int, icepool.Deck[T]]'
    _weight_die: 'icepool.Die[int]'

    def __init__(self, data: Iterable[tuple[T, int]]
                 | Mapping[T, int | tuple[int, int]]):
        """Constructor.

        Args:
            data: Either an iterable of (outcome, weight), or a mapping from
                outcomes to either weights or (weight, quantity).
        """
        self._weight_decks = {}

        if isinstance(data, Mapping):
            for outcome, v in data.items():
                if isinstance(v, int):
                    weight = v
                    quantity = 1
                else:
                    weight, quantity = v
                self._weight_decks[weight] = self._weight_decks.get(
                    weight, icepool.Deck()).append(outcome, quantity)
        else:
            for outcome, weight in data:
                self._weight_decks[weight] = self._weight_decks.get(
                    weight, icepool.Deck()).append(outcome)

        self._weight_die = icepool.Die({
            weight: weight * deck.denominator()
            for weight, deck in self._weight_decks.items()
        })

    def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]':
        """Deals the specified number of outcomes from the Wallenius.

        The result is a `MultisetExpression` representing the multiset of
        outcomes dealt.
        """
        if hand_size == 0:
            return icepool.Pool([])

        def inner(weights):
            weight_counts = Counter(weights)
            result = None
            for weight, count in weight_counts.items():
                deal = self._weight_decks[weight].deal(count)
                if result is None:
                    result = deal
                else:
                    result = result + deal
            return result

        hand_weights = _wallenius_weights(self._weight_die, hand_size)
        return hand_weights.map_to_pool(inner, star=False)
EXPERIMENTAL: Wallenius' noncentral hypergeometric distribution.
This is sampling without replacement with weights, where the entire weight of a card goes away when it is pulled.
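The defining property, that a pulled card's entire weight leaves the pool, can be checked exactly for a tiny pool by enumerating ordered pulls with `fractions.Fraction`. This brute-force sketch assumes distinct cards and is for illustration only, not icepool's implementation:

```python
from collections import Counter
from fractions import Fraction
from itertools import permutations

def wallenius_hand_probs(weights, hand_size):
    """Exact Wallenius hand probabilities by summing over ordered pulls.

    Each pull is proportional to the card's weight among the remaining
    cards, and the pulled card's whole weight leaves the pool."""
    probs = Counter()
    for order in permutations(weights, hand_size):
        remaining = dict(weights)
        p = Fraction(1)
        for card in order:
            p *= Fraction(remaining[card], sum(remaining.values()))
            del remaining[card]
        probs[frozenset(order)] += p
    return probs

# Hypothetical 3-card pool: A has twice the weight of B or C.
probs = wallenius_hand_probs({'A': 2, 'B': 1, 'C': 1}, 2)
assert sum(probs.values()) == 1
# A's extra weight makes hands containing A more likely.
assert probs[frozenset('AB')] == Fraction(5, 12)
assert probs[frozenset('BC')] == Fraction(2, 12)
```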
    def __init__(self, data: Iterable[tuple[T, int]]
                 | Mapping[T, int | tuple[int, int]]):
        """Constructor.

        Args:
            data: Either an iterable of (outcome, weight), or a mapping from
                outcomes to either weights or (weight, quantity).
        """
        self._weight_decks = {}

        if isinstance(data, Mapping):
            for outcome, v in data.items():
                if isinstance(v, int):
                    weight = v
                    quantity = 1
                else:
                    weight, quantity = v
                self._weight_decks[weight] = self._weight_decks.get(
                    weight, icepool.Deck()).append(outcome, quantity)
        else:
            for outcome, weight in data:
                self._weight_decks[weight] = self._weight_decks.get(
                    weight, icepool.Deck()).append(outcome)

        self._weight_die = icepool.Die({
            weight: weight * deck.denominator()
            for weight, deck in self._weight_decks.items()
        })
Constructor.
Arguments:
- data: Either an iterable of (outcome, weight), or a mapping from outcomes to either weights or (weight, quantity).
    def deal(self, hand_size: int, /) -> 'icepool.MultisetExpression[T]':
        """Deals the specified number of outcomes from the Wallenius.

        The result is a `MultisetExpression` representing the multiset of
        outcomes dealt.
        """
        if hand_size == 0:
            return icepool.Pool([])

        def inner(weights):
            weight_counts = Counter(weights)
            result = None
            for weight, count in weight_counts.items():
                deal = self._weight_decks[weight].deal(count)
                if result is None:
                    result = deal
                else:
                    result = result + deal
            return result

        hand_weights = _wallenius_weights(self._weight_die, hand_size)
        return hand_weights.map_to_pool(inner, star=False)
Deals the specified number of outcomes from the Wallenius.
The result is a `MultisetExpression` representing the multiset of outcomes dealt.