icepool

Package for computing dice and card probabilities.

Starting with `v0.25.1`, you can replace `latest` in the URL with an old version number to get the documentation for that version.

See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html) for examples.

General conventions:

- Instances are immutable (apart from internal caching). Anything that looks like it mutates an instance actually returns a separate instance with the change.
- Unless explicitly specified otherwise, elements with zero quantity, rolls, etc. are considered.
"""Package for computing dice and card probabilities.

Starting with `v0.25.1`, you can replace `latest` in the URL with an old version
number to get the documentation for that version.

See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html)
for examples.

[Visit the project page.](https://github.com/HighDiceRoller/icepool)

General conventions:

* Instances are immutable (apart from internal caching). Anything that looks
  like it mutates an instance actually returns a separate instance with the
  change.
* Unless explicitly specified otherwise, elements with zero quantity, rolls,
  etc. are considered.
"""

__docformat__ = 'google'

__version__ = '1.5.0a0'

from typing import Final

from icepool.typing import Outcome, Order, RerollType

Reroll: Final = RerollType.Reroll
"""Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function
and method descriptions for details.

This effectively removes the outcome from the probability space, along with its
contribution to the denominator.

This can be used for conditional probability by removing all outcomes not
consistent with the given observations.

Operation in specific cases:

* When used with `Again`, only that stage is rerolled, not the entire `Again`
  tree.
* To reroll with limited depth, use `Die.reroll()`, or `Again` with no
  modification.
* When used with `MultisetEvaluator`, the entire evaluation is rerolled.
"""

# Expose certain names at top-level.

from icepool.function import (d, z, __getattr__, coin, stochastic_round,
                              one_hot, iter_cartesian_product, from_cumulative,
                              from_rv, min_outcome, max_outcome, align,
                              align_range, commonize_denominator, reduce,
                              accumulate, map, map_function, map_and_time,
                              map_to_pool)

from icepool.population.base import Population
from icepool.population.die import implicit_convert_to_die, Die
from icepool.collection.vector import cartesian_product, tupleize, vectorize, Vector
from icepool.collection.symbols import Symbols
from icepool.population.again import AgainExpression

Again: Final = AgainExpression(is_additive=True)
"""A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor.
`AgainExpression`s should not be fed to functions or methods other than
`Die()`, but they can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
affect how `Again` expressions are processed. At most one of `again_count` or
`again_depth` may be provided; if neither is provided, the behavior is as
`again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.

#### Count mode

When `again_count` is provided, we start with one roll queued and execute one
roll at a time. For every `Again` we roll, we queue another roll.
If we run out of rolls, we sum the rolls to find the result. If the total number
of rolls (not including the initial roll) would exceed `again_count`, we reroll
the entire process, effectively conditioning the process on not rolling more
than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`, which
means that only the following operators are allowed:

* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative.
For example, `str` or `tuple` outcomes will not produce elements with a definite
order.

#### Depth mode

When `again_depth=0`, `again_end` is directly substituted
for each occurrence of `Again`. For other values of `again_depth`, the result
for `again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth
are rerolled.

#### Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such
rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:

* Count mode: Any result that would cause the number of rolls to exceed
  `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
"""

from icepool.population.die_with_truth import DieWithTruth

from icepool.collection.counts import CountsKeysView, CountsValuesView, CountsItemsView

from icepool.population.keep import lowest, highest, middle

from icepool.generator.pool import Pool, standard_pool
from icepool.generator.keep import KeepGenerator
from icepool.generator.compound_keep import CompoundKeepGenerator
from icepool.generator.mixture import MixtureGenerator

from icepool.generator.multiset_generator import MultisetGenerator, InitialMultisetGenerator, NextMultisetGenerator
from icepool.generator.alignment import Alignment
from icepool.evaluator.multiset_evaluator import MultisetEvaluator

from icepool.population.deck import Deck
from icepool.generator.deal import Deal
from icepool.generator.multi_deal import MultiDeal

from icepool.expression.multiset_expression import MultisetExpression, implicit_convert_to_expression
from icepool.expression.multiset_function import multiset_function

import icepool.expression as expression
import icepool.generator as generator
import icepool.evaluator as evaluator

import icepool.typing as typing

__all__ = [
    'd', 'z', 'coin', 'stochastic_round', 'one_hot', 'Outcome', 'Die',
    'Population', 'tupleize', 'vectorize', 'Vector', 'Symbols', 'Again',
    'CountsKeysView', 'CountsValuesView', 'CountsItemsView', 'from_cumulative',
    'from_rv', 'lowest', 'highest', 'middle', 'min_outcome', 'max_outcome',
    'align', 'align_range', 'commonize_denominator', 'reduce', 'accumulate',
    'map', 'map_function', 'map_and_time', 'map_to_pool', 'Reroll',
    'RerollType', 'Pool', 'standard_pool', 'MultisetGenerator', 'Alignment',
    'MultisetExpression', 'MultisetEvaluator', 'Order', 'Deck', 'Deal',
    'MultiDeal', 'multiset_function', 'function', 'typing', 'evaluator'
]
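The depth-mode substitution described for `Again` above can be sketched with plain Python. This is a stdlib-only illustration, not icepool's implementation: the names `AGAIN`, `substitute`, and `depth_mode` are hypothetical, and a die is modeled as a dict from outcome to quantity. At depth 0, `again_end` (here defaulting to 0) replaces `Again`; each further depth substitutes the previous depth's result.

```python
from fractions import Fraction

# Hypothetical stdlib-only sketch of depth mode; not icepool's API.
AGAIN = object()  # marker for where the previous depth's result is substituted


def substitute(faces, sub_die):
    """Replace each (base, AGAIN) face with base + a roll of sub_die."""
    result = {}
    denom = sum(sub_die.values())
    for face in faces:
        if isinstance(face, tuple) and face[1] is AGAIN:
            base = face[0]
            for outcome, quantity in sub_die.items():
                result[base + outcome] = result.get(base + outcome, 0) + quantity
        else:
            # Plain faces keep their relative weight by scaling by the denominator.
            result[face] = result.get(face, 0) + denom
    return result


def depth_mode(faces, again_depth, again_end=0):
    # Depth 0: substitute again_end directly for Again.
    die = substitute(faces, {again_end: 1})
    # Each further depth substitutes the previous depth's result.
    for _ in range(again_depth):
        die = substitute(faces, die)
    return die


# An "exploding" d6: faces 1-5, plus 6 + Again, as in Die([1, ..., 5, 6 + Again]).
faces = [1, 2, 3, 4, 5, (6, AGAIN)]
die = depth_mode(faces, again_depth=1)
# Denominator 36: outcomes 1-5 have quantity 6 each, and 7-12
# (a 6 followed by one more roll) have quantity 1 each.
print(die)
```

With `again_depth=1` this matches the intuition of "explode at most once": a 6 triggers exactly one extra roll, and a second 6 simply contributes its face value because the final `Again` is replaced by `again_end=0`.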
@cache
def d(sides: int, /) -> 'icepool.Die[int]':
    """A standard die, uniformly distributed from `1` to `sides` inclusive.

    Don't confuse this with `icepool.Die()`:

    * `icepool.Die([6])`: A `Die` that always rolls the integer 6.
    * `icepool.d(6)`: A d6.

    You can also import individual standard dice from the `icepool` module, e.g.
    `from icepool import d6`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(1, sides + 1))
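The `Die([6])` vs `d(6)` distinction can be made concrete with a stdlib-only model, where a die is a mapping from outcome to weight (the helper `mean` is hypothetical, not icepool's API):

```python
from collections import Counter
from fractions import Fraction

# Stdlib-only sketch: a die as a mapping from outcome to weight.
die_always_6 = Counter([6])            # like icepool.Die([6]): always rolls 6
d6 = Counter(range(1, 6 + 1))          # like icepool.d(6): uniform on 1-6


def mean(die):
    denom = sum(die.values())
    return Fraction(sum(outcome * q for outcome, q in die.items()), denom)


print(mean(die_always_6))  # 6
print(mean(d6))            # 7/2
```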
@cache
def z(sides: int, /) -> 'icepool.Die[int]':
    """A die uniformly distributed from `0` to `sides - 1` inclusive.

    Equal to `d(sides) - 1`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(0, sides))
def coin(n: int | float | Fraction,
         d: int = 1,
         /,
         *,
         max_denominator: int | None = None) -> 'icepool.Die[bool]':
    """A `Die` that rolls `True` with probability `n / d`, and `False` otherwise.

    If `n <= 0` or `n >= d` the result will have only one outcome.

    Args:
        n: An int numerator, or a non-integer probability.
        d: An int denominator. Should not be provided if the first argument is
            not an int.
        max_denominator: If provided and the first argument is not an int, the
            probability will be rounded using
            `fractions.Fraction.limit_denominator(max_denominator)`.
    """
    if not isinstance(n, int):
        if d != 1:
            raise ValueError(
                'If a non-int numerator is provided, a denominator must not be provided.'
            )
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n = fraction.numerator
        d = fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)

    return icepool.Die(data)
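The weight computation above can be illustrated without icepool: a probability is converted to an exact `Fraction`, optionally rounded to a bounded denominator, and the numerator/denominator become the `True`/`False` weights. The helper name `coin_weights` is hypothetical.

```python
from fractions import Fraction

# Stdlib-only sketch of coin()'s weight computation; not icepool's API.
def coin_weights(p, max_denominator=None):
    fraction = Fraction(p)
    if max_denominator is not None:
        # Round the exact fraction to the closest one with a small denominator.
        fraction = fraction.limit_denominator(max_denominator)
    n, d = fraction.numerator, fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)
    return data


print(coin_weights(Fraction(1, 3)))           # {False: 2, True: 1}
print(coin_weights(0.1, max_denominator=10))  # {False: 9, True: 1}
```

Note that `0.1` as a binary float is not exactly 1/10; `limit_denominator` is what recovers the intended small-denominator fraction.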
def stochastic_round(x,
                     /,
                     *,
                     max_denominator: int | None = None) -> 'icepool.Die[int]':
    """Randomly rounds a value up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise, producing a `Die` with up to two outcomes.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    integer_part = math.floor(x)
    fractional_part = x - integer_part
    return integer_part + coin(fractional_part,
                               max_denominator=max_denominator)
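The key property of stochastic rounding is that it preserves the expected value. A stdlib-only sketch (the helper `stochastic_round_dist` is hypothetical, not icepool's API) computes the two-outcome distribution and checks its mean:

```python
import math
from fractions import Fraction

# Stdlib-only sketch: return the distribution over the two neighboring
# integers, rounding up with probability x - floor(x).
def stochastic_round_dist(x):
    x = Fraction(x)
    lo = math.floor(x)
    p_up = x - lo  # probability of rounding up
    return {lo: 1 - p_up, lo + 1: p_up}


dist = stochastic_round_dist(Fraction(7, 4))  # 1.75
# Rounds to 2 with probability 3/4 and to 1 with probability 1/4.
mean = sum(outcome * p for outcome, p in dist.items())
print(dist, mean)  # the mean is exactly 7/4, so expectation is preserved
```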
def one_hot(sides: int, /) -> 'icepool.Die[tuple[bool, ...]]':
    """A `Die` with `Vector` outcomes with one element set to `True` uniformly at random and the rest `False`.

    This is an easy (if somewhat expensive) way of representing how many dice
    in a pool rolled each number. For example, the outcomes of `10 @ one_hot(6)`
    are the `(ones, twos, threes, fours, fives, sixes)` rolled in 10d6.
    """
    data = []
    for i in range(sides):
        outcome = [False] * sides
        outcome[i] = True
        data.append(icepool.Vector(outcome))
    return icepool.Die(data)
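Why summing one-hot outcomes counts faces: each roll contributes a vector with a single `True`, so the elementwise sum tallies how often each face appeared. A stdlib-only sketch (the helper `one_hot_roll` is hypothetical, and this samples rather than computing the exact distribution):

```python
import random

# Stdlib-only sketch: a one-hot outcome marks which face was rolled,
# so summing one-hot vectors elementwise tallies per-face counts.
def one_hot_roll(sides, rng):
    face = rng.randrange(sides)  # 0-based face index
    return tuple(i == face for i in range(sides))


rng = random.Random(0)
rolls = [one_hot_roll(6, rng) for _ in range(10)]
# Elementwise sum of the one-hot vectors = per-face counts of a 10d6 sample.
counts = tuple(sum(column) for column in zip(*rolls))
print(counts)  # (ones, twos, threes, fours, fives, sixes)
assert sum(counts) == 10  # ten dice were rolled in total
```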
class Outcome(Hashable, Protocol[T_contra]):
    """Protocol to attempt to verify that outcome types are hashable and sortable.

    Far from foolproof, e.g. it cannot enforce total ordering.
    """

    def __lt__(self, other: T_contra) -> bool:
        ...
class Die(Population[T_co]):
    """Sampling with replacement. Quantities represent weights.

    Dice are immutable. Methods do not modify the `Die` in-place;
    rather they return a `Die` representing the result.

    It *is* (mostly) well-defined to have a `Die` with zero-quantity outcomes.
    These can be useful in a few cases, such as:

    * `MultisetEvaluator` will iterate through zero-quantity outcomes,
      rather than possibly skipping that outcome. (Though in most cases it's
      better to use `MultisetEvaluator.alignment()`.)
    * `icepool.align()` and the like are convenient for making dice share the
      same set of outcomes.

    However, zero-quantity outcomes have a computational cost like any other
    outcome. Unless you have a specific use case in mind, it's best to leave
    them out.

    Most operators and methods will not introduce zero-quantity outcomes if
    their arguments do not have any; nor remove zero-quantity outcomes.

    It's also possible to have "empty" dice with no outcomes at all,
    though these have little use other than being sentinel values.
    """

    _data: Counts[T_co]

    @property
    def _new_type(self) -> type:
        return Die

    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'Outcome | Die | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
          of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative, though they can be zero.

        Several methods and functions forward **kwargs to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.

        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                    quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                * For convenience, `tuple`s containing `Population`s will be
                    `tupleize`d into a `Population` of `tuple`s.
                    This does not apply to subclasses of `tuple`s such as
                    `namedtuple` or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                    The quantity assigned to a `Die` is shared among its
                    outcomes. The total denominator will be scaled up if
                    necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                    `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.
        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                if again_end is not None:
                    raise ValueError(
                        'again_end cannot be used with again_count.')
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)

    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Die[T_co]':
        """Creates a new `Die` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    # Defined separately from the superclass to help typing.
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)

    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        If one side of a binary operator is a tuple and the other is not, the
        binary operator is applied to each element of the tuple with the
        non-tuple side. For example, the following are equivalent:

        ```python
        cartesian_product(d6, d8) * 2
        cartesian_product(d6 * 2, d8 * 2)
        ```

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)

    # Basic access.

    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome, /) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        """The number of outcomes."""
        return len(self._data)

    def __contains__(self, outcome) -> bool:
        return outcome in self._data

    # Quantity management.

    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common denominator."""
        return icepool.Die(self._data.simplify())

    # Rerolls and other outcome management.

    def _select_outcomes(self, which: Callable[..., bool] | Collection[T_co],
                         star: bool | None) -> Set[T_co]:
        if callable(which):
            if star is None:
                star = guess_star(which)
            if star:
                # Need TypeVarTuple to check this.
                return {
                    outcome
                    for outcome in self.outcomes()
                    if which(*outcome)  # type: ignore
                }
            else:
                return {
                    outcome
                    for outcome in self.outcomes() if which(outcome)
                }
        else:
            # Collection.
            return set(which)

    def reroll(self,
               which: Callable[..., bool] | Collection[T_co] | None = None,
               /,
               *,
               star: bool | None = None,
               depth: int | None) -> 'Die[T_co]':
        """Rerolls the given outcomes.

        Args:
            which: Selects which outcomes to reroll. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                    should be rerolled.
                * If not provided, the min outcome will be rerolled.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `None`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth is None:
            data = {
                outcome: quantity
                for outcome, quantity in self.items()
                if outcome not in outcome_set
            }
        elif depth < 0:
            raise ValueError('reroll depth cannot be negative.')
        else:
            total_reroll_quantity = sum(quantity
                                        for outcome, quantity in self.items()
                                        if outcome in outcome_set)
            total_stop_quantity = self.denominator() - total_reroll_quantity
            rerollable_factor = total_reroll_quantity**depth
            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                           * total_reroll_quantity) // total_stop_quantity
            data = {
                outcome: (rerollable_factor *
                          quantity if outcome in outcome_set else stop_factor *
                          quantity)
                for outcome, quantity in self.items()
            }
        return icepool.Die(data)

    def filter(self,
               which: Callable[..., bool] | Collection[T_co],
               /,
               *,
               star: bool | None = None,
               depth: int | None) -> 'Die[T_co]':
        """Rerolls until getting one of the given outcomes.

        Essentially the complement of `reroll()`.

        Args:
            which: Selects which outcomes to reroll until. Options:
                * A callable that takes an outcome and returns `True` if it
                    should be accepted.
                * A collection of outcomes to reroll until.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `None`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if callable(which):
            if star is None:
                star = guess_star(which)
            if star:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes()
                    if not which(*outcome)  # type: ignore
                }
            else:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes() if not which(outcome)
                }
        else:
            not_outcomes = {
                not_outcome
                for not_outcome in self.outcomes() if not_outcome not in which
            }
        return self.reroll(not_outcomes, depth=depth)

    def split(self,
              which: Callable[..., bool] | Collection[T_co] | None = None,
              /,
              *,
              star: bool | None = None):
        """Splits this die into one containing selected items and another containing the rest.

        The total denominator is preserved.

        Equivalent to `self.filter(), self.reroll()`.

        Args:
            which: Selects which outcomes go into the first die. Options:
                * A callable that takes an outcome and returns `True` if it
                    should go into the first die.
                * A collection of outcomes for the first die.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
        """
        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        selected = {}
        not_selected = {}
        for outcome, count in self.items():
            if outcome in outcome_set:
                selected[outcome] = count
            else:
                not_selected[outcome] = count

        return Die(selected), Die(not_selected)

    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Truncates the outcomes of this `Die` to the given range.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be truncated.

        This effectively rerolls outcomes outside the given range.
        If instead you want to replace those outcomes with the nearest endpoint,
        use `clip()`.

        Not to be confused with `trunc(die)`, which performs integer truncation
        on each outcome.
        """
        if min_outcome is not None:
            start = bisect.bisect_left(self.outcomes(), min_outcome)
        else:
            start = None
        if max_outcome is not None:
            stop = bisect.bisect_right(self.outcomes(), max_outcome)
        else:
            stop = None
        data = {k: v for k, v in self.items()[start:stop]}
        return icepool.Die(data)

    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Clips the outcomes of this `Die` to the given values.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be clipped.

        This is not the same as rerolling outcomes beyond this range;
        the outcome is simply adjusted to fit within the range.
        This will typically cause some quantity to bunch up at the endpoint.
        If you want to reroll outcomes beyond this range, use `truncate()`.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            if min_outcome is not None and outcome <= min_outcome:
                data[min_outcome] += quantity
            elif max_outcome is not None and outcome >= max_outcome:
                data[max_outcome] += quantity
            else:
                data[outcome] += quantity
        return icepool.Die(data)

    def set_range(self: 'Die[int]',
                  min_outcome: int | None = None,
                  max_outcome: int | None = None) -> 'Die[int]':
        """Sets the outcomes of this `Die` to the given `int` range (inclusive).

        This may remove outcomes (if they are not within the range)
        and/or add zero-quantity outcomes (if they are in range but not present
        in this `Die`).

        Args:
            min_outcome: The min outcome of the result.
                If omitted, the min outcome of this `Die` will be used.
            max_outcome: The max outcome of the result.
                If omitted, the max outcome of this `Die` will be used.
        """
        if min_outcome is None:
            min_outcome = self.min_outcome()
        if max_outcome is None:
            max_outcome = self.max_outcome()

        return self.set_outcomes(range(min_outcome, max_outcome + 1))

    def set_outcomes(self, outcomes: Iterable[T_co]) -> 'Die[T_co]':
        """Sets the set of outcomes to the argument.

        This may remove outcomes (if they are not present in the argument)
        and/or add zero-quantity outcomes (if they are not present in this `Die`).
        """
        data = {x: self.quantity(x) for x in outcomes}
        return icepool.Die(data)

    def trim(self) -> 'Die[T_co]':
        """Removes all zero-quantity outcomes."""
        data = {k: v for k, v in self.items() if v > 0}
        return icepool.Die(data)

    @cached_property
    def _popped_min(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_min())
        return die, self.quantities()[0]

    def _pop_min(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the min outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_min

    @cached_property
    def _popped_max(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_max())
        return die, self.quantities()[-1]

    def _pop_max(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the max outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_max

    # Processes.

    def map(
        self,
        repl:
        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *extra_args,
        star: bool | None = None,
        repeat: int | None = 1,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Maps outcomes of the `Die` to other outcomes.

        This is also useful for representing processes.

        As `icepool.map(repl, self, ...)`.
        """
        return icepool.map(repl,
                           self,
                           *extra_args,
                           star=star,
                           repeat=repeat,
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)

    def map_and_time(
        self,
        repl:
        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
        /,
        *extra_args,
        star: bool | None = None,
        repeat: int) -> 'Die[tuple[T_co, int]]':
        """Repeatedly maps outcomes of the state to other outcomes, while also
        counting timesteps.

        This is useful for representing processes.

        As `map_and_time(repl, self, ...)`.
        """
        return icepool.map_and_time(repl,
                                    self,
                                    *extra_args,
                                    star=star,
                                    repeat=repeat)

    @cached_property
    def _mean_time_to_sum_cache(self) -> list[Fraction]:
        return [Fraction(0)]

    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
        """The mean number of rolls until the cumulative sum is greater or equal to the target.

        Args:
            target: The target sum.

        Raises:
            ValueError: If `self` has negative outcomes.
            ZeroDivisionError: If `self.mean() == 0`.
        """
        target = max(target, 0)

        if target < len(self._mean_time_to_sum_cache):
            return self._mean_time_to_sum_cache[target]

        if self.min_outcome() < 0:
            raise ValueError(
                'mean_time_to_sum does not handle negative outcomes.')
        time_per_effect = Fraction(self.denominator(),
                                   self.denominator() - self.quantity(0))

        for i in range(len(self._mean_time_to_sum_cache), target + 1):
            result = time_per_effect + self.reroll(
                [0],
                depth=None).map(lambda x: self.mean_time_to_sum(i - x)).mean()
            self._mean_time_to_sum_cache.append(result)

        return result

    def explode(self,
                which: Collection[T_co] | Callable[..., bool] | None = None,
                *,
                star: bool | None = None,
                depth: int = 9,
                end=None) -> 'Die[T_co]':
        """Causes outcomes to be rolled again and added to the total.

        Args:
            which: Which outcomes to explode. Options:
                * A single outcome to explode.
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                    should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of additional dice to roll, not counting
                the initial roll.
                If not supplied, a default value will be used.
            end: Once `depth` is reached, further explosions will be treated
                as this value. By default, a zero value will be used.
                `icepool.Reroll` will make one extra final roll, rerolling until
                a non-exploding outcome is reached.
        """

        if which is None:
            outcome_set = {self.max_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth < 0:
            raise ValueError('depth cannot be negative.')
        elif depth == 0:
            return self

        def map_final(outcome):
            if outcome in outcome_set:
                return outcome + icepool.Again
            else:
                return outcome

        return self.map(map_final, again_depth=depth, again_end=end)

    def if_else(
        self,
        outcome_if_true: U | 'Die[U]',
        outcome_if_false: U | 'Die[U]',
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Ternary conditional operator.

        This replaces truthy outcomes with the first argument and falsy outcomes
        with the second argument.

        Args:
            again_count, again_depth, again_end: Forwarded to the final die constructor.
        """
        return self.map(lambda x: bool(x)).map(
            {
                True: outcome_if_true,
                False: outcome_if_false
            },
            again_count=again_count,
            again_depth=again_depth,
            again_end=again_end)

    def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
        """A die that returns `True` iff the roll of the die is contained in the target."""
        return self.map(lambda x: x in target)

    def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
        """Rolls this die a number of times and counts how many are in the target."""
        return rolls @ self.is_in(target)

    # Pools and sums.

    @cached_property
    def _sum_cache(self) -> MutableMapping[int, 'Die']:
        return {}

    def _sum_all(self, rolls: int, /) -> 'Die':
        """Roll this `Die` `rolls` times and sum the results.

        If `rolls` is negative, roll the `Die` `abs(rolls)` times and negate
        the result.

        If you instead want to replace tuple (or other sequence) outcomes with
        their sum, use `die.map(sum)`.
        """
        if rolls in self._sum_cache:
            return self._sum_cache[rolls]

        if rolls < 0:
            result = -self._sum_all(-rolls)
        elif rolls == 0:
            result = self.zero().simplify()
        elif rolls == 1:
            result = self
        else:
            # Binary split seems to perform much worse.
            result = self + self._sum_all(rolls - 1)

        self._sum_cache[rolls] = result
        return result

    def __matmul__(self: 'Die[int]', other) -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes."""
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)

        data: MutableMapping[int, Any] = defaultdict(int)

        max_abs_die_count = max(abs(self.min_outcome()),
                                abs(self.max_outcome()))
        for die_count, die_count_quantity in self.items():
            factor = other.denominator()**(max_abs_die_count - abs(die_count))
            subresult = other._sum_all(die_count)
            for outcome, subresult_quantity in subresult.items():
                data[
                    outcome] += subresult_quantity * die_count_quantity * factor

        return icepool.Die(data)

    def __rmatmul__(self, other: 'int | Die[int]') -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes."""
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.__matmul__(self)

    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
        """Creates a `Pool` from this `Die`.

        You might subscript the pool immediately afterwards, e.g.
        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
        lowest of 5d6.

        Args:
            rolls: The number of copies of this `Die` to put in the pool.
                Or, a sequence of one `int` per die acting as
                `keep_tuple`.
Note that `...` cannot be used in the 793 argument to this method, as the argument determines the size of 794 the pool. 795 """ 796 if isinstance(rolls, int): 797 return icepool.Pool({self: rolls}) 798 else: 799 pool_size = len(rolls) 800 # Haven't dealt with narrowing return type. 801 return icepool.Pool({self: pool_size})[rolls] # type: ignore 802 803 def lowest(self, 804 rolls: int, 805 /, 806 keep: int | None = None, 807 drop: int | None = None) -> 'Die': 808 """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest. 809 810 The outcomes should support addition and multiplication if `keep != 1`. 811 812 Args: 813 rolls: The number of dice to roll. All dice will have the same 814 outcomes as `self`. 815 keep, drop: These arguments work together: 816 * If neither are provided, the single lowest die will be taken. 817 * If only `keep` is provided, the `keep` lowest dice will be summed. 818 * If only `drop` is provided, the `drop` lowest dice will be dropped 819 and the rest will be summed. 820 * If both are provided, `drop` lowest dice will be dropped, then 821 the next `keep` lowest dice will be summed. 822 823 Returns: 824 A `Die` representing the probability distribution of the sum. 825 """ 826 index = lowest_slice(keep, drop) 827 canonical = canonical_slice(index, rolls) 828 if canonical.start == 0 and canonical.stop == 1: 829 return self._lowest_single(rolls) 830 # Expression evaluators are difficult to type. 
831 return self.pool(rolls)[index].sum() # type: ignore 832 833 def _lowest_single(self, rolls: int, /) -> 'Die': 834 """Roll this die several times and keep the lowest.""" 835 if rolls == 0: 836 return self.zero().simplify() 837 return icepool.from_cumulative( 838 self.outcomes(), [x**rolls for x in self.quantities_ge()], 839 reverse=True) 840 841 def highest(self, 842 rolls: int, 843 /, 844 keep: int | None = None, 845 drop: int | None = None) -> 'Die[T_co]': 846 """Roll several of this `Die` and return the highest result, or the sum of some of the highest. 847 848 The outcomes should support addition and multiplication if `keep != 1`. 849 850 Args: 851 rolls: The number of dice to roll. 852 keep, drop: These arguments work together: 853 * If neither are provided, the single highest die will be taken. 854 * If only `keep` is provided, the `keep` highest dice will be summed. 855 * If only `drop` is provided, the `drop` highest dice will be dropped 856 and the rest will be summed. 857 * If both are provided, `drop` highest dice will be dropped, then 858 the next `keep` highest dice will be summed. 859 860 Returns: 861 A `Die` representing the probability distribution of the sum. 862 """ 863 index = highest_slice(keep, drop) 864 canonical = canonical_slice(index, rolls) 865 if canonical.start == rolls - 1 and canonical.stop == rolls: 866 return self._highest_single(rolls) 867 # Expression evaluators are difficult to type. 
868 return self.pool(rolls)[index].sum() # type: ignore 869 870 def _highest_single(self, rolls: int, /) -> 'Die[T_co]': 871 """Roll this die several times and keep the highest.""" 872 if rolls == 0: 873 return self.zero().simplify() 874 return icepool.from_cumulative( 875 self.outcomes(), [x**rolls for x in self.quantities_le()]) 876 877 def middle( 878 self, 879 rolls: int, 880 /, 881 keep: int = 1, 882 *, 883 tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die': 884 """Roll several of this `Die` and sum the sorted results in the middle. 885 886 The outcomes should support addition and multiplication if `keep != 1`. 887 888 Args: 889 rolls: The number of dice to roll. 890 keep: The number of outcomes to sum. If this is greater than the 891 current keep_size, all are kept. 892 tie: What to do if `keep` is odd but the current keep_size 893 is even, or vice versa. 894 * 'error' (default): Raises `IndexError`. 895 * 'high': The higher outcome is taken. 896 * 'low': The lower outcome is taken. 897 """ 898 # Expression evaluators are difficult to type. 899 return self.pool(rolls).middle(keep, tie=tie).sum() # type: ignore 900 901 def map_to_pool( 902 self, 903 repl: 904 'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.Reroll] | None' = None, 905 /, 906 *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression', 907 star: bool | None = None, 908 denominator: int | None = None 909 ) -> 'icepool.MultisetGenerator[U, tuple[int]]': 910 """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`. 911 912 As `icepool.map_to_pool(repl, self, ...)`. 913 914 If no argument is provided, the outcomes will be used to construct a 915 mixture of pools directly, similar to the inverse of `pool.expand()`. 916 Note that this is not particularly efficient since it does not make much 917 use of dynamic programming. 
918 919 Args: 920 repl: One of the following: 921 * A callable that takes in one outcome per element of args and 922 produces a `Pool` (or something convertible to such). 923 * A mapping from old outcomes to `Pool` 924 (or something convertible to such). 925 In this case args must have exactly one element. 926 The new outcomes may be dice rather than just single outcomes. 927 The special value `icepool.Reroll` will reroll that old outcome. 928 star: If `True`, the first of the args will be unpacked before 929 giving them to `repl`. 930 If not provided, it will be guessed based on the signature of 931 `repl` and the number of arguments. 932 denominator: If provided, the denominator of the result will be this 933 value. Otherwise it will be the minimum to correctly weight the 934 pools. 935 936 Returns: 937 A `MultisetGenerator` representing the mixture of `Pool`s. Note 938 that this is not technically a `Pool`, though it supports most of 939 the same operations. 940 941 Raises: 942 ValueError: If `denominator` cannot be made consistent with the 943 resulting mixture of pools. 944 """ 945 if repl is None: 946 repl = lambda x: x 947 return icepool.map_to_pool(repl, 948 self, 949 *extra_args, 950 star=star, 951 denominator=denominator) 952 953 def explode_to_pool( 954 self, 955 rolls: int, 956 /, 957 which: Collection[T_co] | Callable[..., bool] | None = None, 958 *, 959 star: bool | None = None, 960 depth: int = 9) -> 'icepool.MultisetGenerator[T_co, tuple[int]]': 961 """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool. 962 963 Args: 964 rolls: The number of initial dice. 965 which: Which outcomes to explode. Options: 966 * A single outcome to explode. 967 * A collection of outcomes to explode. 968 * A callable that takes an outcome and returns `True` if it 969 should be exploded. 970 * If not supplied, the max outcome will explode. 
971 star: Whether outcomes should be unpacked into separate arguments 972 before sending them to a callable `which`. 973 If not provided, this will be guessed based on the function 974 signature. 975 depth: The maximum depth of explosions for an individual die. 976 977 Returns: 978 A `MultisetGenerator` representing the mixture of `Pool`s. Note 979 that this is not technically a `Pool`, though it supports most of 980 the same operations. 981 """ 982 if which is None: 983 explode_set = {self.max_outcome()} 984 else: 985 explode_set = self._select_outcomes(which, star) 986 explode, not_explode = self.split(explode_set) 987 988 single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict( 989 int) 990 for i in range(depth + 1): 991 weight = explode.denominator()**i * self.denominator()**( 992 depth - i) * not_explode.denominator() 993 single_data[icepool.Vector((i, 1))] += weight 994 single_data[icepool.Vector( 995 (depth + 1, 0))] += explode.denominator()**(depth + 1) 996 997 single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data) 998 count_die = rolls @ single_count_die 999 1000 return count_die.map_to_pool( 1001 lambda x, nx: [explode] * x + [not_explode] * nx) 1002 1003 def reroll_to_pool( 1004 self, 1005 rolls: int, 1006 which: Callable[..., bool] | Collection[T_co], 1007 /, 1008 max_rerolls: int, 1009 *, 1010 star: bool | None = None, 1011 reroll_priority: Literal['random', 'lowest', 'highest'] = 'random' 1012 ) -> 'icepool.MultisetGenerator[T_co, tuple[int]]': 1013 """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool. 1014 1015 Each die can only be rerolled once, and no more than `max_rerolls` dice 1016 may be rerolled. 1017 1018 Args: 1019 rolls: How many dice in the pool. 1020 which: Selects which outcomes to reroll. Options: 1021 * A collection of outcomes to reroll. 1022 * A callable that takes an outcome and returns `True` if it 1023 should be rerolled. 1024 max_rerolls: The maximum number of dice to reroll. 
1025 star: Whether outcomes should be unpacked into separate arguments 1026 before sending them to a callable `which`. 1027 If not provided, this will be guessed based on the function 1028 signature. 1029 reroll_priority: Which values will be prioritized for rerolling. Options: 1030 * `'random'` (default): Eligible dice will be chosen uniformly at random. 1031 * `'lowest'`: The lowest eligible dice will be rerolled. 1032 * `'highest'`: The highest eligible dice will be rerolled. 1033 1034 Returns: 1035 A `MultisetGenerator` representing the mixture of `Pool`s. Note 1036 that this is not technically a `Pool`, though it supports most of 1037 the same operations. 1038 """ 1039 outcome_set = self._select_outcomes(which, star) 1040 rerollable_die, not_rerollable_die = self.split(outcome_set) 1041 single_is_rerollable = icepool.coin(rerollable_die.denominator(), 1042 self.denominator()) 1043 rerollable = rolls @ single_is_rerollable 1044 1045 def split(initial_rerollable: int) -> Die[tuple[int, int, int]]: 1046 """Computes the composition of the pool. 1047 1048 Returns: 1049 initial_rerollable: The number of dice that initially fell into 1050 the rerollable set. 1051 rerolled_to_rerollable: The number of dice that were rerolled, 1052 but fell into the rerollable set again. 1053 not_rerollable: The number of dice that ended up outside the 1054 rerollable set, including both initial and rerolled dice. 1055 not_rerolled: The number of dice that were eligible for 1056 rerolling but were not rerolled. 
1057 """ 1058 initial_not_rerollable = rolls - initial_rerollable 1059 rerolled = min(initial_rerollable, max_rerolls) 1060 not_rerolled = initial_rerollable - rerolled 1061 1062 def second_split(rerolled_to_rerollable): 1063 """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets.""" 1064 rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable 1065 return icepool.tupleize( 1066 initial_rerollable, rerolled_to_rerollable, 1067 initial_not_rerollable + rerolled_to_not_rerollable, 1068 not_rerolled) 1069 1070 return icepool.map(second_split, 1071 rerolled @ single_is_rerollable, 1072 star=False) 1073 1074 pool_composition = rerollable.map(split, star=False) 1075 1076 def make_pool(initial_rerollable, rerolled_to_rerollable, 1077 not_rerollable, not_rerolled): 1078 common = rerollable_die.pool( 1079 rerolled_to_rerollable) + not_rerollable_die.pool( 1080 not_rerollable) 1081 if reroll_priority == 'random': 1082 return common + rerollable_die.pool(not_rerolled) 1083 elif reroll_priority == 'lowest': 1084 return common + rerollable_die.pool( 1085 initial_rerollable).highest(not_rerolled) 1086 elif reroll_priority == 'highest': 1087 return common + rerollable_die.pool(initial_rerollable).lowest( 1088 not_rerolled) 1089 else: 1090 raise ValueError( 1091 f"Invalid reroll_priority '{reroll_priority}'. Allowed values are 'random', 'lowest', and 'highest'." 1092 ) 1093 1094 denominator = self.denominator()**(rolls + min(rolls, max_rerolls)) 1095 1096 return pool_composition.map_to_pool(make_pool, 1097 star=True, 1098 denominator=denominator) 1099 1100 # Unary operators. 
1101 1102 def __neg__(self) -> 'Die[T_co]': 1103 return self.unary_operator(operator.neg) 1104 1105 def __pos__(self) -> 'Die[T_co]': 1106 return self.unary_operator(operator.pos) 1107 1108 def __invert__(self) -> 'Die[T_co]': 1109 return self.unary_operator(operator.invert) 1110 1111 def abs(self) -> 'Die[T_co]': 1112 return self.unary_operator(operator.abs) 1113 1114 __abs__ = abs 1115 1116 def round(self, ndigits: int | None = None) -> 'Die': 1117 return self.unary_operator(round, ndigits) 1118 1119 __round__ = round 1120 1121 def stochastic_round(self, 1122 *, 1123 max_denominator: int | None = None) -> 'Die[int]': 1124 """Randomly rounds outcomes up or down to the nearest integer according to the two distances. 1125 1126 Specifically, rounds `x` up with probability `x - floor(x)` and down 1127 otherwise. 1128 1129 Args: 1130 max_denominator: If provided, each rounding will be performed 1131 using `fractions.Fraction.limit_denominator(max_denominator)`. 1132 Otherwise, the rounding will be performed without 1133 `limit_denominator`. 1134 """ 1135 return self.map(lambda x: icepool.stochastic_round( 1136 x, max_denominator=max_denominator)) 1137 1138 def trunc(self) -> 'Die': 1139 return self.unary_operator(math.trunc) 1140 1141 __trunc__ = trunc 1142 1143 def floor(self) -> 'Die': 1144 return self.unary_operator(math.floor) 1145 1146 __floor__ = floor 1147 1148 def ceil(self) -> 'Die': 1149 return self.unary_operator(math.ceil) 1150 1151 __ceil__ = ceil 1152 1153 @staticmethod 1154 def _zero(x): 1155 return x * 0 1156 1157 def zero(self) -> 'Die[T_co]': 1158 """Zeros all outcomes of this die. 1159 1160 This is done by multiplying all outcomes by `0`. 1161 1162 The result will have the same denominator as this die. 1163 1164 Raises: 1165 ValueError: If the zeros did not resolve to a single outcome. 
1166 """ 1167 result = self.unary_operator(Die._zero) 1168 if len(result) != 1: 1169 raise ValueError('zero() did not resolve to a single outcome.') 1170 return result 1171 1172 def zero_outcome(self) -> T_co: 1173 """A zero-outcome for this die. 1174 1175 E.g. `0` for a `Die` whose outcomes are `int`s. 1176 """ 1177 return self.zero().outcomes()[0] 1178 1179 # Binary operators. 1180 1181 def __add__(self, other) -> 'Die': 1182 if isinstance(other, icepool.AgainExpression): 1183 return NotImplemented 1184 other = implicit_convert_to_die(other) 1185 return self.binary_operator(other, operator.add) 1186 1187 def __radd__(self, other) -> 'Die': 1188 if isinstance(other, icepool.AgainExpression): 1189 return NotImplemented 1190 other = implicit_convert_to_die(other) 1191 return other.binary_operator(self, operator.add) 1192 1193 def __sub__(self, other) -> 'Die': 1194 if isinstance(other, icepool.AgainExpression): 1195 return NotImplemented 1196 other = implicit_convert_to_die(other) 1197 return self.binary_operator(other, operator.sub) 1198 1199 def __rsub__(self, other) -> 'Die': 1200 if isinstance(other, icepool.AgainExpression): 1201 return NotImplemented 1202 other = implicit_convert_to_die(other) 1203 return other.binary_operator(self, operator.sub) 1204 1205 def __mul__(self, other) -> 'Die': 1206 if isinstance(other, icepool.AgainExpression): 1207 return NotImplemented 1208 other = implicit_convert_to_die(other) 1209 return self.binary_operator(other, operator.mul) 1210 1211 def __rmul__(self, other) -> 'Die': 1212 if isinstance(other, icepool.AgainExpression): 1213 return NotImplemented 1214 other = implicit_convert_to_die(other) 1215 return other.binary_operator(self, operator.mul) 1216 1217 def __truediv__(self, other) -> 'Die': 1218 if isinstance(other, icepool.AgainExpression): 1219 return NotImplemented 1220 other = implicit_convert_to_die(other) 1221 return self.binary_operator(other, operator.truediv) 1222 1223 def __rtruediv__(self, other) -> 'Die': 
1224 if isinstance(other, icepool.AgainExpression): 1225 return NotImplemented 1226 other = implicit_convert_to_die(other) 1227 return other.binary_operator(self, operator.truediv) 1228 1229 def __floordiv__(self, other) -> 'Die': 1230 if isinstance(other, icepool.AgainExpression): 1231 return NotImplemented 1232 other = implicit_convert_to_die(other) 1233 return self.binary_operator(other, operator.floordiv) 1234 1235 def __rfloordiv__(self, other) -> 'Die': 1236 if isinstance(other, icepool.AgainExpression): 1237 return NotImplemented 1238 other = implicit_convert_to_die(other) 1239 return other.binary_operator(self, operator.floordiv) 1240 1241 def __pow__(self, other) -> 'Die': 1242 if isinstance(other, icepool.AgainExpression): 1243 return NotImplemented 1244 other = implicit_convert_to_die(other) 1245 return self.binary_operator(other, operator.pow) 1246 1247 def __rpow__(self, other) -> 'Die': 1248 if isinstance(other, icepool.AgainExpression): 1249 return NotImplemented 1250 other = implicit_convert_to_die(other) 1251 return other.binary_operator(self, operator.pow) 1252 1253 def __mod__(self, other) -> 'Die': 1254 if isinstance(other, icepool.AgainExpression): 1255 return NotImplemented 1256 other = implicit_convert_to_die(other) 1257 return self.binary_operator(other, operator.mod) 1258 1259 def __rmod__(self, other) -> 'Die': 1260 if isinstance(other, icepool.AgainExpression): 1261 return NotImplemented 1262 other = implicit_convert_to_die(other) 1263 return other.binary_operator(self, operator.mod) 1264 1265 def __lshift__(self, other) -> 'Die': 1266 if isinstance(other, icepool.AgainExpression): 1267 return NotImplemented 1268 other = implicit_convert_to_die(other) 1269 return self.binary_operator(other, operator.lshift) 1270 1271 def __rlshift__(self, other) -> 'Die': 1272 if isinstance(other, icepool.AgainExpression): 1273 return NotImplemented 1274 other = implicit_convert_to_die(other) 1275 return other.binary_operator(self, operator.lshift) 1276 
1277 def __rshift__(self, other) -> 'Die': 1278 if isinstance(other, icepool.AgainExpression): 1279 return NotImplemented 1280 other = implicit_convert_to_die(other) 1281 return self.binary_operator(other, operator.rshift) 1282 1283 def __rrshift__(self, other) -> 'Die': 1284 if isinstance(other, icepool.AgainExpression): 1285 return NotImplemented 1286 other = implicit_convert_to_die(other) 1287 return other.binary_operator(self, operator.rshift) 1288 1289 def __and__(self, other) -> 'Die': 1290 if isinstance(other, icepool.AgainExpression): 1291 return NotImplemented 1292 other = implicit_convert_to_die(other) 1293 return self.binary_operator(other, operator.and_) 1294 1295 def __rand__(self, other) -> 'Die': 1296 if isinstance(other, icepool.AgainExpression): 1297 return NotImplemented 1298 other = implicit_convert_to_die(other) 1299 return other.binary_operator(self, operator.and_) 1300 1301 def __or__(self, other) -> 'Die': 1302 if isinstance(other, icepool.AgainExpression): 1303 return NotImplemented 1304 other = implicit_convert_to_die(other) 1305 return self.binary_operator(other, operator.or_) 1306 1307 def __ror__(self, other) -> 'Die': 1308 if isinstance(other, icepool.AgainExpression): 1309 return NotImplemented 1310 other = implicit_convert_to_die(other) 1311 return other.binary_operator(self, operator.or_) 1312 1313 def __xor__(self, other) -> 'Die': 1314 if isinstance(other, icepool.AgainExpression): 1315 return NotImplemented 1316 other = implicit_convert_to_die(other) 1317 return self.binary_operator(other, operator.xor) 1318 1319 def __rxor__(self, other) -> 'Die': 1320 if isinstance(other, icepool.AgainExpression): 1321 return NotImplemented 1322 other = implicit_convert_to_die(other) 1323 return other.binary_operator(self, operator.xor) 1324 1325 # Comparators. 
1326 1327 def __lt__(self, other) -> 'Die[bool]': 1328 if isinstance(other, icepool.AgainExpression): 1329 return NotImplemented 1330 other = implicit_convert_to_die(other) 1331 return self.binary_operator(other, operator.lt) 1332 1333 def __le__(self, other) -> 'Die[bool]': 1334 if isinstance(other, icepool.AgainExpression): 1335 return NotImplemented 1336 other = implicit_convert_to_die(other) 1337 return self.binary_operator(other, operator.le) 1338 1339 def __ge__(self, other) -> 'Die[bool]': 1340 if isinstance(other, icepool.AgainExpression): 1341 return NotImplemented 1342 other = implicit_convert_to_die(other) 1343 return self.binary_operator(other, operator.ge) 1344 1345 def __gt__(self, other) -> 'Die[bool]': 1346 if isinstance(other, icepool.AgainExpression): 1347 return NotImplemented 1348 other = implicit_convert_to_die(other) 1349 return self.binary_operator(other, operator.gt) 1350 1351 # Equality operators. These produce a `DieWithTruth`. 1352 1353 # The result has a truth value, but is not a bool. 1354 def __eq__(self, other) -> 'icepool.DieWithTruth[bool]': # type: ignore 1355 if isinstance(other, icepool.AgainExpression): 1356 return NotImplemented 1357 other_die: Die = implicit_convert_to_die(other) 1358 1359 def data_callback() -> Counts[bool]: 1360 return self.binary_operator(other_die, operator.eq)._data 1361 1362 def truth_value_callback() -> bool: 1363 return self.equals(other) 1364 1365 return icepool.DieWithTruth(data_callback, truth_value_callback) 1366 1367 # The result has a truth value, but is not a bool. 
1368 def __ne__(self, other) -> 'icepool.DieWithTruth[bool]': # type: ignore 1369 if isinstance(other, icepool.AgainExpression): 1370 return NotImplemented 1371 other_die: Die = implicit_convert_to_die(other) 1372 1373 def data_callback() -> Counts[bool]: 1374 return self.binary_operator(other_die, operator.ne)._data 1375 1376 def truth_value_callback() -> bool: 1377 return not self.equals(other) 1378 1379 return icepool.DieWithTruth(data_callback, truth_value_callback) 1380 1381 def cmp(self, other) -> 'Die[int]': 1382 """A `Die` with outcomes 1, -1, and 0. 1383 1384 The quantities are equal to the positive outcome of `self > other`, 1385 `self < other`, and the remainder respectively. 1386 1387 This will include all three outcomes even if they have zero quantity. 1388 """ 1389 other = implicit_convert_to_die(other) 1390 1391 data = {} 1392 1393 lt = self < other 1394 if True in lt: 1395 data[-1] = lt[True] 1396 eq = self == other 1397 if True in eq: 1398 data[0] = eq[True] 1399 gt = self > other 1400 if True in gt: 1401 data[1] = gt[True] 1402 1403 return Die(data) 1404 1405 @staticmethod 1406 def _sign(x) -> int: 1407 z = Die._zero(x) 1408 if x > z: 1409 return 1 1410 elif x < z: 1411 return -1 1412 else: 1413 return 0 1414 1415 def sign(self) -> 'Die[int]': 1416 """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise. 1417 1418 Note that for `float`s, +0.0, -0.0, and nan all become 0. 1419 """ 1420 return self.unary_operator(Die._sign) 1421 1422 # Equality and hashing. 
1423 1424 def __bool__(self) -> bool: 1425 raise TypeError( 1426 'A `Die` only has a truth value if it is the result of == or !=.\n' 1427 'This could result from trying to use a die in an if-statement,\n' 1428 'in which case you should use `die.if_else()` instead.\n' 1429 'Or it could result from trying to use a `Die` inside a tuple or vector outcome,\n' 1430 'in which case you should use `tupleize()` or `vectorize()`.') 1431 1432 @cached_property 1433 def _hash_key(self) -> tuple: 1434 """A tuple that uniquely (as `equals()`) identifies this die. 1435 1436 Apart from being hashable and totally orderable, this is not guaranteed 1437 to be in any particular format or have any other properties. 1438 """ 1439 return tuple(self.items()) 1440 1441 @cached_property 1442 def _hash(self) -> int: 1443 return hash(self._hash_key) 1444 1445 def __hash__(self) -> int: 1446 return self._hash 1447 1448 def equals(self, other, *, simplify: bool = False) -> bool: 1449 """`True` iff both dice have the same outcomes and quantities. 1450 1451 This is `False` if `other` is not a `Die`, even if it would convert 1452 to an equal `Die`. 1453 1454 Truth value does NOT matter. 1455 1456 If one `Die` has a zero-quantity outcome and the other `Die` does not 1457 contain that outcome, they are treated as unequal by this function. 1458 1459 The `==` and `!=` operators have a dual purpose; they return a `Die` 1460 with a truth value determined by this method. 1461 Only dice returned by these methods have a truth value. The data of 1462 these dice is lazily evaluated since the caller may only be interested 1463 in the `Die` value or the truth value. 1464 1465 Args: 1466 simplify: If `True`, the dice will be simplified before comparing. 1467 Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin. 
1468 """ 1469 if not isinstance(other, Die): 1470 return False 1471 1472 if simplify: 1473 return self.simplify()._hash_key == other.simplify()._hash_key 1474 else: 1475 return self._hash_key == other._hash_key 1476 1477 # Strings. 1478 1479 def __repr__(self) -> str: 1480 inner = ', '.join(f'{repr(outcome)}: {weight}' 1481 for outcome, weight in self.items()) 1482 return type(self).__qualname__ + '({' + inner + '})'
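The `@` operator above rolls the left die to decide how many copies of the right die to sum. As a rough model of what it computes (plain dicts with `Fraction` probabilities, rather than icepool's scaled integer quantities), the result is a mixture of self-convolutions:

```python
from collections import defaultdict
from fractions import Fraction

def sum_n(die: dict[int, Fraction], n: int) -> dict[int, Fraction]:
    """Distribution of the sum of n independent rolls of die (n-fold convolution)."""
    result = {0: Fraction(1)}
    for _ in range(n):
        next_result: dict[int, Fraction] = defaultdict(Fraction)
        for s, p in result.items():
            for o, q in die.items():
                next_result[s + o] += p * q
        result = dict(next_result)
    return result

def matmul(count_die: dict[int, Fraction], die: dict[int, Fraction]) -> dict[int, Fraction]:
    """count_die @ die: roll count_die, then sum that many rolls of die."""
    result: dict[int, Fraction] = defaultdict(Fraction)
    for n, p in count_die.items():
        for s, q in sum_n(die, n).items():
            result[s] += p * q
    return dict(result)

d2 = {1: Fraction(1, 2), 2: Fraction(1, 2)}
dist = matmul(d2, d2)  # roll a d2, then sum that many d2s
# dist == {1: 1/4, 2: 3/8, 3: 1/4, 4: 1/8}
```

This is only a sketch: the actual `__matmul__` keeps integer quantities and scales each sub-result by a power of the denominator so that all branches share a common denominator.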
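`_lowest_single` and `_highest_single` above rely on the identity P(min of n rolls >= k) = P(X >= k)**n (and its mirror for the max), which is exactly the cumulative sequence fed to `from_cumulative`. A standalone sketch with `Fraction` probabilities (the `lowest_single` helper here is hypothetical, not icepool's API):

```python
from fractions import Fraction

def lowest_single(die: dict[int, int], rolls: int) -> dict[int, Fraction]:
    """Distribution of the minimum of `rolls` rolls, via P(min >= k) = P(X >= k)**rolls."""
    denom = sum(die.values())
    dist: dict[int, Fraction] = {}
    for k in sorted(die):
        p_ge_k = Fraction(sum(q for o, q in die.items() if o >= k), denom)
        p_ge_next = Fraction(sum(q for o, q in die.items() if o > k), denom)
        # P(min == k) = P(min >= k) - P(min >= next outcome)
        dist[k] = p_ge_k**rolls - p_ge_next**rolls
    return dist

d6 = {o: 1 for o in range(1, 7)}
low2 = lowest_single(d6, 2)
# P(min of 2d6 == 1) = 11/36; P(min of 2d6 == 6) = 1/36
```

Raising the survival function to the `rolls` power is what lets icepool compute keep-lowest-one and keep-highest-one in O(outcomes) instead of expanding the full pool.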
Sampling with replacement. Quantities represent weights.

Dice are immutable. Methods do not modify the `Die` in-place; rather they return a `Die` representing the result.

It is (mostly) well-defined to have a `Die` with zero-quantity outcomes. These can be useful in a few cases, such as:

- `MultisetEvaluator` will iterate through zero-quantity outcomes, rather than possibly skipping that outcome. (Though in most cases it's better to use `MultisetEvaluator.alignment()`.)
- `icepool.align()` and the like are convenient for making dice share the same set of outcomes.
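To illustrate the last point, here is a small stdlib sketch of giving several dice, represented as outcome-to-quantity dicts, a shared outcome set by padding with zero quantities (the `align` helper below is hypothetical, not icepool's implementation):

```python
from typing import Mapping

def align(*dice: Mapping[int, int]) -> tuple[dict[int, int], ...]:
    """Give each die the union of all outcomes, filling with zero quantity."""
    all_outcomes = sorted(set().union(*dice))
    return tuple({o: die.get(o, 0) for o in all_outcomes} for die in dice)

d4 = {1: 1, 2: 1, 3: 1, 4: 1}
d6 = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}
a4, a6 = align(d4, d6)
# Both dice now share outcomes 1-6; the d4 carries zero quantity on 5 and 6.
```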
However, zero-quantity outcomes have a computational cost like any other outcome. Unless you have a specific use case in mind, it's best to leave them out.
Most operators and methods will not introduce zero-quantity outcomes if their arguments do not have any; nor remove zero-quantity outcomes.
It's also possible to have "empty" dice with no outcomes at all, though these have little use other than being sentinel values.
75 def __new__( 76 cls, 77 outcomes: Sequence | Mapping[Any, int], 78 times: Sequence[int] | int = 1, 79 *, 80 again_count: int | None = None, 81 again_depth: int | None = None, 82 again_end: 'Outcome | Die | icepool.RerollType | None' = None 83 ) -> 'Die[T_co]': 84 """Constructor for a `Die`. 85 86 Don't confuse this with `d()`: 87 88 * `Die([6])`: A `Die` that always rolls the `int` 6. 89 * `d(6)`: A d6. 90 91 Also, don't confuse this with `Pool()`: 92 93 * `Die([1, 2, 3, 4, 5, 6])`: A d6. 94 * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one 95 of each number. 96 97 Here are some different ways of constructing a d6: 98 99 * Just import it: `from icepool import d6` 100 * Use the `d()` function: `icepool.d(6)` 101 * Use a d6 that you already have: `Die(d6)` or `Die([d6])` 102 * Mix a d3 and a d3+3: `Die([d3, d3+3])` 103 * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})` 104 * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])` 105 106 All quantities must be non-negative, though they can be zero. 107 108 Several methods and functions forward **kwargs to this constructor. 109 However, these only affect the construction of the returned or yielded 110 dice. Any other implicit conversions of arguments or operands to dice 111 will be done with the default keyword arguments. 112 113 EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with 114 some modification. See the `Again` documentation for details. 115 116 Denominator: For a flat set of outcomes, the denominator is just the 117 sum of the corresponding quantities. If the outcomes themselves have 118 secondary denominators, then the overall denominator will be minimized 119 while preserving the relative weighting of the primary outcomes. 120 121 Args: 122 outcomes: The faces of the `Die`. This can be one of the following: 123 * A `Sequence` of outcomes. Duplicates will contribute 124 quantity for each appearance. 125 * A `Mapping` from outcomes to quantities. 
126 127 Individual outcomes can each be one of the following: 128 129 * An outcome, which must be hashable and totally orderable. 130 * For convenience, `tuple`s containing `Population`s will be 131 `tupleize`d into a `Population` of `tuple`s. 132 This does not apply to subclasses of `tuple`s such as `namedtuple` 133 or other classes such as `Vector`. 134 * A `Die`, which will be flattened into the result. 135 The quantity assigned to a `Die` is shared among its 136 outcomes. The total denominator will be scaled up if 137 necessary. 138 * `icepool.Reroll`, which will drop itself from consideration. 139 * EXPERIMENTAL: `icepool.Again`. See the documentation for 140 `Again` for details. 141 times: Multiplies the quantity of each element of `outcomes`. 142 `times` can either be a sequence of the same length as 143 `outcomes` or a single `int` to apply to all elements of 144 `outcomes`. 145 again_count, again_depth, again_end: These affect how `Again` 146 expressions are handled. See the `Again` documentation for 147 details. 148 Raises: 149 ValueError: `None` is not a valid outcome for a `Die`. 150 """ 151 outcomes, times = icepool.creation_args.itemize(outcomes, times) 152 153 # Check for Again. 154 if icepool.population.again.contains_again(outcomes): 155 if again_count is not None: 156 if again_depth is not None: 157 raise ValueError( 158 'At most one of again_count and again_depth may be used.' 159 ) 160 if again_end is not None: 161 raise ValueError( 162 'again_end cannot be used with again_count.') 163 return icepool.population.again.evaluate_agains_using_count( 164 outcomes, times, again_count) 165 else: 166 if again_depth is None: 167 again_depth = 1 168 return icepool.population.again.evaluate_agains_using_depth( 169 outcomes, times, again_depth, again_end) 170 171 # Agains have been replaced by this point. 
172 outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType], 173 outcomes) 174 175 if len(outcomes) == 1 and times[0] == 1 and isinstance( 176 outcomes[0], Die): 177 return outcomes[0] 178 179 counts: Counts[T_co] = icepool.creation_args.expand_args_for_die( 180 outcomes, times) 181 182 return Die._new_raw(counts)
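The constructor rule that "duplicates in a sequence contribute quantity for each appearance" means a sequence with repeats and an explicit quantity mapping describe the same die. A stdlib-only sketch of that bookkeeping (the `quantities` helper is hypothetical, not part of icepool):

```python
from collections import Counter

def quantities(outcomes):
    """Hypothetical stand-in for Die's quantity bookkeeping: a sequence
    with duplicates and a {outcome: quantity} mapping describe the same die."""
    if isinstance(outcomes, dict):
        return dict(outcomes)
    return dict(Counter(outcomes))  # each appearance adds 1 to the quantity

seq = quantities([1, 1, 2, 3])            # duplicates accumulate
mapping = quantities({1: 2, 2: 1, 3: 1})  # explicit quantities
assert seq == mapping == {1: 2, 2: 1, 3: 1}
```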
def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                   **kwargs) -> 'icepool.Die[U]':
    """Performs the unary operation on the outcomes.

    This is used for the standard unary operators
    `-, +, abs, ~, round, trunc, floor, ceil`
    as well as the additional methods
    `zero, bool`.

    This is NOT used for the `[]` operator; when used directly, this is
    interpreted as a `Mapping` operation and returns the count corresponding
    to a given outcome. See `marginals()` for applying the `[]` operator to
    outcomes.

    Returns:
        A `Die` representing the result.

    Raises:
        ValueError: If tuples are of mismatched length.
    """
    return self._unary_operator(op, *args, **kwargs)
def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                    **kwargs) -> 'Die[U]':
    """Performs the operation on pairs of outcomes.

    By the time this is called, the other operand has already been
    converted to a `Die`.

    If one side of a binary operator is a tuple and the other is not, the
    binary operator is applied to each element of the tuple with the
    non-tuple side. For example, the following are equivalent:

    ```python
    cartesian_product(d6, d8) * 2
    cartesian_product(d6 * 2, d8 * 2)
    ```

    This is used for the standard binary operators
    `+, -, *, /, //, %, **, <<, >>, &, |, ^`
    and the standard binary comparators
    `<, <=, >=, >, ==, !=, cmp`.

    `==` and `!=` additionally set the truth value of the `Die` according to
    whether the dice themselves are the same or not.

    The `@` operator does NOT use this method directly.
    It rolls the left `Die`, which must have integer outcomes,
    then rolls the right `Die` that many times and sums the outcomes.

    Returns:
        A `Die` representing the result.

    Raises:
        ValueError: If tuples are of mismatched length within one of the
            dice or between the dice.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for (outcome_self,
         quantity_self), (outcome_other,
                          quantity_other) in itertools.product(
                              self.items(), other.items()):
        new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
        data[new_outcome] += quantity_self * quantity_other
    return self._new_type(data)
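The pairwise loop above is an ordinary convolution over weighted outcomes. A stdlib-only sketch of the same idea, using plain `(outcome, quantity)` lists in place of `Die` and `operator.add` as the `op`:

```python
import itertools
import operator
from collections import defaultdict

def binary_op(self_items, other_items, op):
    # Mirrors binary_operator above: combine every pair of outcomes
    # and multiply their quantities.
    data = defaultdict(int)
    for (a, qa), (b, qb) in itertools.product(self_items, other_items):
        data[op(a, b)] += qa * qb
    return dict(data)

d6 = [(i, 1) for i in range(1, 7)]
two_d6 = binary_op(d6, d6, operator.add)
assert two_d6[7] == 6             # 7 is the most likely sum of 2d6
assert sum(two_d6.values()) == 36  # denominators multiply
```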
def simplify(self) -> 'Die[T_co]':
    """Divides all quantities by their greatest common divisor."""
    return icepool.Die(self._data.simplify())
def reroll(self,
           which: Callable[..., bool] | Collection[T_co] | None = None,
           /,
           *,
           star: bool | None = None,
           depth: int | None) -> 'Die[T_co]':
    """Rerolls the given outcomes.

    Args:
        which: Selects which outcomes to reroll. Options:
            * A collection of outcomes to reroll.
            * A callable that takes an outcome and returns `True` if it
                should be rerolled.
            * If not provided, the min outcome will be rerolled.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `None`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if which is None:
        outcome_set = {self.min_outcome()}
    else:
        outcome_set = self._select_outcomes(which, star)

    if depth is None:
        data = {
            outcome: quantity
            for outcome, quantity in self.items()
            if outcome not in outcome_set
        }
    elif depth < 0:
        raise ValueError('reroll depth cannot be negative.')
    else:
        total_reroll_quantity = sum(quantity
                                    for outcome, quantity in self.items()
                                    if outcome in outcome_set)
        total_stop_quantity = self.denominator() - total_reroll_quantity
        rerollable_factor = total_reroll_quantity**depth
        stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                       * total_reroll_quantity) // total_stop_quantity
        data = {
            outcome: (rerollable_factor *
                      quantity if outcome in outcome_set else stop_factor *
                      quantity)
            for outcome, quantity in self.items()
        }
    return icepool.Die(data)
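In the depth-limited branch above, rerollable outcomes are weighted by `total_reroll_quantity**depth` and the matching `stop_factor` keeps everything over the common denominator `denominator**(depth + 1)`. A small stdlib check of that arithmetic for a d6 that rerolls 1s at most once (`depth=1`):

```python
# Re-derive the weight formula from the depth-limited branch of reroll()
# for a d6 rerolling 1s once.
quantities = {outcome: 1 for outcome in range(1, 7)}
reroll_set = {1}
depth = 1

denom = sum(quantities.values())                                      # 6
reroll_q = sum(q for o, q in quantities.items() if o in reroll_set)   # 1
stop_q = denom - reroll_q                                             # 5
rerollable_factor = reroll_q**depth                                   # 1
stop_factor = (denom**(depth + 1) - rerollable_factor * reroll_q) // stop_q

data = {
    o: (rerollable_factor if o in reroll_set else stop_factor) * q
    for o, q in quantities.items()
}
# P(1) = 1/36 (rolled a 1, then the reroll is also a 1);
# each other face gets 1/6 + 1/36 = 7/36.
assert data == {1: 1, 2: 7, 3: 7, 4: 7, 5: 7, 6: 7}
assert sum(data.values()) == denom**(depth + 1)  # denominator is 36
```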
def filter(self,
           which: Callable[..., bool] | Collection[T_co],
           /,
           *,
           star: bool | None = None,
           depth: int | None) -> 'Die[T_co]':
    """Rerolls until getting one of the given outcomes.

    Essentially the complement of `reroll()`.

    Args:
        which: Selects which outcomes to reroll until. Options:
            * A callable that takes an outcome and returns `True` if it
                should be accepted.
            * A collection of outcomes to reroll until.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `None`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if callable(which):
        if star is None:
            star = guess_star(which)
        if star:
            not_outcomes = {
                outcome
                for outcome in self.outcomes()
                if not which(*outcome)  # type: ignore
            }
        else:
            not_outcomes = {
                outcome
                for outcome in self.outcomes() if not which(outcome)
            }
    else:
        not_outcomes = {
            not_outcome
            for not_outcome in self.outcomes() if not_outcome not in which
        }
    return self.reroll(not_outcomes, depth=depth)
def split(self,
          which: Callable[..., bool] | Collection[T_co] | None = None,
          /,
          *,
          star: bool | None = None):
    """Splits this die into one containing selected items and another containing the rest.

    The total denominator is preserved.

    Equivalent to `self.filter(), self.reroll()`.

    Args:
        which: Selects which outcomes go into the first die. Options:
            * A callable that takes an outcome and returns `True` if it
                should be selected.
            * A collection of outcomes to select.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
    """
    if which is None:
        outcome_set = {self.min_outcome()}
    else:
        outcome_set = self._select_outcomes(which, star)

    selected = {}
    not_selected = {}
    for outcome, count in self.items():
        if outcome in outcome_set:
            selected[outcome] = count
        else:
            not_selected[outcome] = count

    return Die(selected), Die(not_selected)
def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Truncates the outcomes of this `Die` to the given range.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be truncated.

    This effectively rerolls outcomes outside the given range.
    If instead you want to replace those outcomes with the nearest endpoint,
    use `clip()`.

    Not to be confused with `trunc(die)`, which performs integer truncation
    on each outcome.
    """
    if min_outcome is not None:
        start = bisect.bisect_left(self.outcomes(), min_outcome)
    else:
        start = None
    if max_outcome is not None:
        stop = bisect.bisect_right(self.outcomes(), max_outcome)
    else:
        stop = None
    data = {k: v for k, v in self.items()[start:stop]}
    return icepool.Die(data)
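The `bisect` slicing above keeps exactly the outcomes inside the inclusive range. A stdlib-only sketch on a plain sorted list of `(outcome, quantity)` items (the `truncate` function here is a hypothetical stand-in, not the icepool method):

```python
import bisect

def truncate(items, min_outcome=None, max_outcome=None):
    # Mirrors Die.truncate(): slice the sorted items to the inclusive range.
    outcomes = [o for o, _ in items]
    start = (bisect.bisect_left(outcomes, min_outcome)
             if min_outcome is not None else None)
    stop = (bisect.bisect_right(outcomes, max_outcome)
            if max_outcome is not None else None)
    return items[start:stop]

d6 = [(i, 1) for i in range(1, 7)]
# Outcomes outside [2, 5] are dropped, shrinking the denominator to 4.
assert truncate(d6, 2, 5) == [(2, 1), (3, 1), (4, 1), (5, 1)]
```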
def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Clips the outcomes of this `Die` to the given values.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be clipped.

    This is not the same as rerolling outcomes beyond this range;
    the outcome is simply adjusted to fit within the range.
    This will typically cause some quantity to bunch up at the endpoint.
    If you want to reroll outcomes beyond this range, use `truncate()`.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for outcome, quantity in self.items():
        if min_outcome is not None and outcome <= min_outcome:
            data[min_outcome] += quantity
        elif max_outcome is not None and outcome >= max_outcome:
            data[max_outcome] += quantity
        else:
            data[outcome] += quantity
    return icepool.Die(data)
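Unlike `truncate()`, `clip()` preserves the denominator by moving out-of-range quantity onto the endpoints. A stdlib-only sketch of that bunching on a plain `{outcome: quantity}` dict (the `clip` function here is a hypothetical stand-in):

```python
from collections import defaultdict

def clip(quantities, min_outcome=None, max_outcome=None):
    # Mirrors Die.clip(): out-of-range quantity bunches up at the endpoints.
    data = defaultdict(int)
    for outcome, quantity in quantities.items():
        if min_outcome is not None and outcome <= min_outcome:
            data[min_outcome] += quantity
        elif max_outcome is not None and outcome >= max_outcome:
            data[max_outcome] += quantity
        else:
            data[outcome] += quantity
    return dict(data)

d6 = {i: 1 for i in range(1, 7)}
# 1 bunches up at 2, and 6 bunches up at 5; the denominator stays 6.
assert clip(d6, 2, 5) == {2: 2, 3: 1, 4: 1, 5: 2}
```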
def set_range(self: 'Die[int]',
              min_outcome: int | None = None,
              max_outcome: int | None = None) -> 'Die[int]':
    """Sets the outcomes of this `Die` to the given `int` range (inclusive).

    This may remove outcomes (if they are not within the range)
    and/or add zero-quantity outcomes (if they are in range but not present
    in this `Die`).

    Args:
        min_outcome: The min outcome of the result.
            If omitted, the min outcome of this `Die` will be used.
        max_outcome: The max outcome of the result.
            If omitted, the max outcome of this `Die` will be used.
    """
    if min_outcome is None:
        min_outcome = self.min_outcome()
    if max_outcome is None:
        max_outcome = self.max_outcome()

    return self.set_outcomes(range(min_outcome, max_outcome + 1))
def set_outcomes(self, outcomes: Iterable[T_co]) -> 'Die[T_co]':
    """Sets the set of outcomes to the argument.

    This may remove outcomes (if they are not present in the argument)
    and/or add zero-quantity outcomes (if they are not present in this `Die`).
    """
    data = {x: self.quantity(x) for x in outcomes}
    return icepool.Die(data)
def trim(self) -> 'Die[T_co]':
    """Removes all zero-quantity outcomes."""
    data = {k: v for k, v in self.items() if v > 0}
    return icepool.Die(data)
def map(
    self,
    repl:
    'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
    /,
    *extra_args,
    star: bool | None = None,
    repeat: int | None = 1,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None
) -> 'Die[U]':
    """Maps outcomes of the `Die` to other outcomes.

    This is also useful for representing processes.

    As `icepool.map(repl, self, ...)`.
    """
    return icepool.map(repl,
                       self,
                       *extra_args,
                       star=star,
                       repeat=repeat,
                       again_count=again_count,
                       again_depth=again_depth,
                       again_end=again_end)
def map_and_time(
    self,
    repl:
    'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
    /,
    *extra_args,
    star: bool | None = None,
    repeat: int) -> 'Die[tuple[T_co, int]]':
    """Repeatedly map outcomes of the state to other outcomes, while also
    counting timesteps.

    This is useful for representing processes.

    As `map_and_time(repl, self, ...)`.
    """
    return icepool.map_and_time(repl,
                                self,
                                *extra_args,
                                star=star,
                                repeat=repeat)
def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
    """The mean number of rolls until the cumulative sum is greater than or equal to the target.

    Args:
        target: The target sum.

    Raises:
        ValueError: If `self` has negative outcomes.
        ZeroDivisionError: If `self.mean() == 0`.
    """
    target = max(target, 0)

    if target < len(self._mean_time_to_sum_cache):
        return self._mean_time_to_sum_cache[target]

    if self.min_outcome() < 0:
        raise ValueError(
            'mean_time_to_sum does not handle negative outcomes.')
    time_per_effect = Fraction(self.denominator(),
                               self.denominator() - self.quantity(0))

    for i in range(len(self._mean_time_to_sum_cache), target + 1):
        result = time_per_effect + self.reroll(
            [0],
            depth=None).map(lambda x: self.mean_time_to_sum(i - x)).mean()
        self._mean_time_to_sum_cache.append(result)

    return result
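The recurrence above says: pay `time_per_effect` rolls per effective (nonzero) roll, then recurse on the remaining target with zeros rerolled away. A stdlib-only restatement for a die given as `{outcome: quantity}` with non-negative integer outcomes (the function here is a hypothetical, uncached stand-in for the method):

```python
from fractions import Fraction

def mean_time_to_sum(quantities, target):
    # Base case: a non-positive target needs zero rolls.
    if target <= 0:
        return Fraction(0)
    denom = sum(quantities.values())
    nonzero = {o: q for o, q in quantities.items() if o != 0}
    effective = sum(nonzero.values())
    # Expected rolls per roll that actually advances the sum.
    time_per_effect = Fraction(denom, effective)
    # Condition on the (zero-rerolled) outcome of that effective roll.
    return time_per_effect + sum(
        Fraction(q, effective) * mean_time_to_sum(quantities, target - o)
        for o, q in nonzero.items())

d6 = {i: 1 for i in range(1, 7)}
assert mean_time_to_sum(d6, 1) == 1
# Target 2: one roll, plus one more only if the first roll was a 1.
assert mean_time_to_sum(d6, 2) == Fraction(7, 6)
```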
def explode(self,
            which: Collection[T_co] | Callable[..., bool] | None = None,
            *,
            star: bool | None = None,
            depth: int = 9,
            end=None) -> 'Die[T_co]':
    """Causes outcomes to be rolled again and added to the total.

    Args:
        which: Which outcomes to explode. Options:
            * A single outcome to explode.
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
                should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of additional dice to roll, not counting
            the initial roll.
            If not supplied, a default value will be used.
        end: Once `depth` is reached, further explosions will be treated
            as this value. By default, a zero value will be used.
            `icepool.Reroll` will make one extra final roll, rerolling until
            a non-exploding outcome is reached.
    """

    if which is None:
        outcome_set = {self.max_outcome()}
    else:
        outcome_set = self._select_outcomes(which, star)

    if depth < 0:
        raise ValueError('depth cannot be negative.')
    elif depth == 0:
        return self

    def map_final(outcome):
        if outcome in outcome_set:
            return outcome + icepool.Again
        else:
            return outcome

    return self.map(map_final, again_depth=depth, again_end=end)
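The default behavior described above (max outcome explodes, extra rolls added to the total) can be checked by brute force. A stdlib-only enumeration of an exploding d6 with `depth=1`, i.e. at most one extra roll, with further explosions treated as zero per the default `end`:

```python
from fractions import Fraction

# Enumerate an exploding d6 with depth=1: a 6 triggers exactly one
# extra roll that is added to the total.
dist = {}
for first in range(1, 7):
    if first == 6:
        for second in range(1, 7):
            total = first + second
            dist[total] = dist.get(total, 0) + Fraction(1, 36)
    else:
        dist[first] = dist.get(first, 0) + Fraction(1, 6)

assert sum(dist.values()) == 1
assert dist[12] == Fraction(1, 36)  # 6 followed by another 6
assert 6 not in dist                # a bare 6 never survives; it explodes
```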
def if_else(
    self,
    outcome_if_true: U | 'Die[U]',
    outcome_if_false: U | 'Die[U]',
    *,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None
) -> 'Die[U]':
    """Ternary conditional operator.

    This replaces truthy outcomes with the first argument and falsy outcomes
    with the second argument.

    Args:
        again_count, again_depth, again_end: Forwarded to the final die
            constructor.
    """
    return self.map(lambda x: bool(x)).map(
        {
            True: outcome_if_true,
            False: outcome_if_false
        },
        again_count=again_count,
        again_depth=again_depth,
        again_end=again_end)
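The truthy/falsy replacement above amounts to partitioning the quantity between two outcomes. A stdlib-only sketch on a plain `{outcome: quantity}` dict (the `if_else` function here is a hypothetical stand-in for the method):

```python
def if_else(quantities, outcome_if_true, outcome_if_false):
    # Mirrors Die.if_else(): truthy outcomes map to the first argument,
    # falsy outcomes to the second; quantities are preserved.
    data = {}
    for outcome, q in quantities.items():
        key = outcome_if_true if outcome else outcome_if_false
        data[key] = data.get(key, 0) + q
    return data

# E.g. a success-count die {0: 4, 1: 1, 2: 1}: any success deals 1 damage.
assert if_else({0: 4, 1: 1, 2: 1}, 1, 0) == {0: 4, 1: 2}
```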
def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
    """A die that returns True iff the roll of the die is contained in the target."""
    return self.map(lambda x: x in target)
def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
    """Roll this die a number of times and count how many results are in the target."""
    return rolls @ self.is_in(target)
def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
    """Creates a `Pool` from this `Die`.

    You might subscript the pool immediately afterwards, e.g.
    `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
    lowest of 5d6.

    Args:
        rolls: The number of copies of this `Die` to put in the pool.
            Or, a sequence of one `int` per die acting as
            `keep_tuple`. Note that `...` cannot be used in the
            argument to this method, as the argument determines the size of
            the pool.
    """
    if isinstance(rolls, int):
        return icepool.Pool({self: rolls})
    else:
        pool_size = len(rolls)
        # Haven't dealt with narrowing return type.
        return icepool.Pool({self: pool_size})[rolls]  # type: ignore
def lowest(self,
           rolls: int,
           /,
           keep: int | None = None,
           drop: int | None = None) -> 'Die':
    """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll. All dice will have the same
            outcomes as `self`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be
                summed.
            * If only `drop` is provided, the `drop` lowest dice will be
                dropped and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
                the next `keep` lowest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = lowest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == 0 and canonical.stop == 1:
        return self._lowest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
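The single-lowest case above (neither `keep` nor `drop` provided) can be verified by brute force over the joint distribution. A stdlib-only check for the lowest of 2d6:

```python
import itertools
from collections import Counter

# Enumerate all 36 ordered pairs of two d6 and take the minimum,
# matching the single-lowest case of lowest().
dist = Counter(min(pair) for pair in itertools.product(range(1, 7), repeat=2))

assert sum(dist.values()) == 36
assert dist[1] == 11  # P(lowest = 1) = 11/36
assert dist[6] == 1   # both dice must show 6
```

The same enumeration with `max` in place of `min` checks the single-highest case of `highest()` below.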
def highest(self,
            rolls: int,
            /,
            keep: int | None = None,
            drop: int | None = None) -> 'Die[T_co]':
    """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be
                summed.
            * If only `drop` is provided, the `drop` highest dice will be
                dropped and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped, then
                the next `keep` highest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = highest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == rolls - 1 and canonical.stop == rolls:
        return self._highest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
def middle(
    self,
    rolls: int,
    /,
    keep: int = 1,
    *,
    tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
    """Roll several of this `Die` and sum the sorted results in the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep: The number of outcomes to sum. If this is greater than the
            current keep_size, all are kept.
        tie: What to do if `keep` is odd but the current keep_size
            is even, or vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
    """
    # Expression evaluators are difficult to type.
    return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore
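With an odd number of rolls and `keep=1`, the result above is simply the median of the sorted rolls, which a brute-force enumeration can confirm. A stdlib-only check for the middle of 3d6:

```python
import itertools
from collections import Counter

# Enumerate all 216 ordered triples of three d6 and keep the median,
# matching middle(3) with the default keep=1.
dist = Counter(sorted(triple)[1]
               for triple in itertools.product(range(1, 7), repeat=3))

assert sum(dist.values()) == 216
assert dist[1] == dist[6] == 16  # median of 1 needs at least two 1s
```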
def map_to_pool(
    self,
    repl:
    'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.Reroll] | None' = None,
    /,
    *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
    star: bool | None = None,
    denominator: int | None = None
) -> 'icepool.MultisetGenerator[U, tuple[int]]':
    """EXPERIMENTAL: Maps outcomes of this `Die` to `Pool`s, creating a `MultisetGenerator`.

    As `icepool.map_to_pool(repl, self, ...)`.

    If no argument is provided, the outcomes will be used to construct a
    mixture of pools directly, similar to the inverse of `pool.expand()`.
    Note that this is not particularly efficient since it does not make much
    use of dynamic programming.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
                produces a `Pool` (or something convertible to such).
            * A mapping from old outcomes to `Pool`
                (or something convertible to such).
                In this case args must have exactly one element.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        star: If `True`, the first of the args will be unpacked before
            giving them to `repl`.
            If not provided, it will be guessed based on the signature of
            `repl` and the number of arguments.
        denominator: If provided, the denominator of the result will be this
            value. Otherwise it will be the minimum to correctly weight the
            pools.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.

    Raises:
        ValueError: If `denominator` cannot be made consistent with the
            resulting mixture of pools.
    """
    if repl is None:
        repl = lambda x: x
    return icepool.map_to_pool(repl,
                               self,
                               *extra_args,
                               star=star,
                               denominator=denominator)
EXPERIMENTAL: Maps outcomes of this `Die` to `Pool`s, creating a `MultisetGenerator`.

As `icepool.map_to_pool(repl, self, ...)`.

If no argument is provided, the outcomes will be used to construct a mixture of pools directly, similar to the inverse of `pool.expand()`. Note that this is not particularly efficient since it does not make much use of dynamic programming.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a `Pool` (or something convertible to such).
  - A mapping from old outcomes to `Pool` (or something convertible to such). In this case args must have exactly one element.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`. If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- denominator: If provided, the denominator of the result will be this value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the resulting mixture of pools.
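The mixture bookkeeping can be sketched in plain Python. This is a toy model with a made-up `{outcome: quantity}` die and tuple "pools", not the library's types: each outcome selects a pool, and the outcome's quantity carries over as the pool's weight in the mixture.

```python
# Toy die: outcome -> quantity (denominator 4). Hypothetical example data.
die = {1: 1, 2: 1, 3: 2}

def repl(outcome):
    # Outcome n maps to a "pool" of n d6s, represented as a plain tuple.
    return ('d6',) * outcome

def map_to_pool_sketch(die, repl):
    # Each outcome maps to exactly one pool here, so the total
    # denominator of the mixture equals the die's denominator.
    mixture = {}
    for outcome, quantity in die.items():
        pool = repl(outcome)
        mixture[pool] = mixture.get(pool, 0) + quantity
    return mixture

mixture = map_to_pool_sketch(die, repl)
```

When `repl` maps outcomes to dice-valued pools of differing denominators, the branches must additionally be scaled to a common denominator, which is what the `denominator` argument controls.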
```python
def explode_to_pool(
        self,
        rolls: int,
        /,
        which: Collection[T_co] | Callable[..., bool] | None = None,
        *,
        star: bool | None = None,
        depth: int = 9) -> 'icepool.MultisetGenerator[T_co, tuple[int]]':
    """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

    Args:
        rolls: The number of initial dice.
        which: Which outcomes to explode. Options:
            * A single outcome to explode.
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
              should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum depth of explosions for an individual die.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.
    """
    if which is None:
        explode_set = {self.max_outcome()}
    else:
        explode_set = self._select_outcomes(which, star)
    explode, not_explode = self.split(explode_set)

    single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
        int)
    for i in range(depth + 1):
        weight = explode.denominator()**i * self.denominator()**(
            depth - i) * not_explode.denominator()
        single_data[icepool.Vector((i, 1))] += weight
    single_data[icepool.Vector(
        (depth + 1, 0))] += explode.denominator()**(depth + 1)

    single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
    count_die = rolls @ single_count_die

    return count_die.map_to_pool(
        lambda x, nx: [explode] * x + [not_explode] * nx)
```
EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

Arguments:
- rolls: The number of initial dice.
- which: Which outcomes to explode. Options:
  - A single outcome to explode.
  - A collection of outcomes to explode.
  - A callable that takes an outcome and returns `True` if it should be exploded.
  - If not supplied, the max outcome will explode.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `which`. If not provided, this will be guessed based on the function signature.
- depth: The maximum depth of explosions for an individual die.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.
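The per-die weighting can be illustrated directly. This sketch reproduces the weight formula from the source above for a single d6 that explodes on its max face with `depth=2`: exploding exactly `i` times then stopping has weight `explode**i * not_explode * total**(depth - i)`, and hitting the depth cap has weight `explode**(depth + 1)`, so everything sums to `total**(depth + 1)`. Names here are illustrative only.

```python
def explosion_counts(explode_den, total_den, depth):
    """Weights for how many times a single die explodes, capped at
    `depth` explosions (a sketch of the weighting above, not the API)."""
    not_explode_den = total_den - explode_den
    weights = {}
    for i in range(depth + 1):
        # Explode i times, then roll a non-exploding face; pad with
        # total_den**(depth - i) so all branches share a denominator.
        weights[i] = explode_den**i * not_explode_den * total_den**(depth - i)
    # Still exploding when the depth cap is reached.
    weights[depth + 1] = explode_den**(depth + 1)
    return weights

# d6 exploding on the single face 6, up to 2 explosions.
w = explosion_counts(explode_den=1, total_den=6, depth=2)
```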
```python
def reroll_to_pool(
    self,
    rolls: int,
    which: Callable[..., bool] | Collection[T_co],
    /,
    max_rerolls: int,
    *,
    star: bool | None = None,
    reroll_priority: Literal['random', 'lowest', 'highest'] = 'random'
) -> 'icepool.MultisetGenerator[T_co, tuple[int]]':
    """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

    Each die can only be rerolled once, and no more than `max_rerolls` dice
    may be rerolled.

    Args:
        rolls: How many dice in the pool.
        which: Selects which outcomes to reroll. Options:
            * A collection of outcomes to reroll.
            * A callable that takes an outcome and returns `True` if it
              should be rerolled.
        max_rerolls: The maximum number of dice to reroll.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        reroll_priority: Which values will be prioritized for rerolling. Options:
            * `'random'` (default): Eligible dice will be chosen uniformly at random.
            * `'lowest'`: The lowest eligible dice will be rerolled.
            * `'highest'`: The highest eligible dice will be rerolled.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.
    """
    outcome_set = self._select_outcomes(which, star)
    rerollable_die, not_rerollable_die = self.split(outcome_set)
    single_is_rerollable = icepool.coin(rerollable_die.denominator(),
                                        self.denominator())
    rerollable = rolls @ single_is_rerollable

    def split(initial_rerollable: int) -> Die[tuple[int, int, int]]:
        """Computes the composition of the pool.

        Returns:
            initial_rerollable: The number of dice that initially fell into
                the rerollable set.
            rerolled_to_rerollable: The number of dice that were rerolled,
                but fell into the rerollable set again.
            not_rerollable: The number of dice that ended up outside the
                rerollable set, including both initial and rerolled dice.
            not_rerolled: The number of dice that were eligible for
                rerolling but were not rerolled.
        """
        initial_not_rerollable = rolls - initial_rerollable
        rerolled = min(initial_rerollable, max_rerolls)
        not_rerolled = initial_rerollable - rerolled

        def second_split(rerolled_to_rerollable):
            """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets."""
            rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable
            return icepool.tupleize(
                initial_rerollable, rerolled_to_rerollable,
                initial_not_rerollable + rerolled_to_not_rerollable,
                not_rerolled)

        return icepool.map(second_split,
                           rerolled @ single_is_rerollable,
                           star=False)

    pool_composition = rerollable.map(split, star=False)

    def make_pool(initial_rerollable, rerolled_to_rerollable,
                  not_rerollable, not_rerolled):
        common = rerollable_die.pool(
            rerolled_to_rerollable) + not_rerollable_die.pool(
                not_rerollable)
        if reroll_priority == 'random':
            return common + rerollable_die.pool(not_rerolled)
        elif reroll_priority == 'lowest':
            return common + rerollable_die.pool(
                initial_rerollable).highest(not_rerolled)
        elif reroll_priority == 'highest':
            return common + rerollable_die.pool(initial_rerollable).lowest(
                not_rerolled)
        else:
            raise ValueError(
                f"Invalid reroll_priority '{reroll_priority}'. Allowed values are 'random', 'lowest', and 'highest'."
            )

    denominator = self.denominator()**(rolls + min(rolls, max_rerolls))

    return pool_composition.map_to_pool(make_pool,
                                        star=True,
                                        denominator=denominator)
```
EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

Each die can only be rerolled once, and no more than `max_rerolls` dice may be rerolled.

Arguments:
- rolls: How many dice in the pool.
- which: Selects which outcomes to reroll. Options:
  - A collection of outcomes to reroll.
  - A callable that takes an outcome and returns `True` if it should be rerolled.
- max_rerolls: The maximum number of dice to reroll.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `which`. If not provided, this will be guessed based on the function signature.
- reroll_priority: Which values will be prioritized for rerolling. Options:
  - `'random'` (default): Eligible dice will be chosen uniformly at random.
  - `'lowest'`: The lowest eligible dice will be rerolled.
  - `'highest'`: The highest eligible dice will be rerolled.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.
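The documented semantics can be checked by brute force on a tiny case. This plain-Python sketch (not the library) enumerates a pool of two two-sided dice that reroll 1s with `max_rerolls=1`, padding every branch so all weights share the documented denominator `total**(rolls + min(rolls, max_rerolls))`.

```python
from itertools import product
from collections import Counter

rolls, max_rerolls = 2, 1
outcomes = (1, 2)  # a two-sided "die"; 1s are eligible for reroll
denominator = len(outcomes)**(rolls + min(rolls, max_rerolls))  # 2**3 == 8

results = Counter()
for initial in product(outcomes, repeat=rolls):
    # Dice showing 1 are eligible; at most max_rerolls actually reroll.
    eligible = [i for i, x in enumerate(initial) if x == 1]
    rerolled = eligible[:max_rerolls]  # 'lowest'-style priority
    for news in product(outcomes, repeat=len(rerolled)):
        final = list(initial)
        for i, v in zip(rerolled, news):
            final[i] = v
        # Pad branches that used fewer rerolls to the common denominator.
        pad = len(outcomes)**(max_rerolls - len(rerolled))
        results[tuple(sorted(final))] += pad
```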
```python
def stochastic_round(self,
                     *,
                     max_denominator: int | None = None) -> 'Die[int]':
    """Randomly rounds outcomes up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    return self.map(lambda x: icepool.stochastic_round(
        x, max_denominator=max_denominator))
```
Randomly rounds outcomes up or down to the nearest integer according to the two distances.

Specifically, rounds `x` up with probability `x - floor(x)` and down otherwise.

Arguments:
- max_denominator: If provided, each rounding will be performed using `fractions.Fraction.limit_denominator(max_denominator)`. Otherwise, the rounding will be performed without `limit_denominator`.
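The rounding rule is easy to state as an exact distribution. A minimal sketch using `fractions.Fraction` (a hypothetical helper illustrating the rule, not the library function):

```python
from fractions import Fraction
from math import floor

def stochastic_round_dist(x):
    """Distribution of stochastically rounding x: up with probability
    x - floor(x), down otherwise (sketch of the documented behavior)."""
    x = Fraction(x)
    lo = floor(x)
    p_up = x - lo
    if p_up == 0:
        return {lo: Fraction(1)}
    return {lo: 1 - p_up, lo + 1: p_up}

dist = stochastic_round_dist(Fraction(9, 4))  # 2.25
```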
```python
def zero(self) -> 'Die[T_co]':
    """Zeros all outcomes of this die.

    This is done by multiplying all outcomes by `0`.

    The result will have the same denominator as this die.

    Raises:
        ValueError: If the zeros did not resolve to a single outcome.
    """
    result = self.unary_operator(Die._zero)
    if len(result) != 1:
        raise ValueError('zero() did not resolve to a single outcome.')
    return result
```
Zeros all outcomes of this die.

This is done by multiplying all outcomes by `0`.

The result will have the same denominator as this die.

Raises:
- ValueError: If the zeros did not resolve to a single outcome.
```python
def zero_outcome(self) -> T_co:
    """A zero-outcome for this die.

    E.g. `0` for a `Die` whose outcomes are `int`s.
    """
    return self.zero().outcomes()[0]
```
A zero-outcome for this die.

E.g. `0` for a `Die` whose outcomes are `int`s.
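Both methods reduce to multiplying every outcome by `0` and merging the results, which preserves quantities and hence the denominator. A sketch over a plain `{outcome: quantity}` mapping (note that for `str` outcomes `0 * outcome` is `''`, and for tuples it is `()`):

```python
# Toy die: outcome -> quantity (denominator 6).
die = {1: 1, 2: 2, 3: 3}

# zero(): multiply each outcome by 0 and merge; quantities are kept.
zeroed = {}
for outcome, quantity in die.items():
    zeroed[outcome * 0] = zeroed.get(outcome * 0, 0) + quantity

# zero_outcome(): the single remaining outcome.
zero_outcome = next(iter(zeroed))
```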
```python
def cmp(self, other) -> 'Die[int]':
    """A `Die` with outcomes 1, -1, and 0.

    The quantities are equal to the positive outcome of `self > other`,
    `self < other`, and the remainder respectively.

    This will include all three outcomes even if they have zero quantity.
    """
    other = implicit_convert_to_die(other)

    data = {}

    lt = self < other
    if True in lt:
        data[-1] = lt[True]
    eq = self == other
    if True in eq:
        data[0] = eq[True]
    gt = self > other
    if True in gt:
        data[1] = gt[True]

    return Die(data)
```
A `Die` with outcomes 1, -1, and 0.

The quantities are equal to the positive outcome of `self > other`, `self < other`, and the remainder respectively.

This will include all three outcomes even if they have zero quantity.
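The three quantities can be recovered by joint enumeration over both dice. A sketch with a hypothetical helper over plain mappings (this toy version only records signs that actually occur, whereas the method above includes all three outcomes):

```python
from itertools import product
from collections import Counter

def cmp_quantities(die_a, die_b):
    """Quantities of the sign of a - b over the joint distribution
    of two {outcome: quantity} mappings (sketch, not the library API)."""
    data = Counter()
    for (a, qa), (b, qb) in product(die_a.items(), die_b.items()):
        sign = (a > b) - (a < b)  # 1, 0, or -1
        data[sign] += qa * qb
    return dict(data)

d6 = {i: 1 for i in range(1, 7)}
result = cmp_quantities(d6, d6)
```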
```python
def equals(self, other, *, simplify: bool = False) -> bool:
    """`True` iff both dice have the same outcomes and quantities.

    This is `False` if `other` is not a `Die`, even if it would convert
    to an equal `Die`.

    Truth value does NOT matter.

    If one `Die` has a zero-quantity outcome and the other `Die` does not
    contain that outcome, they are treated as unequal by this function.

    The `==` and `!=` operators have a dual purpose; they return a `Die`
    with a truth value determined by this method.
    Only dice returned by these methods have a truth value. The data of
    these dice is lazily evaluated since the caller may only be interested
    in the `Die` value or the truth value.

    Args:
        simplify: If `True`, the dice will be simplified before comparing.
            Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
    """
    if not isinstance(other, Die):
        return False

    if simplify:
        return self.simplify()._hash_key == other.simplify()._hash_key
    else:
        return self._hash_key == other._hash_key
```
`True` iff both dice have the same outcomes and quantities.

This is `False` if `other` is not a `Die`, even if it would convert to an equal `Die`.

Truth value does NOT matter.

If one `Die` has a zero-quantity outcome and the other `Die` does not contain that outcome, they are treated as unequal by this function.

The `==` and `!=` operators have a dual purpose; they return a `Die` with a truth value determined by this method. Only dice returned by these methods have a truth value. The data of these dice is lazily evaluated since the caller may only be interested in the `Die` value or the truth value.

Arguments:
- simplify: If `True`, the dice will be simplified before comparing. Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
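The `simplify` comparison amounts to dividing all quantities by their GCD before comparing. A sketch over plain `{outcome: quantity}` mappings (not the library's `_hash_key` machinery), using the 2:2 vs 1:1 coin example from the docstring:

```python
from math import gcd
from functools import reduce

def simplify(die):
    """Divide all quantities by their common GCD (a sketch of what
    equals(simplify=True) effectively compares by)."""
    g = reduce(gcd, die.values())
    return {outcome: quantity // g for outcome, quantity in die.items()}

coin_2_2 = {'heads': 2, 'tails': 2}
coin_1_1 = {'heads': 1, 'tails': 1}
```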
Inherited Members
- Population
- outcomes
- common_outcome_length
- is_empty
- min_outcome
- max_outcome
- nearest_le
- nearest_lt
- nearest_ge
- nearest_gt
- quantity
- quantities
- denominator
- scale_quantities
- has_zero_quantities
- quantity_ne
- quantity_le
- quantity_lt
- quantity_ge
- quantity_gt
- quantities_le
- quantities_ge
- quantities_lt
- quantities_gt
- probability
- probability_le
- probability_lt
- probability_ge
- probability_gt
- probabilities
- probabilities_le
- probabilities_ge
- probabilities_lt
- probabilities_gt
- mode
- modal_quantity
- kolmogorov_smirnov
- cramer_von_mises
- median
- median_low
- median_high
- quantile
- quantile_low
- quantile_high
- mean
- variance
- standard_deviation
- sd
- standardized_moment
- skewness
- excess_kurtosis
- entropy
- marginals
- covariance
- correlation
- to_one_hot
- sample
- format
- collections.abc.Mapping
- get
28class Population(ABC, Generic[T_co], Mapping[Any, int]): 29 """A mapping from outcomes to `int` quantities. 30 31 Outcomes with each instance must be hashable and totally orderable. 32 33 Subclasses include `Die` and `Deck`. 34 """ 35 36 # Abstract methods. 37 38 @property 39 @abstractmethod 40 def _new_type(self) -> type: 41 """The type to use when constructing a new instance.""" 42 43 @abstractmethod 44 def keys(self) -> CountsKeysView[T_co]: 45 """The outcomes within the population in sorted order.""" 46 47 @abstractmethod 48 def values(self) -> CountsValuesView: 49 """The quantities within the population in outcome order.""" 50 51 @abstractmethod 52 def items(self) -> CountsItemsView[T_co]: 53 """The (outcome, quantity)s of the population in sorted order.""" 54 55 @property 56 def _items_for_cartesian_product(self) -> Sequence[tuple[T_co, int]]: 57 return self.items() 58 59 def _unary_operator(self, op: Callable, *args, **kwargs): 60 data: MutableMapping[Any, int] = defaultdict(int) 61 for outcome, quantity in self.items(): 62 new_outcome = op(outcome, *args, **kwargs) 63 data[new_outcome] += quantity 64 return self._new_type(data) 65 66 # Outcomes. 67 68 def outcomes(self) -> CountsKeysView[T_co]: 69 """The outcomes of the mapping in ascending order. 70 71 These are also the `keys` of the mapping. 72 Prefer to use the name `outcomes`. 73 """ 74 return self.keys() 75 76 @cached_property 77 def _common_outcome_length(self) -> int | None: 78 result = None 79 for outcome in self.outcomes(): 80 if isinstance(outcome, Mapping): 81 return None 82 elif isinstance(outcome, Sized): 83 if result is None: 84 result = len(outcome) 85 elif len(outcome) != result: 86 return None 87 return result 88 89 def common_outcome_length(self) -> int | None: 90 """The common length of all outcomes. 91 92 If outcomes have no lengths or different lengths, the result is `None`. 
93 """ 94 return self._common_outcome_length 95 96 def is_empty(self) -> bool: 97 """`True` iff this population has no outcomes. """ 98 return len(self) == 0 99 100 def min_outcome(self) -> T_co: 101 """The least outcome.""" 102 return self.outcomes()[0] 103 104 def max_outcome(self) -> T_co: 105 """The greatest outcome.""" 106 return self.outcomes()[-1] 107 108 def nearest_le(self, outcome) -> T_co | None: 109 """The nearest outcome that is <= the argument. 110 111 Returns `None` if there is no such outcome. 112 """ 113 if outcome in self: 114 return outcome 115 index = bisect.bisect_right(self.outcomes(), outcome) - 1 116 if index < 0: 117 return None 118 return self.outcomes()[index] 119 120 def nearest_lt(self, outcome) -> T_co | None: 121 """The nearest outcome that is < the argument. 122 123 Returns `None` if there is no such outcome. 124 """ 125 index = bisect.bisect_left(self.outcomes(), outcome) - 1 126 if index < 0: 127 return None 128 return self.outcomes()[index] 129 130 def nearest_ge(self, outcome) -> T_co | None: 131 """The nearest outcome that is >= the argument. 132 133 Returns `None` if there is no such outcome. 134 """ 135 if outcome in self: 136 return outcome 137 index = bisect.bisect_left(self.outcomes(), outcome) 138 if index >= len(self): 139 return None 140 return self.outcomes()[index] 141 142 def nearest_gt(self, outcome) -> T_co | None: 143 """The nearest outcome that is > the argument. 144 145 Returns `None` if there is no such outcome. 146 """ 147 index = bisect.bisect_right(self.outcomes(), outcome) 148 if index >= len(self): 149 return None 150 return self.outcomes()[index] 151 152 # Quantities. 153 154 def quantity(self, outcome: Hashable) -> int: 155 """The quantity of a single outcome, or 0 if not present.""" 156 return self.get(outcome, 0) 157 158 @overload 159 def quantities(self) -> CountsValuesView: 160 ... 161 162 @overload 163 def quantities(self, outcomes: Sequence) -> Sequence[int]: 164 ... 
165 166 def quantities( 167 self, 168 outcomes: Sequence | None = None 169 ) -> CountsValuesView | Sequence[int]: 170 """The quantities of the mapping in sorted order. 171 172 These are also the `values` of the mapping. 173 Prefer to use the name `quantities`. 174 175 Args: 176 outcomes: If provided, the quantities corresponding to these 177 outcomes will be returned (or 0 if not present). 178 """ 179 if outcomes is None: 180 return self.values() 181 else: 182 return tuple(self.quantity(outcome) for outcome in outcomes) 183 184 @cached_property 185 def _denominator(self) -> int: 186 return sum(self.values()) 187 188 def denominator(self) -> int: 189 """The sum of all quantities (e.g. weights or duplicates). 190 191 For the number of unique outcomes, including those with zero quantity, 192 use `len()`. 193 """ 194 return self._denominator 195 196 def scale_quantities(self: C, scale: int) -> C: 197 """Scales all quantities by an integer.""" 198 if scale == 1: 199 return self 200 data = { 201 outcome: quantity * scale 202 for outcome, quantity in self.items() 203 } 204 return self._new_type(data) 205 206 def has_zero_quantities(self) -> bool: 207 """`True` iff `self` contains at least one outcome with zero quantity. """ 208 return 0 in self.values() 209 210 def quantity_ne(self, outcome) -> int: 211 """The quantity != a single outcome. 
""" 212 return self.denominator() - self.quantity(outcome) 213 214 @cached_property 215 def _cumulative_quantities(self) -> Mapping[T_co, int]: 216 result = {} 217 cdf = 0 218 for outcome, quantity in self.items(): 219 cdf += quantity 220 result[outcome] = cdf 221 return result 222 223 def quantity_le(self, outcome) -> int: 224 """The quantity <= a single outcome.""" 225 outcome = self.nearest_le(outcome) 226 if outcome is None: 227 return 0 228 else: 229 return self._cumulative_quantities[outcome] 230 231 def quantity_lt(self, outcome) -> int: 232 """The quantity < a single outcome.""" 233 outcome = self.nearest_lt(outcome) 234 if outcome is None: 235 return 0 236 else: 237 return self._cumulative_quantities[outcome] 238 239 def quantity_ge(self, outcome) -> int: 240 """The quantity >= a single outcome.""" 241 return self.denominator() - self.quantity_lt(outcome) 242 243 def quantity_gt(self, outcome) -> int: 244 """The quantity > a single outcome.""" 245 return self.denominator() - self.quantity_le(outcome) 246 247 @cached_property 248 def _quantities_le(self) -> Sequence[int]: 249 return tuple(itertools.accumulate(self.values())) 250 251 def quantities_le(self, outcomes: Sequence | None = None) -> Sequence[int]: 252 """The quantity <= each outcome in order. 253 254 Args: 255 outcomes: If provided, the quantities corresponding to these 256 outcomes will be returned (or 0 if not present). 257 """ 258 if outcomes is None: 259 return self._quantities_le 260 else: 261 return tuple(self.quantity_le(x) for x in outcomes) 262 263 @cached_property 264 def _quantities_ge(self) -> Sequence[int]: 265 return tuple( 266 itertools.accumulate(self.values()[:-1], 267 operator.sub, 268 initial=self.denominator())) 269 270 def quantities_ge(self, outcomes: Sequence | None = None) -> Sequence[int]: 271 """The quantity >= each outcome in order. 272 273 Args: 274 outcomes: If provided, the quantities corresponding to these 275 outcomes will be returned (or 0 if not present). 
276 """ 277 if outcomes is None: 278 return self._quantities_ge 279 else: 280 return tuple(self.quantity_ge(x) for x in outcomes) 281 282 def quantities_lt(self, outcomes: Sequence | None = None) -> Sequence[int]: 283 """The quantity < each outcome in order. 284 285 Args: 286 outcomes: If provided, the quantities corresponding to these 287 outcomes will be returned (or 0 if not present). 288 """ 289 return tuple(self.denominator() - x 290 for x in self.quantities_ge(outcomes)) 291 292 def quantities_gt(self, outcomes: Sequence | None = None) -> Sequence[int]: 293 """The quantity > each outcome in order. 294 295 Args: 296 outcomes: If provided, the quantities corresponding to these 297 outcomes will be returned (or 0 if not present). 298 """ 299 return tuple(self.denominator() - x 300 for x in self.quantities_le(outcomes)) 301 302 # Probabilities. 303 304 @overload 305 def probability(self, outcome: Hashable, *, 306 percent: Literal[False]) -> Fraction: 307 ... 308 309 @overload 310 def probability(self, outcome: Hashable, *, 311 percent: Literal[True]) -> float: 312 ... 313 314 @overload 315 def probability(self, outcome: Hashable) -> Fraction: 316 ... 317 318 def probability(self, 319 outcome: Hashable, 320 *, 321 percent: bool = False) -> Fraction | float: 322 """The probability of a single outcome, or 0.0 if not present. """ 323 result = Fraction(self.quantity(outcome), self.denominator()) 324 return result * 100.0 if percent else result 325 326 @overload 327 def probability_le(self, outcome: Hashable, *, 328 percent: Literal[False]) -> Fraction: 329 ... 330 331 @overload 332 def probability_le(self, outcome: Hashable, *, 333 percent: Literal[True]) -> float: 334 ... 335 336 @overload 337 def probability_le(self, outcome: Hashable) -> Fraction: 338 ... 339 340 def probability_le(self, 341 outcome: Hashable, 342 *, 343 percent: bool = False) -> Fraction | float: 344 """The probability <= a single outcome. 
""" 345 result = Fraction(self.quantity_le(outcome), self.denominator()) 346 return result * 100.0 if percent else result 347 348 @overload 349 def probability_lt(self, outcome: Hashable, *, 350 percent: Literal[False]) -> Fraction: 351 ... 352 353 @overload 354 def probability_lt(self, outcome: Hashable, *, 355 percent: Literal[True]) -> float: 356 ... 357 358 @overload 359 def probability_lt(self, outcome: Hashable) -> Fraction: 360 ... 361 362 def probability_lt(self, 363 outcome: Hashable, 364 *, 365 percent: bool = False) -> Fraction | float: 366 """The probability < a single outcome. """ 367 result = Fraction(self.quantity_lt(outcome), self.denominator()) 368 return result * 100.0 if percent else result 369 370 @overload 371 def probability_ge(self, outcome: Hashable, *, 372 percent: Literal[False]) -> Fraction: 373 ... 374 375 @overload 376 def probability_ge(self, outcome: Hashable, *, 377 percent: Literal[True]) -> float: 378 ... 379 380 @overload 381 def probability_ge(self, outcome: Hashable) -> Fraction: 382 ... 383 384 def probability_ge(self, 385 outcome: Hashable, 386 *, 387 percent: bool = False) -> Fraction | float: 388 """The probability >= a single outcome. """ 389 result = Fraction(self.quantity_ge(outcome), self.denominator()) 390 return result * 100.0 if percent else result 391 392 @overload 393 def probability_gt(self, outcome: Hashable, *, 394 percent: Literal[False]) -> Fraction: 395 ... 396 397 @overload 398 def probability_gt(self, outcome: Hashable, *, 399 percent: Literal[True]) -> float: 400 ... 401 402 @overload 403 def probability_gt(self, outcome: Hashable) -> Fraction: 404 ... 405 406 def probability_gt(self, 407 outcome: Hashable, 408 *, 409 percent: bool = False) -> Fraction | float: 410 """The probability > a single outcome. 
""" 411 result = Fraction(self.quantity_gt(outcome), self.denominator()) 412 return result * 100.0 if percent else result 413 414 @cached_property 415 def _probabilities(self) -> Sequence[Fraction]: 416 return tuple(Fraction(v, self.denominator()) for v in self.values()) 417 418 @overload 419 def probabilities(self, 420 outcomes: Sequence | None = None, 421 *, 422 percent: Literal[False]) -> Sequence[Fraction]: 423 ... 424 425 @overload 426 def probabilities(self, 427 outcomes: Sequence | None = None, 428 *, 429 percent: Literal[True]) -> Sequence[float]: 430 ... 431 432 @overload 433 def probabilities(self, 434 outcomes: Sequence | None = None) -> Sequence[Fraction]: 435 ... 436 437 def probabilities( 438 self, 439 outcomes: Sequence | None = None, 440 *, 441 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 442 """The probability of each outcome in order. 443 444 Also known as the probability mass function (PMF). 445 446 Args: 447 outcomes: If provided, the probabilities corresponding to these 448 outcomes will be returned (or 0 if not present). 449 percent: If set, the results will be in percent 450 (i.e. total of 100.0) and the values are `float`s. 451 Otherwise, the total will be 1 and the values are `Fraction`s. 452 """ 453 if outcomes is None: 454 result = self._probabilities 455 else: 456 result = tuple(self.probability(x) for x in outcomes) 457 458 if percent: 459 return tuple(100.0 * x for x in result) 460 else: 461 return result 462 463 @cached_property 464 def _probabilities_le(self) -> Sequence[Fraction]: 465 return tuple( 466 Fraction(quantity, self.denominator()) 467 for quantity in self.quantities_le()) 468 469 @overload 470 def probabilities_le(self, 471 outcomes: Sequence | None = None, 472 *, 473 percent: Literal[False]) -> Sequence[Fraction]: 474 ... 475 476 @overload 477 def probabilities_le(self, 478 outcomes: Sequence | None = None, 479 *, 480 percent: Literal[True]) -> Sequence[float]: 481 ... 
482 483 @overload 484 def probabilities_le(self, 485 outcomes: Sequence | None = None 486 ) -> Sequence[Fraction]: 487 ... 488 489 def probabilities_le( 490 self, 491 outcomes: Sequence | None = None, 492 *, 493 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 494 """The probability of rolling <= each outcome in order. 495 496 Also known as the cumulative distribution function (CDF), 497 though this term is ambigiuous whether it is < or <=. 498 499 Args: 500 outcomes: If provided, the probabilities corresponding to these 501 outcomes will be returned (or 0 if not present). 502 percent: If set, the results will be in percent 503 (i.e. total of 100.0) and the values are `float`s. 504 Otherwise, the total will be 1 and the values are `Fraction`s. 505 """ 506 if outcomes is None: 507 result = self._probabilities_le 508 else: 509 result = tuple(self.probability_le(x) for x in outcomes) 510 511 if percent: 512 return tuple(100.0 * x for x in result) 513 else: 514 return result 515 516 @cached_property 517 def _probabilities_ge(self) -> Sequence[Fraction]: 518 return tuple( 519 Fraction(quantity, self.denominator()) 520 for quantity in self.quantities_ge()) 521 522 @overload 523 def probabilities_ge(self, 524 outcomes: Sequence | None = None, 525 *, 526 percent: Literal[False]) -> Sequence[Fraction]: 527 ... 528 529 @overload 530 def probabilities_ge(self, 531 outcomes: Sequence | None = None, 532 *, 533 percent: Literal[True]) -> Sequence[float]: 534 ... 535 536 @overload 537 def probabilities_ge(self, 538 outcomes: Sequence | None = None 539 ) -> Sequence[Fraction]: 540 ... 541 542 def probabilities_ge( 543 self, 544 outcomes: Sequence | None = None, 545 *, 546 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 547 """The probability of rolling >= each outcome in order. 
548 549 Also known as the survival function (SF) or 550 complementary cumulative distribution function (CCDF), 551 though these term are ambigiuous whether they are is > or >=. 552 553 Args: 554 outcomes: If provided, the probabilities corresponding to these 555 outcomes will be returned (or 0 if not present). 556 percent: If set, the results will be in percent 557 (i.e. total of 100.0) and the values are `float`s. 558 Otherwise, the total will be 1 and the values are `Fraction`s. 559 """ 560 if outcomes is None: 561 result = self._probabilities_ge 562 else: 563 result = tuple(self.probability_ge(x) for x in outcomes) 564 565 if percent: 566 return tuple(100.0 * x for x in result) 567 else: 568 return result 569 570 @overload 571 def probabilities_lt(self, 572 outcomes: Sequence | None = None, 573 *, 574 percent: Literal[False]) -> Sequence[Fraction]: 575 ... 576 577 @overload 578 def probabilities_lt(self, 579 outcomes: Sequence | None = None, 580 *, 581 percent: Literal[True]) -> Sequence[float]: 582 ... 583 584 @overload 585 def probabilities_lt(self, 586 outcomes: Sequence | None = None 587 ) -> Sequence[Fraction]: 588 ... 589 590 def probabilities_lt( 591 self, 592 outcomes: Sequence | None = None, 593 *, 594 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 595 """The probability of rolling < each outcome in order. 596 597 Args: 598 outcomes: If provided, the probabilities corresponding to these 599 outcomes will be returned (or 0 if not present). 600 percent: If set, the results will be in percent 601 (i.e. total of 100.0) and the values are `float`s. 602 Otherwise, the total will be 1 and the values are `Fraction`s. 
603 """ 604 if outcomes is None: 605 result = tuple(1 - x for x in self._probabilities_ge) 606 else: 607 result = tuple(1 - self.probability_ge(x) for x in outcomes) 608 609 if percent: 610 return tuple(100.0 * x for x in result) 611 else: 612 return result 613 614 @overload 615 def probabilities_gt(self, 616 outcomes: Sequence | None = None, 617 *, 618 percent: Literal[False]) -> Sequence[Fraction]: 619 ... 620 621 @overload 622 def probabilities_gt(self, 623 outcomes: Sequence | None = None, 624 *, 625 percent: Literal[True]) -> Sequence[float]: 626 ... 627 628 @overload 629 def probabilities_gt(self, 630 outcomes: Sequence | None = None 631 ) -> Sequence[Fraction]: 632 ... 633 634 def probabilities_gt( 635 self, 636 outcomes: Sequence | None = None, 637 *, 638 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 639 """The probability of rolling > each outcome in order. 640 641 Args: 642 outcomes: If provided, the probabilities corresponding to these 643 outcomes will be returned (or 0 if not present). 644 percent: If set, the results will be in percent 645 (i.e. total of 100.0) and the values are `float`s. 646 Otherwise, the total will be 1 and the values are `Fraction`s. 647 """ 648 if outcomes is None: 649 result = tuple(1 - x for x in self._probabilities_le) 650 else: 651 result = tuple(1 - self.probability_le(x) for x in outcomes) 652 653 if percent: 654 return tuple(100.0 * x for x in result) 655 else: 656 return result 657 658 # Scalar statistics. 659 660 def mode(self) -> tuple: 661 """A tuple containing the most common outcome(s) of the population. 662 663 These are sorted from lowest to highest. 664 """ 665 return tuple(outcome for outcome, quantity in self.items() 666 if quantity == self.modal_quantity()) 667 668 def modal_quantity(self) -> int: 669 """The highest quantity of any single outcome. """ 670 return max(self.quantities()) 671 672 def kolmogorov_smirnov(self, other) -> Fraction: 673 """Kolmogorov–Smirnov statistic. 
The maximum absolute difference between CDFs. """ 674 a, b = icepool.align(self, other) 675 return max( 676 abs(a - b) 677 for a, b in zip(a.probabilities_le(), b.probabilities_le())) 678 679 def cramer_von_mises(self, other) -> Fraction: 680 """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """ 681 a, b = icepool.align(self, other) 682 return sum( 683 ((a - b)**2 684 for a, b in zip(a.probabilities_le(), b.probabilities_le())), 685 start=Fraction(0, 1)) 686 687 def median(self): 688 """The median, taking the mean in case of a tie. 689 690 This will fail if the outcomes do not support division; 691 in this case, use `median_low` or `median_high` instead. 692 """ 693 return self.quantile(1, 2) 694 695 def median_low(self) -> T_co: 696 """The median, taking the lower in case of a tie.""" 697 return self.quantile_low(1, 2) 698 699 def median_high(self) -> T_co: 700 """The median, taking the higher in case of a tie.""" 701 return self.quantile_high(1, 2) 702 703 def quantile(self, n: int, d: int = 100): 704 """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie. 705 706 This will fail if the outcomes do not support addition and division; 707 in this case, use `quantile_low` or `quantile_high` instead. 708 """ 709 # Should support addition and division. 
710 return (self.quantile_low(n, d) + 711 self.quantile_high(n, d)) / 2 # type: ignore 712 713 def quantile_low(self, n: int, d: int = 100) -> T_co: 714 """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.""" 715 index = bisect.bisect_left(self.quantities_le(), 716 (n * self.denominator() + d - 1) // d) 717 if index >= len(self): 718 return self.max_outcome() 719 return self.outcomes()[index] 720 721 def quantile_high(self, n: int, d: int = 100) -> T_co: 722 """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.""" 723 index = bisect.bisect_right(self.quantities_le(), 724 n * self.denominator() // d) 725 if index >= len(self): 726 return self.max_outcome() 727 return self.outcomes()[index] 728 729 @overload 730 def mean(self: 'Population[numbers.Rational]') -> Fraction: 731 ... 732 733 @overload 734 def mean(self: 'Population[float]') -> float: 735 ... 736 737 def mean( 738 self: 'Population[numbers.Rational] | Population[float]' 739 ) -> Fraction | float: 740 return try_fraction( 741 sum(outcome * quantity for outcome, quantity in self.items()), 742 self.denominator()) 743 744 @overload 745 def variance(self: 'Population[numbers.Rational]') -> Fraction: 746 ... 747 748 @overload 749 def variance(self: 'Population[float]') -> float: 750 ... 
751 752 def variance( 753 self: 'Population[numbers.Rational] | Population[float]' 754 ) -> Fraction | float: 755 """This is the population variance, not the sample variance.""" 756 mean = self.mean() 757 mean_of_squares = try_fraction( 758 sum(quantity * outcome**2 for outcome, quantity in self.items()), 759 self.denominator()) 760 return mean_of_squares - mean * mean 761 762 def standard_deviation( 763 self: 'Population[numbers.Rational] | Population[float]') -> float: 764 return math.sqrt(self.variance()) 765 766 sd = standard_deviation 767 768 def standardized_moment( 769 self: 'Population[numbers.Rational] | Population[float]', 770 k: int) -> float: 771 sd = self.standard_deviation() 772 mean = self.mean() 773 ev = sum(p * (outcome - mean)**k # type: ignore 774 for outcome, p in zip(self.outcomes(), self.probabilities())) 775 return ev / (sd**k) 776 777 def skewness( 778 self: 'Population[numbers.Rational] | Population[float]') -> float: 779 return self.standardized_moment(3) 780 781 def excess_kurtosis( 782 self: 'Population[numbers.Rational] | Population[float]') -> float: 783 return self.standardized_moment(4) - 3.0 784 785 def entropy(self, base: float = 2.0) -> float: 786 """The entropy of a random sample from this population. 787 788 Args: 789 base: The logarithm base to use. Default is 2.0, which gives the 790 entropy in bits. 791 """ 792 return -sum(p * math.log(p, base) 793 for p in self.probabilities() if p > 0.0) 794 795 # Joint statistics. 
796 797 class _Marginals(Generic[C]): 798 """Helper class for implementing `marginals()`.""" 799 800 _population: C 801 802 def __init__(self, population, /): 803 self._population = population 804 805 def __len__(self) -> int: 806 """The minimum len() of all outcomes.""" 807 return min(len(x) for x in self._population.outcomes()) 808 809 def __getitem__(self, dims: int | slice, /): 810 """Marginalizes the given dimensions.""" 811 return self._population._unary_operator(operator.getitem, dims) 812 813 def __iter__(self) -> Iterator: 814 for i in range(len(self)): 815 yield self[i] 816 817 def __getattr__(self, key: str): 818 if key[0] == '_': 819 raise AttributeError(key) 820 return self._population._unary_operator(operator.attrgetter(key)) 821 822 @property 823 def marginals(self: C) -> _Marginals[C]: 824 """A property that applies the `[]` operator to outcomes. 825 826 For example, `population.marginals[:2]` will marginalize the first two 827 elements of sequence outcomes. 828 829 Attributes that do not start with an underscore will also be forwarded. 830 For example, `population.marginals.x` will marginalize the `x` attribute 831 from e.g. `namedtuple` outcomes. 832 """ 833 return Population._Marginals(self) 834 835 @overload 836 def covariance(self: 'Population[tuple[numbers.Rational, ...]]', i: int, 837 j: int) -> Fraction: 838 ... 839 840 @overload 841 def covariance(self: 'Population[tuple[float, ...]]', i: int, 842 j: int) -> float: 843 ... 
844 845 def covariance( 846 self: 847 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 848 i: int, j: int) -> Fraction | float: 849 mean_i = self.marginals[i].mean() 850 mean_j = self.marginals[j].mean() 851 return try_fraction( 852 sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity 853 for outcome, quantity in self.items()), self.denominator()) 854 855 def correlation( 856 self: 857 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 858 i: int, j: int) -> float: 859 sd_i = self.marginals[i].standard_deviation() 860 sd_j = self.marginals[j].standard_deviation() 861 return self.covariance(i, j) / (sd_i * sd_j) 862 863 # Transformations. 864 865 def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C: 866 """Converts the outcomes of this population to a one-hot representation. 867 868 Args: 869 outcomes: If provided, each outcome will be mapped to a `Vector` 870 where the element at `outcomes.index(outcome)` is set to `True` 871 and the rest to `False`, or all `False` if the outcome is not 872 in `outcomes`. 873 If not provided, `self.outcomes()` is used. 874 """ 875 if outcomes is None: 876 outcomes = self.outcomes() 877 878 data: MutableMapping[Vector[bool], int] = defaultdict(int) 879 for outcome, quantity in zip(self.outcomes(), self.quantities()): 880 value = [False] * len(outcomes) 881 if outcome in outcomes: 882 value[outcomes.index(outcome)] = True 883 data[Vector(value)] += quantity 884 return self._new_type(data) 885 886 def sample(self) -> T_co: 887 """A single random sample from this population. 888 889 Note that this is always "with replacement" even for `Deck` since 890 instances are immutable. 891 892 This uses the standard `random` package and is not cryptographically 893 secure. 894 """ 895 # We don't use random.choices since that is based on floats rather than ints. 
896 r = random.randrange(self.denominator()) 897 index = bisect.bisect_right(self.quantities_le(), r) 898 return self.outcomes()[index] 899 900 def format(self, format_spec: str, /, **kwargs) -> str: 901 """Formats this mapping as a string. 902 903 `format_spec` should start with the output format, 904 which can be: 905 * `md` for Markdown (default) 906 * `bbcode` for BBCode 907 * `csv` for comma-separated values 908 * `html` for HTML 909 910 After this, you may optionally add a `:` followed by a series of 911 requested columns. Allowed columns are: 912 913 * `o`: Outcomes. 914 * `*o`: Outcomes, unpacked if applicable. 915 * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome. 916 * `p==`, `p<=`, `p>=`: Probabilities (0-1) ==, <=, or >= each outcome. 917 * `%==`, `%<=`, `%>=`: Probabilities (0%-100%) ==, <=, or >= each outcome. 918 919 Columns may optionally be separated using `|` characters. 920 921 The default columns are `*o|q==|%==`, which are the unpacked outcomes, 922 the quantities, and the probabilities. The quantities are omitted from 923 the default columns if any individual quantity is 10**30 or greater. 
924 """ 925 if not self.is_empty() and self.modal_quantity() < 10**30: 926 default_column_spec = '*oq==%==' 927 else: 928 default_column_spec = '*o%==' 929 if len(format_spec) == 0: 930 format_spec = 'md:' + default_column_spec 931 932 format_spec = format_spec.replace('|', '') 933 934 parts = format_spec.split(':') 935 936 if len(parts) == 1: 937 output_format = parts[0] 938 col_spec = default_column_spec 939 elif len(parts) == 2: 940 output_format = parts[0] 941 col_spec = parts[1] 942 else: 943 raise ValueError('format_spec has too many colons.') 944 945 if output_format == 'md': 946 return icepool.population.format.markdown(self, col_spec) 947 elif output_format == 'bbcode': 948 return icepool.population.format.bbcode(self, col_spec) 949 elif output_format == 'csv': 950 return icepool.population.format.csv(self, col_spec, **kwargs) 951 elif output_format == 'html': 952 return icepool.population.format.html(self, col_spec) 953 else: 954 raise ValueError(f"Unsupported output format '{output_format}'") 955 956 def __format__(self, format_spec: str, /) -> str: 957 return self.format(format_spec) 958 959 def __str__(self) -> str: 960 return f'{self}'
A mapping from outcomes to `int` quantities.
Outcomes within each instance must be hashable and totally orderable.
43 @abstractmethod 44 def keys(self) -> CountsKeysView[T_co]: 45 """The outcomes within the population in sorted order."""
The outcomes within the population in sorted order.
47 @abstractmethod 48 def values(self) -> CountsValuesView: 49 """The quantities within the population in outcome order."""
The quantities within the population in outcome order.
51 @abstractmethod 52 def items(self) -> CountsItemsView[T_co]: 53 """The (outcome, quantity)s of the population in sorted order."""
The (outcome, quantity)s of the population in sorted order.
89 def common_outcome_length(self) -> int | None: 90 """The common length of all outcomes. 91 92 If outcomes have no lengths or different lengths, the result is `None`. 93 """ 94 return self._common_outcome_length
The common length of all outcomes.
If outcomes have no lengths or different lengths, the result is `None`.
96 def is_empty(self) -> bool: 97 """`True` iff this population has no outcomes. """ 98 return len(self) == 0
`True` iff this population has no outcomes.
108 def nearest_le(self, outcome) -> T_co | None: 109 """The nearest outcome that is <= the argument. 110 111 Returns `None` if there is no such outcome. 112 """ 113 if outcome in self: 114 return outcome 115 index = bisect.bisect_right(self.outcomes(), outcome) - 1 116 if index < 0: 117 return None 118 return self.outcomes()[index]
The nearest outcome that is <= the argument.
Returns `None` if there is no such outcome.
120 def nearest_lt(self, outcome) -> T_co | None: 121 """The nearest outcome that is < the argument. 122 123 Returns `None` if there is no such outcome. 124 """ 125 index = bisect.bisect_left(self.outcomes(), outcome) - 1 126 if index < 0: 127 return None 128 return self.outcomes()[index]
The nearest outcome that is < the argument.
Returns `None` if there is no such outcome.
130 def nearest_ge(self, outcome) -> T_co | None: 131 """The nearest outcome that is >= the argument. 132 133 Returns `None` if there is no such outcome. 134 """ 135 if outcome in self: 136 return outcome 137 index = bisect.bisect_left(self.outcomes(), outcome) 138 if index >= len(self): 139 return None 140 return self.outcomes()[index]
The nearest outcome that is >= the argument.
Returns `None` if there is no such outcome.
142 def nearest_gt(self, outcome) -> T_co | None: 143 """The nearest outcome that is > the argument. 144 145 Returns `None` if there is no such outcome. 146 """ 147 index = bisect.bisect_right(self.outcomes(), outcome) 148 if index >= len(self): 149 return None 150 return self.outcomes()[index]
The nearest outcome that is > the argument.
Returns `None` if there is no such outcome.
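The four `nearest_*` lookups above are thin wrappers around binary search on the sorted outcomes. As a standalone sketch (hypothetical outcome list, not icepool's actual implementation), `nearest_le` reduces to:

```python
import bisect

outcomes = [1, 3, 5]  # sorted outcomes of a hypothetical die

def nearest_le(outcome):
    # Index of the largest outcome <= the argument.
    index = bisect.bisect_right(outcomes, outcome) - 1
    if index < 0:
        return None  # no outcome is <= the argument
    return outcomes[index]
```

The other three variants differ only in using `bisect_left` vs. `bisect_right` and in which side falls off the end of the outcome list.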
154 def quantity(self, outcome: Hashable) -> int: 155 """The quantity of a single outcome, or 0 if not present.""" 156 return self.get(outcome, 0)
The quantity of a single outcome, or 0 if not present.
166 def quantities( 167 self, 168 outcomes: Sequence | None = None 169 ) -> CountsValuesView | Sequence[int]: 170 """The quantities of the mapping in sorted order. 171 172 These are also the `values` of the mapping. 173 Prefer to use the name `quantities`. 174 175 Args: 176 outcomes: If provided, the quantities corresponding to these 177 outcomes will be returned (or 0 if not present). 178 """ 179 if outcomes is None: 180 return self.values() 181 else: 182 return tuple(self.quantity(outcome) for outcome in outcomes)
The quantities of the mapping in sorted order.
These are also the `values` of the mapping.
Prefer to use the name `quantities`.
Arguments:
- outcomes: If provided, the quantities corresponding to these outcomes will be returned (or 0 if not present).
188 def denominator(self) -> int: 189 """The sum of all quantities (e.g. weights or duplicates). 190 191 For the number of unique outcomes, including those with zero quantity, 192 use `len()`. 193 """ 194 return self._denominator
The sum of all quantities (e.g. weights or duplicates).
For the number of unique outcomes, including those with zero quantity, use `len()`.
196 def scale_quantities(self: C, scale: int) -> C: 197 """Scales all quantities by an integer.""" 198 if scale == 1: 199 return self 200 data = { 201 outcome: quantity * scale 202 for outcome, quantity in self.items() 203 } 204 return self._new_type(data)
Scales all quantities by an integer.
206 def has_zero_quantities(self) -> bool: 207 """`True` iff `self` contains at least one outcome with zero quantity. """ 208 return 0 in self.values()
`True` iff `self` contains at least one outcome with zero quantity.
210 def quantity_ne(self, outcome) -> int: 211 """The quantity != a single outcome. """ 212 return self.denominator() - self.quantity(outcome)
The quantity != a single outcome.
223 def quantity_le(self, outcome) -> int: 224 """The quantity <= a single outcome.""" 225 outcome = self.nearest_le(outcome) 226 if outcome is None: 227 return 0 228 else: 229 return self._cumulative_quantities[outcome]
The quantity <= a single outcome.
231 def quantity_lt(self, outcome) -> int: 232 """The quantity < a single outcome.""" 233 outcome = self.nearest_lt(outcome) 234 if outcome is None: 235 return 0 236 else: 237 return self._cumulative_quantities[outcome]
The quantity < a single outcome.
239 def quantity_ge(self, outcome) -> int: 240 """The quantity >= a single outcome.""" 241 return self.denominator() - self.quantity_lt(outcome)
The quantity >= a single outcome.
243 def quantity_gt(self, outcome) -> int: 244 """The quantity > a single outcome.""" 245 return self.denominator() - self.quantity_le(outcome)
The quantity > a single outcome.
251 def quantities_le(self, outcomes: Sequence | None = None) -> Sequence[int]: 252 """The quantity <= each outcome in order. 253 254 Args: 255 outcomes: If provided, the quantities corresponding to these 256 outcomes will be returned (or 0 if not present). 257 """ 258 if outcomes is None: 259 return self._quantities_le 260 else: 261 return tuple(self.quantity_le(x) for x in outcomes)
The quantity <= each outcome in order.
Arguments:
- outcomes: If provided, the quantities corresponding to these outcomes will be returned (or 0 if not present).
270 def quantities_ge(self, outcomes: Sequence | None = None) -> Sequence[int]: 271 """The quantity >= each outcome in order. 272 273 Args: 274 outcomes: If provided, the quantities corresponding to these 275 outcomes will be returned (or 0 if not present). 276 """ 277 if outcomes is None: 278 return self._quantities_ge 279 else: 280 return tuple(self.quantity_ge(x) for x in outcomes)
The quantity >= each outcome in order.
Arguments:
- outcomes: If provided, the quantities corresponding to these outcomes will be returned (or 0 if not present).
282 def quantities_lt(self, outcomes: Sequence | None = None) -> Sequence[int]: 283 """The quantity < each outcome in order. 284 285 Args: 286 outcomes: If provided, the quantities corresponding to these 287 outcomes will be returned (or 0 if not present). 288 """ 289 return tuple(self.denominator() - x 290 for x in self.quantities_ge(outcomes))
The quantity < each outcome in order.
Arguments:
- outcomes: If provided, the quantities corresponding to these outcomes will be returned (or 0 if not present).
292 def quantities_gt(self, outcomes: Sequence | None = None) -> Sequence[int]: 293 """The quantity > each outcome in order. 294 295 Args: 296 outcomes: If provided, the quantities corresponding to these 297 outcomes will be returned (or 0 if not present). 298 """ 299 return tuple(self.denominator() - x 300 for x in self.quantities_le(outcomes))
The quantity > each outcome in order.
Arguments:
- outcomes: If provided, the quantities corresponding to these outcomes will be returned (or 0 if not present).
318 def probability(self, 319 outcome: Hashable, 320 *, 321 percent: bool = False) -> Fraction | float: 322 """The probability of a single outcome, or 0.0 if not present. """ 323 result = Fraction(self.quantity(outcome), self.denominator()) 324 return result * 100.0 if percent else result
The probability of a single outcome, or 0.0 if not present.
340 def probability_le(self, 341 outcome: Hashable, 342 *, 343 percent: bool = False) -> Fraction | float: 344 """The probability <= a single outcome. """ 345 result = Fraction(self.quantity_le(outcome), self.denominator()) 346 return result * 100.0 if percent else result
The probability <= a single outcome.
362 def probability_lt(self, 363 outcome: Hashable, 364 *, 365 percent: bool = False) -> Fraction | float: 366 """The probability < a single outcome. """ 367 result = Fraction(self.quantity_lt(outcome), self.denominator()) 368 return result * 100.0 if percent else result
The probability < a single outcome.
384 def probability_ge(self, 385 outcome: Hashable, 386 *, 387 percent: bool = False) -> Fraction | float: 388 """The probability >= a single outcome. """ 389 result = Fraction(self.quantity_ge(outcome), self.denominator()) 390 return result * 100.0 if percent else result
The probability >= a single outcome.
406 def probability_gt(self, 407 outcome: Hashable, 408 *, 409 percent: bool = False) -> Fraction | float: 410 """The probability > a single outcome. """ 411 result = Fraction(self.quantity_gt(outcome), self.denominator()) 412 return result * 100.0 if percent else result
The probability > a single outcome.
437 def probabilities( 438 self, 439 outcomes: Sequence | None = None, 440 *, 441 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 442 """The probability of each outcome in order. 443 444 Also known as the probability mass function (PMF). 445 446 Args: 447 outcomes: If provided, the probabilities corresponding to these 448 outcomes will be returned (or 0 if not present). 449 percent: If set, the results will be in percent 450 (i.e. total of 100.0) and the values are `float`s. 451 Otherwise, the total will be 1 and the values are `Fraction`s. 452 """ 453 if outcomes is None: 454 result = self._probabilities 455 else: 456 result = tuple(self.probability(x) for x in outcomes) 457 458 if percent: 459 return tuple(100.0 * x for x in result) 460 else: 461 return result
The probability of each outcome in order.
Also known as the probability mass function (PMF).
Arguments:
- outcomes: If provided, the probabilities corresponding to these outcomes will be returned (or 0 if not present).
- percent: If set, the results will be in percent (i.e. total of 100.0) and the values are `float`s. Otherwise, the total will be 1 and the values are `Fraction`s.
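The quantity-to-probability convention above can be illustrated independently of icepool with a plain dict of quantities (the weighted die here is a hypothetical example):

```python
from fractions import Fraction

# Hypothetical quantities for a weighted three-sided die.
quantities = {1: 1, 2: 2, 3: 3}
denominator = sum(quantities.values())  # 6

# Exact probabilities as Fractions, with a total of 1.
probabilities = [Fraction(q, denominator) for q in quantities.values()]

# With percent=True the results would instead be floats totaling 100.0.
percent = [100.0 * p for p in probabilities]
```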
489 def probabilities_le( 490 self, 491 outcomes: Sequence | None = None, 492 *, 493 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 494 """The probability of rolling <= each outcome in order. 495 496 Also known as the cumulative distribution function (CDF), 497 though this term is ambigiuous whether it is < or <=. 498 499 Args: 500 outcomes: If provided, the probabilities corresponding to these 501 outcomes will be returned (or 0 if not present). 502 percent: If set, the results will be in percent 503 (i.e. total of 100.0) and the values are `float`s. 504 Otherwise, the total will be 1 and the values are `Fraction`s. 505 """ 506 if outcomes is None: 507 result = self._probabilities_le 508 else: 509 result = tuple(self.probability_le(x) for x in outcomes) 510 511 if percent: 512 return tuple(100.0 * x for x in result) 513 else: 514 return result
The probability of rolling <= each outcome in order.
Also known as the cumulative distribution function (CDF), though this term is ambiguous as to whether it is < or <=.
Arguments:
- outcomes: If provided, the probabilities corresponding to these outcomes will be returned (or 0 if not present).
- percent: If set, the results will be in percent (i.e. total of 100.0) and the values are `float`s. Otherwise, the total will be 1 and the values are `Fraction`s.
542 def probabilities_ge( 543 self, 544 outcomes: Sequence | None = None, 545 *, 546 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 547 """The probability of rolling >= each outcome in order. 548 549 Also known as the survival function (SF) or 550 complementary cumulative distribution function (CCDF), 551 though these term are ambigiuous whether they are is > or >=. 552 553 Args: 554 outcomes: If provided, the probabilities corresponding to these 555 outcomes will be returned (or 0 if not present). 556 percent: If set, the results will be in percent 557 (i.e. total of 100.0) and the values are `float`s. 558 Otherwise, the total will be 1 and the values are `Fraction`s. 559 """ 560 if outcomes is None: 561 result = self._probabilities_ge 562 else: 563 result = tuple(self.probability_ge(x) for x in outcomes) 564 565 if percent: 566 return tuple(100.0 * x for x in result) 567 else: 568 return result
The probability of rolling >= each outcome in order.
Also known as the survival function (SF) or complementary cumulative distribution function (CCDF), though these terms are ambiguous as to whether they are > or >=.
Arguments:
- outcomes: If provided, the probabilities corresponding to these outcomes will be returned (or 0 if not present).
- percent: If set, the results will be in percent (i.e. total of 100.0) and the values are `float`s. Otherwise, the total will be 1 and the values are `Fraction`s.
590 def probabilities_lt( 591 self, 592 outcomes: Sequence | None = None, 593 *, 594 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 595 """The probability of rolling < each outcome in order. 596 597 Args: 598 outcomes: If provided, the probabilities corresponding to these 599 outcomes will be returned (or 0 if not present). 600 percent: If set, the results will be in percent 601 (i.e. total of 100.0) and the values are `float`s. 602 Otherwise, the total will be 1 and the values are `Fraction`s. 603 """ 604 if outcomes is None: 605 result = tuple(1 - x for x in self._probabilities_ge) 606 else: 607 result = tuple(1 - self.probability_ge(x) for x in outcomes) 608 609 if percent: 610 return tuple(100.0 * x for x in result) 611 else: 612 return result
The probability of rolling < each outcome in order.
Arguments:
- outcomes: If provided, the probabilities corresponding to these outcomes will be returned (or 0 if not present).
- percent: If set, the results will be in percent (i.e. total of 100.0) and the values are `float`s. Otherwise, the total will be 1 and the values are `Fraction`s.
634 def probabilities_gt( 635 self, 636 outcomes: Sequence | None = None, 637 *, 638 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 639 """The probability of rolling > each outcome in order. 640 641 Args: 642 outcomes: If provided, the probabilities corresponding to these 643 outcomes will be returned (or 0 if not present). 644 percent: If set, the results will be in percent 645 (i.e. total of 100.0) and the values are `float`s. 646 Otherwise, the total will be 1 and the values are `Fraction`s. 647 """ 648 if outcomes is None: 649 result = tuple(1 - x for x in self._probabilities_le) 650 else: 651 result = tuple(1 - self.probability_le(x) for x in outcomes) 652 653 if percent: 654 return tuple(100.0 * x for x in result) 655 else: 656 return result
The probability of rolling > each outcome in order.
Arguments:
- outcomes: If provided, the probabilities corresponding to these outcomes will be returned (or 0 if not present).
- percent: If set, the results will be in percent (i.e. total of 100.0) and the values are `float`s. Otherwise, the total will be 1 and the values are `Fraction`s.
660 def mode(self) -> tuple: 661 """A tuple containing the most common outcome(s) of the population. 662 663 These are sorted from lowest to highest. 664 """ 665 return tuple(outcome for outcome, quantity in self.items() 666 if quantity == self.modal_quantity())
A tuple containing the most common outcome(s) of the population.
These are sorted from lowest to highest.
668 def modal_quantity(self) -> int: 669 """The highest quantity of any single outcome. """ 670 return max(self.quantities())
The highest quantity of any single outcome.
672 def kolmogorov_smirnov(self, other) -> Fraction: 673 """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """ 674 a, b = icepool.align(self, other) 675 return max( 676 abs(a - b) 677 for a, b in zip(a.probabilities_le(), b.probabilities_le()))
Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs.
679 def cramer_von_mises(self, other) -> Fraction: 680 """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """ 681 a, b = icepool.align(self, other) 682 return sum( 683 ((a - b)**2 684 for a, b in zip(a.probabilities_le(), b.probabilities_le())), 685 start=Fraction(0, 1))
Cramér-von Mises statistic. The sum-of-squares difference between CDFs.
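Both statistics compare the two populations' CDFs pointwise. A standalone sketch over a shared outcome list (hypothetical quantities; icepool's `align` additionally matches up the outcomes of the two populations for you):

```python
from fractions import Fraction

def cdf(quantities, outcomes):
    """Cumulative probability <= each outcome, over a shared outcome list."""
    denom = sum(quantities.values())
    total = 0
    result = []
    for o in outcomes:
        total += quantities.get(o, 0)
        result.append(Fraction(total, denom))
    return result

fair = {1: 1, 2: 1}      # fair coin labeled 1, 2
biased = {1: 3, 2: 1}    # biased toward 1
outcomes = [1, 2]

a = cdf(fair, outcomes)    # [1/2, 1]
b = cdf(biased, outcomes)  # [3/4, 1]

# Kolmogorov-Smirnov: maximum absolute difference between CDFs.
ks = max(abs(x - y) for x, y in zip(a, b))

# Cramer-von Mises: sum of squared differences between CDFs.
cvm = sum(((x - y) ** 2 for x, y in zip(a, b)), start=Fraction(0))
```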
687 def median(self): 688 """The median, taking the mean in case of a tie. 689 690 This will fail if the outcomes do not support division; 691 in this case, use `median_low` or `median_high` instead. 692 """ 693 return self.quantile(1, 2)
The median, taking the mean in case of a tie.
This will fail if the outcomes do not support division; in this case, use `median_low` or `median_high` instead.
695 def median_low(self) -> T_co: 696 """The median, taking the lower in case of a tie.""" 697 return self.quantile_low(1, 2)
The median, taking the lower in case of a tie.
699 def median_high(self) -> T_co: 700 """The median, taking the higher in case of a tie.""" 701 return self.quantile_high(1, 2)
The median, taking the higher in case of a tie.
703 def quantile(self, n: int, d: int = 100): 704 """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie. 705 706 This will fail if the outcomes do not support addition and division; 707 in this case, use `quantile_low` or `quantile_high` instead. 708 """ 709 # Should support addition and division. 710 return (self.quantile_low(n, d) + 711 self.quantile_high(n, d)) / 2 # type: ignore
The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.
This will fail if the outcomes do not support addition and division; in this case, use `quantile_low` or `quantile_high` instead.
713 def quantile_low(self, n: int, d: int = 100) -> T_co: 714 """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.""" 715 index = bisect.bisect_left(self.quantities_le(), 716 (n * self.denominator() + d - 1) // d) 717 if index >= len(self): 718 return self.max_outcome() 719 return self.outcomes()[index]
The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.
721 def quantile_high(self, n: int, d: int = 100) -> T_co: 722 """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.""" 723 index = bisect.bisect_right(self.quantities_le(), 724 n * self.denominator() // d) 725 if index >= len(self): 726 return self.max_outcome() 727 return self.outcomes()[index]
The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.
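The tie-breaking difference between the low and high quantiles can be sketched with `bisect` over cumulative quantities (a simplified version for a fair d4; the real methods also handle empty populations):

```python
import bisect

outcomes = [1, 2, 3, 4]        # a fair d4
quantities_le = [1, 2, 3, 4]   # cumulative quantities <= each outcome
denominator = 4

def quantile_low(n, d=100):
    # Lowest outcome whose cumulative quantity reaches ceil(n * denominator / d).
    index = bisect.bisect_left(quantities_le, (n * denominator + d - 1) // d)
    return outcomes[min(index, len(outcomes) - 1)]

def quantile_high(n, d=100):
    # Lowest outcome whose cumulative quantity exceeds floor(n * denominator / d).
    index = bisect.bisect_right(quantities_le, n * denominator // d)
    return outcomes[min(index, len(outcomes) - 1)]
```

For the median of a d4, `quantile_low(1, 2)` gives 2 and `quantile_high(1, 2)` gives 3; `quantile` would then average them to 2.5.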
752 def variance( 753 self: 'Population[numbers.Rational] | Population[float]' 754 ) -> Fraction | float: 755 """This is the population variance, not the sample variance.""" 756 mean = self.mean() 757 mean_of_squares = try_fraction( 758 sum(quantity * outcome**2 for outcome, quantity in self.items()), 759 self.denominator()) 760 return mean_of_squares - mean * mean
This is the population variance, not the sample variance.
768 def standardized_moment( 769 self: 'Population[numbers.Rational] | Population[float]', 770 k: int) -> float: 771 sd = self.standard_deviation() 772 mean = self.mean() 773 ev = sum(p * (outcome - mean)**k # type: ignore 774 for outcome, p in zip(self.outcomes(), self.probabilities())) 775 return ev / (sd**k)
785 def entropy(self, base: float = 2.0) -> float: 786 """The entropy of a random sample from this population. 787 788 Args: 789 base: The logarithm base to use. Default is 2.0, which gives the 790 entropy in bits. 791 """ 792 return -sum(p * math.log(p, base) 793 for p in self.probabilities() if p > 0.0)
The entropy of a random sample from this population.
Arguments:
- base: The logarithm base to use. Default is 2.0, which gives the entropy in bits.
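For instance, the entropy of a fair d8 computed directly from the formula (stdlib only, independent of icepool):

```python
import math

probabilities = [1 / 8] * 8  # a fair d8

# Shannon entropy in bits (log base 2); zero-probability terms are skipped.
entropy = -sum(p * math.log(p, 2.0) for p in probabilities if p > 0.0)
# A fair d8 carries 3 bits of entropy.
```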
822 @property 823 def marginals(self: C) -> _Marginals[C]: 824 """A property that applies the `[]` operator to outcomes. 825 826 For example, `population.marginals[:2]` will marginalize the first two 827 elements of sequence outcomes. 828 829 Attributes that do not start with an underscore will also be forwarded. 830 For example, `population.marginals.x` will marginalize the `x` attribute 831 from e.g. `namedtuple` outcomes. 832 """ 833 return Population._Marginals(self)
A property that applies the `[]` operator to outcomes.
For example, `population.marginals[:2]` will marginalize the first two elements of sequence outcomes.
Attributes that do not start with an underscore will also be forwarded. For example, `population.marginals.x` will marginalize the `x` attribute from e.g. `namedtuple` outcomes.
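Marginalization itself can be sketched in plain Python for a population of tuple outcomes (a hypothetical example, not the `_Marginals` implementation):

```python
from collections import defaultdict

# Quantities for two coin flips, kept as (first, second) tuples.
population = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 1}

def marginal(population, dim):
    # Sum quantities over all outcomes sharing the same value at `dim`.
    result = defaultdict(int)
    for outcome, quantity in population.items():
        result[outcome[dim]] += quantity
    return dict(result)
```

Here `marginal(population, 0)` collapses the second flip, leaving each value of the first flip with quantity 2.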
845 def covariance( 846 self: 847 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 848 i: int, j: int) -> Fraction | float: 849 mean_i = self.marginals[i].mean() 850 mean_j = self.marginals[j].mean() 851 return try_fraction( 852 sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity 853 for outcome, quantity in self.items()), self.denominator())
865 def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C: 866 """Converts the outcomes of this population to a one-hot representation. 867 868 Args: 869 outcomes: If provided, each outcome will be mapped to a `Vector` 870 where the element at `outcomes.index(outcome)` is set to `True` 871 and the rest to `False`, or all `False` if the outcome is not 872 in `outcomes`. 873 If not provided, `self.outcomes()` is used. 874 """ 875 if outcomes is None: 876 outcomes = self.outcomes() 877 878 data: MutableMapping[Vector[bool], int] = defaultdict(int) 879 for outcome, quantity in zip(self.outcomes(), self.quantities()): 880 value = [False] * len(outcomes) 881 if outcome in outcomes: 882 value[outcomes.index(outcome)] = True 883 data[Vector(value)] += quantity 884 return self._new_type(data)
Converts the outcomes of this population to a one-hot representation.

Args:

* `outcomes`: If provided, each outcome will be mapped to a `Vector`
  where the element at `outcomes.index(outcome)` is set to `True` and
  the rest to `False`, or all `False` if the outcome is not in
  `outcomes`. If not provided, `self.outcomes()` is used.
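The one-hot conversion can be sketched with plain dictionaries (the d4 population below is illustrative, not icepool's actual types):

```python
from collections import defaultdict

# Hypothetical population data for a d4: outcome -> quantity.
population = {1: 1, 2: 1, 3: 1, 4: 1}
outcomes = [1, 2]  # outcomes to track; anything else maps to all-False

# Aggregate quantities by one-hot tuple, as to_one_hot() does with Vector.
one_hot = defaultdict(int)
for outcome, quantity in population.items():
    value = [False] * len(outcomes)
    if outcome in outcomes:
        value[outcomes.index(outcome)] = True
    one_hot[tuple(value)] += quantity
```

Here the outcomes 3 and 4 both collapse to the all-`False` tuple, so their quantities merge.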
886 def sample(self) -> T_co: 887 """A single random sample from this population. 888 889 Note that this is always "with replacement" even for `Deck` since 890 instances are immutable. 891 892 This uses the standard `random` package and is not cryptographically 893 secure. 894 """ 895 # We don't use random.choices since that is based on floats rather than ints. 896 r = random.randrange(self.denominator()) 897 index = bisect.bisect_right(self.quantities_le(), r) 898 return self.outcomes()[index]
A single random sample from this population.

Note that this is always "with replacement" even for `Deck` since
instances are immutable.

This uses the standard `random` package and is not cryptographically
secure.
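The integer-based sampling in the source above (chosen over `random.choices`, which works on floats) can be sketched with stdlib tools; the weighted population here is illustrative:

```python
import bisect
import itertools

# Hypothetical weighted population: outcomes with integer quantities.
outcomes = ['a', 'b', 'c']
quantities = [1, 2, 3]  # denominator = 6

# Cumulative quantities, as Population.quantities_le() would provide.
quantities_le = list(itertools.accumulate(quantities))

def sample(r: int) -> str:
    """Map an integer r in [0, denominator) to an outcome."""
    index = bisect.bisect_right(quantities_le, r)
    return outcomes[index]
```

Drawing `r = random.randrange(6)` and calling `sample(r)` yields `'a'` with probability 1/6, `'b'` with 2/6, and `'c'` with 3/6, entirely in exact integer arithmetic.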
900 def format(self, format_spec: str, /, **kwargs) -> str: 901 """Formats this mapping as a string. 902 903 `format_spec` should start with the output format, 904 which can be: 905 * `md` for Markdown (default) 906 * `bbcode` for BBCode 907 * `csv` for comma-separated values 908 * `html` for HTML 909 910 After this, you may optionally add a `:` followed by a series of 911 requested columns. Allowed columns are: 912 913 * `o`: Outcomes. 914 * `*o`: Outcomes, unpacked if applicable. 915 * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome. 916 * `p==`, `p<=`, `p>=`: Probabilities (0-1) ==, <=, or >= each outcome. 917 * `%==`, `%<=`, `%>=`: Probabilities (0%-100%) ==, <=, or >= each outcome. 918 919 Columns may optionally be separated using `|` characters. 920 921 The default columns are `*o|q==|%==`, which are the unpacked outcomes, 922 the quantities, and the probabilities. The quantities are omitted from 923 the default columns if any individual quantity is 10**30 or greater. 924 """ 925 if not self.is_empty() and self.modal_quantity() < 10**30: 926 default_column_spec = '*oq==%==' 927 else: 928 default_column_spec = '*o%==' 929 if len(format_spec) == 0: 930 format_spec = 'md:' + default_column_spec 931 932 format_spec = format_spec.replace('|', '') 933 934 parts = format_spec.split(':') 935 936 if len(parts) == 1: 937 output_format = parts[0] 938 col_spec = default_column_spec 939 elif len(parts) == 2: 940 output_format = parts[0] 941 col_spec = parts[1] 942 else: 943 raise ValueError('format_spec has too many colons.') 944 945 if output_format == 'md': 946 return icepool.population.format.markdown(self, col_spec) 947 elif output_format == 'bbcode': 948 return icepool.population.format.bbcode(self, col_spec) 949 elif output_format == 'csv': 950 return icepool.population.format.csv(self, col_spec, **kwargs) 951 elif output_format == 'html': 952 return icepool.population.format.html(self, col_spec) 953 else: 954 raise ValueError(f"Unsupported output format 
'{output_format}'")
Formats this mapping as a string.

`format_spec` should start with the output format, which can be:

* `md` for Markdown (default)
* `bbcode` for BBCode
* `csv` for comma-separated values
* `html` for HTML

After this, you may optionally add a `:` followed by a series of
requested columns. Allowed columns are:

* `o`: Outcomes.
* `*o`: Outcomes, unpacked if applicable.
* `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
* `p==`, `p<=`, `p>=`: Probabilities (0-1) ==, <=, or >= each outcome.
* `%==`, `%<=`, `%>=`: Probabilities (0%-100%) ==, <=, or >= each outcome.

Columns may optionally be separated using `|` characters.

The default columns are `*o|q==|%==`, which are the unpacked outcomes,
the quantities, and the probabilities. The quantities are omitted from
the default columns if any individual quantity is 10**30 or greater.
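The spec-splitting logic from the source above can be sketched in isolation (`parse_format_spec` is a hypothetical helper name, not part of the icepool API):

```python
def parse_format_spec(format_spec: str,
                      default_column_spec: str = '*oq==%==') -> tuple[str, str]:
    """Split a spec like 'md:*o|q==|%==' into (output_format, column_spec)."""
    format_spec = format_spec.replace('|', '')  # '|' separators are cosmetic
    parts = format_spec.split(':')
    if len(parts) == 1:
        return parts[0], default_column_spec
    if len(parts) == 2:
        return parts[0], parts[1]
    raise ValueError('format_spec has too many colons.')
```

For example, `parse_format_spec('md:*o|q==|%==')` and `parse_format_spec('md')` select the same columns, since the default column spec matches.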
77def tupleize( 78 *args: 'T | icepool.Population[T]' 79) -> 'tuple[T, ...] | icepool.Population[tuple[T, ...]]': 80 """Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof. 81 82 For example: 83 * `tupleize(1, 2)` would produce `(1, 2)`. 84 * `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`, 85 ... `(6, 0)`. 86 * `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`, 87 ... `(6, 5)`, `(6, 6)`. 88 89 If `Population`s are provided, they must all be `Die` or all `Deck` and not 90 a mixture of the two. 91 92 Returns: 93 If none of the outcomes is a `Population`, the result is a `tuple` 94 with one element per argument. Otherwise, the result is a `Population` 95 of the same type as the input `Population`, and the outcomes are 96 `tuple`s with one element per argument. 97 """ 98 return cartesian_product(*args, outcome_type=tuple)
Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.

For example:

* `tupleize(1, 2)` would produce `(1, 2)`.
* `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`, ... `(6, 0)`.
* `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`, ... `(6, 5)`, `(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not
a mixture of the two.

Returns:
If none of the outcomes is a `Population`, the result is a `tuple`
with one element per argument. Otherwise, the result is a `Population`
of the same type as the input `Population`, and the outcomes are
`tuple`s with one element per argument.
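The quantity-weighted Cartesian product behind `tupleize` can be sketched with plain mappings (the dict-based populations here are illustrative stand-ins for `Die`):

```python
from collections import Counter
from itertools import product

# Hypothetical populations as outcome -> quantity mappings.
d6 = {i: 1 for i in range(1, 7)}
scalar = {0: 1}  # a non-population argument acts as a single-outcome population

# Cartesian product: quantities multiply across independent arguments.
result = Counter()
for (o1, q1), (o2, q2) in product(d6.items(), scalar.items()):
    result[(o1, o2)] += q1 * q2
```

This reproduces the `tupleize(d6, 0)` example: six equally likely outcomes `(1, 0)` through `(6, 0)`.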
101def vectorize( 102 *args: 'T | icepool.Population[T]' 103) -> 'Vector[T] | icepool.Population[Vector[T]]': 104 """Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof. 105 106 For example: 107 * `vectorize(1, 2)` would produce `Vector(1, 2)`. 108 * `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`, 109 `Vector(2, 0)`, ... `Vector(6, 0)`. 110 * `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`, 111 `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`. 112 113 If `Population`s are provided, they must all be `Die` or all `Deck` and not 114 a mixture of the two. 115 116 Returns: 117 If none of the outcomes is a `Population`, the result is a `Vector` 118 with one element per argument. Otherwise, the result is a `Population` 119 of the same type as the input `Population`, and the outcomes are 120 `Vector`s with one element per argument. 121 """ 122 return cartesian_product(*args, outcome_type=Vector)
Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.

For example:

* `vectorize(1, 2)` would produce `Vector(1, 2)`.
* `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`, `Vector(2, 0)`, ... `Vector(6, 0)`.
* `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`, `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not
a mixture of the two.

Returns:
If none of the outcomes is a `Population`, the result is a `Vector`
with one element per argument. Otherwise, the result is a `Population`
of the same type as the input `Population`, and the outcomes are
`Vector`s with one element per argument.
125class Vector(Outcome, Sequence[T_co]): 126 """Immutable tuple-like class that applies most operators elementwise. 127 128 May become a variadic generic type in the future. 129 """ 130 __slots__ = ['_data'] 131 132 _data: tuple[T_co, ...] 133 134 def __init__(self, 135 elements: Iterable[T_co], 136 *, 137 truth_value: bool | None = None) -> None: 138 self._data = tuple(elements) 139 if any(isinstance(x, icepool.AgainExpression) for x in self._data): 140 raise TypeError('Again is not a valid element of Vector.') 141 self._truth_value = truth_value 142 143 def __hash__(self) -> int: 144 return hash((Vector, self._data)) 145 146 def __len__(self) -> int: 147 return len(self._data) 148 149 @overload 150 def __getitem__(self, index: int) -> T_co: 151 ... 152 153 @overload 154 def __getitem__(self, index: slice) -> 'Vector[T_co]': 155 ... 156 157 def __getitem__(self, index: int | slice) -> 'T_co | Vector[T_co]': 158 if isinstance(index, int): 159 return self._data[index] 160 else: 161 return Vector(self._data[index]) 162 163 def __iter__(self) -> Iterator[T_co]: 164 return iter(self._data) 165 166 # Unary operators. 167 168 def unary_operator(self, op: Callable[..., U], *args, 169 **kwargs) -> 'Vector[U]': 170 """Unary operators on `Vector` are applied elementwise. 
171 172 This is used for the standard unary operators 173 `-, +, abs, ~, round, trunc, floor, ceil` 174 """ 175 return Vector(op(x, *args, **kwargs) for x in self) 176 177 def __neg__(self) -> 'Vector[T_co]': 178 return self.unary_operator(operator.neg) 179 180 def __pos__(self) -> 'Vector[T_co]': 181 return self.unary_operator(operator.pos) 182 183 def __invert__(self) -> 'Vector[T_co]': 184 return self.unary_operator(operator.invert) 185 186 def abs(self) -> 'Vector[T_co]': 187 return self.unary_operator(operator.abs) 188 189 __abs__ = abs 190 191 def round(self, ndigits: int | None = None) -> 'Vector': 192 return self.unary_operator(round, ndigits) 193 194 __round__ = round 195 196 def trunc(self) -> 'Vector': 197 return self.unary_operator(math.trunc) 198 199 __trunc__ = trunc 200 201 def floor(self) -> 'Vector': 202 return self.unary_operator(math.floor) 203 204 __floor__ = floor 205 206 def ceil(self) -> 'Vector': 207 return self.unary_operator(math.ceil) 208 209 __ceil__ = ceil 210 211 # Binary operators. 212 213 def binary_operator(self, 214 other, 215 op: Callable[..., U], 216 *args, 217 compare_for_truth: bool = False, 218 **kwargs) -> 'Vector[U]': 219 """Binary operators on `Vector` are applied elementwise. 220 221 If the other operand is also a `Vector`, the operator is applied to each 222 pair of elements from `self` and `other`. Both must have the same 223 length. 224 225 Otherwise the other operand is broadcast to each element of `self`. 226 227 This is used for the standard binary operators 228 `+, -, *, /, //, %, **, <<, >>, &, |, ^`. 229 230 `@` is not included due to its different meaning in `Die`. 231 232 This is also used for the comparators 233 `<, <=, >, >=, ==, !=`. 234 235 In this case, the result also has a truth value based on lexicographic 236 ordering. 
237 """ 238 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 239 return NotImplemented # delegate to the other 240 if isinstance(other, Vector): 241 if len(self) == len(other): 242 if compare_for_truth: 243 truth_value = cast(bool, op(self._data, other._data)) 244 else: 245 truth_value = None 246 return Vector( 247 (op(x, y, *args, **kwargs) for x, y in zip(self, other)), 248 truth_value=truth_value) 249 else: 250 raise IndexError( 251 f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).' 252 ) 253 else: 254 return Vector((op(x, other, *args, **kwargs) for x in self)) 255 256 def reverse_binary_operator(self, other, op: Callable[..., U], *args, 257 **kwargs) -> 'Vector[U]': 258 """Reverse version of `binary_operator()`.""" 259 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 260 return NotImplemented # delegate to the other 261 if isinstance(other, Vector): 262 if len(self) == len(other): 263 return Vector( 264 op(y, x, *args, **kwargs) for x, y in zip(self, other)) 265 else: 266 raise IndexError( 267 f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).' 
268 ) 269 else: 270 return Vector(op(other, x, *args, **kwargs) for x in self) 271 272 def __add__(self, other) -> 'Vector': 273 return self.binary_operator(other, operator.add) 274 275 def __radd__(self, other) -> 'Vector': 276 return self.reverse_binary_operator(other, operator.add) 277 278 def __sub__(self, other) -> 'Vector': 279 return self.binary_operator(other, operator.sub) 280 281 def __rsub__(self, other) -> 'Vector': 282 return self.reverse_binary_operator(other, operator.sub) 283 284 def __mul__(self, other) -> 'Vector': 285 return self.binary_operator(other, operator.mul) 286 287 def __rmul__(self, other) -> 'Vector': 288 return self.reverse_binary_operator(other, operator.mul) 289 290 def __truediv__(self, other) -> 'Vector': 291 return self.binary_operator(other, operator.truediv) 292 293 def __rtruediv__(self, other) -> 'Vector': 294 return self.reverse_binary_operator(other, operator.truediv) 295 296 def __floordiv__(self, other) -> 'Vector': 297 return self.binary_operator(other, operator.floordiv) 298 299 def __rfloordiv__(self, other) -> 'Vector': 300 return self.reverse_binary_operator(other, operator.floordiv) 301 302 def __pow__(self, other) -> 'Vector': 303 return self.binary_operator(other, operator.pow) 304 305 def __rpow__(self, other) -> 'Vector': 306 return self.reverse_binary_operator(other, operator.pow) 307 308 def __mod__(self, other) -> 'Vector': 309 return self.binary_operator(other, operator.mod) 310 311 def __rmod__(self, other) -> 'Vector': 312 return self.reverse_binary_operator(other, operator.mod) 313 314 def __lshift__(self, other) -> 'Vector': 315 return self.binary_operator(other, operator.lshift) 316 317 def __rlshift__(self, other) -> 'Vector': 318 return self.reverse_binary_operator(other, operator.lshift) 319 320 def __rshift__(self, other) -> 'Vector': 321 return self.binary_operator(other, operator.rshift) 322 323 def __rrshift__(self, other) -> 'Vector': 324 return self.reverse_binary_operator(other, 
operator.rshift) 325 326 def __and__(self, other) -> 'Vector': 327 return self.binary_operator(other, operator.and_) 328 329 def __rand__(self, other) -> 'Vector': 330 return self.reverse_binary_operator(other, operator.and_) 331 332 def __or__(self, other) -> 'Vector': 333 return self.binary_operator(other, operator.or_) 334 335 def __ror__(self, other) -> 'Vector': 336 return self.reverse_binary_operator(other, operator.or_) 337 338 def __xor__(self, other) -> 'Vector': 339 return self.binary_operator(other, operator.xor) 340 341 def __rxor__(self, other) -> 'Vector': 342 return self.reverse_binary_operator(other, operator.xor) 343 344 # Comparators. 345 # These returns a value with a truth value, but not a bool. 346 347 def __lt__(self, other) -> 'Vector': # type: ignore 348 if not isinstance(other, Vector): 349 return NotImplemented 350 return self.binary_operator(other, operator.lt, compare_for_truth=True) 351 352 def __le__(self, other) -> 'Vector': # type: ignore 353 if not isinstance(other, Vector): 354 return NotImplemented 355 return self.binary_operator(other, operator.le, compare_for_truth=True) 356 357 def __gt__(self, other) -> 'Vector': # type: ignore 358 if not isinstance(other, Vector): 359 return NotImplemented 360 return self.binary_operator(other, operator.gt, compare_for_truth=True) 361 362 def __ge__(self, other) -> 'Vector': # type: ignore 363 if not isinstance(other, Vector): 364 return NotImplemented 365 return self.binary_operator(other, operator.ge, compare_for_truth=True) 366 367 def __eq__(self, other) -> 'Vector | bool': # type: ignore 368 if not isinstance(other, Vector): 369 return False 370 return self.binary_operator(other, operator.eq, compare_for_truth=True) 371 372 def __ne__(self, other) -> 'Vector | bool': # type: ignore 373 if not isinstance(other, Vector): 374 return True 375 return self.binary_operator(other, operator.ne, compare_for_truth=True) 376 377 def __bool__(self) -> bool: 378 if self._truth_value is None: 379 raise 
TypeError( 380 'Vector only has a truth value if it is the result of a comparison operator.' 381 ) 382 return self._truth_value 383 384 # Sequence manipulation. 385 386 def append(self, other) -> 'Vector': 387 return Vector(self._data + (other, )) 388 389 def concatenate(self, other: 'Iterable') -> 'Vector': 390 return Vector(itertools.chain(self, other)) 391 392 # Strings. 393 394 def __repr__(self) -> str: 395 return type(self).__qualname__ + '(' + repr(self._data) + ')' 396 397 def __str__(self) -> str: 398 return type(self).__qualname__ + '(' + str(self._data) + ')'
Immutable tuple-like class that applies most operators elementwise.
May become a variadic generic type in the future.
168 def unary_operator(self, op: Callable[..., U], *args, 169 **kwargs) -> 'Vector[U]': 170 """Unary operators on `Vector` are applied elementwise. 171 172 This is used for the standard unary operators 173 `-, +, abs, ~, round, trunc, floor, ceil` 174 """ 175 return Vector(op(x, *args, **kwargs) for x in self)
Unary operators on `Vector` are applied elementwise.

This is used for the standard unary operators
`-, +, abs, ~, round, trunc, floor, ceil`.
213 def binary_operator(self, 214 other, 215 op: Callable[..., U], 216 *args, 217 compare_for_truth: bool = False, 218 **kwargs) -> 'Vector[U]': 219 """Binary operators on `Vector` are applied elementwise. 220 221 If the other operand is also a `Vector`, the operator is applied to each 222 pair of elements from `self` and `other`. Both must have the same 223 length. 224 225 Otherwise the other operand is broadcast to each element of `self`. 226 227 This is used for the standard binary operators 228 `+, -, *, /, //, %, **, <<, >>, &, |, ^`. 229 230 `@` is not included due to its different meaning in `Die`. 231 232 This is also used for the comparators 233 `<, <=, >, >=, ==, !=`. 234 235 In this case, the result also has a truth value based on lexicographic 236 ordering. 237 """ 238 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 239 return NotImplemented # delegate to the other 240 if isinstance(other, Vector): 241 if len(self) == len(other): 242 if compare_for_truth: 243 truth_value = cast(bool, op(self._data, other._data)) 244 else: 245 truth_value = None 246 return Vector( 247 (op(x, y, *args, **kwargs) for x, y in zip(self, other)), 248 truth_value=truth_value) 249 else: 250 raise IndexError( 251 f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).' 252 ) 253 else: 254 return Vector((op(x, other, *args, **kwargs) for x in self))
Binary operators on `Vector` are applied elementwise.

If the other operand is also a `Vector`, the operator is applied to each
pair of elements from `self` and `other`. Both must have the same
length.

Otherwise the other operand is broadcast to each element of `self`.

This is used for the standard binary operators
`+, -, *, /, //, %, **, <<, >>, &, |, ^`.

`@` is not included due to its different meaning in `Die`.

This is also used for the comparators
`<, <=, >, >=, ==, !=`.

In this case, the result also has a truth value based on lexicographic
ordering.
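The elementwise-with-broadcast rule can be sketched on plain tuples (`binary_op` is a hypothetical stand-in for `Vector.binary_operator`):

```python
import operator

def binary_op(a: tuple, b, op):
    """Apply op elementwise on tuples, broadcasting a non-tuple operand."""
    if isinstance(b, tuple):
        if len(a) != len(b):
            raise IndexError('operands must be the same length')
        return tuple(op(x, y) for x, y in zip(a, b))
    # Broadcast: combine the scalar with every element of a.
    return tuple(op(x, b) for x in a)
```

So `binary_op((1, 2), (3, 4), operator.add)` pairs elements, while `binary_op((1, 2), 10, operator.mul)` broadcasts the scalar.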
256 def reverse_binary_operator(self, other, op: Callable[..., U], *args, 257 **kwargs) -> 'Vector[U]': 258 """Reverse version of `binary_operator()`.""" 259 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 260 return NotImplemented # delegate to the other 261 if isinstance(other, Vector): 262 if len(self) == len(other): 263 return Vector( 264 op(y, x, *args, **kwargs) for x, y in zip(self, other)) 265 else: 266 raise IndexError( 267 f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).' 268 ) 269 else: 270 return Vector(op(other, x, *args, **kwargs) for x in self)
Reverse version of `binary_operator()`.
16class Symbols(Mapping[str, int]): 17 """EXPERIMENTAL: Immutable multiset of single characters. 18 19 Spaces, dashes, and underscores cannot be used as symbols. 20 21 Operations include: 22 23 | Operation | Count / notes | 24 |:----------------------------|:-----------------------------------| 25 | `additive_union`, `+` | `l + r` | 26 | `difference`, `-` | `l - r` | 27 | `intersection`, `&` | `min(l, r)` | 28 | `union`, `\\|` | `max(l, r)` | 29 | `symmetric_difference`, `^` | `abs(l - r)` | 30 | `multiply_counts`, `*` | `count * n` | 31 | `divide_counts`, `//` | `count // n` | 32 | `issubset`, `<=` | all counts l <= r | 33 | `issuperset`, `>=` | all counts l >= r | 34 | `==` | all counts l == r | 35 | `!=` | any count l != r | 36 | unary `+` | drop all negative counts | 37 | unary `-` | reverses the sign of all counts | 38 39 `<` and `>` are lexicographic orderings rather than subset relations. 40 Specifically, they compare the count of each character in alphabetical 41 order. For example: 42 * `'a' > ''` since one `'a'` is more than zero `'a'`s. 43 * `'a' > 'bb'` since `'a'` is compared first. 44 * `'-a' < 'bb'` since the left side has -1 `'a'`s. 45 * `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s. 46 47 Binary operators other than `*` and `//` implicitly convert the other 48 argument to `Symbols` using the constructor. 49 50 Subscripting with a single character returns the count of that character 51 as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`. 52 You can also access it as an attribute, e.g. `symbols.a`. 53 54 Subscripting with multiple characters returns a `Symbols` with only those 55 characters, dropping the rest. 56 E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`. 57 Again you can also access it as an attribute, e.g. `symbols.ab`. 58 This is useful for reducing the outcome space, which reduces computational 59 cost for further operations. 
If you want to keep only a single character 60 while keeping the type as `Symbols`, you can subscript with that character 61 plus an unused character. 62 63 Subscripting with duplicate characters currently has no further effect, but 64 this may change in the future. 65 66 `Population.marginals` forwards attribute access, so you can use e.g. 67 `die.marginals.a` to get the marginal distribution of `a`s. 68 69 Note that attribute access only works with valid identifiers, 70 so e.g. emojis would need to use the subscript method. 71 """ 72 _data: Mapping[str, int] 73 74 def __new__(cls, 75 symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols': 76 """Constructor. 77 78 The argument can be a string, an iterable of characters, or a mapping of 79 characters to counts. 80 81 If the argument is a string, negative symbols can be specified using a 82 minus sign optionally surrounded by whitespace. For example, 83 `a - b` has one positive a and one negative b. 84 """ 85 self = super(Symbols, cls).__new__(cls) 86 if isinstance(symbols, str): 87 data: MutableMapping[str, int] = defaultdict(int) 88 positive, *negative = re.split(r'\s*-\s*', symbols) 89 for s in positive: 90 data[s] += 1 91 if len(negative) > 1: 92 raise ValueError('Multiple dashes not allowed.') 93 if len(negative) == 1: 94 for s in negative[0]: 95 data[s] -= 1 96 elif isinstance(symbols, Mapping): 97 data = defaultdict(int, symbols) 98 else: 99 data = defaultdict(int) 100 for s in symbols: 101 data[s] += 1 102 103 for s in data: 104 if len(s) != 1: 105 raise ValueError(f'Symbol {s} is not a single character.') 106 if re.match(r'[\s_-]', s): 107 raise ValueError( 108 f'{s} (U+{ord(s):04X}) is not a legal symbol.') 109 110 self._data = defaultdict(int, 111 {k: data[k] 112 for k in sorted(data.keys())}) 113 114 return self 115 116 @classmethod 117 def _new_raw(cls, data: defaultdict[str, int]) -> 'Symbols': 118 self = super(Symbols, cls).__new__(cls) 119 self._data = data 120 return self 121 122 # Mapping 
interface. 123 124 def __getitem__(self, key: str) -> 'int | Symbols': # type: ignore 125 if len(key) == 1: 126 return self._data[key] 127 else: 128 return Symbols._new_raw( 129 defaultdict(int, {s: self._data[s] 130 for s in key})) 131 132 def __getattr__(self, key: str) -> 'int | Symbols': 133 if key[0] == '_': 134 raise AttributeError(key) 135 return self[key] 136 137 def __iter__(self) -> Iterator[str]: 138 return iter(self._data) 139 140 def __len__(self) -> int: 141 return len(self._data) 142 143 # Binary operators. 144 145 def additive_union(self, *args: 146 Iterable[str] | Mapping[str, int]) -> 'Symbols': 147 """The sum of counts of each symbol.""" 148 return functools.reduce(operator.add, args, initial=self) 149 150 def __add__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 151 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 152 return NotImplemented # delegate to the other 153 data = defaultdict(int, self._data) 154 for s, count in Symbols(other).items(): 155 data[s] += count 156 return Symbols._new_raw(data) 157 158 __radd__ = __add__ 159 160 def difference(self, *args: 161 Iterable[str] | Mapping[str, int]) -> 'Symbols': 162 """The difference between the counts of each symbol.""" 163 return functools.reduce(operator.sub, args, initial=self) 164 165 def __sub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 166 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 167 return NotImplemented # delegate to the other 168 data = defaultdict(int, self._data) 169 for s, count in Symbols(other).items(): 170 data[s] -= count 171 return Symbols._new_raw(data) 172 173 def __rsub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 174 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 175 return NotImplemented # delegate to the other 176 data = defaultdict(int, Symbols(other)._data) 177 for s, count in self.items(): 178 data[s] -= count 179 return Symbols._new_raw(data) 
180 181 def intersection(self, *args: 182 Iterable[str] | Mapping[str, int]) -> 'Symbols': 183 """The min count of each symbol.""" 184 return functools.reduce(operator.and_, args, initial=self) 185 186 def __and__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 187 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 188 return NotImplemented # delegate to the other 189 data: defaultdict[str, int] = defaultdict(int) 190 for s, count in Symbols(other).items(): 191 data[s] = min(self.get(s, 0), count) 192 return Symbols._new_raw(data) 193 194 __rand__ = __and__ 195 196 def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols': 197 """The max count of each symbol.""" 198 return functools.reduce(operator.or_, args, initial=self) 199 200 def __or__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 201 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 202 return NotImplemented # delegate to the other 203 data = defaultdict(int, self._data) 204 for s, count in Symbols(other).items(): 205 data[s] = max(data[s], count) 206 return Symbols._new_raw(data) 207 208 __ror__ = __or__ 209 210 def symmetric_difference( 211 self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 212 """The absolute difference in symbol counts between the two sets.""" 213 return self ^ other 214 215 def __xor__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 216 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 217 return NotImplemented # delegate to the other 218 data = defaultdict(int, self._data) 219 for s, count in Symbols(other).items(): 220 data[s] = abs(data[s] - count) 221 return Symbols._new_raw(data) 222 223 __rxor__ = __xor__ 224 225 def multiply_counts(self, other: int) -> 'Symbols': 226 """Multiplies all counts by an integer.""" 227 return self * other 228 229 def __mul__(self, other: int) -> 'Symbols': 230 if not isinstance(other, int): 231 return NotImplemented 232 data = 
defaultdict(int, { 233 s: count * other 234 for s, count in self.items() 235 }) 236 return Symbols._new_raw(data) 237 238 __rmul__ = __mul__ 239 240 def divide_counts(self, other: int) -> 'Symbols': 241 """Divides all counts by an integer, rounding down.""" 242 data = defaultdict(int, { 243 s: count // other 244 for s, count in self.items() 245 }) 246 return Symbols._new_raw(data) 247 248 def count_subset(self, 249 divisor: Iterable[str] | Mapping[str, int], 250 *, 251 empty_divisor: int | None = None) -> int: 252 """The number of times the divisor is contained in this multiset.""" 253 if not isinstance(divisor, Mapping): 254 divisor = Counter(divisor) 255 result = None 256 for s, count in divisor.items(): 257 current = self._data[s] // count 258 if result is None or current < result: 259 result = current 260 if result is None: 261 if empty_divisor is None: 262 raise ZeroDivisionError('Divisor is empty.') 263 else: 264 return empty_divisor 265 else: 266 return result 267 268 @overload 269 def __floordiv__(self, other: int) -> 'Symbols': 270 """Same as divide_counts().""" 271 272 @overload 273 def __floordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int: 274 """Same as count_subset().""" 275 276 @overload 277 def __floordiv__( 278 self, 279 other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int': 280 ... 
281 282 def __floordiv__( 283 self, 284 other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int': 285 if isinstance(other, int): 286 return self.divide_counts(other) 287 elif isinstance(other, Iterable): 288 return self.count_subset(other) 289 else: 290 return NotImplemented 291 292 def __rfloordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int: 293 return Symbols(other).count_subset(self) 294 295 def modulo_counts(self, other: int) -> 'Symbols': 296 return self % other 297 298 def __mod__(self, other: int) -> 'Symbols': 299 if not isinstance(other, int): 300 return NotImplemented 301 data = defaultdict(int, { 302 s: count % other 303 for s, count in self.items() 304 }) 305 return Symbols._new_raw(data) 306 307 def __lt__(self, other: 'Symbols') -> bool: 308 if not isinstance(other, Symbols): 309 return NotImplemented 310 keys = sorted(set(self.keys()) | set(other.keys())) 311 for k in keys: 312 if self[k] < other[k]: # type: ignore 313 return True 314 if self[k] > other[k]: # type: ignore 315 return False 316 return False 317 318 def __gt__(self, other: 'Symbols') -> bool: 319 if not isinstance(other, Symbols): 320 return NotImplemented 321 keys = sorted(set(self.keys()) | set(other.keys())) 322 for k in keys: 323 if self[k] > other[k]: # type: ignore 324 return True 325 if self[k] < other[k]: # type: ignore 326 return False 327 return False 328 329 def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool: 330 """Whether `self` is a subset of the other. 331 332 Same as `<=`. 333 334 Note that the `<` and `>` operators are lexicographic orderings, 335 not proper subset relations. 
336 """ 337 return self <= other 338 339 def __le__(self, other: Iterable[str] | Mapping[str, int]) -> bool: 340 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 341 return NotImplemented # delegate to the other 342 other = Symbols(other) 343 return all(self[s] <= other[s] # type: ignore 344 for s in itertools.chain(self, other)) 345 346 def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool: 347 """Whether `self` is a superset of the other. 348 349 Same as `>=`. 350 351 Note that the `<` and `>` operators are lexicographic orderings, 352 not proper subset relations. 353 """ 354 return self >= other 355 356 def __ge__(self, other: Iterable[str] | Mapping[str, int]) -> bool: 357 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 358 return NotImplemented # delegate to the other 359 other = Symbols(other) 360 return all(self[s] >= other[s] # type: ignore 361 for s in itertools.chain(self, other)) 362 363 def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool: 364 """Whether `self` has any positive elements in common with the other. 365 366 Raises: 367 ValueError if either has negative elements. 
368 """ 369 other = Symbols(other) 370 if self.has_negative_counts() or other.has_negative_counts(): 371 raise ValueError( 372 "isdisjoint() is not defined for negative counts.") 373 return any(self[s] > 0 and other[s] > 0 # type: ignore 374 for s in self) 375 376 def __eq__(self, other) -> bool: 377 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 378 return NotImplemented # delegate to the other 379 try: 380 other = Symbols(other) 381 except ValueError: 382 return NotImplemented 383 return all(self[s] == other[s] # type: ignore 384 for s in itertools.chain(self, other)) 385 386 def __ne__(self, other) -> bool: 387 if isinstance(other, (icepool.Population, icepool.AgainExpression)): 388 return NotImplemented # delegate to the other 389 try: 390 other = Symbols(other) 391 except ValueError: 392 return NotImplemented 393 return any(self[s] != other[s] # type: ignore 394 for s in itertools.chain(self, other)) 395 396 # Unary operators. 397 398 def has_negative_counts(self) -> bool: 399 """Whether any counts are negative.""" 400 return any(c < 0 for c in self.values()) 401 402 def __pos__(self) -> 'Symbols': 403 data = defaultdict(int, { 404 s: count 405 for s, count in self.items() if count > 0 406 }) 407 return Symbols._new_raw(data) 408 409 def __neg__(self) -> 'Symbols': 410 data = defaultdict(int, {s: -count for s, count in self.items()}) 411 return Symbols._new_raw(data) 412 413 @cached_property 414 def _hash(self) -> int: 415 return hash((Symbols, str(self))) 416 417 def __hash__(self) -> int: 418 return self._hash 419 420 def count(self) -> int: 421 """The total number of elements.""" 422 return sum(self._data.values()) 423 424 @cached_property 425 def _str(self) -> str: 426 sorted_keys = sorted(self) 427 positive = ''.join(s * self._data[s] for s in sorted_keys 428 if self._data[s] > 0) 429 negative = ''.join(s * -self._data[s] for s in sorted_keys 430 if self._data[s] < 0) 431 if positive: 432 if negative: 433 return positive + ' - ' + 
negative 434 else: 435 return positive 436 else: 437 if negative: 438 return '-' + negative 439 else: 440 return '' 441 442 def __str__(self) -> str: 443 """All symbols in unary form (i.e. including duplicates) in ascending order. 444 445 If there are negative elements, they are listed following a ` - ` sign. 446 """ 447 return self._str 448 449 def __repr__(self) -> str: 450 return type(self).__qualname__ + f"('{str(self)}')"
EXPERIMENTAL: Immutable multiset of single characters.
Spaces, dashes, and underscores cannot be used as symbols.
Operations include:

| Operation | Count / notes |
|:--|:--|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `issubset`, `<=` | all counts `l <= r` |
| `issuperset`, `>=` | all counts `l >= r` |
| `==` | all counts `l == r` |
| `!=` | any count `l != r` |
| unary `+` | drop all negative counts |
| unary `-` | reverse the sign of all counts |
`<` and `>` are lexicographic orderings rather than subset relations.
Specifically, they compare the count of each character in alphabetical
order. For example:

* `'a' > ''` since one `'a'` is more than zero `'a'`s.
* `'a' > 'bb'` since `'a'` is compared first.
* `'-a' < 'bb'` since the left side has -1 `'a'`s.
* `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.
Binary operators other than `*` and `//` implicitly convert the other
argument to `Symbols` using the constructor.

Subscripting with a single character returns the count of that character
as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`.
You can also access it as an attribute, e.g. `symbols.a`.
Subscripting with multiple characters returns a `Symbols` with only those
characters, dropping the rest. E.g. `symbols['ab']` -> number of `a`s and
`b`s as a `Symbols`. Again you can also access it as an attribute, e.g.
`symbols.ab`. This is useful for reducing the outcome space, which reduces
computational cost for further operations. If you want to keep only a
single character while keeping the type as `Symbols`, you can subscript
with that character plus an unused character.

Subscripting with duplicate characters currently has no further effect,
but this may change in the future.

`Population.marginals` forwards attribute access, so you can use e.g.
`die.marginals.a` to get the marginal distribution of `a`s.

Note that attribute access only works with valid identifiers, so e.g.
emojis would need to use the subscript method.
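As a rough sketch of the per-character counting semantics in the table above (deliberately not using icepool itself), a `collections.Counter` of characters can stand in for `Symbols`:

```python
from collections import Counter

# Hypothetical stand-in: a Counter of characters plays the role of Symbols.
l = Counter('aab')   # a: 2, b: 1
r = Counter('abc')   # a: 1, b: 1, c: 1

symbols = sorted(l.keys() | r.keys())
additive = {s: l[s] + r[s] for s in symbols}          # additive_union, +
intersection = {s: min(l[s], r[s]) for s in symbols}  # intersection, &
union = {s: max(l[s], r[s]) for s in symbols}         # union, |
symmetric = {s: abs(l[s] - r[s]) for s in symbols}    # symmetric_difference, ^
issubset = all(l[s] <= r[s] for s in symbols)         # issubset, <=

print(additive)      # {'a': 3, 'b': 2, 'c': 1}
print(intersection)  # {'a': 1, 'b': 1, 'c': 0}
print(issubset)      # False (l has two 'a's, r has one)
```

The real class additionally supports negative counts and the string constructor syntax described below.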
74 def __new__(cls, 75 symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols': 76 """Constructor. 77 78 The argument can be a string, an iterable of characters, or a mapping of 79 characters to counts. 80 81 If the argument is a string, negative symbols can be specified using a 82 minus sign optionally surrounded by whitespace. For example, 83 `a - b` has one positive a and one negative b. 84 """ 85 self = super(Symbols, cls).__new__(cls) 86 if isinstance(symbols, str): 87 data: MutableMapping[str, int] = defaultdict(int) 88 positive, *negative = re.split(r'\s*-\s*', symbols) 89 for s in positive: 90 data[s] += 1 91 if len(negative) > 1: 92 raise ValueError('Multiple dashes not allowed.') 93 if len(negative) == 1: 94 for s in negative[0]: 95 data[s] -= 1 96 elif isinstance(symbols, Mapping): 97 data = defaultdict(int, symbols) 98 else: 99 data = defaultdict(int) 100 for s in symbols: 101 data[s] += 1 102 103 for s in data: 104 if len(s) != 1: 105 raise ValueError(f'Symbol {s} is not a single character.') 106 if re.match(r'[\s_-]', s): 107 raise ValueError( 108 f'{s} (U+{ord(s):04X}) is not a legal symbol.') 109 110 self._data = defaultdict(int, 111 {k: data[k] 112 for k in sorted(data.keys())}) 113 114 return self
Constructor.
The argument can be a string, an iterable of characters, or a mapping of characters to counts.
If the argument is a string, negative symbols can be specified using a
minus sign optionally surrounded by whitespace. For example, `a - b` has
one positive `a` and one negative `b`.
145 def additive_union(self, *args: 146 Iterable[str] | Mapping[str, int]) -> 'Symbols': 147 """The sum of counts of each symbol.""" 148 return functools.reduce(operator.add, args, self)
The sum of counts of each symbol.
160 def difference(self, *args: 161 Iterable[str] | Mapping[str, int]) -> 'Symbols': 162 """The difference between the counts of each symbol.""" 163 return functools.reduce(operator.sub, args, self)
The difference between the counts of each symbol.
181 def intersection(self, *args: 182 Iterable[str] | Mapping[str, int]) -> 'Symbols': 183 """The min count of each symbol.""" 184 return functools.reduce(operator.and_, args, self)
The min count of each symbol.
196 def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols': 197 """The max count of each symbol.""" 198 return functools.reduce(operator.or_, args, self)
The max count of each symbol.
210 def symmetric_difference( 211 self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols': 212 """The absolute difference in symbol counts between the two sets.""" 213 return self ^ other
The absolute difference in symbol counts between the two sets.
225 def multiply_counts(self, other: int) -> 'Symbols': 226 """Multiplies all counts by an integer.""" 227 return self * other
Multiplies all counts by an integer.
240 def divide_counts(self, other: int) -> 'Symbols': 241 """Divides all counts by an integer, rounding down.""" 242 data = defaultdict(int, { 243 s: count // other 244 for s, count in self.items() 245 }) 246 return Symbols._new_raw(data)
Divides all counts by an integer, rounding down.
248 def count_subset(self, 249 divisor: Iterable[str] | Mapping[str, int], 250 *, 251 empty_divisor: int | None = None) -> int: 252 """The number of times the divisor is contained in this multiset.""" 253 if not isinstance(divisor, Mapping): 254 divisor = Counter(divisor) 255 result = None 256 for s, count in divisor.items(): 257 current = self._data[s] // count 258 if result is None or current < result: 259 result = current 260 if result is None: 261 if empty_divisor is None: 262 raise ZeroDivisionError('Divisor is empty.') 263 else: 264 return empty_divisor 265 else: 266 return result
The number of times the divisor is contained in this multiset.
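The `count_subset` logic shown in the source above reduces to a floor-divide per symbol followed by a minimum. A stdlib sketch of that arithmetic (plain strings and `Counter`, not the actual `Symbols` method):

```python
from collections import Counter

def count_subset(multiset: str, divisor: str) -> int:
    """How many whole copies of `divisor` fit in `multiset`.
    Sketch of Symbols.count_subset's arithmetic, not the real implementation."""
    m, d = Counter(multiset), Counter(divisor)
    if not d:
        raise ZeroDivisionError('Divisor is empty.')
    # For each required symbol, how many copies does the multiset cover?
    return min(m[s] // count for s, count in d.items())

print(count_subset('aaabb', 'ab'))   # 2: two whole 'ab's fit in aaabb
print(count_subset('aaabb', 'abb'))  # 1: limited by the two 'b's
```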
329 def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool: 330 """Whether `self` is a subset of the other. 331 332 Same as `<=`. 333 334 Note that the `<` and `>` operators are lexicographic orderings, 335 not proper subset relations. 336 """ 337 return self <= other
Whether `self` is a subset of the other.

Same as `<=`.

Note that the `<` and `>` operators are lexicographic orderings,
not proper subset relations.
346 def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool: 347 """Whether `self` is a superset of the other. 348 349 Same as `>=`. 350 351 Note that the `<` and `>` operators are lexicographic orderings, 352 not proper subset relations. 353 """ 354 return self >= other
Whether `self` is a superset of the other.

Same as `>=`.

Note that the `<` and `>` operators are lexicographic orderings,
not proper subset relations.
363 def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool: 364 """Whether `self` has any positive elements in common with the other. 365 366 Raises: 367 ValueError if either has negative elements. 368 """ 369 other = Symbols(other) 370 if self.has_negative_counts() or other.has_negative_counts(): 371 raise ValueError( 372 "isdisjoint() is not defined for negative counts.") 373 return any(self[s] > 0 and other[s] > 0 # type: ignore 374 for s in self)
Whether `self` has any positive elements in common with the other.

Raises:

- ValueError if either has negative elements.
398 def has_negative_counts(self) -> bool: 399 """Whether any counts are negative.""" 400 return any(c < 0 for c in self.values())
Whether any counts are negative.
420 def count(self) -> int: 421 """The total number of elements.""" 422 return sum(self._data.values())
The total number of elements.
Inherited Members
- collections.abc.Mapping
- get
- keys
- items
- values
A symbol indicating that the die should be rolled again, usually with some
operation applied.

This is designed to be used with the `Die()` constructor.
`AgainExpression`s should not be fed to functions or methods other than
`Die()`, but they can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
affect how these arguments are processed. At most one of `again_count` or
`again_depth` may be provided; if neither are provided, the behavior is as
`again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.
Count mode

When `again_count` is provided, we start with one roll queued and execute
one roll at a time. For every `Again` we roll, we queue another roll.
If we run out of rolls, we sum the rolls to find the result. If the total
number of rolls (not including the initial roll) would exceed
`again_count`, we reroll the entire process, effectively conditioning the
process on not rolling more than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`,
which means that only the following operators are allowed:

* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and
commutative. For example, `str` or `tuple` outcomes will not produce
elements with a definite order.
Depth mode

When `again_depth=0`, `again_end` is directly substituted for each
occurrence of `Again`. For other values of `again_depth`, the result for
`again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final
depth are rerolled.
Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any
such rerolls do not count against the `again_count` or `again_depth`
limit.

If `again_end=icepool.Reroll`:

* Count mode: Any result that would cause the number of rolls to exceed
  `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are
  rerolled.
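Depth mode can be sketched by repeatedly substituting the previous depth's distribution for `Again`. A hedged, stdlib-only illustration for a hypothetical "exploding" d3, i.e. `Die([1, 2, 3 + Again])` with `again_end=0` (not icepool's actual implementation):

```python
from fractions import Fraction

def exploding_d3(again_depth: int) -> dict:
    """Distribution of Die([1, 2, 3 + Again]) under depth mode, sketched.
    At depth 0, again_end (here 0) is substituted directly for Again;
    at depth d, the result for depth d-1 is substituted instead."""
    if again_depth == 0:
        inner = {0: Fraction(1)}
    else:
        inner = exploding_d3(again_depth - 1)
    dist = {1: Fraction(1, 3), 2: Fraction(1, 3)}
    for extra, p in inner.items():
        dist[3 + extra] = dist.get(3 + extra, Fraction(0)) + Fraction(1, 3) * p
    return dist

# Depth 1: outcomes 1, 2, 4, 5, 6 with probabilities 1/3, 1/3, 1/9, 1/9, 1/9.
print(exploding_d3(1))
```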
144class CountsKeysView(KeysView[T], Sequence[T]): 145 """This functions as both a `KeysView` and a `Sequence`.""" 146 147 def __init__(self, counts: Counts[T]): 148 self._mapping = counts 149 150 def __getitem__(self, index): 151 return self._mapping._keys[index] 152 153 def __len__(self) -> int: 154 return len(self._mapping) 155 156 def __eq__(self, other): 157 return self._mapping._keys == other
This functions as both a `KeysView` and a `Sequence`.
Inherited Members
- collections.abc.Set
- isdisjoint
- collections.abc.Sequence
- index
- count
160class CountsValuesView(ValuesView[int], Sequence[int]): 161 """This functions as both a `ValuesView` and a `Sequence`.""" 162 163 def __init__(self, counts: Counts): 164 self._mapping = counts 165 166 def __getitem__(self, index): 167 return self._mapping._values[index] 168 169 def __len__(self) -> int: 170 return len(self._mapping) 171 172 def __eq__(self, other): 173 return self._mapping._values == other
This functions as both a `ValuesView` and a `Sequence`.
Inherited Members
- collections.abc.Sequence
- index
- count
176class CountsItemsView(ItemsView[T, int], Sequence[tuple[T, int]]): 177 """This functions as both an `ItemsView` and a `Sequence`.""" 178 179 def __init__(self, counts: Counts): 180 self._mapping = counts 181 182 def __getitem__(self, index): 183 return self._mapping._items[index] 184 185 def __eq__(self, other): 186 return self._mapping._items == other
This functions as both an `ItemsView` and a `Sequence`.
Inherited Members
- collections.abc.Set
- isdisjoint
- collections.abc.Sequence
- index
- count
144def from_cumulative(outcomes: Sequence[T], 145 cumulative: 'Sequence[int] | Sequence[icepool.Die[bool]]', 146 *, 147 reverse: bool = False) -> 'icepool.Die[T]': 148 """Constructs a `Die` from a sequence of cumulative values. 149 150 Args: 151 outcomes: The outcomes of the resulting die. Sorted order is recommended 152 but not necessary. 153 cumulative: The cumulative values (inclusive) of the outcomes in the 154 order they are given to this function. These may be: 155 * `int` cumulative quantities. 156 * Dice representing the cumulative distribution at that point. 157 reverse: Iff true, both of the arguments will be reversed. This allows 158 e.g. constructing using a survival distribution. 159 """ 160 if len(outcomes) == 0: 161 return icepool.Die({}) 162 163 if reverse: 164 outcomes = list(reversed(outcomes)) 165 cumulative = list(reversed(cumulative)) # type: ignore 166 167 prev = 0 168 d = {} 169 170 if isinstance(cumulative[0], icepool.Die): 171 cumulative = commonize_denominator(*cumulative) 172 for outcome, die in zip(outcomes, cumulative): 173 d[outcome] = die.quantity_ne(False) - prev 174 prev = die.quantity_ne(False) 175 elif isinstance(cumulative[0], int): 176 cumulative = cast(Sequence[int], cumulative) 177 for outcome, quantity in zip(outcomes, cumulative): 178 d[outcome] = quantity - prev 179 prev = quantity 180 else: 181 raise TypeError( 182 f'Unsupported type {type(cumulative)} for cumulative values.') 183 184 return icepool.Die(d)
Constructs a `Die` from a sequence of cumulative values.

Arguments:

- outcomes: The outcomes of the resulting die. Sorted order is recommended
  but not necessary.
- cumulative: The cumulative values (inclusive) of the outcomes in the
  order they are given to this function. These may be:
  - `int` cumulative quantities.
  - Dice representing the cumulative distribution at that point.
- reverse: Iff true, both of the arguments will be reversed. This allows
  e.g. constructing using a survival distribution.
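The `int` branch shown in the source amounts to taking successive differences of the cumulative quantities. A small stdlib sketch of that arithmetic (not the icepool function itself):

```python
def quantities_from_cumulative(cumulative: list) -> list:
    """Successive differences: quantity[i] = cumulative[i] - cumulative[i-1].
    Sketch of the int branch of from_cumulative."""
    quantities = []
    prev = 0
    for c in cumulative:
        quantities.append(c - prev)
        prev = c
    return quantities

# Inclusive cumulative quantities for a fair d4:
print(quantities_from_cumulative([1, 2, 3, 4]))  # [1, 1, 1, 1]
print(quantities_from_cumulative([1, 3, 6]))     # [1, 2, 3]
```

With `reverse=True`, the same differencing is applied after reversing both sequences, which is why a survival distribution works as input.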
199def from_rv(rv, outcomes: Sequence[int] | Sequence[float], denominator: int, 200 **kwargs) -> 'icepool.Die[int] | icepool.Die[float]': 201 """Constructs a `Die` from a rv object (as `scipy.stats`). 202 Args: 203 rv: A rv object (as `scipy.stats`). 204 outcomes: An iterable of `int`s or `float`s that will be the outcomes 205 of the resulting `Die`. 206 If the distribution is discrete, outcomes must be `int`s. 207 denominator: The denominator of the resulting `Die` will be set to this. 208 **kwargs: These will be forwarded to `rv.cdf()`. 209 """ 210 if hasattr(rv, 'pdf'): 211 # Continuous distributions use midpoints. 212 midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])] 213 cdf = rv.cdf(midpoints, **kwargs) 214 quantities_le = tuple(int(round(x * denominator)) 215 for x in cdf) + (denominator, ) 216 else: 217 cdf = rv.cdf(outcomes, **kwargs) 218 quantities_le = tuple(int(round(x * denominator)) for x in cdf) 219 return from_cumulative(outcomes, quantities_le)
Constructs a `Die` from an rv object (as `scipy.stats`).

Arguments:

- rv: An rv object (as `scipy.stats`).
- outcomes: An iterable of `int`s or `float`s that will be the outcomes of
  the resulting `Die`. If the distribution is discrete, outcomes must be
  `int`s.
- denominator: The denominator of the resulting `Die` will be set to this.
- **kwargs: These will be forwarded to `rv.cdf()`.
99def lowest(arg0, 100 /, 101 *more_args: 'T | icepool.Die[T]', 102 keep: int | None = None, 103 drop: int | None = None, 104 default: T | None = None) -> 'icepool.Die[T]': 105 """The lowest outcome among the rolls, or the sum of some of the lowest. 106 107 The outcomes should support addition and multiplication if `keep != 1`. 108 109 Args: 110 args: Dice or individual outcomes in a single iterable, or as two or 111 more separate arguments. Similar to the built-in `min()`. 112 keep, drop: These arguments work together: 113 * If neither are provided, the single lowest die will be taken. 114 * If only `keep` is provided, the `keep` lowest dice will be summed. 115 * If only `drop` is provided, the `drop` lowest dice will be dropped 116 and the rest will be summed. 117 * If both are provided, `drop` lowest dice will be dropped, then 118 the next `keep` lowest dice will be summed. 119 default: If an empty iterable is provided, the result will be a die that 120 always rolls this value. 121 122 Raises: 123 ValueError if an empty iterable is provided with no `default`. 124 """ 125 if len(more_args) == 0: 126 args = arg0 127 else: 128 args = (arg0, ) + more_args 129 130 if len(args) == 0: 131 if default is None: 132 raise ValueError( 133 "lowest() arg is an empty sequence and no default was provided." 134 ) 135 else: 136 return icepool.Die([default]) 137 138 index_slice = lowest_slice(keep, drop) 139 return _sum_slice(*args, index_slice=index_slice)
The lowest outcome among the rolls, or the sum of some of the lowest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:

- args: Dice or individual outcomes in a single iterable, or as two or
  more separate arguments. Similar to the built-in `min()`.
- keep, drop: These arguments work together:
  - If neither are provided, the single lowest die will be taken.
  - If only `keep` is provided, the `keep` lowest dice will be summed.
  - If only `drop` is provided, the `drop` lowest dice will be dropped
    and the rest will be summed.
  - If both are provided, `drop` lowest dice will be dropped, then the
    next `keep` lowest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that
  always rolls this value.

Raises:

- ValueError if an empty iterable is provided with no `default`.
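The keep/drop behavior above can be checked by brute-force enumeration. A stdlib sketch (icepool's actual implementation is slice-based and far more efficient):

```python
from collections import Counter
from itertools import product

def lowest_sum(num_dice: int, sides: int, keep: int = 1, drop: int = 0) -> Counter:
    """Distribution of the sum of the `keep` lowest dice after dropping the
    `drop` lowest, by enumerating every roll. Brute-force sketch only."""
    dist = Counter()
    for roll in product(range(1, sides + 1), repeat=num_dice):
        kept = sorted(roll)[drop:drop + keep]
        dist[sum(kept)] += 1
    return dist

# Lowest of 2d6: 11 of the 36 rolls have lowest == 1.
print(lowest_sum(2, 6)[1])  # 11
# Drop the single lowest of 3d6 and sum the remaining two:
print(sum(lowest_sum(3, 6, keep=2, drop=1).values()))  # 216
```

The same enumeration with `sorted(roll, reverse=True)` sketches `highest()`.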
153def highest(arg0, 154 /, 155 *more_args: 'T | icepool.Die[T]', 156 keep: int | None = None, 157 drop: int | None = None, 158 default: T | None = None) -> 'icepool.Die[T]': 159 """The highest outcome among the rolls, or the sum of some of the highest. 160 161 The outcomes should support addition and multiplication if `keep != 1`. 162 163 Args: 164 args: Dice or individual outcomes in a single iterable, or as two or 165 more separate arguments. Similar to the built-in `max()`. 166 keep, drop: These arguments work together: 167 * If neither are provided, the single highest die will be taken. 168 * If only `keep` is provided, the `keep` highest dice will be summed. 169 * If only `drop` is provided, the `drop` highest dice will be dropped 170 and the rest will be summed. 171 * If both are provided, `drop` highest dice will be dropped, then 172 the next `keep` highest dice will be summed. 173 drop: This number of highest dice will be dropped before keeping dice 174 to be summed. 175 default: If an empty iterable is provided, the result will be a die that 176 always rolls this value. 177 178 Raises: 179 ValueError if an empty iterable is provided with no `default`. 180 """ 181 if len(more_args) == 0: 182 args = arg0 183 else: 184 args = (arg0, ) + more_args 185 186 if len(args) == 0: 187 if default is None: 188 raise ValueError( 189 "highest() arg is an empty sequence and no default was provided." 190 ) 191 else: 192 return icepool.Die([default]) 193 194 index_slice = highest_slice(keep, drop) 195 return _sum_slice(*args, index_slice=index_slice)
The highest outcome among the rolls, or the sum of some of the highest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:

- args: Dice or individual outcomes in a single iterable, or as two or
  more separate arguments. Similar to the built-in `max()`.
- keep, drop: These arguments work together:
  - If neither are provided, the single highest die will be taken.
  - If only `keep` is provided, the `keep` highest dice will be summed.
  - If only `drop` is provided, the `drop` highest dice will be dropped
    and the rest will be summed.
  - If both are provided, `drop` highest dice will be dropped, then the
    next `keep` highest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that
  always rolls this value.

Raises:

- ValueError if an empty iterable is provided with no `default`.
209def middle(arg0, 210 /, 211 *more_args: 'T | icepool.Die[T]', 212 keep: int = 1, 213 tie: Literal['error', 'high', 'low'] = 'error', 214 default: T | None = None) -> 'icepool.Die[T]': 215 """The middle of the outcomes among the rolls, or the sum of some of the middle. 216 217 The outcomes should support addition and multiplication if `keep != 1`. 218 219 Args: 220 args: Dice or individual outcomes in a single iterable, or as two or 221 more separate arguments. 222 keep: The number of outcomes to sum. 223 tie: What to do if `keep` is odd but the the number of args is even, or 224 vice versa. 225 * 'error' (default): Raises `IndexError`. 226 * 'high': The higher outcome is taken. 227 * 'low': The lower outcome is taken. 228 default: If an empty iterable is provided, the result will be a die that 229 always rolls this value. 230 231 Raises: 232 ValueError if an empty iterable is provided with no `default`. 233 """ 234 if len(more_args) == 0: 235 args = arg0 236 else: 237 args = (arg0, ) + more_args 238 239 if len(args) == 0: 240 if default is None: 241 raise ValueError( 242 "middle() arg is an empty sequence and no default was provided." 243 ) 244 else: 245 return icepool.Die([default]) 246 247 # Expression evaluators are difficult to type. 248 return icepool.Pool(args).middle(keep, tie=tie).sum() # type: ignore
The middle of the outcomes among the rolls, or the sum of some of the middle.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:

- args: Dice or individual outcomes in a single iterable, or as two or
  more separate arguments.
- keep: The number of outcomes to sum.
- tie: What to do if `keep` is odd but the number of args is even, or
  vice versa.
  - 'error' (default): Raises `IndexError`.
  - 'high': The higher outcome is taken.
  - 'low': The lower outcome is taken.
- default: If an empty iterable is provided, the result will be a die that
  always rolls this value.

Raises:

- ValueError if an empty iterable is provided with no `default`.
222def min_outcome(*dice: 'T | icepool.Die[T]') -> T: 223 """The minimum outcome among the dice. """ 224 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 225 return min(die.outcomes()[0] for die in converted_dice)
The minimum outcome among the dice.
228def max_outcome(*dice: 'T | icepool.Die[T]') -> T: 229 """The maximum outcome among the dice. """ 230 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 231 return max(die.outcomes()[-1] for die in converted_dice)
The maximum outcome among the dice.
234def align(*dice: 'T | icepool.Die[T]') -> tuple['icepool.Die[T]', ...]: 235 """Pads dice with zero quantities so that all have the same set of outcomes. 236 237 Args: 238 *dice: Any number of dice or single outcomes convertible to dice. 239 240 Returns: 241 A tuple of aligned dice. 242 """ 243 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 244 outcomes = set( 245 itertools.chain.from_iterable(die.outcomes() 246 for die in converted_dice)) 247 return tuple(die.set_outcomes(outcomes) for die in converted_dice)
Pads dice with zero quantities so that all have the same set of outcomes.
Arguments:
- *dice: Any number of dice or single outcomes convertible to dice.
Returns:
A tuple of aligned dice.
250def align_range( 251 *dice: 'int | icepool.Die[int]') -> tuple['icepool.Die[int]', ...]: 252 """Pads dice with zero quantities so that all have the same set of consecutive `int` outcomes. 253 254 Args: 255 *dice: Any number of dice or single outcomes convertible to dice. 256 257 Returns: 258 A tuple of aligned dice. 259 """ 260 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 261 outcomes = range(icepool.min_outcome(*converted_dice), 262 icepool.max_outcome(*converted_dice) + 1) 263 return tuple(die.set_outcomes(outcomes) for die in converted_dice)
Pads dice with zero quantities so that all have the same set of
consecutive `int` outcomes.
Arguments:
- *dice: Any number of dice or single outcomes convertible to dice.
Returns:
A tuple of aligned dice.
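The padding step can be sketched over plain quantity mappings: find the common consecutive range, then fill missing outcomes with zero. A stdlib stand-in for `align_range` (not the icepool implementation):

```python
def align_range_sketch(*dists: dict) -> tuple:
    """Pad int-outcome quantity mappings with zeros over a shared
    consecutive range. Plain-dict sketch of align_range."""
    lo = min(min(d) for d in dists)
    hi = max(max(d) for d in dists)
    return tuple({x: d.get(x, 0) for x in range(lo, hi + 1)} for d in dists)

d4 = {x: 1 for x in range(1, 5)}
d6 = {x: 1 for x in range(1, 7)}
a, b = align_range_sketch(d4, d6)
print(a)  # {1: 1, 2: 1, 3: 1, 4: 1, 5: 0, 6: 0}
```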
266def commonize_denominator( 267 *dice: 'T | icepool.Die[T]') -> tuple['icepool.Die[T]', ...]: 268 """Scale the weights of the dice so that all of them have the same denominator. 269 270 Args: 271 *dice: Any number of dice or single outcomes convertible to dice. 272 273 Returns: 274 A tuple of dice with the same denominator. 275 """ 276 converted_dice = [ 277 icepool.implicit_convert_to_die(die).simplify() for die in dice 278 ] 279 denominator_lcm = math.lcm(*(die.denominator() for die in converted_dice 280 if die.denominator() > 0)) 281 return tuple( 282 die.scale_quantities(denominator_lcm // 283 die.denominator() if die.denominator() > 0 else 1) 284 for die in converted_dice)
Scale the weights of the dice so that all of them have the same denominator.
Arguments:
- *dice: Any number of dice or single outcomes convertible to dice.
Returns:
A tuple of dice with the same denominator.
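The scaling step is an lcm computation over the denominators, as in the source above. A plain-dict sketch of that arithmetic (not the icepool function; it also skips the `simplify()` step and zero-denominator handling shown in the source):

```python
import math

def commonize_sketch(*dists: dict) -> tuple:
    """Scale quantity mappings so that all share the lcm of the
    denominators. Plain-dict sketch of commonize_denominator."""
    denominators = [sum(d.values()) for d in dists]
    lcm = math.lcm(*(n for n in denominators if n > 0))
    return tuple({x: q * (lcm // n) for x, q in d.items()}
                 for d, n in zip(dists, denominators))

coin = {0: 1, 1: 1}        # denominator 2
d3 = {1: 1, 2: 1, 3: 1}    # denominator 3
a, b = commonize_sketch(coin, d3)
print(a, b)  # {0: 3, 1: 3} {1: 2, 2: 2, 3: 2}
```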
287def reduce( 288 function: 'Callable[[T, T], T | icepool.Die[T] | icepool.RerollType]', 289 dice: 'Iterable[T | icepool.Die[T]]', 290 *, 291 initial: 'T | icepool.Die[T] | None' = None) -> 'icepool.Die[T]': 292 """Applies a function of two arguments cumulatively to a sequence of dice. 293 294 Analogous to the 295 [`functools` function of the same name.](https://docs.python.org/3/library/functools.html#functools.reduce) 296 297 Args: 298 function: The function to map. The function should take two arguments, 299 which are an outcome from each of two dice, and produce an outcome 300 of the same type. It may also return `Reroll`, in which case the 301 entire sequence is effectively rerolled. 302 dice: A sequence of dice to map the function to, from left to right. 303 initial: If provided, this will be placed at the front of the sequence 304 of dice. 305 again_count, again_depth, again_end: Forwarded to the final die constructor. 306 """ 307 # Conversion to dice is not necessary since map() takes care of that. 308 iter_dice = iter(dice) 309 if initial is not None: 310 result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial) 311 else: 312 result = icepool.implicit_convert_to_die(next(iter_dice)) 313 for die in iter_dice: 314 result = map(function, result, die) 315 return result
Applies a function of two arguments cumulatively to a sequence of dice.

Analogous to the
[`functools` function of the same name](https://docs.python.org/3/library/functools.html#functools.reduce).

Arguments:

- function: The function to map. The function should take two arguments,
  which are an outcome from each of two dice, and produce an outcome of
  the same type. It may also return `Reroll`, in which case the entire
  sequence is effectively rerolled.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence
  of dice.
- again_count, again_depth, again_end: Forwarded to the final die
  constructor.
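The left-fold shape described above can be illustrated with `functools.reduce` over plain quantity mappings, using a joint-outcome convolution in place of icepool's `map` (a hedged stand-in, not the library's implementation):

```python
import functools
from collections import Counter

def add_dists(a: Counter, b: Counter) -> Counter:
    """Joint-outcome sum of two quantity mappings; a stand-in for
    map(operator.add, a, b) over two dice."""
    out = Counter()
    for x, p in a.items():
        for y, q in b.items():
            out[x + y] += p * q
    return out

d4 = Counter({x: 1 for x in range(1, 5)})
# Left fold over a sequence of dice, like reduce(operator.add, [d4, d4, d4]):
total = functools.reduce(add_dists, [d4, d4, d4])
print(total[3], total[12], sum(total.values()))  # 1 1 64
```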
318def accumulate( 319 function: 'Callable[[T, T], T | icepool.Die[T]]', 320 dice: 'Iterable[T | icepool.Die[T]]', 321 *, 322 initial: 'T | icepool.Die[T] | None' = None 323) -> Iterator['icepool.Die[T]']: 324 """Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn. 325 326 Analogous to the 327 [`itertools function of the same name`](https://docs.python.org/3/library/itertools.html#itertools.accumulate) 328 , though with no default function and 329 the same parameter order as `reduce()`. 330 331 The number of results is equal to the number of elements of `dice`, with 332 one additional element if `initial` is provided. 333 334 Args: 335 function: The function to map. The function should take two arguments, 336 which are an outcome from each of two dice. 337 dice: A sequence of dice to map the function to, from left to right. 338 initial: If provided, this will be placed at the front of the sequence 339 of dice. 340 """ 341 # Conversion to dice is not necessary since map() takes care of that. 342 iter_dice = iter(dice) 343 if initial is not None: 344 result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial) 345 else: 346 try: 347 result = icepool.implicit_convert_to_die(next(iter_dice)) 348 except StopIteration: 349 return 350 yield result 351 for die in iter_dice: 352 result = map(function, result, die) 353 yield result
Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.
Analogous to the
[`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate),
though with no default function and the same parameter order as `reduce()`.

The number of results is equal to the number of elements of `dice`, with
one additional element if `initial` is provided.
Arguments:
- function: The function to map. The function should take two arguments, which are an outcome from each of two dice.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence of dice.
378def map( 379 repl: 380 'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]', 381 /, 382 *args: 'Outcome | icepool.Die | icepool.MultisetExpression', 383 star: bool | None = None, 384 repeat: int | None = 1, 385 again_count: int | None = None, 386 again_depth: int | None = None, 387 again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None 388) -> 'icepool.Die[T]': 389 """Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die. 390 391 See `map_function` for a decorator version of this. 392 393 Example: `map(lambda a, b: a + b, d6, d6)` is the same as d6 + d6. 394 395 `map()` is flexible but not very efficient for more than a few dice. 396 If at all possible, use `reduce()`, `MultisetExpression` methods, and/or 397 `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more 398 efficient than using `map` on the dice in order. 399 400 `Again` can be used but is not recommended with `repeat` other than 1. 401 402 Args: 403 repl: One of the following: 404 * A callable that takes in one outcome per element of args and 405 produces a new outcome. 406 * A mapping from old outcomes to new outcomes. 407 Unmapped old outcomes stay the same. 408 In this case args must have exactly one element. 409 As with the `Die` constructor, the new outcomes: 410 * May be dice rather than just single outcomes. 411 * The special value `icepool.Reroll` will reroll that old outcome. 412 * `tuples` containing `Population`s will be `tupleize`d into 413 `Population`s of `tuple`s. 414 This does not apply to subclasses of `tuple`s such as `namedtuple` 415 or other classes such as `Vector`. 416 *args: `func` will be called with all joint outcomes of these. 417 Allowed arg types are: 418 * Single outcome. 419 * `Die`. All outcomes will be sent to `func`. 420 * `MultisetExpression`. 
All sorted tuples of outcomes will be sent 421 to `func`, as `MultisetExpression.expand()`. The expression must 422 be fully bound. 423 star: If `True`, the first of the args will be unpacked before giving 424 them to `func`. 425 If not provided, it will be guessed based on the signature of `func` 426 and the number of arguments. 427 repeat: This will be repeated with the same arguments on the 428 result this many times, except the first of `args` will be replaced 429 by the result of the previous iteration. 430 431 Note that returning `Reroll` from `repl` will effectively reroll all 432 arguments, including the first argument which represents the result 433 of the process up to this point. If you only want to reroll the 434 current stage, you can nest another `map` inside `repl`. 435 436 EXPERIMENTAL: If set to `None`, the result will be as if this 437 were repeated an infinite number of times. In this case, the 438 result will be in simplest form. 439 again_count, again_depth, again_end: Forwarded to the final die constructor. 440 """ 441 transition_function = _canonicalize_transition_function( 442 repl, len(args), star) 443 444 if len(args) == 0: 445 if repeat != 1: 446 raise ValueError('If no arguments are given, repeat must be 1.') 447 return icepool.Die([transition_function()], 448 again_count=again_count, 449 again_depth=again_depth, 450 again_end=again_end) 451 452 # Here len(args) is at least 1. 
453 454 first_arg = args[0] 455 extra_args = args[1:] 456 457 if repeat is not None: 458 if repeat < 0: 459 raise ValueError('repeat cannot be negative.') 460 elif repeat == 0: 461 return icepool.Die([first_arg]) 462 elif repeat == 1: 463 final_outcomes: 'list[T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' = [] 464 final_quantities: list[int] = [] 465 for outcomes, final_quantity in iter_cartesian_product(*args): 466 final_outcome = transition_function(*outcomes) 467 if final_outcome is not icepool.Reroll: 468 final_outcomes.append(final_outcome) 469 final_quantities.append(final_quantity) 470 return icepool.Die(final_outcomes, 471 final_quantities, 472 again_count=again_count, 473 again_depth=again_depth, 474 again_end=again_end) 475 else: 476 result: 'icepool.Die[T]' = icepool.Die([first_arg]) 477 for _ in range(repeat): 478 result = icepool.map(transition_function, 479 result, 480 *extra_args, 481 star=False, 482 again_count=again_count, 483 again_depth=again_depth, 484 again_end=again_end) 485 return result 486 else: 487 # Infinite repeat. 488 # T_co and U should be the same in this case. 489 def unary_transition_function(state): 490 return map(transition_function, 491 state, 492 *extra_args, 493 star=False, 494 again_count=again_count, 495 again_depth=again_depth, 496 again_end=again_end) 497 498 return icepool.population.markov_chain.absorbing_markov_chain( 499 icepool.Die([args[0]]), unary_transition_function)
Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a `Die`.

See `map_function` for a decorator version of this.

Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.

`map()` is flexible but not very efficient for more than a few dice. If at all possible, use `reduce()`, `MultisetExpression` methods, and/or `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more efficient than using `map` on the dice in order.

`Again` can be used, but is not recommended with `repeat` other than 1.
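Conceptually, `map` enumerates the Cartesian product of its arguments' outcomes and tallies the results of `func`. A minimal standard-library sketch of that computation (no icepool required; `map_outcomes` is an illustrative helper, not icepool API):

```python
# Minimal sketch of what map() computes: enumerate every joint
# outcome of the arguments and tally func's results.
from collections import Counter
from itertools import product

def map_outcomes(func, *dice):
    """Apply func to every joint outcome and tally the results."""
    return Counter(func(*joint) for joint in product(*dice))

d6 = range(1, 7)
two_d6 = map_outcomes(lambda a, b: a + b, d6, d6)
# 7 is the most common sum: 6 of the 36 joint outcomes.
```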
Arguments:
- `repl`: One of the following:
  - A callable that takes in one outcome per element of `args` and produces a new outcome.
  - A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same. In this case `args` must have exactly one element.

  As with the `Die` constructor, the new outcomes:
  - May be dice rather than just single outcomes.
  - The special value `icepool.Reroll` will reroll that old outcome.
  - `tuple`s containing `Population`s will be `tupleize`d into `Population`s of `tuple`s. This does not apply to subclasses of `tuple` such as `namedtuple` or other classes such as `Vector`.
- `*args`: `func` will be called with all joint outcomes of these. Allowed arg types are:
  - Single outcome.
  - `Die`. All outcomes will be sent to `func`.
  - `MultisetExpression`. All sorted tuples of outcomes will be sent to `func`, as `MultisetExpression.expand()`. The expression must be fully bound.
- `star`: If `True`, the first of the args will be unpacked before giving them to `func`. If not provided, it will be guessed based on the signature of `func` and the number of arguments.
- `repeat`: This will be repeated with the same arguments on the result this many times, except the first of `args` will be replaced by the result of the previous iteration. Note that returning `Reroll` from `repl` will effectively reroll all arguments, including the first argument, which represents the result of the process up to this point. If you only want to reroll the current stage, you can nest another `map` inside `repl`. EXPERIMENTAL: If set to `None`, the result will be as if this were repeated an infinite number of times. In this case, the result will be in simplest form.
- `again_count`, `again_depth`, `again_end`: Forwarded to the final die constructor.
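The feedback loop behind `repeat` can be sketched with plain dicts of outcome-to-quantity. This is an illustrative stdlib analogue (`map_repeat` is a hypothetical helper; the extra arguments are assumed to be unweighted iterables of outcomes):

```python
# Sketch of the `repeat` semantics: on each iteration, the first
# argument is replaced by the previous iteration's result.
from collections import Counter
from itertools import product

def map_repeat(func, first, *extra, repeat=1):
    result = Counter(first)  # outcome -> quantity
    for _ in range(repeat):
        next_result = Counter()
        for state, qty in result.items():
            for rest in product(*extra):
                next_result[func(state, *rest)] += qty
        result = next_result
    return result

# Folding two more d6 into a starting d6 yields the 3d6 distribution.
three_d6 = map_repeat(lambda s, x: s + x, range(1, 7), range(1, 7), repeat=2)
```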
539def map_function( 540 function: 541 'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | None' = None, 542 /, 543 *, 544 star: bool | None = None, 545 repeat: int | None = 1, 546 again_count: int | None = None, 547 again_depth: int | None = None, 548 again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None 549) -> 'Callable[..., icepool.Die[T]] | Callable[..., Callable[..., icepool.Die[T]]]': 550 """Decorator that turns a function that takes outcomes into a function that takes dice. 551 552 The result must be a `Die`. 553 554 This is basically a decorator version of `map()` and produces behavior 555 similar to AnyDice functions, though Icepool has different typing rules 556 among other differences. 557 558 `map_function` can either be used with no arguments: 559 560 ```python 561 @map_function 562 def explode_six(x): 563 if x == 6: 564 return 6 + Again 565 else: 566 return x 567 568 explode_six(d6, again_depth=2) 569 ``` 570 571 Or with keyword arguments, in which case the extra arguments are bound: 572 573 ```python 574 @map_function(again_depth=2) 575 def explode_six(x): 576 if x == 6: 577 return 6 + Again 578 else: 579 return x 580 581 explode_six(d6) 582 ``` 583 584 Args: 585 again_count, again_depth, again_end: Forwarded to the final die constructor. 586 """ 587 588 if function is not None: 589 return update_wrapper(partial(map, function), function) 590 else: 591 592 def decorator( 593 function: 594 'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' 595 ) -> 'Callable[..., icepool.Die[T]]': 596 597 return update_wrapper( 598 partial(map, 599 function, 600 star=star, 601 repeat=repeat, 602 again_count=again_count, 603 again_depth=again_depth, 604 again_end=again_end), function) 605 606 return decorator
Decorator that turns a function that takes outcomes into a function that takes dice.

The result must be a `Die`.

This is basically a decorator version of `map()` and produces behavior similar to AnyDice functions, though Icepool has different typing rules among other differences.

`map_function` can either be used with no arguments:

```python
@map_function
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6, again_depth=2)
```

Or with keyword arguments, in which case the extra arguments are bound:

```python
@map_function(again_depth=2)
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6)
```

Arguments:
- `again_count`, `again_depth`, `again_end`: Forwarded to the final die constructor.
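The decorator pattern itself is ordinary `functools` machinery. A hedged stdlib analogue that lifts a function on outcomes into a function on iterables of outcomes (`lift` and `hit` are illustrative names, not icepool API):

```python
# Stdlib sketch of the "decorate a function on outcomes" idea.
from collections import Counter
from functools import wraps
from itertools import product

def lift(func):
    """Decorator: the wrapped function accepts iterables of outcomes
    and returns a tally over all joint outcomes."""
    @wraps(func)
    def wrapper(*dice):
        return Counter(func(*joint) for joint in product(*dice))
    return wrapper

@lift
def hit(attack, defense):
    return attack > defense

result = hit(range(1, 7), range(1, 7))  # d6 vs. d6; ties miss
```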
609def map_and_time( 610 repl: 611 'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]', 612 state, 613 /, 614 *extra_args, 615 star: bool | None = None, 616 repeat: int) -> 'icepool.Die[tuple[T, int]]': 617 """Repeatedly map outcomes of the state to other outcomes, while also 618 counting timesteps. 619 620 This is useful for representing processes. 621 622 The outcomes of the result are `(outcome, time)`, where `time` is the 623 number of repeats needed to reach an absorbing outcome (an outcome that 624 only leads to itself), or `repeat`, whichever is lesser. 625 626 This will return early if it reaches a fixed point. 627 Therefore, you can set `repeat` equal to the maximum number of 628 time you could possibly be interested in without worrying about 629 it causing extra computations after the fixed point. 630 631 Args: 632 repl: One of the following: 633 * A callable returning a new outcome for each old outcome. 634 * A mapping from old outcomes to new outcomes. 635 Unmapped old outcomes stay the same. 636 The new outcomes may be dice rather than just single outcomes. 637 The special value `icepool.Reroll` will reroll that old outcome. 638 star: If `True`, the first of the args will be unpacked before giving 639 them to `func`. 640 If not provided, it will be guessed based on the signature of `func` 641 and the number of arguments. 642 repeat: This will be repeated with the same arguments on the result 643 this many times. 644 645 Returns: 646 The `Die` after the modification. 
647 """ 648 transition_function = _canonicalize_transition_function( 649 repl, 1 + len(extra_args), star) 650 651 result: 'icepool.Die[tuple[T, int]]' = state.map(lambda x: (x, 0)) 652 653 def transition_with_steps(outcome_and_steps): 654 outcome, steps = outcome_and_steps 655 next_outcome = transition_function(outcome, *extra_args) 656 if icepool.population.markov_chain.is_absorbing(outcome, next_outcome): 657 return outcome, steps 658 else: 659 return icepool.tupleize(next_outcome, steps + 1) 660 661 for _ in range(repeat): 662 next_result: 'icepool.Die[tuple[T, int]]' = map( 663 transition_with_steps, result) 664 if result == next_result: 665 return next_result 666 result = next_result 667 return result
Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

The outcomes of the result are `(outcome, time)`, where `time` is the number of repeats needed to reach an absorbing outcome (an outcome that only leads to itself), or `repeat`, whichever is lesser.

This will return early if it reaches a fixed point. Therefore, you can set `repeat` equal to the maximum number of times you could possibly be interested in without worrying about it causing extra computations after the fixed point.

Arguments:
- `repl`: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A mapping from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- `star`: If `True`, the first of the args will be unpacked before giving them to `func`. If not provided, it will be guessed based on the signature of `func` and the number of arguments.
- `repeat`: This will be repeated with the same arguments on the result this many times.

Returns:
The `Die` after the modification.
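The `(outcome, time)` bookkeeping and the fixed-point early return can be sketched on plain dicts. A hedged illustration with a deterministic transition (`map_and_time_sketch` is a hypothetical helper, not icepool API):

```python
# Sketch: track (outcome, steps) pairs; once an outcome is absorbing
# (maps to itself), its step counter stops advancing.
from collections import Counter

def map_and_time_sketch(transition, start, repeat):
    result = Counter({(outcome, 0): qty for outcome, qty in start.items()})
    for _ in range(repeat):
        next_result = Counter()
        for (outcome, steps), qty in result.items():
            next_outcome = transition(outcome)
            if next_outcome == outcome:  # absorbing: time stops
                next_result[(outcome, steps)] += qty
            else:
                next_result[(next_outcome, steps + 1)] += qty
        if next_result == result:  # fixed point: return early
            break
        result = next_result
    return result

# Counting down to 0: starting at 3 takes 3 steps to absorb.
chain = map_and_time_sketch(lambda x: max(x - 1, 0), {3: 1, 0: 1}, repeat=10)
```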
670def map_to_pool( 671 repl: 672 'Callable[..., icepool.MultisetGenerator | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.Reroll] | Mapping[Any, icepool.MultisetGenerator | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.Reroll]', 673 /, 674 *args: 'Outcome | icepool.Die | icepool.MultisetExpression', 675 star: bool | None = None, 676 denominator: int | None = None 677) -> 'icepool.MultisetGenerator[T, tuple[int]]': 678 """EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a MultisetGenerator. 679 680 Args: 681 repl: One of the following: 682 * A callable that takes in one outcome per element of args and 683 produces a `MultisetGenerator` or something convertible to a `Pool`. 684 * A mapping from old outcomes to `MultisetGenerator` 685 or something convertible to a `Pool`. 686 In this case args must have exactly one element. 687 The new outcomes may be dice rather than just single outcomes. 688 The special value `icepool.Reroll` will reroll that old outcome. 689 star: If `True`, the first of the args will be unpacked before giving 690 them to `repl`. 691 If not provided, it will be guessed based on the signature of `repl` 692 and the number of arguments. 693 denominator: If provided, the denominator of the result will be this 694 value. Otherwise it will be the minimum to correctly weight the 695 pools. 696 697 Returns: 698 A `MultisetGenerator` representing the mixture of `Pool`s. Note 699 that this is not technically a `Pool`, though it supports most of 700 the same operations. 701 702 Raises: 703 ValueError: If `denominator` cannot be made consistent with the 704 resulting mixture of pools. 
705 """ 706 transition_function = _canonicalize_transition_function( 707 repl, len(args), star) 708 709 data: 'MutableMapping[icepool.MultisetGenerator[T, tuple[int]], int]' = defaultdict( 710 int) 711 for outcomes, quantity in iter_cartesian_product(*args): 712 pool = transition_function(*outcomes) 713 if pool is icepool.Reroll: 714 continue 715 elif isinstance(pool, icepool.MultisetGenerator): 716 data[pool] += quantity 717 else: 718 data[icepool.Pool(pool)] += quantity 719 # I couldn't get the covariance / contravariance to work. 720 return icepool.MixtureGenerator(data, 721 denominator=denominator) # type: ignore
EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a `MultisetGenerator`.

Arguments:
- `repl`: One of the following:
  - A callable that takes in one outcome per element of `args` and produces a `MultisetGenerator` or something convertible to a `Pool`.
  - A mapping from old outcomes to a `MultisetGenerator` or something convertible to a `Pool`. In this case `args` must have exactly one element.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- `star`: If `True`, the first of the args will be unpacked before giving them to `repl`. If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- `denominator`: If provided, the denominator of the result will be this value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.

Raises:
- `ValueError`: If `denominator` cannot be made consistent with the resulting mixture of pools.
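As a rough illustration of the denominator balancing (a simplification: the actual minimum also accounts for the weights of the mapping itself), branches whose pools have different denominators must be scaled to a common total, and the least common multiple is the smallest such total:

```python
# Hypothetical helper, not icepool API: the smallest denominator that
# every branch's pool denominator divides evenly.
from math import lcm

def common_denominator(branch_denominators):
    return lcm(*branch_denominators)

# Mixing a 1d6 pool (denominator 6) with a 2d6 pool (denominator 36):
scale = common_denominator([6, 36])  # both branches fit a denominator of 36
```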
Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function and method descriptions for details.

This effectively removes the outcome from the probability space, along with its contribution to the denominator.

This can be used for conditional probability by removing all outcomes not consistent with the given observations.

Operation in specific cases:
- When used with `Again`, only that stage is rerolled, not the entire `Again` tree.
- To reroll with limited depth, use `Die.reroll()`, or `Again` with no modification.
- When used with `MultisetEvaluator`, the entire evaluation is rerolled.
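The conditional-probability use can be sketched without icepool: dropping outcomes (as `Reroll` does) removes their weight from the denominator, which is exactly conditioning on the surviving outcomes (`condition` is a hypothetical helper for illustration):

```python
# Sketch: removing outcomes shrinks the denominator, so the remaining
# outcomes are implicitly renormalized -- i.e. conditioned on.
from fractions import Fraction

def condition(dist, predicate):
    """Keep only outcomes satisfying the predicate; the removed
    quantities leave the denominator."""
    kept = {o: q for o, q in dist.items() if predicate(o)}
    denom = sum(kept.values())
    return {o: Fraction(q, denom) for o, q in kept.items()}

d6 = {n: 1 for n in range(1, 7)}
given_even = condition(d6, lambda n: n % 2 == 0)  # P(6 | even) = 1/3
```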
64class RerollType(enum.Enum): 65 """The type of the Reroll singleton.""" 66 Reroll = 'Reroll' 67 """Indicates an outcome should be rerolled (with unlimited depth)."""
The type of the `Reroll` singleton.

Indicates an outcome should be rerolled (with unlimited depth).
Inherited Members
- enum.Enum
- name
- value
25class Pool(KeepGenerator[T]): 26 """Represents a multiset of outcomes resulting from the roll of several dice. 27 28 This should be used in conjunction with `MultisetEvaluator` to generate a 29 result. 30 31 Note that operators are performed on the multiset of rolls, not the multiset 32 of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but 33 an expression meaning "roll two pools of 3d6 and get the rolls from the 34 first pool, with rolls in the second pool cancelling matching rolls in the 35 first pool one-for-one". 36 """ 37 38 _dice: tuple[tuple['icepool.Die[T]', int]] 39 40 def __new__( 41 cls, 42 dice: 43 'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]', 44 times: Sequence[int] | int = 1) -> 'Pool': 45 """Public constructor for a pool. 46 47 Evaulation is most efficient when the dice are the same or same-side 48 truncations of each other. For example, d4, d6, d8, d10, d12 are all 49 same-side truncations of d12. 50 51 It is permissible to create a `Pool` without providing dice, but not all 52 evaluators will handle this case, especially if they depend on the 53 outcome type. In this case you may want to provide a die with zero 54 quantity. 55 56 Args: 57 dice: The dice to put in the `Pool`. This can be one of the following: 58 59 * A `Sequence` of `Die` or outcomes. 60 * A `Mapping` of `Die` or outcomes to how many of that `Die` or 61 outcome to put in the `Pool`. 62 63 All outcomes within a `Pool` must be totally orderable. 64 times: Multiplies the number of times each element of `dice` will 65 be put into the pool. 66 `times` can either be a sequence of the same length as 67 `outcomes` or a single `int` to apply to all elements of 68 `outcomes`. 69 70 Raises: 71 ValueError: If a bare `Deck` or `Die` argument is provided. 72 A `Pool` of a single `Die` should constructed as `Pool([die])`. 
73 """ 74 if isinstance(dice, Pool): 75 if times == 1: 76 return dice 77 else: 78 dice = {die: quantity for die, quantity in dice._dice} 79 80 if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)): 81 raise ValueError( 82 f'A Pool cannot be constructed with a {type(dice).__name__} argument.' 83 ) 84 85 dice, times = icepool.creation_args.itemize(dice, times) 86 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 87 88 dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int) 89 for die, qty in zip(converted_dice, times): 90 dice_counts[die] += qty 91 keep_tuple = (1, ) * sum(times) 92 return cls._new_from_mapping(dice_counts, keep_tuple) 93 94 @classmethod 95 @cache 96 def _new_raw(cls, dice: tuple[tuple['icepool.Die[T]', int]], 97 keep_tuple: tuple[int, ...]) -> 'Pool[T]': 98 """All pool creation ends up here. This method is cached. 99 100 Args: 101 dice: A tuple of (die, count) pairs. 102 keep_tuple: A tuple of how many times to count each die. 103 """ 104 self = super(Pool, cls).__new__(cls) 105 self._dice = dice 106 self._keep_tuple = keep_tuple 107 return self 108 109 @classmethod 110 def _new_empty(cls) -> 'Pool': 111 return cls._new_raw((), ()) 112 113 @classmethod 114 def clear_cache(cls): 115 """Clears the global pool cache.""" 116 Pool._new_raw.cache_clear() 117 118 @classmethod 119 def _new_from_mapping(cls, dice_counts: Mapping['icepool.Die[T]', int], 120 keep_tuple: Sequence[int]) -> 'Pool[T]': 121 """Creates a new pool. 122 123 Args: 124 dice_counts: A map from dice to rolls. 125 keep_tuple: A tuple with length equal to the number of dice. 
126 """ 127 dice = tuple( 128 sorted(dice_counts.items(), key=lambda kv: kv[0]._hash_key)) 129 return Pool._new_raw(dice, keep_tuple) 130 131 @cached_property 132 def _raw_size(self) -> int: 133 return sum(count for _, count in self._dice) 134 135 def raw_size(self) -> int: 136 """The number of dice in this pool before the keep_tuple is applied.""" 137 return self._raw_size 138 139 def _is_resolvable(self) -> bool: 140 return all(not die.is_empty() for die, _ in self._dice) 141 142 @cached_property 143 def _denominator(self) -> int: 144 return math.prod(die.denominator()**count for die, count in self._dice) 145 146 def denominator(self) -> int: 147 return self._denominator 148 149 @cached_property 150 def _dice_tuple(self) -> tuple['icepool.Die[T]', ...]: 151 return sum(((die, ) * count for die, count in self._dice), start=()) 152 153 @cached_property 154 def _unique_dice(self) -> Collection['icepool.Die[T]']: 155 return set(die for die, _ in self._dice) 156 157 def unique_dice(self) -> Collection['icepool.Die[T]']: 158 """The collection of unique dice in this pool.""" 159 return self._unique_dice 160 161 @cached_property 162 def _outcomes(self) -> Sequence[T]: 163 outcome_set = set( 164 itertools.chain.from_iterable(die.outcomes() 165 for die in self.unique_dice())) 166 return tuple(sorted(outcome_set)) 167 168 def outcomes(self) -> Sequence[T]: 169 """The union of possible outcomes among all dice in this pool in ascending order.""" 170 return self._outcomes 171 172 def output_arity(self) -> int: 173 return 1 174 175 def _estimate_order_costs(self) -> tuple[int, int]: 176 """Estimates the cost of popping from the min and max sides. 
177 178 Returns: 179 pop_min_cost 180 pop_max_cost 181 """ 182 return icepool.generator.pool_cost.estimate_costs(self) 183 184 @cached_property 185 def _min_outcome(self) -> T: 186 return min(die.min_outcome() for die in self.unique_dice()) 187 188 def min_outcome(self) -> T: 189 """The min outcome among all dice in this pool.""" 190 return self._min_outcome 191 192 @cached_property 193 def _max_outcome(self) -> T: 194 return max(die.max_outcome() for die in self.unique_dice()) 195 196 def max_outcome(self) -> T: 197 """The max outcome among all dice in this pool.""" 198 return self._max_outcome 199 200 def _generate_initial(self) -> InitialMultisetGenerator: 201 yield self, 1 202 203 def _generate_min(self, min_outcome) -> NextMultisetGenerator: 204 """Pops the given outcome from this pool, if it is the min outcome. 205 206 Yields: 207 popped_pool: The pool after the min outcome is popped. 208 count: The number of dice that rolled the min outcome, after 209 accounting for keep_tuple. 210 weight: The weight of this incremental result. 211 """ 212 if not self.outcomes(): 213 yield self, (0, ), 1 214 return 215 generators = [ 216 iter_die_pop_min(die, die_count, min_outcome) 217 for die, die_count in self._dice 218 ] 219 skip_weight = None 220 for pop in itertools.product(*generators): 221 total_hits = 0 222 result_weight = 1 223 next_dice_counts: MutableMapping[Any, int] = defaultdict(int) 224 for popped_die, misses, hits, weight in pop: 225 if not popped_die.is_empty(): 226 next_dice_counts[popped_die] += misses 227 total_hits += hits 228 result_weight *= weight 229 popped_keep_tuple, result_count = pop_min_from_keep_tuple( 230 self.keep_tuple(), total_hits) 231 popped_pool = Pool._new_from_mapping(next_dice_counts, 232 popped_keep_tuple) 233 if not any(popped_keep_tuple): 234 # Dump all dice in exchange for the denominator. 
235 skip_weight = (skip_weight or 236 0) + result_weight * popped_pool.denominator() 237 continue 238 239 yield popped_pool, (result_count, ), result_weight 240 241 if skip_weight is not None: 242 yield Pool._new_empty(), (sum(self.keep_tuple()), ), skip_weight 243 244 def _generate_max(self, max_outcome) -> NextMultisetGenerator: 245 """Pops the given outcome from this pool, if it is the max outcome. 246 247 Yields: 248 popped_pool: The pool after the max outcome is popped. 249 count: The number of dice that rolled the max outcome, after 250 accounting for keep_tuple. 251 weight: The weight of this incremental result. 252 """ 253 if not self.outcomes(): 254 yield self, (0, ), 1 255 return 256 generators = [ 257 iter_die_pop_max(die, die_count, max_outcome) 258 for die, die_count in self._dice 259 ] 260 skip_weight = None 261 for pop in itertools.product(*generators): 262 total_hits = 0 263 result_weight = 1 264 next_dice_counts: MutableMapping[Any, int] = defaultdict(int) 265 for popped_die, misses, hits, weight in pop: 266 if not popped_die.is_empty(): 267 next_dice_counts[popped_die] += misses 268 total_hits += hits 269 result_weight *= weight 270 popped_keep_tuple, result_count = pop_max_from_keep_tuple( 271 self.keep_tuple(), total_hits) 272 popped_pool = Pool._new_from_mapping(next_dice_counts, 273 popped_keep_tuple) 274 if not any(popped_keep_tuple): 275 # Dump all dice in exchange for the denominator. 
276 skip_weight = (skip_weight or 277 0) + result_weight * popped_pool.denominator() 278 continue 279 280 yield popped_pool, (result_count, ), result_weight 281 282 if skip_weight is not None: 283 yield Pool._new_empty(), (sum(self.keep_tuple()), ), skip_weight 284 285 def _set_keep_tuple(self, keep_tuple: tuple[int, 286 ...]) -> 'KeepGenerator[T]': 287 return Pool._new_raw(self._dice, keep_tuple) 288 289 def additive_union( 290 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 291 ) -> 'MultisetExpression[T]': 292 args = tuple( 293 icepool.expression.implicit_convert_to_expression(arg) 294 for arg in args) 295 if all(isinstance(arg, Pool) for arg in args): 296 pools = cast(tuple[Pool, ...], args) 297 keep_tuple: tuple[int, ...] = tuple( 298 reduce(operator.add, (pool.keep_tuple() for pool in pools), 299 ())) 300 if len(keep_tuple) == 0: 301 # All empty. 302 return Pool._new_empty() 303 if all(x == keep_tuple[0] for x in keep_tuple): 304 # All sorted positions count the same, so we can merge the 305 # pools. 306 dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int) 307 for pool in pools: 308 for die, die_count in pool._dice: 309 dice[die] += die_count 310 return Pool._new_from_mapping(dice, keep_tuple) 311 return KeepGenerator.additive_union(*args) 312 313 def __str__(self) -> str: 314 return ( 315 f'Pool of {self.raw_size()} dice with keep_tuple={self.keep_tuple()}\n' 316 + ''.join(f' {repr(die)}\n' for die in self._dice_tuple)) 317 318 @cached_property 319 def _hash_key(self) -> tuple: 320 return Pool, self._dice, self._keep_tuple
Represents a multiset of outcomes resulting from the roll of several dice.

This should be used in conjunction with `MultisetEvaluator` to generate a result.

Note that operators are performed on the multiset of rolls, not the multiset of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but an expression meaning "roll two pools of 3d6 and get the rolls from the first pool, with rolls in the second pool cancelling matching rolls in the first pool one-for-one".
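The rolls-not-dice semantics match `collections.Counter` subtraction applied to a single joint roll (fixed rolls chosen here for illustration):

```python
# One concrete roll of each 3d6 pool, as multisets of rolls.
from collections import Counter

first_rolls = Counter([3, 3, 5])   # roll of the first pool
second_rolls = Counter([3, 5, 6])  # roll of the second pool

# Matching rolls cancel one-for-one; negative counts drop to zero.
remaining = first_rolls - second_rolls
```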
40 def __new__( 41 cls, 42 dice: 43 'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]', 44 times: Sequence[int] | int = 1) -> 'Pool': 45 """Public constructor for a pool. 46 47 Evaulation is most efficient when the dice are the same or same-side 48 truncations of each other. For example, d4, d6, d8, d10, d12 are all 49 same-side truncations of d12. 50 51 It is permissible to create a `Pool` without providing dice, but not all 52 evaluators will handle this case, especially if they depend on the 53 outcome type. In this case you may want to provide a die with zero 54 quantity. 55 56 Args: 57 dice: The dice to put in the `Pool`. This can be one of the following: 58 59 * A `Sequence` of `Die` or outcomes. 60 * A `Mapping` of `Die` or outcomes to how many of that `Die` or 61 outcome to put in the `Pool`. 62 63 All outcomes within a `Pool` must be totally orderable. 64 times: Multiplies the number of times each element of `dice` will 65 be put into the pool. 66 `times` can either be a sequence of the same length as 67 `outcomes` or a single `int` to apply to all elements of 68 `outcomes`. 69 70 Raises: 71 ValueError: If a bare `Deck` or `Die` argument is provided. 72 A `Pool` of a single `Die` should constructed as `Pool([die])`. 73 """ 74 if isinstance(dice, Pool): 75 if times == 1: 76 return dice 77 else: 78 dice = {die: quantity for die, quantity in dice._dice} 79 80 if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)): 81 raise ValueError( 82 f'A Pool cannot be constructed with a {type(dice).__name__} argument.' 83 ) 84 85 dice, times = icepool.creation_args.itemize(dice, times) 86 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 87 88 dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int) 89 for die, qty in zip(converted_dice, times): 90 dice_counts[die] += qty 91 keep_tuple = (1, ) * sum(times) 92 return cls._new_from_mapping(dice_counts, keep_tuple)
Public constructor for a pool.

Evaluation is most efficient when the dice are the same or same-side truncations of each other. For example, d4, d6, d8, d10, d12 are all same-side truncations of d12.

It is permissible to create a `Pool` without providing dice, but not all evaluators will handle this case, especially if they depend on the outcome type. In this case you may want to provide a die with zero quantity.

Arguments:
- `dice`: The dice to put in the `Pool`. This can be one of the following:
  - A `Sequence` of `Die` or outcomes.
  - A `Mapping` of `Die` or outcomes to how many of that `Die` or outcome to put in the `Pool`.

  All outcomes within a `Pool` must be totally orderable.
- `times`: Multiplies the number of times each element of `dice` will be put into the pool. `times` can either be a sequence of the same length as `outcomes` or a single `int` to apply to all elements of `outcomes`.

Raises:
- `ValueError`: If a bare `Deck` or `Die` argument is provided. A `Pool` of a single `Die` should be constructed as `Pool([die])`.
113 @classmethod 114 def clear_cache(cls): 115 """Clears the global pool cache.""" 116 Pool._new_raw.cache_clear()
Clears the global pool cache.
135 def raw_size(self) -> int: 136 """The number of dice in this pool before the keep_tuple is applied.""" 137 return self._raw_size
The number of dice in this pool before the keep_tuple is applied.
157 def unique_dice(self) -> Collection['icepool.Die[T]']: 158 """The collection of unique dice in this pool.""" 159 return self._unique_dice
The collection of unique dice in this pool.
168 def outcomes(self) -> Sequence[T]: 169 """The union of possible outcomes among all dice in this pool in ascending order.""" 170 return self._outcomes
The union of possible outcomes among all dice in this pool in ascending order.
188 def min_outcome(self) -> T: 189 """The min outcome among all dice in this pool.""" 190 return self._min_outcome
The min outcome among all dice in this pool.
196 def max_outcome(self) -> T: 197 """The max outcome among all dice in this pool.""" 198 return self._max_outcome
The max outcome among all dice in this pool.
289 def additive_union( 290 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 291 ) -> 'MultisetExpression[T]': 292 args = tuple( 293 icepool.expression.implicit_convert_to_expression(arg) 294 for arg in args) 295 if all(isinstance(arg, Pool) for arg in args): 296 pools = cast(tuple[Pool, ...], args) 297 keep_tuple: tuple[int, ...] = tuple( 298 reduce(operator.add, (pool.keep_tuple() for pool in pools), 299 ())) 300 if len(keep_tuple) == 0: 301 # All empty. 302 return Pool._new_empty() 303 if all(x == keep_tuple[0] for x in keep_tuple): 304 # All sorted positions count the same, so we can merge the 305 # pools. 306 dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int) 307 for pool in pools: 308 for die, die_count in pool._dice: 309 dice[die] += die_count 310 return Pool._new_from_mapping(dice, keep_tuple) 311 return KeepGenerator.additive_union(*args)
The combined elements from all of the multisets.

Same as `a + b + c + ...`.

Any resulting counts that would be negative are set to zero.

Example: `[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]`
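The example maps directly onto `collections.Counter` addition:

```python
# Additive union of two multisets via Counter addition.
from collections import Counter

combined = Counter([1, 2, 2, 3]) + Counter([1, 2, 4])
elements = sorted(combined.elements())  # [1, 1, 2, 2, 2, 3, 4]
```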
Inherited Members
- icepool.generator.keep.KeepGenerator
- keep_size
- keep_tuple
- has_negative_keeps
- keep
- middle
- multiply_counts
- MultisetExpression
- difference
- intersection
- union
- symmetric_difference
- keep_outcomes
- drop_outcomes
- map_counts
- divide_counts
- modulo_counts
- keep_counts_le
- keep_counts_lt
- keep_counts_ge
- keep_counts_gt
- keep_counts_eq
- keep_counts_ne
- unique
- lowest
- highest
- evaluate
- expand
- sum
- count
- any
- highest_outcome_and_count
- all_counts
- largest_count
- largest_count_and_outcome
- count_subset
- largest_straight
- largest_straight_and_outcome
- all_straights
- all_straights_reduce_counts
- argsort
- issubset
- issuperset
- isdisjoint
- compair_lt
- compair_le
- compair_gt
- compair_ge
- compair_eq
- compair_ne
323def standard_pool( 324 die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]': 325 """A `Pool` of standard dice (e.g. d6, d8...). 326 327 Args: 328 die_sizes: A collection of die sizes, which will put one die of that 329 sizes in the pool for each element. 330 Or, a mapping of die sizes to how many dice of that size to put 331 into the pool. 332 If empty, the pool will be considered to consist of 0d1. 333 """ 334 if not die_sizes: 335 return Pool({1: 0}) 336 if isinstance(die_sizes, Mapping): 337 die_sizes = list( 338 itertools.chain.from_iterable([k] * v 339 for k, v in die_sizes.items())) 340 return Pool(list(icepool.d(x) for x in die_sizes))
A `Pool` of standard dice (e.g. d6, d8...).

Arguments:
- `die_sizes`: A collection of die sizes, which will put one die of that size in the pool for each element. Or, a mapping of die sizes to how many dice of that size to put into the pool. If empty, the pool will be considered to consist of 0d1.
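The mapping form expands into a flat list of sizes, mirroring the `itertools.chain` logic in the source above (`expand_die_sizes` is an illustrative helper, not icepool API):

```python
# Sketch of how a mapping of size -> count flattens to a list of sizes.
from itertools import chain
from collections.abc import Mapping

def expand_die_sizes(die_sizes):
    """Flatten a mapping of die size -> count into a list of sizes."""
    if isinstance(die_sizes, Mapping):
        return list(chain.from_iterable([k] * v for k, v in die_sizes.items()))
    return list(die_sizes)

sizes = expand_die_sizes({6: 3, 8: 1})  # three d6 and one d8
```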
32class MultisetGenerator(Generic[T, Qs], MultisetExpression[T]): 33 """Abstract base class for generating one or more multisets. 34 35 These include dice pools (`Pool`) and card deals (`Deal`). Most likely you 36 will be using one of these two rather than writing your own subclass of 37 `MultisetGenerator`. 38 39 The multisets are incrementally generated one outcome at a time. 40 For each outcome, a `count` and `weight` are generated, along with a 41 smaller generator to produce the rest of the multiset. 42 43 You can perform simple evaluations using built-in operators and methods in 44 this class. 45 For more complex evaluations and better performance, particularly when 46 multiple generators are involved, you will want to write your own subclass 47 of `MultisetEvaluator`. 48 """ 49 50 @abstractmethod 51 def outcomes(self) -> Sequence[T]: 52 """The possible outcomes that could be generated, in ascending order.""" 53 54 @abstractmethod 55 def output_arity(self) -> int: 56 """The number of multisets/counts generated. Must be constant.""" 57 58 @abstractmethod 59 def _is_resolvable(self) -> bool: 60 """`True` iff the generator is capable of producing an overall outcome. 61 62 For example, a dice `Pool` will return `False` if it contains any dice 63 with no outcomes. 64 """ 65 66 @abstractmethod 67 def _generate_initial(self) -> InitialMultisetGenerator: 68 """Initialize the generator before any outcomes are emitted. 69 70 Yields: 71 * A sub-generator. 72 * The weight for selecting that sub-generator. 73 74 Unitary generators can just yield `(self, 1)` and return. 75 """ 76 77 @abstractmethod 78 def _generate_min(self, min_outcome) -> NextMultisetGenerator: 79 """Pops the min outcome from this generator if it matches the argument. 80 81 Yields: 82 * A generator with the min outcome popped. 83 * A tuple of counts for the min outcome. 84 * The weight for this many of the min outcome appearing. 
85 86 If the argument does not match the min outcome, or this generator 87 has no outcomes, only a single tuple is yielded: 88 89 * `self` 90 * A tuple of zeros. 91 * weight = 1. 92 """ 93 94 @abstractmethod 95 def _generate_max(self, max_outcome) -> NextMultisetGenerator: 96 """Pops the max outcome from this generator if it matches the argument. 97 98 Yields: 99 * A generator with the max outcome popped. 100 * A tuple of counts for the max outcome. 101 * The weight for this many of the max outcome appearing. 102 103 If the argument does not match the max outcome, or this generator 104 has no outcomes, only a single tuple is yielded: 105 106 * `self` 107 * A tuple of zeros. 108 * weight = 1. 109 """ 110 111 @abstractmethod 112 def _estimate_order_costs(self) -> tuple[int, int]: 113 """Estimates the cost of popping from the min and max sides during an evaluation. 114 115 Returns: 116 pop_min_cost: A positive `int`. 117 pop_max_cost: A positive `int`. 118 """ 119 120 @abstractmethod 121 def denominator(self) -> int: 122 """The total weight of all paths through this generator.""" 123 124 @property 125 @abstractmethod 126 def _hash_key(self) -> Hashable: 127 """A hash key that logically identifies this object among MultisetGenerators. 128 129 Used to implement `equals()` and `__hash__()` 130 """ 131 132 def equals(self, other) -> bool: 133 """Whether this generator is logically equal to another object.""" 134 if not isinstance(other, MultisetGenerator): 135 return False 136 return self._hash_key == other._hash_key 137 138 @cached_property 139 def _hash(self) -> int: 140 return hash(self._hash_key) 141 142 def __hash__(self) -> int: 143 return self._hash 144 145 # Equality with truth value, needed for hashing. 146 147 # The result has a truth value, but is not a bool.
148 def __eq__( # type: ignore 149 self, other) -> 'icepool.DieWithTruth[bool]': 150 151 def data_callback() -> Counts[bool]: 152 die = cast('icepool.Die[bool]', 153 MultisetExpression.__eq__(self, other)) 154 if not isinstance(die, icepool.Die): 155 raise TypeError('Did not resolve to a die.') 156 return die._data 157 158 def truth_value_callback() -> bool: 159 if not isinstance(other, MultisetGenerator): 160 return False 161 return self._hash_key == other._hash_key 162 163 return icepool.DieWithTruth(data_callback, truth_value_callback) 164 165 # The result has a truth value, but is not a bool. 166 def __ne__( # type: ignore 167 self, other) -> 'icepool.DieWithTruth[bool]': 168 169 def data_callback() -> Counts[bool]: 170 die = cast('icepool.Die[bool]', 171 MultisetExpression.__ne__(self, other)) 172 if not isinstance(die, icepool.Die): 173 raise TypeError('Did not resolve to a die.') 174 return die._data 175 176 def truth_value_callback() -> bool: 177 if not isinstance(other, MultisetGenerator): 178 return True 179 return self._hash_key != other._hash_key 180 181 return icepool.DieWithTruth(data_callback, truth_value_callback) 182 183 # Expression API. 184 185 def _next_state(self, state, outcome: Outcome, *counts: 186 int) -> tuple[Hashable, int]: 187 raise RuntimeError( 188 'Internal error: Expressions should be unbound before evaluation.') 189 190 def _order(self) -> Order: 191 return Order.Any 192 193 def _bound_generators(self) -> 'tuple[icepool.MultisetGenerator, ...]': 194 return (self, ) 195 196 def _unbind(self, prefix_start: int, 197 free_start: int) -> 'tuple[MultisetExpression, int]': 198 unbound_expression = icepool.expression.MultisetVariable(prefix_start) 199 return unbound_expression, prefix_start + 1 200 201 def _free_arity(self) -> int: 202 return 0 203 204 def min_outcome(self) -> T: 205 return self.outcomes()[0] 206 207 def max_outcome(self) -> T: 208 return self.outcomes()[-1] 209 210 # Sampling. 
211 212 def sample(self) -> tuple[tuple, ...]: 213 """EXPERIMENTAL: A single random sample from this generator. 214 215 This uses the standard `random` package and is not cryptographically 216 secure. 217 218 Returns: 219 A sorted tuple of outcomes for each output of this generator. 220 """ 221 if not self.outcomes(): 222 raise ValueError('Cannot sample from an empty set of outcomes.') 223 224 min_cost, max_cost = self._estimate_order_costs() 225 226 if min_cost < max_cost: 227 outcome = self.min_outcome() 228 generated = tuple(self._generate_min(outcome)) 229 else: 230 outcome = self.max_outcome() 231 generated = tuple(self._generate_max(outcome)) 232 233 cumulative_weights = tuple( 234 itertools.accumulate(g.denominator() * w for g, _, w in generated)) 235 denominator = cumulative_weights[-1] 236 # We don't use random.choices since that is based on floats rather than ints. 237 r = random.randrange(denominator) 238 index = bisect.bisect_right(cumulative_weights, r) 239 popped_generator, counts, _ = generated[index] 240 head = tuple((outcome, ) * count for count in counts) 241 if popped_generator.outcomes(): 242 tail = popped_generator.sample() 243 return tuple(tuple(sorted(h + t)) for h, t, in zip(head, tail)) 244 else: 245 return head
Abstract base class for generating one or more multisets.
These include dice pools (`Pool`) and card deals (`Deal`). Most likely you will be using one of these two rather than writing your own subclass of `MultisetGenerator`.

The multisets are incrementally generated one outcome at a time. For each outcome, a `count` and `weight` are generated, along with a smaller generator to produce the rest of the multiset.

You can perform simple evaluations using built-in operators and methods in this class. For more complex evaluations and better performance, particularly when multiple generators are involved, you will want to write your own subclass of `MultisetEvaluator`.
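The one-outcome-at-a-time scheme can be illustrated for the simplest case with a standard-library sketch (a hypothetical `pop_min`, not icepool's implementation): popping the lowest face from a pool of identical fair dice yields each possible count of that face with a binomial weight, along with a smaller pool over the remaining faces.

```python
from math import comb

def pop_min(n_dice: int, n_faces: int):
    """Yield (remaining_dice, count_of_lowest_face, weight) tuples."""
    # Each die shows the lowest face with probability 1/n_faces, so exactly
    # k of n dice do so with weight C(n, k) * (n_faces - 1)**(n - k).
    for k in range(n_dice + 1):
        weight = comb(n_dice, k) * (n_faces - 1) ** (n_dice - k)
        yield n_dice - k, k, weight

# By the binomial theorem, the weights over all counts sum to the
# pool's denominator: 6**3 = 216 for three six-sided dice.
total = sum(w for _, _, w in pop_min(3, 6))
assert total == 6 ** 3
```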
50 @abstractmethod 51 def outcomes(self) -> Sequence[T]: 52 """The possible outcomes that could be generated, in ascending order."""
The possible outcomes that could be generated, in ascending order.
54 @abstractmethod 55 def output_arity(self) -> int: 56 """The number of multisets/counts generated. Must be constant."""
The number of multisets/counts generated. Must be constant.
120 @abstractmethod 121 def denominator(self) -> int: 122 """The total weight of all paths through this generator."""
The total weight of all paths through this generator.
132 def equals(self, other) -> bool: 133 """Whether this generator is logically equal to another object.""" 134 if not isinstance(other, MultisetGenerator): 135 return False 136 return self._hash_key == other._hash_key
Whether this generator is logically equal to another object.
212 def sample(self) -> tuple[tuple, ...]: 213 """EXPERIMENTAL: A single random sample from this generator. 214 215 This uses the standard `random` package and is not cryptographically 216 secure. 217 218 Returns: 219 A sorted tuple of outcomes for each output of this generator. 220 """ 221 if not self.outcomes(): 222 raise ValueError('Cannot sample from an empty set of outcomes.') 223 224 min_cost, max_cost = self._estimate_order_costs() 225 226 if min_cost < max_cost: 227 outcome = self.min_outcome() 228 generated = tuple(self._generate_min(outcome)) 229 else: 230 outcome = self.max_outcome() 231 generated = tuple(self._generate_max(outcome)) 232 233 cumulative_weights = tuple( 234 itertools.accumulate(g.denominator() * w for g, _, w in generated)) 235 denominator = cumulative_weights[-1] 236 # We don't use random.choices since that is based on floats rather than ints. 237 r = random.randrange(denominator) 238 index = bisect.bisect_right(cumulative_weights, r) 239 popped_generator, counts, _ = generated[index] 240 head = tuple((outcome, ) * count for count in counts) 241 if popped_generator.outcomes(): 242 tail = popped_generator.sample() 243 return tuple(tuple(sorted(h + t)) for h, t, in zip(head, tail)) 244 else: 245 return head
EXPERIMENTAL: A single random sample from this generator.
This uses the standard `random` package and is not cryptographically secure.
Returns:
A sorted tuple of outcomes for each output of this generator.
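The integer-exact selection step used in `sample()` above can be sketched with the standard library (`weighted_choice` is a hypothetical helper name): cumulative integer weights plus `bisect` avoid the float-based rounding of `random.choices`.

```python
import bisect
import itertools
import random

def weighted_choice(items, weights):
    """Select one item with probability proportional to its integer weight."""
    # Accumulate the integer weights, draw a uniform integer below the
    # total, and locate the chosen index with binary search.
    cumulative = list(itertools.accumulate(weights))
    r = random.randrange(cumulative[-1])
    return items[bisect.bisect_right(cumulative, r)]
```

Because every intermediate value is an integer, the selection is exact for arbitrarily large denominators, which floats cannot guarantee.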
Inherited Members
- MultisetExpression
- additive_union
- difference
- intersection
- union
- symmetric_difference
- keep_outcomes
- drop_outcomes
- map_counts
- multiply_counts
- divide_counts
- modulo_counts
- keep_counts_le
- keep_counts_lt
- keep_counts_ge
- keep_counts_gt
- keep_counts_eq
- keep_counts_ne
- unique
- keep
- lowest
- highest
- evaluate
- expand
- sum
- count
- any
- highest_outcome_and_count
- all_counts
- largest_count
- largest_count_and_outcome
- count_subset
- largest_straight
- largest_straight_and_outcome
- all_straights
- all_straights_reduce_counts
- argsort
- issubset
- issuperset
- isdisjoint
- compair_lt
- compair_le
- compair_gt
- compair_ge
- compair_eq
- compair_ne
17class Alignment(MultisetGenerator[T, tuple[()]]): 18 """A generator that does not output any counts. 19 20 This can be used to enforce that certain outcomes are seen without otherwise 21 affecting a multiset evaluation. 22 """ 23 24 def __init__(self, outcomes: Collection[T]): 25 self._outcomes = tuple(sorted(outcomes)) 26 27 def outcomes(self) -> Sequence[T]: 28 return self._outcomes 29 30 def output_arity(self) -> int: 31 return 0 32 33 def _is_resolvable(self) -> bool: 34 return True 35 36 def _generate_initial(self) -> InitialAlignmentGenerator: 37 yield self, 1 38 39 def _generate_min(self, min_outcome) -> AlignmentGenerator: 40 """`Alignment` only outputs 0 counts with weight 1.""" 41 if not self.outcomes() or min_outcome != self.min_outcome(): 42 yield self, (0, ), 1 43 else: 44 yield Alignment(self.outcomes()[1:]), (), 1 45 46 def _generate_max(self, max_outcome) -> AlignmentGenerator: 47 """`Alignment` only outputs 0 counts with weight 1.""" 48 if not self.outcomes() or max_outcome != self.max_outcome(): 49 yield self, (0, ), 1 50 else: 51 yield Alignment(self.outcomes()[:-1]), (), 1 52 53 def _estimate_order_costs(self) -> tuple[int, int]: 54 result = len(self.outcomes()) 55 return result, result 56 57 def denominator(self) -> int: 58 return 0 59 60 @cached_property 61 def _hash_key(self) -> Hashable: 62 return Alignment, self._outcomes
A generator that does not output any counts.
This can be used to enforce that certain outcomes are seen without otherwise affecting a multiset evaluation.
The possible outcomes that could be generated, in ascending order.
Inherited Members
- MultisetExpression
- additive_union
- difference
- intersection
- union
- symmetric_difference
- keep_outcomes
- drop_outcomes
- map_counts
- multiply_counts
- divide_counts
- modulo_counts
- keep_counts_le
- keep_counts_lt
- keep_counts_ge
- keep_counts_gt
- keep_counts_eq
- keep_counts_ne
- unique
- keep
- lowest
- highest
- evaluate
- expand
- sum
- count
- any
- highest_outcome_and_count
- all_counts
- largest_count
- largest_count_and_outcome
- count_subset
- largest_straight
- largest_straight_and_outcome
- all_straights
- all_straights_reduce_counts
- argsort
- issubset
- issuperset
- isdisjoint
- compair_lt
- compair_le
- compair_gt
- compair_ge
- compair_eq
- compair_ne
38class MultisetExpression(ABC, Generic[T_contra]): 39 """Abstract base class representing an expression that operates on multisets. 40 41 Expression methods can be applied to `MultisetGenerator`s to do simple 42 evaluations. For joint evaluations, try `multiset_function`. 43 44 Use the provided operations to build up more complicated 45 expressions, or to attach a final evaluator. 46 47 Operations include: 48 49 | Operation | Count / notes | 50 |:----------------------------|:--------------------------------------------| 51 | `additive_union`, `+` | `l + r` | 52 | `difference`, `-` | `l - r` | 53 | `intersection`, `&` | `min(l, r)` | 54 | `union`, `\\|` | `max(l, r)` | 55 | `symmetric_difference`, `^` | `abs(l - r)` | 56 | `multiply_counts`, `*` | `count * n` | 57 | `divide_counts`, `//` | `count // n` | 58 | `modulo_counts`, `%` | `count % n` | 59 | `keep_counts_ge` etc. | `count if count >= n else 0` etc. | 60 | unary `+` | same as `keep_counts_ge(0)` | 61 | unary `-` | reverses the sign of all counts | 62 | `unique` | `min(count, n)` | 63 | `keep_outcomes` | `count if outcome in t else 0` | 64 | `drop_outcomes` | `count if outcome not in t else 0` | 65 | `map_counts` | `f(outcome, *counts)` | 66 | `keep`, `[]` | less capable than `KeepGenerator` version | 67 | `highest` | less capable than `KeepGenerator` version | 68 | `lowest` | less capable than `KeepGenerator` version | 69 70 | Evaluator | Summary | 71 |:-------------------------------|:---------------------------------------------------------------------------| 72 | `issubset`, `<=` | Whether the left side's counts are all <= their counterparts on the right | 73 | `issuperset`, `>=` | Whether the left side's counts are all >= their counterparts on the right | 74 | `isdisjoint` | Whether the left side has no positive counts in common with the right side | 75 | `<` | As `<=`, but `False` if the two multisets are equal | 76 | `>` | As `>=`, but `False` if the two multisets are equal | 77 | `==` | Whether the 
left side has all the same counts as the right side | 78 | `!=` | Whether the left side has any different counts to the right side | 79 | `expand` | All elements in ascending order | 80 | `sum` | Sum of all elements | 81 | `count` | The number of elements | 82 | `any` | Whether there is at least 1 element | 83 | `highest_outcome_and_count` | The highest outcome and how many of that outcome | 84 | `all_counts` | All counts in descending order | 85 | `largest_count` | The single largest count, aka x-of-a-kind | 86 | `largest_count_and_outcome` | Same but also with the corresponding outcome | 87 | `count_subset`, `//` | The number of times the right side is contained in the left side | 88 | `largest_straight` | Length of longest consecutive sequence | 89 | `largest_straight_and_outcome` | Same but also with the corresponding outcome | 90 | `all_straights` | Lengths of all consecutive sequences in descending order | 91 """ 92 93 @abstractmethod 94 def _next_state(self, state, outcome: T_contra, *counts: 95 int) -> tuple[Hashable, int]: 96 """Updates the state for this expression and does any necessary count modification. 97 98 Args: 99 state: The overall state. This will contain all information needed 100 by this expression and any previous expressions. 101 outcome: The current outcome. 102 counts: The raw counts: first, the counts resulting from bound 103 generators, then the counts from free variables. 104 This must be passed to any previous expressions. 105 106 Returns: 107 * state: The updated state, which will be seen again by this 108 `next_state` later. 109 * count: The resulting count, which will be sent forward. 110 """ 111 112 @abstractmethod 113 def _order(self) -> Order: 114 """Any ordering that is required by this expression. 115 116 This should check any previous expressions for their order, and 117 raise a ValueError for any incompatibilities. 118 119 Returns: 120 The required order. 
121 """ 122 123 @abstractmethod 124 def _free_arity(self) -> int: 125 """The minimum number of multisets/counts that must be provided to this expression. 126 127 Any excess multisets/counts that are provided will be ignored. 128 129 This does not include bound generators. 130 """ 131 132 @abstractmethod 133 def _bound_generators(self) -> 'tuple[icepool.MultisetGenerator, ...]': 134 """Returns a sequence of bound generators.""" 135 136 @abstractmethod 137 def _unbind(self, prefix_start: int, 138 free_start: int) -> 'tuple[MultisetExpression, int]': 139 """Replaces bound generators within this expression with free variables. 140 141 Bound generators are replaced with free variables with index equal to 142 their position in _bound_generators(). 143 144 Variables that are already free have their indexes shifted by the 145 number of bound generators. 146 147 Args: 148 prefix_start: The index of the next bound generator. 149 free_start: The total number of bound generators. 150 151 Returns: 152 The transformed expression and the new prefix_start. 153 """ 154 155 @staticmethod 156 def _validate_output_arity(inner: 'MultisetExpression') -> None: 157 """Validates that if the given expression is a generator, its output arity is 1.""" 158 if isinstance(inner, 159 icepool.MultisetGenerator) and inner.output_arity() != 1: 160 raise ValueError( 161 'Only generators with output arity of 1 may be bound to expressions.\nUse a multiset_function to select individual outputs.' 162 ) 163 164 @cached_property 165 def _items_for_cartesian_product(self) -> Sequence[tuple[T_contra, int]]: 166 if self._free_arity() > 0: 167 raise ValueError('Expression must be fully bound.') 168 return self.expand().items() # type: ignore 169 170 # Binary operators.
171 172 def __add__( 173 self, other: 174 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 175 /) -> 'MultisetExpression[T_contra]': 176 try: 177 return MultisetExpression.additive_union(self, other) 178 except ImplicitConversionError: 179 return NotImplemented 180 181 def __radd__( 182 self, other: 183 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 184 /) -> 'MultisetExpression[T_contra]': 185 try: 186 return MultisetExpression.additive_union(other, self) 187 except ImplicitConversionError: 188 return NotImplemented 189 190 def additive_union( 191 *args: 192 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 193 ) -> 'MultisetExpression[T_contra]': 194 """The combined elements from all of the multisets. 195 196 Same as `a + b + c + ...`. 197 198 Any resulting counts that would be negative are set to zero. 199 200 Example: 201 ```python 202 [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4] 203 ``` 204 """ 205 expressions = tuple( 206 implicit_convert_to_expression(arg) for arg in args) 207 return icepool.expression.AdditiveUnionExpression(*expressions) 208 209 def __sub__( 210 self, other: 211 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 212 /) -> 'MultisetExpression[T_contra]': 213 try: 214 return MultisetExpression.difference(self, other) 215 except ImplicitConversionError: 216 return NotImplemented 217 218 def __rsub__( 219 self, other: 220 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 221 /) -> 'MultisetExpression[T_contra]': 222 try: 223 return MultisetExpression.difference(other, self) 224 except ImplicitConversionError: 225 return NotImplemented 226 227 def difference( 228 *args: 229 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 230 ) -> 'MultisetExpression[T_contra]': 231 """The elements from the left multiset that are not in any of the others. 232 233 Same as `a - b - c - ...`. 
234 235 Any resulting counts that would be negative are set to zero. 236 237 Example: 238 ```python 239 [1, 2, 2, 3] - [1, 2, 4] -> [2, 3] 240 ``` 241 242 If no arguments are given, the result will be an empty multiset, i.e. 243 all zero counts. 244 245 Note that, as a multiset operation, this will only cancel elements 1:1. 246 If you want to drop all elements in a set of outcomes regardless of 247 count, either use `drop_outcomes()` instead, or use a large number of 248 counts on the right side. 249 """ 250 expressions = tuple( 251 implicit_convert_to_expression(arg) for arg in args) 252 return icepool.expression.DifferenceExpression(*expressions) 253 254 def __and__( 255 self, other: 256 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 257 /) -> 'MultisetExpression[T_contra]': 258 try: 259 return MultisetExpression.intersection(self, other) 260 except ImplicitConversionError: 261 return NotImplemented 262 263 def __rand__( 264 self, other: 265 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 266 /) -> 'MultisetExpression[T_contra]': 267 try: 268 return MultisetExpression.intersection(other, self) 269 except ImplicitConversionError: 270 return NotImplemented 271 272 def intersection( 273 *args: 274 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 275 ) -> 'MultisetExpression[T_contra]': 276 """The elements that all the multisets have in common. 277 278 Same as `a & b & c & ...`. 279 280 Any resulting counts that would be negative are set to zero. 281 282 Example: 283 ```python 284 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 285 ``` 286 287 Note that, as a multiset operation, this will only intersect elements 288 1:1. 289 If you want to keep all elements in a set of outcomes regardless of 290 count, either use `keep_outcomes()` instead, or use a large number of 291 counts on the right side. 
292 """ 293 expressions = tuple( 294 implicit_convert_to_expression(arg) for arg in args) 295 return icepool.expression.IntersectionExpression(*expressions) 296 297 def __or__( 298 self, other: 299 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 300 /) -> 'MultisetExpression[T_contra]': 301 try: 302 return MultisetExpression.union(self, other) 303 except ImplicitConversionError: 304 return NotImplemented 305 306 def __ror__( 307 self, other: 308 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 309 /) -> 'MultisetExpression[T_contra]': 310 try: 311 return MultisetExpression.union(other, self) 312 except ImplicitConversionError: 313 return NotImplemented 314 315 def union( 316 *args: 317 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 318 ) -> 'MultisetExpression[T_contra]': 319 """The most of each outcome that appear in any of the multisets. 320 321 Same as `a | b | c | ...`. 322 323 Any resulting counts that would be negative are set to zero. 324 325 Example: 326 ```python 327 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 328 ``` 329 """ 330 expressions = tuple( 331 implicit_convert_to_expression(arg) for arg in args) 332 return icepool.expression.UnionExpression(*expressions) 333 334 def __xor__( 335 self, other: 336 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 337 /) -> 'MultisetExpression[T_contra]': 338 try: 339 return MultisetExpression.symmetric_difference(self, other) 340 except ImplicitConversionError: 341 return NotImplemented 342 343 def __rxor__( 344 self, other: 345 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 346 /) -> 'MultisetExpression[T_contra]': 347 try: 348 # Symmetric. 
349 return MultisetExpression.symmetric_difference(self, other) 350 except ImplicitConversionError: 351 return NotImplemented 352 353 def symmetric_difference( 354 self, other: 355 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 356 /) -> 'MultisetExpression[T_contra]': 357 """The elements that appear in the left or right multiset but not both. 358 359 Same as `a ^ b`. 360 361 Specifically, this produces the absolute difference between counts. 362 If you don't want negative counts to be used from the inputs, you can 363 do `left.keep_counts_ge(0) ^ right.keep_counts_ge(0)`. 364 365 Example: 366 ```python 367 [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4] 368 ``` 369 """ 370 other = implicit_convert_to_expression(other) 371 return icepool.expression.SymmetricDifferenceExpression(self, other) 372 373 def keep_outcomes( 374 self, target: 375 'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]', 376 /) -> 'MultisetExpression[T_contra]': 377 """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero. 378 379 This is similar to `intersection()`, except the right side is considered 380 to have unlimited multiplicity. 381 382 Args: 383 target: A callable returning `True` iff the outcome should be kept, 384 or an expression or collection of outcomes to keep. 385 """ 386 if isinstance(target, MultisetExpression): 387 return icepool.expression.FilterOutcomesBinaryExpression( 388 self, target) 389 else: 390 return icepool.expression.FilterOutcomesExpression(self, target) 391 392 def drop_outcomes( 393 self, target: 394 'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]', 395 /) -> 'MultisetExpression[T_contra]': 396 """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest. 397 398 This is similar to `difference()`, except the right side is considered 399 to have unlimited multiplicity. 
400 401 Args: 402 target: A callable returning `True` iff the outcome should be 403 dropped, or an expression or collection of outcomes to drop. 404 """ 405 if isinstance(target, MultisetExpression): 406 return icepool.expression.FilterOutcomesBinaryExpression( 407 self, target, invert=True) 408 else: 409 return icepool.expression.FilterOutcomesExpression(self, 410 target, 411 invert=True) 412 413 # Adjust counts. 414 415 def map_counts( 416 *args: 417 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 418 function: Callable[..., int]) -> 'MultisetExpression[T_contra]': 419 """Maps the counts to new counts. 420 421 Args: 422 function: A function that takes `outcome, *counts` and produces a 423 combined count. 424 """ 425 expressions = tuple( 426 implicit_convert_to_expression(arg) for arg in args) 427 return icepool.expression.MapCountsExpression(*expressions, 428 function=function) 429 430 def __mul__(self, n: int) -> 'MultisetExpression[T_contra]': 431 if not isinstance(n, int): 432 return NotImplemented 433 return self.multiply_counts(n) 434 435 # Commutable in this case. 436 def __rmul__(self, n: int) -> 'MultisetExpression[T_contra]': 437 if not isinstance(n, int): 438 return NotImplemented 439 return self.multiply_counts(n) 440 441 def multiply_counts(self, n: int, /) -> 'MultisetExpression[T_contra]': 442 """Multiplies all counts by n. 443 444 Same as `self * n`. 445 446 Example: 447 ```python 448 Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3] 449 ``` 450 """ 451 return icepool.expression.MultiplyCountsExpression(self, n) 452 453 @overload 454 def __floordiv__(self, other: int) -> 'MultisetExpression[T_contra]': 455 ... 
456 457 @overload 458 def __floordiv__( 459 self, other: 460 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 461 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 462 """Same as count_subset().""" 463 464 @overload 465 def __floordiv__( 466 self, other: 467 'int | MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 468 ) -> 'MultisetExpression[T_contra] | icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 469 ... 470 471 def __floordiv__( 472 self, other: 473 'int | MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 474 ) -> 'MultisetExpression[T_contra] | icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 475 if isinstance(other, int): 476 return self.divide_counts(other) 477 else: 478 return self.count_subset(other) 479 480 def divide_counts(self, n: int, /) -> 'MultisetExpression[T_contra]': 481 """Divides all counts by n (rounding down). 482 483 Same as `self // n`. 484 485 Example: 486 ```python 487 Pool([1, 2, 2, 3]) // 2 -> [2] 488 ``` 489 """ 490 return icepool.expression.FloorDivCountsExpression(self, n) 491 492 def __mod__(self, n: int, /) -> 'MultisetExpression[T_contra]': 493 if not isinstance(n, int): 494 return NotImplemented 495 return icepool.expression.ModuloCountsExpression(self, n) 496 497 def modulo_counts(self, n: int, /) -> 'MultisetExpression[T_contra]': 498 """Modulos all counts by n. 499 500 Same as `self % n`. 501 502 Example: 503 ```python 504 Pool([1, 2, 2, 3]) % 2 -> [1, 3] 505 ``` 506 """ 507 return self % n 508 509 def __pos__(self) -> 'MultisetExpression[T_contra]': 510 return icepool.expression.KeepCountsExpression(self, 0, operator.ge) 511 512 def __neg__(self) -> 'MultisetExpression[T_contra]': 513 """As -1 * self.""" 514 return -1 * self 515 516 def keep_counts_le(self, n: int, /) -> 'MultisetExpression[T_contra]': 517 """Keeps counts that are <= n, treating the rest as zero.
518 519 For example, `expression.keep_counts_le(2)` would remove triplets and 520 better. 521 522 Example: 523 ```python 524 Pool([1, 2, 2, 3, 3, 3]).keep_counts_le(2) -> [1, 2, 2] 525 ``` 526 """ 527 return icepool.expression.KeepCountsExpression(self, n, operator.le) 528 529 def keep_counts_lt(self, n: int, /) -> 'MultisetExpression[T_contra]': 530 """Keeps counts that are < n, treating the rest as zero. 531 532 For example, `expression.keep_counts_lt(2)` would remove doubles, 533 triplets... 534 535 Example: 536 ```python 537 Pool([1, 2, 2, 3, 3, 3]).keep_counts_lt(2) -> [1] 538 ``` 539 """ 540 return icepool.expression.KeepCountsExpression(self, n, operator.lt) 541 542 def keep_counts_ge(self, n: int, /) -> 'MultisetExpression[T_contra]': 543 """Keeps counts that are >= n, treating the rest as zero. 544 545 For example, `expression.keep_counts_ge(2)` would only produce 546 pairs and better. 547 548 `expression.keep_counts_ge(0)` is useful for removing negative counts. 549 You can use the unary operator `+expression` for the same effect. 550 551 Example: 552 ```python 553 Pool([1, 2, 2, 3, 3, 3]).keep_counts_ge(2) -> [2, 2, 3, 3, 3] 554 ``` 555 """ 556 return icepool.expression.KeepCountsExpression(self, n, operator.ge) 557 558 def keep_counts_gt(self, n: int, /) -> 'MultisetExpression[T_contra]': 559 """Keeps counts that are > n, treating the rest as zero. 560 561 For example, `expression.keep_counts_gt(2)` would remove singles and 562 doubles. 563 564 Example: 565 ```python 566 Pool([1, 2, 2, 3, 3, 3]).keep_counts_gt(2) -> [3, 3, 3] 567 ``` 568 """ 569 return icepool.expression.KeepCountsExpression(self, n, operator.gt) 570 571 def keep_counts_eq(self, n: int, /) -> 'MultisetExpression[T_contra]': 572 """Keeps counts that are == n, treating the rest as zero. 573 574 For example, `expression.keep_counts_eq(2)` would keep pairs but not 575 singles or triplets.
576 577 Example: 578 ```python 579 Pool([1, 2, 2, 3, 3, 3]).keep_counts_eq(2) -> [2, 2] 580 ``` 581 """ 582 return icepool.expression.KeepCountsExpression(self, n, operator.eq) 583 584 def keep_counts_ne(self, n: int, /) -> 'MultisetExpression[T_contra]': 585 """Keeps counts that are != n, treating the rest as zero. 586 587 For example, `expression.keep_counts_ne(2)` would drop pairs but keep 588 singles and triplets. 589 590 Example: 591 ```python 592 Pool([1, 2, 2, 3, 3, 3]).keep_counts_ne(2) -> [1, 3, 3, 3] 593 ``` 594 """ 595 return icepool.expression.KeepCountsExpression(self, n, operator.ne) 596 597 def unique(self, n: int = 1, /) -> 'MultisetExpression[T_contra]': 598 """Counts each outcome at most `n` times. 599 600 For example, `generator.unique(2)` would count each outcome at most 601 twice. 602 603 Example: 604 ```python 605 Pool([1, 2, 2, 3]).unique() -> [1, 2, 3] 606 ``` 607 """ 608 return icepool.expression.UniqueExpression(self, n) 609 610 # Keep highest / lowest. 611 612 @overload 613 def keep( 614 self, index: slice | Sequence[int | EllipsisType] 615 ) -> 'MultisetExpression[T_contra]': 616 ... 617 618 @overload 619 def keep( 620 self, index: int 621 ) -> 'icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]': 622 ... 623 624 def keep( 625 self, index: slice | Sequence[int | EllipsisType] | int 626 ) -> 'MultisetExpression[T_contra] | icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]': 627 """Selects pulls after drawing and sorting. 628 629 This is less capable than the `KeepGenerator` version. 630 In particular, it does not know how many elements it is selecting from, 631 so it must be anchored at the starting end. The advantage is that it 632 can be applied to any expression. 633 634 The valid types of argument are: 635 636 * A `slice`. If both start and stop are provided, they must both be 637 non-negative or both be negative. step is not supported.
638 * A sequence of `int` with `...` (`Ellipsis`) at exactly one end. 639 Each sorted element will be counted that many times, with the 640 `Ellipsis` treated as enough zeros (possibly "negative") to 641 fill the rest of the elements. 642 * An `int`, which evaluates by taking the element at the specified 643 index. In this case the result is a `Die` (if fully bound) or a 644 `MultisetEvaluator` (if there are free variables). 645 646 Use the `[]` operator for the same effect as this method. 647 """ 648 if isinstance(index, int): 649 return self.evaluate( 650 evaluator=icepool.evaluator.KeepEvaluator(index)) 651 else: 652 return icepool.expression.KeepExpression(self, index) 653 654 @overload 655 def __getitem__( 656 self, index: slice | Sequence[int | EllipsisType] 657 ) -> 'MultisetExpression[T_contra]': 658 ... 659 660 @overload 661 def __getitem__( 662 self, index: int 663 ) -> 'icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]': 664 ... 665 666 def __getitem__( 667 self, index: slice | Sequence[int | EllipsisType] | int 668 ) -> 'MultisetExpression[T_contra] | icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]': 669 return self.keep(index) 670 671 def lowest(self, 672 keep: int | None = None, 673 drop: int | None = None) -> 'MultisetExpression[T_contra]': 674 """Keep some of the lowest elements from this multiset and drop the rest. 675 676 In contrast to the die and free function versions, this does not 677 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 678 Alternatively, you can perform some other evaluation. 679 680 This requires the outcomes to be evaluated in ascending order. 681 682 Args: 683 keep, drop: These arguments work together: 684 * If neither are provided, the single lowest element 685 will be kept. 686 * If only `keep` is provided, the `keep` lowest elements 687 will be kept. 688 * If only `drop` is provided, the `drop` lowest elements 689 will be dropped and the rest will be kept. 
690 * If both are provided, `drop` lowest elements will be dropped, 691 then the next `keep` lowest elements will be kept. 692 """ 693 index = lowest_slice(keep, drop) 694 return self.keep(index) 695 696 def highest(self, 697 keep: int | None = None, 698 drop: int | None = None) -> 'MultisetExpression[T_contra]': 699 """Keep some of the highest elements from this multiset and drop the rest. 700 701 In contrast to the die and free function versions, this does not 702 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 703 Alternatively, you can perform some other evaluation. 704 705 This requires the outcomes to be evaluated in descending order. 706 707 Args: 708 keep, drop: These arguments work together: 709 * If neither are provided, the single highest element 710 will be kept. 711 * If only `keep` is provided, the `keep` highest elements 712 will be kept. 713 * If only `drop` is provided, the `drop` highest elements 714 will be dropped and the rest will be kept. 715 * If both are provided, `drop` highest elements will be dropped, 716 then the next `keep` highest elements will be kept. 717 """ 718 index = highest_slice(keep, drop) 719 return self.keep(index) 720 721 # Evaluations. 722 723 def evaluate( 724 *expressions: 'MultisetExpression[T_contra]', 725 evaluator: 'icepool.MultisetEvaluator[T_contra, U]' 726 ) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]': 727 """Attaches a final `MultisetEvaluator` to expressions. 728 729 All of the `MultisetExpression` methods below are evaluations, 730 as are the operators `<, <=, >, >=, !=, ==`. This means if the 731 expression is fully bound, it will be evaluated to a `Die`. 732 733 Returns: 734 A `Die` if the expression is fully bound. 735 A `MultisetEvaluator` otherwise.
736 """ 737 if all( 738 isinstance(expression, icepool.MultisetGenerator) 739 for expression in expressions): 740 return evaluator.evaluate(*expressions) 741 evaluator = icepool.evaluator.ExpressionEvaluator(*expressions, 742 evaluator=evaluator) 743 if evaluator._free_arity == 0: 744 return evaluator.evaluate() 745 else: 746 return evaluator 747 748 def expand( 749 self, 750 order: Order = Order.Ascending 751 ) -> 'icepool.Die[tuple[T_contra, ...]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, ...]]': 752 """Evaluation: All elements of the multiset in ascending order. 753 754 This is expensive and not recommended unless there are few possibilities. 755 756 Args: 757 order: Whether the elements are in ascending (default) or descending 758 order. 759 """ 760 return self.evaluate(evaluator=icepool.evaluator.ExpandEvaluator( 761 order=order)) 762 763 def sum( 764 self, 765 map: Callable[[T_contra], U] | Mapping[T_contra, U] | None = None 766 ) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]': 767 """Evaluation: The sum of all elements.""" 768 if map is None: 769 return self.evaluate(evaluator=icepool.evaluator.sum_evaluator) 770 else: 771 return self.evaluate(evaluator=icepool.evaluator.SumEvaluator(map)) 772 773 def count( 774 self 775 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 776 """Evaluation: The total number of elements in the multiset. 777 778 This is usually not very interesting unless some other operation is 779 performed first. Examples: 780 781 `generator.unique().count()` will count the number of unique outcomes. 782 783 `(generator & [4, 5, 6]).count()` will count up to one each of 784 4, 5, and 6. 785 """ 786 return self.evaluate(evaluator=icepool.evaluator.count_evaluator) 787 788 def any( 789 self 790 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 791 """Evaluation: Whether the multiset has at least one positive count. 
""" 792 return self.evaluate(evaluator=icepool.evaluator.any_evaluator) 793 794 def highest_outcome_and_count( 795 self 796 ) -> 'icepool.Die[tuple[T_contra, int]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, int]]': 797 """Evaluation: The highest outcome with positive count, along with that count. 798 799 If no outcomes have positive count, the min outcome will be returned with 0 count. 800 """ 801 return self.evaluate( 802 evaluator=icepool.evaluator.HighestOutcomeAndCountEvaluator()) 803 804 def all_counts( 805 self, 806 filter: int | None = 1 807 ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[T_contra, tuple[int, ...]]': 808 """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets. 809 810 The sizes are in **descending** order. 811 812 Args: 813 filter: Any counts below this value will not be in the output. 814 For example, `filter=2` will only produce pairs and better. 815 If `None`, no filtering will be done. 816 817 Why not just place `keep_countss_ge()` before this? 818 `keep_countss_ge()` operates by setting counts to zero, so you 819 would still need an argument to specify whether you want to 820 output zero counts. So we might as well use the argument to do 821 both. 
822 """ 823 return self.evaluate(evaluator=icepool.evaluator.AllCountsEvaluator( 824 filter=filter)) 825 826 def largest_count( 827 self 828 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 829 """Evaluation: The size of the largest matching set among the elements.""" 830 return self.evaluate( 831 evaluator=icepool.evaluator.LargestCountEvaluator()) 832 833 def largest_count_and_outcome( 834 self 835 ) -> 'icepool.Die[tuple[int, T_contra]] | icepool.MultisetEvaluator[T_contra, tuple[int, T_contra]]': 836 """Evaluation: The largest matching set among the elements and the corresponding outcome.""" 837 return self.evaluate( 838 evaluator=icepool.evaluator.LargestCountAndOutcomeEvaluator()) 839 840 def __rfloordiv__( 841 self, other: 842 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 843 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 844 other = implicit_convert_to_expression(other) 845 return other.count_subset(self) 846 847 def count_subset( 848 self, 849 divisor: 850 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 851 /, 852 *, 853 empty_divisor: int | None = None 854 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 855 """Evaluation: The number of times the divisor is contained in this multiset. 856 857 Args: 858 divisor: The multiset to divide by. 859 empty_divisor: If the divisor is empty, the outcome will be this. 860 If not set, `ZeroDivisionError` will be raised for an empty 861 right side. 862 863 Raises: 864 ZeroDivisionError: If the divisor may be empty and 865 empty_divisor_outcome is not set. 
866 """ 867 divisor = implicit_convert_to_expression(divisor) 868 return self.evaluate(divisor, 869 evaluator=icepool.evaluator.CountSubsetEvaluator( 870 empty_divisor=empty_divisor)) 871 872 def largest_straight( 873 self: 'MultisetExpression[int]' 874 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[int, int]': 875 """Evaluation: The size of the largest straight among the elements. 876 877 Outcomes must be `int`s. 878 """ 879 return self.evaluate( 880 evaluator=icepool.evaluator.LargestStraightEvaluator()) 881 882 def largest_straight_and_outcome( 883 self: 'MultisetExpression[int]' 884 ) -> 'icepool.Die[tuple[int, int]] | icepool.MultisetEvaluator[int, tuple[int, int]]': 885 """Evaluation: The size of the largest straight among the elements and the highest outcome in that straight. 886 887 Outcomes must be `int`s. 888 """ 889 return self.evaluate( 890 evaluator=icepool.evaluator.LargestStraightAndOutcomeEvaluator()) 891 892 def all_straights( 893 self: 'MultisetExpression[int]' 894 ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[int, tuple[int, ...]]': 895 """Evaluation: The sizes of all straights. 896 897 The sizes are in **descending** order. 898 899 Each element can only contribute to one straight, though duplicate 900 elements can produces straights that overlap in outcomes. In this case, 901 elements are preferentially assigned to the longer straight. 902 """ 903 return self.evaluate( 904 evaluator=icepool.evaluator.AllStraightsEvaluator()) 905 906 def all_straights_reduce_counts( 907 self: 'MultisetExpression[int]', 908 reducer: Callable[[int, int], int] = operator.mul 909 ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | icepool.MultisetEvaluator[int, tuple[tuple[int, int], ...]]': 910 """Experimental: All straights with a reduce operation on the counts. 911 912 This can be used to evaluate e.g. cribbage-style straight counting. 913 914 The result is a tuple of `(run_length, run_score)`s. 
915 """ 916 return self.evaluate( 917 evaluator=icepool.evaluator.AllStraightsReduceCountsEvaluator( 918 reducer=reducer)) 919 920 def argsort( 921 self: 922 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 923 *args: 924 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 925 order: Order = Order.Descending, 926 limit: int | None = None): 927 """Experimental: Returns the indexes of the originating multisets for each rank in their additive union. 928 929 Example: 930 ```python 931 MultisetExpression.argsort([10, 9, 5], [9, 9]) 932 ``` 933 produces 934 ```python 935 ((0,), (0, 1, 1), (0,)) 936 ``` 937 938 Args: 939 self, *args: The multiset expressions to be evaluated. 940 order: Which order the ranks are to be emitted. Default is descending. 941 limit: How many ranks to emit. Default will emit all ranks, which 942 makes the length of each outcome equal to 943 `additive_union(+self, +arg1, +arg2, ...).unique().count()` 944 """ 945 self = implicit_convert_to_expression(self) 946 converted_args = [implicit_convert_to_expression(arg) for arg in args] 947 return self.evaluate(*converted_args, 948 evaluator=icepool.evaluator.ArgsortEvaluator( 949 order=order, limit=limit)) 950 951 # Comparators. 
952 953 def _compare( 954 self, right: 955 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 956 operation_class: Type['icepool.evaluator.ComparisonEvaluator'] 957 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 958 if isinstance(right, MultisetExpression): 959 evaluator = icepool.evaluator.ExpressionEvaluator( 960 self, right, evaluator=operation_class()) 961 elif isinstance(right, (Mapping, Sequence)): 962 right_expression = icepool.implicit_convert_to_expression(right) 963 evaluator = icepool.evaluator.ExpressionEvaluator( 964 self, right_expression, evaluator=operation_class()) 965 else: 966 raise TypeError('Operand not comparable with expression.') 967 968 if evaluator._free_arity == 0: 969 return evaluator.evaluate() 970 else: 971 return evaluator 972 973 def __lt__( 974 self, other: 975 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 976 / 977 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 978 try: 979 return self._compare(other, 980 icepool.evaluator.IsProperSubsetEvaluator) 981 except TypeError: 982 return NotImplemented 983 984 def __le__( 985 self, other: 986 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 987 / 988 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 989 try: 990 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 991 except TypeError: 992 return NotImplemented 993 994 def issubset( 995 self, other: 996 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 997 / 998 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 999 """Evaluation: Whether this multiset is a subset of the other multiset. 
1000 1001 Specifically, if this multiset has a lesser or equal count for each 1002 outcome than the other multiset, this evaluates to `True`; 1003 if there is some outcome for which this multiset has a greater count 1004 than the other multiset, this evaluates to `False`. 1005 1006 `issubset` is the same as `self <= other`. 1007 1008 `self < other` evaluates a proper subset relation, which is the same 1009 except the result is `False` if the two multisets are exactly equal. 1010 """ 1011 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 1012 1013 def __gt__( 1014 self, other: 1015 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1016 / 1017 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1018 try: 1019 return self._compare(other, 1020 icepool.evaluator.IsProperSupersetEvaluator) 1021 except TypeError: 1022 return NotImplemented 1023 1024 def __ge__( 1025 self, other: 1026 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1027 / 1028 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1029 try: 1030 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1031 except TypeError: 1032 return NotImplemented 1033 1034 def issuperset( 1035 self, other: 1036 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1037 / 1038 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1039 """Evaluation: Whether this multiset is a superset of the other multiset. 1040 1041 Specifically, if this multiset has a greater or equal count for each 1042 outcome than the other multiset, this evaluates to `True`; 1043 if there is some outcome for which this multiset has a lesser count 1044 than the other multiset, this evaluates to `False`. 1045 1046 A typical use of this evaluation is testing for the presence of a 1047 combo of cards in a hand, e.g. 
1048 1049 ```python 1050 deck.deal(5) >= ['a', 'a', 'b'] 1051 ``` 1052 1053 represents the chance that a deal of 5 cards contains at least two 'a's 1054 and one 'b'. 1055 1056 `issuperset` is the same as `self >= other`. 1057 1058 `self > other` evaluates a proper superset relation, which is the same 1059 except the result is `False` if the two multisets are exactly equal. 1060 """ 1061 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1062 1063 # The result has no truth value. 1064 def __eq__( # type: ignore 1065 self, other: 1066 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1067 / 1068 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1069 try: 1070 return self._compare(other, icepool.evaluator.IsEqualSetEvaluator) 1071 except TypeError: 1072 return NotImplemented 1073 1074 # The result has no truth value. 1075 def __ne__( # type: ignore 1076 self, other: 1077 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1078 / 1079 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1080 try: 1081 return self._compare(other, 1082 icepool.evaluator.IsNotEqualSetEvaluator) 1083 except TypeError: 1084 return NotImplemented 1085 1086 def isdisjoint( 1087 self, other: 1088 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1089 / 1090 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1091 """Evaluation: Whether this multiset is disjoint from the other multiset. 1092 1093 Specifically, this evaluates to `False` if there is any outcome for 1094 which both multisets have positive count, and `True` if there is not. 
1095 """ 1096 return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator) 1097 1098 def compair_lt( 1099 self, 1100 other: 1101 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1102 *, 1103 order: Order = Order.Descending): 1104 """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is < `other`. 1105 1106 Any extra unpaired elements do not affect the result. 1107 1108 Args: 1109 other: The other multiset to compare. 1110 order: Which order elements will be matched in. 1111 Default is descending. 1112 """ 1113 other = implicit_convert_to_expression(other) 1114 return self.evaluate(other, 1115 evaluator=icepool.evaluator.CompairEvalautor( 1116 order=order, 1117 tie=0, 1118 left_greater=0, 1119 right_greater=1)) 1120 1121 def compair_le( 1122 self, 1123 other: 1124 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1125 *, 1126 order: Order = Order.Descending): 1127 """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is <= `other`. 1128 1129 Any extra unpaired elements do not affect the result. 1130 1131 Example: number of armies destroyed by the defender in a 1132 3v2 attack in *RISK*: 1133 ```python 1134 d6.pool(3).compair_le(d6.pool(2)) 1135 ``` 1136 1137 Args: 1138 other: The other multiset to compare. 1139 order: Which order elements will be matched in. 1140 Default is descending. 1141 """ 1142 other = implicit_convert_to_expression(other) 1143 return self.evaluate(other, 1144 evaluator=icepool.evaluator.CompairEvalautor( 1145 order=order, 1146 tie=1, 1147 left_greater=0, 1148 right_greater=1)) 1149 1150 def compair_gt( 1151 self, 1152 other: 1153 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1154 *, 1155 order: Order = Order.Descending): 1156 """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is > `other`. 
1157 1158 Any extra unpaired elements do not affect the result. 1159 1160 Example: number of armies destroyed by the attacker in a 1161 3v2 attack in *RISK*: 1162 ```python 1163 d6.pool(3).compair_gt(d6.pool(2)) 1164 ``` 1165 1166 Args: 1167 other: The other multiset to compare. 1168 order: Which order elements will be matched in. 1169 Default is descending. 1170 """ 1171 other = implicit_convert_to_expression(other) 1172 return self.evaluate(other, 1173 evaluator=icepool.evaluator.CompairEvalautor( 1174 order=order, 1175 tie=0, 1176 left_greater=1, 1177 right_greater=0)) 1178 1179 def compair_ge( 1180 self, 1181 other: 1182 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1183 *, 1184 order: Order = Order.Descending): 1185 """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is >= `other`. 1186 1187 Any extra unpaired elements do not affect the result. 1188 1189 Args: 1190 other: The other multiset to compare. 1191 order: Which order elements will be matched in. 1192 Default is descending. 1193 """ 1194 other = implicit_convert_to_expression(other) 1195 return self.evaluate(other, 1196 evaluator=icepool.evaluator.CompairEvalautor( 1197 order=order, 1198 tie=1, 1199 left_greater=1, 1200 right_greater=0)) 1201 1202 def compair_eq( 1203 self, 1204 other: 1205 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1206 *, 1207 order: Order = Order.Descending): 1208 """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is == `other`. 1209 1210 Any extra unpaired elements do not affect the result. 1211 1212 Args: 1213 other: The other multiset to compare. 1214 order: Which order elements will be matched in. 1215 Default is descending.
1216 """ 1217 other = implicit_convert_to_expression(other) 1218 return self.evaluate(other, 1219 evaluator=icepool.evaluator.CompairEvalautor( 1220 order=order, 1221 tie=1, 1222 left_greater=0, 1223 right_greater=0)) 1224 1225 def compair_ne( 1226 self, 1227 other: 1228 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1229 *, 1230 order: Order = Order.Descending): 1231 """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is >= `other`. 1232 1233 Any extra unpaired elements do not affect the result. 1234 1235 Args: 1236 other: The other multiset to compare. 1237 order: Which order elements will be matched in. 1238 Default is descending. 1239 """ 1240 other = implicit_convert_to_expression(other) 1241 return self.evaluate(other, 1242 evaluator=icepool.evaluator.CompairEvalautor( 1243 order=order, 1244 tie=0, 1245 left_greater=1, 1246 right_greater=1))
Abstract base class representing an expression that operates on multisets.
Expression methods can be applied to `MultisetGenerator`s to do simple evaluations. For joint evaluations, try `multiset_function`.
Use the provided operations to build up more complicated expressions, or to attach a final evaluator.
Operations include:
| Operation | Count / notes |
|:---|:---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `modulo_counts`, `%` | `count % n` |
| `keep_counts_ge` etc. | `count if count >= n else 0` etc. |
| unary `+` | same as `keep_counts_ge(0)` |
| unary `-` | reverses the sign of all counts |
| `unique` | `min(count, n)` |
| `keep_outcomes` | `count if outcome in t else 0` |
| `drop_outcomes` | `count if outcome not in t else 0` |
| `map_counts` | `f(outcome, *counts)` |
| `keep`, `[]` | less capable than the `KeepGenerator` version |
| `highest` | less capable than the `KeepGenerator` version |
| `lowest` | less capable than the `KeepGenerator` version |
| Evaluator | Summary |
|:---|:---|
| `issubset`, `<=` | Whether the left side's counts are all `<=` their counterparts on the right |
| `issuperset`, `>=` | Whether the left side's counts are all `>=` their counterparts on the right |
| `isdisjoint` | Whether the left side has no positive counts in common with the right side |
| `<` | As `<=`, but `False` if the two multisets are equal |
| `>` | As `>=`, but `False` if the two multisets are equal |
| `==` | Whether the left side has all the same counts as the right side |
| `!=` | Whether the left side has any different counts to the right side |
| `expand` | All elements in ascending order |
| `sum` | Sum of all elements |
| `count` | The number of elements |
| `any` | Whether there is at least 1 element |
| `highest_outcome_and_count` | The highest outcome and how many of that outcome |
| `all_counts` | All counts in descending order |
| `largest_count` | The single largest count, aka x-of-a-kind |
| `largest_count_and_outcome` | Same but also with the corresponding outcome |
| `count_subset`, `//` | The number of times the right side is contained in the left side |
| `largest_straight` | Length of the longest consecutive sequence |
| `largest_straight_and_outcome` | Same but also with the corresponding outcome |
| `all_straights` | Lengths of all consecutive sequences in descending order |
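The count columns in the operation table read as per-outcome rules. As a plain-Python sketch of those rules using `collections.Counter` (an illustration of the semantics only, not icepool's implementation):

```python
from collections import Counter

def combine(left, right, rule):
    """Apply a per-outcome count rule; negative results are treated as zero."""
    result = Counter()
    for o in set(left) | set(right):
        c = rule(left[o], right[o])
        if c > 0:
            result[o] = c
    return result

l = Counter([1, 2, 2, 3])
r = Counter([1, 2, 4])

print(sorted(combine(l, r, lambda a, b: a + b).elements()))       # additive_union
print(sorted(combine(l, r, lambda a, b: a - b).elements()))       # difference
print(sorted(combine(l, r, min).elements()))                      # intersection
print(sorted(combine(l, r, max).elements()))                      # union
print(sorted(combine(l, r, lambda a, b: abs(a - b)).elements()))  # symmetric_difference
```

The outputs match the `[1, 2, 2, 3]` vs `[1, 2, 4]` examples in the method docstrings below.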
190 def additive_union( 191 *args: 192 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 193 ) -> 'MultisetExpression[T_contra]': 194 """The combined elements from all of the multisets. 195 196 Same as `a + b + c + ...`. 197 198 Any resulting counts that would be negative are set to zero. 199 200 Example: 201 ```python 202 [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4] 203 ``` 204 """ 205 expressions = tuple( 206 implicit_convert_to_expression(arg) for arg in args) 207 return icepool.expression.AdditiveUnionExpression(*expressions)
The combined elements from all of the multisets.

Same as `a + b + c + ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
```
227 def difference( 228 *args: 229 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 230 ) -> 'MultisetExpression[T_contra]': 231 """The elements from the left multiset that are not in any of the others. 232 233 Same as `a - b - c - ...`. 234 235 Any resulting counts that would be negative are set to zero. 236 237 Example: 238 ```python 239 [1, 2, 2, 3] - [1, 2, 4] -> [2, 3] 240 ``` 241 242 If no arguments are given, the result will be an empty multiset, i.e. 243 all zero counts. 244 245 Note that, as a multiset operation, this will only cancel elements 1:1. 246 If you want to drop all elements in a set of outcomes regardless of 247 count, either use `drop_outcomes()` instead, or use a large number of 248 counts on the right side. 249 """ 250 expressions = tuple( 251 implicit_convert_to_expression(arg) for arg in args) 252 return icepool.expression.DifferenceExpression(*expressions)
The elements from the left multiset that are not in any of the others.

Same as `a - b - c - ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
```

If no arguments are given, the result will be an empty multiset, i.e. all zero counts.

Note that, as a multiset operation, this will only cancel elements 1:1. If you want to drop all elements in a set of outcomes regardless of count, either use `drop_outcomes()` instead, or use a large number of counts on the right side.
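The 1:1 cancellation note is the key difference between `difference()` and `drop_outcomes()`. A plain-Python contrast using `collections.Counter` (not icepool's API):

```python
from collections import Counter

left = Counter([2, 2, 2, 3])
right = Counter([2])

# difference-style: cancels 1:1, so only one of the three 2s is removed
diff = Counter({o: c - right[o] for o, c in left.items() if c - right[o] > 0})
print(sorted(diff.elements()))  # [2, 2, 3]

# drop_outcomes-style: zeroes the count of 2 entirely, however many copies
dropped = Counter({o: c for o, c in left.items() if o != 2})
print(sorted(dropped.elements()))  # [3]
```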
272 def intersection( 273 *args: 274 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 275 ) -> 'MultisetExpression[T_contra]': 276 """The elements that all the multisets have in common. 277 278 Same as `a & b & c & ...`. 279 280 Any resulting counts that would be negative are set to zero. 281 282 Example: 283 ```python 284 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 285 ``` 286 287 Note that, as a multiset operation, this will only intersect elements 288 1:1. 289 If you want to keep all elements in a set of outcomes regardless of 290 count, either use `keep_outcomes()` instead, or use a large number of 291 counts on the right side. 292 """ 293 expressions = tuple( 294 implicit_convert_to_expression(arg) for arg in args) 295 return icepool.expression.IntersectionExpression(*expressions)
The elements that all the multisets have in common.

Same as `a & b & c & ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
```

Note that, as a multiset operation, this will only intersect elements 1:1. If you want to keep all elements in a set of outcomes regardless of count, either use `keep_outcomes()` instead, or use a large number of counts on the right side.
315 def union( 316 *args: 317 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 318 ) -> 'MultisetExpression[T_contra]': 319 """The most of each outcome that appears in any of the multisets. 320 321 Same as `a | b | c | ...`. 322 323 Any resulting counts that would be negative are set to zero. 324 325 Example: 326 ```python 327 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 328 ``` 329 """ 330 expressions = tuple( 331 implicit_convert_to_expression(arg) for arg in args) 332 return icepool.expression.UnionExpression(*expressions)
The most of each outcome that appears in any of the multisets.

Same as `a | b | c | ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
```
353 def symmetric_difference( 354 self, other: 355 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 356 /) -> 'MultisetExpression[T_contra]': 357 """The elements that appear in the left or right multiset but not both. 358 359 Same as `a ^ b`. 360 361 Specifically, this produces the absolute difference between counts. 362 If you don't want negative counts to be used from the inputs, you can 363 do `left.keep_counts_ge(0) ^ right.keep_counts_ge(0)`. 364 365 Example: 366 ```python 367 [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4] 368 ``` 369 """ 370 other = implicit_convert_to_expression(other) 371 return icepool.expression.SymmetricDifferenceExpression(self, other)
The elements that appear in the left or right multiset but not both.

Same as `a ^ b`.

Specifically, this produces the absolute difference between counts. If you don't want negative counts to be used from the inputs, you can do `left.keep_counts_ge(0) ^ right.keep_counts_ge(0)`.

Example:
```python
[1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
```
373 def keep_outcomes( 374 self, target: 375 'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]', 376 /) -> 'MultisetExpression[T_contra]': 377 """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero. 378 379 This is similar to `intersection()`, except the right side is considered 380 to have unlimited multiplicity. 381 382 Args: 383 target: A callable returning `True` iff the outcome should be kept, 384 or an expression or collection of outcomes to keep. 385 """ 386 if isinstance(target, MultisetExpression): 387 return icepool.expression.FilterOutcomesBinaryExpression( 388 self, target) 389 else: 390 return icepool.expression.FilterOutcomesExpression(self, target)
Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.

This is similar to `intersection()`, except the right side is considered to have unlimited multiplicity.

Arguments:
- target: A callable returning `True` iff the outcome should be kept, or an expression or collection of outcomes to keep.
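The "unlimited multiplicity" point can be made concrete with a plain-Python sketch using `collections.Counter` (not icepool's API): `keep_outcomes` preserves every copy of a kept outcome, whereas `intersection` is capped by the right side's counts.

```python
from collections import Counter

pool = Counter([2, 2, 2, 3, 5])

# keep_outcomes-style: all three 2s survive because 2 is in the target set
target = {2, 3}
kept = Counter({o: c for o, c in pool.items() if o in target})
print(sorted(kept.elements()))  # [2, 2, 2, 3]

# intersection-style against the multiset [2, 3]: capped at min of counts
right = Counter([2, 3])
inter = Counter({o: min(c, right[o]) for o, c in pool.items()})
print(sorted(inter.elements()))  # [2, 3]
```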
392 def drop_outcomes( 393 self, target: 394 'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]', 395 /) -> 'MultisetExpression[T_contra]': 396 """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest. 397 398 This is similar to `difference()`, except the right side is considered 399 to have unlimited multiplicity. 400 401 Args: 402 target: A callable returning `True` iff the outcome should be 403 dropped, or an expression or collection of outcomes to drop. 404 """ 405 if isinstance(target, MultisetExpression): 406 return icepool.expression.FilterOutcomesBinaryExpression( 407 self, target, invert=True) 408 else: 409 return icepool.expression.FilterOutcomesExpression(self, 410 target, 411 invert=True)
Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.

This is similar to `difference()`, except the right side is considered to have unlimited multiplicity.

Arguments:
- target: A callable returning `True` iff the outcome should be dropped, or an expression or collection of outcomes to drop.
415 def map_counts( 416 *args: 417 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 418 function: Callable[..., int]) -> 'MultisetExpression[T_contra]': 419 """Maps the counts to new counts. 420 421 Args: 422 function: A function that takes `outcome, *counts` and produces a 423 combined count. 424 """ 425 expressions = tuple( 426 implicit_convert_to_expression(arg) for arg in args) 427 return icepool.expression.MapCountsExpression(*expressions, 428 function=function)
Maps the counts to new counts.

Arguments:
- function: A function that takes `outcome, *counts` and produces a combined count.
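A plain-Python sketch of the idea using `collections.Counter` (the `map_counts` helper below is an illustration, not icepool's implementation): the function receives the outcome plus one count per input multiset and returns the combined count.

```python
from collections import Counter

def map_counts(function, *multisets):
    """Combine counts per outcome; non-positive results are dropped."""
    out = Counter()
    for o in set().union(*multisets):
        c = function(o, *(m[o] for m in multisets))
        if c > 0:
            out[o] = c
    return out

a = Counter([1, 2, 2, 3])
b = Counter([2, 3, 3])

# Sum the counts, but only for outcomes present in both inputs.
both = map_counts(lambda o, ca, cb: (ca + cb) if ca and cb else 0, a, b)
print(sorted(both.elements()))  # [2, 2, 2, 3, 3, 3]
```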
441 def multiply_counts(self, n: int, /) -> 'MultisetExpression[T_contra]': 442 """Multiplies all counts by n. 443 444 Same as `self * n`. 445 446 Example: 447 ```python 448 Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3] 449 ``` 450 """ 451 return icepool.expression.MultiplyCountsExpression(self, n)
Multiplies all counts by n.

Same as `self * n`.

Example:
```python
Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
```
480 def divide_counts(self, n: int, /) -> 'MultisetExpression[T_contra]': 481 """Divides all counts by n (rounding down). 482 483 Same as `self // n`. 484 485 Example: 486 ```python 487 Pool([1, 2, 2, 3]) // 2 -> [2] 488 ``` 489 """ 490 return icepool.expression.FloorDivCountsExpression(self, n)
Divides all counts by n (rounding down).

Same as `self // n`.

Example:
```python
Pool([1, 2, 2, 3]) // 2 -> [2]
```
497 def modulo_counts(self, n: int, /) -> 'MultisetExpression[T_contra]': 498 """Takes all counts modulo n. 499 500 Same as `self % n`. 501 502 Example: 503 ```python 504 Pool([1, 2, 2, 3]) % 2 -> [1, 3] 505 ``` 506 """ 507 return self % n
Takes all counts modulo n.

Same as `self % n`.

Example:
```python
Pool([1, 2, 2, 3]) % 2 -> [1, 3]
```
516 def keep_counts_le(self, n: int, /) -> 'MultisetExpression[T_contra]': 517 """Keeps counts that are <= n, treating the rest as zero. 518 519 For example, `expression.keep_counts_le(2)` would remove triplets and 520 better. 521 522 Example: 523 ```python 524 Pool([1, 2, 2, 3, 3, 3]).keep_counts_le(2) -> [1, 2, 2] 525 ``` 526 """ 527 return icepool.expression.KeepCountsExpression(self, n, operator.le)
Keeps counts that are <= n, treating the rest as zero.

For example, `expression.keep_counts_le(2)` would remove triplets and better.

Example:
```python
Pool([1, 2, 2, 3, 3, 3]).keep_counts_le(2) -> [1, 2, 2]
```
529 def keep_counts_lt(self, n: int, /) -> 'MultisetExpression[T_contra]': 530 """Keeps counts that are < n, treating the rest as zero. 531 532 For example, `expression.keep_counts_lt(2)` would remove doubles, 533 triplets... 534 535 Example: 536 ```python 537 Pool([1, 2, 2, 3, 3, 3]).keep_counts_lt(2) -> [1] 538 ``` 539 """ 540 return icepool.expression.KeepCountsExpression(self, n, operator.lt)
Keeps counts that are < n, treating the rest as zero.
For example, expression.keep_counts_lt(2)
would remove doubles,
triplets...
Example:
Pool([1, 2, 2, 3, 3, 3]).keep_counts_lt(2) -> [1]
def keep_counts_ge(self, n: int, /) -> 'MultisetExpression[T_contra]':
    """Keeps counts that are >= n, treating the rest as zero.

    For example, `expression.keep_counts_ge(2)` would only produce
    pairs and better.

    `expression.keep_counts_ge(0)` is useful for removing negative counts.
    You can use the unary operator `+expression` for the same effect.

    Example:
    ```python
    Pool([1, 2, 2, 3, 3, 3]).keep_counts_ge(2) -> [2, 2, 3, 3, 3]
    ```
    """
    return icepool.expression.KeepCountsExpression(self, n, operator.ge)

def keep_counts_gt(self, n: int, /) -> 'MultisetExpression[T_contra]':
    """Keeps counts that are > n, treating the rest as zero.

    For example, `expression.keep_counts_gt(2)` would remove singles and
    doubles.

    Example:
    ```python
    Pool([1, 2, 2, 3, 3, 3]).keep_counts_gt(2) -> [3, 3, 3]
    ```
    """
    return icepool.expression.KeepCountsExpression(self, n, operator.gt)

def keep_counts_eq(self, n: int, /) -> 'MultisetExpression[T_contra]':
    """Keeps counts that are == n, treating the rest as zero.

    For example, `expression.keep_counts_eq(2)` would keep pairs but not
    singles or triplets.

    Example:
    ```python
    Pool([1, 2, 2, 3, 3, 3]).keep_counts_eq(2) -> [2, 2]
    ```
    """
    return icepool.expression.KeepCountsExpression(self, n, operator.eq)

def keep_counts_ne(self, n: int, /) -> 'MultisetExpression[T_contra]':
    """Keeps counts that are != n, treating the rest as zero.

    For example, `expression.keep_counts_ne(2)` would drop pairs but keep
    singles and triplets.

    Example:
    ```python
    Pool([1, 2, 2, 3, 3, 3]).keep_counts_ne(2) -> [1, 3, 3, 3]
    ```
    """
    return icepool.expression.KeepCountsExpression(self, n, operator.ne)
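The `keep_counts_*` family filters by multiplicity rather than by outcome. As a minimal sketch of that semantics on a concrete list (the real methods operate symbolically on expressions, and `keep_counts_ge_sketch` is a hypothetical helper name, not icepool API):

```python
from collections import Counter

def keep_counts_ge_sketch(elements, n):
    """Keep only outcomes whose count is >= n, treating the rest as zero.

    Pure-Python illustration of the `keep_counts_ge` semantics on a
    concrete multiset of elements.
    """
    counts = Counter(elements)
    kept = Counter({outcome: count
                    for outcome, count in counts.items() if count >= n})
    return sorted(kept.elements())

# Pairs and better survive; the lone 1 is treated as count zero.
print(keep_counts_ge_sketch([1, 2, 2, 3, 3, 3], 2))  # -> [2, 2, 3, 3, 3]
```

The other comparisons (`le`, `lt`, `gt`, `eq`, `ne`) differ only in the predicate applied to each count.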
def unique(self, n: int = 1, /) -> 'MultisetExpression[T_contra]':
    """Counts each outcome at most `n` times.

    For example, `generator.unique(2)` would count each outcome at most
    twice.

    Example:
    ```python
    Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
    ```
    """
    return icepool.expression.UniqueExpression(self, n)
def keep(
    self, index: slice | Sequence[int | EllipsisType] | int
) -> 'MultisetExpression[T_contra] | icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]':
    """Selects pulls after drawing and sorting.

    This is less capable than the `KeepGenerator` version.
    In particular, it does not know how many elements it is selecting from,
    so it must be anchored at the starting end. The advantage is that it
    can be applied to any expression.

    The valid types of argument are:

    * A `slice`. If both start and stop are provided, they must both be
      non-negative or both be negative. step is not supported.
    * A sequence of `int` with `...` (`Ellipsis`) at exactly one end.
      Each sorted element will be counted that many times, with the
      `Ellipsis` treated as enough zeros (possibly "negative") to
      fill the rest of the elements.
    * An `int`, which evaluates by taking the element at the specified
      index. In this case the result is a `Die` (if fully bound) or a
      `MultisetEvaluator` (if there are free variables).

    Use the `[]` operator for the same effect as this method.
    """
    if isinstance(index, int):
        return self.evaluate(
            evaluator=icepool.evaluator.KeepEvaluator(index))
    else:
        return icepool.expression.KeepExpression(self, index)
def lowest(self,
           keep: int | None = None,
           drop: int | None = None) -> 'MultisetExpression[T_contra]':
    """Keep some of the lowest elements from this multiset and drop the rest.

    In contrast to the die and free function versions, this does not
    automatically sum the dice. Use `.sum()` afterwards if you want to sum.
    Alternatively, you can perform some other evaluation.

    This requires the outcomes to be evaluated in ascending order.

    Args:
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest element
              will be kept.
            * If only `keep` is provided, the `keep` lowest elements
              will be kept.
            * If only `drop` is provided, the `drop` lowest elements
              will be dropped and the rest will be kept.
            * If both are provided, `drop` lowest elements will be dropped,
              then the next `keep` lowest elements will be kept.
    """
    index = lowest_slice(keep, drop)
    return self.keep(index)

def highest(self,
            keep: int | None = None,
            drop: int | None = None) -> 'MultisetExpression[T_contra]':
    """Keep some of the highest elements from this multiset and drop the rest.

    In contrast to the die and free function versions, this does not
    automatically sum the dice. Use `.sum()` afterwards if you want to sum.
    Alternatively, you can perform some other evaluation.

    This requires the outcomes to be evaluated in descending order.

    Args:
        keep, drop: These arguments work together:
            * If neither are provided, the single highest element
              will be kept.
            * If only `keep` is provided, the `keep` highest elements
              will be kept.
            * If only `drop` is provided, the `drop` highest elements
              will be dropped and the rest will be kept.
            * If both are provided, `drop` highest elements will be dropped,
              then the next `keep` highest elements will be kept.
    """
    index = highest_slice(keep, drop)
    return self.keep(index)
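The `keep`/`drop` interaction above can be sketched as a sort-and-slice on a concrete list (illustration only — `lowest_sketch` is a hypothetical name, and the real method stays symbolic rather than touching concrete rolls):

```python
def lowest_sketch(elements, keep=None, drop=None):
    """Illustrates the keep/drop semantics of `lowest` on a concrete list."""
    if keep is None and drop is None:
        keep, drop = 1, 0          # neither given: keep the single lowest
    if keep is None:
        keep = len(elements)       # only drop given: keep everything left
    if drop is None:
        drop = 0                   # only keep given: drop nothing first
    ordered = sorted(elements)
    return ordered[drop:drop + keep]

print(lowest_sketch([5, 1, 4, 2, 3]))                  # -> [1]
print(lowest_sketch([5, 1, 4, 2, 3], keep=2))          # -> [1, 2]
print(lowest_sketch([5, 1, 4, 2, 3], drop=2))          # -> [3, 4, 5]
print(lowest_sketch([5, 1, 4, 2, 3], keep=2, drop=1))  # -> [2, 3]
```

`highest` is the mirror image: sort descending (or slice from the other end) before applying the same keep/drop rule.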
def evaluate(
    *expressions: 'MultisetExpression[T_contra]',
    evaluator: 'icepool.MultisetEvaluator[T_contra, U]'
) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]':
    """Attaches a final `MultisetEvaluator` to expressions.

    All of the `MultisetExpression` methods below are evaluations,
    as are the operators `<, <=, >, >=, !=, ==`. This means that if the
    expression is fully bound, it will be evaluated to a `Die`.

    Returns:
        A `Die` if the expression is fully bound.
        A `MultisetEvaluator` otherwise.
    """
    if all(
            isinstance(expression, icepool.MultisetGenerator)
            for expression in expressions):
        return evaluator.evaluate(*expressions)
    evaluator = icepool.evaluator.ExpressionEvaluator(*expressions,
                                                      evaluator=evaluator)
    if evaluator._free_arity == 0:
        return evaluator.evaluate()
    else:
        return evaluator
def expand(
    self,
    order: Order = Order.Ascending
) -> 'icepool.Die[tuple[T_contra, ...]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, ...]]':
    """Evaluation: All elements of the multiset in ascending order.

    This is expensive and not recommended unless there are few possibilities.

    Args:
        order: Whether the elements are in ascending (default) or descending
            order.
    """
    return self.evaluate(evaluator=icepool.evaluator.ExpandEvaluator(
        order=order))

def sum(
    self,
    map: Callable[[T_contra], U] | Mapping[T_contra, U] | None = None
) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]':
    """Evaluation: The sum of all elements."""
    if map is None:
        return self.evaluate(evaluator=icepool.evaluator.sum_evaluator)
    else:
        return self.evaluate(evaluator=icepool.evaluator.SumEvaluator(map))
def count(
    self
) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
    """Evaluation: The total number of elements in the multiset.

    This is usually not very interesting unless some other operation is
    performed first. Examples:

    `generator.unique().count()` will count the number of unique outcomes.

    `(generator & [4, 5, 6]).count()` will count up to one each of
    4, 5, and 6.
    """
    return self.evaluate(evaluator=icepool.evaluator.count_evaluator)

def any(
    self
) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
    """Evaluation: Whether the multiset has at least one positive count."""
    return self.evaluate(evaluator=icepool.evaluator.any_evaluator)

def highest_outcome_and_count(
    self
) -> 'icepool.Die[tuple[T_contra, int]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, int]]':
    """Evaluation: The highest outcome with positive count, along with that count.

    If no outcomes have positive count, the min outcome will be returned with 0 count.
    """
    return self.evaluate(
        evaluator=icepool.evaluator.HighestOutcomeAndCountEvaluator())
def all_counts(
    self,
    filter: int | None = 1
) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[T_contra, tuple[int, ...]]':
    """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

    The sizes are in **descending** order.

    Args:
        filter: Any counts below this value will not be in the output.
            For example, `filter=2` will only produce pairs and better.
            If `None`, no filtering will be done.

            Why not just place `keep_counts_ge()` before this?
            `keep_counts_ge()` operates by setting counts to zero, so you
            would still need an argument to specify whether you want to
            output zero counts. So we might as well use the argument to do
            both.
    """
    return self.evaluate(evaluator=icepool.evaluator.AllCountsEvaluator(
        filter=filter))
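On a concrete list the `all_counts` result looks like the following sketch (`all_counts_sketch` is a hypothetical helper; the real evaluation produces a `Die` over such tuples rather than one tuple):

```python
from collections import Counter

def all_counts_sketch(elements, filter=1):
    """Sizes of all matching sets in descending order, as `all_counts`
    describes, computed for one concrete multiset."""
    counts = Counter(elements).values()
    if filter is not None:
        counts = [count for count in counts if count >= filter]
    return tuple(sorted(counts, reverse=True))

print(all_counts_sketch([1, 2, 2, 3, 3, 3]))            # -> (3, 2, 1)
print(all_counts_sketch([1, 2, 2, 3, 3, 3], filter=2))  # -> (3, 2)
```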
def largest_count(
    self
) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
    """Evaluation: The size of the largest matching set among the elements."""
    return self.evaluate(
        evaluator=icepool.evaluator.LargestCountEvaluator())

def largest_count_and_outcome(
    self
) -> 'icepool.Die[tuple[int, T_contra]] | icepool.MultisetEvaluator[T_contra, tuple[int, T_contra]]':
    """Evaluation: The largest matching set among the elements and the corresponding outcome."""
    return self.evaluate(
        evaluator=icepool.evaluator.LargestCountAndOutcomeEvaluator())
def count_subset(
    self,
    divisor:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    /,
    *,
    empty_divisor: int | None = None
) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
    """Evaluation: The number of times the divisor is contained in this multiset.

    Args:
        divisor: The multiset to divide by.
        empty_divisor: If the divisor is empty, the outcome will be this.
            If not set, `ZeroDivisionError` will be raised for an empty
            right side.

    Raises:
        ZeroDivisionError: If the divisor may be empty and
            `empty_divisor` is not set.
    """
    divisor = implicit_convert_to_expression(divisor)
    return self.evaluate(divisor,
                         evaluator=icepool.evaluator.CountSubsetEvaluator(
                             empty_divisor=empty_divisor))
def largest_straight(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[int] | icepool.MultisetEvaluator[int, int]':
    """Evaluation: The size of the largest straight among the elements.

    Outcomes must be `int`s.
    """
    return self.evaluate(
        evaluator=icepool.evaluator.LargestStraightEvaluator())
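For a single concrete multiset of ints, the quantity this evaluation computes is the longest run of consecutive values, as in this sketch (`largest_straight_sketch` is an illustrative name, not icepool API):

```python
def largest_straight_sketch(elements):
    """Length of the longest run of consecutive integers in one
    concrete multiset. Duplicates do not extend a straight."""
    best = run = 0
    previous = None
    for outcome in sorted(set(elements)):
        run = run + 1 if previous is not None and outcome == previous + 1 else 1
        best = max(best, run)
        previous = outcome
    return best

print(largest_straight_sketch([1, 2, 3, 5, 6]))  # -> 3
print(largest_straight_sketch([]))               # -> 0
```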
def all_straights(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[int, tuple[int, ...]]':
    """Evaluation: The sizes of all straights.

    The sizes are in **descending** order.

    Each element can only contribute to one straight, though duplicate
    elements can produce straights that overlap in outcomes. In this case,
    elements are preferentially assigned to the longer straight.
    """
    return self.evaluate(
        evaluator=icepool.evaluator.AllStraightsEvaluator())

def all_straights_reduce_counts(
    self: 'MultisetExpression[int]',
    reducer: Callable[[int, int], int] = operator.mul
) -> 'icepool.Die[tuple[tuple[int, int], ...]] | icepool.MultisetEvaluator[int, tuple[tuple[int, int], ...]]':
    """Experimental: All straights with a reduce operation on the counts.

    This can be used to evaluate e.g. cribbage-style straight counting.

    The result is a tuple of `(run_length, run_score)`s.
    """
    return self.evaluate(
        evaluator=icepool.evaluator.AllStraightsReduceCountsEvaluator(
            reducer=reducer))
def argsort(
        self:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
        *args:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
        order: Order = Order.Descending,
        limit: int | None = None):
    """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.

    Example:
    ```python
    MultisetExpression.argsort([10, 9, 5], [9, 9])
    ```
    produces
    ```python
    ((0,), (0, 1, 1), (0,))
    ```

    Args:
        self, *args: The multiset expressions to be evaluated.
        order: Which order the ranks are to be emitted. Default is descending.
        limit: How many ranks to emit. Default will emit all ranks, which
            makes the length of each outcome equal to
            `additive_union(+self, +arg1, +arg2, ...).unique().count()`
    """
    self = implicit_convert_to_expression(self)
    converted_args = [implicit_convert_to_expression(arg) for arg in args]
    return self.evaluate(*converted_args,
                         evaluator=icepool.evaluator.ArgsortEvaluator(
                             order=order, limit=limit))
def issubset(
    self, other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    /
) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
    """Evaluation: Whether this multiset is a subset of the other multiset.

    Specifically, if this multiset has a lesser or equal count for each
    outcome than the other multiset, this evaluates to `True`;
    if there is some outcome for which this multiset has a greater count
    than the other multiset, this evaluates to `False`.

    `issubset` is the same as `self <= other`.

    `self < other` evaluates a proper subset relation, which is the same
    except the result is `False` if the two multisets are exactly equal.
    """
    return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
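The count-wise comparison described above is the standard multiset subset relation, shown here on concrete lists (`issubset_sketch` is a hypothetical helper; the real method evaluates over distributions of multisets):

```python
from collections import Counter

def issubset_sketch(left, right):
    """Multiset subset test: every outcome's count in `left` must be
    <= its count in `right`."""
    left_counts, right_counts = Counter(left), Counter(right)
    return all(count <= right_counts[outcome]
               for outcome, count in left_counts.items())

print(issubset_sketch([2, 2], [1, 2, 2, 3]))  # -> True
print(issubset_sketch([2, 2, 2], [1, 2, 2]))  # -> False (not enough 2s)
```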
def issuperset(
    self, other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    /
) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
    """Evaluation: Whether this multiset is a superset of the other multiset.

    Specifically, if this multiset has a greater or equal count for each
    outcome than the other multiset, this evaluates to `True`;
    if there is some outcome for which this multiset has a lesser count
    than the other multiset, this evaluates to `False`.

    A typical use of this evaluation is testing for the presence of a
    combo of cards in a hand, e.g.

    ```python
    deck.deal(5) >= ['a', 'a', 'b']
    ```

    represents the chance that a deal of 5 cards contains at least two 'a's
    and one 'b'.

    `issuperset` is the same as `self >= other`.

    `self > other` evaluates a proper superset relation, which is the same
    except the result is `False` if the two multisets are exactly equal.
    """
    return self._compare(other, icepool.evaluator.IsSupersetEvaluator)

def isdisjoint(
    self, other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    /
) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
    """Evaluation: Whether this multiset is disjoint from the other multiset.

    Specifically, this evaluates to `False` if there is any outcome for
    which both multisets have positive count, and `True` if there is not.
    """
    return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
def compair_lt(
    self,
    other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    *,
    order: Order = Order.Descending):
    """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is < `other`.

    Any extra unpaired elements do not affect the result.

    Args:
        other: The other multiset to compare.
        order: Which order elements will be matched in.
            Default is descending.
    """
    other = implicit_convert_to_expression(other)
    return self.evaluate(other,
                         evaluator=icepool.evaluator.CompairEvalautor(
                             order=order,
                             tie=0,
                             left_greater=0,
                             right_greater=1))

def compair_le(
    self,
    other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    *,
    order: Order = Order.Descending):
    """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is <= `other`.

    Any extra unpaired elements do not affect the result.

    Example: number of armies destroyed by the defender in a
    3v2 attack in *RISK*:
    ```python
    d6.pool(3).compair_le(d6.pool(2))
    ```

    Args:
        other: The other multiset to compare.
        order: Which order elements will be matched in.
            Default is descending.
    """
    other = implicit_convert_to_expression(other)
    return self.evaluate(other,
                         evaluator=icepool.evaluator.CompairEvalautor(
                             order=order,
                             tie=1,
                             left_greater=0,
                             right_greater=1))
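The descending pairwise matching these evaluations perform can be sketched on two fixed rolls (`compair_le_sketch` is an illustrative helper, not icepool API; the real methods compute the full distribution over rolls):

```python
def compair_le_sketch(left, right):
    """Match the two lists element-by-element in descending sorted order
    and count the pairs where the left element is <= the right element.
    Unpaired leftovers are ignored, as in `compair_le`."""
    pairs = zip(sorted(left, reverse=True), sorted(right, reverse=True))
    return sum(1 for a, b in pairs if a <= b)

# One concrete 3v2 RISK roll: attacker [6, 3, 2] vs. defender [5, 3].
# Pairs are (6, 5) and (3, 3); the attacker's lone 2 is unpaired.
print(compair_le_sketch([6, 3, 2], [5, 3]))  # -> 1
```

The other `compair_*` variants change only the predicate (and whether ties count), matching the `tie`/`left_greater`/`right_greater` settings in the source.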
def compair_gt(
    self,
    other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    *,
    order: Order = Order.Descending):
    """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is > `other`.

    Any extra unpaired elements do not affect the result.

    Example: number of armies destroyed by the attacker in a
    3v2 attack in *RISK*:
    ```python
    d6.pool(3).compair_gt(d6.pool(2))
    ```

    Args:
        other: The other multiset to compare.
        order: Which order elements will be matched in.
            Default is descending.
    """
    other = implicit_convert_to_expression(other)
    return self.evaluate(other,
                         evaluator=icepool.evaluator.CompairEvalautor(
                             order=order,
                             tie=0,
                             left_greater=1,
                             right_greater=0))

def compair_ge(
    self,
    other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    *,
    order: Order = Order.Descending):
    """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is >= `other`.

    Any extra unpaired elements do not affect the result.

    Args:
        other: The other multiset to compare.
        order: Which order elements will be matched in.
            Default is descending.
    """
    other = implicit_convert_to_expression(other)
    return self.evaluate(other,
                         evaluator=icepool.evaluator.CompairEvalautor(
                             order=order,
                             tie=1,
                             left_greater=1,
                             right_greater=0))
def compair_eq(
    self,
    other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    *,
    order: Order = Order.Descending):
    """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is == `other`.

    Any extra unpaired elements do not affect the result.

    Args:
        other: The other multiset to compare.
        order: Which order elements will be matched in.
            Default is descending.
    """
    other = implicit_convert_to_expression(other)
    return self.evaluate(other,
                         evaluator=icepool.evaluator.CompairEvalautor(
                             order=order,
                             tie=1,
                             left_greater=0,
                             right_greater=0))

def compair_ne(
    self,
    other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    *,
    order: Order = Order.Descending):
    """Evaluation: EXPERIMENTAL: Compare pairs of elements in sorted order, counting how many pairs `self` is != `other`.

    Any extra unpaired elements do not affect the result.

    Args:
        other: The other multiset to compare.
        order: Which order elements will be matched in.
            Default is descending.
    """
    other = implicit_convert_to_expression(other)
    return self.evaluate(other,
                         evaluator=icepool.evaluator.CompairEvalautor(
                             order=order,
                             tie=0,
                             left_greater=1,
                             right_greater=1))
class MultisetEvaluator(ABC, Generic[T_contra, U_co]):
    """An abstract, immutable, callable class for evaluating one or more `MultisetGenerator`s.

    There is one abstract method to implement: `next_state()`.
    This should incrementally calculate the result given one outcome at a time
    along with how many of that outcome were produced.

    An example sequence of calls, as far as `next_state()` is concerned, is:

    1. `state = next_state(state=None, outcome=1, count_of_1s)`
    2. `state = next_state(state, 2, count_of_2s)`
    3. `state = next_state(state, 3, count_of_3s)`
    4. `state = next_state(state, 4, count_of_4s)`
    5. `state = next_state(state, 5, count_of_5s)`
    6. `state = next_state(state, 6, count_of_6s)`
    7. `outcome = final_outcome(state)`

    A few other methods can optionally be overridden to further customize behavior.

    It is not expected that subclasses of `MultisetEvaluator`
    be able to handle arbitrary types or numbers of generators.
    Indeed, most are expected to handle only a fixed number of generators,
    and often even only generators with a particular type of `Die`.

    Instances cache all intermediate state distributions.
    You should therefore reuse instances when possible.

    Instances should not be modified after construction
    in any way that affects the return values of these methods.
    Otherwise, values in the cache may be incorrect.
    """

    @abstractmethod
    def next_state(self, state: Hashable, outcome: T_contra, /,
                   *counts: int) -> Hashable:
        """State transition function.

        This should produce a state given the previous state, an outcome,
        and the number of that outcome produced by each generator.

        `evaluate()` will always call this using only positional arguments.
        Furthermore, there is no expectation that a subclass be able to handle
        an arbitrary number of counts.
Thus, you are free to rename any of 69 the parameters in a subclass, or to replace `*counts` with a fixed set 70 of parameters. 71 72 Make sure to handle the base case where `state is None`. 73 74 States must be hashable. At current, they do not have to be orderable. 75 However, this may change in the future, and if they are not totally 76 orderable, you must override `final_outcome` to create totally orderable 77 final outcomes. 78 79 The behavior of returning a `Die` from `next_state` is currently 80 undefined. 81 82 Args: 83 state: A hashable object indicating the state before rolling the 84 current outcome. If this is the first outcome being considered, 85 `state` will be `None`. 86 outcome: The current outcome. 87 `next_state` will see all rolled outcomes in monotonic order; 88 either ascending or descending depending on `order()`. 89 If there are multiple generators, the set of outcomes is the 90 union of the outcomes of the invididual generators. All outcomes 91 with nonzero count will be seen. Outcomes with zero count 92 may or may not be seen. If you need to enforce that certain 93 outcomes are seen even if they have zero count, see 94 `alignment()`. 95 *counts: One value (usually an `int`) for each generator output 96 indicating how many of the current outcome were produced. 97 All outcomes with nonzero count are guaranteed to be seen. 98 To guarantee that outcomes are seen even if they have zero 99 count, override `alignment()`. 100 101 Returns: 102 A hashable object indicating the next state. 103 The special value `icepool.Reroll` can be used to immediately remove 104 the state from consideration, effectively performing a full reroll. 105 """ 106 107 def final_outcome( 108 self, final_state: Hashable 109 ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType': 110 """Optional function to generate a final outcome from a final state. 111 112 By default, the final outcome is equal to the final state. 
113 Note that `None` is not a valid outcome for a `Die`, 114 and if there are no outcomes, `final_outcome` will be immediately 115 be callled with `final_state=None`. 116 Subclasses that want to handle this case should explicitly define what 117 happens. 118 119 Args: 120 final_state: A state after all outcomes have been processed. 121 122 Returns: 123 A final outcome that will be used as part of constructing the result `Die`. 124 As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`. 125 """ 126 # If not overriden, the final_state should have type U_co. 127 return cast(U_co, final_state) 128 129 def order(self) -> Order: 130 """Optional function to determine the order in which `next_state()` will see outcomes. 131 132 The default is ascending order. This has better caching behavior with 133 mixed standard dice. 134 135 Returns: 136 * Order.Ascending (= 1) 137 if `next_state()` should always see the outcomes in ascending order. 138 * Order.Descending (= -1) 139 if `next_state()` should always see the outcomes in descending order. 140 * Order.Any (= 0) 141 if the result of the evaluation is order-independent. 142 """ 143 return Order.Ascending 144 145 def alignment(self, outcomes: Sequence[T_contra]) -> Collection[T_contra]: 146 """Optional method to specify additional outcomes that should be seen by `next_state()`. 147 148 These will be seen by `next_state` even if they have zero count or do 149 not appear in the generator(s) at all. 150 151 The default implementation returns `()`; this means outcomes with zero 152 count may or may not be seen by `next_state`. 153 154 If you want `next_state` to see consecutive `int` outcomes, you can set 155 `alignment = icepool.MultisetEvaluator.range_alignment`. 156 See `range_alignment()` below. 157 158 If you want `next_state` to see all generator outcomes, you can return 159 `outcomes` as-is. 160 161 Args: 162 outcomes: The outcomes that could be produced by the generators, in 163 ascending order. 
164 """ 165 return () 166 167 def range_alignment(self, outcomes: Sequence[int]) -> Collection[int]: 168 """Example implementation of `alignment()` that produces consecutive `int` outcomes. 169 170 There is no expectation that a subclass be able to handle 171 an arbitrary number of generators. Thus, you are free to rename any of 172 the parameters in a subclass, or to replace `*generators` with a fixed 173 set of parameters. 174 175 Set `alignment = icepool.MultisetEvaluator.range_alignment` to use this. 176 177 Returns: 178 All `int`s from the min outcome to the max outcome among the generators, 179 inclusive. 180 181 Raises: 182 TypeError: if any generator has any non-`int` outcome. 183 """ 184 if not outcomes: 185 return () 186 187 if any(not isinstance(x, int) for x in outcomes): 188 raise TypeError( 189 "range_alignment cannot be used with outcomes of type other than 'int'." 190 ) 191 192 return range(outcomes[0], outcomes[-1] + 1) 193 194 def prefix_generators(self) -> 'tuple[icepool.MultisetGenerator, ...]': 195 """An optional sequence of extra generators whose counts will be prepended to *counts.""" 196 return () 197 198 def validate_arity(self, arity: int) -> None: 199 """An optional method to verify the total input arity. 200 201 This is called after any implicit conversion to generators, but does 202 not include any `extra_generators()`. 203 204 Overriding `next_state` with a fixed number of counts will make this 205 check redundant. 206 207 Raises: 208 `ValueError` if the total input arity is not valid. 209 """ 210 211 @cached_property 212 def _cache(self) -> MutableMapping[Any, Mapping[Any, int]]: 213 """A cache of (order, generators) -> weight distribution over states. """ 214 return {} 215 216 @overload 217 def evaluate( 218 self, *args: 'Mapping[T_contra, int] | Sequence[T_contra]' 219 ) -> 'icepool.Die[U_co]': 220 ... 
221 222 @overload 223 def evaluate( 224 self, *args: 'MultisetExpression[T_contra]' 225 ) -> 'MultisetEvaluator[T_contra, U_co]': 226 ... 227 228 @overload 229 def evaluate( 230 self, *args: 231 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 232 ) -> 'icepool.Die[U_co] | MultisetEvaluator[T_contra, U_co]': 233 ... 234 235 def evaluate( 236 self, *args: 237 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 238 ) -> 'icepool.Die[U_co] | MultisetEvaluator[T_contra, U_co]': 239 """Evaluates generator(s). 240 241 You can call the `MultisetEvaluator` object directly for the same effect, 242 e.g. `sum_evaluator(generator)` is an alias for `sum_evaluator.evaluate(generator)`. 243 244 Most evaluators will expect a fixed number of input multisets. 245 The union of the outcomes of the generator(s) must be totally orderable. 246 247 Args: 248 *args: Each may be one of the following: 249 * A `MultisetExpression`. 250 * A mappable mapping outcomes to the number of those outcomes. 251 * A sequence of outcomes. 252 253 Returns: 254 A `Die` representing the distribution of the final outcome if no 255 arg contains a free variable. Otherwise, returns a new evaluator. 256 """ 257 from icepool.generator.alignment import Alignment 258 259 # Convert arguments to expressions. 
260 expressions = tuple( 261 icepool.implicit_convert_to_expression(arg) for arg in args) 262 263 if any(expression._free_arity() > 0 for expression in expressions): 264 from icepool.evaluator.expression import ExpressionEvaluator 265 return ExpressionEvaluator(*expressions, evaluator=self) 266 267 if not all( 268 isinstance(expression, icepool.MultisetGenerator) 269 for expression in expressions): 270 from icepool.evaluator.expression import ExpressionEvaluator 271 return ExpressionEvaluator(*expressions, evaluator=self).evaluate() 272 273 generators = cast(tuple[icepool.MultisetGenerator, ...], expressions) 274 275 self.validate_arity( 276 sum(generator.output_arity() for generator in generators)) 277 278 generators = self.prefix_generators() + generators 279 280 if not all(generator._is_resolvable() for generator in generators): 281 return icepool.Die([]) 282 283 algorithm, order = self._select_algorithm(*generators) 284 285 # We use a separate class to guarantee all outcomes are visited. 286 outcomes = sorted_union(*(generator.outcomes() 287 for generator in generators)) 288 alignment = Alignment(self.alignment(outcomes)) 289 290 dist: MutableMapping[Any, int] = defaultdict(int) 291 iterators = MultisetEvaluator._initialize_generators(generators) 292 for p in itertools.product(*iterators): 293 sub_generators, sub_weights = zip(*p) 294 prod_weight = math.prod(sub_weights) 295 sub_result = algorithm(order, alignment, sub_generators) 296 for sub_state, sub_weight in sub_result.items(): 297 dist[sub_state] += sub_weight * prod_weight 298 299 final_outcomes = [] 300 final_weights = [] 301 for state, weight in dist.items(): 302 outcome = self.final_outcome(state) 303 if outcome is None: 304 raise TypeError( 305 "None is not a valid final outcome.\n" 306 "This may have been a result of not supplying any generator with an outcome." 
307 ) 308 if outcome is not icepool.Reroll: 309 final_outcomes.append(outcome) 310 final_weights.append(weight) 311 312 return icepool.Die(final_outcomes, final_weights) 313 314 __call__ = evaluate 315 316 def _select_algorithm( 317 self, *generators: 'icepool.MultisetGenerator[T_contra, Any]' 318 ) -> tuple[ 319 'Callable[[Order, Alignment[T_contra], tuple[icepool.MultisetGenerator[T_contra, Any], ...]], Mapping[Any, int]]', 320 Order]: 321 """Selects an algorithm and iteration order. 322 323 Returns: 324 * The algorithm to use (`_eval_internal*`). 325 * The order in which `next_state()` sees outcomes. 326 1 for ascending and -1 for descending. 327 """ 328 eval_order = self.order() 329 330 if not generators: 331 # No generators. 332 return self._eval_internal, eval_order 333 334 pop_min_costs, pop_max_costs = zip(*(generator._estimate_order_costs() 335 for generator in generators)) 336 337 pop_min_cost = math.prod(pop_min_costs) 338 pop_max_cost = math.prod(pop_max_costs) 339 340 # No preferred order case: go directly with cost. 341 if eval_order == Order.Any: 342 if pop_max_cost <= pop_min_cost: 343 return self._eval_internal, Order.Ascending 344 else: 345 return self._eval_internal, Order.Descending 346 347 # Preferred order case. 348 # Go with the preferred order unless there is a "significant" 349 # cost factor. 350 351 if PREFERRED_ORDER_COST_FACTOR * pop_max_cost < pop_min_cost: 352 cost_order = Order.Ascending 353 elif PREFERRED_ORDER_COST_FACTOR * pop_min_cost < pop_max_cost: 354 cost_order = Order.Descending 355 else: 356 cost_order = Order.Any 357 358 if cost_order == Order.Any or eval_order == cost_order: 359 # Use the preferred algorithm. 360 return self._eval_internal, eval_order 361 else: 362 # Use the less-preferred algorithm. 
363 return self._eval_internal_iterative, eval_order 364 365 def _eval_internal( 366 self, order: Order, alignment: 'Alignment[T_contra]', 367 generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]' 368 ) -> Mapping[Any, int]: 369 """Internal algorithm for iterating in the more-preferred order, 370 i.e. giving outcomes to `next_state()` from wide to narrow. 371 372 All intermediate return values are cached in the instance. 373 374 Arguments: 375 order: The order in which to send outcomes to `next_state()`. 376 alignment: As `alignment()`. Elements will be popped off this 377 during recursion. 378 generators: One or more `MultisetGenerators`s to evaluate. Elements 379 will be popped off this during recursion. 380 381 Returns: 382 A dict `{ state : weight }` describing the probability distribution 383 over states. 384 """ 385 cache_key = (order, alignment, generators) 386 if cache_key in self._cache: 387 return self._cache[cache_key] 388 389 result: MutableMapping[Any, int] = defaultdict(int) 390 391 if all(not generator.outcomes() 392 for generator in generators) and not alignment.outcomes(): 393 result = {None: 1} 394 else: 395 outcome, prev_alignment, iterators = MultisetEvaluator._pop_generators( 396 order, alignment, generators) 397 for p in itertools.product(*iterators): 398 prev_generators, counts, weights = zip(*p) 399 counts = tuple(itertools.chain.from_iterable(counts)) 400 prod_weight = math.prod(weights) 401 prev = self._eval_internal(order, prev_alignment, 402 prev_generators) 403 for prev_state, prev_weight in prev.items(): 404 state = self.next_state(prev_state, outcome, *counts) 405 if state is not icepool.Reroll: 406 result[state] += prev_weight * prod_weight 407 408 self._cache[cache_key] = result 409 return result 410 411 def _eval_internal_iterative( 412 self, order: int, alignment: 'Alignment[T_contra]', 413 generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]' 414 ) -> Mapping[Any, int]: 415 """Internal algorithm for 
iterating in the less-preferred order, 416 i.e. giving outcomes to `next_state()` from narrow to wide. 417 418 This algorithm does not perform persistent memoization. 419 """ 420 if all(not generator.outcomes() 421 for generator in generators) and not alignment.outcomes(): 422 return {None: 1} 423 dist: MutableMapping[Any, int] = defaultdict(int) 424 dist[None, alignment, generators] = 1 425 final_dist: MutableMapping[Any, int] = defaultdict(int) 426 while dist: 427 next_dist: MutableMapping[Any, int] = defaultdict(int) 428 for (prev_state, prev_alignment, 429 prev_generators), weight in dist.items(): 430 # The order flip here is the only purpose of this algorithm. 431 outcome, alignment, iterators = MultisetEvaluator._pop_generators( 432 -order, prev_alignment, prev_generators) 433 for p in itertools.product(*iterators): 434 generators, counts, weights = zip(*p) 435 counts = tuple(itertools.chain.from_iterable(counts)) 436 prod_weight = math.prod(weights) 437 state = self.next_state(prev_state, outcome, *counts) 438 if state is not icepool.Reroll: 439 if all(not generator.outcomes() 440 for generator in generators): 441 final_dist[state] += weight * prod_weight 442 else: 443 next_dist[state, alignment, 444 generators] += weight * prod_weight 445 dist = next_dist 446 return final_dist 447 448 @staticmethod 449 def _initialize_generators( 450 generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]' 451 ) -> 'tuple[icepool.InitialMultisetGenerator, ...]': 452 return tuple(generator._generate_initial() for generator in generators) 453 454 @staticmethod 455 def _pop_generators( 456 side: int, alignment: 'Alignment[T_contra]', 457 generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]' 458 ) -> 'tuple[T_contra, Alignment[T_contra], tuple[icepool.NextMultisetGenerator, ...]]': 459 """Pops a single outcome from the generators. 460 461 Returns: 462 * The popped outcome. 463 * The remaining alignment. 
464 * A tuple of iterators over the resulting generators, counts, and weights. 465 """ 466 alignment_and_generators = (alignment, ) + generators 467 if side >= 0: 468 outcome = max(generator.max_outcome() 469 for generator in alignment_and_generators 470 if generator.outcomes()) 471 472 next_alignment, _, _ = next(alignment._generate_max(outcome)) 473 474 return outcome, next_alignment, tuple( 475 generator._generate_max(outcome) for generator in generators) 476 else: 477 outcome = min(generator.min_outcome() 478 for generator in alignment_and_generators 479 if generator.outcomes()) 480 481 next_alignment, _, _ = next(alignment._generate_min(outcome)) 482 483 return outcome, next_alignment, tuple( 484 generator._generate_min(outcome) for generator in generators) 485 486 def sample( 487 self, *generators: 488 'icepool.MultisetGenerator[T_contra, Any] | Mapping[T_contra, int] | Sequence[T_contra]' 489 ): 490 """EXPERIMENTAL: Samples one result from the generator(s) and evaluates the result.""" 491 # Convert non-`Pool` arguments to `Pool`. 492 converted_generators = tuple( 493 generator if isinstance(generator, icepool.MultisetGenerator 494 ) else icepool.Pool(generator) 495 for generator in generators) 496 497 result = self.evaluate(*itertools.chain.from_iterable( 498 generator.sample() for generator in converted_generators)) 499 500 if not result.is_empty(): 501 return result.outcomes()[0] 502 else: 503 return result 504 505 def __bool__(self) -> bool: 506 raise TypeError('MultisetEvaluator does not have a truth value.') 507 508 def __str__(self) -> str: 509 return type(self).__name__
An abstract, immutable, callable class for evaluating one or more `MultisetGenerator`s.

There is one abstract method to implement: `next_state()`. This should incrementally calculate the result given one outcome at a time along with how many of that outcome were produced.

An example sequence of calls, as far as `next_state()` is concerned, is:

1. `state = next_state(state=None, outcome=1, count_of_1s)`
2. `state = next_state(state, 2, count_of_2s)`
3. `state = next_state(state, 3, count_of_3s)`
4. `state = next_state(state, 4, count_of_4s)`
5. `state = next_state(state, 5, count_of_5s)`
6. `state = next_state(state, 6, count_of_6s)`
7. `outcome = final_outcome(state)`

A few other methods can optionally be overridden to further customize behavior.

It is not expected that subclasses of `MultisetEvaluator` be able to handle arbitrary types or numbers of generators. Indeed, most are expected to handle only a fixed number of generators, and often even only generators with a particular type of `Die`.

Instances cache all intermediate state distributions. You should therefore reuse instances when possible.

Instances should not be modified after construction in any way that affects the return values of these methods. Otherwise, values in the cache may be incorrect.
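The call sequence above is essentially a dynamic-programming fold over outcomes. As a self-contained illustration of that idea (a sketch using only the standard library, not icepool's own implementation), here is the same state-transition pattern used to compute the exact weight distribution of the sum of a pool of dice, with binomial coefficients supplying the weight of each count:

```python
from collections import defaultdict
from math import comb


def sum_of_pool(num_dice: int, num_sides: int) -> dict[int, int]:
    """Distribution of the sum of a pool of dice via a next_state-style fold.

    The state is (dice assigned so far, running sum). For each outcome in
    ascending order, we branch on how many of the remaining dice show that
    outcome, weighting by the binomial number of ways that can happen.
    """
    states = {(0, 0): 1}  # state -> weight
    for outcome in range(1, num_sides + 1):
        next_states: dict[tuple[int, int], int] = defaultdict(int)
        for (used, total), weight in states.items():
            remaining = num_dice - used
            for count in range(remaining + 1):
                ways = comb(remaining, count)
                next_states[(used + count,
                             total + outcome * count)] += weight * ways
        states = dict(next_states)
    # final_outcome step: keep states that assigned all dice; report the sum.
    result: dict[int, int] = defaultdict(int)
    for (used, total), weight in states.items():
        if used == num_dice:
            result[total] += weight
    return dict(result)
```

For 3d6 this reproduces the familiar distribution: total weight 216, with weight 1 each for sums 3 and 18 and weight 27 for sum 10.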
State transition function.

This should produce a state given the previous state, an outcome, and the number of that outcome produced by each generator.

`evaluate()` will always call this using only positional arguments. Furthermore, there is no expectation that a subclass be able to handle an arbitrary number of counts. Thus, you are free to rename any of the parameters in a subclass, or to replace `*counts` with a fixed set of parameters.

Make sure to handle the base case where `state is None`.

States must be hashable. Currently, they do not have to be orderable. However, this may change in the future, and if they are not totally orderable, you must override `final_outcome` to create totally orderable final outcomes.

The behavior of returning a `Die` from `next_state` is currently undefined.

Arguments:
- state: A hashable object indicating the state before rolling the current outcome. If this is the first outcome being considered, `state` will be `None`.
- outcome: The current outcome. `next_state` will see all rolled outcomes in monotonic order; either ascending or descending depending on `order()`. If there are multiple generators, the set of outcomes is the union of the outcomes of the individual generators. All outcomes with nonzero count will be seen. Outcomes with zero count may or may not be seen. If you need to enforce that certain outcomes are seen even if they have zero count, see `alignment()`.
- *counts: One value (usually an `int`) for each generator output indicating how many of the current outcome were produced. All outcomes with nonzero count are guaranteed to be seen. To guarantee that outcomes are seen even if they have zero count, override `alignment()`.

Returns:
A hashable object indicating the next state. The special value `icepool.Reroll` can be used to immediately remove the state from consideration, effectively performing a full reroll.
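As a concrete illustration (a hypothetical evaluator, not part of the library), a `next_state` that counts how many distinct outcomes appear at least twice might look like this; note the `state is None` base case:

```python
def next_state(state, outcome, count):
    """Count distinct outcomes appearing at least twice (e.g. pairs in a pool)."""
    if state is None:  # base case: this is the first outcome seen
        state = 0
    return state + (1 if count >= 2 else 0)


# Feeding an ascending sequence of (outcome, count) pairs by hand:
state = None
for outcome, count in [(1, 0), (2, 2), (3, 1), (4, 3)]:
    state = next_state(state, outcome, count)
# state == 2: outcomes 2 and 4 each appeared at least twice
```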
Optional function to generate a final outcome from a final state.

By default, the final outcome is equal to the final state. Note that `None` is not a valid outcome for a `Die`, and if there are no outcomes, `final_outcome` will immediately be called with `final_state=None`. Subclasses that want to handle this case should explicitly define what happens.

Arguments:
- final_state: A state after all outcomes have been processed.

Returns:
A final outcome that will be used as part of constructing the result `Die`. As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
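For example (a hypothetical override, not library code), an evaluator whose states are counts could map the no-outcome case to 0 rather than letting the invalid outcome `None` reach the `Die` constructor:

```python
def final_outcome(final_state):
    """Project the final state to the reported outcome.

    Handles the case where no outcomes were processed, in which
    final_state is None and None is not a valid Die outcome.
    """
    if final_state is None:
        return 0  # report a count of zero instead of None
    return final_state
```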
Optional function to determine the order in which `next_state()` will see outcomes.

The default is ascending order. This has better caching behavior with mixed standard dice.

Returns:
- `Order.Ascending` (= 1) if `next_state()` should always see the outcomes in ascending order.
- `Order.Descending` (= -1) if `next_state()` should always see the outcomes in descending order.
- `Order.Any` (= 0) if the result of the evaluation is order-independent.
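For instance, a fold that keeps the sum of the two highest rolls only works if it sees outcomes from high to low, so an evaluator computing it would return `Order.Descending` from `order()`. A self-contained sketch of such a fold (hypothetical, not library code):

```python
def next_state(state, outcome, count):
    """Keep the sum of the two highest rolls; requires descending order."""
    if state is None:
        state = (0, 0)  # (dice kept so far, running sum)
    kept, total = state
    take = min(count, 2 - kept)  # greedily keep the highest, up to two dice
    return (kept + take, total + outcome * take)


# Outcomes must arrive in descending order for the greedy step to be correct:
state = None
for outcome, count in [(6, 1), (5, 0), (4, 2), (3, 1)]:
    state = next_state(state, outcome, count)
# state == (2, 10): kept the 6 and one of the 4s
```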
Optional method to specify additional outcomes that should be seen by `next_state()`.

These will be seen by `next_state` even if they have zero count or do not appear in the generator(s) at all.

The default implementation returns `()`; this means outcomes with zero count may or may not be seen by `next_state`.

If you want `next_state` to see consecutive `int` outcomes, you can set `alignment = icepool.MultisetEvaluator.range_alignment`. See `range_alignment()` below.

If you want `next_state` to see all generator outcomes, you can return `outcomes` as-is.

Arguments:
- outcomes: The outcomes that could be produced by the generators, in ascending order.
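One case where this matters is detecting runs: a fold tracking the longest run of consecutive faces must observe zero counts, or a gap would be skipped silently. A hypothetical sketch (the `next_state` below assumes every consecutive integer outcome is delivered, which is what `range_alignment` guarantees):

```python
def next_state(state, outcome, count):
    """Track the longest run of consecutive faces present in the pool.

    Assumes consecutive int outcomes are seen, including zero-count ones;
    a zero count is what breaks the current run.
    """
    if state is None:
        state = (0, 0)  # (current run length, best run length)
    run, best = state
    run = run + 1 if count > 0 else 0
    return (run, max(best, run))


# Faces 1-6 with a gap at 3; the zero count must be seen to break the run:
state = None
for outcome, count in [(1, 1), (2, 2), (3, 0), (4, 1), (5, 1), (6, 1)]:
    state = next_state(state, outcome, count)
# state == (3, 3): faces 4-5-6 form the longest run
```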
Example implementation of `alignment()` that produces consecutive `int` outcomes.

There is no expectation that a subclass be able to handle an arbitrary number of generators. Thus, you are free to rename any of the parameters in a subclass, or to replace `*generators` with a fixed set of parameters.

Set `alignment = icepool.MultisetEvaluator.range_alignment` to use this.

Returns:
All `int`s from the min outcome to the max outcome among the generators, inclusive.

Raises:
- TypeError: if any generator has any non-`int` outcome.
An optional sequence of extra generators whose counts will be prepended to `*counts`.
An optional method to verify the total input arity.

This is called after any implicit conversion to generators, but does not include any `extra_generators()`.

Overriding `next_state` with a fixed number of counts will make this check redundant.

Raises:
- `ValueError` if the total input arity is not valid.
Evaluates generator(s).

You can call the `MultisetEvaluator` object directly for the same effect, e.g. `sum_evaluator(generator)` is an alias for `sum_evaluator.evaluate(generator)`.

Most evaluators will expect a fixed number of input multisets. The union of the outcomes of the generator(s) must be totally orderable.

Arguments:
- *args: Each may be one of the following:
  - A `MultisetExpression`.
  - A mapping from outcomes to the number of those outcomes.
  - A sequence of outcomes.

Returns:
A `Die` representing the distribution of the final outcome if no arg contains a free variable. Otherwise, returns a new evaluator.
EXPERIMENTAL: Samples one result from the generator(s) and evaluates the result.
```python
class Order(enum.IntEnum):
    """Can be used to define what order outcomes are seen in by MultisetEvaluators."""
    Ascending = 1
    Descending = -1
    Any = 0

    def merge(*orders: 'Order') -> 'Order':
        """Merges the given Orders.

        Returns:
            `Any` if all arguments are `Any`.
            `Ascending` if there is at least one `Ascending` in the arguments.
            `Descending` if there is at least one `Descending` in the arguments.

        Raises:
            `ValueError` if both `Ascending` and `Descending` are in the
            arguments.
        """
        result = Order.Any
        for order in orders:
            if (result > 0 and order < 0) or (result < 0 and order > 0):
                raise ValueError(
                    f'Conflicting orders {orders}.\n' +
                    'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
                )
            if result == Order.Any:
                result = order
        return result
```
Can be used to define what order outcomes are seen in by MultisetEvaluators.
```python
def merge(*orders: 'Order') -> 'Order':
    """Merges the given Orders.

    Returns:
        `Any` if all arguments are `Any`.
        `Ascending` if there is at least one `Ascending` in the arguments.
        `Descending` if there is at least one `Descending` in the arguments.

    Raises:
        `ValueError` if both `Ascending` and `Descending` are in the
        arguments.
    """
    result = Order.Any
    for order in orders:
        if (result > 0 and order < 0) or (result < 0 and order > 0):
            raise ValueError(
                f'Conflicting orders {orders}.\n' +
                'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
            )
        if result == Order.Any:
            result = order
    return result
```
Merges the given Orders.

Returns:
- `Any` if all arguments are `Any`.
- `Ascending` if there is at least one `Ascending` in the arguments.
- `Descending` if there is at least one `Descending` in the arguments.

Raises:
- `ValueError` if both `Ascending` and `Descending` are in the arguments.
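The merge logic can be reproduced as a standalone sketch — a minimal re-implementation for illustration, not icepool's own module:

```python
import enum

class Order(enum.IntEnum):
    Ascending = 1
    Descending = -1
    Any = 0

def merge(*orders: Order) -> Order:
    """Merge orders: Any defers to either direction; conflicting directions raise."""
    result = Order.Any
    for order in orders:
        # A positive (Ascending) and a negative (Descending) value conflict.
        if (result > 0 > order) or (result < 0 < order):
            raise ValueError(f'Conflicting orders {orders}.')
        if result == Order.Any:
            result = order
    return result

print(merge(Order.Any, Order.Ascending).name)  # Ascending
```

Because the members are `IntEnum` values with signs encoding direction, the conflict test is a pair of sign comparisons rather than explicit membership checks.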
Inherited Members
- enum.Enum
- name
- value
- builtins.int
- conjugate
- bit_length
- bit_count
- to_bytes
- from_bytes
- as_integer_ratio
- is_integer
- real
- imag
- numerator
- denominator
20class Deck(Population[T_co]): 21 """Sampling without replacement (within a single evaluation). 22 23 Quantities represent duplicates. 24 """ 25 26 _data: Counts[T_co] 27 _deal: int 28 29 @property 30 def _new_type(self) -> type: 31 return Deck 32 33 def __new__(cls, 34 outcomes: Sequence | Mapping[Any, int], 35 times: Sequence[int] | int = 1) -> 'Deck[T_co]': 36 """Constructor for a `Deck`. 37 38 Args: 39 outcomes: The cards of the `Deck`. This can be one of the following: 40 * A `Sequence` of outcomes. Duplicates will contribute 41 quantity for each appearance. 42 * A `Mapping` from outcomes to quantities. 43 44 Each outcome may be one of the following: 45 * An outcome, which must be hashable and totally orderable. 46 * A `Deck`, which will be flattened into the result. If a 47 `times` is assigned to the `Deck`, the entire `Deck` will 48 be duplicated that many times. 49 times: Multiplies the number of times each element of `outcomes` 50 will be put into the `Deck`. 51 `times` can either be a sequence of the same length as 52 `outcomes` or a single `int` to apply to all elements of 53 `outcomes`. 54 """ 55 56 if icepool.population.again.contains_again(outcomes): 57 raise ValueError('Again cannot be used with Decks.') 58 59 outcomes, times = icepool.creation_args.itemize(outcomes, times) 60 61 if len(outcomes) == 1 and times[0] == 1 and isinstance( 62 outcomes[0], Deck): 63 return outcomes[0] 64 65 counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck( 66 outcomes, times) 67 68 return Deck._new_raw(counts) 69 70 @classmethod 71 def _new_raw(cls, data: Counts[T_co]) -> 'Deck[T_co]': 72 """Creates a new `Deck` using already-processed arguments. 73 74 Args: 75 data: At this point, this is a Counts. 
76 """ 77 self = super(Population, cls).__new__(cls) 78 self._data = data 79 return self 80 81 def keys(self) -> CountsKeysView[T_co]: 82 return self._data.keys() 83 84 def values(self) -> CountsValuesView: 85 return self._data.values() 86 87 def items(self) -> CountsItemsView[T_co]: 88 return self._data.items() 89 90 def __getitem__(self, outcome) -> int: 91 return self._data[outcome] 92 93 def __iter__(self) -> Iterator[T_co]: 94 return iter(self.keys()) 95 96 def __len__(self) -> int: 97 return len(self._data) 98 99 size = icepool.Population.denominator 100 101 @cached_property 102 def _popped_min(self) -> tuple['Deck[T_co]', int]: 103 return self._new_raw(self._data.remove_min()), self.quantities()[0] 104 105 def _pop_min(self) -> tuple['Deck[T_co]', int]: 106 """A `Deck` with the min outcome removed.""" 107 return self._popped_min 108 109 @cached_property 110 def _popped_max(self) -> tuple['Deck[T_co]', int]: 111 return self._new_raw(self._data.remove_max()), self.quantities()[-1] 112 113 def _pop_max(self) -> tuple['Deck[T_co]', int]: 114 """A `Deck` with the max outcome removed.""" 115 return self._popped_max 116 117 @overload 118 def deal(self, hand_size: int, /) -> 'icepool.Deal[T_co]': 119 ... 120 121 @overload 122 def deal( 123 self, hand_size: int, hand_size_2: int, /, *more_hand_sizes: 124 int) -> 'icepool.MultiDeal[T_co, tuple[int, ...]]': 125 ... 126 127 @overload 128 def deal( 129 self, *hand_sizes: int 130 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]': 131 ... 132 133 def deal( 134 self, *hand_sizes: int 135 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]': 136 """Creates a `Deal` object from this deck. 137 138 See `Deal()` for details. 139 """ 140 if len(hand_sizes) == 1: 141 return icepool.Deal(self, *hand_sizes) 142 else: 143 return icepool.MultiDeal(self, *hand_sizes) 144 145 # Binary operators. 
146 147 def additive_union( 148 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 149 """Both decks merged together.""" 150 return functools.reduce(operator.add, args, initial=self) 151 152 def __add__(self, 153 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 154 data = Counter(self._data) 155 for outcome, count in Counter(other).items(): 156 data[outcome] += count 157 return Deck(+data) 158 159 __radd__ = __add__ 160 161 def difference(self, *args: 162 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 163 """This deck with the other cards removed (but not below zero of each card).""" 164 return functools.reduce(operator.sub, args, initial=self) 165 166 def __sub__(self, 167 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 168 data = Counter(self._data) 169 for outcome, count in Counter(other).items(): 170 data[outcome] -= count 171 return Deck(+data) 172 173 def __rsub__(self, 174 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 175 data = Counter(other) 176 for outcome, count in self.items(): 177 data[outcome] -= count 178 return Deck(+data) 179 180 def intersection( 181 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 182 """The cards that both decks have.""" 183 return functools.reduce(operator.and_, args, initial=self) 184 185 def __and__(self, 186 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 187 data: Counter[T_co] = Counter() 188 for outcome, count in Counter(other).items(): 189 data[outcome] = min(self.get(outcome, 0), count) 190 return Deck(+data) 191 192 __rand__ = __and__ 193 194 def union(self, *args: 195 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 196 """As many of each card as the deck that has more of them.""" 197 return functools.reduce(operator.or_, args, initial=self) 198 199 def __or__(self, 200 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 201 data = Counter(self._data) 202 for outcome, count in Counter(other).items(): 203 
data[outcome] = max(data[outcome], count) 204 return Deck(+data) 205 206 __ror__ = __or__ 207 208 def symmetric_difference( 209 self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 210 """As many of each card as the deck that has more of them.""" 211 return self ^ other 212 213 def __xor__(self, 214 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 215 data = Counter(self._data) 216 for outcome, count in Counter(other).items(): 217 data[outcome] = abs(data[outcome] - count) 218 return Deck(+data) 219 220 def multiply_counts(self, other: int) -> 'Deck[T_co]': 221 """Merge multiple copies of the deck.""" 222 return self * other 223 224 def __mul__(self, other: int) -> 'Deck[T_co]': 225 if not isinstance(other, int): 226 return NotImplemented 227 data = Counter({s: count * other for s, count in self.items()}) 228 return Deck(+data) 229 230 __rmul__ = __mul__ 231 232 def divide_counts(self, other: int) -> 'Deck[T_co]': 233 """Divide all card counts, rounding down.""" 234 return self // other 235 236 def __floordiv__(self, other: int) -> 'Deck[T_co]': 237 if not isinstance(other, int): 238 return NotImplemented 239 data = Counter({s: count // other for s, count in self.items()}) 240 return Deck(+data) 241 242 def modulo_counts(self, other: int) -> 'Deck[T_co]': 243 """Modulo all card counts.""" 244 return self % other 245 246 def __mod__(self, other: int) -> 'Deck[T_co]': 247 if not isinstance(other, int): 248 return NotImplemented 249 data = Counter({s: count % other for s, count in self.items()}) 250 return Deck(+data) 251 252 def map( 253 self, 254 repl: 255 'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]', 256 /, 257 star: bool | None = None) -> 'Deck[U]': 258 """Maps outcomes of this `Deck` to other outcomes. 259 260 Args: 261 repl: One of the following: 262 * A callable returning a new outcome for each old outcome. 263 * A map from old outcomes to new outcomes. 
264 Unmapped old outcomes stay the same. 265 The new outcomes may be `Deck`s, in which case one card is 266 replaced with several. This is not recommended. 267 star: Whether outcomes should be unpacked into separate arguments 268 before sending them to a callable `repl`. 269 If not provided, this will be guessed based on the function 270 signature. 271 """ 272 # Convert to a single-argument function. 273 if callable(repl): 274 if star is None: 275 star = guess_star(repl) 276 if star: 277 278 def transition_function(outcome): 279 return repl(*outcome) 280 else: 281 282 def transition_function(outcome): 283 return repl(outcome) 284 else: 285 # repl is a mapping. 286 def transition_function(outcome): 287 if outcome in repl: 288 return repl[outcome] 289 else: 290 return outcome 291 292 return Deck( 293 [transition_function(outcome) for outcome in self.outcomes()], 294 times=self.quantities()) 295 296 @cached_property 297 def _hash_key(self) -> tuple: 298 return Deck, tuple(self.items()) 299 300 def __eq__(self, other) -> bool: 301 if not isinstance(other, Deck): 302 return False 303 return self._hash_key == other._hash_key 304 305 @cached_property 306 def _hash(self) -> int: 307 return hash(self._hash_key) 308 309 def __hash__(self) -> int: 310 return self._hash 311 312 def __repr__(self) -> str: 313 inner = ', '.join(f'{repr(outcome)}: {quantity}' 314 for outcome, quantity in self.items()) 315 return type(self).__qualname__ + '({' + inner + '})'
Sampling without replacement (within a single evaluation).
Quantities represent duplicates.
```python
def __new__(cls,
            outcomes: Sequence | Mapping[Any, int],
            times: Sequence[int] | int = 1) -> 'Deck[T_co]':
    """Constructor for a `Deck`.

    Args:
        outcomes: The cards of the `Deck`. This can be one of the following:
            * A `Sequence` of outcomes. Duplicates will contribute
              quantity for each appearance.
            * A `Mapping` from outcomes to quantities.

            Each outcome may be one of the following:
            * An outcome, which must be hashable and totally orderable.
            * A `Deck`, which will be flattened into the result. If a
              `times` is assigned to the `Deck`, the entire `Deck` will
              be duplicated that many times.
        times: Multiplies the number of times each element of `outcomes`
            will be put into the `Deck`.
            `times` can either be a sequence of the same length as
            `outcomes` or a single `int` to apply to all elements of
            `outcomes`.
    """

    if icepool.population.again.contains_again(outcomes):
        raise ValueError('Again cannot be used with Decks.')

    outcomes, times = icepool.creation_args.itemize(outcomes, times)

    if len(outcomes) == 1 and times[0] == 1 and isinstance(
            outcomes[0], Deck):
        return outcomes[0]

    counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck(
        outcomes, times)

    return Deck._new_raw(counts)
```
Constructor for a `Deck`.

Arguments:
- outcomes: The cards of the `Deck`. This can be one of the following:
  - A `Sequence` of outcomes. Duplicates will contribute quantity for each appearance.
  - A `Mapping` from outcomes to quantities.

  Each outcome may be one of the following:
  - An outcome, which must be hashable and totally orderable.
  - A `Deck`, which will be flattened into the result. If a `times` is assigned to the `Deck`, the entire `Deck` will be duplicated that many times.
- times: Multiplies the number of times each element of `outcomes` will be put into the `Deck`. `times` can either be a sequence of the same length as `outcomes` or a single `int` to apply to all elements of `outcomes`.
```python
def denominator(self) -> int:
    """The sum of all quantities (e.g. weights or duplicates).

    For the number of unique outcomes, including those with zero quantity,
    use `len()`.
    """
    return self._denominator
```
The sum of all quantities (e.g. weights or duplicates).

For the number of unique outcomes, including those with zero quantity, use `len()`.
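Concretely, the denominator sums quantities while `len()` counts distinct outcomes, zero quantities included — shown here with a plain dict standing in for a `Deck`'s data:

```python
# A hypothetical deck's outcome -> quantity data.
data = {'ace': 4, 'king': 4, 'joker': 0}

denominator = sum(data.values())  # total cards: 4 + 4 + 0 = 8
unique = len(data)                # distinct outcomes, zero quantities included: 3

print(denominator, unique)  # 8 3
```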
```python
def deal(
    self, *hand_sizes: int
) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]':
    """Creates a `Deal` object from this deck.

    See `Deal()` for details.
    """
    if len(hand_sizes) == 1:
        return icepool.Deal(self, *hand_sizes)
    else:
        return icepool.MultiDeal(self, *hand_sizes)
```
```python
def additive_union(
        self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """Both decks merged together."""
    # functools.reduce takes its initializer positionally, not as a keyword.
    return functools.reduce(operator.add, args, self)
```
Both decks merged together.
```python
def difference(self, *args:
               Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """This deck with the other cards removed (but not below zero of each card)."""
    # functools.reduce takes its initializer positionally, not as a keyword.
    return functools.reduce(operator.sub, args, self)
```
This deck with the other cards removed (but not below zero of each card).
```python
def intersection(
        self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """The cards that both decks have."""
    # functools.reduce takes its initializer positionally, not as a keyword.
    return functools.reduce(operator.and_, args, self)
```
The cards that both decks have.
```python
def union(self, *args:
          Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """As many of each card as the deck that has more of them."""
    # functools.reduce takes its initializer positionally, not as a keyword.
    return functools.reduce(operator.or_, args, self)
```
As many of each card as the deck that has more of them.
```python
def symmetric_difference(
        self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """The absolute difference in card counts between the decks."""
    return self ^ other
```
The absolute difference in card counts between the decks.
```python
def multiply_counts(self, other: int) -> 'Deck[T_co]':
    """Merge multiple copies of the deck."""
    return self * other
```
Merge multiple copies of the deck.
```python
def divide_counts(self, other: int) -> 'Deck[T_co]':
    """Divide all card counts, rounding down."""
    return self // other
```
Divide all card counts, rounding down.
```python
def modulo_counts(self, other: int) -> 'Deck[T_co]':
    """Modulo all card counts."""
    return self % other
```
Modulo all card counts.
```python
def map(
        self,
        repl:
    'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
        /,
        star: bool | None = None) -> 'Deck[U]':
    """Maps outcomes of this `Deck` to other outcomes.

    Args:
        repl: One of the following:
            * A callable returning a new outcome for each old outcome.
            * A map from old outcomes to new outcomes.
              Unmapped old outcomes stay the same.
            The new outcomes may be `Deck`s, in which case one card is
            replaced with several. This is not recommended.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `repl`.
            If not provided, this will be guessed based on the function
            signature.
    """
    # Convert to a single-argument function.
    if callable(repl):
        if star is None:
            star = guess_star(repl)
        if star:

            def transition_function(outcome):
                return repl(*outcome)
        else:

            def transition_function(outcome):
                return repl(outcome)
    else:
        # repl is a mapping.
        def transition_function(outcome):
            if outcome in repl:
                return repl[outcome]
            else:
                return outcome

    return Deck(
        [transition_function(outcome) for outcome in self.outcomes()],
        times=self.quantities())
```
Maps outcomes of this `Deck` to other outcomes.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A map from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be `Deck`s, in which case one card is replaced with several. This is not recommended.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `repl`. If not provided, this will be guessed based on the function signature.
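The mapping-with-fallback behavior can be sketched over a plain dict of quantities. The names here are illustrative; icepool's `map` returns a new `Deck` rather than a `Counter`, and additionally handles the `star` unpacking and `Deck`-valued replacements.

```python
from collections import Counter

def map_counts(data: dict, repl) -> Counter:
    """Apply repl to each outcome; mappings fall back to the old outcome."""
    if callable(repl):
        transition = repl
    else:
        mapping = repl

        def transition(outcome):
            # Unmapped old outcomes stay the same.
            return mapping.get(outcome, outcome)

    result: Counter = Counter()
    for outcome, quantity in data.items():
        # Outcomes that map to the same new outcome accumulate.
        result[transition(outcome)] += quantity
    return result

deck = {'jack': 2, 'queen': 2, 'king': 2}
print(map_counts(deck, {'jack': 'face', 'queen': 'face', 'king': 'face'}))
# Counter({'face': 6})
print(map_counts(deck, str.upper))
# Counter({'JACK': 2, 'QUEEN': 2, 'KING': 2})
```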
Inherited Members
- Population
- outcomes
- common_outcome_length
- is_empty
- min_outcome
- max_outcome
- nearest_le
- nearest_lt
- nearest_ge
- nearest_gt
- quantity
- quantities
- denominator
- scale_quantities
- has_zero_quantities
- quantity_ne
- quantity_le
- quantity_lt
- quantity_ge
- quantity_gt
- quantities_le
- quantities_ge
- quantities_lt
- quantities_gt
- probability
- probability_le
- probability_lt
- probability_ge
- probability_gt
- probabilities
- probabilities_le
- probabilities_ge
- probabilities_lt
- probabilities_gt
- mode
- modal_quantity
- kolmogorov_smirnov
- cramer_von_mises
- median
- median_low
- median_high
- quantile
- quantile_low
- quantile_high
- mean
- variance
- standard_deviation
- sd
- standardized_moment
- skewness
- excess_kurtosis
- entropy
- marginals
- covariance
- correlation
- to_one_hot
- sample
- format
- collections.abc.Mapping
- get
15class Deal(KeepGenerator[T]): 16 """Represents an unordered deal of a single hand from a `Deck`.""" 17 18 _deck: 'icepool.Deck[T]' 19 _hand_size: int 20 21 def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None: 22 """Constructor. 23 24 For algorithmic reasons, you must pre-commit to the number of cards to 25 deal. 26 27 It is permissible to deal zero cards from an empty deck, but not all 28 evaluators will handle this case, especially if they depend on the 29 outcome type. Dealing zero cards from a non-empty deck does not have 30 this issue. 31 32 Args: 33 deck: The `Deck` to deal from. 34 hand_size: How many cards to deal. 35 """ 36 if hand_size < 0: 37 raise ValueError('hand_size cannot be negative.') 38 if hand_size > deck.size(): 39 raise ValueError( 40 'The number of cards dealt cannot exceed the size of the deck.' 41 ) 42 self._deck = deck 43 self._hand_size = hand_size 44 self._keep_tuple = (1, ) * hand_size 45 46 @classmethod 47 def _new_raw(cls, deck: 'icepool.Deck[T]', hand_size: int, 48 keep_tuple: tuple[int, ...]) -> 'Deal[T]': 49 self = super(Deal, cls).__new__(cls) 50 self._deck = deck 51 self._hand_size = hand_size 52 self._keep_tuple = keep_tuple 53 return self 54 55 def _set_keep_tuple(self, keep_tuple: tuple[int, ...]) -> 'Deal[T]': 56 return Deal._new_raw(self._deck, self._hand_size, keep_tuple) 57 58 def deck(self) -> 'icepool.Deck[T]': 59 """The `Deck` the cards are dealt from.""" 60 return self._deck 61 62 def hand_sizes(self) -> tuple[int, ...]: 63 """The number of cards dealt to each hand as a tuple.""" 64 return (self._hand_size, ) 65 66 def total_cards_dealt(self) -> int: 67 """The total number of cards dealt.""" 68 return self._hand_size 69 70 def outcomes(self) -> CountsKeysView[T]: 71 """The outcomes of the `Deck` in ascending order. 72 73 These are also the `keys` of the `Deck` as a `Mapping`. 74 Prefer to use the name `outcomes`. 
75 """ 76 return self.deck().outcomes() 77 78 def output_arity(self) -> int: 79 return 1 80 81 def _is_resolvable(self) -> bool: 82 return len(self.outcomes()) > 0 83 84 def denominator(self) -> int: 85 return icepool.math.comb(self.deck().size(), self._hand_size) 86 87 def _generate_initial(self) -> InitialMultisetGenerator: 88 yield self, 1 89 90 def _generate_min(self, min_outcome) -> NextMultisetGenerator: 91 if not self.outcomes() or min_outcome != self.min_outcome(): 92 yield self, (0, ), 1 93 return 94 95 popped_deck, deck_count = self.deck()._pop_min() 96 97 min_count = max(0, deck_count + self._hand_size - self.deck().size()) 98 max_count = min(deck_count, self._hand_size) 99 for count in range(min_count, max_count + 1): 100 popped_keep_tuple, result_count = pop_min_from_keep_tuple( 101 self.keep_tuple(), count) 102 popped_deal = Deal._new_raw(popped_deck, self._hand_size - count, 103 popped_keep_tuple) 104 weight = icepool.math.comb(deck_count, count) 105 yield popped_deal, (result_count, ), weight 106 107 def _generate_max(self, max_outcome) -> NextMultisetGenerator: 108 if not self.outcomes() or max_outcome != self.max_outcome(): 109 yield self, (0, ), 1 110 return 111 112 popped_deck, deck_count = self.deck()._pop_max() 113 114 min_count = max(0, deck_count + self._hand_size - self.deck().size()) 115 max_count = min(deck_count, self._hand_size) 116 for count in range(min_count, max_count + 1): 117 popped_keep_tuple, result_count = pop_max_from_keep_tuple( 118 self.keep_tuple(), count) 119 popped_deal = Deal._new_raw(popped_deck, self._hand_size - count, 120 popped_keep_tuple) 121 weight = icepool.math.comb(deck_count, count) 122 yield popped_deal, (result_count, ), weight 123 124 def _estimate_order_costs(self) -> tuple[int, int]: 125 result = len(self.outcomes()) * self._hand_size 126 return result, result 127 128 @cached_property 129 def _hash_key(self) -> Hashable: 130 return Deal, self.deck(), self._hand_size, self._keep_tuple 131 132 def 
__repr__(self) -> str: 133 return type( 134 self 135 ).__qualname__ + f'({repr(self.deck())}, hand_size={self._hand_size})' 136 137 def __str__(self) -> str: 138 return type( 139 self 140 ).__qualname__ + f' of hand_size={self._hand_size} from deck:\n' + str( 141 self.deck())
Represents an unordered deal of a single hand from a `Deck`.
```python
def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
    """Constructor.

    For algorithmic reasons, you must pre-commit to the number of cards to
    deal.

    It is permissible to deal zero cards from an empty deck, but not all
    evaluators will handle this case, especially if they depend on the
    outcome type. Dealing zero cards from a non-empty deck does not have
    this issue.

    Args:
        deck: The `Deck` to deal from.
        hand_size: How many cards to deal.
    """
    if hand_size < 0:
        raise ValueError('hand_size cannot be negative.')
    if hand_size > deck.size():
        raise ValueError(
            'The number of cards dealt cannot exceed the size of the deck.'
        )
    self._deck = deck
    self._hand_size = hand_size
    self._keep_tuple = (1, ) * hand_size
```
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The `Deck` to deal from.
- hand_size: How many cards to deal.
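Since a `Deal` is unordered, the number of distinct hands — which becomes the `Deal`'s denominator, per `denominator()` in the listing above — is a single binomial coefficient. A sketch with `math.comb` (the sizes here are hypothetical example values):

```python
import math

deck_size = 52  # total cards in the deck (example value)
hand_size = 5   # cards dealt to the single hand

# Each unordered hand of 5 out of 52 distinct cards is equally likely.
num_hands = math.comb(deck_size, hand_size)
print(num_hands)  # 2598960
```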
```python
def deck(self) -> 'icepool.Deck[T]':
    """The `Deck` the cards are dealt from."""
    return self._deck
```
The `Deck` the cards are dealt from.
```python
def hand_sizes(self) -> tuple[int, ...]:
    """The number of cards dealt to each hand as a tuple."""
    return (self._hand_size, )
```
The number of cards dealt to each hand as a tuple.
```python
def total_cards_dealt(self) -> int:
    """The total number of cards dealt."""
    return self._hand_size
```
The total number of cards dealt.
Inherited Members
- icepool.generator.keep.KeepGenerator
- keep_size
- keep_tuple
- has_negative_keeps
- keep
- middle
- additive_union
- multiply_counts
- MultisetExpression
- difference
- intersection
- union
- symmetric_difference
- keep_outcomes
- drop_outcomes
- map_counts
- divide_counts
- modulo_counts
- keep_counts_le
- keep_counts_lt
- keep_counts_ge
- keep_counts_gt
- keep_counts_eq
- keep_counts_ne
- unique
- lowest
- highest
- evaluate
- expand
- sum
- count
- any
- highest_outcome_and_count
- all_counts
- largest_count
- largest_count_and_outcome
- count_subset
- largest_straight
- largest_straight_and_outcome
- all_straights
- all_straights_reduce_counts
- argsort
- issubset
- issuperset
- isdisjoint
- compair_lt
- compair_le
- compair_gt
- compair_ge
- compair_eq
- compair_ne
16class MultiDeal(MultisetGenerator[T, Qs]): 17 """Represents an unordered deal of multiple hands from a `Deck`.""" 18 19 _deck: 'icepool.Deck[T]' 20 _hand_sizes: Qs 21 22 def __init__(self, deck: 'icepool.Deck[T]', *hand_sizes: int) -> None: 23 """Constructor. 24 25 For algorithmic reasons, you must pre-commit to the number of cards to 26 deal for each hand. 27 28 It is permissible to deal zero cards from an empty deck, but not all 29 evaluators will handle this case, especially if they depend on the 30 outcome type. Dealing zero cards from a non-empty deck does not have 31 this issue. 32 33 Args: 34 deck: The `Deck` to deal from. 35 *hand_sizes: How many cards to deal. If multiple `hand_sizes` are 36 provided, `MultisetEvaluator.next_state` will recieve one count 37 per hand in order. Try to keep the number of hands to a minimum 38 as this can be computationally intensive. 39 """ 40 if any(hand < 0 for hand in hand_sizes): 41 raise ValueError('hand_sizes cannot be negative.') 42 self._deck = deck 43 self._hand_sizes = cast(Qs, hand_sizes) 44 if self.total_cards_dealt() > self.deck().size(): 45 raise ValueError( 46 'The total number of cards dealt cannot exceed the size of the deck.' 47 ) 48 49 @classmethod 50 def _new_raw(cls, deck: 'icepool.Deck[T]', 51 hand_sizes: Qs) -> 'MultiDeal[T, Qs]': 52 self = super(MultiDeal, cls).__new__(cls) 53 self._deck = deck 54 self._hand_sizes = hand_sizes 55 return self 56 57 def deck(self) -> 'icepool.Deck[T]': 58 """The `Deck` the cards are dealt from.""" 59 return self._deck 60 61 def hand_sizes(self) -> Qs: 62 """The number of cards dealt to each hand as a tuple.""" 63 return self._hand_sizes 64 65 def total_cards_dealt(self) -> int: 66 """The total number of cards dealt.""" 67 return sum(self.hand_sizes()) 68 69 def outcomes(self) -> CountsKeysView[T]: 70 """The outcomes of the `Deck` in ascending order. 71 72 These are also the `keys` of the `Deck` as a `Mapping`. 73 Prefer to use the name `outcomes`. 
74 """ 75 return self.deck().outcomes() 76 77 def output_arity(self) -> int: 78 return len(self._hand_sizes) 79 80 def _is_resolvable(self) -> bool: 81 return len(self.outcomes()) > 0 82 83 @cached_property 84 def _denomiator(self) -> int: 85 d_total = icepool.math.comb(self.deck().size(), 86 self.total_cards_dealt()) 87 d_split = math.prod( 88 icepool.math.comb(self.total_cards_dealt(), h) 89 for h in self.hand_sizes()[1:]) 90 return d_total * d_split 91 92 def denominator(self) -> int: 93 return self._denomiator 94 95 def _generate_initial(self) -> InitialMultisetGenerator: 96 yield self, 1 97 98 def _generate_common(self, popped_deck: 'icepool.Deck[T]', 99 deck_count: int) -> NextMultisetGenerator: 100 """Common implementation for _generate_min and _generate_max.""" 101 min_count = max( 102 0, deck_count + self.total_cards_dealt() - self.deck().size()) 103 max_count = min(deck_count, self.total_cards_dealt()) 104 for count_total in range(min_count, max_count + 1): 105 weight_total = icepool.math.comb(deck_count, count_total) 106 # The "deck" is the hand sizes. 
107 for counts, weight_split in iter_hypergeom(self.hand_sizes(), 108 count_total): 109 popped_deal = MultiDeal._new_raw( 110 popped_deck, 111 tuple(h - c for h, c in zip(self.hand_sizes(), counts))) 112 weight = weight_total * weight_split 113 yield popped_deal, counts, weight 114 115 def _generate_min(self, min_outcome) -> NextMultisetGenerator: 116 if not self.outcomes() or min_outcome != self.min_outcome(): 117 yield self, (0, ), 1 118 return 119 120 popped_deck, deck_count = self.deck()._pop_min() 121 122 yield from self._generate_common(popped_deck, deck_count) 123 124 def _generate_max(self, max_outcome) -> NextMultisetGenerator: 125 if not self.outcomes() or max_outcome != self.max_outcome(): 126 yield self, (0, ), 1 127 return 128 129 popped_deck, deck_count = self.deck()._pop_max() 130 131 yield from self._generate_common(popped_deck, deck_count) 132 133 def _estimate_order_costs(self) -> tuple[int, int]: 134 result = len(self.outcomes()) * math.prod(self.hand_sizes()) 135 return result, result 136 137 @cached_property 138 def _hash_key(self) -> Hashable: 139 return MultiDeal, self.deck(), self.hand_sizes() 140 141 def __repr__(self) -> str: 142 return type( 143 self 144 ).__qualname__ + f'({repr(self.deck())}, hand_sizes={self.hand_sizes()})' 145 146 def __str__(self) -> str: 147 return type( 148 self 149 ).__qualname__ + f' of hand_sizes={self.hand_sizes()} from deck:\n' + str( 150 self.deck())
Represents an unordered deal of multiple hands from a `Deck`.
```python
def __init__(self, deck: 'icepool.Deck[T]', *hand_sizes: int) -> None:
    """Constructor.

    For algorithmic reasons, you must pre-commit to the number of cards to
    deal for each hand.

    It is permissible to deal zero cards from an empty deck, but not all
    evaluators will handle this case, especially if they depend on the
    outcome type. Dealing zero cards from a non-empty deck does not have
    this issue.

    Args:
        deck: The `Deck` to deal from.
        *hand_sizes: How many cards to deal. If multiple `hand_sizes` are
            provided, `MultisetEvaluator.next_state` will receive one count
            per hand in order. Try to keep the number of hands to a minimum
            as this can be computationally intensive.
    """
    if any(hand < 0 for hand in hand_sizes):
        raise ValueError('hand_sizes cannot be negative.')
    self._deck = deck
    self._hand_sizes = cast(Qs, hand_sizes)
    if self.total_cards_dealt() > self.deck().size():
        raise ValueError(
            'The total number of cards dealt cannot exceed the size of the deck.'
        )
```
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal for each hand.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The `Deck` to deal from.
- *hand_sizes: How many cards to deal. If multiple `hand_sizes` are provided, `MultisetEvaluator.next_state` will receive one count per hand in order. Try to keep the number of hands to a minimum as this can be computationally intensive.
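The denominator formula in the class listing above multiplies a draw term (ways to pull the combined pool out of the deck) by per-hand split terms. A standalone sketch of that same computation, with hypothetical example sizes:

```python
import math

deck_size = 10       # distinct-card slots in the deck (example value)
hand_sizes = (3, 2)  # two hands
total = sum(hand_sizes)

# Ways to draw the combined pool of cards out of the deck...
d_total = math.comb(deck_size, total)
# ...times the per-hand split terms, mirroring the listing's d_split.
d_split = math.prod(math.comb(total, h) for h in hand_sizes[1:])

print(d_total * d_split)  # comb(10, 5) * comb(5, 2) = 252 * 10 = 2520
```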
```python
def deck(self) -> 'icepool.Deck[T]':
    """The `Deck` the cards are dealt from."""
    return self._deck
```
The `Deck` the cards are dealt from.
```python
def hand_sizes(self) -> Qs:
    """The number of cards dealt to each hand as a tuple."""
    return self._hand_sizes
```
The number of cards dealt to each hand as a tuple.
```python
def total_cards_dealt(self) -> int:
    """The total number of cards dealt."""
    return sum(self.hand_sizes())
```
The total number of cards dealt.
Inherited Members
- MultisetExpression
- additive_union
- difference
- intersection
- union
- symmetric_difference
- keep_outcomes
- drop_outcomes
- map_counts
- multiply_counts
- divide_counts
- modulo_counts
- keep_counts_le
- keep_counts_lt
- keep_counts_ge
- keep_counts_gt
- keep_counts_eq
- keep_counts_ne
- unique
- keep
- lowest
- highest
- evaluate
- expand
- sum
- count
- any
- highest_outcome_and_count
- all_counts
- largest_count
- largest_count_and_outcome
- count_subset
- largest_straight
- largest_straight_and_outcome
- all_straights
- all_straights_reduce_counts
- argsort
- issubset
- issuperset
- isdisjoint
- compair_lt
- compair_le
- compair_gt
- compair_ge
- compair_eq
- compair_ne
````python
def multiset_function(
        function: Callable[..., NestedTupleOrEvaluator[T_contra, U_co]],
        /) -> MultisetEvaluator[T_contra, NestedTupleOrOutcome[U_co]]:
    """EXPERIMENTAL: A decorator that turns a function into a `MultisetEvaluator`.

    The provided function should take in arguments representing multisets,
    do a limited set of operations on them (see `MultisetExpression`), and
    finish off with an evaluation. You can return tuples to perform a joint
    evaluation.

    For example, to create an evaluator which computes the elements each of two
    multisets has that the other doesn't:

    ```python
    @multiset_function
    def two_way_difference(a, b):
        return (a - b).expand(), (b - a).expand()
    ```

    Any globals inside `function` are effectively bound at the time
    `multiset_function` is invoked. Note that this is different than how
    ordinary Python closures behave. For example,

    ```python
    target = [1, 2, 3]

    @multiset_function
    def count_intersection(a):
        return (a & target).count()

    print(count_intersection(d6.pool(3)))

    target = [1]
    print(count_intersection(d6.pool(3)))
    ```

    would produce the same thing both times. Likewise, the function should not
    have any side effects.

    Be careful when using control structures: you cannot branch on the value of
    a multiset expression or evaluation, so e.g.

    ```python
    @multiset_function
    def bad(a, b):
        if a == b:
            ...
    ```

    is not allowed.

    `multiset_function` has considerable overhead, being effectively a
    mini-language within Python. For better performance, you can try
    implementing your own subclass of `MultisetEvaluator` directly.

    Args:
        function: This should take in a fixed number of multiset variables and
            output an evaluator or a nested tuple of evaluators. Tuples will
            result in a `JointEvaluator`.
    """
    parameters = inspect.signature(function, follow_wrapped=False).parameters
    for parameter in parameters.values():
        if parameter.kind not in [
                inspect.Parameter.POSITIONAL_ONLY,
                inspect.Parameter.POSITIONAL_OR_KEYWORD,
        ] or parameter.default != inspect.Parameter.empty:
            raise ValueError(
                'Callable must take only a fixed number of positional arguments.'
            )
    tuple_or_evaluator = function(*(MV(i) for i in range(len(parameters))))
    evaluator = replace_tuples_with_joint_evaluator(tuple_or_evaluator)
    return update_wrapper(evaluator, function)
````
EXPERIMENTAL: A decorator that turns a function into a `MultisetEvaluator`.

The provided function should take in arguments representing multisets, do a limited set of operations on them (see `MultisetExpression`), and finish off with an evaluation. You can return tuples to perform a joint evaluation.

For example, to create an evaluator which computes the elements each of two multisets has that the other doesn't:

```python
@multiset_function
def two_way_difference(a, b):
    return (a - b).expand(), (b - a).expand()
```

Any globals inside `function` are effectively bound at the time `multiset_function` is invoked. Note that this is different than how ordinary Python closures behave. For example,

```python
target = [1, 2, 3]

@multiset_function
def count_intersection(a):
    return (a & target).count()

print(count_intersection(d6.pool(3)))

target = [1]
print(count_intersection(d6.pool(3)))
```

would produce the same thing both times. Likewise, the function should not have any side effects.

Be careful when using control structures: you cannot branch on the value of a multiset expression or evaluation, so e.g.

```python
@multiset_function
def bad(a, b):
    if a == b:
        ...
```

is not allowed.

`multiset_function` has considerable overhead, being effectively a mini-language within Python. For better performance, you can try implementing your own subclass of `MultisetEvaluator` directly.

Arguments:
- function: This should take in a fixed number of multiset variables and output an evaluator or a nested tuple of evaluators. Tuples will result in a `JointEvaluator`.