icepool
Package for computing dice and card probabilities.
"""Package for computing dice and card probabilities.

Starting with `v0.25.1`, you can replace `latest` in the URL with an old version
number to get the documentation for that version.

See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html)
for examples.

[Visit the project page.](https://github.com/HighDiceRoller/icepool)

General conventions:

* Instances are immutable (apart from internal caching). Anything that looks
  like it mutates an instance actually returns a separate instance with the
  change.
"""

__docformat__ = 'google'

__version__ = '1.6.1'

from typing import Final

from icepool.typing import Outcome, Order, RerollType

Reroll: Final = RerollType.Reroll
"""Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function
and method descriptions for details.

This effectively removes the outcome from the probability space, along with its
contribution to the denominator.

This can be used for conditional probability by removing all outcomes not
consistent with the given observations.

Operation in specific cases:

* When used with `Again`, only that stage is rerolled, not the entire `Again`
  tree.
* To reroll with limited depth, use `Die.reroll()`, or `Again` with no
  modification.
* When used with `MultisetEvaluator`, the entire evaluation is rerolled.
"""

# Expose certain names at top-level.

from icepool.function import (
    d, z, __getattr__, coin, stochastic_round, one_hot, iter_cartesian_product,
    from_cumulative, from_rv, pointwise_max, pointwise_min, min_outcome,
    max_outcome, consecutive, sorted_union, commonize_denominator, reduce,
    accumulate, map, map_function, map_and_time, map_to_pool)

from icepool.population.base import Population
from icepool.population.die import implicit_convert_to_die, Die
from icepool.collection.vector import cartesian_product, tupleize, vectorize, Vector
from icepool.collection.symbols import Symbols
from icepool.population.again import AgainExpression

Again: Final = AgainExpression(is_additive=True)
"""A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor.
`AgainExpression`s should not be fed to functions or methods other than
`Die()`, but it can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
affect how these arguments are processed. At most one of `again_count` or
`again_depth` may be provided; if neither is provided, the behavior is as
`again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.

#### Count mode

When `again_count` is provided, we start with one roll queued and execute one
roll at a time. For every `Again` we roll, we queue another roll.
If we run out of rolls, we sum the rolls to find the result. If the total number
of rolls (not including the initial roll) would exceed `again_count`, we reroll
the entire process, effectively conditioning the process on not rolling more
than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`, which
means that only the following operators are allowed:

* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative.
For example, `str` or `tuple` outcomes will not produce elements with a definite
order.

#### Depth mode

When `again_depth=0`, `again_end` is directly substituted
for each occurrence of `Again`. For other values of `again_depth`, the result for
`again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth
are rerolled.

#### Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such
rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:
* Count mode: Any result that would cause the number of rolls to exceed
  `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
"""

from icepool.population.die_with_truth import DieWithTruth

from icepool.collection.counts import CountsKeysView, CountsValuesView, CountsItemsView

from icepool.population.keep import lowest, highest, middle

from icepool.generator.pool import Pool, standard_pool
from icepool.generator.keep import KeepGenerator
from icepool.generator.compound_keep import CompoundKeepGenerator
from icepool.generator.mixture import MixtureGenerator

from icepool.generator.multiset_generator import MultisetGenerator, InitialMultisetGenerator, NextMultisetGenerator
from icepool.generator.alignment import Alignment
from icepool.evaluator.multiset_evaluator import MultisetEvaluator

from icepool.population.deck import Deck
from icepool.generator.deal import Deal
from icepool.generator.multi_deal import MultiDeal

from icepool.expression.multiset_expression import MultisetExpression, implicit_convert_to_expression
from icepool.expression.multiset_function import multiset_function

from icepool.population.format import format_probability_inverse

import icepool.expression as expression
import icepool.generator as generator
import icepool.evaluator as evaluator

import icepool.typing as typing

__all__ = [
    'd', 'z', 'coin', 'stochastic_round', 'one_hot', 'Outcome', 'Die',
    'Population', 'tupleize', 'vectorize', 'Vector', 'Symbols', 'Again',
    'CountsKeysView', 'CountsValuesView', 'CountsItemsView', 'from_cumulative',
    'from_rv', 'pointwise_max', 'pointwise_min', 'lowest', 'highest', 'middle',
    'min_outcome', 'max_outcome', 'consecutive', 'sorted_union',
    'commonize_denominator', 'reduce', 'accumulate', 'map', 'map_function',
    'map_and_time', 'map_to_pool', 'Reroll', 'RerollType', 'Pool',
    'standard_pool', 'MultisetGenerator', 'MultisetExpression',
    'MultisetEvaluator', 'Order', 'Deck', 'Deal', 'MultiDeal',
    'multiset_function', 'function', 'typing', 'evaluator',
    'format_probability_inverse'
]
@cache
def d(sides: int, /) -> 'icepool.Die[int]':
    """A standard die, uniformly distributed from `1` to `sides` inclusive.

    Don't confuse this with `icepool.Die()`:

    * `icepool.Die([6])`: A `Die` that always rolls the integer 6.
    * `icepool.d(6)`: A d6.

    You can also import individual standard dice from the `icepool` module, e.g.
    `from icepool import d6`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(1, sides + 1))
@cache
def z(sides: int, /) -> 'icepool.Die[int]':
    """A die uniformly distributed from `0` to `sides - 1` inclusive.

    Equal to `d(sides) - 1`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(0, sides))
def coin(n: int | float | Fraction,
         d: int = 1,
         /,
         *,
         max_denominator: int | None = None) -> 'icepool.Die[bool]':
    """A `Die` that rolls `True` with probability `n / d`, and `False` otherwise.

    If `n <= 0` or `n >= d` the result will have only one outcome.

    Args:
        n: An int numerator, or a non-integer probability.
        d: An int denominator. Should not be provided if the first argument is
            not an int.
        max_denominator: If provided and `n` is not an int, the probability is
            reduced using `fractions.Fraction.limit_denominator(max_denominator)`.
    """
    if not isinstance(n, int):
        if d != 1:
            raise ValueError(
                'If a non-int numerator is provided, a denominator must not be provided.'
            )
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n = fraction.numerator
        d = fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)

    return icepool.Die(data)
def stochastic_round(x,
                     /,
                     *,
                     max_denominator: int | None = None) -> 'icepool.Die[int]':
    """Randomly rounds a value up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise, producing a `Die` with up to two outcomes.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    integer_part = math.floor(x)
    fractional_part = x - integer_part
    return integer_part + coin(fractional_part,
                               max_denominator=max_denominator)
def one_hot(sides: int, /) -> 'icepool.Die[tuple[bool, ...]]':
    """A `Die` with `Vector` outcomes with one element set to `True` uniformly at random and the rest `False`.

    This is an easy (if somewhat expensive) way of representing how many dice
    in a pool rolled each number. For example, the outcomes of `10 @ one_hot(6)`
    are the `(ones, twos, threes, fours, fives, sixes)` rolled in 10d6.
    """
    data = []
    for i in range(sides):
        outcome = [False] * sides
        outcome[i] = True
        data.append(icepool.Vector(outcome))
    return icepool.Die(data)
class Outcome(Hashable, Protocol[T_contra]):
    """Protocol to attempt to verify that outcome types are hashable and sortable.

    Far from foolproof, e.g. it cannot enforce total ordering.
    """

    def __lt__(self, other: T_contra) -> bool:
        ...
class Die(Population[T_co]):
    """Sampling with replacement. Quantities represent weights.

    Dice are immutable. Methods do not modify the `Die` in-place;
    rather they return a `Die` representing the result.

    It's also possible to have "empty" dice with no outcomes at all,
    though these have little use other than being sentinel values.
    """

    _data: Counts[T_co]

    @property
    def _new_type(self) -> type:
        return Die

    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'Outcome | Die | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
            of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Several methods and functions forward `**kwargs` to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.

        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                    quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                * For convenience, `tuple`s containing `Population`s will be
                    `tupleize`d into a `Population` of `tuple`s.
                    This does not apply to subclasses of `tuple`s such as `namedtuple`
                    or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                    The quantity assigned to a `Die` is shared among its
                    outcomes. The total denominator will be scaled up if
                    necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                    `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.
        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                if again_end is not None:
                    raise ValueError(
                        'again_end cannot be used with again_count.')
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)

    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Die[T_co]':
        """Creates a new `Die` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    # Defined separately from the superclass to help typing.
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)

    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        If one side of a binary operator is a tuple and the other is not, the
        binary operator is applied to each element of the tuple with the
        non-tuple side. For example, the following are equivalent:

        ```python
        cartesian_product(d6, d8) * 2
        cartesian_product(d6 * 2, d8 * 2)
        ```

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)

    # Basic access.

    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome, /) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        """The number of outcomes."""
        return len(self._data)

    def __contains__(self, outcome) -> bool:
        return outcome in self._data

    # Quantity management.

    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common denominator."""
        return icepool.Die(self._data.simplify())

    # Rerolls and other outcome management.

    def _select_outcomes(self, which: Callable[..., bool] | Collection[T_co],
                         star: bool | None) -> Set[T_co]:
        """Returns a set of outcomes of self that fit the given condition."""
        if callable(which):
            if star is None:
                star = guess_star(which)
            if star:
                # Need TypeVarTuple to check this.
                return {
                    outcome
                    for outcome in self.outcomes()
                    if which(*outcome)  # type: ignore
                }
            else:
                return {
                    outcome
                    for outcome in self.outcomes() if which(outcome)
                }
        else:
            # Collection.
            return set(outcome for outcome in self.outcomes()
                       if outcome in which)

    def reroll(self,
               which: Callable[..., bool] | Collection[T_co] | None = None,
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls the given outcomes.

        Args:
            which: Selects which outcomes to reroll. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                    should be rerolled.
                * If not provided, the min outcome will be rerolled.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth == 'inf' or depth is None:
            if depth is None:
                warnings.warn(
                    "depth=None is deprecated; use depth='inf' instead.",
                    category=DeprecationWarning,
                    stacklevel=1)
            data = {
                outcome: quantity
                for outcome, quantity in self.items()
                if outcome not in outcome_set
            }
        elif depth < 0:
            raise ValueError('reroll depth cannot be negative.')
        else:
            total_reroll_quantity = sum(quantity
                                        for outcome, quantity in self.items()
                                        if outcome in outcome_set)
            total_stop_quantity = self.denominator() - total_reroll_quantity
            rerollable_factor = total_reroll_quantity**depth
            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                           * total_reroll_quantity) // total_stop_quantity
            data = {
                outcome: (rerollable_factor *
                          quantity if outcome in outcome_set else stop_factor *
                          quantity)
                for outcome, quantity in self.items()
            }
        return icepool.Die(data)

    def filter(self,
               which: Callable[..., bool] | Collection[T_co],
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls until getting one of the given outcomes.

        Essentially the complement of `reroll()`.

        Args:
            which: Selects which outcomes to reroll until. Options:
                * A callable that takes an outcome and returns `True` if it
                    should be accepted.
                * A collection of outcomes to reroll until.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if callable(which):
            if star is None:
                star = guess_star(which)
            if star:

                not_outcomes = {
                    outcome
                    for outcome in self.outcomes()
                    if not which(*outcome)  # type: ignore
                }
            else:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes() if not which(outcome)
                }
        else:
            not_outcomes = {
                not_outcome
                for not_outcome in self.outcomes() if not_outcome not in which
            }
        return self.reroll(not_outcomes, depth=depth)

    def split(self,
              which: Callable[..., bool] | Collection[T_co] | None = None,
              /,
              *,
              star: bool | None = None):
        """Splits this die into one containing selected items and another containing the rest.

        The total denominator is preserved.

        Equivalent to `self.filter(), self.reroll()`.

        Args:
            which: Selects which outcomes to include in the first result.
                Options:
                * A callable that takes an outcome and returns `True` if it
                    should be selected.
                * A collection of outcomes to select.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
        """
        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        selected = {}
        not_selected = {}
        for outcome, count in self.items():
            if outcome in outcome_set:
                selected[outcome] = count
            else:
                not_selected[outcome] = count

        return Die(selected), Die(not_selected)

    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Truncates the outcomes of this `Die` to the given range.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be truncated.

        This effectively rerolls outcomes outside the given range.
        If instead you want to replace those outcomes with the nearest endpoint,
        use `clip()`.

        Not to be confused with `trunc(die)`, which performs integer truncation
        on each outcome.
        """
        if min_outcome is not None:
            start = bisect.bisect_left(self.outcomes(), min_outcome)
        else:
            start = None
        if max_outcome is not None:
            stop = bisect.bisect_right(self.outcomes(), max_outcome)
        else:
            stop = None
        data = {k: v for k, v in self.items()[start:stop]}
        return icepool.Die(data)

    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Clips the outcomes of this `Die` to the given values.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be clipped.

        This is not the same as rerolling outcomes beyond this range;
        the outcome is simply adjusted to fit within the range.
        This will typically cause some quantity to bunch up at the endpoint(s).
        If you want to reroll outcomes beyond this range, use `truncate()`.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            if min_outcome is not None and outcome <= min_outcome:
                data[min_outcome] += quantity
            elif max_outcome is not None and outcome >= max_outcome:
                data[max_outcome] += quantity
            else:
                data[outcome] += quantity
        return icepool.Die(data)

    @cached_property
    def _popped_min(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_min())
        return die, self.quantities()[0]

    def _pop_min(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the min outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_min

    @cached_property
    def _popped_max(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_max())
        return die, self.quantities()[-1]

    def _pop_max(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the max outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_max

    # Processes.

    def map(
        self,
        repl:
        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *extra_args,
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        time_limit: int | Literal['inf'] | None = None,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Maps outcomes of the `Die` to other outcomes.

        This is also useful for representing processes.

        As `icepool.map(repl, self, ...)`.
        """
        return icepool.map(repl,
                           self,
                           *extra_args,
                           star=star,
                           repeat=repeat,
                           time_limit=time_limit,
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)

    def map_and_time(
        self,
        repl:
        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
        /,
        *extra_args,
        star: bool | None = None,
        time_limit: int) -> 'Die[tuple[T_co, int]]':
        """Repeatedly map outcomes of the state to other outcomes, while also
        counting timesteps.

        This is useful for representing processes.

        As `map_and_time(repl, self, ...)`.
        """
        return icepool.map_and_time(repl,
                                    self,
                                    *extra_args,
                                    star=star,
                                    time_limit=time_limit)

    def time_to_sum(self: 'Die[int]',
                    target: int,
                    /,
                    max_time: int,
                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
        """The number of rolls until the cumulative sum is greater than or equal to the target.

        Args:
            target: The number to stop at once reached.
            max_time: The maximum number of rolls to run.
                If the sum is not reached, the outcome is determined by `dnf`.
            dnf: What time to assign in cases where the target was not reached
                in `max_time`. If not provided, this is set to `max_time`.
                `dnf=icepool.Reroll` will remove this case from the result,
                effectively rerolling it.
        """
        if target <= 0:
            return Die([0])

        if dnf is None:
            dnf = max_time

        def step(total, roll):
            return min(total + roll, target)

        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
            step, self, time_limit=max_time)

        def get_time(total, time):
            if total < target:
                return dnf
            else:
                return time

        return result.map(get_time)

    @cached_property
    def _mean_time_to_sum_cache(self) -> list[Fraction]:
        return [Fraction(0)]

    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
        """The mean number of rolls until the cumulative sum is greater than or equal to the target.

        Args:
            target: The target sum.

        Raises:
            ValueError: If `self` has negative outcomes.
            ZeroDivisionError: If `self.mean() == 0`.
        """
        target = max(target, 0)

        if target < len(self._mean_time_to_sum_cache):
            return self._mean_time_to_sum_cache[target]

        if self.min_outcome() < 0:
            raise ValueError(
                'mean_time_to_sum does not handle negative outcomes.')
        time_per_effect = Fraction(self.denominator(),
                                   self.denominator() - self.quantity(0))

        for i in range(len(self._mean_time_to_sum_cache), target + 1):
            result = time_per_effect + self.reroll([
                0
            ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
            self._mean_time_to_sum_cache.append(result)

        return result

    def explode(self,
                which: Collection[T_co] | Callable[..., bool] | None = None,
                /,
                *,
                star: bool | None = None,
                depth: int = 9,
                end=None) -> 'Die[T_co]':
        """Causes outcomes to be rolled again and added to the total.

        Args:
            which: Which outcomes to explode. Options:
                * A single outcome to explode.
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                    should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of additional dice to roll, not counting
                the initial roll.
                If not supplied, a default value will be used.
            end: Once `depth` is reached, further explosions will be treated
                as this value. By default, a zero value will be used.
                `icepool.Reroll` will make one extra final roll, rerolling until
                a non-exploding outcome is reached.
        """

        if which is None:
            outcome_set = {self.max_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth < 0:
            raise ValueError('depth cannot be negative.')
        elif depth == 0:
            return self

        def map_final(outcome):
            if outcome in outcome_set:
                return outcome + icepool.Again
            else:
                return outcome

        return self.map(map_final, again_depth=depth, again_end=end)

    def if_else(
        self,
        outcome_if_true: U | 'Die[U]',
        outcome_if_false: U | 'Die[U]',
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Ternary conditional operator.

        This replaces truthy outcomes with the first argument and falsy outcomes
        with the second argument.

        Args:
            again_count, again_depth, again_end: Forwarded to the final die constructor.
        """
        return self.map(lambda x: bool(x)).map(
            {
                True: outcome_if_true,
                False: outcome_if_false
            },
            again_count=again_count,
            again_depth=again_depth,
            again_end=again_end)

    def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
        """A die that returns True iff the roll of the die is contained in the target."""
        return self.map(lambda x: x in target)

    def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
        """Roll this die a number of times and count how many are in the target."""
        return rolls @ self.is_in(target)

    # Pools and sums.

    @cached_property
    def _sum_cache(self) -> MutableMapping[int, 'Die']:
        return {}

    def _sum_all(self, rolls: int, /) -> 'Die':
        """Roll this `Die` `rolls` times and sum the results.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.

        If `rolls` is negative, roll the `Die` `abs(rolls)` times and negate
        the result.

        If you instead want to replace tuple (or other sequence) outcomes with
        their sum, use `die.map(sum)`.
        """
        if rolls in self._sum_cache:
            return self._sum_cache[rolls]

        if rolls < 0:
            result = -self._sum_all(-rolls)
        elif rolls == 0:
            result = self.zero().simplify()
        elif rolls == 1:
            result = self
        else:
            # In addition to working similar to reduce(), this seems to perform
            # better than binary split.
            result = self._sum_all(rolls - 1) + self

        self._sum_cache[rolls] = result
        return result

    def __matmul__(self: 'Die[int]', other) -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)

        data: MutableMapping[int, Any] = defaultdict(int)

        max_abs_die_count = max(abs(self.min_outcome()),
                                abs(self.max_outcome()))
        for die_count, die_count_quantity in self.items():
            factor = other.denominator()**(max_abs_die_count - abs(die_count))
            subresult = other._sum_all(die_count)
            for outcome, subresult_quantity in subresult.items():
                data[
                    outcome] += subresult_quantity * die_count_quantity * factor

        return icepool.Die(data)

    def __rmatmul__(self, other: 'int | Die[int]') -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.__matmul__(self)

    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
        """Possible sequences produced by rolling this die a number of times.

        This is extremely expensive computationally. If possible, use `reduce()`
        instead; if you don't care about order, `Die.pool()` is better.
        """
        return icepool.cartesian_product(*(self for _ in range(rolls)),
                                         outcome_type=tuple)  # type: ignore

    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
        """Creates a `Pool` from this `Die`.

        You might subscript the pool immediately afterwards, e.g.
        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
        lowest of 5d6.

        Args:
            rolls: The number of copies of this `Die` to put in the pool.
                Or, a sequence of one `int` per die acting as
                `keep_tuple`.
Note that `...` cannot be used in the 811 argument to this method, as the argument determines the size of 812 the pool. 813 """ 814 if isinstance(rolls, int): 815 return icepool.Pool({self: rolls}) 816 else: 817 pool_size = len(rolls) 818 # Haven't dealt with narrowing return type. 819 return icepool.Pool({self: pool_size})[rolls] # type: ignore 820 821 @overload 822 def keep(self, rolls: Sequence[int], /) -> 'Die': 823 """Selects elements after drawing and sorting and sums them. 824 825 Args: 826 rolls: A sequence of `int` specifying how many times to count each 827 element in ascending order. 828 """ 829 830 @overload 831 def keep(self, rolls: int, 832 index: slice | Sequence[int | EllipsisType] | int, /): 833 """Selects elements after drawing and sorting and sums them. 834 835 Args: 836 rolls: The number of dice to roll. 837 index: One of the following: 838 * An `int`. This will count only the roll at the specified index. 839 In this case, the result is a `Die` rather than a generator. 840 * A `slice`. The selected dice are counted once each. 841 * A sequence of one `int` for each `Die`. 842 Each roll is counted that many times, which could be multiple or 843 negative times. 844 845 Up to one `...` (`Ellipsis`) may be used. 846 `...` will be replaced with a number of zero 847 counts depending on the `rolls`. 848 This number may be "negative" if more `int`s are provided than 849 `rolls`. Specifically: 850 851 * If `index` is shorter than `rolls`, `...` 852 acts as enough zero counts to make up the difference. 853 E.g. `(1, ..., 1)` on five dice would act as 854 `(1, 0, 0, 0, 1)`. 855 * If `index` has length equal to `rolls`, `...` has no effect. 856 E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`. 857 * If `index` is longer than `rolls` and `...` is on one side, 858 elements will be dropped from `index` on the side with `...`. 859 E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`. 
860 * If `index` is longer than `rolls` and `...` 861 is in the middle, the counts will be as the sum of two 862 one-sided `...`. 863 E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`. 864 If `rolls` was 1 this would have the -1 and 1 cancel each other out. 865 """ 866 867 def keep(self, 868 rolls: int | Sequence[int], 869 index: slice | Sequence[int | EllipsisType] | int | None = None, 870 /) -> 'Die': 871 """Selects elements after drawing and sorting and sums them. 872 873 Args: 874 rolls: The number of dice to roll. 875 index: One of the following: 876 * An `int`. This will count only the roll at the specified index. 877 In this case, the result is a `Die` rather than a generator. 878 * A `slice`. The selected dice are counted once each. 879 * A sequence of `int`s with length equal to `rolls`. 880 Each roll is counted that many times, which could be multiple or 881 negative times. 882 883 Up to one `...` (`Ellipsis`) may be used. If no `...` is used, 884 the `rolls` argument may be omitted. 885 886 `...` will be replaced with a number of zero counts in order 887 to make up any missing elements compared to `rolls`. 888 This number may be "negative" if more `int`s are provided than 889 `rolls`. Specifically: 890 891 * If `index` is shorter than `rolls`, `...` 892 acts as enough zero counts to make up the difference. 893 E.g. `(1, ..., 1)` on five dice would act as 894 `(1, 0, 0, 0, 1)`. 895 * If `index` has length equal to `rolls`, `...` has no effect. 896 E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`. 897 * If `index` is longer than `rolls` and `...` is on one side, 898 elements will be dropped from `index` on the side with `...`. 899 E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`. 900 * If `index` is longer than `rolls` and `...` 901 is in the middle, the counts will be as the sum of two 902 one-sided `...`. 903 E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`. 
904 If `rolls` was 1 this would have the -1 and 1 cancel each other out. 905 """ 906 if isinstance(rolls, int): 907 if index is None: 908 raise ValueError( 909 'If the number of rolls is an integer, an index argument must be provided.' 910 ) 911 if isinstance(index, int): 912 return self.pool(rolls).keep(index) 913 else: 914 return self.pool(rolls).keep(index).sum() # type: ignore 915 else: 916 if index is not None: 917 raise ValueError('Only one index sequence can be given.') 918 return self.pool(len(rolls)).keep(rolls).sum() # type: ignore 919 920 def lowest(self, 921 rolls: int, 922 /, 923 keep: int | None = None, 924 drop: int | None = None) -> 'Die': 925 """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest. 926 927 The outcomes should support addition and multiplication if `keep != 1`. 928 929 Args: 930 rolls: The number of dice to roll. All dice will have the same 931 outcomes as `self`. 932 keep, drop: These arguments work together: 933 * If neither are provided, the single lowest die will be taken. 934 * If only `keep` is provided, the `keep` lowest dice will be summed. 935 * If only `drop` is provided, the `drop` lowest dice will be dropped 936 and the rest will be summed. 937 * If both are provided, `drop` lowest dice will be dropped, then 938 the next `keep` lowest dice will be summed. 939 940 Returns: 941 A `Die` representing the probability distribution of the sum. 942 """ 943 index = lowest_slice(keep, drop) 944 canonical = canonical_slice(index, rolls) 945 if canonical.start == 0 and canonical.stop == 1: 946 return self._lowest_single(rolls) 947 # Expression evaluators are difficult to type. 
948 return self.pool(rolls)[index].sum() # type: ignore 949 950 def _lowest_single(self, rolls: int, /) -> 'Die': 951 """Roll this die several times and keep the lowest.""" 952 if rolls == 0: 953 return self.zero().simplify() 954 return icepool.from_cumulative( 955 self.outcomes(), [x**rolls for x in self.quantities('>=')], 956 reverse=True) 957 958 def highest(self, 959 rolls: int, 960 /, 961 keep: int | None = None, 962 drop: int | None = None) -> 'Die[T_co]': 963 """Roll several of this `Die` and return the highest result, or the sum of some of the highest. 964 965 The outcomes should support addition and multiplication if `keep != 1`. 966 967 Args: 968 rolls: The number of dice to roll. 969 keep, drop: These arguments work together: 970 * If neither are provided, the single highest die will be taken. 971 * If only `keep` is provided, the `keep` highest dice will be summed. 972 * If only `drop` is provided, the `drop` highest dice will be dropped 973 and the rest will be summed. 974 * If both are provided, `drop` highest dice will be dropped, then 975 the next `keep` highest dice will be summed. 976 977 Returns: 978 A `Die` representing the probability distribution of the sum. 979 """ 980 index = highest_slice(keep, drop) 981 canonical = canonical_slice(index, rolls) 982 if canonical.start == rolls - 1 and canonical.stop == rolls: 983 return self._highest_single(rolls) 984 # Expression evaluators are difficult to type. 
985 return self.pool(rolls)[index].sum() # type: ignore 986 987 def _highest_single(self, rolls: int, /) -> 'Die[T_co]': 988 """Roll this die several times and keep the highest.""" 989 if rolls == 0: 990 return self.zero().simplify() 991 return icepool.from_cumulative( 992 self.outcomes(), [x**rolls for x in self.quantities('<=')]) 993 994 def middle( 995 self, 996 rolls: int, 997 /, 998 keep: int = 1, 999 *, 1000 tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die': 1001 """Roll several of this `Die` and sum the sorted results in the middle. 1002 1003 The outcomes should support addition and multiplication if `keep != 1`. 1004 1005 Args: 1006 rolls: The number of dice to roll. 1007 keep: The number of outcomes to sum. If this is greater than the 1008 current keep_size, all are kept. 1009 tie: What to do if `keep` is odd but the current keep_size 1010 is even, or vice versa. 1011 * 'error' (default): Raises `IndexError`. 1012 * 'high': The higher outcome is taken. 1013 * 'low': The lower outcome is taken. 1014 """ 1015 # Expression evaluators are difficult to type. 1016 return self.pool(rolls).middle(keep, tie=tie).sum() # type: ignore 1017 1018 def map_to_pool( 1019 self, 1020 repl: 1021 'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None, 1022 /, 1023 *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression', 1024 star: bool | None = None, 1025 denominator: int | None = None 1026 ) -> 'icepool.MultisetGenerator[U, tuple[int]]': 1027 """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`. 1028 1029 As `icepool.map_to_pool(repl, self, ...)`. 1030 1031 If no argument is provided, the outcomes will be used to construct a 1032 mixture of pools directly, similar to the inverse of `pool.expand()`. 1033 Note that this is not particularly efficient since it does not make much 1034 use of dynamic programming. 
1035 1036 Args: 1037 repl: One of the following: 1038 * A callable that takes in one outcome per element of args and 1039 produces a `Pool` (or something convertible to such). 1040 * A mapping from old outcomes to `Pool` 1041 (or something convertible to such). 1042 In this case args must have exactly one element. 1043 The new outcomes may be dice rather than just single outcomes. 1044 The special value `icepool.Reroll` will reroll that old outcome. 1045 star: If `True`, the first of the args will be unpacked before 1046 giving them to `repl`. 1047 If not provided, it will be guessed based on the signature of 1048 `repl` and the number of arguments. 1049 denominator: If provided, the denominator of the result will be this 1050 value. Otherwise it will be the minimum to correctly weight the 1051 pools. 1052 1053 Returns: 1054 A `MultisetGenerator` representing the mixture of `Pool`s. Note 1055 that this is not technically a `Pool`, though it supports most of 1056 the same operations. 1057 1058 Raises: 1059 ValueError: If `denominator` cannot be made consistent with the 1060 resulting mixture of pools. 1061 """ 1062 if repl is None: 1063 repl = lambda x: x 1064 return icepool.map_to_pool(repl, 1065 self, 1066 *extra_args, 1067 star=star, 1068 denominator=denominator) 1069 1070 def explode_to_pool( 1071 self, 1072 rolls: int, 1073 which: Collection[T_co] | Callable[..., bool] | None = None, 1074 /, 1075 *, 1076 star: bool | None = None, 1077 depth: int = 9) -> 'icepool.MultisetGenerator[T_co, tuple[int]]': 1078 """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool. 1079 1080 Args: 1081 rolls: The number of initial dice. 1082 which: Which outcomes to explode. Options: 1083 * A single outcome to explode. 1084 * An collection of outcomes to explode. 1085 * A callable that takes an outcome and returns `True` if it 1086 should be exploded. 1087 * If not supplied, the max outcome will explode. 
1088 star: Whether outcomes should be unpacked into separate arguments 1089 before sending them to a callable `which`. 1090 If not provided, this will be guessed based on the function 1091 signature. 1092 depth: The maximum depth of explosions for an individual dice. 1093 1094 Returns: 1095 A `MultisetGenerator` representing the mixture of `Pool`s. Note 1096 that this is not technically a `Pool`, though it supports most of 1097 the same operations. 1098 """ 1099 if depth == 0: 1100 return self.pool(rolls) 1101 if which is None: 1102 explode_set = {self.max_outcome()} 1103 else: 1104 explode_set = self._select_outcomes(which, star) 1105 if not explode_set: 1106 return self.pool(rolls) 1107 explode, not_explode = self.split(explode_set) 1108 1109 single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict( 1110 int) 1111 for i in range(depth + 1): 1112 weight = explode.denominator()**i * self.denominator()**( 1113 depth - i) * not_explode.denominator() 1114 single_data[icepool.Vector((i, 1))] += weight 1115 single_data[icepool.Vector( 1116 (depth + 1, 0))] += explode.denominator()**(depth + 1) 1117 1118 single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data) 1119 count_die = rolls @ single_count_die 1120 1121 return count_die.map_to_pool( 1122 lambda x, nx: [explode] * x + [not_explode] * nx) 1123 1124 def reroll_to_pool( 1125 self, 1126 rolls: int, 1127 which: Callable[..., bool] | Collection[T_co], 1128 /, 1129 max_rerolls: int, 1130 *, 1131 star: bool | None = None, 1132 mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random' 1133 ) -> 'icepool.MultisetGenerator[T_co, tuple[int]]': 1134 """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool. 1135 1136 Each die can only be rerolled once (effectively `depth=1`), and no more 1137 than `max_rerolls` dice may be rerolled. 1138 1139 Args: 1140 rolls: How many dice in the pool. 1141 which: Selects which outcomes are eligible to be rerolled. 
Options: 1142 * A collection of outcomes to reroll. 1143 * A callable that takes an outcome and returns `True` if it 1144 could be rerolled. 1145 max_rerolls: The maximum number of dice to reroll. 1146 Note that each die can only be rerolled once, so if the number 1147 of eligible dice is less than this, the excess rerolls have no 1148 effect. 1149 star: Whether outcomes should be unpacked into separate arguments 1150 before sending them to a callable `which`. 1151 If not provided, this will be guessed based on the function 1152 signature. 1153 mode: How dice are selected for rerolling if there are more eligible 1154 dice than `max_rerolls`. Options: 1155 * `'random'` (default): Eligible dice will be chosen uniformly 1156 at random. 1157 * `'lowest'`: The lowest eligible dice will be rerolled. 1158 * `'highest'`: The highest eligible dice will be rerolled. 1159 * `'drop'`: All dice that ended up on an outcome selected by 1160 `which` will be dropped. This includes both dice that rolled 1161 into `which` initially and were not rerolled, and dice that 1162 were rerolled but rolled into `which` again. This can be 1163 considerably more efficient than the other modes. 1164 1165 Returns: 1166 A `MultisetGenerator` representing the mixture of `Pool`s. Note 1167 that this is not technically a `Pool`, though it supports most of 1168 the same operations. 1169 """ 1170 rerollable_set = self._select_outcomes(which, star) 1171 if not rerollable_set: 1172 return self.pool(rolls) 1173 1174 rerollable_die, not_rerollable_die = self.split(rerollable_set) 1175 single_is_rerollable = icepool.coin(rerollable_die.denominator(), 1176 self.denominator()) 1177 rerollable = rolls @ single_is_rerollable 1178 1179 def split(initial_rerollable: int) -> Die[tuple[int, int, int]]: 1180 """Computes the composition of the pool. 1181 1182 Returns: 1183 initial_rerollable: The number of dice that initially fell into 1184 the rerollable set. 
1185 rerolled_to_rerollable: The number of dice that were rerolled, 1186 but fell into the rerollable set again. 1187 not_rerollable: The number of dice that ended up outside the 1188 rerollable set, including both initial and rerolled dice. 1189 not_rerolled: The number of dice that were eligible for 1190 rerolling but were not rerolled. 1191 """ 1192 initial_not_rerollable = rolls - initial_rerollable 1193 rerolled = min(initial_rerollable, max_rerolls) 1194 not_rerolled = initial_rerollable - rerolled 1195 1196 def second_split(rerolled_to_rerollable): 1197 """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets.""" 1198 rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable 1199 return icepool.tupleize( 1200 initial_rerollable, rerolled_to_rerollable, 1201 initial_not_rerollable + rerolled_to_not_rerollable, 1202 not_rerolled) 1203 1204 return icepool.map(second_split, 1205 rerolled @ single_is_rerollable, 1206 star=False) 1207 1208 pool_composition = rerollable.map(split, star=False) 1209 1210 def make_pool(initial_rerollable, rerolled_to_rerollable, 1211 not_rerollable, not_rerolled): 1212 common = rerollable_die.pool( 1213 rerolled_to_rerollable) + not_rerollable_die.pool( 1214 not_rerollable) 1215 match mode: 1216 case 'random': 1217 return common + rerollable_die.pool(not_rerolled) 1218 case 'lowest': 1219 return common + rerollable_die.pool( 1220 initial_rerollable).highest(not_rerolled) 1221 case 'highest': 1222 return common + rerollable_die.pool( 1223 initial_rerollable).lowest(not_rerolled) 1224 case 'drop': 1225 return not_rerollable_die.pool(not_rerollable) 1226 case _: 1227 raise ValueError( 1228 f"Invalid reroll_priority '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'." 1229 ) 1230 1231 denominator = self.denominator()**(rolls + min(rolls, max_rerolls)) 1232 1233 return pool_composition.map_to_pool(make_pool, 1234 star=True, 1235 denominator=denominator) 1236 1237 # Unary operators. 
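The `@` operator in the pools-and-sums section above rolls a count die, then sums that many rolls of a value die, padding shorter branches so every branch shares a common denominator. A plain-Python sketch of that combination (hypothetical helper names, assuming non-negative counts; not the library implementation, which also handles negative counts and caching):

```python
from collections import defaultdict


def sum_rolls(die: dict[int, int], rolls: int) -> dict[int, int]:
    """Distribution of the sum of `rolls` independent rolls of `die`."""
    result = {0: 1}
    for _ in range(rolls):
        next_result: dict[int, int] = defaultdict(int)
        for total, q1 in result.items():
            for outcome, q2 in die.items():
                next_result[total + outcome] += q1 * q2
        result = dict(next_result)
    return result


def matmul(count_die: dict[int, int], value_die: dict[int, int]) -> dict[int, int]:
    """Roll `count_die`, then sum that many rolls of `value_die`."""
    denom = sum(value_die.values())
    max_count = max(count_die)  # assumes counts are non-negative
    data: dict[int, int] = defaultdict(int)
    for count, count_q in count_die.items():
        # Pad so every branch has denominator denom ** max_count.
        factor = denom ** (max_count - count)
        for outcome, q in sum_rolls(value_die, count).items():
            data[outcome] += q * count_q * factor
    return dict(data)


d2 = {1: 1, 2: 1}
print(matmul(d2, d2))  # quantities over a common denominator of 2 * 2**2 = 8
```

Here `matmul(d2, d2)` models `d2 @ d2`: half the time one d2 is summed (padded by a factor of 2), half the time two are.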
    def __neg__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.neg)

    def __pos__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.pos)

    def __invert__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.invert)

    def abs(self) -> 'Die[T_co]':
        return self.unary_operator(operator.abs)

    __abs__ = abs

    def round(self, ndigits: int | None = None) -> 'Die':
        return self.unary_operator(round, ndigits)

    __round__ = round

    def stochastic_round(self,
                         *,
                         max_denominator: int | None = None) -> 'Die[int]':
        """Randomly rounds outcomes up or down to the nearest integer according to the two distances.

        Specifically, rounds `x` up with probability `x - floor(x)` and down
        otherwise.

        Args:
            max_denominator: If provided, each rounding will be performed
                using `fractions.Fraction.limit_denominator(max_denominator)`.
                Otherwise, the rounding will be performed without
                `limit_denominator`.
        """
        return self.map(lambda x: icepool.stochastic_round(
            x, max_denominator=max_denominator))

    def trunc(self) -> 'Die':
        return self.unary_operator(math.trunc)

    __trunc__ = trunc

    def floor(self) -> 'Die':
        return self.unary_operator(math.floor)

    __floor__ = floor

    def ceil(self) -> 'Die':
        return self.unary_operator(math.ceil)

    __ceil__ = ceil

    # Binary operators.
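The stochastic rounding rule described above can be illustrated standalone (hypothetical helper, not the library's implementation): the result is a two-outcome distribution over `floor(x)` and `floor(x) + 1`, weighted so the expected value of the rounded result equals `x`.

```python
import math
from fractions import Fraction


def stochastic_round_dist(x: Fraction) -> dict[int, Fraction]:
    """Distribution of rounding x up with probability x - floor(x), else down."""
    lo = math.floor(x)
    frac = x - lo  # probability of rounding up
    if frac == 0:
        return {lo: Fraction(1)}
    return {lo: 1 - frac, lo + 1: frac}


# 2.5 rounds to 2 or 3 with equal probability; the mean is preserved.
dist = stochastic_round_dist(Fraction(5, 2))
mean = sum(outcome * p for outcome, p in dist.items())
assert mean == Fraction(5, 2)
```

This mean-preserving property is the reason to prefer stochastic rounding over plain `round()` when quantities must stay unbiased.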
    def __add__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.add)

    def __radd__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.add)

    def __sub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.sub)

    def __rsub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.sub)

    def __mul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mul)

    def __rmul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mul)

    def __truediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.truediv)

    def __rtruediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.truediv)

    def __floordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.floordiv)

    def __rfloordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.floordiv)

    def __pow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.pow)

    def __rpow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.pow)

    def __mod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mod)

    def __rmod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mod)

    def __lshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lshift)

    def __rlshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.lshift)

    def __rshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.rshift)

    def __rrshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.rshift)

    def __and__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.and_)

    def __rand__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.and_)

    def __or__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.or_)

    def __ror__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.or_)

    def __xor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.xor)

    def __rxor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.xor)

    # Comparators.
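Every operator above follows one pattern: defer to `AgainExpression` operands, convert the other operand to a `Die`, then apply the operator outcome-by-outcome over the cross product of both dice, multiplying quantities. A minimal dict-based sketch of that combination step (hypothetical helper name, not `Die.binary_operator` itself):

```python
from collections import defaultdict
import operator


def binary_op(op, left: dict[int, int], right: dict[int, int]) -> dict[int, int]:
    """Apply op to every pair of outcomes, multiplying their quantities."""
    data: dict[int, int] = defaultdict(int)
    for a, qa in left.items():
        for b, qb in right.items():
            data[op(a, b)] += qa * qb
    return dict(data)


d2 = {1: 1, 2: 1}
assert binary_op(operator.add, d2, d2) == {2: 1, 3: 2, 4: 1}
```

The resulting denominator is the product of the two input denominators, which is why independent dice can be combined with any binary operator this way.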
    def __lt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lt)

    def __le__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.le)

    def __ge__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.ge)

    def __gt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.gt)

    # Equality operators. These produce a `DieWithTruth`.

    # The result has a truth value, but is not a bool.
    def __eq__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.eq)._data

        def truth_value_callback() -> bool:
            return self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    # The result has a truth value, but is not a bool.
    def __ne__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.ne)._data

        def truth_value_callback() -> bool:
            return not self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    def cmp(self, other) -> 'Die[int]':
        """A `Die` with outcomes 1, -1, and 0.

        The quantities are equal to the positive outcome of `self > other`,
        `self < other`, and the remainder respectively.
        """
        other = implicit_convert_to_die(other)

        data = {}

        lt = self < other
        if True in lt:
            data[-1] = lt[True]
        eq = self == other
        if True in eq:
            data[0] = eq[True]
        gt = self > other
        if True in gt:
            data[1] = gt[True]

        return Die(data)

    @staticmethod
    def _sign(x) -> int:
        z = Die._zero(x)
        if x > z:
            return 1
        elif x < z:
            return -1
        else:
            return 0

    def sign(self) -> 'Die[int]':
        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.

        Note that for `float`s, +0.0, -0.0, and nan all become 0.
        """
        return self.unary_operator(Die._sign)

    # Equality and hashing.
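`cmp()` above reduces a pair of dice to the sign of the comparison. Its quantities can be computed directly by counting outcome pairs (plain-Python sketch with a hypothetical helper name; like icepool, it omits zero-quantity outcomes):

```python
def cmp_dist(left: dict[int, int], right: dict[int, int]) -> dict[int, int]:
    """Quantities of left > right (1), left < right (-1), and ties (0)."""
    data = {-1: 0, 0: 0, 1: 0}
    for a, qa in left.items():
        for b, qb in right.items():
            sign = (a > b) - (a < b)
            data[sign] += qa * qb
    # Mirror icepool's convention of omitting zero-quantity outcomes.
    return {k: v for k, v in data.items() if v > 0}


d2 = {1: 1, 2: 1}
assert cmp_dist(d2, d2) == {-1: 1, 0: 2, 1: 1}
```

For two identical d2s, one pair each favors the left and the right die, and two pairs tie, matching the quantities asserted above.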
1532 1533 def __bool__(self) -> bool: 1534 raise TypeError( 1535 'A `Die` only has a truth value if it is the result of == or !=.\n' 1536 'This could result from trying to use a die in an if-statement,\n' 1537 'in which case you should use `die.if_else()` instead.\n' 1538 'Or it could result from trying to use a `Die` inside a tuple or vector outcome,\n' 1539 'in which case you should use `tupleize()` or `vectorize().') 1540 1541 @cached_property 1542 def _hash_key(self) -> tuple: 1543 """A tuple that uniquely (as `equals()`) identifies this die. 1544 1545 Apart from being hashable and totally orderable, this is not guaranteed 1546 to be in any particular format or have any other properties. 1547 """ 1548 return tuple(self.items()) 1549 1550 @cached_property 1551 def _hash(self) -> int: 1552 return hash(self._hash_key) 1553 1554 def __hash__(self) -> int: 1555 return self._hash 1556 1557 def equals(self, other, *, simplify: bool = False) -> bool: 1558 """`True` iff both dice have the same outcomes and quantities. 1559 1560 This is `False` if `other` is not a `Die`, even if it would convert 1561 to an equal `Die`. 1562 1563 Truth value does NOT matter. 1564 1565 If one `Die` has a zero-quantity outcome and the other `Die` does not 1566 contain that outcome, they are treated as unequal by this function. 1567 1568 The `==` and `!=` operators have a dual purpose; they return a `Die` 1569 with a truth value determined by this method. 1570 Only dice returned by these methods have a truth value. The data of 1571 these dice is lazily evaluated since the caller may only be interested 1572 in the `Die` value or the truth value. 1573 1574 Args: 1575 simplify: If `True`, the dice will be simplified before comparing. 1576 Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin. 
1577 """ 1578 if not isinstance(other, Die): 1579 return False 1580 1581 if simplify: 1582 return self.simplify()._hash_key == other.simplify()._hash_key 1583 else: 1584 return self._hash_key == other._hash_key 1585 1586 # Strings. 1587 1588 def __repr__(self) -> str: 1589 inner = ', '.join(f'{repr(outcome)}: {weight}' 1590 for outcome, weight in self.items()) 1591 return type(self).__qualname__ + '({' + inner + '})'
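The `cmp` source above partitions the joint distribution of two dice into less-than, equal, and greater-than masses. As a rough illustration of that partition (not icepool code — a hypothetical `cmp_dist` helper over plain outcome-to-quantity dicts):

```python
from itertools import product

def cmp_dist(die_a: dict[int, int], die_b: dict[int, int]) -> dict[int, int]:
    """Sketch of Die.cmp: compare every pair of outcomes, weighting each
    pair by the product of its quantities; drop zero-quantity results."""
    data = {-1: 0, 0: 0, 1: 0}
    for (a, qa), (b, qb) in product(die_a.items(), die_b.items()):
        data[(a > b) - (a < b)] += qa * qb
    return {k: v for k, v in data.items() if v > 0}

d6 = {n: 1 for n in range(1, 7)}
result = cmp_dist(d6, d6)  # {-1: 15, 0: 6, 1: 15} out of 36 pairs
```

Of the 36 equally likely pairs of two d6, 6 tie and 15 fall on each side, matching the symmetry of the comparison.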
Sampling with replacement. Quantities represent weights.

Dice are immutable. Methods do not modify the `Die` in-place; rather, they return a `Die` representing the result.

It's also possible to have "empty" dice with no outcomes at all, though these have little use other than being sentinel values.
    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'Outcome | Die | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
          of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Several methods and functions forward **kwargs to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.

        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                    quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                * For convenience, `tuple`s containing `Population`s will be
                    `tupleize`d into a `Population` of `tuple`s.
                    This does not apply to subclasses of `tuple`s such as
                    `namedtuple` or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                    The quantity assigned to a `Die` is shared among its
                    outcomes. The total denominator will be scaled up if
                    necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                    `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.

        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                if again_end is not None:
                    raise ValueError(
                        'again_end cannot be used with again_count.')
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)
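The "Denominator" note in the constructor docstring — sub-dice are flattened with their quantities scaled to a common denominator — can be sketched with a hypothetical `flatten_mix` helper over plain quantity dicts, assuming each sub-die gets equal primary weight and using the least common multiple of the sub-denominators (icepool's actual minimization may differ in detail):

```python
from math import lcm

def flatten_mix(dice: list[dict[int, int]]) -> dict[int, int]:
    """Sketch of flattening Die([d3, d3 + 3]): scale each sub-die's
    quantities so all share a common denominator, then sum them."""
    denoms = [sum(d.values()) for d in dice]
    common = lcm(*denoms)
    data: dict[int, int] = {}
    for die, denom in zip(dice, denoms):
        scale = common // denom
        for outcome, quantity in die.items():
            data[outcome] = data.get(outcome, 0) + quantity * scale
    return data

d3 = {1: 1, 2: 1, 3: 1}
d3_plus_3 = {4: 1, 5: 1, 6: 1}
flatten_mix([d3, d3_plus_3])  # a fair d6: {1: 1, 2: 1, ..., 6: 1}
```

With unequal sub-denominators the scaling matters: mixing a coin `{1: 1, 2: 1}` with a d3 `{1: 1, 2: 1, 3: 1}` gives `{1: 5, 2: 5, 3: 2}` over denominator 12, i.e. P(1) = 1/2·1/2 + 1/2·1/3 = 5/12.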
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)
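The effect of a unary operation is simple: apply `op` to each outcome and merge the quantities of outcomes that collide after mapping. A standalone sketch over plain quantity dicts (hypothetical `unary_op`, not icepool's internal `_unary_operator`):

```python
from collections import defaultdict

def unary_op(die: dict[int, int], op) -> dict[int, int]:
    """Sketch of unary_operator: apply op to each outcome; outcomes that
    collide after mapping have their quantities summed."""
    data: dict[int, int] = defaultdict(int)
    for outcome, quantity in die.items():
        data[op(outcome)] += quantity
    return dict(data)

die = {-2: 1, -1: 2, 1: 3}
unary_op(die, abs)  # {2: 1, 1: 5} — the -1 and 1 outcomes merge
```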
    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        If one side of a binary operator is a tuple and the other is not, the
        binary operator is applied to each element of the tuple with the
        non-tuple side. For example, the following are equivalent:

        ```python
        cartesian_product(d6, d8) * 2
        cartesian_product(d6 * 2, d8 * 2)
        ```

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)
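The loop above is a convolution over the joint distribution: every pair of outcomes contributes the product of its quantities, so the result's denominator is the product of both dice's denominators. A standalone sketch over plain quantity dicts (hypothetical `binary_op`):

```python
import operator
from collections import defaultdict
from itertools import product

def binary_op(die_a: dict[int, int], die_b: dict[int, int], op) -> dict[int, int]:
    """Sketch of binary_operator: apply op to every pair of outcomes,
    multiplying quantities."""
    data: dict[int, int] = defaultdict(int)
    for (a, qa), (b, qb) in product(die_a.items(), die_b.items()):
        data[op(a, b)] += qa * qb
    return dict(data)

d4 = {n: 1 for n in range(1, 5)}
two_d4 = binary_op(d4, d4, operator.add)
# 2d4 sum: {2: 1, 3: 2, 4: 3, 5: 4, 6: 3, 7: 2, 8: 1} out of 16
```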
    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common divisor."""
        return icepool.Die(self._data.simplify())
    def reroll(self,
               which: Callable[..., bool] | Collection[T_co] | None = None,
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls the given outcomes.

        Args:
            which: Selects which outcomes to reroll. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                    should be rerolled.
                * If not provided, the min outcome will be rerolled.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth == 'inf' or depth is None:
            if depth is None:
                warnings.warn(
                    "depth=None is deprecated; use depth='inf' instead.",
                    category=DeprecationWarning,
                    stacklevel=1)
            data = {
                outcome: quantity
                for outcome, quantity in self.items()
                if outcome not in outcome_set
            }
        elif depth < 0:
            raise ValueError('reroll depth cannot be negative.')
        else:
            total_reroll_quantity = sum(quantity
                                        for outcome, quantity in self.items()
                                        if outcome in outcome_set)
            total_stop_quantity = self.denominator() - total_reroll_quantity
            rerollable_factor = total_reroll_quantity**depth
            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                           * total_reroll_quantity) // total_stop_quantity
            data = {
                outcome: (rerollable_factor *
                          quantity if outcome in outcome_set else stop_factor *
                          quantity)
                for outcome, quantity in self.items()
            }
        return icepool.Die(data)
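The finite-depth branch uses a closed form instead of iterating: a rerollable outcome survives only if every stage up to `depth` produced a rerollable outcome, while each stopping outcome absorbs the weight of every path that ends on it. A standalone sketch of that formula (hypothetical `reroll_limited` over plain quantity dicts):

```python
def reroll_limited(die: dict[int, int], reroll_set: set[int], depth: int) -> dict[int, int]:
    """Sketch of Die.reroll with finite depth, using the same closed form
    as the source above."""
    denominator = sum(die.values())
    reroll_quantity = sum(q for o, q in die.items() if o in reroll_set)
    stop_quantity = denominator - reroll_quantity
    rerollable_factor = reroll_quantity**depth
    stop_factor = (denominator**(depth + 1)
                   - rerollable_factor * reroll_quantity) // stop_quantity
    return {
        o: (rerollable_factor if o in reroll_set else stop_factor) * q
        for o, q in die.items()
    }

d6 = {n: 1 for n in range(1, 7)}
reroll_limited(d6, {1}, 1)  # {1: 1, 2: 7, ..., 6: 7}, denominator 36
```

Rerolling 1s once on a d6 leaves P(1) = 1/36 (a 1 followed by another 1), and each other face gets 7/36 (6/36 direct plus 1/36 via a rerolled 1), matching the closed form.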
    def filter(self,
               which: Callable[..., bool] | Collection[T_co],
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls until getting one of the given outcomes.

        Essentially the complement of `reroll()`.

        Args:
            which: Selects which outcomes to reroll until. Options:
                * A callable that takes an outcome and returns `True` if it
                    should be accepted.
                * A collection of outcomes to reroll until.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if callable(which):
            if star is None:
                star = guess_star(which)
            if star:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes()
                    if not which(*outcome)  # type: ignore
                }
            else:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes() if not which(outcome)
                }
        else:
            not_outcomes = {
                not_outcome
                for not_outcome in self.outcomes() if not_outcome not in which
            }
        return self.reroll(not_outcomes, depth=depth)
    def split(self,
              which: Callable[..., bool] | Collection[T_co] | None = None,
              /,
              *,
              star: bool | None = None):
        """Splits this die into one containing selected items and another containing the rest.

        The total denominator is preserved.

        Equivalent to `self.filter(), self.reroll()`.

        Args:
            which: Selects which outcomes go into the first `Die`. Options:
                * A callable that takes an outcome and returns `True` if it
                    should be selected.
                * A collection of outcomes to select.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
        """
        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        selected = {}
        not_selected = {}
        for outcome, count in self.items():
            if outcome in outcome_set:
                selected[outcome] = count
            else:
                not_selected[outcome] = count

        return Die(selected), Die(not_selected)
    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Truncates the outcomes of this `Die` to the given range.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be truncated.

        This effectively rerolls outcomes outside the given range.
        If instead you want to replace those outcomes with the nearest endpoint,
        use `clip()`.

        Not to be confused with `trunc(die)`, which performs integer truncation
        on each outcome.
        """
        if min_outcome is not None:
            start = bisect.bisect_left(self.outcomes(), min_outcome)
        else:
            start = None
        if max_outcome is not None:
            stop = bisect.bisect_right(self.outcomes(), max_outcome)
        else:
            stop = None
        data = {k: v for k, v in self.items()[start:stop]}
        return icepool.Die(data)
    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Clips the outcomes of this `Die` to the given values.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be clipped.

        This is not the same as rerolling outcomes beyond this range;
        the outcome is simply adjusted to fit within the range.
        This will typically cause some quantity to bunch up at the endpoint(s).
        If you want to reroll outcomes beyond this range, use `truncate()`.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            if min_outcome is not None and outcome <= min_outcome:
                data[min_outcome] += quantity
            elif max_outcome is not None and outcome >= max_outcome:
                data[max_outcome] += quantity
            else:
                data[outcome] += quantity
        return icepool.Die(data)
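The clip/truncate distinction is worth seeing concretely: `clip` moves out-of-range quantity to the nearest endpoint and keeps the denominator, while `truncate` would drop it. A standalone sketch of the clipping behavior (hypothetical `clip_dist` over plain quantity dicts, with `lo`/`hi` playing the roles of `min_outcome`/`max_outcome`):

```python
from collections import defaultdict

def clip_dist(die: dict[int, int], lo=None, hi=None) -> dict[int, int]:
    """Sketch of Die.clip: outcomes outside [lo, hi] are moved to the
    nearest endpoint, so quantity bunches up there; the denominator is
    unchanged."""
    data: dict[int, int] = defaultdict(int)
    for outcome, quantity in die.items():
        if lo is not None and outcome <= lo:
            data[lo] += quantity
        elif hi is not None and outcome >= hi:
            data[hi] += quantity
        else:
            data[outcome] += quantity
    return dict(data)

d6 = {n: 1 for n in range(1, 7)}
clip_dist(d6, 2, 5)  # {2: 2, 3: 1, 4: 1, 5: 2} — still denominator 6
```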
    def map(
        self,
        repl:
        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *extra_args,
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        time_limit: int | Literal['inf'] | None = None,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Maps outcomes of the `Die` to other outcomes.

        This is also useful for representing processes.

        As `icepool.map(repl, self, ...)`.
        """
        return icepool.map(repl,
                           self,
                           *extra_args,
                           star=star,
                           repeat=repeat,
                           time_limit=time_limit,
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)
    def map_and_time(
        self,
        repl:
        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
        /,
        *extra_args,
        star: bool | None = None,
        time_limit: int) -> 'Die[tuple[T_co, int]]':
        """Repeatedly map outcomes of the state to other outcomes, while also
        counting timesteps.

        This is useful for representing processes.

        As `map_and_time(repl, self, ...)`.
        """
        return icepool.map_and_time(repl,
                                    self,
                                    *extra_args,
                                    star=star,
                                    time_limit=time_limit)
    def time_to_sum(self: 'Die[int]',
                    target: int,
                    /,
                    max_time: int,
                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
        """The number of rolls until the cumulative sum is greater than or equal to the target.

        Args:
            target: The number to stop at once reached.
            max_time: The maximum number of rolls to run.
                If the sum is not reached, the outcome is determined by `dnf`.
            dnf: What time to assign in cases where the target was not reached
                in `max_time`. If not provided, this is set to `max_time`.
                `dnf=icepool.Reroll` will remove this case from the result,
                effectively rerolling it.
        """
        if target <= 0:
            return Die([0])

        if dnf is None:
            dnf = max_time

        def step(total, roll):
            return min(total + roll, target)

        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
            step, self, time_limit=max_time)

        def get_time(total, time):
            if total < target:
                return dnf
            else:
                return time

        return result.map(get_time)
    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
        """The mean number of rolls until the cumulative sum is greater than or equal to the target.

        Args:
            target: The target sum.

        Raises:
            ValueError: If `self` has negative outcomes.
            ZeroDivisionError: If `self.mean() == 0`.
        """
        target = max(target, 0)

        if target < len(self._mean_time_to_sum_cache):
            return self._mean_time_to_sum_cache[target]

        if self.min_outcome() < 0:
            raise ValueError(
                'mean_time_to_sum does not handle negative outcomes.')
        time_per_effect = Fraction(self.denominator(),
                                   self.denominator() - self.quantity(0))

        for i in range(len(self._mean_time_to_sum_cache), target + 1):
            result = time_per_effect + self.reroll([
                0
            ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
            self._mean_time_to_sum_cache.append(result)

        return result
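The recurrence behind this method: each "effective" (nonzero) roll costs `time_per_effect` expected rolls, then the problem reduces to the remaining target. A standalone sketch for dice with non-negative integer outcomes (hypothetical `mean_time_to_sum` over plain quantity dicts, memoized with `lru_cache` instead of the source's cache list):

```python
from fractions import Fraction
from functools import lru_cache

def mean_time_to_sum(die: dict[int, int], target: int) -> Fraction:
    """Sketch of the Die.mean_time_to_sum recurrence:
    E[T(t)] = time_per_effect + mean over nonzero outcomes x of E[T(t - x)],
    with E[T(t)] = 0 for t <= 0."""
    denominator = sum(die.values())
    nonzero = {o: q for o, q in die.items() if o != 0}
    nonzero_quantity = sum(nonzero.values())
    time_per_effect = Fraction(denominator, nonzero_quantity)

    @lru_cache(maxsize=None)
    def mean_time(t: int) -> Fraction:
        if t <= 0:
            return Fraction(0)
        total = sum(q * mean_time(t - o) for o, q in nonzero.items())
        return time_per_effect + Fraction(total, nonzero_quantity)

    return mean_time(target)

d6 = {n: 1 for n in range(1, 7)}
mean_time_to_sum(d6, 2)  # Fraction(7, 6)
```

For a d6, reaching a sum of 1 always takes exactly one roll; reaching 2 takes 1 + 1/6 rolls on average, since only a first roll of 1 requires a second roll.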
    def explode(self,
                which: Collection[T_co] | Callable[..., bool] | None = None,
                /,
                *,
                star: bool | None = None,
                depth: int = 9,
                end=None) -> 'Die[T_co]':
        """Causes outcomes to be rolled again and added to the total.

        Args:
            which: Which outcomes to explode. Options:
                * A single outcome to explode.
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                    should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of additional dice to roll, not counting
                the initial roll.
                If not supplied, a default value will be used.
            end: Once `depth` is reached, further explosions will be treated
                as this value. By default, a zero value will be used.
                `icepool.Reroll` will make one extra final roll, rerolling until
                a non-exploding outcome is reached.
        """

        if which is None:
            outcome_set = {self.max_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth < 0:
            raise ValueError('depth cannot be negative.')
        elif depth == 0:
            return self

        def map_final(outcome):
            if outcome in outcome_set:
                return outcome + icepool.Again
            else:
                return outcome

        return self.map(map_final, again_depth=depth, again_end=end)
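A standalone sketch of depth-limited explosion with the default zero `end` (hypothetical `explode_dist` over plain quantity dicts; at the last level an exploding face simply keeps its value, since `Again` is replaced by 0):

```python
from collections import defaultdict

def explode_dist(die: dict[int, int], which: set[int], depth: int) -> dict[int, int]:
    """Sketch of Die.explode: outcomes in `which` roll again and add,
    up to `depth` extra dice."""
    if depth == 0:
        return dict(die)
    inner = explode_dist(die, which, depth - 1)
    inner_denom = sum(inner.values())
    data: dict[int, int] = defaultdict(int)
    for outcome, quantity in die.items():
        if outcome in which:
            for extra, q2 in inner.items():
                data[outcome + extra] += quantity * q2
        else:
            # Scale non-exploding outcomes to keep a common denominator.
            data[outcome] += quantity * inner_denom
    return dict(data)

d6 = {n: 1 for n in range(1, 7)}
explode_dist(d6, {6}, 1)  # {1: 6, ..., 5: 6, 7: 1, ..., 12: 1} out of 36
```

For a d6 exploding on 6 with depth 1, the outcome 6 itself never occurs alone: it always triggers a second roll, producing 7 through 12 with weight 1 each, while 1 through 5 keep weight 6 of 36.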
    def if_else(
        self,
        outcome_if_true: U | 'Die[U]',
        outcome_if_false: U | 'Die[U]',
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Ternary conditional operator.

        This replaces truthy outcomes with the first argument and falsy outcomes
        with the second argument.

        Args:
            again_count, again_depth, again_end: Forwarded to the final die
                constructor.
        """
        return self.map(lambda x: bool(x)).map(
            {
                True: outcome_if_true,
                False: outcome_if_false
            },
            again_count=again_count,
            again_depth=again_depth,
            again_end=again_end)
    def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
        """A die that returns `True` iff the roll of the die is contained in the target."""
        return self.map(lambda x: x in target)
    def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
        """Rolls this die a number of times and counts how many are in the target."""
        return rolls @ self.is_in(target)
    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
        """Possible sequences produced by rolling this die a number of times.

        This is extremely expensive computationally. If possible, use `reduce()`
        instead; if you don't care about order, `Die.pool()` is better.
        """
        return icepool.cartesian_product(*(self for _ in range(rolls)),
                                         outcome_type=tuple)  # type: ignore
    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
        """Creates a `Pool` from this `Die`.

        You might subscript the pool immediately afterwards, e.g.
        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
        lowest of 5d6.

        Args:
            rolls: The number of copies of this `Die` to put in the pool.
                Or, a sequence of one `int` per die acting as
                `keep_tuple`. Note that `...` cannot be used in the
                argument to this method, as the argument determines the size of
                the pool.
        """
        if isinstance(rolls, int):
            return icepool.Pool({self: rolls})
        else:
            pool_size = len(rolls)
            # Haven't dealt with narrowing return type.
            return icepool.Pool({self: pool_size})[rolls]  # type: ignore
    def keep(self,
             rolls: int | Sequence[int],
             index: slice | Sequence[int | EllipsisType] | int | None = None,
             /) -> 'Die':
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: The number of dice to roll.
            index: One of the following:
                * An `int`. This will count only the roll at the specified
                    index. In this case, the result is a `Die` rather than a
                    generator.
                * A `slice`. The selected dice are counted once each.
                * A sequence of `int`s with length equal to `rolls`.
                    Each roll is counted that many times, which could be
                    multiple or negative times.

                    Up to one `...` (`Ellipsis`) may be used. If no `...` is
                    used, the `rolls` argument may be omitted.

                    `...` will be replaced with a number of zero counts in order
                    to make up any missing elements compared to `rolls`.
                    This number may be "negative" if more `int`s are provided
                    than `rolls`. Specifically:

                    * If `index` is shorter than `rolls`, `...`
                        acts as enough zero counts to make up the difference.
                        E.g. `(1, ..., 1)` on five dice would act as
                        `(1, 0, 0, 0, 1)`.
                    * If `index` has length equal to `rolls`, `...` has no
                        effect. E.g. `(1, ..., 1)` on two dice would act as
                        `(1, 1)`.
                    * If `index` is longer than `rolls` and `...` is on one
                        side, elements will be dropped from `index` on the side
                        with `...`. E.g. `(..., 1, 2, 3)` on two dice would act
                        as `(2, 3)`.
                    * If `index` is longer than `rolls` and `...`
                        is in the middle, the counts will be as the sum of two
                        one-sided `...`.
                        E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus
                        `(..., 1)`. If `rolls` was 1 this would have the -1 and
                        1 cancel each other out.
        """
        if isinstance(rolls, int):
            if index is None:
                raise ValueError(
                    'If the number of rolls is an integer, an index argument must be provided.'
                )
            if isinstance(index, int):
                return self.pool(rolls).keep(index)
            else:
                return self.pool(rolls).keep(index).sum()  # type: ignore
        else:
            if index is not None:
                raise ValueError('Only one index sequence can be given.')
            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore
Selects elements after drawing and sorting and sums them.

Arguments:
- rolls: The number of dice to roll.
- index: One of the following:
  - An `int`. This will count only the roll at the specified index. In this case, the result is a `Die` rather than a generator.
  - A `slice`. The selected dice are counted once each.
  - A sequence of `int`s with length equal to `rolls`. Each roll is counted that many times, which could be multiple or negative times.

    Up to one `...` (`Ellipsis`) may be used. If no `...` is used, the `rolls` argument may be omitted. `...` will be replaced with a number of zero counts in order to make up any missing elements compared to `rolls`. This number may be "negative" if more `int`s are provided than `rolls`. Specifically:
    - If `index` is shorter than `rolls`, `...` acts as enough zero counts to make up the difference. E.g. `(1, ..., 1)` on five dice would act as `(1, 0, 0, 0, 1)`.
    - If `index` has length equal to `rolls`, `...` has no effect. E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
    - If `index` is longer than `rolls` and `...` is on one side, elements will be dropped from `index` on the side with `...`. E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
    - If `index` is longer than `rolls` and `...` is in the middle, the counts will be as the sum of two one-sided `...`. E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`. If `rolls` was 1 this would have the -1 and 1 cancel each other out.
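As a quick sanity check of the index semantics above, here is a stdlib-only brute force (no icepool required) of `d6.keep(3, (1, ..., 1))`: on three dice, `(1, ..., 1)` expands to counts `(1, 0, 1)`, i.e. lowest plus highest.

```python
from itertools import product
from collections import Counter

# Distribution of (lowest + highest) of 3d6, matching counts (1, 0, 1).
dist = Counter()
for rolls in product(range(1, 7), repeat=3):
    ordered = sorted(rolls)
    dist[ordered[0] + ordered[-1]] += 1

# The denominator is the full 6**3 sample space.
assert sum(dist.values()) == 6**3
```

The only way to total 2 or 12 is all three dice showing the same extreme, so `dist[2] == dist[12] == 1`.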
```python
def lowest(self,
           rolls: int,
           /,
           keep: int | None = None,
           drop: int | None = None) -> 'Die':
    """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll. All dice will have the same
            outcomes as `self`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be
              summed.
            * If only `drop` is provided, the `drop` lowest dice will be
              dropped and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
              the next `keep` lowest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = lowest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == 0 and canonical.stop == 1:
        return self._lowest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
```
Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- rolls: The number of dice to roll. All dice will have the same outcomes as `self`.
- keep, drop: These arguments work together:
  - If neither are provided, the single lowest die will be taken.
  - If only `keep` is provided, the `keep` lowest dice will be summed.
  - If only `drop` is provided, the `drop` lowest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` lowest dice will be dropped, then the next `keep` lowest dice will be summed.

Returns:
A `Die` representing the probability distribution of the sum.
```python
def highest(self,
            rolls: int,
            /,
            keep: int | None = None,
            drop: int | None = None) -> 'Die[T_co]':
    """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be
              summed.
            * If only `drop` is provided, the `drop` highest dice will be
              dropped and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped, then
              the next `keep` highest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = highest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == rolls - 1 and canonical.stop == rolls:
        return self._highest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
```
Roll several of this `Die` and return the highest result, or the sum of some of the highest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- rolls: The number of dice to roll.
- keep, drop: These arguments work together:
  - If neither are provided, the single highest die will be taken.
  - If only `keep` is provided, the `keep` highest dice will be summed.
  - If only `drop` is provided, the `drop` highest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` highest dice will be dropped, then the next `keep` highest dice will be summed.

Returns:
A `Die` representing the probability distribution of the sum.
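For instance, `d6.highest(3, keep=2)` (the classic "roll 3d6, keep the two highest") can be checked by brute force with the standard library alone:

```python
from itertools import product
from collections import Counter

# Distribution of the sum of the two highest of 3d6.
dist = Counter()
for rolls in product(range(1, 7), repeat=3):
    dist[sum(sorted(rolls)[-2:])] += 1

# A sum of 12 requires at least two sixes: C(3,2)*5 + 1 = 16 ways.
assert dist[12] == 16
```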
```python
def middle(
        self,
        rolls: int,
        /,
        keep: int = 1,
        *,
        tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
    """Roll several of this `Die` and sum the sorted results in the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep: The number of outcomes to sum. If this is greater than
            `rolls`, all are kept.
        tie: What to do if `keep` is odd but `rolls` is even, or vice
            versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
    """
    # Expression evaluators are difficult to type.
    return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore
```
Roll several of this `Die` and sum the sorted results in the middle.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- rolls: The number of dice to roll.
- keep: The number of outcomes to sum. If this is greater than `rolls`, all are kept.
- tie: What to do if `keep` is odd but `rolls` is even, or vice versa.
  - 'error' (default): Raises `IndexError`.
  - 'high': The higher outcome is taken.
  - 'low': The lower outcome is taken.
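For example, `d6.middle(3)` is the median of 3d6; its distribution is easy to verify by brute force with the standard library alone:

```python
from itertools import product
from collections import Counter

# Distribution of the median of 3d6.
dist = Counter()
for rolls in product(range(1, 7), repeat=3):
    dist[sorted(rolls)[1]] += 1

# A median of 1 requires at least two 1s: 16 ways; symmetric for 6.
assert dist[1] == dist[6] == 16
```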
```python
def map_to_pool(
    self,
    repl:
    'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
    /,
    *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
    star: bool | None = None,
    denominator: int | None = None
) -> 'icepool.MultisetGenerator[U, tuple[int]]':
    """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.

    As `icepool.map_to_pool(repl, self, ...)`.

    If no argument is provided, the outcomes will be used to construct a
    mixture of pools directly, similar to the inverse of `pool.expand()`.
    Note that this is not particularly efficient since it does not make much
    use of dynamic programming.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
              produces a `Pool` (or something convertible to such).
            * A mapping from old outcomes to `Pool`
              (or something convertible to such).
              In this case args must have exactly one element.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        star: If `True`, the first of the args will be unpacked before
            giving them to `repl`.
            If not provided, it will be guessed based on the signature of
            `repl` and the number of arguments.
        denominator: If provided, the denominator of the result will be this
            value. Otherwise it will be the minimum to correctly weight the
            pools.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.

    Raises:
        ValueError: If `denominator` cannot be made consistent with the
            resulting mixture of pools.
    """
    if repl is None:
        repl = lambda x: x
    return icepool.map_to_pool(repl,
                               self,
                               *extra_args,
                               star=star,
                               denominator=denominator)
```
EXPERIMENTAL: Maps outcomes of this `Die` to `Pool`s, creating a `MultisetGenerator`.

As `icepool.map_to_pool(repl, self, ...)`.

If no argument is provided, the outcomes will be used to construct a mixture of pools directly, similar to the inverse of `pool.expand()`. Note that this is not particularly efficient since it does not make much use of dynamic programming.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a `Pool` (or something convertible to such).
  - A mapping from old outcomes to `Pool` (or something convertible to such). In this case args must have exactly one element.

  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`. If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- denominator: If provided, the denominator of the result will be this value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the resulting mixture of pools.
```python
def explode_to_pool(
        self,
        rolls: int,
        which: Collection[T_co] | Callable[..., bool] | None = None,
        /,
        *,
        star: bool | None = None,
        depth: int = 9) -> 'icepool.MultisetGenerator[T_co, tuple[int]]':
    """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

    Args:
        rolls: The number of initial dice.
        which: Which outcomes to explode. Options:
            * A single outcome to explode.
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
              should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum depth of explosions for an individual die.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.
    """
    if depth == 0:
        return self.pool(rolls)
    if which is None:
        explode_set = {self.max_outcome()}
    else:
        explode_set = self._select_outcomes(which, star)
    if not explode_set:
        return self.pool(rolls)
    explode, not_explode = self.split(explode_set)

    single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
        int)
    for i in range(depth + 1):
        weight = explode.denominator()**i * self.denominator()**(
            depth - i) * not_explode.denominator()
        single_data[icepool.Vector((i, 1))] += weight
    single_data[icepool.Vector(
        (depth + 1, 0))] += explode.denominator()**(depth + 1)

    single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
    count_die = rolls @ single_count_die

    return count_die.map_to_pool(
        lambda x, nx: [explode] * x + [not_explode] * nx)
```
EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

Arguments:
- rolls: The number of initial dice.
- which: Which outcomes to explode. Options:
  - A single outcome to explode.
  - A collection of outcomes to explode.
  - A callable that takes an outcome and returns `True` if it should be exploded.
  - If not supplied, the max outcome will explode.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `which`. If not provided, this will be guessed based on the function signature.
- depth: The maximum depth of explosions for an individual die.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.
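The per-die bookkeeping behind this method can be sketched without icepool: for a d6 exploding on its max outcome, the chance of exactly `i` explosions is geometric in the explosion probability, with the remaining mass assigned to chains cut off at the `depth` cap.

```python
from fractions import Fraction

depth = 9
p = Fraction(1, 6)  # probability a d6 shows its max outcome and explodes

# Exactly i explosions followed by a non-exploding roll, for i <= depth...
probs = [(1 - p) * p**i for i in range(depth + 1)]
# ...plus the case where the chain hits the depth cap.
probs.append(p**(depth + 1))

# The cases partition the probability space.
assert sum(probs) == 1
```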
```python
def reroll_to_pool(
    self,
    rolls: int,
    which: Callable[..., bool] | Collection[T_co],
    /,
    max_rerolls: int,
    *,
    star: bool | None = None,
    mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
) -> 'icepool.MultisetGenerator[T_co, tuple[int]]':
    """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

    Each die can only be rerolled once (effectively `depth=1`), and no more
    than `max_rerolls` dice may be rerolled.

    Args:
        rolls: How many dice in the pool.
        which: Selects which outcomes are eligible to be rerolled. Options:
            * A collection of outcomes to reroll.
            * A callable that takes an outcome and returns `True` if it
              could be rerolled.
        max_rerolls: The maximum number of dice to reroll.
            Note that each die can only be rerolled once, so if the number
            of eligible dice is less than this, the excess rerolls have no
            effect.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        mode: How dice are selected for rerolling if there are more eligible
            dice than `max_rerolls`. Options:
            * `'random'` (default): Eligible dice will be chosen uniformly
              at random.
            * `'lowest'`: The lowest eligible dice will be rerolled.
            * `'highest'`: The highest eligible dice will be rerolled.
            * `'drop'`: All dice that ended up on an outcome selected by
              `which` will be dropped. This includes both dice that rolled
              into `which` initially and were not rerolled, and dice that
              were rerolled but rolled into `which` again. This can be
              considerably more efficient than the other modes.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.
    """
    rerollable_set = self._select_outcomes(which, star)
    if not rerollable_set:
        return self.pool(rolls)

    rerollable_die, not_rerollable_die = self.split(rerollable_set)
    single_is_rerollable = icepool.coin(rerollable_die.denominator(),
                                        self.denominator())
    rerollable = rolls @ single_is_rerollable

    def split(initial_rerollable: int) -> Die[tuple[int, int, int]]:
        """Computes the composition of the pool.

        Returns:
            initial_rerollable: The number of dice that initially fell into
                the rerollable set.
            rerolled_to_rerollable: The number of dice that were rerolled,
                but fell into the rerollable set again.
            not_rerollable: The number of dice that ended up outside the
                rerollable set, including both initial and rerolled dice.
            not_rerolled: The number of dice that were eligible for
                rerolling but were not rerolled.
        """
        initial_not_rerollable = rolls - initial_rerollable
        rerolled = min(initial_rerollable, max_rerolls)
        not_rerolled = initial_rerollable - rerolled

        def second_split(rerolled_to_rerollable):
            """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets."""
            rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable
            return icepool.tupleize(
                initial_rerollable, rerolled_to_rerollable,
                initial_not_rerollable + rerolled_to_not_rerollable,
                not_rerolled)

        return icepool.map(second_split,
                           rerolled @ single_is_rerollable,
                           star=False)

    pool_composition = rerollable.map(split, star=False)

    def make_pool(initial_rerollable, rerolled_to_rerollable,
                  not_rerollable, not_rerolled):
        common = rerollable_die.pool(
            rerolled_to_rerollable) + not_rerollable_die.pool(
                not_rerollable)
        match mode:
            case 'random':
                return common + rerollable_die.pool(not_rerolled)
            case 'lowest':
                return common + rerollable_die.pool(
                    initial_rerollable).highest(not_rerolled)
            case 'highest':
                return common + rerollable_die.pool(
                    initial_rerollable).lowest(not_rerolled)
            case 'drop':
                return not_rerollable_die.pool(not_rerollable)
            case _:
                raise ValueError(
                    f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
                )

    denominator = self.denominator()**(rolls + min(rolls, max_rerolls))

    return pool_composition.map_to_pool(make_pool,
                                        star=True,
                                        denominator=denominator)
```
EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

Each die can only be rerolled once (effectively `depth=1`), and no more than `max_rerolls` dice may be rerolled.

Arguments:
- rolls: How many dice in the pool.
- which: Selects which outcomes are eligible to be rerolled. Options:
  - A collection of outcomes to reroll.
  - A callable that takes an outcome and returns `True` if it could be rerolled.
- max_rerolls: The maximum number of dice to reroll. Note that each die can only be rerolled once, so if the number of eligible dice is less than this, the excess rerolls have no effect.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `which`. If not provided, this will be guessed based on the function signature.
- mode: How dice are selected for rerolling if there are more eligible dice than `max_rerolls`. Options:
  - `'random'` (default): Eligible dice will be chosen uniformly at random.
  - `'lowest'`: The lowest eligible dice will be rerolled.
  - `'highest'`: The highest eligible dice will be rerolled.
  - `'drop'`: All dice that ended up on an outcome selected by `which` will be dropped. This includes both dice that rolled into `which` initially and were not rerolled, and dice that were rerolled but rolled into `which` again. This can be considerably more efficient than the other modes.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.
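A small stdlib-only sketch of these semantics (no icepool required), for the hypothetical scenario of summing `d6.reroll_to_pool(2, [1], 1, mode='lowest')`: roll 2d6, and if any die shows a 1, reroll one of them once. Weights are tallied over the common denominator `6**(rolls + min(rolls, max_rerolls))`.

```python
from itertools import product
from collections import Counter

dist = Counter()
for pair in product(range(1, 7), repeat=2):
    eligible = [x for x in pair if x == 1]
    kept = [x for x in pair if x != 1]
    if eligible:
        # Reroll one eligible die; any extra 1s beyond max_rerolls stay.
        stay = kept + eligible[1:]
        for reroll in range(1, 7):
            dist[sum(stay) + reroll] += 1
    else:
        dist[pair[0] + pair[1]] += 6  # scale no-reroll paths to 6**3

# All paths together cover the common denominator.
assert sum(dist.values()) == 6**3
```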
```python
def stochastic_round(self,
                     *,
                     max_denominator: int | None = None) -> 'Die[int]':
    """Randomly rounds outcomes up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    return self.map(lambda x: icepool.stochastic_round(
        x, max_denominator=max_denominator))
```
Randomly rounds outcomes up or down to the nearest integer according to the two distances.

Specifically, rounds `x` up with probability `x - floor(x)` and down otherwise.

Arguments:
- max_denominator: If provided, each rounding will be performed using `fractions.Fraction.limit_denominator(max_denominator)`. Otherwise, the rounding will be performed without `limit_denominator`.
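The rounding rule preserves the expected value, which a hypothetical stdlib-only `round_distribution` helper (not part of icepool) makes easy to see:

```python
from fractions import Fraction
from math import floor

def round_distribution(x: Fraction) -> dict[int, Fraction]:
    """Round x up with probability x - floor(x), down otherwise."""
    up = x - floor(x)
    if up == 0:
        return {floor(x): Fraction(1)}
    return {floor(x): 1 - up, floor(x) + 1: up}

# 9/4 = 2.25 rounds to 3 a quarter of the time, to 2 otherwise...
dist = round_distribution(Fraction(9, 4))
assert dist == {2: Fraction(3, 4), 3: Fraction(1, 4)}
# ...so the mean of the rounded result equals the original value.
assert sum(k * v for k, v in dist.items()) == Fraction(9, 4)
```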
```python
def cmp(self, other) -> 'Die[int]':
    """A `Die` with outcomes 1, -1, and 0.

    The quantities are equal to the positive outcome of `self > other`,
    `self < other`, and the remainder respectively.
    """
    other = implicit_convert_to_die(other)

    data = {}

    lt = self < other
    if True in lt:
        data[-1] = lt[True]
    eq = self == other
    if True in eq:
        data[0] = eq[True]
    gt = self > other
    if True in gt:
        data[1] = gt[True]

    return Die(data)
```
A `Die` with outcomes 1, -1, and 0.

The quantities are equal to the positive outcome of `self > other`, `self < other`, and the remainder respectively.
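For example, comparing two standard d6 this way splits the 36 ordered pairs into 15 wins, 15 losses, and 6 ties, which is easy to verify by brute force:

```python
from itertools import product
from collections import Counter

# d6 vs. d6: sign of the comparison over all 36 ordered pairs.
dist = Counter()
for a, b in product(range(1, 7), repeat=2):
    dist[(a > b) - (a < b)] += 1

assert dist == {1: 15, -1: 15, 0: 6}
```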
```python
def equals(self, other, *, simplify: bool = False) -> bool:
    """`True` iff both dice have the same outcomes and quantities.

    This is `False` if `other` is not a `Die`, even if it would convert
    to an equal `Die`.

    Truth value does NOT matter.

    If one `Die` has a zero-quantity outcome and the other `Die` does not
    contain that outcome, they are treated as unequal by this function.

    The `==` and `!=` operators have a dual purpose; they return a `Die`
    with a truth value determined by this method.
    Only dice returned by these methods have a truth value. The data of
    these dice is lazily evaluated since the caller may only be interested
    in the `Die` value or the truth value.

    Args:
        simplify: If `True`, the dice will be simplified before comparing.
            Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
    """
    if not isinstance(other, Die):
        return False

    if simplify:
        return self.simplify()._hash_key == other.simplify()._hash_key
    else:
        return self._hash_key == other._hash_key
```
`True` iff both dice have the same outcomes and quantities.

This is `False` if `other` is not a `Die`, even if it would convert to an equal `Die`.

Truth value does NOT matter.

If one `Die` has a zero-quantity outcome and the other `Die` does not contain that outcome, they are treated as unequal by this function.

The `==` and `!=` operators have a dual purpose; they return a `Die` with a truth value determined by this method. Only dice returned by these methods have a truth value. The data of these dice is lazily evaluated since the caller may only be interested in the `Die` value or the truth value.

Arguments:
- simplify: If `True`, the dice will be simplified before comparing. Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
```python
class Population(ABC, Generic[T_co], Mapping[Any, int]):
    """A mapping from outcomes to `int` quantities.

    Outcomes within each instance must be hashable and totally orderable.

    Subclasses include `Die` and `Deck`.
    """

    # Abstract methods.

    @property
    @abstractmethod
    def _new_type(self) -> type:
        """The type to use when constructing a new instance."""

    @abstractmethod
    def keys(self) -> CountsKeysView[T_co]:
        """The outcomes within the population in sorted order."""

    @abstractmethod
    def values(self) -> CountsValuesView:
        """The quantities within the population in outcome order."""

    @abstractmethod
    def items(self) -> CountsItemsView[T_co]:
        """The (outcome, quantity)s of the population in sorted order."""

    @property
    def _items_for_cartesian_product(self) -> Sequence[tuple[T_co, int]]:
        return self.items()

    def _unary_operator(self, op: Callable, *args, **kwargs):
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            new_outcome = op(outcome, *args, **kwargs)
            data[new_outcome] += quantity
        return self._new_type(data)

    # Outcomes.

    def outcomes(self) -> CountsKeysView[T_co]:
        """The outcomes of the mapping in ascending order.

        These are also the `keys` of the mapping.
        Prefer to use the name `outcomes`.
        """
        return self.keys()

    @cached_property
    def _common_outcome_length(self) -> int | None:
        result = None
        for outcome in self.outcomes():
            if isinstance(outcome, Mapping):
                return None
            elif isinstance(outcome, Sized):
                if result is None:
                    result = len(outcome)
                elif len(outcome) != result:
                    return None
        return result

    def common_outcome_length(self) -> int | None:
        """The common length of all outcomes.

        If outcomes have no lengths or different lengths, the result is `None`.
        """
        return self._common_outcome_length

    def is_empty(self) -> bool:
        """`True` iff this population has no outcomes."""
        return len(self) == 0

    def min_outcome(self) -> T_co:
        """The least outcome."""
        return self.outcomes()[0]

    def max_outcome(self) -> T_co:
        """The greatest outcome."""
        return self.outcomes()[-1]

    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
                /) -> T_co | None:
        """The nearest outcome in this population fitting the comparison.

        Args:
            comparison: The comparison which the result must fit. For example,
                '<=' would find the greatest outcome that is not greater than
                the argument.
            outcome: The outcome to compare against.

        Returns:
            The nearest outcome fitting the comparison, or `None` if there is
            no such outcome.
        """
        match comparison:
            case '<=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_right(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '<':
                index = bisect.bisect_left(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '>=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_left(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case '>':
                index = bisect.bisect_right(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case _:
                raise ValueError(f'Invalid comparison {comparison}')

    @staticmethod
    def _zero(x):
        return x * 0

    def zero(self: C) -> C:
        """Zeros all outcomes of this population.

        This is done by multiplying all outcomes by `0`.

        The result will have the same denominator.

        Raises:
            ValueError: If the zeros did not resolve to a single outcome.
        """
        result = self._unary_operator(Population._zero)
        if len(result) != 1:
            raise ValueError('zero() did not resolve to a single outcome.')
        return result

    def zero_outcome(self) -> T_co:
        """A zero-outcome for this population.

        E.g. `0` for a `Population` whose outcomes are `int`s.
        """
        return self.zero().outcomes()[0]

    # Quantities.

    @overload
    def quantity(self, outcome: Hashable, /) -> int:
        """The quantity of a single outcome."""

    @overload
    def quantity(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                 outcome: Hashable, /) -> int:
        """The total quantity fitting a comparison to a single outcome."""

    def quantity(self,
                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                 | Hashable,
                 outcome: Hashable | None = None,
                 /) -> int:
        """The quantity of a single outcome.

        A comparison can be provided, in which case this returns the total
        quantity fitting the comparison.

        Args:
            comparison: The comparison to use. This can be omitted, in which
                case it is treated as '=='.
            outcome: The outcome to query.
        """
        if outcome is None:
            outcome = comparison
            comparison = '=='
        else:
            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                              comparison)

        match comparison:
            case '==':
                return self.get(outcome, 0)
            case '!=':
                return self.denominator() - self.get(outcome, 0)
            case '<=' | '<':
                threshold = self.nearest(comparison, outcome)
                if threshold is None:
                    return 0
                else:
                    return self._cumulative_quantities[threshold]
            case '>=':
                return self.denominator() - self.quantity('<', outcome)
            case '>':
                return self.denominator() - self.quantity('<=', outcome)
            case _:
                raise ValueError(f'Invalid comparison {comparison}')

    @overload
    def quantities(self, /) -> CountsValuesView:
        """All quantities in sorted order."""

    @overload
    def quantities(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                   /) -> Sequence[int]:
        """The total quantities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.
        """

    def quantities(self,
                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                   | None = None,
                   /) -> CountsValuesView | Sequence[int]:
        """The quantities of the mapping in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: Optional. If omitted, this defaults to '=='.
        """
        if comparison is None:
            comparison = '=='

        match comparison:
            case '==':
                return self.values()
            case '<=':
                return tuple(itertools.accumulate(self.values()))
            case '>=':
                return tuple(
                    itertools.accumulate(self.values()[:-1],
                                         operator.sub,
                                         initial=self.denominator()))
            case '!=':
                return tuple(self.denominator() - q for q in self.values())
            case '<':
                return tuple(self.denominator() - q
                             for q in self.quantities('>='))
            case '>':
                return tuple(self.denominator() - q
                             for q in self.quantities('<='))
            case _:
                raise ValueError(f'Invalid comparison {comparison}')

    @cached_property
    def _cumulative_quantities(self) -> Mapping[T_co, int]:
        result = {}
        cdf = 0
        for outcome, quantity in self.items():
            cdf += quantity
            result[outcome] = cdf
        return result

    @cached_property
    def _denominator(self) -> int:
        return sum(self.values())

    def denominator(self) -> int:
        """The sum of all quantities (e.g. weights or duplicates).

        For the number of unique outcomes, use `len()`.
        """
        return self._denominator

    def multiply_quantities(self: C, scale: int, /) -> C:
        """Multiplies all quantities by an integer."""
        if scale == 1:
            return self
        data = {
            outcome: quantity * scale
            for outcome, quantity in self.items()
        }
        return self._new_type(data)

    def divide_quantities(self: C, divisor: int, /) -> C:
        """Divides all quantities by an integer, rounding down.

        Resulting zero quantities are dropped.
        """
        if divisor == 0:
            return self
        data = {
            outcome: quantity // divisor
            for outcome, quantity in self.items() if quantity >= divisor
        }
        return self._new_type(data)

    def modulo_quantities(self: C, divisor: int, /) -> C:
        """Modulus of all quantities with an integer."""
        data = {
            outcome: quantity % divisor
            for outcome, quantity in self.items()
        }
        return self._new_type(data)

    def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C:
        """Changes the denominator to a target number by changing the quantity of a specified outcome.

        Args:
            `target`: The denominator of the result.
            `outcome`: The outcome whose quantity will be adjusted.

        Returns:
            A `Population` like `self` but with the quantity of `outcome`
            adjusted so that the overall denominator is equal to `target`.
            If the quantity is reduced to zero, the outcome will be removed.

        Raises:
            `ValueError` if this would require the quantity of the specified
            outcome to be negative.
        """
        adjustment = target - self.denominator()
        data = {outcome: quantity for outcome, quantity in self.items()}
        new_quantity = data.get(outcome, 0) + adjustment
        if new_quantity > 0:
            data[outcome] = new_quantity
        elif new_quantity == 0:
            del data[outcome]
        else:
            raise ValueError(
                f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}'
            )
        return self._new_type(data)

    # Probabilities.

    @overload
    def probability(self, outcome: Hashable, /, *,
                    percent: Literal[False]) -> Fraction:
        """The probability of a single outcome, or 0.0 if not present."""

    @overload
    def probability(self, outcome: Hashable, /, *,
                    percent: Literal[True]) -> float:
        """The probability of a single outcome, or 0.0 if not present."""

    @overload
    def probability(self, outcome: Hashable, /) -> Fraction:
        """The probability of a single outcome, or 0.0 if not present."""

    @overload
    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                              '>'], outcome: Hashable, /, *,
                    percent: Literal[False]) -> Fraction:
        """The total probability of outcomes fitting a comparison."""

    @overload
    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                              '>'], outcome: Hashable, /, *,
                    percent: Literal[True]) -> float:
        """The total probability of outcomes fitting a comparison."""

    @overload
    def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                              '>'], outcome: Hashable,
                    /) -> Fraction:
        """The total probability of outcomes fitting a comparison."""

    def probability(self,
                    comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                    | Hashable,
                    outcome: Hashable | None = None,
                    /,
                    *,
                    percent: bool = False) -> Fraction | float:
        """The total probability of outcomes fitting a comparison."""
        if outcome is None:
            outcome = comparison
            comparison = '=='
        else:
            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                              comparison)
        result = Fraction(self.quantity(comparison, outcome),
                          self.denominator())
        return result * 100.0 if percent else result

    @overload
    def probabilities(self, /, *,
                      percent: Literal[False]) -> Sequence[Fraction]:
        """All probabilities in sorted order."""

    @overload
    def probabilities(self, /, *, percent: Literal[True]) -> Sequence[float]:
        """All probabilities in sorted order."""

    @overload
    def probabilities(self, /) -> Sequence[Fraction]:
        """All probabilities in sorted order."""

    @overload
    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                                '>'], /, *,
                      percent: Literal[False]) -> Sequence[Fraction]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.
        """

    @overload
    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                                '>'], /, *,
                      percent: Literal[True]) -> Sequence[float]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.
        """

    @overload
    def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                                '>'], /) -> Sequence[Fraction]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.
        """

    def probabilities(
            self,
            comparison: Literal['==', '!=', '<=', '<', '>=', '>']
            | None = None,
            /,
            *,
            percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: Optional. If omitted, this defaults to '=='.
        """
        if comparison is None:
            comparison = '=='

        result = tuple(
            Fraction(q, self.denominator())
            for q in self.quantities(comparison))

        if percent:
            return tuple(100.0 * x for x in result)
        else:
            return result

    # Scalar statistics.

    def mode(self) -> tuple:
        """A tuple containing the most common outcome(s) of the population.

        These are sorted from lowest to highest.
```
477 """ 478 return tuple(outcome for outcome, quantity in self.items() 479 if quantity == self.modal_quantity()) 480 481 def modal_quantity(self) -> int: 482 """The highest quantity of any single outcome. """ 483 return max(self.quantities()) 484 485 def kolmogorov_smirnov(self, other: 'Population') -> Fraction: 486 """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """ 487 outcomes = icepool.sorted_union(self, other) 488 return max( 489 abs( 490 self.probability('<=', outcome) - 491 other.probability('<=', outcome)) for outcome in outcomes) 492 493 def cramer_von_mises(self, other: 'Population') -> Fraction: 494 """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """ 495 outcomes = icepool.sorted_union(self, other) 496 return sum(((self.probability('<=', outcome) - 497 other.probability('<=', outcome))**2 498 for outcome in outcomes), 499 start=Fraction(0, 1)) 500 501 def median(self): 502 """The median, taking the mean in case of a tie. 503 504 This will fail if the outcomes do not support division; 505 in this case, use `median_low` or `median_high` instead. 506 """ 507 return self.quantile(1, 2) 508 509 def median_low(self) -> T_co: 510 """The median, taking the lower in case of a tie.""" 511 return self.quantile_low(1, 2) 512 513 def median_high(self) -> T_co: 514 """The median, taking the higher in case of a tie.""" 515 return self.quantile_high(1, 2) 516 517 def quantile(self, n: int, d: int = 100): 518 """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie. 519 520 This will fail if the outcomes do not support addition and division; 521 in this case, use `quantile_low` or `quantile_high` instead. 522 """ 523 # Should support addition and division. 
524 return (self.quantile_low(n, d) + 525 self.quantile_high(n, d)) / 2 # type: ignore 526 527 def quantile_low(self, n: int, d: int = 100) -> T_co: 528 """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.""" 529 index = bisect.bisect_left(self.quantities('<='), 530 (n * self.denominator() + d - 1) // d) 531 if index >= len(self): 532 return self.max_outcome() 533 return self.outcomes()[index] 534 535 def quantile_high(self, n: int, d: int = 100) -> T_co: 536 """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.""" 537 index = bisect.bisect_right(self.quantities('<='), 538 n * self.denominator() // d) 539 if index >= len(self): 540 return self.max_outcome() 541 return self.outcomes()[index] 542 543 @overload 544 def mean(self: 'Population[numbers.Rational]') -> Fraction: 545 ... 546 547 @overload 548 def mean(self: 'Population[float]') -> float: 549 ... 550 551 def mean( 552 self: 'Population[numbers.Rational] | Population[float]' 553 ) -> Fraction | float: 554 return try_fraction( 555 sum(outcome * quantity for outcome, quantity in self.items()), 556 self.denominator()) 557 558 @overload 559 def variance(self: 'Population[numbers.Rational]') -> Fraction: 560 ... 561 562 @overload 563 def variance(self: 'Population[float]') -> float: 564 ... 
565 566 def variance( 567 self: 'Population[numbers.Rational] | Population[float]' 568 ) -> Fraction | float: 569 """This is the population variance, not the sample variance.""" 570 mean = self.mean() 571 mean_of_squares = try_fraction( 572 sum(quantity * outcome**2 for outcome, quantity in self.items()), 573 self.denominator()) 574 return mean_of_squares - mean * mean 575 576 def standard_deviation( 577 self: 'Population[numbers.Rational] | Population[float]') -> float: 578 return math.sqrt(self.variance()) 579 580 sd = standard_deviation 581 582 def standardized_moment( 583 self: 'Population[numbers.Rational] | Population[float]', 584 k: int) -> float: 585 sd = self.standard_deviation() 586 mean = self.mean() 587 ev = sum(p * (outcome - mean)**k # type: ignore 588 for outcome, p in zip(self.outcomes(), self.probabilities())) 589 return ev / (sd**k) 590 591 def skewness( 592 self: 'Population[numbers.Rational] | Population[float]') -> float: 593 return self.standardized_moment(3) 594 595 def excess_kurtosis( 596 self: 'Population[numbers.Rational] | Population[float]') -> float: 597 return self.standardized_moment(4) - 3.0 598 599 def entropy(self, base: float = 2.0) -> float: 600 """The entropy of a random sample from this population. 601 602 Args: 603 base: The logarithm base to use. Default is 2.0, which gives the 604 entropy in bits. 605 """ 606 return -sum(p * math.log(p, base) 607 for p in self.probabilities() if p > 0.0) 608 609 # Joint statistics. 
610 611 class _Marginals(Generic[C]): 612 """Helper class for implementing `marginals()`.""" 613 614 _population: C 615 616 def __init__(self, population, /): 617 self._population = population 618 619 def __len__(self) -> int: 620 """The minimum len() of all outcomes.""" 621 return min(len(x) for x in self._population.outcomes()) 622 623 def __getitem__(self, dims: int | slice, /): 624 """Marginalizes the given dimensions.""" 625 return self._population._unary_operator(operator.getitem, dims) 626 627 def __iter__(self) -> Iterator: 628 for i in range(len(self)): 629 yield self[i] 630 631 def __getattr__(self, key: str): 632 if key[0] == '_': 633 raise AttributeError(key) 634 return self._population._unary_operator(operator.attrgetter(key)) 635 636 @property 637 def marginals(self: C) -> _Marginals[C]: 638 """A property that applies the `[]` operator to outcomes. 639 640 For example, `population.marginals[:2]` will marginalize the first two 641 elements of sequence outcomes. 642 643 Attributes that do not start with an underscore will also be forwarded. 644 For example, `population.marginals.x` will marginalize the `x` attribute 645 from e.g. `namedtuple` outcomes. 646 """ 647 return Population._Marginals(self) 648 649 @overload 650 def covariance(self: 'Population[tuple[numbers.Rational, ...]]', i: int, 651 j: int) -> Fraction: 652 ... 653 654 @overload 655 def covariance(self: 'Population[tuple[float, ...]]', i: int, 656 j: int) -> float: 657 ... 
658 659 def covariance( 660 self: 661 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 662 i: int, j: int) -> Fraction | float: 663 mean_i = self.marginals[i].mean() 664 mean_j = self.marginals[j].mean() 665 return try_fraction( 666 sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity 667 for outcome, quantity in self.items()), self.denominator()) 668 669 def correlation( 670 self: 671 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 672 i: int, j: int) -> float: 673 sd_i = self.marginals[i].standard_deviation() 674 sd_j = self.marginals[j].standard_deviation() 675 return self.covariance(i, j) / (sd_i * sd_j) 676 677 # Transformations. 678 679 def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C: 680 """Converts the outcomes of this population to a one-hot representation. 681 682 Args: 683 outcomes: If provided, each outcome will be mapped to a `Vector` 684 where the element at `outcomes.index(outcome)` is set to `True` 685 and the rest to `False`, or all `False` if the outcome is not 686 in `outcomes`. 687 If not provided, `self.outcomes()` is used. 688 """ 689 if outcomes is None: 690 outcomes = self.outcomes() 691 692 data: MutableMapping[Vector[bool], int] = defaultdict(int) 693 for outcome, quantity in zip(self.outcomes(), self.quantities()): 694 value = [False] * len(outcomes) 695 if outcome in outcomes: 696 value[outcomes.index(outcome)] = True 697 data[Vector(value)] += quantity 698 return self._new_type(data) 699 700 def sample(self) -> T_co: 701 """A single random sample from this population. 702 703 Note that this is always "with replacement" even for `Deck` since 704 instances are immutable. 705 706 This uses the standard `random` package and is not cryptographically 707 secure. 708 """ 709 # We don't use random.choices since that is based on floats rather than ints. 
710 r = random.randrange(self.denominator()) 711 index = bisect.bisect_right(self.quantities('<='), r) 712 return self.outcomes()[index] 713 714 def format(self, format_spec: str, /, **kwargs) -> str: 715 """Formats this mapping as a string. 716 717 `format_spec` should start with the output format, 718 which can be: 719 * `md` for Markdown (default) 720 * `bbcode` for BBCode 721 * `csv` for comma-separated values 722 * `html` for HTML 723 724 After this, you may optionally add a `:` followed by a series of 725 requested columns. Allowed columns are: 726 727 * `o`: Outcomes. 728 * `*o`: Outcomes, unpacked if applicable. 729 * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome. 730 * `p==`, `p<=`, `p>=`: Probabilities (0-1). 731 * `%==`, `%<=`, `%>=`: Probabilities (0%-100%). 732 * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N". 733 734 Columns may optionally be separated using `|` characters. 735 736 The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the 737 columns are the outcomes (unpacked if applicable) the quantities, and 738 the probabilities. The quantities are omitted from the default columns 739 if any individual quantity is 10**30 or greater. 
740 """ 741 if not self.is_empty() and self.modal_quantity() < 10**30: 742 default_column_spec = '*oq==%==' 743 else: 744 default_column_spec = '*o%==' 745 if len(format_spec) == 0: 746 format_spec = 'md:' + default_column_spec 747 748 format_spec = format_spec.replace('|', '') 749 750 parts = format_spec.split(':') 751 752 if len(parts) == 1: 753 output_format = parts[0] 754 col_spec = default_column_spec 755 elif len(parts) == 2: 756 output_format = parts[0] 757 col_spec = parts[1] 758 else: 759 raise ValueError('format_spec has too many colons.') 760 761 match output_format: 762 case 'md': 763 return icepool.population.format.markdown(self, col_spec) 764 case 'bbcode': 765 return icepool.population.format.bbcode(self, col_spec) 766 case 'csv': 767 return icepool.population.format.csv(self, col_spec, **kwargs) 768 case 'html': 769 return icepool.population.format.html(self, col_spec) 770 case _: 771 raise ValueError( 772 f"Unsupported output format '{output_format}'") 773 774 def __format__(self, format_spec: str, /) -> str: 775 return self.format(format_spec) 776 777 def __str__(self) -> str: 778 return f'{self}'
A mapping from outcomes to `int` quantities.

Outcomes with each instance must be hashable and totally orderable.
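Conceptually, a population is a sorted mapping from outcome to integer quantity, with the denominator being the sum of all quantities. A plain-Python sketch (not icepool API; the weighted d4 here is just an illustration):

```python
from fractions import Fraction

# A population is conceptually a sorted mapping: outcome -> int quantity.
# This models a weighted d4 whose 4 face has twice the weight of the others.
quantities = {1: 1, 2: 1, 3: 1, 4: 2}

denominator = sum(quantities.values())      # total quantity: 5
p_four = Fraction(quantities[4], denominator)  # exact probability: 2/5
```

Keeping quantities as exact integers (and probabilities as `Fraction`s) is what lets the package avoid floating-point error.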
    @abstractmethod
    def keys(self) -> CountsKeysView[T_co]:
        """The outcomes within the population in sorted order."""
    @abstractmethod
    def values(self) -> CountsValuesView:
        """The quantities within the population in outcome order."""
    @abstractmethod
    def items(self) -> CountsItemsView[T_co]:
        """The (outcome, quantity)s of the population in sorted order."""
    def common_outcome_length(self) -> int | None:
        """The common length of all outcomes.

        If outcomes have no lengths or different lengths, the result is `None`.
        """
        return self._common_outcome_length
    def is_empty(self) -> bool:
        """`True` iff this population has no outcomes."""
        return len(self) == 0
    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
                /) -> T_co | None:
        """The nearest outcome in this population fitting the comparison.

        Args:
            comparison: The comparison which the result must fit. For example,
                '<=' would find the greatest outcome that is not greater than
                the argument.
            outcome: The outcome to compare against.

        Returns:
            The nearest outcome fitting the comparison, or `None` if there is
            no such outcome.
        """
        match comparison:
            case '<=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_right(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '<':
                index = bisect.bisect_left(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '>=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_left(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case '>':
                index = bisect.bisect_right(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
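The comparison branches above all reduce to a bisection over the sorted outcomes. A self-contained sketch of the `'<='` case on a plain sorted list (the function name is illustrative; the real method also short-circuits on exact membership, which gives the same answer for a sorted list):

```python
import bisect

def nearest_le(outcomes: list, target):
    """Greatest outcome <= target, or None if all outcomes exceed target.

    Mirrors the '<=' branch: bisect_right finds the insertion point to the
    right of any equal outcomes, so the element just before it is the answer.
    """
    index = bisect.bisect_right(outcomes, target) - 1
    if index < 0:
        return None
    return outcomes[index]
```

For example, on outcomes `[1, 2, 3, 5]`, `nearest_le(..., 4)` returns `3`, and `nearest_le(..., 0)` returns `None`.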
    def zero(self: C) -> C:
        """Zeros all outcomes of this population.

        This is done by multiplying all outcomes by `0`.

        The result will have the same denominator.

        Raises:
            ValueError: If the zeros did not resolve to a single outcome.
        """
        result = self._unary_operator(Population._zero)
        if len(result) != 1:
            raise ValueError('zero() did not resolve to a single outcome.')
        return result
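The effect of zeroing is that every outcome maps to the same "zero" value, so all quantities merge under that single outcome while the denominator is preserved. A minimal sketch of that collapse on a plain dict of `int` outcomes (illustrative, not the icepool internals):

```python
# zero() maps each outcome through multiplication by 0; for int outcomes
# everything collapses to the single outcome 0, preserving the denominator.
quantities = {1: 2, 2: 3}

zeroed: dict[int, int] = {}
for outcome, quantity in quantities.items():
    zero_outcome = outcome * 0
    zeroed[zero_outcome] = zeroed.get(zero_outcome, 0) + quantity

# zeroed == {0: 5}; the denominator (5) is unchanged.
```

If outcomes were of a type where `x * 0` does not produce a single common value, the merge would leave more than one outcome, which is exactly the condition the real method raises `ValueError` on.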
    def zero_outcome(self) -> T_co:
        """A zero-outcome for this population.

        E.g. `0` for a `Population` whose outcomes are `int`s.
        """
        return self.zero().outcomes()[0]
    def quantity(self,
                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                 | Hashable,
                 outcome: Hashable | None = None,
                 /) -> int:
        """The quantity of a single outcome.

        A comparison can be provided, in which case this returns the total
        quantity fitting the comparison.

        Args:
            comparison: The comparison to use. This can be omitted, in which
                case it is treated as '=='.
            outcome: The outcome to query.
        """
        if outcome is None:
            outcome = comparison
            comparison = '=='
        else:
            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                              comparison)

        match comparison:
            case '==':
                return self.get(outcome, 0)
            case '!=':
                return self.denominator() - self.get(outcome, 0)
            case '<=' | '<':
                threshold = self.nearest(comparison, outcome)
                if threshold is None:
                    return 0
                else:
                    return self._cumulative_quantities[threshold]
            case '>=':
                return self.denominator() - self.quantity('<', outcome)
            case '>':
                return self.denominator() - self.quantity('<=', outcome)
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
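The `'<='` and `'<'` branches work from a cumulative table (a running sum of quantities in outcome order), and the `'>='` / `'>'` branches are just complements against the denominator. A stdlib sketch of that bookkeeping (variable names are illustrative):

```python
import itertools

# outcome -> quantity, with keys in sorted outcome order.
quantities = {1: 1, 2: 2, 3: 3}
denominator = sum(quantities.values())  # 6

# Running sums give the '<=' cumulative table (_cumulative_quantities).
cdf = dict(zip(quantities, itertools.accumulate(quantities.values())))

q_le_2 = cdf[2]                  # quantity('<=', 2): 1 + 2 = 3
q_gt_2 = denominator - q_le_2    # quantity('>', 2) via the complement
```

Caching the cumulative table means every comparison query is a lookup (plus a bisection to find the threshold outcome) rather than a fresh sum.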
    def quantities(self,
                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                   | None = None,
                   /) -> CountsValuesView | Sequence[int]:
        """The quantities of the mapping in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: Optional. If omitted, this defaults to '=='.
        """
        if comparison is None:
            comparison = '=='

        match comparison:
            case '==':
                return self.values()
            case '<=':
                return tuple(itertools.accumulate(self.values()))
            case '>=':
                return tuple(
                    itertools.accumulate(self.values()[:-1],
                                         operator.sub,
                                         initial=self.denominator()))
            case '!=':
                return tuple(self.denominator() - q for q in self.values())
            case '<':
                return tuple(self.denominator() - q
                             for q in self.quantities('>='))
            case '>':
                return tuple(self.denominator() - q
                             for q in self.quantities('<='))
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
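The six comparison variants are all derived from the plain quantities by accumulation and complementing against the denominator. A worked stdlib sketch on a small example (values chosen for illustration):

```python
import itertools
import operator

values = (1, 2, 3)   # quantities in outcome order
denom = sum(values)  # 6

# '<=': running sum gives the CDF.
q_le = tuple(itertools.accumulate(values))                              # (1, 3, 6)
# '>=': start from the full denominator and subtract as outcomes pass.
q_ge = tuple(itertools.accumulate(values[:-1], operator.sub, initial=denom))  # (6, 5, 3)
# '<' and '>': complements of the above.
q_lt = tuple(denom - q for q in q_ge)                                   # (0, 1, 3)
q_gt = tuple(denom - q for q in q_le)                                   # (5, 3, 0)
```

Note the identities this relies on: for each outcome, `q_lt + q_ge == denom` and `q_le + q_gt == denom`.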
The quantities of the mapping in sorted order.
For example, '<=' gives the CDF.
Arguments:
- comparison: Optional. If omitted, this defaults to '=='.
287 def denominator(self) -> int: 288 """The sum of all quantities (e.g. weights or duplicates). 289 290 For the number of unique outcomes, use `len()`. 291 """ 292 return self._denominator
The sum of all quantities (e.g. weights or duplicates).
For the number of unique outcomes, use len()
.
294 def multiply_quantities(self: C, scale: int, /) -> C: 295 """Multiplies all quantities by an integer.""" 296 if scale == 1: 297 return self 298 data = { 299 outcome: quantity * scale 300 for outcome, quantity in self.items() 301 } 302 return self._new_type(data)
Multiplies all quantities by an integer.
304 def divide_quantities(self: C, divisor: int, /) -> C: 305 """Divides all quantities by an integer, rounding down. 306 307 Resulting zero quantities are dropped. 308 """ 309 if divisor == 0: 310 return self 311 data = { 312 outcome: quantity // divisor 313 for outcome, quantity in self.items() if quantity >= divisor 314 } 315 return self._new_type(data)
Divides all quantities by an integer, rounding down.
Resulting zero quantities are dropped.
317 def modulo_quantities(self: C, divisor: int, /) -> C: 318 """Modulus of all quantities with an integer.""" 319 data = { 320 outcome: quantity % divisor 321 for outcome, quantity in self.items() 322 } 323 return self._new_type(data)
Modulus of all quantities with an integer.
325 def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C: 326 """Changes the denominator to a target number by changing the quantity of a specified outcome. 327 328 Args: 329 `target`: The denominator of the result. 330 `outcome`: The outcome whose quantity will be adjusted. 331 332 Returns: 333 A `Population` like `self` but with the quantity of `outcome` 334 adjusted so that the overall denominator is equal to `target`. 335 If the denominator is reduced to zero, it will be removed. 336 337 Raises: 338 `ValueError` if this would require the quantity of the specified 339 outcome to be negative. 340 """ 341 adjustment = target - self.denominator() 342 data = {outcome: quantity for outcome, quantity in self.items()} 343 new_quantity = data.get(outcome, 0) + adjustment 344 if new_quantity > 0: 345 data[outcome] = new_quantity 346 elif new_quantity == 0: 347 del data[outcome] 348 else: 349 raise ValueError( 350 f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}' 351 ) 352 return self._new_type(data)
Changes the denominator to a target number by changing the quantity of a specified outcome.
Arguments:
target
: The denominator of the result.outcome
: The outcome whose quantity will be adjusted.
Returns:
A
Population
likeself
but with the quantity ofoutcome
adjusted so that the overall denominator is equal totarget
. If the denominator is reduced to zero, it will be removed.
Raises:
ValueError
if this would require the quantity of the specified- outcome to be negative.
388 def probability(self, 389 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 390 | Hashable, 391 outcome: Hashable | None = None, 392 /, 393 *, 394 percent: bool = False) -> Fraction | float: 395 """The total probability of outcomes fitting a comparison.""" 396 if outcome is None: 397 outcome = comparison 398 comparison = '==' 399 else: 400 comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'], 401 comparison) 402 result = Fraction(self.quantity(comparison, outcome), 403 self.denominator()) 404 return result * 100.0 if percent else result
The total probability of outcomes fitting a comparison.
445 def probabilities( 446 self, 447 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 448 | None = None, 449 /, 450 *, 451 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 452 """The total probabilities fitting the comparison for each outcome in sorted order. 453 454 For example, '<=' gives the CDF. 455 456 Args: 457 comparison: Optional. If omitted, this defaults to '=='. 458 """ 459 if comparison is None: 460 comparison = '==' 461 462 result = tuple( 463 Fraction(q, self.denominator()) 464 for q in self.quantities(comparison)) 465 466 if percent: 467 return tuple(100.0 * x for x in result) 468 else: 469 return result
The total probabilities fitting the comparison for each outcome in sorted order.
For example, '<=' gives the CDF.
Arguments:
- comparison: Optional. If omitted, this defaults to '=='.
473 def mode(self) -> tuple: 474 """A tuple containing the most common outcome(s) of the population. 475 476 These are sorted from lowest to highest. 477 """ 478 return tuple(outcome for outcome, quantity in self.items() 479 if quantity == self.modal_quantity())
A tuple containing the most common outcome(s) of the population.
These are sorted from lowest to highest.
481 def modal_quantity(self) -> int: 482 """The highest quantity of any single outcome. """ 483 return max(self.quantities())
The highest quantity of any single outcome.
485 def kolmogorov_smirnov(self, other: 'Population') -> Fraction: 486 """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """ 487 outcomes = icepool.sorted_union(self, other) 488 return max( 489 abs( 490 self.probability('<=', outcome) - 491 other.probability('<=', outcome)) for outcome in outcomes)
Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs.
493 def cramer_von_mises(self, other: 'Population') -> Fraction: 494 """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """ 495 outcomes = icepool.sorted_union(self, other) 496 return sum(((self.probability('<=', outcome) - 497 other.probability('<=', outcome))**2 498 for outcome in outcomes), 499 start=Fraction(0, 1))
Cramér-von Mises statistic. The sum-of-squares difference between CDFs.
501 def median(self): 502 """The median, taking the mean in case of a tie. 503 504 This will fail if the outcomes do not support division; 505 in this case, use `median_low` or `median_high` instead. 506 """ 507 return self.quantile(1, 2)
The median, taking the mean in case of a tie.
This will fail if the outcomes do not support division;
in this case, use median_low
or median_high
instead.
509 def median_low(self) -> T_co: 510 """The median, taking the lower in case of a tie.""" 511 return self.quantile_low(1, 2)
The median, taking the lower in case of a tie.
513 def median_high(self) -> T_co: 514 """The median, taking the higher in case of a tie.""" 515 return self.quantile_high(1, 2)
The median, taking the higher in case of a tie.
517 def quantile(self, n: int, d: int = 100): 518 """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie. 519 520 This will fail if the outcomes do not support addition and division; 521 in this case, use `quantile_low` or `quantile_high` instead. 522 """ 523 # Should support addition and division. 524 return (self.quantile_low(n, d) + 525 self.quantile_high(n, d)) / 2 # type: ignore
The outcome n / d
of the way through the CDF, taking the mean in case of a tie.
This will fail if the outcomes do not support addition and division;
in this case, use quantile_low
or quantile_high
instead.
527 def quantile_low(self, n: int, d: int = 100) -> T_co: 528 """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.""" 529 index = bisect.bisect_left(self.quantities('<='), 530 (n * self.denominator() + d - 1) // d) 531 if index >= len(self): 532 return self.max_outcome() 533 return self.outcomes()[index]
The outcome n / d
of the way through the CDF, taking the lesser in case of a tie.
535 def quantile_high(self, n: int, d: int = 100) -> T_co: 536 """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.""" 537 index = bisect.bisect_right(self.quantities('<='), 538 n * self.denominator() // d) 539 if index >= len(self): 540 return self.max_outcome() 541 return self.outcomes()[index]
The outcome n / d
of the way through the CDF, taking the greater in case of a tie.
566 def variance( 567 self: 'Population[numbers.Rational] | Population[float]' 568 ) -> Fraction | float: 569 """This is the population variance, not the sample variance.""" 570 mean = self.mean() 571 mean_of_squares = try_fraction( 572 sum(quantity * outcome**2 for outcome, quantity in self.items()), 573 self.denominator()) 574 return mean_of_squares - mean * mean
This is the population variance, not the sample variance.
582 def standardized_moment( 583 self: 'Population[numbers.Rational] | Population[float]', 584 k: int) -> float: 585 sd = self.standard_deviation() 586 mean = self.mean() 587 ev = sum(p * (outcome - mean)**k # type: ignore 588 for outcome, p in zip(self.outcomes(), self.probabilities())) 589 return ev / (sd**k)
599 def entropy(self, base: float = 2.0) -> float: 600 """The entropy of a random sample from this population. 601 602 Args: 603 base: The logarithm base to use. Default is 2.0, which gives the 604 entropy in bits. 605 """ 606 return -sum(p * math.log(p, base) 607 for p in self.probabilities() if p > 0.0)
The entropy of a random sample from this population.
Arguments:
- base: The logarithm base to use. Default is 2.0, which gives the entropy in bits.
636 @property 637 def marginals(self: C) -> _Marginals[C]: 638 """A property that applies the `[]` operator to outcomes. 639 640 For example, `population.marginals[:2]` will marginalize the first two 641 elements of sequence outcomes. 642 643 Attributes that do not start with an underscore will also be forwarded. 644 For example, `population.marginals.x` will marginalize the `x` attribute 645 from e.g. `namedtuple` outcomes. 646 """ 647 return Population._Marginals(self)
A property that applies the []
operator to outcomes.
For example, population.marginals[:2]
will marginalize the first two
elements of sequence outcomes.
Attributes that do not start with an underscore will also be forwarded.
For example, population.marginals.x
will marginalize the x
attribute
from e.g. namedtuple
outcomes.
659 def covariance( 660 self: 661 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 662 i: int, j: int) -> Fraction | float: 663 mean_i = self.marginals[i].mean() 664 mean_j = self.marginals[j].mean() 665 return try_fraction( 666 sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity 667 for outcome, quantity in self.items()), self.denominator())
679 def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C: 680 """Converts the outcomes of this population to a one-hot representation. 681 682 Args: 683 outcomes: If provided, each outcome will be mapped to a `Vector` 684 where the element at `outcomes.index(outcome)` is set to `True` 685 and the rest to `False`, or all `False` if the outcome is not 686 in `outcomes`. 687 If not provided, `self.outcomes()` is used. 688 """ 689 if outcomes is None: 690 outcomes = self.outcomes() 691 692 data: MutableMapping[Vector[bool], int] = defaultdict(int) 693 for outcome, quantity in zip(self.outcomes(), self.quantities()): 694 value = [False] * len(outcomes) 695 if outcome in outcomes: 696 value[outcomes.index(outcome)] = True 697 data[Vector(value)] += quantity 698 return self._new_type(data)
Converts the outcomes of this population to a one-hot representation.
Arguments:
700 def sample(self) -> T_co: 701 """A single random sample from this population. 702 703 Note that this is always "with replacement" even for `Deck` since 704 instances are immutable. 705 706 This uses the standard `random` package and is not cryptographically 707 secure. 708 """ 709 # We don't use random.choices since that is based on floats rather than ints. 710 r = random.randrange(self.denominator()) 711 index = bisect.bisect_right(self.quantities('<='), r) 712 return self.outcomes()[index]
A single random sample from this population.

Note that this is always "with replacement" even for `Deck` since
instances are immutable.

This uses the standard `random` package and is not cryptographically
secure.
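The integer-based sampling above can be sketched standalone: build cumulative quantities, draw an integer below the total, and locate it with `bisect_right`. The `rng` parameter is an illustrative addition to make the sketch testable.

```python
import bisect
import itertools
import random

def sample(outcomes, quantities, rng=random):
    """Draw one outcome with probability proportional to its integer
    quantity, using integer arithmetic throughout (the same approach
    as Population.sample())."""
    cumulative = list(itertools.accumulate(quantities))
    r = rng.randrange(cumulative[-1])
    index = bisect.bisect_right(cumulative, r)
    return outcomes[index]
```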
```python
def format(self, format_spec: str, /, **kwargs) -> str:
    """Formats this mapping as a string.

    `format_spec` should start with the output format,
    which can be:
    * `md` for Markdown (default)
    * `bbcode` for BBCode
    * `csv` for comma-separated values
    * `html` for HTML

    After this, you may optionally add a `:` followed by a series of
    requested columns. Allowed columns are:

    * `o`: Outcomes.
    * `*o`: Outcomes, unpacked if applicable.
    * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
    * `p==`, `p<=`, `p>=`: Probabilities (0-1).
    * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
    * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".

    Columns may optionally be separated using `|` characters.

    The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
    columns are the outcomes (unpacked if applicable), the quantities, and
    the probabilities. The quantities are omitted from the default columns
    if any individual quantity is 10**30 or greater.
    """
    if not self.is_empty() and self.modal_quantity() < 10**30:
        default_column_spec = '*oq==%=='
    else:
        default_column_spec = '*o%=='
    if len(format_spec) == 0:
        format_spec = 'md:' + default_column_spec

    format_spec = format_spec.replace('|', '')

    parts = format_spec.split(':')

    if len(parts) == 1:
        output_format = parts[0]
        col_spec = default_column_spec
    elif len(parts) == 2:
        output_format = parts[0]
        col_spec = parts[1]
    else:
        raise ValueError('format_spec has too many colons.')

    match output_format:
        case 'md':
            return icepool.population.format.markdown(self, col_spec)
        case 'bbcode':
            return icepool.population.format.bbcode(self, col_spec)
        case 'csv':
            return icepool.population.format.csv(self, col_spec, **kwargs)
        case 'html':
            return icepool.population.format.html(self, col_spec)
        case _:
            raise ValueError(
                f"Unsupported output format '{output_format}'")
```
Formats this mapping as a string.

`format_spec` should start with the output format, which can be:

* `md` for Markdown (default)
* `bbcode` for BBCode
* `csv` for comma-separated values
* `html` for HTML

After this, you may optionally add a `:` followed by a series of
requested columns. Allowed columns are:

* `o`: Outcomes.
* `*o`: Outcomes, unpacked if applicable.
* `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
* `p==`, `p<=`, `p>=`: Probabilities (0-1).
* `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
* `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".

Columns may optionally be separated using `|` characters.

The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
columns are the outcomes (unpacked if applicable), the quantities, and
the probabilities. The quantities are omitted from the default columns
if any individual quantity is 10**30 or greater.
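The spec-parsing portion of `format()` can be isolated into a small helper for illustration. `parse_format_spec` is a hypothetical name, not part of icepool; the default column spec `'*oq==%=='` is taken from the source above.

```python
def parse_format_spec(format_spec: str,
                      default_column_spec: str = '*oq==%==') -> tuple:
    """Split a format spec such as 'md:*o|q==|%==' into an output format
    and a column spec, mirroring the parsing in Population.format()."""
    if not format_spec:
        format_spec = 'md:' + default_column_spec
    # '|' separators between columns are optional and simply ignored.
    format_spec = format_spec.replace('|', '')
    parts = format_spec.split(':')
    if len(parts) == 1:
        return parts[0], default_column_spec
    elif len(parts) == 2:
        return parts[0], parts[1]
    else:
        raise ValueError('format_spec has too many colons.')
```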
```python
def tupleize(
    *args: 'T | icepool.Population[T]'
) -> 'tuple[T, ...] | icepool.Population[tuple[T, ...]]':
    """Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.

    For example:
    * `tupleize(1, 2)` would produce `(1, 2)`.
    * `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`,
      ... `(6, 0)`.
    * `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`,
      ... `(6, 5)`, `(6, 6)`.

    If `Population`s are provided, they must all be `Die` or all `Deck` and not
    a mixture of the two.

    Returns:
        If none of the outcomes is a `Population`, the result is a `tuple`
        with one element per argument. Otherwise, the result is a `Population`
        of the same type as the input `Population`, and the outcomes are
        `tuple`s with one element per argument.
    """
    return cartesian_product(*args, outcome_type=tuple)
```
Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.

For example:

* `tupleize(1, 2)` would produce `(1, 2)`.
* `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`,
  ... `(6, 0)`.
* `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`,
  ... `(6, 5)`, `(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not
a mixture of the two.

Returns:
    If none of the outcomes is a `Population`, the result is a `tuple`
    with one element per argument. Otherwise, the result is a `Population`
    of the same type as the input `Population`, and the outcomes are
    `tuple`s with one element per argument.
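The Cartesian-product semantics can be sketched with `itertools.product` over plain mappings of outcomes to quantities. Unlike icepool's `tupleize`, this sketch also wraps plain values as single-outcome distributions, so it always returns a mapping.

```python
import itertools

def tupleize(*args):
    """Cartesian product of plain values and distributions
    (dicts of outcome -> quantity), producing tuple outcomes with
    multiplied quantities -- a stdlib sketch of icepool.tupleize()."""
    # Treat non-dict arguments as single outcomes with quantity 1.
    dists = [arg if isinstance(arg, dict) else {arg: 1} for arg in args]
    result = {}
    for combo in itertools.product(*(d.items() for d in dists)):
        outcome = tuple(o for o, _ in combo)
        quantity = 1
        for _, q in combo:
            quantity *= q
        result[outcome] = result.get(outcome, 0) + quantity
    return result

d2 = {1: 1, 2: 1}  # a fair two-sided die as a plain mapping
```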
```python
def vectorize(
    *args: 'T | icepool.Population[T]'
) -> 'Vector[T] | icepool.Population[Vector[T]]':
    """Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.

    For example:
    * `vectorize(1, 2)` would produce `Vector(1, 2)`.
    * `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`,
      `Vector(2, 0)`, ... `Vector(6, 0)`.
    * `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`,
      `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.

    If `Population`s are provided, they must all be `Die` or all `Deck` and not
    a mixture of the two.

    Returns:
        If none of the outcomes is a `Population`, the result is a `Vector`
        with one element per argument. Otherwise, the result is a `Population`
        of the same type as the input `Population`, and the outcomes are
        `Vector`s with one element per argument.
    """
    return cartesian_product(*args, outcome_type=Vector)
```
Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.

For example:

* `vectorize(1, 2)` would produce `Vector(1, 2)`.
* `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`,
  `Vector(2, 0)`, ... `Vector(6, 0)`.
* `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`,
  `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not
a mixture of the two.

Returns:
    If none of the outcomes is a `Population`, the result is a `Vector`
    with one element per argument. Otherwise, the result is a `Population`
    of the same type as the input `Population`, and the outcomes are
    `Vector`s with one element per argument.
```python
class Vector(Outcome, Sequence[T_co]):
    """Immutable tuple-like class that applies most operators elementwise.

    May become a variadic generic type in the future.
    """
    __slots__ = ['_data']

    _data: tuple[T_co, ...]

    def __init__(self,
                 elements: Iterable[T_co],
                 *,
                 truth_value: bool | None = None) -> None:
        self._data = tuple(elements)
        if any(isinstance(x, icepool.AgainExpression) for x in self._data):
            raise TypeError('Again is not a valid element of Vector.')
        self._truth_value = truth_value

    def __hash__(self) -> int:
        return hash((Vector, self._data))

    def __len__(self) -> int:
        return len(self._data)

    @overload
    def __getitem__(self, index: int) -> T_co:
        ...

    @overload
    def __getitem__(self, index: slice) -> 'Vector[T_co]':
        ...

    def __getitem__(self, index: int | slice) -> 'T_co | Vector[T_co]':
        if isinstance(index, int):
            return self._data[index]
        else:
            return Vector(self._data[index])

    def __iter__(self) -> Iterator[T_co]:
        return iter(self._data)

    # Unary operators.

    def unary_operator(self, op: Callable[..., U], *args,
                       **kwargs) -> 'Vector[U]':
        """Unary operators on `Vector` are applied elementwise.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        """
        return Vector(op(x, *args, **kwargs) for x in self)

    def __neg__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.neg)

    def __pos__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.pos)

    def __invert__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.invert)

    def abs(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.abs)

    __abs__ = abs

    def round(self, ndigits: int | None = None) -> 'Vector':
        return self.unary_operator(round, ndigits)

    __round__ = round

    def trunc(self) -> 'Vector':
        return self.unary_operator(math.trunc)

    __trunc__ = trunc

    def floor(self) -> 'Vector':
        return self.unary_operator(math.floor)

    __floor__ = floor

    def ceil(self) -> 'Vector':
        return self.unary_operator(math.ceil)

    __ceil__ = ceil

    # Binary operators.

    def binary_operator(self,
                        other,
                        op: Callable[..., U],
                        *args,
                        compare_for_truth: bool = False,
                        **kwargs) -> 'Vector[U]':
        """Binary operators on `Vector` are applied elementwise.

        If the other operand is also a `Vector`, the operator is applied to each
        pair of elements from `self` and `other`. Both must have the same
        length.

        Otherwise the other operand is broadcast to each element of `self`.

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`.

        `@` is not included due to its different meaning in `Die`.

        This is also used for the comparators
        `<, <=, >, >=, ==, !=`.

        In this case, the result also has a truth value based on lexicographic
        ordering.
        """
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        if isinstance(other, Vector):
            if len(self) == len(other):
                if compare_for_truth:
                    truth_value = cast(bool, op(self._data, other._data))
                else:
                    truth_value = None
                return Vector(
                    (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
                    truth_value=truth_value)
            else:
                raise IndexError(
                    f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
                )
        else:
            return Vector((op(x, other, *args, **kwargs) for x in self))

    def reverse_binary_operator(self, other, op: Callable[..., U], *args,
                                **kwargs) -> 'Vector[U]':
        """Reverse version of `binary_operator()`."""
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        if isinstance(other, Vector):
            if len(self) == len(other):
                return Vector(
                    op(y, x, *args, **kwargs) for x, y in zip(self, other))
            else:
                raise IndexError(
                    f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
                )
        else:
            return Vector(op(other, x, *args, **kwargs) for x in self)

    def __add__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.add)

    def __radd__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.add)

    def __sub__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.sub)

    def __rsub__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.sub)

    def __mul__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.mul)

    def __rmul__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.mul)

    def __truediv__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.truediv)

    def __rtruediv__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.truediv)

    def __floordiv__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.floordiv)

    def __rfloordiv__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.floordiv)

    def __pow__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.pow)

    def __rpow__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.pow)

    def __mod__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.mod)

    def __rmod__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.mod)

    def __lshift__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.lshift)

    def __rlshift__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.lshift)

    def __rshift__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.rshift)

    def __rrshift__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.rshift)

    def __and__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.and_)

    def __rand__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.and_)

    def __or__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.or_)

    def __ror__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.or_)

    def __xor__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.xor)

    def __rxor__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.xor)

    # Comparators.
    # These return a value with a truth value, but not a bool.

    def __lt__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.lt, compare_for_truth=True)

    def __le__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.le, compare_for_truth=True)

    def __gt__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.gt, compare_for_truth=True)

    def __ge__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.ge, compare_for_truth=True)

    def __eq__(self, other) -> 'Vector | bool':  # type: ignore
        if not isinstance(other, Vector):
            return False
        return self.binary_operator(other, operator.eq, compare_for_truth=True)

    def __ne__(self, other) -> 'Vector | bool':  # type: ignore
        if not isinstance(other, Vector):
            return True
        return self.binary_operator(other, operator.ne, compare_for_truth=True)

    def __bool__(self) -> bool:
        if self._truth_value is None:
            raise TypeError(
                'Vector only has a truth value if it is the result of a comparison operator.'
            )
        return self._truth_value

    # Sequence manipulation.

    def append(self, other) -> 'Vector':
        return Vector(self._data + (other, ))

    def concatenate(self, other: 'Iterable') -> 'Vector':
        return Vector(itertools.chain(self, other))

    # Strings.

    def __repr__(self) -> str:
        return type(self).__qualname__ + '(' + repr(self._data) + ')'

    def __str__(self) -> str:
        return type(self).__qualname__ + '(' + str(self._data) + ')'
```
Immutable tuple-like class that applies most operators elementwise.
May become a variadic generic type in the future.
```python
def unary_operator(self, op: Callable[..., U], *args,
                   **kwargs) -> 'Vector[U]':
    """Unary operators on `Vector` are applied elementwise.

    This is used for the standard unary operators
    `-, +, abs, ~, round, trunc, floor, ceil`
    """
    return Vector(op(x, *args, **kwargs) for x in self)
```
Unary operators on `Vector` are applied elementwise.

This is used for the standard unary operators
`-, +, abs, ~, round, trunc, floor, ceil`
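The elementwise application can be sketched over plain tuples. This helper is illustrative, not icepool API.

```python
import math

def unary_operator(vector, op, *args):
    """Apply a unary operation elementwise, returning a new tuple --
    a stdlib sketch of Vector.unary_operator()."""
    return tuple(op(x, *args) for x in vector)

# e.g. unary_operator((1.7, 2.2), math.floor) applies floor per element.
```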
```python
def binary_operator(self,
                    other,
                    op: Callable[..., U],
                    *args,
                    compare_for_truth: bool = False,
                    **kwargs) -> 'Vector[U]':
    """Binary operators on `Vector` are applied elementwise.

    If the other operand is also a `Vector`, the operator is applied to each
    pair of elements from `self` and `other`. Both must have the same
    length.

    Otherwise the other operand is broadcast to each element of `self`.

    This is used for the standard binary operators
    `+, -, *, /, //, %, **, <<, >>, &, |, ^`.

    `@` is not included due to its different meaning in `Die`.

    This is also used for the comparators
    `<, <=, >, >=, ==, !=`.

    In this case, the result also has a truth value based on lexicographic
    ordering.
    """
    if isinstance(other, (icepool.Population, icepool.AgainExpression)):
        return NotImplemented  # delegate to the other
    if isinstance(other, Vector):
        if len(self) == len(other):
            if compare_for_truth:
                truth_value = cast(bool, op(self._data, other._data))
            else:
                truth_value = None
            return Vector(
                (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
                truth_value=truth_value)
        else:
            raise IndexError(
                f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
            )
    else:
        return Vector((op(x, other, *args, **kwargs) for x in self))
```
Binary operators on `Vector` are applied elementwise.

If the other operand is also a `Vector`, the operator is applied to each
pair of elements from `self` and `other`. Both must have the same length.

Otherwise the other operand is broadcast to each element of `self`.

This is used for the standard binary operators
`+, -, *, /, //, %, **, <<, >>, &, |, ^`.

`@` is not included due to its different meaning in `Die`.

This is also used for the comparators `<, <=, >, >=, ==, !=`.

In this case, the result also has a truth value based on lexicographic
ordering.
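The elementwise-with-broadcast behavior, including the separate lexicographic truth value for comparators, can be sketched over plain tuples. The helper and its `(result, truth_value)` return shape are illustrative; icepool's `Vector` stores the truth value internally instead.

```python
def binary_operator(left, right, op, compare_for_truth=False):
    """Elementwise binary operation on tuples with scalar broadcasting,
    mirroring the logic of Vector.binary_operator(). For comparators,
    the truth value follows the lexicographic comparison of the whole
    tuples."""
    if isinstance(right, tuple):
        if len(left) != len(right):
            raise IndexError('Both operands must be the same length.')
        truth_value = op(left, right) if compare_for_truth else None
        result = tuple(op(x, y) for x, y in zip(left, right))
    else:
        # Broadcast the scalar to each element of the left operand.
        truth_value = None
        result = tuple(op(x, right) for x in left)
    return result, truth_value
```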
```python
def reverse_binary_operator(self, other, op: Callable[..., U], *args,
                            **kwargs) -> 'Vector[U]':
    """Reverse version of `binary_operator()`."""
    if isinstance(other, (icepool.Population, icepool.AgainExpression)):
        return NotImplemented  # delegate to the other
    if isinstance(other, Vector):
        if len(self) == len(other):
            return Vector(
                op(y, x, *args, **kwargs) for x, y in zip(self, other))
        else:
            raise IndexError(
                f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
            )
    else:
        return Vector(op(other, x, *args, **kwargs) for x in self)
```
Reverse version of `binary_operator()`.
```python
class Symbols(Mapping[str, int]):
    """EXPERIMENTAL: Immutable multiset of single characters.

    Spaces, dashes, and underscores cannot be used as symbols.

    Operations include:

    | Operation                   | Count / notes                      |
    |:----------------------------|:-----------------------------------|
    | `additive_union`, `+`       | `l + r`                            |
    | `difference`, `-`           | `l - r`                            |
    | `intersection`, `&`         | `min(l, r)`                        |
    | `union`, `\\|`              | `max(l, r)`                        |
    | `symmetric_difference`, `^` | `abs(l - r)`                       |
    | `multiply_counts`, `*`      | `count * n`                        |
    | `divide_counts`, `//`       | `count // n`                       |
    | `issubset`, `<=`            | all counts l <= r                  |
    | `issuperset`, `>=`          | all counts l >= r                  |
    | `==`                        | all counts l == r                  |
    | `!=`                        | any count l != r                   |
    | unary `+`                   | drop all negative counts           |
    | unary `-`                   | reverses the sign of all counts    |

    `<` and `>` are lexicographic orderings rather than subset relations.
    Specifically, they compare the count of each character in alphabetical
    order. For example:
    * `'a' > ''` since one `'a'` is more than zero `'a'`s.
    * `'a' > 'bb'` since `'a'` is compared first.
    * `'-a' < 'bb'` since the left side has -1 `'a'`s.
    * `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.

    Binary operators other than `*` and `//` implicitly convert the other
    argument to `Symbols` using the constructor.

    Subscripting with a single character returns the count of that character
    as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`.
    You can also access it as an attribute, e.g. `symbols.a`.

    Subscripting with multiple characters returns a `Symbols` with only those
    characters, dropping the rest.
    E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`.
    Again you can also access it as an attribute, e.g. `symbols.ab`.
    This is useful for reducing the outcome space, which reduces computational
    cost for further operations. If you want to keep only a single character
    while keeping the type as `Symbols`, you can subscript with that character
    plus an unused character.

    Subscripting with duplicate characters currently has no further effect, but
    this may change in the future.

    `Population.marginals` forwards attribute access, so you can use e.g.
    `die.marginals.a` to get the marginal distribution of `a`s.

    Note that attribute access only works with valid identifiers,
    so e.g. emojis would need to use the subscript method.
    """
    _data: Mapping[str, int]

    def __new__(cls,
                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """Constructor.

        The argument can be a string, an iterable of characters, or a mapping of
        characters to counts.

        If the argument is a string, negative symbols can be specified using a
        minus sign optionally surrounded by whitespace. For example,
        `a - b` has one positive a and one negative b.
        """
        self = super(Symbols, cls).__new__(cls)
        if isinstance(symbols, str):
            data: MutableMapping[str, int] = defaultdict(int)
            positive, *negative = re.split(r'\s*-\s*', symbols)
            for s in positive:
                data[s] += 1
            if len(negative) > 1:
                raise ValueError('Multiple dashes not allowed.')
            if len(negative) == 1:
                for s in negative[0]:
                    data[s] -= 1
        elif isinstance(symbols, Mapping):
            data = defaultdict(int, symbols)
        else:
            data = defaultdict(int)
            for s in symbols:
                data[s] += 1

        for s in data:
            if len(s) != 1:
                raise ValueError(f'Symbol {s} is not a single character.')
            if re.match(r'[\s_-]', s):
                raise ValueError(
                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')

        self._data = defaultdict(int,
                                 {k: data[k]
                                  for k in sorted(data.keys())})

        return self

    @classmethod
    def _new_raw(cls, data: defaultdict[str, int]) -> 'Symbols':
        self = super(Symbols, cls).__new__(cls)
        self._data = data
        return self

    # Mapping interface.

    def __getitem__(self, key: str) -> 'int | Symbols':  # type: ignore
        if len(key) == 1:
            return self._data[key]
        else:
            return Symbols._new_raw(
                defaultdict(int, {s: self._data[s]
                                  for s in key}))

    def __getattr__(self, key: str) -> 'int | Symbols':
        if key[0] == '_':
            raise AttributeError(key)
        return self[key]

    def __iter__(self) -> Iterator[str]:
        return iter(self._data)

    def __len__(self) -> int:
        return len(self._data)

    # Binary operators.

    def additive_union(self, *args:
                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The sum of counts of each symbol."""
        # Note: reduce() takes the initializer positionally, not as a keyword.
        return functools.reduce(operator.add, args, self)

    def __add__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] += count
        return Symbols._new_raw(data)

    __radd__ = __add__

    def difference(self, *args:
                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The difference between the counts of each symbol."""
        return functools.reduce(operator.sub, args, self)

    def __sub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] -= count
        return Symbols._new_raw(data)

    def __rsub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, Symbols(other)._data)
        for s, count in self.items():
            data[s] -= count
        return Symbols._new_raw(data)

    def intersection(self, *args:
                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The min count of each symbol."""
        return functools.reduce(operator.and_, args, self)

    def __and__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data: defaultdict[str, int] = defaultdict(int)
        for s, count in Symbols(other).items():
            data[s] = min(self.get(s, 0), count)
        return Symbols._new_raw(data)

    __rand__ = __and__

    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The max count of each symbol."""
        return functools.reduce(operator.or_, args, self)

    def __or__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = max(data[s], count)
        return Symbols._new_raw(data)

    __ror__ = __or__

    def symmetric_difference(
            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The absolute difference in symbol counts between the two sets."""
        return self ^ other

    def __xor__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = abs(data[s] - count)
        return Symbols._new_raw(data)

    __rxor__ = __xor__

    def multiply_counts(self, other: int) -> 'Symbols':
        """Multiplies all counts by an integer."""
        return self * other

    def __mul__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count * other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    __rmul__ = __mul__

    def divide_counts(self, other: int) -> 'Symbols':
        """Divides all counts by an integer, rounding down."""
        data = defaultdict(int, {
            s: count // other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def count_subset(self,
                     divisor: Iterable[str] | Mapping[str, int],
                     *,
                     empty_divisor: int | None = None) -> int:
        """The number of times the divisor is contained in this multiset."""
        if not isinstance(divisor, Mapping):
            divisor = Counter(divisor)
        result = None
        for s, count in divisor.items():
            current = self._data[s] // count
            if result is None or current < result:
                result = current
        if result is None:
            if empty_divisor is None:
                raise ZeroDivisionError('Divisor is empty.')
            else:
                return empty_divisor
        else:
            return result

    @overload
    def __floordiv__(self, other: int) -> 'Symbols':
        """Same as divide_counts()."""

    @overload
    def __floordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        """Same as count_subset()."""

    @overload
    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        ...

    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        if isinstance(other, int):
            return self.divide_counts(other)
        elif isinstance(other, Iterable):
            return self.count_subset(other)
        else:
            return NotImplemented

    def __rfloordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        return Symbols(other).count_subset(self)

    def modulo_counts(self, other: int) -> 'Symbols':
        return self % other

    def __mod__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count % other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def __lt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] < other[k]:  # type: ignore
                return True
            if self[k] > other[k]:  # type: ignore
                return False
        return False

    def __gt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] > other[k]:  # type: ignore
                return True
            if self[k] < other[k]:  # type: ignore
                return False
        return False

    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a subset of the other.

        Same as `<=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self <= other

    def __le__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] <= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a superset of the other.

        Same as `>=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self >= other

    def __ge__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] >= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` has any positive elements in common with the other.

        Raises:
            ValueError if either has negative elements.
        """
        other = Symbols(other)
        if self.has_negative_counts() or other.has_negative_counts():
            raise ValueError(
                "isdisjoint() is not defined for negative counts.")
        return any(self[s] > 0 and other[s] > 0  # type: ignore
                   for s in self)

    def __eq__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except ValueError:
            return NotImplemented
        return all(self[s] == other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def __ne__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except ValueError:
            return NotImplemented
        return any(self[s] != other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    # Unary operators.

    def has_negative_counts(self) -> bool:
        """Whether any counts are negative."""
        return any(c < 0 for c in self.values())

    def __pos__(self) -> 'Symbols':
        data = defaultdict(int, {
            s: count
            for s, count in self.items() if count > 0
        })
        return Symbols._new_raw(data)

    def __neg__(self) -> 'Symbols':
        data = defaultdict(int, {s: -count for s, count in self.items()})
        return Symbols._new_raw(data)

    @cached_property
    def _hash(self) -> int:
        return hash((Symbols, str(self)))

    def __hash__(self) -> int:
        return self._hash

    def count(self) -> int:
        """The total number of elements."""
        return sum(self._data.values())

    @cached_property
    def _str(self) -> str:
        sorted_keys = sorted(self)
        positive = ''.join(s * self._data[s] for s in sorted_keys
                           if self._data[s] > 0)
        negative = ''.join(s * -self._data[s] for s in sorted_keys
                           if self._data[s] < 0)
        if positive:
            if negative:
                return positive + ' - ' + negative
            else:
                return positive
        else:
            if negative:
                return '-' + negative
            else:
                return ''

    def __str__(self) -> str:
        """All symbols in unary form (i.e. including duplicates) in ascending order.

        If there are negative elements, they are listed following a ` - ` sign.
        """
        return self._str

    def __repr__(self) -> str:
        return type(self).__qualname__ + f"('{str(self)}')"
```
EXPERIMENTAL: Immutable multiset of single characters.

Spaces, dashes, and underscores cannot be used as symbols.

Operations include:

| Operation | Count / notes |
|---|---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `issubset`, `<=` | all counts `l <= r` |
| `issuperset`, `>=` | all counts `l >= r` |
| `==` | all counts `l == r` |
| `!=` | any count `l != r` |
| unary `+` | drop all negative counts |
| unary `-` | reverses the sign of all counts |

`<` and `>` are lexicographic orderings rather than subset relations.
Specifically, they compare the count of each character in alphabetical
order. For example:

* `'a' > ''` since one `'a'` is more than zero `'a'`s.
* `'a' > 'bb'` since `'a'` is compared first.
* `'-a' < 'bb'` since the left side has -1 `'a'`s.
* `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.

Binary operators other than `*` and `//` implicitly convert the other
argument to `Symbols` using the constructor.

Subscripting with a single character returns the count of that character
as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`.
You can also access it as an attribute, e.g. `symbols.a`.

Subscripting with multiple characters returns a `Symbols` with only those
characters, dropping the rest.
E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`.
Again you can also access it as an attribute, e.g. `symbols.ab`.
This is useful for reducing the outcome space, which reduces computational
cost for further operations. If you want to keep only a single character
while keeping the type as `Symbols`, you can subscript with that character
plus an unused character.

Subscripting with duplicate characters currently has no further effect, but this may change in the future.

`Population.marginals` forwards attribute access, so you can use e.g.
`die.marginals.a` to get the marginal distribution of `a`s.

Note that attribute access only works with valid identifiers, so e.g. emojis would need to use the subscript method.
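The count semantics in the table above can be sketched with plain `collections.Counter` objects; this is illustrative only, not icepool's `Symbols` implementation:

```python
from collections import Counter

# Model the per-symbol count rules from the table above with plain Counters.
l = Counter('aab')  # two a's, one b
r = Counter('abb')  # one a, two b's

additive_union = l + r                                      # count: l + r
intersection = {s: min(l[s], r[s]) for s in 'ab'}           # count: min(l, r)
union = {s: max(l[s], r[s]) for s in 'ab'}                  # count: max(l, r)
symmetric_difference = {s: abs(l[s] - r[s]) for s in 'ab'}  # count: abs(l - r)
```

For example, `additive_union` here has three `a`s and three `b`s, matching `Symbols('aab') + Symbols('abb')`.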
```python
    def __new__(cls,
                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """Constructor.

        The argument can be a string, an iterable of characters, or a mapping of
        characters to counts.

        If the argument is a string, negative symbols can be specified using a
        minus sign optionally surrounded by whitespace. For example,
        `a - b` has one positive a and one negative b.
        """
        self = super(Symbols, cls).__new__(cls)
        if isinstance(symbols, str):
            data: MutableMapping[str, int] = defaultdict(int)
            positive, *negative = re.split(r'\s*-\s*', symbols)
            for s in positive:
                data[s] += 1
            if len(negative) > 1:
                raise ValueError('Multiple dashes not allowed.')
            if len(negative) == 1:
                for s in negative[0]:
                    data[s] -= 1
        elif isinstance(symbols, Mapping):
            data = defaultdict(int, symbols)
        else:
            data = defaultdict(int)
            for s in symbols:
                data[s] += 1

        for s in data:
            if len(s) != 1:
                raise ValueError(f'Symbol {s} is not a single character.')
            if re.match(r'[\s_-]', s):
                raise ValueError(
                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')

        self._data = defaultdict(int,
                                 {k: data[k]
                                  for k in sorted(data.keys())})

        return self
```
Constructor.

The argument can be a string, an iterable of characters, or a mapping of characters to counts.

If the argument is a string, negative symbols can be specified using a minus sign optionally surrounded by whitespace. For example, `a - b` has one positive a and one negative b.
```python
    def additive_union(self, *args:
                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The sum of counts of each symbol."""
        return functools.reduce(operator.add, args, self)
```
The sum of counts of each symbol.
```python
    def difference(self, *args:
                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The difference between the counts of each symbol."""
        return functools.reduce(operator.sub, args, self)
```
The difference between the counts of each symbol.
```python
    def intersection(self, *args:
                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The min count of each symbol."""
        return functools.reduce(operator.and_, args, self)
```
The min count of each symbol.
```python
    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The max count of each symbol."""
        return functools.reduce(operator.or_, args, self)
```
The max count of each symbol.
```python
    def symmetric_difference(
            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The absolute difference in symbol counts between the two sets."""
        return self ^ other
```
The absolute difference in symbol counts between the two sets.
```python
    def multiply_counts(self, other: int) -> 'Symbols':
        """Multiplies all counts by an integer."""
        return self * other
```
Multiplies all counts by an integer.
```python
    def divide_counts(self, other: int) -> 'Symbols':
        """Divides all counts by an integer, rounding down."""
        data = defaultdict(int, {
            s: count // other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)
```
Divides all counts by an integer, rounding down.
```python
    def count_subset(self,
                     divisor: Iterable[str] | Mapping[str, int],
                     *,
                     empty_divisor: int | None = None) -> int:
        """The number of times the divisor is contained in this multiset."""
        if not isinstance(divisor, Mapping):
            divisor = Counter(divisor)
        result = None
        for s, count in divisor.items():
            current = self._data[s] // count
            if result is None or current < result:
                result = current
        if result is None:
            if empty_divisor is None:
                raise ZeroDivisionError('Divisor is empty.')
            else:
                return empty_divisor
        else:
            return result
```
The number of times the divisor is contained in this multiset.
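The semantics can be sketched with a plain `Counter`: the divisor fits into the multiset `floor(count / divisor_count)` times, limited by the scarcest symbol. (Illustrative only, not icepool's implementation.)

```python
from collections import Counter

# How many whole copies of `divisor` fit inside `multiset`?
def count_subset(multiset: str, divisor: str) -> int:
    m = Counter(multiset)
    # The scarcest symbol relative to the divisor's demand limits the count.
    return min(m[s] // count for s, count in Counter(divisor).items())
```

For example, `count_subset('aaabb', 'ab')` is `min(3 // 1, 2 // 1)`, i.e. 2.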
```python
    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a subset of the other.

        Same as `<=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self <= other
```
Whether `self` is a subset of the other.

Same as `<=`.

Note that the `<` and `>` operators are lexicographic orderings, not proper subset relations.
```python
    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a superset of the other.

        Same as `>=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self >= other
```
Whether `self` is a superset of the other.

Same as `>=`.

Note that the `<` and `>` operators are lexicographic orderings, not proper subset relations.
```python
    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` has any positive elements in common with the other.

        Raises:
            ValueError if either has negative elements.
        """
        other = Symbols(other)
        if self.has_negative_counts() or other.has_negative_counts():
            raise ValueError(
                "isdisjoint() is not defined for negative counts.")
        return any(self[s] > 0 and other[s] > 0  # type: ignore
                   for s in self)
```
Whether `self` has any positive elements in common with the other.

Raises:
- ValueError if either has negative elements.
A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor. `AgainExpression`s should not be fed to functions or methods other than `Die()`, but it can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()` affect how these arguments are processed. At most one of `again_count` or `again_depth` may be provided; if neither are provided, the behavior is as `again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.

#### Count mode

When `again_count` is provided, we start with one roll queued and execute one roll at a time. For every `Again` we roll, we queue another roll. If we run out of rolls, we sum the rolls to find the result. If the total number of rolls (not including the initial roll) would exceed `again_count`, we reroll the entire process, effectively conditioning the process on not rolling more than `again_count` extra dice.
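Count mode can be enumerated by hand for a small case. The sketch below (not icepool's algorithm) uses a hypothetical d6 where a 6 means `Again + 6`: with `again_count=1`, a second 6 would require a third roll, so those sequences are rerolled, i.e. the process is conditioned on at most one extra die.

```python
from fractions import Fraction

def exploding_d6_count_mode(again_count: int) -> dict[int, Fraction]:
    """Exact distribution of a d6 with `6: Again + 6` under count mode."""
    dist: dict[int, Fraction] = {}

    def visit(extra_left: int, total: int, p: Fraction) -> None:
        for face in range(1, 7):
            q = p / 6
            if face == 6:
                if extra_left > 0:
                    visit(extra_left - 1, total + 6, q)  # queue another roll
                # else: would exceed again_count; this mass is dropped and
                # restored by the renormalization below (the "reroll").
            else:
                dist[total + face] = dist.get(total + face, Fraction(0)) + q

    visit(again_count, 0, Fraction(1))
    norm = sum(dist.values())  # probability of a valid sequence
    return {k: v / norm for k, v in dist.items()}
```

With `again_count=1`, outcomes 1-5 each end up with probability 6/35 and outcomes 7-11 with 1/35 each.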
This mode only allows "additive" expressions to be used with `Again`, which means that only the following operators are allowed:

* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative. For example, `str` or `tuple` outcomes will not produce elements with a definite order.

#### Depth mode

When `again_depth=0`, `again_end` is directly substituted for each occurrence of `Again`. For other values of `again_depth`, the result for `again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth are rerolled.

#### Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:

* Count mode: Any result that would cause the number of rolls to exceed `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
```python
class CountsKeysView(KeysView[T], Sequence[T]):
    """This functions as both a `KeysView` and a `Sequence`."""

    def __init__(self, counts: Counts[T]):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._keys[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._keys == other
```
This functions as both a `KeysView` and a `Sequence`.
```python
class CountsValuesView(ValuesView[int], Sequence[int]):
    """This functions as both a `ValuesView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._values[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._values == other
```
This functions as both a `ValuesView` and a `Sequence`.
```python
class CountsItemsView(ItemsView[T, int], Sequence[tuple[T, int]]):
    """This functions as both an `ItemsView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._items[index]

    def __eq__(self, other):
        return self._mapping._items == other
```
This functions as both an `ItemsView` and a `Sequence`.
```python
def from_cumulative(outcomes: Sequence[T],
                    cumulative: 'Sequence[int] | Sequence[icepool.Die[bool]]',
                    *,
                    reverse: bool = False) -> 'icepool.Die[T]':
    """Constructs a `Die` from a sequence of cumulative values.

    Args:
        outcomes: The outcomes of the resulting die. Sorted order is recommended
            but not necessary.
        cumulative: The cumulative values (inclusive) of the outcomes in the
            order they are given to this function. These may be:
            * `int` cumulative quantities.
            * Dice representing the cumulative distribution at that point.
        reverse: Iff true, both of the arguments will be reversed. This allows
            e.g. constructing using a survival distribution.
    """
    if len(outcomes) == 0:
        return icepool.Die({})

    if reverse:
        outcomes = list(reversed(outcomes))
        cumulative = list(reversed(cumulative))  # type: ignore

    prev = 0
    d = {}

    if isinstance(cumulative[0], icepool.Die):
        cumulative = commonize_denominator(*cumulative)
        for outcome, die in zip(outcomes, cumulative):
            d[outcome] = die.quantity('!=', False) - prev
            prev = die.quantity('!=', False)
    elif isinstance(cumulative[0], int):
        cumulative = cast(Sequence[int], cumulative)
        for outcome, quantity in zip(outcomes, cumulative):
            d[outcome] = quantity - prev
            prev = quantity
    else:
        raise TypeError(
            f'Unsupported type {type(cumulative)} for cumulative values.')

    return icepool.Die(d)
```
Constructs a `Die` from a sequence of cumulative values.

Arguments:
- outcomes: The outcomes of the resulting die. Sorted order is recommended but not necessary.
- cumulative: The cumulative values (inclusive) of the outcomes in the order they are given to this function. These may be:
  - `int` cumulative quantities.
  - Dice representing the cumulative distribution at that point.
- reverse: Iff true, both of the arguments will be reversed. This allows e.g. constructing using a survival distribution.
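The differencing step for the `int` case can be sketched in plain Python: each outcome's quantity is its inclusive cumulative value minus the previous one.

```python
# Recover per-outcome quantities from inclusive cumulative quantities,
# as from_cumulative does for the `int` case (sketch only).
def quantities_from_cumulative(outcomes, cumulative):
    prev = 0
    quantities = {}
    for outcome, value in zip(outcomes, cumulative):
        quantities[outcome] = value - prev
        prev = value
    return quantities
```

For example, a fair d4 expressed as inclusive cumulative quantities: `quantities_from_cumulative([1, 2, 3, 4], [1, 2, 3, 4])` yields one count per outcome.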
```python
def from_rv(rv, outcomes: Sequence[int] | Sequence[float], denominator: int,
            **kwargs) -> 'icepool.Die[int] | icepool.Die[float]':
    """Constructs a `Die` from a rv object (as `scipy.stats`).

    This is done using the CDF.

    Args:
        rv: A rv object (as `scipy.stats`).
        outcomes: An iterable of `int`s or `float`s that will be the outcomes
            of the resulting `Die`.
            If the distribution is discrete, outcomes must be `int`s.
            Some outcomes may be omitted if their probability is too small
            compared to the denominator.
        denominator: The denominator of the resulting `Die` will be set to this.
        **kwargs: These will be forwarded to `rv.cdf()`.
    """
    if hasattr(rv, 'pdf'):
        # Continuous distributions use midpoints.
        midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
        cdf = rv.cdf(midpoints, **kwargs)
        quantities_le = tuple(int(round(x * denominator))
                              for x in cdf) + (denominator, )
    else:
        cdf = rv.cdf(outcomes, **kwargs)
        quantities_le = tuple(int(round(x * denominator)) for x in cdf)
    return from_cumulative(outcomes, quantities_le)
```
Constructs a `Die` from an rv object (as `scipy.stats`).

This is done using the CDF.

Arguments:
- rv: An rv object (as `scipy.stats`).
- outcomes: An iterable of `int`s or `float`s that will be the outcomes of the resulting `Die`. If the distribution is discrete, outcomes must be `int`s. Some outcomes may be omitted if their probability is too small compared to the denominator.
- denominator: The denominator of the resulting `Die` will be set to this.
- `**kwargs`: These will be forwarded to `rv.cdf()`.
```python
def pointwise_max(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling >= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling >= to that
    outcome is the same as the highest chance of rolling >= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the highest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get >= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = commonize_denominator(*args)
    outcomes = sorted_union(*args)
    cumulative = [
        min(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
```
Selects the highest chance of rolling >= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling >= that outcome is the same as the highest chance of rolling >= that outcome among the arguments.

Equivalently, any quantile in the result is the highest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get >= a threshold that is known but could change depending on the situation.

Arguments:
- dice: Either an iterable of dice, or two or more dice as separate arguments.
```python
def pointwise_min(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling <= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling <= to that
    outcome is the same as the highest chance of rolling <= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the lowest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get <= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = commonize_denominator(*args)
    outcomes = sorted_union(*args)
    cumulative = [
        max(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
```
Selects the highest chance of rolling <= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling <= that outcome is the same as the highest chance of rolling <= that outcome among the arguments.

Equivalently, any quantile in the result is the lowest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get <= a threshold that is known but could change depending on the situation.

Arguments:
- dice: Either an iterable of dice, or two or more dice as separate arguments.
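The pointwise construction can be sketched with dice as quantity dicts that already share a denominator: take the minimum (for `pointwise_max`) or maximum (for `pointwise_min`) of the cumulative quantities at each outcome, exactly as the listings above do.

```python
# Cumulative (<=) quantities of a quantity-dict die over the given outcomes.
def cumulative(die: dict[int, int], outcomes: list[int]) -> list[int]:
    total, out = 0, []
    for o in outcomes:
        total += die.get(o, 0)
        out.append(total)
    return out

a = {2: 1, 3: 1, 4: 1, 5: 1}  # d4 + 1, denominator 4
b = {1: 1, 5: 3}              # a lopsided die, denominator 4
outcomes = [1, 2, 3, 4, 5]
# pointwise_max: lowest CDF at each outcome (highest chance of rolling >=).
max_cdf = [min(x, y) for x, y in zip(cumulative(a, outcomes), cumulative(b, outcomes))]
# pointwise_min: highest CDF at each outcome (highest chance of rolling <=).
min_cdf = [max(x, y) for x, y in zip(cumulative(a, outcomes), cumulative(b, outcomes))]
```

Differencing `max_cdf` or `min_cdf` back into per-outcome quantities (as `from_cumulative` does) would give the resulting die.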
```python
def lowest(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int | None = None,
           drop: int | None = None,
           default: T | None = None) -> 'icepool.Die[T]':
    """The lowest outcome among the rolls, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `min()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be summed.
            * If only `drop` is provided, the `drop` lowest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
                the next `keep` lowest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "lowest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = lowest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
```
The lowest outcome among the rolls, or the sum of some of the lowest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `min()`.
- keep, drop: These arguments work together:
  - If neither are provided, the single lowest die will be taken.
  - If only `keep` is provided, the `keep` lowest dice will be summed.
  - If only `drop` is provided, the `drop` lowest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` lowest dice will be dropped, then the next `keep` lowest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- ValueError if an empty iterable is provided with no `default`.
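The `keep`/`drop` rules can be sketched on a single fixed roll: drop the `drop` lowest values, then sum the next `keep` lowest. (A sketch of the selection rules only; `lowest()` itself computes the full distribution over dice.)

```python
# keep/drop semantics of lowest(), applied to one concrete roll.
def lowest_of_roll(roll, keep=None, drop=None):
    if keep is None and drop is None:
        return min(roll)  # neither provided: single lowest
    start = drop or 0
    stop = None if keep is None else start + keep
    return sum(sorted(roll)[start:stop])
```

For example, `lowest_of_roll([4, 1, 3, 5], keep=2)` sums 1 and 3, while adding `drop=1` instead sums 3 and 4. The same slicing applies to `highest()` with the sort reversed.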
```python
def highest(arg0,
            /,
            *more_args: 'T | icepool.Die[T]',
            keep: int | None = None,
            drop: int | None = None,
            default: T | None = None) -> 'icepool.Die[T]':
    """The highest outcome among the rolls, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `max()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be summed.
            * If only `drop` is provided, the `drop` highest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped, then
                the next `keep` highest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "highest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = highest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
```
The highest outcome among the rolls, or the sum of some of the highest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `max()`.
- keep, drop: These arguments work together:
  - If neither are provided, the single highest die will be taken.
  - If only `keep` is provided, the `keep` highest dice will be summed.
  - If only `drop` is provided, the `drop` highest dice will be dropped and the rest will be summed.
  - If both are provided, `drop` highest dice will be dropped, then the next `keep` highest dice will be summed.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- ValueError if an empty iterable is provided with no `default`.
```python
def middle(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int = 1,
           tie: Literal['error', 'high', 'low'] = 'error',
           default: T | None = None) -> 'icepool.Die[T]':
    """The middle of the outcomes among the rolls, or the sum of some of the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments.
        keep: The number of outcomes to sum.
        tie: What to do if `keep` is odd but the number of args is even, or
            vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "middle() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    # Expression evaluators are difficult to type.
    return icepool.Pool(args).middle(keep, tie=tie).sum()  # type: ignore
```
The middle of the outcomes among the rolls, or the sum of some of the middle.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:
- args: Dice or individual outcomes in a single iterable, or as two or more separate arguments.
- keep: The number of outcomes to sum.
- tie: What to do if `keep` is odd but the number of args is even, or vice versa.
  - 'error' (default): Raises `IndexError`.
  - 'high': The higher outcome is taken.
  - 'low': The lower outcome is taken.
- default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:
- ValueError if an empty iterable is provided with no `default`.
```python
def min_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The minimum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable argument of such.
    """
    return min(_iter_outcomes(*args))
```
The minimum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
```python
def max_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The maximum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable argument of such.
    """
    return max(_iter_outcomes(*args))
```
The maximum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
```python
def consecutive(*args: Iterable[int]) -> Sequence[int]:
    """A minimal sequence of consecutive ints covering the argument sets."""
    start = min((x for x in itertools.chain(*args)), default=None)
    if start is None:
        return ()
    stop = max(x for x in itertools.chain(*args))
    return tuple(range(start, stop + 1))
```
A minimal sequence of consecutive ints covering the argument sets.
```python
def sorted_union(*args: Iterable[T]) -> tuple[T, ...]:
    """Merge sets into a sorted sequence."""
    if not args:
        return ()
    return tuple(sorted(set.union(*(set(arg) for arg in args))))
```
Merge sets into a sorted sequence.
```python
def commonize_denominator(
        *dice: 'T | icepool.Die[T]') -> tuple['icepool.Die[T]', ...]:
    """Scale the quantities of the dice so that all of them have the same denominator.

    The denominator is the LCM of the denominators of the arguments.

    Args:
        *dice: Any number of dice or single outcomes convertible to dice.

    Returns:
        A tuple of dice with the same denominator.
    """
    converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
    denominator_lcm = math.lcm(*(die.denominator() for die in converted_dice
                                 if die.denominator() > 0))
    return tuple(
        die.multiply_quantities(denominator_lcm //
                                die.denominator() if die.denominator() >
                                0 else 1) for die in converted_dice)
```
Scale the quantities of the dice so that all of them have the same denominator.
The denominator is the LCM of the denominators of the arguments.
Arguments:
- *dice: Any number of dice or single outcomes convertible to dice.
Returns:
A tuple of dice with the same denominator.
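The LCM scaling can be sketched with dice as quantity dicts: every die's quantities are multiplied so that all denominators match the LCM.

```python
import math

# Scale quantity maps to a common denominator (the LCM of the originals),
# mirroring what commonize_denominator does for dice. Sketch only.
def commonize(*dists: dict) -> tuple[dict, ...]:
    lcm = math.lcm(*(sum(d.values()) for d in dists))
    return tuple({o: q * (lcm // sum(d.values()))
                  for o, q in d.items()} for d in dists)
```

For example, a coin (denominator 2) and a d3 (denominator 3) both end up with denominator 6.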
```python
def reduce(
        function: 'Callable[[T, T], T | icepool.Die[T] | icepool.RerollType]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None) -> 'icepool.Die[T]':
    """Applies a function of two arguments cumulatively to a sequence of dice.

    Analogous to the
    [`functools` function of the same name.](https://docs.python.org/3/library/functools.html#functools.reduce)

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice, and produce an outcome
            of the same type. It may also return `Reroll`, in which case the
            entire sequence is effectively rerolled.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        result = icepool.implicit_convert_to_die(next(iter_dice))
    for die in iter_dice:
        result = map(function, result, die)
    return result
```
Applies a function of two arguments cumulatively to a sequence of dice.

Analogous to the [`functools` function of the same name](https://docs.python.org/3/library/functools.html#functools.reduce).

Arguments:
- function: The function to map. The function should take two arguments, which are an outcome from each of two dice, and produce an outcome of the same type. It may also return `Reroll`, in which case the entire sequence is effectively rerolled.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence of dice.
- again_count, again_depth, again_end: Forwarded to the final die constructor.
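The folding pattern can be sketched with dice as quantity dicts, with a pairwise dict convolution standing in for icepool's `map()`:

```python
import functools

# Pairwise sum of two quantity-dict dice by convolution (stand-in for map()).
def add_dice(a: dict[int, int], b: dict[int, int]) -> dict[int, int]:
    out: dict[int, int] = {}
    for x, p in a.items():
        for y, q in b.items():
            out[x + y] = out.get(x + y, 0) + p * q
    return out

d2 = {1: 1, 2: 1}
# Fold left to right, just as reduce() folds map(function, ...) over dice.
three_d2 = functools.reduce(add_dice, [d2, d2, d2])
```

`three_d2` is the distribution of 3d2: outcomes 3 through 6 with quantities 1, 3, 3, 1.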
```python
def accumulate(
        function: 'Callable[[T, T], T | icepool.Die[T]]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None
) -> Iterator['icepool.Die[T]']:
    """Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

    Analogous to the
    [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate)
    , though with no default function and
    the same parameter order as `reduce()`.

    The number of results is equal to the number of elements of `dice`, with
    one additional element if `initial` is provided.

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        try:
            result = icepool.implicit_convert_to_die(next(iter_dice))
        except StopIteration:
            return
    yield result
    for die in iter_dice:
        result = map(function, result, die)
        yield result
```
Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

Analogous to the [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate), though with no default function and the same parameter order as `reduce()`.

The number of results is equal to the number of elements of `dice`, with one additional element if `initial` is provided.

Arguments:
- function: The function to map. The function should take two arguments, which are an outcome from each of two dice.
- dice: A sequence of dice to map the function to, from left to right.
- initial: If provided, this will be placed at the front of the sequence of dice.
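The yielding pattern mirrors `itertools.accumulate` directly; here dice are again quantity dicts and a pairwise convolution stands in for `map()`:

```python
import itertools

# Pairwise sum of two quantity-dict dice by convolution (stand-in for map()).
def add_dice(a: dict[int, int], b: dict[int, int]) -> dict[int, int]:
    out: dict[int, int] = {}
    for x, p in a.items():
        for y, q in b.items():
            out[x + y] = out.get(x + y, 0) + p * q
    return out

d2 = {1: 1, 2: 1}
# Each partial fold is yielded in turn: 1d2, then 2d2, then 3d2.
partials = list(itertools.accumulate([d2, d2, d2], add_dice))
```

With three input dice there are three results, matching the count rule stated above.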
```python
def map(
        repl:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        time_limit: int | Literal['inf'] | None = None,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None
) -> 'icepool.Die[T]':
    """Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die.

    See `map_function` for a decorator version of this.

    Example: `map(lambda a, b: a + b, d6, d6)` is the same as d6 + d6.

    `map()` is flexible but not very efficient for more than a few dice.
    If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
    `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
    efficient than using `map` on the dice in order.

    `Again` can be used but is not recommended with `repeat` other than 1.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
                produces a new outcome.
            * A mapping from old outcomes to new outcomes.
                Unmapped old outcomes stay the same.
                In this case args must have exactly one element.
            As with the `Die` constructor, the new outcomes:
            * May be dice rather than just single outcomes.
            * The special value `icepool.Reroll` will reroll that old outcome.
            * `tuples` containing `Population`s will be `tupleize`d into
                `Population`s of `tuple`s.
                This does not apply to subclasses of `tuple`s such as `namedtuple`
                or other classes such as `Vector`.
        *args: `func` will be called with all joint outcomes of these.
            Allowed arg types are:
            * Single outcome.
            * `Die`. All outcomes will be sent to `func`.
            * `MultisetExpression`. All sorted tuples of outcomes will be sent
                to `func`, as `MultisetExpression.expand()`. The expression must
                be fully bound.
        star: If `True`, the first of the args will be unpacked before giving
            them to `func`.
            If not provided, it will be guessed based on the signature of `func`
            and the number of arguments.
        repeat: This will be repeated with the same arguments on the
            result this many times, except the first of `args` will be replaced
            by the result of the previous iteration.

            Note that returning `Reroll` from `repl` will effectively reroll all
            arguments, including the first argument which represents the result
            of the process up to this point. If you only want to reroll the
            current stage, you can nest another `map` inside `repl`.

            EXPERIMENTAL: If set to `'inf'`, the result will be as if this
            were repeated an infinite number of times. In this case, the
            result will be in simplest form.
        time_limit: Similar to `repeat`, but will return early if a fixed point
            is reached. If both `repeat` and `time_limit` are provided
            (not recommended), `time_limit` takes priority.
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """
    transition_function = _canonicalize_transition_function(
        repl, len(args), star)

    if len(args) == 0:
        if repeat != 1:
            raise ValueError('If no arguments are given, repeat must be 1.')
        return icepool.Die([transition_function()],
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)

    # Here len(args) is at least 1.

    first_arg = args[0]
    extra_args = args[1:]

    if time_limit is not None:
        repeat = time_limit

    if repeat == 'inf':
        # Infinite repeat.
        # T_co and U should be the same in this case.
        def unary_transition_function(state):
            return map(transition_function,
                       state,
                       *extra_args,
                       star=False,
                       again_count=again_count,
                       again_depth=again_depth,
                       again_end=again_end)

        return icepool.population.markov_chain.absorbing_markov_chain(
            icepool.Die([args[0]]), unary_transition_function)
    else:
        if repeat < 0:
            raise ValueError('repeat cannot be negative.')

        if repeat == 0:
            return icepool.Die([first_arg])
        elif repeat == 1 and time_limit is None:
            final_outcomes: 'list[T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' = []
            final_quantities: list[int] = []
            for outcomes, final_quantity in iter_cartesian_product(*args):
                final_outcome = transition_function(*outcomes)
                if final_outcome is not icepool.Reroll:
                    final_outcomes.append(final_outcome)
                    final_quantities.append(final_quantity)
            return icepool.Die(final_outcomes,
                               final_quantities,
                               again_count=again_count,
                               again_depth=again_depth,
                               again_end=again_end)
        else:
            result: 'icepool.Die[T]' = icepool.Die([first_arg])
            for _ in range(repeat):
                next_result = icepool.map(transition_function,
                                          result,
                                          *extra_args,
                                          star=False,
                                          again_count=again_count,
                                          again_depth=again_depth,
                                          again_end=again_end)
                if time_limit is not None and result.simplify(
                ) == next_result.simplify():
                    return result
                result = next_result
            return result
```
Applies `func(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die.

See `map_function` for a decorator version of this.

Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.

`map()` is flexible but not very efficient for more than a few dice.
If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
`MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
efficient than using `map` on the dice in order.

`Again` can be used but is not recommended with `repeat` other than 1.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a new outcome.
  - A mapping from old outcomes to new outcomes.
    Unmapped old outcomes stay the same.
    In this case args must have exactly one element.

  As with the `Die` constructor, the new outcomes:
  - May be dice rather than just single outcomes.
  - The special value `icepool.Reroll` will reroll that old outcome.
  - `tuple`s containing `Population`s will be `tupleize`d into
    `Population`s of `tuple`s. This does not apply to subclasses of
    `tuple` such as `namedtuple` or other classes such as `Vector`.
- *args: `func` will be called with all joint outcomes of these. Allowed arg types are:
  - Single outcome.
  - `Die`. All outcomes will be sent to `func`.
  - `MultisetExpression`. All sorted tuples of outcomes will be sent to
    `func`, as `MultisetExpression.expand()`. The expression must be
    fully bound.
- star: If `True`, the first of the args will be unpacked before giving
  them to `func`. If not provided, it will be guessed based on the
  signature of `func` and the number of arguments.
- repeat: This will be repeated with the same arguments on the result
  this many times, except the first of `args` will be replaced by the
  result of the previous iteration.

  Note that returning `Reroll` from `repl` will effectively reroll all
  arguments, including the first argument which represents the result
  of the process up to this point. If you only want to reroll the
  current stage, you can nest another `map` inside `repl`.

  EXPERIMENTAL: If set to `'inf'`, the result will be as if this were
  repeated an infinite number of times. In this case, the result will
  be in simplest form.
- time_limit: Similar to `repeat`, but will return early if a fixed point
  is reached. If both `repeat` and `time_limit` are provided (not
  recommended), `time_limit` takes priority.
- again_count, again_depth, again_end: Forwarded to the final die constructor.
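As an illustration of the semantics only (not icepool's actual implementation), the joint-outcome enumeration that `map` performs can be sketched in plain Python, representing each die as a mapping from outcome to quantity:

```python
from itertools import product
from collections import Counter

def map_sketch(func, *dice):
    """Apply func to every joint outcome, summing quantities.

    Each die is a dict {outcome: quantity}, mirroring how
    map(func, d6, d6) enumerates all joint rolls.
    """
    result = Counter()
    for combo in product(*(die.items() for die in dice)):
        outcomes = tuple(o for o, _ in combo)
        weight = 1
        for _, q in combo:
            weight *= q
        result[func(*outcomes)] += weight
    return dict(result)

d6 = {i: 1 for i in range(1, 7)}
# Equivalent in distribution to d6 + d6: 36 joint rolls,
# with 7 the most common total (6 of 36 ways).
two_d6 = map_sketch(lambda a, b: a + b, d6, d6)
```

The real `map()` additionally handles `Die` results, `Reroll`, `Again`, and multiset arguments; this sketch covers only plain outcomes.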
673def map_function( 674 function: 675 'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | None' = None, 676 /, 677 *, 678 star: bool | None = None, 679 repeat: int | Literal['inf'] = 1, 680 again_count: int | None = None, 681 again_depth: int | None = None, 682 again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None 683) -> 'Callable[..., icepool.Die[T]] | Callable[..., Callable[..., icepool.Die[T]]]': 684 """Decorator that turns a function that takes outcomes into a function that takes dice. 685 686 The result must be a `Die`. 687 688 This is basically a decorator version of `map()` and produces behavior 689 similar to AnyDice functions, though Icepool has different typing rules 690 among other differences. 691 692 `map_function` can either be used with no arguments: 693 694 ```python 695 @map_function 696 def explode_six(x): 697 if x == 6: 698 return 6 + Again 699 else: 700 return x 701 702 explode_six(d6, again_depth=2) 703 ``` 704 705 Or with keyword arguments, in which case the extra arguments are bound: 706 707 ```python 708 @map_function(again_depth=2) 709 def explode_six(x): 710 if x == 6: 711 return 6 + Again 712 else: 713 return x 714 715 explode_six(d6) 716 ``` 717 718 Args: 719 again_count, again_depth, again_end: Forwarded to the final die constructor. 720 """ 721 722 if function is not None: 723 return update_wrapper(partial(map, function), function) 724 else: 725 726 def decorator( 727 function: 728 'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' 729 ) -> 'Callable[..., icepool.Die[T]]': 730 731 return update_wrapper( 732 partial(map, 733 function, 734 star=star, 735 repeat=repeat, 736 again_count=again_count, 737 again_depth=again_depth, 738 again_end=again_end), function) 739 740 return decorator
Decorator that turns a function that takes outcomes into a function that takes dice.

The result must be a `Die`.

This is basically a decorator version of `map()` and produces behavior
similar to AnyDice functions, though Icepool has different typing rules
among other differences.

`map_function` can either be used with no arguments:

```python
@map_function
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6, again_depth=2)
```

Or with keyword arguments, in which case the extra arguments are bound:

```python
@map_function(again_depth=2)
def explode_six(x):
    if x == 6:
        return 6 + Again
    else:
        return x

explode_six(d6)
```

Arguments:
- again_count, again_depth, again_end: Forwarded to the final die constructor.
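The dual calling convention above (bare decorator vs. decorator factory) follows a standard Python pattern, visible in the source: dispatch on whether the decorated function was passed positionally. A self-contained sketch, with a toy `apply` standing in for icepool's `map`:

```python
from functools import partial, update_wrapper

def apply(func, x, scale=1):
    # Toy stand-in for map(): applies func and scales the result.
    return func(x) * scale

def my_decorator(function=None, /, *, scale=1):
    if function is not None:
        # Used bare: @my_decorator
        return update_wrapper(partial(apply, function), function)
    # Used with keyword arguments: @my_decorator(scale=...)
    def decorator(function):
        return update_wrapper(partial(apply, function, scale=scale), function)
    return decorator

@my_decorator
def double(x):
    return 2 * x

@my_decorator(scale=10)
def triple(x):
    return 3 * x
```

`update_wrapper` on a `functools.partial` preserves the wrapped function's name and docstring, which is exactly what `map_function` does in the source above.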
743def map_and_time( 744 repl: 745 'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]', 746 initial_state: 'T | icepool.Die[T]', 747 /, 748 *extra_args, 749 star: bool | None = None, 750 time_limit: int) -> 'icepool.Die[tuple[T, int]]': 751 """Repeatedly map outcomes of the state to other outcomes, while also 752 counting timesteps. 753 754 This is useful for representing processes. 755 756 The outcomes of the result are `(outcome, time)`, where `time` is the 757 number of repeats needed to reach an absorbing outcome (an outcome that 758 only leads to itself), or `repeat`, whichever is lesser. 759 760 This will return early if it reaches a fixed point. 761 Therefore, you can set `repeat` equal to the maximum number of 762 time you could possibly be interested in without worrying about 763 it causing extra computations after the fixed point. 764 765 Args: 766 repl: One of the following: 767 * A callable returning a new outcome for each old outcome. 768 * A mapping from old outcomes to new outcomes. 769 Unmapped old outcomes stay the same. 770 The new outcomes may be dice rather than just single outcomes. 771 The special value `icepool.Reroll` will reroll that old outcome. 772 initial_state: The initial state of the process, which could be a 773 single state or a `Die`. 774 extra_args: Extra arguments to use, as per `map`. Note that these are 775 rerolled at every time step. 776 star: If `True`, the first of the args will be unpacked before giving 777 them to `func`. 778 If not provided, it will be guessed based on the signature of `func` 779 and the number of arguments. 780 time_limit: This will be repeated with the same arguments on the result 781 up to this many times. 782 783 Returns: 784 The `Die` after the modification. 
785 """ 786 transition_function = _canonicalize_transition_function( 787 repl, 1 + len(extra_args), star) 788 789 result: 'icepool.Die[tuple[T, int]]' = map(lambda x: (x, 0), initial_state) 790 791 # Note that we don't expand extra_args during the outer map. 792 # This is needed to correctly evaluate whether each outcome is absorbing. 793 def transition_with_steps(outcome_and_steps, extra_args): 794 outcome, steps = outcome_and_steps 795 next_outcome = map(transition_function, outcome, *extra_args) 796 if icepool.population.markov_chain.is_absorbing(outcome, next_outcome): 797 return outcome, steps 798 else: 799 return icepool.tupleize(next_outcome, steps + 1) 800 801 return map(transition_with_steps, 802 result, 803 extra_args, 804 time_limit=time_limit)
Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

The outcomes of the result are `(outcome, time)`, where `time` is the
number of repeats needed to reach an absorbing outcome (an outcome that
only leads to itself), or `time_limit`, whichever is lesser.

This will return early if it reaches a fixed point.
Therefore, you can set `time_limit` equal to the maximum number of
timesteps you could possibly be interested in without worrying about
it causing extra computations after the fixed point.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A mapping from old outcomes to new outcomes.
    Unmapped old outcomes stay the same.

  The new outcomes may be dice rather than just single outcomes.
  The special value `icepool.Reroll` will reroll that old outcome.
- initial_state: The initial state of the process, which could be a
  single state or a `Die`.
- extra_args: Extra arguments to use, as per `map`. Note that these are
  rerolled at every time step.
- star: If `True`, the first of the args will be unpacked before giving
  them to `func`. If not provided, it will be guessed based on the
  signature of `func` and the number of arguments.
- time_limit: This will be repeated with the same arguments on the result
  up to this many times.

Returns:
The `Die` after the modification.
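The time-counting idea can be sketched in plain Python: iterate a transition on weighted `(state, steps)` pairs, freezing any state that only leads to itself (absorbing) and incrementing the step count of the rest. The `transition` table and state names below are illustrative only:

```python
from collections import Counter

def map_and_time_sketch(transition, initial, time_limit):
    """Track (state, steps) pairs; absorbing states stop accruing steps.

    transition maps each state to a dict {next_state: weight}.
    """
    dist = Counter({(initial, 0): 1})
    for _ in range(time_limit):
        next_dist = Counter()
        for (state, steps), weight in dist.items():
            successors = transition[state]
            if set(successors) == {state}:  # absorbing: only leads to itself
                next_dist[(state, steps)] += weight
            else:
                for nxt, w in successors.items():
                    next_dist[(nxt, steps + 1)] += weight * w
        if next_dist == dist:  # fixed point reached; return early
            break
        dist = next_dist
    return dict(dist)

# A small process: 'start' moves to 'mid' or 'done'; 'done' is absorbing.
transition = {
    'start': {'mid': 1, 'done': 1},
    'mid': {'done': 1},
    'done': {'done': 1},
}
result = map_and_time_sketch(transition, 'start', time_limit=10)
# Half the weight absorbs after 1 step, half after 2.
```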
807def map_to_pool( 808 repl: 809 'Callable[..., icepool.MultisetGenerator | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType] | Mapping[Any, icepool.MultisetGenerator | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType]', 810 /, 811 *args: 'Outcome | icepool.Die | icepool.MultisetExpression', 812 star: bool | None = None, 813 denominator: int | None = None 814) -> 'icepool.MultisetGenerator[T, tuple[int]]': 815 """EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a MultisetGenerator. 816 817 Args: 818 repl: One of the following: 819 * A callable that takes in one outcome per element of args and 820 produces a `MultisetGenerator` or something convertible to a `Pool`. 821 * A mapping from old outcomes to `MultisetGenerator` 822 or something convertible to a `Pool`. 823 In this case args must have exactly one element. 824 The new outcomes may be dice rather than just single outcomes. 825 The special value `icepool.Reroll` will reroll that old outcome. 826 star: If `True`, the first of the args will be unpacked before giving 827 them to `repl`. 828 If not provided, it will be guessed based on the signature of `repl` 829 and the number of arguments. 830 denominator: If provided, the denominator of the result will be this 831 value. Otherwise it will be the minimum to correctly weight the 832 pools. 833 834 Returns: 835 A `MultisetGenerator` representing the mixture of `Pool`s. Note 836 that this is not technically a `Pool`, though it supports most of 837 the same operations. 838 839 Raises: 840 ValueError: If `denominator` cannot be made consistent with the 841 resulting mixture of pools. 
842 """ 843 transition_function = _canonicalize_transition_function( 844 repl, len(args), star) 845 846 data: 'MutableMapping[icepool.MultisetGenerator[T, tuple[int]], int]' = defaultdict( 847 int) 848 for outcomes, quantity in iter_cartesian_product(*args): 849 pool = transition_function(*outcomes) 850 if pool is icepool.Reroll: 851 continue 852 elif isinstance(pool, icepool.MultisetGenerator): 853 data[pool] += quantity 854 else: 855 data[icepool.Pool(pool)] += quantity 856 # I couldn't get the covariance / contravariance to work. 857 return icepool.MixtureGenerator(data, 858 denominator=denominator) # type: ignore
EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a `MultisetGenerator`.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and
    produces a `MultisetGenerator` or something convertible to a `Pool`.
  - A mapping from old outcomes to a `MultisetGenerator` or something
    convertible to a `Pool`.
    In this case args must have exactly one element.

  The new outcomes may be dice rather than just single outcomes.
  The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving
  them to `repl`. If not provided, it will be guessed based on the
  signature of `repl` and the number of arguments.
- denominator: If provided, the denominator of the result will be this
  value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note
that this is not technically a `Pool`, though it supports most of
the same operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the
  resulting mixture of pools.
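The "minimum to correctly weight the pools" amounts to expressing every branch over a least common denominator. One plausible scheme for that bookkeeping (a sketch, not icepool's actual code; the pools here are placeholder strings):

```python
from math import lcm
from collections import defaultdict

def mixture_weights(branches):
    """branches: list of (pool, quantity, pool_denominator).

    Scales quantities so every branch is expressed over the least
    common denominator of quantity_total * pool_denominator.
    """
    total_quantity = sum(q for _, q, _ in branches)
    denominator = lcm(*(total_quantity * d for _, _, d in branches))
    weights = defaultdict(int)
    for pool, quantity, pool_denom in branches:
        weights[pool] += quantity * denominator // (total_quantity * pool_denom)
    return dict(weights), denominator

# Two equally likely branches producing pools with denominators 6 and 8:
# the common denominator is lcm(12, 16) = 48, so the branches get
# weights 4 and 3 (4 * 6 = 3 * 8 = 24, half of 48 each).
weights, denom = mixture_weights([('pool_a', 1, 6), ('pool_b', 1, 8)])
```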
Indicates that an outcome should be rerolled (with unlimited depth).
This can be used in place of outcomes in many places. See individual function and method descriptions for details.
This effectively removes the outcome from the probability space, along with its contribution to the denominator.
This can be used for conditional probability by removing all outcomes not consistent with the given observations.
Operation in specific cases:
- When used with `Again`, only that stage is rerolled, not the entire
  `Again` tree.
- To reroll with limited depth, use `Die.reroll()`, or `Again` with no
  modification.
- When used with `MultisetEvaluator`, the entire evaluation is rerolled.
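The conditional-probability use described above amounts to dropping outcomes and letting the denominator shrink; a plain-Python sketch of that renormalization:

```python
from fractions import Fraction

def reroll_filter(die, keep):
    """Remove outcomes failing the predicate, shrinking the denominator.

    die is a mapping from outcome to quantity; dropped outcomes no
    longer contribute to the probability space at all.
    """
    return {o: q for o, q in die.items() if keep(o)}

def probability(die, outcome):
    return Fraction(die.get(outcome, 0), sum(die.values()))

d6 = {i: 1 for i in range(1, 7)}
# Condition on the roll being even: 2, 4, 6 each now have probability 1/3.
even_d6 = reroll_filter(d6, lambda o: o % 2 == 0)
```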
64class RerollType(enum.Enum): 65 """The type of the Reroll singleton.""" 66 Reroll = 'Reroll' 67 """Indicates an outcome should be rerolled (with unlimited depth)."""
The type of the Reroll singleton.
Indicates an outcome should be rerolled (with unlimited depth).
26class Pool(KeepGenerator[T]): 27 """Represents a multiset of outcomes resulting from the roll of several dice. 28 29 This should be used in conjunction with `MultisetEvaluator` to generate a 30 result. 31 32 Note that operators are performed on the multiset of rolls, not the multiset 33 of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but 34 an expression meaning "roll two pools of 3d6 and get the rolls from the 35 first pool, with rolls in the second pool cancelling matching rolls in the 36 first pool one-for-one". 37 """ 38 39 _dice: tuple[tuple['icepool.Die[T]', int]] 40 _outcomes: tuple[T, ...] 41 42 def __new__( 43 cls, 44 dice: 45 'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]', 46 times: Sequence[int] | int = 1) -> 'Pool': 47 """Public constructor for a pool. 48 49 Evaulation is most efficient when the dice are the same or same-side 50 truncations of each other. For example, d4, d6, d8, d10, d12 are all 51 same-side truncations of d12. 52 53 It is permissible to create a `Pool` without providing dice, but not all 54 evaluators will handle this case, especially if they depend on the 55 outcome type. Dice may be in the pool zero times, in which case their 56 outcomes will be considered but without any count (unless another die 57 has that outcome). 58 59 Args: 60 dice: The dice to put in the `Pool`. This can be one of the following: 61 62 * A `Sequence` of `Die` or outcomes. 63 * A `Mapping` of `Die` or outcomes to how many of that `Die` or 64 outcome to put in the `Pool`. 65 66 All outcomes within a `Pool` must be totally orderable. 67 times: Multiplies the number of times each element of `dice` will 68 be put into the pool. 69 `times` can either be a sequence of the same length as 70 `outcomes` or a single `int` to apply to all elements of 71 `outcomes`. 72 73 Raises: 74 ValueError: If a bare `Deck` or `Die` argument is provided. 
75 A `Pool` of a single `Die` should constructed as `Pool([die])`. 76 """ 77 if isinstance(dice, Pool): 78 if times == 1: 79 return dice 80 else: 81 dice = {die: quantity for die, quantity in dice._dice} 82 83 if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)): 84 raise ValueError( 85 f'A Pool cannot be constructed with a {type(dice).__name__} argument.' 86 ) 87 88 dice, times = icepool.creation_args.itemize(dice, times) 89 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 90 91 dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int) 92 for die, qty in zip(converted_dice, times): 93 if qty == 0: 94 continue 95 dice_counts[die] += qty 96 keep_tuple = (1, ) * sum(times) 97 98 # Includes dice with zero qty. 99 outcomes = icepool.sorted_union(*converted_dice) 100 return cls._new_from_mapping(dice_counts, outcomes, keep_tuple) 101 102 @classmethod 103 @cache 104 def _new_raw(cls, dice: tuple[tuple['icepool.Die[T]', int]], 105 outcomes: tuple[T], keep_tuple: tuple[int, ...]) -> 'Pool[T]': 106 """All pool creation ends up here. This method is cached. 107 108 Args: 109 dice: A tuple of (die, count) pairs. 110 keep_tuple: A tuple of how many times to count each die. 111 """ 112 self = super(Pool, cls).__new__(cls) 113 self._dice = dice 114 self._outcomes = outcomes 115 self._keep_tuple = keep_tuple 116 return self 117 118 @classmethod 119 def clear_cache(cls): 120 """Clears the global pool cache.""" 121 Pool._new_raw.cache_clear() 122 123 @classmethod 124 def _new_from_mapping(cls, dice_counts: Mapping['icepool.Die[T]', int], 125 outcomes: tuple[T, ...], 126 keep_tuple: Sequence[int]) -> 'Pool[T]': 127 """Creates a new pool. 128 129 Args: 130 dice_counts: A map from dice to rolls. 131 keep_tuple: A tuple with length equal to the number of dice. 
132 """ 133 dice = tuple( 134 sorted(dice_counts.items(), key=lambda kv: kv[0]._hash_key)) 135 return Pool._new_raw(dice, outcomes, keep_tuple) 136 137 @cached_property 138 def _raw_size(self) -> int: 139 return sum(count for _, count in self._dice) 140 141 def raw_size(self) -> int: 142 """The number of dice in this pool before the keep_tuple is applied.""" 143 return self._raw_size 144 145 def _is_resolvable(self) -> bool: 146 return all(not die.is_empty() for die, _ in self._dice) 147 148 @cached_property 149 def _denominator(self) -> int: 150 return math.prod(die.denominator()**count for die, count in self._dice) 151 152 def denominator(self) -> int: 153 return self._denominator 154 155 @cached_property 156 def _dice_tuple(self) -> tuple['icepool.Die[T]', ...]: 157 return sum(((die, ) * count for die, count in self._dice), start=()) 158 159 @cached_property 160 def _unique_dice(self) -> Collection['icepool.Die[T]']: 161 return set(die for die, _ in self._dice) 162 163 def unique_dice(self) -> Collection['icepool.Die[T]']: 164 """The collection of unique dice in this pool.""" 165 return self._unique_dice 166 167 def outcomes(self) -> Sequence[T]: 168 """The union of possible outcomes among all dice in this pool in ascending order.""" 169 return self._outcomes 170 171 def output_arity(self) -> int: 172 return 1 173 174 def _preferred_pop_order(self) -> tuple[Order | None, PopOrderReason]: 175 can_truncate_min, can_truncate_max = icepool.generator.pop_order.can_truncate( 176 self.unique_dice()) 177 if can_truncate_min and not can_truncate_max: 178 return Order.Ascending, PopOrderReason.PoolComposition 179 if can_truncate_max and not can_truncate_min: 180 return Order.Descending, PopOrderReason.PoolComposition 181 182 lo_skip, hi_skip = icepool.generator.pop_order.lo_hi_skip( 183 self.keep_tuple()) 184 if lo_skip > hi_skip: 185 return Order.Descending, PopOrderReason.KeepSkip 186 if hi_skip > lo_skip: 187 return Order.Ascending, PopOrderReason.KeepSkip 188 189 
return Order.Any, PopOrderReason.NoPreference 190 191 def min_outcome(self) -> T: 192 """The min outcome among all dice in this pool.""" 193 return self._outcomes[0] 194 195 def max_outcome(self) -> T: 196 """The max outcome among all dice in this pool.""" 197 return self._outcomes[-1] 198 199 def _generate_initial(self) -> InitialMultisetGenerator: 200 yield self, 1 201 202 def _generate_min(self, min_outcome) -> NextMultisetGenerator: 203 """Pops the given outcome from this pool, if it is the min outcome. 204 205 Yields: 206 popped_pool: The pool after the min outcome is popped. 207 count: The number of dice that rolled the min outcome, after 208 accounting for keep_tuple. 209 weight: The weight of this incremental result. 210 """ 211 if not self.outcomes(): 212 yield self, (0, ), 1 213 return 214 if min_outcome != self.min_outcome(): 215 yield self, (0, ), 1 216 return 217 generators = [ 218 iter_die_pop_min(die, die_count, min_outcome) 219 for die, die_count in self._dice 220 ] 221 skip_weight = None 222 for pop in itertools.product(*generators): 223 total_hits = 0 224 result_weight = 1 225 next_dice_counts: MutableMapping[Any, int] = defaultdict(int) 226 for popped_die, misses, hits, weight in pop: 227 if not popped_die.is_empty() and misses > 0: 228 next_dice_counts[popped_die] += misses 229 total_hits += hits 230 result_weight *= weight 231 popped_keep_tuple, result_count = pop_min_from_keep_tuple( 232 self.keep_tuple(), total_hits) 233 popped_pool = Pool._new_from_mapping(next_dice_counts, 234 self._outcomes[1:], 235 popped_keep_tuple) 236 if not any(popped_keep_tuple): 237 # Dump all dice in exchange for the denominator. 
238 skip_weight = (skip_weight or 239 0) + result_weight * popped_pool.denominator() 240 continue 241 242 yield popped_pool, (result_count, ), result_weight 243 244 if skip_weight is not None: 245 popped_pool = Pool._new_raw((), self._outcomes[1:], ()) 246 yield popped_pool, (sum(self.keep_tuple()), ), skip_weight 247 248 def _generate_max(self, max_outcome) -> NextMultisetGenerator: 249 """Pops the given outcome from this pool, if it is the max outcome. 250 251 Yields: 252 popped_pool: The pool after the max outcome is popped. 253 count: The number of dice that rolled the max outcome, after 254 accounting for keep_tuple. 255 weight: The weight of this incremental result. 256 """ 257 if not self.outcomes(): 258 yield self, (0, ), 1 259 return 260 if max_outcome != self.max_outcome(): 261 yield self, (0, ), 1 262 return 263 generators = [ 264 iter_die_pop_max(die, die_count, max_outcome) 265 for die, die_count in self._dice 266 ] 267 skip_weight = None 268 for pop in itertools.product(*generators): 269 total_hits = 0 270 result_weight = 1 271 next_dice_counts: MutableMapping[Any, int] = defaultdict(int) 272 for popped_die, misses, hits, weight in pop: 273 if not popped_die.is_empty() and misses > 0: 274 next_dice_counts[popped_die] += misses 275 total_hits += hits 276 result_weight *= weight 277 popped_keep_tuple, result_count = pop_max_from_keep_tuple( 278 self.keep_tuple(), total_hits) 279 popped_pool = Pool._new_from_mapping(next_dice_counts, 280 self._outcomes[:-1], 281 popped_keep_tuple) 282 if not any(popped_keep_tuple): 283 # Dump all dice in exchange for the denominator. 
284 skip_weight = (skip_weight or 285 0) + result_weight * popped_pool.denominator() 286 continue 287 288 yield popped_pool, (result_count, ), result_weight 289 290 if skip_weight is not None: 291 popped_pool = Pool._new_raw((), self._outcomes[:-1], ()) 292 yield popped_pool, (sum(self.keep_tuple()), ), skip_weight 293 294 def _set_keep_tuple(self, keep_tuple: tuple[int, 295 ...]) -> 'KeepGenerator[T]': 296 return Pool._new_raw(self._dice, self._outcomes, keep_tuple) 297 298 def additive_union( 299 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 300 ) -> 'MultisetExpression[T]': 301 args = tuple( 302 icepool.expression.implicit_convert_to_expression(arg) 303 for arg in args) 304 if all(isinstance(arg, Pool) for arg in args): 305 pools = cast(tuple[Pool[T], ...], args) 306 keep_tuple: tuple[int, ...] = tuple( 307 reduce(operator.add, (pool.keep_tuple() for pool in pools), 308 ())) 309 if len(keep_tuple) == 0 or all(x == keep_tuple[0] 310 for x in keep_tuple): 311 # All sorted positions count the same, so we can merge the 312 # pools. 313 dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int) 314 for pool in pools: 315 for die, die_count in pool._dice: 316 dice[die] += die_count 317 outcomes = icepool.sorted_union(*(pool.outcomes() 318 for pool in pools)) 319 return Pool._new_from_mapping(dice, outcomes, keep_tuple) 320 return KeepGenerator.additive_union(*args) 321 322 def __str__(self) -> str: 323 return ( 324 f'Pool of {self.raw_size()} dice with keep_tuple={self.keep_tuple()}\n' 325 + ''.join(f' {repr(die)} : {count},\n' 326 for die, count in self._dice)) 327 328 @cached_property 329 def _hash_key(self) -> tuple: 330 return Pool, self._dice, self._outcomes, self._keep_tuple
Represents a multiset of outcomes resulting from the roll of several dice.

This should be used in conjunction with `MultisetEvaluator` to generate a
result.

Note that operators are performed on the multiset of rolls, not the multiset
of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but
an expression meaning "roll two pools of 3d6 and get the rolls from the
first pool, with rolls in the second pool cancelling matching rolls in the
first pool one-for-one".
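The one-for-one cancelling described above is ordinary multiset difference, which `collections.Counter` models directly. A sketch on two fixed rolls (not on pools, which range over all joint rolls):

```python
from collections import Counter

def multiset_difference(first_rolls, second_rolls):
    """Rolls in the second multiset cancel matching rolls in the
    first, one-for-one; counts never go below zero."""
    result = Counter(first_rolls) - Counter(second_rolls)
    return sorted(result.elements())

# One possible joint roll of d6.pool(3) - d6.pool(3):
# one 5 is cancelled; the 3 and the remaining 5 survive.
rolls = multiset_difference([3, 5, 5], [5, 2, 1])
```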
42 def __new__( 43 cls, 44 dice: 45 'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]', 46 times: Sequence[int] | int = 1) -> 'Pool': 47 """Public constructor for a pool. 48 49 Evaulation is most efficient when the dice are the same or same-side 50 truncations of each other. For example, d4, d6, d8, d10, d12 are all 51 same-side truncations of d12. 52 53 It is permissible to create a `Pool` without providing dice, but not all 54 evaluators will handle this case, especially if they depend on the 55 outcome type. Dice may be in the pool zero times, in which case their 56 outcomes will be considered but without any count (unless another die 57 has that outcome). 58 59 Args: 60 dice: The dice to put in the `Pool`. This can be one of the following: 61 62 * A `Sequence` of `Die` or outcomes. 63 * A `Mapping` of `Die` or outcomes to how many of that `Die` or 64 outcome to put in the `Pool`. 65 66 All outcomes within a `Pool` must be totally orderable. 67 times: Multiplies the number of times each element of `dice` will 68 be put into the pool. 69 `times` can either be a sequence of the same length as 70 `outcomes` or a single `int` to apply to all elements of 71 `outcomes`. 72 73 Raises: 74 ValueError: If a bare `Deck` or `Die` argument is provided. 75 A `Pool` of a single `Die` should constructed as `Pool([die])`. 76 """ 77 if isinstance(dice, Pool): 78 if times == 1: 79 return dice 80 else: 81 dice = {die: quantity for die, quantity in dice._dice} 82 83 if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)): 84 raise ValueError( 85 f'A Pool cannot be constructed with a {type(dice).__name__} argument.' 
86 ) 87 88 dice, times = icepool.creation_args.itemize(dice, times) 89 converted_dice = [icepool.implicit_convert_to_die(die) for die in dice] 90 91 dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int) 92 for die, qty in zip(converted_dice, times): 93 if qty == 0: 94 continue 95 dice_counts[die] += qty 96 keep_tuple = (1, ) * sum(times) 97 98 # Includes dice with zero qty. 99 outcomes = icepool.sorted_union(*converted_dice) 100 return cls._new_from_mapping(dice_counts, outcomes, keep_tuple)
Public constructor for a pool.

Evaluation is most efficient when the dice are the same or same-side
truncations of each other. For example, d4, d6, d8, d10, d12 are all
same-side truncations of d12.

It is permissible to create a `Pool` without providing dice, but not all
evaluators will handle this case, especially if they depend on the
outcome type. Dice may be in the pool zero times, in which case their
outcomes will be considered but without any count (unless another die
has that outcome).

Arguments:
- dice: The dice to put in the `Pool`. This can be one of the following:
  - A `Sequence` of `Die` or outcomes.
  - A `Mapping` of `Die` or outcomes to how many of that `Die` or
    outcome to put in the `Pool`.

  All outcomes within a `Pool` must be totally orderable.
- times: Multiplies the number of times each element of `dice` will
  be put into the pool. `times` can either be a sequence of the same
  length as `dice` or a single `int` to apply to all elements of
  `dice`.

Raises:
- ValueError: If a bare `Deck` or `Die` argument is provided.
  A `Pool` of a single `Die` should be constructed as `Pool([die])`.
118 @classmethod 119 def clear_cache(cls): 120 """Clears the global pool cache.""" 121 Pool._new_raw.cache_clear()
Clears the global pool cache.
141 def raw_size(self) -> int: 142 """The number of dice in this pool before the keep_tuple is applied.""" 143 return self._raw_size
The number of dice in this pool before the keep_tuple is applied.
163 def unique_dice(self) -> Collection['icepool.Die[T]']: 164 """The collection of unique dice in this pool.""" 165 return self._unique_dice
The collection of unique dice in this pool.
167 def outcomes(self) -> Sequence[T]: 168 """The union of possible outcomes among all dice in this pool in ascending order.""" 169 return self._outcomes
The union of possible outcomes among all dice in this pool in ascending order.
191 def min_outcome(self) -> T: 192 """The min outcome among all dice in this pool.""" 193 return self._outcomes[0]
The min outcome among all dice in this pool.
    def max_outcome(self) -> T:
        """The max outcome among all dice in this pool."""
        return self._outcomes[-1]
The max outcome among all dice in this pool.
    def additive_union(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        args = tuple(
            icepool.expression.implicit_convert_to_expression(arg)
            for arg in args)
        if all(isinstance(arg, Pool) for arg in args):
            pools = cast(tuple[Pool[T], ...], args)
            keep_tuple: tuple[int, ...] = tuple(
                reduce(operator.add, (pool.keep_tuple() for pool in pools),
                       ()))
            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
                                           for x in keep_tuple):
                # All sorted positions count the same, so we can merge the
                # pools.
                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
                for pool in pools:
                    for die, die_count in pool._dice:
                        dice[die] += die_count
                outcomes = icepool.sorted_union(*(pool.outcomes()
                                                  for pool in pools))
                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
        return KeepGenerator.additive_union(*args)
The combined elements from all of the multisets.

Same as `a + b + c + ...`.

Any resulting counts that would be negative are set to zero.

Example:

    [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
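The count semantics above can be checked with a plain-`Counter` sketch. `additive_union_counts` here is an illustrative stand-in, not the icepool function:

```python
from collections import Counter

def additive_union_counts(*multisets):
    """Illustrative only: the count semantics of additive_union on plain lists."""
    total: Counter = Counter()
    for m in multisets:
        total.update(m)
    # Counts that would be negative are set to zero (cannot occur with plain
    # lists, but matches the documented rule for signed counts).
    return sorted(x for x, n in total.items() for _ in range(max(n, 0)))

assert additive_union_counts([1, 2, 2, 3], [1, 2, 4]) == [1, 1, 2, 2, 2, 3, 4]
```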
def standard_pool(
        die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]':
    """A `Pool` of standard dice (e.g. d6, d8...).

    Args:
        die_sizes: A collection of die sizes, which will put one die of that
            size in the pool for each element.
            Or, a mapping of die sizes to how many dice of that size to put
            into the pool.
            If empty, the pool will be considered to consist of zero zeros.
    """
    if not die_sizes:
        return Pool({icepool.Die([0]): 0})
    if isinstance(die_sizes, Mapping):
        die_sizes = list(
            itertools.chain.from_iterable([k] * v
                                          for k, v in die_sizes.items()))
    return Pool(list(icepool.d(x) for x in die_sizes))
A `Pool` of standard dice (e.g. d6, d8...).

Arguments:

* die_sizes: A collection of die sizes, which will put one die of that
  size in the pool for each element. Or, a mapping of die sizes to how
  many dice of that size to put into the pool. If empty, the pool will
  be considered to consist of zero zeros.
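The mapping form flattens to the collection form before any dice are built, using the same `itertools.chain` pattern as the source above. `expand_die_sizes` is an illustrative helper, not part of the icepool API:

```python
import itertools
from collections.abc import Mapping

def expand_die_sizes(die_sizes):
    """Illustrative only: flatten a size -> count mapping into a flat list of
    die sizes, as standard_pool does before constructing the dice."""
    if isinstance(die_sizes, Mapping):
        return list(
            itertools.chain.from_iterable([k] * v
                                          for k, v in die_sizes.items()))
    return list(die_sizes)

assert expand_die_sizes({6: 3, 8: 2}) == [6, 6, 6, 8, 8]
assert expand_die_sizes([4, 6, 6]) == [4, 6, 6]
```

So `standard_pool({6: 3, 8: 2})` and `standard_pool([6, 6, 6, 8, 8])` describe the same pool.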
class MultisetGenerator(Generic[T, Qs], MultisetExpression[T]):
    """Abstract base class for generating one or more multisets.

    These include dice pools (`Pool`) and card deals (`Deal`). Most likely you
    will be using one of these two rather than writing your own subclass of
    `MultisetGenerator`.

    The multisets are incrementally generated one outcome at a time.
    For each outcome, a `count` and `weight` are generated, along with a
    smaller generator to produce the rest of the multiset.

    You can perform simple evaluations using built-in operators and methods in
    this class.
    For more complex evaluations and better performance, particularly when
    multiple generators are involved, you will want to write your own subclass
    of `MultisetEvaluator`.
    """

    @abstractmethod
    def outcomes(self) -> Sequence[T]:
        """The possible outcomes that could be generated, in ascending order."""

    @abstractmethod
    def output_arity(self) -> int:
        """The number of multisets/counts generated. Must be constant."""

    @abstractmethod
    def _is_resolvable(self) -> bool:
        """`True` iff the generator is capable of producing an overall outcome.

        For example, a dice `Pool` will return `False` if it contains any dice
        with no outcomes.
        """

    @abstractmethod
    def _generate_initial(self) -> InitialMultisetGenerator:
        """Initialize the generator before any outcomes are emitted.

        Yields:
            * A sub-generator.
            * The weight for selecting that sub-generator.

        Unitary generators can just yield `(self, 1)` and return.
        """

    @abstractmethod
    def _generate_min(self, min_outcome) -> NextMultisetGenerator:
        """Pops the min outcome from this generator if it matches the argument.

        Yields:
            * A generator with the min outcome popped.
            * A tuple of counts for the min outcome.
            * The weight for this many of the min outcome appearing.
        If the argument does not match the min outcome, or this generator
        has no outcomes, only a single tuple is yielded:

        * `self`
        * A tuple of zeros.
        * weight = 1.
        """

    @abstractmethod
    def _generate_max(self, max_outcome) -> NextMultisetGenerator:
        """Pops the max outcome from this generator if it matches the argument.

        Yields:
            * A generator with the max outcome popped.
            * A tuple of counts for the max outcome.
            * The weight for this many of the max outcome appearing.

        If the argument does not match the max outcome, or this generator
        has no outcomes, only a single tuple is yielded:

        * `self`
        * A tuple of zeros.
        * weight = 1.
        """

    @abstractmethod
    def _preferred_pop_order(self) -> tuple[Order | None, PopOrderReason]:
        """Returns the preferred pop order of the generator, along with the priority of that pop order.

        Greater priorities strictly outrank lower priorities.
        An order of `None` represents conflicting orders and can occur in the
        argument and/or return value.
        """

    @abstractmethod
    def denominator(self) -> int:
        """The total weight of all paths through this generator."""

    @property
    def _can_keep(self) -> bool:
        """Whether the generator supports enhanced keep operations."""
        return False

    @property
    @abstractmethod
    def _hash_key(self) -> Hashable:
        """A hash key that logically identifies this object among MultisetGenerators.

        Used to implement `equals()` and `__hash__()`.
        """

    def equals(self, other) -> bool:
        """Whether this generator is logically equal to another object."""
        if not isinstance(other, MultisetGenerator):
            return False
        return self._hash_key == other._hash_key

    @cached_property
    def _hash(self) -> int:
        return hash(self._hash_key)

    def __hash__(self) -> int:
        return self._hash

    # Equality with truth value, needed for hashing.
    # The result has a truth value, but is not a bool.
    def __eq__(  # type: ignore
            self, other) -> 'icepool.DieWithTruth[bool]':

        def data_callback() -> Counts[bool]:
            die = cast('icepool.Die[bool]',
                       MultisetExpression.__eq__(self, other))
            if not isinstance(die, icepool.Die):
                raise TypeError('Did not resolve to a die.')
            return die._data

        def truth_value_callback() -> bool:
            if not isinstance(other, MultisetGenerator):
                return False
            return self._hash_key == other._hash_key

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    # The result has a truth value, but is not a bool.
    def __ne__(  # type: ignore
            self, other) -> 'icepool.DieWithTruth[bool]':

        def data_callback() -> Counts[bool]:
            die = cast('icepool.Die[bool]',
                       MultisetExpression.__ne__(self, other))
            if not isinstance(die, icepool.Die):
                raise TypeError('Did not resolve to a die.')
            return die._data

        def truth_value_callback() -> bool:
            if not isinstance(other, MultisetGenerator):
                return True
            return self._hash_key != other._hash_key

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    # Expression API.

    _inners = ()

    def _make_unbound(self, *unbound_inners) -> 'icepool.MultisetExpression':
        raise RuntimeError(
            'Should not be reached; _unbind should have been overridden directly.'
        )

    def _next_state(self, state, outcome: Outcome,
                    *counts: int) -> tuple[Hashable, int]:
        raise RuntimeError(
            'Internal error: Expressions should be unbound before evaluation.')

    def order(self) -> Order:
        return Order.Any

    # Overridden to switch bound generators with variables.
    @property
    def _bound_generators(self) -> 'tuple[icepool.MultisetGenerator, ...]':
        return (self, )

    def _unbind(self, prefix_start: int,
                free_start: int) -> 'tuple[MultisetExpression, int]':
        unbound_expression = icepool.expression.MultisetVariable(
            index=prefix_start)
        return unbound_expression, prefix_start + 1

    def _free_arity(self) -> int:
        return 0

    def min_outcome(self) -> T:
        return self.outcomes()[0]

    def max_outcome(self) -> T:
        return self.outcomes()[-1]

    # Sampling.

    def sample(self) -> tuple[tuple, ...]:
        """EXPERIMENTAL: A single random sample from this generator.

        This uses the standard `random` package and is not cryptographically
        secure.

        Returns:
            A sorted tuple of outcomes for each output of this generator.
        """
        if not self.outcomes():
            raise ValueError('Cannot sample from an empty set of outcomes.')

        preferred_pop_order, pop_order_reason = self._preferred_pop_order()

        if preferred_pop_order is not None and preferred_pop_order > 0:
            outcome = self.min_outcome()
            generated = tuple(self._generate_min(outcome))
        else:
            outcome = self.max_outcome()
            generated = tuple(self._generate_max(outcome))

        cumulative_weights = tuple(
            itertools.accumulate(g.denominator() * w for g, _, w in generated))
        denominator = cumulative_weights[-1]
        # We don't use random.choices since that is based on floats rather than ints.
        r = random.randrange(denominator)
        index = bisect.bisect_right(cumulative_weights, r)
        popped_generator, counts, _ = generated[index]
        head = tuple((outcome, ) * count for count in counts)
        if popped_generator.outcomes():
            tail = popped_generator.sample()
            return tuple(tuple(sorted(h + t)) for h, t in zip(head, tail))
        else:
            return head
Abstract base class for generating one or more multisets.

These include dice pools (`Pool`) and card deals (`Deal`). Most likely you
will be using one of these two rather than writing your own subclass of
`MultisetGenerator`.

The multisets are incrementally generated one outcome at a time.
For each outcome, a `count` and `weight` are generated, along with a
smaller generator to produce the rest of the multiset.

You can perform simple evaluations using built-in operators and methods in
this class.
For more complex evaluations and better performance, particularly when
multiple generators are involved, you will want to write your own subclass
of `MultisetEvaluator`.
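The "one outcome at a time" scheme can be illustrated for a pool of identical dice: popping the max outcome yields a count, a weight, and a smaller pool for the remaining outcomes. `pop_max_counts` is a simplified illustrative stand-in for `_generate_max`, not icepool code:

```python
from math import comb

def pop_max_counts(num_dice):
    """Illustrative only: pop the max outcome from a pool of identical dice.

    Yields (count, weight, remaining_dice): how many dice show the max face,
    the number of ways to choose which dice those are, and the size of the
    smaller pool left to generate over the remaining outcomes."""
    for count in range(num_dice + 1):
        yield count, comb(num_dice, count), num_dice - count

# For 3 dice, the weights over all counts of the max face total
# C(3,0) + C(3,1) + C(3,2) + C(3,3) = 8 = 2**3: each die either shows the
# max face or something lower.
assert sum(w for _, w, _ in pop_max_counts(3)) == 8
```

A real generator does this for every outcome in turn, multiplying weights along each path; `denominator()` is the total weight over all such paths.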
    @abstractmethod
    def outcomes(self) -> Sequence[T]:
        """The possible outcomes that could be generated, in ascending order."""
The possible outcomes that could be generated, in ascending order.
    @abstractmethod
    def output_arity(self) -> int:
        """The number of multisets/counts generated. Must be constant."""
The number of multisets/counts generated. Must be constant.
    @abstractmethod
    def denominator(self) -> int:
        """The total weight of all paths through this generator."""
The total weight of all paths through this generator.
    def equals(self, other) -> bool:
        """Whether this generator is logically equal to another object."""
        if not isinstance(other, MultisetGenerator):
            return False
        return self._hash_key == other._hash_key
Whether this generator is logically equal to another object.
Any ordering that is required by this expression.

This should check any previous expressions for their order, and raise a
`ValueError` for any incompatibilities.

Returns:
    The required order.
    def sample(self) -> tuple[tuple, ...]:
        """EXPERIMENTAL: A single random sample from this generator.

        This uses the standard `random` package and is not cryptographically
        secure.

        Returns:
            A sorted tuple of outcomes for each output of this generator.
        """
        if not self.outcomes():
            raise ValueError('Cannot sample from an empty set of outcomes.')

        preferred_pop_order, pop_order_reason = self._preferred_pop_order()

        if preferred_pop_order is not None and preferred_pop_order > 0:
            outcome = self.min_outcome()
            generated = tuple(self._generate_min(outcome))
        else:
            outcome = self.max_outcome()
            generated = tuple(self._generate_max(outcome))

        cumulative_weights = tuple(
            itertools.accumulate(g.denominator() * w for g, _, w in generated))
        denominator = cumulative_weights[-1]
        # We don't use random.choices since that is based on floats rather than ints.
        r = random.randrange(denominator)
        index = bisect.bisect_right(cumulative_weights, r)
        popped_generator, counts, _ = generated[index]
        head = tuple((outcome, ) * count for count in counts)
        if popped_generator.outcomes():
            tail = popped_generator.sample()
            return tuple(tuple(sorted(h + t)) for h, t in zip(head, tail))
        else:
            return head
EXPERIMENTAL: A single random sample from this generator.

This uses the standard `random` package and is not cryptographically
secure.

Returns:
    A sorted tuple of outcomes for each output of this generator.
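The integer-exact weighted selection inside `sample()` can be sketched with the same `accumulate`/`bisect_right` pattern. `weighted_choice` is an illustrative helper, not part of the icepool API:

```python
import bisect
import itertools
import random

def weighted_choice(items, weights, rng=random):
    """Illustrative only: weighted choice using exact integer arithmetic,
    avoiding random.choices, which goes through floats."""
    cumulative = list(itertools.accumulate(weights))
    # randrange over the total weight, then locate which item's weight
    # interval the draw falls into.
    r = rng.randrange(cumulative[-1])
    return items[bisect.bisect_right(cumulative, r)]

# With weights 1:2:3, the last item should come back about half the time.
rng = random.Random(0)
draws = [weighted_choice(['a', 'b', 'c'], [1, 2, 3], rng) for _ in range(6000)]
assert 2500 < draws.count('c') < 3500
```

Because icepool denominators are exact integers, this avoids any floating-point bias that `random.choices` could introduce for very large weights.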
class MultisetExpression(ABC, Generic[T_contra]):
    """Abstract base class representing an expression that operates on multisets.

    Expression methods can be applied to `MultisetGenerator`s to do simple
    evaluations. For joint evaluations, try `multiset_function`.

    Use the provided operations to build up more complicated
    expressions, or to attach a final evaluator.

    Operations include:

    | Operation                   | Count / notes                              |
    |:----------------------------|:-------------------------------------------|
    | `additive_union`, `+`       | `l + r`                                    |
    | `difference`, `-`           | `l - r`                                    |
    | `intersection`, `&`         | `min(l, r)`                                |
    | `union`, `\\|`               | `max(l, r)`                                |
    | `symmetric_difference`, `^` | `abs(l - r)`                               |
    | `multiply_counts`, `*`      | `count * n`                                |
    | `divide_counts`, `//`       | `count // n`                               |
    | `modulo_counts`, `%`        | `count % n`                                |
    | `keep_counts`               | `count if count >= n else 0` etc.          |
    | unary `+`                   | same as `keep_counts_ge(0)`                |
    | unary `-`                   | reverses the sign of all counts            |
    | `unique`                    | `min(count, n)`                            |
    | `keep_outcomes`             | `count if outcome in t else 0`             |
    | `drop_outcomes`             | `count if outcome not in t else 0`         |
    | `map_counts`                | `f(outcome, *counts)`                      |
    | `keep`, `[]`                | less capable than `KeepGenerator` version  |
    | `highest`                   | less capable than `KeepGenerator` version  |
    | `lowest`                    | less capable than `KeepGenerator` version  |

    | Evaluator                      | Summary                                                                     |
    |:-------------------------------|:----------------------------------------------------------------------------|
    | `issubset`, `<=`               | Whether the left side's counts are all <= their counterparts on the right   |
    | `issuperset`, `>=`             | Whether the left side's counts are all >= their counterparts on the right   |
    | `isdisjoint`                   | Whether the left side has no positive counts in common with the right side  |
    | `<`                            | As `<=`, but `False` if the two multisets are equal                         |
    | `>`                            | As `>=`, but `False` if the two multisets are equal                         |
    | `==`                           | Whether the left side has all the same counts as the right side             |
    | `!=`                           | Whether the left side has any different counts to the right side            |
    | `expand`                       | All elements in ascending order                                             |
    | `sum`                          | Sum of all elements                                                         |
    | `count`                        | The number of elements                                                      |
    | `any`                          | Whether there is at least 1 element                                         |
    | `highest_outcome_and_count`    | The highest outcome and how many of that outcome                            |
    | `all_counts`                   | All counts in descending order                                              |
    | `largest_count`                | The single largest count, aka x-of-a-kind                                   |
    | `largest_count_and_outcome`    | Same but also with the corresponding outcome                                |
    | `count_subset`, `//`           | The number of times the right side is contained in the left side            |
    | `largest_straight`             | Length of longest consecutive sequence                                      |
    | `largest_straight_and_outcome` | Same but also with the corresponding outcome                                |
    | `all_straights`                | Lengths of all consecutive sequences in descending order                    |
    """

    _inners: 'tuple[MultisetExpression, ...]'
    """A tuple of inner generators. These are assumed to be the positional arguments of the constructor."""

    @abstractmethod
    def _make_unbound(self, *unbound_inners) -> 'icepool.MultisetExpression':
        """Given a sequence of unbound inners, returns an unbound version of this expression."""

    @abstractmethod
    def _next_state(self, state, outcome: T_contra,
                    *counts: int) -> tuple[Hashable, int]:
        """Updates the state for this expression and does any necessary count modification.

        Args:
            state: The overall state. This will contain all information needed
                by this expression and any previous expressions.
            outcome: The current outcome.
            counts: The raw counts: first, the counts resulting from bound
                generators, then the counts from free variables.
                This must be passed to any previous expressions.

        Returns:
            * state: The updated state, which will be seen again by this
              `next_state` later.
            * count: The resulting count, which will be sent forward.
117 """ 118 119 @abstractmethod 120 def order(self) -> Order: 121 """Any ordering that is required by this expression. 122 123 This should check any previous expressions for their order, and 124 raise a ValueError for any incompatibilities. 125 126 Returns: 127 The required order. 128 """ 129 130 @abstractmethod 131 def _free_arity(self) -> int: 132 """The minimum number of multisets/counts that must be provided to this expression. 133 134 Any excess multisets/counts that are provided will be ignored. 135 136 This does not include bound generators. 137 """ 138 139 @cached_property 140 def _bound_generators(self) -> 'tuple[icepool.MultisetGenerator, ...]': 141 return reduce(operator.add, 142 (inner._bound_generators for inner in self._inners), ()) 143 144 def _unbind(self, prefix_start: int, 145 free_start: int) -> 'tuple[MultisetExpression, int]': 146 """Replaces bound generators within this expression with free variables. 147 148 Bound generators are replaced with free variables with index equal to 149 their position in _bound_generators(). 150 151 Variables that are already free have their indexes shifted by the 152 number of bound genrators. 153 154 Args: 155 prefix_start: The index of the next bound generator. 156 free_start: The total number of bound generators. 157 158 Returns: 159 The transformed expression and the new prefix_start. 
160 """ 161 unbound_inners = [] 162 for inner in self._inners: 163 unbound_inner, prefix_start = inner._unbind( 164 prefix_start, free_start) 165 unbound_inners.append(unbound_inner) 166 unbound_expression = self._make_unbound(*unbound_inners) 167 return unbound_expression, prefix_start 168 169 @staticmethod 170 def _validate_output_arity(inner: 'MultisetExpression') -> None: 171 """Validates that if the given expression is a generator, its output arity is 1.""" 172 if isinstance(inner, 173 icepool.MultisetGenerator) and inner.output_arity() != 1: 174 raise ValueError( 175 'Only generators with output arity of 1 may be bound to expressions.\nUse a multiset_function to select individual outputs.' 176 ) 177 178 @cached_property 179 def _items_for_cartesian_product(self) -> Sequence[tuple[T_contra, int]]: 180 if self._free_arity() > 0: 181 raise ValueError('Expression must be fully bound.') 182 return self.expand().items() # type: ignore 183 184 # Binary operators. 185 186 def __add__( 187 self, other: 188 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 189 /) -> 'MultisetExpression[T_contra]': 190 try: 191 return MultisetExpression.additive_union(self, other) 192 except ImplicitConversionError: 193 return NotImplemented 194 195 def __radd__( 196 self, other: 197 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 198 /) -> 'MultisetExpression[T_contra]': 199 try: 200 return MultisetExpression.additive_union(other, self) 201 except ImplicitConversionError: 202 return NotImplemented 203 204 def additive_union( 205 *args: 206 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 207 ) -> 'MultisetExpression[T_contra]': 208 """The combined elements from all of the multisets. 209 210 Same as `a + b + c + ...`. 211 212 Any resulting counts that would be negative are set to zero. 
        Example:
        ```python
        [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
        ```
        """
        expressions = tuple(
            implicit_convert_to_expression(arg) for arg in args)
        return icepool.expression.AdditiveUnionExpression(*expressions)

    def __sub__(
            self, other:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
            /) -> 'MultisetExpression[T_contra]':
        try:
            return MultisetExpression.difference(self, other)
        except ImplicitConversionError:
            return NotImplemented

    def __rsub__(
            self, other:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
            /) -> 'MultisetExpression[T_contra]':
        try:
            return MultisetExpression.difference(other, self)
        except ImplicitConversionError:
            return NotImplemented

    def difference(
        *args:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]'
    ) -> 'MultisetExpression[T_contra]':
        """The elements from the left multiset that are not in any of the others.

        Same as `a - b - c - ...`.

        Any resulting counts that would be negative are set to zero.

        Example:
        ```python
        [1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
        ```

        If no arguments are given, the result will be an empty multiset, i.e.
        all zero counts.

        Note that, as a multiset operation, this will only cancel elements 1:1.
        If you want to drop all elements in a set of outcomes regardless of
        count, either use `drop_outcomes()` instead, or use a large number of
        counts on the right side.
263 """ 264 expressions = tuple( 265 implicit_convert_to_expression(arg) for arg in args) 266 return icepool.expression.DifferenceExpression(*expressions) 267 268 def __and__( 269 self, other: 270 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 271 /) -> 'MultisetExpression[T_contra]': 272 try: 273 return MultisetExpression.intersection(self, other) 274 except ImplicitConversionError: 275 return NotImplemented 276 277 def __rand__( 278 self, other: 279 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 280 /) -> 'MultisetExpression[T_contra]': 281 try: 282 return MultisetExpression.intersection(other, self) 283 except ImplicitConversionError: 284 return NotImplemented 285 286 def intersection( 287 *args: 288 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 289 ) -> 'MultisetExpression[T_contra]': 290 """The elements that all the multisets have in common. 291 292 Same as `a & b & c & ...`. 293 294 Any resulting counts that would be negative are set to zero. 295 296 Example: 297 ```python 298 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 299 ``` 300 301 Note that, as a multiset operation, this will only intersect elements 302 1:1. 303 If you want to keep all elements in a set of outcomes regardless of 304 count, either use `keep_outcomes()` instead, or use a large number of 305 counts on the right side. 
306 """ 307 expressions = tuple( 308 implicit_convert_to_expression(arg) for arg in args) 309 return icepool.expression.IntersectionExpression(*expressions) 310 311 def __or__( 312 self, other: 313 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 314 /) -> 'MultisetExpression[T_contra]': 315 try: 316 return MultisetExpression.union(self, other) 317 except ImplicitConversionError: 318 return NotImplemented 319 320 def __ror__( 321 self, other: 322 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 323 /) -> 'MultisetExpression[T_contra]': 324 try: 325 return MultisetExpression.union(other, self) 326 except ImplicitConversionError: 327 return NotImplemented 328 329 def union( 330 *args: 331 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 332 ) -> 'MultisetExpression[T_contra]': 333 """The most of each outcome that appear in any of the multisets. 334 335 Same as `a | b | c | ...`. 336 337 Any resulting counts that would be negative are set to zero. 338 339 Example: 340 ```python 341 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 342 ``` 343 """ 344 expressions = tuple( 345 implicit_convert_to_expression(arg) for arg in args) 346 return icepool.expression.UnionExpression(*expressions) 347 348 def __xor__( 349 self, other: 350 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 351 /) -> 'MultisetExpression[T_contra]': 352 try: 353 return MultisetExpression.symmetric_difference(self, other) 354 except ImplicitConversionError: 355 return NotImplemented 356 357 def __rxor__( 358 self, other: 359 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 360 /) -> 'MultisetExpression[T_contra]': 361 try: 362 # Symmetric. 
            return MultisetExpression.symmetric_difference(self, other)
        except ImplicitConversionError:
            return NotImplemented

    def symmetric_difference(
            self, other:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
            /) -> 'MultisetExpression[T_contra]':
        """The elements that appear in the left or right multiset but not both.

        Same as `a ^ b`.

        Specifically, this produces the absolute difference between counts.
        If you don't want negative counts to be used from the inputs, you can
        do `left.keep_counts('>=', 0) ^ right.keep_counts('>=', 0)`.

        Example:
        ```python
        [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
        ```
        """
        other = implicit_convert_to_expression(other)
        return icepool.expression.SymmetricDifferenceExpression(self, other)

    def keep_outcomes(
            self, target:
        'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]',
            /) -> 'MultisetExpression[T_contra]':
        """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.

        This is similar to `intersection()`, except the right side is considered
        to have unlimited multiplicity.

        Args:
            target: A callable returning `True` iff the outcome should be kept,
                or an expression or collection of outcomes to keep.
        """
        if isinstance(target, MultisetExpression):
            return icepool.expression.FilterOutcomesBinaryExpression(
                self, target)
        else:
            return icepool.expression.FilterOutcomesExpression(self,
                                                               target=target)

    def drop_outcomes(
            self, target:
        'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]',
            /) -> 'MultisetExpression[T_contra]':
        """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.

        This is similar to `difference()`, except the right side is considered
        to have unlimited multiplicity.
        Args:
            target: A callable returning `True` iff the outcome should be
                dropped, or an expression or collection of outcomes to drop.
        """
        if isinstance(target, MultisetExpression):
            return icepool.expression.FilterOutcomesBinaryExpression(
                self, target, invert=True)
        else:
            return icepool.expression.FilterOutcomesExpression(self,
                                                               target=target,
                                                               invert=True)

    # Adjust counts.

    def map_counts(
            *args:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
            function: Callable[..., int]) -> 'MultisetExpression[T_contra]':
        """Maps the counts to new counts.

        Args:
            function: A function that takes `outcome, *counts` and produces a
                combined count.
        """
        expressions = tuple(
            implicit_convert_to_expression(arg) for arg in args)
        return icepool.expression.MapCountsExpression(*expressions,
                                                      function=function)

    def __mul__(self, n: int) -> 'MultisetExpression[T_contra]':
        if not isinstance(n, int):
            return NotImplemented
        return self.multiply_counts(n)

    # Commutable in this case.
    def __rmul__(self, n: int) -> 'MultisetExpression[T_contra]':
        if not isinstance(n, int):
            return NotImplemented
        return self.multiply_counts(n)

    def multiply_counts(self, n: int, /) -> 'MultisetExpression[T_contra]':
        """Multiplies all counts by n.

        Same as `self * n`.

        Example:
        ```python
        Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
        ```
        """
        return icepool.expression.MultiplyCountsExpression(self, constant=n)

    @overload
    def __floordiv__(self, other: int) -> 'MultisetExpression[T_contra]':
        ...
    @overload
    def __floordiv__(
        self, other:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]'
    ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
        """Same as divide_counts()."""

    @overload
    def __floordiv__(
        self, other:
        'int | MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]'
    ) -> 'MultisetExpression[T_contra] | icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
        """Same as count_subset()."""

    def __floordiv__(
        self, other:
        'int | MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]'
    ) -> 'MultisetExpression[T_contra] | icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
        if isinstance(other, int):
            return self.divide_counts(other)
        else:
            return self.count_subset(other)

    def divide_counts(self, n: int, /) -> 'MultisetExpression[T_contra]':
        """Divides all counts by n (rounding down).

        Same as `self // n`.

        Example:
        ```python
        Pool([1, 2, 2, 3]) // 2 -> [2]
        ```
        """
        return icepool.expression.FloorDivCountsExpression(self, constant=n)

    def __mod__(self, n: int, /) -> 'MultisetExpression[T_contra]':
        if not isinstance(n, int):
            return NotImplemented
        return icepool.expression.ModuloCountsExpression(self, constant=n)

    def modulo_counts(self, n: int, /) -> 'MultisetExpression[T_contra]':
        """Takes all counts modulo n.

        Same as `self % n`.
        Example:
        ```python
        Pool([1, 2, 2, 3]) % 2 -> [1, 3]
        ```
        """
        return self % n

    def __pos__(self) -> 'MultisetExpression[T_contra]':
        """Sets all negative counts to zero."""
        return icepool.expression.KeepCountsExpression(self,
                                                       comparison='>=',
                                                       constant=0)

    def __neg__(self) -> 'MultisetExpression[T_contra]':
        """As -1 * self."""
        return -1 * self

    def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=',
                                              '>'], n: int,
                    /) -> 'MultisetExpression[T_contra]':
        """Keeps counts fitting the comparison, treating the rest as zero.

        For example, `expression.keep_counts('>=', 2)` would keep pairs,
        triplets, etc. and drop singles.

        ```python
        Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
        ```

        Args:
            comparison: The comparison to use.
            n: The number to compare counts against.
        """
        return icepool.expression.KeepCountsExpression(self,
                                                       comparison=comparison,
                                                       constant=n)

    def unique(self, n: int = 1, /) -> 'MultisetExpression[T_contra]':
        """Counts each outcome at most `n` times.

        For example, `generator.unique(2)` would count each outcome at most
        twice.

        Example:
        ```python
        Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
        ```
        """
        return icepool.expression.UniqueExpression(self, constant=n)

    # Keep highest / lowest.

    @overload
    def keep(
        self, index: slice | Sequence[int | EllipsisType]
    ) -> 'MultisetExpression[T_contra]':
        ...

    @overload
    def keep(
        self, index: int
    ) -> 'icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]':
        ...

    def keep(
        self, index: slice | Sequence[int | EllipsisType] | int
    ) -> 'MultisetExpression[T_contra] | icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]':
        """Selects elements after drawing and sorting.
585 586 This is less capable than the `KeepGenerator` version. 587 In particular, it does not know how many elements it is selecting from, 588 so it must be anchored at the starting end. The advantage is that it 589 can be applied to any expression. 590 591 The valid types of argument are: 592 593 * A `slice`. If both start and stop are provided, they must both be 594 non-negative or both be negative. step is not supported. 595 * A sequence of `int` with `...` (`Ellipsis`) at exactly one end. 596 Each sorted element will be counted that many times, with the 597 `Ellipsis` treated as enough zeros (possibly "negative") to 598 fill the rest of the elements. 599 * An `int`, which evaluates by taking the element at the specified 600 index. In this case the result is a `Die` (if fully bound) or a 601 `MultisetEvaluator` (if there are free variables). 602 603 Use the `[]` operator for the same effect as this method. 604 """ 605 if isinstance(index, int): 606 return self.evaluate( 607 evaluator=icepool.evaluator.KeepEvaluator(index)) 608 else: 609 return icepool.expression.KeepExpression(self, index=index) 610 611 @overload 612 def __getitem__( 613 self, index: slice | Sequence[int | EllipsisType] 614 ) -> 'MultisetExpression[T_contra]': 615 ... 616 617 @overload 618 def __getitem__( 619 self, index: int 620 ) -> 'icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]': 621 ... 622 623 def __getitem__( 624 self, index: slice | Sequence[int | EllipsisType] | int 625 ) -> 'MultisetExpression[T_contra] | icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]': 626 return self.keep(index) 627 628 def lowest(self, 629 keep: int | None = None, 630 drop: int | None = None) -> 'MultisetExpression[T_contra]': 631 """Keep some of the lowest elements from this multiset and drop the rest. 632 633 In contrast to the die and free function versions, this does not 634 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 
635 Alternatively, you can perform some other evaluation. 636 637 This requires the outcomes to be evaluated in ascending order. 638 639 Args: 640 keep, drop: These arguments work together: 641 * If neither are provided, the single lowest element 642 will be kept. 643 * If only `keep` is provided, the `keep` lowest elements 644 will be kept. 645 * If only `drop` is provided, the `drop` lowest elements 646 will be dropped and the rest will be kept. 647 * If both are provided, `drop` lowest elements will be dropped, 648 then the next `keep` lowest elements will be kept. 649 """ 650 index = lowest_slice(keep, drop) 651 return self.keep(index) 652 653 def highest(self, 654 keep: int | None = None, 655 drop: int | None = None) -> 'MultisetExpression[T_contra]': 656 """Keep some of the highest elements from this multiset and drop the rest. 657 658 In contrast to the die and free function versions, this does not 659 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 660 Alternatively, you can perform some other evaluation. 661 662 This requires the outcomes to be evaluated in descending order. 663 664 Args: 665 keep, drop: These arguments work together: 666 * If neither are provided, the single highest element 667 will be kept. 668 * If only `keep` is provided, the `keep` highest elements 669 will be kept. 670 * If only `drop` is provided, the `drop` highest elements 671 will be dropped and the rest will be kept. 672 * If both are provided, `drop` highest elements will be dropped, 673 then the next `keep` highest elements will be kept. 674 """ 675 index = highest_slice(keep, drop) 676 return self.keep(index) 677 678 # Matching. 
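On a concrete roll, the keep/drop rules of `lowest()` and `highest()` can be sketched as plain list slicing. These are hypothetical standalone helpers for illustration only; icepool itself operates on count distributions over pools, not single rolls:

```python
def lowest(elements, keep=None, drop=None):
    """Keep some of the lowest elements, mirroring the keep/drop rules above."""
    elements = sorted(elements)
    if keep is None and drop is None:
        return elements[:1]  # neither provided: single lowest element
    start = drop or 0        # drop the `drop` lowest first
    stop = len(elements) if keep is None else start + keep
    return elements[start:stop]

def highest(elements, keep=None, drop=None):
    """Keep some of the highest elements, mirroring the keep/drop rules above."""
    elements = sorted(elements, reverse=True)
    if keep is None and drop is None:
        result = elements[:1]  # neither provided: single highest element
    else:
        start = drop or 0      # drop the `drop` highest first
        stop = len(elements) if keep is None else start + keep
        result = elements[start:stop]
    return sorted(result)
```

For example, `lowest([6, 4, 3, 1], keep=2)` keeps `[1, 3]`, and `highest([6, 4, 3, 1], drop=1)` drops the 6 and keeps `[1, 3, 4]`.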
679 680 def sort_match( 681 self, 682 comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 683 other: 'MultisetExpression[T_contra]', 684 /, 685 order: Order = Order.Descending) -> 'MultisetExpression[T_contra]': 686 """EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner. 687 688 Extra elements: If `self` has more elements than `other`, whether the 689 extra elements are kept depends on the `order` and `comparison`: 690 * Descending: kept for `'>='`, `'>'` 691 * Ascending: kept for `'<='`, `'<'` 692 693 Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of 694 *RISK*. Which pairs did the attacker win? 695 ```python 696 d6.pool(3).highest(2).sort_match('>', d6.pool(2)) 697 ``` 698 699 Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. 700 In this case the 4 would be blocked since the attacker lost that pair, 701 leaving the attacker's 6 and 3. If you don't want to keep the extra 702 element, you can use `highest`. 703 ```python 704 Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3] 705 Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6] 706 ``` 707 708 Contrast `maximum_match()`, which first creates the maximum number of 709 pairs that fit the comparison, not necessarily in sorted order. 710 In the above example, `maximum_match()` would allow the defender to 711 assign their 5s to block both the 4 and the 3. 712 713 Args: 714 comparison: The comparison to filter by. If you want to drop rather 715 than keep, use the complementary comparison: 716 * `'=='` vs. `'!='` 717 * `'<='` vs. `'>'` 718 * `'>='` vs. `'<'` 719 other: The other multiset to match elements with. 720 order: The order in which to sort before forming matches. 721 Default is descending. 
722 """ 723 other = implicit_convert_to_expression(other) 724 725 match comparison: 726 case '==': 727 lesser, tie, greater = 0, 1, 0 728 case '!=': 729 lesser, tie, greater = 1, 0, 1 730 case '<=': 731 lesser, tie, greater = 1, 1, 0 732 case '<': 733 lesser, tie, greater = 1, 0, 0 734 case '>=': 735 lesser, tie, greater = 0, 1, 1 736 case '>': 737 lesser, tie, greater = 0, 0, 1 738 case _: 739 raise ValueError(f'Invalid comparison {comparison}') 740 741 if order > 0: 742 left_first = lesser 743 right_first = greater 744 else: 745 left_first = greater 746 right_first = lesser 747 748 return icepool.expression.SortMatchExpression(self, 749 other, 750 order=order, 751 tie=tie, 752 left_first=left_first, 753 right_first=right_first) 754 755 def maximum_match_highest( 756 self, comparison: Literal['<=', 757 '<'], other: 'MultisetExpression[T_contra]', 758 /, *, keep: Literal['matched', 759 'unmatched']) -> 'MultisetExpression[T_contra]': 760 """EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`. 761 762 This matches elements of `self` with elements of `other`, such that in 763 each pair the element from `self` fits the `comparison` with the 764 element from `other`. As many such pairs of elements will be matched as 765 possible, preferring the highest matchable elements of `self`. 766 Finally, either the matched or unmatched elements from `self` are kept. 767 768 This requires that outcomes be evaluated in descending order. 769 770 Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 771 3d6. Defender dice can be used to block attacker dice of equal or lesser 772 value, and the defender prefers to block the highest attacker dice 773 possible. Which attacker dice were not blocked? 774 ```python 775 d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum() 776 ``` 777 778 Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. 779 Then the result would be [6, 1].
780 ```python 781 d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched') 782 -> [6, 1] 783 ``` 784 785 Contrast `sort_match()`, which first creates pairs in 786 sorted order and then filters them by `comparison`. 787 In the above example, `sort_match()` would force the defender to match 788 against the 5 and the 4, which would only allow them to block the 4. 789 790 Args: 791 comparison: Either `'<='` or `'<'`. 792 other: The other multiset to match elements with. 793 keep: Whether 'matched' or 'unmatched' elements are to be kept. 794 """ 795 if keep == 'matched': 796 keep_boolean = True 797 elif keep == 'unmatched': 798 keep_boolean = False 799 else: 800 raise ValueError(f"keep must be either 'matched' or 'unmatched'") 801 802 other = implicit_convert_to_expression(other) 803 match comparison: 804 case '<=': 805 match_equal = True 806 case '<': 807 match_equal = False 808 case _: 809 raise ValueError(f'Invalid comparison {comparison}') 810 return icepool.expression.MaximumMatchExpression( 811 self, 812 other, 813 order=Order.Descending, 814 match_equal=match_equal, 815 keep=keep_boolean) 816 817 def maximum_match_lowest( 818 self, comparison: Literal['>=', 819 '>'], other: 'MultisetExpression[T_contra]', 820 /, *, keep: Literal['matched', 821 'unmatched']) -> 'MultisetExpression[T_contra]': 822 """EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`. 823 824 This matches elements of `self` with elements of `other`, such that in 825 each pair the element from `self` fits the `comparison` with the 826 element from `other`. As many such pairs of elements will be matched as 827 possible, preferring the lowest matchable elements of `self`. 828 Finally, either the matched or unmatched elements from `self` are kept. 829 830 This requires that outcomes be evaluated in ascending order. 831 832 Contrast `sort_match()`, which first creates pairs in 833 sorted order and then filters them by `comparison`.
834 835 Args: 836 comparison: Either `'>='` or `'>'`. 837 other: The other multiset to match elements with. 838 keep: Whether 'matched' or 'unmatched' elements are to be kept. 839 """ 840 if keep == 'matched': 841 keep_boolean = True 842 elif keep == 'unmatched': 843 keep_boolean = False 844 else: 845 raise ValueError(f"keep must be either 'matched' or 'unmatched'") 846 847 other = implicit_convert_to_expression(other) 848 match comparison: 849 case '>=': 850 match_equal = True 851 case '>': 852 match_equal = False 853 case _: 854 raise ValueError(f'Invalid comparison {comparison}') 855 return icepool.expression.MaximumMatchExpression( 856 self, 857 other, 858 order=Order.Ascending, 859 match_equal=match_equal, 860 keep=keep_boolean) 861 862 # Evaluations. 863 864 def evaluate( 865 *expressions: 'MultisetExpression[T_contra]', 866 evaluator: 'icepool.MultisetEvaluator[T_contra, U]' 867 ) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]': 868 """Attaches a final `MultisetEvaluator` to expressions. 869 870 All of the `MultisetExpression` methods below are evaluations, 871 as are the operators `<, <=, >, >=, !=, ==`. This means if the 872 expression is fully bound, it will be evaluated to a `Die`. 873 874 Returns: 875 A `Die` if the expressions are fully bound. 876 A `MultisetEvaluator` otherwise. 877 """ 878 if all( 879 isinstance(expression, icepool.MultisetGenerator) 880 for expression in expressions): 881 return evaluator.evaluate(*expressions) 882 evaluator = icepool.evaluator.ExpressionEvaluator(*expressions, 883 evaluator=evaluator) 884 if evaluator._free_arity == 0: 885 return evaluator.evaluate() 886 else: 887 return evaluator 888 889 def expand( 890 self, 891 order: Order = Order.Ascending 892 ) -> 'icepool.Die[tuple[T_contra, ...]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, ...]]': 893 """Evaluation: All elements of the multiset in ascending order. 894 895 This is expensive and not recommended unless there are few possibilities.
896 897 Args: 898 order: Whether the elements are in ascending (default) or descending 899 order. 900 """ 901 return self.evaluate(evaluator=icepool.evaluator.ExpandEvaluator( 902 order=order)) 903 904 def sum( 905 self, 906 map: Callable[[T_contra], U] | Mapping[T_contra, U] | None = None 907 ) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]': 908 """Evaluation: The sum of all elements.""" 909 if map is None: 910 return self.evaluate(evaluator=icepool.evaluator.sum_evaluator) 911 else: 912 return self.evaluate(evaluator=icepool.evaluator.SumEvaluator(map)) 913 914 def count( 915 self 916 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 917 """Evaluation: The total number of elements in the multiset. 918 919 This is usually not very interesting unless some other operation is 920 performed first. Examples: 921 922 `generator.unique().count()` will count the number of unique outcomes. 923 924 `(generator & [4, 5, 6]).count()` will count up to one each of 925 4, 5, and 6. 926 """ 927 return self.evaluate(evaluator=icepool.evaluator.count_evaluator) 928 929 def any( 930 self 931 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 932 """Evaluation: Whether the multiset has at least one positive count. """ 933 return self.evaluate(evaluator=icepool.evaluator.any_evaluator) 934 935 def highest_outcome_and_count( 936 self 937 ) -> 'icepool.Die[tuple[T_contra, int]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, int]]': 938 """Evaluation: The highest outcome with positive count, along with that count. 939 940 If no outcomes have positive count, the min outcome will be returned with 0 count. 941 """ 942 return self.evaluate( 943 evaluator=icepool.evaluator.highest_outcome_and_count_evaluator) 944 945 def all_counts( 946 self, 947 filter: int | Literal['all'] = 1 948 ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[T_contra, tuple[int, ...]]': 949 """Evaluation: Sorted tuple of all counts, i.e. 
the sizes of all matching sets. 950 951 The sizes are in **descending** order. 952 953 Args: 954 filter: Any counts below this value will not be in the output. 955 For example, `filter=2` will only produce pairs and better. 956 If `None`, no filtering will be done. 957 958 Why not just place `keep_counts_ge()` before this? 959 `keep_counts_ge()` operates by setting counts to zero, so you 960 would still need an argument to specify whether you want to 961 output zero counts. So we might as well use the argument to do 962 both. 963 """ 964 return self.evaluate(evaluator=icepool.evaluator.AllCountsEvaluator( 965 filter=filter)) 966 967 def largest_count( 968 self 969 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 970 """Evaluation: The size of the largest matching set among the elements.""" 971 return self.evaluate( 972 evaluator=icepool.evaluator.largest_count_evaluator) 973 974 def largest_count_and_outcome( 975 self 976 ) -> 'icepool.Die[tuple[int, T_contra]] | icepool.MultisetEvaluator[T_contra, tuple[int, T_contra]]': 977 """Evaluation: The largest matching set among the elements and the corresponding outcome.""" 978 return self.evaluate( 979 evaluator=icepool.evaluator.largest_count_and_outcome_evaluator) 980 981 def __rfloordiv__( 982 self, other: 983 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 984 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 985 other = implicit_convert_to_expression(other) 986 return other.count_subset(self) 987 988 def count_subset( 989 self, 990 divisor: 991 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 992 /, 993 *, 994 empty_divisor: int | None = None 995 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]': 996 """Evaluation: The number of times the divisor is contained in this multiset. 997 998 Args: 999 divisor: The multiset to divide by. 1000 empty_divisor: If the divisor is empty, the outcome will be this. 
1001 If not set, `ZeroDivisionError` will be raised for an empty 1002 right side. 1003 1004 Raises: 1005 ZeroDivisionError: If the divisor may be empty and 1006 empty_divisor is not set. 1007 """ 1008 divisor = implicit_convert_to_expression(divisor) 1009 return self.evaluate(divisor, 1010 evaluator=icepool.evaluator.CountSubsetEvaluator( 1011 empty_divisor=empty_divisor)) 1012 1013 def largest_straight( 1014 self: 'MultisetExpression[int]' 1015 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[int, int]': 1016 """Evaluation: The size of the largest straight among the elements. 1017 1018 Outcomes must be `int`s. 1019 """ 1020 return self.evaluate( 1021 evaluator=icepool.evaluator.largest_straight_evaluator) 1022 1023 def largest_straight_and_outcome( 1024 self: 'MultisetExpression[int]' 1025 ) -> 'icepool.Die[tuple[int, int]] | icepool.MultisetEvaluator[int, tuple[int, int]]': 1026 """Evaluation: The size of the largest straight among the elements and the highest outcome in that straight. 1027 1028 Outcomes must be `int`s. 1029 """ 1030 return self.evaluate( 1031 evaluator=icepool.evaluator.largest_straight_and_outcome_evaluator) 1032 1033 def all_straights( 1034 self: 'MultisetExpression[int]' 1035 ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[int, tuple[int, ...]]': 1036 """Evaluation: The sizes of all straights. 1037 1038 The sizes are in **descending** order. 1039 1040 Each element can only contribute to one straight, though duplicate 1041 elements can produce straights that overlap in outcomes. In this case, 1042 elements are preferentially assigned to the longer straight.
1043 """ 1044 return self.evaluate( 1045 evaluator=icepool.evaluator.all_straights_evaluator) 1046 1047 def all_straights_reduce_counts( 1048 self: 'MultisetExpression[int]', 1049 reducer: Callable[[int, int], int] = operator.mul 1050 ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | icepool.MultisetEvaluator[int, tuple[tuple[int, int], ...]]': 1051 """Experimental: All straights with a reduce operation on the counts. 1052 1053 This can be used to evaluate e.g. cribbage-style straight counting. 1054 1055 The result is a tuple of `(run_length, run_score)`s. 1056 """ 1057 return self.evaluate( 1058 evaluator=icepool.evaluator.AllStraightsReduceCountsEvaluator( 1059 reducer=reducer)) 1060 1061 def argsort( 1062 self: 1063 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1064 *args: 1065 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1066 order: Order = Order.Descending, 1067 limit: int | None = None): 1068 """Experimental: Returns the indexes of the originating multisets for each rank in their additive union. 1069 1070 Example: 1071 ```python 1072 MultisetExpression.argsort([10, 9, 5], [9, 9]) 1073 ``` 1074 produces 1075 ```python 1076 ((0,), (0, 1, 1), (0,)) 1077 ``` 1078 1079 Args: 1080 self, *args: The multiset expressions to be evaluated. 1081 order: Which order the ranks are to be emitted. Default is descending. 1082 limit: How many ranks to emit. Default will emit all ranks, which 1083 makes the length of each outcome equal to 1084 `additive_union(+self, +arg1, +arg2, ...).unique().count()` 1085 """ 1086 self = implicit_convert_to_expression(self) 1087 converted_args = [implicit_convert_to_expression(arg) for arg in args] 1088 return self.evaluate(*converted_args, 1089 evaluator=icepool.evaluator.ArgsortEvaluator( 1090 order=order, limit=limit)) 1091 1092 # Comparators. 
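The `argsort` semantics can be sketched in plain Python over concrete lists. This is a hypothetical standalone helper (omitting the `limit` argument), not icepool's evaluator, which works over distributions:

```python
from collections import defaultdict

def argsort(*multisets, descending=True):
    """For each distinct rank across all multisets, emit the index of the
    originating multiset once per element of that rank it contributed."""
    by_value = defaultdict(list)
    for index, multiset in enumerate(multisets):
        for value in multiset:
            by_value[value].append(index)
    ranks = sorted(by_value, reverse=descending)
    return tuple(tuple(sorted(by_value[value])) for value in ranks)
```

With the example from the docstring, `argsort([10, 9, 5], [9, 9])` yields `((0,), (0, 1, 1), (0,))`: the 10 comes from multiset 0, the three 9s come from multisets 0, 1, and 1, and the 5 from multiset 0.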
1093 1094 def _compare( 1095 self, right: 1096 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1097 operation_class: Type['icepool.evaluator.ComparisonEvaluator'] 1098 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1099 if isinstance(right, MultisetExpression): 1100 evaluator = icepool.evaluator.ExpressionEvaluator( 1101 self, right, evaluator=operation_class()) 1102 elif isinstance(right, (Mapping, Sequence)): 1103 right_expression = icepool.implicit_convert_to_expression(right) 1104 evaluator = icepool.evaluator.ExpressionEvaluator( 1105 self, right_expression, evaluator=operation_class()) 1106 else: 1107 raise TypeError('Operand not comparable with expression.') 1108 1109 if evaluator._free_arity == 0: 1110 return evaluator.evaluate() 1111 else: 1112 return evaluator 1113 1114 def __lt__( 1115 self, other: 1116 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1117 / 1118 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1119 try: 1120 return self._compare(other, 1121 icepool.evaluator.IsProperSubsetEvaluator) 1122 except TypeError: 1123 return NotImplemented 1124 1125 def __le__( 1126 self, other: 1127 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1128 / 1129 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1130 try: 1131 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 1132 except TypeError: 1133 return NotImplemented 1134 1135 def issubset( 1136 self, other: 1137 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1138 / 1139 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1140 """Evaluation: Whether this multiset is a subset of the other multiset. 
1141 1142 Specifically, if this multiset has a lesser or equal count for each 1143 outcome than the other multiset, this evaluates to `True`; 1144 if there is some outcome for which this multiset has a greater count 1145 than the other multiset, this evaluates to `False`. 1146 1147 `issubset` is the same as `self <= other`. 1148 1149 `self < other` evaluates a proper subset relation, which is the same 1150 except the result is `False` if the two multisets are exactly equal. 1151 """ 1152 return self._compare(other, icepool.evaluator.IsSubsetEvaluator) 1153 1154 def __gt__( 1155 self, other: 1156 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1157 / 1158 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1159 try: 1160 return self._compare(other, 1161 icepool.evaluator.IsProperSupersetEvaluator) 1162 except TypeError: 1163 return NotImplemented 1164 1165 def __ge__( 1166 self, other: 1167 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1168 / 1169 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1170 try: 1171 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1172 except TypeError: 1173 return NotImplemented 1174 1175 def issuperset( 1176 self, other: 1177 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1178 / 1179 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1180 """Evaluation: Whether this multiset is a superset of the other multiset. 1181 1182 Specifically, if this multiset has a greater or equal count for each 1183 outcome than the other multiset, this evaluates to `True`; 1184 if there is some outcome for which this multiset has a lesser count 1185 than the other multiset, this evaluates to `False`. 1186 1187 A typical use of this evaluation is testing for the presence of a 1188 combo of cards in a hand, e.g. 
1189 1190 ```python 1191 deck.deal(5) >= ['a', 'a', 'b'] 1192 ``` 1193 1194 represents the chance that a deal of 5 cards contains at least two 'a's 1195 and one 'b'. 1196 1197 `issuperset` is the same as `self >= other`. 1198 1199 `self > other` evaluates a proper superset relation, which is the same 1200 except the result is `False` if the two multisets are exactly equal. 1201 """ 1202 return self._compare(other, icepool.evaluator.IsSupersetEvaluator) 1203 1204 # The result has no truth value. 1205 def __eq__( # type: ignore 1206 self, other: 1207 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1208 / 1209 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1210 try: 1211 return self._compare(other, icepool.evaluator.IsEqualSetEvaluator) 1212 except TypeError: 1213 return NotImplemented 1214 1215 # The result has no truth value. 1216 def __ne__( # type: ignore 1217 self, other: 1218 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1219 / 1220 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1221 try: 1222 return self._compare(other, 1223 icepool.evaluator.IsNotEqualSetEvaluator) 1224 except TypeError: 1225 return NotImplemented 1226 1227 def isdisjoint( 1228 self, other: 1229 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1230 / 1231 ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]': 1232 """Evaluation: Whether this multiset is disjoint from the other multiset. 1233 1234 Specifically, this evaluates to `False` if there is any outcome for 1235 which both multisets have positive count, and `True` if there is not. 1236 """ 1237 return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
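The comparator semantics above (`issubset`, `issuperset`, `isdisjoint`) can be modeled on concrete multisets with `collections.Counter`. This is an illustrative sketch of the count rules only; the real methods evaluate over dice pools and return a `Die[bool]`:

```python
from collections import Counter

def issubset(left, right):
    """True iff every outcome's count on the left is <= its count on the right."""
    left, right = Counter(left), Counter(right)
    return all(count <= right[outcome] for outcome, count in left.items())

def isdisjoint(left, right):
    """True iff no outcome has positive count on both sides."""
    return not (Counter(left) & Counter(right))
```

For instance, `issubset(['a', 'a'], ['a', 'a', 'b'])` is `True`, matching the card-combo example: a hand contains the combo iff the combo is a subset of the hand.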
Abstract base class representing an expression that operates on multisets.

Expression methods can be applied to `MultisetGenerator`s to do simple evaluations. For joint evaluations, try `multiset_function`.

Use the provided operations to build up more complicated expressions, or to attach a final evaluator. Operations include:
| Operation | Count / notes |
|---|---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `modulo_counts`, `%` | `count % n` |
| `keep_counts` | `count if count >= n else 0` etc. |
| unary `+` | same as `keep_counts_ge(0)` |
| unary `-` | reverses the sign of all counts |
| `unique` | `min(count, n)` |
| `keep_outcomes` | `count if outcome in t else 0` |
| `drop_outcomes` | `count if outcome not in t else 0` |
| `map_counts` | `f(outcome, *counts)` |
| `keep`, `[]` | less capable than `KeepGenerator` version |
| `highest` | less capable than `KeepGenerator` version |
| `lowest` | less capable than `KeepGenerator` version |

| Evaluator | Summary |
|---|---|
| `issubset`, `<=` | Whether the left side's counts are all <= their counterparts on the right |
| `issuperset`, `>=` | Whether the left side's counts are all >= their counterparts on the right |
| `isdisjoint` | Whether the left side has no positive counts in common with the right side |
| `<` | As `<=`, but `False` if the two multisets are equal |
| `>` | As `>=`, but `False` if the two multisets are equal |
| `==` | Whether the left side has all the same counts as the right side |
| `!=` | Whether the left side has any different counts to the right side |
| `expand` | All elements in ascending order |
| `sum` | Sum of all elements |
| `count` | The number of elements |
| `any` | Whether there is at least 1 element |
| `highest_outcome_and_count` | The highest outcome and how many of that outcome |
| `all_counts` | All counts in descending order |
| `largest_count` | The single largest count, aka x-of-a-kind |
| `largest_count_and_outcome` | Same but also with the corresponding outcome |
| `count_subset`, `//` | The number of times the right side is contained in the left side |
| `largest_straight` | Length of longest consecutive sequence |
| `largest_straight_and_outcome` | Same but also with the corresponding outcome |
| `all_straights` | Lengths of all consecutive sequences in descending order |
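Most rows of the operation table act pointwise on per-outcome counts, which can be mirrored with `collections.Counter` on concrete lists. Here `apply_counts` is a hypothetical helper, not part of icepool, which applies these operations to dice pools rather than concrete multisets:

```python
from collections import Counter

def apply_counts(op, left, right):
    """Apply a per-outcome count operation; counts <= 0 are treated as absent."""
    left, right = Counter(left), Counter(right)
    result = Counter()
    for outcome in left.keys() | right.keys():
        count = op(left[outcome], right[outcome])
        if count > 0:
            result[outcome] = count
    return sorted(result.elements())

# Table rows as count operations:
#   intersection (&): min    union (|): max    symmetric difference (^): abs(l - r)
```

For example, `apply_counts(min, [1, 2, 2, 3], [1, 2, 4])` gives `[1, 2]`, and `apply_counts(max, [1, 2, 2, 3], [1, 2, 4])` gives `[1, 2, 2, 3, 4]`, matching the method docstrings below.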
119 @abstractmethod 120 def order(self) -> Order: 121 """Any ordering that is required by this expression. 122 123 This should check any previous expressions for their order, and 124 raise a ValueError for any incompatibilities. 125 126 Returns: 127 The required order. 128 """
Any ordering that is required by this expression.
This should check any previous expressions for their order, and raise a ValueError for any incompatibilities.
Returns:
The required order.
204 def additive_union( 205 *args: 206 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 207 ) -> 'MultisetExpression[T_contra]': 208 """The combined elements from all of the multisets. 209 210 Same as `a + b + c + ...`. 211 212 Any resulting counts that would be negative are set to zero. 213 214 Example: 215 ```python 216 [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4] 217 ``` 218 """ 219 expressions = tuple( 220 implicit_convert_to_expression(arg) for arg in args) 221 return icepool.expression.AdditiveUnionExpression(*expressions)
The combined elements from all of the multisets.
Same as `a + b + c + ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
```
241 def difference( 242 *args: 243 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 244 ) -> 'MultisetExpression[T_contra]': 245 """The elements from the left multiset that are not in any of the others. 246 247 Same as `a - b - c - ...`. 248 249 Any resulting counts that would be negative are set to zero. 250 251 Example: 252 ```python 253 [1, 2, 2, 3] - [1, 2, 4] -> [2, 3] 254 ``` 255 256 If no arguments are given, the result will be an empty multiset, i.e. 257 all zero counts. 258 259 Note that, as a multiset operation, this will only cancel elements 1:1. 260 If you want to drop all elements in a set of outcomes regardless of 261 count, either use `drop_outcomes()` instead, or use a large number of 262 counts on the right side. 263 """ 264 expressions = tuple( 265 implicit_convert_to_expression(arg) for arg in args) 266 return icepool.expression.DifferenceExpression(*expressions)
The elements from the left multiset that are not in any of the others.
Same as `a - b - c - ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
```

If no arguments are given, the result will be an empty multiset, i.e. all zero counts.

Note that, as a multiset operation, this will only cancel elements 1:1. If you want to drop all elements in a set of outcomes regardless of count, either use `drop_outcomes()` instead, or use a large number of counts on the right side.
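The 1:1 cancellation caveat can be seen in a small `collections.Counter` sketch contrasting the two behaviors on concrete lists (illustrative helpers, not the library API):

```python
from collections import Counter

def difference(left, right):
    """Multiset difference: cancels elements 1:1, dropping negative counts."""
    return sorted((Counter(left) - Counter(right)).elements())

def drop_outcomes(left, targets):
    """Drops every copy of each target outcome, regardless of count."""
    targets = set(targets)
    return sorted(x for x in left if x not in targets)

# difference([1, 2, 2, 3], [1, 2, 4])    -> [2, 3]  (one 2 survives)
# drop_outcomes([1, 2, 2, 3], [1, 2, 4]) -> [3]     (both 2s are dropped)
```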
286 def intersection( 287 *args: 288 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 289 ) -> 'MultisetExpression[T_contra]': 290 """The elements that all the multisets have in common. 291 292 Same as `a & b & c & ...`. 293 294 Any resulting counts that would be negative are set to zero. 295 296 Example: 297 ```python 298 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 299 ``` 300 301 Note that, as a multiset operation, this will only intersect elements 302 1:1. 303 If you want to keep all elements in a set of outcomes regardless of 304 count, either use `keep_outcomes()` instead, or use a large number of 305 counts on the right side. 306 """ 307 expressions = tuple( 308 implicit_convert_to_expression(arg) for arg in args) 309 return icepool.expression.IntersectionExpression(*expressions)
The elements that all the multisets have in common.
Same as `a & b & c & ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
```

Note that, as a multiset operation, this will only intersect elements 1:1. If you want to keep all elements in a set of outcomes regardless of count, either use `keep_outcomes()` instead, or use a large number of counts on the right side.
329 def union( 330 *args: 331 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]' 332 ) -> 'MultisetExpression[T_contra]': 333 """The most of each outcome that appear in any of the multisets. 334 335 Same as `a | b | c | ...`. 336 337 Any resulting counts that would be negative are set to zero. 338 339 Example: 340 ```python 341 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 342 ``` 343 """ 344 expressions = tuple( 345 implicit_convert_to_expression(arg) for arg in args) 346 return icepool.expression.UnionExpression(*expressions)
The most of each outcome that appears in any of the multisets.

Same as `a | b | c | ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
```
```python
def symmetric_difference(
    self, other:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    /) -> 'MultisetExpression[T_contra]':
    other = implicit_convert_to_expression(other)
    return icepool.expression.SymmetricDifferenceExpression(self, other)
```

The elements that appear in the left or right multiset but not both.

Same as `a ^ b`.

Specifically, this produces the absolute difference between counts.
If you don't want negative counts to be used from the inputs, you can
do `left.keep_counts('>=', 0) ^ right.keep_counts('>=', 0)`.

Example:
```python
[1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
```
```python
def keep_outcomes(
    self, target:
    'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]',
    /) -> 'MultisetExpression[T_contra]':
    if isinstance(target, MultisetExpression):
        return icepool.expression.FilterOutcomesBinaryExpression(self, target)
    else:
        return icepool.expression.FilterOutcomesExpression(self, target=target)
```

Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.

This is similar to `intersection()`, except the right side is considered
to have unlimited multiplicity.

Arguments:

- target: A callable returning `True` iff the outcome should be kept,
  or an expression or collection of outcomes to keep.
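A plain-Python sketch of the filtering semantics (an illustration only, not icepool's implementation), showing both the callable and collection forms of `target`:

```python
def keep_outcomes_sketch(multiset, target):
    """Keep elements whose outcome is accepted by `target`; drop the rest."""
    if callable(target):
        accept = target
    else:
        allowed = set(target)

        def accept(outcome):
            return outcome in allowed

    return sorted(o for o in multiset if accept(o))

keep_outcomes_sketch([1, 2, 2, 3], [2, 3])           # [2, 2, 3]
keep_outcomes_sketch([1, 2, 2, 3], lambda o: o > 1)  # [2, 2, 3]
```

Note the duplicated 2 survives intact: unlike `intersection()`, the target side has unlimited multiplicity.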
```python
def drop_outcomes(
    self, target:
    'Callable[[T_contra], bool] | Collection[T_contra] | MultisetExpression[T_contra]',
    /) -> 'MultisetExpression[T_contra]':
    if isinstance(target, MultisetExpression):
        return icepool.expression.FilterOutcomesBinaryExpression(
            self, target, invert=True)
    else:
        return icepool.expression.FilterOutcomesExpression(
            self, target=target, invert=True)
```

Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.

This is similar to `difference()`, except the right side is considered
to have unlimited multiplicity.

Arguments:

- target: A callable returning `True` iff the outcome should be
  dropped, or an expression or collection of outcomes to drop.
```python
def map_counts(
    *args:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    function: Callable[..., int]) -> 'MultisetExpression[T_contra]':
    expressions = tuple(implicit_convert_to_expression(arg) for arg in args)
    return icepool.expression.MapCountsExpression(*expressions,
                                                  function=function)
```

Maps the counts to new counts.

Arguments:

- function: A function that takes `outcome, *counts` and produces a
  combined count.
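As a plain-Python sketch of the semantics (not icepool's implementation): for each outcome, the function receives the counts from all input multisets and returns the new count. Choosing `min` recovers `intersection()`:

```python
from collections import Counter

def map_counts_sketch(*multisets, function):
    """Apply `function(outcome, *counts)` across all input multisets."""
    counters = [Counter(m) for m in multisets]
    outcomes = set().union(*counters)
    result = Counter()
    for o in sorted(outcomes):
        result[o] = function(o, *(c[o] for c in counters))
    return sorted(result.elements())

map_counts_sketch([1, 2, 2, 3], [1, 2, 4],
                  function=lambda o, *counts: min(counts))  # [1, 2]
```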
```python
def multiply_counts(self, n: int, /) -> 'MultisetExpression[T_contra]':
    return icepool.expression.MultiplyCountsExpression(self, constant=n)
```

Multiplies all counts by `n`.

Same as `self * n`.

Example:
```python
Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
```
```python
def divide_counts(self, n: int, /) -> 'MultisetExpression[T_contra]':
    return icepool.expression.FloorDivCountsExpression(self, constant=n)
```

Divides all counts by `n` (rounding down).

Same as `self // n`.

Example:
```python
Pool([1, 2, 2, 3]) // 2 -> [2]
```
```python
def modulo_counts(self, n: int, /) -> 'MultisetExpression[T_contra]':
    return self % n
```

Reduces all counts modulo `n`.

Same as `self % n`.

Example:
```python
Pool([1, 2, 2, 3]) % 2 -> [1, 3]
```
```python
def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                n: int, /) -> 'MultisetExpression[T_contra]':
    return icepool.expression.KeepCountsExpression(self,
                                                   comparison=comparison,
                                                   constant=n)
```

Keeps counts fitting the comparison, treating the rest as zero.

For example, `expression.keep_counts('>=', 2)` would keep pairs,
triplets, etc. and drop singles.

```python
Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
```

Arguments:

- comparison: The comparison to use.
- n: The number to compare counts against.
```python
def unique(self, n: int = 1, /) -> 'MultisetExpression[T_contra]':
    return icepool.expression.UniqueExpression(self, constant=n)
```

Counts each outcome at most `n` times.

For example, `generator.unique(2)` would count each outcome at most
twice.

Example:
```python
Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]
```
```python
def keep(
    self, index: slice | Sequence[int | EllipsisType] | int
) -> 'MultisetExpression[T_contra] | icepool.Die[T_contra] | icepool.MultisetEvaluator[T_contra, T_contra]':
    if isinstance(index, int):
        return self.evaluate(evaluator=icepool.evaluator.KeepEvaluator(index))
    else:
        return icepool.expression.KeepExpression(self, index=index)
```

Selects elements after drawing and sorting.

This is less capable than the `KeepGenerator` version.
In particular, it does not know how many elements it is selecting from,
so it must be anchored at the starting end. The advantage is that it
can be applied to any expression.

The valid types of argument are:

- A `slice`. If both start and stop are provided, they must both be
  non-negative or both be negative. step is not supported.
- A sequence of `int` with `...` (`Ellipsis`) at exactly one end.
  Each sorted element will be counted that many times, with the
  `Ellipsis` treated as enough zeros (possibly "negative") to
  fill the rest of the elements.
- An `int`, which evaluates by taking the element at the specified
  index. In this case the result is a `Die` (if fully bound) or a
  `MultisetEvaluator` (if there are free variables).

Use the `[]` operator for the same effect as this method.
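The `Ellipsis`-anchored sequence form can be sketched in plain Python on a concrete multiset (an illustration of the semantics for non-negative counts only, not icepool's implementation):

```python
def keep_sketch(multiset, index):
    """Count each sorted element per an index sequence with `...` at one end."""
    elems = sorted(multiset)
    counts = list(index)
    if counts[-1] is Ellipsis:
        # Anchored at the low end; pad the high end with zeros.
        counts = counts[:-1] + [0] * (len(elems) - len(counts) + 1)
    else:
        # Anchored at the high end; pad the low end with zeros.
        counts = [0] * (len(elems) - len(counts) + 1) + counts[1:]
    result = []
    for elem, count in zip(elems, counts):
        result.extend([elem] * count)
    return result

keep_sketch([3, 1, 4, 1, 5], [1, 1, ...])  # two lowest: [1, 1]
keep_sketch([3, 1, 4, 1, 5], [..., 1])     # single highest: [5]
```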
```python
def lowest(self,
           keep: int | None = None,
           drop: int | None = None) -> 'MultisetExpression[T_contra]':
    index = lowest_slice(keep, drop)
    return self.keep(index)
```

Keep some of the lowest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not
automatically sum the dice. Use `.sum()` afterwards if you want to sum.
Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in ascending order.

Arguments:

- keep, drop: These arguments work together:
  - If neither are provided, the single lowest element will be kept.
  - If only `keep` is provided, the `keep` lowest elements will be kept.
  - If only `drop` is provided, the `drop` lowest elements will be
    dropped and the rest will be kept.
  - If both are provided, `drop` lowest elements will be dropped, then
    the next `keep` lowest elements will be kept.
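The `keep`/`drop` interaction can be sketched on a concrete multiset in plain Python (an illustration of the selection rule, not icepool's implementation, which works on expressions):

```python
def lowest_sketch(multiset, keep=None, drop=None):
    """Drop the `drop` lowest elements, then keep the next `keep` lowest."""
    if keep is None and drop is None:
        keep = 1  # default: keep the single lowest element
    drop = drop or 0
    elems = sorted(multiset)
    stop = len(elems) if keep is None else drop + keep
    return elems[drop:stop]

lowest_sketch([6, 4, 3, 1], keep=2)          # [1, 3]
lowest_sketch([6, 4, 3, 1], drop=1)          # [3, 4, 6]
lowest_sketch([6, 4, 3, 1], keep=1, drop=1)  # [3]
```

The `highest` variant below is the mirror image, selecting from the top of the sorted order.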
```python
def highest(self,
            keep: int | None = None,
            drop: int | None = None) -> 'MultisetExpression[T_contra]':
    index = highest_slice(keep, drop)
    return self.keep(index)
```

Keep some of the highest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not
automatically sum the dice. Use `.sum()` afterwards if you want to sum.
Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in descending order.

Arguments:

- keep, drop: These arguments work together:
  - If neither are provided, the single highest element will be kept.
  - If only `keep` is provided, the `keep` highest elements will be kept.
  - If only `drop` is provided, the `drop` highest elements will be
    dropped and the rest will be kept.
  - If both are provided, `drop` highest elements will be dropped, then
    the next `keep` highest elements will be kept.
```python
def sort_match(
        self,
        comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
        other: 'MultisetExpression[T_contra]',
        /,
        order: Order = Order.Descending) -> 'MultisetExpression[T_contra]':
    other = implicit_convert_to_expression(other)

    match comparison:
        case '==':
            lesser, tie, greater = 0, 1, 0
        case '!=':
            lesser, tie, greater = 1, 0, 1
        case '<=':
            lesser, tie, greater = 1, 1, 0
        case '<':
            lesser, tie, greater = 1, 0, 0
        case '>=':
            lesser, tie, greater = 0, 1, 1
        case '>':
            lesser, tie, greater = 0, 0, 1
        case _:
            raise ValueError(f'Invalid comparison {comparison}')

    if order > 0:
        left_first = lesser
        right_first = greater
    else:
        left_first = greater
        right_first = lesser

    return icepool.expression.SortMatchExpression(self,
                                                  other,
                                                  order=order,
                                                  tie=tie,
                                                  left_first=left_first,
                                                  right_first=right_first)
```

EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner.

Extra elements: If `self` has more elements than `other`, whether the
extra elements are kept depends on the `order` and `comparison`:

- Descending: kept for `'>='`, `'>'`
- Ascending: kept for `'<='`, `'<'`

Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of
*RISK*. Which pairs did the attacker win?
```python
d6.pool(3).highest(2).sort_match('>', d6.pool(2))
```

Suppose the attacker rolled 6, 4, 3 and the defender 5, 5.
In this case the 4 would be blocked since the attacker lost that pair,
leaving the attacker's 6 and 3. If you don't want to keep the extra
element, you can use `highest`.
```python
Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3]
Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6]
```

Contrast `maximum_match()`, which first creates the maximum number of
pairs that fit the comparison, not necessarily in sorted order.
In the above example, `maximum_match()` would allow the defender to
assign their 5s to block both the 4 and the 3.

Arguments:

- comparison: The comparison to filter by. If you want to drop rather
  than keep, use the complementary comparison:
  - `'=='` vs. `'!='`
  - `'<='` vs. `'>'`
  - `'>='` vs. `'<'`
- other: The other multiset to match elements with.
- order: The order in which to sort before forming matches.
  Default is descending.
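The descending `'>'` case from the *RISK* example can be sketched on concrete rolls in plain Python (an illustration of the pairing rule, not icepool's implementation):

```python
def sort_match_gt_sketch(left, right):
    """Descending sort-match with '>': pair sorted elements and keep left
    elements that beat their partner; unpaired extras on the left are kept."""
    left_sorted = sorted(left, reverse=True)
    right_sorted = sorted(right, reverse=True)
    kept = [l for l, r in zip(left_sorted, right_sorted) if l > r]
    # With descending order and '>', extra left elements are kept.
    kept += left_sorted[len(right_sorted):]
    return kept

sort_match_gt_sketch([6, 4, 3], [5, 5])  # [6, 3]
```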
```python
def maximum_match_highest(
        self, comparison: Literal['<=', '<'],
        other: 'MultisetExpression[T_contra]',
        /, *,
        keep: Literal['matched', 'unmatched']) -> 'MultisetExpression[T_contra]':
    if keep == 'matched':
        keep_boolean = True
    elif keep == 'unmatched':
        keep_boolean = False
    else:
        raise ValueError("keep must be either 'matched' or 'unmatched'")

    other = implicit_convert_to_expression(other)
    match comparison:
        case '<=':
            match_equal = True
        case '<':
            match_equal = False
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
    return icepool.expression.MaximumMatchExpression(
        self,
        other,
        order=Order.Descending,
        match_equal=match_equal,
        keep=keep_boolean)
```

EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`.

This matches elements of `self` with elements of `other`, such that in
each pair the element from `self` fits the `comparison` with the
element from `other`. As many such pairs of elements as possible will be
matched, preferring the highest matchable elements of `self`.
Finally, either the matched or unmatched elements from `self` are kept.

This requires that outcomes be evaluated in descending order.

Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of
3d6. Defender dice can be used to block attacker dice of equal or lesser
value, and the defender prefers to block the highest attacker dice
possible. Which attacker dice were not blocked?
```python
d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum()
```

Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
Then the result would be [6, 1].
```python
d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched')
-> [6, 1]
```

Contrast `sort_match()`, which first creates pairs in
sorted order and then filters them by `comparison`.
In the above example, `sort_match()` would force the defender to match
against the 5 and the 4, which would only allow them to block the 4.

Arguments:

- comparison: Either `'<='` or `'<'`.
- other: The other multiset to match elements with.
- keep: Whether 'matched' or 'unmatched' elements are to be kept.
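The greedy matching in the attacker/defender example can be sketched on concrete rolls in plain Python (an illustration of the `'<='`, `keep='unmatched'` case only, not icepool's implementation):

```python
def maximum_match_le_unmatched_sketch(left, right):
    """Greedily match the highest left elements with right elements >= them,
    then return the left elements that remain unmatched."""
    right_sorted = sorted(right, reverse=True)
    unmatched = []
    for l in sorted(left, reverse=True):
        if right_sorted and l <= right_sorted[0]:
            right_sorted.pop(0)  # this right element blocks l
        else:
            unmatched.append(l)
    return unmatched

maximum_match_le_unmatched_sketch([6, 4, 3, 1], [5, 5])  # [6, 1]
```

Here the defender's 5s block the 4 and the 3 (the highest blockable attacker dice), leaving the 6 and the 1 unmatched.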
```python
def maximum_match_lowest(
        self, comparison: Literal['>=', '>'],
        other: 'MultisetExpression[T_contra]',
        /, *,
        keep: Literal['matched', 'unmatched']) -> 'MultisetExpression[T_contra]':
    if keep == 'matched':
        keep_boolean = True
    elif keep == 'unmatched':
        keep_boolean = False
    else:
        raise ValueError("keep must be either 'matched' or 'unmatched'")

    other = implicit_convert_to_expression(other)
    match comparison:
        case '>=':
            match_equal = True
        case '>':
            match_equal = False
        case _:
            raise ValueError(f'Invalid comparison {comparison}')
    return icepool.expression.MaximumMatchExpression(
        self,
        other,
        order=Order.Ascending,
        match_equal=match_equal,
        keep=keep_boolean)
```

EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`.

This matches elements of `self` with elements of `other`, such that in
each pair the element from `self` fits the `comparison` with the
element from `other`. As many such pairs of elements as possible will be
matched, preferring the lowest matchable elements of `self`.
Finally, either the matched or unmatched elements from `self` are kept.

This requires that outcomes be evaluated in ascending order.

Contrast `sort_match()`, which first creates pairs in
sorted order and then filters them by `comparison`.

Arguments:

- comparison: Either `'>='` or `'>'`.
- other: The other multiset to match elements with.
- keep: Whether 'matched' or 'unmatched' elements are to be kept.
```python
def evaluate(
    *expressions: 'MultisetExpression[T_contra]',
    evaluator: 'icepool.MultisetEvaluator[T_contra, U]'
) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]':
    if all(
            isinstance(expression, icepool.MultisetGenerator)
            for expression in expressions):
        return evaluator.evaluate(*expressions)
    evaluator = icepool.evaluator.ExpressionEvaluator(*expressions,
                                                      evaluator=evaluator)
    if evaluator._free_arity == 0:
        return evaluator.evaluate()
    else:
        return evaluator
```

Attaches a final `MultisetEvaluator` to expressions.

All of the `MultisetExpression` methods below are evaluations,
as are the operators `<, <=, >, >=, !=, ==`. This means if the
expression is fully bound, it will be evaluated to a `Die`.

Returns:

A `Die` if the expressions are fully bound.
A `MultisetEvaluator` otherwise.
```python
def expand(
    self,
    order: Order = Order.Ascending
) -> 'icepool.Die[tuple[T_contra, ...]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, ...]]':
    return self.evaluate(evaluator=icepool.evaluator.ExpandEvaluator(order=order))
```

Evaluation: All elements of the multiset in sorted order.

This is expensive and not recommended unless there are few possibilities.

Arguments:

- order: Whether the elements are in ascending (default) or descending order.
```python
def sum(
    self,
    map: Callable[[T_contra], U] | Mapping[T_contra, U] | None = None
) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T_contra, U]':
    if map is None:
        return self.evaluate(evaluator=icepool.evaluator.sum_evaluator)
    else:
        return self.evaluate(evaluator=icepool.evaluator.SumEvaluator(map))
```

Evaluation: The sum of all elements.
```python
def count(self) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
    return self.evaluate(evaluator=icepool.evaluator.count_evaluator)
```

Evaluation: The total number of elements in the multiset.

This is usually not very interesting unless some other operation is
performed first. Examples:

`generator.unique().count()` will count the number of unique outcomes.

`(generator & [4, 5, 6]).count()` will count up to one each of 4, 5, and 6.
```python
def any(self) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
    return self.evaluate(evaluator=icepool.evaluator.any_evaluator)
```

Evaluation: Whether the multiset has at least one positive count.
```python
def highest_outcome_and_count(
    self
) -> 'icepool.Die[tuple[T_contra, int]] | icepool.MultisetEvaluator[T_contra, tuple[T_contra, int]]':
    return self.evaluate(
        evaluator=icepool.evaluator.highest_outcome_and_count_evaluator)
```

Evaluation: The highest outcome with positive count, along with that count.

If no outcomes have positive count, the min outcome will be returned with 0 count.
```python
def all_counts(
    self,
    filter: int | Literal['all'] = 1
) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[T_contra, tuple[int, ...]]':
    return self.evaluate(
        evaluator=icepool.evaluator.AllCountsEvaluator(filter=filter))
```

Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

The sizes are in **descending** order.

Arguments:

- filter: Any counts below this value will not be in the output.
  For example, `filter=2` will only produce pairs and better.
  If `'all'`, no filtering will be done.

  Why not just place `keep_counts_ge()` before this?
  `keep_counts_ge()` operates by setting counts to zero, so you
  would still need an argument to specify whether you want to
  output zero counts. So we might as well use the argument to do both.
```python
def largest_count(
    self
) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
    return self.evaluate(evaluator=icepool.evaluator.largest_count_evaluator)
```

Evaluation: The size of the largest matching set among the elements.
```python
def largest_count_and_outcome(
    self
) -> 'icepool.Die[tuple[int, T_contra]] | icepool.MultisetEvaluator[T_contra, tuple[int, T_contra]]':
    return self.evaluate(
        evaluator=icepool.evaluator.largest_count_and_outcome_evaluator)
```

Evaluation: The largest matching set among the elements and the corresponding outcome.
```python
def count_subset(
    self,
    divisor:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
    /,
    *,
    empty_divisor: int | None = None
) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T_contra, int]':
    divisor = implicit_convert_to_expression(divisor)
    return self.evaluate(divisor,
                         evaluator=icepool.evaluator.CountSubsetEvaluator(
                             empty_divisor=empty_divisor))
```

Evaluation: The number of times the divisor is contained in this multiset.

Arguments:

- divisor: The multiset to divide by.
- empty_divisor: If the divisor is empty, the outcome will be this.
  If not set, `ZeroDivisionError` will be raised for an empty right side.

Raises:

- ZeroDivisionError: If the divisor may be empty and `empty_divisor` is not set.
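A plain-Python sketch of the "multiset division" semantics (an illustration only, not icepool's implementation): the answer is how many disjoint copies of the divisor fit in the multiset simultaneously, i.e. the minimum over outcomes of floor-divided counts.

```python
from collections import Counter

def count_subset_sketch(multiset, divisor):
    """How many whole copies of `divisor` are contained in `multiset`."""
    counts = Counter(multiset)
    divisor_counts = Counter(divisor)
    if not divisor_counts:
        raise ZeroDivisionError('empty divisor')
    return min(counts[o] // c for o, c in divisor_counts.items())

count_subset_sketch(['a', 'a', 'a', 'b', 'b'], ['a', 'b'])  # 2
```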
```python
def largest_straight(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[int] | icepool.MultisetEvaluator[int, int]':
    return self.evaluate(evaluator=icepool.evaluator.largest_straight_evaluator)
```

Evaluation: The size of the largest straight among the elements.

Outcomes must be `int`s.
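A plain-Python sketch of the result on a concrete multiset (an illustration only, not icepool's implementation, which works over the whole probability space):

```python
def largest_straight_sketch(multiset):
    """Length of the longest run of consecutive integers with positive count."""
    outcomes = sorted(set(multiset))
    best = run = 0
    for i, o in enumerate(outcomes):
        run = run + 1 if i > 0 and o == outcomes[i - 1] + 1 else 1
        best = max(best, run)
    return best

largest_straight_sketch([1, 2, 3, 5, 6])  # 3
```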
```python
def largest_straight_and_outcome(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[tuple[int, int]] | icepool.MultisetEvaluator[int, tuple[int, int]]':
    return self.evaluate(
        evaluator=icepool.evaluator.largest_straight_and_outcome_evaluator)
```

Evaluation: The size of the largest straight among the elements and the highest outcome in that straight.

Outcomes must be `int`s.
```python
def all_straights(
    self: 'MultisetExpression[int]'
) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[int, tuple[int, ...]]':
    return self.evaluate(evaluator=icepool.evaluator.all_straights_evaluator)
```

Evaluation: The sizes of all straights.

The sizes are in **descending** order.

Each element can only contribute to one straight, though duplicate
elements can produce straights that overlap in outcomes. In this case,
elements are preferentially assigned to the longer straight.
```python
def all_straights_reduce_counts(
    self: 'MultisetExpression[int]',
    reducer: Callable[[int, int], int] = operator.mul
) -> 'icepool.Die[tuple[tuple[int, int], ...]] | icepool.MultisetEvaluator[int, tuple[tuple[int, int], ...]]':
    return self.evaluate(
        evaluator=icepool.evaluator.AllStraightsReduceCountsEvaluator(
            reducer=reducer))
```

Experimental: All straights with a reduce operation on the counts.

This can be used to evaluate e.g. cribbage-style straight counting.

The result is a tuple of `(run_length, run_score)`s.
1061 def argsort( 1062 self: 1063 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1064 *args: 1065 'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]', 1066 order: Order = Order.Descending, 1067 limit: int | None = None): 1068 """Experimental: Returns the indexes of the originating multisets for each rank in their additive union. 1069 1070 Example: 1071 ```python 1072 MultisetExpression.argsort([10, 9, 5], [9, 9]) 1073 ``` 1074 produces 1075 ```python 1076 ((0,), (0, 1, 1), (0,)) 1077 ``` 1078 1079 Args: 1080 self, *args: The multiset expressions to be evaluated. 1081 order: Which order the ranks are to be emitted. Default is descending. 1082 limit: How many ranks to emit. Default will emit all ranks, which 1083 makes the length of each outcome equal to 1084 `additive_union(+self, +arg1, +arg2, ...).unique().count()` 1085 """ 1086 self = implicit_convert_to_expression(self) 1087 converted_args = [implicit_convert_to_expression(arg) for arg in args] 1088 return self.evaluate(*converted_args, 1089 evaluator=icepool.evaluator.ArgsortEvaluator( 1090 order=order, limit=limit))
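The docstring's example can be modeled for fixed lists with ordinary `Counter`s. This sketch only illustrates the semantics; the real method operates on whole distributions, and `argsort_fixed` is a hypothetical helper:

```python
from collections import Counter

def argsort_fixed(*args, descending=True, limit=None):
    """For each rank in the additive union, emit the index of each input
    multiset once per copy of that rank it contains."""
    counters = [Counter(arg) for arg in args]
    ranks = sorted(set().union(*counters), reverse=descending)
    if limit is not None:
        ranks = ranks[:limit]
    return tuple(
        tuple(i for i, c in enumerate(counters) for _ in range(c[rank]))
        for rank in ranks)

# Rank 10 appears only in multiset 0; rank 9 once in 0 and twice in 1;
# rank 5 only in 0.
argsort_fixed([10, 9, 5], [9, 9])  # ((0,), (0, 1, 1), (0,))
```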
    def issubset(
            self, other:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
            /
    ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
        """Evaluation: Whether this multiset is a subset of the other multiset.

        Specifically, if this multiset has a lesser or equal count for each
        outcome than the other multiset, this evaluates to `True`;
        if there is some outcome for which this multiset has a greater count
        than the other multiset, this evaluates to `False`.

        `issubset` is the same as `self <= other`.

        `self < other` evaluates a proper subset relation, which is the same
        except the result is `False` if the two multisets are exactly equal.
        """
        return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
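On fixed multisets, the count-wise rule is easy to state with `collections.Counter`. This is a sketch of the relation being evaluated, not the library code:

```python
from collections import Counter

def multiset_issubset(a, b):
    """True iff every outcome's count in `a` is <= its count in `b`."""
    ca, cb = Counter(a), Counter(b)
    return all(ca[outcome] <= cb[outcome] for outcome in ca)

multiset_issubset(['a', 'b'], ['a', 'a', 'b'])       # True
multiset_issubset(['a', 'a', 'a'], ['a', 'a', 'b'])  # False: too many 'a's
```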
    def issuperset(
            self, other:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
            /
    ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
        """Evaluation: Whether this multiset is a superset of the other multiset.

        Specifically, if this multiset has a greater or equal count for each
        outcome than the other multiset, this evaluates to `True`;
        if there is some outcome for which this multiset has a lesser count
        than the other multiset, this evaluates to `False`.

        A typical use of this evaluation is testing for the presence of a
        combo of cards in a hand, e.g.

        ```python
        deck.deal(5) >= ['a', 'a', 'b']
        ```

        represents the chance that a deal of 5 cards contains at least two 'a's
        and one 'b'.

        `issuperset` is the same as `self >= other`.

        `self > other` evaluates a proper superset relation, which is the same
        except the result is `False` if the two multisets are exactly equal.
        """
        return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
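The card-combo test from the docstring, restated for one fixed hand. This is a sketch of what is checked per deal; the actual evaluation averages this over all possible deals:

```python
from collections import Counter

def contains_combo(hand, combo):
    """True iff the hand has at least as many of each card as the combo."""
    h, c = Counter(hand), Counter(combo)
    return all(h[card] >= c[card] for card in c)

contains_combo(['a', 'a', 'b', 'c', 'd'], ['a', 'a', 'b'])  # True
contains_combo(['a', 'b', 'b', 'c', 'd'], ['a', 'a', 'b'])  # False: only one 'a'
```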
    def isdisjoint(
            self, other:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]',
            /
    ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T_contra, bool]':
        """Evaluation: Whether this multiset is disjoint from the other multiset.

        Specifically, this evaluates to `False` if there is any outcome for
        which both multisets have positive count, and `True` if there is not.
        """
        return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
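For fixed multisets this is equivalent to an empty multiset intersection (a sketch of the relation, not the library code):

```python
from collections import Counter

def multiset_isdisjoint(a, b):
    """True iff no outcome has positive count in both multisets."""
    # Counter & Counter keeps the minimum of each count; empty means disjoint.
    return not (Counter(a) & Counter(b))

multiset_isdisjoint([1, 2], [3, 4])  # True
multiset_isdisjoint([1, 2], [2, 3])  # False: both contain a 2
```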
class MultisetEvaluator(ABC, Generic[T_contra, U_co]):
    """An abstract, immutable, callable class for evaluating one or more `MultisetGenerator`s.

    There is one abstract method to implement: `next_state()`.
    This should incrementally calculate the result given one outcome at a time
    along with how many of that outcome were produced.

    An example sequence of calls, as far as `next_state()` is concerned, is:

    1. `state = next_state(state=None, outcome=1, count_of_1s)`
    2. `state = next_state(state, 2, count_of_2s)`
    3. `state = next_state(state, 3, count_of_3s)`
    4. `state = next_state(state, 4, count_of_4s)`
    5. `state = next_state(state, 5, count_of_5s)`
    6. `state = next_state(state, 6, count_of_6s)`
    7. `outcome = final_outcome(state)`

    A few other methods can optionally be overridden to further customize behavior.

    It is not expected that subclasses of `MultisetEvaluator`
    be able to handle arbitrary types or numbers of generators.
    Indeed, most are expected to handle only a fixed number of generators,
    and often even only generators with a particular type of `Die`.

    Instances cache all intermediate state distributions.
    You should therefore reuse instances when possible.

    Instances should not be modified after construction
    in any way that affects the return values of these methods.
    Otherwise, values in the cache may be incorrect.
    """

    @abstractmethod
    def next_state(self, state: Hashable, outcome: T_contra, /, *counts:
                   int) -> Hashable:
        """State transition function.

        This should produce a state given the previous state, an outcome,
        and the number of that outcome produced by each generator.

        `evaluate()` will always call this using only positional arguments.
        Furthermore, there is no expectation that a subclass be able to handle
        an arbitrary number of counts. Thus, you are free to rename any of
        the parameters in a subclass, or to replace `*counts` with a fixed set
        of parameters.

        Make sure to handle the base case where `state is None`.

        States must be hashable. Currently, they do not have to be orderable.
        However, this may change in the future, and if they are not totally
        orderable, you must override `final_outcome` to create totally orderable
        final outcomes.

        The behavior of returning a `Die` from `next_state` is currently
        undefined.

        Args:
            state: A hashable object indicating the state before rolling the
                current outcome. If this is the first outcome being considered,
                `state` will be `None`.
            outcome: The current outcome.
                `next_state` will see all rolled outcomes in monotonic order;
                either ascending or descending depending on `order()`.
                If there are multiple generators, the set of outcomes is at
                least the union of the outcomes of the individual generators.
                You can use `extra_outcomes()` to add extra outcomes.
            *counts: One value (usually an `int`) for each generator output
                indicating how many of the current outcome were produced.

        Returns:
            A hashable object indicating the next state.
            The special value `icepool.Reroll` can be used to immediately remove
            the state from consideration, effectively performing a full reroll.
        """

    def final_outcome(
            self, final_state: Hashable
    ) -> 'U_co | icepool.Die[U_co] | icepool.RerollType':
        """Optional function to generate a final output outcome from a final state.

        By default, the final outcome is equal to the final state.
        Note that `None` is not a valid outcome for a `Die`,
        and if there are no outcomes, `final_outcome` will immediately
        be called with `final_state=None`.
        Subclasses that want to handle this case should explicitly define what
        happens.

        Args:
            final_state: A state after all outcomes have been processed.

        Returns:
            A final outcome that will be used as part of constructing the result `Die`.
            As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
        """
        # If not overridden, the final_state should have type U_co.
        return cast(U_co, final_state)

    def order(self) -> Order:
        """Optional function to determine the order in which `next_state()` will see outcomes.

        The default is ascending order. This has better caching behavior with
        mixed standard dice.

        Returns:
            * Order.Ascending (= 1)
                if `next_state()` should always see the outcomes in ascending order.
            * Order.Descending (= -1)
                if `next_state()` should always see the outcomes in descending order.
            * Order.Any (= 0)
                if the result of the evaluation is order-independent.
        """
        return Order.Ascending

    def extra_outcomes(self,
                       outcomes: Sequence[T_contra]) -> Collection[T_contra]:
        """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.

        These will be seen by `next_state` even if they do not appear in the
        generator(s). The default implementation returns `()`, or no additional
        outcomes.

        If you want `next_state` to see consecutive `int` outcomes, you can set
        `extra_outcomes = icepool.MultisetEvaluator.consecutive`.
        See `consecutive()` below.

        Args:
            outcomes: The outcomes that could be produced by the generators, in
                ascending order.
        """
        return ()

    def consecutive(self, outcomes: Sequence[int]) -> Collection[int]:
        """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.

        Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this.

        Returns:
            All `int`s from the min outcome to the max outcome among the generators,
            inclusive.

        Raises:
            TypeError: if any generator has any non-`int` outcome.
        """
        if not outcomes:
            return ()

        if any(not isinstance(x, int) for x in outcomes):
            raise TypeError(
                "consecutive cannot be used with outcomes of type other than 'int'."
            )

        return range(outcomes[0], outcomes[-1] + 1)

    def prefix_generators(self) -> 'tuple[icepool.MultisetGenerator, ...]':
        """An optional sequence of extra generators whose counts will be prepended to *counts."""
        return ()

    def validate_arity(self, arity: int) -> None:
        """An optional method to verify the total input arity.

        This is called after any implicit conversion to generators, but does
        not include any `extra_generators()`.

        Overriding `next_state` with a fixed number of counts will make this
        check redundant.

        Raises:
            `ValueError` if the total input arity is not valid.
        """

    @cached_property
    def _cache(
        self
    ) -> 'MutableMapping[tuple[Order, Alignment, tuple[MultisetGenerator, ...], Hashable], Mapping[Any, int]]':
        """Cached results.

        The key is `(order, extra_outcomes, generators, state)`.
        The value is another mapping `final_state: quantity` representing the
        state distribution produced by `order, extra_outcomes, generators` when
        starting at state `state`.
        """
        return {}

    @overload
    def evaluate(
            self, *args: 'Mapping[T_contra, int] | Sequence[T_contra]'
    ) -> 'icepool.Die[U_co]':
        ...

    @overload
    def evaluate(
            self, *args: 'MultisetExpression[T_contra]'
    ) -> 'MultisetEvaluator[T_contra, U_co]':
        ...

    @overload
    def evaluate(
        self, *args:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]'
    ) -> 'icepool.Die[U_co] | MultisetEvaluator[T_contra, U_co]':
        ...

    def evaluate(
        self, *args:
        'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]'
    ) -> 'icepool.Die[U_co] | MultisetEvaluator[T_contra, U_co]':
        """Evaluates generator(s).

        You can call the `MultisetEvaluator` object directly for the same effect,
        e.g. `sum_evaluator(generator)` is an alias for `sum_evaluator.evaluate(generator)`.

        Most evaluators will expect a fixed number of input multisets.
        The union of the outcomes of the generator(s) must be totally orderable.

        Args:
            *args: Each may be one of the following:
                * A `MultisetExpression`.
                * A mapping from outcomes to the number of those outcomes.
                * A sequence of outcomes.

        Returns:
            A `Die` representing the distribution of the final outcome if no
            arg contains a free variable. Otherwise, returns a new evaluator.
        """
        from icepool.generator.alignment import Alignment

        # Convert arguments to expressions.
        expressions = tuple(
            icepool.implicit_convert_to_expression(arg) for arg in args)

        if any(expression._free_arity() > 0 for expression in expressions):
            from icepool.evaluator.expression import ExpressionEvaluator
            return ExpressionEvaluator(*expressions, evaluator=self)

        if not all(
                isinstance(expression, icepool.MultisetGenerator)
                for expression in expressions):
            from icepool.evaluator.expression import ExpressionEvaluator
            return ExpressionEvaluator(*expressions, evaluator=self).evaluate()

        generators = cast(tuple[icepool.MultisetGenerator, ...], expressions)

        self.validate_arity(
            sum(generator.output_arity() for generator in generators))

        generators = self.prefix_generators() + generators

        if not all(generator._is_resolvable() for generator in generators):
            return icepool.Die([])

        algorithm, order = self._select_algorithm(*generators)

        outcomes = icepool.sorted_union(*(generator.outcomes()
                                          for generator in generators))
        extra_outcomes = Alignment(self.extra_outcomes(outcomes))

        dist: MutableMapping[Any, int] = defaultdict(int)
        iterators = MultisetEvaluator._initialize_generators(generators)
        for p in itertools.product(*iterators):
            sub_generators, sub_weights = zip(*p)
            prod_weight = math.prod(sub_weights)
            sub_result = algorithm(order, extra_outcomes, sub_generators)
            for sub_state, sub_weight in sub_result.items():
                dist[sub_state] += sub_weight * prod_weight

        final_outcomes = []
        final_weights = []
        for state, weight in dist.items():
            outcome = self.final_outcome(state)
            if outcome is None:
                raise TypeError(
                    "None is not a valid final outcome.\n"
                    "This may have been a result of not supplying any generator with an outcome."
                )
            if outcome is not icepool.Reroll:
                final_outcomes.append(outcome)
                final_weights.append(weight)

        return icepool.Die(final_outcomes, final_weights)

    __call__ = evaluate

    def _select_algorithm(
        self, *generators: 'icepool.MultisetGenerator[T_contra, Any]'
    ) -> tuple[
            'Callable[[Order, Alignment[T_contra], tuple[icepool.MultisetGenerator[T_contra, Any], ...]], Mapping[Any, int]]',
            Order]:
        """Selects an algorithm and iteration order.

        Returns:
            * The algorithm to use (`_eval_internal*`).
            * The order in which `next_state()` sees outcomes.
                1 for ascending and -1 for descending.
        """
        eval_order = self.order()

        if not generators:
            # No generators.
            return self._eval_internal, eval_order

        preferred_pop_order, pop_order_reason = merge_pop_orders(
            *(generator._preferred_pop_order() for generator in generators))

        if preferred_pop_order is None:
            preferred_pop_order = Order.Any
            pop_order_reason = PopOrderReason.NoPreference

        # No mandatory evaluation order, go with preferred algorithm.
        # Note that this has order *opposite* the pop order.
        if eval_order == Order.Any:
            return self._eval_internal, Order(-preferred_pop_order
                                              or Order.Ascending)

        # Mandatory evaluation order.
        if preferred_pop_order == Order.Any:
            return self._eval_internal, eval_order
        elif eval_order != preferred_pop_order:
            return self._eval_internal, eval_order
        else:
            return self._eval_internal_forward, eval_order

    def _eval_internal(
        self, order: Order, extra_outcomes: 'Alignment[T_contra]',
        generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]'
    ) -> Mapping[Any, int]:
        """Internal algorithm for iterating in the more-preferred order.

        All intermediate return values are cached in the instance.

        Arguments:
            order: The order in which to send outcomes to `next_state()`.
            extra_outcomes: As `extra_outcomes()`. Elements will be popped off
                this during recursion.
            generators: One or more `MultisetGenerator`s to evaluate. Elements
                will be popped off this during recursion.

        Returns:
            A dict `{ state : weight }` describing the probability distribution
                over states.
        """
        cache_key = (order, extra_outcomes, generators, None)
        if cache_key in self._cache:
            return self._cache[cache_key]

        result: MutableMapping[Any, int] = defaultdict(int)

        if all(not generator.outcomes()
               for generator in generators) and not extra_outcomes.outcomes():
            result = {None: 1}
        else:
            outcome, prev_extra_outcomes, iterators = MultisetEvaluator._pop_generators(
                Order(-order), extra_outcomes, generators)
            for p in itertools.product(*iterators):
                prev_generators, counts, weights = zip(*p)
                counts = tuple(itertools.chain.from_iterable(counts))
                prod_weight = math.prod(weights)
                prev = self._eval_internal(order, prev_extra_outcomes,
                                           prev_generators)
                for prev_state, prev_weight in prev.items():
                    state = self.next_state(prev_state, outcome, *counts)
                    if state is not icepool.Reroll:
                        result[state] += prev_weight * prod_weight

        self._cache[cache_key] = result
        return result

    def _eval_internal_forward(
            self,
            order: Order,
            extra_outcomes: 'Alignment[T_contra]',
            generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]',
            state: Hashable = None) -> Mapping[Any, int]:
        """Internal algorithm for iterating in the less-preferred order.

        All intermediate return values are cached in the instance.

        Arguments:
            order: The order in which to send outcomes to `next_state()`.
            extra_outcomes: As `extra_outcomes()`. Elements will be popped off
                this during recursion.
            generators: One or more `MultisetGenerator`s to evaluate. Elements
                will be popped off this during recursion.

        Returns:
            A dict `{ state : weight }` describing the probability distribution
                over states.
        """
        cache_key = (order, extra_outcomes, generators, state)
        if cache_key in self._cache:
            return self._cache[cache_key]

        result: MutableMapping[Any, int] = defaultdict(int)

        if all(not generator.outcomes()
               for generator in generators) and not extra_outcomes.outcomes():
            result = {state: 1}
        else:
            outcome, next_extra_outcomes, iterators = MultisetEvaluator._pop_generators(
                order, extra_outcomes, generators)
            for p in itertools.product(*iterators):
                next_generators, counts, weights = zip(*p)
                counts = tuple(itertools.chain.from_iterable(counts))
                prod_weight = math.prod(weights)
                next_state = self.next_state(state, outcome, *counts)
                if next_state is not icepool.Reroll:
                    final_dist = self._eval_internal_forward(
                        order, next_extra_outcomes, next_generators,
                        next_state)
                    for final_state, weight in final_dist.items():
                        result[final_state] += weight * prod_weight

        self._cache[cache_key] = result
        return result

    @staticmethod
    def _initialize_generators(
        generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]'
    ) -> 'tuple[icepool.InitialMultisetGenerator, ...]':
        return tuple(generator._generate_initial() for generator in generators)

    @staticmethod
    def _pop_generators(
        order: Order, extra_outcomes: 'Alignment[T_contra]',
        generators: 'tuple[icepool.MultisetGenerator[T_contra, Any], ...]'
    ) -> 'tuple[T_contra, Alignment[T_contra], tuple[icepool.NextMultisetGenerator, ...]]':
        """Pops a single outcome from the generators.

        Args:
            order: The order in which to pop. Descending order pops from the top
                and ascending from the bottom.
            extra_outcomes: Any extra outcomes to use.
            generators: The generators to pop from.

        Returns:
            * The popped outcome.
            * The remaining extra outcomes.
            * A tuple of iterators over the resulting generators, counts, and weights.
        """
        extra_outcomes_and_generators = (extra_outcomes, ) + generators
        if order < 0:
            outcome = max(generator.max_outcome()
                          for generator in extra_outcomes_and_generators
                          if generator.outcomes())

            next_extra_outcomes, _, _ = next(
                extra_outcomes._generate_max(outcome))

            return outcome, next_extra_outcomes, tuple(
                generator._generate_max(outcome) for generator in generators)
        else:
            outcome = min(generator.min_outcome()
                          for generator in extra_outcomes_and_generators
                          if generator.outcomes())

            next_extra_outcomes, _, _ = next(
                extra_outcomes._generate_min(outcome))

            return outcome, next_extra_outcomes, tuple(
                generator._generate_min(outcome) for generator in generators)

    def sample(
        self, *generators:
        'icepool.MultisetGenerator[T_contra, Any] | Mapping[T_contra, int] | Sequence[T_contra]'
    ):
        """EXPERIMENTAL: Samples one result from the generator(s) and evaluates the result."""
        # Convert non-`Pool` arguments to `Pool`.
        converted_generators = tuple(
            generator if isinstance(generator, icepool.MultisetGenerator
                                    ) else icepool.Pool(generator)
            for generator in generators)

        result = self.evaluate(*itertools.chain.from_iterable(
            generator.sample() for generator in converted_generators))

        if not result.is_empty():
            return result.outcomes()[0]
        else:
            return result

    def __bool__(self) -> bool:
        raise TypeError('MultisetEvaluator does not have a truth value.')

    def __str__(self) -> str:
        return type(self).__name__
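The `next_state`/`final_outcome` call sequence can be imitated with a toy driver that runs one fixed multiset through a minimal evaluator. This is only a sketch of the protocol; the real `evaluate()` runs it over entire distributions and caches intermediate states:

```python
class SumEvaluator:
    """Sums outcome * count: a minimal next_state implementation."""

    def next_state(self, state, outcome, count):
        if state is None:  # base case: first outcome seen
            state = 0
        return state + outcome * count

    def final_outcome(self, state):
        return state

def run_once(evaluator, counts_by_outcome):
    """Feed outcomes to next_state in ascending order, then finalize."""
    state = None
    for outcome in sorted(counts_by_outcome):
        state = evaluator.next_state(state, outcome,
                                     counts_by_outcome[outcome])
    return evaluator.final_outcome(state)

# Three dice that rolled 2, 3, 3 have counts {2: 1, 3: 2}.
run_once(SumEvaluator(), {2: 1, 3: 2})  # 8
```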
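The `state is None` base case of `next_state` can be exercised with a minimal "largest matching set" transition, which tracks the highest count seen for any single outcome (a sketch, not library code):

```python
class LargestCountEvaluator:
    """State is the largest count of any single outcome seen so far."""

    def next_state(self, state, outcome, count):
        if state is None:  # first outcome: its count is the largest so far
            return count
        return max(state, count)

# Feed in (outcome, count) pairs as the evaluation machinery would.
evaluator = LargestCountEvaluator()
state = None
for outcome, count in [(1, 0), (2, 2), (3, 1)]:
    state = evaluator.next_state(state, outcome, count)
state  # 2
```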
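A typical `final_outcome` override makes the empty case (`final_state=None`) explicit instead of letting `None` reach the `Die` constructor (a sketch, not library code):

```python
class CountEvaluator:
    """Counts all elements; final_outcome makes the empty case explicit."""

    def next_state(self, state, outcome, count):
        return (state or 0) + count

    def final_outcome(self, state):
        if state is None:  # no outcomes were ever processed
            return 0
        return state

evaluator = CountEvaluator()
evaluator.final_outcome(None)     # 0, instead of an invalid None outcome
evaluator.next_state(None, 5, 3)  # 3
```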
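An evaluator whose correctness depends on seeing high outcomes first would override `order()` to return descending. A sketch using the documented integer value (-1 = `Order.Descending`); the driver loop here stands in for the real machinery:

```python
class HighestOutcomeEvaluator:
    """Records the first outcome with positive count; correct only when
    outcomes arrive high-to-low, hence order() returns descending."""

    def next_state(self, state, outcome, count):
        if state is None and count > 0:
            return outcome
        return state

    def order(self):
        return -1  # Order.Descending

evaluator = HighestOutcomeEvaluator()
state = None
for outcome, count in [(6, 0), (5, 2), (4, 1)]:  # descending order
    state = evaluator.next_state(state, outcome, count)
state  # 5: the highest outcome that was actually rolled
```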
```python
def prefix_generators(self) -> 'tuple[icepool.MultisetGenerator, ...]':
    """An optional sequence of extra generators whose counts will be prepended to *counts."""
    return ()
```
An optional sequence of extra generators whose counts will be prepended to `*counts`.
```python
def validate_arity(self, arity: int) -> None:
    """An optional method to verify the total input arity.

    This is called after any implicit conversion to generators, but does
    not include any `extra_generators()`.

    Overriding `next_state` with a fixed number of counts will make this
    check redundant.

    Raises:
        `ValueError` if the total input arity is not valid.
    """
```
An optional method to verify the total input arity.

This is called after any implicit conversion to generators, but does not include any `extra_generators()`.

Overriding `next_state` with a fixed number of counts will make this check redundant.

Raises:
`ValueError` if the total input arity is not valid.
```python
def evaluate(
    self, *args:
    'MultisetExpression[T_contra] | Mapping[T_contra, int] | Sequence[T_contra]'
) -> 'icepool.Die[U_co] | MultisetEvaluator[T_contra, U_co]':
    """Evaluates generator(s).

    You can call the `MultisetEvaluator` object directly for the same effect,
    e.g. `sum_evaluator(generator)` is an alias for `sum_evaluator.evaluate(generator)`.

    Most evaluators will expect a fixed number of input multisets.
    The union of the outcomes of the generator(s) must be totally orderable.

    Args:
        *args: Each may be one of the following:
            * A `MultisetExpression`.
            * A mappable mapping outcomes to the number of those outcomes.
            * A sequence of outcomes.

    Returns:
        A `Die` representing the distribution of the final outcome if no
        arg contains a free variable. Otherwise, returns a new evaluator.
    """
    from icepool.generator.alignment import Alignment

    # Convert arguments to expressions.
    expressions = tuple(
        icepool.implicit_convert_to_expression(arg) for arg in args)

    if any(expression._free_arity() > 0 for expression in expressions):
        from icepool.evaluator.expression import ExpressionEvaluator
        return ExpressionEvaluator(*expressions, evaluator=self)

    if not all(
            isinstance(expression, icepool.MultisetGenerator)
            for expression in expressions):
        from icepool.evaluator.expression import ExpressionEvaluator
        return ExpressionEvaluator(*expressions, evaluator=self).evaluate()

    generators = cast(tuple[icepool.MultisetGenerator, ...], expressions)

    self.validate_arity(
        sum(generator.output_arity() for generator in generators))

    generators = self.prefix_generators() + generators

    if not all(generator._is_resolvable() for generator in generators):
        return icepool.Die([])

    algorithm, order = self._select_algorithm(*generators)

    outcomes = icepool.sorted_union(*(generator.outcomes()
                                      for generator in generators))
    extra_outcomes = Alignment(self.extra_outcomes(outcomes))

    dist: MutableMapping[Any, int] = defaultdict(int)
    iterators = MultisetEvaluator._initialize_generators(generators)
    for p in itertools.product(*iterators):
        sub_generators, sub_weights = zip(*p)
        prod_weight = math.prod(sub_weights)
        sub_result = algorithm(order, extra_outcomes, sub_generators)
        for sub_state, sub_weight in sub_result.items():
            dist[sub_state] += sub_weight * prod_weight

    final_outcomes = []
    final_weights = []
    for state, weight in dist.items():
        outcome = self.final_outcome(state)
        if outcome is None:
            raise TypeError(
                "None is not a valid final outcome.\n"
                "This may have been a result of not supplying any generator with an outcome."
            )
        if outcome is not icepool.Reroll:
            final_outcomes.append(outcome)
            final_weights.append(weight)

    return icepool.Die(final_outcomes, final_weights)
```
Evaluates generator(s).

You can call the `MultisetEvaluator` object directly for the same effect, e.g. `sum_evaluator(generator)` is an alias for `sum_evaluator.evaluate(generator)`.

Most evaluators will expect a fixed number of input multisets. The union of the outcomes of the generator(s) must be totally orderable.

Arguments:
- *args: Each may be one of the following:
  - A `MultisetExpression`.
  - A mappable mapping outcomes to the number of those outcomes.
  - A sequence of outcomes.

Returns:
A `Die` representing the distribution of the final outcome if no arg contains a free variable. Otherwise, returns a new evaluator.
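Conceptually, an evaluation folds a `next_state`-style transition over `(outcome, count)` pairs in a definite order, then post-processes the final state. A toy plain-Python sketch of that state machine for a "sum" evaluator (the real implementation works over weighted distributions of counts, not a single concrete multiset, so treat this only as a mental model):

```python
from collections import Counter

def evaluate_sum(multiset):
    """Toy model of a MultisetEvaluator: fold a transition over
    (outcome, count) pairs seen in ascending order."""
    state = None
    for outcome, count in sorted(Counter(multiset).items()):
        # next_state(state, outcome, count) for a "sum" evaluator:
        state = (state or 0) + outcome * count
    return state  # final_outcome() would post-process this

assert evaluate_sum([3, 1, 3]) == 7
```

The same shape (initial state, per-outcome transition, final post-processing) is what a `MultisetEvaluator` subclass supplies via `next_state` and `final_outcome`.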
```python
def sample(
    self, *generators:
    'icepool.MultisetGenerator[T_contra, Any] | Mapping[T_contra, int] | Sequence[T_contra]'
):
    """EXPERIMENTAL: Samples one result from the generator(s) and evaluates the result."""
    # Convert non-`Pool` arguments to `Pool`.
    converted_generators = tuple(
        generator if isinstance(generator, icepool.MultisetGenerator
                                ) else icepool.Pool(generator)
        for generator in generators)

    result = self.evaluate(*itertools.chain.from_iterable(
        generator.sample() for generator in converted_generators))

    if not result.is_empty():
        return result.outcomes()[0]
    else:
        return result
```
EXPERIMENTAL: Samples one result from the generator(s) and evaluates the result.
```python
class Order(enum.IntEnum):
    """Can be used to define what order outcomes are seen in by MultisetEvaluators."""
    Ascending = 1
    Descending = -1
    Any = 0

    def merge(*orders: 'Order') -> 'Order':
        """Merges the given Orders.

        Returns:
            `Any` if all arguments are `Any`.
            `Ascending` if there is at least one `Ascending` in the arguments.
            `Descending` if there is at least one `Descending` in the arguments.

        Raises:
            `ValueError` if both `Ascending` and `Descending` are in the
            arguments.
        """
        result = Order.Any
        for order in orders:
            if (result > 0 and order < 0) or (result < 0 and order > 0):
                raise ValueError(
                    f'Conflicting orders {orders}.\n' +
                    'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
                )
            if result == Order.Any:
                result = order
        return result
```
Can be used to define what order outcomes are seen in by MultisetEvaluators.
```python
def merge(*orders: 'Order') -> 'Order':
    """Merges the given Orders.

    Returns:
        `Any` if all arguments are `Any`.
        `Ascending` if there is at least one `Ascending` in the arguments.
        `Descending` if there is at least one `Descending` in the arguments.

    Raises:
        `ValueError` if both `Ascending` and `Descending` are in the
        arguments.
    """
    result = Order.Any
    for order in orders:
        if (result > 0 and order < 0) or (result < 0 and order > 0):
            raise ValueError(
                f'Conflicting orders {orders}.\n' +
                'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
            )
        if result == Order.Any:
            result = order
    return result
```
Merges the given Orders.

Returns:
`Any` if all arguments are `Any`.
`Ascending` if there is at least one `Ascending` in the arguments.
`Descending` if there is at least one `Descending` in the arguments.

Raises:
`ValueError` if both `Ascending` and `Descending` are in the arguments.
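The merge semantics can be exercised standalone; a minimal sketch reproducing the documented behavior with a plain `IntEnum` and a module-level `merge` (names mirror the source, but this is an illustration, not icepool's actual class):

```python
import enum

class Order(enum.IntEnum):
    # Mirrors the documented values: sign encodes direction, 0 means "no preference".
    Ascending = 1
    Descending = -1
    Any = 0

def merge(*orders: Order) -> Order:
    """Any defers to either direction; Ascending and Descending conflict."""
    result = Order.Any
    for order in orders:
        if (result > 0 and order < 0) or (result < 0 and order > 0):
            raise ValueError(f'Conflicting orders {orders}.')
        if result == Order.Any:
            result = order
    return result

assert merge(Order.Any, Order.Descending) is Order.Descending
assert merge() is Order.Any
```

Encoding the directions as `+1`/`-1` is what lets the conflict check reduce to a sign comparison.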
```python
class Deck(Population[T_co]):
    """Sampling without replacement (within a single evaluation).

    Quantities represent duplicates.
    """

    _data: Counts[T_co]
    _deal: int

    @property
    def _new_type(self) -> type:
        return Deck

    def __new__(cls,
                outcomes: Sequence | Mapping[Any, int],
                times: Sequence[int] | int = 1) -> 'Deck[T_co]':
        """Constructor for a `Deck`.

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Args:
            outcomes: The cards of the `Deck`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                  quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Each outcome may be one of the following:
                * An outcome, which must be hashable and totally orderable.
                * A `Deck`, which will be flattened into the result. If a
                  `times` is assigned to the `Deck`, the entire `Deck` will
                  be duplicated that many times.
            times: Multiplies the number of times each element of `outcomes`
                will be put into the `Deck`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
        """

        if icepool.population.again.contains_again(outcomes):
            raise ValueError('Again cannot be used with Decks.')

        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Deck):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck(
            outcomes, times)

        return Deck._new_raw(counts)

    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Deck[T_co]':
        """Creates a new `Deck` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        return len(self._data)

    size = icepool.Population.denominator

    @cached_property
    def _popped_min(self) -> tuple['Deck[T_co]', int]:
        return self._new_raw(self._data.remove_min()), self.quantities()[0]

    def _pop_min(self) -> tuple['Deck[T_co]', int]:
        """A `Deck` with the min outcome removed."""
        return self._popped_min

    @cached_property
    def _popped_max(self) -> tuple['Deck[T_co]', int]:
        return self._new_raw(self._data.remove_max()), self.quantities()[-1]

    def _pop_max(self) -> tuple['Deck[T_co]', int]:
        """A `Deck` with the max outcome removed."""
        return self._popped_max

    @overload
    def deal(self, hand_size: int, /) -> 'icepool.Deal[T_co]':
        ...

    @overload
    def deal(
        self, hand_size: int, hand_size_2: int, /, *more_hand_sizes:
        int) -> 'icepool.MultiDeal[T_co, tuple[int, ...]]':
        ...

    @overload
    def deal(
        self, *hand_sizes: int
    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]':
        ...

    def deal(
        self, *hand_sizes: int
    ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]':
        """Creates a `Deal` object from this deck.

        See `Deal()` for details.
        """
        if len(hand_sizes) == 1:
            return icepool.Deal(self, *hand_sizes)
        else:
            return icepool.MultiDeal(self, *hand_sizes)

    # Binary operators.

    def additive_union(
            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        """Both decks merged together."""
        return functools.reduce(operator.add, args, initial=self)

    def __add__(self,
                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        data = Counter(self._data)
        for outcome, count in Counter(other).items():
            data[outcome] += count
        return Deck(+data)

    __radd__ = __add__

    def difference(self, *args:
                   Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        """This deck with the other cards removed (but not below zero of each card)."""
        return functools.reduce(operator.sub, args, initial=self)

    def __sub__(self,
                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        data = Counter(self._data)
        for outcome, count in Counter(other).items():
            data[outcome] -= count
        return Deck(+data)

    def __rsub__(self,
                 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        data = Counter(other)
        for outcome, count in self.items():
            data[outcome] -= count
        return Deck(+data)

    def intersection(
            self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        """The cards that both decks have."""
        return functools.reduce(operator.and_, args, initial=self)

    def __and__(self,
                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        data: Counter[T_co] = Counter()
        for outcome, count in Counter(other).items():
            data[outcome] = min(self.get(outcome, 0), count)
        return Deck(+data)

    __rand__ = __and__

    def union(self, *args:
              Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        """As many of each card as the deck that has more of them."""
        return functools.reduce(operator.or_, args, initial=self)

    def __or__(self,
               other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        data = Counter(self._data)
        for outcome, count in Counter(other).items():
            data[outcome] = max(data[outcome], count)
        return Deck(+data)

    __ror__ = __or__

    def symmetric_difference(
            self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        """The absolute difference between the two decks' quantities of each card."""
        return self ^ other

    def __xor__(self,
                other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
        data = Counter(self._data)
        for outcome, count in Counter(other).items():
            data[outcome] = abs(data[outcome] - count)
        return Deck(+data)

    def __mul__(self, other: int) -> 'Deck[T_co]':
        if not isinstance(other, int):
            return NotImplemented
        return self.multiply_quantities(other)

    __rmul__ = __mul__

    def __floordiv__(self, other: int) -> 'Deck[T_co]':
        if not isinstance(other, int):
            return NotImplemented
        return self.divide_quantities(other)

    def __mod__(self, other: int) -> 'Deck[T_co]':
        if not isinstance(other, int):
            return NotImplemented
        return self.modulo_quantities(other)

    def map(
            self,
            repl:
        'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
            /,
            star: bool | None = None) -> 'Deck[U]':
        """Maps outcomes of this `Deck` to other outcomes.

        Args:
            repl: One of the following:
                * A callable returning a new outcome for each old outcome.
                * A map from old outcomes to new outcomes.
                  Unmapped old outcomes stay the same.
                The new outcomes may be `Deck`s, in which case one card is
                replaced with several. This is not recommended.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `repl`.
                If not provided, this will be guessed based on the function
                signature.
        """
        # Convert to a single-argument function.
        if callable(repl):
            if star is None:
                star = guess_star(repl)
            if star:

                def transition_function(outcome):
                    return repl(*outcome)
            else:

                def transition_function(outcome):
                    return repl(outcome)
        else:
            # repl is a mapping.
            def transition_function(outcome):
                if outcome in repl:
                    return repl[outcome]
                else:
                    return outcome

        return Deck(
            [transition_function(outcome) for outcome in self.outcomes()],
            times=self.quantities())

    @cached_property
    def _sequence_cache(
            self) -> 'MutableSequence[icepool.Die[tuple[T_co, ...]]]':
        return [icepool.Die([()])]

    def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]':
        """Possible sequences produced by dealing from this deck a number of times.

        This is extremely expensive computationally. If you don't care about
        order, use `deal()` instead.
        """
        if deals < 0:
            raise ValueError('The number of cards dealt cannot be negative.')
        for i in range(len(self._sequence_cache), deals + 1):

            def transition(curr):
                remaining = icepool.Die(self - curr)
                return icepool.map(lambda curr, next: curr + (next, ), curr,
                                   remaining)

            result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[
                i - 1].map(transition)
            self._sequence_cache.append(result)
        return result

    @cached_property
    def _hash_key(self) -> tuple:
        return Deck, tuple(self.items())

    def __eq__(self, other) -> bool:
        if not isinstance(other, Deck):
            return False
        return self._hash_key == other._hash_key

    @cached_property
    def _hash(self) -> int:
        return hash(self._hash_key)

    def __hash__(self) -> int:
        return self._hash

    def __repr__(self) -> str:
        inner = ', '.join(f'{repr(outcome)}: {quantity}'
                          for outcome, quantity in self.items())
        return type(self).__qualname__ + '({' + inner + '})'
```
Sampling without replacement (within a single evaluation).
Quantities represent duplicates.
```python
def __new__(cls,
            outcomes: Sequence | Mapping[Any, int],
            times: Sequence[int] | int = 1) -> 'Deck[T_co]':
    """Constructor for a `Deck`.

    All quantities must be non-negative. Outcomes with zero quantity will be
    omitted.

    Args:
        outcomes: The cards of the `Deck`. This can be one of the following:
            * A `Sequence` of outcomes. Duplicates will contribute
              quantity for each appearance.
            * A `Mapping` from outcomes to quantities.

            Each outcome may be one of the following:
            * An outcome, which must be hashable and totally orderable.
            * A `Deck`, which will be flattened into the result. If a
              `times` is assigned to the `Deck`, the entire `Deck` will
              be duplicated that many times.
        times: Multiplies the number of times each element of `outcomes`
            will be put into the `Deck`.
            `times` can either be a sequence of the same length as
            `outcomes` or a single `int` to apply to all elements of
            `outcomes`.
    """

    if icepool.population.again.contains_again(outcomes):
        raise ValueError('Again cannot be used with Decks.')

    outcomes, times = icepool.creation_args.itemize(outcomes, times)

    if len(outcomes) == 1 and times[0] == 1 and isinstance(
            outcomes[0], Deck):
        return outcomes[0]

    counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck(
        outcomes, times)

    return Deck._new_raw(counts)
```
Constructor for a `Deck`.

All quantities must be non-negative. Outcomes with zero quantity will be omitted.

Arguments:
- outcomes: The cards of the `Deck`. This can be one of the following:
  - A `Sequence` of outcomes. Duplicates will contribute quantity for each appearance.
  - A `Mapping` from outcomes to quantities.

  Each outcome may be one of the following:
  - An outcome, which must be hashable and totally orderable.
  - A `Deck`, which will be flattened into the result. If a `times` is assigned to the `Deck`, the entire `Deck` will be duplicated that many times.
- times: Multiplies the number of times each element of `outcomes` will be put into the `Deck`. `times` can either be a sequence of the same length as `outcomes` or a single `int` to apply to all elements of `outcomes`.
```python
def denominator(self) -> int:
    """The sum of all quantities (e.g. weights or duplicates).

    For the number of unique outcomes, use `len()`.
    """
    return self._denominator
```
The sum of all quantities (e.g. weights or duplicates).
For the number of unique outcomes, use `len()`.
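The denominator/`len()` distinction can be illustrated with a plain `Counter` standing in for a population's outcome-to-quantity mapping (a sketch, not icepool itself):

```python
from collections import Counter

# A stand-in for a population's outcome -> quantity mapping,
# e.g. a deck with three 1s, three 2s, and two 3s.
quantities = Counter({1: 3, 2: 3, 3: 2})

denominator = sum(quantities.values())  # total quantity across all outcomes
unique = len(quantities)                # number of unique outcomes

assert denominator == 8
assert unique == 3
```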
```python
def deal(
    self, *hand_sizes: int
) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]':
    """Creates a `Deal` object from this deck.

    See `Deal()` for details.
    """
    if len(hand_sizes) == 1:
        return icepool.Deal(self, *hand_sizes)
    else:
        return icepool.MultiDeal(self, *hand_sizes)
```
```python
def additive_union(
        self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """Both decks merged together."""
    return functools.reduce(operator.add, args, initial=self)
```
Both decks merged together.
```python
def difference(self, *args:
               Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """This deck with the other cards removed (but not below zero of each card)."""
    return functools.reduce(operator.sub, args, initial=self)
```
This deck with the other cards removed (but not below zero of each card).
```python
def intersection(
        self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """The cards that both decks have."""
    return functools.reduce(operator.and_, args, initial=self)
```
The cards that both decks have.
```python
def union(self, *args:
          Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """As many of each card as the deck that has more of them."""
    return functools.reduce(operator.or_, args, initial=self)
```
As many of each card as the deck that has more of them.
```python
def symmetric_difference(
        self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """The absolute difference between the two decks' quantities of each card."""
    return self ^ other
```
The absolute difference between the two decks' quantities of each card.
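These binary operations follow the usual multiset semantics. A sketch with plain `Counter`s (illustrative only; icepool's `Deck` wraps equivalent logic in immutable objects):

```python
from collections import Counter

a = Counter({'ace': 2, 'king': 1})
b = Counter({'ace': 1, 'queen': 1})

additive_union = a + b   # quantities add
difference = a - b       # subtract, clamped at zero per card
intersection = a & b     # min of each card's quantity
union = a | b            # max of each card's quantity
symmetric_difference = Counter(
    {k: abs(a[k] - b[k]) for k in a.keys() | b.keys()})

assert additive_union == Counter({'ace': 3, 'king': 1, 'queen': 1})
assert intersection == Counter({'ace': 1})
assert symmetric_difference == Counter({'ace': 1, 'king': 1, 'queen': 1})
```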
```python
def map(
        self,
        repl:
    'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
        /,
        star: bool | None = None) -> 'Deck[U]':
    """Maps outcomes of this `Deck` to other outcomes.

    Args:
        repl: One of the following:
            * A callable returning a new outcome for each old outcome.
            * A map from old outcomes to new outcomes.
              Unmapped old outcomes stay the same.
            The new outcomes may be `Deck`s, in which case one card is
            replaced with several. This is not recommended.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `repl`.
            If not provided, this will be guessed based on the function
            signature.
    """
    # Convert to a single-argument function.
    if callable(repl):
        if star is None:
            star = guess_star(repl)
        if star:

            def transition_function(outcome):
                return repl(*outcome)
        else:

            def transition_function(outcome):
                return repl(outcome)
    else:
        # repl is a mapping.
        def transition_function(outcome):
            if outcome in repl:
                return repl[outcome]
            else:
                return outcome

    return Deck(
        [transition_function(outcome) for outcome in self.outcomes()],
        times=self.quantities())
```
Maps outcomes of this `Deck` to other outcomes.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A map from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be `Deck`s, in which case one card is replaced with several. This is not recommended.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `repl`. If not provided, this will be guessed based on the function signature.
```python
def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]':
    """Possible sequences produced by dealing from this deck a number of times.

    This is extremely expensive computationally. If you don't care about
    order, use `deal()` instead.
    """
    if deals < 0:
        raise ValueError('The number of cards dealt cannot be negative.')
    for i in range(len(self._sequence_cache), deals + 1):

        def transition(curr):
            remaining = icepool.Die(self - curr)
            return icepool.map(lambda curr, next: curr + (next, ), curr,
                               remaining)

        result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[
            i - 1].map(transition)
        self._sequence_cache.append(result)
    return result
```
Possible sequences produced by dealing from this deck a number of times.
This is extremely expensive computationally. If you don't care about order, use `deal()` instead.
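The cost comes from tracking order: each additional deal multiplies the number of distinct sequences. For small decks, the distribution over ordered deals can be brute-forced with `itertools` (a hypothetical helper, not icepool's dynamic-programming implementation):

```python
from collections import Counter
from itertools import permutations

def sequence_distribution(deck, deals):
    """Distribution over ordered deals by brute-force enumeration.

    deck: mapping outcome -> quantity. Returns a Counter mapping each
    possible dealt tuple to the number of ways it can occur.
    """
    cards = [o for o, q in deck.items() for _ in range(q)]
    return Counter(permutations(cards, deals))

dist = sequence_distribution({'a': 2, 'b': 1}, 2)
# 3 * 2 = 6 ordered deals total, split evenly across the three tuples.
assert dist == Counter({('a', 'a'): 2, ('a', 'b'): 2, ('b', 'a'): 2})
```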
```python
class Deal(KeepGenerator[T]):
    """Represents an unordered deal of a single hand from a `Deck`."""

    _deck: 'icepool.Deck[T]'
    _hand_size: int

    def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
        """Constructor.

        For algorithmic reasons, you must pre-commit to the number of cards to
        deal.

        It is permissible to deal zero cards from an empty deck, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dealing zero cards from a non-empty deck does not have
        this issue.

        Args:
            deck: The `Deck` to deal from.
            hand_size: How many cards to deal.
        """
        if hand_size < 0:
            raise ValueError('hand_size cannot be negative.')
        if hand_size > deck.size():
            raise ValueError(
                'The number of cards dealt cannot exceed the size of the deck.'
            )
        self._deck = deck
        self._hand_size = hand_size
        self._keep_tuple = (1, ) * hand_size

    @classmethod
    def _new_raw(cls, deck: 'icepool.Deck[T]', hand_size: int,
                 keep_tuple: tuple[int, ...]) -> 'Deal[T]':
        self = super(Deal, cls).__new__(cls)
        self._deck = deck
        self._hand_size = hand_size
        self._keep_tuple = keep_tuple
        return self

    def _set_keep_tuple(self, keep_tuple: tuple[int, ...]) -> 'Deal[T]':
        return Deal._new_raw(self._deck, self._hand_size, keep_tuple)

    def deck(self) -> 'icepool.Deck[T]':
        """The `Deck` the cards are dealt from."""
        return self._deck

    def hand_sizes(self) -> tuple[int, ...]:
        """The number of cards dealt to each hand as a tuple."""
        return (self._hand_size, )

    def total_cards_dealt(self) -> int:
        """The total number of cards dealt."""
        return self._hand_size

    def outcomes(self) -> CountsKeysView[T]:
        """The outcomes of the `Deck` in ascending order.

        These are also the `keys` of the `Deck` as a `Mapping`.
        Prefer to use the name `outcomes`.
        """
        return self.deck().outcomes()

    def output_arity(self) -> int:
        return 1

    def _is_resolvable(self) -> bool:
        return len(self.outcomes()) > 0

    def denominator(self) -> int:
        return icepool.math.comb(self.deck().size(), self._hand_size)

    def _generate_initial(self) -> InitialMultisetGenerator:
        yield self, 1

    def _generate_min(self, min_outcome) -> NextMultisetGenerator:
        if not self.outcomes() or min_outcome != self.min_outcome():
            yield self, (0, ), 1
            return

        popped_deck, deck_count = self.deck()._pop_min()

        min_count = max(0, deck_count + self._hand_size - self.deck().size())
        max_count = min(deck_count, self._hand_size)
        skip_weight = None
        for count in range(min_count, max_count + 1):
            popped_keep_tuple, result_count = pop_min_from_keep_tuple(
                self.keep_tuple(), count)
            popped_deal = Deal._new_raw(popped_deck, self._hand_size - count,
                                        popped_keep_tuple)
            weight = icepool.math.comb(deck_count, count)
            if not any(popped_keep_tuple):
                # Dump all dice in exchange for the denominator.
                skip_weight = (skip_weight
                               or 0) + weight * popped_deal.denominator()
                continue
            yield popped_deal, (result_count, ), weight

        if skip_weight is not None:
            popped_deal = Deal._new_raw(popped_deck, 0, ())
            yield popped_deal, (sum(self.keep_tuple()), ), skip_weight

    def _generate_max(self, max_outcome) -> NextMultisetGenerator:
        if not self.outcomes() or max_outcome != self.max_outcome():
            yield self, (0, ), 1
            return

        popped_deck, deck_count = self.deck()._pop_max()

        min_count = max(0, deck_count + self._hand_size - self.deck().size())
        max_count = min(deck_count, self._hand_size)
        skip_weight = None
        for count in range(min_count, max_count + 1):
            popped_keep_tuple, result_count = pop_max_from_keep_tuple(
                self.keep_tuple(), count)
            popped_deal = Deal._new_raw(popped_deck, self._hand_size - count,
                                        popped_keep_tuple)
            weight = icepool.math.comb(deck_count, count)
            if not any(popped_keep_tuple):
                # Dump all dice in exchange for the denominator.
                skip_weight = (skip_weight
                               or 0) + weight * popped_deal.denominator()
                continue
            yield popped_deal, (result_count, ), weight

        if skip_weight is not None:
            popped_deal = Deal._new_raw(popped_deck, 0, ())
            yield popped_deal, (sum(self.keep_tuple()), ), skip_weight

    def _preferred_pop_order(self) -> tuple[Order | None, PopOrderReason]:
        lo_skip, hi_skip = icepool.generator.pop_order.lo_hi_skip(
            self.keep_tuple())
        if lo_skip > hi_skip:
            return Order.Descending, PopOrderReason.KeepSkip
        if hi_skip > lo_skip:
            return Order.Ascending, PopOrderReason.KeepSkip

        return Order.Any, PopOrderReason.NoPreference

    @cached_property
    def _hash_key(self) -> Hashable:
        return Deal, self.deck(), self._hand_size, self._keep_tuple

    def __repr__(self) -> str:
        return type(
            self
        ).__qualname__ + f'({repr(self.deck())}, hand_size={self._hand_size})'

    def __str__(self) -> str:
        return type(
            self
        ).__qualname__ + f' of hand_size={self._hand_size} from deck:\n' + str(
            self.deck())
```
Represents an unordered deal of a single hand from a `Deck`.
```python
def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
    """Constructor.

    For algorithmic reasons, you must pre-commit to the number of cards to
    deal.

    It is permissible to deal zero cards from an empty deck, but not all
    evaluators will handle this case, especially if they depend on the
    outcome type. Dealing zero cards from a non-empty deck does not have
    this issue.

    Args:
        deck: The `Deck` to deal from.
        hand_size: How many cards to deal.
    """
    if hand_size < 0:
        raise ValueError('hand_size cannot be negative.')
    if hand_size > deck.size():
        raise ValueError(
            'The number of cards dealt cannot exceed the size of the deck.'
        )
    self._deck = deck
    self._hand_size = hand_size
    self._keep_tuple = (1, ) * hand_size
```
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The `Deck` to deal from.
- hand_size: How many cards to deal.
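Because a deal is unordered, its denominator is a binomial coefficient: the number of distinct ways to choose `hand_size` of the deck's card positions. A quick check with the standard library:

```python
import math

deck_size = 52
hand_size = 5

# Number of equally likely unordered 5-card hands from 52 card positions.
denominator = math.comb(deck_size, hand_size)

assert denominator == 2598960
```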
```python
def deck(self) -> 'icepool.Deck[T]':
    """The `Deck` the cards are dealt from."""
    return self._deck
```
The `Deck` the cards are dealt from.
```python
def hand_sizes(self) -> tuple[int, ...]:
    """The number of cards dealt to each hand as a tuple."""
    return (self._hand_size, )
```
The number of cards dealt to each hand as a tuple.
```python
def total_cards_dealt(self) -> int:
    """The total number of cards dealt."""
    return self._hand_size
```
The total number of cards dealt.
```python
class MultiDeal(MultisetGenerator[T, Qs]):
    """Represents an unordered deal of multiple hands from a `Deck`."""

    _deck: 'icepool.Deck[T]'
    _hand_sizes: Qs

    def __init__(self, deck: 'icepool.Deck[T]', *hand_sizes: int) -> None:
        """Constructor.

        For algorithmic reasons, you must pre-commit to the number of cards to
        deal for each hand.

        It is permissible to deal zero cards from an empty deck, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dealing zero cards from a non-empty deck does not have
        this issue.

        Args:
            deck: The `Deck` to deal from.
            *hand_sizes: How many cards to deal. If multiple `hand_sizes` are
                provided, `MultisetEvaluator.next_state` will receive one count
                per hand in order. Try to keep the number of hands to a minimum
                as this can be computationally intensive.
        """
        if any(hand < 0 for hand in hand_sizes):
            raise ValueError('hand_sizes cannot be negative.')
        self._deck = deck
        self._hand_sizes = cast(Qs, hand_sizes)
        if self.total_cards_dealt() > self.deck().size():
            raise ValueError(
                'The total number of cards dealt cannot exceed the size of the deck.'
            )

    @classmethod
    def _new_raw(cls, deck: 'icepool.Deck[T]',
                 hand_sizes: Qs) -> 'MultiDeal[T, Qs]':
        self = super(MultiDeal, cls).__new__(cls)
        self._deck = deck
        self._hand_sizes = hand_sizes
        return self

    def deck(self) -> 'icepool.Deck[T]':
        """The `Deck` the cards are dealt from."""
        return self._deck

    def hand_sizes(self) -> Qs:
        """The number of cards dealt to each hand as a tuple."""
        return self._hand_sizes

    def total_cards_dealt(self) -> int:
        """The total number of cards dealt."""
        return sum(self.hand_sizes())

    def outcomes(self) -> CountsKeysView[T]:
        """The outcomes of the `Deck` in ascending order.

        These are also the `keys` of the `Deck` as a `Mapping`.
        Prefer to use the name `outcomes`.
        """
        return self.deck().outcomes()

    def output_arity(self) -> int:
        return len(self._hand_sizes)

    def _is_resolvable(self) -> bool:
        return len(self.outcomes()) > 0

    @cached_property
    def _denominator(self) -> int:
        d_total = icepool.math.comb(self.deck().size(),
                                    self.total_cards_dealt())
        d_split = math.prod(
            icepool.math.comb(self.total_cards_dealt(), h)
            for h in self.hand_sizes()[1:])
        return d_total * d_split

    def denominator(self) -> int:
        return self._denominator

    def _generate_initial(self) -> InitialMultisetGenerator:
        yield self, 1

    def _generate_common(self, popped_deck: 'icepool.Deck[T]',
                         deck_count: int) -> NextMultisetGenerator:
        """Common implementation for _generate_min and _generate_max."""
        min_count = max(
            0, deck_count + self.total_cards_dealt() - self.deck().size())
        max_count = min(deck_count, self.total_cards_dealt())
        for count_total in range(min_count, max_count + 1):
            weight_total = icepool.math.comb(deck_count, count_total)
            # The "deck" is the hand sizes.
            for counts, weight_split in iter_hypergeom(self.hand_sizes(),
                                                       count_total):
                popped_deal = MultiDeal._new_raw(
                    popped_deck,
                    tuple(h - c for h, c in zip(self.hand_sizes(), counts)))
                weight = weight_total * weight_split
                yield popped_deal, counts, weight

    def _generate_min(self, min_outcome) -> NextMultisetGenerator:
        if not self.outcomes() or min_outcome != self.min_outcome():
            yield self, (0, ), 1
            return

        popped_deck, deck_count = self.deck()._pop_min()

        yield from self._generate_common(popped_deck, deck_count)

    def _generate_max(self, max_outcome) -> NextMultisetGenerator:
        if not self.outcomes() or max_outcome != self.max_outcome():
            yield self, (0, ), 1
            return

        popped_deck, deck_count = self.deck()._pop_max()

        yield from self._generate_common(popped_deck, deck_count)

    def _preferred_pop_order(self) -> tuple[Order | None, PopOrderReason]:
        return Order.Any, PopOrderReason.NoPreference

    @cached_property
    def _hash_key(self) -> Hashable:
        return MultiDeal, self.deck(), self.hand_sizes()

    def __repr__(self) -> str:
        return type(
            self
        ).__qualname__ + f'({repr(self.deck())}, hand_sizes={self.hand_sizes()})'

    def __str__(self) -> str:
        return type(
            self
        ).__qualname__ + f' of hand_sizes={self.hand_sizes()} from deck:\n' + str(
            self.deck())
```
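The `min_count`/`max_count` bounds in `_generate_common` are the support of a hypergeometric distribution: given `deck_count` copies of the current outcome in a deck of `deck().size()` cards, they bracket how many copies can appear among the dealt cards. A minimal standalone sketch of the same bound computation (stdlib only, no icepool; the function name `count_range` is ours for illustration), checked against Vandermonde's identity:

```python
from math import comb

def count_range(deck_count: int, total_dealt: int, deck_size: int) -> tuple[int, int]:
    # Support of the hypergeometric distribution: how many of the
    # `deck_count` copies of one outcome can land among the dealt cards.
    lo = max(0, deck_count + total_dealt - deck_size)
    hi = min(deck_count, total_dealt)
    return lo, hi

# 52-card deck, 4 aces, 13-card hand: between 0 and 4 aces can appear.
lo, hi = count_range(4, 13, 52)
assert (lo, hi) == (0, 4)

# Vandermonde's identity: summing the per-count weights over the
# support recovers the total number of ways to deal the hand.
assert sum(comb(4, k) * comb(48, 13 - k) for k in range(lo, hi + 1)) == comb(52, 13)
```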
Represents an unordered deal of multiple hands from a `Deck`.
```python
def __init__(self, deck: 'icepool.Deck[T]', *hand_sizes: int) -> None:
    """Constructor.

    For algorithmic reasons, you must pre-commit to the number of cards to
    deal for each hand.

    It is permissible to deal zero cards from an empty deck, but not all
    evaluators will handle this case, especially if they depend on the
    outcome type. Dealing zero cards from a non-empty deck does not have
    this issue.

    Args:
        deck: The `Deck` to deal from.
        *hand_sizes: How many cards to deal. If multiple `hand_sizes` are
            provided, `MultisetEvaluator.next_state` will receive one count
            per hand in order. Try to keep the number of hands to a minimum
            as this can be computationally intensive.
    """
    if any(hand < 0 for hand in hand_sizes):
        raise ValueError('hand_sizes cannot be negative.')
    self._deck = deck
    self._hand_sizes = cast(Qs, hand_sizes)
    if self.total_cards_dealt() > self.deck().size():
        raise ValueError(
            'The total number of cards dealt cannot exceed the size of the deck.'
        )
```
Constructor.

For algorithmic reasons, you must pre-commit to the number of cards to deal for each hand.

It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.

Arguments:
- deck: The `Deck` to deal from.
- *hand_sizes: How many cards to deal. If multiple `hand_sizes` are provided, `MultisetEvaluator.next_state` will receive one count per hand in order. Try to keep the number of hands to a minimum as this can be computationally intensive.
```python
def deck(self) -> 'icepool.Deck[T]':
    """The `Deck` the cards are dealt from."""
    return self._deck
```
The `Deck` the cards are dealt from.
```python
def hand_sizes(self) -> Qs:
    """The number of cards dealt to each hand as a tuple."""
    return self._hand_sizes
```
The number of cards dealt to each hand as a tuple.
```python
def total_cards_dealt(self) -> int:
    """The total number of cards dealt."""
    return sum(self.hand_sizes())
```
The total number of cards dealt.
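The denominator of a `MultiDeal` (see `_denominator` in the source above) is the number of ways to choose `total_cards_dealt()` cards from the deck, times the number of ways to split them among the hands. A stdlib-only sketch of that formula (the helper name `multideal_denominator` is ours for illustration), checked for the two-hand case against dealing the hands one after another:

```python
from math import comb, prod

def multideal_denominator(deck_size: int, hand_sizes: tuple[int, ...]) -> int:
    # Mirrors the formula in the source: choose all dealt cards at once,
    # then split them among the hands after the first.
    total = sum(hand_sizes)
    d_total = comb(deck_size, total)
    d_split = prod(comb(total, h) for h in hand_sizes[1:])
    return d_total * d_split

# Two 5-card hands from a 52-card deck: the result matches dealing the
# hands sequentially (choose 5 of 52, then 5 of the remaining 47).
assert multideal_denominator(52, (5, 5)) == comb(52, 5) * comb(47, 5)
```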
````python
def multiset_function(
        function: Callable[..., NestedTupleOrEvaluator[T_contra, U_co]],
        /) -> MultisetEvaluator[T_contra, NestedTupleOrOutcome[U_co]]:
    """EXPERIMENTAL: A decorator that turns a function into a `MultisetEvaluator`.

    The provided function should take in arguments representing multisets,
    do a limited set of operations on them (see `MultisetExpression`), and
    finish off with an evaluation. You can return tuples to perform a joint
    evaluation.

    For example, to create an evaluator which computes the elements each of two
    multisets has that the other doesn't:

    ```python
    @multiset_function
    def two_way_difference(a, b):
        return (a - b).expand(), (b - a).expand()
    ```

    Any globals inside `function` are effectively bound at the time
    `multiset_function` is invoked. Note that this is different than how
    ordinary Python closures behave. For example,

    ```python
    target = [1, 2, 3]

    @multiset_function
    def count_intersection(a):
        return (a & target).count()

    print(count_intersection(d6.pool(3)))

    target = [1]
    print(count_intersection(d6.pool(3)))
    ```

    would produce the same thing both times. Likewise, the function should not
    have any side effects.

    Be careful when using control structures: you cannot branch on the value of
    a multiset expression or evaluation, so e.g.

    ```python
    @multiset_function
    def bad(a, b):
        if a == b:
            ...
    ```

    is not allowed.

    `multiset_function` has considerable overhead, being effectively a
    mini-language within Python. For better performance, you can try
    implementing your own subclass of `MultisetEvaluator` directly.

    Args:
        function: This should take in a fixed number of multiset variables and
            output an evaluator or a nested tuple of evaluators. Tuples will
            result in a `JointEvaluator`.
    """
    parameters = inspect.signature(function, follow_wrapped=False).parameters
    for parameter in parameters.values():
        if parameter.kind not in [
                inspect.Parameter.POSITIONAL_ONLY,
                inspect.Parameter.POSITIONAL_OR_KEYWORD,
        ] or parameter.default != inspect.Parameter.empty:
            raise ValueError(
                'Callable must take only a fixed number of positional arguments.'
            )
    tuple_or_evaluator = function(*(MV(index=i)
                                    for i in range(len(parameters))))
    evaluator = replace_tuples_with_joint_evaluator(tuple_or_evaluator)
    return update_wrapper(evaluator, function)
````
EXPERIMENTAL: A decorator that turns a function into a `MultisetEvaluator`.

The provided function should take in arguments representing multisets, do a limited set of operations on them (see `MultisetExpression`), and finish off with an evaluation. You can return tuples to perform a joint evaluation.

For example, to create an evaluator which computes the elements each of two multisets has that the other doesn't:

```python
@multiset_function
def two_way_difference(a, b):
    return (a - b).expand(), (b - a).expand()
```

Any globals inside `function` are effectively bound at the time `multiset_function` is invoked. Note that this is different than how ordinary Python closures behave. For example,

```python
target = [1, 2, 3]

@multiset_function
def count_intersection(a):
    return (a & target).count()

print(count_intersection(d6.pool(3)))

target = [1]
print(count_intersection(d6.pool(3)))
```

would produce the same thing both times. Likewise, the function should not have any side effects.

Be careful when using control structures: you cannot branch on the value of a multiset expression or evaluation, so e.g.

```python
@multiset_function
def bad(a, b):
    if a == b:
        ...
```

is not allowed.

`multiset_function` has considerable overhead, being effectively a mini-language within Python. For better performance, you can try implementing your own subclass of `MultisetEvaluator` directly.

Arguments:
- function: This should take in a fixed number of multiset variables and output an evaluator or a nested tuple of evaluators. Tuples will result in a `JointEvaluator`.
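The early-binding behavior described above can be contrasted with ordinary Python closures using plain functions (no icepool required). The default-argument trick below is only an analogy for how `multiset_function` captures globals at decoration time, not how it is implemented:

```python
target = [1, 2, 3]

def late_binding(x):
    # An ordinary closure looks up `target` at call time.
    return x in target

def early_binding(x, _target=tuple(target)):
    # The default argument is evaluated once, at definition time --
    # analogous to how `multiset_function` binds globals when invoked.
    return x in _target

target = [1]
assert late_binding(2) is False   # sees the rebound global
assert early_binding(2) is True   # still uses the original [1, 2, 3]
```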
```python
def format_probability_inverse(probability, /, int_start: int = 20):
    """EXPERIMENTAL: Formats the inverse of a value as "1 in N".

    Args:
        probability: The value to be formatted.
        int_start: If N = 1 / probability is between this value and 1 million
            times this value, it will be formatted as an integer. Otherwise it
            will be formatted as a float with precision at least 1 part in
            int_start.
    """
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability >= 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'

    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'
```
EXPERIMENTAL: Formats the inverse of a value as "1 in N".

Arguments:
- probability: The value to be formatted.
- int_start: If N = 1 / probability is between this value and 1 million times this value, it will be formatted as an integer. Otherwise it will be formatted as a float with precision at least 1 part in `int_start`.
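To make the branch structure concrete, here is the same logic restated as a standalone function (copied from the source above, with the docstring trimmed to a comment) and a few worked inputs using the default `int_start=20`:

```python
import math

def format_probability_inverse(probability, /, int_start: int = 20):
    # Same logic as the source above: "1 in N", with N shown as an
    # integer when it falls between int_start and a million times that,
    # as a rounded float just below that range, and in scientific
    # notation for very small probabilities.
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability >= 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'
    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'

print(format_probability_inverse(1 / 100))  # 1 in 100  (integer range)
print(format_probability_inverse(1 / 3))    # 1 in 3.0  (float branch)
print(format_probability_inverse(2))        # n/a       (not a probability)
```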