icepool
"""Package for computing dice and card probabilities.

Starting with `v0.25.1`, you can replace `latest` in the URL with an old version
number to get the documentation for that version.

See [this JupyterLite distribution](https://highdiceroller.github.io/icepool/notebooks/lab/index.html)
for examples.

[Visit the project page.](https://github.com/HighDiceRoller/icepool)

General conventions:

* Instances are immutable (apart from internal caching). Anything that looks
  like it mutates an instance actually returns a separate instance with the
  change.
"""

__docformat__ = 'google'

__version__ = '1.7.0a1'

from typing import Final

from icepool.typing import Outcome, RerollType, NoExpand
from icepool.order import Order

Reroll: Final = RerollType.Reroll
"""Indicates that an outcome should be rerolled (with unlimited depth).

This can be used in place of outcomes in many places. See individual function
and method descriptions for details.

This effectively removes the outcome from the probability space, along with its
contribution to the denominator.

This can be used for conditional probability by removing all outcomes not
consistent with the given observations.

Operation in specific cases:

* When used with `Again`, only that stage is rerolled, not the entire `Again`
  tree.
* To reroll with limited depth, use `Die.reroll()`, or `Again` with no
  modification.
* When used with `MultisetEvaluator`, the entire evaluation is rerolled.
"""

# Expose certain names at top-level.

from icepool.function import (
    d, z, __getattr__, coin, stochastic_round, one_hot, iter_cartesian_product,
    from_cumulative, from_rv, pointwise_max, pointwise_min, min_outcome,
    max_outcome, consecutive, sorted_union, commonize_denominator, reduce,
    accumulate, map, map_function, map_and_time, map_to_pool)

from icepool.population.base import Population
from icepool.population.die import implicit_convert_to_die, Die
from icepool.collection.vector import cartesian_product, tupleize, vectorize, Vector
from icepool.collection.symbols import Symbols
from icepool.population.again import AgainExpression

Again: Final = AgainExpression(is_additive=True)
"""A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor.
`AgainExpression`s should not be fed to functions or methods other than
`Die()`, but they can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()`
affect how these expressions are processed. At most one of `again_count` or
`again_depth` may be provided; if neither is provided, the behavior is as
`again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.

#### Count mode

When `again_count` is provided, we start with one roll queued and execute one
roll at a time. For every `Again` we roll, we queue another roll.
If we run out of rolls, we sum the rolls to find the result. If the total number
of rolls (not including the initial roll) would exceed `again_count`, we reroll
the entire process, effectively conditioning the process on not rolling more
than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`, which
means that only the following operators are allowed:

* Binary `+`
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative.
For example, `str` or `tuple` outcomes will not produce elements with a definite
order.

#### Depth mode

When `again_depth=0`, `again_end` is directly substituted
for each occurrence of `Again`. For other values of `again_depth`, the result
for `again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth
are rerolled.

#### Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such
rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:

* Count mode: Any result that would cause the number of rolls to exceed
  `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
"""

from icepool.population.die_with_truth import DieWithTruth

from icepool.collection.counts import CountsKeysView, CountsValuesView, CountsItemsView

from icepool.population.keep import lowest, highest, middle

from icepool.generator.pool import Pool, standard_pool
from icepool.generator.keep import KeepGenerator
from icepool.generator.compound_keep import CompoundKeepGenerator
from icepool.generator.mixture import MixtureGenerator

from icepool.multiset_expression import (MultisetExpression,
                                         implicit_convert_to_expression,
                                         InitialMultisetGeneration,
                                         PopMultisetGeneration,
                                         MultisetArityError,
                                         MultisetBindingError)

from icepool.generator.multiset_generator import MultisetGenerator
from icepool.generator.alignment import Alignment
from icepool.evaluator.multiset_evaluator import MultisetEvaluator

from icepool.population.deck import Deck
from icepool.generator.deal import Deal
from icepool.generator.multi_deal import MultiDeal

from icepool.evaluator.multiset_function import multiset_function
from icepool.multiset_variable import MultisetVariable

from icepool.population.format import format_probability_inverse

import icepool.generator as generator
import icepool.evaluator as evaluator
import icepool.operator as operator

import icepool.typing as typing

__all__ = [
    'd', 'z', 'coin', 'stochastic_round', 'one_hot', 'Outcome', 'Die',
    'Population', 'tupleize', 'vectorize', 'Vector', 'Symbols', 'Again',
    'CountsKeysView', 'CountsValuesView', 'CountsItemsView', 'from_cumulative',
    'from_rv', 'pointwise_max', 'pointwise_min', 'lowest', 'highest', 'middle',
    'min_outcome', 'max_outcome', 'consecutive', 'sorted_union',
    'commonize_denominator', 'reduce', 'accumulate', 'map', 'map_function',
    'map_and_time', 'map_to_pool', 'Reroll', 'RerollType', 'NoExpand', 'Pool',
    'standard_pool', 'MultisetGenerator', 'MultisetExpression',
    'MultisetEvaluator', 'Order', 'Deck', 'Deal', 'MultiDeal',
    'multiset_function', 'function', 'typing', 'evaluator',
    'format_probability_inverse'
]
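The depth-mode substitution described for `Again` can be sketched without icepool. This hypothetical `exploding_d6` helper models `Die([1, 2, 3, 4, 5, 6 + Again], again_depth=depth)`, assuming the default `again_end` of a zero value (so at the final depth, `6 + Again` becomes just `6`):

```python
from fractions import Fraction

def exploding_d6(depth):
    """Distribution of an exploding d6 under depth mode: each `Again` is
    replaced by the previous depth's result. At depth 0, `Again` is replaced
    by the assumed end value of 0, so `6 + Again` becomes just 6."""
    dist = {face: Fraction(1, 6) for face in range(1, 7)}  # depth 0
    for _ in range(depth):
        next_dist = {}
        for face in range(1, 7):
            if face == 6:
                # 6 + Again: substitute the previous depth's distribution.
                for outcome, p in dist.items():
                    key = 6 + outcome
                    next_dist[key] = next_dist.get(key, 0) + Fraction(1, 6) * p
            else:
                next_dist[face] = next_dist.get(face, 0) + Fraction(1, 6)
        dist = next_dist
    return dist

# With one level of explosion, rolling above 6 requires a first roll of 6,
# so outcomes 7 through 12 each have probability 1/36.
```

This is a sketch of the substitution rule only; the library also supports count mode and `again_end=icepool.Reroll`, which behave differently as described above.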
@cache
def d(sides: int, /) -> 'icepool.Die[int]':
    """A standard die, uniformly distributed from `1` to `sides` inclusive.

    Don't confuse this with `icepool.Die()`:

    * `icepool.Die([6])`: A `Die` that always rolls the integer 6.
    * `icepool.d(6)`: A d6.

    You can also import individual standard dice from the `icepool` module, e.g.
    `from icepool import d6`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(1, sides + 1))
@cache
def z(sides: int, /) -> 'icepool.Die[int]':
    """A die uniformly distributed from `0` to `sides - 1` inclusive.

    Equal to `d(sides) - 1`.
    """
    if not isinstance(sides, int):
        raise TypeError('sides must be an int.')
    elif sides < 1:
        raise ValueError('sides must be at least 1.')
    return icepool.Die(range(0, sides))
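As a quick check of the relationship stated in the docstring, here is a plain-Python sketch (the `uniform` helper is hypothetical, not part of icepool): shifting `z(6)` up by 1 gives exactly `d(6)`.

```python
from fractions import Fraction

def uniform(outcomes):
    """A uniform distribution over the given outcomes, as a plain dict."""
    outcomes = list(outcomes)
    return {o: Fraction(1, len(outcomes)) for o in outcomes}

d6 = uniform(range(1, 7))  # d(6): outcomes 1..6
z6 = uniform(range(0, 6))  # z(6): outcomes 0..5

# z(6) + 1 has the same distribution as d(6).
shifted = {o + 1: p for o, p in z6.items()}
```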
def coin(n: int | float | Fraction,
         d: int = 1,
         /,
         *,
         max_denominator: int | None = None) -> 'icepool.Die[bool]':
    """A `Die` that rolls `True` with probability `n / d`, and `False` otherwise.

    If `n <= 0` or `n >= d` the result will have only one outcome.

    Args:
        n: An int numerator, or a non-integer probability.
        d: An int denominator. Should not be provided if the first argument is
            not an int.
        max_denominator: If provided and `n` is not an `int`, the probability
            is converted using
            `fractions.Fraction(n).limit_denominator(max_denominator)`.
    """
    if not isinstance(n, int):
        if d != 1:
            raise ValueError(
                'If a non-int numerator is provided, a denominator must not be provided.'
            )
        fraction = Fraction(n)
        if max_denominator is not None:
            fraction = fraction.limit_denominator(max_denominator)
        n = fraction.numerator
        d = fraction.denominator
    data = {}
    if n < d:
        data[False] = min(d - n, d)
    if n > 0:
        data[True] = min(n, d)

    return icepool.Die(data)
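The non-integer branch above leans on `fractions.Fraction`. A sketch of that reduction (the `coin_weights` helper is hypothetical, not the library code), plus a demonstration of why `max_denominator` matters for floats:

```python
from fractions import Fraction

def coin_weights(n, max_denominator=None):
    """Reduce a probability to (true_weight, false_weight), mirroring the
    Fraction handling in coin() above. A sketch, not the library code."""
    fraction = Fraction(n)
    if max_denominator is not None:
        fraction = fraction.limit_denominator(max_denominator)
    num, den = fraction.numerator, fraction.denominator
    true_weight = min(num, den) if num > 0 else 0
    false_weight = min(den - num, den) if num < den else 0
    return true_weight, false_weight

# A float like 0.3 is not exactly 3/10; limit_denominator recovers the
# intended ratio from the float's binary expansion.
assert Fraction(0.3) != Fraction(3, 10)
assert Fraction(0.3).limit_denominator(100) == Fraction(3, 10)
```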
def stochastic_round(x,
                     /,
                     *,
                     max_denominator: int | None = None) -> 'icepool.Die[int]':
    """Randomly rounds a value up or down to the nearest integer according to the two distances.

    Specifically, rounds `x` up with probability `x - floor(x)` and down
    otherwise, producing a `Die` with up to two outcomes.

    Args:
        max_denominator: If provided, each rounding will be performed
            using `fractions.Fraction.limit_denominator(max_denominator)`.
            Otherwise, the rounding will be performed without
            `limit_denominator`.
    """
    integer_part = math.floor(x)
    fractional_part = x - integer_part
    return integer_part + coin(fractional_part,
                               max_denominator=max_denominator)
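A plain-Python sketch of the same rounding rule (the `stochastic_round_dist` helper is hypothetical and returns a probability dict rather than a `Die`). The point of stochastic rounding is that the expected value of the result equals the input:

```python
import math
from fractions import Fraction

def stochastic_round_dist(x):
    """Distribution {integer: probability} for rounding x up with probability
    x - floor(x) and down otherwise. Sketch of the rule above."""
    x = Fraction(x)
    lo = math.floor(x)
    p_up = x - lo
    dist = {}
    if p_up < 1:
        dist[lo] = 1 - p_up
    if p_up > 0:
        dist[lo + 1] = p_up
    return dist

dist = stochastic_round_dist(Fraction(9, 4))   # 2.25 -> {2: 3/4, 3: 1/4}
mean = sum(k * p for k, p in dist.items())     # expectation preserved: 9/4
```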
def one_hot(sides: int, /) -> 'icepool.Die[tuple[bool, ...]]':
    """A `Die` with `Vector` outcomes with one element set to `True` uniformly at random and the rest `False`.

    This is an easy (if somewhat expensive) way of representing how many dice
    in a pool rolled each number. For example, the outcomes of `10 @ one_hot(6)`
    are the `(ones, twos, threes, fours, fives, sixes)` rolled in 10d6.
    """
    data = []
    for i in range(sides):
        outcome = [False] * sides
        outcome[i] = True
        data.append(icepool.Vector(outcome))
    return icepool.Die(data)
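The docstring's counting claim can be checked by hand: summing one-hot tuples componentwise counts how many dice rolled each face. A small sketch of `2 @ one_hot(3)` using plain tuples rather than icepool objects:

```python
from itertools import product
from fractions import Fraction

def one_hot_outcomes(sides):
    """The one-hot boolean tuples for a die with the given number of sides."""
    return [tuple(i == j for j in range(sides)) for i in range(sides)]

# Summing one-hot vectors componentwise counts how many dice rolled each face.
# Distribution over (ones, twos, threes) for 2d3:
dist = {}
for rolls in product(one_hot_outcomes(3), repeat=2):
    counts = tuple(sum(col) for col in zip(*rolls))
    dist[counts] = dist.get(counts, 0) + Fraction(1, 9)

# e.g. both dice showing 1 gives (2, 0, 0), with probability 1/9.
```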
class Outcome(Hashable, Protocol[T_contra]):
    """Protocol to attempt to verify that outcome types are hashable and sortable.

    Far from foolproof, e.g. it cannot enforce total ordering.
    """

    def __lt__(self, other: T_contra) -> bool:
        ...
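For illustration, here is a hypothetical outcome type satisfying the intent of this protocol: a frozen dataclass is hashable, and defining `__lt__` (with `functools.total_ordering` filling in the rest) makes it sortable.

```python
from dataclasses import dataclass
from functools import total_ordering

@total_ordering
@dataclass(frozen=True)  # frozen=True makes instances hashable
class Card:
    """A hypothetical custom outcome type: hashable and totally orderable."""
    rank: int

    def __lt__(self, other: 'Card') -> bool:
        return self.rank < other.rank

# Sortable, so it can serve as a die outcome.
hand = sorted([Card(3), Card(1), Card(2)])
```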
class Die(Population[T_co]):
    """Sampling with replacement. Quantities represent weights.

    Dice are immutable. Methods do not modify the `Die` in-place;
    rather they return a `Die` representing the result.

    It's also possible to have "empty" dice with no outcomes at all,
    though these have little use other than being sentinel values.
    """

    _data: Counts[T_co]

    @property
    def _new_type(self) -> type:
        return Die

    def __new__(
        cls,
        outcomes: Sequence | Mapping[Any, int],
        times: Sequence[int] | int = 1,
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'Outcome | Die | icepool.RerollType | None' = None
    ) -> 'Die[T_co]':
        """Constructor for a `Die`.

        Don't confuse this with `d()`:

        * `Die([6])`: A `Die` that always rolls the `int` 6.
        * `d(6)`: A d6.

        Also, don't confuse this with `Pool()`:

        * `Die([1, 2, 3, 4, 5, 6])`: A d6.
        * `Pool([1, 2, 3, 4, 5, 6])`: A `Pool` of six dice that always rolls one
          of each number.

        Here are some different ways of constructing a d6:

        * Just import it: `from icepool import d6`
        * Use the `d()` function: `icepool.d(6)`
        * Use a d6 that you already have: `Die(d6)` or `Die([d6])`
        * Mix a d3 and a d3+3: `Die([d3, d3+3])`
        * Use a dict: `Die({1:1, 2:1, 3:1, 4:1, 5:1, 6:1})`
        * Give the faces as a sequence: `Die([1, 2, 3, 4, 5, 6])`

        All quantities must be non-negative. Outcomes with zero quantity will be
        omitted.

        Several methods and functions forward `**kwargs` to this constructor.
        However, these only affect the construction of the returned or yielded
        dice. Any other implicit conversions of arguments or operands to dice
        will be done with the default keyword arguments.

        EXPERIMENTAL: Use `icepool.Again` to roll the dice again, usually with
        some modification. See the `Again` documentation for details.
        Denominator: For a flat set of outcomes, the denominator is just the
        sum of the corresponding quantities. If the outcomes themselves have
        secondary denominators, then the overall denominator will be minimized
        while preserving the relative weighting of the primary outcomes.

        Args:
            outcomes: The faces of the `Die`. This can be one of the following:
                * A `Sequence` of outcomes. Duplicates will contribute
                  quantity for each appearance.
                * A `Mapping` from outcomes to quantities.

                Individual outcomes can each be one of the following:

                * An outcome, which must be hashable and totally orderable.
                * For convenience, `tuple`s containing `Population`s will be
                  `tupleize`d into a `Population` of `tuple`s.
                  This does not apply to subclasses of `tuple`s such as
                  `namedtuple` or other classes such as `Vector`.
                * A `Die`, which will be flattened into the result.
                  The quantity assigned to a `Die` is shared among its
                  outcomes. The total denominator will be scaled up if
                  necessary.
                * `icepool.Reroll`, which will drop itself from consideration.
                * EXPERIMENTAL: `icepool.Again`. See the documentation for
                  `Again` for details.
            times: Multiplies the quantity of each element of `outcomes`.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.
            again_count, again_depth, again_end: These affect how `Again`
                expressions are handled. See the `Again` documentation for
                details.

        Raises:
            ValueError: `None` is not a valid outcome for a `Die`.
        """
        outcomes, times = icepool.creation_args.itemize(outcomes, times)

        # Check for Again.
        if icepool.population.again.contains_again(outcomes):
            if again_count is not None:
                if again_depth is not None:
                    raise ValueError(
                        'At most one of again_count and again_depth may be used.'
                    )
                if again_end is not None:
                    raise ValueError(
                        'again_end cannot be used with again_count.')
                return icepool.population.again.evaluate_agains_using_count(
                    outcomes, times, again_count)
            else:
                if again_depth is None:
                    again_depth = 1
                return icepool.population.again.evaluate_agains_using_depth(
                    outcomes, times, again_depth, again_end)

        # Agains have been replaced by this point.
        outcomes = cast(Sequence[T_co | Die[T_co] | icepool.RerollType],
                        outcomes)

        if len(outcomes) == 1 and times[0] == 1 and isinstance(
                outcomes[0], Die):
            return outcomes[0]

        counts: Counts[T_co] = icepool.creation_args.expand_args_for_die(
            outcomes, times)

        return Die._new_raw(counts)

    @classmethod
    def _new_raw(cls, data: Counts[T_co]) -> 'Die[T_co]':
        """Creates a new `Die` using already-processed arguments.

        Args:
            data: At this point, this is a Counts.
        """
        self = super(Population, cls).__new__(cls)
        self._data = data
        return self

    # Defined separately from the superclass to help typing.
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)

    def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                        **kwargs) -> 'Die[U]':
        """Performs the operation on pairs of outcomes.

        By the time this is called, the other operand has already been
        converted to a `Die`.

        If one side of a binary operator is a tuple and the other is not, the
        binary operator is applied to each element of the tuple with the
        non-tuple side. For example, the following are equivalent:

        ```python
        cartesian_product(d6, d8) * 2
        cartesian_product(d6 * 2, d8 * 2)
        ```

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`
        and the standard binary comparators
        `<, <=, >=, >, ==, !=, cmp`.

        `==` and `!=` additionally set the truth value of the `Die` according to
        whether the dice themselves are the same or not.

        The `@` operator does NOT use this method directly.
        It rolls the left `Die`, which must have integer outcomes,
        then rolls the right `Die` that many times and sums the outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length within one of the
                dice or between the dice.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for (outcome_self,
             quantity_self), (outcome_other,
                              quantity_other) in itertools.product(
                                  self.items(), other.items()):
            new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
            data[new_outcome] += quantity_self * quantity_other
        return self._new_type(data)

    # Basic access.
    def keys(self) -> CountsKeysView[T_co]:
        return self._data.keys()

    def values(self) -> CountsValuesView:
        return self._data.values()

    def items(self) -> CountsItemsView[T_co]:
        return self._data.items()

    def __getitem__(self, outcome, /) -> int:
        return self._data[outcome]

    def __iter__(self) -> Iterator[T_co]:
        return iter(self.keys())

    def __len__(self) -> int:
        """The number of outcomes."""
        return len(self._data)

    def __contains__(self, outcome) -> bool:
        return outcome in self._data

    # Quantity management.

    def simplify(self) -> 'Die[T_co]':
        """Divides all quantities by their greatest common denominator."""
        return icepool.Die(self._data.simplify())

    # Rerolls and other outcome management.

    def reroll(self,
               which: Callable[..., bool] | Collection[T_co] | None = None,
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls the given outcomes.

        Args:
            which: Selects which outcomes to reroll. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                  should be rerolled.
                * If not provided, the min outcome will be rerolled.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if which is None:
            outcome_set = {self.min_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth == 'inf' or depth is None:
            if depth is None:
                warnings.warn(
                    "depth=None is deprecated; use depth='inf' instead.",
                    category=DeprecationWarning,
                    stacklevel=1)
            data = {
                outcome: quantity
                for outcome, quantity in self.items()
                if outcome not in outcome_set
            }
        elif depth < 0:
            raise ValueError('reroll depth cannot be negative.')
        else:
            total_reroll_quantity = sum(quantity
                                        for outcome, quantity in self.items()
                                        if outcome in outcome_set)
            total_stop_quantity = self.denominator() - total_reroll_quantity
            rerollable_factor = total_reroll_quantity**depth
            stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                           * total_reroll_quantity) // total_stop_quantity
            data = {
                outcome: (rerollable_factor *
                          quantity if outcome in outcome_set else stop_factor *
                          quantity)
                for outcome, quantity in self.items()
            }
        return icepool.Die(data)

    def filter(self,
               which: Callable[..., bool] | Collection[T_co],
               /,
               *,
               star: bool | None = None,
               depth: int | Literal['inf']) -> 'Die[T_co]':
        """Rerolls until getting one of the given outcomes.

        Essentially the complement of `reroll()`.

        Args:
            which: Selects which outcomes to reroll until. Options:
                * A callable that takes an outcome and returns `True` if it
                  should be accepted.
                * A collection of outcomes to reroll until.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of times to reroll.
                If `'inf'`, rerolls an unlimited number of times.

        Returns:
            A `Die` representing the reroll.
            If the reroll would never terminate, the result has no outcomes.
        """

        if callable(which):
            if star is None:
                star = infer_star(which)
            if star:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes()
                    if not which(*outcome)  # type: ignore
                }
            else:
                not_outcomes = {
                    outcome
                    for outcome in self.outcomes() if not which(outcome)
                }
        else:
            not_outcomes = {
                not_outcome
                for not_outcome in self.outcomes() if not_outcome not in which
            }
        return self.reroll(not_outcomes, depth=depth)

    def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Truncates the outcomes of this `Die` to the given range.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be truncated.

        This effectively rerolls outcomes outside the given range.
        If instead you want to replace those outcomes with the nearest endpoint,
        use `clip()`.

        Not to be confused with `trunc(die)`, which performs integer truncation
        on each outcome.
        """
        if min_outcome is not None:
            start = bisect.bisect_left(self.outcomes(), min_outcome)
        else:
            start = None
        if max_outcome is not None:
            stop = bisect.bisect_right(self.outcomes(), max_outcome)
        else:
            stop = None
        data = {k: v for k, v in self.items()[start:stop]}
        return icepool.Die(data)

    def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
        """Clips the outcomes of this `Die` to the given values.

        The endpoints are included in the result if applicable.
        If one of the arguments is not provided, that side will not be clipped.

        This is not the same as rerolling outcomes beyond this range;
        the outcome is simply adjusted to fit within the range.
        This will typically cause some quantity to bunch up at the endpoint(s).
        If you want to reroll outcomes beyond this range, use `truncate()`.
        """
        data: MutableMapping[Any, int] = defaultdict(int)
        for outcome, quantity in self.items():
            if min_outcome is not None and outcome <= min_outcome:
                data[min_outcome] += quantity
            elif max_outcome is not None and outcome >= max_outcome:
                data[max_outcome] += quantity
            else:
                data[outcome] += quantity
        return icepool.Die(data)

    @cached_property
    def _popped_min(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_min())
        return die, self.quantities()[0]

    def _pop_min(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the min outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_min

    @cached_property
    def _popped_max(self) -> tuple['Die[T_co]', int]:
        die = Die._new_raw(self._data.remove_max())
        return die, self.quantities()[-1]

    def _pop_max(self) -> tuple['Die[T_co]', int]:
        """A `Die` with the max outcome removed, and the quantity of the removed outcome.

        Raises:
            IndexError: If this `Die` has no outcome to pop.
        """
        return self._popped_max

    # Processes.

    def map(
        self,
        repl:
        'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *extra_args,
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        time_limit: int | Literal['inf'] | None = None,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Maps outcomes of the `Die` to other outcomes.

        This is also useful for representing processes.

        As `icepool.map(repl, self, ...)`.
        """
        return icepool.map(repl,
                           self,
                           *extra_args,
                           star=star,
                           repeat=repeat,
                           time_limit=time_limit,
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)

    def map_and_time(
        self,
        repl:
        'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
        /,
        *extra_args,
        star: bool | None = None,
        time_limit: int) -> 'Die[tuple[T_co, int]]':
        """Repeatedly maps outcomes of the state to other outcomes, while also
        counting timesteps.

        This is useful for representing processes.

        As `map_and_time(repl, self, ...)`.
        """
        return icepool.map_and_time(repl,
                                    self,
                                    *extra_args,
                                    star=star,
                                    time_limit=time_limit)

    def time_to_sum(self: 'Die[int]',
                    target: int,
                    /,
                    max_time: int,
                    dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
        """The number of rolls until the cumulative sum is greater or equal to the target.

        Args:
            target: The number to stop at once reached.
            max_time: The maximum number of rolls to run.
                If the sum is not reached, the outcome is determined by `dnf`.
            dnf: What time to assign in cases where the target was not reached
                in `max_time`. If not provided, this is set to `max_time`.
                `dnf=icepool.Reroll` will remove this case from the result,
                effectively rerolling it.
        """
        if target <= 0:
            return Die([0])

        if dnf is None:
            dnf = max_time

        def step(total, roll):
            return min(total + roll, target)

        result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
            step, self, time_limit=max_time)

        def get_time(total, time):
            if total < target:
                return dnf
            else:
                return time

        return result.map(get_time)

    @cached_property
    def _mean_time_to_sum_cache(self) -> list[Fraction]:
        return [Fraction(0)]

    def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
        """The mean number of rolls until the cumulative sum is greater or equal to the target.

        Args:
            target: The target sum.

        Raises:
            ValueError: If `self` has negative outcomes.
            ZeroDivisionError: If `self.mean() == 0`.
        """
        target = max(target, 0)

        if target < len(self._mean_time_to_sum_cache):
            return self._mean_time_to_sum_cache[target]

        if self.min_outcome() < 0:
            raise ValueError(
                'mean_time_to_sum does not handle negative outcomes.')
        time_per_effect = Fraction(self.denominator(),
                                   self.denominator() - self.quantity(0))

        for i in range(len(self._mean_time_to_sum_cache), target + 1):
            result = time_per_effect + self.reroll(
                [0],
                depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
            self._mean_time_to_sum_cache.append(result)

        return result

    def explode(self,
                which: Collection[T_co] | Callable[..., bool] | None = None,
                /,
                *,
                star: bool | None = None,
                depth: int = 9,
                end=None) -> 'Die[T_co]':
        """Causes outcomes to be rolled again and added to the total.

        Args:
            which: Which outcomes to explode. Options:
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                  should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum number of additional dice to roll, not counting
                the initial roll.
                If not supplied, a default value will be used.
            end: Once `depth` is reached, further explosions will be treated
                as this value. By default, a zero value will be used.
                `icepool.Reroll` will make one extra final roll, rerolling until
                a non-exploding outcome is reached.
        """

        if which is None:
            outcome_set = {self.max_outcome()}
        else:
            outcome_set = self._select_outcomes(which, star)

        if depth < 0:
            raise ValueError('depth cannot be negative.')
        elif depth == 0:
            return self

        def map_final(outcome):
            if outcome in outcome_set:
                return outcome + icepool.Again
            else:
                return outcome

        return self.map(map_final, again_depth=depth, again_end=end)

    def if_else(
        self,
        outcome_if_true: U | 'Die[U]',
        outcome_if_false: U | 'Die[U]',
        *,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'U | Die[U] | icepool.RerollType | None' = None
    ) -> 'Die[U]':
        """Ternary conditional operator.

        This replaces truthy outcomes with the first argument and falsy outcomes
        with the second argument.

        Args:
            again_count, again_depth, again_end: Forwarded to the final die
                constructor.
        """
        return self.map(lambda x: bool(x)).map(
            {
                True: outcome_if_true,
                False: outcome_if_false
            },
            again_count=again_count,
            again_depth=again_depth,
            again_end=again_end)

    def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
        """A die that returns `True` iff the roll of the die is contained in the target."""
        return self.map(lambda x: x in target)

    def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
        """Rolls this die a number of times and counts how many are in the target."""
        return rolls @ self.is_in(target)

    # Pools and sums.

    @cached_property
    def _sum_cache(self) -> MutableMapping[int, 'Die']:
        return {}

    def _sum_all(self, rolls: int, /) -> 'Die':
        """Roll this `Die` `rolls` times and sum the results.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.

        If `rolls` is negative, roll the `Die` `abs(rolls)` times and negate
        the result.

        If you instead want to replace tuple (or other sequence) outcomes with
        their sum, use `die.map(sum)`.
        """
        if rolls in self._sum_cache:
            return self._sum_cache[rolls]

        if rolls < 0:
            result = -self._sum_all(-rolls)
        elif rolls == 0:
            result = self.zero().simplify()
        elif rolls == 1:
            result = self
        else:
            # In addition to working similarly to reduce(), this seems to
            # perform better than binary split.
            result = self._sum_all(rolls - 1) + self

        self._sum_cache[rolls] = result
        return result

    def __matmul__(self: 'Die[int]', other) -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)

        data: MutableMapping[int, Any] = defaultdict(int)

        max_abs_die_count = max(abs(self.min_outcome()),
                                abs(self.max_outcome()))
        for die_count, die_count_quantity in self.items():
            factor = other.denominator()**(max_abs_die_count - abs(die_count))
            subresult = other._sum_all(die_count)
            for outcome, subresult_quantity in subresult.items():
                data[
                    outcome] += subresult_quantity * die_count_quantity * factor

        return icepool.Die(data)

    def __rmatmul__(self, other: 'int | Die[int]') -> 'Die':
        """Roll the left `Die`, then roll the right `Die` that many times and sum the outcomes.

        The sum is computed one at a time, with each additional item on the
        right, similar to `functools.reduce()`.
        """
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.__matmul__(self)

    def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
        """Possible sequences produced by rolling this die a number of times.

        This is extremely expensive computationally. If possible, use `reduce()`
        instead; if you don't care about order, `Die.pool()` is better.
        """
        return icepool.cartesian_product(*(self for _ in range(rolls)),
                                         outcome_type=tuple)  # type: ignore

    def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
        """Creates a `Pool` from this `Die`.

        You might subscript the pool immediately afterwards, e.g.
        `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
        lowest of 5d6.

        Args:
            rolls: The number of copies of this `Die` to put in the pool.
                Or, a sequence of one `int` per die acting as a
                `keep_tuple`. Note that `...` cannot be used in the
                argument to this method, as the argument determines the size of
                the pool.
        """
        if isinstance(rolls, int):
            return icepool.Pool({self: rolls})
        else:
            pool_size = len(rolls)
            # Haven't dealt with narrowing the return type.
            return icepool.Pool({self: pool_size})[rolls]  # type: ignore

    @overload
    def keep(self, rolls: Sequence[int], /) -> 'Die':
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: A sequence of `int` specifying how many times to count each
                element in ascending order.
        """

    @overload
    def keep(self, rolls: int,
             index: slice | Sequence[int | EllipsisType] | int, /):
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: The number of dice to roll.
            index: One of the following:
                * An `int`. This will count only the roll at the specified index.
                    In this case, the result is a `Die` rather than a generator.
                * A `slice`. The selected dice are counted once each.
                * A sequence of one `int` for each `Die`.
                    Each roll is counted that many times, which could be multiple or
                    negative times.

                    Up to one `...` (`Ellipsis`) may be used.
                    `...` will be replaced with a number of zero
                    counts depending on the `rolls`.
                    This number may be "negative" if more `int`s are provided than
                    `rolls`. Specifically:

                    * If `index` is shorter than `rolls`, `...`
                        acts as enough zero counts to make up the difference.
                        E.g. `(1, ..., 1)` on five dice would act as
                        `(1, 0, 0, 0, 1)`.
                    * If `index` has length equal to `rolls`, `...` has no effect.
                        E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
                    * If `index` is longer than `rolls` and `...` is on one side,
                        elements will be dropped from `index` on the side with `...`.
                        E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                    * If `index` is longer than `rolls` and `...`
                        is in the middle, the counts will be as the sum of two
                        one-sided `...`.
                        E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
                        If `rolls` was 1 this would have the -1 and 1 cancel each other out.
        """

    def keep(self,
             rolls: int | Sequence[int],
             index: slice | Sequence[int | EllipsisType] | int | None = None,
             /) -> 'Die':
        """Selects elements after drawing and sorting and sums them.

        Args:
            rolls: The number of dice to roll.
            index: One of the following:
                * An `int`. This will count only the roll at the specified index.
                    In this case, the result is a `Die` rather than a generator.
                * A `slice`. The selected dice are counted once each.
                * A sequence of `int`s with length equal to `rolls`.
                    Each roll is counted that many times, which could be multiple or
                    negative times.

                    Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
                    the `rolls` argument may be omitted.

                    `...` will be replaced with a number of zero counts in order
                    to make up any missing elements compared to `rolls`.
                    This number may be "negative" if more `int`s are provided than
                    `rolls`. Specifically:

                    * If `index` is shorter than `rolls`, `...`
                        acts as enough zero counts to make up the difference.
                        E.g. `(1, ..., 1)` on five dice would act as
                        `(1, 0, 0, 0, 1)`.
                    * If `index` has length equal to `rolls`, `...` has no effect.
                        E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
                    * If `index` is longer than `rolls` and `...` is on one side,
                        elements will be dropped from `index` on the side with `...`.
                        E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
                    * If `index` is longer than `rolls` and `...`
                        is in the middle, the counts will be as the sum of two
                        one-sided `...`.
                        E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
                        If `rolls` was 1 this would have the -1 and 1 cancel each other out.
        """
        if isinstance(rolls, int):
            if index is None:
                raise ValueError(
                    'If the number of rolls is an integer, an index argument must be provided.'
                )
            if isinstance(index, int):
                return self.pool(rolls).keep(index)
            else:
                return self.pool(rolls).keep(index).sum()  # type: ignore
        else:
            if index is not None:
                raise ValueError('Only one index sequence can be given.')
            return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore

    def lowest(self,
               rolls: int,
               /,
               keep: int | None = None,
               drop: int | None = None) -> 'Die':
        """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll. All dice will have the same
                outcomes as `self`.
            keep, drop: These arguments work together:
                * If neither are provided, the single lowest die will be taken.
                * If only `keep` is provided, the `keep` lowest dice will be summed.
                * If only `drop` is provided, the `drop` lowest dice will be dropped
                    and the rest will be summed.
                * If both are provided, `drop` lowest dice will be dropped, then
                    the next `keep` lowest dice will be summed.

        Returns:
            A `Die` representing the probability distribution of the sum.
        """
        index = lowest_slice(keep, drop)
        canonical = canonical_slice(index, rolls)
        if canonical.start == 0 and canonical.stop == 1:
            return self._lowest_single(rolls)
        # Expression evaluators are difficult to type.
        return self.pool(rolls)[index].sum()  # type: ignore

    def _lowest_single(self, rolls: int, /) -> 'Die':
        """Roll this die several times and keep the lowest."""
        if rolls == 0:
            return self.zero().simplify()
        return icepool.from_cumulative(
            self.outcomes(), [x**rolls for x in self.quantities('>=')],
            reverse=True)

    def highest(self,
                rolls: int,
                /,
                keep: int | None = None,
                drop: int | None = None) -> 'Die[T_co]':
        """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll.
            keep, drop: These arguments work together:
                * If neither are provided, the single highest die will be taken.
                * If only `keep` is provided, the `keep` highest dice will be summed.
                * If only `drop` is provided, the `drop` highest dice will be dropped
                    and the rest will be summed.
                * If both are provided, `drop` highest dice will be dropped, then
                    the next `keep` highest dice will be summed.

        Returns:
            A `Die` representing the probability distribution of the sum.
        """
        index = highest_slice(keep, drop)
        canonical = canonical_slice(index, rolls)
        if canonical.start == rolls - 1 and canonical.stop == rolls:
            return self._highest_single(rolls)
        # Expression evaluators are difficult to type.
        return self.pool(rolls)[index].sum()  # type: ignore

    def _highest_single(self, rolls: int, /) -> 'Die[T_co]':
        """Roll this die several times and keep the highest."""
        if rolls == 0:
            return self.zero().simplify()
        return icepool.from_cumulative(
            self.outcomes(), [x**rolls for x in self.quantities('<=')])

    def middle(
            self,
            rolls: int,
            /,
            keep: int = 1,
            *,
            tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
        """Roll several of this `Die` and sum the sorted results in the middle.

        The outcomes should support addition and multiplication if `keep != 1`.

        Args:
            rolls: The number of dice to roll.
            keep: The number of outcomes to sum. If this is greater than the
                current keep_size, all are kept.
            tie: What to do if `keep` is odd but the current keep_size
                is even, or vice versa.
                * 'error' (default): Raises `IndexError`.
                * 'high': The higher outcome is taken.
                * 'low': The lower outcome is taken.
        """
        # Expression evaluators are difficult to type.
        return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore

    def map_to_pool(
        self,
        repl:
        'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
        /,
        *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
        star: bool | None = None,
        denominator: int | None = None
    ) -> 'icepool.MultisetGenerator[U, tuple[int]]':
        """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.

        As `icepool.map_to_pool(repl, self, ...)`.

        If no argument is provided, the outcomes will be used to construct a
        mixture of pools directly, similar to the inverse of `pool.expand()`.
        Note that this is not particularly efficient since it does not make much
        use of dynamic programming.

        Args:
            repl: One of the following:
                * A callable that takes in one outcome per element of args and
                    produces a `Pool` (or something convertible to such).
                * A mapping from old outcomes to `Pool`
                    (or something convertible to such).
                    In this case args must have exactly one element.
                The new outcomes may be dice rather than just single outcomes.
                The special value `icepool.Reroll` will reroll that old outcome.
            star: If `True`, the first of the args will be unpacked before
                giving them to `repl`.
                If not provided, it will be guessed based on the signature of
                `repl` and the number of arguments.
            denominator: If provided, the denominator of the result will be this
                value. Otherwise it will be the minimum to correctly weight the
                pools.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.

        Raises:
            ValueError: If `denominator` cannot be made consistent with the
                resulting mixture of pools.
        """
        if repl is None:
            repl = lambda x: x
        return icepool.map_to_pool(repl,
                                   self,
                                   *extra_args,
                                   star=star,
                                   denominator=denominator)

    def explode_to_pool(
            self,
            rolls: int,
            which: Collection[T_co] | Callable[..., bool] | None = None,
            /,
            *,
            star: bool | None = None,
            depth: int = 9) -> 'icepool.MultisetGenerator[T_co, tuple[int]]':
        """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

        Args:
            rolls: The number of initial dice.
            which: Which outcomes to explode. Options:
                * A single outcome to explode.
                * A collection of outcomes to explode.
                * A callable that takes an outcome and returns `True` if it
                    should be exploded.
                * If not supplied, the max outcome will explode.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            depth: The maximum depth of explosions for an individual die.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.
        """
        if depth == 0:
            return self.pool(rolls)
        if which is None:
            explode_set = {self.max_outcome()}
        else:
            explode_set = self._select_outcomes(which, star)
        if not explode_set:
            return self.pool(rolls)
        explode: 'Die[T_co]'
        not_explode: 'Die[T_co]'
        explode, not_explode = self.split(explode_set)

        single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict(
            int)
        for i in range(depth + 1):
            weight = explode.denominator()**i * self.denominator()**(
                depth - i) * not_explode.denominator()
            single_data[icepool.Vector((i, 1))] += weight
        single_data[icepool.Vector(
            (depth + 1, 0))] += explode.denominator()**(depth + 1)

        single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data)
        count_die = rolls @ single_count_die

        return count_die.map_to_pool(
            lambda x, nx: [explode] * x + [not_explode] * nx)

    def reroll_to_pool(
            self,
            rolls: int,
            which: Callable[..., bool] | Collection[T_co],
            /,
            max_rerolls: int,
            *,
            star: bool | None = None,
            mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random'
    ) -> 'icepool.MultisetGenerator[T_co, tuple[int]]':
        """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

        Each die can only be rerolled once (effectively `depth=1`), and no more
        than `max_rerolls` dice may be rerolled.

        Args:
            rolls: How many dice in the pool.
            which: Selects which outcomes are eligible to be rerolled. Options:
                * A collection of outcomes to reroll.
                * A callable that takes an outcome and returns `True` if it
                    could be rerolled.
            max_rerolls: The maximum number of dice to reroll.
                Note that each die can only be rerolled once, so if the number
                of eligible dice is less than this, the excess rerolls have no
                effect.
            star: Whether outcomes should be unpacked into separate arguments
                before sending them to a callable `which`.
                If not provided, this will be guessed based on the function
                signature.
            mode: How dice are selected for rerolling if there are more eligible
                dice than `max_rerolls`. Options:
                * `'random'` (default): Eligible dice will be chosen uniformly
                    at random.
                * `'lowest'`: The lowest eligible dice will be rerolled.
                * `'highest'`: The highest eligible dice will be rerolled.
                * `'drop'`: All dice that ended up on an outcome selected by
                    `which` will be dropped. This includes both dice that rolled
                    into `which` initially and were not rerolled, and dice that
                    were rerolled but rolled into `which` again. This can be
                    considerably more efficient than the other modes.

        Returns:
            A `MultisetGenerator` representing the mixture of `Pool`s. Note
            that this is not technically a `Pool`, though it supports most of
            the same operations.
        """
        rerollable_set = self._select_outcomes(which, star)
        if not rerollable_set:
            return self.pool(rolls)

        rerollable_die: 'Die[T_co]'
        not_rerollable_die: 'Die[T_co]'
        rerollable_die, not_rerollable_die = self.split(rerollable_set)
        single_is_rerollable = icepool.coin(rerollable_die.denominator(),
                                            self.denominator())
        rerollable = rolls @ single_is_rerollable

        def split(initial_rerollable: int) -> Die[tuple[int, int, int]]:
            """Computes the composition of the pool.

            Returns:
                initial_rerollable: The number of dice that initially fell into
                    the rerollable set.
                rerolled_to_rerollable: The number of dice that were rerolled,
                    but fell into the rerollable set again.
                not_rerollable: The number of dice that ended up outside the
                    rerollable set, including both initial and rerolled dice.
                not_rerolled: The number of dice that were eligible for
                    rerolling but were not rerolled.
            """
            initial_not_rerollable = rolls - initial_rerollable
            rerolled = min(initial_rerollable, max_rerolls)
            not_rerolled = initial_rerollable - rerolled

            def second_split(rerolled_to_rerollable):
                """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets."""
                rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable
                return icepool.tupleize(
                    initial_rerollable, rerolled_to_rerollable,
                    initial_not_rerollable + rerolled_to_not_rerollable,
                    not_rerolled)

            return icepool.map(second_split,
                               rerolled @ single_is_rerollable,
                               star=False)

        pool_composition = rerollable.map(split, star=False)

        def make_pool(initial_rerollable, rerolled_to_rerollable,
                      not_rerollable, not_rerolled):
            common = rerollable_die.pool(
                rerolled_to_rerollable) + not_rerollable_die.pool(
                    not_rerollable)
            match mode:
                case 'random':
                    return common + rerollable_die.pool(not_rerolled)
                case 'lowest':
                    return common + rerollable_die.pool(
                        initial_rerollable).highest(not_rerolled)
                case 'highest':
                    return common + rerollable_die.pool(
                        initial_rerollable).lowest(not_rerolled)
                case 'drop':
                    return not_rerollable_die.pool(not_rerollable)
                case _:
                    raise ValueError(
                        f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'."
                    )

        denominator = self.denominator()**(rolls + min(rolls, max_rerolls))

        return pool_composition.map_to_pool(make_pool,
                                            star=True,
                                            denominator=denominator)

    # Unary operators.

    def __neg__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.neg)

    def __pos__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.pos)

    def __invert__(self) -> 'Die[T_co]':
        return self.unary_operator(operator.invert)

    def abs(self) -> 'Die[T_co]':
        return self.unary_operator(operator.abs)

    __abs__ = abs

    def round(self, ndigits: int | None = None) -> 'Die':
        return self.unary_operator(round, ndigits)

    __round__ = round

    def stochastic_round(self,
                         *,
                         max_denominator: int | None = None) -> 'Die[int]':
        """Randomly rounds outcomes up or down to the nearest integer according to the two distances.

        Specifically, rounds `x` up with probability `x - floor(x)` and down
        otherwise.

        Args:
            max_denominator: If provided, each rounding will be performed
                using `fractions.Fraction.limit_denominator(max_denominator)`.
                Otherwise, the rounding will be performed without
                `limit_denominator`.
        """
        return self.map(lambda x: icepool.stochastic_round(
            x, max_denominator=max_denominator))

    def trunc(self) -> 'Die':
        return self.unary_operator(math.trunc)

    __trunc__ = trunc

    def floor(self) -> 'Die':
        return self.unary_operator(math.floor)

    __floor__ = floor

    def ceil(self) -> 'Die':
        return self.unary_operator(math.ceil)

    __ceil__ = ceil

    # Binary operators.
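The rounding rule documented for `stochastic_round` can be written out as a distribution. This is a minimal stdlib-only sketch (the function name `stochastic_round_distribution` is hypothetical, not part of icepool): it returns the exact weights with which `x` rounds down or up.

```python
from fractions import Fraction
from math import floor

def stochastic_round_distribution(x) -> dict[int, Fraction]:
    """Distribution of stochastically rounding x: up with probability
    x - floor(x), down otherwise (hypothetical helper, not icepool API)."""
    x = Fraction(x)
    low = floor(x)
    up_probability = x - low
    if up_probability == 0:
        # x is already an integer; no randomness needed.
        return {low: Fraction(1)}
    return {low: 1 - up_probability, low + 1: up_probability}
```

For example, 2.25 rounds down to 2 with probability 3/4 and up to 3 with probability 1/4.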

    def __add__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.add)

    def __radd__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.add)

    def __sub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.sub)

    def __rsub__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.sub)

    def __mul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mul)

    def __rmul__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mul)

    def __truediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.truediv)

    def __rtruediv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.truediv)

    def __floordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.floordiv)

    def __rfloordiv__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.floordiv)

    def __pow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.pow)

    def __rpow__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.pow)

    def __mod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.mod)

    def __rmod__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.mod)

    def __lshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lshift)

    def __rlshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.lshift)

    def __rshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.rshift)

    def __rrshift__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.rshift)

    def __and__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.and_)

    def __rand__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.and_)

    def __or__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.or_)

    def __ror__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.or_)

    def __xor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.xor)

    def __rxor__(self, other) -> 'Die':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return other.binary_operator(self, operator.xor)

    # Comparators.
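The `@` operator described above (roll the left die, then sum that many copies of the right die) can be sketched over plain weight dictionaries. This is a stdlib-only illustration of the same weighting scheme used in `__matmul__`, where each branch is scaled by a power of the right die's denominator so all branches share a common denominator; the helper names `sum_all` and `matmul` mirror the source but are hypothetical, and negative counts are not handled here.

```python
from collections import defaultdict

def sum_all(die: dict[int, int], rolls: int) -> dict[int, int]:
    """Sum `rolls` independent copies of `die` by repeated convolution."""
    result = {0: 1}
    for _ in range(rolls):
        next_result: dict[int, int] = defaultdict(int)
        for a, qa in result.items():
            for b, qb in die.items():
                next_result[a + b] += qa * qb
        result = dict(next_result)
    return result

def matmul(left: dict[int, int], right: dict[int, int]) -> dict[int, int]:
    """left @ right: roll `left`, then sum that many copies of `right`.
    Assumes nonnegative integer outcomes on `left`."""
    max_count = max(left)
    denom = sum(right.values())
    data: dict[int, int] = defaultdict(int)
    for count, count_quantity in left.items():
        # Scale each branch so every branch has denominator denom**max_count.
        factor = denom**(max_count - count)
        for outcome, quantity in sum_all(right, count).items():
            data[outcome] += quantity * count_quantity * factor
    return dict(data)
```

For example, with a coin-flip-sized die `d2 = {1: 1, 2: 1}`, `matmul(d2, d2)` yields weights `{1: 2, 2: 3, 3: 2, 4: 1}` over a denominator of 8.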

    def __lt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.lt)

    def __le__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.le)

    def __ge__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.ge)

    def __gt__(self, other) -> 'Die[bool]':
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other = implicit_convert_to_die(other)
        return self.binary_operator(other, operator.gt)

    # Equality operators. These produce a `DieWithTruth`.

    # The result has a truth value, but is not a bool.
    def __eq__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.eq)._data

        def truth_value_callback() -> bool:
            return self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    # The result has a truth value, but is not a bool.
    def __ne__(self, other) -> 'icepool.DieWithTruth[bool]':  # type: ignore
        if isinstance(other, icepool.AgainExpression):
            return NotImplemented
        other_die: Die = implicit_convert_to_die(other)

        def data_callback() -> Counts[bool]:
            return self.binary_operator(other_die, operator.ne)._data

        def truth_value_callback() -> bool:
            return not self.equals(other)

        return icepool.DieWithTruth(data_callback, truth_value_callback)

    def cmp(self, other) -> 'Die[int]':
        """A `Die` with outcomes 1, -1, and 0.

        The quantities are equal to the positive outcome of `self > other`,
        `self < other`, and the remainder respectively.
        """
        other = implicit_convert_to_die(other)

        data = {}

        lt = self < other
        if True in lt:
            data[-1] = lt[True]
        eq = self == other
        if True in eq:
            data[0] = eq[True]
        gt = self > other
        if True in gt:
            data[1] = gt[True]

        return Die(data)

    @staticmethod
    def _sign(x) -> int:
        z = Die._zero(x)
        if x > z:
            return 1
        elif x < z:
            return -1
        else:
            return 0

    def sign(self) -> 'Die[int]':
        """Outcomes become 1 if greater than `zero()`, -1 if less than `zero()`, and 0 otherwise.

        Note that for `float`s, +0.0, -0.0, and nan all become 0.
        """
        return self.unary_operator(Die._sign)

    # Equality and hashing.
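The `cmp` method above partitions all independent outcome pairs into less-than, tie, and greater-than cases. A stdlib-only sketch of the same bookkeeping (the helper name `cmp_quantities` is hypothetical):

```python
def cmp_quantities(a: dict[int, int], b: dict[int, int]) -> dict[int, int]:
    """Quantities of -1 (a < b), 0 (tie), and 1 (a > b) over all
    independent outcome pairs, dropping zero quantities as cmp() does."""
    data = {-1: 0, 0: 0, 1: 0}
    for x, qx in a.items():
        for y, qy in b.items():
            key = (x > y) - (x < y)  # the sign of the comparison
            data[key] += qx * qy
    return {k: v for k, v in data.items() if v}
```

For two standard d6, this gives 15/36 for each strict inequality and 6/36 for a tie.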

    def __bool__(self) -> bool:
        raise TypeError(
            'A `Die` only has a truth value if it is the result of == or !=.\n'
            'This could result from trying to use a die in an if-statement,\n'
            'in which case you should use `die.if_else()` instead.\n'
            'Or it could result from trying to use a `Die` inside a tuple or vector outcome,\n'
            'in which case you should use `tupleize()` or `vectorize()`.')

    @cached_property
    def _hash_key(self) -> tuple:
        """A tuple that uniquely (as `equals()`) identifies this die.

        Apart from being hashable and totally orderable, this is not guaranteed
        to be in any particular format or have any other properties.
        """
        return tuple(self.items())

    @cached_property
    def _hash(self) -> int:
        return hash(self._hash_key)

    def __hash__(self) -> int:
        return self._hash

    def equals(self, other, *, simplify: bool = False) -> bool:
        """`True` iff both dice have the same outcomes and quantities.

        This is `False` if `other` is not a `Die`, even if it would convert
        to an equal `Die`.

        Truth value does NOT matter.

        If one `Die` has a zero-quantity outcome and the other `Die` does not
        contain that outcome, they are treated as unequal by this function.

        The `==` and `!=` operators have a dual purpose; they return a `Die`
        with a truth value determined by this method.
        Only dice returned by these methods have a truth value. The data of
        these dice is lazily evaluated since the caller may only be interested
        in the `Die` value or the truth value.

        Args:
            simplify: If `True`, the dice will be simplified before comparing.
                Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
        """
        if not isinstance(other, Die):
            return False

        if simplify:
            return self.simplify()._hash_key == other.simplify()._hash_key
        else:
            return self._hash_key == other._hash_key

    # Strings.

    def __repr__(self) -> str:
        items_string = ', '.join(f'{repr(outcome)}: {weight}'
                                 for outcome, weight in self.items())
        return type(self).__qualname__ + '({' + items_string + '})'
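The `equals()` docstring notes that a 2:2 coin is not `equals()` to a 1:1 coin unless `simplify=True`. Simplification here means dividing all quantities by their greatest common divisor; a stdlib-only sketch (the helper name `simplify_quantities` is hypothetical):

```python
from functools import reduce
from math import gcd

def simplify_quantities(die: dict) -> dict:
    """Divide all quantities by their GCD, as Die.simplify() is described
    to do before an equals(simplify=True) comparison."""
    g = reduce(gcd, die.values())
    return {outcome: quantity // g for outcome, quantity in die.items()}
```

A 2:2 coin and a 1:1 coin have different raw quantities, but simplify to the same distribution.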
Sampling with replacement. Quantities represent weights.

Dice are immutable. Methods do not modify the `Die` in-place; rather they return a `Die` representing the result.

It's also possible to have "empty" dice with no outcomes at all, though these have little use other than being sentinel values.
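The immutability convention can be illustrated with a minimal sketch: a frozen dataclass (the class `Weighted` below is hypothetical, not part of icepool) whose "mutating" method returns a fresh instance, leaving the original untouched, just as `Die` methods do.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Weighted:
    """Immutable value: methods return new instances instead of mutating."""
    quantity: int

    def doubled(self) -> 'Weighted':
        # Returns a separate instance; self is unchanged.
        return Weighted(self.quantity * 2)
```

Attempting to assign to `quantity` raises `FrozenInstanceError`, and chained calls always operate on fresh instances.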
    def unary_operator(self: 'icepool.Die[T_co]', op: Callable[..., U], *args,
                       **kwargs) -> 'icepool.Die[U]':
        """Performs the unary operation on the outcomes.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        as well as the additional methods
        `zero, bool`.

        This is NOT used for the `[]` operator; when used directly, this is
        interpreted as a `Mapping` operation and returns the count corresponding
        to a given outcome. See `marginals()` for applying the `[]` operator to
        outcomes.

        Returns:
            A `Die` representing the result.

        Raises:
            ValueError: If tuples are of mismatched length.
        """
        return self._unary_operator(op, *args, **kwargs)
````python
def binary_operator(self, other: 'Die', op: Callable[..., U], *args,
                    **kwargs) -> 'Die[U]':
    """Performs the operation on pairs of outcomes.

    By the time this is called, the other operand has already been
    converted to a `Die`.

    If one side of a binary operator is a tuple and the other is not, the
    binary operator is applied to each element of the tuple with the
    non-tuple side. For example, the following are equivalent:

    ```python
    cartesian_product(d6, d8) * 2
    cartesian_product(d6 * 2, d8 * 2)
    ```

    This is used for the standard binary operators
    `+, -, *, /, //, %, **, <<, >>, &, |, ^`
    and the standard binary comparators
    `<, <=, >=, >, ==, !=, cmp`.

    `==` and `!=` additionally set the truth value of the `Die` according to
    whether the dice themselves are the same or not.

    The `@` operator does NOT use this method directly.
    It rolls the left `Die`, which must have integer outcomes,
    then rolls the right `Die` that many times and sums the outcomes.

    Returns:
        A `Die` representing the result.

    Raises:
        ValueError: If tuples are of mismatched length within one of the
            dice or between the dice.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for (outcome_self,
         quantity_self), (outcome_other,
                          quantity_other) in itertools.product(
                              self.items(), other.items()):
        new_outcome = op(outcome_self, outcome_other, *args, **kwargs)
        data[new_outcome] += quantity_self * quantity_other
    return self._new_type(data)
````
```python
def simplify(self) -> 'Die[T_co]':
    """Divides all quantities by their greatest common divisor."""
    return icepool.Die(self._data.simplify())
```
```python
def reroll(self,
           which: Callable[..., bool] | Collection[T_co] | None = None,
           /,
           *,
           star: bool | None = None,
           depth: int | Literal['inf']) -> 'Die[T_co]':
    """Rerolls the given outcomes.

    Args:
        which: Selects which outcomes to reroll. Options:
            * A collection of outcomes to reroll.
            * A callable that takes an outcome and returns `True` if it
              should be rerolled.
            * If not provided, the min outcome will be rerolled.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `'inf'`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if which is None:
        outcome_set = {self.min_outcome()}
    else:
        outcome_set = self._select_outcomes(which, star)

    if depth == 'inf' or depth is None:
        if depth is None:
            warnings.warn(
                "depth=None is deprecated; use depth='inf' instead.",
                category=DeprecationWarning,
                stacklevel=1)
        data = {
            outcome: quantity
            for outcome, quantity in self.items()
            if outcome not in outcome_set
        }
    elif depth < 0:
        raise ValueError('reroll depth cannot be negative.')
    else:
        total_reroll_quantity = sum(quantity
                                    for outcome, quantity in self.items()
                                    if outcome in outcome_set)
        total_stop_quantity = self.denominator() - total_reroll_quantity
        rerollable_factor = total_reroll_quantity**depth
        stop_factor = (self.denominator()**(depth + 1) - rerollable_factor
                       * total_reroll_quantity) // total_stop_quantity
        data = {
            outcome: (rerollable_factor *
                      quantity if outcome in outcome_set else stop_factor *
                      quantity)
            for outcome, quantity in self.items()
        }
    return icepool.Die(data)
```
```python
def filter(self,
           which: Callable[..., bool] | Collection[T_co],
           /,
           *,
           star: bool | None = None,
           depth: int | Literal['inf']) -> 'Die[T_co]':
    """Rerolls until getting one of the given outcomes.

    Essentially the complement of `reroll()`.

    Args:
        which: Selects which outcomes to reroll until. Options:
            * A callable that takes an outcome and returns `True` if it
              should be accepted.
            * A collection of outcomes to reroll until.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of times to reroll.
            If `'inf'`, rerolls an unlimited number of times.

    Returns:
        A `Die` representing the reroll.
        If the reroll would never terminate, the result has no outcomes.
    """

    if callable(which):
        if star is None:
            star = infer_star(which)
        if star:
            not_outcomes = {
                outcome
                for outcome in self.outcomes()
                if not which(*outcome)  # type: ignore
            }
        else:
            not_outcomes = {
                outcome
                for outcome in self.outcomes() if not which(outcome)
            }
    else:
        not_outcomes = {
            not_outcome
            for not_outcome in self.outcomes() if not_outcome not in which
        }
    return self.reroll(not_outcomes, depth=depth)
```
```python
def truncate(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Truncates the outcomes of this `Die` to the given range.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be truncated.

    This effectively rerolls outcomes outside the given range.
    If instead you want to replace those outcomes with the nearest endpoint,
    use `clip()`.

    Not to be confused with `trunc(die)`, which performs integer truncation
    on each outcome.
    """
    if min_outcome is not None:
        start = bisect.bisect_left(self.outcomes(), min_outcome)
    else:
        start = None
    if max_outcome is not None:
        stop = bisect.bisect_right(self.outcomes(), max_outcome)
    else:
        stop = None
    data = {k: v for k, v in self.items()[start:stop]}
    return icepool.Die(data)
```
```python
def clip(self, min_outcome=None, max_outcome=None) -> 'Die[T_co]':
    """Clips the outcomes of this `Die` to the given values.

    The endpoints are included in the result if applicable.
    If one of the arguments is not provided, that side will not be clipped.

    This is not the same as rerolling outcomes beyond this range;
    the outcome is simply adjusted to fit within the range.
    This will typically cause some quantity to bunch up at the endpoint(s).
    If you want to reroll outcomes beyond this range, use `truncate()`.
    """
    data: MutableMapping[Any, int] = defaultdict(int)
    for outcome, quantity in self.items():
        if min_outcome is not None and outcome <= min_outcome:
            data[min_outcome] += quantity
        elif max_outcome is not None and outcome >= max_outcome:
            data[max_outcome] += quantity
        else:
            data[outcome] += quantity
    return icepool.Die(data)
```
```python
def map(
    self,
    repl:
    'Callable[..., U | Die[U] | icepool.RerollType | icepool.AgainExpression] | Mapping[T_co, U | Die[U] | icepool.RerollType | icepool.AgainExpression]',
    /,
    *extra_args,
    star: bool | None = None,
    repeat: int | Literal['inf'] = 1,
    time_limit: int | Literal['inf'] | None = None,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None
) -> 'Die[U]':
    """Maps outcomes of the `Die` to other outcomes.

    This is also useful for representing processes.

    As `icepool.map(repl, self, ...)`.
    """
    return icepool.map(repl,
                       self,
                       *extra_args,
                       star=star,
                       repeat=repeat,
                       time_limit=time_limit,
                       again_count=again_count,
                       again_depth=again_depth,
                       again_end=again_end)
```
```python
def map_and_time(
    self,
    repl:
    'Callable[..., T_co | Die[T_co] | icepool.RerollType] | Mapping[T_co, T_co | Die[T_co] | icepool.RerollType]',
    /,
    *extra_args,
    star: bool | None = None,
    time_limit: int) -> 'Die[tuple[T_co, int]]':
    """Repeatedly map outcomes of the state to other outcomes, while also
    counting timesteps.

    This is useful for representing processes.

    As `map_and_time(repl, self, ...)`.
    """
    return icepool.map_and_time(repl,
                                self,
                                *extra_args,
                                star=star,
                                time_limit=time_limit)
```
```python
def time_to_sum(self: 'Die[int]',
                target: int,
                /,
                max_time: int,
                dnf: 'int|icepool.RerollType|None' = None) -> 'Die[int]':
    """The number of rolls until the cumulative sum is greater than or equal to the target.

    Args:
        target: The number to stop at once reached.
        max_time: The maximum number of rolls to run.
            If the sum is not reached, the outcome is determined by `dnf`.
        dnf: What time to assign in cases where the target was not reached
            in `max_time`. If not provided, this is set to `max_time`.
            `dnf=icepool.Reroll` will remove this case from the result,
            effectively rerolling it.
    """
    if target <= 0:
        return Die([0])

    if dnf is None:
        dnf = max_time

    def step(total, roll):
        return min(total + roll, target)

    result: 'Die[tuple[int, int]]' = Die([0]).map_and_time(
        step, self, time_limit=max_time)

    def get_time(total, time):
        if total < target:
            return dnf
        else:
            return time

    return result.map(get_time)
```
```python
def mean_time_to_sum(self: 'Die[int]', target: int, /) -> Fraction:
    """The mean number of rolls until the cumulative sum is greater than or equal to the target.

    Args:
        target: The target sum.

    Raises:
        ValueError: If `self` has negative outcomes.
        ZeroDivisionError: If `self.mean() == 0`.
    """
    target = max(target, 0)

    if target < len(self._mean_time_to_sum_cache):
        return self._mean_time_to_sum_cache[target]

    if self.min_outcome() < 0:
        raise ValueError(
            'mean_time_to_sum does not handle negative outcomes.')
    time_per_effect = Fraction(self.denominator(),
                               self.denominator() - self.quantity(0))

    for i in range(len(self._mean_time_to_sum_cache), target + 1):
        result = time_per_effect + self.reroll([
            0
        ], depth='inf').map(lambda x: self.mean_time_to_sum(i - x)).mean()
        self._mean_time_to_sum_cache.append(result)

    return result
```
```python
def explode(self,
            which: Collection[T_co] | Callable[..., bool] | None = None,
            /,
            *,
            star: bool | None = None,
            depth: int = 9,
            end=None) -> 'Die[T_co]':
    """Causes outcomes to be rolled again and added to the total.

    Args:
        which: Which outcomes to explode. Options:
            * A collection of outcomes to explode.
            * A callable that takes an outcome and returns `True` if it
              should be exploded.
            * If not supplied, the max outcome will explode.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `which`.
            If not provided, this will be guessed based on the function
            signature.
        depth: The maximum number of additional dice to roll, not counting
            the initial roll.
            If not supplied, a default value will be used.
        end: Once `depth` is reached, further explosions will be treated
            as this value. By default, a zero value will be used.
            `icepool.Reroll` will make one extra final roll, rerolling until
            a non-exploding outcome is reached.
    """

    if which is None:
        outcome_set = {self.max_outcome()}
    else:
        outcome_set = self._select_outcomes(which, star)

    if depth < 0:
        raise ValueError('depth cannot be negative.')
    elif depth == 0:
        return self

    def map_final(outcome):
        if outcome in outcome_set:
            return outcome + icepool.Again
        else:
            return outcome

    return self.map(map_final, again_depth=depth, again_end=end)
```
```python
def if_else(
    self,
    outcome_if_true: U | 'Die[U]',
    outcome_if_false: U | 'Die[U]',
    *,
    again_count: int | None = None,
    again_depth: int | None = None,
    again_end: 'U | Die[U] | icepool.RerollType | None' = None
) -> 'Die[U]':
    """Ternary conditional operator.

    This replaces truthy outcomes with the first argument and falsy outcomes
    with the second argument.

    Args:
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """
    return self.map(lambda x: bool(x)).map(
        {
            True: outcome_if_true,
            False: outcome_if_false
        },
        again_count=again_count,
        again_depth=again_depth,
        again_end=again_end)
```
```python
def is_in(self, target: Container[T_co], /) -> 'Die[bool]':
    """A die that returns `True` iff the roll of the die is contained in the target."""
    return self.map(lambda x: x in target)
```
```python
def count(self, rolls: int, target: Container[T_co], /) -> 'Die[int]':
    """Rolls this die a number of times and counts how many are in the target."""
    return rolls @ self.is_in(target)
```
```python
def sequence(self, rolls: int) -> 'icepool.Die[tuple[T_co, ...]]':
    """Possible sequences produced by rolling this die a number of times.

    This is extremely expensive computationally. If possible, use `reduce()`
    instead; if you don't care about order, `Die.pool()` is better.
    """
    return icepool.cartesian_product(*(self for _ in range(rolls)),
                                     outcome_type=tuple)  # type: ignore
```
```python
def pool(self, rolls: int | Sequence[int] = 1, /) -> 'icepool.Pool[T_co]':
    """Creates a `Pool` from this `Die`.

    You might subscript the pool immediately afterwards, e.g.
    `d6.pool(5)[-1, ..., 1]` takes the difference between the highest and
    lowest of 5d6.

    Args:
        rolls: The number of copies of this `Die` to put in the pool.
            Or, a sequence of one `int` per die acting as
            `keep_tuple`. Note that `...` cannot be used in the
            argument to this method, as the argument determines the size of
            the pool.
    """
    if isinstance(rolls, int):
        return icepool.Pool({self: rolls})
    else:
        pool_size = len(rolls)
        # Haven't dealt with narrowing return type.
        return icepool.Pool({self: pool_size})[rolls]  # type: ignore
```
```python
def keep(self,
         rolls: int | Sequence[int],
         index: slice | Sequence[int | EllipsisType] | int | None = None,
         /) -> 'Die':
    """Selects elements after drawing and sorting and sums them.

    Args:
        rolls: The number of dice to roll.
        index: One of the following:
            * An `int`. This will count only the roll at the specified index.
              In this case, the result is a `Die` rather than a generator.
            * A `slice`. The selected dice are counted once each.
            * A sequence of `int`s with length equal to `rolls`.
              Each roll is counted that many times, which could be multiple or
              negative times.

              Up to one `...` (`Ellipsis`) may be used. If no `...` is used,
              the `rolls` argument may be omitted.

              `...` will be replaced with a number of zero counts in order
              to make up any missing elements compared to `rolls`.
              This number may be "negative" if more `int`s are provided than
              `rolls`. Specifically:

              * If `index` is shorter than `rolls`, `...`
                acts as enough zero counts to make up the difference.
                E.g. `(1, ..., 1)` on five dice would act as
                `(1, 0, 0, 0, 1)`.
              * If `index` has length equal to `rolls`, `...` has no effect.
                E.g. `(1, ..., 1)` on two dice would act as `(1, 1)`.
              * If `index` is longer than `rolls` and `...` is on one side,
                elements will be dropped from `index` on the side with `...`.
                E.g. `(..., 1, 2, 3)` on two dice would act as `(2, 3)`.
              * If `index` is longer than `rolls` and `...`
                is in the middle, the counts will be as the sum of two
                one-sided `...`.
                E.g. `(-1, ..., 1)` acts like `(-1, ...)` plus `(..., 1)`.
                If `rolls` was 1 this would have the -1 and 1 cancel each other out.
    """
    if isinstance(rolls, int):
        if index is None:
            raise ValueError(
                'If the number of rolls is an integer, an index argument must be provided.'
            )
        if isinstance(index, int):
            return self.pool(rolls).keep(index)
        else:
            return self.pool(rolls).keep(index).sum()  # type: ignore
    else:
        if index is not None:
            raise ValueError('Only one index sequence can be given.')
        return self.pool(len(rolls)).keep(rolls).sum()  # type: ignore
```
```python
def lowest(self,
           rolls: int,
           /,
           keep: int | None = None,
           drop: int | None = None) -> 'Die':
    """Roll several of this `Die` and return the lowest result, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll. All dice will have the same
            outcomes as `self`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be summed.
            * If only `drop` is provided, the `drop` lowest dice will be dropped
              and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
              the next `keep` lowest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = lowest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == 0 and canonical.stop == 1:
        return self._lowest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
```
```python
def highest(self,
            rolls: int,
            /,
            keep: int | None = None,
            drop: int | None = None) -> 'Die[T_co]':
    """Roll several of this `Die` and return the highest result, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be summed.
            * If only `drop` is provided, the `drop` highest dice will be dropped
              and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped, then
              the next `keep` highest dice will be summed.

    Returns:
        A `Die` representing the probability distribution of the sum.
    """
    index = highest_slice(keep, drop)
    canonical = canonical_slice(index, rolls)
    if canonical.start == rolls - 1 and canonical.stop == rolls:
        return self._highest_single(rolls)
    # Expression evaluators are difficult to type.
    return self.pool(rolls)[index].sum()  # type: ignore
```
```python
def middle(
        self,
        rolls: int,
        /,
        keep: int = 1,
        *,
        tie: Literal['error', 'high', 'low'] = 'error') -> 'icepool.Die':
    """Roll several of this `Die` and sum the sorted results in the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        rolls: The number of dice to roll.
        keep: The number of outcomes to sum. If this is greater than the
            current keep_size, all are kept.
        tie: What to do if `keep` is odd but the current keep_size
            is even, or vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
    """
    # Expression evaluators are difficult to type.
    return self.pool(rolls).middle(keep, tie=tie).sum()  # type: ignore
```
```python
def map_to_pool(
    self,
    repl:
    'Callable[..., Sequence[icepool.Die[U] | U] | Mapping[icepool.Die[U], int] | Mapping[U, int] | icepool.RerollType] | None' = None,
    /,
    *extra_args: 'Outcome | icepool.Die | icepool.MultisetExpression',
    star: bool | None = None,
    denominator: int | None = None
) -> 'icepool.MultisetGenerator[U, tuple[int]]':
    """EXPERIMENTAL: Maps outcomes of this `Die` to `Pools`, creating a `MultisetGenerator`.

    As `icepool.map_to_pool(repl, self, ...)`.

    If no argument is provided, the outcomes will be used to construct a
    mixture of pools directly, similar to the inverse of `pool.expand()`.
    Note that this is not particularly efficient since it does not make much
    use of dynamic programming.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
              produces a `Pool` (or something convertible to such).
            * A mapping from old outcomes to `Pool`
              (or something convertible to such).
              In this case args must have exactly one element.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        star: If `True`, the first of the args will be unpacked before
            giving them to `repl`.
            If not provided, it will be guessed based on the signature of
            `repl` and the number of arguments.
        denominator: If provided, the denominator of the result will be this
            value. Otherwise it will be the minimum to correctly weight the
            pools.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.

    Raises:
        ValueError: If `denominator` cannot be made consistent with the
            resulting mixture of pools.
    """
    if repl is None:
        repl = lambda x: x
    return icepool.map_to_pool(repl,
                               self,
                               *extra_args,
                               star=star,
                               denominator=denominator)
```
EXPERIMENTAL: Maps outcomes of this `Die` to `Pool`s, creating a `MultisetGenerator`.

As `icepool.map_to_pool(repl, self, ...)`.

If no argument is provided, the outcomes will be used to construct a mixture of pools directly, similar to the inverse of `pool.expand()`. Note that this is not particularly efficient since it does not make much use of dynamic programming.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a `Pool` (or something convertible to such).
  - A mapping from old outcomes to `Pool` (or something convertible to such). In this case args must have exactly one element.
  The new outcomes may be dice rather than just single outcomes. The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`. If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- denominator: If provided, the denominator of the result will be this value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the resulting mixture of pools.
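The "inverse of `pool.expand()`" remark can be pictured with a toy mixture in plain Python (not icepool; the quantities are hypothetical): a die whose outcomes are sorted tuples becomes a mixture where each tuple is a pool of fixed outcomes, weighted by the tuple's quantity.

```python
from fractions import Fraction

# A die whose outcomes are sorted tuples, e.g. what expanding some
# 2-die pool might produce (hypothetical numbers).
tuple_die = {(1, 1): 1, (1, 2): 2, (2, 2): 1}
denominator = sum(tuple_die.values())

# Mapping it "back to pools" yields a mixture: each tuple becomes a
# pool of fixed outcomes, weighted by the tuple's quantity.
mixture = {outcome: Fraction(q, denominator)
           for outcome, q in tuple_die.items()}
```

Each multiset is then produced with exactly the probability the tuple had in the original die.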
1010 def explode_to_pool( 1011 self, 1012 rolls: int, 1013 which: Collection[T_co] | Callable[..., bool] | None = None, 1014 /, 1015 *, 1016 star: bool | None = None, 1017 depth: int = 9) -> 'icepool.MultisetGenerator[T_co, tuple[int]]': 1018 """EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool. 1019 1020 Args: 1021 rolls: The number of initial dice. 1022 which: Which outcomes to explode. Options: 1023 * A single outcome to explode. 1024 * A collection of outcomes to explode. 1025 * A callable that takes an outcome and returns `True` if it 1026 should be exploded. 1027 * If not supplied, the max outcome will explode. 1028 star: Whether outcomes should be unpacked into separate arguments 1029 before sending them to a callable `which`. 1030 If not provided, this will be guessed based on the function 1031 signature. 1032 depth: The maximum depth of explosions for an individual die. 1033 1034 Returns: 1035 A `MultisetGenerator` representing the mixture of `Pool`s. Note 1036 that this is not technically a `Pool`, though it supports most of 1037 the same operations.
1038 """ 1039 if depth == 0: 1040 return self.pool(rolls) 1041 if which is None: 1042 explode_set = {self.max_outcome()} 1043 else: 1044 explode_set = self._select_outcomes(which, star) 1045 if not explode_set: 1046 return self.pool(rolls) 1047 explode: 'Die[T_co]' 1048 not_explode: 'Die[T_co]' 1049 explode, not_explode = self.split(explode_set) 1050 1051 single_data: 'MutableMapping[icepool.Vector[int], int]' = defaultdict( 1052 int) 1053 for i in range(depth + 1): 1054 weight = explode.denominator()**i * self.denominator()**( 1055 depth - i) * not_explode.denominator() 1056 single_data[icepool.Vector((i, 1))] += weight 1057 single_data[icepool.Vector( 1058 (depth + 1, 0))] += explode.denominator()**(depth + 1) 1059 1060 single_count_die: 'Die[icepool.Vector[int]]' = Die(single_data) 1061 count_die = rolls @ single_count_die 1062 1063 return count_die.map_to_pool( 1064 lambda x, nx: [explode] * x + [not_explode] * nx)
EXPERIMENTAL: Causes outcomes to be rolled again, keeping that outcome as an individual die in a pool.

Arguments:
- rolls: The number of initial dice.
- which: Which outcomes to explode. Options:
  - A single outcome to explode.
  - A collection of outcomes to explode.
  - A callable that takes an outcome and returns `True` if it should be exploded.
  - If not supplied, the max outcome will explode.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `which`. If not provided, this will be guessed based on the function signature.
- depth: The maximum depth of explosions for an individual die.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.
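The exploding semantics can be checked by brute force in plain Python (not using icepool). A minimal sketch, assuming a single die with faces 1..`sides` whose max face explodes up to `depth` extra times, with every rolled die kept in the pool:

```python
from collections import defaultdict
from fractions import Fraction

def explode_pool_sums(sides: int, depth: int) -> dict[int, Fraction]:
    """Distribution of the sum of the pool produced by one exploding die.

    The max face explodes: the rolled die stays in the pool and an
    additional die is rolled, up to `depth` extra rolls.
    """
    dist: dict[int, Fraction] = defaultdict(Fraction)

    def roll(total: int, p: Fraction, remaining: int) -> None:
        for face in range(1, sides + 1):
            q = p / sides
            if face == sides and remaining > 0:
                roll(total + face, q, remaining - 1)  # keep die, roll again
            else:
                dist[total + face] += q

    roll(0, Fraction(1), depth)
    return dict(dist)
```

For a coin-like d2 with `depth=1`, the pool sums to 1 half the time, and to 3 or 4 a quarter of the time each.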
1066 def reroll_to_pool( 1067 self, 1068 rolls: int, 1069 which: Callable[..., bool] | Collection[T_co], 1070 /, 1071 max_rerolls: int, 1072 *, 1073 star: bool | None = None, 1074 mode: Literal['random', 'lowest', 'highest', 'drop'] = 'random' 1075 ) -> 'icepool.MultisetGenerator[T_co, tuple[int]]': 1076 """EXPERIMENTAL: Applies a limited number of rerolls shared across a pool. 1077 1078 Each die can only be rerolled once (effectively `depth=1`), and no more 1079 than `max_rerolls` dice may be rerolled. 1080 1081 Args: 1082 rolls: How many dice in the pool. 1083 which: Selects which outcomes are eligible to be rerolled. Options: 1084 * A collection of outcomes to reroll. 1085 * A callable that takes an outcome and returns `True` if it 1086 could be rerolled. 1087 max_rerolls: The maximum number of dice to reroll. 1088 Note that each die can only be rerolled once, so if the number 1089 of eligible dice is less than this, the excess rerolls have no 1090 effect. 1091 star: Whether outcomes should be unpacked into separate arguments 1092 before sending them to a callable `which`. 1093 If not provided, this will be guessed based on the function 1094 signature. 1095 mode: How dice are selected for rerolling if there are more eligible 1096 dice than `max_rerolls`. Options: 1097 * `'random'` (default): Eligible dice will be chosen uniformly 1098 at random. 1099 * `'lowest'`: The lowest eligible dice will be rerolled. 1100 * `'highest'`: The highest eligible dice will be rerolled. 1101 * `'drop'`: All dice that ended up on an outcome selected by 1102 `which` will be dropped. This includes both dice that rolled 1103 into `which` initially and were not rerolled, and dice that 1104 were rerolled but rolled into `which` again. This can be 1105 considerably more efficient than the other modes. 1106 1107 Returns: 1108 A `MultisetGenerator` representing the mixture of `Pool`s. Note 1109 that this is not technically a `Pool`, though it supports most of 1110 the same operations. 
1111 """ 1112 rerollable_set = self._select_outcomes(which, star) 1113 if not rerollable_set: 1114 return self.pool(rolls) 1115 1116 rerollable_die: 'Die[T_co]' 1117 not_rerollable_die: 'Die[T_co]' 1118 rerollable_die, not_rerollable_die = self.split(rerollable_set) 1119 single_is_rerollable = icepool.coin(rerollable_die.denominator(), 1120 self.denominator()) 1121 rerollable = rolls @ single_is_rerollable 1122 1123 def split(initial_rerollable: int) -> Die[tuple[int, int, int]]: 1124 """Computes the composition of the pool. 1125 1126 Returns: 1127 initial_rerollable: The number of dice that initially fell into 1128 the rerollable set. 1129 rerolled_to_rerollable: The number of dice that were rerolled, 1130 but fell into the rerollable set again. 1131 not_rerollable: The number of dice that ended up outside the 1132 rerollable set, including both initial and rerolled dice. 1133 not_rerolled: The number of dice that were eligible for 1134 rerolling but were not rerolled. 1135 """ 1136 initial_not_rerollable = rolls - initial_rerollable 1137 rerolled = min(initial_rerollable, max_rerolls) 1138 not_rerolled = initial_rerollable - rerolled 1139 1140 def second_split(rerolled_to_rerollable): 1141 """Splits the rerolled dice into those that fell into the rerollable and not-rerollable sets.""" 1142 rerolled_to_not_rerollable = rerolled - rerolled_to_rerollable 1143 return icepool.tupleize( 1144 initial_rerollable, rerolled_to_rerollable, 1145 initial_not_rerollable + rerolled_to_not_rerollable, 1146 not_rerolled) 1147 1148 return icepool.map(second_split, 1149 rerolled @ single_is_rerollable, 1150 star=False) 1151 1152 pool_composition = rerollable.map(split, star=False) 1153 1154 def make_pool(initial_rerollable, rerolled_to_rerollable, 1155 not_rerollable, not_rerolled): 1156 common = rerollable_die.pool( 1157 rerolled_to_rerollable) + not_rerollable_die.pool( 1158 not_rerollable) 1159 match mode: 1160 case 'random': 1161 return common + 
rerollable_die.pool(not_rerolled) 1162 case 'lowest': 1163 return common + rerollable_die.pool( 1164 initial_rerollable).highest(not_rerolled) 1165 case 'highest': 1166 return common + rerollable_die.pool( 1167 initial_rerollable).lowest(not_rerolled) 1168 case 'drop': 1169 return not_rerollable_die.pool(not_rerollable) 1170 case _: 1171 raise ValueError( 1172 f"Invalid mode '{mode}'. Allowed values are 'random', 'lowest', 'highest', 'drop'." 1173 ) 1174 1175 denominator = self.denominator()**(rolls + min(rolls, max_rerolls)) 1176 1177 return pool_composition.map_to_pool(make_pool, 1178 star=True, 1179 denominator=denominator)
EXPERIMENTAL: Applies a limited number of rerolls shared across a pool.

Each die can only be rerolled once (effectively `depth=1`), and no more than `max_rerolls` dice may be rerolled.

Arguments:
- rolls: How many dice in the pool.
- which: Selects which outcomes are eligible to be rerolled. Options:
  - A collection of outcomes to reroll.
  - A callable that takes an outcome and returns `True` if it could be rerolled.
- max_rerolls: The maximum number of dice to reroll. Note that each die can only be rerolled once, so if the number of eligible dice is less than this, the excess rerolls have no effect.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `which`. If not provided, this will be guessed based on the function signature.
- mode: How dice are selected for rerolling if there are more eligible dice than `max_rerolls`. Options:
  - `'random'` (default): Eligible dice will be chosen uniformly at random.
  - `'lowest'`: The lowest eligible dice will be rerolled.
  - `'highest'`: The highest eligible dice will be rerolled.
  - `'drop'`: All dice that ended up on an outcome selected by `which` will be dropped. This includes both dice that rolled into `which` initially and were not rerolled, and dice that were rerolled but rolled into `which` again. This can be considerably more efficient than the other modes.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note that this is not technically a `Pool`, though it supports most of the same operations.
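The shared-reroll budget can be verified by brute force in plain Python (not using icepool). A minimal sketch, assuming a pool of d2s where 1s are eligible for rerolling, each die rerolled at most once and at most `max_rerolls` dice rerolled in total:

```python
from fractions import Fraction
from itertools import product

def p_all_twos(rolls: int, max_rerolls: int) -> Fraction:
    """Probability that every die in a d2 pool ends on 2 when 1s may be
    rerolled: each die at most once, at most max_rerolls dice total."""
    total = Fraction(0)
    for initial in product((1, 2), repeat=rolls):
        p_initial = Fraction(1, 2**rolls)
        ones = sum(1 for x in initial if x == 1)
        rerolled = min(ones, max_rerolls)  # shared reroll budget
        if ones == rerolled:
            # All 1s get a second chance; each lands on 2 with prob 1/2.
            total += p_initial * Fraction(1, 2)**rerolled
        # Otherwise at least one 1 remains unrerolled and the event fails.
    return total
```

With 2 dice and 1 shared reroll, P(both end on 2) is 1/2; a second reroll raises it to 9/16.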
1202 def stochastic_round(self, 1203 *, 1204 max_denominator: int | None = None) -> 'Die[int]': 1205 """Randomly rounds outcomes up or down to the nearest integer according to the two distances. 1206 1207 Specifically, rounds `x` up with probability `x - floor(x)` and down 1208 otherwise. 1209 1210 Args: 1211 max_denominator: If provided, each rounding will be performed 1212 using `fractions.Fraction.limit_denominator(max_denominator)`. 1213 Otherwise, the rounding will be performed without 1214 `limit_denominator`. 1215 """ 1216 return self.map(lambda x: icepool.stochastic_round( 1217 x, max_denominator=max_denominator))
Randomly rounds outcomes up or down to the nearest integer according to the two distances.

Specifically, rounds `x` up with probability `x - floor(x)` and down otherwise.

Arguments:
- max_denominator: If provided, each rounding will be performed using `fractions.Fraction.limit_denominator(max_denominator)`. Otherwise, the rounding will be performed without `limit_denominator`.
1436 def cmp(self, other) -> 'Die[int]': 1437 """A `Die` with outcomes 1, -1, and 0. 1438 1439 The quantities are equal to the positive outcome of `self > other`, 1440 `self < other`, and the remainder respectively. 1441 """ 1442 other = implicit_convert_to_die(other) 1443 1444 data = {} 1445 1446 lt = self < other 1447 if True in lt: 1448 data[-1] = lt[True] 1449 eq = self == other 1450 if True in eq: 1451 data[0] = eq[True] 1452 gt = self > other 1453 if True in gt: 1454 data[1] = gt[True] 1455 1456 return Die(data)
A `Die` with outcomes 1, -1, and 0.

The quantities are equal to the positive outcome of `self > other`, `self < other`, and the remainder respectively.
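The three quantities can be computed directly by enumerating outcome pairs; a standalone sketch over plain outcome-to-quantity mappings (not icepool objects):

```python
def cmp_quantities(a: dict[int, int], b: dict[int, int]) -> dict[int, int]:
    """Quantities of outcomes 1, -1, and 0 when comparing two
    independent dice given as outcome -> quantity mappings."""
    result = {1: 0, -1: 0, 0: 0}
    for x, qx in a.items():
        for y, qy in b.items():
            key = (x > y) - (x < y)  # sign of the comparison
            result[key] += qx * qy
    return {k: v for k, v in result.items() if v}
```

Comparing a d2 against a d2 gives quantities 1, 1, and 2 for outcomes 1, -1, and 0, over a denominator of 4.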
1501 def equals(self, other, *, simplify: bool = False) -> bool: 1502 """`True` iff both dice have the same outcomes and quantities. 1503 1504 This is `False` if `other` is not a `Die`, even if it would convert 1505 to an equal `Die`. 1506 1507 Truth value does NOT matter. 1508 1509 If one `Die` has a zero-quantity outcome and the other `Die` does not 1510 contain that outcome, they are treated as unequal by this function. 1511 1512 The `==` and `!=` operators have a dual purpose; they return a `Die` 1513 with a truth value determined by this method. 1514 Only dice returned by these methods have a truth value. The data of 1515 these dice is lazily evaluated since the caller may only be interested 1516 in the `Die` value or the truth value. 1517 1518 Args: 1519 simplify: If `True`, the dice will be simplified before comparing. 1520 Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin. 1521 """ 1522 if not isinstance(other, Die): 1523 return False 1524 1525 if simplify: 1526 return self.simplify()._hash_key == other.simplify()._hash_key 1527 else: 1528 return self._hash_key == other._hash_key
`True` iff both dice have the same outcomes and quantities.

This is `False` if `other` is not a `Die`, even if it would convert to an equal `Die`.

Truth value does NOT matter.

If one `Die` has a zero-quantity outcome and the other `Die` does not contain that outcome, they are treated as unequal by this function.

The `==` and `!=` operators have a dual purpose; they return a `Die` with a truth value determined by this method. Only dice returned by these methods have a truth value. The data of these dice is lazily evaluated since the caller may only be interested in the `Die` value or the truth value.

Arguments:
- simplify: If `True`, the dice will be simplified before comparing. Otherwise, e.g. a 2:2 coin is not `equals()` to a 1:1 coin.
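The 2:2 vs 1:1 coin example comes down to reducing quantities by their GCD before comparing; a minimal stand-in for the simplification step (a sketch of the idea, not icepool's actual code):

```python
from functools import reduce
from math import gcd

def simplify(die: dict[int, int]) -> dict[int, int]:
    """Divide all quantities by their GCD, mirroring what
    simplification before comparison accomplishes."""
    g = reduce(gcd, die.values())
    return {outcome: q // g for outcome, q in die.items()}
```

A 2:2 coin and a 1:1 coin differ as raw mappings but become identical after simplification.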
28class Population(ABC, Expandable[T_co], Mapping[Any, int]): 29 """A mapping from outcomes to `int` quantities. 30 31 Outcomes with each instance must be hashable and totally orderable. 32 33 Subclasses include `Die` and `Deck`. 34 """ 35 36 # Abstract methods. 37 38 @property 39 @abstractmethod 40 def _new_type(self) -> type: 41 """The type to use when constructing a new instance.""" 42 43 @abstractmethod 44 def keys(self) -> CountsKeysView[T_co]: 45 """The outcomes within the population in sorted order.""" 46 47 @abstractmethod 48 def values(self) -> CountsValuesView: 49 """The quantities within the population in outcome order.""" 50 51 @abstractmethod 52 def items(self) -> CountsItemsView[T_co]: 53 """The (outcome, quantity)s of the population in sorted order.""" 54 55 @property 56 def _items_for_cartesian_product(self) -> Sequence[tuple[T_co, int]]: 57 return self.items() 58 59 def _unary_operator(self, op: Callable, *args, **kwargs): 60 data: MutableMapping[Any, int] = defaultdict(int) 61 for outcome, quantity in self.items(): 62 new_outcome = op(outcome, *args, **kwargs) 63 data[new_outcome] += quantity 64 return self._new_type(data) 65 66 # Outcomes. 67 68 def outcomes(self) -> CountsKeysView[T_co]: 69 """The outcomes of the mapping in ascending order. 70 71 These are also the `keys` of the mapping. 72 Prefer to use the name `outcomes`. 73 """ 74 return self.keys() 75 76 @cached_property 77 def _common_outcome_length(self) -> int | None: 78 result = None 79 for outcome in self.outcomes(): 80 if isinstance(outcome, Mapping): 81 return None 82 elif isinstance(outcome, Sized): 83 if result is None: 84 result = len(outcome) 85 elif len(outcome) != result: 86 return None 87 return result 88 89 def common_outcome_length(self) -> int | None: 90 """The common length of all outcomes. 91 92 If outcomes have no lengths or different lengths, the result is `None`. 
93 """ 94 return self._common_outcome_length 95 96 def is_empty(self) -> bool: 97 """`True` iff this population has no outcomes. """ 98 return len(self) == 0 99 100 def min_outcome(self) -> T_co: 101 """The least outcome.""" 102 return self.outcomes()[0] 103 104 def max_outcome(self) -> T_co: 105 """The greatest outcome.""" 106 return self.outcomes()[-1] 107 108 def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome, 109 /) -> T_co | None: 110 """The nearest outcome in this population fitting the comparison. 111 112 Args: 113 comparison: The comparison which the result must fit. For example, 114 '<=' would find the greatest outcome that is not greater than 115 the argument. 116 outcome: The outcome to compare against. 117 118 Returns: 119 The nearest outcome fitting the comparison, or `None` if there is 120 no such outcome. 121 """ 122 match comparison: 123 case '<=': 124 if outcome in self: 125 return outcome 126 index = bisect.bisect_right(self.outcomes(), outcome) - 1 127 if index < 0: 128 return None 129 return self.outcomes()[index] 130 case '<': 131 index = bisect.bisect_left(self.outcomes(), outcome) - 1 132 if index < 0: 133 return None 134 return self.outcomes()[index] 135 case '>=': 136 if outcome in self: 137 return outcome 138 index = bisect.bisect_left(self.outcomes(), outcome) 139 if index >= len(self): 140 return None 141 return self.outcomes()[index] 142 case '>': 143 index = bisect.bisect_right(self.outcomes(), outcome) 144 if index >= len(self): 145 return None 146 return self.outcomes()[index] 147 case _: 148 raise ValueError(f'Invalid comparison {comparison}') 149 150 @staticmethod 151 def _zero(x): 152 return x * 0 153 154 def zero(self: C) -> C: 155 """Zeros all outcomes of this population. 156 157 This is done by multiplying all outcomes by `0`. 158 159 The result will have the same denominator. 160 161 Raises: 162 ValueError: If the zeros did not resolve to a single outcome. 
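The four comparison cases of `nearest` reduce to `bisect_left`/`bisect_right` calls with an off-by-one for the strict and non-strict variants. A standalone mirror over a plain sorted list (not an icepool `Population`):

```python
import bisect

def nearest(outcomes: list[int], comparison: str, outcome: int):
    """Nearest element of a sorted list fitting the comparison,
    or None if there is no such element."""
    if comparison == '<=':
        i = bisect.bisect_right(outcomes, outcome) - 1
        return outcomes[i] if i >= 0 else None
    if comparison == '<':
        i = bisect.bisect_left(outcomes, outcome) - 1
        return outcomes[i] if i >= 0 else None
    if comparison == '>=':
        i = bisect.bisect_left(outcomes, outcome)
        return outcomes[i] if i < len(outcomes) else None
    if comparison == '>':
        i = bisect.bisect_right(outcomes, outcome)
        return outcomes[i] if i < len(outcomes) else None
    raise ValueError(f'Invalid comparison {comparison}')
```

For example, over outcomes [1, 3, 5], the greatest outcome not greater than 4 is 3, and there is no outcome greater than 5.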
163 """ 164 result = self._unary_operator(Population._zero) 165 if len(result) != 1: 166 raise ValueError('zero() did not resolve to a single outcome.') 167 return result 168 169 def zero_outcome(self) -> T_co: 170 """A zero-outcome for this population. 171 172 E.g. `0` for a `Population` whose outcomes are `int`s. 173 """ 174 return self.zero().outcomes()[0] 175 176 # Quantities. 177 178 @overload 179 def quantity(self, outcome: Hashable, /) -> int: 180 """The quantity of a single outcome.""" 181 182 @overload 183 def quantity(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 184 outcome: Hashable, /) -> int: 185 """The total quantity fitting a comparison to a single outcome.""" 186 187 def quantity(self, 188 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 189 | Hashable, 190 outcome: Hashable | None = None, 191 /) -> int: 192 """The quantity of a single outcome. 193 194 A comparison can be provided, in which case this returns the total 195 quantity fitting the comparison. 196 197 Args: 198 comparison: The comparison to use. This can be omitted, in which 199 case it is treated as '=='. 200 outcome: The outcome to query. 
201 """ 202 if outcome is None: 203 outcome = comparison 204 comparison = '==' 205 else: 206 comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'], 207 comparison) 208 209 match comparison: 210 case '==': 211 return self.get(outcome, 0) 212 case '!=': 213 return self.denominator() - self.get(outcome, 0) 214 case '<=' | '<': 215 threshold = self.nearest(comparison, outcome) 216 if threshold is None: 217 return 0 218 else: 219 return self._cumulative_quantities[threshold] 220 case '>=': 221 return self.denominator() - self.quantity('<', outcome) 222 case '>': 223 return self.denominator() - self.quantity('<=', outcome) 224 case _: 225 raise ValueError(f'Invalid comparison {comparison}') 226 227 @overload 228 def quantities(self, /) -> CountsValuesView: 229 """All quantities in sorted order.""" 230 231 @overload 232 def quantities(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 233 /) -> Sequence[int]: 234 """The total quantities fitting the comparison for each outcome in sorted order. 235 236 For example, '<=' gives the CDF. 237 """ 238 239 def quantities(self, 240 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 241 | None = None, 242 /) -> CountsValuesView | Sequence[int]: 243 """The quantities of the mapping in sorted order. 244 245 For example, '<=' gives the CDF. 246 247 Args: 248 comparison: Optional. If omitted, this defaults to '=='. 
249 """ 250 if comparison is None: 251 comparison = '==' 252 253 match comparison: 254 case '==': 255 return self.values() 256 case '<=': 257 return tuple(itertools.accumulate(self.values())) 258 case '>=': 259 return tuple( 260 itertools.accumulate(self.values()[:-1], 261 operator.sub, 262 initial=self.denominator())) 263 case '!=': 264 return tuple(self.denominator() - q for q in self.values()) 265 case '<': 266 return tuple(self.denominator() - q 267 for q in self.quantities('>=')) 268 case '>': 269 return tuple(self.denominator() - q 270 for q in self.quantities('<=')) 271 case _: 272 raise ValueError(f'Invalid comparison {comparison}') 273 274 @cached_property 275 def _cumulative_quantities(self) -> Mapping[T_co, int]: 276 result = {} 277 cdf = 0 278 for outcome, quantity in self.items(): 279 cdf += quantity 280 result[outcome] = cdf 281 return result 282 283 @cached_property 284 def _denominator(self) -> int: 285 return sum(self.values()) 286 287 def denominator(self) -> int: 288 """The sum of all quantities (e.g. weights or duplicates). 289 290 For the number of unique outcomes, use `len()`. 291 """ 292 return self._denominator 293 294 def multiply_quantities(self: C, scale: int, /) -> C: 295 """Multiplies all quantities by an integer.""" 296 if scale == 1: 297 return self 298 data = { 299 outcome: quantity * scale 300 for outcome, quantity in self.items() 301 } 302 return self._new_type(data) 303 304 def divide_quantities(self: C, divisor: int, /) -> C: 305 """Divides all quantities by an integer, rounding down. 306 307 Resulting zero quantities are dropped. 
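The `'<='` and `'>='` variants of `quantities` are just running sums over the sorted per-outcome quantities, exactly as the source computes them with `itertools.accumulate`:

```python
import operator
from itertools import accumulate

quantities = (1, 2, 3)  # e.g. a die {1: 1, 2: 2, 3: 3}
denominator = sum(quantities)

# '<=' gives the CDF: cumulative quantity at or below each outcome.
cdf = tuple(accumulate(quantities))

# '>=' runs the other way: start at the denominator and subtract.
ccdf = tuple(accumulate(quantities[:-1], operator.sub,
                        initial=denominator))
```

Here the CDF is (1, 3, 6) and the complementary form is (6, 5, 3); at every outcome the two overlap by exactly that outcome's quantity.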
308 """ 309 if divisor == 0: 310 return self 311 data = { 312 outcome: quantity // divisor 313 for outcome, quantity in self.items() if quantity >= divisor 314 } 315 return self._new_type(data) 316 317 def modulo_quantities(self: C, divisor: int, /) -> C: 318 """Modulus of all quantities with an integer.""" 319 data = { 320 outcome: quantity % divisor 321 for outcome, quantity in self.items() 322 } 323 return self._new_type(data) 324 325 def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C: 326 """Changes the denominator to a target number by changing the quantity of a specified outcome. 327 328 Args: 329 `target`: The denominator of the result. 330 `outcome`: The outcome whose quantity will be adjusted. 331 332 Returns: 333 A `Population` like `self` but with the quantity of `outcome` 334 adjusted so that the overall denominator is equal to `target`. 335 If the denominator is reduced to zero, it will be removed. 336 337 Raises: 338 `ValueError` if this would require the quantity of the specified 339 outcome to be negative. 340 """ 341 adjustment = target - self.denominator() 342 data = {outcome: quantity for outcome, quantity in self.items()} 343 new_quantity = data.get(outcome, 0) + adjustment 344 if new_quantity > 0: 345 data[outcome] = new_quantity 346 elif new_quantity == 0: 347 del data[outcome] 348 else: 349 raise ValueError( 350 f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}' 351 ) 352 return self._new_type(data) 353 354 # Probabilities. 
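`pad_to_denominator` only ever touches one outcome's quantity; a standalone sketch over a plain mapping (mirroring the source's adjustment logic, with a safe removal when the quantity hits zero):

```python
def pad_to_denominator(population: dict, target: int, outcome) -> dict:
    """Adjust one outcome's quantity so all quantities sum to target."""
    data = dict(population)
    new_quantity = data.get(outcome, 0) + target - sum(data.values())
    if new_quantity > 0:
        data[outcome] = new_quantity
    elif new_quantity == 0:
        data.pop(outcome, None)  # zero quantities are removed
    else:
        raise ValueError('would require a negative quantity')
    return data
```

Padding a 1:1 coin to denominator 4 at outcome 2 triples that outcome's quantity; padding down to exactly the remaining weight removes the outcome entirely.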
355 356 @overload 357 def probability(self, outcome: Hashable, /, *, 358 percent: Literal[False]) -> Fraction: 359 """The probability of a single outcome, or 0.0 if not present.""" 360 361 @overload 362 def probability(self, outcome: Hashable, /, *, 363 percent: Literal[True]) -> float: 364 """The probability of a single outcome, or 0.0 if not present.""" 365 366 @overload 367 def probability(self, outcome: Hashable, /) -> Fraction: 368 """The probability of a single outcome, or 0.0 if not present.""" 369 370 @overload 371 def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=', 372 '>'], outcome: Hashable, /, *, 373 percent: Literal[False]) -> Fraction: 374 """The total probability of outcomes fitting a comparison.""" 375 376 @overload 377 def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=', 378 '>'], outcome: Hashable, /, *, 379 percent: Literal[True]) -> float: 380 """The total probability of outcomes fitting a comparison.""" 381 382 @overload 383 def probability(self, comparison: Literal['==', '!=', '<=', '<', '>=', 384 '>'], outcome: Hashable, 385 /) -> Fraction: 386 """The total probability of outcomes fitting a comparison.""" 387 388 def probability(self, 389 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 390 | Hashable, 391 outcome: Hashable | None = None, 392 /, 393 *, 394 percent: bool = False) -> Fraction | float: 395 """The total probability of outcomes fitting a comparison.""" 396 if outcome is None: 397 outcome = comparison 398 comparison = '==' 399 else: 400 comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'], 401 comparison) 402 result = Fraction(self.quantity(comparison, outcome), 403 self.denominator()) 404 return result * 100.0 if percent else result 405 406 @overload 407 def probabilities(self, /, *, 408 percent: Literal[False]) -> Sequence[Fraction]: 409 """All probabilities in sorted order.""" 410 411 @overload 412 def probabilities(self, /, *, percent: Literal[True]) -> Sequence[float]: 413 
"""All probabilities in sorted order.""" 414 415 @overload 416 def probabilities(self, /) -> Sequence[Fraction]: 417 """All probabilities in sorted order.""" 418 419 @overload 420 def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=', 421 '>'], /, *, 422 percent: Literal[False]) -> Sequence[Fraction]: 423 """The total probabilities fitting the comparison for each outcome in sorted order. 424 425 For example, '<=' gives the CDF. 426 """ 427 428 @overload 429 def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=', 430 '>'], /, *, 431 percent: Literal[True]) -> Sequence[float]: 432 """The total probabilities fitting the comparison for each outcome in sorted order. 433 434 For example, '<=' gives the CDF. 435 """ 436 437 @overload 438 def probabilities(self, comparison: Literal['==', '!=', '<=', '<', '>=', 439 '>'], /) -> Sequence[Fraction]: 440 """The total probabilities fitting the comparison for each outcome in sorted order. 441 442 For example, '<=' gives the CDF. 443 """ 444 445 def probabilities( 446 self, 447 comparison: Literal['==', '!=', '<=', '<', '>=', '>'] 448 | None = None, 449 /, 450 *, 451 percent: bool = False) -> Sequence[Fraction] | Sequence[float]: 452 """The total probabilities fitting the comparison for each outcome in sorted order. 453 454 For example, '<=' gives the CDF. 455 456 Args: 457 comparison: Optional. If omitted, this defaults to '=='. 458 """ 459 if comparison is None: 460 comparison = '==' 461 462 result = tuple( 463 Fraction(q, self.denominator()) 464 for q in self.quantities(comparison)) 465 466 if percent: 467 return tuple(100.0 * x for x in result) 468 else: 469 return result 470 471 # Scalar statistics. 472 473 def mode(self) -> tuple: 474 """A tuple containing the most common outcome(s) of the population. 475 476 These are sorted from lowest to highest. 
477 """ 478 return tuple(outcome for outcome, quantity in self.items() 479 if quantity == self.modal_quantity()) 480 481 def modal_quantity(self) -> int: 482 """The highest quantity of any single outcome. """ 483 return max(self.quantities()) 484 485 def kolmogorov_smirnov(self, other: 'Population') -> Fraction: 486 """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs. """ 487 outcomes = icepool.sorted_union(self, other) 488 return max( 489 abs( 490 self.probability('<=', outcome) - 491 other.probability('<=', outcome)) for outcome in outcomes) 492 493 def cramer_von_mises(self, other: 'Population') -> Fraction: 494 """Cramér-von Mises statistic. The sum-of-squares difference between CDFs. """ 495 outcomes = icepool.sorted_union(self, other) 496 return sum(((self.probability('<=', outcome) - 497 other.probability('<=', outcome))**2 498 for outcome in outcomes), 499 start=Fraction(0, 1)) 500 501 def median(self): 502 """The median, taking the mean in case of a tie. 503 504 This will fail if the outcomes do not support division; 505 in this case, use `median_low` or `median_high` instead. 506 """ 507 return self.quantile(1, 2) 508 509 def median_low(self) -> T_co: 510 """The median, taking the lower in case of a tie.""" 511 return self.quantile_low(1, 2) 512 513 def median_high(self) -> T_co: 514 """The median, taking the higher in case of a tie.""" 515 return self.quantile_high(1, 2) 516 517 def quantile(self, n: int, d: int = 100): 518 """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie. 519 520 This will fail if the outcomes do not support addition and division; 521 in this case, use `quantile_low` or `quantile_high` instead. 522 """ 523 # Should support addition and division. 
524 return (self.quantile_low(n, d) + 525 self.quantile_high(n, d)) / 2 # type: ignore 526 527 def quantile_low(self, n: int, d: int = 100) -> T_co: 528 """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie.""" 529 index = bisect.bisect_left(self.quantities('<='), 530 (n * self.denominator() + d - 1) // d) 531 if index >= len(self): 532 return self.max_outcome() 533 return self.outcomes()[index] 534 535 def quantile_high(self, n: int, d: int = 100) -> T_co: 536 """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie.""" 537 index = bisect.bisect_right(self.quantities('<='), 538 n * self.denominator() // d) 539 if index >= len(self): 540 return self.max_outcome() 541 return self.outcomes()[index] 542 543 @overload 544 def mean(self: 'Population[numbers.Rational]') -> Fraction: 545 ... 546 547 @overload 548 def mean(self: 'Population[float]') -> float: 549 ... 550 551 def mean( 552 self: 'Population[numbers.Rational] | Population[float]' 553 ) -> Fraction | float: 554 return try_fraction( 555 sum(outcome * quantity for outcome, quantity in self.items()), 556 self.denominator()) 557 558 @overload 559 def variance(self: 'Population[numbers.Rational]') -> Fraction: 560 ... 561 562 @overload 563 def variance(self: 'Population[float]') -> float: 564 ... 
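The tie-breaking in `quantile_low`/`quantile_high` comes from searching the CDF with `bisect_left` against a ceiling threshold versus `bisect_right` against a floor threshold. A standalone mirror over plain sequences:

```python
import bisect

def quantile_low(outcomes, cdf, denominator, n, d=100):
    """Outcome n/d of the way through the CDF, lesser on ties."""
    index = bisect.bisect_left(cdf, (n * denominator + d - 1) // d)
    return outcomes[min(index, len(outcomes) - 1)]

def quantile_high(outcomes, cdf, denominator, n, d=100):
    """Outcome n/d of the way through the CDF, greater on ties."""
    index = bisect.bisect_right(cdf, n * denominator // d)
    return outcomes[min(index, len(outcomes) - 1)]
```

For a d4 the median is tied between 2 and 3, so the low and high quantiles split, and `median()` averages them to 2.5.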
565 566 def variance( 567 self: 'Population[numbers.Rational] | Population[float]' 568 ) -> Fraction | float: 569 """This is the population variance, not the sample variance.""" 570 mean = self.mean() 571 mean_of_squares = try_fraction( 572 sum(quantity * outcome**2 for outcome, quantity in self.items()), 573 self.denominator()) 574 return mean_of_squares - mean * mean 575 576 def standard_deviation( 577 self: 'Population[numbers.Rational] | Population[float]') -> float: 578 return math.sqrt(self.variance()) 579 580 sd = standard_deviation 581 582 def standardized_moment( 583 self: 'Population[numbers.Rational] | Population[float]', 584 k: int) -> float: 585 sd = self.standard_deviation() 586 mean = self.mean() 587 ev = sum(p * (outcome - mean)**k # type: ignore 588 for outcome, p in zip(self.outcomes(), self.probabilities())) 589 return ev / (sd**k) 590 591 def skewness( 592 self: 'Population[numbers.Rational] | Population[float]') -> float: 593 return self.standardized_moment(3) 594 595 def excess_kurtosis( 596 self: 'Population[numbers.Rational] | Population[float]') -> float: 597 return self.standardized_moment(4) - 3.0 598 599 def entropy(self, base: float = 2.0) -> float: 600 """The entropy of a random sample from this population. 601 602 Args: 603 base: The logarithm base to use. Default is 2.0, which gives the 604 entropy in bits. 605 """ 606 return -sum(p * math.log(p, base) 607 for p in self.probabilities() if p > 0.0) 608 609 # Joint statistics. 
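The `entropy` computation above is the standard Shannon entropy with zero-probability terms skipped; as a standalone function over a plain probability sequence:

```python
import math

def entropy(probabilities, base: float = 2.0) -> float:
    """Shannon entropy of a distribution; zero terms contribute nothing."""
    return -sum(p * math.log(p, base)
                for p in probabilities if p > 0.0)
```

A fair coin carries 1 bit of entropy and a fair d4 carries 2 bits.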
610 611 class _Marginals(Generic[C]): 612 """Helper class for implementing `marginals()`.""" 613 614 _population: C 615 616 def __init__(self, population, /): 617 self._population = population 618 619 def __len__(self) -> int: 620 """The minimum len() of all outcomes.""" 621 return min(len(x) for x in self._population.outcomes()) 622 623 def __getitem__(self, dims: int | slice, /): 624 """Marginalizes the given dimensions.""" 625 return self._population._unary_operator(operator.getitem, dims) 626 627 def __iter__(self) -> Iterator: 628 for i in range(len(self)): 629 yield self[i] 630 631 def __getattr__(self, key: str): 632 if key[0] == '_': 633 raise AttributeError(key) 634 return self._population._unary_operator(operator.attrgetter(key)) 635 636 @property 637 def marginals(self: C) -> _Marginals[C]: 638 """A property that applies the `[]` operator to outcomes. 639 640 For example, `population.marginals[:2]` will marginalize the first two 641 elements of sequence outcomes. 642 643 Attributes that do not start with an underscore will also be forwarded. 644 For example, `population.marginals.x` will marginalize the `x` attribute 645 from e.g. `namedtuple` outcomes. 646 """ 647 return Population._Marginals(self) 648 649 @overload 650 def covariance(self: 'Population[tuple[numbers.Rational, ...]]', i: int, 651 j: int) -> Fraction: 652 ... 653 654 @overload 655 def covariance(self: 'Population[tuple[float, ...]]', i: int, 656 j: int) -> float: 657 ... 
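Marginalizing is just applying `[]` to every outcome and merging quantities, which is what `_unary_operator` does under the hood. A standalone sketch over a plain mapping with tuple outcomes:

```python
from collections import defaultdict

def marginal(population: dict, dims) -> dict:
    """Apply [dims] to each outcome and merge quantities —
    what population.marginals[dims] does for sequence outcomes."""
    data = defaultdict(int)
    for outcome, quantity in population.items():
        data[outcome[dims]] += quantity
    return dict(data)
```

For a population over (number, letter) pairs, the first marginal collapses the letters and the second collapses the numbers.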
658 659 def covariance( 660 self: 661 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 662 i: int, j: int) -> Fraction | float: 663 mean_i = self.marginals[i].mean() 664 mean_j = self.marginals[j].mean() 665 return try_fraction( 666 sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity 667 for outcome, quantity in self.items()), self.denominator()) 668 669 def correlation( 670 self: 671 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 672 i: int, j: int) -> float: 673 sd_i = self.marginals[i].standard_deviation() 674 sd_j = self.marginals[j].standard_deviation() 675 return self.covariance(i, j) / (sd_i * sd_j) 676 677 # Transformations. 678 679 def _select_outcomes(self, which: Callable[..., bool] | Collection[T_co], 680 star: bool | None) -> Set[T_co]: 681 """Returns a set of outcomes of self that fit the given condition.""" 682 if callable(which): 683 if star is None: 684 star = infer_star(which) 685 if star: 686 # Need TypeVarTuple to check this. 687 return { 688 outcome 689 for outcome in self.outcomes() 690 if which(*outcome) # type: ignore 691 } 692 else: 693 return { 694 outcome 695 for outcome in self.outcomes() if which(outcome) 696 } 697 else: 698 # Collection. 699 return set(outcome for outcome in self.outcomes() 700 if outcome in which) 701 702 def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C: 703 """Converts the outcomes of this population to a one-hot representation. 704 705 Args: 706 outcomes: If provided, each outcome will be mapped to a `Vector` 707 where the element at `outcomes.index(outcome)` is set to `True` 708 and the rest to `False`, or all `False` if the outcome is not 709 in `outcomes`. 710 If not provided, `self.outcomes()` is used. 
711 """ 712 if outcomes is None: 713 outcomes = self.outcomes() 714 715 data: MutableMapping[Vector[bool], int] = defaultdict(int) 716 for outcome, quantity in zip(self.outcomes(), self.quantities()): 717 value = [False] * len(outcomes) 718 if outcome in outcomes: 719 value[outcomes.index(outcome)] = True 720 data[Vector(value)] += quantity 721 return self._new_type(data) 722 723 def split(self, 724 which: Callable[..., bool] | Collection[T_co] | None = None, 725 /, 726 *, 727 star: bool | None = None) -> tuple[C, C]: 728 """Splits this population into one containing selected items and another containing the rest. 729 730 The sum of the denominators of the results is equal to the denominator 731 of this population. 732 733 If you want to split more than two ways, use `Population.group_by()`. 734 735 Args: 736 which: Selects which outcomes to select. Options: 737 * A callable that takes an outcome and returns `True` if it 738 should be selected. 739 * A collection of outcomes to select. 740 star: Whether outcomes should be unpacked into separate arguments 741 before sending them to a callable `which`. 742 If not provided, this will be guessed based on the function 743 signature. 744 745 Returns: 746 A population consisting of the outcomes that were selected by 747 `which`, and a population consisting of the unselected outcomes. 
748 """ 749 if which is None: 750 outcome_set = {self.min_outcome()} 751 else: 752 outcome_set = self._select_outcomes(which, star) 753 754 selected = {} 755 not_selected = {} 756 for outcome, count in self.items(): 757 if outcome in outcome_set: 758 selected[outcome] = count 759 else: 760 not_selected[outcome] = count 761 762 return self._new_type(selected), self._new_type(not_selected) 763 764 class _GroupBy(Generic[C]): 765 """Helper class for implementing `group_by()`.""" 766 767 _population: C 768 769 def __init__(self, population, /): 770 self._population = population 771 772 def __call__(self, 773 key_map: Callable[..., U] | Mapping[T_co, U], 774 /, 775 *, 776 star: bool | None = None) -> Mapping[U, C]: 777 if callable(key_map): 778 if star is None: 779 star = infer_star(key_map) 780 if star: 781 key_function = lambda o: key_map(*o) 782 else: 783 key_function = key_map 784 else: 785 key_function = lambda o: key_map.get(o, o) 786 787 result_datas: MutableMapping[U, MutableMapping[Any, int]] = {} 788 outcome: Any 789 for outcome, quantity in self._population.items(): 790 key = key_function(outcome) 791 if key not in result_datas: 792 result_datas[key] = defaultdict(int) 793 result_datas[key][outcome] += quantity 794 return { 795 k: self._population._new_type(v) 796 for k, v in result_datas.items() 797 } 798 799 def __getitem__(self, dims: int | slice, /): 800 """Marginalizes the given dimensions.""" 801 return self(lambda x: x[dims]) 802 803 def __getattr__(self, key: str): 804 if key[0] == '_': 805 raise AttributeError(key) 806 return self(lambda x: getattr(x, key)) 807 808 @property 809 def group_by(self: C) -> _GroupBy[C]: 810 """A method-like property that splits this population into sub-populations based on a key function. 811 812 The sum of the denominators of the results is equal to the denominator 813 of this population. 814 815 This can be useful when using the law of total probability. 
816 817 Example: `d10.group_by(lambda x: x % 3)` is 818 ```python 819 { 820 0: Die([3, 6, 9]), 821 1: Die([1, 4, 7, 10]), 822 2: Die([2, 5, 8]), 823 } 824 ``` 825 826 You can also use brackets to group by indexes or slices; or attributes 827 to group by those. Example: 828 829 ```python 830 Die([ 831 'aardvark', 832 'alligator', 833 'asp', 834 'blowfish', 835 'cat', 836 'crocodile', 837 ]).group_by[0] 838 ``` 839 840 produces 841 842 ```python 843 { 844 'a': Die(['aardvark', 'alligator', 'asp']), 845 'b': Die(['blowfish']), 846 'c': Die(['cat', 'crocodile']), 847 } 848 ``` 849 850 Args: 851 key_map: A function or mapping that takes outcomes and produces the 852 key of the corresponding outcome in the result. If this is 853 a Mapping, outcomes not in the mapping are their own key. 854 star: Whether outcomes should be unpacked into separate arguments 855 before sending them to a callable `key_map`. 856 If not provided, this will be guessed based on the function 857 signature. 858 """ 859 return Population._GroupBy(self) 860 861 def sample(self) -> T_co: 862 """A single random sample from this population. 863 864 Note that this is always "with replacement" even for `Deck` since 865 instances are immutable. 866 867 This uses the standard `random` package and is not cryptographically 868 secure. 869 """ 870 # We don't use random.choices since that is based on floats rather than ints. 871 r = random.randrange(self.denominator()) 872 index = bisect.bisect_right(self.quantities('<='), r) 873 return self.outcomes()[index] 874 875 def format(self, format_spec: str, /, **kwargs) -> str: 876 """Formats this mapping as a string. 877 878 `format_spec` should start with the output format, 879 which can be: 880 * `md` for Markdown (default) 881 * `bbcode` for BBCode 882 * `csv` for comma-separated values 883 * `html` for HTML 884 885 After this, you may optionally add a `:` followed by a series of 886 requested columns. Allowed columns are: 887 888 * `o`: Outcomes. 
889 * `*o`: Outcomes, unpacked if applicable. 890 * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome. 891 * `p==`, `p<=`, `p>=`: Probabilities (0-1). 892 * `%==`, `%<=`, `%>=`: Probabilities (0%-100%). 893 * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N". 894 895 Columns may optionally be separated using `|` characters. 896 897 The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the 898 columns are the outcomes (unpacked if applicable) the quantities, and 899 the probabilities. The quantities are omitted from the default columns 900 if any individual quantity is 10**30 or greater. 901 """ 902 if not self.is_empty() and self.modal_quantity() < 10**30: 903 default_column_spec = '*oq==%==' 904 else: 905 default_column_spec = '*o%==' 906 if len(format_spec) == 0: 907 format_spec = 'md:' + default_column_spec 908 909 format_spec = format_spec.replace('|', '') 910 911 parts = format_spec.split(':') 912 913 if len(parts) == 1: 914 output_format = parts[0] 915 col_spec = default_column_spec 916 elif len(parts) == 2: 917 output_format = parts[0] 918 col_spec = parts[1] 919 else: 920 raise ValueError('format_spec has too many colons.') 921 922 match output_format: 923 case 'md': 924 return icepool.population.format.markdown(self, col_spec) 925 case 'bbcode': 926 return icepool.population.format.bbcode(self, col_spec) 927 case 'csv': 928 return icepool.population.format.csv(self, col_spec, **kwargs) 929 case 'html': 930 return icepool.population.format.html(self, col_spec) 931 case _: 932 raise ValueError( 933 f"Unsupported output format '{output_format}'") 934 935 def __format__(self, format_spec: str, /) -> str: 936 return self.format(format_spec) 937 938 def __str__(self) -> str: 939 return f'{self}'
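As a cross-check of the statistics above, here is a minimal pure-Python sketch of the population variance over a plain (outcome → quantity) dict, mirroring the mean-of-squares formula in `variance()`. This is an illustration, not icepool's own implementation; the helper name `population_variance` is mine.

```python
from fractions import Fraction


def population_variance(data: dict[int, int]) -> Fraction:
    """Population variance of a quantity-weighted mapping: E[X^2] - E[X]^2."""
    denominator = sum(data.values())
    mean = Fraction(sum(o * q for o, q in data.items()), denominator)
    mean_of_squares = Fraction(sum(q * o**2 for o, q in data.items()),
                               denominator)
    return mean_of_squares - mean * mean


# A fair d6 has variance 35/12.
d6 = {i: 1 for i in range(1, 7)}
```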
A mapping from outcomes to `int` quantities. Outcomes within each instance must be hashable and totally orderable.
```python
    @abstractmethod
    def keys(self) -> CountsKeysView[T_co]:
        """The outcomes within the population in sorted order."""
```
```python
    @abstractmethod
    def values(self) -> CountsValuesView:
        """The quantities within the population in outcome order."""
```
```python
    @abstractmethod
    def items(self) -> CountsItemsView[T_co]:
        """The (outcome, quantity)s of the population in sorted order."""
```
```python
    def common_outcome_length(self) -> int | None:
        """The common length of all outcomes.

        If outcomes have no lengths or different lengths, the result is `None`.
        """
        return self._common_outcome_length
```
```python
    def is_empty(self) -> bool:
        """`True` iff this population has no outcomes."""
        return len(self) == 0
```
```python
    def nearest(self, comparison: Literal['<=', '<', '>=', '>'], outcome,
                /) -> T_co | None:
        """The nearest outcome in this population fitting the comparison.

        Args:
            comparison: The comparison which the result must fit. For example,
                '<=' would find the greatest outcome that is not greater than
                the argument.
            outcome: The outcome to compare against.

        Returns:
            The nearest outcome fitting the comparison, or `None` if there is
            no such outcome.
        """
        match comparison:
            case '<=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_right(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '<':
                index = bisect.bisect_left(self.outcomes(), outcome) - 1
                if index < 0:
                    return None
                return self.outcomes()[index]
            case '>=':
                if outcome in self:
                    return outcome
                index = bisect.bisect_left(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case '>':
                index = bisect.bisect_right(self.outcomes(), outcome)
                if index >= len(self):
                    return None
                return self.outcomes()[index]
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
```
```python
    def zero(self: C) -> C:
        """Zeros all outcomes of this population.

        This is done by multiplying all outcomes by `0`.

        The result will have the same denominator.

        Raises:
            ValueError: If the zeros did not resolve to a single outcome.
        """
        result = self._unary_operator(Population._zero)
        if len(result) != 1:
            raise ValueError('zero() did not resolve to a single outcome.')
        return result
```
```python
    def zero_outcome(self) -> T_co:
        """A zero-outcome for this population.

        E.g. `0` for a `Population` whose outcomes are `int`s.
        """
        return self.zero().outcomes()[0]
```
```python
    def quantity(self,
                 comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                 | Hashable,
                 outcome: Hashable | None = None,
                 /) -> int:
        """The quantity of a single outcome.

        A comparison can be provided, in which case this returns the total
        quantity fitting the comparison.

        Args:
            comparison: The comparison to use. This can be omitted, in which
                case it is treated as '=='.
            outcome: The outcome to query.
        """
        if outcome is None:
            outcome = comparison
            comparison = '=='
        else:
            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                              comparison)

        match comparison:
            case '==':
                return self.get(outcome, 0)
            case '!=':
                return self.denominator() - self.get(outcome, 0)
            case '<=' | '<':
                threshold = self.nearest(comparison, outcome)
                if threshold is None:
                    return 0
                else:
                    return self._cumulative_quantities[threshold]
            case '>=':
                return self.denominator() - self.quantity('<', outcome)
            case '>':
                return self.denominator() - self.quantity('<=', outcome)
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
```
```python
    def quantities(self,
                   comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                   | None = None,
                   /) -> CountsValuesView | Sequence[int]:
        """The quantities of the mapping in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: Optional. If omitted, this defaults to '=='.
        """
        if comparison is None:
            comparison = '=='

        match comparison:
            case '==':
                return self.values()
            case '<=':
                return tuple(itertools.accumulate(self.values()))
            case '>=':
                return tuple(
                    itertools.accumulate(self.values()[:-1],
                                         operator.sub,
                                         initial=self.denominator()))
            case '!=':
                return tuple(self.denominator() - q for q in self.values())
            case '<':
                return tuple(self.denominator() - q
                             for q in self.quantities('>='))
            case '>':
                return tuple(self.denominator() - q
                             for q in self.quantities('<='))
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
```
```python
    def denominator(self) -> int:
        """The sum of all quantities (e.g. weights or duplicates).

        For the number of unique outcomes, use `len()`.
        """
        return self._denominator
```
```python
    def multiply_quantities(self: C, scale: int, /) -> C:
        """Multiplies all quantities by an integer."""
        if scale == 1:
            return self
        data = {
            outcome: quantity * scale
            for outcome, quantity in self.items()
        }
        return self._new_type(data)
```
```python
    def divide_quantities(self: C, divisor: int, /) -> C:
        """Divides all quantities by an integer, rounding down.

        Resulting zero quantities are dropped.
        """
        if divisor == 0:
            return self
        data = {
            outcome: quantity // divisor
            for outcome, quantity in self.items() if quantity >= divisor
        }
        return self._new_type(data)
```
```python
    def modulo_quantities(self: C, divisor: int, /) -> C:
        """Modulus of all quantities with an integer."""
        data = {
            outcome: quantity % divisor
            for outcome, quantity in self.items()
        }
        return self._new_type(data)
```
```python
    def pad_to_denominator(self: C, target: int, /, outcome: Hashable) -> C:
        """Changes the denominator to a target number by changing the quantity of a specified outcome.

        Args:
            `target`: The denominator of the result.
            `outcome`: The outcome whose quantity will be adjusted.

        Returns:
            A `Population` like `self` but with the quantity of `outcome`
            adjusted so that the overall denominator is equal to `target`.
            If the denominator is reduced to zero, it will be removed.

        Raises:
            `ValueError` if this would require the quantity of the specified
            outcome to be negative.
        """
        adjustment = target - self.denominator()
        data = {outcome: quantity for outcome, quantity in self.items()}
        new_quantity = data.get(outcome, 0) + adjustment
        if new_quantity > 0:
            data[outcome] = new_quantity
        elif new_quantity == 0:
            del data[outcome]
        else:
            raise ValueError(
                f'Padding to denominator of {target} would require a negative quantity of {new_quantity} for {outcome}'
            )
        return self._new_type(data)
```
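The padding adjustment is simple dict arithmetic; a stdlib-only sketch over a plain mapping (my own helper, not icepool's), showing the success, removal, and error cases:

```python
def pad_to_denominator(data: dict, target: int, outcome) -> dict:
    """Adjust one outcome's quantity so the total quantity equals target."""
    adjustment = target - sum(data.values())
    result = dict(data)
    new_quantity = result.get(outcome, 0) + adjustment
    if new_quantity > 0:
        result[outcome] = new_quantity
    elif new_quantity == 0:
        # A quantity reduced to exactly zero is dropped entirely.
        result.pop(outcome, None)
    else:
        raise ValueError(
            f'Padding to denominator of {target} would require a negative '
            f'quantity of {new_quantity} for {outcome}')
    return result
```

For example, padding `{1: 1, 2: 1}` to a denominator of 6 via outcome `0` gives `{1: 1, 2: 1, 0: 4}`.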
```python
    def probability(self,
                    comparison: Literal['==', '!=', '<=', '<', '>=', '>']
                    | Hashable,
                    outcome: Hashable | None = None,
                    /,
                    *,
                    percent: bool = False) -> Fraction | float:
        """The total probability of outcomes fitting a comparison."""
        if outcome is None:
            outcome = comparison
            comparison = '=='
        else:
            comparison = cast(Literal['==', '!=', '<=', '<', '>=', '>'],
                              comparison)
        result = Fraction(self.quantity(comparison, outcome),
                          self.denominator())
        return result * 100.0 if percent else result
```
```python
    def probabilities(
            self,
            comparison: Literal['==', '!=', '<=', '<', '>=', '>']
            | None = None,
            /,
            *,
            percent: bool = False) -> Sequence[Fraction] | Sequence[float]:
        """The total probabilities fitting the comparison for each outcome in sorted order.

        For example, '<=' gives the CDF.

        Args:
            comparison: Optional. If omitted, this defaults to '=='.
        """
        if comparison is None:
            comparison = '=='

        result = tuple(
            Fraction(q, self.denominator())
            for q in self.quantities(comparison))

        if percent:
            return tuple(100.0 * x for x in result)
        else:
            return result
```
```python
    def mode(self) -> tuple:
        """A tuple containing the most common outcome(s) of the population.

        These are sorted from lowest to highest.
        """
        return tuple(outcome for outcome, quantity in self.items()
                     if quantity == self.modal_quantity())
```
```python
    def modal_quantity(self) -> int:
        """The highest quantity of any single outcome."""
        return max(self.quantities())
```
```python
    def kolmogorov_smirnov(self, other: 'Population') -> Fraction:
        """Kolmogorov–Smirnov statistic. The maximum absolute difference between CDFs."""
        outcomes = icepool.sorted_union(self, other)
        return max(
            abs(
                self.probability('<=', outcome) -
                other.probability('<=', outcome)) for outcome in outcomes)
```
```python
    def cramer_von_mises(self, other: 'Population') -> Fraction:
        """Cramér-von Mises statistic. The sum-of-squares difference between CDFs."""
        outcomes = icepool.sorted_union(self, other)
        return sum(((self.probability('<=', outcome) -
                     other.probability('<=', outcome))**2
                    for outcome in outcomes),
                   start=Fraction(0, 1))
```
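Both statistics compare exact CDFs pointwise over the union of outcomes. A stdlib-only sketch of the Kolmogorov–Smirnov computation over plain quantity dicts (illustration only; the helpers `cdf` and `kolmogorov_smirnov` here are mine):

```python
from fractions import Fraction


def cdf(data: dict[int, int], outcome: int) -> Fraction:
    """Exact P(X <= outcome) for a quantity-weighted mapping."""
    denominator = sum(data.values())
    return Fraction(sum(q for o, q in data.items() if o <= outcome),
                    denominator)


def kolmogorov_smirnov(a: dict[int, int], b: dict[int, int]) -> Fraction:
    """Maximum absolute difference between the two CDFs."""
    outcomes = sorted(set(a) | set(b))
    return max(abs(cdf(a, x) - cdf(b, x)) for x in outcomes)
```

Comparing a fair d4 against a fair d6, the largest gap is at outcome 4, where the d4's CDF is 1 and the d6's is 2/3, so the statistic is 1/3.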
```python
    def median(self):
        """The median, taking the mean in case of a tie.

        This will fail if the outcomes do not support division;
        in this case, use `median_low` or `median_high` instead.
        """
        return self.quantile(1, 2)
```
```python
    def median_low(self) -> T_co:
        """The median, taking the lower in case of a tie."""
        return self.quantile_low(1, 2)
```
```python
    def median_high(self) -> T_co:
        """The median, taking the higher in case of a tie."""
        return self.quantile_high(1, 2)
```
```python
    def quantile(self, n: int, d: int = 100):
        """The outcome `n / d` of the way through the CDF, taking the mean in case of a tie.

        This will fail if the outcomes do not support addition and division;
        in this case, use `quantile_low` or `quantile_high` instead.
        """
        # Should support addition and division.
        return (self.quantile_low(n, d) +
                self.quantile_high(n, d)) / 2  # type: ignore
```
```python
    def quantile_low(self, n: int, d: int = 100) -> T_co:
        """The outcome `n / d` of the way through the CDF, taking the lesser in case of a tie."""
        index = bisect.bisect_left(self.quantities('<='),
                                   (n * self.denominator() + d - 1) // d)
        if index >= len(self):
            return self.max_outcome()
        return self.outcomes()[index]
```
```python
    def quantile_high(self, n: int, d: int = 100) -> T_co:
        """The outcome `n / d` of the way through the CDF, taking the greater in case of a tie."""
        index = bisect.bisect_right(self.quantities('<='),
                                    n * self.denominator() // d)
        if index >= len(self):
            return self.max_outcome()
        return self.outcomes()[index]
```
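The low/high quantiles are bisections on the cumulative quantities, with the rounding direction controlling tie-breaking. A stdlib-only sketch over plain quantity dicts (my own free-function form, not icepool's API):

```python
import bisect
import itertools


def _cumulative(data: dict[int, int]) -> tuple[list[int], list[int]]:
    """Sorted outcomes and their cumulative (CDF) quantities."""
    outcomes = sorted(data)
    return outcomes, list(itertools.accumulate(data[o] for o in outcomes))


def quantile_low(data: dict[int, int], n: int, d: int = 100) -> int:
    """Outcome n/d of the way through the CDF, lesser in case of a tie."""
    outcomes, cumulative = _cumulative(data)
    # Round the target quantity up, then take the first outcome reaching it.
    target = (n * cumulative[-1] + d - 1) // d
    index = bisect.bisect_left(cumulative, target)
    return outcomes[min(index, len(outcomes) - 1)]


def quantile_high(data: dict[int, int], n: int, d: int = 100) -> int:
    """Outcome n/d of the way through the CDF, greater in case of a tie."""
    outcomes, cumulative = _cumulative(data)
    # Round the target quantity down, then step past any exact tie.
    index = bisect.bisect_right(cumulative, n * cumulative[-1] // d)
    return outcomes[min(index, len(outcomes) - 1)]
```

On a fair d6 the halfway point is a tie: `quantile_low` gives 3 and `quantile_high` gives 4, so the mean-taking `median` would be 3.5.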
566 def variance( 567 self: 'Population[numbers.Rational] | Population[float]' 568 ) -> Fraction | float: 569 """This is the population variance, not the sample variance.""" 570 mean = self.mean() 571 mean_of_squares = try_fraction( 572 sum(quantity * outcome**2 for outcome, quantity in self.items()), 573 self.denominator()) 574 return mean_of_squares - mean * mean
This is the population variance, not the sample variance.
582 def standardized_moment( 583 self: 'Population[numbers.Rational] | Population[float]', 584 k: int) -> float: 585 sd = self.standard_deviation() 586 mean = self.mean() 587 ev = sum(p * (outcome - mean)**k # type: ignore 588 for outcome, p in zip(self.outcomes(), self.probabilities())) 589 return ev / (sd**k)
599 def entropy(self, base: float = 2.0) -> float: 600 """The entropy of a random sample from this population. 601 602 Args: 603 base: The logarithm base to use. Default is 2.0, which gives the 604 entropy in bits. 605 """ 606 return -sum(p * math.log(p, base) 607 for p in self.probabilities() if p > 0.0)
The entropy of a random sample from this population.
Arguments:
- base: The logarithm base to use. Default is 2.0, which gives the entropy in bits.
636 @property 637 def marginals(self: C) -> _Marginals[C]: 638 """A property that applies the `[]` operator to outcomes. 639 640 For example, `population.marginals[:2]` will marginalize the first two 641 elements of sequence outcomes. 642 643 Attributes that do not start with an underscore will also be forwarded. 644 For example, `population.marginals.x` will marginalize the `x` attribute 645 from e.g. `namedtuple` outcomes. 646 """ 647 return Population._Marginals(self)
A property that applies the []
operator to outcomes.
For example, population.marginals[:2]
will marginalize the first two
elements of sequence outcomes.
Attributes that do not start with an underscore will also be forwarded.
For example, population.marginals.x
will marginalize the x
attribute
from e.g. namedtuple
outcomes.
659 def covariance( 660 self: 661 'Population[tuple[numbers.Rational, ...]] | Population[tuple[float, ...]]', 662 i: int, j: int) -> Fraction | float: 663 mean_i = self.marginals[i].mean() 664 mean_j = self.marginals[j].mean() 665 return try_fraction( 666 sum((outcome[i] - mean_i) * (outcome[j] - mean_j) * quantity 667 for outcome, quantity in self.items()), self.denominator())
702 def to_one_hot(self: C, outcomes: Sequence[T_co] | None = None) -> C: 703 """Converts the outcomes of this population to a one-hot representation. 704 705 Args: 706 outcomes: If provided, each outcome will be mapped to a `Vector` 707 where the element at `outcomes.index(outcome)` is set to `True` 708 and the rest to `False`, or all `False` if the outcome is not 709 in `outcomes`. 710 If not provided, `self.outcomes()` is used. 711 """ 712 if outcomes is None: 713 outcomes = self.outcomes() 714 715 data: MutableMapping[Vector[bool], int] = defaultdict(int) 716 for outcome, quantity in zip(self.outcomes(), self.quantities()): 717 value = [False] * len(outcomes) 718 if outcome in outcomes: 719 value[outcomes.index(outcome)] = True 720 data[Vector(value)] += quantity 721 return self._new_type(data)
Converts the outcomes of this population to a one-hot representation.
Arguments:
723 def split(self, 724 which: Callable[..., bool] | Collection[T_co] | None = None, 725 /, 726 *, 727 star: bool | None = None) -> tuple[C, C]: 728 """Splits this population into one containing selected items and another containing the rest. 729 730 The sum of the denominators of the results is equal to the denominator 731 of this population. 732 733 If you want to split more than two ways, use `Population.group_by()`. 734 735 Args: 736 which: Selects which outcomes to select. Options: 737 * A callable that takes an outcome and returns `True` if it 738 should be selected. 739 * A collection of outcomes to select. 740 star: Whether outcomes should be unpacked into separate arguments 741 before sending them to a callable `which`. 742 If not provided, this will be guessed based on the function 743 signature. 744 745 Returns: 746 A population consisting of the outcomes that were selected by 747 `which`, and a population consisting of the unselected outcomes. 748 """ 749 if which is None: 750 outcome_set = {self.min_outcome()} 751 else: 752 outcome_set = self._select_outcomes(which, star) 753 754 selected = {} 755 not_selected = {} 756 for outcome, count in self.items(): 757 if outcome in outcome_set: 758 selected[outcome] = count 759 else: 760 not_selected[outcome] = count 761 762 return self._new_type(selected), self._new_type(not_selected)
Splits this population into one containing selected items and another containing the rest.
The sum of the denominators of the results is equal to the denominator of this population.
If you want to split more than two ways, use Population.group_by()
.
Arguments:
- which: Selects which outcomes to select. Options:
- A callable that takes an outcome and returns
True
if it should be selected. - A collection of outcomes to select.
- A callable that takes an outcome and returns
- star: Whether outcomes should be unpacked into separate arguments
before sending them to a callable
which
. If not provided, this will be guessed based on the function signature.
Returns:
A population consisting of the outcomes that were selected by
which
, and a population consisting of the unselected outcomes.
808 @property 809 def group_by(self: C) -> _GroupBy[C]: 810 """A method-like property that splits this population into sub-populations based on a key function. 811 812 The sum of the denominators of the results is equal to the denominator 813 of this population. 814 815 This can be useful when using the law of total probability. 816 817 Example: `d10.group_by(lambda x: x % 3)` is 818 ```python 819 { 820 0: Die([3, 6, 9]), 821 1: Die([1, 4, 7, 10]), 822 2: Die([2, 5, 8]), 823 } 824 ``` 825 826 You can also use brackets to group by indexes or slices; or attributes 827 to group by those. Example: 828 829 ```python 830 Die([ 831 'aardvark', 832 'alligator', 833 'asp', 834 'blowfish', 835 'cat', 836 'crocodile', 837 ]).group_by[0] 838 ``` 839 840 produces 841 842 ```python 843 { 844 'a': Die(['aardvark', 'alligator', 'asp']), 845 'b': Die(['blowfish']), 846 'c': Die(['cat', 'crocodile']), 847 } 848 ``` 849 850 Args: 851 key_map: A function or mapping that takes outcomes and produces the 852 key of the corresponding outcome in the result. If this is 853 a Mapping, outcomes not in the mapping are their own key. 854 star: Whether outcomes should be unpacked into separate arguments 855 before sending them to a callable `key_map`. 856 If not provided, this will be guessed based on the function 857 signature. 858 """ 859 return Population._GroupBy(self)
A method-like property that splits this population into sub-populations based on a key function.
The sum of the denominators of the results is equal to the denominator of this population.
This can be useful when using the law of total probability.
Example: d10.group_by(lambda x: x % 3)
is
{
0: Die([3, 6, 9]),
1: Die([1, 4, 7, 10]),
2: Die([2, 5, 8]),
}
You can also use brackets to group by indexes or slices; or attributes to group by those. Example:
Die([
'aardvark',
'alligator',
'asp',
'blowfish',
'cat',
'crocodile',
]).group_by[0]
produces
{
'a': Die(['aardvark', 'alligator', 'asp']),
'b': Die(['blowfish']),
'c': Die(['cat', 'crocodile']),
}
Arguments:
- key_map: A function or mapping that takes outcomes and produces the key of the corresponding outcome in the result. If this is a Mapping, outcomes not in the mapping are their own key.
- star: Whether outcomes should be unpacked into separate arguments
before sending them to a callable
key_map
. If not provided, this will be guessed based on the function signature.
861 def sample(self) -> T_co: 862 """A single random sample from this population. 863 864 Note that this is always "with replacement" even for `Deck` since 865 instances are immutable. 866 867 This uses the standard `random` package and is not cryptographically 868 secure. 869 """ 870 # We don't use random.choices since that is based on floats rather than ints. 871 r = random.randrange(self.denominator()) 872 index = bisect.bisect_right(self.quantities('<='), r) 873 return self.outcomes()[index]
A single random sample from this population.
Note that this is always "with replacement" even for Deck
since
instances are immutable.
This uses the standard random
package and is not cryptographically
secure.
```python
def format(self, format_spec: str, /, **kwargs) -> str:
    """Formats this mapping as a string.

    `format_spec` should start with the output format,
    which can be:
    * `md` for Markdown (default)
    * `bbcode` for BBCode
    * `csv` for comma-separated values
    * `html` for HTML

    After this, you may optionally add a `:` followed by a series of
    requested columns. Allowed columns are:

    * `o`: Outcomes.
    * `*o`: Outcomes, unpacked if applicable.
    * `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
    * `p==`, `p<=`, `p>=`: Probabilities (0-1).
    * `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
    * `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".

    Columns may optionally be separated using `|` characters.

    The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
    columns are the outcomes (unpacked if applicable), the quantities, and
    the probabilities. The quantities are omitted from the default columns
    if any individual quantity is 10**30 or greater.
    """
    if not self.is_empty() and self.modal_quantity() < 10**30:
        default_column_spec = '*oq==%=='
    else:
        default_column_spec = '*o%=='
    if len(format_spec) == 0:
        format_spec = 'md:' + default_column_spec

    format_spec = format_spec.replace('|', '')

    parts = format_spec.split(':')

    if len(parts) == 1:
        output_format = parts[0]
        col_spec = default_column_spec
    elif len(parts) == 2:
        output_format = parts[0]
        col_spec = parts[1]
    else:
        raise ValueError('format_spec has too many colons.')

    match output_format:
        case 'md':
            return icepool.population.format.markdown(self, col_spec)
        case 'bbcode':
            return icepool.population.format.bbcode(self, col_spec)
        case 'csv':
            return icepool.population.format.csv(self, col_spec, **kwargs)
        case 'html':
            return icepool.population.format.html(self, col_spec)
        case _:
            raise ValueError(f"Unsupported output format '{output_format}'")
```
Formats this mapping as a string.

`format_spec` should start with the output format, which can be:

* `md` for Markdown (default)
* `bbcode` for BBCode
* `csv` for comma-separated values
* `html` for HTML

After this, you may optionally add a `:` followed by a series of
requested columns. Allowed columns are:

* `o`: Outcomes.
* `*o`: Outcomes, unpacked if applicable.
* `q==`, `q<=`, `q>=`: Quantities ==, <=, or >= each outcome.
* `p==`, `p<=`, `p>=`: Probabilities (0-1).
* `%==`, `%<=`, `%>=`: Probabilities (0%-100%).
* `i==`, `i<=`, `i>=`: EXPERIMENTAL: "1 in N".

Columns may optionally be separated using `|` characters.

The default setting is equal to `f'{die:md:*o|q==|%==}'`. Here the
columns are the outcomes (unpacked if applicable), the quantities, and
the probabilities. The quantities are omitted from the default columns
if any individual quantity is 10**30 or greater.
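The spec-parsing rules above can be sketched as a standalone helper (`parse_format_spec` is a hypothetical name, not part of icepool's API):

```python
def parse_format_spec(format_spec: str,
                      default_column_spec: str = '*oq==%==') -> tuple[str, str]:
    """Split a spec like 'md:*o|q==|%==' into (output_format, col_spec).

    Mirrors the rules described above: '|' separators are cosmetic and
    stripped, and a single ':' separates the output format from the columns.
    """
    if not format_spec:
        format_spec = 'md:' + default_column_spec
    format_spec = format_spec.replace('|', '')
    parts = format_spec.split(':')
    if len(parts) == 1:
        # Only an output format was given; use the default columns.
        return parts[0], default_column_spec
    elif len(parts) == 2:
        return parts[0], parts[1]
    else:
        raise ValueError('format_spec has too many colons.')


print(parse_format_spec('md:*o|q==|%=='))  # → ('md', '*oq==%==')
```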
```python
def tupleize(
    *args: 'T | icepool.Population[T]'
) -> 'tuple[T, ...] | icepool.Population[tuple[T, ...]]':
    """Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.

    For example:
    * `tupleize(1, 2)` would produce `(1, 2)`.
    * `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`,
      ... `(6, 0)`.
    * `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`,
      ... `(6, 5)`, `(6, 6)`.

    If `Population`s are provided, they must all be `Die` or all `Deck` and not
    a mixture of the two.

    Returns:
        If none of the outcomes is a `Population`, the result is a `tuple`
        with one element per argument. Otherwise, the result is a `Population`
        of the same type as the input `Population`, and the outcomes are
        `tuple`s with one element per argument.
    """
    return cartesian_product(*args, outcome_type=tuple)
```
Returns the Cartesian product of the arguments as `tuple`s or a `Population` thereof.

For example:

* `tupleize(1, 2)` would produce `(1, 2)`.
* `tupleize(d6, 0)` would produce a `Die` with outcomes `(1, 0)`, `(2, 0)`, ... `(6, 0)`.
* `tupleize(d6, d6)` would produce a `Die` with outcomes `(1, 1)`, `(1, 2)`, ... `(6, 5)`, `(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not
a mixture of the two.

Returns:
    If none of the outcomes is a `Population`, the result is a `tuple`
    with one element per argument. Otherwise, the result is a `Population`
    of the same type as the input `Population`, and the outcomes are
    `tuple`s with one element per argument.
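The Cartesian-product behavior for dice can be sketched with plain dicts mapping outcomes to quantities; quantities multiply because the rolls are independent. `tupleize_maps` is a hypothetical stand-in, not the real implementation.

```python
from itertools import product


def tupleize_maps(*dists: dict) -> dict[tuple, int]:
    """Cartesian product of outcome -> quantity mappings.

    Each resulting outcome is a tuple with one element per argument;
    quantities multiply, mirroring independent rolls.
    """
    result: dict[tuple, int] = {}
    for combo in product(*(d.items() for d in dists)):
        outcome = tuple(o for o, _ in combo)
        quantity = 1
        for _, q in combo:
            quantity *= q
        result[outcome] = result.get(outcome, 0) + quantity
    return result


d6 = {i: 1 for i in range(1, 7)}
print(len(tupleize_maps(d6, {0: 1})))  # 6 outcomes: (1, 0) ... (6, 0)
```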
```python
def vectorize(
    *args: 'T | icepool.Population[T]'
) -> 'Vector[T] | icepool.Population[Vector[T]]':
    """Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.

    For example:
    * `vectorize(1, 2)` would produce `Vector(1, 2)`.
    * `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`,
      `Vector(2, 0)`, ... `Vector(6, 0)`.
    * `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`,
      `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.

    If `Population`s are provided, they must all be `Die` or all `Deck` and not
    a mixture of the two.

    Returns:
        If none of the outcomes is a `Population`, the result is a `Vector`
        with one element per argument. Otherwise, the result is a `Population`
        of the same type as the input `Population`, and the outcomes are
        `Vector`s with one element per argument.
    """
    return cartesian_product(*args, outcome_type=Vector)
```
Returns the Cartesian product of the arguments as `Vector`s or a `Population` thereof.

For example:

* `vectorize(1, 2)` would produce `Vector(1, 2)`.
* `vectorize(d6, 0)` would produce a `Die` with outcomes `Vector(1, 0)`, `Vector(2, 0)`, ... `Vector(6, 0)`.
* `vectorize(d6, d6)` would produce a `Die` with outcomes `Vector(1, 1)`, `Vector(1, 2)`, ... `Vector(6, 5)`, `Vector(6, 6)`.

If `Population`s are provided, they must all be `Die` or all `Deck` and not
a mixture of the two.

Returns:
    If none of the outcomes is a `Population`, the result is a `Vector`
    with one element per argument. Otherwise, the result is a `Population`
    of the same type as the input `Population`, and the outcomes are
    `Vector`s with one element per argument.
```python
class Vector(Outcome, Sequence[T_co]):
    """Immutable tuple-like class that applies most operators elementwise.

    May become a variadic generic type in the future.
    """
    __slots__ = ['_data']

    _data: tuple[T_co, ...]

    def __init__(self,
                 elements: Iterable[T_co],
                 *,
                 truth_value: bool | None = None) -> None:
        self._data = tuple(elements)
        if any(isinstance(x, icepool.AgainExpression) for x in self._data):
            raise TypeError('Again is not a valid element of Vector.')
        self._truth_value = truth_value

    def __hash__(self) -> int:
        return hash((Vector, self._data))

    def __len__(self) -> int:
        return len(self._data)

    @overload
    def __getitem__(self, index: int) -> T_co:
        ...

    @overload
    def __getitem__(self, index: slice) -> 'Vector[T_co]':
        ...

    def __getitem__(self, index: int | slice) -> 'T_co | Vector[T_co]':
        if isinstance(index, int):
            return self._data[index]
        else:
            return Vector(self._data[index])

    def __iter__(self) -> Iterator[T_co]:
        return iter(self._data)

    # Unary operators.

    def unary_operator(self, op: Callable[..., U], *args,
                       **kwargs) -> 'Vector[U]':
        """Unary operators on `Vector` are applied elementwise.

        This is used for the standard unary operators
        `-, +, abs, ~, round, trunc, floor, ceil`
        """
        return Vector(op(x, *args, **kwargs) for x in self)

    def __neg__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.neg)

    def __pos__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.pos)

    def __invert__(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.invert)

    def abs(self) -> 'Vector[T_co]':
        return self.unary_operator(operator.abs)

    __abs__ = abs

    def round(self, ndigits: int | None = None) -> 'Vector':
        return self.unary_operator(round, ndigits)

    __round__ = round

    def trunc(self) -> 'Vector':
        return self.unary_operator(math.trunc)

    __trunc__ = trunc

    def floor(self) -> 'Vector':
        return self.unary_operator(math.floor)

    __floor__ = floor

    def ceil(self) -> 'Vector':
        return self.unary_operator(math.ceil)

    __ceil__ = ceil

    # Binary operators.

    def binary_operator(self,
                        other,
                        op: Callable[..., U],
                        *args,
                        compare_for_truth: bool = False,
                        **kwargs) -> 'Vector[U]':
        """Binary operators on `Vector` are applied elementwise.

        If the other operand is also a `Vector`, the operator is applied to each
        pair of elements from `self` and `other`. Both must have the same
        length.

        Otherwise the other operand is broadcast to each element of `self`.

        This is used for the standard binary operators
        `+, -, *, /, //, %, **, <<, >>, &, |, ^`.

        `@` is not included due to its different meaning in `Die`.

        This is also used for the comparators
        `<, <=, >, >=, ==, !=`.

        In this case, the result also has a truth value based on lexicographic
        ordering.
        """
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        if isinstance(other, Vector):
            if len(self) == len(other):
                if compare_for_truth:
                    truth_value = cast(bool, op(self._data, other._data))
                else:
                    truth_value = None
                return Vector(
                    (op(x, y, *args, **kwargs) for x, y in zip(self, other)),
                    truth_value=truth_value)
            else:
                raise IndexError(
                    f'Binary operators on Vectors are only valid if both are the same length ({len(self)} vs. {len(other)}).'
                )
        else:
            return Vector((op(x, other, *args, **kwargs) for x in self))

    def reverse_binary_operator(self, other, op: Callable[..., U], *args,
                                **kwargs) -> 'Vector[U]':
        """Reverse version of `binary_operator()`."""
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        if isinstance(other, Vector):
            if len(self) == len(other):
                return Vector(
                    op(y, x, *args, **kwargs) for x, y in zip(self, other))
            else:
                raise IndexError(
                    f'Binary operators on Vectors are only valid if both are the same length ({len(other)} vs. {len(self)}).'
                )
        else:
            return Vector(op(other, x, *args, **kwargs) for x in self)

    def __add__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.add)

    def __radd__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.add)

    def __sub__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.sub)

    def __rsub__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.sub)

    def __mul__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.mul)

    def __rmul__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.mul)

    def __truediv__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.truediv)

    def __rtruediv__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.truediv)

    def __floordiv__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.floordiv)

    def __rfloordiv__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.floordiv)

    def __pow__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.pow)

    def __rpow__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.pow)

    def __mod__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.mod)

    def __rmod__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.mod)

    def __lshift__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.lshift)

    def __rlshift__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.lshift)

    def __rshift__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.rshift)

    def __rrshift__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.rshift)

    def __and__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.and_)

    def __rand__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.and_)

    def __or__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.or_)

    def __ror__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.or_)

    def __xor__(self, other) -> 'Vector':
        return self.binary_operator(other, operator.xor)

    def __rxor__(self, other) -> 'Vector':
        return self.reverse_binary_operator(other, operator.xor)

    # Comparators.
    # These return a value with a truth value, but not a bool.

    def __lt__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.lt, compare_for_truth=True)

    def __le__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.le, compare_for_truth=True)

    def __gt__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.gt, compare_for_truth=True)

    def __ge__(self, other) -> 'Vector':  # type: ignore
        if not isinstance(other, Vector):
            return NotImplemented
        return self.binary_operator(other, operator.ge, compare_for_truth=True)

    def __eq__(self, other) -> 'Vector | bool':  # type: ignore
        if not isinstance(other, Vector):
            return False
        return self.binary_operator(other, operator.eq, compare_for_truth=True)

    def __ne__(self, other) -> 'Vector | bool':  # type: ignore
        if not isinstance(other, Vector):
            return True
        return self.binary_operator(other, operator.ne, compare_for_truth=True)

    def __bool__(self) -> bool:
        if self._truth_value is None:
            raise TypeError(
                'Vector only has a truth value if it is the result of a comparison operator.'
            )
        return self._truth_value

    # Sequence manipulation.

    def append(self, other) -> 'Vector':
        return Vector(self._data + (other, ))

    def concatenate(self, other: 'Iterable') -> 'Vector':
        return Vector(itertools.chain(self, other))

    # Strings.

    def __repr__(self) -> str:
        return type(self).__qualname__ + '(' + repr(self._data) + ')'

    def __str__(self) -> str:
        return type(self).__qualname__ + '(' + str(self._data) + ')'
```
Immutable tuple-like class that applies most operators elementwise.
May become a variadic generic type in the future.
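The elementwise behavior can be illustrated with a minimal tuple wrapper. `MiniVector` is a toy stand-in for illustration only, not the real `Vector`, which supports many more operators and truth values.

```python
import operator
from typing import Callable


class MiniVector(tuple):
    """Toy illustration of elementwise operators; not the real Vector."""

    def _elementwise(self, other, op: Callable) -> 'MiniVector':
        if isinstance(other, MiniVector):
            # Pairwise application; both sides must have the same length.
            if len(self) != len(other):
                raise IndexError('lengths must match')
            return MiniVector(op(x, y) for x, y in zip(self, other))
        # Broadcast a scalar to every element.
        return MiniVector(op(x, other) for x in self)

    def __add__(self, other) -> 'MiniVector':
        return self._elementwise(other, operator.add)

    def __mul__(self, other) -> 'MiniVector':
        return self._elementwise(other, operator.mul)


print(MiniVector((1, 2)) + MiniVector((10, 20)))  # (11, 22)
print(MiniVector((1, 2)) * 3)                     # (3, 6)
```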
Unary operators on `Vector` are applied elementwise.

This is used for the standard unary operators
`-, +, abs, ~, round, trunc, floor, ceil`.
Binary operators on `Vector` are applied elementwise.

If the other operand is also a `Vector`, the operator is applied to each
pair of elements from `self` and `other`. Both must have the same length.

Otherwise the other operand is broadcast to each element of `self`.

This is used for the standard binary operators
`+, -, *, /, //, %, **, <<, >>, &, |, ^`.

`@` is not included due to its different meaning in `Die`.

This is also used for the comparators
`<, <=, >, >=, ==, !=`.

In this case, the result also has a truth value based on lexicographic ordering.
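The dual nature of comparisons (an elementwise result plus a lexicographic truth value) can be sketched as follows; `compare_vectors` is a hypothetical helper working on plain tuples, not icepool's API.

```python
import operator


def compare_vectors(a: tuple, b: tuple, op) -> tuple[tuple, bool]:
    """Apply a comparator elementwise, plus an overall truth value.

    Mirrors compare_for_truth=True described above: the elementwise
    results form the new vector, while the truth value comes from
    comparing the underlying tuples directly (lexicographic order).
    """
    if len(a) != len(b):
        raise IndexError('lengths must match')
    elementwise = tuple(op(x, y) for x, y in zip(a, b))
    truth_value = op(a, b)  # lexicographic comparison of the tuples
    return elementwise, truth_value


print(compare_vectors((1, 5), (2, 3), operator.lt))  # ((True, False), True)
```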
Reverse version of `binary_operator()`.
```python
class Symbols(Mapping[str, int]):
    """EXPERIMENTAL: Immutable multiset of single characters.

    Spaces, dashes, and underscores cannot be used as symbols.

    Operations include:

    | Operation                   | Count / notes                      |
    |:----------------------------|:-----------------------------------|
    | `additive_union`, `+`       | `l + r`                            |
    | `difference`, `-`           | `l - r`                            |
    | `intersection`, `&`         | `min(l, r)`                        |
    | `union`, `\\|`              | `max(l, r)`                        |
    | `symmetric_difference`, `^` | `abs(l - r)`                       |
    | `multiply_counts`, `*`      | `count * n`                        |
    | `divide_counts`, `//`       | `count // n`                       |
    | `issubset`, `<=`            | all counts l <= r                  |
    | `issuperset`, `>=`          | all counts l >= r                  |
    | `==`                        | all counts l == r                  |
    | `!=`                        | any count l != r                   |
    | unary `+`                   | drop all negative counts           |
    | unary `-`                   | reverses the sign of all counts    |

    `<` and `>` are lexicographic orderings rather than subset relations.
    Specifically, they compare the count of each character in alphabetical
    order. For example:
    * `'a' > ''` since one `'a'` is more than zero `'a'`s.
    * `'a' > 'bb'` since `'a'` is compared first.
    * `'-a' < 'bb'` since the left side has -1 `'a'`s.
    * `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.

    Binary operators other than `*` and `//` implicitly convert the other
    argument to `Symbols` using the constructor.

    Subscripting with a single character returns the count of that character
    as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`.
    You can also access it as an attribute, e.g. `symbols.a`.

    Subscripting with multiple characters returns a `Symbols` with only those
    characters, dropping the rest.
    E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`.
    Again you can also access it as an attribute, e.g. `symbols.ab`.
    This is useful for reducing the outcome space, which reduces computational
    cost for further operations. If you want to keep only a single character
    while keeping the type as `Symbols`, you can subscript with that character
    plus an unused character.

    Subscripting with duplicate characters currently has no further effect, but
    this may change in the future.

    `Population.marginals` forwards attribute access, so you can use e.g.
    `die.marginals.a` to get the marginal distribution of `a`s.

    Note that attribute access only works with valid identifiers,
    so e.g. emojis would need to use the subscript method.
    """
    _data: Mapping[str, int]

    def __new__(cls,
                symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """Constructor.

        The argument can be a string, an iterable of characters, or a mapping of
        characters to counts.

        If the argument is a string, negative symbols can be specified using a
        minus sign optionally surrounded by whitespace. For example,
        `a - b` has one positive a and one negative b.
        """
        self = super(Symbols, cls).__new__(cls)
        if isinstance(symbols, str):
            data: MutableMapping[str, int] = defaultdict(int)
            positive, *negative = re.split(r'\s*-\s*', symbols)
            for s in positive:
                data[s] += 1
            if len(negative) > 1:
                raise ValueError('Multiple dashes not allowed.')
            if len(negative) == 1:
                for s in negative[0]:
                    data[s] -= 1
        elif isinstance(symbols, Mapping):
            data = defaultdict(int, symbols)
        else:
            data = defaultdict(int)
            for s in symbols:
                data[s] += 1

        for s in data:
            if len(s) != 1:
                raise ValueError(f'Symbol {s} is not a single character.')
            if re.match(r'[\s_-]', s):
                raise ValueError(
                    f'{s} (U+{ord(s):04X}) is not a legal symbol.')

        self._data = defaultdict(int,
                                 {k: data[k]
                                  for k in sorted(data.keys())})

        return self

    @classmethod
    def _new_raw(cls, data: defaultdict[str, int]) -> 'Symbols':
        self = super(Symbols, cls).__new__(cls)
        self._data = data
        return self

    # Mapping interface.

    def __getitem__(self, key: str) -> 'int | Symbols':  # type: ignore
        if len(key) == 1:
            return self._data[key]
        else:
            return Symbols._new_raw(
                defaultdict(int, {s: self._data[s]
                                  for s in key}))

    def __getattr__(self, key: str) -> 'int | Symbols':
        if key[0] == '_':
            raise AttributeError(key)
        return self[key]

    def __iter__(self) -> Iterator[str]:
        return iter(self._data)

    def __len__(self) -> int:
        return len(self._data)

    # Binary operators.

    def additive_union(self, *args:
                       Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The sum of counts of each symbol."""
        return functools.reduce(operator.add, args, self)

    def __add__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] += count
        return Symbols._new_raw(data)

    __radd__ = __add__

    def difference(self, *args:
                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The difference between the counts of each symbol."""
        return functools.reduce(operator.sub, args, self)

    def __sub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] -= count
        return Symbols._new_raw(data)

    def __rsub__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, Symbols(other)._data)
        for s, count in self.items():
            data[s] -= count
        return Symbols._new_raw(data)

    def intersection(self, *args:
                     Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The min count of each symbol."""
        return functools.reduce(operator.and_, args, self)

    def __and__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data: defaultdict[str, int] = defaultdict(int)
        for s, count in Symbols(other).items():
            data[s] = min(self.get(s, 0), count)
        return Symbols._new_raw(data)

    __rand__ = __and__

    def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The max count of each symbol."""
        return functools.reduce(operator.or_, args, self)

    def __or__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = max(data[s], count)
        return Symbols._new_raw(data)

    __ror__ = __or__

    def symmetric_difference(
            self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        """The absolute difference in symbol counts between the two sets."""
        return self ^ other

    def __xor__(self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        data = defaultdict(int, self._data)
        for s, count in Symbols(other).items():
            data[s] = abs(data[s] - count)
        return Symbols._new_raw(data)

    __rxor__ = __xor__

    def multiply_counts(self, other: int) -> 'Symbols':
        """Multiplies all counts by an integer."""
        return self * other

    def __mul__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count * other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    __rmul__ = __mul__

    def divide_counts(self, other: int) -> 'Symbols':
        """Divides all counts by an integer, rounding down."""
        data = defaultdict(int, {
            s: count // other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def count_subset(self,
                     divisor: Iterable[str] | Mapping[str, int],
                     *,
                     empty_divisor: int | None = None) -> int:
        """The number of times the divisor is contained in this multiset."""
        if not isinstance(divisor, Mapping):
            divisor = Counter(divisor)
        result = None
        for s, count in divisor.items():
            current = self._data[s] // count
            if result is None or current < result:
                result = current
        if result is None:
            if empty_divisor is None:
                raise ZeroDivisionError('Divisor is empty.')
            else:
                return empty_divisor
        else:
            return result

    @overload
    def __floordiv__(self, other: int) -> 'Symbols':
        """Same as divide_counts()."""

    @overload
    def __floordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        """Same as count_subset()."""

    @overload
    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        ...

    def __floordiv__(
            self,
            other: int | Iterable[str] | Mapping[str, int]) -> 'Symbols | int':
        if isinstance(other, int):
            return self.divide_counts(other)
        elif isinstance(other, Iterable):
            return self.count_subset(other)
        else:
            return NotImplemented

    def __rfloordiv__(self, other: Iterable[str] | Mapping[str, int]) -> int:
        return Symbols(other).count_subset(self)

    def modulo_counts(self, other: int) -> 'Symbols':
        return self % other

    def __mod__(self, other: int) -> 'Symbols':
        if not isinstance(other, int):
            return NotImplemented
        data = defaultdict(int, {
            s: count % other
            for s, count in self.items()
        })
        return Symbols._new_raw(data)

    def __lt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] < other[k]:  # type: ignore
                return True
            if self[k] > other[k]:  # type: ignore
                return False
        return False

    def __gt__(self, other: 'Symbols') -> bool:
        if not isinstance(other, Symbols):
            return NotImplemented
        keys = sorted(set(self.keys()) | set(other.keys()))
        for k in keys:
            if self[k] > other[k]:  # type: ignore
                return True
            if self[k] < other[k]:  # type: ignore
                return False
        return False

    def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a subset of the other.

        Same as `<=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self <= other

    def __le__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] <= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` is a superset of the other.

        Same as `>=`.

        Note that the `<` and `>` operators are lexicographic orderings,
        not proper subset relations.
        """
        return self >= other

    def __ge__(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        other = Symbols(other)
        return all(self[s] >= other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
        """Whether `self` has any positive elements in common with the other.

        Raises:
            ValueError if either has negative elements.
        """
        other = Symbols(other)
        return any(self[s] > 0 and other[s] > 0  # type: ignore
                   for s in self)

    def __eq__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except ValueError:
            return NotImplemented
        return all(self[s] == other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    def __ne__(self, other) -> bool:
        if isinstance(other, (icepool.Population, icepool.AgainExpression)):
            return NotImplemented  # delegate to the other
        try:
            other = Symbols(other)
        except ValueError:
            return NotImplemented
        return any(self[s] != other[s]  # type: ignore
                   for s in itertools.chain(self, other))

    # Unary operators.

    def has_negative_counts(self) -> bool:
        """Whether any counts are negative."""
        return any(c < 0 for c in self.values())

    def __pos__(self) -> 'Symbols':
        data = defaultdict(int, {
            s: count
            for s, count in self.items() if count > 0
        })
        return Symbols._new_raw(data)

    def __neg__(self) -> 'Symbols':
        data = defaultdict(int, {s: -count for s, count in self.items()})
        return Symbols._new_raw(data)

    @cached_property
    def _hash(self) -> int:
        return hash((Symbols, str(self)))

    def __hash__(self) -> int:
        return self._hash

    def count(self) -> int:
        """The total number of elements."""
        return sum(self._data.values())

    @cached_property
    def _str(self) -> str:
        sorted_keys = sorted(self)
        positive = ''.join(s * self._data[s] for s in sorted_keys
                           if self._data[s] > 0)
        negative = ''.join(s * -self._data[s] for s in sorted_keys
                           if self._data[s] < 0)
        if positive:
            if negative:
                return positive + ' - ' + negative
            else:
                return positive
        else:
            if negative:
                return '-' + negative
            else:
                return ''

    def __str__(self) -> str:
        """All symbols in unary form (i.e. including duplicates) in ascending order.

        If there are negative elements, they are listed following a ` - ` sign.
        """
        return self._str

    def __repr__(self) -> str:
        return type(self).__qualname__ + f"('{str(self)}')"
```
EXPERIMENTAL: Immutable multiset of single characters.

Spaces, dashes, and underscores cannot be used as symbols.

Operations include:

| Operation | Count / notes |
|---|---|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `issubset`, `<=` | all counts `l <= r` |
| `issuperset`, `>=` | all counts `l >= r` |
| `==` | all counts `l == r` |
| `!=` | any count `l != r` |
| unary `+` | drop all negative counts |
| unary `-` | reverses the sign of all counts |

`<` and `>` are lexicographic orderings rather than subset relations. Specifically, they compare the count of each character in alphabetical order. For example:

* `'a' > ''` since one `'a'` is more than zero `'a'`s.
* `'a' > 'bb'` since `'a'` is compared first.
* `'-a' < 'bb'` since the left side has -1 `'a'`s.
* `'a' < 'ab'` since the `'a'`s are equal but the right side has more `'b'`s.

Binary operators other than `*` and `//` implicitly convert the other argument to `Symbols` using the constructor.

Subscripting with a single character returns the count of that character as an `int`. E.g. `symbols['a']` -> number of `a`s as an `int`. You can also access it as an attribute, e.g. `symbols.a`.

Subscripting with multiple characters returns a `Symbols` with only those characters, dropping the rest. E.g. `symbols['ab']` -> number of `a`s and `b`s as a `Symbols`. Again you can also access it as an attribute, e.g. `symbols.ab`. This is useful for reducing the outcome space, which reduces computational cost for further operations. If you want to keep only a single character while keeping the type as `Symbols`, you can subscript with that character plus an unused character.

Subscripting with duplicate characters currently has no further effect, but this may change in the future.

`Population.marginals` forwards attribute access, so you can use e.g. `die.marginals.a` to get the marginal distribution of `a`s.

Note that attribute access only works with valid identifiers, so e.g. emojis would need to use the subscript method.
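The per-character count semantics in the table above can be sketched with a plain `collections.Counter` standing in for `Symbols` (a hypothetical simplification; the library adds validation, ordering, and operator overloads):

```python
from collections import Counter


def intersection(l: Counter, r: Counter) -> Counter:
    # Per-symbol min count, as in `Symbols & Symbols`.
    return Counter({s: min(l[s], r[s]) for s in l.keys() | r.keys()})


def symmetric_difference(l: Counter, r: Counter) -> Counter:
    # Per-symbol abs(l - r), as in `Symbols ^ Symbols`.
    return Counter({s: abs(l[s] - r[s]) for s in l.keys() | r.keys()})


left = Counter('aab')
right = Counter('abb')
print(intersection(left, right))          # one 'a' and one 'b'
print(symmetric_difference(left, right))  # one 'a' and one 'b'
```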
```python
def __new__(cls,
            symbols: str | Iterable[str] | Mapping[str, int]) -> 'Symbols':
    """Constructor.

    The argument can be a string, an iterable of characters, or a mapping of
    characters to counts.

    If the argument is a string, negative symbols can be specified using a
    minus sign optionally surrounded by whitespace. For example,
    `a - b` has one positive a and one negative b.
    """
    self = super(Symbols, cls).__new__(cls)
    if isinstance(symbols, str):
        data: MutableMapping[str, int] = defaultdict(int)
        positive, *negative = re.split(r'\s*-\s*', symbols)
        for s in positive:
            data[s] += 1
        if len(negative) > 1:
            raise ValueError('Multiple dashes not allowed.')
        if len(negative) == 1:
            for s in negative[0]:
                data[s] -= 1
    elif isinstance(symbols, Mapping):
        data = defaultdict(int, symbols)
    else:
        data = defaultdict(int)
        for s in symbols:
            data[s] += 1

    for s in data:
        if len(s) != 1:
            raise ValueError(f'Symbol {s} is not a single character.')
        if re.match(r'[\s_-]', s):
            raise ValueError(
                f'{s} (U+{ord(s):04X}) is not a legal symbol.')

    self._data = defaultdict(int,
                             {k: data[k]
                              for k in sorted(data.keys())})

    return self
```
Constructor.

The argument can be a string, an iterable of characters, or a mapping of characters to counts.

If the argument is a string, negative symbols can be specified using a minus sign optionally surrounded by whitespace. For example, `a - b` has one positive `a` and one negative `b`.
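The minus-sign parsing described above can be sketched in plain Python (a hypothetical stand-alone helper mirroring the `re.split` call in the source):

```python
import re
from collections import defaultdict


def parse_symbols(text: str) -> dict[str, int]:
    # Split on a single dash, optionally surrounded by whitespace;
    # characters before it count +1 each, characters after it count -1 each.
    positive, *negative = re.split(r'\s*-\s*', text)
    if len(negative) > 1:
        raise ValueError('Multiple dashes not allowed.')
    data: defaultdict[str, int] = defaultdict(int)
    for s in positive:
        data[s] += 1
    if negative:
        for s in negative[0]:
            data[s] -= 1
    return dict(data)


print(parse_symbols('aab - b'))  # {'a': 2, 'b': 0}
```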
```python
def additive_union(self, *args:
                   Iterable[str] | Mapping[str, int]) -> 'Symbols':
    """The sum of counts of each symbol."""
    return functools.reduce(operator.add, args, self)
```
The sum of counts of each symbol.
```python
def difference(self, *args:
               Iterable[str] | Mapping[str, int]) -> 'Symbols':
    """The difference between the counts of each symbol."""
    return functools.reduce(operator.sub, args, self)
```
The difference between the counts of each symbol.
```python
def intersection(self, *args:
                 Iterable[str] | Mapping[str, int]) -> 'Symbols':
    """The min count of each symbol."""
    return functools.reduce(operator.and_, args, self)
```
The min count of each symbol.
```python
def union(self, *args: Iterable[str] | Mapping[str, int]) -> 'Symbols':
    """The max count of each symbol."""
    return functools.reduce(operator.or_, args, self)
```
The max count of each symbol.
```python
def symmetric_difference(
        self, other: Iterable[str] | Mapping[str, int]) -> 'Symbols':
    """The absolute difference in symbol counts between the two sets."""
    return self ^ other
```
The absolute difference in symbol counts between the two sets.
```python
def multiply_counts(self, other: int) -> 'Symbols':
    """Multiplies all counts by an integer."""
    return self * other
```
Multiplies all counts by an integer.
```python
def divide_counts(self, other: int) -> 'Symbols':
    """Divides all counts by an integer, rounding down."""
    data = defaultdict(int, {
        s: count // other
        for s, count in self.items()
    })
    return Symbols._new_raw(data)
```
Divides all counts by an integer, rounding down.
```python
def count_subset(self,
                 divisor: Iterable[str] | Mapping[str, int],
                 *,
                 empty_divisor: int | None = None) -> int:
    """The number of times the divisor is contained in this multiset."""
    if not isinstance(divisor, Mapping):
        divisor = Counter(divisor)
    result = None
    for s, count in divisor.items():
        current = self._data[s] // count
        if result is None or current < result:
            result = current
    if result is None:
        if empty_divisor is None:
            raise ZeroDivisionError('Divisor is empty.')
        else:
            return empty_divisor
    else:
        return result
```
The number of times the divisor is contained in this multiset.
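`count_subset` is essentially a multiset floor division; a minimal stand-alone sketch using `Counter` (hypothetical helper, not the library API):

```python
from collections import Counter


def count_subset(multiset: str, divisor: str) -> int:
    # How many whole copies of `divisor` fit inside `multiset`:
    # the minimum over symbols of count // required count.
    m, d = Counter(multiset), Counter(divisor)
    if not d:
        raise ZeroDivisionError('Divisor is empty.')
    return min(m[s] // count for s, count in d.items())


print(count_subset('aaabb', 'ab'))  # 2: two whole copies of 'ab' fit
```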
```python
def issubset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
    """Whether `self` is a subset of the other.

    Same as `<=`.

    Note that the `<` and `>` operators are lexicographic orderings,
    not proper subset relations.
    """
    return self <= other
```
Whether `self` is a subset of the other.

Same as `<=`.

Note that the `<` and `>` operators are lexicographic orderings, not proper subset relations.
```python
def issuperset(self, other: Iterable[str] | Mapping[str, int]) -> bool:
    """Whether `self` is a superset of the other.

    Same as `>=`.

    Note that the `<` and `>` operators are lexicographic orderings,
    not proper subset relations.
    """
    return self >= other
```
Whether `self` is a superset of the other.

Same as `>=`.

Note that the `<` and `>` operators are lexicographic orderings, not proper subset relations.
```python
def isdisjoint(self, other: Iterable[str] | Mapping[str, int]) -> bool:
    """Whether `self` has any positive elements in common with the other.

    Raises:
        ValueError if either has negative elements.
    """
    other = Symbols(other)
    return any(self[s] > 0 and other[s] > 0  # type: ignore
               for s in self)
```
Whether `self` has any positive elements in common with the other.

Raises:

* ValueError if either has negative elements.
A symbol indicating that the die should be rolled again, usually with some operation applied.

This is designed to be used with the `Die()` constructor. `AgainExpression`s should not be fed to functions or methods other than `Die()`, but they can be used with operators. Examples:

* `Again + 6`: Roll again and add 6.
* `Again + Again`: Roll again twice and sum.

The `again_count`, `again_depth`, and `again_end` arguments to `Die()` affect how these arguments are processed. At most one of `again_count` or `again_depth` may be provided; if neither is provided, the behavior is as `again_depth=1`.

For finer control over rolling processes, use e.g. `Die.map()` instead.

Count mode

When `again_count` is provided, we start with one roll queued and execute one roll at a time. For every `Again` we roll, we queue another roll. If we run out of rolls, we sum the rolls to find the result. If the total number of rolls (not including the initial roll) would exceed `again_count`, we reroll the entire process, effectively conditioning the process on not rolling more than `again_count` extra dice.

This mode only allows "additive" expressions to be used with `Again`, which means that only the following operators are allowed:

* Binary `+`.
* `n @ AgainExpression`, where `n` is a non-negative `int` or `Population`.

Furthermore, the `+` operator is assumed to be associative and commutative. For example, `str` or `tuple` outcomes will not produce elements with a definite order.

Depth mode

When `again_depth=0`, `again_end` is directly substituted for each occurrence of `Again`. For other values of `again_depth`, the result for `again_depth-1` is substituted for each occurrence of `Again`.

If `again_end=icepool.Reroll`, then any `AgainExpression`s in the final depth are rerolled.

Rerolls

`Reroll` only rerolls that particular die, not the entire process. Any such rerolls do not count against the `again_count` or `again_depth` limit.

If `again_end=icepool.Reroll`:

* Count mode: Any result that would cause the number of rolls to exceed `again_count` is rerolled.
* Depth mode: Any `AgainExpression`s in the final depth level are rerolled.
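Depth mode can be sketched without the library: substitute the depth-`n-1` distribution for `Again`. The sketch below (hypothetical helper, exact `Fraction` arithmetic) computes a d6 that explodes on a 6, i.e. the analogue of `Die([1, 2, 3, 4, 5, 6 + Again])` with `again_depth=depth` and `again_end=0`:

```python
from fractions import Fraction


def explode_d6(depth: int) -> dict[int, Fraction]:
    # Distribution of a d6 where rolling a 6 adds another roll,
    # up to `depth` levels of substitution; at the final depth,
    # Again is replaced by 0 (the again_end value).
    if depth == 0:
        inner = {0: Fraction(1)}
    else:
        inner = explode_d6(depth - 1)
    dist: dict[int, Fraction] = {}
    for face in range(1, 7):
        if face == 6:
            # Substitute the inner distribution for Again.
            for extra, p in inner.items():
                dist[6 + extra] = dist.get(6 + extra, Fraction(0)) + p / 6
        else:
            dist[face] = dist.get(face, Fraction(0)) + Fraction(1, 6)
    return dist


print(explode_d6(1)[7])  # chance of rolling a 6 then a 1: 1/36
```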
```python
class CountsKeysView(KeysView[T], Sequence[T]):
    """This functions as both a `KeysView` and a `Sequence`."""

    def __init__(self, counts: Counts[T]):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._keys[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._keys == other
```
This functions as both a `KeysView` and a `Sequence`.
```python
class CountsValuesView(ValuesView[int], Sequence[int]):
    """This functions as both a `ValuesView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._values[index]

    def __len__(self) -> int:
        return len(self._mapping)

    def __eq__(self, other):
        return self._mapping._values == other
```
This functions as both a `ValuesView` and a `Sequence`.
```python
class CountsItemsView(ItemsView[T, int], Sequence[tuple[T, int]]):
    """This functions as both an `ItemsView` and a `Sequence`."""

    def __init__(self, counts: Counts):
        self._mapping = counts

    def __getitem__(self, index):
        return self._mapping._items[index]

    def __eq__(self, other):
        return self._mapping._items == other
```
This functions as both an `ItemsView` and a `Sequence`.
```python
def from_cumulative(outcomes: Sequence[T],
                    cumulative: 'Sequence[int] | Sequence[icepool.Die[bool]]',
                    *,
                    reverse: bool = False) -> 'icepool.Die[T]':
    """Constructs a `Die` from a sequence of cumulative values.

    Args:
        outcomes: The outcomes of the resulting die. Sorted order is recommended
            but not necessary.
        cumulative: The cumulative values (inclusive) of the outcomes in the
            order they are given to this function. These may be:
            * `int` cumulative quantities.
            * Dice representing the cumulative distribution at that point.
        reverse: Iff true, both of the arguments will be reversed. This allows
            e.g. constructing using a survival distribution.
    """
    if len(outcomes) == 0:
        return icepool.Die({})

    if reverse:
        outcomes = list(reversed(outcomes))
        cumulative = list(reversed(cumulative))  # type: ignore

    prev = 0
    d = {}

    if isinstance(cumulative[0], icepool.Die):
        cumulative = commonize_denominator(*cumulative)
        for outcome, die in zip(outcomes, cumulative):
            d[outcome] = die.quantity('!=', False) - prev
            prev = die.quantity('!=', False)
    elif isinstance(cumulative[0], int):
        cumulative = cast(Sequence[int], cumulative)
        for outcome, quantity in zip(outcomes, cumulative):
            d[outcome] = quantity - prev
            prev = quantity
    else:
        raise TypeError(
            f'Unsupported type {type(cumulative)} for cumulative values.')

    return icepool.Die(d)
```
Constructs a `Die` from a sequence of cumulative values.

Arguments:

* outcomes: The outcomes of the resulting die. Sorted order is recommended but not necessary.
* cumulative: The cumulative values (inclusive) of the outcomes in the order they are given to this function. These may be:
    * `int` cumulative quantities.
    * Dice representing the cumulative distribution at that point.
* reverse: Iff true, both of the arguments will be reversed. This allows e.g. constructing using a survival distribution.
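For the `int` case, the conversion is just successive differences of the cumulative quantities; a minimal sketch (hypothetical helper name):

```python
def from_cumulative_ints(outcomes: list, cumulative: list[int]) -> dict:
    # Each per-outcome quantity is the step up from the previous
    # cumulative (inclusive) value.
    quantities = {}
    prev = 0
    for outcome, c in zip(outcomes, cumulative):
        quantities[outcome] = c - prev
        prev = c
    return quantities


print(from_cumulative_ints([1, 2, 3], [1, 3, 6]))  # {1: 1, 2: 2, 3: 3}
```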
```python
def from_rv(rv, outcomes: Sequence[int] | Sequence[float], denominator: int,
            **kwargs) -> 'icepool.Die[int] | icepool.Die[float]':
    """Constructs a `Die` from a rv object (as `scipy.stats`).

    This is done using the CDF.

    Args:
        rv: A rv object (as `scipy.stats`).
        outcomes: An iterable of `int`s or `float`s that will be the outcomes
            of the resulting `Die`.
            If the distribution is discrete, outcomes must be `int`s.
            Some outcomes may be omitted if their probability is too small
            compared to the denominator.
        denominator: The denominator of the resulting `Die` will be set to this.
        **kwargs: These will be forwarded to `rv.cdf()`.
    """
    if hasattr(rv, 'pdf'):
        # Continuous distributions use midpoints.
        midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
        cdf = rv.cdf(midpoints, **kwargs)
        quantities_le = tuple(int(round(x * denominator))
                              for x in cdf) + (denominator, )
    else:
        cdf = rv.cdf(outcomes, **kwargs)
        quantities_le = tuple(int(round(x * denominator)) for x in cdf)
    return from_cumulative(outcomes, quantities_le)
```
Constructs a `Die` from a rv object (as `scipy.stats`).

This is done using the CDF.

Arguments:

* rv: A rv object (as `scipy.stats`).
* outcomes: An iterable of `int`s or `float`s that will be the outcomes of the resulting `Die`. If the distribution is discrete, outcomes must be `int`s. Some outcomes may be omitted if their probability is too small compared to the denominator.
* denominator: The denominator of the resulting `Die` will be set to this.
* `**kwargs`: These will be forwarded to `rv.cdf()`.
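The discretization can be sketched without `scipy`, using a hand-written standard normal CDF via `math.erf`. As in the source, a continuous distribution is sampled at midpoints between outcomes, with the final outcome absorbing the remaining tail (hypothetical helper names):

```python
import math


def normal_cdf(x: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def cumulative_quantities(outcomes: list[float], denominator: int) -> list[int]:
    # Evaluate the CDF at midpoints between outcomes and round to
    # integer quantities; the last entry is the full denominator.
    midpoints = [(a + b) / 2 for a, b in zip(outcomes[:-1], outcomes[1:])]
    q = [round(normal_cdf(m) * denominator) for m in midpoints]
    q.append(denominator)
    return q


print(cumulative_quantities([-1.0, 0.0, 1.0], 1000))  # [309, 691, 1000]
```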
```python
def pointwise_max(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling >= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling >= that
    outcome is the same as the highest chance of rolling >= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the highest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get >= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = commonize_denominator(*args)
    outcomes = sorted_union(*args)
    cumulative = [
        min(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
```
Selects the highest chance of rolling >= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling >= that outcome is the same as the highest chance of rolling >= that outcome among the arguments.

Equivalently, any quantile in the result is the highest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get >= a threshold that is known but could change depending on the situation.

Arguments:

* dice: Either an iterable of dice, or two or more dice as separate arguments.
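The quantile-wise construction (for both `pointwise_max` and `pointwise_min`) can be sketched directly on cumulative distributions: a lower chance of rolling `<=` an outcome means a higher chance of rolling `>=` it, so `pointwise_max` takes the pointwise minimum of the `<=` quantities. The helper names below are hypothetical:

```python
from fractions import Fraction


def pointwise_max_cdf(*cdfs: dict[int, Fraction]) -> dict[int, Fraction]:
    # Each cdf maps outcome -> P(roll <= outcome) over a shared outcome set;
    # taking the min of P(<=) maximizes P(>=) at every outcome.
    outcomes = sorted(set().union(*cdfs))
    return {x: min(cdf[x] for cdf in cdfs) for x in outcomes}


# Compare a d6 against a die that always rolls 4.
d6 = {x: Fraction(x, 6) for x in range(1, 7)}
const4 = {x: Fraction(0) if x < 4 else Fraction(1) for x in range(1, 7)}
result = pointwise_max_cdf(d6, const4)
print(result[3])  # P(result <= 3) = min(3/6, 0) = 0
```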
```python
def pointwise_min(arg0, /, *more_args: 'icepool.Die[T]') -> 'icepool.Die[T]':
    """Selects the highest chance of rolling <= each outcome among the arguments.

    Naming not finalized.

    Specifically, for each outcome, the chance of the result rolling <= that
    outcome is the same as the highest chance of rolling <= that outcome among
    the arguments.

    Equivalently, any quantile in the result is the lowest of that quantile
    among the arguments.

    This is useful for selecting from several possible moves where you are
    trying to get <= a threshold that is known but could change depending on the
    situation.

    Args:
        dice: Either an iterable of dice, or two or more dice as separate
            arguments.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args
    args = commonize_denominator(*args)
    outcomes = sorted_union(*args)
    cumulative = [
        max(die.quantity('<=', outcome) for die in args)
        for outcome in outcomes
    ]
    return from_cumulative(outcomes, cumulative)
```
Selects the highest chance of rolling <= each outcome among the arguments.

Naming not finalized.

Specifically, for each outcome, the chance of the result rolling <= that outcome is the same as the highest chance of rolling <= that outcome among the arguments.

Equivalently, any quantile in the result is the lowest of that quantile among the arguments.

This is useful for selecting from several possible moves where you are trying to get <= a threshold that is known but could change depending on the situation.

Arguments:

* dice: Either an iterable of dice, or two or more dice as separate arguments.
```python
def lowest(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int | None = None,
           drop: int | None = None,
           default: T | None = None) -> 'icepool.Die[T]':
    """The lowest outcome among the rolls, or the sum of some of the lowest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `min()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single lowest die will be taken.
            * If only `keep` is provided, the `keep` lowest dice will be summed.
            * If only `drop` is provided, the `drop` lowest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` lowest dice will be dropped, then
                the next `keep` lowest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "lowest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = lowest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
```
The lowest outcome among the rolls, or the sum of some of the lowest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:

* args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `min()`.
* keep, drop: These arguments work together:
    * If neither is provided, the single lowest die will be taken.
    * If only `keep` is provided, the `keep` lowest dice will be summed.
    * If only `drop` is provided, the `drop` lowest dice will be dropped and the rest will be summed.
    * If both are provided, the `drop` lowest dice will be dropped, then the next `keep` lowest dice will be summed.
* default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:

* ValueError if an empty iterable is provided with no `default`.
```python
def highest(arg0,
            /,
            *more_args: 'T | icepool.Die[T]',
            keep: int | None = None,
            drop: int | None = None,
            default: T | None = None) -> 'icepool.Die[T]':
    """The highest outcome among the rolls, or the sum of some of the highest.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments. Similar to the built-in `max()`.
        keep, drop: These arguments work together:
            * If neither are provided, the single highest die will be taken.
            * If only `keep` is provided, the `keep` highest dice will be summed.
            * If only `drop` is provided, the `drop` highest dice will be dropped
                and the rest will be summed.
            * If both are provided, `drop` highest dice will be dropped, then
                the next `keep` highest dice will be summed.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "highest() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    index_slice = highest_slice(keep, drop)
    return _sum_slice(*args, index_slice=index_slice)
```
The highest outcome among the rolls, or the sum of some of the highest.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:

* args: Dice or individual outcomes in a single iterable, or as two or more separate arguments. Similar to the built-in `max()`.
* keep, drop: These arguments work together:
    * If neither is provided, the single highest die will be taken.
    * If only `keep` is provided, the `keep` highest dice will be summed.
    * If only `drop` is provided, the `drop` highest dice will be dropped and the rest will be summed.
    * If both are provided, the `drop` highest dice will be dropped, then the next `keep` highest dice will be summed.
* default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:

* ValueError if an empty iterable is provided with no `default`.
```python
def middle(arg0,
           /,
           *more_args: 'T | icepool.Die[T]',
           keep: int = 1,
           tie: Literal['error', 'high', 'low'] = 'error',
           default: T | None = None) -> 'icepool.Die[T]':
    """The middle of the outcomes among the rolls, or the sum of some of the middle.

    The outcomes should support addition and multiplication if `keep != 1`.

    Args:
        args: Dice or individual outcomes in a single iterable, or as two or
            more separate arguments.
        keep: The number of outcomes to sum.
        tie: What to do if `keep` is odd but the number of args is even, or
            vice versa.
            * 'error' (default): Raises `IndexError`.
            * 'high': The higher outcome is taken.
            * 'low': The lower outcome is taken.
        default: If an empty iterable is provided, the result will be a die that
            always rolls this value.

    Raises:
        ValueError if an empty iterable is provided with no `default`.
    """
    if len(more_args) == 0:
        args = arg0
    else:
        args = (arg0, ) + more_args

    if len(args) == 0:
        if default is None:
            raise ValueError(
                "middle() arg is an empty sequence and no default was provided."
            )
        else:
            return icepool.Die([default])

    # Expression evaluators are difficult to type.
    return icepool.Pool(args).middle(keep, tie=tie).sum()  # type: ignore
```
The middle of the outcomes among the rolls, or the sum of some of the middle.

The outcomes should support addition and multiplication if `keep != 1`.

Arguments:

* args: Dice or individual outcomes in a single iterable, or as two or more separate arguments.
* keep: The number of outcomes to sum.
* tie: What to do if `keep` is odd but the number of args is even, or vice versa.
    * 'error' (default): Raises `IndexError`.
    * 'high': The higher outcome is taken.
    * 'low': The lower outcome is taken.
* default: If an empty iterable is provided, the result will be a die that always rolls this value.

Raises:

* ValueError if an empty iterable is provided with no `default`.
```python
def min_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The minimum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable
        argument of such.
    """
    return min(_iter_outcomes(*args))
```
The minimum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
```python
def max_outcome(*args: 'Iterable[T | icepool.Population[T]] | T') -> T:
    """The maximum possible outcome among the populations.

    Args:
        Populations or single outcomes. Alternatively, a single iterable
        argument of such.
    """
    return max(_iter_outcomes(*args))
```
The maximum possible outcome among the populations.
Arguments:
- Populations or single outcomes. Alternatively, a single iterable argument of such.
```python
def consecutive(*args: Iterable[int]) -> Sequence[int]:
    """A minimal sequence of consecutive ints covering the argument sets."""
    start = min((x for x in itertools.chain(*args)), default=None)
    if start is None:
        return ()
    stop = max(x for x in itertools.chain(*args))
    return tuple(range(start, stop + 1))
```
A minimal sequence of consecutive ints covering the argument sets.
```python
def sorted_union(*args: Iterable[T]) -> tuple[T, ...]:
    """Merge sets into a sorted sequence."""
    if not args:
        return ()
    return tuple(sorted(set.union(*(set(arg) for arg in args))))
```
Merge sets into a sorted sequence.
```python
def commonize_denominator(
        *dice: 'T | icepool.Die[T]') -> tuple['icepool.Die[T]', ...]:
    """Scale the quantities of the dice so that all of them have the same denominator.

    The denominator is the LCM of the denominators of the arguments.

    Args:
        *dice: Any number of dice or single outcomes convertible to dice.

    Returns:
        A tuple of dice with the same denominator.
    """
    converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]
    denominator_lcm = math.lcm(*(die.denominator() for die in converted_dice
                                 if die.denominator() > 0))
    return tuple(
        die.multiply_quantities(denominator_lcm //
                                die.denominator() if die.denominator() > 0
                                else 1) for die in converted_dice)
```
Scale the quantities of the dice so that all of them have the same denominator.
The denominator is the LCM of the denominators of the arguments.
Arguments:
- *dice: Any number of dice or single outcomes convertible to dice.
Returns:
A tuple of dice with the same denominator.
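The scaling is a straightforward LCM computation; a sketch over plain outcome-to-quantity mappings (hypothetical helper, standing in for the `Die`-based version):

```python
import math


def commonize(*dists: dict[int, int]) -> list[dict[int, int]]:
    # Scale each distribution's quantities so all share the
    # LCM of their denominators (sums of quantities).
    denominators = [sum(d.values()) for d in dists]
    lcm = math.lcm(*denominators)
    return [{k: v * (lcm // total) for k, v in d.items()}
            for d, total in zip(dists, denominators)]


a = {0: 1, 1: 1}          # denominator 2
b = {0: 1, 1: 1, 2: 1}    # denominator 3
print(commonize(a, b))    # both scaled to denominator 6
```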
```python
def reduce(
        function: 'Callable[[T, T], T | icepool.Die[T] | icepool.RerollType]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None) -> 'icepool.Die[T]':
    """Applies a function of two arguments cumulatively to a sequence of dice.

    Analogous to the
    [`functools` function of the same name.](https://docs.python.org/3/library/functools.html#functools.reduce)

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice, and produce an outcome
            of the same type. It may also return `Reroll`, in which case the
            entire sequence is effectively rerolled.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        result = icepool.implicit_convert_to_die(next(iter_dice))
    for die in iter_dice:
        result = map(function, result, die)
    return result
```
Applies a function of two arguments cumulatively to a sequence of dice.

Analogous to the [`functools` function of the same name](https://docs.python.org/3/library/functools.html#functools.reduce).

Arguments:

* function: The function to map. The function should take two arguments, which are an outcome from each of two dice, and produce an outcome of the same type. It may also return `Reroll`, in which case the entire sequence is effectively rerolled.
* dice: A sequence of dice to map the function to, from left to right.
* initial: If provided, this will be placed at the front of the sequence of dice.
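`reduce` behaves like `functools.reduce` with the pairwise function lifted to distributions; a sketch that sums 3d4 by folding a joint-outcome mapping over a list of dice (hypothetical helper, with `Counter`s of quantities in place of `Die`):

```python
import functools
from collections import Counter
from itertools import product


def map_pair(function, dist_a: Counter, dist_b: Counter) -> Counter:
    # Apply `function` to every joint outcome; quantities multiply.
    out: Counter = Counter()
    for (a, qa), (b, qb) in product(dist_a.items(), dist_b.items()):
        out[function(a, b)] += qa * qb
    return out


d4 = Counter({x: 1 for x in range(1, 5)})
total = functools.reduce(lambda acc, d: map_pair(lambda x, y: x + y, acc, d),
                         [d4, d4, d4])
print(total[3], sum(total.values()))  # 1 way to roll 3, out of 64
```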
```python
def accumulate(
        function: 'Callable[[T, T], T | icepool.Die[T]]',
        dice: 'Iterable[T | icepool.Die[T]]',
        *,
        initial: 'T | icepool.Die[T] | None' = None
) -> Iterator['icepool.Die[T]']:
    """Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

    Analogous to the
    [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate),
    though with no default function and
    the same parameter order as `reduce()`.

    The number of results is equal to the number of elements of `dice`, with
    one additional element if `initial` is provided.

    Args:
        function: The function to map. The function should take two arguments,
            which are an outcome from each of two dice.
        dice: A sequence of dice to map the function to, from left to right.
        initial: If provided, this will be placed at the front of the sequence
            of dice.
    """
    # Conversion to dice is not necessary since map() takes care of that.
    iter_dice = iter(dice)
    if initial is not None:
        result: 'icepool.Die[T]' = icepool.implicit_convert_to_die(initial)
    else:
        try:
            result = icepool.implicit_convert_to_die(next(iter_dice))
        except StopIteration:
            return
    yield result
    for die in iter_dice:
        result = map(function, result, die)
        yield result
```
Applies a function of two arguments cumulatively to a sequence of dice, yielding each result in turn.

Analogous to the [`itertools` function of the same name](https://docs.python.org/3/library/itertools.html#itertools.accumulate), though with no default function and the same parameter order as `reduce()`.

The number of results is equal to the number of elements of `dice`, with one additional element if `initial` is provided.

Arguments:

* function: The function to map. The function should take two arguments, which are an outcome from each of two dice.
* dice: A sequence of dice to map the function to, from left to right.
* initial: If provided, this will be placed at the front of the sequence of dice.
def map(
        repl:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
        /,
        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        time_limit: int | Literal['inf'] | None = None,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None
) -> 'icepool.Die[T]':
    """Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a Die.

    See `map_function` for a decorator version of this.

    Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.

    `map()` is flexible but not very efficient for more than a few dice.
    If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
    `MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
    efficient than using `map` on the dice in order.

    `Again` can be used but is not recommended with `repeat` other than 1.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
              produces a new outcome.
            * A mapping from old outcomes to new outcomes.
              Unmapped old outcomes stay the same.
              In this case args must have exactly one element.
            As with the `Die` constructor, the new outcomes:
            * May be dice rather than just single outcomes.
            * The special value `icepool.Reroll` will reroll that old outcome.
            * `tuple`s containing `Population`s will be `tupleize`d into
              `Population`s of `tuple`s.
              This does not apply to subclasses of `tuple` such as `namedtuple`
              or other classes such as `Vector`.
        *args: `repl` will be called with all joint outcomes of these.
            Allowed arg types are:
            * Single outcome.
            * `Die`. All outcomes will be sent to `repl`.
            * `MultisetExpression`. All sorted tuples of outcomes will be sent
              to `repl`, as `MultisetExpression.expand()`. The expression must
              be fully bound.
            * You can prevent `Die` and `MultisetExpression` expansion by
              wrapping the argument as `NoExpand(arg)`.
        star: If `True`, the first of the args will be unpacked before giving
            them to `repl`.
            If not provided, it will be guessed based on the signature of `repl`
            and the number of arguments.
        repeat: This will be repeated with the same arguments on the
            result this many times, except the first of `args` will be replaced
            by the result of the previous iteration.

            Note that returning `Reroll` from `repl` will effectively reroll all
            arguments, including the first argument which represents the result
            of the process up to this point. If you only want to reroll the
            current stage, you can nest another `map` inside `repl`.

            EXPERIMENTAL: If set to `'inf'`, the result will be as if this
            were repeated an infinite number of times. In this case, the
            result will be in simplest form.
        time_limit: Similar to `repeat`, but will return early if a fixed point
            is reached. If both `repeat` and `time_limit` are provided
            (not recommended), `time_limit` takes priority.
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """
    transition_function = _canonicalize_transition_function(
        repl, len(args), star)

    if len(args) == 0:
        if repeat != 1:
            raise ValueError('If no arguments are given, repeat must be 1.')
        return icepool.Die([transition_function()],
                           again_count=again_count,
                           again_depth=again_depth,
                           again_end=again_end)

    # Here len(args) is at least 1.

    first_arg = args[0]
    extra_args = args[1:]

    if time_limit is not None:
        repeat = time_limit

    if repeat == 'inf':
        # Infinite repeat.
        # T_co and U should be the same in this case.
        def unary_transition_function(state):
            return map(transition_function,
                       state,
                       *extra_args,
                       star=False,
                       again_count=again_count,
                       again_depth=again_depth,
                       again_end=again_end)

        return icepool.population.markov_chain.absorbing_markov_chain(
            icepool.Die([args[0]]), unary_transition_function)
    else:
        if repeat < 0:
            raise ValueError('repeat cannot be negative.')

        if repeat == 0:
            return icepool.Die([first_arg])
        elif repeat == 1 and time_limit is None:
            final_outcomes: 'list[T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]' = []
            final_quantities: list[int] = []
            for outcomes, final_quantity in iter_cartesian_product(*args):
                final_outcome = transition_function(*outcomes)
                if final_outcome is not icepool.Reroll:
                    final_outcomes.append(final_outcome)
                    final_quantities.append(final_quantity)
            return icepool.Die(final_outcomes,
                               final_quantities,
                               again_count=again_count,
                               again_depth=again_depth,
                               again_end=again_end)
        else:
            result: 'icepool.Die[T]' = icepool.Die([first_arg])
            for _ in range(repeat):
                next_result = icepool.map(transition_function,
                                          result,
                                          *extra_args,
                                          star=False,
                                          again_count=again_count,
                                          again_depth=again_depth,
                                          again_end=again_end)
                if time_limit is not None and result.simplify(
                ) == next_result.simplify():
                    return result
                result = next_result
            return result
Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, returning a `Die`.

See `map_function` for a decorator version of this.

Example: `map(lambda a, b: a + b, d6, d6)` is the same as `d6 + d6`.

`map()` is flexible but not very efficient for more than a few dice.
If at all possible, use `reduce()`, `MultisetExpression` methods, and/or
`MultisetEvaluator`s. Even `Pool.expand()` (which sorts rolls) is more
efficient than using `map` on the dice in order.

`Again` can be used but is not recommended with `repeat` other than 1.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a new outcome.
  - A mapping from old outcomes to new outcomes.
    Unmapped old outcomes stay the same.
    In this case args must have exactly one element.

  As with the `Die` constructor, the new outcomes:
  - May be dice rather than just single outcomes.
  - The special value `icepool.Reroll` will reroll that old outcome.
  - `tuple`s containing `Population`s will be `tupleize`d into `Population`s of `tuple`s.
    This does not apply to subclasses of `tuple` such as `namedtuple` or other classes such as `Vector`.
- *args: `repl` will be called with all joint outcomes of these. Allowed arg types are:
  - Single outcome.
  - `Die`. All outcomes will be sent to `repl`.
  - `MultisetExpression`. All sorted tuples of outcomes will be sent to `repl`, as `MultisetExpression.expand()`. The expression must be fully bound.

  You can prevent `Die` and `MultisetExpression` expansion by wrapping the argument as `NoExpand(arg)`.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`.
  If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- repeat: This will be repeated with the same arguments on the result this many times, except the first of `args` will be replaced by the result of the previous iteration.
  Note that returning `Reroll` from `repl` will effectively reroll all arguments, including the first argument, which represents the result of the process up to this point. If you only want to reroll the current stage, you can nest another `map` inside `repl`.
  EXPERIMENTAL: If set to `'inf'`, the result will be as if this were repeated an infinite number of times. In this case, the result will be in simplest form.
- time_limit: Similar to `repeat`, but will return early if a fixed point is reached. If both `repeat` and `time_limit` are provided (not recommended), `time_limit` takes priority.
- again_count, again_depth, again_end: Forwarded to the final die constructor.
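The joint-outcome semantics above can be sketched in plain Python without icepool at all; this brute-force version (a minimal sketch, not the library's actual implementation) mirrors what `map(lambda a, b: a + b, d6, d6)` computes, with each die stood in by a sequence of equally weighted outcomes:

```python
from collections import Counter
from itertools import product


def brute_force_map(repl, *dice):
    """Apply repl to every joint outcome, tallying quantities.

    Each die is a sequence of equally weighted outcomes, so the
    resulting denominator is the product of the dice sizes.
    """
    result = Counter()
    for outcomes in product(*dice):
        result[repl(*outcomes)] += 1
    return result


d6 = range(1, 7)
two_d6 = brute_force_map(lambda a, b: a + b, d6, d6)
# The denominator is 6 * 6 = 36, and 7 is the most likely sum.
assert sum(two_d6.values()) == 36
assert two_d6[7] == 6
```

This enumeration is exactly why `map` scales poorly with many dice: the joint outcome space grows as the product of the die sizes, which is what `MultisetEvaluator` avoids.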
def map_function(
        function:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | None' = None,
        /,
        *,
        star: bool | None = None,
        repeat: int | Literal['inf'] = 1,
        again_count: int | None = None,
        again_depth: int | None = None,
        again_end: 'T | icepool.Die[T] | icepool.RerollType | None' = None
) -> 'Callable[..., icepool.Die[T]] | Callable[..., Callable[..., icepool.Die[T]]]':
    """Decorator that turns a function that takes outcomes into a function that takes dice.

    The result must be a `Die`.

    This is basically a decorator version of `map()` and produces behavior
    similar to AnyDice functions, though Icepool has different typing rules
    among other differences.

    `map_function` can either be used with no arguments:

    ```python
    @map_function
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6, again_depth=2)
    ```

    Or with keyword arguments, in which case the extra arguments are bound:

    ```python
    @map_function(again_depth=2)
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6)
    ```

    Args:
        again_count, again_depth, again_end: Forwarded to the final die constructor.
    """

    if function is not None:
        return update_wrapper(partial(map, function), function)
    else:

        def decorator(
            function:
            'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]'
        ) -> 'Callable[..., icepool.Die[T]]':

            return update_wrapper(
                partial(map,
                        function,
                        star=star,
                        repeat=repeat,
                        again_count=again_count,
                        again_depth=again_depth,
                        again_end=again_end), function)

        return decorator
Decorator that turns a function that takes outcomes into a function that takes dice.

The result must be a `Die`.

This is basically a decorator version of `map()` and produces behavior
similar to AnyDice functions, though Icepool has different typing rules
among other differences.

`map_function` can either be used with no arguments:

    @map_function
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6, again_depth=2)

Or with keyword arguments, in which case the extra arguments are bound:

    @map_function(again_depth=2)
    def explode_six(x):
        if x == 6:
            return 6 + Again
        else:
            return x

    explode_six(d6)

Arguments:
- again_count, again_depth, again_end: Forwarded to the final die constructor.
def map_and_time(
        repl:
    'Callable[..., T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression] | Mapping[Any, T | icepool.Die[T] | icepool.RerollType | icepool.AgainExpression]',
        initial_state: 'T | icepool.Die[T]',
        /,
        *extra_args,
        star: bool | None = None,
        time_limit: int) -> 'icepool.Die[tuple[T, int]]':
    """Repeatedly map outcomes of the state to other outcomes, while also
    counting timesteps.

    This is useful for representing processes.

    The outcomes of the result are `(outcome, time)`, where `time` is the
    number of repeats needed to reach an absorbing outcome (an outcome that
    only leads to itself), or `time_limit`, whichever is lesser.

    This will return early if it reaches a fixed point.
    Therefore, you can set `time_limit` equal to the maximum number of
    timesteps you could possibly be interested in without worrying about
    it causing extra computations after the fixed point.

    Args:
        repl: One of the following:
            * A callable returning a new outcome for each old outcome.
            * A mapping from old outcomes to new outcomes.
              Unmapped old outcomes stay the same.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        initial_state: The initial state of the process, which could be a
            single state or a `Die`.
        extra_args: Extra arguments to use, as per `map`. Note that these are
            rerolled at every time step.
        star: If `True`, the first of the args will be unpacked before giving
            them to `repl`.
            If not provided, it will be guessed based on the signature of `repl`
            and the number of arguments.
        time_limit: This will be repeated with the same arguments on the result
            up to this many times.

    Returns:
        The `Die` after the modification.
    """
    transition_function = _canonicalize_transition_function(
        repl, 1 + len(extra_args), star)

    result: 'icepool.Die[tuple[T, int]]' = map(lambda x: (x, 0), initial_state)

    # Note that we don't expand extra_args during the outer map.
    # This is needed to correctly evaluate whether each outcome is absorbing.
    def transition_with_steps(outcome_and_steps, extra_args):
        outcome, steps = outcome_and_steps
        next_outcome = map(transition_function, outcome, *extra_args)
        if icepool.population.markov_chain.is_absorbing(outcome, next_outcome):
            return outcome, steps
        else:
            return icepool.tupleize(next_outcome, steps + 1)

    return map(transition_with_steps,
               result,
               extra_args,
               time_limit=time_limit)
Repeatedly map outcomes of the state to other outcomes, while also counting timesteps.

This is useful for representing processes.

The outcomes of the result are `(outcome, time)`, where `time` is the
number of repeats needed to reach an absorbing outcome (an outcome that
only leads to itself), or `time_limit`, whichever is lesser.

This will return early if it reaches a fixed point.
Therefore, you can set `time_limit` equal to the maximum number of
timesteps you could possibly be interested in without worrying about
it causing extra computations after the fixed point.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A mapping from old outcomes to new outcomes.
    Unmapped old outcomes stay the same.

  The new outcomes may be dice rather than just single outcomes.
  The special value `icepool.Reroll` will reroll that old outcome.
- initial_state: The initial state of the process, which could be a single state or a `Die`.
- extra_args: Extra arguments to use, as per `map`. Note that these are rerolled at every time step.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`.
  If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- time_limit: This will be repeated with the same arguments on the result up to this many times.

Returns:
The `Die` after the modification.
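The absorbing-state bookkeeping can be illustrated with a minimal pure-Python sketch (not icepool's implementation; it restricts `repl` to a deterministic transition for brevity). States carry a step counter, absorbing states stop advancing it, and the loop returns early at a fixed point:

```python
from fractions import Fraction


def map_and_time_sketch(transition, initial_dist, time_limit):
    """Distribution over (outcome, time), where time is the number of
    steps needed to reach an absorbing outcome, capped at time_limit."""
    # Every starting state begins at time 0.
    dist = {(state, p_key): None for state, p_key in ()} or \
        {(state, 0): p for state, p in initial_dist.items()}
    for _ in range(time_limit):
        next_dist = {}
        for (state, steps), p in dist.items():
            if transition(state) == state:  # absorbing: only leads to itself
                key = (state, steps)
            else:
                key = (transition(state), steps + 1)
            next_dist[key] = next_dist.get(key, 0) + p
        if next_dist == dist:  # fixed point reached; return early
            break
        dist = next_dist
    return dist


# Countdown process: each step decreases the state by 1 until it hits 0.
initial = {s: Fraction(1, 4) for s in range(4)}
result = map_and_time_sketch(lambda s: max(s - 1, 0), initial, time_limit=10)
# Starting from s, absorption at 0 takes exactly s steps.
assert result == {(0, s): Fraction(1, 4) for s in range(4)}
```

Note the early return at the fixed point: even with `time_limit=10`, the loop stops once every state is absorbing, matching the documented behavior.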
def map_to_pool(
        repl:
    'Callable[..., icepool.MultisetGenerator | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType] | Mapping[Any, icepool.MultisetGenerator | Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | icepool.RerollType]',
        /,
        *args: 'Outcome | icepool.Die | icepool.MultisetExpression',
        star: bool | None = None,
        denominator: int | None = None
) -> 'icepool.MultisetGenerator[T, tuple[int]]':
    """EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a MultisetGenerator.

    Args:
        repl: One of the following:
            * A callable that takes in one outcome per element of args and
              produces a `MultisetGenerator` or something convertible to a `Pool`.
            * A mapping from old outcomes to `MultisetGenerator`
              or something convertible to a `Pool`.
              In this case args must have exactly one element.
            The new outcomes may be dice rather than just single outcomes.
            The special value `icepool.Reroll` will reroll that old outcome.
        star: If `True`, the first of the args will be unpacked before giving
            them to `repl`.
            If not provided, it will be guessed based on the signature of `repl`
            and the number of arguments.
        denominator: If provided, the denominator of the result will be this
            value. Otherwise it will be the minimum to correctly weight the
            pools.

    Returns:
        A `MultisetGenerator` representing the mixture of `Pool`s. Note
        that this is not technically a `Pool`, though it supports most of
        the same operations.

    Raises:
        ValueError: If `denominator` cannot be made consistent with the
            resulting mixture of pools.
    """
    transition_function = _canonicalize_transition_function(
        repl, len(args), star)

    data: 'MutableMapping[icepool.MultisetGenerator[T, tuple[int]], int]' = defaultdict(
        int)
    for outcomes, quantity in iter_cartesian_product(*args):
        pool = transition_function(*outcomes)
        if pool is icepool.Reroll:
            continue
        elif isinstance(pool, icepool.MultisetGenerator):
            data[pool] += quantity
        else:
            data[icepool.Pool(pool)] += quantity
    # I couldn't get the covariance / contravariance to work.
    return icepool.MixtureGenerator(data,
                                    denominator=denominator)  # type: ignore
EXPERIMENTAL: Applies `repl(outcome_of_die_0, outcome_of_die_1, ...)` for all joint outcomes, producing a `MultisetGenerator`.

Arguments:
- repl: One of the following:
  - A callable that takes in one outcome per element of args and produces a `MultisetGenerator` or something convertible to a `Pool`.
  - A mapping from old outcomes to a `MultisetGenerator` or something convertible to a `Pool`.
    In this case args must have exactly one element.

  The new outcomes may be dice rather than just single outcomes.
  The special value `icepool.Reroll` will reroll that old outcome.
- star: If `True`, the first of the args will be unpacked before giving them to `repl`.
  If not provided, it will be guessed based on the signature of `repl` and the number of arguments.
- denominator: If provided, the denominator of the result will be this value. Otherwise it will be the minimum to correctly weight the pools.

Returns:
A `MultisetGenerator` representing the mixture of `Pool`s. Note
that this is not technically a `Pool`, though it supports most of
the same operations.

Raises:
- ValueError: If `denominator` cannot be made consistent with the resulting mixture of pools.
Indicates that an outcome should be rerolled (with unlimited depth).
This can be used in place of outcomes in many places. See individual function and method descriptions for details.
This effectively removes the outcome from the probability space, along with its contribution to the denominator.
This can be used for conditional probability by removing all outcomes not consistent with the given observations.
Operation in specific cases:
- When used with `Again`, only that stage is rerolled, not the entire `Again` tree.
- To reroll with limited depth, use `Die.reroll()`, or `Again` with no modification.
- When used with `MultisetEvaluator`, the entire evaluation is rerolled.
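The conditional-probability use of `Reroll` amounts to dropping outcomes and their weight from the denominator. A minimal pure-Python sketch of that renormalization (not icepool code), conditioning a d6 on rolling greater than 2:

```python
from fractions import Fraction


def reroll_outcomes(die, predicate):
    """Remove outcomes matching predicate, along with their
    contribution to the denominator (i.e. condition on the rest)."""
    kept = {o: w for o, w in die.items() if not predicate(o)}
    denominator = sum(kept.values())
    return {o: Fraction(w, denominator) for o, w in kept.items()}


d6 = {o: 1 for o in range(1, 7)}
# Condition on the roll being greater than 2 by rerolling 1s and 2s.
conditioned = reroll_outcomes(d6, lambda o: o <= 2)
assert conditioned == {o: Fraction(1, 4) for o in range(3, 7)}
```

Because the depth is unlimited, the removed outcomes contribute nothing at all; the surviving outcomes are simply renormalized over the reduced denominator.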
class RerollType(enum.Enum):
    """The type of the Reroll singleton."""
    Reroll = 'Reroll'
    """Indicates an outcome should be rerolled (with unlimited depth)."""
The type of the Reroll singleton.
Indicates an outcome should be rerolled (with unlimited depth).
class NoExpand(Expandable[A_co]):
    """Wraps an argument, instructing `map` and similar functions not to expand it.

    This is not intended for use outside of `map` (or similar) call sites.

    Example:
    ```python
    map(lambda x: (x, x), d6)
    # Here the d6 is expanded so that the function is evaluated six times
    # with x = 1, 2, 3, 4, 5, 6.
    # = Die([(1, 1), (2, 2), (3, 3), ...])

    map(lambda x: (x, x), NoExpand(d6))
    # Here the d6 is passed as a Die to the function, which then rolls it twice
    # independently.
    # = Die([(1, 1), (1, 2), (1, 3), ...])
    # = tupleize(d6, d6)
    ```
    """

    arg: A_co
    """The wrapped argument."""

    def __init__(self, arg: A_co, /):
        self.arg = arg

    @property
    def _items_for_cartesian_product(self) -> Sequence[tuple[A_co, int]]:
        return [(self.arg, 1)]
Wraps an argument, instructing `map` and similar functions not to expand it.

This is not intended for use outside of `map` (or similar) call sites.

Example:

    map(lambda x: (x, x), d6)
    # Here the d6 is expanded so that the function is evaluated six times
    # with x = 1, 2, 3, 4, 5, 6.
    # = Die([(1, 1), (2, 2), (3, 3), ...])

    map(lambda x: (x, x), NoExpand(d6))
    # Here the d6 is passed as a Die to the function, which then rolls it twice
    # independently.
    # = Die([(1, 1), (1, 2), (1, 3), ...])
    # = tupleize(d6, d6)
class Pool(KeepGenerator[T]):
    """Represents a multiset of outcomes resulting from the roll of several dice.

    This should be used in conjunction with `MultisetEvaluator` to generate a
    result.

    Note that operators are performed on the multiset of rolls, not the multiset
    of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but
    an expression meaning "roll two pools of 3d6 and get the rolls from the
    first pool, with rolls in the second pool cancelling matching rolls in the
    first pool one-for-one".
    """

    _dice: tuple[tuple['icepool.Die[T]', int]]
    _outcomes: tuple[T, ...]

    def __new__(
            cls,
            dice:
        'Sequence[icepool.Die[T] | T] | Mapping[icepool.Die[T], int] | Mapping[T, int] | Mapping[icepool.Die[T] | T, int]',
            times: Sequence[int] | int = 1) -> 'Pool':
        """Public constructor for a pool.

        Evaluation is most efficient when the dice are the same or same-side
        truncations of each other. For example, d4, d6, d8, d10, d12 are all
        same-side truncations of d12.

        It is permissible to create a `Pool` without providing dice, but not all
        evaluators will handle this case, especially if they depend on the
        outcome type. Dice may be in the pool zero times, in which case their
        outcomes will be considered but without any count (unless another die
        has that outcome).

        Args:
            dice: The dice to put in the `Pool`. This can be one of the following:

                * A `Sequence` of `Die` or outcomes.
                * A `Mapping` of `Die` or outcomes to how many of that `Die` or
                  outcome to put in the `Pool`.

                All outcomes within a `Pool` must be totally orderable.
            times: Multiplies the number of times each element of `dice` will
                be put into the pool.
                `times` can either be a sequence of the same length as
                `outcomes` or a single `int` to apply to all elements of
                `outcomes`.

        Raises:
            ValueError: If a bare `Deck` or `Die` argument is provided.
                A `Pool` of a single `Die` should be constructed as `Pool([die])`.
        """
        if isinstance(dice, Pool):
            if times == 1:
                return dice
            else:
                dice = {die: quantity for die, quantity in dice._dice}

        if isinstance(dice, (icepool.Die, icepool.Deck, icepool.MultiDeal)):
            raise ValueError(
                f'A Pool cannot be constructed with a {type(dice).__name__} argument.'
            )

        dice, times = icepool.creation_args.itemize(dice, times)
        converted_dice = [icepool.implicit_convert_to_die(die) for die in dice]

        dice_counts: MutableMapping['icepool.Die[T]', int] = defaultdict(int)
        for die, qty in zip(converted_dice, times):
            if qty == 0:
                continue
            dice_counts[die] += qty
        keep_tuple = (1, ) * sum(times)

        # Includes dice with zero qty.
        outcomes = icepool.sorted_union(*converted_dice)
        return cls._new_from_mapping(dice_counts, outcomes, keep_tuple)

    @classmethod
    @cache
    def _new_raw(cls, dice: tuple[tuple['icepool.Die[T]', int]],
                 outcomes: tuple[T], keep_tuple: tuple[int, ...]) -> 'Pool[T]':
        """All pool creation ends up here. This method is cached.

        Args:
            dice: A tuple of (die, count) pairs.
            keep_tuple: A tuple of how many times to count each die.
        """
        self = super(Pool, cls).__new__(cls)
        self._dice = dice
        self._outcomes = outcomes
        self._keep_tuple = keep_tuple
        return self

    @classmethod
    def clear_cache(cls):
        """Clears the global pool cache."""
        Pool._new_raw.cache_clear()

    @classmethod
    def _new_from_mapping(cls, dice_counts: Mapping['icepool.Die[T]', int],
                          outcomes: tuple[T, ...],
                          keep_tuple: Sequence[int]) -> 'Pool[T]':
        """Creates a new pool.

        Args:
            dice_counts: A map from dice to rolls.
            keep_tuple: A tuple with length equal to the number of dice.
        """
        dice = tuple(
            sorted(dice_counts.items(), key=lambda kv: kv[0]._hash_key))
        return Pool._new_raw(dice, outcomes, keep_tuple)

    @cached_property
    def _raw_size(self) -> int:
        return sum(count for _, count in self._dice)

    def raw_size(self) -> int:
        """The number of dice in this pool before the keep_tuple is applied."""
        return self._raw_size

    def _is_resolvable(self) -> bool:
        return all(not die.is_empty() for die, _ in self._dice)

    @cached_property
    def _denominator(self) -> int:
        return math.prod(die.denominator()**count for die, count in self._dice)

    def denominator(self) -> int:
        return self._denominator

    @cached_property
    def _dice_tuple(self) -> tuple['icepool.Die[T]', ...]:
        return sum(((die, ) * count for die, count in self._dice), start=())

    @cached_property
    def _unique_dice(self) -> Collection['icepool.Die[T]']:
        return set(die for die, _ in self._dice)

    def unique_dice(self) -> Collection['icepool.Die[T]']:
        """The collection of unique dice in this pool."""
        return self._unique_dice

    def outcomes(self) -> Sequence[T]:
        """The union of possible outcomes among all dice in this pool in ascending order."""
        return self._outcomes

    def output_arity(self) -> int:
        return 1

    def local_order_preference(self) -> tuple[Order, OrderReason]:
        can_truncate_min, can_truncate_max = icepool.order.can_truncate(
            self.unique_dice())
        if can_truncate_min and not can_truncate_max:
            return Order.Ascending, OrderReason.PoolComposition
        if can_truncate_max and not can_truncate_min:
            return Order.Descending, OrderReason.PoolComposition

        lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple())
        if lo_skip > hi_skip:
            return Order.Descending, OrderReason.KeepSkip
        if hi_skip > lo_skip:
            return Order.Ascending, OrderReason.KeepSkip

        return Order.Any, OrderReason.NoPreference

    def min_outcome(self) -> T:
        """The min outcome among all dice in this pool."""
        return self._outcomes[0]

    def max_outcome(self) -> T:
        """The max outcome among all dice in this pool."""
        return self._outcomes[-1]

    def _generate_initial(self) -> InitialMultisetGeneration:
        yield self, 1

    def _generate_min(self, min_outcome) -> PopMultisetGeneration:
        """Pops the given outcome from this pool, if it is the min outcome.

        Yields:
            popped_pool: The pool after the min outcome is popped.
            count: The number of dice that rolled the min outcome, after
                accounting for keep_tuple.
            weight: The weight of this incremental result.
        """
        if not self.outcomes():
            yield self, (0, ), 1
            return
        if min_outcome != self.min_outcome():
            yield self, (0, ), 1
            return
        generators = [
            iter_die_pop_min(die, die_count, min_outcome)
            for die, die_count in self._dice
        ]
        skip_weight = None
        for pop in itertools.product(*generators):
            total_hits = 0
            result_weight = 1
            next_dice_counts: MutableMapping[Any, int] = defaultdict(int)
            for popped_die, misses, hits, weight in pop:
                if not popped_die.is_empty() and misses > 0:
                    next_dice_counts[popped_die] += misses
                total_hits += hits
                result_weight *= weight
            popped_keep_tuple, result_count = pop_min_from_keep_tuple(
                self.keep_tuple(), total_hits)
            popped_pool = Pool._new_from_mapping(next_dice_counts,
                                                 self._outcomes[1:],
                                                 popped_keep_tuple)
            if not any(popped_keep_tuple):
                # Dump all dice in exchange for the denominator.
                skip_weight = (skip_weight or
                               0) + result_weight * popped_pool.denominator()
                continue

            yield popped_pool, (result_count, ), result_weight

        if skip_weight is not None:
            popped_pool = Pool._new_raw((), self._outcomes[1:], ())
            yield popped_pool, (sum(self.keep_tuple()), ), skip_weight

    def _generate_max(self, max_outcome) -> PopMultisetGeneration:
        """Pops the given outcome from this pool, if it is the max outcome.

        Yields:
            popped_pool: The pool after the max outcome is popped.
            count: The number of dice that rolled the max outcome, after
                accounting for keep_tuple.
            weight: The weight of this incremental result.
        """
        if not self.outcomes():
            yield self, (0, ), 1
            return
        if max_outcome != self.max_outcome():
            yield self, (0, ), 1
            return
        generators = [
            iter_die_pop_max(die, die_count, max_outcome)
            for die, die_count in self._dice
        ]
        skip_weight = None
        for pop in itertools.product(*generators):
            total_hits = 0
            result_weight = 1
            next_dice_counts: MutableMapping[Any, int] = defaultdict(int)
            for popped_die, misses, hits, weight in pop:
                if not popped_die.is_empty() and misses > 0:
                    next_dice_counts[popped_die] += misses
                total_hits += hits
                result_weight *= weight
            popped_keep_tuple, result_count = pop_max_from_keep_tuple(
                self.keep_tuple(), total_hits)
            popped_pool = Pool._new_from_mapping(next_dice_counts,
                                                 self._outcomes[:-1],
                                                 popped_keep_tuple)
            if not any(popped_keep_tuple):
                # Dump all dice in exchange for the denominator.
                skip_weight = (skip_weight or
                               0) + result_weight * popped_pool.denominator()
                continue

            yield popped_pool, (result_count, ), result_weight

        if skip_weight is not None:
            popped_pool = Pool._new_raw((), self._outcomes[:-1], ())
            yield popped_pool, (sum(self.keep_tuple()), ), skip_weight

    def _set_keep_tuple(self, keep_tuple: tuple[int,
                                                ...]) -> 'KeepGenerator[T]':
        return Pool._new_raw(self._dice, self._outcomes, keep_tuple)

    def additive_union(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        args = tuple(
            icepool.multiset_expression.implicit_convert_to_expression(arg)
            for arg in args)
        if all(isinstance(arg, Pool) for arg in args):
            pools = cast(tuple[Pool[T], ...], args)
            keep_tuple: tuple[int, ...] = tuple(
                reduce(operator.add, (pool.keep_tuple() for pool in pools),
                       ()))
            if len(keep_tuple) == 0 or all(x == keep_tuple[0]
                                           for x in keep_tuple):
                # All sorted positions count the same, so we can merge the
                # pools.
                dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
                for pool in pools:
                    for die, die_count in pool._dice:
                        dice[die] += die_count
                outcomes = icepool.sorted_union(*(pool.outcomes()
                                                  for pool in pools))
                return Pool._new_from_mapping(dice, outcomes, keep_tuple)
        return KeepGenerator.additive_union(*args)

    def __str__(self) -> str:
        return (
            f'Pool of {self.raw_size()} dice with keep_tuple={self.keep_tuple()}\n'
            + ''.join(f'  {repr(die)} : {count},\n'
                      for die, count in self._dice))

    @cached_property
    def _local_hash_key(self) -> tuple:
        return Pool, self._dice, self._outcomes, self._keep_tuple
Represents a multiset of outcomes resulting from the roll of several dice.

This should be used in conjunction with `MultisetEvaluator` to generate a result.

Note that operators are performed on the multiset of rolls, not the multiset
of dice. For example, `d6.pool(3) - d6.pool(3)` is not an empty pool, but
an expression meaning "roll two pools of 3d6 and get the rolls from the
first pool, with rolls in the second pool cancelling matching rolls in the
first pool one-for-one".
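The multiset-of-rolls semantics can be checked by brute force in plain Python (a sketch, not how icepool evaluates pools). Here two pools of 2d6 are rolled jointly and matching rolls cancel one-for-one, as in pool subtraction:

```python
from collections import Counter
from itertools import product

# Brute-force the multiset-of-rolls semantics of pool subtraction:
# roll both pools, then cancel matching rolls one-for-one.
d6 = range(1, 7)
nonempty = 0
total = 0
for rolls_a in product(d6, repeat=2):
    for rolls_b in product(d6, repeat=2):
        # Counter subtraction drops counts below zero, which is
        # exactly one-for-one cancellation of matching rolls.
        difference = Counter(rolls_a) - Counter(rolls_b)
        total += 1
        if difference:
            nonempty += 1

assert total == 6**4
# The difference is usually not empty: it is empty only when the second
# pool's rolls cover every roll in the first.
assert 0 < nonempty < total
```

This is why `d6.pool(3) - d6.pool(3)` is not an empty pool: the two pools roll independently, and cancellation happens per joint outcome.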
@classmethod
def clear_cache(cls):
    """Clears the global pool cache."""
    Pool._new_raw.cache_clear()
Clears the global pool cache.
def raw_size(self) -> int:
    """The number of dice in this pool before the keep_tuple is applied."""
    return self._raw_size
The number of dice in this pool before the keep_tuple is applied.
The total weight of all paths through this generator.
Raises:
- UnboundMultisetExpressionError if this is called on an expression with free variables.
def unique_dice(self) -> Collection['icepool.Die[T]']:
    """The collection of unique dice in this pool."""
    return self._unique_dice
The collection of unique dice in this pool.
def outcomes(self) -> Sequence[T]:
    """The union of possible outcomes among all dice in this pool in ascending order."""
    return self._outcomes
The union of possible outcomes among all dice in this pool in ascending order.
def local_order_preference(self) -> tuple[Order, OrderReason]:
    can_truncate_min, can_truncate_max = icepool.order.can_truncate(
        self.unique_dice())
    if can_truncate_min and not can_truncate_max:
        return Order.Ascending, OrderReason.PoolComposition
    if can_truncate_max and not can_truncate_min:
        return Order.Descending, OrderReason.PoolComposition

    lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple())
    if lo_skip > hi_skip:
        return Order.Descending, OrderReason.KeepSkip
    if hi_skip > lo_skip:
        return Order.Ascending, OrderReason.KeepSkip

    return Order.Any, OrderReason.NoPreference
Any ordering that is preferred or required by this expression node.
def min_outcome(self) -> T:
    """The min outcome among all dice in this pool."""
    return self._outcomes[0]
The min outcome among all dice in this pool.
def max_outcome(self) -> T:
    """The max outcome among all dice in this pool."""
    return self._outcomes[-1]
The max outcome among all dice in this pool.
def additive_union(
    *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
) -> 'MultisetExpression[T]':
    args = tuple(
        icepool.multiset_expression.implicit_convert_to_expression(arg)
        for arg in args)
    if all(isinstance(arg, Pool) for arg in args):
        pools = cast(tuple[Pool[T], ...], args)
        keep_tuple: tuple[int, ...] = tuple(
            reduce(operator.add, (pool.keep_tuple() for pool in pools), ()))
        if len(keep_tuple) == 0 or all(x == keep_tuple[0]
                                       for x in keep_tuple):
            # All sorted positions count the same, so we can merge the
            # pools.
            dice: 'MutableMapping[icepool.Die, int]' = defaultdict(int)
            for pool in pools:
                for die, die_count in pool._dice:
                    dice[die] += die_count
            outcomes = icepool.sorted_union(*(pool.outcomes()
                                              for pool in pools))
            return Pool._new_from_mapping(dice, outcomes, keep_tuple)
    return KeepGenerator.additive_union(*args)
The combined elements from all of the multisets.

Same as `a + b + c + ...`.

Any resulting counts that would be negative are set to zero.

Example:
```python
[1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
```
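The multiset semantics of additive union can be mimicked in plain Python with `collections.Counter` — a sketch of the counting behavior only, not how icepool computes it over dice pools:

```python
from collections import Counter

def additive_union(*multisets: list[int]) -> list[int]:
    """Sum the counts of every element across all multisets."""
    total: Counter = Counter()
    for multiset in multisets:
        total.update(multiset)
    # Counter.elements() skips non-positive counts, so counts that would
    # be negative contribute nothing, matching the documented behavior.
    return sorted(total.elements())

print(additive_union([1, 2, 2, 3], [1, 2, 4]))  # [1, 1, 2, 2, 2, 3, 4]
```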
```python
def standard_pool(
        die_sizes: Collection[int] | Mapping[int, int]) -> 'Pool[int]':
    """A `Pool` of standard dice (e.g. d6, d8...).

    Args:
        die_sizes: A collection of die sizes, which will put one die of that
            size in the pool for each element.
            Or, a mapping of die sizes to how many dice of that size to put
            into the pool.
            If empty, the pool will be considered to consist of zero zeros.
    """
    if not die_sizes:
        return Pool({icepool.Die([0]): 0})
    if isinstance(die_sizes, Mapping):
        die_sizes = list(
            itertools.chain.from_iterable([k] * v
                                          for k, v in die_sizes.items()))
    return Pool(list(icepool.d(x) for x in die_sizes))
```
A `Pool` of standard dice (e.g. d6, d8...).

Arguments:
- die_sizes: A collection of die sizes, which will put one die of that size in the pool for each element. Or, a mapping of die sizes to how many dice of that size to put into the pool. If empty, the pool will be considered to consist of zero zeros.
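The mapping form is expanded into a flat list of die sizes before the dice are built. A quick sketch of just that expansion step, using the same `itertools.chain.from_iterable` idiom as the source above:

```python
import itertools

def expand_die_sizes(die_sizes: dict[int, int]) -> list[int]:
    """Expand {die size: count} into one entry per die."""
    return list(
        itertools.chain.from_iterable([size] * count
                                      for size, count in die_sizes.items()))

# {6: 3, 8: 2} means three d6s and two d8s.
print(expand_die_sizes({6: 3, 8: 2}))  # [6, 6, 6, 8, 8]
```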
```python
class MultisetGenerator(Generic[T, Qs], MultisetExpression[T]):
    """Abstract base class for generating one or more multisets.

    These include dice pools (`Pool`) and card deals (`Deal`). Most likely you
    will be using one of these two rather than writing your own subclass of
    `MultisetGenerator`.

    The multisets are incrementally generated one outcome at a time.
    For each outcome, a `count` and `weight` are generated, along with a
    smaller generator to produce the rest of the multiset.

    You can perform simple evaluations using built-in operators and methods in
    this class.
    For more complex evaluations and better performance, particularly when
    multiple generators are involved, you will want to write your own subclass
    of `MultisetEvaluator`.
    """

    _children = ()

    @property
    def _can_keep(self) -> bool:
        """Whether the generator supports enhanced keep operations."""
        return False

    def has_free_variables(self) -> bool:
        return False

    # Overridden to switch bound generators with variables.

    @property
    def _bound_inputs(self) -> 'tuple[icepool.MultisetGenerator, ...]':
        return (self, )

    def _unbind(
            self,
            bound_inputs: 'list[MultisetExpression]' = []
    ) -> 'MultisetExpression':
        result = icepool.MultisetVariable(False, len(bound_inputs))
        bound_inputs.append(self)
        return result

    def _apply_variables(
            self, outcome: T, bound_counts: tuple[int, ...],
            free_counts: tuple[int, ...]) -> 'MultisetExpression[T]':
        raise icepool.MultisetBindingError(
            '_unbind should have been called before _apply_variables.')
```
Abstract base class for generating one or more multisets.

These include dice pools (`Pool`) and card deals (`Deal`). Most likely you will be using one of these two rather than writing your own subclass of `MultisetGenerator`.

The multisets are incrementally generated one outcome at a time. For each outcome, a `count` and `weight` are generated, along with a smaller generator to produce the rest of the multiset.

You can perform simple evaluations using built-in operators and methods in this class. For more complex evaluations and better performance, particularly when multiple generators are involved, you will want to write your own subclass of `MultisetEvaluator`.
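For a pool of identical dice, "one outcome at a time" means popping an outcome and branching on how many dice showed it, with a binomial weight on each branch. A pure-Python sketch of a single such step (an illustration of the idea, not icepool's implementation):

```python
from math import comb

def count_distribution(num_dice: int, num_sides: int) -> dict[int, int]:
    """Weights for how many dice in a pool of identical dice show one fixed face.

    Each branch's weight is (ways to choose which dice show the face) times
    (ways the remaining dice can show any of the other faces).
    """
    return {
        k: comb(num_dice, k) * (num_sides - 1)**(num_dice - k)
        for k in range(num_dice + 1)
    }

# Number of 6s among 3d6, as weights out of 6**3 == 216.
print(count_distribution(3, 6))  # {0: 125, 1: 75, 2: 15, 3: 1}
```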
55class MultisetExpression(ABC, Expandable[tuple[T, ...]]): 56 """Abstract base class representing an expression that operates on multisets. 57 58 There are three types of multiset expressions: 59 60 * `MultisetGenerator`, which produce raw outcomes and counts. 61 * `MultisetOperator`, which takes outcomes with one or more counts and 62 produces a count. 63 * `MultisetVariable`, which is a temporary placeholder for some other 64 expression. 65 66 Expression methods can be applied to `MultisetGenerator`s to do simple 67 evaluations. For joint evaluations, try `multiset_function`. 68 69 Use the provided operations to build up more complicated 70 expressions, or to attach a final evaluator. 71 72 Operations include: 73 74 | Operation | Count / notes | 75 |:----------------------------|:--------------------------------------------| 76 | `additive_union`, `+` | `l + r` | 77 | `difference`, `-` | `l - r` | 78 | `intersection`, `&` | `min(l, r)` | 79 | `union`, `\\|` | `max(l, r)` | 80 | `symmetric_difference`, `^` | `abs(l - r)` | 81 | `multiply_counts`, `*` | `count * n` | 82 | `divide_counts`, `//` | `count // n` | 83 | `modulo_counts`, `%` | `count % n` | 84 | `keep_counts` | `count if count >= n else 0` etc. 
| 85 | unary `+` | same as `keep_counts_ge(0)` | 86 | unary `-` | reverses the sign of all counts | 87 | `unique` | `min(count, n)` | 88 | `keep_outcomes` | `count if outcome in t else 0` | 89 | `drop_outcomes` | `count if outcome not in t else 0` | 90 | `map_counts` | `f(outcome, *counts)` | 91 | `keep`, `[]` | less capable than `KeepGenerator` version | 92 | `highest` | less capable than `KeepGenerator` version | 93 | `lowest` | less capable than `KeepGenerator` version | 94 95 | Evaluator | Summary | 96 |:-------------------------------|:---------------------------------------------------------------------------| 97 | `issubset`, `<=` | Whether the left side's counts are all <= their counterparts on the right | 98 | `issuperset`, `>=` | Whether the left side's counts are all >= their counterparts on the right | 99 | `isdisjoint` | Whether the left side has no positive counts in common with the right side | 100 | `<` | As `<=`, but `False` if the two multisets are equal | 101 | `>` | As `>=`, but `False` if the two multisets are equal | 102 | `==` | Whether the left side has all the same counts as the right side | 103 | `!=` | Whether the left side has any different counts to the right side | 104 | `expand` | All elements in ascending order | 105 | `sum` | Sum of all elements | 106 | `count` | The number of elements | 107 | `any` | Whether there is at least 1 element | 108 | `highest_outcome_and_count` | The highest outcome and how many of that outcome | 109 | `all_counts` | All counts in descending order | 110 | `largest_count` | The single largest count, aka x-of-a-kind | 111 | `largest_count_and_outcome` | Same but also with the corresponding outcome | 112 | `count_subset`, `//` | The number of times the right side is contained in the left side | 113 | `largest_straight` | Length of longest consecutive sequence | 114 | `largest_straight_and_outcome` | Same but also with the corresponding outcome | 115 | `all_straights` | Lengths of all consecutive sequences in 
descending order | 116 """ 117 118 _children: 'tuple[MultisetExpression[T], ...]' 119 """A tuple of child expressions. These are assumed to the positional arguments of the constructor.""" 120 121 @abstractmethod 122 def outcomes(self) -> Sequence[T]: 123 """The possible outcomes that could be generated, in ascending order.""" 124 125 @abstractmethod 126 def output_arity(self) -> int: 127 """The number of multisets/counts generated. Must be constant.""" 128 129 @abstractmethod 130 def _is_resolvable(self) -> bool: 131 """`True` iff the generator is capable of producing an overall outcome. 132 133 For example, a dice `Pool` will return `False` if it contains any dice 134 with no outcomes. 135 """ 136 137 @abstractmethod 138 def _generate_initial(self) -> InitialMultisetGeneration: 139 """Initialize the expression before any outcomes are emitted. 140 141 Yields: 142 * Each possible initial expression. 143 * The weight for selecting that initial expression. 144 145 Unitary expressions can just yield `(self, 1)` and return. 146 """ 147 148 @abstractmethod 149 def _generate_min(self, min_outcome: T) -> PopMultisetGeneration: 150 """Pops the min outcome from this expression if it matches the argument. 151 152 Yields: 153 * Ax expression with the min outcome popped. 154 * A tuple of counts for the min outcome. 155 * The weight for this many of the min outcome appearing. 156 157 If the argument does not match the min outcome, or this expression 158 has no outcomes, only a single tuple is yielded: 159 160 * `self` 161 * A tuple of zeros. 162 * weight = 1. 163 164 Raises: 165 UnboundMultisetExpressionError if this is called on an expression with free variables. 166 """ 167 168 @abstractmethod 169 def _generate_max(self, max_outcome: T) -> PopMultisetGeneration: 170 """Pops the max outcome from this expression if it matches the argument. 171 172 Yields: 173 * An expression with the max outcome popped. 174 * A tuple of counts for the max outcome. 
175 * The weight for this many of the max outcome appearing. 176 177 If the argument does not match the max outcome, or this expression 178 has no outcomes, only a single tuple is yielded: 179 180 * `self` 181 * A tuple of zeros. 182 * weight = 1. 183 184 Raises: 185 UnboundMultisetExpressionError if this is called on an expression with free variables. 186 """ 187 188 @abstractmethod 189 def local_order_preference(self) -> tuple[Order, OrderReason]: 190 """Any ordering that is preferred or required by this expression node.""" 191 192 @abstractmethod 193 def has_free_variables(self) -> bool: 194 """Whether this expression contains any free variables, i.e. parameters to a @multiset_function.""" 195 196 @abstractmethod 197 def denominator(self) -> int: 198 """The total weight of all paths through this generator. 199 200 Raises: 201 UnboundMultisetExpressionError if this is called on an expression with free variables. 202 """ 203 204 @abstractmethod 205 def _unbind( 206 self, 207 bound_inputs: 'list[MultisetExpression]' = [] 208 ) -> 'MultisetExpression': 209 """Removes bound subexpressions, replacing them with variables. 210 211 Args: 212 bound_inputs: The list of bound subexpressions. Bound subexpressions 213 will be added to this list. 214 215 Returns: 216 A copy of this expression with any fully-bound subexpressions 217 replaced with variables. The `index` of each variable is equal to 218 the position of the expression they replaced in `bound_inputs`. 219 """ 220 221 @abstractmethod 222 def _apply_variables( 223 self, outcome: T, bound_counts: tuple[int, ...], 224 free_counts: tuple[int, 225 ...]) -> 'tuple[MultisetExpression[T], int]': 226 """Advances the state of this expression given counts emitted from variables and returns a count. 227 228 Args: 229 outcome: The current outcome being processed. 230 bound_counts: The counts emitted by bound expressions. 231 free_counts: The counts emitted by arguments to the 232 `@mulitset_function`. 
233 234 Returns: 235 An expression representing the next state and the count produced by 236 this expression. 237 """ 238 239 @property 240 @abstractmethod 241 def _local_hash_key(self) -> Hashable: 242 """A hash key that logically identifies this object among MultisetExpressions. 243 244 Does not include the hash for children. 245 246 Used to implement `equals()` and `__hash__()` 247 """ 248 249 def min_outcome(self) -> T: 250 return self.outcomes()[0] 251 252 def max_outcome(self) -> T: 253 return self.outcomes()[-1] 254 255 @cached_property 256 def _hash_key(self) -> Hashable: 257 """A hash key that logically identifies this object among MultisetExpressions. 258 259 Used to implement `equals()` and `__hash__()` 260 """ 261 return (self._local_hash_key, 262 tuple(child._hash_key for child in self._children)) 263 264 def equals(self, other) -> bool: 265 """Whether this expression is logically equal to another object.""" 266 if not isinstance(other, MultisetExpression): 267 return False 268 return self._hash_key == other._hash_key 269 270 @cached_property 271 def _hash(self) -> int: 272 return hash(self._hash_key) 273 274 def __hash__(self) -> int: 275 return self._hash 276 277 def _iter_nodes(self) -> 'Iterator[MultisetExpression]': 278 """Iterates over the nodes in this expression in post-order (leaves first).""" 279 for child in self._children: 280 yield from child._iter_nodes() 281 yield self 282 283 def order_preference(self) -> tuple[Order, OrderReason]: 284 return merge_order_preferences(*(node.local_order_preference() 285 for node in self._iter_nodes())) 286 287 @property 288 def _items_for_cartesian_product( 289 self) -> Sequence[tuple[tuple[T, ...], int]]: 290 expansion = cast('icepool.Die[tuple[T, ...]]', self.expand()) 291 return expansion.items() 292 293 # Sampling. 294 295 def sample(self) -> tuple[tuple, ...]: 296 """EXPERIMENTAL: A single random sample from this generator. 
297 298 This uses the standard `random` package and is not cryptographically 299 secure. 300 301 Returns: 302 A sorted tuple of outcomes for each output of this generator. 303 """ 304 if not self.outcomes(): 305 raise ValueError('Cannot sample from an empty set of outcomes.') 306 307 order, order_reason = self.order_preference() 308 309 if order is not None and order > 0: 310 outcome = self.min_outcome() 311 generated = tuple(self._generate_min(outcome)) 312 else: 313 outcome = self.max_outcome() 314 generated = tuple(self._generate_max(outcome)) 315 316 cumulative_weights = tuple( 317 itertools.accumulate(g.denominator() * w for g, _, w in generated)) 318 denominator = cumulative_weights[-1] 319 # We don't use random.choices since that is based on floats rather than ints. 320 r = random.randrange(denominator) 321 index = bisect.bisect_right(cumulative_weights, r) 322 popped_generator, counts, _ = generated[index] 323 head = tuple((outcome, ) * count for count in counts) 324 if popped_generator.outcomes(): 325 tail = popped_generator.sample() 326 return tuple(tuple(sorted(h + t)) for h, t, in zip(head, tail)) 327 else: 328 return head 329 330 # Binary operators. 331 332 def __add__(self, 333 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 334 /) -> 'MultisetExpression[T]': 335 try: 336 return MultisetExpression.additive_union(self, other) 337 except ImplicitConversionError: 338 return NotImplemented 339 340 def __radd__( 341 self, 342 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 343 /) -> 'MultisetExpression[T]': 344 try: 345 return MultisetExpression.additive_union(other, self) 346 except ImplicitConversionError: 347 return NotImplemented 348 349 def additive_union( 350 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 351 ) -> 'MultisetExpression[T]': 352 """The combined elements from all of the multisets. 353 354 Same as `a + b + c + ...`. 355 356 Any resulting counts that would be negative are set to zero. 
357 358 Example: 359 ```python 360 [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4] 361 ``` 362 """ 363 expressions = tuple( 364 implicit_convert_to_expression(arg) for arg in args) 365 return icepool.operator.MultisetAdditiveUnion(*expressions) 366 367 def __sub__(self, 368 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 369 /) -> 'MultisetExpression[T]': 370 try: 371 return MultisetExpression.difference(self, other) 372 except ImplicitConversionError: 373 return NotImplemented 374 375 def __rsub__( 376 self, 377 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 378 /) -> 'MultisetExpression[T]': 379 try: 380 return MultisetExpression.difference(other, self) 381 except ImplicitConversionError: 382 return NotImplemented 383 384 def difference( 385 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 386 ) -> 'MultisetExpression[T]': 387 """The elements from the left multiset that are not in any of the others. 388 389 Same as `a - b - c - ...`. 390 391 Any resulting counts that would be negative are set to zero. 392 393 Example: 394 ```python 395 [1, 2, 2, 3] - [1, 2, 4] -> [2, 3] 396 ``` 397 398 If no arguments are given, the result will be an empty multiset, i.e. 399 all zero counts. 400 401 Note that, as a multiset operation, this will only cancel elements 1:1. 402 If you want to drop all elements in a set of outcomes regardless of 403 count, either use `drop_outcomes()` instead, or use a large number of 404 counts on the right side. 
405 """ 406 expressions = tuple( 407 implicit_convert_to_expression(arg) for arg in args) 408 return icepool.operator.MultisetDifference(*expressions) 409 410 def __and__(self, 411 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 412 /) -> 'MultisetExpression[T]': 413 try: 414 return MultisetExpression.intersection(self, other) 415 except ImplicitConversionError: 416 return NotImplemented 417 418 def __rand__( 419 self, 420 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 421 /) -> 'MultisetExpression[T]': 422 try: 423 return MultisetExpression.intersection(other, self) 424 except ImplicitConversionError: 425 return NotImplemented 426 427 def intersection( 428 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 429 ) -> 'MultisetExpression[T]': 430 """The elements that all the multisets have in common. 431 432 Same as `a & b & c & ...`. 433 434 Any resulting counts that would be negative are set to zero. 435 436 Example: 437 ```python 438 [1, 2, 2, 3] & [1, 2, 4] -> [1, 2] 439 ``` 440 441 Note that, as a multiset operation, this will only intersect elements 442 1:1. 443 If you want to keep all elements in a set of outcomes regardless of 444 count, either use `keep_outcomes()` instead, or use a large number of 445 counts on the right side. 
446 """ 447 expressions = tuple( 448 implicit_convert_to_expression(arg) for arg in args) 449 return icepool.operator.MultisetIntersection(*expressions) 450 451 def __or__(self, 452 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 453 /) -> 'MultisetExpression[T]': 454 try: 455 return MultisetExpression.union(self, other) 456 except ImplicitConversionError: 457 return NotImplemented 458 459 def __ror__(self, 460 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 461 /) -> 'MultisetExpression[T]': 462 try: 463 return MultisetExpression.union(other, self) 464 except ImplicitConversionError: 465 return NotImplemented 466 467 def union( 468 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 469 ) -> 'MultisetExpression[T]': 470 """The most of each outcome that appear in any of the multisets. 471 472 Same as `a | b | c | ...`. 473 474 Any resulting counts that would be negative are set to zero. 475 476 Example: 477 ```python 478 [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4] 479 ``` 480 """ 481 expressions = tuple( 482 implicit_convert_to_expression(arg) for arg in args) 483 return icepool.operator.MultisetUnion(*expressions) 484 485 def __xor__(self, 486 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 487 /) -> 'MultisetExpression[T]': 488 try: 489 return MultisetExpression.symmetric_difference(self, other) 490 except ImplicitConversionError: 491 return NotImplemented 492 493 def __rxor__( 494 self, 495 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 496 /) -> 'MultisetExpression[T]': 497 try: 498 # Symmetric. 499 return MultisetExpression.symmetric_difference(self, other) 500 except ImplicitConversionError: 501 return NotImplemented 502 503 def symmetric_difference( 504 self, 505 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 506 /) -> 'MultisetExpression[T]': 507 """The elements that appear in the left or right multiset but not both. 508 509 Same as `a ^ b`. 
510 511 Specifically, this produces the absolute difference between counts. 512 If you don't want negative counts to be used from the inputs, you can 513 do `+left ^ +right`. 514 515 Example: 516 ```python 517 [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4] 518 ``` 519 """ 520 other = implicit_convert_to_expression(other) 521 return icepool.operator.MultisetSymmetricDifference(self, other) 522 523 def keep_outcomes( 524 self, target: 525 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 526 /) -> 'MultisetExpression[T]': 527 """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero. 528 529 This is similar to `intersection()`, except the right side is considered 530 to have unlimited multiplicity. 531 532 Args: 533 target: A callable returning `True` iff the outcome should be kept, 534 or an expression or collection of outcomes to keep. 535 """ 536 if isinstance(target, MultisetExpression): 537 return icepool.operator.MultisetFilterOutcomesBinary(self, target) 538 else: 539 return icepool.operator.MultisetFilterOutcomes(self, target=target) 540 541 def drop_outcomes( 542 self, target: 543 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]', 544 /) -> 'MultisetExpression[T]': 545 """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest. 546 547 This is similar to `difference()`, except the right side is considered 548 to have unlimited multiplicity. 549 550 Args: 551 target: A callable returning `True` iff the outcome should be 552 dropped, or an expression or collection of outcomes to drop. 553 """ 554 if isinstance(target, MultisetExpression): 555 return icepool.operator.MultisetFilterOutcomesBinary(self, 556 target, 557 invert=True) 558 else: 559 return icepool.operator.MultisetFilterOutcomes(self, 560 target=target, 561 invert=True) 562 563 # Adjust counts. 
564 565 def map_counts(*args: 566 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 567 function: Callable[..., int]) -> 'MultisetExpression[T]': 568 """Maps the counts to new counts. 569 570 Args: 571 function: A function that takes `outcome, *counts` and produces a 572 combined count. 573 """ 574 expressions = tuple( 575 implicit_convert_to_expression(arg) for arg in args) 576 return icepool.operator.MultisetMapCounts(*expressions, 577 function=function) 578 579 def __mul__(self, n: int) -> 'MultisetExpression[T]': 580 if not isinstance(n, int): 581 return NotImplemented 582 return self.multiply_counts(n) 583 584 # Commutable in this case. 585 def __rmul__(self, n: int) -> 'MultisetExpression[T]': 586 if not isinstance(n, int): 587 return NotImplemented 588 return self.multiply_counts(n) 589 590 def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]': 591 """Multiplies all counts by n. 592 593 Same as `self * n`. 594 595 Example: 596 ```python 597 Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3] 598 ``` 599 """ 600 return icepool.operator.MultisetMultiplyCounts(self, constant=n) 601 602 @overload 603 def __floordiv__(self, other: int) -> 'MultisetExpression[T]': 604 ... 
605 606 @overload 607 def __floordiv__( 608 self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 609 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]': 610 """Same as divide_counts().""" 611 612 @overload 613 def __floordiv__( 614 self, 615 other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 616 ) -> 'MultisetExpression[T] | icepool.Die[int] | icepool.MultisetEvaluator[T, int]': 617 """Same as count_subset().""" 618 619 def __floordiv__( 620 self, 621 other: 'int | MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 622 ) -> 'MultisetExpression[T] | icepool.Die[int] | icepool.MultisetEvaluator[T, int]': 623 if isinstance(other, int): 624 return self.divide_counts(other) 625 else: 626 return self.count_subset(other) 627 628 def divide_counts(self, n: int, /) -> 'MultisetExpression[T]': 629 """Divides all counts by n (rounding down). 630 631 Same as `self // n`. 632 633 Example: 634 ```python 635 Pool([1, 2, 2, 3]) // 2 -> [2] 636 ``` 637 """ 638 return icepool.operator.MultisetFloordivCounts(self, constant=n) 639 640 def __mod__(self, n: int, /) -> 'MultisetExpression[T]': 641 if not isinstance(n, int): 642 return NotImplemented 643 return icepool.operator.MultisetModuloCounts(self, constant=n) 644 645 def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]': 646 """Moduos all counts by n. 647 648 Same as `self % n`. 649 650 Example: 651 ```python 652 Pool([1, 2, 2, 3]) % 2 -> [1, 3] 653 ``` 654 """ 655 return self % n 656 657 def __pos__(self) -> 'MultisetExpression[T]': 658 """Sets all negative counts to zero.""" 659 return icepool.operator.MultisetKeepCounts(self, 660 comparison='>=', 661 constant=0) 662 663 def __neg__(self) -> 'MultisetExpression[T]': 664 """As -1 * self.""" 665 return -1 * self 666 667 def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=', 668 '>'], n: int, 669 /) -> 'MultisetExpression[T]': 670 """Keeps counts fitting the comparison, treating the rest as zero. 
671 672 For example, `expression.keep_counts('>=', 2)` would keep pairs, 673 triplets, etc. and drop singles. 674 675 ```python 676 Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3] 677 ``` 678 679 Args: 680 comparison: The comparison to use. 681 n: The number to compare counts against. 682 """ 683 return icepool.operator.MultisetKeepCounts(self, 684 comparison=comparison, 685 constant=n) 686 687 def unique(self, n: int = 1, /) -> 'MultisetExpression[T]': 688 """Counts each outcome at most `n` times. 689 690 For example, `generator.unique(2)` would count each outcome at most 691 twice. 692 693 Example: 694 ```python 695 Pool([1, 2, 2, 3]).unique() -> [1, 2, 3] 696 ``` 697 """ 698 return icepool.operator.MultisetUnique(self, constant=n) 699 700 # Keep highest / lowest. 701 702 @overload 703 def keep( 704 self, index: slice | Sequence[int | EllipsisType] 705 ) -> 'MultisetExpression[T]': 706 ... 707 708 @overload 709 def keep(self, 710 index: int) -> 'icepool.Die[T] | icepool.MultisetEvaluator[T, T]': 711 ... 712 713 def keep( 714 self, index: slice | Sequence[int | EllipsisType] | int 715 ) -> 'MultisetExpression[T] | icepool.Die[T] | icepool.MultisetEvaluator[T, T]': 716 """Selects elements after drawing and sorting. 717 718 This is less capable than the `KeepGenerator` version. 719 In particular, it does not know how many elements it is selecting from, 720 so it must be anchored at the starting end. The advantage is that it 721 can be applied to any expression. 722 723 The valid types of argument are: 724 725 * A `slice`. If both start and stop are provided, they must both be 726 non-negative or both be negative. step is not supported. 727 * A sequence of `int` with `...` (`Ellipsis`) at exactly one end. 728 Each sorted element will be counted that many times, with the 729 `Ellipsis` treated as enough zeros (possibly "negative") to 730 fill the rest of the elements. 731 * An `int`, which evaluates by taking the element at the specified 732 index. 
In this case the result is a `Die` (if fully bound) or a 733 `MultisetEvaluator` (if there are free variables). 734 735 Negative incoming counts are treated as zero counts. 736 737 Use the `[]` operator for the same effect as this method. 738 """ 739 if isinstance(index, int): 740 return icepool.evaluator.KeepEvaluator(index).evaluate(self) 741 else: 742 return icepool.operator.MultisetKeep(self, index=index) 743 744 @overload 745 def __getitem__( 746 self, index: slice | Sequence[int | EllipsisType] 747 ) -> 'MultisetExpression[T]': 748 ... 749 750 @overload 751 def __getitem__( 752 self, 753 index: int) -> 'icepool.Die[T] | icepool.MultisetEvaluator[T, T]': 754 ... 755 756 def __getitem__( 757 self, index: slice | Sequence[int | EllipsisType] | int 758 ) -> 'MultisetExpression[T] | icepool.Die[T] | icepool.MultisetEvaluator[T, T]': 759 return self.keep(index) 760 761 def lowest(self, 762 keep: int | None = None, 763 drop: int | None = None) -> 'MultisetExpression[T]': 764 """Keep some of the lowest elements from this multiset and drop the rest. 765 766 In contrast to the die and free function versions, this does not 767 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 768 Alternatively, you can perform some other evaluation. 769 770 This requires the outcomes to be evaluated in ascending order. 771 772 Args: 773 keep, drop: These arguments work together: 774 * If neither are provided, the single lowest element 775 will be kept. 776 * If only `keep` is provided, the `keep` lowest elements 777 will be kept. 778 * If only `drop` is provided, the `drop` lowest elements 779 will be dropped and the rest will be kept. 780 * If both are provided, `drop` lowest elements will be dropped, 781 then the next `keep` lowest elements will be kept. 
782 """ 783 index = lowest_slice(keep, drop) 784 return self.keep(index) 785 786 def highest(self, 787 keep: int | None = None, 788 drop: int | None = None) -> 'MultisetExpression[T]': 789 """Keep some of the highest elements from this multiset and drop the rest. 790 791 In contrast to the die and free function versions, this does not 792 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 793 Alternatively, you can perform some other evaluation. 794 795 This requires the outcomes to be evaluated in descending order. 796 797 Args: 798 keep, drop: These arguments work together: 799 * If neither are provided, the single highest element 800 will be kept. 801 * If only `keep` is provided, the `keep` highest elements 802 will be kept. 803 * If only `drop` is provided, the `drop` highest elements 804 will be dropped and the rest will be kept. 805 * If both are provided, `drop` highest elements will be dropped, 806 then the next `keep` highest elements will be kept. 807 """ 808 index = highest_slice(keep, drop) 809 return self.keep(index) 810 811 # Matching. 812 813 def sort_match(self, 814 comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 815 other: 'MultisetExpression[T]', 816 /, 817 order: Order = Order.Descending) -> 'MultisetExpression[T]': 818 """EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner. 819 820 Extra elements: If `self` has more elements than `other`, whether the 821 extra elements are kept depends on the `order` and `comparison`: 822 * Descending: kept for `'>='`, `'>'` 823 * Ascending: kept for `'<='`, `'<'` 824 825 Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of 826 *RISK*. Which pairs did the attacker win? 827 ```python 828 d6.pool(3).highest(2).sort_match('>', d6.pool(2)) 829 ``` 830 831 Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. 
832 In this case the 4 would be blocked since the attacker lost that pair, 833 leaving the attacker's 6 and 3. If you don't want to keep the extra 834 element, you can use `highest`. 835 ```python 836 Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3] 837 Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6] 838 ``` 839 840 Contrast `maximum_match()`, which first creates the maximum number of 841 pairs that fit the comparison, not necessarily in sorted order. 842 In the above example, `maximum_match()` would allow the defender to 843 assign their 5s to block both the 4 and the 3. 844 845 Negative incoming counts are treated as zero counts. 846 847 Args: 848 comparison: The comparison to filter by. If you want to drop rather 849 than keep, use the complementary comparison: 850 * `'=='` vs. `'!='` 851 * `'<='` vs. `'>'` 852 * `'>='` vs. `'<'` 853 other: The other multiset to match elements with. 854 order: The order in which to sort before forming matches. 855 Default is descending. 856 """ 857 other = implicit_convert_to_expression(other) 858 859 match comparison: 860 case '==': 861 lesser, tie, greater = 0, 1, 0 862 case '!=': 863 lesser, tie, greater = 1, 0, 1 864 case '<=': 865 lesser, tie, greater = 1, 1, 0 866 case '<': 867 lesser, tie, greater = 1, 0, 0 868 case '>=': 869 lesser, tie, greater = 0, 1, 1 870 case '>': 871 lesser, tie, greater = 0, 0, 1 872 case _: 873 raise ValueError(f'Invalid comparison {comparison}') 874 875 if order > 0: 876 left_first = lesser 877 right_first = greater 878 else: 879 left_first = greater 880 right_first = lesser 881 882 return icepool.operator.MultisetSortMatch(self, 883 other, 884 order=order, 885 tie=tie, 886 left_first=left_first, 887 right_first=right_first) 888 889 def maximum_match_highest( 890 self, comparison: Literal['<=', 891 '<'], other: 'MultisetExpression[T]', /, 892 *, keep: Literal['matched', 893 'unmatched']) -> 'MultisetExpression[T]': 894 """EXPERIMENTAL: Match the highest elements from `self` with even 
higher (or equal) elements from `other`.

        This matches elements of `self` with elements of `other`, such that in
        each pair the element from `self` fits the `comparison` with the
        element from `other`. As many such pairs of elements will be matched as
        possible, preferring the highest matchable elements of `self`.
        Finally, either the matched or unmatched elements from `self` are kept.

        This requires that outcomes be evaluated in descending order.

        Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of
        3d6. Defender dice can be used to block attacker dice of equal or lesser
        value, and the defender prefers to block the highest attacker dice
        possible. Which attacker dice were not blocked?
        ```python
        d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum()
        ```

        Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5.
        Then the result would be [6, 1].
        ```python
        d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched')
        -> [6, 1]
        ```

        Contrast `sort_match()`, which first creates pairs in
        sorted order and then filters them by `comparison`.
        In the above example, `sort_match()` would force the defender to match
        against the 5 and the 4, which would only allow them to block the 4.

        Negative incoming counts are treated as zero counts.

        Args:
            comparison: Either `'<='` or `'<'`.
            other: The other multiset to match elements with.
            keep: Whether 'matched' or 'unmatched' elements are to be kept.
        """
        if keep == 'matched':
            keep_boolean = True
        elif keep == 'unmatched':
            keep_boolean = False
        else:
            raise ValueError("keep must be either 'matched' or 'unmatched'")

        other = implicit_convert_to_expression(other)
        match comparison:
            case '<=':
                match_equal = True
            case '<':
                match_equal = False
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
        return icepool.operator.MultisetMaximumMatch(self,
                                                    other,
                                                    order=Order.Descending,
                                                    match_equal=match_equal,
                                                    keep=keep_boolean)

    def maximum_match_lowest(
            self, comparison: Literal['>=', '>'],
            other: 'MultisetExpression[T]', /,
            *, keep: Literal['matched', 'unmatched']) -> 'MultisetExpression[T]':
        """EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`.

        This matches elements of `self` with elements of `other`, such that in
        each pair the element from `self` fits the `comparison` with the
        element from `other`. As many such pairs of elements will be matched as
        possible, preferring the lowest matchable elements of `self`.
        Finally, either the matched or unmatched elements from `self` are kept.

        This requires that outcomes be evaluated in ascending order.

        Contrast `sort_match()`, which first creates pairs in
        sorted order and then filters them by `comparison`.

        Args:
            comparison: Either `'>='` or `'>'`.
            other: The other multiset to match elements with.
            keep: Whether 'matched' or 'unmatched' elements are to be kept.
        """
        if keep == 'matched':
            keep_boolean = True
        elif keep == 'unmatched':
            keep_boolean = False
        else:
            raise ValueError("keep must be either 'matched' or 'unmatched'")

        other = implicit_convert_to_expression(other)
        match comparison:
            case '>=':
                match_equal = True
            case '>':
                match_equal = False
            case _:
                raise ValueError(f'Invalid comparison {comparison}')
        return icepool.operator.MultisetMaximumMatch(self,
                                                    other,
                                                    order=Order.Ascending,
                                                    match_equal=match_equal,
                                                    keep=keep_boolean)

    # Evaluations.

    def expand(
        self,
        order: Order = Order.Ascending
    ) -> 'icepool.Die[tuple[T, ...]] | icepool.MultisetEvaluator[T, tuple[T, ...]]':
        """Evaluation: All elements of the multiset in ascending order.

        This is expensive and not recommended unless there are few possibilities.

        Args:
            order: Whether the elements are in ascending (default) or descending
                order.
        """
        return icepool.evaluator.ExpandEvaluator(order=order).evaluate(self)

    def sum(
        self,
        map: Callable[[T], U] | Mapping[T, U] | None = None
    ) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T, U]':
        """Evaluation: The sum of all elements."""
        if map is None:
            return icepool.evaluator.sum_evaluator.evaluate(self)
        else:
            return icepool.evaluator.SumEvaluator(map).evaluate(self)

    def count(self) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]':
        """Evaluation: The total number of elements in the multiset.

        This is usually not very interesting unless some other operation is
        performed first. Examples:

        `generator.unique().count()` will count the number of unique outcomes.

        `(generator & [4, 5, 6]).count()` will count up to one each of
        4, 5, and 6.
        """
        return icepool.evaluator.count_evaluator.evaluate(self)

    def any(self) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        """Evaluation: Whether the multiset has at least one positive count."""
        return icepool.evaluator.any_evaluator.evaluate(self)

    def highest_outcome_and_count(
        self
    ) -> 'icepool.Die[tuple[T, int]] | icepool.MultisetEvaluator[T, tuple[T, int]]':
        """Evaluation: The highest outcome with positive count, along with that count.

        If no outcomes have positive count, the min outcome will be returned with 0 count.
        """
        return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate(
            self)

    def all_counts(
        self,
        filter: int | Literal['all'] = 1
    ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[T, tuple[int, ...]]':
        """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

        The sizes are in **descending** order.

        Args:
            filter: Any counts below this value will not be in the output.
                For example, `filter=2` will only produce pairs and better.
                If `'all'`, no filtering will be done.

                Why not just place `keep_counts_ge()` before this?
                `keep_counts_ge()` operates by setting counts to zero, so you
                would still need an argument to specify whether you want to
                output zero counts. So we might as well use the argument to do
                both.
        """
        return icepool.evaluator.AllCountsEvaluator(
            filter=filter).evaluate(self)

    def largest_count(
            self) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]':
        """Evaluation: The size of the largest matching set among the elements."""
        return icepool.evaluator.largest_count_evaluator.evaluate(self)

    def largest_count_and_outcome(
        self
    ) -> 'icepool.Die[tuple[int, T]] | icepool.MultisetEvaluator[T, tuple[int, T]]':
        """Evaluation: The largest matching set among the elements and the corresponding outcome."""
        return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate(
            self)

    def __rfloordiv__(
        self, other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]':
        other = implicit_convert_to_expression(other)
        return other.count_subset(self)

    def count_subset(
        self,
        divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        /,
        *,
        empty_divisor: int | None = None
    ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]':
        """Evaluation: The number of times the divisor is contained in this multiset.

        Args:
            divisor: The multiset to divide by.
            empty_divisor: If the divisor is empty, the outcome will be this.
                If not set, `ZeroDivisionError` will be raised for an empty
                right side.

        Raises:
            ZeroDivisionError: If the divisor may be empty and
                `empty_divisor` is not set.
        """
        divisor = implicit_convert_to_expression(divisor)
        return icepool.evaluator.CountSubsetEvaluator(
            empty_divisor=empty_divisor).evaluate(self, divisor)

    def largest_straight(
        self: 'MultisetExpression[int]'
    ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[int, int]':
        """Evaluation: The size of the largest straight among the elements.

        Outcomes must be `int`s.
        """
        return icepool.evaluator.largest_straight_evaluator.evaluate(self)

    def largest_straight_and_outcome(
            self: 'MultisetExpression[int]',
            priority: Literal['low', 'high'] = 'high',
            /
    ) -> 'icepool.Die[tuple[int, int]] | icepool.MultisetEvaluator[int, tuple[int, int]]':
        """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.

        Straight size is prioritized first, then the outcome.

        Outcomes must be `int`s.

        Args:
            priority: Controls which outcome within the straight is returned,
                and which straight is picked if there is a tie for largest
                straight.
        """
        if priority == 'high':
            return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate(
                self)
        elif priority == 'low':
            return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate(
                self)
        else:
            raise ValueError("priority must be 'low' or 'high'.")

    def all_straights(
        self: 'MultisetExpression[int]'
    ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[int, tuple[int, ...]]':
        """Evaluation: The sizes of all straights.

        The sizes are in **descending** order.

        Each element can only contribute to one straight, though duplicate
        elements can produce straights that overlap in outcomes. In this case,
        elements are preferentially assigned to the longer straight.
        """
        return icepool.evaluator.all_straights_evaluator.evaluate(self)

    def all_straights_reduce_counts(
        self: 'MultisetExpression[int]',
        reducer: Callable[[int, int], int] = operator.mul
    ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | icepool.MultisetEvaluator[int, tuple[tuple[int, int], ...]]':
        """Experimental: All straights with a reduce operation on the counts.

        This can be used to evaluate e.g. cribbage-style straight counting.
        The result is a tuple of `(run_length, run_score)`s.
        """
        return icepool.evaluator.AllStraightsReduceCountsEvaluator(
            reducer=reducer).evaluate(self)

    def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
                *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
                order: Order = Order.Descending,
                limit: int | None = None):
        """Experimental: Returns the indexes of the originating multisets for each rank in their additive union.

        Example:
        ```python
        MultisetExpression.argsort([10, 9, 5], [9, 9])
        ```
        produces
        ```python
        ((0,), (0, 1, 1), (0,))
        ```

        Args:
            self, *args: The multiset expressions to be evaluated.
            order: Which order the ranks are to be emitted. Default is descending.
            limit: How many ranks to emit. Default will emit all ranks, which
                makes the length of each outcome equal to
                `additive_union(+self, +arg1, +arg2, ...).unique().count()`
        """
        self = implicit_convert_to_expression(self)
        converted_args = [implicit_convert_to_expression(arg) for arg in args]
        return icepool.evaluator.ArgsortEvaluator(order=order,
                                                  limit=limit).evaluate(
                                                      self, *converted_args)

    # Comparators.
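The `argsort` semantics above can be illustrated with a plain-Python sketch. This is a simplification working on concrete lists rather than expressions, and `argsort_lists` is a hypothetical helper, not part of icepool:

```python
def argsort_lists(*multisets, descending=True):
    """Sketch of argsort semantics for concrete lists: for each rank
    (unique outcome across all inputs, sorted), emit the index of the
    originating multiset once per copy of that outcome."""
    ranks = sorted({o for m in multisets for o in m}, reverse=descending)
    return tuple(
        tuple(i for i, m in enumerate(multisets) for _ in range(m.count(o)))
        for o in ranks)

# Mirrors the docstring example:
argsort_lists([10, 9, 5], [9, 9])  # -> ((0,), (0, 1, 1), (0,))
```

The first rank (10) came only from multiset 0; the second rank (9) has one copy from multiset 0 and two from multiset 1.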
    def _compare(
        self,
        right: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
        operation_class: Type['icepool.evaluator.ComparisonEvaluator'],
        *,
        truth_value_callback: 'Callable[[], bool] | None' = None
    ) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        right = icepool.implicit_convert_to_expression(right)

        if truth_value_callback is not None:

            def data_callback() -> Counts[bool]:
                die = cast('icepool.Die[bool]',
                           operation_class().evaluate(self, right))
                if not isinstance(die, icepool.Die):
                    raise TypeError('Did not resolve to a die.')
                return die._data

            return icepool.DieWithTruth(data_callback, truth_value_callback)
        else:
            return operation_class().evaluate(self, right)

    def __lt__(self,
               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
               /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        try:
            return self._compare(other,
                                 icepool.evaluator.IsProperSubsetEvaluator)
        except TypeError:
            return NotImplemented

    def __le__(self,
               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
               /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        try:
            return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
        except TypeError:
            return NotImplemented

    def issubset(
            self,
            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        """Evaluation: Whether this multiset is a subset of the other multiset.

        Specifically, if this multiset has a lesser or equal count for each
        outcome than the other multiset, this evaluates to `True`;
        if there is some outcome for which this multiset has a greater count
        than the other multiset, this evaluates to `False`.

        `issubset` is the same as `self <= other`.
        `self < other` evaluates a proper subset relation, which is the same
        except the result is `False` if the two multisets are exactly equal.
        """
        return self._compare(other, icepool.evaluator.IsSubsetEvaluator)

    def __gt__(self,
               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
               /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        try:
            return self._compare(other,
                                 icepool.evaluator.IsProperSupersetEvaluator)
        except TypeError:
            return NotImplemented

    def __ge__(self,
               other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
               /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        try:
            return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
        except TypeError:
            return NotImplemented

    def issuperset(
            self,
            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        """Evaluation: Whether this multiset is a superset of the other multiset.

        Specifically, if this multiset has a greater or equal count for each
        outcome than the other multiset, this evaluates to `True`;
        if there is some outcome for which this multiset has a lesser count
        than the other multiset, this evaluates to `False`.

        A typical use of this evaluation is testing for the presence of a
        combo of cards in a hand, e.g.

        ```python
        deck.deal(5) >= ['a', 'a', 'b']
        ```

        represents the chance that a deal of 5 cards contains at least two 'a's
        and one 'b'.

        `issuperset` is the same as `self >= other`.

        `self > other` evaluates a proper superset relation, which is the same
        except the result is `False` if the two multisets are exactly equal.
        """
        return self._compare(other, icepool.evaluator.IsSupersetEvaluator)

    def __eq__(  # type: ignore
            self,
            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        try:

            def truth_value_callback() -> bool:
                if not isinstance(other, MultisetExpression):
                    return False
                return self._hash_key == other._hash_key

            return self._compare(other,
                                 icepool.evaluator.IsEqualSetEvaluator,
                                 truth_value_callback=truth_value_callback)
        except TypeError:
            return NotImplemented

    def __ne__(  # type: ignore
            self,
            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        try:

            def truth_value_callback() -> bool:
                if not isinstance(other, MultisetExpression):
                    return False
                return self._hash_key != other._hash_key

            return self._compare(other,
                                 icepool.evaluator.IsNotEqualSetEvaluator,
                                 truth_value_callback=truth_value_callback)
        except TypeError:
            return NotImplemented

    def isdisjoint(
            self,
            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]':
        """Evaluation: Whether this multiset is disjoint from the other multiset.

        Specifically, this evaluates to `False` if there is any outcome for
        which both multisets have positive count, and `True` if there is not.

        Negative incoming counts are treated as zero counts.
        """
        return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
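The comparison evaluators above all reduce to per-outcome count comparisons. As a plain-Python sketch of the semantics (using `collections.Counter` for concrete multisets; these helpers are illustrative, not icepool's implementation):

```python
from collections import Counter

def issubset(left: Counter, right: Counter) -> bool:
    """True iff every count in left is <= the matching count in right (sketch)."""
    return all(count <= right[outcome] for outcome, count in left.items())

def isdisjoint(left: Counter, right: Counter) -> bool:
    """True iff no outcome has positive count in both multisets (sketch)."""
    return not any(count > 0 and right[outcome] > 0
                   for outcome, count in left.items())
```

In icepool, the same comparisons return a `Die[bool]` over all possible rolls rather than a single boolean.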
Abstract base class representing an expression that operates on multisets.

There are three types of multiset expressions:

* `MultisetGenerator`, which produces raw outcomes and counts.
* `MultisetOperator`, which takes outcomes with one or more counts and produces a count.
* `MultisetVariable`, which is a temporary placeholder for some other expression.

Expression methods can be applied to `MultisetGenerator`s to do simple evaluations. For joint evaluations, try `multiset_function`.

Use the provided operations to build up more complicated expressions, or to attach a final evaluator.

Operations include:
| Operation | Count / notes |
|:----|:----|
| `additive_union`, `+` | `l + r` |
| `difference`, `-` | `l - r` |
| `intersection`, `&` | `min(l, r)` |
| `union`, `\|` | `max(l, r)` |
| `symmetric_difference`, `^` | `abs(l - r)` |
| `multiply_counts`, `*` | `count * n` |
| `divide_counts`, `//` | `count // n` |
| `modulo_counts`, `%` | `count % n` |
| `keep_counts` | `count if count >= n else 0` etc. |
| unary `+` | same as `keep_counts_ge(0)` |
| unary `-` | reverses the sign of all counts |
| `unique` | `min(count, n)` |
| `keep_outcomes` | `count if outcome in t else 0` |
| `drop_outcomes` | `count if outcome not in t else 0` |
| `map_counts` | `f(outcome, *counts)` |
| `keep`, `[]` | less capable than `KeepGenerator` version |
| `highest` | less capable than `KeepGenerator` version |
| `lowest` | less capable than `KeepGenerator` version |
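The per-outcome count rules in the table can be sketched in plain Python using `collections.Counter` for concrete multisets (`combine` is a hypothetical helper for illustration, not icepool's implementation):

```python
from collections import Counter

def combine(left, right, op):
    """Apply a per-outcome count rule, as in the table above (a sketch,
    not icepool's implementation)."""
    result = Counter()
    for outcome in set(left) | set(right):
        count = op(left[outcome], right[outcome])  # missing outcomes count as 0
        if count > 0:  # resulting counts that would be negative are set to zero
            result[outcome] = count
    return result

l = Counter([1, 2, 2, 3])
r = Counter([1, 2, 4])

additive_union = combine(l, r, lambda a, b: a + b)   # [1, 1, 2, 2, 2, 3, 4]
intersection = combine(l, r, min)                    # [1, 2]
union = combine(l, r, max)                           # [1, 2, 2, 3, 4]
symmetric_difference = combine(l, r, lambda a, b: abs(a - b))  # [2, 3, 4]
```

In icepool the same rules apply outcome-by-outcome over random multisets, producing expressions rather than concrete results.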
| Evaluator | Summary |
|:----|:----|
| `issubset`, `<=` | Whether the left side's counts are all `<=` their counterparts on the right |
| `issuperset`, `>=` | Whether the left side's counts are all `>=` their counterparts on the right |
| `isdisjoint` | Whether the left side has no positive counts in common with the right side |
| `<` | As `<=`, but `False` if the two multisets are equal |
| `>` | As `>=`, but `False` if the two multisets are equal |
| `==` | Whether the left side has all the same counts as the right side |
| `!=` | Whether the left side has any different counts to the right side |
| `expand` | All elements in ascending order |
| `sum` | Sum of all elements |
| `count` | The number of elements |
| `any` | Whether there is at least 1 element |
| `highest_outcome_and_count` | The highest outcome and how many of that outcome |
| `all_counts` | All counts in descending order |
| `largest_count` | The single largest count, aka x-of-a-kind |
| `largest_count_and_outcome` | Same but also with the corresponding outcome |
| `count_subset`, `//` | The number of times the right side is contained in the left side |
| `largest_straight` | Length of longest consecutive sequence |
| `largest_straight_and_outcome` | Same but also with the corresponding outcome |
| `all_straights` | Lengths of all consecutive sequences in descending order |
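To make the evaluator summaries concrete, here is what two of them compute for a single known roll, as a plain-Python sketch (hypothetical helpers, not icepool's implementation, which evaluates over all possible rolls at once):

```python
from collections import Counter

def largest_count(roll):
    """Size of the largest matching set (x-of-a-kind) -- a sketch of the
    `largest_count` evaluation for one concrete roll."""
    return max(Counter(roll).values())

def largest_straight(roll):
    """Length of the longest run of consecutive integers among the unique
    outcomes -- a sketch of the `largest_straight` evaluation."""
    unique = sorted(set(roll))
    best = run = 1
    for prev, cur in zip(unique, unique[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best

roll = [1, 2, 2, 3, 5, 5, 5]
largest_count(roll)     # 3 (three 5s)
largest_straight(roll)  # 3 (the run 1, 2, 3)
```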
    @abstractmethod
    def outcomes(self) -> Sequence[T]:
        """The possible outcomes that could be generated, in ascending order."""
    @abstractmethod
    def output_arity(self) -> int:
        """The number of multisets/counts generated. Must be constant."""
    @abstractmethod
    def local_order_preference(self) -> tuple[Order, OrderReason]:
        """Any ordering that is preferred or required by this expression node."""
    @abstractmethod
    def has_free_variables(self) -> bool:
        """Whether this expression contains any free variables, i.e. parameters to a @multiset_function."""
    @abstractmethod
    def denominator(self) -> int:
        """The total weight of all paths through this generator.

        Raises:
            UnboundMultisetExpressionError if this is called on an expression with free variables.
        """
    def equals(self, other) -> bool:
        """Whether this expression is logically equal to another object."""
        if not isinstance(other, MultisetExpression):
            return False
        return self._hash_key == other._hash_key
    def sample(self) -> tuple[tuple, ...]:
        """EXPERIMENTAL: A single random sample from this generator.

        This uses the standard `random` package and is not cryptographically
        secure.

        Returns:
            A sorted tuple of outcomes for each output of this generator.
        """
        if not self.outcomes():
            raise ValueError('Cannot sample from an empty set of outcomes.')

        order, order_reason = self.order_preference()

        if order is not None and order > 0:
            outcome = self.min_outcome()
            generated = tuple(self._generate_min(outcome))
        else:
            outcome = self.max_outcome()
            generated = tuple(self._generate_max(outcome))

        cumulative_weights = tuple(
            itertools.accumulate(g.denominator() * w for g, _, w in generated))
        denominator = cumulative_weights[-1]
        # We don't use random.choices since that is based on floats rather than ints.
        r = random.randrange(denominator)
        index = bisect.bisect_right(cumulative_weights, r)
        popped_generator, counts, _ = generated[index]
        head = tuple((outcome, ) * count for count in counts)
        if popped_generator.outcomes():
            tail = popped_generator.sample()
            return tuple(tuple(sorted(h + t)) for h, t in zip(head, tail))
        else:
            return head
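The weighted-selection step inside `sample()` can be isolated as a small sketch: integer-exact cumulative weights plus `randrange` and `bisect_right`, avoiding the float arithmetic of `random.choices` (the helper name is illustrative, not part of icepool):

```python
import bisect
import itertools
import random

def weighted_choice(items, weights):
    """Integer-exact weighted selection, mirroring the technique in
    `sample()`: cumulative integer weights, randrange, bisect_right."""
    cumulative = list(itertools.accumulate(weights))
    r = random.randrange(cumulative[-1])  # 0 <= r < total weight
    return items[bisect.bisect_right(cumulative, r)]
```

Because `randrange` works on exact integers, an item with weight 0 can never be selected, and the probabilities are exact fractions of the total weight rather than float approximations.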
    def additive_union(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        """The combined elements from all of the multisets.

        Same as `a + b + c + ...`.

        Any resulting counts that would be negative are set to zero.

        Example:
        ```python
        [1, 2, 2, 3] + [1, 2, 4] -> [1, 1, 2, 2, 2, 3, 4]
        ```
        """
        expressions = tuple(
            implicit_convert_to_expression(arg) for arg in args)
        return icepool.operator.MultisetAdditiveUnion(*expressions)
    def difference(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        """The elements from the left multiset that are not in any of the others.

        Same as `a - b - c - ...`.

        Any resulting counts that would be negative are set to zero.

        Example:
        ```python
        [1, 2, 2, 3] - [1, 2, 4] -> [2, 3]
        ```

        If no arguments are given, the result will be an empty multiset, i.e.
        all zero counts.

        Note that, as a multiset operation, this will only cancel elements 1:1.
        If you want to drop all elements in a set of outcomes regardless of
        count, either use `drop_outcomes()` instead, or use a large number of
        counts on the right side.
        """
        expressions = tuple(
            implicit_convert_to_expression(arg) for arg in args)
        return icepool.operator.MultisetDifference(*expressions)
    def intersection(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        """The elements that all the multisets have in common.

        Same as `a & b & c & ...`.

        Any resulting counts that would be negative are set to zero.

        Example:
        ```python
        [1, 2, 2, 3] & [1, 2, 4] -> [1, 2]
        ```

        Note that, as a multiset operation, this will only intersect elements
        1:1.
        If you want to keep all elements in a set of outcomes regardless of
        count, either use `keep_outcomes()` instead, or use a large number of
        counts on the right side.
        """
        expressions = tuple(
            implicit_convert_to_expression(arg) for arg in args)
        return icepool.operator.MultisetIntersection(*expressions)
    def union(
        *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
    ) -> 'MultisetExpression[T]':
        """The most of each outcome that appears in any of the multisets.

        Same as `a | b | c | ...`.

        Any resulting counts that would be negative are set to zero.

        Example:
        ```python
        [1, 2, 2, 3] | [1, 2, 4] -> [1, 2, 2, 3, 4]
        ```
        """
        expressions = tuple(
            implicit_convert_to_expression(arg) for arg in args)
        return icepool.operator.MultisetUnion(*expressions)
    def symmetric_difference(
            self,
            other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
            /) -> 'MultisetExpression[T]':
        """The elements that appear in the left or right multiset but not both.

        Same as `a ^ b`.

        Specifically, this produces the absolute difference between counts.
        If you don't want negative counts to be used from the inputs, you can
        do `+left ^ +right`.

        Example:
        ```python
        [1, 2, 2, 3] ^ [1, 2, 4] -> [2, 3, 4]
        ```
        """
        other = implicit_convert_to_expression(other)
        return icepool.operator.MultisetSymmetricDifference(self, other)
    def keep_outcomes(
            self,
            target: 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
            /) -> 'MultisetExpression[T]':
        """Keeps the elements in the target set of outcomes, and drops the rest by setting their counts to zero.

        This is similar to `intersection()`, except the right side is considered
        to have unlimited multiplicity.

        Args:
            target: A callable returning `True` iff the outcome should be kept,
                or an expression or collection of outcomes to keep.
        """
        if isinstance(target, MultisetExpression):
            return icepool.operator.MultisetFilterOutcomesBinary(self, target)
        else:
            return icepool.operator.MultisetFilterOutcomes(self, target=target)
    def drop_outcomes(
            self,
            target: 'Callable[[T], bool] | Collection[T] | MultisetExpression[T]',
            /) -> 'MultisetExpression[T]':
        """Drops the elements in the target set of outcomes by setting their counts to zero, and keeps the rest.

        This is similar to `difference()`, except the right side is considered
        to have unlimited multiplicity.

        Args:
            target: A callable returning `True` iff the outcome should be
                dropped, or an expression or collection of outcomes to drop.
        """
        if isinstance(target, MultisetExpression):
            return icepool.operator.MultisetFilterOutcomesBinary(self,
                                                                 target,
                                                                 invert=True)
        else:
            return icepool.operator.MultisetFilterOutcomes(self,
                                                           target=target,
                                                           invert=True)
    def map_counts(*args:
                   'MultisetExpression[T] | Mapping[T, int] | Sequence[T]',
                   function: Callable[..., int]) -> 'MultisetExpression[T]':
        """Maps the counts to new counts.

        Args:
            function: A function that takes `outcome, *counts` and produces a
                combined count.
        """
        expressions = tuple(
            implicit_convert_to_expression(arg) for arg in args)
        return icepool.operator.MultisetMapCounts(*expressions,
                                                  function=function)
    def multiply_counts(self, n: int, /) -> 'MultisetExpression[T]':
        """Multiplies all counts by n.

        Same as `self * n`.

        Example:
        ```python
        Pool([1, 2, 2, 3]) * 2 -> [1, 1, 2, 2, 2, 2, 3, 3]
        ```
        """
        return icepool.operator.MultisetMultiplyCounts(self, constant=n)
    def divide_counts(self, n: int, /) -> 'MultisetExpression[T]':
        """Divides all counts by n (rounding down).

        Same as `self // n`.

        Example:
        ```python
        Pool([1, 2, 2, 3]) // 2 -> [2]
        ```
        """
        return icepool.operator.MultisetFloordivCounts(self, constant=n)
    def modulo_counts(self, n: int, /) -> 'MultisetExpression[T]':
        """Takes all counts modulo n.

        Same as `self % n`.

        Example:
        ```python
        Pool([1, 2, 2, 3]) % 2 -> [1, 3]
        ```
        """
        return self % n
    def keep_counts(self, comparison: Literal['==', '!=', '<=', '<', '>=', '>'],
                    n: int, /) -> 'MultisetExpression[T]':
        """Keeps counts fitting the comparison, treating the rest as zero.

        For example, `expression.keep_counts('>=', 2)` would keep pairs,
        triplets, etc. and drop singles.

        ```python
        Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]
        ```

        Args:
            comparison: The comparison to use.
            n: The number to compare counts against.
        """
        return icepool.operator.MultisetKeepCounts(self,
                                                   comparison=comparison,
                                                   constant=n)
Keeps counts fitting the comparison, treating the rest as zero.

For example, `expression.keep_counts('>=', 2)` would keep pairs, triplets, etc. and drop singles.

`Pool([1, 2, 2, 3, 3, 3]).keep_counts('>=', 2) -> [2, 2, 3, 3, 3]`

Arguments:
- comparison: The comparison to use.
- n: The number to compare counts against.
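The per-outcome filtering can be sketched in plain Python (illustrative only; the comparison table and helper name are not icepool API):

```python
from collections import Counter
import operator

COMPARISONS = {'==': operator.eq, '!=': operator.ne, '<=': operator.le,
               '<': operator.lt, '>=': operator.ge, '>': operator.gt}

def keep_counts_sketch(multiset, comparison, n):
    """Keep an outcome's full count if it fits the comparison, else zero it."""
    compare = COMPARISONS[comparison]
    result = []
    for outcome, count in sorted(Counter(multiset).items()):
        if compare(count, n):
            result.extend([outcome] * count)
    return result

keep_counts_sketch([1, 2, 2, 3, 3, 3], '>=', 2)  # [2, 2, 3, 3, 3]
```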
687 def unique(self, n: int = 1, /) -> 'MultisetExpression[T]': 688 """Counts each outcome at most `n` times. 689 690 For example, `generator.unique(2)` would count each outcome at most 691 twice. 692 693 Example: 694 ```python 695 Pool([1, 2, 2, 3]).unique() -> [1, 2, 3] 696 ``` 697 """ 698 return icepool.operator.MultisetUnique(self, constant=n)
Counts each outcome at most `n` times.

For example, `generator.unique(2)` would count each outcome at most twice.

Example: `Pool([1, 2, 2, 3]).unique() -> [1, 2, 3]`
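Capping counts can be sketched as (illustrative helper, not icepool API):

```python
from collections import Counter

def unique_sketch(multiset, n=1):
    """Count each outcome at most n times."""
    result = []
    for outcome, count in sorted(Counter(multiset).items()):
        result.extend([outcome] * min(count, n))
    return result

unique_sketch([1, 2, 2, 3])     # [1, 2, 3]
unique_sketch([1, 2, 2, 2], 2)  # [1, 2, 2]
```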
713 def keep( 714 self, index: slice | Sequence[int | EllipsisType] | int 715 ) -> 'MultisetExpression[T] | icepool.Die[T] | icepool.MultisetEvaluator[T, T]': 716 """Selects elements after drawing and sorting. 717 718 This is less capable than the `KeepGenerator` version. 719 In particular, it does not know how many elements it is selecting from, 720 so it must be anchored at the starting end. The advantage is that it 721 can be applied to any expression. 722 723 The valid types of argument are: 724 725 * A `slice`. If both start and stop are provided, they must both be 726 non-negative or both be negative. step is not supported. 727 * A sequence of `int` with `...` (`Ellipsis`) at exactly one end. 728 Each sorted element will be counted that many times, with the 729 `Ellipsis` treated as enough zeros (possibly "negative") to 730 fill the rest of the elements. 731 * An `int`, which evaluates by taking the element at the specified 732 index. In this case the result is a `Die` (if fully bound) or a 733 `MultisetEvaluator` (if there are free variables). 734 735 Negative incoming counts are treated as zero counts. 736 737 Use the `[]` operator for the same effect as this method. 738 """ 739 if isinstance(index, int): 740 return icepool.evaluator.KeepEvaluator(index).evaluate(self) 741 else: 742 return icepool.operator.MultisetKeep(self, index=index)
Selects elements after drawing and sorting.

This is less capable than the `KeepGenerator` version. In particular, it does not know how many elements it is selecting from, so it must be anchored at the starting end. The advantage is that it can be applied to any expression.

The valid types of argument are:
- A `slice`. If both start and stop are provided, they must both be non-negative or both be negative. step is not supported.
- A sequence of `int` with `...` (`Ellipsis`) at exactly one end. Each sorted element will be counted that many times, with the `Ellipsis` treated as enough zeros (possibly "negative") to fill the rest of the elements.
- An `int`, which evaluates by taking the element at the specified index. In this case the result is a `Die` (if fully bound) or a `MultisetEvaluator` (if there are free variables).

Negative incoming counts are treated as zero counts.

Use the `[]` operator for the same effect as this method.
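A plain-Python sketch of the three argument forms on a fixed sorted list (an approximation of the semantics; the helper name is made up and probability is ignored):

```python
def keep_sketch(multiset, index):
    """Select sorted elements by slice, weight sequence with ..., or int index."""
    elements = sorted(multiset)
    if isinstance(index, slice):
        return elements[index]
    if isinstance(index, int):
        return elements[index]
    # Sequence of weights with Ellipsis at one end: the Ellipsis expands
    # into enough zeros to cover the remaining elements.
    weights = list(index)
    if weights[-1] is Ellipsis:
        weights = weights[:-1] + [0] * (len(elements) - len(weights) + 1)
    elif weights[0] is Ellipsis:
        weights = [0] * (len(elements) - len(weights) + 1) + weights[1:]
    return [e for e, w in zip(elements, weights) for _ in range(w)]

keep_sketch([3, 1, 4, 1, 5], slice(None, 2))  # two lowest: [1, 1]
keep_sketch([3, 1, 4, 1, 5], [..., 1, 0])     # second highest: [4]
keep_sketch([3, 1, 4, 1, 5], -1)              # highest element: 5
```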
761 def lowest(self, 762 keep: int | None = None, 763 drop: int | None = None) -> 'MultisetExpression[T]': 764 """Keep some of the lowest elements from this multiset and drop the rest. 765 766 In contrast to the die and free function versions, this does not 767 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 768 Alternatively, you can perform some other evaluation. 769 770 This requires the outcomes to be evaluated in ascending order. 771 772 Args: 773 keep, drop: These arguments work together: 774 * If neither are provided, the single lowest element 775 will be kept. 776 * If only `keep` is provided, the `keep` lowest elements 777 will be kept. 778 * If only `drop` is provided, the `drop` lowest elements 779 will be dropped and the rest will be kept. 780 * If both are provided, `drop` lowest elements will be dropped, 781 then the next `keep` lowest elements will be kept. 782 """ 783 index = lowest_slice(keep, drop) 784 return self.keep(index)
Keep some of the lowest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use `.sum()` afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in ascending order.

Arguments:
- keep, drop: These arguments work together:
  - If neither are provided, the single lowest element will be kept.
  - If only `keep` is provided, the `keep` lowest elements will be kept.
  - If only `drop` is provided, the `drop` lowest elements will be dropped and the rest will be kept.
  - If both are provided, `drop` lowest elements will be dropped, then the next `keep` lowest elements will be kept.
786 def highest(self, 787 keep: int | None = None, 788 drop: int | None = None) -> 'MultisetExpression[T]': 789 """Keep some of the highest elements from this multiset and drop the rest. 790 791 In contrast to the die and free function versions, this does not 792 automatically sum the dice. Use `.sum()` afterwards if you want to sum. 793 Alternatively, you can perform some other evaluation. 794 795 This requires the outcomes to be evaluated in descending order. 796 797 Args: 798 keep, drop: These arguments work together: 799 * If neither are provided, the single highest element 800 will be kept. 801 * If only `keep` is provided, the `keep` highest elements 802 will be kept. 803 * If only `drop` is provided, the `drop` highest elements 804 will be dropped and the rest will be kept. 805 * If both are provided, `drop` highest elements will be dropped, 806 then the next `keep` highest elements will be kept. 807 """ 808 index = highest_slice(keep, drop) 809 return self.keep(index)
Keep some of the highest elements from this multiset and drop the rest.

In contrast to the die and free function versions, this does not automatically sum the dice. Use `.sum()` afterwards if you want to sum. Alternatively, you can perform some other evaluation.

This requires the outcomes to be evaluated in descending order.

Arguments:
- keep, drop: These arguments work together:
  - If neither are provided, the single highest element will be kept.
  - If only `keep` is provided, the `keep` highest elements will be kept.
  - If only `drop` is provided, the `drop` highest elements will be dropped and the rest will be kept.
  - If both are provided, `drop` highest elements will be dropped, then the next `keep` highest elements will be kept.
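The keep/drop convention for `lowest` can be sketched on a fixed roll (illustrative only; `highest` is the mirror image on a reverse-sorted list):

```python
def lowest_sketch(multiset, keep=None, drop=None):
    """Keep some of the lowest elements, per the keep/drop convention."""
    if keep is None and drop is None:
        keep, drop = 1, 0          # neither provided: single lowest
    drop = drop or 0
    elements = sorted(multiset)
    if keep is None:
        return elements[drop:]     # only drop provided: keep the rest
    return elements[drop:drop + keep]

lowest_sketch([3, 1, 4, 1, 5])                  # [1]
lowest_sketch([3, 1, 4, 1, 5], keep=2)          # [1, 1]
lowest_sketch([3, 1, 4, 1, 5], drop=2)          # [3, 4, 5]
lowest_sketch([3, 1, 4, 1, 5], keep=2, drop=1)  # [1, 3]
```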
813 def sort_match(self, 814 comparison: Literal['==', '!=', '<=', '<', '>=', '>'], 815 other: 'MultisetExpression[T]', 816 /, 817 order: Order = Order.Descending) -> 'MultisetExpression[T]': 818 """EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner. 819 820 Extra elements: If `self` has more elements than `other`, whether the 821 extra elements are kept depends on the `order` and `comparison`: 822 * Descending: kept for `'>='`, `'>'` 823 * Ascending: kept for `'<='`, `'<'` 824 825 Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of 826 *RISK*. Which pairs did the attacker win? 827 ```python 828 d6.pool(3).highest(2).sort_match('>', d6.pool(2)) 829 ``` 830 831 Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. 832 In this case the 4 would be blocked since the attacker lost that pair, 833 leaving the attacker's 6 and 3. If you don't want to keep the extra 834 element, you can use `highest`. 835 ```python 836 Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3] 837 Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6] 838 ``` 839 840 Contrast `maximum_match()`, which first creates the maximum number of 841 pairs that fit the comparison, not necessarily in sorted order. 842 In the above example, `maximum_match()` would allow the defender to 843 assign their 5s to block both the 4 and the 3. 844 845 Negative incoming counts are treated as zero counts. 846 847 Args: 848 comparison: The comparison to filter by. If you want to drop rather 849 than keep, use the complementary comparison: 850 * `'=='` vs. `'!='` 851 * `'<='` vs. `'>'` 852 * `'>='` vs. `'<'` 853 other: The other multiset to match elements with. 854 order: The order in which to sort before forming matches. 855 Default is descending. 
856 """ 857 other = implicit_convert_to_expression(other) 858 859 match comparison: 860 case '==': 861 lesser, tie, greater = 0, 1, 0 862 case '!=': 863 lesser, tie, greater = 1, 0, 1 864 case '<=': 865 lesser, tie, greater = 1, 1, 0 866 case '<': 867 lesser, tie, greater = 1, 0, 0 868 case '>=': 869 lesser, tie, greater = 0, 1, 1 870 case '>': 871 lesser, tie, greater = 0, 0, 1 872 case _: 873 raise ValueError(f'Invalid comparison {comparison}') 874 875 if order > 0: 876 left_first = lesser 877 right_first = greater 878 else: 879 left_first = greater 880 right_first = lesser 881 882 return icepool.operator.MultisetSortMatch(self, 883 other, 884 order=order, 885 tie=tie, 886 left_first=left_first, 887 right_first=right_first)
EXPERIMENTAL: Matches elements of `self` with elements of `other` in sorted order, then keeps elements from `self` that fit `comparison` with their partner.

Extra elements: If `self` has more elements than `other`, whether the extra elements are kept depends on the `order` and `comparison`:
- Descending: kept for `'>='`, `'>'`
- Ascending: kept for `'<='`, `'<'`

Example: An attacker rolls 3d6 versus a defender's 2d6 in the game of *RISK*. Which pairs did the attacker win?

`d6.pool(3).highest(2).sort_match('>', d6.pool(2))`

Suppose the attacker rolled 6, 4, 3 and the defender 5, 5. In this case the 4 would be blocked since the attacker lost that pair, leaving the attacker's 6 and 3. If you don't want to keep the extra element, you can use `highest`.

`Pool([6, 4, 3]).sort_match('>', [5, 5]) -> [6, 3]`
`Pool([6, 4, 3]).highest(2).sort_match('>', [5, 5]) -> [6]`

Contrast `maximum_match()`, which first creates the maximum number of pairs that fit the comparison, not necessarily in sorted order. In the above example, `maximum_match()` would allow the defender to assign their 5s to block both the 4 and the 3.

Negative incoming counts are treated as zero counts.

Arguments:
- comparison: The comparison to filter by. If you want to drop rather than keep, use the complementary comparison:
  - `'=='` vs. `'!='`
  - `'<='` vs. `'>'`
  - `'>='` vs. `'<'`
- other: The other multiset to match elements with.
- order: The order in which to sort before forming matches. Default is descending.
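The pairing logic on a fixed roll can be sketched as follows, assuming the default descending order and the `'>'` comparison (where unpaired `self` elements are kept); this is an illustration, not icepool's implementation:

```python
def sort_match_sketch(left, right):
    """Pair sorted-descending elements; keep left elements that beat their partner."""
    left = sorted(left, reverse=True)
    right = sorted(right, reverse=True)
    kept = [a for a, b in zip(left, right) if a > b]
    kept += left[len(right):]  # extra elements are kept for '>' descending
    return sorted(kept, reverse=True)

sort_match_sketch([6, 4, 3], [5, 5])  # [6, 3]
```

Here the pairs are (6, 5) and (4, 5): the 6 wins, the 4 is blocked, and the unpaired 3 is kept.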
889 def maximum_match_highest( 890 self, comparison: Literal['<=', 891 '<'], other: 'MultisetExpression[T]', /, 892 *, keep: Literal['matched', 893 'unmatched']) -> 'MultisetExpression[T]': 894 """EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`. 895 896 This matches elements of `self` with elements of `other`, such that in 897 each pair the element from `self` fits the `comparision` with the 898 element from `other`. As many such pairs of elements will be matched as 899 possible, preferring the highest matchable elements of `self`. 900 Finally, either the matched or unmatched elements from `self` are kept. 901 902 This requires that outcomes be evaluated in descending order. 903 904 Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 905 3d6. Defender dice can be used to block attacker dice of equal or lesser 906 value, and the defender prefers to block the highest attacker dice 907 possible. Which attacker dice were not blocked? 908 ```python 909 d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum() 910 ``` 911 912 Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. 913 Then the result would be [6, 1]. 914 ```python 915 d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched') 916 -> [6, 1] 917 ``` 918 919 Contrast `sort_match()`, which first creates pairs in 920 sorted order and then filters them by `comparison`. 921 In the above example, `sort_matched` would force the defender to match 922 against the 5 and the 4, which would only allow them to block the 4. 923 924 Negative incoming counts are treated as zero counts. 925 926 Args: 927 comparison: Either `'<='` or `'<'`. 928 other: The other multiset to match elements with. 929 keep: Whether 'matched' or 'unmatched' elements are to be kept. 
930 """ 931 if keep == 'matched': 932 keep_boolean = True 933 elif keep == 'unmatched': 934 keep_boolean = False 935 else: 936 raise ValueError(f"keep must be either 'matched' or 'unmatched'") 937 938 other = implicit_convert_to_expression(other) 939 match comparison: 940 case '<=': 941 match_equal = True 942 case '<': 943 match_equal = False 944 case _: 945 raise ValueError(f'Invalid comparison {comparison}') 946 return icepool.operator.MultisetMaximumMatch(self, 947 other, 948 order=Order.Descending, 949 match_equal=match_equal, 950 keep=keep_boolean)
EXPERIMENTAL: Match the highest elements from `self` with even higher (or equal) elements from `other`.

This matches elements of `self` with elements of `other`, such that in each pair the element from `self` fits the `comparison` with the element from `other`. As many such pairs as possible will be matched, preferring the highest matchable elements of `self`. Finally, either the matched or unmatched elements from `self` are kept.

This requires that outcomes be evaluated in descending order.

Example: An attacker rolls a pool of 4d6 and a defender rolls a pool of 3d6. Defender dice can be used to block attacker dice of equal or lesser value, and the defender prefers to block the highest attacker dice possible. Which attacker dice were not blocked?

`d6.pool(4).maximum_match('<=', d6.pool(3), keep='unmatched').sum()`

Suppose the attacker rolls 6, 4, 3, 1 and the defender rolls 5, 5. Then the result would be [6, 1].

`d6.pool([6, 4, 3, 1]).maximum_match('<=', [5, 5], keep='unmatched') -> [6, 1]`

Contrast `sort_match()`, which first creates pairs in sorted order and then filters them by `comparison`. In the above example, `sort_match()` would force the defender to match against the 5 and the 4, which would only allow them to block the 4.

Negative incoming counts are treated as zero counts.

Arguments:
- comparison: Either `'<='` or `'<'`.
- other: The other multiset to match elements with.
- keep: Whether 'matched' or 'unmatched' elements are to be kept.
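The greedy matching on a fixed roll can be sketched like this, assuming the `'<='` comparison (an illustration of the semantics only):

```python
def maximum_match_sketch(left, right, keep='unmatched'):
    """Greedily match left elements (highest first) with the largest
    still-available right element that is >= them."""
    available = sorted(right, reverse=True)
    matched, unmatched = [], []
    for element in sorted(left, reverse=True):
        # Find the largest available right element that can block this one.
        for i, candidate in enumerate(available):
            if element <= candidate:
                matched.append(element)
                del available[i]
                break
        else:
            unmatched.append(element)
    return matched if keep == 'matched' else unmatched

maximum_match_sketch([6, 4, 3, 1], [5, 5])  # [6, 1]
```

The 6 cannot be blocked by any 5, the defender's two 5s block the 4 and the 3, and the 1 is left with no blocker.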
952 def maximum_match_lowest( 953 self, comparison: Literal['>=', 954 '>'], other: 'MultisetExpression[T]', /, 955 *, keep: Literal['matched', 956 'unmatched']) -> 'MultisetExpression[T]': 957 """EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`. 958 959 This matches elements of `self` with elements of `other`, such that in 960 each pair the element from `self` fits the `comparision` with the 961 element from `other`. As many such pairs of elements will be matched as 962 possible, preferring the lowest matchable elements of `self`. 963 Finally, either the matched or unmatched elements from `self` are kept. 964 965 This requires that outcomes be evaluated in ascending order. 966 967 Contrast `sort_match()`, which first creates pairs in 968 sorted order and then filters them by `comparison`. 969 970 Args: 971 comparison: Either `'>='` or `'>'`. 972 other: The other multiset to match elements with. 973 keep: Whether 'matched' or 'unmatched' elements are to be kept. 974 """ 975 if keep == 'matched': 976 keep_boolean = True 977 elif keep == 'unmatched': 978 keep_boolean = False 979 else: 980 raise ValueError(f"keep must be either 'matched' or 'unmatched'") 981 982 other = implicit_convert_to_expression(other) 983 match comparison: 984 case '>=': 985 match_equal = True 986 case '>': 987 match_equal = False 988 case _: 989 raise ValueError(f'Invalid comparison {comparison}') 990 return icepool.operator.MultisetMaximumMatch(self, 991 other, 992 order=Order.Ascending, 993 match_equal=match_equal, 994 keep=keep_boolean)
EXPERIMENTAL: Match the lowest elements from `self` with even lower (or equal) elements from `other`.

This matches elements of `self` with elements of `other`, such that in each pair the element from `self` fits the `comparison` with the element from `other`. As many such pairs as possible will be matched, preferring the lowest matchable elements of `self`. Finally, either the matched or unmatched elements from `self` are kept.

This requires that outcomes be evaluated in ascending order.

Contrast `sort_match()`, which first creates pairs in sorted order and then filters them by `comparison`.

Arguments:
- comparison: Either `'>='` or `'>'`.
- other: The other multiset to match elements with.
- keep: Whether 'matched' or 'unmatched' elements are to be kept.
998 def expand( 999 self, 1000 order: Order = Order.Ascending 1001 ) -> 'icepool.Die[tuple[T, ...]] | icepool.MultisetEvaluator[T, tuple[T, ...]]': 1002 """Evaluation: All elements of the multiset in ascending order. 1003 1004 This is expensive and not recommended unless there are few possibilities. 1005 1006 Args: 1007 order: Whether the elements are in ascending (default) or descending 1008 order. 1009 """ 1010 return icepool.evaluator.ExpandEvaluator(order=order).evaluate(self)
Evaluation: All elements of the multiset in ascending order.
This is expensive and not recommended unless there are few possibilities.
Arguments:
- order: Whether the elements are in ascending (default) or descending order.
1012 def sum( 1013 self, 1014 map: Callable[[T], U] | Mapping[T, U] | None = None 1015 ) -> 'icepool.Die[U] | icepool.MultisetEvaluator[T, U]': 1016 """Evaluation: The sum of all elements.""" 1017 if map is None: 1018 return icepool.evaluator.sum_evaluator.evaluate(self) 1019 else: 1020 return icepool.evaluator.SumEvaluator(map).evaluate(self)
Evaluation: The sum of all elements.
1022 def count(self) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]': 1023 """Evaluation: The total number of elements in the multiset. 1024 1025 This is usually not very interesting unless some other operation is 1026 performed first. Examples: 1027 1028 `generator.unique().count()` will count the number of unique outcomes. 1029 1030 `(generator & [4, 5, 6]).count()` will count up to one each of 1031 4, 5, and 6. 1032 """ 1033 return icepool.evaluator.count_evaluator.evaluate(self)
Evaluation: The total number of elements in the multiset.
This is usually not very interesting unless some other operation is performed first. Examples:
`generator.unique().count()` will count the number of unique outcomes.

`(generator & [4, 5, 6]).count()` will count up to one each of 4, 5, and 6.
1035 def any(self) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]': 1036 """Evaluation: Whether the multiset has at least one positive count. """ 1037 return icepool.evaluator.any_evaluator.evaluate(self)
Evaluation: Whether the multiset has at least one positive count.
1039 def highest_outcome_and_count( 1040 self 1041 ) -> 'icepool.Die[tuple[T, int]] | icepool.MultisetEvaluator[T, tuple[T, int]]': 1042 """Evaluation: The highest outcome with positive count, along with that count. 1043 1044 If no outcomes have positive count, the min outcome will be returned with 0 count. 1045 """ 1046 return icepool.evaluator.highest_outcome_and_count_evaluator.evaluate( 1047 self)
Evaluation: The highest outcome with positive count, along with that count.
If no outcomes have positive count, the min outcome will be returned with 0 count.
1049 def all_counts( 1050 self, 1051 filter: int | Literal['all'] = 1 1052 ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[T, tuple[int, ...]]': 1053 """Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets. 1054 1055 The sizes are in **descending** order. 1056 1057 Args: 1058 filter: Any counts below this value will not be in the output. 1059 For example, `filter=2` will only produce pairs and better. 1060 If `None`, no filtering will be done. 1061 1062 Why not just place `keep_counts_ge()` before this? 1063 `keep_counts_ge()` operates by setting counts to zero, so you 1064 would still need an argument to specify whether you want to 1065 output zero counts. So we might as well use the argument to do 1066 both. 1067 """ 1068 return icepool.evaluator.AllCountsEvaluator( 1069 filter=filter).evaluate(self)
Evaluation: Sorted tuple of all counts, i.e. the sizes of all matching sets.

The sizes are in **descending** order.

Arguments:
- filter: Any counts below this value will not be in the output. For example, `filter=2` will only produce pairs and better. If `'all'`, no filtering will be done.

  Why not just place `keep_counts_ge()` before this? `keep_counts_ge()` operates by setting counts to zero, so you would still need an argument to specify whether you want to output zero counts. So we might as well use the argument to do both.
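On a fixed roll the evaluation reduces to sorting the multiplicities (illustrative helper, not icepool API):

```python
from collections import Counter

def all_counts_sketch(multiset, filter=1):
    """Sizes of all matching sets, in descending order."""
    counts = sorted(Counter(multiset).values(), reverse=True)
    if filter != 'all':
        counts = [c for c in counts if c >= filter]
    return tuple(counts)

all_counts_sketch([1, 2, 2, 3, 3, 3])            # (3, 2, 1)
all_counts_sketch([1, 2, 2, 3, 3, 3], filter=2)  # (3, 2)
```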
1071 def largest_count( 1072 self) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]': 1073 """Evaluation: The size of the largest matching set among the elements.""" 1074 return icepool.evaluator.largest_count_evaluator.evaluate(self)
Evaluation: The size of the largest matching set among the elements.
1076 def largest_count_and_outcome( 1077 self 1078 ) -> 'icepool.Die[tuple[int, T]] | icepool.MultisetEvaluator[T, tuple[int, T]]': 1079 """Evaluation: The largest matching set among the elements and the corresponding outcome.""" 1080 return icepool.evaluator.largest_count_and_outcome_evaluator.evaluate( 1081 self)
Evaluation: The largest matching set among the elements and the corresponding outcome.
1089 def count_subset( 1090 self, 1091 divisor: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1092 /, 1093 *, 1094 empty_divisor: int | None = None 1095 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[T, int]': 1096 """Evaluation: The number of times the divisor is contained in this multiset. 1097 1098 Args: 1099 divisor: The multiset to divide by. 1100 empty_divisor: If the divisor is empty, the outcome will be this. 1101 If not set, `ZeroDivisionError` will be raised for an empty 1102 right side. 1103 1104 Raises: 1105 ZeroDivisionError: If the divisor may be empty and 1106 empty_divisor_outcome is not set. 1107 """ 1108 divisor = implicit_convert_to_expression(divisor) 1109 return icepool.evaluator.CountSubsetEvaluator( 1110 empty_divisor=empty_divisor).evaluate(self, divisor)
Evaluation: The number of times the divisor is contained in this multiset.
Arguments:
- divisor: The multiset to divide by.
- empty_divisor: If the divisor is empty, the outcome will be this. If not set, `ZeroDivisionError` will be raised for an empty right side.

Raises:
- ZeroDivisionError: If the divisor may be empty and `empty_divisor` is not set.
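On a fixed multiset this is multiset floor division: the number of whole copies of the divisor that fit. A plain-Python sketch (illustrative helper name):

```python
from collections import Counter

def count_subset_sketch(multiset, divisor, empty_divisor=None):
    """How many whole copies of divisor fit inside multiset."""
    divisor = Counter(divisor)
    if not divisor:
        if empty_divisor is None:
            raise ZeroDivisionError('empty divisor')
        return empty_divisor
    multiset = Counter(multiset)
    return min(multiset[outcome] // count
               for outcome, count in divisor.items())

count_subset_sketch([1, 1, 2, 2, 2], [1, 2])  # 2
```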
1112 def largest_straight( 1113 self: 'MultisetExpression[int]' 1114 ) -> 'icepool.Die[int] | icepool.MultisetEvaluator[int, int]': 1115 """Evaluation: The size of the largest straight among the elements. 1116 1117 Outcomes must be `int`s. 1118 """ 1119 return icepool.evaluator.largest_straight_evaluator.evaluate(self)
Evaluation: The size of the largest straight among the elements.
Outcomes must be `int`s.
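On a fixed roll, this is the longest run of consecutive integers among the distinct outcomes. A plain-Python sketch (illustrative helper name):

```python
def largest_straight_sketch(multiset):
    """Length of the longest run of consecutive ints among the outcomes."""
    outcomes = sorted(set(multiset))
    best = run = 1 if outcomes else 0
    for previous, current in zip(outcomes, outcomes[1:]):
        run = run + 1 if current == previous + 1 else 1
        best = max(best, run)
    return best

largest_straight_sketch([1, 2, 2, 3, 5, 6])  # 3
```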
1121 def largest_straight_and_outcome( 1122 self: 'MultisetExpression[int]', 1123 priority: Literal['low', 'high'] = 'high', 1124 / 1125 ) -> 'icepool.Die[tuple[int, int]] | icepool.MultisetEvaluator[int, tuple[int, int]]': 1126 """Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight. 1127 1128 Straight size is prioritized first, then the outcome. 1129 1130 Outcomes must be `int`s. 1131 1132 Args: 1133 priority: Controls which outcome within the straight is returned, 1134 and which straight is picked if there is a tie for largest 1135 straight. 1136 """ 1137 if priority == 'high': 1138 return icepool.evaluator.largest_straight_and_outcome_evaluator_high.evaluate( 1139 self) 1140 elif priority == 'low': 1141 return icepool.evaluator.largest_straight_and_outcome_evaluator_low.evaluate( 1142 self) 1143 else: 1144 raise ValueError("priority must be 'low' or 'high'.")
Evaluation: The size of the largest straight among the elements and the highest (optionally, lowest) outcome in that straight.
Straight size is prioritized first, then the outcome.
Outcomes must be `int`s.
Arguments:
- priority: Controls which outcome within the straight is returned, and which straight is picked if there is a tie for largest straight.
1146 def all_straights( 1147 self: 'MultisetExpression[int]' 1148 ) -> 'icepool.Die[tuple[int, ...]] | icepool.MultisetEvaluator[int, tuple[int, ...]]': 1149 """Evaluation: The sizes of all straights. 1150 1151 The sizes are in **descending** order. 1152 1153 Each element can only contribute to one straight, though duplicate 1154 elements can produces straights that overlap in outcomes. In this case, 1155 elements are preferentially assigned to the longer straight. 1156 """ 1157 return icepool.evaluator.all_straights_evaluator.evaluate(self)
Evaluation: The sizes of all straights.
The sizes are in descending order.
Each element can only contribute to one straight, though duplicate elements can produce straights that overlap in outcomes. In this case, elements are preferentially assigned to the longer straight.
1159 def all_straights_reduce_counts( 1160 self: 'MultisetExpression[int]', 1161 reducer: Callable[[int, int], int] = operator.mul 1162 ) -> 'icepool.Die[tuple[tuple[int, int], ...]] | icepool.MultisetEvaluator[int, tuple[tuple[int, int], ...]]': 1163 """Experimental: All straights with a reduce operation on the counts. 1164 1165 This can be used to evaluate e.g. cribbage-style straight counting. 1166 1167 The result is a tuple of `(run_length, run_score)`s. 1168 """ 1169 return icepool.evaluator.AllStraightsReduceCountsEvaluator( 1170 reducer=reducer).evaluate(self)
Experimental: All straights with a reduce operation on the counts.
This can be used to evaluate e.g. cribbage-style straight counting.
The result is a tuple of `(run_length, run_score)`s.
1172 def argsort(self: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1173 *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1174 order: Order = Order.Descending, 1175 limit: int | None = None): 1176 """Experimental: Returns the indexes of the originating multisets for each rank in their additive union. 1177 1178 Example: 1179 ```python 1180 MultisetExpression.argsort([10, 9, 5], [9, 9]) 1181 ``` 1182 produces 1183 ```python 1184 ((0,), (0, 1, 1), (0,)) 1185 ``` 1186 1187 Args: 1188 self, *args: The multiset expressions to be evaluated. 1189 order: Which order the ranks are to be emitted. Default is descending. 1190 limit: How many ranks to emit. Default will emit all ranks, which 1191 makes the length of each outcome equal to 1192 `additive_union(+self, +arg1, +arg2, ...).unique().count()` 1193 """ 1194 self = implicit_convert_to_expression(self) 1195 converted_args = [implicit_convert_to_expression(arg) for arg in args] 1196 return icepool.evaluator.ArgsortEvaluator(order=order, 1197 limit=limit).evaluate( 1198 self, *converted_args)
Experimental: Returns the indexes of the originating multisets for each rank in their additive union.
Example: `MultisetExpression.argsort([10, 9, 5], [9, 9])` produces `((0,), (0, 1, 1), (0,))`.
Arguments:
- self, *args: The multiset expressions to be evaluated.
- order: Which order the ranks are to be emitted. Default is descending.
- limit: How many ranks to emit. Default will emit all ranks, which makes the length of each outcome equal to `additive_union(+self, +arg1, +arg2, ...).unique().count()`
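On fixed multisets the rank-to-source mapping can be sketched as (illustrative helper, not icepool API):

```python
from collections import Counter

def argsort_sketch(*multisets, descending=True):
    """For each rank, the indexes of the source multisets, repeated per count."""
    counters = [Counter(m) for m in multisets]
    ranks = sorted(set().union(*counters), reverse=descending)
    return tuple(
        tuple(i for i, counter in enumerate(counters)
              for _ in range(counter[rank]))
        for rank in ranks)

argsort_sketch([10, 9, 5], [9, 9])  # ((0,), (0, 1, 1), (0,))
```

The highest rank 10 appears once in multiset 0; the next rank 9 appears once in multiset 0 and twice in multiset 1; the lowest rank 5 appears once in multiset 0.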
1241 def issubset( 1242 self, 1243 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1244 /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]': 1245 """Evaluation: Whether this multiset is a subset of the other multiset. 1246 1247 Specifically, if this multiset has a lesser or equal count for each 1248 outcome than the other multiset, this evaluates to `True`; 1249 if there is some outcome for which this multiset has a greater count 1250 than the other multiset, this evaluates to `False`. 1251 1252 `issubset` is the same as `self <= other`. 1253 1254 `self < other` evaluates a proper subset relation, which is the same 1255 except the result is `False` if the two multisets are exactly equal. 1256 """ 1257 return self._compare(other, icepool.evaluator.IsSubsetEvaluator)
Evaluation: Whether this multiset is a subset of the other multiset.
Specifically, if this multiset has a lesser or equal count for each outcome than the other multiset, this evaluates to `True`; if there is some outcome for which this multiset has a greater count than the other multiset, this evaluates to `False`.

`issubset` is the same as `self <= other`.

`self < other` evaluates a proper subset relation, which is the same except the result is `False` if the two multisets are exactly equal.
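On fixed multisets the relation reduces to a per-outcome count comparison (illustrative helper name):

```python
from collections import Counter

def issubset_sketch(left, right):
    """True iff every outcome's count in left is <= its count in right."""
    left, right = Counter(left), Counter(right)
    return all(left[outcome] <= right[outcome] for outcome in left)

issubset_sketch([2, 2, 3], [1, 2, 2, 2, 3])  # True
issubset_sketch([2, 2, 2, 2], [1, 2, 2, 3])  # False: not enough 2s on the right
```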
1276 def issuperset( 1277 self, 1278 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1279 /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]': 1280 """Evaluation: Whether this multiset is a superset of the other multiset. 1281 1282 Specifically, if this multiset has a greater or equal count for each 1283 outcome than the other multiset, this evaluates to `True`; 1284 if there is some outcome for which this multiset has a lesser count 1285 than the other multiset, this evaluates to `False`. 1286 1287 A typical use of this evaluation is testing for the presence of a 1288 combo of cards in a hand, e.g. 1289 1290 ```python 1291 deck.deal(5) >= ['a', 'a', 'b'] 1292 ``` 1293 1294 represents the chance that a deal of 5 cards contains at least two 'a's 1295 and one 'b'. 1296 1297 `issuperset` is the same as `self >= other`. 1298 1299 `self > other` evaluates a proper superset relation, which is the same 1300 except the result is `False` if the two multisets are exactly equal. 1301 """ 1302 return self._compare(other, icepool.evaluator.IsSupersetEvaluator)
Evaluation: Whether this multiset is a superset of the other multiset.
Specifically, if this multiset has a greater or equal count for each outcome than the other multiset, this evaluates to `True`; if there is some outcome for which this multiset has a lesser count than the other multiset, this evaluates to `False`.

A typical use of this evaluation is testing for the presence of a combo of cards in a hand, e.g.

`deck.deal(5) >= ['a', 'a', 'b']`

represents the chance that a deal of 5 cards contains at least two 'a's and one 'b'.

`issuperset` is the same as `self >= other`.

`self > other` evaluates a proper superset relation, which is the same except the result is `False` if the two multisets are exactly equal.
1338 def isdisjoint( 1339 self, 1340 other: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]', 1341 /) -> 'icepool.Die[bool] | icepool.MultisetEvaluator[T, bool]': 1342 """Evaluation: Whether this multiset is disjoint from the other multiset. 1343 1344 Specifically, this evaluates to `False` if there is any outcome for 1345 which both multisets have positive count, and `True` if there is not. 1346 1347 Negative incoming counts are treated as zero counts. 1348 """ 1349 return self._compare(other, icepool.evaluator.IsDisjointSetEvaluator)
Evaluation: Whether this multiset is disjoint from the other multiset.
Specifically, this evaluates to `False` if there is any outcome for which both multisets have positive count, and `True` if there is not.
Negative incoming counts are treated as zero counts.
25class MultisetEvaluator(ABC, Generic[T, U_co]): 26 """An abstract, immutable, callable class for evaulating one or more input `MultisetExpression`s. 27 28 There is one abstract method to implement: `next_state()`. 29 This should incrementally calculate the result given one outcome at a time 30 along with how many of that outcome were produced. 31 32 An example sequence of calls, as far as `next_state()` is concerned, is: 33 34 1. `state = next_state(state=None, outcome=1, count_of_1s)` 35 2. `state = next_state(state, 2, count_of_2s)` 36 3. `state = next_state(state, 3, count_of_3s)` 37 4. `state = next_state(state, 4, count_of_4s)` 38 5. `state = next_state(state, 5, count_of_5s)` 39 6. `state = next_state(state, 6, count_of_6s)` 40 7. `outcome = final_outcome(state)` 41 42 A few other methods can optionally be overridden to further customize behavior. 43 44 It is not expected that subclasses of `MultisetEvaluator` 45 be able to handle arbitrary types or numbers of inputs. 46 Indeed, most are expected to handle only a fixed number of inputs, 47 and often even only inputs with a particular outcome type. 48 49 Instances cache all intermediate state distributions. 50 You should therefore reuse instances when possible. 51 52 Instances should not be modified after construction 53 in any way that affects the return values of these methods. 54 Otherwise, values in the cache may be incorrect. 55 """ 56 57 def next_state(self, state: Hashable, outcome: T, /, *counts: 58 int) -> Hashable: 59 """State transition function. 60 61 This should produce a state given the previous state, an outcome, 62 and the count of that outcome produced by each input. 63 64 `evaluate()` will always call this using only positional arguments. 65 Furthermore, there is no expectation that a subclass be able to handle 66 an arbitrary number of counts. Thus, you are free to rename any of 67 the parameters in a subclass, or to replace `*counts` with a fixed set 68 of parameters. 
69 70 Make sure to handle the base case where `state is None`. 71 72 States must be hashable. Currently, they do not have to be orderable. 73 However, this may change in the future, and if they are not totally 74 orderable, you must override `final_outcome` to create totally orderable 75 final outcomes. 76 77 By default, this method may receive outcomes in any order: 78 79 * If you want to guarantee ascending or descending order, you can 80 implement `next_state_ascending()` or `next_state_descending()` 81 instead. 82 * Alternatively, implement `next_state()` and override `order()` to 83 return the necessary order. This is useful if the necessary order 84 depends on the instance. 85 * If you want to handle either order, but have a different 86 implementation for each, override both `next_state_ascending()` and 87 `next_state_descending()`. 88 89 The behavior of returning a `Die` from `next_state` is currently 90 undefined. 91 92 Args: 93 state: A hashable object indicating the state before rolling the 94 current outcome. If this is the first outcome being considered, 95 `state` will be `None`. 96 outcome: The current outcome. 97 `next_state` will see all rolled outcomes in monotonic order; 98 either ascending or descending depending on `order()`. 99 If there are multiple inputs, the set of outcomes is at 100 least the union of the outcomes of the individual inputs. 101 You can use `extra_outcomes()` to add extra outcomes. 102 *counts: One value (usually an `int`) for each input indicating how 103 many of the current outcome were produced. 104 105 Returns: 106 A hashable object indicating the next state. 107 The special value `icepool.Reroll` can be used to immediately remove 108 the state from consideration, effectively performing a full reroll. 109 """ 110 raise NotImplementedError() 111 112 def next_state_ascending(self, state: Hashable, outcome: T, /, *counts: 113 int) -> Hashable: 114 """As next_state() but handles outcomes in ascending order only. 
115 116 You can implement both `next_state_ascending()` and 117 `next_state_descending()` if you want to handle both outcome orders 118 with a separate implementation for each. 119 """ 120 raise NotImplementedError() 121 122 def next_state_descending(self, state: Hashable, outcome: T, /, *counts: 123 int) -> Hashable: 124 """As next_state() but handles outcomes in descending order only. 125 126 You can implement both `next_state_ascending()` and 127 `next_state_descending()` if you want to handle both outcome orders 128 with a separate implementation for each. 129 """ 130 raise NotImplementedError() 131 132 def final_outcome(self, final_state: Hashable, 133 /) -> 'U_co | icepool.Die[U_co] | icepool.RerollType': 134 """Optional method to generate a final output outcome from a final state. 135 136 By default, the final outcome is equal to the final state. 137 Note that `None` is not a valid outcome for a `Die`, 138 and if there are no outcomes, `final_outcome` will immediately 139 be called with `final_state=None`. 140 Subclasses that want to handle this case should explicitly define what 141 happens. 142 143 Args: 144 final_state: A state after all outcomes have been processed. 145 146 Returns: 147 A final outcome that will be used as part of constructing the result `Die`. 148 As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`. 149 """ 150 # If not overridden, the final_state should have type U_co. 151 return cast(U_co, final_state) 152 153 def order(self) -> Order: 154 """Optional method that specifies what outcome orderings this evaluator supports. 155 156 By default, this is determined by which of `next_state()`, 157 `next_state_ascending()`, and `next_state_descending()` are 158 overridden. 159 160 This is most often overridden by subclasses whose iteration order is 161 determined on a per-instance basis. 162 163 Returns: 164 * Order.Ascending (= 1) 165 if outcomes are to be seen in ascending order. 
166 In this case either `next_state()` or `next_state_ascending()` 167 are implemented. 168 * Order.Descending (= -1) 169 if outcomes are to be seen in descending order. 170 In this case either `next_state()` or `next_state_descending()` 171 are implemented. 172 * Order.Any (= 0) 173 if outcomes can be seen in any order. 174 In this case either `next_state()` or both 175 `next_state_ascending()` and `next_state_descending()` 176 are implemented. 177 """ 178 overrides_ascending = self._has_override('next_state_ascending') 179 overrides_descending = self._has_override('next_state_descending') 180 overrides_any = self._has_override('next_state') 181 if overrides_any or (overrides_ascending and overrides_descending): 182 return Order.Any 183 if overrides_ascending: 184 return Order.Ascending 185 if overrides_descending: 186 return Order.Descending 187 raise NotImplementedError( 188 'Subclasses of MultisetEvaluator must implement at least one of next_state, next_state_ascending, next_state_descending.' 189 ) 190 191 def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]: 192 """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`. 193 194 These will be seen by `next_state` even if they do not appear in the 195 input(s). The default implementation returns `()`, or no additional 196 outcomes. 197 198 If you want `next_state` to see consecutive `int` outcomes, you can set 199 `extra_outcomes = icepool.MultisetEvaluator.consecutive`. 200 See `consecutive()` below. 201 202 Args: 203 outcomes: The outcomes that could be produced by the inputs, in 204 ascending order. 205 """ 206 return () 207 208 def consecutive(self, outcomes: Sequence[int]) -> Collection[int]: 209 """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes. 210 211 Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this. 
212 213 Returns: 214 All `int`s from the min outcome to the max outcome among the inputs, 215 inclusive. 216 217 Raises: 218 TypeError: if any input has any non-`int` outcome. 219 """ 220 if not outcomes: 221 return () 222 223 if any(not isinstance(x, int) for x in outcomes): 224 raise TypeError( 225 "consecutive cannot be used with outcomes of type other than 'int'." 226 ) 227 228 return range(outcomes[0], outcomes[-1] + 1) 229 230 def bound_inputs(self) -> 'tuple[icepool.MultisetExpression, ...]': 231 """An optional sequence of extra inputs whose counts will be prepended to *counts. 232 233 (Prepending rather than appending is analogous to `functools.partial`.) 234 """ 235 return () 236 237 @cached_property 238 def _cache( 239 self 240 ) -> 'MutableMapping[tuple[Order, Alignment, tuple[MultisetExpression, ...], Hashable], Mapping[Any, int]]': 241 """Cached results. 242 243 The key is `(order, extra_outcomes, inputs, state)`. 244 The value is another mapping `final_state: quantity` representing the 245 state distribution produced by `order, extra_outcomes, inputs` when 246 starting at state `state`. 247 """ 248 return {} 249 250 @overload 251 def evaluate( 252 self, 253 *args: 'Mapping[T, int] | Sequence[T]') -> 'icepool.Die[U_co]': 254 ... 255 256 @overload 257 def evaluate( 258 self, 259 *args: 'MultisetExpression[T]') -> 'MultisetEvaluator[T, U_co]': 260 ... 261 262 @overload 263 def evaluate( 264 self, *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 265 ) -> 'icepool.Die[U_co] | MultisetEvaluator[T, U_co]': 266 ... 267 268 def evaluate( 269 self, *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]' 270 ) -> 'icepool.Die[U_co] | MultisetEvaluator[T, U_co]': 271 """Evaluates input expression(s). 272 273 You can call the `MultisetEvaluator` object directly for the same effect, 274 e.g. `sum_evaluator(input)` is an alias for `sum_evaluator.evaluate(input)`. 275 276 Most evaluators will expect a fixed number of input multisets. 
277 The union of the outcomes of the input(s) must be totally orderable. 278 279 Args: 280 *args: Each may be one of the following: 281 * A `MultisetExpression`. 282 * A mappable mapping outcomes to the number of those outcomes. 283 * A sequence of outcomes. 284 285 Returns: 286 A `Die` representing the distribution of the final outcome if no 287 arg contains a free variable. Otherwise, returns a new evaluator. 288 """ 289 from icepool.generator.alignment import Alignment 290 291 # Convert arguments to expressions. 292 inputs = tuple( 293 icepool.implicit_convert_to_expression(arg) for arg in args) 294 295 if any(input.has_free_variables() for input in inputs): 296 from icepool.evaluator.multiset_function import MultisetFunctionEvaluator 297 return MultisetFunctionEvaluator(*inputs, evaluator=self) 298 299 inputs = self.bound_inputs() + inputs 300 301 # This is kept to verify inputs to operators each have arity exactly 1. 302 total_arity = sum(input.output_arity() for input in inputs) 303 304 if not all(expression._is_resolvable() for expression in inputs): 305 return icepool.Die([]) 306 307 algorithm, order = self._select_algorithm(*inputs) 308 309 next_state_function = self._select_next_state_function(order) 310 311 outcomes = icepool.sorted_union(*(expression.outcomes() 312 for expression in inputs)) 313 extra_outcomes = Alignment(self.extra_outcomes(outcomes)) 314 315 dist: MutableMapping[Any, int] = defaultdict(int) 316 iterators = MultisetEvaluator._initialize_inputs(inputs) 317 for p in itertools.product(*iterators): 318 sub_inputs, sub_weights = zip(*p) 319 prod_weight = math.prod(sub_weights) 320 sub_result = algorithm(order, next_state_function, extra_outcomes, 321 sub_inputs) 322 for sub_state, sub_weight in sub_result.items(): 323 dist[sub_state] += sub_weight * prod_weight 324 325 final_outcomes = [] 326 final_weights = [] 327 for state, weight in dist.items(): 328 outcome = self.final_outcome(state) 329 if outcome is None: 330 raise TypeError( 331 
"None is not a valid final outcome.\n" 332 "This may have been a result of not supplying any input with an outcome." 333 ) 334 if outcome is not icepool.Reroll: 335 final_outcomes.append(outcome) 336 final_weights.append(weight) 337 338 return icepool.Die(final_outcomes, final_weights) 339 340 __call__ = evaluate 341 342 def _select_algorithm( 343 self, *inputs: 'icepool.MultisetExpression[T]' 344 ) -> tuple[ 345 'Callable[[Order, Callable[..., Hashable], Alignment[T], tuple[icepool.MultisetExpression[T], ...]], Mapping[Any, int]]', 346 Order]: 347 """Selects an algorithm and iteration order. 348 349 Returns: 350 * The algorithm to use (`_eval_internal*`). 351 * The order in which `next_state()` sees outcomes. 352 1 for ascending and -1 for descending. 353 """ 354 eval_order = self.order() 355 356 if not inputs: 357 # No inputs. 358 return self._eval_internal, eval_order 359 360 input_order, input_order_reason = merge_order_preferences( 361 *(input.order_preference() for input in inputs)) 362 363 if input_order is None: 364 input_order = Order.Any 365 input_order_reason = OrderReason.NoPreference 366 367 # No mandatory evaluation order, go with preferred algorithm. 368 # Note that this has order *opposite* the pop order. 369 if eval_order == Order.Any: 370 return self._eval_internal, Order(-input_order or Order.Ascending) 371 372 # Mandatory evaluation order. 
373 if input_order == Order.Any: 374 return self._eval_internal, eval_order 375 elif eval_order != input_order: 376 return self._eval_internal, eval_order 377 else: 378 return self._eval_internal_forward, eval_order 379 380 def _has_override(self, method_name: str) -> bool: 381 """Returns True iff the method name is overridden from MultisetEvaluator.""" 382 try: 383 method = getattr(self, method_name) 384 method_func = getattr(method, '__func__', method) 385 except AttributeError: 386 return False 387 return method_func is not getattr(MultisetEvaluator, method_name) 388 389 def _select_next_state_function(self, 390 order: Order) -> Callable[..., Hashable]: 391 if order == Order.Descending: 392 if self._has_override('next_state_descending'): 393 return self.next_state_descending 394 else: 395 if self._has_override('next_state_ascending'): 396 return self.next_state_ascending 397 if self._has_override('next_state'): 398 return self.next_state 399 raise NotImplementedError( 400 f'Could not find next_state* implementation for order {order}.') 401 402 def _eval_internal( 403 self, order: Order, next_state_function: Callable[..., Hashable], 404 extra_outcomes: 'Alignment[T]', 405 inputs: 'tuple[icepool.MultisetExpression[T], ...]' 406 ) -> Mapping[Any, int]: 407 """Internal algorithm for iterating in the more-preferred order. 408 409 All intermediate return values are cached in the instance. 410 411 Arguments: 412 order: The order in which to send outcomes to `next_state()`. 413 extra_outcomes: As `extra_outcomes()`. Elements will be popped off this 414 during recursion. 415 inputs: One or more `MultisetExpression`s to evaluate. Elements 416 will be popped off this during recursion. 417 418 Returns: 419 A dict `{ state : weight }` describing the probability distribution 420 over states. 
421 """ 422 cache_key = (order, extra_outcomes, inputs, None) 423 if cache_key in self._cache: 424 return self._cache[cache_key] 425 426 result: MutableMapping[Any, int] = defaultdict(int) 427 428 if all(not input.outcomes() 429 for input in inputs) and not extra_outcomes.outcomes(): 430 result = {None: 1} 431 else: 432 outcome, prev_extra_outcomes, iterators = MultisetEvaluator._pop_inputs( 433 Order(-order), extra_outcomes, inputs) 434 for p in itertools.product(*iterators): 435 prev_inputs, counts, weights = zip(*p) 436 counts = tuple(itertools.chain.from_iterable(counts)) 437 prod_weight = math.prod(weights) 438 prev = self._eval_internal(order, next_state_function, 439 prev_extra_outcomes, prev_inputs) 440 for prev_state, prev_weight in prev.items(): 441 state = next_state_function(prev_state, outcome, *counts) 442 if state is not icepool.Reroll: 443 result[state] += prev_weight * prod_weight 444 445 self._cache[cache_key] = result 446 return result 447 448 def _eval_internal_forward( 449 self, 450 order: Order, 451 next_state_function: Callable[..., Hashable], 452 extra_outcomes: 'Alignment[T]', 453 inputs: 'tuple[icepool.MultisetExpression[T], ...]', 454 state: Hashable = None) -> Mapping[Any, int]: 455 """Internal algorithm for iterating in the less-preferred order. 456 457 All intermediate return values are cached in the instance. 458 459 Arguments: 460 order: The order in which to send outcomes to `next_state()`. 461 extra_outcomes: As `extra_outcomes()`. Elements will be popped off this 462 during recursion. 463 inputs: One or more `MultisetExpression`s to evaluate. Elements 464 will be popped off this during recursion. 465 466 Returns: 467 A dict `{ state : weight }` describing the probability distribution 468 over states. 
469 """ 470 cache_key = (order, extra_outcomes, inputs, state) 471 if cache_key in self._cache: 472 return self._cache[cache_key] 473 474 result: MutableMapping[Any, int] = defaultdict(int) 475 476 if all(not input.outcomes() 477 for input in inputs) and not extra_outcomes.outcomes(): 478 result = {state: 1} 479 else: 480 outcome, next_extra_outcomes, iterators = MultisetEvaluator._pop_inputs( 481 order, extra_outcomes, inputs) 482 for p in itertools.product(*iterators): 483 next_inputs, counts, weights = zip(*p) 484 counts = tuple(itertools.chain.from_iterable(counts)) 485 prod_weight = math.prod(weights) 486 next_state = next_state_function(state, outcome, *counts) 487 if next_state is not icepool.Reroll: 488 final_dist = self._eval_internal_forward( 489 order, next_state_function, next_extra_outcomes, 490 next_inputs, next_state) 491 for final_state, weight in final_dist.items(): 492 result[final_state] += weight * prod_weight 493 494 self._cache[cache_key] = result 495 return result 496 497 @staticmethod 498 def _initialize_inputs( 499 inputs: 'tuple[icepool.MultisetExpression[T], ...]' 500 ) -> 'tuple[icepool.InitialMultisetGeneration, ...]': 501 return tuple(expression._generate_initial() for expression in inputs) 502 503 @staticmethod 504 def _pop_inputs( 505 order: Order, extra_outcomes: 'Alignment[T]', 506 inputs: 'tuple[icepool.MultisetExpression[T], ...]' 507 ) -> 'tuple[T, Alignment[T], tuple[icepool.PopMultisetGeneration, ...]]': 508 """Pops a single outcome from the inputs. 509 510 Args: 511 order: The order in which to pop. Descending order pops from the top 512 and ascending from the bottom. 513 extra_outcomes: Any extra outcomes to use. 514 inputs: The inputs to pop from. 515 516 Returns: 517 * The popped outcome. 518 * The remaining extra outcomes. 519 * A tuple of iterators over the resulting inputs, counts, and weights. 
520 """ 521 extra_outcomes_and_inputs = (extra_outcomes, ) + inputs 522 if order < 0: 523 outcome = max(input.max_outcome() 524 for input in extra_outcomes_and_inputs 525 if input.outcomes()) 526 527 next_extra_outcomes, _, _ = next( 528 extra_outcomes._generate_max(outcome)) 529 530 return outcome, next_extra_outcomes, tuple( 531 input._generate_max(outcome) for input in inputs) 532 else: 533 outcome = min(input.min_outcome() 534 for input in extra_outcomes_and_inputs 535 if input.outcomes()) 536 537 next_extra_outcomes, _, _ = next( 538 extra_outcomes._generate_min(outcome)) 539 540 return outcome, next_extra_outcomes, tuple( 541 input._generate_min(outcome) for input in inputs) 542 543 def sample( 544 self, *inputs: 545 'icepool.MultisetExpression[T] | Mapping[T, int] | Sequence[T]'): 546 """EXPERIMENTAL: Samples one result from the input(s) and evaluates the result.""" 547 # Convert non-`Pool` arguments to `Pool`. 548 converted_inputs = tuple( 549 input if isinstance(input, icepool.MultisetExpression 550 ) else icepool.Pool(input) for input in inputs) 551 552 result = self.evaluate(*itertools.chain.from_iterable( 553 input.sample() for input in converted_inputs)) 554 555 if not result.is_empty(): 556 return result.outcomes()[0] 557 else: 558 return result 559 560 def __bool__(self) -> bool: 561 raise TypeError('MultisetEvaluator does not have a truth value.') 562 563 def __str__(self) -> str: 564 return type(self).__name__
An abstract, immutable, callable class for evaluating one or more input `MultisetExpression`s.

There is one abstract method to implement: `next_state()`. This should incrementally calculate the result given one outcome at a time along with how many of that outcome were produced.

An example sequence of calls, as far as `next_state()` is concerned, is:

1. `state = next_state(state=None, outcome=1, count_of_1s)`
2. `state = next_state(state, 2, count_of_2s)`
3. `state = next_state(state, 3, count_of_3s)`
4. `state = next_state(state, 4, count_of_4s)`
5. `state = next_state(state, 5, count_of_5s)`
6. `state = next_state(state, 6, count_of_6s)`
7. `outcome = final_outcome(state)`

A few other methods can optionally be overridden to further customize behavior.

It is not expected that subclasses of `MultisetEvaluator` be able to handle arbitrary types or numbers of inputs. Indeed, most are expected to handle only a fixed number of inputs, and often only inputs with a particular outcome type.

Instances cache all intermediate state distributions. You should therefore reuse instances when possible.

Instances should not be modified after construction in any way that affects the return values of these methods. Otherwise, values in the cache may be incorrect.
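The call sequence above can be sketched in plain Python. This is a hypothetical stand-alone fold, not the library's actual evaluation loop (which also tracks weights across whole distributions): a `next_state` that sums the multiset, driven over one fixed roll.

```python
# Hypothetical stand-alone illustration of the next_state call sequence.
# This next_state sums the multiset; the loop plays the role of evaluate()
# for a single fixed roll.
def next_state(state, outcome, count):
    if state is None:  # base case: first outcome seen
        state = 0
    return state + outcome * count

counts = {1: 0, 2: 1, 3: 0, 4: 2, 5: 0, 6: 0}  # one roll of 3d6: 2, 4, 4
state = None
for outcome in sorted(counts):
    state = next_state(state, outcome, counts[outcome])
print(state)  # 2*1 + 4*2 == 10
```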
57 def next_state(self, state: Hashable, outcome: T, /, *counts: 58 int) -> Hashable: 59 """State transition function. 60 61 This should produce a state given the previous state, an outcome, 62 and the count of that outcome produced by each input. 63 64 `evaluate()` will always call this using only positional arguments. 65 Furthermore, there is no expectation that a subclass be able to handle 66 an arbitrary number of counts. Thus, you are free to rename any of 67 the parameters in a subclass, or to replace `*counts` with a fixed set 68 of parameters. 69 70 Make sure to handle the base case where `state is None`. 71 72 States must be hashable. Currently, they do not have to be orderable. 73 However, this may change in the future, and if they are not totally 74 orderable, you must override `final_outcome` to create totally orderable 75 final outcomes. 76 77 By default, this method may receive outcomes in any order: 78 79 * If you want to guarantee ascending or descending order, you can 80 implement `next_state_ascending()` or `next_state_descending()` 81 instead. 82 * Alternatively, implement `next_state()` and override `order()` to 83 return the necessary order. This is useful if the necessary order 84 depends on the instance. 85 * If you want to handle either order, but have a different 86 implementation for each, override both `next_state_ascending()` and 87 `next_state_descending()`. 88 89 The behavior of returning a `Die` from `next_state` is currently 90 undefined. 91 92 Args: 93 state: A hashable object indicating the state before rolling the 94 current outcome. If this is the first outcome being considered, 95 `state` will be `None`. 96 outcome: The current outcome. 97 `next_state` will see all rolled outcomes in monotonic order; 98 either ascending or descending depending on `order()`. 99 If there are multiple inputs, the set of outcomes is at 100 least the union of the outcomes of the individual inputs. 
101 You can use `extra_outcomes()` to add extra outcomes. 102 *counts: One value (usually an `int`) for each input indicating how 103 many of the current outcome were produced. 104 105 Returns: 106 A hashable object indicating the next state. 107 The special value `icepool.Reroll` can be used to immediately remove 108 the state from consideration, effectively performing a full reroll. 109 """ 110 raise NotImplementedError()
State transition function.

This should produce a state given the previous state, an outcome, and the count of that outcome produced by each input.

`evaluate()` will always call this using only positional arguments. Furthermore, there is no expectation that a subclass be able to handle an arbitrary number of counts. Thus, you are free to rename any of the parameters in a subclass, or to replace `*counts` with a fixed set of parameters.

Make sure to handle the base case where `state is None`.

States must be hashable. Currently, they do not have to be orderable. However, this may change in the future, and if they are not totally orderable, you must override `final_outcome` to create totally orderable final outcomes.

By default, this method may receive outcomes in any order:

- If you want to guarantee ascending or descending order, you can implement `next_state_ascending()` or `next_state_descending()` instead.
- Alternatively, implement `next_state()` and override `order()` to return the necessary order. This is useful if the necessary order depends on the instance.
- If you want to handle either order, but have a different implementation for each, override both `next_state_ascending()` and `next_state_descending()`.

The behavior of returning a `Die` from `next_state` is currently undefined.

Arguments:
- state: A hashable object indicating the state before rolling the current outcome. If this is the first outcome being considered, `state` will be `None`.
- outcome: The current outcome. `next_state` will see all rolled outcomes in monotonic order; either ascending or descending depending on `order()`. If there are multiple inputs, the set of outcomes is at least the union of the outcomes of the individual inputs. You can use `extra_outcomes()` to add extra outcomes.
- *counts: One value (usually an `int`) for each input indicating how many of the current outcome were produced.

Returns:
A hashable object indicating the next state. The special value `icepool.Reroll` can be used to immediately remove the state from consideration, effectively performing a full reroll.
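Since a subclass may replace `*counts` with fixed parameters, a two-input transition can name its counts explicitly. A hypothetical sketch, driven by a plain loop rather than the real `evaluate()`, computing the size of the multiset intersection of two inputs:

```python
# Hypothetical two-input transition: size of the multiset intersection.
# Note the base case where state is None, and the fixed parameters
# a_count, b_count in place of *counts.
def next_state(state, outcome, a_count, b_count):
    if state is None:
        state = 0
    return state + min(a_count, b_count)

a = {1: 2, 3: 1}        # multiset {1, 1, 3}
b = {1: 1, 2: 1, 3: 1}  # multiset {1, 2, 3}
state = None
for outcome in sorted(set(a) | set(b)):
    state = next_state(state, outcome, a.get(outcome, 0), b.get(outcome, 0))
print(state)  # 2: outcome 1 matches once, outcome 3 matches once
```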
112 def next_state_ascending(self, state: Hashable, outcome: T, /, *counts: 113 int) -> Hashable: 114 """As next_state() but handles outcomes in ascending order only. 115 116 You can implement both `next_state_ascending()` and 117 `next_state_descending()` if you want to handle both outcome orders 118 with a separate implementation for each. 119 """ 120 raise NotImplementedError()
As `next_state()` but handles outcomes in ascending order only.

You can implement both `next_state_ascending()` and `next_state_descending()` if you want to handle both outcome orders with a separate implementation for each.
122 def next_state_descending(self, state: Hashable, outcome: T, /, *counts: 123 int) -> Hashable: 124 """As next_state() but handles outcomes in descending order only. 125 126 You can implement both `next_state_ascending()` and 127 `next_state_descending()` if you want to handle both outcome orders 128 with a separate implementation for each. 129 """ 130 raise NotImplementedError()
As `next_state()` but handles outcomes in descending order only.

You can implement both `next_state_ascending()` and `next_state_descending()` if you want to handle both outcome orders with a separate implementation for each.
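Some evaluations only work in one direction. A hypothetical ascending-only transition that keeps the two lowest elements (easy going upward, awkward going downward), again driven by a plain loop for illustration:

```python
# Hypothetical next_state_ascending: sum of the two lowest elements.
def next_state_ascending(state, outcome, count):
    if state is None:
        state = (0, 0)  # (elements kept so far, running sum)
    kept, total = state
    take = min(count, 2 - kept)  # only room for 2 elements total
    return (kept + take, total + take * outcome)

counts = {2: 1, 4: 2}  # multiset {2, 4, 4}
state = None
for outcome in sorted(counts):  # ascending, as the method name promises
    state = next_state_ascending(state, outcome, counts[outcome])
print(state[1])  # the two lowest are 2 and 4, so 6
```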
132 def final_outcome(self, final_state: Hashable, 133 /) -> 'U_co | icepool.Die[U_co] | icepool.RerollType': 134 """Optional method to generate a final output outcome from a final state. 135 136 By default, the final outcome is equal to the final state. 137 Note that `None` is not a valid outcome for a `Die`, 138 and if there are no outcomes, `final_outcome` will immediately 139 be called with `final_state=None`. 140 Subclasses that want to handle this case should explicitly define what 141 happens. 142 143 Args: 144 final_state: A state after all outcomes have been processed. 145 146 Returns: 147 A final outcome that will be used as part of constructing the result `Die`. 148 As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`. 149 """ 150 # If not overridden, the final_state should have type U_co. 151 return cast(U_co, final_state)
Optional method to generate a final output outcome from a final state.

By default, the final outcome is equal to the final state. Note that `None` is not a valid outcome for a `Die`, and if there are no outcomes, `final_outcome` will immediately be called with `final_state=None`. Subclasses that want to handle this case should explicitly define what happens.

Arguments:
- final_state: A state after all outcomes have been processed.

Returns:
A final outcome that will be used as part of constructing the result `Die`. As usual for `Die()`, this could itself be a `Die` or `icepool.Reroll`.
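A hypothetical example: if an evaluator's state carries bookkeeping, such as a `(kept, running_total)` pair, `final_outcome` can strip it down to the desired outcome and handle the no-outcome case explicitly:

```python
# Hypothetical final_outcome for a state of the form (kept, running_total).
def final_outcome(final_state):
    if final_state is None:
        # No outcomes were ever seen; define this case explicitly.
        raise ValueError('no outcomes were supplied')
    kept, total = final_state
    return total  # expose only the running total as the final outcome

print(final_outcome((2, 6)))  # 6
```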
153 def order(self) -> Order: 154 """Optional method that specifies what outcome orderings this evaluator supports. 155 156 By default, this is determined by which of `next_state()`, 157 `next_state_ascending()`, and `next_state_descending()` are 158 overridden. 159 160 This is most often overridden by subclasses whose iteration order is 161 determined on a per-instance basis. 162 163 Returns: 164 * Order.Ascending (= 1) 165 if outcomes are to be seen in ascending order. 166 In this case either `next_state()` or `next_state_ascending()` 167 are implemented. 168 * Order.Descending (= -1) 169 if outcomes are to be seen in descending order. 170 In this case either `next_state()` or `next_state_descending()` 171 are implemented. 172 * Order.Any (= 0) 173 if outcomes can be seen in any order. 174 In this case either `next_state()` or both 175 `next_state_ascending()` and `next_state_descending()` 176 are implemented. 177 """ 178 overrides_ascending = self._has_override('next_state_ascending') 179 overrides_descending = self._has_override('next_state_descending') 180 overrides_any = self._has_override('next_state') 181 if overrides_any or (overrides_ascending and overrides_descending): 182 return Order.Any 183 if overrides_ascending: 184 return Order.Ascending 185 if overrides_descending: 186 return Order.Descending 187 raise NotImplementedError( 188 'Subclasses of MultisetEvaluator must implement at least one of next_state, next_state_ascending, next_state_descending.' 189 )
Optional method that specifies what outcome orderings this evaluator supports.

By default, this is determined by which of `next_state()`, `next_state_ascending()`, and `next_state_descending()` are overridden.

This is most often overridden by subclasses whose iteration order is determined on a per-instance basis.

Returns:
- `Order.Ascending` (= 1) if outcomes are to be seen in ascending order. In this case either `next_state()` or `next_state_ascending()` is implemented.
- `Order.Descending` (= -1) if outcomes are to be seen in descending order. In this case either `next_state()` or `next_state_descending()` is implemented.
- `Order.Any` (= 0) if outcomes can be seen in any order. In this case either `next_state()` or both `next_state_ascending()` and `next_state_descending()` are implemented.
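The default `order()` decides by checking which methods were overridden. The detection idea can be sketched in plain Python, comparing each bound method's underlying function against the base class's attribute (a simplified stand-in for the library's internal `_has_override`, using hypothetical class names):

```python
class Base:
    def next_state(self, state, outcome, *counts):
        raise NotImplementedError()
    def next_state_ascending(self, state, outcome, *counts):
        raise NotImplementedError()

class AscendingOnly(Base):
    # Only the ascending variant is overridden.
    def next_state_ascending(self, state, outcome, *counts):
        return state

def has_override(obj, name):
    method = getattr(obj, name)
    func = getattr(method, '__func__', method)  # unwrap the bound method
    return func is not getattr(Base, name)

evaluator = AscendingOnly()
print(has_override(evaluator, 'next_state_ascending'))  # True
print(has_override(evaluator, 'next_state'))            # False
```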
191 def extra_outcomes(self, outcomes: Sequence[T]) -> Collection[T]: 192 """Optional method to specify extra outcomes that should be seen as inputs to `next_state()`. 193 194 These will be seen by `next_state` even if they do not appear in the 195 input(s). The default implementation returns `()`, or no additional 196 outcomes. 197 198 If you want `next_state` to see consecutive `int` outcomes, you can set 199 `extra_outcomes = icepool.MultisetEvaluator.consecutive`. 200 See `consecutive()` below. 201 202 Args: 203 outcomes: The outcomes that could be produced by the inputs, in 204 ascending order. 205 """ 206 return ()
Optional method to specify extra outcomes that should be seen as inputs to `next_state()`.

These will be seen by `next_state` even if they do not appear in the input(s). The default implementation returns `()`, i.e. no additional outcomes.

If you want `next_state` to see consecutive `int` outcomes, you can set `extra_outcomes = icepool.MultisetEvaluator.consecutive`. See `consecutive()` below.

Arguments:
- outcomes: The outcomes that could be produced by the inputs, in ascending order.
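Extra outcomes matter when `next_state` needs to see gaps. A hypothetical "longest straight" transition only works if every consecutive `int` is visited, including those with a zero count, which is exactly what `consecutive` supplies. A plain-loop illustration:

```python
# Hypothetical transition: length of the longest run of consecutive outcomes present.
def next_state(state, outcome, count):
    if state is None:
        state = (0, 0)  # (current run length, best run length)
    run, best = state
    run = run + 1 if count > 0 else 0  # a zero count breaks the run
    return (run, max(best, run))

counts = {1: 1, 2: 1, 4: 1}  # rolled outcomes; 3 is missing
state = None
for outcome in range(1, 5):  # consecutive ints, as extra_outcomes would supply
    state = next_state(state, outcome, counts.get(outcome, 0))
print(state[1])  # longest straight is 2 (the run 1, 2)
```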
208 def consecutive(self, outcomes: Sequence[int]) -> Collection[int]: 209 """Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes. 210 211 Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this. 212 213 Returns: 214 All `int`s from the min outcome to the max outcome among the inputs, 215 inclusive. 216 217 Raises: 218 TypeError: if any input has any non-`int` outcome. 219 """ 220 if not outcomes: 221 return () 222 223 if any(not isinstance(x, int) for x in outcomes): 224 raise TypeError( 225 "consecutive cannot be used with outcomes of type other than 'int'." 226 ) 227 228 return range(outcomes[0], outcomes[-1] + 1)
Example implementation of `extra_outcomes()` that produces consecutive `int` outcomes.

Set `extra_outcomes = icepool.MultisetEvaluator.consecutive` to use this.

Returns:
All `int`s from the min outcome to the max outcome among the inputs, inclusive.

Raises:
- TypeError: if any input has any non-`int` outcome.
230 def bound_inputs(self) -> 'tuple[icepool.MultisetExpression, ...]': 231 """An optional sequence of extra inputs whose counts will be prepended to *counts. 232 233 (Prepending rather than appending is analogous to `functools.partial`.) 234 """ 235 return ()
An optional sequence of extra inputs whose counts will be prepended to `*counts`.

(Prepending rather than appending is analogous to `functools.partial`.)
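The `functools.partial` analogy: bound arguments are prepended before the caller's own, which mirrors how bound inputs' counts arrive before the free inputs' counts.

```python
from functools import partial

def f(a, b, c):
    return (a, b, c)

g = partial(f, 1)  # 1 is bound and prepended, like a bound input's count
print(g(2, 3))     # (1, 2, 3)
```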
```python
def evaluate(
        self, *args: 'MultisetExpression[T] | Mapping[T, int] | Sequence[T]'
) -> 'icepool.Die[U_co] | MultisetEvaluator[T, U_co]':
    """Evaluates input expression(s).

    You can call the `MultisetEvaluator` object directly for the same effect,
    e.g. `sum_evaluator(input)` is an alias for `sum_evaluator.evaluate(input)`.

    Most evaluators will expect a fixed number of input multisets.
    The union of the outcomes of the input(s) must be totally orderable.

    Args:
        *args: Each may be one of the following:
            * A `MultisetExpression`.
            * A mappable mapping outcomes to the number of those outcomes.
            * A sequence of outcomes.

    Returns:
        A `Die` representing the distribution of the final outcome if no
        arg contains a free variable. Otherwise, returns a new evaluator.
    """
    from icepool.generator.alignment import Alignment

    # Convert arguments to expressions.
    inputs = tuple(
        icepool.implicit_convert_to_expression(arg) for arg in args)

    if any(input.has_free_variables() for input in inputs):
        from icepool.evaluator.multiset_function import MultisetFunctionEvaluator
        return MultisetFunctionEvaluator(*inputs, evaluator=self)

    inputs = self.bound_inputs() + inputs

    # This is kept to verify inputs to operators each have arity exactly 1.
    total_arity = sum(input.output_arity() for input in inputs)

    if not all(expression._is_resolvable() for expression in inputs):
        return icepool.Die([])

    algorithm, order = self._select_algorithm(*inputs)

    next_state_function = self._select_next_state_function(order)

    outcomes = icepool.sorted_union(*(expression.outcomes()
                                      for expression in inputs))
    extra_outcomes = Alignment(self.extra_outcomes(outcomes))

    dist: MutableMapping[Any, int] = defaultdict(int)
    iterators = MultisetEvaluator._initialize_inputs(inputs)
    for p in itertools.product(*iterators):
        sub_inputs, sub_weights = zip(*p)
        prod_weight = math.prod(sub_weights)
        sub_result = algorithm(order, next_state_function, extra_outcomes,
                               sub_inputs)
        for sub_state, sub_weight in sub_result.items():
            dist[sub_state] += sub_weight * prod_weight

    final_outcomes = []
    final_weights = []
    for state, weight in dist.items():
        outcome = self.final_outcome(state)
        if outcome is None:
            raise TypeError(
                "None is not a valid final outcome.\n"
                "This may have been a result of not supplying any input with an outcome."
            )
        if outcome is not icepool.Reroll:
            final_outcomes.append(outcome)
            final_weights.append(weight)

    return icepool.Die(final_outcomes, final_weights)
```
Evaluates input expression(s).

You can call the `MultisetEvaluator` object directly for the same effect,
e.g. `sum_evaluator(input)` is an alias for `sum_evaluator.evaluate(input)`.

Most evaluators will expect a fixed number of input multisets. The union of the outcomes of the input(s) must be totally orderable.

Arguments:
- *args: Each may be one of the following:
  - A `MultisetExpression`.
  - A mapping from outcomes to the number of those outcomes.
  - A sequence of outcomes.

Returns:
A `Die` representing the distribution of the final outcome if no arg contains a free variable. Otherwise, returns a new evaluator.
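The overall shape of such an evaluation — multiply weights across independent inputs, accumulate weights for equal final outcomes — can be sketched with the stdlib (this is not icepool's algorithm, just the pattern it generalizes):

```python
import itertools
from collections import Counter

# A weighted d6 as (outcome, weight) pairs.
d6 = [(face, 1) for face in range(1, 7)]

# Distribution of the sum of two dice: weights multiply across
# independent inputs, and weights for equal final outcomes add up.
dist = Counter()
for (a, wa), (b, wb) in itertools.product(d6, d6):
    dist[a + b] += wa * wb

print(dist[7], sum(dist.values()))  # 6 ways out of a 36 denominator
```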
```python
def sample(
        self, *inputs:
        'icepool.MultisetExpression[T] | Mapping[T, int] | Sequence[T]'):
    """EXPERIMENTAL: Samples one result from the input(s) and evaluates the result."""
    # Convert non-`Pool` arguments to `Pool`.
    converted_inputs = tuple(
        input if isinstance(input, icepool.MultisetExpression
                            ) else icepool.Pool(input) for input in inputs)

    result = self.evaluate(*itertools.chain.from_iterable(
        input.sample() for input in converted_inputs))

    if not result.is_empty():
        return result.outcomes()[0]
    else:
        return result
```
EXPERIMENTAL: Samples one result from the input(s) and evaluates the result.
```python
class Order(enum.IntEnum):
    """Can be used to define what order outcomes are seen in by MultisetEvaluators."""
    Ascending = 1
    Descending = -1
    Any = 0

    def merge(*orders: 'Order') -> 'Order':
        """Merges the given Orders.

        Returns:
            `Any` if all arguments are `Any`.
            `Ascending` if there is at least one `Ascending` in the arguments.
            `Descending` if there is at least one `Descending` in the arguments.

        Raises:
            `ConflictingOrderError` if both `Ascending` and `Descending` are in
            the arguments.
        """
        result = Order.Any
        for order in orders:
            if (result > 0 and order < 0) or (result < 0 and order > 0):
                raise ConflictingOrderError(
                    f'Conflicting orders {orders}.\n' +
                    'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
                )
            if result == Order.Any:
                result = order
        return result
```
Can be used to define what order outcomes are seen in by MultisetEvaluators.
```python
def merge(*orders: 'Order') -> 'Order':
    """Merges the given Orders.

    Returns:
        `Any` if all arguments are `Any`.
        `Ascending` if there is at least one `Ascending` in the arguments.
        `Descending` if there is at least one `Descending` in the arguments.

    Raises:
        `ConflictingOrderError` if both `Ascending` and `Descending` are in
        the arguments.
    """
    result = Order.Any
    for order in orders:
        if (result > 0 and order < 0) or (result < 0 and order > 0):
            raise ConflictingOrderError(
                f'Conflicting orders {orders}.\n' +
                'Tip: If you are using highest(keep=k), try using lowest(drop=n-k) instead, or vice versa.'
            )
        if result == Order.Any:
            result = order
    return result
```
Merges the given Orders.

Returns:
- `Any` if all arguments are `Any`.
- `Ascending` if there is at least one `Ascending` in the arguments.
- `Descending` if there is at least one `Descending` in the arguments.

Raises:
- `ConflictingOrderError` if both `Ascending` and `Descending` are in the arguments.
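The merge rules can be sketched in a few lines of stdlib Python (a simplified reimplementation, raising `ValueError` in place of icepool's `ConflictingOrderError`):

```python
from enum import IntEnum

class Order(IntEnum):  # mirrors the documented values
    Ascending = 1
    Descending = -1
    Any = 0

def merge(*orders):
    # `Any` (0) defers to the first non-Any order; opposite signs conflict.
    result = Order.Any
    for order in orders:
        if result * order < 0:
            raise ValueError(f'Conflicting orders {orders}.')
        if result == Order.Any:
            result = order
    return result

print(merge(Order.Any, Order.Descending, Order.Any))  # Order.Descending
```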
20class Deck(Population[T_co]): 21 """Sampling without replacement (within a single evaluation). 22 23 Quantities represent duplicates. 24 """ 25 26 _data: Counts[T_co] 27 _deal: int 28 29 @property 30 def _new_type(self) -> type: 31 return Deck 32 33 def __new__(cls, 34 outcomes: Sequence | Mapping[Any, int], 35 times: Sequence[int] | int = 1) -> 'Deck[T_co]': 36 """Constructor for a `Deck`. 37 38 All quantities must be non-negative. Outcomes with zero quantity will be 39 omitted. 40 41 Args: 42 outcomes: The cards of the `Deck`. This can be one of the following: 43 * A `Sequence` of outcomes. Duplicates will contribute 44 quantity for each appearance. 45 * A `Mapping` from outcomes to quantities. 46 47 Each outcome may be one of the following: 48 * An outcome, which must be hashable and totally orderable. 49 * A `Deck`, which will be flattened into the result. If a 50 `times` is assigned to the `Deck`, the entire `Deck` will 51 be duplicated that many times. 52 times: Multiplies the number of times each element of `outcomes` 53 will be put into the `Deck`. 54 `times` can either be a sequence of the same length as 55 `outcomes` or a single `int` to apply to all elements of 56 `outcomes`. 57 """ 58 59 if icepool.population.again.contains_again(outcomes): 60 raise ValueError('Again cannot be used with Decks.') 61 62 outcomes, times = icepool.creation_args.itemize(outcomes, times) 63 64 if len(outcomes) == 1 and times[0] == 1 and isinstance( 65 outcomes[0], Deck): 66 return outcomes[0] 67 68 counts: Counts[T_co] = icepool.creation_args.expand_args_for_deck( 69 outcomes, times) 70 71 return Deck._new_raw(counts) 72 73 @classmethod 74 def _new_raw(cls, data: Counts[T_co]) -> 'Deck[T_co]': 75 """Creates a new `Deck` using already-processed arguments. 76 77 Args: 78 data: At this point, this is a Counts. 
79 """ 80 self = super(Population, cls).__new__(cls) 81 self._data = data 82 return self 83 84 def keys(self) -> CountsKeysView[T_co]: 85 return self._data.keys() 86 87 def values(self) -> CountsValuesView: 88 return self._data.values() 89 90 def items(self) -> CountsItemsView[T_co]: 91 return self._data.items() 92 93 def __getitem__(self, outcome) -> int: 94 return self._data[outcome] 95 96 def __iter__(self) -> Iterator[T_co]: 97 return iter(self.keys()) 98 99 def __len__(self) -> int: 100 return len(self._data) 101 102 size = icepool.Population.denominator 103 104 @cached_property 105 def _popped_min(self) -> tuple['Deck[T_co]', int]: 106 return self._new_raw(self._data.remove_min()), self.quantities()[0] 107 108 def _pop_min(self) -> tuple['Deck[T_co]', int]: 109 """A `Deck` with the min outcome removed.""" 110 return self._popped_min 111 112 @cached_property 113 def _popped_max(self) -> tuple['Deck[T_co]', int]: 114 return self._new_raw(self._data.remove_max()), self.quantities()[-1] 115 116 def _pop_max(self) -> tuple['Deck[T_co]', int]: 117 """A `Deck` with the max outcome removed.""" 118 return self._popped_max 119 120 @overload 121 def deal(self, hand_size: int, /) -> 'icepool.Deal[T_co]': 122 ... 123 124 @overload 125 def deal( 126 self, hand_size: int, hand_size_2: int, /, *more_hand_sizes: 127 int) -> 'icepool.MultiDeal[T_co, tuple[int, ...]]': 128 ... 129 130 @overload 131 def deal( 132 self, *hand_sizes: int 133 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]': 134 ... 135 136 def deal( 137 self, *hand_sizes: int 138 ) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]': 139 """Creates a `Deal` object from this deck. 140 141 See `Deal()` for details. 142 """ 143 if len(hand_sizes) == 1: 144 return icepool.Deal(self, *hand_sizes) 145 else: 146 return icepool.MultiDeal(self, *hand_sizes) 147 148 # Binary operators. 
149 150 def additive_union( 151 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 152 """Both decks merged together.""" 153 return functools.reduce(operator.add, args, initial=self) 154 155 def __add__(self, 156 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 157 data = Counter(self._data) 158 for outcome, count in Counter(other).items(): 159 data[outcome] += count 160 return Deck(+data) 161 162 __radd__ = __add__ 163 164 def difference(self, *args: 165 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 166 """This deck with the other cards removed (but not below zero of each card).""" 167 return functools.reduce(operator.sub, args, initial=self) 168 169 def __sub__(self, 170 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 171 data = Counter(self._data) 172 for outcome, count in Counter(other).items(): 173 data[outcome] -= count 174 return Deck(+data) 175 176 def __rsub__(self, 177 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 178 data = Counter(other) 179 for outcome, count in self.items(): 180 data[outcome] -= count 181 return Deck(+data) 182 183 def intersection( 184 self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 185 """The cards that both decks have.""" 186 return functools.reduce(operator.and_, args, initial=self) 187 188 def __and__(self, 189 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 190 data: Counter[T_co] = Counter() 191 for outcome, count in Counter(other).items(): 192 data[outcome] = min(self.get(outcome, 0), count) 193 return Deck(+data) 194 195 __rand__ = __and__ 196 197 def union(self, *args: 198 Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 199 """As many of each card as the deck that has more of them.""" 200 return functools.reduce(operator.or_, args, initial=self) 201 202 def __or__(self, 203 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 204 data = Counter(self._data) 205 for outcome, count in Counter(other).items(): 206 
data[outcome] = max(data[outcome], count) 207 return Deck(+data) 208 209 __ror__ = __or__ 210 211 def symmetric_difference( 212 self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 213 """As many of each card as the deck that has more of them.""" 214 return self ^ other 215 216 def __xor__(self, 217 other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]': 218 data = Counter(self._data) 219 for outcome, count in Counter(other).items(): 220 data[outcome] = abs(data[outcome] - count) 221 return Deck(+data) 222 223 def __mul__(self, other: int) -> 'Deck[T_co]': 224 if not isinstance(other, int): 225 return NotImplemented 226 return self.multiply_quantities(other) 227 228 __rmul__ = __mul__ 229 230 def __floordiv__(self, other: int) -> 'Deck[T_co]': 231 if not isinstance(other, int): 232 return NotImplemented 233 return self.divide_quantities(other) 234 235 def __mod__(self, other: int) -> 'Deck[T_co]': 236 if not isinstance(other, int): 237 return NotImplemented 238 return self.modulo_quantities(other) 239 240 def map( 241 self, 242 repl: 243 'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]', 244 /, 245 star: bool | None = None) -> 'Deck[U]': 246 """Maps outcomes of this `Deck` to other outcomes. 247 248 Args: 249 repl: One of the following: 250 * A callable returning a new outcome for each old outcome. 251 * A map from old outcomes to new outcomes. 252 Unmapped old outcomes stay the same. 253 The new outcomes may be `Deck`s, in which case one card is 254 replaced with several. This is not recommended. 255 star: Whether outcomes should be unpacked into separate arguments 256 before sending them to a callable `repl`. 257 If not provided, this will be guessed based on the function 258 signature. 259 """ 260 # Convert to a single-argument function. 
261 if callable(repl): 262 if star is None: 263 star = infer_star(repl) 264 if star: 265 266 def transition_function(outcome): 267 return repl(*outcome) 268 else: 269 270 def transition_function(outcome): 271 return repl(outcome) 272 else: 273 # repl is a mapping. 274 def transition_function(outcome): 275 if outcome in repl: 276 return repl[outcome] 277 else: 278 return outcome 279 280 return Deck( 281 [transition_function(outcome) for outcome in self.outcomes()], 282 times=self.quantities()) 283 284 @cached_property 285 def _sequence_cache( 286 self) -> 'MutableSequence[icepool.Die[tuple[T_co, ...]]]': 287 return [icepool.Die([()])] 288 289 def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]': 290 """Possible sequences produced by dealing from this deck a number of times. 291 292 This is extremely expensive computationally. If you don't care about 293 order, use `deal()` instead. 294 """ 295 if deals < 0: 296 raise ValueError('The number of cards dealt cannot be negative.') 297 for i in range(len(self._sequence_cache), deals + 1): 298 299 def transition(curr): 300 remaining = icepool.Die(self - curr) 301 return icepool.map(lambda curr, next: curr + (next, ), curr, 302 remaining) 303 304 result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[ 305 i - 1].map(transition) 306 self._sequence_cache.append(result) 307 return result 308 309 @cached_property 310 def _hash_key(self) -> tuple: 311 return Deck, tuple(self.items()) 312 313 def __eq__(self, other) -> bool: 314 if not isinstance(other, Deck): 315 return False 316 return self._hash_key == other._hash_key 317 318 @cached_property 319 def _hash(self) -> int: 320 return hash(self._hash_key) 321 322 def __hash__(self) -> int: 323 return self._hash 324 325 def __repr__(self) -> str: 326 items_string = ', '.join(f'{repr(outcome)}: {quantity}' 327 for outcome, quantity in self.items()) 328 return type(self).__qualname__ + '({' + items_string + '})'
Sampling without replacement (within a single evaluation).
Quantities represent duplicates.
```python
def denominator(self) -> int:
    """The sum of all quantities (e.g. weights or duplicates).

    For the number of unique outcomes, use `len()`.
    """
    return self._denominator
```
The sum of all quantities (e.g. weights or duplicates).

For the number of unique outcomes, use `len()`.
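The distinction between the denominator and the number of unique outcomes can be shown with a `collections.Counter` standing in for a population (a sketch of the semantics, not icepool's implementation):

```python
from collections import Counter

# A small "population": outcomes mapped to quantities (weights/duplicates).
population = Counter({'ace': 4, 'king': 4, 'joker': 2})

denominator = sum(population.values())  # total quantity: 4 + 4 + 2 = 10
unique = len(population)                # unique outcomes: 3
print(denominator, unique)
```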
```python
def deal(
        self, *hand_sizes: int
) -> 'icepool.Deal[T_co] | icepool.MultiDeal[T_co, tuple[int, ...]]':
    """Creates a `Deal` object from this deck.

    See `Deal()` for details.
    """
    if len(hand_sizes) == 1:
        return icepool.Deal(self, *hand_sizes)
    else:
        return icepool.MultiDeal(self, *hand_sizes)
```
```python
def additive_union(
        self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """Both decks merged together."""
    return functools.reduce(operator.add, args, initial=self)
```
Both decks merged together.
```python
def difference(self, *args:
               Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """This deck with the other cards removed (but not below zero of each card)."""
    return functools.reduce(operator.sub, args, initial=self)
```
This deck with the other cards removed (but not below zero of each card).
```python
def intersection(
        self, *args: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """The cards that both decks have."""
    return functools.reduce(operator.and_, args, initial=self)
```
The cards that both decks have.
```python
def union(self, *args:
          Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """As many of each card as the deck that has more of them."""
    return functools.reduce(operator.or_, args, initial=self)
```
As many of each card as the deck that has more of them.
```python
def symmetric_difference(
        self, other: Iterable[T_co] | Mapping[T_co, int]) -> 'Deck[T_co]':
    """The absolute difference in quantity of each card between the two decks."""
    return self ^ other
```
The absolute difference in quantity of each card between the two decks.
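These operations follow standard multiset semantics, which `collections.Counter` also implements — a sketch of the semantics, not icepool's implementation:

```python
from collections import Counter

a = Counter({'ace': 3, 'king': 1})
b = Counter({'ace': 1, 'queen': 2})

union = a | b                 # max of each count ("the deck that has more")
intersection = a & b          # min of each count ("cards both decks have")
difference = a - b            # subtraction, clamped at zero
sym_diff = (a - b) + (b - a)  # absolute difference per card

print(union['ace'], intersection['ace'], difference['ace'], sym_diff['ace'])
# 3 1 2 2
```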
```python
def map(
    self,
    repl:
    'Callable[..., U | Deck[U] | icepool.RerollType] | Mapping[T_co, U | Deck[U] | icepool.RerollType]',
    /,
    star: bool | None = None) -> 'Deck[U]':
    """Maps outcomes of this `Deck` to other outcomes.

    Args:
        repl: One of the following:
            * A callable returning a new outcome for each old outcome.
            * A map from old outcomes to new outcomes.
                Unmapped old outcomes stay the same.
            The new outcomes may be `Deck`s, in which case one card is
            replaced with several. This is not recommended.
        star: Whether outcomes should be unpacked into separate arguments
            before sending them to a callable `repl`.
            If not provided, this will be guessed based on the function
            signature.
    """
    # Convert to a single-argument function.
    if callable(repl):
        if star is None:
            star = infer_star(repl)
        if star:

            def transition_function(outcome):
                return repl(*outcome)
        else:

            def transition_function(outcome):
                return repl(outcome)
    else:
        # repl is a mapping.
        def transition_function(outcome):
            if outcome in repl:
                return repl[outcome]
            else:
                return outcome

    return Deck(
        [transition_function(outcome) for outcome in self.outcomes()],
        times=self.quantities())
```
Maps outcomes of this `Deck` to other outcomes.

Arguments:
- repl: One of the following:
  - A callable returning a new outcome for each old outcome.
  - A map from old outcomes to new outcomes. Unmapped old outcomes stay the same.

  The new outcomes may be `Deck`s, in which case one card is replaced with several. This is not recommended.
- star: Whether outcomes should be unpacked into separate arguments before sending them to a callable `repl`. If not provided, this will be guessed based on the function signature.
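The mapping form with unmapped outcomes staying the same behaves like `dict.get` with a default — a plain-Python sketch on a list of cards (the card names and `repl` mapping are made up for illustration):

```python
cards = ['ace', 'king', 'king', 'two']
repl = {'two': 'deuce'}  # hypothetical renaming

# Unmapped outcomes pass through unchanged:
mapped = [repl.get(card, card) for card in cards]
print(mapped)  # ['ace', 'king', 'king', 'deuce']
```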
```python
def sequence(self, deals: int, /) -> 'icepool.Die[tuple[T_co, ...]]':
    """Possible sequences produced by dealing from this deck a number of times.

    This is extremely expensive computationally. If you don't care about
    order, use `deal()` instead.
    """
    if deals < 0:
        raise ValueError('The number of cards dealt cannot be negative.')
    for i in range(len(self._sequence_cache), deals + 1):

        def transition(curr):
            remaining = icepool.Die(self - curr)
            return icepool.map(lambda curr, next: curr + (next, ), curr,
                               remaining)

        result: 'icepool.Die[tuple[T_co, ...]]' = self._sequence_cache[
            i - 1].map(transition)
        self._sequence_cache.append(result)
    return result
```
Possible sequences produced by dealing from this deck a number of times.

This is extremely expensive computationally. If you don't care about order, use `deal()` instead.
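Why ordered sequences blow up can be seen by brute-force enumeration on a tiny deck (a stdlib sketch, not icepool's implementation — duplicate cards make some sequences more likely):

```python
import itertools
from collections import Counter

# A tiny deck: two 'a' cards and one 'b'.
deck = ['a', 'a', 'b']

# itertools.permutations treats the two 'a' cards as distinct,
# so equal sequences accumulate weight.
dist = Counter(itertools.permutations(deck, 2))
print(dist[('a', 'a')], sum(dist.values()))  # 2 out of 6 ordered deals
```

For larger decks the number of ordered deals grows factorially, which is why `deal()` (unordered) is preferred when order doesn't matter.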
17class Deal(KeepGenerator[T]): 18 """Represents an unordered deal of a single hand from a `Deck`.""" 19 20 _deck: 'icepool.Deck[T]' 21 _hand_size: int 22 23 def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None: 24 """Constructor. 25 26 For algorithmic reasons, you must pre-commit to the number of cards to 27 deal. 28 29 It is permissible to deal zero cards from an empty deck, but not all 30 evaluators will handle this case, especially if they depend on the 31 outcome type. Dealing zero cards from a non-empty deck does not have 32 this issue. 33 34 Args: 35 deck: The `Deck` to deal from. 36 hand_size: How many cards to deal. 37 """ 38 if hand_size < 0: 39 raise ValueError('hand_size cannot be negative.') 40 if hand_size > deck.size(): 41 raise ValueError( 42 'The number of cards dealt cannot exceed the size of the deck.' 43 ) 44 self._deck = deck 45 self._hand_size = hand_size 46 self._keep_tuple = (1, ) * hand_size 47 48 @classmethod 49 def _new_raw(cls, deck: 'icepool.Deck[T]', hand_size: int, 50 keep_tuple: tuple[int, ...]) -> 'Deal[T]': 51 self = super(Deal, cls).__new__(cls) 52 self._deck = deck 53 self._hand_size = hand_size 54 self._keep_tuple = keep_tuple 55 return self 56 57 def _set_keep_tuple(self, keep_tuple: tuple[int, ...]) -> 'Deal[T]': 58 return Deal._new_raw(self._deck, self._hand_size, keep_tuple) 59 60 def deck(self) -> 'icepool.Deck[T]': 61 """The `Deck` the cards are dealt from.""" 62 return self._deck 63 64 def hand_sizes(self) -> tuple[int, ...]: 65 """The number of cards dealt to each hand as a tuple.""" 66 return (self._hand_size, ) 67 68 def total_cards_dealt(self) -> int: 69 """The total number of cards dealt.""" 70 return self._hand_size 71 72 def outcomes(self) -> CountsKeysView[T]: 73 """The outcomes of the `Deck` in ascending order. 74 75 These are also the `keys` of the `Deck` as a `Mapping`. 76 Prefer to use the name `outcomes`. 
77 """ 78 return self.deck().outcomes() 79 80 def output_arity(self) -> int: 81 return 1 82 83 def _is_resolvable(self) -> bool: 84 return len(self.outcomes()) > 0 85 86 def denominator(self) -> int: 87 return icepool.math.comb(self.deck().size(), self._hand_size) 88 89 def _generate_initial(self) -> InitialMultisetGeneration: 90 yield self, 1 91 92 def _generate_min(self, min_outcome) -> PopMultisetGeneration: 93 if not self.outcomes() or min_outcome != self.min_outcome(): 94 yield self, (0, ), 1 95 return 96 97 popped_deck, deck_count = self.deck()._pop_min() 98 99 min_count = max(0, deck_count + self._hand_size - self.deck().size()) 100 max_count = min(deck_count, self._hand_size) 101 skip_weight = None 102 for count in range(min_count, max_count + 1): 103 popped_keep_tuple, result_count = pop_min_from_keep_tuple( 104 self.keep_tuple(), count) 105 popped_deal = Deal._new_raw(popped_deck, self._hand_size - count, 106 popped_keep_tuple) 107 weight = icepool.math.comb(deck_count, count) 108 if not any(popped_keep_tuple): 109 # Dump all dice in exchange for the denominator. 
110 skip_weight = (skip_weight 111 or 0) + weight * popped_deal.denominator() 112 continue 113 yield popped_deal, (result_count, ), weight 114 115 if skip_weight is not None: 116 popped_deal = Deal._new_raw(popped_deck, 0, ()) 117 yield popped_deal, (sum(self.keep_tuple()), ), skip_weight 118 119 def _generate_max(self, max_outcome) -> PopMultisetGeneration: 120 if not self.outcomes() or max_outcome != self.max_outcome(): 121 yield self, (0, ), 1 122 return 123 124 popped_deck, deck_count = self.deck()._pop_max() 125 126 min_count = max(0, deck_count + self._hand_size - self.deck().size()) 127 max_count = min(deck_count, self._hand_size) 128 skip_weight = None 129 for count in range(min_count, max_count + 1): 130 popped_keep_tuple, result_count = pop_max_from_keep_tuple( 131 self.keep_tuple(), count) 132 popped_deal = Deal._new_raw(popped_deck, self._hand_size - count, 133 popped_keep_tuple) 134 weight = icepool.math.comb(deck_count, count) 135 if not any(popped_keep_tuple): 136 # Dump all dice in exchange for the denominator. 
137 skip_weight = (skip_weight 138 or 0) + weight * popped_deal.denominator() 139 continue 140 yield popped_deal, (result_count, ), weight 141 142 if skip_weight is not None: 143 popped_deal = Deal._new_raw(popped_deck, 0, ()) 144 yield popped_deal, (sum(self.keep_tuple()), ), skip_weight 145 146 def local_order_preference(self) -> tuple[Order, OrderReason]: 147 lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple()) 148 if lo_skip > hi_skip: 149 return Order.Descending, OrderReason.KeepSkip 150 if hi_skip > lo_skip: 151 return Order.Ascending, OrderReason.KeepSkip 152 153 return Order.Any, OrderReason.NoPreference 154 155 @cached_property 156 def _local_hash_key(self) -> Hashable: 157 return Deal, self.deck(), self._hand_size, self._keep_tuple 158 159 def __repr__(self) -> str: 160 return type( 161 self 162 ).__qualname__ + f'({repr(self.deck())}, hand_size={self._hand_size})' 163 164 def __str__(self) -> str: 165 return type( 166 self 167 ).__qualname__ + f' of hand_size={self._hand_size} from deck:\n' + str( 168 self.deck())
Represents an unordered deal of a single hand from a `Deck`.
```python
def __init__(self, deck: 'icepool.Deck[T]', hand_size: int) -> None:
    """Constructor.

    For algorithmic reasons, you must pre-commit to the number of cards to
    deal.

    It is permissible to deal zero cards from an empty deck, but not all
    evaluators will handle this case, especially if they depend on the
    outcome type. Dealing zero cards from a non-empty deck does not have
    this issue.

    Args:
        deck: The `Deck` to deal from.
        hand_size: How many cards to deal.
    """
    if hand_size < 0:
        raise ValueError('hand_size cannot be negative.')
    if hand_size > deck.size():
        raise ValueError(
            'The number of cards dealt cannot exceed the size of the deck.'
        )
    self._deck = deck
    self._hand_size = hand_size
    self._keep_tuple = (1, ) * hand_size
```
Constructor.
For algorithmic reasons, you must pre-commit to the number of cards to deal.
It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.
Arguments:
- deck: The `Deck` to deal from.
- hand_size: How many cards to deal.
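For a deck of distinct cards, the number of possible unordered hands — the denominator of a `Deal` — is a binomial coefficient, computable with `math.comb` (a sketch of the counting, not icepool's API):

```python
import math

deck_size = 52
hand_size = 5

# Number of distinct unordered 5-card hands from a 52-card deck:
print(math.comb(deck_size, hand_size))  # 2598960
```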
```python
def deck(self) -> 'icepool.Deck[T]':
    """The `Deck` the cards are dealt from."""
    return self._deck
```
The `Deck` the cards are dealt from.
```python
def hand_sizes(self) -> tuple[int, ...]:
    """The number of cards dealt to each hand as a tuple."""
    return (self._hand_size, )
```
The number of cards dealt to each hand as a tuple.
```python
def total_cards_dealt(self) -> int:
    """The total number of cards dealt."""
    return self._hand_size
```
The total number of cards dealt.
The total weight of all paths through this generator.

Raises:
- `UnboundMultisetExpressionError` if this is called on an expression with free variables.
```python
def local_order_preference(self) -> tuple[Order, OrderReason]:
    lo_skip, hi_skip = icepool.order.lo_hi_skip(self.keep_tuple())
    if lo_skip > hi_skip:
        return Order.Descending, OrderReason.KeepSkip
    if hi_skip > lo_skip:
        return Order.Ascending, OrderReason.KeepSkip

    return Order.Any, OrderReason.NoPreference
```
Any ordering that is preferred or required by this expression node.
18class MultiDeal(MultisetGenerator[T, Qs]): 19 """Represents an unordered deal of multiple hands from a `Deck`.""" 20 21 _deck: 'icepool.Deck[T]' 22 _hand_sizes: Qs 23 24 def __init__(self, deck: 'icepool.Deck[T]', *hand_sizes: int) -> None: 25 """Constructor. 26 27 For algorithmic reasons, you must pre-commit to the number of cards to 28 deal for each hand. 29 30 It is permissible to deal zero cards from an empty deck, but not all 31 evaluators will handle this case, especially if they depend on the 32 outcome type. Dealing zero cards from a non-empty deck does not have 33 this issue. 34 35 Args: 36 deck: The `Deck` to deal from. 37 *hand_sizes: How many cards to deal. If multiple `hand_sizes` are 38 provided, `MultisetEvaluator.next_state` will recieve one count 39 per hand in order. Try to keep the number of hands to a minimum 40 as this can be computationally intensive. 41 """ 42 if any(hand < 0 for hand in hand_sizes): 43 raise ValueError('hand_sizes cannot be negative.') 44 self._deck = deck 45 self._hand_sizes = cast(Qs, hand_sizes) 46 if self.total_cards_dealt() > self.deck().size(): 47 raise ValueError( 48 'The total number of cards dealt cannot exceed the size of the deck.' 49 ) 50 51 @classmethod 52 def _new_raw(cls, deck: 'icepool.Deck[T]', 53 hand_sizes: Qs) -> 'MultiDeal[T, Qs]': 54 self = super(MultiDeal, cls).__new__(cls) 55 self._deck = deck 56 self._hand_sizes = hand_sizes 57 return self 58 59 def deck(self) -> 'icepool.Deck[T]': 60 """The `Deck` the cards are dealt from.""" 61 return self._deck 62 63 def hand_sizes(self) -> Qs: 64 """The number of cards dealt to each hand as a tuple.""" 65 return self._hand_sizes 66 67 def total_cards_dealt(self) -> int: 68 """The total number of cards dealt.""" 69 return sum(self.hand_sizes()) 70 71 def outcomes(self) -> CountsKeysView[T]: 72 """The outcomes of the `Deck` in ascending order. 73 74 These are also the `keys` of the `Deck` as a `Mapping`. 75 Prefer to use the name `outcomes`. 
76 """ 77 return self.deck().outcomes() 78 79 def output_arity(self) -> int: 80 return len(self._hand_sizes) 81 82 def _is_resolvable(self) -> bool: 83 return len(self.outcomes()) > 0 84 85 @cached_property 86 def _denomiator(self) -> int: 87 d_total = icepool.math.comb(self.deck().size(), 88 self.total_cards_dealt()) 89 d_split = math.prod( 90 icepool.math.comb(self.total_cards_dealt(), h) 91 for h in self.hand_sizes()[1:]) 92 return d_total * d_split 93 94 def denominator(self) -> int: 95 return self._denomiator 96 97 def _generate_initial(self) -> InitialMultisetGeneration: 98 yield self, 1 99 100 def _generate_common(self, popped_deck: 'icepool.Deck[T]', 101 deck_count: int) -> PopMultisetGeneration: 102 """Common implementation for _generate_min and _generate_max.""" 103 min_count = max( 104 0, deck_count + self.total_cards_dealt() - self.deck().size()) 105 max_count = min(deck_count, self.total_cards_dealt()) 106 for count_total in range(min_count, max_count + 1): 107 weight_total = icepool.math.comb(deck_count, count_total) 108 # The "deck" is the hand sizes. 
109 for counts, weight_split in iter_hypergeom(self.hand_sizes(), 110 count_total): 111 popped_deal = MultiDeal._new_raw( 112 popped_deck, 113 tuple(h - c for h, c in zip(self.hand_sizes(), counts))) 114 weight = weight_total * weight_split 115 yield popped_deal, counts, weight 116 117 def _generate_min(self, min_outcome) -> PopMultisetGeneration: 118 if not self.outcomes() or min_outcome != self.min_outcome(): 119 yield self, (0, ), 1 120 return 121 122 popped_deck, deck_count = self.deck()._pop_min() 123 124 yield from self._generate_common(popped_deck, deck_count) 125 126 def _generate_max(self, max_outcome) -> PopMultisetGeneration: 127 if not self.outcomes() or max_outcome != self.max_outcome(): 128 yield self, (0, ), 1 129 return 130 131 popped_deck, deck_count = self.deck()._pop_max() 132 133 yield from self._generate_common(popped_deck, deck_count) 134 135 def local_order_preference(self) -> tuple[Order, OrderReason]: 136 return Order.Any, OrderReason.NoPreference 137 138 @cached_property 139 def _local_hash_key(self) -> Hashable: 140 return MultiDeal, self.deck(), self.hand_sizes() 141 142 def __repr__(self) -> str: 143 return type( 144 self 145 ).__qualname__ + f'({repr(self.deck())}, hand_sizes={self.hand_sizes()})' 146 147 def __str__(self) -> str: 148 return type( 149 self 150 ).__qualname__ + f' of hand_sizes={self.hand_sizes()} from deck:\n' + str( 151 self.deck())
Represents an unordered deal of multiple hands from a `Deck`.
```python
def __init__(self, deck: 'icepool.Deck[T]', *hand_sizes: int) -> None:
    """Constructor.

    For algorithmic reasons, you must pre-commit to the number of cards to
    deal for each hand.

    It is permissible to deal zero cards from an empty deck, but not all
    evaluators will handle this case, especially if they depend on the
    outcome type. Dealing zero cards from a non-empty deck does not have
    this issue.

    Args:
        deck: The `Deck` to deal from.
        *hand_sizes: How many cards to deal. If multiple `hand_sizes` are
            provided, `MultisetEvaluator.next_state` will receive one count
            per hand in order. Try to keep the number of hands to a minimum
            as this can be computationally intensive.
    """
    if any(hand < 0 for hand in hand_sizes):
        raise ValueError('hand_sizes cannot be negative.')
    self._deck = deck
    self._hand_sizes = cast(Qs, hand_sizes)
    if self.total_cards_dealt() > self.deck().size():
        raise ValueError(
            'The total number of cards dealt cannot exceed the size of the deck.'
        )
```
Constructor.

For algorithmic reasons, you must pre-commit to the number of cards to deal for each hand.

It is permissible to deal zero cards from an empty deck, but not all evaluators will handle this case, especially if they depend on the outcome type. Dealing zero cards from a non-empty deck does not have this issue.

Arguments:

- deck: The `Deck` to deal from.
- *hand_sizes: How many cards to deal. If multiple `hand_sizes` are provided, `MultisetEvaluator.next_state` will receive one count per hand in order. Try to keep the number of hands to a minimum as this can be computationally intensive.
```python
def deck(self) -> 'icepool.Deck[T]':
    """The `Deck` the cards are dealt from."""
    return self._deck
```
The `Deck` the cards are dealt from.
```python
def hand_sizes(self) -> Qs:
    """The number of cards dealt to each hand as a tuple."""
    return self._hand_sizes
```
The number of cards dealt to each hand as a tuple.
```python
def total_cards_dealt(self) -> int:
    """The total number of cards dealt."""
    return sum(self.hand_sizes())
```
The total number of cards dealt.
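The weights of a deal come from counting how many ways the requested hand sizes can be drawn from the deck. As a plain-Python illustration (not icepool API) of that counting, assuming a deck of distinct cards, the total number of ways to deal the hands in order is a product of binomial coefficients:

```python
from math import comb


def total_ordered_deals(deck_size: int, *hand_sizes: int) -> int:
    """Number of ways to deal the given hand sizes, in order, from
    `deck_size` distinct cards.

    Illustrative sketch only; icepool also handles decks containing
    duplicate cards, which it splits via a hypergeometric recursion.
    """
    total = 1
    remaining = deck_size
    for hand in hand_sizes:
        total *= comb(remaining, hand)  # choose this hand from what's left
        remaining -= hand
    return total


# Two 5-card hands from a 52-card deck: comb(52, 5) * comb(47, 5)
print(total_ordered_deals(52, 5, 5))
```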
```python
def multiset_function(function: Callable[..., NestedTupleOrEvaluator[T, U_co]],
                      /) -> MultisetEvaluator[T, NestedTupleOrOutcome[U_co]]:
    """EXPERIMENTAL: A decorator that turns a function into a `MultisetEvaluator`.

    The provided function should take in arguments representing multisets,
    do a limited set of operations on them (see `MultisetExpression`), and
    finish off with an evaluation. You can return tuples to perform a joint
    evaluation.

    For example, to create an evaluator which computes the elements each of two
    multisets has that the other doesn't:

    ```python
    @multiset_function
    def two_way_difference(a, b):
        return (a - b).expand(), (b - a).expand()
    ```

    Any globals inside `function` are effectively bound at the time
    `multiset_function` is invoked. Note that this is different from how
    ordinary Python closures behave. For example,

    ```python
    target = [1, 2, 3]

    @multiset_function
    def count_intersection(a):
        return (a & target).count()

    print(count_intersection(d6.pool(3)))

    target = [1]
    print(count_intersection(d6.pool(3)))
    ```

    would produce the same thing both times. Likewise, the function should not
    have any side effects.

    Be careful when using control structures: you cannot branch on the value of
    a multiset expression or evaluation, so e.g.

    ```python
    @multiset_function
    def bad(a, b):
        if a == b:
            ...
    ```

    is not allowed.

    `multiset_function` has considerable overhead, being effectively a
    mini-language within Python. For better performance, you can try
    implementing your own subclass of `MultisetEvaluator` directly.

    Args:
        function: This should take in a fixed number of multiset variables and
            output an evaluator or a nested tuple of evaluators. Tuples will
            result in a `JointEvaluator`.
    """
    parameters = inspect.signature(function, follow_wrapped=False).parameters
    multiset_variables = []
    for index, parameter in enumerate(parameters.values()):
        if parameter.kind not in [
                inspect.Parameter.POSITIONAL_ONLY,
                inspect.Parameter.POSITIONAL_OR_KEYWORD,
        ] or parameter.default != inspect.Parameter.empty:
            raise ValueError(
                'Callable must take only a fixed number of positional arguments.'
            )
        multiset_variables.append(
            MV(is_free=True, index=index, name=parameter.name))
    tuple_or_evaluator = function(*multiset_variables)
    evaluator = replace_tuples_with_joint_evaluator(tuple_or_evaluator)
    # This is not actually a function.
    return update_wrapper(evaluator, function)  # type: ignore
```
EXPERIMENTAL: A decorator that turns a function into a `MultisetEvaluator`.

The provided function should take in arguments representing multisets, do a limited set of operations on them (see `MultisetExpression`), and finish off with an evaluation. You can return tuples to perform a joint evaluation.

For example, to create an evaluator which computes the elements each of two multisets has that the other doesn't:

```python
@multiset_function
def two_way_difference(a, b):
    return (a - b).expand(), (b - a).expand()
```

Any globals inside `function` are effectively bound at the time `multiset_function` is invoked. Note that this is different from how ordinary Python closures behave. For example,

```python
target = [1, 2, 3]

@multiset_function
def count_intersection(a):
    return (a & target).count()

print(count_intersection(d6.pool(3)))

target = [1]
print(count_intersection(d6.pool(3)))
```

would produce the same thing both times. Likewise, the function should not have any side effects.

Be careful when using control structures: you cannot branch on the value of a multiset expression or evaluation, so e.g.

```python
@multiset_function
def bad(a, b):
    if a == b:
        ...
```

is not allowed.

`multiset_function` has considerable overhead, being effectively a mini-language within Python. For better performance, you can try implementing your own subclass of `MultisetEvaluator` directly.

Arguments:

- function: This should take in a fixed number of multiset variables and output an evaluator or a nested tuple of evaluators. Tuples will result in a `JointEvaluator`.
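To see what the `two_way_difference` example computes on concrete multisets, here is a plain-Python analogue using `collections.Counter`. This is not icepool code: the decorated version evaluates the expression over every possible roll of the input pools, whereas this sketch operates on a single pair of multisets.

```python
from collections import Counter


def two_way_difference(a, b):
    """Elements each multiset has that the other doesn't, as a pair of
    sorted tuples (mirroring what `.expand()` would produce for one roll)."""
    ca, cb = Counter(a), Counter(b)
    return (tuple(sorted((ca - cb).elements())),
            tuple(sorted((cb - ca).elements())))


print(two_way_difference([1, 1, 2, 3], [1, 2, 2, 4]))
# -> ((1, 3), (2, 4))
```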
```python
def format_probability_inverse(probability, /, int_start: int = 20):
    """EXPERIMENTAL: Formats the inverse of a value as "1 in N".

    Args:
        probability: The value to be formatted.
        int_start: If N = 1 / probability is between this value and 1 million
            times this value, it will be formatted as an integer. Otherwise it
            will be formatted as a float with precision at least 1 part in
            int_start.
    """
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability >= 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'

    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'
```
EXPERIMENTAL: Formats the inverse of a value as "1 in N".

Arguments:

- probability: The value to be formatted.
- int_start: If N = 1 / probability is between this value and 1 million times this value, it will be formatted as an integer. Otherwise it will be formatted as a float with precision at least 1 part in int_start.
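For example, with the default `int_start = 20`, a probability of 0.05 falls in the integer range, 1/3 falls below it and gets one decimal of precision, and values outside (0, 1) are rejected. The snippet below is a self-contained copy of the listing above so it can be run without icepool:

```python
import math


def format_probability_inverse(probability, /, int_start: int = 20):
    """Formats the inverse of a value as "1 in N" (copy of the listing above)."""
    max_precision = math.ceil(math.log10(int_start))
    if probability <= 0 or probability >= 1:
        return 'n/a'
    product = probability * int_start
    if product <= 1:
        if probability * int_start * 10**6 <= 1:
            # N is huge; fall back to scientific notation.
            return f'1 in {1.0 / probability:<.{max_precision}e}'
        else:
            return f'1 in {round(1 / probability)}'
    # N < int_start: use just enough decimal places.
    precision = 0
    precision_factor = 1
    while product > precision_factor and precision < max_precision:
        precision += 1
        precision_factor *= 10
    return f'1 in {1.0 / probability:<.{precision}f}'


print(format_probability_inverse(0.05))   # 1 in 20
print(format_probability_inverse(1 / 3))  # 1 in 3.0
print(format_probability_inverse(2.0))    # n/a
```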