feat: Add missing of methods to scores in Python (#1055)
Christopher-Chianelli authored Aug 26, 2024
1 parent 6c7416c commit ff6ae02
Showing 9 changed files with 181 additions and 33 deletions.
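
For quick reference, a minimal sketch of the factory methods this commit adds (names and signatures are taken from the diff below; the `timefold.solver.score` import path is an assumption):

----
from timefold.solver.score import HardSoftScore, HardMediumSoftScore, BendableScore

# Single-level shortcuts instead of spelling out every level:
HardSoftScore.of_hard(2)             # same as HardSoftScore.of(2, 0)
HardSoftScore.of_soft(20)            # same as HardSoftScore.of(0, 20)
HardMediumSoftScore.of_medium(1)     # same as HardMediumSoftScore.of(0, 1, 0)

# of_uninitialized additionally records an init score for partially initialized solutions:
HardSoftScore.of_uninitialized(-2, 0, 0)

# Bendable factories take the hard/soft level counts plus a level index:
BendableScore.of_hard(2, 1, 0, 5)    # hard_scores=(5, 0), soft_scores=(0,)
BendableScore.zero(2, 1)             # hard_scores=(0, 0), soft_scores=(0,)
----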
@@ -185,7 +185,7 @@ Python::
constraint_weight_overrides = ConstraintWeightOverrides(
{
"Vehicle capacity": HardSoftScore.of(2, 0),
"Vehicle capacity": HardSoftScore.of_hard(2),
"Service finished after max end time": HardSoftScore.ZERO
}
)
@@ -564,7 +564,7 @@ Alternatively, you can also specify the trend for each score level separately:
=== Invalid score detection

When you put the xref:using-timefold-solver/running-the-solver.adoc#environmentMode[`environmentMode`] in `FULL_ASSERT` (or ``FAST_ASSERT``),
it will detect score corruption in the xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental score calculation].
it will detect score corruption in the xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental score calculation].
However, that will not verify that your score calculator actually implements your score constraints as your business desires.
For example, one constraint might consistently match the wrong pattern.
To verify the constraints against an independent implementation, configure an ``assertionScoreDirectorFactory``:
@@ -33,7 +33,7 @@ and still compare it with the original's calculation speed.
Comparing the best score with the original's best score is pointless: it's comparing apples and oranges.
====

[#incrementalScoreCalculation]
[#incrementalScoreCalculationPerformance]
== Incremental score calculation (with deltas)

When a solution changes, incremental score calculation (a.k.a. delta-based score calculation)
@@ -31,7 +31,7 @@ def do_not_assign_ann() -> int:
----
====

However, that scales poorly because it doesn't do an xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental calculation]:
However, that scales poorly because it doesn't do an xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental calculation]:
When the planning variable of a single `Shift` changes, the normal Streams API
has to re-execute the entire stream from scratch to recalculate the score.
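
For illustration, a minimal sketch of such a non-incremental calculation (the `Shift` class and the rule itself are hypothetical stand-ins for the example above): every move triggers a full pass over all shifts.

----
from dataclasses import dataclass

@dataclass
class Shift:
    employee: str | None = None

def do_not_assign_ann(shifts: list[Shift]) -> int:
    # Recomputed over *all* shifts after every move, even when only one shift changed.
    return -sum(1 for shift in shifts if shift.employee == "Ann")
----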

@@ -85,7 +85,7 @@ image::constraints-and-score/score-calculation/constraintStreamIntroduction.png[

If any of the instances change during solving, the constraint stream automatically detects the change
and only recalculates the minimum necessary portion of the problem that is affected by the change.
The following figure illustrates this xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental score calculation]:
The following figure illustrates this xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental score calculation]:

image::constraints-and-score/score-calculation/constraintStreamIncrementalCalculation.png[align="center"]
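
For example, in a constraint stream such as the following (a sketch mirroring the `published_timeslot` constraint that appears later in this diff; the `Talk` planning entity and the `timefold.solver.score` import path are assumptions), changing a single `Talk` only re-evaluates the filter and penalty for that talk rather than for the whole dataset:

----
from timefold.solver.score import ConstraintFactory, Constraint, HardSoftScore

def published_timeslot(factory: ConstraintFactory) -> Constraint:
    return (factory.for_each(Talk)  # Talk is a planning entity defined elsewhere
            .filter(lambda talk: talk.published_timeslot is not None
                    and talk.timeslot != talk.published_timeslot)
            # Only talks whose timeslot actually changed get re-scored.
            .penalize(HardSoftScore.of_soft(1000))
            .as_constraint("Published timeslot"))
----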

@@ -1191,7 +1191,7 @@ unless it is a sorted collector such as `toSortedSet` or `toSortedMap`.
[NOTE]
====
Collecting elements into a `Collection` negates the benefits of
xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental score calculation],
xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental score calculation],
as all operations on the resulting `Collection` will no longer be incremental.
If performance is a concern, avoid these collectors.
====
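
As a rough sketch of that trade-off (the `Shift` entity and the Python `ConstraintCollectors`/`group_by` names are assumptions based on the surrounding docs, not part of this diff): a scalar collector such as `count()` stays incremental, whereas `to_list()` materializes a collection that has to be reprocessed on every change.

----
from timefold.solver.score import ConstraintFactory, ConstraintCollectors, HardSoftScore

def too_many_shifts(factory: ConstraintFactory):
    return (factory.for_each(Shift)  # Shift is a planning entity defined elsewhere
            # Prefer a scalar collector such as count() ...
            .group_by(lambda shift: shift.employee, ConstraintCollectors.count())
            .filter(lambda employee, shift_count: shift_count > 5)
            .penalize(HardSoftScore.ONE_SOFT,
                      lambda employee, shift_count: shift_count - 5)
            # ... over ConstraintCollectors.to_list(), whose resulting list is
            # no longer maintained incrementally.
            .as_constraint("Too many shifts per employee"))
----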
@@ -2408,7 +2408,7 @@ def test_given_facts_multiple_constraints():
(constraint_verifier.verify_that()
.given(vehicleA, visit1, visit2)
.scores(HardSoftScore.of(0, 20)))
.scores(HardSoftScore.of_soft(20)))
----
====

@@ -2478,7 +2478,7 @@ An easy way to implement your score calculation as imperative code:
** Useful for prototyping.
* Disadvantages:
** Slower, typically not suitable for production.
** Does not scale because there is no xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental score calculation].
** Does not scale because there is no xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental score calculation].
** Cannot xref:constraints-and-score/understanding-the-score.adoc[explain the score].

To start using an Easy score calculator, implement the one method of the interface ``EasyScoreCalculator``:
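
For illustration, a minimal sketch of what such a calculator can look like in Python (the solution fields and helper are hypothetical, and the wiring into the solver configuration follows the steps described elsewhere in this guide):

----
from timefold.solver.score import HardSoftScore

def calculate_score(solution) -> HardSoftScore:
    # Recomputes the full score from scratch on every call: simple, but never incremental.
    hard = 0
    soft = 0
    for shift in solution.shifts:                    # hypothetical solution field
        if shift.employee is None:
            hard -= 1                                # unassigned shift: hard penalty
        elif shift.employee.is_unavailable(shift):   # hypothetical helper
            soft -= 1
    return HardSoftScore.of(hard, soft)
----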
@@ -467,7 +467,7 @@ It is not available in the Community Edition.
There are several ways of doing multi-threaded solving:

* *<<multithreadedIncrementalSolving,Multi-threaded incremental solving>>*:
Solve 1 dataset with multiple threads without sacrificing xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental score calculation].
Solve 1 dataset with multiple threads without sacrificing xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental score calculation].
** Donate a portion of your CPU cores to Timefold Solver to scale up the score calculation speed and get the same results in a fraction of the time.
* *<<partitionedSearch,Partitioned Search>>*:
Split 1 dataset into multiple parts and solve them independently.
@@ -157,7 +157,7 @@ This combination is very efficient, because:

* A score calculation engine is *great for calculating the score* of a solution of a planning problem.
It makes it easy and scalable to add additional soft or hard constraints.
It does xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental score calculation (deltas)] without any extra code.
It does xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental score calculation (deltas)] without any extra code.
However, it tends not to be suitable for actually finding new solutions.
* An optimization algorithm is *great at finding new improving solutions* for a planning problem,
without necessarily brute-forcing every possibility.
@@ -429,7 +429,7 @@ def published_timeslot(factory: ConstraintFactory) -> Constraint:
return (factory.for_each(Talk)
.filter(lambda talk: talk.published_timeslot is not None
and talk.timeslot != talk.published_timeslot)
.penalize(HardSoftScore.of(0, 1000))
.penalize(HardSoftScore.of_soft(1000))
.as_constraint("Published timeslot")
)
----
@@ -744,7 +744,7 @@ while providing a selection of the best available options for fitting the change
It doesn't use the full xref:optimization-algorithms/optimization-algorithms.adoc#localSearch[local search algorithm].
Instead,
it uses a simple xref:optimization-algorithms/optimization-algorithms.adoc#constructionHeuristics[greedy algorithm]
together with xref:constraints-and-score/performance.adoc#incrementalScoreCalculation[incremental calculation].
together with xref:constraints-and-score/performance.adoc#incrementalScoreCalculationPerformance[incremental calculation].
This combination allows the API to find the best possible fit within the existing solution in a matter of milliseconds,
even for large planning problems.
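
A hedged sketch of how this is typically driven from code (the `recommend_fit` entry point and the domain names are assumptions based on the surrounding documentation, not part of this diff):

----
from timefold.solver import SolutionManager

solution_manager = SolutionManager.create(solver_factory)  # solver_factory configured elsewhere

# Ask for the best-fitting assignments for a newly added, unassigned shift:
recommendations = solution_manager.recommend_fit(
    schedule,                       # the current, already optimized solution
    new_shift,                      # the new planning entity to place
    lambda shift: shift.employee)   # the proposition returned for each recommendation
----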
138 changes: 126 additions & 12 deletions python/python-core/src/main/python/score/_score.py
@@ -1,8 +1,9 @@
from abc import ABC, abstractmethod
from typing import ClassVar
from dataclasses import dataclass, field
from jpype import JArray, JLong
from decimal import Decimal
from jpype import JArray, JLong
from typing import ClassVar

from .._timefold_java_interop import _java_score_mapping_dict


@@ -75,6 +76,10 @@ def is_feasible(self) -> bool:
def of(score: int) -> 'SimpleScore':
return SimpleScore(score, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, score: int) -> 'SimpleScore':
return SimpleScore(score, init_score=init_score)

@staticmethod
def parse(score_text: str) -> 'SimpleScore':
if 'init' in score_text:
@@ -138,6 +143,18 @@ def is_feasible(self) -> bool:
def of(hard_score: int, soft_score: int) -> 'HardSoftScore':
return HardSoftScore(hard_score, soft_score, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, hard_score: int, soft_score: int) -> 'HardSoftScore':
return HardSoftScore(hard_score, soft_score, init_score=init_score)

@staticmethod
def of_hard(hard_score: int) -> 'HardSoftScore':
return HardSoftScore(hard_score, 0, init_score=0)

@staticmethod
def of_soft(soft_score: int) -> 'HardSoftScore':
return HardSoftScore(0, soft_score, init_score=0)

@staticmethod
def parse(score_text: str) -> 'HardSoftScore':
if 'init' in score_text:
@@ -161,8 +178,8 @@ def __str__(self):


HardSoftScore.ZERO = HardSoftScore.of(0, 0)
HardSoftScore.ONE_HARD = HardSoftScore.of(1, 0)
HardSoftScore.ONE_SOFT = HardSoftScore.of(0, 1)
HardSoftScore.ONE_HARD = HardSoftScore.of_hard(1)
HardSoftScore.ONE_SOFT = HardSoftScore.of_soft(1)


@dataclass(unsafe_hash=True, order=True)
@@ -215,6 +232,22 @@ def is_feasible(self) -> bool:
def of(hard_score: int, medium_score: int, soft_score: int) -> 'HardMediumSoftScore':
return HardMediumSoftScore(hard_score, medium_score, soft_score, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, hard_score: int, medium_score: int, soft_score: int) -> 'HardMediumSoftScore':
return HardMediumSoftScore(hard_score, medium_score, soft_score, init_score=init_score)

@staticmethod
def of_hard(hard_score: int) -> 'HardMediumSoftScore':
return HardMediumSoftScore(hard_score, 0, 0, init_score=0)

@staticmethod
def of_medium(medium_score: int) -> 'HardMediumSoftScore':
return HardMediumSoftScore(0, medium_score, 0, init_score=0)

@staticmethod
def of_soft(soft_score: int) -> 'HardMediumSoftScore':
return HardMediumSoftScore(0, 0, soft_score, init_score=0)

@staticmethod
def parse(score_text: str) -> 'HardMediumSoftScore':
if 'init' in score_text:
@@ -240,9 +273,9 @@ def __str__(self):


HardMediumSoftScore.ZERO = HardMediumSoftScore.of(0, 0, 0)
HardMediumSoftScore.ONE_HARD = HardMediumSoftScore.of(1, 0, 0)
HardMediumSoftScore.ONE_MEDIUM = HardMediumSoftScore.of(0, 1, 0)
HardMediumSoftScore.ONE_SOFT = HardMediumSoftScore.of(0, 0, 1)
HardMediumSoftScore.ONE_HARD = HardMediumSoftScore.of_hard(1)
HardMediumSoftScore.ONE_MEDIUM = HardMediumSoftScore.of_medium(1)
HardMediumSoftScore.ONE_SOFT = HardMediumSoftScore.of_soft(1)


@dataclass(unsafe_hash=True, order=True)
@@ -268,10 +301,33 @@ class BendableScore(Score):
def is_feasible(self) -> bool:
return self.is_solution_initialized and all(score >= 0 for score in self.hard_scores)

@staticmethod
def zero(hard_levels_size: int, soft_levels_size: int) -> 'BendableScore':
return BendableScore(tuple([0] * hard_levels_size), tuple([0] * soft_levels_size))

@staticmethod
def of(hard_scores: tuple[int, ...], soft_scores: tuple[int, ...]) -> 'BendableScore':
return BendableScore(hard_scores, soft_scores, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, hard_scores: tuple[int, ...],
soft_scores: tuple[int, ...]) -> 'BendableScore':
return BendableScore(hard_scores, soft_scores, init_score=init_score)

@staticmethod
def of_hard(hard_levels_size: int, soft_levels_size: int, hard_level: int, hard_score: int) -> 'BendableScore':
hard_scores = [0] * hard_levels_size
hard_scores[hard_level] = hard_score
soft_scores = [0] * soft_levels_size
return BendableScore(tuple(hard_scores), tuple(soft_scores), init_score=0)

@staticmethod
def of_soft(hard_levels_size: int, soft_levels_size: int, soft_level: int, soft_score: int) -> 'BendableScore':
hard_scores = [0] * hard_levels_size
soft_scores = [0] * soft_levels_size
soft_scores[soft_level] = soft_score
return BendableScore(tuple(hard_scores), tuple(soft_scores), init_score=0)

@staticmethod
def parse(score_text: str) -> 'BendableScore':
if 'init' in score_text:
@@ -333,6 +389,10 @@ def is_feasible(self) -> bool:
def of(score: Decimal) -> 'SimpleDecimalScore':
return SimpleDecimalScore(score, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, score: Decimal) -> 'SimpleDecimalScore':
return SimpleDecimalScore(score, init_score=init_score)

@staticmethod
def parse(score_text: str) -> 'SimpleDecimalScore':
if 'init' in score_text:
@@ -396,6 +456,18 @@ def is_feasible(self) -> bool:
def of(hard_score: Decimal, soft_score: Decimal) -> 'HardSoftDecimalScore':
return HardSoftDecimalScore(hard_score, soft_score, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, hard_score: Decimal, soft_score: Decimal) -> 'HardSoftDecimalScore':
return HardSoftDecimalScore(hard_score, soft_score, init_score=init_score)

@staticmethod
def of_hard(hard_score: Decimal) -> 'HardSoftDecimalScore':
return HardSoftDecimalScore(hard_score, Decimal(0), init_score=0)

@staticmethod
def of_soft(soft_score: Decimal) -> 'HardSoftDecimalScore':
return HardSoftDecimalScore(Decimal(0), soft_score, init_score=0)

@staticmethod
def parse(score_text: str) -> 'HardSoftDecimalScore':
if 'init' in score_text:
@@ -419,8 +491,8 @@ def __str__(self):


HardSoftDecimalScore.ZERO = HardSoftDecimalScore.of(Decimal(0), Decimal(0))
HardSoftDecimalScore.ONE_HARD = HardSoftDecimalScore.of(Decimal(1), Decimal(0))
HardSoftDecimalScore.ONE_SOFT = HardSoftDecimalScore.of(Decimal(0), Decimal(1))
HardSoftDecimalScore.ONE_HARD = HardSoftDecimalScore.of_hard(Decimal(1))
HardSoftDecimalScore.ONE_SOFT = HardSoftDecimalScore.of_soft(Decimal(1))


@dataclass(unsafe_hash=True, order=True)
@@ -473,6 +545,23 @@ def is_feasible(self) -> bool:
def of(hard_score: Decimal, medium_score: Decimal, soft_score: Decimal) -> 'HardMediumSoftDecimalScore':
return HardMediumSoftDecimalScore(hard_score, medium_score, soft_score, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, hard_score: Decimal, medium_score: Decimal,
soft_score: Decimal) -> 'HardMediumSoftDecimalScore':
return HardMediumSoftDecimalScore(hard_score, medium_score, soft_score, init_score=init_score)

@staticmethod
def of_hard(hard_score: Decimal) -> 'HardMediumSoftDecimalScore':
return HardMediumSoftDecimalScore(hard_score, Decimal(0), Decimal(0), init_score=0)

@staticmethod
def of_medium(medium_score: Decimal) -> 'HardMediumSoftDecimalScore':
return HardMediumSoftDecimalScore(Decimal(0), medium_score, Decimal(0), init_score=0)

@staticmethod
def of_soft(soft_score: Decimal) -> 'HardMediumSoftDecimalScore':
return HardMediumSoftDecimalScore(Decimal(0), Decimal(0), soft_score, init_score=0)

@staticmethod
def parse(score_text: str) -> 'HardMediumSoftDecimalScore':
if 'init' in score_text:
@@ -498,9 +587,9 @@ def __str__(self):


HardMediumSoftDecimalScore.ZERO = HardMediumSoftDecimalScore.of(Decimal(0), Decimal(0), Decimal(0))
HardMediumSoftDecimalScore.ONE_HARD = HardMediumSoftDecimalScore.of(Decimal(1), Decimal(0), Decimal(0))
HardMediumSoftDecimalScore.ONE_MEDIUM = HardMediumSoftDecimalScore.of(Decimal(0), Decimal(1), Decimal(0))
HardMediumSoftDecimalScore.ONE_SOFT = HardMediumSoftDecimalScore.of(Decimal(0), Decimal(0), Decimal(1))
HardMediumSoftDecimalScore.ONE_HARD = HardMediumSoftDecimalScore.of_hard(Decimal(1))
HardMediumSoftDecimalScore.ONE_MEDIUM = HardMediumSoftDecimalScore.of_medium(Decimal(1))
HardMediumSoftDecimalScore.ONE_SOFT = HardMediumSoftDecimalScore.of_soft(Decimal(1))


@dataclass(unsafe_hash=True, order=True)
@@ -526,10 +615,35 @@ class BendableDecimalScore(Score):
def is_feasible(self) -> bool:
return self.is_solution_initialized and all(score >= 0 for score in self.hard_scores)

@staticmethod
def zero(hard_levels_size: int, soft_levels_size: int) -> 'BendableDecimalScore':
return BendableDecimalScore(tuple([Decimal(0)] * hard_levels_size), tuple([Decimal(0)] * soft_levels_size))

@staticmethod
def of(hard_scores: tuple[Decimal, ...], soft_scores: tuple[Decimal, ...]) -> 'BendableDecimalScore':
return BendableDecimalScore(hard_scores, soft_scores, init_score=0)

@staticmethod
def of_uninitialized(init_score: int, hard_scores: tuple[Decimal, ...], soft_scores: tuple[Decimal, ...]) -> \
'BendableDecimalScore':
return BendableDecimalScore(hard_scores, soft_scores, init_score=init_score)

@staticmethod
def of_hard(hard_levels_size: int, soft_levels_size: int, hard_level: int, hard_score: Decimal) -> \
'BendableDecimalScore':
hard_scores = [Decimal(0)] * hard_levels_size
hard_scores[hard_level] = hard_score
soft_scores = [Decimal(0)] * soft_levels_size
return BendableDecimalScore(tuple(hard_scores), tuple(soft_scores), init_score=0)

@staticmethod
def of_soft(hard_levels_size: int, soft_levels_size: int, soft_level: int, soft_score: Decimal) -> \
'BendableDecimalScore':
hard_scores = [Decimal(0)] * hard_levels_size
soft_scores = [Decimal(0)] * soft_levels_size
soft_scores[soft_level] = soft_score
return BendableDecimalScore(tuple(hard_scores), tuple(soft_scores), init_score=0)

@staticmethod
def parse(score_text: str) -> 'BendableDecimalScore':
if 'init' in score_text: