Observations¶
Interface¶
- class ecole.typing.ObservationFunction(*args, **kwargs)[source]¶
Class responsible for extracting observations.
Observation functions are objects given to the Environment to extract the observations used to take the next action.
This class presents the interface expected to define a valid observation function. It is not necessary to inherit from this class, as observation functions are defined by structural subtyping. It exists to support Python type hints.
See also
DataFunction
Observation functions are equivalent to the generic data function, that is, a function to extract an arbitrary type of data.
- __init__(*args, **kwargs)¶
Initialize self. See help(type(self)) for accurate signature.
- before_reset(model: ecole.scip.Model) → None[source]¶
Reset internal data at the start of episodes.
The method is called on new episodes by reset(), right before the MDP is actually reset, that is, right before the environment calls reset_dynamics(). It is usually used to reset the internal data.
- Parameters
model – The Model defining the current state of the solver.
- extract(model: ecole.scip.Model, done: bool) → ecole.typing.Observation[source]¶
Extract the observation on the given state.
Extract the observation after transitioning to the new state given by model. The function is responsible for keeping track of relevant information from previous states. This can safely be done in this method as it will only be called once per state, i.e., this method is not a getter and can have side effects.
- Parameters
model – The Model defining the current state of the solver.
done – A flag indicating whether the state is terminal (as decided by the environment).
- Returns
The return is passed to the user by the environment.
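Because observation functions are defined by structural subtyping, any object exposing before_reset and extract with the signatures above can be passed to an environment. Below is a minimal sketch of a custom observation function; the use of as_pyscipopt() and the PySCIPOpt getNNodes() call are illustrative assumptions, not part of this interface.

```python
import ecole


class NodeCount:
    """Toy observation function returning the number of processed nodes."""

    def before_reset(self, model: ecole.scip.Model) -> None:
        # Nothing to reset between episodes for this simple example.
        pass

    def extract(self, model: ecole.scip.Model, done: bool):
        # as_pyscipopt() exposes the underlying SCIP model; getNNodes() is a
        # PySCIPOpt call used here purely for illustration.
        return model.as_pyscipopt().getNNodes()
```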
Listing¶
The list of observation functions relevant to users is given below.
Node Bipartite¶
- class ecole.observation.NodeBipartite¶
Bipartite graph observation function on branch-and-bound nodes.
This observation function extracts structured NodeBipartiteObs observations.
- __init__(self: ecole.observation.NodeBipartite, cache: bool = False) → None¶
Constructor for NodeBipartite.
- Parameters
cache – Whether or not to cache static features within an episode. Currently, this is only safe if cutting planes are disabled.
- before_reset(self: ecole.observation.NodeBipartite, model: ecole.scip.Model) → None¶
Cache some features not expected to change during an episode.
- extract(self: ecole.observation.NodeBipartite, model: ecole.scip.Model, done: bool) → Optional[ecole.observation.NodeBipartiteObs]¶
Extract a new NodeBipartiteObs.
- class ecole.observation.NodeBipartiteObs¶
Bipartite graph observation for branch-and-bound nodes.
The optimization problem is represented as a heterogeneous bipartite graph. On one side, a node is associated with one variable; on the other side, a node is associated with one constraint. There exists an edge between a variable and a constraint if the variable appears in the constraint with a non-zero coefficient.
Each variable and constraint node is associated with a vector of features. Each edge is associated with the coefficient of the variable in the constraint.
- class ColumnFeatures¶
Members:
objective
is_type_binary
is_type_integer
is_type_implicit_integer
is_type_continuous
has_lower_bound
has_upper_bound
normed_reduced_cost
solution_value
solution_frac
is_solution_at_lower_bound
is_solution_at_upper_bound
scaled_age
incumbent_value
average_incumbent_value
is_basis_lower
is_basis_basic
is_basis_upper
is_basis_zero
- __init__(self: ecole.observation.NodeBipartiteObs.ColumnFeatures, value: int) → None¶
- average_incumbent_value = <ColumnFeatures.average_incumbent_value: 14>¶
- has_lower_bound = <ColumnFeatures.has_lower_bound: 5>¶
- has_upper_bound = <ColumnFeatures.has_upper_bound: 6>¶
- incumbent_value = <ColumnFeatures.incumbent_value: 13>¶
- is_basis_basic = <ColumnFeatures.is_basis_basic: 16>¶
- is_basis_lower = <ColumnFeatures.is_basis_lower: 15>¶
- is_basis_upper = <ColumnFeatures.is_basis_upper: 17>¶
- is_basis_zero = <ColumnFeatures.is_basis_zero: 18>¶
- is_solution_at_lower_bound = <ColumnFeatures.is_solution_at_lower_bound: 10>¶
- is_solution_at_upper_bound = <ColumnFeatures.is_solution_at_upper_bound: 11>¶
- is_type_binary = <ColumnFeatures.is_type_binary: 1>¶
- is_type_continuous = <ColumnFeatures.is_type_continuous: 4>¶
- is_type_implicit_integer = <ColumnFeatures.is_type_implicit_integer: 3>¶
- is_type_integer = <ColumnFeatures.is_type_integer: 2>¶
- property name¶
- normed_reduced_cost = <ColumnFeatures.normed_reduced_cost: 7>¶
- objective = <ColumnFeatures.objective: 0>¶
- scaled_age = <ColumnFeatures.scaled_age: 12>¶
- solution_frac = <ColumnFeatures.solution_frac: 9>¶
- solution_value = <ColumnFeatures.solution_value: 8>¶
- property value¶
- class RowFeatures¶
Members:
bias
objective_cosine_similarity
is_tight
dual_solution_value
scaled_age
- __init__(self: ecole.observation.NodeBipartiteObs.RowFeatures, value: int) → None¶
- bias = <RowFeatures.bias: 0>¶
- dual_solution_value = <RowFeatures.dual_solution_value: 3>¶
- is_tight = <RowFeatures.is_tight: 2>¶
- property name¶
- objective_cosine_similarity = <RowFeatures.objective_cosine_similarity: 1>¶
- scaled_age = <RowFeatures.scaled_age: 4>¶
- property value¶
- __init__(*args, **kwargs)¶
Initialize self. See help(type(self)) for accurate signature.
- property column_features¶
A matrix where each row represents a variable, and each column a feature of the variables.
- property edge_features¶
The constraint matrix of the optimization problem, with rows for constraints and columns for variables.
- property row_features¶
A matrix where each row represents a constraint, and each column a feature of the constraints.
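For instance, NodeBipartiteObs can be extracted at every branching decision with the Branching environment. The sketch below assumes a set covering instance generator and a trivial "branch on the first candidate" policy, both placeholders.

```python
import ecole

env = ecole.environment.Branching(
    observation_function=ecole.observation.NodeBipartite()
)
instances = ecole.instance.SetCoverGenerator()

obs, action_set, reward_offset, done, info = env.reset(next(instances))
while not done:
    # The observation exposes the bipartite graph of the current node.
    print(obs.column_features.shape, obs.row_features.shape)
    # Branch on the first candidate variable (placeholder policy).
    obs, action_set, reward, done, info = env.step(action_set[0])
```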
Milp Bipartite¶
- class ecole.observation.MilpBipartite¶
Bipartite graph observation function for the sub-MILP at the latest branch-and-bound node.
This observation function extracts structured MilpBipartiteObs observations.
- __init__(self: ecole.observation.MilpBipartite, normalize: bool = False) → None¶
Constructor for MilpBipartite.
- Parameters
normalize – Whether the features should be normalized. This is recommended for some applications such as deep learning models.
- before_reset(self: ecole.observation.MilpBipartite, model: ecole.scip.Model) → None¶
Do nothing.
- extract(self: ecole.observation.MilpBipartite, model: ecole.scip.Model, done: bool) → Optional[ecole.observation.MilpBipartiteObs]¶
Extract a new
MilpBipartiteObs
.
- class ecole.observation.MilpBipartiteObs¶
Bipartite graph observation that represents the most recent MILP during presolving.
The optimization problem is represented as a heterogeneous bipartite graph. On one side, a node is associated with one variable; on the other side, a node is associated with one constraint. There exists an edge between a variable and a constraint if the variable appears in the constraint with a non-zero coefficient.
Each variable and constraint node is associated with a vector of features. Each edge is associated with the coefficient of the variable in the constraint.
- class ConstraintFeatures¶
Members:
bias
- __init__(self: ecole.observation.MilpBipartiteObs.ConstraintFeatures, value: int) → None¶
- bias = <ConstraintFeatures.bias: 0>¶
- property name¶
- property value¶
- class VariableFeatures¶
Members:
objective
is_type_binary
is_type_integer
is_type_implicit_integer
is_type_continuous
has_lower_bound
has_upper_bound
lower_bound
upper_bound
- __init__(self: ecole.observation.MilpBipartiteObs.VariableFeatures, value: int) → None¶
- has_lower_bound = <VariableFeatures.has_lower_bound: 5>¶
- has_upper_bound = <VariableFeatures.has_upper_bound: 6>¶
- is_type_binary = <VariableFeatures.is_type_binary: 1>¶
- is_type_continuous = <VariableFeatures.is_type_continuous: 4>¶
- is_type_implicit_integer = <VariableFeatures.is_type_implicit_integer: 3>¶
- is_type_integer = <VariableFeatures.is_type_integer: 2>¶
- lower_bound = <VariableFeatures.lower_bound: 7>¶
- property name¶
- objective = <VariableFeatures.objective: 0>¶
- upper_bound = <VariableFeatures.upper_bound: 8>¶
- property value¶
- __init__(*args, **kwargs)¶
Initialize self. See help(type(self)) for accurate signature.
- property constraint_features¶
A matrix where each row represents a constraint, and each column a feature of the constraints.
- property edge_features¶
The constraint matrix of the optimization problem, with rows for constraints and columns for variables.
- property variable_features¶
A matrix where each row represents a variable, and each column a feature of the variables.
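As a sketch, the MILP representation can for example be extracted before solving with the Configuring environment; the set covering generator and the empty parameter dictionary passed to step (solve with default settings) are placeholder choices.

```python
import ecole

env = ecole.environment.Configuring(
    observation_function=ecole.observation.MilpBipartite(normalize=True)
)
instances = ecole.instance.SetCoverGenerator()

obs, action_set, reward_offset, done, info = env.reset(next(instances))
if not done:
    # Feature matrices of the MILP variables and constraints.
    print(obs.variable_features.shape, obs.constraint_features.shape)
    # Solve with default parameters (placeholder action).
    env.step({})
```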
Strong Branching Scores¶
- class ecole.observation.StrongBranchingScores¶
Strong branching score observation function on branch-and-bound nodes.
This observation obtains scores for all LP or pseudo candidate variables at a branch-and-bound node. The strong branching score measures the quality of branching for each variable. This observation can be used as an expert for imitation learning algorithms.
This observation function extracts an array containing the strong branching score for each variable in the problem, which can be indexed by the action set. Variables for which a strong branching score is not applicable are filled with NaN.
- __init__(self: ecole.observation.StrongBranchingScores, pseudo_candidates: bool = True) → None¶
Constructor for StrongBranchingScores.
- Parameters
pseudo_candidates – Determines whether strong branching scores are computed for pseudo-candidate variables (if true) or LP candidate variables (if false). By default, scores for pseudo-candidates are computed.
- before_reset(self: ecole.observation.StrongBranchingScores, model: ecole.scip.Model) → None¶
Do nothing.
- extract(self: ecole.observation.StrongBranchingScores, model: ecole.scip.Model, done: bool) → Optional[xt::xtensor]¶
Extract an array containing strong branching scores.
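For example, expert branching decisions can be collected for imitation learning by always picking the candidate with the highest score. The instance generator below is a placeholder assumption.

```python
import ecole

env = ecole.environment.Branching(
    observation_function=ecole.observation.StrongBranchingScores(pseudo_candidates=False)
)
instances = ecole.instance.SetCoverGenerator()

scores, action_set, reward_offset, done, info = env.reset(next(instances))
while not done:
    # Follow the expert: branch on the candidate with the highest score.
    best_candidate = action_set[scores[action_set].argmax()]
    scores, action_set, reward, done, info = env.step(best_candidate)
```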
Pseudocosts¶
- class ecole.observation.Pseudocosts¶
Pseudocosts observation function on branch-and-bound nodes.
This observation obtains pseudocosts for all LP fractional candidate variables at a branch-and-bound node. The pseudocost is a cheap approximation to the strong branching score and measures the quality of branching for each variable. This observation can be used as a practical branching strategy by always branching on the variable with the highest pseudocost, although in practice it is not as efficient as SCIP’s default strategy, reliability pseudocost branching (also known as hybrid branching).
This observation function extracts an array containing the pseudocost for each variable in the problem, which can be indexed by the action set. Variables for which a pseudocost is not applicable are filled with NaN.
- __init__(self: ecole.observation.Pseudocosts) → None¶
- before_reset(self: ecole.observation.Pseudocosts, model: ecole.scip.Model) → None¶
Do nothing.
- extract(self: ecole.observation.Pseudocosts, model: ecole.scip.Model, done: bool) → Optional[xt::xtensor]¶
Extract an array containing pseudocosts.
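Used as a branching rule, the pattern mirrors the strong branching example above: at every node, branch on the candidate with the highest pseudocost. The instance generator is again a placeholder.

```python
import ecole

env = ecole.environment.Branching(
    observation_function=ecole.observation.Pseudocosts()
)
instances = ecole.instance.SetCoverGenerator()

pseudocosts, action_set, reward_offset, done, info = env.reset(next(instances))
while not done:
    # Branch on the candidate with the highest pseudocost.
    action = action_set[pseudocosts[action_set].argmax()]
    pseudocosts, action_set, reward, done, info = env.step(action)
```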
Khalil et al. 2016¶
- class ecole.observation.Khalil2016¶
Branching candidates features from Khalil et al. (2016).
This observation function extracts structured Khalil2016Obs observations.
- __init__(self: ecole.observation.Khalil2016) → None¶
- before_reset(self: ecole.observation.Khalil2016, model: ecole.scip.Model) → None¶
Reset static features cache.
- extract(self: ecole.observation.Khalil2016, model: ecole.scip.Model, done: bool) → Optional[ecole.observation.Khalil2016Obs]¶
Extract the observation matrix.
- class ecole.observation.Khalil2016Obs¶
Branching candidates features from Khalil et al. (2016).
The observation is a matrix where rows represent pseudo branching candidates and columns represent features related to these variables. See [Khalil2016] for a complete reference on this observation function.
The first Khalil2016Obs.n_static_features features are static (they do not change through the solving process), and the remaining Khalil2016Obs.n_dynamic_features are dynamic.
- Khalil2016
Khalil, Elias Boutros, Pierre Le Bodic, Le Song, George Nemhauser, and Bistra Dilkina. “Learning to branch in mixed integer programming.” Thirtieth AAAI Conference on Artificial Intelligence. 2016.
- class Features¶
Members:
obj_coef
obj_coef_pos_part
obj_coef_neg_part
n_rows
rows_deg_mean
rows_deg_stddev
rows_deg_min
rows_deg_max
rows_pos_coefs_count
rows_pos_coefs_mean
rows_pos_coefs_stddev
rows_pos_coefs_min
rows_pos_coefs_max
rows_neg_coefs_count
rows_neg_coefs_mean
rows_neg_coefs_stddev
rows_neg_coefs_min
rows_neg_coefs_max
slack
ceil_dist
pseudocost_up
pseudocost_down
pseudocost_ratio
pseudocost_sum
pseudocost_product
n_cutoff_up
n_cutoff_down
n_cutoff_up_ratio
n_cutoff_down_ratio
rows_dynamic_deg_mean
rows_dynamic_deg_stddev
rows_dynamic_deg_min
rows_dynamic_deg_max
rows_dynamic_deg_mean_ratio
rows_dynamic_deg_min_ratio
rows_dynamic_deg_max_ratio
coef_pos_rhs_ratio_min
coef_pos_rhs_ratio_max
coef_neg_rhs_ratio_min
coef_neg_rhs_ratio_max
pos_coef_pos_coef_ratio_min
pos_coef_pos_coef_ratio_max
pos_coef_neg_coef_ratio_min
pos_coef_neg_coef_ratio_max
neg_coef_pos_coef_ratio_min
neg_coef_pos_coef_ratio_max
neg_coef_neg_coef_ratio_min
neg_coef_neg_coef_ratio_max
active_coef_weight1_count
active_coef_weight1_sum
active_coef_weight1_mean
active_coef_weight1_stddev
active_coef_weight1_min
active_coef_weight1_max
active_coef_weight2_count
active_coef_weight2_sum
active_coef_weight2_mean
active_coef_weight2_stddev
active_coef_weight2_min
active_coef_weight2_max
active_coef_weight3_count
active_coef_weight3_sum
active_coef_weight3_mean
active_coef_weight3_stddev
active_coef_weight3_min
active_coef_weight3_max
active_coef_weight4_count
active_coef_weight4_sum
active_coef_weight4_mean
active_coef_weight4_stddev
active_coef_weight4_min
active_coef_weight4_max
- __init__(self: ecole.observation.Khalil2016Obs.Features, value: int) → None¶
- active_coef_weight1_count = <Features.active_coef_weight1_count: 48>¶
- active_coef_weight1_max = <Features.active_coef_weight1_max: 53>¶
- active_coef_weight1_mean = <Features.active_coef_weight1_mean: 50>¶
- active_coef_weight1_min = <Features.active_coef_weight1_min: 52>¶
- active_coef_weight1_stddev = <Features.active_coef_weight1_stddev: 51>¶
- active_coef_weight1_sum = <Features.active_coef_weight1_sum: 49>¶
- active_coef_weight2_count = <Features.active_coef_weight2_count: 54>¶
- active_coef_weight2_max = <Features.active_coef_weight2_max: 59>¶
- active_coef_weight2_mean = <Features.active_coef_weight2_mean: 56>¶
- active_coef_weight2_min = <Features.active_coef_weight2_min: 58>¶
- active_coef_weight2_stddev = <Features.active_coef_weight2_stddev: 57>¶
- active_coef_weight2_sum = <Features.active_coef_weight2_sum: 55>¶
- active_coef_weight3_count = <Features.active_coef_weight3_count: 60>¶
- active_coef_weight3_max = <Features.active_coef_weight3_max: 65>¶
- active_coef_weight3_mean = <Features.active_coef_weight3_mean: 62>¶
- active_coef_weight3_min = <Features.active_coef_weight3_min: 64>¶
- active_coef_weight3_stddev = <Features.active_coef_weight3_stddev: 63>¶
- active_coef_weight3_sum = <Features.active_coef_weight3_sum: 61>¶
- active_coef_weight4_count = <Features.active_coef_weight4_count: 66>¶
- active_coef_weight4_max = <Features.active_coef_weight4_max: 71>¶
- active_coef_weight4_mean = <Features.active_coef_weight4_mean: 68>¶
- active_coef_weight4_min = <Features.active_coef_weight4_min: 70>¶
- active_coef_weight4_stddev = <Features.active_coef_weight4_stddev: 69>¶
- active_coef_weight4_sum = <Features.active_coef_weight4_sum: 67>¶
- ceil_dist = <Features.ceil_dist: 19>¶
- coef_neg_rhs_ratio_max = <Features.coef_neg_rhs_ratio_max: 39>¶
- coef_neg_rhs_ratio_min = <Features.coef_neg_rhs_ratio_min: 38>¶
- coef_pos_rhs_ratio_max = <Features.coef_pos_rhs_ratio_max: 37>¶
- coef_pos_rhs_ratio_min = <Features.coef_pos_rhs_ratio_min: 36>¶
- n_cutoff_down = <Features.n_cutoff_down: 26>¶
- n_cutoff_down_ratio = <Features.n_cutoff_down_ratio: 28>¶
- n_cutoff_up = <Features.n_cutoff_up: 25>¶
- n_cutoff_up_ratio = <Features.n_cutoff_up_ratio: 27>¶
- n_rows = <Features.n_rows: 3>¶
- property name¶
- neg_coef_neg_coef_ratio_max = <Features.neg_coef_neg_coef_ratio_max: 47>¶
- neg_coef_neg_coef_ratio_min = <Features.neg_coef_neg_coef_ratio_min: 46>¶
- neg_coef_pos_coef_ratio_max = <Features.neg_coef_pos_coef_ratio_max: 45>¶
- neg_coef_pos_coef_ratio_min = <Features.neg_coef_pos_coef_ratio_min: 44>¶
- obj_coef = <Features.obj_coef: 0>¶
- obj_coef_neg_part = <Features.obj_coef_neg_part: 2>¶
- obj_coef_pos_part = <Features.obj_coef_pos_part: 1>¶
- pos_coef_neg_coef_ratio_max = <Features.pos_coef_neg_coef_ratio_max: 43>¶
- pos_coef_neg_coef_ratio_min = <Features.pos_coef_neg_coef_ratio_min: 42>¶
- pos_coef_pos_coef_ratio_max = <Features.pos_coef_pos_coef_ratio_max: 41>¶
- pos_coef_pos_coef_ratio_min = <Features.pos_coef_pos_coef_ratio_min: 40>¶
- pseudocost_down = <Features.pseudocost_down: 21>¶
- pseudocost_product = <Features.pseudocost_product: 24>¶
- pseudocost_ratio = <Features.pseudocost_ratio: 22>¶
- pseudocost_sum = <Features.pseudocost_sum: 23>¶
- pseudocost_up = <Features.pseudocost_up: 20>¶
- rows_deg_max = <Features.rows_deg_max: 7>¶
- rows_deg_mean = <Features.rows_deg_mean: 4>¶
- rows_deg_min = <Features.rows_deg_min: 6>¶
- rows_deg_stddev = <Features.rows_deg_stddev: 5>¶
- rows_dynamic_deg_max = <Features.rows_dynamic_deg_max: 32>¶
- rows_dynamic_deg_max_ratio = <Features.rows_dynamic_deg_max_ratio: 35>¶
- rows_dynamic_deg_mean = <Features.rows_dynamic_deg_mean: 29>¶
- rows_dynamic_deg_mean_ratio = <Features.rows_dynamic_deg_mean_ratio: 33>¶
- rows_dynamic_deg_min = <Features.rows_dynamic_deg_min: 31>¶
- rows_dynamic_deg_min_ratio = <Features.rows_dynamic_deg_min_ratio: 34>¶
- rows_dynamic_deg_stddev = <Features.rows_dynamic_deg_stddev: 30>¶
- rows_neg_coefs_count = <Features.rows_neg_coefs_count: 13>¶
- rows_neg_coefs_max = <Features.rows_neg_coefs_max: 17>¶
- rows_neg_coefs_mean = <Features.rows_neg_coefs_mean: 14>¶
- rows_neg_coefs_min = <Features.rows_neg_coefs_min: 16>¶
- rows_neg_coefs_stddev = <Features.rows_neg_coefs_stddev: 15>¶
- rows_pos_coefs_count = <Features.rows_pos_coefs_count: 8>¶
- rows_pos_coefs_max = <Features.rows_pos_coefs_max: 12>¶
- rows_pos_coefs_mean = <Features.rows_pos_coefs_mean: 9>¶
- rows_pos_coefs_min = <Features.rows_pos_coefs_min: 11>¶
- rows_pos_coefs_stddev = <Features.rows_pos_coefs_stddev: 10>¶
- slack = <Features.slack: 18>¶
- property value¶
- __init__(*args, **kwargs)¶
Initialize self. See help(type(self)) for accurate signature.
- property features¶
A matrix where each row represents a variable, and each column a feature of the variables.
- n_dynamic_features = 54¶
- n_static_features = 18¶
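As a small usage sketch, the static and dynamic parts of the feature matrix can be separated using the counts above; the Branching environment and the set covering generator are placeholder assumptions.

```python
import ecole

env = ecole.environment.Branching(
    observation_function=ecole.observation.Khalil2016()
)
instances = ecole.instance.SetCoverGenerator()

obs, action_set, reward_offset, done, info = env.reset(next(instances))
if not done:
    n_static = ecole.observation.Khalil2016Obs.n_static_features
    static = obs.features[:, :n_static]    # constant during the episode
    dynamic = obs.features[:, n_static:]   # recomputed at every node
    print(static.shape, dynamic.shape)
```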