Observations

Interface

class ecole.typing.ObservationFunction(*args, **kwargs)

Class responsible for extracting observations.

Observation functions are objects given to the Environment to extract the observations used to take the next action.

This class presents the interface expected to define a valid observation function. It is not necessary to inherit from this class, as observation functions are defined by structural subtyping. It exists to support Python type hints.

See also

DataFunction

Observation functions are equivalent to the generic data function, that is, a function to extract an arbitrary type of data.

__init__(*args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

before_reset(model: ecole.scip.Model) → None

Reset internal data at the start of episodes.

The method is called at the start of new episodes by reset(), right before the MDP is actually reset, that is, right before the environment calls reset_dynamics().

It is usually used to reset the internal data.

Parameters

model – The Model defining the current state of the solver.

extract(model: ecole.scip.Model, done: bool) → ecole.typing.Observation

Extract the observation on the given state.

Extract the observation after transitioning to the new state given by model. The function is responsible for keeping track of relevant information from previous states. This can safely be done in this method as it will only be called once per state, i.e., this method is not a getter and can have side effects.

Parameters
  • model – The Model defining the current state of the solver.

  • done – A flag indicating whether the state is terminal (as decided by the environment).

Returns

The return is passed to the user by the environment.
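
Because observation functions are matched by structural subtyping, any object exposing compatible before_reset and extract methods can be passed to an environment. Below is a minimal sketch of a custom observation function that counts the states seen in an episode; the class name and the use of the Branching environment are illustrative, not part of the interface.

    import ecole


    class TransitionCounter:
        """Toy observation function counting the states seen in the current episode."""

        def before_reset(self, model: ecole.scip.Model) -> None:
            # Reset internal data at the start of every episode.
            self.count = 0

        def extract(self, model: ecole.scip.Model, done: bool) -> int:
            # Called exactly once per state, so side effects are safe here.
            self.count += 1
            return self.count


    env = ecole.environment.Branching(observation_function=TransitionCounter())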

Listing

The list of observation functions relevant to users is given below.

Nothing

ecole.observation.Nothing

alias of ecole.core.data.NoneFunction
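
When no observation is needed, for example when only rewards are of interest, Nothing can be passed to an environment and the returned observation is simply None. A minimal sketch, assuming the standard Branching environment:

    import ecole

    env = ecole.environment.Branching(observation_function=ecole.observation.Nothing())
    # Observations returned by env.reset() and env.step() are then None.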

Node Bipartite

class ecole.observation.NodeBipartite

Bipartite graph observation function on branch-and-bound nodes.

This observation function extracts a structured NodeBipartiteObs.

__init__(self: ecole.observation.NodeBipartite, cache: bool = False) → None

Constructor for NodeBipartite.

Parameters

cache – Whether or not to cache static features within an episode. Currently, this is only safe if cutting planes are disabled.

before_reset(self: ecole.observation.NodeBipartite, model: ecole.scip.Model) → None

Cache some features that are not expected to change during an episode.

extract(self: ecole.observation.NodeBipartite, model: ecole.scip.Model, done: bool) → Optional[ecole.observation.NodeBipartiteObs]

Extract a new NodeBipartiteObs.
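
A sketch of typical use in a Branching environment follows; the SetCoverGenerator instance source and the policy of always branching on the first candidate are placeholders, not recommendations.

    import ecole

    env = ecole.environment.Branching(observation_function=ecole.observation.NodeBipartite())
    instances = ecole.instance.SetCoverGenerator(n_rows=100, n_cols=200)  # placeholder instance source

    obs, action_set, reward_offset, done, info = env.reset(next(instances))
    while not done:
        # obs is a NodeBipartiteObs; branch on the first candidate for illustration.
        obs, action_set, reward, done, info = env.step(action_set[0])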

class ecole.observation.NodeBipartiteObs

Bipartite graph observation for branch-and-bound nodes.

The optimization problem is represented as a heterogeneous bipartite graph. On one side, each node is associated with one variable; on the other side, each node is associated with one LP row. There exists an edge between a variable and a constraint if the variable appears in the constraint with a non-zero coefficient.

Each variable and constraint node is associated with a vector of features. Each edge is associated with the coefficient of the variable in the constraint.

class RowFeatures

Members:

bias

objective_cosine_similarity

is_tight

dual_solution_value

scaled_age

__init__(self: ecole.observation.NodeBipartiteObs.RowFeatures, value: int) → None
bias = <RowFeatures.bias: 0>
dual_solution_value = <RowFeatures.dual_solution_value: 3>
is_tight = <RowFeatures.is_tight: 2>
property name
objective_cosine_similarity = <RowFeatures.objective_cosine_similarity: 1>
scaled_age = <RowFeatures.scaled_age: 4>
property value
class VariableFeatures

Members:

objective

is_type_binary

is_type_integer

is_type_implicit_integer

is_type_continuous

has_lower_bound

has_upper_bound

normed_reduced_cost

solution_value

solution_frac

is_solution_at_lower_bound

is_solution_at_upper_bound

scaled_age

incumbent_value

average_incumbent_value

is_basis_lower

is_basis_basic

is_basis_upper

is_basis_zero

__init__(self: ecole.observation.NodeBipartiteObs.VariableFeatures, value: int) → None
average_incumbent_value = <VariableFeatures.average_incumbent_value: 14>
has_lower_bound = <VariableFeatures.has_lower_bound: 5>
has_upper_bound = <VariableFeatures.has_upper_bound: 6>
incumbent_value = <VariableFeatures.incumbent_value: 13>
is_basis_basic = <VariableFeatures.is_basis_basic: 16>
is_basis_lower = <VariableFeatures.is_basis_lower: 15>
is_basis_upper = <VariableFeatures.is_basis_upper: 17>
is_basis_zero = <VariableFeatures.is_basis_zero: 18>
is_solution_at_lower_bound = <VariableFeatures.is_solution_at_lower_bound: 10>
is_solution_at_upper_bound = <VariableFeatures.is_solution_at_upper_bound: 11>
is_type_binary = <VariableFeatures.is_type_binary: 1>
is_type_continuous = <VariableFeatures.is_type_continuous: 4>
is_type_implicit_integer = <VariableFeatures.is_type_implicit_integer: 3>
is_type_integer = <VariableFeatures.is_type_integer: 2>
property name
normed_reduced_cost = <VariableFeatures.normed_reduced_cost: 7>
objective = <VariableFeatures.objective: 0>
scaled_age = <VariableFeatures.scaled_age: 12>
solution_frac = <VariableFeatures.solution_frac: 9>
solution_value = <VariableFeatures.solution_value: 8>
property value
__init__(*args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

property edge_features

The constraint matrix of the optimization problem, with rows for constraints and columns for variables.

property row_features

A matrix where each row represents a constraint, and each column a feature of the constraints.

property variable_features

A matrix where each row represents a variable, and each column a feature of the variable.

Variables are ordered according to their position in the original problem (SCIPvarGetProbindex), hence they can be indexed by the Branching environment action_set.
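
Concretely, the rows of variable_features corresponding to the current branching candidates can be selected with the action_set, and individual feature columns can be addressed through the enums above. A short sketch, with the environment and instance source as placeholders:

    import ecole

    VarFeat = ecole.observation.NodeBipartiteObs.VariableFeatures

    env = ecole.environment.Branching(observation_function=ecole.observation.NodeBipartite())
    instances = ecole.instance.SetCoverGenerator(n_rows=100, n_cols=200)  # placeholder instance source
    obs, action_set, _, done, _ = env.reset(next(instances))
    if not done:
        candidate_features = obs.variable_features[action_set]                # rows of the branching candidates
        objective_column = obs.variable_features[:, VarFeat.objective.value]  # a single feature column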

Milp Bipartite

class ecole.observation.MilpBipartite

Bipartite graph observation function for the sub-MILP at the latest branch-and-bound node.

This observation function extracts a structured MilpBipartiteObs.

__init__(self: ecole.observation.MilpBipartite, normalize: bool = False) → None

Constructor for MilpBipartite.

Parameters

normalize – Should the features be normalized? This is recommended for some applications, such as deep learning models.

before_reset(self: ecole.observation.MilpBipartite, model: ecole.scip.Model) → None

Do nothing.

extract(self: ecole.observation.MilpBipartite, model: ecole.scip.Model, done: bool) → Optional[ecole.observation.MilpBipartiteObs]

Extract a new MilpBipartiteObs.

class ecole.observation.MilpBipartiteObs

Bipartite graph observation that represents the most recent MILP during presolving.

The optimization problem is represented as a heterogeneous bipartite graph. On one side, each node is associated with one variable; on the other side, each node is associated with one constraint. There exists an edge between a variable and a constraint if the variable appears in the constraint with a non-zero coefficient.

Each variable and constraint node is associated with a vector of features. Each edge is associated with the coefficient of the variable in the constraint.

class ConstraintFeatures

Members:

bias

__init__(self: ecole.observation.MilpBipartiteObs.ConstraintFeatures, value: int) → None
bias = <ConstraintFeatures.bias: 0>
property name
property value
class VariableFeatures

Members:

objective

is_type_binary

is_type_integer

is_type_implicit_integer

is_type_continuous

has_lower_bound

has_upper_bound

lower_bound

upper_bound

__init__(self: ecole.observation.MilpBipartiteObs.VariableFeatures, value: int) → None
has_lower_bound = <VariableFeatures.has_lower_bound: 5>
has_upper_bound = <VariableFeatures.has_upper_bound: 6>
is_type_binary = <VariableFeatures.is_type_binary: 1>
is_type_continuous = <VariableFeatures.is_type_continuous: 4>
is_type_implicit_integer = <VariableFeatures.is_type_implicit_integer: 3>
is_type_integer = <VariableFeatures.is_type_integer: 2>
lower_bound = <VariableFeatures.lower_bound: 7>
property name
objective = <VariableFeatures.objective: 0>
upper_bound = <VariableFeatures.upper_bound: 8>
property value
__init__(*args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

property constraint_features

A matrix where each row represents a constraint, and each column a feature of the constraints.

property edge_features

The constraint matrix of the optimization problem, with rows for constraints and columns for variables.

property variable_features

A matrix where each row represents a variable, and each column a feature of the variable.

Variables are ordered according to their position in the original problem (SCIPvarGetProbindex), hence they can be indexed by the Branching environment action_set.
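
Observation functions can also be used outside of an environment by calling before_reset and extract directly. A sketch for MilpBipartite, assuming an instance file readable by SCIP and loaded with ecole.scip.Model.from_file:

    import ecole

    model = ecole.scip.Model.from_file("instance.lp")  # placeholder path

    obs_func = ecole.observation.MilpBipartite(normalize=True)
    obs_func.before_reset(model)
    obs = obs_func.extract(model, False)

    print(obs.variable_features.shape)    # (number of variables, number of variable features)
    print(obs.constraint_features.shape)  # (number of constraints, number of constraint features)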

Strong Branching Scores

class ecole.observation.StrongBranchingScores

Strong branching score observation function on branch-and-bound nodes.

This observation obtains scores for all LP or pseudo candidate variables at a branch-and-bound node. The strong branching score measures the quality of each variable for branching (higher is better). This observation can be used as an expert for imitation learning algorithms.

This observation function extracts an array containing the strong branching score for each variable in the problem. Variables are ordered according to their position in the original problem (SCIPvarGetProbindex), hence they can be indexed by the Branching environment action_set. Variables for which a strong branching score is not applicable are filled with NaN.

__init__(self: ecole.observation.StrongBranchingScores, pseudo_candidates: bool = False) → None

Constructor for StrongBranchingScores.

Parameters

pseudo_candidates – Whether strong branching scores are computed for pseudo candidate variables (when true) or for LP candidate variables (when false).

before_reset(self: ecole.observation.StrongBranchingScores, model: ecole.scip.Model) → None

Do nothing.

extract(self: ecole.observation.StrongBranchingScores, model: ecole.scip.Model, done: bool) → Optional[numpy.ndarray[numpy.float64]]

Extract an array containing strong branching scores.
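
Since the scores are aligned with the problem variables, they can serve directly as an expert for imitation learning. The sketch below collects (observation, expert action) pairs, assuming that the environment accepts a tuple of observation functions and using a placeholder instance source:

    import ecole
    import numpy as np

    env = ecole.environment.Branching(
        observation_function=(
            ecole.observation.NodeBipartite(),
            ecole.observation.StrongBranchingScores(),
        )
    )
    instances = ecole.instance.SetCoverGenerator(n_rows=100, n_cols=200)  # placeholder instance source

    samples = []
    obs, action_set, _, done, _ = env.reset(next(instances))
    while not done:
        node_obs, scores = obs
        expert_action = action_set[np.argmax(scores[action_set])]  # best-scoring candidate
        samples.append((node_obs, action_set, expert_action))
        obs, action_set, _, done, _ = env.step(expert_action)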

Pseudocosts

class ecole.observation.Pseudocosts

Pseudocosts observation function on branch-and-bound nodes.

This observation obtains pseudocosts for all LP fractional candidate variables at a branch-and-bound node. The pseudocost is a cheap approximation to the strong branching score and measures the quality of branching for each variable. This observation can be used as a practical branching strategy by always branching on the variable with the highest pseudocost, although in practice it is not as efficient as SCIP’s default strategy, reliability pseudocost branching (also known as hybrid branching).

This observation function extracts an array containing the pseudocost for each variable in the problem. Variables are ordered according to their position in the original problem (SCIPvarGetProbindex), hence they can be indexed by the Branching environment action_set. Variables for which a pseudocost is not applicable are filled with NaN.

__init__(self: ecole.observation.Pseudocosts) → None
before_reset(self: ecole.observation.Pseudocosts, model: ecole.scip.Model) → None

Do nothing.

extract(self: ecole.observation.Pseudocosts, model: ecole.scip.Model, done: bool) → Optional[numpy.ndarray[numpy.float64]]

Extract an array containing pseudocosts.
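
The strategy described above, always branching on the candidate with the highest pseudocost, can be written directly against this observation. A sketch, with the instance source as a placeholder:

    import ecole
    import numpy as np

    env = ecole.environment.Branching(observation_function=ecole.observation.Pseudocosts())
    instances = ecole.instance.SetCoverGenerator(n_rows=100, n_cols=200)  # placeholder instance source

    pseudocosts, action_set, _, done, _ = env.reset(next(instances))
    while not done:
        action = action_set[np.argmax(pseudocosts[action_set])]  # highest pseudocost among candidates
        pseudocosts, action_set, _, done, _ = env.step(action)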

Khalil et al. 2016

class ecole.observation.Khalil2016

Branching candidates features from Khalil et al. (2016).

This observation function extracts a structured Khalil2016Obs.

__init__(self: ecole.observation.Khalil2016, pseudo_candidates: bool = False) → None

Create new observation.

Parameters

pseudo_candidates – Whether the pseudo branching variable candidates (SCIPgetPseudoBranchCands) or the LP branching variable candidates (SCIPgetLPBranchCands) are observed.

before_reset(self: ecole.observation.Khalil2016, model: ecole.scip.Model) → None

Reset static features cache.

extract(self: ecole.observation.Khalil2016, model: ecole.scip.Model, done: bool) → Optional[ecole.observation.Khalil2016Obs]

Extract the observation matrix.

class ecole.observation.Khalil2016Obs

Branching candidates features from Khalil et al. (2016).

The observation is a matrix where rows represent all variables and columns represent features related to these variables. See [Khalil2016] for a complete reference on this observation function.

Khalil2016

Khalil, Elias Boutros, Pierre Le Bodic, Le Song, George Nemhauser, and Bistra Dilkina. “Learning to branch in mixed integer programming.” Thirtieth AAAI Conference on Artificial Intelligence. 2016.

class Features

Members:

obj_coef

obj_coef_pos_part

obj_coef_neg_part

n_rows

rows_deg_mean

rows_deg_stddev

rows_deg_min

rows_deg_max

rows_pos_coefs_count

rows_pos_coefs_mean

rows_pos_coefs_stddev

rows_pos_coefs_min

rows_pos_coefs_max

rows_neg_coefs_count

rows_neg_coefs_mean

rows_neg_coefs_stddev

rows_neg_coefs_min

rows_neg_coefs_max

slack

ceil_dist

pseudocost_up

pseudocost_down

pseudocost_ratio

pseudocost_sum

pseudocost_product

n_cutoff_up

n_cutoff_down

n_cutoff_up_ratio

n_cutoff_down_ratio

rows_dynamic_deg_mean

rows_dynamic_deg_stddev

rows_dynamic_deg_min

rows_dynamic_deg_max

rows_dynamic_deg_mean_ratio

rows_dynamic_deg_min_ratio

rows_dynamic_deg_max_ratio

coef_pos_rhs_ratio_min

coef_pos_rhs_ratio_max

coef_neg_rhs_ratio_min

coef_neg_rhs_ratio_max

pos_coef_pos_coef_ratio_min

pos_coef_pos_coef_ratio_max

pos_coef_neg_coef_ratio_min

pos_coef_neg_coef_ratio_max

neg_coef_pos_coef_ratio_min

neg_coef_pos_coef_ratio_max

neg_coef_neg_coef_ratio_min

neg_coef_neg_coef_ratio_max

active_coef_weight1_count

active_coef_weight1_sum

active_coef_weight1_mean

active_coef_weight1_stddev

active_coef_weight1_min

active_coef_weight1_max

active_coef_weight2_count

active_coef_weight2_sum

active_coef_weight2_mean

active_coef_weight2_stddev

active_coef_weight2_min

active_coef_weight2_max

active_coef_weight3_count

active_coef_weight3_sum

active_coef_weight3_mean

active_coef_weight3_stddev

active_coef_weight3_min

active_coef_weight3_max

active_coef_weight4_count

active_coef_weight4_sum

active_coef_weight4_mean

active_coef_weight4_stddev

active_coef_weight4_min

active_coef_weight4_max

__init__(self: ecole.observation.Khalil2016Obs.Features, value: int) → None
active_coef_weight1_count = <Features.active_coef_weight1_count: 48>
active_coef_weight1_max = <Features.active_coef_weight1_max: 53>
active_coef_weight1_mean = <Features.active_coef_weight1_mean: 50>
active_coef_weight1_min = <Features.active_coef_weight1_min: 52>
active_coef_weight1_stddev = <Features.active_coef_weight1_stddev: 51>
active_coef_weight1_sum = <Features.active_coef_weight1_sum: 49>
active_coef_weight2_count = <Features.active_coef_weight2_count: 54>
active_coef_weight2_max = <Features.active_coef_weight2_max: 59>
active_coef_weight2_mean = <Features.active_coef_weight2_mean: 56>
active_coef_weight2_min = <Features.active_coef_weight2_min: 58>
active_coef_weight2_stddev = <Features.active_coef_weight2_stddev: 57>
active_coef_weight2_sum = <Features.active_coef_weight2_sum: 55>
active_coef_weight3_count = <Features.active_coef_weight3_count: 60>
active_coef_weight3_max = <Features.active_coef_weight3_max: 65>
active_coef_weight3_mean = <Features.active_coef_weight3_mean: 62>
active_coef_weight3_min = <Features.active_coef_weight3_min: 64>
active_coef_weight3_stddev = <Features.active_coef_weight3_stddev: 63>
active_coef_weight3_sum = <Features.active_coef_weight3_sum: 61>
active_coef_weight4_count = <Features.active_coef_weight4_count: 66>
active_coef_weight4_max = <Features.active_coef_weight4_max: 71>
active_coef_weight4_mean = <Features.active_coef_weight4_mean: 68>
active_coef_weight4_min = <Features.active_coef_weight4_min: 70>
active_coef_weight4_stddev = <Features.active_coef_weight4_stddev: 69>
active_coef_weight4_sum = <Features.active_coef_weight4_sum: 67>
ceil_dist = <Features.ceil_dist: 19>
coef_neg_rhs_ratio_max = <Features.coef_neg_rhs_ratio_max: 39>
coef_neg_rhs_ratio_min = <Features.coef_neg_rhs_ratio_min: 38>
coef_pos_rhs_ratio_max = <Features.coef_pos_rhs_ratio_max: 37>
coef_pos_rhs_ratio_min = <Features.coef_pos_rhs_ratio_min: 36>
n_cutoff_down = <Features.n_cutoff_down: 26>
n_cutoff_down_ratio = <Features.n_cutoff_down_ratio: 28>
n_cutoff_up = <Features.n_cutoff_up: 25>
n_cutoff_up_ratio = <Features.n_cutoff_up_ratio: 27>
n_rows = <Features.n_rows: 3>
property name
neg_coef_neg_coef_ratio_max = <Features.neg_coef_neg_coef_ratio_max: 47>
neg_coef_neg_coef_ratio_min = <Features.neg_coef_neg_coef_ratio_min: 46>
neg_coef_pos_coef_ratio_max = <Features.neg_coef_pos_coef_ratio_max: 45>
neg_coef_pos_coef_ratio_min = <Features.neg_coef_pos_coef_ratio_min: 44>
obj_coef = <Features.obj_coef: 0>
obj_coef_neg_part = <Features.obj_coef_neg_part: 2>
obj_coef_pos_part = <Features.obj_coef_pos_part: 1>
pos_coef_neg_coef_ratio_max = <Features.pos_coef_neg_coef_ratio_max: 43>
pos_coef_neg_coef_ratio_min = <Features.pos_coef_neg_coef_ratio_min: 42>
pos_coef_pos_coef_ratio_max = <Features.pos_coef_pos_coef_ratio_max: 41>
pos_coef_pos_coef_ratio_min = <Features.pos_coef_pos_coef_ratio_min: 40>
pseudocost_down = <Features.pseudocost_down: 21>
pseudocost_product = <Features.pseudocost_product: 24>
pseudocost_ratio = <Features.pseudocost_ratio: 22>
pseudocost_sum = <Features.pseudocost_sum: 23>
pseudocost_up = <Features.pseudocost_up: 20>
rows_deg_max = <Features.rows_deg_max: 7>
rows_deg_mean = <Features.rows_deg_mean: 4>
rows_deg_min = <Features.rows_deg_min: 6>
rows_deg_stddev = <Features.rows_deg_stddev: 5>
rows_dynamic_deg_max = <Features.rows_dynamic_deg_max: 32>
rows_dynamic_deg_max_ratio = <Features.rows_dynamic_deg_max_ratio: 35>
rows_dynamic_deg_mean = <Features.rows_dynamic_deg_mean: 29>
rows_dynamic_deg_mean_ratio = <Features.rows_dynamic_deg_mean_ratio: 33>
rows_dynamic_deg_min = <Features.rows_dynamic_deg_min: 31>
rows_dynamic_deg_min_ratio = <Features.rows_dynamic_deg_min_ratio: 34>
rows_dynamic_deg_stddev = <Features.rows_dynamic_deg_stddev: 30>
rows_neg_coefs_count = <Features.rows_neg_coefs_count: 13>
rows_neg_coefs_max = <Features.rows_neg_coefs_max: 17>
rows_neg_coefs_mean = <Features.rows_neg_coefs_mean: 14>
rows_neg_coefs_min = <Features.rows_neg_coefs_min: 16>
rows_neg_coefs_stddev = <Features.rows_neg_coefs_stddev: 15>
rows_pos_coefs_count = <Features.rows_pos_coefs_count: 8>
rows_pos_coefs_max = <Features.rows_pos_coefs_max: 12>
rows_pos_coefs_mean = <Features.rows_pos_coefs_mean: 9>
rows_pos_coefs_min = <Features.rows_pos_coefs_min: 11>
rows_pos_coefs_stddev = <Features.rows_pos_coefs_stddev: 10>
slack = <Features.slack: 18>
property value
__init__(*args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

property features

A matrix where each row represents a variable, and each column a feature of the variable.

Variables are ordered according to their position in the original problem (SCIPvarGetProbindex), hence they can be indexed by the Branching environment action_set. Variables for which the features are not applicable are filled with NaN.

The first Khalil2016Obs.n_static_features columns are static (they do not change through the solving process), and the remaining Khalil2016Obs.n_dynamic_features columns are dynamic.

n_dynamic_features = 54
n_static_features = 18
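
These two counts make it straightforward to split the feature matrix into its static and dynamic parts. A sketch, with the environment and instance source as placeholders:

    import ecole

    env = ecole.environment.Branching(observation_function=ecole.observation.Khalil2016())
    instances = ecole.instance.SetCoverGenerator(n_rows=100, n_cols=200)  # placeholder instance source
    obs, action_set, _, done, _ = env.reset(next(instances))
    if not done:
        n_static = ecole.observation.Khalil2016Obs.n_static_features
        static_features = obs.features[:, :n_static]    # fixed throughout the episode
        dynamic_features = obs.features[:, n_static:]   # recomputed at every node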

Hutter et al. 2011

class ecole.observation.Hutter2011

Instance features from Hutter et al. (2011).

This observation function extracts a structured Hutter2011Obs.

__init__(self: ecole.observation.Hutter2011) → None
before_reset(self: ecole.observation.Hutter2011, model: ecole.scip.Model) → None

Do nothing.

extract(self: ecole.observation.Hutter2011, model: ecole.scip.Model, done: bool) → Optional[ecole.observation.Hutter2011Obs]

Extract the observation matrix.

class ecole.observation.Hutter2011Obs

Instance features from Hutter et al. (2011).

The observation is a vector of features that globally characterize the instance. See [Hutter2011] for a complete reference on this observation function.

Hutter2011

Hutter, Frank, Hoos, Holger H., and Leyton-Brown, Kevin. “Sequential model-based optimization for general algorithm configuration.” International Conference on Learning and Intelligent Optimization. 2011.

class Features

Members:

nb_variables

nb_constraints

nb_nonzero_coefs

variable_node_degree_mean

variable_node_degree_max

variable_node_degree_min

variable_node_degree_std

constraint_node_degree_mean

constraint_node_degree_max

constraint_node_degree_min

constraint_node_degree_std

node_degree_mean

node_degree_max

node_degree_min

node_degree_std

node_degree_25q

node_degree_75q

edge_density

lp_slack_mean

lp_slack_max

lp_slack_l2

lp_objective_value

objective_coef_m_std

objective_coef_n_std

objective_coef_sqrtn_std

constraint_coef_mean

constraint_coef_std

constraint_var_coef_mean

constraint_var_coef_std

discrete_vars_support_size_mean

discrete_vars_support_size_std

ratio_unbounded_discrete_vars

ratio_continuous_vars

__init__(self: ecole.observation.Hutter2011Obs.Features, value: int) → None
constraint_coef_mean = <Features.constraint_coef_mean: 25>
constraint_coef_std = <Features.constraint_coef_std: 26>
constraint_node_degree_max = <Features.constraint_node_degree_max: 8>
constraint_node_degree_mean = <Features.constraint_node_degree_mean: 7>
constraint_node_degree_min = <Features.constraint_node_degree_min: 9>
constraint_node_degree_std = <Features.constraint_node_degree_std: 10>
constraint_var_coef_mean = <Features.constraint_var_coef_mean: 27>
constraint_var_coef_std = <Features.constraint_var_coef_std: 28>
discrete_vars_support_size_mean = <Features.discrete_vars_support_size_mean: 29>
discrete_vars_support_size_std = <Features.discrete_vars_support_size_std: 30>
edge_density = <Features.edge_density: 17>
lp_objective_value = <Features.lp_objective_value: 21>
lp_slack_l2 = <Features.lp_slack_l2: 20>
lp_slack_max = <Features.lp_slack_max: 19>
lp_slack_mean = <Features.lp_slack_mean: 18>
property name
nb_constraints = <Features.nb_constraints: 1>
nb_nonzero_coefs = <Features.nb_nonzero_coefs: 2>
nb_variables = <Features.nb_variables: 0>
node_degree_25q = <Features.node_degree_25q: 15>
node_degree_75q = <Features.node_degree_75q: 16>
node_degree_max = <Features.node_degree_max: 12>
node_degree_mean = <Features.node_degree_mean: 11>
node_degree_min = <Features.node_degree_min: 13>
node_degree_std = <Features.node_degree_std: 14>
objective_coef_m_std = <Features.objective_coef_m_std: 22>
objective_coef_n_std = <Features.objective_coef_n_std: 23>
objective_coef_sqrtn_std = <Features.objective_coef_sqrtn_std: 24>
ratio_continuous_vars = <Features.ratio_continuous_vars: 32>
ratio_unbounded_discrete_vars = <Features.ratio_unbounded_discrete_vars: 31>
property value
variable_node_degree_max = <Features.variable_node_degree_max: 4>
variable_node_degree_mean = <Features.variable_node_degree_mean: 3>
variable_node_degree_min = <Features.variable_node_degree_min: 5>
variable_node_degree_std = <Features.variable_node_degree_std: 6>
__init__(*args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

property features

A vector of instance features.
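
Because these features characterize the instance globally, a natural setting is the Configuring environment, where the observation is available at reset before solving starts. A sketch, with the environment choice and instance source as assumptions:

    import ecole

    env = ecole.environment.Configuring(observation_function=ecole.observation.Hutter2011())
    instances = ecole.instance.SetCoverGenerator(n_rows=100, n_cols=200)  # placeholder instance source

    obs, _, _, done, _ = env.reset(next(instances))

    Features = ecole.observation.Hutter2011Obs.Features
    print(obs.features[Features.nb_variables.value])    # number of variables in the instance
    print(obs.features[Features.nb_constraints.value])  # number of constraints in the instance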