enopt

Ensemble optimisation algorithm.

EnOpt

Bases: Optimize

This is an implementation of the steepest-descent ensemble optimization algorithm, EnOpt. The control variable is updated with the simple steepest (or gradient) descent rule:

\[ x_l = x_{l-1} - \alpha \times C \times G \]

where \(x\) is the control variable, \(l\) is the iteration index, \(\alpha\) is the step size, \(C\) is a smoothing matrix (e.g., covariance matrix for \(x\)), and \(G\) is the ensemble gradient.
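
As a minimal illustration (not the class's internal code), one such update step could look like this in NumPy, with x, alpha, C, and G matching the symbols above and the optional normalization mirroring the normalize option described further down:

```python
import numpy as np

def steepest_descent_step(x, G, C, alpha=0.1, normalize=True):
    """One EnOpt-style update: x_l = x_{l-1} - alpha * C @ G."""
    d = C @ G                      # smooth the ensemble gradient with the covariance
    if normalize:
        d = d / np.linalg.norm(d)  # optional gradient normalization, cf. the 'normalize' option
    return x - alpha * d
```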

Methods:

| Name | Description |
| --- | --- |
| calc_update | Update using steepest descent method with ensemble gradient |

References

Chen et al., 2009, 'Efficient Ensemble-Based Closed-Loop Production Optimization', SPE Journal, 14 (4): 634-645.

__init__(fun, x, args, jac, hess, bounds=None, **options)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| fun | callable | Objective function | required |
| x | ndarray | Initial state | required |
| args | tuple | Initial covariance | required |
| jac | callable | Gradient function | required |
| hess | callable | Hessian function | required |
| bounds | list | (min, max) pairs for each element in x. None is used to specify no bound. | None |
| options | dict | Optimization options (see the list below) | {} |

The recognized options are:

  • maxiter: maximum number of iterations (default 10)
  • restart: restart optimization from a restart file (default false)
  • restartsave: save a restart file after each successful iteration (default false)
  • tol: convergence tolerance for the objective function (default 1e-6)
  • alpha: step size for the steepest descent method (default 0.1)
  • beta: momentum coefficient for running accelerated optimization (default 0.0)
  • alpha_maxiter: maximum number of backtracking trials (default 5)
  • resample: number indicating how many times resampling is tried if no improvement is found
  • optimizer: 'GA' (gradient ascent) or 'Adam' (default 'GA')
  • nesterov: use Nesterov acceleration if true (default false)
  • hessian: use a Hessian approximation, if the algorithm permits use of the Hessian (default false)
  • normalize: normalize the gradient if true (default true)
  • cov_factor: factor used to shrink the covariance for each resampling trial (default 0.5)
  • savedata: specify which class variables to save to the result files (the state, objective function value, iteration number, number of function evaluations, and number of gradient evaluations are always saved)
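
A hedged usage sketch follows; the import path and the fun, jac, and hess callables are placeholders chosen for illustration, not names confirmed by this page:

```python
import numpy as np
from popt.update_schemes.enopt import EnOpt  # assumed import path; adjust to your installation

# Hypothetical stand-ins for the objective, gradient, and Hessian callables
def fun(x, *args):
    return float(np.sum(x**2))        # objective function value for controls x

def jac(x, *args):
    return 2.0 * x                    # gradient function

def hess(x, *args):
    return 2.0 * np.eye(x.size)       # Hessian function (used only if hessian=True)

x0 = np.full(5, 0.5)                  # initial state
cov = 0.01 * np.eye(5)                # initial covariance, passed through args
bounds = [(0.0, 1.0)] * 5             # (min, max) pair for each element in x

opt = EnOpt(fun, x0, (cov,), jac, hess, bounds=bounds,
            maxiter=20, alpha=0.1, tol=1e-6, normalize=True)
# opt.calc_update() performs one steepest-descent iteration (see below)
```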

calc_update()

Update using steepest descent method with ensemble gradients
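
For intuition, a generic sketch of such an iteration is given below: the ensemble gradient is estimated from Monte Carlo perturbations of the controls, and the step size is reduced by backtracking if the objective does not improve. This illustrates the idea behind the alpha and alpha_maxiter options and is not the class's actual implementation:

```python
import numpy as np

def ensemble_gradient(fun, x, cov, ne=50, rng=None):
    """Estimate the ensemble gradient G from perturbed control samples."""
    rng = np.random.default_rng(rng)
    X = rng.multivariate_normal(x, cov, size=ne)   # ensemble of perturbed controls
    J = np.array([fun(xi) for xi in X])            # objective value per member
    dX = X - X.mean(axis=0)                        # control anomalies
    dJ = J - J.mean()                              # objective anomalies
    return dX.T @ dJ / (ne - 1)                    # cross-covariance estimate of the gradient

def calc_update_sketch(fun, x, cov, alpha=0.1, alpha_maxiter=5):
    """One steepest-descent iteration with simple backtracking on alpha."""
    G = ensemble_gradient(fun, x, cov)
    f0 = fun(x)
    for _ in range(alpha_maxiter):                 # backtracking trials, cf. 'alpha_maxiter'
        x_new = x - alpha * cov @ G                # the update rule from the class docstring
        if fun(x_new) < f0:                        # accept the step if the objective improves
            return x_new
        alpha *= 0.5                               # otherwise shrink the step size
    return x                                       # no improvement found
```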