Adaptive Rate Monte Carlo Optimizer
class ARMCOptimizer(_opt_id=None, _signal_pipe=None, _results_queue=None, _pause_flag=None, _is_log_detailed=False, _workers=1, _backend='threads', phi=1.0, gamma=2.0, a_target=0.25, super_iter_len=1000, sub_iter_len=100, move_range=array([0.9, 0.905, 0.91, 0.915, 0.92, 0.925, 0.93, 0.935, 0.94, 0.945, 0.95, 0.955, 0.96, 0.965, 0.97, 0.975, 0.98, 0.985, 0.99, 0.995, 1.005, 1.01, 1.015, 1.02, 1.025, 1.03, 1.035, 1.04, 1.045, 1.05, 1.055, 1.06, 1.065, 1.07, 1.075, 1.08, 1.085, 1.09, 1.095, 1.1]), move_func=<ufunc 'multiply'>)[source]

Bases: scm.glompo.optimizers.baseoptimizer.BaseOptimizer

An optimizer using the Adaptive Rate Monte Carlo (ARMC) algorithm.
A trial state, \(S_{\omega}\), is generated by moving a random parameter retrieved from a user-specified parameter set (e.g. atomic charges). By default, parameter moves are applied in a multiplicative manner.
The move is accepted if the new set of parameters, \(S_{\omega}\), lowers the auxiliary error (\(\Delta \varepsilon_{QM-MM}\)) with respect to the previous set of accepted parameters \(S_{\omega-i}\) (\(i > 0\); see (1)). The auxiliary error is calculated with a user-specified cost function.
(1)

\[p(\omega \leftarrow \omega-i) = \Biggl \lbrace { 1, \quad \Delta \varepsilon_{QM-MM} ( S_{\omega} ) \; \lt \; \Delta \varepsilon_{QM-MM} ( S_{\omega-i} ) \atop 0, \quad \Delta \varepsilon_{QM-MM} ( S_{\omega} ) \; \gt \; \Delta \varepsilon_{QM-MM} ( S_{\omega-i} ) }\]

The parameter history is updated. Depending on whether the new parameters are accepted or rejected, the auxiliary error of either \(S_{\omega}\) or \(S_{\omega-i}\) is increased by the variable \(\phi\) (see (2)). In this manner the underlying PES is continuously modified, preventing the optimizer from getting permanently stuck in a (local) parameter-space minimum.
(2)

\[\Biggl \lbrace { \Delta \varepsilon_{QM-MM} ( S_{\omega} ) + \phi \quad \text{if} \quad \Delta \varepsilon_{QM-MM} ( S_{\omega} ) \; \lt \; \Delta \varepsilon_{QM-MM} ( S_{\omega-i} ) \atop \Delta \varepsilon_{QM-MM} ( S_{\omega-i} ) + \phi \quad \text{if} \quad \Delta \varepsilon_{QM-MM} ( S_{\omega} ) \; \gt \; \Delta \varepsilon_{QM-MM} ( S_{\omega-i} ) }\]

The parameter \(\phi\) is updated at regular intervals in order to maintain a constant acceptance rate \(\alpha_{t}\). This is illustrated in (3), where \(\phi\) is updated at the beginning of every super-iteration \(\kappa\). In this example the total number of iterations, \(\kappa \omega\), is divided into \(\kappa\) super- and \(\omega\) sub-iterations.
(3)

\[\phi_{\kappa \omega} = \phi_{ ( \kappa - 1 ) \omega} * \gamma^{ \text{sgn} ( \alpha_{t} - \overline{\alpha}_{ ( \kappa - 1 ) }) } \quad \kappa = 1, 2, 3, ..., N\]

Parameters:
- _opt_id, _signal_pipe, _results_queue, _pause_flag, _is_log_detailed, _workers, _backend
  See BaseOptimizer.
- phi
The variable \(\phi\).
- gamma
The constant \(\gamma\).
- a_target
The target acceptance rate \(\alpha_{t}\).
- super_iter_len
The number of ARMC super-iterations \(\kappa\). Total number of iterations: \(\kappa \omega\).
- sub_iter_len
The length of each ARMC sub-iteration \(\omega\). Total number of iterations: \(\kappa \omega\).
- move_range
An array-like object containing all allowed move sizes.
- move_func
A callable for performing the moves. The callable should take two floats (i.e. a single value from the parameter set \(S_{\omega-i}\) and the move size) and return one float (i.e. an updated value for the parameter set \(S_{\omega}\)).
Attributes:
- phi : float
  The variable \(\phi\).
- gamma : float
  The constant \(\gamma\).
- a_target : float
  The target acceptance rate \(\alpha_{t}\).
- super_iter : range
  A range object spanning the ARMC super-iterations \(\kappa\). Total number of iterations: \(\kappa \omega\).
- sub_iter : range
  A range object spanning each ARMC sub-iteration \(\omega\). Total number of iterations: \(\kappa \omega\).
- move_range : numpy.ndarray[float]
  An array-like object containing all allowed move sizes.
- move_func : Callable[[float, float], float]
  A callable for performing the moves. The callable should take two floats (i.e. a single value from the parameter set \(S_{\omega-i}\) and the move size) and return one float (i.e. an updated value for the parameter set \(S_{\omega}\)).
- run : Callable
  See the function parameter in ARMCOptimizer.minimize().
- bounds : numpy.ndarray or (None, None)
  A 2D array (or a 2-tuple filled with None) denoting minimum and maximum values for each to-be moved parameter. See the bounds parameter in ARMCOptimizer.minimize().
- x_best : numpy.ndarray[float]
  The parameter set which minimizes the user-specified error function.
- fx_best : float
  The error associated with x_best.
- fx_old : float
The error of the last set of accepted parameters.
See Also:
The paper describing the original ARMC implementation: Salvatore Cosseddu et al., J. Chem. Theory Comput., 2017, 13, 297–308, doi: 10.1021/acs.jctc.6b01089.
The Python-based implementation of ARMC this class is based on: Automated Forcefield Optimization Extension github.com/nlesc-nano/auto-FOX.
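The update cycle described by equations (1)–(3) can be sketched end-to-end with NumPy. Everything below — the function name, the toy move range, and the structure of the loop — is an illustrative sketch of the algorithm, not the ARMCOptimizer implementation or its API:

```python
import numpy as np

def armc_sketch(f, x0, phi=1.0, gamma=2.0, a_target=0.25,
                super_iter_len=10, sub_iter_len=100, seed=0):
    """Minimal ARMC sketch following equations (1)-(3); illustrative only."""
    rng = np.random.default_rng(seed)
    # Toy move range mirroring the class default: multipliers around 1,
    # excluding 1.0 itself.
    move_range = np.concatenate([np.arange(0.9, 1.0, 0.005),
                                 np.arange(1.005, 1.1001, 0.005)])
    x_old = np.asarray(x0, dtype=float)
    fx_old = f(x_old)                        # error of the last accepted set
    x_best, fx_best = x_old.copy(), fx_old

    for _ in range(super_iter_len):
        acceptance = np.zeros(sub_iter_len, dtype=bool)
        for w in range(sub_iter_len):
            # Move a single random parameter (multiplicative by default).
            x_new = x_old.copy()
            i = rng.integers(len(x_new))
            x_new[i] *= rng.choice(move_range)

            fx_new = f(x_new)
            accept = fx_new < fx_old         # equation (1)
            acceptance[w] = accept
            if accept:
                if fx_new < fx_best:
                    x_best, fx_best = x_new.copy(), fx_new
                # Equation (2), accepted branch: inflate the accepted error.
                x_old, fx_old = x_new, fx_new + phi
            else:
                # Equation (2), rejected branch: inflate the old error.
                fx_old = fx_old + phi
        # Equation (3): adapt phi towards the target acceptance rate.
        phi *= gamma ** np.sign(a_target - acceptance.mean())
    return x_best, fx_best
```

Inflating the accepted (or retained) error with \(\phi\) is what lets the sketch climb back out of a basin it would otherwise never leave.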
- property bounds
  Get or set the bounds parameter of ARMCOptimizer.minimize().
- checkpoint_save(path, force=None, block=None)[source]
  Save the current state, suitable for restarting.
Parameters:
- path
Path to file into which the object will be dumped. Typically supplied by the manager.
- force
Set of variable names which will be forced into the dumped file. A convenient shortcut for overriding the default behaviour when the dump fails for a particular optimizer because a certain variable is filtered out of it.
- block
Set of variable names which are typically caught in the construction of the checkpoint but should be excluded. Useful for excluding some properties.
Notes:
Only the absolutely critical aspects of the state of the optimizer need to be saved. The manager will resupply multiprocessing parameters when the optimizer is reconstructed.
This method will almost never be called directly by the user. Rather it will be called (via signals) by the manager.
This is a basic implementation which should suit most optimizers but may need to be overwritten. Typically it is sufficient to call the super method and use the force and block parameters to get a working implementation.
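The force/block filtering described above could work along these lines. This is a hypothetical sketch under the assumption that the checkpoint is a pickled dict of the instance's attributes; the function name and the picklability filter are illustrative, not the actual BaseOptimizer code:

```python
import pickle

def checkpoint_save_sketch(obj, path, force=None, block=None):
    """Hypothetical sketch of force/block checkpoint filtering."""
    force = set(force or ())
    block = set(block or ())
    state = {}
    for name, value in vars(obj).items():
        if name in block:
            continue                 # explicitly excluded from the dump
        if name in force:
            state[name] = value      # forced in, bypassing the filter
            continue
        try:
            pickle.dumps(value)      # keep only picklable attributes
        except Exception:
            continue
        state[name] = value
    with open(path, "wb") as fh:
        pickle.dump(state, fh)
    return state
```

In this sketch block wins over force; which one takes precedence in the real implementation is not specified by the documentation above.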
- minimize(function, x0, bounds=None)[source]
  Start the minimization process.

Parameters:
- function : Callable
  The function to be minimized.
- x0 : array-like[float]
  A 1D array-like object representing the set of to-be optimized parameters \(S\).
- bounds : array-like[float], optional
  An optional 2D array-like object denoting minimum and maximum values for each to-be moved parameter. The sequence should be of the same length as x0.
- move(x0, x0_min=None, x0_max=None)[source]
  Create a copy of x0 and apply a random move to it.

The move will be applied with move_func (multiplication by default) using a random value from move_range.

Parameters:
- x0 : numpy.ndarray[float]
  A 1D array of parameters \(S_{\omega-i}\).
- x0_min : numpy.ndarray[float], optional
  A 1D array of minimum values for each to-be moved parameter. The array should be of the same length as x0.
- x0_max : numpy.ndarray[float], optional
  A 1D array of maximum values for each to-be moved parameter. The array should be of the same length as x0.
Returns:
- numpy.ndarray[float]
  A copy of x0, \(S_{\omega}\). A single value is moved within this parameter set.
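A minimal sketch of the move step, assuming bounds are enforced by simple clipping (how the class actually handles x0_min/x0_max is not specified above). The function name and the default move range are illustrative; unlike the class default, this toy range includes 1.0:

```python
import numpy as np

def move_sketch(x0, x0_min=None, x0_max=None,
                move_range=np.arange(0.9, 1.1001, 0.005),
                move_func=np.multiply, rng=None):
    """Illustrative sketch of the move step; not the class implementation."""
    if rng is None:
        rng = np.random.default_rng()
    x_new = np.array(x0, dtype=float)       # work on a copy of x0
    i = rng.integers(len(x_new))            # pick one random parameter
    x_new[i] = move_func(x_new[i], rng.choice(move_range))
    if x0_min is not None:                  # assumed clipping to the bounds
        x_new[i] = max(x_new[i], x0_min[i])
    if x0_max is not None:
        x_new[i] = min(x_new[i], x0_max[i])
    return x_new
```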
- phi_apply(aux_err)[source]
  Apply phi to the supplied auxiliary error: \(\Delta \varepsilon_{QM-MM} + \phi\).
- phi_update()[source]
  Update the variable \(\phi\) (phi).

\(\phi\) is updated based on the target acceptance rate, \(\alpha_{t}\) (a_target), and the acceptance rate, acceptance, of the current super-iteration:

\[\phi_{\kappa \omega} = \phi_{ ( \kappa - 1 ) \omega} * \gamma^{ \text{sgn} ( \alpha_{t} - \overline{\alpha}_{ ( \kappa - 1 ) }) }\]

Parameters:
- acceptance : numpy.ndarray[bool]
  A boolean array for keeping track of accepted moves over the course of the current sub-iteration of \(\kappa\).
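The update rule above reduces to a one-line computation. A sketch with illustrative names (not the method itself), showing that \(\phi\) grows by a factor \(\gamma\) when acceptance falls below the target and shrinks when it rises above:

```python
import numpy as np

def phi_update_sketch(phi, gamma, a_target, acceptance):
    """Sketch of the phi update rule above; names are illustrative."""
    # Mean acceptance rate of the super-iteration just finished.
    alpha_mean = np.mean(acceptance)
    # Grow phi when acceptance is below target, shrink it when above.
    return phi * gamma ** np.sign(a_target - alpha_mean)

# With gamma=2 and a_target=0.25: no accepted moves doubles phi,
# all moves accepted halves it.
```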