6. Frequently Asked Questions¶
Can ParAMS be run on multiple compute nodes?¶
No. ParAMS can only be run on a single compute node. However, it can run in parallel on that node. See Parallelization.
Can I delete parameters?¶
No, it is currently not possible to delete parameters.
Why do imported geometry optimization jobs have MaxIterations 30?¶
If you use the GUI or a Results Importer to import a job and set the Task to GeometryOptimization, the settings for the GeometryOptimization default to
GeometryOptimization
MaxIterations 30
PretendConverged Yes
End
This means that at most 30 iterations are allowed during the parametrization. The number of iterations is limited because the parametrization may visit unrealistic sets of parameters for which a geometry optimization would “never” converge. By capping the number of iterations, the parametrization will not get stuck.
PretendConverged Yes means that if the maximum of 30 iterations is reached, ParAMS will simply use the last geometry (and its energy). If you did not set PretendConverged, a geometry optimization that does not converge within MaxIterations would be considered an error, giving an infinite loss function value.
You can easily change the MaxIterations for many jobs at once. In the GUI, select all the geometry optimization jobs you want to edit, and double-click the Details of one of them. Change the MaxIterations in the window, and click OK. That will change it for all jobs you originally selected.
If you use the ResultsImporter class, you can set MaxIterations in the settings.
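As an illustrative sketch (the ResultsImporter method and argument names below follow recent ParAMS versions, but treat them as assumptions that may differ in your release; the path is a placeholder):

from scm.plams import Settings
from scm.params import ResultsImporter

# Regular AMS-style settings for the imported job
s = Settings()
s.input.ams.GeometryOptimization.MaxIterations = 50
s.input.ams.GeometryOptimization.PretendConverged = 'Yes'

ri = ResultsImporter()
ri.add_singlejob('path/to/ams.rkf', task='GeometryOptimization', settings=s)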
Why do I see warnings about files in the training_set_results/latest directory?¶
This warning can appear when you run an optimization in parallel and log frequently to disk, especially if you have a slow disk.
It might affect the files in the training_set_results/latest directory. However, these files are likely to be overwritten at the next logging time, in which case there is no problem.
To avoid this warning, you can try the following (a configuration sketch follows the list):

- Decrease the number of parametervectors (in ParallelLevels) that you parallelize over
- Increase logger_every, i.e., log less frequently
- Make sure to run the optimization in a directory on a fast local disk
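A minimal sketch of the first two points (assuming the usual positional arguments of Optimization; jc, ds, interface, and optimizer stand for your own job collection, data set, parameter interface, and optimizer):

from scm.params import Optimization, ParallelLevels

# Parallelize over fewer parameter vectors at a time
parallel = ParallelLevels(parametervectors=2)

optimization = Optimization(
    jc, ds, interface, optimizer,
    parallel=parallel,
    logger_every=100,  # log every 100 evaluations instead of more often
)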
What does the CMA warning about failing training set jobs mean?¶
This warning usually means that the CMA optimizer is stuck in a parameter region that repeatedly causes one or more of your training set jobs to fail. Mostly this will be due to unphysical parameters, but too many or too tight Constraints can also be the reason. The warning can resolve itself after some time, in which case it can be ignored. However, if the issue persists and CMA is not able to leave the problematic region, your optimization might stop early without producing any improved results. When this happens, consider the following:
- Increase the CMA-ES sigma value (see the sketch below)
- Start the optimization with different initial parameters, as defined by the parameterinterface.active parameters (e.g., a different force field)
- Check that none of your training set jobs are prone to crashes
- When in use, check your Constraints
Note that if you start your Optimization with the skip_x0=True argument, such warnings are expected, as there is no guarantee that the initial set of parameters makes any physical sense.
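For the first point, a sketch of constructing the optimizer with a larger sigma (the keyword name follows recent ParAMS versions; treat it as an assumption):

from scm.params import CMAOptimizer

# A larger sigma widens the initial CMA-ES search distribution,
# which can help the optimizer escape a problematic parameter region
optimizer = CMAOptimizer(sigma=0.2)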
What happens when I delete a reference value in the ParAMS GUI?¶
When you delete a reference value for a training set entry in the ParAMS GUI, the value will automatically be fetched again from the reference jobs.
If you want to delete the reference value in order to Calculate reference values with ParAMS using a new reference engine, the reference value will be deleted when you change the reference engine for the job.
If the reference jobs have not been run or do not exist, you can delete the reference value.
How can I manually evaluate parameters?¶
Below is a minimal self-contained example of how manual evaluation of parameters can be implemented. Replace the random sampling of X with your own parameters.
import numpy as np
from scm.params import *
from scm.plams import from_smiles, Settings

# Prepare the training data:
jc = JobCollection()
jc.add_entry('water', JCEntry(molecule=from_smiles('O'), settings='go'))
ds = DataSet()
ds.add_entry("rmsd('water')")

# Run the reference calculation:
s = Settings()
s.input.mopac  # an empty MOPAC block selects MOPAC as the reference engine
results = jc.run(s)
ds.calculate_reference(results)

# Manually evaluate 10 random points
ljp = LennardJonesParameters()  # your favourite parameter interface
X = np.array([np.random.uniform(p.range[0], p.range[1], size=10) for p in ljp.active]).T  # replace with something more meaningful
fX = []  # stores all loss function values
for x in X:
    ljp.active.x = x           # manually set the parameters
    results = jc.run(ljp)      # run all jobs
    fx = ds.evaluate(results)  # evaluate the data set, calculating the loss
    fX.append(fx)
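After the loop you could, for example, keep the best-performing parameter vector (a trivial follow-up to the example above, not part of it):

best_x = X[np.argmin(fX)]  # parameter vector with the lowest loss
ljp.active.x = best_x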
Alternatively, you can also use a Data Set Evaluator.
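As a sketch of that route (the DataSetEvaluator method and attribute names are assumptions based on recent ParAMS versions and may differ in yours), reusing jc, ds, and ljp from the example above:

from scm.params import DataSetEvaluator

dse = DataSetEvaluator()
results = jc.run(ljp)        # run all jobs with the current parameters
dse.run(results, ds)         # evaluate every data set entry
print(dse.mae, dse.rmse)     # aggregate errors (attribute names are assumptions)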
Why are all reference dihedral angles reported as 0°?¶
The output gives all reference dihedral angles as 0°, and the prediction as the difference to the reference value. This is because the dihedral extractor uses a comparator to compare the prediction to the reference value. This ensures that if the reference value is 1° and the prediction is 359°, the difference is actually only 2° and not 358°.
You can access the actual reference value in the input (training_set.yaml), and get the actual prediction by adding the difference from scatter_plots/dihedral.txt to it.
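For illustration, a minimal sketch of this kind of periodic comparison (the function is hypothetical, not the extractor's actual implementation):

def wrapped_dihedral_difference(prediction, reference):
    """Smallest signed difference between two angles in degrees."""
    return (prediction - reference + 180.0) % 360.0 - 180.0

print(wrapped_dihedral_difference(359.0, 1.0))  # -2.0, not 358.0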
For further support, contact us at support@scm.com.