11. Frequently Asked Questions¶
11.1. General questions¶
Can ParAMS run on multiple compute nodes?
No. ParAMS can only be run on a single compute node. However, it can run in parallel on that node. See Parallelization.
How do I start the ParAMS GUI from the command-line?
$AMSBIN/params -gui
To open an existing project: $AMSBIN/params -gui jobname.params
To open a results folder: $AMSBIN/params -gui jobname.results
Can I train classical force fields (AMBER, GAFF, …) with ParAMS?
No. At the moment, ParAMS supports training ReaxFF, DFTB, ML Potentials, Lennard-Jones, and custom ASE calculators.
Which licenses do I need?
To train ReaxFF you need licenses for:
ReaxFF,
Advanced Workflows and Tools,
(optional, recommended): ADF, BAND, or Quantum ESPRESSO to calculate reference data. You may also be able to import data calculated with a different DFT program.
To train DFTB you need licenses for:
DFTB,
Advanced Workflows and Tools,
(optional, recommended): ADF, BAND, or Quantum ESPRESSO to calculate reference data. You may also be able to import data calculated with a different DFT program.
To train ML potentials (including active learning), you need licenses for:
Classical force fields and machine learning potentials,
Advanced Workflows and Tools,
(optional, recommended): ADF, BAND, or Quantum ESPRESSO to calculate reference data. You may also be able to import data calculated with a different DFT program.
How do I delete a parameter (block) in the GUI?
It is currently not possible to delete parameters.
How do I delete a reference value in the ParAMS GUI?
When you delete a reference value for a training set entry in the ParAMS GUI, the value is automatically fetched again from the reference jobs.
If you want to delete a reference value in order to Generate reference values with a new reference engine, the value is deleted automatically when you change the reference engine for a job.
Only if the reference jobs have not been run, or do not exist, can you actually delete the reference value.
How do I manually evaluate a set of parameters?
This can be done with a Task: SinglePoint job.
How do I convert from ASE format training_set.xyz, training_set.db, etc. to the ParAMS format?
You can run this command, which will store the ParAMS .yaml files in the directory yaml_dir:
$AMSBIN/amspython -c "from scm.params import ResultsImporter; ResultsImporter.from_ase('training_set.xyz', 'validation_set.xyz').store('yaml_dir')"
See also the tutorial.
11.2. ReaxFF questions¶
How do I add new custom force fields to the “Initialize ReaxFF block” feature?
It is easiest to merge ReaxFF force fields using Python. See the create_parameters() function in generate_input_files.py from the ReaxFF (advanced): ZnS tutorial. Run the script using the command
$AMSBIN/amspython generate_input_files.py
This will create the parameter_interface.yaml and custom_forcefield.ff files. Later in the script an error is generated unless you download some extra files from the tutorial, but the initial parameters are still generated correctly.
If you prefer to use the ParAMS GUI, then on Windows or Linux (but not Mac) you need to update the reaxffdb.gz file in $AMSHOME/scripting/scm/params/data as follows:
Create a backup copy of the original reaxffdb.gz file, so that you can restore it later if you make a mistake.
Create a new directory with all the .ff force fields that you would like to be able to choose from.
Then run the below commands from the command-line.
cd /directory/containing/your/ff-files/
"$AMSBIN/amspython" -c \
"import scm.params; scm.params.ReaxFFParameters.generate_paramsdb(paths=['.'])"
# First verify that you have backed up the original reaxffdb.gz!
cp -i reaxffdb.gz "$AMSHOME/scripting/scm/params/data/reaxffdb.gz"
Important
On Mac you cannot modify the contents of $AMSHOME, as it is part of the signed package. On Mac you must use the Python method.
Can I use my new trained ReaxFF force field with LAMMPS?
Yes, if LAMMPS supports the type of ReaxFF force field used.
Some features, like e-ReaxFF, might not be compatible with LAMMPS.
11.3. Job settings¶
Why are MaxIterations and PretendConverged set for geometry optimization jobs?
If you use the GUI or a Results Importer to import a job with the Task set to GeometryOptimization, you’ll find that the GeometryOptimization settings default to
GeometryOptimization
   MaxIterations 30
   PretendConverged Yes
End
This means that during the parametrization, only a maximum of 30 iterations are allowed. The reason to limit the number of iterations is that during the parametrization, there may be unrealistic sets of parameters for which a geometry optimization would “never” converge. By limiting the number of iterations, the parametrization will not get stuck.
PretendConverged Yes means that if the maximum of 30 iterations is reached, ParAMS will simply use the last geometry (and its energy). If you did not set PretendConverged, a geometry optimization that does not converge within MaxIterations would be considered an error, giving an infinite loss function value.
You can easily change the MaxIterations for many jobs at once. In the GUI, select all the geometry optimization jobs you want to edit, and double-click the Details of one of them. Change the MaxIterations in the window, and click OK. That will change it for all jobs you originally selected.
If you use the ResultsImporter class, you can set MaxIterations in the settings.
How do I update the geometry optimization settings for all jobs?
This is easiest to do with a Python script:
#!/usr/bin/env amspython
from scm.plams import *
from scm.params import *

def modify_settings(job_collection: JobCollection, task: str, new_settings: Settings):
    for jid in job_collection:
        jce = job_collection[jid]
        if jce.settings.input.ams.task.lower() == task.lower():
            jce.settings.update(new_settings)

def main():
    jc = JobCollection('job_collection.yaml')
    new_settings = Settings()
    new_settings.input.ams.GeometryOptimization.MaxIterations = 55
    new_settings.input.ams.GeometryOptimization.Method = 'FIRE'
    modify_settings(jc, 'geometryoptimization', new_settings)
    modify_settings(jc, 'pesscan', new_settings)
    jc.store('modified_job_collection.yaml')

if __name__ == '__main__':
    main()
11.4. Errors and warnings¶
When importing VASP data: “File ams.rkf not present in /path/”
Make sure that the VASP output file is called exactly OUTCAR.
“UserWarning: At iteration ___ (training_set), received warning: I/O operation on closed file”
This warning can appear when you are running an optimization in parallel and logging frequently to disk, especially if the disk is slow. It might affect the files in the training_set_results/latest directory. However, the files are likely to be overwritten at the next logging time, in which case there is no problem.
To avoid this warning, you can try the following:
Decrease the number of parametervectors (in ParallelLevels) that you parallelize over.
Increase the logger_every, i.e., log less frequently.
Make sure to run the optimization in a directory on a fast local disk.
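As a sketch of the first suggestion, the number of parameter vectors evaluated in parallel can be reduced in the ParallelLevels block of the ParAMS input. The key spelling ParameterVectors and the value 2 are assumptions for illustration; check the input reference for your version:

```
ParallelLevels
   ParameterVectors 2
End
```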
WARNING: At iteration 1, received error: AssertionError(‘DataSet and residuals lengths do not match!’). Do not trust data for this iteration!
This message can have two causes:
For the current set of parameters, one of the jobs crashes or otherwise does not produce the requested results.
You have set a batch size, which causes this message to repeatedly be shown. You can then ignore it.
If the optimization seems to progress normally, this warning can be ignored.
‘Ill-defined region’
This warning usually means that the CMA optimizer is stuck in a parameter region that repeatedly causes one or more of your training set jobs to fail. Mostly this will be due to unphysical parameters, but too many or too tight Constraints can also be the reason. The warning can resolve itself after some time, in which case it can be ignored. However, if the issue persists and CMA is not able to leave the problematic region, your optimization might stop early without producing any improved results. When this happens, consider the following:
Increase the CMA-ES sigma value
Start the optimization with different initial parameters
Check that none of your training set jobs are prone to crashes
When in use, check your Constraints
Note that if you start your Optimization with the skip_x0=True argument, such warnings are expected as there is no guarantee that the initial set of parameters makes any physical sense.
AttributeError: ‘AMSWorkerResults’ object has no attribute ‘readrkf’, scm.params.core.dataset.DataSetEvaluationError: Error evaluating ‘….’
Potential solution #1: This error means that you run a job through the pipe (pipe = efficient communication between ParAMS and AMS) but use an extractor (for example bandgap, bandstructure) that is not compatible with the pipe.
To solve this issue, you need to disable the pipe:
Input file: Set DataSet%UsePipe No
GUI: On the Details → Technical panel, uncheck the Use pipe option. Do this for both the training set and validation set.
If you use one of these extractors, it is likely that you are parametrizing a relatively expensive compute engine (DFTB). In that case, the performance overhead of not using the pipe is negligible.
Potential solution #2: If you’re using the vibfreq (vibrations, frequencies) extractor, you may have forgotten to set
Properties
   NormalModes Yes
End
in the job. The job will then run over the pipe, but the frequencies cannot be extracted.
To solve this issue, make sure that you have enabled the normal modes (frequencies) in the job settings.
11.5. Optimization questions¶
I want exactly 12 optimization results in total, but want to run at most 4 at a time to fit on my available cores
# Have at most 4 optimizers running at the same time
ParallelLevels
   Optimizations 4
End
# Even if fewer than 4 optimizers are running, after starting a total of 12 optimizers,
# never start any more
ControlOptimizerSpawning
   MaxOptimizers 12
End
Why are no ASE parameters shown in the GUI?
The GUI cannot be used to create ASE parameters. You need to first create the parameter_interface.yaml file and then load that.
It is easiest to create the file with a Python script. See the ASE Calculator parametrization tutorial.
11.6. Results questions¶
How can I compare the parameter values of two ReaxFF force fields?
The easiest way is to create a file with one line per parameter, print the parameter name and value, and compare the resulting output files with an external tool like xxdiff or WinMerge (not provided by SCM nor supported by SCM).
For example to print a sorted version of ‘CHO.ff’:
#!/usr/bin/env amspython
from scm.params import ReaxFFParameters

interf = ReaxFFParameters('CHO.ff')
lines = [f'{p.block} {p.name} {p.value}' for p in interf]
for line in sorted(lines):
    print(line)
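Once you have a sorted one-line-per-parameter listing for each force field, you can also diff the two listings directly with Python's standard library instead of an external tool. A minimal sketch, where the block/name/value lines are made-up illustrations of the format printed by the script above:

```python
import difflib

def diff_parameter_lines(old_lines, new_lines):
    """Return unified-diff lines between two sorted parameter listings."""
    return list(difflib.unified_diff(old_lines, new_lines,
                                     fromfile='old.ff', tofile='new.ff',
                                     lineterm=''))

# Illustrative input: one "block name value" line per parameter (values made up)
old = ['ATM C p_val3 2.5', 'ATM H p_val3 1.0']
new = ['ATM C p_val3 2.7', 'ATM H p_val3 1.0']

for line in diff_parameter_lines(old, new):
    print(line)
```

Only parameters whose values differ appear with a leading `-`/`+`; identical lines are printed as context.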
The reference dihedral angle is given as 0° in the output (scatter_plots/dihedral.txt)
The output gives all reference dihedral angles as 0°, and the prediction as the difference to the reference value. This is because the dihedral extractor uses a comparator to compare the prediction to the reference value. This is to ensure that if the reference value is 1° and the prediction is 359°, the difference is actually only 2° and not 358°.
You can access the actual reference value in the input (training_set.yaml), and get the actual prediction by adding the difference from scatter_plots/dihedral.txt.
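The wrap-around comparison described above can be reproduced in a few lines of Python. This is only a sketch of the arithmetic, not the dihedral comparator's actual implementation:

```python
def dihedral_difference(prediction, reference):
    """Smallest signed difference between two dihedral angles in degrees,
    mapped into the interval (-180, 180]."""
    d = (prediction - reference) % 360.0
    if d > 180.0:
        d -= 360.0
    return d

# Example from the text: reference 1 deg, prediction 359 deg
print(dihedral_difference(359.0, 1.0))   # -2.0, i.e. only 2 deg apart

# Recover the actual prediction from the reference value and the logged difference
reference = 1.0
difference = dihedral_difference(359.0, 1.0)
print((reference + difference) % 360.0)  # 359.0
```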