AMSbatch

It is often necessary to repeat an identical job for many different systems. The 2023 release of the Amsterdam Modeling Suite adds the AMSbatch utility program to do this.

Tip

AMSbatch only covers the very simple case of repeating the exact same job for multiple systems. More complicated workflows, including the processing of results, can easily be implemented with our scripting tools (e.g. PLAMS).

Overview

The input of AMSbatch is basically identical to the input of an AMS job. The only difference concerns the System blocks: for AMSbatch, every System block needs a unique name (supplied in the header of the block), which is used as the identifier of the system. Consider the following example input, which independently performs a geometry optimization followed by a normal modes calculation for two systems (CO and water):

#!/bin/sh

AMS_JOBNAME=go_freq $AMSBIN/amsbatch << EOF

   Task GeometryOptimization
   Properties NormalModes=True

   Engine DFTB
      [...]
   EndEngine

   System Water
       Atoms
           O 0 0 0
           H 0 1 0
           H 1 0 0
       End
   End

   System CO
       Atoms
           C 0 0 0
           O 0 0 1
       End
   End

EOF

Just like the AMS driver, running AMSbatch creates a results directory on disk. Inside the results directory, you will find subdirectories for the individual jobs (identified by their System name), containing the usual AMS output files. Running the script above would produce the following directory structure:

go_freq.results/
├── amsbatch.log
├── amsbatch.rkf
├── CO
│   ├── ams.log
│   ├── ams.rkf
│   ├── CO.dill
│   ├── CO.err
│   ├── CO.in
│   ├── CO.out
│   ├── CO.run
│   ├── dftb.rkf
│   └── output.xyz
└── Water
    ├── ams.log
    ├── ams.rkf
    ├── dftb.rkf
    ├── output.xyz
    ├── Water.dill
    ├── Water.err
    ├── Water.in
    ├── Water.out
    └── Water.run

Loading systems from file

Instead of supplying all systems individually in the input, AMSbatch also supports a few ways of loading multiple systems from files at once.

The SystemsFromSMILES keyword allows running jobs directly on a collection of SMILES strings, using RDKit to generate the initial 3D structure of each system:

SystemsFromSMILES
   Methane, C
   Ethane,  CC
   Propane, CCC
   Butane,  CCCC
   Pentane, CCCCC
   Hexane,  CCCCCC
End
SystemsFromSMILES
   Type: Non-standard block
   Description: Generates systems from SMILES strings using the plams.from_smiles() function. Each line should contain the system identifier, followed by a comma and the SMILES string, e.g.: ‘propanol, CCCO’
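
Such a block simply takes the place of the individual System blocks in an AMSbatch script. The following sketch shows how it might be combined with the rest of the input; the job name, the task, and the (elided) DFTB engine settings are placeholder choices, not something required by SystemsFromSMILES:

#!/bin/sh

AMS_JOBNAME=alkanes $AMSBIN/amsbatch << EOF

   Task GeometryOptimization

   Engine DFTB
      [...]
   EndEngine

   SystemsFromSMILES
      Methane, C
      Ethane,  CC
      Propane, CCC
   End

EOF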

The SystemsFromConformers keyword can be used to load a set of conformers as found by our Conformers tool. This is especially useful as the last step of a conformer workflow, e.g. to do a single point calculation for spectra on each conformer within an energy window:

SystemsFromConformers
   File string
   MaxEnergy float
End
SystemsFromConformers
   Type: Block
   Description: Constructs a series of systems (with identifiers ‘conformer1’, ‘conformer2’, etc.) from an RKF or XYZ file from a Conformers run.

   File
      Type: String
      Description: A path to an RKF or concatenated XYZ file.

   MaxEnergy
      Type: Float
      Unit: Hartree
      Description: Energy cut-off relative to the lowest energy conformer.
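
As a minimal sketch, assuming a hypothetical file conformers.rkf produced by an earlier Conformers run, the block could select all conformers within roughly 5 kcal/mol (about 0.008 Hartree) of the lowest energy conformer:

SystemsFromConformers
   File conformers.rkf
   MaxEnergy 0.008
End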

Finally, the SystemsFromTrajectory keyword allows reading frames from a trajectory in RKF or XYZ format:

SystemsFromTrajectory
   File string
   Frames integer_list
End
SystemsFromTrajectory
   Type: Block
   Description: Constructs a series of systems (with identifiers ‘frame1’, ‘frame2’, etc.) from a trajectory file.

   File
      Type: String
      Description: A path to an RKF or concatenated XYZ file.

   Frames
      Type: Integer List
      Description: List of frames to load from the file. If not specified, all frames will be loaded.
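
For example, a sketch (the trajectory path is hypothetical) that loads every tenth frame from the ams.rkf of an earlier molecular dynamics run:

SystemsFromTrajectory
   File ./md.results/ams.rkf
   Frames 10 20 30 40 50
End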

Restart

AMSbatch is based on the PLAMS scripting framework. As such, it can rely on the PLAMS rerun prevention to restart interrupted runs. The easiest way to restart an AMSbatch job is to include the --restart (or short: -r) command line flag:

#!/bin/sh

AMS_JOBNAME=myjob $AMSBIN/amsbatch --restart << EOF
...
EOF

The first (interrupted) run will have created the myjob.results directory. Running the above script again moves that directory to myjob.results.bak and reuses all successful jobs from the first run. (People already familiar with PLAMS will recognize that this works just like the -r flag of the PLAMS launch script.)