ADF (pre-2020 version)¶
Warning
This page describes the old interface to the standalone ADF binary. Starting from AMS2020, ADF is an AMS engine and should be run using AMSJob. If you are running AMS2019.3 or an older version, you should still use ADFJob.
ADF can be run from PLAMS using the ADFJob class and the corresponding ADFResults class. These are subclasses of, respectively, SCMJob and SCMResults, which gather common pre-AMS logic for all members of the former ADFSuite.
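A minimal sketch of a typical workflow is shown below. The basis and XC settings and the unit argument of get_energy() are illustrative assumptions; init() and finish() are needed when running this as a regular Python script (the plams launcher adds them automatically):
from scm.plams import *

init()

# Build a job from an xyz file, set a few illustrative input options, run it
# and read the bonding energy from the results.
mol = Molecule('water.xyz')
myjob = ADFJob(molecule=mol, name='water_sp')
myjob.settings.input.basis.type = 'DZP'
myjob.settings.input.xc.gga = 'PBE'

results = myjob.run()
if myjob.check():
    print('Bonding energy:', results.get_energy(unit='kcal/mol'))

finish()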
Preparing input¶
Input files for ADF consist of blocks and subblocks containing keys and values. That kind of structure is easily reflected by Settings objects, since they are built in a similar way. The input file is generated based on the input branch of the job's Settings: all data present there is translated into input contents. Nested Settings instances define blocks and subblocks, as in the example below:
myjob = ADFJob(molecule=Molecule('water.xyz'))
myjob.settings.input.basis.type = 'DZP'
myjob.settings.input.basis.core = 'None'
myjob.settings.input.basis.createoutput = 'None'
myjob.settings.input.scf.iterations = 100
myjob.settings.input.scf.converge = '1.0e-06 1.0e-06'
myjob.settings.input.save = 'TAPE13'
The input file created during the execution of myjob looks like:
atoms
#coordinates from water.xyz
end
basis
createoutput None
core None
type DZP
end
save TAPE13
scf
converge 1.0e-06 1.0e-06
iterations 100
end
As you can see, entries present in myjob.settings.input are listed in alphabetical order. If an entry is a regular key-value pair, it is printed on one line (like save TAPE13 above). If an entry is a nested Settings instance, it is printed as a block, and the entries in this instance correspond to the contents of the block. Both keys and values are kept in their original case. Strings used as values can contain spaces, like converge above – the whole string is printed after the key. This allows handling lines that need to contain more than one key=value pair. If you need to put a key without any value, True or an empty string can be given as the value:
>>> myjob.settings.input.geometry.SP = True
>>> myjob.settings.input.writefock = ''
# translates to:
geometry
SP
end
writefock
If the value of a particular key is False, that key is omitted.
To produce an empty block simply type:
>>> myjob.settings.input.geometry # this is equivalent to myjob.settings.input.geometry = Settings()
#
geometry
end
The algorithm translating Settings contents into the input file does not check the correctness of the data – it simply takes the keys and values from the Settings instance and puts them in the text file. Because of that, you will not be warned if you make a typo, use a wrong keyword or use improper syntax. Beware of that.
>>> myjob.settings.input.dog.cat.apple = 'pear'
#
dog
cat
apple pear
subend
end
Some blocks require (or allow) some data to be put in the header line, next to the block name. The special key _h is helpful in these situations:
>>> myjob.settings.input.someblock._h = 'header=very important'
>>> myjob.settings.input.someblock.key1 = 'value1'
>>> myjob.settings.input.someblock.key2 = 'value2'
#
someblock header=very important
key1 value1
key2 value2
end
The order of blocks within the input file and of subblocks within a parent block follows the Settings iteration order, which is lexicographical (however, SCMJob is smart enough to put blocks like DEFINE or UNITS at the top of the input). In rare cases you may want to override this order, for example when you supply the ATOMS block manually, which can be done when automatic molecule handling is disabled (see below). That behavior can be achieved with another type of special key:
>>> myjob.settings.input.block._1 = 'entire line that has to be the first line of block'
>>> myjob.settings.input.block._2 = 'second line'
>>> myjob.settings.input.block._4 = 'I will not be printed'
>>> myjob.settings.input.block.key1 = 'value1'
>>> myjob.settings.input.block.key2 = 'value2'
#
block
entire line that has to be the first line of block
second line
key1 value1
key2 value2
end
Sometimes one needs to put multiple instances of the same key within one block, for example in the CONSTRAINTS block in ADF. This can be done by using a list of values instead of a single value:
>>> myjob.settings.input.constraints.atom = [1,5,4]
>>> myjob.settings.input.constraints.block = ['ligand', 'residue']
#
constraints
atom 1
atom 5
atom 4
block ligand
block residue
end
Finally, in some rare cases a key and value pair in the input needs to be printed in the form key=value instead of key value. When the value is a string starting with an equals sign, no space is inserted between the key and the value:
>>> myjob.settings.input.block.key = '=value'
#
block
key=value
end
Sometimes the value of a key in the input file needs to be a path to some file, usually a KF file with the results of a previous calculation. Such a path can of course be given explicitly (newjob.restart = '/home/user/science/plams.12345/oldjob/oldjob.t21'), but for the user's convenience instances of SCMJob or SCMResults (or directly KFFile) can also be used. The algorithm will detect them and use the absolute path to the main KF file instead:
>>> myjob.settings.input.restart = oldjob
>>> myjob.settings.input.fragment.frag1 = fragjob
#
restart /home/user/science/plams.12345/oldjob/oldjob.t21
fragment
frag1 /home/user/science/fragmentresults/somejob/somejob.t21
end
A Molecule instance stored in the job's molecule attribute is automatically processed during input file preparation and printed in the proper format, depending on the program. It is possible to disable this and give the molecular coordinates explicitly as entries in myjob.settings.input. Automatic molecule processing can be turned off with myjob.settings.ignore_molecule = True, as shown in the sketch below.
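For example, a sketch of supplying the atoms block by hand with automatic processing disabled (the coordinates and the use of the _1, _2, ... ordering keys are illustrative):
>>> myjob.settings.ignore_molecule = True
>>> myjob.settings.input.atoms._1 = '1 O  0.000000  0.000000  0.000000'
>>> myjob.settings.input.atoms._2 = '2 H  0.758602  0.000000  0.504284'
>>> myjob.settings.input.atoms._3 = '3 H -0.758602  0.000000  0.504284'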
Special atoms in ADF¶
In ADF, the atomic coordinates in the atoms block can be enriched with additional information, such as special atom names (for example when using different isotopes) or block/fragment membership. Since the contents of the atoms block are usually generated automatically based on the Molecule associated with a job, this information needs to be supplied inside the given Molecule instance. Details of every atom can be adjusted separately by modifying attributes of the corresponding Atom instance according to the following convention:
- The atomic symbol is generated based on the atomic number stored in the atnum attribute of the corresponding Atom. Atomic number 0 corresponds to the "dummy atom", for which the symbol is empty.
- If atom.properties.ghost exists and is True, the above atomic symbol is prefixed with Gh..
- If atom.properties.name exists, its contents are added after the symbol. Hence, setting atnum to 0 and adjusting name allows an arbitrary string to be used as the atomic symbol.
- If atom.properties.adf.fragment exists, its contents are added after the atomic coordinates with an f= prefix.
- If atom.properties.adf.block exists, its contents are added after the atomic coordinates with a b= prefix.
The following example illustrates the usage of this mechanism:
>>> mol = Molecule('xyz/Ethanol.xyz')
>>> mol[1].properties.ghost = True
>>> mol[2].properties.name = 'D'
>>> mol[3].properties.ghost = True
>>> mol[3].properties.name = 'T'
>>> mol[4].properties.atnum = 0
>>> mol[4].properties.name = 'J.XYZ'
>>> mol[5].properties.atnum = 0
>>> mol[5].properties.name = 'J.ASD'
>>> mol[5].properties.ghost = True
>>> mol[6].properties.adf.fragment = 'myfragment'
>>> mol[7].properties.adf.block = 'block1'
>>> mol[8].properties.adf.fragment = 'frag'
>>> mol[8].properties.adf.block = 'block2'
>>> myjob = ADFJob(molecule=mol)
#
atoms
1 Gh.C 0.01247 0.02254 1.08262
2 C.D -0.00894 -0.01624 -0.43421
3 Gh.H.T -0.49334 0.93505 1.44716
4 J.XYZ 1.05522 0.04512 1.44808
5 Gh.J.ASD -0.64695 -1.12346 2.54219
6 H 0.50112 -0.91640 -0.80440 f=myfragment
7 H 0.49999 0.86726 -0.84481 b=block1
8 H -1.04310 -0.02739 -0.80544 f=frag b=block2
9 O -0.66442 -1.15471 1.56909
end
Preparing runscript¶
Runscripts for ADF are very simple. The only adjustable option (apart from the usual pre, post, shebang and stdout_redirect, which are common for all single jobs) is myjob.settings.runscript.nproc, indicating the number of parallel processes to run ADF with (equivalent to the -n flag or the NSCM environment variable).
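For example (a small illustrative snippet):
>>> myjob.settings.runscript.nproc = 8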
API¶
class ADFResults(job)
    A specialized SCMResults subclass for accessing the results of ADFJob.

    get_properties()
        Return a dictionary with all the entries from the Properties section in the main KF file ($JN.t21).

    get_main_molecule()
        Return a Molecule instance based on the Geometry section in the main KF file ($JN.t21).
        For runs with multiple geometries (geometry optimization, transition state search, intrinsic reaction coordinate) this is the final geometry. In such a case, to access the initial (or any intermediate) coordinates please use get_input_molecule() or extract coordinates from the History section, variables xyz 1, xyz 2 and so on. Mind the fact that all coordinates written by ADF to the History section are in bohr and in the internal atom order:

        mol = results.get_molecule(section='History', variable='xyz 1', unit='bohr', internal=True)

    get_input_molecule()
        Return a Molecule instance with the initial coordinates.
        All data used by this method is taken from the $JN.t21 file. The molecule attribute of the corresponding job is ignored.

    get_gradients(energy_unit='au', dist_unit='bohr')
        Return the Cartesian gradients from the 'Gradients_InputOrder' field of the 'GeoOpt' section of the KF file, expressed in the given units. The returned value is a numpy array with shape (nAtoms, 3).
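        Example (a usage sketch; it assumes the job was a geometry optimization, so the GeoOpt section is present):

        >>> grad = results.get_gradients(energy_unit='kcal/mol', dist_unit='angstrom')
        >>> print(grad.shape)  # (nAtoms, 3)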
    _extract_hessian(section, variable, internal_order)
        Extract the Hessian from section/variable of the TAPE21 file. Reorder from internal to input order if internal_order is True.

    get_hessian()
        Try extracting the Hessian, either analytical or numerical, whichever is present in the TAPE21 file, in the input order. The returned value is a square numpy array of size 3*nAtoms.
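        Example (a sketch assuming a frequencies calculation was performed, so a Hessian is stored on TAPE21):

        >>> hess = results.get_hessian()
        >>> print(hess.shape)  # (3*nAtoms, 3*nAtoms)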
    get_energy_decomposition(unit='au')
        Return a dictionary with energy decomposition terms, expressed in unit.
        The following keys are present in the returned dictionary: Electrostatic, Kinetic, Coulomb, XC. The sum of all the values is equal to the value returned by get_energy(). Note that additional contributions might be included; up to now the only such contribution is Dispersion.
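        Example (a usage sketch; the unit choice is illustrative):

        >>> terms = results.get_energy_decomposition(unit='kcal/mol')
        >>> for name, value in terms.items():
        ...     print(name, value)
        >>> print(sum(terms.values()))  # should match results.get_energy(unit='kcal/mol')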
    get_frequencies(unit='cm^-1')
        Return a numpy array of vibrational frequencies, expressed in unit.

    get_timings()
        Return a dictionary with timing statistics of the job execution. The returned dictionary contains the keys cpu, system and elapsed. The values are the corresponding timings, expressed in seconds.
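        Example (a small usage sketch):

        >>> timings = results.get_timings()
        >>> print('Elapsed: {:.1f} s, CPU: {:.1f} s'.format(timings['elapsed'], timings['cpu']))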
    recreate_molecule()
        Recreate the input molecule for the corresponding job based on files present in the job folder. This method is used by load_external().

    recreate_settings()
        Recreate the input Settings instance for the corresponding job based on files present in the job folder. This method is used by load_external().

Parent abstract classes:

class SCMJob(molecule=None, name='plamsjob', settings=None, depend=None)
    Abstract class gathering common mechanisms for jobs with ADF Suite programs.
    get_input()
        Generate the input file. This method is just a wrapper around _serialize_input().
        Each instance of SCMJob or SCMResults present as a value in the settings.input branch is replaced with an absolute path to the main KF file of that job.

    get_runscript()
        Generate a runscript. The returned string is of the form:

        $AMSBIN/name [-n nproc] <jobname.in [>jobname.out]

        name is taken from the class attribute _command. The -n flag is added if settings.runscript.nproc exists. [>jobname.out] is used based on settings.runscript.stdout_redirect.

    check()
        Check if the termination status variable from the General section of the main KF file equals NORMAL TERMINATION.

    hash_input()
        Calculate the hash of the input file.
        All instances of SCMJob or SCMResults present as values in the settings.input branch are replaced with hashes of the corresponding jobs' inputs.
    _serialize_input(special)
        Transform all contents of the settings.input branch into a string with blocks, keys and values.
        On the highest level the alphabetic order of iteration is modified: keys occurring in the attribute _top are printed first. Special values can be indicated with the special argument, which should be a dictionary having types of objects as keys and functions translating these types to strings as values.
        Automatic handling of molecule can be disabled with settings.ignore_molecule = True.
    _serialize_mol()
        Process the Molecule instance stored in the molecule attribute and add it as relevant entries of the settings.input branch. Abstract method.

    _remove_mol()
        Remove from settings.input all entries added by _serialize_mol(). Abstract method.

    static _atom_symbol(atom)
        Return the atomic symbol of atom. Ensure proper formatting for ADFSuite input, taking into account the ghost and name entries in properties of atom.
    classmethod from_inputfile(filename: str, heredoc_delimit: str = 'eor', **kwargs) → scm.plams.interfaces.adfsuite.scmjob.SCMJob
        Construct an SCMJob instance from an ADF input file.
        If a runscript is provided, then this method will attempt to extract the input file based on the heredoc delimiter (see heredoc_delimit).
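        Example (a sketch; it assumes adf.in is a plain ADF input file, or a runscript using the default 'eor' heredoc delimiter, and that the method is called on a concrete subclass such as ADFJob):

        >>> myjob = ADFJob.from_inputfile('adf.in')
        >>> print(myjob.settings.input)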
    static settings_to_mol(s: scm.plams.core.settings.Settings) → None
        An abstract method for extracting molecules from input settings (see SCMJob.from_inputfile()).

class SCMResults(job)
    Abstract class gathering common mechanisms for results of ADF Suite programs.
    collect()
        Collect files present in the job folder. Use the parent method from Results, then create an instance of KFFile for the main KF file and store it as the _kf attribute.

    refresh()
        Refresh the contents of the files list. Use the parent method from Results, then look at all attributes that are instances of KFFile and check if they point to existing files. If not, try to reinstantiate them with the current job path (that can happen while loading a pickled job after the entire job folder was moved).

    readkf(section, variable)
        Read data from section/variable of the main KF file.
        The type of the returned value depends on the type of the variable defined inside the KF file. It can be: a single int, a list of ints, a single float, a list of floats, a single boolean, a list of booleans, or a string.
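        Example (the termination status variable is the one used by check() above; the second variable name is an assumption and may differ between KF files):

        >>> status = results.readkf('General', 'termination status')
        >>> natoms = results.readkf('Geometry', 'nr of atoms')  # variable name assumed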
    newkf(filename)
        Create a new KFFile instance using the file filename in the job folder.
        Example usage:

        >>> res = someadfjob.run()
        >>> tape13 = res.newkf('$JN.t13')
        >>> print(tape13.read('Geometry', 'xyz'))

    get_properties()
        Return a dictionary with all the entries from the Properties section in the main KF file.

    get_molecule(section, variable, unit='bohr', internal=False, n=1)
        Read molecule coordinates from section/variable of the main KF file.
        The returned Molecule instance is created by copying a molecule from the associated SCMJob instance and updating the atomic coordinates with values read from section/variable. The format in which coordinates are stored is not consistent for all programs or even for different sections of the same KF file. Sometimes coordinates are stored in bohr, sometimes in angstrom. The order of atoms can be either the input order or the internal order. These settings can be adjusted with the unit and internal parameters. Some variables store more than one geometry; in those cases n can be used to choose the preferred one.
    _get_single_value(section, variable, output_unit, native_unit='au')
        A small method template for all the single-number "get_something()" methods extracting data from the main KF file. The returned value is converted from native_unit to output_unit.

    _atomic_numbers_input_order()
        Return a list of atomic numbers, in the input order. Abstract method.

    _export_attribute(attr, other)
        If attr is a KF file, take care of a proper path. Otherwise use the parent method. See Results._copy_to for details.

    to_input_order(data)
        Reorder any iterable data from the internal atom order to the input atom order. The length of data must be equal to the number of atoms, otherwise an exception is raised. The returned value is a container of the same type as data.
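        Example (a sketch with a hypothetical section/variable pair that stores one value per atom in the internal order):

        >>> data_internal = results.readkf('SomeSection', 'SomeVariable')  # hypothetical names
        >>> data_input = results.to_input_order(data_internal)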
    readarray(section: str, subsection: str, **kwargs) → numpy.ndarray
        Read data from section/subsection of the main KF file and return it as a NumPy array.
        All additional keyword arguments are passed on to the numpy.array function.
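        Example (a sketch; the section/variable pair follows the newkf() example above and may need adjusting to the contents of your KF file):

        >>> xyz = results.readarray('Geometry', 'xyz', dtype=float).reshape(-1, 3)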