5.3. Task: SinglePoint
Set Task SinglePoint
to run all the jobs in the job collection with the
specified parameter interface and engine collection. The type of engine is
inferred from the parameter interface:
- A ReaxFFParameters interface will initialize a ReaxFF engine.
- A GFN1xTBParameters interface will initialize a DFTB engine with Model GFN1xTB.
- A LennardJonesParameters interface will initialize a LennardJones engine.
- A NoParameters interface will not initialize a working engine; all engine settings then need to be specified in the engine collection.
Any additional per-job engine settings are controlled by the ExtraEngineID (known as the ParAMS engine in the GUI).
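For orientation, a minimal input file for this task might look as follows. This is a sketch, not a verbatim template: the key names follow the usual ParAMS input conventions and may differ between versions, and all file names are placeholders.

   Task SinglePoint

   JobCollection job_collection.yaml
   EngineCollection job_collection_engines.yaml
   ParameterInterface parameter_interface.yaml

   DataSet
      Name training_set
      Path training_set.yaml
   End

With a GFN1xTBParameters interface stored in parameter_interface.yaml, such a run would execute every job in job_collection.yaml with the DFTB engine (Model GFN1xTB).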
See also
Tutorial: Benchmarks from literature
The two main options for Task SinglePoint
are:
StoreJobs
- Type: Multiple Choice
- Default value: Auto
- Options: [Auto, Yes, No]
- Description: Keep the result files for each of the jobs. If No, all pipeable jobs are run through the AMS Pipe and no files are saved (not even for the jobs that do not go through the pipe). If Auto, pipeable jobs are run through the pipe and the results of non-pipeable jobs are saved to disk. If Yes, no jobs are run through the pipe and all job results are stored on disk. Warning: if both Store Jobs and Evaluate Loss are No, task SinglePoint will not produce any output.
EvaluateLoss
- Type: Bool
- Default value: Yes
- Description: Evaluate the loss function based on the job results. This produces the same output files as Task Optimization. If No, the loss evaluation is skipped and only the jobs are run (and saved). Warning: if both Store Jobs and Evaluate Loss are No, this task will not produce any output.
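Both options are set as plain input keys. A sketch (assuming the top-level placement used by other ParAMS tasks; check the input reference of your version for the exact spelling):

   Task SinglePoint

   # keep all job results on disk and evaluate the loss function
   StoreJobs Yes
   EvaluateLoss Yes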
To continue from an interrupted run, you can set RestartDirectory to the results/single_point/jobs directory from the previous run (this only works if StoreJobs was enabled in that run).
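For example (a sketch; the path is a placeholder for the results directory of the interrupted run):

   Task SinglePoint
   RestartDirectory path/to/previous/results/single_point/jobs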
RestartDirectory
- Type: String
- Default value: (empty)
- GUI name: Load jobs from:
- Description: Specify a directory to continue interrupted GenerateReference or SinglePoint calculations. The directory depends on the task: GenerateReference: results/reference_jobs; SinglePoint: results/single_point/jobs. Note: if you use the GUI, this directory will be COPIED into the results folder and the name will be prepended with 'dep-'. This can take up a lot of disk space, so you may want to remove the 'dep-' folder after the job has finished.
Parallelization options can be set with the ParallelLevels block (see the sketch at the end of this section). Note: only ParallelLevels%Jobs, ParallelLevels%Processes, and ParallelLevels%Threads are used for SinglePoint jobs.
ParallelLevels
- Type: Block
- GUI name: Parallelization distribution:
- Description: Distribution of threads/processes between the parallelization levels.
Jobs
- Type: Integer
- Default value: 0
- GUI name: Jobs (per loss function evaluation)
- Description: Number of JobCollection jobs to run in parallel for each loss function evaluation.
Processes
- Type: Integer
- Default value: 1
- GUI name: Processes (per Job)
- Description: Number of processes (MPI ranks) to spawn for each JobCollection job. This effectively sets the NSCM environment variable for each job. A value of -1 will disable explicit setting of related variables. We recommend a value of 1 in almost all cases. A value greater than 1 would only be useful if you parametrize DFTB with a serial optimizer and have very few jobs in the job collection.
Threads
- Type: Integer
- Default value: 1
- GUI name: Threads (per Process)
- Description: Number of threads to use for each of the processes. This effectively sets the OMP_NUM_THREADS environment variable. Note that the DFTB engine does not use threads, so this variable has no effect there. We recommend always leaving it at the default value of 1. Please consult the manual of the engine you are parametrizing. A value of -1 will disable explicit setting of related variables.
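A sketch of the corresponding input block (assuming the usual AMS-style block syntax; the Jobs value is only an example, the other two follow the recommendations above):

   ParallelLevels
      Jobs 4        # run 4 job collection jobs in parallel
      Processes 1   # one MPI rank per job (recommended)
      Threads 1     # one OpenMP thread per process (recommended)
   End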