Public functions¶
This chapter gathers information about public functions that can be used in PLAMS scripts.
-
init(path=None, folder=None, config_settings: Dict = None)[source]¶
Initialize the PLAMS environment. Create the global config and the default JobManager.
An empty Settings instance is created and populated with default settings by executing plams_defaults. The following locations are used to search for the defaults file, in order of precedence:
- If the $PLAMSDEFAULTS variable is in your environment and it points to a file, this file is used (executed as a Python script).
- If the $AMSHOME variable is in your environment and $AMSHOME/scripting/scm/plams/plams_defaults exists, it is used.
- Otherwise, the path ../plams_defaults relative to the current file (functions.py) is checked. If the defaults file is not found there, an exception is raised.
Then a JobManager instance is created as config.default_jobmanager, using path and folder to determine the main working folder. Settings for this instance are taken from config.jobmanager. If path is not supplied, the current directory is used. If folder is not supplied, plams_workdir is used.
Optionally, an additional dict (or Settings instance) can be provided as the config_settings argument; it is used to update the values from plams_defaults.
Warning
This function must be called before any other PLAMS command can be executed. Trying to do anything without it results in a crash. See also The launch script.
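For orientation, a typical script calls init at the top and finish at the end. A minimal sketch, assuming the usual top-level scm.plams imports and using purely illustrative path, folder, and override values:
from scm.plams import init, finish, config

# Both arguments are optional: path defaults to the current directory,
# folder to 'plams_workdir'. The config_settings override shown here is illustrative.
init(path='/home/user/calcs', folder='my_workdir',
     config_settings={'log': {'stdout': 5}})

print(config.default_jobmanager.workdir)   # main working folder created by init

# ... set up and run jobs here ...

finish()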
-
finish(otherJM=None)[source]¶
Wait for all threads to finish and clean the environment.
This function must be called at the end of your script for Cleaning job folder to take place. See The launch script for details.
If you used job managers other than the default one, they need to be passed as the otherJM list.
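A sketch of passing an extra job manager to finish. It assumes, as described for init above, that a JobManager can be constructed from a settings branch plus a path and folder; the folder name and the run(jobmanager=...) usage are illustrative, not prescribed:
from scm.plams import init, finish, config, JobManager

init()
# Hypothetical second job manager living next to the default one.
extra_jm = JobManager(config.jobmanager, path='.', folder='extra_workdir')

# ... jobs executed with run(jobmanager=extra_jm) end up in extra_workdir ...

finish(otherJM=[extra_jm])   # extra managers must be listed here so their folders are cleaned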
-
load(filename)[source]¶
Load a previously saved job from a .dill file.
This is just a shortcut for the load_job() method of the default JobManager, config.default_jobmanager.
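For example, a single job pickled by a previous run could be brought back like this (the path is purely illustrative):
job = load('plams_workdir/myjob/myjob.dill')
print(job.name, job.status)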
-
load_all(path, jobmanager=None)[source]¶
Load all jobs from path.
This function works as multiple executions of load_job(). It searches for .dill files inside the directory given by path, yet not directly in it, but one level deeper. In other words, all files matching path/*/*.dill are used. That way a path to the main working folder of a previously run script can be used to import all the jobs run by that script.
In case of partially failed MultiJob instances (some children jobs finished successfully, but not all) the function will search for .dill files in children folders. That means, if path/[multijobname]/ contains some subfolders (for children jobs) but does not contain a .dill file (the MultiJob was not fully successful), it will look into these subfolders. This behavior is recursive up to any folder tree depth.
The purpose of this function is to provide a quick way of restarting a script. Loading all successful jobs from the previous run prevents double work and allows the new execution of the script to proceed directly to the place where the previous execution failed.
Jobs are loaded using the default job manager stored in config.default_jobmanager. If you wish to use a different one, you can pass it as the jobmanager argument of this function.
The returned value is a dictionary containing all loaded jobs as values and absolute paths to .dill files as keys.
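A sketch of the restart pattern described above, with an illustrative name for the previous run's working folder:
old_jobs = load_all('plams_workdir.002')
for dill_path, job in old_jobs.items():
    print(job.name, job.status, dill_path)
# With the successful jobs registered again in config.default_jobmanager,
# the new run of the script can skip the work that already finished.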
-
read_molecules(folder, formats=None)[source]¶
Read all molecules from folder.
Read all the files present in folder with extensions compatible with Molecule.read. The returned value is a dictionary with keys being molecule names (file names without extension) and values being Molecule instances.
The optional argument formats can be used to narrow down the search to files with specified extensions:
molecules = read_molecules('mymols', formats=['xyz', 'pdb'])
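The returned dictionary can then be used like any other, for example (assuming, as for any Molecule, that len(mol) gives the number of atoms):
for name, mol in molecules.items():
    print(name, len(mol))   # file name without extension and the atom count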
Logging¶
PLAMS features a simple logging mechanism. All important actions happening in functions and methods register their activity using log messages. These messages can be printed to the standard output and/or saved to the logfile located in the main working folder.
Every log message has its “verbosity” defined as an integer number: the higher the number, the more detailed and descriptive the message is. In other words, the verbosity number is an inverse measure of the message’s importance. Important events (like “job started”, “job finished”, “something went wrong”) should have low verbosity, whereas less crucial ones (for example “pickling of job X successful”) – higher verbosity. The purpose of that is to allow the user to choose how verbose the whole logfile is. Each log output (either file or stdout) has an integer number associated with it, defining which messages are printed to this channel (for example, if this number is 3, all messages with verbosity 3 or less are printed). That way using a smaller number results in a short log containing only the most relevant information, while larger numbers produce longer and more detailed log messages.
The behavior of the logging mechanism is adjusted by the config.log settings branch with the following keys:
- file (integer) – verbosity of logs printed to the logfile in the main working folder.
- stdout (integer) – verbosity of logs printed to the standard output.
- time (boolean) – print the time of each log event.
- date (boolean) – print the date of each log event.
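For instance, the two channels can be tuned at the top of a script; the particular values below are just an illustration:
config.log.file = 7      # keep full debug detail in the logfile
config.log.stdout = 1    # show only the most important messages on screen
config.log.time = True   # prefix each entry with the time
config.log.date = False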
Log messages within the PLAMS code use four different levels of verbosity:
- 1: important
- 3: normal
- 5: verbose
- 7: debug
Even levels are left empty for the user. For example, if you find level 5 too verbose and still want to be able to switch on and off log messages of your own code, you can log them with verbosity 4.
Note
Your own code can (and should) contain some log()
calls.
They are very important for debugging purposes.
-
log(message, level=0)[source]¶
Log message with verbosity level.
Logs are printed independently to the text logfile (a file called logfile in the main working folder) and to the standard output. If level is equal to or lower than the verbosity of a channel (defined by config.log.file for the logfile and config.log.stdout for the standard output), the message is printed to that channel. Date and/or time can be added based on config.log.date and config.log.time. All logging activity is thread safe.
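A short sketch of user-level logging, using level 4 as suggested above for messages from your own code; the messages themselves are just examples:
log('Starting conformer screening', 1)          # important: visible with most settings
for i in range(3):
    log('Conformer {} prepared'.format(i), 4)   # user-level detail, easy to silence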
Binding decorators¶
Sometimes one wants to expand the functionality of a class by adding a new method or modifying an existing one. It can be done in a few different ways:
- One can go directly to the source code defining the class and modify it there before running a script. Such a change is global – it affects all future scripts, so in most cases it is not a good thing (for defining prerun(), for example).
- Creating a subclass with new or modified method definitions is usually the best solution. It can be done directly in your script before the work is done or in a separate dedicated file executed before the actual script (see The launch script). The newly defined class can then be used instead of the old one. However, this solution fails in some rare cases when a method needs to differ for different instances or when it needs to be changed during the runtime of the script.
- PLAMS binding decorators (add_to_class() and add_to_instance()) can be used.
Binding decorators allow you to bind methods to existing classes or even directly to particular instances without having to define a subclass. Such changes are visible only inside the script in which they are used.
To fully understand how binding decorators work, let us take a look at how Python handles method calling. Assume we have an instance of a class (let’s say myres is an instance of AMSResults) and there is a method call in our script (let it be myres.somemethod(arguments)). Python first looks for somemethod amongst the attributes of myres. If it is not there (which is usually the case, since methods are defined in classes), the attributes of the AMSResults class are checked. If somemethod is still not there, parent classes are checked in the order of inheritance (in our case it’s only Results).
That implies two important things:
- add_to_instance() affects only one particular instance, but is “stronger” than add_to_class() – a method added to an instance always takes precedence over the same method added to (or just defined in) a class (see the sketch below);
- changes done with add_to_class() affect all instances of that particular class, even those created before add_to_class() was used.
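A conceptual sketch of these precedence rules; myjob is assumed to be some already defined job whose run() returns a Results instance:
from scm.plams import add_to_class, add_to_instance, Results

@add_to_class(Results)
def describe(self):
    return 'class-level description'      # now available on every Results instance

results = myjob.run()                     # 'myjob' is assumed to exist

@add_to_instance(results)
def describe(self):
    return 'instance-level description'

results.describe()                        # returns 'instance-level description': the instance wins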
The usage of binding decorators is straightforward.
You simply define a regular function somewhere inside your script and decorate it with one of the decorators (see below).
The function needs to have a valid method syntax, so it should have self
as the first argument and use it to reference the class instance.
-
add_to_class(classname)[source]¶
Add the decorated function as a method to the whole class classname.
The decorated function should follow a method-like syntax, with the first argument self that references the class instance. Example usage:
@add_to_class(ADFResults)
def get_energy(self):
    return self.readkf('Energy', 'Bond Energy')
After executing the above code, all instances of ADFResults in the current script (even the ones created beforehand) are enriched with the get_energy method that can be invoked by:
someadfresults.get_energy()
The added method is also accessible from subclasses of classname, so @add_to_class(Results) in the above example would work too.
If classname is Results or any of its subclasses, the added method will be wrapped with the thread safety guard (see Synchronization of parallel job executions).
-
add_to_instance(instance)[source]¶
Add the decorated function as a method to one particular instance.
The decorated function should follow a method-like syntax, with the first argument self that references the class instance. Example usage:
results = myjob.run()

@add_to_instance(results)
def get_energy(self):
    return self.readkf('Energy', 'Bond Energy')

results.get_energy()
The added method is accessible only for that one particular instance and it overrides any methods with the same name defined on a class level (in the original class’ source) or added with the add_to_class() decorator.
If instance is an instance of Results or any of its subclasses, the added method will be wrapped with the thread safety guard (see Synchronization of parallel job executions).
Technical
Each of the above decorators is in fact a decorator factory that, given an object (a class or an instance), produces a decorator binding a function as a method of that object. Both decorators add instance methods only; they cannot be used for static or class methods.
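In practice this means the decorator syntax is just shorthand for calling the factory and applying the returned decorator by hand; roughly:
def get_energy(self):
    return self.readkf('Energy', 'Bond Energy')

add_to_class(ADFResults)(get_energy)   # same effect as decorating get_energy with @add_to_class(ADFResults)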