utopya_backend.model package

Contents

utopya_backend.model package#

Provides classes that can be used for model implementation:

  • BaseModel: provides shared simulation infrastructure like a logger, a shared RNG and signal handling.

  • StepwiseModel: a base model class that is optimized for stepwise model iteration.

All these base models still need to be subclassed, with certain methods implemented.

Submodules#

utopya_backend.model.base module#

This module implements the BaseModel class which can be inherited from for implementing a utopya-controlled model.

Its main aim is to provide shared simulation infrastructure; it does not make further assumptions about the abstractions a model uses, such as the step-wise iteration done in StepwiseModel.

class utopya_backend.model.base.BaseModel(*, cfg_file_path: str, _log: logging.Logger = None)[source]#

Bases: ABC

An abstract base model class that can be inherited from to implement a model. This class provides basic simulation infrastructure that couples the model to utopya as a frontend:

  • Provides a shared RNG instance and a logger.

  • Includes logic to evaluate num_steps and write_{start,every}.

  • Emits monitor information to the utopya frontend, informing it about simulation progress.

For more specific purposes, there are specialized classes that are built around a shared modelling paradigm like step-wise iteration of the model, see StepwiseModel.

ATTACH_SIGNAL_HANDLERS: bool = True#

If true, calls attach_signal_handlers() to attach signal handlers for stop conditions and interrupts.

Hint

You may want to disable this in a subclass if your model cannot gracefully handle a signal that is meant to stop the simulation.

USE_SYS_EXIT: bool = True#

If false, will not call sys.exit upon a handled signal, but will just return the error code.

__init__(*, cfg_file_path: str, _log: logging.Logger = None)[source]#

Initialize the model instance, constructing an RNG and HDF5 group to write the output data to.

Todo

Allow initializing from a “parent model” such that hierarchical nesting of models becomes possible.

Parameters:
  • cfg_file_path (str) – The path to the config file.

  • _log (logging.Logger, optional) – The logger instance from which to create a child logger for this model. If not given, will use the backend logger instance.

__del__()[source]#

Takes care of tearing down the model

property name: str#

Returns the name of this model instance

property log: logging.Logger#

Returns the model’s logger instance

property rng: numpy.random.Generator#

Returns the shared random number generator instance

property h5group: h5py.Group#

The HDF5 group this model should write to

property root_cfg: dict#

Returns the root configuration of the simulation run

property cfg: dict#

Returns the model configuration, self.root_cfg[self.name]

property n_iterations: int#

Returns the number of iterations performed by this base class, i.e. the number of times iterate() was called.

Note

This may not correspond to potentially existing other measures that specialized base classes implement. For instance, utopya_backend.model.step.StepwiseModel.time() is not the same as the number of iterations.

run() int[source]#

Performs a simulation run for this model, calling the iterate() method while should_iterate() is true. In addition, it takes care to invoke data writing and monitoring.

Returns:

exit code, non-zero upon handled signals

Return type:

int

abstract setup() None[source]#

Called upon initialization of the model

abstract should_iterate() bool[source]#

A method that determines whether iterate() should be called or not.

abstract iterate() None[source]#

Called repeatedly until the end of the simulation, which can come about either because should_iterate() returned False or because of an interrupt signal or a stop condition.

abstract should_write() bool[source]#

A method that determines whether write_data() should be called after an iteration or not.

abstract write_data() None[source]#

Performs data writing if should_write() returned true.
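Taken together, run() drives the five abstract methods above. The control flow can be sketched with a self-contained stand-in; this is illustrative only, since the real BaseModel additionally handles monitoring, signals, and the prolog/epilog:

```python
class MiniModel:
    """Illustrative stand-in for a BaseModel subclass: iterates five times
    and records the state after every iteration. Not the actual API."""

    def __init__(self):
        self.n_iterations = 0
        self.written = []
        self.setup()

    # -- the five methods a BaseModel subclass must implement -----------
    def setup(self):
        self.state = 0

    def should_iterate(self) -> bool:
        return self.n_iterations < 5

    def iterate(self):
        self.state += 1

    def should_write(self) -> bool:
        return True  # write after every iteration

    def write_data(self):
        self.written.append(self.state)

    # -- simplified sketch of the run() driver ---------------------------
    def run(self) -> int:
        while self.should_iterate():
            self.iterate()
            self.n_iterations += 1
            if self.should_write():
                self.write_data()
        return 0  # non-zero only upon handled signals
```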

_parse_root_cfg(**_) None[source]#

Invoked from within __init__(), parses and handles configuration parameters.

Hint

This method can be specialized in a subclass.

_setup_finished() None[source]#

Invoked from within __init__() after the call to the setup() method has finished.

Hint

This method can be specialized in a subclass.

monitor(monitor_info: dict) dict[source]#

Called when a monitor emission is imminent; should be used to update the (model-specific) monitoring information passed here as arguments.

Hint

This method can be specialized in a subclass.

compute_progress() float[source]#

Computes the progress of the simulation run. Should return a float between 0 and 1 and should always be monotonic.

Hint

This method can be specialized in a subclass.
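For a model with a known number of steps, such a specialization might compute the fraction of elapsed steps, clamped to [0, 1]. Written as a free function here for illustration; in a subclass this would be a method reading self.time and self.num_steps:

```python
def compute_progress(num_steps: int, time: int) -> float:
    """Monotonic progress estimate in [0, 1] (illustrative sketch)."""
    return min(1.0, max(0.0, time / num_steps))
```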

show_iteration_info() None[source]#

A method that informs about the current iteration

prolog() None[source]#

Invoked at the beginning of run(), before the first call to iterate().

Hint

This method can be specialized in a subclass.

epilog(*, finished_run: bool) None[source]#

Always invoked at the end of run().

This may happen either after should_iterate() returned False or any time before that, e.g. due to an interrupt signal or a stop condition. In the latter case, finished_run will be False.

Hint

This method can be specialized in a subclass.

_pre_run()[source]#

Invoked at beginning of run()

_post_run(*, finished_run: bool) None[source]#

Invoked at end of run()

_pre_iterate()[source]#

Invoked at beginning of a full iteration

_post_iterate()[source]#

Invoked at end of a full iteration (including monitoring, data writing etc.)

_pre_monitor()[source]#

Invoked before monitor emission

_post_monitor()[source]#

Invoked after monitor emission

_attach_signal_handlers()[source]#

Invoked from __init__(), attaches a signal handler for the stop condition signal and other interrupts.

Note

This should only be overwritten if you want or need to do your own signal handling.

_check_signals() Union[None, int][source]#

Evaluates whether the iteration should stop due to an (expected) signal, e.g. from a stop condition or an interrupt. If it should stop, will return an integer, which can then be passed into sys.exit().

Exit codes will be 128 + abs(signum), as is convention. This is also expected by WorkerManager and is used to behave differently on a stop-condition-related signal than on an interrupt signal.

Returns:

An integer if the signal denoted that there should be a system exit; None otherwise.

Return type:

Union[None, int]
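The 128 + abs(signum) convention can be reproduced directly with the standard library. Note that signal numbers are platform-dependent; the values in the comments hold on POSIX systems:

```python
import signal

def exit_code_for(signum: int) -> int:
    """Conventional shell-style exit code for a signal-terminated process."""
    return 128 + abs(signum)

# On POSIX, SIGINT is 2, so an interrupted run exits with code 130;
# SIGTERM is 15, mapping to 143.
```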

_monitor_should_emit(*, t: Optional[float] = None) bool[source]#

Evaluates whether the monitor should emit. This method will only return True once a monitor emit interval has passed since the last time the monitor was emitted.

Parameters:

t (float, optional) – If given, uses this time value, otherwise calls time.time().

Returns:

Whether to emit or not.

Return type:

bool
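The interval check can be sketched as a small, self-contained throttle; this is an illustrative reimplementation, whereas the real method lives on BaseModel and reads its emit interval from the configuration:

```python
import time

class EmitThrottle:
    """Returns True at most once per emit interval (illustrative sketch)."""

    def __init__(self, emit_interval: float):
        self.emit_interval = emit_interval
        self._last_emit = -float("inf")

    def should_emit(self, t: float = None) -> bool:
        # If no time is given, fall back to the wall clock
        t = time.time() if t is None else t
        if t - self._last_emit >= self.emit_interval:
            self._last_emit = t
            return True
        return False
```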

_emit_monitor()[source]#

Actually emits the monitoring information using print().

trigger_monitor(*, force: bool = False)[source]#

Invokes the monitoring procedure:

  1. Checks whether _monitor_should_emit().

  2. If so, calls monitor() to update monitoring information.

  3. Then calls _emit_monitor() to emit that information.

If force is given, will always emit.

Hint

This method should not be overridden, but it can be invoked from within the subclass at any desired point.

_get_root_cfg(cfg_file_path: str, *, _log: logging.Logger) dict[source]#

Retrieves the root configuration for this simulation run by loading it from the given file path.

_setup_loggers(_log: logging.Logger)[source]#

Sets up the model logger and configures the backend logger according to the log_levels entry set in the root configuration.

Todo

Allow setting the logging format as well.

Parameters:

_log (logging.Logger) – The logger to initialize the model logger from, typically the backend logger instance.

_setup_rng(*, seed: int, seed_numpy_rng: Optional[Union[bool, int]] = None, seed_system_rng: Optional[Union[bool, int]] = None, **rng_kwargs) numpy.random.Generator[source]#

Sets up the shared RNG.

Note

If also seeding the other RNGs, make sure to use different seeds for them, such that random number sequences are ensured to be different even if the underlying generator may be the same.

Parameters:
  • seed (int) – The seed for the new, model-specific RNG, constructed via numpy.random.default_rng()

  • seed_numpy_rng (Optional[Union[bool, int]], optional) – If not False or None, will also seed numpy’s singleton (i.e. default) RNG by calling numpy.random.seed(). If True, will use seed + 1 for that.

  • seed_system_rng (Optional[Union[bool, int]], optional) – If not False or None, will also seed the system’s default RNG by calling random.seed(). If True, will use seed + 2 for that.

  • **rng_kwargs – Passed on to numpy.random.default_rng()
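The offset-seeding scheme described above can be sketched as follows. The function name is hypothetical and the parameter handling is simplified relative to the actual method:

```python
import random
import numpy as np

def setup_rng(seed: int, seed_numpy_rng: bool = True,
              seed_system_rng: bool = True) -> np.random.Generator:
    # Model-specific generator
    rng = np.random.default_rng(seed)
    # Offset seeds keep the three streams distinct even if the
    # underlying bit generators happen to coincide
    if seed_numpy_rng:
        np.random.seed(seed + 1)   # numpy's legacy singleton RNG
    if seed_system_rng:
        random.seed(seed + 2)      # Python's built-in RNG
    return rng
```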

_setup_output_file() h5py.File[source]#

Creates the output file for this model; by default, it is an HDF5 file that is managed by an h5py.File object.

Note

This method can be subclassed to implement different output file formats. In that case, consider not using the _h5file and _h5group attributes but something else.

_setup_output_group(h5file: h5py.File = None) h5py.Group[source]#

Creates the group that this model’s output is written to

_invoke_iterate()[source]#

_invoke_write_data()[source]#

_invoke_setup()[source]#
_invoke_prolog()[source]#

Helps invoking the prolog()

_invoke_epilog(**kwargs)[source]#

Helps invoking the epilog()

utopya_backend.model.step module#

Implements a model that is optimized for a stepwise iteration paradigm.

class utopya_backend.model.step.StepwiseModel(*, cfg_file_path: str, _log: logging.Logger = None)[source]#

Bases: BaseModel

A base class that is optimized for models based on stepwise integration, i.e. with constant time increments.

should_iterate() bool[source]#

Iteration should continue until the maximum number of steps is reached.

iterate()[source]#

Performs a single iteration using a stepwise integration over a fixed time interval. The step consists of:

  1. the simulation step via perform_step()

  2. incrementing the model’s time variable

should_write() bool[source]#

Decides whether to write data or not

abstract perform_step()[source]#

Called once from each iterate() call
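The stepwise scheme can be sketched with a self-contained stand-in. This is illustrative only: the real class inherits all of BaseModel's infrastructure, and the exact should_write() condition shown here is an assumption based on the write_start/write_every semantics documented below:

```python
class MiniStepwise:
    """Mimics StepwiseModel's iteration and write cadence: perform_step(),
    then increment time; write once time has reached write_start and the
    offset is a multiple of write_every. Not the actual utopya_backend class."""

    def __init__(self, *, num_steps: int, write_every: int = 1, write_start: int = 0):
        self.num_steps = num_steps
        self.write_every = write_every
        self.write_start = write_start
        self.time = 0
        self.write_times = []

    def should_iterate(self) -> bool:
        return self.time < self.num_steps

    def perform_step(self):
        pass  # model dynamics go here

    def iterate(self):
        self.perform_step()
        self.time += 1

    def should_write(self) -> bool:
        return (self.time >= self.write_start
                and (self.time - self.write_start) % self.write_every == 0)

    def run(self):
        while self.should_iterate():
            self.iterate()
            if self.should_write():
                self.write_times.append(self.time)
```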

_parse_root_cfg(*, num_steps: int, write_every: int = 1, write_start: int = 0, **_)[source]#

Extracts class-specific parameters from the model configuration.

Parameters:
  • num_steps (int) – Number of iteration steps to make

  • write_every (int, optional) – How frequently to write data

  • write_start (int, optional) – When to start writing data

  • **_ – Ignored

_setup_finished()[source]#

Called after the model setup has finished.

compute_progress() float[source]#

Computes simulation progress

show_iteration_info() None[source]#

Informs about the state of the iteration

_invoke_epilog(*, finished_run: bool, **kwargs)[source]#

Overwrites the parent method and logs some information in case the epilog is invoked with finished_run being False.

property time: int#

Returns the current time, which is incremented after each step.

Note

This is not the same as n_iterations that BaseModel keeps track of!

property num_steps: int#

Returns the num_steps parameter for this simulation run.

property write_start: int#

Returns the write_start parameter for this simulation run.

property write_every: int#

Returns the write_every parameter for this simulation run.

create_ts_dset(name: str, *, extra_dims: tuple = (), sizes: dict = {}, coords: dict = {}, compression=2, **dset_kwargs) h5py.Dataset[source]#

Creates an h5py.Dataset that is meant to store time series data. It supports adding extra dimensions to the back of the dataset and supports writing attributes that can be read (by the dantro.utils.coords module) to make dimension and coordinate labels available during data evaluation.

The time dimension will be the very first one (axis=0) of the resulting dataset. Also, the initial size will be zero along that dimension – you will need to resize it before writing data to it.

Parameters:
  • name (str) – Name of the dataset

  • extra_dims (tuple, optional) – Sequence of additional dimension names, which will follow the time dimension

  • sizes (dict, optional) – Sizes of the additional dimensions; if not given, will not limit the maximum size in that dimension.

  • coords (dict, optional) – Attributes that allow coordinate mapping will be added for all keys in this dict. Values can be either a dict with the mode and coords keys, specifying parameters for dantro.utils.coords.extract_coords(), or a list or 1D array that specifies coordinate values.

  • compression (int, optional) – Compression parameter for h5py dataset

  • **dset_kwargs – Passed on to h5py.Group.create_dataset()

Raises:

ValueError – If an invalid dimension name is given in coords or if the size of the coordinates did not match the dimension size
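The resize-before-write behaviour noted above follows standard h5py semantics for datasets with an unlimited first axis. A minimal sketch using plain h5py (the dataset name and sizes are made up; the helper's coordinate-attribute handling is specific to this package and not reproduced here):

```python
import os
import tempfile
import numpy as np
import h5py

# Create a dataset whose time axis (axis 0) starts at size zero and is
# unlimited, as create_ts_dset does; then grow it before each write.
path = os.path.join(tempfile.mkdtemp(), "out.h5")
with h5py.File(path, "w") as f:
    dset = f.create_dataset(
        "my_time_series",
        shape=(0, 3),            # zero time steps, one extra dim of size 3
        maxshape=(None, 3),      # time axis unlimited
        compression="gzip", compression_opts=2,
    )
    for t in range(5):
        dset.resize(t + 1, axis=0)   # grow the time axis ...
        dset[t, :] = np.full(3, t)   # ... then write the new row
    final_shape = dset.shape
```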
