Release: 0.5
Date: Jul 11, 2019
Dakotathon provides a Python API and BMI for the Dakota iterative systems analysis toolkit. The interface follows the documentation for the keywords used to configure a Dakota experiment.
A Python interface to the Dakota iterative systems analysis toolkit.
class dakotathon.dakota.Dakota(run_directory=os.getcwd(), configuration_file='dakota.yaml', input_file='dakota.in', output_file='dakota.out', run_log='run.log', error_log='stderr.log', template_file=None, auxiliary_files=(), **kwargs)
Bases: dakotathon.experiment.Experiment
Controller for configuring and running a Dakota experiment.
__init__(run_directory=os.getcwd(), configuration_file='dakota.yaml', input_file='dakota.in', output_file='dakota.out', run_log='run.log', error_log='stderr.log', template_file=None, auxiliary_files=(), **kwargs)
Initialize a Dakota experiment.
Called with no parameters, a Dakota experiment with basic defaults (a vector parameter study with the built-in rosenbrock example) is created. Use the method keyword to set the Dakota analysis method in a new experiment.
The template_file keyword gives the path to a Dakota template file, in which model parameter values are replaced with placeholders such as {total_annual_precipitation} (default is None).
Create a generic Dakota experiment:
>>> d = Dakota()
Create a vector parameter study experiment:
>>> d = Dakota(method='vector_parameter_study')
auxiliary_files
Auxiliary files used by the component.
configuration_file
The configuration file path.
from_file_like(file_like)
Create a Dakota instance from a file-like object.
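For example, rebuild an experiment from a previously serialized configuration (a sketch, assuming from_file_like is called on the class and dakota.yaml exists):
>>> with open('dakota.yaml') as fp:
...     d = Dakota.from_file_like(fp)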
run()
Run the Dakota experiment.
The run is executed in the directory given by the run_directory keyword, and the run log and error log are created there.
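For example, configure, write, and run an experiment in sequence (a sketch; assumes Dakota is installed and on the execution path):
>>> d = Dakota(method='vector_parameter_study')
>>> d.setup()
>>> d.run()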
run_directory
The run directory path.
serialize(config_file=None)
Dump the experiment settings to a YAML configuration file.
Make a configuration file for a vector parameter study experiment:
>>> d = Dakota(method='vector_parameter_study')
>>> d.serialize('dakota.yaml')
setup()
Write the Dakota configuration and input files.
As a convenience, make a configuration file and an input file for an experiment in one step:
>>> d = Dakota(method='vector_parameter_study')
>>> d.setup()
template_file
The template file path.
write_input_file(input_file=None)
Create the Dakota input file for the experiment.
The input file is written to the directory specified by the run_directory attribute.
Make an input file for a vector parameter study experiment:
>>> d = Dakota(method='vector_parameter_study')
>>> d.write_input_file('dakota.in')
A Python interface to a Dakota input file.
class dakotathon.experiment.Experiment(component=None, plugin=None, environment='environment', method='vector_parameter_study', variables='continuous_design', interface='direct', responses='response_functions', **kwargs)
Bases: object
An aggregate of control blocks that define a Dakota input file.
__init__(component=None, plugin=None, environment='environment', method='vector_parameter_study', variables='continuous_design', interface='direct', responses='response_functions', **kwargs)
Create the set of control blocks for a Dakota experiment.
Called with no parameters, a Dakota experiment with basic defaults (a vector parameter study with the built-in rosenbrock example) is created.
Create a generic Dakota experiment:
>>> x = Experiment()
Create a vector parameter study experiment:
>>> x = Experiment(method='vector_parameter_study')
__str__()
The contents of the Dakota input file represented as a string.
Print the Dakota input file to the console.
>>> x = Experiment()
>>> print(x)
# Dakota input file
environment
tabular_data
tabular_data_file = 'dakota.dat'
<BLANKLINE>
method
vector_parameter_study
final_point = 1.1 1.3
num_steps = 10
<BLANKLINE>
variables
continuous_design = 2
descriptors = 'x1' 'x2'
initial_point = -0.3 0.2
<BLANKLINE>
interface
id_interface = 'CSDMS'
direct
analysis_driver = 'rosenbrock'
<BLANKLINE>
responses
response_functions = 1
response_descriptors = 'y1'
no_gradients
no_hessians
<BLANKLINE>
blocks = ('environment', 'method', 'variables', 'interface', 'responses')
The named control blocks of a Dakota input file.
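Inspect the blocks of a default experiment:
>>> x = Experiment()
>>> x.blocks
('environment', 'method', 'variables', 'interface', 'responses')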
environment
The environment control block.
interface
The interface control block.
method
The method control block.
responses
The responses control block.
variables
The variables control block.
Environments are top-level settings for Dakota execution.
An abstract base class for top-level Dakota settings.
A class for top-level Dakota settings.
class dakotathon.environment.environment.Environment(data_file='dakota.dat', **kwargs)
Bases: dakotathon.environment.base.EnvironmentBase
Describe Dakota environment.
Python wrappers for Dakota analysis methods.
Abstract base classes for Dakota analysis methods.
class dakotathon.method.base.MethodBase(method='vector_parameter_study', max_iterations=None, convergence_tolerance=None, **kwargs)
Bases: object
Describe common features of Dakota analysis methods.
The max_iterations and convergence_tolerance keywords are included in Dakota’s set of method independent controls.
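For example (a sketch; these keywords are assumed to be accepted by any concrete method subclass through **kwargs):
>>> m = VectorParameterStudy(max_iterations=100, convergence_tolerance=1e-4)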
__init__(method='vector_parameter_study', max_iterations=None, convergence_tolerance=None, **kwargs)
Create default method parameters.
convergence_tolerance
Convergence tolerance for the method.
max_iterations
Maximum number of iterations for the method.
method
The name of the analysis method used in the experiment.
class dakotathon.method.base.UncertaintyQuantificationBase(basis_polynomial_family='extended', probability_levels=(0.1, 0.5, 0.9), response_levels=(), samples=10, sample_type='random', seed=None, variance_based_decomp=False, **kwargs)
Bases: dakotathon.method.base.MethodBase
Describe features of uncertainty quantification methods.
To supply probability_levels or response_levels to multiple responses, nest the inputs to these properties.
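For example, supply separate probability levels for two responses (a sketch; the values are illustrative):
>>> m = Sampling(probability_levels=((0.1, 0.5, 0.9), (0.1, 0.5, 0.9)))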
__init__(basis_polynomial_family='extended', probability_levels=(0.1, 0.5, 0.9), response_levels=(), samples=10, sample_type='random', seed=None, variance_based_decomp=False, **kwargs)
Create default method parameters.
__str__()
Define the method block for a UQ experiment.
See also: dakotathon.method.base.MethodBase.__str__
basis_polynomial_family
The type of basis polynomials used by the method.
probability_levels
Probabilities at which to estimate response values.
response_levels
Values at which to estimate statistics for responses.
sample_type
Sampling strategy.
samples
Number of samples in experiment.
seed
Seed of the random number generator.
variance_based_decomp
Use variance-based decomposition global sensitivity analysis.
Implementation of a Dakota centered parameter study.
class dakotathon.method.centered_parameter_study.CenteredParameterStudy(steps_per_variable=(5, 4), step_vector=(0.4, 0.5), **kwargs)
Bases: dakotathon.method.base.MethodBase
Define parameters for a Dakota centered parameter study.
__init__(steps_per_variable=(5, 4), step_vector=(0.4, 0.5), **kwargs)
Create a new Dakota centered parameter study.
Create a default centered parameter study experiment:
>>> c = CenteredParameterStudy()
__str__()
Define a centered parameter study method block.
See also: dakotathon.method.base.MethodBase.__str__
step_vector
Step size in each direction.
steps_per_variable
Number of steps to take in each direction.
Implementation of a Dakota multidim parameter study.
class dakotathon.method.multidim_parameter_study.MultidimParameterStudy(partitions=(10, 8), **kwargs)
Bases: dakotathon.method.base.MethodBase
Define parameters for a Dakota multidim parameter study.
__init__(partitions=(10, 8), **kwargs)
Create a new Dakota multidim parameter study.
Create a default multidim parameter study experiment:
>>> m = MultidimParameterStudy()
__str__()
Define a multidim parameter study method block.
See also: dakotathon.method.base.MethodBase.__str__
partitions
The number of evaluation intervals for each parameter.
Implementation of a Dakota vector parameter study.
class dakotathon.method.vector_parameter_study.VectorParameterStudy(final_point=(1.1, 1.3), n_steps=10, **kwargs)
Bases: dakotathon.method.base.MethodBase
Define parameters for a Dakota vector parameter study.
__init__(final_point=(1.1, 1.3), n_steps=10, **kwargs)
Create a new Dakota vector parameter study.
Create a default vector parameter study experiment:
>>> v = VectorParameterStudy()
__str__()
Define a vector parameter study method block for a Dakota input file.
See also: dakotathon.method.base.MethodBase.__str__
final_point
End points used by study variables.
n_steps
Number of steps along the vector.
Implementation of the Dakota sampling method.
class dakotathon.method.sampling.Sampling(**kwargs)
Bases: dakotathon.method.base.UncertaintyQuantificationBase
The Dakota sampling method.
Implementation of the Dakota polynomial chaos method.
class dakotathon.method.polynomial_chaos.PolynomialChaos(coefficient_estimation_approach='quadrature_order_sequence', quadrature_order=2, dimension_preference=(), nested=False, **kwargs)
Bases: dakotathon.method.base.UncertaintyQuantificationBase
The Dakota polynomial chaos uncertainty quantification method.
Designation of a coefficient estimation approach is required, but the only approach currently implemented is quadrature_order_sequence, which obtains coefficients of the expansion using multidimensional integration by a tensor-product of Gaussian quadrature rules specified with quadrature_order, and, optionally, with dimension_preference. If dimension_preference is defined, its highest value is set to the quadrature_order.
This implementation of the polynomial chaos method is based on the description provided in the Dakota 6.4 documentation.
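For example, set a quadrature order with a per-dimension preference (a sketch; the values are illustrative, and per the rule above the highest dimension_preference value becomes the quadrature_order):
>>> m = PolynomialChaos(quadrature_order=4, dimension_preference=(4, 2))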
__init__(coefficient_estimation_approach='quadrature_order_sequence', quadrature_order=2, dimension_preference=(), nested=False, **kwargs)
Create a new Dakota polynomial chaos study.
Create a default instance of PolynomialChaos with:
>>> m = PolynomialChaos()
__str__()
Define the method block for a polynomial_chaos experiment.
Display the method block created by a default instance of PolynomialChaos:
>>> m = PolynomialChaos()
>>> print(m)
method
polynomial_chaos
sample_type = random
samples = 10
probability_levels = 0.1 0.5 0.9
quadrature_order = 2
non_nested
<BLANKLINE>
<BLANKLINE>
See also: dakotathon.method.base.UncertaintyQuantificationBase.__str__
dimension_preference
Weights specifying the relative importance of each dimension.
nested
Enforce use of nested quadrature rules.
quadrature_order
The highest order polynomial used by the method.
Implementation of the Dakota stochastic collocation method.
class dakotathon.method.stoch_collocation.StochasticCollocation(coefficient_estimation_approach='quadrature_order_sequence', quadrature_order=2, dimension_preference=(), nested=False, **kwargs)
Bases: dakotathon.method.base.UncertaintyQuantificationBase
The Dakota stochastic collocation uncertainty quantification method.
Stochastic collocation is a general framework for approximate representation of random response functions in terms of finite-dimensional interpolation bases. Stochastic collocation is very similar to polynomial chaos, with the key difference that the orthogonal polynomial basis functions are replaced with interpolation polynomial bases.
This implementation of the stochastic collocation method is based on the description provided in the Dakota 6.4 documentation.
__init__(coefficient_estimation_approach='quadrature_order_sequence', quadrature_order=2, dimension_preference=(), nested=False, **kwargs)
Create a new Dakota stochastic collocation study.
Create a default instance of StochasticCollocation with:
>>> m = StochasticCollocation()
__str__()
Define the method block for a stoch_collocation experiment.
Display the method block created by a default instance of StochasticCollocation:
>>> m = StochasticCollocation()
>>> print(m)
method
stoch_collocation
sample_type = random
samples = 10
probability_levels = 0.1 0.5 0.9
quadrature_order = 2
non_nested
<BLANKLINE>
<BLANKLINE>
See also: dakotathon.method.base.UncertaintyQuantificationBase.__str__
basis_polynomial_family
The type of basis polynomials used by the method.
dimension_preference
Weights specifying the relative importance of each dimension.
nested
Enforce use of nested quadrature rules.
quadrature_order
The highest order polynomial used by the method.
Dakota variables are the parameter sets to be iterated by a particular analysis method.
An abstract base class for all Dakota variable types.
class dakotathon.variables.base.VariablesBase(variables='continuous_design', descriptors=(), **kwargs)
Bases: object
Describe features common to all Dakota variable types.
__init__(variables='continuous_design', descriptors=(), **kwargs)
Create default variables parameters.
descriptors
Labels attached to Dakota variables.
Implementation of a Dakota continuous design variable.
class dakotathon.variables.continuous_design.ContinuousDesign(descriptors=('x1', 'x2'), initial_point=None, lower_bounds=None, upper_bounds=None, **kwargs)
Bases: dakotathon.variables.base.VariablesBase
Define attributes for Dakota continuous design variables.
Continuous variables are defined by a real interval and are changed during the search for the optimal design.
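For example, bound the two default design variables (a sketch; the values are illustrative):
>>> v = ContinuousDesign(lower_bounds=(-2.0, -2.0), upper_bounds=(2.0, 2.0))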
__init__(descriptors=('x1', 'x2'), initial_point=None, lower_bounds=None, upper_bounds=None, **kwargs)
Create the parameter set for a continuous design variable.
Create a default ContinuousDesign instance with:
>>> v = ContinuousDesign()
__str__()
Define the variables block for continuous design variables.
Display the variables block created by a default instance of ContinuousDesign:
>>> v = ContinuousDesign()
>>> print(v)
variables
continuous_design = 2
descriptors = 'x1' 'x2'
initial_point = -0.3 0.2
<BLANKLINE>
<BLANKLINE>
See also: dakotathon.variables.base.VariablesBase.__str__
initial_point
Start points used by study variables.
lower_bounds
Minimum values of study variables.
upper_bounds
Maximum values of study variables.
Implementation of a Dakota uniform uncertain variable.
class dakotathon.variables.uniform_uncertain.UniformUncertain(descriptors=('x1', 'x2'), lower_bounds=(-2.0, -2.0), upper_bounds=(2.0, 2.0), initial_point=None, **kwargs)
Bases: dakotathon.variables.base.VariablesBase
Define attributes for Dakota uniform uncertain variables.
The distribution lower and upper bounds are required specifications; the initial point is optional.
__init__(descriptors=('x1', 'x2'), lower_bounds=(-2.0, -2.0), upper_bounds=(2.0, 2.0), initial_point=None, **kwargs)
Create the parameter set for a uniform uncertain variable.
Create a default instance of UniformUncertain with:
>>> v = UniformUncertain()
__str__()
Define the variables block for a uniform uncertain variable.
Display the variables block created by a default instance of UniformUncertain:
>>> v = UniformUncertain()
>>> print(v)
variables
uniform_uncertain = 2
descriptors = 'x1' 'x2'
lower_bounds = -2.0 -2.0
upper_bounds = 2.0 2.0
<BLANKLINE>
<BLANKLINE>
See also: dakotathon.variables.base.VariablesBase.__str__
initial_point
Start points used by study variables.
lower_bounds
Minimum values of study variables.
upper_bounds
Maximum values of study variables.
Implementation of a Dakota normal uncertain variable.
class dakotathon.variables.normal_uncertain.NormalUncertain(descriptors=('x1', 'x2'), means=(0.0, 0.0), std_deviations=(1.0, 1.0), lower_bounds=None, upper_bounds=None, initial_point=None, **kwargs)
Bases: dakotathon.variables.base.VariablesBase
Define attributes for Dakota normal uncertain variables.
The means and standard deviations are required specifications; the initial point and the distribution lower and upper bounds are optional.
For vector and centered parameter studies, an inferred initial starting point is needed for uncertain variables. These variables are initialized to their means for these studies.
__init__(descriptors=('x1', 'x2'), means=(0.0, 0.0), std_deviations=(1.0, 1.0), lower_bounds=None, upper_bounds=None, initial_point=None, **kwargs)
Create the parameter set for a normal uncertain variable.
Create a default instance of NormalUncertain with:
>>> v = NormalUncertain()
__str__()
Define the variables block for a normal uncertain variable.
Display the variables block created by a default instance of NormalUncertain:
>>> v = NormalUncertain()
>>> print(v)
variables
normal_uncertain = 2
descriptors = 'x1' 'x2'
means = 0.0 0.0
std_deviations = 1.0 1.0
<BLANKLINE>
<BLANKLINE>
See also: dakotathon.variables.base.VariablesBase.__str__
initial_point
Start points used by study variables.
lower_bounds
Minimum values of study variables.
means
Mean values of study variables.
std_deviations
Standard deviations of study variables.
upper_bounds
Maximum values of study variables.
Dakota interfaces specify how function evaluations will be performed in order to map variables into responses.
An abstract base class for all Dakota interfaces.
class dakotathon.interface.base.InterfaceBase(interface='direct', id_interface='CSDMS', analysis_driver='rosenbrock', asynchronous=False, evaluation_concurrency=2, work_directory=os.getcwd(), work_folder='run', parameters_file='params.in', results_file='results.out', **kwargs)
Bases: object
Describe features common to all Dakota interfaces.
__init__(interface='direct', id_interface='CSDMS', analysis_driver='rosenbrock', asynchronous=False, evaluation_concurrency=2, work_directory=os.getcwd(), work_folder='run', parameters_file='params.in', results_file='results.out', **kwargs)
Create a default interface.
asynchronous
State of Dakota evaluation concurrency.
evaluation_concurrency
Number of concurrent evaluations.
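For example, request asynchronous evaluations (a sketch; a hypothetical direct use of the base class for illustration, with keywords as in __init__ above):
>>> i = InterfaceBase(asynchronous=True, evaluation_concurrency=4)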
Implementation of a Dakota direct interface.
Implementation of a Dakota fork interface.
Responses are the description of the model output data returned to Dakota upon evaluation of an interface.
An abstract base class for all Dakota responses.
class dakotathon.responses.base.ResponsesBase(responses='response_functions', response_descriptors=(), gradients='no_gradients', hessians='no_hessians', **kwargs)
Bases: object
Describe features common to all Dakota responses.
__init__(responses='response_functions', response_descriptors=(), gradients='no_gradients', hessians='no_hessians', **kwargs)
Create a default response.
response_descriptors
Labels attached to Dakota responses.
Implementation of the Dakota response_function response type.
class dakotathon.responses.response_functions.ResponseFunctions(response_descriptors=('y1',), response_files=(), response_statistics=('mean',), **kwargs)
Bases: dakotathon.responses.base.ResponsesBase
Define attributes for Dakota response functions.
__init__(response_descriptors=('y1',), response_files=(), response_statistics=('mean',), **kwargs)
Create a response using response functions.
Create a ResponseFunctions instance:
>>> f = ResponseFunctions()
__str__()
Define the responses block of a Dakota input file.
See also: dakotathon.responses.base.ResponsesBase.__str__
response_files
Model output files used in Dakota responses.
response_statistics
Statistics used to calculate Dakota responses.
Plugin classes for non-componentized models that can be called by Dakota.
An abstract base class for all Dakota model plugins.
class dakotathon.plugins.base.PluginBase(**kwargs)
Bases: object
Describe features common to all Dakota plugins.
load(output_file)
Read data from a model output file.
dakotathon.plugins.base.write_dflt_file(tmpl_file, parameters_file, run_duration=1.0)
Create a model input file populated with default values.
dakotathon.plugins.base.write_dtmpl_file(tmpl_file, dflt_input_file, parameter_names)
Create a template input file for use by Dakota.
In the CSDMS framework, the tmpl file is an input file for a model, but with the parameter values replaced by {parameter_name}. Dakota uses the same idea. This function creates a Dakota dtmpl file from a CSDMS model tmpl file. Only the parameters used by Dakota are left in the tmpl format; the remainder are populated with default values for the model. The dtmpl file is written to the current directory.
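For example (a sketch with hypothetical file names, keeping one parameter in template form for Dakota):
>>> write_dtmpl_file('HYDRO.IN.tmpl', 'HYDRO.IN.dflt', ['total_annual_precipitation'])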
Provides a Dakota interface to the HydroTrend model.
class dakotathon.plugins.hydrotrend.HydroTrend(input_dir='HYDRO_IN', output_dir='HYDRO_OUTPUT', input_file='HYDRO.IN', input_template='HYDRO.IN.dtmpl', hypsometry_file='HYDRO0.HYPS', output_files=None, output_statistics=None, **kwargs)
Bases: dakotathon.plugins.base.PluginBase
Represent a HydroTrend simulation in a Dakota experiment.
__init__(input_dir='HYDRO_IN', output_dir='HYDRO_OUTPUT', input_file='HYDRO.IN', input_template='HYDRO.IN.dtmpl', hypsometry_file='HYDRO0.HYPS', output_files=None, output_statistics=None, **kwargs)
Configure a default HydroTrend simulation.
Create a HydroTrend instance with:
>>> h = HydroTrend()
load(output_file)
Read a column of data from a HydroTrend output file.
setup(config)
Configure HydroTrend inputs.
Sets attributes using information from the run configuration file. The Dakota parsing utility dprepro reads parameters from Dakota to create a new HydroTrend input file from a template.
setup_directories(config)
Configure HydroTrend input and output directories.
These console scripts are called as Dakota’s analysis_driver: dakota_run_component for a CSDMS component, and dakota_run_plugin for a model interfaced by a Dakotathon plugin class.
Defines the dakota_run_component console script.
dakotathon.run_component.main()
Handle arguments to the dakota_run_component console script.
dakotathon.run_component.run_component(params_file, results_file)
Brokers communication between Dakota and a CSDMS component.
This console script provides a generic analysis driver for a Dakota experiment. At each evaluation step, Dakota calls this script with two arguments, the names of the parameters and results files:
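The call takes the form (a sketch; the actual file names are generated by Dakota):
dakota_run_component <params_file> <results_file>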
Once the component is identified, a worker is created to perform three steps: preprocessing, execution, and postprocessing. In the preprocessing step, information from the configuration file is transferred to the component. In the execution step, the component is called, using the information passed from Dakota. In the postprocessing step, output from the component is read, and a single statistic (e.g., mean, median, max) is applied to it. This number, one for each response, is returned to Dakota through the results file, ending the Dakota evaluation step.
Defines the dakota_run_plugin console script.
dakotathon.run_plugin.run_plugin(params_file, results_file)
Brokers communication between Dakota and a model through files.
This console script provides a generic analysis driver for a Dakota experiment. At each evaluation step, Dakota calls this script with two arguments, the names of the parameters and results files:
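The call takes the form (a sketch; the actual file names are generated by Dakota):
dakota_run_plugin <params_file> <results_file>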
Once the model is identified, an interface is created to perform three steps: preprocessing, execution, and postprocessing. In the preprocessing step, information from the configuration file is transferred to the model. In the execution step, the model is called, using the information passed from Dakota. In the postprocessing step, output from the model is read, and a single statistic (e.g., mean, median, max) is applied to it. This number, one for each response, is returned to Dakota through the results file, ending the Dakota evaluation step.
Helper functions for processing Dakota parameter and results files.
dakotathon.utils.add_dyld_library_path()
Add the DYLD_LIBRARY_PATH environment variable for Dakota.
dakotathon.utils.compute_statistic(statistic, array)
Compute the statistic used in a Dakota response function.
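For example (a sketch; 'mean' is among the statistics used by ResponseFunctions above):
>>> m = compute_statistic('mean', [1.0, 2.0, 3.0])  # 2.0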
dakotathon.utils.configure_parameters(params)
Preprocess Dakota parameters prior to committing to a config file.
dakotathon.utils.deserialize(config_file)
Load settings from a YAML configuration file.
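For example, reload settings written by Dakota.serialize (a sketch, assuming dakota.yaml exists):
>>> settings = deserialize('dakota.yaml')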
dakotathon.utils.get_attributes(obj)
Get and format the attributes of an object.
dakotathon.utils.get_configuration_file(params_file)
Extract the configuration filepath from a Dakota parameters file.
dakotathon.utils.get_response_descriptors(params_file)
Extract response descriptors from a Dakota parameters file.
dakotathon.utils.is_dakota_installed()
Check whether Dakota is installed and in the execution path.
dakotathon.utils.to_iterable(x)
Get an iterable version of an input.
If the input isn't iterable, or is a string, it is returned wrapped in a tuple; otherwise, the input is returned unchanged.
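For example (a sketch of the expected behavior):
>>> to_iterable('rosenbrock')
('rosenbrock',)
>>> to_iterable([1, 2])
[1, 2]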
dakotathon.utils.which(prog, env=None)
Call the OS which function.
Returns the path to the command, or None if the command is not found.
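For example (a sketch; the result depends on the local installation):
>>> which('dakota')  # e.g., '/usr/local/bin/dakota', or None if not found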
The Basic Model Interface (BMI) defines an interface for converting a standalone model into an integrated modeling framework component.
Basic Model Interface for the Dakota iterative systems analysis toolkit.
class dakotathon.bmi.BmiDakota
Bases: bmipy.bmi.Bmi
The BMI implementation for the CSDMS Dakota interface.
finalize()
Perform tear-down tasks for the model.
Perform all tasks that take place after exiting the model’s time loop. This typically includes deallocating memory, closing files and printing reports.
get_grid_edge_count(grid)
Get the number of edges in the grid.
get_grid_edge_nodes(grid, edge_nodes)
Get the edge-node connectivity.
get_grid_face_count(grid)
Get the number of faces in the grid.
get_grid_face_nodes(grid, face_nodes)
Get the face-node connectivity.
get_grid_node_count(grid)
Get the number of nodes in the grid.
get_grid_nodes_per_face(grid, nodes_per_face)
Get the number of nodes for each face.
get_grid_origin(grid, origin)
Get coordinates for the lower-left corner of the computational grid.
get_grid_rank(grid)
Get number of dimensions of the computational grid.
get_grid_shape(grid, shape)
Get dimensions of the computational grid.
get_grid_size(grid)
Get the total number of elements in the computational grid.
get_grid_spacing(grid, spacing)
Get distance between nodes of the computational grid.
get_grid_type(grid)
Get the grid type as a string.
get_grid_x(grid, x)
Get coordinates of grid nodes in the x direction.
get_grid_y(grid, y)
Get coordinates of grid nodes in the y direction.
get_grid_z(grid, z)
Get coordinates of grid nodes in the z direction.
get_input_var_names()
List of a model's input variables.
Input variable names must be CSDMS Standard Names, also known as long variable names.
Standard Names enable the CSDMS framework to determine whether an input variable in one model is equivalent to, or compatible with, an output variable in another model. This allows the framework to automatically connect components.
Standard Names do not have to be used within the model.
get_output_var_names()
List of a model's output variables.
Output variable names must be CSDMS Standard Names, also known as long variable names.
get_start_time()
Start time of the model.
Model times should be of type float.
get_time_step()
Current time step of the model.
The model time step should be of type float.
get_time_units()
Time units of the model.
CSDMS uses the UDUNITS standard from Unidata.
get_value(name, dest)
Get a copy of values of the given variable.
This is a getter for the model, used to access the model's current state. It returns a copy of a model variable, with the return type, size and rank dependent on the variable.
get_value_at_indices(name, dest, inds)
Get values at particular indices.
get_value_ptr(name)
Get a reference to values of the given variable.
This is a getter for the model, used to access the model's current state. It returns a reference to a model variable, with the return type, size and rank dependent on the variable.
get_var_grid(name)
Get grid identifier for the given variable.
get_var_itemsize(name)
Get memory use for each array element in bytes.
get_var_location(name)
Get the grid element type that the given variable is defined on.
The grid topology can be composed of nodes, edges, and faces.
CSDMS uses the ugrid conventions to define unstructured grids.
get_var_nbytes(name)
Get size, in bytes, of the given variable.
get_var_type(name)
Get data type of the given variable, e.g. str, int, float.
get_var_units(name)
Get units of the given variable.
Standard unit names, in lower case, should be used, such as meters or seconds. Standard abbreviations, like m for meters, are also supported. For variables with compound units, each unit name is separated by a single space, with exponents other than 1 placed immediately after the name, as in m s-1 for velocity, W m-2 for an energy flux, or km2 for an area.
CSDMS uses the UDUNITS standard from Unidata.
initialize(config_file)
Perform startup tasks for the model.
Perform all tasks that take place before entering the model's time loop, including opening files and initializing the model state. Model inputs are read from a text-based configuration file, specified by config_file.
Models should be refactored, if necessary, to use a configuration file. CSDMS does not impose any constraint on how configuration files are formatted, although YAML is recommended. A template of a model’s configuration file with placeholder values is used by the BMI.
set_value(name, values)
Specify a new value for a model variable.
This is the setter for the model, used to change the model's current state. It accepts, through values, a new value for a model variable, with the type, size and rank of values dependent on the variable.
set_value_at_indices(name, inds, src)
Specify a new value for a model variable at particular indices.
update()
Advance model state by one time step.
Perform all tasks that take place within one pass through the model's time loop. This typically includes incrementing all of the model's state variables. If the model's state variables don't change in time, then they can be computed by the initialize() method and this method can return with no action.
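A minimal sketch of the BMI lifecycle for a Dakota experiment, assuming a Dakotathon configuration file dakota.yaml exists and that update runs the experiment:
>>> from dakotathon.bmi import BmiDakota
>>> m = BmiDakota()
>>> m.initialize('dakota.yaml')
>>> m.update()
>>> m.finalize()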
class dakotathon.bmi.CenteredParameterStudy
Bases: dakotathon.bmi.BmiDakota
BMI implementation of a Dakota centered parameter study.
class dakotathon.bmi.MultidimParameterStudy
Bases: dakotathon.bmi.BmiDakota
BMI implementation of a Dakota multidim parameter study.
class dakotathon.bmi.PolynomialChaos
Bases: dakotathon.bmi.BmiDakota
BMI implementation of a Dakota study with the polynomial chaos method.
class dakotathon.bmi.PsuadeMoat
Bases: dakotathon.bmi.BmiDakota
BMI implementation of a Dakota study with the PSUADE MOAT method.
class dakotathon.bmi.Sampling
Bases: dakotathon.bmi.BmiDakota
BMI implementation of a Dakota sampling study.
class dakotathon.bmi.StochasticCollocation
Bases: dakotathon.bmi.BmiDakota
BMI implementation of a Dakota study with the stochastic collocation method.
class dakotathon.bmi.VectorParameterStudy
Bases: dakotathon.bmi.BmiDakota
BMI implementation of a Dakota vector parameter study.