Modules Reference#
Models#
Workflow Specs#
Models for Simulation Specifications.
BaseSpec#
Bases: BaseModel
A base spec for running a simulation.
The main features are utilities to fetch files from uris and generate a locally scoped path for the files according to the experiment_id.
fetch_uri(uri, use_cache=True)#
Fetch a file from a uri and return the local path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uri | AnyUrl | The uri to fetch | required |
| use_cache | bool | Whether to use the cache | True |

Returns:

| Name | Type | Description |
|---|---|---|
| local_path | Path | The local path of the fetched file |
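A minimal usage sketch (the import path, the subclass, and its field name are assumptions, not part of the documented API):

```python
from pydantic import AnyUrl

from epengine.models import BaseSpec  # import path is illustrative


# Hypothetical subclass for illustration only.
class WeatherSpec(BaseSpec):
    weather_uri: AnyUrl


spec = WeatherSpec(
    experiment_id="exp-001",
    weather_uri="s3://my-bucket/weather/boston.epw",
)
# Downloads the file on first call; with use_cache=True, a later call
# finds the local copy and skips the download.
local = spec.fetch_uri(spec.weather_uri)
```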
from_payload(payload) classmethod#
Create a simulation spec from a payload.
Fetches a spec from a payload and returns the spec, or validates it if it is already a spec dict.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| payload | dict | The payload to fetch | required |

Returns:

| Name | Type | Description |
|---|---|---|
| spec | BaseSpec | The fetched spec |
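A sketch reusing the hypothetical WeatherSpec above; the payload shape is an assumption:

```python
# A payload that is already a spec dict is validated directly;
# per the docstring, other payload shapes are fetched first.
payload = {
    "experiment_id": "exp-001",
    "weather_uri": "s3://my-bucket/weather/boston.epw",
}
spec = WeatherSpec.from_payload(payload)
```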
from_uri(uri) classmethod#
Fetch a spec from a uri and return the spec.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uri | AnyUrl | The uri to fetch | required |

Returns:

| Name | Type | Description |
|---|---|---|
| spec | BaseSpec | The fetched spec |
local_path(pth)#
Return the local path of a uri scoped to the experiment_id.
Note that this should only be used for non-ephemeral files.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pth | AnyUrl | The uri to convert to a local path | required |

Returns:

| Name | Type | Description |
|---|---|---|
| local_path | Path | The local path of the uri |
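Continuing the WeatherSpec sketch; the resulting directory layout is an assumption:

```python
from pydantic import AnyUrl

uri = AnyUrl("s3://my-bucket/results/summary.parquet")
# Returns a Path scoped to this spec's experiment_id, e.g. something like
# <cache-root>/exp-001/results/summary.parquet (exact layout is an assumption).
pth = spec.local_path(uri)
```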
log(msg)#
Log a message to the context or to the logger.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| msg | str | The message to log | required |
LeafSpec#
Bases: BaseSpec
A spec for running a leaf workflow.
Models for leaf workflows.
SimpleSpec#
Bases: LeafSpec
A test spec for working with a simple leaf workflow.
SimulationSpec#
Bases: LeafSpec
A spec for running an EnergyPlus simulation.
ddy_path cached property#
Fetch the ddy file and return the local path.
Returns:

| Name | Type | Description |
|---|---|---|
| local_path | Path | The local path of the ddy file |
epw_path cached property#
Fetch the epw file and return the local path.
Returns:

| Name | Type | Description |
|---|---|---|
| local_path | Path | The local path of the epw file |
idf_path cached property#
Fetch the idf file and return the local path.
Returns:

| Name | Type | Description |
|---|---|---|
| local_path | Path | The local path of the idf file |
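A construction sketch; the uri field names are inferred from the cached properties above and are assumptions:

```python
sim = SimulationSpec(
    experiment_id="exp-001",
    idf_uri="s3://my-bucket/models/office.idf",    # field names are assumptions
    epw_uri="s3://my-bucket/weather/boston.epw",
    ddy_uri="s3://my-bucket/weather/boston.ddy",
)
# Each cached property fetches its file on first access and then
# reuses the local copy on subsequent accesses.
idf_file = sim.idf_path
```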
Models for lists and recursions of simulation specs.
BranchesSpec#
Bases: BaseSpec, Generic[SpecListItem]
A spec for running multiple simulations.
One key feature is that child simulations inherit the experiment_id of the parent simulation's BaseSpec, since they are all part of the same experiment.
deser_and_set_exp_id_idx(values) classmethod#
Deserializes the spec list if necessary and sets the experiment_id of each child spec to the experiment_id of the parent along with the sort_index.
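A sketch of that inheritance behavior; the specs field name and construction shape are assumptions:

```python
branches = BranchesSpec[SimpleSpec](
    experiment_id="exp-001",
    specs=[{}, {}],  # child spec payloads; field name "specs" is an assumption
)
# After validation, every child spec carries experiment_id "exp-001"
# and a sort_index reflecting its position in the list.
```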
RecursionMap#
Bases: BaseModel
A map of recursion specs to use in recursive calls.
This allows a recursion node to understand where it is in the recursion tree and how to behave.
validate_path_is_length_ge_1(values) classmethod#
Validate that the path is at least length 1.
RecursionSpec#
Bases: BaseModel
A spec for recursive calls.
validate_offset_less_than_factor(values) classmethod#
Validate that the offset is less than the factor.
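The offset/factor constraint reads like a modular partition of a work list; a sketch of the assumed arithmetic the validator guards:

```python
# Assumed semantics: (factor, offset) selects every factor-th item
# starting at offset, which is why offset must satisfy 0 <= offset < factor.
factor, offset = 4, 1
items = list(range(10))
selected = items[offset::factor]  # [1, 5, 9]
```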
WorkflowSelector#
Bases: BaseModel
A class for generating a BranchesSpec from a workflow name.
BranchesSpec: type[BranchesSpec[LeafSpec]] property#
Return the branches spec class for the workflow.
Returns:

| Name | Type | Description |
|---|---|---|
| BranchesSpecClass | type[BranchesSpec[LeafSpec]] | The branches spec class for the workflow |
Spec: type[LeafSpec] property#
Return the spec class for the workflow.
Returns:

| Name | Type | Description |
|---|---|---|
| SpecClass | type[LeafSpec] | The spec class for the workflow |
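A hypothetical usage sketch (the constructor field name and workflow name are assumptions):

```python
selector = WorkflowSelector(workflow_name="simulation")  # field name is an assumption
LeafCls = selector.Spec              # e.g. SimulationSpec
BranchesCls = selector.BranchesSpec  # e.g. BranchesSpec[SimulationSpec]
```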
Models for workflow outputs etc.
URIResponse#
Bases: BaseModel
A response containing the uri of a file.
Mixins#
Mixin Classes for working with the epengine models.
WithBucket#
Bases: BaseModel
A model with a bucket to store results.
WithHContext#
Bases: BaseModel
A model with a Hatchet context.
log(msg)#
Log a message to the hatchet context.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| msg | str | The message to log | required |
WithOptionalBucket#
Bases: BaseModel
A model with an optional bucket to store results.
DDY Injector#
A module to inject DDY files into IDF files.
DDYField#
Bases: Enum
An enumeration of the fields in a DDY file that can be injected into an IDF file.
DDYFieldNotFoundError#
Bases: Exception
Raised when a field is not found in a DDY file.
__init__(field, obj)#
Initialize the error.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| field | DDYField | The field that was not found. | required |
| obj | str | The object that was not found. | required |
DDYSizingSpec#
Bases: BaseModel
A class to define how to inject a DDY file into an IDF file.
handle_design_days(idf, ddy)#
Handles the SIZINGPERIOD:DESIGNDAY field in the DDY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| idf | IDF | The IDF file to inject the DDY file into. | required |
| ddy | IDF | The DDY file to inject into the IDF file. | required |
handle_site_location(idf, ddy)#
Handles the SITE:LOCATION field in the DDY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| idf | IDF | The IDF file to inject the DDY file into. | required |
| ddy | IDF | The DDY file to inject into the IDF file. | required |
handle_weather_file_condition_types(idf, ddy)#
Handles the SIZINGPERIOD:WEATHERFILECONDITIONTYPE field in the DDY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| idf | IDF | The IDF file to inject the DDY file into. | required |
| ddy | IDF | The DDY file to inject into the IDF. | required |
inject_ddy(idf, ddy)#
Copies the DDY file into the IDF file according to the spec.
Currently, only the following DDY fields are supported:

- SITE:LOCATION
- SIZINGPERIOD:DESIGNDAY
- SIZINGPERIOD:WEATHERFILECONDITIONTYPE

The following DDY fields are ignored, as they just contain rain information or are not used:

- RUNPERIODCONTROL:DAYLIGHTSAVINGTIME
- SITE:PRECIPITATION
- ROOFIRRIGATION
- SCHEDULE:FILE
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| idf | IDF | The IDF file to inject the DDY file into. | required |
| ddy | IDF | The DDY file to inject into the IDF file. | required |
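A sketch of the injection flow using eppy's IDF class (paths, the IDD location, and the DDYSizingSpec configuration are illustrative):

```python
from eppy.modeleditor import IDF

IDF.setiddname("/usr/local/EnergyPlus/Energy+.idd")  # illustrative path
idf = IDF("office.idf")
ddy = IDF("boston.ddy")  # DDY files parse with the same IDD as IDF files

spec = DDYSizingSpec()  # default configuration; fields are not shown in this reference
spec.inject_ddy(idf, ddy)
idf.save()
```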
remove_and_replace(idf, ddy, field, copy_names)#
Removes all objects of the given field and replaces them with the new ones.
Raises an error if the object is not found in the DDY file and self.raise_on_not_found is True.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| idf | IDF | The IDF file to remove and replace objects from. | required |
| ddy | IDF | The DDY file to copy objects from. | required |
| field | DDYField | The field to remove and replace objects from. | required |
| copy_names | set[WeatherFileConditionType] \| set[DesignDayName] | The names of the objects to copy. | required |
Utils#
Results#
This module contains functions to postprocess and serialize results.
CombineRecurseResultsMultipleKeysError#
Bases: ValueError
Raised when a result dictionary contains more than one key when it has a "uri" key.
__init__()#
Initializes the error.
collate_subdictionaries(results)#
Collate subdictionaries into a single dictionary of dataframes.
Note that this assumes the dictionaries are in the tight orientation and that the index keys are the same across all dictionaries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| results | list[dict[str, dict]] | A list of dictionaries of dataframes | required |

Returns:

| Type | Description |
|---|---|
| dict[str, DataFrame] | A dictionary of dataframes |
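A sketch of the expected input shape, using pandas' tight orientation (the "monthly" table key is illustrative):

```python
import pandas as pd

idx_a = pd.MultiIndex.from_tuples([("bldg-a", 0)], names=["building", "run"])
df_a = pd.DataFrame({"energy": [1.0]}, index=idx_a)
idx_b = pd.MultiIndex.from_tuples([("bldg-b", 1)], names=["building", "run"])
df_b = pd.DataFrame({"energy": [2.0]}, index=idx_b)

# Each child result maps a table name to a tight-oriented dict.
results = [
    {"monthly": df_a.to_dict(orient="tight")},
    {"monthly": df_b.to_dict(orient="tight")},
]
collated = collate_subdictionaries(results)
# collated["monthly"] is a single DataFrame containing both rows.
```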
combine_recurse_results(results)#
Combines the results of recursive operations into a single dictionary of DataFrames.
The recursive returns may have been uris or explicit results. This function combines them into a single dictionary of DataFrames.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| results | list[dict[str, Any]] | A list of dictionaries representing the results of recursive operations. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| collected_dfs | dict[str, DataFrame] | A dictionary containing the combined DataFrames, where the keys are the URIs of the DataFrames. |

Raises:

| Type | Description |
|---|---|
| CombineRecurseResultsMultipleKeysError | If a result dictionary contains more than one key when it has a "uri" key. |
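A sketch of the two result shapes this function accepts (URIs and table keys are illustrative):

```python
import pandas as pd

explicit = pd.DataFrame({"energy": [2.0]}).to_dict(orient="tight")
results = [
    {"uri": "s3://my-bucket/results/child-0.h5"},  # referenced result: fetched, then merged
    {"monthly": explicit},                         # explicit result: merged directly
]
collected_dfs = combine_recurse_results(results)
# A dict with a "uri" key plus any other key raises
# CombineRecurseResultsMultipleKeysError.
```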
create_errored_and_missing_df(errored_workflows, missing_results)#
Creates a DataFrame of errored and missing results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| errored_workflows | list[tuple[str, ZipDataContent, BaseException]] | The list of errored workflows. | required |
| missing_results | list[tuple[str, ZipDataContent]] | The list of missing results. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| errors | DataFrame | The DataFrame of errored and missing results. |
handle_explicit_result(collected_dfs, result)#
Updates the collected dataframes with an explicit result.
Note that this function mutates the collected_dfs dictionary and does not return a value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| collected_dfs | dict[str, DataFrame] | A dictionary containing the collected dataframes. | required |
| result | dict | The explicit result to handle. | required |
handle_referenced_result(collected_dfs, uri)#
Fetches a result from a given URI and updates the collected dataframes.
Note that this function mutates the collected_dfs dictionary and does not return a value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| collected_dfs | dict[str, DataFrame] | A dictionary containing the collected dataframes. | required |
| uri | str | The URI of the result to fetch. | required |
postprocess(sql, index_data, tabular_lookups, columns=None)#
Postprocess tabular data from the SQL file.
Requests a series of EnergyPlus table lookups and returns the data in a dictionary of dataframes with a single row; the provided index data is configured as the MultiIndex of the dataframe.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sql | Sql | The sql object to query | required |
| index_data | dict | The index data to use | required |
| tabular_lookups | list[tuple[str, str]] | The tabular data to query | required |
| columns | list[str] | The columns to keep. Defaults to None. | None |

Returns:

| Type | Description |
|---|---|
| dict[str, DataFrame] | A dictionary of dataframes |
save_and_upload_results(collected_dfs, s3, bucket, output_key, save_errors=False)#
Saves and uploads the collected dataframes to S3.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| collected_dfs | dict[str, DataFrame] | A dictionary containing the collected dataframes. | required |
| s3 | S3Client | The S3 client to use for uploading the results. | required |
| bucket | str | The S3 bucket to upload the results to. | required |
| output_key | str | The key to use for the uploaded results. | required |
| save_errors | bool | Whether to save errors in the results. | False |

Returns:

| Name | Type | Description |
|---|---|---|
| uri | str | The URI of the uploaded results. |
separate_errors_and_safe_sim_results(ids, zip_data, results)#
Separates errored workflows from safe simulation results.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ids | list[str] | The list of workflow IDs. | required |
| zip_data | list[ZipDataContent] | The list of data to return with rows. | required |
| results | list[ResultDataContent \| BaseException] | The list of results to separate. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| safe_results | list[tuple[str, ZipDataContent, ResultDataContent]] | A list of safe results. |
| errored_results | list[tuple[str, ZipDataContent, BaseException]] | A list of errored results. |
serialize_df_dict(dfs)#
Serialize a dictionary of dataframes into a dictionary of dictionaries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dfs | dict[str, DataFrame] | A dictionary of dataframes | required |

Returns:

| Type | Description |
|---|---|
| dict[str, dict] | A dictionary of dictionaries |
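A round-trip sketch, assuming the serialization mirrors the tight orientation used elsewhere in this module:

```python
import pandas as pd

dfs = {"monthly": pd.DataFrame({"energy": [1.0, 2.0]})}
payload = serialize_df_dict(dfs)  # dict[str, dict], ready for JSON transport
# Assumed inverse, if the tight orientation is used:
restored = {k: pd.DataFrame.from_dict(v, orient="tight") for k, v in payload.items()}
```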
update_collected_with_df(collected_dfs, key, df)#
Updates the collected dataframes with a new DataFrame.
Note that this function mutates the collected_dfs dictionary and does not return a value.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| collected_dfs | dict[str, DataFrame] | A dictionary containing the collected dataframes. | required |
| key | str | The key to use for the new DataFrame. | required |
| df | DataFrame | The DataFrame to add to the collected dataframes. | required |
FileSystem Utilities#
Filesystem utilities.
fetch_uri(uri, local_path, use_cache=True, logger_fn=logger.info, s3=s3)#
Fetch a file from a uri and return the local path.
Caching is enabled by default and works by checking if the file exists locally before downloading it to avoid downloading the same file multiple times.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| uri | AnyUrl | The uri to fetch | required |
| local_path | Path | The local path to save the fetched file | required |
| use_cache | bool | Whether to use the cache | True |
| logger_fn | Callable | The logger function to use | logger.info |
| s3 | S3Client | The S3 client to use | s3 |

Returns:

| Name | Type | Description |
|---|---|---|
| local_path | Path | The local path of the fetched file |
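A minimal sketch of the standalone fetch_uri (the bucket and paths are illustrative; the default s3 client comes from the module):

```python
from pathlib import Path

local = fetch_uri(
    uri="s3://my-bucket/weather/boston.epw",
    local_path=Path("/tmp/exp-001/boston.epw"),
    use_cache=True,
)
# Because the file now exists locally, a second call with use_cache=True
# skips the download and returns the same path.
```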