sinergym.envs.eplus_env.EplusEnv

class sinergym.envs.eplus_env.EplusEnv(building_file: str, weather_files: str | ~typing.List[str], action_space: ~gymnasium.spaces.box.Box = Box([], [], (0,), float32), time_variables: ~typing.List[str] = [], variables: ~typing.Dict[str, ~typing.Tuple[str, str]] = {}, meters: ~typing.Dict[str, str] = {}, actuators: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, context: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, initial_context: ~typing.List[float] | None = None, weather_variability: ~typing.Dict[str, ~typing.Tuple[float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float]] | ~typing.Tuple[float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], ~typing.Tuple[float, float]]] | None = None, reward: ~typing.Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: ~typing.Dict[str, ~typing.Any] = {}, max_ep_store: int = 10, env_name: str = 'eplus-env-v1', building_config: ~typing.Dict[str, ~typing.Any] | None = None, seed: int | None = None)
__init__(building_file: str, weather_files: str | ~typing.List[str], action_space: ~gymnasium.spaces.box.Box = Box([], [], (0,), float32), time_variables: ~typing.List[str] = [], variables: ~typing.Dict[str, ~typing.Tuple[str, str]] = {}, meters: ~typing.Dict[str, str] = {}, actuators: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, context: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, initial_context: ~typing.List[float] | None = None, weather_variability: ~typing.Dict[str, ~typing.Tuple[float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float]] | ~typing.Tuple[float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], ~typing.Tuple[float, float]]] | None = None, reward: ~typing.Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: ~typing.Dict[str, ~typing.Any] = {}, max_ep_store: int = 10, env_name: str = 'eplus-env-v1', building_config: ~typing.Dict[str, ~typing.Any] | None = None, seed: int | None = None)

Environment with EnergyPlus simulator.

Parameters:
  • building_file (str) – Name of the JSON file with the building definition.

  • weather_files (Union[str,List[str]]) – Name of the EPW file for weather conditions. Can also be a list of weather files to sample randomly for each episode.

  • action_space (gym.spaces.Box, optional) – Gym Action Space definition. Defaults to empty (no control).

  • time_variables (List[str]) – EnergyPlus time variables to observe. Names must match E+ Data Transfer API method names. Defaults to empty list.

  • variables (Dict[str, Tuple[str, str]]) – Specification for EnergyPlus Output:Variable. Key is custom name; value is tuple(original variable name, output key). Defaults to empty dict.

  • meters (Dict[str, str]) – Specification for EnergyPlus Output:Meter. Key is custom; value is original meter name. Defaults to empty dict.

  • actuators (Dict[str, Tuple[str, str, str]]) – Specification for EnergyPlus Input Actuators. Key is custom; value is tuple(actuator type, value type, original name). Defaults to empty dict.

  • context (Dict[str, Tuple[str, str, str]]) – Specification for EnergyPlus Context. Key is custom; value is tuple(actuator type, value type, original name). Used for real-time building configuration. Defaults to empty dict.

  • initial_context (Optional[List[float]]) – Initial context values to set in the building model. Defaults to None.

  • weather_variability (Optional[Dict[str, Tuple]]) – Variation applied to the weather data through an Ornstein-Uhlenbeck process. Keys are weather variable names; each value is a tuple (sigma, mu, tau) or (sigma, mu, tau, var_range), where:

      ◦ sigma (float or Tuple[float, float]) – Standard deviation, or a range to sample it from.

      ◦ mu (float or Tuple[float, float]) – Mean value, or a range to sample it from.

      ◦ tau (float or Tuple[float, float]) – Time constant, or a range to sample it from.

      ◦ var_range (Tuple[float, float], optional) – (min_val, max_val) for clipping the variable. Defaults to None.

    Defaults to None.

  • reward (Any, optional) – Reward function instance. Defaults to LinearReward.

  • reward_kwargs (Dict[str, Any], optional) – Parameters to pass to the reward function. Defaults to empty dict.

  • max_ep_store (int, optional) – Number of last episode folders to store. Defaults to 10.

  • env_name (str, optional) – Env name for working directory generation. Defaults to ‘eplus-env-v1’.

  • building_config (Optional[Dict[str, Any]], optional) – Extra configuration for building. Defaults to None.

  • seed (Optional[int], optional) – Seed for random number generator. Defaults to None.
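As a worked illustration, the specification dicts above might be assembled as follows. This is a minimal sketch: the building file, weather file, and every EnergyPlus name below (zone keys, meter names, schedule names) are assumptions for demonstration, not values prescribed by this API.

```python
# Hypothetical observation/action specification for an EplusEnv
# (all EnergyPlus names here are illustrative assumptions).
time_variables = ['month', 'day_of_month', 'hour']

# variables: custom name -> (Output:Variable name, output key)
variables = {
    'air_temperature': ('Zone Air Temperature', 'SPACE1-1'),
    'outdoor_temperature': ('Site Outdoor Air Drybulb Temperature', 'Environment'),
}

# meters: custom name -> Output:Meter name
meters = {'HVAC_electricity': 'Electricity:HVAC'}

# actuators: custom name -> (actuator type, value type, original name)
actuators = {
    'Heating_Setpoint_RL': ('Schedule:Compact', 'Schedule Value', 'HTG-SETP-SCH'),
}

# weather_variability: OU-process parameters per weather variable,
# here a plain (sigma, mu, tau) tuple with no clipping range.
weather_variability = {'Dry Bulb Temperature': (1.0, 0.0, 24.0)}

# Construction would then look like (requires an EnergyPlus installation):
# env = EplusEnv('5ZoneAutoDXVAV.epJSON', 'weather.epw',
#                time_variables=time_variables, variables=variables,
#                meters=meters, actuators=actuators,
#                weather_variability=weather_variability)
```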

Methods

__init__(building_file, weather_files[, ...])

Environment with EnergyPlus simulator.

clear_callbacks()

Clear all user-registered custom callbacks (delegates to the simulator).

close()

End simulation.

from_dict(data)

get_obs_dict(obs)

Get the observation as a dictionary.

get_wrapper_attr(name)

Gets the attribute name from the environment.

has_wrapper_attr(name)

Checks if the attribute name exists in the environment.

register_callback(callback_name, callback_func)

Register a custom EnergyPlus runtime callback (delegates to the simulator).

render([mode])

Environment rendering.

reset([seed, options])

Reset the environment.

save_config()

Save environment configuration as a YAML file.

set_seed(seed)

Set seed for random number generator.

set_wrapper_attr(name, value, *[, force])

Sets the attribute name on the environment with value, see Wrapper.set_wrapper_attr for more info.

step(action)

Sends action to the environment.

to_dict()

Convert the environment instance to a Python dictionary.

to_str()

update_context(context_values)

Update real-time building context (actuators which are not controlled by the agent).

Attributes

action_space

actuator_handlers

available_handlers

building_path

callbacks

Mapping from callback point to callback __name__ lists.

context_handlers

ddy_path

episode_length

episode_path

idd_path

is_discrete

is_running

logger

metadata

meter_handlers

np_random

Returns the environment's internal _np_random that if not set will initialise with a random seed.

np_random_seed

Returns the environment's internal _np_random_seed that if not set will first initialise with a random int as seed.

observation_space

render_mode

runperiod

schedulers

simple_printer

spec

step_size

timestep_per_episode

unwrapped

Returns the base non-wrapped environment.

var_handlers

weather_path

workspace_path

zone_names

property action_space: Space[Any]
property actuator_handlers: Dict[str, int] | None
property available_handlers: str | None
property building_path: str
property callbacks: Dict[str, List[str]]

Mapping from callback point to callback __name__ lists.

Mirrors registered_callbacks. See Custom callbacks.

Type:

Dict[str, List[str]]

clear_callbacks() None

Clear all user-registered custom callbacks (delegates to the simulator).

Internal Sinergym callbacks (observations, actions, context, warmup, progress) are unchanged. See Custom callbacks for semantics and interaction with EnergyPlus.

close() None

End simulation.

property context_handlers: Dict[str, int] | None
property ddy_path: str | None
property episode_length: float
property episode_path: str | None
classmethod from_dict(data)
get_obs_dict(obs: ndarray) Dict[str, float]

Get the observation as a dictionary.

Parameters:

obs (np.ndarray) – Observation array.

Returns:

Dictionary mapping observation variable names to their values.

Return type:

Dict[str, float]
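Conceptually, this pairs the flat observation array with the environment's observation variable names. A minimal sketch of the equivalent operation, with assumed variable names standing in for the environment's real list:

```python
import numpy as np

# Stand-in for the environment's observation variable names
# (illustrative assumptions, not a fixed list).
observation_variables = ['month', 'hour', 'air_temperature']
obs = np.array([6.0, 12.0, 22.5])

# Equivalent of env.get_obs_dict(obs): zip names with values.
obs_dict = dict(zip(observation_variables, obs))
print(obs_dict['air_temperature'])  # 22.5
```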

property idd_path: str | None
property is_discrete: bool
property is_running: bool
logger = <Logger ENVIRONMENT (INFO)>
metadata: dict[str, Any] = {'render_modes': ['human']}
property meter_handlers: Dict[str, int] | None
property observation_space: Space[Any]
register_callback(callback_name: str, callback_func: Callable, component_program_name: str | None = None) None

Register a custom EnergyPlus runtime callback (delegates to the simulator).

User callbacks are stored on the underlying simulator and apply from subsequent episodes onward; use clear_callbacks() or restart the environment to drop them. Hooks are attached to EnergyPlus when the run starts (during reset()).

The full list of valid callback_name values, component_program_name rules, lifecycle notes, and examples are in Custom callbacks.

Parameters:
  • callback_name – EnergyPlus Python API runtime callback identifier.

  • callback_func – Usually callback_func(state); UserDefined callbacks also use this signature—see Custom callbacks.

  • component_program_name – Required only for callback_user_defined_component_model (must match the UserDefined program name in the IDF); otherwise None.

Raises:

ValueError – Invalid callback_name, or invalid use of component_program_name.
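The delegation pattern can be mimicked without EnergyPlus. The registry below is a stand-in for the underlying simulator (not Sinergym's actual implementation): callbacks are stored per callback point and exposed by function __name__, mirroring the callbacks property above.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class FakeSimulator:
    """Illustrative stand-in for the simulator's callback registry."""

    def __init__(self):
        self.registered_callbacks: Dict[str, List[Callable]] = defaultdict(list)

    def register_callback(self, callback_name: str, callback_func: Callable) -> None:
        # Store the function under its callback point.
        self.registered_callbacks[callback_name].append(callback_func)

    def clear_callbacks(self) -> None:
        # Drop every user-registered callback.
        self.registered_callbacks.clear()

    @property
    def callbacks(self) -> Dict[str, List[str]]:
        # Names only, per callback point, as in EplusEnv.callbacks.
        return {point: [f.__name__ for f in funcs]
                for point, funcs in self.registered_callbacks.items()}

def my_hook(state):  # the usual callback_func(state) signature
    pass

sim = FakeSimulator()
sim.register_callback('callback_begin_system_timestep_before_predictor', my_hook)
print(sim.callbacks)
```

The environment delegates register_callback and clear_callbacks to this kind of registry; hooks are only attached to EnergyPlus when the run starts, which is why registrations apply from subsequent episodes onward.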

render(mode: str = 'human') None

Environment rendering.

Parameters:

mode (str, optional) – Mode for rendering. Defaults to ‘human’.

reset(seed: int | None = None, options: Dict[str, Any] | None = None) Tuple[ndarray, Dict[str, Any]]

Reset the environment.

Parameters:
  • seed (Optional[int]) – The seed used to initialize the environment’s episode (np_random). If a global seed was configured in the environment, the reset seed is not applied. Defaults to None.

  • options (Optional[Dict[str, Any]]) – Additional information to specify how the environment is reset. Defaults to None.

Returns:

Current observation and info context with additional information.

Return type:

Tuple[np.ndarray,Dict[str,Any]]

property runperiod: Dict[str, int]
save_config() None

Save environment configuration as a YAML file.

property schedulers: Dict[str, Dict[str, str | Dict[str, str]]]
set_seed(seed: int | None) None

Set seed for random number generator.

Parameters:

seed (Optional[int]) – Seed for random number generator.

simple_printer = <Logger Printer (INFO)>
step(action: ndarray) Tuple[ndarray, float, bool, bool, Dict[str, Any]]

Sends action to the environment.

Parameters:

action (np.ndarray) – Action selected by the agent.

Returns:

Observation for the next timestep, reward obtained, whether the episode has terminated, whether the episode has been truncated, and a dictionary with extra information.

Return type:

Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]
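The reset/step contract follows the Gymnasium 5-tuple. The loop below sketches it against a trivial stand-in environment with the same signatures, since EplusEnv itself requires a running EnergyPlus installation:

```python
import numpy as np

class StubEnv:
    """Stand-in exposing the same reset()/step() shape as EplusEnv."""

    def __init__(self, episode_steps=3):
        self._max = episode_steps
        self._t = 0

    def reset(self, seed=None, options=None):
        self._t = 0
        return np.zeros(2), {'timestep': 0}

    def step(self, action):
        self._t += 1
        obs = np.full(2, float(self._t))
        reward = -float(np.abs(action).sum())
        terminated = self._t >= self._max   # episode (run period) ended
        truncated = False                   # no early cut-off here
        return obs, reward, terminated, truncated, {'timestep': self._t}

env = StubEnv()
obs, info = env.reset()
done = False
while not done:
    action = np.array([0.5])  # would come from the agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```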

property step_size: float
property timestep_per_episode: int
to_dict() Dict[str, Any]

Convert the environment instance to a Python dictionary.

Returns:

Environment configuration.

Return type:

Dict[str, Any]

to_str()
update_context(context_values: ndarray | List[float] | Dict[str, float]) None

Update real-time building context (actuators which are not controlled by the agent).

This method supports two input formats:

  • Full vector update (backwards-compatible): List[float] or np.ndarray with the same length and order as self.context_variables.

  • Partial update by name: Dict[str, float] where keys are context variable names. Values not provided are kept from the last applied context (self.last_context) or initialized to zeros.
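The partial-update semantics can be sketched as a dict merge over the last applied context. The variable names below are assumptions, and merge_context is a hypothetical helper illustrating the input handling, not a Sinergym function:

```python
import numpy as np

# Illustrative context variables (assumed names).
context_variables = ['Occupancy_Level', 'Ventilation_Rate']
last_context = np.zeros(len(context_variables))  # values start at zero

def merge_context(context_values, context_variables, last_context):
    """Sketch of update_context input handling: full vector or partial dict."""
    if isinstance(context_values, dict):
        # Partial update: overwrite only the named entries.
        merged = np.array(last_context, dtype=float)
        for name, value in context_values.items():
            merged[context_variables.index(name)] = value
        return merged
    # Full vector: must match context_variables in length and order.
    return np.asarray(context_values, dtype=float)

last_context = merge_context({'Ventilation_Rate': 0.35},
                             context_variables, last_context)
print(last_context)  # [0.   0.35]
```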

property var_handlers: Dict[str, int] | None
property weather_path: str
property workspace_path: str
property zone_names: list