sinergym.envs.eplus_env.EplusEnv

class sinergym.envs.eplus_env.EplusEnv(building_file: str, weather_files: str | ~typing.List[str], action_space: ~gymnasium.spaces.box.Box = Box([], [], (0,), float32), time_variables: ~typing.List[str] = [], variables: ~typing.Dict[str, ~typing.Tuple[str, str]] = {}, meters: ~typing.Dict[str, str] = {}, actuators: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, weather_variability: ~typing.Dict[str, ~typing.Tuple[float, float, float]] | None = None, reward: ~typing.Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: ~typing.Dict[str, ~typing.Any] | None = {}, max_ep_data_store_num: int = 10, env_name: str = 'eplus-env-v1', config_params: ~typing.Dict[str, ~typing.Any] | None = None)
__init__(building_file: str, weather_files: str | ~typing.List[str], action_space: ~gymnasium.spaces.box.Box = Box([], [], (0,), float32), time_variables: ~typing.List[str] = [], variables: ~typing.Dict[str, ~typing.Tuple[str, str]] = {}, meters: ~typing.Dict[str, str] = {}, actuators: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, weather_variability: ~typing.Dict[str, ~typing.Tuple[float, float, float]] | None = None, reward: ~typing.Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: ~typing.Dict[str, ~typing.Any] | None = {}, max_ep_data_store_num: int = 10, env_name: str = 'eplus-env-v1', config_params: ~typing.Dict[str, ~typing.Any] | None = None)

Environment with EnergyPlus simulator.

Parameters:
  • building_file (str) – Name of the JSON file with the building definition.

• weather_files (Union[str, List[str]]) – Name of the EPW file with the weather conditions. A list of weather files can also be given, in which case one is sampled randomly at the start of each episode.

  • action_space (gym.spaces.Box, optional) – Gym Action Space definition. Defaults to an empty action_space (no control).

• time_variables (List[str]) – EnergyPlus time variables to observe. Each name must match an E+ Data Transfer API method name. Defaults to empty list.

• variables (Dict[str, Tuple[str, str]]) – Specification for EnergyPlus Output:Variable. The key is a custom variable name; the value is a tuple with the original variable name and the output variable key. Defaults to empty dict.

• meters (Dict[str, str]) – Specification for EnergyPlus Output:Meter. The key is a custom meter name; the value is the original EnergyPlus meter name. Defaults to empty dict.

• actuators (Dict[str, Tuple[str, str, str]]) – Specification for EnergyPlus input actuators. The key is a custom actuator name; the value is a tuple with the actuator type, the value type and the original actuator name. Defaults to empty dict.

• weather_variability (Optional[Dict[str, Tuple[float, float, float]]]) – Dictionary mapping each desired weather variable to a tuple with the sigma, mu and tau parameters of the Ornstein-Uhlenbeck process applied to the weather data. Defaults to None.

• reward (Any, optional) – Reward function class used for agent feedback. Defaults to LinearReward.

  • reward_kwargs (Optional[Dict[str, Any]], optional) – Parameters to be passed to the reward function. Defaults to empty dict.

• max_ep_data_store_num (int, optional) – Number of the most recent episode sub-folders (one per episode) kept on disk during simulation execution. Defaults to 10.

  • env_name (str, optional) – Env name used for working directory generation. Defaults to eplus-env-v1.

  • config_params (Optional[Dict[str, Any]], optional) – Dictionary with all extra configuration for simulator. Defaults to None.
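As an illustrative sketch of how these parameters fit together, the snippet below instantiates the environment directly. The building and weather file names, variable tuples, meter names and actuator tuples are assumptions for demonstration, not values guaranteed to ship with every Sinergym installation:

    import gymnasium as gym
    import numpy as np

    from sinergym.envs.eplus_env import EplusEnv

    env = EplusEnv(
        building_file='5ZoneAutoDXVAV.epJSON',       # assumed example building
        weather_files='USA_PA_Pittsburgh.TMY3.epw',  # assumed example weather file
        action_space=gym.spaces.Box(                 # heating and cooling setpoints
            low=np.array([15.0, 22.5], dtype=np.float32),
            high=np.array([22.5, 30.0], dtype=np.float32),
            dtype=np.float32),
        time_variables=['month', 'day_of_month', 'hour'],
        variables={
            # custom name: (original variable name, output variable key)
            'air_temperature': ('Zone Air Temperature', 'SPACE1-1'),
        },
        meters={'HVAC_electricity': 'Electricity:HVAC'},
        actuators={
            # custom name: (actuator type, value type, original actuator name)
            'Heating_Setpoint_RL': ('Schedule:Compact', 'Schedule Value', 'HTG-SETP-SCH'),
            'Cooling_Setpoint_RL': ('Schedule:Compact', 'Schedule Value', 'CLG-SETP-SCH'),
        },
        weather_variability={
            # weather variable: (sigma, mu, tau) of the Ornstein-Uhlenbeck process
            'Dry Bulb Temperature': (1.0, 0.0, 24.0),
        },
        env_name='eplus-env-demo-v1',
    )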

Methods

__init__(building_file, weather_files[, ...]) – Environment with EnergyPlus simulator.
close() – End simulation.
get_wrapper_attr(name) – Gets the attribute name from the environment.
has_wrapper_attr(name) – Checks if the attribute name exists in the environment.
info()
render([mode]) – Environment rendering.
reset([seed, options]) – Reset the environment.
set_wrapper_attr(name, value) – Sets the attribute name on the environment with value.
step(action) – Sends action to the environment.

Attributes

action_space
actuator_handlers
available_handlers
building_path
ddy_path
episode_length
episode_path
idd_path
is_discrete
is_running
logger
metadata
meter_handlers
np_random – Returns the environment's internal _np_random; if not set, it will be initialised with a random seed.
np_random_seed – Returns the environment's internal _np_random_seed; if not set, it will first be initialised with a random int as seed.
observation_space
render_mode
runperiod
schedulers
simple_printer
spec
step_size
timestep_per_episode
unwrapped – Returns the base non-wrapped environment.
var_handlers
weather_path
workspace_path
zone_names
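Most of these are read-only properties resolved from the building model and run period. A brief sketch of inspecting some of them on an already-created env (printed values depend on the configuration):

    print(env.is_discrete)           # False for a continuous Box action space
    print(env.timestep_per_episode)  # number of timesteps in one episode
    print(env.episode_length)        # simulated length of one episode
    print(env.workspace_path)        # directory where episode output is stored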

property action_space: Space[Any]
property actuator_handlers: Dict[str, int] | None
property available_handlers: str | None
property building_path: str
close() → None

End simulation.

property ddy_path: str
property episode_length: float
property episode_path: str
property idd_path: str
info()
property is_discrete: bool
property is_running: bool
logger = <Logger ENVIRONMENT (INFO)>
metadata: dict[str, Any] = {'render_modes': ['human']}
property meter_handlers: Dict[str, int] | None
property observation_space: Space[Any]
render(mode: str = 'human') → None

Environment rendering.

Parameters:

mode (str, optional) – Mode for rendering. Defaults to ‘human’.

reset(seed: int | None = None, options: Dict[str, Any] | None = None) → Tuple[ndarray, Dict[str, Any]]

Reset the environment.

Parameters:
• seed (Optional[int]) – The seed used to initialise the environment’s episode (np_random). If None, a seed is chosen from some source of entropy. Defaults to None.

  • options (Optional[Dict[str, Any]]) – Additional information to specify how the environment is reset. Defaults to None.

Returns:

Current observation and info context with additional information.

Return type:

Tuple[np.ndarray,Dict[str,Any]]
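For example, a reproducible reset with a fixed seed (the seed value is arbitrary):

    obs, info = env.reset(seed=42)  # fix the seed for reproducibility
    print(obs.shape)                # first observation of the new episode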

property runperiod: Dict[str, int]
property schedulers: Dict[str, Dict[str, str | Dict[str, str]]]
simple_printer = <Logger Printer (INFO)>
step(action: int | float | integer | ndarray | List[Any] | Tuple[Any]) → Tuple[ndarray, float, bool, bool, Dict[str, Any]]

Sends action to the environment.

Parameters:

action (Union[int, float, np.integer, np.ndarray, List[Any], Tuple[Any]]) – Action selected by the agent.

Returns:

Observation for the next timestep, the reward obtained, whether the episode has terminated, whether the episode has been truncated, and a dictionary with extra information.

Return type:

Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]
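Putting step() together with reset() and close(), a minimal random-agent episode could look like this sketch:

    obs, info = env.reset()
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()  # random action, for illustration only
        obs, reward, terminated, truncated, info = env.step(action)
    env.close()  # end the simulation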

property step_size: float
property timestep_per_episode: int
property var_handlers: Dict[str, int] | None
property weather_path: str
property workspace_path: str
property zone_names: list