sinergym.envs.eplus_env.EplusEnv

class sinergym.envs.eplus_env.EplusEnv(*args: Any, **kwargs: Any)

Environment with EnergyPlus simulator.

__init__(idf_file, weather_file, variables_file, spaces_file, env_name='eplus-env-v1', discrete_actions=True, weather_variability=None, reward=LinearReward(), config_params: dict = None)

Environment with EnergyPlus simulator.

Parameters:
  • idf_file (str) – Name of the IDF file with the building definition.

  • weather_file (str) – Name of the EPW file for weather conditions.

  • variables_file (str) – File defining the environment variables used as observations and actions (see sinergym/data/variables/ for examples).

  • spaces_file (str) – XML file defining the action and observation spaces (see sinergym/data/variables/ for examples).

  • env_name (str) – Environment name, used to generate the working directory. Defaults to 'eplus-env-v1'.

  • discrete_actions (bool, optional) – Whether the actions are discrete (True) or continuous (False). Defaults to True.

  • weather_variability (tuple, optional) – Tuple (sigma, mu, tau) with the parameters of the Ornstein-Uhlenbeck process applied to the weather data. Defaults to None.

  • reward (Reward instance) – Reward function instance used for agent feedback. Defaults to LinearReward.

  • config_params (dict, optional) – Dictionary with extra configuration for the simulator. Defaults to None.
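For illustration, a minimal construction sketch (direct instantiation rather than a registered gym id). The file names below are placeholders, not files guaranteed to ship with Sinergym; point them at a building, weather, and variables/spaces definition under sinergym/data/ that match your model.

    from sinergym.envs.eplus_env import EplusEnv
    from sinergym.utils.rewards import LinearReward

    # NOTE: all file names here are illustrative placeholders.
    env = EplusEnv(
        idf_file='building.idf',                # building definition (IDF)
        weather_file='weather.epw',             # weather conditions (EPW)
        variables_file='variables.cfg',         # observation/action variables
        spaces_file='spaces.cfg',               # action/observation spaces (XML)
        env_name='eplus-env-demo-v1',           # working directory name
        discrete_actions=True,                  # discrete action space
        weather_variability=(1.0, 0.0, 0.001),  # (sigma, mu, tau) of the OU process
        reward=LinearReward(),                  # default reward function
    )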

Methods

__init__(idf_file, weather_file, ...) – Environment with EnergyPlus simulator.

close() – End simulation.

render([mode]) – Environment rendering.

reset() – Reset the environment.

step(action) – Sends action to the environment.

Attributes

metadata

close()

End simulation.

metadata = {'render.modes': ['human']}

render(mode='human')

Environment rendering.

reset()

Reset the environment.

Returns:

Current observation.

Return type:

np.array
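For instance, assuming an environment constructed as in the sketch above:

    obs = env.reset()  # starts a new episode and returns its first observation
    print(obs.shape)   # the observation is a flat np.array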

step(action)

Sends action to the environment.

Parameters:

action (int or np.array) – Action selected by the agent (an int index when discrete_actions is True, an np.array otherwise).

Returns:

Tuple with the observation for the next timestep (np.array), the reward obtained (float), whether the episode has ended (bool), and a dictionary with extra information (dict).

Return type:

(np.array, float, bool, dict)
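Putting it together, a minimal episode loop under the same assumptions as the construction sketch above; the agent here just samples random actions rather than following a learned policy:

    obs = env.reset()                       # first observation of the episode
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()  # random action (int, since discrete)
        obs, reward, done, info = env.step(action)
        total_reward += reward
    env.close()                             # end the EnergyPlus simulation
    print('Episode return:', total_reward)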