sinergym.envs.eplus_env.EplusEnv
- class sinergym.envs.eplus_env.EplusEnv(idf_file: str, weather_file: str | List[str], observation_space: Box = Box(-5000000.0, 5000000.0, (4,), float32), observation_variables: List[str] = [], action_space: Box | Discrete = Box([], [], (0,), float32), action_variables: List[str] = [], action_mapping: Dict[int, Tuple[float, ...]] = {}, weather_variability: Tuple[float] | None = None, reward: Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: Dict[str, Any] | None = {}, act_repeat: int = 1, max_ep_data_store_num: int = 10, action_definition: Dict[str, Any] | None = None, env_name: str = 'eplus-env-v1', config_params: Dict[str, Any] | None = None)
- __init__(idf_file: str, weather_file: str | List[str], observation_space: Box = Box(-5000000.0, 5000000.0, (4,), float32), observation_variables: List[str] = [], action_space: Box | Discrete = Box([], [], (0,), float32), action_variables: List[str] = [], action_mapping: Dict[int, Tuple[float, ...]] = {}, weather_variability: Tuple[float] | None = None, reward: Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: Dict[str, Any] | None = {}, act_repeat: int = 1, max_ep_data_store_num: int = 10, action_definition: Dict[str, Any] | None = None, env_name: str = 'eplus-env-v1', config_params: Dict[str, Any] | None = None)
Environment with EnergyPlus simulator.
- Parameters:
idf_file (str) – Name of the IDF file with the building definition.
weather_file (Union[str,List[str]]) – Name of the EPW file for weather conditions. A list of weather files can also be given, in which case one is sampled randomly at each episode.
observation_space (gym.spaces.Box, optional) – Gym Observation Space definition. Defaults to the generic 4-dimensional Box space shown in the signature.
observation_variables (List[str], optional) – List with the variable names from the IDF to observe. Defaults to an empty list (no observation variables).
action_space (Union[gym.spaces.Box, gym.spaces.Discrete], optional) – Gym Action Space definition. Defaults to an empty action_space (no control).
action_variables (List[str], optional) – Action variables to be controlled in the IDF. If those action names have not been configured manually in the IDF, you should configure them or use extra_config. Defaults to an empty list.
action_mapping (Dict[int, Tuple[float, ...]], optional) – Action mapping for discrete action spaces only. Defaults to an empty dict.
weather_variability (Optional[Tuple[float]], optional) – Tuple with the sigma, mu and tau parameters of the Ornstein-Uhlenbeck process applied to the weather data. Defaults to None.
reward (Any, optional) – Reward function instance used for agent feedback. Defaults to LinearReward.
reward_kwargs (Optional[Dict[str, Any]], optional) – Parameters to be passed to the reward function. Defaults to empty dict.
act_repeat (int, optional) – Number of simulator timesteps during which an action is repeated, regardless of the actions received during that repetition interval. Defaults to 1.
max_ep_data_store_num (int, optional) – Maximum number of the most recent episode sub-folders (one per episode) kept during simulation execution. Defaults to 10.
action_definition (Optional[Dict[str, Any]], optional) – Dict with building components to be controlled by Sinergym automatically, if supported. Defaults to None.
env_name (str, optional) – Env name used for working directory generation. Defaults to eplus-env-v1.
config_params (Optional[Dict[str, Any]], optional) – Dictionary with all extra configuration for simulator. Defaults to None.
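A minimal construction sketch follows. The IDF/EPW file names, variable names and setpoint ranges are illustrative placeholders and must match resources actually available in your Sinergym installation; the observation shape assumes the classic Sinergym layout of four time features (year, month, day, hour) prepended to the observation variables.

```python
import gymnasium as gym
import numpy as np

from sinergym.envs.eplus_env import EplusEnv

# All file and variable names here are illustrative: they must match
# variables actually defined in the chosen IDF and EPW files.
env = EplusEnv(
    idf_file='5ZoneAutoDXVAV.idf',
    weather_file='USA_PA_Pittsburgh-Allegheny.County.AP.725205_TMY3.epw',
    # Assumed layout: 4 time features (year, month, day, hour) + 3
    # observation variables -> a 7-dimensional observation.
    observation_space=gym.spaces.Box(
        low=-5e6, high=5e6, shape=(7,), dtype=np.float32),
    observation_variables=[
        'Site Outdoor Air Drybulb Temperature(Environment)',
        'Zone Air Temperature(SPACE1-1)',
        'Facility Total HVAC Electricity Demand Rate(Whole Building)'],
    # Continuous heating/cooling setpoint control over two action variables.
    action_space=gym.spaces.Box(
        low=np.array([15.0, 22.5], dtype=np.float32),
        high=np.array([22.5, 30.0], dtype=np.float32),
        dtype=np.float32),
    action_variables=['Heating_Setpoint_RL', 'Cooling_Setpoint_RL'],
    weather_variability=(1.0, 0.0, 0.001),  # (sigma, mu, tau) of the OU process
    act_repeat=1,
    env_name='eplus-env-demo-v1')
```

Omitting reward and reward_kwargs leaves the default LinearReward in place.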
Methods
- __init__(idf_file, weather_file[, ...]): Environment with EnergyPlus simulator.
- close(): End simulation.
- get_schedulers([path]): Extract all schedulers available in the building model to be controlled.
- get_wrapper_attr(name): Gets the attribute name from the environment.
- get_zones(): Get the zone names available in the building model of that environment.
- has_wrapper_attr(name): Checks if the attribute name exists in the environment.
- render([mode]): Environment rendering.
- reset([seed, options]): Reset the environment.
- set_wrapper_attr(name, value): Sets the attribute name on the environment with value.
- step(action): Sends the action to the environment.
Attributes
- np_random: Returns the environment's internal _np_random that, if not set, will be initialised with a random seed.
- np_random_seed: Returns the environment's internal _np_random_seed that, if not set, will first be initialised with a random int as seed.
- render_mode
- spec
- unwrapped: Returns the base non-wrapped environment.
- property action_space: Space[Any]
- close() None
End simulation.
- get_schedulers(path: str | None = None) Dict[str, Any]
Extract all schedulers available in the building model to be controlled.
- Parameters:
path (str, optional) – If a path is specified, this method also exports an xlsx version of the schedulers in addition to returning the dictionary. Defaults to None.
- Returns:
Python Dictionary: for each scheduler found, its type value and where the scheduler is present (Object name, Object field and Object type).
- Return type:
Dict[str, Any]
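A brief usage sketch, assuming the env object built in the construction example above; the export path is illustrative:

```python
# Inspect every scheduler found in the building model.
schedulers = env.get_schedulers()
for name, info in schedulers.items():
    print(name, info)

# With a path, the same data is additionally exported as an xlsx file.
env.get_schedulers(path='./schedulers.xlsx')
```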
- get_zones() List[str]
Get the zone names available in the building model of that environment.
- Returns:
List of the zone names.
- Return type:
List[str]
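For example (zone names depend entirely on the building model; those in the comment are illustrative):

```python
zones = env.get_zones()
print(zones)  # e.g. ['SPACE1-1', 'SPACE2-1'] for a multi-zone model
```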
- metadata: dict[str, Any] = {'render_modes': ['human']}
- property observation_space: Space[Any]
- render(mode: str = 'human') None
Environment rendering.
- Parameters:
mode (str, optional) – Mode for rendering. Defaults to ‘human’.
- reset(seed: int | None = None, options: Dict[str, Any] | None = None) Tuple[ndarray, Dict[str, Any]]
Reset the environment.
- Parameters:
seed (Optional[int]) – The seed used to initialize the environment’s episode (np_random). If None, a seed will be chosen from some source of entropy. Defaults to None.
options (Optional[Dict[str, Any]]) – Additional information to specify how the environment is reset. Defaults to None.
- Returns:
Current observation and info context with additional information.
- Return type:
Tuple[np.ndarray,Dict[str,Any]]
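A reproducibility-oriented sketch, continuing with the env from the construction example; options is a standard Gymnasium pass-through and is omitted here:

```python
# Seeding makes the episode's np_random stream reproducible.
obs, info = env.reset(seed=42)
print(obs.shape, sorted(info.keys()))
```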
- step(action: int | float | integer | ndarray | List[Any] | Tuple[Any]) Tuple[ndarray, float, bool, bool, Dict[str, Any]]
Sends the action to the environment.
- Parameters:
action (Union[int, float, np.integer, np.ndarray, List[Any], Tuple[Any]]) – Action selected by the agent.
- Returns:
Observation for the next timestep, the reward obtained, whether the episode has terminated, whether the episode has been truncated, and a dictionary with extra information.
- Return type:
Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]
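The documented reset/step/close contract composes into the usual Gymnasium interaction loop. A random-agent sketch, again assuming the env built earlier:

```python
# One full episode with a random agent, then clean shutdown.
obs, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action = env.action_space.sample()  # replace with a trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print('Episode return:', total_reward)
env.close()
```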