sinergym.envs.eplus_env.EplusEnv
- class sinergym.envs.eplus_env.EplusEnv(building_file: str, weather_files: str | ~typing.List[str], action_space: ~gymnasium.spaces.box.Box = Box([], [], (0,), float32), time_variables: ~typing.List[str] = [], variables: ~typing.Dict[str, ~typing.Tuple[str, str]] = {}, meters: ~typing.Dict[str, str] = {}, actuators: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, context: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, initial_context: ~typing.List[float] | None = None, weather_variability: ~typing.Dict[str, ~typing.Tuple[float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float]]] | None = None, reward: ~typing.Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: ~typing.Dict[str, ~typing.Any] | None = {}, max_ep_data_store_num: int = 10, env_name: str = 'eplus-env-v1', config_params: ~typing.Dict[str, ~typing.Any] | None = None, seed: int | None = None)
- __init__(building_file: str, weather_files: str | ~typing.List[str], action_space: ~gymnasium.spaces.box.Box = Box([], [], (0,), float32), time_variables: ~typing.List[str] = [], variables: ~typing.Dict[str, ~typing.Tuple[str, str]] = {}, meters: ~typing.Dict[str, str] = {}, actuators: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, context: ~typing.Dict[str, ~typing.Tuple[str, str, str]] = {}, initial_context: ~typing.List[float] | None = None, weather_variability: ~typing.Dict[str, ~typing.Tuple[float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float], float | ~typing.Tuple[float, float]]] | None = None, reward: ~typing.Any = <class 'sinergym.utils.rewards.LinearReward'>, reward_kwargs: ~typing.Dict[str, ~typing.Any] | None = {}, max_ep_data_store_num: int = 10, env_name: str = 'eplus-env-v1', config_params: ~typing.Dict[str, ~typing.Any] | None = None, seed: int | None = None)
Environment with EnergyPlus simulator.
- Parameters:
building_file (str) – Name of the JSON file with the building definition.
weather_files (Union[str, List[str]]) – Name of the EPW file with the weather conditions. A list of weather files can also be given, in which case one is sampled randomly for each episode.
action_space (gym.spaces.Box, optional) – Gym Action Space definition. Defaults to an empty action space (no control).
time_variables (List[str]) – EnergyPlus time variables to observe. Each name must match the corresponding E+ Data Transfer API method name. Defaults to an empty list.
variables (Dict[str, Tuple[str, str]]) – Specification for EnergyPlus Output:Variable. The key is a custom name; the value is a tuple with the original variable name and the output variable key. Defaults to an empty dict.
meters (Dict[str, str]) – Specification for EnergyPlus Output:Meter. The key is a custom name; the value is the original EnergyPlus meter name. Defaults to an empty dict.
actuators (Dict[str, Tuple[str, str, str]]) – Specification for EnergyPlus Input Actuators. The key is a custom name; the value is a tuple with the actuator type, the value type and the original actuator name. Defaults to an empty dict.
context (Dict[str, Tuple[str, str, str]]) – Specification for EnergyPlus context actuators, in the same format as actuators. These values are processed as real-time building configuration instead of real-time control. Defaults to an empty dict.
initial_context (Optional[List[float]]) – Initial context values to be set in the building model. Defaults to None.
weather_variability (Optional[Dict[str, Tuple[Union[float, Tuple[float, float]], Union[float, Tuple[float, float]], Union[float, Tuple[float, float]]]]]) – Dictionary mapping each desired weather variable to the sigma, mu and tau parameters of an Ornstein-Uhlenbeck process applied to its weather data. Each parameter may also be given as a range, from which a value is selected randomly for each episode. Defaults to None.
reward (Any, optional) – Reward function instance used for agent feedback. Defaults to LinearReward.
reward_kwargs (Optional[Dict[str, Any]], optional) – Parameters to be passed to the reward function. Defaults to an empty dict.
max_ep_data_store_num (int, optional) – Number of the most recent episode sub-folders (one per episode) kept during simulation execution. Defaults to 10.
env_name (str, optional) – Environment name used for working-directory generation. Defaults to eplus-env-v1.
config_params (Optional[Dict[str, Any]], optional) – Dictionary with all extra configuration for the simulator. Defaults to None.
seed (Optional[int], optional) – Seed for the random number generator. Defaults to None.
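The observation and action specification parameters are easiest to see together. The sketch below builds a hypothetical specification for a 5Zone-style building; every variable, meter and actuator name here is illustrative and must match entries actually available in your building model, and the commented-out environment creation assumes placeholder building and weather file names.

```python
from typing import Dict, List, Tuple

# Time variables read through the E+ Data Transfer API (names must match
# the corresponding API method names).
time_variables: List[str] = ['month', 'day_of_month', 'hour']

# Output:Variable spec: custom name -> (original variable name, variable key).
variables: Dict[str, Tuple[str, str]] = {
    'outdoor_temperature': ('Site Outdoor Air Drybulb Temperature', 'Environment'),
    'air_temperature': ('Zone Air Temperature', 'SPACE1-1'),
}

# Output:Meter spec: custom name -> original meter name.
meters: Dict[str, str] = {'HVAC_electricity': 'Electricity:HVAC'}

# Actuator spec: custom name -> (actuator type, value type, actuator name).
actuators: Dict[str, Tuple[str, str, str]] = {
    'Heating_Setpoint_RL': ('Schedule:Compact', 'Schedule Value', 'HTG-SETP-SCH'),
    'Cooling_Setpoint_RL': ('Schedule:Compact', 'Schedule Value', 'CLG-SETP-SCH'),
}

# Environment creation (requires an EnergyPlus installation and the files
# on disk; file names below are placeholders):
# import gymnasium as gym
# import numpy as np
# from sinergym.envs.eplus_env import EplusEnv
# env = EplusEnv(
#     building_file='5ZoneAutoDXVAV.epJSON',
#     weather_files='my_location_TMY3.epw',
#     action_space=gym.spaces.Box(low=np.array([12.0, 23.25], dtype=np.float32),
#                                 high=np.array([23.25, 30.0], dtype=np.float32)),
#     time_variables=time_variables,
#     variables=variables,
#     meters=meters,
#     actuators=actuators,
#     env_name='demo-env-v1',
# )
```

Each custom key becomes one element of the observation (for time variables, variables and meters) or one controllable dimension of the action space (for actuators), in declaration order.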
Methods

__init__(building_file, weather_files[, ...])
    Environment with EnergyPlus simulator.
close()
    End simulation.
get_wrapper_attr(name)
    Gets the attribute name from the environment.
has_wrapper_attr(name)
    Checks if the attribute name exists in the environment.
info()
render([mode])
    Environment rendering.
reset([seed, options])
    Reset the environment.
set_seed(seed)
    Set seed for random number generator.
set_wrapper_attr(name, value, *[, force])
    Sets the attribute name on the environment with value; see Wrapper.set_wrapper_attr for more info.
step(action)
    Sends action to the environment.
update_context(context_values)
    Update real-time building context (actuators which are not controlled by the agent).
Attributes

np_random
    Returns the environment's internal _np_random that, if not set, will initialise with a random seed.
np_random_seed
    Returns the environment's internal _np_random_seed that, if not set, will first initialise with a random int as seed.
render_mode
spec
unwrapped
    Returns the base non-wrapped environment.
- property action_space: Space[Any]
- property actuator_handlers: Dict[str, int] | None
- property available_handlers: str | None
- property building_path: str
- close() → None
End simulation.
- property context_handlers: Dict[str, int] | None
- property ddy_path: str
- property episode_length: float
- property episode_path: str
- property idd_path: str
- info()
- property is_discrete: bool
- property is_running: bool
- logger = <Logger ENVIRONMENT (INFO)>
- metadata: dict[str, Any] = {'render_modes': ['human']}
- property meter_handlers: Dict[str, int] | None
- property observation_space: Space[Any]
- render(mode: str = 'human') → None
Environment rendering.
- Parameters:
mode (str, optional) – Mode for rendering. Defaults to 'human'.
- reset(seed: int | None = None, options: Dict[str, Any] | None = None) → Tuple[ndarray, Dict[str, Any]]
Reset the environment.
- Parameters:
seed (Optional[int]) – The seed used to initialize the environment's episode (np_random). If a global seed was configured in the environment, this reset seed will not be applied. Defaults to None.
options (Optional[Dict[str, Any]]) – Additional information to specify how the environment is reset. Defaults to None.
- Returns:
Current observation and info context with additional information.
- Return type:
Tuple[np.ndarray, Dict[str, Any]]
- property runperiod: Dict[str, int]
- property schedulers: Dict[str, Dict[str, str | Dict[str, str]]]
- set_seed(seed: int | None) → None
Set seed for random number generator.
- Parameters:
seed (Optional[int]) – Seed for the random number generator.
- simple_printer = <Logger Printer (INFO)>
- step(action: ndarray) → Tuple[ndarray, float, bool, bool, Dict[str, Any]]
Sends action to the environment.
- Parameters:
action (np.ndarray) – Action selected by the agent.
- Returns:
Observation for the next timestep, reward obtained, whether the episode has terminated, whether the episode has been truncated, and a dictionary with extra information.
- Return type:
Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]
- property step_size: float
- property timestep_per_episode: int
- update_context(context_values: ndarray | List[float]) → None
Update real-time building context (actuators which are not controlled by the agent).
- Parameters:
context_values (Union[np.ndarray, List[float]]) – List of values to be updated in the building model.
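As a sketch of the relationship between the constructor's context specification and this method: one value is expected per context key, in declaration order. The schedule names below are illustrative, not taken from any real building model.

```python
# Hypothetical context specification, as it would be passed to the
# EplusEnv constructor (building configuration, not agent control):
context = {
    'occupancy_density': ('Schedule:Compact', 'Schedule Value', 'OCCUPY-SCH'),
    'lighting_level': ('Schedule:Compact', 'Schedule Value', 'LIGHTS-SCH'),
}

# One value per context key, in the same order as the dictionary:
new_context_values = [0.5, 0.8]
assert len(new_context_values) == len(context)

# With a running environment this would apply the new configuration:
# env.update_context(new_context_values)
```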
- property var_handlers: Dict[str, int] | None
- property weather_path: str
- property workspace_path: str
- property zone_names: list