sinergym.utils.wrappers.EnergyCostWrapper

class sinergym.utils.wrappers.EnergyCostWrapper(env: Env, energy_cost_data_file: str, reward_kwargs: Dict[str, Any] | None = {'energy_cost_variables': ['energy_cost'], 'energy_variables': ['HVAC_electricity_demand_rate'], 'energy_weight': 0.4, 'lambda_energy': 0.0001, 'lambda_energy_cost': 1.0, 'lambda_temperature': 1.0, 'range_comfort_summer': [23.0, 26.0], 'range_comfort_winter': [20.0, 23.5], 'temperature_variables': ['air_temperature'], 'temperature_weight': 0.4}, energy_cost_variability: Tuple[float, float, float] | None = None)
__init__(env: Env, energy_cost_data_file: str, reward_kwargs: Dict[str, Any] | None = {'energy_cost_variables': ['energy_cost'], 'energy_variables': ['HVAC_electricity_demand_rate'], 'energy_weight': 0.4, 'lambda_energy': 0.0001, 'lambda_energy_cost': 1.0, 'lambda_temperature': 1.0, 'range_comfort_summer': [23.0, 26.0], 'range_comfort_winter': [20.0, 23.5], 'temperature_variables': ['air_temperature'], 'temperature_weight': 0.4}, energy_cost_variability: Tuple[float, float, float] | None = None)

Adds energy cost information to the current observation.

Parameters:
  • env (Env) – Original Gym environment.

  • energy_cost_data_file (str) – Path to the file from which the energy cost data is obtained.

  • energy_cost_variability (Tuple[float, float, float], optional) – Ornstein-Uhlenbeck variation parameters applied to the energy cost data. If None, the data is used unmodified.

  • reward_kwargs (Dict[str, Any], optional) – Parameters for customizing the reward function.
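A minimal usage sketch follows, assuming a registered Sinergym environment ID and a placeholder path for the energy cost data file (both illustrative, not fixed by this API):

   import gymnasium as gym
   import sinergym  # registers the Eplus-* environments
   from sinergym.utils.wrappers import EnergyCostWrapper

   # Environment ID and data path are illustrative; substitute your own.
   env = gym.make('Eplus-5zone-hot-continuous-v1')
   env = EnergyCostWrapper(
       env,
       energy_cost_data_file='/path/to/energy_cost_data.csv',
       energy_cost_variability=(1.0, 0.0, 24.0),  # assumed (sigma, mu, tau) ordering
   )

   obs, info = env.reset()  # the observation now includes the energy cost value(s)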

Methods

__init__(env, energy_cost_data_file[, ...])

Adds energy cost information to the current observation.

apply_ou_variability()

Modifies the energy cost data using an Ornstein-Uhlenbeck process, according to the variation specified in the energy_cost_variability attribute.

class_name()

Returns the class name of the wrapper.

close()

Closes the wrapper and env.

get_wrapper_attr(name)

Gets an attribute from the wrapper and lower environments if name doesn't exist in this object.

observation(obs, info)

Build the state observation by adding energy cost information.

render()

Uses the env's render(), which can be overwritten to change the returned data.

reset([seed, options])

Resets the environment.

set_energy_cost_data()

Sets the energy cost data used to construct the state observation.

step(action)

Performs the action in the wrapped environment.

wrapper_spec(**kwargs)

Generates a WrapperSpec for the wrappers.

Attributes

action_space

Returns the Env action_space unless overwritten, in which case the wrapper's action_space is used.

logger

metadata

Returns the Env metadata.

np_random

Returns the Env np_random attribute.

observation_space

Returns the Env observation_space unless overwritten, in which case the wrapper's observation_space is used.

render_mode

Returns the Env render_mode.

reward_range

Returns the Env reward_range unless overwritten, in which case the wrapper's reward_range is used.

spec

Returns the Env spec attribute with the WrapperSpec if the wrapper inherits from EzPickle.

unwrapped

Returns the base environment of the wrapper.

apply_ou_variability()

Modifies the energy cost data using an Ornstein-Uhlenbeck process, according to the variation specified in the energy_cost_variability attribute.
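As a rough illustration (not Sinergym's internal implementation), a discretized Ornstein-Uhlenbeck process adds mean-reverting noise to the hourly cost series; the (sigma, mu, tau) ordering below is an assumption borrowed from Sinergym's weather variability convention:

   import numpy as np

   def ou_noise(n_steps, sigma, mu, tau, dt=1.0):
       # Euler-Maruyama discretization of an Ornstein-Uhlenbeck process.
       x = np.zeros(n_steps)
       sigma_bis = sigma * np.sqrt(2.0 / tau)
       for i in range(n_steps - 1):
           x[i + 1] = x[i] + dt * (mu - x[i]) / tau + sigma_bis * np.sqrt(dt) * np.random.randn()
       return x

   # Add mean-reverting noise to a flat hourly cost baseline (values illustrative).
   costs = np.full(8760, 0.12)
   noisy_costs = costs + ou_noise(8760, sigma=1.0, mu=0.0, tau=24.0)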

logger = <Logger WRAPPER EnergyCostWrapper (INFO)>
observation(obs, info)

Build the state observation by adding energy cost information.

Parameters:
  • obs (np.ndarray) – Original observation.

  • info (Dict[str, Any]) – Information about the environment.

Returns:

Transformed observation.

Return type:

np.ndarray
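
Conceptually, the transformed observation is the original vector with the energy cost value for the current timestep appended; the sketch below is a simplified illustration, not the wrapper's internal code:

   import numpy as np

   def add_energy_cost(obs, current_cost):
       # Append the current energy cost to the original observation vector.
       return np.concatenate([obs, np.array([current_cost], dtype=obs.dtype)])

   obs = np.array([21.5, 0.4, 1500.0], dtype=np.float32)  # illustrative original observation
   extended_obs = add_energy_cost(obs, 0.118)             # observation grows by one element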

reset(seed: int | None = None, options: Dict[str, Any] | None = None) → Tuple[ndarray, Dict[str, Any]]

Resets the environment.

Returns:

Tuple with the next observation and a dict with information about the environment.

Return type:

Tuple[np.ndarray,Dict[str,Any]]

set_energy_cost_data()

Sets the energy cost data used to construct the state observation.

step(action: int | ndarray) → Tuple[ndarray, float, bool, bool, Dict[str, Any]]

Performs the action in the wrapped environment.

Parameters:

action (Union[int, np.ndarray]) – Action to be executed in the environment.

Returns:

Tuple with the next observation, reward, terminated and truncated flags, and a dict with information about the environment.

Return type:

Tuple[np.ndarray, float, bool, bool, Dict[str, Any]]
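
Putting reset() and step() together, a typical interaction loop with the wrapped environment looks like the following sketch (random actions; `env` is the wrapped environment from the constructor example above):

   obs, info = env.reset()
   terminated = truncated = False
   while not (terminated or truncated):
       action = env.action_space.sample()  # random policy, for illustration only
       obs, reward, terminated, truncated, info = env.step(action)
   env.close()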