LoggerWrapper customization
In this notebook, we will demonstrate how to customize the LoggerWrapper
provided by Sinergym.
[1]:
import gymnasium as gym
import numpy as np
import sinergym
from sinergym.utils.wrappers import (BaseLoggerWrapper, LoggerWrapper, CSVLogger, WandBLogger)
Step 1: Inherit and complete abstract methods from BaseLoggerWrapper
We simply need to inherit from this class and define both the custom metrics to be monitored and the summary metrics calculated from the logger data at the end of each simulated episode.
Additionally, you can change the back-end where the information is stored by passing a different logger_class instead of the default one (see the sketch after the class definition below).
Sinergym uses this structure to implement its default LoggerWrapper.
[ ]:
from sinergym.utils.logger import LoggerStorage, TerminalLogger
from sinergym.utils.constants import LOG_WRAPPERS_LEVEL
from typing import Any, Dict, Optional, Union, List, Callable
class CustomLoggerWrapper(BaseLoggerWrapper):

    logger = TerminalLogger().getLogger(name='WRAPPER CustomLoggerWrapper',
                                        level=LOG_WRAPPERS_LEVEL)

    def __init__(
            self,
            env: gym.Env,
            logger_class: Callable = LoggerStorage):

        super(CustomLoggerWrapper, self).__init__(env, logger_class)
        # Custom variables and summary variables
        self.custom_variables = ['custom_variable1', 'custom_variable2']
        self.summary_variables = ['episode_num',
                                  'double_mean_reward', 'half_power_demand']

    # Define abstract methods for metrics calculation
    def calculate_custom_metrics(self,
                                 obs: np.ndarray,
                                 action: Union[int, np.ndarray],
                                 reward: float,
                                 info: Dict[str, Any],
                                 terminated: bool,
                                 truncated: bool):
        # Variables combining information
        return [obs[0] * 2, obs[-1] + reward]

    def get_episode_summary(self) -> Dict[str, float]:
        # Get information from logger
        power_demands = [info['total_power_demand']
                         for info in self.data_logger.infos]

        # Data summary
        data_summary = {
            'episode_num': self.get_wrapper_attr('episode'),
            'double_mean_reward': np.mean(self.data_logger.rewards) * 2,
            'half_power_demand': np.mean(power_demands) / 2,
        }
        return data_summary
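As noted above, the storage back-end can be swapped by passing a different logger_class: the wrapper records each interaction through it, and the methods above read the data back via self.data_logger. A minimal sketch (not executed here), where MyLoggerStorage is a hypothetical name that simply reuses the default LoggerStorage behaviour:
[ ]:
# Hypothetical subclass: override LoggerStorage behaviour here if a
# different storage back-end is needed; otherwise it behaves as the default.
class MyLoggerStorage(LoggerStorage):
    pass

# The wrapper would then use MyLoggerStorage instead of the default LoggerStorage
# env = CustomLoggerWrapper(gym.make('Eplus-demo-v1'), logger_class=MyLoggerStorage)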
Lower-level changes to the logging system can be made by implementing your own wrapper based on BaseLoggerWrapper, although this requires a deeper understanding of the tool.
Step 2: Use CustomLoggerWrapper to save information
Now we can combine the new wrapper with any of Sinergym's output wrappers, and the data will be saved properly.
For instance, let's combine it with CSVLogger to save the data in CSV files, although it can also be used with WandBLogger or any other logger created by the user.
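For reference, a WandBLogger-based stack could look like the sketch below, left commented out since it is not executed in this notebook; the entity and project_name argument names are assumptions, so check the actual WandBLogger signature in sinergym.utils.wrappers, and note that a Weights & Biases account is required:
[ ]:
# Hypothetical, non-executed sketch: the argument names are assumptions and
# should be checked against the real WandBLogger signature.
# env = gym.make('Eplus-demo-v1')
# env = CustomLoggerWrapper(env)
# env = WandBLogger(env,
#                   entity='my_wandb_entity',                # assumed parameter name
#                   project_name='sinergym_custom_logging')  # assumed parameter name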
[ ]:
env = gym.make('Eplus-demo-v1')
env = CustomLoggerWrapper(env)
env = CSVLogger(env)
#==============================================================================================#
[ENVIRONMENT] (INFO) : Creating Gymnasium environment.
[ENVIRONMENT] (INFO) : Name: demo-v1
#==============================================================================================#
[MODELING] (INFO) : Experiment working directory created.
[MODELING] (INFO) : Working directory: /workspaces/sinergym/examples/Eplus-env-demo-v1-res1
[MODELING] (INFO) : Model Config is correct.
[MODELING] (INFO) : Update building model Output:Variable with variable names.
[MODELING] (INFO) : Update building model Output:Meter with meter names.
[MODELING] (INFO) : Extra config: runperiod updated to {'apply_weekend_holiday_rule': 'No', 'begin_day_of_month': 1, 'begin_month': 1, 'begin_year': 1991, 'day_of_week_for_start_day': 'Monday', 'end_day_of_month': 1, 'end_month': 3, 'end_year': 1991, 'use_weather_file_daylight_saving_period': 'Yes', 'use_weather_file_holidays_and_special_days': 'Yes', 'use_weather_file_rain_indicators': 'Yes', 'use_weather_file_snow_indicators': 'Yes'}
[MODELING] (INFO) : Updated episode length (seconds): 5184000.0
[MODELING] (INFO) : Updated timestep size (seconds): 3600.0
[MODELING] (INFO) : Updated timesteps per episode: 1440
[MODELING] (INFO) : Runperiod established.
[MODELING] (INFO) : Episode length (seconds): 5184000.0
[MODELING] (INFO) : timestep size (seconds): 3600.0
[MODELING] (INFO) : timesteps per episode: 1440
[REWARD] (INFO) : Reward function initialized.
[ENVIRONMENT] (INFO) : Environment created successfully.
[WRAPPER CSVLogger] (INFO) : Wrapper initialized.
Now, if we run the environment (with a random agent, for example), we can see how the files are correctly saved in the Sinergym output. progress.csv contains the summary variables we have defined, and within the monitor folder of each episode a new CSV file named custom_metrics.csv is created, registering the new metrics tracked. These files are inspected at the end of the notebook.
[4]:
for i in range(1):
    obs, info = env.reset()
    rewards = []
    truncated = terminated = False
    current_month = 0
    while not (terminated or truncated):
        a = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(a)
        rewards.append(reward)
        if info['month'] != current_month:  # display results every month
            current_month = info['month']
            print('Reward: ', sum(rewards), info)
    print('Episode ', i, 'Mean reward: ', np.mean(
        rewards), 'Cumulative reward: ', sum(rewards))
env.close()
#----------------------------------------------------------------------------------------------#
[ENVIRONMENT] (INFO) : Starting a new episode.
[ENVIRONMENT] (INFO) : Episode 1: demo-v1
#----------------------------------------------------------------------------------------------#
[MODELING] (INFO) : Episode directory created.
[MODELING] (INFO) : Weather file USA_PA_Pittsburgh-Allegheny.County.AP.725205_TMY3.epw used.
[MODELING] (INFO) : Adapting weather to building model.
[ENVIRONMENT] (INFO) : Saving episode output path.
[ENVIRONMENT] (INFO) : Episode 1 started.
[SIMULATOR] (INFO) : handlers initialized.
[SIMULATOR] (INFO) : handlers are ready.
[SIMULATOR] (INFO) : System is ready.
Reward: -43.96143518328036 {'time_elapsed(hours)': 2.5, 'month': 1, 'day': 1, 'hour': 1, 'is_raining': False, 'action': array([21.587074, 29.06685 ], dtype=float32), 'timestep': 1, 'reward': -43.96143518328036, 'energy_term': -43.67932315835093, 'comfort_term': -0.2821120249294271, 'reward_weight': 0.5, 'abs_energy_penalty': -87.35864631670186, 'abs_comfort_penalty': -0.5642240498588542, 'total_power_demand': 87.35864631670186, 'total_temperature_violation': 0.5642240498588542}
Simulation Progress [Episode 1]: 53%|█████▎ | 53/100 [00:00<00:00, 156.11%/s, 53% completed] Reward: -1655154.188930636 {'time_elapsed(hours)': 745.1666666666666, 'month': 2, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([18.982193, 28.832418], dtype=float32), 'timestep': 744, 'reward': -1142.821006760398, 'energy_term': -1142.7948709960122, 'comfort_term': -0.026135764385772475, 'reward_weight': 0.5, 'abs_energy_penalty': -2285.5897419920243, 'abs_comfort_penalty': -0.05227152877154495, 'total_power_demand': 2285.5897419920243, 'total_temperature_violation': 0.05227152877154495}
Simulation Progress [Episode 1]: 98%|█████████▊| 98/100 [00:00<00:00, 135.07%/s, 98% completed]Reward: -2802448.0918085002 {'time_elapsed(hours)': 1417.25, 'month': 3, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([16.598038, 24.909565], dtype=float32), 'timestep': 1416, 'reward': -43.67932315835093, 'energy_term': -43.67932315835093, 'comfort_term': 0.0, 'reward_weight': 0.5, 'abs_energy_penalty': -87.35864631670186, 'abs_comfort_penalty': 0, 'total_power_demand': 87.35864631670186, 'total_temperature_violation': 0.0}
Episode 0 Mean reward: -1952.6775391260724 Cumulative reward: -2811855.656341544
[WRAPPER CSVLogger] (INFO) : Environment closed, data updated in monitor and progress.csv.
Simulation Progress [Episode 1]: 98%|█████████▊| 98/100 [00:02<00:00, 33.39%/s, 98% completed]
[ENVIRONMENT] (INFO) : Environment closed. [demo-v1]
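Finally, we can inspect the generated files with pandas. A minimal sketch, assuming the output directory printed in the logs above (the -res1 suffix may differ between runs) and that the notebook is run from the same working directory; adjust the paths if your Sinergym version lays out the output differently:
[ ]:
import glob
import pandas as pd

# Assumed output directory, taken from the working directory shown in the logs
# above; adjust the '-res1' suffix to match your run.
output_dir = 'Eplus-env-demo-v1-res1'

# progress.csv stores the episode summaries defined in CustomLoggerWrapper
# (episode_num, double_mean_reward, half_power_demand)
progress = pd.read_csv(f'{output_dir}/progress.csv')
print(progress.head())

# Each episode's monitor folder contains custom_metrics.csv with the values
# returned by calculate_custom_metrics at every timestep
for path in glob.glob(f'{output_dir}/**/custom_metrics.csv', recursive=True):
    print(path)
    print(pd.read_csv(path).head())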