19. Logger Wrapper personalization/configuration

In this notebook, we will see how to personalize the logger wrapper defined by Sinergym.

[1]:
import gymnasium as gym
import numpy as np
import sinergym
from sinergym.utils.wrappers import (LoggerWrapper, MultiObsWrapper,
                                     NormalizeObservation)
from sinergym.utils.constants import RANGES_5ZONE

19.1. Step 1 Inherit and modify the CSVLogger

First, we need to inherit from CSVLogger and override its _create_row_content method in order to change the values written into the monitor file. In this example, we also keep a rolling record of the last 10 rewards and log their mean as an extra column.

[2]:

from sinergym.utils.logger import CSVLogger
from typing import Any, Dict, Optional, Sequence, Tuple, Union, List


class CustomCSVLogger(CSVLogger):

    def __init__(
            self,
            monitor_header: str,
            progress_header: str,
            log_progress_file: str,
            log_file: Optional[str] = None,
            flag: bool = True):
        super(CustomCSVLogger, self).__init__(
            monitor_header, progress_header, log_progress_file, log_file, flag)
        self.last_10_steps_reward = [0] * 10

    def _create_row_content(
            self,
            obs: List[Any],
            action: Union[int, np.ndarray, List[Any]],
            reward: Optional[float],
            done: bool,
            info: Optional[Dict[str, Any]]) -> List:

        if reward is not None:
            self.last_10_steps_reward.pop(0)
            self.last_10_steps_reward.append(reward)

        if info is None:  # In a reset
            return [0] + list(obs) + list(action) + \
                [0, reward, np.mean(self.last_10_steps_reward),
                 None, None, None, done]
        else:
            return [info['timestep']] + list(obs) + list(action) + [
                info['time_elapsed'],
                reward,
                np.mean(self.last_10_steps_reward),
                info['total_power_no_units'],
                info['comfort_penalty'],
                info['abs_comfort'],
                done]

19.2. Step 2 Instantiate the LoggerWrapper

Now we need to instantiate the LoggerWrapper, specifying the new headers for our file and the CSVLogger class we want to use.

[3]:
env = gym.make('Eplus-demo-v1')
env = LoggerWrapper(
    env,
    logger_class=CustomCSVLogger,
    monitor_header=['timestep'] + env.variables['observation'] +
                   env.variables['action'] +
                   ['time (seconds)', 'reward', '10-mean-reward',
                    'power_penalty', 'comfort_penalty', 'done'])
[2023-02-09 11:23:44,589] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating idf ExternalInterface object if it is not present...
[2023-02-09 11:23:44,590] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating idf Site:Location and SizingPeriod:DesignDay(s) to weather and ddy file...
[2023-02-09 11:23:44,591] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating idf OutPut:Variable and variables XML tree model for BVCTB connection.
[2023-02-09 11:23:44,592] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Setting up extra configuration in building model if exists...
[2023-02-09 11:23:44,592] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Setting up action definition in building model if exists...

Now, in the Sinergym output folder, you will have available a progress.csv file and a monitor.csv file for each episode that is run.

[4]:
for i in range(1):
    obs, info = env.reset()
    rewards = []
    terminated = False
    current_month = 0
    while not terminated:
        a = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(a)
        rewards.append(reward)
        if info['month'] != current_month:  # display results every month
            current_month = info['month']
            print('Reward: ', sum(rewards), info)
    print('Episode ', i, 'Mean reward: ', np.mean(
        rewards), 'Cumulative reward: ', sum(rewards))
env.close()
[2023-02-09 11:23:49,091] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Creating new EnergyPlus simulation episode...
[2023-02-09 11:23:49,099] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:EnergyPlus working directory is in /workspaces/sinergym/examples/Eplus-env-demo-v1-res1/Eplus-env-sub_run1
Reward:  -0.5795042490937325 {'timestep': 1, 'time_elapsed': 900, 'year': 1991, 'month': 1, 'day': 1, 'hour': 0, 'action': [19, 26], 'total_power': 4954.294078583251, 'total_power_no_units': -0.49542940785832507, 'comfort_penalty': -0.6635790903291401, 'abs_comfort': 0.6635790903291401, 'temperatures': [19.33642090967086]}
Reward:  -2058.929866314376 {'timestep': 2976, 'time_elapsed': 2678400, 'year': 1991, 'month': 2, 'day': 1, 'hour': 0, 'action': [15, 30], 'total_power': 4284.524651309577, 'total_power_no_units': -0.42845246513095775, 'comfort_penalty': -1.7535396279158597, 'abs_comfort': 1.7535396279158597, 'temperatures': [18.24646037208414]}
Reward:  -4051.758361432438 {'timestep': 5664, 'time_elapsed': 5097600, 'year': 1991, 'month': 3, 'day': 1, 'hour': 0, 'action': [19, 26], 'total_power': 3153.187061208363, 'total_power_no_units': -0.31531870612083635, 'comfort_penalty': -1.067937288260211, 'abs_comfort': 1.067937288260211, 'temperatures': [18.93206271173979]}
Reward:  -5382.013734959742 {'timestep': 8640, 'time_elapsed': 7776000, 'year': 1991, 'month': 4, 'day': 1, 'hour': 0, 'action': [22, 23], 'total_power': 12475.64348007228, 'total_power_no_units': -1.247564348007228, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [21.83275995845086]}
Reward:  -6277.484511420095 {'timestep': 11520, 'time_elapsed': 10368000, 'year': 1991, 'month': 5, 'day': 1, 'hour': 0, 'action': [22, 22], 'total_power': 10484.78553645204, 'total_power_no_units': -1.0484785536452041, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [21.93922073357949]}
Reward:  -7135.1225886863995 {'timestep': 14496, 'time_elapsed': 13046400, 'year': 1991, 'month': 6, 'day': 1, 'hour': 0, 'action': [16, 29], 'total_power': 257.3532327751198, 'total_power_no_units': -0.025735323277511983, 'comfort_penalty': -2.3377382776424795, 'abs_comfort': 2.3377382776424795, 'temperatures': [20.66226172235752]}
Reward:  -10046.461491112917 {'timestep': 17376, 'time_elapsed': 15638400, 'year': 1991, 'month': 7, 'day': 1, 'hour': 0, 'action': [18, 27], 'total_power': 175.7796775010779, 'total_power_no_units': -0.017577967750107792, 'comfort_penalty': -2.0312783648650807, 'abs_comfort': 2.0312783648650807, 'temperatures': [20.96872163513492]}
Reward:  -13260.618700298783 {'timestep': 20352, 'time_elapsed': 18316800, 'year': 1991, 'month': 8, 'day': 1, 'hour': 0, 'action': [16, 29], 'total_power': 4110.951203129332, 'total_power_no_units': -0.41109512031293316, 'comfort_penalty': -3.114623854411061, 'abs_comfort': 3.114623854411061, 'temperatures': [19.88537614558894]}
Reward:  -16468.41380897643 {'timestep': 23328, 'time_elapsed': 20995200, 'year': 1991, 'month': 9, 'day': 1, 'hour': 0, 'action': [21, 21], 'total_power': 2023.000058680098, 'total_power_no_units': -0.2023000058680098, 'comfort_penalty': -2.00019975340577, 'abs_comfort': 2.00019975340577, 'temperatures': [20.99980024659423]}
Reward:  -19250.32398402108 {'timestep': 26208, 'time_elapsed': 23587200, 'year': 1991, 'month': 10, 'day': 1, 'hour': 0, 'action': [21, 21], 'total_power': 2903.817402808811, 'total_power_no_units': -0.2903817402808811, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.99005393832046]}
Reward:  -20272.56018299861 {'timestep': 29184, 'time_elapsed': 26265600, 'year': 1991, 'month': 11, 'day': 1, 'hour': 0, 'action': [17, 28], 'total_power': 152.4868953414246, 'total_power_no_units': -0.01524868953414246, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.64907727570824]}
Reward:  -21440.140959844375 {'timestep': 32064, 'time_elapsed': 28857600, 'year': 1991, 'month': 12, 'day': 1, 'hour': 0, 'action': [18, 27], 'total_power': 152.4868953414246, 'total_power_no_units': -0.01524868953414246, 'comfort_penalty': -0.12924490872369887, 'abs_comfort': 0.12924490872369887, 'temperatures': [19.8707550912763]}
Reward:  -23429.375251826543 {'timestep': 35040, 'time_elapsed': 31536000, 'year': 1992, 'month': 1, 'day': 1, 'hour': 0, 'action': [17, 28], 'total_power': 5116.385833393026, 'total_power_no_units': -0.5116385833393026, 'comfort_penalty': -2.751923110197989, 'abs_comfort': 2.751923110197989, 'temperatures': [17.24807688980201]}
Episode  0 Mean reward:  -0.6686465539904984 Cumulative reward:  -23429.375251826543
[2023-02-09 11:24:00,984] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:EnergyPlus simulation closed successfully.
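
To verify that the custom column is actually being written, you can open one of the generated monitor.csv files, for instance with pandas. This is only a minimal sketch: the relative path below assumes the episode working directory shown in the logs above, and the use of pandas is not part of Sinergym itself.

[ ]:
import pandas as pd

# Assumed path, taken from the episode working directory logged above;
# adjust it to your own Sinergym output folder.
monitor = pd.read_csv('Eplus-env-demo-v1-res1/Eplus-env-sub_run1/monitor.csv')

# The header defined in the LoggerWrapper should appear here,
# including the new '10-mean-reward' column.
print(monitor.columns.tolist())
print(monitor[['timestep', 'reward', '10-mean-reward']].head())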