Rule Controller example

First we import all the required libraries. Remember to always import `sinergym`, even if your IDE flags it as unused: importing it is what registers the Sinergym environments in Gym.

[1]:
from typing import List, Any, Sequence
from sinergym.utils.common import get_season_comfort_range
from datetime import datetime
import gym
import numpy as np
import sinergym
/usr/local/lib/python3.10/dist-packages/gym/spaces/box.py:73: UserWarning: WARN: Box bound precision lowered by casting to float32
  logger.warn(

Now we can define the environment we want to use; in this case, the Eplus demo environment.

[2]:
env = gym.make('Eplus-demo-v1')
[2022-08-24 09:21:11,092] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating idf ExternalInterface object if it is not present...
[2022-08-24 09:21:11,093] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating idf Site:Location and SizingPeriod:DesignDay(s) to weather and ddy file...
[2022-08-24 09:21:11,095] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating idf OutPut:Variable and variables XML tree model for BVCTB connection.
[2022-08-24 09:21:11,097] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Setting up extra configuration in building model if exists...

For the rule-based controller, have a look at the controllers already defined in `sinergym.utils.controllers`; there is one for each building. Since the demo is based on the 5Zone building, we extend that controller and override the action function. Feel free to modify this function to define your own behaviour.

[3]:
from sinergym.utils.controllers import RBC5Zone

class MyRuleBasedController(RBC5Zone):

    def act(self, observation: List[Any]) -> Sequence[Any]:
        """Select action based on outdoor air drybulb temperature and the time of day.

        Args:
            observation (List[Any]): Perceived observation.

        Returns:
            Sequence[Any]: Action chosen.
        """
        obs_dict = dict(zip(self.variables['observation'], observation))

        out_temp = obs_dict['Site Outdoor Air Drybulb Temperature(Environment)']

        day = int(obs_dict['day'])
        month = int(obs_dict['month'])
        hour = int(obs_dict['hour'])
        year = int(obs_dict['year'])

        summer_start_date = datetime(year, 6, 1)
        summer_final_date = datetime(year, 9, 30)

        current_dt = datetime(year, month, day)

        # Get season comfort range (summer setpoints during summer, winter otherwise)
        if summer_start_date <= current_dt <= summer_final_date:
            season_comfort_range = self.setpoints_summer
        else:
            season_comfort_range = self.setpoints_winter
        # Update setpoints
        in_temp = obs_dict['Zone Air Temperature(SPACE1-1)']

        current_heat_setpoint = obs_dict[
            'Zone Thermostat Heating Setpoint Temperature(SPACE1-1)']
        current_cool_setpoint = obs_dict[
            'Zone Thermostat Cooling Setpoint Temperature(SPACE1-1)']

        new_heat_setpoint = current_heat_setpoint
        new_cool_setpoint = current_cool_setpoint

        if in_temp < season_comfort_range[0]:
            new_heat_setpoint = current_heat_setpoint + 1
            new_cool_setpoint = current_cool_setpoint + 1
        elif in_temp > season_comfort_range[1]:
            new_cool_setpoint = current_cool_setpoint - 1
            new_heat_setpoint = current_heat_setpoint - 1

        action = (new_heat_setpoint, new_cool_setpoint)
        if current_dt.weekday() >= 5 or hour not in range(6, 22):
            # weekend or night
            action = (18.33, 23.33)

        return action
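
The core rule above can be isolated as a pure function, which makes it easy to sanity-check without launching an EnergyPlus simulation. Below is a minimal sketch of the same setpoint-adjustment logic; the function name `adjust_setpoints` and the example comfort band are our own illustrative choices, not part of Sinergym's API:

```python
from typing import Tuple

def adjust_setpoints(in_temp: float,
                     heat_sp: float,
                     cool_sp: float,
                     comfort_range: Tuple[float, float],
                     step: float = 1.0) -> Tuple[float, float]:
    """Shift both setpoints up when the zone is too cold, down when too hot."""
    low, high = comfort_range
    if in_temp < low:
        # Zone below comfort band: raise both setpoints
        return heat_sp + step, cool_sp + step
    if in_temp > high:
        # Zone above comfort band: lower both setpoints
        return heat_sp - step, cool_sp - step
    # Inside the band: keep current setpoints unchanged
    return heat_sp, cool_sp

# Example with a hypothetical (20.0, 23.5) comfort band:
adjust_setpoints(25.0, 20.0, 25.0, (20.0, 23.5))  # zone too hot -> (19.0, 24.0)
```

Because the function is pure, you can unit-test boundary cases (zone exactly at the band edges, very cold mornings, etc.) before plugging the rule back into a controller.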

Now that we have our controller ready, we can use it:

[4]:

# create rule-based controller
agent = MyRuleBasedController(env)

for i in range(1):
    obs = env.reset()
    rewards = []
    done = False
    current_month = 0
    while not done:
        action = agent.act(obs)
        obs, reward, done, info = env.step(action)
        rewards.append(reward)
        if info['month'] != current_month:  # display results every month
            current_month = info['month']
            print('Reward: ', sum(rewards), info)
    print(
        'Episode ', i,
        'Mean reward: ', np.mean(rewards),
        'Cumulative reward: ', sum(rewards))
[2022-08-24 09:21:11,676] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Creating new EnergyPlus simulation episode...
[2022-08-24 09:21:11,687] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:EnergyPlus working directory is in /workspaces/sinergym/examples/Eplus-env-demo-v1-res5/Eplus-env-sub_run1
Reward:  -0.3808358083250144 {'timestep': 1, 'time_elapsed': 900, 'year': 1991, 'month': 1, 'day': 1, 'hour': 0, 'total_power': 7616.716166500288, 'total_power_no_units': -0.7616716166500288, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.99998783039325], 'out_temperature': 1.8, 'action_': [21.0, 25.0]}
Reward:  -1394.9337411038548 {'timestep': 2976, 'time_elapsed': 2678400, 'year': 1991, 'month': 2, 'day': 1, 'hour': 0, 'total_power': 8783.948537124865, 'total_power_no_units': -0.8783948537124865, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.32998668433318], 'out_temperature': -7.0, 'action_': [20.33, 25.33]}
Reward:  -2650.5674622643073 {'timestep': 5664, 'time_elapsed': 5097600, 'year': 1991, 'month': 3, 'day': 1, 'hour': 0, 'total_power': 2828.55143464232, 'total_power_no_units': -0.282855143464232, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.32988764119707], 'out_temperature': 8.1, 'action_': [20.33, 25.33]}
Reward:  -3464.45346341394 {'timestep': 8640, 'time_elapsed': 7776000, 'year': 1991, 'month': 4, 'day': 1, 'hour': 0, 'total_power': 186.5934720667916, 'total_power_no_units': -0.018659347206679163, 'comfort_penalty': -1.2670384566951398, 'abs_comfort': 1.2670384566951398, 'temperatures': [18.73296154330486], 'out_temperature': 7.7, 'action_': [18.33, 23.33]}
Reward:  -4056.362905025186 {'timestep': 11520, 'time_elapsed': 10368000, 'year': 1991, 'month': 5, 'day': 1, 'hour': 0, 'total_power': 1049.348956006743, 'total_power_no_units': -0.1049348956006743, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.33021459066708], 'out_temperature': 13.0, 'action_': [20.33, 25.33]}
Reward:  -4575.388821263204 {'timestep': 14496, 'time_elapsed': 13046400, 'year': 1991, 'month': 6, 'day': 1, 'hour': 0, 'total_power': 602.6474404745892, 'total_power_no_units': -0.06026474404745892, 'comfort_penalty': -2.67021711767433, 'abs_comfort': 2.67021711767433, 'temperatures': [20.32978288232567], 'out_temperature': 18.4, 'action_': [20.33, 25.33]}
Reward:  -5959.068563513181 {'timestep': 17376, 'time_elapsed': 15638400, 'year': 1991, 'month': 7, 'day': 1, 'hour': 0, 'total_power': 215.8105427085715, 'total_power_no_units': -0.02158105427085715, 'comfort_penalty': -2.8710106251693617, 'abs_comfort': 2.8710106251693617, 'temperatures': [20.12898937483064], 'out_temperature': 17.7, 'action_': [18.33, 23.33]}
Reward:  -7457.679714954765 {'timestep': 20352, 'time_elapsed': 18316800, 'year': 1991, 'month': 8, 'day': 1, 'hour': 0, 'total_power': 9961.58714336648, 'total_power_no_units': -0.996158714336648, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [23.33021175763765], 'out_temperature': 20.6, 'action_': [23.33, 28.33]}
Reward:  -8981.302898129286 {'timestep': 23328, 'time_elapsed': 20995200, 'year': 1991, 'month': 9, 'day': 1, 'hour': 0, 'total_power': 1814.198506979717, 'total_power_no_units': -0.1814198506979717, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [23.32960549764289], 'out_temperature': 18.8, 'action_': [23.33, 28.33]}
Reward:  -10197.282478127578 {'timestep': 26208, 'time_elapsed': 23587200, 'year': 1991, 'month': 10, 'day': 1, 'hour': 0, 'total_power': 2130.682079753009, 'total_power_no_units': -0.2130682079753009, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [23.33031354561692], 'out_temperature': 13.3, 'action_': [23.33, 28.33]}
Reward:  -10685.5692882536 {'timestep': 29184, 'time_elapsed': 26265600, 'year': 1991, 'month': 11, 'day': 1, 'hour': 0, 'total_power': 944.0904158092028, 'total_power_no_units': -0.09440904158092028, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.33017991675662], 'out_temperature': 13.0, 'action_': [20.33, 25.33]}
Reward:  -11374.367509294058 {'timestep': 32064, 'time_elapsed': 28857600, 'year': 1991, 'month': 12, 'day': 1, 'hour': 0, 'total_power': 2788.852968187356, 'total_power_no_units': -0.27888529681873564, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.32995260044678], 'out_temperature': 5.1, 'action_': [20.33, 25.33]}
Reward:  -12774.026233082805 {'timestep': 35040, 'time_elapsed': 31536000, 'year': 1992, 'month': 1, 'day': 1, 'hour': 0, 'total_power': 10847.14414165407, 'total_power_no_units': -1.084714414165407, 'comfort_penalty': -0.0, 'abs_comfort': 0.0, 'temperatures': [20.33000003505644], 'out_temperature': -12.0, 'action_': [20.33, 25.33]}
Episode  0 Mean reward:  -0.3645555431815984 Cumulative reward:  -12774.026233082805

Always remember to close the environment:

[5]:
env.close()
[2022-08-24 09:21:20,536] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:EnergyPlus simulation closed successfully.
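
If you want `close()` to run even when the episode loop raises an exception, `contextlib.closing` from the standard library works with any object that has a `close()` method, including Gym environments. A minimal sketch with a stand-in object (`DummyEnv` is ours, used here only so the snippet runs without EnergyPlus):

```python
from contextlib import closing

class DummyEnv:
    """Stand-in with the same close() contract as a gym environment."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

env = DummyEnv()
with closing(env):
    pass  # run the episode loop here
# close() was called automatically when the block exited
print(env.closed)  # True
```

With a real environment the pattern is identical: `with closing(gym.make('Eplus-demo-v1')) as env: ...`.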

Note

For more information about our defined controllers and how to create a new one, please visit our Controller Documentation.