15. Basic example

Sinergym uses the standard Gymnasium API (the maintained successor to OpenAI Gym). Let's see how to create a basic control loop.

First, we need to import Sinergym and create an environment, in our case using ‘Eplus-demo-v1’:

[1]:
import gymnasium as gym
import numpy as np

import sinergym
env = gym.make('Eplus-demo-v1')
[2023-05-25 09:33:37,986] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating Building model ExternalInterface object if it is not present...
[2023-05-25 09:33:37,987] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating Building model Site:Location and SizingPeriod:DesignDay(s) to weather and ddy file...
[2023-05-25 09:33:37,989] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Updating building model OutPut:Variable and variables XML tree model for BVCTB connection.
[2023-05-25 09:33:37,989] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Setting up extra configuration in building model if exists...
[2023-05-25 09:33:37,989] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Setting up action definition in building model if exists...

At first glance, it may appear that Sinergym is imported but never used. However, importing Sinergym registers all of its environments, so ‘Eplus-demo-v1’ becomes available to gym.make() together with all the information contained in the building file and the configuration file.
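
We can verify this directly with Gymnasium's standard registry mapping (a quick sketch; the output comment reflects the expected result):

import gymnasium as gym
import sinergym

# Importing sinergym registers its environments in Gymnasium's registry,
# so the demo id is now available to gym.make()
print('Eplus-demo-v1' in gym.envs.registry)  # True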

After this simple definition, we are ready to iterate over episodes. For this simple example we are going to consider only one episode. In summary, the code we need is something like this:

[2]:
for i in range(1):  # a single episode
    obs, info = env.reset()  # start a new EnergyPlus simulation run
    rewards = []
    terminated = False
    current_month = 0
    while not terminated:
        a = env.action_space.sample()  # random action from the action space
        obs, reward, terminated, truncated, info = env.step(a)
        rewards.append(reward)
        if info['month'] != current_month:  # display results every month
            current_month = info['month']
            print('Reward: ', sum(rewards), info)
[2023-05-25 09:33:38,052] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:Creating new EnergyPlus simulation episode...
[2023-05-25 09:33:38,168] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:EnergyPlus working directory is in /workspaces/sinergym/examples/Eplus-env-demo-v1-res1/Eplus-env-sub_run1
/usr/local/lib/python3.10/dist-packages/opyplus/weather_data/weather_data.py:493: FutureWarning: the 'line_terminator'' keyword is deprecated, use 'lineterminator' instead.
  epw_content = self._headers_to_epw(use_datetimes=use_datetimes) + df.to_csv(
Reward:  -0.3490105016546199 {'timestep': 1, 'time_elapsed': 900, 'year': 1991, 'month': 1, 'day': 1, 'hour': 0, 'action': [15, 30], 'reward': -0.3490105016546199, 'reward_energy': -0.1640169602508194, 'reward_comfort': -0.5340040430584203, 'total_energy': 1640.169602508194, 'abs_comfort': 0.5340040430584203, 'temperatures': [19.46599595694158]}
Reward:  -1931.9344757031301 {'timestep': 2976, 'time_elapsed': 2678400, 'year': 1991, 'month': 2, 'day': 1, 'hour': 0, 'action': [19, 26], 'reward': -1.1620499503941164, 'reward_energy': -1.221344355363743, 'reward_comfort': -1.1027555454244897, 'total_energy': 12213.44355363743, 'abs_comfort': 1.1027555454244897, 'temperatures': [18.89724445457551]}
Reward:  -3891.7713096588504 {'timestep': 5664, 'time_elapsed': 5097600, 'year': 1991, 'month': 3, 'day': 1, 'hour': 0, 'action': [19, 26], 'reward': -0.05528226607644676, 'reward_energy': -0.06426320076971337, 'reward_comfort': -0.046301331383180155, 'total_energy': 642.6320076971336, 'abs_comfort': 0.046301331383180155, 'temperatures': [19.95369866861682]}
Reward:  -5188.949779733545 {'timestep': 8640, 'time_elapsed': 7776000, 'year': 1991, 'month': 4, 'day': 1, 'hour': 0, 'action': [18, 27], 'reward': -0.1570930549615421, 'reward_energy': -0.006803586826214769, 'reward_comfort': -0.3073825230968694, 'total_energy': 68.03586826214769, 'abs_comfort': 0.3073825230968694, 'temperatures': [19.69261747690313]}
Reward:  -6075.841012317875 {'timestep': 11520, 'time_elapsed': 10368000, 'year': 1991, 'month': 5, 'day': 1, 'hour': 0, 'action': [21, 21], 'reward': -0.248299019799475, 'reward_energy': -0.49659803959895, 'reward_comfort': -0.0, 'total_energy': 4965.9803959895, 'abs_comfort': 0.0, 'temperatures': [20.98141752891553]}
Reward:  -6911.640711162853 {'timestep': 14496, 'time_elapsed': 13046400, 'year': 1991, 'month': 6, 'day': 1, 'hour': 0, 'action': [15, 30], 'reward': -1.0783216797893724, 'reward_energy': -0.01329469346279363, 'reward_comfort': -2.1433486661159513, 'total_energy': 132.9469346279363, 'abs_comfort': 2.1433486661159513, 'temperatures': [20.85665133388405]}
Reward:  -9174.877781927416 {'timestep': 17376, 'time_elapsed': 15638400, 'year': 1991, 'month': 7, 'day': 1, 'hour': 0, 'action': [15, 30], 'reward': -1.0265839003008237, 'reward_energy': -0.00780572594377727, 'reward_comfort': -2.04536207465787, 'total_energy': 78.0572594377727, 'abs_comfort': 2.04536207465787, 'temperatures': [20.95463792534213]}
Reward:  -11558.331851551256 {'timestep': 20352, 'time_elapsed': 18316800, 'year': 1991, 'month': 8, 'day': 1, 'hour': 0, 'action': [17, 28], 'reward': -1.7120168877607262, 'reward_energy': -0.2013126060233338, 'reward_comfort': -3.2227211694981186, 'total_energy': 2013.126060233338, 'abs_comfort': 3.2227211694981186, 'temperatures': [19.77727883050188]}
Reward:  -13941.94050131651 {'timestep': 23328, 'time_elapsed': 20995200, 'year': 1991, 'month': 9, 'day': 1, 'hour': 0, 'action': [18, 27], 'reward': -1.0694712468917504, 'reward_energy': -0.01275428900255242, 'reward_comfort': -2.1261882047809486, 'total_energy': 127.5428900255242, 'abs_comfort': 2.1261882047809486, 'temperatures': [20.87381179521905]}
Reward:  -16249.196222768007 {'timestep': 26208, 'time_elapsed': 23587200, 'year': 1991, 'month': 10, 'day': 1, 'hour': 0, 'action': [22, 22], 'reward': -0.4853031266999031, 'reward_energy': -0.9706062533998062, 'reward_comfort': -0.0, 'total_energy': 9706.062533998062, 'abs_comfort': 0.0, 'temperatures': [21.95674140719972]}
Reward:  -17580.11195641384 {'timestep': 29184, 'time_elapsed': 26265600, 'year': 1991, 'month': 11, 'day': 1, 'hour': 0, 'action': [22, 22], 'reward': -0.5797100676312941, 'reward_energy': -1.1594201352625881, 'reward_comfort': -0.0, 'total_energy': 11594.20135262588, 'abs_comfort': 0.0, 'temperatures': [21.84900454881994]}
Reward:  -18742.72054037921 {'timestep': 32064, 'time_elapsed': 28857600, 'year': 1991, 'month': 12, 'day': 1, 'hour': 0, 'action': [19, 26], 'reward': -0.010290336399695095, 'reward_energy': -0.02058067279939019, 'reward_comfort': -0.0, 'total_energy': 205.8067279939019, 'abs_comfort': 0.0, 'temperatures': [20.01970452128446]}
Reward:  -20445.351142368778 {'timestep': 35040, 'time_elapsed': 31536000, 'year': 1992, 'month': 1, 'day': 1, 'hour': 0, 'action': [22, 22], 'reward': -1.1023057700571006, 'reward_energy': -2.2046115401142012, 'reward_comfort': -0.0, 'total_energy': 22046.11540114201, 'abs_comfort': 0.0, 'temperatures': [20.74344960427339]}
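
In the loop above, env.action_space.sample() draws a random action. Before replacing it with a real controller, it can be useful to inspect the spaces the environment exposes (standard Gymnasium attributes; a quick sketch, output not shown):

# Inspect the observation and action spaces exposed by the environment
print(env.observation_space)
print(env.action_space)
print(env.action_space.sample())  # one random action, like the ones used above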

And, as always, don’t forget to close the environment:

[3]:
env.close()
[2023-05-25 09:33:48,555] EPLUS_ENV_demo-v1_MainThread_ROOT INFO:EnergyPlus simulation closed successfully.
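
If an exception interrupts the loop, the underlying simulation may never be closed. A defensive pattern (plain Python, not specific to Sinergym) is to wrap the episode in try/finally:

env = gym.make('Eplus-demo-v1')
try:
    obs, info = env.reset()
    terminated = False
    while not terminated:
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
finally:
    env.close()  # release the simulator even if an error occurred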

Now we can see the final rewards:

[4]:
print(
    'Mean reward: ',
    np.mean(rewards),
    'Cumulative reward: ',
    sum(rewards))
Mean reward:  -0.5834860485835932 Cumulative reward:  -20445.351142368778
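
As a quick sanity check, the mean reward is simply the cumulative reward divided by the number of timesteps in the one-year run period (35040 steps of 15 minutes each, as the ‘timestep’ field in the output above confirms):

# 365 days * 24 hours * 4 timesteps/hour = 35040 timesteps per episode
print(len(rewards))                 # 35040
print(sum(rewards) / len(rewards))  # same value as np.mean(rewards)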

The list of environments registered in Sinergym is extensive, and they use buildings with varying particularities: continuous or discrete action spaces, noise applied to the weather, different run periods, timesteps, reward functions, etc. We will see them in the following notebooks.
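
In the meantime, the full list of registered ids can be pulled from Gymnasium's registry (a sketch; the exact ids vary between Sinergym versions):

import gymnasium as gym
import sinergym

# All Sinergym environment ids start with 'Eplus'
print([env_id for env_id in gym.envs.registry if env_id.startswith('Eplus')])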