19. Default building control setting up an empty action interface
If you want to run a simulation with the default building control (as specified in the building file), you can use the EnergyPlus simulation engine directly. For example, from the workspace of the container, you would run the following:
$ energyplus -w sinergym/data/weather/USA_PA_Pittsburgh-Allegheny.County.AP.725205_TMY3.epw sinergym/data/buildings/5ZoneAutoDXVAV.epJSON
However, running the engine directly, without our framework, has some disadvantages. You only get the default EnergyPlus output, without the extra outputs Sinergym adds, such as the wrapper logger that records every interaction with the environment. The buildings define a default Site:Location and SizingPeriod:DesignDay, which Sinergym adjusts automatically to the specified weather; on your own, you would have to adapt them manually before using the simulator. Finally, the RunPeriod of each building is set to one episode for DRL, which means that, as the buildings stand, you can only simulate one year. So, you would also have to modify the RunPeriod manually in the building file before starting the simulation.
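For illustration, the snippet below is a minimal sketch of what such a manual RunPeriod edit might look like. It assumes a standard epJSON layout with a single RunPeriod object; the field names follow the EnergyPlus epJSON schema (check your EnergyPlus version), and the file paths are hypothetical:

import json

# Load the building definition (path is hypothetical)
with open('5ZoneAutoDXVAV.epJSON') as f:
    building = json.load(f)

# Grab the (assumed single) RunPeriod object, whatever its key is named
run_period = next(iter(building['RunPeriod'].values()))

# Extend the run to two years; multi-year runs need explicit years
run_period['begin_month'] = 1
run_period['begin_day_of_month'] = 1
run_period['end_month'] = 12
run_period['end_day_of_month'] = 31
run_period['begin_year'] = 1991
run_period['end_year'] = 1992

with open('5ZoneAutoDXVAV_2years.epJSON', 'w') as f:
    json.dump(building, f, indent=4)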
Therefore, our recommended approach is to set up an empty action interface in a Sinergym environment. For example:
[1]:
import gymnasium as gym
import numpy as np

import sinergym
from sinergym.utils.wrappers import LoggerWrapper

env = gym.make(
    'Eplus-office-hot-continuous-v1',
    actuators={},
    action_space=gym.spaces.Box(
        low=0,
        high=0,
        shape=(0,)))
env = LoggerWrapper(env)

for i in range(1):
    obs, info = env.reset()
    rewards = []
    terminated = False
    current_month = 0
    while not terminated:
        a = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(a)
        rewards.append(reward)
        if info['month'] != current_month:  # display results every month
            current_month = info['month']
            print('Reward: ', sum(rewards), info)
    print(
        'Episode ',
        i,
        'Mean reward: ',
        np.mean(rewards),
        'Cumulative reward: ',
        sum(rewards))
env.close()
#==============================================================================================#
[ENVIRONMENT] (INFO) : Creating Gymnasium environment... [office-hot-continuous-v1]
#==============================================================================================#
[MODELING] (INFO) : Experiment working directory created [/workspaces/sinergym/examples/Eplus-env-office-hot-continuous-v1-res1]
[MODELING] (INFO) : runperiod established: {'start_day': 1, 'start_month': 1, 'start_year': 1991, 'end_day': 31, 'end_month': 12, 'end_year': 1991, 'start_weekday': 6, 'n_steps_per_hour': 4}
[MODELING] (INFO) : Episode length (seconds): 31536000.0
[MODELING] (INFO) : timestep size (seconds): 900.0
[MODELING] (INFO) : timesteps per episode: 35040
[MODELING] (INFO) : Model Config is correct.
[REWARD] (INFO) : Reward function initialized.
[ENVIRONMENT] (INFO) : Environment office-hot-continuous-v1 created successfully.
[WRAPPER LoggerWrapper] (INFO) : Wrapper initialized.
#----------------------------------------------------------------------------------------------#
[ENVIRONMENT] (INFO) : Starting a new episode... [office-hot-continuous-v1] [Episode 1]
#----------------------------------------------------------------------------------------------#
[MODELING] (INFO) : Episode directory created [/workspaces/sinergym/examples/Eplus-env-office-hot-continuous-v1-res1/Eplus-env-sub_run1]
[MODELING] (INFO) : Weather file USA_AZ_Davis-Monthan.AFB.722745_TMY3.epw used.
[MODELING] (INFO) : Updated building model with whole Output:Variable available names
[MODELING] (INFO) : Updated building model with whole Output:Meter available names
[MODELING] (INFO) : Adapting weather to building model. [USA_AZ_Davis-Monthan.AFB.722745_TMY3.epw]
[ENVIRONMENT] (INFO) : Saving episode output path... [/workspaces/sinergym/examples/Eplus-env-office-hot-continuous-v1-res1/Eplus-env-sub_run1/output]
/usr/local/lib/python3.10/dist-packages/opyplus/weather_data/weather_data.py:493: FutureWarning: the 'line_terminator'' keyword is deprecated, use 'lineterminator' instead.
epw_content = self._headers_to_epw(use_datetimes=use_datetimes) + df.to_csv(
[SIMULATOR] (INFO) : Running EnergyPlus with args: ['-w', '/workspaces/sinergym/examples/Eplus-env-office-hot-continuous-v1-res1/Eplus-env-sub_run1/USA_AZ_Davis-Monthan.AFB.722745_TMY3.epw', '-d', '/workspaces/sinergym/examples/Eplus-env-office-hot-continuous-v1-res1/Eplus-env-sub_run1/output', '/workspaces/sinergym/examples/Eplus-env-office-hot-continuous-v1-res1/Eplus-env-sub_run1/ASHRAE901_OfficeMedium_STD2019_Denver.epJSON']
[ENVIRONMENT] (INFO) : Episode 1 started.
[SIMULATOR] (INFO) : handlers initialized.
[SIMULATOR] (INFO) : handlers are ready.
[SIMULATOR] (INFO) : System is ready.
[WRAPPER LoggerWrapper] (INFO) : Creating monitor.csv for current episode (episode 1) if logger is active
Reward: -1.040098962863255 {'time_elapsed(hours)': 0.5, 'month': 7, 'day': 21, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 2, 'reward': -1.040098962863255, 'energy_term': -1.018893380782864, 'comfort_term': -0.02120558208039114, 'reward_weight': 0.5, 'abs_energy': 20377.86761565728, 'abs_comfort': 0.04241116416078228, 'energy_values': [20377.86761565728], 'temp_values': [26.738852781557213, 26.70616176904293, 26.703710048030462, 27.042411164160782, 26.379228760971404, 26.53822100691485, 26.325717463001574, 26.541555584377814, 26.148026393976973, 26.566761270651252, 26.720729493260777, 26.745565263920668, 26.717924954819775, 26.749034869196144, 26.724106409891995, 26.746080113592953, 26.721180581946832, 26.75203949912126]}
Reward: -321.9497354183802 {'time_elapsed(hours)': 0.25, 'month': 12, 'day': 21, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 97, 'reward': -0.006402439024390249, 'energy_term': -0.006402439024390249, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 128.04878048780498, 'abs_comfort': 0.0, 'energy_values': [128.04878048780498], 'temp_values': [22.999634653629034, 22.999555056583162, 22.99666173171838, 22.903480020252864, 22.876186402701965, 22.820452116127708, 22.309153008391664, 22.27766692168502, 22.322752347845213, 22.29208712515913, 22.21973711769706, 22.18328668832165, 22.235045107678307, 22.19937480889174, 22.168621791819803, 22.130006355654462, 22.184828759025105, 22.147029855625828]}
Reward: -8449.532869819053 {'time_elapsed(hours)': 0.25, 'month': 1, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 1057, 'reward': -0.008417052432114897, 'energy_term': -0.008417052432114897, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 168.34104864229795, 'abs_comfort': 0.0, 'energy_values': [168.34104864229795], 'temp_values': [23.149345848460342, 23.150514461425416, 23.14812351670179, 22.858321421792976, 22.81770703019136, 22.75953682781952, 22.4517277581382, 22.42047961195113, 22.46107642729404, 22.430404117791642, 22.326602915153778, 22.288256827640698, 22.33763950224591, 22.299827804173795, 22.2540491619612, 22.21262374343245, 22.2656312266007, 22.22475557813916]}
Reward: -10750.45064163916 {'time_elapsed(hours)': 744.25, 'month': 2, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 5281, 'reward': -0.010066938284793056, 'energy_term': -0.010066938284793056, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 201.33876569586113, 'abs_comfort': 0.0, 'energy_values': [201.33876569586113], 'temp_values': [26.19407731358453, 25.893182764402184, 25.23319381087007, 25.352756796479834, 25.145014331659493, 23.044094470156978, 26.064466639774928, 24.451401745306942, 23.713382639436837, 24.81563768547454, 25.151745967733667, 24.439213625906792, 24.27639522806313, 24.763586483892364, 24.637541666563628, 23.915834805222858, 23.74732757969822, 24.256312010106484]}
Reward: -11548.32022823176 {'time_elapsed(hours)': 1416.25, 'month': 3, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 7969, 'reward': -0.010066938284793056, 'energy_term': -0.010066938284793056, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 201.33876569586113, 'abs_comfort': 0.0, 'energy_values': [201.33876569586113], 'temp_values': [26.086629771558385, 25.800723578113704, 25.122066701037642, 25.208877778477245, 25.058773967039258, 22.902553924481154, 25.640351119712147, 24.321212935563235, 23.309496148008126, 24.74718058330181, 25.13105904442731, 24.34132552077085, 24.092391290403306, 24.76336967540199, 24.604515386030812, 23.8080148135287, 23.548694042230082, 24.24260320229046]}
Reward: -13024.726695594163 {'time_elapsed(hours)': 2160.25, 'month': 4, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 10945, 'reward': -0.010059948897039251, 'energy_term': -0.010059948897039251, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 201.198977940785, 'abs_comfort': 0.0, 'energy_values': [201.198977940785], 'temp_values': [26.10453701684556, 25.70680120943889, 25.268166982664578, 25.171485556687838, 25.136702063330183, 23.36460897157091, 25.281406368027305, 24.746965024451924, 23.122492094230708, 24.949652376996074, 25.267861141693295, 24.869705323946068, 24.257056918609834, 25.339380228263305, 24.89127306209505, 24.524161569812392, 23.93625775693054, 24.951894527724924]}
Reward: -16892.403365280938 {'time_elapsed(hours)': 2880.25, 'month': 5, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 13825, 'reward': -0.010059948897039251, 'energy_term': -0.010059948897039251, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 201.198977940785, 'abs_comfort': 0.0, 'energy_values': [201.198977940785], 'temp_values': [26.37224105914655, 26.233862884129856, 25.9058518362416, 25.814482411819796, 25.81929272545216, 24.47561177379764, 25.613096200574414, 25.776871813136633, 24.876061291823856, 26.23259020873576, 25.7345497449363, 25.713184079083927, 25.423097775218647, 26.219914681982832, 25.501795840290757, 25.487341586511533, 25.17496552068692, 25.970852161596834]}
Reward: -21896.489124247968 {'time_elapsed(hours)': 3624.25, 'month': 6, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 16801, 'reward': -0.008430450234156376, 'energy_term': -0.008430450234156376, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 168.6090046831275, 'abs_comfort': 0.0, 'energy_values': [168.6090046831275], 'temp_values': [26.271287499064524, 26.103922374125307, 25.989522929526814, 25.906032387533777, 25.980175234933945, 25.19425558540028, 25.59396905606546, 26.4220851623168, 25.48894882371821, 26.999637868869247, 26.06664793651545, 26.338281331752377, 26.088962724586498, 26.961394089981173, 26.003705212742588, 26.266104055476056, 26.016204585390692, 26.881385909767747]}
Reward: -29221.960458223268 {'time_elapsed(hours)': 4344.25, 'month': 7, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 19681, 'reward': -1.6949311204218827, 'energy_term': -0.26742679043559126, 'comfort_term': -1.4275043299862915, 'reward_weight': 0.5, 'abs_energy': 5348.535808711825, 'abs_comfort': 2.855008659972583, 'energy_values': [5348.535808711825], 'temp_values': [25.690838559877328, 25.915884970270934, 26.71248902108889, 26.76516048261861, 27.274973060234515, 27.23471113677184, 26.187544582040186, 27.254721173565525, 26.135732451572082, 27.578972583602845, 26.711392832758165, 26.737880199154255, 26.710083408093293, 26.779567409386665, 27.224743717142747, 27.396200191451324, 27.211305724387437, 27.67938107281635]}
Reward: -35568.982500109734 {'time_elapsed(hours)': 5088.25, 'month': 8, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 22657, 'reward': -0.3672833996926315, 'energy_term': -0.010059948897039251, 'comfort_term': -0.35722345079559226, 'reward_weight': 0.5, 'abs_energy': 201.198977940785, 'abs_comfort': 0.7144469015911845, 'energy_values': [201.198977940785], 'temp_values': [26.30867148538614, 26.309459135039948, 26.21409138679209, 26.04286119781551, 26.107384318126506, 26.140368086263685, 26.61918932935772, 27.137783183036678, 26.060763866443537, 27.162618330118143, 26.812593090797378, 26.968941836819717, 26.629687662635828, 27.13979312148985, 26.840213153181647, 27.060250789892102, 26.629488521519967, 27.214001477054413]}
Reward: -41555.37202758854 {'time_elapsed(hours)': 5832.25, 'month': 9, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 25633, 'reward': -0.008430450234156376, 'energy_term': -0.008430450234156376, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 168.6090046831275, 'abs_comfort': 0.0, 'energy_values': [168.6090046831275], 'temp_values': [26.217442233499717, 26.19079547564029, 26.243057033393647, 25.90122774932239, 25.94047661764487, 25.817987145745196, 26.44306615365681, 26.55705870892625, 25.90298745924243, 26.549128667042552, 26.459794392100665, 26.49021644748176, 26.314940373349994, 26.54942547428998, 26.512114023279846, 26.522163031742796, 26.362385306000814, 26.591416269343217]}
Reward: -46726.14118324007 {'time_elapsed(hours)': 6552.25, 'month': 10, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 28513, 'reward': -0.06789851005825302, 'energy_term': -0.010059948897039251, 'comfort_term': -0.057838561161213775, 'reward_weight': 0.5, 'abs_energy': 201.198977940785, 'abs_comfort': 0.11567712232242755, 'energy_values': [201.198977940785], 'temp_values': [26.54443008629097, 26.408313327693552, 26.224862634103623, 26.117585185109778, 26.037801974817327, 25.507769966536845, 27.115677122322428, 26.86131650179226, 26.223518887721035, 26.75893242733751, 26.530586901729745, 26.45393393827288, 26.272802747471584, 26.486634787123645, 26.371940434897027, 26.302775628790542, 26.11515210984941, 26.32785043815016]}
Reward: -50168.979949119224 {'time_elapsed(hours)': 7296.25, 'month': 11, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 31489, 'reward': -0.11419542960070476, 'energy_term': -0.11419542960070476, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 2283.908592014095, 'abs_comfort': 0.0, 'energy_values': [2283.908592014095], 'temp_values': [25.605597453282652, 26.008645230812945, 25.35626077584623, 25.68554015361045, 25.424162800676545, 23.54394243021113, 26.668604907614746, 25.074325327970303, 24.009565086141368, 25.496495319039795, 25.843637917081118, 24.971422646618414, 24.769718084042537, 25.31616435897856, 25.31021959098016, 24.449767133034882, 24.250704417015097, 24.803384417176364]}
Reward: -52076.90037516009 {'time_elapsed(hours)': 8016.25, 'month': 12, 'day': 1, 'hour': 0, 'is_raining': False, 'action': array([], dtype=float32), 'timestep': 34369, 'reward': -0.008430450234156376, 'energy_term': -0.008430450234156376, 'comfort_term': -0.0, 'reward_weight': 0.5, 'abs_energy': 168.6090046831275, 'abs_comfort': 0.0, 'energy_values': [168.6090046831275], 'temp_values': [25.85779811067494, 25.47876613692316, 24.426144348275088, 25.139488733403255, 24.70031998681098, 22.068803440473825, 26.486356831060117, 24.29789277185311, 23.45700782913386, 24.266051343026817, 24.664023677829388, 23.73988367970229, 23.538095370223804, 23.828491940697145, 23.831731365763897, 22.90653200864582, 22.706033444412455, 23.006451254621872]}
Progress: |****************************************************************************************************| 100%
Episode 0 Mean reward: -1.4328161223963918 Cumulative reward: -53505.652458642195
[WRAPPER LoggerWrapper] (INFO) : End of episode, recording summary (progress.csv) if logger is active
[ENVIRONMENT] (INFO) : Environment closed. [office-hot-continuous-v1]
In this example, a default environment is loaded, but its action space and default action definition are replaced with empty ones; Sinergym takes care of making the necessary changes in the background. The random agent implemented then sends empty actions ([]).
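To see why the sampled actions are empty, note that a Box space with shape (0,) always yields a zero-length array:

import gymnasium as gym

# A zero-dimensional Box space: every sample is an empty array
space = gym.spaces.Box(low=0, high=0, shape=(0,))
print(space.sample())        # array([], dtype=float32)
print(space.sample().shape)  # (0,)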
The advantages are that you can combine weathers and buildings however you want and Sinergym will adapt everything automatically (so you avoid the disadvantages described above), you can run as many years as you want in a single experiment with our loggers, and you can choose the observation variables more conveniently, as sketched below. When you set an empty action interface, Sinergym preserves the default actuators that come with the building definition (these can be more or less sophisticated depending on how the building is defined in the epJSON file).
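As a sketch of that flexibility, the snippet below loads the same environment with a different weather file and a trimmed observation set. The keyword names (weather_files, variables) are assumed to match the environment constructor parameters of recent Sinergym versions and may differ in yours:

import gymnasium as gym

import sinergym

# Same empty action interface, overriding the weather file and the observed
# variables; 'variables' maps an observation name to an EnergyPlus
# (Output:Variable name, key) pair
env = gym.make(
    'Eplus-office-hot-continuous-v1',
    weather_files='USA_PA_Pittsburgh-Allegheny.County.AP.725205_TMY3.epw',
    variables={
        'outdoor_temperature': (
            'Site Outdoor Air Drybulb Temperature',
            'Environment')},
    actuators={},
    action_space=gym.spaces.Box(low=0, high=0, shape=(0,)))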