Gh pages custom #637

Closed
wants to merge 43 commits into from
Commits
7528147
Add method to get KPIs disaggregated.
javiarrobas Jan 11, 2024
050da8c
Interface to get_kpis_disaggregated at testcase.py
javiarrobas Jan 11, 2024
34eda99
Implement Rest API call.
javiarrobas Jan 11, 2024
8f7f31d
Add check for getting all core KPIs.
javiarrobas Jan 11, 2024
973c9da
Add check for getting all KPIs disaggregated.
javiarrobas Jan 11, 2024
c521868
Add reference results for new tests.
javiarrobas Jan 11, 2024
f032705
Calculate integral separately not to add up when computing by source.
javiarrobas Jan 11, 2024
08a4dfc
Normalize peak power only for peak_tot, not for peak_dict.
javiarrobas Jan 11, 2024
995d842
Update xxxx_dict references due to peak KPIs normalized only at xxxx_…
javiarrobas Jan 11, 2024
893ce5d
Describe new method in releasenotes.md.
javiarrobas Jan 11, 2024
622ae19
Add kpi_disaggregated to README.md.
javiarrobas Jan 11, 2024
f13818a
Update refs for numerical differences.
javiarrobas Jan 12, 2024
be4913b
Update get_html_IO script to print activate and new total file
EttoreZ Feb 21, 2024
c52c2f9
Update testcases documentation
EttoreZ Feb 21, 2024
cd2c03b
Updated release notes
EttoreZ Feb 21, 2024
e3316d7
Address review comments
EttoreZ Feb 22, 2024
50396f7
Update documentation
EttoreZ Feb 22, 2024
fd39c7a
Update documentation
EttoreZ Feb 22, 2024
8afd1f3
first implementation
HWalnum Feb 27, 2024
5780811
fixed get_results call
HWalnum Feb 27, 2024
ee92c06
fixed results query as some outputs are removed from output_names
HWalnum Feb 27, 2024
525a5e9
Update README and release notes
EttoreZ Feb 28, 2024
8b510d3
Edits to readme text
dhblum Mar 5, 2024
cf4a164
Merge pull request #624 from ibpsa/issue555_missingActivateDocumentation
dhblum Mar 6, 2024
3d4099d
created _get_test_results() to avoid duplicate code
HWalnum Mar 8, 2024
f50b7b4
updated releasenotes.md
HWalnum Mar 8, 2024
fd67865
Merge pull request #629 from HWalnum/issue626_storeResults
dhblum Mar 11, 2024
9987e2d
Make time as index for csv
dhblum Mar 11, 2024
74314ea
Use points instead of parameters
dhblum Mar 11, 2024
485301c
Doc formatting and edits
dhblum Mar 11, 2024
ed371c0
Revert back to forecastParameters without other points
dhblum Mar 13, 2024
2d70472
Merge branch 'master' into issue626_storeResults
dhblum Mar 13, 2024
edc0c3c
Update dict in python2,3 compatible way for unit tests to pass
dhblum Mar 13, 2024
4b81091
Update releasenotes.md [ci skip]
dhblum Mar 14, 2024
841c803
Merge pull request #632 from ibpsa/issue626_storeResults
dhblum Mar 14, 2024
7df1a1c
Merge branch 'master' into issue604_kpisDisaggregated
javiarrobas Mar 22, 2024
3ac058d
Add release note.
javiarrobas Mar 22, 2024
af18410
Run pre-commit
javiarrobas Mar 22, 2024
0d0bfee
Add space before returns.
javiarrobas Mar 22, 2024
5a0a545
Add space before end of docstring.
javiarrobas Mar 22, 2024
c5fc249
Be more specific in README.md
javiarrobas Mar 22, 2024
4629df4
Merge branch 'issue604_kpisDisaggregated' into gh-pages-custom
javiarrobas Mar 22, 2024
8374559
Update user guide with new API endpoint.
javiarrobas Mar 22, 2024
3 changes: 2 additions & 1 deletion README.md
@@ -68,14 +68,15 @@ Example RESTful interaction:

| Interaction | Request |
|-----------------------------------------------------------------------|-----------------------------------------------------------|
| Advance simulation with control input and receive measurements. | POST ``advance`` with optional json data "{<input_name>:<value>}" |
| Advance simulation with control input and receive measurements. | POST ``advance`` with optional arguments ``<input_name_u>:<value>``, and corresponding ``<input_name_activate>:<0 or 1>``, where 1 enables value overwrite and 0 disables (0 is default) |
| Initialize simulation to a start time using a warmup period in seconds. Also resets point data history and KPI calculations. | PUT ``initialize`` with required arguments ``start_time=<value>``, ``warmup_period=<value>``|
| Receive communication step in seconds. | GET ``step`` |
| Set communication step in seconds. | PUT ``step`` with required argument ``step=<value>`` |
| Receive sensor signal point names (y) and metadata. | GET ``measurements`` |
| Receive control signal point names (u) and metadata. | GET ``inputs`` |
| Receive test result data for the given point names between the start and final time in seconds. | PUT ``results`` with required arguments ``point_names=<list of strings>``, ``start_time=<value>``, ``final_time=<value>``|
| Receive test KPIs. | GET ``kpi`` |
| Receive test KPIs disaggregated into contributing components (e.g. each equipment or zone) ...| GET ``kpi_disaggregated`` |
| Receive test case name. | GET ``name`` |
| Receive boundary condition forecast from current communication step for the given point names for the horizon and at the interval in seconds. | PUT ``forecast`` with required arguments ``point_names=<list of strings>``, ``horizon=<value>``, ``interval=<value>``|
| Receive boundary condition forecast available point names and metadata. | GET ``forecast_points`` |
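
A minimal Python sketch of the two KPI requests in the table above, assuming a BOPTEST test case is already running at localhost:5000; the status/message/payload response wrapper matches the format used elsewhere in this PR, for example in data/get_html_IO.py below:

# Minimal sketch (not part of the PR): query the aggregated and the new
# disaggregated KPI endpoints of a locally running test case.
import requests

url = 'http://127.0.0.1:5000'

# Aggregated, normalized KPIs (existing endpoint)
kpis = requests.get('{0}/kpi'.format(url)).json()['payload']
print(kpis)

# Disaggregated, absolute KPIs (endpoint added in this PR);
# 'ener' maps each contributing element to its energy use in kWh.
kpis_dis = requests.get('{0}/kpi_disaggregated'.format(url)).json()['payload']
for element, value in kpis_dis['ener'].items():
    print('{0}: {1:.3f} kWh'.format(element, value))
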
29 changes: 22 additions & 7 deletions data/get_html_IO.py
@@ -7,10 +7,9 @@
2. Run BOPTEST test case on localhost:5000
3. Run this script

Outputs:
"inputs.txt": html code documenting the inputs
"measurements.txt": html code documenting the outputs

Output:
"inputs_measurements_forecasts.html" html code documenting inputs, outputs and
forecasts together
"""

# GENERAL PACKAGE IMPORT
@@ -40,24 +39,40 @@ def run():

    # GET TEST INFORMATION
    # --------------------
    # Create single I/O file
    # Inputs available
    inputs = requests.get('{0}/inputs'.format(url)).json()['payload']
    with open('inputs.txt', 'w') as f:
    with open('inputs_measurements_forecasts.html', 'w') as f:
        f.write('<h3>Model IO\'s</h3>\n')
        f.write('<h4>Inputs</h4>\n')
        f.write('The model inputs are:\n')
        f.write('<ul>\n')
        for i in sorted(inputs.keys()):
            if 'activate' not in i:
                f.write('<li>\n<code>{0}</code> [{1}] [min={2}, max={3}]: {4}\n</li>\n'.format(i,inputs[i]['Unit'],inputs[i]['Minimum'], inputs[i]['Maximum'], inputs[i]['Description']))
            else:
                f.write('<li>\n<code>{0}</code> [1] [min=0, max=1]: Activation signal to overwrite input {1} where 1 activates, 0 deactivates (default value)\n</li>\n'.format(i,i.replace('activate','')+'u'))
        f.write('</ul>\n')
    # Measurements available
    measurements = requests.get('{0}/measurements'.format(url)).json()['payload']
    with open('measurements.txt', 'w') as f:
    with open('inputs_measurements_forecasts.html', 'a') as f:
        f.write('<h4>Outputs</h4>\n')
        f.write('The model outputs are:\n')
        f.write('<ul>\n')
        for i in sorted(measurements.keys()):
            if 'activate' not in i:
                f.write('<li>\n<code>{0}</code> [{1}] [min={2}, max={3}]: {4}\n</li>\n'.format(i,measurements[i]['Unit'],measurements[i]['Minimum'], measurements[i]['Maximum'], measurements[i]['Description']))
        f.write('</ul>\n')
    # Forecasts available
    forecast_points = requests.get('{0}/forecast_points'.format(url)).json()['payload']
    with open('forecast_points.txt', 'w') as f:
    with open('inputs_measurements_forecasts.html', 'a') as f:
        f.write('<h4>Forecasts</h4>\n')
        f.write('The model forecasts are:\n')
        f.write('<ul>\n')
        for i in sorted(forecast_points.keys()):
            if 'activate' not in i:
                f.write('<li>\n<code>{0}</code> [{1}]: {2}\n</li>\n'.format(i,forecast_points[i]['Unit'],forecast_points[i]['Description']))
        f.write('</ul>\n')
    # --------------------

if __name__ == "__main__":
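
The activate branch of the loop above relies on the pairing between overwrite points: every <name>_activate switch corresponds to a <name>_u value input, which is how the script derives the name of the input being documented. A tiny sketch with a hypothetical point name:

# Sketch of the activate/overwrite pairing used in get_html_IO.py above.
# 'oveTSetHea_activate' is a hypothetical point name for illustration.
activate_name = 'oveTSetHea_activate'
u_name = activate_name.replace('activate', '') + 'u'
print(u_name)  # prints 'oveTSetHea_u'
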
24 changes: 24 additions & 0 deletions docs/user_guide/source/api.rst
@@ -290,6 +290,30 @@ GET /kpi
"time_rat":<value> // float, Computational time ratio in s/ss
}

GET /kpi_disaggregated
----------------------

- **Description:** Receive KPI values disaggregated into contributing components (e.g. each equipment or zone).
The returned results are in absolute values, that is, they are not normalized by floor area or by number of zones.
  They are calculated from the start time and do not include the warmup period.

- **Arguments:** None.

- **Returns:**

::

{
  "cost": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to HVAC energy cost in $ or Euro
  "emis": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to HVAC energy emissions in kgCO2e
  "ener": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to HVAC energy total usage in kWh
  "pele": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to HVAC at the overall peak electrical demand in kW
  "pgas": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to HVAC at the overall peak gas demand in kW
  "pdih": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to HVAC at the overall peak district heating demand in kW
  "idis": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to indoor air quality discomfort in ppmh
  "tdis": {<kpi_ele_name>:<kpi_ele_value>}, // dict, Contribution of each element to thermal discomfort in Kh
}

GET /submit
-----------

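
As a sanity check on the payload documented above, the sketch below sums the per-element contributions of each KPI; because the values are absolute, that is, not normalized by floor area or number of zones, each sum corresponds to the absolute total of that KPI before normalization. It assumes a test case running at localhost:5000:

# Minimal sketch: sum the disaggregated contributions of each KPI.
import requests

url = 'http://127.0.0.1:5000'
payload = requests.get('{0}/kpi_disaggregated'.format(url)).json()['payload']
for kpi_name, contributions in payload.items():
    total = sum(contributions.values())
    print('{0}: {1} elements, absolute total = {2:.3f}'.format(
        kpi_name, len(contributions), total))
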
69 changes: 50 additions & 19 deletions kpis/kpi_calculator.py
@@ -215,6 +215,40 @@ def get_core_kpis(self, price_scenario='Constant'):

        return ckpi

    def get_kpis_disaggregated(self, price_scenario='Constant'):
        '''Return the core KPIs of a test case disaggregated and
        with absolute values (not normalized by area or zone)
        to see the contributions of each element to each KPI.

        Parameters
        ----------
        price_scenario : str, optional
            Price scenario for cost kpi calculation.
            'Constant' or 'Dynamic' or 'HighlyDynamic'.
            Default is 'Constant'.

        Returns
        -------
        dkpi = dict
            Dictionary with the core KPIs disaggregated and
            with absolute values.

        '''

        _ = self.get_core_kpis(price_scenario=price_scenario)

        dkpi = OrderedDict()
        dkpi['tdis'] = self.tdis_dict
        dkpi['idis'] = self.idis_dict
        dkpi['ener'] = self.ener_dict
        dkpi['cost'] = self.cost_dict
        dkpi['emis'] = self.emis_dict
        dkpi['pele'] = self.pele_dict
        dkpi['pgas'] = self.pgas_dict
        dkpi['pdih'] = self.pdih_dict

        return dkpi

    def get_thermal_discomfort(self):
        '''The thermal discomfort is the integral of the deviation
        of the temperature with respect to the predefined comfort
@@ -333,11 +367,10 @@ def get_energy(self):
            if 'Power' in source:
                for signal in self.case.kpi_json[source]:
                    pow_data = np.array(self._get_data_from_last_index(signal,self.i_last_ener))
                    self.ener_dict[signal] += \
                        trapz(pow_data,
                              self._get_data_from_last_index('time',self.i_last_ener))*2.77778e-7 # Convert to kWh
                    self.ener_dict_by_source[source+'_'+signal] += \
                        self.ener_dict[signal]
                    integral = trapz(pow_data,
                                     self._get_data_from_last_index('time',self.i_last_ener))*2.77778e-7 # Convert to kWh
                    self.ener_dict[signal] += integral
                    self.ener_dict_by_source[source+'_'+signal] += integral
                    self.ener_tot = self.ener_tot + self.ener_dict[signal]/self.case._get_area() # Normalize total by floor area

        # Assign to case
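
The change above computes the trapezoidal integral once and adds that same increment to both ener_dict and ener_dict_by_source; previously ener_dict_by_source was incremented by the cumulative ener_dict[signal], which over-counts when the KPI calculation runs more than once. The analogous change appears in get_cost and get_emissions below. A small, self-contained sketch of the difference, with hypothetical signal and source names:

# Hypothetical names; illustrates the accumulation fix only.
increments = [1.0, 2.0, 3.0]  # kWh integrated on three successive KPI calls

# Old pattern: by-source dict receives the *cumulative* per-signal value.
ener_dict, ener_by_source = {'PHea_y': 0.0}, {'ElectricPower_PHea_y': 0.0}
for integral in increments:
    ener_dict['PHea_y'] += integral
    ener_by_source['ElectricPower_PHea_y'] += ener_dict['PHea_y']
print(ener_by_source['ElectricPower_PHea_y'])  # 10.0, over-counted

# New pattern: both dicts receive only the new increment.
ener_dict, ener_by_source = {'PHea_y': 0.0}, {'ElectricPower_PHea_y': 0.0}
for integral in increments:
    ener_dict['PHea_y'] += integral
    ener_by_source['ElectricPower_PHea_y'] += integral
print(ener_by_source['ElectricPower_PHea_y'])  # 6.0, matches ener_dict
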
@@ -382,10 +415,10 @@ def get_peak_electricity(self):
                    df_pow_data_all = pd.concat([df_pow_data_all, df_pow_data], axis=1)
                df_pow_data_all.index = pd.TimedeltaIndex(df_pow_data_all.index, unit='s')
                df_pow_data_all['total_demand'] = df_pow_data_all.sum(axis=1)
                df_pow_data_all = df_pow_data_all.resample('15T').mean()/self.case._get_area()/1000.
                df_pow_data_all = df_pow_data_all.resample('15T').mean()/1000.
                i = df_pow_data_all['total_demand'].idxmax()
                peak = df_pow_data_all.loc[i,'total_demand']
                self.pele_tot = peak
                self.pele_tot = peak/self.case._get_area()
                # Find contributions to peak by each signal
                for signal in self.case.kpi_json[source]:
                    self.pele_dict[signal] = df_pow_data_all.loc[i,signal]
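
Here the 15-minute-average demand is now kept in kW and only the reported total is divided by floor area, while the per-signal dictionary stays in absolute kW so that get_kpis_disaggregated can return unnormalized contributions; the same edit is repeated in get_peak_gas and get_peak_district_heating below. A self-contained pandas sketch of the pattern, with hypothetical signal names and floor area:

# Hypothetical data; mirrors the peak-calculation pattern above.
import numpy as np
import pandas as pd

area = 200.0                              # m2, hypothetical floor area
time = np.arange(0, 24 * 3600, 300)       # one day of 5-minute samples [s]
df = pd.DataFrame(
    {'PHeaPum_y': 3000.0 + 1000.0 * np.sin(2 * np.pi * time / 86400.0),  # W
     'PFan_y': 500.0 * np.ones(len(time))},                              # W
    index=pd.TimedeltaIndex(time, unit='s'))
df['total_demand'] = df.sum(axis=1)
df = df.resample('15T').mean() / 1000.    # 15-minute average, W -> kW
i = df['total_demand'].idxmax()           # time of the overall peak
pele_tot = df.loc[i, 'total_demand'] / area                      # normalized total
pele_dict = {s: df.loc[i, s] for s in ['PHeaPum_y', 'PFan_y']}   # absolute kW
print(pele_tot, pele_dict)
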
@@ -429,10 +462,10 @@ def get_peak_gas(self):
                    df_pow_data_all = pd.concat([df_pow_data_all, df_pow_data], axis=1)
                df_pow_data_all.index = pd.TimedeltaIndex(df_pow_data_all.index, unit='s')
                df_pow_data_all['total_demand'] = df_pow_data_all.sum(axis=1)
                df_pow_data_all = df_pow_data_all.resample('15T').mean()/self.case._get_area()/1000.
                df_pow_data_all = df_pow_data_all.resample('15T').mean()/1000.
                i = df_pow_data_all['total_demand'].idxmax()
                peak = df_pow_data_all.loc[i,'total_demand']
                self.pgas_tot = peak
                self.pgas_tot = peak/self.case._get_area()
                # Find contributions to peak by each signal
                for signal in self.case.kpi_json[source]:
                    self.pgas_dict[signal] = df_pow_data_all.loc[i,signal]
@@ -476,10 +509,10 @@ def get_peak_district_heating(self):
                    df_pow_data_all = pd.concat([df_pow_data_all, df_pow_data], axis=1)
                df_pow_data_all.index = pd.TimedeltaIndex(df_pow_data_all.index, unit='s')
                df_pow_data_all['total_demand'] = df_pow_data_all.sum(axis=1)
                df_pow_data_all = df_pow_data_all.resample('15T').mean()/self.case._get_area()/1000.
                df_pow_data_all = df_pow_data_all.resample('15T').mean()/1000.
                i = df_pow_data_all['total_demand'].idxmax()
                peak = df_pow_data_all.loc[i,'total_demand']
                self.pdih_tot = peak
                self.pdih_tot = peak/self.case._get_area()
                # Find contributions to peak by each signal
                for signal in self.case.kpi_json[source]:
                    self.pdih_dict[signal] = df_pow_data_all.loc[i,signal]
@@ -541,11 +574,10 @@ def get_cost(self, scenario='Constant'):
                # Calculate costs
                for signal in self.case.kpi_json[source]:
                    pow_data = np.array(self._get_data_from_last_index(signal,self.i_last_cost))
                    self.cost_dict[signal] += \
                        trapz(np.multiply(source_price_data,pow_data),
                    integral = trapz(np.multiply(source_price_data,pow_data),
                          self._get_data_from_last_index('time',self.i_last_cost))*factor
                    self.cost_dict_by_source[source+'_'+signal] += \
                        self.cost_dict[signal]
                    self.cost_dict[signal] += integral
                    self.cost_dict_by_source[source+'_'+signal] += integral
                    self.cost_tot = self.cost_tot + self.cost_dict[signal]/self.case._get_area() # Normalize total by floor area

        # Assign to case
@@ -585,11 +617,10 @@ def get_emissions(self):
                    ['Emissions'+source])
                for signal in self.case.kpi_json[source]:
                    pow_data = np.array(self._get_data_from_last_index(signal,self.i_last_emis))
                    self.emis_dict[signal] += \
                        trapz(np.multiply(source_emissions_data,pow_data),
                    integral = trapz(np.multiply(source_emissions_data,pow_data),
                          self._get_data_from_last_index('time',self.i_last_emis))*2.77778e-7 # Convert to kWh
                    self.emis_dict_by_source[source+'_'+signal] += \
                        self.emis_dict[signal]
                    self.emis_dict[signal] += integral
                    self.emis_dict_by_source[source+'_'+signal] += integral
                    self.emis_tot = self.emis_tot + self.emis_dict[signal]/self.case._get_area() # Normalize total by floor area

        # Update last integration index
4 changes: 3 additions & 1 deletion releasenotes.md
@@ -16,7 +16,9 @@ Released on xx/xx/xxxx.
- Correct typo in documentation for ``multizone_office_simple_air``, cooling setback temperature changed from 12 to 30. This is for [#605](https://github.com/ibpsa/project1-boptest/issues/605).
- Modify unit tests for test case scenarios to only simulate two days after warmup instead of the whole two-week scenario. This is for [#576](https://github.com/ibpsa/project1-boptest/issues/576).
- Fix unit tests for possible false passes in certain test cases. This is for [#620](https://github.com/ibpsa/project1-boptest/issues/620).

- Add ``activate`` control inputs to all test case documentation and update ``get_html_IO.py`` to print one file with all inputs, outputs, and forecasts. This is for [#555](https://github.com/ibpsa/project1-boptest/issues/555).
- Add storing of scenario result trajectories, kpis, and test information to simulation directory within test case docker container. This is for [#626](https://github.com/ibpsa/project1-boptest/issues/626).
- Implement method to get disaggregated KPIs with absolute values. This enables a more comprehensive analysis of which elements contribute to each KPI. This is for [#604](https://github.com/ibpsa/project1-boptest/issues/604).

## BOPTEST v0.5.0

8 changes: 8 additions & 0 deletions restapi.py
@@ -191,6 +191,13 @@ def get(self):
        status, message, payload = case.get_kpis()
        return construct(status, message, payload)

class KPI_Disaggregated(Resource):
    '''Interface to test case KPIs disaggregated and with absolute values.'''

    def get(self):
        '''GET request to receive KPIs disaggregated and with absolute values.'''
        status, message, payload = case.get_kpis_disaggregated()
        return construct(status, message, payload)

class Forecast(Resource):
    '''Interface to test case forecast data.'''
@@ -267,6 +274,7 @@ def post(self):
api.add_resource(Forecast_Points, '/forecast_points')
api.add_resource(Results, '/results')
api.add_resource(KPI, '/kpi')
api.add_resource(KPI_Disaggregated, '/kpi_disaggregated')
api.add_resource(Forecast, '/forecast')
api.add_resource(Scenario, '/scenario')
api.add_resource(Name, '/name')
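
For context, a minimal, self-contained Flask-RESTful sketch of the pattern used in restapi.py above: a Resource whose get() returns a status/message/payload structure, registered under /kpi_disaggregated. The case and construct objects are replaced here by hypothetical stand-ins:

# Stand-alone sketch only; not the BOPTEST service itself.
from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class KPI_Disaggregated(Resource):
    '''Toy stand-in for the resource added in this PR.'''

    def get(self):
        # restapi.py obtains this from case.get_kpis_disaggregated()
        payload = {'ener': {'PHea_y': 12.3}}  # hypothetical element and value
        return {'status': 200, 'message': 'OK', 'payload': payload}

api.add_resource(KPI_Disaggregated, '/kpi_disaggregated')

if __name__ == '__main__':
    app.run(port=5000)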