diff --git a/documentation/reference_doc/4_2_meta.tex b/documentation/reference_doc/4_2_meta.tex index 9f02f27ee..4aaffa6d5 100644 --- a/documentation/reference_doc/4_2_meta.tex +++ b/documentation/reference_doc/4_2_meta.tex @@ -96,4 +96,7 @@ \subsection{Space Type Ratios} \subsection{Weather Data} ComStock can be run with two different types of weather data: typical meteorological year (TMY3) and actual meteorological year (AMY). AMY data is the data measured during a specific year, taken from weather stations such as those at airports. Because these data are from a particular calendar year, weather patterns that span large areas, such as nationwide heat waves, are captured in the data across multiple locations. Therefore, these weather patterns are captured in the outputs of ComStock. This is important for use cases where coordinated weather patterns influence loads, such as peak load impacts for bulk power grid planning. TMY3 data, in contrast, take the ``most typical'' weather for each calendar month from a 30-year historical record and stitch these months together to form a complete year. The advantage of this method is that the weather data is less influenced by an extremely hot or cold year. However, this approach does not capture wide-area weather patterns, as the month of data used varies from location to location. For a more in-depth discussion of AMY and TMY3 weather data, see \cite{eulp_final_report}. -For geographic granularity, ComStock currently uses one weather file for each county in the United States. For counties with no weather data available (generally sparsely populated rural areas), data from the nearest weather station in the same climate zone are used. See \citep{eulp_final_report} for a more in-depth discussion of the weather data sources, cleaning process, and assignment assumptions. \ No newline at end of file +For geographic granularity, ComStock currently uses one weather file for each county in the United States. 
For counties with no weather data available (generally sparsely populated rural areas), data from the nearest weather station in the same climate zone are used. See \citep{eulp_final_report} for a more in-depth discussion of the weather data sources, cleaning process, and assignment assumptions. + +\subsection{Soil Properties} +Soil thermal conductivity and undisturbed ground temperature are location-dependent properties that are required in the ComStock model by several geothermal heat pump upgrade measures. Therefore, these properties are part of the ComStock sampling workflow and are stored as additional properties in the building models, which can then be used by downstream measures. Soil thermal conductivity distributions by climate zone were derived from a dataset produced by the Southern Methodist University Geothermal Lab, and are shown in Figure \ref{fig:soil_conductivity} (\cite{smu_soil_conductivity}). The soil thermal conductivity values range from 0.5 to 2.6 W/m-K. Average undisturbed ground temperatures by climate zone were derived from a 2014 Oklahoma State University study and are shown in Table \ref{tab:undisturbed_ground_temp} (\cite{xing2014}). \ No newline at end of file diff --git a/documentation/reference_doc/4_7_plug_and_process.tex index 51a16eb5e..6cf875559 100644 --- a/documentation/reference_doc/4_7_plug_and_process.tex +++ b/documentation/reference_doc/4_7_plug_and_process.tex @@ -66,7 +66,7 @@ \subsection{Kitchen Equipment} ComStock uses published data to create representative probability distributions of commercial cooking equipment counts, by building type, for both gas and electric appliances. Additionally, the equipment distributions are scaled by area to represent the non-linear scaling suggested in the literature.
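As a hedged illustration of the sub-linear area scaling described above, the sketch below scales a sampled appliance count by a power law of floor area. The 0.5 exponent, reference area, and function name are placeholder assumptions chosen for illustration, not ComStock's actual implementation.

```python
# Hypothetical sketch of non-linear scaling of cooking equipment counts with
# floor area: counts grow slower than linearly with building size.
# The 0.5 exponent and reference area are placeholders, not ComStock values.
REFERENCE_AREA_M2 = 500.0  # assumed reference floor area for the base count

def scaled_equipment_count(base_count, floor_area_m2, exponent=0.5):
    """Scale a sampled appliance count by a sub-linear function of floor area."""
    ratio = floor_area_m2 / REFERENCE_AREA_M2
    # Round to a whole appliance and keep at least one unit in any kitchen.
    return max(1, round(base_count * ratio ** exponent))

# With a 0.5 exponent, a building 4x the reference area
# gets ~2x the appliances, not 4x.
```

Under these assumptions, `scaled_equipment_count(2, 2000.0)` yields 4 rather than the 8 that linear scaling would give.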
Although other building types likely include some degree of cooking equipment as well, such as larger offices (\cite{eia2012cbecs}), the current implementation of ComStock only includes cooking equipment in the previously-mentioned six building types plus quick service restaurants found in strip malls. -Commercial kitchens can contain electric or gas cooking equipment, or a mix of both. The prevalence of gas and electric fuel types for each equipment type and the rated powers used in ComStock are derived from a DOE study (\cite{goetzler_commercial_appliances}). These are shown in Table~\ref{tab:kitchen_prev_and_power}. +Commercial kitchens can contain electric or gas cooking equipment, or a mix of both. The prevalence of gas and electric fuel types for each equipment type used in ComStock is derived from a DOE study (\cite{goetzler_commercial_appliances}). ComStock requires rated input power values and fractions of radiant, latent, and lost heat for gas and electric kitchen equipment. These values are primarily derived from the ASHRAE Fundamentals Handbook (\cite{ashrae2017}) after comparisons with other kitchen equipment studies and commercially available products. More details about how these values were determined can be found in the End Use Savings Shapes documentation (\cite{nrel89130}). The assumptions used in ComStock for prevalence, rated input power, and fractions radiant, latent, and lost for gas and electric appliances are shown in Table~\ref{tab:kitchen_prev_and_power}.
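To make the heat gain fractions concrete, the sketch below splits an appliance's rated input power into radiant, latent, lost, and convective components, using the gas broiler values from the table (96,000 Btu/h input; 0.12 radiant, 0.10 latent, 0.68 lost). The function name and structure are illustrative, and the convective share is assumed to be the remainder after the other three fractions, which is how such gain definitions are commonly closed out.

```python
# Illustrative split of a kitchen appliance's rated input power into heat
# gain components. The convective fraction is assumed to be the remainder.
BTU_PER_HR_TO_W = 0.29307107  # unit conversion, Btu/h to W

def split_heat_gains(rated_btu_per_hr, f_radiant, f_latent, f_lost):
    """Return heat gain components in watts for one appliance."""
    f_convective = 1.0 - (f_radiant + f_latent + f_lost)
    total_w = rated_btu_per_hr * BTU_PER_HR_TO_W
    return {
        'radiant_w': total_w * f_radiant,
        'latent_w': total_w * f_latent,
        'lost_w': total_w * f_lost,
        'convective_w': total_w * f_convective,
    }

# Gas broiler row from the table: 96,000 Btu/h; 0.12 radiant,
# 0.10 latent, 0.68 lost, leaving 0.10 convective.
gains = split_heat_gains(96_000, 0.12, 0.10, 0.68)
```

The four components always sum back to the rated input power, so only three fractions need to be tabulated per appliance and fuel type.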
\input{tables/kitchen_prev_and_power} diff --git a/documentation/reference_doc/6_AppendixA.tex b/documentation/reference_doc/6_AppendixA.tex index c80e749dd..a4f44c2f0 100644 --- a/documentation/reference_doc/6_AppendixA.tex +++ b/documentation/reference_doc/6_AppendixA.tex @@ -87,3 +87,4 @@ \input{tables/kitchen_cook_counts} +\input{tables/undisturbed_ground_temp} \ No newline at end of file diff --git a/documentation/reference_doc/7_AppendixB.tex b/documentation/reference_doc/7_AppendixB.tex index 4b69b6905..f57ed9a06 100644 --- a/documentation/reference_doc/7_AppendixB.tex +++ b/documentation/reference_doc/7_AppendixB.tex @@ -460,4 +460,12 @@ \centering \includegraphics[width=1.0\textwidth]{figures/refrigeration_LTnew.png} \caption[Compressor performance for large, new, low temperature compressors]{Power and capacity values as a function of suction and discharge temperature for large, new, low-temperature compressors.} \label{fig:refrig_lt_new} -\end{figure} \ No newline at end of file +\end{figure} + +\begin{figure} [ht!] + \includegraphics[width=0.8\textwidth]{figures/soil_conductivity.png} + \centering + \caption[Soil thermal conductivity distributions by climate zone]{Soil thermal conductivity distributions by climate zone.} + \label{fig:soil_conductivity} +\end{figure} + \ No newline at end of file diff --git a/documentation/reference_doc/bibliography.bib b/documentation/reference_doc/bibliography.bib index 0ca7b09d8..378ced561 100644 --- a/documentation/reference_doc/bibliography.bib +++ b/documentation/reference_doc/bibliography.bib @@ -584,7 +584,7 @@ @misc{trane_foundation } @misc{carrier_economiser, - title = {Carrier Economi$er}, + title = {Carrier Economizer}, author = {{Carrier}}, year = 2023, } @@ -690,7 +690,7 @@ @misc{zip_to_util } @misc{tract_to_zip, - author = {{HUD PD&R}}, + author = {{HUD PD\&R}}, institution = {U.S. 
Department of Housing and Urban Development Office of Policy Development and Research}, title = {HUD USPS ZIP CODE CROSSWALK FILES}, year = 2023, @@ -714,4 +714,37 @@ @misc{atus2018 year = 2018, howpublished = {https://www.bls.gov/tus/}, note = {Accessed: 2023-11-27} +} + +@book{ashrae2017, + title = {ASHRAE Handbook - Fundamentals}, + publisher = {American Society of Heating, Refrigerating, and Air-Conditioning Engineers, Inc.}, + year = {2017}, + address = {Atlanta} +} + +@techreport{nrel89130, + author = {Marlena Praprost}, + title = {End-Use Savings Shapes Measure Documentation: Electric Cooking Equipment}, + institution = {National Renewable Energy Laboratory}, + year = {2024}, + type = {Technical Report}, + number = {89130}, + url = {https://www.nrel.gov/docs/fy24osti/89130.pdf} +} + +@online{smu_soil_conductivity, + author = {{Dedman College of Humanities and Sciences; Roy M Huffington Department of Earth Sciences}}, + title = {SMU Geothermal Lab | Data and Maps | Temperature Maps}, + year = {2023}, + url = {https://www.smu.edu/dedman/academics/departments/earth-sciences/research/geothermallab/datamaps/temperaturemaps}, + note = {Accessed: 15 December 2023} +} + +@phdthesis{xing2014, + author = {L. 
Xing}, + title = {Estimations of Undisturbed Ground Temperatures Using Numerical and Analytical Modeling}, + school = {Oklahoma State University}, + address = {Stillwater, Oklahoma}, + year = {2014} } \ No newline at end of file diff --git a/documentation/reference_doc/figures/soil_conductivity.png b/documentation/reference_doc/figures/soil_conductivity.png new file mode 100644 index 000000000..dac4f6b42 Binary files /dev/null and b/documentation/reference_doc/figures/soil_conductivity.png differ diff --git a/documentation/reference_doc/main.tex b/documentation/reference_doc/main.tex index 2d12c25b2..f4924efc8 100644 --- a/documentation/reference_doc/main.tex +++ b/documentation/reference_doc/main.tex @@ -8,6 +8,7 @@ \usepackage[section]{placeins} \usepackage{pdflscape} \usepackage{soul} +\usepackage{biblatex} \hypersetup{colorlinks=true} % ----------------------------------- @@ -23,6 +24,8 @@ \author{Amy LeBar} \author{Janghyun Kim} \author{Lauren Klun} +\author{Eric Ringold} +\author{Wenyi Kuang} \affil{National Renewable Energy Laboratory} \fancypagestyle{plain}{} diff --git a/documentation/reference_doc/tables/kitchen_prev_and_power.tex b/documentation/reference_doc/tables/kitchen_prev_and_power.tex index 4f4bdf37e..3cea7ef3c 100644 --- a/documentation/reference_doc/tables/kitchen_prev_and_power.tex +++ b/documentation/reference_doc/tables/kitchen_prev_and_power.tex @@ -1,16 +1,95 @@ -\begin{table}[h] -\small -\centering -\caption[Cooking Equipment Fuel Type Prevelance and Rater Power]{Cooking Equipment Fuel Type Prevelance and Rater Power} -\label{tab:kitchen_prev_and_power} -\begin{tabular}{|l|l|l|l|l|} -\hline -\textbf{Appliance} & \textbf{\% Gas} & \textbf{\% Electric} & \textbf{Rated Energy - Gas (BTU/hr)} & \textbf{Rated Energy - Electricity (kW)} \\ \hline -Broiler & 0.91 & 0.09 & 88000 & 11 \\ \hline -Fryers & 0.58 & 0.42 & 164000 & 22 \\ \hline -Griddles & 0.5 & 0.5 & 70000 & 12 \\ \hline -Ovens & 0.55 & 0.45 & 56000 & 15 \\ \hline -Ranges & 0.91 & 
0.09 & 179000 & 12 \\ \hline -Steamers & 0.33 & 0.67 & 210000 & 24 \\ \hline -\end{tabular} -\end{table} \ No newline at end of file +% Please add the following required packages to your document preamble: +% \usepackage{multirow} +% \usepackage{graphicx} +\begin{table}[] + \caption[Cooking Equipment Fuel Type Prevalence and Rated Power]{Cooking Equipment Fuel Type Prevalence and Rated Power} + \label{tab:kitchen_prev_and_power} + \resizebox{\columnwidth}{!}{% + \begin{tabular}{|l|ll|ll|ll|ll|ll|} + \hline + \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Appliance}}} & + \multicolumn{2}{c|}{\textbf{Fuel Prevalence Fraction}} & + \multicolumn{2}{c|}{\textbf{Rated Power}} & + \multicolumn{2}{c|}{\textbf{Fraction Radiant}} & + \multicolumn{2}{c|}{\textbf{Fraction Latent}} & + \multicolumn{2}{c|}{\textbf{Fraction Lost}} \\ \cline{2-11} + \multicolumn{1}{|c|}{} & + \multicolumn{1}{l|}{\textbf{Gas}} & + \textbf{Electric} & + \multicolumn{1}{l|}{\textbf{Gas (Btu/h)}} & + \textbf{Electric (kW)} & + \multicolumn{1}{l|}{\textbf{Gas}} & + \textbf{Electric} & + \multicolumn{1}{l|}{\textbf{Gas}} & + \textbf{Electric} & + \multicolumn{1}{l|}{\textbf{Gas}} & + \textbf{Electric} \\ \hline + Broiler & + \multicolumn{1}{l|}{0.91} & + 0.09 & + \multicolumn{1}{l|}{96,000} & + 10.8 & + \multicolumn{1}{l|}{0.12} & + 0.35 & + \multicolumn{1}{l|}{0.1} & + 0.1 & + \multicolumn{1}{l|}{0.68} & + 0.45 \\ \hline + Griddle & + \multicolumn{1}{l|}{0.58} & + 0.42 & + \multicolumn{1}{l|}{90,000} & + 17.1 & + \multicolumn{1}{l|}{0.18} & + 0.39 & + \multicolumn{1}{l|}{0.1} & + 0.1 & + \multicolumn{1}{l|}{0.62} & + 0.41 \\ \hline + Fryer & + \multicolumn{1}{l|}{0.5} & + 0.5 & + \multicolumn{1}{l|}{80,000} & + 14 & + \multicolumn{1}{l|}{0.23} & + 0.36 & + \multicolumn{1}{l|}{0.1} & + 0.1 & + \multicolumn{1}{l|}{0.57} & + 0.44 \\ \hline + Oven & + \multicolumn{1}{l|}{0.55} & + 0.45 & + \multicolumn{1}{l|}{44,000} & + 12.1 & + \multicolumn{1}{l|}{0.08} & + 0.22 & + \multicolumn{1}{l|}{0.1} & + 0.1 & + 
\multicolumn{1}{l|}{0.72} & + 0.58 \\ \hline + Range & + \multicolumn{1}{l|}{0.91} & + 0.09 & + \multicolumn{1}{l|}{145,000} & + 21 & + \multicolumn{1}{l|}{0.11} & + 0.1 & + \multicolumn{1}{l|}{0.1} & + 0.1 & + \multicolumn{1}{l|}{0.69} & + 0.7 \\ \hline + Steamer & + \multicolumn{1}{l|}{0.33} & + 0.67 & + \multicolumn{1}{l|}{200,000} & + 27 & + \multicolumn{1}{l|}{0.1} & + 0.1 & + \multicolumn{1}{l|}{0.1} & + 0.1 & + \multicolumn{1}{l|}{0.7} & + 0.7 \\ \hline + \end{tabular}% + } + \end{table} \ No newline at end of file diff --git a/documentation/reference_doc/tables/undisturbed_ground_temp.tex b/documentation/reference_doc/tables/undisturbed_ground_temp.tex new file mode 100644 index 000000000..e0d57c182 --- /dev/null +++ b/documentation/reference_doc/tables/undisturbed_ground_temp.tex @@ -0,0 +1,28 @@ +% Please add the following required packages to your document preamble: +% \usepackage{graphicx} +\begin{table}[] + \centering + \caption[Average Undisturbed Ground Temperature by Climate Zone]{Average Undisturbed Ground Temperature by Climate Zone} + \label{tab:undisturbed_ground_temp} + \begin{tabular}{|l|l|} + \hline + 2012 IECC Climate zone & Annual average undisturbed ground temperature (C) \\ \hline + 1A & 25.9 \\ \hline + 2A & 20.9 \\ \hline + 2B & 25 \\ \hline + 3A & 17.9 \\ \hline + 3B & 19.7 \\ \hline + 3C & 17 \\ \hline + 4A & 14.7 \\ \hline + 4B & 16.3 \\ \hline + 4C & 13.3 \\ \hline + 5A & 11.5 \\ \hline + 5B & 12.9 \\ \hline + 6A & 9 \\ \hline + 6B & 9.3 \\ \hline + 7A & 7 \\ \hline + 7AK & 5.4 \\ \hline + 7B & 6.5 \\ \hline + 8AK & 2.3 \\ \hline + \end{tabular}% +\end{table} \ No newline at end of file diff --git a/postprocessing/compare_comstock_to_cbecs.py.template b/postprocessing/compare_comstock_to_cbecs.py.template index 4a98f42df..8446ef443 100644 --- a/postprocessing/compare_comstock_to_cbecs.py.template +++ b/postprocessing/compare_comstock_to_cbecs.py.template @@ -32,12 +32,6 @@ def main(): stock_estimation_version='2024R2', # Only 
updated when a new stock estimate is published truth_data_version='v01' # Typically don't change this ) - - # Scale ComStock run to CBECS 2018 AND remove non-ComStock buildings from CBECS - comstock.add_weights_aportioned_by_stock_estimate(apportionment=stock_estimate) - comstock.create_national_aggregation() - comstock.create_geospatially_resolved_aggregations(comstock.STATE_ID, pretty_geo_col_name='state_id') - comstock.create_geospatially_resolved_aggregations(comstock.COUNTY_ID, pretty_geo_col_name='county_id') # CBECS cbecs = cspp.CBECS( @@ -46,19 +40,26 @@ def main(): color_hex='#009E73', # Color used to represent CBECS in plots reload_from_csv=False # True if CSV already made and want faster reload times ) - - # TODO Update past here including ensuring we can still apply CBECS weights on top of previous weights. - + + # Scale ComStock runs to the 'truth data' from StockE V3 estimates using bucket-based apportionment + comstock.add_weights_aportioned_by_stock_estimate(apportionment=stock_estimate) # Scale ComStock run to CBECS 2018 AND remove non-ComStock buildings from CBECS comstock.add_national_scaling_weights(cbecs, remove_non_comstock_bldg_types_from_cbecs=True) - comstock.calculate_weighted_columnal_values() - comstock.export_to_csv_wide() + # TODO This needs to be rewritten with safe column names, lazyframe usage, etc. 
+ #comstock.calculate_weighted_columnal_values() + + # Uncomment whichever to write results to disk: + comstock.create_national_aggregation() + # comstock.create_geospatially_resolved_aggregations(comstock.STATE_ID, pretty_geo_col_name='state_id') + # comstock.create_geospatially_resolved_aggregations(comstock.COUNTY_ID, pretty_geo_col_name='county_id') # Make a comparison by passing in a list of CBECs and ComStock runs to compare # upgrade_id can be 'All' or the upgrade number + comstock.create_plotting_lazyframe() comp = cspp.ComStockToCBECSComparison(cbecs_list=[cbecs], comstock_list=[comstock], upgrade_id='All',make_comparison_plots=True) comp.export_to_csv_wide() + # Code to execute the script if __name__ == "__main__": diff --git a/postprocessing/compare_runs.py.template b/postprocessing/compare_runs.py.template index e90cb22f6..c1a92d13d 100644 --- a/postprocessing/compare_runs.py.template +++ b/postprocessing/compare_runs.py.template @@ -12,14 +12,14 @@ logger = logging.getLogger(__name__) def main(): # First ComStock run comstock_a = cspp.ComStock( - s3_base_dir='eulp/comstock_fy22', # If run not on S3, download results_up**.parquet manually - comstock_run_name='com_v15_cooking', # Name of the run on S3 - comstock_run_version='v15', # Use whatever you want to see in plot and folder names + s3_base_dir='eulp/euss_com', # If run not on S3, download results_up**.parquet manually + comstock_run_name='sampling_lighting_11079_1', # Name of the run on S3 + comstock_run_version='sampling_lighting_11079_1', # Use whatever you want to see in plot and folder names comstock_year=2018, # Typically don't change this athena_table_name=None, # Typically don't change this truth_data_version='v01', # Typically don't change this buildstock_csv_name='buildstock.csv', # Download buildstock.csv manually - acceptable_failure_percentage=0.05, # Can increase this when testing and high failure are OK + acceptable_failure_percentage=0.25, # Can increase this when testing and high 
failure are OK drop_failed_runs=True, # False if you want to evaluate which runs failed in raw output data color_hex='#0072B2', # Color used to represent this run in plots skip_missing_columns=True, # False if you want to ensure you have all data specified for export @@ -29,14 +29,14 @@ def main(): # Second ComStock run comstock_b = cspp.ComStock( - s3_base_dir='eulp/comstock_fy22', # If run not on S3, download results_up**.parquet manually - comstock_run_name='com_v16_windows_lighting', # Name of the run on S3 - comstock_run_version='v16', # Use whatever you want to see in plot and folder names + s3_base_dir='eulp/euss_com', # If run not on S3, download results_up**.parquet manually + comstock_run_name='cycle_4_sampling_test_rand_985932_20240321', # Name of the run on S3 + comstock_run_version='new_sampling_test', # Use whatever you want to see in plot and folder names comstock_year=2018, # Typically don't change this - athena_table_name=None, # Typically don't change this + athena_table_name='rand_985932_20240321', # Typically same as comstock_run_name or None truth_data_version='v01', # Typically don't change this - buildstock_csv_name='buildstock.csv', # Download buildstock.csv manually - acceptable_failure_percentage=0.05, # Can increase this when testing and high failure are OK + buildstock_csv_name='rand_985932_sampling_buildstock.csv', # Download buildstock.csv manually + acceptable_failure_percentage=0.9, # Can increase this when testing and high failure are OK drop_failed_runs=True, # False if you want to evaluate which runs failed in raw output data color_hex='#56B4E9', # Color used to represent this run in plots skip_missing_columns=True, # False if you want to ensure you have all data specified for export @@ -44,6 +44,12 @@ def main(): include_upgrades=False # False if not looking at upgrades ) + # Stock Estimation for Apportionment: + stock_estimate = cspp.Apportion( + stock_estimation_version='2024R2', # Only updated when a new stock estimate is 
published + truth_data_version='v01' # Typically don't change this + ) + # CBECS cbecs = cspp.CBECS( cbecs_year=2018, # 2012 and 2018 currently available @@ -52,19 +58,19 @@ def main(): reload_from_csv=False # True if CSV already made and want faster reload times ) - # Scale both ComStock runs to CBECS 2018 AND remove non-ComStock buildings from CBECS + # First scale ComStock runs to the 'truth data' from StockE V3 estimates using bucket-based apportionment + # Then scale both ComStock runs to CBECS 2018 AND remove non-ComStock buildings from CBECS # This is how weights in the models are set to represent national energy consumption + comstock_a.add_weights_aportioned_by_stock_estimate(apportionment=stock_estimate) comstock_a.add_national_scaling_weights(cbecs, remove_non_comstock_bldg_types_from_cbecs=True) + comstock_b.add_weights_aportioned_by_stock_estimate(apportionment=stock_estimate) comstock_b.add_national_scaling_weights(cbecs, remove_non_comstock_bldg_types_from_cbecs=True) - # Uncomment this to correct gas consumption for a ComStock run to match CBECS - # Don't typically want to do this - # comstock_a.correct_comstock_gas_to_match_cbecs(cbecs) - # Export CBECS and ComStock data to wide and long formats for Tableau and to skip processing later cbecs.export_to_csv_wide() # May comment this out if CSV output isn't needed - comstock_a.export_to_csv_wide() # May comment this out if CSV output isn't needed - comstock_b.export_to_csv_wide() # May comment this out if CSV output isn't needed + # comstock_a.create_national_aggregation() # May comment this out if CSV output isn't needed + # comstock_b.create_national_aggregation() # May comment this out if CSV output isn't needed + # TODO This (long CSV export) is not yet re-implemented # comstock_a.export_to_csv_long() # Long format useful for stacking end uses and fuels # comstock_b.export_to_csv_long() # Long format useful for stacking end uses and fuels @@ -73,7 +79,7 @@ def main(): cbecs_list=[cbecs], 
comstock_list = [comstock_a, comstock_b], make_comparison_plots=True - ) + ) # Export the comparison data to wide format for Tableau comparison.export_to_csv_wide() diff --git a/postprocessing/compare_upgrades.py.template b/postprocessing/compare_upgrades.py.template index 4f724032f..888bce4e8 100644 --- a/postprocessing/compare_upgrades.py.template +++ b/postprocessing/compare_upgrades.py.template @@ -1,63 +1,73 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -import logging - -import comstockpostproc as cspp - - -logging.basicConfig(level='INFO') # Use DEBUG, INFO, or WARNING -logger = logging.getLogger(__name__) - -def main(): - # ComStock run - comstock = cspp.ComStock( - s3_base_dir='eulp/euss_com', # If run not on S3, download results_up**.parquet manually - comstock_run_name='hprtu_stdperf_fan_test_10k', # Name of the run on S3 - comstock_run_version='hprtu_stdperf_fan_test_10k', # Use whatever you want to see in plot and folder names - comstock_year=2018, # Typically don't change this - athena_table_name=None, # Typically don't change this - truth_data_version='v01', # Typically don't change this - buildstock_csv_name='buildstock.csv', # Download buildstock.csv manually - acceptable_failure_percentage=0.025, # Can increase this when testing and high failure are OK - drop_failed_runs=True, # False if you want to evaluate which runs failed in raw output data - color_hex='#0072B2', # Color used to represent this run in plots - skip_missing_columns=True, # False if you want to ensure you have all data specified for export - reload_from_csv=False, # True if CSV already made and want faster reload times - include_upgrades=True, # False if not looking at upgrades - upgrade_ids_to_skip=[], # Use [1, 3] etc. to exclude certain upgrades - make_timeseries_plots=True, - states={ - #'MN': 'Minnesota', # specify state to use for timeseries plots in dictionary format. State ID must correspond correctly. 
- 'MA':'Massachusetts', - 'OR': 'Oregon', - 'LA': 'Louisiana', - #'AZ': 'Arizona', - #'TN': 'Tennessee' - }, - upgrade_ids_for_comparison={} # Use {'':[0,1,2]}; add as many upgrade IDs as needed, but plots look strange over 5 - ) - - # CBECS - cbecs = cspp.CBECS( - cbecs_year=2018, # 2012 and 2018 currently available - truth_data_version='v01', # Typically don't change this - color_hex='#009E73', # Color used to represent CBECS in plots - reload_from_csv=False # True if CSV already made and want faster reload times - ) - - # Scale ComStock run to CBECS 2018 AND remove non-ComStock buildings from CBECS - # This is how weights in the models are set to represent national energy consumption - comstock.add_national_scaling_weights(cbecs, remove_non_comstock_bldg_types_from_cbecs=True) - - # Export CBECS and ComStock data to wide and long formats for Tableau and to skip processing later - cbecs.export_to_csv_wide() # May comment this out after run once - comstock.export_to_csv_wide() # May comment this out after run once - # comstock.export_to_csv_long() # Long format useful for stacking end uses and fuels - - # Create measure run comparisons; only use if run has measures - comparison = cspp.ComStockMeasureComparison(comstock, states=comstock.states, make_comparison_plots = comstock.make_comparison_plots, make_timeseries_plots = comstock.make_timeseries_plots) - -# Code to execute the script -if __name__=="__main__": - main() +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- + +import logging + +import comstockpostproc as cspp + + +logging.basicConfig(level='INFO') # Use DEBUG, INFO, or WARNING +logger = logging.getLogger(__name__) + +def main(): + # ComStock run + comstock = cspp.ComStock( + s3_base_dir='eulp/euss_com', # If run not on S3, download results_up**.parquet manually + comstock_run_name='sampling_lighting_11079_1', # Name of the run on S3 + comstock_run_version='sampling_lighting_11079_1', # Use whatever you want to see in plot and folder names + 
comstock_year=2018, # Typically don't change this + athena_table_name=None, # Typically don't change this + truth_data_version='v01', # Typically don't change this + buildstock_csv_name='buildstock.csv', # Download buildstock.csv manually + acceptable_failure_percentage=0.25, # Can increase this when testing and high failure are OK + drop_failed_runs=True, # False if you want to evaluate which runs failed in raw output data + color_hex='#0072B2', # Color used to represent this run in plots + skip_missing_columns=True, # False if you want to ensure you have all data specified for export + reload_from_csv=False, # True if CSV already made and want faster reload times + include_upgrades=True, # False if not looking at upgrades + upgrade_ids_to_skip=[], # Use [1, 3] etc. to exclude certain upgrades + make_timeseries_plots=False, + states={ + #'MN': 'Minnesota', # specify state to use for timeseries plots in dictionary format. State ID must correspond correctly. + 'MA':'Massachusetts', + #'OR': 'Oregon', + #'LA': 'Louisiana', + #'AZ': 'Arizona', + #'TN': 'Tennessee' + }, + upgrade_ids_for_comparison={} # Use {'':[0,1,2]}; add as many upgrade IDs as needed, but plots look strange over 5 + ) + + # Stock Estimation for Apportionment: + stock_estimate = cspp.Apportion( + stock_estimation_version='2024R2', # Only updated when a new stock estimate is published + truth_data_version='v01' # Typically don't change this + ) + + # CBECS + cbecs = cspp.CBECS( + cbecs_year=2018, # 2012 and 2018 currently available + truth_data_version='v01', # Typically don't change this + color_hex='#009E73', # Color used to represent CBECS in plots + reload_from_csv=False # True if CSV already made and want faster reload times + ) + + # Scale ComStock runs to the 'truth data' from StockE V3 estimates using bucket-based apportionment + comstock.add_weights_aportioned_by_stock_estimate(apportionment=stock_estimate) + # Scale ComStock run to CBECS 2018 AND remove non-ComStock buildings from CBECS + 
comstock.add_national_scaling_weights(cbecs, remove_non_comstock_bldg_types_from_cbecs=True) + + # Export CBECS and ComStock data to wide and long formats for Tableau and to skip processing later + # cbecs.export_to_csv_wide() # May comment this out after run once + # comstock.create_national_aggregation() + # comstock.create_geospatially_resolved_aggregations(comstock.STATE_ID, pretty_geo_col_name='state_id') + # comstock.create_geospatially_resolved_aggregations(comstock.COUNTY_ID, pretty_geo_col_name='county_id') + # TODO Long CSV export is not working as expected anymore + # comstock.export_to_csv_long() # Long format useful for stacking end uses and fuels + + # Create measure run comparisons; only use if run has measures + comparison = cspp.ComStockMeasureComparison(comstock, states=comstock.states, make_comparison_plots = comstock.make_comparison_plots, make_timeseries_plots = comstock.make_timeseries_plots) + +# Code to execute the script +if __name__=="__main__": + main() diff --git a/postprocessing/comstockpostproc/cbecs.py index de4005c93..7e6561a17 100644 --- a/postprocessing/comstockpostproc/cbecs.py +++ b/postprocessing/comstockpostproc/cbecs.py @@ -483,4 +483,9 @@ def export_to_csv_wide(self): file_name = f'CBECS wide.csv' file_path = os.path.join(self.output_dir, file_name) - self.data.to_csv(file_path, index=False) + try: + self.data.sink_csv(file_path) + except pl.exceptions.InvalidOperationError: + logger.warning('sink_csv not supported for metadata write in current polars version') + logger.warning('Falling back to .collect().write_csv()') + self.data.collect().write_csv(file_path) diff --git a/postprocessing/comstockpostproc/comstock.py index e6d0dd6ec..e963916d4 100644 --- a/postprocessing/comstockpostproc/comstock.py +++ b/postprocessing/comstockpostproc/comstock.py @@ -8,6 +8,7 @@ import glob import json import logging +import botocore.exceptions import numpy
as np import pandas as pd import polars as pl @@ -99,6 +100,7 @@ def __init__(self, s3_base_dir, comstock_run_name, comstock_run_version, comstoc self.rename_upgrades_file_name = 'rename_upgrades.json' self.athena_table_name = athena_table_name self.data = None + self.plotting_data = None self.monthly_data = None self.monthly_data_gap = None self.ami_timeseries_data = None @@ -116,7 +118,11 @@ def __init__(self, s3_base_dir, comstock_run_name, comstock_run_version, comstoc self.unweighted_weighted_map = {} self.dropping_columns = [] self.cached_parquet = [] # List of parquet files to reload and export + # TODO our current credential setup isn't playing well with this approach but does work with the s3 ServiceResource + # We are currently unable to call HeadObject for automatically uploaded data + # Consider migrating all usage to the s3 ServiceResource instead. self.s3_client = boto3.client('s3', config=botocore.client.Config(max_pool_connections=50)) + self.s3_resource = boto3.resource('s3') if self.athena_table_name is not None: self.athena_client = BuildStockQuery(workgroup='eulp', db_name='enduse', @@ -126,6 +132,7 @@ def __init__(self, s3_base_dir, comstock_run_name, comstock_run_version, comstoc self.make_comparison_plots = make_comparison_plots self.make_timeseries_plots = make_timeseries_plots self.APPORTIONED = False # Including this for some basic control logic in which methods are allowed + self.CBECS_WEIGHTS_APPLIED = False # Including this for some additional control logic about method order logger.info(f'Creating {self.dataset_name}') # Make directories @@ -136,7 +143,10 @@ def __init__(self, s3_base_dir, comstock_run_name, comstock_run_version, comstoc # S3 location self.s3_inpath = None if s3_base_dir is not None: - self.s3_inpath = f"s3://{s3_base_dir}/{self.comstock_run_name}/{self.comstock_run_name}" + if self.athena_table_name: + self.s3_inpath = f"s3://{s3_base_dir}/{self.comstock_run_name}/{self.athena_table_name}" + else: + self.s3_inpath =
f"s3://{s3_base_dir}/{self.comstock_run_name}/{self.comstock_run_name}" # Load and transform data, preserving all columns self.download_data() @@ -240,13 +250,26 @@ # logger.debug(c) def download_data(self): + # Determine the s3 bucket and prefix to download data from: + if self.s3_inpath is None: + logger.info('The s3 path provided in the ComStock object initialization is invalid.') + s3_path_items = self.s3_inpath.removeprefix('s3://').split('/') + bucket_name = s3_path_items[0] + prfx = '/'.join(s3_path_items[1:]) + # baseline/results_up00.parquet results_data_path = os.path.join(self.data_dir, self.results_file_name) if not os.path.exists(results_data_path): - s3_path = f"{self.s3_inpath}/baseline/{self.results_file_name}" - logger.info(f'Downloading: {s3_path}') - data = pd.read_parquet(s3_path, engine="pyarrow") - data.to_parquet(results_data_path) + baseline_parquet_path = f"{prfx}/baseline/{self.results_file_name}" + try: + self.s3_resource.Object(bucket_name, baseline_parquet_path).load() + except botocore.exceptions.ClientError: + logger.error(f'Could not find results_up00.parquet at {baseline_parquet_path} in bucket {bucket_name}') + raise FileNotFoundError( + f'Missing results_up00.parquet file.
Manually download and place at {results_data_path}' + ) + logger.info(f'Downloading {baseline_parquet_path} from the {bucket_name} bucket') + self.s3_resource.Object(bucket_name, baseline_parquet_path).download_file(results_data_path) # upgrades/upgrade=*/results_up*.parquet if self.include_upgrades: @@ -255,13 +278,10 @@ def download_data(self): logger.info('The s3 path passed to the constructor is invalid, ' 'cannot check for results_up**.parquet files to download') else: - s3_path_items = self.s3_inpath.lstrip('s3://').split('/') - bucket_name = s3_path_items[0] - prfx = '/'.join(s3_path_items[1:]) - prfx = f'{prfx}/upgrades' - resp = self.s3_client.list_objects_v2(Bucket=bucket_name, Prefix=prfx) - for obj in resp.get("Contents"): - obj_path = obj['Key'] + upgrade_parquet_path = f'{prfx}/upgrades' + resp = self.s3_resource.Bucket(bucket_name).objects.filter(Prefix=upgrade_parquet_path).all() + for obj in list(resp): + obj_path = obj.key obj_name = obj_path.split('/')[-1] m = re.search('results_up(.*).parquet', obj_name) if not m: @@ -272,21 +292,26 @@ def download_data(self): continue results_data_path = os.path.join(self.data_dir, obj_name) if not os.path.exists(results_data_path): - s3_path = f"s3://{bucket_name}/{obj_path}" - logger.info(f'Downloading: {s3_path}') - data = pd.read_parquet(s3_path, engine="pyarrow") - data.to_parquet(results_data_path) + logger.info(f'Downloading {obj_path} from the {bucket_name} bucket') + self.s3_resource.Object(bucket_name, obj_path).download_file(results_data_path) # buildstock.csv - #TODO: handle the missing buildstock.csv in a more robust way #1. check the file in the data_dir #2. if not found, download from S3 #3. if not found in S3, raise an error - buildstock_csv_path = os.path.join(self.data_dir, self.buildstock_file_name) if not os.path.exists(buildstock_csv_path): - raise FileNotFoundError( - f'Missing buildstock.csv file. 
Manually download and place in {os.path.abspath(self.data_dir)}') + bldstk_s3_path = f'{prfx}/buildstock_csv/buildstock.csv' + try: + self.s3_resource.Object(bucket_name, bldstk_s3_path).load() + except botocore.exceptions.ClientError: + logger.error(f'Could not find buildstock.csv at {bldstk_s3_path} in bucket {bucket_name}') + raise FileNotFoundError( + f'Missing buildstock.csv file. Manually download and place at {buildstock_csv_path}' + ) + logger.info(f'Downloading {bldstk_s3_path} from the {bucket_name} bucket') + self.s3_resource.Object(bucket_name, bldstk_s3_path).download_file(buildstock_csv_path) # EJSCREEN ejscreen_data_path = os.path.join(self.truth_data_dir, self.ejscreen_file_name) @@ -1433,7 +1458,7 @@ def diff_lists(li1, li2): # These geography columns should be close together for convenience # but have no obvious pattern to match against possible_geog_cols = [ - 'in.ashrae_iecc_climate_zone_2004', + 'in.ashrae_iecc_climate_zone_2006', 'in.building_america_climate_zone', 'in.cambium_grid_region', 'in.census_division_name', @@ -1820,13 +1845,11 @@ def add_building_type_group(self): self.data = self.data.with_columns((pl.col(self.BLDG_TYPE).cast(pl.Utf8).replace(bldg_type_groups, default=None)).alias(self.BLDG_TYPE_GROUP)) self.data = self.data.with_columns(pl.col(self.BLDG_TYPE_GROUP).cast(pl.Categorical)) - def add_national_scaling_weights(self, cbecs: CBECS, remove_non_comstock_bldg_types_from_cbecs: bool): # Remove CBECS entries for building types not included in the ComStock run # comstock_bldg_types = self.data[self.BLDG_TYPE].unique() # assert "calc.weighted.utility_bills.total_mean_bill..billion_usd" in self.data.columns assert isinstance(self.data, pl.LazyFrame) - comstock_bldg_types: set = set(self.data.select(self.BLDG_TYPE).unique().collect().to_pandas()[self.BLDG_TYPE].tolist()) cbecs.data: pd.DataFrame = cbecs.data.collect().to_pandas() @@ -1858,15 +1881,24 @@ def
add_national_scaling_weights(self, cbecs: CBECS, remove_non_comstock_bldg_ty logger.debug(cbecs_bldg_type_sqft) # Total sqft of each building type, ComStock - baseline_data: pl.LazyFrame = self.data.filter(pl.col(self.UPGRADE_NAME) == self.BASE_NAME).clone() - comstock_bldg_type_sqft: pl.DataFrame = baseline_data.group_by(self.BLDG_TYPE).agg([pl.col(self.FLR_AREA).sum()]).collect() - comstock_bldg_type_sqft: pd.DataFrame = comstock_bldg_type_sqft.to_pandas().set_index(self.BLDG_TYPE) + if self.APPORTIONED: + # Since this is a national calculation, groupby on building id and upgrade only in foreign key table + national_agg = self.fkt.filter(pl.col(self.UPGRADE_ID) == 0).clone() + national_agg = national_agg.select([pl.col(self.BLDG_WEIGHT), pl.col(self.BLDG_ID)]).groupby(pl.col(self.BLDG_ID)).sum() + cs_data = self.data.filter(pl.col(self.UPGRADE_NAME) == self.BASE_NAME).select([pl.col(self.BLDG_ID), pl.col(self.FLR_AREA), pl.col(self.BLDG_TYPE)]).clone() + national_agg = national_agg.join(cs_data, on=pl.col(self.BLDG_ID)) + national_agg = national_agg.with_columns((pl.col(self.BLDG_WEIGHT) * pl.col(self.FLR_AREA)).alias(self.FLR_AREA)) + national_agg = national_agg.select([pl.col(self.BLDG_TYPE), pl.col(self.FLR_AREA)]).groupby(pl.col(self.BLDG_TYPE)).sum().collect() + comstock_bldg_type_sqft: pd.DataFrame = national_agg.to_pandas().set_index(self.BLDG_TYPE) + else: + baseline_data: pl.LazyFrame = self.data.filter(pl.col(self.UPGRADE_NAME) == self.BASE_NAME).clone() + comstock_bldg_type_sqft: pl.DataFrame = baseline_data.group_by(self.BLDG_TYPE).agg([pl.col(self.FLR_AREA).sum()]).collect() + comstock_bldg_type_sqft: pd.DataFrame = comstock_bldg_type_sqft.to_pandas().set_index(self.BLDG_TYPE) logger.debug('ComStock Baseline floor area by building type') logger.debug(comstock_bldg_type_sqft) # Calculate scaling factor for each building type based on floor area (not building/model count) sf = pd.concat([cbecs_bldg_type_sqft, comstock_bldg_type_sqft], axis = 1) - 
logger.info("sf wt_area_col shape: ", sf[wt_area_col].shape) sf[self.BLDG_WEIGHT] = sf[wt_area_col].astype(float) / sf[self.FLR_AREA].astype(float) bldg_type_scale_factors = sf[self.BLDG_WEIGHT].to_dict() if np.nan in bldg_type_scale_factors: @@ -1876,18 +1908,31 @@ def add_national_scaling_weights(self, cbecs: CBECS, remove_non_comstock_bldg_ty del bldg_type_scale_factors[np.nan] # Report any scaling factor greater than some threshold. + if self.APPORTIONED: + logger.info(f'{self.dataset_name} post-apportionment scaling factors to CBECS floor area:') + for bldg_type, scaling_factor in bldg_type_scale_factors.items(): + logger.info(f'--- {bldg_type}: {round(scaling_factor, 2)}') + if scaling_factor > 1.3: + wrn_msg = (f'The scaling factor for {bldg_type} is high, which indicates something unexpected ' + 'in the apportionment step, except for Healthcare where this is expected. Please review.') + logger.warning(wrn_msg) + elif scaling_factor < 0.6: + wrn_msg = (f'The scaling factor for {bldg_type} is low, which indicates something unexpected ' + 'in the apportionment step. Please review.') + logger.warning(wrn_msg) + else: # In situations with high failure rates of a single building, # the scaling factor will be high, and the results are likely to be # heavily skewed toward the few successful simulations of that building type. - logger.info(f'{self.dataset_name} scaling factors - scale ComStock results to CBECS floor area') - for bldg_type, scaling_factor in bldg_type_scale_factors.items(): - logger.info(f'--- {bldg_type}: {round(scaling_factor, 2)}') - if scaling_factor > 15: - wrn_msg = (f'The scaling factor for {bldg_type} is high, which indicates either a test run <350k models ' - f'or significant failed runs for this building type. 
Comparisons to CBECS will likely be invalid.') - logger.warning(wrn_msg) - - # For reference/comparison, here are the weights from the ComStock V1 runs + logger.info(f'{self.dataset_name} scaling factors - scale ComStock results to CBECS floor area') + for bldg_type, scaling_factor in bldg_type_scale_factors.items(): + logger.info(f'--- {bldg_type}: {round(scaling_factor, 2)}') + if scaling_factor > 15: + wrn_msg = (f'The scaling factor for {bldg_type} is high, which indicates either a test run <350k models ' + f'or significant failed runs for this building type. Comparisons to CBECS will likely be invalid.') + logger.warning(wrn_msg) + + # For reference/comparison, here are the weights from the ComStock Pre-EUSS 2024R2 runs # PROD_V1_COMSTOCK_WEIGHTS = { # 'small_office': 9.625838016683277, # 'medium_office': 9.625838016683277, @@ -1904,24 +1949,49 @@ def add_national_scaling_weights(self, cbecs: CBECS, remove_non_comstock_bldg_ty # 'strip_mall': 2.1106205675100735, # 'warehouse': 2.1086048544461304 # } + # Here are the 'nominal' weights from Sampling V2 implementation (EUSS 2024 R2 on): + # TODO Add weights here # Assign scaling factors to each ComStock run self.building_type_weights = bldg_type_scale_factors - self.data = self.data.with_columns((pl.col(self.BLDG_TYPE).cast(pl.Utf8).replace(bldg_type_scale_factors, default=None)).alias(self.BLDG_WEIGHT)) + if self.APPORTIONED: + cbecs_weights = pl.LazyFrame({self.BLDG_TYPE: bldg_type_scale_factors.keys(), 'cbecs_weight': bldg_type_scale_factors.values()}) + self.fkt = self.fkt.join(cbecs_weights, on=pl.col(self.BLDG_TYPE)) + self.fkt = self.fkt.with_columns((pl.col(self.BLDG_WEIGHT) * pl.col('cbecs_weight')).alias(self.BLDG_WEIGHT)) + self.fkt = self.fkt.drop(self.BLDG_TYPE, 'cbecs_weight') + else: + self.data = self.data.with_columns((pl.col(self.BLDG_TYPE).cast(pl.Utf8).replace(bldg_type_scale_factors, default=None)).alias(self.BLDG_WEIGHT)) assert isinstance(cbecs.data, pd.DataFrame) cbecs.data = 
pl.from_pandas(cbecs.data).lazy() assert isinstance(cbecs.data, pl.LazyFrame) + self.CBECS_WEIGHTS_APPLIED = True return bldg_type_scale_factors def _calculate_weighted_columnal_values(self, input_lf: pl.LazyFrame): # Apply the weights to the columns - input_lf = self.add_weighted_area_energy_savings_columns(input_lf) #compute out the weighted value, based on the unweighted columns and the weights. - assert isinstance(self.data, pl.LazyFrame) + #compute out the weighted value, based on the unweighted columns and the weights. + input_lf = self.add_weighted_area_energy_savings_columns(input_lf) + assert isinstance(input_lf, pl.LazyFrame) return input_lf + def create_plotting_lazyframe(self): + plotting_aggregation = self.fkt.clone() + plotting_aggregation = plotting_aggregation.select( + [pl.col(self.BLDG_WEIGHT), pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID), pl.col(self.CEN_DIV)] + ).groupby([pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID), pl.col(self.CEN_DIV)]).sum() + plotting_aggregation = plotting_aggregation.join(self.data, on=[pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID)]) + plotting_aggregation = self._calculate_weighted_columnal_values(plotting_aggregation) + plotting_aggregation = self.reorder_data_columns(plotting_aggregation) + plotting_aggregation = self.add_sightglass_column_units(plotting_aggregation) + assert isinstance(plotting_aggregation, pl.LazyFrame) + self.plotting_data = plotting_aggregation + def create_national_aggregation(self): - national_aggregation = self.fkt.select([pl.col('weight'), pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID)]).groupby([pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID)]).sum() + national_aggregation = self.fkt.clone() + national_aggregation = national_aggregation.select( + [pl.col(self.BLDG_WEIGHT), pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID)] + ).groupby([pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID)]).sum() national_aggregation = national_aggregation.join(self.data, on=[pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID)]) 
national_aggregation = self._calculate_weighted_columnal_values(national_aggregation) @@ -1930,7 +2000,6 @@ def create_national_aggregation(self): # self.get_scaled_comstock_monthly_consumption_by_state(national_aggregation) # Reorder the columns before exporting - # TODO works on self.data national_aggregation = self.reorder_data_columns(national_aggregation) assert isinstance(national_aggregation, pl.LazyFrame) @@ -1963,13 +2032,22 @@ def create_national_aggregation(self): # Export dictionaries corresponding to the exported columns self.export_data_and_enumeration_dictionary() + # Return the nationally aggregated dataframe for use by plotting, etc. + return national_aggregation + def create_geospatially_resolved_aggregations(self, geographic_col_name, pretty_geo_col_name=False): - supported_geographies = [self.STATE_ID, self.COUNTY_ID, self.TRACT_ID, self.CZ_ASHRAE] + # Ensure the geography is supported + supported_geographies = [self.CEN_DIV, self.STATE_ID, self.COUNTY_ID, self.TRACT_ID, self.CZ_ASHRAE] if geographic_col_name not in supported_geographies: logger.error(f'Requested geographic aggregation {geographic_col_name} not in supported geographies.') logger.error(f'Currently supported geographies are {supported_geographies}') raise RuntimeError('Unsupported geography selected for geospatial aggregation') + + # Create the spatial aggregation + spatial_aggregation = self.fkt.clone() + spatial_aggregation = spatial_aggregation.select( + [pl.col('weight'), pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID), pl.col(geographic_col_name)] + ).groupby([pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID), pl.col(geographic_col_name)]).sum() spatial_aggregation = spatial_aggregation.join(self.data,
on=[pl.col(self.UPGRADE_ID), pl.col(self.BLDG_ID)]) spatial_aggregation = self._calculate_weighted_columnal_values(spatial_aggregation) @@ -1982,7 +2060,9 @@ def create_geospatially_resolved_aggregations(self, geographic_col_name, pretty_ up_ids.sort() for up_id in up_ids: file_name = f'{self.UPGRADE_ID}={up_id}' - file_path = os.path.abspath(os.path.join(self.output_dir, 'geospatial_results', geographic_col_name.replace('in.', ''), file_name)) + file_path = os.path.abspath(os.path.join( + self.output_dir, 'geospatial_results', geographic_col_name.replace('in.', ''), file_name + )) logger.info(f'Exporting to: {file_path}') to_write = spatial_aggregation.filter(pl.col(self.UPGRADE_ID) == up_id) if pretty_geo_col_name: @@ -1994,13 +2074,21 @@ def create_geospatially_resolved_aggregations(self, geographic_col_name, pretty_ logger.warn("ulimit -n 200000") logger.warn("ulimit -u 2048") logger.info("Attempting potentially OSError-triggering write:") - to_write.collect().write_parquet(file_path, use_pyarrow=True, pyarrow_options={"partition_cols": [pretty_geo_col_name], 'max_partitions': 3143}) + to_write.collect().write_parquet(file_path, use_pyarrow=True, pyarrow_options={ + "partition_cols": [pretty_geo_col_name], 'max_partitions': 3143 + }) # Export dictionaries corresponding to the exported columns self.export_data_and_enumeration_dictionary() + # Return the geospatially aggregated dataframe for use by plotting, etc. + return spatial_aggregation def add_weights_aportioned_by_stock_estimate(self, apportionment: Apportion, keep_n_per_apportionment_group=False): + # This method doesn't support self.data that has already been CBECS-weighted - error out + if self.CBECS_WEIGHTS_APPLIED: + raise RuntimeError('Unable to apply apportionment weighting after CBECS weighting - reverse the order of these operations.') + # TODO this should live somewhere else - don't know where...
self.data = self.data.with_columns( pl.col(self.COUNTY_ID).cast(str).str.slice(0, 4).alias(self.STATE_ID) @@ -2008,7 +2096,7 @@ def add_weights_aportioned_by_stock_estimate(self, apportionment: Apportion, kee # Pull the columns required to do the matching plus the annual energy total as a safety blanket # TODO this is a superset for convenience - slim down later - csdf = self.data.filter(pl.col(self.UPGRADE_NAME) == self.BASE_NAME).select(pl.col( + csdf = self.data.clone().filter(pl.col(self.UPGRADE_NAME) == self.BASE_NAME).select(pl.col( self.BLDG_ID, self.STATE_ID, self.COUNTY_ID, self.TRACT_ID, self.SAMPLING_REGION, self.CZ_ASHRAE, self.BLDG_TYPE, self.HVAC_SYS, self.SH_FUEL, self.SIZE_BIN, self.FLR_AREA, self.TOT_EUI, self.CEN_DIV )) @@ -2029,7 +2117,9 @@ def add_weights_aportioned_by_stock_estimate(self, apportionment: Apportion, kee # domain (csdf data). # TODO make the apportionment data object a lazy df natively apportionment.data.loc[:, 'hvac_and_fueltype'] = apportionment.data.loc[:, 'system_type'] + '_' + apportionment.data.loc[:, 'heating_fuel'] - appo_group_df = apportionment.data.loc[:, ['sampling_region', 'building_type', 'size_bin', 'hvac_and_fueltype']] + appo_group_df = apportionment.data.copy(deep=True).loc[ + :, ['sampling_region', 'building_type', 'size_bin', 'hvac_and_fueltype'] + ] appo_group_df = appo_group_df.drop_duplicates(keep='first').sort_values( by=['sampling_region', 'building_type', 'size_bin', 'hvac_and_fueltype'] ).reset_index(drop=True).reset_index(names='appo_group_id') @@ -2045,7 +2135,7 @@ def add_weights_aportioned_by_stock_estimate(self, apportionment: Apportion, kee raise RuntimeError('Not all combinations of sampling region, bt, and size bin could be matched.') # Join apportionment group id into comstock data - tdf = pl.DataFrame(apportionment.data).lazy() + tdf = pl.DataFrame(apportionment.data.copy(deep=True)).lazy() tdf = tdf.join(appo_group_df, on=['sampling_region', 'building_type', 'size_bin', 'hvac_and_fueltype']) #
Identify combination in the truth data not supported by the current sample. @@ -2116,8 +2206,10 @@ def add_weights_aportioned_by_stock_estimate(self, apportionment: Apportion, kee pl.col('county').alias(self.COUNTY_ID), pl.col('state').alias(self.STATE_ID), pl.col('cz').alias(self.CZ_ASHRAE), + pl.col('cen_div').alias(self.CEN_DIV), pl.col('sqft').alias('truth_sqft'), - pl.col('tract_assignment_type').alias('in.tract_assignment_type') + pl.col('tract_assignment_type').alias('in.tract_assignment_type'), + pl.col('building_type').alias(self.BLDG_TYPE) ), on=pl.col('tdf_id')) # Pull in the sqft calculate weights @@ -2132,11 +2224,11 @@ def add_weights_aportioned_by_stock_estimate(self, apportionment: Apportion, kee self.TRACT_ID: self.TRACT_ID.replace('in.', self.POST_APPO_SIM_COL_PREFIX), self.COUNTY_ID: self.COUNTY_ID.replace('in.', self.POST_APPO_SIM_COL_PREFIX), self.STATE_ID: self.STATE_ID.replace('in.', self.POST_APPO_SIM_COL_PREFIX), - self.CZ_ASHRAE: self.CZ_ASHRAE.replace('in.', self.POST_APPO_SIM_COL_PREFIX), + self.CEN_DIV: self.CEN_DIV.replace('in.', self.POST_APPO_SIM_COL_PREFIX), }) # Drop unwanted columns from the foreign key table and persist - fkt = fkt.drop('tdf_id', 'appo_group_id', 'truth_sqft', 'in.tract_assignment_type') + fkt = fkt.drop('tdf_id', 'appo_group_id', 'truth_sqft', 'in.tract_assignment_type', self.FLR_AREA) self.APPORTIONED = True self.fkt = fkt logger.info('Successfully completed the apportionment sampling postprocessing') @@ -2444,7 +2536,7 @@ def rmv_units(c): self.data = self.data.rename(crnms) - def add_sightglass_column_units(self): + def add_sightglass_column_units(self, lazyframe): # SightGlass requires that the energy_consumption, energy_consumption_intensity, # energy_savings, and energy_savings_intensity columns have no units on the # column names. 
This method adds those units back to the appropriate column names, @@ -2454,7 +2546,7 @@ def rmv_units(c): return c.replace(f'..{self.units_from_col_name(c)}', '') crnms = {} # Column renames - og_cols = self.data.columns + og_cols = lazyframe.columns for col in (self.COLS_TOT_ANN_ENGY + self.COLS_ENDUSE_ANN_ENGY): # energy_consumption if rmv_units(col) in og_cols: crnms[rmv_units(col)] = col @@ -2480,7 +2572,8 @@ def rmv_units(c): assert new.startswith(old) logger.debug(f'{old} -> {new}') - self.data = self.data.rename(crnms) + lazyframe = lazyframe.rename(crnms) + return lazyframe def get_comstock_unscaled_monthly_energy_consumption(self): """ diff --git a/postprocessing/comstockpostproc/comstock_apportionment.py b/postprocessing/comstockpostproc/comstock_apportionment.py index 59ff25c4d..c564d8301 100644 --- a/postprocessing/comstockpostproc/comstock_apportionment.py +++ b/postprocessing/comstockpostproc/comstock_apportionment.py @@ -1,3 +1,6 @@ +# ComStockā„¢, Copyright (c) 2023 Alliance for Sustainable Energy, LLC. All rights reserved. +# See top level LICENSE.txt file for license terms. 
+ import boto3 import botocore from glob import glob @@ -89,8 +92,8 @@ def __init__(self, stock_estimation_version, truth_data_version, bootstrap_coeff CEN_DIV_LKUP={ 'G090': 'New England', 'G230': 'New England', 'G250': 'New England', 'G330': 'New England', - 'G440': 'New England', 'G500': 'New England', 'G340': 'Mid-Atlantic', 'G360': 'Mid-Atlantic', - 'G420': 'Mid-Atlantic', 'G180': 'East North Central', 'G170': 'East North Central', + 'G440': 'New England', 'G500': 'New England', 'G340': 'Middle Atlantic', 'G360': 'Middle Atlantic', + 'G420': 'Middle Atlantic', 'G180': 'East North Central', 'G170': 'East North Central', 'G260': 'East North Central', 'G390': 'East North Central', 'G550': 'East North Central', 'G190': 'West North Central', 'G200': 'West North Central', 'G270': 'West North Central', 'G290': 'West North Central', 'G310': 'West North Central', 'G380': 'West North Central', @@ -534,6 +537,7 @@ def upsample_hvac_system_fuel_types(self): hcols = [col.replace('Option=', '') for col in hsdf.columns if 'Option=' in col] hsdf.columns = [col.replace('Dependency=', '').replace('Option=', '') for col in hsdf.columns] hsdf.loc[:, 'building_type'] = hsdf.loc[:, 'building_type'].map(self.BUILDING_TYPE_NAME_MAPPER) + hsdf.loc[:, 'census_region'] = hsdf.loc[:, 'census_region'].replace('Mid-Atlantic', 'Middle Atlantic') df = df.merge(hsdf, left_on=['building_type', 'heating_fuel', 'cen_div'], right_on=['building_type', 'heating_fuel', 'census_region']) # Use the merged probabilities to sample in fuel type diff --git a/postprocessing/comstockpostproc/comstock_measure_comparison.py b/postprocessing/comstockpostproc/comstock_measure_comparison.py index c8611f42a..7e242dc34 100644 --- a/postprocessing/comstockpostproc/comstock_measure_comparison.py +++ b/postprocessing/comstockpostproc/comstock_measure_comparison.py @@ -20,7 +20,12 @@ def __init__(self, comstock_object: comstock.ComStock, states, make_comparison_p # Initialize members assert 
isinstance(comstock_object.data, pl.LazyFrame) - self.data = comstock_object.data.clone() #not really a deep copy, only schema is copied but not data. + # Instantiate the plotting data lazyframe if it doesn't yet exist: + if not isinstance(comstock_object.plotting_data, pl.LazyFrame): + logger.info(f'Instantiating plotting lazyframe for comstock dataset {comstock_object.dataset_name}.') + comstock_object.create_plotting_lazyframe() + assert isinstance(comstock_object.plotting_data, pl.LazyFrame) + self.data = comstock_object.plotting_data.clone() #not really a deep copy, only schema is copied but not data. assert isinstance(self.data, pl.LazyFrame) self.color_map = {} diff --git a/postprocessing/comstockpostproc/comstock_to_cbecs_comparison.py b/postprocessing/comstockpostproc/comstock_to_cbecs_comparison.py index 3e499315d..dfa1b7489 100644 --- a/postprocessing/comstockpostproc/comstock_to_cbecs_comparison.py +++ b/postprocessing/comstockpostproc/comstock_to_cbecs_comparison.py @@ -51,9 +51,13 @@ def __init__(self, comstock_list: List[ComStock], cbecs_list: List[CBECS], upgra # remove measure data from ComStock if isinstance(dataset, ComStock): #dataset is ComStock assert isinstance(dataset.data, pl.LazyFrame) + # Instantiate the plotting data lazyframe if it doesn't yet exist: + if not isinstance(dataset.plotting_data, pl.LazyFrame): + logger.info(f'Instantiating plotting lazyframe for comstock dataset {dataset.dataset_name}.') + dataset.create_plotting_lazyframe() + assert isinstance(dataset.plotting_data, pl.LazyFrame) - dataset.add_sightglass_column_units() # Add units to SightGlass columns if missing - up_id_name: list = dataset.data.select(dataset.UPGRADE_ID, dataset.UPGRADE_NAME).collect().unique().to_numpy().tolist() + up_id_name: list = dataset.plotting_data.select(dataset.UPGRADE_ID, dataset.UPGRADE_NAME).collect().unique().to_numpy().tolist() up_name_map = {k: v for k, v in up_id_name} valid_upgrade_id = [x for x in up_name_map.keys()] 
valid_upgrade_name = [up_name_map[x] for x in valid_upgrade_id] @@ -62,10 +66,12 @@ def __init__(self, comstock_list: List[ComStock], cbecs_list: List[CBECS], upgra if upgrade_id == 'All': # df_data: pl.LazyFrame = dataset.data # df_data[dataset.DATASET] = df_data[dataset.DATASET] + ' - ' + df_data['upgrade_name'] - comstock_dfs_to_concat.append(dataset.data) + comstock_dfs_to_concat.append(dataset.plotting_data) # df_data[dataset.DATASET] = df_data[dataset.DATASET].astype(str) + ' - ' + df_data[dataset.UPGRADE_NAME].astype(str) - dataset.data = dataset.data.with_columns((pl.col(dataset.DATASET).cast(pl.Utf8) + ' - ' + pl.col(dataset.UPGRADE_NAME).cast(pl.Utf8)).alias(dataset.DATASET)) - dfs_to_concat.append(dataset.data) + dataset.plotting_data = dataset.plotting_data.with_columns(( + pl.col(dataset.DATASET).cast(pl.Utf8) + ' - ' + pl.col(dataset.UPGRADE_NAME).cast(pl.Utf8) + ).alias(dataset.DATASET)) + dfs_to_concat.append(dataset.plotting_data) # up_name_map = dict(zip(df_data[dataset.UPGRADE_ID].unique(), df_data[dataset.UPGRADE_NAME].unique())) # upgrade_list = list(df_data[dataset.UPGRADE_ID].unique()) color_dict = self.linear_gradient(dataset.COLOR_COMSTOCK_BEFORE, dataset.COLOR_COMSTOCK_AFTER, len(valid_upgrade_id)) @@ -78,7 +84,7 @@ def __init__(self, comstock_list: List[ComStock], cbecs_list: List[CBECS], upgra elif upgrade_id not in valid_upgrade_id: logger.error(f"Upgrade {upgrade_id} not found in {dataset.dataset_name}. 
Enter a valid upgrade ID in the ComStockToCBECSComparison constructor or \"All\" to include all upgrades.") else: - df_data = dataset.data.filter(pl.col(dataset.UPGRADE_ID) == upgrade_id) + df_data = dataset.plotting_data.filter(pl.col(dataset.UPGRADE_ID) == upgrade_id) df_data = df_data.with_columns((pl.col(dataset.DATASET).cast(pl.Utf8) + ' - ' + pl.col(dataset.UPGRADE_NAME).cast(pl.Utf8)).alias(dataset.DATASET)) dataset_name = dataset.dataset_name + " - " + up_name_map[upgrade_id] comstock_dfs_to_concat.append(df_data) @@ -112,6 +118,12 @@ def __init__(self, comstock_list: List[ComStock], cbecs_list: List[CBECS], upgra current_dir = os.path.dirname(os.path.abspath(__file__)) # Combine just comstock runs into single dataframe for QOI plots + common_columns = set(comstock_dfs_to_concat[0].columns) + all_columns = set(common_columns) + for df in comstock_dfs_to_concat: + common_columns = common_columns & set(df.columns) + all_columns = all_columns | set(df.columns) + logger.info(f"Not including columns {all_columns - common_columns} in comstock only plots") + comstock_dfs_to_concat = [df.select(sorted(common_columns)) for df in comstock_dfs_to_concat] comstock_df = pl.concat(comstock_dfs_to_concat, how="vertical_relaxed") # comstock_df = comstock_df[[self.DATASET] + self.QOI_MAX_DAILY_TIMING_COLS + self.QOI_MAX_USE_COLS + self.QOI_MIN_USE_COLS + self.QOI_MAX_USE_COLS_NORMALIZED + self.QOI_MIN_USE_COLS_NORMALIZED] comstock_qoi_columns = [self.DATASET] + self.QOI_MAX_DAILY_TIMING_COLS + self.QOI_MAX_USE_COLS + self.QOI_MIN_USE_COLS + self.QOI_MAX_USE_COLS_NORMALIZED + self.QOI_MIN_USE_COLS_NORMALIZED @@ -198,4 +210,9 @@ def export_to_csv_wide(self): file_name = f'ComStock wide.csv' file_path = os.path.join(self.output_dir, file_name) - self.data.to_csv(file_path, index=False) \ No newline at end of file + try: + self.data.sink_csv(file_path) + except pl.exceptions.InvalidOperationError: + logger.warning('Warning - sink_csv not supported for metadata write in current polars version') + logger.warning('Falling back to
.collect.write_csv') + self.data.collect().write_csv(file_path) diff --git a/resources/measures/upgrade_hvac_add_heat_pump_rtu/README.md b/resources/measures/upgrade_hvac_add_heat_pump_rtu/README.md index 7479132fb..9f9f8dd61 100644 --- a/resources/measures/upgrade_hvac_add_heat_pump_rtu/README.md +++ b/resources/measures/upgrade_hvac_add_heat_pump_rtu/README.md @@ -115,6 +115,15 @@ Determines performance assumptions. two_speed_standard_eff is a standard efficie **Model Dependent:** false +### Upgrade Roof Insulation? +Upgrade roof insulation per AEDG recommendations. +**Name:** roof, +**Type:** Boolean, +**Units:** , +**Required:** true, +**Model Dependent:** false + + ### Do a sizing run for informing sizing instead of using hard-sized model parameters? **Name:** sizing_run, diff --git a/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.rb b/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.rb index fe4246bd6..76dd9c579 100644 --- a/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.rb +++ b/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.rb @@ -6,12 +6,9 @@ # see the URL below for information on how to write OpenStudio measures # http://nrel.github.io/OpenStudio-user-documentation/reference/measure_writing_guide/ require 'openstudio-standards' -require_relative '../upgrade_env_roof_insul_aedg/measure.rb' -# require 'minitest/autorun' # start the measure class AddHeatPumpRtu < OpenStudio::Measure::ModelMeasure - # defining global variable # adding tolerance because EnergyPlus unit conversion differs from manual conversion # reference: https://github.com/NREL/EnergyPlus/blob/337bfbadf019a80052578d1bad6112dca43036db/src/EnergyPlus/DataHVACGlobals.hh#L362-L368 @@ -42,7 +39,7 @@ def arguments(_model) args = OpenStudio::Measure::OSArgumentVector.new # make list of backup heat options - li_backup_heat_options = %w[match_original_primary_heating_fuel electric_resistance_backup] + li_backup_heat_options = 
['match_original_primary_heating_fuel', 'electric_resistance_backup'] v_backup_heat_options = OpenStudio::StringVector.new li_backup_heat_options.each do |option| v_backup_heat_options << option @@ -64,7 +61,7 @@ def arguments(_model) args << performance_oversizing_factor # heating sizing options TODO - li_htg_sizing_option = %w[47F 17F 0F -10F] + li_htg_sizing_option = ['47F', '17F', '0F', '-10F'] v_htg_sizing_option = OpenStudio::StringVector.new li_htg_sizing_option.each do |option| v_htg_sizing_option << option @@ -99,7 +96,7 @@ def arguments(_model) args << hp_min_comp_lockout_temp_f # make list of cchpc scenarios - li_hprtu_scenarios = %w[two_speed_standard_eff variable_speed_high_eff cchpc_2027_spec] + li_hprtu_scenarios = ['two_speed_standard_eff', 'variable_speed_high_eff', 'cchpc_2027_spec'] v_li_hprtu_scenarios = OpenStudio::StringVector.new li_hprtu_scenarios.each do |option| v_li_hprtu_scenarios << option @@ -208,7 +205,6 @@ def air_loop_hvac_unitary_system?(air_loop_hvac) # load curve to model from json # modified version from OS Standards to read from custom json file def model_add_curve(model, curve_name, standards_data_curve, std) - # First check model and return curve if it already exists existing_curves = [] existing_curves += model.getCurveLinears @@ -384,9 +380,6 @@ def model_add_curve(model, curve_name, standards_data_curve, std) table.addIndependentVariable(table_indvar) end table - else - # OpenStudio.logFree(OpenStudio::Error, 'openstudio.Model.Model', "#{curve_name}' has an invalid form: #{data['form']}', cannot create this curve.") - nil end end @@ -410,14 +403,14 @@ def assign_staging_data(staging_data_json, std) stage_rated_cop_frac_heating = eval(staging_data['stage_rated_cop_frac_heating']) stage_rated_cop_frac_cooling = eval(staging_data['stage_rated_cop_frac_cooling']) boost_stage_num_and_max_temp_tuple = eval(staging_data['boost_stage_num_and_max_temp_tuple']) - stage_GrossRatedSensibleHeatRatio_cooling = 
eval(staging_data['stage_GrossRatedSensibleHeatRatio_cooling']) + stage_gross_rated_sensible_heat_ratio_cooling = eval(staging_data['stage_gross_rated_sensible_heat_ratio_cooling']) enable_cycling_losses_above_lowest_speed = staging_data['enable_cycling_losses_above_lowest_speed'] reference_cooling_cfm_per_ton = staging_data['reference_cooling_cfm_per_ton'] reference_heating_cfm_per_ton = staging_data['reference_cooling_cfm_per_ton'] # Return assigned variables [num_heating_stages, num_cooling_stages, rated_stage_num_heating, rated_stage_num_cooling, final_rated_cooling_cop, final_rated_heating_cop, stage_cap_fractions_heating, stage_flow_fractions_heating, - stage_cap_fractions_cooling, stage_flow_fractions_cooling, stage_rated_cop_frac_heating, stage_rated_cop_frac_cooling, boost_stage_num_and_max_temp_tuple, stage_GrossRatedSensibleHeatRatio_cooling, + stage_cap_fractions_cooling, stage_flow_fractions_cooling, stage_rated_cop_frac_heating, stage_rated_cop_frac_cooling, boost_stage_num_and_max_temp_tuple, stage_gross_rated_sensible_heat_ratio_cooling, enable_cycling_losses_above_lowest_speed, reference_cooling_cfm_per_ton, reference_heating_cfm_per_ton] end @@ -479,7 +472,6 @@ def m_3_per_sec_watts_to_cfm_per_ton(m_3_per_sec_watts) # adjust rated COP based on reference CFM/ton def adjust_rated_cop_from_ref_cfm_per_ton(runner, airflow_sized_m_3_per_s, reference_cfm_per_ton, rated_capacity_w, original_rated_cop, eir_modifier_curve_flow) - # get reference airflow airflow_reference_m_3_per_s = cfm_per_ton_to_m_3_per_sec_watts(reference_cfm_per_ton) * rated_capacity_w @@ -503,7 +495,6 @@ def adjust_rated_cop_from_ref_cfm_per_ton(runner, airflow_sized_m_3_per_s, refer end def adjust_cfm_per_ton_per_limits(stage_cap_fractions, stage_flows, stage_flow_fractions, dx_rated_cap_applied, rated_stage_num, old_terminal_sa_flow_m3_per_s, min_airflow_ratio, air_loop_hvac, heating_or_cooling, runner, debug_verbose) - # determine capacities for each stage # this is based on 
user-input capacities for each stage and any upsizing applied # Flow per ton will be maintained between 300 CFM/Ton and 450 CFM/Ton @@ -512,7 +503,6 @@ def adjust_cfm_per_ton_per_limits(stage_cap_fractions, stage_flows, stage_flow_f stage_caps = {} # Calculate and store each stage's capacity stage_cap_fractions.sort.each do |stage, ratio| - # define cfm/ton bounds cfm_per_ton_min = CFM_PER_TON_MIN_RATED cfm_per_ton_max = CFM_PER_TON_MAX_RATED @@ -526,14 +516,17 @@ def adjust_cfm_per_ton_per_limits(stage_cap_fractions, stage_flows, stage_flow_f # Calculate the flow per ton flow_per_ton = airflow / stage_capacity - #puts "Debug*************************************************************" - #puts "#{heating_or_cooling} Stage #{stage}" - #puts "min_airflow_ratio: #{min_airflow_ratio}" - #puts "airflow: #{airflow}" - #puts "stage_capacity: #{stage_capacity}" - #puts "flow_per_ton: #{flow_per_ton}" - #puts "m_3_per_s_per_w_max: #{m_3_per_s_per_w_max.round(8)}" - #puts "In Bounds: #{(flow_per_ton.round(8) >= m_3_per_s_per_w_min.round(8)) && (flow_per_ton.round(8) <= m_3_per_s_per_w_max.round(8))}" + if debug_verbose + runner.registerInfo('stage summary: ---------------------------------------------------------------') + runner.registerInfo("stage summary: air_loop_hvac: #{air_loop_hvac.name}") + runner.registerInfo("stage summary: #{heating_or_cooling} Stage #{stage}") + runner.registerInfo("stage summary: min_airflow_ratio: #{min_airflow_ratio}") + runner.registerInfo("stage summary: airflow: #{airflow}") + runner.registerInfo("stage summary: stage_capacity: #{stage_capacity}") + runner.registerInfo("stage summary: flow_per_ton: #{flow_per_ton}") + runner.registerInfo("stage summary: m_3_per_s_per_w_max: #{m_3_per_s_per_w_max.round(8)}") + runner.registerInfo("stage summary: In Bounds: #{(flow_per_ton.round(8) >= m_3_per_s_per_w_min.round(8)) && (flow_per_ton.round(8) <= m_3_per_s_per_w_max.round(8))}") + end # If flow/ton is less than minimum, increase airflow of stage 
to meet minimum if (flow_per_ton.round(8) < m_3_per_s_per_w_min.round(8)) && (stage < rated_stage_num) @@ -545,7 +538,8 @@ def adjust_cfm_per_ton_per_limits(stage_cap_fractions, stage_flows, stage_flow_f stage_flow_fractions[stage] = new_stage_airflow / old_terminal_sa_flow_m3_per_s # TODO: - need to check if we can go over design airflow. If so, need to adjust min OA. stage_caps[stage] = stage_capacity if debug_verbose - runner.registerInfo("#{air_loop_hvac.name} | cfm/ton low limit violation | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage]/stage_caps[stage])}") + runner.registerInfo('stage summary: entered flow/ton too low loop....') + runner.registerInfo("stage summary: #{air_loop_hvac.name} | cfm/ton low limit violation | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage] / stage_caps[stage])}") end # If flow/ton is greater than maximum, decrease the airflow elsif (flow_per_ton.round(8) > m_3_per_s_per_w_max.round(8)) && (stage < rated_stage_num) @@ -554,12 +548,14 @@ def adjust_cfm_per_ton_per_limits(stage_cap_fractions, stage_flows, stage_flow_f # if cfm/ton limit can't be met by reducing airflow, allow increase capacity of up to 65% range between capacities # calculate maximum allowable ratio, no more than 50% increase between specified stages - #puts "Debugging*************************************" - #puts "air_loop_hvac: #{air_loop_hvac.name}" - #puts "ratio: #{ratio}" - #puts "stage: #{stage}" - #puts "stage_cap_fractions: #{stage_cap_fractions}" - #puts "dx_rated_cap_applied: #{dx_rated_cap_applied}" + if debug_verbose + runner.registerInfo('stage summary: entered flow/ton too high loop....') + runner.registerInfo("stage summary: air_loop_hvac: #{air_loop_hvac.name}") + runner.registerInfo("stage summary: ratio: #{ratio}") + runner.registerInfo("stage summary: stage: #{stage}") + runner.registerInfo("stage 
summary: stage_cap_fractions: #{stage_cap_fractions}") + runner.registerInfo("stage summary: dx_rated_cap_applied: #{dx_rated_cap_applied}") + end ratio_allowance_50_pct = ratio + (stage_cap_fractions[stage + 1] - ratio) * 0.65 required_stage_cap_ratio = airflow / m_3_per_s_per_w_max / (stage_cap_fractions[rated_stage_num] * dx_rated_cap_applied) @@ -572,35 +568,39 @@ def adjust_cfm_per_ton_per_limits(stage_cap_fractions, stage_flows, stage_flow_f stage_flow_fractions[stage] = new_stage_airflow / old_terminal_sa_flow_m3_per_s stage_caps[stage] = stage_capacity if debug_verbose - runner.registerInfo("#{air_loop_hvac.name} | cfm/ton high limit violation | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage]/stage_caps[stage])}") + runner.registerInfo("stage summary: #{air_loop_hvac.name} | cfm/ton high limit violation | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage] / stage_caps[stage])}") end elsif required_stage_cap_ratio <= ratio_allowance_50_pct stage_cap_fractions[stage] = required_stage_cap_ratio stage_caps[stage] = required_stage_cap_ratio * (stage_cap_fractions[rated_stage_num] * dx_rated_cap_applied) if debug_verbose - runner.registerInfo("#{air_loop_hvac.name} | cfm/ton high limit violation (ratio_allowance_50_pct) | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage]/stage_caps[stage])}") + runner.registerInfo("stage summary: #{air_loop_hvac.name} | cfm/ton high limit violation (ratio_allowance_50_pct) | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage] / stage_caps[stage])}") end # we need at least 2 stages; apply the allowance value and accept some degree of being out of range elsif stage == (rated_stage_num - 1) stage_cap_fractions[stage] = ratio_allowance_50_pct 
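The clamping rule these hunks implement can be sketched independently of OpenStudio: each stage's airflow must land between the rated CFM/ton bounds (300 and 450 CFM/ton per the constants referenced in the measure), and airflow is moved to the nearest bound when it falls outside. A minimal standalone sketch under those assumptions (the helper name is ours, not part of the measure):

```ruby
# Flow-per-ton bounds referenced by the measure (IP units: CFM per ton).
CFM_PER_TON_MIN_RATED = 300.0
CFM_PER_TON_MAX_RATED = 450.0

# Returns the stage airflow (CFM) after enforcing the flow-per-ton bounds
# for a given stage capacity in tons. Airflow is raised to the minimum or
# lowered to the maximum when the ratio is out of bounds; the measure's
# full logic additionally falls back to adjusting stage capacity or
# removing the stage when airflow cannot be changed.
def clamp_stage_airflow(airflow_cfm, capacity_tons)
  flow_per_ton = airflow_cfm / capacity_tons
  if flow_per_ton < CFM_PER_TON_MIN_RATED
    CFM_PER_TON_MIN_RATED * capacity_tons   # raise airflow to the minimum
  elsif flow_per_ton > CFM_PER_TON_MAX_RATED
    CFM_PER_TON_MAX_RATED * capacity_tons   # lower airflow to the maximum
  else
    airflow_cfm                             # already in bounds
  end
end
```

For a 5-ton stage, 1000 CFM (200 CFM/ton) is raised to 1500 CFM, and 2500 CFM (500 CFM/ton) is lowered to 2250 CFM.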
stage_caps[stage] = ratio_allowance_50_pct * (stage_cap_fractions[rated_stage_num] * dx_rated_cap_applied) if debug_verbose - runner.registerInfo("#{air_loop_hvac.name} | cfm/ton high limit violation (rated_stage_num) | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage]/stage_caps[stage])}") + runner.registerInfo("stage summary: #{air_loop_hvac.name} | cfm/ton high limit violation (rated_stage_num) | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage] / stage_caps[stage])}") end # remove stage if maximum flow/ton ratio cannot be accommodated without violating minimum airflow ratios else + if debug_verbose + runner.registerInfo('stage summary: stage removed') + end stage_flows[stage] = false stage_flow_fractions[stage] = false stage_caps[stage] = false stage_cap_fractions[stage] = false if debug_verbose - runner.registerInfo("#{air_loop_hvac.name} | cfm/ton high limit violation (removing stage) | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = n/a") + runner.registerInfo("stage summary: #{air_loop_hvac.name} | cfm/ton high limit violation (removing stage) | #{heating_or_cooling} | stage = #{stage} | cfm/ton after adjustment = n/a") end end else stage_caps[stage] = stage_capacity if debug_verbose - runner.registerInfo("#{air_loop_hvac.name} | no cfm/ton violation | #{heating_or_cooling} | stage = #{stage} | cfm/ton = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage]/stage_caps[stage])}") + runner.registerInfo('stage summary: entered no adjustment loop') + runner.registerInfo("stage summary: #{air_loop_hvac.name} | no cfm/ton violation | #{heating_or_cooling} | stage = #{stage} | cfm/ton = #{m_3_per_sec_watts_to_cfm_per_ton(stage_flows[stage] / stage_caps[stage])}") end end end @@ -612,13 +612,13 @@ def adjust_cfm_per_ton_per_limits(stage_cap_fractions, stage_flows, stage_flow_f end def 
set_cooling_coil_stages(model, runner, stage_flows_cooling, stage_caps_cooling, num_cooling_stages, final_rated_cooling_cop, cool_cap_ft_curve_stages, cool_eir_ft_curve_stages, - cool_cap_ff_curve_stages, cool_eir_ff_curve_stages, cool_plf_fplr1, stage_rated_cop_frac_cooling, stage_GrossRatedSensibleHeatRatio_cooling, + cool_cap_ff_curve_stages, cool_eir_ff_curve_stages, cool_plf_fplr1, stage_rated_cop_frac_cooling, stage_gross_rated_sensible_heat_ratio_cooling, rated_stage_num_cooling, enable_cycling_losses_above_lowest_speed, air_loop_hvac, always_on, stage_caps_heating, debug_verbose) if (stage_flows_cooling.values.count(&:itself)) == (stage_caps_cooling.values.count(&:itself)) num_cooling_stages = stage_flows_cooling.values.count(&:itself) if debug_verbose - runner.registerInfo("The final number of cooling stages for #{air_loop_hvac.name} is #{num_cooling_stages}.") + runner.registerInfo("stage summary: The final number of cooling stages for #{air_loop_hvac.name} is #{num_cooling_stages}.") end else runner.registerError("For airloop #{air_loop_hvac.name}, the number of stages of cooling capacity is different from number of stages of cooling airflow. 
Revise measure as needed.") @@ -633,7 +633,7 @@ def set_cooling_coil_stages(model, runner, stage_flows_cooling, stage_caps_cooli new_dx_cooling_coil.setCondenserType('AirCooled') new_dx_cooling_coil.setRatedCOP(final_rated_cooling_cop * stage_rated_cop_frac_cooling[rated_stage_num_cooling]) new_dx_cooling_coil.setRatedTotalCoolingCapacity(stage_caps_cooling[rated_stage_num_cooling]) - new_dx_cooling_coil.setGrossRatedSensibleHeatRatio(stage_GrossRatedSensibleHeatRatio_cooling[rated_stage_num_cooling]) + new_dx_cooling_coil.setGrossRatedSensibleHeatRatio(stage_gross_rated_sensible_heat_ratio_cooling[rated_stage_num_cooling]) new_dx_cooling_coil.setRatedAirFlowRate(stage_flows_cooling[rated_stage_num_cooling]) new_dx_cooling_coil.setRatedEvaporatorFanPowerPerVolumeFlowRate2017(773.3) new_dx_cooling_coil.setTotalCoolingCapacityFunctionOfTemperatureCurve(cool_cap_ft_curve_stages[rated_stage_num_cooling]) @@ -672,19 +672,25 @@ def set_cooling_coil_stages(model, runner, stage_flows_cooling, stage_caps_cooli # loop through stages stage_caps_cooling.sort.each do |stage, cap| - next unless cap != false + # use current stage if allowed; otherwise use highest available stage as "dummy" + # this is a temporary workaround until OS translator supports different numbers of speed levels between heating and cooling + # GitHub issue: https://github.com/NREL/OpenStudio/issues/5277 + applied_stage = stage + if cap == false + applied_stage = stage_caps_cooling.reject { |k, v| v == false }.keys.min + end # add speed data for each stage dx_coil_speed_data = OpenStudio::Model::CoilCoolingDXMultiSpeedStageData.new(model) - dx_coil_speed_data.setGrossRatedTotalCoolingCapacity(stage_caps_cooling[stage]) - dx_coil_speed_data.setGrossRatedSensibleHeatRatio(stage_GrossRatedSensibleHeatRatio_cooling[stage]) - dx_coil_speed_data.setRatedAirFlowRate(stage_flows_cooling[stage]) - dx_coil_speed_data.setGrossRatedCoolingCOP(final_rated_cooling_cop * stage_rated_cop_frac_cooling[stage]) + 
dx_coil_speed_data.setGrossRatedTotalCoolingCapacity(stage_caps_cooling[applied_stage]) + dx_coil_speed_data.setGrossRatedSensibleHeatRatio(stage_gross_rated_sensible_heat_ratio_cooling[applied_stage]) + dx_coil_speed_data.setRatedAirFlowRate(stage_flows_cooling[applied_stage]) + dx_coil_speed_data.setGrossRatedCoolingCOP(final_rated_cooling_cop * stage_rated_cop_frac_cooling[applied_stage]) dx_coil_speed_data.setRatedEvaporatorFanPowerPerVolumeFlowRate2017(773.3) - dx_coil_speed_data.setTotalCoolingCapacityFunctionofTemperatureCurve(cool_cap_ft_curve_stages[stage]) - dx_coil_speed_data.setTotalCoolingCapacityFunctionofFlowFractionCurve(cool_cap_ff_curve_stages[stage]) - dx_coil_speed_data.setEnergyInputRatioFunctionofTemperatureCurve(cool_eir_ft_curve_stages[stage]) - dx_coil_speed_data.setEnergyInputRatioFunctionofFlowFractionCurve(cool_eir_ff_curve_stages[stage]) + dx_coil_speed_data.setTotalCoolingCapacityFunctionofTemperatureCurve(cool_cap_ft_curve_stages[applied_stage]) + dx_coil_speed_data.setTotalCoolingCapacityFunctionofFlowFractionCurve(cool_cap_ff_curve_stages[applied_stage]) + dx_coil_speed_data.setEnergyInputRatioFunctionofTemperatureCurve(cool_eir_ft_curve_stages[applied_stage]) + dx_coil_speed_data.setEnergyInputRatioFunctionofFlowFractionCurve(cool_eir_ff_curve_stages[applied_stage]) dx_coil_speed_data.setPartLoadFractionCorrelationCurve(cool_plf_fplr1) dx_coil_speed_data.setEvaporativeCondenserEffectiveness(0.9) dx_coil_speed_data.setNominalTimeforCondensateRemovaltoBegin(1000) @@ -694,7 +700,7 @@ def set_cooling_coil_stages(model, runner, stage_flows_cooling, stage_caps_cooli dx_coil_speed_data.autosizeRatedEvaporativeCondenserPumpPowerConsumption # add speed data to multispeed coil object - new_dx_cooling_coil.addStage(dx_coil_speed_data) unless stage_caps_heating[stage] == false + new_dx_cooling_coil.addStage(dx_coil_speed_data) # unless stage_caps_heating[stage] == false end end new_dx_cooling_coil @@ -708,16 +714,13 @@ def 
set_heating_coil_stages(model, runner, stage_flows_heating, stage_caps_heati if (stage_flows_heating.values.count(&:itself)) == (stage_caps_heating.values.count(&:itself)) num_heating_stages = stage_flows_heating.values.count(&:itself) if debug_verbose - runner.registerInfo("The final number of heating stages for #{air_loop_hvac.name} is #{num_heating_stages}.") + runner.registerInfo("stage summary: num_heating_stages: #{num_heating_stages}") + runner.registerInfo("stage summary: The final number of heating stages for #{air_loop_hvac.name} is #{num_heating_stages}.") end else runner.registerError("For airloop #{air_loop_hvac.name}, the number of stages of heating capacity is different from number of stages of heating airflow. Revise measure as needed.") end - #puts "stage_flows_heating: #{stage_flows_heating}" - #puts "num_heating_stages: #{num_heating_stages}" - #puts "stage_caps_heating: #{stage_caps_heating}" - # use single speed DX heating coil if only 1 speed new_dx_heating_coil = nil if num_heating_stages == 1 @@ -748,7 +751,6 @@ def set_heating_coil_stages(model, runner, stage_flows_heating, stage_caps_heati # use multi speed DX heating coil if multiple speeds are defined else - # define multi speed heating coil new_dx_heating_coil = OpenStudio::Model::CoilHeatingDXMultiSpeed.new(model) new_dx_heating_coil.setName("#{air_loop_hvac.name} Heat Pump heating Coil") @@ -769,23 +771,30 @@ def set_heating_coil_stages(model, runner, stage_flows_heating, stage_caps_heati # loop through stages stage_caps_heating.sort.each do |stage, cap| - next unless cap != false + # use current stage if allowed; otherwise use lowest available stage as "dummy" + # the stage that is actually used to articulate the speed level is the 'applied_stage' + # this is a temporary workaround until OS translator supports different numbers of speed levels between heating and cooling + # GitHub issue: https://github.com/NREL/OpenStudio/issues/5277 + applied_stage = stage + if cap == false + 
applied_stage = stage_caps_heating.reject { |k, v| v == false }.keys.min + end # add speed data for each stage dx_coil_speed_data = OpenStudio::Model::CoilHeatingDXMultiSpeedStageData.new(model) - dx_coil_speed_data.setGrossRatedHeatingCapacity(stage_caps_heating[stage]) - dx_coil_speed_data.setGrossRatedHeatingCOP(final_rated_heating_cop * _stage_rated_cop_frac_heating[stage]) - dx_coil_speed_data.setRatedAirFlowRate(stage_flows_heating[stage]) + dx_coil_speed_data.setGrossRatedHeatingCapacity(stage_caps_heating[applied_stage]) + dx_coil_speed_data.setGrossRatedHeatingCOP(final_rated_heating_cop * _stage_rated_cop_frac_heating[applied_stage]) + dx_coil_speed_data.setRatedAirFlowRate(stage_flows_heating[applied_stage]) dx_coil_speed_data.setRatedSupplyAirFanPowerPerVolumeFlowRate2017(773.3) - dx_coil_speed_data.setHeatingCapacityFunctionofTemperatureCurve(heat_cap_ft_curve_stages[stage]) + dx_coil_speed_data.setHeatingCapacityFunctionofTemperatureCurve(heat_cap_ft_curve_stages[applied_stage]) # set performance curves - dx_coil_speed_data.setHeatingCapacityFunctionofTemperatureCurve(heat_cap_ft_curve_stages[stage]) - dx_coil_speed_data.setHeatingCapacityFunctionofFlowFractionCurve(heat_cap_ff_curve_stages[stage]) - dx_coil_speed_data.setEnergyInputRatioFunctionofTemperatureCurve(heat_eir_ft_curve_stages[stage]) - dx_coil_speed_data.setEnergyInputRatioFunctionofFlowFractionCurve(heat_eir_ff_curve_stages[stage]) + dx_coil_speed_data.setHeatingCapacityFunctionofTemperatureCurve(heat_cap_ft_curve_stages[applied_stage]) + dx_coil_speed_data.setHeatingCapacityFunctionofFlowFractionCurve(heat_cap_ff_curve_stages[applied_stage]) + dx_coil_speed_data.setEnergyInputRatioFunctionofTemperatureCurve(heat_eir_ft_curve_stages[applied_stage]) + dx_coil_speed_data.setEnergyInputRatioFunctionofFlowFractionCurve(heat_eir_ff_curve_stages[applied_stage]) dx_coil_speed_data.setPartLoadFractionCorrelationCurve(heat_plf_fplr1) # add speed data to multispeed coil object - 
new_dx_heating_coil.addStage(dx_coil_speed_data) unless stage_caps_cooling[stage] == false + new_dx_heating_coil.addStage(dx_coil_speed_data) # previously: unless stage_caps_cooling[stage] == false; 'unless' temporarily removed until bug fix for (https://github.com/NREL/OpenStudio/issues/5277) end end new_dx_heating_coil @@ -896,6 +905,274 @@ def interpolate_from_two_ind_vars(runner, ind_var_1, ind_var_2, dep_var, input1, v22 * (input1 - x1) * (input2 - y1)) / ((x2 - x1) * (y2 - y1)) end + def upgrade_env_roof_insul_aedg(model, runner) + # set limit for minimum insulation in IP units -- this is used to limit input and for inferring insulation layer in construction + min_exp_r_val_ip = 1.0 + + # build standard to use OS standards methods + template = 'ComStock 90.1-2019' + std = Standard.build(template) + # get climate zone to set target_r_val_ip + climate_zone = OpenstudioStandards::Weather.model_get_climate_zone(model) + + # apply target R-value by climate zone + if climate_zone.include?('ASHRAE 169-2013-1') || climate_zone.include?('CEC15') + target_r_val_ip = 21 + elsif climate_zone.include?('ASHRAE 169-2013-2') || climate_zone.include?('ASHRAE 169-2013-3') + target_r_val_ip = 26 + elsif climate_zone.include?('ASHRAE 169-2013-4') || climate_zone.include?('ASHRAE 169-2013-5') || climate_zone.include?('ASHRAE 169-2013-6') || climate_zone.include?('CEC16') + target_r_val_ip = 33 + elsif climate_zone.include?('ASHRAE 169-2013-7') || climate_zone.include?('ASHRAE 169-2013-8') + target_r_val_ip = 37 + else # all DEER climate zones except 15 and 16 + target_r_val_ip = 26 + end + # Convert target_r_val_ip to si + target_r_val_si = OpenStudio.convert(target_r_val_ip, 'ft^2*h*R/Btu', 'm^2*K/W').get + + runner.registerInfo("roof measure: Target AEDG r-value for roof assemblies: #{target_r_val_ip}") + + # find existing roof assembly R-value + # Find all roofs and get a list of their constructions + roof_constructions = [] + model.getSurfaces.each do |surface| + if
surface.outsideBoundaryCondition == 'Outdoors' && surface.surfaceType == 'RoofCeiling' && surface.construction.is_initialized + roof_constructions << surface.construction.get + end + end + + # create an array of roofs and find range of starting construction R-value (not just insulation layer) + ext_surfs = [] + ext_surf_consts = [] + ext_surf_const_names = [] + roof_resist = [] + model.getSurfaces.each do |surface| + next unless (surface.outsideBoundaryCondition == 'Outdoors') && (surface.surfaceType == 'RoofCeiling') # which are outdoor roofs + + ext_surfs << surface + roof_const = surface.construction.get + # only add construction if it hasn't been added yet + ext_surf_consts << roof_const.to_Construction.get unless ext_surf_const_names.include?(roof_const.name.to_s) + ext_surf_const_names << roof_const.name.to_s + roof_resist << 1 / roof_const.thermalConductance.to_f + end + + # hashes to track constructions and materials made by the measure, to avoid duplicates + consts_old_new = {} + + # used to get net area of new construction + consts_new_old = {} + matls_hash = {} + + # array and counter for new constructions that are made, used for reporting final condition + final_consts = [] + + # loop through all constructions and materials used on roofs, edit and clone + ext_surf_consts.each do |ext_surf_const| + matls_in_const = ext_surf_const.layers.map.with_index { |l, i| { 'name' => l.name.to_s, 'index' => i, 'nomass' => !l.to_MasslessOpaqueMaterial.empty?, 'r_val' => l.to_OpaqueMaterial.get.thermalResistance, 'matl' => l } } + no_mass_matls = matls_in_const.select { |m| m['nomass'] == true } + + # measure will select the no-mass material with the highest R-value as the insulation layer -- if no no-mass materials are present, the measure will select the material with the highest R-value per inch + if no_mass_matls.empty? 
+ r_val_per_thick_vals = matls_in_const.map { |m| m['r_val'] / m['matl'].thickness } + max_matl_hash = matls_in_const.select { |m| m['index'] == r_val_per_thick_vals.index(r_val_per_thick_vals.max) } + r_vals = matls_in_const.map { |m| m['r_val'] } + else + r_vals = no_mass_matls.map { |m| m['r_val'] } + max_matl_hash = no_mass_matls.select { |m| m['r_val'] >= r_vals.max } + end + max_r_val_matl = max_matl_hash[0]['matl'] + max_r_val_matl_idx = max_matl_hash[0]['index'] + # check to make sure assumed insulation layer is between reasonable bounds + if max_r_val_matl.to_OpaqueMaterial.get.thermalResistance <= OpenStudio.convert(min_exp_r_val_ip, 'ft^2*h*R/Btu', 'm^2*K/W').get + runner.registerWarning("Construction '#{ext_surf_const.name}' does not appear to have an insulation layer and was not altered") + elsif max_r_val_matl.to_OpaqueMaterial.get.thermalResistance >= target_r_val_si + runner.registerInfo("roof measure: The insulation layer of construction #{ext_surf_const.name} exceeds the requested R-value and was not altered") + else + + # start new XPS material layer + ins_layer_xps = OpenStudio::Model::StandardOpaqueMaterial.new(model) + ins_layer_xps.setRoughness('MediumSmooth') + ins_layer_xps.setConductivity(0.029) + ins_layer_xps.setDensity(29.0) + ins_layer_xps.setSpecificHeat(1210.0) + ins_layer_xps.setSolarAbsorptance(0.7) + ins_layer_xps.setVisibleAbsorptance(0.7) + + # need to calculate required insulation addition + # clone the construction + final_const = ext_surf_const.clone(model).to_Construction.get + # get r-value + final_const_r_si = 1 / final_const.thermalConductance.to_f + final_const_r_ip = OpenStudio.convert(final_const_r_si, 'm^2*K/W', 'ft^2*h*R/Btu').get + # determine required r-value of XPS insulation to bring roof up to target + xps_target_r_val_si = target_r_val_si - final_const_r_si + target_r_val_ip = OpenStudio.convert(target_r_val_si, 'm^2*K/W', 'ft^2*h*R/Btu').get + xps_target_r_val_ip = OpenStudio.convert(xps_target_r_val_si,
'm^2*K/W', 'ft^2*h*R/Btu').get + # Calculate the thickness required to meet the desired R-Value + reqd_thickness_si = xps_target_r_val_si * ins_layer_xps.thermalConductivity + reqd_thickness_ip = OpenStudio.convert(reqd_thickness_si, 'm', 'in').get + # round to nearest half inch (2.0 divisor avoids integer division truncating the half inch) + reqd_thickness_ip = (reqd_thickness_ip * 2).round / 2.0 + ins_layer_xps.setThickness(reqd_thickness_si) + ins_layer_xps.setName("Expanded Polystyrene - Extruded - #{reqd_thickness_ip.round(1)} in.") + runner.registerInfo("roof measure: Construction #{ext_surf_const.name} starts with an R-value of #{final_const_r_ip.round(1)}. To achieve an R-Value of #{target_r_val_ip.round(1)}, this construction needs to add R-#{xps_target_r_val_ip.round(1)} of XPS insulation, which equates to #{reqd_thickness_ip} inches.") + + # insert new construction + final_const.insertLayer(1, ins_layer_xps) + final_const.setName("#{ext_surf_const.name} with Added Roof Insul") + final_consts << final_const + + # push to hashes + consts_old_new[ext_surf_const.name.to_s] = final_const + # push the object to hash key v. name + consts_new_old[final_const] = ext_surf_const + + # find already cloned insulation material and link to construction + found_matl = false + matls_hash.each do |orig, new| + if max_r_val_matl.name.to_s == orig + new_matl = new + matls_hash[max_r_val_matl.name.to_s] = new_matl + final_const.eraseLayer(max_r_val_matl_idx) + final_const.insertLayer(max_r_val_matl_idx, new_matl) + found_matl = true + end + end + end + end + + # register as not applicable if no constructions were altered + if final_consts.empty?
+ runner.registerAsNotApplicable('No applicable roofs were found.') + return true + end + + # loop through construction sets used in the model + default_const_sets = model.getDefaultConstructionSets + default_const_sets.each do |default_const_set| + if default_const_set.directUseCount > 0 + default_surf_const_set = default_const_set.defaultExteriorSurfaceConstructions + if !default_surf_const_set.empty? + start_const = default_surf_const_set.get.roofCeilingConstruction + + # creating new default construction set + new_default_const_set = default_const_set.clone(model) + new_default_const_set = new_default_const_set.to_DefaultConstructionSet.get + new_default_const_set.setName("#{default_const_set.name} Added Roof Insul") + + # create new surface set and link to construction set + new_default_surf_const_set = default_surf_const_set.get.clone(model) + new_default_surf_const_set = new_default_surf_const_set.to_DefaultSurfaceConstructions.get + new_default_surf_const_set.setName("#{default_surf_const_set.get.name} Added Roof Insul") + new_default_const_set.setDefaultExteriorSurfaceConstructions(new_default_surf_const_set) + + # use the hash to find the proper construction and link to the new default surface construction set + target_const = new_default_surf_const_set.roofCeilingConstruction + if !target_const.empty? 
+ target_const = target_const.get.name.to_s + found_const_flag = false + consts_old_new.each do |orig, new| + if target_const == orig + final_const = new + new_default_surf_const_set.setRoofCeilingConstruction(final_const) + found_const_flag = true + end + end + # this should never happen but is just an extra test in case something goes wrong with the measure code + runner.registerWarning("Measure couldn't find the roof construction named '#{target_const}' assigned to any exterior surfaces") if found_const_flag == false + end + + # swap all uses of the old construction set for the new + const_set_srcs = default_const_set.sources + const_set_srcs.each do |const_set_src| + bldg_src = const_set_src.to_Building + + # if statement for each type of object that can use a DefaultConstructionSet + if !bldg_src.empty? + bldg_src = bldg_src.get + bldg_src.setDefaultConstructionSet(new_default_const_set) + end + bldg_story_src = const_set_src.to_BuildingStory + if !bldg_story_src.empty? + bldg_story_src = bldg_story_src.get + bldg_story_src.setDefaultConstructionSet(new_default_const_set) + end + space_type_src = const_set_src.to_SpaceType + if !space_type_src.empty? + space_type_src = space_type_src.get + space_type_src.setDefaultConstructionSet(new_default_const_set) + end + space_src = const_set_src.to_Space + if !space_src.empty? + space_src = space_src.get + space_src.setDefaultConstructionSet(new_default_const_set) + end + end + end + end + + # link cloned and edited constructions for surfaces with hard assigned constructions + ext_surfs.each do |ext_surf| + if !ext_surf.isConstructionDefaulted && !ext_surf.construction.empty? + # use the hash to find the proper construction and link to surface + target_const = ext_surf.construction + if !target_const.empty?
target_const = target_const.get.name.to_s + consts_old_new.each do |orig, new| + if target_const == orig + final_const = new + ext_surf.setConstruction(final_const) + end + end + end + end + end + + # nothing will be done if there are no exterior surfaces + if ext_surfs.empty? + runner.registerAsNotApplicable('The building has no roofs.') + return true + end + + # report strings for initial condition + init_str = [] + ext_surf_consts.uniq.each do |ext_surf_const| + # unit conversion of roof insulation from SI units (m^2*K/W) to IP units (ft^2*h*R/Btu) + init_r_val_ip = OpenStudio.convert(1 / ext_surf_const.thermalConductance.to_f, 'm^2*K/W', 'ft^2*h*R/Btu').get + init_str << "#{ext_surf_const.name} (R-#{format '%.1f', init_r_val_ip})" + end + + # report strings for final condition, not all roof constructions, but only new ones made -- if roof didn't have insulation and was not altered we don't want to show it + final_str = [] + area_changed_si = 0 + final_consts.uniq.each do |final_const| + # unit conversion of roof insulation from SI units (m^2*K/W) to IP units (ft^2*h*R/Btu) + final_r_val_ip = OpenStudio.convert(1.0 / final_const.thermalConductance.to_f, 'm^2*K/W', 'ft^2*h*R/Btu').get + final_str << "#{final_const.name} (R-#{format '%.1f', final_r_val_ip})" + area_changed_si += final_const.getNetArea + end + + # add not applicable test if there were roof constructions but none of them were altered (already enough insulation or doesn't look like insulated roof) + if area_changed_si == 0 + runner.registerAsNotApplicable('No roofs were altered') + return true + else + # IP construction area for reporting + area_changed_ip = OpenStudio.convert(area_changed_si, 'm^2', 'ft^2').get + end + + # Report the initial condition + # runner.registerInitialCondition("The building had #{init_str.size} roof constructions: #{init_str.sort.join(', ')}") + + # Report the final condition + # runner.registerFinalCondition("The insulation for roofs was set to
R-#{target_r_val_ip.round(1)} -- this was applied to #{area_changed_ip.round(2)} ft2 across #{final_str.size} roof constructions: #{final_str.sort.join(', ')}") + runner.registerValue('env_roof_insul_roof_area_ft2', area_changed_ip.round(2), 'ft2') + return true + end + #### End predefined functions # define what happens when the measure is run @@ -920,34 +1197,34 @@ def run(model, runner, user_arguments) sizing_run = runner.getBoolArgumentValue('sizing_run', user_arguments) debug_verbose = runner.getBoolArgumentValue('debug_verbose', user_arguments) - ## adding output variables (for debugging) - #out_vars = [ - # 'Air System Mixed Air Mass Flow Rate', - # 'Fan Air Mass Flow Rate', - # 'Unitary System Predicted Sensible Load to Setpoint Heat Transfer Rate', - # 'Cooling Coil Total Cooling Rate', - # 'Cooling Coil Electricity Rate', - # 'Cooling Coil Runtime Fraction', - # 'Heating Coil Heating Rate', - # 'Heating Coil Electricity Rate', - # 'Heating Coil Runtime Fraction', - # 'Unitary System DX Coil Cycling Ratio', - # 'Unitary System DX Coil Speed Ratio', - # 'Unitary System DX Coil Speed Level', - # 'Unitary System Total Cooling Rate', - # 'Unitary System Total Heating Rate', - # 'Unitary System Electricity Rate', - # 'HVAC System Solver Iteration Count', - # 'Site Outdoor Air Drybulb Temperature', - # 'Heating Coil Crankcase Heater Electricity Rate', - # 'Heating Coil Defrost Electricity Rate' - # ] - # out_vars.each do |out_var_name| - # ov = OpenStudio::Model::OutputVariable.new('ov', model) - # ov.setKeyValue('*') - # ov.setReportingFrequency('detailed') - # ov.setVariableName(out_var_name) - # end + # adding output variables (for debugging) + out_vars = [ + 'Air System Mixed Air Mass Flow Rate', + # 'Fan Air Mass Flow Rate', + # 'Unitary System Predicted Sensible Load to Setpoint Heat Transfer Rate', + 'Cooling Coil Total Cooling Rate', + 'Cooling Coil Electricity Rate', + # 'Cooling Coil Runtime Fraction', + 'Heating Coil Heating Rate', + 'Heating Coil 
Electricity Rate', + # 'Heating Coil Runtime Fraction', + 'Unitary System DX Coil Cycling Ratio', + 'Unitary System DX Coil Speed Ratio', + 'Unitary System DX Coil Speed Level', + # 'Unitary System Total Cooling Rate', + # 'Unitary System Total Heating Rate', + # 'Unitary System Electricity Rate', + # 'HVAC System Solver Iteration Count', + 'Site Outdoor Air Drybulb Temperature', + # 'Heating Coil Crankcase Heater Electricity Rate', + # 'Heating Coil Defrost Electricity Rate' + ] + out_vars.each do |out_var_name| + ov = OpenStudio::Model::OutputVariable.new('ov', model) + ov.setKeyValue('*') + ov.setReportingFrequency('hourly') + ov.setVariableName(out_var_name) + end # build standard to use OS standards methods template = 'ComStock 90.1-2019' @@ -980,9 +1257,9 @@ def run(model, runner, user_arguments) air_loop_hvac.supplyComponents.each do |component| obj_type = component.iddObjectType.valueName.to_s # flag system if contains water coil; this will cause air loop to be skipped - is_water_coil = true if %w[Coil_Heating_Water Coil_Cooling_Water].any? { |word| (obj_type).include?(word) } + is_water_coil = true if ['Coil_Heating_Water', 'Coil_Cooling_Water'].any? { |word| (obj_type).include?(word) } # flag gas heating as true if gas coil is found in any airloop - prim_ht_fuel_type = 'gas' if %w[Gas GAS gas].any? { |word| (obj_type).include?(word) } + prim_ht_fuel_type = 'gas' if ['Gas', 'GAS', 'gas'].any? { |word| (obj_type).include?(word) } # check unitary systems for DX heating or water coils if obj_type == 'OS_AirLoopHVAC_UnitarySystem' unitary_sys = component.to_AirLoopHVACUnitarySystem.get @@ -997,7 +1274,7 @@ def run(model, runner, user_arguments) elsif ['Water'].any? { |word| (htg_coil).include?(word) } is_water_coil = true # check for gas heating - elsif %w[Gas GAS gas].any? { |word| (htg_coil).include?(word) } + elsif ['Gas', 'GAS', 'gas'].any? 
{ |word| (htg_coil).include?(word) } prim_ht_fuel_type = 'gas' end else @@ -1026,9 +1303,9 @@ def run(model, runner, user_arguments) (air_loop_hvac.name.get).include?(word) end # skip kitchens - next if %w[Kitchen KITCHEN Kitchen].any? { |word| (air_loop_hvac.name.get).include?(word) } + next if ['Kitchen', 'KITCHEN', 'Kitchen'].any? { |word| (air_loop_hvac.name.get).include?(word) } # skip VAV sysems - next if %w[VAV PVAV].any? { |word| (air_loop_hvac.name.get).include?(word) } + next if ['VAV', 'PVAV'].any? { |word| (air_loop_hvac.name.get).include?(word) } # skip if residential system next if air_loop_res?(air_loop_hvac) # skip if system has no outdoor air, also indication of residential system @@ -1109,33 +1386,13 @@ def run(model, runner, user_arguments) end # call roof insulation measure based on user input - if (roof==true) && (!selected_air_loops.empty?) - - #get path to economizer measure - econ_measure_path = Dir.glob(File.join(__dir__, '../upgrade_env_roof_insul_aedg')) - # Load economizer measure - measure = EnvRoofInsulAedg.new - - # Apply economizer measure - result = measure.run(model, runner, OpenStudio::Measure::OSArgumentMap.new) - result = runner.result - - # Check if the measure ran successfully - if result.value.valueName == 'Success' - runner.registerInfo('Roof insulation measure was applied successfully, as requested by user argument.') - elsif result.value.valueName == 'NA' - runner.registerInfo('Roof insulation measure was not applicable') - result = true - else - runner.registerError('Roof insulation measure failed.') - return false - end - + if (roof == true) && !selected_air_loops.empty? 
+ upgrade_env_roof_insul_aedg(model, runner) end # do sizing run with new equipment to set sizing-specific features if (is_sizing_run_needed == true) || (sizing_run == true) - runner.registerInfo('Sizing run needed') + runner.registerInfo('sizing summary: sizing run needed') return false if std.model_run_sizing_run(model, "#{Dir.pwd}/SR1") == false model.applySizingValues @@ -1159,7 +1416,6 @@ def run(model, runner, user_arguments) # add systems with high outdoor air ratios to a list for non-applicability oa_ration_allowance = 0.55 selected_air_loops.each do |air_loop_hvac| - thermal_zone = air_loop_hvac.thermalZones[0] # get the min OA flow rate for calculating unit OA fraction @@ -1224,11 +1480,7 @@ def run(model, runner, user_arguments) end # if supply operating schedule does not include a 0, the unit does not night cycle - unit_night_cycles = if night_cyc_sched_vals.include? [0, 0.0] - true - else - false - end + unit_night_cycles = night_cyc_sched_vals.include? [0, 0.0] # register as not applicable if OA limit exceeded and unit has night cycling schedules next unless (min_oa_flow_ratio > oa_ration_allowance) && (unit_night_cycles == true) @@ -1267,12 +1519,7 @@ def run(model, runner, user_arguments) # building type not applicable to ERVs as part of this measure will receive no additional or modification of ERV systems # this is only relevant if the user selected to add ERVs # space type applicability is handled later in the code when looping through individual air loops - building_types_to_exclude = %w[ - RFF - RSD - QuickServiceRestaurant - FullServiceRestaurant - ] + building_types_to_exclude = ['RFF', 'RSD', 'QuickServiceRestaurant', 'FullServiceRestaurant'] # determine building type applicability for ERV btype_erv_applicable = true building_types_to_exclude = building_types_to_exclude.map(&:downcase) @@ -1297,7 +1544,7 @@ def run(model, runner, user_arguments) # Get ER/HR type from climate zone _, _, doas_type = - if %w[1A 2A 3A 4A 5A 6A 7 7A 8 
8A].include?(climate_zone_classification) + if ['1A', '2A', '3A', '4A', '5A', '6A', '7', '7A', '8', '8A'].include?(climate_zone_classification) [12.7778, 19.4444, 'ERV'] else [15.5556, 19.4444, 'HRV'] @@ -1427,7 +1674,7 @@ def run(model, runner, user_arguments) heat_cap_ft2 = model_add_curve(model, 'h_cap_medium', custom_data_json, std) heat_cap_ft3 = model_add_curve(model, 'h_cap_high', custom_data_json, std) heat_cap_ft4 = model_add_curve(model, 'h_cap_boost', custom_data_json, std) - heat_cap_ft_curve_stages = {1=>heat_cap_ft1, 2=>heat_cap_ft2, 3=>heat_cap_ft3, 4=>heat_cap_ft4} + heat_cap_ft_curve_stages = { 1 => heat_cap_ft1, 2 => heat_cap_ft2, 3 => heat_cap_ft3, 4 => heat_cap_ft4 } end # Curve Import - Heating efficiency as a function of temperature @@ -1446,7 +1693,7 @@ def run(model, runner, user_arguments) heat_eir_ft2 = model_add_curve(model, 'h_eir_medium', custom_data_json, std) heat_eir_ft3 = model_add_curve(model, 'h_eir_high', custom_data_json, std) heat_eir_ft4 = model_add_curve(model, 'h_eir_boost', custom_data_json, std) - heat_eir_ft_curve_stages = {1=>heat_eir_ft1, 2=>heat_eir_ft2, 3=>heat_eir_ft3, 4=>heat_eir_ft4} + heat_eir_ft_curve_stages = { 1 => heat_eir_ft1, 2 => heat_eir_ft2, 3 => heat_eir_ft3, 4 => heat_eir_ft4 } end # Curve Import - Heating capacity as a function of flow rate @@ -1459,7 +1706,7 @@ def run(model, runner, user_arguments) heat_cap_ff_curve_stages = { 1 => heat_cap_ff1 } when 'cchpc_2027_spec' heat_cap_ff1 = model_add_curve(model, 'h_cap_allstages_ff', custom_data_json, std) - heat_cap_ff_curve_stages = {1=>heat_cap_ff1, 2=>heat_cap_ff1, 3=>heat_cap_ff1, 4=>heat_cap_ff1} + heat_cap_ff_curve_stages = { 1 => heat_cap_ff1, 2 => heat_cap_ff1, 3 => heat_cap_ff1, 4 => heat_cap_ff1 } end # Curve Import - Heating efficiency as a function of flow rate @@ -1472,7 +1719,7 @@ def run(model, runner, user_arguments) heat_eir_ff_curve_stages = { 1 => heat_eir_ff1 } when 'cchpc_2027_spec' heat_eir_ff1 = model_add_curve(model, 
'h_eir_allstages_ff', custom_data_json, std) - heat_eir_ff_curve_stages = {1=>heat_eir_ff1, 2=>heat_eir_ff1, 3=>heat_eir_ff1, 4=>heat_eir_ff1} + heat_eir_ff_curve_stages = { 1 => heat_eir_ff1, 2 => heat_eir_ff1, 3 => heat_eir_ff1, 4 => heat_eir_ff1 } end # Curve Import - Heating efficiency as a function of part load ratio @@ -1526,7 +1773,7 @@ def run(model, runner, user_arguments) # convert component to string name obj_type = component.iddObjectType.valueName.to_s # skip unless component is of relevant type - next unless %w[Fan Unitary Coil].any? { |word| (obj_type).include?(word) } + next unless ['Fan', 'Unitary', 'Coil'].any? { |word| (obj_type).include?(word) } # make list of equipment to delete equip_to_delete << component @@ -1596,7 +1843,7 @@ def run(model, runner, user_arguments) orig_clg_coil_gross_cap = orig_clg_coil.autosizedRatedHighSpeedTotalCoolingCapacity.get elsif orig_clg_coil.ratedHighSpeedTotalCoolingCapacity.is_initialized orig_clg_coil_gross_cap = orig_clg_coil.ratedHighSpeedTotalCoolingCapacity.get - elsif + else runner.registerError("Original cooling coil capacity for #{air_loop_hvac.name} not found. Either it was not directly specified, or sizing run data is not available.") end else @@ -1631,7 +1878,7 @@ def run(model, runner, user_arguments) # convert component to string name obj_type = component.iddObjectType.valueName.to_s # skip unless component is of relevant type - next unless %w[Fan Unitary Coil].any? { |word| (obj_type).include?(word) } + next unless ['Fan', 'Unitary', 'Coil'].any? 
{ |word| (obj_type).include?(word) } # make list of equipment to delete equip_to_delete << component @@ -1741,7 +1988,7 @@ def run(model, runner, user_arguments) old_terminal.remove air_loop_hvac.removeBranchForZone(thermal_zone) # define new terminal box - #new_terminal = OpenStudio::Model::AirTerminalSingleDuctConstantVolumeNoReheat.new(model, always_on) + # new_terminal = OpenStudio::Model::AirTerminalSingleDuctConstantVolumeNoReheat.new(model, always_on) new_terminal = OpenStudio::Model::AirTerminalSingleDuctVAVHeatAndCoolNoReheat.new(model) # set name of terminal box and add new_terminal.setName("#{thermal_zone.name} VAV Terminal") @@ -1764,7 +2011,7 @@ def run(model, runner, user_arguments) wntr_design_day_temp_c = li_htg_dsgn_day_temps.min # get user-input heating sizing temperature - htg_sizing_option_hash = { '47F' => 47, '17F' => 17, '0F' => 0, '-10F' => -10} + htg_sizing_option_hash = { '47F' => 47, '17F' => 17, '0F' => 0, '-10F' => -10 } htg_sizing_option_f = htg_sizing_option_hash[htg_sizing_option] htg_sizing_option_c = OpenStudio.convert(htg_sizing_option_f, 'F', 'C').get hp_sizing_temp_c = nil @@ -1772,7 +2019,7 @@ def run(model, runner, user_arguments) if htg_sizing_option_c >= wntr_design_day_temp_c hp_sizing_temp_c = htg_sizing_option_c if debug_verbose - runner.registerInfo("For heat pump sizing, heating design day temperature is #{OpenStudio.convert( + runner.registerInfo("sizing summary: For heat pump sizing, heating design day temperature is #{OpenStudio.convert( wntr_design_day_temp_c, 'C', 'F' ).get.round(0)}F, and the user-input temperature to size on is #{OpenStudio.convert( htg_sizing_option_c, 'C', 'F' @@ -1781,7 +2028,7 @@ def run(model, runner, user_arguments) else hp_sizing_temp_c = wntr_design_day_temp_c if debug_verbose - runner.registerInfo("For heat pump sizing, heating design day temperature is #{OpenStudio.convert( + runner.registerInfo("sizing summary: For heat pump sizing, heating design day temperature is 
#{OpenStudio.convert( wntr_design_day_temp_c, 'C', 'F' ).get.round(0)}F, and the user-input temperature to size on is #{OpenStudio.convert( htg_sizing_option_c, 'C', 'F' @@ -1792,7 +2039,7 @@ def run(model, runner, user_arguments) ## define number of stages, and capacity/airflow fractions for each stage (_, _, rated_stage_num_heating, rated_stage_num_cooling, final_rated_cooling_cop, final_rated_heating_cop, stage_cap_fractions_heating, stage_flow_fractions_heating, stage_cap_fractions_cooling, stage_flow_fractions_cooling, stage_rated_cop_frac_heating, - stage_rated_cop_frac_cooling, boost_stage_num_and_max_temp_tuple, stage_GrossRatedSensibleHeatRatio_cooling, enable_cycling_losses_above_lowest_speed, reference_cooling_cfm_per_ton, + stage_rated_cop_frac_cooling, boost_stage_num_and_max_temp_tuple, stage_gross_rated_sensible_heat_ratio_cooling, enable_cycling_losses_above_lowest_speed, reference_cooling_cfm_per_ton, reference_heating_cfm_per_ton) = assign_staging_data(custom_data_json, std) # get appropriate design heating load @@ -1849,7 +2096,7 @@ def run(model, runner, user_arguments) # override design heating load with Q = vdot * rho * cp * (Tout - Tin) orig_htg_coil_gross_cap = design_air_flow_from_zone_sizing_heating_m_3_per_s * air_density_kg_per_m_3 * air_heat_capacity_j_per_kg_k * (coil_leaving_temperature_c - coil_entering_temperature_c) if debug_verbose - runner.registerInfo("original heating design load overriden from sizing run: #{orig_htg_coil_gross_cap_old.round(3)} W to #{orig_htg_coil_gross_cap.round(3)} W for airloop (#{air_loop_hvac.name})") + runner.registerInfo("sizing summary: original heating design load overridden from sizing run: #{orig_htg_coil_gross_cap_old.round(3)} W to #{orig_htg_coil_gross_cap.round(3)} W for airloop (#{air_loop_hvac.name})") end @@ -1917,6 +2164,7 @@ def run(model, runner, user_arguments) design_cooling_airflow_m_3_per_s = old_terminal_sa_flow_m3_per_s design_heating_airflow_m_3_per_s =
design_air_flow_from_zone_sizing_heating_m_3_per_s end + # sizing result summary output log used for measure documentation if debug_verbose runner.registerInfo("sizing summary: sizing air loop (#{air_loop_hvac.name}): air_loop_hvac name = #{air_loop_hvac.name}") @@ -1934,30 +2182,28 @@ def run(model, runner, user_arguments) runner.registerInfo("sizing summary: sizing air loop (#{air_loop_hvac.name}): upsized rated heating capacity W = #{dx_rated_htg_cap_applied.round(2)}") runner.registerInfo("sizing summary: sizing air loop (#{air_loop_hvac.name}): upsized rated cooling capacity W = #{dx_rated_clg_cap_applied.round(2)}") runner.registerInfo("sizing summary: sizing air loop (#{air_loop_hvac.name}): final upsizing percentage % = #{((dx_rated_htg_cap_applied - orig_clg_coil_gross_cap) / orig_clg_coil_gross_cap * 100).round(2)}") - runner.registerInfo("") end # calculate applied upsizing factor upsize_factor = (dx_rated_htg_cap_applied - orig_clg_coil_gross_cap) / orig_clg_coil_gross_cap # upsize airflow accordingly - design_heating_airflow_m_3_per_s = design_heating_airflow_m_3_per_s * (1 + upsize_factor) - design_cooling_airflow_m_3_per_s = design_cooling_airflow_m_3_per_s * (1 + upsize_factor) + design_heating_airflow_m_3_per_s *= (1 + upsize_factor) + design_cooling_airflow_m_3_per_s *= (1 + upsize_factor) if debug_verbose - runner.registerInfo("sizing summary: before rated cfm/ton adjustmant") + runner.registerInfo('sizing summary: before rated cfm/ton adjustment') runner.registerInfo("sizing summary: dx_rated_htg_cap_applied = #{dx_rated_htg_cap_applied}") runner.registerInfo("sizing summary: design_heating_airflow_m_3_per_s = #{design_heating_airflow_m_3_per_s}") - runner.registerInfo("sizing summary: cfm/ton heating = #{m_3_per_sec_watts_to_cfm_per_ton(design_heating_airflow_m_3_per_s/dx_rated_htg_cap_applied)}") + runner.registerInfo("sizing summary: cfm/ton heating = #{m_3_per_sec_watts_to_cfm_per_ton(design_heating_airflow_m_3_per_s /
dx_rated_htg_cap_applied)}") runner.registerInfo("sizing summary: dx_rated_clg_cap_applied = #{dx_rated_clg_cap_applied}") runner.registerInfo("sizing summary: design_cooling_airflow_m_3_per_s = #{design_cooling_airflow_m_3_per_s}") - runner.registerInfo("sizing summary: cfm/ton heating = #{m_3_per_sec_watts_to_cfm_per_ton(design_cooling_airflow_m_3_per_s/dx_rated_clg_cap_applied)}") - runner.registerInfo("") + runner.registerInfo("sizing summary: cfm/ton heating = #{m_3_per_sec_watts_to_cfm_per_ton(design_cooling_airflow_m_3_per_s / dx_rated_clg_cap_applied)}") end # adjust if rated/highest stage cfm/ton is violated - cfm_per_ton_rated_heating = m_3_per_sec_watts_to_cfm_per_ton(design_heating_airflow_m_3_per_s/dx_rated_htg_cap_applied) - cfm_per_ton_rated_cooling = m_3_per_sec_watts_to_cfm_per_ton(design_cooling_airflow_m_3_per_s/dx_rated_clg_cap_applied) + cfm_per_ton_rated_heating = m_3_per_sec_watts_to_cfm_per_ton(design_heating_airflow_m_3_per_s / dx_rated_htg_cap_applied) + cfm_per_ton_rated_cooling = m_3_per_sec_watts_to_cfm_per_ton(design_cooling_airflow_m_3_per_s / dx_rated_clg_cap_applied) if cfm_per_ton_rated_heating < CFM_PER_TON_MIN_RATED design_heating_airflow_m_3_per_s = cfm_per_ton_to_m_3_per_sec_watts(CFM_PER_TON_MIN_RATED) * dx_rated_htg_cap_applied elsif cfm_per_ton_rated_heating > CFM_PER_TON_MAX_RATED @@ -1970,18 +2216,15 @@ def run(model, runner, user_arguments) end if debug_verbose - runner.registerInfo("sizing summary: after rated cfm/ton adjustmant") + runner.registerInfo('sizing summary: after rated cfm/ton adjustmant') runner.registerInfo("sizing summary: dx_rated_htg_cap_applied = #{dx_rated_htg_cap_applied}") runner.registerInfo("sizing summary: design_heating_airflow_m_3_per_s = #{design_heating_airflow_m_3_per_s}") - runner.registerInfo("sizing summary: cfm/ton heating = #{m_3_per_sec_watts_to_cfm_per_ton(design_heating_airflow_m_3_per_s/dx_rated_htg_cap_applied)}") + runner.registerInfo("sizing summary: cfm/ton heating = 
#{m_3_per_sec_watts_to_cfm_per_ton(design_heating_airflow_m_3_per_s / dx_rated_htg_cap_applied)}") runner.registerInfo("sizing summary: dx_rated_clg_cap_applied = #{dx_rated_clg_cap_applied}") runner.registerInfo("sizing summary: design_cooling_airflow_m_3_per_s = #{design_cooling_airflow_m_3_per_s}") - runner.registerInfo("sizing summary: cfm/ton heating = #{m_3_per_sec_watts_to_cfm_per_ton(design_cooling_airflow_m_3_per_s/dx_rated_clg_cap_applied)}") - runner.registerInfo("") + runner.registerInfo("sizing summary: cfm/ton cooling = #{m_3_per_sec_watts_to_cfm_per_ton(design_cooling_airflow_m_3_per_s / dx_rated_clg_cap_applied)}") runner.registerInfo("sizing summary: upsize_factor = #{upsize_factor}") - runner.registerInfo("") runner.registerInfo("sizing summary: heating_load_category = #{heating_load_category}") - runner.registerInfo("") end # set airloop design airflow based on the maximum of heating and cooling design flow @@ -1991,28 +2234,39 @@ def run(model, runner, user_arguments) design_cooling_airflow_m_3_per_s end - # increase design airflow to accomodate upsizing - air_loop_hvac.setDesignSupplyAirFlowRate(design_airflow_for_sizing_m_3_per_s) - controller_oa.setMaximumOutdoorAirFlowRate(design_airflow_for_sizing_m_3_per_s) + # reset supply airflow if less than minimum OA + if oa_flow_m3_per_s > design_airflow_for_sizing_m_3_per_s + design_airflow_for_sizing_m_3_per_s = oa_flow_m3_per_s + end + if oa_flow_m3_per_s > design_cooling_airflow_m_3_per_s + design_cooling_airflow_m_3_per_s = oa_flow_m3_per_s + end + if oa_flow_m3_per_s > design_heating_airflow_m_3_per_s + design_heating_airflow_m_3_per_s = oa_flow_m3_per_s + end # set minimum flow rate to 0.40, or higher as needed to maintain outdoor air requirements min_flow = 0.40 # determine minimum airflow ratio for sizing; 0.4 is used unless OA requires higher min_airflow_m3_per_s = nil - if min_oa_flow_ratio > min_flow - min_airflow_ratio = min_oa_flow_ratio - min_airflow_m3_per_s = min_oa_flow_ratio *
design_airflow_for_sizing_m_3_per_s + current_min_oa_flow_ratio = oa_flow_m3_per_s / design_heating_airflow_m_3_per_s + if current_min_oa_flow_ratio > min_flow + min_airflow_ratio = current_min_oa_flow_ratio + min_airflow_m3_per_s = min_airflow_ratio * design_airflow_for_sizing_m_3_per_s else min_airflow_ratio = min_flow min_airflow_m3_per_s = min_airflow_ratio * design_airflow_for_sizing_m_3_per_s end + # increase design airflow to accommodate upsizing + air_loop_hvac.setDesignSupplyAirFlowRate(design_airflow_for_sizing_m_3_per_s) + controller_oa.setMaximumOutdoorAirFlowRate(design_airflow_for_sizing_m_3_per_s) + if debug_verbose runner.registerInfo("sizing summary: design_airflow_for_sizing_m_3_per_s = #{design_airflow_for_sizing_m_3_per_s}") runner.registerInfo("sizing summary: min_oa_flow_ratio = #{min_oa_flow_ratio} | min_flow = #{min_flow}") runner.registerInfo("sizing summary: min_airflow_m3_per_s = #{min_airflow_m3_per_s}") - runner.registerInfo("") end # determine airflows for each stage of heating @@ -2039,12 +2293,11 @@ def run(model, runner, user_arguments) end if debug_verbose - runner.registerInfo("sizing summary: before cfm/ton adjustments for lower stages") + runner.registerInfo('sizing summary: before cfm/ton adjustments for lower stages') runner.registerInfo("sizing summary: stage_flow_fractions_heating = #{stage_flow_fractions_heating}") runner.registerInfo("sizing summary: stage_flow_fractions_cooling = #{stage_flow_fractions_cooling}") runner.registerInfo("sizing summary: stage_flows_heating = #{stage_flows_heating}") runner.registerInfo("sizing summary: stage_flows_cooling = #{stage_flows_cooling}") - runner.registerInfo("") end # heating - align stage CFM/ton bounds where possible @@ -2080,10 +2333,9 @@ def run(model, runner, user_arguments) ) if debug_verbose - runner.registerInfo("sizing summary: after cfm/ton adjustments for lower stages") + runner.registerInfo('sizing summary: after cfm/ton adjustments for lower stages')
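Note for reviewers: the cfm/ton clamping in the hunks above leans on the measure's `m_3_per_sec_watts_to_cfm_per_ton` and `cfm_per_ton_to_m_3_per_sec_watts` helpers, whose bodies are outside this diff. The sketch below shows the conversion math and the rated-airflow clamping pattern being applied; the helper bodies, the `clamp_rated_airflow` wrapper, and the bound values are illustrative assumptions, not the measure's actual constants.

```ruby
# Sketch only: helper bodies and bound values are assumptions, not taken from the measure.
M3_PER_S_TO_CFM = 2118.88 # 1 m^3/s expressed in ft^3/min
W_PER_TON = 3516.85       # 1 ton of refrigeration expressed in watts

CFM_PER_TON_MIN_RATED = 300.0 # placeholder lower bound for illustration
CFM_PER_TON_MAX_RATED = 450.0 # placeholder upper bound for illustration

# (m^3/s per W) -> cfm per ton
def m_3_per_sec_watts_to_cfm_per_ton(m3s_per_watt)
  m3s_per_watt * M3_PER_S_TO_CFM * W_PER_TON
end

# cfm per ton -> (m^3/s per W); inverse of the conversion above
def cfm_per_ton_to_m_3_per_sec_watts(cfm_per_ton)
  cfm_per_ton / (M3_PER_S_TO_CFM * W_PER_TON)
end

# Clamp a rated airflow so cfm/ton stays inside the allowed band,
# mirroring the heating/cooling adjustment pattern in this diff.
def clamp_rated_airflow(flow_m3_per_s, capacity_w)
  cfm_per_ton = m_3_per_sec_watts_to_cfm_per_ton(flow_m3_per_s / capacity_w)
  if cfm_per_ton < CFM_PER_TON_MIN_RATED
    cfm_per_ton_to_m_3_per_sec_watts(CFM_PER_TON_MIN_RATED) * capacity_w
  elsif cfm_per_ton > CFM_PER_TON_MAX_RATED
    cfm_per_ton_to_m_3_per_sec_watts(CFM_PER_TON_MAX_RATED) * capacity_w
  else
    flow_m3_per_s
  end
end
```

Because the forward and inverse conversions are exact reciprocals, a clamped flow always lands exactly on the violated bound, which is why the measure can recompute cfm/ton afterward for the debug log and see an in-band value.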
runner.registerInfo("sizing summary: stage_flows_heating = #{stage_flows_heating}") runner.registerInfo("sizing summary: stage_flows_cooling = #{stage_flows_cooling}") - runner.registerInfo("") end #################################### Start performance curve assignment @@ -2093,11 +2345,12 @@ def run(model, runner, user_arguments) # adjust rated cooling cop if final_rated_cooling_cop == false final_rated_cooling_cop = adjust_rated_cop_from_ref_cfm_per_ton(runner, stage_flows_cooling[rated_stage_num_cooling], - reference_cooling_cfm_per_ton, - stage_caps_cooling[rated_stage_num_cooling], - get_rated_cop_cooling(stage_caps_cooling[rated_stage_num_cooling]), - cool_eir_ff_curve_stages[rated_stage_num_cooling]) - runner.registerInfo("rated cooling COP adjusted from #{get_rated_cop_cooling(stage_caps_cooling[rated_stage_num_cooling]).round(3)} to #{final_rated_cooling_cop.round(3)} based on reference cfm/ton of #{reference_cooling_cfm_per_ton.round(0)} (i.e., average value of actual products)") + reference_cooling_cfm_per_ton, + stage_caps_cooling[rated_stage_num_cooling], + get_rated_cop_cooling(stage_caps_cooling[rated_stage_num_cooling]), + cool_eir_ff_curve_stages[rated_stage_num_cooling]) + runner.registerInfo("sizing summary: rated cooling COP adjusted from #{get_rated_cop_cooling(stage_caps_cooling[rated_stage_num_cooling]).round(3)} to #{final_rated_cooling_cop.round(3)} based on reference cfm/ton of #{reference_cooling_cfm_per_ton.round(0)} (i.e., average value of actual products)") + runner.registerInfo("sizing summary: sizing air loop (#{air_loop_hvac.name}): final rated cooling COP = #{final_rated_cooling_cop.round(3)}") end # define new cooling coil @@ -2115,7 +2368,7 @@ def run(model, runner, user_arguments) cool_eir_ff_curve_stages, cool_plf_fplr1, stage_rated_cop_frac_cooling, - stage_GrossRatedSensibleHeatRatio_cooling, + stage_gross_rated_sensible_heat_ratio_cooling, rated_stage_num_cooling, enable_cycling_losses_above_lowest_speed, air_loop_hvac, @@ 
-2134,7 +2387,8 @@ def run(model, runner, user_arguments) stage_caps_heating[rated_stage_num_heating], get_rated_cop_heating(stage_caps_heating[rated_stage_num_heating]), heat_eir_ff_curve_stages[rated_stage_num_heating]) - runner.registerInfo("rated heating COP adjusted from #{get_rated_cop_heating(stage_caps_heating[rated_stage_num_heating]).round(3)} to #{final_rated_heating_cop.round(3)} based on reference cfm/ton of #{reference_heating_cfm_per_ton.round(0)} (i.e., average value of actual products)") + runner.registerInfo("sizing summary: rated heating COP adjusted from #{get_rated_cop_heating(stage_caps_heating[rated_stage_num_heating]).round(3)} to #{final_rated_heating_cop.round(3)} based on reference cfm/ton of #{reference_heating_cfm_per_ton.round(0)} (i.e., average value of actual products)") + runner.registerInfo("sizing summary: sizing air loop (#{air_loop_hvac.name}): final rated heating COP = #{final_rated_heating_cop.round(3)}") end # define new heating coil @@ -2241,67 +2495,6 @@ def run(model, runner, user_arguments) # set no load design flow rate new_air_to_air_heatpump.setSupplyAirFlowRateWhenNoCoolingorHeatingisRequired(min_airflow_m3_per_s) - # new_air_to_air_heatpump.setDOASDXCoolingCoilLeavingMinimumAirTemperature(7.5) # set minimum discharge temp to 45F, required for VAV operation - - ## EMS control for boost mode - #unless boost_stage_num_and_max_temp_tuple.empty? 
- - # puts "DEBUGGING**********************************" - - # # set sensor for speed level - # sens_speed_level = OpenStudio::Model::EnergyManagementSystemSensor.new(model, 'Unitary System DX Coil Speed Level') - # sens_speed_level.setName("sens_speed_level_#{new_air_to_air_heatpump.name.get.to_s.gsub("-", "")}") - # sens_speed_level.setKeyName("#{new_air_to_air_heatpump.name.get}") - - # # set sensor for outdoor air temperature - # sens_speed_level = OpenStudio::Model::EnergyManagementSystemSensor.new(model, 'Site Outdoor Air Drybulb Temperature') - # sens_speed_level.setName("sens_oa_temp_#{new_air_to_air_heatpump.name.get.to_s.gsub("-", "")}") - # sens_speed_level.setKeyName("Environment") - - # # set sensor for predicted load - # # this is used to determine heating or cooling mode - # sens_predicted_load = OpenStudio::Model::EnergyManagementSystemSensor.new(model, 'Unitary System Predicted Sensible Load to Setpoint Heat Transfer Rate') - # sens_predicted_load.setName("sens_predicted_load_#{new_air_to_air_heatpump.name.get.to_s.gsub("-", "")}") - # sens_predicted_load.setKeyName("#{new_air_to_air_heatpump.name.get}") - - # # set actuator - unitary system speed level - # act_speed_level = OpenStudio::Model::EnergyManagementSystemActuator.new(new_air_to_air_heatpump, - # 'Coil Speed Control', - # 'Unitary System DX Coil Speed Value' - # ) - # act_speed_level.setName("act_speed_level_#{new_air_to_air_heatpump.name.get.to_s.gsub("-", "")}") - - # puts "boost_speed_level: #{boost_stage_num_and_max_temp_tuple[0]}" - # puts "boost_speed_max_temp_c: #{boost_stage_num_and_max_temp_tuple[1]}" - - # #### Program ##### - # # reset OA to min OA if there is a call for economizer but no cooling load - # prgrm_hp_speed_override = model.getEnergyManagementSystemTrendVariableByName('hp_speed_override') - # unless prgrm_hp_speed_override.is_initialized - # prgrm_hp_speed_override = OpenStudio::Model::EnergyManagementSystemProgram.new(model) - # 
prgrm_hp_speed_override.setName("#{air_loop_hvac.name.get.to_s.gsub("-", "")}_program") - # prgrm_hp_speed_override_body = <<-EMS - # SET #{act_speed_level.handle} = #{act_speed_level.handle}, - # SET sens_speed_level = #{sens_speed_level.name}, - # SET boost_speed_level = #{boost_stage_num_and_max_temp_tuple[0]}, - # SET boost_speed_max_temp_c = #{boost_stage_num_and_max_temp_tuple[1]}, - # SET sens_oa_temp = #{sens_speed_level.name} - # SET sens_predicted_load = #{sens_predicted_load.name} - - # IF ((sens_oa_temp > boost_speed_max_temp_c) && (sens_speed_level > (boost_speed_level-1)) && (sens_predicted_load > 0)), - # SET #{act_speed_level.handle} = (boost_speed_level-1), - # ELSE, - # SET #{act_speed_level.handle} = Null, - # ENDIF - # EMS - # prgrm_hp_speed_override.setBody(prgrm_hp_speed_override_body) - # end - # programs_at_beginning_of_timestep = OpenStudio::Model::EnergyManagementSystemProgramCallingManager.new(model) - # programs_at_beginning_of_timestep.setName("#{air_loop_hvac.name.get.to_s.gsub("-", "")}_Programs_InsideHVACSystemIterationLoop") - # programs_at_beginning_of_timestep.setCallingPoint('InsideHVACSystemIterationLoop') - # programs_at_beginning_of_timestep.addProgram(prgrm_hp_speed_override) - #end - # add dcv to air loop if dcv flag is true if dcv == true oa_system = air_loop_hvac.airLoopHVACOutdoorAirSystem.get @@ -2350,14 +2543,7 @@ def run(model, runner, user_arguments) next unless (hr == true) && (btype_erv_applicable == true) # check for space type applicability - thermal_zone_names_to_exclude = %w[ - Kitchen - kitchen - KITCHEN - Dining - dining - DINING - ] + thermal_zone_names_to_exclude = ['Kitchen', 'kitchen', 'KITCHEN', 'Dining', 'dining', 'DINING'] # skip air loops that serve non-applicable space types and warn user if thermal_zone_names_to_exclude.any? 
{ |word| (thermal_zone.name.to_s).include?(word) } runner.registerWarning("The user selected to add energy recovery to the HP-RTUs, but thermal zone #{thermal_zone.name} is a non-applicable space type for energy recovery. Any existing energy recovery will remain for consistancy, but no new energy recovery will be added.") @@ -2470,7 +2656,7 @@ def run(model, runner, user_arguments) # report final condition of model runner.registerFinalCondition("The building finished with heat pump RTUs replacing the HVAC equipment for #{selected_air_loops.size} air loops.") - #model.getOutputControlFiles.setOutputCSV(true) + # model.getOutputControlFiles.setOutputCSV(true) true end diff --git a/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.xml b/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.xml index 465dfcde6..93e275873 100644 --- a/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.xml +++ b/resources/measures/upgrade_hvac_add_heat_pump_rtu/measure.xml @@ -3,8 +3,8 @@ 3.1 add_heat_pump_rtu f4567a68-27f2-4a15-ae91-ba0f35cd08c7 - 122fbc6d-b5bf-4af6-bc29-0f07dc05fe65 - 2024-10-16T01:49:30Z + 4cf5bd5c-a940-4dc5-83d1-e17104a0bf06 + 2024-10-29T14:28:52Z 5E2576E4 AddHeatPumpRtu add_heat_pump_rtu @@ -170,6 +170,25 @@ + + roof + Upgrade Roof Insulation? + Upgrade roof insulation per AEDG recommendations. + Boolean + true + false + false + + + true + true + + + false + false + + + sizing_run Do a sizing run for informing sizing instead of using hard-sized model parameters? 
@@ -250,7 +269,7 @@ README.md md readme - BAA5A5F6 + FD925C06 README.md.erb @@ -273,25 +292,25 @@ measure.rb rb script - D77E126F + 4FEEEBF8 performance_map_CCHP_spec_2027.json json resource - B916A0BF + 57B55228 performance_maps_hprtu_std.json json resource - C716C5A8 + 5733809B performance_maps_hprtu_variable_speed.json json resource - EEA7F73C + 8C2914D9 example_model.osm @@ -303,7 +322,7 @@ measure_test.rb rb test - F42E568B + 8F68BD89 diff --git a/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_map_CCHP_spec_2027.json b/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_map_CCHP_spec_2027.json index 49a7c0588..70ef3573f 100644 --- a/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_map_CCHP_spec_2027.json +++ b/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_map_CCHP_spec_2027.json @@ -657,7 +657,7 @@ "stage_rated_cop_frac_heating": "{4 => 0.722198906, 3 => 1.097922025, 2 => 1, 1 => 1.018507403}", "stage_rated_cop_frac_cooling": "{4 => 1, 3 => 1.08, 2 => 1.11, 1 => 1.07}", "boost_stage_num_and_max_temp_tuple": "[4, -8.33333]", - "stage_GrossRatedSensibleHeatRatio_cooling": "{4 => 0.77, 3 => 0.79, 2 => 0.80, 1 => 0.85}", + "stage_gross_rated_sensible_heat_ratio_cooling": "{4 => 0.77, 3 => 0.79, 2 => 0.80, 1 => 0.85}", "enable_cycling_losses_above_lowest_speed": false, "reference_cooling_cfm_per_ton": 365.0, "reference_heating_cfm_per_ton": 411.0 diff --git a/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_std.json b/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_std.json index c9f0c179b..3fbc886ac 100644 --- a/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_std.json +++ b/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_std.json @@ -466,7 +466,7 @@ "stage_rated_cop_frac_heating": "{1 => 1}", "stage_rated_cop_frac_cooling": "{2 => 1, 1 
=> 1}", "boost_stage_num_and_max_temp_tuple": "[]", - "stage_GrossRatedSensibleHeatRatio_cooling": "{2 => 0.77, 1 => 0.82}", + "stage_gross_rated_sensible_heat_ratio_cooling": "{2 => 0.77, 1 => 0.82}", "enable_cycling_losses_above_lowest_speed": true, "reference_cooling_cfm_per_ton": 404.0, "reference_heating_cfm_per_ton": 420.0 diff --git a/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_variable_speed.json b/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_variable_speed.json index ea8470b1e..456d40b56 100644 --- a/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_variable_speed.json +++ b/resources/measures/upgrade_hvac_add_heat_pump_rtu/resources/performance_maps_hprtu_variable_speed.json @@ -603,7 +603,7 @@ "stage_rated_cop_frac_heating": "{4 => 1, 3 => 1.05, 2 => 1.24, 1 => 1.45}", "stage_rated_cop_frac_cooling": "{4 => 1, 3 => 1.08, 2 => 1.11, 1 => 1.07}", "boost_stage_num_and_max_temp_tuple": "[]", - "stage_GrossRatedSensibleHeatRatio_cooling": "{4 => 0.77, 3 => 0.79, 2 => 0.80, 1 => 0.85}", + "stage_gross_rated_sensible_heat_ratio_cooling": "{4 => 0.77, 3 => 0.79, 2 => 0.80, 1 => 0.85}", "enable_cycling_losses_above_lowest_speed": false, "reference_cooling_cfm_per_ton": 365.0, "reference_heating_cfm_per_ton": 411.0 diff --git a/resources/measures/upgrade_hvac_add_heat_pump_rtu/tests/measure_test.rb b/resources/measures/upgrade_hvac_add_heat_pump_rtu/tests/measure_test.rb index fd23f8a12..d8b955145 100644 --- a/resources/measures/upgrade_hvac_add_heat_pump_rtu/tests/measure_test.rb +++ b/resources/measures/upgrade_hvac_add_heat_pump_rtu/tests/measure_test.rb @@ -47,7 +47,6 @@ require_relative '../../../../test/helpers/minitest_helper' class AddHeatPumpRtuTest < Minitest::Test - # return file paths to test models in test directory def models_for_tests paths = Dir.glob(File.join(File.dirname(__FILE__), '../../../tests/models/*.osm')) @@ -97,7 +96,7 @@ def 
 report_path(test_name)

 def set_weather_and_apply_measure_and_run(test_name, measure, argument_map, osm_path, epw_path, run_model: false, model: nil, apply: true, expected_results: 'Success')
   assert(File.exist?(osm_path))
   assert(File.exist?(epw_path))
-  ddy_path = epw_path.gsub(".epw","") + ".ddy"
+  ddy_path = "#{epw_path.gsub('.epw', '')}.ddy"

   # create run directory if it does not exist
   FileUtils.mkdir_p(run_dir(test_name)) unless File.exist?(run_dir(test_name))
@@ -216,7 +215,7 @@ def test_number_of_arguments_and_argument_names
   assert_equal('hr', arguments[7].name)
   assert_equal('dcv', arguments[8].name)
   assert_equal('econ', arguments[9].name)
-  assert_equal('roof', arguments[10].name)
+  assert_equal('roof', arguments[10].name)
   assert_equal('sizing_run', arguments[11].name)
   assert_equal('debug_verbose', arguments[12].name)
 end
@@ -248,7 +247,6 @@ def calc_cfm_per_ton_singlespdcoil_heating(model, cfm_per_ton_min, cfm_per_ton_m
 end

 def calc_cfm_per_ton_multispdcoil_heating(model, cfm_per_ton_min, cfm_per_ton_max)
-
   # get relevant heating coils
   coils_heating = model.getCoilHeatingDXMultiSpeedStageDatas
@@ -275,7 +273,6 @@ def calc_cfm_per_ton_multispdcoil_cooling(model, cfm_per_ton_min, cfm_per_ton_ma
-
   # get cooling coils
   coils_cooling = model.getCoilCoolingDXMultiSpeedStageDatas
@@ -337,19 +334,18 @@ def verify_cfm_per_ton(model, result)
 end

 def _mimic_hardsize_model(model, test_dir)
-
   standard = Standard.build('ComStock DOE Ref Pre-1980')

   # Run a sizing run to determine equipment capacities and flow rates
-  if standard.model_run_sizing_run(model, "#{test_dir}") == false
-    puts("Sizing run for Hardsize model failed, cannot hard-size model.")
+  if standard.model_run_sizing_run(model, test_dir.to_s) == false
+    puts('Sizing run for Hardsize model failed, cannot hard-size model.')
     return false
   end

   # APPLY
   model.applySizingValues

-  # TODO remove once this functionality is added to the OpenStudio C++ for hard sizing UnitarySystems
+  # TODO: remove once this functionality is added to the OpenStudio C++ for hard sizing UnitarySystems
   model.getAirLoopHVACUnitarySystems.each do |unitary|
     if model.version < OpenStudio::VersionString.new('3.7.0')
       unitary.setSupplyAirFlowRateMethodDuringCoolingOperation('SupplyAirFlowRate')
@@ -357,11 +353,11 @@ def _mimic_hardsize_model(model, test_dir)
     else
       # unitary.applySizingValues
     end
-
   end

-  # TODO remove once this functionality is added to the OpenStudio C++ for hard sizing Sizing:System
+  # TODO: remove once this functionality is added to the OpenStudio C++ for hard sizing Sizing:System
   model.getSizingSystems.each do |sizing_system|
     next if sizing_system.isDesignOutdoorAirFlowRateAutosized
+
     sizing_system.setSystemOutdoorAirMethod('ZoneSum')
   end
@@ -369,7 +365,6 @@ def _mimic_hardsize_model(model, test_dir)
 end

 def verify_hp_rtu(test_name, model, measure, argument_map, osm_path, epw_path)
-
   # set weather file but not apply measure
   result = set_weather_and_apply_measure_and_run(test_name, measure, argument_map, osm_path, epw_path, run_model: false, apply: false)
   model = load_model(model_output_path(test_name))
@@ -517,7 +512,6 @@ def verify_hp_rtu(test_name, model, measure, argument_map, osm_path, epw_path)
 end

 def get_cooling_coil_capacity_and_cop(model, coil)
-
   capacity_w = 0.0
   coil_design_cop = 0.0
@@ -677,14 +671,13 @@ def get_sizing_summary(model)
     # get coil capacity: cooling
     coil = airloophvacunisys.coolingCoil.get
-    capacity_w, _ = get_cooling_coil_capacity_and_cop(model, coil)
+    capacity_w, = get_cooling_coil_capacity_and_cop(model, coil)
     sizing_summary['AirLoopHVACUnitarySystem'][name_obj]['cooling_coil_capacity_w'] = capacity_w

     # get coil capacity: heating
     coil = airloophvacunisys.heatingCoil.get
-    capacity_w, _, _, _, _, _, _, _ = get_heating_coil_capacity_and_cop(model, coil)
+    capacity_w, = get_heating_coil_capacity_and_cop(model, coil)
     sizing_summary['AirLoopHVACUnitarySystem'][name_obj]['heating_coil_capacity_w'] = capacity_w
-
   end

   sizing_summary['AirLoopHVAC'] = {}
   model.getAirLoopHVACs.each do |airloophvac|
@@ -724,15 +717,14 @@ def check_sizing_results_no_upsizing(model, sizing_summary_reference)
     # check capacity: cooling
     coil = airloophvacunisys.coolingCoil.get
     value_before = sizing_summary_reference['AirLoopHVACUnitarySystem'][name_obj]['cooling_coil_capacity_w']
-    value_after, _ = get_cooling_coil_capacity_and_cop(model, coil)
+    value_after, = get_cooling_coil_capacity_and_cop(model, coil)
     assert_in_epsilon(value_before, value_after, 0.000001, "values do not match: AirLoopHVACUnitarySystem | #{name_obj} | cooling_coil_capacity_w")

     # check capacity: heating
     coil = airloophvacunisys.heatingCoil.get
     value_before = sizing_summary_reference['AirLoopHVACUnitarySystem'][name_obj]['heating_coil_capacity_w']
-    value_after, _ = get_heating_coil_capacity_and_cop(model, coil)
+    value_after, = get_heating_coil_capacity_and_cop(model, coil)
     assert_in_epsilon(value_before, value_after, 0.000001, "values do not match: AirLoopHVACUnitarySystem | #{name_obj} | heating_coil_capacity_w")
-
   end

   model.getAirLoopHVACs.each do |airloophvac|
     name_obj = airloophvac.name.to_s
@@ -761,14 +753,14 @@ def check_sizing_results_upsizing(model, sizing_summary_reference)
     # check capacity: cooling
     coil = airloophvacunisys.coolingCoil.get
     value_before = sizing_summary_reference['AirLoopHVACUnitarySystem'][name_obj]['cooling_coil_capacity_w']
-    value_after, _ = get_cooling_coil_capacity_and_cop(model, coil)
+    value_after, = get_cooling_coil_capacity_and_cop(model, coil)
     relative_difference = (value_after - value_before) / value_before
     assert_in_epsilon(relative_difference, 0.25, 0.01, "values difference not close to threshold: AirLoopHVACUnitarySystem | #{name_obj} | cooling_coil_capacity_w")

     # check capacity: heating
     coil = airloophvacunisys.heatingCoil.get
     value_before = sizing_summary_reference['AirLoopHVACUnitarySystem'][name_obj]['heating_coil_capacity_w']
-    value_after, _ = get_heating_coil_capacity_and_cop(model, coil)
+    value_after, = get_heating_coil_capacity_and_cop(model, coil)
     relative_difference = (value_after - value_before) / value_before
     assert_in_epsilon(relative_difference, 0.25, 0.01, "values difference not close to threshold: AirLoopHVACUnitarySystem | #{name_obj} | heating_coil_capacity_w")
   end
@@ -819,7 +811,6 @@ def calc_cfm_per_ton_singlespdcoil_heating(model, cfm_per_ton_min, cfm_per_ton_m
 end

 def calc_cfm_per_ton_multispdcoil_heating(model, cfm_per_ton_min, cfm_per_ton_max)
-
   # get relevant heating coils
   coils_heating = model.getCoilHeatingDXMultiSpeedStageDatas
@@ -846,7 +837,6 @@ def calc_cfm_per_ton_multispdcoil_heating(model, cfm_per_ton_min, cfm_per_ton_ma
 end

 def calc_cfm_per_ton_multispdcoil_cooling(model, cfm_per_ton_min, cfm_per_ton_max)
-
   # get cooling coils
   coils_cooling = model.getCoilCoolingDXMultiSpeedStageDatas
@@ -907,6 +897,70 @@ def verify_cfm_per_ton(model, result)
   end
 end

+# # ##########################################################################
+# # Single building result examples
+# def test_single_building_result_examples
+#   osm_epw_pair = {
+#     'example_model_AK_380.osm' => 'USA_AK_Fairbanks.Intl.AP.702610_TMY3.epw',
+#     'example_model_NM_380.osm' => 'USA_NM_Albuquerque.Intl.AP.723650_TMY3.epw',
+#     'example_model_HI_380.osm' => 'USA_HI_Honolulu.Intl.AP.911820_TMY3.epw',
+#   }
+
+#   test_name = 'test_single_building_result_examples'
+
+#   puts "\n######\nTEST:#{test_name}\n######\n"
+
+#   osm_epw_pair.each_with_index do |(osm_name, epw_name), idx|
+
+#     osm_path = model_input_path(osm_name)
+#     epw_path = epw_input_path(epw_name)
+
+#     puts("### DEBUGGING: ----------------------------------------------------------")
+#     puts("### DEBUGGING: osm_path = #{osm_path}")
+#     puts("### DEBUGGING: epw_path = #{epw_path}")
+
+#     # Create an instance of the measure
+#     measure = AddHeatPumpRtu.new
+
+#     # Load the model; only used here for populating arguments
+#     model = load_model(osm_path)
+
+#     # get arguments
+#     arguments = measure.arguments(model)
+#     argument_map = OpenStudio::Measure.convertOSArgumentVectorToMap(arguments)
+
+#     # populate specific argument for testing
+#     arguments.each_with_index do |arg, idx|
+#       temp_arg_var = arg.clone
+#       case arg.name
+#       when 'sizing_run'
+#         sizing_run = arguments[idx].clone
+#         sizing_run.setValue(true)
+#         argument_map[arg.name] = sizing_run
+#       when 'hprtu_scenario'
+#         hprtu_scenario = arguments[idx].clone
+#         hprtu_scenario.setValue('variable_speed_high_eff') # variable_speed_high_eff, two_speed_standard_eff
+#         argument_map[arg.name] = hprtu_scenario
+#       when 'performance_oversizing_factor'
+#         performance_oversizing_factor = arguments[idx].clone
+#         performance_oversizing_factor.setValue(0.25)
+#         argument_map[arg.name] = performance_oversizing_factor
+#       when 'debug_verbose'
+#         debug_verbose = arguments[idx].clone
+#         debug_verbose.setValue(true)
+#         argument_map[arg.name] = debug_verbose
+#       else
+#         argument_map[arg.name] = temp_arg_var
+#       end
+#     end
+
+#     # Apply the measure to the model and optionally run the model
+#     result = set_weather_and_apply_measure_and_run("#{test_name}_#{idx}", measure, argument_map, osm_path, epw_path, run_model: true, apply: true)
+#     model = load_model(model_output_path("#{test_name}_#{idx}"))
+
+#   end
+# end
+
 # ##########################################################################
 # This section tests upsizing algorithm
 # tests compare:
@@ -936,11 +990,12 @@ def test_sizing_model_in_alaska
   # populate specific argument for testing
   arguments.each_with_index do |arg, idx|
     temp_arg_var = arg.clone
-    if arg.name == 'sizing_run'
+    case arg.name
+    when 'sizing_run'
       sizing_run = arguments[idx].clone
       sizing_run.setValue(true)
       argument_map[arg.name] = sizing_run
-    elsif arg.name == 'hprtu_scenario'
+    when 'hprtu_scenario'
       hprtu_scenario = arguments[idx].clone
       hprtu_scenario.setValue('two_speed_standard_eff') # variable_speed_high_eff, two_speed_standard_eff
       argument_map[arg.name] = hprtu_scenario
@@ -960,8 +1015,8 @@ def test_sizing_model_in_alaska
   end

   # Apply the measure to the model and optionally run the model
-  result = set_weather_and_apply_measure_and_run(test_name + "_b", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
-  model = load_model(model_output_path(test_name + "_b"))
+  result = set_weather_and_apply_measure_and_run("#{test_name}_b", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
+  model = load_model(model_output_path("#{test_name}_b"))

   # get sizing info from regular sized model
   sizing_summary_reference = get_sizing_summary(model)
@@ -977,8 +1032,8 @@ def test_sizing_model_in_alaska
   end

   # Apply the measure to the model and optionally run the model
-  result = set_weather_and_apply_measure_and_run(test_name + "_a", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
-  model = load_model(model_output_path(test_name + "_a"))
+  result = set_weather_and_apply_measure_and_run("#{test_name}_a", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
+  model = load_model(model_output_path("#{test_name}_a"))

   # compare sizing summary of upsizing model with regular sized model
   check_sizing_results_upsizing(model, sizing_summary_reference)
@@ -1008,11 +1063,12 @@ def test_sizing_model_in_hawaii
   # populate specific argument for testing
   arguments.each_with_index do |arg, idx|
     temp_arg_var = arg.clone
-    if arg.name == 'sizing_run'
+    case arg.name
+    when 'sizing_run'
       sizing_run = arguments[idx].clone
       sizing_run.setValue(true)
       argument_map[arg.name] = sizing_run
-    elsif arg.name == 'hprtu_scenario'
+    when 'hprtu_scenario'
       hprtu_scenario = arguments[idx].clone
       hprtu_scenario.setValue('variable_speed_high_eff') # variable_speed_high_eff, two_speed_standard_eff
       argument_map[arg.name] = hprtu_scenario
@@ -1032,8 +1088,8 @@ def test_sizing_model_in_hawaii
   end

   # Apply the measure to the model and optionally run the model
-  result = set_weather_and_apply_measure_and_run(test_name + "_b", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
-  model = load_model(model_output_path(test_name + "_b"))
+  result = set_weather_and_apply_measure_and_run("#{test_name}_b", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
+  model = load_model(model_output_path("#{test_name}_b"))

   # get sizing info from regular sized model
   sizing_summary_reference = get_sizing_summary(model)
@@ -1049,8 +1105,8 @@ def test_sizing_model_in_hawaii
   end

   # Apply the measure to the model and optionally run the model
-  result = set_weather_and_apply_measure_and_run(test_name + "_a", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
-  model = load_model(model_output_path(test_name + "_a"))
+  result = set_weather_and_apply_measure_and_run("#{test_name}_a", measure, argument_map, osm_path, epw_path, run_model: false, apply: true)
+  model = load_model(model_output_path("#{test_name}_a"))

   # compare sizing summary of upsizing model with regular sized model
   check_sizing_results_no_upsizing(model, sizing_summary_reference)
@@ -1252,11 +1308,7 @@ def test_380_full_service_restaurant_psz_gas_coil
   nonkitchen_htg_coils = []
   model.getAirLoopHVACUnitarySystems.sort.each do |unitary_sys|
     # skip kitchen spaces
-    thermal_zone_names_to_exclude = %w[
-      Kitchen
-      kitchen
-      KITCHEN
-    ]
+    thermal_zone_names_to_exclude = ['Kitchen', 'kitchen', 'KITCHEN']
     if thermal_zone_names_to_exclude.any? { |word| (unitary_sys.name.to_s).include?(word) }
       tz_kitchens << unitary_sys
@@ -1284,11 +1336,7 @@ def test_380_full_service_restaurant_psz_gas_coil
   nonkitchen_htg_coils_final = []
   model.getAirLoopHVACUnitarySystems.sort.each do |unitary_sys|
     # skip kitchen spaces
-    thermal_zone_names_to_exclude = %w[
-      Kitchen
-      kitchen
-      KITCHEN
-    ]
+    thermal_zone_names_to_exclude = ['Kitchen', 'kitchen', 'KITCHEN']
     if thermal_zone_names_to_exclude.any? { |word| (unitary_sys.name.to_s).include?(word) }
       tz_kitchens_final << unitary_sys
@@ -1387,15 +1435,16 @@ def test_380_full_service_restaurant_psz_gas_coil_upsizing
   # get arguments
   arguments.each_with_index do |arg, idx|
     temp_arg_var = arg.clone
-    if arg.name == 'sizing_run'
+    case arg.name
+    when 'sizing_run'
       sizing_run = arguments[idx].clone
      sizing_run.setValue(false)
       argument_map[arg.name] = sizing_run
-    elsif arg.name == 'hprtu_scenario'
+    when 'hprtu_scenario'
       hprtu_scenario = arguments[idx].clone
       hprtu_scenario.setValue('variable_speed_high_eff') # variable_speed_high_eff, two_speed_standard_eff
       argument_map[arg.name] = hprtu_scenario
-    elsif arg.name == 'performance_oversizing_factor'
+    when 'performance_oversizing_factor'
       performance_oversizing_factor = arguments[idx].clone
       performance_oversizing_factor.setValue(0.25) # override performance_oversizing_factor arg
       argument_map[arg.name] = performance_oversizing_factor
@@ -1435,15 +1484,16 @@ def test_380_small_office_psz_gas_coil_7A_upsizing_adv
   # populate specific argument for testing
   arguments.each_with_index do |arg, idx|
     temp_arg_var = arg.clone
-    if arg.name == 'sizing_run'
+    case arg.name
+    when 'sizing_run'
       sizing_run = arguments[idx].clone
       sizing_run.setValue(true)
       argument_map[arg.name] = sizing_run
-    elsif arg.name == 'hprtu_scenario'
+    when 'hprtu_scenario'
       hprtu_scenario = arguments[idx].clone
       hprtu_scenario.setValue('variable_speed_high_eff') # variable_speed_high_eff, two_speed_standard_eff
       argument_map[arg.name] = hprtu_scenario
-    elsif arg.name == 'debug_verbose'
+    when 'debug_verbose'
       debug_verbose = arguments[idx].clone
       debug_verbose.setValue(true)
       argument_map[arg.name] = debug_verbose
@@ -1492,15 +1542,16 @@ def test_380_small_office_psz_gas_coil_7A_upsizing_std
   # populate specific argument for testing
   arguments.each_with_index do |arg, idx|
     temp_arg_var = arg.clone
-    if arg.name == 'sizing_run'
+    case arg.name
+    when 'sizing_run'
       sizing_run = arguments[idx].clone
       sizing_run.setValue(true)
       argument_map[arg.name] = sizing_run
-    elsif arg.name == 'hprtu_scenario'
+    when 'hprtu_scenario'
       hprtu_scenario = arguments[idx].clone
       hprtu_scenario.setValue('two_speed_standard_eff') # variable_speed_high_eff, two_speed_standard_eff
       argument_map[arg.name] = hprtu_scenario
-    elsif arg.name == 'debug_verbose'
+    when 'debug_verbose'
       debug_verbose = arguments[idx].clone
       debug_verbose.setValue(true)
       argument_map[arg.name] = debug_verbose
@@ -1595,7 +1646,7 @@ def test_380_warehouse_pvav_gas_boiler_reheat_2A

   # Apply the measure to the model and optionally run the model
   result = set_weather_and_apply_measure_and_run(__method__, measure, argument_map, osm_path, epw_path, run_model: false, apply: true, expected_results: 'NA')
-  end
+  end

 # assert that non applicable HVAC system registers as NA
 def test_380_medium_office_doas_fan_coil_acc_boiler_3A
diff --git a/resources/measures/upgrade_hvac_multispeed_minimum_flow/measure.xml b/resources/measures/upgrade_hvac_multispeed_minimum_flow/measure.xml
index 5fa2706e1..66991e72d 100644
--- a/resources/measures/upgrade_hvac_multispeed_minimum_flow/measure.xml
+++ b/resources/measures/upgrade_hvac_multispeed_minimum_flow/measure.xml
@@ -3,8 +3,8 @@
   <schema_version>3.1</schema_version>
   <name>multispeed_minimum_flow</name>
   <uid>82ab405f-375a-4dc6-bc66-9e6aac4baf52</uid>
-  <version_id>cc9cc032-1769-4fb7-ba74-107c135ebecd</version_id>
-  <version_modified>2024-06-20T18:58:45Z</version_modified>
+  <version_id>d93304bc-6ef7-45dd-9636-7023c2980590</version_id>
+  <version_modified>2024-10-21T21:10:19Z</version_modified>
   <xml_checksum>912062DA</xml_checksum>
   <class_name>MultispeedMinimumFlow</class_name>
   <display_name>Multispeed Minimum Flow</display_name>
@@ -104,7 +104,7 @@
       <filename>measure.rb</filename>
       <filetype>rb</filetype>
       <usage_type>script</usage_type>
-      <checksum>396FB962</checksum>
+      <checksum>55ECBE03</checksum>
     </file>
     <file>
       <filename>Cold_ASHRAE.idf</filename>
@@ -136,11 +136,35 @@
       <usage_type>test</usage_type>
       <checksum>0618798B</checksum>
     </file>
+    <file>
+      <filename>in.epw</filename>
+      <filetype>epw</filetype>
+      <usage_type>test</usage_type>
+      <checksum>978FBEFE</checksum>
+    </file>
+    <file>
+      <filename>in.osm</filename>
+      <filetype>osm</filetype>
+      <usage_type>test</usage_type>
+      <checksum>1C31C871</checksum>
+    </file>
     <file>
       <filename>measure_Test.rb</filename>
       <filetype>rb</filetype>
       <usage_type>test</usage_type>
-      <checksum>E229ABEE</checksum>
+      <checksum>9001A5F6</checksum>
     </file>
+    <file>
+      <filename>test files/base_new_vap_spd_hp1-old.osm</filename>
+      <filetype>osm</filetype>
+      <usage_type>test</usage_type>
+      <checksum>1FF43CE4</checksum>
+    </file>
+    <file>
+      <filename>test files/base_new_vap_spd_hp1.osm</filename>
+      <filetype>osm</filetype>
+      <usage_type>test</usage_type>
+      <checksum>0618798B</checksum>
+    </file>
   </files>
 </measure>
diff --git a/sampling/resources/10k_sample_input_validated.csv.zip b/sampling/resources/10k_sample_input_validated.csv.zip
new file mode 100644
index 000000000..27b8d6256
Binary files /dev/null and b/sampling/resources/10k_sample_input_validated.csv.zip differ
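Two small Ruby idioms recur throughout the test hunks above: replacing `if`/`elsif` chains keyed on `arg.name` with a `case`/`when` dispatch, and replacing concatenation like `test_name + "_b"` with string interpolation. A minimal plain-Ruby sketch of the pattern follows; `Arg` and `build_argument_map` are hypothetical stand-ins for illustration only, not part of the OpenStudio API:

```ruby
# Hypothetical stand-in for an OpenStudio measure argument: a name and a value.
Arg = Struct.new(:name, :value)

# Mirrors the case/when dispatch the tests use: selected arguments get an
# overridden value, all others pass through unchanged (the temp_arg_var path).
def build_argument_map(arguments, overrides)
  argument_map = {}
  arguments.each do |arg|
    case arg.name
    when *overrides.keys
      # override this argument's value, as the tests do for 'sizing_run' etc.
      argument_map[arg.name] = Arg.new(arg.name, overrides[arg.name])
    else
      argument_map[arg.name] = arg
    end
  end
  argument_map
end

arguments = [
  Arg.new('sizing_run', false),
  Arg.new('hprtu_scenario', nil),
  Arg.new('roof', true)
]
map = build_argument_map(arguments,
                         'sizing_run' => true,
                         'hprtu_scenario' => 'two_speed_standard_eff')

# String interpolation replaces concatenation like test_name + "_b"
test_name = 'test_sizing_model_in_alaska'
run_label = "#{test_name}_b"
```

The splatted `when *overrides.keys` keeps the dispatch table-driven: any argument without an override falls through to `else` untouched, mirroring the `argument_map[arg.name] = temp_arg_var` fallback in the tests above.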