This is a library of test models and results for the QSS solver being developed as part of the "Spawn of EnergyPlus" project. The library serves these purposes:
- Comparison testing QSS vs OPTIMICA, JModelica, Dymola, Ptolemy, and other solvers
- Regression testing new releases
- Performance testing
Models will be run as FMUs since the QSS solver is being built for integration into JModelica via the FMU interface. Some simpler models may also be run as QSS "code-defined" models for results and performance comparison.
Due to the size of model FMUs, modelDescription.xml files, and output signal files, this repository stores only scripts, models, and descriptive text files.
The top-level repository directory contains these subdirectories:
bin/
mdl/
The `bin` directory contains scripts for modeling and testing:
- `bld.py`: Default `bld_fmu.py` wrapper
- `bld_fmu.py`: Builds a model FMU with OPTIMICA or JModelica depending on the current directory
- `bld_fmus.py`: Builds all model FMUs with OPTIMICA
- `cleanup`: Removes comparison/regression testing output files
- `cmp_CVode_QSS3_Buildings.py`: Runs and compares CVode and QSS3 simulations for a set of Buildings library models
- `cmp_CVode_QSS3_simple.py`: Runs and compares CVode and QSS3 simulations for a set of simple models
- `cmp_PyFMI_QSS.py`: Runs and compares PyFMI and QSS simulations for the local model
- `cmp_PyFMI_QSS_hdr.py`: Generates the YAML file header for a PyFMI vs QSS comparison run
- `cmp_PyFMI_QSS_yaml.py`: Compares the YAML results files for two PyFMI vs QSS comparison runs
- `comparison`: Compares results from two modeling tools
- `csv2ascii.py`: Converts CSV files to ASCII files
- `jm*`: Wraps `jm_python.sh` (customize to your system)
- `jmi*`: Wraps `jm_ipython.sh` (customize to your system)
- `ref.py`: Runs the model's FMU with PyFMI or QSS depending on the local directory to generate a small-tolerance "reference" solution
- `regression`: Regression-tests results from two versions of a modeling tool
- `run.py`: Runs the model's FMU with PyFMI or QSS depending on the local directory
- `run_PyFMI.py`: Runs the model's FMU with PyFMI
- `run_PyFMI_red.py`: Runs the model's FMU with PyFMI with optional output redirection via a `--red=LOGFILE` option
- `run_PyFMI_run.py`: Runs the model's FMU with PyFMI with output redirection to a `run.log` file
- `run_QSS.py`: Runs the model's OCT FMU with QSS (supports QSS options)
- `set_JModelica`: Sets environment for JModelica (customize to your system)
- `set_Modelica`: Sets environment for Modelica and the Buildings library (customize to your system)
- `set_OCT`: Sets environment for OCT (customize to your system)
- `simdiff.py`: Simulation results comparison tool
- `stp_QSS.py`: Runs the model's FMU with QSS and checks/reports step counts
- `stp_QSS_simple.py`: Checks/reports step counts for a set of simple models with known/expected step counts
- Place copies of scripts needing customization for your system's Buildings library location in a directory early in your PATH.
- The JModelica or OPTIMICA `jm_python.sh` needs to be on your PATH to use some of these scripts.
The `mdl` directory contains the models and results, with this (tentative) organization for each model:
ModelName/
ModelName.mo Modelica model
ModelName.ref Modelica or Buildings Library model name and, optionally, Buildings Library branch and/or commit
ModelName.txt Notes
ModelName.var List of variable names to output (supports glob wildcard syntax)
Dymola/ Dymola model & results
JModelica/ JModelica model & results
OCT/ OPTIMICA model & results
Ptolemy/ Ptolemy model & results
QSS/ QSS model & results
Each non-QSS modeling tool (OCT, JModelica, Dymola, and Ptolemy) sub-directory has this structure:
*Tool*/
ModelName.txt Notes
ModelName.mo Modelica model customized for Tool if needed
ModelName.fmu FMU generated by Tool (with a modified XML inserted)
modelDescription.orig.xml Original FMU XML (if XML modifications needed)
modelDescription.prep.xml Modified FMU XML (if XML modifications needed)
modelDescription.xml XML from the FMU (after any modifications are made)
out/ Results
run/ Standard run results
ref/ Reference run results
Tool-specific versions of Modelica files include customizations for that tool:
- JModelica-specific models have explicit __zc_name and __zc_der_name variables added for the zero-crossing functions and their derivatives that QSS can use. (OPTIMICA has event indicator variables to supply this information without modifying the models.)
The QSS sub-directory has this structure:
QSS/
ModelName.txt Notes
ModelName.fmu Specialized FMU used for QSS runs if needed
FMU-[LI]QSS#/ FMU-QSS [LI]QSS# run
[LI|x]QSS#/ [LI|x]QSS# run
where # is 1, 2, or 3 indicating the QSS method order.
The QSS subdirectories may have a custom `run.py` script with specialized options suggested or needed for the model.
The FMU-QSS subdirectories have a `run` script that generates the FMU-QSS and then runs it with the QSS application.
The QSS2 method is currently the best choice in most circumstances (QSS3 performance and accuracy are limited due to numerical differentiation), so the other sub-directories may not be present. The LIQSS2 method is probably the best for "stiff" models. The first-order QSS1 and LIQSS1 methods are mostly of academic interest since they are very slow for most models.
- Models of different types are being brought into the repository as they are ready.
- There are simple models to test basic behavior, feature models to develop/test QSS support for specific Modelica features, and EnergyPlus related models that use the Modelica Buildings library.
- Some of the models are relevant for performance assessment but performance profiling automation is not currently provided here.
Notes on each of the modeling tools appear below.
- OCT has event indicator variable/dependency support that QSS uses to track the zero-crossing functions.
- FMI/FMIL API extensions to better support QSS solvers are under development at Modelon.
- QSS simulation of models with if/when conditional blocks requires customized .mo files with explicit zero-crossing variables and the addition of their dependencies to the modelDescription.xml, so JModelica-generated FMUs aren't practical for QSS simulation of anything other than small models.
- QSS simulation of models with if/when conditional blocks requires customized .mo files with explicit zero-crossing variables and the addition of their dependencies to the modelDescription.xml, so Dymola-generated FMUs aren't practical for QSS simulation of anything other than small models.
- Dymola FMUs have twice as many event indicators as expected. The QSS simulation accounts for this when checking Dymola-generated FMUs.
- Some outputs generated by PyFMI, not Dymola, are included.
- At this time Dymola-generated outputs are not planned for inclusion but we are checking whether Dymola and PyFMI results are consistent with some of the models. We are noting any significant differences and isolating the causes when possible and reporting any bugs in these tools where appropriate.
- Ptolemy models may be added if desired by LBNL.
- OCT can (with the correct options) generate FMUs with event indicator variables for zero-crossing functions that QSS needs. Other Modelica tool FMU generators lack this support and are not practical for production QSS use.
- The FMI/FMIL APIs do not provide a mechanism to tell the FMU to process a conditional event at a specific time, so QSS advances the relevant variables a small time step beyond the potential conditional event in the hope that the FMU will then detect and process the event if an event actually occurs at that time. This may not yet be robust for all models, especially due to the current dependence on numerical differentiation. Until a better mechanism becomes available it is possible for QSS solutions to miss some conditional events that other solvers don't miss.
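The "small time step beyond the potential event" workaround described above can be sketched as follows. This is an illustrative toy, not the actual QSS implementation; the function names and bump size are assumptions:

```python
# Sketch of the "advance past the event" workaround: FMI offers no
# "process this event at time t" call, so the solver steps the FMU
# slightly beyond the predicted zero crossing and relies on the FMU's
# own event detection. All names and values here are illustrative.

BUMP = 1e-9  # small offset past the predicted crossing (an assumption)

def predict_crossing(t0, z0, t1, z1):
    """Linearly interpolate the time where the zero-crossing function
    changes sign between samples (t0, z0) and (t1, z1)."""
    return t0 + (t1 - t0) * (-z0) / (z1 - z0)

def event_time(t0, z0, t1, z1):
    """Return the time just past the predicted crossing to which the
    solver would advance the FMU, hoping the FMU detects the event."""
    return predict_crossing(t0, z0, t1, z1) + BUMP

# Toy zero-crossing function z(t) = t - 1 sampled at t=0 and t=2:
t_event = event_time(0.0, -1.0, 2.0, 1.0)  # just past the crossing at t = 1.0
```

Since the crossing time itself is predicted from numerically differentiated trajectories, the bump may land before the true event, which is why some conditional events can be missed.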
- Comparisons between modeling tools may have names with prefixes like ModelName.Tool1-Tool2.
- Comparison of solution accuracy and speed between PyFMI and QSS requires a few considerations:
- The solution accuracy should be gauged by comparison with a tight-tolerance reference solution.
- PyFMI and QSS simulations with the same relative tolerance do not generally give the same solution accuracy.
- Speed comparisons should be based on runs where the PyFMI and QSS tolerances are adjusted to give roughly the same solution accuracy, which can be time-consuming to determine.
- PyFMI performs additional solver steps at output times so, unlike QSS, changing the output sampling rate changes the effective solution tolerance.
- QSS performance is currently limited by the need to do expensive numeric differentiation: automatic differentiation support is anticipated and should provide a significant (~4X) QSS speedup.
- QSS3 accuracy and performance are both seriously limited by numeric differentiation: QSS3 is included for research purposes at this point but won't be practical until FMUs provide automatic differentiation. QSS3 (and possibly higher order QSS) are important for simulation speed with many models.
- FMU calls and algebraic DAE solution overhead can be speed-limiting for QSS simulation: the Binned-QSS support is designed to amortize this overhead to extend the class of models for which QSS offers a performance benefit.
- Interpolating across different sampling times artificially increases the difference between signals, so many signals that are actually a good match will fail the tolerance test: increasing sampling rates and/or using the `--coarse` option when differencing need to be explored.
- A max-difference metric isn't good for discrete and boolean signals: we probably need to use RMS, an integral-based metric, or another metric for comparison/regression testing.
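To illustrate why a max-difference metric penalizes discrete signals, here is a small sketch (pure Python, hypothetical signals) comparing max-difference and RMS metrics on a step signal whose transition is shifted by a single sample:

```python
import math

def max_diff(a, b):
    """Worst-case pointwise difference between two sampled signals."""
    return max(abs(x - y) for x, y in zip(a, b))

def rms_diff(a, b):
    """Root-mean-square difference between two sampled signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# A boolean/step signal, and the same signal with its transition shifted
# by one sample, as can happen with slight event-timing differences:
sig1 = [0.0] * 50 + [1.0] * 50
sig2 = [0.0] * 51 + [1.0] * 49

print(max_diff(sig1, sig2))  # 1.0: fails any reasonable max-diff tolerance
print(rms_diff(sig1, sig2))  # 0.1: reflects that the signals nearly match
```

The max-difference metric reports the full step height even though the two signals differ at only one of 100 samples, while the RMS metric scales with how much of the signal actually disagrees.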
OPTIMICA is the default Modelica tool for QSS now that it has event indicator and other QSS-specific support.
JModelica lacks this QSS support and is being retired, but it can still be used for limited QSS modeling, so it is still supported here.
FMUs can be built directly from models in the Buildings Library by placing a ModelName.ref file alongside the ModelName.mo file. The ModelName.ref file is a text file with these lines:
Modelica or Buildings Library full model name
Buildings Library branch if not master (optional)
Buildings Library commit hash if not HEAD of branch (optional)
Here is the FloorOpenLoop.ref file:
Buildings.ThermalZones.EnergyPlus.Examples.VAVReheatRefBldgSmallOffice.FloorOpenLoop
issue1129_energyPlus_zone
This model exists in the issue1129_energyPlus_zone branch at the HEAD commit.
Models that are defined in the local .mo file but depend on a specific branch/commit of the Buildings library should use a .ref file as above but with just `Buildings` in the first line.
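A minimal sketch of parsing a ModelName.ref file per the layout above (a hypothetical helper, not one of the repository's scripts):

```python
def parse_ref(text):
    """Parse a .ref file: model name on the first line, then optional
    Buildings Library branch and optional commit hash, one per line."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    return {
        "model": lines[0],
        "branch": lines[1] if len(lines) > 1 else "master",
        "commit": lines[2] if len(lines) > 2 else "HEAD",
    }

# The FloorOpenLoop.ref example from above:
ref = parse_ref(
    "Buildings.ThermalZones.EnergyPlus.Examples."
    "VAVReheatRefBldgSmallOffice.FloorOpenLoop\n"
    "issue1129_energyPlus_zone\n"
)
# ref["branch"] == "issue1129_energyPlus_zone"; ref["commit"] == "HEAD"
```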
Run `bld_fmu.py` from the `OCT` sub-directory of the model's directory.
Run `bld_fmu.py` from the `JModelica` sub-directory of the model's directory.
Run `run.py` or `run_PyFMI.py` from the desired output sub-directory under the modeling tool sub-directory of the model's directory, such as MyModel/JModelica/out.
- PyFMI options like `--ncp` and `--final_time` are accepted by these scripts.
- A ModelName.var file, if present alongside the ModelName.mo file, will be used to limit the output variables that are generated. It is a simple text file with one variable name/glob per line.
- The run scripts generate per-variable ASCII output files named as MyVariable.out for easy comparison with QSS results.
- PyFMI fails to generate output for some variables but we don't know why yet.
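As a sketch of how a ModelName.var file's one-per-line name/glob patterns could select output variables (a hypothetical illustration using Python's fnmatch, not the scripts' actual code):

```python
from fnmatch import fnmatch

def select_outputs(var_file_text, variables):
    """Keep only the variables matching any name/glob pattern listed
    one per line in a ModelName.var file."""
    patterns = [ln.strip() for ln in var_file_text.splitlines() if ln.strip()]
    return [v for v in variables if any(fnmatch(v, p) for p in patterns)]

# Hypothetical .var contents and model variable names:
var_text = "x1\nzone.T*\n"
all_vars = ["x1", "x2", "zone.TAir", "zone.TRad", "fan.m_flow"]
print(select_outputs(var_text, all_vars))  # ['x1', 'zone.TAir', 'zone.TRad']
```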
Run `run.py` or `run_QSS.py` in each QSS method sub-directory of the model's `QSS` sub-directory.
Custom `run.py` scripts may be present under `QSS` with recommended or needed QSS options for that model.
Notes:
- QSS options like `--out` and `--zFac` are accepted by these scripts.
- A ModelName.var file, if present alongside the ModelName.mo file, will be used to limit the output variables that are generated. It is a simple text file with one variable name/glob per line.
- OPTIMICA (OCT) FMUs are recommended for use with QSS.
- Use of JModelica-generated FMUs requires special treatment for use with QSS as described below.
- Dymola-generated FMUs with if/when constructs and the related zero-crossing functions will lack the necessary dependency information for correct QSS simulation.
- Dymola-generated FMUs may not have the correct startTime, stopTime, and/or tolerance values in the `modelDescription.xml` `DefaultExperiment` section.
The use of JModelica-generated FMUs with QSS requires special treatment:
- Zero-crossing functions need to be assigned to variables with names of the form `__zc_`VariableName and their derivatives assigned to variables with names of the form `__zc_der_`VariableName.
- The `modelDescription.xml` files in the FMU files need to be modified for QSS use in some cases. The FMU files are zip files, so the `modelDescription.xml` files can be extracted with `unzip`, modified, and then updated in the FMU by running `zip -f`.
- QSS needs some dependency information not included in `modelDescription.xml` by JModelica-generated FMUs:
  - Discrete variables modified by conditional/zero-crossing events need to be listed in a `DiscreteStates` section (between the `Derivatives` and `InitialUnknowns` sections) with dependency on the corresponding zero-crossing variable(s).
  - State variables modified by conditional/zero-crossing events need to be listed in the `InitialUnknowns` section with dependency on the corresponding zero-crossing variable(s).
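As an alternative to the unzip/zip -f procedure above, the modelDescription.xml can be replaced programmatically. This sketch uses Python's zipfile module; the file names are placeholders:

```python
import shutil
import zipfile

def replace_model_description(fmu_path, new_xml_path):
    """Rewrite an FMU (a zip archive) with a modified modelDescription.xml,
    copying all other archive members unchanged."""
    tmp_path = fmu_path + ".tmp"
    with zipfile.ZipFile(fmu_path) as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename == "modelDescription.xml":
                # Substitute the modified XML for the original member
                with open(new_xml_path, "rb") as f:
                    dst.writestr(item.filename, f.read())
            else:
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp_path, fmu_path)  # replace the FMU atomically
```

A full rewrite is used because zip archives do not support in-place member replacement through the zipfile API.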
The `cmp_PyFMI_QSS.py` script will run and compare the PyFMI and QSS simulations of the local model.
In addition to passing PyFMI and QSS options through, it accepts options such as:
- `--cmp=Variable` to specify a variable to compare
- `--cmp=Variable=RMS` to specify a variable to compare and an RMS difference limit to compare against
- `--red=File` to redirect output to the specified file
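A sketch of how the --cmp=Variable and --cmp=Variable=RMS option forms described above might be parsed (hypothetical, not the script's actual code):

```python
def parse_cmp(arg):
    """Parse a --cmp option: 'Variable' alone, or 'Variable=RMS' with an
    RMS difference limit. Returns (variable_name, rms_limit_or_None)."""
    value = arg[len("--cmp="):]
    name, sep, rms = value.partition("=")
    return name, (float(rms) if sep else None)

print(parse_cmp("--cmp=zone.TAir"))       # ('zone.TAir', None)
print(parse_cmp("--cmp=zone.TAir=0.05"))  # ('zone.TAir', 0.05)
```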
The PyFMI and QSS runs are set to use only sampled output to aid in the automated comparison: sampled QSS output may not show key events accurately.
Comparison wrapper scripts, such as `cmp_CVode_QSS3_Buildings.py`, can be used to run the comparison on a set of models, including any desired custom options.
By including RMS "pass" limits these can serve as a type of regression test to make sure that OCT and QSS updates do not cause unexpected solution discrepancies.
The `cmp_PyFMI_QSS_yaml.py` script can compare the YAML results files from two comparison runs, with an optional relative tolerance argument to use when comparing variable RMS differences.
Run the `comparison` script from the `tst` sub-directory of the model's directory, passing the directories of the two results to be compared, such as:
comparison ../OCT ../QSS/QSS2
This generates report (`.rpt`) files for each pair of signals compared, a summary (`.sum`) file listing the number of signal comparisons that pass and fail, a 0-size pass (`.pass`) or fail (`.fail`) file, and PDFs with plots of the signal pairs that fail, showing the signal overlay and difference plots.
- This will use the `out` sub-directory of the specified directory if no `.out` files are found.
- `comparison` wraps `simdiff.py` with the default comparison testing options.
- The tolerances used by `comparison` may not be appropriate for all models.
- Signals that represent discrete or boolean variables are not well-compared by a worst-case difference magnitude criterion because slight timing differences in the discrete value changes can cause large differences. The use of RMS or integral-based metrics for such signals will be explored.
- Signal comparisons are based on interpolating the pairs of signals onto the same time steps. This can cause the apparent signal differences to "bounce" between time steps when the sampling resolution is low, giving an artificially high worst-case difference. The `--coarse` option in `simdiff.py` can reduce this effect when one signal has much more frequent sampling by only measuring the difference at the time steps of the "coarser" (lower sampling rate) signal. A combination of the `--coarse` option, refining the tolerances, and adjusting the QSS output sampling rates will probably be used to obtain more meaningful comparisons. (PyFMI doesn't appear to offer a method of decreasing the simulation time steps to obtain a higher sampling rate.) For now, many models with a very good match between modeling tools will be reported as "failed" by `comparison`.
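The --coarse idea can be illustrated with a small sketch: measure differences only at the coarser signal's sample times, interpolating the denser signal onto them (pure Python, hypothetical data):

```python
from bisect import bisect_left

def interp(t, times, values):
    """Linear interpolation of a sampled signal at time t, clamped to
    the signal's end values outside its time range."""
    i = bisect_left(times, t)
    if i == 0:
        return values[0]
    if i == len(times):
        return values[-1]
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)
    return values[i - 1] * (1.0 - w) + values[i] * w

def coarse_max_diff(coarse_t, coarse_v, fine_t, fine_v):
    """Max difference measured only at the coarser signal's time steps,
    mirroring the effect of the --coarse option."""
    return max(abs(v - interp(t, fine_t, fine_v))
               for t, v in zip(coarse_t, coarse_v))
```

Comparing only at the coarse signal's sample times avoids the artificial "bounce" introduced when both signals are interpolated onto a dense, mismatched time grid.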
Run the `regression` script from the `tst` sub-directory of the model's directory, passing the directories of the two results to be compared, such as:
regression ../QSS/QSS2/new ../QSS/QSS2
This generates report (`.rpt`) files for each pair of signals compared, a summary (`.sum`) file listing the number of signal comparisons that pass and fail, a 0-size pass (`.pass`) or fail (`.fail`) file, and PDFs with plots of the signal pairs that fail, showing the signal overlay and difference plots.
- This will use the `out` sub-directory of the specified directory if no `.out` files are found.
- `regression` wraps `simdiff.py` with the default regression testing options.
- The tolerances used by `regression` may not be appropriate for all models.
- Signals that represent discrete or boolean variables are not well-compared by a worst-case difference magnitude criterion because slight timing differences in the discrete value changes can cause large differences. The use of RMS or integral-based metrics for such signals will be explored.
- After regression testing is found to be satisfactory, new results will be copied over the prior version's results.