A collection of tests for constraint-based metabolic models.
There are currently two main test suites implemented:
- FROG reproducibility and curation checks
- MEMOTE-style model consistency and annotation checks
You can generate FROG reports using the function `FROG.generate_report` and compare the "compatibility" of two reports using `FROG.compare_reports`. See the documentation for more details.
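For example, the two functions might be combined as follows. This is a minimal sketch: the exact keyword arguments of `FROG.generate_report` and `FROG.compare_reports` are assumptions here, so check the documentation for the real signatures.

```julia
using FBCModelTests, GLPK

# generate a FROG report for a model, writing it into a directory
# (argument names are illustrative assumptions)
FBCModelTests.FROG.generate_report(
    "model.xml";
    report_dir = "report_dir",
    optimizer = GLPK.Optimizer,
)

# compare the freshly generated report against the author's reference report
FBCModelTests.FROG.compare_reports("report_dir", "reference_dir")
```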
The intended workflow with FROG reports is the following:
- If you are a model author, you generate a FROG report that serves as a reference solution for your metabolic model, and distribute it along with the model.
- If you are a model curator, you request the FROG report from the original author as a "certificate" of the model's intended functionality.
- If you use someone else's model, you generate another FROG report with your own analysis software and compare it against the original author's report, to verify that your software interprets the model information correctly and that the solutions are compatible within a reasonable tolerance.
The implementation is based on the description at EBI's FBC curation site, with some details following the decisions in the fbc_curation Python tool (working with COBRApy) by Matthias König (@matthiaskoenig).
The implementation in FBCModelTests.jl is mostly authored by Mirek Kratochvíl (@exaexa) with parts contributed by St. Elmo Wilken (@stelmo).
You can use the supplied scripts to conveniently run FROG from a command line on a system that has FBCModelTests.jl (and possibly a solver) already installed.
After copying the files from the `bin/` directory of this repository into your `$PATH`, you can use them to generate and compare FROG reports as follows:
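One way to set this up, assuming the current directory is a checkout of this repository, is to extend `$PATH` for the current shell session:

```shell
# add this repository's bin/ directory to PATH for the current shell session
# (assumes the current directory is a checkout of this repository)
export PATH="$PWD/bin:$PATH"
```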
```sh
$ fbcmt-run-frog -s GLPK model.xml report_dir
$ fbcmt-compare-frog report_dir other_dir
```
A pre-packaged dockerized version of the commands is available from GHCR. The following commands run the dockerized versions of the above scripts:
```sh
$ docker run -ti --rm -v $PWD:/data -w /data ghcr.io/lcsb-biocore/docker/fbcmodeltests-run-frog -s GLPK model.xml report_dir
$ docker run -ti --rm -v $PWD:/data -w /data ghcr.io/lcsb-biocore/docker/fbcmodeltests-compare-frog report_dir other_dir
```
Docker containers may be reused and executed in many other environments: using podman allows you to run them without installation in some HPC setups, and the Dockerized-tool capability of Galaxy allows you to run the model tests in many institutional cloud-computing services and local Galaxy instances.
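For instance, in a rootless HPC setup the same image might be run with podman. This is a sketch: the image name is the one used with docker above, and the flags may need adjusting for your site:

```sh
podman run -ti --rm -v $PWD:/data -w /data ghcr.io/lcsb-biocore/docker/fbcmodeltests-run-frog -s GLPK model.xml report_dir
```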
The primary entry point for the MEMOTE-style test suite implemented here is the function `run_tests`. When building a model, it is most convenient to incorporate the tests into the model's CI. Another option is to use the command-line functionality and save the output for later analysis.
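As an illustration, a CI job for a model repository could look roughly like the following hypothetical GitHub Actions fragment; the model path and package versions are placeholders, not part of this package:

```yaml
# hypothetical CI job that runs the MEMOTE-style tests on a model
name: model-tests
on: [push]
jobs:
  memote:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - run: julia -e 'import Pkg; Pkg.add(["FBCModelTests", "GLPK"])'
      - run: julia -e 'using FBCModelTests, GLPK;
                       FBCModelTests.Memote.run_tests("model.xml", GLPK.Optimizer)'
```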
To run the test suite on a toy model, use `run_tests`:

```julia
using FBCModelTests, GLPK, Distributed
addprocs(10)
FBCModelTests.Memote.run_tests("e_coli_core.json", GLPK.Optimizer; workers=workers())
```
Any optimizer supported by JuMP can be used. The output of `run_tests` follows the standard Julia unit-testing scheme. However, in the REPL the full output is usually truncated and only a summary is shown. If you want more details about where and why your model failed certain tests, it is best to capture the output and save it to a file; a convenient way to do this is with ansi2html. Additionally, to make the output more display-friendly, we recommend using `run_tests_toplevel` instead of `run_tests`.
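If you prefer capturing the output from within Julia rather than from the shell, something like the following sketch could work (it assumes `run_tests_toplevel` takes the same arguments as `run_tests`; check the documentation):

```julia
using FBCModelTests, GLPK

# capture the test output into a file for later conversion with ansi2html
open("e_coli_core.test.out", "w") do io
    redirect_stdout(io) do
        FBCModelTests.Memote.run_tests_toplevel("e_coli_core.json", GLPK.Optimizer)
    end
end
```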
An example workflow uses the scripts located in `bin/`:

```sh
julia --color=yes fbcmt-memote-run -s GLPK -w 6 e_coli_core.xml > e_coli_core.test.out
ansi2html < e_coli_core.test.out > e_coli_core.test.html
```
The resulting HTML file can be inspected in any browser.
See the function documentation for additional test configuration information. Note that the tests implemented here are significantly more conservative than those in the original MEMOTE. In particular, no heuristics are used to guess reaction types (e.g., biomass, ATP maintenance, transport, exchange); only SBO annotations are used for this purpose, because only these are actually standardized. Consequently, all tests that rely on properly annotated reactions will fail if the annotations are not incorporated into the model being tested.
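For instance, a biomass reaction is recognized only through its SBO annotation in the SBML file, roughly like the fragment below. This is for illustration only; SBO:0000629 is the standard SBO term for biomass production:

```xml
<!-- the sboTerm attribute is what the tests use to recognize the biomass reaction -->
<reaction id="BIOMASS_Ecoli_core" sboTerm="SBO:0000629" reversible="false">
  <!-- ... reactants, products, ... -->
</reaction>
```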
The implementation in FBCModelTests.jl is mostly authored by St. Elmo Wilken (@stelmo) with parts contributed by Mirek Kratochvíl (@exaexa), Vincent M. von Häfen (@vm-vh), and Flora Schlüter (@Fl-Sch).
The FBCModelTests.jl package is developed at the Luxembourg Centre for Systems Biomedicine of the University of Luxembourg (uni.lu/lcsb) and the Institute for Quantitative and Theoretical Biology at the Heinrich Heine University Düsseldorf (qtb.hhu.de). The development was supported by the European Union's Horizon 2020 Programme under the PerMedCoE project (permedcoe.eu), agreement no. 951773.