Admins: Will Usher and Jon Herman
Thanks to all those who have contributed so far!
We use GitHub issues to keep track of bugs and to answer questions about the use of the library.
For bugs, create a new issue on GitHub. Use this to describe the nature of the bug and the conditions needed to recreate it, including operating system and Python version.
If you have a question on interpretation of results, then we may be able to help.
We cannot answer specific implementation questions (such as 'how do I run my model with SALib?').
You can format your questions using GitHub Markdown, which makes it easy to paste in short snippets of code. If including a very long Python error traceback, please use a GitHub gist.
To contribute new code, submit a pull request. There are two instances in which you may want to contribute code: to fix a bug, or to add a new feature, such as a new sensitivity analysis method.
.. note::
We **strongly** recommend using a virtual environment setup, such as
``venv`` or ``conda``.
First, fork a copy of the main SALib repository on GitHub onto your own account and then create your local repository via::
git clone git@github.com:YOURUSERNAME/SALib.git
cd SALib
Next, set up your development environment. With conda installed (through
`Miniforge or Mambaforge <https://github.com/conda-forge/miniforge>`_,
`Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_ or
`Anaconda <https://www.anaconda.com/products/individual>`_), execute the
following commands at the terminal from the base directory of your
`SALib <https://github.com/SALib/SALib>`_ clone::
# Create an environment with all development dependencies
conda env create -f environment.yml # works with `mamba` too
# Activate the environment
conda activate SALib
Finally, you can install SALib in editable mode in your environment::
pip install -e .
First, create a new issue on GitHub with the label ``bug``. Use this to describe the nature of the bug and the conditions needed to recreate it.
Then, please create a new branch with the name ``bug_xxx``, where ``xxx`` is the number of the issue.
If possible, write a test which reproduces the bug. The tests are stored in the ``tests/`` folder and are run with ``pytest`` from the root of the project. Individual tests can be run by specifying a file, or a file and a test function.
For example::
$ pytest # run all tests
$ pytest tests/test_file.py # run tests within a specific file
$ pytest tests/test_file.py::specific_function # run a specific test
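For instance, a regression test for a hypothetical sampling bug might look like the sketch below (the issue number, test name and expected behaviour are placeholders; adapt them to the bug you are fixing)::

    import numpy as np
    from SALib.sample import saltelli


    def test_issue_xxx_sample_shape():
        """Regression test for (hypothetical) issue #xxx."""
        problem = {
            "num_vars": 3,
            "names": ["x1", "x2", "x3"],
            "bounds": [[-np.pi, np.pi]] * 3,
        }
        X = saltelli.sample(problem, 128)
        # One column per parameter, regardless of the number of rows
        assert X.shape[1] == problem["num_vars"]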
Then, fix the bug in the code so that the test passes.
Submit a pull request with a descriptive title and reference the issue in the text. Once a pull request is submitted, the tests will run on Travis CI. If these tests pass, we will review and merge in your changes.
Methods in SALib follow a decoupled sample/analysis workflow. In other words, the generation of parameter samples and the calculation of sensitivity indices can be performed in two separate steps. This is because many users have models in languages other than Python, so sending data to/from the model is left to the user. All methods should support a command-line interface on top of the Python functions.
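For example, the decoupled workflow for the Sobol' method looks like this (a minimal sketch using the Ishigami test function bundled with SALib; in practice the model-evaluation step often happens outside Python)::

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol
    from SALib.test_functions import Ishigami

    problem = {
        "num_vars": 3,
        "names": ["x1", "x2", "x3"],
        "bounds": [[-np.pi, np.pi]] * 3,
    }

    # Step 1: generate parameter samples (these could be written to a file
    # and passed to an external model)
    param_values = saltelli.sample(problem, 1024)

    # Step 2: evaluate the model -- here the built-in Ishigami function
    Y = Ishigami.evaluate(param_values)

    # Step 3: compute sensitivity indices from the model outputs
    Si = sobol.analyze(problem, Y)
    print(Si["S1"])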
To add a new method, create an issue on GitHub with the label ``add_method``, if one does not already exist. Please describe the method, and link to the peer-reviewed article in which the method is described. The master branch should only contain published methods. First check the current open issues with this label for inspiration, or to see if someone is already working on a certain method.
We use GitHub issues to track ideas and enhancements. If you are looking to contribute new methods to the library, check the labels for inspiration or to ensure you are not duplicating another's work.
Then, create a new branch with a useful name, such as ``new_method_method_name``. Methods should consist of:
- A sampling module (a ``method_name.py`` file in ``SALib.sample``). This will contain, among other things, a function ``sample(problem, ...)`` that accepts a problem dictionary and returns a numpy array of samples, one column for each parameter. See ``SALib.sample.saltelli`` for an example (a skeleton of the sampling, analysis and CLI functions is sketched below, after this list).
- An analysis module (a ``method_name.py`` file in ``SALib.analyze``). This will contain a function ``analyze(problem, ...)`` that returns a dictionary of sensitivity indices. See ``SALib.analyze.sobol`` for an example.
- An example shell script and Python file in the ``examples`` folder, ideally using a test function included in SALib such as the Ishigami or Sobol-G functions.
- Docstrings for the ``sample`` and ``analyze`` functions that include citations. Please add an entry to ``docs/index.rst`` to add your method documentation to the concise API reference.
- Functions to support use of the method through the command line interface (CLI). These are ``cli_parse()`` and ``cli_action()``, which parse command line options and run the sampling and analysis respectively. See the implementations in ``SALib.analyze.delta`` for an example.
- Tests in the ``tests`` folder. We're using Travis CI and Coveralls. Ideally, every new function will have one or more corresponding tests to check that errors are raised for invalid inputs, and that functions return matrices of the proper sizes (for an example, see here). At a minimum, please include a regression test for the Ishigami function, in the same format as all of the other methods (see here). This will at least ensure that future updates don't break your code!
Finally, submit a pull request. Either @willu47 or @jdherman will review the pull request and merge in your changes.
Contributions not related to new methods are also welcome. These might include new test functions (see SALib.test_functions for how these are set up), or other code that is general across some or all of the methods. This general code is currently included in SALib.util.__init__.py.
Most of the sampling techniques make heavy use of pseudo-random number generators. We primarily use ``numpy.random``, as the Python standard library ``random`` module was inconsistent across Python 2 and 3. When writing tests for methods which use these random number generators, set the seed using ``numpy.random.seed(SEED)``, where ``SEED`` is a fixed integer. This ensures that your tests are repeatable.
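For example, a repeatability test might look like this sketch (``SALib.sample.latin`` is used purely for illustration, assuming it draws from numpy's global random state; the same pattern applies to any sampler that does)::

    import numpy as np
    from SALib.sample import latin

    SEED = 42


    def test_latin_sampling_is_repeatable():
        problem = {
            "num_vars": 2,
            "names": ["x1", "x2"],
            "bounds": [[0.0, 1.0], [0.0, 1.0]],
        }
        np.random.seed(SEED)
        first = latin.sample(problem, 10)
        np.random.seed(SEED)
        second = latin.sample(problem, 10)
        # Identical seeds should give identical samples
        np.testing.assert_allclose(first, second)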
- SALib contains a few basic types of plots, especially for the Morris method. Indicative plots can be produced by calling the ``.plot()`` method on the results.
- However, we generally assume that plot types and styles are left to the user, as these are often application-specific. Users interested in more complex plot types should check out the savvy library, which is built on top of SALib.
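For instance (a minimal sketch, assuming ``Si`` is the result returned by one of the ``analyze`` functions, as in the Sobol' workflow example earlier)::

    import matplotlib.pyplot as plt

    # Quick, indicative bar plots of the sensitivity indices
    Si.plot()
    plt.show()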
Thanks again!
The development team follows this process to make a release:
- Document an overview of changes since the last release in ``CHANGELOG.MD``.
- Update the version in the main ``__init__.py``.
- Build locally using ``hatch build``, and verify the content of the artifacts.
- Submit a PR, wait for tests to pass, and merge the release into ``main``.
- Tag the release with the version number and push to the SALib repo.
- Check that the release has been deployed to PyPI.
- Check that the documentation is built and deployed to readthedocs (http://salib.readthedocs.org).
- Check that the auto-generated PR is auto-merged on the conda-forge feedstock repo (conda-forge/salib-feedstock).
- Update the development roadmap on GitHub.
Assuming the current location is the project root (the ``SALib`` directory)::
$ conda install pydata-sphinx-theme myst-parser -c conda-forge
$ sphinx-build -b html docs docs/html
A copy of the documentation will be in the ``docs/html`` directory. Open ``index.html`` to view it.