Merge pull request #2 from ClimateImpactLab/manuscript-revisions
manuscript revisions
bolliger32 authored Mar 21, 2023
2 parents f49619e + d5006a7 commit 33ca446
Showing 46 changed files with 16,216 additions and 10,075 deletions.
8 changes: 8 additions & 0 deletions HISTORY.rst
@@ -5,6 +5,14 @@ Unreleased
------
* Use general Zenodo DOI numbers referencing latest version of each deposit

v1.1.0
------
* Addition of AR6 and Sweet scenarios
* Addition of `execute_pyciam` wrapper function
* Updates to SLIIDERS inputs based on reviewer comments
* General repo hygiene
* Additional/updated figures/tables/results in `post-processing/pyCIAM-results-figures.ipynb`

v1.0.2
------
* Add HISTORY.rst
51 changes: 36 additions & 15 deletions README.md

Large diffs are not rendered by default.

36 changes: 36 additions & 0 deletions environment/environment.yml
@@ -0,0 +1,36 @@
name: pyciam
channels:
- conda-forge

dependencies:
- bottleneck=1.3
- bokeh=2.4.3 # for use of dask dashboard
- cartopy=0.21.1
- cloudpathlib=0.13
- distributed=2023.3.1
- flox=0.6.8
- fsspec=2023.3.0
- geopandas=0.12.2
- gitpython=3.1.31 # unmarked dependency of rhg_compute_tools
- jupyterlab=3.6.1
- matplotlib=3.7.1
- numpy=1.24.2
- numpy_groupies=0.9.20
- oct2py=5.6.0
- octave=7.3.0
- openpyxl=3.1.1
- pandas=1.5.3
- papermill=2.3.4
- pint-xarray=0.3
- pip=23.0.1
- python=3.11.0
- requests=2.28.2
- scikit-learn=1.2.2
- scipy=1.10.1
- tqdm=4.65.0
- xarray=2023.2.0
- zarr=2.14.2
- pip:
# - python-ciam==1.1.0
- rhg_compute_tools==1.3
- parameterize_jobs==0.1.1
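
For reference, this is a standard conda environment spec; it can typically be materialized with `conda env create -f environment/environment.yml` and activated with `conda activate pyciam`, matching the `name:` field above (substituting `mamba` for `conda` is a common speed-up).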
25 changes: 14 additions & 11 deletions notebooks/README.md
@@ -1,19 +1,22 @@
# Executing pyCIAM

- This README describes the workflow used to produce the results contained in Depsky et al. 2022. Asterisks correspond to notebooks that contain necessary steps even for those wishing to run pyCIAM in a different context; for example, notebooks that generate figures or conduct auxiliary analyses for Depsky et al. 2022 are not starred.
+ This README describes the workflow used to produce the results contained in Depsky et al. 2023. The lists of notebooks necessary to run an example pyCIAM workflow or to recreate the full set of Depsky et al. 2023 results are contained in [run_example.sh](run_example.sh) and [run_full_replication.sh](run_full_replication.sh), respectively.

- The aggregated coastal input datasets required for pyCIAM are [SLIIDERS-ECON](https://doi.org/10.5281/zenodo.6010452) and [SLIIDERS-SLR](https://doi.org/10.5281/zenodo.6012027). Alternatively, users may construct their own inputs, for example to integrate alternative underlying data layers. In that case, the inputs must still conform to the format of the SLIIDERS datasets. We recommend starting from the SLIIDERS construction code found in the [SLIIDERS repository](https://github.com/ClimateImpactLab/sliiders).
+ The aggregated coastal input dataset required for pyCIAM is [SLIIDERS](https://doi.org/10.5281/zenodo.6449230). Alternatively, users may construct their own inputs, for example to integrate alternative underlying data layers. In that case, the inputs must still conform to the format of the SLIIDERS dataset. We recommend starting from the SLIIDERS construction code found in the [SLIIDERS repository](https://github.com/ClimateImpactLab/sliiders).

A set of common filepaths, settings, and helper functions used for this workflow is contained in [shared.py](./shared.py). These should be adjusted as needed; in particular, you will need to adjust the filepaths to suit your data storage structure.

The following notebooks should be run in the described order to replicate the manuscript results.

- 1. [download-data.ipynb](./download-data.ipynb): This notebook downloads some input data programmatically and also provides instructions for manual data downloads where necessary.
- 2. *[collapse-sliiders-econ-to-seg.ipynb](./collapse-sliiders-econ-to-seg.ipynb): SLIIDERS-ECON is provided with each analysis unit corresponding to a unique combination of admin1 region and coastal segment. This is helpful for aggregating results to admin1-level outputs, since the decision-making agent in pyCIAM operates at the segment level. For certain use cases, e.g. creating the surge lookup table, the additional admin1 dimension is unnecessary and leads to excess computational demands. Thus, we collapse the dataset to the segment level. This notebook would not be necessary if, for example, a user created a SLIIDERS-ECON alternative indexed only by segment.
- 3. [create-diaz-pyCIAM-inputs.ipynb](./create-diaz-pyCIAM-inputs.ipynb): This notebook generates a SLIIDERS-like input dataset that reflects the inputs used in [Diaz 2016](https://link.springer.com/article/10.1007/s10584-016-1675-4#Sec13). This is necessary for comparing results from the original CIAM paper to the updated version. These comparisons are performed and reported on in Depsky et al. 2022.
- 4. [create-slr-quantile.ipynb](./create-slr-quantile.ipynb): This notebook reduces the Monte Carlo-based SLIIDERS-SLR dataset to the set of quantiles defined in [shared.py](./shared.py), because these are all that are reported in Depsky et al. 2022. pyCIAM is also fully capable of running on the full set of Monte Carlo simulations from SLIIDERS-SLR, so users may choose to skip this step and point later notebooks to the raw SLIIDERS-SLR dataset.
- 5. *[create-surge-lookup-tables.ipynb](./create-surge-lookup-tables.ipynb): This notebook creates segment-adm1-specific lookup tables that estimate the fraction of total capital stock lost and the fraction of total population killed as a function of extreme sea level height. Computing these on the fly for a large number of SLR simulations is computationally intractable given the numerical integration needed, so lookup tables are used to enable these calculations.
- 6. [fit-movefactor.ipynb](./fit-movefactor.ipynb): This notebook performs the empirical estimation of the relocation cost parameter `movefactor`, as detailed in Depsky et al. 2022. It is purely for analysis and does not create any output datasets necessary for other notebooks.
- 7. *[run-pyCIAM-slrquantiles.ipynb](./run-pyCIAM-slrquantiles.ipynb): Run pyCIAM on the SLR quantile outputs from `create-slr-quantile.ipynb`.
- 8. [run-pyCIAM-diaz2016.ipynb](./run-pyCIAM-diaz2016.ipynb): Run pyCIAM using a configuration and parameter set analogous to that used in the original CIAM paper. These outputs are used for validation and comparison within Depsky et al. 2022.
- 9. [pyCIAM-results-figures.ipynb](./pyCIAM-results-figures.ipynb): This notebook generates numbers and figures used in Depsky et al. 2022.
+ 1. [data-acquisition.ipynb](data-acquisition.ipynb): This notebook downloads all input data necessary to replicate the results of Depsky et al. 2023, with options to download only the subset necessary to run an example pyCIAM model.
+ 2. [data-processing/collapse-sliiders-to-seg.ipynb](data-processing/collapse-sliiders-to-seg.ipynb): SLIIDERS is provided with each analysis unit corresponding to a unique combination of admin1 region and coastal segment. This is helpful for aggregating results to admin1-level outputs, since the decision-making agent in pyCIAM operates at the segment level. For certain use cases, e.g. creating the surge lookup table, the additional admin1 dimension is unnecessary and leads to excess computational demands. Thus, we collapse the dataset to the segment level (a schematic version of this collapse is sketched after this list). This notebook would not be necessary if, for example, a user created a SLIIDERS alternative indexed only by segment.
+ 3. [data-processing/create-diaz-pyCIAM-inputs.ipynb](data-processing/create-diaz-pyCIAM-inputs.ipynb): This notebook generates a SLIIDERS-like input dataset that reflects the inputs used in [Diaz 2016](https://link.springer.com/article/10.1007/s10584-016-1675-4#Sec13). This is necessary for comparing results from the original CIAM paper to the updated version. These comparisons are performed and reported on in Depsky et al. 2023.
+ 4. [data-processing/slr/AR6.ipynb](data-processing/slr/AR6.ipynb): This notebook processes SLR projections generated by the FACTS framework for the AR6 emissions scenarios.
+ 5. [data-processing/slr/sweet.ipynb](data-processing/slr/sweet.ipynb): This notebook processes FACTS-generated projections grouped by end-of-century GMSL level, as in Sweet et al. 2022.
+ 6. [data-processing/slr/AR5](data-processing/slr/AR5): These notebooks run LocalizeSL (the predecessor to FACTS) on a variety of SLR scenarios based largely on the IPCC AR5 emissions scenarios. See the [README inside this folder](data-processing/slr/AR5/README.md) for more details.
+ 7. [models/create-surge-lookup-tables.ipynb](models/create-surge-lookup-tables.ipynb): This notebook creates segment-adm1-specific lookup tables that estimate the fraction of total capital stock lost and the fraction of total population killed as a function of extreme sea level height. Computing these on the fly for a large number of SLR simulations is computationally intractable given the numerical integration needed, so lookup tables are used to make these calculations feasible (the idea is illustrated after this list).
+ 8. [models/fit-movefactor.ipynb](models/fit-movefactor.ipynb): This notebook performs the empirical estimation of the relocation cost parameter `movefactor`, as detailed in Depsky et al. 2023. It is purely for analysis and does not create any output datasets necessary for other notebooks.
+ 9. [models/run-pyCIAM-slrquantiles.ipynb](models/run-pyCIAM-slrquantiles.ipynb): This notebook is a thin wrapper that calls `execute_pyciam()` with the appropriate inputs (an illustrative call is sketched after this list).
+ 10. [models/run-pyCIAM-diaz2016.ipynb](models/run-pyCIAM-diaz2016.ipynb): This notebook is a thin wrapper that calls `execute_pyciam()` with inputs and configuration consistent with Diaz 2016. These outputs are used for validation and comparison within Depsky et al. 2023.
+ 11. [post-processing/pyCIAM-results-figures.ipynb](post-processing/pyCIAM-results-figures.ipynb): This notebook generates the numbers and figures used in Depsky et al. 2023.
+ 12. [post-processing/zenodo-upload.ipynb](post-processing/zenodo-upload.ipynb): This notebook can be used by core model developers to upload new versions of SLIIDERS and/or model outputs to Zenodo. Typical users will not need it.
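
For orientation, the collapse in step 2 is essentially a grouped reduction. Below is a minimal sketch in xarray, assuming a SLIIDERS-like dataset with a `seg_adm` dimension carrying a `seg` coordinate; the dimension, coordinate, and store names are assumptions for illustration, not the exact SLIIDERS schema.

```python
# Minimal sketch: collapse a SLIIDERS-like dataset from the
# (segment, admin1) level to the segment level. The `seg_adm`
# dimension, `seg` coordinate, and store paths are assumptions.
import xarray as xr

ds = xr.open_zarr("sliiders.zarr")

# Each element of `seg_adm` is a unique segment/admin1 pair; `seg`
# labels the segment each pair belongs to. Sum extensive variables
# (population, capital, area) over the admin1 pieces of each segment.
# Intensive variables (e.g. elevations, unit costs) would instead
# need appropriately weighted means.
seg_level = ds.groupby("seg").sum("seg_adm")

seg_level.to_zarr("sliiders-seg.zarr", mode="w")
```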
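The surge lookup tables in step 7 exist to replace a per-simulation numerical integration with a one-time precomputation plus cheap interpolation. The toy example below illustrates only that pattern; the damage function and grids are invented stand-ins, not pyCIAM's actual integrand.

```python
# Toy illustration of the lookup-table pattern: precompute an
# expensive damage calculation on a grid of extreme sea level (ESL)
# heights once, then serve per-simulation queries via interpolation.
import numpy as np

def frac_capital_lost(esl_height_m, elev_m, capital_by_elev):
    # Stand-in integrand: all capital below the water line is lost.
    flooded = elev_m < esl_height_m
    return capital_by_elev[flooded].sum() / capital_by_elev.sum()

# Hypothetical elevation distribution of capital for one segment
elev_m = np.linspace(0, 10, 101)
capital_by_elev = np.exp(-elev_m)  # more capital at low elevations

# Build the lookup table once on a grid of ESL heights...
esl_grid = np.linspace(0, 6, 61)
table = np.array([frac_capital_lost(h, elev_m, capital_by_elev) for h in esl_grid])

# ...then evaluate it cheaply for any number of SLR simulations.
esl_draws = np.random.default_rng(0).uniform(0, 6, size=10_000)
losses = np.interp(esl_draws, esl_grid, table)
```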
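Steps 9 and 10 rely on `execute_pyciam`, added in v1.1.0 as a single entry point wrapping a full model run. Its actual signature is defined in the pyCIAM package; the call below is a hedged illustration only, with assumed argument names and hypothetical paths based on the inputs this README describes.

```python
# Illustrative only: argument names and paths are assumptions, not
# pyCIAM's documented API. Consult the package for the real signature.
from pyCIAM import execute_pyciam  # assumes a top-level export

execute_pyciam(
    "params.json",     # model configuration/parameters (hypothetical path)
    "sliiders.zarr",   # SLIIDERS input store (hypothetical path)
    ["slr-ar6.zarr"],  # SLR projection store(s) (hypothetical path)
    ["ar6"],           # label(s) for the SLR scenario set (assumed)
    "outputs.zarr",    # destination for model output (hypothetical path)
)
```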
168 changes: 0 additions & 168 deletions notebooks/collapse-sliiders-econ-to-seg.ipynb

This file was deleted.

103 changes: 0 additions & 103 deletions notebooks/create-slr-quantile.ipynb

This file was deleted.
