diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5e62fc2..403075e 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -20,11 +20,11 @@ conda activate pyprecis-environment
:exclamation: *Note: As of v1.0 we are unable to provision the model data necessary for reproducing the full PyPRECIS learning environment via GitHub due to its large file size. Contact the PRECIS team for more information.*
## Before you start...
-Read through the current issues to see what you can help with. If you have your own ideas for improvements, please start a new issues so we can track and discuss your improvement. You must create a new branch for any changes you make.
+Read through the current issues to see what you can help with. If you have your own ideas for improvements, please start a new issue so we can track and discuss your improvement. You must create a new branch for any changes you make.
**Please take note of the following guidelines when contributing to the PyPRECIS repository.**
-* Please do **not** make changes to the `master` branch. The `master` branch is reserved for files and code that has been fully tested and reviewed. Only the core PyPRECIS developers can/should push to the `master` branch.
+* Please do **not** make changes to `main` or `develop` branches. The `main` branch is reserved for files and code that has been fully tested and reviewed. Only the core PyPRECIS developers can push to the `main` and `develop` branches.
* The `develop` branch contains the latest holistic version of the `PyPRECIS` repository. Please branch off `develop` to fix a particular issue or add a new feature.
* Please use the following tokens at the start of a new branch name to help sign-post and group branches:
@@ -66,5 +66,5 @@ have questions.**
-© British Crown Copyright 2018 - 2019, Met Office
+© British Crown Copyright 2018 - 2022, Met Office
diff --git a/README.md b/README.md
index 691b3f5..6af1d82 100644
--- a/README.md
+++ b/README.md
@@ -31,7 +31,7 @@ PyPRECIS is built on [Jupyter Notebooks](https://jupyter.org/), with data proces
Further information about PRECIS can be found on the [Met Office website](https://www.metoffice.gov.uk/precis).
## Contents
-The teaching elements of PyPRECIS are contained in the `notebooks` directory. The primary worksheets are:
+The teaching elements of PyPRECIS are contained in the `notebooks` directory. The core worksheets are:
Worksheet | Aims
:----: | -----------
@@ -42,7 +42,7 @@ Worksheet | Aims
[5](notebooks/worksheet5.ipynb) | Have an appreciation for working with daily model data<br>Understand how to calculate some useful climate extremes statistics<br>Be aware of some coding strategies for dealing with large data sets
[6](notebooks/worksheet6.ipynb) | An extended coding exercise designed to allow you to put everything you've learned into practice
-Additional tutorials specific to the CSSP 20th Century reanalysis datasets:
+Additional tutorials specific to the CSSP 20th Century reanalysis dataset:
Worksheet | Aims
:----: | -----------
@@ -55,10 +55,17 @@ Three additional worksheets are available for use by workshop instructors:
* `makedata.ipynb`: Provides scripts for preparing raw model output for use in notebook exercises.
* `worksheet_solutions.ipynb`: Solutions to worksheet exercises.
-* `worksheet6example.ipynb`: Example code for Worksheet 6.
+* `worksheet6example.ipynb`: Example code for Worksheet 6.
## Data
-The data used in the worksheets is currently only available within the Met Office. Data relating to the CSSP_20CRDS_Tutorials is also available in Zarr format in an Azure Blob Storage Service. See the `data/DATA-ACESS.md` for further details.
+Data relating to the PyPRECIS project is currently held internally at the Met Office.
+
+The total data volume for the core worksheets is 36.68 GB, of which ~20 GB is raw pp data. This is too large to be stored on GitHub, or via Git LFS.
+As of v2.0, the storage solution for making this data available alongside the notebooks is still under investigation.
+
+Data relating to the **CSSP 20CRDS** tutorials is held online in an Azure Blob Storage Service. To access this data, users will need a valid shared access signature (SAS) token. The data is in [Zarr](https://zarr.readthedocs.io/en/stable/) format and the total volume is ~2 TB. The data is stored at hourly, 3-hourly, 6-hourly, daily and monthly frequencies, held separately under the `metoffice-20cr-ds` container on MS Azure. Monthly data is also available via [Zenodo](https://zenodo.org/record/2558135).
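+
+A minimal sketch of how the Zarr store might be opened once a SAS token has been obtained (assuming the `xarray`, `fsspec` and `adlfs` packages; the account name, token and path under the container are placeholders):
+
+```python
+import fsspec
+import xarray as xr
+
+# Hypothetical example: the storage account name, SAS token and store path
+# are placeholders and must be obtained from the PRECIS team.
+store = fsspec.get_mapper(
+    "az://metoffice-20cr-ds/monthly",
+    account_name="<storage-account>",
+    sas_token="<your-sas-token>",
+)
+ds = xr.open_zarr(store)
+```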
+
+
## Contributing
Information on how to contribute can be found in the [Contributing guide](CONTRIBUTING.md).
@@ -69,5 +76,5 @@ PyPRECIS is licenced under BSD 3-clause licence for use outside of the Met Offic
-© British Crown Copyright 2018 - 2020, Met Office
+© British Crown Copyright 2018 - 2022, Met Office
diff --git a/data/DATA-ACCESS.md b/data/DATA-ACCESS.md
deleted file mode 100644
index baffded..0000000
--- a/data/DATA-ACCESS.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Data Access
-
-Data relating to the PyPRECIS project is currently stored at /project/ciid/projects/PRECIS/worksheets/data
-
-The total data volume is 36.68 GB, of which ~20 GB is raw pp data. This is too large to be stored on github, or via git lfs.
-As of v1.0, the storage solution for making this data available alongside the notebooks is still under investgation.
-
-
-Data relating to the CSSP_20CRDS_Tutorials is held online in an Azure Blob Storage Service. To access this data user will need a valid SAS (shared access signature) token.
-
-The data is in Zarr format and the total volume is ~2TB. The data is in hourly, 3 hourly, 6 hourly, daily and monthly frequencies stored seperatrely uner the metoffice-20cr-ds container on MS-Azure.
-A copy of this data is also available at Met office linux file system.
\ No newline at end of file
diff --git a/dockerfile b/dockerfile
new file mode 100644
index 0000000..931cd9e
--- /dev/null
+++ b/dockerfile
@@ -0,0 +1,23 @@
+FROM continuumio/miniconda3
+
+RUN apt-get update
+
+# Set working directory for the project
+WORKDIR /app
+
+SHELL ["/bin/bash", "--login", "-c"]
+
+RUN apt-get install -y git
+
+# Create Conda environment from the YAML file
+COPY environment.yml .
+RUN pip install --upgrade pip
+
+RUN conda env create -f environment.yml
+
+RUN conda init bash
+
+# `conda activate` does not persist between Docker RUN layers, so use
+# `conda run` to install the kernel inside the environment instead.
+RUN conda run -n pyprecis-environment pip install ipykernel && \
+    conda run -n pyprecis-environment python -m ipykernel install --name pyprecis-training
+
diff --git a/environment.yml b/environment.yml
index c5adde3..07a8b8e 100644
--- a/environment.yml
+++ b/environment.yml
@@ -1,11 +1,17 @@
name: pyprecis-environment
channels:
- conda-forge
- - defaults
- dependencies:
- - python=3.6.6
- - numpy
- - matplotlib
- - cartopy=0.16.0
- - dask=0.19.4
- - iris=2.2.0
+dependencies:
+ - python=3.6.10
+ - iris=2.4.0
+ - numpy=1.17.4
+ - matplotlib=3.1.3
+ - nc-time-axis=1.2.0
+ - jupyter_client=6.1.7
+ - jupyter_core=4.6.3
+ - dask=2.11.0
+ - notebook=5.7.8
+ - mo_pack=0.2.0
+ - boto3
+ - botocore
+ - tqdm
diff --git a/notebooks/awsutils/README-AWS.md b/notebooks/awsutils/README-AWS.md
new file mode 100644
index 0000000..01f6e4a
--- /dev/null
+++ b/notebooks/awsutils/README-AWS.md
@@ -0,0 +1,129 @@
+
+## AWS
+
+### Create an EC2 instance
+
+* Select the eu-west-2 (London) region from the top right of the navigation bar
+* Click on Launch instance
+* Choose the Amazon Linux 2 AMI (HVM) Kernel 5.10 64-bit (x86) machine, click select
+* Choose t2.2xlarge and click next: configure instance details
+* Choose subnet default eu-west-2c
+* In IAM role, choose the existing trainings-ec2-dev role and click next: storage
+* 8 GB is fine, click next: add tags
+* Add the following tags:
+ * Name: [Unique Instance name]
+ * Tenable: FA
+ * ServiceOwner: [firstname.lastname]
+ * ServiceCode: PABCLT
+* Add a security group: select the existing security group IAStrainings-ec2-mo
+* Review and Launch, then select launch
+* It will prompt you to set a key pair (to allow ssh). Create a new key and download it.
+
+This will create the instance. To see the running instance, go to Instances; the instance state will be "Running".
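+
+The console steps above can also be scripted. A minimal boto3 sketch of the same launch (the AMI ID, key pair name and tag values are placeholders, not the real training values):
+
+```python
+import boto3
+
+# Hypothetical equivalent of the console steps above.
+ec2 = boto3.client("ec2", region_name="eu-west-2")
+response = ec2.run_instances(
+    ImageId="ami-xxxxxxxx",                    # placeholder Amazon Linux 2 AMI ID
+    InstanceType="t2.2xlarge",
+    KeyName="your-key-pair",                   # placeholder key pair name
+    IamInstanceProfile={"Name": "trainings-ec2-dev"},
+    MinCount=1,
+    MaxCount=1,
+    TagSpecifications=[{
+        "ResourceType": "instance",
+        "Tags": [
+            {"Key": "Name", "Value": "unique-instance-name"},
+            {"Key": "Tenable", "Value": "FA"},
+            {"Key": "ServiceOwner", "Value": "firstname.lastname"},
+            {"Key": "ServiceCode", "Value": "PABCLT"},
+        ],
+    }],
+)
+print(response["Instances"][0]["InstanceId"])
+```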
+
+### SSH to the instance from VDI
+
+
+* Save the key (.pem) to `~/.ssh` and set the permission: chmod 0400 ~/.ssh/your_key.pem
+* Open ~/.ssh/config and add the following:
+
+```
+Host ec2-*.eu-west-2.compute.amazonaws.com
+ IdentityFile ~/.ssh/your_key.pem
+ User ec2-user
+
+```
+
+* Find the public IPv4 DNS and use it to ssh in: ssh ec2-xx-xx-xx-xx.eu-west-2.compute.amazonaws.com. The public IPv4 DNS can be found in the instance details on AWS: click on your instance and it will open the details.
+
+* Remember to shut down the instance when not using it; this saves costs.
+### Create an S3 bucket
+
+* Go to the S3 service and press "create bucket"
+* Name the bucket
+* Set the region to EU (London) eu-west-2
+* Add tags:
+    * Name: [name of bucket or any unique name]
+    * ServiceOwner: [your-name]
+    * ServiceCode: PABCLT
+    * Tenable: FA
+* Click on "create bucket"
+
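+The same bucket can be created programmatically; a minimal boto3 sketch (the bucket name is a placeholder):
+
+```python
+import boto3
+
+# Hypothetical equivalent of the console steps above.
+s3 = boto3.client("s3", region_name="eu-west-2")
+s3.create_bucket(
+    Bucket="my-unique-bucket-name",            # placeholder bucket name
+    CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
+)
+s3.put_bucket_tagging(
+    Bucket="my-unique-bucket-name",
+    Tagging={"TagSet": [
+        {"Key": "Name", "Value": "my-unique-bucket-name"},
+        {"Key": "ServiceOwner", "Value": "your-name"},
+        {"Key": "ServiceCode", "Value": "PABCLT"},
+        {"Key": "Tenable", "Value": "FA"},
+    ]},
+)
+```
+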
+### Key configurations
+
+
+The above scripts run only when the config files contain the latest keys. To update the keys:
+
+* Go to AB climate training dev --> Administrator access --> command line or programmatic access
+* Copy the keys in "Option 1: Set AWS environment variables"
+* In VDI, paste these keys into ~/.aws/config (replacing any existing ones)
+* Add [default] as the first line
+* Copy the keys in "Option 2: Add a profile to your AWS credentials file"
+* In VDI, paste the keys into the credentials file ~/.aws/credentials (remove the first copied line, which looks something like [198477955030_AdministratorAccess])
+* Add [default] as the first line
+
+The config and credentials files should look like this (with your own keys):
+
+```
+[default]
+export AWS_ACCESS_KEY_ID="ASIAS4NRVH7LD2RRGSFB"
+export AWS_SECRET_ACCESS_KEY="rpI/dxzQWhCul8ZHd18n1VW1FWjc0LxoKeGO50oM"
+export AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjEGkaCWV1LXdlc3QtMiJH"
+```
+
+### Loading data on s3 bucket from VDI (using boto3)
+
+To upload file(s) to S3, use: /aws-scripts/s3_file_upload.py
+To upload directories to S3, use: /aws-scripts/s3_bulk_data_upload.py
+
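+For a quick one-off upload without the helper scripts, a minimal boto3 sketch (the local path and object key are placeholders):
+
+```python
+import boto3
+
+# Hypothetical example: upload a single local file to the ias-pyprecis bucket.
+s3 = boto3.client("s3")
+s3.upload_file(
+    Filename="/path/to/local/file.nc",  # placeholder local path
+    Bucket="ias-pyprecis",
+    Key="data/file.nc",                 # placeholder object key
+)
+```
+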
+### AWS Elastic container repository
+
+The following instructions are for creating an image repository on ECR and uploading a container image:
+
+* ssh to the previously created EC2 instance, make an empty Git repo:
+
+```
+sudo yum install -y git
+git init
+```
+* On VDI, run the following command to push the PyPRECIS repo containing the dockerfile to the EC2 instance:
+```
+git push :~
+```
+
+* Now check out the branch on EC2: git checkout [branch-name]
+* Install docker and start docker service
+
+```
+sudo amazon-linux-extras install docker
+sudo service docker start
+```
+
+* build docker image:
+
+```
+sudo docker build .
+```
+
+* Go to the AWS ECR console and "create repository"; make it private and name it
+
+* Once created, press "push commands"
+
+* Copy the commands and run them on the EC2 instance; this will push the container image to the repository. If you get a "permission denied" error, add "sudo" before "docker" in the command.
+
+
+
+### AWS Sagemaker: Run notebook using custom kernel
+The instructions below follow this tutorial:
+https://aws.amazon.com/blogs/machine-learning/bringing-your-own-custom-container-image-to-amazon-sagemaker-studio-notebooks/
+
+* Go to SageMaker and "open sagemaker domain"
+* Add user
+    * Enter a name and select the AmazonSageMaker execution role (the default one)
+
+* Once the user is created, go to "attach image"
+* Select "New Image" and add the image URI (copied from the image repository)
+* Give the new image a name and display name, select the SageMaker execution role, add tags, and attach the image
+* Add a kernel name and display name (both can be the same)
+* Now launch app -> Studio and it will open the notebook dashboard.
+* Select a Python notebook and add your custom-named kernel
diff --git a/notebooks/awsutils/fetch_s3_file.py b/notebooks/awsutils/fetch_s3_file.py
new file mode 100644
index 0000000..bb57852
--- /dev/null
+++ b/notebooks/awsutils/fetch_s3_file.py
@@ -0,0 +1,111 @@
+
+import io
+import os
+import boto3
+from urllib.parse import urlparse
+from fnmatch import fnmatch
+from shutil import copyfile
+
+
+def _fetch_s3_file(s3_uri, save_to):
+
+ bucket_name, key = _split_s3_uri(s3_uri)
+ print(f"Fetching s3 object {key} from bucket {bucket_name}")
+
+ client = boto3.client("s3")
+ obj = client.get_object(
+ Bucket=bucket_name,
+ Key=key,
+ )
+ with io.FileIO(save_to, "w") as f:
+ for i in obj["Body"]:
+ f.write(i)
+
+
+def _save_s3_file(s3_uri, out_filename, file_to_save="/tmp/tmp"):
+ bucket, folder = _split_s3_uri(s3_uri)
+ out_filepath = os.path.join(folder, out_filename)
+ print(f"Save s3 object {out_filepath} to bucket {bucket}")
+ client = boto3.client("s3")
+ client.upload_file(
+ Filename=file_to_save,
+ Bucket=bucket,
+ Key=out_filepath
+ )
+
+
+def _split_s3_uri(s3_uri):
+ parsed_uri = urlparse(s3_uri)
+ return parsed_uri.netloc, parsed_uri.path[1:]
+
+
+def find_matching_s3_keys(in_fileglob):
+
+ bucket_name, file_and_folder_name = _split_s3_uri(in_fileglob)
+ folder_name = os.path.split(file_and_folder_name)[0]
+ all_key_responses = _get_all_files_in_s3_folder(bucket_name, folder_name)
+ matching_keys = []
+ for key in [k["Key"] for k in all_key_responses]:
+ if fnmatch(key, file_and_folder_name):
+ matching_keys.append(key)
+ return matching_keys
+
+
+def _get_all_files_in_s3_folder(bucket_name, folder_name):
+ client = boto3.client("s3")
+ response = client.list_objects_v2(
+ Bucket=bucket_name,
+ Prefix=folder_name,
+ )
+ all_key_responses = []
+ if "Contents" in response:
+ all_key_responses = response["Contents"]
+ while response["IsTruncated"]:
+ continuation_token = response["NextContinuationToken"]
+ response = client.list_objects_v2(
+ Bucket=bucket_name,
+ Prefix=folder_name,
+ ContinuationToken=continuation_token,
+ )
+ if "Contents" in response:
+ all_key_responses += response["Contents"]
+ return all_key_responses
+
+
+def copy_s3_files(in_fileglob, out_folder):
+    '''
+    Copy files from an S3 bucket to a local directory.
+    args
+    ---
+    in_fileglob: s3 uri of files (wildcards can be used)
+    out_folder: local path where data will be stored
+    '''
+ matching_keys = find_matching_s3_keys(in_fileglob)
+ in_bucket_name = _split_s3_uri(in_fileglob)[0]
+ out_scheme = urlparse(out_folder).scheme
+ for key in matching_keys:
+ new_filename = os.path.split(key)[1]
+ temp_filename = os.path.join("/tmp", new_filename)
+ in_s3_uri = os.path.join(f"s3://{in_bucket_name}", key)
+ _fetch_s3_file(in_s3_uri, temp_filename)
+ if out_scheme == "s3":
+ _save_s3_file(
+ out_folder,
+ new_filename,
+ temp_filename,
+ )
+ else:
+ copyfile(
+ temp_filename, os.path.join(out_folder, new_filename)
+ )
+ os.remove(temp_filename)
+
+
+def main():
+ in_fileglob = 's3://ias-pyprecis/data/cmip5/*.nc'
+ out_folder = '/home/h01/zmaalick/myprojs/PyPRECIS/aws-scripts'
+ copy_s3_files(in_fileglob, out_folder)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/notebooks/awsutils/main.py b/notebooks/awsutils/main.py
new file mode 100644
index 0000000..8ea37ba
--- /dev/null
+++ b/notebooks/awsutils/main.py
@@ -0,0 +1,42 @@
+from fetch_s3_file import copy_s3_files, find_matching_s3_keys
+import iris
+import os
+
+
+
+
+def load_data(inpath):
+
+ if inpath.startswith('s3'):
+ keys = find_matching_s3_keys(inpath)
+ s3dir = get_directory(inpath)
+ temp_path = '/tmp'
+        for key in keys:
+            file = key.split('/')[-1]
+            if os.path.exists(os.path.join(temp_path, file)) == 0:
+                # join the S3 directory with the file name only, not the full key
+                copy_s3_files(os.path.join(s3dir, file), temp_path)
+            else:
+                print(key, 'already exists')
+
+ files = inpath.split('/')[-1]
+ data = iris.load(os.path.join(temp_path,files))
+ return data
+
+
+def get_directory(inpath):
+ path = inpath.split('/')
+ dirpath='s3://'
+ for p in path[2:-1]:
+ dirpath = os.path.join(dirpath,p)
+ return dirpath
+
+
+
+def main():
+ inpath = 's3://ias-pyprecis/data/sample_data.nc'
+ data = load_data(inpath)
+ print(data)
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/notebooks/awsutils/s3_bulk_data_upload.py b/notebooks/awsutils/s3_bulk_data_upload.py
new file mode 100644
index 0000000..50d8bec
--- /dev/null
+++ b/notebooks/awsutils/s3_bulk_data_upload.py
@@ -0,0 +1,26 @@
+import os
+from tqdm import tqdm
+import boto3
+
+
+def upload_folder_to_s3(s3_client, s3bucket, input_dir, s3_path):
+    # This function uploads a directory (including subdirectories) to S3.
+    # You can also specify the S3 bucket directory in which to store the data.
+ pbar = tqdm(os.walk(input_dir))
+ for path, subdirs, files in pbar:
+ for file in files:
+ dest_path = path.replace(input_dir, "").replace(os.sep, '/')
+ s3_file = f'{s3_path}/{dest_path}/{file}'.replace('//', '/')
+ local_file = os.path.join(path, file)
+ s3_client.upload_file(local_file, s3bucket, s3_file)
+ pbar.set_description(f'Uploaded {local_file} to {s3_file}')
+ print(f"Successfully uploaded {input_dir} to S3 {s3_path}")
+
+
+def main():
+ s3_client = boto3.client('s3')
+ upload_folder_to_s3(s3_client, 'ias-pyprecis', '/data/users/fris/s3_uploads/pp', 'data/pp')
+
+
+if __name__ == "__main__":
+ main()
diff --git a/notebooks/awsutils/s3_files_upload.py b/notebooks/awsutils/s3_files_upload.py
new file mode 100644
index 0000000..5d82ffe
--- /dev/null
+++ b/notebooks/awsutils/s3_files_upload.py
@@ -0,0 +1,43 @@
+# This script uploads data to an AWS S3 bucket
+import botocore
+import boto3
+import logging
+import os
+import glob
+
+
+def upload_file(file_name, bucket, object_name=None):
+ """Upload a file to an S3 bucket
+
+ :param file_name: File to upload
+ :param bucket: Bucket to upload to
+ :param object_name: S3 object name. If not specified then file_name is used
+ :return: True if file was uploaded, else False
+ """
+
+ # If S3 object_name was not specified, use file_name
+ if object_name is None:
+ object_name = os.path.basename(file_name)
+
+ # Upload the file
+ s3_client = boto3.client('s3')
+ try:
+ s3_client.upload_file(file_name, bucket, object_name)
+ except botocore.exceptions.ClientError as e:
+ logging.error(e)
+ return False
+ return True
+
+
+def main():
+ bucket = "ias-pyprecis"
+
+ file_path = "*.txt"
+ files = glob.glob(file_path)
+
+ for file in files:
+ upload_file(file, bucket, object_name=None)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/notebooks/makedata.ipynb b/notebooks/makedata.ipynb
index 7485925..25c0316 100755
--- a/notebooks/makedata.ipynb
+++ b/notebooks/makedata.ipynb
@@ -49,16 +49,14 @@
" for cube in tempcubelist:\n",
" if cube.name() == 'air_temperature':\n",
" cube.convert_units('celsius')\n",
- " outpath = oimport glob\n",
- "import os\n",
- "import iriss.path.join(datadir, 'future', runid + '.mon.2021_2050.tm.rr.C.nc')\n",
+ " outpath = os.path.join(DATADIR, 'future', runid + '.mon.2021_2050.tm.rr.C.nc')\n",
" cube.remove_coord('forecast_period')\n",
" cube.remove_coord('forecast_reference_time')\n",
" iris.save(cube, outpath)\n",
" if cube.name() == 'precipitation_flux':\n",
" cube.convert_units('kg m-2 day-1')\n",
" cube.units = 'mm day-1'\n",
- " outpath = os.path.join(datadir, 'future', runid + '.mon.2021_2050.pr.rr.mmday-1.nc')\n",
+ " outpath = os.path.join(DATADIR, 'future', runid + '.mon.2021_2050.pr.rr.mmday-1.nc')\n",
" cube.remove_coord('forecast_period')\n",
" cube.remove_coord('forecast_reference_time')\n",
" iris.save(cube, outpath)"
@@ -94,7 +92,7 @@
" for cube in tempcubelist:\n",
" if cube.name() == 'air_temperature':\n",
" cube.convert_units('celsius')\n",
- " outpath = os.path.join(datadir, 'historical', runid + '.mon.1961_1990.tm.rr.C.nc')\n",
+ " outpath = os.path.join(DATADIR, 'historical', runid + '.mon.1961_1990.tm.rr.C.nc')\n",
" cube.remove_coord('forecast_period')\n",
" cube.remove_coord('forecast_reference_time')\n",
" iris.save(cube, outpath)\n",
@@ -222,9 +220,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "pyprecis-environment",
"language": "python",
- "name": "python3"
+ "name": "pyprecis-environment"
},
"language_info": {
"codemirror_mode": {
@@ -236,7 +234,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.6"
+ "version": "3.7.12"
}
},
"nbformat": 4,
diff --git a/notebooks/pyprecis-environment.lock b/notebooks/pyprecis-environment.lock
new file mode 100644
index 0000000..9c22906
--- /dev/null
+++ b/notebooks/pyprecis-environment.lock
@@ -0,0 +1,179 @@
+# This file may be used to create an environment using:
+# $ conda create --name <env> --file <this file>
+# platform: linux-64
+@EXPLICIT
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/ca-certificates-2021.10.8-ha878542_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/ld_impl_linux-64-2.36.1-hea4e1c9_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libgfortran4-7.5.0-h14aa051_19.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libstdcxx-ng-11.2.0-he4da1e4_11.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/llvm-openmp-12.0.1-h4bd325d_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pandoc-2.17.0.1-h7f98852_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/_openmp_mutex-4.5-1_llvm.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libgfortran-ng-7.5.0-h14aa051_19.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libgcc-ng-11.2.0-h1d223b6_11.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/bzip2-1.0.8-h7f98852_4.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/c-ares-1.18.1-h7f98852_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/expat-2.4.3-h9c3ff4c_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/geos-3.9.1-h9c3ff4c_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/icu-64.2-he1b5a44_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/jbig-2.1-h7f98852_2003.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/jpeg-9d-h36c2ea0_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/lerc-3.0-h9c3ff4c_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libdeflate-1.8-h7f98852_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libev-4.33-h516909a_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libffi-3.2.1-he1b5a44_1007.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libiconv-1.16-h516909a_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libmo_unpack-3.1.2-hf484d3e_1001.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libopenblas-0.3.10-pthreads_hb3c22a3_5.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libsodium-1.0.18-h36c2ea0_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libuuid-2.32.1-h7f98852_1000.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libwebp-base-1.2.1-h7f98852_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libzlib-1.2.11-h36c2ea0_1013.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/lz4-c-1.9.3-h9c3ff4c_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/ncurses-6.2-h58526e2_4.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/openssl-1.1.1l-h7f98852_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pcre-8.45-h9c3ff4c_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pthread-stubs-0.4-h36c2ea0_1001.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/xorg-libxau-1.0.9-h7f98852_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/xorg-libxdmcp-1.1.3-h7f98852_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/xxhash-0.8.0-h7f98852_3.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/xz-5.2.5-h516909a_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/yaml-0.2.5-h7f98852_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/gettext-0.19.8.1-hf34092f_1004.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libblas-3.8.0-17_openblas.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libedit-3.1.20191231-he28a2e2_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libxcb-1.13-h7f98852_1004.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/readline-8.1-h46c0cb4_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/udunits2-2.2.27.27-hc3e0081_3.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/zeromq-4.3.4-h9c3ff4c_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/zlib-1.2.11-h36c2ea0_1013.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/zstd-1.5.1-ha95c52a_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/hdf4-4.2.15-h10796ff_3.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libcblas-3.8.0-17_openblas.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libglib-2.66.3-hbe7bbb4_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/liblapack-3.8.0-17_openblas.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libnghttp2-1.43.0-h812cca2_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libpng-1.6.37-h21135ba_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libssh2-1.10.0-ha56f1ee_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libtiff-4.3.0-h6f004c6_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libxml2-2.9.10-hee79883_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/sqlite-3.37.0-h9cd32fc_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/tk-8.6.11-h27826a3_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/freetype-2.10.4-h0708190_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/krb5-1.19.2-hcc1bbae_3.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/lcms2-2.12-hddcbb42_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/liblapacke-3.8.0-17_openblas.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/openjpeg-2.4.0-hb52868f_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/python-3.6.10-h8356626_1011_cpython.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/async_generator-1.10-py_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/attrs-21.4.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/backcall-0.2.0-pyh9f0ad1d_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/backports-1.0-py_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/blas-2.17-openblas.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/click-7.1.2-pyh9f0ad1d_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/cloudpickle-2.0.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/cycler-0.11.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/decorator-5.1.1-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/defusedxml-0.7.1-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/entrypoints-0.3-pyhd8ed1ab_1003.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/fontconfig-2.13.1-hba837de_1005.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/fsspec-2022.1.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/glib-2.66.3-h58526e2_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/heapdict-1.0.1-py_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/ipython_genutils-0.2.0-py_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libcurl-7.81.0-h2574ce0_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/locket-0.2.0-py_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/nest-asyncio-1.5.4-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/olefile-0.46-pyh9f0ad1d_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pandocfilters-1.5.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/parso-0.7.1-pyh9f0ad1d_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pickleshare-0.7.5-py_1003.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/prometheus_client-0.12.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/ptyprocess-0.7.0-pyhd3deb0d_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pyke-1.1.1-pyhd8ed1ab_1004.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pyparsing-3.0.6-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pyshp-2.1.3-pyh44b312d_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/python_abi-3.6-2_cp36m.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pytz-2021.3-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/send2trash-1.8.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/sip-4.19.8-py36hf484d3e_1000.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/six-1.16.0-pyh6c4a22f_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/sortedcontainers-2.4.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/tblib-1.7.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/testpath-0.5.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/toolz-0.11.2-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/typing_extensions-4.0.1-pyha770c72_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/webencodings-0.5.1-py_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/wheel-0.37.1-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/zipp-3.7.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/antlr-python-runtime-4.7.2-py36h5fab9bb_1002.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/curl-7.81.0-h2574ce0_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/cytoolz-0.11.0-py36h8f6f2f9_3.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/dbus-1.13.6-hfdff14a_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/gstreamer-1.14.5-h36ae1b5_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/hdf5-1.10.6-nompi_h7c3c948_1111.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/importlib-metadata-4.8.1-py36h5fab9bb_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/jedi-0.17.2-py36h5fab9bb_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/kiwisolver-1.3.1-py36h605e78d_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/markupsafe-2.0.1-py36h8f6f2f9_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/mistune-0.8.4-py36h8f6f2f9_1004.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/msgpack-python-1.0.2-py36h605e78d_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-main/linux-64/numpy-base-1.17.4-py36h2f8d375_0.conda
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/packaging-21.3-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/partd-1.2.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pexpect-4.8.0-pyh9f0ad1d_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pillow-8.3.2-py36h676a545_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/proj-7.2.0-h277dcde_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/psutil-5.8.0-py36h8f6f2f9_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pyrsistent-0.17.3-py36h8f6f2f9_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/python-dateutil-2.8.2-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/python-xxhash-2.0.2-py36h8f6f2f9_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pyyaml-5.4.1-py36h8f6f2f9_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pyzmq-22.3.0-py36h7068817_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/setuptools-58.0.4-py36h5fab9bb_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/tornado-6.1-py36h8f6f2f9_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/traitlets-4.3.3-pyhd8ed1ab_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/typing-extensions-4.0.1-hd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/zict-2.0.0-py_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/backports.functools_lru_cache-1.6.4-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/bleach-4.1.0-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/dask-core-2.11.0-py_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/gst-plugins-base-1.14.5-h0935bb2_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/immutables-0.16-py36h8f6f2f9_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/jinja2-3.0.3-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/jsonschema-4.1.2-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/jupyter_core-4.6.3-py36h5fab9bb_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/libnetcdf-4.7.4-nompi_h56d31a8_107.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-main/linux-64/numpy-1.17.4-py36hd5be1e1_0.conda
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pip-21.3.1-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/pygments-2.11.2-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/terminado-0.12.1-py36h5fab9bb_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/bokeh-2.3.3-py36h5fab9bb_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/cftime-1.4.1-py36h92226af_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/contextvars-2.4-py_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/jupyter_client-6.1.7-py_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/jupyterlab_pygments-0.1.2-pyh9f0ad1d_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/matplotlib-base-3.1.3-py36h250f245_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/mo_pack-0.2.0-py36h92226af_1005.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/nbformat-5.1.3-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pandas-1.1.5-py36h284efc9_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/qt-5.9.7-h0c104cb_3.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/scipy-1.5.3-py36h976291a_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/shapely-1.7.1-py36h93b233e_4.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/wcwidth-0.2.5-pyh9f0ad1d_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/cartopy-0.18.0-py36h104b3a8_13.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/cf-units-2.1.4-py36h92226af_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/distributed-2.30.1-py36h5fab9bb_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/nbclient-0.5.9-pyhd8ed1ab_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/nc-time-axis-1.2.0-pyhd8ed1ab_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/netcdf4-1.5.6-nompi_py36hc29086f_100.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/prompt-toolkit-3.0.24-pyha770c72_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/pyqt-5.9.2-py36hcca6a23_4.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/noarch/dask-2.11.0-py_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/ipython-7.16.1-py36he448a4c_2.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/matplotlib-3.1.3-py36_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/nbconvert-6.0.7-py36h5fab9bb_3.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/ipykernel-5.5.5-py36hcb3619a_0.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/iris-2.4.0-py36h5fab9bb_1.tar.bz2
+https://metoffice.jfrog.io/metoffice/api/conda/conda-forge/linux-64/notebook-5.7.8-py36_1.tar.bz2
diff --git a/notebooks/utils.py b/notebooks/utils.py
new file mode 100644
index 0000000..10b4615
--- /dev/null
+++ b/notebooks/utils.py
@@ -0,0 +1,162 @@
+
+import io
+import os
+import boto3
+from urllib.parse import urlparse
+from fnmatch import fnmatch
+from shutil import copyfile
+import iris
+
+
+def _fetch_s3_file(s3_uri, save_to):
+
+ bucket_name, key = _split_s3_uri(s3_uri)
+ # print(f"Fetching s3 object {key} from bucket {bucket_name}")
+
+ client = boto3.client("s3")
+ obj = client.get_object(
+ Bucket=bucket_name,
+ Key=key,
+ )
+ with io.FileIO(save_to, "w") as f:
+ for i in obj["Body"]:
+ f.write(i)
+
+
+def _save_s3_file(s3_uri, out_filename, file_to_save="/tmp/tmp"):
+ bucket, folder = _split_s3_uri(s3_uri)
+ out_filepath = os.path.join(folder, out_filename)
+ # print(f"Save s3 object {out_filepath} to bucket {bucket}")
+ client = boto3.client("s3")
+ client.upload_file(
+ Filename=file_to_save,
+ Bucket=bucket,
+ Key=out_filepath
+ )
+
+
+def _split_s3_uri(s3_uri):
+ parsed_uri = urlparse(s3_uri)
+ return parsed_uri.netloc, parsed_uri.path[1:]
+
+
+def find_matching_s3_keys(in_fileglob):
+
+ bucket_name, file_and_folder_name = _split_s3_uri(in_fileglob)
+ folder_name = os.path.split(file_and_folder_name)[0]
+ all_key_responses = _get_all_files_in_s3_folder(bucket_name, folder_name)
+ matching_keys = []
+ for key in [k["Key"] for k in all_key_responses]:
+ if fnmatch(key, file_and_folder_name):
+ matching_keys.append(key)
+ return matching_keys
+
+
+def _get_all_files_in_s3_folder(bucket_name, folder_name):
+ client = boto3.client("s3")
+ response = client.list_objects_v2(
+ Bucket=bucket_name,
+ Prefix=folder_name,
+ )
+ all_key_responses = []
+ if "Contents" in response:
+ all_key_responses = response["Contents"]
+ while response["IsTruncated"]:
+ continuation_token = response["NextContinuationToken"]
+ response = client.list_objects_v2(
+ Bucket=bucket_name,
+ Prefix=folder_name,
+ ContinuationToken=continuation_token,
+ )
+ if "Contents" in response:
+ all_key_responses += response["Contents"]
+ return all_key_responses
+
+
+def copy_s3_files(in_fileglob, out_folder):
+    '''
+    Copy files from an S3 bucket to a local directory.
+    args
+    ---
+    in_fileglob: s3 uri of files (wildcards can be used)
+    out_folder: local path where data will be stored
+    '''
+    if os.path.isdir(out_folder) == 0:
+        mode = 0o777
+        os.makedirs(out_folder, mode, exist_ok=False)
+ matching_keys = find_matching_s3_keys(in_fileglob)
+ in_bucket_name = _split_s3_uri(in_fileglob)[0]
+ out_scheme = urlparse(out_folder).scheme
+ for key in matching_keys:
+ new_filename = os.path.split(key)[1]
+ temp_filename = os.path.join("/tmp", new_filename)
+ in_s3_uri = os.path.join("s3://{}".format(in_bucket_name), key)
+ _fetch_s3_file(in_s3_uri, temp_filename)
+ if out_scheme == "s3":
+ _save_s3_file(
+ out_folder,
+ new_filename,
+ temp_filename,
+ )
+ else:
+ copyfile(
+ temp_filename, os.path.join(out_folder, new_filename)
+ )
+ os.remove(temp_filename)
+
+
+def load_data(inpath):
+ '''
+    This method copies the data from the S3 bucket and loads it as an Iris cubelist.
+    Data is stored in the data/ directory.
+
+    input: file(s) path on the s3 bucket
+    output: iris cubelist
+ '''
+ if inpath.startswith('s3'):
+ keys = find_matching_s3_keys(inpath)
+ s3dir = _get_directory(inpath)
+ temp_path = 'data/'
+ if os.path.exists(temp_path) == 0:
+ os.mkdir(temp_path)
+
+ for key in keys:
+ file = key.split('/')[-1]
+ if os.path.exists(os.path.join(temp_path, file)) == 0:
+ copy_s3_files(os.path.join(s3dir, file), temp_path)
+ else:
+                print(key, 'already exists')
+ files = inpath.split('/')[-1]
+ data = iris.load(os.path.join(temp_path, files))
+
+ return data
+
+
+def _get_directory(inpath):
+ path = inpath.split('/')
+ dirpath = 's3://'
+ for p in path[2:-1]:
+ dirpath = os.path.join(dirpath, p)
+ return dirpath
+
+
+def flush_data(path):
+ '''
+    Deletes the given file(s) from the compute node.
+
+ Input: file(s) path
+ '''
+ import glob
+ files = glob.glob(path)
+ for file in files:
+ os.remove(file)
+
+
+def main():
+    in_fileglob = 's3://ias-pyprecis/data/cmip5/*.nc'
+ out_folder = '/home/h01/zmaalick/myprojs/PyPRECIS/aws-scripts'
+ copy_s3_files(in_fileglob, out_folder)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/notebooks/worksheet1.ipynb b/notebooks/worksheet1.ipynb
index c23b53f..b6869eb 100755
--- a/notebooks/worksheet1.ipynb
+++ b/notebooks/worksheet1.ipynb
@@ -317,9 +317,19 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# download data from S3 bucket into the data directory\n",
+ "from utils import copy_s3_files, flush_data\n",
+ "\n",
+ "copy_s3_files('s3://ias-pyprecis/data/sample_data.nc', 'data/')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
"# import the necessary modules\n",
@@ -331,7 +341,7 @@
"%matplotlib inline \n",
"\n",
"# provide the path of your sample data\n",
- "sample_data = '/project/ciid/projects/PRECIS/worksheets/data/sample_data.nc'\n",
+ "sample_data = 'data/sample_data.nc'\n",
"\n",
"# Constraint the reading to a single variable and load it into an Iris cube\n",
"cube = iris.load_cube(sample_data)\n",
@@ -454,18 +464,29 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# download data from S3 bucket to the data directory\n",
+ "from utils import copy_s3_files\n",
+ "\n",
+ "copy_s3_files('s3://ias-pyprecis/data/pp/cahpa/*', 'data/pp/cahpa/')\n",
+ "copy_s3_files('s3://ias-pyprecis/data/pp/cahpb/*', 'data/pp/cahpb/')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
"outputs": [],
"source": [
- "datadir = '/project/ciid/projects/PRECIS/worksheets/data'\n",
+ "datadir = 'data/'\n",
"\n",
"rim_width = 8 # width of rim (in number of grid boxes)\n",
"\n",
"for runid in ['cahpa', 'cahpb']:\n",
" ppdir = os.path.join(datadir, 'pp', runid)\n",
- " \n",
+ "\n",
" # find all the files from which to remove the rim\n",
" file_list = glob.glob(ppdir + '/*pm[ghij]*.pp')\n",
" \n",
@@ -483,13 +504,28 @@
" # add meta data stating that rim has been removed\n",
" rrcube.attributes['rim_removed'] = '{} point rim removed'.format(rim_width)\n",
" trimmed_cubes.append(rrcube)\n",
+ " \n",
" rrcubes = iris.cube.CubeList(trimmed_cubes)\n",
" # Write out the trimmed data file\n",
- " outfile = os.path.join(datadir, 'historical', runid + '.mon.1961_1990.rr.nc')\n",
+ " #outfile = os.path.join(datadir, 'historical', runid + '.mon.1961_1990.rr.nc')\n",
+ " outfile = os.path.join(datadir, runid + '.mon.1961_1990.rr.nc')\n",
+ "\n",
" iris.save(rrcubes, outfile)\n",
" print('Saved {}'.format(outfile))"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Delete pp data from the disk\n",
+ "from utils import flush_data\n",
+ "flush_data('data/pp/cahpa/*')\n",
+ "flush_data('data/pp/cahpb/*')"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -564,14 +600,14 @@
"\n",
"for runid in ['cahpa', 'cahpb']:\n",
" # Get data directory\n",
- " infile = os.path.join(datadir, 'historical', runid + '.mon.1961_1990.rr.nc')\n",
+ " infile = os.path.join(datadir, runid + '.mon.1961_1990.rr.nc')\n",
" # This will load all the variables in the file into a CubeList\n",
" datacubes = iris.load(infile)\n",
" for cube in datacubes:\n",
" # get the STASH code\n",
" cubeSTASH = cube.attributes['STASH']\n",
" # Make the output file name\n",
- " outfile = os.path.join(datadir, 'historical', runid + '.mon.1961_1990.' + stash_codes[str(cubeSTASH)] + '.rr.nc')\n",
+ " outfile = os.path.join(datadir, runid + '.mon.1961_1990.' + stash_codes[str(cubeSTASH)] + '.rr.nc')\n",
" # Save the file\n",
" iris.save(cube, outfile)\n",
" print('Saved {}'.format(outfile)) "
@@ -653,10 +689,11 @@
}
],
"metadata": {
+ "instance_type": "ml.t3.medium",
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [conda env:pyprecis-environment] (arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1)",
"language": "python",
- "name": "python3"
+ "name": "conda-env-pyprecis-environment-py__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1"
},
"language_info": {
"codemirror_mode": {
@@ -668,7 +705,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.6"
+ "version": "3.6.10"
},
"widgets": {
"state": {},
@@ -676,5 +713,5 @@
}
},
"nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
}
diff --git a/notebooks/worksheet2.ipynb b/notebooks/worksheet2.ipynb
index e998e51..36fe9e5 100755
--- a/notebooks/worksheet2.ipynb
+++ b/notebooks/worksheet2.ipynb
@@ -75,7 +75,8 @@
"import iris.quickplot as qplt\n",
"import cartopy.crs as ccrs\n",
"from mpl_toolkits.axes_grid1 import AxesGrid\n",
- "from cartopy.mpl.geoaxes import GeoAxes"
+ "from cartopy.mpl.geoaxes import GeoAxes\n",
+ "from utils import copy_s3_files, flush_data"
]
},
{
@@ -104,6 +105,16 @@
"Before running the code, take a look at it line-by-line to understand what steps are being taken. Then click in the box and press ctrl + enter to run the code."
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# we first need to download APHRODITE data\n",
+ "copy_s3_files('s3://ias-pyprecis/data/APHRODITE/*.nc', 'data/APHRODITE/')"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -111,14 +122,14 @@
"outputs": [],
"source": [
"# Provide the names of the directories where the netCDF model files are stored\n",
- "DATADIR = '/project/ciid/projects/PRECIS/worksheets/data/'\n",
+ "DATADIR = 'data/'\n",
"\n",
"# Load and print the HadCM3Q0 (cahpa) model cube data\n",
- "infile = os.path.join(DATADIR, 'historical', 'cahpa.mon.1961_1990.pr.rr.nc')\n",
+ "infile = os.path.join(DATADIR, 'cahpa.mon.1961_1990.pr.rr.nc')\n",
"cahpaData = iris.load_cube(infile)\n",
"\n",
"# Load and print the ECHAM5 (cahpb) model cube data\n",
- "infile = os.path.join(DATADIR, 'historical', 'cahpb.mon.1961_1990.pr.rr.nc')\n",
+ "infile = os.path.join(DATADIR, 'cahpb.mon.1961_1990.pr.rr.nc')\n",
"cahpbData = iris.load_cube(infile)\n",
"\n",
"# Load and print the APHRODITE observation cube data\n",
@@ -309,7 +320,7 @@
"cahpaData.remove_coord('forecast_period')\n",
"cahpaData.remove_coord('forecast_reference_time')\n",
"# Save the new cube as a new netCDF file\n",
- "outfile = os.path.join(DATADIR, 'historical', 'cahpa.mon.1961_1990.pr.rr.mmday-1.nc')\n",
+ "outfile = os.path.join(DATADIR, 'cahpa.mon.1961_1990.pr.rr.mmday-1.nc')\n",
"iris.save(cahpaData, outfile)"
]
},
@@ -338,7 +349,7 @@
"# Remove extraneous cube metadata. This helps make cube comparisons easier later.\n",
"\n",
"# Save the new cube as a new netCDF file using the `outfile` filename we've provided below!\n",
- "outfile = os.path.join(DATADIR, 'historical', 'cahpb.mon.1961_1990.pr.rr.mmday-1.nc')\n",
+ "outfile = os.path.join(DATADIR, 'cahpb.mon.1961_1990.pr.rr.mmday-1.nc')\n",
"\n"
]
},
@@ -373,7 +384,7 @@
"\n",
"# Loop through two model runs\n",
"for jobid in ['cahpa', 'cahpb']:\n",
- " infile = os.path.join(DATADIR, 'historical', jobid + '.mon.1961_1990.pr.rr.mmday-1.nc')\n",
+ " infile = os.path.join(DATADIR, jobid + '.mon.1961_1990.pr.rr.mmday-1.nc')\n",
"\n",
" # Load the data\n",
" data = iris.load_cube(infile)\n",
@@ -437,6 +448,16 @@
"Follow step d) and complete the code yourself. The file name to load is: `aphro.mon.1961_1990.nc`. We've given you the infile and outfile names to make sure you load and save it in the right place for later!"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# we first need to download the climatology data\n",
+ "copy_s3_files('s3://ias-pyprecis/data/climatology/*.nc', 'data/climatology/')"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -444,7 +465,7 @@
"outputs": [],
"source": [
"# Directory names where data is read from and stored to\n",
- "infile = os.path.join(DATADIR, 'APHRODITE', 'aphro.mon.1961_1990.nc')\n",
+ "infile = os.path.join(DATADIR, 'climatology', 'aphro.mon.1961_1990.nc')\n",
"\n",
"\n",
"# Load the aphrodite data\n",
@@ -460,7 +481,7 @@
"\n",
"\n",
"# save the seasonal mean cube as a NetCDF file\n",
- "outfile = os.path.join(DATADIR, 'climatology', 'aphro.OND.mean.1961_1990.pr.mmday-1.nc')\n",
+ "outfile = os.path.join(DATADIR, 'aphro.OND.mean.1961_1990.pr.mmday-1.nc')\n",
"\n",
"\n",
"# print the APHRODITE seasonal mean cube\n",
@@ -550,7 +571,7 @@
"outputs": [],
"source": [
"# Directory name where data is read from\n",
- "indir = os.path.join(DATADIR, 'climatology')\n",
+ "indir = DATADIR\n",
"\n",
"# load cahpa model data\n",
"infile = os.path.join(indir, 'cahpa.OND.mean.1961_1990.pr.mmday-1.nc')\n",
@@ -663,10 +684,11 @@
}
],
"metadata": {
+ "instance_type": "ml.t3.medium",
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [conda env:pyprecis-environment] (arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1)",
"language": "python",
- "name": "python3"
+ "name": "conda-env-pyprecis-environment-py__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1"
},
"language_info": {
"codemirror_mode": {
@@ -678,7 +700,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.6"
+ "version": "3.6.10"
},
"widgets": {
"state": {},
@@ -686,5 +708,5 @@
}
},
"nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 4
}
diff --git a/notebooks/worksheet3.ipynb b/notebooks/worksheet3.ipynb
index 44c1001..1619ceb 100755
--- a/notebooks/worksheet3.ipynb
+++ b/notebooks/worksheet3.ipynb
@@ -63,9 +63,11 @@
"import cartopy.crs as ccrs\n",
"from mpl_toolkits.axes_grid1 import AxesGrid\n",
"from cartopy.mpl.geoaxes import GeoAxes\n",
+ "from utils import copy_s3_files, flush_data\n",
+ "\n",
"\n",
"# Provide the names of the directories where the netCDF model files are stored\n",
- "DATADIR = '/project/ciid/projects/PRECIS/worksheets/data/'\n",
+ "DATADIR = 'data/'\n",
"\n",
"# Directory name where data is read from\n",
"HISTDIR = os.path.join(DATADIR, 'historical')\n",
@@ -178,6 +180,10 @@
"metadata": {},
"outputs": [],
"source": [
+    "# Load the HadCM3Q0 (cahpa) model cube data as we need grid information from it\n",
+ "infile = os.path.join(DATADIR, 'cahpa.mon.1961_1990.pr.rr.nc')\n",
+ "cahpa_cube = iris.load_cube(infile)\n",
+ "\n",
"pole_lat = cahpa_cube.coord_system().grid_north_pole_latitude\n",
"pole_lon = cahpa_cube.coord_system().grid_north_pole_longitude\n",
"print('Pole Latitude: {}'.format(pole_lat))\n",
@@ -222,8 +228,8 @@
"\n",
"for jobid in ['cahpa', 'cahpb']:\n",
" # Directory name where data are read from and stored to\n",
- " infile = os.path.join(DATADIR, 'historical', jobid + '.mon.1961_1990.pr.rr.mmday-1.nc')\n",
- " \n",
+ " infile = os.path.join(DATADIR, jobid + '.mon.1961_1990.pr.rr.mmday-1.nc')\n",
+ " print(infile)\n",
" # Load the baseline precipitation data using the KL_constraint - the command below\n",
" # loads the data into a cube constrained by the area chosen\n",
" data = iris.load_cube(infile)\n",
@@ -232,7 +238,7 @@
" grid_latitude=rotated_lats)\n",
"\n",
" # save the constrained cube\n",
- " outfile = os.path.join(DATADIR, 'historical', jobid + '.mon.1961_1990.pr.rr.mmday-1.KL.nc')\n",
+ " outfile = os.path.join(DATADIR, jobid + '.mon.1961_1990.pr.rr.mmday-1.KL.nc')\n",
" iris.save(data_KL, outfile)\n",
" print('Saved: {}'.format(outfile))"
]
@@ -298,7 +304,7 @@
"source": [
"for jobid in ['cahpa', 'cahpb']:\n",
" # Set up the path to the data\n",
- " infile = os.path.join(DATADIR, 'historical', jobid + '.mon.1961_1990.pr.rr.mmday-1.KL.nc')\n",
+ " infile = os.path.join(DATADIR, jobid + '.mon.1961_1990.pr.rr.mmday-1.KL.nc')\n",
" \n",
" # Load the data extracted around Kuala Lumpur created in previous step\n",
" data = iris.load_cube(infile)\n",
@@ -741,6 +747,17 @@
"**j) Plot a series of figures** that shows 1) the monthly cycles of temperature and rainfall comparing the 6 models and the observations; and 2) the monthly differences between the models and observations"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# we first need to download CRU and netcdf data\n",
+ "copy_s3_files('s3://ias-pyprecis/data/CRU/*.nc', 'data/CRU/')\n",
+ "copy_s3_files('s3://ias-pyprecis/data/netcdf/*.nc', 'data/netcdf/')"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -751,7 +768,7 @@
"Here are some useful varibles you might like to use in your scripts\n",
"'''\n",
"# Some helpful data locations\n",
- "DATADIR = '/project/precis/worksheets/data'\n",
+ "DATADIR = 'data'\n",
"APHRODIR = os.path.join(DATADIR, 'APHRODITE')\n",
"CRUDIR = os.path.join(DATADIR, 'CRU')\n",
"CLIMDIR = os.path.join(DATADIR, 'climatology')\n",
@@ -987,10 +1004,11 @@
}
],
"metadata": {
+ "instance_type": "ml.t3.medium",
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [conda env:pyprecis-environment] (arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1)",
"language": "python",
- "name": "python3"
+ "name": "conda-env-pyprecis-environment-py__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1"
},
"language_info": {
"codemirror_mode": {
@@ -1002,7 +1020,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.6"
+ "version": "3.6.10"
},
"widgets": {
"state": {},
@@ -1010,5 +1028,5 @@
}
},
"nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 4
}
diff --git a/notebooks/worksheet4.ipynb b/notebooks/worksheet4.ipynb
index f0a4761..4cea79d 100755
--- a/notebooks/worksheet4.ipynb
+++ b/notebooks/worksheet4.ipynb
@@ -68,7 +68,7 @@
"import numpy.ma as ma\n",
"\n",
"# Some helpful data locations\n",
- "DATADIR = '/project/ciid/projects/PRECIS/worksheets/data'\n",
+ "DATADIR = 'data'\n",
"CLIMDIR = os.path.join(DATADIR, 'climatology')\n",
"HISTDIR = os.path.join(DATADIR, 'historical')\n",
"FUTRDIR = os.path.join(DATADIR, 'future')\n",
@@ -305,7 +305,6 @@
" ax = plt.gca()\n",
" ax.coastlines()\n",
"\n",
- "plt.tight_layout() # automatically adjusts subplot(s) to fit in to the figure area\n",
"plt.show()"
]
},
@@ -443,9 +442,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"outputs": [],
"source": [
"# Read in the monthly series\n",
@@ -589,10 +586,11 @@
}
],
"metadata": {
+ "instance_type": "ml.t3.medium",
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [conda env:pyprecis-environment] (arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1)",
"language": "python",
- "name": "python3"
+ "name": "conda-env-pyprecis-environment-py__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1"
},
"language_info": {
"codemirror_mode": {
@@ -604,7 +602,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.6"
+ "version": "3.6.10"
},
"widgets": {
"state": {},
@@ -612,5 +610,5 @@
}
},
"nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 4
}
diff --git a/notebooks/worksheet5.ipynb b/notebooks/worksheet5.ipynb
index 418de56..6261df0 100755
--- a/notebooks/worksheet5.ipynb
+++ b/notebooks/worksheet5.ipynb
@@ -81,7 +81,7 @@
"from iris.analysis import Aggregator\n",
"\n",
"# Some helpful data locations\n",
- "DATADIR = '/project/ciid/projects/PRECIS/worksheets/data'\n",
+ "DATADIR = 'data'\n",
"CLIMDIR = os.path.join(DATADIR, 'climatology')\n",
"HISTDIR = os.path.join(DATADIR, 'historical')\n",
"FUTRDIR = os.path.join(DATADIR, 'future')\n",
@@ -201,7 +201,8 @@
"outfile = os.path.join(CLIMDIR, 'aphro.wetday.nc')\n",
"\n",
"\n",
- "# Find number of days in dataset\n",
+ "# Find number of days in dataset (number_aphro_days)\n",
+ "number_aphro_days =\n",
"\n",
"\n",
"# Find wet days as percent of all aphro days \n",
@@ -711,9 +712,7 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {
- "scrolled": false
- },
+ "metadata": {},
"outputs": [],
"source": [
"# HINT: The filenames have the following pattern: runid + '.day.pc95.bias.pr.mmday-1.nc'\n",
@@ -792,10 +791,11 @@
}
],
"metadata": {
+ "instance_type": "ml.t3.medium",
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [conda env:pyprecis-environment] (arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1)",
"language": "python",
- "name": "python3"
+ "name": "conda-env-pyprecis-environment-py__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1"
},
"language_info": {
"codemirror_mode": {
@@ -807,7 +807,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.6"
+ "version": "3.6.10"
},
"widgets": {
"state": {},
@@ -815,5 +815,5 @@
}
},
"nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
}
diff --git a/notebooks/worksheet6.ipynb b/notebooks/worksheet6.ipynb
index f4be6f1..1d8f00a 100755
--- a/notebooks/worksheet6.ipynb
+++ b/notebooks/worksheet6.ipynb
@@ -86,7 +86,7 @@
"from iris.analysis import Aggregator\n",
"\n",
"# Some helpful data locations\n",
- "DATADIR = '/project/ciid/projects/PRECIS/worksheets/data'\n",
+ "DATADIR = 'data'\n",
"PPDIR = os.path.join(DATADIR, 'pp')\n",
"CLIMDIR = os.path.join(DATADIR, 'climatology')\n",
"HISTDIR = os.path.join(DATADIR, 'historical')\n",
@@ -230,10 +230,11 @@
}
],
"metadata": {
+ "instance_type": "ml.t3.medium",
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [conda env:pyprecis-environment] (arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1)",
"language": "python",
- "name": "python3"
+ "name": "conda-env-pyprecis-environment-py__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-2:198477955030:image-version/abtraining/1"
},
"language_info": {
"codemirror_mode": {
@@ -245,9 +246,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.6"
+ "version": "3.6.10"
}
},
"nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
}