DISP‐S1 Interface Acceptance Testing Instructions

This page contains instructions for performing Acceptance Testing for the DISP-S1 interface delivery from the OPERA-ADT team. These instructions assume the user has access to the JPL FN-Artifactory, and has Docker installed on their local machine.

Acquiring the DISP-S1 Interface Docker Image

The image is currently hosted on JPL FN-Artifactory, which requires JPL VPN access and JPL credentials. You may also need to be added to the gov.nasa.jpl.opera.adt organization.

Once you have access, the container tarball delivery is available under general/gov/nasa/jpl/opera/adt/disp_s1/dockerimg_disp_s1_beta.tar.

Sample small inputs and outputs are available under general/gov/nasa/jpl/opera/adt/disp_s1/r3/delivery_data_small.tar (https://artifactory-fn.jpl.nasa.gov:443/artifactory/general/gov/nasa/jpl/opera/adt/disp_s1/r3/delivery_data_small.tar)

The sample full granule case is available under general/gov/nasa/jpl/opera/adt/disp_s1/r3/delivery_data_full.tar.

Download the container tarball and sample data files to a location on your local machine. This location will be referred to throughout these instructions as ${DISP_DIR}.
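
If you prefer to download from the command line, one possible approach is curl with your JPL credentials. This is only a sketch: the full image URL below is constructed from the Artifactory base URL shown above, and your account may require an API key or token in place of an interactive password.

cd ${DISP_DIR}
curl -O -u <JPL username> "https://artifactory-fn.jpl.nasa.gov:443/artifactory/general/gov/nasa/jpl/opera/adt/disp_s1/dockerimg_disp_s1_beta.tar"
curl -O -u <JPL username> "https://artifactory-fn.jpl.nasa.gov:443/artifactory/general/gov/nasa/jpl/opera/adt/disp_s1/r3/delivery_data_small.tar"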

Loading the image into Docker

The first step in running the DISP-S1 image is to load it into Docker via the following command:

docker load -i ${DISP_DIR}/dockerimg_disp_s1_beta.tar

This should add the Docker image to your local repository with the name opera/disp-s1 and the tag 0.1.2.
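
You can confirm the image loaded correctly by listing it; the repository name and tag should match those above:

docker images opera/disp-s1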

Preparing the test data (using small dataset)

Once the delivery_data_small.tar file is downloaded to your local machine, unpack it to ${DISP_DIR}:

cd ${DISP_DIR}; tar -xvf delivery_data_small.tar

This will create a delivery_data_dolphin_small directory within ${DISP_DIR} containing the following files/directories:

  • algorithm_parameters.yaml
  • dynamic_ancillary/
  • golden_output/
  • input_slcs/
  • runconfig.yaml
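
A quick listing of the unpacked directory should show the entries above:

ls ${DISP_DIR}/delivery_data_dolphin_small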

In order to execute the SAS, the input file directory, runconfig, and an output location are mounted into the container instance as Docker volumes. The unpacked sample dataset directory already contains all of these (input SLCs, dynamic ancillaries, the runconfig YAML, and the golden outputs), so in the commands below it is mounted directly as the container's /work directory and no reorganization of the delivery data is needed.

Executing the DISP-S1 container on the sample datasets

Change directory into the sample data directory:

cd ${DISP_DIR}/delivery_data_dolphin_small

We're now ready to execute the DISP-S1 Interface. Run the following command to kick off execution with the test assets:

docker run --rm --user $(id -u):$(id -g) --volume ${DISP_DIR}/delivery_data_dolphin_small:/work opera/disp-s1:0.1.2 dolphin run --pge runconfig.yaml

The docker container will output progress messages as it runs, e.g.:

[2023-06-09 19:38:35] INFO     Found SLC files from 2 bursts       s1_disp.py:61
...

Execution time for the small test case on opera-dev-pge was 12.6 minutes.

(Note: execution of the full granule test case took nearly 13 hours on opera-dev-pge.)

When the docker run is finished, scratch/ and output/ directories will have been created.

The output/ directory will contain the product file:

-rw-r--r-- 1 dinglish cloud-user 15122614 Jun  9 20:42 20180101_20180330.unw.nc
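
Before running the formal QA comparison, you can optionally spot-check the product structure by listing its HDF5 datasets from inside the container. This is a sketch that assumes h5py is available in the image (the validation script described below relies on it):

docker run --rm --volume ${DISP_DIR}/delivery_data_dolphin_small:/work \
    opera/disp-s1:0.1.2 python -c \
    "import h5py; h5py.File('/work/output/20180101_20180330.unw.nc', 'r').visit(print)"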

Running the Quality Assurance test

Now that we've successfully executed the SAS container and generated outputs, the last step is to perform a QA check against the expected outputs.

A Python program to compare DISP-S1 products generated by the DISP-S1 SAS against the expected outputs ("golden datasets") is included in the Docker image. The script validate_product.py accepts two input files: the golden dataset and the test dataset.

The docker command to run this is:

docker run \
    --rm \
    --volume <local host directory>:/work \
    opera/disp-s1:0.1.2 \
    python /dolphin/scripts/release/validate_product.py \
    <path to golden dataset> \
    <path to test dataset>

For example, if the SAS was run using the example command above and the result is in the output/ directory, the validation program can be run as follows:

docker run --rm --volume ${DISP_DIR}/delivery_data_dolphin_small:/work \
    opera/disp-s1:0.1.2 python \
    /dolphin/scripts/release/validate_product.py \
    golden_output/20180101_20180330.unw.nc output/20180101_20180330.unw.nc

The full-size granule test case passed validation; however, the small test case did not. Its failure is a metadata mismatch on the software_version dataset, as shown in the output below.

Small-size test case validation output:

[2023-06-22 17:51:46] INFO     Comparing HDF5 contents...                           validate_product.py.org:482
                      INFO     Checking connected component labels...               validate_product.py.org:193
                      INFO     Test unwrapped area: 1249477/7650126 (16.333%)       validate_product.py.org:224
                      INFO     Reference unwrapped area: 1249486/7650126 (16.333%)  validate_product.py.org:225
                      INFO     Intersection/Reference: 1249476/1249486 (99.999%)    validate_product.py.org:226
                      INFO     Intersection/Union: 1249476/1249487 (99.999%)        validate_product.py.org:227
Traceback (most recent call last):
File "/data/dinglish/test/validate_product.py.org", line 520, in <module>
  compare(args.golden, args.test, args.data_dset)
File "/data/dinglish/test/validate_product.py.org", line 484, in compare
  compare_groups(hf_g, hf_t)
File "/data/dinglish/test/validate_product.py.org", line 69, in compare_groups
  compare_groups(
File "/data/dinglish/test/validate_product.py.org", line 92, in compare_groups
  _validate_dataset(
File "/data/dinglish/test/validate_product.py.org", line 401, in _validate_dataset
  raise ComparisonError(f"Dataset {golden_dataset.name} values do not match")
__main__.ComparisonError: Dataset /identification/software_version values do not match
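
The failing check above is on the /identification/software_version metadata dataset rather than on the science data (the unwrapped-area statistics agree to within 99.999%). If you want to confirm this by inspecting that dataset in both files, a minimal sketch (again assuming h5py is available in the image) is:

docker run --rm --volume ${DISP_DIR}/delivery_data_dolphin_small:/work \
    opera/disp-s1:0.1.2 python -c \
    "import h5py; [print(p, h5py.File(p, 'r')['identification/software_version'][()]) for p in ('/work/golden_output/20180101_20180330.unw.nc', '/work/output/20180101_20180330.unw.nc')]"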

Full-size validation output:

The validate_product.py program took less than a minute to run and produced some informational messages while executing.

The last message displayed should show that the output and expected files match:

                  INFO     Files golden_output/20180101_20180716.unw.nc and output/20180101_20180716.unw.nc match.    validate_product.py:492