CSLC-S1 Beta Acceptance Testing Instructions
This page contains instructions for performing Acceptance Testing for the CSLC-S1 Beta delivery from the OPERA-ADT team. These instructions assume the user has access to the JPL FN-Artifactory, and has Docker installed on their local machine.
The image is currently hosted on JPL FN-Artifactory, which requires JPL VPN access and JPL credentials. You may also need to be added to the gov.nasa.jpl.opera.adt organization.
Once you have access, the container tarball delivery is available under general/gov/nasa/jpl/opera/adt/cslc_s1/r3/beta/dockerimg_cslc_s1_beta_0.1.tar. Sample inputs and outputs are also available under general/gov/nasa/jpl/opera/adt/cslc_s1/r3/beta/delivery_cslc_s1_beta_0.1.zip.
Download both files to a location on your local machine. This location will be referred to throughout these instructions as <CSLC_DIR>.
Note that the sample data is quite large, so the download from Artifactory can take some time.
The first step in running the CSLC-S1 image is to load it into Docker via the following command:
docker load -i <CSLC_DIR>/dockerimg_cslc_s1_beta_0.1.tar
This should add the Docker image to your local repository with the name opera/cslc_s1 and the tag beta_0.1.
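You can confirm the image is available by listing it by repository name:

docker images opera/cslc_s1

The listing should show the beta_0.1 tag.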
Once the delivery_cslc_s1_beta_0.1.zip file is downloaded to your local machine, unpack it to <CSLC_DIR>:
cd <CSLC_DIR>; unzip delivery_cslc_s1_beta_0.1.zip
This will create a delivery_cslc_s1_beta_0.1 directory within <CSLC_DIR> containing the following files/directories:
- expected_output_dir/
  |_ t064_135518_iw1/
     |_ 20220501/
        |_ t064_135518_iw1_20220501_VV.h5
  |_ ... (one subdirectory per burst)
- input_data/
  |_ dem_4326.tiff
  |_ S1A_IW_SLC__1SDV_20220501T015035_20220501T015102_043011_0522A4_42CC.zip
  |_ S1A_OPER_AUX_POEORB_OPOD_20220521T081912_V20220430T225942_20220502T005942.EOF
  |_ opera_burst_database_deploy_2022_1212.sqlite3
- output_s1_cslc/
- scratch_s1_cslc/
- runconfig_cslc_beta.yaml
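As a quick sanity check, you can list the unpacked directory and confirm it matches the layout above:

ls <CSLC_DIR>/delivery_cslc_s1_beta_0.1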
In order to execute the SAS, the input file directory, runconfig, scratch, and output locations will be mounted into the container instance as Docker Volumes. To help streamline this process, we recommend making the following changes within the delivery_cslc_s1_beta_0.1 directory:
- Create a directory named runconfig, and copy the existing runconfig YAML file into it:

mkdir -p <CSLC_DIR>/delivery_cslc_s1_beta_0.1/runconfig
cp <CSLC_DIR>/delivery_cslc_s1_beta_0.1/runconfig_cslc_beta.yaml <CSLC_DIR>/delivery_cslc_s1_beta_0.1/runconfig/runconfig_cslc_beta.yaml
- Make sure the output and scratch directories have write permissions set:

chmod a+w output_s1_cslc/ scratch_s1_cslc/
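You can verify that the permissions took effect with:

ls -ld output_s1_cslc/ scratch_s1_cslc/

Both directories should show write (w) bits for group and other.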
We're now ready to execute the CSLC-S1 Beta. Run the following command to kick off execution with the test assets:
docker run --rm -u $UID:$(id -g) \
-w /home/compass_user \
-v <CSLC_DIR>/delivery_cslc_s1_beta_0.1/runconfig:/home/compass_user/runconfig:ro \
-v <CSLC_DIR>/delivery_cslc_s1_beta_0.1/input_data:/home/compass_user/input_data:ro \
-v <CSLC_DIR>/delivery_cslc_s1_beta_0.1/output_s1_cslc:/home/compass_user/output_s1_cslc \
-v <CSLC_DIR>/delivery_cslc_s1_beta_0.1/scratch_s1_cslc:/home/compass_user/scratch_s1_cslc \
-i --tty opera/cslc_s1:beta_0.1 s1_cslc.py /home/compass_user/runconfig/runconfig_cslc_beta.yaml
Execution should take roughly 50 minutes. Once execution is complete, you should see 28 subdirectories within the output_s1_cslc directory, one for each burst processed. Each burst subdirectory should contain a 20220501 directory, which itself contains a single .h5 file for the burst.
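A quick way to verify the product count is to count the generated HDF5 files, which should total 28 (one per burst, per the description above):

find <CSLC_DIR>/delivery_cslc_s1_beta_0.1/output_s1_cslc -name "*.h5" | wc -l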
Now that we've successfully executed the SAS container and generated outputs, the last step is to perform a QA check against the expected outputs.
Download a copy of the following Python script to <CSLC_DIR>: https://raw.githubusercontent.com/opera-adt/COMPASS/main/src/compass/utils/validate_cslc.py
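For example, using curl:

curl -o <CSLC_DIR>/validate_cslc.py https://raw.githubusercontent.com/opera-adt/COMPASS/main/src/compass/utils/validate_cslc.py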
The following shell script can be used to automate the comparisons across all burst-based products:
#!/bin/bash
# Compares each expected CSLC burst product against the corresponding
# actual product. Run from the directory containing validate_cslc.py.
# Usage: ./cslc_compare_products.sh <expected_dir> <output_dir>

declare -a burst_ids=( "t064_135518_iw1"
"t064_135518_iw2"
"t064_135518_iw3"
"t064_135519_iw1"
"t064_135519_iw2"
"t064_135519_iw3"
"t064_135520_iw1"
"t064_135520_iw2"
"t064_135520_iw3"
"t064_135521_iw1"
"t064_135521_iw2"
"t064_135521_iw3"
"t064_135522_iw1"
"t064_135522_iw2"
"t064_135522_iw3"
"t064_135523_iw1"
"t064_135523_iw2"
"t064_135523_iw3"
"t064_135524_iw1"
"t064_135524_iw2"
"t064_135524_iw3"
"t064_135525_iw1"
"t064_135525_iw2"
"t064_135525_iw3"
"t064_135526_iw1"
"t064_135526_iw2"
"t064_135526_iw3"
"t064_135527_iw1")
expected_dir=$1
output_dir=$2

for burst_id in "${burst_ids[@]}"; do
    echo "-------------------------------------"
    echo "Comparing results for ${burst_id}"
    python3 validate_cslc.py \
        --ref-product "${expected_dir}/${burst_id}/20220501/${burst_id}_20220501_VV.h5" \
        --sec-product "${output_dir}/${burst_id}/20220501/${burst_id}_20220501_VV.h5"
done
After giving the script a name, such as cslc_compare_products.sh, and making it executable, you can compare all expected and actual products. Note that this requires a Python environment with numpy, h5py, and osgeo (gdal) installed. The cslc_compare_products.sh script should also be stored within <CSLC_DIR>.
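For example, assuming the script was saved as cslc_compare_products.sh within <CSLC_DIR>, the following makes it executable and sanity-checks that the required Python packages can be imported (an optional check, not part of the delivery):

chmod +x <CSLC_DIR>/cslc_compare_products.sh
python3 -c "import numpy, h5py; from osgeo import gdal; print('dependencies OK')"

With those checks passing, run the comparison: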
<CSLC_DIR>/cslc_compare_products.sh \
<CSLC_DIR>/delivery_cslc_s1_beta_0.1/expected_output_dir \
<CSLC_DIR>/delivery_cslc_s1_beta_0.1/output_s1_cslc
Console output similar to the following should be returned for each burst product:
-------------------------------------
Comparing results for t064_135526_iw2
Comparing CSLC number of bands ...
Comparing CSLC projection ...
Comparing geo transform arrays ...
Check that the percentage of pixels in the difference between reference and secondary products real parts above the threshold 1.0e-5 is below 0.1 %
Check that the percentage of pixels in the difference between reference and secondary products imaginary parts above the threshold 1.0e-5 is below 0.1 %
All CSLC product checks have passed
All CSLC metadata checks have passed
...
The expected result for this Acceptance Test is to see the "All CSLC product checks have passed" and "All CSLC metadata checks have passed" messages returned from the validation script for each burst product. If any actual results differ, the output products from the Acceptance Test, as well as the results of the comparison script, should be provided to ADT for diagnostics.