EarthCARE Level 2 Documentation
ACM-CAP (Version 1.0)
Cloud and Precipitation Best Estimate
Acceptance Test Plan (ATP)
VARSY Project
Code: / L2b-ACM-CAP-ATP
Issue: / 01
Date: / 4/07/2013
Reference: / University of Reading
Name / Function / Signature
Prepared by / Robin Hogan / Project Scientists
Reviewed by / Pavlos Kollias / Project Scientist
Approved by / Pavlos Kollias / Project Manager
Signatures and approvals on original



Document Information

Contract Data
Contract Number: / 4000104528/11/NL/CT
Contract Issuer: / ESA-ESTEC
Internal Distribution
Name / Unit / Copies
Robin Hogan / University of Reading / 1
Internal Confidentiality Level
Unclassified / [ ] / Restricted / [X] / Confidential / [ ]
External Distribution
Name / Organisation / Copies
Tobias Wehr / ESA-ESTEC / 1
Michael Eisinger / ESA-ESTEC / 1
Dulce Lajas / ESA-ESTEC / 1
Pavlos Kollias / McGill University / 1
Julien Delanoë / LATMOS / 1
Gerd-Jan van Zadelhof / KNMI / 1
David Donovan / KNMI / 1
Alessandro Battaglia / University of Leicester / 1
Archiving
Word Processor: / MS Word 2003
File Name: / VARSY-L2-ACM-CAP-ATP


Document Status Log

Issue / Change description / Date / Approved
01 / First version / 4/07/2013


Table of Contents

1. Purpose and Scope 9

1.1. Applicable Documents 9

1.2. Reference Documents 11

1.3. List of Abbreviations 12

2. Introduction 13

3. Software Package 14

4. Test set up 15

5. Testing the software 16

5.1. Test 1: Nominal retrieval using A-Train input data 16

5.1.1. Purpose 16

5.1.2. Strategy 16

5.1.3. Input 17

5.1.4. Output 17

5.1.5. Pass/fail criteria 17

5.1.6. Plotting the output 18

5.2. Test 2: Nominal retrieval using simulated EarthCARE input data 19

5.2.1. Purpose 19

5.2.2. Strategy 19

5.2.3. Input 19

5.2.4. Output 20

5.2.5. Pass/fail criteria 20

5.2.6. Plotting the output 20

5.3. Test 3: Failure to specify configuration file name on command line 21

5.3.1. Purpose 21

5.3.2. Strategy 21

5.3.3. Input 22

5.3.4. Output 22

5.3.5. Pass/fail criterion 22

5.4. Test 4: Failure due to incorrect input filename 22

5.4.1. Purpose 22

5.4.2. Strategy 22

5.4.3. Output 23

5.4.4. Pass/fail criterion 23

5.5. Test 5: Failure due to incorrect variable name 23

5.5.1. Purpose 23

5.5.2. Strategy 23

5.5.3. Input 24

5.5.4. Output 24

5.5.5. Pass/fail criteria 24

6. Environment 26

List of Tables

Table 1: Applicable Documents 9

Table 2: Reference Documents 11

Table 3: List of abbreviations 12

Table 4: The development environments for the reference and testing machines 27

1. Purpose and Scope

This document describes the procedure to be performed for the acceptance of the ACM-CAP software developed in the scope of the VARSY project.

1.1. Applicable Documents

Table 1: Applicable Documents

Reference / Code / Title / Issue /
[SOW] / EC-SW-ESA-SY-0310 / Statement of Work: VARSY - 1-Dimensional VARiational Retrieval of SYnergistic EarthCARE Products / 1.0
[CC] / Appendix 2 to AO/1-6823/11/NL/CT / Draft Contract (attachment to SOW) / 1.0
[AD 1] / EC-SW-ESA-SY-0152 / EarthCARE Level 2 Processor Development General Requirements Baseline / 1.0
[AD 2] / EC.ICD.ASD.SY.00004 / EarthCARE Product Definitions. Vol. 0: Introduction
[AD 3] / EC.ICD.ASD.SY.00005 / EarthCARE Product Definitions. Vol. 1: Common Product Definitions / 1.0
[AD 4] / EC.ICD.ASD.ATL.00021 / EarthCARE Product Definitions. Vol. 2b: ATLID level 1 / 1.0
[AD 5] / EC.ICD.ASD.BBR.00022 / EarthCARE Product Definitions. Vol. 3b: BBR level 1 / 1.0
[AD 6] / EC.ICD.ASD.MSI.00023 / EarthCARE Product Definitions. Vol. 4b: MSI level 1 / 1.0
[AD 7] / ECSIM-DMS-TEC-ICD01-R / ECSIM Simulator Interface Control Document
[AD 8] / PE-TN-ESA-GS-0001 / Ground Segment: File Format Standard / 1.0
[AD 9] / EC-TN-ESA-GS-0218 / Tailoring of the Earth Explorer File Format Standard for the EarthCARE Ground Segment / 2.0

1.2. Reference Documents

Table 2: Reference Documents

Reference / Code / Title / Issue /
[RD1] / ECSIM-DMS-TEC-SUM-01-R / ECSIM System User Manual
[RD2] / ECSIM-KNMI-MAD01-R / ECSIM Model and Algorithms Document
[RD3] / EE-MA-DMS-GS-0001 / Earth Explorer Mission CFI Software: General Software User Manual
[RD4] / EOP-SM/1567/TW / EarthCARE Mission Requirements Document
[ATLAS-FR] / EC-FR-KNMI-ATL-027 / ATLAS Final report / 1.0
[ATLAS-ACM-TC] / EC-TN-KNMI-ATL-ACM-TC-024 / L2b Classification ATBD / 1.2, 13/03/08
[ATLAS-EBD] / EC-TN-KNMI-ATL-ATBD-A-EBD-021 / L2a ATLID Extinction, Backscatter and Depolarization algorithm ATBD / 1.1, 27/04/09
[ATLAS-FM] / EC-TN-KNMI-ATL-ATBD-A-FM-010 / L2a ATLID Feature mask ATBD / 2.2
[RATEC-FR] / RATEC-FR-READING-1 / RATEC Final Report / 1.0, April 2011

1.3. List of Abbreviations

Table 3: List of abbreviations

Abbreviation / Name
1D-VAR RS / 1-dimensional variational retrieval scheme
ATLID / Atmospheric Lidar (The EarthCARE lidar)
CASPER / Cloud and Aerosol Synergetic Products from EarthCARE retrievals
CPR / Cloud Profiling Radar (The EarthCARE radar)
EarthCARE / The Earth Clouds, Aerosols and Radiation Explorer
ECSIM / EarthCARE Simulator
HSRL / High-Spectral Resolution Lidar
MSI / Multi-spectral Imager (The EarthCARE imager)

2. Introduction

As part of the algorithm development, specific new algorithm features are tested using real data. This is part of the debugging cycle, intended to ensure that errors in the mathematical derivations, the solution method and the coding are eliminated from release versions.

This document is concerned with stand-alone testing. This means that the full algorithm is tested on a scenario that is relatively realistic. In the case of the ACM-CAP algorithm, a test scene is used from the A-Train of instruments. This enables testing of the basic ability to retrieve cloud, aerosol and precipitation properties from radar and lidar backscatter. A second test scenario consists of simulated EarthCARE data for the same scene; this enables testing of the ability to make use of the Doppler and HSRL capability. Three further tests check appropriate behaviour when the algorithm is called incorrectly or data are missing.

The stand-alone tests can be performed on different machines in order to identify whether the algorithm is susceptible to the build-up of “rounding errors”, which may lead to different results on different platforms. This is measured by checking for differences between the output NetCDF file and a reference NetCDF file produced on the development machine. In addition, a script is provided to produce quicklooks from the output file, allowing the retrieved fields to be inspected visually at a glance.
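
As a condensed illustration of this workflow, the commands below run the first test, compare its output against the reference file and produce the quicklooks. This is only a sketch built from the make targets described in section 5, where the individual tests and their pass/fail criteria are defined:

cd test              # all tests are driven by the Makefile in this directory
make test1           # run the nominal A-Train retrieval (section 5.1)
make compare1        # compare the output with the reference NetCDF file using nccmp (section 5.1.5)
make plots1          # optionally produce quicklooks of the retrieved fields (section 5.1.6)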

This document outlines:

·  The process of compiling the algorithm and pre-requisites.

·  The execution of the algorithm on test data.

·  The comparison of the output file from the test machine with a reference file to identify any numerical differences, using the NCCMP tool (http://nccmp.sourceforge.net/).

·  The production of quicklooks from the output file.

·  Tests to confirm correct behaviour when the program is called incorrectly, including omitting the configuration file name on the command line, an incorrect input file name, and a missing variable in the NetCDF file.

3. Software Package

The code is distributed as a tar.gz file with a name of the form optimal_synergy-X.Y.Z.tar.gz, where X.Y.Z is the version number. This unpacks into a directory optimal_synergy-X.Y.Z with contents described in the Software User Manual (an example of unpacking the package is given after the list of subdirectories below). The test files are contained in the test/ directory, which contains a Makefile to facilitate the testing process (see section 5), as well as the following subdirectories:

·  test/input/: Contains two input NetCDF files.

·  test/output/: Where the output NetCDF files will be written, along with log files containing a copy of what was sent to standard output.

·  test/plots1/, test/plots2/: Where plots of the output fields will be placed for test scenarios 1 and 2, respectively.

·  test/output_reference/: Contains reference output NetCDF files and log files, for comparison with the output files from the test.

·  test/plots1_reference/, test/plots2_reference/: Contain reference plots of the output fields, for comparison with the plots from the test, for scenarios 1 and 2, respectively.

·  test/diff/: Where the output of nccmp will be stored, recording the result of the comparison of the output with the reference output data.
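
As an illustration of the package layout described above, the commands below unpack a hypothetical version 1.0.0 of the package (the version number is purely illustrative) and enter the test directory:

tar xzf optimal_synergy-1.0.0.tar.gz
cd optimal_synergy-1.0.0/test
ls    # should show the Makefile and the subdirectories listed above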

4. Test set up

The steps required to compile the necessary software and set up the tests on a Linux platform are as follows:

1.  Install the NCCMP package (http://nccmp.sourceforge.net/). This is not available as an RPM, so you will need to compile it and put the nccmp executable in your executable path (one of the directories listed with “echo $PATH”); a sketch of a typical source build is given after this list. The only version on the NCCMP web site is 1.4.0 (as of August 2013). If the nccmp executable is in a non-standard location, then before testing the software (as described in section 5), edit the file test/Makefile so that the NCCMP variable is set to the full path of the nccmp executable.

2.  Install the algorithm software and pre-requisites following the instructions in the Software User Manual.
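
The following is a minimal sketch of installing NCCMP from source, assuming the usual configure/make build procedure and the 1.4.0 tarball; the installation prefix is only an example, and the instructions distributed with NCCMP itself take precedence:

tar xzf nccmp-1.4.0.tar.gz
cd nccmp-1.4.0
./configure --prefix=$HOME/local   # assumes a standard configure/make build
make
make install
export PATH=$HOME/local/bin:$PATH  # or instead set the NCCMP variable in test/Makefile
which nccmp                        # verify that the executable is now found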

5. Testing the software

This section describes step-by-step the operations required to execute the tests. All tests are controlled by the Makefile in the test/ directory of the software package, so after compiling the software, type

cd test

Typing

make help

then displays the contents of the README file, which lists the available tests that can be performed.

5.1. Test 1: Nominal retrieval using A-Train input data

5.1.1. Purpose

To demonstrate the normal execution of the algorithm on CloudSat and CALIPSO data using a scene of 1400 profiles containing supercooled liquid cloud, ice cloud, rain and aerosol. The scene is from near the start of CloudSat granule 03631 on 2 January 2007, over New Guinea and Australia.

5.1.2. Strategy

In the test directory of the distribution, type

make test1

This will execute the command

../bin/unified_retrieval append_path=.. log_level=progress \
input=dardar-ceccaldi-v5_2007002162454_03631-rays-700-2099.nc \
output=acm-cap-atrain_2007002162454_03631.nc conf/atrain.cfg \
| tee -i output/acm-cap-atrain_2007002162454_03631.log

Note that the “tee” command stores the messaging output from unified_retrieval in the specified file as well as echoing the messages to the terminal. This test should take between 10 and 15 minutes to run.
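
Because the log file stores a copy of the terminal output, it can also be inspected after the run has finished. The command below is an optional, illustrative check that assumes error messages are marked with asterisks, as described in section 5.1.5:

grep -n '\*\*\*' output/acm-cap-atrain_2007002162454_03631.log || echo "No asterisk-marked error messages found"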

5.1.3. Input

The input files (relative to the test/ directory) are:

input/dardar-ceccaldi-v5_2007002162454_03631-rays-700-2099.nc

conf/atrain.cfg

5.1.4. Output

The output files (relative to the test/ directory) are:

output/acm-cap-atrain_2007002162454_03631.nc

output/acm-cap-atrain_2007002162454_03631.log

5.1.5. Pass/fail criteria

The execution should complete without issuing an error (error messages are obvious as they are surrounded by asterisks). To test for close agreement with the reference output dataset, type

make compare1

This runs the following command:

nccmp -dmfs -T 1.0 output/acm-cap-atrain_2007002162454_03631.nc output_reference/acm-cap-atrain_2007002162454_03631.nc

(For this to execute correctly, the nccmp executable must be in your execution path, i.e. in the PATH environment variable. Otherwise, edit the COMPARE variable of the Makefile to specify the exact location of nccmp on your system.) If the files are reported to be identical or to agree to within the specified tolerance of 1% then the test has passed. If differences greater than 1% are found then the test has not passed.
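
If the comparison is to be checked from a script rather than by reading the terminal output, the exit status can be used. The sketch below assumes that nccmp, and hence make compare1, returns a zero exit status when the files agree to within the tolerance and a non-zero status otherwise; this is the conventional nccmp behaviour, but should be verified on your system:

if make compare1; then
    echo "Test 1 passed: output agrees with the reference to within 1%"
else
    echo "Test 1 failed: inspect the nccmp report stored in the diff/ directory"
fi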

It should be noted here that the nature of the iterative solver means that any small numerical differences may lead to a different path of the minimizer through phase space, resulting in differences in the retrieval greater than 1%. For the practical performance of the algorithm on the test machine to be regarded as acceptable, the real question is whether the output data agree with the reference data to within the retrieval error, but this is not possible to test with nccmp. Therefore we apply the stricter test of a much closer match (1%), which is generally only achieved if there are absolutely no numerical differences between the development and test platforms, so that the iterative solver takes exactly the same path through phase space. In fact, when testing the retrieval on an older 64-bit AMD processor, while the development machine was a 64-bit Intel Xeon processor, it was found that there were numerical differences leading to different minimizer paths and consequently differences in the retrieved variables greater than 1%. However, the difference in the plots was rather small; both appeared to be acceptable retrievals. Nonetheless, it is likely that the 1% test will only be passed if the test platform also has an Intel processor.
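
If the 1% comparison fails on a platform with a different processor, it can be informative (although it is not part of the formal pass/fail criterion) to rerun nccmp with a relaxed tolerance to see whether the differences are nevertheless small; the command below, with an illustrative tolerance of 10%, follows the same form as the make compare1 command:

nccmp -dmfs -T 10.0 output/acm-cap-atrain_2007002162454_03631.nc output_reference/acm-cap-atrain_2007002162454_03631.nc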

5.1.6. Plotting the output

This is optional and not part of the test.

  1. To create quicklooks of the output: make plots1
  2. To view the quicklooks in a web browser, point a web browser to the test/plots1/index.html file, or if you use the firefox web browser, simply type: firefox plots1/index.html
  3. To view the reference quicklooks in a web browser, point a web browser to the test/plots1_reference/index.html file, or if you use the firefox web browser, simply type: firefox plots1_reference/index.html

5.2. Test 2: Nominal retrieval using simulated EarthCARE input data

5.2.1. Purpose

To demonstrate the normal execution of the algorithm on simulated EarthCARE observations, specifically the CPR reflectivity and Doppler velocity, and the ATLID Mie and Rayleigh channels. The scene is the same as in Test 1 with the observations having been simulated, with noise, from the A-Train retrievals on this scene.

5.2.2. Strategy

In the test directory of the distribution, type

make test2

This will execute the command

../bin/unified_retrieval append_path=.. log_level=progress \
input=simulated-earthcare_2007002162454_03631-rays-700-2099.nc \
environment_input=dardar-ceccaldi-v5_2007002162454_03631-rays-700-2099.nc \
output=acm-cap-earthcare_2007002162454_03631.nc \
conf/earthcare.cfg \
| tee -i output/acm-cap-earthcare_2007002162454_03631.log

Note that the “environment_input” parameter specifies that thermodynamic data should be taken from a different file from that containing the observations. This test should take around 5-10 minutes to run.

5.2.3. Input

The input files (relative to the test/ directory) are:

input/simulated-earthcare_2007002162454_03631-rays-700-2099.nc

input/dardar-ceccaldi-v5_2007002162454_03631-rays-700-2099.nc

conf/earthcare.cfg

5.2.4. Output

The output files (relative to the test/ directory) are:

output/acm-cap-earthcare_2007002162454_03631.nc

output/acm-cap-earthcare_2007002162454_03631.log

5.2.5. Pass/fail criteria

The execution should complete without issuing an error. To test for agreement with the reference output dataset, type