This README summarizes the integration notes for the February 2020 delivery
of the ASOS project to NDE.

Last modified: 02/18/2020

Within this directory you will find the top-level XML configurations, this
README, and a selection of test scripts located in the Test_Scripts/
directory.

The following test scripts may be run to verify a successful compilation of
the Framework, and serve as examples of executing the Framework for
representative cases of this project:

    AIT_Framework/Config/Projects/ASOS/Test_Scripts/run-ASOS_CLOUD_UNIT.bash
    AIT_Framework/Config/Projects/ASOS/Test_Scripts/run-ASOS_CLOUD_UNIT-CONUS.bash

By default, the scripts assume that the following executables and
directories are located in your working directory:

Executables:
    algorithms.exe       - Framework executable

Directories:
    Config/              - Location of XML configurations developed by ASSISTT AIT
    L1B/                 - Location of L1B imagery files
    algorithm_ancillary/ - Location of algorithm ancillary data, e.g. LUTs
    framework_ancillary/ - Location of framework ancillary data, e.g. land mask
    gfs_grib2_0.5deg/    - Location of NWP GFS grib data
    oisst_daily/         - Location of OISST data
    snow_map/            - Location of IMS/SSMI snow map data
    Input/               - Input directory for data produced by the Framework,
                           e.g. Level-2 Enterprise Cloud Phase
    Output/              - Output directory for the current Framework run

The following top-level XML configuration file has been designed by the
integration team to support this project:

    AIT_Framework/Config/Projects/ASOS/Config-ASOS_CLOUD_UNIT.xml

A top-level configuration file is passed as an argument when executing the
Framework. The Framework will automatically include other XML files within
the AlgorithmServices/Config/ folder as needed. Directly editing the XML
configuration is not recommended. Examples of how to use these
configurations are provided in the test bash scripts listed above.
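As a quick sanity check before launching the Framework, the expected layout
above can be verified with a short script. The `check_layout` helper below is
purely illustrative (it is not part of the delivery); the names it checks are
taken from the list above.

```shell
#!/bin/bash
# Illustrative pre-flight check: confirm the working directory contains the
# executable and directories this README expects, and print anything missing.
check_layout() {
  local item status=0
  [ -x algorithms.exe ] || { echo "MISSING executable: algorithms.exe"; status=1; }
  for item in Config L1B algorithm_ancillary framework_ancillary \
              gfs_grib2_0.5deg oisst_daily snow_map Input Output; do
    [ -d "$item" ] || { echo "MISSING directory: $item"; status=1; }
  done
  return $status
}

# Report on the current directory; "|| true" keeps the script's exit code 0.
check_layout || true
```

Running it from the intended working directory prints nothing when the layout
is complete, and one line per missing item otherwise.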
The following time and memory information for the ASOS CLOUD unit was
measured on STAR machine rhw1195 with Intel 16.0.4 compilers, using a
maximum of 4 processes, for four cases: Full Disk with no mitigation, Full
Disk with mitigation, Full Disk with complete failure (only Cloud Mask runs
- Cloud Height fails and outputs all missing values), and CONUS with no
mitigation.

rhw1195 machine specifications:
    Processors:     24 Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz
    Memory:         264 GB
    Disk bandwidth: 2.0 GB/s

ASOS CLOUD Unit:
    Coverage: Full Disk (no mitigation)
        Total time:              267.99 seconds
        Total memory (estimate): 2.31 GB/process
    Coverage: Full Disk (with mitigation)
        Total time:              297.23 seconds
        Total memory (estimate): 2.31 GB/process
    Coverage: Full Disk (complete failure)
        Total time:              193.74 seconds
        Total memory (estimate): 2.31 GB/process
    Coverage: CONUS (no mitigation)
        Total time:              64.56 seconds
        Total memory (estimate): 1.41 GB/process

Config-ASOS_CLOUD_UNIT.xml is designed to produce only the variables from
Cloud Height and Cloud Mask needed by ASOS.

To facilitate parallel processing, segments may be computed using
independent executions of the Framework. Parallel processing has three
steps: pre-processing, segment processing, and post-processing.

Before any segment is processed, pre-processing should be invoked as
follows:

    ./algorithms.exe <config.xml> -j <job_id> -m pre

Here, <config.xml> is one of the top-level configuration files provided in
this project and <job_id> is a unique job identification number defined by
the user. The same job id must be used for pre-processing, segment
processing, and post-processing. Pre-processing is a precondition for
segment processing.

Each segment is executed separately using the following command:

    ./algorithms.exe <config.xml> -j <job_id> -s <segment>

Here, <segment> is the index of the current segment. The segment indices
have values between one and the total number of segments (inclusive). Any
job scheduler may be used to manage the segment executions in any order or
simultaneously.
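The three-step sequence can be sketched as a single shell function. The
function name, job id, and defaults below are illustrative placeholders only
(the project's test scripts remain the authoritative examples); the parallel
width of 4 matches the 4-process timing measurements above.

```shell
#!/bin/bash
# Sketch of the pre / segment / post sequence for one Framework job.
# EXE, the job id, and the segment count are placeholders for illustration.
run_asos_job() {
  local EXE=${EXE:-./algorithms.exe}
  local CONFIG=${1:-Config/Projects/ASOS/Config-ASOS_CLOUD_UNIT.xml}
  local JOB_ID=${2:-1001}   # must be identical for pre, segment, and post
  local NSEG=${3:-24}       # e.g. 24 segments for Full Disk, 5 for CONUS

  # Pre-processing must complete before any segment is run.
  "$EXE" "$CONFIG" -j "$JOB_ID" -m pre &&
  # Run all segments, up to 4 at a time, via xargs.
  seq 1 "$NSEG" | xargs -P 4 -I {} "$EXE" "$CONFIG" -j "$JOB_ID" -s {} &&
  # Post-processing runs only after every segment has finished.
  "$EXE" "$CONFIG" -j "$JOB_ID" -m post
}
```

Any equivalent scheduler may replace the xargs line; the only ordering
constraints are pre before all segments and post after all segments.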
The test BASH scripts show examples of scheduling the segment processing by
using the xargs command.

After all segments have been processed, post-processing should be invoked
as follows:

    ./algorithms.exe <config.xml> -j <job_id> -m post

By default, each of the configuration XMLs in this project uses 24 total
segments for the Full Disk and 5 segments for CONUS.

The following environment variables are available to the operational user
to control the ASOS project configuration:

Standard environment variables:
    ENV_COVERAGE         - The ABI coverage, e.g. FD, CONUS, or MESO
    ENV_SAT_ABI_L1B_FILE - Full path of the ABI L1B filename

Standard data environment variables:
    ENV_ALGORITHM_ANCILLARY_DIR - Algorithm ancillary directory.
                                  Default: algorithm_ancillary/
    ENV_FRAMEWORK_ANCILLARY_DIR - Framework ancillary directory.
                                  Default: framework_ancillary/
    ENV_NWP_GFS_GRIB2_DIR       - NWP GFS ancillary directory.
                                  Default: gfs_grib2_0.5deg/
    ENV_OISST_DAILY_DIR         - OISST ancillary directory.
                                  Default: oisst_daily/

NetCDF output environment variables:
    ENV_OUTPUT_DIRECTORY - Output directory of Framework NetCDF files
    ENV_OUTPUT_LIST      - Comma-separated string list of product algorithms
                           to be produced as output,
                           e.g. "CLOUD_MASK_EN,CLOUD_HEIGHT_EN"

NetCDF input environment variables:
    ENV_INPUT_DIRECTORY  - Input directory of Framework NetCDF files
    ENV_INPUT_LIST       - Comma-separated string list of product algorithms
                           to be input, e.g. "CLOUD_MASK_EN,CLOUD_HEIGHT_EN"
    ENV_START_TIME_STAMP - Start time stamp for the current run. Used for
                           automatic generation of input filenames.

Snow Mask environment variables:
    ENV_SNOW_MASK_COMMON_ALG - Ancillary snow mask algorithm to be used as
                               precedent data.
                               Default: SNOW_MASK_IMS_SSMI
    ENV_SNOW_MAP_DIR         - Directory of IMS/SSMI snow mask data

NDE production metadata environment variables:
    ENV_INSTITUTION            - Institution string for NDE global metadata
                                 attribute
    ENV_NAMING_AUTHORITY       - Naming authority string for NDE global
                                 metadata attribute
    ENV_PRODUCTION_SITE        - Production site string for NDE global
                                 metadata attribute
    ENV_PRODUCTION_ENVIRONMENT - Production environment string for NDE
                                 global metadata attribute

NetCDF output control environment variables:
    ENV_NETCDF_COMPRESSION - Compression level of NetCDF files. 0 is no
                             compression. Default: 0.

Data Cache Archive control environment variables:
    ENV_ARCHIVE_NAME - Name of the Data Cache Archive. If not provided or
                       empty, the Job ID will be used as the archive name.
    ENV_ARCHIVE_DIR  - Location of the Data Cache Archive folder.
                       Default: /dev/shm

Satellite reader control environment variables:
    ENV_SUB_SAT_LON_TRUE - Sets the nominal satellite longitude subpoint.
                           Overrides the value read from L1B. Not needed for
                           L1B files more recent than August 2018.
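Putting a few of these together, a minimal environment setup for a Full Disk
run might look like the following. The variable names come from this README;
every value, including the L1B path, is a placeholder to be replaced with
site-specific settings.

```shell
#!/bin/bash
# Illustrative environment setup for an operational ASOS Full Disk run.
# All values below are placeholders, not operational settings.
export ENV_COVERAGE=FD                                 # FD, CONUS, or MESO
export ENV_SAT_ABI_L1B_FILE=/path/to/L1B/abi_l1b_file.nc   # placeholder path
export ENV_ALGORITHM_ANCILLARY_DIR=algorithm_ancillary/
export ENV_FRAMEWORK_ANCILLARY_DIR=framework_ancillary/
export ENV_NWP_GFS_GRIB2_DIR=gfs_grib2_0.5deg/
export ENV_OISST_DAILY_DIR=oisst_daily/
export ENV_OUTPUT_DIRECTORY=Output/
export ENV_OUTPUT_LIST="CLOUD_MASK_EN,CLOUD_HEIGHT_EN"
export ENV_NETCDF_COMPRESSION=0                        # 0 = no compression
```

Unset variables fall back to the defaults listed above; only the coverage and
L1B file are run-specific inputs that must always be supplied.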