Interfaces
Overview of relevant interfaces - see separate work package areas for details.
There was discussion about how to version the interfaces to ensure compliance and compatibility. We decided that, given the short time frame, the overhead of a formal versioning system for a small number of interfaces was not worth the effort. Instead, this page documents the essentials of each interface and the dependencies between them.
It is important that any changes to the interfaces are documented here before they are deployed, and that dependent teams are informed of the changes.
dependent on: None | has dependents: integration
- There will be an archive_event trigger in the SHAO NGAS server which, once all files for an observation have arrived, will trigger the processing pipeline. Specifically, it will execute the appropriate bash script, passing the observation identifier as well as the list of files that the observation needs.
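The trigger logic described above could look something like the following sketch. The function name, default script path, and argument order are assumptions for illustration, not the deployed plugin:

```python
import subprocess

def on_archive_event(observation_id, arrived_files, expected_files,
                     script="./run_pipeline.sh"):
    """Hypothetical sketch of the archive_event trigger: once every file
    expected for an observation has been archived, invoke the pipeline's
    bash script with the observation ID and the file list as arguments."""
    if set(expected_files) <= set(arrived_files):
        subprocess.run([script, observation_id, *expected_files], check=True)
        return True  # pipeline kicked off
    return False     # still waiting for more files
```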
dependent on: data transfer, pipeline | has dependents: integration, pipeline
List available files
wget -O /dev/stdout 'http://159.226.233.198:7777/QUERY?query=files_list&format=list'
See ngas plugins for the kickoff plugin.
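For scripting, the same QUERY request can be issued from Python; a minimal standard-library sketch using the host and parameters shown above (function names are illustrative):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

NGAS_HOST = "http://159.226.233.198:7777"  # SHAO NGAS server, as above

def files_list_url(host=NGAS_HOST):
    # Build the same QUERY URL as the wget command above.
    return host + "/QUERY?" + urlencode({"query": "files_list", "format": "list"})

def list_files(host=NGAS_HOST):
    # Fetch the plain-text listing and split it into one entry per line.
    with urlopen(files_list_url(host)) as resp:
        return resp.read().decode().splitlines()
```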
- WP2 will provide a simple script for each of the pipelines that can be invoked by NGAS to kick off their execution when necessary.
- A piece of code attached to NGAS will be responsible for determining when all the data needed by a given pipeline is available and invoking the script.
- In the case of the source extraction pipeline, this code will be developed by WP3. The file ID of the individual image that needs to be source-extracted will be passed as a command-line argument to the pipeline script that needs to be invoked. In particular, the kickoff_source_extractor.py plugin will call src/integration/test-source-finding/test1-export-graph.py to test that this functionality works.
- In the case of the MWA imaging pipeline, this code will be developed by WP1, and the MWA observation ID, together with the list of all file IDs for that observation, will be passed as command-line arguments.
The following command will be used to pull data out of NGAS:
wget http://202.127.29.97:7777/RETRIEVE?file_id=<file_id>
(or any other similar command that creates the same HTTP request)
Different pipelines will require different files. They will know the file IDs for each of the files they need to retrieve because they will be communicated during pipeline triggering.
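Equivalently, a pipeline could retrieve a file from Python rather than shelling out to wget; a sketch under the same URL scheme (helper names are hypothetical):

```python
import shutil
from urllib.parse import urlencode
from urllib.request import urlopen

NGAS_HOST = "http://202.127.29.97:7777"

def retrieve_url(file_id, host=NGAS_HOST):
    # Same RETRIEVE request as the wget command above.
    return host + "/RETRIEVE?" + urlencode({"file_id": file_id})

def retrieve_file(file_id, dest_path, host=NGAS_HOST):
    # Stream the response body straight to local disk.
    with urlopen(retrieve_url(file_id, host)) as resp, open(dest_path, "wb") as out:
        shutil.copyfileobj(resp, out)
```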
Data will be pushed using the following command:
wget --post-file <filename_on_disk> --header 'Content-Type: <content-type>' http://202.127.29.97:7777/ARCHIVE?filename=<filename_on_NGAS>
(or any other similar command that creates the same HTTP request)
This should be repeated for each file that the pipeline needs to store in NGAS. These are the currently agreed content types:
- Source extraction pipeline: a VOTable XML document of type application/x-votable+xml
- ASKAP imaging pipeline: an image of type image/fits
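The ARCHIVE push can likewise be done from Python; a sketch that mirrors the wget --post-file call above (function names are illustrative):

```python
from urllib.parse import quote
from urllib.request import Request, urlopen

NGAS_HOST = "http://202.127.29.97:7777"

def archive_request(path_on_disk, filename_on_ngas, content_type, host=NGAS_HOST):
    # POST the raw file bytes with an explicit Content-Type header,
    # equivalent to wget --post-file --header 'Content-Type: ...'.
    with open(path_on_disk, "rb") as f:
        body = f.read()
    url = host + "/ARCHIVE?filename=" + quote(filename_on_ngas)
    return Request(url, data=body, headers={"Content-Type": content_type})

def archive_file(path_on_disk, filename_on_ngas, content_type, host=NGAS_HOST):
    with urlopen(archive_request(path_on_disk, filename_on_ngas, content_type)) as resp:
        return resp.status
```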
- In TopCat, access the TAP server at http://202.127.29.97:8888/casda_vo_tools/tap
- Query: select * from ivoa.obscore where dataproduct_subtype =
- Select the row with the appropriate catalogue
- Click the Access URL to download the catalogue
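The same catalogue query can be issued without TopCat through the TAP protocol's standard synchronous endpoint (the service URL plus /sync, with REQUEST=doQuery and LANG=ADQL). A sketch; the dataproduct_subtype value is left out of the example query since it is not specified above:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

TAP_URL = "http://202.127.29.97:8888/casda_vo_tools/tap"

def tap_sync_params(adql):
    # Standard TAP synchronous-query parameters.
    return urlencode({"REQUEST": "doQuery", "LANG": "ADQL", "QUERY": adql})

def tap_sync_query(adql, tap_url=TAP_URL):
    # POST the ADQL query to <tap>/sync and return the VOTable response bytes.
    with urlopen(tap_url + "/sync", data=tap_sync_params(adql).encode()) as resp:
        return resp.read()
```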
dependent on: data management | has dependents: integration
Trigger pipeline
WP5 will initiate the pipeline and load the data referenced by imglist from local disk:
python trigger_pipeline.py \
  --imglist /home/ska_au_china_2018/SKA-AU-China-2018/src/pipelines/Simple_Selavy_Test/selavy-fits-list.txt \
  --nodelist 192.168.0.101,192.168.0.102,192.168.0.103,192.168.0.104 \
  --masterport 8002
This will need to be integrated to use NGAS (TODO)
Retrieve from NGAS
ngas-get-files.py
Archive Product
The pipeline will store selavy-results.components.xml in NGAS with Content-Type: application/x-votable+xml.
TODO
dependent on: | has dependents:
Monitoring Execution