Docs: recommended workflow for driving post processing with pqact from containers #94
Hi Eric. Thank you for these suggestions. Tom and I discussed these ideas in the past when we were first getting this project started, and I remember going through a similar thought process as points 1-4. I believe it is possible to have inter-container communication, for example, though I have not tried this personally. The TDM and TDS Docker containers do communicate, but not specifically using Docker technology (the communication there is HTTP-driven and occurs between different VMs). I am juggling a few tasks right now, but I will try to dig into this more deeply next week. Pinging your former student and new Unidata employee Bobby Espinoza :-) (@robertej09), who is working on these containers with me.
Thanks, @julienchastang, and hello again, @robertej09! I've seen some advice to use I think, since Docker grew out of the HTTP world, it's pretty normal to communicate through services at an HTTP API endpoint, and so you can trigger things that way. But "just set up an HTTP API" is not quite as trivial for us occasional Unix/server dabblers! It's easier with TDS, where there's already a server, etc.
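To make the HTTP-trigger idea above concrete, here is a minimal sketch (not part of ldm-docker; the endpoint, port, and `run_postprocessing` placeholder are all assumptions for illustration) of a tiny service that a post-processing container could run, so that a pqact-driven script in the LDM container can kick off a job by POSTing a product name to it:

```python
# Hypothetical sketch: a small HTTP endpoint in the post-processing
# container. A pqact EXEC/PIPE'd script in the LDM container could
# trigger it with e.g. curl. Names and paths here are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_postprocessing(product_name: str) -> str:
    # Placeholder for real work (e.g., stitching GOES imagery tiles).
    return f"processed {product_name}"


class TriggerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run the job, and report the result.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = run_postprocessing(payload.get("product", "unknown"))
        body = json.dumps({"status": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the demo quiet; real code would log properly.
        pass


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), TriggerHandler).serve_forever()
```

The design point is that the containers only need to share a network (which Docker provides by default on a user-defined bridge), not a filesystem or process namespace.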
Hey all, great to hear from you, Eric (@deeplycloudy)! We'll look into these various options. While modifying the Dockerfile seems like the easiest solution, getting inter-container communication set up is more flexible in the long run, and I can see a future where different kinds of post-processing scripts, jobs, etc. can all be containerized and "plug-and-play" through We'll keep in touch to see where this goes. Send my hellos to the folks over at Tech!
Hi @deeplycloudy. @robertej09 and I were chatting this AM. In short, option
Hello again @deeplycloudy. Too Long; Didn't Read: I wanted to give you a quick update on our thoughts/progress concerning dockerized post-processing. As has already been mentioned, and as @julienchastang summarized above, your Option 1 seems like the easiest/quickest way forward. Regardless, I spent some time yesterday afternoon and this morning developing a minimal example demonstrating how a named pipe created using I also had a meeting with Ryan May to discuss how the data flowed from LDM to the To expand a little bit on the repo I linked above, I believe there are a few ways we could proceed:
Of these options, 2) seems more flexible, as the script the data is initially piped into could perform some potentially useful logic. I believe 2b) would be the easiest to orchestrate with Nevertheless, since we have something functioning (in principle) at the moment, we will be attempting to set up a more practical proof of concept using an ldm-docker container in the near future. Could you provide us with some more detail on the type of data you're expecting and what type of post-processing tasks you plan on performing? While that should not be particularly important right at this moment, it would be helpful for all of us to have the same vision so we don't get carried away and lost in the details! Let us know if you have any questions or suggestions. Thanks!
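The named-pipe approach discussed above can be sketched in a few lines. This is a minimal, self-contained illustration (paths and names are made up, and the two threads stand in for two containers sharing a FIFO through a mounted volume): one side writes a product into the pipe, as pqact's `PIPE` action would, and the other side reads it.

```python
# Hypothetical sketch of the shared named-pipe idea. In a real setup,
# the FIFO would live on a Docker volume mounted into both the
# ldm-docker container (writer) and the post-processing container
# (reader); here two threads in one process play those roles.
import os
import tempfile
import threading


def write_product(fifo_path: str, data: bytes) -> None:
    # Opening a FIFO for writing blocks until a reader attaches,
    # just as a pqact-driven writer would wait for its consumer.
    with open(fifo_path, "wb") as f:
        f.write(data)


def read_product(fifo_path: str) -> bytes:
    # The post-processing side: read one product's bytes from the pipe.
    with open(fifo_path, "rb") as f:
        return f.read()


if __name__ == "__main__":
    fifo = os.path.join(tempfile.mkdtemp(), "ldm_product")
    os.mkfifo(fifo)  # this directory stands in for the shared volume
    t = threading.Thread(target=write_product, args=(fifo, b"GOES tile bytes"))
    t.start()
    print(read_product(fifo))
    t.join()
```

Because a FIFO is just a path on a shared volume, neither container needs to know anything about the other's network identity, which is what makes this approach attractive for "plug-and-play" post-processing containers.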
The non-container LDM workflow allows easy post-processing with external scripts driven by `pqact`. For instance, the Unidata `ldm-alchemy` project is used to stitch together GOES imagery tiles, and simply receives the product stream over a `PIPE`. (I'm interested in doing this over the next week.)

I'm opening this issue to request advice (or documentation) on how each of the pqact action categories should be used in the context of an isolated, containerized environment that (by philosophy) isolates access to other software installations and inter-process communication. I can think of a few ways:

- A `PIPE` with ldm-docker that is read by another container. (If that's even possible!)
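For readers unfamiliar with the `PIPE` action mentioned above, a pqact.conf entry generally takes the form feedtype, product pattern, action, and arguments, separated by tabs. A hypothetical entry (the feedtype, pattern, and script path below are placeholders, not taken from ldm-docker) might look like:

```
# Hypothetical pqact.conf entry: for each NIMAGE product whose ID
# matches the pattern, pipe the product bytes to the script's stdin.
# Fields must be separated by tabs. That script could, in turn, forward
# the bytes into a FIFO on a volume shared with another container.
NIMAGE	^TIR.*	PIPE	-close /usr/local/ldm/bin/forward_to_pipe.py
```

In the containerized case, the question is whether the command on the receiving end of the `PIPE` runs inside the LDM container or hands the data off across the container boundary.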