Package Proposal #22
base: main
Conversation
I've added the necessary framework for semi-reproducible cataloging of a RUN. The spec is still relatively flexible, since no automation has been built around it yet; once tooling exists, it will become harder to change.
Do we need to preserve the original readme.md in some other location?
@@ -1,61 +0,0 @@
# CIROH Cloud Terraform Configuration
Seems like we need to preserve some of this as a Terraform_README.md
Yeah these were just Matt D.'s notes on the prototype from a while back
Get the files from the appropriate bucket & RUN_CONFIG
```
wget --no-parent https://awi-ciroh-ngen-data.s3.us-east-2.amazonaws.com/AWI_001/AWI_03W_113060_001.tar.gz

tar -xvf AWI_03W_113060_001.tar.gz
```
Then we can confirm file location and integrity against the example JSON file.
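For reference, a rough sketch of what that confirmation step could look like. It assumes the example JSON lists each file with a relative path and a sha256 checksum under a `files` key, and that the manifest is named RUN_CONFIG.json; the manifest name and field names are assumptions for illustration, not part of the spec.
```
#!/usr/bin/env bash
# Sketch only: the manifest name (RUN_CONFIG.json) and its field names
# ("files", "path", "sha256") are hypothetical -- the spec does not fix them yet.
set -euo pipefail

manifest=RUN_CONFIG.json

# For every file the catalog lists, check that it exists on disk and that
# its checksum matches the recorded value.
jq -r '.files[] | "\(.sha256)  \(.path)"' "$manifest" | while read -r sum path; do
    if [ ! -f "$path" ]; then
        echo "MISSING: $path"
    else
        echo "$sum  $path" | sha256sum --check --quiet - || echo "CHECKSUM MISMATCH: $path"
    fi
done
```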
We discussed splitting the tar into several pieces. This will work for tomorrow. If we can set up the split files, that would be more consistent with the eventual goal.
It's a stopgap for sure; I can add a TODO there if you like.
I can split the cloud data into separate "discipline" tar files, but that would require reorganizing the bucket so the spec can point at the new bucket locations. Right now the spec points to local files once you've downloaded and extracted the .tar.gz, though you could also pull the files from the bucket piecemeal.
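If it helps, a sketch of what the piecemeal route could look like once the bucket is reshaped; the per-discipline archive names below are made up for illustration and don't exist in the bucket today.
```
# Sketch only: the per-discipline archive names are hypothetical --
# the bucket is not organized this way yet.
BASE=https://awi-ciroh-ngen-data.s3.us-east-2.amazonaws.com/AWI_001

for part in forcing config hydrofabric; do
    wget --no-parent "$BASE/AWI_03W_113060_001_${part}.tar.gz"
    tar -xvf "AWI_03W_113060_001_${part}.tar.gz"
done
```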
On hold while the wrapper script and data mounting standardization changes are proposed and merged.
Adding a spec, notes, and some examples for semi-reproducible cataloging of a RUN. This is a relatively flexible item, since no automation has been produced yet; eventually, existing tooling will make this spec more difficult to change.
Open to comments and discussion.