Reducing GHOST IFU data with DRAGONS #255
Conversation
Thanks for the PR @bmerino95! Here are my comments:
- Add the"GHOST" keyword to keywords.txt in this PR
- Change the following keywords in the first cell of the NB: "star" --> "stars", "dragons" --> "DRAGONS" (since "stars" and "DRAGONS" are already a part of the keywords.txt file)
- In the Goals section, it says that this NB uses the "DRAGONS (Py3.7)" kernel. Change this to "DRAGONS-3.2.1 (Py3.10)"
- In the Summary section, the link to program ID GS-CAL20230416 basically contains no information... Just blanks for PI, Co-I(s), and the abstract. Is this something that can be fixed on the Gemini Archive end?
- Use markdown format instead of HTML for links. I.e. instead of
<a href="https://zenodo.org/record/7776065#.ZDg5qOzMLUI">DRAGONS open source software publication</a>
use:
[DRAGONS open source software publication](https://zenodo.org/record/7776065#.ZDg5qOzMLUI)
- In the "About the dataset" section, the last column in the summary table is "Purpose and Exposure (seconds)" but none of the values indicate an exposure time in seconds?
- In the same section and table, the statement "In this case, the calibrations for the science can be used for the standard star." is in the "File name(s)" column but should probably be in the "Purpose" column?
- In the "Create directory for raw files" section, change "...that will temporarily stored in the working..." --> "...that will be temporarily stored in the working..."
- In the same section, a typo: "priliminary" --> "preliminary"
- In the "Create file lists" section, it says "The first step is to create lists that will be used in the data reduction process. For that, we use dataselect." ...But then dataselect isn't actually used until several cells/sections later. Either rephrase or reorganize these sections
- Can you add a description to the "reduce_func" function like you did with the "update_list" function?
- In the "reduce_func" function, you have these two lines commented out:
#reduce_stdred.uparms = [('scaleCountsToReference:tolerance', 1)]
#reduce_stdred.recipename = 'reduceStandard'
But you don't explain why they're commented out or what they're used for or refer to. Are they necessary to keep in this notebook? If so, add an explanation as to what they do - The section "Use dataselect to choose the all the BIAS files." title should be changed to "Use dataselect to choose all the BIAS files." or "Select and reduce biases" (the latter of which is the name for this section in the Table of Contents)
- The section "Use reduce_func() to reduce the biases" has the tag "Reduce_biases" but this section is not mentioned in the Table of Contents. Either remove the tag or add it to the ToC
- In the next section, a typo: "..we first need to udpate our list of files.." --> "..we first need to update our list of files.."
- "Reduce the slit biases." section has the tag "Reduce_slit_biases", but this tag is not used. Same as # 12
- After the "Select the blue science biases." section, you have "Reduce the red science biases.", but it should be "Reduce the blue science biases."
- In the clean_up_1 section, change "..files in the work directory.." --> "..files in the working directory.."
- In the "Select and Reduce the blue science frames." section, should you add np.sort on
sciblue
like you did withscired
in the previous section? (If there's ever a case where there's more than one sciblue file...) - In the "Reduce the biases" sections, the *_bias.fits files get written to the working directory AND the calibrations/processed_bias/ directory. Is it necessary to have the file in both locations? Are they the same file?
- Same with the *_slitflat.fits files, but to the working directory and calibrations/processed_slitflat/ directory, and *_flat.fits and calibrations/processed_flat/, etc
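For the dataselect point above, here is a minimal sketch of how the file lists could be built, assuming the raw files sit in ./raw as in this notebook; the tag choices are illustrative (the GHOST arm filtering the notebook does is omitted) and are not the notebook's exact calls:

```python
import glob

import numpy as np
from gempy.adlibrary import dataselect

all_files = sorted(glob.glob('raw/*.fits'))

# Frames are selected by their astrodata tags: the second argument lists
# tags to require, the third lists tags to exclude.
biases = dataselect.select_data(all_files, ['BIAS'])
science = dataselect.select_data(all_files, [], ['CAL'])

# np.sort (as suggested for sciblue above) keeps the order deterministic
# even if more than one file ever matches.
sciblue = np.sort(science)
```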
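On the commented-out uparms/recipename lines: in DRAGONS, uparms overrides a primitive's parameter (here, the tolerance of scaleCountsToReference) and recipename selects a non-default recipe. Below is a minimal sketch of how they plug into a Reduce call, with std_files as a hypothetical stand-in for the standard-star list:

```python
from recipe_system.reduction.coreReduce import Reduce

reduce_stdred = Reduce()
reduce_stdred.files.extend(std_files)  # std_files: stand-in for the standard-star list

# Override one primitive parameter, addressed as <primitive>:<parameter>.
reduce_stdred.uparms = [('scaleCountsToReference:tolerance', 1)]

# Run the named recipe instead of the default recipe for these tags.
reduce_stdred.recipename = 'reduceStandard'

reduce_stdred.runr()
```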
Oh also, I never experienced the .nfs files problem you mentioned!
Thank you for adding this to the notebooks suite!
My review:
- As in my previous review of the initial GHOST notebooks, I would add what the new file names are going to be, in case new ones are created.
- Change:
“If you want to run this notebook…”
to
“It may take more or less time if you work with other datasets, depending mostly on their number and variety of files. If you want to run this notebook…”
- Change
“To do so, make sure the contents of..”
to
“To do so, make sure the calibrations section of…”
- I would separate the “Table of contents” into a single cell, apart from where it is now, so that clicking on Goals, Summary, or Disclaimer takes you to that section, which is not the case now.
- Change the name of the section “Importing Python libraries” to “Imports and setup” as it’s the default in all DataLab notebooks.
- In the Summary section, it may be good to warn users that the cells where a DRAGONS command is executed will display their output inside the cell, not after it (as is the case for a regular Python task), and that they need to scroll inside the cell to the end to see the output; otherwise they may miss errors (as happened to me many times!)
I think this is something that should have been done for all DRAGONS notebooks, but it is not urgent for those that are live now.
- In the same section, it's probably better not to make the commissioning program ID GS-CAL20230416 a link, since it's considered a calibration program and those usually don't have a description, PI, etc. The link, as Alice mentioned, takes you to a blank page.
- In the “About the dataset” section table, you could summarize
“In this case, the calibrations for the science can be used for the standard star.”
as
“Use science calibrations”
since the comment is in the Standard calibrations row.
- Right after “Add the Bad Pixel Masks to the calibration database”, there is a large “GP13” title. Is that necessary or is it a leftover?
- import recipe_system
should be included by default in the “Imports and setup” section, as in all DataLab notebooks, unless there is something I'm missing? Some sub-libraries from it are imported there. Maybe replacing them with the full “import recipe_system” will work? (See the imports sketch after this list.)
- After the “Select and Reduce the blue science frames.” section, there are several text cells that are in header3 size. I think they should be in regular size since they are not section or sub-section titles.
- The text
“DRAGONS' reduce() function creates a lot of intermediate files etc…” shouldn't be in header font either; it should be in regular, standard size.
- It also has the typo “udpate” instead of “update”
- Also, the text in the first “Clean-up” section has a header font. It should be in a regular-size font.
- Similarly, the texts “Note: This notebook's finished products etc…” and “This notebook has only used DRAGONS' etc…” shouldn't be in such a large font.
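On the import recipe_system point: a bare “import recipe_system” on its own would not bind sub-modules like recipe_system.reduction, so the sub-library imports probably need to stay. Here is a sketch of what the “Imports and setup” cell could look like, assuming the standard DRAGONS entry points (the log file name is illustrative):

```python
from recipe_system import cal_service
from recipe_system.reduction.coreReduce import Reduce

from gempy.adlibrary import dataselect
from gempy.utils import logutils

# Send DRAGONS log messages to a file (name is illustrative).
logutils.config(file_name='ghost_data_reduction.log')

# Initialize a local calibration database for processed calibrations.
caldb = cal_service.set_local_database()
caldb.init()
```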
Hi all! I have completed my work with this notebook. It incorporates Alice and David's feedback, and this version includes the names/suffixes of the files created at each step. As noted in one of my most recent pushes, I didn't run into the NFS issue, so I think that problem has finally been resolved. If you have time to run the notebook again, please let me know if you run into any new errors.
David pointed out that reducing the slitflats produces an error message regarding the inputs having different numbers of SCI extensions. This is a known issue with DRAGONS and will not affect the reduction. I have added a note letting users know that they can ignore that error.
Hi @bmerino95 thanks for the updates, this is looking good! Just a few more things:
- Add the keywords.txt file to this PR (and add the "GHOST" keyword to the keywords.txt if you haven't already)
- It looks like this notebook will be the next one to be merged, so change
__nbid__ = '00XX'
to __nbid__ = '0070'
in the notebook, and also update the nbid.csv file and add that file to this PR
- I guess the name of the DRAGONS kernel changed again.... So in the "Goals" section, change the kernel name to "DRAGONS-3.2.2 (DL,Py3.10.14)"
FYI it took about 50 minutes to run the whole notebook
@bmerino95: to add to @jacquesalice's comment: please also update nbid.csv (in the main dir) by appending a new row with nbid 70 and the path to your new NB.
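For reference, the appended row would presumably look something like the line below; the exact column layout of nbid.csv and the notebook's path in the repo aren't shown in this thread, so the path is a placeholder:

```
0070,<path/to/the/new/GHOST/notebook.ipynb>
```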
@bmerino95 Some more comments:
Better use
@bmerino95, great work applying all my suggestions!
It's just a matter of copying & pasting the message you already typed before.
I am adding a new addition to the data reduction with DRAGONS suite. This notebook reduces GHOST IFU data of the star XX Oph. I have run this notebook many times on GP13 using the DRAGONS-3.2.1 (Py3.10) kernel, and things seem to be working, but I would appreciate any feedback so I can get this notebook ready for production.
Note: The notebook can take up to an hour to run.
Note 2: The only intermittent issue I have been having with this notebook involves the creation of hidden .nfs files. After you run the notebook, can you let me know if any files that look like '.nfs0000000001663137000001ee' exist in /raw? I wrote a clean-up function that will hopefully prevent this issue, but if they still appear, I would need to rework clean_up().
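For anyone curious what such a clean-up step could look like, here is a minimal sketch of a clean_up()-style helper, assuming the stale handles appear as hidden .nfs* files under ./raw; the notebook's actual implementation may differ:

```python
import glob
import os

def clean_up(directory='raw'):
    """Remove lingering hidden .nfs* files left behind by NFS after
    deleted-but-still-open files are released."""
    for path in glob.glob(os.path.join(directory, '.nfs*')):
        try:
            os.remove(path)
        except OSError as err:
            # A file handle that is still open will refuse removal;
            # report it rather than aborting the clean-up.
            print(f'Could not remove {path}: {err}')

clean_up()
```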