
Feedback #56

Open
rmanaem opened this issue May 31, 2023 · 13 comments

Comments

@rmanaem
Contributor

rmanaem commented May 31, 2023

Hey folks, please try the tool here and comment your feedback here.

@Remi-Gau

@rmanaem
Contributor Author

rmanaem commented May 31, 2023

No, that's alright; thanks for mentioning it here as well

@Remi-Gau

UX detail: it took me a while to find the "Select all datasets" tickbox that "enables" the download button.
Maybe bring the tickbox closer to the download button.

Once users tick this box and all datasets get selected, they should realize that they can toggle datasets individually.

@Remi-Gau


Ran the tool with:

  • diagnosis: healthy control
  • modality: flow weighted
  • download the dataset-results.tsv

and got an improperly formatted tsv

dataset-results.zip

@surchs
Contributor

surchs commented May 31, 2023

Some observations; I'll keep adding to / editing this comment

Benign funkiness

  • both types of .tsv I download have an extra empty line on top
  • the first column in both .tsv is prepended by a couple of spaces (see the TSV-writing sketch after this list)
  • the modality buttons (e.g. "Flow", "T1") are clickable (i.e. my cursor turns into a hand) but clicking them doesn't do anything
  • relatedly: "Flow" and "T1" are OK, but not great in terms of clarity. Better would be a full name / description on-hover
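The first two bullets could probably be fixed in the download helper itself. A minimal sketch, assuming the results arrive as an array of plain row objects (the `toTsv` helper is hypothetical, not the tool's actual code):

```typescript
// Hypothetical helper: serialize result rows to TSV without a leading
// blank line or a padded first column.
function toTsv(rows: Record<string, string>[]): string {
  if (rows.length === 0) {
    return "";
  }
  const headers = Object.keys(rows[0]);
  const lines = [
    headers.join("\t"),
    ...rows.map((row) => headers.map((h) => (row[h] ?? "").trim()).join("\t")),
  ];
  // join() adds no leading newline, so the file starts with the header row.
  return lines.join("\n") + "\n";
}
```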

Slightly annoying funkiness

  • in the participants.tsv file, session_id seems to be the only column where "missing values" show up as the string "undefined"; everywhere else it's just empty
  • it seems like we still show the full IRI for the controlled terms to the users in the output .tsv. I believe we decided to keep it that way for now and then replace it with human-readable labels in the future; just pointing it out here.
  • we aren't caching identical queries. This is an enhancement we should do at some point. One of the benefits of a GET request is that it can be cached: https://web.dev/http-cache/. If we want the browser to do that, we might have to make the query appear in the URL of the query tool (see the sketch after this list). Probably worth further investigation
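On the caching point, a minimal sketch; it assumes a GET query endpoint, and the parameter names are purely illustrative. With the filters in the URL, the browser's standard HTTP cache can apply:

```typescript
// Illustrative parameter names; the point is that a GET request whose
// filters live in the URL can be cached by the browser's HTTP cache.
async function runQuery(apiUrl: string, diagnosis: string, modality: string) {
  const params = new URLSearchParams({ diagnosis, image_modal: modality });
  const response = await fetch(`${apiUrl}/query/?${params}`, {
    method: "GET",
    cache: "default", // let the HTTP cache decide, honoring Cache-Control headers
  });
  if (!response.ok) {
    throw new Error(`Query failed: ${response.status}`);
  }
  return response.json();
}
```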

No good funkiness (needs bug report)

  • I believe this is the same issue @Remi-Gau pointed out: in an empty query, my dataset.tsv results file has a weird standout line beginning with "spelling deficits and dyslexia". No idea how that happened; it does appear to be correctly formatted as a .tsv, but the content of each column is obviously weird. It looks like a line break inside the dataset name splits the row across two lines (see the sanitizing sketch after this list). Here's what I see:
➜ grep "spelling" -B 1 -A 1 dataset-results.tsv -n
71-https://openneuro.org/datasets/ds003126	https://github.com/OpenNeuroDatasets/ds003126.git	MRI Lab Graz: Reading-related functional activity in children with isolated
72:spelling deficits and dyslexia	58	http://purl.org/nidash/nidm#FlowWeighted, http://purl.org/nidash/nidm#T1Weighted
73-https://openneuro.org/datasets/ds003470	https://github.com/OpenNeuroDatasets/ds003470.git	MRI Lab Graz: Target processing in overt serial visual search involves the dorsal attention network: A fixation-based event-related fMRI study.	20	http://purl.org/nidash/nidm#FlowWeighted, http://purl.org/nidash/nidm#T1Weighted
  • If I run a query, get results, select some datasets (let's say the second and fourth result) and then run another query, the second and fourth result will remain selected. Not bad in itself (in fact I think that's a nice convenience feature I didn't even know we had), but obviously which datasets are the 2nd and 4th result can change between queries, and then I suddenly have different datasets selected. Leaving this here for now before a bug issue is made.
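For the broken row above, one possible fix, again only a hypothetical sketch rather than the tool's actual implementation, is to strip tabs and line breaks from field values before they are joined into a TSV row:

```typescript
// Replace characters that would break TSV structure (tabs, newlines)
// inside a field value before joining rows.
function sanitizeTsvField(value: string): string {
  return value.replace(/[\t\r\n]+/g, " ").trim();
}

// Example: a dataset name containing a line break stays on one row.
const name =
  "MRI Lab Graz: Reading-related functional activity in children with isolated\nspelling deficits and dyslexia";
console.log(sanitizeTsvField(name));
// -> "MRI Lab Graz: Reading-related functional activity in children with isolated spelling deficits and dyslexia"
```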

Random ideas (no expectation of implementation)

  • could we hash the query parameters for debug purposes? That way, if I do not run a full query, I can easily explain what kind of query I ran
  • not specific to the query tool: I think it'd be nice to have a link to the GH repo on the interface, and (as soon as all annoying bugs are squashed) some Neurobagel branding / a link to our main website
  • as soon as there is a release, the release version should also be part of the UI; or, if we keep continuously deploying, the HEAD commit SHA
  • on narrower screen sizes, the UI is starting to responsively rearrange in ways that look a little strange. We might want to start thinking about using Cypress's ability to test for different screen sizes and to do assertions on UI rendering (see the sketch after this list)
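A minimal sketch of what such a check could look like in Cypress; the spec path and the data-cy selector are assumptions, not the tool's existing test code:

```typescript
// cypress/e2e/responsive.cy.ts (hypothetical spec)
// Asserts that a key control stays visible at a few common viewport sizes.
const viewports: [number, number][] = [
  [1280, 800], // laptop
  [768, 1024], // tablet
  [375, 667],  // phone
];

describe("responsive layout", () => {
  viewports.forEach(([width, height]) => {
    it(`shows the download button at ${width}x${height}`, () => {
      cy.viewport(width, height);
      cy.visit("/");
      cy.get("[data-cy='download-results-button']").should("be.visible");
    });
  });
});
```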

@Remi-Gau

The tool allows querying for 0 sessions, but this then throws an error. I suspect zero is a value we do not want to allow at all.

On the plus side: specifying no minimum number of sessions or a minimum of 1 seems to return the same thing, which is what I would expect.
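A minimal sketch of client-side validation for the minimum-sessions field (the helper and its calling convention are assumptions), so that zero never reaches the API:

```typescript
// Hypothetical validator: an empty field means "no constraint"; anything
// below 1 (or non-integer) is rejected before the query is sent.
function validateMinSessions(raw: string): { valid: boolean; value?: number } {
  if (raw.trim() === "") {
    return { valid: true }; // no constraint, equivalent to a minimum of 1
  }
  const value = Number(raw);
  if (!Number.isInteger(value) || value < 1) {
    return { valid: false };
  }
  return { valid: true, value };
}
```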

@Remi-Gau

The imaging modality field only allows selecting a single modality; I suspect people are quickly going to want bold AND t1w (for example).
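A sketch of what a multi-modality filter could look like on the client side; the field names are assumptions, and the real fix may belong in the API query instead:

```typescript
// Hypothetical result shape and helper: keep only datasets that provide
// every requested modality, so ["bold", "t1w"] returns datasets with both.
interface DatasetResult {
  dataset_uuid: string;
  image_modals: string[];
}

function filterByModalities(
  results: DatasetResult[],
  requested: string[],
): DatasetResult[] {
  return results.filter((d) => requested.every((m) => d.image_modals.includes(m)));
}
```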

@rmanaem
Contributor Author

rmanaem commented Jun 1, 2023

Addressing comment 1, which is also mentioned in this comment, and addressing comment 2 in #70.

@michellewang

Hi! Here are my notes/feedback from trying the query tool:

  1. I didn’t realize until someone pointed it out yesterday that we could toggle between dataset-level and participant-level results. Personally, I think it would be better to have 2 download buttons instead of the “Toggle level of detail to download” checkbox
  2. I really like that the participant-level results include the paths to the data. This is really useful for downloading/getting the data to build the cohort. I think pointing to the session directories makes sense, but the user might get more modalities than they specified (e.g., get T2w and diffusion data if they only asked for T1w in the query). So I wonder if it would be better to output paths to individual files instead
    1. It would also be nice if there were some information about how to get the data (e.g., where it is stored, whether you need DataLad, etc.)
  3. I agree with Rémi that it would be nice to have ways to query for more than one modality or assessment tool
  4. Minor things
    1. Why are some entries in the Assessment tools dropdown all lowercase?
    2. I don't think it's very clear right now that the user should either check the “Healthy Control” box OR select a diagnosis. I get that if the checkbox is selected, then the diagnosis dropdown can’t be interacted with, but the logic could be made clearer/more explicit in my opinion (see the sketch after this list)
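One way to make that either/or relationship explicit is to model it as a single piece of state rather than a checkbox that silently disables the dropdown; the state and helper names below are purely illustrative, not the tool's actual code:

```typescript
// Hypothetical state shape: "healthy control" and "specific diagnosis"
// are mutually exclusive options of one filter, not two separate widgets.
type DiagnosisFilter =
  | { kind: "any" }
  | { kind: "healthy-control" }
  | { kind: "diagnosis"; term: string };

function describeFilter(filter: DiagnosisFilter): string {
  switch (filter.kind) {
    case "any":
      return "No diagnosis filter";
    case "healthy-control":
      return "Healthy controls only";
    case "diagnosis":
      return `Diagnosis: ${filter.term}`;
  }
}
```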

@Remi-Gau

Remi-Gau commented Jul 3, 2023

2. I don't think it's very clear right now that the user should either check the “Healthy Control” box OR select a diagnosis.

Related to this (though I know this is already on the roadmap): it is very likely people will want to query healthy controls AND at least one diagnosis.

If users want to create a cohort by aggregating across datasets, they may want to have "matched" controls for their patients.

@rmanaem
Contributor Author

rmanaem commented Jul 3, 2023

@michellewang @Remi-Gau @surchs Thanks for the feedback folks!

@github-actions

We want to keep our issues up to date and active. This issue hasn't seen any activity in the last 30 days.
We have applied the stale-issue label to indicate that this issue should be reviewed again and then either prioritized or closed.

surchs transferred this issue from neurobagel/old-query-tool on Mar 7, 2024
surchs added this issue to the Neurobagel project on Mar 7, 2024
@github-actions

We want to keep our issues up to date and active. This issue hasn't seen any activity in the last 75 days.
We have applied the _flag:stale label to indicate that this issue should be reviewed again.
When you review, please reread the spec and then apply one of these three options:

  • prioritize: apply the flag:schedule label to suggest moving this issue into the backlog now
  • close: if the issue is no longer relevant, explain why (give others a chance to reply) and then close.
  • archive: sometimes an issue has important information or ideas but we won't work on it soon. In this case
    apply the someday label to show that this won't be prioritized. The stalebot will ignore issues with this
    label in the future. Use sparingly!
