diff --git a/docs/source/FileRemodelingTools.md b/docs/source/FileRemodelingTools.md
index 7cf59aa..e3f931f 100644
--- a/docs/source/FileRemodelingTools.md
+++ b/docs/source/FileRemodelingTools.md
@@ -877,7 +877,7 @@ The resulting columns are called *stopped* and *stop_failed*, respectively.
 The results of executing this *factor_column* operation on the
 [**sample remodel event file**](sample-remodel-event-file-anchor) are:
 
-````{admonition} Results of the factor_column operation on the sample data.
+````{admonition} Results of the factor_column operation on the sample data.
 
 | onset | duration | trial_type | stop_signal_delay | response_time | response_accuracy | response_hand | sex | stopped | stop_failed |
 | ----- | -------- | ---------- | ----------------- | ------------- | ----------------- | ------------- | --- | ---------- | ---------- |
diff --git a/docs/source/HedSearchGuide.md b/docs/source/HedSearchGuide.md
index d040a3a..4be0dd7 100644
--- a/docs/source/HedSearchGuide.md
+++ b/docs/source/HedSearchGuide.md
@@ -14,27 +14,25 @@ At a more global level, analysts may want to locate datasets
 whose event markers have certain properties in choosing data for initial
 analysis or for comparisons with their own data.
 
-## HED search basics
-
 Datasets whose event markers are annotated with
 HED (Hierarchical Event Descriptors) can be searched
 in a dataset independent manner.
-The HED search facility has been implemented in the
-Python [**HEDTools**](https://pypi.org/project/hedtools/) library,
-an open source Python library.
-The latest versions are available in the
-[**hed-python**](https://github.com/hed-standard/hed-python) GitHub repository.
-
-To perform a query using HEDTools,
-users create a query object containing the parsed query.
+The Python [**HEDTools**](https://pypi.org/project/hedtools/) library supports two types of search: object-based and text-based.
+The object-based search can distinguish complex tag relationships as part of the search.
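To see why the distinction matters, consider the group query *{Red, Blue}* described later in this guide: a plain substring search finds both tags in any annotation that mentions them, while a structure-aware (object-style) search can also check that the two tags actually sit in the same parenthesized group. The following minimal sketch illustrates the idea in plain Python; `parse_groups` and `in_same_group` are hypothetical helpers written for this example, not part of the HEDTools API:

```python
def parse_groups(s):
    """Parse a parenthesized, comma-separated HED-style string into nested lists."""
    pos = 0

    def parse():
        nonlocal pos
        items, tag = [], ""
        while pos < len(s):
            ch = s[pos]
            pos += 1
            if ch == "(":
                items.append(parse())       # descend into a subgroup
            elif ch == ")":
                break                       # end of this subgroup
            elif ch == ",":
                if tag.strip():
                    items.append(tag.strip())
                tag = ""
            else:
                tag += ch
        if tag.strip():
            items.append(tag.strip())       # flush the final tag
        return items

    return parse()


def in_same_group(group, a, b):
    """True if tags a and b appear side by side in some (sub)group."""
    tags = [item for item in group if isinstance(item, str)]
    if a in tags and b in tags:
        return True
    return any(in_same_group(sub, a, b)
               for sub in group if isinstance(sub, list))


# Both strings contain Red and Blue, so a naive text search matches both,
# but only the second has them in the same group (like the {Red, Blue} query).
print(in_same_group(parse_groups("Red, (Blue, Green)"), "Red", "Blue"))   # False
print(in_same_group(parse_groups("(Red, Blue), Green"), "Red", "Blue"))   # True
```

HEDTools' own object-based queries are created with the `TagExpressionParser` class described in this guide; the sketch above only motivates why the parsed, grouped representation is more discriminating than a raw string scan.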
+The text-based search operates on strings rather than HED objects and is considerably faster, but less powerful.
+Text-based searches need the long form of the HED strings to detect children based on a parent tag.
+
+## HED object-based search
+
+To perform an object-based HED search, users create a query object containing the parsed query.
 Once created, this query object can then be applied to any number of HED annotations
 -- say to the annotations for each event-marker associated with a data recording.
 
 The query object returns a list of matches within the annotation.
 Usually, users just test whether this list is empty to determine if the query was satisfied.
 
-### Calling syntax
+### Object-based search syntax
 
-To perform a search, create a `TagExpressionParser` object, which parses the query.
+Create a `TagExpressionParser` object to parse the query.
 Once created, this query object can be applied to search multiple HED annotations.
 The syntax is demonstrated in the following example:
@@ -124,7 +122,7 @@ The tag short forms are used for the matching to assure consistency.
 
 #### Tag-prefix with wildcard
 
-Matching using a tag prefix with the `*` wildcard, matches the starting portion of the tag.
+Matching using a tag prefix with the `*` wildcard matches the starting portion of the tag.
 Thus, Age/3* matches *Age/3* as well as *Age/34*.
 Notice that the query Age* matches a myriad of tags including *Agent*, *Agent-state*,
@@ -132,19 +130,19 @@ and *Agent-property*.
 
 ### Logical queries
 
-In the following `A` and `B` represent HED expressions that may contain multiple
+In the following *A* and *B* represent HED expressions that may contain multiple
 comma-separated tags and parenthesized groups.
-`A` and `B` may also contain group queries as described in the next section.
-The expressions for `A` and `B` are each evaluated and then combined using standard logic.
+*A* and *B* may also contain group queries as described in the next section.
+The expressions for *A* and *B* are each evaluated and then combined using standard logic. -| Query form | Example query | Matches | Does not match | -|-------------------------------------------------------------------------------------------------|-----------------------------|----------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------| -| **`A`, `B`**
Match if both `A` and `B`
are matched. | *Event*, *Sensory-event* | *Event*, *Sensory-event*
*Sensory-event*, *Event*
(*Event*, *Sensory-event*) | *Event* | -| **`A` && `B`**
Match if both `A` and `B`
are matched.
Same as the comma notation. | *Event* and *Sensory-event* | *Event*, *Sensory-event*
*Sensory-event*, *Event*
(*Event*, *Sensory-event*) | | *Event* | -| **`A` \|\| `B`**
Match if either `A` or `B`. | *Event* or *Sensory-event* | *Event*, *Sensory-event*
*Sensory-event*, *Event*
(*Event*, *Sensory-event*)
*Event*
*Sensory-event* | *Agent-trait* | -| **~`A`**
Match groups that do
not contain `A`
`A` can be an arbitrary expression. | { *Event*, ~*Action* } | (*Event*)
(*Event*, *Animal-agent*)
(*Sensory-event*, (*Action*)) | *Event*
*Event*, *Action*
(*Event*, *Action*)
| -| **@`A`**
Match a line that
does not contain `A`. | @*Event* | *Action*
*Agent-trait*
*Action, Agent-Trait*
(*Action*, *Agent*) | *Event*
(*Action*, *Event*)
(*Action*, *Sensory-event*)
(*Agent*, (*Sensory-event*, *Blue*)) |
+| Query form | Example query | Matches | Does not match |
+|-----------------------------------------------------------------------------------------------|-----------------------------|----------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|
+| *A, B*<br/>
Match if both *A* and *B*
are matched. | *Event*, *Sensory-event* | *Event*, *Sensory-event*
*Sensory-event*, *Event*
(*Event*, *Sensory-event*) | *Event* | +| *A && B*
Match if both *A* and *B*
are matched. Same
as comma notation. | *Event* and *Sensory-event* | *Event*, *Sensory-event*
*Sensory-event*, *Event*
(*Event*, *Sensory-event*) | *Event* |
+| *A \|\| B*<br/>
Match if either *A* or *B*. | *Event* or *Sensory-event* | *Event*, *Sensory-event*
*Sensory-event*, *Event*
(*Event*, *Sensory-event*)
*Event*
*Sensory-event* | *Agent-trait* | +| *~A*
Match groups that do
not contain *A*
*A* can be an arbitrary
expression. | { *Event*, ~*Action* } | (*Event*)
(*Event*, *Animal-agent*)
(*Sensory-event*, (*Action*)) | *Event*
*Event*, *Action*
(*Event*, *Action*)
| +| *@A*
Match a line that
does not contain *A*. | @*Event* | *Action*
*Agent-trait*
*Action, Agent-Trait*
(*Action*, *Agent*) | *Event*
(*Action*, *Event*)
(*Action*, *Sensory-event*)
(*Agent*, (*Sensory-event*, *Blue*)) | ### Group queries @@ -165,38 +163,71 @@ Indicates a red square and a blue triangle. Group queries allow analysts to detect these groupings. As with logical queries, -`A` and `B` represent HED expressions that may contain multiple +*A* and *B* represent HED expressions that may contain multiple comma-separated tags and parenthesized groups. -| Query form | Example query | Matches | Does not match | -|--------------------------------------------------------------------------------------------------------------------------|---------------|-------------------------------------------------------------|---------------------------------------------------------------| -| **{`A`, `B`}**
Match a group that
contains both `A` and `B`
at the same level
in the same group. | *{Red, Blue}* | *(Red, Blue)*
*(Red, Blue, Green)* | *(Red, (Blue, Green))* | -| **[`A`, `B`]**
Match a group that
contains `A` and `B`.
Both `A` and `B` could
be any subgroup level. | *[Red, Blue]* | *(Red, (Blue, Green))*
*((Red, Yellow), (Blue, Green))* | *Red, (Blue, Green)* | -| **{`A`, `B:`}**
Match a group that
contains both `A` and `B`
at the same level
and no other contents. | *{Red, Blue:}* | *(Red, Blue)* | *(Red, Blue, Green)*
*(Red, Blue, (Green))* | -| **{`A`, `B: C`}**
Match a group that
contains both `A` and `B`
at the same level
and optionally `C`. | *{Red, Blue: Green}* | *(Red, Blue)*
*(Red, Blue, Green)* | *(Red, (Blue, Green))*
*(Red, Blue, (Green))*| +| Query form | Example query | Matches | Does not match | +|--------------------------------------------------------------------------------------------------------------------|---------------|-------------------------------------------------------------|---------------------------------------------------------------| +| *{A, B}*
Match a group that
contains both *A* and *B*
at the same level
in the same group. | *{Red, Blue}* | *(Red, Blue)*
*(Red, Blue, Green)* | *(Red, (Blue, Green))* | +| *[A, B]*
Match a group that
contains *A* and *B*.
Both *A* and *B* could
be any subgroup level. | *[Red, Blue]* | *(Red, (Blue, Green))*
*((Red, Yellow), (Blue, Green))* | *Red, (Blue, Green)* | +| *{A, B:}*
Match a group that
contains both *A* and *B*
at the same level
and no other contents. | *{Red, Blue:}* | *(Red, Blue)* | *(Red, Blue, Green)*
*(Red, Blue, (Green))* | +| *{A, B: C}*
Match a group that
contains both *A* and *B*
at the same level
and optionally *C*. | *{Red, Blue: Green}* | *(Red, Blue)*
*(Red, Blue, Green)* | *(Red, (Blue, Green))*
(*Red, Blue, (Green))* |
 
 These operations can be arbitrarily nested and combined, as for example in the query:
 
-> *[`A` || {`B` && `C`} ]*
+> *[A || {B && C} ]*
 
-In this query
 Ordering on either the search terms or strings to be searched doesn't matter;
 precedence is generally left to right outside of grouping operations.
 Wildcard matching is supported, but primarily makes sense in exact matching groups.
 You can replace any term with a wildcard:
 
-| Query form                           | Example query   | Matches               | Does not match                |
-|--------------------------------------|-----------------|-----------------------|-------------------------------|
-| **`?`**<br/>
Matches any tag or group | {`A` && `?`} | *(A, B}
(A, (B))* | *(A)
(B, C)* | -| **`??`**
Matches any tag | {`A` && `??`} | *(A, B}* | *(A)
(B, C)
(A, (B))* | -| **`???`**
Matches any group | {`A` && `???`} | *(A, (B))* | *(A)
(B, C)
(A, B)* | +| Query form | Example query | Matches | Does not match | +|------------------------------|--------------|-----------------------|-------------------------------| +| *?*
Matches any tag or group | *{A && ?}* | *(A, B)<br/>
(A, (B))* | *(A)
(B, C)* | +| *??*
Matches any tag | *{A && ??}* | *(A, B)* | *(A)<br/>
(B, C)
(A, (B))* | +| *???*
Matches any group | *{A && ???}* | *(A, (B))* | *(A)
(B, C)
(A, B)* |
+
+
+**Notes**: You cannot use negation inside the exact-match group notations *{X:}* or *{X:Y}*.<br/>
+You cannot use negation in combination with wildcards (*?*, *??*, or *???*).<br/>
+In exact group matching, *||* matches one or the other, not both:
+*{A || B:}* matches *(A)* or *(B)*, but not *(A, B)*.
+
+## HED text-based search
+
+In addition to the object-based search, HEDTools also supports a text-based string search,
+which is notably faster and still supports the key features of parent-tag matching and grouping.
+
+### Text-based search syntax
+
+HED text-based search syntax is summarized in the following table:
+
+| Query type | Example | Matches | Does not match |
+|-------------------------------------------------------------------------------------------|---------------|-------------------------------------------------|---------------------------------------|
+| **Anywhere-term**<br/>
Prefix the term with *@*
to match anywhere in the line. | *@A* | *A* in string | No *A* in string |
+| **Negation-term**<br/>
Prefix the term with *~*
to match lines without the term. | *~A* | No *A* in string | *A* in string |
+| **Nested-term**<br/>
Elements in parentheses
match tags at the same level. | *"(A), (B)"* | *(A), (B, C)<br/>
((A), (B, C))* | *(A, B)
(A, C, B)
(A, (C, B))* | +| **Wildcard-term**
Use `*` to match the rest of the word<br/>
(except comma or parenthesis). | *Long** | *Long*
Parent/LongX
Parent/LongY | *Parent/Other* |
+
+The simplest type of query is to search for the presence or absence of a single tag.
+Searches can be combined, but a string matches only if all of the search terms are matched.
+The HED text-based searches do not use the HED schema, so they can handle invalid HED annotations.
+
+```{warning}
+- Specific terms are matched by their level relative to other specific terms, not by their absolute nesting level.
+- If there are no grouping or anywhere terms in the search, all terms are treated as anywhere terms.
+- The format of the series should match the format of the search string, whether short or long form.
+- To retrieve children of a parent tag, ensure that both the series and the search string are in long form.
+```
+### Example text-based queries
+The following table shows some example queries and the types of matches they produce:
-**Notes**: You cannot use negation inside exact matching groups `{X:}` or `{X:Y}` notation.<br/>
-You cannot use negation in combination with wildcards ( `?`, `??`, or `???` )
-In exact group matching, `||` matches one or the other, not both: -`{A || B:}` matches `(A)` or `(B)`, but not `(A, B)` +| Query type | Example query | Matches | Does not match | +|---------------------------------------------|---------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------| +| **Tag sharing group
with another tag** | *(Face, Item-interval/1)*
*Face* in group with
exact tag *Item-interval/1* | *(Face, Item-interval/1)*
*Face, Item-interval/1* | *(Face, Item-interval/12)*
*Face, Item-interval/1A*
*(Face, Item-interval)*
*(Item-interval/1)*| +| **Tag sharing group
with wildcard tag** | (Face, Item-interval/1*)
*Face* in group with
tag starting with
*Item-interval/1* | *(Face, Item-interval/1)*
*Face, Item-interval/12*
*(Face, Item-interval/1A)* | *(Face, Item-interval)*
*(Item-interval/1)* | +| **Tag sharing group
with subgroup** | *(Face, (Item-interval/1))*
*Face* in group with
subgroup containing
*Item-interval/1* | *(Face, (Item-interval/1))*
*Face, (Item-interval/1)*
*(Item-interval/1), Event, Face* | *(Face, Item-interval/1)*
*(Item-interval/1)* | ## Where can HED search be used? @@ -218,30 +249,3 @@ Work is underway to integrate HED-based search into other tools including as well into the analysis platforms [**NEMAR**](https://nemar.org/) and [**EEGNET**](http://eegnet.org/) -### Basic search [In development - no web API yet] - -A simpler text search also exists, this is notably faster and still has the key features of searching parent tags and grouping. - -The simplest type of query is to search for the presence or absence of a single tag. These can be combined in a few ways, but all searches are trying to match all terms. - -| Query type | Example query | Matches | Does not match | -|-------------------------------------------------------------------------------------------------------------------|----------------|----------------------------------------------------------------------|-----------------------------------------------| -| **Anywhere-term**
Prefix the term with `@` to match anywhere within a line. | *@A* | Lines with the term A anywhere in them | Lines without the term A anywhere within them | -| **Negation-term**
Prefix the term with `~` to match lines that do NOT contain the term. | *~A* | Lines that do not contain the term A | Lines containing the term A | -| **Nested-term**
Elements within parentheses must appear at the same level of nesting. | *"(A), (B)"* | (A), (B, C)
((A), (B, C)) | (A, B)
(A, C, B)
(A, (C, B)) | -| **Wildcard-term**
Use `*` to match any remaining word (anything but a comma or parenthesis). | *LongFormTag** | LongFormTag
Parent/LongFormTagExample
Parent/LongFormTagSample | Parent/OtherWordLongFormTag | - -### Notes - -- Specific words only care about their level relative to other specific words, not overall. -- If there are no grouping or anywhere words in the search, it assumes all terms are anywhere words. -- The format of the series should match the format of the search string, whether it's in short or long form. -- To enable support for matching parent tags, ensure that both the series and search string are in long form. - -#### Example search queries - -| Query type | Example query | Matches | Does not match | -|---------------------------------------------|-----------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| -| **Tag sharing a group with another tag** | `(Face, Item-interval/1)`
Face in a group with the exact tag Item-interval/1 | (Face, Item-interval/1)
Face, Item-interval/1 | (Face, Item-interval/12)
Face, Item-interval/1A
(Face, Item-interval)
(Item-interval/1) | -| **Tag sharing a group with a wildcard tag** | `(Face, Item-interval/1*)`
Face in a group with a tag starting with Item-interval/1 | (Face, Item-interval/1)
Face, Item-interval/12
(Face, Item-interval/1A) | (Face, Item-interval)
(Item-interval/1) | -| **Tag sharing a group with a subgroup** | `(Face, (Item-interval/1))`
Face in a group, with subgroup containing Item-interval/1 | (Face, (Item-interval/1))
Face, (Item-interval/1)
(Item-interval/1), Event, Face | (Face, Item-interval/1)
(Item-interval/1) |
diff --git a/docs/source/WhatsNew.md b/docs/source/WhatsNew.md
index cb2d5ab..c55fbac 100644
--- a/docs/source/WhatsNew.md
+++ b/docs/source/WhatsNew.md
@@ -1,6 +1,11 @@
 (whats-new-anchor)=
 # What's new?
 
+**June 10, 2024**: **HEDTools 0.5.0 released on PyPI.**
+> Remodeling tool validation uses JSON schema.
+> Supports `.tsv` format and HED ontology generation for HED schemas.
+> Additional visualizations and summaries.
+
 **June 10, 2024**: **HED standard schema v8.3.0 released.**
 > The [**HED schema v8.3.0**](https://doi.org/10.5281/zenodo.7876037) has just been released.
 This release introduces `hedId` globally unique identifiers for every HED element
 and enables mapping into a HED ontology.
diff --git a/src/README.md b/src/README.md
index 18196a1..00f2acb 100644
--- a/src/README.md
+++ b/src/README.md
@@ -46,4 +46,4 @@ To install directly from the
     pip install git+https://github.com/hed-standard/hed-python/@master
 ```
 
-HEDTools require python 3.7 or greater.
+HEDTools require python 3.8 or greater.
diff --git a/src/jupyter_notebooks/README.md b/src/jupyter_notebooks/README.md
index 144461b..97acbf6 100644
--- a/src/jupyter_notebooks/README.md
+++ b/src/jupyter_notebooks/README.md
@@ -14,4 +14,4 @@ To install directly from the
     pip install git+https://github.com/hed-standard/hed-python/@master
 ```
 
-HEDTools require python 3.7 or greater.
+HEDTools require python 3.8 or greater.
diff --git a/src/jupyter_notebooks/bids/README.md b/src/jupyter_notebooks/bids/README.md
index a173349..b99558a 100644
--- a/src/jupyter_notebooks/bids/README.md
+++ b/src/jupyter_notebooks/bids/README.md
@@ -18,23 +18,18 @@ validating, summarizing, and analyzing your BIDS datasets.
 
 These notebooks require HEDTools, which can be installed using `pip` or directly.
 
-**NOTE: These notebooks have been updated to use the HEDTOOLS version on the develop branch of the HedTools. <br/>
-These tools must be installed directly from GitHub until the newest version of HEDTools is released.** - -To install directly from the -[GitHub](https://github.com/hed-standard/hed-python) repository: +To use `pip` to install `hedtools` from PyPI: ``` - pip install git+https://github.com/hed-standard/hed-python/@master + pip install hedtools ``` - -To use `pip` to install `hedtools` from PyPI: +To install directly from the +[GitHub](https://github.com/hed-standard/hed-python) repository: ``` - pip install hedtools + pip install git+https://github.com/hed-standard/hed-python/@master ``` - -HEDTools require python 3.7 or greater. +HEDTools require python 3.8 or greater. diff --git a/src/jupyter_notebooks/bids/extract_json_template.ipynb b/src/jupyter_notebooks/bids/extract_json_template.ipynb index 8f595e6..f8b936c 100644 --- a/src/jupyter_notebooks/bids/extract_json_template.ipynb +++ b/src/jupyter_notebooks/bids/extract_json_template.ipynb @@ -34,7 +34,37 @@ }, { "cell_type": "code", - "execution_count": 2, + "source": [ + "import json\n", + "from hed.tools.analysis.tabular_summary import TabularSummary\n", + "from hed.tools.util.io_util import get_file_list\n", + "\n", + "dataset_root = '../../../datasets/eeg_ds003645s_hed'\n", + "exclude_dirs = ['stimuli', 'code', 'derivatives', 'sourcedata', 'phenotype']\n", + "skip_columns = [\"onset\", \"duration\", \"sample\"]\n", + "value_columns = [\"stim_file\", \"response_time\"]\n", + "output_path = None\n", + "\n", + "# Construct the event file dictionary for the BIDS event files\n", + "event_files = get_file_list(dataset_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n", + "\n", + "# Construct the event file value summary and generate a sidecar template representing dataset\n", + "value_summary = TabularSummary(value_cols=value_columns, skip_cols=skip_columns, name=\"Wakeman-Hanson test data\")\n", + "value_summary.update(event_files)\n", + "sidecar_template = 
value_summary.extract_sidecar_template()\n", + "if output_path:\n", + " with open(output_path, \"w\") as f:\n", + " json.dump(sidecar_template, f, indent=4)\n", + "else:\n", + " print(json.dumps(sidecar_template, indent=4))" + ], + "metadata": { + "collapsed": false, + "ExecuteTime": { + "end_time": "2024-06-15T15:54:13.163193Z", + "start_time": "2024-06-15T15:53:40.611422Z" + } + }, "outputs": [ { "name": "stdout", @@ -297,37 +327,7 @@ ] } ], - "source": [ - "import json\n", - "from hed.tools.analysis.tabular_summary import TabularSummary\n", - "from hed.tools.util.io_util import get_file_list\n", - "\n", - "dataset_root = '../../../datasets/eeg_ds003645s_hed'\n", - "exclude_dirs = ['stimuli', 'code', 'derivatives', 'sourcedata', 'phenotype']\n", - "skip_columns = [\"onset\", \"duration\", \"sample\"]\n", - "value_columns = [\"stim_file\", \"response_time\"]\n", - "output_path = None\n", - "\n", - "# Construct the event file dictionary for the BIDS event files\n", - "event_files = get_file_list(dataset_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n", - "\n", - "# Construct the event file value summary and generate a sidecar template representing dataset\n", - "value_summary = TabularSummary(value_cols=value_columns, skip_cols=skip_columns, name=\"Wakeman-Hanson test data\")\n", - "value_summary.update(event_files)\n", - "sidecar_template = value_summary.extract_sidecar_template()\n", - "if output_path:\n", - " with open(output_path, \"w\") as f:\n", - " json.dump(sidecar_template, f, indent=4)\n", - "else:\n", - " print(json.dumps(sidecar_template, indent=4))" - ], - "metadata": { - "collapsed": false, - "ExecuteTime": { - "end_time": "2024-01-09T22:02:52.047144900Z", - "start_time": "2024-01-09T22:02:51.951144900Z" - } - } + "execution_count": 1 } ], "metadata": { diff --git a/src/jupyter_notebooks/bids/find_event_combinations.ipynb b/src/jupyter_notebooks/bids/find_event_combinations.ipynb index 6093d01..3943cbc 100644 --- 
a/src/jupyter_notebooks/bids/find_event_combinations.ipynb +++ b/src/jupyter_notebooks/bids/find_event_combinations.ipynb @@ -26,49 +26,6 @@ }, { "cell_type": "code", - "execution_count": 3, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "sub-002_task-FaceRecognition_events.tsv\n", - "sub-003_task-FaceRecognition_events.tsv\n", - "sub-004_task-FaceRecognition_events.tsv\n", - "sub-005_task-FaceRecognition_events.tsv\n", - "sub-006_task-FaceRecognition_events.tsv\n", - "sub-007_task-FaceRecognition_events.tsv\n", - "sub-008_task-FaceRecognition_events.tsv\n", - "sub-009_task-FaceRecognition_events.tsv\n", - "sub-010_task-FaceRecognition_events.tsv\n", - "sub-011_task-FaceRecognition_events.tsv\n", - "sub-012_task-FaceRecognition_events.tsv\n", - "sub-013_task-FaceRecognition_events.tsv\n", - "sub-014_task-FaceRecognition_events.tsv\n", - "sub-015_task-FaceRecognition_events.tsv\n", - "sub-016_task-FaceRecognition_events.tsv\n", - "sub-017_task-FaceRecognition_events.tsv\n", - "sub-018_task-FaceRecognition_events.tsv\n", - "sub-019_task-FaceRecognition_events.tsv\n", - "The total count of the keys is:31448\n", - " key_counts trial_type value\n", - "0 90 boundary 0\n", - "1 2700 famous_new 5\n", - "2 1313 famous_second_early 6\n", - "3 1291 famous_second_late 7\n", - "4 3532 left_nonsym 256\n", - "5 3381 left_sym 256\n", - "6 3616 right_nonsym 4096\n", - "7 4900 right_sym 4096\n", - "8 2700 scrambled_new 17\n", - "9 1271 scrambled_second_early 18\n", - "10 1334 scrambled_second_late 19\n", - "11 2700 unfamiliar_new 13\n", - "12 1304 unfamiliar_second_early 14\n", - "13 1316 unfamiliar_second_late 15\n" - ] - } - ], "source": [ "import os\n", "from hed.tools.analysis.key_map import KeyMap\n", @@ -76,16 +33,17 @@ "from hed.tools.util.io_util import get_file_list\n", "\n", "# Variables to set for the specific dataset\n", - "data_root = 'T:/summaryTests/ds002718-download'\n", + "dataset_root = 
'../../../datasets/eeg_ds002893s_hed_attention_shift'\n",
+    "exclude_dirs = ['stimuli', 'code', 'derivatives', 'sourcedata', 'phenotype']\n",
     "output_path = ''\n",
-    "exclude_dirs = ['stimuli', 'derivatives', 'code', 'sourcedata']\n",
     "\n",
     "# Construct the key map\n",
-    "key_columns = [ \"trial_type\", \"value\"]\n",
+    "key_columns = [\"focus_modality\", \"event_type\", \"attention_status\"]\n",
     "key_map = KeyMap(key_columns)\n",
     "\n",
     "# Construct the unique combinations\n",
-    "event_files = get_file_list(data_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n",
+    "event_files = get_file_list(dataset_root, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n",
     "for event_file in event_files:\n",
     "    print(f\"{os.path.basename(event_file)}\")\n",
     "    df = get_new_dataframe(event_file)\n",
@@ -103,10 +61,42 @@
    "metadata": {
     "collapsed": false,
     "ExecuteTime": {
-     "end_time": "2023-10-24T20:08:40.958637400Z",
-     "start_time": "2023-10-24T20:08:24.603887900Z"
+     "end_time": "2024-06-15T16:02:17.144301Z",
+     "start_time": "2024-06-15T16:02:14.364188Z"
     }
-   }
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "sub-001_task-AuditoryVisualShift_run-01_events.tsv\n",
+      "sub-002_task-AuditoryVisualShift_run-01_events.tsv\n",
+      "The total count of the keys is:11730\n",
+      "    key_counts focus_modality       event_type attention_status\n",
+      "0         2298       auditory         low_tone         attended\n",
+      "1         2292         visual         dark_bar         attended\n",
+      "2         1540       auditory         dark_bar       unattended\n",
+      "3         1538         visual         low_tone       unattended\n",
+      "4          585       auditory     button_press              nan\n",
+      "5          577       auditory        high_tone         attended\n",
+      "6          576         visual        light_bar         attended\n",
+      "7          572         visual     button_press              nan\n",
+      "8          384       auditory        light_bar       unattended\n",
+      "9          383         visual        high_tone       unattended\n",
+      "10         288       auditory        hear_word         attended\n",
+      "11         287         visual        look_word         attended\n",
+      "12          96         visual 
look_word unattended\n", + "13 96 auditory hear_word unattended\n", + "14 96 auditory look_word unattended\n", + "15 96 visual hear_word unattended\n", + "16 14 visual pause_recording nan\n", + "17 11 auditory pause_recording nan\n", + "18 1 nan pause_recording nan\n" + ] + } + ], + "execution_count": 3 } ], "metadata": { diff --git a/src/jupyter_notebooks/bids/merge_spreadsheet_into_sidecar.ipynb b/src/jupyter_notebooks/bids/merge_spreadsheet_into_sidecar.ipynb index db2fa00..887e0f7 100644 --- a/src/jupyter_notebooks/bids/merge_spreadsheet_into_sidecar.ipynb +++ b/src/jupyter_notebooks/bids/merge_spreadsheet_into_sidecar.ipynb @@ -30,7 +30,37 @@ }, { "cell_type": "code", - "execution_count": 1, + "source": [ + "import os\n", + "import json\n", + "from hed.models import SpreadsheetInput\n", + "from hed.tools import df_to_hed, merge_hed_dict\n", + "\n", + "# Spreadsheet input\n", + "spreadsheet_path = os.path.realpath('../../../docs/source/_static/data/task-WorkingMemory_example_spreadsheet.tsv')\n", + "filename = os.path.basename(spreadsheet_path)\n", + "worksheet_name = None\n", + "spreadsheet = SpreadsheetInput(file=spreadsheet_path, worksheet_name=worksheet_name,\n", + " tag_columns=['HED'], has_column_names=True, name=filename)\n", + "\n", + "# Must convert the spreadsheet to a sidecar before merging\n", + "spreadsheet_sidecar = df_to_hed(spreadsheet.dataframe, description_tag=False)\n", + "\n", + "# Use an empty dict to merge into, but any valid dict read from JSON will work\n", + "target_sidecar_dict = {}\n", + "\n", + "# Do the merge\n", + "merge_hed_dict(target_sidecar_dict, spreadsheet_sidecar)\n", + "merged_json = json.dumps(target_sidecar_dict, indent=4)\n", + "print(merged_json)" + ], + "metadata": { + "collapsed": false, + "ExecuteTime": { + "end_time": "2024-06-15T16:03:32.787320Z", + "start_time": "2024-06-15T16:03:32.760819Z" + } + }, "outputs": [ { "name": "stdout", @@ -107,37 +137,7 @@ ] } ], - "source": [ - "import os\n", - "import json\n", - 
"from hed.models import SpreadsheetInput\n", - "from hed.tools import df_to_hed, merge_hed_dict\n", - "\n", - "# Spreadsheet input\n", - "spreadsheet_path = os.path.realpath('../../../docs/source/_static/data/task-WorkingMemory_example_spreadsheet.tsv')\n", - "filename = os.path.basename(spreadsheet_path)\n", - "worksheet_name = None\n", - "spreadsheet = SpreadsheetInput(file=spreadsheet_path, worksheet_name=worksheet_name,\n", - " tag_columns=['HED'], has_column_names=True, name=filename)\n", - "\n", - "# Must convert the spreadsheet to a sidecar before merging\n", - "spreadsheet_sidecar = df_to_hed(spreadsheet.dataframe, description_tag=False)\n", - "\n", - "# Use an empty dict to merge into, but any valid dict read from JSON will work\n", - "target_sidecar_dict = {}\n", - "\n", - "# Do the merge\n", - "merge_hed_dict(target_sidecar_dict, spreadsheet_sidecar)\n", - "merged_json = json.dumps(target_sidecar_dict, indent=4)\n", - "print(merged_json)" - ], - "metadata": { - "collapsed": false, - "ExecuteTime": { - "end_time": "2024-01-10T12:44:41.634832500Z", - "start_time": "2024-01-10T12:44:39.230433200Z" - } - } + "execution_count": 2 } ], "metadata": { diff --git a/src/jupyter_notebooks/bids/sidecar_to_spreadsheet.ipynb b/src/jupyter_notebooks/bids/sidecar_to_spreadsheet.ipynb index 7af8bad..017e857 100644 --- a/src/jupyter_notebooks/bids/sidecar_to_spreadsheet.ipynb +++ b/src/jupyter_notebooks/bids/sidecar_to_spreadsheet.ipynb @@ -23,29 +23,24 @@ }, { "cell_type": "code", - "execution_count": 1, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Saving the spreadsheet to example_spreadsheet.tsv\n" - ] - } - ], "source": [ "import os\n", "import json\n", + "from io import StringIO\n", "from hed.tools import hed_to_df\n", "\n", "json_path = os.path.realpath('../../../datasets/eeg_ds003645s_hed/task-FacePerception_events.json')\n", - "spreadsheet_filename = os.path.realpath('example_spreadsheet.tsv')\n", + "spreadsheet_filename = 
''\n", "with open(json_path) as fp:\n", " example_sidecar = json.load(fp)\n", "example_spreadsheet = hed_to_df(example_sidecar)\n", "if spreadsheet_filename:\n", " print(f\"Saving the spreadsheet to {os.path.basename(spreadsheet_filename)}\")\n", - " example_spreadsheet.to_csv(spreadsheet_filename, sep='\\t', index=False,)" + " example_spreadsheet.to_csv(spreadsheet_filename, sep='\\t', index=False,)\n", + "else:\n", + " output = StringIO()\n", + " example_spreadsheet.to_csv(output, sep='\\t', index=False,)\n", + " print(f\"{output.getvalue()}\")" ], "metadata": { "collapsed": false, @@ -53,10 +48,55 @@ "name": "#%% Create a spreadsheet corresponding to a JSON sidecar and save it.\n" }, "ExecuteTime": { - "end_time": "2024-01-09T23:04:15.357250800Z", - "start_time": "2024-01-09T23:04:13.896251Z" + "end_time": "2024-06-15T16:11:28.865510Z", + "start_time": "2024-06-15T16:11:28.848888Z" } - } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "column_name\tcolumn_value\tdescription\tHED\r\n", + "event_type\tshow_face\tDisplay a face to mark end of pre-stimulus and start of blink-inhibition.\tSensory-event, Experimental-stimulus, (Def/Face-image, Onset), (Def/Blink-inhibition-task,Onset),(Def/Cross-only, Offset)\r\n", + "event_type\tshow_face_initial\tDisplay a face at the beginning of the recording.\tSensory-event, Experimental-stimulus, (Def/Face-image, Onset), (Def/Blink-inhibition-task,Onset), (Def/Fixation-task, Onset)\r\n", + "event_type\tshow_circle\tDisplay a white circle to mark end of the stimulus and blink inhibition.\tSensory-event, (Intended-effect, Cue), (Def/Circle-only, Onset), (Def/Face-image, Offset), (Def/Blink-inhibition-task, Offset), (Def/Fixation-task, Offset)\r\n", + "event_type\tshow_cross\tDisplay only a white cross to mark start of trial and fixation.\tSensory-event, (Intended-effect, Cue), (Def/Cross-only, Onset), (Def/Fixation-task, Onset), (Def/Circle-only, Offset)\r\n", + 
"event_type\tleft_press\tExperiment participant presses a key with left index finger.\tAgent-action, Participant-response, Def/Press-left-finger\r\n", + "event_type\tright_press\tExperiment participant presses a key with right index finger.\tAgent-action, Participant-response, Def/Press-right-finger\r\n", + "event_type\tsetup_left_sym\tSetup for experiment with pressing key with left index finger means a face with above average symmetry.\tExperiment-structure, (Def/Left-sym-cond, Onset), (Def/Initialize-recording, Onset)\r\n", + "event_type\tsetup_right_sym\tSetup for experiment with pressing key with right index finger means a face with above average symmetry.\tExperiment-structure, (Def/Right-sym-cond, Onset), (Def/Initialize-recording, Onset)\r\n", + "event_type\tdouble_press\tExperiment participant presses both keys .\tAgent-action, Indeterminate-action, (Press, Keyboard-key)\r\n", + "face_type\tfamous_face\tA face that should be recognized by the participants.\tDef/Famous-face-cond\r\n", + "face_type\tunfamiliar_face\tA face that should not be recognized by the participants.\tDef/Unfamiliar-face-cond\r\n", + "face_type\tscrambled_face\tA scrambled face image generated by taking face 2D FFT.\tDef/Scrambled-face-cond\r\n", + "rep_status\tfirst_show\tFactor level indicating the first display of this face.\tDef/First-show-cond\r\n", + "rep_status\timmediate_repeat\tFactor level indicating this face was the same as previous one.\tDef/Immediate-repeat-cond\r\n", + "rep_status\tdelayed_repeat\tFactor level indicating face was seen 5 to 15 trials ago.\tDef/Delayed-repeat-cond\r\n", + "rep_lag\tn/a\tHow face images before this one was the image was previously presented.\t(Face, Item-interval/#)\r\n", + "stim_file\tn/a\tPath of the stimulus file in the stimuli directory.\t(Image, Pathname/#)\r\n", + "hed_def_sensory\tcross_only_def\tA white fixation cross on a black background in the center of the screen.\t(Definition/Cross-only, (Visual-presentation, (Foreground-view, 
(White, Cross), (Center-of, Computer-screen)), (Background-view, Black)))\r\n", + "hed_def_sensory\tface_image_def\tA happy or neutral face in frontal or three-quarters frontal pose with long hair cropped presented as an achromatic foreground image on a black background with a white fixation cross superposed.\t(Definition/Face-image, (Visual-presentation, (Foreground-view, ((Image, Face, Hair), Color/Grayscale), ((White, Cross), (Center-of, Computer-screen))), (Background-view, Black)))\r\n", + "hed_def_sensory\tcircle_only_def\tA white circle on a black background in the center of the screen.\t(Definition/Circle-only, (Visual-presentation, (Foreground-view, ((White, Circle), (Center-of, Computer-screen))), (Background-view, Black)))\r\n", + "hed_def_actions\tpress_left_finger_def\tThe participant presses a key with the left index finger to indicate a face symmetry judgment.\t(Definition/Press-left-finger, ((Index-finger, (Left-side-of, Experiment-participant)), (Press, Keyboard-key)))\r\n", + "hed_def_actions\tpress_right_finger_def\tThe participant presses a key with the right index finger to indicate a face symmetry evaluation.\t(Definition/Press-right-finger, ((Index-finger, (Right-side-of, Experiment-participant)), (Press, Keyboard-key)))\r\n", + "hed_def_conds\tfamous_face_cond_def\tA face that should be recognized by the participants\t(Definition/Famous-face-cond, (Condition-variable/Face-type, (Image, (Face, Famous))))\r\n", + "hed_def_conds\tunfamiliar_face_cond_def\tA face that should not be recognized by the participants.\t(Definition/Unfamiliar-face-cond, (Condition-variable/Face-type, (Image, (Face, Unfamiliar))))\r\n", + "hed_def_conds\tscrambled_face_cond_def\tA scrambled face image generated by taking face 2D FFT.\t(Definition/Scrambled-face-cond, (Condition-variable/Face-type, (Image, (Face, Disordered))))\r\n", + "hed_def_conds\tfirst_show_cond_def\tFactor level indicating the first display of this face.\t(Definition/First-show-cond, 
((Condition-variable/Repetition-type, (Item-count/1, Face), Item-interval/0)))\r\n", + "hed_def_conds\timmediate_repeat_cond_def\tFactor level indicating this face was the same as previous one.\t(Definition/Immediate-repeat-cond, ((Condition-variable/Repetition-type, (Item-count/2, Face), Item-interval/1)))\r\n", + "hed_def_conds\tdelayed_repeat_cond_def\tFactor level indicating face was seen 5 to 15 trials ago.\t(Definition/Delayed-repeat-cond, (Condition-variable/Repetition-type, (Item-count/2, Face), (Item-interval, (Greater-than-or-equal-to, Item-interval/5))))\r\n", + "hed_def_conds\tleft_sym_cond_def\tLeft index finger key press indicates a face with above average symmetry.\t(Definition/Left-sym-cond, (Condition-variable/Key-assignment, ((Index-finger, (Left-side-of, Experiment-participant)), (Behavioral-evidence, Symmetrical)), ((Index-finger, (Right-side-of, Experiment-participant)), (Behavioral-evidence, Asymmetrical))))\r\n", + "hed_def_conds\tright_sym_cond_def\tRight index finger key press indicates a face with above average symmetry.\t(Definition/Right-sym-cond, (Condition-variable/Key-assignment, ((Index-finger, (Right-side-of, Experiment-participant)), (Behavioral-evidence, Symmetrical)), ((Index-finger, (Left-side-of, Experiment-participant)), (Behavioral-evidence, Asymmetrical))))\r\n", + "hed_def_tasks\tface_symmetry_evaluation_task_def\tEvaluate degree of image symmetry and respond with key press evaluation.\t(Definition/Face-symmetry-evaluation-task, (Task, Experiment-participant, (See, Face), (Discriminate, (Face, Symmetrical)), (Press, Keyboard-key)))\r\n", + "hed_def_tasks\tblink_inhibition_task_def\tDo not blink while the face image is displayed.\t(Definition/Blink-inhibition-task, (Task, Experiment-participant, Inhibit-blinks))\r\n", + "hed_def_tasks\tfixation_task_def\tFixate on the cross at the screen center.\t(Definition/Fixation-task, (Task, Experiment-participant, (Fixate, Cross)))\r\n", + 
"hed_def_setup\tsetup_def\tn/a\t(Definition/Initialize-recording, (Recording))\r\n", + "\n" + ] + } + ], + "execution_count": 3 } ], "metadata": { diff --git a/src/jupyter_notebooks/bids/summarize_events.ipynb b/src/jupyter_notebooks/bids/summarize_events.ipynb index 7eae4d7..30c73c7 100644 --- a/src/jupyter_notebooks/bids/summarize_events.ipynb +++ b/src/jupyter_notebooks/bids/summarize_events.ipynb @@ -33,8 +33,43 @@ } }, { + "metadata": { + "ExecuteTime": { + "end_time": "2024-06-15T16:14:21.640014Z", + "start_time": "2024-06-15T16:14:20.294996Z" + } + }, "cell_type": "code", - "execution_count": 1, + "source": [ + "import os\n", + "from hed.tools import TabularSummary, get_file_list\n", + "\n", + "# Variables to set for the specific dataset\n", + "dataset_path = os.path.realpath('../../../datasets/eeg_ds003645s_hed')\n", + "output_path = ''\n", + "name = 'eeg_ds003645s_hed'\n", + "exclude_dirs = ['stimuli', 'code', 'derivatives', 'sourcedata', 'phenotype']\n", + "skip_columns = [\"onset\", \"duration\", \"sample\", \"trial\", \"response_time\"]\n", + "value_columns = [\"stim_file\"]\n", + "\n", + "# Construct the file dictionary for the BIDS event files\n", + "event_files = get_file_list(dataset_path, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n", + "print(f\"Processing {len(event_files)} files...\")\n", + "# Create a tabular summary object\n", + "tab_sum = TabularSummary(value_cols=value_columns, skip_cols=skip_columns, name=name)\n", + "\n", + "# Update the tabular summary with the information from each event file\n", + "print(\"Updating the summaries\")\n", + "for events in event_files:\n", + " tab_sum.update(events)\n", + " \n", + "# Save or print\n", + "if output_path:\n", + " with open(output_path, 'w') as fp:\n", + " fp.write(f\"{tab_sum}\")\n", + "else:\n", + " print(f\"{tab_sum}\")\n" + ], "outputs": [ { "name": "stdout", @@ -97,43 +132,7 @@ ] } ], - "source": [ - "import os\n", - "from hed.tools import TabularSummary, 
get_file_list\n", - "\n", - "# Variables to set for the specific dataset\n", - "dataset_path = os.path.realpath('../../../datasets/eeg_ds003645s_hed')\n", - "output_path = ''\n", - "name = 'eeg_ds003645s_hed'\n", - "exclude_dirs = ['stimuli']\n", - "skip_columns = [\"onset\", \"duration\", \"sample\", \"trial\", \"response_time\"]\n", - "value_columns = [\"stim_file\"]\n", - "\n", - "# Construct the file dictionary for the BIDS event files\n", - "event_files = get_file_list(dataset_path, extensions=[\".tsv\"], name_suffix=\"_events\", exclude_dirs=exclude_dirs)\n", - "print(f\"Processing {len(event_files)} files...\")\n", - "# Create a tabular summary object\n", - "tab_sum = TabularSummary(value_cols=value_columns, skip_cols=skip_columns, name=name)\n", - "\n", - "# Update the tabular summary with the information from each event file\n", - "print(\"Updating the summaries\")\n", - "for events in event_files:\n", - " tab_sum.update(events)\n", - " \n", - "# Save or print\n", - "if output_path:\n", - " with open(output_path, 'w') as fp:\n", - " fp.write(f\"{tab_sum}\")\n", - "else:\n", - " print(f\"{tab_sum}\")\n" - ], - "metadata": { - "collapsed": false, - "ExecuteTime": { - "end_time": "2024-01-09T23:05:04.716711Z", - "start_time": "2024-01-09T23:05:03.264975300Z" - } - } + "execution_count": 1 } ], "metadata": { diff --git a/src/jupyter_notebooks/bids/validate_bids_dataset.ipynb b/src/jupyter_notebooks/bids/validate_bids_dataset.ipynb index 0139253..c9ced62 100644 --- a/src/jupyter_notebooks/bids/validate_bids_dataset.ipynb +++ b/src/jupyter_notebooks/bids/validate_bids_dataset.ipynb @@ -29,18 +29,6 @@ }, { "cell_type": "code", - "execution_count": 7, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Using HEDTOOLS version: {'date': '2024-02-05T17:04:52-0600', 'dirty': True, 'error': None, 'full-revisionid': '7c05a5461ed273b9a39bddf27d99c2e767ca1aab', 'version': '0.4.0+147.g7c05a54.dirty'}\n", - "Number of issues: 0\n", - "No HED 
validation errors\n" - ] - } - ], "source": [ "from hed.errors import get_printable_issue_string\n", "from hed.tools import BidsDataset\n", @@ -72,10 +60,22 @@ "metadata": { "collapsed": false, "ExecuteTime": { - "end_time": "2024-02-06T13:54:39.249794200Z", - "start_time": "2024-02-06T13:54:34.365938400Z" + "end_time": "2024-06-15T16:15:15.994199Z", + "start_time": "2024-06-15T16:15:10.086834Z" } - } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Using HEDTOOLS version: {'date': '2024-06-14T17:02:33-0500', 'dirty': False, 'error': None, 'full-revisionid': '940e75ddcedd5a14910098b60277413edc3c024e', 'version': '0.5.0'}\n", + "Number of issues: 0\n", + "No HED validation errors\n" + ] + } + ], + "execution_count": 1 } ], "metadata": { diff --git a/src/jupyter_notebooks/bids/validate_bids_dataset_with_libraries.ipynb b/src/jupyter_notebooks/bids/validate_bids_dataset_with_libraries.ipynb index 55e4b84..615b542 100644 --- a/src/jupyter_notebooks/bids/validate_bids_dataset_with_libraries.ipynb +++ b/src/jupyter_notebooks/bids/validate_bids_dataset_with_libraries.ipynb @@ -1,55 +1,7 @@ { "cells": [ - { - "cell_type": "markdown", - "source": [ - "## Validate HED in a BIDS dataset that uses library schema.\n", - "\n", - "Validating annotations HED as you develop them makes the annotation process much easier and faster to debug. This notebook validates HED in a BIDS dataset.\n", - "\n", - "The tool creates a `BidsDataset` object, which represents the information from a BIDS\n", - "dataset that is relevant to HED, including the `dataset_description.json`, all `events.tsv` files, and all `events.json` sidecar files.\n", - "\n", - "The `validate` method of `BidsDataset` first validates all of the `events.json` sidecars and then assembles the relevant sidecars for each `events.tsv` file and validates it. By default, validation uses the HED schemas specified in the `HEDVersion` field of the dataset's `dataset_description.json` file. 
A second example in this script shows how to specify the HED schemas directly rather than just through the `dataset_description.json`.\n", - "\n", - "The script does the following steps:\n", - "\n", - "1. Set the dataset location (`dataset_path`) to the absolute path of the root of your BIDS dataset.\n", - "2. Indicates whether to check for warnings during validation (`check_for_warnings`).\n", - "3. Create a `BidsDataset` for the dataset.\n", - "4. Validate the dataset and output the issues.\n", - "\n", - "**Note:** This validation pertains to event files and HED annotation only. It does not do a full BIDS validation.\n", - "\n", - "The example below uses a\n", - "[small version](https://github.com/hed-standard/hed-examples/tree/main/datasets/eeg_ds003645s_hed)\n", - "of the Wakeman-Hanson face-processing dataset available on openNeuro as\n", - "[ds003645](https://openneuro.org/datasets/ds003645/versions/2.0.0).\n", - "\n", - "This dataset has no validation errors, but since we have set `check_for_warnings` to `True`, validation returns warnings that the `sample` column does not have any metadata.\n", - "\n", - "For validation of a single `events.json` file during annotation development, users often find the [online sidecar tools](https://hedtools.ucsd.edu/hed/sidecar) convenient, but the online tool does not provide complete dataset-level validation." 
- ], - "metadata": { - "collapsed": false - } - }, { "cell_type": "code", - "execution_count": 2, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Handling a BIDS data set that uses dataset_description\n", - "No HED validation errors when dataset_description is used\n", - "\n", - "Now validating with schema URLs.\n", - "No HED validation errors when schemas are passed\n" - ] - } - ], "source": [ "from hed.errors import get_printable_issue_string\n", "from hed.schema import HedSchemaGroup, load_schema, load_schema_version\n", @@ -92,10 +44,24 @@ "metadata": { "collapsed": false, "ExecuteTime": { - "end_time": "2024-01-09T23:06:55.321821500Z", - "start_time": "2024-01-09T23:06:46.320753600Z" + "end_time": "2024-06-15T16:15:49.216354Z", + "start_time": "2024-06-15T16:15:42.541945Z" } - } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Handling a BIDS data set that uses dataset_description\n", + "No HED validation errors when dataset_description is used\n", + "\n", + "Now validating with schema URLs.\n", + "No HED validation errors when schemas are passed\n" + ] + } + ], + "execution_count": 1 }, { "cell_type": "raw", diff --git a/src/jupyter_notebooks/bids/validate_bids_datasets.ipynb b/src/jupyter_notebooks/bids/validate_bids_datasets.ipynb index 8f3d730..ba92a0e 100644 --- a/src/jupyter_notebooks/bids/validate_bids_datasets.ipynb +++ b/src/jupyter_notebooks/bids/validate_bids_datasets.ipynb @@ -30,49 +30,6 @@ }, { "cell_type": "code", - "execution_count": 1, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Using HEDTOOLS version: {'date': '2024-01-11T08:20:12-0600', 'dirty': False, 'error': None, 'full-revisionid': 'df75b546be42d11be8fd2c0531883e09e5cb6fde', 'version': '0.4.0+109.gdf75b54'}\n", - "\n", - "Validating eeg_ds002893s_hed_attention_shift\n", - "No HED validation errors\n", - "\n", - "Validating eeg_ds003645s_hed\n", - "No HED validation errors\n", - 
"\n", - "Validating eeg_ds003645s_hed_demo\n", - "No HED validation errors\n", - "\n", - "Validating eeg_ds003645s_hed_library\n", - "No HED validation errors\n", - "\n", - "Validating eeg_ds003645s_hed_partnered\n", - "No HED validation errors\n", - "\n", - "Validating eeg_ds003645s_hed_remodel\n", - "No HED validation errors\n", - "\n", - "Validating eeg_ds004105s_hed\n", - "No HED validation errors\n", - "\n", - "Validating eeg_ds004106s_hed\n", - "No HED validation errors\n", - "\n", - "Validating eeg_ds004117s_hed_sternberg\n", - "No HED validation errors\n", - "\n", - "Validating fmri_ds002790s_hed_aomic\n", - "No HED validation errors\n", - "\n", - "Validating fmri_soccer21s_hed\n", - "No HED validation errors\n" - ] - } - ], "source": [ "import os\n", "from hed.errors import get_printable_issue_string\n", @@ -113,10 +70,53 @@ "metadata": { "collapsed": false, "ExecuteTime": { - "end_time": "2024-01-19T16:22:12.532009100Z", - "start_time": "2024-01-19T16:21:44.157168600Z" + "end_time": "2024-06-15T16:16:40.666531Z", + "start_time": "2024-06-15T16:16:08.008717Z" } - } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Using HEDTOOLS version: {'date': '2024-06-14T17:02:33-0500', 'dirty': False, 'error': None, 'full-revisionid': '940e75ddcedd5a14910098b60277413edc3c024e', 'version': '0.5.0'}\n", + "\n", + "Validating eeg_ds002893s_hed_attention_shift\n", + "No HED validation errors\n", + "\n", + "Validating eeg_ds003645s_hed\n", + "No HED validation errors\n", + "\n", + "Validating eeg_ds003645s_hed_demo\n", + "No HED validation errors\n", + "\n", + "Validating eeg_ds003645s_hed_library\n", + "No HED validation errors\n", + "\n", + "Validating eeg_ds003645s_hed_partnered\n", + "No HED validation errors\n", + "\n", + "Validating eeg_ds003645s_hed_remodel\n", + "No HED validation errors\n", + "\n", + "Validating eeg_ds004105s_hed\n", + "No HED validation errors\n", + "\n", + "Validating eeg_ds004106s_hed\n", + "No HED validation 
errors\n", + "\n", + "Validating eeg_ds004117s_hed_sternberg\n", + "No HED validation errors\n", + "\n", + "Validating fmri_ds002790s_hed_aomic\n", + "No HED validation errors\n", + "\n", + "Validating fmri_soccer21s_hed\n", + "No HED validation errors\n" + ] + } + ], + "execution_count": 1 } ], "metadata": { diff --git a/src/jupyter_notebooks/remodeling/run_remodel.ipynb b/src/jupyter_notebooks/remodeling/run_remodel.ipynb index f124dcc..2de204c 100644 --- a/src/jupyter_notebooks/remodeling/run_remodel.ipynb +++ b/src/jupyter_notebooks/remodeling/run_remodel.ipynb @@ -30,16 +30,6 @@ }, { "cell_type": "code", - "execution_count": 2, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Results will be found in derivatives/remodel/summaries relative to data_root\n" - ] - } - ], "source": [ "import os\n", "import hed.tools.remodeling.cli.run_remodel as cli_remodel\n", @@ -58,23 +48,33 @@ "name": "#%% This removes all summaries from eeg_ds003645s_hed_remodel and then reruns.\n" }, "ExecuteTime": { - "end_time": "2024-01-10T13:02:54.499667Z", - "start_time": "2024-01-10T13:02:54.303667100Z" + "end_time": "2024-06-15T16:20:24.831395Z", + "start_time": "2024-06-15T16:20:22.979092Z" + } + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Results will be found in derivatives/remodel/summaries relative to data_root\n" + ] } - } + ], + "execution_count": 1 }, { "cell_type": "code", - "execution_count": 2, - "outputs": [], "source": [], "metadata": { "collapsed": false, "ExecuteTime": { - "end_time": "2024-01-10T13:02:54.514672400Z", - "start_time": "2024-01-10T13:02:54.501668400Z" + "end_time": "2024-06-15T16:20:24.847180Z", + "start_time": "2024-06-15T16:20:24.837479Z" } - } + }, + "outputs": [], + "execution_count": 1 } ], "metadata": { diff --git a/src/matlab_scripts/README.md b/src/matlab_scripts/README.md deleted file mode 100644 index eb4bbec..0000000 --- a/src/matlab_scripts/README.md +++ /dev/null @@ -1,35 
+0,0 @@
-## MATLAB Scripts for HED processing
-
-The MATLAB scripts for processing are in two directories:
-**web_services** and **utility_scripts**.
-
-### HED MATLAB services
-
-The HED MATLAB services are located in the
-[**web_services**](https://github.com/hed-standard/hed-examples/tree/main/hedcode/matlab_scripts/web_services)
-subdirectory of [**hed-examples**](https://github.com/hed-standard/hed-examples).
-
-These scripts access HED REST services.
-They rely on the HED online services running somewhere,
-either in a local Docker module or remotely.
-
-Access to the HED services is also available online through
-the [**HED Online Tools**](https://hedtools.ucsd.edu/hed).
-
-You can read more about these services in
-[**HED services in MATLAB**](https://hed-examples.readthedocs.io/en/latest/HedInMatlab.html#hed-services-in-matlab).
-
-### HED MATLAB utilities
-
-Some MATLAB utilities are available in
-[**utility_scripts**](https://github.com/hed-standard/hed-examples/tree/main/hedcode/matlab_scripts/utility_scripts).
-These utilities are mainly used for processing events and other information from EEGLAB `.set` files.
-
-Additional MATLAB tools for working with EEG `.set` files in
-BIDS datasets are available in the
-[**matlab_utility_scripts**](https://github.com/hed-standard/hed-curation/tree/main/src/curation/matlab_utility_scripts)
-directory of the [**hed-curation**](https://github.com/hed-standard/hed-curation) GitHub repository.
-
-[**EEGLAB**](https://sccn.ucsd.edu/eeglab/index.php) also has a number
-of HED tools which are available as plugins for processing BIDS datasets
-in the EEGLAB environment.
diff --git a/src/matlab_scripts/data_cleaning/getChannelMap.m b/src/matlab_scripts/data_cleaning/getChannelMap.m
deleted file mode 100644
index ffe2dc2..0000000
--- a/src/matlab_scripts/data_cleaning/getChannelMap.m
+++ /dev/null
@@ -1,15 +0,0 @@
-function [chanMap, chanNames] = getChannelMap(chanFile)
-%% Create a Map of (channel name, channel type) and list of channel names.
-%
-% Parameters:
-%    chanFile    Path name of BIDS channels.tsv file.
-%    chanMap     (output) Map(channel names, channel types)
-%    chanNames   (output) Channel names in order they appear in chanFile.
-%%
-    opts = delimitedTextImportOptions( ...
-        'Delimiter', '\t', 'DataLines', 2, 'VariableNamesLine', 1);
-    T = readtable(chanFile, opts, 'ReadVariableNames', true);
-    chanNames = T.name;
-    types = T.type;
-    chanMap = containers.Map(chanNames, types);
-end
diff --git a/src/matlab_scripts/data_cleaning/getEventTable.m b/src/matlab_scripts/data_cleaning/getEventTable.m
deleted file mode 100644
index acc74e5..0000000
--- a/src/matlab_scripts/data_cleaning/getEventTable.m
+++ /dev/null
@@ -1,39 +0,0 @@
-function eventTable = getEventTable(eventsFile, typeMap, renameMap)
-% Read the table of events from the events file
-%
-% Parameters:
-%    eventsFile - the path of a BIDS tabular events file.
-%    typeMap - map of non-string column types: (column-name, column-type)
-%    renameMap - map of columns to be renamed: (old-name, new-name)
-%
-
-    optsDect = detectImportOptions(eventsFile, 'FileType', 'delimitedtext');
-
-    % Set the types and fill values of the columns as specified.
-    columnNames = optsDect.VariableNames;
-    columnTypes = cell(size(columnNames));
-    for m = 1:length(columnNames)
-        if isKey(typeMap, columnNames{m})
-            columnTypes{m} = typeMap(columnNames{m});
-        else
-            columnTypes{m} = 'char';
-        end
-    end
-    optsDect = setvartype(optsDect, columnTypes);
-    optsDect = setvaropts(optsDect, ...
-        columnNames(~strcmpi(columnTypes, 'char')), 'FillValue', NaN);
-    optsDect = setvaropts(optsDect, ...
-        columnNames(strcmpi(columnTypes, 'char')), 'FillValue', 'n/a');
-
-    % Read in the event table
-    eventTable = readtable(eventsFile, optsDect);
-
-    % Rename the columns that are requested.
-    variableNames = eventTable.Properties.VariableNames;
-    for m = 1:length(variableNames)
-        if isKey(renameMap, variableNames{m})
-            variableNames{m} = renameMap(variableNames{m});
-        end
-    end
-    eventTable.Properties.VariableNames = variableNames;
-    
\ No newline at end of file
diff --git a/src/matlab_scripts/data_cleaning/getFileList.m b/src/matlab_scripts/data_cleaning/getFileList.m
deleted file mode 100644
index 83e89f5..0000000
--- a/src/matlab_scripts/data_cleaning/getFileList.m
+++ /dev/null
@@ -1,79 +0,0 @@
-%% Dump the EEG.event structure from each EEG.set file in a dataset.
-
-%% Create a list with all of the .set files in the BIDS dataset
-function selectedList = getFileList(rootPath, namePrefix, nameSuffix, ...
-    extensions, excludeDirs)
-%% Return a full path list of specified files in rootPath directory tree.
-%
-% Parameters:
-%    rootPath (char) full path of root of directory tree to search
-%    namePrefix (char) prefix of the filename or any if empty
-%    nameSuffix (char) suffix of the filename or any if empty
-%    extensions (cell array) names of extensions (with . included)
-%    excludeDirs (cell array) names of subdirectories to exclude
-%
-% Returns:
-%    selectedList (cell array) list of full paths of the files
-%
-
-    selectedList = {};
-    dirList = {rootPath};
-    while ~isempty(dirList)
-        thisDir = dirList{1};
-        dirList = dirList(2:end);
-        fileList = dir(thisDir);
-        for k = 1:length(fileList)
-            thisFile = fileList(k);
-            if checkDirExclusions(thisFile, excludeDirs)
-                continue;
-            elseif fileList(k).isdir
-                dirList{end+1} = [fileList(k).folder filesep fileList(k).name]; %#ok
-            elseif ~checkFileExclusions(thisFile, namePrefix, ...
-                nameSuffix, extensions)
-                thisPath = [thisFile.folder filesep thisFile.name];
-                selectedList{end+1} = thisPath; %#ok
-            end
-        end
-    end
-end
-
-
-function isExcluded = checkDirExclusions(thisFile, excludeDirs)
-% Returns true if this file entry corresponds to an excluded directory
-    if ~thisFile.isdir
-        isExcluded = false;
-    elseif startsWith(thisFile.name, '.')
-        isExcluded = true;
-    else
-        isExcluded = false;
-        for k = 1:length(excludeDirs)
-            if startsWith(thisFile.name, excludeDirs{k})
-                isExcluded = true;
-                break
-            end
-        end
-
-    end
-end
-
-function isExcluded = checkFileExclusions(thisFile, namePrefix, ...
-    nameSuffix, extensions)
-% Returns true if this file entry should be excluded by the name filters
-    [~, theName, theExt] = fileparts(thisFile.name);
-    if ~isempty(namePrefix) && ~startsWith(theName, namePrefix)
-        isExcluded = true;
-    elseif ~isempty(nameSuffix) && ~endsWith(theName, nameSuffix)
-        isExcluded = true;
-    elseif isempty(extensions)
-        isExcluded = false;
-    else
-        isExcluded = true;
-        for k = 1:length(extensions)
-            if strcmpi(theExt, extensions{k})
-                isExcluded = false;
-                break
-            end
-        end
-
-    end
-end
diff --git a/src/matlab_scripts/data_cleaning/renameChannels.m b/src/matlab_scripts/data_cleaning/renameChannels.m
deleted file mode 100644
index 93f1f41..0000000
--- a/src/matlab_scripts/data_cleaning/renameChannels.m
+++ /dev/null
@@ -1,25 +0,0 @@
-% renameChannels - rename channels based on dictionary (does not reorder)
-%
-% Usage:
-%   chanlocs = renameChannels(chanlocs, chanMap)
-%
-%
-% Parameters:
-%    chanlocs    [struct] the EEG.chanlocs structure
-%
-%    chanRemap   [containers.Map] with (old names, new names)
-%
-% Author: Kay Robbins, 2022
-function [chanlocs, numRenamed] = renameChannels(chanlocs, chanRemap)
-    numRenamed = 0;
-    if isempty(chanRemap)
-        return;
-    end
-    for k = 1:length(chanlocs)
-        if isKey(chanRemap, chanlocs(k).labels)
-            chanlocs(k).labels = chanRemap(chanlocs(k).labels);
-            numRenamed = numRenamed + 1;
-        end
-    end
-end
-
diff --git a/src/matlab_scripts/data_cleaning/runEeglabChannelsToJson.m b/src/matlab_scripts/data_cleaning/runEeglabChannelsToJson.m
deleted file mode 100644
index 4ea5dcf..0000000
--- a/src/matlab_scripts/data_cleaning/runEeglabChannelsToJson.m
+++ /dev/null
@@ -1,40 +0,0 @@
-%% This script dumps the channel labels to a JSON file.
-
-%% Set up the specifics for your dataset
-%rootPath = '/XXX/SternbergWorking';
-%rootPath = '/XXX/AuditoryOddballWorking';
-%rootPath = '/XXX/GoNogoWorking';
-%rootPath = '/XXX/ImaginedEmotionWorking';
-%rootPath = '/XXX/AttentionShiftWorking';
-%rootPath = '/XXX/AdvancedGuardDutyWorking';
-%rootPath = '/XXX/AuditoryCueingWorking';
-%rootPath = '/XXX/BaselineDrivingWorking';
-%rootPath = '/XXX/BasicGuardDutyWorking';
-%rootPath = '/XXX/CalibrationDrivingWorking';
-%rootPath = '/XXX/MindWanderingWorking';
-%rootPath = '/XXX/RSVPBaselineWorking';
-rootPath = '/XXX/RSVPExpertiseWorking';
-%rootPath = '/XXX/SpeedControlWorking';
-%rootPath = '/XXX/TrafficComplexityWorking';
-sratePath = [rootPath filesep 'code'];
-excludeDirs = {'sourcedata', 'code', 'stimuli'};
-namePrefix = '';
-nameSuffix = '_eeg';
-extensions = {'.set'};
-fileList = getFileList(rootPath, namePrefix, nameSuffix, ...
-    extensions, excludeDirs);
-channelsJson = 'channelsOriginal.json';
-
-%% Generate json file.
-fprintf('Creating a JSON file with channel labels for %d EEG.set files...\n', length(fileList));
-channelMap = containers.Map('KeyType', 'char', 'ValueType', 'any');
-for k = 1:length(fileList)
-    EEG = pop_loadset(fileList{k});
-    [pathName, basename, ext] = fileparts(fileList{k});
-    channelMap([basename ext]) = {EEG.chanlocs.labels};
-end
-y = jsonencode(channelMap);
-fileName = [rootPath filesep 'code' filesep channelsJson];
-fp = fopen(fileName, 'w');
-fprintf(fp, '%s', y);
-fclose(fp);
\ No newline at end of file
diff --git a/src/matlab_scripts/data_cleaning/runEeglabEventsToFiles.m b/src/matlab_scripts/data_cleaning/runEeglabEventsToFiles.m
deleted file mode 100644
index 6fe60b1..0000000
--- a/src/matlab_scripts/data_cleaning/runEeglabEventsToFiles.m
+++ /dev/null
@@ -1,52 +0,0 @@
-%% This script dumps all of the EEG.set events to files _eventstemp.tsv.
-% You must provide the root path to your dataset directory tree
-% and also the exclude directories to skip.
-
-%% Set up the specifics for your dataset
-%rootPath = '/XXX/SternbergWorking';
-%rootPath = '/XXX/AuditoryOddballWorking';
-%rootPath = '/XXX/GoNogoWorking';
-%rootPath = '/XXX/ImaginedEmotionWorking';
-%rootPath = '/XXX/AttentionShiftWorking';
-%rootPath = '/XXX/AdvancedGuardDutyWorking';
-rootPath = '/XXX/AuditoryCueingWorking';
-%rootPath = '/XXX/BaselineDrivingWorking';
-%rootPath = '/XXX/BasicGuardDutyWorking';
-%rootPath = '/XXX/CalibrationDrivingWorking';
-%rootPath = '/XXX/MindWanderingWorking';
-%rootPath = '/XXX/RSVPBaselineWorking';
-%rootPath = '/XXX/RSVPExpertiseWorking';
-%rootPath = '/XXX/SpeedControlWorking';
-%rootPath = '/XXX/TrafficComplexityWorking';
-sratePath = [rootPath filesep 'code'];
-excludeDirs = {'sourcedata', 'code', 'stimuli'};
-namePrefix = '';
-nameSuffix = '_eeg';
-extensions = {'.set'};
-selectedList = getFileList(rootPath, namePrefix, nameSuffix, ...
-    extensions, excludeDirs);
-
-%% Generate the eventstemp.tsv files and srate file from EEG.set files
-
-% Output a list of files
-for k = 1:length(selectedList)
-    fprintf('%s\n', selectedList{k});
-end
-
-% Use eeglabEventsToTsv to save EEG.set events to tsv file
-saveSuffix = '_eventstemp.tsv';
-nameSuffix = '_eeg';
-srateMap = eeglabEventsToTsv(selectedList, nameSuffix, saveSuffix);
-
-
-% Save the return list of sampling rates
-if ~isfolder(sratePath)
-    mkdir(sratePath);
-end
-srateFile = fopen([sratePath filesep 'samplingRates.tsv'], 'w');
-theKeys = keys(srateMap);
-fprintf(srateFile, 'file_basename\tsampling_rate\n');
-for k = 1:length(theKeys)
-    fprintf(srateFile, '%s\t%g\n', theKeys{k}, srateMap(theKeys{k}));
-end
-fclose(srateFile);
diff --git a/src/matlab_scripts/data_cleaning/runEeglabFixChannels.m b/src/matlab_scripts/data_cleaning/runEeglabFixChannels.m
deleted file mode 100644
index 8686f49..0000000
--- a/src/matlab_scripts/data_cleaning/runEeglabFixChannels.m
+++ /dev/null
@@ -1,87 +0,0 @@
-%% This script reads the channels.tsv files and updates the channels in
-% the EEG.set files. This script optionally supports renaming particular
-% channels in the EEG.chanlocs, reordering the channels in the
-% EEG.chanlocs, and resetting the EEG.urchanlocs.
-%
-% The script also sets the type field in EEG.chanlocs to agree with
-% those in the BIDS files and writes the X, Y, Z positions in the BIDS
-% channels.tsv to agree with those in the EEG.chanlocs.
-%
-%% Set up the specifics for your dataset
-
-% Sternberg requires reordering of channels as well as reset of urchanlocs.
-rootPath = '/XXX/SternbergWorkingPhaseTwo';
-log_name = 'sternberg_12_fix_eeglab_channels_log.txt';
-resetUrchans = true;  % If true copies chanlocs into urchanlocs
-reorderChans = true;  % If true reorders channels to be BIDS order
-renameRemap = containers.Map();
-
-%% Set the common variables
-sratePath = [rootPath filesep 'code'];
-excludeDirs = {'sourcedata', 'code', 'stimuli'};
-namePrefix = '';
-nameSuffix = '_eeg';
-extensions = {'.set'};
-fileList = getFileList(rootPath, namePrefix, nameSuffix, ...
-    extensions, excludeDirs);
-
-%% Open the log
-fid = fopen([rootPath filesep 'code/curation_logs', filesep log_name], 'w');
-fprintf(fid, 'Log of runEeglabFixChannels.m on %s\n', datetime('now'));
-
-%% Rename the channels and set the channel types
-fprintf('Making the EEG channels and the BIDS channels compatible.\n');
-for k = 1:length(fileList)
-    [pathName, basename, ext] = fileparts(fileList{k});
-    fprintf(fid, '%s:\n', basename);
-    fprintf(fid, '\tLoading EEG.set file\n');
-    EEG = pop_loadset(fileList{k});
-
-    %% Load the channels.tsv file and make the channel map.
-    fprintf(fid, '\tLoading channels.tsv file\n');
-    chanFile = [pathName filesep basename(1:(end-3)) 'channels.tsv'];
-    [chanMap, chanNames] = getChannelMap(chanFile);
-    chanlocs = EEG.chanlocs;
-
-    %% Reset the urchanlocs if requested.
-    if resetUrchans
-        EEG.urchanlocs = rmfield(chanlocs, 'urchan');
-    end
-
-    %% Rename channels if required.
-    if ~isempty(renameRemap)
-        mkeys = keys(renameRemap);
-        chanlocs = renameChannels(chanlocs, renameRemap);
-        fprintf(fid, '\tRenaming channels [%s]\n', join(mkeys(:)', ' '));
-    end
-
-    %% Set the channel types in the chanlocs.
-    [chanlocs, missing] = setChannelTypes(chanlocs, chanMap);
-    if ~isempty(missing)
-        missInfo = join(missing(:)', ' ');
-        fprintf(fid, '\tWARNING---Missing channels [%s]\n', missInfo{1});
-    end
-    EEG.chanlocs = chanlocs;
-
-    %% Now reorder the channels and data if requested.
- if reorderChans
- chanLabels = {chanlocs.labels};
- [C, ia, ib] = intersect(chanNames, chanLabels, 'stable');
- EEG.data = EEG.data(ib(:), :);
- EEG.chanlocs = chanlocs(ib);
- end
-
- %% Now write the electrode files.
- electrodePath = [pathName filesep basename(1:(end-3)) 'electrodes.tsv'];
- num_written = writeElectrodeFile(EEG.chanlocs, electrodePath);
- fprintf(fid, '\tWriting electrode file with %d electrodes\n', num_written);
- if num_written == 0
- fprintf(fid, '\tWARNING---EEG missing chanlocs.\n');
- end
-
- EEG = pop_saveset(EEG, 'savemode', 'resave', 'version', '7.3');
- fprintf(fid, '\tResaving the EEG.set file\n');
-end
-
-%% Close the log file.
-fclose(fid);
\ No newline at end of file
diff --git a/src/matlab_scripts/data_cleaning/runEeglabImportEvents.m b/src/matlab_scripts/data_cleaning/runEeglabImportEvents.m
deleted file mode 100644
index 458f3d0..0000000
--- a/src/matlab_scripts/data_cleaning/runEeglabImportEvents.m
+++ /dev/null
@@ -1,47 +0,0 @@
-%% Import the _events.tsv into the corresponding EEG.event structure
-
-%% Set up the specifics for your dataset
-rootPath = 'T:/summaryTests/ds004106-download';
-setname = 'BCITAdvancedGuardDuty_';
-excludeDirs = {'sourcedata', 'code', 'stimuli', 'derivatives', 'phenotype'};
-namePrefix = '';
-nameSuffix = '_eeg';
-extensions = {'.set'};
-
-
-% Designate the columns that are numeric (rest are char)
-columnTypes = containers.Map({'onset', 'duration', 'sample'}, ...
- {'double', 'double', 'int32'});
-% Designate the columns that should be renamed
-renameColumns = containers.Map({'onset'}, {'latency'});
-convertLatency = true;
-
-%% Import the events into the EEG.set files.
-fileList = getFileList(rootPath, namePrefix, nameSuffix, ...
- extensions, excludeDirs); -for k = 1:length(fileList) - EEG = pop_loadset(fileList{k}); - [pathName, basename, ext] = fileparts(fileList{k}); - fprintf('%s:\n', basename); - eventsFile = [pathName filesep basename(1:(end-3)) 'events.tsv']; - eventTable = getEventTable(eventsFile, columnTypes, renameColumns); - fprintf('\tCreate a table from the events file\n'); - if convertLatency - eventTable.('latency') = eventTable.('latency')*EEG.srate + 1; - fprintf('\tConvert the latency column to samples\n'); - end - fprintf('%s: EEG.event has %d events and BIDS event file has %d events\n', ... - basename, length(EEG.event), size(eventTable,1)); - fprintf('\tReset the EEG.event.urevent\n'); - eventTable.('urevent') = transpose(1:size(eventTable)); - fprintf('\tSet the EEG.event\n'); - EEG.event = table2struct(eventTable)'; - fprintf('\tSet the EEG.urevent\n'); - EEG.urevent = EEG.event; - if ~isempty(setname) - EEG.setname = [setname basename]; - fprintf('\tSet the EEG.setname\n'); - end - fprintf('\tResave the EEG.set file\n'); - EEG = pop_saveset(EEG, 'savemode', 'resave', 'version', '7.3'); -end \ No newline at end of file diff --git a/src/matlab_scripts/data_cleaning/runEeglabImportEventsOld.m b/src/matlab_scripts/data_cleaning/runEeglabImportEventsOld.m deleted file mode 100644 index 2b81ea7..0000000 --- a/src/matlab_scripts/data_cleaning/runEeglabImportEventsOld.m +++ /dev/null @@ -1,88 +0,0 @@ -%% This imports the _events.tsv into the corresponding EEG.set file - -%% Set up the specifics for your dataset - -rootPath = '/XXX/SternbergWorkingPhaseTwo'; -setname = ''; -log_name = 'sternberg_12_import_events_log.txt'; -renameColumns = {'event_type', 'type'; 'onset', 'latency'}; - -% rootPath = 'G:/AttentionShift/AttentionShiftWorkingPhaseTwo'; -% setname = 'Auditory Visual Attention Shift'; -% log_name = 'attention_shift_18_import_events_log.txt'; - -% rootPath = 's:/bcit/AdvancedGuardDutyWorkingPhaseTwo'; -% setname = 'BCIT Advanced Guard Duty'; -% log_name = 
'bcit_advanced_guard_duty_10_import_events_log.txt'; -% -% rootPath = 's:/bcit/AuditoryCueingWorkingPhaseTwo'; -% setname = 'BCIT Auditory Cueing'; -% log_name = 'bcit_auditory_cueing_10_import_events_log.txt'; - -% rootPath = 's:/bcit/BaselineDrivingWorkingPhaseTwo'; -% setname = 'BCIT Baseline Driving'; -% log_name = 'bcit_baseline_driving_10_import_events_log.txt'; - -% rootPath = 's:/bcit/BasicGuardDutyWorkingPhaseTwo'; -% setname = 'BCIT Basic Guard Duty'; -% log_name = 'bcit_basic_guard_duty_10_import_events_log.txt'; - -% rootPath = 's:/bcit/CalibrationDrivingWorkingPhaseTwo'; -% setname = 'BCIT Calibration Driving'; -% log_name = 'bcit_calibration_driving_10_import_events_log.txt'; - -% rootPath = 's:/bcit/MindWanderingWorkingPhaseTwo'; -% setname = 'BCIT Mind Wandering'; -% log_name = 'bcit_mind_wandering_10_import_events_log.txt'; - -% rootPath = 's:/bcit/SpeedControlWorkingPhaseTwo'; -% setname = 'BCIT Speed Control'; -% log_name = 'bcit_speed_control_10_import_events_log.txt'; - -% rootPath = 's:/bcit/TrafficComplexityWorkingPhaseTwo'; -% setname = 'BCIT Traffic Complexity'; -% log_name = 'bcit_traffic_complexity_10_import_events_log.txt'; - -excludeDirs = {'sourcedata', 'code', 'stimuli', 'derivatives'}; -namePrefix = ''; -nameSuffix = '_eeg'; -extensions = {'.set'}; - -% Designate the columns that are numeric (rest are char) -columnTypes = {'onset', 'double'; 'duration', 'double'; 'sample', 'int32'}; - -convertLatency = true; - -%% Open the log -fid = fopen([rootPath filesep 'code/curation_logs', filesep log_name], 'w'); -fprintf(fid, 'Log of runEeglabEventsImport.m on %s\n', datetime('now')); - -%% Generate json file. -fileList = getFileList(rootPath, namePrefix, nameSuffix, ... 
- extensions, excludeDirs); -for k = 1:length(fileList) - EEG = pop_loadset(fileList{k}); - [pathName, basename, ext] = fileparts(fileList{k}); - fprintf(fid, '%s:\n', basename); - eventsFile = [pathName filesep basename(1:(end-3)) 'events.tsv']; - eventTable = getEventTable(eventsFile, columnTypes, renameColumns); - fprintf(fid, '\tCreate a table from the events file\n'); - if convertLatency - eventTable.('latency') = eventTable.('latency')*EEG.srate + 1; - fprintf(fid, '\tConvert the latency column to samples\n'); - end - fprintf('%s: EEG.event has %d events and BIDS event file has %d events\n', ... - basename, length(EEG.event), size(eventTable,1)); - EEG.urevent = table2struct(eventTable)'; - fprintf(fid, '\tSet the EEG.urevent\n'); - eventTable.('urevent') = transpose(1:size(eventTable)); - EEG.event = table2struct(eventTable)'; - fprintf(fid, '\tSet the EEG.event\n'); - if ~isempty(setname) - EEG.setname = [setname basename]; - fprintf(fid, '\tSet the EEG.setname\n'); - end - fprintf(fid, '\tResave the EEG.set file\n'); - EEG = pop_saveset(EEG, 'savemode', 'resave', 'version', '7.3'); -end -fclose(fid); \ No newline at end of file diff --git a/src/matlab_scripts/data_cleaning/runEeglabJsonToChannels.m b/src/matlab_scripts/data_cleaning/runEeglabJsonToChannels.m deleted file mode 100644 index 12ba7ce..0000000 --- a/src/matlab_scripts/data_cleaning/runEeglabJsonToChannels.m +++ /dev/null @@ -1,52 +0,0 @@ -%% This script dumps the channel labels to a JSON file. 
-
-%% Set up the specifics for your dataset
-%rootPath = '/XXX/SternbergWorking';
-%rootPath = '/XXX/AuditoryOddballWorking';
-%rootPath = '/XXX/GoNogoWorking';
-%rootPath = '/XXX/ImaginedEmotionWorking';
-%rootPath = '/XXX/AttentionShiftWorking';
-rootPath = '/XXX/AdvancedGuardDutyWorking';
-%rootPath = '/XXX/AuditoryCueingWorking';
-%rootPath = '/XXX/BaselineDrivingWorking';
-%rootPath = '/XXX/BasicGuardDutyWorking';
-%rootPath = '/XXX/CalibrationDrivingWorking';
-%rootPath = '/XXX/MindWanderingWorking';
-%rootPath = '/XXX/RSVPBaselineWorking';
-%rootPath = '/XXX/RSVPExpertiseWorking';
-%rootPath = '/XXX/SpeedControlWorking';
-%rootPath = '/XXX/TrafficComplexityWorking';
-sratePath = [rootPath filesep 'code'];
-excludeDirs = {'sourcedata', 'code', 'stimuli'};
-namePrefix = '';
-nameSuffix = '_eeg';
-extensions = {'.set'};
-fileList = getFileList(rootPath, namePrefix, nameSuffix, ...
- extensions, excludeDirs);
-extChannels = {'EXG1', 'EXG2', 'EXG3', 'EXG4', 'EXG5', 'EXG6'};
-mapSet = {'LHEOG', 'RHEOG', 'UVEOG', 'LVEOG', 'LMAST', 'RMAST'};
-chanMap = containers.Map(extChannels, mapSet);
-eogChannels = {'LHEOG', 'RHEOG', 'UVEOG', 'LVEOG'};
-miscChannels = {'LMAST', 'RMAST'};
-
-%% Generate json file.
-fprintf('Processing channels from %d EEG.set files...\n', length(fileList));
-channelMap = containers.Map('KeyType', 'char', 'ValueType', 'any');
-for k = 1:length(fileList)
- EEG = pop_loadset(fileList{k});
- chanlocs = EEG.chanlocs;
- for n = 1:length(chanlocs)
- chan = chanlocs(n).labels;
- if (sum(strcmpi(extChannels, chan)) == 0)
- chanlocs(n).type = 'EEG';
- continue;
- end
- chanlocs(n).labels = chanMap(chan);
- if (sum(strcmpi(eogChannels, chanMap(chan))) > 0)
- chanlocs(n).type = 'EOG';
- else
- chanlocs(n).type = 'MISC';
- end
- end
-
-end
diff --git a/src/matlab_scripts/data_cleaning/runEeglabRenameBCITChannels.m b/src/matlab_scripts/data_cleaning/runEeglabRenameBCITChannels.m
deleted file mode 100644
index 3c311be..0000000
--- a/src/matlab_scripts/data_cleaning/runEeglabRenameBCITChannels.m
+++ /dev/null
@@ -1,50 +0,0 @@
-%% This script renames the external (EXG) channels and sets the channel types in the EEG.set files.
-
-%% Set up the specifics for your dataset
-rootPath = '/XXX/AdvancedGuardDutyWorking';
-%rootPath = '/XXX/AuditoryCueingWorking';
-%rootPath = '/XXX/BaselineDrivingWorking';
-%rootPath = '/XXX/BasicGuardDutyWorking';
-%rootPath = '/XXX/CalibrationDrivingWorking';
-%rootPath = '/XXX/MindWanderingWorking';
-%rootPath = '/XXX/RSVPBaselineWorking';
-%rootPath = '/XXX/RSVPExpertiseWorking';
-%rootPath = '/XXX/SpeedControlWorking';
-%rootPath = '/XXX/TrafficComplexityWorking';
-sratePath = [rootPath filesep 'code'];
-excludeDirs = {'sourcedata', 'code', 'stimuli'};
-namePrefix = '';
-nameSuffix = '_eeg';
-extensions = {'.set'};
-fileList = getFileList(rootPath, namePrefix, nameSuffix, ...
- extensions, excludeDirs);
-extChannels = {'EXG1', 'EXG2', 'EXG3', 'EXG4', 'EXG5', 'EXG6'};
-mapSet = {'LHEOG', 'RHEOG', 'UVEOG', 'LVEOG', 'LMAST', 'RMAST'};
-chanMap = containers.Map(extChannels, mapSet);
-eogChannels = {'LHEOG', 'RHEOG', 'UVEOG', 'LVEOG'};
-miscChannels = {'LMAST', 'RMAST'};
-
-%% Rename the channels in the EEG.set file.
-fprintf('Renaming channels in %d EEG.set files...\n', length(fileList));
-channelMap = containers.Map('KeyType', 'char', 'ValueType', 'any');
-for k = 1:length(fileList)
- EEG = pop_loadset(fileList{k});
- chanlocs = EEG.chanlocs;
- for n = 1:length(chanlocs)
- chan = chanlocs(n).labels;
- if (sum(strcmpi(extChannels, chan)) == 0)
- chanlocs(n).type = 'EEG';
- continue;
- end
- chanlocs(n).labels = chanMap(chan);
- if (sum(strcmpi(eogChannels, chanMap(chan))) > 0)
- chanlocs(n).type = 'EOG';
- else
- chanlocs(n).type = 'MISC';
- end
- end
- EEG.chanlocs = chanlocs;
- pop_saveset(EEG, 'filepath', fileList{k}, ...
- 'savemode', 'onefile', 'version', '7.3');
-
-end
diff --git a/src/matlab_scripts/data_cleaning/runEeglabRenameTask.m b/src/matlab_scripts/data_cleaning/runEeglabRenameTask.m
deleted file mode 100644
index d758c4d..0000000
--- a/src/matlab_scripts/data_cleaning/runEeglabRenameTask.m
+++ /dev/null
@@ -1,33 +0,0 @@
-%% This script renames the task in the file names of the EEG.set files in a dataset.
-% You must provide the root path to your dataset directory tree and exclude directories to skip.
-
-%% Set up the specifics for your dataset
-rootPath = '/XXX/SternbergWorking';
-sratePath = [rootPath filesep 'code'];
-excludeDirs = {'sourcedata', 'code', 'stimuli'};
-namePrefix = '';
-nameSuffix = '_eeg';
-extensions = {'.set'};
-fileList = getFileList(rootPath, namePrefix, nameSuffix, ...
- extensions, excludeDirs);
-oldTask = '_task-Experiment_';
-newTask = '_task-WorkingMemory_';
-
-%% Make a copy of the files
-errorList = [];
-for k = 1:length(fileList)
- EEG = pop_loadset(fileList{k});
- [filepath, basename, ext] = fileparts(fileList{k});
- pos = strfind(basename, oldTask);
- if (isempty(pos))
- fprintf('%s does not have old task\n', fileList{k});
- errorList(end+1) = k;
- else
- firstPart = basename(1:(pos(1) - 1));
- lastPart = basename(pos(1)+length(oldTask):end);
- newName = [firstPart newTask lastPart ext];
- newPath = [filepath filesep newName];
- pop_saveset(EEG, 'filepath', newPath, ...
- 'savemode', 'twofiles', 'version', '7.3');
- end
-end
diff --git a/src/matlab_scripts/data_cleaning/setChanTypes.m b/src/matlab_scripts/data_cleaning/setChanTypes.m
deleted file mode 100644
index b9f1b02..0000000
--- a/src/matlab_scripts/data_cleaning/setChanTypes.m
+++ /dev/null
@@ -1,31 +0,0 @@
-% setChanTypes - set the type field of chanlocs based on channels.tsv
-%
-% Usage:
-% [chanlocs, missing] = setChanTypes(chanlocs, chanFile)
-%
-%
-% Parameters:
-% chanlocs - [struct] the EEG.chanlocs structure
-% chanFile - [string] filepath of relevant BIDS channels.tsv file
-%
-% Returns:
-% chanlocs - [struct] the chanlocs structure modified with channel types
-% missing - [cell array] a list of channels not found in the EEG.
-%
-% Author: Kay Robbins, 2022

-function [chanlocs, missing] = setChanTypes(chanlocs, chanFile)
- chanMap = getChannelMap(chanFile);
- numRenamed = 0;
- missing = {};
- for nk = 1:length(chanlocs)
- label = chanlocs(nk).labels;
- if isKey(chanMap, label)
- chanlocs(nk).type = chanMap(label);
- numRenamed = numRenamed + 1;
- else
- missing{end+1} = label; %#ok
- end
- end
-end
-
diff --git a/src/matlab_scripts/data_cleaning/setChannelTypes.m b/src/matlab_scripts/data_cleaning/setChannelTypes.m
deleted file mode 100644
index 693df90..0000000
--- a/src/matlab_scripts/data_cleaning/setChannelTypes.m
+++ /dev/null
@@ -1,22 +0,0 @@
-
-function [chanlocs, missing] = setChannelTypes(chanlocs, chanMap)
-%% Set the types of the channels based on a map of channels and types.
-%
-% Parameters:
-% chanlocs [struct] (Input/Output) The EEG.chanlocs structure.
-% chanMap [containers.Map] Map of channel names to channel types.
-% missing [cell array] (Output) Channels not found in the EEG.
-%
-% Author: Kay Robbins, 2022
-
- missing = {};
- for nk = 1:length(chanlocs)
- label = chanlocs(nk).labels;
- if isKey(chanMap, label)
- chanlocs(nk).type = chanMap(label);
- else
- missing{end+1} = label; %#ok
- end
- end
-end
-
diff --git a/src/matlab_scripts/data_cleaning/writeElectrodeFile.m b/src/matlab_scripts/data_cleaning/writeElectrodeFile.m
deleted file mode 100644
index 253f69a..0000000
--- a/src/matlab_scripts/data_cleaning/writeElectrodeFile.m
+++ /dev/null
@@ -1,36 +0,0 @@
-function numchans = writeElectrodeFile(chanlocs, electrodesFile)
-%% Write the electrodes.tsv file for the chanlocs.
-%
-% writeElectrodeFile(chanlocs, electrodesFile)
-%
-% Parameters:
-% chanlocs [struct] The EEG.chanlocs structure.
-% electrodesFile [string] The filepath of the electrodes file.
-%
-% Returns:
-% numchans [numeric] number of channels written to the electrodes file.
-%%
- if isempty(chanlocs) || ~isfield(chanlocs, 'X')
- numchans = 0;
- return
- end
- fid = fopen(electrodesFile, 'w');
- fprintf(fid, 'name\tx\ty\tz\n');
- for iChan = 1:length(chanlocs)
- fprintf(fid, '%s', chanlocs(iChan).labels);
- chanwrite(fid, chanlocs(iChan).X);
- chanwrite(fid, chanlocs(iChan).Y);
- chanwrite(fid, chanlocs(iChan).Z);
- fprintf(fid, '\n');
- end
- fclose(fid);
- numchans = length(chanlocs);
-end
-
-function [] = chanwrite(fid, pos)
- if isempty(pos) || isnan(pos)
- fprintf(fid, '\tn/a');
- else
- fprintf(fid, '\t%2.6f', pos);
- end
-end
\ No newline at end of file
diff --git a/src/matlab_scripts/hedtools_wrappers/runRemodel.m b/src/matlab_scripts/hedtools_wrappers/runRemodel.m
deleted file mode 100644
index 1659c25..0000000
--- a/src/matlab_scripts/hedtools_wrappers/runRemodel.m
+++ /dev/null
@@ -1,9 +0,0 @@
-function runRemodel(remodel_args)
-% Run the remodeling tools.
-%
-% Parameters:
-% remodel_args - cell array with remodeling arguments.
-%
- py.importlib.import_module('hed');
- py.hed.tools.remodeling.cli.run_remodel.main(remodel_args);
-
diff --git a/src/matlab_scripts/hedtools_wrappers/runRemodelBackup.m b/src/matlab_scripts/hedtools_wrappers/runRemodelBackup.m
deleted file mode 100644
index 73f8a8d..0000000
--- a/src/matlab_scripts/hedtools_wrappers/runRemodelBackup.m
+++ /dev/null
@@ -1,8 +0,0 @@
-function runRemodelBackup(backup_args)
-% Create a remodeling backup.
-%
-% Parameters:
-% backup_args - cell array with backup arguments.
-
- py.importlib.import_module('hed');
- py.hed.tools.remodeling.cli.run_remodel_backup.main(backup_args);
diff --git a/src/matlab_scripts/hedtools_wrappers/runRemodelRestore.m b/src/matlab_scripts/hedtools_wrappers/runRemodelRestore.m
deleted file mode 100644
index 90ba770..0000000
--- a/src/matlab_scripts/hedtools_wrappers/runRemodelRestore.m
+++ /dev/null
@@ -1,8 +0,0 @@
-function runRemodelRestore(restore_args)
-% Restore the specified remodeling backup.
-% -% Parameters: -% restore_args - cell array with restore arguments. - - py.importlib.import_module('hed'); - py.hed.tools.remodeling.cli.run_remodel_restore.main(restore_args); diff --git a/src/matlab_scripts/hedtools_wrappers/testBidsValidation.m b/src/matlab_scripts/hedtools_wrappers/testBidsValidation.m deleted file mode 100644 index 6793b3d..0000000 --- a/src/matlab_scripts/hedtools_wrappers/testBidsValidation.m +++ /dev/null @@ -1,9 +0,0 @@ -%% A test script for a wrapper function to validate HED in a BIDS dataset. - -dataPath = 'G:\eeg_ds003645s_hed\'; -issueString = validateHedInBids(dataPath); -if isempty(issueString) - fprintf('Dataset %s has no HED validation errors\n', dataPath); -else - fprintf('Validation errors for dataset %s:\n%s\n', dataPath, issueString); -end \ No newline at end of file diff --git a/src/matlab_scripts/hedtools_wrappers/testRemodel.m b/src/matlab_scripts/hedtools_wrappers/testRemodel.m deleted file mode 100644 index e4fca36..0000000 --- a/src/matlab_scripts/hedtools_wrappers/testRemodel.m +++ /dev/null @@ -1,17 +0,0 @@ -%% A test script for a wrapper function for run_remodel. 
-
-dataPath = 'G:\eeg_ds003645s_hed';
-
-%% Backup the data using default backup name (should only be done once)
-backup_args = {dataPath, '-x', 'stimuli', 'derivatives'};
-runRemodelBackup(backup_args);
-
-%% Run the remodeling file
-remodelFile = 'G:\summarize_hed_types_rmdl.json';
-dataPath = 'G:\eeg_ds003645s_hed';
-remodel_args = {dataPath, remodelFile, '-b', '-x', 'stimuli', 'derivatives'};
-runRemodel(remodel_args);
-
-%% Restore the data files to originals (usually does not have to be done)
-restore_args = {dataPath};
-runRemodelRestore(restore_args);
\ No newline at end of file
diff --git a/src/matlab_scripts/hedtools_wrappers/validateHedInBids.m b/src/matlab_scripts/hedtools_wrappers/validateHedInBids.m
deleted file mode 100644
index ff2c3ca..0000000
--- a/src/matlab_scripts/hedtools_wrappers/validateHedInBids.m
+++ /dev/null
@@ -1,14 +0,0 @@
-function issueString = validateHedInBids(dataPath)
-% Validate the HED annotations in a BIDS dataset
-%
-% Parameters:
-% dataPath - Full path to the root directory of a BIDS dataset.
-%
-% Returns:
-% issueString - A string with the validation issues suitable for
-% printing (has newlines).
-%
- py.importlib.import_module('hed');
- bids = py.hed.tools.BidsDataset(dataPath);
- issues = bids.validate();
- issueString = string(py.hed.get_printable_issue_string(issues));
diff --git a/src/matlab_scripts/utility_scripts/getFileList.m b/src/matlab_scripts/utility_scripts/getFileList.m
deleted file mode 100644
index 83e89f5..0000000
--- a/src/matlab_scripts/utility_scripts/getFileList.m
+++ /dev/null
@@ -1,79 +0,0 @@
-%% Create a list of the files in a dataset that match the specified criteria.
-
-function selectedList = getFileList(rootPath, namePrefix, nameSuffix, ...
- extensions, excludeDirs)
-%% Return a full path list of specified files in the rootPath directory tree.
-% -% Parameters: -% rootPath (char) full path of root of directory tree to search -% namePrefix (char) prefix of the filename or any if empty -% nameSuffix (char) suffix of the filename or any if empty -% extensions (cell array) names of extensions (with . included) -% excludeDirs (cell array) names of subdirectories to exclude -% -% Returns: -% selectedList (cell array) list of full paths of the files -% - - selectedList = {}; - dirList = {rootPath}; - while ~isempty(dirList) - thisDir = dirList{1}; - dirList = dirList(2:end); - fileList = dir(thisDir); - for k = 1:length(fileList) - thisFile = fileList(k); - if checkDirExclusions(thisFile, excludeDirs) - continue; - elseif fileList(k).isdir - dirList{end+1} = [fileList(k).folder filesep fileList(k).name]; %#ok - elseif ~checkFileExclusions(thisFile, namePrefix, ... - nameSuffix, extensions) - thisPath = [thisFile.folder filesep thisFile.name]; - selectedList{end+1} = thisPath; %#ok - end - end - end -end - - -function isExcluded = checkDirExclusions(thisFile, excludeDirs) -% Returns true if this file entry corresponds to an excluded directory - if ~thisFile.isdir - isExcluded = false; - elseif startsWith(thisFile.name, '.') - isExcluded = true; - else - isExcluded = false; - for k = 1:length(excludeDirs) - if startsWith(thisFile.name, excludeDirs{k}) - isExcluded = true; - break - end - end - - end -end - -function isExcluded = checkFileExclusions(thisFile, namePrefix, ... 
- nameSuffix, extensions)
-% Returns true if this file should be excluded based on its name or extension
- [~, theName, theExt] = fileparts(thisFile.name);
- if ~isempty(namePrefix) && ~startsWith(theName, namePrefix)
- isExcluded = true;
- elseif ~isempty(nameSuffix) && ~endsWith(theName, nameSuffix)
- isExcluded = true;
- elseif isempty(extensions)
- isExcluded = false;
- else
- isExcluded = true;
- for k = 1:length(extensions)
- if strcmpi(theExt, extensions{k})
- isExcluded = false;
- break
- end
- end
-
- end
-end
diff --git a/src/matlab_scripts/web_services/exampleGenerateSidecar.m b/src/matlab_scripts/web_services/exampleGenerateSidecar.m
deleted file mode 100644
index 94675a1..0000000
--- a/src/matlab_scripts/web_services/exampleGenerateSidecar.m
+++ /dev/null
@@ -1,12 +0,0 @@
-host = 'https://hedtools.ucsd.edu/hed';
-[servicesUrl, options] = getHostOptions(host); % Set up the options
-pathname = '../../../datasets/eeg_ds003645s_hed/sub-002/eeg/';
-filename = 'sub-002_task-FacePerception_run-1_events.tsv';
-eventsText = fileread([pathname filename]); % Read the data
-request = struct('service', 'events_generate_sidecar', ...
- 'events_string', eventsText);
-request.columns_categorical = {'event_type', 'face_type', 'rep_status'};
-request.columns_value = {'trial', 'rep_lag', 'stim_file'};
-response = webwrite(servicesUrl, request, options);
-response = jsondecode(response);
-outputReport(response, 'Example: generate a sidecar from an event file.');
diff --git a/src/matlab_scripts/web_services/getHostOptions.m b/src/matlab_scripts/web_services/getHostOptions.m
deleted file mode 100644
index 4e5ed53..0000000
--- a/src/matlab_scripts/web_services/getHostOptions.m
+++ /dev/null
@@ -1,12 +0,0 @@
-function [servicesUrl, options] = getHostOptions(host)
-%% Set the options associated with the services for host.
- csrfUrl = [host '/services']; - servicesUrl = [host '/services_submit']; - [cookie, csrftoken] = getSessionInfo(csrfUrl); - header = ["Content-Type" "application/json"; ... - "Accept" "application/json"; ... - "X-CSRFToken" csrftoken; "Cookie" cookie]; - - options = weboptions('MediaType', 'application/json', ... - 'Timeout', 120, 'HeaderFields', header); -end \ No newline at end of file diff --git a/src/matlab_scripts/web_services/getSessionInfo.m b/src/matlab_scripts/web_services/getSessionInfo.m deleted file mode 100644 index bdb3aa8..0000000 --- a/src/matlab_scripts/web_services/getSessionInfo.m +++ /dev/null @@ -1,19 +0,0 @@ -function [cookie, csrftoken] = getSessionInfo(csrf_url) -%% Setup the session for accessing the HED webservices -% Parameters: -% csrf_url = URL for the services -% -% Returns: -% cookie = a string cookie value -% csrftoken = a string csrf token for the session. -% - request = matlab.net.http.RequestMessage; - uri = matlab.net.URI(csrf_url); - response1 = send(request,uri); - cookies = response1.getFields('Set-Cookie'); - cookie = cookies.Value; - data = response1.Body.char; - csrfIdx = strfind(data,'csrf_token'); - tmp = data(csrfIdx(1)+length('csrf_token')+1:end); - csrftoken = regexp(tmp,'".*?"','match'); - csrftoken = string(csrftoken{1}(2:end-1)); diff --git a/src/matlab_scripts/web_services/getTestData.m b/src/matlab_scripts/web_services/getTestData.m deleted file mode 100644 index 6824341..0000000 --- a/src/matlab_scripts/web_services/getTestData.m +++ /dev/null @@ -1,41 +0,0 @@ -function data = getTestData() -%% Return the test data in a struct for running the services. - - %% Read the JSON sidecar into a string for all examples - data = struct('descPrefix', '', 'eventsText', '', ... - 'jsonBadText', '', 'jsonText', '', 'labelPrefix', '', ... - 'schemaUrl', '', 'schemaText', '', ... 
- 'spreadsheetText', '', 'spreadsheetTextInvalid', ''); - - remodelPath = '../../../datasets/eeg_ds003645s_hed_remodel/'; - libraryPath = '../../../datasets/eeg_ds003645s_hed_library/'; - data.jsonLibrary = fileread(... - [libraryPath 'task-FacePerception_events.json']); - data.jsonText = fileread(... - [remodelPath 'task-FacePerception_events.json']); - data.eventsText = fileread([remodelPath ... - 'sub-002/eeg/sub-002_task-FacePerception_run-1_events.tsv']); - data.remodel1Text = fileread(... - [remodelPath 'derivatives/remodel/remodeling_files/remove_extra_rmdl.json']); - data.remodel2Text = fileread(... - [remodelPath 'derivatives/remodel/remodeling_files/' ... - 'summarize_columns_rmdl.json']); - data.remodel3Text = fileread(... - [remodelPath 'derivatives/remodel/remodeling_files/' ... - 'summarize_hed_types_rmdl.json']); - data.jsonBadText = ... - fileread('../../data/bids_data/both_types_events_errors.json'); - data.labelPrefix = 'Property/Informational-property/Label/'; - data.descPrefix = 'Property/Informational-property/Description/'; - data.schemaText = fileread('../../data/schema_data/HED8.2.0.xml'); - data.schemaUrl = ['https://raw.githubusercontent.com/hed-standard/' ... - 'hed-schemas/master/standard_schema/hedxml/HED8.2.0.xml']; - data.spreadsheetText = ... - fileread('../../data/spreadsheet_data/LKTEventCodesHED3.tsv'); - data.spreadsheetTextExtracted = ... - fileread('../../data/bids_data/task-FacePerception_events_extracted.tsv'); - data.spreadsheetTextInvalid = ... 
- fileread('../../data/spreadsheet_data/LKTEventCodesHED2.tsv'); - data.goodStrings = {'Red,Blue', 'Green', 'White, (Black, Image)'}; - data.badStrings = {'Red, Blue, Blech', 'Green', 'White, Black, Binge'}; -end \ No newline at end of file diff --git a/src/matlab_scripts/web_services/outputReport.m b/src/matlab_scripts/web_services/outputReport.m deleted file mode 100644 index 2392e75..0000000 --- a/src/matlab_scripts/web_services/outputReport.m +++ /dev/null @@ -1,56 +0,0 @@ -function [] = outputReport(response, theTitle) - - fprintf('\nHED services report for %s\n', theTitle); - fprintf('Error report: [%s] %s\n', response.error_type, response.error_msg); - - %% Print out the results if available - if ~isfield(response, 'results') || isempty(response.results) - return - end - - results = response.results; - fprintf('[%s] status %s: %s\n', response.service, results.msg_category, results.msg); - if isfield(results, 'schema_version') - fprintf('HED version: %s\n', results.schema_version); - end - fprintf('\nReturn data for service %s [command: %s]:\n', ... - response.service, results.command); - data = results.data; - if ~iscell(data) - fprintf('%s\n', data); - else - for k = 1:length(data) - if ~isempty(data{k}) - fprintf('[%d]: %s\n', k, data{k}); - end - end - end - - %% Output the spreadsheet if available - if isfield(results, 'spreadsheet') - fprintf('\n----Spreadsheet result----\n'); - fprintf(results.spreadsheet); - end - - if isfield(results, 'definitions') && isstruct(results.definitions) - fprintf('\n\n----------definitions---------\n'); - defNames = fieldnames(results.definitions); - for k=1:length(defNames) - name = defNames{k}; - fprintf('\n%s: %s\n', defNames{k}, results.definitions.(name)); - end - end - - %% Output the file descriptions - if isfield(results, 'file_list') && ... - isstruct(results.file_list) - list = results.file_list; - fprintf('\n\n----------Summaries----------\n'); - for k = 1:length(list) - fprintf('\nFile:%s File type:%s\n', ... 
- list(k).file_name, list(k).file_type)
- fprintf('%s\n', list(k).content);
- end
- end
-
-end
\ No newline at end of file
diff --git a/src/matlab_scripts/web_services/runAllTests.m b/src/matlab_scripts/web_services/runAllTests.m
deleted file mode 100644
index ef6e732..0000000
--- a/src/matlab_scripts/web_services/runAllTests.m
+++ /dev/null
@@ -1,29 +0,0 @@
-host = 'https://hedtools.org/hed';
-%host = 'https://hedtools.org/hed_dev';
-host = 'http://127.0.0.1:5000';
-
-
-errorMap = containers.Map('KeyType', 'char', 'ValueType', 'any');
-errorMap('testGetServices') = testGetServices(host);
-errorMap('testEventServices') = testEventServices(host);
-errorMap('testEventSearchServices') = testEventSearchServices(host);
-errorMap('testEventRemodelingServices') = testEventRemodelingServices(host);
-errorMap('testSidecarServices') = testSidecarServices(host);
-errorMap('testSpreadsheetServices') = testSpreadsheetServices(host);
-errorMap('testStringServices') = testStringServices(host);
-errorMap('testLibraryServices') = testLibraryServices(host);
-
-%% Output the errors
-fprintf('\n\nOverall error report:\n');
-keys = errorMap.keys();
-for k = 1:length(keys)
- errors = errorMap(keys{k});
- if isempty(errors)
- fprintf('\t%s: no errors\n', keys{k});
- else
- fprintf('\t%s:\n', keys{k});
- for n = 1:length(errors)
- fprintf('\t\t%s\n', errors{n});
- end
- end
-end
diff --git a/src/matlab_scripts/web_services/runAssembleTest.m b/src/matlab_scripts/web_services/runAssembleTest.m
deleted file mode 100644
index 66d510b..0000000
--- a/src/matlab_scripts/web_services/runAssembleTest.m
+++ /dev/null
@@ -1,26 +0,0 @@
-%% Use this script to run an individual type of service.
-% host = 'https://hedtools.ucsd.edu/hed'; -host = 'http://127.0.0.1:5000/'; -%host = 'https://hedtools.ucsd.edu/hed_dev'; -[servicesUrl, options] = getHostOptions(host); -dataPath = 'D:/test1/'; -jsonText = fileread([dataPath 'events.json']); -eventsText = fileread([dataPath 'events.tsv']); -data = getTestData(); -errors = {}; -%% Example 4: Assemble valid event HED strings(expand defs on). -request4 = struct('service', 'events_assemble', ... - 'schema_version', '8.2.0', ... - 'sidecar_string', jsonText, ... - 'events_string', eventsText, ... - 'expand_defs', 'on'); -response4 = webwrite(servicesUrl, request4, options); -response4 = jsondecode(response4); -outputReport(response4, ... - 'Example 4 assembling HED annotations for events.'); -if ~isempty(response4.error_type) || ... - ~strcmpi(response4.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 4 failed to assemble events file with expand defs.'; -end - diff --git a/src/matlab_scripts/web_services/runTest.m b/src/matlab_scripts/web_services/runTest.m deleted file mode 100644 index fab3b87..0000000 --- a/src/matlab_scripts/web_services/runTest.m +++ /dev/null @@ -1,9 +0,0 @@ -%% Use this script to run an individual type of service. -% host = 'https://hedtools.org/hed'; -host = 'http://127.0.0.1:5000/'; -%host = 'https://hedtools.org/hed_dev'; -%errors = testLibraryServices(host); -%errors = testSpreadsheetServices(host); -%errors = testEventSearchServices(host); -%errors = testEventServices(host); -errors = testStringServices(host); \ No newline at end of file diff --git a/src/matlab_scripts/web_services/testEventRemodelingServices.m b/src/matlab_scripts/web_services/testEventRemodelingServices.m deleted file mode 100644 index 796d14e..0000000 --- a/src/matlab_scripts/web_services/testEventRemodelingServices.m +++ /dev/null @@ -1,55 +0,0 @@ -function errors = testEventRemodelingServices(host) -%% Shows how to call hed-services to remodel an events file. 
-%
-% Example 1: Remodel an events file with no summary or HED.
-%
-% Example 2: Remodel an events file with summary and no HED.
-%
-% Example 3: Remodel an events file with a HED type summary.
-
-%% Get the options and data
-[servicesUrl, options] = getHostOptions(host);
-data = getTestData();
-errors = {};
-
-%% Example 1: Remodel an events file with no summary or HED.
-request1 = struct('service', 'events_remodel', ...
- 'schema_version', '8.2.0', ...
- 'remodel_string', data.remodel1Text, ...
- 'events_string', data.eventsText);
-
-response1 = webwrite(servicesUrl, request1, options);
-response1 = jsondecode(response1);
-outputReport(response1, 'Example 1 Remodel by removing value and sample columns');
-if ~isempty(response1.error_type) || ...
- ~strcmpi(response1.results.msg_category, 'success')
- errors{end + 1} = 'Example 1 failed to execute the remodel.';
-end
-
-%% Example 2: Remodel an events file with summary and no HED.
-request2 = struct('service', 'events_remodel', ...
- 'schema_version', '8.2.0', ...
- 'remodel_string', data.remodel2Text, ...
- 'events_string', data.eventsText);
-
-response2 = webwrite(servicesUrl, request2, options);
-response2 = jsondecode(response2);
-outputReport(response2, 'Example 2 Remodel by summarizing columns');
-if ~isempty(response2.error_type) || ...
- ~strcmpi(response2.results.msg_category, 'success')
- errors{end + 1} = 'Example 2 failed to execute the remodel.';
-end
-
-%% Example 3: Summarize files including HED
-request3 = struct('service', 'events_remodel', ...
- 'schema_version', '8.2.0', ...
- 'remodel_string', data.remodel3Text, ...
- 'events_string', data.eventsText, ...
- 'sidecar_string', data.jsonText);
-
-response3 = webwrite(servicesUrl, request3, options);
-response3 = jsondecode(response3);
-outputReport(response3, 'Example 3 Remodel by summarizing HED types');
-if ~isempty(response3.error_type) || ...
-        ~strcmpi(response3.results.msg_category, 'success')
-    errors{end + 1} = 'Example 3 failed to remodel the events file.';
-end
-
diff --git a/src/matlab_scripts/web_services/testEventSearchServices.m b/src/matlab_scripts/web_services/testEventSearchServices.m
deleted file mode 100644
index f5dfb70..0000000
--- a/src/matlab_scripts/web_services/testEventSearchServices.m
+++ /dev/null
@@ -1,43 +0,0 @@
-function errors = testEventSearchServices(host)
-%% Shows how to call hed-services to search a BIDS events file.
-%
-% Example 1: Search an events file for HED using a valid query.
-%
-% Example 2: Search an events file for HED and return additional columns.
-
-%% Get the options and data
-[servicesUrl, options] = getHostOptions(host);
-data = getTestData();
-errors = {};
-
-%% Example 1: Search an events file for HED
-request1 = struct('service', 'events_search', ...
-    'schema_version', '8.2.0', ...
-    'sidecar_string', data.jsonText, ...
-    'events_string', data.eventsText, ...
-    'query', '{Intended-effect, Cue}');
-
-response1 = webwrite(servicesUrl, request1, options);
-response1 = jsondecode(response1);
-outputReport(response1, 'Example 1 Querying an events file');
-if ~isempty(response1.error_type) || ...
-        ~strcmpi(response1.results.msg_category, 'success')
-    errors{end + 1} = 'Example 1 failed to execute the search.';
-end
-
-%% Example 2: Search an events file for HED with extra columns returned
-request2 = struct('service', 'events_search', ...
-    'schema_version', '8.2.0', ...
-    'sidecar_string', data.jsonText, ...
-    'events_string', data.eventsText, ...
-    'columns_included', '', ...
-    'query', '{Intended-effect, Cue}');
-request2.columns_included = {'onset'};
-response2 = webwrite(servicesUrl, request2, options);
-response2 = jsondecode(response2);
-outputReport(response2, 'Example 2 Querying an events file with extra columns');
-if ~isempty(response2.error_type) || ...
-        ~strcmpi(response2.results.msg_category, 'success')
-    errors{end + 1} = 'Example 2 failed to execute the search.';
-end
-
diff --git a/src/matlab_scripts/web_services/testEventServices.m b/src/matlab_scripts/web_services/testEventServices.m
deleted file mode 100644
index f0e9905..0000000
--- a/src/matlab_scripts/web_services/testEventServices.m
+++ /dev/null
@@ -1,118 +0,0 @@
-function errors = testEventServices(host)
-
-%% Shows how to call hed-services to process a BIDS events file.
-%
-% Example 1: Validate valid events file using HED version.
-%
-% Example 2: Validate invalid events file using a HED URL.
-%
-% Example 3: Assemble valid event HED strings uploading HED schema.
-%
-% Example 4: Assemble valid event HED strings (def expand) using HED version.
-%
-% Example 5: Assemble valid event HED strings (no def expand) with extra columns.
-%
-% Example 6: Generate a JSON sidecar template from an events file.
-
-
-%% Get the options and data
-[servicesUrl, options] = getHostOptions(host);
-data = getTestData();
-errors = {};
-
-%% Example 1: Validate valid events file using HED version.
-request1 = struct('service', 'events_validate', ...
-    'schema_version', '8.2.0', ...
-    'sidecar_string', data.jsonText, ...
-    'events_string', data.eventsText, ...
-    'check_for_warnings', 'off');
-response1 = webwrite(servicesUrl, request1, options);
-response1 = jsondecode(response1);
-outputReport(response1, 'Example 1 validating a valid event file.');
-if ~isempty(response1.error_type) || ...
-        ~strcmpi(response1.results.msg_category, 'success')
-    errors{end + 1} = 'Example 1 failed to validate a correct event file.';
-end
-
-%% Example 2: Validate invalid events file using a HED URL.
-request2 = struct('service', 'events_validate', ...
-    'schema_url', data.schemaUrl, ...
-    'sidecar_string', data.jsonBadText, ...
-    'events_string', data.eventsText, ...
- 'check_for_warnings', 'off'); - -response2 = webwrite(servicesUrl, request2, options); -response2 = jsondecode(response2); -outputReport(response2, 'Example 2 validating events with invalid JSON.'); -if isempty(response2.error_type) && ... - ~isempty(response2.results.msg_category) && ... - strcmpi(response2.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 2 failed to detect event file validation errors.'; -end - -%% Example 3: Assemble valid events file uploading a HED schema -request3 = struct('service', 'events_assemble', ... - 'schema_string', data.schemaText, ... - 'sidecar_string', data.jsonText, ... - 'events_string', data.eventsText, ... - 'columns_included', '', ... - 'expand_defs', 'off'); -request3.columns_included = {'onset'}; -response3 = webwrite(servicesUrl, request3, options); -response3 = jsondecode(response3); -outputReport(response3, ... - 'Example 3 output for assembling valid events file'); -if ~isempty(response3.error_type) || ... - ~strcmpi(response3.results.msg_category, 'success') - errors{end + 1} = 'Example 3 failed to assemble a correct events file.'; -end - -%% Example 4: Assemble valid event HED strings(expand defs on). -request4 = struct('service', 'events_assemble', ... - 'schema_version', '8.2.0', ... - 'sidecar_string', data.jsonText, ... - 'events_string', data.eventsText, ... - 'expand_defs', 'on'); -response4 = webwrite(servicesUrl, request4, options); -response4 = jsondecode(response4); -outputReport(response4, ... - 'Example 4 assembling HED annotations for events.'); -if ~isempty(response4.error_type) || ... - ~strcmpi(response4.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 4 failed to assemble events file with expand defs.'; -end - -%% Example 5: Assemble valid event HED strings with additional columns. -columns_included = {'onset', 'face_type', 'rep_status'}; -request5 = struct('service', 'events_assemble', ... - 'schema_version', '8.2.0', ... - 'sidecar_string', data.jsonText, ... 
- 'events_string', data.eventsText, ... - 'columns_included', '', ... - 'expand_defs', 'off'); -request5.columns_included = columns_included; -response5 = webwrite(servicesUrl, request5, options); -response5 = jsondecode(response5); -outputReport(response5, ... - 'Example 5 assembling HED with extra columns for events.'); -if ~isempty(response5.error_type) || ... - ~strcmpi(response5.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 5 failed to assemble file with extra columns.'; -end - -%% Example 6: Generate a sidecar template from an events file. -request6 = struct('service', 'events_generate_sidecar', ... - 'events_string', data.eventsText); -request6.columns_categorical = {'event_type', 'face_type', 'rep_status'}; -request6.columns_value = {'trial', 'rep_lag', 'stim_file'}; -response6 = webwrite(servicesUrl, request6, options); -response6 = jsondecode(response6); -outputReport(response6, ... - 'Example 6 generate a sidecar from an event file.'); -if ~isempty(response6.error_type) || ... 
- ~strcmpi(response6.results.msg_category, 'success') - errors{end + 1} = 'Example 6 failed to generate a sidecar correctly.'; -end diff --git a/src/matlab_scripts/web_services/testGetServices.m b/src/matlab_scripts/web_services/testGetServices.m deleted file mode 100644 index df5b479..0000000 --- a/src/matlab_scripts/web_services/testGetServices.m +++ /dev/null @@ -1,22 +0,0 @@ -function errors = testGetServices(host) -%% Get the options and data -[servicesUrl, options] = getHostOptions(host); -errors = {}; - -%% Send the request and get the response -request = struct('service', 'get_services'); -response = webwrite(servicesUrl, request, options); -response = jsondecode(response); -fprintf('Error report: [%s] %s\n', response.error_type, response.error_msg); - -%% Print out the results if available -if isfield(response, 'results') && ~isempty(response.results) - results = response.results; - fprintf('[%s] status %s: %s\n', response.service, results.msg_category, results.msg); - fprintf('Return data:\n%s\n', results.data); -end - -if ~isempty(response.error_type) || ... - ~strcmpi(response.results.msg_category, 'success') - errors{end + 1} = 'Get services failed to return services.'; -end \ No newline at end of file diff --git a/src/matlab_scripts/web_services/testLibraryServices.m b/src/matlab_scripts/web_services/testLibraryServices.m deleted file mode 100644 index c48c4e3..0000000 --- a/src/matlab_scripts/web_services/testLibraryServices.m +++ /dev/null @@ -1,61 +0,0 @@ -function errors = testLibraryServices(host) - -%% Shows how to call hed-services using libraries. -% -% Example 1: Validate valid events file using HED version list. -% Example 2: Validate valid events file using HED version list needed libraries. -% Example 3: Validate events file invalid because of missing library. 
-
-%% Get the options and data
-[servicesUrl, options] = getHostOptions(host);
-data = getTestData();
-errors = {};
-
-%% Example 1: Validate valid events file using HED versions with no library tags.
-request1 = struct('service', 'events_validate', ...
-    'schema_version', '', ...
-    'sidecar_string', data.jsonText, ...
-    'events_string', data.eventsText, ...
-    'check_for_warnings', 'off');
-request1.schema_version = ...
-    {'8.2.0', 'sc:score_1.0.0', 'test:testlib_1.0.2'};
-response1 = webwrite(servicesUrl, request1, options);
-response1 = jsondecode(response1);
-outputReport(response1, 'Example 1 validating a valid event file.');
-if ~isempty(response1.error_type) || ...
-        ~strcmpi(response1.results.msg_category, 'success')
-    errors{end + 1} = 'Example 1 failed to validate a correct event file.';
-end
-
-%% Example 2: Validate valid events file using library tags.
-request2 = struct('service', 'events_validate', ...
-    'schema_version', '', ...
-    'sidecar_string', data.jsonLibrary, ...
-    'events_string', data.eventsText, ...
-    'check_for_warnings', 'off');
-request2.schema_version = ...
-    {'8.2.0', 'sc:score_1.0.0', 'test:testlib_1.0.2'};
-response2 = webwrite(servicesUrl, request2, options);
-response2 = jsondecode(response2);
-outputReport(response2, 'Example 2 validating a valid event file with libraries.');
-if ~isempty(response2.error_type) || ...
-        ~strcmpi(response2.results.msg_category, 'success')
-    errors{end + 1} = 'Example 2 failed to validate a valid event file with libraries.';
-end
-
-%% Example 3: Validate invalid events file because of missing libraries.
-request3 = struct('service', 'events_validate', ...
-    'schema_version', '8.2.0', ...
-    'sidecar_string', data.jsonLibrary, ...
-    'events_string', data.eventsText, ...
-    'check_for_warnings', 'off');
-
-response3 = webwrite(servicesUrl, request3, options);
-response3 = jsondecode(response3);
-outputReport(response3, 'Example 3 validating events with missing library.');
-if isempty(response3.error_type) && ...
- ~isempty(response3.results.msg_category) && ... - strcmpi(response3.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 3 failed to detect event file validation errors missing library.'; -end diff --git a/src/matlab_scripts/web_services/testSidecarServices.m b/src/matlab_scripts/web_services/testSidecarServices.m deleted file mode 100644 index 55874f3..0000000 --- a/src/matlab_scripts/web_services/testSidecarServices.m +++ /dev/null @@ -1,102 +0,0 @@ -function errors = testSidecarServices(host) -%% Shows how to call hed-services to process a BIDS JSON sidecar. -% -% Example 1: Validate valid JSON sidecar using a HED version. -% -% Example 2: Validate invalid JSON sidecar using HED URL. -% -% Example 3: Convert valid JSON sidecar to long uploading HED schema. -% -% Example 4: Convert valid JSON sidecar to short using a HED version. -% -% Example 5: Extract a 4-column spreadsheet from a valid JSON sidecar. -% -% Example 6: Merge a 4-column spreadsheet with a JSON sidecar. - -%% Get the options and data -[servicesUrl, options] = getHostOptions(host); -data = getTestData(); -errors = {}; - -%% Example 1: Validate valid JSON sidecar using a HED version. -request1 = struct('service', 'sidecar_validate', ... - 'schema_version', '8.2.0', ... - 'sidecar_string', data.jsonText, ... - 'check_for_warnings', 'on'); -response1 = webwrite(servicesUrl, request1, options); -response1 = jsondecode(response1); -outputReport(response1, 'Example 1 validate a valid JSON sidecar.'); -if ~isempty(response1.error_type) || ... - ~strcmpi(response1.results.msg_category, 'success') - errors{end + 1} = 'Example 1 failed to validate a correct JSON file.'; -end - -%% Example 2: Validate invalid JSON sidecar using HED URL. -request2 = struct('service', 'sidecar_validate', ... - 'sidecar_string', data.jsonBadText, ... - 'schema_url', data.schemaUrl, ... 
- 'check_for_warnings', 'on'); -response2 = webwrite(servicesUrl, request2, options); -response2 = jsondecode(response2); -outputReport(response2, 'Example 2 validate an invalid JSON sidecar.'); -if isempty(response2.error_type) && ... - ~isempty(response2.results.msg_category) && ... - strcmpi(response2.results.msg_category, 'success') - errors{end + 1} = 'Example 2 failed to detect an incorrect JSON file.'; -end - -%% Example 3: Convert valid JSON sidecar to long uploading HED schema. -request3 = struct('service', 'sidecar_to_long', ... - 'schema_string', data.schemaText, ... - 'sidecar_string', data.jsonText, ... - 'expand_defs', 'off'); - -response3 = webwrite(servicesUrl, request3, options); -response3 = jsondecode(response3); -outputReport(response3, 'Example 3 convert a JSON sidecar to long form.'); -if ~isempty(response3.error_type) || ... - ~strcmpi(response3.results.msg_category, 'success') - errors{end + 1} = 'Example 3 failed to convert a valid JSON to long.'; -end - -%% Example 4: Convert valid JSON sidecar to short using a HED version.. -request4 = struct('service', 'sidecar_to_short', ... - 'schema_version', '8.2.0', ... - 'sidecar_string', data.jsonText, ... - 'expand_defs', 'on'); -response4 = webwrite(servicesUrl, request4, options); -response4 = jsondecode(response4); -outputReport(response4, 'Example 4 convert a JSON sidecar to short form.'); -if ~isempty(response4.error_type) || ... - ~strcmpi(response4.results.msg_category, 'success') - errors{end + 1} = 'Example 4 failed to convert a valid JSON to short.'; -end - -%% Example 5: Extract a 4-column spreadsheet from a JSON sidecar. -request5 = struct('service', 'sidecar_extract_spreadsheet', ... - 'sidecar_string', data.jsonText); -response5 = webwrite(servicesUrl, request5, options); -response5 = jsondecode(response5); -outputReport(response5, ... - 'Example 5 extract 4-column spreadsheet from a JSON sidecar.'); -if ~isempty(response5.error_type) || ... 
- ~strcmpi(response5.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 5 failed to convert JSON to 4-column spreadsheet.'; -end - -%% Example 6: Merge a 4-column spreadsheet with a JSON sidecar. -request6 = struct('service', 'sidecar_merge_spreadsheet', ... - 'sidecar_string', '{}', 'has_column_names', 'on', ... - 'spreadsheet_string', ''); -request6.spreadsheet_string = data.spreadsheetTextExtracted; -response6 = webwrite(servicesUrl, request6, options); -response6 = jsondecode(response6); -outputReport(response6, ... - 'Example 6 merge a 4-column spreadsheet with a JSON sidecar.'); -if ~isempty(response6.error_type) || ... - ~strcmpi(response6.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 6 failed to merge 4-column spreadsheet with JSON.'; -end - \ No newline at end of file diff --git a/src/matlab_scripts/web_services/testSpreadsheetServices.m b/src/matlab_scripts/web_services/testSpreadsheetServices.m deleted file mode 100644 index 2a1a1db..0000000 --- a/src/matlab_scripts/web_services/testSpreadsheetServices.m +++ /dev/null @@ -1,77 +0,0 @@ -function errors = testSpreadsheetServices(host) -%% Shows how to call hed-services to process a spreadsheet of event tags. -% -% Example 1: Validate valid spreadsheet file using schema version. -% -% Example 2: Validate invalid spreadsheet file using HED URL. -% -% Example 3: Convert valid spreadsheet file to long uploading HED schema. -% -% Example 4: Convert valid spreadsheet file to short using HED version. -% -%% Get the options and data -[servicesUrl, options] = getHostOptions(host); -data = getTestData(); -errors = {}; - -%% Example 1: Validate valid spreadsheet file using schema version. -request1 = struct('service', 'spreadsheet_validate', ... - 'schema_version', '8.2.0', ... - 'spreadsheet_string', data.spreadsheetText, ... - 'check_for_warnings', 'on', ... 
- 'column_4_check', 'on'); -response1 = webwrite(servicesUrl, request1, options); -response1 = jsondecode(response1); -outputReport(response1, 'Example 1 validate a valid spreadsheet'); - -if ~isempty(response1.error_type) || ... - ~strcmpi(response1.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 1 failed to validate a correct spreadsheet file.'; -end - -%% Example 2: Validate invalid spreadsheet file using HED URL. -request2 = struct('service', 'spreadsheet_validate', ... - 'schema_url', data.schemaUrl, ... - 'spreadsheet_string', data.spreadsheetTextInvalid, ... - 'column_4_check', 'on'); -response2 = webwrite(servicesUrl, request2, options); -response2 = jsondecode(response2); -outputReport(response2, 'Example 2 validate an invalid spreadsheet'); -if isempty(response2.error_type) && ... - ~isempty(response2.results.msg_category) && ... - strcmpi(response2.results.msg_category, 'success') - errors{end + 1} = ... - 'Example 2 failed to detect an incorrect spreadsheet file.'; -end - -%% Example 3: Convert valid spreadsheet file to long uploading HED schema. -request3 = struct('service', 'spreadsheet_to_long', ... - 'schema_string', data.schemaText, ... - 'spreadsheet_string', data.spreadsheetText, ... - 'expand_defs', 'on', ... - 'column_4_check', 'on'); -response3 = webwrite(servicesUrl, request3, options); -response3 = jsondecode(response3); -outputReport(response3, 'Example 3 convert a spreadsheet to long form'); -if ~isempty(response3.error_type) || ... - ~strcmpi(response3.results.msg_category, 'success') - errors{end + 1} = ... -'Example 3 failed to convert a spreadsheet file to long.'; -end - -%% Example 4: Convert valid spreadsheet file to short using uploaded HED. -request4 = struct('service', 'spreadsheet_to_short', ... - 'schema_string', data.schemaText, ... - 'spreadsheet_string', data.spreadsheetText, ... - 'expand_defs', 'on', ... 
-    'column_4_check', 'on');
-response4 = webwrite(servicesUrl, request4, options);
-response4 = jsondecode(response4);
-outputReport(response4, 'Example 4 convert a spreadsheet to short form');
-if ~isempty(response4.error_type) || ...
-        ~strcmpi(response4.results.msg_category, 'success')
-    errors{end + 1} = ...
-        'Example 4 failed to convert a spreadsheet file to short.';
-end
-
diff --git a/src/matlab_scripts/web_services/testStringServices.m b/src/matlab_scripts/web_services/testStringServices.m
deleted file mode 100644
index c782524..0000000
--- a/src/matlab_scripts/web_services/testStringServices.m
+++ /dev/null
@@ -1,77 +0,0 @@
-function errors = testStringServices(host)
-%% Shows how to call hed-services to process a list of HED strings.
-%
-% Example 1: Validate valid list of strings using HED version.
-%
-% Example 2: Validate invalid list of strings using HED URL.
-%
-% Example 3: Validate invalid list of strings uploading HED schema.
-%
-% Example 4: Convert valid strings to long using HED version.
-%
-
-%% Get the options and data
-[servicesUrl, options] = getHostOptions(host);
-data = getTestData();
-errors = {};
-
-
-%% Example 1: Validate valid list of strings using HED version.
-request1 = struct('service', 'strings_validate', ...
-    'schema_version', '8.2.0', ...
-    'string_list', '', ...
-    'check_for_warnings', 'on');
-request1.string_list = data.goodStrings;
-response1 = webwrite(servicesUrl, request1, options);
-response1 = jsondecode(response1);
-outputReport(response1, 'Example 1 Validating a valid list of strings');
-if ~isempty(response1.error_type) || ...
-        ~strcmpi(response1.results.msg_category, 'success')
-    errors{end + 1} = 'Example 1 failed to validate valid HED strings.';
-end
-
-%% Example 2: Validate a list of invalid strings. HED schema is URL.
-request2 = struct('service', 'strings_validate', ...
-    'schema_url', '', ...
-    'string_list', '', ...
- 'check_for_warnings', 'on'); -request2.string_list = data.badStrings; -request2.schema_url = data.schemaUrl; -response2 = webwrite(servicesUrl, request2, options); -response2 = jsondecode(response2); -outputReport(response2, ... - 'Example 2 validating a list of strings with invalid values'); -if isempty(response2.error_type) && ... - strcmpi(response2.results.msg_category, 'success') - errors{end + 1} = 'Example 2 failed to detect invalid HED strings.'; -end - -%% Example 3: Validate list of invalid strings uploading HED schema. -request3 = struct('service', 'strings_validate', ... - 'schema_string', data.schemaText, ... - 'string_list', '', ... - 'check_for_warnings', 'on'); -request3.string_list = data.badStrings; -response3 = webwrite(servicesUrl, request3, options); -response3 = jsondecode(response3); -outputReport(response3, ... - 'Example 3 validate invalid strings using an uploaded HED schema'); -if ~isempty(response3.error_type) || ... - ~strcmpi(response3.results.msg_category, 'warning') - errors{end + 1} = 'Example 3 failed to detect invalid HED strings.'; -end - -%% Example 4: Convert valid strings to long using HED version. -request4 = struct('service', 'strings_to_long', ... - 'schema_version', '8.2.0', ... - 'string_list', ''); -request4.string_list = data.goodStrings; -response4 = webwrite(servicesUrl, request4, options); -response4 = jsondecode(response4); -outputReport(response4, ... - 'Example 4 Convert a list of valid strings to long'); -if ~isempty(response4.error_type) || ... - ~strcmpi(response4.results.msg_category, 'success') - errors{end + 1} = 'Example 4 failed to convert HED strings for long.'; -end -