From 13153334da6fea2717200a3840a496761140ecfe Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Fri, 4 Oct 2024 10:02:19 +0100 Subject: [PATCH 1/7] add word delimiter graph token filter docs #8454 Signed-off-by: Anton Rubin --- _analyzers/token-filters/index.md | 2 +- .../token-filters/word-delimiter-graph.md | 125 ++++++++++++++++++ 2 files changed, 126 insertions(+), 1 deletion(-) create mode 100644 _analyzers/token-filters/word-delimiter-graph.md diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index a9b621d5ab..0b4f78cc03 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -62,4 +62,4 @@ Normalization | `arabic_normalization`: [ArabicNormalizer](https://lucene.apache `unique` | N/A | Ensures each token is unique by removing duplicate tokens from a stream. `uppercase` | [UpperCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to uppercase. `word_delimiter` | [WordDelimiterFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. -`word_delimiter_graph` | [WordDelimiterGraphFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterGraphFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns multi-position tokens a `positionLength` attribute. +[`word_delimiter_graph`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/word-delimiter-graph/) | [WordDelimiterGraphFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterGraphFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns multi-position tokens a `positionLength` attribute. diff --git a/_analyzers/token-filters/word-delimiter-graph.md b/_analyzers/token-filters/word-delimiter-graph.md new file mode 100644 index 0000000000..9a5472c616 --- /dev/null +++ b/_analyzers/token-filters/word-delimiter-graph.md @@ -0,0 +1,125 @@ +--- +layout: default +title: Word delimiter graph +parent: Token filters +nav_order: 480 +--- + +# Word delimiter graph token filter + +The `word_delimiter_graph` token filter is used to split tokens at predefined characters, while also offering optional token normalization based on customizable rules. + +It's important **not** to use tokenizers that strip punctuation, like the `standard` tokenizer, with this filter. Doing so may prevent proper token splitting and interfere with options like `catenate_all` or `preserve_original`. Instead, it's recommended to use the `keyword` or `whitespace` tokenizer. +{: .note} + +## Parameters + +You can configure the `word_delimiter_graph` token filter using the following parameters: + +- `adjust_offsets`: Adjusts the token offsets for better accuracy. If your analyzer uses filters that change the length of tokens without changing their offsets, such as `trim`, setting this parameter to `false` is recommended. Default is `true`. (Boolean, _Optional_) + +- `catenate_all`: Produces concatenated tokens from a sequence of alphanumeric parts. For example, `"quick-fast-200"` becomes `[ quickfast200, quick, fast, 200 ]`. Default is `false`. 
(Boolean, _Optional_) + +- `catenate_numbers`: Combines numerical sequences, such as `"10-20-30"` turning into `[ 102030, 10, 20, 30 ]`. Default is `false`. (Boolean, _Optional_) + +- `catenate_words`: Concatenates alphabetic words. For example `"high-speed-level"` becomes `[ highspeedlevel, high, speed, level ]`. Default is `false`. (Boolean, _Optional_) + +- `generate_number_parts`: Controls whether numeric tokens are generated separately. Default is `true`. (Boolean, _Optional_) + +- `generate_word_parts`: Specifies whether alphabetical tokens should be generated. Default is `true`. (Boolean, _Optional_) + +- `ignore_keywords`: Skips over tokens marked as keywords. Default is `false`. (Boolean, _Optional_) + +- `preserve_original`: Keeps the original, unsplit token alongside the generated tokens. For example `"auto-drive-300"` will result in `[ auto-drive-300, auto, drive, 300 ]`. Default is `false`. (Boolean, _Optional_) + +- `protected_words`: Specifies tokens that the filter should not split. (Array, _Optional_) + +- `protected_words_path`: Specifies a path (absolute or relating to config directory) to a file containing tokens separated by new line which should not be split. (string, _Optional_) + +- `split_on_case_change`: Splits tokens when there is a transition between lowercase and uppercase letters. Default is `true`. (Boolean, _Optional_) + +- `split_on_numerics`: Splits tokens where letters and numbers meet. For example `"v8engine"` will become `[ v, 8, engine ]`. Default is `true`. (Boolean, _Optional_) + +- `stem_english_possessive`: Removes English possessive endings such as `"'s."` Default is `true`. (Boolean, _Optional_) + +- `type_table`: Custom mappings can be provided for characters to treat them as alphanumeric or numeric, which avoids unwanted splitting. For example: `["- => ALPHA"]`. 
(Array of strings, _Optional_) + + +## Example + +The following example request creates a new index named `my-custom-index` and configures an analyzer with `word_delimiter_graph` filter: + +```json +PUT /my-custom-index +{ + "settings": { + "analysis": { + "analyzer": { + "custom_analyzer": { + "tokenizer": "keyword", + "filter": [ "custom_word_delimiter_filter" ] + } + }, + "filter": { + "custom_word_delimiter_filter": { + "type": "word_delimiter_graph", + "split_on_case_change": true, + "split_on_numerics": true, + "stem_english_possessive": true + } + } + } + } +} +``` +{% include copy-curl.html %} + +## Generated tokens + +Use the following request to examine the tokens generated using the analyzer: + +```json +GET /my-custom-index/_analyze +{ + "analyzer": "custom_analyzer", + "text": "FastCar's Model2023" +} +``` +{% include copy-curl.html %} + +The response contains the generated tokens: + +```json +{ + "tokens": [ + { + "token": "Fast", + "start_offset": 0, + "end_offset": 4, + "type": "word", + "position": 0 + }, + { + "token": "Car", + "start_offset": 4, + "end_offset": 7, + "type": "word", + "position": 1 + }, + { + "token": "Model", + "start_offset": 10, + "end_offset": 15, + "type": "word", + "position": 2 + }, + { + "token": "2023", + "start_offset": 15, + "end_offset": 19, + "type": "word", + "position": 3 + } + ] +} +``` From f2372f630571a91ce071cc9152e77d96972a53c5 Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Wed, 16 Oct 2024 17:33:45 +0100 Subject: [PATCH 2/7] updating parameter table Signed-off-by: Anton Rubin --- .../token-filters/word-delimiter-graph.md | 47 +++++++------------ 1 file changed, 18 insertions(+), 29 deletions(-) diff --git a/_analyzers/token-filters/word-delimiter-graph.md b/_analyzers/token-filters/word-delimiter-graph.md index 9a5472c616..eaff1368d1 100644 --- a/_analyzers/token-filters/word-delimiter-graph.md +++ b/_analyzers/token-filters/word-delimiter-graph.md @@ -14,35 +14,24 @@ It's important **not** to use tokenizers that strip punctuation, like the `stand ## Parameters -You can configure the `word_delimiter_graph` token filter using the following parameters: - -- `adjust_offsets`: Adjusts the token offsets for better accuracy. If your analyzer uses filters that change the length of tokens without changing their offsets, such as `trim`, setting this parameter to `false` is recommended. Default is `true`. (Boolean, _Optional_) - -- `catenate_all`: Produces concatenated tokens from a sequence of alphanumeric parts. For example, `"quick-fast-200"` becomes `[ quickfast200, quick, fast, 200 ]`. Default is `false`. (Boolean, _Optional_) - -- `catenate_numbers`: Combines numerical sequences, such as `"10-20-30"` turning into `[ 102030, 10, 20, 30 ]`. Default is `false`. (Boolean, _Optional_) - -- `catenate_words`: Concatenates alphabetic words. For example `"high-speed-level"` becomes `[ highspeedlevel, high, speed, level ]`. Default is `false`. (Boolean, _Optional_) - -- `generate_number_parts`: Controls whether numeric tokens are generated separately. Default is `true`. (Boolean, _Optional_) - -- `generate_word_parts`: Specifies whether alphabetical tokens should be generated. Default is `true`. (Boolean, _Optional_) - -- `ignore_keywords`: Skips over tokens marked as keywords. Default is `false`. (Boolean, _Optional_) - -- `preserve_original`: Keeps the original, unsplit token alongside the generated tokens. For example `"auto-drive-300"` will result in `[ auto-drive-300, auto, drive, 300 ]`. Default is `false`. 
(Boolean, _Optional_) - -- `protected_words`: Specifies tokens that the filter should not split. (Array, _Optional_) - -- `protected_words_path`: Specifies a path (absolute or relating to config directory) to a file containing tokens separated by new line which should not be split. (string, _Optional_) - -- `split_on_case_change`: Splits tokens when there is a transition between lowercase and uppercase letters. Default is `true`. (Boolean, _Optional_) - -- `split_on_numerics`: Splits tokens where letters and numbers meet. For example `"v8engine"` will become `[ v, 8, engine ]`. Default is `true`. (Boolean, _Optional_) - -- `stem_english_possessive`: Removes English possessive endings such as `"'s."` Default is `true`. (Boolean, _Optional_) - -- `type_table`: Custom mappings can be provided for characters to treat them as alphanumeric or numeric, which avoids unwanted splitting. For example: `["- => ALPHA"]`. (Array of strings, _Optional_) +You can configure the `word_delimiter_graph` token filter using the following parameters. + +Parameter | Required/Optional | Data type | Description +:--- | :--- | :--- | :--- +`adjust_offsets` | Optional | Boolean | Adjusts the token offsets for better accuracy. If your analyzer uses filters that change the length of tokens without changing their offsets, such as `trim`, setting this parameter to `false` is recommended. Default is `true`. +`catenate_all` | Optional | Boolean | Produces concatenated tokens from a sequence of alphanumeric parts. For example, `"quick-fast-200"` becomes `[ quickfast200, quick, fast, 200 ]`. Default is `false`. +`catenate_numbers` | Optional | Boolean | Combines numerical sequences, such as `"10-20-30"` turning into `[ 102030, 10, 20, 30 ]`. Default is `false`. +`catenate_words` | Optional | Boolean | Concatenates alphabetic words. For example `"high-speed-level"` becomes `[ highspeedlevel, high, speed, level ]`. Default is `false`. +`generate_number_parts` | Optional | Boolean | Controls whether numeric tokens are generated separately. Default is `true`. +`generate_word_parts` | Optional | Boolean | Specifies whether alphabetical tokens should be generated. Default is `true`. +`ignore_keywords` | Optional | Boolean | Skips over tokens marked as keywords. Default is `false`. +`preserve_original` | Optional | Boolean | Keeps the original, unsplit token alongside the generated tokens. For example `"auto-drive-300"` will result in `[ auto-drive-300, auto, drive, 300 ]`. Default is `false`. +`protected_words` | Optional | Array of strings | Specifies tokens that the filter should not split. +`protected_words_path` | Optional | String | Specifies a path (absolute or relating to config directory) to a file containing tokens separated by new line which should not be split. +`split_on_case_change` | Optional | Boolean | Splits tokens when there is a transition between lowercase and uppercase letters. Default is `true`. +`split_on_numerics` | Optional | Boolean | Splits tokens where letters and numbers meet. For example `"v8engine"` will become `[ v, 8, engine ]`. Default is `true`. +`stem_english_possessive` | Optional | Boolean | Removes English possessive endings such as `"'s."` Default is `true`. +`type_table` | Optional | Array of strings | Custom mappings can be provided for characters to treat them as alphanumeric or numeric, which avoids unwanted splitting. For example: `["- => ALPHA"]`. 
## Example From c8351409d53709c58a14516aa1fe856925955a5d Mon Sep 17 00:00:00 2001 From: Fanit Kolchina Date: Thu, 21 Nov 2024 17:20:14 -0500 Subject: [PATCH 3/7] Doc review Signed-off-by: Fanit Kolchina --- .../token-filters/word-delimiter-graph.md | 82 +++++++++++++++---- 1 file changed, 66 insertions(+), 16 deletions(-) diff --git a/_analyzers/token-filters/word-delimiter-graph.md b/_analyzers/token-filters/word-delimiter-graph.md index eaff1368d1..ff6a6888ad 100644 --- a/_analyzers/token-filters/word-delimiter-graph.md +++ b/_analyzers/token-filters/word-delimiter-graph.md @@ -7,32 +7,45 @@ nav_order: 480 # Word delimiter graph token filter -The `word_delimiter_graph` token filter is used to split tokens at predefined characters, while also offering optional token normalization based on customizable rules. +The `word_delimiter_graph` token filter is used to split tokens at predefined characters while also offering optional token normalization based on customizable rules. -It's important **not** to use tokenizers that strip punctuation, like the `standard` tokenizer, with this filter. Doing so may prevent proper token splitting and interfere with options like `catenate_all` or `preserve_original`. Instead, it's recommended to use the `keyword` or `whitespace` tokenizer. +The `word_delimiter_graph` filter is intended for removing punctuation from complex identifiers like part numbers or product IDs. For such cases, it is best used with the `keyword` tokenizer. For hyphenated words, use the `synonym_graph` token filter instead of the `word_delimiter_graph` filter because users frequently search for these terms both with and without hyphens. {: .note} +By default, the filter applies the following rules. + +| Action | Description | Input | Output | +|:---|:---|:---|:---| +| Split tokens at non-alphanumeric characters | Non-alphanumeric characters are treated as delimiters. | `ultra-fast` | `ultra`, `fast` | +| Remove leading or trailing delimiters | Removes delimiters at the start or end of tokens. | `Z99++'Decoder'`| `Z99`, `Decoder` | +| Split tokens at letter case transitions | Splits tokens when there is a transition between uppercase and lowercase letters. | `OpenSearch` | `Open`, `Search` | +| Split tokens at letter-number transitions | Splits tokens when there is a transition between letters and numbers. | `T1000` | `T`, `1000` | +| Remove the English possessive ('s) | Removes the possessive ('s) from the end of tokens. | `John's` | `John` | + +It's important **not** to use tokenizers that strip punctuation, like the `standard` tokenizer, with this filter. Doing so may prevent proper token splitting and interfere with options like `catenate_all` or `preserve_original`. We recommend using this filter with a `keyword` or `whitespace` tokenizer. +{: .important} + ## Parameters You can configure the `word_delimiter_graph` token filter using the following parameters. Parameter | Required/Optional | Data type | Description :--- | :--- | :--- | :--- -`adjust_offsets` | Optional | Boolean | Adjusts the token offsets for better accuracy. If your analyzer uses filters that change the length of tokens without changing their offsets, such as `trim`, setting this parameter to `false` is recommended. Default is `true`. +`adjust_offsets` | Optional | Boolean | Determines whether the token offsets should be recalculated for split or concatenated tokens. When `true`, the filter adjusts the token offsets to accurately represent the token's position within the token stream. 
This adjustment ensures that the token's location in the text aligns with its modified form after processing, which is particularly useful for applications like highlighting or phrase queries. When `false`, the offsets remain unchanged, which may result in misalignment when the processed tokens are mapped back to their positions in the original text. If your analyzer uses filters like `trim` that change the token lengths without changing their offsets, we recommend setting this parameter to `false`. Default is `true`. `catenate_all` | Optional | Boolean | Produces concatenated tokens from a sequence of alphanumeric parts. For example, `"quick-fast-200"` becomes `[ quickfast200, quick, fast, 200 ]`. Default is `false`. -`catenate_numbers` | Optional | Boolean | Combines numerical sequences, such as `"10-20-30"` turning into `[ 102030, 10, 20, 30 ]`. Default is `false`. -`catenate_words` | Optional | Boolean | Concatenates alphabetic words. For example `"high-speed-level"` becomes `[ highspeedlevel, high, speed, level ]`. Default is `false`. -`generate_number_parts` | Optional | Boolean | Controls whether numeric tokens are generated separately. Default is `true`. -`generate_word_parts` | Optional | Boolean | Specifies whether alphabetical tokens should be generated. Default is `true`. -`ignore_keywords` | Optional | Boolean | Skips over tokens marked as keywords. Default is `false`. -`preserve_original` | Optional | Boolean | Keeps the original, unsplit token alongside the generated tokens. For example `"auto-drive-300"` will result in `[ auto-drive-300, auto, drive, 300 ]`. Default is `false`. -`protected_words` | Optional | Array of strings | Specifies tokens that the filter should not split. -`protected_words_path` | Optional | String | Specifies a path (absolute or relating to config directory) to a file containing tokens separated by new line which should not be split. -`split_on_case_change` | Optional | Boolean | Splits tokens when there is a transition between lowercase and uppercase letters. Default is `true`. -`split_on_numerics` | Optional | Boolean | Splits tokens where letters and numbers meet. For example `"v8engine"` will become `[ v, 8, engine ]`. Default is `true`. -`stem_english_possessive` | Optional | Boolean | Removes English possessive endings such as `"'s."` Default is `true`. -`type_table` | Optional | Array of strings | Custom mappings can be provided for characters to treat them as alphanumeric or numeric, which avoids unwanted splitting. For example: `["- => ALPHA"]`. - +`catenate_numbers` | Optional | Boolean | Concatenates numerical sequences. For example, `"10-20-30"` becomes `[ 102030, 10, 20, 30 ]`. Default is `false`. +`catenate_words` | Optional | Boolean | Concatenates alphabetic words. For example, `"high-speed-level"` becomes `[ highspeedlevel, high, speed, level ]`. Default is `false`. +`generate_number_parts` | Optional | Boolean | If `true`, numeric tokens (tokens consisting of numbers only) are included in the output. Default is `true`. +`generate_word_parts` | Optional | Boolean | If `true`, alphabetical tokens (tokens consisting of alphabetic characters only) are included in the output. Default is `true`. +`ignore_keywords` | Optional | Boolean | Whether to process tokens marked as keywords. Default is `false`. +`preserve_original` | Optional | Boolean | Keeps the original token (that may include non-alphanumeric delimeters) alongside the generated tokens in the output. For example, `"auto-drive-300"` becomes `[ auto-drive-300, auto, drive, 300 ]`. 
If `true`, the filter generates multi-position tokens not supported by indexing, so do not use this filter in an index analyzer or use the `flatten_graph` filter after this filter. Default is `false`. +`protected_words` | Optional | Array of strings | Specifies the tokens that should not be split. +`protected_words_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing tokens that should not be split separated by new lines. +`split_on_case_change` | Optional | Boolean | Splits tokens at the place where consecutive letters have a different case (one is lowercase the other is uppercase). For example, `"OpenSearch"` becomes `[ Open, Search ]`. Default is `true`. +`split_on_numerics` | Optional | Boolean | Splits tokens at the place where there are a consecutive letter and number. For example `"v8engine"` will become `[ v, 8, engine ]`. Default is `true`. +`stem_english_possessive` | Optional | Boolean | Removes English possessive endings such as `'s`. Default is `true`. +`type_table` | Optional | Array of strings | A custom map that specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For example, to treat a hyphen (`-`) as an alphanumeric character, specify `["- => ALPHA"]` so words are not split on hyphens. Valid types are:
- `ALPHA`: alphabetical
- `ALPHANUM`: alphanumeric
- `DIGIT`: numeric
- `LOWER`: lowercase alphabetical
- `SUBWORD_DELIM`: non-alphanumeric delimiter
- `UPPER`: uppercase alphabetical +`type_table_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing a custom character map. The map specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For valid types, see `type_table`. ## Example @@ -112,3 +125,40 @@ The response contains the generated tokens: ] } ``` + + +## Differences between word_delimiter_graph and word_delimiter filters + + +Both the `word_delimiter_graph` and `word_delimiter` token filters generate tokens spanning multiple positions when any of the following parameters are set to `true`: + +- `catenate_all` +- `catenate_numbers` +- `catenate_words` +- `preserve_original` + +To illustrate the filter differences, consider the input text `Pro-XT500`. + + +### word_delimiter_graph + + +The `word_delimiter_graph` filter assigns a `positionLength` attribute to multi-position tokens, indicating how many positions a token spans. This ensures that the filter always generates valid token graphs, making it suitable for use in advanced token graph scenarios. Although token graphs with multi-position tokens are not supported for indexing, they can still be useful in search scenarios. For example, queries like `match_phrase` can use these graphs to generate multiple subqueries from a single input string. For the example input text, the `word_delimiter_graph` filter generates the following tokens: + +- `Pro` (position 1) +- `XT500` (position 2) +- `ProXT500` (position 1, `positionLength`: 2) + +The `positionLength` attribute ensures a valid graph for advanced queries. + + +### word_delimiter + + +In contrast, the `word_delimiter` filter does not assign a `positionLength` attribute to multi-position tokens, leading to invalid graphs when these tokens are present. For the example input text, the `word_delimiter` filter generates the following tokens: + +- `Pro` (position 1) +- `XT500` (position 2) +- `ProXT500` (position 1, no `positionLength`) + +The lack of a `positionLength` attribute means the resulting token graph is invalid for token streams containing multi-position tokens. \ No newline at end of file From 26f7548374dc2d1724513507cf784bc78ee26d7f Mon Sep 17 00:00:00 2001 From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> Date: Tue, 3 Dec 2024 09:05:19 -0500 Subject: [PATCH 4/7] Apply suggestions from code review Co-authored-by: Nathan Bower Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com> --- _analyzers/token-filters/index.md | 2 +- .../token-filters/word-delimiter-graph.md | 28 +++++++++---------- 2 files changed, 15 insertions(+), 15 deletions(-) diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index 0b4f78cc03..316ca7dd53 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -62,4 +62,4 @@ Normalization | `arabic_normalization`: [ArabicNormalizer](https://lucene.apache `unique` | N/A | Ensures each token is unique by removing duplicate tokens from a stream. `uppercase` | [UpperCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to uppercase. `word_delimiter` | [WordDelimiterFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. 
-[`word_delimiter_graph`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/word-delimiter-graph/) | [WordDelimiterGraphFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterGraphFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns multi-position tokens a `positionLength` attribute. +[`word_delimiter_graph`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/word-delimiter-graph/) | [WordDelimiterGraphFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterGraphFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns a `positionLength` attribute to multi-position tokens. diff --git a/_analyzers/token-filters/word-delimiter-graph.md b/_analyzers/token-filters/word-delimiter-graph.md index ff6a6888ad..71c9f49fc7 100644 --- a/_analyzers/token-filters/word-delimiter-graph.md +++ b/_analyzers/token-filters/word-delimiter-graph.md @@ -7,9 +7,9 @@ nav_order: 480 # Word delimiter graph token filter -The `word_delimiter_graph` token filter is used to split tokens at predefined characters while also offering optional token normalization based on customizable rules. +The `word_delimiter_graph` token filter is used to split tokens at predefined characters and also offers optional token normalization based on customizable rules. -The `word_delimiter_graph` filter is intended for removing punctuation from complex identifiers like part numbers or product IDs. For such cases, it is best used with the `keyword` tokenizer. For hyphenated words, use the `synonym_graph` token filter instead of the `word_delimiter_graph` filter because users frequently search for these terms both with and without hyphens. +The `word_delimiter_graph` filter is used to remove punctuation from complex identifiers like part numbers or product IDs. In such cases, it is best used with the `keyword` tokenizer. For hyphenated words, use the `synonym_graph` token filter instead of the `word_delimiter_graph` filter because users frequently search for these terms both with and without hyphens. {: .note} By default, the filter applies the following rules. @@ -17,7 +17,7 @@ By default, the filter applies the following rules. | Action | Description | Input | Output | |:---|:---|:---|:---| | Split tokens at non-alphanumeric characters | Non-alphanumeric characters are treated as delimiters. | `ultra-fast` | `ultra`, `fast` | -| Remove leading or trailing delimiters | Removes delimiters at the start or end of tokens. | `Z99++'Decoder'`| `Z99`, `Decoder` | +| Remove leading or trailing delimiters | Removes delimiters at the beginning or end of tokens. | `Z99++'Decoder'`| `Z99`, `Decoder` | | Split tokens at letter case transitions | Splits tokens when there is a transition between uppercase and lowercase letters. | `OpenSearch` | `Open`, `Search` | | Split tokens at letter-number transitions | Splits tokens when there is a transition between letters and numbers. | `T1000` | `T`, `1000` | | Remove the English possessive ('s) | Removes the possessive ('s) from the end of tokens. | `John's` | `John` | @@ -38,18 +38,18 @@ Parameter | Required/Optional | Data type | Description `generate_number_parts` | Optional | Boolean | If `true`, numeric tokens (tokens consisting of numbers only) are included in the output. Default is `true`. 
`generate_word_parts` | Optional | Boolean | If `true`, alphabetical tokens (tokens consisting of alphabetic characters only) are included in the output. Default is `true`. `ignore_keywords` | Optional | Boolean | Whether to process tokens marked as keywords. Default is `false`. -`preserve_original` | Optional | Boolean | Keeps the original token (that may include non-alphanumeric delimeters) alongside the generated tokens in the output. For example, `"auto-drive-300"` becomes `[ auto-drive-300, auto, drive, 300 ]`. If `true`, the filter generates multi-position tokens not supported by indexing, so do not use this filter in an index analyzer or use the `flatten_graph` filter after this filter. Default is `false`. -`protected_words` | Optional | Array of strings | Specifies the tokens that should not be split. -`protected_words_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing tokens that should not be split separated by new lines. -`split_on_case_change` | Optional | Boolean | Splits tokens at the place where consecutive letters have a different case (one is lowercase the other is uppercase). For example, `"OpenSearch"` becomes `[ Open, Search ]`. Default is `true`. -`split_on_numerics` | Optional | Boolean | Splits tokens at the place where there are a consecutive letter and number. For example `"v8engine"` will become `[ v, 8, engine ]`. Default is `true`. -`stem_english_possessive` | Optional | Boolean | Removes English possessive endings such as `'s`. Default is `true`. -`type_table` | Optional | Array of strings | A custom map that specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For example, to treat a hyphen (`-`) as an alphanumeric character, specify `["- => ALPHA"]` so words are not split on hyphens. Valid types are:
- `ALPHA`: alphabetical
- `ALPHANUM`: alphanumeric
- `DIGIT`: numeric
- `LOWER`: lowercase alphabetical
- `SUBWORD_DELIM`: non-alphanumeric delimiter
- `UPPER`: uppercase alphabetical
+`preserve_original` | Optional | Boolean | Keeps the original token (which may include non-alphanumeric delimeters) alongside the generated tokens in the output. For example, `"auto-drive-300"` becomes `[ auto-drive-300, auto, drive, 300 ]`. If `true`, the filter generates multi-position tokens not supported by indexing, so do not use this filter in an index analyzer or use the `flatten_graph` filter after this filter. Default is `false`.
+`protected_words` | Optional | Array of strings | Specifies tokens that should not be split.
+`protected_words_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing a list of tokens, one per line, that should not be split.
+`split_on_case_change` | Optional | Boolean | Splits tokens where consecutive letters have different cases (one is lowercase and the other is uppercase). For example, `"OpenSearch"` becomes `[ Open, Search ]`. Default is `true`.
+`split_on_numerics` | Optional | Boolean | Splits tokens when there is a transition between letters and numbers. For example, `"v8engine"` becomes `[ v, 8, engine ]`. Default is `true`.
+`stem_english_possessive` | Optional | Boolean | Removes English possessive endings, such as `'s`. Default is `true`.
+`type_table` | Optional | Array of strings | A custom map that specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For example, to treat a hyphen (`-`) as an alphanumeric character, specify `["- => ALPHA"]` so that words are not split at hyphens. Valid types are:
- `ALPHA`: alphabetical
- `ALPHANUM`: alphanumeric
- `DIGIT`: numeric
- `LOWER`: lowercase alphabetical
- `SUBWORD_DELIM`: non-alphanumeric delimiter
- `UPPER`: uppercase alphabetical `type_table_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing a custom character map. The map specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For valid types, see `type_table`. ## Example -The following example request creates a new index named `my-custom-index` and configures an analyzer with `word_delimiter_graph` filter: +The following example request creates a new index named `my-custom-index` and configures an analyzer with a `word_delimiter_graph` filter: ```json PUT /my-custom-index @@ -127,7 +127,7 @@ The response contains the generated tokens: ``` -## Differences between word_delimiter_graph and word_delimiter filters +## Differences between the word_delimiter_graph and word_delimiter filters Both the `word_delimiter_graph` and `word_delimiter` token filters generate tokens spanning multiple positions when any of the following parameters are set to `true`: @@ -137,7 +137,7 @@ Both the `word_delimiter_graph` and `word_delimiter` token filters generate toke - `catenate_words` - `preserve_original` -To illustrate the filter differences, consider the input text `Pro-XT500`. +To illustrate the differences between these filters, consider the input text `Pro-XT500`. ### word_delimiter_graph @@ -161,4 +161,4 @@ In contrast, the `word_delimiter` filter does not assign a `positionLength` attr - `XT500` (position 2) - `ProXT500` (position 1, no `positionLength`) -The lack of a `positionLength` attribute means the resulting token graph is invalid for token streams containing multi-position tokens. \ No newline at end of file +The lack of a `positionLength` attribute results in a token graph that is invalid for token streams containing multi-position tokens. \ No newline at end of file From eb6995b022f21f846e3215e34b55f92e9a9a23de Mon Sep 17 00:00:00 2001 From: Fanit Kolchina Date: Tue, 3 Dec 2024 09:11:20 -0500 Subject: [PATCH 5/7] Editorial comments Signed-off-by: Fanit Kolchina --- _analyzers/token-filters/index.md | 2 +- .../token-filters/word-delimiter-graph.md | 42 +++++++++---------- 2 files changed, 22 insertions(+), 22 deletions(-) diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md index 0b4f78cc03..316ca7dd53 100644 --- a/_analyzers/token-filters/index.md +++ b/_analyzers/token-filters/index.md @@ -62,4 +62,4 @@ Normalization | `arabic_normalization`: [ArabicNormalizer](https://lucene.apache `unique` | N/A | Ensures each token is unique by removing duplicate tokens from a stream. `uppercase` | [UpperCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to uppercase. `word_delimiter` | [WordDelimiterFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. -[`word_delimiter_graph`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/word-delimiter-graph/) | [WordDelimiterGraphFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterGraphFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns multi-position tokens a `positionLength` attribute. 
+[`word_delimiter_graph`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/word-delimiter-graph/) | [WordDelimiterGraphFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterGraphFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns a `positionLength` attribute to multi-position tokens. diff --git a/_analyzers/token-filters/word-delimiter-graph.md b/_analyzers/token-filters/word-delimiter-graph.md index ff6a6888ad..17f6e7e21f 100644 --- a/_analyzers/token-filters/word-delimiter-graph.md +++ b/_analyzers/token-filters/word-delimiter-graph.md @@ -7,20 +7,20 @@ nav_order: 480 # Word delimiter graph token filter -The `word_delimiter_graph` token filter is used to split tokens at predefined characters while also offering optional token normalization based on customizable rules. +The `word_delimiter_graph` token filter is used to split tokens at predefined characters and also offers optional token normalization based on customizable rules. -The `word_delimiter_graph` filter is intended for removing punctuation from complex identifiers like part numbers or product IDs. For such cases, it is best used with the `keyword` tokenizer. For hyphenated words, use the `synonym_graph` token filter instead of the `word_delimiter_graph` filter because users frequently search for these terms both with and without hyphens. +The `word_delimiter_graph` filter is used to remove punctuation from complex identifiers like part numbers or product IDs. In such cases, it is best used with the `keyword` tokenizer. For hyphenated words, use the `synonym_graph` token filter instead of the `word_delimiter_graph` filter because users frequently search for these terms both with and without hyphens. {: .note} By default, the filter applies the following rules. -| Action | Description | Input | Output | -|:---|:---|:---|:---| -| Split tokens at non-alphanumeric characters | Non-alphanumeric characters are treated as delimiters. | `ultra-fast` | `ultra`, `fast` | -| Remove leading or trailing delimiters | Removes delimiters at the start or end of tokens. | `Z99++'Decoder'`| `Z99`, `Decoder` | -| Split tokens at letter case transitions | Splits tokens when there is a transition between uppercase and lowercase letters. | `OpenSearch` | `Open`, `Search` | -| Split tokens at letter-number transitions | Splits tokens when there is a transition between letters and numbers. | `T1000` | `T`, `1000` | -| Remove the English possessive ('s) | Removes the possessive ('s) from the end of tokens. | `John's` | `John` | + Description | Input | Output | +|:---|:---|:---| +| Treats non-alphanumeric characters as delimiters. | `ultra-fast` | `ultra`, `fast` | +| Removes delimiters at the beginning or end of tokens. | `Z99++'Decoder'`| `Z99`, `Decoder` | +| Splits tokens when there is a transition between uppercase and lowercase letters. | `OpenSearch` | `Open`, `Search` | +| Splits tokens when there is a transition between letters and numbers. | `T1000` | `T`, `1000` | +| Removes the possessive ('s) from the end of tokens. | `John's` | `John` | It's important **not** to use tokenizers that strip punctuation, like the `standard` tokenizer, with this filter. Doing so may prevent proper token splitting and interfere with options like `catenate_all` or `preserve_original`. We recommend using this filter with a `keyword` or `whitespace` tokenizer. 
{: .important} @@ -38,18 +38,18 @@ Parameter | Required/Optional | Data type | Description `generate_number_parts` | Optional | Boolean | If `true`, numeric tokens (tokens consisting of numbers only) are included in the output. Default is `true`. `generate_word_parts` | Optional | Boolean | If `true`, alphabetical tokens (tokens consisting of alphabetic characters only) are included in the output. Default is `true`. `ignore_keywords` | Optional | Boolean | Whether to process tokens marked as keywords. Default is `false`. -`preserve_original` | Optional | Boolean | Keeps the original token (that may include non-alphanumeric delimeters) alongside the generated tokens in the output. For example, `"auto-drive-300"` becomes `[ auto-drive-300, auto, drive, 300 ]`. If `true`, the filter generates multi-position tokens not supported by indexing, so do not use this filter in an index analyzer or use the `flatten_graph` filter after this filter. Default is `false`. -`protected_words` | Optional | Array of strings | Specifies the tokens that should not be split. -`protected_words_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing tokens that should not be split separated by new lines. -`split_on_case_change` | Optional | Boolean | Splits tokens at the place where consecutive letters have a different case (one is lowercase the other is uppercase). For example, `"OpenSearch"` becomes `[ Open, Search ]`. Default is `true`. -`split_on_numerics` | Optional | Boolean | Splits tokens at the place where there are a consecutive letter and number. For example `"v8engine"` will become `[ v, 8, engine ]`. Default is `true`. -`stem_english_possessive` | Optional | Boolean | Removes English possessive endings such as `'s`. Default is `true`. -`type_table` | Optional | Array of strings | A custom map that specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For example, to treat a hyphen (`-`) as an alphanumeric character, specify `["- => ALPHA"]` so words are not split on hyphens. Valid types are:
- `ALPHA`: alphabetical
- `ALPHANUM`: alphanumeric
- `DIGIT`: numeric
- `LOWER`: lowercase alphabetical
- `SUBWORD_DELIM`: non-alphanumeric delimiter
- `UPPER`: uppercase alphabetical
+`preserve_original` | Optional | Boolean | Keeps the original token (which may include non-alphanumeric delimiters) alongside the generated tokens in the output. For example, `"auto-drive-300"` becomes `[ auto-drive-300, auto, drive, 300 ]`. If `true`, the filter generates multi-position tokens not supported by indexing, so do not use this filter in an index analyzer or use the `flatten_graph` filter after this filter. Default is `false`.
+`protected_words` | Optional | Array of strings | Specifies tokens that should not be split.
+`protected_words_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing a list of tokens, one per line, that should not be split.
+`split_on_case_change` | Optional | Boolean | Splits tokens where consecutive letters have different cases (one is lowercase and the other is uppercase). For example, `"OpenSearch"` becomes `[ Open, Search ]`. Default is `true`.
+`split_on_numerics` | Optional | Boolean | Splits tokens when there is a transition between letters and numbers. For example, `"v8engine"` becomes `[ v, 8, engine ]`. Default is `true`.
+`stem_english_possessive` | Optional | Boolean | Removes English possessive endings, such as `'s`. Default is `true`.
+`type_table` | Optional | Array of strings | A custom map that specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For example, to treat a hyphen (`-`) as an alphanumeric character, specify `["- => ALPHA"]` so that words are not split at hyphens. Valid types are:
- `ALPHA`: alphabetical
- `ALPHANUM`: alphanumeric
- `DIGIT`: numeric
- `LOWER`: lowercase alphabetical
- `SUBWORD_DELIM`: non-alphanumeric delimiter
- `UPPER`: uppercase alphabetical `type_table_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing a custom character map. The map specifies how to treat characters and whether to treat them as delimiters, which avoids unwanted splitting. For valid types, see `type_table`. ## Example -The following example request creates a new index named `my-custom-index` and configures an analyzer with `word_delimiter_graph` filter: +The following example request creates a new index named `my-custom-index` and configures an analyzer with a `word_delimiter_graph` filter: ```json PUT /my-custom-index @@ -127,7 +127,7 @@ The response contains the generated tokens: ``` -## Differences between word_delimiter_graph and word_delimiter filters +## Differences between the word_delimiter_graph and word_delimiter filters Both the `word_delimiter_graph` and `word_delimiter` token filters generate tokens spanning multiple positions when any of the following parameters are set to `true`: @@ -137,7 +137,7 @@ Both the `word_delimiter_graph` and `word_delimiter` token filters generate toke - `catenate_words` - `preserve_original` -To illustrate the filter differences, consider the input text `Pro-XT500`. +To illustrate the differences between these filters, consider the input text `Pro-XT500`. ### word_delimiter_graph @@ -149,7 +149,7 @@ The `word_delimiter_graph` filter assigns a `positionLength` attribute to multi- - `XT500` (position 2) - `ProXT500` (position 1, `positionLength`: 2) -The `positionLength` attribute ensures a valid graph for advanced queries. +The `positionLength` attribute the production of a valid graph to be used in advanced queries. ### word_delimiter @@ -161,4 +161,4 @@ In contrast, the `word_delimiter` filter does not assign a `positionLength` attr - `XT500` (position 2) - `ProXT500` (position 1, no `positionLength`) -The lack of a `positionLength` attribute means the resulting token graph is invalid for token streams containing multi-position tokens. \ No newline at end of file +The lack of a `positionLength` attribute results in a token graph that is invalid for token streams containing multi-position tokens. \ No newline at end of file From 6733b84c2baf35df8c869a43d21d7cfef5de39f0 Mon Sep 17 00:00:00 2001 From: Fanit Kolchina Date: Tue, 3 Dec 2024 09:15:00 -0500 Subject: [PATCH 6/7] More merge conflicts Signed-off-by: Fanit Kolchina --- _analyzers/token-filters/word-delimiter-graph.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/_analyzers/token-filters/word-delimiter-graph.md b/_analyzers/token-filters/word-delimiter-graph.md index 5af6790397..20e958c9e6 100644 --- a/_analyzers/token-filters/word-delimiter-graph.md +++ b/_analyzers/token-filters/word-delimiter-graph.md @@ -14,13 +14,13 @@ The `word_delimiter_graph` filter is used to remove punctuation from complex ide By default, the filter applies the following rules. -| Action | Description | Input | Output | -|:---|:---|:---|:---| -| Split tokens at non-alphanumeric characters | Non-alphanumeric characters are treated as delimiters. | `ultra-fast` | `ultra`, `fast` | -| Remove leading or trailing delimiters | Removes delimiters at the beginning or end of tokens. | `Z99++'Decoder'`| `Z99`, `Decoder` | -| Split tokens at letter case transitions | Splits tokens when there is a transition between uppercase and lowercase letters. 
| `OpenSearch` | `Open`, `Search` | -| Split tokens at letter-number transitions | Splits tokens when there is a transition between letters and numbers. | `T1000` | `T`, `1000` | -| Remove the English possessive ('s) | Removes the possessive ('s) from the end of tokens. | `John's` | `John` | +| Description | Input | Output | +|:---|:---|:---| +| Treats non-alphanumeric characters as delimiters. | `ultra-fast` | `ultra`, `fast` | +| Removes delimiters at the beginning or end of tokens. | `Z99++'Decoder'`| `Z99`, `Decoder` | +| Splits tokens when there is a transition between uppercase and lowercase letters. | `OpenSearch` | `Open`, `Search` | +| Splits tokens when there is a transition between letters and numbers. | `T1000` | `T`, `1000` | +| Removes the possessive ('s) from the end of tokens. | `John's` | `John` | It's important **not** to use tokenizers that strip punctuation, like the `standard` tokenizer, with this filter. Doing so may prevent proper token splitting and interfere with options like `catenate_all` or `preserve_original`. We recommend using this filter with a `keyword` or `whitespace` tokenizer. {: .important} From c15ccbea460b9f866033da0c03db6d6ae7e2498d Mon Sep 17 00:00:00 2001 From: Fanit Kolchina Date: Tue, 3 Dec 2024 09:15:58 -0500 Subject: [PATCH 7/7] typo fix Signed-off-by: Fanit Kolchina --- _analyzers/token-filters/word-delimiter-graph.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_analyzers/token-filters/word-delimiter-graph.md b/_analyzers/token-filters/word-delimiter-graph.md index 20e958c9e6..ac734bebeb 100644 --- a/_analyzers/token-filters/word-delimiter-graph.md +++ b/_analyzers/token-filters/word-delimiter-graph.md @@ -38,7 +38,7 @@ Parameter | Required/Optional | Data type | Description `generate_number_parts` | Optional | Boolean | If `true`, numeric tokens (tokens consisting of numbers only) are included in the output. Default is `true`. `generate_word_parts` | Optional | Boolean | If `true`, alphabetical tokens (tokens consisting of alphabetic characters only) are included in the output. Default is `true`. `ignore_keywords` | Optional | Boolean | Whether to process tokens marked as keywords. Default is `false`. -`preserve_original` | Optional | Boolean | Keeps the original token (which may include non-alphanumeric delimeters) alongside the generated tokens in the output. For example, `"auto-drive-300"` becomes `[ auto-drive-300, auto, drive, 300 ]`. If `true`, the filter generates multi-position tokens not supported by indexing, so do not use this filter in an index analyzer or use the `flatten_graph` filter after this filter. Default is `false`. +`preserve_original` | Optional | Boolean | Keeps the original token (which may include non-alphanumeric delimiters) alongside the generated tokens in the output. For example, `"auto-drive-300"` becomes `[ auto-drive-300, auto, drive, 300 ]`. If `true`, the filter generates multi-position tokens not supported by indexing, so do not use this filter in an index analyzer or use the `flatten_graph` filter after this filter. Default is `false`. `protected_words` | Optional | Array of strings | Specifies tokens that should not be split. `protected_words_path` | Optional | String | Specifies a path (absolute or relative to the config directory) to a file containing tokens that should not be separated by new lines. `split_on_case_change` | Optional | Boolean | Splits tokens where consecutive letters have different cases (one is lowercase and the other is uppercase). 
For example, `"OpenSearch"` becomes `[ Open, Search ]`. Default is `true`.