From 0d694e1bb792722102ca2d46927f41637729cc34 Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Mon, 14 Oct 2024 11:38:19 +0100 Subject: [PATCH 1/5] adding standard analyzer docs Signed-off-by: Anton Rubin --- _analyzers/standard.md | 41 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 41 insertions(+) create mode 100644 _analyzers/standard.md diff --git a/_analyzers/standard.md b/_analyzers/standard.md new file mode 100644 index 0000000000..295094a0ee --- /dev/null +++ b/_analyzers/standard.md @@ -0,0 +1,41 @@ +--- +layout: default +title: Standard analyzer +nav_order: 40 +--- + +# Standard analyzer + +`standard` analyzer is the default analyzer that is used when no other analyzer is specified. It is designed to provide a basic and efficient approach for general-purpose text processing. + +This analyzer is made up of the following tokenizers and token filters: + +- `standard` tokenizer: removes most punctuation and splits based on spaces and other common delimiters. +- `lowercase` token filter: all tokens are converted to lowercase, ensuring case-insensitive searching. +- `stop` token filter: removes common stop words such as "the" "is" "and" from the tokenized output. + +## Configuring custom analyzer + +You can use the following command to configure index `my_custom_index` with custom analyser equivalent to `standard` analyzer: + +```json +PUT /my_custom_index +{ + "settings": { + "analysis": { + "analyzer": { + "my_custom_analyzer": { + "type": "custom", + "tokenizer": "standard", + "filter": [ + "lowercase", + "stop" + ] + } + } + } + } +} +``` +{% include copy-curl.html %} + From a3daaa06a0f66648514cec26322f8abf7b06e052 Mon Sep 17 00:00:00 2001 From: Anton Rubin Date: Mon, 14 Oct 2024 14:08:42 +0100 Subject: [PATCH 2/5] adding further details Signed-off-by: Anton Rubin --- _analyzers/standard.md | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/_analyzers/standard.md b/_analyzers/standard.md index 295094a0ee..4c2f0e9d1b 100644 --- a/_analyzers/standard.md +++ b/_analyzers/standard.md @@ -14,9 +14,28 @@ This analyzer is made up of the following tokenizers and token filters: - `lowercase` token filter: all tokens are converted to lowercase, ensuring case-insensitive searching. - `stop` token filter: removes common stop words such as "the" "is" "and" from the tokenized output. 
+## Example configuration
+
+You can use the following command to create index `my_standard_index` with `standard` analyzer:
+
+```json
+PUT /my_standard_index
+{
+  "mappings": {
+    "properties": {
+      "my_field": {
+        "type": "text",
+        "analyzer": "standard"
+      }
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+
 ## Configuring custom analyzer
 
-You can use the following command to configure index `my_custom_index` with custom analyser equivalent to `standard` analyzer:
+You can use the following command to configure index `my_custom_index` with custom analyzer equivalent to `standard` analyzer:
 
 ```json
 PUT /my_custom_index

From 336f848ec316e5109cfa955780ae635481bcfbe4 Mon Sep 17 00:00:00 2001
From: Anton Rubin
Date: Mon, 14 Oct 2024 15:45:51 +0100
Subject: [PATCH 3/5] adding more examples

Signed-off-by: Anton Rubin
---
 _analyzers/standard.md | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/_analyzers/standard.md b/_analyzers/standard.md
index 4c2f0e9d1b..284d01fd07 100644
--- a/_analyzers/standard.md
+++ b/_analyzers/standard.md
@@ -58,3 +58,28 @@ PUT /my_custom_index
 ```
 {% include copy-curl.html %}
 
+## Generated tokens
+
+Use the following request to examine the tokens generated using the created analyzer:
+
+```json
+POST /my_custom_index/_analyze
+{
+  "analyzer": "my_custom_analyzer",
+  "text": "The slow turtle swims away"
+}
+```
+{% include copy-curl.html %}
+
+The response contains the generated tokens:
+
+```json
+{
+  "tokens": [
+    {"token": "slow","start_offset": 4,"end_offset": 8,"type": "<ALPHANUM>","position": 1},
+    {"token": "turtle","start_offset": 9,"end_offset": 15,"type": "<ALPHANUM>","position": 2},
+    {"token": "swims","start_offset": 16,"end_offset": 21,"type": "<ALPHANUM>","position": 3},
+    {"token": "away","start_offset": 22,"end_offset": 26,"type": "<ALPHANUM>","position": 4}
+  ]
+}
+```

From 6c13948a842872b478c67f9144e05cfb1e957ea8 Mon Sep 17 00:00:00 2001
From: Fanit Kolchina
Date: Fri, 6 Dec 2024 13:41:16 -0500
Subject: [PATCH 4/5] Doc review

Signed-off-by: Fanit Kolchina
---
 _analyzers/standard.md | 31 +++++++++++++++++++++----------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/_analyzers/standard.md b/_analyzers/standard.md
index 284d01fd07..14f335f052 100644
--- a/_analyzers/standard.md
+++ b/_analyzers/standard.md
@@ -6,17 +6,17 @@ nav_order: 40
 
 # Standard analyzer
 
-`standard` analyzer is the default analyzer that is used when no other analyzer is specified. It is designed to provide a basic and efficient approach for general-purpose text processing.
+The `standard` analyzer is the default analyzer that is used when no other analyzer is specified. It is designed to provide a basic and efficient approach for generic text processing.
 
-This analyzer is made up of the following tokenizers and token filters:
+This analyzer consists of the following tokenizers and token filters:
 
-- `standard` tokenizer: removes most punctuation and splits based on spaces and other common delimiters.
-- `lowercase` token filter: all tokens are converted to lowercase, ensuring case-insensitive searching.
-- `stop` token filter: removes common stop words such as "the" "is" "and" from the tokenized output.
+- `standard` tokenizer: Removes most punctuation and splits text on spaces and other common delimiters.
+- `lowercase` token filter: Converts all tokens to lowercase, ensuring case-insensitive matching.
+- `stop` token filter: Removes common stopwords such as "the", "is", and "and" from the tokenized output.
 
-## Example configuration
+## Example
 
-You can use the following command to create index `my_standard_index` with `standard` analyzer:
+Use the following command to create an index named `my_standard_index` with a `standard` analyzer:
 
 ```json
 PUT /my_standard_index
@@ -33,9 +33,20 @@ PUT /my_standard_index
 ```
 {% include copy-curl.html %}
 
-## Configuring custom analyzer
+## Parameters
 
-You can use the following command to configure index `my_custom_index` with custom analyzer equivalent to `standard` analyzer:
+You can configure a `standard` analyzer with the following parameters.
+
+Parameter | Required/Optional | Data type | Description
+:--- | :--- | :--- | :---
+`max_token_length` | Optional | Integer | Sets the maximum length of the produced token. If this length is exceeded, the token is split into multiple tokens at the length configured in the `max_token_length`. Default is `255`.
+`stopwords` | Optional | String or list of strings | A string specifying a predefined list of stopwords (such as `_english_`) or an array specifying a custom list of stopwords. Default is `_none_`.
+`stopwords_path` | Optional | String | The path (absolute or relative to the config directory) to the file containing a list of stop words.
+
+
+## Configuring a custom analyzer
+
+Use the following command to configure an index with a custom analyzer that is equivalent to the `standard` analyzer:
 
 ```json
 PUT /my_custom_index
@@ -60,7 +71,7 @@ PUT /my_custom_index
 
 ## Generated tokens
 
-Use the following request to examine the tokens generated using the created analyzer:
+Use the following request to examine the tokens generated using the analyzer:
 
 ```json
 POST /my_custom_index/_analyze

From 9fd184d8b5708a60d22cf61c309f2b197060dd59 Mon Sep 17 00:00:00 2001
From: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
Date: Tue, 10 Dec 2024 09:00:52 -0500
Subject: [PATCH 5/5] Apply suggestions from code review

Co-authored-by: Nathan Bower
Signed-off-by: kolchfa-aws <105444904+kolchfa-aws@users.noreply.github.com>
---
 _analyzers/standard.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/_analyzers/standard.md b/_analyzers/standard.md
index 14f335f052..e4a7a70fbc 100644
--- a/_analyzers/standard.md
+++ b/_analyzers/standard.md
@@ -6,13 +6,13 @@ nav_order: 40
 
 # Standard analyzer
 
-The `standard` analyzer is the default analyzer that is used when no other analyzer is specified. It is designed to provide a basic and efficient approach for generic text processing.
+The `standard` analyzer is the default analyzer used when no other analyzer is specified. It is designed to provide a basic and efficient approach to generic text processing.
 
 This analyzer consists of the following tokenizers and token filters:
 
 - `standard` tokenizer: Removes most punctuation and splits text on spaces and other common delimiters.
 - `lowercase` token filter: Converts all tokens to lowercase, ensuring case-insensitive matching.
-- `stop` token filter: Removes common stopwords such as "the", "is", and "and" from the tokenized output.
+- `stop` token filter: Removes common stopwords, such as "the", "is", and "and", from the tokenized output.
 
 ## Example
 
@@ -39,7 +39,7 @@ You can configure a `standard` analyzer with the following parameters.
 
 Parameter | Required/Optional | Data type | Description
 :--- | :--- | :--- | :---
-`max_token_length` | Optional | Integer | Sets the maximum length of the produced token. If this length is exceeded, the token is split into multiple tokens at the length configured in the `max_token_length`. Default is `255`.
+`max_token_length` | Optional | Integer | Sets the maximum length of the produced token. If this length is exceeded, the token is split into multiple tokens at the length configured in `max_token_length`. Default is `255`.
 `stopwords` | Optional | String or list of strings | A string specifying a predefined list of stopwords (such as `_english_`) or an array specifying a custom list of stopwords. Default is `_none_`.
 `stopwords_path` | Optional | String | The path (absolute or relative to the config directory) to the file containing a list of stop words.
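
---

Editor's note: the patches document `max_token_length`, `stopwords`, and `stopwords_path` in the parameter table but never show them in a request. The following is a minimal sketch of how these parameters might be applied; the index name `my_parameterized_index`, the analyzer name `my_standard_analyzer`, and the chosen values are illustrative assumptions, not part of the PR:

```json
PUT /my_parameterized_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_standard_analyzer": {
          "type": "standard",
          "max_token_length": 5,
          "stopwords": "_english_"
        }
      }
    }
  }
}
```

With these assumed settings, analyzing text such as "The Tokenization example" should drop the English stopword "the", lowercase the remaining tokens, and split any token longer than five characters into five-character chunks (for example, "Tokenization" becomes roughly "token", "izati", "on"), which you can verify with a `POST /my_parameterized_index/_analyze` request that specifies `"analyzer": "my_standard_analyzer"`.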