---
layout: default
title: Flatten graph
parent: Token filters
nav_order: 150
---

# Flatten graph token filter

The `flatten_graph` token filter handles the complex token relationships that arise when multiple tokens are generated at the same position in a graph structure. Some token filters, such as `synonym_graph` and `word_delimiter_graph`, produce multi-position tokens (tokens that overlap or span multiple positions). These token graphs are useful for search queries but are not directly supported during indexing. The `flatten_graph` token filter resolves multi-position tokens into a linear sequence of tokens, ensuring compatibility with the indexing process.

Token graph flattening is a lossy process. Whenever possible, avoid using the `flatten_graph` filter. Instead, apply graph token filters exclusively in search analyzers, removing the need for the `flatten_graph` filter.
{: .important}
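
One way to avoid flattening is to keep graph-producing filters out of the index analyzer entirely. The following request is a minimal sketch (the index name, filter name, and synonym rule are illustrative, not part of the original example) that applies `synonym_graph` only at search time by configuring a `search_analyzer`:

```json
PUT /graph_free_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym_graph",
          "synonyms": [ "ssd => solid state drive" ]
        }
      },
      "analyzer": {
        "my_search_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "my_synonyms"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "standard",
        "search_analyzer": "my_search_analyzer"
      }
    }
  }
}
```
{% include copy-curl.html %}

Because the graph is produced only while parsing the query, no `flatten_graph` filter is needed and no graph information is lost at index time.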

## Example

The following example request creates a new index named `test_index` and configures an analyzer with the `flatten_graph` filter:

```json
PUT /test_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_index_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "my_custom_filter",
            "flatten_graph"
          ]
        }
      },
      "filter": {
        "my_custom_filter": {
          "type": "word_delimiter_graph",
          "catenate_all": true
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /test_index/_analyze
{
  "analyzer": "my_index_analyzer",
  "text": "OpenSearch helped many employers"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "OpenSearch",
      "start_offset": 0,
      "end_offset": 10,
      "type": "<ALPHANUM>",
      "position": 0,
      "positionLength": 2
    },
    {
      "token": "Open",
      "start_offset": 0,
      "end_offset": 4,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "Search",
      "start_offset": 4,
      "end_offset": 10,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "helped",
      "start_offset": 11,
      "end_offset": 17,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "many",
      "start_offset": 18,
      "end_offset": 22,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "employers",
      "start_offset": 23,
      "end_offset": 32,
      "type": "<ALPHANUM>",
      "position": 4
    }
  ]
}
```
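
Note that `word_delimiter_graph` splits `OpenSearch` into `Open` and `Search`, and `catenate_all` also emits the concatenated token `OpenSearch`, which spans two positions (`positionLength: 2`). After flattening, every token occupies a sequential position that the index can store, at the cost of losing some of the original graph structure.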
---
layout: default
title: Keyword repeat
parent: Token filters
nav_order: 210
---

# Keyword repeat token filter

The `keyword_repeat` token filter emits a keyword-marked copy of each token into the token stream. This filter is typically used when you want to retain both the original token and its modified version after further transformations, such as stemming or synonym expansion. The duplicated tokens allow the original, unchanged version of the token to remain in the final analysis alongside the modified version.

The `keyword_repeat` token filter should be placed before stemming filters. Because stemming does not change every token, identical duplicate tokens may remain in the same position after stemming. To remove them, use the `remove_duplicates` token filter after the stemmer, as shown in the example following this note.
{: .note}
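
For example, the following analyzer is a minimal sketch (the index and analyzer names are illustrative) that appends `remove_duplicates` after the stemmer so that tokens left unchanged by stemming are not indexed twice:

```json
PUT /deduplicated_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "deduplicated_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "keyword_repeat",
            "kstem",
            "remove_duplicates"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}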

## Example

The following example request creates a new index named `my_index` and configures an analyzer with a `keyword_repeat` filter:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_kstem": {
          "type": "kstem"
        },
        "my_lowercase": {
          "type": "lowercase"
        }
      },
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "my_lowercase",
            "keyword_repeat",
            "my_kstem"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "Stopped quickly"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "stopped",
      "start_offset": 0,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "stop",
      "start_offset": 0,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "quickly",
      "start_offset": 8,
      "end_offset": 15,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "quick",
      "start_offset": 8,
      "end_offset": 15,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}
```
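
Note that the original and stemmed forms (`stopped`/`stop` and `quickly`/`quick`) share the same positions (`0` and `1`, respectively), so position-sensitive queries, such as phrase queries, can match either form.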

You can further examine the effect of the `keyword_repeat` token filter by adding the following parameters to the `_analyze` query:

```json
POST /my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "Stopped quickly",
  "explain": true,
  "attributes": "keyword"
}
```
{% include copy-curl.html %}

The response includes detailed information about the output of the tokenizer and of each token filter in the chain, along with the requested `keyword` attribute:

```json
{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [],
    "tokenizer": {
      "name": "standard",
      "tokens": [
        {"token": "Stopped","start_offset": 0,"end_offset": 7,"type": "<ALPHANUM>","position": 0},
        {"token": "quickly","start_offset": 8,"end_offset": 15,"type": "<ALPHANUM>","position": 1}
      ]
    },
    "tokenfilters": [
      {
        "name": "my_lowercase",
        "tokens": [
          {"token": "stopped","start_offset": 0,"end_offset": 7,"type": "<ALPHANUM>","position": 0},
          {"token": "quickly","start_offset": 8,"end_offset": 15,"type": "<ALPHANUM>","position": 1}
        ]
      },
      {
        "name": "keyword_repeat",
        "tokens": [
          {"token": "stopped","start_offset": 0,"end_offset": 7,"type": "<ALPHANUM>","position": 0,"keyword": true},
          {"token": "stopped","start_offset": 0,"end_offset": 7,"type": "<ALPHANUM>","position": 0,"keyword": false},
          {"token": "quickly","start_offset": 8,"end_offset": 15,"type": "<ALPHANUM>","position": 1,"keyword": true},
          {"token": "quickly","start_offset": 8,"end_offset": 15,"type": "<ALPHANUM>","position": 1,"keyword": false}
        ]
      },
      {
        "name": "my_kstem",
        "tokens": [
          {"token": "stopped","start_offset": 0,"end_offset": 7,"type": "<ALPHANUM>","position": 0,"keyword": true},
          {"token": "stop","start_offset": 0,"end_offset": 7,"type": "<ALPHANUM>","position": 0,"keyword": false},
          {"token": "quickly","start_offset": 8,"end_offset": 15,"type": "<ALPHANUM>","position": 1,"keyword": true},
          {"token": "quick","start_offset": 8,"end_offset": 15,"type": "<ALPHANUM>","position": 1,"keyword": false}
        ]
      }
    ]
  }
}
```
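
In the `my_kstem` output, the copies marked `"keyword": true` pass through the stemmer unchanged, while the `"keyword": false` copies are reduced to their root forms.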
---
layout: default
title: KStem
parent: Token filters
nav_order: 220
---

# KStem token filter

The `kstem` token filter is a stemming filter used to reduce words to their root forms. The filter is a lightweight algorithmic stemmer designed for the English language that performs the following stemming operations:

- Reduces plurals to their singular form.
- Converts different verb tenses to their base form.
- Removes common derivational endings, such as "-ing" or "-ed".

The `kstem` token filter is equivalent to a `stemmer` filter configured with the `light_english` language. It provides more conservative stemming than other stemming filters, such as `porter_stem`.
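
For example, the following minimal sketch (the index, filter, and analyzer names are illustrative) configures the equivalent `stemmer` filter, which should produce the same stems as `kstem`:

```json
PUT /light_english_index
{
  "settings": {
    "analysis": {
      "filter": {
        "light_english_stemmer": {
          "type": "stemmer",
          "language": "light_english"
        }
      },
      "analyzer": {
        "my_light_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "light_english_stemmer"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}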

The `kstem` token filter is based on the Lucene KStemFilter. For more information, see the [Lucene documentation](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/en/KStemFilter.html).

## Example

The following example request creates a new index named `my_kstem_index` and configures an analyzer with a `kstem` filter:

```json
PUT /my_kstem_index
{
  "settings": {
    "analysis": {
      "filter": {
        "kstem_filter": {
          "type": "kstem"
        }
      },
      "analyzer": {
        "my_kstem_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "kstem_filter"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "my_kstem_analyzer"
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /my_kstem_index/_analyze
{
  "analyzer": "my_kstem_analyzer",
  "text": "stops stopped"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "stop",
      "start_offset": 0,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "stop",
      "start_offset": 6,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}
```
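
Because the `content` field uses `my_kstem_analyzer`, query terms are stemmed the same way at search time. As a quick check (a hypothetical document and query, not part of the original example), a search for `stops` should match a document containing `stopped`, because both terms reduce to `stop`:

```json
PUT /my_kstem_index/_doc/1?refresh=true
{
  "content": "The bus stopped"
}
```
{% include copy-curl.html %}

```json
POST /my_kstem_index/_search
{
  "query": {
    "match": {
      "content": "stops"
    }
  }
}
```
{% include copy-curl.html %}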