---
layout: default
title: Standard analyzer
nav_order: 40
---

# Standard analyzer

The `standard` analyzer is the default analyzer, used when no other analyzer is specified. It is designed to provide basic, efficient processing of generic text.

This analyzer consists of the following tokenizer and token filters:

- `standard` tokenizer: Removes most punctuation and splits text on spaces and other common delimiters.
- `lowercase` token filter: Converts all tokens to lowercase, ensuring case-insensitive matching.
- `stop` token filter: Removes common stopwords, such as "the", "is", and "and", from the tokenized output (see the note following this list).

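Note that although the `stop` token filter is part of this analyzer, its `stopwords` parameter defaults to `_none_` (see the parameters table below), so no stopwords are removed unless you configure a stopword list. As a quick check, you can call the `_analyze` API with the built-in analyzer; the sample text here is arbitrary:

```json
POST /_analyze
{
  "analyzer": "standard",
  "text": "The slow turtle swims away"
}
```
{% include copy-curl.html %}

With the default settings, the response should contain all five words as lowercase tokens, including "the".
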
## Example

Use the following command to create an index named `my_standard_index` with a `standard` analyzer:

```json
PUT /my_standard_index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "standard"
      }
    }
  }
}
```
{% include copy-curl.html %}

## Parameters

You can configure a `standard` analyzer with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`max_token_length` | Optional | Integer | Sets the maximum length of a produced token. If this length is exceeded, the token is split into multiple tokens at `max_token_length` intervals. Default is `255`.
`stopwords` | Optional | String or list of strings | A string specifying a predefined list of stopwords (such as `_english_`) or an array specifying a custom list of stopwords. Default is `_none_`.
`stopwords_path` | Optional | String | The path (absolute or relative to the config directory) to a file containing a list of stopwords.

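For example, the following sketch configures a `standard` analyzer with a shorter maximum token length and the predefined English stopword list. The index name `my_configured_index` and the parameter values are illustrative, not requirements:

```json
PUT /my_configured_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_standard_analyzer": {
          "type": "standard",
          "max_token_length": 5,
          "stopwords": "_english_"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

With this configuration, a token such as `turtle` (six characters) would be split into `turtl` and `e`, and English stopwords such as "the" would be removed.
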
## Configuring a custom analyzer

Use the following command to configure an index with a custom analyzer that is equivalent to the `standard` analyzer:

```json
PUT /my_custom_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "stop"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated by the analyzer:

```json
POST /my_custom_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "The slow turtle swims away"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {"token": "slow", "start_offset": 4, "end_offset": 8, "type": "<ALPHANUM>", "position": 1},
    {"token": "turtle", "start_offset": 9, "end_offset": 15, "type": "<ALPHANUM>", "position": 2},
    {"token": "swims", "start_offset": 16, "end_offset": 21, "type": "<ALPHANUM>", "position": 3},
    {"token": "away", "start_offset": 22, "end_offset": 26, "type": "<ALPHANUM>", "position": 4}
  ]
}
```