add whitespace tokenizer docs #8484

2 changes: 1 addition & 1 deletion _analyzers/tokenizers/index.md
@@ -2,7 +2,7 @@
layout: default
title: Tokenizers
nav_order: 60
-has_children: false
+has_children: true
has_toc: false
redirect_from:
- /analyzers/tokenizers/index/
110 changes: 110 additions & 0 deletions _analyzers/tokenizers/whitespace.md
@@ -0,0 +1,110 @@
---
layout: default
title: Whitespace
parent: Tokenizers
nav_order: 160
---

# Whitespace tokenizer

The `whitespace` tokenizer splits text at white space characters, such as spaces, tabs, and newlines. It treats each word separated by white space as a token and does not perform any additional analysis or normalization, such as lowercasing or punctuation removal.
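
To preview this behavior without creating an index, you can pass the built-in `whitespace` tokenizer directly to the `_analyze` endpoint. The following minimal example (the sample text is illustrative) contains a tab, a newline, and a space:

```json
POST _analyze
{
  "tokenizer": "whitespace",
  "text": "one\ttwo\nthree four"
}
```
{% include copy-curl.html %}

All four words are returned as separate tokens, regardless of which white space character separates them.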

## Example usage

The following example request creates a new index named `my_index` and configures an analyzer with a `whitespace` tokenizer:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "whitespace_tokenizer": {
          "type": "whitespace"
        }
      },
      "analyzer": {
        "my_whitespace_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "my_whitespace_analyzer"
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /my_index/_analyze
{
  "analyzer": "my_whitespace_analyzer",
  "text": "OpenSearch is fast! Really fast."
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "OpenSearch",
      "start_offset": 0,
      "end_offset": 10,
      "type": "word",
      "position": 0
    },
    {
      "token": "is",
      "start_offset": 11,
      "end_offset": 13,
      "type": "word",
      "position": 1
    },
    {
      "token": "fast!",
      "start_offset": 14,
      "end_offset": 19,
      "type": "word",
      "position": 2
    },
    {
      "token": "Really",
      "start_offset": 20,
      "end_offset": 26,
      "type": "word",
      "position": 3
    },
    {
      "token": "fast.",
      "start_offset": 27,
      "end_offset": 32,
      "type": "word",
      "position": 4
    }
  ]
}
```
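
Note that `fast!` and `fast.` retain their punctuation and `OpenSearch` retains its capitalization: the tokenizer splits only on white space and performs no further normalization.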

## Parameters

The `whitespace` tokenizer can be configured with the following parameter.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`max_token_length` | Optional | Integer | Sets the maximum token length. If a token exceeds this length, it is split into multiple tokens at `max_token_length` intervals. Default is `255`.
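
For example, the following request creates an index with `max_token_length` set to `5` (the index and analyzer names are illustrative):

```json
PUT /my_short_token_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "short_whitespace_tokenizer": {
          "type": "whitespace",
          "max_token_length": 5
        }
      },
      "analyzer": {
        "short_token_analyzer": {
          "type": "custom",
          "tokenizer": "short_whitespace_tokenizer"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

Analyzing the text `OpenSearch` with this analyzer produces the tokens `OpenS` and `earch` because the 10-character word is split at 5-character intervals.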
