---
navigation_title: "Classic"
mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-classic-tokenizer.html
---
# Classic tokenizer [analysis-classic-tokenizer]
The `classic` tokenizer is a grammar-based tokenizer that is good for English language documents. This tokenizer has heuristics for special treatment of acronyms, company names, email addresses, and internet host names. However, these rules don't always work, and the tokenizer doesn't work well for most languages other than English:

* It splits words at most punctuation characters, removing punctuation. However, a dot that's not followed by whitespace is considered part of a token.
* It splits words at hyphens, unless there's a number in the token, in which case the whole token is interpreted as a product number and is not split.
* It recognizes email addresses and internet hostnames as one token.
## Example output [_example_output_8]
```console
POST _analyze
{
"tokenizer": "classic",
"text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The above sentence would produce the following terms:
```text
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
```
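The email-address and product-number heuristics described above can also be checked with the `_analyze` API. The following request is an illustrative sketch using made-up sample text; based on the rules above, the email address and the hyphenated part number would each be expected to survive as a single token:

```console
POST _analyze
{
  "tokenizer": "classic",
  "text": "Email john.doe@example.com about part XL-5000"
}
```

The expected terms would be roughly:

```text
[ Email, john.doe@example.com, about, part, XL-5000 ]
```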
## Configuration [_configuration_9]
The `classic` tokenizer accepts the following parameters:
`max_token_length`
: The maximum token length. If a token is seen that exceeds this length then it is split at `max_token_length` intervals. Defaults to `255`.
## Example configuration [_example_configuration_6]
In this example, we configure the `classic` tokenizer to have a `max_token_length` of 5 (for demonstration purposes):
```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "classic",
          "max_token_length": 5
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The above example produces the following terms. Because `max_token_length` is set to `5`, the token `jumped` exceeds the limit and is split at 5-character intervals:
```text
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
```