---
navigation_title: "Classic"
mapped_pages:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-classic-tokenizer.html
---

# Classic tokenizer [analysis-classic-tokenizer]

The `classic` tokenizer is a grammar-based tokenizer that is good for English language documents. This tokenizer has heuristics for the special treatment of acronyms, company names, email addresses, and internet host names. However, these rules don't always work, and the tokenizer doesn't work well for most languages other than English:

* It splits words at most punctuation characters, removing punctuation. However, a dot that's not followed by whitespace is considered part of a token.
* It splits words at hyphens, unless there's a number in the token, in which case the whole token is interpreted as a product number and is not split.
* It recognizes email addresses and internet hostnames as one token.
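As a quick illustration of the last two rules, you can run a request along the following lines against the `_analyze` API (the sample text here is ours, chosen only for illustration):

```console
POST _analyze
{
  "tokenizer": "classic",
  "text": "Email john.doe@example.com about the XL-500 bracket"
}
```

Going by the rules above, `john.doe@example.com` should come back as a single token rather than being split at the `.` or `@` characters, and `XL-500` should stay intact because the token contains a number.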

## Example output [_example_output_8]

```console
POST _analyze
{
  "tokenizer": "classic",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```

The above sentence would produce the following terms:

```text
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
```

## Configuration [_configuration_9]

The `classic` tokenizer accepts the following parameters:

`max_token_length`
:   The maximum token length. If a token exceeds this length, it is split at `max_token_length` intervals. Defaults to `255`.

## Example configuration [_example_configuration_6]

In this example, we configure the `classic` tokenizer to have a `max_token_length` of 5 (for demonstration purposes):

```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "classic",
          "max_token_length": 5
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```

The above example produces the following terms:

```text
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
```

Because `jumped` is longer than five characters, it is split at the `max_token_length` interval into `jumpe` and `d`.