---
navigation_title: "Standard"
mapped_pages:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-standard-tokenizer.html
---

# Standard tokenizer [analysis-standard-tokenizer]

The `standard` tokenizer provides grammar-based tokenization (based on the Unicode Text Segmentation algorithm, as specified in [Unicode Standard Annex #29](https://unicode.org/reports/tr29/)) and works well for most languages.

## Example output [_example_output_16]

```console
POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```

The above sentence would produce the following terms:

```text
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
```
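Note that `Brown-Foxes` is split on the hyphen, the possessive in `dog's` is kept intact, and the trailing period is dropped. To inspect the offsets and token types the tokenizer assigns to each term, one option is the `explain` flag of the `_analyze` API; a minimal sketch, reusing the request above:

```console
POST _analyze
{
  "tokenizer": "standard",
  "explain": true,
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```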

## Configuration [_configuration_19]

The `standard` tokenizer accepts the following parameters:

`max_token_length`
:   The maximum token length. If a token is seen that exceeds this length then it is split at `max_token_length` intervals. Defaults to `255`.
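In other words, a token longer than the limit is cut into consecutive chunks of at most `max_token_length` characters rather than being discarded. As a rough illustration (assuming an inline tokenizer definition in the `_analyze` request), a limit of 5 should turn `elasticsearch` into something like `[ elast, icsea, rch ]`:

```console
POST _analyze
{
  "tokenizer": {
    "type": "standard",
    "max_token_length": 5
  },
  "text": "elasticsearch"
}
```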

## Example configuration [_example_configuration_13]

In this example, we configure the `standard` tokenizer to have a `max_token_length` of 5 (for demonstration purposes):

```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```

The above example produces the following terms:

```text
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
```
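To apply the configured analyzer at index and search time, you would typically reference it from a text field mapping; a minimal sketch, assuming a hypothetical `title` field on the same index:

```console
PUT my-index-000001/_mapping
{
  "properties": {
    "title": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}
```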