---
navigation_title: "Standard"
mapped_pages:
---
# Standard tokenizer [analysis-standard-tokenizer]
The `standard` tokenizer provides grammar-based tokenization (based on the Unicode Text Segmentation algorithm, as specified in [Unicode Standard Annex #29](https://unicode.org/reports/tr29/)) and works well for most languages.
## Example output [_example_output_16]
```console
POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The above sentence would produce the following terms:
```text
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
```
## Configuration [_configuration_19]
The `standard` tokenizer accepts the following parameters:

`max_token_length`
:   The maximum token length. If a token is seen that exceeds this length then it is split at `max_token_length` intervals. Defaults to `255`.
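
The effect of `max_token_length` can be checked without creating an index, because the `_analyze` API also accepts an inline tokenizer definition. The following is a minimal sketch; the sample word and the expected terms are illustrative additions, not part of this reference:

```console
POST _analyze
{
  "tokenizer": {
    "type": "standard",
    "max_token_length": 5
  },
  "text": "extraordinary"
}
```

Here the thirteen-character word is split at five-character intervals, producing `[ extra, ordin, ary ]`.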
## Example configuration [_example_configuration_13]
In this example, we configure the `standard` tokenizer to have a `max_token_length` of 5 (for demonstration purposes):
```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The above example produces the following terms:

```text
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
```

Note that `jumped` exceeds the configured `max_token_length` of 5, so it is split at five-character intervals into `jumpe` and `d`.