The `edge_ngram` tokenizer limits tokens to the `max_gram` character length. Autocomplete searches for terms longer than this limit return no results. To prevent this, you can use the `truncate` token filter to truncate search terms to the `max_gram` character length. However, this could return irrelevant results. This commit adds advisory text that makes users aware of this limitation and outlines the tradeoffs of each approach. Closes #48956.
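For illustration, a minimal index setup along these lines pairs an `edge_ngram` index analyzer with a search analyzer that runs the `truncate` filter, so query tokens are cut to the same `max_gram` length before matching. This is only a sketch: the index name, analyzer, tokenizer and filter names, field name, and gram sizes are hypothetical, not taken from the docs this commit changes.

```
PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [ "lowercase" ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase",
          "filter": [ "truncate_to_max_gram" ]
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [ "letter" ]
        }
      },
      "filter": {
        "truncate_to_max_gram": {
          "type": "truncate",
          "length": 10
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "autocomplete_search"
      }
    }
  }
}
```

With this setup, a search term longer than 10 characters is truncated to its first 10 characters at query time, so it can still match the indexed edge n-grams; the tradeoff noted above is that such a query may also match other terms that merely share that 10-character prefix.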
chargroup-tokenizer.asciidoc
classic-tokenizer.asciidoc
edgengram-tokenizer.asciidoc
keyword-tokenizer.asciidoc
letter-tokenizer.asciidoc
lowercase-tokenizer.asciidoc
ngram-tokenizer.asciidoc
pathhierarchy-tokenizer-examples.asciidoc
pathhierarchy-tokenizer.asciidoc
pattern-tokenizer.asciidoc
simplepattern-tokenizer.asciidoc
simplepatternsplit-tokenizer.asciidoc
standard-tokenizer.asciidoc
thai-tokenizer.asciidoc
uaxurlemail-tokenizer.asciidoc
whitespace-tokenizer.asciidoc