[[analysis-letter-tokenizer]]
=== Letter Tokenizer
A tokenizer of type `letter` that divides text at non-letter characters.
That is to say, it defines tokens as maximal strings of adjacent letters.
Note that this works well for most European languages, but poorly for
some Asian languages, where words are not separated by spaces.
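
The behavior is easiest to see with the `_analyze` API. The request
below is a minimal sketch, assuming a recent Elasticsearch version in
which `_analyze` accepts a JSON request body; the sample text is
illustrative.

[source,js]
--------------------------------------------------
POST _analyze
{
  "tokenizer": "letter",
  "text": "The 2 QUICK Brown-Foxes jumped!"
}
--------------------------------------------------

Because digits, whitespace, hyphens, and punctuation are all
non-letters, the request above would produce the terms `The`, `QUICK`,
`Brown`, `Foxes`, and `jumped`: the `2` is dropped entirely, and
`Brown-Foxes` is split into two tokens.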