Mirror of https://github.com/elastic/elasticsearch.git
The first example of splitting rules for the `word_delimiter` token filter was spread across two bullet points, which made it look like two separate splitting rules.
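For context, the splitting rule in question is `word_delimiter`'s default split on intra-word delimiters (non-alphanumeric characters). A minimal sketch of that behavior, assuming a recent Elasticsearch cluster and the standard `_analyze` API (the input text here is illustrative, not taken from the commit):

```json
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [ "word_delimiter" ],
  "text": "Wi-Fi"
}
```

With default settings this returns the two tokens `Wi` and `Fi`: a single rule (split at the hyphen) producing two tokens, which is why the example reads better as one bullet point rather than two.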
Directories:
- analyzers
- charfilters
- tokenfilters
- tokenizers

Files:
- analyzers.asciidoc
- anatomy.asciidoc
- charfilters.asciidoc
- normalizers.asciidoc
- testing.asciidoc
- tokenfilters.asciidoc
- tokenizers.asciidoc