---
navigation_title: Simple pattern split
mapped_pages:
---

# Simple pattern split tokenizer [analysis-simplepatternsplit-tokenizer]
The `simple_pattern_split` tokenizer uses a regular expression to split the input into terms at pattern matches. The set of regular expression features it supports is more limited than the `pattern` tokenizer, but the tokenization is generally faster.

This tokenizer does not produce terms from the matches themselves. To produce terms from matches using patterns in the same restricted regular expression subset, see the `simple_pattern` tokenizer.
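The contrast between the two tokenizers can be illustrated with plain Python `re` (an analogy only; Lucene's regex engine supports a more restricted feature set than Python's):

```python
import re

text = "an_underscored_phrase"

# simple_pattern: terms come from the pattern matches themselves
# (analogous to re.findall)
print(re.findall("[a-z]+", text))  # ['an', 'underscored', 'phrase']

# simple_pattern_split: terms come from the text *between* matches
# (analogous to re.split)
print(re.split("_", text))  # ['an', 'underscored', 'phrase']
```

Both calls happen to produce the same terms here because matching runs of letters and splitting on underscores are duals of each other for this input.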
This tokenizer uses Lucene regular expressions. For an explanation of the supported features and syntax, see Regular Expression Syntax.
The default pattern is the empty string, which produces one term containing the full input. This tokenizer should always be configured with a non-default pattern.
## Configuration [_configuration_18]
The `simple_pattern_split` tokenizer accepts the following parameters:

`pattern`
:   A Lucene regular expression, defaults to the empty string.
## Example configuration [_example_configuration_12]
This example configures the `simple_pattern_split` tokenizer to split the input text on underscores.
```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern_split",
          "pattern": "_"
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "an_underscored_phrase"
}
```
The above example produces these terms:

```text
[ an, underscored, phrase ]
```
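As a rough mental model of the behavior above (plain Python `re`, not Lucene's more restricted regex engine; the helper below is purely illustrative and not part of Elasticsearch), the tokenizer splits at every pattern match and discards empty fragments:

```python
import re

def simple_pattern_split_sketch(text: str, pattern: str = "") -> list[str]:
    """Sketch of simple_pattern_split semantics using Python's re module."""
    # The default empty pattern never matches, so the whole input
    # becomes a single term.
    if not pattern:
        return [text]
    # Split at every pattern match; the tokenizer does not emit
    # empty terms, so drop empty fragments.
    return [term for term in re.split(pattern, text) if term]

print(simple_pattern_split_sketch("an_underscored_phrase", "_"))
# ['an', 'underscored', 'phrase']
print(simple_pattern_split_sketch("an_underscored_phrase"))
# ['an_underscored_phrase']
```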