# Analyzer reference [analysis-analyzers]
Elasticsearch ships with a wide range of built-in analyzers, which can be used in any index without further configuration:
- **Standard Analyzer**: The `standard` analyzer divides text into terms on word boundaries, as defined by the Unicode Text Segmentation algorithm. It removes most punctuation, lowercases terms, and supports removing stop words.
- **Simple Analyzer**: The `simple` analyzer divides text into terms whenever it encounters a character which is not a letter. It lowercases all terms.
- **Whitespace Analyzer**: The `whitespace` analyzer divides text into terms whenever it encounters any whitespace character. It does not lowercase terms.
- **Stop Analyzer**: The `stop` analyzer is like the `simple` analyzer, but also supports removal of stop words.
- **Keyword Analyzer**: The `keyword` analyzer is a noop analyzer that accepts whatever text it is given and outputs the exact same text as a single term.
- **Pattern Analyzer**: The `pattern` analyzer uses a regular expression to split the text into terms. It supports lower-casing and stop words.
- **Language Analyzers**: Elasticsearch provides many language-specific analyzers like `english` or `french`.
- **Fingerprint Analyzer**: The `fingerprint` analyzer is a specialist analyzer which creates a fingerprint which can be used for duplicate detection.
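To see how any of these built-in analyzers breaks a piece of text into terms, you can run it through the `_analyze` API. The request below is a minimal sketch: the choice of the `standard` analyzer and the sample sentence are illustrative only.

```console
POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```

The response lists the resulting terms, which for the `standard` analyzer are split on word boundaries, lowercased, and stripped of most punctuation.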
## Custom analyzers [_custom_analyzers]
If you do not find an analyzer suitable for your needs, you can create a custom
analyzer which combines the appropriate character filters, tokenizer, and token filters.
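As a rough illustration of what such a definition can look like, the sketch below creates an index whose settings define a custom analyzer combining one character filter, a tokenizer, and two token filters. The index name `my-index-000001` and the analyzer name `my_custom_analyzer` are placeholders, not names used elsewhere on this page.

```console
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip" ],
          "tokenizer": "standard",
          "filter": [ "lowercase", "stop" ]
        }
      }
    }
  }
}
```

Once defined, the analyzer can be referenced by name in a field mapping's `analyzer` parameter, or tested against the index with `POST my-index-000001/_analyze`.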