# kuromoji_completion token filter [analysis-kuromoji-completion]
The `kuromoji_completion` token filter adds Japanese romanized tokens to the term attributes along with the original tokens (surface forms).

```console
GET _analyze
{
  "analyzer": "kuromoji_completion",
  "text": "寿司" <1>
}
```

1. Returns `寿司`, `susi` (Kunrei-shiki) and `sushi` (Hepburn-shiki).
The `kuromoji_completion` token filter accepts the following settings:

`mode`
:   The tokenization mode determines how the tokenizer handles compound and unknown words. It can be set to:

    `index`
    :   Simple romanization. Expected to be used when indexing.

    `query`
    :   Input Method aware romanization. Expected to be used when querying.

    Defaults to `index`.
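To use a non-default `mode`, you can define the filter as a custom token filter in the index settings and reference it from a custom analyzer. The sketch below is illustrative: the index name (`kuromoji_sample`), analyzer name, and filter name are placeholders, and it assumes the `analysis-kuromoji` plugin is installed.

```console
PUT kuromoji_sample
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_completion_analyzer": {
          "type": "custom",
          "tokenizer": "kuromoji_tokenizer",
          "filter": [ "my_completion_filter" ]
        }
      },
      "filter": {
        "my_completion_filter": {
          "type": "kuromoji_completion",
          "mode": "query"
        }
      }
    }
  }
}
```

The custom analyzer can then be exercised with the `_analyze` API, for example `GET kuromoji_sample/_analyze` with `"analyzer": "my_completion_analyzer"`, to compare `query`-mode romanization against the default `index` mode.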