# ICU transform token filter [analysis-icu-transform]
Transforms are used to process Unicode text in many different ways, such as case mapping, normalization, transliteration and bidirectional text handling.
You can define which transformation you want to apply with the `id` parameter (defaults to `Null`), and specify text direction with the `dir` parameter, which accepts `forward` (default) for LTR and `reverse` for RTL. Custom rulesets are not yet supported.
For example:
```console
PUT icu_sample
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "latin": {
            "tokenizer": "keyword",
            "filter": [
              "myLatinTransform"
            ]
          }
        },
        "filter": {
          "myLatinTransform": {
            "type": "icu_transform",
            "id": "Any-Latin; NFD; [:Nonspacing Mark:] Remove; NFC" <1>
          }
        }
      }
    }
  }
}

GET icu_sample/_analyze
{
  "analyzer": "latin",
  "text": "你好" <2>
}

GET icu_sample/_analyze
{
  "analyzer": "latin",
  "text": "здравствуйте" <3>
}

GET icu_sample/_analyze
{
  "analyzer": "latin",
  "text": "こんにちは" <4>
}
```
1. This transliterates characters to Latin, separates accents from their base characters, removes the accents, and then puts the remaining text into unaccented form.
2. Returns `ni hao`.
3. Returns `zdravstvujte`.
4. Returns `kon'nichiha`.
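
The example above relies on the default `dir` value of `forward`. The following minimal sketch shows how the direction could be set explicitly; the index, analyzer, and filter names are illustrative only, and the effect of a reversed transform depends on the ICU ruleset (here, reversing `Cyrillic-Latin` would apply the Latin-to-Cyrillic direction):

```console
PUT icu_reverse_sample
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "cyrillic": {
            "tokenizer": "keyword",
            "filter": [ "myReverseTransform" ]
          }
        },
        "filter": {
          "myReverseTransform": {
            "type": "icu_transform",
            "id": "Cyrillic-Latin",
            "dir": "reverse" <1>
          }
        }
      }
    }
  }
}
```

1. `dir` accepts `forward` (the default) and `reverse`, as described above.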
For more documentation, please see the ICU Transform user guide.