# `polish_stop` token filter [analysis-polish-stop]
The `polish_stop` token filter, provided by the Stempel Polish analysis plugin, filters out Polish stopwords (`_polish_`) as well as any custom stopwords specified by the user. This filter only supports the predefined `_polish_` stopwords list. If you want to use a different predefined list, use the `stop` token filter instead (see the sketch after the example below).
```console
PUT /polish_stop_example
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "analyzer_with_stop": {
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "polish_stop"
            ]
          }
        },
        "filter": {
          "polish_stop": {
            "type": "polish_stop",
            "stopwords": [
              "_polish_",
              "jeść"
            ]
          }
        }
      }
    }
  }
}

GET polish_stop_example/_analyze
{
  "analyzer": "analyzer_with_stop",
  "text": "Gdzie kucharek sześć, tam nie ma co jeść."
}
```
The above request returns:
```console-result
{
  "tokens" : [
    {
      "token" : "kucharek",
      "start_offset" : 6,
      "end_offset" : 14,
      "type" : "<ALPHANUM>",
      "position" : 1
    },
    {
      "token" : "sześć",
      "start_offset" : 15,
      "end_offset" : 20,
      "type" : "<ALPHANUM>",
      "position" : 2
    }
  ]
}
```
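Only `kucharek` and `sześć` remain: the predefined `_polish_` list filters out the other words in the text (`gdzie`, `tam`, `nie`, `ma`, `co`), while the custom entry removes `jeść`.

As noted above, `polish_stop` only supports the predefined `_polish_` list. To filter with a different predefined list, configure the generic `stop` token filter instead. A minimal sketch, assuming the built-in `_english_` list (the `english_stop_example` index and filter names are illustrative, not part of the original example):

```console
PUT /english_stop_example
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "analyzer_with_english_stop": {
            "tokenizer": "standard",
            "filter": [
              "lowercase",
              "english_stop"
            ]
          }
        },
        "filter": {
          "english_stop": {
            "type": "stop",
            "stopwords": "_english_"
          }
        }
      }
    }
  }
}
```

The analyzer shape mirrors the Polish example above; only the filter definition changes.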