mirror of https://github.com/elastic/elasticsearch.git
synced 2025-04-25 07:37:19 -04:00

[DOCS] Removed outdated new/deprecated version notices

This commit is contained in:
parent d5a47e597d
commit 393c28bee4

49 changed files with 83 additions and 524 deletions
@@ -91,7 +91,6 @@ The hunspell token filter accepts four options:
 Configures the recursion level a
 stemmer can go into. Defaults to `2`. Some languages (for example czech)
 give better results when set to `1` or `0`, so you should test it out.
-(since 0.90.3)
 
 NOTE: As opposed to the snowball stemmers (which are algorithm based)
 this is a dictionary lookup based stemmer and therefore the quality of
@@ -9,8 +9,6 @@ subsequent stemmer will be indexed twice. Therefore, consider adding a
 `unique` filter with `only_on_same_position` set to `true` to drop
 unnecessary duplicates.
 
-Note: this is available from `0.90.0.Beta2` on.
-
 Here is an example:
 
 [source,js]
@@ -11,5 +11,3 @@ http://lucene.apache.org/core/4_3_1/analyzers-common/org/apache/lucene/analysis/
 or the
 http://lucene.apache.org/core/4_3_1/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizer.html[PersianNormalizer]
 documentation.
-
-*Note:* This filters are available since `0.90.2`
@@ -36,8 +36,7 @@ settings are: `ignore_case` (defaults to `false`), and `expand`
 The `tokenizer` parameter controls the tokenizers that will be used to
 tokenize the synonym, and defaults to the `whitespace` tokenizer.
 
-As of elasticsearch 0.17.9 two synonym formats are supported: Solr,
-WordNet.
+Two synonym formats are supported: Solr, WordNet.
 
 [float]
 ==== Solr synonyms
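For reference, the Solr synonym format mentioned in this hunk looks like the following sketch (the rule contents are invented for illustration):

```text
# Explicit mapping: tokens on the left are rewritten to the right-hand side
i-pod, i pod => ipod
# Equivalent synonyms: expanded to one another when `expand` is true
universe, cosmos
```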
@@ -16,7 +16,7 @@ type:
 
 |`max_gram` |Maximum size in codepoints of a single n-gram |`2`.
 
-|`token_chars` |(Since `0.90.2`) Characters classes to keep in the
+|`token_chars` | Characters classes to keep in the
 tokens, Elasticsearch will split on characters that don't belong to any
 of these classes. |`[]` (Keep all characters)
 |=======================================================================
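As an illustrative sketch of the `token_chars` setting from the tables above (index and tokenizer names are invented; adjust the tokenizer `type` to the page in question, e.g. `nGram` or `edgeNGram`):

```js
curl -XPUT 'localhost:9200/test' -d '{
  "settings" : {
    "analysis" : {
      "tokenizer" : {
        "my_tokenizer" : {
          "type" : "nGram",
          "min_gram" : 1,
          "max_gram" : 2,
          "token_chars" : [ "letter", "digit" ]
        }
      }
    }
  }
}'
```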
@@ -12,7 +12,7 @@ The following are settings that can be set for a `nGram` tokenizer type:
 
 |`max_gram` |Maximum size in codepoints of a single n-gram |`2`.
 
-|`token_chars` |(Since `0.90.2`) Characters classes to keep in the
+|`token_chars` |Characters classes to keep in the
 tokens, Elasticsearch will split on characters that don't belong to any
 of these classes. |`[]` (Keep all characters)
 |=======================================================================
@@ -83,7 +83,7 @@ The `all` flag can be set to return all the stats.
 [float]
 === Field data statistics
 
-From 0.90, you can get information about field data memory usage on node
+You can get information about field data memory usage on node
 level or on index level.
 
 [source,js]
@@ -119,7 +119,7 @@ There is a specific list of settings that can be updated, those include:
 `cluster.routing.allocation.exclude.*`::
 See <<modules-cluster>>.
 
-`cluster.routing.allocation.require.*` (from 0.90)::
+`cluster.routing.allocation.require.*`::
 See <<modules-cluster>>.
 
 [float]
@@ -177,10 +177,7 @@ There is a specific list of settings that can be updated, those include:
 See <<modules-indices>>
 
 `indices.recovery.max_bytes_per_sec`::
-Since 0.90.1. See <<modules-indices>>
+See <<modules-indices>>
 
-`indices.recovery.max_size_per_sec`::
-Deprecated since 0.90.1. See `max_bytes_per_sec` instead.
-
 [float]
 ==== Store level throttling
@@ -19,8 +19,8 @@ optional_source\n
 
 *NOTE*: the final line of data must end with a newline character `\n`.
 
-The possible actions are `index`, `create`, `delete` and since version
-`0.90.1` also `update`. `index` and `create` expect a source on the next
+The possible actions are `index`, `create`, `delete` and `update`.
+`index` and `create` expect a source on the next
 line, and have the same semantics as the `op_type` parameter to the
 standard index API (i.e. create will fail if a document with the same
 index and type exists already, whereas index will add or replace a
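The line-oriented bulk format described in this hunk can be sketched in shell (index, type and documents are invented); each action or source is its own line, and the `printf` newline satisfies the trailing-`\n` requirement:

```shell
# Build a hypothetical bulk body with index, create, update and delete actions
printf '%s\n' \
  '{ "index"  : { "_index" : "twitter", "_type" : "tweet", "_id" : "1" } }' \
  '{ "user" : "kimchy" }' \
  '{ "create" : { "_index" : "twitter", "_type" : "tweet", "_id" : "2" } }' \
  '{ "user" : "kimchy2" }' \
  '{ "update" : { "_index" : "twitter", "_type" : "tweet", "_id" : "1" } }' \
  '{ "doc" : { "user" : "kimchy_updated" } }' \
  '{ "delete" : { "_index" : "twitter", "_type" : "tweet", "_id" : "2" } }' \
  > bulk.ndjson
# Verify the final byte is a newline, as the bulk API requires
[ "$(tail -c 1 bulk.ndjson | wc -l)" -eq 1 ] && echo "ends with newline"
```

Such a file could then be sent with `curl --data-binary @bulk.ndjson` (plain `-d` would strip the newlines).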
@@ -82,17 +82,16 @@ extraction from _source, like `obj1.obj2`.
 [float]
 === Getting the _source directly
 
-Since version `0.90.1` there is a new rest end point that allows the
-source to be returned directly without any additional content around it.
-The get endpoint has the following structure:
-`{index}/{type}/{id}/_source`. Curl example:
+Use the `/{index}/{type}/{id}/_source` endpoint to get
+just the `_source` field of the document,
+without any additional content around it. For example:
 
 [source,js]
 --------------------------------------------------
 curl -XGET 'http://localhost:9200/twitter/tweet/1/_source'
 --------------------------------------------------
 
-Note, there is also a HEAD variant for the new _source endpoint. Curl
+Note, there is also a HEAD variant for the _source endpoint. Curl
 example:
 
 [source,js]
@@ -66,8 +66,7 @@ on the specific index settings).
 
 Automatic index creation can include a pattern based white/black list,
 for example, set `action.auto_create_index` to `+aaa*,-bbb*,+ccc*,-*` (+
-meaning allowed, and - meaning disallowed). Note, this feature is
-available since 0.20.
+meaning allowed, and - meaning disallowed).
 
 [float]
 === Versioning
@@ -6,7 +6,7 @@ The operation gets the document (collocated with the shard) from the
 index, runs the script (with optional script language and parameters),
 and index back the result (also allows to delete, or ignore the
 operation). It uses versioning to make sure no updates have happened
-during the "get" and "reindex". (available from `0.19` onwards).
+during the "get" and "reindex".
 
 Note, this operation still means full reindex of the document, it just
 removes some network roundtrips and reduces chances of version conflicts
@@ -92,7 +92,7 @@ ctx._source.tags.contains(tag) ? (ctx.op = \"none\") : ctx._source.tags += tag
 if (ctx._source.tags.contains(tag)) { ctx.op = \"none\" } else { ctx._source.tags += tag }
 --------------------------------------------------
 
-The update API also support passing a partial document (since 0.20),
+The update API also support passing a partial document,
 which will be merged into the existing document (simple recursive merge,
 inner merging of objects, replacing core "keys/values" and arrays). For
 example:
@@ -109,7 +109,7 @@ curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
 If both `doc` and `script` is specified, then `doc` is ignored. Best is
 to put your field pairs of the partial document in the script itself.
 
-There is also support for `upsert` (since 0.20). If the document does
+There is also support for `upsert`. If the document does
 not already exists, the content of the `upsert` element will be used to
 index the fresh doc:
 
@@ -126,7 +126,7 @@ curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
 }'
 --------------------------------------------------
 
-Last it also supports `doc_as_upsert` (since 0.90.2). So that the
+Last it also supports `doc_as_upsert`. So that the
 provided document will be inserted if the document does not already
 exist. This will reduce the amount of data that needs to be sent to
 elasticsearch.
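A minimal request sketch for the `doc_as_upsert` flag in this hunk, matching the `_update` examples in the surrounding docs (field values invented):

```js
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
    "doc" : {
        "name" : "new_name"
    },
    "doc_as_upsert" : true
}'
```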
@@ -164,8 +164,8 @@ including:
 so that the updated document appears in search results
 immediately.
 
-`fields`:: return the relevant fields from the document updated
-(since 0.20). Support `_source` to return the full updated
+`fields`:: return the relevant fields from the updated document.
+Support `_source` to return the full updated
 source.
 
 
@@ -36,7 +36,7 @@ curl -XPUT localhost:9200/test/_settings -d '{
 }'
 --------------------------------------------------
 
-From version 0.90, `index.routing.allocation.require.*` can be used to
+`index.routing.allocation.require.*` can be used to
 specify a number of rules, all of which MUST match in order for a shard
 to be allocated to a node. This is in contrast to `include` which will
 include a node if ANY rule matches.
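As a sketch of the `require` semantics in this hunk (the `zone` attribute name is invented; `require.*` keys match user-defined node attributes):

```js
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.require.zone" : "zone1"
}'
```

A shard of `test` would then only be allocated to nodes started with a matching `node.zone: zone1` attribute.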
@@ -10,8 +10,6 @@ Configuring custom postings formats is an expert feature and most likely
 using the builtin postings formats will suite your needs as is described
 in the <<mapping-core-types,mapping section>>
 
-Codecs are available in Elasticsearch from version `0.90.0.beta1`.
-
 [float]
 === Configuring a custom postings format
 
@@ -7,7 +7,7 @@ document based access to those values. The field data cache can be
 expensive to build for a field, so its recommended to have enough memory
 to allocate it, and to keep it loaded.
 
-From version 0.90 onwards, the amount of memory used for the field
+The amount of memory used for the field
 data cache can be controlled using `indices.fielddata.cache.size`. Note:
 reloading the field data which does not fit into your cache will be expensive
 and perform poorly.
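An illustrative `elasticsearch.yml` fragment for the setting in this hunk (the value shown is invented):

```yaml
# Bound the field data cache; accepts a percentage of heap or an absolute size
indices.fielddata.cache.size: 30%
```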
@@ -117,24 +117,6 @@ The `frequency` and `regex` filters can be combined:
 }
 --------------------------------------------------
 
-[float]
-=== Settings before v0.90
-
-[cols="<,<",options="header",]
-|=======================================================================
-|Setting |Description
-|`index.cache.field.type` |The default type for the field data cache is
-`resident` (because of the cost of rebuilding it). Other types include
-`soft`
-
-|`index.cache.field.max_size` |The max size (count, not byte size) of
-the cache (per search segment in a shard). Defaults to not set (`-1`).
-
-|`index.cache.field.expire` |A time based setting that expires filters
-after a certain time of inactivity. Defaults to `-1`. For example, can
-be set to `5m` for a 5 minute expiry.
-|=======================================================================
-
 [float]
 === Monitoring field data
 
@@ -9,8 +9,6 @@ Configuring a custom similarity is considered a expert feature and the
 builtin similarities are most likely sufficient as is described in the
 <<mapping-core-types,mapping section>>
 
-Configuring similarities is a `0.90.0.Beta1` feature.
-
 [float]
 === Configuring a similarity
 
@@ -18,38 +18,10 @@ heap space* using the "Memory" (see below) storage type. It translates
 to the fact that there is no need for extra large JVM heaps (with their
 own consequences) for storing the index in memory.
 
-[float]
-=== Store Level Compression
-
-*From version 0.90 onwards, store compression is always enabled.*
-
-For versions 0.19.5 to 0.20:
-
-In the mapping, one can configure the `_source` field to be compressed.
-The problem with it is the fact that small documents don't end up
-compressing well, as several documents compressed in a single
-compression "block" will provide a considerable better compression
-ratio. This version introduces the ability to compress stored fields
-using the `index.store.compress.stored` setting, as well as term vector
-using the `index.store.compress.tv` setting.
-
-The settings can be set on the index level, and are dynamic, allowing to
-change them using the index update settings API. elasticsearch can
-handle mixed stored / non stored cases. This allows, for example, to
-enable compression at a later stage in the index lifecycle, and optimize
-the index to make use of it (generating new segments that use
-compression).
-
-Best compression, compared to _source level compression, will mainly
-happen when indexing smaller documents (less than 64k). The price on the
-other hand is the fact that for each doc returned, a block will need to
-be decompressed (its fast though) in order to extract the document data.
-
 [float]
 === Store Level Throttling
 
-(0.19.5 and above).
-
 The way Lucene, the IR library elasticsearch uses under the covers,
 works is by creating immutable segments (up to deletes) and constantly
 merging them (the merge policy settings allow to control how those
@@ -66,7 +38,7 @@ node, the merge process won't pass the specific setting bytes per
 second. It can be set by setting `indices.store.throttle.type` to
 `merge`, and setting `indices.store.throttle.max_bytes_per_sec` to
 something like `5mb`. The node level settings can be changed dynamically
-using the cluster update settings API. Since 0.90.1 the default is set
+using the cluster update settings API. The default is set
 to `20mb` with type `merge`.
 
 If specific index level configuration is needed, regardless of the node
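The dynamic update mentioned in this hunk can be sketched with the cluster update settings API (values taken from the prose above; `transient` means the setting does not survive a full cluster restart):

```js
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "indices.store.throttle.type" : "merge",
        "indices.store.throttle.max_bytes_per_sec" : "5mb"
    }
}'
```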
@@ -152,8 +152,7 @@ curl -XGET 'http://localhost:9200/alias2/_search?q=user:kimchy&routing=2,3'
 [float]
 === Add a single index alias
 
-From version `0.90.1` there is an api to add a single index alias,
-options:
+There is also an api to add a single index alias, with options:
 
 [horizontal]
 `index`:: The index to alias refers to. This is a required option.
@@ -190,8 +189,7 @@ curl -XPUT 'localhost:9200/users/_alias/user_12' -d '{
 [float]
 === Delete a single index alias
 
-From version `0.90.1` there is an api to delete a single index alias,
-options:
+The API to delete a single index alias has options:
 
 [horizontal]
 `index`:: The index the alias is in, the needs to be deleted. This is
@@ -208,7 +206,7 @@ curl -XDELETE 'localhost:9200/users/_alias/user_12'
 [float]
 === Retrieving existing aliases
 
-The get index alias api (Available since `0.90.1`) allows to filter by
+The get index alias api allows to filter by
 alias name and index name. This api redirects to the master and fetches
 the requested index aliases, if available. This api only serialises the
 found index aliases.
@@ -336,16 +334,3 @@ curl -XHEAD 'localhost:9200/_alias/2013'
 curl -XHEAD 'localhost:9200/_alias/2013_01*'
 curl -XHEAD 'localhost:9200/users/_alias/*'
 --------------------------------------------------
-
-[float]
-=== Pre 0.90.1 way of getting index aliases
-
-Aliases can be retrieved using the get aliases API, which can either
-return all indices with all aliases, or just for specific indices:
-
-[source,js]
---------------------------------------------------
-curl -XGET 'localhost:9200/test/_aliases'
-curl -XGET 'localhost:9200/test1,test2/_aliases'
-curl -XGET 'localhost:9200/_aliases'
---------------------------------------------------
@@ -1,8 +1,7 @@
 [[indices-types-exists]]
 == Types Exists
 
-Used to check if a type/types exists in an index/indices (available
-since 0.20).
+Used to check if a type/types exists in an index/indices.
 
 [source,js]
 --------------------------------------------------
@@ -4,8 +4,7 @@
 Index warming allows to run registered search requests to warm up the
 index before it is available for search. With the near real time aspect
 of search, cold data (segments) will be warmed up before they become
-available for search. This feature is available from version 0.20
-onwards.
+available for search.
 
 Warmup searches typically include requests that require heavy loading of
 data, such as faceting or sorting on specific fields. The warmup APIs
@@ -22,11 +22,6 @@ using:
 }
 --------------------------------------------------
 
-In order to maintain backward compatibility, a node level setting
-`index.mapping._id.indexed` can be set to `true` to make sure that the
-id is indexed when upgrading to `0.16`, though it's recommended to not
-index the id.
-
 The `_id` mapping can also be associated with a `path` that will be used
 to extract the id from a different location in the source document. For
 example, having the following mapping:
@@ -21,30 +21,6 @@ example:
 }
 --------------------------------------------------
 
-[float]
-==== Compression
-
-*From version 0.90 onwards, all stored fields (including `_source`) are
-always compressed.*
-
-For versions before 0.90:
-
-The source field can be compressed (LZF) when stored in the index. This
-can greatly reduce the index size, as well as possibly improving
-performance (when decompression overhead is better than loading a bigger
-source from disk). The code takes special care to decompress the source
-only when needed, for example decompressing it directly into the REST
-stream of a result.
-
-In order to enable compression, the `compress` option should be set to
-`true`. By default it is set to `false`. Note, this can be changed on an
-existing index, as a mix of compressed and uncompressed sources is
-supported.
-
-Moreover, a `compress_threshold` can be set to control when the source
-will be compressed. It accepts a byte size value (for example `100b`,
-`10kb`). Note, `compress` should be set to `true`.
-
 [float]
 ==== Includes / Excludes
 
@@ -100,16 +100,12 @@ all.
 to `false` for `analyzed` fields, and to `true` for `not_analyzed`
 fields.
 
-|`omit_term_freq_and_positions` |Boolean value if term freq and
-positions should be omitted. Defaults to `false`. Deprecated since 0.20,
-see `index_options`.
-
-|`index_options` |Available since 0.20. Allows to set the indexing
+|`index_options` | Allows to set the indexing
 options, possible values are `docs` (only doc numbers are indexed),
 `freqs` (doc numbers and term frequencies), and `positions` (doc
 numbers, term frequencies and positions). Defaults to `positions` for
-`analyzed` fields, and to `docs` for `not_analyzed` fields. Since 0.90
-it is also possible to set it to `offsets` (doc numbers, term
+`analyzed` fields, and to `docs` for `not_analyzed` fields. It
+is also possible to set it to `offsets` (doc numbers, term
 frequencies, positions and offsets).
 
 |`analyzer` |The analyzer used to analyze the text contents when
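A mapping sketch for the `index_options` values discussed in this hunk (index, type and field names invented):

```js
curl -XPUT 'localhost:9200/test/tweet/_mapping' -d '{
    "tweet" : {
        "properties" : {
            "message" : { "type" : "string", "index_options" : "offsets" }
        }
    }
}'
```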
@@ -128,7 +124,6 @@ defaults to `true` or to the parent `object` type setting.
 
 |`ignore_above` |The analyzer will ignore strings larger than this size.
 Useful for generic `not_analyzed` fields that should ignore long text.
-(since @0.19.9).
 
 |`position_offset_gap` |Position increment gap between field instances
 with the same field name. Defaults to 0.
@@ -212,7 +207,7 @@ enabled). If `index` is set to `no` this defaults to `false`, otherwise,
 defaults to `true` or to the parent `object` type setting.
 
 |`ignore_malformed` |Ignored a malformed number. Defaults to `false`.
-(Since @0.19.9).
 |=======================================================================
 
 [float]
@@ -276,7 +271,7 @@ enabled). If `index` is set to `no` this defaults to `false`, otherwise,
 defaults to `true` or to the parent `object` type setting.
 
 |`ignore_malformed` |Ignored a malformed number. Defaults to `false`.
-(Since @0.19.9).
 |=======================================================================
 
 [float]
@@ -402,9 +397,8 @@ to reload the fielddata using the new filters.
 
 Posting formats define how fields are written into the index and how
 fields are represented into memory. Posting formats can be defined per
-field via the `postings_format` option. Postings format are configurable
-since version `0.90.0.Beta1`. Elasticsearch has several builtin
-formats:
+field via the `postings_format` option. Postings format are configurable.
+Elasticsearch has several builtin formats:
 
 `direct`::
 A postings format that uses disk-based storage but loads
@@ -463,8 +457,7 @@ information.
 [float]
 ==== Similarity
 
-From version `0.90.Beta1` Elasticsearch includes changes from Lucene 4
-that allows you to configure a similarity (scoring algorithm) per field.
+Elasticsearch allows you to configure a similarity (scoring algorithm) per field.
 Allowing users a simpler extension beyond the usual TF/IDF algorithm. As
 part of this, new algorithms have been added including BM25. Also as
 part of the changes, it is now possible to define a Similarity per
@@ -17,11 +17,6 @@ http://www.vividsolutions.com/jts/jtshome.htm[JTS], both of which are
 optional dependencies. Consequently you must add Spatial4J v0.3 and JTS
 v1.12 to your classpath in order to use this type.
 
-Note, the implementation of geo_shape was modified in an API breaking
-way in 0.90. Implementations prior to this version had significant
-issues and users are recommended to update to the latest version of
-Elasticsearch if they wish to use the geo_shape functionality.
-
 [float]
 ==== Mapping Options
 
@ -11,8 +11,6 @@ include::modules/http.asciidoc[]
|
||||||
|
|
||||||
include::modules/indices.asciidoc[]
|
include::modules/indices.asciidoc[]
|
||||||
|
|
||||||
include::modules/jmx.asciidoc[]
|
|
||||||
|
|
||||||
include::modules/memcached.asciidoc[]
|
include::modules/memcached.asciidoc[]
|
||||||
|
|
||||||
include::modules/network.asciidoc[]
|
include::modules/network.asciidoc[]
|
||||||
|
|
|
@ -177,7 +177,7 @@ curl -XPUT localhost:9200/test/_settings -d '{
|
||||||
}'
|
}'
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
From version 0.90, `index.routing.allocation.require.*` can be used to
|
`index.routing.allocation.require.*` can be used to
|
||||||
specify a number of rules, all of which MUST match in order for a shard
|
specify a number of rules, all of which MUST match in order for a shard
|
||||||
to be allocated to a node. This is in contrast to `include` which will
|
to be allocated to a node. This is in contrast to `include` which will
|
||||||
include a node if ANY rule matches.
|
include a node if ANY rule matches.
|
||||||
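For example, to require that both a zone and a disk attribute match before a shard is allocated (the attribute names and values are illustrative):

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.require.zone" : "us-east-1",
    "index.routing.allocation.require.disk_type" : "ssd"
}'
--------------------------------------------------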
|
|
|
@ -68,9 +68,7 @@ As part of the initial ping process a master of the cluster is either
|
||||||
elected or joined to. This is done automatically. The
|
elected or joined to. This is done automatically. The
|
||||||
`discovery.zen.ping_timeout` (which defaults to `3s`) allows to
|
`discovery.zen.ping_timeout` (which defaults to `3s`) allows to
|
||||||
configure the election to handle cases of slow or congested networks
|
configure the election to handle cases of slow or congested networks
|
||||||
(higher values assure less chance of failure). Note, this setting was
|
(higher values assure less chance of failure).
|
||||||
changed from 0.15.1 onwards, prior it was called
|
|
||||||
`discovery.zen.initial_ping_timeout`.
|
|
||||||
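On a slow or congested network, the timeout can be raised in `elasticsearch.yml` (the value is illustrative):

[source,yaml]
--------------------------------------------------
discovery.zen.ping_timeout: 10s
--------------------------------------------------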
|
|
||||||
Nodes can be excluded from becoming a master by setting `node.master` to
|
Nodes can be excluded from becoming a master by setting `node.master` to
|
||||||
`false`. Note, once a node is a client node (`node.client` set to
|
`false`. Note, once a node is a client node (`node.client` set to
|
||||||
|
|
|
@ -56,10 +56,7 @@ The following settings can be set to manage recovery policy:
|
||||||
defaults to `true`.
|
defaults to `true`.
|
||||||
|
|
||||||
`indices.recovery.max_bytes_per_sec`::
|
`indices.recovery.max_bytes_per_sec`::
|
||||||
since 0.90.1, defaults to `20mb`.
|
defaults to `20mb`.
|
||||||
|
|
||||||
`indices.recovery.max_size_per_sec`::
|
|
||||||
deprecated from 0.90.1. Replaced by `indices.recovery.max_bytes_per_sec`.
|
|
||||||
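A sketch of raising the recovery throttle through the cluster update settings API, assuming the setting is dynamic in your version (the value is illustrative):

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "indices.recovery.max_bytes_per_sec" : "50mb"
    }
}'
--------------------------------------------------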
|
|
||||||
[float]
|
[float]
|
||||||
=== Store level throttling
|
=== Store level throttling
|
||||||
|
|
|
@ -1,34 +0,0 @@
|
||||||
[[modules-jmx]]
|
|
||||||
== JMX
|
|
||||||
|
|
||||||
[float]
|
|
||||||
=== REMOVED AS OF v0.90
|
|
||||||
|
|
||||||
Use the stats APIs instead.
|
|
||||||
|
|
||||||
The JMX module exposes node information through
|
|
||||||
http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/[JMX].
|
|
||||||
JMX can be used by either
|
|
||||||
http://en.wikipedia.org/wiki/JConsole[jconsole] or
|
|
||||||
http://en.wikipedia.org/wiki/VisualVM[VisualVM].
|
|
||||||
|
|
||||||
Exposed JMX data include both node level information, as well as
|
|
||||||
instantiated index and shard on specific node. This is a work in
|
|
||||||
progress with each version exposing more information.
|
|
||||||
|
|
||||||
[float]
|
|
||||||
=== jmx.domain
|
|
||||||
|
|
||||||
The domain under which the JMX will register under can be set using
|
|
||||||
`jmx.domain` setting. It defaults to `{elasticsearch}`.
|
|
||||||
|
|
||||||
[float]
|
|
||||||
=== jmx.create_connector
|
|
||||||
|
|
||||||
An RMI connector can be started to accept JMX requests. This can be
|
|
||||||
enabled by setting `jmx.create_connector` to `true`. An RMI connector
|
|
||||||
does come with its own overhead, make sure you really need it.
|
|
||||||
|
|
||||||
When an RMI connector is created, the `jmx.port` setting provides a port
|
|
||||||
range setting for the ports the rmi connector can open on. By default,
|
|
||||||
it is set to `9400-9500`.
|
|
|
@ -17,8 +17,14 @@ Installing plugins can either be done manually by placing them under the
|
||||||
be found under the https://github.com/elasticsearch[elasticsearch]
|
be found under the https://github.com/elasticsearch[elasticsearch]
|
||||||
organization in GitHub, starting with `elasticsearch-`.
|
organization in GitHub, starting with `elasticsearch-`.
|
||||||
|
|
||||||
Starting from 0.90.2, installing plugins typically take the form of
|
Installing a plugin typically takes the following form:
|
||||||
`plugin --install <org>/<user/component>/<version>`. The plugins will be
|
|
||||||
|
[source,shell]
|
||||||
|
-----------------------------------
|
||||||
|
plugin --install <org>/<user/component>/<version>
|
||||||
|
-----------------------------------
|
||||||
|
|
||||||
|
The plugins will be
|
||||||
automatically downloaded in this case from `download.elasticsearch.org`,
|
automatically downloaded in this case from `download.elasticsearch.org`,
|
||||||
and in case they don't exist there, from maven (central and sonatype).
|
and in case they don't exist there, from maven (central and sonatype).
|
||||||
|
|
||||||
|
@ -26,17 +32,16 @@ Note that when the plugin is located in maven central or sonatype
|
||||||
repository, `<org>` is the artifact `groupId` and `<user/component>` is
|
repository, `<org>` is the artifact `groupId` and `<user/component>` is
|
||||||
the `artifactId`.
|
the `artifactId`.
|
||||||
|
|
||||||
For prior version, the older form is
|
|
||||||
`plugin -install <org>/<user/component>/<version>`
|
|
||||||
|
|
||||||
A plugin can also be installed directly by specifying the URL for it,
|
A plugin can also be installed directly by specifying the URL for it,
|
||||||
for example:
|
for example:
|
||||||
`bin/plugin --url file://path/to/plugin --install plugin-name` or
|
|
||||||
`bin/plugin -url file://path/to/plugin -install plugin-name` for older
|
|
||||||
version.
|
|
||||||
|
|
||||||
Starting from 0.90.2, for more information about plugins, you can run
|
[source,shell]
|
||||||
`bin/plugin -h`.
|
-----------------------------------
|
||||||
|
bin/plugin --url file://path/to/plugin --install plugin-name
|
||||||
|
-----------------------------------
|
||||||
|
|
||||||
|
|
||||||
|
For more information, you can run `bin/plugin -h`.
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
==== Site Plugins
|
==== Site Plugins
|
||||||
|
@ -56,13 +61,8 @@ running:
|
||||||
|
|
||||||
[source,js]
|
[source,js]
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
# From 0.90.2
|
|
||||||
bin/plugin --install mobz/elasticsearch-head
|
bin/plugin --install mobz/elasticsearch-head
|
||||||
bin/plugin --install lukas-vlcek/bigdesk
|
bin/plugin --install lukas-vlcek/bigdesk
|
||||||
|
|
||||||
# From a prior version
|
|
||||||
bin/plugin -install mobz/elasticsearch-head
|
|
||||||
bin/plugin -install lukas-vlcek/bigdesk
|
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
This will install both of those site plugins, with `elasticsearch-head`
|
This will install both of those site plugins, with `elasticsearch-head`
|
||||||
|
|
|
@ -7,29 +7,28 @@ pools, but the important ones include:
|
||||||
|
|
||||||
[horizontal]
|
[horizontal]
|
||||||
`index`::
|
`index`::
|
||||||
For index/delete operations, defaults to `fixed` type since
|
For index/delete operations, defaults to `fixed`,
|
||||||
`0.90.0`, size `# of available processors`. (previously type `cached`)
|
size `# of available processors`.
|
||||||
|
|
||||||
`search`::
|
`search`::
|
||||||
For count/search operations, defaults to `fixed` type since
|
For count/search operations, defaults to `fixed`,
|
||||||
`0.90.0`, size `3x # of available processors`. (previously type
|
size `3x # of available processors`.
|
||||||
`cached`)
|
|
||||||
|
|
||||||
`get`::
|
`get`::
|
||||||
For get operations, defaults to `fixed` type since `0.90.0`,
|
For get operations, defaults to `fixed`,
|
||||||
size `# of available processors`. (previously type `cached`)
|
size `# of available processors`.
|
||||||
|
|
||||||
`bulk`::
|
`bulk`::
|
||||||
For bulk operations, defaults to `fixed` type since `0.90.0`,
|
For bulk operations, defaults to `fixed`,
|
||||||
size `# of available processors`. (previously type `cached`)
|
size `# of available processors`.
|
||||||
|
|
||||||
`warmer`::
|
`warmer`::
|
||||||
For segment warm-up operations, defaults to `scaling` since
|
For segment warm-up operations, defaults to `scaling`
|
||||||
`0.90.0` with a `5m` keep-alive. (previously type `cached`)
|
with a `5m` keep-alive.
|
||||||
|
|
||||||
`refresh`::
|
`refresh`::
|
||||||
For refresh operations, defaults to `scaling` since
|
For refresh operations, defaults to `scaling`
|
||||||
`0.90.0` with a `5m` keep-alive. (previously type `cached`)
|
with a `5m` keep-alive.
|
||||||
|
|
||||||
Changing a specific thread pool can be done by setting its type and
|
Changing a specific thread pool can be done by setting its type and
|
||||||
specific type parameters, for example, changing the `index` thread pool
|
specific type parameters, for example, changing the `index` thread pool
|
||||||
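A sketch of such a change in `elasticsearch.yml` (the sizes are illustrative):

[source,yaml]
--------------------------------------------------
threadpool:
    index:
        type: fixed
        size: 30
        queue_size: 1000
--------------------------------------------------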
|
|
|
@ -119,19 +119,3 @@ can contain 10s-100s of coordinates and any one differing means a new
|
||||||
shape, it may make sense to only use caching when you are sure that
|
shape, it may make sense to only use caching when you are sure that
|
||||||
the shapes will remain reasonably static.
|
the shapes will remain reasonably static.
|
||||||
|
|
||||||
[float]
|
|
||||||
==== Compatibility with older versions
|
|
||||||
|
|
||||||
Elasticsearch 0.90 changed the geo_shape implementation in a way that is
|
|
||||||
not compatible. Prior to this version, there was a required `relation`
|
|
||||||
field on queries and filter queries that indicated the relation of the
|
|
||||||
query shape to the indexed shapes. Support for this was implemented in
|
|
||||||
Elasticsearch and was poorly aligned with the underlying Lucene
|
|
||||||
implementation, which has no notion of a relation. From 0.90, this field
|
|
||||||
defaults to its only supported value: `intersects`. The other values of
|
|
||||||
`contains`, `within`, `disjoint` are no longer supported. By using e.g.
|
|
||||||
a bool filter, one can easily emulate `disjoint`. Given the imprecise
|
|
||||||
accuracy (see
|
|
||||||
<<mapping-geo-shape-type,geo_shape Mapping>>),
|
|
||||||
`within` and `contains` were always somewhat problematic and
|
|
||||||
`intersects` is generally good enough.
|
|
||||||
|
|
|
@ -7,9 +7,6 @@ type. This filter return child documents which associated parents have
|
||||||
matched. For the rest, the `has_parent` filter has the same options and works
|
matched. For the rest, the `has_parent` filter has the same options and works
|
||||||
in the same manner as the `has_child` filter.
|
in the same manner as the `has_child` filter.
|
||||||
|
|
||||||
The `has_parent` filter is available from version `0.19.10`. This is an
|
|
||||||
experimental filter.
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
==== Filter example
|
==== Filter example
|
||||||
|
|
||||||
|
|
|
@ -90,8 +90,6 @@ Potentially the amount of user ids specified in the terms filter can be
|
||||||
a lot. In this scenario it makes sense to use the terms filter's terms
|
a lot. In this scenario it makes sense to use the terms filter's terms
|
||||||
lookup mechanism.
|
lookup mechanism.
|
||||||
|
|
||||||
The terms lookup mechanism is supported from version `0.90.0.Beta1`.
|
|
||||||
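A terms filter using the lookup mechanism might look like the following sketch, where the index, type, id, path and field values are illustrative:

[source,js]
--------------------------------------------------
{
    "filter" : {
        "terms" : {
            "user" : {
                "index" : "users",
                "type" : "user",
                "id" : "2",
                "path" : "followers"
            }
        }
    }
}
--------------------------------------------------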
|
|
||||||
The terms lookup mechanism supports the following options:
|
The terms lookup mechanism supports the following options:
|
||||||
|
|
||||||
[horizontal]
|
[horizontal]
|
||||||
|
|
|
@ -82,8 +82,6 @@ include::queries/top-children-query.asciidoc[]
|
||||||
|
|
||||||
include::queries/wildcard-query.asciidoc[]
|
include::queries/wildcard-query.asciidoc[]
|
||||||
|
|
||||||
include::queries/text-query.asciidoc[]
|
|
||||||
|
|
||||||
include::queries/minimum-should-match.asciidoc[]
|
include::queries/minimum-should-match.asciidoc[]
|
||||||
|
|
||||||
include::queries/multi-term-rewrite.asciidoc[]
|
include::queries/multi-term-rewrite.asciidoc[]
|
||||||
|
|
|
@ -47,20 +47,3 @@ Currently Elasticsearch does not have any notion of geo shape relevancy,
|
||||||
consequently the Query internally uses a `constant_score` Query which
|
consequently the Query internally uses a `constant_score` Query which
|
||||||
wraps a <<query-dsl-geo-shape-filter,geo_shape
|
wraps a <<query-dsl-geo-shape-filter,geo_shape
|
||||||
filter>>.
|
filter>>.
|
||||||
|
|
||||||
[float]
|
|
||||||
==== Compatibility with older versions
|
|
||||||
|
|
||||||
Elasticsearch 0.90 changed the geo_shape implementation in a way that is
|
|
||||||
not compatible. Prior to this version, there was a required `relation`
|
|
||||||
field on queries and filter queries that indicated the relation of the
|
|
||||||
query shape to the indexed shapes. Support for this was implemented in
|
|
||||||
Elasticsearch and was poorly aligned with the underlying Lucene
|
|
||||||
implementation, which has no notion of a relation. From 0.90, this field
|
|
||||||
defaults to its only supported value: `intersects`. The other values of
|
|
||||||
`contains`, `within`, `disjoint` are no longer supported. By using e.g.
|
|
||||||
a bool filter, one can easily emulate `disjoint`. Given the imprecise
|
|
||||||
accuracy (see
|
|
||||||
<<mapping-geo-shape-type,geo_shape Mapping>>),
|
|
||||||
`within` and `contains` were always somewhat problematic and
|
|
||||||
`intersects` is generally good enough.
|
|
||||||
|
|
|
@ -30,7 +30,7 @@ query the `total_hits` is always correct.
|
||||||
[float]
|
[float]
|
||||||
==== Scoring capabilities
|
==== Scoring capabilities
|
||||||
|
|
||||||
The `has_child` also has scoring support from version `0.20.2`. The
|
The `has_child` query also has scoring support. The
|
||||||
supported score types are `max`, `sum`, `avg` or `none`. The default is
|
supported score types are `max`, `sum`, `avg` or `none`. The default is
|
||||||
`none` and yields the same behaviour as in previous versions. If the
|
`none` and yields the same behaviour as in previous versions. If the
|
||||||
score type is set to another value than `none`, the scores of all the
|
score type is set to another value than `none`, the scores of all the
|
||||||
|
@ -53,30 +53,6 @@ inside the `has_child` query:
|
||||||
}
|
}
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
[float]
|
|
||||||
==== Scope
|
|
||||||
|
|
||||||
The `_scope` support has been removed from version `0.90.beta1`. See:
|
|
||||||
https://github.com/elasticsearch/elasticsearch/issues/2606
|
|
||||||
|
|
||||||
A `_scope` can be defined on the filter allowing to run facets on the
|
|
||||||
same scope name that will work against the child documents. For example:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"has_child" : {
|
|
||||||
"_scope" : "my_scope",
|
|
||||||
"type" : "blog_tag",
|
|
||||||
"query" : {
|
|
||||||
"term" : {
|
|
||||||
"tag" : "something"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
==== Memory Considerations
|
==== Memory Considerations
|
||||||
|
|
||||||
|
|
|
@ -6,8 +6,7 @@ The `has_parent` query works the same as the
|
||||||
filter, by automatically wrapping the filter with a constant_score (when
|
filter, by automatically wrapping the filter with a constant_score (when
|
||||||
using the default score type). It has the same syntax as the
|
using the default score type). It has the same syntax as the
|
||||||
<<query-dsl-has-parent-filter,has_parent>>
|
<<query-dsl-has-parent-filter,has_parent>>
|
||||||
filter. This query is experimental and is available from version
|
filter.
|
||||||
`0.19.10`.
|
|
||||||
|
|
||||||
[source,js]
|
[source,js]
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
@ -26,7 +25,7 @@ filter. This query is experimental and is available from version
|
||||||
[float]
|
[float]
|
||||||
==== Scoring capabilities
|
==== Scoring capabilities
|
||||||
|
|
||||||
The `has_parent` also has scoring support from version `0.20.2`. The
|
The `has_parent` query also has scoring support. The
|
||||||
supported score types are `score` or `none`. The default is `none` and
|
supported score types are `score` or `none`. The default is `none` and
|
||||||
this ignores the score from the parent document. The score is in this
|
this ignores the score from the parent document. The score is in this
|
||||||
case equal to the boost on the `has_parent` query (Defaults to 1). If
|
case equal to the boost on the `has_parent` query (Defaults to 1). If
|
||||||
|
@ -50,31 +49,6 @@ matching parent document. The score type can be specified with the
|
||||||
}
|
}
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
[float]
|
|
||||||
==== Scope
|
|
||||||
|
|
||||||
The `_scope` support has been removed from version `0.90.beta1`. See:
|
|
||||||
https://github.com/elasticsearch/elasticsearch/issues/2606
|
|
||||||
|
|
||||||
A `_scope` can be defined on the filter allowing to run facets on the
|
|
||||||
same scope name that will work against the parent documents. For
|
|
||||||
example:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"has_parent" : {
|
|
||||||
"_scope" : "my_scope",
|
|
||||||
"parent_type" : "blog",
|
|
||||||
"query" : {
|
|
||||||
"term" : {
|
|
||||||
"tag" : "something"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
==== Memory Considerations
|
==== Memory Considerations
|
||||||
|
|
||||||
|
|
|
@ -58,8 +58,7 @@ change in structure, `message` is the field name):
|
||||||
}
|
}
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
zero_terms_query
|
.zero_terms_query
|
||||||
|
|
||||||
If the analyzer used removes all tokens in a query like a `stop` filter
|
If the analyzer used removes all tokens in a query like a `stop` filter
|
||||||
does, the default behavior is to match no documents at all. In order to
|
does, the default behavior is to match no documents at all. In order to
|
||||||
change that the `zero_terms_query` option can be used, which accepts
|
change that the `zero_terms_query` option can be used, which accepts
|
||||||
|
@ -78,9 +77,8 @@ change that the `zero_terms_query` option can be used, which accepts
|
||||||
}
|
}
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
cutoff_frequency
|
.cutoff_frequency
|
||||||
|
The match query supports a `cutoff_frequency` that allows
|
||||||
Since `0.90.0` match query supports a `cutoff_frequency` that allows
|
|
||||||
specifying an absolute or relative document frequency where high
|
specifying an absolute or relative document frequency where high
|
||||||
frequent terms are moved into an optional subquery and are only scored
|
frequent terms are moved into an optional subquery and are only scored
|
||||||
if one of the low frequent (below the cutoff) terms in the case of an
|
if one of the low frequent (below the cutoff) terms in the case of an
|
||||||
|
|
|
@ -70,7 +70,7 @@ in the resulting boolean query should match. It can be an absolute value
|
||||||
both>>.
|
both>>.
|
||||||
|
|
||||||
|`lenient` |If set to `true` will cause format based failures (like
|
|`lenient` |If set to `true` will cause format based failures (like
|
||||||
providing text to a numeric field) to be ignored. (since 0.19.4).
|
providing text to a numeric field) to be ignored.
|
||||||
|=======================================================================
|
|=======================================================================
|
||||||
|
|
||||||
When a multi term query is being generated, one can control how it gets
|
When a multi term query is being generated, one can control how it gets
|
||||||
|
@ -128,7 +128,7 @@ search on all "city" fields:
|
||||||
|
|
||||||
Another option is to provide the wildcard fields search in the query
|
Another option is to provide the wildcard fields search in the query
|
||||||
string itself (properly escaping the `*` sign), for example:
|
string itself (properly escaping the `*` sign), for example:
|
||||||
`city.\*:something`. (since 0.19.4).
|
`city.\*:something`.
|
||||||
|
|
||||||
When running the `query_string` query against multiple fields, the
|
When running the `query_string` query against multiple fields, the
|
||||||
following additional parameters are allowed:
|
following additional parameters are allowed:
|
||||||
|
|
|
@ -28,5 +28,3 @@ A boost can also be associated with the query:
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
The `span_multi` query is supported from version `0.90.1`
|
|
||||||
|
|
|
@ -1,171 +0,0 @@
|
||||||
[[query-dsl-text-query]]
|
|
||||||
=== Text Query
|
|
||||||
|
|
||||||
`text` query has been deprecated (effectively renamed) to `match` query
|
|
||||||
since `0.19.9`, please use it. `text` is still supported.
|
|
||||||
|
|
||||||
A family of `text` queries that accept text, analyzes it, and constructs
|
|
||||||
a query out of it. For example:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text" : {
|
|
||||||
"message" : "this is a test"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
Note, even though the name is text, it also supports exact matching
|
|
||||||
(`term` like) on numeric values and dates.
|
|
||||||
|
|
||||||
Note, `message` is the name of a field, you can substitute the name of
|
|
||||||
any field (including `_all`) instead.
|
|
||||||
|
|
||||||
[float]
|
|
||||||
[float]
|
|
||||||
==== Types of Text Queries
|
|
||||||
|
|
||||||
[float]
|
|
||||||
[float]
|
|
||||||
===== boolean
|
|
||||||
|
|
||||||
The default `text` query is of type `boolean`. It means that the text
|
|
||||||
provided is analyzed and the analysis process constructs a boolean query
|
|
||||||
from the provided text. The `operator` flag can be set to `or` or `and`
|
|
||||||
to control the boolean clauses (defaults to `or`).
|
|
||||||
|
|
||||||
The `analyzer` can be set to control which analyzer will perform the
|
|
||||||
analysis process on the text. It default to the field explicit mapping
|
|
||||||
definition, or the default search analyzer.
|
|
||||||
|
|
||||||
`fuzziness` can be set to a value (depending on the relevant type, for
|
|
||||||
string types it should be a value between `0.0` and `1.0`) to constructs
|
|
||||||
fuzzy queries for each term analyzed. The `prefix_length` and
|
|
||||||
`max_expansions` can be set in this case to control the fuzzy process.
|
|
||||||
|
|
||||||
Here is an example when providing additional parameters (note the slight
|
|
||||||
change in structure, `message` is the field name):
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text" : {
|
|
||||||
"message" : {
|
|
||||||
"query" : "this is a test",
|
|
||||||
"operator" : "and"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
[float]
|
|
||||||
[float]
|
|
||||||
===== phrase
|
|
||||||
|
|
||||||
The `text_phrase` query analyzes the text and creates a `phrase` query
|
|
||||||
out of the analyzed text. For example:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text_phrase" : {
|
|
||||||
"message" : "this is a test"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
Since `text_phrase` is only a `type` of a `text` query, it can also be
|
|
||||||
used in the following manner:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text" : {
|
|
||||||
"message" : {
|
|
||||||
"query" : "this is a test",
|
|
||||||
"type" : "phrase"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
A phrase query maintains order of the terms up to a configurable `slop`
|
|
||||||
(which defaults to 0).
|
|
||||||
|
|
||||||
The `analyzer` can be set to control which analyzer will perform the
|
|
||||||
analysis process on the text. It default to the field explicit mapping
|
|
||||||
definition, or the default search analyzer, for example:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text_phrase" : {
|
|
||||||
"message" : {
|
|
||||||
"query" : "this is a test",
|
|
||||||
"analyzer" : "my_analyzer"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
[float]
|
|
||||||
[float]
|
|
||||||
===== text_phrase_prefix
|
|
||||||
|
|
||||||
The `text_phrase_prefix` is the same as `text_phrase`, expect it allows
|
|
||||||
for prefix matches on the last term in the text. For example:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text_phrase_prefix" : {
|
|
||||||
"message" : "this is a test"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
Or:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text" : {
|
|
||||||
"message" : {
|
|
||||||
"query" : "this is a test",
|
|
||||||
"type" : "phrase_prefix"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
It accepts the same parameters as the phrase type. In addition, it also
|
|
||||||
accepts a `max_expansions` parameter that can control to how many
|
|
||||||
prefixes the last term will be expanded. It is highly recommended to set
|
|
||||||
it to an acceptable value to control the execution time of the query.
|
|
||||||
For example:
|
|
||||||
|
|
||||||
[source,js]
|
|
||||||
--------------------------------------------------
|
|
||||||
{
|
|
||||||
"text_phrase_prefix" : {
|
|
||||||
"message" : {
|
|
||||||
"query" : "this is a test",
|
|
||||||
"max_expansions" : 10
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
--------------------------------------------------
|
|
||||||
|
|
||||||
[float]
|
|
||||||
[float]
|
|
||||||
==== Comparison to query_string / field
|
|
||||||
|
|
||||||
The text family of queries does not go through a "query parsing"
|
|
||||||
process. It does not support field name prefixes, wildcard characters,
|
|
||||||
or other "advance" features. For this reason, chances of it failing are
|
|
||||||
very small / non existent, and it provides an excellent behavior when it
|
|
||||||
comes to just analyze and run that text as a query behavior (which is
|
|
||||||
usually what a text search box does). Also, the `phrase_prefix` can
|
|
||||||
provide a great "as you type" behavior to automatically load search
|
|
||||||
results.
|
|
|
@ -13,8 +13,7 @@ and "remove" (`-`), for example: `+test*,-test3`.
|
||||||
|
|
||||||
All multi-index APIs support the `ignore_indices` option. Setting it to
|
All multi-index APIs support the `ignore_indices` option. Setting it to
|
||||||
`missing` will cause indices that do not exist to be ignored from the
|
`missing` will cause indices that do not exist to be ignored from the
|
||||||
execution. By default, when it's not set, the request will fail. Note,
|
execution. By default, when it's not set, the request will fail.
|
||||||
this feature is available since 0.20 version.
|
|
||||||
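For example, assuming `test1` exists but `test2` does not (the index names are illustrative):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/test1,test2/_search?ignore_indices=missing' -d '{
    "query" : { "match_all" : {} }
}'
--------------------------------------------------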
|
|
||||||
[float]
|
[float]
|
||||||
== Routing
|
== Routing
|
||||||
|
|
|
@ -3,8 +3,7 @@
|
||||||
|
|
||||||
The explain api computes a score explanation for a query and a specific
|
The explain api computes a score explanation for a query and a specific
|
||||||
document. This can give useful feedback on whether a document matches or
|
document. This can give useful feedback on whether a document matches or
|
||||||
didn't match a specific query. This feature is available from version
|
didn't match a specific query.
|
||||||
`0.19.9` and up.
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
=== Usage
|
=== Usage
|
||||||
|
@ -62,8 +61,7 @@ This will yield the same result as the previous request.
|
||||||
[horizontal]
|
[horizontal]
|
||||||
`fields`::
|
`fields`::
|
||||||
Allows to control which fields to return as part of the
|
Allows to control which fields to return as part of the
|
||||||
document explained (support `_source` for the full document). Note, this
|
document explained (support `_source` for the full document).
|
||||||
feature is available since 0.20.
|
|
||||||
|
|
||||||
`routing`::
|
`routing`::
|
||||||
Controls the routing in the case the routing was used
|
Controls the routing in the case the routing was used
|
||||||
|
|
|
@ -209,15 +209,6 @@ And, here is a sample data:
|
||||||
--------------------------------------------------
|
--------------------------------------------------
|
||||||
|
|
||||||
|
|
||||||
.Nested Query Facets
|
|
||||||
[NOTE]
|
|
||||||
--
|
|
||||||
Scoped filters and queries have been removed from version `0.90.0.Beta1`
|
|
||||||
instead the facet / queries need be repeated as `facet_filter`. More
|
|
||||||
information about this can be found in
|
|
||||||
https://github.com/elasticsearch/elasticsearch/issues/2606[issue 2606]
|
|
||||||
--
|
|
||||||
|
|
||||||
[float]
|
[float]
|
||||||
==== All Nested Matching Root Documents
|
==== All Nested Matching Root Documents
|
||||||
|
|
||||||
|
|
|
@ -2,8 +2,7 @@
|
||||||
== Multi Search API
|
== Multi Search API
|
||||||
|
|
||||||
The multi search API allows to execute several search requests within
|
The multi search API allows to execute several search requests within
|
||||||
the same API. The endpoint for it is `_msearch` (available from `0.19`
|
the same API. The endpoint for it is `_msearch`.
|
||||||
onwards).
|
|
||||||
|
|
||||||
The format of the request is similar to the bulk API format, and the
|
The format of the request is similar to the bulk API format, and the
|
||||||
structure is as follows (the structure is specifically optimized to
|
structure is as follows (the structure is specifically optimized to
|
||||||
|
|
|
@@ -50,7 +50,7 @@ the index to be bigger):
 }
 --------------------------------------------------
 
-Since `0.20.2` the field name support wildcard notation, for example,
+The field name supports wildcard notation, for example,
 using `comment_*` which will cause all fields that match the expression
 to be highlighted.
 
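The wildcard selection described above (a pattern such as `comment_*` expanding to every matching field) behaves like ordinary glob matching. A conceptual sketch, with made-up field names, not Elasticsearch code:

```python
from fnmatch import fnmatch

# Hypothetical mapped field names; only those matching the pattern
# would be highlighted.
fields = ["comment_title", "comment_body", "title", "comment_author"]
pattern = "comment_*"

highlighted = [f for f in fields if fnmatch(f, pattern)]
print(highlighted)
```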
@@ -28,8 +28,7 @@ the response.
 
 ==== Sort mode option
 
-From version `0.90.0.Beta1` Elasticsearch supports sorting by array
-fields which is also known as multi-valued fields. The `mode` option
+Elasticsearch supports sorting by array or multi-valued fields. The `mode` option
 controls what array value is picked for sorting the document it belongs
 to. The `mode` option can have the following values:
 
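How `mode` reduces a multi-valued field to a single sort value can be sketched conceptually. This is an illustration of the idea only, assuming the `min`, `max`, `sum`, and `avg` modes, not Elasticsearch's implementation:

```python
# Pick one sort value from a document's array field according to `mode`.
def sort_value(values, mode):
    if mode == "min":
        return min(values)
    if mode == "max":
        return max(values)
    if mode == "sum":
        return sum(values)
    if mode == "avg":
        return sum(values) / len(values)
    raise ValueError(f"unknown mode: {mode}")

prices = [4, 10, 1]  # hypothetical multi-valued field
print(sort_value(prices, "min"))  # 1
print(sort_value(prices, "avg"))  # 5.0
```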
@@ -61,7 +60,7 @@ curl -XPOST 'localhost:9200/_search' -d '{
 
 ==== Sorting within nested objects.
 
-Also from version `0.90.0.Beta1` Elasticsearch supports sorting by
+Elasticsearch also supports sorting by
 fields that are inside one or more nested objects. The sorting by nested
 field support has the following parameters on top of the already
 existing sort options:
@@ -105,7 +104,7 @@ curl -XPOST 'localhost:9200/_search' -d '{
 }'
 --------------------------------------------------
 
-Since version `0.90.1` nested sorting is also support when sorting by
+Nested sorting is also supported when sorting by
 scripts and sorting by geo distance.
 
 ==== Missing Values
@@ -126,7 +125,7 @@ will be used for missing docs as the sort value). For example:
 }
 --------------------------------------------------
 
-Note: from version `0.90.1` if a nested inner object doesn't match with
+NOTE: If a nested inner object doesn't match with
 the `nested_filter` then a missing value is used.
 
 ==== Ignoring Unmapped Fields
@@ -2,8 +2,7 @@
 == Suggesters
 
 The suggest feature suggests similar looking terms based on a provided
-text by using a suggester. The suggest feature is available from version
-`0.90.0.Beta1`. Parts of the suggest feature are still under
+text by using a suggester. Parts of the suggest feature are still under
 development.
 
 The suggest request part is either defined alongside the query part in a
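A suggest section defined alongside the query part, as the hunk above describes, can be sketched as a JSON request body. The suggestion name, text, and field are all hypothetical, and the exact shape may differ between versions:

```python
import json

# Hypothetical search request carrying a named term suggestion
# next to the query part.
request = {
    "query": {"match_all": {}},
    "suggest": {
        "my-suggestion": {
            "text": "devloping distibutd",
            "term": {"field": "body"},
        }
    },
}
print(json.dumps(request))
```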