Fix some typos in plugins & reference docs (#84667)
This pull request removes a few duplicated words, stray punctuation marks, and misspellings from the docs.
parent 26307bbef3
commit 9ecb96fcf3
7 changed files with 7 additions and 7 deletions
@@ -442,7 +442,7 @@ unless otherwise specified in the collation.
 Possible values: `no` (default, but collation-dependent) or `canonical`.
 Setting this decomposition property to `canonical` allows the Collator to
 handle unnormalized text properly, producing the same results as if the text
-were normalized. If `no` is set, it is the user's responsibility to insure
+were normalized. If `no` is set, it is the user's responsibility to ensure
 that all text is already in the appropriate form before a comparison or before
 getting a CollationKey. Adjusting decomposition mode allows the user to select
 between faster and more complete collation behavior. Since a great many of the
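
For reference, the `decomposition` option discussed in this hunk is set on the `icu_collation_keyword` field type provided by the `analysis-icu` plugin. A minimal mapping sketch; the index and field names here are invented for illustration:

    PUT /my-index
    {
      "mappings": {
        "properties": {
          "name": {
            "type": "icu_collation_keyword",
            "index": false,
            "language": "de",
            "decomposition": "canonical"
          }
        }
      }
    }
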
@@ -301,7 +301,7 @@ plugin configuration file.
 If you run {es} using Docker, you can manage plugins using a declarative configuration file.
 When {es} starts up, it will compare the plugins in the file with those
 that are currently installed, and add or remove plugins as required. {es}
-will also upgrade offical plugins when you upgrade {es} itself.
+will also upgrade official plugins when you upgrade {es} itself.
 
 The file is called `elasticsearch-plugins.yml`, and must be placed in the
 Elasticsearch configuration directory, alongside `elasticsearch.yml`. Here
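
For reference, a minimal `elasticsearch-plugins.yml` might look like the sketch below; the plugin IDs are chosen only as examples, and the full schema is described in the plugin management docs:

    plugins:
      - id: analysis-icu
      - id: analysis-kuromoji
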
@@ -74,7 +74,7 @@ TIP: A shard can return fewer than `shard_size` buckets, but it cannot return mo
 
 ==== Shard size
 The `shard_size` parameter specifies the number of buckets that the coordinating node will request from each shard.
-A higher `shard_size` leads each shard to produce smaller buckets. This reduce the likelihood of buckets overlapping
+A higher `shard_size` leads each shard to produce smaller buckets. This reduces the likelihood of buckets overlapping
 after the reduction step. Increasing the `shard_size` will improve the accuracy of the histogram, but it will
 also make it more expensive to compute the final result because bigger priority queues will have to be managed on a
 shard level, and the data transfers between the nodes and the client will be larger.
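
For reference, this hunk appears to come from the variable width histogram aggregation reference. A sketch of a request that tunes `shard_size`; the index and field names are invented:

    GET /my-index/_search
    {
      "size": 0,
      "aggs": {
        "prices": {
          "variable_width_histogram": {
            "field": "price",
            "buckets": 10,
            "shard_size": 500
          }
        }
      }
    }
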
@@ -99,7 +99,7 @@ stop word.
 ==== `tokenizer` and `ignore_case` are deprecated
 
 The `tokenizer` parameter controls the tokenizers that will be used to
-tokenize the synonym, this parameter is for backwards compatibility for indices that created before 6.0..
+tokenize the synonym, this parameter is for backwards compatibility for indices that created before 6.0.
 The `ignore_case` parameter works with `tokenizer` parameter only.
 
 Two synonym formats are supported: Solr, WordNet.
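
For reference, a typical non-deprecated way to wire a synonym filter into a custom analyzer, with inline Solr-format synonyms; the index, filter, and analyzer names are illustrative:

    PUT /my-index
    {
      "settings": {
        "analysis": {
          "filter": {
            "my_synonyms": {
              "type": "synonym",
              "synonyms": [ "laptop, notebook" ]
            }
          },
          "analyzer": {
            "my_analyzer": {
              "tokenizer": "standard",
              "filter": [ "lowercase", "my_synonyms" ]
            }
          }
        }
      }
    }
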
@@ -91,7 +91,7 @@ names can be specified as arguments to the `remove` command.
 `show <setting>`:: Displays the value of a single setting in the keystore.
 Pass the `-o` (or `--output`) parameter to write the setting to a file.
 If writing to the standard output (the terminal) the setting's value is always
-interpretted as a UTF-8 string. If the setting contains binary data (for example
+interpreted as a UTF-8 string. If the setting contains binary data (for example
 for data that was added via the `add-file` command), always use the `-o` option
 to write to a file.
 
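
For reference, the `show` usage described here looks roughly like the shell sketch below; the setting names are only examples:

    # Print a string setting to the terminal (interpreted as UTF-8)
    bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password

    # Write a setting that holds binary data (added via add-file) to a file
    bin/elasticsearch-keystore show -o /tmp/credentials.json gcs.client.default.credentials_file
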
@@ -23,7 +23,7 @@ only be executed on local shards after the all operations up to and including th
 sequence number checkpoint are visible for search. Indexed operations become visible after a
 refresh. The checkpoints are indexed by shard.
 
-If a timeout occurs before the the checkpoint has been refreshed into Elasticsearch,
+If a timeout occurs before the checkpoint has been refreshed into Elasticsearch,
 the search request will timeout.
 
 The fleet search API only supports searches against a single target. If an index alias
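
For reference, a rough sketch of a Fleet search request that waits on a per-shard checkpoint; the endpoint and parameter name (`_fleet/_fleet_search`, `wait_for_checkpoints`) are assumptions based on the Fleet API reference and should be verified against the docs for your version:

    GET /my-index/_fleet/_fleet_search?wait_for_checkpoints=2
    {
      "query": { "match_all": {} }
    }
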
@@ -171,7 +171,7 @@ POST /my_source_index/_shrink/my_target_index
 
 <1> The number of shards in the target index. This must be a factor of the
 number of shards in the source index.
-<2> Best compression will only take affect when new writes are made to the
+<2> Best compression will only take effect when new writes are made to the
 index, such as when <<indices-forcemerge,force-merging>> the shard to a single
 segment.
 
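
For reference, the request that these callouts annotate is not included in the hunk. A minimal reconstruction based on the shrink index API docs; the exact settings and callout placement are assumptions:

    POST /my_source_index/_shrink/my_target_index
    {
      "settings": {
        "index.number_of_shards": 1, <1>
        "index.codec": "best_compression" <2>
      }
    }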