[discrete]
[[breaking_80_cluster_node_setting_changes]]
==== Cluster and node setting changes

TIP: {ess-setting-change}

.`action.destructive_requires_name` now defaults to `true`. {ess-icon}
[%collapsible]
====
*Details* +
The default for the `action.destructive_requires_name` setting changes from `false`
to `true` in {es} 8.0.0.

Previously, defaulting to `false` allowed users to use wildcard
patterns to delete, close, or change index blocks on indices.
To prevent the accidental deletion of indices that happen to match a
wildcard pattern, we now default to requiring that destructive
operations explicitly name the indices to be modified.

*Impact* +
To use wildcard patterns for destructive actions, set
`action.destructive_requires_name` to `false` using the
{ref}/cluster-update-settings.html[cluster update settings API].
====
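
As a concrete sketch of that API call (the `persistent` scope is one reasonable choice; use `transient` if you prefer an override that does not survive a full cluster restart):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": false
  }
}
----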

.You can no longer set `xpack.searchable.snapshot.shared_cache.size` on non-frozen nodes.
[%collapsible]
====
*Details* +
You can no longer set
{ref}/searchable-snapshots.html#searchable-snapshots-shared-cache[`xpack.searchable.snapshot.shared_cache.size`]
on a node that doesn't have the `data_frozen` node role. This setting reserves
disk space for the shared cache of partially mounted indices. {es} only
allocates partially mounted indices to nodes with the `data_frozen` role.

*Impact* +
Remove `xpack.searchable.snapshot.shared_cache.size` from `elasticsearch.yml`
for nodes that don't have the `data_frozen` role. Specifying the setting on a
non-frozen node will result in an error on startup.
====
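
For illustration, an `elasticsearch.yml` sketch of a dedicated frozen-tier node that keeps this setting (the `50GB` value is an arbitrary example, not a recommendation):

[source,yaml]
----
node.roles: [ data_frozen ]
xpack.searchable.snapshot.shared_cache.size: 50GB
----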

[[max_clause_count_change]]
.`indices.query.bool.max_clause_count` is deprecated and has no effect.
[%collapsible]
====
*Details* +
Elasticsearch will now dynamically set the maximum number of allowed clauses
in a query, using a heuristic based on the size of the search thread pool and
the size of the heap allocated to the JVM. This limit has a minimum value of
1024 and will in most cases be larger (for example, a node with 30GB of RAM and
48 CPUs will have a maximum clause count of around 27,000). Larger heaps lead
to higher values, and larger thread pools result in lower values.

*Impact* +
Queries with many clauses should be avoided whenever possible.
If you previously bumped this setting to accommodate heavy queries,
you might need to increase the amount of memory available to Elasticsearch,
or to reduce the size of your search thread pool so that more memory is
available to each concurrent search.

In previous versions of Lucene you could get around this limit by nesting
boolean queries within each other, but the limit is now based on the total
number of leaf queries within the query as a whole and this workaround will
no longer help.

Specifying `indices.query.bool.max_clause_count` will have no effect
but will generate deprecation warnings. To avoid these warnings, remove the
setting from `elasticsearch.yml` during an upgrade or node restart.
====

[[ilm-poll-interval-limit]]
.`indices.lifecycle.poll_interval` must be greater than `1s`.
[%collapsible]
====
*Details* +
Setting `indices.lifecycle.poll_interval` too low can cause
excessive load on a cluster. The poll interval must now be at least `1s` (one second).

*Impact* +
Set the `indices.lifecycle.poll_interval` setting to `1s` or
greater in `elasticsearch.yml` or through the
{ref}/cluster-update-settings.html[cluster update settings API].

Setting `indices.lifecycle.poll_interval` to less than `1s` in
`elasticsearch.yml` will result in an error on startup.
{ref}/cluster-update-settings.html[Cluster update settings API] requests that
set `indices.lifecycle.poll_interval` to less than `1s` will return an error.
====
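
A cluster update settings API sketch (the `10m` interval is an arbitrary example, well above the `1s` minimum):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "10m"
  }
}
----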

.The file and native realms are now enabled unless explicitly disabled.
[%collapsible]
====
*Details* +
The file and native realms are now enabled unless explicitly disabled. If
explicitly disabled, the file and native realms remain disabled at all times.

Previously, the file and native realms had the following implicit behaviors:

* If the file and native realms were not configured, they were implicitly disabled
if any other realm was configured.

* If no other realm was available because realms were either not configured,
not permitted by license, or explicitly disabled, the file and native realms
were enabled, even if explicitly disabled.

*Impact* +
To explicitly disable the file or native realm, set the respective
`file.<realm-name>.enabled` or `native.<realm-name>.enabled` setting to `false`
under the `xpack.security.authc.realms` namespace in `elasticsearch.yml`.

The following configuration example disables the native realm and the file realm.

[source,yaml]
----
xpack.security.authc.realms:

  native.realm1.enabled: false
  file.realm2.enabled: false

  ...
----
====

.The realm `order` setting is now required.
[%collapsible]
====
*Details* +
The `xpack.security.authc.realms.{type}.{name}.order` setting is now required and must be
specified for each explicitly configured realm. Each value must be unique.

*Impact* +
The cluster will fail to start if the requirements are not met.

For example, the following configuration is invalid:
[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.kerberos.kerb1:
  keytab.path: es.keytab
  remove_realm_name: false
--------------------------------------------------

And must be configured as:
[source,yaml]
--------------------------------------------------
xpack.security.authc.realms.kerberos.kerb1:
  order: 0
  keytab.path: es.keytab
  remove_realm_name: false
--------------------------------------------------
====

[[breaking_80_allocation_change_include_relocations_removed]]
.`cluster.routing.allocation.disk.include_relocations` has been removed.
[%collapsible]
====
*Details* +
{es} now always accounts for the sizes of relocating shards when making
allocation decisions based on the disk usage of the nodes in the cluster. In
earlier versions, you could disable this by setting `cluster.routing.allocation.disk.include_relocations` to `false`.
That could result in poor allocation decisions that could overshoot watermarks and require significant
extra work to correct. The `cluster.routing.allocation.disk.include_relocations` setting has been removed.

*Impact* +
Remove the `cluster.routing.allocation.disk.include_relocations`
setting. Specifying this setting in `elasticsearch.yml` will result in an error
on startup.
====

.`cluster.join.timeout` has been removed.
[%collapsible]
====
*Details* +
The `cluster.join.timeout` setting has been removed. Join attempts no longer
time out.

*Impact* +
Remove `cluster.join.timeout` from `elasticsearch.yml`.
====

.`discovery.zen` settings have been removed.
[%collapsible]
====
*Details* +
All settings under the `discovery.zen` namespace are no longer supported. They existed in 7.x only for backwards compatibility reasons. This includes:

- `discovery.zen.minimum_master_nodes`
- `discovery.zen.no_master_block`
- `discovery.zen.hosts_provider`
- `discovery.zen.publish_timeout`
- `discovery.zen.commit_timeout`
- `discovery.zen.publish_diff.enable`
- `discovery.zen.ping.unicast.concurrent_connects`
- `discovery.zen.ping.unicast.hosts.resolve_timeout`
- `discovery.zen.ping.unicast.hosts`
- `discovery.zen.ping_timeout`
- `discovery.zen.unsafe_rolling_upgrades_enabled`
- `discovery.zen.fd.connect_on_network_disconnect`
- `discovery.zen.fd.ping_interval`
- `discovery.zen.fd.ping_timeout`
- `discovery.zen.fd.ping_retries`
- `discovery.zen.fd.register_connection_listener`
- `discovery.zen.join_retry_attempts`
- `discovery.zen.join_retry_delay`
- `discovery.zen.join_timeout`
- `discovery.zen.max_pings_from_another_master`
- `discovery.zen.send_leave_request`
- `discovery.zen.master_election.wait_for_joins_timeout`
- `discovery.zen.master_election.ignore_non_master_pings`
- `discovery.zen.publish.max_pending_cluster_states`
- `discovery.zen.bwc_ping_timeout`

*Impact* +
Remove the `discovery.zen` settings from `elasticsearch.yml`. Specifying these settings will result in an error on startup.
====
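
If you are migrating off `discovery.zen.ping.unicast.hosts`, the corresponding modern settings are `discovery.seed_hosts` and, for first-time cluster bootstrapping, `cluster.initial_master_nodes` (the host and node names below are placeholders):

[source,yaml]
----
discovery.seed_hosts: [ "host1:9300", "host2:9300" ]
cluster.initial_master_nodes: [ "node-1", "node-2" ]
----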

.`http.content_type.required` has been removed.
[%collapsible]
====
*Details* +
The `http.content_type.required` setting was deprecated in Elasticsearch 6.0
and has been removed in Elasticsearch 8.0. The setting was introduced in
Elasticsearch 5.3 to prepare users for Elasticsearch 6.0, where content type
auto detection was removed for HTTP requests.

*Impact* +
Remove the `http.content_type.required` setting from `elasticsearch.yml`. Specifying this setting will result in an error on startup.
====

.`http.tcp_no_delay` has been removed.
[%collapsible]
====
*Details* +
The `http.tcp_no_delay` setting was deprecated in 7.x and has been removed in 8.0. Use `http.tcp.no_delay` instead.

*Impact* +
Replace the `http.tcp_no_delay` setting with `http.tcp.no_delay`.
Specifying `http.tcp_no_delay` in `elasticsearch.yml` will
result in an error on startup.
====
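
The rename is mechanical; in `elasticsearch.yml` it looks like this:

[source,yaml]
----
# Before (rejected at startup in 8.0):
# http.tcp_no_delay: true

# After:
http.tcp.no_delay: true
----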

.`network.tcp.connect_timeout` has been removed.
[%collapsible]
====
*Details* +
The `network.tcp.connect_timeout` setting was deprecated in 7.x and has been removed in 8.0. This setting
was a fallback setting for `transport.connect_timeout`.

*Impact* +
Remove the `network.tcp.connect_timeout` setting.
Use the `transport.connect_timeout` setting to change the default connection
timeout for client connections. Specifying
`network.tcp.connect_timeout` in `elasticsearch.yml` will result in an
error on startup.
====

.`node.max_local_storage_nodes` has been removed.
[%collapsible]
====
*Details* +
The `node.max_local_storage_nodes` setting was deprecated in 7.x and
has been removed in 8.0. Nodes should be run on separate data paths
to ensure that each node is consistently assigned to the same data path.

*Impact* +
Remove the `node.max_local_storage_nodes` setting. Specifying this
setting in `elasticsearch.yml` will result in an error on startup.
====

[[accept-default-password-removed]]
.The `accept_default_password` setting has been removed.
[%collapsible]
====
*Details* +
The `xpack.security.authc.accept_default_password` setting has not had any effect
since the 6.0 release of {es} and is no longer allowed.

*Impact* +
Remove the `xpack.security.authc.accept_default_password` setting from `elasticsearch.yml`.
Specifying this setting will result in an error on startup.
====

[[roles-index-cache-removed]]
.The `roles.index.cache.*` settings have been removed.
[%collapsible]
====
*Details* +
The `xpack.security.authz.store.roles.index.cache.max_size` and
`xpack.security.authz.store.roles.index.cache.ttl` settings have
been removed. These settings have been redundant and deprecated
since the 5.2 release of {es}.

*Impact* +
Remove the `xpack.security.authz.store.roles.index.cache.max_size`
and `xpack.security.authz.store.roles.index.cache.ttl` settings from `elasticsearch.yml`.
Specifying these settings will result in an error on startup.
====

[[separating-node-and-client-traffic]]
.The `transport.profiles.*.xpack.security.type` setting has been removed.
[%collapsible]
====
*Details* +
The `transport.profiles.*.xpack.security.type` setting is no longer supported.
The Transport Client has been removed and all client traffic now uses
the HTTP transport. Transport profiles using this setting should be removed.

*Impact* +
Remove the `transport.profiles.*.xpack.security.type` setting from `elasticsearch.yml`.
Specifying this setting in a transport profile will result in an error on startup.
====

[[saml-realm-nameid-changes]]
.The `nameid_format` SAML realm setting no longer has a default value.
[%collapsible]
====
*Details* +
In SAML, Identity Providers (IdPs) can either be explicitly configured to
release a `NameID` with a specific format, or configured to attempt to conform
with the requirements of a Service Provider (SP). The SP declares its
requirements in the `NameIDPolicy` element of a SAML Authentication Request.
In {es}, the `nameid_format` SAML realm setting controls the `NameIDPolicy`
value.

Previously, the default value for `nameid_format` was
`urn:oasis:names:tc:SAML:2.0:nameid-format:transient`. This setting created
authentication requests that required the IdP to release `NameID` with a
`transient` format.

The default value has been removed, which means that {es} will create SAML
Authentication Requests by default that don't put this requirement on the
IdP. If you want to retain the previous behavior, set `nameid_format` to
`urn:oasis:names:tc:SAML:2.0:nameid-format:transient`.

*Impact* +
If you currently don't configure `nameid_format` explicitly, it's possible
that your IdP will reject authentication requests from {es} because the requests
do not specify a `NameID` format (and your IdP is configured to expect one).
This mismatch can result in a broken SAML configuration. If you're unsure whether
your IdP is explicitly configured to use a certain `NameID` format and you want
to retain the current behavior, try setting `nameid_format` to
`urn:oasis:names:tc:SAML:2.0:nameid-format:transient` explicitly.
====
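
To pin the old behavior for a SAML realm, set `nameid_format` under the realm's namespace (the realm name `saml1` below is a placeholder for your own realm name):

[source,yaml]
----
xpack.security.authc.realms.saml.saml1:
  nameid_format: urn:oasis:names:tc:SAML:2.0:nameid-format:transient
----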

.The `xpack.security.transport.ssl.enabled` setting is now required to configure `xpack.security.transport.ssl` settings.
[%collapsible]
====
*Details* +
It is now an error to configure any SSL settings for
`xpack.security.transport.ssl` without also configuring
`xpack.security.transport.ssl.enabled`.

*Impact* +
If using other `xpack.security.transport.ssl` settings, you must explicitly
specify the `xpack.security.transport.ssl.enabled` setting.

If you do not want to enable SSL and are currently using other
`xpack.security.transport.ssl` settings, do one of the following:

* Explicitly specify `xpack.security.transport.ssl.enabled` as `false`
* Discontinue use of other `xpack.security.transport.ssl` settings

If you want to enable SSL, follow the instructions in
{ref}/configuring-tls.html#tls-transport[Encrypting communications between nodes
in a cluster]. As part of this configuration, explicitly specify
`xpack.security.transport.ssl.enabled` as `true`.

For example, the following configuration is invalid:
[source,yaml]
--------------------------------------------------
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
--------------------------------------------------

And must be configured as:
[source,yaml]
--------------------------------------------------
xpack.security.transport.ssl.enabled: true <1>
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
--------------------------------------------------
<1> or `false`.
====

.The `xpack.security.http.ssl.enabled` setting is now required to configure `xpack.security.http.ssl` settings.
[%collapsible]
====
*Details* +
It is now an error to configure any SSL settings for
`xpack.security.http.ssl` without also configuring
`xpack.security.http.ssl.enabled`.

*Impact* +
If using other `xpack.security.http.ssl` settings, you must explicitly
specify the `xpack.security.http.ssl.enabled` setting.

If you do not want to enable SSL and are currently using other
`xpack.security.http.ssl` settings, do one of the following:

* Explicitly specify `xpack.security.http.ssl.enabled` as `false`
* Discontinue use of other `xpack.security.http.ssl` settings

If you want to enable SSL, follow the instructions in
{ref}/security-basic-setup-https.html#encrypt-http-communication[Encrypting HTTP client communications]. As part
of this configuration, explicitly specify `xpack.security.http.ssl.enabled`
as `true`.

For example, the following configuration is invalid:
[source,yaml]
--------------------------------------------------
xpack.security.http.ssl.certificate: elasticsearch.crt
xpack.security.http.ssl.key: elasticsearch.key
xpack.security.http.ssl.certificate_authorities: [ "corporate-ca.crt" ]
--------------------------------------------------

And must be configured as:
[source,yaml]
--------------------------------------------------
xpack.security.http.ssl.enabled: true <1>
xpack.security.http.ssl.certificate: elasticsearch.crt
xpack.security.http.ssl.key: elasticsearch.key
xpack.security.http.ssl.certificate_authorities: [ "corporate-ca.crt" ]
--------------------------------------------------
<1> or `false`.
====

.A `xpack.security.transport.ssl` certificate and key are now required to enable SSL for the transport interface.
[%collapsible]
====
*Details* +
It is now an error to enable SSL for the transport interface without also configuring
a certificate and key through use of the `xpack.security.transport.ssl.keystore.path`
setting or the `xpack.security.transport.ssl.certificate` and
`xpack.security.transport.ssl.key` settings.

*Impact* +
If `xpack.security.transport.ssl.enabled` is set to `true`, provide a
certificate and key using the `xpack.security.transport.ssl.keystore.path`
setting or the `xpack.security.transport.ssl.certificate` and
`xpack.security.transport.ssl.key` settings. If a certificate and key are not
provided, {es} will fail to start with an error.
====
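
A minimal sketch of a valid transport SSL block in `elasticsearch.yml` (the keystore filename is a placeholder for your own keystore):

[source,yaml]
----
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
----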

.A `xpack.security.http.ssl` certificate and key are now required to enable SSL for the HTTP server.
[%collapsible]
====
*Details* +
It is now an error to enable SSL for the HTTP (REST) server without also configuring
a certificate and key through use of the `xpack.security.http.ssl.keystore.path`
setting or the `xpack.security.http.ssl.certificate` and
`xpack.security.http.ssl.key` settings.

*Impact* +
If `xpack.security.http.ssl.enabled` is set to `true`, provide a certificate and
key using the `xpack.security.http.ssl.keystore.path` setting or the
`xpack.security.http.ssl.certificate` and `xpack.security.http.ssl.key`
settings. If a certificate and key are not provided, {es} will fail to start
with an error.
====

.PKCS#11 keystores and truststores cannot be configured in `elasticsearch.yml`.
[%collapsible]
====
*Details* +
The settings `*.ssl.keystore.type` and `*.ssl.truststore.type` no longer accept "PKCS11" as a valid type.
This applies to all SSL settings in Elasticsearch, including:

- `xpack.security.http.ssl.keystore.type`
- `xpack.security.transport.ssl.keystore.type`
- `xpack.security.http.ssl.truststore.type`
- `xpack.security.transport.ssl.truststore.type`

The same restriction applies to SSL settings for security realms, Watcher, and monitoring.

Use of a PKCS#11 keystore or truststore as the JRE's default store is not affected.

*Impact* +
If you have a PKCS#11 keystore configured within your `elasticsearch.yml` file, you must remove that
configuration and switch to a supported keystore type, or configure your PKCS#11 keystore as the
JRE default store.
====

.The `kibana` user has been replaced by `kibana_system`.
[%collapsible]
====
*Details* +
The `kibana` user was historically used to authenticate {kib} to {es}.
The name of this user was confusing, and it was often mistakenly used to log in to {kib}.
The user has been renamed to `kibana_system` in order to reduce confusion and to better
align with other built-in system accounts.

*Impact* +
Replace any use of the `kibana` user with the `kibana_system` user. Specifying
the `kibana` user in `kibana.yml` will result in an error on startup.

If your `kibana.yml` used to contain:
[source,yaml]
--------------------------------------------------
elasticsearch.username: kibana
--------------------------------------------------

then you should update it to use the new `kibana_system` user instead:
[source,yaml]
--------------------------------------------------
elasticsearch.username: kibana_system
--------------------------------------------------

IMPORTANT: The new `kibana_system` user does not preserve the previous `kibana`
user's password. You must explicitly set a password for the `kibana_system` user.
====

[[search-remote-settings-removed]]
.The `search.remote.*` settings have been removed.
[%collapsible]
====
*Details* +
In 6.5 these settings were deprecated in favor of `cluster.remote`. In 7.x we
provided automatic upgrading of these settings to their `cluster.remote`
counterparts. In 8.0.0, these settings have been removed. Elasticsearch will
refuse to start if you have these settings in your configuration or cluster
state.

*Impact* +
Use the replacement `cluster.remote` settings. Discontinue use of the
`search.remote.*` settings. Specifying these settings in `elasticsearch.yml`
will result in an error on startup.
====
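
The migration is a direct namespace rename; for example (the cluster alias `cluster_one` and the seed address are placeholders):

[source,yaml]
----
# Before (removed):
# search.remote.cluster_one.seeds: [ "10.0.0.1:9300" ]

# After:
cluster.remote.cluster_one.seeds: [ "10.0.0.1:9300" ]
----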

[[remove-pidfile]]
.The `pidfile` setting has been replaced by `node.pidfile`.
[%collapsible]
====
*Details* +
To ensure that all settings are in a proper namespace, the `pidfile` setting was
previously deprecated in version 7.4.0 of Elasticsearch, and is removed in
version 8.0.0. Instead, use `node.pidfile`.

*Impact* +
Use the `node.pidfile` setting. Discontinue use of the `pidfile` setting.
Specifying the `pidfile` setting in `elasticsearch.yml` will result in an error
on startup.
====

[[remove-processors]]
.The `processors` setting has been replaced by `node.processors`.
[%collapsible]
====
*Details* +
To ensure that all settings are in a proper namespace, the `processors` setting
was previously deprecated in version 7.4.0 of Elasticsearch, and is removed in
version 8.0.0. Instead, use `node.processors`.

*Impact* +
Use the `node.processors` setting. Discontinue use of the `processors` setting.
Specifying the `processors` setting in `elasticsearch.yml` will result in an
error on startup.
====

.The `node.processors` setting can no longer exceed the available number of processors.
[%collapsible]
====
*Details* +
Previously it was possible to set the number of processors, which is used to
size the default thread pools, to a value greater than the number of available
processors. Because this leads to more context switches and more threads without
any increase in the number of physical CPUs on which to schedule these additional
threads, the `node.processors` setting is now bounded by the number of available
processors.

*Impact* +
If specified, ensure the value of the `node.processors` setting does not exceed
the number of available processors. Setting `node.processors` to a value greater
than the number of available processors in `elasticsearch.yml` will result in an
error on startup.
====

.The `cluster.remote.connect` setting has been removed.
[%collapsible]
====
*Details* +
In Elasticsearch 7.7.0, the setting `cluster.remote.connect` was deprecated in
favor of setting `node.remote_cluster_client`. In Elasticsearch 8.0.0, the
setting `cluster.remote.connect` is removed.

*Impact* +
Use the `node.remote_cluster_client` setting. Discontinue use of the
`cluster.remote.connect` setting. Specifying the `cluster.remote.connect`
setting in `elasticsearch.yml` will result in an error on startup.
====

.The `node.local_storage` setting has been removed.
[%collapsible]
====
*Details* +
In Elasticsearch 7.8.0, the setting `node.local_storage` was deprecated.
Beginning in Elasticsearch 8.0.0, all nodes require local storage, so
the `node.local_storage` setting has been removed.

*Impact* +
Discontinue use of the `node.local_storage` setting. Specifying this setting in
`elasticsearch.yml` will result in an error on startup.
====

.The `auth.password` setting for HTTP monitoring has been removed.
[%collapsible]
====
*Details* +
In Elasticsearch 7.7.0, the setting `xpack.monitoring.exporters.<exporterName>.auth.password`
was deprecated in favor of setting `xpack.monitoring.exporters.<exporterName>.auth.secure_password`.
In Elasticsearch 8.0.0, the setting `xpack.monitoring.exporters.<exporterName>.auth.password` is
removed.

*Impact* +
Use the `xpack.monitoring.exporters.<exporterName>.auth.secure_password`
setting. Discontinue use of the
`xpack.monitoring.exporters.<exporterName>.auth.password` setting. Specifying
the `xpack.monitoring.exporters.<exporterName>.auth.password` setting in
`elasticsearch.yml` will result in an error on startup.
====

.Settings used to disable basic license features have been removed.
[%collapsible]
====
*Details* +
The following settings were deprecated in {es} 7.8.0 and have been removed
in {es} 8.0.0:

* `xpack.enrich.enabled`
* `xpack.flattened.enabled`
* `xpack.ilm.enabled`
* `xpack.monitoring.enabled`
* `xpack.rollup.enabled`
* `xpack.slm.enabled`
* `xpack.sql.enabled`
* `xpack.transform.enabled`
* `xpack.vectors.enabled`

These basic license features are now always enabled.

If you have disabled ILM so that you can use another tool to manage Watcher
indices, the newly introduced `xpack.watcher.use_ilm_index_management` setting
may be set to `false`.

*Impact* +
Discontinue use of the removed settings. Specifying these settings in
`elasticsearch.yml` will result in an error on startup.
====

.Settings used to defer cluster recovery pending a certain number of master nodes have been removed.
[%collapsible]
====
*Details* +
The following cluster settings have been removed:

* `gateway.expected_nodes`
* `gateway.expected_master_nodes`
* `gateway.recover_after_nodes`
* `gateway.recover_after_master_nodes`

It is safe to recover the cluster as soon as a majority of master-eligible
nodes have joined, so there is no benefit in waiting for any additional
master-eligible nodes to start.

*Impact* +
Discontinue use of the removed settings. If needed, use
`gateway.expected_data_nodes` or `gateway.recover_after_data_nodes` to defer
cluster recovery pending a certain number of data nodes.
====
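
If you do need to defer recovery, the data-node variants look like this in `elasticsearch.yml` (the node counts are arbitrary examples):

[source,yaml]
----
gateway.expected_data_nodes: 3
gateway.recover_after_data_nodes: 2
----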

.Legacy role settings have been removed.
[%collapsible]
====
*Details* +
The legacy role settings:

* `node.data`
* `node.ingest`
* `node.master`
* `node.ml`
* `node.remote_cluster_client`
* `node.transform`
* `node.voting_only`

have been removed. Instead, use the `node.roles` setting. If you were previously
using the legacy role settings on a 7.13 or later cluster, you will have a
deprecation log message on each of your nodes indicating the exact replacement
value for `node.roles`.

*Impact* +
Discontinue use of the removed settings. Specifying these settings in
`elasticsearch.yml` will result in an error on startup.
====
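
For example, a node previously configured with legacy flags maps onto `node.roles` like this:

[source,yaml]
----
# Before (rejected at startup in 8.0):
# node.master: true
# node.data: true
# node.ingest: false

# After: list exactly the roles the node should have
node.roles: [ master, data ]
----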

[[system-call-filter-setting]]
.The system call filter setting has been removed.
[%collapsible]
====
*Details* +
Elasticsearch uses system call filters to remove its ability to fork another
process. This is useful to mitigate remote code exploits. These system call
filters are enabled by default, and were previously controlled via the setting
`bootstrap.system_call_filter`. Starting in Elasticsearch 8.0, system call
filters will be required. As such, the setting `bootstrap.system_call_filter`
was deprecated in Elasticsearch 7.13.0, and is removed as of Elasticsearch
8.0.0.

*Impact* +
Discontinue use of the removed setting. Specifying this setting in Elasticsearch
configuration will result in an error on startup.
====

[[tier-filter-setting]]
.Tier filtering settings have been removed.
[%collapsible]
====
*Details* +
The cluster and index level settings ending in `._tier` used for filtering the allocation of a shard
to a particular set of nodes have been removed. Instead, the
{ref}/data-tier-shard-filtering.html#tier-preference-allocation-filter[tier
preference setting], `index.routing.allocation.include._tier_preference`, should
be used. The removed settings are:

Cluster level settings:

- `cluster.routing.allocation.include._tier`
- `cluster.routing.allocation.exclude._tier`
- `cluster.routing.allocation.require._tier`

Index settings:

- `index.routing.allocation.include._tier`
- `index.routing.allocation.exclude._tier`
- `index.routing.allocation.require._tier`

*Impact* +
Discontinue use of the removed settings. Specifying any of these cluster settings in Elasticsearch
configuration will result in an error on startup. Any indices using these settings will have the
settings archived (and they will have no effect) when the index metadata is loaded.
====
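
For example, an index-level warm-tier filter becomes a tier preference, which can also list fallback tiers:

[source,yaml]
----
# Before (removed):
# index.routing.allocation.include._tier: data_warm

# After: prefer the warm tier, fall back to the hot tier
index.routing.allocation.include._tier_preference: data_warm,data_hot
----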

[[shared-data-path-setting]]
.Shared data path and per index data path settings are deprecated.
[%collapsible]
====
*Details* +
Elasticsearch uses the shared data path as the base path of per index data
paths. This feature was previously used with shared replicas. Starting in
7.13.0, these settings are deprecated. Starting in 8.0, only existing
indices created in 7.x will be capable of using the shared data path and
per index data path settings.

*Impact* +
Discontinue use of the deprecated settings.
====

[[single-data-node-watermark-setting]]
.The single data node watermark setting is deprecated and now only accepts `true`.
[%collapsible]
====
*Details* +
In 7.14, setting `cluster.routing.allocation.disk.watermark.enable_for_single_data_node`
to `false` was deprecated. Starting in 8.0, the only legal value is
`true`. In a future release, the setting will be removed completely, with the
same behavior as if the setting were `true`.

If the old behavior is desired for a single data node cluster, disk-based
allocation can be disabled by setting
`cluster.routing.allocation.disk.threshold_enabled: false`.

*Impact* +
Discontinue use of the deprecated setting.
====

[[auto-import-dangling-indices-removed]]
.The `gateway.auto_import_dangling_indices` setting has been removed.
[%collapsible]
====
*Details* +
The `gateway.auto_import_dangling_indices` cluster setting has been removed.
Previously, you could use this setting to automatically import
{ref}/modules-gateway.html#dangling-indices[dangling indices]. However,
automatically importing dangling indices is unsafe. Use the
{ref}/indices.html#dangling-indices-api[dangling indices APIs] to manage and
import dangling indices instead.
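
As an illustration, dangling indices can be listed and then imported
explicitly; the UUID below is a placeholder obtained from the list call:

[source,console]
----
GET /_dangling

POST /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true
----
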

*Impact* +
Discontinue use of the removed setting. Specifying the setting in
`elasticsearch.yml` will result in an error on startup.
====

.The `listener` thread pool has been removed.
[%collapsible]
====
*Details* +
Previously, the transport client used the `listener` thread pool to ensure
listeners weren't called back on network threads. The transport client has been
removed in 8.0, and the thread pool is no longer needed.
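
Settings of this shape in `elasticsearch.yml` must be removed before upgrading
(the names and sizes below are illustrative of 7.x `listener` pool settings):

[source,yaml]
----
# No longer accepted in 8.0 -- remove these lines
thread_pool.listener.size: 2
thread_pool.listener.queue_size: 1000
----
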

*Impact* +
Remove `listener` thread pool settings from `elasticsearch.yml` for any nodes.
Specifying `listener` thread pool settings in `elasticsearch.yml` will result in
an error on startup.
====

.The `fixed_auto_queue_size` thread pool type has been removed.
[%collapsible]
====
*Details* +
The `fixed_auto_queue_size` thread pool type, previously marked as an
experimental feature, was deprecated in 7.x and has been removed in 8.0.
The `search` and `search_throttled` thread pools now have the `fixed` type.

*Impact* +
No action needed.
====

.Several `transport` settings have been replaced.
[%collapsible]
====
*Details* +
The following settings have been deprecated in 7.x and removed in 8.0. Each setting has a replacement
setting that was introduced in 6.7.

- `transport.tcp.port` replaced by `transport.port`
- `transport.tcp.compress` replaced by `transport.compress`
- `transport.tcp.connect_timeout` replaced by `transport.connect_timeout`
- `transport.tcp_no_delay` replaced by `transport.tcp.no_delay`
- `transport.profiles.profile_name.tcp_no_delay` replaced by `transport.profiles.profile_name.tcp.no_delay`
- `transport.profiles.profile_name.tcp_keep_alive` replaced by `transport.profiles.profile_name.tcp.keep_alive`
- `transport.profiles.profile_name.reuse_address` replaced by `transport.profiles.profile_name.tcp.reuse_address`
- `transport.profiles.profile_name.send_buffer_size` replaced by `transport.profiles.profile_name.tcp.send_buffer_size`
- `transport.profiles.profile_name.receive_buffer_size` replaced by `transport.profiles.profile_name.tcp.receive_buffer_size`
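
For instance, a 7.x configuration using the removed names maps to the
replacements like this (the port value is illustrative):

[source,yaml]
----
# Before (removed in 8.0)
# transport.tcp.port: 9300
# transport.tcp.compress: true

# After
transport.port: 9300
transport.compress: true
----
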

*Impact* +
Use the replacement settings. Discontinue use of the removed settings.
Specifying the removed settings in `elasticsearch.yml` will result in an error
on startup.
====

.Selective transport compression has been enabled by default.
[%collapsible]
====
*Details* +
Prior to 8.0, transport compression was disabled by default. Starting in 8.0,
`transport.compress` defaults to `indexing_data`. This configuration means that
the propagation of raw indexing data will be compressed between nodes.
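
If the 7.x behavior is preferred, the new default can presumably be overridden
explicitly:

[source,yaml]
----
# elasticsearch.yml: disable transport compression (the 7.x default)
transport.compress: false
----
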

*Impact* +
Inter-node traffic along the indexing path will be reduced. In some scenarios,
CPU usage could increase.
====

.Transport compression defaults to lz4.
[%collapsible]
====
*Details* +
Prior to 8.0, the `transport.compression_scheme` setting defaulted to `deflate`. Starting in
8.0, `transport.compression_scheme` defaults to `lz4`.

Prior to 8.0, the `cluster.remote.<cluster_alias>.transport.compression_scheme`
setting defaulted to `deflate` when `cluster.remote.<cluster_alias>.transport.compress`
was explicitly configured. Starting in 8.0,
`cluster.remote.<cluster_alias>.transport.compression_scheme` falls back to
`transport.compression_scheme` by default.
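
To keep the 7.x compression ratio at the cost of extra CPU, the scheme can
presumably be pinned back to `deflate`:

[source,yaml]
----
# elasticsearch.yml: restore the pre-8.0 compression scheme
transport.compression_scheme: deflate
----
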

*Impact* +
This configuration means that transport compression will produce somewhat lower
compression ratios in exchange for lower CPU load.
====

.The `repositories.fs.compress` node-level setting has been removed.
[%collapsible]
====
*Details* +
For shared file system repositories (`"type": "fs"`), the node level setting `repositories.fs.compress` could
previously be used to enable compression for all shared file system repositories where `compress` was not specified.
The `repositories.fs.compress` setting has been removed.
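
Compression is now enabled per repository. A sketch of the replacement, with a
placeholder repository name and location:

[source,console]
----
PUT /_snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup",
    "compress": true
  }
}
----
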

*Impact* +
Discontinue use of the `repositories.fs.compress` node-level setting. Use the
repository-specific `compress` setting to enable compression instead. Refer to
{ref}/snapshots-filesystem-repository.html#filesystem-repository-settings[Shared
file system repository settings].
====

// This change is not notable because it should not have any impact on upgrades
// However we document it here out of an abundance of caution
[[fips-default-hash-changed]]
.When FIPS mode is enabled, the default password hash is now PBKDF2_STRETCH.
[%collapsible]
====
*Details* +
If `xpack.security.fips_mode.enabled` is true (see <<fips-140-compliance>>),
the value of `xpack.security.authc.password_hashing.algorithm` now defaults to
`pbkdf2_stretch`.

In earlier versions this setting would always default to `bcrypt` and a runtime
check would prevent a node from starting unless the value was explicitly set to
a "pbkdf2" variant.

There is no change for clusters that do not enable FIPS 140 mode.
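
To pin the algorithm explicitly rather than rely on the default, something like
the following can be set (in FIPS mode the value must be a PBKDF2 variant):

[source,yaml]
----
# elasticsearch.yml
xpack.security.fips_mode.enabled: true
xpack.security.authc.password_hashing.algorithm: pbkdf2_stretch
----
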

*Impact* +
This change should not have any impact on upgraded nodes.
Any node with an explicitly configured value for the password hashing algorithm
will continue to use that configured value.
Any node that did not have an explicitly configured password hashing algorithm in
{es} 6.x or {es} 7.x would have failed to start.
====

.The `xpack.monitoring.history.duration` setting will not delete indices created by Metricbeat or Elastic Agent.
[%collapsible]
====
*Details* +
Prior to 8.0, Elasticsearch would internally handle removal of all monitoring indices according to the
`xpack.monitoring.history.duration` setting.

When using Metricbeat or Elastic Agent >= 8.0 to collect monitoring data,
indices are managed via an ILM policy. If the setting is present, the policy
will be created using `xpack.monitoring.history.duration` as the initial
retention period.

If you need to customize retention settings for monitoring data collected with
Metricbeat, update the `.monitoring-8-ilm-policy` ILM policy directly.

The `xpack.monitoring.history.duration` setting will only apply to monitoring indices written using (legacy) internal
collection, not indices created by Metricbeat or Elastic Agent.
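
As an illustration, retention could be adjusted by rewriting the policy's
delete phase; the 7-day value is a placeholder, and since this request replaces
the whole policy, the body should be based on the existing policy definition:

[source,console]
----
PUT /_ilm/policy/.monitoring-8-ilm-policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----
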

*Impact* +
After upgrading, ensure that the `.monitoring-8-ilm-policy` ILM policy aligns
with your desired retention settings.

If you only use Metricbeat or Elastic Agent to collect monitoring data, you can
also remove any custom `xpack.monitoring.history.duration` settings.
====