[role="xpack"]
|
|
[[data-tiers]]
|
|
== Data tiers
|
|
|
|
A _data tier_ is a collection of nodes with the same data role that
|
|
typically share the same hardware profile:
|
|
|
|
* <<content-tier, Content tier>> nodes handle the indexing and query load for content such as a product catalog.
|
|
* <<hot-tier, Hot tier>> nodes handle the indexing load for time series data such as logs or metrics
|
|
and hold your most recent, most-frequently-accessed data.
|
|
* <<warm-tier, Warm tier>> nodes hold time series data that is accessed less-frequently
|
|
and rarely needs to be updated.
|
|
* <<cold-tier, Cold tier>> nodes hold time series data that is accessed infrequently and not normally updated.
|
|
* <<frozen-tier, Frozen tier>> nodes hold time series data that is accessed rarely and never updated, kept in searchable snapshots.
|
|
|
|
When you index documents directly to a specific index, they remain on content tier nodes indefinitely.
|
|
|
|
When you index documents to a data stream, they initially reside on hot tier nodes.
|
|
You can configure <<index-lifecycle-management, {ilm}>> ({ilm-init}) policies
|
|
to automatically transition your time series data through the hot, warm, and cold tiers
|
|
according to your performance, resiliency and data retention requirements.
|
|
|
|
A node's <<data-node, data role>> is configured in `elasticsearch.yml`.
|
|
For example, the highest-performance nodes in a cluster might be assigned to both the hot and content tiers:
|
|
|
|
[source,yaml]
|
|
--------------------------------------------------
|
|
node.roles: ["data_hot", "data_content"]
|
|
--------------------------------------------------
|
|
|
|
[discrete]
|
|
[[content-tier]]
|
|
=== Content tier
|
|
|
|
Data stored in the content tier is generally a collection of items such as a product catalog or article archive.
|
|
Unlike time series data, the value of the content remains relatively constant over time,
|
|
so it doesn't make sense to move it to a tier with different performance characteristics as it ages.
|
|
Content data typically has long data retention requirements, and you want to be able to retrieve
|
|
items quickly regardless of how old they are.
|
|
|
|
Content tier nodes are usually optimized for query performance--they prioritize processing power over IO throughput
|
|
so they can process complex searches and aggregations and return results quickly.
|
|
While they are also responsible for indexing, content data is generally not ingested at as high a rate
|
|
as time series data such as logs and metrics. From a resiliency perspective the indices in this
|
|
tier should be configured to use one or more replicas.
|
|
|
|
New indices are automatically allocated to the <<content-tier>> unless they are part of a data stream.
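
For example, the following request creates an index for catalog-style content with one
replica for resiliency. This is a minimal sketch: the index name is a placeholder, and
because the request doesn't specify a tier preference, {es} allocates the index to the
content tier by default, as described in <<data-tier-allocation>>.

[source,console]
----
PUT /product-catalog
{
  "settings": {
    "index.number_of_replicas": 1
  }
}
----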
[discrete]
[[hot-tier]]
=== Hot tier

The hot tier is the {es} entry point for time series data and holds your most recent,
most frequently searched time series data.
Nodes in the hot tier need to be fast for both reads and writes,
which requires more hardware resources and faster storage (SSDs).
For resiliency, indices in the hot tier should be configured to use one or more replicas.

New indices that are part of a <<data-streams, data stream>> are automatically allocated to the
hot tier.
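
For example, the following sketch sets up a data stream whose backing indices are
allocated to the hot tier. The template and data stream names are placeholders, and the
sketch assumes no other index template matches the `my-logs*` pattern:

[source,console]
----
PUT /_index_template/my-logs-template
{
  "index_patterns": ["my-logs*"],
  "data_stream": {}
}

PUT /_data_stream/my-logs
----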
[discrete]
[[warm-tier]]
=== Warm tier

Time series data can move to the warm tier once it is being queried less frequently
than the recently indexed data in the hot tier.
The warm tier typically holds data from recent weeks.
Updates are still allowed, but likely infrequent.
Nodes in the warm tier generally don't need to be as fast as those in the hot tier.
For resiliency, indices in the warm tier should be configured to use one or more replicas.

[discrete]
[[cold-tier]]
=== Cold tier

Once data is no longer being updated, it can move from the warm tier to the cold tier where it
stays while being queried infrequently.
The cold tier is still a responsive query tier, but data in the cold tier is not normally updated.
As data transitions into the cold tier, it can be compressed and shrunk.
For resiliency, indices in the cold tier can rely on
<<ilm-searchable-snapshot, searchable snapshots>>, eliminating the need for replicas.
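
For example, an {ilm-init} policy might roll over indices while they are in the hot tier
and convert them to searchable snapshots once they reach the cold phase. This is a
sketch only: the policy name, the ages, and the `my-repository` snapshot repository
(which must already be registered) are placeholders:

[source,console]
----
PUT /_ilm/policy/my-timeseries-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d"
          }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {
          "searchable_snapshot": {
            "snapshot_repository": "my-repository"
          }
        }
      }
    }
  }
}
----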
[discrete]
[[frozen-tier]]
=== Frozen tier

experimental::[]

Once data is no longer being queried, or is queried only rarely, it may move from
the cold tier to the frozen tier where it stays for the rest of its life.

The frozen tier uses <<searchable-snapshots,{search-snaps}>> to store and load
data from a snapshot repository. Instead of using a full local copy of your
data, these {search-snaps} use smaller <<shared-cache,local caches>> containing
only recently searched data. If a search requires data that is not in a cache,
{es} fetches the data as needed from the snapshot repository. This decouples
compute and storage, letting you run searches over very large data sets with
minimal compute resources, which significantly reduces your storage and
operating costs.

When data transitions into the frozen tier, the <<ilm-index-lifecycle, frozen phase>>
automatically converts it to a shared-cache searchable snapshot.
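
Outside of {ilm-init}, you can also mount an index from an existing snapshot as a
shared-cache searchable snapshot with the mount API. This is a sketch only: the
repository, snapshot, and index names are placeholders, the snapshot must already exist
in a registered repository, and the cluster needs frozen tier nodes with a shared
snapshot cache configured:

[source,console]
----
POST /_snapshot/my-repository/my-snapshot/_mount?storage=shared_cache
{
  "index": "my-index"
}
----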
Search is typically slower on the frozen tier than on the cold tier because {es}
must sometimes fetch data from the snapshot repository.

NOTE: The frozen tier is not yet available on the {ess-trial}[{ess}]. To
recreate similar functionality, see
<<searchable-snapshots-frozen-tier-on-cloud>>.

[discrete]
[[data-tier-allocation]]
=== Data tier index allocation

When you create an index, {es} sets
<<tier-preference-allocation-filter, `index.routing.allocation.include._tier_preference`>>
to `data_content` by default to automatically allocate the index shards to the content tier.

When {es} creates an index as part of a <<data-streams, data stream>>,
it sets `index.routing.allocation.include._tier_preference`
to `data_hot` by default to automatically allocate the index shards to the hot tier.
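
To check which tiers an existing index targets, you can inspect this setting. A quick
sketch, assuming an index named `my-index`:

[source,console]
----
GET /my-index/_settings?filter_path=*.settings.index.routing.allocation.include._tier_preference
----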
You can override the automatic tier-based allocation by specifying
<<shard-allocation-filtering, shard allocation filtering>>
settings in the create index request or index template that matches the new index.

You can also explicitly set `index.routing.allocation.include._tier_preference`
to opt out of the default tier-based allocation.
If you set the tier preference to `null`, {es} ignores the data tier roles during allocation.
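
For example, the following sketch creates an index that prefers the warm tier and falls
back to the hot tier, then removes the tier preference entirely so that {es} ignores the
data tiers when allocating its shards. The index name is a placeholder:

[source,console]
----
PUT /my-warm-index
{
  "settings": {
    "index.routing.allocation.include._tier_preference": "data_warm,data_hot"
  }
}

PUT /my-warm-index/_settings
{
  "index.routing.allocation.include._tier_preference": null
}
----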
[discrete]
[[data-tier-migration]]
=== Automatic data tier migration

{ilm-init} automatically transitions managed
indices through the available data tiers using the <<ilm-migrate, migrate>> action.
By default, this action is automatically injected in every phase.
You can explicitly specify the migrate action to override the default behavior,
or use the <<ilm-allocate, allocate action>> to manually specify allocation rules.
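
For example, the following sketch disables the injected migrate action in the warm phase
and uses an allocate action based on a custom node attribute instead. The policy name
and the `rack_id` attribute are placeholders:

[source,console]
----
PUT /_ilm/policy/my-custom-allocation-policy
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "7d",
        "actions": {
          "migrate": {
            "enabled": false
          },
          "allocate": {
            "include": {
              "rack_id": "rack-one"
            }
          }
        }
      }
    }
  }
}
----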