[[snapshots-restore-snapshot]]
== Restore a snapshot

This guide shows you how to restore a snapshot. Snapshots are a convenient way to store a copy of your data outside of a cluster. You can restore a snapshot to recover indices and data streams after deletion or a hardware failure. You can also use snapshots to transfer data between clusters.

In this guide, you'll learn how to:

* Get a list of available snapshots
* Restore an index or data stream from a snapshot
* Restore a feature state
* Restore an entire cluster
* Monitor the restore operation
* Cancel an ongoing restore

This guide also provides tips for <> and <>.

[discrete]
[[restore-snapshot-prereqs]]
=== Prerequisites

include::register-repository.asciidoc[tag=kib-snapshot-prereqs]

include::apis/restore-snapshot-api.asciidoc[tag=restore-prereqs]

[discrete]
[[restore-snapshot-considerations]]
=== Considerations

When restoring data from a snapshot, keep the following in mind:

* If you restore a data stream, you also restore its backing indices.
* You can only restore an existing index if it's <> and the index in the snapshot has the same number of primary shards.
* You can't restore an existing open index. This includes backing indices for a data stream.
* The restore operation automatically opens restored indices, including backing indices.
* You can restore only a specific backing index from a data stream. However, the restore operation doesn't add the restored backing index to any existing data stream.

[discrete]
[[get-snapshot-list]]
=== Get a list of available snapshots

To view a list of available snapshots in {kib}, go to the main menu and click *Stack Management > Snapshot and Restore*.

You can also use the <> and the <> to find snapshots that are available to restore. First, use the get repository API to fetch a list of registered snapshot repositories.

[source,console]
----
GET _snapshot
----
// TEST[setup:setup-snapshots]

Then use the get snapshot API to get a list of snapshots in a specific repository. This also returns each snapshot's contents.

[source,console]
----
GET _snapshot/my_repository/*?verbose=false
----
// TEST[setup:setup-snapshots]

[discrete]
[[restore-index-data-stream]]
=== Restore an index or data stream

You can restore a snapshot using {kib}'s *Snapshot and Restore* feature or the <>.

By default, a restore request attempts to restore all regular indices and regular data streams in a snapshot. In most cases, you only need to restore a specific index or data stream from a snapshot. However, you can't restore an existing open index.

If you're restoring data to a pre-existing cluster, use one of the following methods to avoid conflicts with existing indices and data streams:

* <>
* <>

[discrete]
[[delete-restore]]
==== Delete and restore

The simplest way to avoid conflicts is to delete an existing index or data stream before restoring it. To prevent the accidental re-creation of the index or data stream, we recommend you temporarily stop all indexing until the restore operation is complete.

WARNING: If the <> cluster setting is `false`, don't use the <> to target the `*` or `.*` wildcard pattern. If you use {es}'s security features, this will delete system indices required for authentication. Instead, target the `*,-.*` wildcard pattern to exclude these system indices and other index names that begin with a dot (`.`).
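If you're not sure how `action.destructive_requires_name` is currently configured, you can check it before running any wildcard deletes. Here's a quick sketch using the cluster get settings API, with the same `filter_path` style used elsewhere in this guide:

[source,console]
----
GET _cluster/settings?include_defaults=true&filter_path=**.action.destructive_requires_name
----

Once you've confirmed the setting, delete only the specific indices and data streams you plan to restore: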
[source,console]
----
# Delete an index
DELETE my-index

# Delete a data stream
DELETE _data_stream/logs-my_app-default
----
// TEST[setup:setup-snapshots]

In the restore request, explicitly specify any indices and data streams to restore.

[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index,logs-my_app-default"
}
----
// TEST[continued]
// TEST[s/_restore/_restore?wait_for_completion=true/]

[discrete]
[[rename-on-restore]]
==== Rename on restore

If you want to avoid deleting existing data, you can instead rename the indices and data streams you restore. You typically use this method to compare existing data to historical data from a snapshot. For example, you can use this method to review documents after an accidental update or deletion.

Before you start, ensure the cluster has enough capacity for both the existing and restored data.

The following restore snapshot API request prepends `restored-` to the name of any restored index or data stream.

[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index,logs-my_app-default",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored-$1"
}
----
// TEST[setup:setup-snapshots]
// TEST[s/_restore/_restore?wait_for_completion=true/]

If the rename options produce two or more indices or data streams with the same name, the restore operation fails.

If you rename a data stream, its backing indices are also renamed. For example, if you rename the `logs-my_app-default` data stream to `restored-logs-my_app-default`, the backing index `.ds-logs-my_app-default-2099.03.09-000005` is renamed to `.ds-restored-logs-my_app-default-2099.03.09-000005`.

When the restore operation is complete, you can compare the original and restored data. If you no longer need an original index or data stream, you can delete it and use a <> to rename the restored one.

[source,console]
----
# Delete the original index
DELETE my-index

# Reindex the restored index to rename it
POST _reindex
{
  "source": {
    "index": "restored-my-index"
  },
  "dest": {
    "index": "my-index"
  }
}

# Delete the original data stream
DELETE _data_stream/logs-my_app-default

# Reindex the restored data stream to rename it
POST _reindex
{
  "source": {
    "index": "restored-logs-my_app-default"
  },
  "dest": {
    "index": "logs-my_app-default",
    "op_type": "create"
  }
}
----
// TEST[continued]

[discrete]
[[restore-feature-state]]
=== Restore a feature state

You can restore a <> to recover system indices, system data streams, and other configuration data for a feature from a snapshot.

If you restore a snapshot's cluster state, the operation restores all feature states in the snapshot by default. Similarly, if you don't restore a snapshot's cluster state, the operation doesn't restore any feature states by default. You can also choose to restore only specific feature states from a snapshot, regardless of the cluster state.

To view a snapshot's feature states, use the get snapshot API.

[source,console]
----
GET _snapshot/my_repository/my_snapshot_2099.05.06
----
// TEST[setup:setup-snapshots]

The response's `feature_states` property contains a list of features in the snapshot as well as each feature's indices.
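For illustration, the relevant portion of the response might look something like the following abridged sketch. The exact feature names and index names depend on what's in your snapshot; `geoip` and `.geoip_databases` are just examples.

[source,js]
----
"feature_states": [
  {
    "feature_name": "geoip",
    "indices": [ ".geoip_databases" ]
  }
]
----
// NOTCONSOLE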
To restore a specific feature state from the snapshot, specify the `feature_name` from the response in the restore snapshot API's <> parameter.

NOTE: When you restore a feature state, {es} closes and overwrites the feature's existing indices.

WARNING: Restoring the `security` feature state overwrites system indices used for authentication. If you use {ess}, ensure you have access to the {ess} Console before restoring the `security` feature state. If you run {es} on your own hardware, <> to ensure you'll still be able to access your cluster.

[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "feature_states": [ "geoip" ],
  "include_global_state": false, <1>
  "indices": "-*" <2>
}
----
// TEST[setup:setup-snapshots]
// TEST[s/^/DELETE my-index\nDELETE _data_stream\/logs-my_app-default\n/]
// TEST[s/_restore/_restore?wait_for_completion=true/]
// TEST[s/"feature_states": \[ "geoip" \],//]

<1> Exclude the cluster state from the restore operation.
<2> Exclude the other indices and data streams in the snapshot from the restore operation.

[discrete]
[[restore-entire-cluster]]
=== Restore an entire cluster

In some cases, you need to restore an entire cluster from a snapshot, including the cluster state and all <>. These cases should be rare, such as in the event of a catastrophic failure.

Restoring an entire cluster involves deleting important system indices, including those used for authentication. Consider whether you can restore specific indices or data streams instead.

If you're restoring to a different cluster, see <> before you start.

. If you <>, you can restore them to each node. This step is optional and requires a <>.
+
After you shut down a node, copy the backed-up configuration files over to the node's `$ES_PATH_CONF` directory. Before restarting the node, ensure `elasticsearch.yml` contains the appropriate node roles, node name, and other node-specific settings.
+
If you choose to perform this step, you must repeat this process on each node in the cluster.

. Temporarily stop indexing and turn off the following features:
+
--
* GeoIP database downloader and ILM history store
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "ingest.geoip.downloader.enabled": false,
    "indices.lifecycle.history_index_enabled": false
  }
}
----

* ILM
+
[source,console]
----
POST _ilm/stop
----

////
[source,console]
----
POST _ilm/start
----
// TEST[continued]
////

* Machine Learning
+
[source,console]
----
POST _ml/set_upgrade_mode?enabled=true
----

////
[source,console]
----
POST _ml/set_upgrade_mode?enabled=false
----
// TEST[continued]
////

* Monitoring
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}
----
// TEST[warning:[xpack.monitoring.collection.enabled] setting was deprecated in Elasticsearch and will be removed in a future release.]

* Watcher
+
[source,console]
----
POST _watcher/_stop
----

////
[source,console]
----
POST _watcher/_start
----
// TEST[continued]
////

* Universal Profiling
+
Check if Universal Profiling index template management is enabled:
+
[source,console]
----
GET /_cluster/settings?filter_path=**.xpack.profiling.templates.enabled&include_defaults=true
----
+
If the value is `true`, disable Universal Profiling index template management:
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.profiling.templates.enabled": false
  }
}
----
--

. [[restore-create-file-realm-user]]If you use {es} security features, log in to a node host, navigate to the {es} installation directory, and add a user with the `superuser` role to the file realm using the <> tool.
+
For example, the following command creates a user named `restore_user`.
+
[source,sh]
----
./bin/elasticsearch-users useradd restore_user -p my_password -r superuser
----
+
Use this file realm user to authenticate requests until the restore operation is complete.

. Use the <> to set <> to `false`. This lets you delete data streams and indices using wildcards.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": false
  }
}
----
// TEST[setup:setup-snapshots]

. Delete all existing data streams on the cluster.
+
[source,console]
----
DELETE _data_stream/*?expand_wildcards=all
----
// TEST[continued]

. Delete all existing indices on the cluster.
+
[source,console]
----
DELETE *?expand_wildcards=all
----
// TEST[continued]

. Restore the entire snapshot, including the cluster state. By default, restoring the cluster state also restores any feature states in the snapshot.
+
[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "*",
  "include_global_state": true
}
----
// TEST[continued]
// TEST[s/_restore/_restore?wait_for_completion=true/]

. When the restore operation is complete, resume indexing and restart any features you stopped:
+
NOTE: When the snapshot is restored, the license that was in use at the time the snapshot was taken will be restored as well. If your license has expired since the snapshot was taken, you will need to use the <> to install a current license.
+
--
* GeoIP database downloader and ILM history store
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "ingest.geoip.downloader.enabled": true,
    "indices.lifecycle.history_index_enabled": true
  }
}
----
//TEST[s/true/false/]

* ILM
+
[source,console]
----
POST _ilm/start
----

* Machine Learning
+
[source,console]
----
POST _ml/set_upgrade_mode?enabled=false
----

* Monitoring
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
----
// TEST[warning:[xpack.monitoring.collection.enabled] setting was deprecated in Elasticsearch and will be removed in a future release.]
// TEST[s/true/false/]

* Watcher
+
[source,console]
----
POST _watcher/_start
----

* Universal Profiling
+
If the value was `true` initially, enable Universal Profiling index template management again. Otherwise, skip this step:
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.profiling.templates.enabled": true
  }
}
----
//TEST[s/true/false/]
--

. Optionally, reset the `action.destructive_requires_name` cluster setting.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": null
  }
}
----

[discrete]
[[monitor-restore]]
=== Monitor a restore

The restore operation uses the <> to restore an index's primary shards from a snapshot. While the restore operation recovers primary shards, the cluster will have a `yellow` <>.

After all primary shards are recovered, the replication process creates and distributes replicas across eligible data nodes. When replication is complete, the cluster health status typically becomes `green`.

Once you start a restore in {kib}, you're taken to the *Restore Status* page. You can use this page to track the current state for each shard in the snapshot.

You can also monitor snapshot recovery using {es} APIs. To monitor the cluster health status, use the <>.

[source,console]
----
GET _cluster/health
----

To get detailed information about ongoing shard recoveries, use the <>.
[source,console]
----
GET my-index/_recovery
----
// TEST[setup:setup-snapshots]

To view any unassigned shards, use the <>.

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
----

Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for primary shards and `r` for replicas. The `unassigned.reason` describes why the shard remains unassigned.

To get a more in-depth explanation of an unassigned shard's allocation status, use the <>.

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]

[discrete]
[[cancel-restore]]
=== Cancel a restore

You can delete an index or data stream to cancel its ongoing restore. This also deletes any existing data in the cluster for the index or data stream. Deleting an index or data stream doesn't affect the snapshot or its data.

[source,console]
----
# Delete an index
DELETE my-index

# Delete a data stream
DELETE _data_stream/logs-my_app-default
----
// TEST[setup:setup-snapshots]

[discrete]
[[restore-different-cluster]]
=== Restore to a different cluster

TIP: {ess} can help you restore snapshots from other deployments. See {cloud}/ec-restoring-snapshots.html[Work with snapshots].

Snapshots aren't tied to a particular cluster or a cluster name. You can create a snapshot in one cluster and restore it in another <>. Any data stream or index you restore from a snapshot must also be compatible with the current cluster's version. The topology of the clusters doesn't need to match.

To restore a snapshot, its repository must be <> and available to the new cluster. If the original cluster still has write access to the repository, register the repository as read-only. This prevents multiple clusters from writing to the repository at the same time and corrupting the repository's contents. It also prevents {es} from caching the repository's contents, which means that changes made by other clusters will become visible straight away.

Before you start a restore operation, ensure the new cluster has enough capacity for any data streams or indices you want to restore. If the new cluster has a smaller capacity, you can:

* Add nodes or upgrade your hardware to increase capacity.
* Restore fewer indices and data streams.
* Reduce the <> for restored indices.
+
For example, the following restore snapshot API request uses the `index_settings` option to set `index.number_of_replicas` to `1`.
+
[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index,logs-my_app-default",
  "index_settings": {
    "index.number_of_replicas": 1
  }
}
----
// TEST[setup:setup-snapshots]
// TEST[s/^/DELETE my-index\nDELETE _data_stream\/logs-my_app-default\n/]
// TEST[s/_restore/_restore?wait_for_completion=true/]

If indices or backing indices in the original cluster were assigned to particular nodes using <>, the same rules are enforced in the new cluster. If the new cluster doesn't contain nodes with the appropriate attributes, the index won't be successfully restored unless these index allocation settings are changed during the restore operation.
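For example, a request along these lines uses the restore snapshot API's `ignore_index_settings` option so that a node-attribute requirement isn't applied during the restore. The `index.routing.allocation.require.data_type` setting name is only an illustration; substitute whatever allocation settings your indices actually use.

[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index",
  "ignore_index_settings": [
    "index.routing.allocation.require.data_type"
  ]
}
----

Alternatively, you can use the `index_settings` option to override those settings with values that match the new cluster.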
The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally restoring incompatible settings. If you need to restore a snapshot with incompatible persistent settings, try restoring it without the <>.

[discrete]
[[troubleshoot-restore]]
=== Troubleshoot restore errors

Here's how to resolve common errors returned by restore requests.

[discrete]
==== Cannot restore index [] because an open index with same name already exists in the cluster

You can't restore an open index that already exists. To resolve this error, try one of the methods in <>.

[discrete]
==== Cannot restore index [] with [x] shards from a snapshot of index [] with [y] shards

You can only restore an existing index if it's closed and the index in the snapshot has the same number of primary shards. This error indicates the index in the snapshot has a different number of primary shards. To resolve this error, try one of the methods in <>.
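If you're unsure how many primary shards the existing index has, one quick way to check is the get index settings API, sketched here for `my-index`:

[source,console]
----
GET my-index/_settings?filter_path=*.settings.index.number_of_shards
----

Compare the result with the shard count in the error message. If they differ, closing the existing index isn't enough; delete or rename it instead, as described earlier in this guide.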