diff --git a/docs/reference/ilm/ilm-tutorial.asciidoc b/docs/reference/ilm/ilm-tutorial.asciidoc index 0ac1978e9211..93390a1ae0da 100644 --- a/docs/reference/ilm/ilm-tutorial.asciidoc +++ b/docs/reference/ilm/ilm-tutorial.asciidoc @@ -43,9 +43,8 @@ A lifecycle policy specifies the phases in the index lifecycle and the actions to perform in each phase. A lifecycle can have up to four phases: `hot`, `warm`, `cold`, and `delete`. -You can define and manage policies through the {kib} Management UI, -which invokes the {ilm-init} <> API to create policies -according to the options you specify. +You can define and manage policies through {kib} Management or with the +<> API. For example, you might define a `timeseries_policy` that has two phases: diff --git a/docs/reference/settings/ilm-settings.asciidoc b/docs/reference/settings/ilm-settings.asciidoc index ba6ecc4ef24f..3dd24841add1 100644 --- a/docs/reference/settings/ilm-settings.asciidoc +++ b/docs/reference/settings/ilm-settings.asciidoc @@ -1,38 +1,47 @@ [role="xpack"] [[ilm-settings]] -=== {ilm-cap} settings +=== {ilm-cap} settings in {es} +[subs="attributes"] +++++ +{ilm-cap} settings +++++ -These are the settings available for configuring Index Lifecycle Management +These are the settings available for configuring <> ({ilm-init}). ==== Cluster level settings `xpack.ilm.enabled`:: +(boolean) deprecated:[7.8.0,Basic License features are always enabled] + This deprecated setting has no effect and will be removed in Elasticsearch 8.0. -`indices.lifecycle.poll_interval`:: -(<>) How often {ilm} checks for indices that meet policy -criteria. Defaults to `10m`. - `indices.lifecycle.history_index_enabled`:: +(boolean) Whether ILM's history index is enabled. If enabled, ILM will record the history of actions taken as part of ILM policies to the `ilm-history-*` indices. Defaults to `true`. +`indices.lifecycle.poll_interval`:: +(<>, <>) +How often {ilm} checks for indices that meet policy criteria. Defaults to `10m`. + ==== Index level settings These index-level {ilm-init} settings are typically configured through index templates. For more information, see <>. `index.lifecycle.name`:: +(<>, string) The name of the policy to use to manage the index. `index.lifecycle.rollover_alias`:: +(<>, string) The index alias to update when the index rolls over. Specify when using a policy that contains a rollover action. When the index rolls over, the alias is updated to reflect that the index is no longer the write index. For more information about rollover, see <>. `index.lifecycle.parse_origination_date`:: +(<>, boolean) When configured to `true` the origination date will be parsed from the index name. The index format must match the pattern `^.*-{date_format}-\\d+`, where the `date_format` is `yyyy.MM.dd` and the trailing digits are optional (an @@ -41,6 +50,8 @@ index that was rolled over would normally match the full format eg. the index creation will fail. `index.lifecycle.origination_date`:: +(<>, long) The timestamp that will be used to calculate the index age for its phase transitions. This allows the users to create an index containing old data and -use the original creation date of the old data to calculate the index age. Must be a long (Unix epoch) value. +use the original creation date of the old data to calculate the index age. +Must be a long (Unix epoch) value. 
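+
+For example, you might apply these settings as follows. This is an
+illustrative sketch only: the `my_policy` policy, the `timeseries_template`
+template, and the `timeseries` alias are placeholder names, not values that
+{ilm-init} requires.
+
+[source,console]
+--------------------------------------------------
+PUT _cluster/settings
+{
+  "transient": {
+    "indices.lifecycle.poll_interval": "3m" <1>
+  }
+}
+--------------------------------------------------
+<1> Example value: check for indices that meet policy criteria every 3 minutes
+instead of the default 10 minutes.
+
+[source,console]
+--------------------------------------------------
+PUT _template/timeseries_template
+{
+  "index_patterns": ["timeseries-*"],
+  "settings": {
+    "index.lifecycle.name": "my_policy", <1>
+    "index.lifecycle.rollover_alias": "timeseries" <2>
+  }
+}
+--------------------------------------------------
+<1> Apply the lifecycle policy named `my_policy` (a placeholder) to new indices
+that match the template.
+<2> Update the `timeseries` alias (a placeholder) when a managed index rolls over.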
diff --git a/docs/reference/settings/slm-settings.asciidoc b/docs/reference/settings/slm-settings.asciidoc new file mode 100644 index 000000000000..aab31ae53127 --- /dev/null +++ b/docs/reference/settings/slm-settings.asciidoc @@ -0,0 +1,33 @@ +[role="xpack"] +[[slm-settings]] +=== {slm-cap} settings in {es} +[subs="attributes"] +++++ +{slm-cap} settings +++++ + +These are the settings available for configuring +<> ({slm-init}). + +==== Cluster-level settings + +[[slm-history-index-enabled]] +`slm.history_index_enabled`:: +(boolean) +Controls whether {slm-init} records the history of actions taken as part of {slm-init} policies +to the `slm-history-*` indices. Defaults to `true`. + +[[slm-retention-schedule]] +`slm.retention_schedule`:: +(<>, <>) +Controls when the <> runs. +Can be a periodic or absolute time schedule. +Supports all values supported by the <>. +Defaults to daily at 1:30am UTC: `0 30 1 * * ?`. + +[[slm-retention-duration]] +`slm.retention_duration`:: +(<>, <>) +Limits how long {slm-init} should spend deleting old snapshots. +Defaults to one hour: `1h`. + diff --git a/docs/reference/setup.asciidoc b/docs/reference/setup.asciidoc index 66a908b6460a..2e9bce23b298 100644 --- a/docs/reference/setup.asciidoc +++ b/docs/reference/setup.asciidoc @@ -45,18 +45,20 @@ include::setup/jvm-options.asciidoc[] include::setup/secure-settings.asciidoc[] -include::settings/ccr-settings.asciidoc[] +include::settings/audit-settings.asciidoc[] include::modules/indices/circuit_breaker.asciidoc[] -include::modules/indices/recovery.asciidoc[] - -include::modules/indices/indexing_buffer.asciidoc[] +include::settings/ccr-settings.asciidoc[] include::modules/indices/fielddata.asciidoc[] include::settings/ilm-settings.asciidoc[] +include::modules/indices/recovery.asciidoc[] + +include::modules/indices/indexing_buffer.asciidoc[] + include::settings/license-settings.asciidoc[] include::setup/logging-config.asciidoc[] @@ -69,13 +71,13 @@ include::modules/network.asciidoc[] include::modules/indices/query_cache.asciidoc[] -include::modules/indices/request_cache.asciidoc[] - include::modules/indices/search-settings.asciidoc[] include::settings/security-settings.asciidoc[] -include::settings/audit-settings.asciidoc[] +include::modules/indices/request_cache.asciidoc[] + +include::settings/slm-settings.asciidoc[] include::settings/sql-settings.asciidoc[] diff --git a/docs/reference/slm/apis/slm-put.asciidoc b/docs/reference/slm/apis/slm-put.asciidoc index cf234c0b8fdc..8a3e1c17b7ed 100644 --- a/docs/reference/slm/apis/slm-put.asciidoc +++ b/docs/reference/slm/apis/slm-put.asciidoc @@ -83,6 +83,7 @@ Repository used to store snapshots created by this policy. This repository must exist prior to the policy's creation. You can create a repository using the <>. +[[slm-api-put-retention]] `retention`:: (Optional, object) Retention rules used to retain and delete snapshots created by the policy. diff --git a/docs/reference/slm/getting-started-slm.asciidoc b/docs/reference/slm/getting-started-slm.asciidoc index cfcc33b32e8c..65f2b365403d 100644 --- a/docs/reference/slm/getting-started-slm.asciidoc +++ b/docs/reference/slm/getting-started-slm.asciidoc @@ -1,23 +1,34 @@ [role="xpack"] [testenv="basic"] [[getting-started-snapshot-lifecycle-management]] -=== Configure snapshot lifecycle policies +=== Tutorial: Automate backups with {slm-init} -Let's get started with {slm} ({slm-init}) by working through a -hands-on scenario. 
The goal of this example is to automatically back up {es} -indices using the <> every day at a particular -time. Once these snapshots have been created, they are kept for a configured -amount of time and then deleted per a configured retention policy. +This tutorial demonstrates how to automate daily backups of {es} indices using an {slm-init} policy. +The policy takes <> of all indices in the cluster +and stores them in a local repository. +It also defines a retention policy and automatically deletes snapshots +when they are no longer needed. -[float] +To manage snapshots with {slm-init}, you: + +. <>. +. <>. + +To test the policy, you can manually trigger it to take an initial snapshot. + +[discrete] [[slm-gs-register-repository]] ==== Register a repository -Before we can set up an SLM policy, we'll need to set up a -snapshot repository where the snapshots will be -stored. Repositories can use {plugins}/repository.html[many different backends], -including cloud storage providers. You'll probably want to use one of these in -production, but for this example we'll use a shared file system repository: +To use {slm-init}, you must have a snapshot repository configured. +The repository can be local (shared filesystem) or remote (cloud storage). +Remote repositories can reside on S3, HDFS, Azure, Google Cloud Storage, +or any other platform supported by a {plugins}/repository.html[repository plugin]. +Remote repositories are generally used for production deployments. + +For this tutorial, you can register a local repository from +{kibana-ref}/snapshot-repositories.html[{kib} Management] +or use the put repository API: [source,console] ----------------------------------- @@ -30,19 +41,26 @@ PUT /_snapshot/my_repository } ----------------------------------- -[float] +[discrete] [[slm-gs-create-policy]] -==== Setting up a snapshot policy +==== Set up a snapshot policy -Now that we have a repository in place, we can create a policy to automatically -take snapshots. Policies are written in JSON and will define when to take -snapshots, what the snapshots should be named, and which indices should be -included, among other things. We'll use the <> API -to create the policy. +Once you have a repository in place, +you can define an {slm-init} policy to take snapshots automatically. +The policy defines when to take snapshots, which indices should be included, +and what to name the snapshots. +A policy can also specify a <> and +automatically delete snapshots when they are no longer needed. -When configurating a policy, retention can also optionally be configured. See -the <> documentation for the full documentation of -how retention works. +TIP: Don't be afraid to configure a policy that takes frequent snapshots. +Snapshots are incremental and make efficient use of storage. + +You can define and manage policies through {kib} Management or with the put policy API. + +For example, you could define a `nightly-snapshots` policy +to back up all of your indices daily at 2:30AM UTC. 
+ +A put policy request defines the policy configuration in JSON: [source,console] -------------------------------------------------- @@ -62,44 +80,39 @@ PUT /_slm/policy/nightly-snapshots } -------------------------------------------------- // TEST[continued] -<1> when the snapshot should be taken, using - <>, in this - case at 1:30AM each day -<2> whe name each snapshot should be given, using - <> to include the current date in the name - of the snapshot -<3> the repository the snapshot should be stored in -<4> the configuration to be used for the snapshot requests (see below) -<5> which indices should be included in the snapshot, in this case, every index -<6> Optional retention configuration -<7> Keep snapshots for 30 days -<8> Always keep at least 5 successful snapshots -<9> Keep no more than 50 successful snapshots, even if they're less than 30 days old +<1> When the snapshot should be taken in + <>: daily at 2:30AM UTC +<2> How to name the snapshot: use + <> to include the current date in the snapshot name +<3> Where to store the snapshot +<4> The configuration to be used for the snapshot requests (see below) +<5> Which indices to include in the snapshot: all indices +<6> Optional retention policy: keep snapshots for 30 days, +retaining at least 5 and no more than 50 snapshots regardless of age -This policy will take a snapshot of every index each day at 1:30AM UTC. -Snapshots are incremental, allowing frequent snapshots to be stored efficiently, -so don't be afraid to configure a policy to take frequent snapshots. +You can specify additional snapshot configuration options to customize how snapshots are taken. +For example, you could configure the policy to fail the snapshot +if one of the specified indices is missing. +For more information about snapshot options, see <>. -In addition to specifying the indices that should be included in the snapshot, -the `config` field can be used to customize other aspects of the snapshot. You -can use any option allowed in <>, so you can specify, for example, whether the snapshot should fail in -special cases, such as if one of the specified indices cannot be found. - -[float] +[discrete] [[slm-gs-test-policy]] ==== Test the snapshot policy -While snapshots taken by SLM policies can be viewed through the standard snapshot -API, SLM also keeps track of policy successes and failures in ways that are a bit -easier to use to make sure the policy is working. Once a policy has executed at -least once, when you view the policy using the <>, -some metadata will be returned indicating whether the snapshot was sucessfully -initiated or not. +A snapshot taken by {slm-init} is just like any other snapshot. +You can view information about snapshots in {kib} Management or +get info with the <>. +In addition, {slm-init} keeps track of policy successes and failures so you +have insight into how the policy is working. If the policy has executed at +least once, the <> API returns additional metadata +that shows if the snapshot succeeded. -Instead of waiting for our policy to run, let's tell SLM to take a snapshot -as using the configuration from our policy right now instead of waiting for -1:30AM. +You can manually execute a snapshot policy to take a snapshot immediately. +This is useful for taking snapshots before making a configuration change, +upgrading, or to test a new policy. +Manually executing a policy does not affect its configured schedule. 
+ +For example, the following request manually triggers the `nightly-snapshots` policy: [source,console] -------------------------------------------------- @@ -107,11 +120,9 @@ POST /_slm/policy/nightly-snapshots/_execute -------------------------------------------------- // TEST[skip:we can't easily handle snapshots from docs tests] -This request will kick off a snapshot for our policy right now, regardless of -the schedule in the policy. This is useful for taking snapshots before making -a configuration change, upgrading, or for our purposes, making sure our policy -is going to work successfully. The policy will continue to run on its configured -schedule after this execution of the policy. + +After forcing the `nightly-snapshots` policy to run, +you can retrieve the policy to get success or failure information. [source,console] -------------------------------------------------- @@ -119,9 +130,14 @@ GET /_slm/policy/nightly-snapshots?human -------------------------------------------------- // TEST[continued] -This request will return a response that includes the policy, as well as -information about the last time the policy succeeded and failed, as well as the -next time the policy will be executed. +Only the most recent success and failure are returned, +but all policy executions are recorded in the `.slm-history*` indices. +The response also shows when the policy is scheduled to execute next. + +NOTE: The response shows if the policy succeeded in _initiating_ a snapshot. +However, that does not guarantee that the snapshot completed successfully. +It is possible for the initiated snapshot to fail if, for example, the connection to a remote +repository is lost while copying files. [source,console-result] -------------------------------------------------- @@ -143,44 +159,19 @@ next time the policy will be executed. 
"max_count": 50 } }, - "last_success": { <1> - "snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <2> - "time_string": "2019-04-24T16:43:49.316Z", + "last_success": { + "snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <1> + "time_string": "2019-04-24T16:43:49.316Z", <2> "time": 1556124229316 } , - "last_failure": { <3> - "snapshot_name": "nightly-snap-2019.04.02-lohisb5ith2n8hxacaq3mw", - "time_string": "2019-04-02T01:30:00.000Z", - "time": 1556042030000, - "details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}" - } , - "next_execution": "2019-04-24T01:30:00.000Z", <4> - "next_execution_millis": 1556048160000 + "next_execution": "2019-04-24T01:30:00.000Z", <3> + "next_execution_millis": 1556048160000 } } -------------------------------------------------- // TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable] -<1> information about the last time the policy successfully initated a snapshot -<2> the name of the 
snapshot that was successfully initiated -<3> information about the last time the policy failed to initiate a snapshot -<4> the next time the policy will execute +<1> The name of the last snapshot that was successfully initiated by the policy +<2> When the snapshot was initiated +<3> When the policy will initiate the next snapshot -NOTE: This metadata only indicates whether the request to initiate the snapshot was -made successfully or not - after the snapshot has been successfully started, it -is possible for the snapshot to fail if, for example, the connection to a remote -repository is lost while copying files. - -If you're following along, the returned SLM policy shouldn't have a `last_failure` -field - it's included above only as an example. You should, however, see a -`last_success` field and a snapshot name. If you do, you've successfully taken -your first snapshot using SLM! - -While only the most recent sucess and failure are available through the Get Policy -API, all policy executions are recorded to a history index, which may be queried -by searching the index pattern `.slm-history*`. - -That's it! We have our first SLM policy set up to periodically take snapshots -so that our backups are always up to date. You can read more details in the -<> and the -<> diff --git a/docs/reference/slm/index.asciidoc b/docs/reference/slm/index.asciidoc index 22920205edf1..34594910d99b 100644 --- a/docs/reference/slm/index.asciidoc +++ b/docs/reference/slm/index.asciidoc @@ -1,71 +1,21 @@ [role="xpack"] [testenv="basic"] [[snapshot-lifecycle-management]] -== Manage the snapshot lifecycle +== {slm-init}: Manage the snapshot lifecycle You can set up snapshot lifecycle policies to automate the timing, frequency, and retention of snapshots. Snapshot policies can apply to multiple indices. -The snapshot lifecycle management (SLM) <> provide -the building blocks for the snapshot policy features that are part of the Management application in {kib}. -The Snapshot and Restore UI makes it easy to set up policies, register snapshot repositories, -view and manage snapshots, and restore indices. +The {slm} ({slm-init}) <> provide +the building blocks for the snapshot policy features that are part of {kib} Management. +{kibana-ref}/snapshot-repositories.html[Snapshot and Restore] makes it easy to +set up policies, register snapshot repositories, view and manage snapshots, and restore indices. -You can stop and restart SLM to temporarily pause automatic backups while performing +You can stop and restart {slm-init} to temporarily pause automatic backups while performing upgrades or other maintenance. -[float] -[[slm-and-security]] -=== Security and SLM - -Two built-in cluster privileges control access to the SLM actions when -{es} {security-features} are enabled: - -`manage_slm`:: Allows a user to perform all SLM actions, including creating and updating policies -and starting and stopping SLM. - -`read_slm`:: Allows a user to perform all read-only SLM actions, -such as getting policies and checking the SLM status. - -`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any -index, whether or not they have access to that index. - -For example, the following request configures an `slm-admin` role that grants the privileges -necessary for administering SLM. 
- -[source,console] ------------------------------------ -POST /_security/role/slm-admin -{ - "cluster": ["manage_slm", "cluster:admin/snapshot/*"], - "indices": [ - { - "names": [".slm-history-*"], - "privileges": ["all"] - } - ] -} ------------------------------------ -// TEST[skip:security is not enabled here] - -Or, for a read-only role that can retrieve policies (but not update, execute, or -delete them), as well as only view the history index: - -[source,console] ------------------------------------ -POST /_security/role/slm-read-only -{ - "cluster": ["read_slm"], - "indices": [ - { - "names": [".slm-history-*"], - "privileges": ["read"] - } - ] -} ------------------------------------ -// TEST[skip:security is not enabled here] - include::getting-started-slm.asciidoc[] +include::slm-security.asciidoc[] + include::slm-retention.asciidoc[] diff --git a/docs/reference/slm/slm-retention.asciidoc b/docs/reference/slm/slm-retention.asciidoc index ae9bb00cacab..3eaefb552320 100644 --- a/docs/reference/slm/slm-retention.asciidoc +++ b/docs/reference/slm/slm-retention.asciidoc @@ -3,30 +3,34 @@ [[slm-retention]] === Snapshot retention -Automatic deletion of older snapshots is an optional feature of snapshot lifecycle management. -Retention is run as a cluster level task that is not associated with a particular policy's schedule -(though the configuration of which snapshots to keep is done on a per-policy basis). Retention -configuration consists of two parts—The first a cluster-level configuration for when retention is -run and for how long, the second configured on a policy for which snapshots should be eligible for -retention. +You can include a retention policy in an {slm-init} policy to automatically delete old snapshots. +Retention runs as a cluster-level task and is not associated with a particular policy's schedule. +The retention criteria are evaluated as part of the retention task, not when the policy executes. +For the retention task to automatically delete snapshots, +you need to include a <> object in your {slm-init} policy. -The cluster level settings for retention are shown below, and can be changed dynamically using the -<> API: +To control when the retention task runs, configure +<> in the cluster settings. +You can define the schedule as a periodic or absolute <>. +The <> setting limits how long +{slm-init} should spend deleting old snapshots. -|===================================== -| Setting | Default value | Description +You can update the schedule and duration dynamically with the +<> API. +You can run the retention task manually with the +<> API. -| `slm.retention_schedule` | `0 30 1 * * ?` | A periodic or absolute time schedule for when - retention should be run. Supports all values supported by the cron scheduler: <>. Retention can also be manually run using the - <> API. Defaults to daily at 1:30am UTC. +The retention task only considers snapshots initiated through {slm-init} policies, +either according to the policy schedule or through the +<> API. +Manual snapshots are ignored and don't count toward the retention limits. -| `slm.retention_duration` | `"1h"` | A limit of how long SLM should spend deleting old snapshots. -|===================================== +If multiple policies snapshot to the same repository, they can define differing retention criteria. -Policy level configuration for retention is done inside the `retention` object when creating or -updating a policy. All of the retention configurations options are optional. 
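+
+For example, you might adjust the retention schedule and duration, then
+trigger a retention run immediately instead of waiting for the next scheduled
+run. The schedule and duration values below are illustrative only; the
+defaults are `0 30 1 * * ?` and `1h`:
+
+[source,console]
+--------------------------------------------------
+PUT _cluster/settings
+{
+  "persistent": {
+    "slm.retention_schedule": "0 30 3 * * ?", <1>
+    "slm.retention_duration": "2h" <2>
+  }
+}
+--------------------------------------------------
+<1> Example value: run the retention task daily at 3:30AM UTC.
+<2> Example value: let each retention run spend up to two hours deleting
+eligible snapshots.
+
+[source,console]
+--------------------------------------------------
+POST /_slm/_execute_retention
+--------------------------------------------------
+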
+To retrieve information about the snapshot retention task history, +use the <> API: +//// [source,console] -------------------------------------------------- PUT /_slm/policy/daily-snapshots @@ -46,35 +50,7 @@ PUT /_slm/policy/daily-snapshots <2> Keep snapshots for 30 days <3> Always keep at least 5 successful snapshots <4> Keep no more than 50 successful snapshots - -Supported configuration for retention from within a policy are as follows. The default value for -each is unset unless specified by the user in the policy configuration. - -NOTE: The oldest snapshots are always deleted first, in the case of a `max_count` of 5 for a policy -with 6 snapshots, the oldest snapshot will be deleted. - -|===================================== -| Setting | Description -| `expire_after` | A timevalue for how old a snapshot must be in order to be eligible for deletion. -| `min_count` | A minimum number of snapshots to keep, regardless of age. -| `max_count` | The maximum number of snapshots to keep, regardless of age. -|===================================== - -As an example, the retention setting in the policy configured about would read in English as: - -____ -Remove snapshots older than thirty days, but always keep the latest five snapshots. If there are -more than fifty snapshots, remove the oldest surplus snapshots until there are no more than fifty -successful snapshots. -____ - -If multiple policies are configured to snapshot to the same repository, or manual snapshots have -been taken without using the <> API, they are treated as not -eligible for retention, and do not count towards any limits. This allows multiple policies to have -differing retention configuration while using the same snapshot repository. - -Statistics for snapshot retention can be retrieved using the -<> API: +//// [source,console] -------------------------------------------------- @@ -82,7 +58,7 @@ GET /_slm/stats -------------------------------------------------- // TEST[continued] -Which returns a response +The response includes the following statistics: [source,js] -------------------------------------------------- diff --git a/docs/reference/slm/slm-security.asciidoc b/docs/reference/slm/slm-security.asciidoc new file mode 100644 index 000000000000..b01c76531c1d --- /dev/null +++ b/docs/reference/slm/slm-security.asciidoc @@ -0,0 +1,58 @@ +[[slm-and-security]] +=== Security and {slm-init} + +Two built-in cluster privileges control access to the {slm-init} actions when +{es} {security-features} are enabled: + +`manage_slm`:: Allows a user to perform all {slm-init} actions, including creating and updating policies +and starting and stopping {slm-init}. + +`read_slm`:: Allows a user to perform all read-only {slm-init} actions, +such as getting policies and checking the {slm-init} status. + +`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any +index, whether or not they have access to that index. + +You can create and manage roles to assign these privileges through {kib} Management. + +To grant the privileges necessary to create and manage {slm-init} policies and snapshots, +you can set up a role with the `manage_slm` and `cluster:admin/snapshot/*` cluster privileges +and full access to the {slm-init} history indices. 
+ +For example, the following request creates an `slm-admin` role: + +[source,console] +----------------------------------- +POST /_security/role/slm-admin +{ + "cluster": ["manage_slm", "cluster:admin/snapshot/*"], + "indices": [ + { + "names": [".slm-history-*"], + "privileges": ["all"] + } + ] +} +----------------------------------- +// TEST[skip:security is not enabled here] + +To grant read-only access to {slm-init} policies and the snapshot history, +you can set up a role with the `read_slm` cluster privilege and read access +to the {slm-init} history indices. + +For example, the following request creates an `slm-read-only` role: + +[source,console] +----------------------------------- +POST /_security/role/slm-read-only +{ + "cluster": ["read_slm"], + "indices": [ + { + "names": [".slm-history-*"], + "privileges": ["read"] + } + ] +} +----------------------------------- +// TEST[skip:security is not enabled here] \ No newline at end of file
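+
+Once the roles exist, you can assign them to users in {kib} Management or
+with the create user API. For example, the following request creates a
+hypothetical `snapshot_admin` user with the `slm-admin` role. The username,
+password, and full name are placeholders only:
+
+[source,console]
+-----------------------------------
+POST /_security/user/snapshot_admin
+{
+  "password" : "changeme-use-a-real-password",
+  "roles" : [ "slm-admin" ],
+  "full_name" : "Snapshot admin"
+}
+-----------------------------------
+// TEST[skip:security is not enabled here]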