Commit graph

7 commits

Dan Hermann
ff333800d9
[DOCS] Data stream modification API (#80094) (#80664) 2021-11-11 07:46:04 -06:00
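For context, a minimal console sketch of the data stream modification API that this commit documents; the data stream and index names below are illustrative placeholders, not taken from the commit.

    # "my-data-stream", "my-index", and the backing index name are hypothetical.
    # Swaps one backing index out of a data stream and adds another in a single call.
    POST /_data_stream/_modify
    {
      "actions": [
        {
          "remove_backing_index": {
            "data_stream": "my-data-stream",
            "index": ".ds-my-data-stream-2021.11.11-000001"
          }
        },
        {
          "add_backing_index": {
            "data_stream": "my-data-stream",
            "index": "my-index"
          }
        }
      ]
    }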
James Rodewig
83eabfcf2e
[DOCS] Data stream migration API (#65017) (#66786)
Co-authored-by: Dan Hermann <danhermann@users.noreply.github.com>
2020-12-23 09:40:21 -05:00
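For context, a minimal console sketch of the data stream migration API from this commit, which converts an index alias and its indices into a data stream; the alias name is a placeholder.

    # "my-time-series-alias" is hypothetical; the alias' write index
    # becomes the write index of the resulting data stream.
    POST /_data_stream/_migrate/my-time-series-alias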
Martijn van Groningen
1596b93731
Protect replicated data streams against local rollovers (#65999)
Backporting #64710 to the 7.x branch.

When a data stream is being auto-followed, a rollover in the local cluster can break auto-following:
if the local cluster performs a rollover, it creates a new write index, and if the remote cluster later
rolls over as well, the remote's new write index can't be replicated, because it has the same name as
the write index that was created earlier in the local cluster.

If a data stream is managed by CCR, then the local cluster should not roll over that data stream.
The data stream should be rolled over in the remote cluster, and that change should then replicate
to the local cluster. Performing the corresponding rollover in the local cluster is an operation that
the data stream support in CCR itself should perform, rather than the rollover API.

To protect against rolling over a replicated data stream, this PR adds a replicated field to the
DataStream class. The rollover API now fails with an error if the targeted data stream is a replicated
data stream. When the put follow API creates a data stream in the local cluster, it sets the replicated
flag to true. There should be a way to turn a replicated data stream back into a regular data stream,
for example during disaster recovery; the promote data stream API added in this PR does exactly that
(a console sketch follows this entry). After a replicated data stream is promoted to a regular data
stream, the local data stream can be rolled over, so that the new write index is no longer a follower
index. Also, if the put follow API then attempts to update this data stream (for example, to resume
auto-following), that will fail, because the data stream is no longer a replicated data stream.

Today, with time-based indices behind an alias, the is_write_index property isn't replicated from the
remote cluster to the local cluster, so attempting to roll over the alias in the local cluster fails,
because the alias doesn't have a write index. The replicated field added to the DataStream class,
together with the new validation, achieves the same kind of protection in a more robust way.

A follow-up to #61993
2020-12-08 10:45:58 +01:00
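A minimal console sketch of the behaviour this commit describes; the data stream name is illustrative.

    # While "my-data-stream" is a replicated (CCR-managed) data stream,
    # a local rollover is rejected with an error:
    POST /my-data-stream/_rollover

    # During disaster recovery, promote it to a regular data stream first:
    POST /_data_stream/_promote/my-data-stream

    # After promotion, the local rollover succeeds and the new write index
    # is no longer a follower index:
    POST /my-data-stream/_rollover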
James Rodewig
76b2dd23e2
[DOCS] Document data stream stats API (#59435) (#59874) 2020-07-20 09:50:26 -04:00
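For context, a minimal console sketch of the data stream stats API that this commit documents; the data stream name is a placeholder.

    # Returns per-data-stream statistics such as backing index count,
    # store size, and maximum timestamp.
    GET /_data_stream/my-data-stream/_stats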
James Rodewig
fca722cee1
[DOCS] Add x-pack tag to data stream docs (#59241) (#59299) 2020-07-09 13:12:38 -04:00
James Rodewig
9ba1b1d067
[DOCS] Reformat data stream API docs (#58322) (#58334) 2020-06-18 10:59:12 -04:00
James Rodewig
6fc8317f07
[DOCS] Reformat data streams intro and overview (#57954) (#57993)
Changes:

* Updates 'Data streams' intro page to focus on the problem, the solution,
  and its benefits.

* Adds 'Data streams overview' page to cover conceptual information,
  based on existing content in the 'Data streams' intro.

* Adds diagrams for data streams and search/indexing request examples.

* Moves API jump list and API docs to a new 'Data streams APIs' section.
  Links to these APIs will be available through tutorials.

* Adds xrefs to existing docs for concepts like generation, write index,
  and append-only.
2020-06-11 11:32:09 -04:00