* [DOCS] Add local dev setup instructions
- Replace the existing "Run ES in Docker locally" page with a simpler, no-security local dev setup
- Move this file into the Quickstart folder, along with the existing quickstart guide
- Update the self-managed instructions in the Quickstart guide to use the local dev approach
* Remove `es-test-dir` book-scoped variable
* Remove `plugins-examples-dir` book-scoped variable
* Remove `:dependencies-dir:` and `:xes-repo-dir:` book-scoped variables
- In `index.asciidoc`, two variables (`:dependencies-dir:` and `:xes-repo-dir:`) were removed.
- In `sql/index.asciidoc`, the `:sql-tests:` path was updated to use the full path
- In `esql/index.asciidoc`, the `:esql-tests:` path was updated in the same way
* Replace `es-repo-dir` with `es-ref-dir`
* Move `:include-xpack: true` to the few files that use it, and remove it from index.asciidoc
This enhancement adds a new abstraction to the _search API called a "retriever." A
retriever is a component that returns top hits. This change adds three initial retrievers:
"standard", "knn", and "rrf". The retrievers use a parser-only approach: they
are parsed and then translated into a SearchSourceBuilder that executes the actual
search.
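For illustration, a minimal sketch of what a retriever-based request might look like, expressed with Python's `requests` against a local, security-disabled cluster; the index name, field names, and vector values are placeholders:

```python
# Sketch only: a search request using the "retriever" abstraction.
# Assumes a local, security-disabled cluster and a hypothetical index
# "my-index" with a "title" text field and a "title_vector" dense_vector field.
import requests

body = {
    "retriever": {
        "rrf": {  # reciprocal rank fusion over the nested retrievers
            "retrievers": [
                {"standard": {"query": {"match": {"title": "elasticsearch"}}}},
                {
                    "knn": {
                        "field": "title_vector",
                        "query_vector": [0.1, 0.2, 0.3],
                        "k": 10,
                        "num_candidates": 50,
                    }
                },
            ]
        }
    }
}

resp = requests.post("http://localhost:9200/my-index/_search", json=body)
print(resp.json()["hits"]["hits"])
```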
---------
Co-authored-by: Mayya Sharipova <mayya.sharipova@elastic.co>
This PR extends the repository integrity health indicator to also cover unknown and invalid repositories. Because these errors are local to a node, we extend the `LocalHealthMonitor` to monitor the repositories and report changes in their health with respect to the unknown or invalid status.
To simplify this kind of extension in the future, we introduce the `HealthTracker` abstract class, which can be used to create new local health checks.
Furthermore, we change the severity of the health status when the repository integrity indicator reports unhealthy from `RED` to `YELLOW`, because even though this is a serious issue, there is no user impact yet.
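As a rough sketch, the indicator can be read back from the health report API like this (assuming a local, security-disabled cluster):

```python
# Sketch: read the repository_integrity indicator from the health report API.
import requests

resp = requests.get("http://localhost:9200/_health_report/repository_integrity")
indicator = resp.json()["indicators"]["repository_integrity"]

# With unknown or invalid repositories present, the expected status is
# YELLOW rather than RED, per the severity change described above.
print(indicator["status"], indicator.get("symptom"))
```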
* [DOCS] TEST restore quickstart
* Use up to date Docker instructions, minor user-friendly modifications
* Use books dataset, update verbiage, add examples
* Update verbiage
* Updated Elasticsearch 'Getting Started' docs: added SSL, Docker setup, Python resources, and expanded next steps
* minor formatting
* Collapse responses, TODO comment tests
* Add request tests
* Edit superfluities
* Apply suggestions
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
* Update docs/reference/tab-widgets/quick-start-install.asciidoc
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
---------
Co-authored-by: István Zoltán Szabó <istvan.szabo@elastic.co>
* Page structure
* More getting started content
* Fix build errors
* Small improvements
* Typo
* Add link to public demo environment
* Review feedback
* Update docs/reference/esql/esql-get-started.asciidoc
Co-authored-by: Andrei Stefan <astefan@users.noreply.github.com>
* Review feedback
---------
Co-authored-by: Andrei Stefan <astefan@users.noreply.github.com>
- Removes duplicated security autoconfiguration output from the docs. This is difficult to keep updated and makes the docs longer.
- Encourages the user to store the `elastic` password as an environment variable, so users don't need to rely on curl's password prompts (see the sketch after this list).
- Removes unused `api-call-widget` files. These aren't published anywhere in the docs currently.
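A minimal sketch of the environment-variable approach, shown here with Python's `requests` rather than curl; the CA certificate path is a placeholder:

```python
# Sketch: authenticate as the elastic user without interactive password prompts,
# reading the password from an environment variable (set e.g. via
# `export ELASTIC_PASSWORD=...`). Assumes security is enabled and the
# autogenerated HTTP CA certificate has been copied locally.
import os
import requests

password = os.environ["ELASTIC_PASSWORD"]
resp = requests.get(
    "https://localhost:9200",
    auth=("elastic", password),
    verify="./http_ca.crt",  # placeholder path to the CA certificate
)
print(resp.json())
```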
Report node "roles" in the /_cluster/allocation/explain response.
Nodes with limited sets of roles may affect shard distribution in ways
users did not originally consider, so it is helpful to surface this
information along with node allocation decision explanations.
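To illustrate, a small sketch of requesting an allocation explanation and reading the newly reported roles per node; the index and shard values are placeholders:

```python
# Sketch: ask for an allocation explanation and surface the node roles that
# are now included alongside each node's allocation decision.
import requests

body = {"index": "my-index", "shard": 0, "primary": True}
resp = requests.post("http://localhost:9200/_cluster/allocation/explain", json=body)
explanation = resp.json()

for decision in explanation.get("node_allocation_decisions", []):
    print(decision["node_name"], decision.get("roles"), decision["node_decision"])
```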
The `shards_availability` indicator diagnoses the condition where
indices need to be restored from snapshot.
Starting with 8.0, using feature_states when restoring from a snapshot is
mandatory.
This adds support for the `FEATURE_STATE` affected resource to aid with
building up the snapshot restore API call (which will need to include
all the indices and feature states reported by the restore-from-snapshot
diagnosis).
Note that the health API will not report any indices that are part of a
feature state.
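A rough sketch of the restore call the diagnosis is meant to support, including both indices and feature states; repository, snapshot, index, and feature state names are placeholders:

```python
# Sketch: a restore request built from a restore-from-snapshot diagnosis,
# listing both the affected indices and the FEATURE_STATE affected resources.
import requests

body = {
    "indices": ["my-index-1", "my-index-2"],
    "feature_states": ["geoip"],  # feature states reported by the diagnosis
    "include_global_state": False,
}
resp = requests.post(
    "http://localhost:9200/_snapshot/my_repository/my_snapshot/_restore",
    json=body,
)
print(resp.json())
```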
This troubleshooting guide is what will be returned from the SLM health indicator
when an SLM policy has suffered too many repeated failures without a successful
execution.
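One way to follow up on such a report is to inspect the policy directly; a sketch, where the policy name is a placeholder and the exact response fields may vary by version:

```python
# Sketch: look at an SLM policy flagged by the SLM health indicator for
# repeated failures without a successful execution.
import requests

resp = requests.get("http://localhost:9200/_slm/policy/nightly-snapshots?human")
policy = resp.json()["nightly-snapshots"]

# last_success / last_failure (and the per-policy stats, if present) help
# confirm how long the policy has gone without a successful run.
print(policy.get("last_success"), policy.get("last_failure"))
print(policy.get("stats"))
```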
This adds troubleshooting documentation for the case when the ShardsAvailabilityHealthIndicatorService
reports that there are not enough nodes in the data tier (user action "increase_node_capacity_for_allocations" or
"increase_tier_capacity_for_allocations"). This covers both cloud and self-managed environments. For
cloud, we first recommend increasing the number of availability zones (because you cannot directly add nodes), and
decreasing index.number_of_replicas if that is not possible. For self-managed environments, we first recommend adding
nodes, and decreasing index.number_of_replicas if that is not possible.
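A small sketch of the replica-reduction step, applied to a placeholder index via the index settings API:

```python
# Sketch: lower index.number_of_replicas when adding capacity is not an option.
import requests

resp = requests.put(
    "http://localhost:9200/my-index/_settings",
    json={"index": {"number_of_replicas": 1}},
)
print(resp.json())
```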