[DOCS] Describe how to use Elastic Agent to monitor Kibana (#152634)

## Summary

Add Elastic Agent as another way to collect monitoring data.

This work is tracked by
https://github.com/elastic/observability-docs/issues/2602.

There will be additional PRs to address changes required in the monitoring
docs for other stack components. TBH, it pains me a bit to see how many
places users need to go to find info about stack monitoring, but fixing
that problem is unfortunately not in scope for these updates. :-/

Please respond to questions addressed to reviewers.

### Checklist

Delete any items that are not applicable to this PR.

- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/packages/kbn-i18n/README.md)

### To Do before merging

- [x] Remove questions to reviewers.

---------

Co-authored-by: Kevin Lacabane <klacabane@gmail.com>
DeDe Morton 2023-03-23 11:00:13 -07:00 committed by GitHub
parent ca8848e00d
commit 9ff847dec7
9 changed files with 115 additions and 25 deletions

View file

@@ -5,11 +5,19 @@
<titleabbrev>Configure monitoring</titleabbrev>
++++
If you enable the {monitor-features} in your cluster, there are two methods to
collect metrics about {kib}:
If you enable the {monitor-features} in your cluster, there are a few methods
available to collect metrics about {kib}:
* <<monitoring-metricbeat,{metricbeat} collection methods>>
* <<monitoring-kibana,Legacy collection methods>>
* <<monitoring-elastic-agent,{agent} collection>>: Uses a single agent to gather
logs and metrics. Can be managed from a central location in {fleet}.
* <<monitoring-metricbeat,{metricbeat} collection>>: Uses a lightweight {beats}
shipper to gather metrics. May be preferred if you have an existing investment
in {beats} or are not yet ready to use {agent}.
* <<monitoring-kibana,Legacy collection>>: Uses internal collectors to gather
metrics. Not recommended. If you have previously configured legacy collection
methods, you should migrate to using {agent} or {metricbeat}.
You can also use {kib} to
<<monitoring-data,visualize monitoring data from across the {stack}>>.

View file

@@ -18,4 +18,4 @@ status of each Logstash node.
. Click the name of a node to view its statistics over time.
For more information, refer to
{logstash-ref}/monitoring-logstash.html[Monitoring Logstash].
{logstash-ref}/configuring-logstash.html[Monitoring Logstash].

View file

@@ -0,0 +1,73 @@
[[monitoring-elastic-agent]]
= Collect {kib} monitoring data with {agent}
++++
<titleabbrev>Collect monitoring data with {agent}</titleabbrev>
++++
preview::[]
In 8.5 and later, you can use {agent} to collect data about {kib} and ship it to
the monitoring cluster, rather than <<monitoring-metricbeat,using {metricbeat}>>
or routing data through the production cluster as described in
<<monitoring-kibana>>.
To learn about monitoring in general, see
{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster].
[discrete]
== Prerequisites
* Set up {es} monitoring and optionally create a monitoring cluster as described
in the {ref}/monitoring-production.html[{es} monitoring documentation].
* Create a user on the production cluster that has the
`remote_monitoring_collector` {ref}/built-in-roles.html[built-in role].
[discrete]
== Add {kib} monitoring data
To collect {kib} monitoring data, add a {kib} integration to an {agent} and
deploy it to the host where {kib} is running.
. Go to the {kib} home page and click **Add integrations**.
+
NOTE: If you're using a monitoring cluster, use the {kib} instance connected to
the monitoring cluster.
. In the query bar, search for and select the **Kibana** integration for
{agent}.
. Read the overview to make sure you understand integration requirements and
other considerations.
. Click **Add Kibana**.
+
TIP: If you're installing an integration for the first time, you may be prompted
to install {agent}. Click **Add integration only (skip agent installation)**.
. Configure the integration name and optionally add a description. Make sure you
configure all required settings:
* Under **Collect Kibana logs**, modify the log paths to match your {kib}
environment.
* Under **Collect Kibana metrics**, make sure the hosts setting points to your
Kibana host URLs. By default, the integration collects {kib} monitoring metrics
from `localhost:5601`. If that host and port number are not correct, update the
`hosts` setting. If you configured {kib} to use encrypted communications, you
must access it via HTTPS. For example, use a `hosts` setting like
`https://localhost:5601`.
* If the Elastic {security-features} are enabled, expand **Advanced options**
under the Hosts setting and enter the username and password of a user that has
the `remote_monitoring_collector` role. Illustrative values for these settings
are sketched after this procedure.
. Choose where to add the integration policy. Click **New hosts** to add it to
a new agent policy or **Existing hosts** to add it to an existing agent policy.
. Click **Save and continue**. This step takes a minute or two to complete. When
it's done, you'll have an agent policy that contains an integration for
collecting monitoring data from {kib}.
. If an {agent} is already assigned to the policy and deployed to the host where
{kib} is running, you're done. Otherwise, you need to deploy an {agent}. To
deploy an {agent}:
.. Go to **{fleet} -> Agents**, then click **Add agent**.
.. Follow the steps in the **Add agent** flyout to download, install,
and enroll the {agent}. Make sure you choose the agent policy you created
earlier.
. Wait a minute or two until incoming data is confirmed.
. <<monitoring-data,View the monitoring data in {kib}>>.
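The metrics settings described in the steps above boil down to a host URL plus
optional credentials. The following sketch is illustrative only: the field names
mirror the UI labels, and the user and password are hypothetical placeholders,
not defaults.

[source,yaml]
----------------------------------
# Illustrative values for the "Collect Kibana metrics" settings in the
# Kibana integration policy (entered in the UI, not a file you edit directly).
hosts: ["https://localhost:5601"]   # use https:// when Kibana has TLS enabled
username: "remote_monitoring_user"  # hypothetical user with the remote_monitoring_collector role
password: "changeme"                # placeholder; supply your own credentials
----------------------------------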

View file

@@ -10,14 +10,15 @@ optionally collect metrics about {kib}.
[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.
{agent} and {metricbeat} are the recommended methods for collecting and shipping
monitoring data to a monitoring cluster.
If you have previously configured legacy collection methods, you should migrate
to using {metricbeat} collection methods. Use either {metricbeat} collection or
legacy collection methods; do not use both.
to using {agent} or {metricbeat} collection. Do not use legacy collection
alongside other collection methods.
For the recommended method, refer to <<monitoring-metricbeat>>.
For more information, refer to <<monitoring-elastic-agent>> and
<<monitoring-metricbeat>>.
=========================
The following method involves sending the metrics to the production cluster,

View file

@@ -15,12 +15,9 @@ image::user/monitoring/images/metricbeat.png[Example monitoring architecture]
To learn about monitoring in general, see
{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster].
//NOTE: The tagged regions are re-used in the Stack Overview.
. Disable the default collection of {kib} monitoring metrics. +
+
--
// tag::disable-kibana-collection[]
Add the following setting in the {kib} configuration file (`kibana.yml`):
[source,yaml]
@@ -29,7 +26,6 @@ monitoring.kibana.collection.enabled: false
----------------------------------
Leave the `monitoring.enabled` set to its default value (`true`).
// end::disable-kibana-collection[]
For more information, see
<<monitoring-settings-kb,Monitoring settings in {kib}>>.
--
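Taken together, this step amounts to a single addition to `kibana.yml`:

[source,yaml]
----------------------------------
# kibana.yml: disable the default (legacy) collection of Kibana monitoring
# metrics so that Metricbeat (or Elastic Agent) is the only collector.
monitoring.kibana.collection.enabled: false
# Leave monitoring.enabled at its default value (true).
----------------------------------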

View file

@@ -56,9 +56,11 @@ from a different monitoring cluster, set `monitoring.ui.elasticsearch.hosts`.
See <<monitoring-settings-kb>>.
. Confirm that there is monitoring data available at that URL. It is stored in
indices such as `.monitoring-kibana-*` and `.monitoring-es-*`. At a minimum, you
must have monitoring data for the {es} production cluster. Once that data exists,
{kib} can display monitoring data for other products in the cluster.
indices such as `.monitoring-kibana-*` and `.monitoring-es-*` or
`metrics-kibana.stack_monitoring.*`, depending on which method is
used to collect monitoring data. At a minimum, you must have monitoring data
for the {es} production cluster. Once that data exists, {kib} can display
monitoring data for other products in the cluster.
. Set the time filter to “Last 1 hour”. When monitoring data appears in your
cluster, the page automatically refreshes with the monitoring summary.
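If {kib} retrieves monitoring data from a separate monitoring cluster, the
`monitoring.ui.elasticsearch.hosts` setting mentioned above goes in `kibana.yml`.
A minimal sketch with a hypothetical host URL:

[source,yaml]
----------------------------------
# kibana.yml on the Kibana instance used to view monitoring data.
# The URL is a hypothetical example; point it at your monitoring cluster.
monitoring.ui.elasticsearch.hosts: ["https://monitoring-cluster.example.com:9200"]
----------------------------------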

View file

@@ -48,6 +48,11 @@ must provide a user ID and password so {kib} can retrieve the data.
.. Create a user that has the `monitoring_user`
{ref}/built-in-roles.html[built-in role] on the monitoring cluster.
+
NOTE: Make sure the `monitoring_user` role has read privileges on `metrics-*`
indices. If it doesn't, create a new role with `read` and `read_cross_cluster`
index privileges on `metrics-*`, then assign the new role (along with
`monitoring_user`) to your user.
.. Add the `monitoring.ui.elasticsearch.username` and
`monitoring.ui.elasticsearch.password` settings in the `kibana.yml` file.
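One way to satisfy the note above is a file-based role on the monitoring cluster.
This is a minimal sketch with a hypothetical role name; you can just as well
create the role through the Kibana Roles UI or the role management API.

[source,yaml]
----------------------------------
# roles.yml on the monitoring cluster: grants read access to the data streams
# written by Elastic Agent. The role name is hypothetical.
metrics_monitoring_reader:
  indices:
    - names: [ "metrics-*" ]
      privileges: [ "read", "read_cross_cluster" ]
----------------------------------

Assign this role, together with `monitoring_user`, to the user whose credentials
you configure in `monitoring.ui.elasticsearch.username` and
`monitoring.ui.elasticsearch.password`.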
@@ -70,7 +75,8 @@ remote monitoring cluster, you must use credentials that are valid on both the
--
.. Create users that have the `monitoring_user` and `kibana_admin`
{ref}/built-in-roles.html[built-in roles].
{ref}/built-in-roles.html[built-in roles]. If you created a new role with
read privileges on `metrics-*` indices, also assign that role to the users.
. Open {kib} in your web browser.
+

View file

@@ -8,16 +8,19 @@
The {kib} {monitor-features} serve two separate purposes:
. To visualize monitoring data from across the {stack}. You can view health and
performance data for {es}, {ls}, and Beats in real time, as well as analyze past
performance.
performance data for {es}, {ls}, {ents}, APM, and Beats in real time,
as well as analyze past performance.
. To monitor {kib} itself and route that data to the monitoring cluster.
If you enable monitoring across the {stack}, each {es} node, {ls} node, {kib}
instance, and Beat is considered unique based on its persistent
UUID, which is written to the <<settings,`path.data`>> directory when the node
or instance starts.
If you enable monitoring across the {stack}, each monitored component is
considered unique based on its persistent UUID, which is written to the
<<settings,`path.data`>> directory when the node or instance starts.
For more information, see <<configuring-monitoring>> and
{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster].
{ref}/monitor-elasticsearch-cluster.html[Monitor a cluster].
Want to monitor your fleet of {agent}s, too? Use {fleet} instead of the Stack
Monitoring UI. To learn more, refer to
{fleet-guide}/monitor-elastic-agent.html[Monitor {agent}s].
--

View file

@@ -67,6 +67,7 @@ include::{kib-repo-dir}/setup/configuring-reporting.asciidoc[]
include::{kib-repo-dir}/setup/configuring-logging.asciidoc[]
include::monitoring/configuring-monitoring.asciidoc[leveloffset=+1]
include::monitoring/monitoring-elastic-agent.asciidoc[leveloffset=+2]
include::monitoring/monitoring-metricbeat.asciidoc[leveloffset=+2]
include::monitoring/viewing-metrics.asciidoc[leveloffset=+2]
include::monitoring/monitoring-kibana.asciidoc[leveloffset=+2]