[Stack Monitoring] Clarify "From Source" docs (#137566)

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Milton Hultgren 2022-08-09 09:59:57 +01:00 committed by GitHub
parent 7cc9e96cb9
commit b864928c52


@@ -20,7 +20,7 @@ For metricbeat collection, omit the monitoring settings.
Optionally set `--max-workers=1` for less terminal noise once the initial build is complete.
The passwords won't be the usual "changeme" so run this to set them for use with typical Kibana dev settings:
```shell
curl -k -u elastic-admin:elastic-password -H 'Content-Type: application/json' \
@@ -31,7 +31,10 @@ curl -k -u elastic:changeme -H 'Content-Type: application/json' \
### Multi-cluster tests (for CCR/CCS or listing)
To set up multiple clusters, we'll start by running a single-node cluster first; this generates some config files that we can edit and copy when adding
more nodes or clusters.
For multi-cluster tests it's best to create a package first:
```shell
./gradlew localDistro
@@ -43,12 +46,18 @@ Then move into the distro path:
cd "$(ls -1dt build/distribution/local/elasticsearch-* | head -n1)"
```
Then start the server, with or without internal collection enabled:
```shell
./bin/elasticsearch -E cluster.name=main -E xpack.license.self_generated.type=trial -E xpack.monitoring.collection.enabled=true -E xpack.monitoring.exporters.id0.type=local
```
Or:
```shell
./bin/elasticsearch -E xpack.license.self_generated.type=trial
```
Once it shows the generated password, stop the server (Ctrl+C) and disable SSL by changing this entry in `config/elasticsearch.yml`:
```yaml
@@ -65,7 +74,7 @@ curl -u elastic:changeme -H 'Content-Type: application/json' \
http://localhost:9200/_security/user/kibana_system/_password -d'{"password": "changeme"}'
```
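As a quick sanity check (a sketch assuming the defaults used above: cluster name `main`, the `elastic` password reset to `changeme`, and SSL disabled), the main cluster should now answer over plain HTTP:

```shell
# Should return cluster metadata as JSON, including "cluster_name": "main"
curl -u elastic:changeme http://localhost:9200
```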
To start the second server (in another terminal from the same directory), run the commands below:
```shell
export ES_PATH_CONF=config-secondary
@@ -79,7 +88,7 @@ To report internal collection to the main server, you also need to add the passw
echo changeme | ./bin/elasticsearch-keystore add xpack.monitoring.exporters.id0.auth.secure_password
```
And finally start the server, with or without internal collection enabled (make sure `ES_PATH_CONF` is still set to `config-secondary`):
```shell
./bin/elasticsearch -E cluster.name=secondary -E http.port=9210 -E transport.port=9310 -E path.data=data-secondary -E xpack.license.self_generated.type=trial \
@@ -89,6 +98,11 @@ And finally start the server
-E xpack.monitoring.exporters.id0.ssl.verification_mode=none
```
Or:
```shell
./bin/elasticsearch -E cluster.name=secondary -E http.port=9201 -E transport.port=9301 -E path.data=data2 -E xpack.license.self_generated.type=trial
```
You'll likely want to reset the passwords for the secondary cluster as well:
```shell
@@ -106,7 +120,7 @@ For metricbeat collection, omit the monitoring settings, provide both cluster ho
#### CCR configuration
Once you have two clusters going you can use something like this to configure the remote (or use Kibana).
```shell
curl -u elastic:changeme -H 'Content-Type: application/json' \
@@ -120,9 +134,9 @@ Create an index on the secondary cluster:
curl -XPOST -H'Content-Type: application/json' -d'{"some": "stuff"}' -u elastic:changeme http://localhost:9210/stuff/_doc
```
Then use the "Cross-Cluster Replication" Kibana UI to set up a follower index (`stuff-replica`) in the main cluster.
Note that the replica may show as "paused" for the first few seconds of replication. Wait and refresh the page.
You can `POST` some additional documents to the secondary cluster to ensure you have something in the "Ops synced" metrics on Stack Monitoring.
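For example, by repeating the indexing command from above a few times:

```shell
# Index another document into the followed index on the secondary cluster
curl -XPOST -H'Content-Type: application/json' -d'{"some": "more stuff"}' \
  -u elastic:changeme http://localhost:9210/stuff/_doc
```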
@@ -130,9 +144,13 @@ The [CCR Tutorial](https://www.elastic.co/guide/en/elasticsearch/reference/curre
### Machine Learning configuration
Note: You might want to skip to the Beats section first to gather data to run the ML job on.
If you used one of the above methods to launch Elasticsearch, it should already be capable of running ML jobs. For cloud configurations, make sure your deployment includes at least one ML node (or has auto-scaled one) before you attempt to monitor ML jobs.
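One way to confirm this (a sketch using the standard `_cat/nodes` API; adjust the host and credentials to your setup) is to list each node's roles:

```shell
# Nodes whose node.roles include "ml" can run machine learning jobs
curl -u elastic:changeme "http://localhost:9200/_cat/nodes?v&h=name,node.roles"
```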
You can create a job using the machine learning UI in Kibana. Select (or create) a data view that's getting some data ingested. Create a "Single metric" job that counts the documents being ingested. You can push the "Use full data" button as well, since you probably have a small test data set.
Note: There seems to be a router bug that throws you back to the overview page when clicking "Use full data"; just try again.
Once the job is created, push "Start job running in real time". This will help exercise the active-job state in the Stack Monitoring UI.
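To double-check the job state outside the UI, the anomaly detection job stats API works too (assuming the default host and credentials from above):

```shell
# An actively running job reports "state": "opened" in its stats
curl -u elastic:changeme "http://localhost:9200/_ml/anomaly_detectors/_all/_stats?pretty"
```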
@@ -252,7 +270,7 @@ cp -r config/* "${ES_PATH_CONF}"
-Ecluster.initial_master_nodes=127.0.0.1:9310,127.0.0.1:9311,127.0.0.1:9312
```
Note that all 6 nodes will need to be in the metricbeat config if you want to run the Stack Monitoring UI as well. Here's an example `metricbeat.multinode.yaml` you can use as a starting point:
```yaml
http.enabled: true
@@ -300,7 +318,7 @@ output.elasticsearch:
See the [local setup](local_setup.md) guide for running from source.
If you need to run Kibana from a release snapshot on macOS, note that you'll likely need to run `xattr -r -d com.apple.quarantine node/bin/node` to be able to run the packaged node runtime.
## Beats
@@ -563,7 +581,7 @@ So far it seems the easiest way to run enterprise search is via the docker conta
These instructions enable monitoring using a version of metricbeat that is packaged along with enterprise search.
First add `enterpriseSearch.host: 'http://localhost:3002'` to your Kibana config to enable the enterprise search UI.
Then run the container. Note that this includes a `kibana.host` setting which may vary depending on your base path: