[Fleet] Update README and local setup dev docs (#184629)

## Summary

Closes https://github.com/elastic/ingest-dev/issues/3354

This is a docs only change.

---------

Co-authored-by: kibanamachine <42973632+kibanamachine@users.noreply.github.com>
Jill Guyonnet committed on 2024-06-04 10:13:52 +01:00 (committed by GitHub)
commit 3e5852cb89, parent e3ca2c59b0
18 changed files with 523 additions and 250 deletions

@@ -587,7 +587,7 @@ activities.
|{kib-repo}blob/{branch}/x-pack/plugins/fleet/README.md[fleet]
|Fleet needs to have Elasticsearch API keys enabled.
|Fleet provides a web-based UI in Kibana for centrally managing Elastic Agents and their policies.
|{kib-repo}blob/{branch}/x-pack/plugins/global_search/README.md[globalSearch]

@@ -1,162 +1,126 @@
# Fleet
## Plugin
Fleet provides a web-based UI in Kibana for centrally managing Elastic Agents and their policies.
- The plugin is enabled by default. See the TypeScript type for the [the available plugin configuration options](https://github.com/elastic/kibana/blob/main/x-pack/plugins/fleet/common/types/index.ts#L9-L27)
- Adding `xpack.fleet.enabled=false` will disable the plugin including the EPM and Fleet features. It will also remove the `PACKAGE_POLICY_API_ROUTES` and `AGENT_POLICY_API_ROUTES` values in [`common/constants/routes.ts`](./common/constants/routes.ts)
- Adding `--xpack.fleet.agents.enabled=false` will disable the Fleet API & UI
- [code for adding the routes](https://github.com/elastic/kibana/blob/1f27d349533b1c2865c10c45b2cf705d7416fb36/x-pack/plugins/ingest_manager/server/plugin.ts#L115-L133)
- [Integration tests](server/integration_tests/router.test.ts)
- Both EPM and Fleet require `ingestManager` be enabled. They are not standalone features.
- For Enterprise license, a custom package registry URL can be used by setting `xpack.fleet.registryUrl=http://localhost:8080`
- This property is currently only for internal Elastic development and is unsupported
Official documentation: https://www.elastic.co/guide/en/fleet/current/index.html.
## Fleet Requirements
## Plugin overview
Fleet needs to have Elasticsearch API keys enabled.
The Fleet plugin is enabled by default. The Fleet API and UI can be disabled by setting the `xpack.fleet.agents.enabled` Kibana setting to `false`.
Also, you need to configure the hosts your agent will use to communicate with Elasticsearch and Kibana (not needed if you use Elastic Cloud). You can use the following flags:
Available Fleet settings are listed in the [official documentation](https://www.elastic.co/guide/en/kibana/current/fleet-settings-kb.html). For an exhaustive list including internal settings, refer to the [FleetConfigType](https://github.com/elastic/kibana/blob/main/x-pack/plugins/fleet/common/types/index.ts) type definition.
```
--xpack.fleet.agents.elasticsearch.host=http://localhost:9200
--xpack.fleet.agents.kibana.host=http://localhost:5601
```
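Settings can also be passed as command line flags when starting Kibana from source. For example, a quick way to try out the `xpack.fleet.agents.enabled` setting mentioned above (a sketch for local experiments, not a recommended workflow):
```sh
yarn start --xpack.fleet.agents.enabled=false
```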
This plugin follows the `common`, `server`, `public` structure described in the [Kibana Developer Guide](https://docs.elastic.dev/kibana-dev-docs/key-concepts/platform-intro). Refer to [The anatomy of a plugin](https://docs.elastic.dev/kibana-dev-docs/key-concepts/anatomy-of-a-plugin) in the guide for further details.
Note: this plugin was previously named Ingest Manager; there are still a few references to that old name in the code.
## Fleet setup
Refer to [the documentation](https://www.elastic.co/guide/en/fleet/current/fleet-deployment-models.html) for details on how to configure Fleet depending on the deployment model (self-managed, Elasticsearch Service or Elastic Cloud serverless).
Running a [self-managed stack](https://www.elastic.co/guide/en/fleet/current/add-fleet-server-on-prem.html) (see below for local development setup), in particular, requires setting up a Fleet Server and configuring [Fleet settings](https://www.elastic.co/guide/en/kibana/8.13/fleet-settings-kb.html).
## Development
### Getting started
See the [Contributing to Kibana documentation](https://github.com/elastic/kibana/blob/main/CONTRIBUTING.md) or head straight to the [Kibana Developer Guide](https://docs.elastic.dev/kibana-dev-docs/getting-started/welcome) for setting up your dev environment, running Elasticsearch and starting Kibana.
Refer to the [Contributing to Kibana](https://github.com/elastic/kibana/blob/main/CONTRIBUTING.md) documentation for getting started with developing for Kibana. As detailed under the Contributing section of the documentation, we follow the pattern of developing feature branches under your personal fork of Kibana.
This plugin follows the `common`, `server`, `public` structure described in the [Kibana Developer Guide](https://docs.elastic.dev/kibana-dev-docs/key-concepts/platform-intro). Refer to [The anatomy of a plugin](https://docs.elastic.dev/kibana-dev-docs/key-concepts/anatomy-of-a-plugin) in the guide for further details.
Fleet development usually requires running Kibana from source alongside a snapshot of Elasticsearch, as detailed in the [Contributing to Kibana](https://github.com/elastic/kibana/blob/main/CONTRIBUTING.md) documentation. The next section provides an overview of this process.
We follow the pattern of developing feature branches under your personal fork of Kibana. Refer to [Set up a Development Environment](https://docs.elastic.dev/kibana-dev-docs/getting-started/setup-dev-env) in the guide for further details. Other best practices including developer principles, standards and style guide can be found under the Contributing section of the guide.
In addition, you will typically need to set up a Fleet Server and enroll Elastic Agents in Fleet. Refer to one of the following guides depending on your requirements:
- [Running a local Fleet Server and enrolling Elastic Agents](dev_docs/local_setup/enrolling_agents.md) for developing Kibana in stateful (not serverless) mode
- [Developing Kibana in serverless mode](dev_docs/local_setup/developing_kibana_in_serverless.md) for developing Kibana in serverless mode
- [Developing Kibana and Fleet Server simultaneously](dev_docs/local_setup/developing_kibana_and_fleet_server.md) for doing simultaneous Kibana and Fleet Server development
Note: the plugin was previously named Ingest Manager; some variables may still use that old name.
### Running Fleet locally in stateful mode
#### Dev environment setup
Prerequisites:
- Fork the Kibana repository and clone it locally
- Install the `node` and `yarn` versions required by `.nvmrc`
These are some additional recommendations to the steps detailed in the [Kibana Developer Guide](https://docs.elastic.dev/kibana-dev-docs/getting-started/setup-dev-env).
Once that is set up, the high level steps are:
- Run Elasticsearch from snapshot
- Configure Kibana settings
- Run Kibana from source
- Enroll a Fleet Server
- Enroll Elastic Agents
Note: this section details how to run Kibana in stateful mode. For serverless development, see the [Developing Kibana in serverless mode](dev_docs/local_setup/developing_kibana_in_serverless.md) guide.
#### Running Elasticsearch from snapshot
1. Create a `config/kibana.dev.yml` file by copying the existing `config/kibana.yml` file.
2. It is recommended to explicitly set a base path for Kibana (refer to [Considerations for basepath](https://www.elastic.co/guide/en/kibana/current/development-basepath.html) for details). To do this, add the following to your `kibana.dev.yml`:
As detailed in [Running Elasticsearch during development](https://www.elastic.co/guide/en/kibana/current/running-elasticsearch.html), there are different ways to run Elasticsearch when developing Kibana, with snapshot being the most common.
To do this, run the following from the Kibana root folder:
```sh
yarn es snapshot --license trial
```
The `--license trial` flag provides the equivalent of a Platinum license (defaults to Basic).
In addition, it can be useful to set a folder for preserving data between runs (by default, data is stored inside the snapshot and lost on exit) with the `-E path.data=<pathToSavedData>` setting. Common path choices are:
- `../data` (or any other name, e.g. `../mycluster`), which saves the data in the `.es` folder (in the Kibana root folder)
- `/tmp/es-data`
Note: the API key service and token service required by Fleet (cf. [Security settings in Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html)), controlled by `-E xpack.security.authc.api_key.enabled` and `-E xpack.security.authc.token.enabled`, are enabled by default.
Finally, setting up a Fleet Server requires setting the HTTP host to the Fleet Server default host with `-E http.host=0.0.0.0`.
The complete command usually looks like:
```sh
yarn es snapshot --license trial -E path.data=../data -E http.host=0.0.0.0
```
#### Configure Kibana settings
Create a `config/kibana.dev.yml` file if you don't have one by copying the existing `config/kibana.yml` file.
To get started, it is recommended to set the following settings:
1\. The URL at which Kibana is available for end users: unless explicitly specified, this path is randomized in dev mode (refer to [Considerations for basepath](https://www.elastic.co/guide/en/kibana/current/development-basepath.html) for details). To set it, add the following to your `kibana.dev.yml`:
```yml
server.basePath: /<yourPath>
server.basePath: /yourPath
```
where `yourPath` is a path of your choice (e.g. your name; must not end with a slash).
where `yourPath` is a path of your choice (e.g. your name).
3. Bootstrap Kibana:
```bash
yarn kbn bootstrap
```
#### Running Elasticsearch and Kibana
- Start Elasticsearch in one shell (NB: you might want to add other flags to enable data persistence and/or run Fleet Server locally, see below):
```
yarn es snapshot -E xpack.security.authc.api_key.enabled=true -E xpack.security.authc.token.enabled=true
```
- Start Kibana in another shell:
```
yarn start
```
If you don't have a base path set up, add `--no-base-path` to `yarn start`.
#### Useful tips
To avoid the enforcing of version headers when running in dev mode, add the following to your `kibana.dev.yml`:
```
2\. The API version resolution: in dev mode, a version is required for all API requests. In other environments (e.g. production), the version falls back to `oldest` in stateful mode and `newest` in serverless mode for public APIs, while internal APIs always require a version. Set the API version resolution with:
```yml
server.versioned.versionResolution: oldest
```
This will provide a default version for the public APIs.
If Kibana fails to start, it is possible that your local setup got corrupted. An easy fix is to run:
```
yarn kbn clean && yarn kbn bootstrap
```
To avoid losing all your data when you restart Elasticsearch, you can provide a path to store the data when running the `yarn es snapshot` command, e.g.:
```
-E path.data=/tmp/es-data
```
Refer to the [Running Elasticsearch during development](https://www.elastic.co/guide/en/kibana/current/running-elasticsearch.html) page of the guide for other options.
### Running Fleet Server Locally in a Container
It can be useful to run Fleet Server in a container on your local machine in order to free up your actual "bare metal" machine to run Elastic Agent for testing purposes. Otherwise, you'll only be able to run a single instance of Elastic Agent dedicated to Fleet Server on your local machine, and this can make testing integrations and policies difficult.
Note: if you need to do simultaneous Kibana and Fleet Server development, refer to the [Developing Kibana and Fleet Server simultaneously](dev_docs/developing_kibana_and_fleet_server.md) guide.
_The following is adapted from the Fleet Server [README](https://github.com/elastic/fleet-server#running-elastic-agent-with-fleet-server-in-container)_
1. Add the following configuration to your `kibana.dev.yml`
3\. Fleet logging:
```yml
server.host: 0.0.0.0
xpack.fleet.agents.enabled: true
xpack.fleet.packages:
- name: fleet_server
version: latest
xpack.fleet.agentPolicies:
- name: Fleet Server policy
id: fleet-server-policy
description: Fleet server policy
namespace: default
package_policies:
- name: Fleet Server
package:
name: fleet_server
logging:
loggers:
- name: plugins.fleet
appenders: [console]
level: debug
```
2. Append the following option to the command you use to start Elasticsearch
You can find these settings along with others required to run a Fleet Server and enroll Elastic Agents in the [sample kibana.dev.yml file](dev_docs/local_setup/sample_kibana_dev_yml.md).
```
-E http.host=0.0.0.0
#### Run Kibana from source
From the Kibana root folder, bootstrap (install dependencies) and run Kibana with:
```sh
yarn kbn bootstrap && yarn start
```
This command should look something like this:
Once the line "Kibana is now available" is logged, you can access Kibana in the browser at localhost:5601/your-base-path and log in with the default `elastic` username and the password `changeme`.
```
yarn es snapshot --license trial -E xpack.security.authc.api_key.enabled=true -E xpack.security.authc.token.enabled=true -E path.data=/tmp/es-data -E http.host=0.0.0.0
```
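Once Kibana reports it is available, a quick sanity check from another shell is to query the status API (assuming the default dev credentials and the base path configured above):
```sh
curl -s -u elastic:changeme "http://localhost:5601/your-base-path/api/status"
```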
As a general rule, it is recommended to run `yarn kbn bootstrap` when changing branches. Because merges to `main` are frequent, it is a good idea to run `yarn kbn bootstrap && yarn start` instead of just `yarn start` when frequently pulling the latest `main`.
3. Run the Fleet Server Docker container. Make sure you include a `BASE-PATH` value if your local Kibana instance is using one. `YOUR-IP` should correspond to the IP address used by your Docker network to represent the host. For Windows and Mac machines, this should be `192.168.65.2`. If you're not sure what this IP should be, run the following to look it up:
If Kibana fails to start after switching branches or pulling the latest changes, try clearing caches with `yarn kbn clean` before bootstrapping again.
```
docker run -it --rm alpine nslookup host.docker.internal
```
If you are still encountering errors after `yarn kbn clean`, you can try a more aggressive reset with `yarn kbn reset`.
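Putting the above together, a typical routine for picking up the latest `main` might look like the following sketch (assuming your `upstream` remote points at `elastic/kibana`):
```sh
git checkout main && git pull upstream main  # update your local main
yarn kbn bootstrap                           # reinstall and relink dependencies
yarn start                                   # run Kibana from source
# If Kibana fails to start, clear caches and bootstrap again:
# yarn kbn clean && yarn kbn bootstrap
# As a last resort:
# yarn kbn reset && yarn kbn bootstrap
```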
To run the Fleet Server Docker container:
#### Set up a Fleet Server and enroll Elastic Agents
```
docker run -e KIBANA_HOST=http://{YOUR-IP}:5601/{BASE-PATH} -e KIBANA_USERNAME=elastic -e KIBANA_PASSWORD=changeme -e ELASTICSEARCH_HOST=http://{YOUR-IP}:9200 -e KIBANA_FLEET_SETUP=1 -e FLEET_SERVER_ENABLE=1 -e FLEET_SERVER_POLICY_ID=fleet-server-policy -p 8220:8220 docker.elastic.co/beats/elastic-agent:{VERSION}
```
[Fleet Server](https://github.com/elastic/fleet-server) is the component that manages Elastic Agents within Fleet. It needs to be set up in order to enroll Elastic Agents into Fleet and is itself a special instance of Elastic Agent.
Ensure you provide the `-p 8220:8220` port mapping to map the Fleet Server container's port `8220` to your local machine's port `8220` in order for Fleet to communicate with Fleet Server.
This means that developing with enrolled agents requires at least two Elastic Agent instances: a Fleet Server and data shipping agents. As only one instance is allowed per host, the usual method is to run these instances in virtual machines or Docker containers. The [Running a local Fleet Server and enrolling Elastic Agents](dev_docs/local_setup/enrolling_agents.md) guide details this.
Explore the available versions at https://www.docker.elastic.co/r/beats/elastic-agent. Only released versions are shown by default: tick the `Include snapshots` checkbox to see the latest version, e.g. `8.8.0-SNAPSHOT`.
Once the Fleet Server container is running, you should be able to treat it as if it were a local process running on `https://localhost:8220` when configuring Fleet via the UI. You can then run `elastic-agent` on your local machine directly for testing purposes, or with Docker (recommended); see the next section.
### Running Elastic Agent Locally in a Container (managed mode)
1. Create a new agent policy from the Fleet UI, by going to the Fleet app in Kibana > Agent policies > Add agent policy
2. Click "Add Agent"
3. Scroll down to the bottom of the flyout that opens to view the enrollment command, copy the contents of the `--enrollment-token` option
4. Run this docker command:
```
docker run -e FLEET_ENROLL=true -e FLEET_INSECURE=true -e FLEET_URL=https://192.168.65.2:8220 -e FLEET_ENROLLMENT_TOKEN=<pasted from step 3> --rm docker.elastic.co/beats/elastic-agent:{VERSION}
```
Note: if you need to do simultaneous Kibana and Fleet Server development, refer to the [Developing Kibana and Fleet Server simultaneously](dev_docs/local_setup/developing_kibana_and_fleet_server.md) guide.
### Tests
@@ -164,13 +128,13 @@ Once the Fleet Server container is running, you should be able to treat it as if
Kibana primarily uses Jest for unit testing. Each plugin or package defines a `jest.config.js` that extends a preset provided by the `@kbn/test` package. Unless you intend to run all unit tests within the project, you should provide the Jest configuration for Fleet. The following command runs all Fleet unit tests:
```
```sh
yarn jest --config x-pack/plugins/fleet/jest.config.js
```
You can also run a specific test by passing the filepath as an argument, e.g.:
```
```sh
yarn jest --config x-pack/plugins/fleet/jest.config.js x-pack/plugins/fleet/common/services/validate_package_policy.test.ts
```
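When iterating on a single file, any standard Jest CLI flag can be appended, e.g. watch mode:
```sh
yarn jest --config x-pack/plugins/fleet/jest.config.js --watch x-pack/plugins/fleet/common/services/validate_package_policy.test.ts
```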
@@ -180,39 +144,38 @@ API integration tests are run using the functional test runner (FTR). When devel
Note: Docker needs to be running to run these tests.
1. In one terminal, run the server from the Kibana root directory with
1\. In one terminal, run the server from the Kibana root folder with
```
```sh
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:server --config x-pack/test/fleet_api_integration/<configFile>
```
where `configFile` is the relevant config file from the following:
- config.agent.ts
- config.agent_policy.ts
- config.epm.ts
- config.fleet.ts
- config.package_policy.ts
1. In a second terminal, run the tests from the Kibana root directory with
2\. In a second terminal, run the tests from the Kibana root folder with
```bash
```sh
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner --config x-pack/test/fleet_api_integration/<configFile>
```
Optionally, you can filter which tests you want to run using `--grep`
```bash
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner --config x-pack/test/fleet_api_integration/<configFile> --grep='fleet'
```sh
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner --config x-pack/test/fleet_api_integration/<configFile> --grep='my filter string'
```
Note: you can also supply which Docker image to use for the Package Registry via the `FLEET_PACKAGE_REGISTRY_DOCKER_IMAGE` env variable. For example,
Note: you can supply which Docker image to use for the Package Registry via the `FLEET_PACKAGE_REGISTRY_DOCKER_IMAGE` env variable. For example,
```bash
```sh
FLEET_PACKAGE_REGISTRY_DOCKER_IMAGE='docker.elastic.co/package-registry/distribution:production' FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner
```
To speed up test you can also use the `FLEET_SKIP_RUNNING_PACKAGE_REGISTRY=true` flag to not re-run the package registry each time. When launching the test for the first time you will get the docker command to run the package registry.
You can also speed up the tests execution with the `FLEET_SKIP_RUNNING_PACKAGE_REGISTRY=true` flag, which avoids rerunning the package registry each time. Running the tests the first time will output the Docker command for running the package registry.
```bash
FLEET_SKIP_RUNNING_PACKAGE_REGISTRY=true FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner
@@ -220,18 +183,16 @@ FLEET_SKIP_RUNNING_PACKAGE_REGISTRY=true FLEET_PACKAGE_REGISTRY_PORT=12345 yarn
#### API integration tests (serverless)
The process for running serverless API integration tests is similar as above. Security and observability project types have Fleet enabled. At the time of writing, the same tests exist for Fleet under these two project types.
The process for running serverless API integration tests is similar to above. Security and observability project types have Fleet enabled. At the time of writing, the same tests exist for Fleet under these two project types.
Security:
```bash
```sh
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:server --config x-pack/test_serverless/api_integration/test_suites/security/fleet/config.ts
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner --config x-pack/test_serverless/api_integration/test_suites/security/fleet/config.ts
```
Observability:
```bash
```sh
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:server --config x-pack/test_serverless/api_integration/test_suites/observability/fleet/config.ts
FLEET_PACKAGE_REGISTRY_PORT=12345 yarn test:ftr:runner --config x-pack/test_serverless/api_integration/test_suites/observability/fleet/config.ts
```
@@ -242,30 +203,30 @@ We support UI end-to-end testing with Cypress. Refer to [cypress/README.md](./cy
#### Jest integration tests
Some features need to test different Kibana configuration, test with multiple Kibana instances, ... For this purpose, Jest integration tests can be used, which allow starting ES and Kibana as required for each test
Some features require testing under specific conditions, such as different Kibana configurations or multiple Kibana instances. Jest integration tests allow starting Elasticsearch and Kibana as required for each test.
To run these tests `docker` needs to be running on your environment.
These tests, however, are slow and difficult to maintain. API integration tests should therefore be preferred whenever possible.
You can run the tests with the following commands:
Note: Docker needs to be running to run these tests.
```bash
Run the tests from the Kibana root folder with:
```sh
node scripts/jest_integration.js x-pack/plugins/fleet/server/integration_tests/<YOUR_TEST_FILE>
```
You could also use node debugger to inspect ES indices (add the `debugger` directive in your test)
Running the tests with [Node Inspector](https://nodejs.org/en/learn/getting-started/debugging) allows inspecting Elasticsearch indices. To do this, add a `debugger;` statement in the test (cf. [Jest documentation](https://jestjs.io/docs/troubleshooting)) and run `node` with `--inspect` or `--inspect-brk`:
```bash
```sh
node --inspect scripts/jest_integration.js x-pack/plugins/fleet/server/integration_tests/<YOUR_TEST_FILE>
```
However, these tests are slow and harder to maintain. Therefore, we should try to avoid them and use API integration tests instead whenever possible.
### Storybook
Fleet contains [Storybook](https://storybook.js.org/) stories for developing UI components in isolation. To start the Storybook environment for Fleet, run the following from your `kibana` project root:
```sh
$ yarn storybook fleet
yarn storybook fleet
```
Write stories by creating `.stories.tsx` files colocated with the components you're working on. Consult the [Storybook docs](https://storybook.js.org/docs/react/get-started/introduction) for more information.

@@ -1,4 +1,4 @@
# Developing Kibana and Fleet Server simulatanously
# Developing Kibana and Fleet Server simultaneously
Many times, a contributor to Fleet will only need to make changes to [Fleet Server](https://github.com/elastic/fleet-server) or [Kibana](https://github.com/elastic/kibana) - not both. But there are times when end-to-end changes across both components are necessary. To facilitate this, we've created a guide to help you get up and running with a local development environment that includes both Kibana and Fleet Server. This is a more involved process than setting up either component on its own.
@@ -296,109 +296,9 @@ docker run --add-host host.docker.internal:host-gateway \
docker.elastic.co/beats/elastic-agent:8.13.0-SNAPSHOT # <-- Update this version as needed
```
You can also create a `run-dockerized-agent.sh` file as below to make this process easier. This script will run a Docker container with Elastic Agent and enroll it to your local Fleet Server. You can also use it to run a Dockerized Fleet Server container if you don't need to develop Fleet Server locally.
You can also use the [run_dockerized_agent.sh](./run_dockerized_elastic_agent.sh) script to make this process easier. This script will run a Docker container with Elastic Agent and enroll it to your local Fleet Server. You can also use it to run a Dockerized Fleet Server container if you don't need to develop Fleet Server locally.
```bash
#!/usr/bin/env bash
# Name this file `run-dockerized-agent.sh` and place it somewhere convenient. Make sure to run `chmod +x` on it to make it executable.
# This script is used to run an instance of Elastic Agent in a Docker container.
# Ref.: https://www.elastic.co/guide/en/fleet/current/elastic-agent-container.html
# To run a Fleet server: ./run_dockerized_agent.sh fleet_server
# To run an agent: ./run_dockerized_agent.sh agent -e <enrollment token> -v <version> -t <tags>
# NB: this script assumes a Fleet server policy with id "fleet-server-policy" is already created.
CMD=$1
while [ $# -gt 0 ]; do
case $1 in
-e | --enrollment-token) ENROLLMENT_TOKEN=$2 ;;
-v | --version) ELASTIC_AGENT_VERSION=$2 ;;
-t | --tags) TAGS=$2 ;;
esac
shift
done
DEFAULT_ELASTIC_AGENT_VERSION=8.13.0-SNAPSHOT # update as needed
# Needed for Fleet Server
ELASTICSEARCH_HOST=http://host.docker.internal:9200 # should match Fleet settings or xpack.fleet.agents.elasticsearch.hosts in kibana.dev.yml
KIBANA_HOST=http://host.docker.internal:5601
KIBANA_BASE_PATH=kyle # should match server.basePath in kibana.dev.yml
FLEET_SERVER_POLICY_ID=fleet-server-policy # as defined in kibana.dev.yml
# Needed for agent
FLEET_SERVER_URL=https://host.docker.internal:8220
printArgs() {
if [[ $ELASTIC_AGENT_VERSION == "" ]]; then
ELASTIC_AGENT_VERSION=$DEFAULT_ELASTIC_AGENT_VERSION
echo "No Elastic Agent version specified, setting to $ELASTIC_AGENT_VERSION (default)"
else
echo "Received Elastic Agent version $ELASTIC_AGENT_VERSION"
fi
if [[ $ENROLLMENT_TOKEN == "" ]]; then
echo "Warning: no enrollment token provided!"
else
echo "Received enrollment token: ${ENROLLMENT_TOKEN}"
fi
if [[ $TAGS != "" ]]; then
echo "Received tags: ${TAGS}"
fi
}
echo "--- Elastic Agent Container Runner ---"
if [[ $CMD == "fleet_server" ]]; then
echo "Starting Fleet Server container..."
printArgs
docker run \
-e ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST} \
-e KIBANA_HOST=${KIBANA_HOST}/${KIBANA_BASE_PATH} \
-e KIBANA_USERNAME=elastic \
-e KIBANA_PASSWORD=changeme \
-e KIBANA_FLEET_SETUP=1 \
-e FLEET_INSECURE=1 \
-e FLEET_SERVER_ENABLE=1 \
-e FLEET_SERVER_POLICY_ID=${FLEET_SERVER_POLICY_ID} \
-e ELASTIC_AGENT_TAGS=${TAGS} \
-p 8220:8220 \
--rm docker.elastic.co/beats/elastic-agent:${ELASTIC_AGENT_VERSION}
elif [[ $CMD == "agent" ]]; then
echo "Starting Elastic Agent container..."
printArgs
docker run \
-e FLEET_URL=${FLEET_SERVER_URL} \
-e FLEET_ENROLL=1 \
-e FLEET_ENROLLMENT_TOKEN=${ENROLLMENT_TOKEN} \
-e FLEET_INSECURE=1 \
-e ELASTIC_AGENT_TAGS=${TAGS} \
--rm docker.elastic.co/beats/elastic-agent:${ELASTIC_AGENT_VERSION}
elif [[ $CMD == "help" ]]; then
echo "Usage: ./run_elastic_agent.sh <agent/fleet_server> -e <enrollment token> -v <version> -t <tags>"
elif [[ $CMD == "" ]]; then
echo "Command missing. Available commands: agent, fleet_server, help"
else
echo "Invalid command: $CMD"
fi
```
Another option is to use a lightweight virtualization provider like https://multipass.run/ and enrolling agents using an enrollment token generated via Fleet UI. You will need to add a Fleet Server Host entry + Output to your Fleet settings that corresponds with your Multipass bridge network interface, similar to how we've set up Docker above.
_To do: add specific docs for enrolling Multipass agents and link here_
Another option is to use a lightweight virtualization provider like https://multipass.run/ and enroll agents using an enrollment token generated via Fleet UI. You will need to update your Fleet Settings with a Fleet Server Host entry + Output that corresponds with your Multipass bridge network interface, similar to how we've set up Docker above. Refer to [Running a local Fleet Server and enrolling Elastic Agents](./enrolling_agents.md) for details about how to use Multipass.
## Running in serverless mode

@@ -0,0 +1,237 @@
# Running a local Fleet Server and enrolling Elastic Agents
This guide assumes Elasticsearch is running from snapshot and Kibana is running from source as detailed in [the README](../README.md#running-fleet-locally-in-stateful-mode). Note that `-E http.host=0.0.0.0` must be passed to `yarn es snapshot`.
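For reference, the Elasticsearch command from the README then typically looks like this (the trial license and data path are optional):
```sh
yarn es snapshot --license trial -E path.data=../data -E http.host=0.0.0.0
```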
As explained in the [Set up a Fleet Server and enroll Elastic Agents](../README.md#set-up-a-fleet-server-and-enroll-elastic-agents) section, it is useful to run Elastic Agents in virtual machines or Docker containers for testing purposes. This guide provides step-by-step instructions for both methods using HTTP (note: you can also mix both and have e.g. a dockerized Fleet Server and agents on VMs). Refer to [Developing Kibana and Fleet Server simultaneously](./developing_kibana_and_fleet_server.md) for details about using HTTPS instead.
## Kibana config
Add the following to your `kibana.dev.yml`. Note that the only differences between VM and container setups are the URLs of the Fleet Server host and Elasticsearch output. If you want to set up a Fleet Server on a VM, you will first need to launch the VM in order to get the IP address.
```yml
# Set the Kibana server address to Fleet Server default host.
server.host: 0.0.0.0
# Install Fleet Server package.
xpack.fleet.packages:
- name: fleet_server
version: latest
# Create an agent policy for Fleet Server.
xpack.fleet.agentPolicies:
- name: Fleet Server policy
id: fleet-server-policy
is_default_fleet_server: true
# is_managed: true # Useful to mimic cloud environment
description: Fleet server policy
namespace: default
package_policies:
- name: Fleet Server
package:
name: fleet_server
inputs:
- type: fleet-server
keep_enabled: true
vars:
- name: host
value: 0.0.0.0
frozen: true
- name: port
value: 8220
frozen: true
# Set a default Fleet Server host.
xpack.fleet.fleetServerHosts:
- id: default-fleet-server
name: Default Fleet server
is_default: true
# host_urls: [https://<the-IP-address-of-your-VM>:8220] # For running a Fleet Server in a VM
# host_urls: ['https://host.docker.internal:8220'] # For running a Fleet Server Docker container
# Set a default Elasticsearch output.
xpack.fleet.outputs:
- id: es-default-output
name: Default output
type: elasticsearch
is_default: true
is_default_monitoring: true
# hosts: ['http://<your-local-IP>:9200'] # For enrolling agents on VM
# hosts: ['http://host.docker.internal:9200'] # For enrolling dockerized agents
```
## Using Multipass VMs
[Multipass](https://multipass.run/) is a lightweight virtualization tool for running Ubuntu VMs. Follow the instructions at https://multipass.run/install to install Multipass on your local machine.
Advantages of running Elastic Agents on a VM include:
- More realistic setup.
- Ability to use the `elastic-agent` commands, e.g. `sudo elastic-agent status`, `sudo elastic-agent restart`...
- Agents can be upgraded.
- Elastic Defend can be installed.
To run a Fleet Server and agents on VMs, make sure the default output host defined in your `kibana.dev.yml` uses your local IP address (NB: using `localhost` can cause connection issues). For Mac users using a WiFi connection, the local IP address can be retrieved with:
```sh
ipconfig getifaddr en0
```
The default Fleet Server host can be set once the VM for the Fleet Server is running.
In Fleet UI, these host URLs should be reflected in the Settings page:
![Fleet settings UI showing hosts for agents on VM](./screenshots/vm_fleet_settings.png)
### Running a Fleet Server
1\. Launch a Multipass instance for your Fleet Server:
```sh
multipass launch --name fleet-server --disk 10G --network en0
```
Available options are detailed at https://multipass.run/docs/launch-command.
It is generally recommended to provide additional disk space (default 5G) for running Elastic Agents.
In addition, the `--network` option adds a network interface to the instance, in this case `en0`. This allows the Fleet Server instance to communicate with the enrolled agents via the wifi network interface. You can find out the IP address by running:
```sh
multipass list
```
Example output:
```sh
Name State IPv4 Image
fleet-server Running 192.168.1.1 Ubuntu 24.04 LTS
192.168.1.100
```
Copy the second IP address into the host URLs of the Fleet Server host in your `kibana.dev.yml`. Wait for Kibana to restart.
2\. Shell into the instance:
```sh
multipass shell fleet-server
```
3\. Open Kibana in a browser and head over to Fleet. Initially, there should be no Fleet Server:
![Fleet UI with no Fleet Server](./screenshots/no_fleet_server.png)
4\. Click "Add Fleet Server". In the flyout, check that the Fleet Server host is correct and click "Continue".
![Fleet UI showing the Add Fleet Server flyout, step 1](./screenshots/vm_fleet_server_1.png)
![Fleet UI showing the Add Fleet Server flyout, step 2](./screenshots/vm_fleet_server_2.png)
5\. Before copying the install instructions, amend the download URL to suit the desired version and your host architecture:
- Because Multipass only supports the host's architecture, you may need to change `linux-x86_64` to `linux-arm64` (e.g. on M-series Macbooks).
- By default, the proposed version is the latest release. You can explore available versions at https://artifacts-api.elastic.co/v1/versions and then check out `https://artifacts-api.elastic.co/v1/versions/<version>/builds/latest` to find the relevant download URL. An even easier way is to use the API: the following command will output the download URL for the `elastic-agent-8.15.0-SNAPSHOT-linux-arm64.tar.gz` version:
```sh
curl -s https://artifacts-api.elastic.co/v1/versions/8.15-SNAPSHOT/builds/latest | \
jq '.build.projects."elastic-agent-package".packages."elastic-agent-8.15.0-SNAPSHOT-linux-arm64.tar.gz".url'
```
6\. With the modified install URL, copy the install instructions from the flyout and install the agent in the VM (hit Enter to confirm when running `sudo ./elastic-agent install`). After a few seconds the Fleet Server should be connected. The general shape of these commands is sketched after this list.
![Fleet UI showing the Add Fleet Server flyout, step 3](./screenshots/vm_fleet_server_3.png)
7\. You can click "Continue enrolling agents" to proceed to agent enrolling.
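For reference, the commands copied from the flyout in step 6 roughly follow the shape below. This is only a sketch: the exact download URL, version, service token and port come from the flyout, and the Elasticsearch host should match the output defined in your `kibana.dev.yml`.
```sh
# Download and extract the agent (use the URL worked out in step 5)
curl -L -O <download-URL>
tar xzvf elastic-agent-<version>-linux-arm64.tar.gz
cd elastic-agent-<version>-linux-arm64
# Install as a Fleet Server, pointing at your local Elasticsearch
sudo ./elastic-agent install \
  --fleet-server-es=http://<your-local-IP>:9200 \
  --fleet-server-service-token=<service-token-from-flyout> \
  --fleet-server-policy=fleet-server-policy \
  --fleet-server-port=8220
```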
### Enrolling Elastic Agents
1\. In Fleet UI, click "Add agent". Select or create an agent policy which the agent will be assigned to. Leaving "Collect system logs and metrics" checked will install the `system` integration, which is a good way of checking data ingestion. Note that the agent policy will use the default Fleet Server host and default Elasticsearch output defined in your `kibana.dev.yml`.
![Fleet UI showing add agent on VM flow, step 1](./screenshots/vm_add_agent_1.png)
2\. Launch a Multipass instance, e.g.:
```sh
multipass launch --name agent1 --disk 10G
```
3\. Shell into the instance:
```sh
multipass shell agent1
```
4\. Refer to the [Running a Fleet Server](#running-a-fleet-server) section above for modifying the Elastic Agent download URL to suit your architecture and desired version. Install and extract the agent binary as instructed in the flyout.
5\. Run the `sudo ./elastic-agent install` command provided in the instructions, adding the `--insecure` flag as we are connecting to Elasticsearch using HTTP (see the sketch below). After a few seconds the UI should confirm that the agent is enrolled and shipping data:
![Fleet UI showing add agent on VM flow, step 2](./screenshots/vm_add_agent_2.png)
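For reference, the resulting command has roughly this shape (the Fleet Server URL and enrollment token come from the flyout):
```sh
sudo ./elastic-agent install \
  --url=https://<fleet-server-VM-IP>:8220 \
  --enrollment-token=<enrollment-token-from-flyout> \
  --insecure
```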
### Gotchas
1\. The system clock within Multipass instances stops when the host computer is suspended (see https://askubuntu.com/questions/1486977/repeated-incorrect-time-in-multipass-clients-with-ubuntu-22-04). This can result in a running Elastic Agent being incorrectly "in the past" after your laptop was asleep for a while. The easiest fix is to restart all Multipass instances, which will reset their clocks:
```sh
multipass restart --all
```
2\. As noted in the enrollment steps above, the architecture of a Multipass VM matches that of the host. If the agent installation fails, check that the downloaded agent binary targets the correct architecture.
## Using Docker containers
Official documentation: https://www.elastic.co/guide/en/fleet/current/elastic-agent-container.html
The main advantage of running Elastic Agents in a Docker container is a one-command setup that can easily be scripted (see [Using the `run_dockerized_agent.sh` script](#using-the-run_dockerized_agentsh-script) below). There are a few limitations, however, including:
- Agents cannot be upgraded.
- Elastic Defend cannot be installed.
To use dockerized Fleet Server and agents, make sure the default Fleet Server host and default output defined in your `kibana.dev.yml` use `host.docker.internal`.
In Fleet UI, these host URLs should be reflected in the Settings page:
![Fleet settings UI showing hosts for dockerized agents](./screenshots/docker_fleet_settings.png)
### Running a Fleet Server
With Docker running, launch your Fleet Server with:
```sh
docker run \
-e ELASTICSEARCH_HOST=http://host.docker.internal:9200 \
-e KIBANA_HOST=http://host.docker.internal:5601/your-base-path \
-e KIBANA_USERNAME=elastic \
-e KIBANA_PASSWORD=changeme \
-e KIBANA_FLEET_SETUP=1 \
-e FLEET_INSECURE=1 \
-e FLEET_SERVER_ENABLE=1 \
-e FLEET_SERVER_POLICY_ID=fleet-server-policy \
-p 8220:8220 \
--rm docker.elastic.co/beats/elastic-agent:<version>
```
where the version can be e.g. `8.13.3` or `8.15.0-SNAPSHOT`. You can explore the available versions at https://www.docker.elastic.co/r/beats/elastic-agent.
You can also check the list of available environment variables for the `docker run` command in the [elastic-agent source code](https://github.com/elastic/elastic-agent/blob/main/internal/pkg/agent/cmd/container.go#L66-L134).
Note the `-p 8220:8220` option, which maps the Fleet Server container's port `8220` to your local machine's port `8220` so that Fleet can communicate with Fleet Server.
Once the container is running, it can be treated as a local process running on `https://localhost:8220` and your Fleet Server should be enrolled in Fleet:
![Fleet UI with dockerized Fleet Server](./screenshots/docker_fleet_server.png)
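To quickly check that the container is reachable from your machine, you can query the Fleet Server status endpoint (the `-k` flag skips certificate verification, since this setup uses a self-signed certificate); it should return a small JSON payload reporting a healthy state:
```sh
curl -sk https://localhost:8220/api/status
```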
### Enrolling Elastic Agents
1\. In Fleet UI, click "Add agent". In the flyout, select or create an agent policy which the agent will be assigned to. Leaving "Collect system logs and metrics" checked will install the `system` integration, which is a good way of checking data ingestion. Note that the agent policy will use the default Fleet Server host and default Elasticsearch output defined in your `kibana.dev.yml`.
![Fleet UI showing add dockerized agent flow, step 1](./screenshots/docker_add_agent_1.png)
2\. Scroll down to the enrollment CLI steps and copy the enrollment token from the end of the `sudo ./elastic-agent install` command.
3\. Enroll the agent with:
```sh
docker run \
-e FLEET_URL=https://host.docker.internal:8220 \
-e FLEET_ENROLL=1 \
-e FLEET_ENROLLMENT_TOKEN=<enrollment_token> \
-e FLEET_INSECURE=1 \
--rm docker.elastic.co/beats/elastic-agent:<version>
```
After a short moment, the UI should confirm that the agent is enrolled and shipping data:
![Fleet UI showing add dockerized agent flow, step 2](./screenshots/docker_add_agent_2.png)
Tip: if the agent enrolls but there is no incoming data, check the host URL of the default output.
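One way to check whether data is coming in (assuming the default dev credentials) is to list the data streams that the `system` integration should be writing to:
```sh
curl -s -u elastic:changeme "http://localhost:9200/_data_stream/metrics-*,logs-*?pretty"
```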
### Using the `run_dockerized_agent.sh` script
You can make running a Fleet Server or enrolling an agent quicker by using the [run_dockerized_agent.sh](./run_dockerized_elastic_agent.sh) script:
- Copy the script and place it somewhere convenient.
- Run `chmod +x` on it to make it executable.
- Update the version and the Kibana base path within the script.
Run a Fleet Server with:
```sh
./run_dockerized_agent.sh fleet_server
```
And enroll an Elastic Agent with:
```sh
./run_dockerized_agent.sh agent -e <enrollment token> -v <version> -t <tags>
```
where the version and tags are optional.
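For example (the enrollment token is a placeholder copied from the Fleet UI, and the version falls back to the default set in the script if omitted):
```sh
./run_dockerized_agent.sh fleet_server
./run_dockerized_agent.sh agent -e <enrollment-token> -v 8.15.0-SNAPSHOT -t docker,dev
```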

@@ -0,0 +1,88 @@
#!/usr/bin/env bash
# This script is used to run an instance of Elastic Agent in a Docker container.
# Ref.: https://www.elastic.co/guide/en/fleet/current/elastic-agent-container.html
# To run a Fleet server: ./run_elastic_agent.sh fleet_server
# To run an agent: ./run_elastic_agent.sh agent -e <enrollment token> -v <version> -t <tags>
# NB: this script assumes a Fleet server policy with id "fleet-server-policy" is already created.
CMD=$1
while [ $# -gt 0 ]; do
case $1 in
-e | --enrollment-token) ENROLLMENT_TOKEN=$2 ;;
-v | --version) ELASTIC_AGENT_VERSION=$2 ;;
-t | --tags) TAGS=$2 ;;
esac
shift
done
DEFAULT_ELASTIC_AGENT_VERSION=8.15.0-SNAPSHOT # update as needed
ELASTICSEARCH_HOST=http://host.docker.internal:9200
KIBANA_HOST=http://host.docker.internal:5601
KIBANA_BASE_PATH=your-base-path
FLEET_SERVER_URL=https://host.docker.internal:8220
FLEET_SERVER_POLICY_ID=fleet-server-policy
printArgs() {
if [[ $ELASTIC_AGENT_VERSION == "" ]]; then
ELASTIC_AGENT_VERSION=$DEFAULT_ELASTIC_AGENT_VERSION
echo "No Elastic Agent version specified, setting to $ELASTIC_AGENT_VERSION (default)"
else
echo "Received Elastic Agent version $ELASTIC_AGENT_VERSION"
fi
if [[ $ENROLLMENT_TOKEN == "" ]]; then
echo "Warning: no enrollment token provided!"
else
echo "Received enrollment token: ${ENROLLMENT_TOKEN}"
fi
if [[ $TAGS != "" ]]; then
echo "Received tags: ${TAGS}"
fi
}
echo "--- Elastic Agent Container Runner ---"
if [[ $CMD == "fleet_server" ]]; then
echo "Starting Fleet Server container..."
printArgs
docker run \
-e ELASTICSEARCH_HOST=${ELASTICSEARCH_HOST} \
-e KIBANA_HOST=${KIBANA_HOST}/${KIBANA_BASE_PATH} \
-e KIBANA_USERNAME=elastic \
-e KIBANA_PASSWORD=changeme \
-e KIBANA_FLEET_SETUP=1 \
-e FLEET_INSECURE=1 \
-e FLEET_SERVER_ENABLE=1 \
-e FLEET_SERVER_POLICY_ID=${FLEET_SERVER_POLICY_ID} \
-e ELASTIC_AGENT_TAGS=${TAGS} \
-p 8220:8220 \
--rm docker.elastic.co/beats/elastic-agent:${ELASTIC_AGENT_VERSION}
elif [[ $CMD == "agent" ]]; then
echo "Starting Elastic Agent container..."
printArgs
docker run \
-e FLEET_URL=${FLEET_SERVER_URL} \
-e FLEET_ENROLL=1 \
-e FLEET_ENROLLMENT_TOKEN=${ENROLLMENT_TOKEN} \
-e FLEET_INSECURE=1 \
-e ELASTIC_AGENT_TAGS=${TAGS} \
--rm docker.elastic.co/beats/elastic-agent:${ELASTIC_AGENT_VERSION}
elif [[ $CMD == "help" ]]; then
echo "Usage: ./run_elastic_agent.sh <agent/fleet_server> -e <enrollment token> -v <version> -t <tags>"
elif [[ $CMD == "" ]]; then
echo "Command missing. Available commands: agent, fleet_server, help"
else
echo "Invalid command: $CMD"
fi

@@ -0,0 +1,87 @@
# Sample kibana.dev.yml for Fleet development
```yml
# =================== System: Kibana Server ===================
# Specifies a path to mount Kibana at.
server.basePath: /yourname # <--- CHANGE ME
# Specifies the address to which the Kibana server will bind.
server.host: 0.0.0.0 # Fleet Server default host
# Provides an API version. Set to 'oldest' in stateful mode, 'newest' in serverless mode.
server.versioned.versionResolution: oldest
# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances.
elasticsearch.hosts: [http://localhost:9200]
# =================== System: Logging ===================
logging:
loggers:
- name: plugins.fleet
appenders: [console]
level: debug
# Logs queries sent to Elasticsearch.
# - name: elasticsearch.query
# level: debug
# =================== Fleet Settings ===================
# Official Fleet settings documentation: https://www.elastic.co/guide/en/kibana/current/fleet-settings-kb.html
# FleetConfigType definition: https://github.com/elastic/kibana/blob/main/x-pack/plugins/fleet/common/types/index.ts
# PluginConfigDescriptor definition: https://github.com/elastic/kibana/blob/main/x-pack/plugins/fleet/server/config.ts
# xpack.fleet.registryUrl: https://localhost:8080
# xpack.fleet.enableExperimental: []
# Allows enrolling agents when standalone Fleet Server is in use
# xpack.fleet.internal.fleetServerStandalone: true
xpack.fleet.fleetServerHosts:
# ID must be default-fleet-server if running in serverless mode
- id: default-fleet-server
name: Default Fleet server
is_default: true
host_urls: ['https://<FLEET-SERVER-VM-IP>:8220'] # For running a Fleet Server in a VM <--- CHANGE ME
# host_urls: ['https://host.docker.internal:8220'] # For running a Fleet Server Docker container
xpack.fleet.outputs:
# ID must be es-default-output if running in serverless mode
- id: es-default-output
name: Default output
type: elasticsearch
is_default: true
is_default_monitoring: true
hosts: ['http://<YOUR-LOCAL-IP>:9200'] # For enrolling agents on VM <--- CHANGE ME
# hosts: ['http://host.docker.internal:9200'] # For enrolling dockerized agents
xpack.fleet.packages:
- name: fleet_server
version: latest
xpack.fleet.agentPolicies:
- name: Fleet Server policy
id: fleet-server-policy
is_default_fleet_server: true
# is_managed: true # Useful to mimic cloud environment
description: Fleet server policy
namespace: default
package_policies:
- name: Fleet Server
id: fleet_server
package:
name: fleet_server
inputs:
- type: fleet-server
keep_enabled: true
vars:
- name: host
value: 0.0.0.0
frozen: true
- name: port
value: 8220
frozen: true
```
