[[production]]
== Use {kib} in a production environment

* <<configuring-kibana-shield>>
* <<csp-strict-mode>>
* <<enabling-ssl>>
* <<load-balancing-es>>
* <<load-balancing-kibana>>
* <<high-availability>>
* <<memory>>

How you deploy Kibana largely depends on your use case. If you are the only user,
you can run Kibana on your local machine and configure it to point to whatever
Elasticsearch instance you want to interact with. Conversely, if you have a large
number of heavy Kibana users, you might need to load balance across multiple
Kibana instances that are all connected to the same Elasticsearch instance.

While Kibana isn't terribly resource intensive, we still recommend running Kibana
separate from your Elasticsearch data or master nodes. To distribute Kibana
traffic across the nodes in your Elasticsearch cluster, you can run Kibana and an
Elasticsearch coordinating only node on the same machine. For more information, see
<<load-balancing-es, Load balancing across multiple Elasticsearch nodes>>.

[float]
[[configuring-kibana-shield]]
=== Use {stack} {security-features}

You can use {stack} {security-features} to control what {es} data users can
access through Kibana.

When {security-features} are enabled, Kibana users have to log in. They need to
have a role granting <<kibana-privileges, Kibana privileges>> as well as access
to the indices they will be working with in Kibana.

If a user loads a Kibana dashboard that accesses data in an index that they
are not authorized to view, they get an error that indicates the index does
not exist.

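As a sketch, the following request creates a role that grants read-only Kibana access in the default space plus read access to a hypothetical `logstash-*` index pattern, using Kibana's role API. The role name, index pattern, and space are illustrative, and the exact request shape can vary by version:

--------
curl -X PUT "http://localhost:5601/api/security/role/my_kibana_reader" \
  -u elastic -H "kbn-xsrf: true" -H "Content-Type: application/json" -d'
{
  "elasticsearch": {
    "indices": [
      { "names": [ "logstash-*" ], "privileges": [ "read", "view_index_metadata" ] }
    ]
  },
  "kibana": [
    { "base": [ "read" ], "spaces": [ "default" ] }
  ]
}'
--------
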
For more information on granting access to Kibana, see <<xpack-security-authorization>>.

[float]
[[csp-strict-mode]]
=== Require Content Security Policy

Kibana uses a Content Security Policy to help prevent the browser from allowing
unsafe scripting, but older browsers will silently ignore this policy. If your
organization does not need to support Internet Explorer 11 or much older
versions of our other supported browsers, we recommend that you enable Kibana's
`strict` mode for content security policy, which will block access to Kibana
for any browser that does not enforce even a rudimentary set of CSP
protections.

To do this, set `csp.strict` to `true` in your `kibana.yml`:

--------
csp.strict: true
--------

[float]
[[enabling-ssl]]
=== Enable SSL

See <<configuring-tls>>.

[float]
[[load-balancing-es]]
=== Load balancing across multiple {es} nodes
If you have multiple nodes in your Elasticsearch cluster, the easiest way to distribute Kibana requests
across the nodes is to run an Elasticsearch _Coordinating only_ node on the same machine as Kibana.
Elasticsearch Coordinating only nodes are essentially smart load balancers that are part of the cluster. They
process incoming HTTP requests, redirect operations to the other nodes in the cluster as needed, and
gather and return the results. For more information, see
{ref}/modules-node.html[Node] in the Elasticsearch reference.

To use a local coordinating only node to load balance Kibana requests:

. Install Elasticsearch on the same machine as Kibana.
. Configure the node as a Coordinating only node. In `elasticsearch.yml`, set `node.master`, `node.data`, and `node.ingest` to `false`:
+
--------
# You want this node to be neither a master nor a data nor an ingest node,
# but to act as a "search load balancer" (fetching data from nodes,
# aggregating results, etc.)
node.master: false
node.data: false
node.ingest: false
--------
. Configure the coordinating only node to join your Elasticsearch cluster. In `elasticsearch.yml`, set the `cluster.name` to the
name of your cluster.
+
--------
cluster.name: "my_cluster"
--------
. Check your transport and HTTP host configurations in `elasticsearch.yml` under `network.host` and `transport.host`. The `transport.host` needs to be on a network reachable by the other cluster members, while `network.host` is the interface for Kibana's HTTP connection (`localhost:9200` by default).
+
--------
network.host: localhost
http.port: 9200

# by default transport.host refers to network.host
transport.host: <external ip>
transport.tcp.port: 9300-9400
--------
. Make sure Kibana is configured to point to your local coordinating only node. In `kibana.yml`, set `elasticsearch.hosts` to
`["http://localhost:9200"]`.
+
--------
# The Elasticsearch instance to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]
--------

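To check that the node actually came up as a coordinating only node, you can ask it for its roles; this verification step is a suggestion, not part of the procedure above:

--------
curl "http://localhost:9200/_cat/nodes?v&h=name,node.role"
--------

A coordinating only node reports `-` in the `node.role` column, because it holds no master, data, or ingest role.
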
[float]
[[load-balancing-kibana]]
=== Load balancing across multiple Kibana instances
To serve multiple Kibana installations behind a load balancer, you must change the configuration of each instance. See {kibana-ref}/settings.html[Configuring Kibana] for details on each setting.

Settings unique across each Kibana instance:
--------
server.uuid
server.name
--------

Settings unique across each host (for example, running multiple installations on the same virtual machine):
--------
logging.dest
path.data
pid.file
server.port
--------

Settings that must be the same:
--------
xpack.security.encryptionKey // decrypting session cookies
xpack.reporting.encryptionKey // decrypting reports
xpack.encryptedSavedObjects.encryptionKey // decrypting saved objects
--------

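Each of these keys must be a string of at least 32 characters, and the values must match exactly on every instance behind the load balancer. A minimal sketch for `kibana.yml` (the values below are placeholders; generate your own random strings):

--------
xpack.security.encryptionKey: "replace-with-a-random-string-of-32-or-more-characters"
xpack.reporting.encryptionKey: "replace-with-a-second-random-32-plus-character-string"
xpack.encryptedSavedObjects.encryptionKey: "replace-with-a-third-random-32-plus-character-string"
--------
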
Separate configuration files can be used from the command line by using the `-c` flag:
--------
bin/kibana -c config/instance1.yml
bin/kibana -c config/instance2.yml
--------

[float]
[[high-availability]]
=== High availability across multiple {es} nodes
Kibana can be configured to connect to multiple Elasticsearch nodes in the same cluster. In situations where a node becomes unavailable,
Kibana will transparently connect to an available node and continue operating. Requests to available hosts will be routed in a round robin fashion.

Currently the Console application is limited to connecting to the first node listed.

In `kibana.yml`:
--------
elasticsearch.hosts:
  - http://elasticsearch1:9200
  - http://elasticsearch2:9200
--------

Related configurations include `elasticsearch.sniffInterval`, `elasticsearch.sniffOnStart`, and `elasticsearch.sniffOnConnectionFault`.
These can be used to automatically update the list of hosts as a cluster is resized. Parameters can be found on the {kibana-ref}/settings.html[settings page].

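For example, a sketch that re-checks the cluster topology on startup, every ten minutes, and whenever a connection to a node fails (the interval is in milliseconds, and the value here is illustrative):

--------
elasticsearch.sniffOnStart: true
elasticsearch.sniffInterval: 600000
elasticsearch.sniffOnConnectionFault: true
--------
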
[float]
[[memory]]
=== Memory
Kibana has a default maximum memory limit of 1.4 GB, and in most cases, we recommend leaving this unconfigured. In some scenarios, such as large reporting jobs,
it may make sense to tweak limits to meet more specific requirements.

You can modify this limit by setting `--max-old-space-size` in the `NODE_OPTIONS` environment variable. For deb and rpm packages, this is passed in via `/etc/default/kibana` and can be appended to the bottom of the file.

The option accepts a limit in MB:
--------
NODE_OPTIONS="--max-old-space-size=2048" bin/kibana
--------
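
For the deb and rpm packages, the equivalent would be a line appended to the bottom of `/etc/default/kibana` (the 2048 MB value is illustrative):

--------
NODE_OPTIONS="--max-old-space-size=2048"
--------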