mirror of
https://github.com/elastic/kibana.git
synced 2025-04-23 17:28:26 -04:00
# Backport

This will backport the following commits from `main` to `8.6`:

- [Fixes issue in sorting using TX and RX columns (#145994)](https://github.com/elastic/kibana/pull/145994)

### Questions?

Please refer to the [Backport tool documentation](https://github.com/sqren/backport).

## Summary

closes #142667
More details about the issue can be found [here](https://github.com/elastic/kibana/issues/142667).

## Problem

The problem is the use of a `bucket_script` aggregation in the query, which doesn't return a single metric value and therefore can't be used for pipeline aggregation:

```
bucket_script: {
  buckets_path: {
    value: 'rx_avg',
    period: 'rx_period>period',
  },
  script: {
    source: 'params.value / (params.period / 1000)',
    lang: 'painless',
  },
  gap_policy: 'skip'
}
```

## Proposed Solutions

1. Using a [runtime field](https://www.elastic.co/guide/en/elasticsearch/reference/current/runtime.html):

```
"runtime_mappings": {
  "rx_bytes_per_period": {
    "type": "long",
    "script": {
      "source": """
        emit(doc['host.network.ingress.bytes'].size() == 0 ? -1 : (doc['host.network.ingress.bytes'].value / doc['metricset.period'].value));
      """
    }
  }
}
```

2. Using a [scripted metric aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-scripted-metric-aggregation.html):

```
"scripted_metric": {
  "init_script": "state.bytes_per_period = []",
  "map_script": "state.bytes_per_period.add(doc['host.network.ingress.bytes'].value / (doc['metricset.period'].value / 1000))",
  "combine_script": "double avg_bytes_per_period = 0; for (t in state.bytes_per_period) { avg_bytes_per_period += t } return avg_bytes_per_period / state.bytes_per_period.size()",
  "reduce_script": "double result = 0; for (a in states) { result += a } return result / states.size()"
}
```

## Conclusion

I decided to go with the runtime field: it is a bit more concise and easier to understand, and performance-wise it was slightly faster than the scripted metric in most runs.

### Testing

Navigate to `Observability` -> `Overview` -> `Hosts Table` and try to sort by the Rx and Tx columns.

Co-authored-by: mohamedhamed-ahmed <eng_mohamedhamed@hotmail.com>
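The chosen runtime-field approach boils down to a per-document rate calculation. A minimal TypeScript sketch of that arithmetic (a hypothetical helper, not part of the PR; it assumes `metricset.period` is reported in milliseconds, as the `/ 1000` in the Painless scripts implies):

```typescript
// Hypothetical helper mirroring the Painless runtime field: convert the
// bytes reported for one collection period into bytes per second.
interface MetricDoc {
  ingressBytes?: number; // doc['host.network.ingress.bytes']
  periodMs: number; // doc['metricset.period'], assumed milliseconds
}

function rxBytesPerSecond(doc: MetricDoc): number | null {
  // The runtime field guards with size() == 0: when the counter is
  // absent nothing is emitted, which surfaces as null in the UI.
  if (doc.ingressBytes === undefined) {
    return null;
  }
  return doc.ingressBytes / (doc.periodMs / 1000);
}

// e.g. 2_400_000 ingress bytes over a 300_000 ms period -> 8000 bytes/s
```

Because the result is a plain per-document value, an ordinary `avg` aggregation over it yields the single metric that sorting needs.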
This commit is contained in:

- parent: `7375f91e28`
- commit: `1b2ce1a5a8`

7 changed files with 432 additions and 97 deletions
```diff
@@ -27,16 +27,16 @@ export const convertESResponseToTopNodesResponse = (
         cpu: bucket.cpu.value,
         iowait: bucket.iowait.value,
         load: bucket.load.value,
-        rx: bucket.rx?.value || null,
-        tx: bucket.tx?.value || null,
+        rx: bucket.rx?.bytes.value || null,
+        tx: bucket.tx?.bytes.value || null,
       };
     }),
     cpu: node.cpu.value,
     iowait: node.iowait.value,
     load: node.load.value,
     uptime: node.uptime.value,
-    rx: node.rx?.value || null,
-    tx: node.tx?.value || null,
+    rx: node.rx?.bytes.value || null,
+    tx: node.tx?.bytes.value || null,
   };
 }),
};
```
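The reshaped response access can be sketched like this (simplified types and an assumed bucket shape; the real converter handles the full node metrics set):

```typescript
// rx/tx are now filter aggregations wrapping an avg named 'bytes', so the
// value sits one level deeper in the ES response. Optional chaining plus
// `|| null` keeps hosts without network metrics at null instead of throwing.
interface RateAgg {
  bytes: { value: number | null };
}
interface HostBucket {
  rx?: RateAgg;
  tx?: RateAgg;
}

function toRates(bucket: HostBucket) {
  return {
    rx: bucket.rx?.bytes.value || null,
    tx: bucket.tx?.bytes.value || null,
  };
}
```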
```diff
@@ -13,10 +13,41 @@ export const createTopNodesQuery = (
   options: TopNodesRequest,
   source: MetricsSourceConfiguration
 ) => {
+  const nestedSearchFields: { [key: string]: string } = {
+    rx: 'rx>bytes',
+    tx: 'tx>bytes',
+  };
   const sortByHost = options.sort && options.sort === 'name';
-  const sortField = sortByHost ? '_key' : options.sort ?? 'uptime';
+  const metricsSortField = options.sort
+    ? nestedSearchFields[options.sort] || options.sort
+    : 'uptime';
+  const sortField = sortByHost ? '_key' : metricsSortField;
   const sortDirection = options.sortDirection ?? 'asc';
   return {
+    runtime_mappings: {
+      rx_bytes_per_period: {
+        type: 'double',
+        script: {
+          source: `
+            if (doc['host.network.ingress.bytes'].size() != 0)
+            {
+              emit((doc['host.network.ingress.bytes'].value / (doc['metricset.period'].value / 1000)));
+            }
+          `,
+        },
+      },
+      tx_bytes_per_period: {
+        type: 'double',
+        script: {
+          source: `
+            if (doc['host.network.egress.bytes'].size() != 0)
+            {
+              emit((doc['host.network.egress.bytes'].value / (doc['metricset.period'].value / 1000)));
+            }
+          `,
+        },
+      },
+    },
     size: 0,
     query: {
       bool: {
```
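The sort-field resolution above can be sketched in isolation (simplified signature; the real code reads `options.sort` from `TopNodesRequest`): `rx` and `tx` now live under filter sub-aggregations, so sorting must target the nested paths `rx>bytes` / `tx>bytes`, while other metrics keep their plain names and sorting by host name uses the terms bucket key.

```typescript
// Simplified sketch of the new sort-field lookup.
const nestedSearchFields: { [key: string]: string } = {
  rx: 'rx>bytes',
  tx: 'tx>bytes',
};

function resolveSortField(sort?: string): string {
  // Sorting by host name targets the terms bucket key itself.
  if (sort === 'name') return '_key';
  // rx/tx map to nested aggregation paths; other metrics pass through,
  // and 'uptime' is the default when no sort is given.
  return sort ? nestedSearchFields[sort] || sort : 'uptime';
}
```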
```diff
@@ -75,70 +106,34 @@ export const createTopNodesQuery = (
               field: 'system.load.15',
             },
           },
-          rx_avg: {
-            avg: {
-              field: 'host.network.ingress.bytes',
-            },
-          },
-          rx_period: {
+          rx: {
             filter: {
               exists: {
                 field: 'host.network.ingress.bytes',
               },
             },
             aggs: {
-              period: {
-                max: {
-                  field: 'metricset.period',
+              bytes: {
+                avg: {
+                  field: 'rx_bytes_per_period',
                 },
               },
             },
           },
-          rx: {
-            bucket_script: {
-              buckets_path: {
-                value: 'rx_avg',
-                period: 'rx_period>period',
-              },
-              script: {
-                source: 'params.value / (params.period / 1000)',
-                lang: 'painless',
-              },
-              gap_policy: 'skip',
-            },
-          },
-          tx_avg: {
-            avg: {
-              field: 'host.network.egress.bytes',
-            },
-          },
-          tx_period: {
+          tx: {
             filter: {
               exists: {
                 field: 'host.network.egress.bytes',
               },
             },
             aggs: {
-              period: {
-                max: {
-                  field: 'metricset.period',
+              bytes: {
+                avg: {
+                  field: 'tx_bytes_per_period',
                 },
               },
             },
           },
-          tx: {
-            bucket_script: {
-              buckets_path: {
-                value: 'tx_avg',
-                period: 'tx_period>period',
-              },
-              script: {
-                source: 'params.value / (params.period / 1000)',
-                lang: 'painless',
-              },
-              gap_policy: 'skip',
-            },
-          },
           timeseries: {
             date_histogram: {
               field: '@timestamp',
```
```diff
@@ -164,70 +159,34 @@ export const createTopNodesQuery = (
               field: 'system.load.15',
             },
           },
-          rx_avg: {
-            avg: {
-              field: 'host.network.ingress.bytes',
-            },
-          },
-          rx_period: {
+          rx: {
             filter: {
               exists: {
                 field: 'host.network.ingress.bytes',
               },
             },
             aggs: {
-              period: {
-                max: {
-                  field: 'metricset.period',
+              bytes: {
+                avg: {
+                  field: 'rx_bytes_per_period',
                 },
               },
             },
           },
-          rx: {
-            bucket_script: {
-              buckets_path: {
-                value: 'rx_avg',
-                period: 'rx_period>period',
-              },
-              script: {
-                source: 'params.value / (params.period / 1000)',
-                lang: 'painless',
-              },
-              gap_policy: 'skip',
-            },
-          },
-          tx_avg: {
-            avg: {
-              field: 'host.network.egress.bytes',
-            },
-          },
-          tx_period: {
+          tx: {
             filter: {
               exists: {
                 field: 'host.network.egress.bytes',
               },
             },
             aggs: {
-              period: {
-                max: {
-                  field: 'metricset.period',
+              bytes: {
+                avg: {
+                  field: 'tx_bytes_per_period',
                 },
               },
             },
           },
-          tx: {
-            bucket_script: {
-              buckets_path: {
-                value: 'tx_avg',
-                period: 'tx_period>period',
-              },
-              script: {
-                source: 'params.value / (params.period / 1000)',
-                lang: 'painless',
-              },
-              gap_policy: 'skip',
-            },
-          },
         },
       },
     },
```
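Why this restructuring enables sorting: `filter` + `avg` yields a single-valued metric at the nested path `rx>bytes`, which a `terms` aggregation can order its buckets by, whereas a `bucket_script` result cannot be used that way. A sketch of the assumed bucket-ordering fragment (illustrative only; the `terms`/`order` clause itself is outside the hunks shown above):

```typescript
// Assumed shape, not shown in this diff: the hosts terms aggregation
// orders buckets by 'rx>bytes' -- the avg of the rx_bytes_per_period
// runtime field, restricted to documents where the ingress counter exists.
const hostsAgg = {
  terms: {
    field: 'host.name', // hypothetical grouping field for illustration
    size: 5,
    order: { 'rx>bytes': 'asc' }, // the resolved nested sort path
  },
  aggs: {
    rx: {
      filter: { exists: { field: 'host.network.ingress.bytes' } },
      aggs: { bytes: { avg: { field: 'rx_bytes_per_period' } } },
    },
  },
};
```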
```diff
@@ -7,6 +7,10 @@
 
 type NumberOrNull = number | null;
 
+interface RuntimeField {
+  [key: string]: NodeMetric;
+}
+
 interface TopMetric {
   sort: string[];
   metrics: Record<string, string | null>;
```
```diff
@@ -22,8 +26,8 @@ interface NodeMetrics {
   cpu: NodeMetric;
   iowait: NodeMetric;
   load: NodeMetric;
-  rx: NodeMetric;
-  tx: NodeMetric;
+  rx: RuntimeField;
+  tx: RuntimeField;
 }
 
 interface TimeSeriesMetric extends NodeMetrics {
```
```diff
@@ -23,6 +23,10 @@ export const DATES = {
     min: new Date('2022-01-20T17:09:55.124Z').getTime(),
     max: new Date('2022-01-20T17:14:57.378Z').getTime(),
   },
+  hosts_and_netowrk: {
+    min: new Date('2022-11-23T14:13:19.534Z').getTime(),
+    max: new Date('2022-11-25T14:13:19.534Z').getTime(),
+  },
   hosts_only: {
     min: new Date('2022-01-18T19:57:47.534Z').getTime(),
     max: new Date('2022-01-18T20:02:50.043Z').getTime(),
```
```diff
@@ -18,13 +18,12 @@ export default function ({ getService }: FtrProviderContext) {
   const esArchiver = getService('esArchiver');
   const supertest = getService('supertest');
 
-  const { min, max } = DATES['7.0.0'].hosts;
-
   describe('API /metrics/overview/top', () => {
     before(() => esArchiver.load('x-pack/test/functional/es_archives/infra/7.0.0/hosts'));
     after(() => esArchiver.unload('x-pack/test/functional/es_archives/infra/7.0.0/hosts'));
 
     it('works', async () => {
+      const { min, max } = DATES['7.0.0'].hosts;
       const response = await supertest
         .post('/api/metrics/overview/top')
         .set({
```
```diff
@@ -49,5 +48,49 @@ export default function ({ getService }: FtrProviderContext) {
       expect(series[0].id).to.be('demo-stack-mysql-01');
       expect(series[0].timeseries[1].timestamp - series[0].timeseries[0].timestamp).to.be(300_000);
     });
+
+    describe('Runtime fields calculation', () => {
+      before(() =>
+        esArchiver.load('x-pack/test/functional/es_archives/infra/8.0.0/hosts_and_network')
+      );
+      after(() =>
+        esArchiver.unload('x-pack/test/functional/es_archives/infra/8.0.0/hosts_and_network')
+      );
+
+      it('should return correct sorted calculations', async () => {
+        const { min, max } = DATES['8.0.0'].hosts_and_netowrk;
+        const response = await supertest
+          .post('/api/metrics/overview/top')
+          .set({
+            'kbn-xsrf': 'some-xsrf-token',
+          })
+          .send(
+            TopNodesRequestRT.encode({
+              sourceId: 'default',
+              bucketSize: '300s',
+              size: 5,
+              timerange: {
+                from: min,
+                to: max,
+              },
+              sort: 'rx',
+              sortDirection: 'asc',
+            })
+          )
+          .expect(200);
+        const { series } = decodeOrThrow(TopNodesResponseRT)(response.body);
+
+        const hosts = series.map((s) => ({
+          name: s.name,
+          rx: s.rx,
+          tx: s.tx,
+        }));
+
+        expect(hosts.length).to.be(3);
+        expect(hosts[0]).to.eql({ name: 'metricbeat-2', rx: 8000, tx: 16860 });
+        expect(hosts[1]).to.eql({ name: 'metricbeat-1', rx: 11250, tx: 25290.5 });
+        expect(hosts[2]).to.eql({ name: 'metricbeat-3', rx: null, tx: null });
+      });
+    });
   });
 }
```
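The assertion order above (8000, then 11250, then the host with no network data) corresponds to ascending rx with missing values sorted last. A sketch of that expected ordering (hypothetical comparator, not part of the test suite):

```typescript
// Hypothetical comparator reproducing the ordering the assertions expect:
// ascending by rx, hosts without network metrics (rx === null) last.
function byRxAscNullsLast(
  a: { rx: number | null },
  b: { rx: number | null }
): number {
  if (a.rx === null) return b.rx === null ? 0 : 1;
  if (b.rx === null) return -1;
  return a.rx - b.rx;
}

const hosts = [
  { name: 'metricbeat-1', rx: 11250 },
  { name: 'metricbeat-3', rx: null },
  { name: 'metricbeat-2', rx: 8000 },
].sort(byRxAscNullsLast);
// hosts order: metricbeat-2, metricbeat-1, metricbeat-3
```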
Binary file not shown.
```diff
@@ -0,0 +1,325 @@
```

```json
{
  "type": "index",
  "value": {
    "aliases": {},
    "index": "metricbeat-8.7.0",
    "mappings": {
      "date_detection": false,
      "dynamic_templates": [
        {
          "labels": {
            "mapping": { "type": "keyword" },
            "match_mapping_type": "string",
            "path_match": "labels.*"
          }
        },
        {
          "strings_as_keyword": {
            "mapping": { "ignore_above": 1024, "type": "keyword" },
            "match_mapping_type": "string"
          }
        }
      ],
      "properties": {
        "@timestamp": { "type": "date" },
        "event": {
          "properties": {
            "dataset": { "ignore_above": 256, "type": "keyword" },
            "module": { "ignore_above": 256, "type": "keyword" }
          }
        },
        "host": {
          "properties": {
            "architecture": { "type": "keyword", "ignore_above": 1024 },
            "containerized": { "type": "boolean" },
            "cpu": {
              "properties": {
                "usage": { "type": "scaled_float", "scaling_factor": 1000 }
              }
            },
            "disk": {
              "properties": {
                "read": { "properties": { "bytes": { "type": "long" } } },
                "write": { "properties": { "bytes": { "type": "long" } } }
              }
            },
            "domain": { "type": "keyword", "ignore_above": 1024 },
            "geo": {
              "properties": {
                "city_name": { "type": "keyword", "ignore_above": 1024 },
                "continent_code": { "type": "keyword", "ignore_above": 1024 },
                "continent_name": { "type": "keyword", "ignore_above": 1024 },
                "country_iso_code": { "type": "keyword", "ignore_above": 1024 },
                "country_name": { "type": "keyword", "ignore_above": 1024 },
                "location": { "type": "geo_point" },
                "name": { "type": "keyword", "ignore_above": 1024 },
                "postal_code": { "type": "keyword", "ignore_above": 1024 },
                "region_iso_code": { "type": "keyword", "ignore_above": 1024 },
                "region_name": { "type": "keyword", "ignore_above": 1024 },
                "timezone": { "type": "keyword", "ignore_above": 1024 }
              }
            },
            "hostname": { "type": "keyword", "ignore_above": 1024 },
            "id": { "type": "keyword", "ignore_above": 1024 },
            "ip": { "type": "ip" },
            "mac": { "type": "keyword", "ignore_above": 1024 },
            "name": { "type": "keyword", "ignore_above": 1024 },
            "network": {
              "properties": {
                "egress": {
                  "properties": {
                    "bytes": { "type": "long" },
                    "packets": { "type": "long" }
                  }
                },
                "ingress": {
                  "properties": {
                    "bytes": { "type": "long" },
                    "packets": { "type": "long" }
                  }
                }
              }
            },
            "os": {
              "properties": {
                "build": { "type": "keyword", "ignore_above": 1024 },
                "codename": { "type": "keyword", "ignore_above": 1024 },
                "family": { "type": "keyword", "ignore_above": 1024 },
                "full": {
                  "type": "keyword",
                  "ignore_above": 1024,
                  "fields": { "text": { "type": "match_only_text" } }
                },
                "kernel": { "type": "keyword", "ignore_above": 1024 },
                "name": {
                  "type": "keyword",
                  "ignore_above": 1024,
                  "fields": { "text": { "type": "match_only_text" } }
                },
                "platform": { "type": "keyword", "ignore_above": 1024 },
                "type": { "type": "keyword", "ignore_above": 1024 },
                "version": { "type": "keyword", "ignore_above": 1024 }
              }
            },
            "type": { "type": "keyword", "ignore_above": 1024 },
            "uptime": { "type": "long" }
          }
        },
        "labels": {
          "properties": {
            "eventId": { "type": "keyword" },
            "groupId": { "type": "keyword" }
          }
        },
        "metricset": {
          "properties": { "period": { "type": "long" } }
        },
        "system": {
          "properties": {
            "cpu": {
              "properties": {
                "cores": { "type": "long" },
                "system": {
                  "properties": {
                    "pct": { "scaling_factor": 1000, "type": "scaled_float" }
                  }
                },
                "total": {
                  "properties": {
                    "norm": {
                      "properties": {
                        "pct": { "scaling_factor": 1000, "type": "scaled_float" }
                      }
                    }
                  }
                },
                "user": {
                  "properties": {
                    "pct": { "scaling_factor": 1000, "type": "scaled_float" }
                  }
                }
              }
            },
            "network": {
              "properties": {
                "in": { "properties": { "bytes": { "type": "long" } } },
                "name": { "ignore_above": 256, "type": "keyword" },
                "out": { "properties": { "bytes": { "type": "long" } } }
              }
            }
          }
        }
      }
    },
    "settings": {
      "index": {
        "mapping": { "total_fields": { "limit": "10000" } },
        "number_of_replicas": "0",
        "number_of_shards": "1"
      }
    }
  }
}
```