HLRC: Adding Update datafeed API (#34882)

* HLRC: Adding Update datafeed API

* Addressing unused import

* Adjusting docs and fixing minor comments

* fixing comment
Benjamin Trent 2018-10-26 16:44:12 -05:00 committed by GitHub
parent 11fa8d3744
commit 052dfa5646
10 changed files with 351 additions and 2 deletions


@@ -0,0 +1,58 @@
--
:api: update-datafeed
:request: UpdateDatafeedRequest
:response: PutDatafeedResponse
--
[id="{upid}-{api}"]
=== Update Datafeed API
The Update Datafeed API can be used to update a {ml} datafeed
in the cluster. The API accepts a +{request}+ object
as a request and returns a +{response}+.
[id="{upid}-{api}-request"]
==== Update Datafeed Request
A +{request}+ requires the following argument:
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
<1> The updated configuration of the {ml} datafeed
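As a hedged sketch (not the include-tagged snippet above), constructing the request amounts to wrapping a `DatafeedUpdate` in a +{request}+; the `"my-datafeed"` id is a placeholder:

```java
import org.elasticsearch.client.ml.UpdateDatafeedRequest;
import org.elasticsearch.client.ml.datafeed.DatafeedUpdate;

// Build an update for an existing datafeed and wrap it in the request.
// "my-datafeed" is an assumed, illustrative datafeed id.
DatafeedUpdate update = new DatafeedUpdate.Builder("my-datafeed").build();
UpdateDatafeedRequest request = new UpdateDatafeedRequest(update);
```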
[id="{upid}-{api}-config"]
==== Updated Datafeed Arguments
A `DatafeedUpdate` requires an existing, non-null `datafeedId`; all
other settings are optional.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-config]
--------------------------------------------------
<1> Mandatory, non-null `datafeedId` referencing an existing {ml} datafeed
<2> Optional, the aggregations the datafeed uses for data gathering
<3> Optional, the indices that contain the data to retrieve and feed into the job
<4> Optional, specifies how data searches are split into time chunks
<5> Optional, the interval at which scheduled queries are made while the datafeed runs in real time
<6> Optional, a query to filter the search results by. Defaults to the `match_all` query
<7> Optional, the time interval behind real time that data is queried
<8> Optional, allows the use of script fields
<9> Optional, the `size` parameter used in the searches
<10> Optional, the `jobId` referencing the job that the datafeed should be associated with
after the update
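The callouts above can be sketched as a chain of `DatafeedUpdate.Builder` setters. This is a hedged illustration, not the include-tagged test snippet: the datafeed and job ids are placeholders, and only the setters the author is reasonably sure of are shown (aggregations and script fields are left out):

```java
import org.elasticsearch.client.ml.datafeed.ChunkingConfig;
import org.elasticsearch.client.ml.datafeed.DatafeedUpdate;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;

// "my-datafeed" and "my-job" are assumed, illustrative ids.
DatafeedUpdate update = new DatafeedUpdate.Builder("my-datafeed") // <1> existing datafeedId
    .setIndices("index_1", "index_2")                // <3> source indices
    .setChunkingConfig(ChunkingConfig.newAuto())     // <4> how searches are chunked
    .setFrequency(TimeValue.timeValueSeconds(30))    // <5> real-time query interval
    .setQuery(QueryBuilders.matchAllQuery())         // <6> filter query
    .setQueryDelay(TimeValue.timeValueMinutes(1))    // <7> lag behind real time
    .setScrollSize(1000)                             // <9> search `size`
    .setJobId("my-job")                              // <10> associated job
    .build();
```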
include::../execution.asciidoc[]
[id="{upid}-{api}-response"]
==== Response
The returned +{response}+ contains the full representation of
the updated {ml} datafeed if the update was successful.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
<1> The updated datafeed
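Executing the request synchronously and reading back the updated configuration can be sketched as follows; this assumes an already initialized `RestHighLevelClient` named `client` and a running cluster:

```java
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.ml.PutDatafeedResponse;
import org.elasticsearch.client.ml.datafeed.DatafeedConfig;

// Synchronous execution; the response carries the updated datafeed.
PutDatafeedResponse response =
    client.machineLearning().updateDatafeed(request, RequestOptions.DEFAULT);
DatafeedConfig updated = response.getResponse(); // full updated datafeed
String datafeedId = updated.getId();
```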


@@ -240,6 +240,7 @@ The Java High Level REST Client supports the following Machine Learning APIs:
* <<{upid}-update-job>>
* <<{upid}-get-job-stats>>
* <<{upid}-put-datafeed>>
* <<{upid}-update-datafeed>>
* <<{upid}-get-datafeed>>
* <<{upid}-delete-datafeed>>
* <<{upid}-preview-datafeed>>
@@ -266,6 +267,7 @@ include::ml/close-job.asciidoc[]
include::ml/update-job.asciidoc[]
include::ml/flush-job.asciidoc[]
include::ml/put-datafeed.asciidoc[]
include::ml/update-datafeed.asciidoc[]
include::ml/get-datafeed.asciidoc[]
include::ml/delete-datafeed.asciidoc[]
include::ml/preview-datafeed.asciidoc[]