[ML][Inference] don't return inflated definition when storing trained models (#52573)

When `PUT` is called to store a trained model, it is useful to return the newly created model config. But, it is NOT useful to return the inflated definition.

These definitions can be large, and returning the inflated definition causes undue work on both the server and client side.
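A hedged sketch of the interaction this change affects (the `_ml/inference` path, model id, and all field values are illustrative assumptions based on the 7.x inference API, not taken from this commit):

```
PUT _ml/inference/my-regression-model
{
  "input": { "field_names": ["feature_1", "feature_2"] },
  "definition": { ... }
}
```

With this change, the response still echoes the stored trained model config, but the `definition` field is omitted so large model definitions are not streamed back to the client.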
Benjamin Trent 2020-02-20 11:25:34 -05:00 committed by GitHub
parent 1326fdb818
commit 1c1d45130c
7 changed files with 99 additions and 9 deletions


@@ -46,6 +46,8 @@ include::../execution.asciidoc[]
==== Response
The returned +{response}+ contains the newly created trained model.
The +{response}+ will omit the model definition as a precaution against
streaming large model definitions back to the client.
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------