[[saved-objects-service-use-case-examples]]
=== Use-case examples

These are examples of the migration scenarios currently supported (out
of the box) by the system.

*note:* _more complex scenarios (e.g. field mutation by copy/sync) could
already be implemented, but without the proper tooling exposed from
Core, most of the work related to sync and compatibility would have to
be implemented in the domain layer of the type owners, which is why
we're not documenting them yet._

==== Adding a non-indexed field without default value

We are currently in model version 1, and our type has 2 indexed fields
defined: `foo` and `bar`.

The definition of the type at version 1 would look like:

[source,ts]
----
const myType: SavedObjectsType = {
  name: 'test',
  namespaceType: 'single',
  switchToModelVersionAt: '8.10.0',
  modelVersions: {
    // initial (and current) model version
    1: {
      changes: [],
      schemas: {
        // FC schema defining the known fields (indexed or not) for this version
        forwardCompatibility: schema.object(
          { foo: schema.string(), bar: schema.string() },
          { unknowns: 'ignore' } // note the `unknowns: 'ignore'` which is how we're evicting the unknown fields
        ),
        // schema that will be used to validate input during `create` and `bulkCreate`
        create: schema.object(
          { foo: schema.string(), bar: schema.string() },
        )
      },
    },
  },
  mappings: {
    properties: {
      foo: { type: 'text' },
      bar: { type: 'text' },
    },
  },
};
----

From here, say we want to introduce a new `dolly` field that is not
indexed, and that we don't need to populate with a default value.

To achieve that, we need to introduce a new model version; the only
thing to do is to define the associated schemas to include this new
field.

The added model version would look like:

[source,ts]
----
// the new model version adding the `dolly` field
let modelVersion2: SavedObjectsModelVersion = {
  // not an indexed field, no data backfill, so changes are actually empty
  changes: [],
  schemas: {
    // the only addition in this model version: taking the new field into account for the schemas
    forwardCompatibility: schema.object(
      { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
      { unknowns: 'ignore' } // note the `unknowns: 'ignore'` which is how we're evicting the unknown fields
    ),
    create: schema.object(
      { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
    )
  },
};
----

The full type definition after the addition of the new model version:

[source,ts]
----
const myType: SavedObjectsType = {
  name: 'test',
  namespaceType: 'single',
  switchToModelVersionAt: '8.10.0',
  modelVersions: {
    1: {
      changes: [],
      schemas: {
        forwardCompatibility: schema.object(
          { foo: schema.string(), bar: schema.string() },
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { foo: schema.string(), bar: schema.string() },
        )
      },
    },
    2: {
      changes: [],
      schemas: {
        forwardCompatibility: schema.object(
          { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
        )
      },
    },
  },
  mappings: {
    properties: {
      foo: { type: 'text' },
      bar: { type: 'text' },
    },
  },
};
----

==== Adding an indexed field without default value

This scenario is fairly close to the previous one. The difference is
that working with an indexed field means adding a `mappings_addition`
change and updating the root mappings accordingly.

To reuse the previous example, let's say the `dolly` field we want to
add would need to be indexed instead.

In that case, the new version needs to do the following:

* add a `mappings_addition` type change to define the new mappings
* update the root `mappings` accordingly
* add the updated schemas as we did for the previous example

The new version definition would look like:

[source,ts]
----
let modelVersion2: SavedObjectsModelVersion = {
  // add a change defining the mapping for the new field
  changes: [
    {
      type: 'mappings_addition',
      addedMappings: {
        dolly: { type: 'text' },
      },
    },
  ],
  schemas: {
    // adding the new field to the forwardCompatibility schema
    forwardCompatibility: schema.object(
      { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
      { unknowns: 'ignore' }
    ),
    create: schema.object(
      { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
    )
  },
};
----

As mentioned, we will also need to update the root mappings definition:

[source,ts]
----
mappings: {
  properties: {
    foo: { type: 'text' },
    bar: { type: 'text' },
    dolly: { type: 'text' },
  },
},
----

The full type definition after the addition of model version 2 would
be:

[source,ts]
----
const myType: SavedObjectsType = {
  name: 'test',
  namespaceType: 'single',
  switchToModelVersionAt: '8.10.0',
  modelVersions: {
    1: {
      changes: [
        {
          type: 'mappings_addition',
          addedMappings: {
            foo: { type: 'text' },
            bar: { type: 'text' },
          },
        },
      ],
      schemas: {
        forwardCompatibility: schema.object(
          { foo: schema.string(), bar: schema.string() },
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { foo: schema.string(), bar: schema.string() },
        )
      },
    },
    2: {
      changes: [
        {
          type: 'mappings_addition',
          addedMappings: {
            dolly: { type: 'text' },
          },
        },
      ],
      schemas: {
        forwardCompatibility: schema.object(
          { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
        )
      },
    },
  },
  mappings: {
    properties: {
      foo: { type: 'text' },
      bar: { type: 'text' },
      dolly: { type: 'text' },
    },
  },
};
----

==== Adding an indexed field with a default value

Now, a slightly different scenario, where we'd like to populate the newly
introduced field with a default value.

In that case, we'd need to add an additional `data_backfill` change to
populate the new field's value (in addition to the `mappings_addition`
one):

[source,ts]
----
let modelVersion2: SavedObjectsModelVersion = {
  changes: [
    // setting the `dolly` field's default value.
    {
      type: 'data_backfill',
      transform: (document) => {
        return { attributes: { dolly: 'default_value' } };
      },
    },
    // define the mappings for the new field
    {
      type: 'mappings_addition',
      addedMappings: {
        dolly: { type: 'text' },
      },
    },
  ],
  schemas: {
    // define `dolly` as a known field in the schema
    forwardCompatibility: schema.object(
      { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
      { unknowns: 'ignore' }
    ),
    create: schema.object(
      { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
    )
  },
};
----

The full type definition would look like:

[source,ts]
----
const myType: SavedObjectsType = {
  name: 'test',
  namespaceType: 'single',
  switchToModelVersionAt: '8.10.0',
  modelVersions: {
    1: {
      changes: [
        {
          type: 'mappings_addition',
          addedMappings: {
            foo: { type: 'text' },
            bar: { type: 'text' },
          },
        },
      ],
      schemas: {
        forwardCompatibility: schema.object(
          { foo: schema.string(), bar: schema.string() },
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { foo: schema.string(), bar: schema.string() },
        )
      },
    },
    2: {
      changes: [
        {
          type: 'data_backfill',
          transform: (document) => {
            return { attributes: { dolly: 'default_value' } };
          },
        },
        {
          type: 'mappings_addition',
          addedMappings: {
            dolly: { type: 'text' },
          },
        },
      ],
      schemas: {
        forwardCompatibility: schema.object(
          { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { foo: schema.string(), bar: schema.string(), dolly: schema.string() },
        )
      },
    },
  },
  mappings: {
    properties: {
      foo: { type: 'text' },
      bar: { type: 'text' },
      dolly: { type: 'text' },
    },
  },
};
----

*Note:* _if the field were non-indexed, we would simply not use the
`mappings_addition` change or update the mappings (as done in the first
example)._

==== Removing an existing field

We are currently in model version 1, and our type has 2 indexed fields
defined: `kept` and `removed`.

The definition of the type at version 1 would look like:

[source,ts]
----
const myType: SavedObjectsType = {
  name: 'test',
  namespaceType: 'single',
  switchToModelVersionAt: '8.10.0',
  modelVersions: {
    // initial (and current) model version
    1: {
      changes: [],
      schemas: {
        // FC schema defining the known fields (indexed or not) for this version
        forwardCompatibility: schema.object(
          { kept: schema.string(), removed: schema.string() },
          { unknowns: 'ignore' } // note the `unknowns: 'ignore'` which is how we're evicting the unknown fields
        ),
        // schema that will be used to validate input during `create` and `bulkCreate`
        create: schema.object(
          { kept: schema.string(), removed: schema.string() },
        )
      },
    },
  },
  mappings: {
    properties: {
      kept: { type: 'text' },
      removed: { type: 'text' },
    },
  },
};
----

From here, say we want to remove the `removed` field, as our application
doesn't need it anymore since a recent change.

The first thing to understand here is the impact on backward
compatibility: say that Kibana version `X` was still using this field,
and that we stopped using it in version `X+1`.

We can't remove the data in version `X+1`, as we need to be able to
roll back to the prior version at *any time*. If we were to delete the
data of this `removed` field during the upgrade to version `X+1`, and
if, for any reason, we then needed to roll back to version `X`, it would
cause a data loss, as version `X` was still using this field, but it
would no longer be present in our documents after the rollback.

This is why we need to perform any field removal as a two-step
operation:

* release `X`: Kibana still uses the field
* release `X+1`: Kibana no longer uses the field, but the data is still
present in the documents
* release `X+2`: the data is effectively deleted from the documents

That way, any prior-version rollback (`X+2` to `X+1`, *or* `X+1` to `X`)
is safe in terms of data integrity.

The main question, then, is what's the best way of having our
application layer simply ignore this `removed` field during version
`X+1`, as we don't want this field (now unused) to be returned from the
persistence layer, where it could "pollute" the higher layers in which
the field is effectively no longer used or even known.

This can easily be done by introducing a new version and using the
`forwardCompatibility` schema to "swallow" the field.

The `X+1` model version would look like:

[source,ts]
----
// the new model version ignoring the `removed` field
let modelVersion2: SavedObjectsModelVersion = {
  changes: [],
  schemas: {
    forwardCompatibility: schema.object(
      { kept: schema.string() }, // `removed` is no longer defined here
      { unknowns: 'ignore' }
    ),
    create: schema.object(
      { kept: schema.string() }, // `removed` is no longer defined here
    )
  },
};
----

The full type definition after the addition of the new model version:

[source,ts]
----
const myType: SavedObjectsType = {
  name: 'test',
  namespaceType: 'single',
  switchToModelVersionAt: '8.10.0',
  modelVersions: {
    // initial model version
    1: {
      changes: [],
      schemas: {
        // FC schema defining the known fields (indexed or not) for this version
        forwardCompatibility: schema.object(
          { kept: schema.string(), removed: schema.string() },
          { unknowns: 'ignore' } // note the `unknowns: 'ignore'` which is how we're evicting the unknown fields
        ),
        // schema that will be used to validate input during `create` and `bulkCreate`
        create: schema.object(
          { kept: schema.string(), removed: schema.string() },
        )
      },
    },
    2: {
      changes: [],
      schemas: {
        forwardCompatibility: schema.object(
          { kept: schema.string() }, // `removed` is no longer defined here
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { kept: schema.string() }, // `removed` is no longer defined here
        )
      },
    },
  },
  mappings: {
    properties: {
      kept: { type: 'text' },
      removed: { type: 'text' },
    },
  },
};
----

Then, in a *later* release, we can deploy the change that will
effectively remove the data from the documents:

[source,ts]
----
// the new model version removing the data of the `removed` field
let modelVersion3: SavedObjectsModelVersion = {
  changes: [
    // define a data_removal change to delete the field
    {
      type: 'data_removal',
      removedAttributePaths: ['removed'],
    },
  ],
  schemas: {
    forwardCompatibility: schema.object(
      { kept: schema.string() },
      { unknowns: 'ignore' }
    ),
    create: schema.object(
      { kept: schema.string() },
    )
  },
};
----

The full type definition after the data removal would look like:

[source,ts]
----
const myType: SavedObjectsType = {
  name: 'test',
  namespaceType: 'single',
  switchToModelVersionAt: '8.10.0',
  modelVersions: {
    // initial model version
    1: {
      changes: [],
      schemas: {
        // FC schema defining the known fields (indexed or not) for this version
        forwardCompatibility: schema.object(
          { kept: schema.string(), removed: schema.string() },
          { unknowns: 'ignore' } // note the `unknowns: 'ignore'` which is how we're evicting the unknown fields
        ),
        // schema that will be used to validate input during `create` and `bulkCreate`
        create: schema.object(
          { kept: schema.string(), removed: schema.string() },
        )
      },
    },
    2: {
      changes: [],
      schemas: {
        forwardCompatibility: schema.object(
          { kept: schema.string() }, // `removed` is no longer defined here
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { kept: schema.string() }, // `removed` is no longer defined here
        )
      },
    },
    3: {
      changes: [
        // define a data_removal change to delete the field
        {
          type: 'data_removal',
          removedAttributePaths: ['removed'],
        },
      ],
      schemas: {
        forwardCompatibility: schema.object(
          { kept: schema.string() },
          { unknowns: 'ignore' }
        ),
        create: schema.object(
          { kept: schema.string() },
        )
      },
    },
  },
  mappings: {
    properties: {
      kept: { type: 'text' },
      removed: { type: 'text' },
    },
  },
};
----

=== Testing model versions

Model version definitions are more structured than the legacy migration
functions, which makes them harder to test without the proper tooling.
This is why a set of testing tools and utilities is exposed from the
`@kbn/core-test-helpers-model-versions` package, to help properly test
the logic associated with model versions and their associated
transformations.

==== Tooling for unit tests

For unit tests, the package exposes utilities to easily test the impact
of transforming documents from one model version to another, either
upward or backward.

===== Model version test migrator

The `createModelVersionTestMigrator` helper allows creating a test
migrator that can be used to test model version changes between
versions, by transforming documents the same way the migration algorithm
would during an upgrade.

*Example:*

[source,ts]
----
import {
  createModelVersionTestMigrator,
  type ModelVersionTestMigrator
} from '@kbn/core-test-helpers-model-versions';

const mySoTypeDefinition = someSoType();

describe('mySoTypeDefinition model version transformations', () => {
  let migrator: ModelVersionTestMigrator;

  beforeEach(() => {
    migrator = createModelVersionTestMigrator({ type: mySoTypeDefinition });
  });

  describe('Model version 2', () => {
    it('properly backfills the expected fields when converting from v1 to v2', () => {
      const obj = createSomeSavedObject();

      const migrated = migrator.migrate({
        document: obj,
        fromVersion: 1,
        toVersion: 2,
      });

      expect(migrated.properties).toEqual(expectedV2Properties);
    });

    it('properly removes the expected fields when converting from v2 to v1', () => {
      const obj = createSomeSavedObject();

      const migrated = migrator.migrate({
        document: obj,
        fromVersion: 2,
        toVersion: 1,
      });

      expect(migrated.properties).toEqual(expectedV1Properties);
    });
  });
});
----

==== Tooling for integration tests

During integration tests, we can boot a real Elasticsearch cluster,
allowing us to manipulate SO documents in a way very close to how it
would be done at production runtime. With integration tests, we can even
simulate the cohabitation of two Kibana instances with different model
versions to assert the behavior of their interactions.

===== Model version test bed

The package exposes a `createModelVersionTestBed` function that can be
used to fully set up a test bed for model version integration testing.
It can be used to start and stop the ES server, and to initiate the
migration between the two versions we're testing.

*Example:*

[source,ts]
----
import {
  createModelVersionTestBed,
  type ModelVersionTestKit
} from '@kbn/core-test-helpers-model-versions';

describe('myIntegrationTest', () => {
  const testbed = createModelVersionTestBed();
  let testkit: ModelVersionTestKit;

  beforeAll(async () => {
    await testbed.startES();
  });

  afterAll(async () => {
    await testbed.stopES();
  });

  beforeEach(async () => {
    // prepare the test, creating the index and performing the SO migration
    testkit = await testbed.prepareTestKit({
      savedObjectDefinitions: [{
        definition: mySoTypeDefinition,
        // the model version that will be used for the "before" version
        modelVersionBefore: 1,
        // the model version that will be used for the "after" version
        modelVersionAfter: 2,
      }]
    });
  });

  afterEach(async () => {
    if (testkit) {
      // delete the indices between each test to perform the migration again
      await testkit.tearDown();
    }
  });

  it('can be used to test model version cohabitation', async () => {
    // last registered version is `1` (modelVersionBefore)
    const repositoryV1 = testkit.repositoryBefore;
    // last registered version is `2` (modelVersionAfter)
    const repositoryV2 = testkit.repositoryAfter;

    // do something with the two repositories, e.g.
    await repositoryV1.create(someAttrs, { id });
    const v2docReadFromV1 = await repositoryV2.get('my-type', id);
    expect(v2docReadFromV1.attributes).toEqual(whatIExpect);
  });
});
----

*Limitations:*

Because the test bed only creates the parts of Core required to
instantiate the two SO repositories, and because we're not able to
properly load all plugins (for proper isolation), the integration test
bed currently has some limitations:

* no extensions are enabled
** no security
** no encryption
** no spaces
* all SO types will be using the same SO index

=== Limitations and edge cases in serverless environments

In serverless environments, upgrades are performed in a way where, at
some point, the old and the new version of the application cohabit. This
leads to some particularities regarding the way the SO APIs work, and to
some limitations and edge cases that we need to document.

==== Using the `fields` option of the `find` savedObjects API

By default, the `find` API (as any other SO API returning documents)
will migrate all documents before returning them, to ensure that
documents can be used by both versions during a cohabitation (e.g. an
old node searching for documents already migrated, or a new node
searching for documents not yet migrated).

However, when using the `fields` option of the `find` API, the documents
can't be migrated, as some model version changes can't be applied
against a partial set of attributes. For this reason, when the `fields`
option is provided, the documents returned from `find` will *not* be
migrated.

This is why, when using this option, the API consumer needs to make
sure that _all_ the fields passed to the `fields` option *were already
present in the prior model version*. Otherwise, it may lead to
inconsistencies during upgrades, where newly introduced or backfilled
fields may not necessarily appear in the documents returned from the
`search` API when the option is used.

(_note_: both the previous and the next version of Kibana must then
follow this rule)

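For illustration, here is a minimal sketch of that rule, assuming a
hypothetical `my-type` whose latest model version introduced (and
backfilled) a `dolly` field on top of the pre-existing `foo` and `bar`
(`savedObjectsClient` stands for any SO client or repository instance):

[source,ts]
----
// `foo` and `bar` already existed in the prior model version,
// so they are safe to request during an upgrade.
const safeResponse = await savedObjectsClient.find({
  type: 'my-type',
  fields: ['foo', 'bar'],
});

// `dolly` only exists in the latest model version: documents that were
// not migrated yet are returned as-is, so `dolly` may be missing from
// some of the results.
const unsafeResponse = await savedObjectsClient.find({
  type: 'my-type',
  fields: ['foo', 'dolly'],
});
----
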
==== Using `bulkUpdate` for fields with large `json` blobs

The savedObjects `bulkUpdate` API will update documents client-side and
then reindex the updated documents. These update operations are done
in-memory, and cause memory constraint issues when updating many objects
with large `json` blobs stored in some fields. As such, we recommend
against using `bulkUpdate` for savedObjects that:

* use arrays (as these tend to be large objects)
* store large `json` blobs in some fields

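As an illustration only (not an officially recommended pattern), one way
to keep memory usage bounded for such types is to update the objects
individually rather than in a single large `bulkUpdate` call; the
`docsToUpdate` list and its shape below are assumptions made for the
example:

[source,ts]
----
// hypothetical list of updates targeting objects that carry large `json` blobs
const docsToUpdate: Array<{
  type: string;
  id: string;
  attributes: Record<string, unknown>;
}> = [];

// updating the objects one by one bounds the number of large documents
// held in memory at any given time
for (const { type, id, attributes } of docsToUpdate) {
  await savedObjectsClient.update(type, id, attributes);
}
----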