mirror of
https://github.com/elastic/elasticsearch.git
synced 2025-06-29 18:03:32 -04:00
[doc] Reorganize and clean Java documentation
This commit reorganizes the docs to make the Java API docs look more like the REST docs. Also, as of 2.0.0, FilterBuilders no longer exist; only QueryBuilders remain. All docs APIs now live in the docs/java-api/docs dir, as for the REST docs.

Remove removed queries/filters
-----
* Remove Constant Score Query with filter
* Remove Fuzzy Like This (Field) Query (flt and flt_field)
* Remove FilterBuilders

Move filters to queries
-----
* Move And Filter to And Query
* Move Bool Filter to Bool Query
* Move Exists Filter to Exists Query
* Move Geo Bounding Box Filter to Geo Bounding Box Query
* Move Geo Distance Filter to Geo Distance Query
* Move Geo Distance Range Filter to Geo Distance Range Query
* Move Geo Polygon Filter to Geo Polygon Query
* Move Geo Shape Filter to Geo Shape Query
* Move Has Child Filter to Has Child Query
* Move Has Parent Filter to Has Parent Query
* Move Ids Filter to Ids Query
* Move Limit Filter to Limit Query
* Move MatchAll Filter to MatchAll Query
* Move Missing Filter to Missing Query
* Move Nested Filter to Nested Query
* Move Not Filter to Not Query
* Move Or Filter to Or Query
* Move Range Filter to Range Query
* Move Term Filter to Term Query
* Move Terms Filter to Terms Query
* Move Type Filter to Type Query

Add missing queries
-----
* Add Common Terms Query
* Add Filtered Query
* Add Function Score Query
* Add Geohash Cell Query
* Add Regexp Query
* Add Script Query
* Add Simple Query String Query
* Add Span Containing Query
* Add Span Multi Term Query
* Add Span Within Query

Reorganize the documentation
-----
* Organize by full text queries
* Organize by term level queries
* Organize by compound queries
* Organize by joining queries
* Organize by geo queries
* Organize by specialized queries
* Organize by span queries
* Move Boosting Query
* Move DisMax Query
* Move Fuzzy Query
* Move Indices Query
* Move Match Query
* Move Mlt Query
* Move Multi Match Query
* Move Prefix Query
* Move Query String Query
* Move Span First Query
* Move Span Near Query
* Move Span Not Query
* Move Span Or Query
* Move Span Term Query
* Move Template Query
* Move Wildcard Query

Add some missing pages
-----
* Add multi get API
* Add indexed-scripts link

Also closes #7826
Related to https://github.com/elastic/elasticsearch/pull/11477#issuecomment-114745934
This commit is contained in:
parent
e429b8d190
commit
1e35674eb0
72 changed files with 1477 additions and 1271 deletions
122
docs/java-api/docs/bulk.asciidoc
Normal file
@ -0,0 +1,122 @@
[[java-docs-bulk]]
=== Bulk API

The bulk API allows one to index and delete several documents in a
single request. Here is a sample usage:

[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;

BulkRequestBuilder bulkRequest = client.prepareBulk();

// either use client#prepare, or use Requests# to directly build index/delete requests
bulkRequest.add(client.prepareIndex("twitter", "tweet", "1")
        .setSource(jsonBuilder()
                    .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "trying out Elasticsearch")
                    .endObject()
                  )
        );

bulkRequest.add(client.prepareIndex("twitter", "tweet", "2")
        .setSource(jsonBuilder()
                    .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "another post")
                    .endObject()
                  )
        );

BulkResponse bulkResponse = bulkRequest.get();
if (bulkResponse.hasFailures()) {
    // process failures by iterating through each bulk response item
}
--------------------------------------------------

[[java-docs-bulk-processor]]
=== Using Bulk Processor

The `BulkProcessor` class offers a simple interface to flush bulk operations automatically based on the number or size
of requests, or after a given period.

To use it, first create a `BulkProcessor` instance:

[source,java]
--------------------------------------------------
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

BulkProcessor bulkProcessor = BulkProcessor.builder(
        client, <1>
        new BulkProcessor.Listener() {
            @Override
            public void beforeBulk(long executionId,
                                   BulkRequest request) { ... } <2>

            @Override
            public void afterBulk(long executionId,
                                  BulkRequest request,
                                  BulkResponse response) { ... } <3>

            @Override
            public void afterBulk(long executionId,
                                  BulkRequest request,
                                  Throwable failure) { ... } <4>
        })
        .setBulkActions(10000) <5>
        .setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB)) <6>
        .setFlushInterval(TimeValue.timeValueSeconds(5)) <7>
        .setConcurrentRequests(1) <8>
        .build();
--------------------------------------------------
<1> Add your Elasticsearch client
<2> This method is called just before the bulk is executed. You can, for example, see the number of actions with
`request.numberOfActions()`
<3> This method is called after the bulk is executed. You can, for example, check whether there were any failing requests
with `response.hasFailures()`
<4> This method is called when the bulk failed and raised a `Throwable`
<5> We want to execute the bulk every 10 000 requests
<6> We want to flush the bulk every 1gb
<7> We want to flush the bulk every 5 seconds, regardless of the number of requests
<8> Set the number of concurrent requests. A value of 0 means that only a single request will be allowed to be
executed. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests.

Then you can simply add your requests to the `BulkProcessor`:

[source,java]
--------------------------------------------------
bulkProcessor.add(new IndexRequest("twitter", "tweet", "1").source(/* your doc here */));
bulkProcessor.add(new DeleteRequest("twitter", "tweet", "2"));
--------------------------------------------------

By default, `BulkProcessor`:

* sets bulkActions to `1000`
* sets bulkSize to `5mb`
* does not set flushInterval
* sets concurrentRequests to 1

When all documents are loaded to the `BulkProcessor`, it can be closed by using the `awaitClose` or `close` methods:

[source,java]
--------------------------------------------------
bulkProcessor.awaitClose(10, TimeUnit.MINUTES);
--------------------------------------------------

or

[source,java]
--------------------------------------------------
bulkProcessor.close();
--------------------------------------------------

Both methods flush any remaining documents and disable all other scheduled flushes if they were scheduled by setting
`flushInterval`. If concurrent requests were enabled, the `awaitClose` method waits for up to the specified timeout for
all bulk requests to complete and then returns `true`; if the specified waiting time elapses before all bulk requests complete,
`false` is returned. The `close` method doesn't wait for any remaining bulk requests to complete and exits immediately.

34
docs/java-api/docs/delete-by-query.asciidoc
Normal file
@ -0,0 +1,34 @@
[[java-docs-delete-by-query]]
=== Delete By Query API

The delete by query API allows one to delete documents from one or more
indices and one or more types based on a <<java-query-dsl,query>>.

It is available as a plugin, so you need to explicitly declare it in your project:

[source,xml]
--------------------------------------------------
<dependency>
    <groupId>org.elasticsearch.plugin</groupId>
    <artifactId>elasticsearch-delete-by-query</artifactId>
    <version>${es.version}</version>
</dependency>
--------------------------------------------------

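If you build with Gradle instead of Maven, the same coordinates can be declared as a dependency in your build script. This is a sketch, not part of the original docs; `esVersion` is a hypothetical property assumed to hold your Elasticsearch version:

[source,groovy]
--------------------------------------------------
dependencies {
    // same artifact as the Maven dependency above; esVersion is assumed
    // to be defined elsewhere in the build script
    compile "org.elasticsearch.plugin:elasticsearch-delete-by-query:${esVersion}"
}
--------------------------------------------------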
To use it from Java, you can do the following:

[source,java]
--------------------------------------------------
import static org.elasticsearch.index.query.QueryBuilders.*;

DeleteByQueryResponse response = client
    .prepareDeleteByQuery("test")          <1>
    .setQuery(termQuery("_type", "type1")) <2>
    .get();
--------------------------------------------------
<1> index name
<2> query

For more information on the delete by query operation, check out the
{ref}/docs-delete-by-query.html[delete_by_query API] docs.
37
docs/java-api/docs/delete.asciidoc
Normal file
@ -0,0 +1,37 @@
[[java-docs-delete]]
=== Delete API

The delete API allows one to delete a typed JSON document from a specific
index based on its id. The following example deletes the JSON document
from an index called twitter, under a type called tweet, with id `1`:

[source,java]
--------------------------------------------------
DeleteResponse response = client.prepareDelete("twitter", "tweet", "1").get();
--------------------------------------------------

For more information on the delete operation, check out the
{ref}/docs-delete.html[delete API] docs.


[[java-docs-delete-thread]]
==== Operation Threading

The delete API allows one to set the threading model with which the operation will be
performed when the actual execution of the API happens on the same
node (that is, when the API is executed on a shard that is allocated on the same
server).

The options are to execute the operation on a different thread, or to
execute it on the calling thread (note that the API is still async). By
default, `operationThreaded` is set to `true`, which means the operation
is executed on a different thread. Here is an example that sets it to
`false`:

[source,java]
--------------------------------------------------
DeleteResponse response = client.prepareDelete("twitter", "tweet", "1")
        .setOperationThreaded(false)
        .get();
--------------------------------------------------
36
docs/java-api/docs/get.asciidoc
Normal file
@ -0,0 +1,36 @@
[[java-docs-get]]
=== Get API

The get API allows one to get a typed JSON document from the index based on
its id. The following example gets a JSON document from an index called
twitter, under a type called tweet, with id `1`:

[source,java]
--------------------------------------------------
GetResponse response = client.prepareGet("twitter", "tweet", "1").get();
--------------------------------------------------

For more information on the get operation, check out the REST
{ref}/docs-get.html[get] docs.


[[java-docs-get-thread]]
==== Operation Threading

The get API allows one to set the threading model with which the operation will be
performed when the actual execution of the API happens on the same
node (that is, when the API is executed on a shard that is allocated on the same
server).

The options are to execute the operation on a different thread, or to
execute it on the calling thread (note that the API is still async). By
default, `operationThreaded` is set to `true`, which means the operation
is executed on a different thread. Here is an example that sets it to
`false`:

[source,java]
--------------------------------------------------
GetResponse response = client.prepareGet("twitter", "tweet", "1")
        .setOperationThreaded(false)
        .get();
--------------------------------------------------
193
docs/java-api/docs/index_.asciidoc
Normal file
@ -0,0 +1,193 @@
[[java-docs-index]]
=== Index API

The index API allows one to index a typed JSON document into a specific
index and make it searchable.


[[java-docs-index-generate]]
==== Generate JSON document

There are several different ways of generating a JSON document:

* Manually (aka do it yourself) using native `byte[]` or as a `String`

* Using a `Map` that will be automatically converted to its JSON
equivalent

* Using a third party library to serialize your beans, such as
http://wiki.fasterxml.com/JacksonHome[Jackson]

* Using built-in helpers such as `XContentFactory.jsonBuilder()`

Internally, each type is converted to `byte[]` (so a String is converted
to a `byte[]`). Therefore, if the object is in this form already, then
use it. The `jsonBuilder` is a highly optimized JSON generator that
directly constructs a `byte[]`.

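For instance, a document held as a JSON `String` can be turned into its `byte[]` form with plain JDK calls; this is a minimal sketch, independent of Elasticsearch, illustrating the conversion described above:

[source,java]
--------------------------------------------------
import java.nio.charset.StandardCharsets;

String json = "{\"user\":\"kimchy\"}";

// the String form is converted to its UTF-8 bytes before being sent
byte[] source = json.getBytes(StandardCharsets.UTF_8);

// if you already hold the document as byte[], pass it along as-is
--------------------------------------------------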
[[java-docs-index-generate-diy]]
===== Do It Yourself

Nothing really difficult here, but note that you will have to encode
dates according to the
{ref}/mapping-date-format.html[Date Format].

[source,java]
--------------------------------------------------
String json = "{" +
        "\"user\":\"kimchy\"," +
        "\"postDate\":\"2013-01-30\"," +
        "\"message\":\"trying out Elasticsearch\"" +
    "}";
--------------------------------------------------


[[java-docs-index-generate-using-map]]
===== Using Map

A `Map` is a key-value pair collection. It maps naturally onto a JSON structure:

[source,java]
--------------------------------------------------
Map<String, Object> json = new HashMap<String, Object>();
json.put("user", "kimchy");
json.put("postDate", new Date());
json.put("message", "trying out Elasticsearch");
--------------------------------------------------


[[java-docs-index-generate-beans]]
===== Serialize your beans

Elasticsearch already uses Jackson but shades it under the
`org.elasticsearch.common.jackson` package. +
So, you can add your own Jackson version in your `pom.xml` file or in
your classpath. See the http://wiki.fasterxml.com/JacksonDownload[Jackson
Download Page].

For example:

[source,xml]
--------------------------------------------------
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.1.3</version>
</dependency>
--------------------------------------------------

Then, you can start serializing your beans to JSON:

[source,java]
--------------------------------------------------
import com.fasterxml.jackson.databind.*;

// instantiate a JSON mapper
ObjectMapper mapper = new ObjectMapper(); // create once, reuse

// generate JSON
byte[] json = mapper.writeValueAsBytes(yourbeaninstance);
--------------------------------------------------


[[java-docs-index-generate-helpers]]
===== Use Elasticsearch helpers

Elasticsearch provides built-in helpers to generate JSON content.

[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;

XContentBuilder builder = jsonBuilder()
    .startObject()
        .field("user", "kimchy")
        .field("postDate", new Date())
        .field("message", "trying out Elasticsearch")
    .endObject();
--------------------------------------------------

Note that you can also add arrays with the `startArray(String)` and
`endArray()` methods. By the way, the `field` method +
accepts many object types. You can directly pass numbers, dates and even
other XContentBuilder objects.

If you need to see the generated JSON content, you can use the
`string()` method.

[source,java]
--------------------------------------------------
String json = builder.string();
--------------------------------------------------


[[java-docs-index-doc]]
==== Index document

The following example indexes a JSON document into an index called
twitter, under a type called tweet, with id `1`:

[source,java]
--------------------------------------------------
import static org.elasticsearch.common.xcontent.XContentFactory.*;

IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
        .setSource(jsonBuilder()
                    .startObject()
                        .field("user", "kimchy")
                        .field("postDate", new Date())
                        .field("message", "trying out Elasticsearch")
                    .endObject()
                  )
        .get();
--------------------------------------------------

Note that you can also index your documents as a JSON String and that you
don't have to give an ID:

[source,java]
--------------------------------------------------
String json = "{" +
        "\"user\":\"kimchy\"," +
        "\"postDate\":\"2013-01-30\"," +
        "\"message\":\"trying out Elasticsearch\"" +
    "}";

IndexResponse response = client.prepareIndex("twitter", "tweet")
        .setSource(json)
        .get();
--------------------------------------------------

The `IndexResponse` object will give you a report:

[source,java]
--------------------------------------------------
// Index name
String _index = response.getIndex();
// Type name
String _type = response.getType();
// Document ID (generated or not)
String _id = response.getId();
// Version (if it's the first time you index this document, you will get: 1)
long _version = response.getVersion();
// isCreated() is true if the document is a new one, false if it has been updated
boolean created = response.isCreated();
--------------------------------------------------

For more information on the index operation, check out the REST
{ref}/docs-index_.html[index] docs.


[[java-docs-index-thread]]
==== Operation Threading

The index API allows one to set the threading model with which the operation will be
performed when the actual execution of the API happens on the same
node (that is, when the API is executed on a shard that is allocated on the same
server).

The options are to execute the operation on a different thread, or to
execute it on the calling thread (note that the API is still asynchronous). By
default, `operationThreaded` is set to `true`, which means the operation
is executed on a different thread.
30
docs/java-api/docs/multi-get.asciidoc
Normal file
@ -0,0 +1,30 @@
[[java-docs-multi-get]]
=== Multi Get API

The multi get API allows one to get a list of documents based on their `index`, `type` and `id`:

[source,java]
--------------------------------------------------
MultiGetResponse multiGetItemResponses = client.prepareMultiGet()
    .add("twitter", "tweet", "1")           <1>
    .add("twitter", "tweet", "2", "3", "4") <2>
    .add("another", "type", "foo")          <3>
    .get();

for (MultiGetItemResponse itemResponse : multiGetItemResponses) { <4>
    GetResponse response = itemResponse.getResponse();
    if (response.isExists()) {                      <5>
        String json = response.getSourceAsString(); <6>
    }
}
--------------------------------------------------
<1> get by a single id
<2> or by a list of ids for the same index / type
<3> you can also get from another index
<4> iterate over the result set
<5> you can check if the document exists
<6> access the `_source` field

For more information on the multi get operation, check out the REST
{ref}/docs-multi-get.html[multi get] docs.

118
docs/java-api/docs/update.asciidoc
Normal file
@ -0,0 +1,118 @@
[[java-docs-update]]
=== Update API

You can either create an `UpdateRequest` and send it to the client:

[source,java]
--------------------------------------------------
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.index("index");
updateRequest.type("type");
updateRequest.id("1");
updateRequest.doc(jsonBuilder()
        .startObject()
            .field("gender", "male")
        .endObject());
client.update(updateRequest).get();
--------------------------------------------------

Or you can use the `prepareUpdate()` method:

[source,java]
--------------------------------------------------
client.prepareUpdate("ttl", "doc", "1")
        .setScript(new Script("ctx._source.gender = \"male\"", ScriptService.ScriptType.INLINE, null, null)) <1>
        .get();

client.prepareUpdate("ttl", "doc", "1")
        .setDoc(jsonBuilder() <2>
            .startObject()
                .field("gender", "male")
            .endObject())
        .get();
--------------------------------------------------
<1> Your script. It could also be a locally stored script name.
In that case, you'll need to use `ScriptService.ScriptType.FILE`.
<2> Document which will be merged into the existing one.

Note that you can't provide both `script` and `doc`.

[[java-docs-update-api-script]]
==== Update by script

The update API allows one to update a document based on a provided script:

[source,java]
--------------------------------------------------
UpdateRequest updateRequest = new UpdateRequest("ttl", "doc", "1")
        .script(new Script("ctx._source.gender = \"male\""));
client.update(updateRequest).get();
--------------------------------------------------


[[java-docs-update-api-merge-docs]]
==== Update by merging documents

The update API also supports passing a partial document, which will be merged into the existing document (simple
recursive merge, inner merging of objects, replacing core "keys/values" and arrays). For example:

[source,java]
--------------------------------------------------
UpdateRequest updateRequest = new UpdateRequest("index", "type", "1")
        .doc(jsonBuilder()
            .startObject()
                .field("gender", "male")
            .endObject());
client.update(updateRequest).get();
--------------------------------------------------


[[java-docs-update-api-upsert]]
==== Upsert

There is also support for `upsert`. If the document does not exist, the content of the `upsert`
element will be used to index the fresh doc:

[source,java]
--------------------------------------------------
IndexRequest indexRequest = new IndexRequest("index", "type", "1")
        .source(jsonBuilder()
            .startObject()
                .field("name", "Joe Smith")
                .field("gender", "male")
            .endObject());
UpdateRequest updateRequest = new UpdateRequest("index", "type", "1")
        .doc(jsonBuilder()
            .startObject()
                .field("gender", "male")
            .endObject())
        .upsert(indexRequest); <1>
client.update(updateRequest).get();
--------------------------------------------------
<1> If the document does not exist, the one in `indexRequest` will be added

If the document `index/type/1` already exists, after this operation we will have a document like:

[source,js]
--------------------------------------------------
{
    "name"  : "Joe Dalton",
    "gender": "male" <1>
}
--------------------------------------------------
<1> This field is added by the update request

If it does not exist, we will have a new document:

[source,js]
--------------------------------------------------
{
    "name"  : "Joe Smith",
    "gender": "male"
}
--------------------------------------------------