[[infer-service-google-ai-studio]]
=== Google AI Studio {infer} integration

.New API reference
[sidebar]
--
For the most up-to-date API details, refer to {api-es}/group/endpoint-inference[{infer-cap} APIs].
--

Creates an {infer} endpoint to perform an {infer} task with the `googleaistudio` service.

[discrete]
[[infer-service-google-ai-studio-api-request]]
==== {api-request-title}

`PUT /_inference/<task_type>/<inference_id>`

[discrete]
[[infer-service-google-ai-studio-api-path-params]]
==== {api-path-parms-title}

`<inference_id>`::
(Required, string)
include::inference-shared.asciidoc[tag=inference-id]

`<task_type>`::
(Required, string)
include::inference-shared.asciidoc[tag=task-type]
+
--
Available task types:

* `completion`,
* `text_embedding`.
--

[discrete]
[[infer-service-google-ai-studio-api-request-body]]
==== {api-request-body-title}

`chunking_settings`::
(Optional, object)
include::inference-shared.asciidoc[tag=chunking-settings]

`max_chunk_size`:::
(Optional, integer)
include::inference-shared.asciidoc[tag=chunking-settings-max-chunking-size]

`overlap`:::
(Optional, integer)
include::inference-shared.asciidoc[tag=chunking-settings-overlap]

`sentence_overlap`:::
(Optional, integer)
include::inference-shared.asciidoc[tag=chunking-settings-sentence-overlap]

`strategy`:::
(Optional, string)
include::inference-shared.asciidoc[tag=chunking-settings-strategy]

`service`::
(Required, string)
The type of service supported for the specified task type. In this case,
`googleaistudio`.

`service_settings`::
(Required, object)
include::inference-shared.asciidoc[tag=service-settings]
+
--
These settings are specific to the `googleaistudio` service.
--

`api_key`:::
(Required, string)
A valid API key for the Google Gemini API.

`model_id`:::
(Required, string)
The name of the model to use for the {infer} task.
You can find the supported models at https://ai.google.dev/gemini-api/docs/models/gemini[Gemini API models].

`rate_limit`:::
(Optional, object)
By default, the `googleaistudio` service sets the number of requests allowed per minute to `360`.
This helps to minimize the number of rate limit errors returned from Google AI Studio.
To modify this, set the `requests_per_minute` setting of this object in your service settings:
+
--
include::inference-shared.asciidoc[tag=request-per-minute-example]
--

[discrete]
[[inference-example-google-ai-studio]]
==== Google AI Studio service example

The following example shows how to create an {infer} endpoint called `google_ai_studio_completion` to perform a `completion` task type.

[source,console]
------------------------------------------------------------
PUT _inference/completion/google_ai_studio_completion
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "<model_id>"
    }
}
------------------------------------------------------------
// TEST[skip:TBD]
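
After the endpoint is created, you can run the `completion` task against it by sending input to the {infer} API.
The following request is an illustrative sketch that assumes the `google_ai_studio_completion` endpoint created above; the input text is only an example.

[source,console]
------------------------------------------------------------
POST _inference/completion/google_ai_studio_completion
{
    "input": "Summarize the benefits of vector search in one sentence."
}
------------------------------------------------------------
// TEST[skip:TBD]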
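
For `text_embedding` endpoints, the optional `chunking_settings` and `rate_limit` objects described above can also be included in the request.
The following sketch shows one possible configuration; the endpoint name `google_ai_studio_embeddings`, the chunking values, and the rate limit are illustrative examples rather than recommended defaults.

[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/google_ai_studio_embeddings
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "<api_key>",
        "model_id": "<model_id>",
        "rate_limit": {
            "requests_per_minute": 120
        }
    },
    "chunking_settings": {
        "strategy": "sentence",
        "max_chunk_size": 200,
        "sentence_overlap": 1
    }
}
------------------------------------------------------------
// TEST[skip:TBD]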