VertexAI Gemini Chat

The Vertex AI Gemini API allows developers to build generative AI applications using the Gemini model. The Vertex AI Gemini API supports multimodal prompts as input and outputs text or code. A multimodal model is a model that is capable of processing information from multiple modalities, including images, videos, and text. For example, you can send the model a photo of a plate of cookies and ask it to give you a recipe for those cookies.

Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the Gemini 2.0 Flash and Gemini 2.0 Flash-Lite models. For specifications of the Vertex AI Gemini API models, see Model information.

Prerequisites

  • Install the gcloud CLI appropriate for your OS.

  • Authenticate by running the following command. Replace PROJECT_ID with your Google Cloud project ID and ACCOUNT with your Google Cloud username.

gcloud config set project <PROJECT_ID> &&
gcloud auth application-default login <ACCOUNT>

Auto-configuration

There has been a significant change in the artifact names of the Spring AI auto-configuration and starter modules. Please refer to the upgrade notes for more information.

Spring AI provides Spring Boot auto-configuration for the VertexAI Gemini Chat Client. To enable it, add the following dependency to your project’s Maven pom.xml or Gradle build.gradle build file:

  • Maven

  • Gradle

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-vertex-ai-gemini</artifactId>
</dependency>

dependencies {
    implementation 'org.springframework.ai:spring-ai-starter-model-vertex-ai-gemini'
}

Refer to the Dependency Management section to add the Spring AI BOM to your build file.

Chat Properties

Enabling and disabling of the chat auto-configurations are now configured via top level properties with the prefix spring.ai.model.chat.

To enable, set spring.ai.model.chat=vertexai (it is enabled by default).

To disable, set spring.ai.model.chat=none (or any value that does not match vertexai).

This change is done to allow configuration of multiple models.
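As an illustrative sketch, these switches could appear in application.properties like so (values other than the property names are examples, not requirements):

```properties
# Enable the Vertex AI Gemini chat model (this is the default)
spring.ai.model.chat=vertexai

# Or disable chat auto-configuration entirely:
# spring.ai.model.chat=none
```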

The prefix spring.ai.vertex.ai.gemini is used as the property prefix that lets you connect to VertexAI.

| Property | Description | Default |
|---|---|---|
| spring.ai.model.chat | Enable the chat model client | vertexai |
| spring.ai.vertex.ai.gemini.project-id | Google Cloud Platform project ID | - |
| spring.ai.vertex.ai.gemini.location | Region | - |
| spring.ai.vertex.ai.gemini.credentials-uri | URI to Vertex AI Gemini credentials. When provided, it is used to create a GoogleCredentials instance to authenticate with VertexAI. | - |
| spring.ai.vertex.ai.gemini.api-endpoint | Vertex AI Gemini API endpoint. | - |
| spring.ai.vertex.ai.gemini.scopes | | - |
| spring.ai.vertex.ai.gemini.transport | API transport. GRPC or REST. | GRPC |
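As a sketch, the connection-level properties above might be combined in application.properties as follows (the project ID and region are placeholder values, not defaults):

```properties
spring.ai.vertex.ai.gemini.project-id=my-gcp-project
spring.ai.vertex.ai.gemini.location=us-central1
# Optional: switch from the default GRPC transport to REST
spring.ai.vertex.ai.gemini.transport=REST
```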

The prefix spring.ai.vertex.ai.gemini.chat is the property prefix that lets you configure the chat model implementation for VertexAI Gemini Chat.

| Property | Description | Default |
|---|---|---|
| spring.ai.vertex.ai.gemini.chat.options.model | Supported Vertex AI Gemini Chat models include gemini-2.0-flash, gemini-2.0-flash-lite, and the new gemini-2.5-pro-preview-03-25 and gemini-2.5-flash-preview-04-17 models. | gemini-2.0-flash |
| spring.ai.vertex.ai.gemini.chat.options.response-mime-type | Output response MIME type of the generated candidate text. | text/plain: (default) text output, or application/json: JSON response |
| spring.ai.vertex.ai.gemini.chat.options.google-search-retrieval | Use the Google Search grounding feature. | true or false, default false |
| spring.ai.vertex.ai.gemini.chat.options.temperature | Controls the randomness of the output. Values can range over [0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied, while a value closer to 0.0 will typically result in less surprising responses from the model. This value specifies the default used by the backend when calling the model. | 0.7 |
| spring.ai.vertex.ai.gemini.chat.options.top-k | The maximum number of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Top-k sampling considers the set of the topK most probable tokens. | - |
| spring.ai.vertex.ai.gemini.chat.options.top-p | The maximum cumulative probability of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Nucleus sampling considers the smallest set of tokens whose probability sum is at least topP. | - |
| spring.ai.vertex.ai.gemini.chat.options.candidate-count | The number of generated response messages to return. This value must be in [1, 8], inclusive. Defaults to 1. | 1 |
| spring.ai.vertex.ai.gemini.chat.options.max-output-tokens | The maximum number of tokens to generate. | - |
| spring.ai.vertex.ai.gemini.chat.options.tool-names | List of tools, identified by their names, to enable for function calling in a single prompt request. Tools with those names must exist in the ToolCallback registry. | - |
| (deprecated by tool-names) spring.ai.vertex.ai.gemini.chat.options.functions | List of functions, identified by their names, to enable for function calling in a single prompt request. Functions with those names must exist in the functionCallbacks registry. | - |
| spring.ai.vertex.ai.gemini.chat.options.internal-tool-execution-enabled | If true, tool execution is performed internally; otherwise the model's tool-call response is returned to the user. Defaults to null; when null, ToolCallingChatOptions.DEFAULT_TOOL_EXECUTION_ENABLED (which is true) is used. | - |
| (deprecated by internal-tool-execution-enabled) spring.ai.vertex.ai.gemini.chat.options.proxy-tool-calls | If true, Spring AI will not handle the function calls internally, but will proxy them to the client. It is then the client’s responsibility to handle the function calls, dispatch them to the appropriate function, and return the results. If false (the default), Spring AI handles the function calls internally. Applicable only to chat models with function-calling support. | false |
| spring.ai.vertex.ai.gemini.chat.options.safety-settings | List of safety settings to control safety filters, as defined by Vertex AI Safety Filters. Each safety setting can have a method, threshold, and category. | - |
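To illustrate how the top-k and top-p options interact, here is a small self-contained Java sketch. It is not Spring AI or Vertex AI code; the class, method name, and probability values are made up for illustration of the combined top-k / nucleus filtering described above:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SamplingSketch {

    // Keep at most topK of the most probable tokens, then keep the smallest
    // prefix of them whose cumulative probability reaches topP. This mirrors
    // the "combined Top-k and nucleus sampling" described for the backend.
    static List<Double> filter(List<Double> probs, int topK, double topP) {
        List<Double> sorted = new ArrayList<>(probs);
        sorted.sort(Collections.reverseOrder());             // most probable first
        List<Double> kept = sorted.subList(0, Math.min(topK, sorted.size()));
        List<Double> result = new ArrayList<>();
        double mass = 0.0;
        for (double p : kept) {
            result.add(p);
            mass += p;
            if (mass >= topP) break;                         // nucleus cut-off
        }
        return result;
    }

    public static void main(String[] args) {
        // topK=3 keeps {0.5, 0.3, 0.1}; topP=0.7 then cuts after 0.5 + 0.3 >= 0.7.
        System.out.println(filter(List.of(0.5, 0.3, 0.1, 0.1), 3, 0.7)); // prints [0.5, 0.3]
    }
}
```

Lower topK or topP values restrict sampling to fewer, more probable tokens, which is why they reduce output variability in a way similar to lowering the temperature.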

All properties prefixed with spring.ai.vertex.ai.gemini.chat.options can be overridden at runtime by adding request-specific runtime options to the Prompt call.

Runtime options

The VertexAiGeminiChatOptions.java provides model configurations, such as the temperature, topK, etc.

On start-up, the default options can be configured with the VertexAiGeminiChatModel(api, options) constructor or the spring.ai.vertex.ai.gemini.chat.options.* properties.

At runtime, you can override the default options by adding new, request-specific options to the Prompt call. For example, to override the default temperature for a specific request:

ChatResponse response = chatModel.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        VertexAiGeminiChatOptions.builder()
            .temperature(0.4)
        .build()
    ));

In addition to the model-specific VertexAiGeminiChatOptions, you can use a portable ChatOptions instance, created with ChatOptionsBuilder#builder().

Tool Calling

The Vertex AI Gemini model supports tool calling (in Google Gemini context, it’s called function calling) capabilities, allowing models to use tools during conversations. Here’s an example of how to define and use @Tool-based tools:

public class WeatherService {

    @Tool(description = "Get the weather in location")
    public String weatherByLocation(@ToolParam(description= "City or state name") String location) {
        ...
    }
}

String response = ChatClient.create(this.chatModel)
        .prompt("What's the weather like in Boston?")
        .tools(new WeatherService())
        .call()
        .content();

You can also use java.util.function beans as tools:

@Bean
@Description("Get the weather in location. Return temperature in 36°F or 36°C format.")
public Function<Request, Response> weatherFunction() {
    return new MockWeatherService();
}

String response = ChatClient.create(this.chatModel)
        .prompt("What's the weather like in Boston?")
        .tools("weatherFunction")
        .inputType(Request.class)
        .call()
        .content();

Find more in the Tools documentation.

Multimodal

Multimodality refers to a model’s ability to simultaneously understand and process information from various (input) sources, including text, PDF, images, audio, and other data formats.

Image, Audio, Video

Google’s Gemini AI models support this capability by comprehending and integrating text, code, audio, images, and video. For more details, refer to the blog post Introducing Gemini.

Spring AI’s Message interface supports multimodal AI models by introducing the Media type. This type contains data and information about media attachments in messages, using Spring’s org.springframework.util.MimeType and a java.lang.Object for the raw media data.

Below is a simple code example extracted from VertexAiGeminiChatModelIT#multiModalityTest(), demonstrating the combination of user text with an image.

byte[] data = new ClassPathResource("/vertex-test.png").getContentAsByteArray();

var userMessage = new UserMessage("Explain what do you see on this picture?",
        List.of(new Media(MimeTypeUtils.IMAGE_PNG, data)));

ChatResponse response = chatModel.call(new Prompt(List.of(userMessage)));

PDF

The latest Vertex Gemini models provide support for PDF input types. Use the application/pdf media type to attach a PDF file to the message:

var pdfData = new ClassPathResource("/spring-ai-reference-overview.pdf");

var userMessage = new UserMessage(
        "You are a very professional document summarization specialist. Please summarize the given document.",
        List.of(new Media(new MimeType("application", "pdf"), pdfData)));

var response = this.chatModel.call(new Prompt(List.of(userMessage)));

Sample Controller

Create a new Spring Boot project and add the spring-ai-starter-model-vertex-ai-gemini to your pom (or gradle) dependencies.

Add an application.properties file under the src/main/resources directory to enable and configure the VertexAi chat model:

spring.ai.vertex.ai.gemini.project-id=PROJECT_ID
spring.ai.vertex.ai.gemini.location=LOCATION
spring.ai.vertex.ai.gemini.chat.options.model=gemini-2.0-flash
spring.ai.vertex.ai.gemini.chat.options.temperature=0.5

Replace project-id with your Google Cloud project ID, and location with a Google Cloud region such as us-central1, europe-west1, etc.

Each model has its own set of supported regions; you can find the list of supported regions on the model page.

For example, model=gemini-2.5-flash is currently available only in the us-central1 region, so you must set location=us-central1; see the model page Gemini 2.5 Flash - Supported Regions.

This will create a VertexAiGeminiChatModel implementation that you can inject into your class. Here is an example of a simple @Controller class that uses the chat model for text generation.

@RestController
public class ChatController {

    private final VertexAiGeminiChatModel chatModel;

    @Autowired
    public ChatController(VertexAiGeminiChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/ai/generate")
    public Map generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", this.chatModel.call(message));
    }

    @GetMapping("/ai/generateStream")
    public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return this.chatModel.stream(prompt);
    }
}

Manual Configuration

The VertexAiGeminiChatModel implements the ChatModel interface and uses VertexAI to connect to the Vertex AI Gemini service.

Add the spring-ai-vertex-ai-gemini dependency to your project’s Maven pom.xml file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-vertex-ai-gemini</artifactId>
</dependency>

or to your Gradle build.gradle build file.

dependencies {
    implementation 'org.springframework.ai:spring-ai-vertex-ai-gemini'
}

Refer to the Dependency Management section to add the Spring AI BOM to your build file.

Next, create a VertexAiGeminiChatModel and use it for text generation:

VertexAI vertexApi = new VertexAI(projectId, location);

var chatModel = new VertexAiGeminiChatModel(vertexApi,
    VertexAiGeminiChatOptions.builder()
        .model(ChatModel.GEMINI_2_0_FLASH)
        .temperature(0.4)
        .build());

ChatResponse response = chatModel.call(
    new Prompt("Generate the names of 5 famous pirates."));

The VertexAiGeminiChatOptions provides the configuration information for the chat requests. The VertexAiGeminiChatOptions.Builder is a fluent options builder.

Low-level Java Client

The following class diagram illustrates the Vertex AI Gemini native Java API:

(Class diagram: Vertex AI Gemini native Java API)