OpenAI Chat

Spring AI supports the various AI language models from OpenAI, the company behind ChatGPT, which has been instrumental in sparking interest in AI-driven text generation thanks to its creation of industry-leading text generation models and embeddings.

Prerequisites

You will need to create an API key with OpenAI to access ChatGPT models.

Create an account at OpenAI signup page and generate the token on the API Keys page.

The Spring AI project defines a configuration property named spring.ai.openai.api-key that you should set to the value of the API Key obtained from openai.com.

You can set this configuration property in your application.properties file:

spring.ai.openai.api-key=<your-openai-api-key>

For enhanced security when handling sensitive information like API keys, you can use Spring Expression Language (SpEL) to reference a custom environment variable:

# In application.yml
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
# In your environment or .env file
export OPENAI_API_KEY=<your-openai-api-key>

You can also set this configuration programmatically in your application code:

// Retrieve API key from a secure source or environment variable
String apiKey = System.getenv("OPENAI_API_KEY");

Add Repositories and BOM

Spring AI artifacts are published in Maven Central and Spring Snapshot repositories. Refer to the Artifact Repositories section to add these repositories to your build system.

To help with dependency management, Spring AI provides a BOM (bill of materials) to ensure that a consistent version of Spring AI is used throughout the entire project. Refer to the Dependency Management section to add the Spring AI BOM to your build system.

Auto-configuration

There has been a significant change in the artifact names of the Spring AI auto-configuration and starter modules. Please refer to the upgrade notes for more information.

Spring AI provides Spring Boot auto-configuration for the OpenAI Chat Client. To enable it, add the following dependency to your project’s Maven pom.xml or Gradle build.gradle build file:

  • Maven

  • Gradle

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
dependencies {
    implementation 'org.springframework.ai:spring-ai-starter-model-openai'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.

Chat Properties

Retry Properties

The prefix spring.ai.retry is used as the property prefix that lets you configure the retry mechanism for the OpenAI chat model.

Property Description Default

spring.ai.retry.max-attempts

Maximum number of retry attempts.

10

spring.ai.retry.backoff.initial-interval

Initial sleep duration for the exponential backoff policy.

2 sec.

spring.ai.retry.backoff.multiplier

Backoff interval multiplier.

5

spring.ai.retry.backoff.max-interval

Maximum backoff duration.

3 min.

spring.ai.retry.on-client-errors

If false, throw a NonTransientAiException and do not attempt a retry for 4xx client error codes.

false

spring.ai.retry.exclude-on-http-codes

List of HTTP status codes that should not trigger a retry (e.g. to throw NonTransientAiException).

empty

spring.ai.retry.on-http-codes

List of HTTP status codes that should trigger a retry (e.g. to throw TransientAiException).

empty
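Taken together, these properties can be tuned in application.properties. The values below are purely illustrative, not recommendations (Spring Boot's relaxed binding also accepts duration strings such as 1s and 30s for the interval properties):

```properties
# Retry up to 5 times, doubling the backoff from 1s up to a 30s cap
spring.ai.retry.max-attempts=5
spring.ai.retry.backoff.initial-interval=1s
spring.ai.retry.backoff.multiplier=2
spring.ai.retry.backoff.max-interval=30s
# Do not retry 4xx client errors in general...
spring.ai.retry.on-client-errors=false
# ...but always retry 429 (rate limit), and never retry 400/401
spring.ai.retry.on-http-codes=429
spring.ai.retry.exclude-on-http-codes=400,401
```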

Connection Properties

The prefix spring.ai.openai is used as the property prefix that lets you connect to OpenAI.

Property Description Default

spring.ai.openai.base-url

The URL to connect to

https://api.openai.com

spring.ai.openai.api-key

The API Key

-

spring.ai.openai.organization-id

Optionally, you can specify which organization to use for an API request.

-

spring.ai.openai.project-id

Optionally, you can specify which project to use for an API request.

-

For users that belong to multiple organizations (or are accessing their projects through their legacy user API key), you can optionally specify which organization and project is used for an API request. Usage from these API requests will count as usage for the specified organization and project.
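For example, the connection properties above can be combined in application.properties (the organization and project IDs below are placeholders for your own values):

```properties
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.organization-id=<your-organization-id>
spring.ai.openai.project-id=<your-project-id>
```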

Configuration Properties

Enabling and disabling of the chat auto-configurations are now configured via top level properties with the prefix spring.ai.model.chat.

To enable, spring.ai.model.chat=openai (It is enabled by default)

To disable, spring.ai.model.chat=none (or any value which doesn’t match openai)

This change is done to allow configuration of multiple models.
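For example, to switch the chat auto-configuration off in application.yml (setting the value back to openai, the default, re-enables it):

```yaml
spring:
  ai:
    model:
      chat: none   # any value other than "openai" disables the OpenAI chat model
```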

The prefix spring.ai.openai.chat is the property prefix that lets you configure the chat model implementation for OpenAI.

Property Description Default

spring.ai.openai.chat.enabled (Removed and no longer valid)

Enable OpenAI chat model.

true

spring.ai.model.chat

Enable OpenAI chat model.

openai

spring.ai.openai.chat.base-url

Optional override for the spring.ai.openai.base-url property to provide a chat-specific URL.

-

spring.ai.openai.chat.completions-path

The path to append to the base URL.

/v1/chat/completions

spring.ai.openai.chat.api-key

Optional override for the spring.ai.openai.api-key to provide a chat-specific API Key.

-

spring.ai.openai.chat.organization-id

Optionally, you can specify which organization to use for an API request.

-

spring.ai.openai.chat.project-id

Optionally, you can specify which project to use for an API request.

-

spring.ai.openai.chat.options.model

Name of the OpenAI chat model to use. You can select between models such as: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, and more. See the models page for more information.

gpt-4o-mini

spring.ai.openai.chat.options.temperature

The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict.

0.8

spring.ai.openai.chat.options.frequencyPenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.

0.0f

spring.ai.openai.chat.options.logitBias

Modify the likelihood of specified tokens appearing in the completion.

-

spring.ai.openai.chat.options.maxTokens

(Deprecated in favour of maxCompletionTokens) The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length.

-

spring.ai.openai.chat.options.maxCompletionTokens

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

-

spring.ai.openai.chat.options.n

How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.

1

spring.ai.openai.chat.options.store

Whether or not to store the output of this chat completion request for use in OpenAI's model distillation or evals products.

false

spring.ai.openai.chat.options.metadata

Developer-defined tags and values used for filtering completions in the chat completion dashboard

empty map

spring.ai.openai.chat.options.output-modalities

Output types that you would like the model to generate for this request. Most models are capable of generating text, which is the default. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: text, audio. Not supported for streaming.

-

spring.ai.openai.chat.options.output-audio

Audio parameters for the audio generation. Required when audio output is requested with output-modalities: audio. Requires the gpt-4o-audio-preview model and is not supported for streaming completions.

-

spring.ai.openai.chat.options.presencePenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.

-

spring.ai.openai.chat.options.responseFormat.type

Compatible with GPT-4o, GPT-4o mini, GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. The JSON_OBJECT type enables JSON mode, which guarantees the message the model generates is valid JSON. The JSON_SCHEMA type enables Structured Outputs which guarantees the model will match your supplied JSON schema. The JSON_SCHEMA type requires setting the responseFormat.schema property as well.

-

spring.ai.openai.chat.options.responseFormat.name

Response format schema name. Applicable only for responseFormat.type=JSON_SCHEMA

custom_schema

spring.ai.openai.chat.options.responseFormat.schema

Response format JSON schema. Applicable only for responseFormat.type=JSON_SCHEMA

-

spring.ai.openai.chat.options.responseFormat.strict

Response format JSON schema adherence strictness. Applicable only for responseFormat.type=JSON_SCHEMA

-

spring.ai.openai.chat.options.seed

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

-

spring.ai.openai.chat.options.stop

Up to 4 sequences where the API will stop generating further tokens.

-

spring.ai.openai.chat.options.topP

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

-

spring.ai.openai.chat.options.tools

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.

-

spring.ai.openai.chat.options.toolChoice

Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. none is the default when no functions are present. auto is the default if functions are present.

-

spring.ai.openai.chat.options.user

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

-

spring.ai.openai.chat.options.functions

List of functions, identified by their names, to enable for function calling in a single prompt request. Functions with those names must exist in the functionCallbacks registry.

-

spring.ai.openai.chat.options.stream-usage

(For streaming only) Set to add an additional chunk with token usage statistics for the entire request. The choices field for this chunk is an empty array and all other chunks will also include a usage field, but with a null value.

false

spring.ai.openai.chat.options.parallel-tool-calls

Whether to enable parallel function calling during tool use.

true

spring.ai.openai.chat.options.http-headers

Optional HTTP headers to be added to the chat completion request. To override the api-key you need to use an Authorization header, and you have to prefix its value with the Bearer prefix.

-

spring.ai.openai.chat.options.proxy-tool-calls

If true, Spring AI will not handle the function calls internally but will proxy them to the client. It is then the client’s responsibility to handle the function calls, dispatch them to the appropriate function, and return the results. If false (the default), Spring AI will handle the function calls internally. Applicable only to chat models with function calling support.

false

You can override the common spring.ai.openai.base-url and spring.ai.openai.api-key for the ChatModel and EmbeddingModel implementations. The spring.ai.openai.chat.base-url and spring.ai.openai.chat.api-key properties, if set, take precedence over the common properties. This is useful if you want to use different OpenAI accounts for different models and different model endpoints.
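For example, to point only the chat model at a different endpoint and account while other models keep the common settings (the URL and the second key below are placeholders):

```properties
# Common settings, used by all OpenAI model implementations
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.base-url=https://api.openai.com

# Chat-specific overrides take precedence for the chat model only
spring.ai.openai.chat.api-key=${OPENAI_CHAT_API_KEY}
spring.ai.openai.chat.base-url=https://my-openai-compatible-proxy.example.com
```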

All properties prefixed with spring.ai.openai.chat.options can be overridden at runtime by adding request-specific Runtime Options to the Prompt call.

Runtime Options

The OpenAiChatOptions.java class provides model configurations such as the model to use, the temperature, the frequency penalty, etc.

On start-up, the default options can be configured with the OpenAiChatModel(api, options) constructor or the spring.ai.openai.chat.options.* properties.

At run-time, you can override the default options by adding new, request-specific options to the Prompt call. For example, to override the default model and temperature for a specific request:

ChatResponse response = chatModel.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        OpenAiChatOptions.builder()
            .model("gpt-4o")
            .temperature(0.4)
        .build()
    ));

In addition to the model specific OpenAiChatOptions you can use a portable ChatOptions instance, created with ChatOptionsBuilder#builder().

Function Calling

You can register custom Java functions with the OpenAiChatModel and have the OpenAI model intelligently choose to output a JSON object containing arguments to call one or many of the registered functions. This is a powerful technique to connect the LLM capabilities with external tools and APIs. Read more about Tool Calling.

Multimodal

Multimodality refers to a model’s ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats. OpenAI supports text, vision, and audio input modalities.

Vision

OpenAI models that offer vision multimodal support include gpt-4, gpt-4o, and gpt-4o-mini. Refer to the Vision guide for more information.

The OpenAI User Message API can incorporate a list of base64-encoded images or image urls with the message. Spring AI’s Message interface facilitates multimodal AI models by introducing the Media type. This type encompasses data and details regarding media attachments in messages, utilizing Spring’s org.springframework.util.MimeType and a org.springframework.core.io.Resource for the raw media data.

Below is a code example excerpted from OpenAiChatModelIT.java, illustrating the fusion of user text with an image using the gpt-4o model.

var imageResource = new ClassPathResource("/multimodal.test.png");

var userMessage = new UserMessage("Explain what do you see on this picture?",
        new Media(MimeTypeUtils.IMAGE_PNG, this.imageResource));

ChatResponse response = chatModel.call(new Prompt(this.userMessage,
        OpenAiChatOptions.builder().model(OpenAiApi.ChatModel.GPT_4_O.getValue()).build()));

Starting June 17, 2024, GPT_4_VISION_PREVIEW is available only to existing users of this model. If you are not an existing user, please use the GPT_4_O or GPT_4_TURBO models. More details here.

or the image URL equivalent using the gpt-4o model:

var userMessage = new UserMessage("Explain what do you see on this picture?",
        new Media(MimeTypeUtils.IMAGE_PNG,
                URI.create("https://docs.spring.io/spring-ai/reference/_images/multimodal.test.png")));

ChatResponse response = chatModel.call(new Prompt(this.userMessage,
        OpenAiChatOptions.builder().model(OpenAiApi.ChatModel.GPT_4_O.getValue()).build()));

You can pass multiple images as well.

The example shows a model taking as an input the multimodal.test.png image:

(Image: multimodal.test.png)

along with the text message "Explain what do you see on this picture?", and generating a response like this:

This is an image of a fruit bowl with a simple design. The bowl is made of metal with curved wire edges that
create an open structure, allowing the fruit to be visible from all angles. Inside the bowl, there are two
yellow bananas resting on top of what appears to be a red apple. The bananas are slightly overripe, as
indicated by the brown spots on their peels. The bowl has a metal ring at the top, likely to serve as a handle
for carrying. The bowl is placed on a flat surface with a neutral-colored background that provides a clear
view of the fruit inside.

Audio

OpenAI models that offer input audio multimodal support include gpt-4o-audio-preview. Refer to the Audio guide for more information.

The OpenAI User Message API can incorporate a list of base64-encoded audio files with the message. Spring AI’s Message interface facilitates multimodal AI models by introducing the Media type. This type encompasses data and details regarding media attachments in messages, utilizing Spring’s org.springframework.util.MimeType and a org.springframework.core.io.Resource for the raw media data. Currently, OpenAI supports only the following media types: audio/mp3 and audio/wav.

Below is a code example excerpted from OpenAiChatModelIT.java, illustrating the fusion of user text with an audio file using the gpt-4o-audio-preview model.

var audioResource = new ClassPathResource("speech1.mp3");

var userMessage = new UserMessage("What is this recording about?",
        List.of(new Media(MimeTypeUtils.parseMimeType("audio/mp3"), audioResource)));

ChatResponse response = chatModel.call(new Prompt(List.of(userMessage),
        OpenAiChatOptions.builder().model(OpenAiApi.ChatModel.GPT_4_O_AUDIO_PREVIEW).build()));

You can pass multiple audio files as well.

Output Audio

OpenAI models that offer output audio multimodal support include gpt-4o-audio-preview. Refer to the Audio guide for more information.

The OpenAI Assistant Message API can contain a list of base64-encoded audio files with the message. Spring AI’s Message interface facilitates multimodal AI models by introducing the Media type. This type encompasses data and details regarding media attachments in messages, utilizing Spring’s org.springframework.util.MimeType and a org.springframework.core.io.Resource for the raw media data. Currently, OpenAI supports only the following audio types: audio/mp3 and audio/wav.

Below is a code example, illustrating the response of user text along with an audio byte array, using the gpt-4o-audio-preview model:

var userMessage = new UserMessage("Tell me a joke about Spring Framework");

ChatResponse response = chatModel.call(new Prompt(List.of(userMessage),
        OpenAiChatOptions.builder()
            .model(OpenAiApi.ChatModel.GPT_4_O_AUDIO_PREVIEW)
            .outputModalities(List.of("text", "audio"))
            .outputAudio(new AudioParameters(Voice.ALLOY, AudioResponseFormat.WAV))
            .build()));

String text = response.getResult().getOutput().getContent(); // audio transcript

byte[] waveAudio = response.getResult().getOutput().getMedia().get(0).getDataAsByteArray(); // audio data

You have to specify an audio modality in the OpenAiChatOptions to generate audio output. The AudioParameters class provides the voice and audio format for the audio output.

Structured Outputs

OpenAI provides custom Structured Outputs APIs that ensure your model generates responses conforming strictly to your provided JSON Schema. In addition to the existing Spring AI model-agnostic Structured Output Converter, these APIs offer enhanced control and precision.

Currently, OpenAI supports a subset of the JSON Schema language format.

Configuration

Spring AI allows you to configure your response format either programmatically using the OpenAiChatOptions builder or through application properties.

Using the Chat Options Builder

You can set the response format programmatically with the OpenAiChatOptions builder as shown below:

String jsonSchema = """
        {
            "type": "object",
            "properties": {
                "steps": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "explanation": { "type": "string" },
                            "output": { "type": "string" }
                        },
                        "required": ["explanation", "output"],
                        "additionalProperties": false
                    }
                },
                "final_answer": { "type": "string" }
            },
            "required": ["steps", "final_answer"],
            "additionalProperties": false
        }
        """;

Prompt prompt = new Prompt("how can I solve 8x + 7 = -23",
        OpenAiChatOptions.builder()
            .model(ChatModel.GPT_4_O_MINI)
            .responseFormat(new ResponseFormat(ResponseFormat.Type.JSON_SCHEMA, this.jsonSchema))
            .build());

ChatResponse response = this.openAiChatModel.call(this.prompt);

Adhere to the OpenAI subset of the JSON Schema language format.

Integrating with BeanOutputConverter Utilities

You can leverage existing BeanOutputConverter utilities to automatically generate the JSON Schema from your domain objects and later convert the structured response into domain-specific instances:

  • Java

  • Kotlin

record MathReasoning(
    @JsonProperty(required = true, value = "steps") Steps steps,
    @JsonProperty(required = true, value = "final_answer") String finalAnswer) {

    record Steps(
        @JsonProperty(required = true, value = "items") Items[] items) {

        record Items(
            @JsonProperty(required = true, value = "explanation") String explanation,
            @JsonProperty(required = true, value = "output") String output) {
        }
    }
}

var outputConverter = new BeanOutputConverter<>(MathReasoning.class);

var jsonSchema = this.outputConverter.getJsonSchema();

Prompt prompt = new Prompt("how can I solve 8x + 7 = -23",
        OpenAiChatOptions.builder()
            .model(ChatModel.GPT_4_O_MINI)
            .responseFormat(new ResponseFormat(ResponseFormat.Type.JSON_SCHEMA, this.jsonSchema))
            .build());

ChatResponse response = this.openAiChatModel.call(this.prompt);
String content = this.response.getResult().getOutput().getContent();

MathReasoning mathReasoning = this.outputConverter.convert(this.content);
data class MathReasoning(
	val steps: Steps,
	@get:JsonProperty(value = "final_answer") val finalAnswer: String) {

	data class Steps(val items: Array<Items>) {

		data class Items(
			val explanation: String,
			val output: String)
	}
}

val outputConverter = BeanOutputConverter(MathReasoning::class.java)

val jsonSchema = outputConverter.jsonSchema;

val prompt = Prompt("how can I solve 8x + 7 = -23",
	OpenAiChatOptions.builder()
		.model(ChatModel.GPT_4_O_MINI)
		.responseFormat(ResponseFormat(ResponseFormat.Type.JSON_SCHEMA, jsonSchema))
		.build())

val response = openAiChatModel.call(prompt)
val content = response.getResult().getOutput().getContent()

val mathReasoning = outputConverter.convert(content)

Although this is optional for JSON Schema, OpenAI mandates required fields for the structured response to function correctly. Kotlin reflection is used to infer which properties are required based on the nullability of types and the default values of parameters, so for most use cases @get:JsonProperty(required = true) is not needed. @get:JsonProperty(value = "custom_name") can be useful to customize the property name. Make sure to generate the annotation on the related getters with this @get: syntax, see the related documentation.

Configuring via Application Properties

Alternatively, when using the OpenAI auto-configuration, you can configure the desired response format through the following application properties:

spring.ai.openai.api-key=YOUR_API_KEY
spring.ai.openai.chat.options.model=gpt-4o-mini

spring.ai.openai.chat.options.response-format.type=JSON_SCHEMA
spring.ai.openai.chat.options.response-format.name=MySchemaName
spring.ai.openai.chat.options.response-format.schema={"type":"object","properties":{"steps":{"type":"array","items":{"type":"object","properties":{"explanation":{"type":"string"},"output":{"type":"string"}},"required":["explanation","output"],"additionalProperties":false}},"final_answer":{"type":"string"}},"required":["steps","final_answer"],"additionalProperties":false}
spring.ai.openai.chat.options.response-format.strict=true

Sample Controller

Create a new Spring Boot project and add the spring-ai-starter-model-openai to your pom (or gradle) dependencies.

src/main/resources 目录下添加一个 application.properties 文件以启用和配置 OpenAI 聊天模型:

Add an application.properties file under the src/main/resources directory to enable and configure the OpenAI chat model:

spring.ai.openai.api-key=YOUR_API_KEY
spring.ai.openai.chat.options.model=gpt-4o
spring.ai.openai.chat.options.temperature=0.7

api-key 替换为您的 OpenAI 凭据。

Replace the api-key with your OpenAI credentials.

This will create an OpenAiChatModel implementation that you can inject into your classes. Here is an example of a simple @RestController class that uses the chat model for text generation.

@RestController
public class ChatController {

    private final OpenAiChatModel chatModel;

    @Autowired
    public ChatController(OpenAiChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/ai/generate")
    public Map<String,String> generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", this.chatModel.call(message));
    }

    @GetMapping("/ai/generateStream")
	public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return this.chatModel.stream(prompt);
    }
}

Manual Configuration

The OpenAiChatModel implements the ChatModel and StreamingChatModel and uses the Low-level OpenAiApi Client to connect to the OpenAI service.

Add the spring-ai-openai dependency to your project’s Maven pom.xml file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai</artifactId>
</dependency>

or to your Gradle build.gradle build file.

dependencies {
    implementation 'org.springframework.ai:spring-ai-openai'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.

Next, create an OpenAiChatModel and use it for text generation:

var openAiApi = OpenAiApi.builder()
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .build();
var openAiChatOptions = OpenAiChatOptions.builder()
            .model("gpt-3.5-turbo")
            .temperature(0.4)
            .maxTokens(200)
            .build();
var chatModel = new OpenAiChatModel(this.openAiApi, this.openAiChatOptions);

ChatResponse response = this.chatModel.call(
    new Prompt("Generate the names of 5 famous pirates."));

// Or with streaming responses
Flux<ChatResponse> streamResponse = this.chatModel.stream(
    new Prompt("Generate the names of 5 famous pirates."));

The OpenAiChatOptions provides the configuration information for the chat requests. The OpenAiApi.Builder and OpenAiChatOptions.Builder are fluent options-builders for API client and chat config respectively.

Low-level OpenAiApi Client

The OpenAiApi provides a lightweight Java client for the OpenAI Chat API.

The following class diagram illustrates the OpenAiApi chat interfaces and building blocks:

(Class diagram: OpenAiApi chat interfaces and building blocks)

Here is a simple snippet showing how to use the API programmatically:

OpenAiApi openAiApi = OpenAiApi.builder()
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .build();

ChatCompletionMessage chatCompletionMessage =
    new ChatCompletionMessage("Hello world", Role.USER);

// Sync request
ResponseEntity<ChatCompletion> response = this.openAiApi.chatCompletionEntity(
    new ChatCompletionRequest(List.of(this.chatCompletionMessage), "gpt-3.5-turbo", 0.8, false));

// Streaming request
Flux<ChatCompletionChunk> streamResponse = this.openAiApi.chatCompletionStream(
        new ChatCompletionRequest(List.of(this.chatCompletionMessage), "gpt-3.5-turbo", 0.8, true));

Refer to the OpenAiApi.java JavaDoc for further information.

Low-level API Examples

API Key Management

Spring AI provides flexible API key management through the ApiKey interface and its implementations. The default implementation, SimpleApiKey, is suitable for most use cases, but you can also create custom implementations for more complex scenarios.

Default Configuration

By default, Spring Boot auto-configuration will create an API key bean using the spring.ai.openai.api-key property:

spring.ai.openai.api-key=your-api-key-here

Custom API Key Configuration

You can create a custom instance of OpenAiApi with your own ApiKey implementation using the builder pattern:

ApiKey customApiKey = new ApiKey() {
    @Override
    public String getValue() {
        // Custom logic to retrieve API key
        return "your-api-key-here";
    }
};

OpenAiApi openAiApi = OpenAiApi.builder()
    .apiKey(customApiKey)
    .build();

// Create a chat client with the custom OpenAiApi instance
OpenAiChatClient chatClient = new OpenAiChatClient(openAiApi);

This is useful when you need to:

  • Retrieve the API key from a secure key store

  • Rotate API keys dynamically

  • Implement custom API key selection logic