Azure OpenAI Chat

Azure’s OpenAI offering, powered by ChatGPT, extends beyond traditional OpenAI capabilities, delivering AI-driven text generation with enhanced functionality. Azure offers additional AI safety and responsible AI features, as highlighted in their recent updates.

Azure offers Java developers the opportunity to leverage AI’s full potential by integrating it with an array of Azure services, which includes AI-related resources such as Vector Stores on Azure.

Prerequisites

The Azure OpenAI client offers three options to connect: using an Azure API key, an OpenAI API key, or Microsoft Entra ID.

Azure API Key & Endpoint

To access models using an API key, obtain your Azure OpenAI endpoint and api-key from the Azure OpenAI Service section on the Azure Portal.

Spring AI defines two configuration properties:

  1. spring.ai.azure.openai.api-key: Set this to the value of the API Key obtained from Azure.

  2. spring.ai.azure.openai.endpoint: Set this to the endpoint URL obtained when provisioning your model in Azure.

You can set these configuration properties in your application.properties or application.yml file:

spring.ai.azure.openai.api-key=<your-azure-api-key>
spring.ai.azure.openai.endpoint=<your-azure-endpoint-url>

For enhanced security when handling sensitive information like API keys, you can use Spring Expression Language (SpEL) to reference custom environment variables:

# In application.yml
spring:
  ai:
    azure:
      openai:
        api-key: ${AZURE_OPENAI_API_KEY}
        endpoint: ${AZURE_OPENAI_ENDPOINT}
# In your environment or .env file
export AZURE_OPENAI_API_KEY=<your-azure-openai-api-key>
export AZURE_OPENAI_ENDPOINT=<your-azure-openai-endpoint-url>

OpenAI Key

To authenticate with the OpenAI service (not Azure), provide an OpenAI API key. This will automatically set the endpoint to https://api.openai.com/v1.

When using this approach, set the spring.ai.azure.openai.chat.options.deployment-name property to the name of the OpenAI model you wish to use.

In your application configuration:

spring.ai.azure.openai.openai-api-key=<your-azure-openai-key>
spring.ai.azure.openai.chat.options.deployment-name=<openai-model-name>

Using environment variables with SpEL:

# In application.yml
spring:
  ai:
    azure:
      openai:
        openai-api-key: ${AZURE_OPENAI_API_KEY}
        chat:
          options:
            deployment-name: ${AZURE_OPENAI_MODEL_NAME}
# In your environment or .env file
export AZURE_OPENAI_API_KEY=<your-openai-key>
export AZURE_OPENAI_MODEL_NAME=<openai-model-name>

Microsoft Entra ID

For keyless authentication using Microsoft Entra ID (formerly Azure Active Directory), set only the spring.ai.azure.openai.endpoint configuration property and not the api-key property mentioned above.

Finding only the endpoint property, your application will evaluate several different options for retrieving credentials, and an OpenAIClient instance will be created using the token credentials.

It is no longer necessary to create a TokenCredential bean; it is configured for you automatically.
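
For example, a minimal keyless configuration might look like the following (this sketch assumes the application runs with an identity — environment credentials, a managed identity, or an Azure CLI login — that has access to the Azure OpenAI resource):

```properties
# Microsoft Entra ID (keyless): set only the endpoint; do NOT set api-key.
# Credentials are resolved automatically via DefaultAzureCredential.
spring.ai.azure.openai.endpoint=${AZURE_OPENAI_ENDPOINT}
spring.ai.azure.openai.chat.options.deployment-name=gpt-4o
```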

Deployment Name

To use Azure AI applications, you need to create an Azure AI Deployment through the Azure AI Portal. In Azure, each client must specify a Deployment Name to connect to the Azure OpenAI service. It’s important to note that the Deployment Name is different from the model you choose to deploy. For example, a deployment named 'MyAiDeployment' could be configured to use either the GPT 3.5 Turbo model or the GPT 4.0 model.

To get started, follow these steps to create a deployment with the default settings:

Deployment Name: gpt-4o
Model Name: gpt-4o

This Azure configuration aligns with the default configurations of the Spring Boot Azure AI Starter and its Autoconfiguration feature. If you use a different Deployment Name, make sure to update the configuration property accordingly:

spring.ai.azure.openai.chat.options.deployment-name=<my deployment name>

The different deployment structures of Azure OpenAI and OpenAI lead to a property in the Azure OpenAI client library named deploymentOrModelName. This is because in OpenAI there is no Deployment Name, only a Model Name.

The property spring.ai.azure.openai.chat.options.model has been renamed to spring.ai.azure.openai.chat.options.deployment-name.

If you decide to connect to OpenAI instead of Azure OpenAI by setting the spring.ai.azure.openai.openai-api-key=<Your OpenAI Key> property, then spring.ai.azure.openai.chat.options.deployment-name is treated as an OpenAI model name.

Access the OpenAI Model

You can configure the client to use OpenAI directly instead of the Azure OpenAI deployed models. For this, set the spring.ai.azure.openai.openai-api-key=<Your OpenAI Key> property instead of spring.ai.azure.openai.api-key=<Your Azure OpenAI Key>.

Add Repositories and BOM

Spring AI artifacts are published in Maven Central and Spring Snapshot repositories. Refer to the Artifact Repositories section to add these repositories to your build system.

To help with dependency management, Spring AI provides a BOM (bill of materials) to ensure that a consistent version of Spring AI is used throughout the entire project. Refer to the Dependency Management section to add the Spring AI BOM to your build system.
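
For Maven, importing the BOM typically looks like the following sketch (the version property is a placeholder; use the Spring AI release you have standardized on):

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>${spring-ai.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```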

Auto-configuration

There has been a significant change in the artifact names of the Spring AI auto-configuration and starter modules. Please refer to the upgrade notes for more information.

Spring AI provides Spring Boot auto-configuration for the Azure OpenAI Chat Client. To enable it, add the following dependency to your project’s Maven pom.xml or Gradle build.gradle build file:

  • Maven

  • Gradle

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-azure-openai</artifactId>
</dependency>
dependencies {
    implementation 'org.springframework.ai:spring-ai-starter-model-azure-openai'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.

The Azure OpenAI Chat Client is created using the OpenAIClientBuilder provided by the Azure SDK. Spring AI allows you to customize the builder by providing AzureOpenAIClientBuilderCustomizer beans.

A customizer can be used, for example, to change the default response timeout:

@Configuration
public class AzureOpenAiConfig {

	@Bean
	public AzureOpenAIClientBuilderCustomizer responseTimeoutCustomizer() {
		return openAiClientBuilder -> {
			HttpClientOptions clientOptions = new HttpClientOptions()
					.setResponseTimeout(Duration.ofMinutes(5));
			openAiClientBuilder.httpClient(HttpClient.createDefault(clientOptions));
		};
	}

}

Chat Properties

The prefix spring.ai.azure.openai is the property prefix to configure the connection to Azure OpenAI.

Property Description Default

spring.ai.azure.openai.api-key

The Key from Azure AI OpenAI Keys and Endpoint section under Resource Management

-

spring.ai.azure.openai.endpoint

The endpoint from the Azure AI OpenAI Keys and Endpoint section under Resource Management

-

spring.ai.azure.openai.openai-api-key

(non Azure) OpenAI API key. Used to authenticate with the OpenAI service instead of Azure OpenAI. This automatically sets the endpoint to https://api.openai.com/v1. Use either the api-key or openai-api-key property. With this configuration, spring.ai.azure.openai.chat.options.deployment-name is treated as an OpenAI model name.

-

spring.ai.azure.openai.custom-headers

A map of custom headers to be included in the API requests. Each entry in the map represents a header, where the key is the header name and the value is the header value.

Empty map
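
Since custom-headers is a map, individual headers can be set as keys under the property prefix, for example (the header names below are purely illustrative):

```properties
# Each key under custom-headers becomes an HTTP header on API requests
spring.ai.azure.openai.custom-headers.x-custom-header=my-value
spring.ai.azure.openai.custom-headers.x-correlation-id=abc-123
```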

Enabling and disabling of the chat auto-configurations are now configured via top level properties with the prefix spring.ai.model.chat.

To enable it, set spring.ai.model.chat=azure-openai (it is enabled by default).

To disable it, set spring.ai.model.chat=none (or any value which doesn’t match azure-openai).

This change is done to allow configuration of multiple models.
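
In application.properties this looks like:

```properties
# Enable the Azure OpenAI chat model (this is the default)
spring.ai.model.chat=azure-openai

# Or disable chat auto-configuration entirely
#spring.ai.model.chat=none
```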

The prefix spring.ai.azure.openai.chat is the property prefix that configures the ChatModel implementation for Azure OpenAI.

Property Description Default

spring.ai.azure.openai.chat.enabled (Removed and no longer valid)

Enable Azure OpenAI chat model.

true

spring.ai.model.chat

Enable Azure OpenAI chat model.

azure-openai

spring.ai.azure.openai.chat.options.deployment-name

In use with Azure, this refers to the "Deployment Name" of your model, which you can find at https://oai.azure.com/portal. It’s important to note that within an Azure OpenAI deployment, the "Deployment Name" is distinct from the model itself. The confusion around these terms stems from the intention to make the Azure OpenAI client library compatible with the original OpenAI endpoint. The deployment structures offered by Azure OpenAI and OpenAI differ significantly, so this property specifies the deployment name to provide as part of the completions request.

gpt-4o

spring.ai.azure.openai.chat.options.maxTokens

The maximum number of tokens to generate.

-

spring.ai.azure.openai.chat.options.temperature

The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completions request as the interaction of these two settings is difficult to predict.

0.7

spring.ai.azure.openai.chat.options.topP

An alternative to sampling with temperature called nucleus sampling. This value causes the model to consider the results of tokens with the provided probability mass.

-

spring.ai.azure.openai.chat.options.logitBias

A map between GPT token IDs and bias scores that influences the probability of specific tokens appearing in a completions response. Token IDs are computed via external tokenizer tools, while bias scores reside in the range of -100 to 100 with minimum and maximum values corresponding to a full ban or exclusive selection of a token, respectively. The exact behavior of a given bias score varies by model.

-

spring.ai.azure.openai.chat.options.user

An identifier for the caller or end user of the operation. This may be used for tracking or rate-limiting purposes.

-

spring.ai.azure.openai.chat.options.stream-usage

(For streaming only) Set to add an additional chunk with token usage statistics for the entire request. The choices field for this chunk is an empty array and all other chunks will also include a usage field, but with a null value.

false

spring.ai.azure.openai.chat.options.n

The number of chat completions choices that should be generated for a chat completions response.

-

spring.ai.azure.openai.chat.options.stop

A collection of textual sequences that will end completions generation.

-

spring.ai.azure.openai.chat.options.presencePenalty

A value that influences the probability of generated tokens appearing based on their existing presence in generated text. Positive values will make tokens less likely to appear when they already exist and increase the model’s likelihood to output new topics.

-

spring.ai.azure.openai.chat.options.responseFormat

An object specifying the format that the model must output. Using AzureOpenAiResponseFormat.JSON enables JSON mode, which guarantees the message the model generates is valid JSON. Using AzureOpenAiResponseFormat.TEXT enables TEXT mode.

-

spring.ai.azure.openai.chat.options.frequencyPenalty

A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text. Positive values will make tokens less likely to appear as their frequency increases and decrease the likelihood of the model repeating the same statements verbatim.

-

spring.ai.azure.openai.chat.options.proxy-tool-calls

If true, the Spring AI will not handle the function calls internally, but will proxy them to the client. Then is the client’s responsibility to handle the function calls, dispatch them to the appropriate function, and return the results. If false (the default), the Spring AI will handle the function calls internally. Applicable only for chat models with function calling support

false

All properties prefixed with spring.ai.azure.openai.chat.options can be overridden at runtime by adding request-specific Runtime Options to the Prompt call.

Runtime Options

AzureOpenAiChatOptions.java provides model configurations, such as the model to use, the temperature, the frequency penalty, etc.

On start-up, the default options can be configured with the AzureOpenAiChatModel(api, options) constructor or the spring.ai.azure.openai.chat.options.* properties.

At runtime, you can override the default options by adding new, request-specific options to the Prompt call. For example, to override the default model and temperature for a specific request:

ChatResponse response = chatModel.call(
    new Prompt(
        "Generate the names of 5 famous pirates.",
        AzureOpenAiChatOptions.builder()
            .deploymentName("gpt-4o")
            .temperature(0.4)
        .build()
    ));

In addition to the model-specific AzureOpenAiChatOptions.java, you can use a portable ChatOptions instance, created with ChatOptionsBuilder#builder().
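
A brief sketch of using the portable options (the builder method names follow the ChatOptionsBuilder API and may differ slightly between Spring AI versions, so treat this as illustrative):

```java
// Portable options: no Azure-specific types, so the same code works
// against any ChatModel implementation
ChatOptions portableOptions = ChatOptionsBuilder.builder()
        .withTemperature(0.4)
        .withMaxTokens(200)
        .build();

ChatResponse response = chatModel.call(
        new Prompt("Generate the names of 5 famous pirates.", portableOptions));
```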

Function Calling

You can register custom Java functions with the AzureOpenAiChatModel and have the model intelligently choose to output a JSON object containing arguments to call one or many of the registered functions. This is a powerful technique to connect the LLM capabilities with external tools and APIs. Read more about Tool Calling.
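
As a brief, hedged sketch of the bean-based registration style (the WeatherRequest/WeatherResponse types and the weather scenario are illustrative; consult the Tool Calling documentation for the authoritative API):

```java
@Configuration
public class WeatherToolConfig {

    record WeatherRequest(String city) {}
    record WeatherResponse(double temperatureC) {}

    // Registered as a bean; the model can choose to call it by name,
    // and Spring AI dispatches the generated JSON arguments to it
    @Bean
    @Description("Get the current temperature in Celsius for a city")
    public Function<WeatherRequest, WeatherResponse> currentWeather() {
        return request -> new WeatherResponse(22.0); // call a real weather API here
    }
}
```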

Multimodal

Multimodality refers to a model’s ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats. Presently, the Azure OpenAI gpt-4o model offers multimodal support.

Azure OpenAI can incorporate a list of base64-encoded images or image URLs with the message. Spring AI’s Message interface facilitates multimodal AI models by introducing the Media type. This type encompasses data and details regarding media attachments in messages, utilizing Spring’s org.springframework.util.MimeType and a java.lang.Object for the raw media data.

Below is a code example excerpted from OpenAiChatModelIT.java, illustrating the fusion of user text with an image using the GPT_4_O model.

URL url = new URL("https://docs.spring.io/spring-ai/reference/_images/multimodal.test.png");
String response = ChatClient.create(chatModel).prompt()
        .options(AzureOpenAiChatOptions.builder().deploymentName("gpt-4o").build())
        .user(u -> u.text("Explain what do you see on this picture?").media(MimeTypeUtils.IMAGE_PNG, url))
        .call()
        .content();

You can pass multiple images as well.

It takes the multimodal.test.png image as input:

multimodal.test

along with the text message "Explain what do you see on this picture?", and generates a response like this:

This is an image of a fruit bowl with a simple design. The bowl is made of metal with curved wire edges that
create an open structure, allowing the fruit to be visible from all angles. Inside the bowl, there are two
yellow bananas resting on top of what appears to be a red apple. The bananas are slightly overripe, as
indicated by the brown spots on their peels. The bowl has a metal ring at the top, likely to serve as a handle
for carrying. The bowl is placed on a flat surface with a neutral-colored background that provides a clear
view of the fruit inside.

You can also pass in a classpath resource instead of a URL, as shown in the example below:

Resource resource = new ClassPathResource("multimodality/multimodal.test.png");

String response = ChatClient.create(chatModel).prompt()
    .options(AzureOpenAiChatOptions.builder()
        .deploymentName("gpt-4o").build())
    .user(u -> u.text("Explain what do you see on this picture?")
        .media(MimeTypeUtils.IMAGE_PNG, resource))
    .call()
    .content();

Sample Controller

Create a new Spring Boot project and add the spring-ai-starter-model-azure-openai to your pom (or gradle) dependencies.

src/main/resources 目录下添加一个 application.properties 文件,以启用和配置 OpenAI 聊天模型:

Add a application.properties file, under the src/main/resources directory, to enable and configure the OpenAi chat model:

spring.ai.azure.openai.api-key=YOUR_API_KEY
spring.ai.azure.openai.endpoint=YOUR_ENDPOINT
spring.ai.azure.openai.chat.options.deployment-name=gpt-4o
spring.ai.azure.openai.chat.options.temperature=0.7

Replace the api-key and endpoint values with your Azure OpenAI credentials.

This will create an AzureOpenAiChatModel implementation that you can inject into your classes. Here is an example of a simple @Controller class that uses the chat model for text generation.

@RestController
public class ChatController {

    private final AzureOpenAiChatModel chatModel;

    @Autowired
    public ChatController(AzureOpenAiChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/ai/generate")
    public Map generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", this.chatModel.call(message));
    }

    @GetMapping("/ai/generateStream")
    public Flux<ChatResponse> generateStream(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        Prompt prompt = new Prompt(new UserMessage(message));
        return this.chatModel.stream(prompt);
    }
}
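
Once the application is running, the endpoints can be exercised with curl (assuming the default port 8080):

```shell
curl "http://localhost:8080/ai/generate?message=Tell%20me%20a%20joke"
curl "http://localhost:8080/ai/generateStream?message=Tell%20me%20a%20joke"
```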

Manual Configuration

The AzureOpenAiChatModel implements the ChatModel and StreamingChatModel and uses the Azure OpenAI Java Client.

To enable it, add the spring-ai-azure-openai dependency to your project’s Maven pom.xml file:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-azure-openai</artifactId>
</dependency>

or to your Gradle build.gradle build file.

dependencies {
    implementation 'org.springframework.ai:spring-ai-azure-openai'
}
Refer to the Dependency Management section to add the Spring AI BOM to your build file.

The spring-ai-azure-openai dependency also provides access to the AzureOpenAiChatModel. For more information about the AzureOpenAiChatModel refer to the Azure OpenAI Chat section.

Next, create an AzureOpenAiChatModel instance and use it to generate text responses:

var openAIClientBuilder = new OpenAIClientBuilder()
  .credential(new AzureKeyCredential(System.getenv("AZURE_OPENAI_API_KEY")))
  .endpoint(System.getenv("AZURE_OPENAI_ENDPOINT"));

var openAIChatOptions = AzureOpenAiChatOptions.builder()
  .deploymentName("gpt-4o")
  .temperature(0.4)
  .maxTokens(200)
  .build();

var chatModel = AzureOpenAiChatModel.builder()
    .openAIClientBuilder(openAIClientBuilder)
    .defaultOptions(openAIChatOptions)
    .build();

ChatResponse response = chatModel.call(
  new Prompt("Generate the names of 5 famous pirates."));

// Or with streaming responses
Flux<ChatResponse> streamingResponses = chatModel.stream(
  new Prompt("Generate the names of 5 famous pirates."));

Note that gpt-4o here is actually the Deployment Name as presented in the Azure AI Portal.