
OpenAI Compatibility

Cloudflare's AI Gateway offers an OpenAI-compatible /chat/completions endpoint that lets you reach multiple AI providers through a single URL, so you can switch between models without significant code changes.

Endpoint URL

https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat/chat/completions

Replace {account_id} and {gateway_id} with your Cloudflare account and gateway IDs.

Parameters

Switch providers by changing the model and apiKey parameters.

Specify the model using the {provider}/{model} format. For example:

  • openai/gpt-4o-mini
  • google-ai-studio/gemini-2.0-flash
  • anthropic/claude-3-haiku

Examples

OpenAI SDK

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_PROVIDER_API_KEY", // API key for the provider of the model you request
  // The OpenAI SDK appends /chat/completions to baseURL, so stop at /compat
  baseURL: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat",
});

const response = await client.chat.completions.create({
  model: "google-ai-studio/gemini-2.0-flash",
  messages: [{ role: "user", content: "What is Cloudflare?" }],
});

console.log(response.choices[0].message.content);
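To switch providers, change only the model and apiKey values; everything else stays the same. A minimal sketch, assuming an OpenAI key in place of the Google AI Studio key used above:

const openaiClient = new OpenAI({
  apiKey: "YOUR_OPENAI_API_KEY", // now an OpenAI key, matching the provider prefix below
  baseURL: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat",
});

const completion = await openaiClient.chat.completions.create({
  model: "openai/gpt-4o-mini", // same endpoint, different provider prefix
  messages: [{ role: "user", content: "What is Cloudflare?" }],
});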

cURL

curl -X POST https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/compat/chat/completions \
  --header 'Authorization: Bearer {provider_api_key}' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "google-ai-studio/gemini-2.0-flash",
    "messages": [
      {
        "role": "user",
        "content": "What is Cloudflare?"
      }
    ]
  }'
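Responses come back in the OpenAI chat completions format regardless of the underlying provider. A trimmed illustration of the shape (all field values here are placeholders, not actual output):

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "google-ai-studio/gemini-2.0-flash",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Cloudflare is a company that provides..."
      },
      "finish_reason": "stop"
    }
  ]
}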

Universal provider

You can also use this pattern with a Universal Endpoint.

index.ts
export interface Env {
AI: Ai;
}
export default {
async fetch(request: Request, env: Env) {
return env.AI.gateway("default").run({
provider: "compat",
endpoint: "chat/completions",
headers: {
authorization: "Bearer ",
},
query: {
model: "google-ai-studio/gemini-2.0-flash",
messages: [
{
role: "user",
content: "What is Cloudflare?",
},
],
},
});
},
};
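This example assumes a Workers AI binding named AI and a gateway named default. A minimal sketch of the corresponding wrangler.toml entry:

# Exposes the AI binding used as env.AI above
[ai]
binding = "AI"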

Supported Providers

The OpenAI-compatible endpoint supports models from the following providers: