FusionAI API
Call OpenAI and Anthropic using each provider's own REST paths, JSON fields, and response shapes. FusionAI routes your traffic to the right provider; you follow that provider's public API reference for everything inside the request and response.
Overview
FusionAI exposes HTTP endpoints under /api/providers/ followed by a provider name and the same path you would use on that provider's public API. There is no alternate JSON schema: a chat completion request looks exactly like OpenAI's documentation specifies when you use the OpenAI route, and like Anthropic's Messages API when you use the Anthropic route.
Successful responses use the same status codes, headers, and JSON (or streamed events) as the provider you selected. Validation errors, rate limits, and upstream outages surface as the provider defines them, so you can rely on the same client logic you would use against the provider directly.
Quick start
Pick the provider whose API you already follow. Use the FusionAI base URL, add /api/providers/openai/ or /api/providers/anthropic/, then paste the path from that vendor's docs. Send the same JSON body and headers (except any provider secret keys—FusionAI handles provider authentication).
# OpenAI — chat completion
curl -sS "https://www.fusionai.cloud/api/providers/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{"model":"gpt-4o","messages":[{"role":"user","content":"Hi"}]}'
# Anthropic — message
curl -sS "https://www.fusionai.cloud/api/providers/anthropic/v1/messages" \
-H "Content-Type: application/json" \
-d '{"model":"claude-sonnet-4-20250514","max_tokens":256,"messages":[{"role":"user","content":"Hi"}]}'Base URL
All examples on this page use the resolved base URL below. When you open this documentation in the browser, that URL matches the site you are viewing (for example http://localhost:3000 in local development). To pin a single canonical URL in every environment (staging, previews, production), set NEXT_PUBLIC_SITE_URL (no trailing slash), for example https://fusionai.cloud.
Site origin:
https://www.fusionai.cloud
API base (append /openai/… or /anthropic/…):
https://www.fusionai.cloud/api/providers
How paths work
After /api/providers/, use openai or anthropic, then append the path from that provider's documentation (including the usual v1/… segments). Query parameters on your request are passed through unchanged.
| FusionAI URL (path only) | Use when reading docs from |
|---|---|
| /api/providers/openai/v1/chat/completions | OpenAI — Chat Completions |
| /api/providers/openai/v1/embeddings | OpenAI — Embeddings |
| /api/providers/anthropic/v1/messages | Anthropic — Messages API |
If the provider segment is not openai or anthropic, FusionAI returns HTTP 400 with a short JSON error before any model call is made.
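The path rules above can be sketched as a small client-side helper. This is illustrative only (the helper name is invented, not part of FusionAI); it mirrors the routing contract by rejecting unknown provider segments early, the same condition FusionAI answers with HTTP 400.

```python
# Sketch of FusionAI URL construction; BASE matches this deployment's origin.
BASE = "https://www.fusionai.cloud/api/providers"
ALLOWED = {"openai", "anthropic"}

def fusionai_url(provider: str, path: str) -> str:
    """Join a provider segment and a provider-documented path onto the base."""
    if provider not in ALLOWED:
        # FusionAI itself returns HTTP 400 for unknown provider segments;
        # failing early client-side saves a round trip.
        raise ValueError(f"unknown provider: {provider!r}")
    return f"{BASE}/{provider}/{path.lstrip('/')}"
```

For example, `fusionai_url("openai", "v1/chat/completions")` yields the first URL in the table above; query parameters can be appended to the result unchanged.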
HTTP methods
FusionAI accepts the same HTTP methods the provider supports on each path: typically POST for completions, messages, and embeddings, and GET where the provider lists models or resources. The FusionAI entry points accept GET, HEAD, POST, PUT, PATCH, and DELETE.
For GET and HEAD, send no body. For other methods, send the body format required by the provider (usually JSON with Content-Type: application/json).
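The method and body rules can be sketched with the standard library alone. This is an assumption-free illustration of the rule above (attach a JSON body only for methods that take one), not FusionAI-provided client code:

```python
import json
import urllib.request

def build_request(url, method="POST", payload=None):
    """Build an HTTP request per the rules above: GET and HEAD carry no body;
    other methods send JSON with the matching Content-Type header."""
    body = None
    headers = {}
    if method not in ("GET", "HEAD") and payload is not None:
        body = json.dumps(payload).encode("utf-8")
        headers["Content-Type"] = "application/json"
    return urllib.request.Request(url, data=body, headers=headers, method=method)
```

Pass the result to `urllib.request.urlopen` (or translate the same rules into your HTTP library of choice).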
Authentication
FusionAI authenticates to OpenAI and Anthropic on your behalf. Do not embed OpenAI or Anthropic secret API keys in mobile apps, browser code, or customer-facing integrations. Send requests to FusionAI from environments you trust (your backend, workers, or private automation).
If your FusionAI workspace issues its own client credentials (API keys, tokens, or signed requests), include them exactly as your account onboarding describes. Those credentials authorize you to FusionAI; they are separate from the model providers' keys, which you should not place in client requests.
Request headers & body
Set the headers and JSON fields (or other payload types) exactly as the provider's documentation requires for the endpoint you are calling. Examples:
- Content-Type and Accept as specified for that route.
- OpenAI-specific headers such as Idempotency-Key or beta feature headers when their docs say they are supported.
- Multipart, audio, image, or binary payloads where the provider documents them—use the same structure and content types as you would against the provider directly.
FusionAI delivers your request to the selected provider without rewriting the body into another vendor's format. If you are unsure which fields apply, use the official provider API reference linked at the bottom of this page.
OpenAI routes
Prefix your path with /api/providers/openai/, then continue with OpenAI's documented path (for example v1/chat/completions). Models, tools, response formats, and streaming flags are controlled with OpenAI's request fields.
Non-streaming chat example:
curl -sS "https://www.fusionai.cloud/api/providers/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Summarize the term SSE in one sentence." }
],
"temperature": 0.2
}'
For tool calling, JSON mode, vision inputs, and other capabilities, follow OpenAI's request shapes. FusionAI does not rename or remap those fields.
Anthropic routes
Prefix with /api/providers/anthropic/, then use Anthropic's documented paths and JSON (for example the Messages API with max_tokens, content blocks, system prompts, and tools as Anthropic defines them—not OpenAI's chat completion shape).
curl -sS "https://www.fusionai.cloud/api/providers/anthropic/v1/messages" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-20250514",
"max_tokens": 256,
"system": "You are a concise assistant.",
"messages": [
{ "role": "user", "content": "What is 2+2?" }
]
}'
Advanced features (tool use, vision, extended thinking, and so on) use Anthropic's field names and structures only.
Tools & structured output
Function calling, tool definitions, JSON schema constraints, and structured responses differ between OpenAI and Anthropic. Use the route that matches the provider whose tool format you are implementing, and copy request fields from that provider's guide (tool blocks, parallel tool use, strict schemas, and so on). FusionAI does not convert tool definitions from one vendor to the other.
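To make the format difference concrete, here is the same hypothetical tool (the function name get_weather and its schema are invented for illustration) written once in OpenAI's chat-completions tool shape and once in Anthropic's Messages tool shape, per each vendor's published format:

```python
# OpenAI chat completions: tool wrapped in {"type": "function", "function": {...}}
# with the JSON Schema under "parameters".
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Anthropic Messages: flat tool object with the JSON Schema under "input_schema".
anthropic_tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

The schema content is identical; only the envelope differs, and FusionAI will not translate one envelope into the other for you.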
Embeddings & other endpoints
Any REST path the provider documents under their API base can be used the same way under the matching FusionAI prefix. Examples include embeddings, audio transcription, images, batch jobs, or future endpoints—always take the path and body from the provider's reference and prepend /api/providers/openai/ or /api/providers/anthropic/.
If an endpoint uses file uploads or non-JSON bodies, follow the provider's content type and multipart rules; FusionAI passes the payload through to the provider.
Streaming
When you enable streaming using the provider's documented parameters (for example "stream": true on OpenAI chat completions, or Anthropic's streaming options on Messages), FusionAI returns an incremental response in the same format and event sequence that provider uses—typically as a long-lived HTTP response you read incrementally from your HTTP client.
Configure client timeouts and retry behavior for long streams according to your infrastructure; very long streams can be affected by load balancers or corporate proxies the same way as a direct call to the provider would be.
Your HTTP library should expose the response as an incremental stream or line-oriented reader suitable for server-sent events when the provider uses that pattern; parse events using the same rules the provider documents.
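As a minimal sketch of that pattern, the following parses OpenAI-style server-sent event lines, where each event arrives as a `data: {...}` line and the stream ends with `data: [DONE]`. Real streams from either provider may carry other SSE fields (event:, id:) and different terminators; follow the provider's streaming reference for the full event grammar.

```python
import json

def iter_sse_data(lines):
    """Yield decoded JSON payloads from OpenAI-style 'data:' SSE lines,
    stopping at the 'data: [DONE]' sentinel."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank separators and non-data SSE fields
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```

Feed it the decoded lines of the streaming response body as they arrive.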
Responses & errors
When FusionAI successfully reaches the provider, the HTTP status, headers, and body (including streamed chunks) match what that provider returns—including 4xx and 5xx with the provider's error JSON when a call is rejected or fails upstream.
FusionAI may return its own small JSON error before a provider call in cases such as:
- 400 — invalid provider name in the URL.
- 503 — the service cannot reach the model provider for your account right now (for example, a configuration or availability problem). Retry later, or contact support if it persists.
For every other outcome, interpret status codes and error payloads using the provider's documentation.
Rate limits & reliability
OpenAI and Anthropic apply their own rate limits and may return HTTP 429 with retry guidance. Follow their recommendations for backoff and idempotency (where supported) as you would for a direct integration.
Where OpenAI documents idempotent retries (for example with an idempotency key header on supported routes), you may use the same headers on requests sent through FusionAI so the provider can deduplicate as designed.
FusionAI does not automatically retry failed provider calls unless your contract or product documentation says otherwise. Design your client for transient failures, especially for streaming and long-running requests.
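Since FusionAI does not retry for you, a client-side retry loop is one way to handle 429s and transient 5xx responses. This is a sketch under common backoff conventions (exponential delay with jitter), not FusionAI-provided behavior; where a provider's 429 response includes explicit retry guidance, prefer that.

```python
import random
import time

def with_backoff(call, *, attempts=4, base_delay=0.5, sleep=time.sleep,
                 retryable=(429, 500, 502, 503, 504)):
    """Invoke call() -> (status, body), retrying transient statuses with
    exponential backoff and jitter; returns the last (status, body)."""
    for attempt in range(attempts):
        status, body = call()
        if status not in retryable:
            return status, body
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))
    return status, body
```

The injectable `sleep` makes the loop easy to test; pair it with an idempotency key (where the provider supports one) so retried writes deduplicate.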
Security & privacy
Prompts, attachments, and metadata you send are relayed to the provider you select and are subject to that provider's terms and data handling. Use FusionAI only over TLS in production, limit which systems can call your FusionAI endpoints, and avoid sending regulated or highly sensitive data unless your compliance review allows it for that provider.
Exposing FusionAI model routes to the public internet without strong authentication can allow abuse and unexpected usage. Prefer server-to-server calls from your backend.
Client integration
Point your HTTP client at the provider API base for this deployment, then use the same paths and JSON as the official OpenAI or Anthropic SDKs would use against their hosts—only the origin and prefix change. Official SDKs may support a configurable base URL; set it to one of the following (no trailing slash on the base), and confirm path joining matches your SDK's expectations.
OpenAI via FusionAI:
https://www.fusionai.cloud/api/providers/openai
Anthropic via FusionAI:
https://www.fusionai.cloud/api/providers/anthropic
Calling from a browser is possible when your app and FusionAI share an origin or when your deployment allows cross-origin access; for most products, issuing calls from your server avoids exposing traffic to untrusted clients.
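The base URLs above can be wired into an SDK like so. The SDK snippets in the comments are an assumption (they rely on your SDK versions exposing a configurable base_url and on how each SDK joins paths — in particular, the OpenAI Python SDK's base_url conventionally includes the /v1 segment, while the Anthropic SDK appends /v1/… itself); verify against your SDK's own documentation.

```python
# Base URLs for pointing a client at this FusionAI deployment.
# No trailing slash, so SDK/client path joining works as expected.
OPENAI_BASE = "https://www.fusionai.cloud/api/providers/openai"
ANTHROPIC_BASE = "https://www.fusionai.cloud/api/providers/anthropic"

# Hypothetical SDK wiring (hedged -- check your SDK versions):
#   from openai import OpenAI
#   client = OpenAI(base_url=f"{OPENAI_BASE}/v1", api_key="placeholder")
#   from anthropic import Anthropic
#   client = Anthropic(base_url=ANTHROPIC_BASE, api_key="placeholder")
# FusionAI authenticates to the providers itself, so the key values here are
# placeholders to satisfy SDK constructors, not real provider secrets.
```

If your SDK cannot change its base URL, fall back to plain HTTP calls as shown in the curl examples above.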
What FusionAI does not change
- No translation between OpenAI and Anthropic request or response formats—you must use the correct provider section and payload for each call.
- No merged or abstracted "universal" model list inside the request body; use each provider's model identifiers.
- Billing, quotas, and analytics for your FusionAI workspace (if offered) are separate from this HTTP routing contract; see your account or support channel for commercial terms.
Official provider API references
Authoritative field lists, examples, and error formats:
- OpenAI API reference: https://platform.openai.com/docs/api-reference
- Anthropic API reference: https://docs.anthropic.com/en/api
Support
Questions about request fields, model behavior, or error messages that match a provider's documented format are usually best answered from that provider's own API reference and status pages. For FusionAI-specific access, billing, or routing issues, use the contact options listed on this site (for example the contact or support page for your deployment).