Introduction

AlphaNeural provides a single, OpenAI-compatible API for chat, vision, embeddings, and image generation. If you have already integrated the OpenAI API, you can usually switch by updating the base URL and your API key.


AlphaNeural runs an LLM proxy, so one endpoint can route to many underlying LLM providers while keeping OpenAI-style request and response formats.

All OpenAI-style endpoints are served under:

https://proxy.alfnrl.io/v1

For example, chat completions:

POST https://proxy.alfnrl.io/v1/chat/completions

Authentication

Authenticate with a bearer token:

  • Header: Authorization: Bearer <YOUR_API_KEY>
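As a minimal sketch in Python (the key value is a placeholder you substitute with your own), the headers every request carries look like:

```python
API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real API key

# Every request to the proxy carries the same bearer-token header.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```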

Compatibility

AlphaNeural follows the OpenAI API surface for the endpoints we expose. For example:

  • Chat Completions: POST /v1/chat/completions

  • Embeddings: POST /v1/embeddings

  • Image generation: POST /v1/images/generations

  • List models: GET /v1/models

That means you can typically keep the same payloads, streaming behaviour, and error handling you already use with OpenAI.


Quickstart

You can call the API with plain HTTP or with the official OpenAI SDKs:

  • cURL

  • Python (OpenAI SDK)

  • JavaScript/TypeScript (OpenAI SDK)
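As a minimal sketch using only Python's standard library (the model ID below is a placeholder; use an ID returned by GET /v1/models), a chat completion request looks like:

```python
import json
import urllib.request

BASE_URL = "https://proxy.alfnrl.io/v1"
API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real API key

# Build an OpenAI-style chat completion request.
payload = {
    "model": "MODEL_ID",  # placeholder: use an ID returned by GET /v1/models
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Uncomment with a valid key to send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

With the official OpenAI SDKs the switch is typically just pointing the client at the proxy, e.g. `base_url="https://proxy.alfnrl.io/v1"` plus your API key when constructing the client.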

Models

Use the models endpoint (GET /v1/models) to see which models are available to your API key.

Your model string in requests should match one of the returned model IDs.
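A sketch of listing models with Python's standard library (the key is a placeholder):

```python
import json
import urllib.request

# A request with no body defaults to GET.
req = urllib.request.Request(
    "https://proxy.alfnrl.io/v1/models",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
)

# Uncomment with a valid key; the response is an OpenAI-style model list:
# with urllib.request.urlopen(req) as resp:
#     print([m["id"] for m in json.load(resp)["data"]])
```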

What you can build

  • Chat and agents with tool calling and streaming (Chat Completions)

  • Embeddings for search and RAG (Embeddings)

  • Image generation (Images)
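For instance, an embeddings request for search or RAG follows the same pattern as chat, with a different path and payload (model ID and key are placeholders):

```python
import json
import urllib.request

payload = {
    "model": "EMBEDDING_MODEL_ID",  # placeholder: use an ID from GET /v1/models
    "input": ["first document", "second document"],  # batch of texts to embed
}
req = urllib.request.Request(
    "https://proxy.alfnrl.io/v1/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
)
# In an OpenAI-style response, each item in the "data" array carries
# an "embedding" vector you can index for similarity search.
```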

Next steps

  • Chat Completions

  • Embeddings

  • Images

  • Models
