# Introduction

{% hint style="info" %}
AlphaNeural uses an LLM proxy: one endpoint routes to many underlying LLM providers while keeping OpenAI-style request and response formats.
{% endhint %}

All OpenAI-style endpoints are served under:

```
https://proxy.alfnrl.io/v1
```

For example, chat completions:

```
POST https://proxy.alfnrl.io/v1/chat/completions
```

### Authentication

Authenticate with a bearer token:

* Header: `Authorization: Bearer <YOUR_API_KEY>`

### Compatibility

AlphaNeural follows the OpenAI API surface for the endpoints we expose. For example:

* Chat Completions: `POST /v1/chat/completions`
* Embeddings: `POST /v1/embeddings`
* Image generation: `POST /v1/images/generations`
* List models: `GET /v1/models`

That means you can typically keep the same payloads, streaming behaviour, and error handling you already use with OpenAI.
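Streaming follows the OpenAI pattern as well: with `stream=True`, the SDK yields chunks whose `choices[0].delta.content` carries incremental text. A minimal sketch of assembling those deltas into the full reply (the chunk shape mirrors the OpenAI SDK's streaming objects; the helper name is ours, not part of the API):

```python
def collect_stream(chunks):
    """Join the incremental text deltas from a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta content is typically None
            parts.append(delta)
    return "".join(parts)

# With the OpenAI SDK this would be used roughly as:
#   stream = client.chat.completions.create(
#       model="qwen3", messages=[...], stream=True)
#   print(collect_stream(stream))
```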

***

### Quickstart

#### cURL

```bash
curl https://proxy.alfnrl.io/v1/chat/completions \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3",
    "messages": [
      { "role": "user", "content": "Hello AlphaNeural" }
    ]
  }'
```

#### Python (OpenAI SDK)

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["ALPHANEURAL_API_KEY"], base_url="https://proxy.alfnrl.io/v1")
resp = client.chat.completions.create(model="qwen3", messages=[{"role": "user", "content": "Hello AlphaNeural"}])
print(resp.choices[0].message.content)
```

#### JavaScript/TypeScript (OpenAI SDK)

```js
import OpenAI from "openai";
const client = new OpenAI({ apiKey: process.env.ALPHANEURAL_API_KEY, baseURL: "https://proxy.alfnrl.io/v1" });
const resp = await client.chat.completions.create({ model: "qwen3", messages: [{ role: "user", content: "Hello AlphaNeural" }] });
console.log(resp.choices[0].message.content);
```

### Models

Use the models endpoint to see what is available to your API key:

```bash
curl https://proxy.alfnrl.io/v1/models \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY"
```

Your `model` string in requests should match one of the returned model IDs.
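The list response follows OpenAI's shape: a `data` array of objects, each with an `id` field. A small sketch of checking a requested model against that shape (the response body below is illustrative; the real list depends on your API key):

```python
def available_model_ids(models_response):
    """Extract model IDs from an OpenAI-style GET /v1/models response body."""
    return {m["id"] for m in models_response["data"]}

# Illustrative response body, not a real catalogue:
sample = {"object": "list", "data": [{"id": "qwen3", "object": "model"}]}
assert "qwen3" in available_model_ids(sample)
```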

### What you can build

* **Chat and agents** with tool calling and streaming (Chat Completions)
* **Embeddings** for search and RAG (Embeddings)
* **Image generation** (Images)
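Embeddings returned by `POST /v1/embeddings` are plain float vectors, so search and RAG typically rank candidates by cosine similarity. A minimal sketch in pure Python (the embedding call is shown only as a comment, since the embedding model ID available to your key comes from `GET /v1/models`):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# With the OpenAI SDK, the vectors would come from something like:
#   resp = client.embeddings.create(model="<embedding-model-id>", input=["query text"])
#   vec = resp.data[0].embedding
```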

### Next steps

* Chat Completions
* Embeddings
* Images
* Models
