# Getting Started

This guide gets you from zero to first successful request in a couple of minutes using the OpenAI-compatible endpoints.

***

### 1. Create an API Key

1. Log in to your AlphaNeural Dashboard
2. Navigate to **API Keys**
3. Click **Create Key**
4. Copy your key (starts with `sk-...`)

> Treat your API key like a password. Never expose it publicly.

***

### 2. Set your environment variables

```bash
export ALPHANEURAL_API_KEY="YOUR_API_KEY"
export ALPHANEURAL_BASE_URL="https://proxy.alfnrl.io/v1"
```
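Before making a request, it can save a debugging round-trip to confirm both variables are actually visible to your shell. A minimal check (it reports presence only and never echoes the key itself):

```shell
# Sanity check: report whether each variable is set, without printing the key.
for v in ALPHANEURAL_API_KEY ALPHANEURAL_BASE_URL; do
  if [ -n "$(printenv "$v" 2>/dev/null)" ]; then
    echo "$v is set"
  else
    echo "$v is NOT set"
  fi
done
```

If either line reports `NOT set`, re-run the `export` commands in the same shell session you plan to call the API from.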

### 3. Make your first request (Chat Completions)

Chat Completions is the quickest end-to-end smoke test.

#### cURL Example

```bash
curl "$ALPHANEURAL_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3",
    "messages": [{"role":"user","content":"Hello AlphaNeural"}]
  }'
```

This endpoint follows the OpenAI Chat Completions shape.
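Since the response follows the standard Chat Completions shape, the assistant's text lives at `choices[0].message.content`. A minimal sketch of extracting it from a parsed response body (the field values below are illustrative, not real output):

```python
# Illustrative response body, trimmed to the fields most requests care about.
# Real values (id, content, token counts) will differ per request.
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "model": "qwen3",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello! How can I help?"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 4, "completion_tokens": 7, "total_tokens": 11},
}

# The assistant's reply is nested under the first choice.
reply = response["choices"][0]["message"]["content"]
print(reply)
```

The same path works whether you parse the cURL output with `jq` (`.choices[0].message.content`) or use an SDK.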

### 4. Use the OpenAI SDK (Python)

AlphaNeural is OpenAI-compatible, so you can point the OpenAI SDK at AlphaNeural by setting the base URL.

```python
from openai import OpenAI
import os

client = OpenAI(
  api_key=os.environ["ALPHANEURAL_API_KEY"],
  base_url=os.environ["ALPHANEURAL_BASE_URL"],
)

resp = client.chat.completions.create(
  model="qwen3",
  messages=[{"role":"user","content":"Write a haiku about routers"}],
)
print(resp.choices[0].message.content)
```

### 5. Discover available models

List models exposed by the proxy:

```bash
curl "$ALPHANEURAL_BASE_URL/models" \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY"
```

Model listing is available at `GET /v1/models`.
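The endpoint returns the OpenAI list shape, with each model under the `data` array. A sketch of pulling out the model ids from a parsed response (the ids below are illustrative; the real list depends on what the proxy exposes):

```python
# Illustrative /v1/models response in the OpenAI list shape.
# The model ids shown here are placeholders, not a guaranteed catalog.
models_response = {
    "object": "list",
    "data": [
        {"id": "qwen3", "object": "model", "owned_by": "alphaneural"},
        {"id": "qwen3-coder", "object": "model", "owned_by": "alphaneural"},
    ],
}

# Collect just the ids, which is what you pass as "model" in requests.
model_ids = [m["id"] for m in models_response["data"]]
print(model_ids)
```

Any id in this list can be used as the `model` field in a Chat Completions request.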
