# Embeddings

### Create embeddings

`POST /v1/embeddings`

#### Request body

**Required**

* `model` (string). The embedding model to use.

**Common**

* `input` (string or array of strings). The text to embed.
  * If you pass an array, the response contains one embedding per item, in the same order.

{% hint style="info" %}
The proxy supports OpenAI-style payloads. It also exposes a few proxy-only fields (below) for routing and reliability.
{% endhint %}

#### Basic example (single input)

```bash
curl https://proxy.alfnrl.io/v1/embeddings \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "The quick brown fox jumps over the lazy dog"
  }'
```

#### Batch example (multiple inputs)

```bash
curl https://proxy.alfnrl.io/v1/embeddings \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": [
      "Paris is the capital of France.",
      "Berlin is the capital of Germany."
    ]
  }'
```

#### Python (OpenAI SDK)

```python
from openai import OpenAI
import os
client = OpenAI(api_key=os.environ["ALPHANEURAL_API_KEY"], base_url="https://proxy.alfnrl.io/v1")
resp = client.embeddings.create(model="text-embedding-3-small", input=["hello", "world"])
print(len(resp.data), len(resp.data[0].embedding))
```

#### JavaScript/TypeScript (OpenAI SDK)

```js
import OpenAI from "openai";
const client = new OpenAI({ apiKey: process.env.ALPHANEURAL_API_KEY, baseURL: "https://proxy.alfnrl.io/v1" });
const resp = await client.embeddings.create({ model: "text-embedding-3-small", input: ["hello", "world"] });
console.log(resp.data.length, resp.data[0].embedding.length);
```

### Response

The response matches the OpenAI embeddings format. You receive a `data` array with one embedding per input, plus `usage` metadata.

Example (truncated):

```json
{
  "object": "list",
  "data": [
    { "object": "embedding", "index": 0, "embedding": [0.0123, -0.0456, 0.0789] }
  ],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 8, "total_tokens": 8 }
}
```
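Each item in `data` carries an `index`, so you can pair vectors back to their inputs even if you handle items out of order. A minimal sketch in Python, using a hard-coded, truncated response body for illustration:

```python
import json

# Truncated example response, shaped like the endpoint's output above.
# Items are deliberately out of order to show why `index` matters.
body = json.dumps({
    "object": "list",
    "data": [
        {"object": "embedding", "index": 1, "embedding": [0.3, 0.4]},
        {"object": "embedding", "index": 0, "embedding": [0.1, 0.2]},
    ],
    "model": "text-embedding-3-small",
    "usage": {"prompt_tokens": 8, "total_tokens": 8},
})

inputs = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
]

resp = json.loads(body)

# Sort by `index` so each vector lines up with the input that produced it.
vectors = [item["embedding"] for item in sorted(resp["data"], key=lambda d: d["index"])]
pairs = dict(zip(inputs, vectors))

print(pairs["Paris is the capital of France."])  # [0.1, 0.2]
```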

### Proxy-only options

Most teams do not need these. They exist to control proxy behaviour across multiple upstream providers.

* `timeout` (integer, default `600`). Request timeout in seconds.
* `caching` (boolean, default `false`). Enable proxy caching when configured.
* `user` (string). End-user identifier for tracing and abuse monitoring.
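
These fields ride in the same JSON body as the standard OpenAI fields. A sketch of a request combining them (the field values here are illustrative, not recommendations):

```bash
curl https://proxy.alfnrl.io/v1/embeddings \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "The quick brown fox jumps over the lazy dog",
    "timeout": 30,
    "caching": true,
    "user": "user-1234"
  }'
```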
