# Image Generation

### Create an image

`POST /v1/images/generations`

You can also find an Azure-style compatibility route in the proxy (`/openai/deployments/{model}/images/generations`). This docs page focuses on `/v1/images/generations`.

#### Request body

AlphaNeural follows the OpenAI Images API request shape. The most commonly used fields are:

* `model` (string). Image model ID to use.
* `prompt` (string). Text description of the image you want.
* `n` (integer). Number of images to generate.
* `size` (string). Output dimensions (supported values depend on the model).
* `quality` (string). Quality level (supported values depend on the model).
* `style` (string). Style hint (primarily for DALL·E models).
* `response_format` (string). `url` or `b64_json` (DALL·E models only; GPT image models always return base64 and do not accept this field).
* `user` (string). End-user identifier for abuse monitoring.

{% hint style="warning" %}
`response_format` is **not supported** for GPT image models in OpenAI’s API, because they always return base64-encoded image payloads.
{% endhint %}

{% hint style="info" %}
Model availability varies by workspace, team, and provider. Use `GET /v1/models` to discover image-capable models exposed by your AlphaNeural key.
{% endhint %}
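For instance, you can list the models visible to your key and pick out likely image models. A minimal sketch using `requests`; note the substring filter below is a heuristic of ours, not a documented capability flag:

```python
def list_image_models(data):
    """Heuristic filter: assume image models mention 'image' or 'dall-e' in their ID."""
    return [m["id"] for m in data if "image" in m["id"] or "dall-e" in m["id"]]

if __name__ == "__main__":
    import os
    import requests

    # List every model visible to this AlphaNeural key.
    resp = requests.get(
        "https://proxy.alfnrl.io/v1/models",
        headers={"Authorization": f"Bearer {os.environ['ALPHANEURAL_API_KEY']}"},
    )
    resp.raise_for_status()
    print(list_image_models(resp.json()["data"]))
```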

### Examples

#### cURL

```bash
curl https://proxy.alfnrl.io/v1/images/generations \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-1",
    "prompt": "A cute baby sea otter in a knitted hat, studio lighting",
    "size": "1024x1024",
    "n": 1
  }'
```

#### Python (OpenAI SDK)

```python
import base64, os
from openai import OpenAI

# Point the OpenAI SDK at the AlphaNeural proxy.
client = OpenAI(api_key=os.environ["ALPHANEURAL_API_KEY"],
                base_url="https://proxy.alfnrl.io/v1")

r = client.images.generate(model="gpt-image-1",
                           prompt="A tiny robot making espresso, cinematic",
                           size="1024x1024")

# GPT image models return base64-encoded image data.
with open("image.png", "wb") as f:
    f.write(base64.b64decode(r.data[0].b64_json))
```
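If you request multiple images (`n` > 1), each one arrives as a separate item in `data`. A minimal sketch for saving them all; the `save_images` helper and filename scheme are our own, not part of the API:

```python
import base64

def save_images(b64_list, prefix="image"):
    """Decode each base64 string and write it to <prefix>-<i>.png; return the filenames."""
    names = []
    for i, b64 in enumerate(b64_list):
        name = f"{prefix}-{i}.png"
        with open(name, "wb") as f:
            f.write(base64.b64decode(b64))
        names.append(name)
    return names

# With the SDK response above: save_images([item.b64_json for item in r.data])
```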

### Response

The response matches the OpenAI Images API. You will receive a `data` array with one item per generated image. Depending on the model, each item contains either:

* `b64_json` (base64-encoded image), or
* `url` (a temporary URL)

If you receive URLs, they are time-limited; in OpenAI’s reference behaviour, image URLs expire after about 60 minutes (see the [OpenAI Images API reference](https://platform.openai.com/docs/api-reference/images)).

Example (truncated):

```json
{
  "created": 1730000000,
  "data": [
    { "b64_json": "iVBORw0KGgoAAA..." }
  ]
}
```
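Because the shape depends on the model, a defensive client can accept either field. A minimal sketch; the `image_bytes` helper name is ours, not part of the API:

```python
import base64
import urllib.request

def image_bytes(item):
    """Return raw image bytes from one `data[]` item, whichever field it carries."""
    if item.get("b64_json"):
        return base64.b64decode(item["b64_json"])
    if item.get("url"):
        # URL responses are time-limited, so download promptly.
        with urllib.request.urlopen(item["url"]) as resp:
            return resp.read()
    raise ValueError("item has neither 'b64_json' nor 'url'")
```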

#### Decode `b64_json`

```python
import base64
import os

import requests

r = requests.post(
    "https://proxy.alfnrl.io/v1/images/generations",
    headers={
        "Authorization": f"Bearer {os.environ['ALPHANEURAL_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={"model": "gpt-image-1", "prompt": "A minimal owl logo", "size": "1024x1024"},
)
r.raise_for_status()

# Decode the base64 payload and write the PNG bytes to disk.
b64 = r.json()["data"][0]["b64_json"]
with open("out.png", "wb") as f:
    f.write(base64.b64decode(b64))
```
