# Image generation (Azure-style deployments)

**Endpoint**

* `POST https://proxy.alfnrl.io/openai/deployments/{deployment}/images/generations`

Where:

* `{deployment}` is your Azure-style deployment identifier (what you would normally put in the Azure URL).

### Quickstart

```bash
curl https://proxy.alfnrl.io/openai/deployments/my-image-deployment/images/generations \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A product photo of a glass bottle on white background",
    "size": "1024x1024"
  }'
```
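The same request can be made from Python with nothing but the standard library. This is a minimal sketch, assuming the endpoint and headers shown in the curl example above; the helper names (`build_images_url`, `generate_image`) are illustrative, not part of any SDK.

```python
import json
import urllib.request

BASE = "https://proxy.alfnrl.io/openai/deployments"

def build_images_url(deployment: str) -> str:
    """Return the Azure-style, deployment-scoped images/generations URL."""
    return f"{BASE}/{deployment}/images/generations"

def generate_image(deployment: str, api_key: str, prompt: str,
                   size: str = "1024x1024") -> dict:
    """POST an image-generation request; note there is no `model` in the body,
    since the deployment is already named in the path."""
    req = urllib.request.Request(
        build_images_url(deployment),
        data=json.dumps({"prompt": prompt, "size": size}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```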

### Behaviour notes

* You usually **omit `model`** in the JSON body because the deployment is already specified in the path, Azure-style.
* Parameters like `size`, `quality`, `background`, `output_format` behave like OpenAI’s Images API (subject to model support).
* As with OpenAI and Azure OpenAI, GPT image models return base64-encoded images and do not support `response_format`.
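Because the images come back base64-encoded, you decode them client-side. A short sketch, assuming the response follows the OpenAI Images API shape (`{"data": [{"b64_json": "..."}]}`); the sample payload below is a stand-in, not a real image.

```python
import base64

def first_image_bytes(response: dict) -> bytes:
    """Decode the first base64-encoded image in an Images API response."""
    return base64.b64decode(response["data"][0]["b64_json"])

# Stand-in payload demonstrating the round trip (not real image data):
sample = {"data": [{"b64_json": base64.b64encode(b"fake-png-bytes").decode()}]}
image = first_image_bytes(sample)  # write these bytes to e.g. out.png
```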

{% hint style="info" %}
Use this route when you are migrating Azure OpenAI code or you want "deployment-first" routing. Otherwise, prefer the OpenAI-compatible endpoint for the cleanest portability.
{% endhint %}
