Embeddings

Embeddings turn text into vectors you can use for semantic search, clustering, recommendations, and RAG. The AlphaNeural proxy follows the same API shape as OpenAI’s Embeddings endpoint.

Create embeddings

POST /v1/embeddings

Request body

Required

  • model (string). The embedding model to use.

Common

  • input (string or array of strings). The text to embed.

    • If you pass an array, you will get one embedding per item, in the same order (see the sketch below).
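
For example, a request body like the one sketched below returns two embeddings, one per string; each entry in the response carries an index matching the item's position in the input array. The strings are placeholders.

{
  "model": "text-embedding-3-small",
  "input": [
    "First document to embed",
    "Second document to embed"
  ]
}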

Note: The proxy supports OpenAI-style payloads. It also exposes a few proxy-only fields (see Proxy-only options below) for routing and reliability.

Basic example (single input)

curl https://proxy.alfnrl.io/v1/embeddings \
  -H "Authorization: Bearer $ALPHANEURAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "The quick brown fox jumps over the lazy dog"
  }'

Because the request shape matches OpenAI's Embeddings endpoint, existing OpenAI clients work against the proxy once their base URL points at it; the batch examples below show this with the official SDK.

Batch example (multiple inputs)

Python (OpenAI SDK)
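
A minimal sketch using the openai Python package, assuming the proxy accepts the standard SDK when its base URL is set to the same host as the curl example above:

import os

from openai import OpenAI

# Point the standard OpenAI client at the AlphaNeural proxy.
client = OpenAI(
    api_key=os.environ["ALPHANEURAL_API_KEY"],
    base_url="https://proxy.alfnrl.io/v1",
)

documents = [
    "The quick brown fox jumps over the lazy dog",
    "Pack my box with five dozen liquor jugs",
    "How vexingly quick daft zebras jump",
]

# One request, one embedding per input string.
response = client.embeddings.create(
    model="text-embedding-3-small",
    input=documents,
)

for item in response.data:
    # item.index matches the position of the string in `documents`.
    print(item.index, len(item.embedding))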

JavaScript/TypeScript (OpenAI SDK)
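
The OpenAI Node/TypeScript SDK follows the same pattern: construct the client with baseURL set to the proxy and call embeddings.create with an array of strings.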

Response

The response matches the OpenAI embeddings format. You receive a data array with one embedding per input, plus usage metadata.

Example (truncated):
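
The body sketched below assumes a single input string sent to text-embedding-3-small; the vector is cut to three values for readability (the real vector has many more dimensions, 1536 for text-embedding-3-small) and the token counts are illustrative.

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023064255, -0.009327292, 0.0157973]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 9,
    "total_tokens": 9
  }
}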

Proxy-only options

Most teams do not need these. They exist to control proxy behaviour across multiple upstream providers; an example request follows the list.

  • timeout (integer, default 600). Request timeout in seconds.

  • caching (boolean, default false). Enable proxy caching when configured.

  • user (string). End-user identifier for tracing and abuse monitoring.
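
For illustration, the basic request body extended with all three proxy-only fields; the timeout and user values here are placeholders:

{
  "model": "text-embedding-3-small",
  "input": "The quick brown fox jumps over the lazy dog",
  "timeout": 30,
  "caching": true,
  "user": "user-1234"
}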
