# REST API Reference

One-ADN provides an OpenAI-compatible REST API, making it easy to integrate with existing applications and tools.

## Base URL

```
https://api.one-adn.io/v1
```

Or for local nodes:

```
http://localhost:7545/v1
```

## Authentication

All API requests require authentication using an API key:

```bash
curl -H "Authorization: Bearer YOUR_API_KEY" \
  https://api.one-adn.io/v1/models
```

See the Authentication guide for details.

## Endpoints

### Chat Completions

Create a chat completion with conversation context.

```http
POST /v1/chat/completions
```

#### Request Body

```json
{
  "model": "llama-3.1-70b",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000,
  "stream": false
}
```

#### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model | string | Yes | Model ID to use |
| messages | array | Yes | Array of message objects |
| temperature | number | No | Sampling temperature (0-2). Default: 1 |
| max_tokens | integer | No | Maximum tokens to generate |
| stream | boolean | No | Enable streaming. Default: false |
| top_p | number | No | Nucleus sampling parameter |
| frequency_penalty | number | No | Frequency penalty (-2 to 2) |
| presence_penalty | number | No | Presence penalty (-2 to 2) |

#### Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "llama-3.1-70b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 10,
    "total_tokens": 25
  }
}
```
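Most integrations only need two pieces of this response: the assistant's message text and the token usage. A minimal helper for pulling them out of the payload above (the `extract_reply` name is ours, illustrative only, not part of any SDK):

```python
def extract_reply(completion: dict) -> tuple[str, int]:
    """Return the assistant's text and the total token count from a chat completion."""
    text = completion["choices"][0]["message"]["content"]
    return text, completion["usage"]["total_tokens"]

# The sample response from above:
completion = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "model": "llama-3.1-70b",
    "choices": [
        {"index": 0,
         "message": {"role": "assistant",
                     "content": "Hello! How can I help you today?"},
         "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 15, "completion_tokens": 10, "total_tokens": 25},
}

text, total = extract_reply(completion)
print(text, total)  # Hello! How can I help you today? 25
```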

#### Streaming

Enable streaming for real-time responses:

```bash
curl -X POST https://api.one-adn.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-70b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```

The streaming response is delivered as Server-Sent Events, terminated by `data: [DONE]`:

```
data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":"Hello"}}]}
data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":"!"}}]}
data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":" How"}}]}
data: [DONE]
```
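Client code reassembles the reply by concatenating each chunk's `delta.content`. A self-contained sketch of that parsing step, run against the sample stream above (real code would read lines from the HTTP response instead of a list):

```python
import json

def parse_sse_line(line: str):
    """Extract the content delta from one SSE 'data:' line, or None."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta", {})
    return delta.get("content")

# The sample stream shown above:
lines = [
    'data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":"!"}}]}',
    'data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":" How"}}]}',
    'data: [DONE]',
]
reply = "".join(c for c in (parse_sse_line(l) for l in lines) if c)
print(reply)  # Hello! How
```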

### Completions (Legacy)

Generate text completions. This endpoint is retained for backward compatibility; prefer Chat Completions for new integrations.

```http
POST /v1/completions
```

#### Request Body

```json
{
  "model": "llama-3.1-70b",
  "prompt": "Write a haiku about",
  "max_tokens": 50,
  "temperature": 0.8
}
```

### List Models

Get the models available on the network.

```http
GET /v1/models
```

#### Response

```json
{
  "object": "list",
  "data": [
    {
      "id": "llama-3.1-70b",
      "object": "model",
      "created": 1704067200,
      "owned_by": "one-adn",
      "capabilities": {
        "chat": true,
        "completion": true,
        "embeddings": false
      }
    },
    {
      "id": "llama-3.1-8b",
      "object": "model",
      "created": 1704067200,
      "owned_by": "one-adn"
    }
  ]
}
```
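A client can use the `capabilities` block to route requests to a suitable model. A sketch that filters the sample response above by capability; note that some entries (like `llama-3.1-8b` above) omit `capabilities`, and since the document does not say what an absent block means, this sketch conservatively skips those models:

```python
def models_with(models_response: dict, capability: str) -> list[str]:
    """IDs of models whose capabilities block advertises `capability`.

    Models without a "capabilities" block are skipped (their support is unknown).
    """
    return [
        m["id"]
        for m in models_response["data"]
        if m.get("capabilities", {}).get(capability, False)
    ]

# The sample /v1/models response from above, trimmed to the relevant fields:
models_response = {
    "object": "list",
    "data": [
        {"id": "llama-3.1-70b", "object": "model",
         "capabilities": {"chat": True, "completion": True, "embeddings": False}},
        {"id": "llama-3.1-8b", "object": "model"},
    ],
}
print(models_with(models_response, "chat"))  # ['llama-3.1-70b']
```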

### Embeddings

Generate vector embeddings for text.

```http
POST /v1/embeddings
```

#### Request Body

```json
{
  "model": "text-embedding-ada-002",
  "input": "The quick brown fox jumps over the lazy dog"
}
```

#### Response

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023, -0.0091, 0.0147, ...]
    }
  ],
  "model": "text-embedding-ada-002",
  "usage": {
    "prompt_tokens": 9,
    "total_tokens": 9
  }
}
```
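Embedding vectors are typically compared with cosine similarity, e.g. to rank documents against a query. A self-contained sketch; the short vectors are made up for illustration (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative 3-dimensional vectors:
doc = [0.0023, -0.0091, 0.0147]
query = [0.0025, -0.0090, 0.0150]
print(round(cosine_similarity(doc, query), 4))
```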

## Error Handling

### Error Response Format

All errors are returned as a JSON body with this shape:

```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
```

### Error Codes

| Status | Code | Description |
|--------|------|-------------|
| 400 | invalid_request | Malformed request |
| 401 | invalid_api_key | Invalid or missing API key |
| 403 | insufficient_quota | Quota exceeded |
| 404 | model_not_found | Model not available |
| 429 | rate_limit_exceeded | Too many requests |
| 500 | internal_error | Server error |
| 503 | service_unavailable | Network congested |
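Only some of these statuses indicate transient conditions worth retrying; the 4xx request errors (other than 429) should be fixed, not retried. A hedged sketch of client-side handling based on the table above (the helper names are ours, not part of any SDK):

```python
# Statuses from the table above that indicate a transient condition.
RETRYABLE_STATUSES = {429, 500, 503}

def should_retry(status: int) -> bool:
    """True for transient failures (rate limit, server error, congestion)."""
    return status in RETRYABLE_STATUSES

def describe_error(body: dict) -> str:
    """Flatten the error-response format shown above into one log line."""
    err = body.get("error", {})
    return f"{err.get('code', 'unknown')}: {err.get('message', '')}"

body = {"error": {"message": "Invalid API key provided",
                  "type": "invalid_request_error",
                  "code": "invalid_api_key"}}
print(should_retry(401), describe_error(body))
# False invalid_api_key: Invalid API key provided
```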

## Rate Limits

| Tier | Requests/min | Tokens/min |
|------|--------------|------------|
| Free | 10 | 10,000 |
| Pro | 100 | 100,000 |
| Enterprise | 1000 | Unlimited |
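Clients that exceed their tier's limit receive a 429 `rate_limit_exceeded` error; a common response is exponential backoff with jitter before retrying. A minimal sketch, assuming a 0-based attempt counter; the base and cap values are arbitrary illustrative choices, not One-ADN recommendations:

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Seconds to wait before retry `attempt`: exponential growth with full jitter."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

# Ceiling of the delay window grows 0.5s, 1s, 2s, 4s, ... up to the cap.
for attempt in range(4):
    print(f"attempt {attempt}: sleep up to {min(30.0, 0.5 * 2 ** attempt)}s")
```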

## SDK Examples

Because the API is OpenAI-compatible, the official OpenAI SDKs work unchanged: point the base URL at One-ADN and use your One-ADN API key.

### JavaScript/TypeScript

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.one-adn.io/v1'
});

const response = await client.chat.completions.create({
  model: 'llama-3.1-70b',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
```

### Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.one-adn.io/v1"
)

response = client.chat.completions.create(
    model="llama-3.1-70b",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```

### cURL

```bash
curl -X POST https://api.one-adn.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-70b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```