Documentation Index

Fetch the complete documentation index at: https://doc.hitopen.com/llms.txt

Use this file to discover all available pages before exploring further.

Newapi does not have a fixed model catalog. The models available to you depend entirely on which channels an administrator has configured. When a channel is added and assigned a list of model names, those names become callable through Newapi’s standard API endpoints.

How model availability works

When you call POST /v1/chat/completions with a model name, Newapi looks up which channels list that model, selects one based on priority and weight, and forwards your request. From your application’s perspective, you are calling a standard OpenAI-compatible endpoint — the upstream routing is transparent. This design means:
  • Model names are administrator-defined. Your admin decides which names are exposed and which upstream provider each name maps to.
  • The same model name can be served by multiple providers. Newapi load-balances across channels automatically.
  • Custom aliases are possible. An admin might configure fast-chat to route to a specific model on a specific provider.
If a model you need is not available, contact your Newapi administrator to have the corresponding channel configured.
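The priority-and-weight selection above happens inside Newapi, but the general idea can be sketched as: consider only channels that list the requested model, restrict to the highest priority tier, then pick within that tier in proportion to weight. This is an illustrative sketch only; the Channel fields and the exact selection rule are assumptions, not Newapi's actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    models: list   # model names this channel serves
    priority: int  # higher priority tiers are tried first
    weight: int    # relative share within a priority tier

def pick_channel(channels, model):
    # Keep only channels that list the requested model name.
    candidates = [c for c in channels if model in c.models]
    if not candidates:
        raise LookupError(f"no channel serves {model!r}")
    # Restrict to the highest priority tier, then weight the choice.
    top = max(c.priority for c in candidates)
    tier = [c for c in candidates if c.priority == top]
    return random.choices(tier, weights=[c.weight for c in tier])[0]
```

From the caller's side none of this is visible: the request succeeds or fails exactly as a direct provider call would.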

Listing available models

Use the standard OpenAI GET /v1/models endpoint to retrieve the models your token can access.
curl "https://YOUR_NEWAPI_BASE_URL/v1/models" \
  -H "Authorization: Bearer YOUR_API_KEY"
The response follows the standard OpenAI format:
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o",
      "object": "model",
      "created": 1715000000,
      "owned_by": "openai"
    },
    {
      "id": "gemini-2.0-flash",
      "object": "model",
      "created": 1715000000,
      "owned_by": "google"
    }
  ]
}
The id field is the model name you pass in API requests. Use it exactly as shown — model names are case-sensitive.
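In application code, pulling the callable names out of that response is a one-liner over the data array. A minimal sketch, using the sample response above (the helper name is ours, not part of any SDK):

```python
def model_ids(models_response: dict) -> list:
    """Extract callable model names from a GET /v1/models response body."""
    return [m["id"] for m in models_response.get("data", [])]

# Using the sample response shown above:
sample = {
    "object": "list",
    "data": [
        {"id": "gpt-4o", "object": "model",
         "created": 1715000000, "owned_by": "openai"},
        {"id": "gemini-2.0-flash", "object": "model",
         "created": 1715000000, "owned_by": "google"},
    ],
}
print(model_ids(sample))  # ['gpt-4o', 'gemini-2.0-flash']
```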

Model names and routing

Model names in Newapi work as follows:
  • Standard names (e.g., gpt-4o, gemini-2.5-pro) map directly to the corresponding upstream model on the configured channel.
  • Custom aliases (e.g., fast-chat, my-org-default) are defined by your administrator and can point to any model on any provider. Check with your admin if you see unfamiliar model names.
  • Model names are scoped to your token. If your token has model restrictions, GET /v1/models only returns models that token is allowed to use.
Always retrieve the model list from /v1/models rather than hardcoding model names. This ensures your application works with whatever models are currently available on your Newapi instance.
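One way to follow that advice is to express a preference order in your application and resolve it against whatever /v1/models returns at runtime. A hedged sketch (the function and the preference list are illustrative, not a Newapi API):

```python
def choose_model(available: list, preferred: list) -> str:
    """Return the first preferred model that the token can actually use.

    `available` is the list of ids from GET /v1/models;
    `preferred` is your own ordered fallback list.
    """
    for name in preferred:
        if name in available:  # exact match: model names are case-sensitive
            return name
    raise LookupError("none of the preferred models are available")

# Example: prefer a custom alias, fall back to a standard name.
print(choose_model(["gpt-4o", "gemini-2.0-flash"],
                   ["fast-chat", "gpt-4o"]))  # gpt-4o
```

This keeps the application working even when an administrator swaps channels or renames aliases, as long as one acceptable model remains available.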

Checking model pricing

To see the token pricing for each model, call the /api/pricing endpoint:
curl "https://YOUR_NEWAPI_BASE_URL/api/pricing" \
  -H "Authorization: Bearer YOUR_API_KEY"
The response includes input and output token costs per model. Use this to estimate costs before running large workloads.
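Once you have per-token prices for a model, a pre-flight cost estimate is simple arithmetic. The sketch below assumes prices are quoted per million tokens, which is a common convention but not confirmed by the /api/pricing schema; adjust the divisor to match what your instance actually returns.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Estimate request cost from per-million-token prices (assumed unit)."""
    return (input_tokens * input_price
            + output_tokens * output_price) / 1_000_000

# E.g. 200k input and 50k output tokens at $2 / $8 per million tokens:
print(estimate_cost(200_000, 50_000, 2.0, 8.0))  # 0.8
```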

Next steps

  • Channels — Understand how channels control which models are available
  • Usage Logs — Track model usage and costs per request
  • API Reference: Models — Full endpoint reference for GET /v1/models