Newapi does not have a fixed model catalog. The models available to you depend entirely on which channels an administrator has configured. When a channel is added and assigned a list of model names, those names become callable through Newapi’s standard API endpoints.
Documentation Index
Fetch the complete documentation index at: https://doc.hitopen.com/llms.txt
Use this file to discover all available pages before exploring further.
How model availability works
When you call POST /v1/chat/completions with a model name, Newapi looks up which channels list that model, selects one based on priority and weight, and forwards your request. From your application’s perspective, you are calling a standard OpenAI-compatible endpoint — the upstream routing is transparent.
This design means:
- Model names are administrator-defined. Your admin decides which names are exposed and which upstream provider each name maps to.
- The same model name can be served by multiple providers. Newapi load-balances across channels automatically.
- Custom aliases are possible. An admin might configure fast-chat to route to a specific model on a specific provider.
If a model you need is not available, contact your Newapi administrator to have the corresponding channel configured.
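The routing described above can be sketched from the client side. The snippet below builds a standard OpenAI-shaped chat completions request; the base URL and token are placeholders for your own deployment, and the model name works the same whether it is a standard name or an admin-defined alias.

```python
import json

# Hypothetical values: replace with your Newapi deployment URL and token.
BASE_URL = "https://newapi.example.com"
API_KEY = "sk-your-newapi-token"

def build_chat_request(model, messages):
    """Build the URL, headers, and JSON body for a chat completions call.

    Newapi forwards this standard OpenAI-shaped request to whichever
    channel serves the given model name.
    """
    url = f"{BASE_URL}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

# The model name may be a standard name or an alias like "fast-chat";
# the request looks identical either way.
url, headers, body = build_chat_request(
    "gpt-4o", [{"role": "user", "content": "Hello"}]
)
```

Sending the request (for example with urllib or the official OpenAI SDK pointed at your Newapi base URL) is omitted here, since the endpoint and credentials depend on your deployment.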
Listing available models
Use the standard OpenAI GET /v1/models endpoint to retrieve the models your token can access.
The id field of each returned entry is the model name you pass in API requests. Use it exactly as shown — model names are case-sensitive.
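As a sketch, the response follows the standard OpenAI model-list shape, so extracting the callable names is a one-liner. The sample payload below is illustrative; the actual ids depend entirely on which channels your administrator has configured.

```python
import json

# Illustrative response in the standard OpenAI list format; real ids
# are determined by your administrator's channel configuration.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-4o", "object": "model", "owned_by": "custom"},
    {"id": "fast-chat", "object": "model", "owned_by": "custom"}
  ]
}
""")

# Collect the callable model names; pass these ids verbatim (they are
# case-sensitive) as the "model" field in later requests.
available = [entry["id"] for entry in sample["data"]]
```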
Model names and routing
Model names in Newapi work as follows:
- Standard names (e.g., gpt-4o, gemini-2.5-pro) map directly to the corresponding upstream model on the configured channel.
- Custom aliases (e.g., fast-chat, my-org-default) are defined by your administrator and can point to any model on any provider. Check with your admin if you see unfamiliar model names.
- Model names are scoped to your token. If your token has model restrictions, GET /v1/models only returns models that token is allowed to use.
Checking model pricing
To see the token pricing for each model, call the /api/pricing endpoint:
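A minimal sketch of that call, assuming a hypothetical deployment URL and token; the exact shape of the pricing response depends on your Newapi version, so only the request construction is shown.

```python
import urllib.request

# Hypothetical deployment URL and token; replace with your own.
BASE_URL = "https://newapi.example.com"
API_KEY = "sk-your-newapi-token"

# Build (but do not send) an authenticated GET request for the
# pricing endpoint.
req = urllib.request.Request(
    f"{BASE_URL}/api/pricing",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="GET",
)
# urllib.request.urlopen(req) would return the per-model pricing data;
# consult your deployment's API reference for the response fields.
```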
Next steps
- Channels — Understand how channels control which models are available
- Usage Logs — Track model usage and costs per request
- API Reference: Models — Full endpoint reference for GET /v1/models