

Text completions

The text completions endpoint generates a continuation of a prompt string. This is the legacy (non-chat) format; for new integrations, prefer the Chat Completions API, which uses a structured message list and is compatible with all modern models.

Endpoint

POST https://YOUR_NEWAPI_BASE_URL/v1/completions

Request parameters

model
string
required
The model ID to use. This endpoint is intended for models that support the legacy completions format.
prompt
string | string[]
required
The prompt to generate a completion for. You can pass a single string or an array of strings to generate completions for multiple prompts in one request.
max_tokens
integer
Maximum number of tokens to generate. The prompt tokens plus max_tokens must not exceed the model’s context length.
temperature
number
default:"1"
Sampling temperature between 0 and 2. Lower values produce more focused and deterministic output.
top_p
number
default:"1"
Nucleus sampling parameter. The model considers only the tokens comprising the top top_p probability mass.
n
integer
default:"1"
Number of completions to generate for each prompt.
stream
boolean
default:"false"
When true, tokens are streamed as Server-Sent Events as they are produced, ending with data: [DONE].
stop
string | string[]
Up to 4 sequences where generation stops. The model stops before producing any of these sequences.
suffix
string
Text that comes after the completion. The model generates text that fits between the prompt and the suffix, which is useful for fill-in-the-middle tasks.
echo
boolean
default:"false"
When true, the prompt is included at the beginning of the returned completion text.
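Putting the request parameters above together, a request body can be assembled as in this minimal sketch (the helper name and the model ID are illustrative, not part of the API):

```python
import json

def build_completion_request(model, prompt, *, max_tokens=None, stop=None,
                             n=1, temperature=1.0, stream=False):
    """Assemble a /v1/completions request body from the documented parameters."""
    # stop accepts at most 4 sequences, as a string or an array of strings.
    if stop is not None:
        stops = stop if isinstance(stop, list) else [stop]
        if len(stops) > 4:
            raise ValueError("stop accepts at most 4 sequences")
    body = {"model": model, "prompt": prompt, "n": n,
            "temperature": temperature, "stream": stream}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if stop is not None:
        body["stop"] = stop
    return json.dumps(body)

# prompt may be a single string or an array: here, two prompts in one request.
payload = build_completion_request(
    "gpt-3.5-turbo-instruct",
    ["Roses are red,", "Violets are blue,"],
    max_tokens=20,
    stop=["\n\n"],
)
```

Omitted optional parameters fall back to their server-side defaults.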

Response fields

id
string
Unique identifier for the completion.
object
string
Always "text_completion".
created
integer
Unix timestamp (seconds) when the completion was created.
model
string
The model that served the request.
choices
object[]
The list of generated completions. Each choice contains the generated text, its index, and a finish_reason.
usage
object
Token counts for the request: prompt_tokens, completion_tokens, and total_tokens.
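A non-streaming response with these fields can be unpacked like so (a sketch; the sample payload mirrors the shape documented above):

```python
def summarize_completion(resp):
    """Pull the generated text and total token usage out of a completions response."""
    texts = [choice["text"] for choice in resp["choices"]]
    usage = resp.get("usage", {})
    return texts, usage.get("total_tokens")

# A canned response in the documented shape:
resp = {
    "id": "cmpl-abc123",
    "object": "text_completion",
    "created": 1715000000,
    "model": "gpt-3.5-turbo-instruct",
    "choices": [
        {"text": " jumps over the lazy dog.", "index": 0,
         "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
}
texts, total = summarize_completion(resp)
```

With n > 1 or an array prompt, choices contains one entry per completion, so texts holds more than one string.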

Examples

curl -X POST "https://YOUR_NEWAPI_BASE_URL/v1/completions" \
  -H "Authorization: Bearer sk-your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "The quick brown fox",
    "max_tokens": 50,
    "temperature": 0.7
  }'
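The curl call above maps to this Python sketch using only the standard library (the base URL and token are placeholders, as in the curl example):

```python
import json
import urllib.request

# Placeholders: substitute your own base URL and API token.
BASE_URL = "https://YOUR_NEWAPI_BASE_URL"
API_TOKEN = "sk-your-token"

body = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "The quick brown fox",
    "max_tokens": 50,
    "temperature": 0.7,
}
req = urllib.request.Request(
    f"{BASE_URL}/v1/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# To send the request:
#     with urllib.request.urlopen(req) as resp:
#         result = json.load(resp)
```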

Example response

{
  "id": "cmpl-abc123",
  "object": "text_completion",
  "created": 1715000000,
  "model": "gpt-3.5-turbo-instruct",
  "choices": [
    {
      "text": " jumps over the lazy dog near the riverbank.",
      "index": 0,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 10,
    "total_tokens": 15
  }
}
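When stream is true, the response arrives as Server-Sent Events rather than a single JSON object like the one above. This sketch accumulates the streamed text, assuming each event's data field carries a JSON chunk with the same choices shape (the event lines here are canned for illustration):

```python
import json

def collect_stream_text(sse_lines):
    """Accumulate completion text from Server-Sent Event lines.

    Each event line looks like 'data: {...}'; the stream ends with
    the sentinel 'data: [DONE]'.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        parts.append(chunk["choices"][0]["text"])
    return "".join(parts)

events = [
    'data: {"choices": [{"text": "Hello", "index": 0}]}',
    'data: {"choices": [{"text": ", world", "index": 0}]}',
    "data: [DONE]",
]
result = collect_stream_text(events)
```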