POST /{workspace}/{project}/{prompt}/{environment}

curl --request POST \
  --url https://api.langtail.com/{workspace}/{project}/{prompt}/{environment} \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '{
  "variables": {
    "variableName": "Your Value"
  },
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "stream": true,
  "model": "<string>",
  "max_tokens": 123,
  "temperature": 123,
  "top_p": 123,
  "presence_penalty": 123,
  "frequency_penalty": 123,
  "template": [
    {
      "role": "assistant",
      "name": "<string>",
      "content": "<string>",
      "function_call": {
        "name": "<string>",
        "arguments": "<string>"
      },
      "tool_calls": [
        {
          "id": "<string>",
          "type": "function",
          "function": {
            "name": "<string>",
            "arguments": "<string>"
          }
        }
      ],
      "tool_choice": "auto",
      "tool_call_id": "<string>"
    }
  ],
  "tool_choice": "auto",
  "response_format": {
    "type": "json_object"
  },
  "user": "user_123",
  "doNotRecord": true,
  "metadata": {
    "my_identifier": "my-custom-ID"
  },
  "seed": 123
}'

{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I'm sorry, but I'm not able to fulfill this request."
      },
      "finish_reason": "stop"
    }
  ]
}

Headers

X-API-Key (string, required)
Your Langtail API Key

Path Parameters

workspace (string, required)
Your workspace URL slug

project (string, required)
Your project URL slug

prompt (string, required)
Your prompt URL slug

environment (string, required)
Your environment URL slug

Body

application/json
variables (object)
A mapping of variable names to their values. The values will be injected into your saved prompt template.
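
For example, if the saved prompt template contains a mustache-style placeholder (an assumption; the actual template is authored in Langtail), the value is substituted at invocation time:

# Assumed saved template message, authored in the Langtail UI:
#   { "role": "system", "content": "Reply in the style of {{variableName}}." }

curl --request POST \
  --url https://api.langtail.com/{workspace}/{project}/{prompt}/{environment} \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '{
    "variables": { "variableName": "a pirate" },
    "messages": [ { "role": "user", "content": "Hello" } ]
  }'

# The system message is rendered as "Reply in the style of a pirate."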

messages (object[])
Additional messages. These will be appended to the prompt template.

stream (boolean)
If true, partial completions are streamed back as server-sent events, following the OpenAI streaming format.
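
A minimal sketch of consuming the stream from the command line; curl's --no-buffer flag prints chunks as they arrive. The exact chunk shape shown in the comments is an assumption based on the OpenAI streaming format:

curl --no-buffer --request POST \
  --url https://api.langtail.com/{workspace}/{project}/{prompt}/{environment} \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '{
    "stream": true,
    "messages": [ { "role": "user", "content": "Hello" } ]
  }'

# Chunks arrive as "data:" lines carrying JSON deltas, e.g.
#   data: {"choices":[{"index":0,"delta":{"content":"Hi"}}]}
# and the stream ends with:
#   data: [DONE]
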
model (string)
Overrides the model of the deployed prompt.

max_tokens (number)
Overrides the max_tokens of the deployed prompt. The maximum number of tokens that can be generated in the completion.

temperature (number)
Overrides the temperature of the deployed prompt. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

top_p (number)
Overrides the top_p of the deployed prompt. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

presence_penalty (number)
Overrides the presence_penalty of the deployed prompt.

frequency_penalty (number)
Overrides the frequency_penalty of the deployed prompt.

template (object[])
Overrides the stored template messages with custom template messages.
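
A sketch of replacing the stored template for a single call; the system message here is illustrative:

curl --request POST \
  --url https://api.langtail.com/{workspace}/{project}/{prompt}/{environment} \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '{
    "template": [
      { "role": "system", "content": "You are a terse assistant. Answer in one sentence." }
    ],
    "messages": [ { "role": "user", "content": "What does top_p do?" } ]
  }'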

tool_choice
Overrides the tool choice of the deployed prompt.
Available options: auto, required, none

response_format (object)
Overrides the response format of the deployed prompt.
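
A sketch of forcing JSON output for one call. As in the OpenAI API, json_object mode generally requires that some message instruct the model to produce JSON; here that instruction is supplied via a custom template message (illustrative):

curl --request POST \
  --url https://api.langtail.com/{workspace}/{project}/{prompt}/{environment} \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '{
    "response_format": { "type": "json_object" },
    "template": [
      { "role": "system", "content": "Return a JSON object with a single key \"answer\"." }
    ],
    "messages": [ { "role": "user", "content": "What is 2 + 2?" } ]
  }'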

user (string)
A unique identifier representing your end-user.

doNotRecord (boolean)
If true, potentially sensitive data such as the prompt and response will not be recorded in the logs.

metadata (object)
Additional custom data that will be stored for this request.

seed (number)
A seed used to generate reproducible results.
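
A sketch combining the request-tracking fields: metadata attaches a custom identifier to the logged request, doNotRecord controls whether the prompt and response are logged, and seed makes repeated runs reproducible where the underlying model supports it. The values are illustrative:

curl --request POST \
  --url https://api.langtail.com/{workspace}/{project}/{prompt}/{environment} \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '{
    "messages": [ { "role": "user", "content": "Hello" } ],
    "user": "user_123",
    "metadata": { "my_identifier": "my-custom-ID" },
    "doNotRecord": false,
    "seed": 42
  }'

# Re-sending the same request with the same seed should yield
# (approximately) the same completion, subject to model support.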

Response

200 - application/json
id (string, required)
A unique identifier for the chat completion.

object (string, required)
The object type, which is always chat.completion.

created (number, required)
The Unix timestamp (in seconds) of when the chat completion was created.

model (string, required)
The model used for the chat completion.

choices (object[], required)
A list of chat completion choices. Can be more than one if n is greater than 1.