Invoke Deployed Prompt
Get a completion for a stored prompt. The response format is the same as the OpenAI Chat Completion response.
Headers
Your Langtail API Key
Path Parameters
Your workspace URL slug
Your project URL slug
Your prompt URL slug
Your environment URL slug
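The four path parameters combine into the invoke URL. A minimal sketch in TypeScript, assuming the base URL https://api.langtail.com, this parameter order, and an X-API-Key header carrying your Langtail API Key; verify all of these against your Langtail workspace before use:

```typescript
// Hypothetical slugs for illustration; substitute your own.
const workspace = "my-workspace";
const project = "my-project";
const prompt = "my-prompt";
const environment = "production";

// Assumption: the invoke URL is built from the four path parameters in this order.
const url = `https://api.langtail.com/${workspace}/${project}/${prompt}/${environment}`;

const response = await fetch(url, {
  method: "POST",
  headers: {
    "X-API-Key": process.env.LANGTAIL_API_KEY ?? "", // Your Langtail API Key
    "Content-Type": "application/json",
  },
  body: JSON.stringify({}), // body parameters are described below
});
```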
Body
If true, potentially sensitive data like the prompt and response will not be recorded in the logs.
Overrides the frequency_penalty of the deployed prompt.
Overrides the max_tokens of the deployed prompt. The maximum number of tokens that can be generated in the completion.
Additional messages. These will be appended to the Prompt Template.
Additional custom data that will be stored for this request.
Overrides the model of the deployed prompt.
Overrides the presence_penalty of the deployed prompt.
Overrides the response_format of the deployed prompt.
A seed used to generate reproducible results.
Overrides the temperature of the deployed prompt. The sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Overrides the stored template messages with custom template messages.
Overrides the tool_choice of the deployed prompt. Allowed values: auto, required, none.
Overrides the top_p of the deployed prompt. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
A unique identifier representing your end-user.
A mapping of variable names to their values. These will be injected into your saved prompt template.
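A sketch of a request body combining several of the fields above. The snake_case override names follow the OpenAI-style parameters described on this page; keys such as doNotRecord, messages, metadata, and variables are assumptions where the exact name is not spelled out here, so check them against the schema above:

```typescript
// Assumed body shape; verify field names against the parameter list above.
const body = {
  doNotRecord: false,          // keep prompt and response in the logs
  messages: [                  // additional messages appended to the Prompt Template
    { role: "user", content: "Summarize the attached report." },
  ],
  variables: {                 // injected into your saved prompt template
    customerName: "Acme Corp",
  },
  metadata: { requestSource: "docs-example" }, // custom data stored for this request
  // Optional overrides of the deployed prompt:
  temperature: 0.2,
  max_tokens: 512,
  seed: 42,
  user: "user-1234",
};
```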
Response
A list of chat completion choices. Can be more than one if n is greater than 1.
The Unix timestamp (in seconds) of when the chat completion was created.
A unique identifier for the chat completion.
The model used for the chat completion.
The object type, which is always chat.completion.
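Because the response mirrors the OpenAI Chat Completion format, it can be read the same way as an OpenAI response. A sketch continuing the earlier request example:

```typescript
// Assumed shape: standard OpenAI Chat Completion response fields.
const completion = await response.json();

console.log(completion.id);      // unique identifier for the chat completion
console.log(completion.model);   // model used for the chat completion
console.log(completion.created); // Unix timestamp (in seconds) of creation
console.log(completion.object);  // always "chat.completion"

// choices can contain more than one entry if n is greater than 1
const answer = completion.choices[0]?.message?.content;
console.log(answer);
```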