TypeScript SDK
Langtail’s TypeScript SDK wraps OpenAI’s API client so that it can be used seamlessly with Langtail. You can store the entire prompt in Langtail and reference it by prompt slug and environment. By routing requests through Langtail, you get logs, metrics, and many other benefits.
For more, check out the GitHub repository.
Installation
Use NPM or your package manager of choice to install the langtail package:
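For example, with npm:

```shell
npm install langtail
```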
Usage
Here are the different ways you can use the SDK.
1. Invoke a prompt that’s deployed on Langtail (recommended)
To invoke a deployed prompt, you can use lt.prompts.invoke like this:
import { LangtailNode } from 'langtail'

const lt = new LangtailNode({
  apiKey: '<LANGTAIL_API_KEY>',
})

const deployedPromptCompletion = await lt.prompts.invoke({
  prompt: '<PROMPT_SLUG>', // required
  environment: 'staging',
  variables: {
    about: 'cowboy Bebop',
  },
}) // results in an OpenAI ChatCompletion
If you call a prompt that isn’t deployed yet, an error will be thrown:
Error: Failed to fetch prompt: 404 {"error":"Prompt deployment not found"}
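If you want to handle this case gracefully, you can wrap the call in a try/catch. This is a minimal sketch, assuming lt is an initialized Langtail client as shown above:

```typescript
// Sketch: catching the error thrown for an undeployed prompt.
// Assumes `lt` is an initialized Langtail client (see above).
try {
  await lt.prompts.invoke({
    prompt: '<PROMPT_SLUG>',
    environment: 'staging',
  })
} catch (error) {
  // e.g. Error: Failed to fetch prompt: 404 {"error":"Prompt deployment not found"}
  console.error('Failed to invoke prompt:', error)
}
```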
2. Invoke a prompt that’s stored in code
If you’re storing your prompt in your codebase and want to give Langtail a try, use this method. You can use lt.chat.completions.create as a light wrapper over the OpenAI API that still sends logs and metrics to Langtail.
For more, see the example in this doc.
3. Fetch prompt from Langtail and invoke directly (proxyless)
If you’re storing your prompt in Langtail but want to invoke it directly from your code without using Langtail as a proxy, use this method.
You can call LangtailPrompts.get to retrieve the contents of the prompt:
import { LangtailPrompts } from 'langtail'

const lt = new LangtailPrompts({
  apiKey: '<LANGTAIL_API_KEY>',
})

const prompt = await lt.get({
  prompt: '<PROMPT_SLUG>',
  environment: 'preview',
  version: '<PROMPT_VERSION>', // optional
})
The response will look something like this:
{
  "chatInput": {
    "optionalExtra": ""
  },
  "state": {
    "args": {
      "frequency_penalty": 0,
      "jsonmode": false,
      "max_tokens": 800,
      "model": "gpt-3.5-turbo",
      "presence_penalty": 0,
      "stop": [],
      "stream": true,
      "temperature": 0.5,
      "top_p": 1
    },
    "functions": [],
    "template": [
      {
        "content": "I want you to tell me a joke. Topic of the joke: {{topic}}",
        "role": "system"
      }
    ],
    "tools": [],
    "type": "chat"
  }
}
Now you can build the final output:
const openAiBody = lt.build(prompt, {
  stream: true,
  variables: {
    topic: 'iron man',
  },
})
Which returns this object:
{
  "frequency_penalty": 0,
  "max_tokens": 800,
  "messages": [
    {
      "content": "I want you to tell me a joke. Topic of the joke: iron man",
      "role": "system",
    },
  ],
  "model": "gpt-3.5-turbo",
  "presence_penalty": 0,
  "temperature": 0.5,
  "top_p": 1,
}
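Conceptually, the variable substitution that build performs can be sketched with a small interpolation function. This is illustrative only, not the SDK’s actual implementation; renderTemplate and Message are hypothetical names:

```typescript
// Illustrative sketch only -- not the SDK's internal code.
// Models how {{variable}} placeholders in the template become message content.

type Message = { role: string; content: string }

function renderTemplate(
  template: Message[],
  variables: Record<string, string>,
): Message[] {
  return template.map((msg) => ({
    role: msg.role,
    // Replace each {{name}} placeholder with its value (empty string if missing)
    content: msg.content.replace(
      /\{\{(\w+)\}\}/g,
      (_match: string, name: string) => variables[name] ?? '',
    ),
  }))
}

const messages = renderTemplate(
  [{ role: 'system', content: 'I want you to tell me a joke. Topic of the joke: {{topic}}' }],
  { topic: 'iron man' },
)
console.log(messages[0].content)
// → I want you to tell me a joke. Topic of the joke: iron man
```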
Finally, you can directly call the OpenAI SDK with the returned object:
import OpenAI from 'openai'
const openai = new OpenAI()
const joke = await openai.chat.completions.create(openAiBody)
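Note that the body above was built with stream: true, so in that case the OpenAI SDK returns a stream of chunks rather than a single completion. A sketch of consuming it, assuming openAiBody comes from lt.build as shown earlier:

```typescript
import OpenAI from 'openai'

const openai = new OpenAI()

// openAiBody comes from lt.build(...) above; because it was built with
// stream: true, the OpenAI SDK returns an async-iterable stream of chunks.
const stream = await openai.chat.completions.create({ ...openAiBody, stream: true })

for await (const chunk of stream) {
  // Each chunk's delta carries the next piece of generated text.
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}
```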
Using this method, you’re getting all of the power of Langtail prompts (like variables) while sending requests directly to OpenAI. This can be helpful if you’re especially sensitive to performance. If you’re taking this route for security reasons, let us know and we’ll give you a deeper look into Langtail.
API reference
LangtailNode
Constructor
The constructor accepts an options object with the following properties:
- apiKey: The API key for Langtail.
- baseURL: The base URL for the Langtail API.
- doNotRecord: A boolean indicating whether to record the API calls.
- organization: The organization ID.
- project: The project ID.
- fetch: The fetch function to use for making HTTP requests. It is passed to the OpenAI client under the hood.
Properties
- chat: An object containing a completions object with a create method.
- prompts: An instance of the LangtailPrompts class.
Methods
chat.completions.create
This method accepts two parameters:
- An object that can be of type ChatCompletionCreateParamsNonStreaming & ILangtailExtraProps, ChatCompletionCreateParamsStreaming & ILangtailExtraProps, ChatCompletionCreateParamsBase & ILangtailExtraProps, or ChatCompletionCreateParams & ILangtailExtraProps.
- An optional OpenAI Core.RequestOptions object.

It returns a promise that resolves to a ChatCompletion or a Stream<ChatCompletionChunk>, depending on whether you are using streaming or not.
const completion = await lt.chat.completions.create({
  messages: [{ role: 'system', content: 'You are a helpful assistant.' }],
  model: 'gpt-3.5-turbo',
})

console.log(completion.choices[0])
Exceptions
- Throws an error if the apiKey is not provided in the options object or as an environment variable.
LangtailPrompts
Constructor
The constructor accepts an options object with the following properties:
- apiKey: The API key for Langtail.
- baseURL: The base URL for the Langtail API.
- organization: The organization ID.
- project: The project ID.
- fetch: The fetch function to use for making HTTP requests. It is passed to the OpenAI client under the hood.
Properties
- apiKey: The API key for Langtail.
- baseURL: The base URL for the Langtail API.
- options: An object containing the options for the Langtail API.
Methods
invoke
This method accepts:
- An IRequestParams or IRequestParamsStream object.

It returns a promise that resolves to an OpenAIResponseWithHttp or a StreamResponseType, depending on whether you use streaming or not.
const deployedPromptCompletion = await lt.prompts.invoke({
  prompt: '<PROMPT_SLUG>', // required
  environment: 'staging',
  variables: {
    about: 'cowboy Bebop',
  },
}) // results in an OpenAI ChatCompletion
get
This method accepts one parameter with these fields:
- prompt: A string representing the prompt slug.
- environment: An Environment string identifier. Accepts values: "preview" | "staging" | "production". Defaults to "production".
- version: A version string. Necessary for the preview environment.
Returns the Langtail prompt state:
import { LangtailPrompts } from 'langtail'

const lt = new LangtailPrompts({
  apiKey: '<LANGTAIL_API_KEY>',
})

const prompt = await lt.get({
  prompt: '<PROMPT_SLUG>',
  environment: 'preview',
  version: '<PROMPT_VERSION>', // optional
})
build
This method accepts two parameters:
- The state object returned from the get method.
- An object containing the options for the Langtail API.
Returns an object that can be passed to the OpenAI SDK:
const openAiBody = lt.build(prompt, {
  stream: true,
  variables: {
    topic: 'iron man',
  },
})
Exceptions
- Throws an error if the fetch operation fails.
- Throws an error if there is no body in the response when streaming is enabled.