If you’ve already started building your application with OpenAI’s models via Langchain but want to experiment with Langtail, you can configure your existing code to send requests through the OpenAI Proxy.

In this guide, we’ll generate a Langtail API key, modify your existing code to send LLM requests via the proxy, and then use Langtail’s observability tools to see the requests coming from your app.

Let’s get started:
1. Prerequisites

An OpenAI key already added to your Langtail workspace.
2. Generate a project API key
In Langtail, open an existing project or create a new one.
In the left sidebar, click Secrets.
Click New API Key, optionally set a name and a budget limit, then click Create.
Copy the key.
3. Modify your code to use the proxy
Add your Langtail project API key to an environment variable. In the following snippet, we use LANGTAIL_API_KEY.
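Before wiring up the proxy, it can help to confirm the variable is actually set. Below is a minimal sketch of that check; the python-dotenv import is an optional assumption for people who keep secrets in a .env file, not something Langtail requires.

```python
import os

# Optional: if you keep secrets in a .env file, python-dotenv can load them.
# This is an extra dependency and an assumption, not required by Langtail.
# from dotenv import load_dotenv
# load_dotenv()

# Fail fast if the key is missing so proxied requests don't fail later.
if not os.environ.get("LANGTAIL_API_KEY"):
    raise RuntimeError("Set the LANGTAIL_API_KEY environment variable before starting the app.")
```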
In your codebase, find where you initialize Langchain and make these modifications:
```python
import os

from langchain_openai import ChatOpenAI
from langchain.schema import SystemMessage

# Add base_url to send requests to Langtail instead of directly to OpenAI
llm = ChatOpenAI(
    openai_api_key=os.environ["LANGTAIL_API_KEY"],
    openai_api_base="https://proxy.langtail.com/v1",
)

# Then submit a request as usual and it will be proxied through Langtail.
# Optionally, add headers for easier trace filtering.
messages = [SystemMessage(content="Generate 5 random words.")]

print(
    llm.invoke(
        messages,
        extra_headers={
            "X-Langtail-Prompt": "prompt-slug",  # optional
            "X-Langtail-Metadata-key": "value",  # optional; 'key' can be any string
        },
    )
)
```
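If you would rather tag every request instead of passing extra_headers on each invoke() call, recent versions of langchain_openai also accept a default_headers mapping when constructing the client. A minimal sketch, assuming the same proxy setup as above (the header values here are placeholders):

```python
import os

from langchain_openai import ChatOpenAI

# default_headers are sent with every request made by this client, so each
# call is tagged for Langtail without per-invoke extra_headers.
llm = ChatOpenAI(
    openai_api_key=os.environ["LANGTAIL_API_KEY"],
    openai_api_base="https://proxy.langtail.com/v1",
    default_headers={
        "X-Langtail-Prompt": "prompt-slug",            # optional prompt slug
        "X-Langtail-Metadata-environment": "staging",  # optional; example metadata key and value
    },
)

print(llm.invoke("Generate 5 random words.").content)
```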
4. Test it out
Run your app and make a request to verify that everything works properly.
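If you don’t want to click through your whole app, a one-off script like the sketch below sends a single request through the proxy so you can look for the matching log entry in the next step (the prompt text is arbitrary):

```python
import os

from langchain_openai import ChatOpenAI

# Minimal smoke test: one request through the Langtail proxy.
llm = ChatOpenAI(
    openai_api_key=os.environ["LANGTAIL_API_KEY"],
    openai_api_base="https://proxy.langtail.com/v1",
)

reply = llm.invoke("Reply with the single word: pong")
print(reply.content)  # if this prints, the request was proxied successfully
```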
5. Observe metrics and logs
Back in Langtail, navigate to the project and then click Logs in the left sidebar.
You should see log entries coming from your app (they have an environment value of “proxy”).
Click on a log to see detailed information about the request and response from OpenAI.
If you want to experiment with this prompt, click Open in Playground. There you can adjust parameters, edit different parts of the prompt, and see the result.