Getting started
This guide will walk you through building a sample application powered by GPT-4. This app will suggest meals based on the ingredients you have in your refrigerator. We will use Langtail as the source of truth for the prompt, tests, logging, and metrics. Additionally, we will create a simple frontend using Next.js. Let’s begin!
Prepare your account
Let’s start by preparing your account with the necessary details for prompt creation.
- Navigate to Langtail. If you don’t have an account, create one for free.
- Create a new project and name it `quickstart`.
Create a prompt
Next, we will create a prompt that will act as the core of our application.
- Click here to open the quickstart prompt.
- In the top right, click “Save as…”.
- Select your new project, rename the prompt to “Meal ideas”, and click “Save”.
The prompt template uses a variable called `selectedIngredients` and iterates over it to form a bulleted list. For more on the templating syntax, see the prompt templating guide.
Try the prompt
Now, let’s try out the new prompt.
- Navigate to the Variables panel, where you will find the `selectedIngredients` variable. Paste in this value: `["Turkey", "Spinach", "Berries"]`.
- At the bottom of the playground, select “Send”.
- You should now see a response with two recipes formatted in Markdown. Feel free to adjust any model parameters or prompt templates to improve the output.
Improve the prompt
Now, let’s try to improve your prompt.
- Navigate to your prompt template and hit the ✨ magic button.
- Check whether the improved prompt reads better (it should) and select “Send”.
- You should now see a new response; compare it with the previous one to judge whether the improved prompt performs better. If you don’t like it, just discard the changes at the top.
Create a basic test
Let’s create a test to verify that the prompt is working as expected.
- Click “Save” to save the prompt.
- From the navigation bar, select “Tests”.
- Click “+ Add Test”.
- A new test will be created with our example value for `selectedIngredients`.
- Use the “+ Add test cases” button to generate a few more example values.
- Click on Assertions at the bottom and select “New Assertion”, then “contains”.
- In the “Title” field, write `Contains additional ingredients`.
- In the “Value” field, write `**Additional Ingredients:**`.
- Select “Save”.
- Click “Run” and ensure that the test passes.
Create a more complex assertion
Next, we will create an assertion that checks if exactly 2 recipes were returned.
- In the test you just created, go to Assertions, select “Add new”, and then “javascript”.
- Name the assertion “Has two recipes”.
- Paste in the assertion code (a sketch is shown after this list).
- Select “Run” to ensure that both assertions now pass.
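The code block for this step is not reproduced on this page. As a minimal sketch, assuming the JavaScript assertion can read the model’s Markdown output from a variable (called `output` here, which is an assumption) and signals pass/fail by returning a boolean, it might look like the following; check Langtail’s assertion docs for the exact interface:

```javascript
// Hypothetical sketch: count recipe sections in the Markdown output.
// Assumptions: `output` holds the response text, each recipe starts with a
// level-3 heading ("### ..."), and returning a boolean marks pass/fail.
const recipeCount = (output.match(/^###\s/gm) || []).length;
console.log("Recipes found:", recipeCount);
return recipeCount === 2;
```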
You can add `console.log` statements and click “Run” to execute the code.
Deploy the prompt
With the prompt tested, it’s time to deploy it for testing in our app.
- From the navigation bar, select “Playground”.
- Click “Save” and write a message.
- Select “Deploy”, then “Staging environment”.
Generate a project API key
Next, let’s generate a project API key.
- In the navigation bar, select the project name to go to the current project.
- From the left sidebar, select “Secrets”.
- Click “New API Key”, optionally set a name and a budget limit, then click “Create”.
- Copy the key and save it for the next step.
Integrate with the sample app
Now, let’s integrate the project API key with the sample app.
- Make sure that you have git and Node.js installed on your computer.
- Clone the quickstart repository (the clone, install, and start commands are sketched after this list).
- Open a terminal in the cloned repository.
- Install the packages (command sketched below).
- Open the repository in your code editor (for example, VS Code).
- Duplicate the `.env.local.example` file and rename it to `.env.local`.
- Replace the value of `LANGTAIL_API_KEY` with the project API key generated in the last step.
- If you didn’t name your prompt `quickstart`, update the prompt name in the `generate_meals.ts` file to the one you used. Here’s the exact line.
- Start the app (command sketched below).
- Open the app in your browser. By default, the app is served on http://localhost:3000.
- Choose some ingredients and click “Find Inspiration”.
- If everything is set up correctly, you should see some ideas appear at the bottom of the page. This may take a few seconds.
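The commands referenced in the steps above are not reproduced on this page. As a sketch, assuming the sample app uses npm and the standard Next.js scripts (the repository URL and directory name are placeholders — use the repository linked from this guide):

```bash
# Clone the quickstart repository (placeholder URL — use the link from this guide)
git clone <quickstart-repository-url>
cd <cloned-repository-directory>

# Install the dependencies (assuming npm; use your preferred package manager)
npm install

# Start the app (assuming the standard Next.js "dev" script)
npm run dev
```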
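After duplicating the example file, your `.env.local` should contain the project API key along these lines (the value shown is a placeholder):

```bash
# .env.local — replace the placeholder with the key generated in the previous section
LANGTAIL_API_KEY=<your-project-api-key>
```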
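For orientation, the prompt reference in `generate_meals.ts` is a call that names the prompt slug and the environment it was deployed to. The snippet below is only a sketch of that pattern based on the Langtail TypeScript SDK; the exact call and option names in the sample app may differ, so rely on the linked line in the repository:

```typescript
import { Langtail } from "langtail";

const langtail = new Langtail({ apiKey: process.env.LANGTAIL_API_KEY! });

// Hypothetical sketch: invoke the deployed prompt by slug and environment.
// If you renamed your prompt, change "quickstart" to the slug you used.
const completion = await langtail.prompts.invoke({
  prompt: "quickstart",
  environment: "staging",
  variables: { selectedIngredients: ["Turkey", "Spinach", "Berries"] },
});
```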
View logs and metrics
Finally, let’s view logs and metrics.
- Navigate back to Langtail, go to the project view, and select “Logs”.
- Click the latest log entry and notice all the available information. You can see the system prompt, user prompt, and direct response from the LLM. By clicking “Show more”, you can also see stats like cost.
- From the left sidebar, select “Metrics” to view aggregated stats.
Congratulations! You have successfully explored the core functionality of Langtail. For more helpful documentation, you can explore the following sections:
- Advanced Tests: Learn more about testing prompts and assistants to ensure everything functions as expected.
- OpenAI Proxy: If you’re interested in trying out Langtail without committing to the full workflow, using the OpenAI Proxy via the SDK is a great way to get started.
- SDK: Discover how Langtail’s TypeScript SDK enhances OpenAI’s API client, allowing seamless integration with Langtail. Store prompts, reference them by slug and environment, and enjoy benefits like logs and metrics by routing requests through Langtail.
If you have any questions, feel free to ask in our Discord community.