
- Text Assertions: Validate prompt outputs by comparing the generated text with expected values. You can use `{{ variables }}` to reference dynamic content in the expected output, making it easier to test prompts with variable inputs or outputs.
  - equals: Checks whether the specified value exactly matches the output.
  - contains: Checks whether the specified value appears anywhere in the output.
  - icontains: Similar to `contains`, but case-insensitive.
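The three text matchers behave like simple string predicates applied after variable substitution. A minimal sketch in JavaScript (the helper names here are illustrative, not Langtail's actual API):

```javascript
// Illustrative sketch of the three text matchers; not Langtail's actual API.
// Substitute {{ variable }} placeholders in the expected value first,
// then compare the result against the generated output.
function render(template, vars) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) => vars[name] ?? "");
}

const matchers = {
  equals: (expected, output) => output === expected,
  contains: (expected, output) => output.includes(expected),
  icontains: (expected, output) =>
    output.toLowerCase().includes(expected.toLowerCase()),
};

const output = "Hello, Alice! Welcome back.";
const expected = render("hello, {{ name }}", { name: "Alice" });

console.log(matchers.equals(expected, output));    // false: not an exact match
console.log(matchers.contains(expected, output));  // false: case differs
console.log(matchers.icontains(expected, output)); // true: case-insensitive
```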
- JavaScript Assertions: For more advanced validation scenarios, you can create your own checks for comparing output data using JavaScript. This feature allows you to define custom criteria, validation rules, and comparison algorithms, ensuring that your prompts meet your specific requirements.
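As a concrete illustration, a custom JavaScript check might validate that the output is well-formed JSON with a field meeting length constraints. The function shape below is a hypothetical sketch, not Langtail's exact callback signature:

```javascript
// Hypothetical custom assertion; the shape is illustrative, not Langtail's
// exact callback signature. Passes only when the output is valid JSON
// containing a non-empty "summary" string under 200 characters.
function customAssertion(output) {
  try {
    const parsed = JSON.parse(output);
    return (
      typeof parsed.summary === "string" &&
      parsed.summary.length > 0 &&
      parsed.summary.length < 200
    );
  } catch {
    return false; // not valid JSON at all
  }
}

console.log(customAssertion('{"summary": "Refund processed."}')); // true
console.log(customAssertion("plain text, not JSON"));             // false
```

Because the check is arbitrary code, it can enforce structure, lengths, or comparison logic that plain text matchers cannot express.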
- LLM Assertions: Leverage the power of language models to classify and evaluate the output of your prompts. With LLM assertions, you can define your success criteria in plain English, and Langtail will handle the evaluation process for you. Choose from three scoring types:
  - Yes/No: Evaluate whether the output meets or fails your criteria.
  - A-E Scale: Assign a letter grade to the output based on your criteria.
  - Categories: Classify the output into predefined categories based on your criteria.
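One way to reason about these scoring types is how each reduces to a pass/fail result in a test report. The sketch below is illustrative only; the grading threshold and the object shape are assumptions, not Langtail's documented behavior:

```javascript
// Illustrative only: one way the three scoring types could reduce to a
// pass/fail result. The A-C threshold is an assumption, not Langtail's
// documented behavior.
function passes(scoring, value) {
  switch (scoring.type) {
    case "yes-no":
      return value === "yes";
    case "a-e-scale":
      // Assume grades A-C count as passing and D-E as failing.
      return ["A", "B", "C"].includes(value);
    case "categories":
      // Pass when the model picks one of the allowed categories.
      return scoring.allowed.includes(value);
    default:
      throw new Error(`Unknown scoring type: ${scoring.type}`);
  }
}

console.log(passes({ type: "yes-no" }, "yes"));  // true
console.log(passes({ type: "a-e-scale" }, "D")); // false
```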
Langtail’s testing capabilities go beyond simple assertions. You can add new variables to test your prompts with different inputs, define how many times a test should be repeated to ensure consistency, and even compare how your prompts perform when you change the language model or modify its properties. This flexibility allows you to thoroughly test your language models under various conditions and scenarios.
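Repeating a test is valuable because model outputs are non-deterministic: a prompt that passes once may still fail intermittently. A minimal sketch of the idea, where `runPrompt` is a stand-in for whatever actually calls the model (not a Langtail function):

```javascript
// Sketch of repeating a test to check consistency. runPrompt is a stand-in
// for the actual model call (shown synchronous here for simplicity).
function repeatTest(runPrompt, assertion, times) {
  let passed = 0;
  for (let i = 0; i < times; i++) {
    if (assertion(runPrompt())) passed++;
  }
  return { passed, total: times, passRate: passed / times };
}

// Stubbed "model" that answers correctly 4 times out of 5.
let call = 0;
const fakePrompt = () => (++call % 5 === 0 ? "maybe" : "yes");

const result = repeatTest(fakePrompt, (out) => out === "yes", 10);
console.log(result.passRate); // flaky prompts surface as a low pass rate
```

Running the same comparison across different models or model settings follows the same pattern: hold the assertion fixed and vary what `runPrompt` does.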