Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
LangSmith
To evaluate your agent's performance, you can use LangSmith evaluations. You would first need to define an evaluator function to judge the results from an agent, such as its final outputs or trajectory. Depending on your evaluation technique, this may or may not involve a reference output:
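A minimal sketch of such a custom evaluator, assuming dict messages that may carry a `tool_calls` list of `{"name": ...}` entries; the comparison logic is illustrative only:

```python
def trajectory_evaluator(*, outputs: dict, reference_outputs: dict) -> dict:
    """Score a run by comparing the agent's tool-call order to a reference."""

    def tool_names(messages: list) -> list:
        # Collect tool names in call order (assumes dict messages that may
        # carry a "tool_calls" list of {"name": ...} entries).
        return [
            call["name"]
            for message in messages
            for call in (message.get("tool_calls") or [])
        ]

    matched = tool_names(outputs["messages"]) == tool_names(
        reference_outputs["messages"]
    )
    return {"key": "trajectory_match", "score": matched}
```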
You can also use prebuilt evaluators from the AgentEvals package:
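Assuming the Python distribution published on PyPI, installation looks like:

```bash
pip install agentevals
```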
Create evaluator
A common way to evaluate agent performance is by comparing its trajectory (the order in which it calls its tools) against a reference trajectory. When creating the evaluator, you specify how the trajectories will be compared: `superset` will accept the output trajectory as valid if it is a superset of the reference one. Other options include `strict`, `unordered`, and `subset`.
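A minimal sketch, assuming AgentEvals' `create_trajectory_match_evaluator` and OpenAI-style message dicts for the trajectories:

```python
from agentevals.trajectory.match import create_trajectory_match_evaluator

# "superset" accepts the output trajectory if it contains at least the
# reference tool calls; other modes are "strict", "unordered", and "subset".
evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="superset",
)

# Message history produced by the agent run.
outputs = [
    {"role": "user", "content": "What is the weather in SF?"},
    {
        "role": "assistant",
        "tool_calls": [
            {"function": {"name": "get_weather", "arguments": '{"city": "SF"}'}}
        ],
    },
    {"role": "tool", "content": "75 degrees and sunny."},
    {"role": "assistant", "content": "It is 75 degrees and sunny in SF."},
]
# Reference trajectory; identical to the output here for illustration.
reference_outputs = list(outputs)

result = evaluator(outputs=outputs, reference_outputs=reference_outputs)
print(result)  # dict with a key and a score for the trajectory comparison
```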
LLM-as-a-judge
You can use an LLM-as-a-judge evaluator, which uses an LLM to compare the trajectory against the reference outputs and output a score:
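A minimal sketch, assuming AgentEvals' LLM-as-a-judge helpers and an OpenAI model identifier:

```python
from agentevals.trajectory.llm import (
    create_trajectory_llm_as_judge,
    TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE,
)

# The judge prompts the model to grade how closely the output trajectory
# follows the reference trajectory and returns a score.
evaluator = create_trajectory_llm_as_judge(
    model="openai:o3-mini",
    prompt=TRAJECTORY_ACCURACY_PROMPT_WITH_REFERENCE,
)

# `outputs` and `reference_outputs` are message lists, as in the
# trajectory-match example above.
result = evaluator(outputs=outputs, reference_outputs=reference_outputs)
```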
Run evaluator

To run an evaluator, you will first need to create a LangSmith dataset. To use the prebuilt AgentEvals evaluators, you will need a dataset with the following schema:

- input: `{"messages": [...]}`, the input messages to call the agent with.
- output: `{"messages": [...]}`, the expected message history in the agent output. For trajectory evaluation, you can choose to keep only assistant messages.
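A minimal sketch of running the evaluation with the LangSmith SDK, assuming an `agent` object with an `invoke` method (for example, one created earlier in your code) and a placeholder dataset name:

```python
from langsmith import Client
from agentevals.trajectory.match import create_trajectory_match_evaluator

client = Client()

evaluator = create_trajectory_match_evaluator(
    trajectory_match_mode="superset",
)

results = client.evaluate(
    # Target: run your agent on each dataset input ({"messages": [...]}).
    lambda inputs: agent.invoke(inputs),
    data="<your dataset name>",
    evaluators=[evaluator],
)
```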