This document describes how to trace LangChain retrievers, LLMs, chains, and tools using Elastic APM and LangSmith.
If the `assistantModelEvaluation` experimental feature flag is enabled, and an APM server is configured, messages that have a corresponding trace will have an additional "View APM trace" action in the message title bar:
Viewing the trace, you can see a breakdown of the time spent in each retriever, LLM, chain, and tool:
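Conceptually, the tracer produces this breakdown by recording a timed span for every LangChain run it observes via callbacks. The sketch below illustrates that pattern with a minimal, self-contained class; `SimpleTracer` and `RunRecord` are illustrative names, not the actual `@kbn/langchain` tracer API.

```typescript
// Simplified sketch of the callback pattern tracers use to time LangChain
// runs. Each retriever/LLM/chain/tool invocation becomes a timed record,
// analogous to an APM span.
interface RunRecord {
  type: 'retriever' | 'llm' | 'chain' | 'tool';
  name: string;
  startMs: number;
  endMs?: number;
}

class SimpleTracer {
  private readonly runs = new Map<string, RunRecord>();

  // Invoked when a run (e.g. an LLM call) starts; records its start time.
  onRunStart(runId: string, type: RunRecord['type'], name: string): void {
    this.runs.set(runId, { type, name, startMs: Date.now() });
  }

  // Invoked when the run finishes; closes out the span.
  onRunEnd(runId: string): void {
    const run = this.runs.get(runId);
    if (run) run.endMs = Date.now();
  }

  // The per-run timing breakdown that a trace UI would visualize.
  breakdown(): RunRecord[] {
    return Array.from(this.runs.values());
  }
}
```

In the real tracers, these callbacks map onto APM transactions/spans (or LangSmith runs), which is what produces the per-component timing view shown above.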
The Evaluation interface has been updated to support additional metadata, like Project Name and Run Name, and to support pulling test datasets from LangSmith. Predictions can now also be run without running a full Evaluation, so datasets can quickly be run through for manual analysis.
First, enable the `assistantModelEvaluation` experimental feature flag by adding the following to your `kibana.dev.yml`:

```yaml
xpack.securitySolution.enableExperimental: ['assistantModelEvaluation']
```
Next, you'll need an APM server to collect the traces. You can either follow the documentation for installing the released artifact, or run from source and set up using the provided quickstart guide. (Be sure to install the APM Server integration so the necessary indices are created! In dev environments you must click "Display beta integrations" on the main Integrations page to ensure the latest package is installed.) Once your APM server is running, add your APM server configuration to your `kibana.dev.yml` as well:
```yaml
# APM
elastic.apm:
  active: true
  environment: 'SpongBox5002c™'
  serverUrl: 'http://localhost:8200'
  transactionSampleRate: 1.0
  breakdownMetrics: true
  spanStackTraceMinDuration: 10ms
  # Disables Kibana RUM
  servicesOverrides.kibana-frontend.active: false
```
If using a remote APM Server/Kibana instance for viewing traces, you can set the APM URL as outlined in https://github.com/elastic/kibana/pull/180227 so that the "View APM trace" button in the UI links to the appropriate instance.
> [!NOTE]
> If connecting to a cloud APM server (like our `ai-assistant` apm deployment), follow these steps to create an API key, and then set it via `apiKey`. Also set your `serverUrl` as shown in the APM Integration details within Fleet.
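For example, a cloud APM configuration in `kibana.dev.yml` might look like the following sketch (the `serverUrl` and `apiKey` values below are placeholders; use the actual values from your deployment's APM Integration details):

```yaml
elastic.apm:
  active: true
  serverUrl: 'https://<your-deployment>.apm.<region>.cloud.es.io:443'
  apiKey: '<your-apm-api-key>'
  transactionSampleRate: 1.0
```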
> [!NOTE]
> If you're an Elastic developer running Kibana from source, you can just enable APM as above, omit the `serverUrl`, and your traces will be sent to the https://kibana-cloud-apm.elastic.dev cluster.
If you want to push traces to LangSmith, or leverage any datasets you may have hosted in a project, all you need to do is configure a few environment variables and then start the Kibana server. See the LangSmith Traces documentation for details, or just set the environment variables below to enable it:
```sh
# LangChain LangSmith
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
export LANGCHAIN_API_KEY=""
export LANGCHAIN_PROJECT="8.12 ESQL Query Generation"
```
If you want to configure LangSmith in cloud or other environments where you may not be able to set environment variables, you can set the LangSmith Project and LangSmith API Key values in session storage, as outlined in https://github.com/elastic/kibana/pull/180227.