# Category Detection

Detailed guide for classifying LLM packages into LlmObsCategory enum values.
## LLM Client

Definition: Direct wrappers around LLM provider APIs.

Examples:
- @google/generative-ai - Google GenAI client (recommended reference implementation)
- @anthropic-ai/sdk - Anthropic Claude client (recommended reference implementation)
- openai - OpenAI API client

Observable signs:
- Requires a provider API key
- Methods mirror the provider's API surface (chat.completions.create, messages.create)

Test strategy: VCR with real API calls via proxy

Enum value: LlmObsCategory.LLM_CLIENT
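To make the signs concrete, here is a minimal sketch of the call shape an LLM client exposes, using @anthropic-ai/sdk (the model id is illustrative):

```ts
import Anthropic from '@anthropic-ai/sdk';

// Direct wrapper around a single provider: requires that provider's
// API key, and the method path mirrors the provider's HTTP endpoint.
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const message = await client.messages.create({
  model: 'claude-3-5-sonnet-latest', // illustrative model id
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello' }],
});
```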
## Multi-Provider

Definition: Unified interfaces that abstract multiple LLM providers.

Examples:
- ai - Vercel AI SDK
- langchain - LangChain framework

Observable signs:
- Provider is selected via configuration rather than fixed by the package
- Still makes direct HTTP calls to provider endpoints

Test strategy: VCR with real API calls via proxy

Enum value: LlmObsCategory.MULTI_PROVIDER
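For contrast, a sketch of the multi-provider shape using the Vercel AI SDK (model id illustrative): the provider is passed in as configuration, so the same call works against different backends.

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// The provider is a configuration detail: swapping the model argument
// (e.g. for one from @ai-sdk/anthropic) changes providers without
// touching the calling code.
const { text } = await generateText({
  model: openai('gpt-4o'), // illustrative model id
  prompt: 'Hello',
});
```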
## Orchestration

Definition: Workflow/graph managers that coordinate LLM calls but don't make them directly.

Examples:
- @langchain/langgraph - LangGraph workflow engine

Observable signs:
- Workflow-style entry points (invoke, stream, run) that manage state rather than call a provider directly

Test strategy: Pure function tests, NO VCR, NO real API calls

Enum value: LlmObsCategory.ORCHESTRATION
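A sketch of the orchestration shape with @langchain/langgraph: the package's own methods manage graph state, and any LLM call happens inside user-supplied node functions, not in the package itself.

```ts
import { StateGraph, Annotation, START, END } from '@langchain/langgraph';

// Minimal single-field state, for illustration only.
const State = Annotation.Root({
  messages: Annotation<string[]>({
    reducer: (a, b) => a.concat(b),
    default: () => [],
  }),
});

const graph = new StateGraph(State)
  .addNode('step', async () => ({ messages: ['done'] }))
  .addEdge(START, 'step')
  .addEdge('step', END)
  .compile();

// invoke() runs the workflow; no HTTP call to any provider occurs
// unless a node function makes one itself.
const result = await graph.invoke({ messages: [] });
```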
## Infrastructure

Definition: Communication protocols, server frameworks, infrastructure layers.

Observable signs:
- Server/transport methods (connect, listen, handle)
- No direct calls to LLM provider endpoints

Test strategy: Mock server tests

Enum value: LlmObsCategory.INFRASTRUCTURE
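As a rough illustration of the shape this category takes, a hypothetical transport interface (the names are assumptions, not any real package's API):

```ts
// Hypothetical: infrastructure packages expose connection/server
// lifecycle methods and never call an LLM provider themselves.
interface LlmTransport {
  connect(url: string): Promise<void>;
  listen(port: number): void;
  handle(request: unknown): Promise<unknown>;
}
```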
## Decision Tree

Follow this tree to determine category:

```
1. Does the package make direct HTTP calls to LLM provider endpoints?
   ├─ YES → Go to question 2
   └─ NO  → Go to question 3

2. Does it support multiple LLM providers via configuration?
   ├─ YES → LlmObsCategory.MULTI_PROVIDER
   └─ NO  → LlmObsCategory.LLM_CLIENT

3. Does it implement workflow/graph orchestration with state management?
   ├─ YES → LlmObsCategory.ORCHESTRATION
   └─ NO  → LlmObsCategory.INFRASTRUCTURE
```
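The same tree, expressed as a sketch in TypeScript. The PackageSignals shape and the enum's string values are assumptions for illustration; only the branch logic comes from the tree above.

```ts
enum LlmObsCategory {
  LLM_CLIENT = 'llm_client', // string values assumed
  MULTI_PROVIDER = 'multi_provider',
  ORCHESTRATION = 'orchestration',
  INFRASTRUCTURE = 'infrastructure',
}

// Hypothetical bag of signals gathered via the heuristics below.
interface PackageSignals {
  makesProviderHttpCalls: boolean;  // question 1
  multiProviderConfig: boolean;     // question 2
  workflowStateManagement: boolean; // question 3
}

function classify(s: PackageSignals): LlmObsCategory {
  if (s.makesProviderHttpCalls) {
    return s.multiProviderConfig
      ? LlmObsCategory.MULTI_PROVIDER
      : LlmObsCategory.LLM_CLIENT;
  }
  return s.workflowStateManagement
    ? LlmObsCategory.ORCHESTRATION
    : LlmObsCategory.INFRASTRUCTURE;
}
```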
## Heuristics

### Step 1: Analyze the package name

Patterns:
- Provider names (anthropic, openai, genai) → LlmObsCategory.LLM_CLIENT
- Abstraction-layer names (ai, ai-sdk, langchain) → LlmObsCategory.MULTI_PROVIDER
- Workflow/graph terms (langgraph, workflow) → LlmObsCategory.ORCHESTRATION
- Protocol/server terms → LlmObsCategory.INFRASTRUCTURE

### Step 2: Inspect the manifest

```
cat node_modules/{{package}}/package.json
```

Look for:
- A single provider SDK or endpoint baked in → LlmObsCategory.LLM_CLIENT
- Adapters for several providers → LlmObsCategory.MULTI_PROVIDER
- An orchestration core dependency such as @langchain/core → LlmObsCategory.ORCHESTRATION
- Only server/protocol dependencies → LlmObsCategory.INFRASTRUCTURE

### Step 3: Inspect the exported API

```
node -e "console.log(Object.keys(require('{{package}}')))"
```

Method patterns:
- chat(), complete(), embed() → LlmObsCategory.LLM_CLIENT or MULTI_PROVIDER
- invoke(), stream(), graph(), workflow() → LlmObsCategory.ORCHESTRATION
- connect(), listen(), handle() → LlmObsCategory.INFRASTRUCTURE

### Step 4: Check the source

Check for:
- Direct HTTP calls (http.request, .post(, fetch()) → LlmObsCategory.LLM_CLIENT or MULTI_PROVIDER
- State/graph management without provider HTTP calls → LlmObsCategory.ORCHESTRATION
- Server/transport setup without provider HTTP calls → LlmObsCategory.INFRASTRUCTURE

## Worked Examples

Package: @anthropic-ai/sdk — see packages/datadog-plugin-anthropic/
Category: LlmObsCategory.LLM_CLIENT — name contains "anthropic", direct HTTP calls to Claude API, requires API key, methods are messages.create
Package: @google/generative-ai — see packages/datadog-plugin-google-genai/
Category: LlmObsCategory.LLM_CLIENT — name contains "genai", direct HTTP calls to Gemini API, complex nested message format (contents/parts)
Package: ai (Vercel AI SDK)
Category: LlmObsCategory.MULTI_PROVIDER — unified interface that abstracts multiple providers behind one API while still making provider HTTP calls
Package: @langchain/langgraph — see packages/dd-trace/src/llmobs/plugins/langgraph/
Category: LlmObsCategory.ORCHESTRATION — name indicates graph orchestration, depends on @langchain/core, methods manage workflow state (StateGraph.invoke, Pregel.stream), no direct LLM HTTP calls
## Conflicting Signals

When signals conflict or are weak, choose the category with the most evidence, and prefer the one that matches the test strategy: packages that make real HTTP calls need VCR (LLM_CLIENT, MULTI_PROVIDER); packages that don't are tested with pure functions (ORCHESTRATION) or mock servers (INFRASTRUCTURE).
## Edge Cases

Some packages don't fit cleanly: