.agents/skills/llmobs-integration/references/message-extraction.md
Every LLM provider uses a different message format. Before implementing message extraction, you must read the provider's actual source code and existing plugin implementation to understand its specific format.
All plugins must normalize messages to the standard LLMObs format: `[{ content: string, role: string }]`

Common roles: `'user'`, `'assistant'`, `'system'`, `'tool'`
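As a concrete sketch of that target shape, a minimal per-message normalizer might look like the following. The helper name `toLlmObsMessage` and the OpenAI-style `{ role, content }` input shape are assumptions for illustration, not an actual plugin API:

```javascript
// Hypothetical helper: normalize one provider message into the standard
// LLMObs shape ({ content, role }). Input shape is assumed OpenAI-style.
function toLlmObsMessage (message = {}) {
  let role = message.role || ''
  // Some providers (e.g. Google GenAI) say 'model' where LLMObs uses 'assistant'
  if (role === 'model') role = 'assistant'
  return {
    content: typeof message.content === 'string' ? message.content : '',
    role
  }
}

console.log(toLlmObsMessage({ role: 'model', content: 'hi' }))
// → { content: 'hi', role: 'assistant' }
```

Real plugins do this per provider, driven by the provider's actual request/response shapes.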
Input formats differ in:
- Field names (`messages`, `contents`, `prompt`, etc.)
- Role names (`'model'` vs `'assistant'`)

Output formats differ in:

- Response structure (`choices[0].message`, `content[0].text`, `candidates[0].content.parts`, etc.)
- Token usage field names (`prompt_tokens`/`completion_tokens` vs `input_tokens`/`output_tokens`)

Common variations include:

- Messages as an array of `[{role, content}]` objects (e.g. OpenAI)
- Content as an array of blocks (e.g. `[{type: 'text', text: '...'}]`)
- A `parts` array inside a `contents` array (e.g. Google GenAI)
- Role renaming (`'model'` → `'assistant'`)

Read the existing tracing plugin (`packages/datadog-plugin-<name>/src/index.js`) to understand what arguments and results look like.

The best examples of message extraction for the providers we support:

- `packages/datadog-plugin-anthropic/src/llmobs.js`
- `packages/datadog-plugin-google-genai/src/llmobs.js`

Best practices:

- Guard against missing fields with defensive defaults (`|| ''` and `|| []`)
- Map the `'model'` role to `'assistant'` for consistency (preserve `'system'`, `'tool'`, `'function'`)
- Default missing content to `''`
- Return `[{ content: '', role: '' }]` on error (never omit output messages)
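The best practices above can be sketched as a single extractor. This is a hedged illustration only: the function name `extractOutputMessages` and the content-block shape (`{ type: 'text', text }`, loosely Anthropic-style) are assumptions, not the exact shape of any provider or the real plugin code:

```javascript
// Hedged sketch of an output-message extractor. Assumes a response like
// { role, content: [{ type: 'text', text: '...' }] } (loosely Anthropic-style).
function extractOutputMessages (result) {
  try {
    // Map 'model' → 'assistant'; preserve other roles
    const role = result.role === 'model' ? 'assistant' : (result.role || '')
    // Defensive default: missing content becomes an empty array
    const blocks = result.content || []
    const content = blocks
      .filter(block => block.type === 'text')
      .map(block => block.text || '')
      .join('')
    return [{ content, role }]
  } catch (e) {
    // Never omit output messages: fall back to an empty message on error
    return [{ content: '', role: '' }]
  }
}
```

Note the `catch` branch: even a malformed or `null` result still yields `[{ content: '', role: '' }]`, so downstream span tagging always has an output message to work with.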