
import { APILink } from "@site/src/components/APILink";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import TilesGrid from "@site/src/components/TilesGrid";
import TileCard from "@site/src/components/TileCard";
import { Search, TrendingUp, MessageSquare } from "lucide-react";
import useBaseUrl from '@docusaurus/useBaseUrl';

# Track Users & Sessions

<video src={useBaseUrl("/images/llms/tracing/chat-sessions-demo.mp4")} controls loop autoPlay muted aria-label="Traces with session IDs" />

Many real-world AI applications use sessions to maintain multi-turn user interactions. MLflow Tracing provides built-in support for associating traces with users and grouping them into sessions. Tracking users and sessions in your LLM application or AI agent provides essential context for understanding user behavior, analyzing conversation flows, and improving personalization.

## Store User and Session IDs in Metadata

MLflow provides dedicated `session_id` and `user` parameters in both <APILink fn="mlflow.update_current_trace" /> and <APILink fn="mlflow.tracing.context" /> for session and user tracking. Once set, you can filter and group traces by session and user.

### Basic Usage

#### Context manager (suitable for auto-instrumented applications)

When using auto-instrumented libraries (e.g. `mlflow.langchain.autolog()`), you don't control the traced function directly. Use <APILink fn="mlflow.tracing.context" /> to inject user and session information into any trace created within its scope.

```python
import mlflow

mlflow.langchain.autolog()

with mlflow.tracing.context(session_id="session-123", user="user-456"):
    # Any trace created inside this block will carry the session and user metadata.
    agent.invoke("What is the capital of France?")
```

This is especially useful in web applications where you can wrap request handlers with the context manager to automatically associate all traces with the current user and session.
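For example, the header-to-ID mapping can live in one small helper so every route handles missing headers the same way. A minimal, framework-agnostic sketch — `extract_tracking_ids`, the header names, and the default values are illustrative, not part of the MLflow API:

```python
# Hypothetical helper (not part of MLflow): map incoming request headers to
# the IDs passed to mlflow.tracing.context, with stable fallbacks so traces
# are never left unattributed.
def extract_tracking_ids(headers: dict) -> tuple[str, str]:
    session_id = headers.get("X-Session-ID", "default-session")
    user_id = headers.get("X-User-ID", "default-user")
    return session_id, user_id


# Inside a request handler you would then write:
#   session_id, user_id = extract_tracking_ids(request.headers)
#   with mlflow.tracing.context(session_id=session_id, user=user_id):
#       agent.invoke(message)

print(extract_tracking_ids({"X-Session-ID": "session-123"}))
# → ('session-123', 'default-user')
```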

:::note

The `session_id` and `user` parameters are supported since MLflow 3.11.0. On earlier versions, set the `mlflow.trace.session` or `mlflow.trace.user` key in the trace metadata directly:

```python
mlflow.update_current_trace(metadata={"mlflow.trace.session": session_id})
```

:::
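If a single code path must support both the pre- and post-3.11.0 APIs, the choice can be isolated in one helper. A hedged sketch — `session_trace_kwargs` and its version check are illustrative, not an MLflow API:

```python
def session_trace_kwargs(session_id: str, user_id: str, mlflow_version: str) -> dict:
    """Build kwargs for mlflow.update_current_trace: the dedicated parameters
    on MLflow >= 3.11.0, the legacy metadata keys on earlier versions."""
    major, minor = (int(part) for part in mlflow_version.split(".")[:2])
    if (major, minor) >= (3, 11):
        return {"session_id": session_id, "user": user_id}
    return {
        "metadata": {
            "mlflow.trace.session": session_id,
            "mlflow.trace.user": user_id,
        }
    }


# Usage (inside a traced function):
# mlflow.update_current_trace(**session_trace_kwargs("session-123", "user-456", mlflow.__version__))
```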

#### Inline update (suitable for a manually traced function)

To record user and session information within a function you control, use the <APILink fn="mlflow.update_current_trace" /> API and pass the user and session IDs directly.

<Tabs> <TabItem value="python" label="Python" default>
```python
import mlflow


@mlflow.trace
def chat_completion(message: list[dict], user_id: str, session_id: str):
    # Add user and session context to the current trace
    mlflow.update_current_trace(session_id=session_id, user=user_id)

    # Your chat logic here
    return generate_response(message)
```
</TabItem> <TabItem value="typescript" label="TypeScript">
```typescript
import * as mlflow from "@mlflow/core";

const chatCompletion = mlflow.trace(
    (message: Array<Record<string, any>>, userId: string, sessionId: string) => {
        // Add user and session context to the current trace
        mlflow.updateCurrentTrace({
            sessionId: sessionId,
            user: userId,
        });

        // Your chat logic here
        return generateResponse(message);
    },
    { name: "chat_completion" }
);
```
</TabItem> </Tabs>

### Web Application Example

<Tabs> <TabItem value="python" label="Python (FastAPI)" default>
```python
import mlflow
import os
from fastapi import FastAPI, Request
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment(experiment_id="<your-experiment-id>")
mlflow.openai.autolog()


class ChatRequest(BaseModel):
    message: str


@mlflow.trace
def process_chat(message: str, user_id: str, session_id: str):
    # Update trace with user and session context
    mlflow.update_current_trace(session_id=session_id, user=user_id)

    # Process chat message using OpenAI API
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content


@app.post("/chat")
def handle_chat(request: Request, chat_request: ChatRequest):
    session_id = request.headers.get("X-Session-ID", "default-session")
    user_id = request.headers.get("X-User-ID", "default-user")
    response_text = process_chat(chat_request.message, user_id, session_id)
    return {"response": response_text}


@app.get("/")
async def root():
    return {"message": "FastAPI MLflow Tracing Example"}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```
</TabItem> <TabItem value="typescript" label="TypeScript (Express)">
```typescript
import express, { Request, Response } from 'express';
import bodyParser from 'body-parser';
import * as mlflow from '@mlflow/core';
import { tracedOpenAI } from "@mlflow/openai";
import OpenAI from 'openai';

const app = express();
app.use(bodyParser.json());

const openai = tracedOpenAI(new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
}));

mlflow.init({
  trackingUri: "http://localhost:5000",
  experimentId: "<your-experiment-id>",
});

class Chat {
  @mlflow.trace({ spanType: mlflow.SpanType.LLM })
  static async process(message: string, userId: string, sessionId: string) {
    // Update MLflow trace metadata for this user and session
    await mlflow.updateCurrentTrace({
      sessionId: sessionId,
      user: userId,
    });

    const response = await openai.responses.create({
      model: 'gpt-4.1-mini',
      instructions: 'You are a helpful assistant.',
      input: message,
    });
    return response.output_text;
  }
}

app.post('/chat', async (req: Request, res: Response) => {
  const sessionId = req.header('X-Session-ID') || 'default-session';
  const userId = req.header('X-User-ID') || 'default-user';
  const message = req.body.message;

  try {
    const response = await Chat.process(message, userId, sessionId);
    res.json({ response: response });
  } catch (err) {
    res.status(500).json({ error: 'OpenAI request failed.' });
  }
});

app.get('/', (req: Request, res: Response) => {
  res.json({ message: 'Express MLflow Tracing Example' });
});

if (require.main === module) {
  app.listen(8000, () => {
    console.log('Server listening on http://localhost:8000');
  });
}
```
</TabItem> </Tabs>

Example request:

```bash
curl -X POST http://localhost:8000/chat \
    -H "Content-Type: application/json" \
    -H "X-Session-ID: session-123" \
    -H "X-User-ID: user-456" \
    -d '{"message": "Hello, how are you?"}'
```

## Querying

<Tabs> <TabItem value="ui-search" label="MLflow UI Search" default> Filter traces in the MLflow UI using these search queries:
```
# Find all traces for a specific user
metadata.`mlflow.trace.user` = 'user-123'

# Find all traces in a session
metadata.`mlflow.trace.session` = 'session-abc-456'

# Find traces for a user within a specific session
metadata.`mlflow.trace.user` = 'user-123' AND metadata.`mlflow.trace.session` = 'session-abc-456'
```
</TabItem> <TabItem value="user-analysis" label="Programmatic Analysis"> Analyze user behavior patterns programmatically:
```python
import mlflow
import pandas as pd

# Search for all traces from a specific user
user_id = "user-123"  # ID of the user to analyze
user_traces_df: pd.DataFrame = mlflow.search_traces(
    filter_string=f"metadata.`mlflow.trace.user` = '{user_id}'",
)

# Calculate key metrics
total_interactions = len(user_traces_df)
unique_sessions = user_traces_df["metadata.mlflow.trace.session"].nunique()
avg_response_time = user_traces_df["info.execution_time_ms"].mean()
success_rate = (user_traces_df["info.state"] == "OK").mean()

# Display the results
print(f"User has {total_interactions} interactions across {unique_sessions} sessions")
print(f"Average response time: {avg_response_time:.1f} ms")
print(f"Success rate: {success_rate:.1%}")
```
</TabItem> </Tabs>
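The same rows can be regrouped by session instead of user, for example to count conversation turns per session. A minimal stdlib sketch — the rows are stand-ins shaped like two columns of the `mlflow.search_traces` DataFrame, and `turns_per_session` is an illustrative helper, not an MLflow API:

```python
from collections import defaultdict


def turns_per_session(rows: list[dict]) -> dict:
    """Count traces (conversation turns) per session ID."""
    counts = defaultdict(int)
    for row in rows:
        counts[row["metadata.mlflow.trace.session"]] += 1
    return dict(counts)


# Stand-in rows mimicking the search_traces output columns
rows = [
    {"metadata.mlflow.trace.session": "session-abc-456", "info.execution_time_ms": 820},
    {"metadata.mlflow.trace.session": "session-abc-456", "info.execution_time_ms": 910},
    {"metadata.mlflow.trace.session": "session-def-789", "info.execution_time_ms": 640},
]
print(turns_per_session(rows))  # → {'session-abc-456': 2, 'session-def-789': 1}
```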

## Next Steps

<TilesGrid>
  <TileCard icon={MessageSquare} title="Evaluate Conversations" description="Assess multi-turn session quality with conversation-level scorers" href="/genai/eval-monitor/running-evaluation/multi-turn" linkText="Evaluate sessions →" />
  <TileCard icon={Search} title="Search Traces" description="Master advanced filtering techniques for user and session analysis" href="/genai/tracing/search-traces" linkText="Learn search →" />
  <TileCard icon={TrendingUp} title="Production Monitoring" description="Set up comprehensive production observability with user context" href="/genai/tracing/prod-tracing" linkText="Monitor production →" />
</TilesGrid>