You have a chat surface or a hook driving an agent and you want every agent run to know who the request came from. By the end of this guide, your frontend will forward a token, the runtime will pass it through, and your agent code will read the resulting user info on every turn.
If you don't need any of those, skip auth entirely. The agent runs anonymously and the frontend never has to care about tokens.
<InlineDemo demo="auth" />

<WhenFrameworkHas flag="auth_pattern" equals="runtime-onrequest">
Pass your token via the `headers` prop on `<CopilotKit>`. CopilotKit forwards every request with that header attached.
```tsx
import { CopilotKit } from "@copilotkit/react-core/v2";

<CopilotKit
  runtimeUrl="/api/copilotkit"
  headers={{
    Authorization: `Bearer ${userToken}`,
  }}
>
  <YourApp />
</CopilotKit>
```
Wire authentication into the V2 runtime via the `onRequest` hook. The hook runs before any agent code and operates on the raw `Request`, so it's the right place to read the `Authorization` header, run your verifier, and either let the request through or short-circuit with a 401:
```ts
import type { NextRequest } from "next/server";
import {
  CopilotRuntime,
  createCopilotRuntimeHandler,
} from "@copilotkit/runtime/v2";

const runtime = new CopilotRuntime({ agents: { default: myAgent } });

const handler = createCopilotRuntimeHandler({
  runtime,
  basePath: "/api/copilotkit",
  hooks: {
    onRequest: ({ request }) => {
      const authHeader = request.headers.get("authorization");
      if (!authHeader?.startsWith("Bearer ")) {
        throw new Response(
          JSON.stringify({ error: "unauthorized" }),
          { status: 401, headers: { "content-type": "application/json" } },
        );
      }
      const token = authHeader.slice("Bearer ".length);
      const user = verifyJwt(token); // your validation
      // attach user to request-scoped context here
    },
  },
});

export const POST = (req: NextRequest) => handler(req);
export const GET = (req: NextRequest) => handler(req);
```
The V1 Next.js adapter (`copilotRuntimeNextJSAppRouterEndpoint`) does not forward the `hooks` option. Use `createCopilotRuntimeHandler` from `@copilotkit/runtime/v2` directly when you need the `onRequest` gate.

</WhenFrameworkHas> <WhenFrameworkHas flag="auth_pattern" equals="langgraph">
Pass your token via the `properties` prop. CopilotKit forwards it to LangGraph as a Bearer token automatically.
```tsx
import { CopilotKit } from "@copilotkit/react-core/v2";

<CopilotKit
  runtimeUrl="/api/copilotkit"
  properties={{
    authorization: userToken,
  }}
>
  <YourApp />
</CopilotKit>
```
LangGraph supports two deployment modes. The frontend code above is the same in both, but the backend wiring differs in where the resolved user identity lands. Pick the tab that matches where your agent runs.
<Tabs items={['LangGraph Platform', 'Self-hosted']}> <Tab value="LangGraph Platform">
On LangGraph Platform, authentication is a managed service. You declare an `@auth.authenticate` handler, and Platform runs it on every request before the graph starts. The handler returns a user object that becomes available to every node in the run.
```python
from langgraph_sdk import Auth

auth = Auth()

@auth.authenticate
async def authenticate(authorization: str | None):
    if not authorization or not authorization.startswith("Bearer "):
        raise Auth.exceptions.HTTPException(status_code=401, detail="Unauthorized")

    token = authorization.replace("Bearer ", "")
    user_info = validate_your_token(token)  # your validation logic

    return {
        "identity": user_info["user_id"],
        "role": user_info.get("role"),
        "permissions": user_info.get("permissions", []),
    }
```
The return value of the handler shows up in every node's `config["configuration"]["langgraph_auth_user"]`. From there, scoping tool access or filtering data is straightforward:
```python
async def my_agent_node(state: AgentState, config: RunnableConfig):
    user_info = config["configuration"]["langgraph_auth_user"]
    user_id = user_info["identity"]
    user_role = user_info.get("role")
    # agent logic with user context
    return state
```
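For instance, filtering data by that identity is a one-liner. A sketch (the `owner_id` field is a hypothetical document attribute; substitute whatever ownership marker your data carries):

```python
def scope_to_user(documents: list[dict], user_id: str) -> list[dict]:
    """Keep only the documents owned by the authenticated user."""
    return [doc for doc in documents if doc.get("owner_id") == user_id]
```

A node would call this with `user_info["identity"]` before handing results to the model.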
For full handler details, see the LangGraph Platform Authentication documentation.
</Tab> <Tab value="Self-hosted">When you self-host the agent, there's no managed auth handler to plug into. Instead, you forward the raw token into every run by configuring the agent dynamically: the request's `properties.authorization` becomes part of `langgraph_config["configurable"]`, where every node can read it back later.
```python
from copilotkit import CopilotKitRemoteEndpoint, LangGraphAgent

sdk = CopilotKitRemoteEndpoint(
    agents=lambda context: [
        LangGraphAgent(
            name="sample_agent",
            description="Agent with authentication support",
            graph=graph,
            langgraph_config={
                "configurable": {
                    "copilotkit_auth": context["properties"].get("authorization"),
                },
            },
        ),
    ],
)
```
Validation is your job in this mode. Inside any node, pull the token out of `config["configurable"]` and run it through your verifier. Decide the policy explicitly: reject unauthenticated calls, or fall through to an anonymous branch as the example below does.
```python
async def my_agent_node(state: AgentState, config: RunnableConfig):
    auth_token = config["configurable"].get("copilotkit_auth")

    if auth_token:
        user_info = validate_your_token(auth_token)
        user_id = user_info["user_id"]
        user_role = user_info.get("role")
    else:
        user_id = "anonymous"
        user_role = None

    return state
```
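In this mode, `validate_your_token` is yours to write. A minimal sketch that verifies an HS256-signed JWT with only the standard library (the shared `SECRET` is an assumption for illustration; production code should use a vetted JWT library and your identity provider's keys):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret; in production, verify against your IdP's keys instead.
SECRET = b"demo-secret"

def _b64url_decode(part: str) -> bytes:
    # JWTs strip base64url padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def validate_your_token(token: str) -> dict:
    """Check an HS256 JWT's signature and expiry, returning its claims."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        raise ValueError("malformed token")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims  # e.g. {"user_id": ..., "role": ...}
```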
</Tab> </Tabs> </WhenFrameworkHas>

Pass your token via the `properties` prop. CopilotKit forwards it to AG2's `/chat` endpoint as a request header.
```tsx
import { CopilotKit } from "@copilotkit/react-core/v2";

<CopilotKit
  runtimeUrl="/api/copilotkit"
  properties={{
    authorization: userToken,
  }}
>
  <YourApp />
</CopilotKit>
```
The backend has two responsibilities: validate the token before the agent dispatches, and thread the resolved user identity into AG2's `ContextVariables` so tools can read it later.
Start by validating the token on AG2's `/chat` endpoint. The `Authorization` header arrives as a normal FastAPI `Header(...)` parameter:
```python
from fastapi import FastAPI, Header, HTTPException
from fastapi.responses import StreamingResponse
from autogen import ConversableAgent, LLMConfig
from autogen.ag_ui import AGUIStream, RunAgentInput

agent = ConversableAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config=LLMConfig({"model": "gpt-5.4-mini"}),
)
stream = AGUIStream(agent)
app = FastAPI()

def validate_your_token(token: str) -> dict:
    if token != "valid-token":
        raise HTTPException(status_code=401, detail="Unauthorized")
    return {"user_id": "user_123", "role": "member"}

@app.post("/chat")
async def run_agent(
    message: RunAgentInput,
    accept: str | None = Header(None),
    authorization: str | None = Header(None),
):
    if not authorization:
        raise HTTPException(status_code=401, detail="Missing authorization header")

    token = authorization.replace("Bearer ", "")
    user_info = validate_your_token(token)
    # use user_info to scope tools, state, and data access before dispatch

    return StreamingResponse(
        stream.dispatch(message, accept=accept),
        media_type=accept or "text/event-stream",
    )
```
Once the token is validated, AG2's tools can read the user identity straight out of `ContextVariables`. This is how you make individual tool calls aware of who's asking, without having to thread the user object manually through every helper:
```python
from typing import Annotated

from autogen import ContextVariables

@agent.register_for_llm(description="Return account data for the authenticated user.")
def get_account_data(
    context: ContextVariables,
    account_id: Annotated[str, "The target account id"],
) -> dict:
    user = context.get("auth_user")
    if not user:
        return {"error": "unauthorized"}
    if account_id not in user.get("allowed_accounts", []):
        return {"error": "forbidden"}
    return {"account_id": account_id, "owner": user["user_id"]}
```
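The tool above expects an `auth_user` entry to already exist in the context; AG2 won't put it there for you. A small helper can shape the validated claims into that entry before dispatch (a sketch: the `allowed_accounts` claim and the `ContextVariables(data=...)` construction are assumptions to check against your AG2 version):

```python
def build_auth_context(user_info: dict) -> dict:
    """Shape validated token claims into the mapping tools read via context.get("auth_user")."""
    return {
        "auth_user": {
            "user_id": user_info["user_id"],
            "allowed_accounts": user_info.get("allowed_accounts", []),
        }
    }
```

Inside the `/chat` handler, pass the result when constructing the run context, e.g. `ContextVariables(data=build_auth_context(user_info))`.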
Microsoft Agent Framework's AG-UI host expects authentication on a request header rather than the runtime properties channel. Pass the token via `<CopilotKit headers={...}>`:
```tsx
import { CopilotKit } from "@copilotkit/react-core/v2";

<CopilotKit
  runtimeUrl="/api/copilotkit"
  headers={{
    Authorization: `Bearer ${userToken}`,
  }}
>
  <YourApp />
</CopilotKit>
```
Validation lives at the host process level: ASP.NET Core's JwtBearer middleware on the .NET host, FastAPI middleware on the Python host. Either way, the AG-UI endpoint refuses to dispatch the agent until the token is verified — so by the time your tools run, the user identity is already trustworthy.
<Tabs groupId="language_microsoft-agent-framework_agent" items={['.NET', 'Python']} persist> <Tab value=".NET">
```csharp
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Hosting.AGUI.AspNetCore;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using OpenAI;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = builder.Configuration["JwtAuthority"];
        options.Audience = builder.Configuration["JwtAudience"];
        options.TokenValidationParameters = new Microsoft.IdentityModel.Tokens.TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
        };
    });
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();

string githubToken = builder.Configuration["GitHubToken"]!;
var openAI = new OpenAIClient(
    new System.ClientModel.ApiKeyCredential(githubToken),
    new OpenAIClientOptions { Endpoint = new Uri("https://models.inference.ai.azure.com") }
);
var agent = openAI.GetChatClient("gpt-5.4-mini")
    .CreateAIAgent(name: "AGUIAssistant", instructions: "You are a helpful assistant.");

app.MapAGUI("/", agent).RequireAuthorization();

await app.RunAsync();
```
Settings live in `appsettings.json`:
```json
{
  "JwtAuthority": "https://login.microsoftonline.com/{your-tenant-id}/v2.0",
  "JwtAudience": "api://{your-client-id}",
  "GitHubToken": "your-github-token-here"
}
```
</Tab> <Tab value="Python">
```python
import os

from fastapi import FastAPI, Request, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from agent_framework.ag_ui import add_agent_framework_fastapi_endpoint
from agent import build_chat_client, create_agent

app = FastAPI(title="CopilotKit + Microsoft Agent Framework")
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

REQUIRED_BEARER_TOKEN = os.getenv("AUTH_BEARER_TOKEN")

@app.middleware("http")
async def auth_middleware(request: Request, call_next):
    if REQUIRED_BEARER_TOKEN and request.url.path == "/":
        auth_header = request.headers.get("Authorization", "")
        # Return a response directly: HTTPException raised inside middleware
        # bypasses FastAPI's exception handlers and surfaces as a 500.
        if not auth_header.startswith("Bearer "):
            return JSONResponse(
                status_code=status.HTTP_401_UNAUTHORIZED,
                content={"detail": "Missing bearer token"},
            )
        token = auth_header.split(" ", 1)[1].strip()
        if token != REQUIRED_BEARER_TOKEN:
            return JSONResponse(
                status_code=status.HTTP_401_UNAUTHORIZED,
                content={"detail": "Invalid token"},
            )
    return await call_next(request)

chat_client = build_chat_client()  # Azure OpenAI or OpenAI
my_agent = create_agent(chat_client)
add_agent_framework_fastapi_endpoint(app=app, agent=my_agent, path="/")
```
Settings live in `agent/.env`:
```bash
AUTH_BEARER_TOKEN=super-secret-demo-token
```

</Tab> </Tabs>
The most common reason to wire auth is so individual tools can decline to run. Read the resolved user inside the tool's handler and bail if the role doesn't match:
```python
def delete_record(record_id: str, *, user: User):
    # `User` is your application's user model; `permissions` is any collection of role strings
    if "admin" not in user.permissions:
        raise PermissionError("admin role required")
    # do the delete
```
This composes with Human in the loop: gate on auth first, surface a confirmation card next, execute last.
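That ordering can be sketched as a small wrapper (names are hypothetical: `confirm` stands in for your human-in-the-loop prompt, `do_delete` for the real mutation):

```python
def guarded_delete(record_id: str, *, user, confirm, do_delete) -> dict:
    # 1) Gate on auth first: unauthorized callers never reach the confirmation step.
    if "admin" not in getattr(user, "permissions", ()):
        raise PermissionError("admin role required")
    # 2) Surface a confirmation next.
    if not confirm(f"Delete record {record_id}?"):
        return {"status": "cancelled"}
    # 3) Execute last.
    do_delete(record_id)
    return {"status": "deleted", "record_id": record_id}
```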