content/providers/03-community-providers/46-codex-app-server.mdx
The `ai-sdk-provider-codex-app-server` community provider lets you use OpenAI's GPT-5 series models through the Codex CLI's app-server mode. Unlike the standard Codex CLI provider, it supports mid-execution message injection and persistent threads.
| Provider Version | AI SDK Version | Status |
|---|---|---|
| 1.x | v6 | Stable |
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add ai-sdk-provider-codex-app-server" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install ai-sdk-provider-codex-app-server" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add ai-sdk-provider-codex-app-server" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add ai-sdk-provider-codex-app-server" dark />
  </Tab>
</Tabs>
Create a provider instance with `createCodexAppServer`, using the `onSessionCreated` callback to capture the session for later use:
```ts
import {
  createCodexAppServer,
  type Session,
} from 'ai-sdk-provider-codex-app-server';

let session: Session;

const provider = createCodexAppServer({
  defaultSettings: {
    onSessionCreated: s => {
      session = s;
    },
  },
});
```
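With the provider configured, a one-shot call works like any other AI SDK model. A minimal sketch, assuming a standard `generateText` call with one of the model IDs listed below:

```ts
import { generateText } from 'ai';

// One-shot generation; the onSessionCreated callback above fires once
// the app-server opens a thread for this call.
const { text } = await generateText({
  model: provider('gpt-5.1-codex-max'),
  prompt: 'Summarize what this repository does.',
});

console.log(text);
```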
The killer feature of this provider is the ability to inject messages while the agent is actively working:
```ts
import {
  createCodexAppServer,
  type Session,
} from 'ai-sdk-provider-codex-app-server';
import { streamText } from 'ai';

let session: Session;

const provider = createCodexAppServer({
  defaultSettings: {
    onSessionCreated: s => {
      session = s;
    },
  },
});

const model = provider('gpt-5.1-codex-max');

// Start streaming
const result = streamText({
  model,
  prompt: 'Write a calculator in Python',
});

// Inject additional instructions mid-execution
setTimeout(async () => {
  await session.injectMessage('Also add a square root function');
}, 2000);

console.log(await result.text);
```
The `Session` object provides control over active turns:
```ts
interface Session {
  readonly threadId: string;
  readonly turnId: string | null;

  // Inject a message mid-execution
  injectMessage(content: string | UserInput[]): Promise<void>;

  // Interrupt the current turn
  interrupt(): Promise<void>;

  // Check if a turn is active
  isActive(): boolean;
}
```
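For example, `isActive()` and `interrupt()` can be combined to cut off a turn that runs too long. A small sketch reusing the `session` captured by `onSessionCreated`; the 30-second timeout is illustrative:

```ts
// Stop the current turn if it is still running after 30 seconds.
setTimeout(async () => {
  if (session.isActive()) {
    console.log(`Interrupting turn ${session.turnId} on thread ${session.threadId}`);
    await session.interrupt();
  }
}, 30_000);
```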
Discover available models and their capabilities:
```ts
import { listModels } from 'ai-sdk-provider-codex-app-server';

const { models, defaultModel } = await listModels();

for (const model of models) {
  console.log(`${model.id}: ${model.description}`);
  const efforts = model.supportedReasoningEfforts.map(e => e.reasoningEffort);
  console.log(`  Reasoning: ${efforts.join(', ')}`);
}
```
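The discovered IDs can be passed straight back to the provider. A small sketch, assuming the `provider` instance from the setup example is in scope:

```ts
// Instantiate one of the discovered models by its ID.
const discoveredModel = provider(models[0].id);
```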
The provider accepts the following settings, which can be supplied as `defaultSettings` when creating the provider, per model, or via `providerOptions`:

```ts
interface CodexAppServerSettings {
  codexPath?: string; // Path to codex binary
  cwd?: string; // Working directory
  approvalMode?: 'never' | 'on-request' | 'on-failure' | 'untrusted';
  sandboxMode?: 'read-only' | 'workspace-write' | 'danger-full-access';
  reasoningEffort?: 'none' | 'low' | 'medium' | 'high';
  threadMode?: 'persistent' | 'stateless';
  mcpServers?: Record<string, McpServerConfig>;
  verbose?: boolean;
  logger?: Logger | false;
  onSessionCreated?: (session: Session) => void;
  env?: Record<string, string>;
  baseInstructions?: string;
  resume?: string; // Thread ID to resume
}
```
Pass settings for a specific model as the second argument:

```ts
const model = provider('gpt-5.1-codex-max', {
  threadMode: 'stateless', // Fresh thread each call
});
```
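Settings can also be applied to every model at once via `defaultSettings`, as in the setup example. A sketch with illustrative values, assuming `defaultSettings` accepts the same `CodexAppServerSettings` fields shown above:

```ts
// Provider-wide defaults; per-model settings and providerOptions can still override them.
const sandboxedProvider = createCodexAppServer({
  defaultSettings: {
    cwd: '/path/to/project', // illustrative path
    sandboxMode: 'workspace-write',
    approvalMode: 'never',
    reasoningEffort: 'medium',
  },
});
```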
Override settings per call using `providerOptions`:

```ts
const result = streamText({
  model,
  prompt: 'Analyze this code',
  providerOptions: {
    'codex-app-server': {
      reasoningEffort: 'high',
      threadMode: 'stateless',
    },
  },
});
```
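Because the settings include a `resume` field and the session exposes `threadId`, a later call can pick up an existing thread. A sketch, assuming the thread ID was captured from an earlier session via `onSessionCreated`:

```ts
// Save the thread ID from an earlier session...
const savedThreadId = session.threadId;

// ...and resume that thread in a new model instance later.
const resumedModel = provider('gpt-5.1-codex-max', {
  resume: savedThreadId,
});

const followUp = streamText({
  model: resumedModel,
  prompt: 'Now add unit tests for the calculator.',
});

console.log(await followUp.text);
```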
| Model | Image Input | Object Generation | Tool Streaming | Mid-Execution |
|---|---|---|---|---|
| `gpt-5.3-codex` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `gpt-5.2-codex` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `gpt-5.1-codex-max` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `gpt-5.1-codex-mini` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| Feature | Codex CLI Provider | Codex App Server |
|---|---|---|
| Mid-execution inject | <Cross size={18} /> | <Check size={18} /> |
| Persistent threads | <Cross size={18} /> | <Check size={18} /> |
| Session control | <Cross size={18} /> | <Check size={18} /> |
| Tool streaming | <Cross size={18} /> | <Check size={18} /> |
| One-shot execution | <Check size={18} /> | <Check size={18} /> |
Use the Codex CLI provider for simple one-shot tasks. Use the Codex App Server provider when you need human-in-the-loop workflows, real-time course correction, or collaborative coding.
For more details, see the provider documentation.