# examples/config-websockets/streaming/server
Simple Node.js server using Express and native WebSockets that exposes two real-time endpoints to interact with the OpenAI Responses API.
## Setup

```bash
npm install
```

Set `OPENAI_API_KEY` in your environment, or use a `.env` file:

```bash
cp env.example .env
# edit .env and set OPENAI_API_KEY
```
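If you go the `.env` route, only a few values matter. The actual contents of `env.example` aren't reproduced here; a minimal `.env`, assuming the variables mentioned in this README and the port the client examples below connect to (3300), would look like:

```bash
# Required: your OpenAI API key
OPENAI_API_KEY=sk-...

# Optional: server port (the client examples below connect to 3300)
PORT=3300

# Optional: model sent to the Responses API (default: gpt-4.1-mini)
CHATBOT_MODEL=gpt-4.1-mini
```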
## Run

```bash
npm start
```
You can also run in dev mode with automatic restarts:
```bash
npm run dev
```
## Endpoints

- `GET /health` → `{ "status": "ok" }`

Two WebSocket upgrade paths are provided:

- `/ws`: non-streaming. Emits a single `response` when the OpenAI request completes.
- `/ws-stream`: streaming. Emits incremental `delta` and `message` events, then `done`.

Both endpoints accept the same request payload (the model is configured via env):

```json
{ "input": "Hello there!" }
```

The model is read from the `CHATBOT_MODEL` env var and defaults to `gpt-4.1-mini`.
## Client examples

Non-streaming (`/ws`):

```js
const ws = new WebSocket('ws://localhost:3300/ws');

ws.onopen = () => {
  ws.send(JSON.stringify({ input: 'Hello there!' }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  // msg.type: 'ready' | 'response' | 'done' | 'error'
  console.log(msg);
};

ws.onerror = (err) => console.error('ws error', err);
```
Streaming (`/ws-stream`):

```js
const ws = new WebSocket('ws://localhost:3300/ws-stream');

ws.onopen = () => {
  ws.send(JSON.stringify({ input: 'Stream this please' }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  // msg.type: 'ready' | 'delta' | 'message' | 'done' | 'error'
  if (msg.type === 'delta') process.stdout.write(msg.message || '');
  else console.log(msg);
};

ws.onerror = (err) => console.error('ws error', err);
```
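For context, the server side of `/ws-stream` boils down to forwarding Responses API stream events to the socket. The snippet below is only a sketch of that idea, not this repo's actual handler: it assumes the official `openai` npm package, `handleStream` is a made-up helper name, and the `ready` message is presumably sent once when the connection opens. The message shapes mirror what the client example above expects.

```js
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const model = process.env.CHATBOT_MODEL || 'gpt-4.1-mini';

// `ws` is a connected socket from the /ws-stream upgrade path,
// `payload` is the parsed client message, e.g. { input: 'Stream this please' }.
async function handleStream(ws, payload) {
  try {
    // Ask the Responses API for a streamed response.
    const stream = await openai.responses.create({
      model,
      input: payload.input,
      stream: true,
    });

    let full = '';
    for await (const event of stream) {
      if (event.type === 'response.output_text.delta') {
        full += event.delta;
        // Each incremental chunk becomes a 'delta' message.
        ws.send(JSON.stringify({ type: 'delta', message: event.delta }));
      }
    }

    // Full text as a final 'message', then 'done'.
    ws.send(JSON.stringify({ type: 'message', message: full }));
    ws.send(JSON.stringify({ type: 'done' }));
  } catch (err) {
    ws.send(JSON.stringify({ type: 'error', message: String(err) }));
  }
}
```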
## Configuration

- `PORT` in `.env`.
- `CHATBOT_MODEL` in `.env` (default: `gpt-4.1-mini`).
- Health check: `GET /health`.
- WebSocket upgrade paths: `/ws` and `/ws-stream`.
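For reference, serving two WebSocket upgrade paths on top of one Express HTTP server is usually done with the `ws` package in `noServer` mode. The sketch below shows that pattern under those assumptions; it is not this server's actual code, and the per-connection handler wiring is elided.

```js
import express from 'express';
import { createServer } from 'node:http';
import { WebSocketServer } from 'ws';

const app = express();
app.get('/health', (_req, res) => res.json({ status: 'ok' }));

const server = createServer(app);
const wss = new WebSocketServer({ noServer: true });       // handles /ws
const wssStream = new WebSocketServer({ noServer: true }); // handles /ws-stream

// Route upgrade requests to the right WebSocket server by path.
server.on('upgrade', (req, socket, head) => {
  const { pathname } = new URL(req.url, `http://${req.headers.host}`);
  const target =
    pathname === '/ws' ? wss : pathname === '/ws-stream' ? wssStream : null;
  if (!target) return socket.destroy();
  target.handleUpgrade(req, socket, head, (ws) => target.emit('connection', ws, req));
});

server.listen(process.env.PORT || 3300);
```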