# Wave AI Architecture

## Overview
Wave AI is a chat-based AI assistant feature integrated into Wave Terminal. It provides a conversational interface for interacting with various AI providers (OpenAI, Anthropic, Perplexity, Google, and Wave's cloud proxy) through a unified streaming architecture. The feature is implemented as a block view within Wave Terminal's modular system.
## Frontend Architecture (`frontend/app/view/waveai/`)

### Core Components

1. **WaveAiModel Class**: implements the ViewModel interface and owns the chat state.
2. **AiWshClient Class**: extends WshClient and handles incoming `aisendmessage` RPC calls via its `sendMessage` method.
3. **React Components**: render the chat window, message list, and input.
Message State:
messagesAtom: PrimitiveAtom<Array<ChatMessageType>>
messagesSplitAtom: SplitAtom<Array<ChatMessageType>>
latestMessageAtom: Atom<ChatMessageType>
addMessageAtom: WritableAtom<unknown, [message: ChatMessageType], void>
updateLastMessageAtom: WritableAtom<unknown, [text: string, isUpdating: boolean], void>
removeLastMessageAtom: WritableAtom<unknown, [], void>
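As a rough illustration, the two writable atoms might be built like this with Jotai. This is a sketch, not the actual Wave implementation; in particular, whether `updateLastMessageAtom` appends or replaces text is an assumption:

```ts
import { atom, type PrimitiveAtom } from "jotai";

interface ChatMessageType {
    id: string;
    user: string; // "user" | "assistant" | "error"
    text: string;
    isUpdating?: boolean;
}

const messagesAtom: PrimitiveAtom<Array<ChatMessageType>> = atom<Array<ChatMessageType>>([]);

// Append a new message to the conversation.
const addMessageAtom = atom(null, (get, set, message: ChatMessageType) => {
    set(messagesAtom, [...get(messagesAtom), message]);
});

// Merge a streamed text chunk into the last message (appending is an assumption).
const updateLastMessageAtom = atom(null, (get, set, text: string, isUpdating: boolean) => {
    const messages = get(messagesAtom);
    if (messages.length === 0) return;
    const last = messages[messages.length - 1];
    set(messagesAtom, [
        ...messages.slice(0, -1),
        { ...last, text: last.text + text, isUpdating },
    ]);
});
```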
**Configuration State:**

- `presetKey: Atom<string>`: current AI preset selection
- `presetMap: Atom<{[k: string]: MetaType}>`: available AI presets
- `mergedPresets: Atom<MetaType>`: merged configuration hierarchy
- `aiOpts: Atom<WaveAIOptsType>`: final AI options for requests
**UI State:**

- `locked: PrimitiveAtom<boolean>`: prevents input while an AI response is streaming
- `viewIcon: Atom<string>`: header icon
- `viewName: Atom<string>`: header title
- `viewText: Atom<HeaderElem[]>`: dynamic header elements
- `endIconButtons: Atom<IconButtonDecl[]>`: header action buttons
### Configuration Resolution

The AI configuration follows a three-tier hierarchy (lowest to highest priority):

1. Global settings: `atoms.settingsAtom["ai:*"]`
2. Preset settings: `presets[presetKey]["ai:*"]`
3. Block-level overrides: `block.meta["ai:*"]`

Configuration is merged using the `mergeMeta()` utility, allowing fine-grained overrides at each level (see the sketch below).
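A simplified sketch of that merge order; this flat key-by-key merge stands in for the real `mergeMeta()` utility, whose exact semantics are not documented here:

```ts
type MetaType = { [key: string]: any };

declare const globalSettings: MetaType; // value of atoms.settingsAtom
declare const presets: { [k: string]: MetaType };
declare const presetKey: string;
declare const blockMeta: MetaType; // block.meta

// Later layers win, matching the lowest-to-highest priority order.
function mergeAiMeta(...layers: MetaType[]): MetaType {
    const merged: MetaType = {};
    for (const layer of layers) {
        for (const [key, value] of Object.entries(layer)) {
            // Only "ai:*" keys participate in the AI configuration hierarchy.
            if (key.startsWith("ai:") && value != null) {
                merged[key] = value;
            }
        }
    }
    return merged;
}

const mergedAiConfig = mergeAiMeta(globalSettings, presets[presetKey] ?? {}, blockMeta);
```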
### Message Flow

```text
User Input → sendMessage() →
  ├── Add user message to UI
  ├── Create WaveAIStreamRequest
  ├── Call RpcApi.StreamWaveAiCommand()
  ├── Add typing indicator
  └── Stream response handling:
        ├── Update message incrementally
        ├── Handle errors
        └── Save complete conversation
```
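Put together, the flow might look roughly like the sketch below. The model methods (`addMessage`, `updateLastMessage`, `setLocked`, `saveConversation`, and friends) are hypothetical wrappers around the atoms listed earlier, and the async-iterable shape of `RpcApi.StreamWaveAiCommand` is an assumption:

```ts
// Loose stand-ins for the sketch; the real types appear in the RPC section below.
type WaveAIOptsType = Record<string, any>;
type WaveAIPromptMessageType = { role: string; content: string; name?: string };
interface WaveAIStreamRequest {
    clientid?: string;
    opts: WaveAIOptsType;
    prompt: WaveAIPromptMessageType[];
}
interface WaveAIPacketType {
    type: string;
    text?: string;
    error?: string;
    finish_reason?: string;
}
interface WaveAiModel {
    addMessage(msg: { id: string; user: string; text: string; isUpdating?: boolean }): void;
    updateLastMessage(text: string, isUpdating: boolean): void;
    removeLastMessage(): void;
    setLocked(locked: boolean): void;
    getAiOpts(): WaveAIOptsType;
    getPromptHistory(): WaveAIPromptMessageType[];
    saveConversation(): Promise<void>;
}
declare const RpcApi: {
    StreamWaveAiCommand(req: WaveAIStreamRequest): AsyncIterable<WaveAIPacketType>;
};

async function sendMessage(model: WaveAiModel, text: string) {
    // 1. Show the user's message immediately and lock the input.
    model.addMessage({ id: crypto.randomUUID(), user: "user", text });
    model.setLocked(true);

    // 2. Build the streaming request from the merged AI options and history.
    const request: WaveAIStreamRequest = {
        opts: model.getAiOpts(),
        prompt: model.getPromptHistory(),
    };

    // 3. Add a typing indicator, then stream the response into it.
    model.addMessage({ id: crypto.randomUUID(), user: "assistant", text: "", isUpdating: true });
    try {
        for await (const packet of RpcApi.StreamWaveAiCommand(request)) {
            if (packet.error) throw new Error(packet.error);
            if (packet.text) model.updateLastMessage(packet.text, true);
        }
        model.updateLastMessage("", false); // mark streaming as complete
        await model.saveConversation(); // persists via BlockService.SaveWaveAiData
    } catch (e) {
        model.removeLastMessage();
        model.addMessage({ id: crypto.randomUUID(), user: "error", text: String(e) });
    } finally {
        model.setLocked(false);
    }
}
```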
## Backend Architecture (`pkg/waveai/`)

**AIBackend Interface:**
```go
type AIBackend interface {
	StreamCompletion(
		ctx context.Context,
		request wshrpc.WaveAIStreamRequest,
	) chan wshrpc.RespOrErrorUnion[wshrpc.WaveAIPacketType]
}
```
**Backend Implementations:**

1. **OpenAIBackend** (`openaibackend.go`): uses the go-openai library for SSE streaming.
2. **AnthropicBackend** (`anthropicbackend.go`): handles Anthropic's SSE event types (`message_start`, `content_block_delta`, `message_stop`, etc.).
3. **WaveAICloudBackend** (`cloudbackend.go`): routes requests through Wave's cloud proxy.
4. **PerplexityBackend** (`perplexitybackend.go`)
5. **GoogleBackend** (`googlebackend.go`)
**Request Routing:**

```go
func RunAICommand(ctx context.Context, request wshrpc.WaveAIStreamRequest) chan wshrpc.RespOrErrorUnion[wshrpc.WaveAIPacketType] {
	// Route to a backend based on request.Opts.APIType.
	var backend AIBackend
	switch request.Opts.APIType {
	case "anthropic":
		backend = AnthropicBackend{}
	case "perplexity":
		backend = PerplexityBackend{}
	case "google":
		backend = GoogleBackend{}
	default:
		if IsCloudAIRequest(request.Opts) {
			backend = WaveAICloudBackend{}
		} else {
			backend = OpenAIBackend{}
		}
	}
	return backend.StreamCompletion(ctx, request)
}
```
## RPC Interface

- **Command:** `streamwaveai`
- **Type:** response stream (one request, multiple responses)

**Request Type (`WaveAIStreamRequest`):**

```go
type WaveAIStreamRequest struct {
	ClientId string                    `json:"clientid,omitempty"`
	Opts     *WaveAIOptsType           `json:"opts"`
	Prompt   []WaveAIPromptMessageType `json:"prompt"`
}
```
**Response Type (`WaveAIPacketType`):**

```go
type WaveAIPacketType struct {
	Type         string           `json:"type"`
	Model        string           `json:"model,omitempty"`
	Created      int64            `json:"created,omitempty"`
	FinishReason string           `json:"finish_reason,omitempty"`
	Usage        *WaveAIUsageType `json:"usage,omitempty"`
	Index        int              `json:"index,omitempty"`
	Text         string           `json:"text,omitempty"`
	Error        string           `json:"error,omitempty"`
}
```
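For reference, a TypeScript mirror of this packet and one plausible way a consumer could fold packets into UI state; the handling policy and the `WaveAIUsageType` placeholder are assumptions, not the actual Wave code:

```ts
type WaveAIUsageType = Record<string, number>; // placeholder; real fields not shown here

interface WaveAIPacketType {
    type: string;
    model?: string;
    created?: number;
    finish_reason?: string;
    usage?: WaveAIUsageType;
    index?: number;
    text?: string;
    error?: string;
}

function handlePacket(
    packet: WaveAIPacketType,
    ui: { appendText(t: string): void; finish(): void; showError(e: string): void }
) {
    if (packet.error) {
        // Backend errors surface as an "error" chat message.
        ui.showError(packet.error);
        return;
    }
    if (packet.text) {
        // Incremental deltas are appended to the in-progress assistant message.
        ui.appendText(packet.text);
    }
    if (packet.finish_reason) {
        // A finish_reason marks the end of the stream for this response.
        ui.finish();
    }
}
```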
**AI Options (`WaveAIOptsType`):**

```go
type WaveAIOptsType struct {
	Model      string `json:"model"`
	APIType    string `json:"apitype,omitempty"`
	APIToken   string `json:"apitoken"`
	OrgID      string `json:"orgid,omitempty"`
	APIVersion string `json:"apiversion,omitempty"`
	BaseURL    string `json:"baseurl,omitempty"`
	ProxyURL   string `json:"proxyurl,omitempty"`
	MaxTokens  int    `json:"maxtokens,omitempty"`
	MaxChoices int    `json:"maxchoices,omitempty"`
	TimeoutMs  int    `json:"timeoutms,omitempty"`
}
```
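One plausible mapping from the merged `ai:*` keys (see Configuration Keys below) to this options shape, sketched in TypeScript; the real conversion lives in the model and may differ:

```ts
interface WaveAIOptsType {
    model: string;
    apitype?: string;
    apitoken: string;
    orgid?: string;
    apiversion?: string;
    baseurl?: string;
    proxyurl?: string;
    maxtokens?: number;
    maxchoices?: number;
    timeoutms?: number;
}

function optsFromMeta(meta: { [key: string]: any }): WaveAIOptsType {
    // Keys not listed in the Configuration Keys section
    // (orgid, apiversion, maxchoices) are omitted from this sketch.
    return {
        model: meta["ai:model"] ?? "",
        apitype: meta["ai:apitype"],
        apitoken: meta["ai:apitoken"] ?? "",
        baseurl: meta["ai:baseurl"],
        proxyurl: meta["ai:proxyurl"],
        maxtokens: meta["ai:maxtokens"],
        timeoutms: meta["ai:timeoutms"],
    };
}
```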
## Conversation Persistence

**Frontend:**

- History is loaded with `fetchWaveFile(blockId, "aidata")` and converted to `WaveAIPromptMessageType` records.
- Requests include only a sliding window of recent messages (`slidingWindowSize = 30`), as sketched after the type definitions below.

**Backend:**

- Complete conversations are saved via `BlockService.SaveWaveAiData(blockId, history)`.

## Message Types

**UI Messages (`ChatMessageType`):**
```ts
interface ChatMessageType {
    id: string;
    user: string; // "user" | "assistant" | "error"
    text: string;
    isUpdating?: boolean;
}
```
**Stored Messages (`WaveAIPromptMessageType`):**

```go
type WaveAIPromptMessageType struct {
	Role    string `json:"role"` // "user" | "assistant" | "system" | "error"
	Content string `json:"content"`
	Name    string `json:"name,omitempty"`
}
```
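A sketch of the UI-to-stored conversion with the sliding window mentioned above; the direct `user`-to-`role` mapping is an assumption:

```ts
const slidingWindowSize = 30;

interface ChatMessageType {
    id: string;
    user: string; // "user" | "assistant" | "error"
    text: string;
    isUpdating?: boolean;
}

interface WaveAIPromptMessageType {
    role: string; // "user" | "assistant" | "system" | "error"
    content: string;
    name?: string;
}

function buildPromptHistory(messages: ChatMessageType[]): WaveAIPromptMessageType[] {
    // Keep only the most recent messages so long conversations stay bounded.
    return messages.slice(-slidingWindowSize).map((m) => ({
        role: m.user, // UI roles and stored roles share the same vocabulary
        content: m.text,
    }));
}
```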
## Cancellation and Error Handling

- The user can stop an in-flight response; cancellation is flagged on the model (`model.cancel = true`) and checked during streaming.
- Backend streaming goroutines recover from panics via `panichandler.PanicHandler()`.

## AI Presets

**Preset Structure:**
```json
{
    "ai@preset-name": {
        "display:name": "Preset Display Name",
        "display:order": 1,
        "ai:model": "gpt-4",
        "ai:apitype": "openai",
        "ai:apitoken": "sk-...",
        "ai:baseurl": "https://api.openai.com/v1",
        "ai:maxtokens": 4000,
        "ai:fontsize": "14px",
        "ai:fixedfontsize": "12px"
    }
}
```
**Configuration Keys:**

- `ai:model`: AI model name
- `ai:apitype`: provider type (openai, anthropic, perplexity, google)
- `ai:apitoken`: API authentication token
- `ai:baseurl`: custom API endpoint
- `ai:proxyurl`: HTTP proxy URL
- `ai:maxtokens`: maximum response tokens
- `ai:timeoutms`: request timeout in milliseconds
- `ai:fontsize`: UI font size
- `ai:fixedfontsize`: code block font size

The UI automatically detects and displays the active provider, as in the sketch below.
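A sketch of how that detection might key off the resolved options, mirroring the backend routing in `RunAICommand()`; the display labels and the cloud-detection heuristic are assumptions:

```ts
interface AiOptsLike {
    apitype?: string;
    apitoken?: string;
    baseurl?: string;
}

function providerLabel(opts: AiOptsLike): string {
    switch (opts.apitype) {
        case "anthropic":
            return "Anthropic";
        case "perplexity":
            return "Perplexity";
        case "google":
            return "Google";
        default:
            // No explicit apitype: an explicit token or endpoint suggests a direct
            // OpenAI-compatible backend; otherwise assume Wave's cloud proxy.
            return opts.apitoken || opts.baseurl ? "OpenAI" : "Wave Proxy";
    }
}
```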
## Performance and Extensibility

- `ChatWindow` and `ChatItem` are memoized to avoid unnecessary re-renders.
- New providers plug in by implementing `AIBackend` and extending the `RunAICommand()` routing logic.
- Header content is customizable through the `viewText` atom.

This architecture provides a flexible, extensible foundation for AI chat functionality while maintaining clean separation between UI, business logic, and provider integrations.