docs/architecture/llm-refactor.md
This document describes the refactoring of Prompt Optimizer's LLM service architecture from a monolithic Service into a three-layer Provider-Adapter-Registry architecture, improving modularity, extensibility, and maintainability.
```mermaid
graph TB
    subgraph "Application Layer"
        UI[UI Components]
        LLMService[LLMService]
    end

    subgraph "Configuration Layer"
        TextModelConfig[TextModelConfig<br/>self-contained config]
        ModelManager[ModelManager<br/>config management]
    end

    subgraph "Provider Layer"
        TextProvider[TextProvider<br/>provider metadata]
        TextModel[TextModel<br/>model metadata]
    end

    subgraph "Adapter Layer"
        Registry[TextAdapterRegistry<br/>adapter registry]
        OpenAIAdapter[OpenAIAdapter]
        GeminiAdapter[GeminiAdapter]
        AnthropicAdapter[AnthropicAdapter]
    end

    subgraph "SDK Layer"
        OpenAISDK[OpenAI SDK]
        GeminiSDK[Google Generative AI]
        AnthropicSDK[Anthropic SDK]
    end

    UI --> LLMService
    LLMService --> ModelManager
    LLMService --> Registry
    ModelManager --> TextModelConfig
    TextModelConfig --> TextProvider
    TextModelConfig --> TextModel
    Registry --> OpenAIAdapter
    Registry --> GeminiAdapter
    Registry --> AnthropicAdapter
    OpenAIAdapter --> OpenAISDK
    GeminiAdapter --> GeminiSDK
    AnthropicAdapter --> AnthropicSDK
```
```typescript
interface TextProvider {
  id: string;                         // 'openai' | 'gemini' | 'anthropic'
  name: string;                       // 'OpenAI' | 'Google Gemini' | 'Anthropic'
  description: string;
  defaultBaseURL?: string;
  connectionSchema: ConnectionSchema; // Connection parameter definitions
}
```
Responsibility: defines a Provider's basic information and connection requirements.
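The `ConnectionSchema` type is not reproduced in this document. A minimal sketch of its likely shape, inferred from how `connectionSchema` is populated in the ExampleAdapter further below (the field names and type union are assumptions):

```typescript
// Hypothetical sketch of ConnectionSchema, inferred from usage below; the real type may differ.
type ConnectionFieldType = 'string' | 'url' | 'number' | 'boolean';

interface ConnectionSchema {
  required: string[];                              // connection fields that must be set, e.g. ['apiKey']
  optional: string[];                              // fields that may be omitted, e.g. ['baseURL']
  fieldTypes: Record<string, ConnectionFieldType>; // per-field type hints for validation/UI
}
```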
```typescript
interface TextModel {
  id: string;          // 'gpt-4o-mini' | 'gemini-2.0-flash-exp'
  name: string;
  description?: string;
  providerId: string;  // Owning Provider
  capabilities: {
    supportsStreaming: boolean;
    supportsTools: boolean;
    supportsReasoning: boolean;
    maxContextLength: number;
  };
  parameterDefinitions: ParameterDefinition[];
  defaultParameterValues: Record<string, any>;
}
```
Responsibility: defines a Model's capabilities and parameters.
```typescript
interface TextModelConfig {
  id: string;
  name: string;
  enabled: boolean;
  providerMeta: TextProvider;          // Embedded Provider metadata
  modelMeta: TextModel;                // Embedded Model metadata
  connectionConfig: ConnectionConfig;  // Connection config (apiKey, baseURL)
  paramOverrides: Record<string, any>; // Parameter overrides
}
```
Key characteristic: the config is self-contained. It embeds full Provider and Model metadata, so a config object alone carries everything needed for adapter routing, capability checks, and parameter resolution. A concrete instance is sketched below.
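For illustration, a fully populated config might look like this (a minimal sketch; all values are invented, and the metadata objects mirror the ExampleAdapter later in this document):

```typescript
// Illustrative sketch only: ids, keys and values are made up.
const exampleConfig: TextModelConfig = {
  id: 'my-example-config',
  name: 'My Example Model',
  enabled: true,
  providerMeta: {
    id: 'example',
    name: 'Example Provider',
    description: 'Example LLM Provider',
    defaultBaseURL: 'https://api.example.com/v1',
    connectionSchema: {
      required: ['apiKey'],
      optional: ['baseURL'],
      fieldTypes: { apiKey: 'string', baseURL: 'url' }
    }
  },
  modelMeta: {
    id: 'example-model-v1',
    name: 'Example Model V1',
    providerId: 'example',
    capabilities: {
      supportsStreaming: true,
      supportsTools: false,
      supportsReasoning: false,
      maxContextLength: 8000
    },
    parameterDefinitions: [],
    defaultParameterValues: { temperature: 0.7 }
  },
  connectionConfig: { apiKey: 'sk-example-key' },
  paramOverrides: { temperature: 0.2 }
};
```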
```typescript
interface ITextProviderAdapter {
  getProvider(): TextProvider;
  getModels(): TextModel[];
  getModelsAsync?(config: TextModelConfig): Promise<TextModel[]>;
  buildDefaultModel(modelId: string): TextModel;
  sendMessage(messages: Message[], config: TextModelConfig): Promise<LLMResponse>;
  sendMessageStream(messages: Message[], config: TextModelConfig, handlers: StreamHandlers): Promise<void>;
  sendMessageStreamWithTools?(messages: Message[], config: TextModelConfig, tools: ToolDefinition[], handlers: StreamHandlers): Promise<void>;
}
```
Responsibilities: expose Provider and Model metadata, build a fallback TextModel for unknown model IDs, and send messages (plain, streaming, and optionally tool-calling) through the provider's SDK.
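Concrete adapters implement this interface by extending an `AbstractTextProviderAdapter` base class (see the ExampleAdapter below, which overrides the protected `doSendMessage` / `doSendMessageStream` hooks). The base class itself is not reproduced in this document; a plausible sketch of how it could map the public methods onto those hooks:

```typescript
// Hypothetical sketch of the base class; the real implementation may add
// shared validation, logging, or error normalization around the hooks.
abstract class AbstractTextProviderAdapter implements ITextProviderAdapter {
  abstract getProvider(): TextProvider;
  abstract getModels(): TextModel[];
  abstract buildDefaultModel(modelId: string): TextModel;

  // Hooks implemented by concrete adapters.
  protected abstract doSendMessage(
    messages: Message[],
    config: TextModelConfig
  ): Promise<LLMResponse>;
  protected abstract doSendMessageStream(
    messages: Message[],
    config: TextModelConfig,
    handlers: StreamHandlers
  ): Promise<void>;

  // Public interface methods delegate to the hooks (template method pattern).
  async sendMessage(messages: Message[], config: TextModelConfig): Promise<LLMResponse> {
    return this.doSendMessage(messages, config);
  }

  async sendMessageStream(
    messages: Message[],
    config: TextModelConfig,
    handlers: StreamHandlers
  ): Promise<void> {
    return this.doSendMessageStream(messages, config, handlers);
  }
}
```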
```typescript
class TextAdapterRegistry implements ITextAdapterRegistry {
  private adapters: Map<string, ITextProviderAdapter>;
  private staticModelsCache: Map<string, TextModel[]>;

  getAdapter(providerId: string): ITextProviderAdapter;
  getAllProviders(): TextProvider[];
  getStaticModels(providerId: string): TextModel[];
  getDynamicModels(providerId: string, config: TextModelConfig): Promise<TextModel[]>;
  getModels(providerId: string, config?: TextModelConfig): Promise<TextModel[]>;
}
```
Responsibilities: hold one adapter instance per provider, route lookups by providerId, cache static model lists, and fetch dynamic model lists when an adapter supports `getModelsAsync()`, falling back to the static list otherwise.
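Typical usage, assuming a constructed registry (a hypothetical sketch; `someConfig` stands for any stored TextModelConfig):

```typescript
// Hypothetical usage sketch.
declare const someConfig: TextModelConfig;

const registry = new TextAdapterRegistry();

// Route by provider id to the matching adapter instance.
const adapter = registry.getAdapter('gemini');

// Static catalog: no network access, served from the cache.
const staticModels = registry.getStaticModels('gemini');

// Unified entry point: tries the provider API when a config is supplied
// and the adapter supports it, otherwise returns the static list.
const models = await registry.getModels('gemini', someConfig);
```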
```mermaid
flowchart LR
    A[Legacy ModelConfig] --> B{Detect format}
    B -->|Legacy| C[convertLegacyToTextModelConfigWithRegistry]
    B -->|New| D[Use as-is]
    C --> E[Get Registry]
    E --> F[getAdapter<br/>providerId]
    F --> G[Get Provider metadata]
    F --> H[Get Model metadata]
    H -->|Found| I[Use static model]
    H -->|Not found| J[buildDefaultModel]
    I --> K[Build TextModelConfig]
    J --> K
    K --> L[Save to Storage]
    D --> L
```
```typescript
export async function convertLegacyToTextModelConfigWithRegistry(
  key: string,
  legacy: ModelConfig,
  registry: ITextAdapterRegistry
): Promise<TextModelConfig> {
  // 1. Map the legacy provider id to an adapter id
  const providerId = mapProviderToAdapterId(legacy.provider);

  // 2. Get the adapter
  const adapter = registry.getAdapter(providerId);

  // 3. Get Provider metadata
  const providerMeta = adapter.getProvider();

  // 4. Get Model metadata
  let modelMeta = adapter.getModels().find(m => m.id === legacy.defaultModel);
  if (!modelMeta) {
    modelMeta = adapter.buildDefaultModel(legacy.defaultModel);
  }

  // 5. Build the TextModelConfig
  return {
    id: key,
    name: legacy.name,
    enabled: legacy.enabled,
    providerMeta,
    modelMeta,
    connectionConfig: {
      apiKey: legacy.apiKey,
      baseURL: legacy.baseURL
    },
    paramOverrides: legacy.llmParams || {}
  };
}
```
Provider mapping rules:

- `gemini` → `gemini` (GeminiAdapter)
- `anthropic` → `anthropic` (AnthropicAdapter)
- `openai` | `deepseek` | `zhipu` | `siliconflow` | `custom` → `openai` (OpenAIAdapter)

During `ModelManager.init()` initialization:
```typescript
async init(): Promise<void> {
  const existingModels = await this.getModelsFromStorage();
  const updatedModels = { ...existingModels };
  let hasUpdates = false;

  for (const [key, existingModel] of Object.entries(existingModels)) {
    if (isLegacyConfig(existingModel)) {
      try {
        // Prefer Registry-based conversion
        const registry = await this.getRegistry();
        const convertedModel = await convertLegacyToTextModelConfigWithRegistry(
          key,
          existingModel,
          registry
        );
        updatedModels[key] = convertedModel;
        hasUpdates = true;
      } catch (error) {
        // Fall back to the hard-coded conversion
        const convertedModel = convertLegacyToTextModelConfig(key, existingModel);
        updatedModels[key] = convertedModel;
        hasUpdates = true;
      }
    }
  }

  // Persist the converted configs
  if (hasUpdates) {
    await this.saveModelsToStorage(updatedModels);
  }
}
```
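The `isLegacyConfig()` check used above is not shown in this document. One plausible implementation (an assumption, not necessarily the codebase's actual logic) is to test for the fields that only the new self-contained format carries:

```typescript
// Hypothetical sketch: legacy ModelConfig has flat fields (provider, apiKey, defaultModel),
// while the new TextModelConfig embeds providerMeta/modelMeta.
function isLegacyConfig(config: any): config is ModelConfig {
  return config?.providerMeta === undefined || config?.modelMeta === undefined;
}
```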
```typescript
export class LLMService implements ILLMService {
  constructor(
    private modelManager: ModelManager,
    private registry: ITextAdapterRegistry
  ) {}

  async sendMessage(messages: Message[], provider: string): Promise<string> {
    // 1. Load the config
    const config = await this.modelManager.getModel(provider) as TextModelConfig;

    // 2. Look up the adapter
    const adapter = this.registry.getAdapter(config.providerMeta.id);

    // 3. Delegate to the adapter
    const response = await adapter.sendMessage(messages, config);
    return response.content;
  }
}
```
Key characteristics:

- The Service resolves the correct Adapter via `config.providerMeta.id`.

```typescript
export function createLLMService(modelManager: ModelManager): ILLMService {
  if (isRunningInElectron()) {
    return new ElectronLLMProxy();
  }

  // Create the Registry instance
  const registry = new TextAdapterRegistry();

  // Inject the Registry into the Service
  return new LLMService(modelManager, registry);
}
```
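End to end, a caller only needs the factory and the key of a stored config (hypothetical usage; the key `'my-example-config'` and the message shape are assumptions):

```typescript
// Hypothetical usage sketch.
const llmService = createLLMService(modelManager);

const reply = await llmService.sendMessage(
  [{ role: 'user', content: 'Hello!' }],
  'my-example-config' // key of a TextModelConfig managed by ModelManager
);
console.log(reply);
```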
```typescript
// packages/core/src/services/llm/adapters/example-adapter.ts
import { AbstractTextProviderAdapter } from './abstract-adapter';
import type { TextProvider, TextModel, TextModelConfig, LLMResponse, Message, StreamHandlers } from '../types';
// ExampleSDK is a placeholder for your provider's real SDK client.
import { ExampleSDK } from 'example-sdk';

export class ExampleAdapter extends AbstractTextProviderAdapter {
  getProvider(): TextProvider {
    return {
      id: 'example',
      name: 'Example Provider',
      description: 'Example LLM Provider',
      defaultBaseURL: 'https://api.example.com/v1',
      connectionSchema: {
        required: ['apiKey'],
        optional: ['baseURL'],
        fieldTypes: {
          apiKey: 'string',
          baseURL: 'url'
        }
      }
    };
  }

  getModels(): TextModel[] {
    return [
      {
        id: 'example-model-v1',
        name: 'Example Model V1',
        description: 'Fast and efficient model',
        providerId: 'example',
        capabilities: {
          supportsStreaming: true,
          supportsTools: false,
          supportsReasoning: false,
          maxContextLength: 8000
        },
        parameterDefinitions: [
          {
            name: 'temperature',
            type: 'number',
            description: 'Sampling temperature',
            min: 0,
            max: 2,
            default: 0.7
          }
        ],
        defaultParameterValues: {
          temperature: 0.7
        }
      }
    ];
  }

  protected async doSendMessage(
    messages: Message[],
    config: TextModelConfig
  ): Promise<LLMResponse> {
    // SDK call logic
    const client = new ExampleSDK({
      apiKey: config.connectionConfig.apiKey,
      baseURL: config.connectionConfig.baseURL || this.getProvider().defaultBaseURL
    });

    try {
      const response = await client.chat.completions.create({
        model: config.modelMeta.id,
        messages: messages,
        ...config.paramOverrides
      });

      return {
        content: response.choices[0].message.content || '',
        reasoning: undefined,
        metadata: {
          model: config.modelMeta.id,
          usage: response.usage
        }
      };
    } catch (error: any) {
      // Re-throw as-is to preserve the original error stack
      throw error;
    }
  }

  protected async doSendMessageStream(
    messages: Message[],
    config: TextModelConfig,
    handlers: StreamHandlers
  ): Promise<void> {
    // Streaming call logic
    const client = new ExampleSDK({
      apiKey: config.connectionConfig.apiKey,
      baseURL: config.connectionConfig.baseURL
    });

    try {
      const stream = await client.chat.completions.create({
        model: config.modelMeta.id,
        messages: messages,
        stream: true,
        ...config.paramOverrides
      });

      // Accumulate the streamed tokens so onComplete can report the full text.
      let fullContent = '';
      for await (const chunk of stream) {
        const content = chunk.choices[0]?.delta?.content || '';
        if (content) {
          fullContent += content;
          if (handlers.onToken) {
            handlers.onToken(content);
          }
        }
      }

      if (handlers.onComplete) {
        handlers.onComplete({ content: fullContent, metadata: {} });
      }
    } catch (error: any) {
      if (handlers.onError) {
        handlers.onError(error);
      }
      throw error;
    }
  }
}
```
```typescript
// packages/core/src/services/llm/adapters/registry.ts
import { ExampleAdapter } from './example-adapter';
// OpenAIAdapter, GeminiAdapter and AnthropicAdapter are imported from their
// respective adapter modules (imports omitted here).

export class TextAdapterRegistry implements ITextAdapterRegistry {
  private adapters: Map<string, ITextProviderAdapter>;
  private staticModelsCache: Map<string, TextModel[]>;

  constructor() {
    this.adapters = new Map();
    this.staticModelsCache = new Map();

    // Register all adapters
    this.adapters.set('openai', new OpenAIAdapter());
    this.adapters.set('gemini', new GeminiAdapter());
    this.adapters.set('anthropic', new AnthropicAdapter());
    this.adapters.set('example', new ExampleAdapter()); // newly added
  }
}
```
```typescript
// packages/core/src/services/model/converter.ts
function mapProviderToAdapterId(provider: string): string {
  switch (provider) {
    case 'gemini':
      return 'gemini';
    case 'anthropic':
      return 'anthropic';
    case 'example': // newly added
      return 'example';
    case 'openai':
    case 'deepseek':
    case 'zhipu':
    case 'siliconflow':
    case 'custom':
    default:
      return 'openai';
  }
}
```
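With the adapter registered and the provider mapping in place, any config whose `providerMeta.id` is `'example'` routes through `ExampleAdapter` automatically. A hypothetical smoke test:

```typescript
// Hypothetical sketch; exampleConfig stands for a stored TextModelConfig
// whose providerMeta.id is 'example'.
declare const exampleConfig: TextModelConfig;

const registry = new TextAdapterRegistry();
const adapter = registry.getAdapter(exampleConfig.providerMeta.id); // -> ExampleAdapter

const response = await adapter.sendMessage(
  [{ role: 'user', content: 'ping' }],
  exampleConfig
);
console.log(response.content);
```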
**Q: Do existing legacy configurations need to be migrated manually?**

A: No. Migration is automatic: during initialization, legacy configs are detected and converted through `convertLegacyToTextModelConfigWithRegistry()`.
**Q: How should an Adapter handle SDK errors?**

A: The Adapter must preserve the original error stack:

```typescript
try {
  const response = await sdk.call();
} catch (error: any) {
  // Re-throw directly; do not wrap, so the original stack is preserved
  throw error;
}
```
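For contrast, the anti-pattern to avoid: wrapping the error replaces the stack trace with the throw site and hides SDK-specific fields such as status codes (a sketch of what not to do):

```typescript
// Anti-pattern: do NOT wrap the SDK error like this.
try {
  const response = await sdk.call();
} catch (error: any) {
  throw new Error(`LLM call failed: ${error.message}`); // original stack is lost
}
```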
**Q: How can an Adapter fetch the model list dynamically from the provider's API?**

A: Implement the optional `getModelsAsync()` method:
```typescript
async getModelsAsync(config: TextModelConfig): Promise<TextModel[]> {
  const client = new OpenAI({
    apiKey: config.connectionConfig.apiKey,
    baseURL: config.connectionConfig.baseURL
  });

  const response = await client.models.list();
  return response.data.map(model => ({
    id: model.id,
    name: model.id,
    description: '',
    providerId: 'openai',
    capabilities: { /*...*/ },
    parameterDefinitions: [],
    defaultParameterValues: {}
  }));
}
```
If the dynamic fetch fails, the Registry automatically falls back to the static model list.
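A plausible sketch of that fallback inside the registry's unified `getModels()` (an assumption; the real implementation may handle caching differently):

```typescript
// Hypothetical sketch of the registry's dynamic-to-static fallback.
async getModels(providerId: string, config?: TextModelConfig): Promise<TextModel[]> {
  const adapter = this.getAdapter(providerId);

  if (config && adapter.getModelsAsync) {
    try {
      return await adapter.getModelsAsync(config);
    } catch {
      // Dynamic fetch failed: fall through to the static catalog.
    }
  }
  return this.getStaticModels(providerId);
}
```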
**Q: Which Adapter handles each provider?**

A:

- `gemini` → GeminiAdapter (Google SDK)
- `anthropic` → AnthropicAdapter (Anthropic SDK)
- `openai` → OpenAIAdapter
- `deepseek` → OpenAIAdapter (OpenAI-compatible)
- `zhipu` → OpenAIAdapter (OpenAI-compatible)
- `siliconflow` → OpenAIAdapter (OpenAI-compatible)
- `custom` → OpenAIAdapter (OpenAI-compatible)