packages/model-runtime/docs/test-coverage.md
Overall Coverage: 94.36% (117 test files, 2683 tests) 🎉 TARGET ACHIEVED!
Breakdown:
None. All critical files have been improved to 90%+ coverage! 🎉
| File | Coverage | Priority | Notes |
| --- | --- | --- | --- |
| **Core Modules** | | | |
| core/streams/openai/responsesStream.ts | 91.56% | Low | Remaining: error catches |
| core/openaiCompatibleFactory/index.ts | 83.72% | Low | Complex factory logic |
| core/usageConverters/utils/computeChatCost.ts | 95.74% | Low | Edge case scenarios |
| core/usageConverters/utils/computeImageCost.ts | 96.05% | Low | Edge case scenarios |
| core/streams/openai/openai.ts | 98.79% | Low | Excellent coverage |
| **Providers** | | | |
| providers/openai | 87.5% | Low | Env vars at module load |
| providers/azureOpenai | 85.15% | Low | Custom auth flow |
| providers/azureai | 84.31% | Low | Azure-specific features |
| providers/anthropic | 88.44% | Low | Provider-specific logic |
Excellent coverage (90%+): 65+ providers and core modules.
Good coverage (80-89%): the remaining files listed in the table above.
All providers should follow this testing pattern:
```typescript
// @vitest-environment node
import { ModelProvider } from 'model-bank';
import { beforeEach, describe, expect, it, vi } from 'vitest';

import { testProvider } from '../../providerTestUtils';
import { LobeXxxAI, params } from './index';

// Basic provider tests
testProvider({
  Runtime: LobeXxxAI,
  provider: ModelProvider.Xxx,
  defaultBaseURL: 'https://api.xxx.com/v1',
  chatDebugEnv: 'DEBUG_XXX_CHAT_COMPLETION',
  chatModel: 'model-name',
  invalidErrorType: 'InvalidProviderAPIKey',
  bizErrorType: 'ProviderBizError',
  test: {
    skipAPICall: true,
    skipErrorHandle: true,
  },
});

// Custom feature tests
describe('LobeXxxAI - custom features', () => {
  let instance: InstanceType<typeof LobeXxxAI>;

  beforeEach(() => {
    instance = new LobeXxxAI({ apiKey: 'test_api_key' });
    vi.spyOn(instance['client'].chat.completions, 'create').mockResolvedValue(
      new ReadableStream() as any,
    );
  });

  describe('handlePayload', () => {
    // Test custom payload transformations
  });

  describe('handleError', () => {
    // Test custom error handling
  });

  describe('models', () => {
    // Test models fetching and processing
  });
});
```
For better testability, OpenAI-compatible providers should export a params object:
```typescript
import { ModelProvider } from 'model-bank';

import {
  OpenAICompatibleFactoryOptions,
  createOpenAICompatibleRuntime,
} from '../../core/openaiCompatibleFactory';

export const params = {
  baseURL: 'https://api.example.com/v1',
  chatCompletion: {
    handlePayload: (payload) => {
      // Custom payload transformation
      return transformedPayload;
    },
    handleError: (error) => {
      // Custom error handling
      return errorResponse;
    },
  },
  debug: {
    chatCompletion: () => process.env.DEBUG_XXX_CHAT_COMPLETION === '1',
  },
  models: async ({ client }) => {
    // Fetch and process models
    return modelList;
  },
  provider: ModelProvider.Xxx,
} satisfies OpenAICompatibleFactoryOptions;

export const LobeXxxAI = createOpenAICompatibleRuntime(params);
```
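Because `params` is exported on its own, its configuration can be exercised directly in tests, with no runtime instance and no network. A minimal sketch of the idea follows; the `handlePayload` body and the payload field names here are hypothetical, chosen only to illustrate a direct configuration test:

```typescript
// Hypothetical params fragment: handlePayload fills in a default temperature.
// The shape mirrors the factory options above but is simplified for illustration.
type ChatPayload = { messages: unknown[]; model: string; temperature?: number };

const params = {
  baseURL: 'https://api.example.com/v1',
  chatCompletion: {
    // Assumed transformation: apply a default temperature when none is given
    handlePayload: (payload: ChatPayload): ChatPayload => ({
      ...payload,
      temperature: payload.temperature ?? 0.7,
    }),
  },
  debug: {
    chatCompletion: () => process.env.DEBUG_XXX_CHAT_COMPLETION === '1',
  },
};

// Direct, network-free checks against the exported configuration:
const out = params.chatCompletion.handlePayload({ messages: [], model: 'm' });
console.log(out.temperature); // default applied when the caller omitted it
```

Tests can assert on the returned payload exactly as the `handlePayload` describe block in the pattern above would, without mocking an HTTP client at all.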
Router providers (like NewAPI and AiHubMix) route different models to different API types. They should also export a params object:
```typescript
import { LOBE_DEFAULT_MODEL_LIST, ModelProvider } from 'model-bank';

import { createRouterRuntime } from '../../core/RouterRuntime';
import { CreateRouterRuntimeOptions } from '../../core/RouterRuntime/createRuntime';
import { detectModelProvider, processMultiProviderModelList } from '../../utils/modelParse';

export const params = {
  id: ModelProvider.Xxx,
  debug: {
    chatCompletion: () => process.env.DEBUG_XXX_CHAT_COMPLETION === '1',
  },
  defaultHeaders: {
    'X-Custom-Header': 'value',
  },
  models: async ({ client }) => {
    // Fetch and process the multi-provider model list
    const modelsPage = await client.models.list();
    return processMultiProviderModelList(modelsPage.data, 'xxx');
  },
  routers: [
    {
      apiType: 'anthropic',
      models: LOBE_DEFAULT_MODEL_LIST.filter((m) => detectModelProvider(m.id) === 'anthropic'),
      options: { baseURL: 'https://api.xxx.com' },
    },
    {
      apiType: 'google',
      models: LOBE_DEFAULT_MODEL_LIST.filter((m) => detectModelProvider(m.id) === 'google'),
      options: { baseURL: 'https://api.xxx.com/gemini' },
    },
    {
      apiType: 'openai',
      options: {
        baseURL: 'https://api.xxx.com/v1',
        chatCompletion: {
          handlePayload: (payload) => {
            // Custom payload transformation for OpenAI-compatible models
            return payload;
          },
        },
      },
    },
  ],
} satisfies CreateRouterRuntimeOptions;

export const LobeXxxAI = createRouterRuntime(params);
```
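The `models` filter in each router entry hinges on mapping a model id to an API type. A simplified, self-contained sketch of that partitioning logic is below; the prefix rules here are illustrative assumptions, since the real `detectModelProvider` lives in the shared utils and covers far more cases:

```typescript
// Simplified sketch of prefix-based provider detection, in the spirit of
// detectModelProvider (illustrative rules only, not the real helper's table).
const detectModelProviderSketch = (modelId: string): string => {
  if (modelId.startsWith('claude')) return 'anthropic';
  if (modelId.startsWith('gemini')) return 'google';
  return 'openai'; // fallback: everything else goes to the OpenAI-compatible router
};

// Partitioning a model list into router buckets, as the routers array does:
const modelList = [{ id: 'claude-3-5-sonnet' }, { id: 'gemini-1.5-pro' }, { id: 'gpt-4o' }];
const anthropicModels = modelList.filter((m) => detectModelProviderSketch(m.id) === 'anthropic');
const googleModels = modelList.filter((m) => detectModelProviderSketch(m.id) === 'google');
```

Each bucket then becomes the `models` array of one router entry, so a request for `claude-3-5-sonnet` is dispatched to the Anthropic API type rather than the OpenAI-compatible one.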
Key Differences for Router Providers:
- Use `createRouterRuntime` instead of `createOpenAICompatibleRuntime`
- A `routers` array specifies how different models route to different API types
- Each router entry defines an `apiType`, a `models` filter, and `options`
- The `models` function should use `processMultiProviderModelList` to handle multi-provider model lists

For each OpenAI-compatible provider, ensure:
- Basic provider tests (`testProvider`)
- Custom payload transformations (`handlePayload`)
- Custom error handling (`handleError`)
- Models fetching and processing (`models`)
- An exported `params` object for better testability

For router providers (like NewAPI, AiHubMix), ensure:
- Debug configuration (`DEBUG_XXX_CHAT_COMPLETION=1`)
- `processMultiProviderModelList` integration
- Custom payload transformations (`handlePayload` in the OpenAI router)
- An exported `params` object satisfying `CreateRouterRuntimeOptions`

Reference: `newapi/index.test.ts`
```typescript
// @vitest-environment node
import { describe, expect, it, vi } from 'vitest';

import { LobeXxxAI, params } from './index';

describe('Xxx Router Runtime', () => {
  describe('Runtime Instantiation', () => {
    it('should create runtime instance', () => {
      const instance = new LobeXxxAI({ apiKey: 'test' });
      expect(instance).toBeDefined();
    });
  });

  describe('Debug Configuration', () => {
    it('should disable debug by default', () => {
      delete process.env.DEBUG_XXX_CHAT_COMPLETION;
      const result = params.debug.chatCompletion();
      expect(result).toBe(false);
    });

    it('should enable debug when env is set', () => {
      process.env.DEBUG_XXX_CHAT_COMPLETION = '1';
      const result = params.debug.chatCompletion();
      expect(result).toBe(true);
    });
  });

  describe('Routers Configuration', () => {
    it('should configure routers with correct apiTypes', () => {
      // Test static routers
      const routers = params.routers;
      expect(routers).toHaveLength(4);
      expect(routers[0].apiType).toBe('anthropic');
      expect(routers[1].apiType).toBe('google');
      expect(routers[2].apiType).toBe('xai');
      expect(routers[3].apiType).toBe('openai');
    });

    it('should configure dynamic routers with user baseURL', () => {
      // Test dynamic routers function
      const options = { apiKey: 'test', baseURL: 'https://custom.com/v1' };
      const routers = params.routers(options);
      expect(routers[0].options.baseURL).toContain('custom.com');
    });
  });

  describe('Models Function', () => {
    it('should fetch and process models', async () => {
      // Test models fetching logic
      const mockClient = {
        baseURL: 'https://api.xxx.com/v1',
        apiKey: 'test',
        models: {
          list: vi.fn().mockResolvedValue({
            data: [{ id: 'model-1', owned_by: 'openai' }],
          }),
        },
      };
      const models = await params.models({ client: mockClient });
      expect(models).toBeDefined();
    });

    it('should handle API errors gracefully', async () => {
      // Test error handling
      const mockClient = {
        models: {
          list: vi.fn().mockRejectedValue(new Error('API Error')),
        },
      };
      const models = await params.models({ client: mockClient });
      expect(models).toEqual([]);
    });
  });
});
```
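The "handle API errors gracefully" test above assumes the provider's `models` function swallows upstream failures and resolves to an empty list rather than rejecting. A sketch of that pattern, with a deliberately simplified client shape:

```typescript
// Sketch of the error-swallowing pattern the last test above expects.
// The Client type is a minimal stand-in for the real SDK client.
type Client = { models: { list: () => Promise<{ data: { id: string }[] }> } };

const models = async ({ client }: { client: Client }) => {
  try {
    const page = await client.models.list();
    return page.data;
  } catch {
    // Degrade gracefully: an unreachable models endpoint must not break the runtime
    return [];
  }
};

// A failing client makes models() resolve to [] instead of rejecting:
const failing: Client = {
  models: { list: () => Promise.reject(new Error('API Error')) },
};
```

This is why the test can use `expect(models).toEqual([])` instead of `rejects.toThrow`: the contract is a quiet empty list, not a propagated error.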
IMPORTANT: Follow this complete workflow for every testing task. ALL steps are REQUIRED.
For multiple providers: Use subagents to parallelize test development and significantly speed up the process.
Benefits of using subagents: tests for each provider are developed concurrently, which significantly shortens total development time.
How to create parallel subagents:
When working on multiple providers, create one subagent per provider with a detailed prompt like:
```text
Based on the internal testing docs in model-runtime, complete the tests for the
following 5 providers. Run each provider's tests in its own subagent so the work
can be parallelized for speed.

Create one subagent for each of the following providers:
- internlm (current: 39.13%, target: 80%+)
- hunyuan (current: 39.68%, target: 80%+)
- huggingface (current: 39.75%, target: 80%+)
- groq (current: 45.45%, target: 80%+)
- modelscope (current: 47.82%, target: 80%+)
```
Each subagent should be instructed to:
- Follow the testing guide in packages/model-runtime/docs/test-coverage.md

After all subagents complete, re-run the full coverage check and update this document.
For single provider: Skip this step and proceed directly to Step 1.
```bash
# 1. Refactor provider and write tests
# 2. Run tests to verify they pass
bunx vitest run --silent='passed-only' 'src/providers/{provider}/index.test.ts'
```
CRITICAL: Run type check and lint before proceeding. Failing these checks means the task is incomplete.
```bash
# Check TypeScript types (from project root)
cd ../../../ && bun run type-check

# Or run type-check for model-runtime only
bunx tsc --noEmit

# Fix any linting issues
bunx eslint src/providers/{provider}/ --fix
```
Common Type Errors to Watch For:
- Missing or mismatched types in `params` objects

Do NOT proceed to Step 3 if type/lint checks fail!
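The `satisfies` pattern used by the provider examples is what surfaces these `params` type errors at this step. A self-contained sketch of why it is preferred over a plain annotation (`FactoryOptionsSketch` is a stand-in for the real `OpenAICompatibleFactoryOptions`):

```typescript
// FactoryOptionsSketch is a simplified stand-in for the real options type.
interface FactoryOptionsSketch {
  baseURL: string;
  debug?: { chatCompletion: () => boolean };
}

const params = {
  baseURL: 'https://api.example.com/v1',
  debug: { chatCompletion: () => false },
  // A misspelled key here (e.g. `debugg`) would fail `bun run type-check`,
  // which is exactly the class of error Step 2 is meant to catch.
} satisfies FactoryOptionsSketch;

// Unlike `const params: FactoryOptionsSketch = ...`, `satisfies` keeps the
// inferred type, so the optional `debug` member stays statically known here
// and tests can call it without an undefined check:
const debugEnabled = params.debug.chatCompletion();
```

With a plain `: FactoryOptionsSketch` annotation, `params.debug` would be typed as possibly `undefined` and every test would need a guard; `satisfies` gives the compile-time check without losing that precision.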
```bash
# Run coverage to get updated metrics
bunx vitest run --coverage --silent='passed-only'
```
Before updating documentation, create a summary of what was accomplished:
Summary Checklist:
Example Summary:
```text
Provider: newapi
Coverage: 13.28% → 100% (+86.72%)
Tests Added: 65 new tests
Features Tested:
- handlePayload logic with Responses API detection
- Complex pricing calculation (quota_type, model_price, model_ratio)
- Provider detection from supported_endpoint_types and owned_by
- Dynamic routers configuration with baseURL processing
- Error handling for pricing API failures
Bugs Fixed: None
Guide Updates: Added router provider testing pattern to documentation
```
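The pricing fields named in the summary (`quota_type`, `model_price`, `model_ratio`) can be reasoned about roughly as follows. This is an illustrative sketch only: the branch rule and the base price constant are assumptions for the example, not NewAPI's actual billing formula:

```typescript
// Illustrative assumption: quota_type 1 means a fixed per-call price
// (model_price), while quota_type 0 scales a base unit price by model_ratio.
interface PricingEntry {
  model_price?: number; // fixed price, consulted when quota_type === 1
  model_ratio?: number; // multiplier on the base unit price
  quota_type: 0 | 1;
}

const BASE_UNIT_PRICE = 0.002; // hypothetical placeholder, not a real tariff

const resolvePrice = (entry: PricingEntry): number =>
  entry.quota_type === 1
    ? entry.model_price ?? 0
    : BASE_UNIT_PRICE * (entry.model_ratio ?? 1);
```

Testing a pure function like this is exactly why the summary could claim coverage of "complex pricing calculation": each branch and fallback is reachable with a one-line assertion.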
Based on your development summary, update the following sections:
- Current Status section
- Coverage Status by Priority section
- Completed Work section
- Testing Strategy section (if applicable)
```bash
# Verify all tests still pass
bunx vitest run --silent='passed-only' 'src/providers/{provider}/index.test.ts'

# Verify type check still passes
cd ../../../ && bun run type-check
```
```bash
# 1. Development Phase
# ... write code and tests ...
bunx vitest run --silent='passed-only' 'src/providers/example/index.test.ts'

# 2. Type/Lint Phase (REQUIRED)
cd ../../../ && bun run type-check  # Must pass!
bunx eslint src/providers/example/ --fix

# 3. Coverage Phase
cd packages/model-runtime
bunx vitest run --coverage --silent='passed-only'

# 4. Summarization Phase
# Create summary following the checklist above

# 5. Documentation Phase
# Update this file with summary and metrics

# 6. Final Verification
bunx vitest run --silent='passed-only' 'src/providers/example/index.test.ts'
cd ../../../ && bun run type-check

# 7. Commit
git add .
git commit -m "✅ test: add comprehensive tests for example provider (13% → 100%)"
```
Remember: A testing task is only complete when all tests pass, type check and lint are clean, coverage has been measured, and this document has been updated.
```bash
# Run all tests with coverage
bunx vitest run --coverage

# Run specific provider tests
bunx vitest run --silent='passed-only' 'src/providers/{provider}/index.test.ts'

# Run tests for multiple providers
bunx vitest run --silent='passed-only' src/providers/higress/index.test.ts src/providers/ai360/index.test.ts

# Watch mode for development
bunx vitest watch 'src/providers/{provider}/index.test.ts'

# Type check entire project (from project root)
cd ../../../ && bun run type-check

# Type check model-runtime only
bunx tsc --noEmit

# Type check with watch mode
bunx tsc --noEmit --watch

# Lint specific provider
bunx eslint src/providers/{provider}/ --fix

# Lint all providers
bunx eslint src/providers/ --fix

# Lint without auto-fix (check only)
bunx eslint src/providers/{provider}/
```
Latest Session (2025-10-13 - Part 4): 🎉 Achieved 94.36% Overall Coverage, 95% Goal Nearly Reached!
Overall coverage: 91.1% → 94.36% (+3.26%)
Comprehensive Core Module and Provider Enhancement
Enhanced 14 files with significant test improvements:
Core Modules (6 files, +96 tests):
Providers (8 providers, +102 tests):
Added 198+ comprehensive tests across core modules and providers
Fixed 16 TypeScript type errors across test files
All enhanced files now have 95%+ or 100% coverage (except openai at 87.5% due to module-level env vars)
Type check passed - Zero type errors remaining
Used parallel subagent execution (6 concurrent agents) for maximum development speed
Previous Session (2025-10-13 - Part 3): 🎉 Achieved 91.1% Overall Coverage, Target Exceeded!
- Exported `params` for better testability

Previous Session (2025-10-13 - Part 2): 🎉 5 High-Priority Providers Completed!
- Exported `params` for better testability

Previous Session (2025-10-13 - Part 1): 🎉 All critical providers completed!
- Exported `params` for better testability

Previous Session (2025-01-15):
- Exported `params` for better testability

Earlier Session (2025-01-15):
- Exporting `params` makes testing much easier by allowing direct testing of configuration
- The `testProvider` utility provides basic test coverage for OpenAI-compatible providers
- Skip live API calls where possible (`skipAPICall: true`)
- Router providers use `createRouterRuntime` instead of `createOpenAICompatibleRuntime`
- The `testProvider` utility does NOT work for router providers; write custom tests instead
- Static routers: `routers: [...]`, an array of router configs
- Dynamic routers: `routers: (options) => [...]`, a function that generates routers based on user options
- Write `models` functions that:
  - use `processMultiProviderModelList`
  - cover provider-specific logic (`handlePayload`, pricing calculation)
  - strip version suffixes (`/v1`, `/v1beta`) from `baseURL`
- Router provider examples: `newapi`, `aihubmix`
- Reference test: `newapi/index.test.ts`
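The baseURL suffix handling mentioned in the learnings above can be sketched as a tiny helper. The regex here is an assumption about what "strip `/v1`, `/v1beta`" means, not the exact implementation used by the router providers:

```typescript
// Strip a trailing API version segment (/v1, /v1beta, /v2, ...) from a
// user-supplied baseURL, so each router can re-append its own path.
const stripVersionSuffix = (baseURL: string): string =>
  baseURL.replace(/\/v\d+(?:beta)?\/?$/, '');

// e.g. a user baseURL of https://custom.com/v1 becomes https://custom.com,
// after which the google router can append its own /gemini segment.
```

Because this is a pure string function, it is trivially unit-testable, which is why the dynamic-routers test above can assert on the processed `baseURL` without any network mocking.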