# SDK API Corrections

**SDK Version:** 0.16.0-test.39
**Date:** January 2026
**Based on:** LocalAI Playground implementation
This document catalogs discrepancies found between the RunAnywhere SDK documentation/examples and the actual API behavior as observed during implementation of the LocalAI Playground app.
## isModelLoaded

Documentation/examples suggest:

```swift
let isLoaded = await RunAnywhere.isModelLoaded(id: "model-id")
```

Actual API:

```swift
let isLoaded = await RunAnywhere.isModelLoaded
```

Notes:

- `isModelLoaded` is a property, not a function
- The same applies to `isSTTModelLoaded` and `isTTSVoiceLoaded`
- Evidence: `Services/ModelService.swift`, lines 218-222

```swift
func refreshLoadedStates() async {
    isLLMLoaded = await RunAnywhere.isModelLoaded
    isSTTLoaded = await RunAnywhere.isSTTModelLoaded
    isTTSLoaded = await RunAnywhere.isTTSVoiceLoaded
}
```
## downloadModel

Documentation suggests:

```swift
let stream = try await RunAnywhere.downloadModel(id: "model-id")
```

Actual API:

```swift
let stream = try await RunAnywhere.downloadModel("model-id")
```

Notes:

- The `id:` parameter label should be omitted
- Evidence: `Services/ModelService.swift`, lines 268, 334, 400

```swift
let progressStream = try await RunAnywhere.downloadModel(Self.llmModelId)
```
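Consuming the returned stream is not shown in the evidence above. The following is a minimal, self-contained sketch of iterating a progress stream: the `DownloadProgress` struct is a local stand-in mirroring the fields documented later in this file (`overallProgress`, `stage`), and the stream itself is fabricated rather than coming from `downloadModel`.

```swift
import Foundation

// Local stand-in for the SDK's DownloadProgress (hypothetical shape,
// mirroring the documented overallProgress and stage fields).
struct DownloadProgress {
    enum Stage { case downloading, extracting, completed }
    let overallProgress: Double
    let stage: Stage
}

// Fabricated stream standing in for `try await RunAnywhere.downloadModel("model-id")`.
let progressStream = AsyncStream<DownloadProgress> { continuation in
    continuation.yield(DownloadProgress(overallProgress: 0.5, stage: .downloading))
    continuation.yield(DownloadProgress(overallProgress: 1.0, stage: .completed))
    continuation.finish()
}

// Iterate progress updates until the stream finishes,
// keeping the latest value (e.g. to drive a progress bar).
var lastProgress = 0.0
for await progress in progressStream {
    lastProgress = progress.overallProgress
}
print(lastProgress)
```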
## unloadModel

Documentation suggests:

```swift
try await RunAnywhere.unloadModel(id: "model-id")
```

Actual API:

```swift
try await RunAnywhere.unloadModel()
```

Notes:

- Evidence: `Services/ModelService.swift`, line 299
## unloadSTTModel

Documentation suggests:

```swift
try await RunAnywhere.unloadSTTModel(id: "model-id")
```

Actual API:

```swift
try await RunAnywhere.unloadSTTModel()
```

Notes:

- Evidence: `Services/ModelService.swift`, line 365
## unloadTTSVoice

Documentation suggests:

```swift
try await RunAnywhere.unloadTTSVoice(id: "voice-id")
```

Actual API:

```swift
try await RunAnywhere.unloadTTSVoice()
```

Notes:

- Evidence: `Services/ModelService.swift`, line 431
## LLMGenerationOptions

Documentation suggests:

```swift
let options = LLMGenerationOptions(
    maxTokens: 256,
    temperature: 0.7,
    modelId: "model-id"
)
```

Actual API:

```swift
let options = LLMGenerationOptions(
    maxTokens: 256,
    temperature: 0.7
)
```

Notes:

- The `modelId` parameter does not exist
- `maxTokens` and `temperature` are confirmed parameters
- Evidence: `Views/ChatView.swift`, lines 297-300

```swift
let options = LLMGenerationOptions(
    maxTokens: 256,
    temperature: 0.8
)
```
## Verified API Surface

Based on the actual implementation, here is the verified API surface:
```swift
// Initialize SDK
try RunAnywhere.initialize(environment: .development)  // or .production

// Register backends
LlamaCPP.register()
ONNX.register()

// Get version
RunAnywhere.version  // -> String

// Register a model
RunAnywhere.registerModel(
    id: String,
    name: String,
    url: URL,
    framework: ModelFramework,    // .llamaCpp or .onnx
    modality: ModelModality?,     // .speechRecognition, .speechSynthesis
    artifactType: ArtifactType?,  // .archive(.tarGz, structure: .nestedDirectory)
    memoryRequirement: Int
)
```
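As a shape check on the signature above, a registration call might look like the following. Every concrete value (the id, name, URL, and memory figure) is an invented placeholder, and the snippet assumes the SDK is imported; it is an illustration of the call shape, not a verified call.

```swift
// Hypothetical registration of a single-file GGUF LLM; all values are placeholders.
RunAnywhere.registerModel(
    id: "llama-3.2-1b-q4",
    name: "Llama 3.2 1B (Q4)",
    url: URL(string: "https://example.com/models/llama-3.2-1b-q4.gguf")!,
    framework: .llamaCpp,
    modality: nil,             // nil for a plain text-generation model
    artifactType: nil,         // nil when the artifact is not an archive
    memoryRequirement: 1_200_000_000
)
```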
```swift
// Download a model; returns AsyncStream<DownloadProgress>
let progressStream = try await RunAnywhere.downloadModel(String)
// Each DownloadProgress contains:
// - overallProgress: Double (0.0 to 1.0)
// - stage: DownloadStage (.downloading, .extracting, .completed, etc.)

// Load models
try await RunAnywhere.loadModel(String)     // LLM
try await RunAnywhere.loadSTTModel(String)  // Speech-to-Text
try await RunAnywhere.loadTTSVoice(String)  // Text-to-Speech

// Check load state (properties, not functions)
await RunAnywhere.isModelLoaded     // -> Bool
await RunAnywhere.isSTTModelLoaded  // -> Bool
await RunAnywhere.isTTSVoiceLoaded  // -> Bool

// Unload (no parameters; operates on the currently loaded model/voice)
try await RunAnywhere.unloadModel()
try await RunAnywhere.unloadSTTModel()
try await RunAnywhere.unloadTTSVoice()

// Generation options
let options = LLMGenerationOptions(
    maxTokens: Int,
    temperature: Float
)

// Streaming generation
let result = try await RunAnywhere.generateStream(String, options: LLMGenerationOptions)
// result.stream: AsyncStream<String> (tokens)
// result.result: Task<GenerationResult, Error>
// GenerationResult contains:
// - tokensUsed: Int
// - tokensPerSecond: Double
```
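The token stream is typically accumulated into the final response text as tokens arrive. The stream below is a fabricated `AsyncStream<String>` standing in for `result.stream`, so the sketch runs without the SDK.

```swift
import Foundation

// Fabricated token stream standing in for `result.stream`.
let tokenStream = AsyncStream<String> { continuation in
    for token in ["Hello", ",", " ", "world", "!"] {
        continuation.yield(token)
    }
    continuation.finish()
}

// Append tokens as they arrive, e.g. to update a chat UI incrementally.
var response = ""
for await token in tokenStream {
    response += token
}
print(response)
```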
```swift
// Speech-to-Text
let text: String = try await RunAnywhere.transcribe(Data)
// Input: 16 kHz mono Int16 PCM audio data
// Output: transcribed text
```
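Because `transcribe` expects 16 kHz mono Int16 PCM, float samples (as produced by `AVAudioEngine`, for example) must be converted before transcription. This is a self-contained sketch of that conversion; the sample values are made up, and resampling to 16 kHz is assumed to be handled upstream.

```swift
import Foundation

// Convert Float samples in [-1.0, 1.0] to little-endian Int16 PCM bytes,
// the format transcribe(_:) expects (16 kHz resampling assumed done).
let floatSamples: [Float] = [0.0, 0.5, -0.5, 1.0, -1.0]
var pcmData = Data(capacity: floatSamples.count * MemoryLayout<Int16>.size)
for sample in floatSamples {
    let clamped = max(-1.0, min(1.0, sample))
    let value = Int16(clamped * Float(Int16.max))
    withUnsafeBytes(of: value.littleEndian) { pcmData.append(contentsOf: $0) }
}
print(pcmData.count)  // 2 bytes per sample
```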
```swift
// Text-to-Speech
let options = TTSOptions(
    rate: Float,    // Speech rate (1.0 = normal)
    pitch: Float,   // Pitch adjustment
    volume: Float   // Volume level
)
let output = try await RunAnywhere.synthesize(String, options: TTSOptions)
// output.audioData: Data (Float32 PCM @ 22 kHz)
// output.duration: TimeInterval
```
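Given the documented Float32-at-22 kHz output format, the playback duration can be cross-checked from the byte count alone. The buffer below is a fabricated zero-filled `Data`, not real `synthesize` output; only the arithmetic is the point.

```swift
import Foundation

// Derive duration from a Float32 PCM buffer at the documented 22 kHz rate.
let sampleRate = 22_000.0
let audioData = Data(count: 22_000 * MemoryLayout<Float32>.size)  // one second of audio

let sampleCount = audioData.count / MemoryLayout<Float32>.size
let duration = Double(sampleCount) / sampleRate
print(duration)
```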
## Recommendations

1. **Consistency in parameter naming:** Both `downloadModel` and `loadModel` take positional parameters, while the documentation shows a labeled `id:` parameter. Document this clearly, or consider standardizing on labeled parameters.
2. **State checking paradigm:** The property-based state checking (`isModelLoaded` rather than `isModelLoaded(id:)`) suggests a single-model-at-a-time design. Document this architectural decision clearly.
3. **Audio format documentation:** Clearly document input/output audio formats in a prominent location (STT input: 16 kHz mono Int16 PCM; TTS output: Float32 PCM at 22 kHz).
4. **Unload behavior:** Document that the unload functions take no ID parameter and operate on the currently loaded model.
5. **Version-specific API notes:** Consider publishing an API changelog with each release to help developers track breaking changes.
## Evidence Summary

| File | Line Numbers | API Verified |
|---|---|---|
| Services/ModelService.swift | 218-222 | `isModelLoaded` property |
| Services/ModelService.swift | 268, 334, 400 | `downloadModel` positional param |
| Services/ModelService.swift | 299 | `unloadModel()` no params |
| Services/ModelService.swift | 365 | `unloadSTTModel()` no params |
| Services/ModelService.swift | 431 | `unloadTTSVoice()` no params |
| Views/ChatView.swift | 297-300 | `LLMGenerationOptions` |
| Views/VoicePipelineView.swift | 601, 617, 637 | Full pipeline API usage |