# TTSKitExample
TTSKitExample is the demo app for TTSKit, an on-device text-to-speech framework powered by Core ML. It runs on macOS and iOS with no server required.
Open Examples/TTS/TTSKitExample/TTSKitExample.xcodeproj in Xcode and run (Cmd+R). Dependencies resolve automatically via Swift Package Manager. On first launch, the app prompts you to download a model. Downloads are cached in your Documents directory and reused across launches.
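The cache-then-download behavior can be sketched as below. This is a minimal illustration using Foundation's `FileManager`; the function and file-name parameter are hypothetical, not TTSKit API.

```swift
import Foundation

// Hypothetical sketch: return the cached model file in Documents if it
// exists, so the caller can skip re-downloading on later launches.
func cachedModelURL(fileName: String) -> URL? {
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    let candidate = docs.appendingPathComponent(fileName)
    return FileManager.default.fileExists(atPath: candidate.path) ? candidate : nil
}
```

If this returns `nil`, the app downloads the model and writes it to the same location; otherwise the cached copy is loaded directly.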
If you use a private model fork, set a token in `ViewModel.swift`:

```swift
let config = TTSKitConfig(model: selectedPreset, modelToken: "hf_...")
```
The `.auto` strategy measures first-step generation speed and pre-buffers accordingly.

```
TTSKitExample/
├── TTSKitExampleApp.swift        # app entry point
├── ContentView.swift             # root NavigationSplitView
├── SidebarView.swift             # model management + history
├── DetailView.swift              # input form, waveform, playback
├── ModelManagementView.swift     # download / load / unload
├── GenerationSettingsView.swift  # advanced options sheet
├── ViewModel.swift               # @Observable view model
├── AudioMetadata.swift           # Codable metadata embedded in .m4a files
├── WaveformView.swift            # live waveform bar chart
└── ComputeUnitsView.swift        # per-component compute unit picker
```
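The `.auto` pre-buffering strategy mentioned above could look roughly like this heuristic: time the first generation step, then buffer more chunks ahead when generation runs slower than real-time playback. All names and the clamping bounds here are illustrative assumptions, not TTSKit API.

```swift
// Hypothetical sketch of an auto pre-buffering heuristic. A ratio above
// 1 means a chunk takes longer to generate than to play back, so we
// buffer deeper to avoid audible gaps.
func prebufferCount(firstStepSeconds: Double, chunkPlaybackSeconds: Double) -> Int {
    let ratio = firstStepSeconds / chunkPlaybackSeconds
    // Clamp to a small range: at least 1 chunk, at most 8.
    return max(1, min(8, Int(ratio.rounded(.up)) + 1))
}
```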
Persistence is file-based. Each generation is a self-contained .m4a file with all metadata (text, speaker, timings) embedded as JSON in the iTunes comment atom. No database is needed. The history is reconstructed by scanning the Documents directory at launch.
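Embedding the JSON in the iTunes comment atom can be sketched with AVFoundation as below. `AudioMetadata` is the app's Codable type; the export plumbing and function signature are illustrative assumptions, not the app's exact code.

```swift
import AVFoundation

// Hypothetical sketch: encode the metadata as JSON and attach it as the
// iTunes user-comment atom while exporting the asset to .m4a.
func export(asset: AVAsset, metadata: AudioMetadata, to url: URL) throws {
    let item = AVMutableMetadataItem()
    item.identifier = .iTunesMetadataUserComment
    let json = try JSONEncoder().encode(metadata)
    item.value = String(data: json, encoding: .utf8)! as NSString

    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetAppleM4A) else { return }
    session.outputURL = url
    session.outputFileType = .m4a
    session.metadata = [item]
    session.exportAsynchronously {
        // Inspect session.status / session.error on completion.
    }
}
```

Reading history back is the inverse: scan Documents for .m4a files, load each asset's metadata, and decode the comment item's string value with `JSONDecoder`.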