Playground/README.md
Interactive demo projects showcasing what you can build with RunAnywhere.
| Project | Description | Platform |
|---|---|---|
| YapRun | On-device voice dictation — custom keyboard, multiple Whisper backends, Live Activity, offline-ready — Website · TestFlight | iOS & macOS (Swift/SwiftUI) |
| swift-starter-app | Privacy-first AI demo — LLM Chat, Speech-to-Text, Text-to-Speech, and Voice Pipeline with VAD | iOS (Swift/SwiftUI) |
| on-device-browser-agent | On-device AI browser automation using WebLLM — no cloud, no API keys, fully private | Chrome Extension (TypeScript/React) |
| android-use-agent | Fully on-device autonomous Android agent — navigates phone UI via accessibility + on-device LLM (Qwen3-4B). See benchmarks | Android (Kotlin/Jetpack Compose) |
| linux-voice-assistant | Fully on-device voice assistant — Wake Word, VAD, STT, LLM, and TTS with zero cloud dependency | Linux (C++/ALSA) |
| openclaw-hybrid-assistant | Hybrid voice assistant — on-device Wake Word, VAD, STT, and TTS with cloud LLM via OpenClaw WebSocket | Linux (C++/ALSA) |
On-device voice dictation for iOS and macOS. All speech recognition runs locally — your voice never leaves your device.
runanywhere.ai/yaprun | TestFlight Beta | Free on the App Store — iOS 16.0+ / macOS 14.0+, Xcode 15.0+
A complete on-device voice AI pipeline for Linux (Raspberry Pi 5, x86_64, ARM64). All inference runs locally — no cloud, no API keys:
Requirements: Linux (ALSA), x86_64 or ARM64, CMake 3.16+, C++20
A full-featured iOS app demonstrating the RunAnywhere SDK's core capabilities:
Requirements: iOS 17.0+, Xcode 15.0+
A Chrome extension that automates browser tasks entirely on-device using WebLLM and WebGPU:
Requirements: Chrome 124+ (WebGPU support)
A fully on-device autonomous Android agent that navigates your phone's UI to accomplish tasks. All LLM inference runs locally via the RunAnywhere SDK with llama.cpp — no cloud dependency required.
The agent emits actions in either `<tool_call>` XML or `ui_tap(index=5)` function-call style. See android-use-agent/ASSESSMENT.md for detailed model benchmarks across Qwen3-4B, LFM2.5-1.2B, LFM2-8B-A1B MoE, and DS-R1-Qwen3-8B on a Samsung Galaxy S24.
Requirements: Android 8.0+ (API 26), arm64-v8a device, Accessibility service permission
A hybrid voice assistant that combines on-device AI inference with cloud LLM reasoning via OpenClaw:
Requirements: Linux (ALSA), x86_64 or ARM64, CMake 3.16+, C++20