RunAnywhere AI - Android Example

A production-ready reference app demonstrating the RunAnywhere Kotlin SDK capabilities for on-device AI. This app showcases how to build privacy-first, offline-capable AI features with LLM chat, speech-to-text, text-to-speech, and a complete voice assistant pipeline, all running locally on your device.


🚀 Running This App (Local Development)

Important: This sample app consumes the RunAnywhere Kotlin SDK as a local Gradle included build. Before opening this project, you must first build the SDK's native libraries.

First-Time Setup

bash
# 1. Navigate to the Kotlin SDK directory
cd runanywhere-sdks/sdk/runanywhere-kotlin

# 2. Run the setup script (~10-15 minutes on first run)
#    This builds the native C++ JNI libraries and sets testLocal=true
./scripts/build-kotlin.sh --setup

# 3. Open this sample app in Android Studio
#    File > Open > examples/android/RunAnywhereAI

# 4. Wait for Gradle sync to complete

# 5. Connect an Android device (ARM64 recommended) or use an emulator

# 6. Click Run

How It Works

This sample app uses settings.gradle.kts with includeBuild() to reference the local Kotlin SDK:

This Sample App → Local Kotlin SDK (sdk/runanywhere-kotlin/)
                          ↓
              Local JNI Libraries (sdk/runanywhere-kotlin/src/androidMain/jniLibs/)
                          ↑
           Built by: ./scripts/build-kotlin.sh --setup
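Roughly, the wiring in settings.gradle.kts looks like the following sketch. The relative path and the substituted module coordinate are illustrative assumptions, not copied from the actual file:

```kotlin
// settings.gradle.kts (sketch) -- composite build against the local SDK checkout.
// The path below assumes the repo layout shown in this README; the
// "com.runanywhere:runanywhere-kotlin" coordinate is a placeholder.
includeBuild("../../../sdk/runanywhere-kotlin") {
    dependencySubstitution {
        // Redirect the published SDK dependency to the local build.
        substitute(module("com.runanywhere:runanywhere-kotlin"))
            .using(project(":"))
    }
}
```

With this in place, Gradle compiles the SDK from source whenever the sample app builds, so SDK changes are picked up without publishing to a repository.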

The build-kotlin.sh --setup script:

  1. Downloads dependencies (Sherpa-ONNX, ~500MB)
  2. Builds the native C++ libraries from runanywhere-commons
  3. Copies JNI .so files to sdk/runanywhere-kotlin/src/androidMain/jniLibs/
  4. Sets runanywhere.useLocalNatives=true in gradle.properties

After Modifying the SDK

  • Kotlin SDK code changes: Rebuild in Android Studio or run ./gradlew assembleDebug
  • C++ code changes (in runanywhere-commons):
    bash
    cd sdk/runanywhere-kotlin
    ./scripts/build-kotlin.sh --local --rebuild-commons
    

Try It Now

Download the app from the [Google Play Store](https://play.google.com/store/apps/details?id=com.runanywhere.runanywhereai) to try it out.


Screenshots


Features

This sample app demonstrates the full power of the RunAnywhere SDK:

| Feature | Description | SDK Integration |
|---|---|---|
| AI Chat | Interactive LLM conversations with streaming responses | RunAnywhere.generateStream() |
| Thinking Mode | Support for models with `<think>...</think>` reasoning | Thinking tag parsing |
| Real-time Analytics | Token speed, generation time, inference metrics | MessageAnalytics |
| Speech-to-Text | Voice transcription with batch & live modes | RunAnywhere.transcribe() |
| Text-to-Speech | Neural voice synthesis with Piper TTS | RunAnywhere.synthesize() |
| Voice Assistant | Full STT → LLM → TTS pipeline with auto-detection | RunAnywhere.processVoice() |
| Model Management | Download, load, and manage multiple AI models | RunAnywhere.downloadModel() |
| Storage Management | View storage usage and delete models | RunAnywhere.storageInfo() |
| Offline Support | All features work without internet | On-device inference |

Architecture

The app follows modern Android architecture patterns:

┌──────────────────────────────────────────────────────────────────┐
│                      Jetpack Compose UI                          │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────┐ │
│  │   Chat   │ │   STT    │ │   TTS    │ │  Voice   │ │Settings │ │
│  │  Screen  │ │  Screen  │ │  Screen  │ │  Screen  │ │ Screen  │ │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └───┬─────┘ │
│       ▼            ▼            ▼            ▼           ▼       │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────┐ │
│  │   Chat   │ │   STT    │ │   TTS    │ │  Voice   │ │Settings │ │
│  │ViewModel │ │ViewModel │ │ViewModel │ │ViewModel │ │ViewModel│ │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └───┬─────┘ │
│       └────────────┴────────────┼────────────┴───────────┘       │
│                                 ▼                                │
│                    RunAnywhere Kotlin SDK                        │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │  Extension Functions (generate, transcribe, synthesize)    │  │
│  │  EventBus (LLMEvent, STTEvent, TTSEvent, ModelEvent)       │  │
│  │  Model Management (download, load, unload, delete)         │  │
│  └────────────────────────────────────────────────────────────┘  │
│                                 │                                │
│           ┌─────────────────────┴────────────────────┐           │
│           ▼                                          ▼           │
│  ┌─────────────────┐                ┌─────────────────┐          │
│  │    LlamaCpp     │                │   ONNX Runtime  │          │
│  │   (LLM/GGUF)    │                │    (STT/TTS)    │          │
│  └─────────────────┘                └─────────────────┘          │
└──────────────────────────────────────────────────────────────────┘

Key Architecture Decisions

  • MVVM Pattern - ViewModels manage UI state with StateFlow; Compose observes changes
  • Single Activity - Jetpack Navigation Compose handles all screen transitions
  • Coroutines & Flow - all async operations use Kotlin coroutines with structured concurrency
  • EventBus Pattern - SDK events (model loading, generation, etc.) propagate via EventBus.events
  • Repository Abstraction - ConversationStore persists chat history
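The EventBus pattern boils down to a sealed event hierarchy filtered by subtype. A stdlib-only sketch of the idea follows; the real SDK exposes events as a coroutine Flow (EventBus.events), and the event class shapes below are assumptions for illustration:

```kotlin
// Stdlib-only sketch of the typed-event idea behind the EventBus pattern.
// A plain List stands in for the SDK's Flow so the sealed-type filtering
// is easy to see; the event fields here are illustrative, not the SDK's.
sealed interface SdkEvent
data class LLMEvent(val token: String) : SdkEvent
data class ModelEvent(val modelId: String, val loaded: Boolean) : SdkEvent

// Collect only the LLM tokens out of a mixed event stream.
fun llmTokens(events: List<SdkEvent>): List<String> =
    events.filterIsInstance<LLMEvent>().map { it.token }

fun main() {
    val events = listOf(
        ModelEvent("smollm2-360m-q8_0", loaded = true),
        LLMEvent("Hello"),
        LLMEvent(", world"),
    )
    println(llmTokens(events))  // [Hello, , world]
}
```

On the Flow-based bus the same selection is `EventBus.events.filterIsInstance<LLMEvent>()`, as used by the Chat screen below.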

Project Structure

RunAnywhereAI/
├── app/
│   ├── src/main/
│   │   ├── java/com/runanywhere/runanywhereai/
│   │   │   ├── RunAnywhereApplication.kt      # SDK initialization, model registration
│   │   │   ├── MainActivity.kt                # Entry point, initialization state handling
│   │   │   │
│   │   │   ├── data/
│   │   │   │   └── ConversationStore.kt       # Chat history persistence
│   │   │   │
│   │   │   ├── domain/
│   │   │   │   ├── models/
│   │   │   │   │   ├── ChatMessage.kt         # Message data model with analytics
│   │   │   │   │   └── SessionState.kt        # Voice session states
│   │   │   │   └── services/
│   │   │   │       └── AudioCaptureService.kt # Microphone audio capture
│   │   │   │
│   │   │   ├── presentation/
│   │   │   │   ├── chat/
│   │   │   │   │   ├── ChatScreen.kt          # LLM chat UI with streaming
│   │   │   │   │   ├── ChatViewModel.kt       # Chat logic, thinking mode
│   │   │   │   │   └── components/
│   │   │   │   │       └── MessageInput.kt    # Chat input component
│   │   │   │   │
│   │   │   │   ├── stt/
│   │   │   │   │   ├── SpeechToTextScreen.kt  # STT UI with waveform
│   │   │   │   │   └── SpeechToTextViewModel.kt # Batch & live transcription
│   │   │   │   │
│   │   │   │   ├── tts/
│   │   │   │   │   ├── TextToSpeechScreen.kt  # TTS UI with playback
│   │   │   │   │   └── TextToSpeechViewModel.kt # Synthesis & audio playback
│   │   │   │   │
│   │   │   │   ├── voice/
│   │   │   │   │   ├── VoiceAssistantScreen.kt # Full voice pipeline UI
│   │   │   │   │   └── VoiceAssistantViewModel.kt # STT→LLM→TTS orchestration
│   │   │   │   │
│   │   │   │   ├── settings/
│   │   │   │   │   ├── SettingsScreen.kt      # Storage & model management
│   │   │   │   │   └── SettingsViewModel.kt   # Storage info, cache clearing
│   │   │   │   │
│   │   │   │   ├── models/
│   │   │   │   │   ├── ModelSelectionBottomSheet.kt # Model picker UI
│   │   │   │   │   └── ModelSelectionViewModel.kt   # Download & load logic
│   │   │   │   │
│   │   │   │   ├── navigation/
│   │   │   │   │   └── AppNavigation.kt       # Bottom nav, routing
│   │   │   │   │
│   │   │   │   └── common/
│   │   │   │       └── InitializationViews.kt # Loading/error states
│   │   │   │
│   │   │   └── ui/theme/
│   │   │       ├── Theme.kt                   # Material 3 theming
│   │   │       ├── AppColors.kt               # Color palette
│   │   │       ├── Type.kt                    # Typography
│   │   │       └── Dimensions.kt              # Spacing constants
│   │   │
│   │   ├── res/                               # Resources (icons, strings)
│   │   └── AndroidManifest.xml                # Permissions, app config
│   │
│   ├── src/test/                              # Unit tests
│   └── src/androidTest/                       # Instrumentation tests
│
├── build.gradle.kts                           # Project build config
├── settings.gradle.kts                        # Module settings
└── README.md                                  # This file

Quick Start

Prerequisites

  • Android Studio Hedgehog (2023.1.1) or later
  • Android SDK 24+ (Android 7.0 Nougat)
  • JDK 17+
  • Device/Emulator with arm64-v8a architecture (recommended: physical device)
  • ~2GB free storage for AI models

Clone & Build

bash
# Clone the repository
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks/examples/android/RunAnywhereAI

# Build debug APK
./gradlew assembleDebug

# Install on connected device
./gradlew installDebug

Run via Android Studio

  1. Open the project in Android Studio
  2. Wait for Gradle sync to complete
  3. Select a physical device (arm64 recommended) or emulator
  4. Click Run or press Shift + F10

Run via Command Line

bash
# Install and launch
./gradlew installDebug
adb shell am start -n com.runanywhere.runanywhereai.debug/.MainActivity

SDK Integration Examples

Initialize the SDK

The SDK is initialized in RunAnywhereApplication.kt:

kotlin
// Initialize SDK with development environment
RunAnywhere.initialize(environment = SDKEnvironment.DEVELOPMENT)

// Complete services initialization (device registration)
RunAnywhere.completeServicesInitialization()

// Register AI backends
LlamaCPP.register(priority = 100)  // LLM backend (GGUF models)
ONNX.register(priority = 100)      // STT/TTS backend

// Register models
RunAnywhere.registerModel(
    id = "smollm2-360m-q8_0",
    name = "SmolLM2 360M Q8_0",
    url = "https://huggingface.co/prithivMLmods/SmolLM2-360M-GGUF/...",
    framework = InferenceFramework.LLAMA_CPP,
    memoryRequirement = 500_000_000,
)

Download & Load a Model

kotlin
// Download with progress tracking
RunAnywhere.downloadModel("smollm2-360m-q8_0").collect { progress ->
    println("Download: ${(progress.progress * 100).toInt()}%")
}

// Load into memory
RunAnywhere.loadLLMModel("smollm2-360m-q8_0")

Stream Text Generation

kotlin
// Generate with streaming
RunAnywhere.generateStream(prompt).collect { token ->
    // Display token in real-time
    displayToken(token)
}

// Or non-streaming
val result = RunAnywhere.generate(prompt)
println("Response: ${result.text}")

Speech-to-Text

kotlin
// Load STT model
RunAnywhere.loadSTTModel("sherpa-onnx-whisper-tiny.en")

// Transcribe audio bytes
val transcription = RunAnywhere.transcribe(audioBytes)
println("Transcription: $transcription")

Text-to-Speech

kotlin
// Load TTS voice
RunAnywhere.loadTTSVoice("vits-piper-en_US-lessac-medium")

// Synthesize speech
val result = RunAnywhere.synthesize(text, TTSOptions(
    rate = 1.0f,
    pitch = 1.0f,
))
// result.audioData contains WAV audio bytes
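Since result.audioData is described as WAV bytes, the sample rate can be read out of the header before playback. This is a hedged sketch that assumes the canonical 44-byte RIFF/WAVE layout; files with extra chunks before "fmt " would need a full chunk walk:

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Sketch: read the sample rate from a canonical RIFF/WAVE header.
// Assumes the plain 44-byte layout (fmt chunk immediately at offset 12);
// this is a generic WAV helper, not an SDK API.
fun wavSampleRate(wav: ByteArray): Int {
    require(wav.size >= 28 && String(wav, 0, 4) == "RIFF" && String(wav, 8, 4) == "WAVE") {
        "Not a canonical WAV header"
    }
    // Sample rate lives at byte offset 24, little-endian 32-bit.
    return ByteBuffer.wrap(wav, 24, 4).order(ByteOrder.LITTLE_ENDIAN).int
}
```

On Android, that rate would then configure whatever playback path the app uses (e.g. an AudioTrack).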

Voice Pipeline (STT → LLM → TTS)

kotlin
// Process voice through full pipeline
val result = RunAnywhere.processVoice(audioData)

if (result.speechDetected) {
    println("User said: ${result.transcription}")
    println("AI response: ${result.response}")
    // result.synthesizedAudio contains TTS audio
}

Key Screens Explained

1. Chat Screen (ChatScreen.kt)

What it demonstrates:

  • Streaming text generation with real-time token display
  • Thinking mode support (<think>...</think> tags)
  • Message analytics (tokens/sec, time to first token)
  • Conversation history management
  • Model selection bottom sheet integration

Key SDK APIs:

  • RunAnywhere.generateStream() - streaming generation
  • RunAnywhere.generate() - non-streaming generation
  • RunAnywhere.cancelGeneration() - stop generation
  • EventBus.events.filterIsInstance<LLMEvent>() - listen for LLM events
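The thinking-mode support mentioned above amounts to separating `<think>...</think>` content from the visible answer. A stdlib sketch of that split follows; the app's actual parser in ChatViewModel.kt may differ, since during streaming it also has to handle a tag that is only partially received:

```kotlin
// Sketch: split a completed model response into "thinking" text and the
// visible answer. Handles only complete responses; a streaming parser
// (as ChatViewModel.kt needs) must also cope with partially received tags.
private val THINK = Regex("(?s)<think>(.*?)</think>")

fun splitThinking(response: String): Pair<String, String> {
    val thinking = THINK.findAll(response).joinToString("\n") { it.groupValues[1].trim() }
    val visible = THINK.replace(response, "").trim()
    return thinking to visible
}

fun main() {
    val (thought, answer) = splitThinking("<think>2+2 is 4</think>The answer is 4.")
    println(thought)  // 2+2 is 4
    println(answer)   // The answer is 4.
}
```

The UI can then render the thinking portion collapsed while streaming the visible answer.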

2. Speech-to-Text Screen (SpeechToTextScreen.kt)

What it demonstrates:

  • Batch mode: Record full audio, then transcribe
  • Live mode: Real-time streaming transcription
  • Audio level visualization
  • Transcription metrics (confidence, RTF, word count)

Key SDK APIs:

  • RunAnywhere.loadSTTModel() - load Whisper model
  • RunAnywhere.transcribe() - batch transcription
  • RunAnywhere.transcribeStream() - streaming transcription
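The audio bytes fed to transcription typically come from AudioRecord as 16-bit little-endian PCM. Whether the SDK wants raw bytes or normalized floats is not specified here, but the conversion below is a common preprocessing step and a generic helper, not an SDK API:

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Sketch: convert 16-bit little-endian PCM (AudioRecord's ENCODING_PCM_16BIT)
// into floats normalized to [-1, 1], the usual input format for speech models.
fun pcm16ToFloats(pcm: ByteArray): FloatArray {
    val shorts = ShortArray(pcm.size / 2)
    ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts)
    return FloatArray(shorts.size) { i -> shorts[i] / 32768.0f }
}
```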

3. Text-to-Speech Screen (TextToSpeechScreen.kt)

What it demonstrates:

  • Neural voice synthesis with Piper TTS
  • Speed and pitch controls
  • Audio playback with progress
  • Fun sample texts for testing

Key SDK APIs:

  • RunAnywhere.loadTTSVoice() - load TTS model
  • RunAnywhere.synthesize() - generate speech audio
  • RunAnywhere.stopSynthesis() - cancel synthesis

4. Voice Assistant Screen (VoiceAssistantScreen.kt)

What it demonstrates:

  • Complete voice AI pipeline
  • Automatic speech detection with silence timeout
  • Continuous conversation mode
  • Model status tracking for all 3 components (STT, LLM, TTS)

Key SDK APIs:

  • RunAnywhere.startVoiceSession() - start voice session
  • RunAnywhere.processVoice() - process audio through pipeline
  • RunAnywhere.voiceAgentComponentStates() - check component status
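The "automatic speech detection with silence timeout" can be approximated with energy-based gating. The SDK's own voice-activity detection is internal and presumably more sophisticated; this stdlib sketch just shows the shape of the idea:

```kotlin
import kotlin.math.sqrt

// Sketch: energy-based speech/silence gate, the simplest form of automatic
// speech detection. The SDK's internal VAD is likely more sophisticated.
fun rms(samples: ShortArray): Double {
    if (samples.isEmpty()) return 0.0
    val sumSq = samples.sumOf { it.toDouble() * it.toDouble() }
    return sqrt(sumSq / samples.size)
}

// The threshold is arbitrary; a real app calibrates it against ambient noise.
fun isSpeech(samples: ShortArray, threshold: Double = 500.0): Boolean =
    rms(samples) > threshold
```

A silence timeout then becomes "end the user's turn after N consecutive frames where isSpeech() is false."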

5. Settings Screen (SettingsScreen.kt)

What it demonstrates:

  • Storage usage overview
  • Downloaded model management
  • Model deletion with confirmation
  • Cache clearing

Key SDK APIs:

  • RunAnywhere.storageInfo() - get storage details
  • RunAnywhere.deleteModel() - remove downloaded model
  • RunAnywhere.clearCache() - clear temporary files

Testing

Run Unit Tests

bash
./gradlew test

Run Instrumentation Tests

bash
./gradlew connectedAndroidTest

Run Lint & Static Analysis

bash
# Detekt static analysis
./gradlew detekt

# ktlint formatting check
./gradlew ktlintCheck

# Android lint
./gradlew lint

Debugging

Enable Verbose Logging

Filter logcat for RunAnywhere SDK logs:

bash
adb logcat -s "RunAnywhere:D" "RunAnywhereApp:D" "ChatViewModel:D"

Common Log Tags

| Tag | Description |
|---|---|
| RunAnywhereApp | SDK initialization, model registration |
| ChatViewModel | LLM generation, streaming |
| STTViewModel | Speech transcription |
| TTSViewModel | Speech synthesis |
| VoiceAssistantVM | Voice pipeline |
| ModelSelectionVM | Model downloads, loading |

Memory Profiling

  1. Open Android Studio Profiler
  2. Select your app process
  3. Record memory allocations during model loading
  4. Expected: ~300MB-2GB depending on model size

Configuration

Build Variants

| Variant | Description |
|---|---|
| debug | Development build with debugging enabled |
| release | Optimized build with R8/ProGuard |
| benchmark | Release-like build for performance testing |

Environment Variables (for release builds)

bash
export KEYSTORE_PATH=/path/to/keystore.jks
export KEYSTORE_PASSWORD=your_password
export KEY_ALIAS=your_alias
export KEY_PASSWORD=your_key_password
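A build script would typically consume these variables along the following lines. This is a sketch; the app's actual build.gradle.kts signing setup is not shown here and may differ:

```kotlin
// build.gradle.kts (sketch) -- wire the environment variables above into a
// release signing config. The variable names match those exported above;
// the rest of the structure is standard Android Gradle DSL, shown illustratively.
android {
    signingConfigs {
        create("release") {
            storeFile = System.getenv("KEYSTORE_PATH")?.let { file(it) }
            storePassword = System.getenv("KEYSTORE_PASSWORD")
            keyAlias = System.getenv("KEY_ALIAS")
            keyPassword = System.getenv("KEY_PASSWORD")
        }
    }
    buildTypes {
        getByName("release") {
            signingConfig = signingConfigs.getByName("release")
        }
    }
}
```

Keeping credentials in environment variables keeps them out of version control; CI systems inject them as secrets.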

Supported Models

LLM Models (LlamaCpp/GGUF)

| Model | Size | Memory | Description |
|---|---|---|---|
| SmolLM2 360M Q8_0 | ~400MB | 500MB | Fast, lightweight chat |
| Qwen 2.5 0.5B Q6_K | ~500MB | 600MB | Multilingual, efficient |
| LFM2 350M Q4_K_M | ~200MB | 250MB | LiquidAI, ultra-compact |
| Llama 2 7B Chat Q4_K_M | ~4GB | 4GB | Powerful, larger model |
| Mistral 7B Instruct Q4_K_M | ~4GB | 4GB | High quality responses |

STT Models (ONNX/Whisper)

| Model | Size | Description |
|---|---|---|
| Sherpa Whisper Tiny (EN) | ~75MB | English transcription |

TTS Models (ONNX/Piper)

| Model | Size | Description |
|---|---|---|
| Piper US English (Medium) | ~65MB | Natural American voice |
| Piper British English (Medium) | ~65MB | British accent |

Known Limitations

  • ARM64 Only - native libraries are built for arm64-v8a only (x86 emulators are not supported)
  • Memory Usage - large models (7B+) require devices with 6GB+ RAM
  • First Load - initial model loading takes 1-3 seconds (cached afterward)
  • Thermal Throttling - extended inference may trigger thermal throttling on some devices

Contributing

See CONTRIBUTING.md for guidelines.

Development Setup

bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/runanywhere-sdks.git
cd runanywhere-sdks/examples/android/RunAnywhereAI

# Create feature branch
git checkout -b feature/your-feature

# Make changes and test
./gradlew assembleDebug
./gradlew test
./gradlew detekt ktlintCheck

# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature

# Open Pull Request

License

This project is licensed under the Apache License 2.0 - see LICENSE for details.


Support