RunAnywhere AI - Flutter Example

examples/flutter/RunAnywhereAI/README.md


A production-ready reference app demonstrating the RunAnywhere Flutter SDK capabilities for on-device AI. This app showcases how to build privacy-first, offline-capable AI features with LLM chat, speech-to-text, text-to-speech, and a complete voice assistant pipeline, all running locally on your device.


🚀 Running This App (Local Development)

Important: This sample app consumes the RunAnywhere Flutter SDK as local path dependencies. Before opening this project, you must first build the SDK's native libraries.

First-Time Setup

```bash
# 1. Navigate to the Flutter SDK directory
cd runanywhere-sdks/sdk/runanywhere-flutter

# 2. Run the setup script (~10-20 minutes on first run)
#    This builds the native C++ frameworks/libraries and enables local mode
./scripts/build-flutter.sh --setup

# 3. Navigate to this sample app
cd ../../examples/flutter/RunAnywhereAI

# 4. Install dependencies
flutter pub get

# 5. For iOS: Install pods
cd ios && pod install && cd ..

# 6. Run the app
flutter run

# Or open in Android Studio / VS Code and run from there
```

How It Works

This sample app's pubspec.yaml uses path dependencies to reference the local Flutter SDK packages:

This Sample App → Local Flutter SDK packages (sdk/runanywhere-flutter/packages/)
                          ↓
              Local XCFrameworks/JNI libs (in each package's ios/Frameworks/ and android/jniLibs/)
                          ↑
           Built by: ./scripts/build-flutter.sh --setup
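
The path dependencies in pubspec.yaml look roughly like the following sketch. The package names match the imports used later in this README (`runanywhere`, `runanywhere_llamacpp`, `runanywhere_onnx`), but the exact paths and package list are illustrative; check the actual pubspec.yaml for the real entries:

```yaml
dependencies:
  # Core SDK, resolved from the local checkout rather than pub.dev
  runanywhere:
    path: ../../../sdk/runanywhere-flutter/packages/runanywhere
  # Backend modules (paths illustrative)
  runanywhere_llamacpp:
    path: ../../../sdk/runanywhere-flutter/packages/runanywhere_llamacpp
  runanywhere_onnx:
    path: ../../../sdk/runanywhere-flutter/packages/runanywhere_onnx
```

Because these are `path:` dependencies, `flutter pub get` links the local packages directly, which is why the native libraries must be built first.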

The build-flutter.sh --setup script:

  1. Downloads dependencies (ONNX Runtime, Sherpa-ONNX)
  2. Builds the native C++ libraries from runanywhere-commons
  3. Copies XCFrameworks to packages/*/ios/Frameworks/
  4. Copies JNI .so files to packages/*/android/src/main/jniLibs/
  5. Creates .testlocal marker files (enables local library consumption)

After Modifying the SDK

  • Dart SDK code changes: Run flutter run again (hot reload works for most changes)
  • C++ code changes (in runanywhere-commons):

    ```bash
    cd sdk/runanywhere-flutter
    ./scripts/build-flutter.sh --local --rebuild-commons
    ```

See It In Action

<p align="center"> <a href="https://apps.apple.com/us/app/runanywhere/id6756506307">App Store</a> · <a href="https://play.google.com/store/apps/details?id=com.runanywhere.runanywhereai">Google Play</a> </p>

Try the native iOS and Android apps to experience on-device AI capabilities immediately. The Flutter sample app demonstrates the same features using the cross-platform Flutter SDK.


Screenshots


Features

This sample app demonstrates the full power of the RunAnywhere Flutter SDK:

| Feature | Description | SDK Integration |
|---|---|---|
| AI Chat | Interactive LLM conversations with streaming responses | `RunAnywhere.generateStream()` |
| Thinking Mode | Support for models with `<think>...</think>` reasoning | Thinking tag parsing |
| Real-time Analytics | Token speed, generation time, inference metrics | `MessageAnalytics` |
| Speech-to-Text | Voice transcription with batch & live modes | `RunAnywhere.transcribe()` |
| Text-to-Speech | Neural voice synthesis with Piper TTS | `RunAnywhere.synthesize()` |
| Voice Assistant | Full STT to LLM to TTS pipeline with auto-detection | `VoiceSession` API |
| Model Management | Download, load, and manage multiple AI models | `ModelManager` |
| Storage Management | View storage usage and delete models | `RunAnywhere.getStorageInfo()` |
| Offline Support | All features work without internet | On-device inference |
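
As a rough illustration of what the real-time analytics feature reports, tokens-per-second can be derived from a token count and elapsed generation time. This is a sketch with illustrative names, not the app's actual `MessageAnalytics` type:

```dart
// Sketch: compute generation speed the way a chat analytics
// overlay might. Names here are illustrative, not the SDK's API.
double tokensPerSecond(int tokenCount, Duration elapsed) {
  final seconds = elapsed.inMicroseconds / Duration.microsecondsPerSecond;
  return seconds > 0 ? tokenCount / seconds : 0;
}

void main() {
  // 120 tokens generated in 4 seconds -> 30 tok/s
  print(tokensPerSecond(120, const Duration(seconds: 4)));
}
```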

Architecture

The app follows Flutter best practices with a clean architecture pattern:

```
┌─────────────────────────────────────────────────────────────────────┐
│                         Flutter/Material UI                         │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────────┐ │
│  │   Chat   │ │   STT    │ │   TTS    │ │  Voice   │ │  Settings  │ │
│  │Interface │ │  View    │ │  View    │ │Assistant │ │   View     │ │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └─────┬──────┘ │
│       ▼            ▼            ▼            ▼             ▼        │
│  ┌───────────────────────────────────────────────────────────────┐  │
│  │                   Provider State Management                   │  │
│  │                   (ModelManager, Services)                    │  │
│  └───────────────────────────────────────────────────────────────┘  │
├─────────────────────────────────────────────────────────────────────┤
│                       RunAnywhere Flutter SDK                       │
│  ┌───────────────────────────────────────────────────────────────┐  │
│  │  Core API (generate, transcribe, synthesize)                  │  │
│  │  Model Management (download, load, unload, delete)            │  │
│  │  Voice Session (STT → LLM → TTS pipeline)                     │  │
│  └───────────────────────────────────────────────────────────────┘  │
│                              │                                      │
│           ┌──────────────────┴───────────────────┐                  │
│           ▼                                      ▼                  │
│  ┌─────────────────┐                    ┌─────────────────┐         │
│  │    LlamaCpp     │                    │  ONNX Runtime   │         │
│  │   (LLM/GGUF)    │                    │   (STT/TTS)     │         │
│  └─────────────────┘                    └─────────────────┘         │
└─────────────────────────────────────────────────────────────────────┘
```

Key Architecture Decisions

  • Provider Pattern — ChangeNotifier + Provider for state management
  • Feature-First Structure — Each feature is self-contained with its own views and logic
  • Shared Core Services — ModelManager, AudioRecordingService, AudioPlayerService
  • Design System — Consistent AppColors, AppTypography, AppSpacing tokens
  • SDK Integration — Direct SDK calls with async/await and Stream support

Project Structure

```
RunAnywhereAI/
├── lib/
│   ├── main.dart                      # App entry point
│   │
│   ├── app/
│   │   ├── runanywhere_ai_app.dart    # SDK initialization, model registration
│   │   └── content_view.dart          # Main tab navigation (5 tabs)
│   │
│   ├── core/
│   │   ├── design_system/
│   │   │   ├── app_colors.dart        # Color palette with dark mode support
│   │   │   ├── app_spacing.dart       # Spacing constants
│   │   │   └── typography.dart        # Text styles
│   │   │
│   │   ├── models/
│   │   │   └── app_types.dart         # Shared type definitions
│   │   │
│   │   ├── services/
│   │   │   ├── model_manager.dart     # SDK model management wrapper
│   │   │   ├── audio_recording_service.dart  # Microphone capture
│   │   │   ├── audio_player_service.dart     # TTS playback
│   │   │   ├── permission_service.dart       # Permission handling
│   │   │   ├── conversation_store.dart       # Chat history persistence
│   │   │   └── device_info_service.dart      # Device capabilities
│   │   │
│   │   └── utilities/
│   │       ├── constants.dart         # Preference keys, defaults
│   │       └── keychain_helper.dart   # Secure storage wrapper
│   │
│   ├── features/
│   │   ├── chat/
│   │   │   └── chat_interface_view.dart   # LLM chat with streaming
│   │   │
│   │   ├── voice/
│   │   │   ├── speech_to_text_view.dart   # Batch & live STT
│   │   │   ├── text_to_speech_view.dart   # TTS synthesis & playback
│   │   │   └── voice_assistant_view.dart  # Full STT→LLM→TTS pipeline
│   │   │
│   │   ├── models/
│   │   │   ├── models_view.dart           # Model browser
│   │   │   ├── model_selection_sheet.dart # Model picker bottom sheet
│   │   │   ├── model_list_view_model.dart # Model list logic
│   │   │   ├── model_components.dart      # Reusable model UI widgets
│   │   │   ├── model_status_components.dart # Status badges, indicators
│   │   │   ├── model_types.dart           # Framework enums, model info
│   │   │   └── add_model_from_url_view.dart # Import custom models
│   │   │
│   │   └── settings/
│   │       └── combined_settings_view.dart # Storage & logging config
│   │
│   └── helpers/
│       └── adaptive_layout.dart       # Responsive layout utilities
│
├── pubspec.yaml                       # Dependencies, SDK references
├── android/                           # Android platform config
├── ios/                               # iOS platform config
└── README.md                          # This file
```

Quick Start

Prerequisites

  • Flutter 3.10.0 or later (install guide)
  • Dart 3.0.0 or later (included with Flutter)
  • iOS — Xcode 14+ (for iOS builds)
  • Android — Android Studio + SDK 21+ (for Android builds)
  • ~2GB free storage for AI models
  • Device — Physical device recommended for best performance

Clone & Build

```bash
# Clone the repository
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks/examples/flutter/RunAnywhereAI

# Install dependencies
flutter pub get

# Run on connected device
flutter run
```

Run via IDE

  1. Open the project in VS Code or Android Studio
  2. Wait for Flutter dependencies to resolve
  3. Select a physical device (iOS or Android)
  4. Press F5 (VS Code) or Run (Android Studio)

Build Release APK/IPA

```bash
# Android APK
flutter build apk --release

# Android App Bundle
flutter build appbundle --release

# iOS (requires Xcode)
flutter build ios --release
```

SDK Integration Examples

Initialize the SDK

The SDK is initialized in runanywhere_ai_app.dart:

```dart
import 'package:runanywhere/runanywhere.dart';
import 'package:runanywhere_llamacpp/runanywhere_llamacpp.dart';
import 'package:runanywhere_onnx/runanywhere_onnx.dart';

// 1. Initialize SDK in development mode
await RunAnywhere.initialize();

// 2. Register LlamaCpp module for LLM models (GGUF)
await LlamaCpp.register();
LlamaCpp.addModel(
  id: 'smollm2-360m-q8_0',
  name: 'SmolLM2 360M Q8_0',
  url: 'https://huggingface.co/prithivMLmods/SmolLM2-360M-GGUF/resolve/main/SmolLM2-360M.Q8_0.gguf',
  memoryRequirement: 500000000,
);

// 3. Register ONNX module for STT/TTS models
await Onnx.register();
Onnx.addModel(
  id: 'sherpa-onnx-whisper-tiny.en',
  name: 'Sherpa Whisper Tiny (ONNX)',
  url: 'https://github.com/RunanywhereAI/sherpa-onnx/releases/download/runanywhere-models-v1/sherpa-onnx-whisper-tiny.en.tar.gz',
  modality: ModelCategory.speechRecognition,
  memoryRequirement: 75000000,
);
```

Download & Load a Model

```dart
// Download with progress tracking (via ModelManager)
await ModelManager.shared.downloadModel(modelInfo);

// Load LLM model
await RunAnywhere.loadLLMModel('smollm2-360m-q8_0');

// Check if model is loaded
final isLoaded = RunAnywhere.isModelLoaded;
```

Stream Text Generation

```dart
// Generate with streaming (real-time tokens)
final streamResult = await RunAnywhere.generateStream(prompt, options: options);

await for (final token in streamResult.stream) {
  // Display each token as it arrives
  setState(() {
    _responseText += token;
  });
}

// Or non-streaming
final result = await RunAnywhere.generate(prompt, options: options);
print('Response: ${result.text}');
print('Speed: ${result.tokensPerSecond} tok/s');
```
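
Thinking-mode models emit their reasoning between `<think>...</think>` tags before the final answer, so the accumulated streamed text needs to be split before display. A minimal sketch of that parsing (illustrative helper, not the app's actual implementation):

```dart
// Sketch: split a model response into "thinking" and "answer" parts.
// Assumes at most one <think>...</think> block in the text (Dart 3 records).
({String thinking, String answer}) splitThinking(String raw) {
  final match = RegExp(r'<think>([\s\S]*?)</think>', caseSensitive: false)
      .firstMatch(raw);
  if (match == null) return (thinking: '', answer: raw.trim());
  return (
    thinking: match.group(1)!.trim(),
    answer: raw.substring(match.end).trim(),
  );
}

void main() {
  final parts = splitThinking('<think>User wants a greeting.</think>Hello!');
  print(parts.thinking); // User wants a greeting.
  print(parts.answer);   // Hello!
}
```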

Speech-to-Text

```dart
// Load STT model
await RunAnywhere.loadSTTModel('sherpa-onnx-whisper-tiny.en');

// Transcribe audio bytes
final transcription = await RunAnywhere.transcribe(audioBytes);
print('Transcription: $transcription');
```

Text-to-Speech

```dart
// Load TTS voice
await RunAnywhere.loadTTSVoice('vits-piper-en_US-lessac-medium');

// Synthesize speech with options
final result = await RunAnywhere.synthesize(
  text,
  rate: 1.0,
  pitch: 1.0,
  volume: 1.0,
);

// Play audio (result.samples is Float32List)
await audioPlayer.play(result.samples, result.sampleRate);
```
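
Many platform audio APIs expect 16-bit PCM rather than float samples, so a playback service may need a conversion step like the sketch below. Whether the app's AudioPlayerService actually converts depends on its player backend; this is only an illustration of handling the `Float32List` samples:

```dart
import 'dart:typed_data';

/// Sketch: convert float PCM samples in [-1.0, 1.0] (as returned in
/// `result.samples`) to 16-bit signed PCM for players that require it.
Int16List floatToPcm16(Float32List samples) {
  final out = Int16List(samples.length);
  for (var i = 0; i < samples.length; i++) {
    // Clamp first so out-of-range floats cannot overflow the int16 range.
    final clamped = samples[i].clamp(-1.0, 1.0);
    out[i] = (clamped * 32767).round();
  }
  return out;
}
```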

Voice Assistant Pipeline (STT to LLM to TTS)

```dart
// Start voice session
final session = await RunAnywhere.startVoiceSession(
  config: VoiceSessionConfig(),
);

// Listen to session events
session.events.listen((event) {
  if (event is VoiceSessionTranscribed) {
    print('User said: ${event.text}');
  } else if (event is VoiceSessionResponded) {
    print('AI response: ${event.text}');
  } else if (event is VoiceSessionSpeaking) {
    // Audio is being played
  }
});

// Stop session
session.stop();
```

Key Screens Explained

1. Chat Screen (chat_interface_view.dart)

What it demonstrates:

  • Streaming text generation with real-time token display
  • Thinking mode support (<think>...</think> tags)
  • Message analytics (tokens/sec, generation time)
  • Conversation history with Markdown rendering
  • Model selection bottom sheet integration

Key SDK APIs:

  • RunAnywhere.generateStream() — Streaming generation
  • RunAnywhere.generate() — Non-streaming generation
  • RunAnywhere.currentLLMModel() — Get loaded model info

2. Speech-to-Text Screen (speech_to_text_view.dart)

What it demonstrates:

  • Batch mode: Record full audio, then transcribe
  • Live mode: Real-time streaming transcription (when supported)
  • Audio level visualization
  • Mode selection (batch vs. live)

Key SDK APIs:

  • RunAnywhere.loadSTTModel() — Load Whisper model
  • RunAnywhere.transcribe() — Batch transcription
  • RunAnywhere.isSTTModelLoaded — Check model status

3. Text-to-Speech Screen (text_to_speech_view.dart)

What it demonstrates:

  • Neural voice synthesis with Piper TTS
  • Speed and pitch controls with sliders
  • Audio playback with progress indicator
  • Audio metadata display (duration, sample rate, size)

Key SDK APIs:

  • RunAnywhere.loadTTSVoice() — Load TTS model
  • RunAnywhere.synthesize() — Generate speech audio
  • RunAnywhere.isTTSVoiceLoaded — Check voice status

4. Voice Assistant Screen (voice_assistant_view.dart)

What it demonstrates:

  • Complete voice AI pipeline (STT to LLM to TTS)
  • Model configuration for all 3 components
  • Audio level visualization during recording
  • Conversation turn management
  • Session state machine (connecting, listening, processing, speaking)

Key SDK APIs:

  • RunAnywhere.startVoiceSession() — Start voice session
  • RunAnywhere.isVoiceAgentReady — Check all components loaded
  • VoiceSessionEvent — Session event stream

5. Settings Screen (combined_settings_view.dart)

What it demonstrates:

  • Storage usage overview (total, available, model storage)
  • Downloaded model list with details
  • Model deletion with confirmation dialog
  • Analytics logging toggle

Key SDK APIs:

  • RunAnywhere.getStorageInfo() — Get storage details
  • RunAnywhere.getDownloadedModelsWithInfo() — List models
  • RunAnywhere.deleteStoredModel() — Remove model

Supported Models

LLM Models (LlamaCpp/GGUF)

| Model | Size | Memory | Description |
|---|---|---|---|
| SmolLM2 360M Q8_0 | ~400MB | 500MB | Fast, lightweight chat |
| Qwen 2.5 0.5B Q6_K | ~500MB | 600MB | Multilingual, efficient |
| LFM2 350M Q4_K_M | ~200MB | 250MB | LiquidAI, ultra-compact |
| LFM2 350M Q8_0 | ~350MB | 400MB | Higher quality version |
| Llama 2 7B Chat Q4_K_M | ~4GB | 4GB | Powerful, larger model |
| Mistral 7B Instruct Q4_K_M | ~4GB | 4GB | High quality responses |

STT Models (ONNX/Whisper)

| Model | Size | Description |
|---|---|---|
| Sherpa Whisper Tiny (EN) | ~75MB | Fast English transcription |
| Sherpa Whisper Small (EN) | ~250MB | Higher accuracy |

TTS Models (ONNX/Piper)

| Model | Size | Description |
|---|---|---|
| Piper US English (Medium) | ~65MB | Natural American voice |
| Piper British English (Medium) | ~65MB | British accent |

Testing

Run Tests

```bash
# Run all tests
flutter test

# Run with coverage
flutter test --coverage

# Run specific test file
flutter test test/widget_test.dart
```

Run Lint & Analysis

```bash
# Analyze code quality
flutter analyze

# Format code
dart format lib/ test/

# Fix issues automatically
dart fix --apply
```

Debugging

Enable Verbose Logging

The app uses debugPrint() extensively. Filter the logs, for example:

```bash
# Flutter logs
flutter logs | grep -E "RunAnywhere|SDK"
```

Common Debug Messages

| Log Prefix | Description |
|---|---|
| SDK | SDK initialization |
| SUCCESS | Success operations |
| ERROR | Error conditions |
| MODULE | Module registration |
| LOADING | Loading/processing |
| AUDIO | Audio operations |
| RECORDING | Recording operations |

Memory Profiling

  1. Run app in profile mode: flutter run --profile
  2. Open DevTools: Press p in terminal
  3. Navigate to Memory tab
  4. Expected: ~300MB-2GB depending on model size

Configuration

Environment Setup

The SDK automatically detects the environment:

```dart
import 'package:flutter/foundation.dart' show kDebugMode;

if (kDebugMode) {
  // Development mode (default)
  await RunAnywhere.initialize();
} else {
  // Production mode
  await RunAnywhere.initialize(
    apiKey: 'your-api-key',
    baseURL: 'https://api.runanywhere.ai',
    environment: SDKEnvironment.production,
  );
}
```

Preference Keys

User preferences are stored via SharedPreferences:

| Key | Type | Default | Description |
|---|---|---|---|
| `useStreaming` | bool | true | Enable streaming generation |
| `defaultTemperature` | double | 0.7 | LLM temperature |
| `defaultMaxTokens` | int | 500 | Max tokens per generation |
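
Reading these keys with the shared_preferences package looks roughly like the sketch below, falling back to the documented defaults. The helper name and record shape are illustrative; the app's actual keys and defaults live in constants.dart:

```dart
import 'package:shared_preferences/shared_preferences.dart';

/// Sketch: load generation settings with the documented fallbacks.
Future<({bool useStreaming, double temperature, int maxTokens})>
    loadGenerationPrefs() async {
  final prefs = await SharedPreferences.getInstance();
  return (
    useStreaming: prefs.getBool('useStreaming') ?? true,
    temperature: prefs.getDouble('defaultTemperature') ?? 0.7,
    maxTokens: prefs.getInt('defaultMaxTokens') ?? 500,
  );
}
```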

Known Limitations

  • ARM64 Recommended — Native libraries optimized for arm64 (x86 emulators may be slow)
  • Memory Usage — Large models (7B+) require devices with 6GB+ RAM
  • First Load — Initial model loading takes 1-3 seconds (cached afterward)
  • Live STT — Requires WhisperKit-compatible models (limited in ONNX)
  • Platform Channels — Some SDK features use FFI/platform channels

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Development Setup

```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/runanywhere-sdks.git
cd runanywhere-sdks/examples/flutter/RunAnywhereAI

# Create feature branch
git checkout -b feature/your-feature

# Make changes and test
flutter pub get
flutter analyze
flutter test

# Stage, commit, and push
git add .
git commit -m "feat: your feature description"
git push origin feature/your-feature

# Open Pull Request
```

License

This project is licensed under the Apache License 2.0 - see LICENSE for details.


Support