RunAnywhere AI - iOS Example


A production-ready reference app demonstrating the RunAnywhere Swift SDK capabilities for on-device AI. This app showcases how to build privacy-first, offline-capable AI features with LLM chat, speech-to-text, text-to-speech, and a complete voice assistant pipeline, all running locally on your device.


🚀 Running This App (Local Development)

Important: This sample app consumes the RunAnywhere Swift SDK as a local Swift package. Before opening this project, you must first build the SDK's native libraries.

First-Time Setup

```bash
# 1. Navigate to the Swift SDK directory
cd runanywhere-sdks/sdk/runanywhere-swift

# 2. Run the setup script (~5-15 minutes on first run)
#    This builds the native C++ frameworks and sets testLocal=true
./scripts/build-swift.sh --setup

# 3. Navigate to this sample app
cd ../../examples/ios/RunAnywhereAI

# 4. Open in Xcode
open RunAnywhereAI.xcodeproj

# 5. If Xcode shows package errors, reset caches:
#    File > Packages > Reset Package Caches

# 6. Build and Run (⌘+R)
```

How It Works

This sample app uses Package.swift to reference the local Swift SDK:

This Sample App → Local Swift SDK (sdk/runanywhere-swift/)
                          ↓
              Local XCFrameworks (sdk/runanywhere-swift/Binaries/)
                          ↑
           Built by: ./scripts/build-swift.sh --setup

The build-swift.sh --setup script:

  1. Builds the native C++ frameworks from runanywhere-commons
  2. Copies them to sdk/runanywhere-swift/Binaries/
  3. Sets testLocal = true in the SDK's Package.swift
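
The app-side manifest can be sketched as follows. This is an illustrative outline only, not the repository's actual Package.swift; the relative path, product name, and platform versions are assumptions based on the layout described above:

```swift
// swift-tools-version:5.9
// Sketch of how a sample app can consume the SDK as a local package.
import PackageDescription

let package = Package(
    name: "RunAnywhereAI",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        // Relative path into the same repository checkout, so Xcode
        // resolves the SDK sources and locally built XCFrameworks together.
        .package(path: "../../../sdk/runanywhere-swift")
    ],
    targets: [
        .target(
            name: "RunAnywhereAI",
            dependencies: [
                .product(name: "RunAnywhere", package: "runanywhere-swift")
            ]
        )
    ]
)
```

Because the dependency is a path reference, editing SDK sources is picked up on the next build without bumping a version.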

After Modifying the SDK

  • Swift SDK code changes: Xcode picks them up automatically
  • C++ code changes (in runanywhere-commons):

    ```bash
    cd sdk/runanywhere-swift
    ./scripts/build-swift.sh --local --build-commons
    ```

Try It Now

Download [the app from the App Store](https://apps.apple.com/us/app/runanywhere/id6756506307) to try it out.


Screenshots

Screenshots of the app are available in docs/screenshots/ in this repository.

Features

This sample app demonstrates the full power of the RunAnywhere SDK:

| Feature | Description | SDK Integration |
|---------|-------------|-----------------|
| AI Chat | Interactive LLM conversations with streaming responses | `RunAnywhere.generateStream()` |
| Thinking Mode | Support for models with `<think>...</think>` reasoning | Thinking tag parsing |
| Real-time Analytics | Token speed, generation time, inference metrics | `MessageAnalytics` |
| Speech-to-Text | Voice transcription with batch & live modes | `RunAnywhere.transcribe()` |
| Text-to-Speech | Neural voice synthesis with Piper TTS | `RunAnywhere.synthesize()` |
| Voice Assistant | Full STT → LLM → TTS pipeline with auto-detection | Voice Pipeline API |
| Model Management | Download, load, and manage multiple AI models | `RunAnywhere.downloadModel()` |
| Storage Management | View storage usage and delete models | `RunAnywhere.storageInfo()` |
| Offline Support | All features work without internet | On-device inference |
| Cross-Platform | Runs on iOS, iPadOS, and macOS | Universal app |

Architecture

The app follows modern Apple architecture patterns:

┌──────────────────────────────────────────────────────────────────┐
│                          SwiftUI Views                           │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────┐ │
│  │  Chat    │ │   STT    │ │   TTS    │ │  Voice   │ │ Settings│ │
│  │  View    │ │   View   │ │   View   │ │  View    │ │  View   │ │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬────┘ │
├───────┼────────────┼────────────┼────────────┼────────────┼──────┤
│       ▼            ▼            ▼            ▼            ▼      │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────┐ │
│  │   LLM    │ │   STT    │ │   TTS    │ │  Voice   │ │Settings │ │
│  │ViewModel │ │ViewModel │ │ViewModel │ │ ViewModel│ │ViewModel│ │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬────┘ │
├───────┴────────────┴────────────┴────────────┴────────────┴──────┤
│                                                                  │
│                      RunAnywhere Swift SDK                       │
│  ┌─────────────────────────────────────────────────────────────┐ │
│  │  Core APIs (generate, transcribe, synthesize, pipeline)     │ │
│  │  EventBus (LLMEvent, STTEvent, TTSEvent, ModelEvent)        │ │
│  │  Model Management (download, load, unload, delete)          │ │
│  └─────────────────────────────────────────────────────────────┘ │
│                               │                                  │
│           ┌───────────────────┴──────────────────┐               │
│           ▼                                      ▼               │
│  ┌─────────────────┐                  ┌─────────────────┐        │
│  │   LlamaCPP      │                  │   ONNX Runtime  │        │
│  │   (LLM/GGUF)    │                  │   (STT/TTS)     │        │
│  └─────────────────┘                  └─────────────────┘        │
└──────────────────────────────────────────────────────────────────┘

Key Architecture Decisions

  • MVVM Pattern – ViewModels manage UI state with @Observable, SwiftUI observes changes
  • Single Entry Point – RunAnywhereAIApp.swift handles SDK initialization
  • Swift Concurrency – All async operations use async/await with structured concurrency
  • Cross-Platform – Conditional compilation supports iOS, iPadOS, and macOS
  • Design System – Centralized colors, typography, and spacing via AppColors, AppTypography, AppSpacing
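
The MVVM split can be sketched with a small, self-contained stand-in. All names and fields below are illustrative, not the app's actual types; the real ViewModels are annotated with @Observable so SwiftUI re-renders on state changes, which is omitted here to keep the sketch dependency-free:

```swift
// Simplified stand-in for a chat ViewModel. In the app, @Observable on
// this class lets SwiftUI views observe `messages` and `isGenerating`.
final class ChatViewModelSketch {
    private(set) var messages: [String] = []
    private(set) var isGenerating = false

    // The view calls this; the ViewModel owns the state transition.
    func send(_ prompt: String) {
        messages.append("user: \(prompt)")
        isGenerating = true
    }

    // Called as streamed tokens arrive (e.g. from a generateStream() loop):
    // tokens are appended to the in-progress assistant message.
    func receive(token: String) {
        if isGenerating, let last = messages.last, last.hasPrefix("ai: ") {
            messages[messages.count - 1] = last + token
        } else {
            messages.append("ai: \(token)")
        }
    }

    func finishGeneration() { isGenerating = false }
}
```

The view layer only reads state and calls methods; all transitions live in the ViewModel, which is what keeps the streaming UI logic testable.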

Project Structure

RunAnywhereAI/
├── RunAnywhereAI/
│   ├── App/
│   │   ├── RunAnywhereAIApp.swift        # Entry point, SDK initialization
│   │   └── ContentView.swift             # Tab navigation, main UI structure
│   │
│   ├── Core/
│   │   ├── DesignSystem/
│   │   │   ├── AppColors.swift           # Color palette
│   │   │   ├── AppSpacing.swift          # Spacing constants
│   │   │   └── Typography.swift          # Font styles
│   │   ├── Models/
│   │   │   ├── AppTypes.swift            # Shared data models
│   │   │   └── MarkdownDetector.swift    # Markdown parsing utilities
│   │   └── Services/
│   │       └── ModelManager.swift        # Model lifecycle management
│   │
│   ├── Features/
│   │   ├── Chat/
│   │   │   ├── Models/
│   │   │   │   └── Message.swift         # Chat message model
│   │   │   ├── ViewModels/
│   │   │   │   ├── LLMViewModel.swift    # Chat logic, streaming
│   │   │   │   ├── LLMViewModel+Generation.swift
│   │   │   │   └── LLMViewModel+Analytics.swift
│   │   │   └── Views/
│   │   │       ├── ChatInterfaceView.swift   # Main chat UI
│   │   │       ├── MessageBubbleView.swift   # Message rendering
│   │   │       └── ConversationListView.swift
│   │   │
│   │   ├── Voice/
│   │   │   ├── SpeechToTextView.swift    # STT UI with waveform
│   │   │   ├── STTViewModel.swift        # Batch & live transcription
│   │   │   ├── TextToSpeechView.swift    # TTS UI with playback
│   │   │   ├── TTSViewModel.swift        # Synthesis & audio playback
│   │   │   ├── VoiceAssistantView.swift  # Full voice pipeline UI
│   │   │   └── VoiceAgentViewModel.swift # STT→LLM→TTS orchestration
│   │   │
│   │   ├── Models/
│   │   │   ├── ModelSelectionSheet.swift # Model picker UI
│   │   │   └── ModelListViewModel.swift  # Download & load logic
│   │   │
│   │   ├── Storage/
│   │   │   ├── StorageView.swift         # Storage management UI
│   │   │   └── StorageViewModel.swift    # Storage info, cache clearing
│   │   │
│   │   └── Settings/
│   │       └── CombinedSettingsView.swift # Settings & storage UI
│   │
│   ├── Helpers/
│   │   ├── AdaptiveLayout.swift          # Cross-platform layout helpers
│   │   ├── CodeBlockMarkdownRenderer.swift
│   │   ├── InlineMarkdownRenderer.swift
│   │   └── SmartMarkdownRenderer.swift
│   │
│   └── Resources/
│       ├── Assets.xcassets/              # App icons, images
│       ├── RunAnywhereConfig-Debug.plist
│       └── RunAnywhereConfig-Release.plist
│
├── RunAnywhereAITests/                   # Unit tests
├── RunAnywhereAIUITests/                 # UI tests
├── docs/screenshots/                     # App screenshots
├── scripts/
│   └── build_and_run_ios_sample.sh       # Build automation
├── Package.swift                         # SPM dependency manifest
└── README.md                             # This file

Quick Start

Prerequisites

  • Xcode 15.0 or later
  • iOS 17.0+ / macOS 14.0+
  • Swift 5.9+
  • Device/Simulator with Apple Silicon (recommended: physical device for best performance)
  • ~500MB-2GB free storage for AI models

Clone & Build

```bash
# Clone the repository
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks/examples/ios/RunAnywhereAI

# Open in Xcode
open RunAnywhereAI.xcodeproj
```

Run via Xcode

  1. Open the project in Xcode
  2. Wait for Swift Package Manager to resolve dependencies
  3. Select a physical device (Apple Silicon recommended) or simulator
  4. Click Run or press ⌘+R

Run via Command Line

```bash
# Build and run on simulator
./scripts/build_and_run_ios_sample.sh simulator "iPhone 16 Pro"

# Build and run on device
./scripts/build_and_run_ios_sample.sh device
```

SDK Integration Examples

Initialize the SDK

The SDK is initialized in RunAnywhereAIApp.swift:

```swift
import SwiftUI
import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime

@main
struct RunAnywhereAIApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .task {
                    await initializeSDK()
                }
        }
    }

    private func initializeSDK() async {
        do {
            // Initialize SDK (development mode - no API key needed)
            try RunAnywhere.initialize()
        } catch {
            print("SDK initialization failed: \(error)")
            return
        }

        // Register AI backends
        LlamaCPP.register(priority: 100)  // LLM backend (GGUF models)
        ONNX.register(priority: 100)      // STT/TTS backend

        // Register models
        RunAnywhere.registerModel(
            id: "smollm2-360m-q8_0",
            name: "SmolLM2 360M Q8_0",
            url: URL(string: "https://huggingface.co/...")!,
            framework: .llamaCpp,
            memoryRequirement: 500_000_000
        )
    }
}
```

Download & Load a Model

```swift
// Download with progress tracking
for try await progress in RunAnywhere.downloadModel("smollm2-360m-q8_0") {
    print("Download: \(Int(progress.percentage * 100))%")
}

// Load into memory
try await RunAnywhere.loadModel("smollm2-360m-q8_0")
```

Stream Text Generation

```swift
// Generate with streaming
let result = try await RunAnywhere.generateStream(
    prompt,
    options: LLMGenerationOptions(maxTokens: 512, temperature: 0.7)
)

for try await token in result.stream {
    // Display token in real-time
    displayToken(token)
}

// Get final analytics
let metrics = try await result.result.value
print("Speed: \(metrics.performanceMetrics.tokensPerSecond) tok/s")
```
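
The per-message analytics reduce to a few timestamps. A sketch of the underlying arithmetic (the type and field names here are illustrative, not the SDK's MessageAnalytics):

```swift
import Foundation

// Derive the two headline metrics from generation timestamps.
// All times are in seconds; names are illustrative, not SDK types.
struct GenerationTimings {
    let start: TimeInterval
    let firstToken: TimeInterval
    let end: TimeInterval
    let tokenCount: Int

    // Time to first token: the latency before anything appears on screen.
    var timeToFirstToken: TimeInterval { firstToken - start }

    // Decode speed averaged over the whole generation window.
    var tokensPerSecond: Double {
        let duration = end - start
        return duration > 0 ? Double(tokenCount) / duration : 0
    }
}
```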

Speech-to-Text

```swift
// Load STT model
try await RunAnywhere.loadSTTModel("sherpa-onnx-whisper-tiny.en")

// Transcribe audio bytes
let transcription = try await RunAnywhere.transcribe(audioData)
print("Transcription: \(transcription.text)")
```

Text-to-Speech

```swift
// Load TTS voice
try await RunAnywhere.loadTTSModel("vits-piper-en_US-lessac-medium")

// Synthesize speech
let result = try await RunAnywhere.synthesize(
    text,
    options: TTSOptions(rate: 1.0, pitch: 1.0)
)
// result.audioData contains WAV audio bytes
```
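
Since the synthesized audio is a complete WAV file, its header can be inspected before playback. A minimal sketch that assumes a canonical 44-byte PCM header (real files may carry extra chunks, so treat this as illustrative, not a full parser):

```swift
import Foundation

// Read sample rate and channel count from a canonical 44-byte PCM WAV
// header. Returns nil if the data is too short or not RIFF/WAVE.
func wavFormat(_ data: Data) -> (sampleRate: UInt32, channels: UInt16)? {
    guard data.count >= 44,
          data.prefix(4).elementsEqual("RIFF".utf8),
          data.subdata(in: 8..<12).elementsEqual("WAVE".utf8) else {
        return nil
    }
    // Little-endian fields at fixed offsets in the fmt chunk.
    let channels = UInt16(data[22]) | (UInt16(data[23]) << 8)
    let sampleRate = UInt32(data[24]) | (UInt32(data[25]) << 8)
        | (UInt32(data[26]) << 16) | (UInt32(data[27]) << 24)
    return (sampleRate, channels)
}
```

A player can use the reported sample rate and channel count to configure its output format instead of hard-coding them.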

Voice Pipeline (STT → LLM → TTS)

```swift
// Configure voice pipeline
let config = ModularPipelineConfig(
    components: [.vad, .stt, .llm, .tts],
    stt: VoiceSTTConfig(modelId: "sherpa-onnx-whisper-tiny.en"),
    llm: VoiceLLMConfig(modelId: "smollm2-360m-q8_0", maxTokens: 256),
    tts: VoiceTTSConfig(modelId: "vits-piper-en_US-lessac-medium")
)

// Process voice through full pipeline
let pipeline = try await RunAnywhere.createVoicePipeline(config: config)
for try await event in pipeline.process(audioStream: audioStream) {
    switch event {
    case .transcription(let text):
        print("User said: \(text)")
    case .llmResponse(let response):
        print("AI response: \(response)")
    case .synthesis(let audio):
        playAudio(audio)
    default:
        break  // ignore other pipeline events (e.g. VAD state changes)
    }
}
```

Key Screens Explained

1. Chat Screen (ChatInterfaceView.swift)

What it demonstrates:

  • Streaming text generation with real-time token display
  • Thinking mode support (<think>...</think> tags)
  • Message analytics (tokens/sec, time to first token)
  • Conversation history management
  • Model selection bottom sheet integration
  • Markdown rendering with code highlighting

Key SDK APIs:

  • RunAnywhere.generateStream() – Streaming generation
  • RunAnywhere.generate() – Non-streaming generation
  • RunAnywhere.cancelGeneration() – Stop generation
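
For thinking mode, the model's response interleaves hidden reasoning inside <think>...</think> tags with the visible answer. A simplified split might look like this (illustrative only; the app's actual parser also has to handle tags that arrive split across streamed tokens):

```swift
import Foundation

// Split a model response into hidden reasoning and visible answer text.
// Handles a single leading <think>...</think> block, which is how
// thinking-mode models typically prefix their replies.
func splitThinking(_ response: String) -> (thinking: String?, answer: String) {
    guard let open = response.range(of: "<think>"),
          let close = response.range(of: "</think>"),
          open.upperBound <= close.lowerBound else {
        return (nil, response)  // no complete thinking block
    }
    let thinking = String(response[open.upperBound..<close.lowerBound])
        .trimmingCharacters(in: .whitespacesAndNewlines)
    let answer = String(response[close.upperBound...])
        .trimmingCharacters(in: .whitespacesAndNewlines)
    return (thinking, answer)
}
```

The UI can then render the reasoning in a collapsible section and the answer as the message body.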

2. Speech-to-Text Screen (SpeechToTextView.swift)

What it demonstrates:

  • Batch mode: Record full audio, then transcribe
  • Live mode: Real-time streaming transcription
  • Audio level visualization
  • Transcription metrics

Key SDK APIs:

  • RunAnywhere.loadSTTModel() – Load Whisper model
  • RunAnywhere.transcribe() – Batch transcription
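
The audio level visualization can be driven by a simple RMS computation over each microphone buffer. A dependency-free sketch (the app's actual capture path is not shown, and this helper is illustrative rather than taken from the app):

```swift
// Root-mean-square level of a buffer of normalized PCM samples in
// [-1, 1], clamped to 0...1 for driving a level meter or waveform bar.
func audioLevel(_ samples: [Float]) -> Float {
    guard !samples.isEmpty else { return 0 }
    let sumSquares = samples.reduce(Float(0)) { $0 + $1 * $1 }
    return min(1, (sumSquares / Float(samples.count)).squareRoot())
}
```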

3. Text-to-Speech Screen (TextToSpeechView.swift)

What it demonstrates:

  • Neural voice synthesis with Piper TTS
  • Speed and pitch controls
  • Audio playback with progress
  • Fun sample texts for testing

Key SDK APIs:

  • RunAnywhere.loadTTSModel() – Load TTS model
  • RunAnywhere.synthesize() – Generate speech audio

4. Voice Assistant Screen (VoiceAssistantView.swift)

What it demonstrates:

  • Complete voice AI pipeline
  • Automatic speech detection
  • Model status tracking for all 3 components (STT, LLM, TTS)
  • Push-to-talk and hands-free modes

Key SDK APIs:

  • Voice Pipeline API for STT → LLM → TTS orchestration
  • Component state management

5. Settings Screen (CombinedSettingsView.swift)

What it demonstrates:

  • Generation settings (temperature, max tokens)
  • Storage usage overview
  • Downloaded model management
  • Model deletion with confirmation
  • Cache clearing

Key SDK APIs:

  • RunAnywhere.storageInfo() – Get storage details
  • RunAnywhere.deleteModel() – Remove downloaded model

Testing

Run Unit Tests

```bash
xcodebuild test -project RunAnywhereAI.xcodeproj -scheme RunAnywhereAI -destination 'platform=iOS Simulator,name=iPhone 16 Pro'
```

Run UI Tests

```bash
xcodebuild test -project RunAnywhereAI.xcodeproj -scheme RunAnywhereAIUITests -destination 'platform=iOS Simulator,name=iPhone 16 Pro'
```

Debugging

Enable Verbose Logging

The app uses os.log for structured logging. Filter by subsystem in Console.app:

subsystem:com.runanywhere.RunAnywhereAI

Common Log Categories

| Category | Description |
|----------|-------------|
| RunAnywhereAIApp | SDK initialization, model registration |
| LLMViewModel | LLM generation, streaming |
| STTViewModel | Speech transcription |
| TTSViewModel | Speech synthesis |
| VoiceAgentViewModel | Voice pipeline |
| ModelListViewModel | Model downloads, loading |

Memory Profiling

  1. Open Xcode Instruments
  2. Select your app process
  3. Record memory allocations during model loading
  4. Expected: ~300MB-4GB depending on model size

Configuration

Build Configurations

| Configuration | Description |
|---------------|-------------|
| Debug | Development build with verbose logging |
| Release | Optimized build for distribution |

Environment Configuration

```swift
#if DEBUG
// Development mode - uses local backend, no API key needed
try RunAnywhere.initialize()
#else
// Production mode - requires API key and backend URL
try RunAnywhere.initialize(
    apiKey: "your_api_key",
    baseURL: "https://api.runanywhere.ai",
    environment: .production
)
#endif
```

Supported Models

LLM Models (LlamaCpp/GGUF)

| Model | Size | Memory | Description |
|-------|------|--------|-------------|
| SmolLM2 360M Q8_0 | ~400MB | 500MB | Fast, lightweight chat |
| Qwen 2.5 0.5B Q6_K | ~500MB | 600MB | Multilingual, efficient |
| LFM2 350M Q4_K_M | ~200MB | 250MB | LiquidAI, ultra-compact |
| LFM2 350M Q8_0 | ~400MB | 400MB | LiquidAI, higher quality |
| Llama 2 7B Chat Q4_K_M | ~4GB | 4GB | Powerful, larger model |
| Mistral 7B Instruct Q4_K_M | ~4GB | 4GB | High quality responses |

STT Models (ONNX/Whisper)

| Model | Size | Description |
|-------|------|-------------|
| Sherpa Whisper Tiny (EN) | ~75MB | English transcription |

TTS Models (ONNX/Piper)

| Model | Size | Description |
|-------|------|-------------|
| Piper US English (Medium) | ~65MB | Natural American voice |
| Piper British English (Medium) | ~65MB | British accent |

Known Limitations

  • Apple Silicon Recommended – Best performance on M1/M2/M3 chips and A-series processors
  • Memory Usage – Large models (7B+) require devices with 6GB+ RAM
  • First Load – Initial model loading takes 1-3 seconds (cached afterward)
  • Thermal Throttling – Extended inference may throttle performance on some devices

Xcode 16 Notes

If you encounter sandbox errors during build:

```bash
./scripts/fix_pods_sandbox.sh
```

For Swift macro issues:

```bash
defaults write com.apple.dt.Xcode IDESkipMacroFingerprintValidation -bool YES
```

Contributing

See CONTRIBUTING.md for guidelines.

Development Setup

```bash
# Fork and clone
git clone https://github.com/YOUR_USERNAME/runanywhere-sdks.git
cd runanywhere-sdks/examples/ios/RunAnywhereAI

# Open in Xcode
open RunAnywhereAI.xcodeproj

# Make changes and test
# Run tests in Xcode (⌘+U)

# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature

# Open Pull Request
```

License

This project is licensed under the Apache License 2.0 - see LICENSE for details.


Support