# screenpipe-apple-intelligence
On-device AI processing for screenpipe using Apple's Foundation Models framework (macOS 26+).
Zero cloud. Zero privacy concerns. All processing happens locally on Apple Silicon.
This crate provides Rust bindings to Apple's Foundation Models framework via a Swift FFI bridge. Foundation Models is the on-device LLM that powers Apple Intelligence — available on macOS 26+ with Apple Silicon (M1+).
Screenpipe records everything on your screen and audio. Processing this data with AI to extract action items, summaries, and insights currently requires sending data to cloud APIs. Foundation Models lets us do this entirely on-device — your data never leaves your machine.
```
┌─────────────────────────────────────────────┐
│  Rust (screenpipe-apple-intelligence)       │
│  ├── engine.rs — Safe public API            │
│  └── ffi.rs — Raw FFI declarations          │
│        │                                    │
│        │ C FFI (@_cdecl)                    │
│        ▼                                    │
│  Swift (swift/bridge.swift)                 │
│  ├── Wraps LanguageModelSession             │
│  ├── Handles async → sync (semaphore)       │
│  └── Memory measurement (mach_task_info)    │
│        │                                    │
│        ▼                                    │
│  Apple Foundation Models Framework          │
│  └── On-device LLM (Apple Silicon NPU)      │
└─────────────────────────────────────────────┘
```
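The Rust layer keeps all pointer handling internal. Here is a minimal, self-contained sketch of the wrapper pattern used on the Rust side: the C-ABI functions below are stand-ins implemented in Rust purely for illustration (in the real crate they are `extern "C"` declarations in ffi.rs resolved by the Swift `@_cdecl` bridge, and the names here are assumptions, not the crate's actual symbols):

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Stand-in for the Swift `@_cdecl` symbol; it just echoes the prompt so the
// wrapper pattern can be demonstrated end to end without the Swift bridge.
extern "C" fn sp_generate_text(prompt: *const c_char) -> *mut c_char {
    let input = unsafe { CStr::from_ptr(prompt) }.to_string_lossy();
    CString::new(format!("echo: {input}")).unwrap().into_raw()
}

// Matching free function: memory crossing an FFI boundary must be released
// by the same allocator that produced it.
extern "C" fn sp_free_string(ptr: *mut c_char) {
    if !ptr.is_null() {
        unsafe { drop(CString::from_raw(ptr)) };
    }
}

// Safe wrapper in the style of engine.rs: owns the CString conversions,
// checks for null, and guarantees the returned buffer is freed exactly once.
fn generate_text_sketch(prompt: &str) -> Option<String> {
    let c_prompt = CString::new(prompt).ok()?;
    let raw = sp_generate_text(c_prompt.as_ptr());
    if raw.is_null() {
        return None;
    }
    let out = unsafe { CStr::from_ptr(raw) }.to_string_lossy().into_owned();
    sp_free_string(raw);
    Some(out)
}

fn main() {
    println!("{}", generate_text_sketch("hello").unwrap()); // echo: hello
}
```

The key invariant is that raw pointers never escape the safe wrapper: the caller sees only `&str` in and `String` out.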
Check whether the model is available before issuing requests:

```rust
use screenpipe_apple_intelligence::{check_availability, Availability};

match check_availability() {
    Availability::Available => println!("Ready!"),
    Availability::AppleIntelligenceNotEnabled => println!("Enable in System Settings"),
    Availability::DeviceNotEligible => println!("Need Apple Silicon"),
    Availability::ModelNotReady => println!("Model still downloading"),
    _ => {}
}
```
Generate text, optionally with a system prompt:

```rust
use screenpipe_apple_intelligence::generate_text;

let result = generate_text(
    Some("You extract action items from screen activity."),
    "User was in VS Code editing auth.rs, then Slack discussing deadline Friday...",
).unwrap();

println!("{}", result.text);
println!(
    "Time: {:.0}ms, Memory delta: {:.1}MB",
    result.metrics.total_time_ms,
    result.metrics.mem_delta_bytes as f64 / 1_048_576.0
);
```
Generate structured output constrained by a JSON schema:

```rust
use screenpipe_apple_intelligence::generate_json;

let schema = r#"{"type":"object","properties":{"items":{"type":"array","items":{"type":"string"}},"summary":{"type":"string"}}}"#;
let result = generate_json(Some("Extract todos."), "...", schema).unwrap();
println!("{}", result.json);
```
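The schema argument is a plain JSON string. If you build several such schemas, a small hypothetical helper (not part of this crate) can keep them readable; this sketch reproduces the schema above from two property lists:

```rust
// Hypothetical helper, not part of the crate: assembles a JSON Schema string
// for an object whose properties are either string arrays or plain strings.
fn object_schema(array_props: &[&str], string_props: &[&str]) -> String {
    let mut props: Vec<String> = Vec::new();
    for p in array_props {
        // "<p>": {"type":"array","items":{"type":"string"}}
        props.push(format!(
            r#""{p}":{{"type":"array","items":{{"type":"string"}}}}"#
        ));
    }
    for p in string_props {
        // "<p>": {"type":"string"}
        props.push(format!(r#""{p}":{{"type":"string"}}"#));
    }
    format!(r#"{{"type":"object","properties":{{{}}}}}"#, props.join(","))
}

fn main() {
    // Produces the same schema string as the generate_json example above.
    println!("{}", object_schema(&["items"], &["summary"]));
}
```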
With the `screenpipe-query` feature enabled, you can fetch recent data from the screenpipe HTTP API and process it with Foundation Models:

```rust
// Fetches recent data from the screenpipe HTTP API and processes it with Foundation Models
let result = query_screenpipe_with_ai(3030, "What did I work on today?", 6).await?;
```
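As a sketch of the first step only: building the query URL, assuming the local screenpipe server exposes a `/search` endpoint with `q` and `limit` parameters (endpoint and parameter names are assumptions; check your server's API docs). The real helper would fetch this URL and pass the results to `generate_text`:

```rust
// Builds a search URL against the local screenpipe HTTP API.
// Endpoint path and query parameter names are assumptions for illustration.
fn build_search_url(port: u16, query: &str, limit: u32) -> String {
    // Minimal percent-encoding for spaces only; a real implementation should
    // use a proper URL-encoding crate.
    let q = query.replace(' ', "%20");
    format!("http://localhost:{port}/search?q={q}&limit={limit}")
}

fn main() {
    println!("{}", build_search_url(3030, "auth refactor", 10));
    // http://localhost:3030/search?q=auth%20refactor&limit=10
}
```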
```bash
# All tests gracefully skip if Apple Intelligence is not available
cargo test -p screenpipe-apple-intelligence -- --nocapture
```
Run the benchmark test to get numbers for your machine:
```bash
cargo test -p screenpipe-apple-intelligence test_benchmark -- --nocapture
```
Expected metrics (when Apple Intelligence is available):
Possible future directions include custom adapters (`SystemLanguageModel.Adapter`) and built-in use cases such as content tagging (`SystemLanguageModel.UseCase.contentTagging`).