`@react-native-ai/apple` is a community provider that brings Apple's on-device AI capabilities to React Native and Expo applications. It lets you run the AI SDK entirely on-device, using the Apple Intelligence foundation models (iOS 26+) for text generation, and Apple's native frameworks for embeddings, transcription, and speech synthesis.
The Apple provider is available in the `@react-native-ai/apple` module. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @react-native-ai/apple" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @react-native-ai/apple" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @react-native-ai/apple" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @react-native-ai/apple" dark />
  </Tab>
</Tabs>
Before using the Apple provider, make sure the target device meets the iOS version requirements for each feature (see the requirements table at the end of this page).
You can import the default provider instance `apple` from `@react-native-ai/apple`:
```ts
import { apple } from '@react-native-ai/apple';
```
Before using Apple AI features, you can check if they're available on the current device:
```ts
if (!apple.isAvailable()) {
  // Handle fallback logic for unsupported devices
}
```
Apple provides on-device language models through Apple Foundation Models, available on iOS 26+ devices with Apple Intelligence enabled.
Generate text using Apple's on-device language models:
```ts
import { apple } from '@react-native-ai/apple';
import { generateText } from 'ai';

const { text } = await generateText({
  model: apple(),
  prompt: 'Explain quantum computing in simple terms',
});
```
For real-time text generation:
```ts
import { apple } from '@react-native-ai/apple';
import { streamText } from 'ai';

const result = streamText({
  model: apple(),
  prompt: 'Write a short story about space exploration',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
Generate structured data using Zod schemas:
```ts
import { apple } from '@react-native-ai/apple';
import { generateText, Output } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: apple(),
  output: Output.object({
    schema: z.object({
      recipe: z.string(),
      ingredients: z.array(z.string()),
      cookingTime: z.string(),
    }),
  }),
  prompt: 'Create a recipe for chocolate chip cookies',
});
```
Configure generation parameters:
```ts
import { apple } from '@react-native-ai/apple';
import { generateText } from 'ai';

const { text } = await generateText({
  model: apple(),
  prompt: 'Generate creative content',
  temperature: 0.8, // Controls randomness (0-1)
  maxTokens: 150, // Maximum tokens to generate
  topP: 0.9, // Nucleus sampling threshold
  topK: 40, // Top-K sampling parameter
});
```
The Apple provider supports tool calling, where tools are executed by Apple Intelligence rather than the AI SDK. Tools must be pre-registered with the provider using `createAppleProvider` before they can be used in generation calls.
```ts
import { createAppleProvider } from '@react-native-ai/apple';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const getWeather = tool({
  description: 'Get current weather information',
  parameters: z.object({
    city: z.string().describe('The city name'),
  }),
  execute: async ({ city }) => {
    return `Weather in ${city}: Sunny, 25°C`;
  },
});

// Create a provider with all available tools
const apple = createAppleProvider({
  availableTools: {
    getWeather,
  },
});

// Use the provider with selected tools
const result = await generateText({
  model: apple(),
  prompt: 'What is the weather like in San Francisco?',
  tools: { getWeather },
});
```
Apple provides multilingual text embeddings using `NLContextualEmbedding`, available on iOS 17+.
```ts
import { apple } from '@react-native-ai/apple';
import { embed } from 'ai';

const { embedding } = await embed({
  model: apple.embeddingModel(),
  value: 'Hello world',
});
```
Apple provides speech-to-text transcription using `SpeechAnalyzer` and `SpeechTranscriber`, available on iOS 26+.
```ts
import { apple } from '@react-native-ai/apple';
import { experimental_transcribe } from 'ai';

const response = await experimental_transcribe({
  model: apple.transcriptionModel(),
  audio: audioBuffer,
});

console.log(response.text);
```
Apple provides text-to-speech synthesis using `AVSpeechSynthesizer`, available on iOS 13+ with enhanced features on iOS 17+.
Convert text to speech:
```ts
import { apple } from '@react-native-ai/apple';
import { experimental_generateSpeech } from 'ai';

const response = await experimental_generateSpeech({
  model: apple.speechModel(),
  text: 'Hello from Apple on-device speech!',
  language: 'en-US',
});
```
You can choose the voice used for speech synthesis by passing its identifier to the `voice` option:
```ts
import { apple } from '@react-native-ai/apple';
import { experimental_generateSpeech } from 'ai';

const response = await experimental_generateSpeech({
  model: apple.speechModel(),
  text: 'Custom voice example',
  voice: 'com.apple.ttsbundle.Samantha-compact',
});
```
To list the available voices, use the `getVoices` method:
```ts
import { AppleSpeech } from '@react-native-ai/apple';

const voices = await AppleSpeech.getVoices();
console.log(voices);
```
Different Apple AI features have varying iOS version requirements:
| Feature | Minimum iOS Version | Additional Requirements |
|---|---|---|
| Text Generation | iOS 26+ | Apple Intelligence enabled device |
| Text Embeddings | iOS 17+ | - |
| Audio Transcription | iOS 26+ | Language assets downloaded |
| Speech Synthesis | iOS 13+ | iOS 17+ for Personal Voice |
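The table above can be expressed as a small runtime gate when you need to decide which features to enable. The helper below is a hypothetical sketch (the names are illustrative and not part of the provider's API); for foundation-model availability specifically, prefer the provider's own `apple.isAvailable()` check:

```ts
type AppleAIFeature =
  | 'textGeneration'
  | 'embeddings'
  | 'transcription'
  | 'speechSynthesis';

// Minimum major iOS version per feature, mirroring the table above.
const MIN_IOS_VERSION: Record<AppleAIFeature, number> = {
  textGeneration: 26,
  embeddings: 17,
  transcription: 26,
  speechSynthesis: 13,
};

// Returns the features a device can support, given its major iOS version and
// whether Apple Intelligence is enabled (required only for text generation).
function supportedFeatures(
  iosMajorVersion: number,
  appleIntelligenceEnabled: boolean,
): AppleAIFeature[] {
  return (Object.keys(MIN_IOS_VERSION) as AppleAIFeature[]).filter((feature) => {
    if (iosMajorVersion < MIN_IOS_VERSION[feature]) return false;
    if (feature === 'textGeneration' && !appleIntelligenceEnabled) return false;
    return true;
  });
}
```

Note that transcription additionally requires the relevant language assets to be downloaded, which this version check alone cannot detect.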