
Project: /docs/reference/js/_project.yaml Book: /docs/reference/_book.yaml page_type: reference

{% comment %} DO NOT EDIT THIS FILE! This is generated by the JS SDK team, and any local changes will be overwritten. Changes should be made in the source code at https://github.com/firebase/firebase-js-sdk {% endcomment %}

ai package

The Firebase AI Web SDK.

Functions

| Function | Description |
| --- | --- |
| <b>function(app, ...)</b> |  |
| getAI(app, options) | Returns the default AI instance that is associated with the provided FirebaseApp. If no instance exists, initializes a new instance with the default settings. |
| <b>function(ai, ...)</b> |  |
| getGenerativeModel(ai, modelParams, requestOptions) | Returns a GenerativeModel class with methods for inference and other functionality. |
| getImagenModel(ai, modelParams, requestOptions) | Returns an ImagenModel class with methods for using Imagen. Only Imagen 3 models (named <code>imagen-3.0-*</code>) are supported. |
| getLiveGenerativeModel(ai, modelParams) | <b><i>(Public Preview)</i></b> Returns a LiveGenerativeModel class for real-time, bidirectional communication. The Live API is only supported in modern browser windows and Node >= 22. |
| getTemplateGenerativeModel(ai, requestOptions) | <b><i>(Public Preview)</i></b> Returns a TemplateGenerativeModel class for executing server-side templates. |
| getTemplateImagenModel(ai, requestOptions) | Returns a TemplateImagenModel class for executing server-side Imagen templates. |
| <b>function(liveSession, ...)</b> |  |
| startAudioConversation(liveSession, options) | <b><i>(Public Preview)</i></b> Starts a real-time, bidirectional audio conversation with the model. This helper function manages the complexities of microphone access, audio recording, playback, and interruptions. |

Classes

| Class | Description |
| --- | --- |
| AIError | Error class for the Firebase AI SDK. |
| AIModel | Base class for Firebase AI model APIs. Instances of this class are associated with a specific Firebase AI Backend and provide methods for interacting with the configured generative model. |
| AnyOfSchema | Schema class representing a value that can conform to any of the provided sub-schemas. This is useful when a field can accept multiple distinct types or structures. |
| ArraySchema | Schema class for "array" types. The <code>items</code> param should refer to the type of item that can be a member of the array. |
| Backend | Abstract base class representing the configuration for an AI service backend. This class should not be instantiated directly. Use its subclasses: GoogleAIBackend for the Gemini Developer API (via Google AI), and VertexAIBackend for the Vertex AI Gemini API. |
| BooleanSchema | Schema class for "boolean" types. |
| ChatSession | ChatSession class that enables sending chat messages and stores the history of sent and received messages so far. |
| ChatSessionBase | Base class for the various <code>ChatSession</code> classes; it enables sending chat messages and stores the history of sent and received messages so far. |
| GenerativeModel | Class for generative model APIs. |
| GoogleAIBackend | Configuration class for the Gemini Developer API. Use this with AIOptions when initializing the AI service via getAI() to specify the Gemini Developer API as the backend. |
| ImagenImageFormat | Defines the image format for images generated by Imagen. Use this class to specify the desired format (JPEG or PNG) and compression quality for images generated by Imagen. This is typically included as part of ImagenModelParams. |
| ImagenModel | Class for Imagen model APIs. This class provides methods for generating images using the Imagen model. |
| IntegerSchema | Schema class for "integer" types. |
| LiveGenerativeModel | <b><i>(Public Preview)</i></b> Class for Live generative model APIs. The Live API enables low-latency, two-way multimodal interactions with Gemini. This class should only be instantiated with getLiveGenerativeModel(). |
| LiveSession | <b><i>(Public Preview)</i></b> Represents an active, real-time, bidirectional conversation with the model. This class should only be instantiated by calling LiveGenerativeModel.connect(). |
| NumberSchema | Schema class for "number" types. |
| ObjectSchema | Schema class for "object" types. The <code>properties</code> param must be a map of <code>Schema</code> objects. |
| Schema | Parent class encompassing all Schema types, with static methods that allow building specific Schema types. This class can be converted with <code>JSON.stringify()</code> into a JSON string accepted by Vertex AI REST endpoints. (This string conversion is automatically done when calling SDK methods.) |
| StringSchema | Schema class for "string" types. Can be used with or without enum values. |
| TemplateGenerativeModel | <b><i>(Public Preview)</i></b> GenerativeModel APIs that execute on a server-side template. This class should only be instantiated with getTemplateGenerativeModel(). |
| TemplateImagenModel | Class for Imagen model APIs that execute on a server-side template. This class should only be instantiated with getTemplateImagenModel(). |
| VertexAIBackend | Configuration class for the Vertex AI Gemini API. Use this with AIOptions when initializing the AI service via getAI() to specify the Vertex AI Gemini API as the backend. |

Interfaces

| Interface | Description |
| --- | --- |
| AI | An instance of the Firebase AI SDK. Do not create this instance directly. Instead, use getAI(). |
| AIOptions | Options for initializing the AI service using getAI(). This allows specifying which backend to use (Vertex AI Gemini API or Gemini Developer API) and configuring its specific options (like location for Vertex AI). |
| AudioConversationController | <b><i>(Public Preview)</i></b> A controller for managing an active audio conversation. |
| AudioTranscriptionConfig | The audio transcription configuration. |
| BaseParams | Base parameters for a number of methods. |
| ChromeAdapter | <b><i>(Public Preview)</i></b> Defines an inference "backend" that uses Chrome's on-device model, and encapsulates logic for detecting when on-device inference is possible. These methods should not be called directly by the user. |
| Citation | A single citation. |
| CitationMetadata | Citation metadata that may be found on a GenerateContentCandidate. |
| CodeExecutionResult | The results of code execution run by the model. |
| CodeExecutionResultPart | Represents the code execution result from the model. |
| CodeExecutionTool | A tool that enables the model to use code execution. |
| Content | Content type for both prompts and response candidates. |
| CountTokensRequest | Params for calling GenerativeModel.countTokens(). |
| CountTokensResponse | Response from calling GenerativeModel.countTokens(). |
| CustomErrorData | Details object that contains data originating from a bad HTTP response. |
| Date_2 | Protobuf <code>google.type.Date</code>. |
| EnhancedGenerateContentResponse | Response object wrapped with helper methods. |
| ErrorDetails | Details object that may be included in an error response. |
| ExecutableCode | An interface for executable code returned by the model. |
| ExecutableCodePart | Represents the code that is executed by the model. |
| FileData | Data pointing to a file uploaded on Google Cloud Storage. |
| FileDataPart | Content part interface if the part represents FileData. |
| FunctionCall | A predicted FunctionCall returned from the model that contains a string representing the FunctionDeclaration.name and a structured JSON object containing the parameters and their values. |
| FunctionCallingConfig |  |
| FunctionCallPart | Content part interface if the part represents a FunctionCall. |
| FunctionDeclaration | Structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This <code>FunctionDeclaration</code> is a representation of a block of code that can be used as a Tool by the model and executed by the client. |
| FunctionDeclarationsTool | A <code>FunctionDeclarationsTool</code> is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. |
| FunctionResponse | The result output from a FunctionCall that contains a string representing the FunctionDeclaration.name and a structured JSON object containing any output from the function; it is used as context to the model. This should contain the result of a FunctionCall made based on model prediction. |
| FunctionResponsePart | Content part interface if the part represents FunctionResponse. |
| GenerateContentCandidate | A candidate returned as part of a GenerateContentResponse. |
| GenerateContentRequest | Request sent through GenerativeModel.generateContent(). |
| GenerateContentResponse | Individual response from GenerativeModel.generateContent() and GenerativeModel.generateContentStream(). <code>generateContentStream()</code> will return one in each chunk until the stream is done. |
| GenerateContentResult | Result object returned from a GenerativeModel.generateContent() call. |
| GenerateContentStreamResult | Result object returned from a GenerativeModel.generateContentStream() call. Iterate over <code>stream</code> to get chunks as they come in and/or use the <code>response</code> promise to get the aggregated response when the stream is done. |
| GenerationConfig | Config options for content-related requests. |
| GenerativeContentBlob | Interface for sending an image. |
| GoogleSearch | Specifies the Google Search configuration. |
| GoogleSearchTool | A tool that allows a Gemini model to connect to Google Search to access and incorporate up-to-date information from the web into its responses. Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: Gemini Developer API or Vertex AI Gemini API (see the Service Terms section within the Service Specific Terms). |
| GroundingChunk | Represents a chunk of retrieved data that supports a claim in the model's response. This is part of the grounding information provided when grounding is enabled. |
| GroundingMetadata | Metadata returned when grounding is enabled. Currently, only Grounding with Google Search is supported (see GoogleSearchTool). Important: If using Grounding with Google Search, you are required to comply with the "Grounding with Google Search" usage requirements for your chosen API provider: Gemini Developer API or Vertex AI Gemini API (see the Service Terms section within the Service Specific Terms). |
| GroundingSupport | Provides information about how a specific segment of the model's response is supported by the retrieved grounding chunks. |
| HybridParams | <b><i>(Public Preview)</i></b> Configures hybrid inference. |
| ImagenGCSImage | An image generated by Imagen, stored in a Cloud Storage for Firebase bucket. This feature is not available yet. |
| ImagenGenerationConfig | Configuration options for generating images with Imagen. See the documentation for more details. |
| ImagenGenerationResponse | The response from a request to generate images with Imagen. |
| ImagenInlineImage | An image generated by Imagen, represented as inline data. |
| ImagenModelParams | Parameters for configuring an ImagenModel. |
| ImagenSafetySettings | Settings for controlling the aggressiveness of filtering out sensitive content. See the documentation for more details. |
| InlineDataPart | Content part interface if the part represents an image. |
| LanguageModelCreateCoreOptions | <b><i>(Public Preview)</i></b> Configures the creation of an on-device language model session. |
| LanguageModelCreateOptions | <b><i>(Public Preview)</i></b> Configures the creation of an on-device language model session. |
| LanguageModelExpected | <b><i>(Public Preview)</i></b> Options for the expected inputs for an on-device language model. |
| LanguageModelMessage | <b><i>(Public Preview)</i></b> An on-device language model message. |
| LanguageModelMessageContent | <b><i>(Public Preview)</i></b> An on-device language model content object. |
| LanguageModelPromptOptions | <b><i>(Public Preview)</i></b> Options for an on-device language model prompt. |
| LiveGenerationConfig | <b><i>(Public Preview)</i></b> Configuration parameters used by LiveGenerativeModel to control live content generation. |
| LiveModelParams | <b><i>(Public Preview)</i></b> Params passed to getLiveGenerativeModel(). |
| LiveServerContent | <b><i>(Public Preview)</i></b> An incremental content update from the model. |
| LiveServerGoingAwayNotice | <b><i>(Public Preview)</i></b> Notification that the server will soon be unable to service the client. |
| LiveServerToolCall | <b><i>(Public Preview)</i></b> A request from the model for the client to execute one or more functions. |
| LiveServerToolCallCancellation | <b><i>(Public Preview)</i></b> Notification to cancel a previous function call triggered by LiveServerToolCall. |
| ModalityTokenCount | Represents token counting info for a single modality. |
| ModelParams | Params passed to getGenerativeModel(). |
| ObjectSchemaRequest | Interface for JSON parameters in a schema of SchemaType "object" when not using the <code>Schema.object()</code> helper. |
| OnDeviceParams | <b><i>(Public Preview)</i></b> Encapsulates configuration for on-device inference. |
| PrebuiltVoiceConfig | <b><i>(Public Preview)</i></b> Configuration for a pre-built voice. |
| PromptFeedback | If the prompt was blocked, this will be populated with <code>blockReason</code> and the relevant <code>safetyRatings</code>. |
| RequestOptions | Params passed to getGenerativeModel(). |
| RetrievedContextAttribution |  |
| SafetyRating | A safety rating associated with a GenerateContentCandidate. |
| SafetySetting | Safety setting that can be sent as part of request parameters. |
| SchemaInterface | Interface for the Schema class. |
| SchemaParams | Params passed to Schema static methods to create specific Schema classes. |
| SchemaRequest | Final format for Schema params passed to backend requests. |
| SchemaShared | Basic Schema properties shared across several Schema-related types. |
| SearchEntrypoint | Google search entry point. |
| Segment | Represents a specific segment within a Content object, often used to pinpoint the exact location of text or data that grounding information refers to. |
| SingleRequestOptions | Options that can be provided per-request. Extends the base RequestOptions (like <code>timeout</code> and <code>baseUrl</code>) with request-specific controls like cancellation via <code>AbortSignal</code>. Options specified here will override any default RequestOptions configured on a model (for example, GenerativeModel). |
| SpeechConfig | <b><i>(Public Preview)</i></b> Configures speech synthesis. |
| StartAudioConversationOptions | <b><i>(Public Preview)</i></b> Options for startAudioConversation(). |
| StartChatParams | Params for GenerativeModel.startChat(). |
| StartTemplateChatParams | <b><i>(Public Preview)</i></b> Params for TemplateGenerativeModel.startChat(). |
| TemplateChatSession | <b><i>(Public Preview)</i></b> Interface representing a <code>ChatSession</code> class for use with server prompt templates; it enables sending chat messages and stores the history of sent and received messages so far. |
| TemplateFunctionDeclaration | <b><i>(Public Preview)</i></b> Structured representation of a template function declaration. Included in this declaration are the function name and parameters. This <code>TemplateFunctionDeclaration</code> is a representation of a block of code that can be used as a Tool by the model and executed by the client. Note: Template function declarations do not support description fields. |
| TemplateFunctionDeclarationsTool | <b><i>(Public Preview)</i></b> A piece of code that enables the system to interact with external systems. |
| TextPart | Content part interface if the part represents a text string. |
| ThinkingConfig | Configuration for the "thinking" behavior of compatible Gemini models. Certain models utilize a thinking process before generating a response. This allows them to reason through complex problems and plan a more coherent and accurate answer. |
| ToolConfig | Tool config. This config is shared for all tools provided in the request. |
| Transcription | <b><i>(Public Preview)</i></b> Transcription of audio. This can be returned from a LiveGenerativeModel if transcription is enabled with the <code>inputAudioTranscription</code> or <code>outputAudioTranscription</code> properties on the LiveGenerationConfig. |
| URLContext | Specifies the URL Context configuration. |
| URLContextMetadata | Metadata related to URLContextTool. |
| URLContextTool | A tool that allows you to provide additional context to the models in the form of public web URLs. By including URLs in your request, the Gemini model will access the content from those pages to inform and enhance its response. |
| URLMetadata | Metadata for a single URL retrieved by the URLContextTool. |
| UsageMetadata | Usage metadata about a GenerateContentResponse. |
| VideoMetadata | Describes the input video content. |
| VoiceConfig | <b><i>(Public Preview)</i></b> Configuration for the voice to be used in speech synthesis. |
| WebAttribution |  |
| WebGroundingChunk | A grounding chunk from the web. Important: If using Grounding with Google Search, you are required to comply with the Service Specific Terms for "Grounding with Google Search". |

Variables

| Variable | Description |
| --- | --- |
| AIErrorCode | Standardized error codes that AIError can have. |
| BackendType | An enum-like object containing constants that represent the supported backends for the Firebase AI SDK. This determines which backend service (Vertex AI Gemini API or Gemini Developer API) the SDK will communicate with. These values are assigned to the <code>backendType</code> property within the specific backend configuration objects (GoogleAIBackend or VertexAIBackend) to identify which service to target. |
| BlockReason | Reason that a prompt was blocked. |
| FinishReason | Reason that a candidate finished. |
| FunctionCallingMode |  |
| HarmBlockMethod | This property is not supported in the Gemini Developer API (GoogleAIBackend). |
| HarmBlockThreshold | Threshold above which a prompt or candidate will be blocked. |
| HarmCategory | Harm categories that would cause prompts or candidates to be blocked. |
| HarmProbability | Probability that a prompt or candidate matches a harm category. |
| HarmSeverity | Harm severity levels. |
| ImagenAspectRatio | Aspect ratios for Imagen images. To specify an aspect ratio for generated images, set the <code>aspectRatio</code> property in your ImagenGenerationConfig. See the documentation for more details and examples of the supported aspect ratios. |
| ImagenPersonFilterLevel | A filter level controlling whether generation of images containing people or faces is allowed. See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details. |
| ImagenSafetyFilterLevel | A filter level controlling how aggressively to filter sensitive content. Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, <code>violence</code>, <code>sexual</code>, <code>derogatory</code>, and <code>toxic</code>). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details. |
| InferenceMode | <b><i>(Public Preview)</i></b> Determines whether inference happens on-device or in-cloud. |
| InferenceSource | <b><i>(Public Preview)</i></b> Indicates whether inference happened on-device or in-cloud. |
| Language | The programming language of the code. |
| LiveResponseType | <b><i>(Public Preview)</i></b> The types of responses that can be returned by LiveSession.receive(). |
| Modality | Content part modality. |
| Outcome | Represents the result of the code execution. |
| POSSIBLE_ROLES | Possible roles. |
| ResponseModality | <b><i>(Public Preview)</i></b> Generation modalities to be returned in generation responses. |
| SchemaType | Contains the list of OpenAPI data types as defined by the OpenAPI specification. |
| ThinkingLevel | A preset that controls the model's "thinking" process. Use <code>ThinkingLevel.LOW</code> for faster responses on less complex tasks, and <code>ThinkingLevel.HIGH</code> for better reasoning on more complex tasks. |
| URLRetrievalStatus | The status of a URL retrieval. |

Type Aliases

| Type Alias | Description |
| --- | --- |
| AIErrorCode | Standardized error codes that AIError can have. |
| BackendType | Type alias representing valid backend types. It can be either <code>'VERTEX_AI'</code> or <code>'GOOGLE_AI'</code>. |
| BlockReason | Reason that a prompt was blocked. |
| FinishReason | Reason that a candidate finished. |
| FunctionCallingMode |  |
| HarmBlockMethod | This property is not supported in the Gemini Developer API (GoogleAIBackend). |
| HarmBlockThreshold | Threshold above which a prompt or candidate will be blocked. |
| HarmCategory | Harm categories that would cause prompts or candidates to be blocked. |
| HarmProbability | Probability that a prompt or candidate matches a harm category. |
| HarmSeverity | Harm severity levels. |
| ImagenAspectRatio | Aspect ratios for Imagen images. To specify an aspect ratio for generated images, set the <code>aspectRatio</code> property in your ImagenGenerationConfig. See the documentation for more details and examples of the supported aspect ratios. |
| ImagenPersonFilterLevel | A filter level controlling whether generation of images containing people or faces is allowed. See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details. |
| ImagenSafetyFilterLevel | A filter level controlling how aggressively to filter sensitive content. Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, <code>violence</code>, <code>sexual</code>, <code>derogatory</code>, and <code>toxic</code>). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details. |
| InferenceMode | <b><i>(Public Preview)</i></b> Determines whether inference happens on-device or in-cloud. |
| InferenceSource | <b><i>(Public Preview)</i></b> Indicates whether inference happened on-device or in-cloud. |
| Language | The programming language of the code. |
| LanguageModelMessageContentValue | <b><i>(Public Preview)</i></b> Content formats that can be provided as on-device message content. |
| LanguageModelMessageRole | <b><i>(Public Preview)</i></b> Allowable roles for on-device language model usage. |
| LanguageModelMessageType | <b><i>(Public Preview)</i></b> Allowable types for on-device language model messages. |
| LiveResponseType | <b><i>(Public Preview)</i></b> The types of responses that can be returned by LiveSession.receive(). This is a property on all messages that can be used for type narrowing. This property is not returned by the server; it is assigned to a server message object once it's parsed. |
| Modality | Content part modality. |
| Outcome | Represents the result of the code execution. |
| Part | Content part - includes text, image/video, or function call/response part types. |
| ResponseModality | <b><i>(Public Preview)</i></b> Generation modalities to be returned in generation responses. |
| Role | Role is the producer of the content. |
| SchemaType | Contains the list of OpenAPI data types as defined by the OpenAPI specification. |
| TemplateTool | <b><i>(Public Preview)</i></b> Defines a tool that a TemplateGenerativeModel can call to access external knowledge. Only function declarations are currently supported for templates. |
| ThinkingLevel | A preset that controls the model's "thinking" process. Use <code>ThinkingLevel.LOW</code> for faster responses on less complex tasks, and <code>ThinkingLevel.HIGH</code> for better reasoning on more complex tasks. |
| Tool | Defines a tool that the model can call to access external knowledge. |
| TypedSchema | A type that includes all specific Schema types. |
| URLRetrievalStatus | The status of a URL retrieval. |

function(app, ...)

getAI(app, options) {:#getai_a94a413}

Returns the default AI instance that is associated with the provided FirebaseApp. If no instance exists, initializes a new instance with the default settings.

<b>Signature:</b>

```typescript
export declare function getAI(app?: FirebaseApp, options?: AIOptions): AI;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| app | FirebaseApp | The FirebaseApp to use. |
| options | AIOptions | AIOptions that configure the AI instance. |

<b>Returns:</b>

AI

The default AI instance for the given FirebaseApp.

Example 1

```javascript
const ai = getAI(app);
```

Example 2

```javascript
// Get an AI instance configured to use the Gemini Developer API (via Google AI).
const ai = getAI(app, { backend: new GoogleAIBackend() });
```

Example 3

```javascript
// Get an AI instance configured to use the Vertex AI Gemini API.
const ai = getAI(app, { backend: new VertexAIBackend() });
```

function(ai, ...)

getGenerativeModel(ai, modelParams, requestOptions) {:#getgenerativemodel_c63f46a}

Returns a GenerativeModel class with methods for inference and other functionality.

<b>Signature:</b>

```typescript
export declare function getGenerativeModel(ai: AI, modelParams: ModelParams | HybridParams, requestOptions?: RequestOptions): GenerativeModel;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| ai | AI |  |
| modelParams | ModelParams \| HybridParams |  |
| requestOptions | RequestOptions |  |

<b>Returns:</b>

GenerativeModel
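A minimal usage sketch, assuming the modular <code>firebase/ai</code> imports and an already-initialized <code>app</code>; the model name is an assumption for illustration, so substitute any model your project has access to:

```javascript
// Sketch: obtain a GenerativeModel and run a one-shot prompt.
// Assumes `app` is an initialized FirebaseApp; the model name below
// is illustrative only.
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

const ai = getAI(app, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

const result = await model.generateContent("Explain token streaming in one sentence.");
console.log(result.response.text());
```

Pass a HybridParams object instead of ModelParams to opt into on-device inference where available.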

getImagenModel(ai, modelParams, requestOptions) {:#getimagenmodel_e1f6645}

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini image models (the "Nano Banana" models).

Returns an ImagenModel class with methods for using Imagen.

Only Imagen 3 models (named <code>imagen-3.0-*</code>) are supported.

<b>Signature:</b>

```typescript
export declare function getImagenModel(ai: AI, modelParams: ImagenModelParams, requestOptions?: RequestOptions): ImagenModel;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| ai | AI | An AI instance. |
| modelParams | ImagenModelParams | Parameters to use when making Imagen requests. |
| requestOptions | RequestOptions | Additional options to use when making requests. |

<b>Returns:</b>

ImagenModel

Exceptions

Thrown if the <code>apiKey</code> or <code>projectId</code> fields are missing in your Firebase config.

getLiveGenerativeModel(ai, modelParams) {:#getlivegenerativemodel_f2099ac}

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Returns a LiveGenerativeModel class for real-time, bidirectional communication.

The Live API is only supported in modern browser windows and Node >= 22.

<b>Signature:</b>

```typescript
export declare function getLiveGenerativeModel(ai: AI, modelParams: LiveModelParams): LiveGenerativeModel;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| ai | AI | An AI instance. |
| modelParams | LiveModelParams | Parameters to use when setting up a LiveSession. |

<b>Returns:</b>

LiveGenerativeModel

Exceptions

Thrown if the <code>apiKey</code> or <code>projectId</code> fields are missing in your Firebase config.
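A minimal connection sketch, assuming the modular <code>firebase/ai</code> imports and an initialized <code>app</code>; the model name is a placeholder assumption, so check the current Live API model list before using it:

```javascript
// Sketch (Public Preview): set up a LiveGenerativeModel and open a session.
// Assumes `app` is an initialized FirebaseApp; the model name is illustrative.
import { getAI, getLiveGenerativeModel } from "firebase/ai";

const ai = getAI(app);
const liveModel = getLiveGenerativeModel(ai, {
  model: "gemini-2.0-flash-live-preview-04-09", // placeholder model name
});

const session = await liveModel.connect();
// ... exchange messages over the session, then release the connection:
await session.close();
```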

getTemplateGenerativeModel(ai, requestOptions) {:#gettemplategenerativemodel_9476bbc}

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Returns a TemplateGenerativeModel class for executing server-side templates.

<b>Signature:</b>

```typescript
export declare function getTemplateGenerativeModel(ai: AI, requestOptions?: RequestOptions): TemplateGenerativeModel;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| ai | AI | An AI instance. |
| requestOptions | RequestOptions | Additional options to use when making requests. |

<b>Returns:</b>

TemplateGenerativeModel

getTemplateImagenModel(ai, requestOptions) {:#gettemplateimagenmodel_9476bbc}

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini image models (the "Nano Banana" models).

Returns a TemplateImagenModel class for executing server-side Imagen templates.

<b>Signature:</b>

```typescript
export declare function getTemplateImagenModel(ai: AI, requestOptions?: RequestOptions): TemplateImagenModel;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| ai | AI | An AI instance. |
| requestOptions | RequestOptions | Additional options to use when making requests. |

<b>Returns:</b>

TemplateImagenModel

function(liveSession, ...)

startAudioConversation(liveSession, options) {:#startaudioconversation_01c8e7f}

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Starts a real-time, bidirectional audio conversation with the model. This helper function manages the complexities of microphone access, audio recording, playback, and interruptions.

Important: This function must be called in response to a user gesture (for example, a button click) to comply with browser autoplay policies.

<b>Signature:</b>

```typescript
export declare function startAudioConversation(liveSession: LiveSession, options?: StartAudioConversationOptions): Promise<AudioConversationController>;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| liveSession | LiveSession | An active LiveSession instance. |
| options | StartAudioConversationOptions | Configuration options for the audio conversation. |

<b>Returns:</b>

Promise&lt;AudioConversationController&gt;

A Promise that resolves with an AudioConversationController.

Exceptions

AIError if the environment does not support required Web APIs (UNSUPPORTED), if a conversation is already active (REQUEST_ERROR), if the session is closed (SESSION_CLOSED), or if an unexpected initialization error occurs (ERROR).

DOMException Thrown by navigator.mediaDevices.getUserMedia() if issues occur with microphone access, such as permissions being denied (NotAllowedError) or no compatible hardware being found (NotFoundError). See the MDN documentation for a full list of exceptions.

Example

```javascript
const liveSession = await model.connect();
let conversationController;

// This function must be called from within a click handler.
async function startConversation() {
  try {
    conversationController = await startAudioConversation(liveSession);
  } catch (e) {
    // Handle AI-specific errors
    if (e instanceof AIError) {
      console.error("AI Error:", e.message);
    }
    // Handle microphone permission and hardware errors
    else if (e instanceof DOMException) {
      console.error("Microphone Error:", e.message);
    }
    // Handle other unexpected errors
    else {
      console.error("An unexpected error occurred:", e);
    }
  }
}

// Later, to stop the conversation:
// if (conversationController) {
//   await conversationController.stop();
// }
```

AIErrorCode

Standardized error codes that AIError can have.

<b>Signature:</b>

```typescript
AIErrorCode: {
    readonly ERROR: "error";
    readonly REQUEST_ERROR: "request-error";
    readonly RESPONSE_ERROR: "response-error";
    readonly FETCH_ERROR: "fetch-error";
    readonly SESSION_CLOSED: "session-closed";
    readonly INVALID_CONTENT: "invalid-content";
    readonly API_NOT_ENABLED: "api-not-enabled";
    readonly INVALID_SCHEMA: "invalid-schema";
    readonly NO_API_KEY: "no-api-key";
    readonly NO_APP_ID: "no-app-id";
    readonly NO_MODEL: "no-model";
    readonly NO_PROJECT_ID: "no-project-id";
    readonly PARSE_FAILED: "parse-failed";
    readonly UNSUPPORTED: "unsupported";
}
```
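Because the codes are plain string literals, application code can branch on an error's code directly. A self-contained sketch; the <code>isRetryable</code> helper is hypothetical and not part of the SDK, and the string literals come from the signature above:

```javascript
// Hypothetical helper: decide whether an AIErrorCode value is worth retrying.
// "fetch-error" and "response-error" are typically transient; the
// configuration codes cannot be fixed by retrying.
function isRetryable(code) {
  switch (code) {
    case "fetch-error":
    case "response-error":
      return true; // transient network/server issue
    case "no-api-key":
    case "no-project-id":
    case "api-not-enabled":
      return false; // configuration problem
    default:
      return false;
  }
}

console.log(isRetryable("fetch-error")); // → true
console.log(isRetryable("no-api-key")); // → false
```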

BackendType

An enum-like object containing constants that represent the supported backends for the Firebase AI SDK. This determines which backend service (Vertex AI Gemini API or Gemini Developer API) the SDK will communicate with.

These values are assigned to the <code>backendType</code> property within the specific backend configuration objects (GoogleAIBackend or VertexAIBackend) to identify which service to target.

<b>Signature:</b>

```typescript
BackendType: {
    readonly VERTEX_AI: "VERTEX_AI";
    readonly GOOGLE_AI: "GOOGLE_AI";
}
```

BlockReason

Reason that a prompt was blocked.

<b>Signature:</b>

typescript
BlockReason: {
    readonly SAFETY: "SAFETY";
    readonly OTHER: "OTHER";
    readonly BLOCKLIST: "BLOCKLIST";
    readonly PROHIBITED_CONTENT: "PROHIBITED_CONTENT";
}

FinishReason

Reason that a candidate finished.

<b>Signature:</b>

typescript
FinishReason: {
    readonly STOP: "STOP";
    readonly MAX_TOKENS: "MAX_TOKENS";
    readonly SAFETY: "SAFETY";
    readonly RECITATION: "RECITATION";
    readonly OTHER: "OTHER";
    readonly BLOCKLIST: "BLOCKLIST";
    readonly PROHIBITED_CONTENT: "PROHIBITED_CONTENT";
    readonly SPII: "SPII";
    readonly MALFORMED_FUNCTION_CALL: "MALFORMED_FUNCTION_CALL";
}

FunctionCallingMode

<b>Signature:</b>

typescript
FunctionCallingMode: {
    readonly AUTO: "AUTO";
    readonly ANY: "ANY";
    readonly NONE: "NONE";
}

HarmBlockMethod

This property is not supported in the Gemini Developer API (GoogleAIBackend<!-- -->).

<b>Signature:</b>

typescript
HarmBlockMethod: {
    readonly SEVERITY: "SEVERITY";
    readonly PROBABILITY: "PROBABILITY";
}

HarmBlockThreshold

Threshold above which a prompt or candidate will be blocked.

<b>Signature:</b>

typescript
HarmBlockThreshold: {
    readonly BLOCK_LOW_AND_ABOVE: "BLOCK_LOW_AND_ABOVE";
    readonly BLOCK_MEDIUM_AND_ABOVE: "BLOCK_MEDIUM_AND_ABOVE";
    readonly BLOCK_ONLY_HIGH: "BLOCK_ONLY_HIGH";
    readonly BLOCK_NONE: "BLOCK_NONE";
    readonly OFF: "OFF";
}

HarmCategory

Harm categories that would cause prompts or candidates to be blocked.

<b>Signature:</b>

typescript
HarmCategory: {
    readonly HARM_CATEGORY_HATE_SPEECH: "HARM_CATEGORY_HATE_SPEECH";
    readonly HARM_CATEGORY_SEXUALLY_EXPLICIT: "HARM_CATEGORY_SEXUALLY_EXPLICIT";
    readonly HARM_CATEGORY_HARASSMENT: "HARM_CATEGORY_HARASSMENT";
    readonly HARM_CATEGORY_DANGEROUS_CONTENT: "HARM_CATEGORY_DANGEROUS_CONTENT";
}

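`HarmBlockThreshold` and `HarmCategory` values are typically combined into a list of safety settings. The sketch below copies a subset of the values from the signatures above so it stays self-contained; the `{ category, threshold }` object shape is an assumption modeled on the SDK's SafetySetting interface, and in an app you would import the constants from the Firebase AI SDK:

```typescript
// Values copied from the HarmBlockThreshold and HarmCategory
// signatures in this reference (subset only).
const HarmBlockThreshold = {
  BLOCK_LOW_AND_ABOVE: "BLOCK_LOW_AND_ABOVE",
  BLOCK_ONLY_HIGH: "BLOCK_ONLY_HIGH",
} as const;

const HarmCategory = {
  HARM_CATEGORY_HARASSMENT: "HARM_CATEGORY_HARASSMENT",
  HARM_CATEGORY_HATE_SPEECH: "HARM_CATEGORY_HATE_SPEECH",
} as const;

// Assumed shape: one setting per category, each with its own threshold.
const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
];
```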
## HarmProbability

Probability that a prompt or candidate matches a harm category.

<b>Signature:</b>

```typescript
HarmProbability: {
    readonly NEGLIGIBLE: "NEGLIGIBLE";
    readonly LOW: "LOW";
    readonly MEDIUM: "MEDIUM";
    readonly HIGH: "HIGH";
}
```

## HarmSeverity

Harm severity levels.

<b>Signature:</b>

```typescript
HarmSeverity: {
    readonly HARM_SEVERITY_NEGLIGIBLE: "HARM_SEVERITY_NEGLIGIBLE";
    readonly HARM_SEVERITY_LOW: "HARM_SEVERITY_LOW";
    readonly HARM_SEVERITY_MEDIUM: "HARM_SEVERITY_MEDIUM";
    readonly HARM_SEVERITY_HIGH: "HARM_SEVERITY_HIGH";
    readonly HARM_SEVERITY_UNSUPPORTED: "HARM_SEVERITY_UNSUPPORTED";
}
```

## ImagenAspectRatio

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini Image models (the "Nano Banana" models).

Aspect ratios for Imagen images.

To specify an aspect ratio for generated images, set the aspectRatio property in your ImagenGenerationConfig.

See the documentation for more details and examples of the supported aspect ratios.

<b>Signature:</b>

```typescript
ImagenAspectRatio: {
    readonly SQUARE: "1:1";
    readonly LANDSCAPE_3x4: "3:4";
    readonly PORTRAIT_4x3: "4:3";
    readonly LANDSCAPE_16x9: "16:9";
    readonly PORTRAIT_9x16: "9:16";
}
```

## ImagenPersonFilterLevel

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini Image models (the "Nano Banana" models).

A filter level controlling whether generation of images containing people or faces is allowed.

See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details.

<b>Signature:</b>

```typescript
ImagenPersonFilterLevel: {
    readonly BLOCK_ALL: "dont_allow";
    readonly ALLOW_ADULT: "allow_adult";
    readonly ALLOW_ALL: "allow_all";
}
```

## ImagenSafetyFilterLevel

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini Image models (the "Nano Banana" models).

A filter level controlling how aggressively to filter sensitive content.

Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details.

<b>Signature:</b>

```typescript
ImagenSafetyFilterLevel: {
    readonly BLOCK_LOW_AND_ABOVE: "block_low_and_above";
    readonly BLOCK_MEDIUM_AND_ABOVE: "block_medium_and_above";
    readonly BLOCK_ONLY_HIGH: "block_only_high";
    readonly BLOCK_NONE: "block_none";
}
```

## InferenceMode

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Determines whether inference happens on-device or in-cloud.

- <b>PREFER_ON_DEVICE:</b> Attempt to make inference calls using an on-device model. If on-device inference is not available, the SDK will fall back to using a cloud-hosted model.
- <b>ONLY_ON_DEVICE:</b> Only attempt to make inference calls using an on-device model. The SDK will not fall back to a cloud-hosted model. If on-device inference is not available, inference methods will throw.
- <b>ONLY_IN_CLOUD:</b> Only attempt to make inference calls using a cloud-hosted model. The SDK will not fall back to an on-device model.
- <b>PREFER_IN_CLOUD:</b> Attempt to make inference calls to a cloud-hosted model. If not available, the SDK will fall back to an on-device model.

<b>Signature:</b>

```typescript
InferenceMode: {
    readonly PREFER_ON_DEVICE: "prefer_on_device";
    readonly ONLY_ON_DEVICE: "only_on_device";
    readonly ONLY_IN_CLOUD: "only_in_cloud";
    readonly PREFER_IN_CLOUD: "prefer_in_cloud";
}
```

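The fallback rules described for `InferenceMode` can be sketched as a plain decision function. This is an illustration only, not SDK code: the values are copied from the signature above, `resolveInference` is a hypothetical helper, and cloud availability is assumed for the `PREFER_IN_CLOUD` branch:

```typescript
// Values copied from the InferenceMode signature above.
const InferenceMode = {
  PREFER_ON_DEVICE: "prefer_on_device",
  ONLY_ON_DEVICE: "only_on_device",
  ONLY_IN_CLOUD: "only_in_cloud",
  PREFER_IN_CLOUD: "prefer_in_cloud",
} as const;

type InferenceMode = (typeof InferenceMode)[keyof typeof InferenceMode];

// Hypothetical helper mirroring the documented fallback behavior:
// returns where inference would run, or throws when the mode forbids
// the only available option.
function resolveInference(
  mode: InferenceMode,
  onDeviceAvailable: boolean
): "on_device" | "in_cloud" {
  switch (mode) {
    case InferenceMode.PREFER_ON_DEVICE:
      // Fall back to the cloud when on-device is unavailable.
      return onDeviceAvailable ? "on_device" : "in_cloud";
    case InferenceMode.ONLY_ON_DEVICE:
      if (!onDeviceAvailable) throw new Error("On-device inference unavailable");
      return "on_device";
    case InferenceMode.ONLY_IN_CLOUD:
      return "in_cloud";
    case InferenceMode.PREFER_IN_CLOUD:
      // Cloud availability is assumed in this sketch; the SDK would
      // fall back to on-device if the cloud path were unavailable.
      return "in_cloud";
  }
}
```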
## InferenceSource

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Indicates whether inference happened on-device or in-cloud.

<b>Signature:</b>

```typescript
InferenceSource: {
    readonly ON_DEVICE: "on_device";
    readonly IN_CLOUD: "in_cloud";
}
```

## Language

The programming language of the code.

<b>Signature:</b>

```typescript
Language: {
    UNSPECIFIED: string;
    PYTHON: string;
}
```

## LiveResponseType

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

The types of responses that can be returned by LiveSession.receive().

<b>Signature:</b>

```typescript
LiveResponseType: {
    SERVER_CONTENT: string;
    TOOL_CALL: string;
    TOOL_CALL_CANCELLATION: string;
    GOING_AWAY_NOTICE: string;
}
```

## Modality

Content part modality.

<b>Signature:</b>

```typescript
Modality: {
    readonly MODALITY_UNSPECIFIED: "MODALITY_UNSPECIFIED";
    readonly TEXT: "TEXT";
    readonly IMAGE: "IMAGE";
    readonly VIDEO: "VIDEO";
    readonly AUDIO: "AUDIO";
    readonly DOCUMENT: "DOCUMENT";
}
```

## Outcome

Represents the result of the code execution.

<b>Signature:</b>

```typescript
Outcome: {
    UNSPECIFIED: string;
    OK: string;
    FAILED: string;
    DEADLINE_EXCEEDED: string;
}
```

## POSSIBLE_ROLES

Possible roles.

<b>Signature:</b>

```typescript
POSSIBLE_ROLES: readonly ["user", "model", "function", "system"]
```

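`POSSIBLE_ROLES` is a readonly tuple, which is what lets the `Role` type alias be derived from it by indexing with `number`. A self-contained sketch of that pattern, with the tuple values copied from the signature above and a hypothetical `isRole` guard:

```typescript
// Mirrors the POSSIBLE_ROLES constant above; in an app, import
// POSSIBLE_ROLES from the Firebase AI SDK instead.
const POSSIBLE_ROLES = ["user", "model", "function", "system"] as const;

// Same pattern as the SDK's Role type alias: indexing a readonly
// tuple by number yields the union of its element types,
// i.e. "user" | "model" | "function" | "system".
type Role = (typeof POSSIBLE_ROLES)[number];

// Hypothetical runtime guard built from the same constant, so the
// type-level union and the runtime check cannot drift apart.
function isRole(value: string): value is Role {
  return (POSSIBLE_ROLES as readonly string[]).includes(value);
}
```

Deriving the type from the value keeps a single source of truth: adding a role to the tuple updates both the union type and the guard.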
## ResponseModality

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Generation modalities to be returned in generation responses.

<b>Signature:</b>

```typescript
ResponseModality: {
    readonly TEXT: "TEXT";
    readonly IMAGE: "IMAGE";
    readonly AUDIO: "AUDIO";
}
```

## SchemaType

Contains the list of OpenAPI data types as defined by the OpenAPI specification.

<b>Signature:</b>

```typescript
SchemaType: {
    readonly STRING: "string";
    readonly NUMBER: "number";
    readonly INTEGER: "integer";
    readonly BOOLEAN: "boolean";
    readonly ARRAY: "array";
    readonly OBJECT: "object";
}
```

## ThinkingLevel

A preset that controls the model's "thinking" process. Use ThinkingLevel.LOW for faster responses on less complex tasks, and ThinkingLevel.HIGH for better reasoning on more complex tasks.

<b>Signature:</b>

```typescript
ThinkingLevel: {
    MINIMAL: string;
    LOW: string;
    MEDIUM: string;
    HIGH: string;
}
```

## URLRetrievalStatus

The status of a URL retrieval.

- <b>URL_RETRIEVAL_STATUS_UNSPECIFIED:</b> Unspecified retrieval status.
- <b>URL_RETRIEVAL_STATUS_SUCCESS:</b> The URL retrieval was successful.
- <b>URL_RETRIEVAL_STATUS_ERROR:</b> The URL retrieval failed.
- <b>URL_RETRIEVAL_STATUS_PAYWALL:</b> The URL retrieval failed because the content is behind a paywall.
- <b>URL_RETRIEVAL_STATUS_UNSAFE:</b> The URL retrieval failed because the content is unsafe.

<b>Signature:</b>

```typescript
URLRetrievalStatus: {
    URL_RETRIEVAL_STATUS_UNSPECIFIED: string;
    URL_RETRIEVAL_STATUS_SUCCESS: string;
    URL_RETRIEVAL_STATUS_ERROR: string;
    URL_RETRIEVAL_STATUS_PAYWALL: string;
    URL_RETRIEVAL_STATUS_UNSAFE: string;
}
```

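When inspecting URL retrieval results, only a success status means the page content reached the model. A self-contained sketch, with the values copied from the `URLRetrievalStatus` signature above and a hypothetical `wasRetrieved` helper:

```typescript
// Values copied from the URLRetrievalStatus signature above (subset);
// in an app, import URLRetrievalStatus from the Firebase AI SDK.
const URLRetrievalStatus = {
  URL_RETRIEVAL_STATUS_SUCCESS: "URL_RETRIEVAL_STATUS_SUCCESS",
  URL_RETRIEVAL_STATUS_PAYWALL: "URL_RETRIEVAL_STATUS_PAYWALL",
  URL_RETRIEVAL_STATUS_UNSAFE: "URL_RETRIEVAL_STATUS_UNSAFE",
} as const;

// Hypothetical helper: only a SUCCESS status means the URL's content
// was actually fetched into the model's context; every other status
// is some flavor of failure.
function wasRetrieved(status: string): boolean {
  return status === URLRetrievalStatus.URL_RETRIEVAL_STATUS_SUCCESS;
}
```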
## AIErrorCode

Standardized error codes that AIError can have.

<b>Signature:</b>

```typescript
export type AIErrorCode = (typeof AIErrorCode)[keyof typeof AIErrorCode];
```

## BackendType

Type alias representing valid backend types. It can be either 'VERTEX_AI' or 'GOOGLE_AI'.

<b>Signature:</b>

```typescript
export type BackendType = (typeof BackendType)[keyof typeof BackendType];
```

## BlockReason

Reason that a prompt was blocked.

<b>Signature:</b>

```typescript
export type BlockReason = (typeof BlockReason)[keyof typeof BlockReason];
```

## FinishReason

Reason that a candidate finished.

<b>Signature:</b>

```typescript
export type FinishReason = (typeof FinishReason)[keyof typeof FinishReason];
```

## FunctionCallingMode

<b>Signature:</b>

```typescript
export type FunctionCallingMode = (typeof FunctionCallingMode)[keyof typeof FunctionCallingMode];
```

## HarmBlockMethod

This property is not supported in the Gemini Developer API (GoogleAIBackend).

<b>Signature:</b>

```typescript
export type HarmBlockMethod = (typeof HarmBlockMethod)[keyof typeof HarmBlockMethod];
```

## HarmBlockThreshold

Threshold above which a prompt or candidate will be blocked.

<b>Signature:</b>

```typescript
export type HarmBlockThreshold = (typeof HarmBlockThreshold)[keyof typeof HarmBlockThreshold];
```

## HarmCategory

Harm categories that would cause prompts or candidates to be blocked.

<b>Signature:</b>

```typescript
export type HarmCategory = (typeof HarmCategory)[keyof typeof HarmCategory];
```

## HarmProbability

Probability that a prompt or candidate matches a harm category.

<b>Signature:</b>

```typescript
export type HarmProbability = (typeof HarmProbability)[keyof typeof HarmProbability];
```

## HarmSeverity

Harm severity levels.

<b>Signature:</b>

```typescript
export type HarmSeverity = (typeof HarmSeverity)[keyof typeof HarmSeverity];
```

## ImagenAspectRatio

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini Image models (the "Nano Banana" models).

Aspect ratios for Imagen images.

To specify an aspect ratio for generated images, set the aspectRatio property in your ImagenGenerationConfig.

See the documentation for more details and examples of the supported aspect ratios.

<b>Signature:</b>

```typescript
export type ImagenAspectRatio = (typeof ImagenAspectRatio)[keyof typeof ImagenAspectRatio];
```

## ImagenPersonFilterLevel

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini Image models (the "Nano Banana" models).

A filter level controlling whether generation of images containing people or faces is allowed.

See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details.

<b>Signature:</b>

```typescript
export type ImagenPersonFilterLevel = (typeof ImagenPersonFilterLevel)[keyof typeof ImagenPersonFilterLevel];
```

## ImagenSafetyFilterLevel

Warning: This API is now obsolete.

All Imagen models are deprecated and will shut down as early as June 2026. As a replacement, you can migrate your apps to use Gemini Image models (the "Nano Banana" models).

A filter level controlling how aggressively to filter sensitive content.

Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, violence, sexual, derogatory, and toxic). This filter level controls how aggressively to filter out potentially harmful content from responses. See the documentation and the Responsible AI and usage guidelines for more details.

<b>Signature:</b>

```typescript
export type ImagenSafetyFilterLevel = (typeof ImagenSafetyFilterLevel)[keyof typeof ImagenSafetyFilterLevel];
```

## InferenceMode

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Determines whether inference happens on-device or in-cloud.

<b>Signature:</b>

```typescript
export type InferenceMode = (typeof InferenceMode)[keyof typeof InferenceMode];
```

## InferenceSource

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Indicates whether inference happened on-device or in-cloud.

<b>Signature:</b>

```typescript
export type InferenceSource = (typeof InferenceSource)[keyof typeof InferenceSource];
```

## Language

The programming language of the code.

<b>Signature:</b>

```typescript
export type Language = (typeof Language)[keyof typeof Language];
```

## LanguageModelMessageContentValue

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Content formats that can be provided as on-device message content.

<b>Signature:</b>

```typescript
export type LanguageModelMessageContentValue = ImageBitmapSource | AudioBuffer | BufferSource | string;
```

## LanguageModelMessageRole

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Allowable roles for on-device language model usage.

<b>Signature:</b>

```typescript
export type LanguageModelMessageRole = 'system' | 'user' | 'assistant';
```

## LanguageModelMessageType

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Allowable types for on-device language model messages.

<b>Signature:</b>

```typescript
export type LanguageModelMessageType = 'text' | 'image' | 'audio';
```

## LiveResponseType

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

The types of responses that can be returned by LiveSession.receive(). This is a property on all messages that can be used for type narrowing. This property is not returned by the server; it is assigned to a server message object once it's parsed.

<b>Signature:</b>

```typescript
export type LiveResponseType = (typeof LiveResponseType)[keyof typeof LiveResponseType];
```

## Modality

Content part modality.

<b>Signature:</b>

```typescript
export type Modality = (typeof Modality)[keyof typeof Modality];
```

## Outcome

Represents the result of the code execution.

<b>Signature:</b>

```typescript
export type Outcome = (typeof Outcome)[keyof typeof Outcome];
```

## Part

Content part - includes text, image/video, or function call/response part types.

<b>Signature:</b>

```typescript
export type Part = TextPart | InlineDataPart | FunctionCallPart | FunctionResponsePart | FileDataPart | ExecutableCodePart | CodeExecutionResultPart;
```

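Since `Part` is a union of distinct interfaces, code that consumes parts typically narrows by checking which property is present. The sketch below uses minimal stand-ins for two of the part shapes named in the union (the real TextPart and FunctionCallPart interfaces in the SDK carry more fields), and `summarize` is a hypothetical helper:

```typescript
// Minimal stand-ins for two Part shapes; import the real types from
// the Firebase AI SDK in application code.
interface TextPart {
  text: string;
}
interface FunctionCallPart {
  functionCall: { name: string; args: object };
}
type Part = TextPart | FunctionCallPart;

// Narrow by property presence, since the union members have no
// shared discriminant field.
function summarize(part: Part): string {
  if ("text" in part) {
    return `text: ${part.text}`;
  }
  return `call: ${part.functionCall.name}`;
}
```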
## ResponseModality

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Generation modalities to be returned in generation responses.

<b>Signature:</b>

```typescript
export type ResponseModality = (typeof ResponseModality)[keyof typeof ResponseModality];
```

## Role

Role is the producer of the content.

<b>Signature:</b>

```typescript
export type Role = (typeof POSSIBLE_ROLES)[number];
```

## SchemaType

Contains the list of OpenAPI data types as defined by the OpenAPI specification.

<b>Signature:</b>

```typescript
export type SchemaType = (typeof SchemaType)[keyof typeof SchemaType];
```

## TemplateTool

This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment.

Defines a tool that a TemplateGenerativeModel can call to access external knowledge. Only function declarations are currently supported for templates.

<b>Signature:</b>

```typescript
export type TemplateTool = TemplateFunctionDeclarationsTool;
```

## ThinkingLevel

A preset that controls the model's "thinking" process. Use ThinkingLevel.LOW for faster responses on less complex tasks, and ThinkingLevel.HIGH for better reasoning on more complex tasks.

<b>Signature:</b>

```typescript
export type ThinkingLevel = (typeof ThinkingLevel)[keyof typeof ThinkingLevel];
```

## Tool

Defines a tool that the model can call to access external knowledge.

<b>Signature:</b>

```typescript
export type Tool = FunctionDeclarationsTool | GoogleSearchTool | CodeExecutionTool | URLContextTool;
```

## TypedSchema

A type that includes all specific Schema types.

<b>Signature:</b>

```typescript
export type TypedSchema = IntegerSchema | NumberSchema | StringSchema | BooleanSchema | ObjectSchema | ArraySchema | AnyOfSchema;
```

## URLRetrievalStatus

The status of a URL retrieval.

- <b>URL_RETRIEVAL_STATUS_UNSPECIFIED:</b> Unspecified retrieval status.
- <b>URL_RETRIEVAL_STATUS_SUCCESS:</b> The URL retrieval was successful.
- <b>URL_RETRIEVAL_STATUS_ERROR:</b> The URL retrieval failed.
- <b>URL_RETRIEVAL_STATUS_PAYWALL:</b> The URL retrieval failed because the content is behind a paywall.
- <b>URL_RETRIEVAL_STATUS_UNSAFE:</b> The URL retrieval failed because the content is unsafe.

<b>Signature:</b>

```typescript
export type URLRetrievalStatus = (typeof URLRetrievalStatus)[keyof typeof URLRetrievalStatus];
```