docs/examples/llm/vertex.ipynb
NOTE: The Vertex integration has largely been superseded by Google GenAI, which provides the same functionality through the google-genai package. See the Google GenAI page for the latest examples and documentation.
To install the Vertex AI integration, run the following:
%pip install llama-index-llms-vertex
from llama_index.llms.vertex import Vertex
from google.oauth2 import service_account
# Load service-account credentials from a key file and use them to
# authenticate the Vertex LLM against the associated GCP project.
filename = "vertex-407108-37495ce6c303.json"
credentials: service_account.Credentials = (
    service_account.Credentials.from_service_account_file(filename)
)
Vertex(
    model="text-bison", project=credentials.project_id, credentials=credentials
)
Basic call to the text-bison model
from llama_index.llms.vertex import Vertex
from llama_index.core.llms import ChatMessage, MessageRole
llm = Vertex(model="text-bison", temperature=0, additional_kwargs={})
llm.complete("Hello this is a sample text").text
(await llm.acomplete("hello")).text
list(llm.stream_complete("hello"))[-1].text
chat = Vertex(model="chat-bison")
messages = [
ChatMessage(role=MessageRole.SYSTEM, content="Reply everything in french"),
ChatMessage(role=MessageRole.USER, content="Hello"),
]
chat.chat(messages=messages).message.content
(await chat.achat(messages=messages)).message.content
list(chat.stream_chat(messages=messages))[-1].message.content
Calling Google Gemini models through Vertex AI is fully supported.
llm = Vertex(
model="gemini-pro",
project=credentials.project_id,
credentials=credentials,
context_window=100000,
)
llm.complete("Hello Gemini").text
Vision-capable Gemini models now accept TextBlock and ImageBlock for structured multi-modal input, replacing the older dictionary-based content format. Use blocks to combine text with images supplied as file paths or URLs.
Example with Image Path:
from llama_index.llms.vertex import Vertex
from llama_index.core.llms import ChatMessage, TextBlock, ImageBlock
history = [
ChatMessage(
role="user",
blocks=[
ImageBlock(path="sample.jpg"),
TextBlock(text="What is in this image?"),
],
),
]
llm = Vertex(
model="gemini-1.5-flash",
project=credentials.project_id,
credentials=credentials,
context_window=100000,
)
print(llm.chat(history).message.content)
Example with Image URL:
from llama_index.llms.vertex import Vertex
from llama_index.core.llms import ChatMessage, TextBlock, ImageBlock
history = [
ChatMessage(
role="user",
blocks=[
ImageBlock(
url="https://upload.wikimedia.org/wikipedia/commons/7/71/Sibirischer_tiger_de_edit02.jpg"
),
TextBlock(text="What is in this image?"),
],
),
]
llm = Vertex(
model="gemini-1.5-flash",
project=credentials.project_id,
credentials=credentials,
context_window=100000,
)
print(llm.chat(history).message.content)