# LlamaIndex LLMs Integration: AI21
## Installation

First, install the package using pip:

```shell
pip install llama-index-llms-ai21
```
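If you prefer not to hard-code the key, the integration can typically pick it up from the environment instead of the `api_key` argument. This assumes the standard `AI21_API_KEY` variable name; verify against your installed version:

```shell
export AI21_API_KEY="your_api_key"
```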
## Usage

Here's a basic example of how to use the `AI21` class to generate text completions and handle chat interactions. First, initialize the client with the desired model and your API key:

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
```
### Chat

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```
### Streaming Chat

Each streamed chunk carries the incremental text in `delta`:

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="")
```
### Text Completion

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```
### Streaming Completion

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

response = llm.stream_complete(prompt="What is the meaning of life?")
for chunk in response:
    print(chunk.delta, end="")
```
## Other Models

You can also use other model families, such as the Jurassic-2 models (`j2-ultra` and `j2-mid`). These models support only the `chat` and `complete` methods.
```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="j2-chat", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```
```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="j2-ultra", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```
## Tokenizer

The type of the tokenizer is determined by the name of the model:

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

tokenizer = llm.tokenizer
tokens = tokenizer.encode("What is the meaning of life?")
print(tokens)

text = tokenizer.decode(tokens)
print(text)
```
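A common use for the tokenizer is enforcing a token budget before sending a prompt. The sketch below shows the pattern with a stand-in whitespace tokenizer so it runs without API access; in practice, substitute `llm.tokenizer.encode` and `llm.tokenizer.decode` from the example above:

```python
# Stand-in whitespace tokenizer; replace encode/decode with
# llm.tokenizer.encode / llm.tokenizer.decode in real code.
def encode(text: str) -> list[str]:
    return text.split()


def decode(tokens: list[str]) -> str:
    return " ".join(tokens)


def truncate_to_budget(tokens: list[str], budget: int) -> list[str]:
    """Keep at most `budget` tokens, dropping the tail."""
    return tokens[:budget]


tokens = encode("What is the meaning of life?")
print(decode(truncate_to_budget(tokens, 4)))  # -> What is the meaning
```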
## Async Support

You can also use the async APIs:

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]

    response = await llm.achat(messages)
    print(response.message.content)
```
```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]

    response = await llm.astream_chat(messages)
    async for chunk in response:
        print(chunk.delta, end="")
```
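To drive these coroutines from a plain script (rather than a notebook, where top-level `await` works), wrap them with `asyncio.run`. A minimal sketch of the pattern, with a placeholder coroutine standing in for the real `llm.achat` call so it runs without API access:

```python
import asyncio


async def fake_achat(prompt: str) -> str:
    # Placeholder for the real `await llm.achat(messages)` call
    await asyncio.sleep(0)
    return f"echo: {prompt}"


async def main() -> str:
    response = await fake_achat("What is the meaning of life?")
    print(response)
    return response


asyncio.run(main())  # -> echo: What is the meaning of life?
```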
## Tool Calling

```python
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.ai21 import AI21


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


def subtract(a: int, b: int) -> int:
    """Subtract two integers and return the result."""
    return a - b


def divide(a: int, b: int) -> float:
    """Divide two integers and return the result."""
    return a / b


def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
subtract_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

agent = FunctionAgent(
    tools=[multiply_tool, add_tool, subtract_tool, divide_tool],
    llm=llm,
)

response = await agent.run(
    "My friend Moses had 10 apples. He ate 5 apples in the morning. Then he found a box with 25 apples. "
    "He divided all his apples between his 5 friends. How many apples did each friend get?"
)
print(str(response))
```