
Function Calling Mistral Agent

docs/examples/agent/mistral_agent.ipynb


Open in Colab: https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/agent/mistral_agent.ipynb


This notebook shows you how to use our Mistral agent, powered by function calling capabilities.

Initial Setup

Let's start by importing some simple building blocks.

The main things we need are:

  1. the Mistral AI API (using our own llama_index LLM class)
  2. a place to keep conversation history
  3. a definition for tools that our agent can use.

If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.

python
%pip install llama-index
%pip install llama-index-llms-mistralai
%pip install llama-index-embeddings-mistralai

Let's define some very simple calculator tools for our agent.

python
def multiply(a: int, b: int) -> int:
    """Multiple two integers and returns the result integer"""
    return a * b


def add(a: int, b: int) -> int:
    """Add two integers and returns the result integer"""
    return a + b
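
Note that FunctionAgent can take plain Python functions like these directly, deriving each tool's schema from the signature and docstring. If you want explicit control over the tool name and description the LLM sees, you can wrap the functions yourself; a minimal sketch using FunctionTool (the name and description strings here are just illustrative):

python
from llama_index.core.tools import FunctionTool

# explicit wrapping lets you override the name/description exposed to the LLM
multiply_tool = FunctionTool.from_defaults(
    fn=multiply,
    name="multiply",
    description="Multiply two integers and return the result.",
)
add_tool = FunctionTool.from_defaults(fn=add)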

Make sure your MISTRAL_API_KEY is set. Otherwise, explicitly pass the api_key parameter.

python
from llama_index.llms.mistralai import MistralAI

llm = MistralAI(model="mistral-large-latest", api_key="...")
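
If you'd rather not hard-code the key, a common alternative is to read it from the environment; a minimal sketch, assuming the key is stored in MISTRAL_API_KEY:

python
import os

from llama_index.llms.mistralai import MistralAI

# read the API key from the environment instead of embedding it in code
llm = MistralAI(
    model="mistral-large-latest",
    api_key=os.environ["MISTRAL_API_KEY"],
)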

Initialize Mistral Agent

Here we initialize a simple Mistral agent with calculator functions.

python
from llama_index.core.agent.workflow import FunctionAgent

agent = FunctionAgent(
    tools=[multiply, add],
    llm=llm,
)
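
FunctionAgent also accepts an optional system_prompt if you want to steer the agent's behavior; for example (the prompt text here is just illustrative):

python
agent = FunctionAgent(
    tools=[multiply, add],
    llm=llm,
    system_prompt="You are a helpful assistant that uses the provided tools for all arithmetic.",
)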

Chat

python
response = await agent.run("What is (121 + 2) * 5?")
print(str(response))
python
# inspect the tool calls the agent made
print(response.tool_calls)
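
Each entry in tool_calls records which tool the agent invoked and with what arguments. A minimal sketch of unpacking them, assuming each entry exposes tool_name and tool_kwargs attributes:

python
for tool_call in response.tool_calls:
    # e.g. "add called with {'a': 121, 'b': 2}"
    print(f"{tool_call.tool_name} called with {tool_call.tool_kwargs}")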

Managing Context/Memory

By default, .run() is stateless. If you want to maintain state, you can pass in a context object.

python
from llama_index.core.workflow import Context

ctx = Context(agent)

response = await agent.run("My name is John Doe", ctx=ctx)
response = await agent.run("What is my name?", ctx=ctx)

print(str(response))
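
If the conversation needs to survive a restart, the Context can be serialized to a dict and restored later; a sketch assuming the standard workflow JsonSerializer:

python
from llama_index.core.workflow import JsonSerializer

# serialize the conversation state
ctx_dict = ctx.to_dict(serializer=JsonSerializer())

# ... later, rebuild the context for the same agent
restored_ctx = Context.from_dict(agent, ctx_dict, serializer=JsonSerializer())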

Mistral Agent over RAG Pipeline

Here we build a Mistral agent over a simple 10-K document (Uber's 2021 annual filing). We use Mistral embeddings and mistral-medium to construct the RAG pipeline, then pass it to the Mistral agent as a tool.

python
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/10k/uber_2021.pdf' -O 'data/10k/uber_2021.pdf'
python
from llama_index.core.tools import QueryEngineTool
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.mistralai import MistralAIEmbedding
from llama_index.llms.mistralai import MistralAI

embed_model = MistralAIEmbedding(api_key="...")
query_llm = MistralAI(model="mistral-medium", api_key="...")

# load data
uber_docs = SimpleDirectoryReader(
    input_files=["./data/10k/uber_2021.pdf"]
).load_data()
# build index
uber_index = VectorStoreIndex.from_documents(
    uber_docs, embed_model=embed_model
)
uber_engine = uber_index.as_query_engine(similarity_top_k=3, llm=query_llm)
query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=uber_engine,
    name="uber_10k",
    description=(
        "Provides information about Uber financials for year 2021. "
        "Use a detailed plain text question as input to the tool."
    ),
)
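
Before handing the query engine to the agent, it can be worth sanity-checking the RAG pipeline on its own; the question below is just an illustrative smoke test:

python
# quick smoke test of the query engine by itself
print(uber_engine.query("What was Uber's revenue for 2021?"))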
python
from llama_index.core.agent.workflow import FunctionAgent

agent = FunctionAgent(tools=[query_engine_tool], llm=llm)
python
response = await agent.run(
    "Tell me both the risk factors and tailwinds for Uber? Do two parallel tool calls."
)
print(str(response))