Codestral from MistralAI Cookbook

<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/cookbooks/codestral.ipynb" target="_parent">Open in Colab</a>

MistralAI released codestral-latest, a model specialized for code.

Codestral is a code model from MistralAI tailored for code generation, fluent in over 80 programming languages. It simplifies coding tasks by completing functions, writing tests, and filling in code snippets, enhancing developer efficiency and reducing errors. Codestral operates through a unified API endpoint, making it a versatile tool for software development.

This cookbook showcases how to use the codestral-latest model with llama-index. It guides you through using the Codestral fill-in-the-middle and instruct endpoints.

Setup LLM
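This cookbook assumes the MistralAI integration for LlamaIndex is available. If you are running in a fresh environment (e.g., Colab), you may first need to install the packages:

```shell
pip install llama-index llama-index-llms-mistralai
```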

python
import os

os.environ["MISTRAL_API_KEY"] = "<YOUR MISTRAL API KEY>"

from llama_index.llms.mistralai import MistralAI

llm = MistralAI(model="codestral-latest", temperature=0.1)

Instruct mode usage

Write a function for fibonacci

python
from llama_index.core.llms import ChatMessage

messages = [ChatMessage(role="user", content="Write a function for fibonacci")]

response = llm.chat(messages)

print(response)

Write a function to build a RAG pipeline using LlamaIndex.

Note: The output is mostly accurate, but it is based on an older version of the LlamaIndex package.

python
messages = [
    ChatMessage(
        role="user",
        content="Write a function to build a RAG pipeline using LlamaIndex.",
    )
]

response = llm.chat(messages)

print(response)
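The chat response typically wraps generated code in markdown fences. If you want just the raw code (for example, to write it to a file), a small helper can pull out the fenced blocks. This helper is not part of llama-index; `extract_code_blocks` is a hypothetical name, and it is a minimal sketch assuming the model uses standard triple-backtick fences:

```python
import re


def extract_code_blocks(text: str) -> list[str]:
    # Capture the body of each fenced block (``` ... ```),
    # with or without a language tag after the opening fence.
    pattern = r"```[\w+-]*\n(.*?)```"
    return [block.strip() for block in re.findall(pattern, text, re.DOTALL)]


# Stubbed response text standing in for response.message.content.
sample = "Here is the function:\n```python\ndef add(a, b):\n    return a + b\n```\nDone."
blocks = extract_code_blocks(sample)
print(blocks[0])
```

In practice you would pass `response.message.content` to the helper instead of the stubbed string.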

Fill-in-the-middle

This feature allows users to set a starting point with a prompt and an optional ending with a suffix and stop. The Codestral model then generates the intervening code, perfect for tasks requiring specific code generation.

Generate the middle of the code, given its beginning (prompt) and end (suffix).

python
prompt = "def multiply("
suffix = "return a*b"

response = llm.fill_in_middle(prompt, suffix)

print(
    f"""
{prompt}
{response.text}
{suffix}
"""
)
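The print above simply concatenates the three pieces. A small helper can make that assembly explicit and also check that the stitched result is syntactically valid Python. This is not part of llama-index; `assemble_and_check` is a hypothetical name, and the `middle` string below is a stub standing in for `response.text`:

```python
def assemble_and_check(prompt: str, middle: str, suffix: str) -> str:
    """Stitch the fill-in-the-middle pieces together and verify the result parses."""
    code = prompt + middle + suffix
    compile(code, "<fim>", "exec")  # raises SyntaxError if the result is malformed
    return code


# Stubbed middle, standing in for response.text from fill_in_middle.
middle = "a, b):\n    "
code = assemble_and_check("def multiply(", middle, "return a*b")
print(code)
```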

Generate the code given only its beginning, using stop tokens to end generation.

python
prompt = "def multiply(a,"
suffix = ""
stop = ["\n\n\n"]

response = llm.fill_in_middle(prompt, suffix, stop)

print(
    f"""
{prompt}
{response.text}
{suffix}
"""
)