<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/cookbooks/codestral.ipynb" target="_parent">Open In Colab</a>
MistralAI released `codestral-latest`, a code model.

Codestral is a new code model from MistralAI tailored for code generation, fluent in over 80 programming languages. It simplifies coding tasks by completing functions, writing tests, and filling in partial code, reducing errors and improving developer efficiency. Codestral is served through a unified API endpoint, making it a versatile tool for software development.
This cookbook showcases how to use the codestral-latest model with llama-index. It guides you through using the Codestral fill-in-the-middle and instruct endpoints.
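If you haven't installed the dependencies yet, the LlamaIndex MistralAI integration (the package name matches the `llama_index.llms.mistralai` import used below) can be installed with pip:

```shell
pip install llama-index llama-index-llms-mistralai
```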
import os
os.environ["MISTRAL_API_KEY"] = "<YOUR MISTRAL API KEY>"
from llama_index.llms.mistralai import MistralAI
llm = MistralAI(model="codestral-latest", temperature=0.1)
from llama_index.core.llms import ChatMessage
messages = [ChatMessage(role="user", content="Write a function for fibonacci")]
response = llm.chat(messages)
print(response)
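The exact output varies from run to run, but a typical completion for this prompt resembles the following recursive sketch (shown here for illustration; it is not Codestral's verbatim reply):

```python
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number (0-indexed)."""
    if n < 0:
        raise ValueError("n must be non-negative")
    if n <= 1:
        return n
    # Naive recursion: fine for small n, exponential for large n.
    return fibonacci(n - 1) + fibonacci(n - 2)
```

For example, `fibonacci(10)` returns `55`.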
Note: the generated code is mostly accurate, but it reflects an older version of the LlamaIndex API.
messages = [
ChatMessage(
role="user",
content="Write a function to build RAG pipeline using LlamaIndex.",
)
]
response = llm.chat(messages)
print(response)
Codestral's fill-in-the-middle feature lets you set a starting point with a prompt and, optionally, an ending with a suffix and a stop sequence; the model then generates the code in between. This is ideal for tasks that require a specific, constrained piece of code.
prompt = "def multiply("
suffix = "return a*b"
response = llm.fill_in_middle(prompt, suffix)
print(
f"""
{prompt}
{response.text}
{suffix}
"""
)
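Stitching the prompt, the generated middle, and the suffix back together should produce a complete function along these lines (the middle shown here is a plausible completion, not the model's exact output):

```python
# Pieces as passed to fill_in_middle above; `middle` stands in for response.text.
prompt = "def multiply("
middle = "a, b):\n    "  # plausible model-generated middle (hypothetical)
suffix = "return a*b"

code = prompt + middle + suffix
namespace = {}
exec(code, namespace)  # define multiply() from the assembled source
result = namespace["multiply"](3, 4)
```

Here `result` is `12`, confirming the three fragments assemble into a working function.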
prompt = "def multiply(a,"
suffix = ""
stop = ["\n\n\n"]
response = llm.fill_in_middle(prompt, suffix, stop)
print(
f"""
{prompt}
{response.text}
{suffix}
"""
)