LlamaIndex Llms Integration: MiniMax

llama-index-integrations/llms/llama-index-llms-minimax/README.md


This is the MiniMax integration for LlamaIndex. Visit MiniMax for information on how to get an API key and which models are supported.

Installation

```bash
pip install llama-index-llms-minimax
```

Usage

```python
from llama_index.llms.minimax import MiniMax

llm = MiniMax(model="MiniMax-M2.7", api_key="your-api-key")

response = llm.complete("Explain the importance of low latency LLMs")
print(response)
```
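Beyond `complete`, LlamaIndex LLM integrations conventionally expose a chat interface as well. A minimal sketch, assuming the standard `ChatMessage`-based API and guarded so it only contacts the service when `MINIMAX_API_KEY` is actually set:

```python
import os

# Build a chat-style message list; the live call below is guarded so the
# snippet is safe to run without credentials.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain the importance of low latency LLMs"},
]

if os.environ.get("MINIMAX_API_KEY"):
    from llama_index.core.llms import ChatMessage
    from llama_index.llms.minimax import MiniMax

    llm = MiniMax(model="MiniMax-M2.7")
    response = llm.chat([ChatMessage(**m) for m in messages])
    print(response)
else:
    print("MINIMAX_API_KEY not set; skipping live call")
```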

Available Models

| Model | Description |
| --- | --- |
| MiniMax-M2.7 | Latest flagship model with enhanced reasoning and coding |
| MiniMax-M2.7-highspeed | High-speed version of M2.7 for low-latency scenarios |
| MiniMax-M2.5 | Peak performance and value for complex tasks |
| MiniMax-M2.5-highspeed | Same performance as M2.5, faster and more agile |

All of these models support a 204,800-token context window.

Environment Variables

You can set the MINIMAX_API_KEY environment variable instead of passing api_key directly:

```bash
export MINIMAX_API_KEY="your-api-key"
```

```python
from llama_index.llms.minimax import MiniMax

llm = MiniMax(model="MiniMax-M2.7")
```
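The expected precedence (an explicit `api_key` argument winning over the environment variable) can be sketched with a small helper. `resolve_api_key` is purely illustrative, not part of the integration's API:

```python
import os

# Hypothetical helper mirroring the usual key-resolution order:
# an explicit api_key argument takes precedence over MINIMAX_API_KEY.
def resolve_api_key(api_key=None):
    key = api_key or os.environ.get("MINIMAX_API_KEY")
    if not key:
        raise ValueError("Set MINIMAX_API_KEY or pass api_key explicitly")
    return key

os.environ["MINIMAX_API_KEY"] = "demo-key"
print(resolve_api_key())  # -> demo-key
```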

Custom Base URL

For users in mainland China, use the domestic API endpoint:

```python
llm = MiniMax(
    model="MiniMax-M2.7",
    api_base="https://api.minimaxi.com/v1",
)
```