
# Connect to any LLM

docs/en/learn/llm-connections.mdx


## Connect CrewAI to LLMs

CrewAI connects to LLMs through native SDK integrations for the most popular providers (OpenAI, Anthropic, Google Gemini, Azure, and AWS Bedrock), and uses LiteLLM as a flexible fallback for all other providers.

<Note> By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set. You can easily configure your agents to use a different model or provider as described in this guide. </Note>
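As a minimal sketch of the default behavior described above (the replacement model name here is illustrative), the override is just an environment variable set before any agents are created:

```python
import os

# CrewAI reads OPENAI_MODEL_NAME when agents are created; if the variable
# is unset, it falls back to "gpt-4o-mini". "gpt-4o" below is illustrative.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o"

default_model = os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini")
print(default_model)
```

Set the variable early in your program (or in your shell) so the override is in place before any agent is constructed.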

## Supported Providers

LiteLLM supports a wide range of providers, including but not limited to:

- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- SambaNova
- Nebius AI Studio
- NVIDIA NIMs
- And many more!

For a complete and up-to-date list of supported providers, please refer to the LiteLLM Providers documentation.

<Info>
To use any provider not covered by a native integration, add LiteLLM as a dependency to your project:

```bash
uv add 'crewai[litellm]'
```

Native providers (OpenAI, Anthropic, Google Gemini, Azure, AWS Bedrock) use their own SDK extras — see the [Provider Configuration Examples](/en/concepts/llms#provider-configuration-examples).
</Info>

## Changing the LLM

To use a different LLM with your CrewAI agents, you have several options:

<Tabs>
<Tab title="Using a String Identifier">
Pass the model name as a string when initializing the agent:
    <CodeGroup>
        ```python Code
        from crewai import Agent

        # Using OpenAI's GPT-4
        openai_agent = Agent(
            role='OpenAI Expert',
            goal='Provide insights using GPT-4',
            backstory="An AI assistant powered by OpenAI's latest model.",
            llm='gpt-4'
        )

        # Using Anthropic's Claude
        claude_agent = Agent(
            role='Anthropic Expert',
            goal='Analyze data using Claude',
            backstory="An AI assistant leveraging Anthropic's language model.",
            llm='claude-2'
        )
        ```
    </CodeGroup>
</Tab>
<Tab title="Using the LLM Class">
For more detailed configuration, use the LLM class:
    <CodeGroup>
        ```python Code
        from crewai import Agent, LLM

        llm = LLM(
            model="gpt-4",
            temperature=0.7,
            base_url="https://api.openai.com/v1",
            api_key="your-api-key-here"
        )

        agent = Agent(
            role='Customized LLM Expert',
            goal='Provide tailored responses',
            backstory="An AI assistant with custom LLM settings.",
            llm=llm
        )
        ```
    </CodeGroup>
</Tab>
</Tabs>

## Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
| :-- | :-- | :-- |
| `model` | `str` | The name of the model to use (e.g., "gpt-4", "claude-2") |
| `temperature` | `float` | Controls randomness in output (0.0 to 1.0) |
| `max_tokens` | `int` | Maximum number of tokens to generate |
| `top_p` | `float` | Controls diversity of output (0.0 to 1.0) |
| `frequency_penalty` | `float` | Penalizes new tokens based on their frequency in the text so far |
| `presence_penalty` | `float` | Penalizes new tokens based on their presence in the text so far |
| `stop` | `str`, `List[str]` | Sequence(s) to stop generation |
| `base_url` | `str` | The base URL for the API endpoint |
| `api_key` | `str` | Your API key for authentication |

For a complete list of parameters and their descriptions, refer to the LLM class documentation.
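For illustration, several of these parameters might be combined like so — a sketch only, with arbitrary values; the commented lines assume the `crewai` package is installed:

```python
# Hypothetical settings drawn from the parameter table above.
llm_kwargs = {
    "model": "gpt-4",          # model identifier
    "temperature": 0.3,        # lower randomness for analytical tasks
    "max_tokens": 1024,        # cap on generated tokens
    "top_p": 0.9,              # nucleus-sampling cutoff
    "frequency_penalty": 0.1,  # discourage verbatim repetition
    "stop": ["Observation:"],  # stop sequence(s)
}

# With crewai installed, unpack the settings into the LLM class:
#   from crewai import LLM
#   llm = LLM(api_key="your-api-key", **llm_kwargs)
print(sorted(llm_kwargs))
```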

## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

<Tabs>
<Tab title="Using Environment Variables">
    <CodeGroup>
    ```python Generic
    import os

    os.environ["OPENAI_API_KEY"] = "your-api-key"
    os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
    os.environ["OPENAI_MODEL_NAME"] = "your-model-name"
    ```

    ```python Google
    import os

    # Example using Gemini's OpenAI-compatible API.
    os.environ["OPENAI_API_KEY"] = "your-gemini-key"  # Should start with AIza...
    os.environ["OPENAI_API_BASE"] = "https://generativelanguage.googleapis.com/v1beta/openai/"
    os.environ["OPENAI_MODEL_NAME"] = "openai/gemini-2.0-flash"  # Add your Gemini model here, under openai/
    ```
    </CodeGroup>
</Tab>
<Tab title="Using LLM Class Attributes">
    <CodeGroup>
        ```python Generic
        llm = LLM(
            model="custom-model-name",
            api_key="your-api-key",
            base_url="https://api.your-provider.com/v1"
        )
        agent = Agent(llm=llm, ...)
        ```

        ```python Google
        # Example using Gemini's OpenAI-compatible API
        llm = LLM(
            model="openai/gemini-2.0-flash",
            base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
            api_key="your-gemini-key",  # Should start with AIza...
        )
        agent = Agent(llm=llm, ...)
        ```
    </CodeGroup>
</Tab>
</Tabs>

## Using Local Models with Ollama

For local models like those provided by Ollama:

<Steps>
<Step title="Download and install Ollama">
    [Click here to download and install Ollama](https://ollama.com/download)
</Step>
<Step title="Pull the desired model">
    For example, run `ollama pull llama3.2` to download the model.
</Step>
<Step title="Configure your agent">
    <CodeGroup>
        ```python Code
        agent = Agent(
            role='Local AI Expert',
            goal='Process information using a local model',
            backstory="An AI assistant running on local hardware.",
            llm=LLM(model="ollama/llama3.2", base_url="http://localhost:11434")
        )
        ```
    </CodeGroup>
</Step>
</Steps>

## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python Code
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

## Conclusion

Through its native SDK integrations, with LiteLLM as a fallback, CrewAI offers seamless access to a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the LiteLLM documentation for the most up-to-date information on supported models and configuration options.