As a team member, you can connect your local development environment to your organization's LiteLLM proxy. This guide walks you through configuring the connection in VS Code so you can use multiple AI models through your organization's unified proxy interface. Your administrator has already configured the provider settings; you just need to add your credentials to get started.
To successfully connect to your organization's LiteLLM proxy, you'll need a few things ready.
Cline extension installed and configured
The Cline extension must be installed in VS Code and you need to be signed into your organization account. If you haven't installed Cline yet, follow our installation guide.
Access credentials for your organization's LiteLLM proxy
You need credentials to access your organization's LiteLLM proxy. This might be an API key, or the proxy might be configured for open access within your network.
Once configured, the model selector may display "LiteLLM" or a specific model name, depending on how your administrator set up the proxy. Test the connection in Plan mode first to verify everything works correctly before using it for actual development tasks.
The models available through your LiteLLM proxy typically include:

- Text generation models
- Code-specific models
- Multimodal models

Choose models based on your development needs.
**LiteLLM not available as a provider option**

Confirm you're signed into the correct Cline organization. Verify your administrator has saved the LiteLLM configuration and that you have the latest version of the Cline extension.

**Connection errors or timeouts**

Verify your network can reach the LiteLLM proxy endpoint. Check with your IT team about firewall rules or VPN requirements, and ensure the proxy endpoint is accessible from your development environment.
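A quick TCP-level check can help distinguish a blocked endpoint from an authentication problem: if the connection itself fails, the issue is network or firewall related rather than your key. A minimal sketch, with a placeholder URL:

```python
import socket
from urllib.parse import urlparse

def endpoint_host_port(url: str) -> tuple[str, int]:
    """Extract host and port from a proxy URL, defaulting to the scheme's standard port."""
    parsed = urlparse(url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    return parsed.hostname, port

def can_reach(url: str, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the proxy endpoint succeeds within the timeout."""
    host, port = endpoint_host_port(url)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical endpoint; substitute your organization's proxy URL:
# can_reach("https://litellm.example.internal")
```

A `False` result here is worth raising with your IT team before debugging anything inside VS Code.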
**Authentication failures**

If using API key authentication, verify the key is correctly entered and hasn't expired. Contact your administrator to confirm your key is active and has the proper permissions.

**Models not loading or limited**

The available models depend on your organization's LiteLLM configuration. Contact your administrator if you need access to specific models or if expected models aren't available.

**Slow response times**

Response times depend on the models being used and the load on the proxy. Try switching to faster models for routine tasks, and contact your administrator if performance is consistently poor.

**Error messages from specific models**

Some models may be temporarily unavailable or have specific limitations. Try alternative models, or contact your administrator if particular models are consistently failing.
When working with your organization's LiteLLM proxy, keep in mind that your organization administrator controls which models are available and the usage policies that apply. The extension automatically displays available models based on your proxy configuration and access level.