docs/enterprise-solutions/configuration/remote-configuration/overview.mdx
Remote Provider Configuration allows administrators to centrally configure inference providers for their entire organization through the Cline-hosted admin console. This approach ensures consistent provider access, security policies, and cost management across all team members without requiring individual developer setup or infrastructure deployment.
Remote configuration operates through Cline's hosted service at app.cline.bot, where administrators can:
<CardGroup cols={2}> <Card title="Centralized Setup" icon="gear"> Configure providers once for the entire organization through the web-based admin console. </Card> <Card title="Automatic Enforcement" icon="shield-check"> Team members automatically receive the configured provider settings when signed into their organization. </Card> <Card title="Simplified Onboarding" icon="user-plus"> New team members get instant access to inference providers without complex individual configuration. </Card> <Card title="Consistent Experience" icon="users"> Ensure all team members use the same models, regions, and settings organization-wide. </Card> </CardGroup>

Cline supports remote configuration for the following inference providers:
| Provider | Use Case | Configuration | Member Setup |
|---|---|---|---|
| Cline | Organizations using Cline's native provider with centralized API key management | API provider selection, model access | No individual API keys needed — fully managed by organization |
| Amazon Bedrock | Organizations using AWS infrastructure | Region selection, VPC endpoints, cross-region inference, global inference, prompt caching | AWS credential configuration (API key, CLI profile, or credential chain) |
| Google Vertex AI | Organizations using Google Cloud Platform | Project ID, region selection, model access | Google Cloud credential configuration (service account, SDK, or ADC) |
| Azure Foundry | Organizations using Azure OpenAI or Azure AI services | Base URL, Azure API version, Azure identity authentication, custom headers | API key configuration in the extension |
| Anthropic | Organizations using the Anthropic API directly | Optional custom base URL for proxy deployments, model access | API key configuration in the extension |
| OpenAI Compatible | Organizations using any OpenAI-compatible endpoint (self-hosted, vLLM, custom proxies) | Base URL, custom headers, model access | API key configuration in the extension |
| LiteLLM | Organizations requiring multi-model access through a unified proxy | Proxy endpoint, authentication, model routing | API key or endpoint configuration (or centralized with Master Key) |
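For providers that require individual credentials, the member-side setup in the table above typically relies on the vendor's standard tooling rather than anything Cline-specific. As a hedged sketch (the profile name, base URL, and environment variable below are placeholders, not Cline defaults):

```shell
# Amazon Bedrock: create a named AWS CLI profile the extension can reference.
aws configure --profile my-org-bedrock   # prompts for access key, secret key, and region

# Google Vertex AI: establish Application Default Credentials (ADC).
gcloud auth application-default login

# OpenAI-compatible endpoints: a quick reachability check against the
# standard /v1/models route exposed by vLLM and most proxies.
curl -s https://llm-proxy.example.com/v1/models \
  -H "Authorization: Bearer $OPENAI_COMPAT_API_KEY"
```

Once the vendor-side credentials exist, they are entered in the Cline extension as described in the Member Setup column.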
The typical remote configuration process follows these steps:
<Steps> <Step title="Administrator Setup"> Access the Cline admin console and configure the desired inference provider with organization-wide settings. </Step> <Step title="Automatic Distribution"> Provider configuration is automatically distributed to all organization members signed into Cline. </Step> <Step title="Member Credential Setup"> Team members add their individual credentials (API keys, AWS profiles, etc.) to connect to the configured provider. For some providers, such as Cline and LiteLLM (with a Master Key), no individual credentials are needed. </Step> <Step title="Immediate Access"> Once credentials are configured, members can immediately start using the inference provider through Cline. </Step> </Steps>

To get started, select your provider below to begin the remote configuration process:
<CardGroup cols={3}> <Card title="Amazon Bedrock" icon="aws" href="/enterprise-solutions/configuration/remote-configuration/aws-bedrock/admin-configuration"> AWS-based AI models with enterprise security and compliance features. </Card> <Card title="Google Vertex AI" icon="google" href="/enterprise-solutions/configuration/remote-configuration/google-vertex/admin-configuration"> Google Cloud's AI platform with Gemini models and regional control. </Card> <Card title="OpenAI Compatible" icon="plug" href="/enterprise-solutions/configuration/remote-configuration/openai-compatible/admin-configuration"> Any OpenAI-compatible endpoint, including Azure Foundry. </Card> <Card title="Anthropic" icon="robot" href="/enterprise-solutions/configuration/remote-configuration/anthropic/admin-configuration"> Direct Anthropic API access with an optional custom base URL for proxy deployments. </Card> <Card title="LiteLLM" icon="layer-group" href="/enterprise-solutions/configuration/remote-configuration/litellm/admin-configuration"> Unified proxy for accessing 100+ AI models through a single interface. </Card> </CardGroup>