# Watson X Integration for LocalGPT
This branch adds support for IBM Watson X AI with Granite models as an alternative to Ollama for running LocalGPT.
LocalGPT now supports two LLM backends: **Ollama** (local, the default) and **Watson X** (cloud).
## What's Changed

- Added a `WatsonXClient` class in `rag_system/utils/watsonx_client.py` that provides an Ollama-compatible interface for Watson X
- Updated `factory.py` and `main.py` to support backend switching via an environment variable
- Added the `ibm-watsonx-ai` SDK dependency to `requirements.txt`

## Configuration

To use Watson X with Granite models, you need Watson X credentials: an API key and a project ID.
Create a `.env` file or set these environment variables:
```bash
# Choose LLM backend (default: ollama)
LLM_BACKEND=watsonx

# Watson X configuration
WATSONX_API_KEY=your_api_key_here
WATSONX_PROJECT_ID=your_project_id_here
WATSONX_URL=https://us-south.ml.cloud.ibm.com

# Model configuration
WATSONX_GENERATION_MODEL=ibm/granite-13b-chat-v2
WATSONX_ENRICHMENT_MODEL=ibm/granite-8b-japanese
```
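If you are wiring this up in your own script rather than through `main.py`, here is a minimal sketch of loading the same configuration in Python. It assumes the third-party `python-dotenv` package, which is not part of LocalGPT's requirements:

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read key=value pairs from .env into os.environ

backend = os.getenv("LLM_BACKEND", "ollama")
watsonx_config = {
    "api_key": os.environ["WATSONX_API_KEY"],
    "project_id": os.environ["WATSONX_PROJECT_ID"],
    "url": os.getenv("WATSONX_URL", "https://us-south.ml.cloud.ibm.com"),
}
print(f"Using backend: {backend}")
```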
## Available Granite Models

Watson X offers several Granite models:
- `ibm/granite-13b-chat-v2` - General-purpose chat model
- `ibm/granite-13b-instruct-v2` - Instruction-following model
- `ibm/granite-20b-multilingual` - Multilingual support
- `ibm/granite-8b-japanese` - Lightweight Japanese model
- `ibm/granite-3b-code-instruct` - Code generation model

For a full list of available models, visit the Watson X documentation.
## Installation

Install the SDK (quote the requirement so the shell does not interpret `>=` as a redirect):

```bash
pip install "ibm-watsonx-ai>=1.3.39"
```
Or install all dependencies:
```bash
pip install -r rag_system/requirements.txt
```
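To confirm which SDK version actually got installed, a quick check using only the Python standard library:

```python
from importlib.metadata import version

# Should print 1.3.39 or newer.
print(version("ibm-watsonx-ai"))
```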
## Usage

Once configured, simply set the environment variable and run as normal:
```bash
export LLM_BACKEND=watsonx
python -m rag_system.main api
```
Or in Python:
```python
import os

os.environ['LLM_BACKEND'] = 'watsonx'

from rag_system.factory import get_agent

# Get agent with Watson X backend
agent = get_agent(mode="default")

# Use as normal
result = agent.run("What is artificial intelligence?")
print(result)
```
## Switching Backends

You can easily switch between Ollama and Watson X:
```bash
# Use Ollama (local)
export LLM_BACKEND=ollama
python -m rag_system.main api

# Use Watson X (cloud)
export LLM_BACKEND=watsonx
python -m rag_system.main api
```
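Under the hood, switching only requires the factory to branch on `LLM_BACKEND` when it builds its LLM client. The sketch below is a hypothetical reconstruction of that dispatch, not the shipped implementation; the `OllamaClient` import path and the `url` keyword are assumptions, so check `rag_system/factory.py` for the real logic:

```python
import os

from rag_system.utils.watsonx_client import WatsonXClient
# Assumed import path; check the repo for the actual OllamaClient location.
from rag_system.utils.ollama_client import OllamaClient


def make_llm_client():
    """Pick an LLM client based on the LLM_BACKEND environment variable."""
    backend = os.getenv("LLM_BACKEND", "ollama").lower()
    if backend == "watsonx":
        return WatsonXClient(
            api_key=os.environ["WATSONX_API_KEY"],
            project_id=os.environ["WATSONX_PROJECT_ID"],
            # The 'url' keyword is an assumption; the client may read WATSONX_URL itself.
            url=os.getenv("WATSONX_URL", "https://us-south.ml.cloud.ibm.com"),
        )
    return OllamaClient()  # default: local Ollama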
## Direct Client Usage

The Watson X client supports all the key features used by LocalGPT, and `WatsonXClient` provides the same interface as `OllamaClient`:
```python
from rag_system.utils.watsonx_client import WatsonXClient

client = WatsonXClient(
    api_key="your_api_key",
    project_id="your_project_id",
)

# Generate a completion
response = client.generate_completion(
    model="ibm/granite-13b-chat-v2",
    prompt="Explain quantum computing",
)
print(response['response'])

# Stream a completion
for chunk in client.stream_completion(
    model="ibm/granite-13b-chat-v2",
    prompt="Write a story about AI",
):
    print(chunk, end='', flush=True)
```
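Returning the completion under a `response` key mirrors the payload shape of Ollama's `/api/generate` endpoint, which is what lets the rest of LocalGPT treat the two backends interchangeably.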
## Notes and Limitations

- **Embedding models**: Watson X uses different embedding models than Ollama, so configure embedding models in `main.py` if needed.
- **Multimodal support**: Image support varies by model availability in Watson X; not all Granite models accept multimodal inputs.
- **Streaming**: Streaming support depends on the Watson X SDK version and may fall back to returning the full response at once.
- **Rate limits**: Watson X has API rate limits that local Ollama usage does not; monitor your usage accordingly.
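The streaming caveat can be handled at the call site. Below is a minimal sketch of a fallback wrapper using only the two client methods shown above; the helper itself is illustrative, not part of the shipped client:

```python
from typing import Iterator

from rag_system.utils.watsonx_client import WatsonXClient


def stream_or_fallback(client: WatsonXClient, model: str, prompt: str) -> Iterator[str]:
    """Yield streamed chunks, or one full completion if streaming is unavailable."""
    try:
        yield from client.stream_completion(model=model, prompt=prompt)
    except Exception:
        # Older SDK versions may not stream; return the whole answer as one chunk.
        response = client.generate_completion(model=model, prompt=prompt)
        yield response['response']
```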
## Troubleshooting

If you see authentication errors:

- Double-check `WATSONX_API_KEY` and `WATSONX_PROJECT_ID`.

If you get model-not-found errors:

- Verify the model ID exactly matches a model available in your project (e.g., `ibm/granite-13b-chat-v2`).

If you experience connection issues:

- Confirm `WATSONX_URL` points at the correct regional endpoint (e.g., `https://us-south.ml.cloud.ibm.com`).
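A quick way to isolate any of these problems is a one-off smoke test against the client itself; whichever step fails points at credentials, the model ID, or the network. A sketch, assuming the environment variables above are set:

```python
import os

from rag_system.utils.watsonx_client import WatsonXClient

client = WatsonXClient(
    api_key=os.environ["WATSONX_API_KEY"],
    project_id=os.environ["WATSONX_PROJECT_ID"],
)

# One tiny completion: auth, model access, and connectivity all get exercised here.
response = client.generate_completion(
    model="ibm/granite-13b-chat-v2",
    prompt="Say hello.",
)
print(response['response'])
```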
## Pricing

Unlike local Ollama, Watson X is a cloud service with usage-based pricing, so keep an eye on your token consumption.
To switch back to local Ollama:
```bash
unset LLM_BACKEND  # or set LLM_BACKEND=ollama
python -m rag_system.main api
```
## Support

- For Watson X specific issues, consult the Watson X documentation.
- For LocalGPT issues, refer to the main LocalGPT project.
## Contributing

If you find issues with the Watson X integration or want to add features, contributions are welcome.
## License

This integration follows the same license as LocalGPT (MIT License).