# Backend

Simple Python backend that connects your frontend to Ollama for local LLM chat.
## Setup

1. Install Ollama (if not already installed):

   ```bash
   # Visit https://ollama.ai or run:
   curl -fsSL https://ollama.ai/install.sh | sh
   ```

2. Start Ollama:

   ```bash
   ollama serve
   ```

3. Pull a model (optional; the server will suggest one if needed):

   ```bash
   ollama pull llama3.2
   ```

4. Install the Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

5. Test the Ollama connection (see the sketch after these steps for a manual check):

   ```bash
   python ollama_client.py
   ```

6. Start the backend server:

   ```bash
   python server.py
   ```

The server runs on `http://localhost:8000`.
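If step 5 reports a problem, you can check the Ollama connection by hand. This is a minimal sketch, assuming Ollama is on its default port 11434 and using its `/api/tags` endpoint to list installed models; the `requests` package is an assumption here (install it with `pip install requests` if it is not in `requirements.txt`):

```python
# check_ollama.py -- hypothetical helper, not part of this repo
import requests

def check_ollama(base_url: str = "http://localhost:11434") -> None:
    """Verify Ollama is reachable and list the locally installed models."""
    try:
        # GET /api/tags returns the models available locally
        resp = requests.get(f"{base_url}/api/tags", timeout=5)
        resp.raise_for_status()
    except requests.RequestException as exc:
        print(f"Ollama is not reachable at {base_url}: {exc}")
        print("Make sure `ollama serve` is running.")
        return
    models = resp.json().get("models", [])
    if not models:
        print("Ollama is running but has no models; try `ollama pull llama3.2`.")
    else:
        print("Available models:", ", ".join(m["name"] for m in models))

if __name__ == "__main__":
    check_ollama()
```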
## API

### GET /health

Returns server status and available models.
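For example, to check this endpoint from Python (the exact response fields depend on `server.py`, so the body is treated as opaque JSON here):

```python
import requests

# Ask the backend for its status and the models it can see
resp = requests.get("http://localhost:8000/health", timeout=5)
resp.raise_for_status()
print(resp.json())  # server status plus available models
```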
### POST /chat

`Content-Type: application/json`

Request body:

```json
{
  "message": "Hello!",
  "model": "llama3.2:latest",
  "conversation_history": []
}
```

Returns:

```json
{
  "response": "Hello! How can I help you?",
  "model": "llama3.2:latest",
  "message_count": 1
}
```
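A small client sketch showing how `conversation_history` carries earlier turns. The request and response fields come from the schemas above; the shape of each history entry (role/content pairs) is an assumption, so adjust it to whatever `server.py` expects:

```python
import requests

URL = "http://localhost:8000/chat"
history = []  # earlier turns, sent back with every request

def chat(message: str, model: str = "llama3.2:latest") -> str:
    payload = {
        "message": message,
        "model": model,
        "conversation_history": history,
    }
    resp = requests.post(URL, json=payload, timeout=60)
    resp.raise_for_status()
    reply = resp.json()["response"]
    # Record both sides of the exchange; the role/content shape is assumed
    history.append({"role": "user", "content": message})
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hello!"))
print(chat("What did I just say?"))  # works only if history is threaded through
```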
## Testing

Test the chat endpoint:

```bash
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "model": "llama3.2:latest"}'
```
## Frontend Integration

Your React frontend should connect to the backend at `http://localhost:8000`, sending chat messages to `http://localhost:8000/chat`.

This simple backend is ready for: