```bash
git clone https://github.com/zhayujie/CowAgent
cd CowAgent/
```
Core dependencies (required):

```bash
pip3 install -r requirements.txt
```

Optional dependencies (recommended):

```bash
pip3 install -r requirements-optional.txt
```
Install the command-line tool for managing services and skills:

```bash
pip3 install -e .
```
Then use the `cow` command:

```bash
cow help
```
Copy the config template and edit it:

```bash
cp config-template.json config.json
```
Fill in model API keys, channel type, and other settings in `config.json`. See the model docs for details.
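If you prefer to script this step, the helper below copies the template and applies overrides programmatically. It is a minimal sketch: the file names match the `cp` command above, but the key names (`channel_type`, `deepseek_api_key`, etc.) are taken from the example config in this guide and should be adjusted to your setup.

```python
import json

def fill_config(template_path="config-template.json",
                out_path="config.json", **overrides):
    """Copy the config template and apply overrides (e.g. an API key)."""
    with open(template_path, "r", encoding="utf-8") as f:
        config = json.load(f)
    # Any keyword argument overwrites the corresponding template key.
    config.update(overrides)
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(config, f, ensure_ascii=False, indent=2)
    return config
```

For example, `fill_config(channel_type="web", deepseek_api_key="your-key")` writes a ready-to-use `config.json` without hand-editing.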
Using Cow CLI (recommended):

```bash
cow start
```
Or run locally in the foreground:

```bash
python3 app.py
```
By default, the Web console starts. Access http://localhost:9899 to chat.
To run in the background on a server (without the CLI):

```bash
nohup python3 app.py &
tail -f nohup.out
```
Docker deployment does not require cloning the source code or installing dependencies. For Agent mode, however, deployment from source is recommended because it gives the agent broader system access.

<Note>
Requires [Docker](https://docs.docker.com/engine/install/) and docker-compose.
</Note>

1. Download the config file:

```bash
curl -O https://cdn.link-ai.tech/code/cow/docker-compose.yml
```
Edit `docker-compose.yml` with your configuration.
2. Start the container:

```bash
sudo docker compose up -d
```
3. View logs:

```bash
sudo docker logs -f chatgpt-on-wechat
```
```json
{
  "channel_type": "web",
  "model": "deepseek-v4-flash",
  "deepseek_api_key": "",
  "agent": true,
  "agent_workspace": "~/cow",
  "agent_max_context_tokens": 40000,
  "agent_max_context_turns": 30,
  "agent_max_steps": 15
}
```
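Before starting the service, a quick sanity check can catch a missing key or an empty API key. The snippet below is an illustrative check, not an official schema: the required keys and types simply mirror the example config above.

```python
# Minimal sanity check for a CowAgent-style config dict.
# REQUIRED mirrors the example config in this guide; it is an
# assumption for illustration, not an exhaustive schema.
REQUIRED = {
    "channel_type": str,
    "model": str,
    "agent": bool,
    "agent_max_steps": int,
}

def check_config(config: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for key, typ in REQUIRED.items():
        if key not in config:
            problems.append(f"missing key: {key}")
        elif not isinstance(config[key], typ):
            problems.append(f"{key} should be {typ.__name__}")
    # Agent mode needs a non-empty API key (key name from the example above).
    if config.get("agent") and not config.get("deepseek_api_key"):
        problems.append("deepseek_api_key is empty")
    return problems
```

Running `check_config(json.load(open("config.json")))` before `cow start` turns a late runtime failure into an immediate, readable error list.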
| Parameter | Description | Default |
|---|---|---|
| `channel_type` | Channel type | `web` |
| `model` | Model name | `deepseek-v4-flash` |
| `agent` | Enable Agent mode | `true` |
| `agent_workspace` | Agent workspace path | `~/cow` |
| `agent_max_context_tokens` | Max context tokens | `40000` |
| `agent_max_context_turns` | Max context turns | `30` |
| `agent_max_steps` | Max decision steps per task | `15` |
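To make a limit like `agent_max_context_turns` concrete, here is a toy sketch of turn-based truncation: only the most recent N user/assistant exchanges are kept when the history grows. This illustrates the idea only; it is not CowAgent's actual implementation.

```python
from collections import deque

def trim_context(history, max_turns):
    """Keep only the newest max_turns exchanges.

    history: list of (user_msg, assistant_msg) tuples, oldest first.
    """
    # deque with maxlen silently drops the oldest entries.
    return list(deque(history, maxlen=max_turns))

# 50 exchanges, trimmed with a limit of 30 (the default above):
history = [(f"q{i}", f"a{i}") for i in range(50)]
trimmed = trim_context(history, 30)
```

With the default of 30, the 20 oldest exchanges are discarded and the newest 30 remain; a token-based limit like `agent_max_context_tokens` works analogously but measures size instead of turn count.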