docs/platform/getting-started.md
This guide will help you set up the server and builder for the project.
<!-- The video is listed in the root Readme.md of the repo -->
<!-- We also offer this in video format. You can check it out [here](https://github.com/Significant-Gravitas/AutoGPT?tab=readme-ov-file#how-to-setup-for-self-hosting). -->

!!! warning
    DO NOT FOLLOW ANY OUTSIDE TUTORIALS AS THEY WILL LIKELY BE OUT OF DATE
To set up the server, you need to have the following installed:
We use Node.js to run our frontend application.
If you need assistance installing Node.js:
https://nodejs.org/en/download/
NPM is included with Node.js, but if you need assistance installing NPM: https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
You can check if you have Node.js & NPM installed by running the following commands:

```bash
node -v
npm -v
```
Once you have Node.js installed, you can proceed to the next step.
Docker containerizes applications, while Docker Compose orchestrates multi-container Docker applications.
If you need assistance installing Docker: https://docs.docker.com/desktop/

Docker Compose is included in Docker Desktop, but if you need assistance installing Docker Compose: https://docs.docker.com/compose/install/

You can check if you have Docker and Docker Compose installed by running the following commands:

```bash
docker -v
docker compose -v
```
Once you have Docker and Docker Compose installed, you can proceed to the next step.
<details>
<summary>Raspberry Pi 5 Specific Notes</summary>

On Raspberry Pi 5 with Raspberry Pi OS, the default 16K page size will cause issues with the <code>supabase-vector</code> container (expected: 4K).

To fix this, edit <code>/boot/firmware/config.txt</code> and add:

```ini
kernel=kernel8.img
```

Then reboot. You can check your page size with:

```bash
getconf PAGESIZE
```

<code>16384</code> means 16K (incorrect), and <code>4096</code> means 4K (correct). After adjusting, <code>docker compose up -d --build</code> should work normally.

See <a href="https://github.com/supabase/supabase/issues/33816">supabase/supabase #33816</a> for additional context.

</details>

If you're self-hosting AutoGPT locally, we recommend using our official setup script to simplify the process. This will install dependencies (like Docker), pull the latest code, and launch the app with minimal effort.
For macOS/Linux:
```bash
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```
For Windows (PowerShell):
```powershell
powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"
```
This method is ideal if you're setting up for development or testing and want to skip manual configuration.
The first step is cloning the AutoGPT repository to your computer. To do this, open a terminal window in a folder on your computer and run:
```bash
git clone https://github.com/Significant-Gravitas/AutoGPT.git
```
If you get stuck, follow this guide.
Once that's complete you can continue the setup process.
To run the platform, follow these steps:
Navigate to the `autogpt_platform` directory inside the AutoGPT folder:

```bash
cd AutoGPT/autogpt_platform
```

Copy the `.env.default` file to `.env` in `autogpt_platform`:

```bash
cp .env.default .env
```
This command will copy the `.env.default` file to `.env` in the `autogpt_platform` directory. You can modify the `.env` file to add your own environment variables.
Run the platform services:

```bash
docker compose up -d --build
```

This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
The repository includes a Makefile with helpful commands to streamline setup and development. You may use make commands as an alternative to calling Docker or scripts directly.
Inside the autogpt_platform directory, you can use:
| Command | What it Does |
|---|---|
| `make start-core` | Start just the core services (Supabase, Redis, RabbitMQ) in the background |
| `make stop-core` | Stop the core services |
| `make logs-core` | Tail the logs for the core services |
| `make format` | Format & lint backend (Python) and frontend (TypeScript) code |
| `make migrate` | Run backend database migrations |
| `make run-backend` | Run the backend FastAPI server |
| `make run-frontend` | Run the frontend Next.js development server |
Example usage:

```bash
make start-core
make run-backend
make run-frontend
```

You can always check the available Makefile recipes by running:

```bash
make help
```

(or just by inspecting the Makefile in the repo root).
You can check if the server is running by visiting http://localhost:3000 in your browser.
Notes:
By default, the services run on the following ports:

- Frontend UI Server: 3000
- Backend WebSocket Server: 8001
- Execution API REST Server: 8006
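If you want to check these ports programmatically, a small standard-library sketch like the following works (it assumes the services are running on localhost; the port numbers are taken from the list above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports from the list above; each is only reachable once the
# corresponding service has started.
for name, port in [("Frontend UI", 3000), ("WebSocket", 8001), ("Execution API", 8006)]:
    print(f"{name} (:{port}):", "open" if port_open("localhost", port) else "closed")
```

This only tests whether something is listening on the port, not whether the service behind it is healthy.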
You may want to change your encryption key in the `.env` file in the `autogpt_platform/backend` directory.
To generate a new encryption key, run the following in a Python shell:

```python
from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())
```

Or run the following command in the `autogpt_platform/backend` directory:

```bash
poetry run cli gen-encrypt-key
```
Then, replace the existing key in the `autogpt_platform/backend/.env` file with the new one.
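For context, a Fernet key is just 32 random bytes encoded as URL-safe base64 (a 44-character string), so you can sanity-check a generated key with the standard library alone. This is an illustrative sketch, not a replacement for the commands above:

```python
import base64
import os

def generate_fernet_style_key() -> str:
    """Produce a key in the same format Fernet.generate_key() uses:
    32 random bytes, URL-safe base64-encoded (44 characters)."""
    return base64.urlsafe_b64encode(os.urandom(32)).decode()

def looks_like_fernet_key(key: str) -> bool:
    """Sanity-check that a key decodes to exactly 32 bytes."""
    try:
        return len(base64.urlsafe_b64decode(key.encode())) == 32
    except Exception:
        return False

key = generate_fernet_style_key()
print(len(key), looks_like_fernet_key(key))  # 44 True
```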
When installing Docker on Windows, it is highly recommended to select WSL 2 instead of Hyper-V. Using Hyper-V can cause compatibility issues with Supabase, leading to the supabase-db container being marked as unhealthy.
If you initially installed Docker with Hyper-V, you don't need to reinstall it. You can switch to WSL 2 by following these steps:

🚨 Warning: Enabling WSL 2 may erase your existing containers and build history. If you have important containers, consider backing them up before switching.
For more details, refer to Docker's official documentation.
AutoGPT requires Docker (Docker Desktop or Docker Engine). Podman and podman-compose are not supported and may cause path resolution issues, particularly on Windows.
If you see errors like:

```
Error: the specified Containerfile or Dockerfile does not exist, ..\..\autogpt_platform\backend\Dockerfile
```

This indicates you're using Podman instead of Docker. Please install Docker Desktop and use `docker compose` instead of `podman-compose`.
To run the frontend locally, you need to have Node.js and PNPM installed on your machine.
Install Node.js to manage dependencies and run the frontend application.
Install PNPM to manage the frontend dependencies.
Run the service dependencies (backend, database, message queues, etc.):

```bash
docker compose --profile local up deps_backend --build --detach
```

Go to the `autogpt_platform/frontend` directory:

```bash
cd frontend
```

Install the dependencies:

```bash
pnpm install
```

Generate the API client:

```bash
pnpm generate:api-client
```

Run the frontend application:

```bash
pnpm dev
```
An auto-formatter and a linter are set up in the project. To run them:

Format the code:

```bash
pnpm format
```

Lint the code:

```bash
pnpm lint
```

Or, for both frontend and backend, from the root:

```bash
make format
```

To run the tests, you can use the following command:

```bash
pnpm test
```
To run the backend locally, you need to have Python 3.10 or higher installed on your machine.
Install Poetry to manage dependencies and virtual environments.
Run the backend dependencies (database, message queues, etc.):

```bash
docker compose --profile local up deps --build --detach
```

Or equivalently with the Makefile:

```bash
make start-core
```

Go to the `autogpt_platform/backend` directory:

```bash
cd backend
```

Install the dependencies:

```bash
poetry install --with dev
```

Run the backend server:

```bash
poetry run app
```

Or from within `autogpt_platform`:

```bash
make run-backend
```
An auto-formatter and a linter are set up in the project. To run them:

Format the code:

```bash
poetry run format
```

Lint the code:

```bash
poetry run lint
```

Or format both frontend and backend at once:

```bash
make format
```

To run the tests:

```bash
poetry run pytest -s
```
To add a new agent block, you need to create a new class that inherits from `Block` and provides the following information:

- Place the class in the blocks (`backend.blocks`) module.
- `input_schema`: the schema of the input data, represented by a Pydantic object.
- `output_schema`: the schema of the output data, represented by a Pydantic object.
- `run` method: the main logic of the block.
- `test_input` & `test_output`: the sample input and output data for the block, which will be used to auto-test the block.
- You can mock the functions declared in the block using the `test_mock` field for your unit tests.
- You can test your block by running `poetry run pytest backend/blocks/test/test_block.py -s`.
- Once your block works, open a pull request to the `dev` branch of the repository with your changes so you can share it with the community :)
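To illustrate the overall shape such a class takes, here is a minimal, stdlib-only sketch. The `EchoBlock`, `EchoInput`, and `EchoOutput` names are hypothetical, and plain dataclasses stand in for the real `Block` base class and Pydantic schemas in `backend.blocks`, which differ in detail:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the real Pydantic input/output schemas.
@dataclass
class EchoInput:
    text: str

@dataclass
class EchoOutput:
    result: str

class EchoBlock:
    """Illustrative block: input_schema/output_schema describe the data,
    run() holds the logic, test_input/test_output drive the auto-tests."""
    input_schema = EchoInput
    output_schema = EchoOutput
    test_input = EchoInput(text="hello")
    test_output = EchoOutput(result="hello")

    def run(self, data: EchoInput) -> EchoOutput:
        # Main logic of the block: here it simply echoes the input.
        return EchoOutput(result=data.text)

block = EchoBlock()
print(block.run(block.test_input) == block.test_output)  # True
```

The key idea the auto-tests rely on is that running the block on `test_input` must reproduce `test_output`.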