# autoqa

An automated end-to-end test runner for the Jan application, with ReportPortal integration, screen recording, and comprehensive test monitoring.
## Installation

```bash
git clone <repository-url>
cd autoqa
```

## For Windows and Linux

```bash
pip install -r requirements.txt
```
The Jan application path is auto-detected per platform:

- **Windows:** `%LOCALAPPDATA%\Programs\jan\Jan.exe`
- **macOS:** `~/Applications/Jan.app/Contents/MacOS/Jan`
- **Linux:** `jan` (in `PATH`)

## Usage

```bash
# Run all tests in ./tests directory (auto-starts computer server)
python main.py

# Run with custom test directory
python main.py --tests-dir "my_tests"

# Run with custom Jan app path
python main.py --jan-app-path "C:/Custom/Path/Jan.exe"

# Skip auto computer server start (if already running)
python main.py --skip-server-start

# Enable ReportPortal with token
python main.py --enable-reportportal --rp-token "YOUR_API_TOKEN"

# Full ReportPortal configuration
python main.py \
  --enable-reportportal \
  --rp-endpoint "https://reportportal.example.com" \
  --rp-project "my_project" \
  --rp-token "YOUR_API_TOKEN"
```
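Jan's per-platform default executable path can be resolved with a small helper like the one below. This is a sketch of the idea, not the project's actual detection code (which lives in `utils.py` and may differ); the paths mirror the defaults listed above.

```python
import os
import platform
from pathlib import Path

def default_jan_path() -> str:
    """Best-effort default path to the Jan executable for the current OS.
    Sketch only: real detection in utils.py may probe more locations."""
    system = platform.system()
    if system == "Windows":
        # %LOCALAPPDATA%\Programs\jan\Jan.exe
        return os.path.join(os.environ.get("LOCALAPPDATA", ""),
                            "Programs", "jan", "Jan.exe")
    if system == "Darwin":
        # ~/Applications/Jan.app/Contents/MacOS/Jan
        return str(Path("~/Applications/Jan.app/Contents/MacOS/Jan").expanduser())
    # Linux: rely on a `jan` binary being on PATH
    return "jan"
```

When the binary is not at the default location, pass `--jan-app-path` explicitly instead.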
| Argument | Environment Variable | Default | Description |
|---|---|---|---|
| **Computer Server** | | | |
| `--skip-server-start` | `SKIP_SERVER_START` | `false` | Skip automatic computer server startup |
| **ReportPortal** | | | |
| `--enable-reportportal` | `ENABLE_REPORTPORTAL` | `false` | Enable ReportPortal integration |
| `--rp-endpoint` | `RP_ENDPOINT` | `https://reportportal.menlo.ai` | ReportPortal endpoint URL |
| `--rp-project` | `RP_PROJECT` | `default_personal` | ReportPortal project name |
| `--rp-token` | `RP_TOKEN` | - | ReportPortal API token (required when ReportPortal is enabled) |
| **Jan Application** | | | |
| `--jan-app-path` | `JAN_APP_PATH` | auto-detected | Path to Jan application executable |
| `--jan-process-name` | `JAN_PROCESS_NAME` | `Jan.exe` | Jan process name for monitoring |
| **Model Configuration** | | | |
| `--model-name` | `MODEL_NAME` | `ByteDance-Seed/UI-TARS-1.5-7B` | AI model name |
| `--model-base-url` | `MODEL_BASE_URL` | `http://10.200.108.58:1234/v1` | Model API endpoint |
| `--model-provider` | `MODEL_PROVIDER` | `oaicompat` | Model provider type |
| `--model-loop` | `MODEL_LOOP` | `uitars` | Agent loop type |
| **Test Execution** | | | |
| `--max-turns` | `MAX_TURNS` | `30` | Maximum turns per test |
| `--tests-dir` | `TESTS_DIR` | `tests` | Directory containing test files |
| `--delay-between-tests` | `DELAY_BETWEEN_TESTS` | `3` | Delay between tests (seconds) |
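Each setting above can be supplied as a CLI flag or an environment variable. A minimal sketch of how that precedence can be wired with `argparse` (hypothetical; the actual parser in `main.py` may be structured differently):

```python
import argparse
import os

def build_parser() -> argparse.ArgumentParser:
    """Each flag falls back to its environment variable, then to a
    hard-coded default. Sketch only -- main.py's parser may differ."""
    p = argparse.ArgumentParser(description="autoqa test runner (sketch)")
    p.add_argument("--max-turns", type=int,
                   default=int(os.environ.get("MAX_TURNS", "30")))
    p.add_argument("--tests-dir",
                   default=os.environ.get("TESTS_DIR", "tests"))
    p.add_argument("--delay-between-tests", type=float,
                   default=float(os.environ.get("DELAY_BETWEEN_TESTS", "3")))
    return p
```

With this wiring, an explicit CLI flag wins over the environment variable, which in turn wins over the built-in default.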
Create a `.env` file or set environment variables:

```bash
# Computer Server
SKIP_SERVER_START=false

# ReportPortal Configuration
ENABLE_REPORTPORTAL=true
RP_ENDPOINT=https://reportportal.example.com
RP_PROJECT=my_project
RP_TOKEN=your_secret_token

# Jan Application
JAN_APP_PATH=C:\Custom\Path\Jan.exe
JAN_PROCESS_NAME=Jan.exe

# Model Configuration
MODEL_NAME=gpt-4
MODEL_BASE_URL=https://api.openai.com/v1
MODEL_PROVIDER=openai
MODEL_LOOP=uitars

# Test Settings
MAX_TURNS=50
TESTS_DIR=e2e_tests
DELAY_BETWEEN_TESTS=5
```
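If you prefer not to depend on a library such as `python-dotenv`, a `.env` file in the simple `KEY=VALUE` format above can be loaded with a few lines of code. This loader is an illustrative sketch, not part of the project:

```python
import os
from pathlib import Path

def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments and blanks ignored.
    Existing environment variables are not overwritten (setdefault)."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

Real-world loaders (quoting, `export` prefixes, multiline values) are more involved; the `python-dotenv` package covers those cases.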
Tests are plain `.txt` files containing test prompts, placed in the `tests/` directory (or a custom directory passed via `--tests-dir`). Example test file (`tests/basic/login_test.txt`):

```text
Test the login functionality of Jan application.
Navigate to login screen, enter valid credentials, and verify successful login.
```
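Test discovery amounts to collecting every `.txt` file under the tests directory. A sketch of the idea (the actual logic in `test_runner.py` may differ):

```python
from pathlib import Path

def discover_tests(tests_dir: str = "tests") -> list[Path]:
    """Recursively collect .txt test prompt files, sorted for a stable
    run order across platforms."""
    return sorted(Path(tests_dir).rglob("*.txt"))
```

Sorting matters when tests are organized into subdirectories like `basic/` and `advanced/`, so successive runs execute in the same order.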
```text
autoqa/
├── main.py                    # Main test runner
├── utils.py                   # Jan app utilities
├── test_runner.py             # Test execution logic
├── screen_recorder.py         # Screen recording functionality
├── reportportal_handler.py    # ReportPortal integration
├── tests/                     # Test files directory
│   ├── basic/
│   │   ├── login_test.txt
│   │   └── navigation_test.txt
│   └── advanced/
│       └── complex_workflow.txt
├── recordings/                # Screen recordings (auto-created)
├── trajectories/              # Agent trajectories (auto-created)
└── README.md
```
```bash
# Run all tests locally (auto-starts computer server)
python main.py

# Get help
python main.py --help

# Run without auto-starting computer server
python main.py --skip-server-start

# Custom configuration
python main.py \
  --tests-dir "integration_tests" \
  --max-turns 40 \
  --delay-between-tests 10 \
  --model-name "gpt-4"

# Environment + Arguments
ENABLE_REPORTPORTAL=true RP_TOKEN=secret python main.py --max-turns 50

# Different model provider
python main.py \
  --model-provider "openai" \
  --model-name "gpt-4" \
  --model-base-url "https://api.openai.com/v1"

# External computer server (skip auto-start)
SKIP_SERVER_START=true python main.py

# GitHub Actions / CI environment
ENABLE_REPORTPORTAL=true \
RP_TOKEN=${{ secrets.RP_TOKEN }} \
MODEL_NAME=production-model \
MAX_TURNS=40 \
SKIP_SERVER_START=false \
python main.py
```
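Conceptually, the runner's main loop just walks the test files sequentially, pausing `DELAY_BETWEEN_TESTS` seconds between runs. A sketch of that loop, where `run_single_test` stands in for the project's actual per-test execution function (hypothetical name):

```python
import time
from pathlib import Path

def run_all(test_files: list[Path], run_single_test,
            delay_between_tests: float = 3.0) -> None:
    """Run tests one at a time, sleeping between them so the Jan app
    can settle before the next test starts."""
    for i, test_file in enumerate(test_files):
        run_single_test(test_file)
        if i < len(test_files) - 1:   # no pause after the last test
            time.sleep(delay_between_tests)
```

The delay gives screen recording and app shutdown time to complete between tests; raise `--delay-between-tests` if consecutive tests interfere with each other.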
The test runner automatically manages the computer server:

```bash
# If you prefer to manage the computer server manually:
python -m computer_server   # In a separate terminal

# Then run tests without auto-start:
python main.py --skip-server-start
```

On startup you should see log output like:

```text
2025-07-15 15:30:45 - INFO - Starting computer server in background...
2025-07-15 15:30:45 - INFO - Calling computer_server.run_cli()...
2025-07-15 15:30:45 - INFO - Computer server thread started
2025-07-15 15:30:50 - INFO - Computer server is running successfully
```
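To decide whether `--skip-server-start` is appropriate, you can probe whether something is already listening on the computer server's port. The port number below is an assumption; check your `computer_server` configuration for the real one:

```python
import socket

def server_is_running(host: str = "localhost", port: int = 8000,
                      timeout: float = 1.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port.
    The default port is an assumption, not taken from computer_server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A successful connection only proves *something* is listening there, not that it is the computer server, so treat this as a quick sanity check.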
Screen recordings are saved to the `recordings/` directory as MP4 files, and agent trajectories to the `trajectories/` directory. When ReportPortal integration is enabled, test results are uploaded to ReportPortal as well.
## Troubleshooting

**Computer server startup failed:**

```bash
# Install required dependencies
pip install computer_server

# Check if computer_server is available
python -c "import computer_server; print('OK')"

# Use manual server if auto-start fails
python main.py --skip-server-start
```

**Jan app not found:**

```bash
# Specify custom path
python main.py --jan-app-path "D:/Apps/Jan/Jan.exe"
```

**Windows dependencies missing:**

```bash
# Install Windows-specific packages
pip install pywin32 psutil
```

**ReportPortal connection failed:** verify that `RP_ENDPOINT` is reachable and `RP_TOKEN` is valid.

**Screen recording issues:** check that the `recordings/` directory exists and is writable.

**Test timeouts:**

```bash
# Increase turn limit
python main.py --max-turns 50
```
Enable detailed logging by modifying the logging level in `main.py`:

```python
logging.basicConfig(level=logging.DEBUG)
```
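For longer debugging sessions it can help to mirror DEBUG output to a file as well as the console. A sketch of such a setup (the filename is illustrative, not a project convention):

```python
import logging

def configure_debug_logging(log_file: str = "autoqa_debug.log") -> None:
    """Send DEBUG-level logs to both the console and a file.
    force=True replaces any handlers main.py already installed."""
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s - %(levelname)s - %(message)s",
        handlers=[
            logging.StreamHandler(),
            logging.FileHandler(log_file, encoding="utf-8"),
        ],
        force=True,
    )
```

Note that `force=True` requires Python 3.8+; without it, `logging.basicConfig` is a no-op when handlers are already configured.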