# HPO Example (`examples/pytorch/HPOExample`)
This example demonstrates hyperparameter optimization with Optuna, with each trial tracked in MLflow, using pure PyTorch (no Lightning dependencies).
The model is a simple 2-layer neural network:
Input (784) → FC1 (hidden_size) → ReLU → Dropout → FC2 (10) → LogSoftmax
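A minimal sketch of that architecture in plain PyTorch (the class name `MNISTNet` is an assumption; `hidden_size` and `dropout_rate` are the tuned hyperparameters):

```python
import torch
import torch.nn as nn

class MNISTNet(nn.Module):
    """2-layer MLP matching the diagram above (a sketch; the class name is an assumption)."""

    def __init__(self, hidden_size: int, dropout_rate: float):
        super().__init__()
        self.fc1 = nn.Linear(784, hidden_size)  # Input (784) -> FC1
        self.dropout = nn.Dropout(dropout_rate)
        self.fc2 = nn.Linear(hidden_size, 10)   # FC2 -> 10 classes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.view(x.size(0), -1)               # flatten 28x28 images to 784
        x = torch.relu(self.fc1(x))
        x = self.dropout(x)
        return torch.log_softmax(self.fc2(x), dim=1)  # LogSoftmax over the 10 classes
```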
The search space (each trial samples these via Optuna; a sketch follows the run commands below):

- `lr`: learning rate (1e-4 to 1e-1, log scale)
- `hidden_size`: hidden layer size (64 to 512, step 64)
- `dropout_rate`: dropout probability (0.1 to 0.5)
- `batch_size`: batch size (32, 64, or 128)

Quick run:

```bash
python hpo_mnist.py --n-trials 3 --max-epochs 3
```
Longer run:

```bash
python hpo_mnist.py --n-trials 10 --max-epochs 5
```
Or run it as an MLflow project:

```bash
mlflow run . -P n_trials=5 -P max_epochs=3
```
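The `n_trials`/`max_epochs` flags control the size of the Optuna study. A sketch of how the search space above maps to Optuna's suggest calls, with each trial logged as a nested MLflow run (`objective` and `train_and_evaluate` are assumed names, not necessarily what `hpo_mnist.py` uses):

```python
import mlflow
import optuna

def train_and_evaluate(lr, hidden_size, dropout_rate, batch_size) -> float:
    # Placeholder: the real script trains the network on MNIST and
    # returns validation accuracy.
    return 0.0

def objective(trial: optuna.Trial) -> float:
    # Sample the search space listed above.
    params = {
        "lr": trial.suggest_float("lr", 1e-4, 1e-1, log=True),
        "hidden_size": trial.suggest_int("hidden_size", 64, 512, step=64),
        "dropout_rate": trial.suggest_float("dropout_rate", 0.1, 0.5),
        "batch_size": trial.suggest_categorical("batch_size", [32, 64, 128]),
    }
    with mlflow.start_run(nested=True):            # one MLflow run per trial
        mlflow.log_params(params)
        val_acc = train_and_evaluate(**params)
        mlflow.log_metric("val_accuracy", val_acc)
    return val_acc

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
```

Returning validation accuracy pairs with `direction="maximize"`, so Optuna steers later trials toward better-performing regions of the search space.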
After running, view the results in the MLflow UI:
```bash
mlflow server
```
Navigate to http://localhost:5000 to browse the logged trials with their parameters and metrics.
Requirements:

- `torch>=2.1`: PyTorch for model training
- `torchvision>=0.15.1`: MNIST dataset
- `optuna>=3.0.0`: hyperparameter optimization framework
- `mlflow`: experiment tracking

No Lightning, no torchmetrics, no transformers = no dependency conflicts! 🎉