# Evaluation
The evaluation dataset consists of 16 screenshots. A Python script for running screenshot-to-code on the dataset and a UI for rating the outputs are included. With this setup, we can compare and evaluate various models and prompts.
- Input screenshots go in `backend/evals_data/inputs` and the outputs will be written to `backend/evals_data/outputs`. If you want to change these locations, modify `EVALS_DIR` in `backend/evals/config.py`. You can download the input screenshot dataset here: TODO.
- Set the stack and model (`STACK` var, `MODEL` var) in `backend/run_evals.py`.
- Run `OPENAI_API_KEY=sk-... python run_evals.py` - this runs screenshot-to-code on the input dataset in parallel, but it will still take a few minutes to complete (a rough sketch of such a runner follows this list).
- The generated outputs will be in `backend/evals_data/outputs`.
- To view and rate the outputs, visit your front-end at `/evals`.
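To make the flow concrete, here is a minimal sketch of what a parallel eval runner can look like. This is not the actual `run_evals.py`; the `generate_code_for_image` helper is a hypothetical stand-in for the project's generation call, and the stack/model values are just examples.

```python
# Minimal sketch of a parallel eval runner (illustrative, not the real run_evals.py).
import asyncio
from pathlib import Path

STACK = "html_tailwind"   # example stack to evaluate
MODEL = "gpt-4o"          # example model to evaluate
EVALS_DIR = Path("evals_data")
INPUT_DIR = EVALS_DIR / "inputs"
OUTPUT_DIR = EVALS_DIR / "outputs"


async def generate_code_for_image(image_path: Path, stack: str, model: str) -> str:
    # Hypothetical stand-in for the real LLM generation call; returns dummy
    # HTML so the sketch runs end to end.
    await asyncio.sleep(0)
    return f"<!-- {model}/{stack} output for {image_path.name} -->"


async def run_one(image_path: Path) -> None:
    # Generate code for one screenshot and write it to the outputs directory.
    html = await generate_code_for_image(image_path, stack=STACK, model=MODEL)
    (OUTPUT_DIR / f"{image_path.stem}.html").write_text(html)


async def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    screenshots = sorted(INPUT_DIR.glob("*.png"))
    # All screenshots are processed concurrently; with real model calls the
    # batch still takes a few minutes.
    await asyncio.gather(*(run_one(p) for p in screenshots))


if __name__ == "__main__":
    asyncio.run(main())
```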
Generally, I run three tests for each model/prompt + stack combo and take the average score across those tests as the evaluation result.
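The averaging itself is simple; here is a small sketch assuming ratings are collected as per-run lists of per-screenshot scores (the data layout is illustrative, not how the rating UI stores results):

```python
# Average ratings across repeated runs for one model/prompt + stack combo.
from statistics import mean

# ratings[run_index] -> list of per-screenshot scores for that run (illustrative data)
ratings = {
    1: [4, 3, 5, 4],
    2: [4, 4, 5, 3],
    3: [5, 3, 4, 4],
}

per_run_averages = {run: mean(scores) for run, scores in ratings.items()}
combo_score = mean(per_run_averages.values())
print(f"per-run averages: {per_run_averages}")
print(f"final score for this combo: {combo_score:.2f}")
```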