# Perplexity

The perplexity example can be used to calculate the so-called perplexity value of a language model over a given text corpus. Perplexity measures how well the model can predict the next token, with lower values being better. Note that perplexity is not directly comparable between models, especially if they use different tokenizers. Also note that finetunes typically result in a higher perplexity value even though the human-rated quality of outputs increases.
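Concretely, for a tokenized corpus $x_1, \dots, x_N$, perplexity is the exponential of the mean negative log-likelihood of each token given its preceding context:

$$\mathrm{PPL} = \exp\left(-\frac{1}{N} \sum_{i=1}^{N} \log p(x_i \mid x_{<i})\right)$$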

Within llama.cpp the perplexity of base models is used primarily to judge the quality loss from e.g. quantized models vs. FP16. The convention among contributors is to use the Wikitext-2 test set for testing unless noted otherwise (it can be obtained with scripts/get-wikitext-2.sh). When numbers are listed, all command-line arguments and compilation options are left at their defaults unless noted otherwise. llama.cpp numbers are not directly comparable to those of other projects because the exact values depend strongly on the implementation details.
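A typical invocation looks like the following (the model path is illustrative, and depending on the build the binary may be named llama-perplexity):

```sh
# Fetch the Wikitext-2 test set used by convention:
./scripts/get-wikitext-2.sh

# Compute the perplexity of a model over it (model path illustrative):
./perplexity -m models/llama-3-8b/ggml-model-f16.gguf -f wikitext-2-raw/wiki.test.raw
```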

By default only the mean perplexity value and the corresponding uncertainty are calculated. The uncertainty is determined empirically by assuming a Gaussian distribution of the "correct" logits and then applying error propagation.
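In terms of formulas, if $\mu$ and $\sigma_\mu$ denote the sample mean and standard error of the negative log-likelihoods, then first-order error propagation through the exponential gives (a sketch of the standard technique, not a quote of the implementation):

$$\mathrm{PPL} = e^{\mu}, \qquad \sigma_\mu = \frac{\sigma}{\sqrt{N}}, \qquad \sigma_{\mathrm{PPL}} \approx e^{\mu} \, \sigma_\mu$$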

More statistics can be obtained by recording the logits from the FP16 version of a model. To do this, supply perplexity with --kl-divergence-base path/to/logit/binary/file.kld. The program will then record all logits and save them to the provided path in binary format. The logit file will be very large: 11 GiB for LLaMA 2 or 37 GiB for LLaMA 3 when using the Wikitext-2 test set. Once you have the file, supply perplexity with the quantized model, the logits file via --kl-divergence-base, and finally the --kl-divergence argument to indicate that the program should calculate the so-called Kullback-Leibler divergence. This is a measure of how similar the FP16 and the quantized logit distributions are, with a value of 0 indicating that the distributions are identical. The uncertainty on the mean KL divergence is calculated by assuming the KL divergence per token follows a Gaussian distribution.
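The workflow therefore consists of two runs; a sketch (model and file paths are illustrative):

```sh
# Run 1: record the FP16 logits to a binary file (expect 11-37 GiB, see above):
./perplexity -m ggml-model-f16.gguf -f wiki.test.raw \
    --kl-divergence-base logits-f16.kld

# Run 2: evaluate a quantized model against the recorded logits:
./perplexity -m ggml-model-q4_K_M.gguf -f wiki.test.raw \
    --kl-divergence-base logits-f16.kld --kl-divergence
```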

In addition to the KL divergence, the following statistics are calculated with --kl-divergence (the KL divergence itself is written out after this list):

- Ratio of mean FP16 PPL and quantized PPL. Uncertainty is estimated on logits, then propagated. The logarithm of this metric is also calculated and printed; it is 0 if the logit distributions are the same.
- Difference of mean FP16 PPL and quantized PPL. Uncertainty is estimated on logits, then propagated.
- Mean change in "correct" token probability. Positive values mean the model gets better at prediction, negative values mean it gets worse.
- Pearson correlation coefficient of the "correct" token probabilities between models.
- Percentiles of change in "correct" token probability. Positive values mean the model gets better at prediction, negative values mean it gets worse. Can be used to judge noise vs. quality loss from quantization. If the percentiles are symmetric then the quantization is essentially just adding noise. If the negative values are significantly larger in magnitude than the positive values then this indicates that the model is actually becoming worse from the quantization.
- The root mean square of the change in token probabilities. If you were to assume that the quantization simply causes Gaussian noise on the token probabilities then this would be the standard deviation of said noise. The uncertainty on the value is calculated by assuming that the change in token probabilities follows a Gaussian distribution. Related discussion: https://github.com/ggerganov/llama.cpp/discussions/2875 .
- Same top p: Percentage of how often the same token was assigned the highest probability by both models. The uncertainty is calculated from the Gaussian approximation of the binomial distribution.
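For reference, the KL divergence reported above follows the standard definition, applied per token to the FP16 distribution $p$ and the quantized distribution $q$ over the vocabulary $V$ and then averaged over all evaluated tokens:

$$D_{\mathrm{KL}}(p \parallel q) = \sum_{v \in V} p_v \log \frac{p_v}{q_v}$$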

## LLaMA 3 8b Scoreboard

| Revision | f364eb6f |
|:---------|:---------|
| Backend  | CUDA |
| CPU      | AMD Epyc 7742 |
| GPU      | 1x NVIDIA RTX 4090 |

Results were generated using the CUDA backend and are sorted by Kullback-Leibler divergence relative to FP16. The "WT" importance matrices were created using varying numbers of Wikitext tokens and can be found here. Note: the FP16 logits used for the calculation of all metrics other than perplexity are stored in a binary file between runs. In order to save space this file does not contain the exact same FP32 logits but instead casts them to 16 bit unsigned integers (with some scaling). So the "f16" results are to be understood as the difference resulting only from this downcast.
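For context, importance matrices like the "WT" ones are typically produced with the imatrix tool that upstream llama.cpp ships alongside perplexity and are then passed to the quantization tool; a sketch under the assumption that this build includes those tools (file names illustrative):

```sh
# Compute an importance matrix from a Wikitext training file:
./imatrix -m ggml-model-f16.gguf -f wikitext-train.txt -o imatrix-wt.dat

# Apply it when quantizing:
./quantize --imatrix imatrix-wt.dat ggml-model-f16.gguf ggml-model-q4_K_M.gguf q4_K_M
```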

| Quantization | imatrix | Model size [GiB] | PPL | ΔPPL | KLD | Mean Δp | RMS Δp |
|--------------|---------|-----------------:|-----|------|-----|---------|--------|
| f16     | None    | 14.97 | 6.233160 ± 0.037828 | 0.001524 ± 0.000755 | 0.000551 ± 0.000002 | 0.001 ± 0.002 %   | 0.787 ± 0.004 %  |
| q8_0    | None    | 7.96  | 6.234284 ± 0.037878 | 0.002650 ± 0.001006 | 0.001355 ± 0.000006 | -0.019 ± 0.003 %  | 1.198 ± 0.007 %  |
| q6_K    | None    | 6.14  | 6.253382 ± 0.038078 | 0.021748 ± 0.001852 | 0.005452 ± 0.000035 | -0.007 ± 0.006 %  | 2.295 ± 0.019 %  |
| q5_K_M  | None    | 5.33  | 6.288607 ± 0.038338 | 0.056974 ± 0.002598 | 0.010762 ± 0.000079 | -0.114 ± 0.008 %  | 3.160 ± 0.031 %  |
| q5_K_S  | None    | 5.21  | 6.336598 ± 0.038755 | 0.104964 ± 0.003331 | 0.016595 ± 0.000122 | -0.223 ± 0.010 %  | 3.918 ± 0.036 %  |
| q5_1    | None    | 5.65  | 6.337857 ± 0.038677 | 0.106223 ± 0.003476 | 0.018045 ± 0.000139 | -0.287 ± 0.011 %  | 4.123 ± 0.039 %  |
| q5_0    | None    | 5.21  | 6.363224 ± 0.038861 | 0.131591 ± 0.003894 | 0.022239 ± 0.000166 | -0.416 ± 0.012 %  | 4.634 ± 0.043 %  |
| q4_K_M  | WT 10m  | 4.58  | 6.382937 ± 0.039055 | 0.151303 ± 0.004429 | 0.028152 ± 0.000240 | -0.389 ± 0.014 %  | 5.251 ± 0.049 %  |
| q4_K_M  | None    | 4.58  | 6.407115 ± 0.039119 | 0.175482 ± 0.004620 | 0.031273 ± 0.000238 | -0.596 ± 0.014 %  | 5.519 ± 0.050 %  |
| q4_K_S  | WT 10m  | 4.37  | 6.409697 ± 0.039189 | 0.178064 ± 0.004744 | 0.031951 ± 0.000259 | -0.531 ± 0.015 %  | 5.645 ± 0.051 %  |
| iq4_NL  | WT 10m  | 4.35  | 6.455593 ± 0.039630 | 0.223959 ± 0.005201 | 0.035742 ± 0.000288 | -0.590 ± 0.016 %  | 5.998 ± 0.054 %  |
| iq4_XS  | WT 10m  | 4.14  | 6.459705 ± 0.039595 | 0.228071 ± 0.005207 | 0.036334 ± 0.000284 | -0.668 ± 0.016 %  | 6.044 ± 0.054 %  |
| q4_K_S  | None    | 4.37  | 6.500529 ± 0.039778 | 0.268895 ± 0.005638 | 0.043136 ± 0.000314 | -0.927 ± 0.017 %  | 6.562 ± 0.055 %  |
| q4_1    | None    | 4.78  | 6.682737 ± 0.041285 | 0.451103 ± 0.008030 | 0.071683 ± 0.000505 | -0.927 ± 0.017 %  | 8.512 ± 0.063 %  |
| q4_0    | None    | 4.34  | 6.700147 ± 0.041226 | 0.468514 ± 0.007951 | 0.071940 ± 0.000491 | -1.588 ± 0.022 %  | 8.434 ± 0.061 %  |
| q3_K_L  | WT 10m  | 4.03  | 6.671223 ± 0.041427 | 0.439590 ± 0.008154 | 0.073077 ± 0.000529 | -0.940 ± 0.023 %  | 8.662 ± 0.064 %  |
| q3_K_M  | WT 10m  | 3.74  | 6.734255 ± 0.041838 | 0.502622 ± 0.008901 | 0.084358 ± 0.000588 | -1.198 ± 0.024 %  | 9.292 ± 0.065 %  |
| q3_K_L  | None    | 4.03  | 6.787876 ± 0.042104 | 0.556242 ± 0.009171 | 0.087176 ± 0.000614 | -1.532 ± 0.025 %  | 9.432 ± 0.067 %  |
| q3_K_M  | None    | 3.74  | 6.888498 ± 0.042669 | 0.656864 ± 0.010071 | 0.101913 ± 0.000677 | -1.990 ± 0.026 %  | 10.203 ± 0.068 % |
| iq3_M   | WT 10m  | 3.53  | 6.898327 ± 0.041643 | 0.666694 ± 0.009449 | 0.102534 ± 0.000663 | -3.178 ± 0.026 %  | 10.513 ± 0.066 % |
| iq3_S   | WT 10m  | 3.42  | 6.965501 ± 0.042406 | 0.733867 ± 0.010245 | 0.111278 ± 0.000710 | -3.066 ± 0.027 %  | 10.845 ± 0.068 % |
| iq3_XS  | WT 10m  | 3.28  | 7.163043 ± 0.043772 | 0.931409 ± 0.012084 | 0.138693 ± 0.000857 | -3.667 ± 0.031 %  | 12.148 ± 0.070 % |
| iq3_XXS | WT 10m  | 3.05  | 7.458436 ± 0.046404 | 1.226803 ± 0.015234 | 0.183625 ± 0.001042 | -3.918 ± 0.035 %  | 13.836 ± 0.074 % |
| q3_K_S  | WT 10m  | 3.41  | 7.602878 ± 0.046848 | 1.371244 ± 0.015688 | 0.199821 ± 0.001008 | -5.046 ± 0.037 %  | 14.980 ± 0.070 % |
| q3_K_S  | None    | 3.41  | 7.863786 ± 0.048885 | 1.632152 ± 0.017733 | 0.228217 ± 0.001079 | -5.604 ± 0.038 %  | 15.541 ± 0.070 % |
| iq2_M   | WT 10m  | 2.74  | 8.600799 ± 0.055124 | 2.369166 ± 0.025244 | 0.325989 ± 0.00160  | -6.463 ± 0.046 %  | 18.519 ± 0.080 % |
| q2_K    | WT 10k  | 2.96  | 8.652290 ± 0.055572 | 2.420657 ± 0.025587 | 0.331393 ± 0.001562 | -6.606 ± 0.046 %  | 18.790 ± 0.078 % |
| q2_K    | WT 100k | 2.96  | 8.641993 ± 0.055406 | 2.410359 ± 0.025495 | 0.331672 ± 0.001569 | -6.628 ± 0.047 %  | 18.856 ± 0.078 % |
| q2_K    | WT 10m  | 2.96  | 8.647825 ± 0.055610 | 2.416191 ± 0.025683 | 0.332223 ± 0.001572 | -6.500 ± 0.047 %  | 18.881 ± 0.078 % |
| q2_K    | WT 1m   | 2.96  | 8.674365 ± 0.055743 | 2.442732 ± 0.025843 | 0.335308 ± 0.001576 | -6.634 ± 0.047 %  | 19.009 ± 0.079 % |
| q2_K    | WT 1k   | 2.96  | 8.682605 ± 0.055916 | 2.450972 ± 0.026069 | 0.337093 ± 0.001596 | -6.596 ± 0.047 %  | 18.977 ± 0.079 % |
| q2_K_S  | WT 10m  | 2.96  | 9.323778 ± 0.061551 | 3.092145 ± 0.031914 | 0.403360 ± 0.001787 | -7.131 ± 0.049 %  | 20.050 ± 0.081 % |
| q2_K_S  | WT 1m   | 2.96  | 9.329321 ± 0.061378 | 3.097688 ± 0.031816 | 0.403590 ± 0.001797 | -7.289 ± 0.049 %  | 20.123 ± 0.081 % |
| q2_K_S  | WT 100k | 2.96  | 9.362973 ± 0.061740 | 3.131339 ± 0.032169 | 0.408367 ± 0.001802 | -7.198 ± 0.050 %  | 20.132 ± 0.081 % |
| q2_K_S  | WT 10k  | 2.96  | 9.376479 ± 0.062045 | 3.144846 ± 0.032464 | 0.408662 ± 0.001819 | -7.141 ± 0.050 %  | 20.120 ± 0.081 % |
| q2_K_S  | WT 1k   | 2.96  | 9.415200 ± 0.062475 | 3.183567 ± 0.032993 | 0.415865 ± 0.001846 | -7.153 ± 0.050 %  | 20.311 ± 0.082 % |
| iq2_S   | WT 10m  | 2.56  | 9.650781 ± 0.063209 | 3.419148 ± 0.034017 | 0.439197 ± 0.001976 | -8.319 ± 0.052 %  | 21.491 ± 0.083 % |
| q2_K    | None    | 2.96  | 9.751568 ± 0.063312 | 3.519934 ± 0.033863 | 0.445132 ± 0.001835 | -9.123 ± 0.051 %  | 21.421 ± 0.079 % |
| iq2_XS  | WT 10m  | 2.43  | 10.761424 ± 0.071056 | 4.529791 ± 0.042229 | 0.546290 ± 0.002133 | -10.576 ± 0.056 % | 23.872 ± 0.082 % |
| iq2_XXS | WT 10m  | 2.24  | 14.091782 ± 0.098396 | 7.860148 ± 0.070752 | 0.812022 ± 0.002741 | -14.363 ± 0.065 % | 28.576 ± 0.084 % |
| iq1_M   | WT 10m  | 2.01  | 25.493722 ± 0.177903 | 19.262089 ± 0.152396 | 1.393084 ± 0.003529 | -24.672 ± 0.077 % | 38.287 ± 0.084 % |
| iq1_S   | WT 1m   | 1.88  | 58.097760 ± 0.438604 | 51.866126 ± 0.416604 | 2.211278 ± 0.004688 | -32.471 ± 0.087 % | 46.418 ± 0.085 % |
| iq1_S   | WT 1k   | 1.88  | 58.267851 ± 0.446208 | 52.036218 ± 0.424373 | 2.214858 ± 0.004778 | -31.880 ± 0.089 % | 46.330 ± 0.086 % |
| iq1_S   | WT 100k | 1.88  | 58.581498 ± 0.453145 | 52.349864 ± 0.431360 | 2.220834 ± 0.004818 | -32.261 ± 0.089 % | 46.002 ± 0.086 % |
| iq1_S   | WT 10m  | 1.88  | 60.694593 ± 0.471290 | 54.462959 ± 0.449644 | 2.254554 ± 0.004868 | -31.973 ± 0.088 % | 46.271 ± 0.086 % |
| iq1_S   | WT 10k  | 1.88  | 63.221324 ± 0.493077 | 56.989691 ± 0.471423 | 2.293527 ± 0.004885 | -32.261 ± 0.089 % | 46.562 ± 0.086 % |

There seems to be no consistent improvement from using more Wikitext tokens for the importance matrix. Relative to the legacy quants, the K-quants score better on mean Δp than metrics such as KL divergence would suggest.

## LLaMA 2 vs. LLaMA 3 Quantization comparison

| Revision | f364eb6f |
|:---------|:---------|
| Backend  | CUDA |
| CPU      | AMD Epyc 7742 |
| GPU      | 1x NVIDIA RTX 4090 |

| Metric | L2 7b q2_K | L3 8b q2_K | L2 7b q4_K_M | L3 8b q4_K_M | L2 7b q6_K | L3 8b q6_K | L2 7b q8_0 | L3 8b q8_0 |
|--------|------------|------------|--------------|--------------|------------|------------|------------|------------|
| Mean PPL | 5.794552 ± 0.032298 | 9.751568 ± 0.063312 | 5.877078 ± 0.032781 | 6.407115 ± 0.039119 | 5.808494 ± 0.032425 | 6.253382 ± 0.038078 | 5.798542 ± 0.032366 | 6.234284 ± 0.037878 |
| Mean PPL ratio | 1.107955 ± 0.001427 | 1.564849 ± 0.004525 | 1.014242 ± 0.000432 | 1.028160 ± 0.000723 | 1.002406 ± 0.000191 | 1.003490 ± 0.000296 | 1.000689 ± 0.000107 | 1.000425 ± 0.000161 |
| Mean ΔPPL | 0.625552 ± 0.008725 | 3.519934 ± 0.033863 | 0.082526 ± 0.002530 | 0.175482 ± 0.004620 | 0.013941 ± 0.001110 | 0.021748 ± 0.001852 | 0.003990 ± 0.000624 | 0.002650 ± 0.001006 |
| PPL correlation | 97.36% | 89.62% | 99.71% | 99.34% | 99.94% | 99.88% | 99.98% | 99.96% |
| Mean KLD | 0.108903 ± 0.000645 | 0.445132 ± 0.001835 | 0.012686 ± 0.000079 | 0.031273 ± 0.000238 | 0.002098 ± 0.000014 | 0.005452 ± 0.000035 | 0.000369 ± 0.000007 | 0.001355 ± 0.000006 |
| Mean Δp | -2.710 ± 0.023 % | -9.123 ± 0.051 % | -0.416 ± 0.008 % | -0.596 ± 0.014 % | -0.035 ± 0.003 % | -0.007 ± 0.006 % | -0.005 ± 0.002 % | -0.019 ± 0.003 % |
| Maximum Δp | 85.136% | 94.268% | 45.209% | 95.054% | 23.593% | 53.601% | 43.925% | 28.734% |
| 99.9% Δp | 37.184% | 50.003% | 17.461% | 27.084% | 7.798% | 13.613% | 3.387% | 6.402% |
| 99.0% Δp | 18.131% | 25.875% | 7.798% | 12.084% | 3.838% | 6.407% | 1.867% | 3.544% |
| Median Δp | -0.391% | -2.476% | -0.026% | -0.024% | -0.001% | 0.000% | -0.000% | -0.000% |
| 1.0% Δp | -39.762% | -87.173% | -11.433% | -19.567% | -4.222% | -6.767% | -1.862% | -3.698% |
| 0.1% Δp | -79.002% | -98.897% | -26.433% | -56.054% | -9.091% | -16.584% | -3.252% | -6.579% |
| Minimum Δp | -99.915% | -99.965% | -83.383% | -98.699% | -43.142% | -68.487% | -9.343% | -24.301% |
| RMS Δp | 9.762 ± 0.053 % | 21.421 ± 0.079 % | 3.252 ± 0.024 % | 5.519 ± 0.050 % | 1.339 ± 0.010 % | 2.295 ± 0.019 % | 0.618 ± 0.011 % | 1.198 ± 0.007 % |
| Same top p | 85.584 ± 0.086 % | 71.138 ± 0.119 % | 94.665 ± 0.055 % | 91.901 ± 0.072 % | 97.520 ± 0.038 % | 96.031 ± 0.051 % | 98.846 ± 0.026 % | 97.674 ± 0.040 % |

## LLaMA 3 BF16 vs. FP16 comparison

| Revision | 83330d8c |
|:---------|:---------|
| Backend  | CPU |
| CPU      | AMD Epyc 7742 |
| GPU      | N/A |

Results were calculated with LLaMA 3 8b BF16 as --kl-divergence-base and LLaMA 3 8b FP16 as the --model for comparison.
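The comparison reuses the KL divergence machinery described above; a sketch of the two runs (model paths illustrative):

```sh
# Record BF16 logits as the baseline:
./perplexity -m ggml-model-bf16.gguf -f wiki.test.raw \
    --kl-divergence-base logits-bf16.kld

# Evaluate the FP16 model against the recorded baseline:
./perplexity -m ggml-model-f16.gguf -f wiki.test.raw \
    --kl-divergence-base logits-bf16.kld --kl-divergence
```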

| Metric | Value |
|--------|-------|
| Mean PPL(Q) | 6.227711 ± 0.037833 |
| Mean PPL(base) | 6.225194 ± 0.037771 |
| Cor(ln(PPL(Q)), ln(PPL(base))) | 99.990% |
| Mean ln(PPL(Q)/PPL(base)) | 0.000404 ± 0.000086 |
| Mean PPL(Q)/PPL(base) | 1.000404 ± 0.000086 |
| Mean PPL(Q)-PPL(base) | 0.002517 ± 0.000536 |
| Mean KLD | 0.00002515 ± 0.00000020 |
| Maximum KLD | 0.012206 |
| 99.9% KLD | 0.000799 |
| 99.0% KLD | 0.000222 |
| Median KLD | 0.000013 |
| 10.0% KLD | -0.000002 |
| 5.0% KLD | -0.000008 |
| 1.0% KLD | -0.000023 |
| Minimum KLD | -0.000059 |
| Mean Δp | -0.0000745 ± 0.0003952 % |
| Maximum Δp | 4.186% |
| 99.9% Δp | 1.049% |
| 99.0% Δp | 0.439% |
| 95.0% Δp | 0.207% |
| 90.0% Δp | 0.125% |
| 75.0% Δp | 0.029% |
| Median Δp | 0.000% |
| 25.0% Δp | -0.030% |
| 10.0% Δp | -0.126% |
| 5.0% Δp | -0.207% |
| 1.0% Δp | -0.434% |
| 0.1% Δp | -1.016% |
| Minimum Δp | -4.672% |
| RMS Δp | 0.150 ± 0.001 % |
| Same top p | 99.739 ± 0.013 % |

## Old Numbers

<details>
<summary>Llama 2 70B Scoreboard</summary>

| Quantization | Model size (GiB) | Perplexity | Delta to fp16 |
|--------------|-----------------:|-----------:|--------------:|
| Q4_0   | 36.20 | 3.5550 | 3.61% |
| Q4_1   | 40.20 | 3.5125 | 2.37% |
| Q5_0   | 44.20 | 3.4744 | 1.26% |
| Q2_K   | 27.27 | 3.7339 | 8.82% |
| Q3_K_S | 27.86 | 3.7019 | 7.89% |
| Q3_K_M | 30.83 | 3.5932 | 4.72% |
| Q3_K_L | 33.67 | 3.5617 | 3.80% |
| Q4_K_S | 36.39 | 3.4852 | 1.57% |
| Q4_K_M | 38.54 | 3.4725 | 1.20% |
| Q5_K_S | 44.20 | 3.4483 | 0.50% |
| Q5_K_M | 45.41 | 3.4451 | 0.40% |
| Q6_K   | 52.70 | 3.4367 | 0.16% |
| fp16   | 128.5 | 3.4313 | -     |

</details>