website/docs/api/scorer.mdx
The Scorer computes evaluation scores. It's typically created by
Language.evaluate. In addition, the Scorer
provides a number of evaluation methods for evaluating Token and
Doc attributes.
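As a minimal sketch of the typical entry point (the example text, annotations and score key are illustrative; the pipeline name is just the small English model), `Language.evaluate` constructs and applies a `Scorer` for you:

```python
import spacy
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")

# Example objects pair a predicted Doc with gold-standard annotations
examples = [
    Example.from_dict(
        nlp.make_doc("Apple is looking at buying a startup."),
        {"entities": [(0, 5, "ORG")]},
    ),
]

# Language.evaluate creates a Scorer internally and returns its scores
scores = nlp.evaluate(examples)
print(scores["ents_f"])
```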
Create a new Scorer.
Example
```python
import spacy
from spacy.scorer import Scorer

# Default scoring pipeline
scorer = Scorer()

# Provided scoring pipeline
nlp = spacy.load("en_core_web_sm")
scorer = Scorer(nlp)
```
| Name | Description |
|---|---|
| `nlp` | The pipeline to use for scoring, where each pipeline component may provide a scoring method. If none is provided, then a default pipeline is constructed using the `default_lang` and `default_pipeline` settings. |
| `default_lang` | The language to use for a default pipeline if `nlp` is not provided. Defaults to `xx`. |
| `default_pipeline` | The pipeline components to use for a default pipeline if `nlp` is not provided. Defaults to `("senter", "tagger", "morphologizer", "parser", "ner", "textcat")`. |
| _keyword-only_ | |
| `**kwargs` | Any additional settings to pass on to the individual scoring methods. |
Calculate the scores for a list of Example objects using the
scoring methods provided by the components in the pipeline.
The returned Dict contains the scores provided by the individual pipeline
components. For the scoring methods provided by the Scorer and used by the
core pipeline components, the individual score names start with the Token or
Doc attribute being scored:
- `token_acc`, `token_p`, `token_r`, `token_f`
- `sents_p`, `sents_r`, `sents_f`
- `tag_acc`
- `pos_acc`
- `morph_acc`, `morph_micro_p`, `morph_micro_r`, `morph_micro_f`, `morph_per_feat`
- `lemma_acc`
- `dep_uas`, `dep_las`, `dep_las_per_type`
- `ents_p`, `ents_r`, `ents_f`, `ents_per_type`
- `spans_sc_p`, `spans_sc_r`, `spans_sc_f`
- `cats_score` (depends on config, description provided in `cats_score_desc`), `cats_micro_p`, `cats_micro_r`, `cats_micro_f`, `cats_macro_p`, `cats_macro_r`, `cats_macro_f`, `cats_macro_auc`, `cats_f_per_type`, `cats_auc_per_type`

Example
```python
scorer = Scorer()
scores = scorer.score(examples)
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| _keyword-only_ | |
| `per_component` <Tag variant="new">3.6</Tag> | Whether to return the scores keyed by component name. Defaults to `False`. |
| RETURNS | A dictionary of scores. |
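As a hedged sketch of how the inputs are typically assembled and of the `per_component` flag (the text, tag values and model name are illustrative assumptions), `Example` objects pair a predicted `Doc` with reference annotations:

```python
import spacy
from spacy.scorer import Scorer
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")

# Build Example objects pairing predicted docs with reference annotations
examples = [
    Example.from_dict(nlp("I like cats."), {"tags": ["PRP", "VBP", "NNS", "."]}),
]

scorer = Scorer(nlp)
# Flat score names, e.g. scores["tag_acc"]
scores = scorer.score(examples)
# Scores keyed by component name instead (requires spaCy v3.6+)
scores_by_component = scorer.score(examples, per_component=True)
```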
Scores the tokenization:
- `token_acc`: number of correct tokens / number of predicted tokens
- `token_p`, `token_r`, `token_f`: precision, recall and F-score for token character spans

Docs with `has_unknown_spaces` are skipped during scoring.
Example
```python
scores = Scorer.score_tokenization(examples)
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| RETURNS | A dictionary containing the scores `token_acc`, `token_p`, `token_r`, `token_f`. |
Scores a single token attribute. Tokens with missing values in the reference doc are skipped during scoring.
Example
```python
scores = Scorer.score_token_attr(examples, "pos")
print(scores["pos_acc"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| `attr` | The attribute to score. |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(token, attr)` should return the value of the attribute for an individual `Token`. |
| `missing_values` | Attribute values to treat as missing annotation in the reference annotation. Defaults to `{0, None, ""}`. |
| RETURNS | A dictionary containing the score {attr}_acc. |
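A hedged sketch of a custom `getter`, here comparing lemmas case-insensitively; the `lower_lemma` helper is purely illustrative and `examples` is assumed to be available as above:

```python
from spacy.scorer import Scorer

def lower_lemma(token, attr):
    # Resolve the lemma hash to its string form and lowercase it
    return token.vocab.strings.as_string(getattr(token, attr)).lower()

scores = Scorer.score_token_attr(examples, "lemma", getter=lower_lemma)
print(scores["lemma_acc"])
```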
Scores a single token attribute per feature for a token attribute in the Universal Dependencies FEATS format. Tokens with missing values in the reference doc are skipped during scoring.
Example
```python
scores = Scorer.score_token_attr_per_feat(examples, "morph")
print(scores["morph_per_feat"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| `attr` | The attribute to score. |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(token, attr)` should return the value of the attribute for an individual `Token`. |
| `missing_values` | Attribute values to treat as missing annotation in the reference annotation. Defaults to `{0, None, ""}`. |
| RETURNS | A dictionary containing the micro PRF scores under the key {attr}_micro_p/r/f and the per-feature PRF scores under {attr}_per_feat. |
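A brief sketch of reading the returned structure; the feature names used here (`Number`, `Tense`) are assumptions based on typical UD FEATS, and `examples` is assumed as above:

```python
from spacy.scorer import Scorer

scores = Scorer.score_token_attr_per_feat(examples, "morph")
# Micro-averaged PRF across all features
print(scores["morph_micro_f"])
# Per-feature breakdown keyed by feature name, e.g. "Number" or "Tense"
print(scores["morph_per_feat"].get("Number"))
```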
Returns PRF scores for labeled or unlabeled spans.
Example
```python
scores = Scorer.score_spans(examples, "ents")
print(scores["ents_f"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| `attr` | The attribute to score. |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(doc, attr)` should return the `Span` objects for an individual `Doc`. |
| `has_annotation` | Defaults to `None`. If provided, `has_annotation(doc)` should return whether a `Doc` has annotation for this `attr`. Docs without annotation are skipped for scoring purposes. |
| `labeled` | Defaults to `True`. If set to `False`, two spans will be considered equal if their start and end match, irrespective of their label. |
| `allow_overlap` | Defaults to `False`. Whether or not to allow overlapping spans. If set to `False`, the alignment will automatically resolve conflicts. |
| RETURNS | A dictionary containing the PRF scores under the keys {attr}_p, {attr}_r, {attr}_f and the per-type PRF scores under {attr}_per_type. |
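A sketch of scoring spans stored in a named span group rather than a `Doc` attribute, using a custom `getter`; the `"sc"` span group key and the `get_spans` helper are assumptions for illustration, and `examples` is assumed as above:

```python
from spacy.scorer import Scorer

def get_spans(doc, attr):
    # Read spans from doc.spans["sc"] instead of a Doc attribute
    return doc.spans.get("sc", [])

scores = Scorer.score_spans(examples, "spans_sc", getter=get_spans, allow_overlap=True)
print(scores["spans_sc_f"], scores["spans_sc_per_type"])
```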
Calculate the UAS, LAS, and LAS per type scores for dependency parses. Tokens
with missing values for the attr (typically dep) are skipped during scoring.
Example
```python
def dep_getter(token, attr):
    dep = getattr(token, attr)
    dep = token.vocab.strings.as_string(dep).lower()
    return dep

scores = Scorer.score_deps(
    examples,
    "dep",
    getter=dep_getter,
    ignore_labels=("p", "punct")
)
print(scores["dep_uas"], scores["dep_las"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| `attr` | The attribute to score. |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(token, attr)` should return the value of the attribute for an individual `Token`. |
| `head_attr` | The attribute containing the head token. |
| `head_getter` | Defaults to `getattr`. If provided, `head_getter(token, attr)` should return the head for an individual `Token`. |
| `ignore_labels` | Labels to ignore while scoring (e.g. `"punct"`). |
| `missing_values` | Attribute values to treat as missing annotation in the reference annotation. Defaults to `{0, None, ""}`. |
| RETURNS | A dictionary containing the scores: {attr}_uas, {attr}_las, and {attr}_las_per_type. |
Calculate PRF and ROC AUC scores for a doc-level attribute that is a dict
containing scores for each label like Doc.cats. The returned dictionary
contains the following scores:
- `{attr}_micro_p`, `{attr}_micro_r` and `{attr}_micro_f`: each instance across each label is weighted equally
- `{attr}_macro_p`, `{attr}_macro_r` and `{attr}_macro_f`: the average values across evaluations per label
- `{attr}_f_per_type` and `{attr}_auc_per_type`: each contains a dictionary of scores, keyed by label
- `{attr}_score` and corresponding `{attr}_score_desc` (text description)

The reported `{attr}_score` depends on the classification properties:

- binary exclusive with positive label: `{attr}_score` is set to the F-score of the positive label
- 3+ exclusive classes, macro-averaged F-score: `{attr}_score = {attr}_macro_f`
- multilabel, macro-averaged AUC: `{attr}_score = {attr}_macro_auc`

Example
```python
labels = ["LABEL_A", "LABEL_B", "LABEL_C"]
scores = Scorer.score_cats(
    examples,
    "cats",
    labels=labels
)
print(scores["cats_macro_auc"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| `attr` | The attribute to score. |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(doc, attr)` should return the cats for an individual `Doc`. |
| `labels` | The set of possible labels. Defaults to `[]`. |
| `multi_label` | Whether the attribute allows multiple labels. Defaults to `True`. When set to `False` (exclusive labels), missing gold labels are interpreted as `0.0` and the threshold is set to `0.0`. |
| `positive_label` | The positive label for a binary task with exclusive classes. Defaults to `None`. |
| `threshold` | Cutoff to consider a prediction "positive". Defaults to `0.5` for multi-label, and `0.0` (i.e. whatever's highest scoring) otherwise. |
| RETURNS | A dictionary containing the scores, with inapplicable scores as None. |
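As a hedged sketch of the binary exclusive case described above (the label names are illustrative and `examples` is assumed as above), where the reported `cats_score` is the F-score of the positive label:

```python
from spacy.scorer import Scorer

scores = Scorer.score_cats(
    examples,
    "cats",
    labels=["POSITIVE", "NEGATIVE"],
    multi_label=False,
    positive_label="POSITIVE",
)
# cats_score_desc describes which metric cats_score refers to
print(scores["cats_score"], scores["cats_score_desc"])
```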
Returns PRF for predicted links on the entity level. To disentangle the performance of the NEL from the NER, this method only evaluates NEL links for entities that overlap between the gold reference and the predictions.
Example
```python
scores = Scorer.score_links(
    examples,
    negative_labels=["NIL", ""]
)
print(scores["nel_micro_f"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| _keyword-only_ | |
| `negative_labels` | The string values that refer to no annotation (e.g. `"NIL"`). |
| RETURNS | A dictionary containing the scores. |
Compute micro-PRF and per-entity PRF scores.
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
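Unlike the methods above, no example block is shown for this function on this page; a minimal hedged sketch, assuming the function is importable from `spacy.scorer` and that `examples` is available as above:

```python
from spacy.scorer import get_ner_prf

# Micro-averaged PRF over all entities plus a per-type breakdown
scores = get_ner_prf(examples)
print(scores["ents_f"], scores["ents_per_type"])
```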
Returns LEA (Moosavi and Strube, 2016) PRF scores for coreference clusters.
<Infobox title="Important note" variant="warning">

Note this scoring function is not yet included in spaCy core - for details, see the CoreferenceResolver docs.

</Infobox>

Example

```python
scores = score_coref_clusters(
    examples,
    span_cluster_prefix="coref_clusters",
)
print(scores["coref_f"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| _keyword-only_ | |
| `span_cluster_prefix` | The prefix used for spans representing coreference clusters. |
| RETURNS | A dictionary containing the scores. |
Return accuracy for reconstructions of spans from single tokens. Only exactly correct predictions are counted as correct; there is no partial credit for near answers. Used by the SpanResolver.
<Infobox title="Important note" variant="warning">

Note this scoring function is not yet included in spaCy core - for details, see the SpanResolver docs.

</Infobox>

Example

```python
scores = score_span_predictions(
    examples,
    output_prefix="coref_clusters",
)
print(scores["span_coref_clusters_accuracy"])
```
| Name | Description |
|---|---|
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. |
| _keyword-only_ | |
| `output_prefix` | The prefix used for spans representing the final predicted spans. |
| RETURNS | A dictionary containing the scores. |