Benchmarks Models

website/docs/usage/_benchmarks-models.mdx

<figure>
| Pipeline                       | Parser | Tagger |  NER |
| ------------------------------ | -----: | -----: | ---: |
| `en_core_web_trf` (spaCy v3)   |   95.1 |   97.8 | 89.8 |
| `en_core_web_lg` (spaCy v3)    |   92.0 |   97.4 | 85.5 |
| `en_core_web_lg` (spaCy v2)    |   91.9 |   97.2 | 85.5 |
<figcaption className="caption">

Full pipeline accuracy on the OntoNotes 5.0 corpus (reported on the development set).

</figcaption>
</figure>

<figure>
| Named Entity Recognition System  | OntoNotes | CoNLL '03 |
| -------------------------------- | --------: | --------: |
| spaCy RoBERTa (2020)             |      89.8 |      91.6 |
| Stanza (StanfordNLP)<sup>1</sup> |      88.8 |      92.1 |
| Flair<sup>2</sup>                |      89.7 |      93.1 |
<figcaption className="caption">

Named entity recognition accuracy on the OntoNotes 5.0 and CoNLL-2003 corpora. See NLP-progress for more results. Project template: benchmarks/ner_conll03. 1. Qi et al. (2020). 2. Akbik et al. (2018).

</figcaption>
</figure>
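The NER columns above report entity-level F-score: a predicted entity counts as correct only when both its boundaries and its label exactly match a gold entity. As a minimal, self-contained sketch of that metric (the spans below are hypothetical examples, and this is a simplified stand-in for spaCy's actual `Scorer`, not its implementation):

```python
def entity_f1(gold, predicted):
    """Precision/recall/F1 over exact-match entity spans.

    Each span is a (start, end, label) tuple; a prediction is a true
    positive only if boundaries and label both match a gold span.
    """
    gold_set, pred_set = set(gold), set(predicted)
    true_pos = len(gold_set & pred_set)
    precision = true_pos / len(pred_set) if pred_set else 0.0
    recall = true_pos / len(gold_set) if gold_set else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical document: 3 gold entities, 3 predictions, 2 exact matches
# (the third prediction has the right label but wrong end boundary).
gold = [(0, 2, "ORG"), (5, 6, "PERSON"), (9, 11, "GPE")]
pred = [(0, 2, "ORG"), (5, 6, "PERSON"), (9, 10, "GPE")]
print(round(entity_f1(gold, pred), 3))  # precision = recall = 2/3 -> 0.667
```

The published scores are computed the same way in spirit, but over full corpora and with the respective projects' official evaluation tooling.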