website/docs/usage/v2-1.mdx
spaCy v2.1 has focused primarily on stability and performance, solidifying the design changes introduced in v2.0. As well as smaller models, faster runtime and many bug fixes, v2.1 also introduces experimental support for some exciting new NLP innovations. For the full changelog, see the release notes on GitHub. For more details and a behind-the-scenes look at the new release, see our blog post.
Example
```bash
$ python -m spacy pretrain ./raw_text.jsonl en_core_web_lg ./pretrained-model
```
spaCy v2.1 introduces a new CLI command, `spacy pretrain`, that can make your
models much more accurate. It's especially useful when you have limited
training data. The `spacy pretrain` command lets you use transfer learning to
initialize your models with information from raw text, using a language model
objective similar to the one used in Google's BERT system. We've taken
particular care to ensure that pretraining works well even with spaCy's small
default architecture sizes, so you don't have to compromise on efficiency to use
it.
**API:** `spacy pretrain` **Usage:** Improving accuracy with transfer learning
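Once pretraining has finished, you can load the learned weights back in when training on your labelled data. As a minimal sketch, assuming pretraining wrote an epoch weights file such as `model999.bin` to the output directory (the paths here are placeholders), you could pass it to `spacy train` via the `--init-tok2vec` argument:

```bash
# Assumes ./pretrained-model contains epoch weights like model999.bin
$ python -m spacy train en ./output ./train.json ./dev.json --init-tok2vec ./pretrained-model/model999.bin
```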
Example
```python
# Matches "love cats" or "likes flowers"
pattern1 = [{"LEMMA": {"IN": ["like", "love"]}}, {"POS": "NOUN"}]
# Matches tokens of length >= 10
pattern2 = [{"LENGTH": {">=": 10}}]
# Matches custom attribute with regex
pattern3 = [{"_": {"country": {"REGEX": "^[Uu](\\.?|nited) ?[Ss](\\.?|tates)$"}}}]
```
Instead of mapping to a single value, token patterns can now also map to a
dictionary of properties. For example, a pattern can specify that the value of
a lemma should be part of a list of values, or set a minimum character length.
The matcher now also supports a `REGEX` property, as well as set membership via
`IN` and `NOT_IN`, custom extension attributes via `_` and rich comparison for
numeric values.
**API:** `Matcher` **Usage:** Extended pattern syntax and attributes, Regular expressions
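To use such patterns, you add them to a `Matcher` as usual. A minimal sketch using `pattern1` from the example above, assuming an installed `en_core_web_sm` model:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
# v2.x signature: add(key, on_match callback, *patterns)
pattern1 = [{"LEMMA": {"IN": ["like", "love"]}}, {"POS": "NOUN"}]
matcher.add("LIKE_NOUN", None, pattern1)

doc = nlp("I love cats")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # "love cats"
```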
Example
```python
from spacy.pipeline import EntityRuler

ruler = EntityRuler(nlp)
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
nlp.add_pipe(ruler, before="ner")
```
The `EntityRuler` is an exciting new component that lets you add named entities
based on pattern dictionaries, and makes it easy to combine rule-based and
statistical named entity recognition for even more powerful models. Entity rules
can be phrase patterns for exact string matches, or token patterns for full
flexibility.
**API:** `EntityRuler` **Usage:** Rule-based entity recognition
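As a sketch that mixes both pattern types, building on the `ruler` from the example above:

```python
patterns = [
    # Phrase pattern: exact string match
    {"label": "ORG", "pattern": "Apple"},
    # Token pattern: full Matcher-style flexibility
    {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}]},
]
ruler.add_patterns(patterns)
doc = nlp("Apple is opening its first big office in San Francisco.")
print([(ent.text, ent.label_) for ent in doc.ents])
# e.g. [('Apple', 'ORG'), ('San Francisco', 'GPE')]
```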
Example
```python
from spacy.matcher import PhraseMatcher

matcher = PhraseMatcher(nlp.vocab, attr="POS")
matcher.add("PATTERN", None, nlp("I love cats"))
doc = nlp("You like dogs")
matches = matcher(doc)
```
By default, the `PhraseMatcher` will match on the verbatim token text, e.g.
`Token.text`. By setting the `attr` argument on initialization, you can change
which token attribute the matcher should use when comparing the phrase
pattern to the matched `Doc`. For example, `LOWER` for case-insensitive matches
or `POS` for finding sequences of the same part-of-speech tags.
**API:** `PhraseMatcher` **Usage:** Matching on other token attributes
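A quick sketch of case-insensitive matching with the same API, reusing the loaded `nlp` object:

```python
from spacy.matcher import PhraseMatcher

# Compare on the LOWER attribute instead of verbatim text
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("NAMES", None, nlp("angela merkel"))
doc = nlp("German Chancellor Angela Merkel")
print([doc[start:end].text for _, start, end in matcher(doc)])
# ['Angela Merkel']
```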
Example
```python
doc = nlp("I like David Bowie")
with doc.retokenize() as retokenizer:
    attrs = {"LEMMA": "David Bowie"}
    retokenizer.merge(doc[2:4], attrs=attrs)
```
The new `Doc.retokenize` context manager allows merging spans of multiple tokens
into a single token, and splitting single tokens into multiple tokens.
Modifications to the `Doc`'s tokenization are stored, and then made all at once
when the context manager exits. This is much more efficient and less
error-prone. `Doc.merge` and `Span.merge` still work, but they're considered
deprecated.
**API:** `Doc.retokenize`, `Retokenizer.merge`, `Retokenizer.split` **Usage:** Merging and splitting
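Splitting works the other way around. A sketch that splits one token in two, using the `heads` argument to attach the new subtokens:

```python
doc = nlp("I live in NewYork")
with doc.retokenize() as retokenizer:
    # heads: "New" attaches to the second new subtoken ("York"),
    # "York" attaches to the existing token "in"
    heads = [(doc[3], 1), doc[2]]
    retokenizer.split(doc[3], ["New", "York"], heads=heads)
```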
Example
```python
from setuptools import setup

setup(
    name="custom_extension_package",
    entry_points={
        "spacy_factories": ["your_component = component:ComponentFactory"],
        "spacy_languages": ["xyz = language:XYZLanguage"],
    },
)
```
Using entry points, model packages and extension packages can now define their
own "spacy_factories" and "spacy_languages", which will be added to the
built-in factories and languages. If a package in the same environment exposes
spaCy entry points, all of this happens automatically and no further user action
is required.
**Usage:** Using entry points
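To illustrate, here's a hypothetical `component.py` module that the `"spacy_factories"` entry point above could reference; everything in it, names included, is just a sketch:

```python
# component.py (hypothetical module matching the entry point above)
class CustomComponent(object):
    name = "your_component"

    def __call__(self, doc):
        # Add custom annotations to the Doc here
        return doc

def ComponentFactory(nlp, **cfg):
    # Entry point target: receives the nlp object, returns a component
    return CustomComponent()
```

With the package installed in the same environment, the component should then be available via `nlp.create_pipe("your_component")`, without any imports on the user's side.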
Although it looks pretty much the same, we've rebuilt the entire documentation using Gatsby and MDX. It's now an even faster progressive web app and allows us to write all content entirely in Markdown, without having to compromise on easy-to-use custom UI components. We're hoping that the Markdown source will make it even easier to contribute to the documentation. For more details, check out the styleguide and source. While converting the pages to Markdown, we've also fixed a bunch of typos, improved the existing pages and added some new content:
- **Rule-based matching:** Learn how to use the `Matcher`, `PhraseMatcher` and the new `EntityRuler`, and write powerful components to combine statistical models and rules.
- **Merging and splitting:** Modify a `Doc` using the new `retokenize` context manager, merge spans into single tokens and split single tokens into multiple.
- New API docs for the `EntityRuler` and `Sentencizer`.

If you've been training your own models, you'll need to retrain them
with the new version. Also don't forget to upgrade all models to the latest
versions. Models for v2.0.x aren't compatible with models for v2.1.x. To check
if all of your models are up to date, you can run the
`spacy validate` command.
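For example:

```bash
$ python -m spacy validate
```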
Due to difficulties linking our new `blis` library for faster
platform-independent matrix multiplication, this release currently doesn't
work on Python 2.7 on Windows. We expect this to be corrected in the future.
While the `Matcher` API is fully backwards compatible, its
algorithm has changed to fix a number of bugs and performance issues. This
means that the `Matcher` in v2.1.x may produce different results compared to
the `Matcher` in v2.0.x.
The deprecated `Doc.merge` and
`Span.merge` methods still work, but you may notice that
they now run slower when merging many objects in a row. That's because the
merging engine was rewritten to be more reliable and to support more efficient
merging in bulk. To take advantage of this, you should rewrite your logic
to use the `Doc.retokenize` context manager and perform
as many merges as possible together in the `with` block.
```diff
- doc[1:5].merge()
- doc[6:8].merge()
+ with doc.retokenize() as retokenizer:
+     retokenizer.merge(doc[1:5])
+     retokenizer.merge(doc[6:8])
```
The serialization methods `to_disk`, `from_disk`, `to_bytes` and `from_bytes`
now support a single `exclude` argument to provide a list of string names to
exclude. The docs have been updated to list the available serialization fields
for each class. The `disable` argument on the `Language`
serialization methods has been renamed to `exclude` for consistency.
```diff
- nlp.to_disk("/path", disable=["parser", "ner"])
+ nlp.to_disk("/path", exclude=["parser", "ner"])
- data = nlp.tokenizer.to_bytes(vocab=False)
+ data = nlp.tokenizer.to_bytes(exclude=["vocab"])
```
The `.pos` value for several common English words has changed, due to corrections to long-standing mistakes in the English tag map (see issue #593 and issue #3311 for details).
For better compatibility with the Universal Dependencies data, the lemmatizer now preserves capitalization, e.g. for proper nouns. See issue #3256 for details.
The built-in rule-based sentence boundary detector is now only called
"sentencizer" – the name "sbd" is deprecated.
```diff
- sentence_splitter = nlp.create_pipe("sbd")
+ sentence_splitter = nlp.create_pipe("sentencizer")
```
The `is_sent_start` attribute of the first token in a `Doc` now correctly
defaults to `True`. It previously defaulted to `None`.
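A quick check, assuming any loaded pipeline:

```python
doc = nlp("Hello world")
assert doc[0].is_sent_start is True  # was None in v2.0.x
```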
The keyword argument `n_threads` on the `.pipe` methods is now deprecated, as
the v2.x models cannot release the global interpreter lock. (Future versions
may introduce an `n_process` argument for parallel inference via
multiprocessing.)
The `Doc.print_tree` method is now deprecated. If you need a custom nested
JSON representation of a `Doc` object, you might want to write your own helper
function. For a simple and consistent JSON representation of the `Doc` object
and its annotations, you can now use the `Doc.to_json`
method. Going forward, this method will output the same format as the JSON
training data expected by `spacy train`.
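As a minimal sketch:

```python
doc = nlp("Look up spaCy on GitHub.")
json_doc = doc.to_json()
# Returns a dict of the Doc's text and annotations, in the same
# format as the JSON training data used by spacy train
```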
The `spacy train` command now lets you specify a
comma-separated list of pipeline component names, instead of separate flags
like `--no-parser` to disable components. This is more flexible and also
handles custom components out of the box.
```diff
- $ spacy train en /output train_data.json dev_data.json --no-parser
+ $ spacy train en /output train_data.json dev_data.json --pipeline tagger,ner
```
The `spacy init-model` command now uses a `--jsonl-loc`
argument to pass in a newline-delimited JSON (JSONL) file containing one
lexical entry per line, instead of the separate `--freqs-loc` and
`--clusters-loc` arguments.
```diff
- $ spacy init-model en ./model --freqs-loc ./freqs.txt --clusters-loc ./clusters.txt
+ $ spacy init-model en ./model --jsonl-loc ./vocab.jsonl
```
Also note that some of the model licenses have changed:
`it_core_news_sm` is now correctly licensed
under CC BY-NC-SA 3.0, and all English and German
models are now published under the MIT license.