Command Line Interface

website/docs/api/cli.mdx

spaCy's CLI provides a range of helpful commands for downloading and training pipelines, converting data and debugging your config, data and installation. For a list of available commands, you can type python -m spacy --help. You can also add the --help flag to any command or subcommand to see the description, available arguments and usage.

download {id="download",tag="command"}

Download trained pipelines for spaCy. The downloader finds the best-matching compatible version and uses pip install to download the Python package. Direct downloads don't perform any compatibility checks and require the pipeline name to be specified with its version (e.g. en_core_web_sm-3.0.0).

Downloading best practices

The download command is mostly intended as a convenient, interactive wrapper – it performs compatibility checks and prints detailed messages in case things go wrong. It's not recommended to use this command as part of an automated process. If you know which package your project needs, you should consider a direct download via pip, or uploading the package to a local PyPi installation and fetching it straight from there. This will also allow you to add it as a versioned package dependency to your project.
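For example, a pinned pipeline package can be declared as a direct URL requirement. This is a sketch: the package version and release URL below are illustrative and must match a release that is compatible with your spaCy version.

```text
# requirements.txt
spacy>=3.7.0,<4.0.0
en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl
```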

```bash
$ python -m spacy download [model] [--direct] [--sdist] [pip_args] [--url url]
```

| Name | Description |
| --- | --- |
| `model` | Pipeline package name, e.g. `en_core_web_sm`. `str` (positional) |
| `--direct`, `-D` | Force direct download of exact package version. `bool` (flag) |
| `--sdist`, `-S` <Tag variant="new">3</Tag> | Download the source package (`.tar.gz` archive) instead of the default pre-built binary wheel. `bool` (flag) |
| `--url`, `-U` | Download from a mirror repository at the given URL. `str` (option) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| `pip args` | Additional installation options to be passed to `pip install` when installing the pipeline package. For example, `--user` to install to the user home directory or `--no-deps` to not install package dependencies. `Any` (option/flag) |
| **CREATES** | The installed pipeline package in your `site-packages` directory. |

info {id="info",tag="command"}

Print information about your spaCy installation, trained pipelines and local setup, and generate Markdown-formatted markup to copy-paste into GitHub issues.

```bash
$ python -m spacy info [--markdown] [--silent] [--exclude]
```

Example

```bash
$ python -m spacy info en_core_web_lg --markdown
```

```bash
$ python -m spacy info [model] [--markdown] [--silent] [--exclude]
```

| Name | Description |
| --- | --- |
| `model` | A trained pipeline, i.e. package name or path (optional). `Optional[str]` (option) |
| `--markdown`, `-md` | Print information as Markdown. `bool` (flag) |
| `--silent`, `-s` | Don't print anything, just return the values. `bool` (flag) |
| `--exclude`, `-e` | Comma-separated keys to exclude from the print-out. Defaults to `"labels"`. `Optional[str]` |
| `--url`, `-u` <Tag variant="new">3.5.0</Tag> | Print the URL to download the most recent compatible version of the pipeline. Requires a pipeline name. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **PRINTS** | Information about your spaCy installation. |

validate {id="validate",version="2",tag="command"}

Find all trained pipeline packages installed in the current environment and check whether they are compatible with the currently installed version of spaCy. Should be run after upgrading spaCy via pip install -U spacy to ensure that all installed packages can be used with the new version. It will show a list of packages and their installed versions. If any package is out of date, the latest compatible versions and command for updating are shown.

Automated validation

You can also use the validate command as part of your build process or test suite, to ensure all packages are up to date before proceeding. If incompatible packages are found, it will return 1.
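As a minimal sketch of such a build step (it assumes spaCy and your pipeline packages are already installed in the build environment):

```bash
# Abort the build if spacy validate reports incompatible packages
# (the command exits with code 1 in that case).
if ! python -m spacy validate; then
    echo "Incompatible pipeline packages found - aborting build."
    exit 1
fi
```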

```bash
$ python -m spacy validate
```

| Name | Description |
| --- | --- |
| **PRINTS** | Details about the compatibility of your installed pipeline packages. |

init {id="init",version="3"}

The spacy init CLI includes helpful commands for initializing training config files and pipeline directories.

init config {id="init-config",version="3",tag="command"}

Initialize and save a config.cfg file using the recommended settings for your use case. It works just like the quickstart widget, only that it also auto-fills all default values and exports a training-ready config. The settings you specify will impact the suggested model architectures and pipeline setup, as well as the hyperparameters. You can also adjust and customize those settings in your config file later.

Example

```bash
$ python -m spacy init config config.cfg --lang en --pipeline ner,textcat --optimize accuracy
```

```bash
$ python -m spacy init config [output_file] [--lang] [--pipeline] [--optimize] [--gpu] [--pretraining] [--force]
```

| Name | Description |
| --- | --- |
| `output_file` | Path to output `.cfg` file or `-` to write the config to stdout (so you can pipe it forward to a file or to the `train` command). Note that if you're writing to stdout, no additional logging info is printed. `Path` (positional) |
| `--lang`, `-l` | Optional code of the language to use. Defaults to `"en"`. `str` (option) |
| `--pipeline`, `-p` | Comma-separated list of trainable pipeline components to include. Defaults to `"tagger,parser,ner"`. `str` (option) |
| `--optimize`, `-o` | `"efficiency"` or `"accuracy"`. Whether to optimize for efficiency (faster inference, smaller model, lower memory consumption) or higher accuracy (potentially larger and slower model). This will impact the choice of architecture, pretrained weights and related hyperparameters. Defaults to `"efficiency"`. `str` (option) |
| `--gpu`, `-G` | Whether the model can run on GPU. This will impact the choice of architecture, pretrained weights and related hyperparameters. `bool` (flag) |
| `--pretraining`, `-pt` | Include config for pretraining (with `spacy pretrain`). Defaults to `False`. `bool` (flag) |
| `--force`, `-f` | Force overwriting the output file if it already exists. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **CREATES** | The config file for training. |

init fill-config {id="init-fill-config",version="3"}

Auto-fill a partial .cfg file with all default values, e.g. a config generated with the quickstart widget. Config files used for training should always be complete and not contain any hidden defaults or missing values, so this command helps you create your final training config. In order to find the available settings and defaults, all functions referenced in the config will be created, and their signatures are used to find the defaults. If your config contains a problem that can't be resolved automatically, spaCy will show you a validation error with more details.

Example

```bash
$ python -m spacy init fill-config base.cfg config.cfg --diff
```

Example diff

```bash
$ python -m spacy init fill-config [base_path] [output_file] [--diff]
```

| Name | Description |
| --- | --- |
| `base_path` | Path to base config to fill, e.g. generated by the quickstart widget. `Path` (positional) |
| `output_file` | Path to output `.cfg` file or `"-"` to write to stdout so you can pipe it to a file. Defaults to `"-"` (stdout). `Path` (positional) |
| `--code`, `-c` | Path to Python file with additional code to be imported. Allows registering custom functions for new architectures. `Optional[Path]` (option) |
| `--pretraining`, `-pt` | Include config for pretraining (with `spacy pretrain`). Defaults to `False`. `bool` (flag) |
| `--diff`, `-D` | Print a visual diff highlighting the changes. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **CREATES** | Complete and auto-filled config file for training. |

init fill-curated-transformer {id="init-fill-curated-transformer",version="3.7",tag="command"}

Auto-fill the Hugging Face model hyperparameters and loader parameters of a Curated Transformer pipeline component in a .cfg file. The name and revision of the Hugging Face model can either be passed as command-line arguments or read from the initialize.components.transformer.encoder_loader config section.
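For illustration, the config section providing the model name and revision might look like the following sketch. The section path comes from the description above, but the `name` and `revision` keys are assumptions here, so check your generated config for the exact schema:

```ini
[initialize.components.transformer.encoder_loader]
name = "bert-base-uncased"
revision = "main"
```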

```bash
$ python -m spacy init fill-curated-transformer [base_path] [output_file] [--model-name] [--model-revision] [--pipe-name] [--code]
```

| Name | Description |
| --- | --- |
| `base_path` | Path to base config to fill, e.g. generated by the quickstart widget. `Path` (positional) |
| `output_file` | Path to output `.cfg` file or `"-"` to write to stdout so you can pipe it to a file. Defaults to `"-"` (stdout). `Path` (positional) |
| `--model-name`, `-m` | Name of the Hugging Face model. Defaults to the model name from the encoder loader config. `Optional[str]` (option) |
| `--model-revision`, `-r` | Revision of the Hugging Face model. Defaults to `main`. `Optional[str]` (option) |
| `--pipe-name`, `-n` | Name of the Curated Transformer pipe whose config is to be filled. Defaults to the first transformer pipe. `Optional[str]` (option) |
| `--code`, `-c` | Path to Python file with additional code to be imported. Allows registering custom functions for new architectures. `Optional[Path]` (option) |
| **CREATES** | Complete and auto-filled config file for training. |

init vectors {id="init-vectors",version="3",tag="command"}

Convert word vectors for use with spaCy. Will export an nlp object that you can use in the [initialize] block of your config to initialize a model with vectors. See the usage guide on static vectors for details on how to use vectors in your model.

<Infobox title="New in v3.0" variant="warning" id="init-model">

This functionality was previously available as part of the command init-model.

</Infobox>
```bash
$ python -m spacy init vectors [lang] [vectors_loc] [output_dir] [--prune] [--truncate] [--name] [--verbose]
```

| Name | Description |
| --- | --- |
| `lang` | Pipeline language. Two-letter ISO 639-1 code or three-letter ISO 639-3 code, such as `en` and `eng`. `str` (positional) |
| `vectors_loc` | Location of vectors. Should be a file where the first row contains the dimensions of the vectors, followed by a space-separated Word2Vec table. File can be provided in `.txt` format or as a zipped text file in `.zip` or `.tar.gz` format. `Path` (positional) |
| `output_dir` | Pipeline output directory. Will be created if it doesn't exist. `Path` (positional) |
| `--truncate`, `-t` | Number of vectors to truncate to when reading in vectors file. Defaults to `0` for no truncation. `int` (option) |
| `--prune`, `-p` | Number of vectors to prune the vocabulary to. Defaults to `-1` for no pruning. `int` (option) |
| `--mode`, `-m` | Vectors mode: `default` or `floret`. Defaults to `default`. `str` (option) |
| `--attr`, `-a` | Token attribute to use for vectors, e.g. `LOWER` or `NORM`. Defaults to `ORTH`. `str` (option) |
| `--name`, `-n` | Name to assign to the word vectors in the `meta.json`, e.g. `en_core_web_md.vectors`. `Optional[str]` (option) |
| `--verbose`, `-V` | Print additional information and explanations. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **CREATES** | A spaCy pipeline directory containing the vocab and vectors. |
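As a sketch of the expected `vectors_loc` input, a tiny Word2Vec-style text file could look like this (first row: number of vectors and their dimensionality; all values are made up):

```text
3 4
cat 0.1 0.2 0.3 0.4
dog 0.2 0.1 0.4 0.3
fish 0.0 0.5 0.1 0.2
```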

init labels {id="init-labels",version="3",tag="command"}

Generate JSON files for the labels in the data. This helps speed up the training process, since spaCy won't have to preprocess the data to extract the labels. After generating the labels, you can provide them to components that accept a labels argument on initialization via the [initialize] block of your config.

Example config

```ini
[initialize.components.ner]

[initialize.components.ner.labels]
@readers = "spacy.read_labels.v1"
path = "corpus/labels/ner.json"
```

```bash
$ python -m spacy init labels [config_path] [output_path] [--code] [--verbose] [--gpu-id] [overrides]
```

| Name | Description |
| --- | --- |
| `config_path` | Path to training config file containing all settings and hyperparameters. If `-`, the data will be read from stdin. `Union[Path, str]` (positional) |
| `output_path` | Output directory for the label files. Will create one JSON file per component. `Path` (positional) |
| `--code`, `-c` | Path to Python file with additional code to be imported. Allows registering custom functions for new architectures. `Optional[Path]` (option) |
| `--verbose`, `-V` | Show more detailed messages for debugging purposes. `bool` (flag) |
| `--gpu-id`, `-g` | GPU ID or `-1` for CPU. Defaults to `-1`. `int` (option) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| `overrides` | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. `Any` (option/flag) |
| **CREATES** | The label files. |

find-function {id="find-function",version="3.7",tag="command"}

Find the module, path and line number to the file for a given registered function. This functionality is helpful to understand where registered functions, as used in the config file, are defined.

```bash
$ python -m spacy find-function [func_name] [--registry]
```

Example

```bash
$ python -m spacy find-function spacy.TextCatBOW.v1
```

| Name | Description |
| --- | --- |
| `func_name` | Name of the registered function. `str` (positional) |
| `--registry`, `-r` | Name of the catalogue registry. `str` (option) |

convert {id="convert",tag="command"}

Convert files into spaCy's binary training data format, a serialized DocBin, for use with the train command and other experiment management functions. The converter can be specified on the command line, or chosen based on the file extension of the input file.

```bash
$ python -m spacy convert [input_file] [output_dir] [--converter] [--file-type] [--n-sents] [--seg-sents] [--base] [--morphology] [--merge-subtokens] [--ner-map] [--lang]
```

| Name | Description |
| --- | --- |
| `input_path` | Input file or directory. `Path` (positional) |
| `output_dir` | Output directory for converted file. Defaults to `"-"`, meaning data will be written to stdout. `Optional[Path]` (option) |
| `--converter`, `-c` | Name of converter to use (see below). `str` (option) |
| `--file-type`, `-t` | Type of file to create. Either `spacy` (default) for binary DocBin data or `json` for v2.x JSON format. `str` (option) |
| `--n-sents`, `-n` | Number of sentences per document. Supported for: `conll`, `conllu`, `iob`, `ner`. `int` (option) |
| `--seg-sents`, `-s` | Segment sentences. Supported for: `conll`, `ner`. `bool` (flag) |
| `--base`, `-b`, `--model` | Trained spaCy pipeline for sentence segmentation to use as base (for `--seg-sents`). `Optional[str]` (option) |
| `--morphology`, `-m` | Enable appending morphology to tags. Supported for: `conllu`. `bool` (flag) |
| `--merge-subtokens`, `-T` | Merge CoNLL-U subtokens. `bool` (flag) |
| `--ner-map`, `-nm` | NER tag mapping (as JSON-encoded dict of entity types). Supported for: `conllu`. `Optional[Path]` (option) |
| `--lang`, `-l` | Language code (if tokenizer required). `Optional[str]` (option) |
| `--concatenate`, `-C` | Concatenate output to a single file. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **CREATES** | Binary DocBin training data that can be used with `spacy train`. |
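For example, a hypothetical invocation converting a CoNLL-U file into the binary format with ten sentences per document (the file paths are placeholders):

```bash
$ python -m spacy convert ./train.conllu ./corpus --converter conllu --n-sents 10
```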

Converters {id="converters"}

| ID | Description |
| --- | --- |
| `auto` | Automatically pick converter based on file extension and file content (default). |
| `json` | JSON-formatted training data used in spaCy v2.x. |
| `conllu` | Universal Dependencies `.conllu` format. |
| `ner` / `conll` | NER with IOB/IOB2/BILUO tags, one token per line with columns separated by whitespace. The first column is the token and the final column is the NER tag. Sentences are separated by blank lines and documents are separated by the line `-DOCSTART- -X- O O`. Supports CoNLL 2003 NER format. See sample data. |
| `iob` | NER with IOB/IOB2/BILUO tags, one sentence per line with tokens separated by whitespace and annotation separated by `|`, either `word|B-ENT` or `word|POS|B-ENT`. See sample data. |
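To make the `iob` layout concrete, here is a minimal standalone parser for one sentence line. This is illustrative pure Python, not part of spaCy; it only demonstrates the two token variants described above.

```python
def parse_iob_line(line):
    """Parse one sentence in IOB format into (word, pos, ner) triples.

    Tokens are separated by whitespace; fields within a token are
    separated by "|". Two-field tokens (word|B-ENT) carry no POS tag.
    """
    parsed = []
    for token in line.split():
        fields = token.split("|")
        if len(fields) == 2:          # word|B-ENT
            word, ner = fields
            pos = None
        elif len(fields) == 3:        # word|POS|B-ENT
            word, pos, ner = fields
        else:
            raise ValueError(f"Unexpected token: {token!r}")
        parsed.append((word, pos, ner))
    return parsed

print(parse_iob_line("Alex|NNP|B-PERSON lives|VBZ|O in|IN|O Berlin|NNP|B-GPE"))
```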

debug {id="debug",version="3"}

The spacy debug CLI includes helpful commands for debugging and profiling your configs, data and implementations.

debug config {id="debug-config",version="3",tag="command"}

Debug a config.cfg file and show validation errors. The command will create all objects in the tree and validate them. Note that some config validation errors are blocking and will prevent the rest of the config from being resolved. This means that you may not see all validation errors at once and some issues are only shown once previous errors have been fixed. To auto-fill a partial config and save the result, you can use the init fill-config command.

```bash
$ python -m spacy debug config [config_path] [--code] [--show-functions] [--show-variables] [overrides]
```

Example

```bash
$ python -m spacy debug config config.cfg
```
<Accordion title="Example output (validation error)">
✘ Config validation error
dropout     field required
optimizer   field required
optimize    extra fields not permitted

{'seed': 0, 'accumulate_gradient': 1, 'dev_corpus': 'corpora.dev', 'train_corpus': 'corpora.train', 'gpu_allocator': None, 'patience': 1600, 'max_epochs': 0, 'max_steps': 20000, 'eval_frequency': 200, 'frozen_components': [], 'optimize': None, 'before_to_disk': None, 'batcher': {'@batchers': 'spacy.batch_by_words.v1', 'discard_oversize': False, 'tolerance': 0.2, 'get_length': None, 'size': {'@schedules': 'compounding.v1', 'start': 100, 'stop': 1000, 'compound': 1.001, 't': 0.0}}, 'logger': {'@loggers': 'spacy.ConsoleLogger.v1', 'progress_bar': False}, 'score_weights': {'tag_acc': 0.5, 'dep_uas': 0.25, 'dep_las': 0.25, 'sents_f': 0.0}}

If your config contains missing values, you can run the 'init fill-config'
command to fill in all the defaults, if possible:

python -m spacy init fill-config tmp/starter-config_invalid.cfg tmp/starter-config_invalid.cfg
</Accordion> <Accordion title="Example output (valid config and all options)" spaced>
```bash
$ python -m spacy debug config ./config.cfg --show-functions --show-variables
```
============================= Config validation =============================
✔ Config is valid

=============================== Variables (6) ===============================

Variable                                   Value
-----------------------------------------  ----------------------------------
${components.tok2vec.model.encode.width}   96
${paths.dev}                               'hello'
${paths.init_tok2vec}                      None
${paths.raw}                               None
${paths.train}                             ''
${system.seed}                             0


========================= Registered functions (17) =========================
ℹ [nlp.tokenizer]
Registry   @tokenizers
Name       spacy.Tokenizer.v1
Module     spacy.language
File       /path/to/spacy/language.py (line 64)
ℹ [components.ner.model]
Registry   @architectures
Name       spacy.TransitionBasedParser.v1
Module     spacy.ml.models.parser
File       /path/to/spacy/ml/models/parser.py (line 11)
ℹ [components.ner.model.tok2vec]
Registry   @architectures
Name       spacy.Tok2VecListener.v1
Module     spacy.ml.models.tok2vec
File       /path/to/spacy/ml/models/tok2vec.py (line 16)
ℹ [components.parser.model]
Registry   @architectures
Name       spacy.TransitionBasedParser.v1
Module     spacy.ml.models.parser
File       /path/to/spacy/ml/models/parser.py (line 11)
ℹ [components.parser.model.tok2vec]
Registry   @architectures
Name       spacy.Tok2VecListener.v1
Module     spacy.ml.models.tok2vec
File       /path/to/spacy/ml/models/tok2vec.py (line 16)
ℹ [components.tagger.model]
Registry   @architectures
Name       spacy.Tagger.v1
Module     spacy.ml.models.tagger
File       /path/to/spacy/ml/models/tagger.py (line 9)
ℹ [components.tagger.model.tok2vec]
Registry   @architectures
Name       spacy.Tok2VecListener.v1
Module     spacy.ml.models.tok2vec
File       /path/to/spacy/ml/models/tok2vec.py (line 16)
ℹ [components.tok2vec.model]
Registry   @architectures
Name       spacy.Tok2Vec.v1
Module     spacy.ml.models.tok2vec
File       /path/to/spacy/ml/models/tok2vec.py (line 72)
ℹ [components.tok2vec.model.embed]
Registry   @architectures
Name       spacy.MultiHashEmbed.v1
Module     spacy.ml.models.tok2vec
File       /path/to/spacy/ml/models/tok2vec.py (line 93)
ℹ [components.tok2vec.model.encode]
Registry   @architectures
Name       spacy.MaxoutWindowEncoder.v1
Module     spacy.ml.models.tok2vec
File       /path/to/spacy/ml/models/tok2vec.py (line 207)
ℹ [corpora.dev]
Registry   @readers
Name       spacy.Corpus.v1
Module     spacy.training.corpus
File       /path/to/spacy/training/corpus.py (line 18)
ℹ [corpora.train]
Registry   @readers
Name       spacy.Corpus.v1
Module     spacy.training.corpus
File       /path/to/spacy/training/corpus.py (line 18)
ℹ [training.logger]
Registry   @loggers
Name       spacy.ConsoleLogger.v1
Module     spacy.training.loggers
File       /path/to/spacy/training/loggers.py (line 8)
ℹ [training.batcher]
Registry   @batchers
Name       spacy.batch_by_words.v1
Module     spacy.training.batchers
File       /path/to/spacy/training/batchers.py (line 49)
ℹ [training.batcher.size]
Registry   @schedules
Name       compounding.v1
Module     thinc.schedules
File       /path/to/thinc/thinc/schedules.py (line 43)
ℹ [training.optimizer]
Registry   @optimizers
Name       Adam.v1
Module     thinc.optimizers
File       /path/to/thinc/thinc/optimizers.py (line 58)
ℹ [training.optimizer.learn_rate]
Registry   @schedules
Name       warmup_linear.v1
Module     thinc.schedules
File       /path/to/thinc/thinc/schedules.py (line 91)
</Accordion>
| Name | Description |
| --- | --- |
| `config_path` | Path to training config file containing all settings and hyperparameters. If `-`, the data will be read from stdin. `Union[Path, str]` (positional) |
| `--code`, `-c` | Path to Python file with additional code to be imported. Allows registering custom functions for new architectures. `Optional[Path]` (option) |
| `--show-functions`, `-F` | Show an overview of all registered function blocks used in the config and where those functions come from, including the module name, Python file and line number. `bool` (flag) |
| `--show-variables`, `-V` | Show an overview of all variables referenced in the config, e.g. `${paths.train}`, and their values that will be used. This also reflects any config overrides provided on the CLI, e.g. `--paths.train /path`. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| `overrides` | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. `Any` (option/flag) |
| **PRINTS** | Config validation errors, if available. |

debug data {id="debug-data",tag="command"}

Analyze, debug and validate your training and development data. Get useful stats, and find problems like invalid entity annotations, cyclic dependencies, low data labels and more.

<Infobox title="New in v3.0" variant="warning">

The debug data command is now available as a subcommand of spacy debug. It takes the same arguments as train and reads settings off the config.cfg file and optional overrides on the CLI.

</Infobox> <Infobox title="Notes on span characteristics" emoji="💡">

If your pipeline contains a spancat component, then this command will also report span characteristics such as the average span length and the span (or span boundary) distinctiveness. The distinctiveness measure shows how different the tokens are with respect to the rest of the corpus using the KL-divergence of the token distributions. To learn more, you can check out Papay et al.'s work on Dissecting Span Identification Tasks with Performance Prediction (EMNLP 2020).

</Infobox>
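As a rough illustration of the distinctiveness idea (a simplified sketch, not spaCy's exact implementation), the KL divergence between a span-internal token distribution P and the corpus-wide distribution Q can be computed like this, with additive smoothing so tokens missing from one distribution don't divide by zero:

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, eps=1e-10):
    """KL(P || Q) between two token-count distributions over the
    union of their vocabularies, with additive smoothing."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + eps * len(vocab)
    q_total = sum(q_counts.values()) + eps * len(vocab)
    kl = 0.0
    for token in vocab:
        p = (p_counts.get(token, 0) + eps) / p_total
        q = (q_counts.get(token, 0) + eps) / q_total
        kl += p * math.log(p / q)
    return kl

# Tokens inside annotated spans vs. all tokens in the corpus (toy data):
span_tokens = Counter(["acute", "myeloid", "leukemia", "acute"])
corpus_tokens = Counter(["the", "patient", "has", "acute", "myeloid",
                         "leukemia", "and", "the", "fever", "is", "high"])
print(kl_divergence(span_tokens, corpus_tokens))
```

A higher value means the span-internal tokens look more different from the rest of the corpus, i.e. the spans are easier to distinguish.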
```bash
$ python -m spacy debug data [config_path] [--code] [--ignore-warnings] [--verbose] [--no-format] [overrides]
```

Example

```bash
$ python -m spacy debug data ./config.cfg
```
<Accordion title="Example output" spaced>
=========================== Data format validation ===========================
✔ Corpus is loadable
✔ Pipeline can be initialized with data

=============================== Training stats ===============================
Training pipeline: tagger, parser, ner
Starting with blank model 'en'
18127 training docs
2939 evaluation docs
⚠ 34 training examples also in evaluation data

============================== Vocab & Vectors ==============================
ℹ 2083156 total words in the data (56962 unique)
⚠ 13020 misaligned tokens in the training data
⚠ 2423 misaligned tokens in the dev data
10 most common words: 'the' (98429), ',' (91756), '.' (87073), 'to' (50058),
'of' (49559), 'and' (44416), 'a' (34010), 'in' (31424), 'that' (22792), 'is'
(18952)
ℹ No word vectors present in the model

========================== Named Entity Recognition ==========================
ℹ 18 new labels, 0 existing labels
528978 missing values (tokens with '-' label)
New: 'ORG' (23860), 'PERSON' (21395), 'GPE' (21193), 'DATE' (18080), 'CARDINAL'
(10490), 'NORP' (9033), 'MONEY' (5164), 'PERCENT' (3761), 'ORDINAL' (2122),
'LOC' (2113), 'TIME' (1616), 'WORK_OF_ART' (1229), 'QUANTITY' (1150), 'FAC'
(1134), 'EVENT' (974), 'PRODUCT' (935), 'LAW' (444), 'LANGUAGE' (338)
✔ Good amount of examples for all labels
✔ Examples without occurrences available for all labels
✔ No entities consisting of or starting/ending with whitespace

=========================== Part-of-speech Tagging ===========================
ℹ 49 labels in data
'NN' (266331), 'IN' (227365), 'DT' (185600), 'NNP' (164404), 'JJ' (119830),
'NNS' (110957), '.' (101482), ',' (92476), 'RB' (90090), 'PRP' (90081), 'VB'
(74538), 'VBD' (68199), 'CC' (62862), 'VBZ' (50712), 'VBP' (43420), 'VBN'
(42193), 'CD' (40326), 'VBG' (34764), 'TO' (31085), 'MD' (25863), 'PRP$'
(23335), 'HYPH' (13833), 'POS' (13427), 'UH' (13322), 'WP' (10423), 'WDT'
(9850), 'RP' (8230), 'WRB' (8201), ':' (8168), '''' (7392), '``' (6984), 'NNPS'
(5817), 'JJR' (5689), '$' (3710), 'EX' (3465), 'JJS' (3118), 'RBR' (2872),
'-RRB-' (2825), '-LRB-' (2788), 'PDT' (2078), 'XX' (1316), 'RBS' (1142), 'FW'
(794), 'NFP' (557), 'SYM' (440), 'WP$' (294), 'LS' (293), 'ADD' (191), 'AFX'
(24)

============================= Dependency Parsing =============================
ℹ Found 111703 sentences with an average length of 18.6 words.
ℹ Found 2251 nonprojective train sentences
ℹ Found 303 nonprojective dev sentences
ℹ 47 labels in train data
ℹ 211 labels in projectivized train data
'punct' (236796), 'prep' (188853), 'pobj' (182533), 'det' (172674), 'nsubj'
(169481), 'compound' (116142), 'ROOT' (111697), 'amod' (107945), 'dobj' (93540),
'aux' (86802), 'advmod' (86197), 'cc' (62679), 'conj' (59575), 'poss' (36449),
'ccomp' (36343), 'advcl' (29017), 'mark' (27990), 'nummod' (24582), 'relcl'
(21359), 'xcomp' (21081), 'attr' (18347), 'npadvmod' (17740), 'acomp' (17204),
'auxpass' (15639), 'appos' (15368), 'neg' (15266), 'nsubjpass' (13922), 'case'
(13408), 'acl' (12574), 'pcomp' (10340), 'nmod' (9736), 'intj' (9285), 'prt'
(8196), 'quantmod' (7403), 'dep' (4300), 'dative' (4091), 'agent' (3908), 'expl'
(3456), 'parataxis' (3099), 'oprd' (2326), 'predet' (1946), 'csubj' (1494),
'subtok' (1147), 'preconj' (692), 'meta' (469), 'csubjpass' (64), 'iobj' (1)
⚠ Low number of examples for label 'iobj' (1)
⚠ Low number of examples for 130 labels in the projectivized dependency
trees used for training. You may want to projectivize labels such as punct
before training in order to improve parser performance.
⚠ Projectivized labels with low numbers of examples: appos||attr: 12
advmod||dobj: 13 prep||ccomp: 12 nsubjpass||ccomp: 15 pcomp||prep: 14
amod||dobj: 9 attr||xcomp: 14 nmod||nsubj: 17 prep||advcl: 2 prep||prep: 5
nsubj||conj: 12 advcl||advmod: 18 ccomp||advmod: 11 ccomp||pcomp: 5 acl||pobj:
10 npadvmod||acomp: 7 dobj||pcomp: 14 nsubjpass||pcomp: 1 nmod||pobj: 8
amod||attr: 6 nmod||dobj: 12 aux||conj: 1 neg||conj: 1 dative||xcomp: 11
pobj||dative: 3 xcomp||acomp: 19 advcl||pobj: 2 nsubj||advcl: 2 csubj||ccomp: 1
advcl||acl: 1 relcl||nmod: 2 dobj||advcl: 10 advmod||advcl: 3 nmod||nsubjpass: 6
amod||pobj: 5 cc||neg: 1 attr||ccomp: 16 advcl||xcomp: 3 nmod||attr: 4
advcl||nsubjpass: 5 advcl||ccomp: 4 ccomp||conj: 1 punct||acl: 1 meta||acl: 1
parataxis||acl: 1 prep||acl: 1 amod||nsubj: 7 ccomp||ccomp: 3 acomp||xcomp: 5
dobj||acl: 5 prep||oprd: 6 advmod||acl: 2 dative||advcl: 1 pobj||agent: 5
xcomp||amod: 1 dep||advcl: 1 prep||amod: 8 relcl||compound: 1 advcl||csubj: 3
npadvmod||conj: 2 npadvmod||xcomp: 4 advmod||nsubj: 3 ccomp||amod: 7
advcl||conj: 1 nmod||conj: 2 advmod||nsubjpass: 2 dep||xcomp: 2 appos||ccomp: 1
advmod||dep: 1 advmod||advmod: 5 aux||xcomp: 8 dep||advmod: 1 dative||ccomp: 2
prep||dep: 1 conj||conj: 1 dep||ccomp: 4 cc||ROOT: 1 prep||ROOT: 1 nsubj||pcomp:
3 advmod||prep: 2 relcl||dative: 1 acl||conj: 1 advcl||attr: 4 prep||npadvmod: 1
nsubjpass||xcomp: 1 neg||advmod: 1 xcomp||oprd: 1 advcl||advcl: 1 dobj||dep: 3
nsubjpass||parataxis: 1 attr||pcomp: 1 ccomp||parataxis: 1 advmod||attr: 1
nmod||oprd: 1 appos||nmod: 2 advmod||relcl: 1 appos||npadvmod: 1 appos||conj: 1
prep||expl: 1 nsubjpass||conj: 1 punct||pobj: 1 cc||pobj: 1 conj||pobj: 1
punct||conj: 1 ccomp||dep: 1 oprd||xcomp: 3 ccomp||xcomp: 1 ccomp||nsubj: 1
nmod||dep: 1 xcomp||ccomp: 1 acomp||advcl: 1 intj||advmod: 1 advmod||acomp: 2
relcl||oprd: 1 advmod||prt: 1 advmod||pobj: 1 appos||nummod: 1 relcl||npadvmod:
3 mark||advcl: 1 aux||ccomp: 1 amod||nsubjpass: 1 npadvmod||advmod: 1 conj||dep:
1 nummod||pobj: 1 amod||npadvmod: 1 intj||pobj: 1 nummod||npadvmod: 1
xcomp||xcomp: 1 aux||dep: 1 advcl||relcl: 1
⚠ The following labels were found only in the train data: xcomp||amod,
advcl||relcl, prep||nsubjpass, acl||nsubj, nsubjpass||conj, xcomp||oprd,
advmod||conj, advmod||advmod, iobj, advmod||nsubjpass, dobj||conj, ccomp||amod,
meta||acl, xcomp||xcomp, prep||attr, prep||ccomp, advcl||acomp, acl||dobj,
advcl||advcl, pobj||agent, prep||advcl, nsubjpass||xcomp, prep||dep,
acomp||xcomp, aux||ccomp, ccomp||dep, conj||dep, relcl||compound,
nsubjpass||ccomp, nmod||dobj, advmod||advcl, advmod||acl, dobj||advcl,
dative||xcomp, prep||nsubj, ccomp||ccomp, nsubj||ccomp, xcomp||acomp,
prep||acomp, dep||advmod, acl||pobj, appos||dobj, npadvmod||acomp, cc||ROOT,
relcl||nsubj, nmod||pobj, acl||nsubjpass, ccomp||advmod, pcomp||prep,
amod||dobj, advmod||attr, advcl||csubj, appos||attr, dobj||pcomp, prep||ROOT,
relcl||pobj, advmod||pobj, amod||nsubj, ccomp||xcomp, prep||oprd,
npadvmod||advmod, appos||nummod, advcl||pobj, neg||advmod, acl||attr,
appos||nsubjpass, csubj||ccomp, amod||nsubjpass, intj||pobj, dep||advcl,
cc||neg, xcomp||ccomp, dative||ccomp, nmod||oprd, pobj||dative, prep||dobj,
dep||ccomp, relcl||attr, ccomp||nsubj, advcl||xcomp, nmod||dep, advcl||advmod,
ccomp||conj, pobj||prep, advmod||acomp, advmod||relcl, attr||pcomp,
ccomp||parataxis, oprd||xcomp, intj||advmod, nmod||nsubjpass, prep||npadvmod,
parataxis||acl, prep||pobj, advcl||dobj, amod||pobj, prep||acl, conj||pobj,
advmod||dep, punct||pobj, ccomp||acomp, acomp||advcl, nummod||npadvmod,
dobj||dep, npadvmod||xcomp, advcl||conj, relcl||npadvmod, punct||acl,
relcl||dobj, dobj||xcomp, nsubjpass||parataxis, dative||advcl, relcl||nmod,
advcl||ccomp, appos||npadvmod, ccomp||pcomp, prep||amod, mark||advcl,
prep||advmod, prep||xcomp, appos||nsubj, attr||ccomp, advmod||prt, dobj||ccomp,
aux||conj, advcl||nsubj, conj||conj, advmod||ccomp, advcl||nsubjpass,
attr||xcomp, nmod||conj, npadvmod||conj, relcl||dative, prep||expl,
nsubjpass||pcomp, advmod||xcomp, advmod||dobj, appos||pobj, nsubj||conj,
relcl||nsubjpass, advcl||attr, appos||ccomp, advmod||prep, prep||conj,
nmod||attr, punct||conj, neg||conj, dep||xcomp, aux||xcomp, dobj||acl,
nummod||pobj, amod||npadvmod, nsubj||pcomp, advcl||acl, appos||nmod,
relcl||oprd, prep||prep, cc||pobj, nmod||nsubj, amod||attr, aux||dep,
appos||conj, advmod||nsubj, nsubj||advcl, acl||conj
To train a parser, your data should include at least 20 instances of each label.
⚠ Multiple root labels (ROOT, nsubj, aux, npadvmod, prep) found in
training data. spaCy's parser uses a single root label ROOT so this distinction
will not be available.

================================== Summary ==================================
✔ 5 checks passed
⚠ 8 warnings
</Accordion>
| Name | Description |
| --- | --- |
| `config_path` | Path to training config file containing all settings and hyperparameters. If `-`, the data will be read from stdin. `Union[Path, str]` (positional) |
| `--code`, `-c` | Path to Python file with additional code to be imported. Allows registering custom functions for new architectures. `Optional[Path]` (option) |
| `--ignore-warnings`, `-IW` | Ignore warnings, only show stats and errors. `bool` (flag) |
| `--verbose`, `-V` | Print additional information and explanations. `bool` (flag) |
| `--no-format`, `-NF` | Don't pretty-print the results. Use this if you want to write to a file. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| `overrides` | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. `Any` (option/flag) |
| **PRINTS** | Debugging information. |

debug diff-config {id="debug-diff",tag="command"}

Show a diff of a config file with respect to spaCy's defaults or another config file. If additional settings were used in the creation of the config file, then you must supply these as extra parameters to the command when comparing to the default settings. The generated diff can also be used when posting to the discussion forum to provide more information for the maintainers.

bash
$ python -m spacy debug diff-config [config_path] [--compare-to] [--optimize] [--gpu] [--pretraining] [--markdown]
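The kind of +/- comparison the command performs can be sketched for a single flat config section. This is an illustrative simplification, not spaCy's actual diffing code; `compare_sections` is a hypothetical helper:

```python
def compare_sections(default: dict, user: dict) -> list:
    """Produce diff lines in the style of `debug diff-config`:
    unchanged keys are printed as-is, differing keys as +/- pairs."""
    lines = []
    for key in default:
        if key in user and user[key] != default[key]:
            lines.append(f"+ {key} = {user[key]!r}")
            lines.append(f"- {key} = {default[key]!r}")
        else:
            lines.append(f"{key} = {default.get(key)!r}")
    return lines

# Mirrors the [system] block in the example output below
default = {"seed": 0, "gpu_allocator": None}
user = {"seed": 42, "gpu_allocator": None}
print("\n".join(compare_sections(default, user)))
```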

Example

bash
$ python -m spacy debug diff-config ./config.cfg
<Accordion title="Example output" spaced>
ℹ Found user-defined language: 'en'
ℹ Found user-defined pipelines: ['tok2vec', 'tagger', 'parser',
'ner']
[paths]
+ train = "./data/train.spacy"
+ dev = "./data/dev.spacy"
- train = null
- dev = null
vectors = null
init_tok2vec = null

[system]
gpu_allocator = null
+ seed = 42
- seed = 0

[nlp]
lang = "en"
pipeline = ["tok2vec","tagger","parser","ner"]
batch_size = 1000
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}

[components]

[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 100

[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
- hidden_width = 64
+ hidden_width = 36
maxout_pieces = 2
use_upper = true
nO = null

[components.ner.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode.width}
upstream = "*"

[components.parser]
factory = "parser"
learn_tokens = false
min_action_freq = 30
moves = null
scorer = {"@scorers":"spacy.parser_scorer.v1"}
update_with_oracle_cut_size = 100

[components.parser.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "parser"
extra_state_tokens = false
hidden_width = 128
maxout_pieces = 3
use_upper = true
nO = null

[components.parser.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode.width}
upstream = "*"

[components.tagger]
factory = "tagger"
neg_prefix = "!"
overwrite = false
scorer = {"@scorers":"spacy.tagger_scorer.v1"}

[components.tagger.model]
@architectures = "spacy.Tagger.v1"
nO = null

[components.tagger.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode.width}
upstream = "*"

[components.tok2vec]
factory = "tok2vec"

[components.tok2vec.model]
@architectures = "spacy.Tok2Vec.v2"

[components.tok2vec.model.embed]
@architectures = "spacy.MultiHashEmbed.v2"
width = ${components.tok2vec.model.encode.width}
attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
rows = [5000,2500,2500,2500]
include_static_vectors = false

[components.tok2vec.model.encode]
@architectures = "spacy.MaxoutWindowEncoder.v2"
width = 96
depth = 4
window_size = 1
maxout_pieces = 3

[corpora]

[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null

[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null

[training]
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
accumulate_gradient = 1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null

[training.batcher]
@batchers = "spacy.batch_by_words.v1"
discard_oversize = false
tolerance = 0.2
get_length = null

[training.batcher.size]
@schedules = "compounding.v1"
start = 100
stop = 1000
compound = 1.001
t = 0.0

[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false

[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
learn_rate = 0.001

[training.score_weights]
tag_acc = 0.33
dep_uas = 0.17
dep_las = 0.17
dep_las_per_type = null
sents_p = null
sents_r = null
sents_f = 0.0
ents_f = 0.33
ents_p = 0.0
ents_r = 0.0
ents_per_type = null

[pretraining]

[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null

[initialize.components]

[initialize.tokenizer]
</Accordion>
NameDescription
config_pathPath to training config file containing all settings and hyperparameters. Union[Path, str] (positional)
compare_toPath to another config file to diff against, or None to compare against default settings. Optional[Union[Path, str]] (option)
optimize, -o"efficiency" or "accuracy". Whether the config was optimized for efficiency (faster inference, smaller model, lower memory consumption) or higher accuracy (potentially larger and slower model). Only relevant when comparing against a default config. Defaults to "efficiency". str (option)
gpu, -GWhether the config was made to run on a GPU. Only relevant when comparing against a default config. bool (flag)
pretraining, -ptInclude config for pretraining (with spacy pretrain). Only relevant when comparing against a default config. Defaults to False. bool (flag)
markdown, -mdGenerate Markdown for Github issues. Defaults to False. bool (flag)
PRINTSDiff between the two config files.

debug profile {id="debug-profile",tag="command"}

Profile which functions take the most time in a spaCy pipeline. Input should be formatted as one JSON object per line with a key "text". It can either be provided as a JSONL file, or be read from sys.stdin. If no input file is specified, the IMDB dataset is loaded via ml_datasets.
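The expected input format, one JSON object per line with a "text" key, can be generated in a few lines of Python (`to_jsonl` is a hypothetical helper for illustration):

```python
import json

def to_jsonl(texts):
    """Serialize texts to the JSONL format expected by `debug profile`:
    one JSON object per line, each with a "text" key."""
    return "\n".join(json.dumps({"text": t}) for t in texts) + "\n"

print(to_jsonl(["This is a test.", "Another document."]))
```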

<Infobox title="New in v3.0" variant="warning">

The profile command is now available as a subcommand of spacy debug.

</Infobox>
bash
$ python -m spacy debug profile [model] [inputs] [--n-texts]
NameDescription
modelA loadable spaCy pipeline (package name or path). str (positional)
inputsPath to input file, or - for standard input. Path (positional)
--n-texts, -nMaximum number of texts to use if available. Defaults to 10000. int (option)
--help, -hShow help message and available arguments. bool (flag)
PRINTSProfiling information for the pipeline.

debug model {id="debug-model",version="3",tag="command"}

Debug a Thinc Model by running it on a sample text and checking how it updates its internal weights and parameters.

bash
$ python -m spacy debug model [config_path] [component] [--layers] [--dimensions] [--parameters] [--gradients] [--attributes] [--print-step0] [--print-step1] [--print-step2] [--print-step3] [--gpu-id]
<Accordion title="Example outputs" spaced>

In this example log, we just print the name of each layer after creation of the model ("Step 0"), which helps us understand the internal structure of the neural network and focus on specific layers that we want to inspect further (see the next example).

bash
$ python -m spacy debug model ./config.cfg tagger -P0
ℹ Using CPU
ℹ Fixing random seed: 0
ℹ Analysing model with ID 62

========================== STEP 0 - before training ==========================
ℹ Layer 0: model ID 62:
'extract_features>>list2ragged>>with_array-ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed>>with_array-maxout>>layernorm>>dropout>>ragged2list>>with_array-residual>>residual>>residual>>residual>>with_array-softmax'
ℹ Layer 1: model ID 59:
'extract_features>>list2ragged>>with_array-ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed>>with_array-maxout>>layernorm>>dropout>>ragged2list>>with_array-residual>>residual>>residual>>residual'
ℹ Layer 2: model ID 61: 'with_array-softmax'
ℹ Layer 3: model ID 24:
'extract_features>>list2ragged>>with_array-ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed>>with_array-maxout>>layernorm>>dropout>>ragged2list'
ℹ Layer 4: model ID 58: 'with_array-residual>>residual>>residual>>residual'
ℹ Layer 5: model ID 60: 'softmax'
ℹ Layer 6: model ID 13: 'extract_features'
ℹ Layer 7: model ID 14: 'list2ragged'
ℹ Layer 8: model ID 16:
'with_array-ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed'
ℹ Layer 9: model ID 22: 'with_array-maxout>>layernorm>>dropout'
ℹ Layer 10: model ID 23: 'ragged2list'
ℹ Layer 11: model ID 57: 'residual>>residual>>residual>>residual'
ℹ Layer 12: model ID 15:
'ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed|ints-getitem>>hashembed'
ℹ Layer 13: model ID 21: 'maxout>>layernorm>>dropout'
ℹ Layer 14: model ID 32: 'residual'
ℹ Layer 15: model ID 40: 'residual'
ℹ Layer 16: model ID 48: 'residual'
ℹ Layer 17: model ID 56: 'residual'
ℹ Layer 18: model ID 3: 'ints-getitem>>hashembed'
ℹ Layer 19: model ID 6: 'ints-getitem>>hashembed'
ℹ Layer 20: model ID 9: 'ints-getitem>>hashembed'
...

In this example log, we see how initialization of the model (Step 1) propagates the correct values for the nI (input) and nO (output) dimensions of the various layers. In the softmax layer, this step also defines the W matrix as an all-zero matrix determined by the nO and nI dimensions. After a first training step (Step 2), this matrix has clearly updated its values through the training feedback loop.

bash
$ python -m spacy debug model ./config.cfg tagger -l "5,15" -DIM -PAR -P0 -P1 -P2
ℹ Using CPU
ℹ Fixing random seed: 0
ℹ Analysing model with ID 62

========================= STEP 0 - before training =========================
ℹ Layer 5: model ID 60: 'softmax'
ℹ  - dim nO: None
ℹ  - dim nI: 96
ℹ  - param W: None
ℹ  - param b: None
ℹ Layer 15: model ID 40: 'residual'
ℹ  - dim nO: None
ℹ  - dim nI: None

======================= STEP 1 - after initialization =======================
ℹ Layer 5: model ID 60: 'softmax'
ℹ  - dim nO: 4
ℹ  - dim nI: 96
ℹ  - param W: (4, 96) - sample: [0. 0. 0. 0. 0.]
ℹ  - param b: (4,) - sample: [0. 0. 0. 0.]
ℹ Layer 15: model ID 40: 'residual'
ℹ  - dim nO: 96
ℹ  - dim nI: None

========================== STEP 2 - after training ==========================
ℹ Layer 5: model ID 60: 'softmax'
ℹ  - dim nO: 4
ℹ  - dim nI: 96
ℹ  - param W: (4, 96) - sample: [ 0.00283958 -0.00294119  0.00268396 -0.00296219
-0.00297141]
ℹ  - param b: (4,) - sample: [0.00300002 0.00300002 0.00300002 0.00300002]
ℹ Layer 15: model ID 40: 'residual'
ℹ  - dim nO: 96
ℹ  - dim nI: None
</Accordion>
NameDescription
config_pathPath to training config file containing all settings and hyperparameters. If -, the data will be read from stdin. Union[Path, str] (positional)
componentName of the pipeline component of which the model should be analyzed. str (positional)
--layers, -lComma-separated names of layer IDs to print. str (option)
--dimensions, -DIMShow dimensions of each layer. bool (flag)
--parameters, -PARShow parameters of each layer. bool (flag)
--gradients, -GRADShow gradients of each layer. bool (flag)
--attributes, -ATTRShow attributes of each layer. bool (flag)
--print-step0, -P0Print model before training. bool (flag)
--print-step1, -P1Print model after initialization. bool (flag)
--print-step2, -P2Print model after training. bool (flag)
--print-step3, -P3Print final predictions. bool (flag)
--gpu-id, -gGPU ID or -1 for CPU. Defaults to -1. int (option)
--help, -hShow help message and available arguments. bool (flag)
overridesConfig parameters to override. Should be options starting with -- that correspond to the config section and value to override, e.g. --paths.train ./train.spacy. Any (option/flag)
PRINTSDebugging information.

debug pieces {id="debug-pieces",version="3.7",tag="command"}

Analyze word- or sentencepiece statistics for the training and development corpora.

bash
$ python -m spacy debug pieces [config_path] [--code] [--name] [overrides]
NameDescription
config_pathPath to config file. Union[Path, str] (positional)
--code, -cPath to Python file with additional code to be imported. Allows registering custom functions for new architectures. Optional[Path] (option)
--name, -nName of the Curated Transformer pipe whose tokenizer pieces should be analyzed. Defaults to the first transformer pipe. Optional[str] (option)
overridesConfig parameters to override. Should be options starting with -- that correspond to the config section and value to override, e.g. --paths.train ./train.spacy. Any (option/flag)
PRINTSDebugging information.
<Accordion title="Example outputs" spaced>
bash
$ python -m spacy debug pieces ./config.cfg
========================= Training corpus statistics =========================
Median token length: 1.0
Mean token length: 1.54
Token length range: [1, 13]

======================= Development corpus statistics =======================
Median token length: 1.0
Mean token length: 1.44
Token length range: [1, 8]
</Accordion>

train {id="train",tag="command"}

Train a pipeline. Expects data in spaCy's binary format and a config file with all settings and hyperparameters. Will save out the best model from all epochs, as well as the final pipeline. The --code argument can be used to provide a Python file that's imported before the training process starts. This lets you register custom functions and architectures and refer to them in your config, all while still using spaCy's built-in train workflow. If you need to manage complex multi-step training workflows, check out the new spaCy projects.

<Infobox title="New in v3.0" variant="warning">

The train command doesn't take a long list of command-line arguments anymore and instead expects a single config.cfg file containing all settings for the pipeline, training process and hyperparameters. Config values can be overwritten on the CLI if needed. For example, --paths.train ./train.spacy sets the variable train in the section [paths].

</Infobox>
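How a dotted override like --paths.train maps onto the nested config can be sketched as follows (`apply_overrides` is a hypothetical illustration, not spaCy's internal API):

```python
def apply_overrides(config: dict, overrides: dict) -> dict:
    """Set dotted keys like "paths.train" on a nested config dict,
    mirroring how CLI overrides map onto config sections."""
    for dotted, value in overrides.items():
        *sections, key = dotted.split(".")
        node = config
        for section in sections:
            node = node.setdefault(section, {})
        node[key] = value
    return config

config = {"paths": {"train": None, "dev": None}}
apply_overrides(config, {"paths.train": "./train.spacy"})
print(config["paths"]["train"])  # ./train.spacy
```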

Example

bash
$ python -m spacy train config.cfg --output ./output --paths.train ./train --paths.dev ./dev
bash
$ python -m spacy train [config_path] [--output] [--code] [--verbose] [--gpu-id] [overrides]
NameDescription
config_pathPath to training config file containing all settings and hyperparameters. If -, the data will be read from stdin. Union[Path, str] (positional)
--output, -oDirectory to store trained pipeline in. Will be created if it doesn't exist. Optional[Path] (option)
--code, -cPath to Python file with additional code to be imported. Allows registering custom functions for new architectures. Optional[Path] (option)
--verbose, -VShow more detailed messages during training. bool (flag)
--gpu-id, -gGPU ID or -1 for CPU. Defaults to -1. int (option)
--help, -hShow help message and available arguments. bool (flag)
overridesConfig parameters to override. Should be options starting with -- that correspond to the config section and value to override, e.g. --paths.train ./train.spacy. Any (option/flag)
CREATESThe final trained pipeline and the best trained pipeline.

Calling the training function from Python {id="train-function",version="3.2"}

The training CLI exposes a train helper function that lets you run the training just like spacy train. Usually it's easier to use the command line directly, but if you need to kick off training from code this is how to do it.

Example

python
from spacy.cli.train import train

train("./config.cfg", overrides={"paths.train": "./train.spacy", "paths.dev": "./dev.spacy"})

NameDescription
config_pathPath to the config to use for training. Union[str, Path]
output_pathOptional name of directory to save output model in. If not provided, a model will not be saved. Optional[Union[str, Path]]
keyword-only
use_gpuWhich GPU to use. Defaults to -1 for no GPU. int
overridesValues to override config settings. Dict[str, Any]

pretrain {id="pretrain",version="2.1",tag="command,experimental"}

Pretrain the "token to vector" (Tok2vec) layer of pipeline components on raw text, using an approximate language-modeling objective. Specifically, we load pretrained vectors, and train a component like a CNN, BiLSTM, etc. to predict vectors which match the pretrained ones. The weights are saved to a directory after each epoch. You can then include a path to one of these pretrained weights files in your training config as the init_tok2vec setting when you train your pipeline. This technique may be especially helpful if you have little labelled data. See the usage docs on pretraining for more info. To read the raw text, a JsonlCorpus is typically used.
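Conceptually, the objective compares the vectors predicted by the trained component against the static pretrained vectors. A toy sketch using mean squared error as a stand-in loss (spaCy's actual objective is configurable and may differ):

```python
def vector_loss(predicted, target):
    """Toy pretraining objective: the component is rewarded for
    predicting vectors close to the static pretrained ones
    (mean squared error over all dimensions, averaged per token)."""
    n_tokens = len(predicted)
    total = sum(
        (p - t) ** 2
        for pred_vec, target_vec in zip(predicted, target)
        for p, t in zip(pred_vec, target_vec)
    )
    return total / n_tokens

# Two tokens with 2-dimensional vectors
print(vector_loss([[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]))  # 0.5
```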

<Infobox title="Changed in v3.0" variant="warning">

As of spaCy v3.0, the pretrain command takes the same config file as the train command. This ensures that settings are consistent between pretraining and training. Settings for pretraining can be defined in the [pretraining] block of the config file and auto-generated by setting --pretraining on init fill-config. Also see the data format for details.

</Infobox>

Example

bash
$ python -m spacy pretrain config.cfg ./output_pretrain --paths.raw_text ./data.jsonl
bash
$ python -m spacy pretrain [config_path] [output_dir] [--code] [--resume-path] [--epoch-resume] [--gpu-id] [overrides]
NameDescription
config_pathPath to training config file containing all settings and hyperparameters. If -, the data will be read from stdin. Union[Path, str] (positional)
output_dirDirectory to save binary weights to on each epoch. Path (positional)
--code, -cPath to Python file with additional code to be imported. Allows registering custom functions for new architectures. Optional[Path] (option)
--resume-path, -rPath to pretrained weights from which to resume pretraining. Optional[Path] (option)
--epoch-resume, -erThe epoch to resume counting from when using --resume-path. Prevents unintended overwriting of existing weight files. Optional[int] (option)
--gpu-id, -gGPU ID or -1 for CPU. Defaults to -1. int (option)
--skip-last, -L <Tag variant="new">3.5.2</Tag>Skip saving model-last.bin. Defaults to False. bool (flag)
--help, -hShow help message and available arguments. bool (flag)
overridesConfig parameters to override. Should be options starting with -- that correspond to the config section and value to override, e.g. --training.dropout 0.2. Any (option/flag)
CREATESThe pretrained weights that can be used to initialize spacy train.

evaluate {id="evaluate",version="2",tag="command"}

The evaluate subcommand is superseded by spacy benchmark accuracy. evaluate is provided as an alias to benchmark accuracy for compatibility.

benchmark {id="benchmark", version="3.5"}

The spacy benchmark CLI includes commands for benchmarking the accuracy and speed of your spaCy pipelines.

accuracy {id="benchmark-accuracy", version="3.5", tag="command"}

Evaluate the accuracy of a trained pipeline. Expects a loadable spaCy pipeline (package name or path) and evaluation data in the binary .spacy format. The --gold-preproc option sets up the evaluation examples with gold-standard sentences and tokens for the predictions. Gold preprocessing helps the annotations align to the tokenization, and may result in sequences of more consistent length. However, it may reduce runtime accuracy due to train/test skew. To render a sample of dependency parses in an HTML file using the displaCy visualizations, pass an output directory as the --displacy-path argument.

bash
$ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [--gold-preproc] [--gpu-id] [--displacy-path] [--displacy-limit] [--per-component] [--spans-key]
NameDescription
modelPipeline to evaluate. Can be a package or a path to a data directory. str (positional)
data_pathLocation of evaluation data in spaCy's binary format. Path (positional)
--output, -oOutput JSON file for metrics. If not set, no metrics will be exported. Optional[Path] (option)
--code, -c <Tag variant="new">3</Tag>Path to Python file with additional code to be imported. Allows registering custom functions for new architectures. Optional[Path] (option)
--gold-preproc, -GUse gold preprocessing. bool (flag)
--gpu-id, -gGPU to use, if any. Defaults to -1 for CPU. int (option)
--displacy-path, -dpDirectory to output rendered parses as HTML. If not set, no visualizations will be generated. Optional[Path] (option)
--displacy-limit, -dlNumber of parses to generate per file. Defaults to 25. Keep in mind that a significantly higher number might cause the .html files to render slowly. int (option)
--per-component, -P <Tag variant="new">3.6</Tag>Whether to return the scores keyed by component name. Defaults to False. bool (flag)
--spans-key, -sk <Tag variant="new">3.6.2</Tag>Spans key to use when evaluating Doc.spans. Defaults to sc. str (option)
--help, -hShow help message and available arguments. bool (flag)
CREATESTraining results and optional metrics and visualizations.

speed {id="benchmark-speed", version="3.5", tag="command"}

Benchmark the speed of a trained pipeline with a 95% confidence interval. Expects a loadable spaCy pipeline (package name or path) and benchmark data in the binary .spacy format. The pipeline is warmed up before any measurements are taken.

bash
$ python -m spacy benchmark speed [model] [data_path] [--code] [--batch_size] [--no-shuffle] [--gpu-id] [--batches] [--warmup]
NameDescription
modelPipeline to benchmark the speed of. Can be a package or a path to a data directory. str (positional)
data_pathLocation of benchmark data in spaCy's binary format. Path (positional)
--code, -cPath to Python file with additional code to be imported. Allows registering custom functions for new architectures. Optional[Path] (option)
--batch-size, -bSet the batch size. If not set, the pipeline's batch size is used. Optional[int] (option)
--no-shuffleDo not shuffle documents in the benchmark data. bool (flag)
--gpu-id, -gGPU to use, if any. Defaults to -1 for CPU. int (option)
--batchesNumber of batches to benchmark on. Defaults to 50. Optional[int] (option)
--warmup, -wIterations over the benchmark data for warmup. Defaults to 3. Optional[int] (option)
--help, -hShow help message and available arguments. bool (flag)
PRINTSPipeline speed in words per second with a 95% confidence interval.
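The reported confidence interval can be reproduced from repeated per-batch words-per-second measurements. A sketch using a normal approximation (spaCy's exact method may differ):

```python
import statistics

def words_per_second_ci(wps_samples):
    """Mean words-per-second with a 95% confidence interval,
    using a normal approximation over per-batch measurements."""
    mean = statistics.mean(wps_samples)
    sem = statistics.stdev(wps_samples) / len(wps_samples) ** 0.5
    margin = 1.96 * sem  # z-score for a 95% interval
    return mean, mean - margin, mean + margin

mean, lo, hi = words_per_second_ci([14800, 15200, 15000, 14900, 15100])
print(f"{mean:.0f} words/s (95% CI: {lo:.0f}-{hi:.0f})")
```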

apply {id="apply", version="3.5", tag="command"}

Applies a trained pipeline to data and stores the resulting annotated documents in a DocBin. The input can be a single file or a directory. The recognized input formats are:

  1. .spacy
  2. .jsonl containing a user specified text_key
  3. Files with any other extension, assumed to be plain text containing a single document.

When a directory is provided it is traversed recursively to collect all files.

When loading a .spacy file, any potential annotations stored on the Doc that are not overwritten by the pipeline will be preserved. If you want to evaluate the pipeline on raw text only, make sure that the .spacy file does not contain any annotations.
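The routing between the three recognized input formats can be sketched by extension (`classify_input` is a hypothetical illustration; spaCy's actual handling also honors --text-key for .jsonl files):

```python
from pathlib import Path

def classify_input(path: Path) -> str:
    """Route an input file by extension, mirroring the three
    recognized formats of `spacy apply`."""
    suffix = path.suffix.lower()
    if suffix == ".spacy":
        return "docbin"  # pre-annotated Doc objects
    if suffix == ".jsonl":
        return "jsonl"   # one JSON object per line, with a text key
    return "text"        # any other extension: one plain-text document

print(classify_input(Path("reviews.jsonl")))  # jsonl
print(classify_input(Path("notes.txt")))      # text
```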

bash
$ python -m spacy apply [model] [data-path] [output-file] [--code] [--text-key] [--force-overwrite] [--gpu-id] [--batch-size] [--n-process]
NameDescription
modelPipeline to apply to the data. Can be a package or a path to a data directory. str (positional)
data_pathLocation of data to be evaluated in spaCy's binary format, jsonl, or plain text. Path (positional)
output-fileOutput DocBin path. str (positional)
--code, -cPath to Python file with additional code to be imported. Allows registering custom functions for new architectures. Optional[Path] (option)
--text-key, -tkThe key for .jsonl files to use to grab the texts from. Defaults to text. Optional[str] (option)
--force-overwrite, -FIf the provided output-file already exists, then force apply to overwrite it. If this is False (default) then quits with a warning instead. bool (flag)
--gpu-id, -gGPU to use, if any. Defaults to -1 for CPU. int (option)
--batch-size, -bBatch size to use for prediction. Defaults to 1. int (option)
--n-process, -nNumber of processes to use for prediction. Defaults to 1. int (option)
--help, -hShow help message and available arguments. bool (flag)
CREATESA DocBin with the annotations from the model for all the files found in data-path.

find-threshold {id="find-threshold",version="3.5",tag="command"}

Runs prediction trials for a trained model with varying thresholds to maximize the specified metric. The search space for the threshold is traversed linearly from 0 to 1 in n_trials steps. Results are displayed in a table on stdout (the corresponding API call to spacy.cli.find_threshold.find_threshold() returns all results).

This is applicable only to components whose predictions are influenced by thresholds, e.g. textcat_multilabel and spancat, but not textcat. Note that the full path to the corresponding threshold attribute in the config has to be provided.
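The linear traversal from 0 to 1 in n_trials steps can be sketched as follows, where score_fn stands in for evaluating the pipeline at a given threshold (`find_best_threshold` is a hypothetical helper, not the spacy.cli API):

```python
def find_best_threshold(score_fn, n_trials: int = 11):
    """Traverse thresholds linearly from 0 to 1 in `n_trials` steps
    and return the threshold maximizing the metric, plus all results."""
    results = {}
    for i in range(n_trials):
        threshold = i / (n_trials - 1)
        results[threshold] = score_fn(threshold)
    best = max(results, key=results.get)
    return best, results

# Toy metric peaking at threshold 0.6
best, scores = find_best_threshold(lambda t: 1 - abs(t - 0.6), n_trials=11)
print(best)  # 0.6
```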

Examples

bash
# For textcat_multilabel:
$ python -m spacy find-threshold my_nlp data.spacy textcat_multilabel threshold cats_macro_f
bash
# For spancat:
$ python -m spacy find-threshold my_nlp data.spacy spancat threshold spans_sc_f
NameDescription
modelPipeline to evaluate. Can be a package or a path to a data directory. str (positional)
data_pathPath to file with DocBin with docs to use for threshold search. Path (positional)
pipe_nameName of pipe to examine thresholds for. str (positional)
threshold_keyKey of threshold attribute in component's configuration. str (positional)
scores_keyName of the score metric to optimize. str (positional)
--n_trials, -nNumber of trials to determine optimal thresholds. int (option)
--code, -cPath to Python file with additional code to be imported. Allows registering custom functions for new architectures. Optional[Path] (option)
--gpu-id, -gGPU to use, if any. Defaults to -1 for CPU. int (option)
--gold-preproc, -GUse gold preprocessing. bool (flag)
--verbose, -V, -VVDisplay more information for debugging purposes. bool (flag)
--help, -hShow help message and available arguments. bool (flag)

assemble {id="assemble",tag="command"}

Assemble a pipeline from a config file without additional training. Expects a config file with all settings and hyperparameters. The --code argument can be used to import a Python file that lets you register custom functions and refer to them in your config.

Example

bash
$ python -m spacy assemble config.cfg ./output
bash
$ python -m spacy assemble [config_path] [output_dir] [--code] [--verbose] [overrides]
NameDescription
config_pathPath to the config file containing all settings and hyperparameters. If -, the data will be read from stdin. Union[Path, str] (positional)
output_dirDirectory to store the final pipeline in. Will be created if it doesn't exist. Optional[Path] (option)
--code, -cPath to Python file with additional code to be imported. Allows registering custom functions. Optional[Path] (option)
--verbose, -VShow more detailed messages during processing. bool (flag)
--help, -hShow help message and available arguments. bool (flag)
overridesConfig parameters to override. Should be options starting with -- that correspond to the config section and value to override, e.g. --paths.data ./data. Any (option/flag)
CREATESThe final assembled pipeline.

package {id="package",tag="command"}

Generate an installable Python package from an existing pipeline data directory. All data files are copied over. If additional code files are provided (e.g. Python files containing custom registered functions like pipeline components), they are copied into the package and imported in the __init__.py. If the path to a meta.json is supplied, or a meta.json is found in the input directory, this file is used. Otherwise, the data can be entered directly from the command line. spaCy will then create a build artifact that you can distribute and install with pip install. As of v3.1, the package command will also create a formatted README.md based on the pipeline information defined in the meta.json. If a README.md is already present in the source directory, it will be used instead.

<Infobox title="New in v3.0" variant="warning">

The spacy package command now also builds the .tar.gz archive automatically, so you don't have to run python setup.py sdist separately anymore. To disable this, you can set --build none. You can also choose to build a binary wheel (which installs more efficiently) by setting --build wheel, or to build both the sdist and wheel by setting --build sdist,wheel.

</Infobox>
bash
$ python -m spacy package [input_dir] [output_dir] [--code] [--meta-path] [--create-meta] [--build] [--name] [--version] [--force]

Example

bash
$ python -m spacy package /input /output
$ cd /output/en_pipeline-0.0.0
$ pip install dist/en_pipeline-0.0.0.tar.gz
NameDescription
input_dirPath to directory containing pipeline data. Path (positional)
output_dirDirectory to create package folder in. Path (positional)
--code, -c <Tag variant="new">3</Tag>Comma-separated paths to Python files to be included in the package and imported in its __init__.py. This allows including registering functions and custom components. str (option)
--meta-path, -mPath to meta.json file (optional). Optional[Path] (option)
--create-meta, -CCreate a meta.json file on the command line, even if one already exists in the directory. If an existing file is found, its entries will be shown as the defaults in the command line prompt. bool (flag)
--build, -b <Tag variant="new">3</Tag>Comma-separated artifact formats to build. Can be sdist (for a .tar.gz archive) and/or wheel (for a binary .whl file), or none if you want to run this step manually. The generated artifacts can be installed by pip install. Defaults to sdist. str (option)
--name, -n <Tag variant="new">3</Tag>Package name to override in meta. Optional[str] (option)
--version, -v <Tag variant="new">3</Tag>Package version to override in meta. Useful when training new versions, as it doesn't require editing the meta template. Optional[str] (option)
--force, -fForce overwriting of existing folder in output directory. bool (flag)
--help, -hShow help message and available arguments. bool (flag)
CREATESA Python package containing the spaCy pipeline.

project {id="project",version="3"}

The spacy project CLI includes subcommands for working with spaCy projects, end-to-end workflows for building and deploying custom spaCy pipelines.

project clone {id="project-clone",tag="command"}

Clone a project template from a Git repository. Calls into git under the hood and can use the sparse checkout feature if available, so you're only downloading what you need. By default, spaCy's project templates repo is used, but you can provide any other repo (public or private) that you have access to using the --repo option.

bash
$ python -m spacy project clone [name] [dest] [--repo] [--branch] [--sparse]

Example

bash
$ python -m spacy project clone pipelines/ner_wikiner

Clone from custom repo:

bash
$ python -m spacy project clone template --repo https://github.com/your_org/your_repo
NameDescription
nameThe name of the template to clone, relative to the repo. Can be a top-level directory or a subdirectory like dir/template. str (positional)
destWhere to clone the project. Defaults to current working directory. Path (positional)
--repo, -rThe repository to clone from. Can be any public or private Git repo you have access to. str (option)
--branch, -bThe branch to clone from. Defaults to master. str (option)
--sparse, -SEnable sparse checkout to only check out and download what's needed. Requires Git v2.22+. bool (flag)
--help, -hShow help message and available arguments. bool (flag)
CREATESThe cloned project directory.

project assets {id="project-assets",tag="command"}

Fetch project assets like datasets and pretrained weights. Assets are defined in the assets section of the project.yml. If a checksum is provided, the file is only downloaded if no local file with the same checksum exists and spaCy will show an error if the checksum of the downloaded file doesn't match. If assets don't specify a url they're considered "private" and you have to take care of putting them into the destination directory yourself. If a local path is provided, the asset is copied into the current project.

```bash
$ python -m spacy project assets [project_dir] [--extra] [--sparse]
```

Example

```bash
$ python -m spacy project assets [--sparse]
```
| Name | Description |
| --- | --- |
| `project_dir` | Path to project directory. Defaults to current working directory. `Path` (positional) |
| `--extra`, `-e` <Tag variant="new">3.3.1</Tag> | Download assets marked as "extra". Default false. `bool` (flag) |
| `--sparse`, `-S` | Enable sparse checkout to only check out and download what's needed. Requires Git v2.22+. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **CREATES** | Downloaded or copied assets defined in the project.yml. |

project run {id="project-run",tag="command"}

Run a named command or workflow defined in the project.yml. If a workflow name is specified, all commands in the workflow are run, in order. If commands define dependencies or outputs, they will only be re-run if state has changed. For example, if the input dataset changes, a preprocessing command that depends on those files will be re-run.
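As a sketch, a command with dependencies and outputs, wired into a workflow, might look something like this (command names, scripts and paths are illustrative):

```yaml
### project.yml (illustrative)
workflows:
  all:
    - preprocess
    - train
commands:
  - name: preprocess
    script:
      - 'python scripts/preprocess.py assets/raw.jsonl corpus/train.spacy'
    deps:
      - 'assets/raw.jsonl'
    outputs:
      - 'corpus/train.spacy'
  - name: train
    script:
      - 'python -m spacy train configs/config.cfg --output training/'
    deps:
      - 'corpus/train.spacy'
    outputs:
      - 'training/model-best'
```

Running `python -m spacy project run all` would then execute `preprocess` and `train` in order, skipping any step whose deps and outputs are unchanged.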

```bash
$ python -m spacy project run [subcommand] [project_dir] [--force] [--dry]
```

Example

```bash
$ python -m spacy project run train
```
| Name | Description |
| --- | --- |
| `subcommand` | Name of the command or workflow to run. `str` (positional) |
| `project_dir` | Path to project directory. Defaults to current working directory. `Path` (positional) |
| `--force`, `-F` | Force re-running steps, even if nothing changed. `bool` (flag) |
| `--dry`, `-D` | Perform a dry run and don't execute scripts. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **EXECUTES** | The command defined in the project.yml. |

project push {id="project-push",tag="command"}

Upload all available files or directories listed in the outputs section of commands to a remote storage. Outputs are archived and compressed prior to upload, and addressed in the remote storage using the output's relative path (URL encoded), a hash of its command string and dependencies, and a hash of its file contents. This means push should never overwrite a file in your remote. If all the hashes match, the contents are the same and nothing happens. If the contents are different, the new version of the file is uploaded. Deleting obsolete files is left up to you.

Remotes can be defined in the remotes section of the project.yml. Under the hood, spaCy uses cloudpathlib to communicate with the remote storages, so you can use any protocol that cloudpathlib supports, including S3, Google Cloud Storage, and the local filesystem, although you may need to install extra dependencies to use certain protocols.

```bash
$ python -m spacy project push [remote] [project_dir]
```

Example

```bash
$ python -m spacy project push my_bucket
```

```yaml
### project.yml
remotes:
  my_bucket: 's3://my-spacy-bucket'
```
| Name | Description |
| --- | --- |
| `remote` | The name of the remote to upload to. Defaults to `"default"`. `str` (positional) |
| `project_dir` | Path to project directory. Defaults to current working directory. `Path` (positional) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **UPLOADS** | All project outputs that exist and are not already stored in the remote. |

project pull {id="project-pull",tag="command"}

Download all files or directories listed as outputs for commands, unless they are already present locally. When searching for files in the remote, pull won't just look at the output path, but will also consider the command string and the hashes of the dependencies. For instance, let's say you've previously pushed a checkpoint to the remote, but now you've changed some hyper-parameters. Because you've changed the inputs to the command, if you run pull, you won't retrieve the stale result. If you train your pipeline and push the outputs to the remote, the outputs will be saved alongside the prior outputs, so if you change the config back, you'll be able to fetch back the result.

Remotes can be defined in the remotes section of the project.yml. Under the hood, spaCy uses cloudpathlib to communicate with the remote storages, so you can use any protocol that cloudpathlib supports, including S3, Google Cloud Storage, and the local filesystem, although you may need to install extra dependencies to use certain protocols.

```bash
$ python -m spacy project pull [remote] [project_dir]
```

Example

```bash
$ python -m spacy project pull my_bucket
```

```yaml
### project.yml
remotes:
  my_bucket: 's3://my-spacy-bucket'
```
| Name | Description |
| --- | --- |
| `remote` | The name of the remote to download from. Defaults to `"default"`. `str` (positional) |
| `project_dir` | Path to project directory. Defaults to current working directory. `Path` (positional) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **DOWNLOADS** | All project outputs that do not exist locally and can be found in the remote. |

project document {id="project-document",tag="command"}

Auto-generate a pretty Markdown-formatted README for your project, based on its project.yml. Will create sections that document the available commands, workflows and assets. The auto-generated content will be placed between two hidden markers, so you can add your own custom content before or after the auto-generated documentation. When you re-run the project document command, only the auto-generated part is replaced.
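The hidden markers are HTML comments in the generated README; the resulting layout looks roughly like this (the surrounding custom content is illustrative):

```markdown
Your custom intro content, preserved across re-runs.

<!-- SPACY PROJECT: AUTO-GENERATED DOCS START (do not remove) -->

... auto-generated sections for commands, workflows and assets ...

<!-- SPACY PROJECT: AUTO-GENERATED DOCS END (do not remove) -->

Your custom footer content, also preserved.
```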

```bash
$ python -m spacy project document [project_dir] [--output] [--no-emoji]
```

Example

```bash
$ python -m spacy project document --output README.md
```
<Accordion title="Example output" spaced>

For more examples, see the templates in our projects repo.

</Accordion>
| Name | Description |
| --- | --- |
| `project_dir` | Path to project directory. Defaults to current working directory. `Path` (positional) |
| `--output`, `-o` | Path to output file or `-` for stdout (default). If a file is specified and it already exists and contains auto-generated docs, only the auto-generated docs section is replaced. `Path` (option) |
| `--no-emoji`, `-NE` | Don't use emoji in the titles. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **CREATES** | The Markdown-formatted project documentation. |

project dvc {id="project-dvc",tag="command"}

Auto-generate Data Version Control (DVC) config file. Calls dvc run with --no-exec under the hood to generate the dvc.yaml. A DVC project can only define one pipeline, so you need to specify one workflow defined in the project.yml. If no workflow is specified, the first defined workflow is used. The DVC config will only be updated if the project.yml changed. For details, see the DVC integration docs.
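For orientation, a dvc.yaml generated from a two-step workflow has roughly this shape (stage names, commands and paths below are illustrative, not the exact output spaCy writes):

```yaml
### dvc.yaml (illustrative)
stages:
  preprocess:
    cmd: 'python -m spacy project run preprocess'
    deps:
      - 'assets/raw.jsonl'
    outs:
      - 'corpus/train.spacy'
  train:
    cmd: 'python -m spacy project run train'
    deps:
      - 'corpus/train.spacy'
    outs:
      - 'training/model-best'
```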

<Infobox variant="warning">

This command requires DVC to be installed and initialized in the project directory, e.g. via dvc init. You'll also need to add the assets you want to track with dvc add.

</Infobox>
```bash
$ python -m spacy project dvc [project_dir] [workflow] [--force] [--verbose] [--quiet]
```

Example

```bash
$ git init
$ dvc init
$ python -m spacy project dvc all
```
| Name | Description |
| --- | --- |
| `project_dir` | Path to project directory. Defaults to current working directory. `Path` (positional) |
| `workflow` | Name of workflow defined in project.yml. Defaults to first workflow if not set. `Optional[str]` (option) |
| `--force`, `-F` | Force-updating config file. `bool` (flag) |
| `--verbose`, `-V` | Print more output generated by DVC. `bool` (flag) |
| `--quiet`, `-q` | Print no output generated by DVC. `bool` (flag) |
| `--help`, `-h` | Show help message and available arguments. `bool` (flag) |
| **CREATES** | A dvc.yaml file in the project directory, based on the steps defined in the given workflow. |

huggingface-hub {id="huggingface-hub",version="3.1"}

The spacy huggingface-hub CLI includes commands for uploading your trained spaCy pipelines to the Hugging Face Hub.

Installation

```bash
$ pip install spacy-huggingface-hub
$ huggingface-cli login
```
<Infobox variant="warning">

To use this command, you need the spacy-huggingface-hub package installed. Installing the package will automatically add the huggingface-hub command to the spaCy CLI.

</Infobox>

huggingface-hub push {id="huggingface-hub-push",tag="command"}

Push a spaCy pipeline to the Hugging Face Hub. Expects a .whl file packaged with spacy package and --build wheel. For more details, see the spaCy project integration.

```bash
$ python -m spacy huggingface-hub push [whl_path] [--org] [--msg] [--verbose]
```

Example

```bash
$ python -m spacy huggingface-hub push en_ner_fashion-0.0.0-py3-none-any.whl
```
| Name | Description |
| --- | --- |
| `whl_path` | The path to the .whl file packaged with spacy package. `Path` (positional) |
| `--org`, `-o` | Optional name of organization to which the pipeline should be uploaded. `str` (option) |
| `--msg`, `-m` | Commit message to use for update. Defaults to "Update spaCy pipeline". `str` (option) |
| `--verbose`, `-V` | Output additional info for debugging, e.g. the full generated hub metadata. `bool` (flag) |
| **UPLOADS** | The pipeline to the hub. |