website/docs/api/matcher.mdx
The `Matcher` lets you find words and phrases using rules describing their
token attributes. Rules can refer to token annotations (like the text or
part-of-speech tags), as well as lexical attributes like `Token.is_punct`.
Applying the matcher to a `Doc` gives you access to the matched tokens in
context. For in-depth examples and workflows for combining rules and
statistical models, see the usage guide on rule-based matching.
### Example

```json
[
  {"LOWER": "i"},
  {"LEMMA": {"IN": ["like", "love"]}},
  {"POS": "NOUN", "OP": "+"}
]
```
A pattern added to the `Matcher` consists of a list of dictionaries. Each
dictionary describes one token and its attributes. The available token
pattern keys correspond to a number of `Token` attributes. The supported
attributes for rule-based matching are:
| Attribute | Description |
|---|---|
| `ORTH` | The exact verbatim text of a token. |
| `TEXT` | The exact verbatim text of a token. |
| `NORM` | The normalized form of the token text. |
| `LOWER` | The lowercase form of the token text. |
| `LENGTH` | The length of the token text. |
| `IS_ALPHA`, `IS_ASCII`, `IS_DIGIT` | Token text consists of alphabetic characters, ASCII characters, digits. |
| `IS_LOWER`, `IS_UPPER`, `IS_TITLE` | Token text is in lowercase, uppercase, titlecase. |
| `IS_PUNCT`, `IS_SPACE`, `IS_STOP` | Token is punctuation, whitespace, stop word. |
| `IS_SENT_START` | Token is the start of a sentence. |
| `LIKE_NUM`, `LIKE_URL`, `LIKE_EMAIL` | Token text resembles a number, URL, email. |
| `SPACY` | Token has a trailing space. |
| `POS`, `TAG`, `MORPH`, `DEP`, `LEMMA`, `SHAPE` | The token's simple and extended part-of-speech tag, morphological analysis, dependency label, lemma, shape. |
| `ENT_TYPE` | The token's entity label. |
| `ENT_IOB` | The IOB part of the token's entity tag. |
| `ENT_ID` | The token's entity ID (`ent_id`). |
| `ENT_KB_ID` | The token's entity knowledge base ID (`ent_kb_id`). |
| `_` | Properties in custom extension attributes. |
| `OP` | Operator or quantifier to determine how often to match a token pattern. |
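The `_` key maps to user-defined extension attributes. As a minimal sketch of how this works (the `is_fruit` extension is hypothetical, registered here just for the example, and a blank pipeline is used so no statistical model is needed):

```python
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Token

nlp = spacy.blank("en")

# Hypothetical custom extension attribute, registered for this example only
Token.set_extension("is_fruit", default=False, force=True)

matcher = Matcher(nlp.vocab)
# Patterns refer to extension attributes under the "_" key
matcher.add("FRUIT", [[{"_": {"is_fruit": True}}]])

doc = nlp("I like apples")
# Set the attribute manually, since no pipeline component does it for us
doc[2]._.is_fruit = True

for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # "apples"
```

Note that extension attributes must be registered with `Token.set_extension` before any pattern referring to them is added.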
Operators and quantifiers define how often a token pattern should be matched:
### Example

```json
[
  {"POS": "ADJ", "OP": "*"},
  {"POS": "NOUN", "OP": "+"},
  {"POS": "PROPN", "OP": "{2}"}
]
```
| OP | Description |
|---|---|
| `!` | Negate the pattern, by requiring it to match exactly 0 times. |
| `?` | Make the pattern optional, by allowing it to match 0 or 1 times. |
| `+` | Require the pattern to match 1 or more times. |
| `*` | Allow the pattern to match 0 or more times. |
| `{n}` | Require the pattern to match exactly _n_ times. |
| `{n,m}` | Require the pattern to match at least _n_ but not more than _m_ times. |
| `{n,}` | Require the pattern to match at least _n_ times. |
| `{,m}` | Require the pattern to match at most _m_ times. |
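To make the quantifiers concrete, here is a small runnable sketch. It uses a blank English pipeline and text-based attributes only (a blank pipeline has no tagger, so `POS` wouldn't be set); the rule name and sentence are made up for illustration:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# "very" may repeat one or more times before "good"
pattern = [{"LOWER": "very", "OP": "+"}, {"LOWER": "good"}]
matcher.add("VERY_GOOD", [pattern])

doc = nlp("This is very very good.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)
```

Because `+` produces every possible match, both `"very good"` and `"very very good"` are returned here; see the `greedy` argument of `Matcher.add` for filtering overlapping matches.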
Instead of mapping to a single value, token patterns can also map to a
dictionary of properties, for example to indicate that the expected value is a
member of a list or to compare it to another value.
### Example

```json
[
  {"LEMMA": {"IN": ["like", "love", "enjoy"]}},
  {"POS": "PROPN", "LENGTH": {">=": 10}}
]
```
| Attribute | Description |
|---|---|
| `REGEX` | Attribute value matches the regular expression at any position in the string. |
| `FUZZY` | Attribute value matches if the `fuzzy_compare` method matches for `(value, pattern, -1)`. The default method allows a Levenshtein edit distance of at least 2 and up to 30% of the pattern string length. |
| `FUZZY1`, `FUZZY2`, ... `FUZZY9` | Attribute value matches if the `fuzzy_compare` method matches for `(value, pattern, N)`. The default method allows a Levenshtein edit distance of at most N (1-9). |
| `IN` | Attribute value is a member of a list. |
| `NOT_IN` | Attribute value is not a member of a list. |
| `IS_SUBSET` | Attribute value (for `MORPH` or custom list attributes) is a subset of a list. |
| `IS_SUPERSET` | Attribute value (for `MORPH` or custom list attributes) is a superset of a list. |
| `INTERSECTS` | Attribute value (for `MORPH` or custom list attributes) has a non-empty intersection with a list. |
| `==`, `>=`, `<=`, `>`, `<` | Attribute value is equal to, greater than or equal to, less than or equal to, greater than, or less than the given value. |
As of spaCy v3.5, `REGEX` and `FUZZY` can be used in combination with `IN` and
`NOT_IN`.
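As a runnable sketch of the property dicts above (blank pipeline; the rule name, word list, and regular expression are invented for illustration):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# Match "apple" or "banana" (case-insensitive), followed by a token that
# looks like a version string such as "v1" or "v12"
pattern = [
    {"LOWER": {"IN": ["apple", "banana"]}},
    {"TEXT": {"REGEX": r"^v\d+$"}},
]
matcher.add("FRUIT_VERSION", [pattern])

doc = nlp("We shipped apple v2 yesterday.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # "apple v2"
```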
## Matcher.\_\_init\_\_

Create the rule-based `Matcher`. If `validate=True` is set, all patterns added
to the matcher will be validated against a JSON schema and a
`MatchPatternError` is raised if problems are found. Those can include
incorrect types (e.g. a string where an integer is expected) or unexpected
property names.
### Example

```python
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
```
| Name | Description |
|---|---|
| `vocab` | The vocabulary object, which must be shared with the documents the matcher will operate on. |
| `validate` | Validate all patterns added to this matcher. |
| `fuzzy_compare` | The comparison method used for the `FUZZY` operators. |
## Matcher.\_\_call\_\_

Find all token sequences matching the supplied patterns on the `Doc` or
`Span`. Note that if a single label has multiple patterns associated with it,
the returned matches don't provide a way to tell which pattern was responsible
for the match.
### Example

```python
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
pattern = [{"LOWER": "hello"}, {"LOWER": "world"}]
matcher.add("HelloWorld", [pattern])
doc = nlp("hello world!")
matches = matcher(doc)
```
| Name | Description |
|---|---|
| `doclike` | The `Doc` or `Span` to match over. |
| _keyword-only_ | |
| `as_spans` <Tag variant="new">3</Tag> | Instead of tuples, return a list of `Span` objects of the matches, with the `match_id` assigned as the span label. Defaults to `False`. |
| `allow_missing` <Tag variant="new">3</Tag> | Whether to skip checks for missing annotation for attributes included in patterns. Defaults to `False`. |
| `with_alignments` <Tag variant="new">3.0.6</Tag> | Return match alignment information as part of the match tuple as `List[int]` with the same length as the matched span. Each entry denotes the corresponding index of the token in the pattern. If `as_spans` is set to `True`, this setting is ignored. Defaults to `False`. |
| **RETURNS** | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end]`. The `match_id` is the ID of the added match pattern. If `as_spans` is set to `True`, a list of `Span` objects is returned instead. |
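For instance, passing `as_spans=True` yields `Span` objects whose `label_` is the rule name (a minimal sketch with a blank pipeline):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
matcher.add("HelloWorld", [[{"LOWER": "hello"}, {"LOWER": "world"}]])

doc = nlp("hello world!")
# Return Span objects instead of (match_id, start, end) tuples
spans = matcher(doc, as_spans=True)
for span in spans:
    print(span.text, span.label_)  # "hello world" "HelloWorld"
```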
## Matcher.\_\_len\_\_

Get the number of rules added to the matcher. Note that this only returns the
number of rules (identical to the number of IDs), not the number of individual
patterns.
### Example

```python
matcher = Matcher(nlp.vocab)
assert len(matcher) == 0
matcher.add("Rule", [[{"ORTH": "test"}]])
assert len(matcher) == 1
```
| Name | Description |
|---|---|
| **RETURNS** | The number of rules. |
## Matcher.\_\_contains\_\_

Check whether the matcher contains rules for a match ID.
### Example

```python
matcher = Matcher(nlp.vocab)
assert "Rule" not in matcher
matcher.add("Rule", [[{"ORTH": "test"}]])
assert "Rule" in matcher
```
| Name | Description |
|---|---|
| `key` | The match ID. |
| **RETURNS** | Whether the matcher contains rules for this match ID. |
## Matcher.add

Add a rule to the matcher, consisting of an ID key, one or more patterns, and
an optional callback function to act on the matches. The callback function
will receive the arguments `matcher`, `doc`, `i` and `matches`. If a pattern
already exists for the given ID, the patterns will be extended. An `on_match`
callback will be overwritten.
### Example

```python
def on_match(matcher, doc, id, matches):
    print('Matched!', matches)

matcher = Matcher(nlp.vocab)
patterns = [
    [{"LOWER": "hello"}, {"LOWER": "world"}],
    [{"ORTH": "Google"}, {"ORTH": "Maps"}]
]
matcher.add("TEST_PATTERNS", patterns, on_match=on_match)
doc = nlp("HELLO WORLD on Google Maps.")
matches = matcher(doc)
```

<Infobox title="Changed in v3.0" variant="warning">

As of spaCy v3.0, `Matcher.add` takes a list of patterns as the second
argument (instead of a variable number of arguments). The `on_match` callback
becomes an optional keyword argument.

```diff
patterns = [[{"TEXT": "Google"}, {"TEXT": "Now"}], [{"TEXT": "GoogleNow"}]]
- matcher.add("GoogleNow", on_match, *patterns)
+ matcher.add("GoogleNow", patterns, on_match=on_match)
```

</Infobox>
| Name | Description |
|---|---|
| `match_id` | An ID for the thing you're matching. |
| `patterns` | Match pattern. A pattern consists of a list of dicts, where each dict describes a token. |
| _keyword-only_ | |
| `on_match` | Callback function to act on matches. Takes the arguments `matcher`, `doc`, `i` and `matches`. |
| `greedy` <Tag variant="new">3</Tag> | Optional filter for greedy matches. Can either be `"FIRST"` or `"LONGEST"`. |
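The effect of the `greedy` filter can be sketched as follows (blank pipeline; the rule name and sentence are illustrative). Without the filter, `"OP": "+"` returns every possible run of matching tokens; `greedy="LONGEST"` keeps only the longest non-overlapping match:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
doc = nlp("This is very very very nice.")

# Without a greedy filter, "+" yields every contiguous run of "very"
matcher = Matcher(nlp.vocab)
matcher.add("VERY", [[{"LOWER": "very", "OP": "+"}]])
print(len(matcher(doc)))  # 6 overlapping matches

# With greedy="LONGEST", only the longest match survives
matcher = Matcher(nlp.vocab)
matcher.add("VERY", [[{"LOWER": "very", "OP": "+"}]], greedy="LONGEST")
matches = matcher(doc)
_, start, end = matches[0]
print(doc[start:end].text)  # "very very very"
```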
## Matcher.remove

Remove a rule from the matcher. A `KeyError` is raised if the match ID does
not exist.
### Example

```python
matcher.add("Rule", [[{"ORTH": "test"}]])
assert "Rule" in matcher
matcher.remove("Rule")
assert "Rule" not in matcher
```
| Name | Description |
|---|---|
| `key` | The ID of the match rule. |
## Matcher.get

Retrieve the pattern stored for a key. Returns the rule as an
`(on_match, patterns)` tuple containing the callback and available patterns.
### Example

```python
matcher.add("Rule", [[{"ORTH": "test"}]])
on_match, patterns = matcher.get("Rule")
```
| Name | Description |
|---|---|
| `key` | The ID of the match rule. |
| **RETURNS** | The rule, as an `(on_match, patterns)` tuple. |