lexer.coffee
The CoffeeScript Lexer. Uses a series of token-matching regexes to attempt matches against the beginning of the source code. When a match is found, a token is produced, we consume the match, and start again. Tokens are in the form:
[tag, value, locationData]
where locationData is {first_line, first_column, last_line, last_column, last_line_exclusive, last_column_exclusive}, which is a format that can be fed directly into Jison. These are read by Jison in the parser.lexer function defined in coffeescript.coffee.
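For example (a minimal sketch, assuming the compiled module is importable as ./lexer from the build), tokenizing a one-line program and printing each triple:

{Lexer} = require './lexer'

tokens = new Lexer().tokenize 'answer = 42'
for [tag, value, locationData] in tokens
  console.log tag, value, "#{locationData.first_line}:#{locationData.first_column}"
# Prints something like: IDENTIFIER answer 0:0, = = 0:7, NUMBER 42 0:9, ...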
{Rewriter, INVERSES, UNFINISHED} = require './rewriter'
Import the helpers we need.
{count, starts, compact, repeat, invertLiterate, merge,
attachCommentsToNode, locationDataToString, throwSyntaxError
replaceUnicodeCodePointEscapes, flatten, parseNumber} = require './helpers'
The Lexer class reads a stream of CoffeeScript and divvies it up into tagged tokens. Some potential ambiguity in the grammar has been avoided by pushing some extra smarts into the Lexer.
exports.Lexer = class Lexer
tokenize is the Lexer’s main method. Scan by attempting to match tokens one at a time, using a regular expression anchored at the start of the remaining code, or a custom recursive token-matching method (for interpolations). When the next token has been recorded, we move forward within the code past the token, and begin again.
Each tokenizing method is responsible for returning the number of characters it has consumed.
Before returning the token stream, run it through the Rewriter.
  tokenize: (code, opts = {}) ->
    @literate   = opts.literate    # Are we lexing literate CoffeeScript?
    @indent     = 0                # The current indentation level.
    @baseIndent = 0                # The overall minimum indentation level.
    @continuationLineAdditionalIndent = 0 # The over-indentation at the current level.
    @outdebt    = 0                # The under-outdentation at the current level.
    @indents    = []               # The stack of all current indentation levels.
    @indentLiteral = ''            # The indentation.
    @ends       = []               # The stack for pairing up tokens.
    @tokens     = []               # Stream of parsed tokens in the form `['TYPE', value, location data]`.
    @seenFor    = no               # Used to recognize `FORIN`, `FOROF` and `FORFROM` tokens.
    @seenImport = no               # Used to recognize `IMPORT FROM? AS?` tokens.
    @seenExport = no               # Used to recognize `EXPORT FROM? AS?` tokens.
    @importSpecifierList = no      # Used to identify when in an `IMPORT {...} FROM? ...`.
    @exportSpecifierList = no      # Used to identify when in an `EXPORT {...} FROM? ...`.
    @jsxDepth = 0                  # Used to optimize JSX checks, how deep in JSX we are.
    @jsxObjAttribute = {}          # Used to detect if JSX attributes is wrapped in {} (<div {props...} />).

    @chunkLine =
      opts.line or 0               # The start line for the current @chunk.
    @chunkColumn =
      opts.column or 0             # The start column of the current @chunk.
    @chunkOffset =
      opts.offset or 0             # The start offset for the current @chunk.
    @locationDataCompensations =
      opts.locationDataCompensations or {} # The location data compensations for the current @chunk.

    code = @clean code             # The stripped, cleaned original source code.
At every position, run through this list of attempted matches, short-circuiting if any of them succeed. Their order determines precedence: @literalToken is the fallback catch-all.
    i = 0
    while @chunk = code[i..]
      consumed = \
        @identifierToken() or
        @commentToken()    or
        @whitespaceToken() or
        @lineToken()       or
        @stringToken()     or
        @numberToken()     or
        @jsxToken()        or
        @regexToken()      or
        @jsToken()         or
        @literalToken()
Update position.
      [@chunkLine, @chunkColumn, @chunkOffset] = @getLineAndColumnFromChunk consumed

      i += consumed

      return {@tokens, index: i} if opts.untilBalanced and @ends.length is 0

    @closeIndentation()
    @error "missing #{end.tag}", (end.origin ? end)[2] if end = @ends.pop()
    return @tokens if opts.rewrite is off
    (new Rewriter).rewrite @tokens
Preprocess the code to remove leading and trailing whitespace, carriage returns, etc. If we’re lexing literate CoffeeScript, strip external Markdown by removing all lines that aren’t indented by at least four spaces or a tab.
  clean: (code) ->
    thusFar = 0
    if code.charCodeAt(0) is BOM
      code = code.slice 1
      @locationDataCompensations[0] = 1
      thusFar += 1
    if WHITESPACE.test code
      code = "\n#{code}"
      @chunkLine--
      @locationDataCompensations[0] ?= 0
      @locationDataCompensations[0] -= 1
    code = code
      .replace /\r/g, (match, offset) =>
        @locationDataCompensations[thusFar + offset] = 1
        ''
      .replace TRAILING_SPACES, ''
    code = invertLiterate code if @literate
    code
Matches identifying literals: variables, keywords, method names, etc. Check to ensure that JavaScript reserved words aren’t being used as identifiers. Because CoffeeScript reserves a handful of keywords that are allowed in JavaScript, we’re careful not to tag them as keywords when referenced as property names here, so you can still do jQuery.is() even though is means === otherwise.
  identifierToken: ->
    inJSXTag = @atJSXTag()
    regex = if inJSXTag then JSX_ATTRIBUTE else IDENTIFIER
    return 0 unless match = regex.exec @chunk
    [input, id, colon] = match
Preserve length of id for location data
idLength = id.length
    poppedToken = undefined

    if id is 'own' and @tag() is 'FOR'
      @token 'OWN', id
      return id.length
    if id is 'from' and @tag() is 'YIELD'
      @token 'FROM', id
      return id.length
    if id is 'as' and @seenImport
      if @value() is '*'
        @tokens[@tokens.length - 1][0] = 'IMPORT_ALL'
      else if @value(yes) in COFFEE_KEYWORDS
        prev = @prev()
        [prev[0], prev[1]] = ['IDENTIFIER', @value(yes)]
      if @tag() in ['DEFAULT', 'IMPORT_ALL', 'IDENTIFIER']
        @token 'AS', id
        return id.length
    if id is 'as' and @seenExport
      if @tag() in ['IDENTIFIER', 'DEFAULT']
        @token 'AS', id
        return id.length
      if @value(yes) in COFFEE_KEYWORDS
        prev = @prev()
        [prev[0], prev[1]] = ['IDENTIFIER', @value(yes)]
        @token 'AS', id
        return id.length
    if id is 'default' and @seenExport and @tag() in ['EXPORT', 'AS']
      @token 'DEFAULT', id
      return id.length
    if id is 'assert' and (@seenImport or @seenExport) and @tag() is 'STRING'
      @token 'ASSERT', id
      return id.length
    if id is 'do' and regExSuper = /^(\s*super)(?!\(\))/.exec @chunk[3...]
      @token 'SUPER', 'super'
      @token 'CALL_START', '('
      @token 'CALL_END', ')'
      [input, sup] = regExSuper
      return sup.length + 3

    prev = @prev()

    tag =
      if colon or prev? and
         (prev[0] in ['.', '?.', '::', '?::'] or
         not prev.spaced and prev[0] is '@')
        'PROPERTY'
      else
        'IDENTIFIER'

    tokenData = {}
    if tag is 'IDENTIFIER' and (id in JS_KEYWORDS or id in COFFEE_KEYWORDS) and
       not (@exportSpecifierList and id in COFFEE_KEYWORDS)
      tag = id.toUpperCase()
      if tag is 'WHEN' and @tag() in LINE_BREAK
        tag = 'LEADING_WHEN'
      else if tag is 'FOR'
        @seenFor = {endsLength: @ends.length}
      else if tag is 'UNLESS'
        tag = 'IF'
      else if tag is 'IMPORT'
        @seenImport = yes
      else if tag is 'EXPORT'
        @seenExport = yes
      else if tag in UNARY
        tag = 'UNARY'
      else if tag in RELATION
        if tag isnt 'INSTANCEOF' and @seenFor
          tag = 'FOR' + tag
          @seenFor = no
        else
          tag = 'RELATION'
          if @value() is '!'
            poppedToken = @tokens.pop()
            tokenData.invert = poppedToken.data?.original ? poppedToken[1]
    else if tag is 'IDENTIFIER' and @seenFor and id is 'from' and
       isForFrom(prev)
      tag = 'FORFROM'
      @seenFor = no
Throw an error on attempts to use get or set as keywords, or what CoffeeScript would normally interpret as calls to functions named get or set, i.e. get({foo: function () {}}).
    else if tag is 'PROPERTY' and prev
      if prev.spaced and prev[0] in CALLABLE and /^[gs]et$/.test(prev[1]) and
         @tokens.length > 1 and @tokens[@tokens.length - 2][0] not in ['.', '?.', '@']
        @error "'#{prev[1]}' cannot be used as a keyword, or as a function call without parentheses", prev[2]
      else if prev[0] is '.' and @tokens.length > 1 and
          (prevprev = @tokens[@tokens.length - 2])[0] is 'UNARY' and prevprev[1] is 'new'
        prevprev[0] = 'NEW_TARGET'
      else if prev[0] is '.' and @tokens.length > 1 and
          (prevprev = @tokens[@tokens.length - 2])[0] is 'IMPORT' and prevprev[1] is 'import'
        @seenImport = no
        prevprev[0] = 'IMPORT_META'
      else if @tokens.length > 2
        prevprev = @tokens[@tokens.length - 2]
        if prev[0] in ['@', 'THIS'] and prevprev and prevprev.spaced and
           /^[gs]et$/.test(prevprev[1]) and
           @tokens[@tokens.length - 3][0] not in ['.', '?.', '@']
          @error "'#{prevprev[1]}' cannot be used as a keyword, or as a function call without parentheses", prevprev[2]

    if tag is 'IDENTIFIER' and id in RESERVED and not inJSXTag
      @error "reserved word '#{id}'", length: id.length

    unless tag is 'PROPERTY' or @exportSpecifierList or @importSpecifierList
      if id in COFFEE_ALIASES
        alias = id
        id = COFFEE_ALIAS_MAP[id]
        tokenData.original = alias
      tag = switch id
        when '!'                 then 'UNARY'
        when '==', '!='          then 'COMPARE'
        when 'true', 'false'     then 'BOOL'
        when 'break', 'continue', \
             'debugger'          then 'STATEMENT'
        when '&&', '||'          then id
        else  tag

    tagToken = @token tag, id, length: idLength, data: tokenData
    tagToken.origin = [tag, alias, tagToken[2]] if alias
    if poppedToken
      [tagToken[2].first_line, tagToken[2].first_column, tagToken[2].range[0]] =
        [poppedToken[2].first_line, poppedToken[2].first_column, poppedToken[2].range[0]]
    if colon
      colonOffset = input.lastIndexOf if inJSXTag then '=' else ':'
      colonToken = @token ':', ':', offset: colonOffset
      colonToken.jsxColon = yes if inJSXTag # used by rewriter
    if inJSXTag and tag is 'IDENTIFIER' and prev[0] isnt ':'
      @token ',', ',', length: 0, origin: tagToken, generated: yes

    input.length
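A small sketch of the property-name exception described above (assuming the built module is importable as ./lexer): is lexes as the == alias when used as an operator, but stays a plain PROPERTY after a dot, so jQuery.is() keeps working.

{Lexer} = require './lexer'
tagsOf = (code) -> (token[0] for token in new Lexer().tokenize code, rewrite: off)

console.log tagsOf 'a is b'        # [..., 'COMPARE', ...], since `is` becomes `==`
console.log tagsOf 'jQuery.is(a)'  # [..., '.', 'PROPERTY', 'CALL_START', ...]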
Matches numbers, including decimals, hex, and exponential notation. Be careful not to interfere with ranges in progress.
  numberToken: ->
    return 0 unless match = NUMBER.exec @chunk

    number = match[0]
    lexedLength = number.length

    switch
      when /^0[BOX]/.test number
        @error "radix prefix in '#{number}' must be lowercase", offset: 1
      when /^0\d*[89]/.test number
        @error "decimal literal '#{number}' must not be prefixed with '0'", length: lexedLength
      when /^0\d+/.test number
        @error "octal literal '#{number}' must be prefixed with '0o'", length: lexedLength

    parsedValue = parseNumber number
    tokenData = {parsedValue}

    tag = if parsedValue is Infinity then 'INFINITY' else 'NUMBER'
    if tag is 'INFINITY'
      tokenData.original = number
    @token tag, number,
      length: lexedLength
      data: tokenData
    lexedLength
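A quick sketch of the rules above (assuming the built module is importable as ./lexer): well-formed literals become NUMBER tokens carrying parsedValue, while the malformed prefixes raise the errors shown.

{Lexer} = require './lexer'

[numberToken] = new Lexer().tokenize '0b101', rewrite: off
console.log numberToken[0], numberToken.data.parsedValue   # NUMBER 5

try
  new Lexer().tokenize '0X1F'
catch err
  console.log err.message   # radix prefix in '0X1F' must be lowercase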
Matches strings, including multiline strings, as well as heredocs, with or without interpolation.
  stringToken: ->
    [quote] = STRING_START.exec(@chunk) || []
    return 0 unless quote
If the preceding token is from and this is an import or export statement, properly tag the from.
    prev = @prev()
    if prev and @value() is 'from' and (@seenImport or @seenExport)
      prev[0] = 'FROM'

    regex = switch quote
      when "'"   then STRING_SINGLE
      when '"'   then STRING_DOUBLE
      when "'''" then HEREDOC_SINGLE
      when '"""' then HEREDOC_DOUBLE

    {tokens, index: end} = @matchWithInterpolations regex, quote

    heredoc = quote.length is 3
    if heredoc
Find the smallest indentation. It will be removed from all lines later.
      indent = null
      doc = (token[1] for token, i in tokens when token[0] is 'NEOSTRING').join '#{}'
      while match = HEREDOC_INDENT.exec doc
        attempt = match[1]
        indent = attempt if indent is null or 0 < attempt.length < indent.length

    delimiter = quote.charAt(0)
    @mergeInterpolationTokens tokens, {quote, indent, endOffset: end}, (value) =>
      @validateUnicodeCodePointEscapes value, delimiter: quote

    if @atJSXTag()
      @token ',', ',', length: 0, origin: @prev, generated: yes

    end
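A sketch of the resulting token stream for an interpolated string (assuming the built module is importable as ./lexer, with rewrite: off to see the raw lexer output):

{Lexer} = require './lexer'

tags = (t[0] for t in new Lexer().tokenize '"a#{b}c"', rewrite: off)
console.log tags
# Roughly: STRING_START, STRING, INTERPOLATION_START, IDENTIFIER, INTERPOLATION_END, STRING, STRING_END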
Matches and consumes comments. The comments are taken out of the token stream and saved for later, to be reinserted into the output after everything has been parsed and the JavaScript code generated.
commentToken:(chunk = @chunk, {heregex, returnCommentTokens = no, offsetInChunk = 0} = {}) -\>return0unlessmatch = chunk.match COMMENT
[commentWithSurroundingWhitespace, hereLeadingWhitespace, hereComment, hereTrailingWhitespace, lineComment] = match
contents =null
Does this comment follow code on the same line?
leadingNewline =/^\s\*\n+\s\*#/.test commentWithSurroundingWhitespaceifhereComment
matchIllegal = HERECOMMENT_ILLEGAL.exec hereCommentifmatchIllegal
@error"block comments cannot contain #{matchIllegal[0]}",
offset:'###'.length + matchIllegal.index, length: matchIllegal[0].length
Parse indentation or outdentation as if this block comment didn’t exist.
chunk = chunk.replace"####{hereComment}###",''
Remove leading newlines, like Rewriter::removeLeadingNewlines, to avoid the creation of unwanted TERMINATOR tokens.
chunk = chunk.replace/^\n+/,''@lineToken {chunk}
Pull out the ###-style comment’s content, and format it.
content = hereComment
contents = [{
content
length: commentWithSurroundingWhitespace.length - hereLeadingWhitespace.length - hereTrailingWhitespace.length
leadingWhitespace: hereLeadingWhitespace
}]else
The COMMENT regex captures successive line comments as one token. Remove any leading newlines before the first comment, but preserve blank lines between line comments.
leadingNewlines =''content = lineComment.replace/^(\n\*)/,(leading) -\>leadingNewlines = leading''precedingNonCommentLines =''hasSeenFirstCommentLine =nocontents =
content.split'\n'.map (line, index) ->unlessline.indexOf('#') >-1precedingNonCommentLines +="\n#{line}"returnleadingWhitespace =''content = line.replace/^([|\t]\*)#/,(\_, whitespace) -\>leadingWhitespace = whitespace''comment = {
content
length:'#'.length + content.length
leadingWhitespace:"#{unless hasSeenFirstCommentLine then leadingNewlines else ''}#{precedingNonCommentLines}#{leadingWhitespace}"precededByBlankLine: !!precedingNonCommentLines
}
hasSeenFirstCommentLine =yesprecedingNonCommentLines =''comment
.filter (comment) -> comment getIndentSize = ({leadingWhitespace, nonInitial}) -\>lastNewlineIndex = leadingWhitespace.lastIndexOf'\n'ifhereComment?ornotnonInitialreturnnullunlesslastNewlineIndex >-1elselastNewlineIndex ?=-1leadingWhitespace.length -1- lastNewlineIndex
commentAttachments =for{content, length, leadingWhitespace, precededByBlankLine}, iincontents
nonInitial = iisnt0leadingNewlineOffset =ifnonInitialthen1else0offsetInChunk += leadingNewlineOffset + leadingWhitespace.length
indentSize = getIndentSize {leadingWhitespace, nonInitial}
noIndent =notindentSize?orindentSizeis-1commentAttachment = {
content
here: hereComment?
newLine: leadingNewlineornonInitial# Line comments after the first one start new lines, by definition.locationData: @makeLocationData {offsetInChunk, length}
precededByBlankLine
indentSize
indented:notnoIndentandindentSize > @indent
outdented:notnoIndentandindentSize < @indent
}
commentAttachment.heregex =yesifheregex
offsetInChunk += length
commentAttachment
prev = @prev()unlessprev
If there’s no previous token, create a placeholder token to attach this comment to; and follow with a newline.
commentAttachments[0].newLine =yes@lineToken chunk: @chunk[commentWithSurroundingWhitespace.length..], offset: commentWithSurroundingWhitespace.length# Set the indent.placeholderToken = @makeToken'JS','', offset: commentWithSurroundingWhitespace.length, generated:yesplaceholderToken.comments = commentAttachments
@tokens.push placeholderToken
@newlineToken commentWithSurroundingWhitespace.lengthelseattachCommentsToNode commentAttachments, prevreturncommentAttachmentsifreturnCommentTokens
commentWithSurroundingWhitespace.length
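A sketch of the attachment behavior (assuming the built module is importable as ./lexer): the comment never appears as its own token, but a neighboring token gains a comments property.

{Lexer} = require './lexer'

tokens = new Lexer().tokenize 'a = 1 # note'
carriers = (t for t in tokens when t.comments?)
console.log carriers.length > 0              # true
console.log carriers[0].comments[0].content  # roughly " note"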
Matches JavaScript interpolated directly into the source via backticks.
  jsToken: ->
    return 0 unless @chunk.charAt(0) is '`' and
      (match = (matchedHere = HERE_JSTOKEN.exec(@chunk)) or JSTOKEN.exec(@chunk))
Convert escaped backticks to backticks, and escaped backslashes just before escaped backticks to backslashes
script = match[1]
{length} = match[0]
    @token 'JS', script, {length, data: {here: !!matchedHere}}
length
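For instance (a sketch, assuming the built module is importable as ./lexer), backticked JavaScript arrives as a single JS token whose data records whether the triple-backtick "here" form was used.

{Lexer} = require './lexer'

tokens = new Lexer().tokenize 'now = `Date.now()`', rewrite: off
jsToken = (t for t in tokens when t[0] is 'JS')[0]
console.log jsToken[1], jsToken.data.here   # Date.now() false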
Matches regular expression literals, as well as multiline extended ones. Lexing regular expressions is difficult to distinguish from division, so we borrow some basic heuristics from JavaScript and Ruby.
regexToken:-\>switchwhenmatch = REGEX_ILLEGAL.exec @chunk
@error"regular expressions cannot begin with #{match[2]}",
offset: match.index + match[1].lengthwhenmatch = @matchWithInterpolations HEREGEX,'///'{tokens, index} = match
comments = []whilematchedComment = HEREGEX_COMMENT.exec @chunk[0...index]
{index: commentIndex} = matchedComment
[fullMatch, leadingWhitespace, comment] = matchedComment
comments.push {comment, offsetInChunk: commentIndex + leadingWhitespace.length}
commentTokens = flatten(forcommentOptsincomments
@commentToken commentOpts.comment,Object.assign commentOpts, heregex:yes, returnCommentTokens:yes)whenmatch = REGEX.exec @chunk
[regex, body, closed] = match
@validateEscapes body, isRegex:yes, offsetInChunk:1index = regex.length
prev = @prev()ifprevifprev.spacedandprev[0]inCALLABLEreturn0ifnotclosedorPOSSIBLY_DIVISION.test regexelseifprev[0]inNOT_REGEXreturn0@error'missing / (unclosed regex)'unlessclosedelsereturn0[flags] = REGEX_FLAGS.exec @chunk[index..]
end = index + flags.length
origin = @makeToken'REGEX',null, length: endswitchwhennotVALID_FLAGS.test flags
@error"invalid regular expression flags #{flags}", offset: index, length: flags.lengthwhenregexortokens.lengthis1delimiter =ifbodythen'/'else'///'body ?= tokens[0][1]
@validateUnicodeCodePointEscapes body, {delimiter}
@token'REGEX',"/#{body}/#{flags}", {length: end, origin, data: {delimiter}}else@token'REGEX\_START','(', {length:0, origin, generated:yes}
@token'IDENTIFIER','RegExp', length:0, generated:yes@token'CALL\_START','(', length:0, generated:yes@mergeInterpolationTokens tokens, {double:yes, heregex: {flags}, endOffset: end - flags.length, quote:'///'},(str) =\>@validateUnicodeCodePointEscapes str, {delimiter}ifflags
@token',',',', offset: index -1, length:0, generated:yes@token'STRING','"'+ flags +'"', offset: index, length: flags.length
@token')',')', offset: end, length:0, generated:yes@token'REGEX\_END',')', offset: end, length:0, generated:yes
Explicitly attach any heregex comments to the REGEX/REGEX_END token.
ifcommentTokens?.length
addTokenData @tokens[@tokens.length -1],
heregexCommentTokens: commentTokens
end
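A sketch of the division-versus-regex heuristic (assuming the built module is importable as ./lexer): a slash followed by whitespace after a spaced callable is treated as division, while a slash in value position starts a regex.

{Lexer} = require './lexer'
tagsOf = (code) -> (t[0] for t in new Lexer().tokenize code, rewrite: off)

console.log 'MATH'  in tagsOf 'x = a / b'   # true, division
console.log 'REGEX' in tagsOf 'x = /b/'     # true, regex literal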
Matches newlines, indents, and outdents, and determines which is which. If we can detect that the current line is continued onto the next line, then the newline is suppressed:
elements
.each( ... )
.map( ... )
Keeps track of the level of indentation, because a single outdent token can close multiple indents, so we need to know how far in we happen to be.
lineToken:({chunk = @chunk, offset = 0} = {}) -\>return0unlessmatch = MULTI_DENT.exec chunk
indent = match[0]
prev = @prev()
backslash = prev?[0]is'\\'@seenFor =nounless(backslashor@seenFor?.endsLength < @ends.length)and@seenFor
@seenImport =nounless(backslashand@seenImport)or@importSpecifierList
@seenExport =nounless(backslashand@seenExport)or@exportSpecifierList
size = indent.length -1- indent.lastIndexOf'\n'noNewlines = @unfinished()
newIndentLiteral =ifsize >0thenindent[-size..]else''unless/^(.?)\1\*$/.exec newIndentLiteral
@error'mixed indentation', offset: indent.lengthreturnindent.length
minLiteralLength =Math.min newIndentLiteral.length, @indentLiteral.lengthifnewIndentLiteral[...minLiteralLength]isnt@indentLiteral[...minLiteralLength]
@error'indentation mismatch', offset: indent.lengthreturnindent.lengthifsize - @continuationLineAdditionalIndentis@indentifnoNewlinesthen@suppressNewlines()else@newlineToken offsetreturnindent.lengthifsize > @indentifnoNewlines
@continuationLineAdditionalIndent = size - @indentunlessbackslashif@continuationLineAdditionalIndent
prev.continuationLineIndent = @indent + @continuationLineAdditionalIndent
@suppressNewlines()[email protected]
@baseIndent = @indent = size
@indentLiteral = newIndentLiteralreturnindent.length
diff = size - @indent + @outdebt
@token'INDENT', diff, offset: offset + indent.length - size, length: size
@indents.push diff
@ends.push {tag:'OUTDENT'}
@outdebt = @continuationLineAdditionalIndent =0@indent = size
@indentLiteral = newIndentLiteralelseifsize < @baseIndent
@error'missing indentation', offset: offset + indent.lengthelseendsContinuationLineIndentation = @continuationLineAdditionalIndent >0@continuationLineAdditionalIndent =0@outdentToken {moveOut: @indent - size, noNewlines, outdentLength: indent.length, offset, indentSize: size, endsContinuationLineIndentation}
indent.length
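A sketch of the resulting stream for an indented block (assuming the built module is importable as ./lexer, with rewrite: off): the indentation change shows up as explicit INDENT and OUTDENT tokens around the body.

{Lexer} = require './lexer'

code = 'if x\n  y'
console.log (t[0] for t in new Lexer().tokenize code, rewrite: off)
# Roughly: IF, IDENTIFIER, INDENT, IDENTIFIER, OUTDENT, TERMINATOR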
Record an outdent token or multiple tokens, if we happen to be moving back inwards past several recorded indents. Sets new @indent value.
outdentToken:({moveOut, noNewlines, outdentLength = 0, offset = 0, indentSize, endsContinuationLineIndentation}) -\>decreasedIndent = @indent - moveOutwhilemoveOut >0lastIndent = @indents[@indents.length -1]ifnotlastIndent
@outdebt = moveOut =0elseif@outdebtandmoveOut <= @outdebt
@outdebt -= moveOut
moveOut =0elsedent = @indents.pop() + @outdebtifoutdentLengthand@chunk[outdentLength]inINDENTABLE_CLOSERS
decreasedIndent -= dent - moveOut
moveOut = dent
@outdebt =0
pair might call outdentToken, so preserve decreasedIndent
@pair'OUTDENT'@token'OUTDENT', moveOut, length: outdentLength, indentSize: indentSize + moveOut - dent
moveOut -= dent
@outdebt -= moveOutifdent
@suppressSemicolons()unless@tag()is'TERMINATOR'ornoNewlines
terminatorToken = @token'TERMINATOR','\n', offset: offset + outdentLength, length:0terminatorToken.endsContinuationLineIndentation = {preContinuationLineIndent: @indent}ifendsContinuationLineIndentation
@indent = decreasedIndent
@indentLiteral = @indentLiteral[...decreasedIndent]
this
Matches and consumes non-meaningful whitespace. Tag the previous token as being “spaced”, because there are some cases where it makes a difference.
  whitespaceToken: ->
    return 0 unless (match = WHITESPACE.exec @chunk) or
                    (nline = @chunk.charAt(0) is '\n')
    prev = @prev()
    prev[if match then 'spaced' else 'newLine'] = true if prev
    if match then match[0].length else 0
Generate a newline token. Consecutive newlines get merged together.
  newlineToken: (offset) ->
    @suppressSemicolons()
    @token 'TERMINATOR', '\n', {offset, length: 0} unless @tag() is 'TERMINATOR'
    this
Use a \ at a line-ending to suppress the newline. The slash is removed here once its job is done.
  suppressNewlines: ->
    prev = @prev()
    if prev[1] is '\\'
      if prev.comments and @tokens.length > 1
@tokens.length should be at least 2 (some code, then \). If something puts a \ after nothing, they deserve to lose any comments that trail it.
attachCommentsToNode prev.comments, @tokens[@tokens.length -2]
@tokens.pop()
this
jsxToken:-\>firstChar = @chunk[0]
Check the previous token to detect if attribute is spread.
prevChar [email protected] >0then@tokens[@tokens.length -1][0]else''iffirstCharis'\<'match = JSX_IDENTIFIER.exec(@chunk[1...])orJSX_FRAGMENT_IDENTIFIER.exec(@chunk[1...])return0unlessmatchand(
@jsxDepth >0or
Not the right hand side of an unspaced comparison (i.e. a<b).
not(prev = @prev())orprev.spacedorprev[0]notinCOMPARABLE_LEFT_SIDE
)
[input, id] = match
fullId = idif'.'inid
[id, properties...] = id.split'.'elseproperties = []
tagToken = @token'JSX\_TAG', id,
length: id.length +1data:
openingBracketToken: @makeToken'\<','\<'tagNameToken: @makeToken'IDENTIFIER', id, offset:1offset = id.length +1forpropertyinproperties
@token'.','.', {offset}
offset +=1@token'PROPERTY', property, {offset}
offset += property.length
@token'CALL\_START','(', generated:yes@token'[','[', generated:[email protected] {tag:'/\>', origin: tagToken, name: id, properties}
@jsxDepth++returnfullId.length +1elseifjsxTag = @atJSXTag()if@chunk[...2]is'/\>'# Self-closing tag.@pair'/\>'@token']',']',
length:2generated:yes@token'CALL\_END',')',
length:2generated:yesdata:
selfClosingSlashToken: @makeToken'/','/'closingBracketToken: @makeToken'\>','\>', offset:1@jsxDepth--return2elseiffirstCharis'{'ifprevCharis':'
This token represents the start of a JSX attribute value that’s an expression (e.g. the {b} in <div a={b} />). Our grammar represents the beginnings of expressions as ( tokens, so make this into a ( token that displays as {.
token = @token'(','{'@jsxObjAttribute[@jsxDepth] =no
tag attribute name as JSX
addTokenData @tokens[@tokens.length -3],
jsx:yeselsetoken = @token'{','{'@jsxObjAttribute[@jsxDepth] [email protected] {tag:'}', origin: token}return1elseiffirstCharis'\>'# end of opening tag
Ignore terminators inside a tag.
{origin: openingTagToken} = @pair'/\>'# As if the current tag was self-closing.@token']',']',
generated:yesdata:
closingBracketToken: @makeToken'\>','\>'@token',','JSX\_COMMA', generated:yes{tokens, index: end} =
@matchWithInterpolations INSIDE_JSX,'\>','\</', JSX_INTERPOLATION
@mergeInterpolationTokens tokens, {endOffset: end, jsx:yes},(value) =\>@validateUnicodeCodePointEscapes value, delimiter:'\>'match = JSX_IDENTIFIER.exec(@chunk[end...])orJSX_FRAGMENT_IDENTIFIER.exec(@chunk[end...])ifnotmatchormatch[1]isnt"#{jsxTag.name}#{(".#{property}" for property in jsxTag.properties).join ''}"@error"expected corresponding JSX closing tag for #{jsxTag.name}",
jsxTag.origin.data.tagNameToken[2]
[, fullTagName] = match
afterTag = end + fullTagName.lengthif@chunk[afterTag]isnt'\>'@error"missing closing \> after tag name", offset: afterTag, length:1
-2/+2 for the opening </ and +1 for the closing >.
endToken = @token'CALL\_END',')',
offset: end -2length: fullTagName.length +3generated:yesdata:
closingTagOpeningBracketToken: @makeToken'\<','\<', offset: end -2closingTagSlashToken: @makeToken'/','/', offset: end -1
TODO: individual tokens for complex tag name? eg < / A . B >
closingTagNameToken: @makeToken'IDENTIFIER', fullTagName, offset: end
closingTagClosingBracketToken: @makeToken'\>','\>', offset: end + fullTagName.length
make the closing tag location data more easily accessible to the grammar
addTokenData openingTagToken, endToken.data
@jsxDepth--returnafterTag +1elsereturn0elseif@atJSXTag1iffirstCharis'}'@pair firstCharif@jsxObjAttribute[@jsxDepth]
@token'}','}'@jsxObjAttribute[@jsxDepth] =noelse@token')','}'@token',',',', generated:yesreturn1elsereturn0elsereturn0atJSXTag:(depth = 0) -\>returnnoif@jsxDepthis0i = @ends.length -1i--while@ends[i]?.tagis'OUTDENT'ordepth-- >0# Ignore indents.last = @ends[i]
last?.tagis'/\>'andlast
We treat all other single characters as a token. E.g.: ( ) , . ! Multi-character operators are also literal tokens, so that Jison can assign the proper order of operations. There are some symbols that we tag specially here. ; and newlines are both treated as a TERMINATOR, we distinguish parentheses that indicate a method call from regular parentheses, and so on.
literalToken:-\>ifmatch = OPERATOR.exec @chunk
[value] = match
@tagParameters()ifCODE.test valueelsevalue = @chunk.charAt0tag = value
prev = @prev()ifprevandvaluein['=', COMPOUND_ASSIGN...]
skipToken =falseifvalueis'='andprev[1]in['||','&&']andnotprev.spaced
prev[0] ='COMPOUND\_ASSIGN'prev[1] +='='prev.data.original +='='ifprev.data?.original
prev[2].range = [
prev[2].range[0]
prev[2].range[1] +1]
prev[2].last_column +=1prev[2].last_column_exclusive +=1prev = @tokens[@tokens.length -2]
skipToken =trueifprevandprev[0]isnt'PROPERTY'origin = prev.origin ? prev
message = isUnassignable prev[1], origin[1]
@error message, origin[2]ifmessagereturnvalue.lengthifskipTokenifvalueis'('andprev?[0]is'IMPORT'prev[0] ='DYNAMIC\_IMPORT'ifvalueis'{'and@seenImport
@importSpecifierList =yeselseif@importSpecifierListandvalueis'}'@importSpecifierList =noelseifvalueis'{'andprev?[0]is'EXPORT'@exportSpecifierList =yeselseif@exportSpecifierListandvalueis'}'@exportSpecifierList =noifvalueis';'@error'unexpected ;'ifprev?[0]in['=', UNFINISHED...]
@seenFor = @seenImport = @seenExport =notag ='TERMINATOR'elseifvalueis'\*'andprev?[0]is'EXPORT'tag ='EXPORT\_ALL'elseifvalueinMATHthentag ='MATH'elseifvalueinCOMPAREthentag ='COMPARE'elseifvalueinCOMPOUND_ASSIGNthentag ='COMPOUND\_ASSIGN'elseifvalueinUNARYthentag ='UNARY'elseifvalueinUNARY_MATHthentag ='UNARY\_MATH'elseifvalueinSHIFTthentag ='SHIFT'elseifvalueis'?'andprev?.spacedthentag ='BIN?'elseifprevifvalueis'('andnotprev.spacedandprev[0]inCALLABLE
prev[0] ='FUNC\_EXIST'ifprev[0]is'?'tag ='CALL\_START'elseifvalueis'['and((prev[0]inINDEXABLEandnotprev.spaced)or(prev[0]is'::'))# `.prototype` can’t be a method you can call.tag ='INDEX\_START'switchprev[0]when'?'thenprev[0] ='INDEX\_SOAK'token = @makeToken tag, valueswitchvaluewhen'(','{','['[email protected] {tag: INVERSES[value], origin: token}when')','}',']'then@pair value
@tokens.push @makeToken tag, value
value.length
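A sketch of the call-versus-grouping distinction mentioned above (assuming the built module is importable as ./lexer): an unspaced ( after a callable becomes CALL_START, while a spaced one stays an ordinary paren at this stage (the Rewriter adds implicit calls later).

{Lexer} = require './lexer'
tagsOf = (code) -> (t[0] for t in new Lexer().tokenize code, rewrite: off)

console.log 'CALL_START' in tagsOf 'f(x)'    # true
console.log 'CALL_START' in tagsOf 'f (x)'   # false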
A source of ambiguity in our grammar used to be parameter lists in function definitions versus argument lists in function calls. Walk backwards, tagging parameters specially in order to make things easier for the parser.
  tagParameters: ->
    return @tagDoIife() if @tag() isnt ')'
    stack = []
    {tokens} = this
    i = tokens.length
    paramEndToken = tokens[--i]
    paramEndToken[0] = 'PARAM_END'
    while tok = tokens[--i]
      switch tok[0]
        when ')'
          stack.push tok
        when '(', 'CALL_START'
          if stack.length
            stack.pop()
          else if tok[0] is '('
            tok[0] = 'PARAM_START'
            return @tagDoIife i - 1
          else
            paramEndToken[0] = 'CALL_END'
            return this
this
Tag do followed by a function differently than do followed by, e.g., an identifier, to allow for different grammar precedence.
  tagDoIife: (tokenIndex) ->
    tok = @tokens[tokenIndex ? @tokens.length - 1]
    return this unless tok?[0] is 'DO'
    tok[0] = 'DO_IIFE'
    this
Close up all remaining open blocks at the end of the file.
  closeIndentation: ->
    @outdentToken moveOut: @indent, indentSize: 0
Match the contents of a delimited token and expand variables and expressions inside it using Ruby-like notation for substitution of arbitrary expressions.
"Hello #{name.capitalize()}."
If it encounters an interpolation, this method will recursively create a new Lexer and tokenize until the { of #{ is balanced with a }.
- `regex` matches the contents of a token (but not `delimiter`, and not `#{` if interpolations are desired).
- `delimiter` is the delimiter of the token. Examples are `'`, `"`, `'''`, `"""` and `///`.
- `closingDelimiter` is different from `delimiter` only in JSX
- `interpolators` matches the start of an interpolation, for JSX it’s both `{` and `<` (i.e. nested JSX tag)
This method allows us to have strings within interpolations within strings, ad infinitum.
matchWithInterpolations:(regex, delimiter, closingDelimiter = delimiter, interpolators = /^#\{/) -\>tokens = []
offsetInChunk = delimiter.lengthreturnnullunless@chunk[...offsetInChunk]isdelimiter
str = @chunk[offsetInChunk..]loop[strPart] = regex.exec str
@validateEscapes strPart, {isRegex: delimiter.charAt(0)is'/', offsetInChunk}
Push a fake 'NEOSTRING' token, which will get turned into a real string later.
tokens.push @makeToken'NEOSTRING', strPart, offset: offsetInChunk
str = str[strPart.length..]
offsetInChunk += strPart.lengthbreakunlessmatch = interpolators.exec str
[interpolator] = match
To remove the # in #{.
interpolationOffset = interpolator.length -1[line, column, offset] = @getLineAndColumnFromChunk offsetInChunk + interpolationOffset
rest = str[interpolationOffset..]
{tokens: nested, index} =newLexer().tokenize rest, {line, column, offset, untilBalanced:on, @locationDataCompensations}
Account for the # in #{.
index += interpolationOffset
braceInterpolator = str[index -1]is'}'ifbraceInterpolator
Turn the leading and trailing { and } into parentheses. Unnecessary parentheses will be removed later.
[open, ..., close] = nested
open[0] ='INTERPOLATION\_START'open[1] ='('open[2].first_column -= interpolationOffset
open[2].range = [
open[2].range[0] - interpolationOffset
open[2].range[1]
]
close[0] ='INTERPOLATION\_END'close[1] =')'close.origin = ['','end of interpolation', close[2]]
Remove leading 'TERMINATOR' (if any).
nested.splice1,1ifnested[1]?[0]is'TERMINATOR'
Remove trailing 'INDENT'/'OUTDENT' pair (if any).
nested.splice-3,2ifnested[nested.length -3]?[0]is'INDENT'andnested[nested.length -2][0]is'OUTDENT'unlessbraceInterpolator
We are not using { and }, so wrap the interpolated tokens instead.
open = @makeToken'INTERPOLATION\_START','(', offset: offsetInChunk, length:0, generated:yesclose = @makeToken'INTERPOLATION\_END',')', offset: offsetInChunk + index, length:0, generated:yesnested = [open, nested..., close]
Push a fake 'TOKENS' token, which will get turned into real tokens later.
tokens.push ['TOKENS', nested]
str = str[index..]
offsetInChunk += indexunlessstr[...closingDelimiter.length]isclosingDelimiter
@error"missing #{closingDelimiter}", length: delimiter.length
{tokens, index: offsetInChunk + closingDelimiter.length}
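Because each interpolation recursively spins up a fresh Lexer, nesting works to any depth; a quick sketch (assuming the built module is importable as ./lexer):

{Lexer} = require './lexer'

nested = '"outer #{ "inner #{ x }" } done"'
console.log (new Lexer().tokenize nested).length > 0   # lexes without error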
Merge the array tokens of the fake token types 'TOKENS' and 'NEOSTRING' (as returned by matchWithInterpolations) into the token stream. The value of 'NEOSTRING's are converted using fn and turned into strings using options first.
mergeInterpolationTokens:(tokens, options, fn) -\>{quote, indent, double, heregex, endOffset, jsx} = optionsiftokens.length >1lparen = @token'STRING\_START','(', length: quote?.length ?0, data: {quote}, generated:notquote?.length
firstIndex = @tokens.length
$ = tokens.length -1fortoken, iintokens
[tag, value] = tokenswitchtagwhen'TOKENS'
There are comments (and nothing else) in this interpolation.
ifvalue.lengthis2and(value[0].commentsorvalue[1].comments)
placeholderToken = @makeToken'JS','', generated:yes
Use the same location data as the first parenthesis.
placeholderToken[2] = value[0][2]forvalinvaluewhenval.comments
placeholderToken.comments ?= []
placeholderToken.comments.push val.comments...
value.splice1,0, placeholderToken
Push all the tokens in the fake 'TOKENS' token. These already have sane location data.
locationToken = value[0]
tokensToPush = valuewhen'NEOSTRING'
Convert 'NEOSTRING' into 'STRING'.
converted = fn.call this, token[1], i
addTokenData token, initialChunk:yesifiis0addTokenData token, finalChunk:yesifiis$
addTokenData token, {indent, quote, double}
addTokenData token, {heregex}ifheregex
addTokenData token, {jsx}ifjsx
token[0] ='STRING'token[1] ='"'+ converted +'"'iftokens.lengthis1andquote?
token[2].first_column -= quote.lengthiftoken[1].substr(-2,1)is'\n'token[2].last_line +=1token[2].last_column = quote.length -1elsetoken[2].last_column += quote.length
token[2].last_column -=1iftoken[1].lengthis2token[2].last_column_exclusive += quote.length
token[2].range = [
token[2].range[0] - quote.length
token[2].range[1] + quote.length
]
locationToken = token
tokensToPush = [token]
@tokens.push tokensToPush...iflparen
[..., lastToken] = tokens
lparen.origin = ['STRING',null,
first_line: lparen[2].first_line
first_column: lparen[2].first_column
last_line: lastToken[2].last_line
last_column: lastToken[2].last_column
last_line_exclusive: lastToken[2].last_line_exclusive
last_column_exclusive: lastToken[2].last_column_exclusive
range: [
lparen[2].range[0]
lastToken[2].range[1]
]
]
lparen[2] = lparen.origin[2]unlessquote?.length
rparen = @token'STRING\_END',')', offset: endOffset - (quote ?'').length, length: quote?.length ?0, generated:notquote?.length
Pairs up a closing token, ensuring that all listed pairs of tokens are correctly balanced throughout the course of the token stream.
  pair: (tag) ->
    [..., prev] = @ends
    unless tag is wanted = prev?.tag
      @error "unmatched #{tag}" unless 'OUTDENT' is wanted
Auto-close INDENT to support syntax like this:
el.click((event) ->
el.hide())
[..., lastIndent] = @indents
      @outdentToken moveOut: lastIndent, noNewlines: true
      return @pair tag
@ends.pop()
Compensate for the things we strip out initially (e.g. carriage returns) so that location data stays accurate with respect to the original source file.
  getLocationDataCompensation: (start, end) ->
    totalCompensation = 0
    initialEnd = end
    current = start
    while current <= end
      break if current is end and start isnt initialEnd
      compensation = @locationDataCompensations[current]
      if compensation?
        totalCompensation += compensation
        end += compensation
      current++
    return totalCompensation
Returns the line and column number from an offset into the current chunk.
offset is a number of characters into @chunk.
  getLineAndColumnFromChunk: (offset) ->
    compensation = @getLocationDataCompensation @chunkOffset, @chunkOffset + offset

    if offset is 0
      return [@chunkLine, @chunkColumn + compensation, @chunkOffset + compensation]

    if offset >= @chunk.length
      string = @chunk
    else
      string = @chunk[..offset - 1]

    lineCount = count string, '\n'

    column = @chunkColumn
    if lineCount > 0
      [..., lastLine] = string.split '\n'
      column = lastLine.length
      previousLinesCompensation = @getLocationDataCompensation @chunkOffset, @chunkOffset + offset - column
Don’t recompensate for initially inserted newline.
      previousLinesCompensation = 0 if previousLinesCompensation < 0
      columnCompensation = @getLocationDataCompensation(
        @chunkOffset + offset + previousLinesCompensation - column
        @chunkOffset + offset + previousLinesCompensation
      )
    else
      column += string.length
      columnCompensation = compensation

    [@chunkLine + lineCount, column + columnCompensation, @chunkOffset + offset + compensation]
  makeLocationData: ({ offsetInChunk, length }) ->
    locationData = range: []
    [locationData.first_line, locationData.first_column, locationData.range[0]] =
      @getLineAndColumnFromChunk offsetInChunk
Use length - 1 for the final offset - we’re supplying the last_line and the last_column, so if last_column == first_column, then we’re looking at a character of length 1.
    lastCharacter = if length > 0 then (length - 1) else 0
    [locationData.last_line, locationData.last_column, endOffset] =
      @getLineAndColumnFromChunk offsetInChunk + lastCharacter
    [locationData.last_line_exclusive, locationData.last_column_exclusive] =
      @getLineAndColumnFromChunk offsetInChunk + lastCharacter + (if length > 0 then 1 else 0)
    locationData.range[1] = if length > 0 then endOffset + 1 else endOffset
locationData
Same as token, except this just returns the token without adding it to the results.
  makeToken: (tag, value, {offset: offsetInChunk = 0, length = value.length, origin, generated, indentSize} = {}) ->
    token = [tag, value, @makeLocationData {offsetInChunk, length}]
    token.origin = origin if origin
    token.generated = yes if generated
    token.indentSize = indentSize if indentSize?
token
Add a token to the results. offset is the offset into the current @chunk where the token starts. length is the length of the token in the @chunk, after the offset. If not specified, the length of value will be used.
Returns the new token.
  token: (tag, value, {offset, length, origin, data, generated, indentSize} = {}) ->
    token = @makeToken tag, value, {offset, length, origin, generated, indentSize}
    addTokenData token, data if data
@tokens.push token
token
Peek at the last tag in the token stream.
  tag: ->
    [..., token] = @tokens
token?[0]
Peek at the last value in the token stream.
  value: (useOrigin = no) ->
    [..., token] = @tokens
    if useOrigin and token?.origin?
      token.origin[1]
    else
      token?[1]
Get the previous token in the token stream.
  prev: ->
    @tokens[@tokens.length - 1]
Are we in the midst of an unfinished expression?
  unfinished: ->
    LINE_CONTINUER.test(@chunk) or
    @tag() in UNFINISHED
  validateUnicodeCodePointEscapes: (str, options) ->
    replaceUnicodeCodePointEscapes str, merge options, {@error}
Validates escapes in strings and regexes.
  validateEscapes: (str, options = {}) ->
    invalidEscapeRegex =
      if options.isRegex
        REGEX_INVALID_ESCAPE
      else
        STRING_INVALID_ESCAPE
    match = invalidEscapeRegex.exec str
    return unless match
    [[], before, octal, hex, unicodeCodePoint, unicode] = match
    message = if octal
      "octal escape sequences are not allowed"
    else
      "invalid escape sequence"
    invalidEscape = "\\#{octal or hex or unicodeCodePoint or unicode}"
    @error "#{message} #{invalidEscape}",
      offset: (options.offsetInChunk ? 0) + match.index + before.length
      length: invalidEscape.length
  suppressSemicolons: ->
    while @value() is ';'
      @tokens.pop()
      @error 'unexpected ;' if @prev()?[0] in ['=', UNFINISHED...]
Throws an error at either a given offset from the current chunk or at the location of a token (token[2]).
  error: (message, options = {}) =>
    location =
      if 'first_line' of options
        options
      else
        [first_line, first_column] = @getLineAndColumnFromChunk options.offset ? 0
        {first_line, first_column, last_column: first_column + (options.length ? 1) - 1}
    throwSyntaxError message, location
isUnassignable = (name, displayName = name) -> switch
  when name in [JS_KEYWORDS..., COFFEE_KEYWORDS...]
    "keyword '#{displayName}' can't be assigned"
  when name in STRICT_PROSCRIBED
    "'#{displayName}' can't be assigned"
  when name in RESERVED
    "reserved word '#{displayName}' can't be assigned"
  else
    false

exports.isUnassignable = isUnassignable
from isn’t a CoffeeScript keyword, but it behaves like one in import and export statements (handled above) and in the declaration line of a for loop. Try to detect when from is a variable identifier and when it is this “sometimes” keyword.
isForFrom = (prev) ->
for i from iterable
  if prev[0] is 'IDENTIFIER'
    yes
for from…
  else if prev[0] is 'FOR'
    no
for {from}…, for [from]…, for {a, from}…, for {a: from}…
elseifprev[1]in['{','[',',',':']noelseyes addTokenData = (token, data) -\>Object.assign (token.data ?= {}), data
Keywords that CoffeeScript shares in common with JavaScript.
JS_KEYWORDS = [
  'true', 'false', 'null', 'this'
  'new', 'delete', 'typeof', 'in', 'instanceof'
  'return', 'throw', 'break', 'continue', 'debugger', 'yield', 'await'
  'if', 'else', 'switch', 'for', 'while', 'do', 'try', 'catch', 'finally'
  'class', 'extends', 'super'
  'import', 'export', 'default'
]
CoffeeScript-only keywords.
COFFEE_KEYWORDS = [
  'undefined', 'Infinity', 'NaN'
  'then', 'unless', 'until', 'loop', 'of', 'by', 'when'
]
COFFEE_ALIAS_MAP =
  and  : '&&'
  or   : '||'
  is   : '=='
  isnt : '!='
  not  : '!'
  yes  : 'true'
  no   : 'false'
  on   : 'true'
  off  : 'false'

COFFEE_ALIASES  = (key for key of COFFEE_ALIAS_MAP)
COFFEE_KEYWORDS = COFFEE_KEYWORDS.concat COFFEE_ALIASES
The list of keywords that are reserved by JavaScript, but not used, or are used by CoffeeScript internally. We throw an error when these are encountered, to avoid having a JavaScript error at runtime.
RESERVED = [
  'case', 'function', 'var', 'void', 'with', 'const', 'let', 'enum'
  'native', 'implements', 'interface', 'package', 'private'
  'protected', 'public', 'static'
]
STRICT_PROSCRIBED = ['arguments','eval']
The superset of both JavaScript keywords and reserved words, none of which may be used as identifiers or properties.
exports.JS_FORBIDDEN = JS_KEYWORDS.concat(RESERVED).concat(STRICT_PROSCRIBED)
The character code of the nasty Microsoft madness otherwise known as the BOM.
BOM = 65279
Token matching regexes.
IDENTIFIER = /// ^
  (?!\d)
  ( (?: (?!\s)[$\w\x7f-\uffff] )+ )
  ( [^\n\S]* : (?!:) )?  # Is this a property name?
///
Like IDENTIFIER, but includes `-`s.
JSX_IDENTIFIER_PART = /// (?: (?!\s)[\-$\w\x7f-\uffff] )+ ///.source
In https://facebook.github.io/jsx/ spec, JSXElementName can be JSXIdentifier, JSXNamespacedName (JSXIdentifier : JSXIdentifier), or JSXMemberExpression (two or more JSXIdentifier connected by .s).
JSX_IDENTIFIER =/// ^ (?![\d\<]) # Must not start with `\<`. ( #{JSX\_IDENTIFIER\_PART} (?: \s\* : \s\* #{JSX\_IDENTIFIER\_PART}# JSXNamespacedName | (?: \s\* \. \s\* #{JSX\_IDENTIFIER\_PART} )+ # JSXMemberExpression )? ) ///
Fragment: <></>
JSX_FRAGMENT_IDENTIFIER =/// ^ ()\> # Ends immediately with `\>`. ///
In https://facebook.github.io/jsx/ spec, JSXAttributeName can be either JSXIdentifier or JSXNamespacedName which is JSXIdentifier : JSXIdentifier
JSX_ATTRIBUTE =/// ^ (?!\d) ( #{JSX\_IDENTIFIER\_PART} (?: \s\* : \s\* #{JSX\_IDENTIFIER\_PART}# JSXNamespacedName )? ) ( [^\S]\* = (?!=) )? # Is this an attribute with a value? ///NUMBER =/// ^ 0b[01](?:\_?[01])\*n? | # binary ^ 0o[0-7](?:\_?[0-7])\*n? | # octal ^ 0x[\da-f](?:\_?[\da-f])\*n? | # hex ^ \d+(?:\_\d+)\*n | # decimal bigint ^ (?:\d+(?:\_\d+)\*)? \.? \d+(?:\_\d+)\* # decimal (?:e[+-]? \d+(?:\_\d+)\* )?
decimal without support for numeric literal separators for reference: \d*.?\d+ (?:e[+-]?\d+)?
///i

OPERATOR   = /// ^ (
  ?: [-=]>             # function
   | [-+*/%<>&|^!?=]=  # compound assign / compare
   | >>>=?             # zero-fill right shift
   | ([-+:])\1         # doubles
   | ([&|<>*/%])\2=?   # logic / shift / power / floor division / modulo
   | \?(\.|::)         # soak access
   | \.{2,3}           # range or splat
) ///

WHITESPACE = /^[^\n\S]+/

COMMENT    = /^(\s*)###([^#][\s\S]*?)(?:###([^\n\S]*)|###$)|^((?:\s*#(?!##[^#]).*)+)/

CODE       = /^[-=]>/

MULTI_DENT = /^(?:\n[^\n\S]*)+/

JSTOKEN      = ///^ `(?!``) ((?: [^`\\] | \\[\s\S]           )*) `   ///

HERE_JSTOKEN = ///^ ```     ((?: [^`\\] | \\[\s\S] | `(?!``) )*) ``` ///
String-matching-regexes.
STRING_START =/^(?:'''|"""|'|")/STRING_SINGLE =/// ^(?: [^\\'] | \\[\s\S] )\* ///STRING_DOUBLE =/// ^(?: [^\\"#] | \\[\s\S] | \#(?!\{) )\* /// HEREDOC\_SINGLE = ///^(?: [^\\'] | \\[\s\S] | '(?!'') )*/// HEREDOC\_DOUBLE = ///^(?: [^\\"#] | \\[\s\S] | "(?!"") | \#(?!\{) )\* ///INSIDE_JSX =/// ^(?: [^ \{ # Start of CoffeeScript interpolation. \< # Maybe JSX tag (`\<` not allowed even if bare).] )\* ///# Similar to `HEREDOC_DOUBLE` but there is no escaping.JSX_INTERPOLATION =/// ^(?: \{ # CoffeeScript interpolation. | \<(?!/) # JSX opening tag. )///HEREDOC_INDENT =/\n+([^\n\S]\*)(?=\S)/g
Regex-matching-regexes.
REGEX =/// ^ / (?!/) (( ?: [^ [/ \n \\] # Every other thing. | \\[^\n] # Anything but newlines escaped. | \[# Character class. (?: \\[^\n] | [^ \] \n \\ ] )\* \] )\*) (/)? ///REGEX_FLAGS =/^\w\*/VALID_FLAGS =/^(?!.\*(.).\*\1)[gimsuy]\*$/HEREGEX =/// ^ (?:
Match any character, except those that need special handling below.
[^\\/#\s]
Match \ followed by any character.
| \\[\s\S]
Match any / except ///.
|/(?!//)
Match # which is not part of interpolation, e.g. #{}.
| \#(?!\{)
Comments consume everything until the end of the line, including ///.
| \s+(?:#(?!\{).\*)?)*/// HEREGEX\_COMMENT = /(\s+)(#(?!{).\*)/gm REGEX\_ILLEGAL = ///^ ( / |/{3}\s\*) (\*) ///POSSIBLY_DIVISION =/// ^ /=?\s ///
Other regexes.
HERECOMMENT_ILLEGAL =/\*\//LINE_CONTINUER =/// ^ \s\* (?: , | \??\.(?![.\d]) | \??:: ) ///STRING_INVALID_ESCAPE =/// ( (?:^|[^\\]) (?:\\\\)\* ) # Make sure the escape isn’t escaped. \\ ( ?: (0\d|[1-7]) # octal escape | (x(?![\da-fA-F]{2}).{0,2}) # hex escape | (u\{(?![\da-fA-F]{1,}\})[^}]\*\}?) # unicode code point escape | (u(?!\{|[\da-fA-F]{4}).{0,4}) # unicode escape ) ///REGEX_INVALID_ESCAPE =/// ( (?:^|[^\\]) (?:\\\\)\* ) # Make sure the escape isn’t escaped. \\ ( ?: (0\d) # octal escape | (x(?![\da-fA-F]{2}).{0,2}) # hex escape | (u\{(?![\da-fA-F]{1,}\})[^}]\*\}?) # unicode code point escape | (u(?!\{|[\da-fA-F]{4}).{0,4}) # unicode escape ) ///TRAILING_SPACES =/\s+$/
Compound assignment tokens.
COMPOUND_ASSIGN = [
  '-=', '+=', '/=', '*=', '%=', '||=', '&&=', '?=', '<<=', '>>=', '>>>='
  '&=', '^=', '|=', '**=', '//=', '%%='
]
Unary tokens.
UNARY = ['NEW','TYPEOF','DELETE']
UNARY_MATH = ['!','~']
Bit-shifting tokens.
SHIFT = ['<<', '>>', '>>>']
Comparison tokens.
COMPARE = ['==', '!=', '<', '>', '<=', '>=']
Mathematical tokens.
MATH = ['*', '/', '%', '//', '%%']
Relational tokens that are negatable with not prefix.
RELATION = ['IN','OF','INSTANCEOF']
Boolean tokens.
BOOL = ['TRUE','FALSE']
Tokens which could legitimately be invoked or indexed. An opening parentheses or bracket following these tokens will be recorded as the start of a function invocation or indexing operation.
CALLABLE  = ['IDENTIFIER', 'PROPERTY', ')', ']', '?', '@', 'THIS', 'SUPER', 'DYNAMIC_IMPORT']
INDEXABLE = CALLABLE.concat [
  'NUMBER', 'INFINITY', 'NAN', 'STRING', 'STRING_END', 'REGEX', 'REGEX_END'
  'BOOL', 'NULL', 'UNDEFINED', '}', '::'
]
Tokens which can be the left-hand side of a less-than comparison, i.e. a<b.
COMPARABLE_LEFT_SIDE = ['IDENTIFIER',')',']','NUMBER']
Tokens which a regular expression will never immediately follow (except spaced CALLABLEs in some cases), but which a division operator can.
See: http://www-archive.mozilla.org/js/language/js20-2002-04/rationale/syntax.html#regular-expressions
NOT_REGEX = INDEXABLE.concat ['++','--']
Tokens that, when immediately preceding a WHEN, indicate that the WHEN occurs at the start of a line. We disambiguate these from trailing whens to avoid an ambiguity in the grammar.
LINE_BREAK = ['INDENT','OUTDENT','TERMINATOR']
Additional indent in front of these is ignored.
INDENTABLE_CLOSERS = [')','}',']']