Regular expression

Blue highlights show the match results of the regular expression pattern /r[aeiou]+/g (a lower-case r followed by one or more lower-case vowels).

A regular expression (shortened as regex or regexp), [1] sometimes referred to as rational expression, [2] [3] is a sequence of characters that specifies a match pattern in text. Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. Regular expression techniques are developed in theoretical computer science and formal language theory.


The concept of regular expressions began in the 1950s, when the American mathematician Stephen Cole Kleene formalized the concept of a regular language. They came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax.

Regular expressions are used in search engines, in search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK, and in lexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine", [4] [5] and many of these are available for reuse.

History

Stephen Cole Kleene, who introduced the concept

Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular events. [6] [7] These arose in theoretical computer science, in the subfields of automata theory (models of computation) and the description and classification of formal languages, motivated by Kleene's attempt to describe early artificial neural networks. (Kleene introduced it as an alternative to McCulloch & Pitts's "prehensible", but admitted "We would welcome any suggestions as to a more descriptive term." [8] ) Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs.

Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor [9] and lexical analysis in a compiler. [10] Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. [9] [11] [12] [13] For speed, Thompson implemented regular expression matching by just-in-time compilation (JIT) to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation. [14] He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: g/re/p meaning "Global search for Regular Expression and Print matching lines"). [15] Around the same time when Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that is used for lexical analysis in compiler design. [10]

Many variations of these original forms of regular expressions were used in Unix [13] programs at Bell Labs in the 1970s, including lex, sed, AWK, and expr, and in other programs such as vi and Emacs (which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992.

In the 1980s, the more complicated regexes arose in Perl, which originally derived from a regex library written by Henry Spencer (1986), who later wrote an implementation for Tcl called Advanced Regular Expressions. [16] The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. [17] Perl later expanded on Spencer's original library to add many new features. [18] Part of the effort in the design of Raku (formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars. [19] The result is a mini-language called Raku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allow BNF-style definition of a recursive descent parser via sub-rules.

The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (preceded by ANSI "GCA 101-1983") consolidated. Regexes form the kernel of these structure specification language standards, as is evident in the DTD element group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in the glob syntax for filenames, and in the SQL LIKE operator.

Starting in 1997, Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools including PHP and Apache HTTP Server. [20]

Today, regexes are widely supported in programming languages, text processing programs (particularly lexers), advanced text editors, and some other programs. Regex support is part of the standard library of many programming languages, including Java and Python, and is built into the syntax of others, including Perl and ECMAScript. In the late 2010s, several companies started to offer hardware implementations of PCRE-compatible regex engines, on FPGAs [21] and GPUs, [22] that are faster than CPU implementations.

Patterns

The phrase regular expressions, or regexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex b., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (match all lower case letters from 'a' to 'z') is less general and b is a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard.

A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize". Wildcard characters can also achieve this, but they are more limited in what they can pattern, as they have fewer metacharacters and a simpler language base.

The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?.
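
For illustration, here is a minimal sketch of these three patterns using Python's standard re module; the sample strings and variable names are invented for the example.

import re

# Match both spellings of the word.
print(re.findall(r"seriali[sz]e", "serialise the data, then serialize it again"))
# -> ['serialise', 'serialize']

# Strip excess whitespace at the beginning or end of a line.
print(re.sub(r"^[ \t]+|[ \t]+$", "", "   padded line\t\t"))
# -> 'padded line'

# Recognize integers, decimals, and exponent notation.
numeral = re.compile(r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?")
print(bool(numeral.fullmatch("-3.14e10")), bool(numeral.fullmatch("abc")))
# -> True False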

Translating the Kleene star (s* means "zero or more of s")

A regex processor translates a regular expression in the above syntax into an internal representation that can be executed and matched against a string representing the text being searched in. One possible approach is the Thompson's construction algorithm to construct a nondeterministic finite automaton (NFA), which is then made deterministic and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression. The picture shows the NFA scheme N(s*) obtained from the regular expression s*, where s denotes a simpler regular expression in turn, which has already been recursively translated to the NFA N(s).

Basic concepts

A regular expression, often called a pattern, specifies a set of strings required for a particular purpose. A simple way to specify a finite set of strings is to list its elements or members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel; we say that this pattern matches each of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example, (Hän|Han|Haen)del also specifies the same set of three strings in this example.
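
A short Python sketch (the list of names is invented) showing that both patterns describe the same three strings:

import re

p1 = re.compile(r"H(ä|ae?)ndel")
p2 = re.compile(r"(Hän|Han|Haen)del")

for name in ["Handel", "Händel", "Haendel", "Handl"]:
    print(name, bool(p1.fullmatch(name)), bool(p2.fullmatch(name)))
# The first three names match both patterns; "Handl" matches neither.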

Most formalisms provide the following operations to construct regular expressions.

Boolean "or"
A vertical bar separates alternatives. For example, gray|grey can match "gray" or "grey".
Grouping
Parentheses are used to define the scope and precedence of the operators (among other uses). For example, gray|grey and gr(a|e)y are equivalent patterns which both describe the set of "gray" or "grey".
Quantification
A quantifier after an element (such as a token, character, or group) specifies how many times the preceding element is allowed to repeat. The most common quantifiers are the question mark ?, the asterisk * (derived from the Kleene star), and the plus sign + (Kleene plus).
?
The question mark indicates zero or one occurrences of the preceding element. For example, colou?r matches both "color" and "colour".
*
The asterisk indicates zero or more occurrences of the preceding element. For example, ab*c matches "ac", "abc", "abbc", "abbbc", and so on.
+
The plus sign indicates one or more occurrences of the preceding element. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
{n}
The preceding item is matched exactly n times. [23]
{min,}
The preceding item is matched min or more times. [23]
{,max}
The preceding item is matched up to max times. [23]
{min,max}
The preceding item is matched at least min times, but not more than max times. [23]
Wildcard
The wildcard . matches any character. For example,
a.b matches any string that contains an "a", and then any character and then "b".
a.*b matches any string that contains an "a", and then the character "b" at some later point.

These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷.
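
As an illustration, a small Python sketch of the constructions above (all sample strings are invented):

import re

print(re.findall(r"gray|grey", "gray grey groy"))       # ['gray', 'grey']   Boolean "or"
print(re.findall(r"colou?r", "color colour"))           # ['color', 'colour']  '?' quantifier
print(bool(re.fullmatch(r"ab*c", "ac")))                # True   '*' allows zero 'b's
print(bool(re.fullmatch(r"ab+c", "ac")))                # False  '+' needs at least one 'b'
print(bool(re.fullmatch(r"a{2,4}", "aaa")))             # True   bounded repetition
print(re.findall(r"a.b", "axb a\nb"))                   # ['axb']  '.' does not match the newline here
print(bool(re.search(r"a.*b", "abracadabra brings b"))) # True   'a', then anything, then 'b'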

The precise syntax for regular expressions varies among tools and with context; more detail is given in § Syntax.

Formal language theory

Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars.

Formal definition

Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. [24] [25] Given a finite alphabet Σ, the following constants are defined as regular expressions:

  • (empty set) ∅, denoting the empty set.
  • (empty string) ε, denoting the set containing only the "empty" string, which has no characters at all.
  • (literal character) a in Σ, denoting the set containing only the character a.

Given regular expressions R and S, the following operations over them are defined to produce regular expressions:

  • (concatenation) (RS), denoting the set of strings obtained by concatenating a string described by R with a string described by S, in that order.
  • (alternation) (R|S), denoting the set union of the sets described by R and S.
  • (Kleene star) (R*), denoting the smallest superset of the set described by R that contains ε and is closed under string concatenation; that is, the set of all strings obtainable by concatenating zero or more strings from the set described by R.

To avoid parentheses, it is assumed that the Kleene star has the highest priority followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar.

Examples:

  • a|b* denotes {ε, "a", "b", "bb", "bbb", ...}
  • (a|b)* denotes the set of all strings consisting of any number of "a" and "b" symbols, including the empty string: {ε, "a", "b", "aa", "ab", "ba", "bb", "aaa", ...}
  • ab*(c|ε) denotes the set of strings starting with "a", then zero or more "b"s and finally optionally a "c": {"a", "ac", "ab", "abc", "abb", "abbc", ...}

Expressive power and compactness

The formal definition of regular expressions is deliberately minimal and avoids defining ? and +; these can be expressed as a+ = aa* and a? = (a|ε). Sometimes the complement operator is added, to give a generalized regular expression; here R^c matches all strings in Σ* that do not match R. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise; eliminating a single complement operator can cause a double exponential blow-up of its length. [26] [27] [28]

Regular expressions in this sense can express exactly the regular languages, the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the family of languages Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On the one hand, a regular expression describing L4 is given by (a|b)*a(a|b)(a|b)(a|b).

Generalizing this pattern to Lk gives the expression (a|b)*a(a|b)(a|b)⋯(a|b), ending with k-1 copies of (a|b).

On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy. [24]
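
A small Python sketch (illustrative only) of this family of expressions, using a bounded quantifier as shorthand for the k-1 repeated copies of (a|b):

import re

def kth_from_last_is_a(k: int) -> re.Pattern:
    # (a|b)*a followed by exactly k-1 further symbols from {a, b}
    return re.compile(rf"(a|b)*a(a|b){{{k - 1}}}")

p4 = kth_from_last_is_a(4)
print(bool(p4.fullmatch("bbabbb")))   # True: the 4th symbol from the end is 'a'
print(bool(p4.fullmatch("bbbbbb")))   # False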

In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing a checksum modulo 11, which can be easily implemented with an 11-state DFA. However, converting that DFA to a regular expression results in a file of about 2.14 megabytes. [29]

Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm.

Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this.

Deciding equivalence of regular expressions

As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results.

It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent).

Algebraic laws for regular expressions can be obtained using a method by Gischer, which is best explained with an example: in order to check whether (X+Y)* and (X*Y*)* denote the same regular language for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)* and (a*b*)* denote the same language over the alphabet Σ={a,b}. More generally, an equation E=F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds. [30] [31]

Every regular expression can be written solely in terms of the Kleene star and set unions over finite words; doing so is a surprisingly difficult problem. As simple as regular expressions are, there is no method to systematically rewrite them to some normal form. The historical lack of a complete set of axioms led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions as a Kleene algebra, using equational and Horn clause axioms. [32] Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages. [33]

Syntax

A regex pattern matches a target string. The pattern is composed of a sequence of atoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using ( ) as metacharacters. Metacharacters help form: atoms; quantifiers telling how many atoms (and whether it is a greedy quantifier or not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities.

Depending on the regex processor, there are about fourteen metacharacters: characters that may or may not have their literal meaning, depending on context or on whether they are "escaped", i.e. preceded by an escape sequence, in this case the backslash \. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" (leaning toothpick syndrome) it makes sense to have a metacharacter escape into a literal mode. Older, basic regex syntaxes take the opposite approach: the four bracketing metacharacters ( ) and { } are primarily literal, and are "escaped" to give them their metacharacter meaning. Common standards implement both. The usual metacharacters are {}[]()^$.|*+? and \. The usual characters that become metacharacters when escaped are dswDSW and N.

Delimiters

When a regex is entered in a programming language, it may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python, for instance, where the regex re is entered as "re". However, regexes are often written with slashes as delimiters, as in /re/ for the regex re. This originates in ed, where / is the editor command for searching, and an expression /re/ can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famously g/re/p as in grep ("global regex print"), which is included in most Unix-based operating systems, such as Linux distributions. A similar convention is used in sed, where search and replace is given by s/re/replacement/ and patterns can be joined with a comma to specify a range of lines as in /re1/,/re2/. This notation is particularly well known due to its use in Perl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the command s,/,X, will replace a / with an X, using commas as delimiters.

IEEE POSIX Standard

The IEEE POSIX standard has three sets of compliance: BRE (Basic Regular Expressions), [34] ERE (Extended Regular Expressions), and SRE (Simple Regular Expressions). SRE is deprecated, [35] in favor of BRE, as both provide backward compatibility. The subsection below covering the character classes applies to both BRE and ERE.

BRE and ERE work together. ERE adds ?, +, and |, and it removes the need to escape the metacharacters ( ) and { }, which are required in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU grep has the following options: "grep -E" for ERE, and "grep -G" for BRE (the default), and "grep -P" for Perl regexes.

Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, ( ) and { } are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns.

POSIX basic and extended

In the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters ( ) and { } be designated \(\) and \{\}, whereas Extended Regular Syntax (ERE) does not.

^
Matches the starting position within the string. In line-based tools, it matches the starting position of any line.

.
Matches any single character (many applications exclude newlines, and exactly which characters are considered newlines is flavor-, character-encoding-, and platform-specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches "abc", etc., but [a.c] matches only "a", ".", or "c".

[ ]
A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches "a", "b", or "c". [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", or "z", as does [a-cx-z].
The - character is treated as a literal character if it is the last or the first (after the ^, if present) character within the brackets: [abc-], [-abc], [^-abc]. Backslash escapes are not allowed. The ] character can be included in a bracket expression if it is the first (after the ^, if present) character: []abc], [^]abc].

[^ ]
Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than "a", "b", or "c". [^a-z] matches any single character that is not a lowercase letter from "a" to "z". Likewise, literal characters and ranges can be mixed.

$
Matches the ending position of the string or the position just before a string-ending newline. In line-based tools, it matches the ending position of any line.

( )
Defines a marked subexpression, also called a capturing group, which is essential for extracting the desired part of the text (see also the next entry, \n). BRE mode requires \( \).

\n
Matches what the nth marked subexpression matched, where n is a digit from 1 to 9. This construct is defined in the POSIX standard. [36] Some tools allow referencing more than nine capturing groups. Also known as a back-reference, this feature is supported in BRE mode.

*
Matches the preceding element zero or more times. For example, ab*c matches "ac", "abc", "abbbc", etc. [xyz]* matches "", "x", "y", "z", "zx", "zyx", "xyzzy", and so on. (ab)* matches "", "ab", "abab", "ababab", and so on.

{m,n}
Matches the preceding element at least m and not more than n times. For example, a{3,5} matches only "aaa", "aaaa", and "aaaaa". This is not found in a few older instances of regexes. BRE mode requires \{m,n\}.

Examples:

  • .at matches any three-character string ending with "at", including "hat", "cat", "bat", "4at", "#at" and " at" (starting with a space).
  • [hc]at matches "hat" and "cat".
  • [^b]at matches all strings matched by .at except "bat".
  • [^hc]at matches all strings matched by .at other than "hat" and "cat".
  • ^[hc]at matches "hat" and "cat", but only at the beginning of the string or line.
  • [hc]at$ matches "hat" and "cat", but only at the end of the string or line.
  • \[.\] matches any single character surrounded by "[" and "]" since the brackets are escaped, for example: "[a]", "[b]", "[7]", "[@]", "[]]", and "[ ]" (bracket space bracket).
  • s.* matches s followed by zero or more characters, for example: "s", "saw", "seed", "s3w96.7", and "s6#h%(>>>m n mQ".

According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with one that is simple to explain, but the new "simple" rules are actually more complex to implement: they were incompatible with pre-existing tooling and made it essentially impossible to define a "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax). [37]

Metacharacters in POSIX extended

The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences and the following metacharacters are added:

?
Matches the preceding element zero or one time. For example, ab?c matches only "ac" or "abc".

+
Matches the preceding element one or more times. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".

|
The choice (also known as alternation or set union) operator matches either the expression before or the expression after the operator. For example, abc|def matches "abc" or "def".

Examples:

  • [hc]?at matches "at", "hat", and "cat".
  • [hc]*at matches "at", "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on.
  • [hc]+at matches "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on, but not "at".
  • cat|dog matches "cat" or "dog".

POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E.

Character classes

The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example, [A-Z] could stand for any uppercase letter in the English alphabet, and \d could mean any digit. Character classes apply to both POSIX levels.

When specifying a range of characters, such as [a-Z] (i.e. lowercase a to uppercase Z), the computer's locale settings determine the contents by the numeric ordering of the character encoding. The encoding could store the digits within that range, or the letters could be ordered abc...zABC...Z, or aAbBcC...zZ. So the POSIX standard defines named character classes, which will be understood by whichever regex processor is installed. Those definitions are in the following table:

ASCII characters
Java: \p{ASCII}; ASCII: [\x00-\x7F]
Alphanumeric characters
POSIX: [:alnum:]; Java: \p{Alnum}; ASCII: [A-Za-z0-9]
Alphanumeric characters plus "_"
Perl/Tcl: \w; Vim: \w; Java: \w; ASCII: [A-Za-z0-9_]
Non-word characters
Perl/Tcl: \W; Vim: \W; Java: \W; ASCII: [^A-Za-z0-9_]
Alphabetic characters
POSIX: [:alpha:]; Vim: \a; Java: \p{Alpha}; ASCII: [A-Za-z]
Space and tab
POSIX: [:blank:]; Vim: \s; Java: \p{Blank}; ASCII: [ \t]
Word boundaries
Perl/Tcl: \b; Vim: \< \>; Java: \b; ASCII: (?<=\W)(?=\w)|(?<=\w)(?=\W)
Non-word boundaries
Perl/Tcl: \B; ASCII: (?<=\W)(?=\W)|(?<=\w)(?=\w)
Control characters
POSIX: [:cntrl:]; Java: \p{Cntrl}; ASCII: [\x00-\x1F\x7F]
Digits
POSIX: [:digit:]; Perl/Tcl: \d; Vim: \d; Java: \p{Digit} or \d; ASCII: [0-9]
Non-digits
Perl/Tcl: \D; Vim: \D; Java: \D; ASCII: [^0-9]
Visible characters
POSIX: [:graph:]; Java: \p{Graph}; ASCII: [\x21-\x7E]
Lowercase letters
POSIX: [:lower:]; Vim: \l; Java: \p{Lower}; ASCII: [a-z]
Visible characters and the space character
POSIX: [:print:]; Vim: \p; Java: \p{Print}; ASCII: [\x20-\x7E]
Punctuation characters
POSIX: [:punct:]; Java: \p{Punct}; ASCII: [][!"#$%&'()*+,./:;<=>?@\^_`{|}~-]
Whitespace characters
POSIX: [:space:]; Perl/Tcl: \s; Vim: \_s; Java: \p{Space} or \s; ASCII: [ \t\r\n\v\f]
Non-whitespace characters
Perl/Tcl: \S; Vim: \S; Java: \S; ASCII: [^ \t\r\n\v\f]
Uppercase letters
POSIX: [:upper:]; Vim: \u; Java: \p{Upper}; ASCII: [A-Z]
Hexadecimal digits
POSIX: [:xdigit:]; Vim: \x; Java: \p{XDigit}; ASCII: [A-Fa-f0-9]

POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".

An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes word and word-head classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like \h\w* or [[:alpha:]_][[:alnum:]_]* in POSIX notation.

Note that what the POSIX regex standards call character classes are commonly referred to as POSIX character classes in other regex flavors which support them. With most other regex flavors, the term character class is used to describe what POSIX calls bracket expressions.
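
Python's re module does not accept POSIX bracket classes such as [:alnum:], but the shorthand classes in the table can be reproduced; a sketch (the sample text is invented), using the re.ASCII flag to get the behaviour of the ASCII column:

import re

text = "Grüße 123_ !"

print(re.findall(r"\w+", text))                   # ['Grüße', '123_']  Unicode-aware by default in Python 3
print(re.findall(r"\w+", text, flags=re.ASCII))   # ['Gr', 'e', '123_']  ASCII-only word characters
print(re.findall(r"[A-Za-z0-9]+", text))          # ['Gr', 'e', '123']   explicit ASCII [:alnum:]-style class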

Perl and PCRE

Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar to Perl's—for example, Java, JavaScript, Julia, Python, Ruby, Qt, Microsoft's .NET Framework, and XML Schema. Some languages and tools such as Boost and PHP support multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python. [38]

Lazy matching

In Python and some other implementations (e.g. Java), the three common quantifiers (*, + and ?) are greedy by default: they match as many characters as possible. [39] The regex ".+" (including the double-quotes) applied to the string

"Ganymede," he continued, "is the largest moon in the Solar System."

matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part, "Ganymede,". The aforementioned quantifiers may, however, be made lazy or minimal or reluctant, matching as few characters as possible, by appending a question mark: ".+?" matches only "Ganymede,". [39]
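
A brief Python sketch of the greedy versus lazy behaviour described above:

import re

line = '"Ganymede," he continued, "is the largest moon in the Solar System."'

print(re.search(r'".+"', line).group())    # greedy: matches the whole line, quote to quote
print(re.search(r'".+?"', line).group())   # lazy:   matches only "Ganymede,"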

Possessive matching

In Java and Python 3.11+, [40] quantifiers may be made possessive by appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed: [41] While the regex ".*" applied to the string

"Ganymede," he continued, "is the largest moon in the Solar System."

matches the entire line, the regex ".*+" does not match at all, because .*+ consumes the entire input, including the final ". Thus, possessive quantifiers are most useful with negated character classes, e.g. "[^"]*+", which matches "Ganymede," when applied to the same string.

Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is (?>group). For example, while ^(wi|w)i$ matches both wi and wii, ^(?>wi|w)i$ only matches wii because the engine is forbidden from backtracking and so cannot try setting the group to "w" after matching "wi". [42]
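
A sketch of the same comparisons using Python 3.11 or later, which added possessive quantifiers and atomic groups to the re module (earlier versions raise re.error on these patterns):

import re  # possessive quantifiers and (?>...) require Python 3.11+

line = '"Ganymede," he continued, "is the largest moon in the Solar System."'

print(re.search(r'".*"', line) is not None)    # True:  the backtracking engine gives back the final quote
print(re.search(r'".*+"', line) is not None)   # False: the possessive '*+' refuses to back off
print(re.search(r'"[^"]*+"', line).group())    # "Ganymede,"  possessive plus a negated class works well

print(re.fullmatch(r'(wi|w)i', 'wi') is not None)     # True:  backtracks to match the group as just 'w'
print(re.fullmatch(r'(?>wi|w)i', 'wi') is not None)   # False: the atomic group keeps 'wi' and cannot retry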

Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime. [41]

IETF I-Regexp

IETF RFC 9485 describes "I-Regexp: An Interoperable Regular Expression Format". It specifies a limited subset of regular-expression idioms designed to be interoperable, i.e. produce the same effect, in a large number of regular-expression libraries. I-Regexp is also limited to matching, i.e. providing a true or false match between a regular expression and a given piece of text. Thus, it lacks advanced features such as capture groups, lookahead, and backreferences. [43]

Patterns for non-regular languages

Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (backreferences). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.+)\1.
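
A Python sketch of the square-matching pattern (the test words are arbitrary):

import re

square = re.compile(r"(.+)\1")

for word in ["papa", "WikiWiki", "banana"]:
    print(word, bool(square.fullmatch(word)))
# papa True, WikiWiki True, banana False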

The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context-sensitive. [44] The general problem of matching any number of backreferences is NP-complete, and the execution time for known algorithms grows exponentially with the number of backreference groups used. [45]

However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex, regexp, or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku:

"Regular expressions" […] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood). [19]

Assertions

Look-behind and look-ahead assertions in Perl regular expressions:
Positive lookbehind: (?<=pattern); positive lookahead: (?=pattern)
Negative lookbehind: (?<!pattern); negative lookahead: (?!pattern)

Other features not found in descriptions of regular languages include assertions. These include the ubiquitous ^ and $, used since at least 1970, [46] as well as some more sophisticated extensions like lookaround, which appeared in 1994. [47] Lookarounds constrain the surroundings of a match without spilling into the match itself, a feature only relevant for the use case of string searching. [citation needed] Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well. [48]

The look-ahead assertions (?=...) and (?!...) have been attested since at least 1994, starting with Perl 5. [47] The look-behind assertions (?<=...) and (?<!...) are attested since 1997, in a commit by Ilya Zakharevich to Perl 5.005. [49]
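
A small Python sketch of the four assertions in the table above (the sample text is invented):

import re

text = "price: 42 USD, weight: 7 kg"

print(re.findall(r"\d+(?= USD)", text))        # ['42']  positive lookahead: digits followed by " USD"
print(re.findall(r"\d+(?! USD|\d)", text))     # ['7']   negative lookahead: digits not followed by " USD"
print(re.findall(r"(?<=weight: )\d+", text))   # ['7']   positive lookbehind (fixed-width in Python)
print(re.findall(r"(?<!price: )\b\d+", text))  # ['7']   negative lookbehind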

Implementations and running times

There are at least three different algorithms that decide whether and how a given regex matches a string.

The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has the time and memory cost of O(2^m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded.

An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky. [50] [51] Modern implementations include the re1-re2-sregex family based on Cox's code.
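
A compact Python sketch of the implicit ("lockstep") NFA simulation: the automaton below is hand-built for the single regex (a|b)*abb rather than produced by Thompson's construction, and the state numbering is arbitrary.

# Hand-built NFA for (a|b)*abb over the alphabet {a, b}; state 3 is accepting.
NFA = {
    (0, "a"): {0, 1},   # stay in the (a|b)* loop, or guess that the trailing "abb" starts here
    (0, "b"): {0},
    (1, "b"): {2},
    (2, "b"): {3},
}
START, ACCEPT = {0}, {3}

def nfa_match(text: str) -> bool:
    states = set(START)
    for ch in text:                       # one pass over the input: O(m*n) work overall
        states = set().union(*(NFA.get((q, ch), set()) for q in states))
        if not states:                    # no live states left, so no match is possible
            return False
    return bool(states & ACCEPT)

print(nfa_match("ababb"))   # True
print(nfa_match("abab"))    # False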

The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS).
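
A hedged Python sketch of that exponential behaviour; the timings are machine-dependent, and the input sizes are deliberately kept small so the script finishes quickly.

import re
import time

pattern = re.compile(r"(a|aa)*b")    # alternation combined with unbounded repetition

for n in range(24, 37, 3):
    text = "a" * n                   # contains no "b", so the match can only fail
    start = time.perf_counter()
    pattern.match(text)              # fails, but only after trying every way to split the "a"s
    print(n, round(time.perf_counter() - start, 3), "seconds")
# Each additional character multiplies the running time by roughly 1.6.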

Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy. [52]

Sublinear runtime algorithms have been achieved using Boyer-Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan. [53] GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for a first-pass prefiltering, and then uses an implicit DFA. Wu agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism. [54]

A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are related only to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference has time and space costs that are polynomial in the length n of the haystack but exponential in the number k of backreferences in the regexp. [55] More recent theoretical work based on memory automata gives a tighter bound based on the number of "active" variable nodes used, and a polynomial matching possibility for some backreferenced regexps. [56]

Unicode

In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode.
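
One such issue, sketched in Python, is whether the shorthand classes are ASCII-only or cover the full Unicode repertoire (Python 3's re defaults to Unicode; the re.ASCII flag restores ASCII-only behaviour):

import re

print(bool(re.fullmatch(r"\d", "٣")))                  # True:  ARABIC-INDIC DIGIT THREE is a Unicode decimal digit
print(bool(re.fullmatch(r"\d", "٣", flags=re.ASCII)))  # False: with re.ASCII, \d means [0-9] only
print(bool(re.fullmatch(r"\w", "日")))                  # True:  a CJK ideograph counts as a word character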

Language support

Most general-purpose programming languages support regex capabilities, either natively or via libraries.

Uses

A blacklist on Wikipedia which uses regular expressions to identify bad titles

Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks.

Some high-end desktop publishing software has the ability to use regexes to automatically apply text styling, saving the person doing the layout from laboriously doing this by hand for anything that can be matched by a regex. For example, by defining a character style that makes text into small caps and then using the regex [A-Z]{4,} to apply that style, any word of four or more consecutive capital letters will be automatically rendered as small caps instead.
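
A hedged Python sketch of the same idea, wrapping runs of four or more capitals in an invented markup tag:

import re

text = "NASA and ESA signed an MOU with UNESCO."
styled = re.sub(r"[A-Z]{4,}", lambda m: "<sc>" + m.group(0) + "</sc>", text)
print(styled)   # <sc>NASA</sc> and ESA signed an MOU with <sc>UNESCO</sc>.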

While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012. [59]

Examples

The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions.

Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation. This section provides a basic description of some of the properties of regexes by way of illustration.

The following conventions are used in the examples. [60]

metacharacter(s) ;; the metacharacters column specifies the regex syntax being demonstrated
=~ m//           ;; indicates a regex match operation in Perl
=~ s///          ;; indicates a regex substitution operation in Perl

These regexes are all Perl-like syntax. Standard POSIX regular expressions are different.

Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, \( \) vs. (), or lack of \d instead of POSIX [:digit:]).

The syntax and conventions used in these examples coincide with those of other programming environments as well. [61]

Metacharacter(s), description, and example [62]:
.
Normally matches any character except a newline.
Within square brackets the dot is literal.
$string1="Hello World\n";if($string1=~m/...../){print"$string1 has length >= 5.\n";}

Output:

Hello World has length >= 5.
( )
Groups a series of pattern elements to a single element.
When you match a pattern within parentheses, you can use any of $1, $2, ... later to refer to the previously matched pattern. Some implementations may use a backslash notation instead, like \1, \2.
$string1="Hello World\n";if($string1=~m/(H..).(o..)/){print"We matched '$1' and '$2'.\n";}

Output:

We matched 'Hel' and 'o W'.
+
Matches the preceding pattern element one or more times.
$string1="Hello World\n";if($string1=~m/l+/){print"There are one or more consecutive letter \"l\"'s in $string1.\n";}

Output:

There are one or more consecutive letter "l"'s in Hello World.
?
Matches the preceding pattern element zero or one time.
$string1="Hello World\n";if($string1=~m/H.?e/){print"There is an 'H' and a 'e' separated by ";print"0-1 characters (e.g., He Hue Hee).\n";}

Output:

There is an 'H' and a 'e' separated by 0-1 characters (e.g., He Hue Hee).
?
Modifies the *, +, ? or {M,N}'d regex that comes before to match as few times as possible.
$string1="Hello World\n";if($string1=~m/(l.+?o)/){print"The non-greedy match with 'l' followed by one or ";print"more characters is 'llo' rather than 'llo Wo'.\n";}

Output:

The non-greedy match with 'l' followed by one or more characters is 'llo' rather than 'llo Wo'.
*
Matches the preceding pattern element zero or more times.
$string1="Hello World\n";if($string1=~m/el*o/){print"There is an 'e' followed by zero to many ";print"'l' followed by 'o' (e.g., eo, elo, ello, elllo).\n";}

Output:

There is an 'e' followed by zero to many 'l' followed by 'o' (e.g., eo, elo, ello, elllo).
{M,N}
Denotes the minimum M and the maximum N match count.
N can be omitted and M can be 0: {M} matches "exactly" M times; {M,} matches "at least" M times; {0,N} matches "at most" N times.
x* y+ z? is thus equivalent to x{0,} y{1,} z{0,1}.
$string1="Hello World\n";if($string1=~m/l{1,2}/){print"There exists a substring with at least 1 ";print"and at most 2 l's in $string1\n";}

Output:

There exists a substring with at least 1 and at most 2 l's in Hello World
[…]
Denotes a set of possible character matches.
$string1="Hello World\n";if($string1=~m/[aeiou]+/){print"$string1 contains one or more vowels.\n";}

Output:

Hello World contains one or more vowels.
|
Separates alternate possibilities.
$string1="Hello World\n";if($string1=~m/(Hello|Hi|Pogo)/){print"$string1 contains at least one of Hello, Hi, or Pogo.";}

Output:

Hello World contains at least one of Hello, Hi, or Pogo.
\b
Matches a zero-width boundary between a word-class character (see next) and either a non-word class character or an edge; same as

(^\w|\w$|\W\w|\w\W).

$string1="Hello World\n";if($string1=~m/llo\b/){print"There is a word that ends with 'llo'.\n";}

Output:

There is a word that ends with 'llo'.
\w
Matches an alphanumeric character, including "_";
same as [A-Za-z0-9_] in ASCII, and
[\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}]

in Unicode, [58] where the Alphabetic property contains more than the Latin letters, and the Decimal_Number property contains more than the Arabic digits.

$string1="Hello World\n";if($string1=~m/\w/){print"There is at least one alphanumeric ";print"character in $string1 (A-Z, a-z, 0-9, _).\n";}

Output:

There is at least one alphanumeric character in Hello World (A-Z, a-z, 0-9, _).
\W
Matches a non-alphanumeric character, excluding "_";
same as [^A-Za-z0-9_] in ASCII, and
[^\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}]

in Unicode.

$string1="Hello World\n";if($string1=~m/\W/){print"The space between Hello and ";print"World is not alphanumeric.\n";}

Output:

The space between Hello and World is not alphanumeric.
\s
Matches a whitespace character,
which in ASCII are tab, line feed, form feed, carriage return, and space;
in Unicode, also matches no-break spaces, next line, and the variable-width spaces (among others).
$string1="Hello World\n";if($string1=~m/\s.*\s/){print"In $string1 there are TWO whitespace characters, which may";print" be separated by other characters.\n";}

Output:

In Hello World there are TWO whitespace characters, which may be separated by other characters.
\S
Matches anything but a whitespace.
$string1="Hello World\n";if($string1=~m/\S.*\S/){print"In $string1 there are TWO non-whitespace characters, which";print" may be separated by other characters.\n";}

Output:

In Hello World there are TWO non-whitespace characters, which may be separated by other characters.
\d
Matches a digit;
same as [0-9] in ASCII;
in Unicode, same as the \p{Digit} or \p{GC=Decimal_Number} property, which is itself the same as the \p{Numeric_Type=Decimal} property.
$string1="99 bottles of beer on the wall.";if($string1=~m/(\d+)/){print"$1 is the first number in '$string1'\n";}

Output:

99 is the first number in '99 bottles of beer on the wall.'
\D
Matches a non-digit;
same as [^0-9] in ASCII or \P{Digit} in Unicode.
$string1="Hello World\n";if($string1=~m/\D/){print"At least one character in $string1";print" is not a digit.\n";}

Output:

At least one character in Hello World is not a digit.
^
Matches the beginning of a line or string.
$string1="Hello World\n";if($string1=~m/^He/){print"$string1 starts with the characters 'He'.\n";}

Output:

Hello World starts with the characters 'He'.
$
Matches the end of a line or string.
$string1="Hello World\n";if($string1=~m/rld$/){print"$string1 is a line or string ";print"that ends with 'rld'.\n";}

Output:

Hello World is a line or string that ends with 'rld'.
\A
Matches the beginning of a string (but not an internal line).
$string1="Hello\nWorld\n";if($string1=~m/\AH/){print"$string1 is a string ";print"that starts with 'H'.\n";}

Output:

HelloWorld is a string that starts with 'H'.
\z
Matches the end of a string (but not an internal line). [63]
$string1="Hello\nWorld\n";if($string1=~m/d\n\z/){print"$string1 is a string ";print"that ends with 'd\\n'.\n";}

Output:

HelloWorld is a string that ends with 'd\n'.
[^…]
Matches every character except the ones inside brackets.
$string1="Hello World\n";if($string1=~m/[^abc]/){print"$string1 contains a character other than ";print"a, b, and c.\n";}

Output:

Hello World contains a character other than a, b, and c.

Induction

Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).

See also

Notes

  1. Goyvaerts, Jan. "Regular Expression Tutorial - Learn How to Use Regular Expressions". Regular-Expressions.info. Archived from the original on 2016-11-01. Retrieved 2016-10-31.
  2. Mitkov, Ruslan (2003). The Oxford Handbook of Computational Linguistics. Oxford University Press. p. 754. ISBN   978-0-19-927634-9. Archived from the original on 2017-02-28. Retrieved 2016-07-25.
  3. Lawson, Mark V. (17 September 2003). Finite Automata. CRC Press. pp. 98–100. ISBN   978-1-58488-255-8. Archived from the original on 27 February 2017. Retrieved 25 July 2016.
  4. "How a Regex Engine Works Internally". regular-expressions.info. Retrieved 24 February 2024.
  5. Heddings, Anthony (11 March 2020). "How Do You Actually Use Regex?". howtogeek.com. Retrieved 24 February 2024.
  6. Kleene 1951.
  7. Leung, Hing (16 September 2010). "Regular Languages and Finite Automata" (PDF). New Mexico State University . Archived from the original (PDF) on 5 December 2013. Retrieved 13 August 2019. The concept of regular events was introduced by Kleene via the definition of regular expressions.
  8. Kleene 1951, p. 46.
  9. Thompson 1968.
  10. Johnson et al. 1968.
  11. Kernighan, Brian (2007-08-08). "A Regular Expressions Matcher". Beautiful Code. O'Reilly Media. pp. 1–2. ISBN   978-0-596-51004-6. Archived from the original on 2020-10-07. Retrieved 2013-05-15.
  12. Ritchie, Dennis M. "An incomplete history of the QED Text Editor". Archived from the original on 1999-02-21. Retrieved 9 October 2013.
  13. Aho & Ullman 1992, 10.11 Bibliographic Notes for Chapter 10, p. 589.
  14. Aycock 2003, p. 98.
  15. Raymond, Eric S. citing Dennis Ritchie (2003). "Jargon File 4.4.7: grep". Archived from the original on 2011-06-05. Retrieved 2009-02-17.
  16. "New Regular Expression Features in Tcl 8.1". Archived from the original on 2020-10-07. Retrieved 2013-10-11.
  17. "Documentation: 9.3: Pattern Matching". PostgreSQL. Archived from the original on 2020-10-07. Retrieved 2013-10-12.
  18. Wall, Larry (2006). "Perl Regular Expressions". perlre. Archived from the original on 2009-12-31. Retrieved 2006-10-10.
  19. Wall (2002)
  20. "PCRE - Perl Compatible Regular Expressions". www.pcre.org. Retrieved 2024-04-07.
  21. "GRegex – Faster Analytics for Unstructured Text Data". grovf.com. Archived from the original on 2020-10-07. Retrieved 2019-10-22.
  22. "CUDA grep". bkase.github.io. Archived from the original on 2020-10-07. Retrieved 2019-10-22.
  23. Kerrisk, Michael. "grep(1) - Linux manual page". man7.org. Retrieved 31 January 2023.
  24. Hopcroft, Motwani & Ullman (2000)
  25. Sipser (1998)
  26. Gelade & Neven (2008, p. 332, Thm. 4.1)
  27. Gruber & Holzer (2008)
  28. Based on Gelade & Neven (2008), a regular expression of length about 850 such that its complement has a length about 2^32 can be found at File:RegexComplementBlowup.png.
  29. "Regular expressions for deciding divisibility". s3.boskent.com. Retrieved 2024-02-21.
  30. Gischer, Jay L. (1984). (Title unknown) (Technical Report). Stanford Univ., Dept. of Comp. Sc.[ title missing ]
  31. Hopcroft, John E.; Motwani, Rajeev & Ullman, Jeffrey D. (2003). Introduction to Automata Theory, Languages, and Computation. Upper Saddle River, New Jersey: Addison Wesley. pp. 117–120. ISBN   978-0-201-44124-6. This property need not hold for extended regular expressions, even if they describe no larger class than regular languages; cf. p.121.
  32. Kozen (1991) [ page needed ]
  33. Redko, V.N. (1964). "On defining relations for the algebra of regular events". Ukrainskii Matematicheskii Zhurnal (in Russian). 16 (1): 120–126. Archived from the original on 2018-03-29. Retrieved 2018-03-28.
  34. ISO/IEC 9945-2:1993 Information technology – Portable Operating System Interface (POSIX) – Part 2: Shell and Utilities, successively revised as ISO/IEC 9945-2:2002 Information technology – Portable Operating System Interface (POSIX) – Part 2: System Interfaces, ISO/IEC 9945-2:2003, and currently ISO/IEC/IEEE 9945:2009 Information technology – Portable Operating System Interface (POSIX) Base Specifications, Issue 7
  35. The Single Unix Specification (Version 2)
  36. "9.3.6 BREs Matching Multiple Characters". The Open Group Base Specifications Issue 7, 2018 edition. The Open Group. 2017. Retrieved December 10, 2023.
  37. Russ Cox (2009). "Regular Expression Matching: the Virtual Machine Approach". swtch.com. Digression: POSIX Submatching
  38. "Perl Regular Expression Documentation". perldoc.perl.org. Archived from the original on December 31, 2009. Retrieved November 5, 2024.
  39. "Regular Expression Syntax". Python 3.5.0 documentation. Python Software Foundation. Archived from the original on 18 July 2018. Retrieved 10 October 2015.
  40. SRE: Atomic Grouping (?>...) is not supported #34627
  41. "Essential classes: Regular Expressions: Quantifiers: Differences Among Greedy, Reluctant, and Possessive Quantifiers". The Java Tutorials. Oracle. Archived from the original on 7 October 2020. Retrieved 23 December 2016.
  42. "Atomic Grouping". Regex Tutorial. Archived from the original on 7 October 2020. Retrieved 24 November 2019.
  43. Bormann, Carsten; Bray, Tim. I-Regexp: An Interoperable Regular Expression Format. Internet Engineering Task Force. doi: 10.17487/RFC9485 . RFC 9485 . Retrieved 11 March 2024.
  44. Cezar Câmpeanu; Kai Salomaa & Sheng Yu (Dec 2003). "A Formal Study of Practical Regular Expressions". International Journal of Foundations of Computer Science. 14 (6): 1007–1018. doi:10.1142/S012905410300214X. Archived from the original on 2015-07-04. Retrieved 2015-07-03. Theorem 3 (p.9)
  45. "Perl Regular Expression Matching is NP-Hard". perl.plover.com. Archived from the original on 2020-10-07. Retrieved 2019-11-21.
  46. Ritchie, D. M.; Thompson, K. L. (June 1970). QED Text Editor (PDF). MM-70-1373-3. Archived from the original (PDF) on 2015-02-03. Retrieved 2022-09-05. Reprinted as "QED Text Editor Reference Manual", MHCC-004, Murray Hill Computing, Bell Laboratories (October 1972).
  47. Wall, Larry (1994-10-18). "Perl 5: perlre.pod". GitHub.
  48. Wandering Logic. "How to simulate lookaheads and lookbehinds in finite state automata?". Computer Science Stack Exchange. Archived from the original on 7 October 2020. Retrieved 24 November 2019.
  49. Zakharevich, Ilya (1997-11-19). "Jumbo Regexp Patch Applied (with Minor Fix-Up Tweaks): Perl/perl5@c277df4". GitHub.
  50. Cox (2007)
  51. Laurikari (2009)
  52. "gnulib/lib/dfa.c". Archived from the original on 2021-08-18. Retrieved 2022-02-12. If the scanner detects a transition on backref, it returns a kind of "semi-success" indicating that the match will have to be verified with a backtracking matcher.
  53. Kearns, Steven (August 2013). "Sublinear Matching With Finite Automata Using Reverse Suffix Scanning". arXiv: 1308.3822 [cs.DS].
  54. Navarro, Gonzalo (10 November 2001). "NR-grep: a fast and flexible pattern-matching tool" (PDF). Software: Practice and Experience. 31 (13): 1265–1312. doi:10.1002/spe.411. S2CID   3175806. Archived (PDF) from the original on 7 October 2020. Retrieved 21 November 2019.
  55. "travisdowns/polyregex". GitHub . 5 July 2019. Archived from the original on 14 September 2020. Retrieved 21 November 2019.
  56. Schmid, Markus L. (March 2019). "Regular Expressions with Backreferences: Polynomial-Time Matching Techniques". arXiv: 1903.05896 [cs.FL].
  57. "Vim documentation: pattern". Vimdoc.sourceforge.net. Archived from the original on 2020-10-07. Retrieved 2013-09-25.
  58. 1 2 "UTS#18 on Unicode Regular Expressions, Annex A: Character Blocks". Archived from the original on 2020-10-07. Retrieved 2010-02-05.
  59. Horowitz, Bradley (24 October 2011). "A fall sweep". Google Blog . Archived from the original on 21 October 2018. Retrieved 4 May 2019.
  60. The character 'm' is not always required to specify a Perl match operation. For example, m/[^abc]/ could also be rendered as /[^abc]/. The 'm' is only necessary if the user wishes to specify a match operation without using a forward-slash as the regex delimiter. Sometimes it is useful to specify an alternate regex delimiter in order to avoid "delimiter collision". See 'perldoc perlre Archived 2009-12-31 at the Wayback Machine ' for more details.
  61. E.g., see Java in a Nutshell , p. 213; Python Scripting for Computational Science, p. 320; Programming PHP, p. 106.
  62. All the if statements return a TRUE value
  63. Conway, Damian (2005). "Regular Expressions, End of String". Perl Best Practices. O'Reilly. p. 240. ISBN   978-0-596-00173-5. Archived from the original on 2020-10-07. Retrieved 2017-09-10.

Related Research Articles

In theoretical computer science and formal language theory, a regular language is a formal language that can be defined by a regular expression in the strict, theoretical sense of the term, as opposed to the many extended patterns that are also called regular expressions in practice.
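
For illustration (a sketch not drawn from the cited sources), the language of binary strings containing an even number of 0s is regular; one regular expression that defines it is 1*(01*01*)*, and the Python snippet below, with an assumed helper name in_language, tests membership against it.

```python
import re

# Illustrative example: the regular language of binary strings that contain an
# even number of 0s, described by one possible regular expression for it.
EVEN_ZEROS = re.compile(r"1*(?:01*01*)*")

def in_language(s: str) -> bool:
    # fullmatch: the whole string must belong to the language,
    # not merely contain a matching substring.
    return EVEN_ZEROS.fullmatch(s) is not None

assert in_language("") and in_language("110011")
assert not in_language("1011")
```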

sed Standard UNIX utility for editing streams of data

sed is a Unix utility that parses and transforms text, using a simple, compact programming language. It was developed from 1973 to 1974 by Lee E. McMahon of Bell Labs, and is available today for most operating systems. sed was based on the scripting features of the interactive editor ed and the earlier qed. It was one of the earliest tools to support regular expressions, and remains in use for text processing, most notably with the substitution command. Popular alternative tools for plaintext string manipulation and "stream editing" include AWK and Perl.
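
The following Python sketch is only a rough analogue of sed's substitution command, not sed itself; the pattern s/[0-9]+/N/g and the use of standard input are illustrative assumptions.

```python
import re
import sys

# A rough analogue of sed's substitution command  s/[0-9]+/N/g  applied line
# by line to standard input; a sketch of the idea, not sed itself.
for line in sys.stdin:
    sys.stdout.write(re.sub(r"[0-9]+", "N", line))
```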

String (computer science) Sequence of characters, data type

In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed. A string is generally considered as a data type and is often implemented as an array data structure of bytes that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence data types and structures.
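
As a small illustrative aside (not taken from the article's sources), Python exhibits both variants of the data type: an immutable character string and a mutable byte sequence whose elements may be changed in place.

```python
# Illustrative only: an immutable string next to a mutable byte sequence.
s = "regex"               # str: immutable sequence of characters
b = bytearray(b"regex")   # bytearray: mutable sequence of bytes
b[0] = ord("R")           # allowed; elements can be changed in place
# s[0] = "R"              # would raise TypeError, because str is immutable
print(s, b.decode())      # prints: regex Regex
```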

In computer science, string-searching algorithms, sometimes called string-matching algorithms, are an important class of string algorithms that locate the positions at which one or more pattern strings occur within a larger string or text.
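
A deliberately naive string-searching routine can be sketched in a few lines of Python; it is shown only to illustrate the problem, and practical algorithms such as Knuth-Morris-Pratt or Boyer-Moore improve on its worst-case behaviour. The function name naive_find is an illustrative choice.

```python
def naive_find(text: str, pattern: str) -> int:
    """Index of the first occurrence of pattern in text, or -1 if absent.

    A deliberately simple O(len(text) * len(pattern)) illustration; practical
    string-searching algorithms avoid re-examining characters.
    """
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i
    return -1

assert naive_find("find a needle in a haystack", "needle") == 7
```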

grep is a command-line utility for searching plaintext datasets for lines that match a regular expression. Its name comes from the ed command g/re/p, which has the same effect. grep was originally developed for the Unix operating system, but later became available for all Unix-like systems and some others such as OS-9.
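
A minimal grep-like filter is easy to sketch (this is an illustration of the idea, not grep's implementation): compile the pattern from the first command-line argument and print every line of standard input that it matches.

```python
import re
import sys

# A minimal grep-like filter: print the lines of standard input that match
# the regular expression given as the first command-line argument.
pattern = re.compile(sys.argv[1])
for line in sys.stdin:
    if pattern.search(line):
        sys.stdout.write(line)
```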

In computer programming, glob patterns specify sets of filenames with wildcard characters. For example, the Unix Bash shell command mv *.txt textfiles/ moves all files with names ending in .txt from the current directory to the directory textfiles. Here, * is a wildcard and *.txt is a glob pattern. The wildcard * stands for "any string of any length, including the empty string, but excluding the path separator character (/)".
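
As a hedged illustration of the relationship between glob patterns and regular expressions, Python's fnmatch module can both apply a glob pattern and show the regular expression it translates to; the filenames used here are made up for the example.

```python
import fnmatch

# Glob-style matching, and the regular expression a glob pattern corresponds to.
names = ["notes.txt", "todo.txt", "image.png"]
print(fnmatch.filter(names, "*.txt"))   # ['notes.txt', 'todo.txt']
print(fnmatch.translate("*.txt"))       # the equivalent regular expression
# Caveat: fnmatch's * also crosses path separators; shell-style expansion
# against real filenames is provided by glob.glob("*.txt").
```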

Deterministic finite automaton Finite-state machine

In the theory of computation, a branch of theoretical computer science, a deterministic finite automaton (DFA)—also known as deterministic finite acceptor (DFA), deterministic finite-state machine (DFSM), or deterministic finite-state automaton (DFSA)—is a finite-state machine that accepts or rejects a given string of symbols, by running through a state sequence uniquely determined by the string. Deterministic refers to the uniqueness of the computation run. In search of the simplest models to capture finite-state machines, Warren McCulloch and Walter Pitts were among the first researchers to introduce a concept similar to finite automata in 1943.
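
A DFA can be written down directly as a transition table. The toy automaton below, an illustrative example rather than anything from the cited sources, accepts binary strings with an even number of 0s by following exactly one transition per input symbol.

```python
# A small DFA given as a transition table: it accepts binary strings containing
# an even number of 0s (the states are named 'even' and 'odd').
TRANSITIONS = {
    ("even", "0"): "odd",  ("even", "1"): "even",
    ("odd",  "0"): "even", ("odd",  "1"): "odd",
}
START, ACCEPTING = "even", {"even"}

def dfa_accepts(s: str) -> bool:
    state = START
    for symbol in s:
        state = TRANSITIONS[(state, symbol)]   # exactly one successor each step
    return state in ACCEPTING

assert dfa_accepts("110011") and not dfa_accepts("1011")
```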

Perl Compatible Regular Expressions (PCRE) is a library written in C, which implements a regular expression engine, inspired by the capabilities of the Perl programming language. Philip Hazel started writing PCRE in summer 1997. PCRE's syntax is considerably more powerful and flexible than either of the POSIX regular expression flavors and than the syntax of many other regular-expression libraries.

In formal language theory, an alphabet, sometimes called a vocabulary, is a non-empty set of indivisible symbols/characters/glyphs, typically thought of as representing letters, characters, digits, phonemes, or even words. Alphabets in this technical sense of a set are used in a diverse range of fields including logic, mathematics, computer science, and linguistics. An alphabet may have any cardinality ("size") and, depending on its purpose, may be finite, countable, or even uncountable.

Raku rules are the regular expression, string matching and general-purpose parsing facility of the Raku programming language, and are a core part of the language. Since Perl's pattern-matching constructs have exceeded the capabilities of formal regular expressions for some time, Raku documentation refers to them exclusively as regexes, distancing the term from the formal definition.

A comparison of regular expression engines surveys the features supported by the various libraries and programming languages that implement regular expressions.

A regular expression denial of service (ReDoS) is an algorithmic complexity attack that produces a denial-of-service by providing a regular expression and/or an input that takes a long time to evaluate. The attack exploits the fact that many regular expression implementations have super-linear worst-case complexity; on certain regex-input pairs, the time taken can grow polynomially or exponentially in relation to the input size. An attacker can thus cause a program to spend substantial time by providing a specially crafted regular expression and/or input. The program will then slow down or become unresponsive.
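
The effect is easy to reproduce, as in the illustrative sketch below: a pattern with nested quantifiers, applied to an input that almost matches, forces a backtracking engine such as Python's re module to try a combinatorial number of ways to split the input. Timings depend on the machine and the engine.

```python
import re
import time

# Illustrative ReDoS demonstration: a pattern with nested quantifiers and an
# input that almost matches.  On a backtracking engine, each extra 'a' roughly
# doubles the running time; expect a noticeable pause.
evil = re.compile(r"^(a+)+$")
subject = "a" * 25 + "b"

started = time.perf_counter()
assert evil.match(subject) is None   # fails, but only after heavy backtracking
print(f"{time.perf_counter() - started:.2f} s for {len(subject)} characters")
```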

The structure of the Perl programming language encompasses both the syntactical rules of the language and the general ways in which programs are organized. Perl's design philosophy is expressed in the commonly cited motto "there's more than one way to do it". As a multi-paradigm, dynamically typed language, Perl allows a great degree of flexibility in program design. Perl also encourages modularization; this has been attributed to the component-based design structure of its Unix roots, and is responsible for the size of the CPAN archive, a community-maintained repository of more than 100,000 modules.

TRE is an open-source library for pattern matching in text, which works like a regular expression engine with the ability to do approximate string matching. It was developed by Ville Laurikari and is distributed under a 2-clause BSD-like license.
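
The sketch below is a pure-Python illustration of the underlying idea, approximate substring matching by edit distance, and is not TRE's actual interface; the function name best_match_distance is an assumption made for the example.

```python
# Approximate (fuzzy) substring matching by edit distance, the kind of
# matching TRE offers; this sketch is not TRE's actual interface.
def best_match_distance(pattern: str, text: str) -> int:
    """Smallest edit distance between pattern and any substring of text."""
    m = len(pattern)
    previous = list(range(m + 1))       # distances for an empty text prefix
    best = m
    for ch in text:
        current = [0]                   # a match may begin at any position
        for j in range(1, m + 1):
            cost = 0 if pattern[j - 1] == ch else 1
            current.append(min(previous[j] + 1,          # extra text character
                               current[j - 1] + 1,       # missing text character
                               previous[j - 1] + cost))  # substitute or match
        previous, best = current, min(best, current[m])
    return best

assert best_match_distance("regex", "a regxp engine") == 1   # "regx" is 1 edit away
```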

RE2 is a software library which implements a regular expression engine via finite-state machines using automata theory, in contrast to almost all other regular expression libraries, which use backtracking implementations. It provides a C++ interface.

In computer science, Thompson's construction algorithm, also called the McNaughton–Yamada–Thompson algorithm, is a method of transforming a regular expression into an equivalent nondeterministic finite automaton (NFA). This NFA can be used to match strings against the regular expression. This algorithm is credited to Ken Thompson.
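
A compact sketch of the construction is given below. To keep it short, parsing is omitted and the expression is assumed to be supplied in postfix form with an explicit '.' concatenation operator; each operator then combines NFA fragments using fresh states and epsilon transitions, in the spirit of the algorithm.

```python
# Sketch of Thompson's construction over a postfix regular expression
# ('.' marks explicit concatenation); parsing from infix form is omitted.
class State:
    def __init__(self):
        self.on_char = {}   # symbol -> list of successor states
        self.eps = []       # epsilon (empty-string) successors

def postfix_to_nfa(postfix):
    """Return (start, accept) of an NFA fragment for the whole expression."""
    stack = []
    for token in postfix:
        if token == ".":                          # concatenation
            s2, a2 = stack.pop(); s1, a1 = stack.pop()
            a1.eps.append(s2)
            stack.append((s1, a2))
        elif token == "|":                        # alternation
            s2, a2 = stack.pop(); s1, a1 = stack.pop()
            start, accept = State(), State()
            start.eps += [s1, s2]
            a1.eps.append(accept); a2.eps.append(accept)
            stack.append((start, accept))
        elif token == "*":                        # Kleene star
            s1, a1 = stack.pop()
            start, accept = State(), State()
            start.eps += [s1, accept]
            a1.eps += [s1, accept]
            stack.append((start, accept))
        else:                                     # literal symbol
            start, accept = State(), State()
            start.on_char[token] = [accept]
            stack.append((start, accept))
    return stack.pop()

def eps_closure(states):
    stack, seen = list(states), set(states)
    while stack:
        for nxt in stack.pop().eps:
            if nxt not in seen:
                seen.add(nxt); stack.append(nxt)
    return seen

def nfa_matches(start, accept, text):
    current = eps_closure({start})
    for ch in text:
        current = eps_closure({n for s in current for n in s.on_char.get(ch, [])})
    return accept in current

# a(b|c)*  written in postfix with explicit concatenation: a b c | * .
start, accept = postfix_to_nfa("abc|*.")
assert nfa_matches(start, accept, "abccb") and not nfa_matches(start, accept, "ab d")
```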

In computational learning theory, induction of regular languages refers to the task of learning a formal description of a regular language from a given set of example strings. Although E. Mark Gold has shown that not every regular language can be learned this way, approaches have been investigated for a variety of its subclasses. For learning of more general grammars, see grammar induction.

In theoretical computer science, in particular in formal language theory, Kleene's algorithm transforms a given nondeterministic finite automaton (NFA) into a regular expression. Together with other conversion algorithms, it establishes the equivalence of several description formats for regular languages. Alternative presentations of the same method include the "elimination method" attributed to Brzozowski and McCluskey, the algorithm of McNaughton and Yamada, and the use of Arden's lemma.
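
The recurrence at the heart of the algorithm is short enough to sketch directly. The code below is an illustration that assumes a small automaton given as a transition table with states numbered from 0; no simplification of the intermediate expressions is attempted, so the result is correct but verbose.

```python
import re

# Sketch of Kleene's algorithm: convert a small finite automaton into a
# regular expression via the recurrence
#   R(k, i, j) = R(k-1, i, j) | R(k-1, i, k) R(k-1, k, k)* R(k-1, k, j).
# None represents the empty set, "" represents the empty string (epsilon).

def union(x, y):
    if x is None: return y
    if y is None: return x
    if x == y: return x
    return f"(?:{x}|{y})"

def concat(x, y):
    if x is None or y is None: return None
    return x + y          # pieces are single symbols or parenthesized groups

def star(x):
    if x is None or x == "": return ""
    return f"(?:{x})*"

def kleene(states, delta, start, accepting):
    # States are assumed to be numbered 0..n-1.
    # R[i][j] for k = 0: direct transitions, plus epsilon when i == j.
    R = [[None] * len(states) for _ in states]
    for i in states:
        for j in states:
            expr = "" if i == j else None
            for (p, a), q in delta.items():
                if p == i and q == j:
                    expr = union(expr, a)
            R[i][j] = expr
    for k in states:                      # allow paths passing through state k
        R = [[union(R[i][j], concat(R[i][k], concat(star(R[k][k]), R[k][j])))
              for j in states] for i in states]
    result = None
    for f in accepting:
        result = union(result, R[start][f])
    return result

# The two-state DFA for "even number of 0s" used in the examples above.
delta = {(0, "0"): 1, (0, "1"): 0, (1, "0"): 0, (1, "1"): 1}
regex = kleene(range(2), delta, start=0, accepting={0})
for s in ["", "1", "0", "00", "0101", "100"]:
    assert (re.fullmatch(regex, s) is not None) == (s.count("0") % 2 == 0)
```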

re2c is a free and open-source lexer generator for C, C++, Go, and Rust. It compiles declarative regular expression specifications to deterministic finite automata. Originally written by Peter Bumbulis and described in his paper, re2c was placed in the public domain and has since been maintained by volunteers. It is the lexer generator adopted by projects such as PHP, SpamAssassin, the Ninja build system, and others. Together with the Lemon parser generator, re2c is used in BRL-CAD. This combination is also used with STEPcode, an implementation of the ISO 10303 standard.

RE/flex is a free and open source computer program written in C++ that generates fast lexical analyzers in C++. RE/flex offers full Unicode support, indentation anchors, word boundaries, lazy quantifiers, and performance tuning options. RE/flex accepts Flex lexer specifications and offers options to generate scanners for Bison parsers. RE/flex includes a fast C++ regular expression library.
