# String-Matching-Algorithm

See more on GitHub.

In this article, I will show several popular string matching algorithms, plus a dynamic programming algorithm for wildcard matching.

# String Matching Algorithms

## Rabin-Karp

We can view a string of k characters (digits) as a length-k decimal number. E.g., the string “31425” corresponds to the decimal number 31,425.

• Given a pattern P [1..m], let p denote the corresponding decimal value.
• Given a text T [1..n], let $t_s$ denote the decimal value of the length-m substring T [(s+1)..(s+m)] for s=0,1,…,(n-m).
• Let d be the radix, i.e. the number of distinct characters; in Python terms, $d = len(set(s))$ for an input string s.
• $t_s$ = p iff T [(s+1)..(s+m)] = P [1..m].
• p can be computed in O(m) time. p = P[m] + d*(P[m-1] + d*(P[m-2]+…)).
• $t_0$ can similarly be computed in O(m) time.
• Other $t_1,\ldots,t_{n-m}$ can be computed in O(n-m) time, since $t_{s+1}$ can be computed from $t_s$ in constant time. Namely, $t_{s+1} = d(t_s - d^{m-1}\,T[s+1]) + T[s+m+1]$.

However, there is no need to calculate $t_{s+1}$ directly. We can use the modulus operation to reduce the work of calculation. We choose a small prime number q, e.g. q = 13 for radix d = 10. Generally, $d \cdot q$ should fit within one computer word. We first calculate $t_0 \bmod q$. Then, for every $t_i$ (i > 0), denoting $d' = d^{m-1} \bmod q$, we have

$t_{i+1} \equiv d(t_i - d'\,T[i+1]) + T[i+m+1] \pmod q$

So we can compare the modular value of each $t_i$ with p's. Only if they are equal do we compare the original text characters with the pattern characters. Generally, this algorithm's expected running time is O(n+m), and the worst case is O((n-m+1)·m).

Problem: working with p and $t_s$ directly assumes they are small numbers. They may be too large to work with easily, which is exactly why the modular arithmetic above is used.

python implementation

## FSM

A FSM can be represented as $(Q, q_0, A, S, C)$, where

• Q is the set of all states
• $q_0$ is the start state
• $A \subseteq Q$ is the set of accepting states
• S is a finite input alphabet
• C is the set of transition functions, namely $q_j = c(s, q_i)$

Given a pattern string S, we can build a FSM for string matching. Assume S has m chars; then the FSM has m+1 states: one is the begin state, and the others correspond to the matching state of each position of S. Once we have built the FSM, we can run it on any input string.

## KMP

Knuth-Morris-Pratt method. The idea is inspired by the FSM approach, but we can avoid computing the transition functions. Instead, we compute a prefix function pre in O(m) time, which has only m entries. The prefix function stores information about how the pattern matches against shifts of itself.

• String w is a prefix of string x if x = wy for some string y.
• String w is a suffix of string x if x = yw for some string y.
• The k-character prefix of the pattern P[1..m] is denoted $P_k$.
• Given that pattern prefix P[1..q] matches text characters T[(s+1)..(s+q)], what is the least shift s' > s such that P[1..k] = T[(s'+1)..(s'+k)], where s' + k = s + q?
• At the new shift s', there is no need to compare the first k characters of P with the corresponding characters of T.

Method: for each prefix $P_i$, find the longest proper prefix of $P_i$ that is also a suffix of $P_i$. For example, with P = ababaca, $P_5$ = ababa and pre[5] = 3: $P_3$ = aba is the longest proper prefix of P that is also a suffix of $P_5$.

Time complexity: computing the prefix function takes O(m), and matching takes O(n), so the total is O(m+n).

python implementation
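A minimal Python sketch of the ideas above (0-based indexing, so `pre[i]` here corresponds to the 1-based pre[i+1] used in the text):

```python
def prefix_function(p):
    """pre[i] = length of the longest proper prefix of p[:i+1]
    that is also a suffix of p[:i+1]."""
    pre = [0] * len(p)
    k = 0
    for i in range(1, len(p)):
        # fall back through shorter borders until a character matches
        while k > 0 and p[k] != p[i]:
            k = pre[k - 1]
        if p[k] == p[i]:
            k += 1
        pre[i] = k
    return pre

def kmp_search(t, p):
    """Return the start indices of all occurrences of p in t."""
    pre = prefix_function(p)
    matches, k = [], 0
    for i, c in enumerate(t):
        while k > 0 and p[k] != c:
            k = pre[k - 1]
        if p[k] == c:
            k += 1
        if k == len(p):
            matches.append(i - len(p) + 1)
            k = pre[k - 1]  # keep matching for overlapping occurrences
    return matches
```

For the pattern ababaca from the example above, `prefix_function` returns `[0, 0, 1, 2, 3, 0, 1]`; entry index 4 (1-based position 5) is 3, matching pre[5] = 3 in the text.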

## Boyer-Moore

• The longer the pattern is, the faster it works.
• Starts from the end of pattern, while KMP starts from the beginning.
• Works best for character strings, while KMP works best for binary strings.
• Both KMP and Boyer-Moore involve two phases:
  • preprocessing the given pattern;
  • searching for the pattern in input strings.

## Sunday

### features

• simplification of the Boyer-Moore algorithm;
• uses only the bad-character shift;
• easy to implement;
• preprocessing phase in O(m + $\sigma$) time and O($\sigma$) space complexity;
• searching phase in O(mn) worst-case time complexity;
• very fast in practice for short patterns and large alphabets.

### description

The Quick Search algorithm uses only the bad-character shift table (see chapter Boyer-Moore algorithm). After an attempt where the window is positioned on the text factor y[j .. j+m-1], the length of the shift is at least equal to one. So, the character y[j+m] is necessarily involved in the next attempt, and thus can be used for the bad-character shift of the current attempt.

The bad-character shift of the present algorithm is slightly modified to take into account the last character of x as follows: for c in $\Sigma$, $qsBc[c] = \min\{i : 0 < i \leq m \text{ and } x[m-i]=c\}$ if c occurs in x, and m+1 otherwise (thanks to Darko Brljak).

The preprocessing phase is in O(m + $\sigma$) time and O($\sigma$) space complexity.

During the searching phase the comparisons between pattern and text characters during each attempt can be done in any order. The searching phase has a quadratic worst case time complexity but it has a good practical behaviour.

For instance, suppose $t_0, \ldots, t_4$ = a b c a b is the current text window that is compared with the pattern. Its suffix a b has matched, but the comparison of c against a causes a mismatch. The bad-character heuristic of the Boyer-Moore algorithm uses the "bad" text character c to determine the shift distance. The Horspool algorithm uses the rightmost character b of the current text window. The Sunday algorithm uses the character directly to the right of the text window, namely d in this example. Since d does not occur in the pattern at all, the pattern can be shifted past this position.

python implementation
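A minimal Python sketch of the Quick Search idea described above, with the qsBc table stored as a dict (characters absent from the pattern implicitly get the maximal shift m + 1):

```python
def sunday_search(text, pattern):
    """Return the start indices of all occurrences of pattern in text."""
    n, m = len(text), len(pattern)
    # qs_bc[c] = m - (rightmost 0-based index of c in pattern),
    # i.e. the qsBc shift from the description above; later entries
    # in the comprehension overwrite earlier ones, keeping the rightmost.
    qs_bc = {c: m - i for i, c in enumerate(pattern)}
    matches = []
    j = 0
    while j <= n - m:
        # comparisons inside the window may be done in any order;
        # a slice comparison is the simplest choice
        if text[j:j + m] == pattern:
            matches.append(j)
        if j + m >= n:
            break  # no character to the right of the window
        # shift by the bad-character rule on text[j + m]
        j += qs_bc.get(text[j + m], m + 1)
    return matches
```

This keeps the O(m + $\sigma$) preprocessing (here the dict build) separate from the search loop, mirroring the two phases described above.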

# WildCard matching

wild card:

• `*` matches zero or more characters (any sequence, including the empty one);
• `?` matches exactly one arbitrary character.

Given a string s which doesn't include wildcards,
and a pattern p which may include wildcards,

determine whether p matches s.

## Idea

Using dynamic programming.

n = length(s), m = length(p)

dp[m+1][n+1]: bool

dp[j][i] (0 ≤ j ≤ m, 0 ≤ i ≤ n) represents whether p[:j] matches s[:i], i.e. the first j characters of the pattern against the first i characters of the string.

Initialization:
dp[0][0] = True; dp[0][i] = False for i > 0; dp[j][0] = True only while p[:j] consists entirely of '*' (in particular, if p starts with '*', then dp[1][0] = True).

Transition (j, i ≥ 1, so the current characters are p[j-1] and s[i-1]):

if p[j-1] == '*': dp[j][i] = dp[j-1][i] or dp[j][i-1]
elif p[j-1] == '?': dp[j][i] = dp[j-1][i-1]
else: dp[j][i] = dp[j-1][i-1] and s[i-1] == p[j-1]
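The initialization and transitions above can be sketched in Python as:

```python
def is_match(s, p):
    """Return True if pattern p ('*' = any sequence, '?' = any single
    character) matches the whole string s."""
    n, m = len(s), len(p)
    # dp[j][i] is True iff p[:j] matches s[:i]
    dp = [[False] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = True  # empty pattern matches empty string
    for j in range(1, m + 1):
        # a leading run of '*' can match the empty string
        dp[j][0] = dp[j - 1][0] and p[j - 1] == '*'
        for i in range(1, n + 1):
            if p[j - 1] == '*':
                # '*' matches empty (dp[j-1][i]) or one more char (dp[j][i-1])
                dp[j][i] = dp[j - 1][i] or dp[j][i - 1]
            elif p[j - 1] == '?':
                dp[j][i] = dp[j - 1][i - 1]
            else:
                dp[j][i] = dp[j - 1][i - 1] and s[i - 1] == p[j - 1]
    return dp[m][n]
```

For example, `is_match("adceb", "*a*b")` is True (the first `*` matches the empty string, the second matches "dce"), while `is_match("acdcb", "a*c?b")` is False. The table costs O(mn) time and space.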

# References

1. Xuyun, slides, String Matching
2. Sunday algorithm
3. GeeksforGeeks, KMP Algorithm
4. Augustliang, CSDN, dynamic programming for wildcard matching