## 0.1. gcd, co-primes

`gcd` is short for `greatest common divisor`.

If `a`, `b` are co-prime, we write $(a,b)=1$, which means $\gcd(a,b)=1$.

We can use the `Euclidean algorithm` to calculate the `gcd` of two numbers.

```python
def gcd(a, b):
    # Euclidean algorithm: gcd(a, b) = gcd(b, a mod b)
    while b:
        a, b = b, a % b
    return a
```

### 0.1.1. Bezout’s identity

Let *a* and *b* be integers with greatest common divisor *d*. Then, there exist integers *x* and *y* such that *ax* + *by* = *d*. More generally, the integers of the form *ax* + *by* are exactly the multiples of *d*.
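For instance, with $a = 240$ and $b = 46$ we have $d = \gcd(240, 46) = 2$, and one pair of Bezout coefficients is $x = -9$, $y = 47$:

$$240 \cdot (-9) + 46 \cdot 47 = -2160 + 2162 = 2.$$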

We can use the extended Euclidean algorithm to calculate $x$, $y$ and $\gcd(a,b)$ simultaneously.

```python
def xgcd(a, b):
    # extended Euclidean algorithm:
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0
```

## 0.2. primality_test

### 0.2.1. Prime Sieve

```python
class primeSieve:
    # sieve of Eratosthenes: precompute primality for all numbers in [0, n)
    def __init__(self, n):
        self.is_prime = [True] * n
        for i in range(min(n, 2)):
            self.is_prime[i] = False
        for i in range(2, int(n ** 0.5) + 1):
            if self.is_prime[i]:
                # mark all multiples of i starting from i*i as composite
                self.is_prime[i * i::i] = [False] * len(self.is_prime[i * i::i])
        self.primes = [i for i, b in enumerate(self.is_prime) if b]
```

### 0.2.2. Miller-Rabin

Excerpted from wikipedia:Miller_Rabin_primality_test

Just like the Fermat and Solovay–Strassen tests, the Miller–Rabin test relies on an equality or set of equalities that hold true for prime values, then checks whether or not they hold for a number that we want to test for primality.

First, a lemma about square roots of unity in the finite field **Z**/*p***Z**, where *p* is prime and *p* > 2. Certainly 1 and −1 always yield 1 when squared modulo *p*; call these the *trivial* square roots of 1. There are no *nontrivial* square roots of 1 modulo *p* (a special case of the result that, in a field, a polynomial has no more zeroes than its degree). To show this, suppose that *x* is a square root of 1 modulo *p*. Then:

$$x^2 \equiv 1 \pmod p \quad\Longrightarrow\quad (x-1)(x+1) = x^2 - 1 \equiv 0 \pmod p.$$

In other words, prime *p* divides the product (*x* − 1)(*x* + 1). By Euclid’s lemma it divides one of the factors *x* − 1 or *x* + 1, implying that *x* is congruent to either 1 or −1 modulo *p*.

Now, let *n* be an odd prime with *n* > 2. It follows that *n* − 1 is even and we can write it as $2^s \cdot d$, where *s* and *d* are positive integers and *d* is odd. For each *a* in $(\mathbf{Z}/n\mathbf{Z})^*$, either

$$a^d \equiv 1 \pmod n$$

or

$$a^{2^r d} \equiv -1 \pmod n \quad \text{for some } 0 \le r < s.$$

To show that one of these must be true, recall Fermat's little theorem, that for a prime number *n*:

$$a^{n-1} \equiv 1 \pmod n.$$

By the lemma above, if we keep taking square roots of $a^{n-1}$, we will get either 1 or −1. If we get −1, then the second equality holds and we are done. If we never get −1, then when we have taken out every power of 2, we are left with the first equality.
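The repeated-square-root argument can be visualized with a small sketch (the helper `power_chain` is purely illustrative): it lists $a^d, a^{2d}, \dots, a^{2^s d} \bmod n$; for prime $n$ the chain ends in 1, and the entry just before the first 1, if any, is $n - 1$:

```python
def power_chain(a, n):
    # write n - 1 = 2^s * d with d odd,
    # then list a^d, a^(2d), ..., a^(2^s d) modulo n
    s, d = 0, n - 1
    while d % 2 == 0:
        s, d = s + 1, d // 2
    return [pow(a, d * 2 ** r, n) for r in range(s + 1)]
```

For example, `power_chain(2, 13)` gives `[8, 12, 1]`: the value before the first 1 is 12 ≡ −1 (mod 13), as the lemma requires for the prime 13.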

The Miller–Rabin primality test is based on the contrapositive of the above claim. That is, if we can find an *a* such that

$$a^d \not\equiv 1 \pmod n$$

and

$$a^{2^r d} \not\equiv -1 \pmod n \quad \text{for all } 0 \le r < s,$$

then *n* is not prime. We call *a* a *witness* for the compositeness of *n* (sometimes misleadingly called a *strong witness*, although it is a certain proof of this fact). Otherwise *a* is called a *strong liar*, and *n* is a strong probable prime to base *a*. The term "strong liar" refers to the case where *n* is composite but nevertheless the equations hold as they would for a prime.

Every odd composite *n* has many witnesses *a*; however, no simple way of generating such an *a* is known. The solution is to make the test probabilistic: we choose a non-zero *a* in **Z**/*n***Z** randomly, and check whether or not it is a witness for the compositeness of *n*. If *n* is composite, most of the choices for *a* will be witnesses, and the test will detect *n* as composite with high probability. There is, nevertheless, a small chance that we are unlucky and hit an *a* which is a strong liar for *n*. We may reduce the probability of such error by repeating the test for several independently chosen *a*.

For testing large numbers, it is common to choose random bases *a*, as, a priori, we don't know the distribution of witnesses and liars among the numbers 1, 2, …, *n* − 1. In particular, Arnault gave a 397-digit composite number for which all bases *a* less than 307 are strong liars. As expected, this number was reported to be prime by the Maple `isprime()` function, which implemented the Miller–Rabin test by checking the specific bases 2, 3, 5, 7, and 11. However, selection of a few specific small bases can guarantee identification of composites for *n* less than some maximum determined by said bases. This maximum is generally quite large compared to the bases. As random bases lack such determinism for small *n*, specific bases are better in some circumstances.

Python implementation:

```python
from random import sample

def miller_rabin(n, k=20):
    # probabilistic primality test: False -> composite, True -> probable prime
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11):
        if n % p == 0:
            return n == p
    # write n - 1 as 2^s * d with d odd
    s, d = 0, n - 1
    while d % 2 == 0:
        s, d = s + 1, d // 2
    # test k distinct randomly chosen bases
    for a in sample(range(2, n - 1), min(k, n - 3)):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue              # a is a strong liar (or n is prime)
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True
```
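For small *n*, the deterministic strategy mentioned above can be sketched as follows; it relies on the known result (Jaeschke) that the smallest composite which is a strong pseudoprime to all of the bases 2, 3, 5, and 7 is 3,215,031,751, so these four fixed bases give an exact test below that bound (function name chosen here for illustration):

```python
def is_prime_det(n):
    # deterministic primality test for n < 3,215,031,751
    # using the fixed bases 2, 3, 5, 7
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    s, d = 0, n - 1
    while d % 2 == 0:
        s, d = s + 1, d // 2
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False          # a witnesses compositeness
    return True
```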

## 0.3. Factorization

### 0.3.1. Pollard’s rho algorithm

Excerpted from wikipedia:Pollard’s rho algorithm

Suppose we need to factorize a number $n = pq$, where $p$ is a non-trivial factor. A polynomial modulo $n$, called

$$g(x) = (x^2 + c) \bmod n,$$

where $c$ is a chosen constant, e.g. 1, is used to generate a pseudo-random sequence: a starting value, say $x_0 = 2$, is chosen, and the sequence continues as

$$x_1 = g(x_0),\quad x_2 = g(x_1),\quad x_3 = g(x_2),\ \dots$$

The sequence is related to another sequence, $\{x_k \bmod p\}$. Since $p$ is not known beforehand, this sequence cannot be explicitly computed in the algorithm. Yet in it lies the core idea of the algorithm.

Because the number of possible values for these sequences is finite, both the $\{x_k\}$ sequence, which is mod $n$, and the $\{x_k \bmod p\}$ sequence will eventually repeat, even though we do not know the latter. Assume that the sequences behave like random numbers. Due to the birthday paradox, the number of $x_k$ before a repetition occurs is expected to be $O(\sqrt{N})$, where $N$ is the number of possible values. So the sequence $\{x_k \bmod p\}$ will likely repeat much earlier than the sequence $\{x_k\}$. Once a sequence has a repeated value, it will cycle, because each value depends only on the one before it. This structure of eventual cycling gives rise to the name "rho algorithm", owing to the similarity to the shape of the Greek letter ρ when the values $x_k \bmod p$ are represented as nodes in a directed graph.
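A small sketch makes the early repetition visible (the modulus $8051 = 83 \cdot 97$ and the helper name are chosen purely for illustration; in the real algorithm the factor 83 is of course unknown):

```python
def rho_sequence(n, steps, c=1, x0=2):
    # iterate x_{k+1} = (x_k^2 + c) mod n
    xs = [x0]
    for _ in range(steps):
        xs.append((xs[-1] ** 2 + c) % n)
    return xs

# n = 8051 = 83 * 97; reduce the sequence mod the (secretly known) factor 83
xs = rho_sequence(8051, 100)
reduced = [x % 83 for x in xs]
```

By pigeonhole the 101 reduced values fit into only 83 residues, so a repeat in `reduced` is certain; by the birthday paradox it typically appears after roughly $\sqrt{83} \approx 9$ steps, long before the full mod-8051 sequence must repeat.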

This is detected by Floyd's cycle-finding algorithm: two nodes $i, j$ are kept. In each step, one moves to the next node in the sequence and the other moves to the one after the next node. After that, it is checked whether $\gcd(x_i - x_j, n) \neq 1$. If it is not 1, this implies that there is a repetition in the $\{x_k \bmod p\}$ sequence.

This works because if $x_i \bmod p$ is the same as $x_j \bmod p$, the difference between $x_i$ and $x_j$ is necessarily a multiple of $p$. Although this always happens eventually, the resulting GCD is a divisor of $n$ other than 1. It may be $n$ itself, since the two sequences might repeat at the same time. In this (uncommon) case the algorithm fails, and can be repeated with a different parameter.

Python implementation:

```python
from random import randint
from math import gcd

def pollard_rho(n):
    # returns a non-trivial factor of composite n
    if n % 2 == 0:
        return 2
    while True:
        c = randint(1, n - 1)
        g = lambda x: (x * x + c) % n
        x = y = 2
        d = 1
        while d == 1:
            x = g(x)               # tortoise: one step
            y = g(g(y))            # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:                 # d == n means the run failed
            return d               # otherwise retry with a new c
```

## 0.4. Euler function

The Euler function, denoted $\phi(n)$, maps $n$ to the number of positive integers smaller than $n$ that are co-prime to $n$.

e.g.: $\phi(3)=2$, since 1 and 2 are co-prime to 3 and smaller than 3; $\phi(4)=2$ (namely 1 and 3).

The Euler function is a multiplicative function and has two properties as follows:

- $\phi(p^k) = p^k-p^{k-1}$, where p is a prime
- $\phi(mn) = \phi(m)*\phi(n)$ where $(m,n)=1$

Thus, for every natural number *n*, we can evaluate $\phi(n)$ using the following method:

- factorize $n$: $n = p_1^{k_1} p_2^{k_2} \cdots p_l^{k_l}$, where each $p_i$ is a prime and $k_i, l > 0$
- calculate $\phi(n)$ using the two properties
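The method above can be sketched in a self-contained way (the helper name `totient` and its trial-division factorization are illustrative choices):

```python
def totient(n):
    # evaluate phi(n) from the two properties: factor n into prime powers,
    # multiplying the contributions phi(p^k) = p^k - p^(k-1)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            pk = 1
            while n % p == 0:
                n //= p
                pk *= p
            result *= pk - pk // p    # phi(p^k) = p^k - p^(k-1)
        p += 1
    if n > 1:
        result *= n - 1               # leftover prime factor: phi(p) = p - 1
    return result
```

For example, $360 = 2^3 \cdot 3^2 \cdot 5$, so `totient(360)` multiplies $(8-4)(9-3)(5-1) = 96$.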

Also, $\sigma(n)$ represents the sum of all divisors of $n$.

e.g.: $\sigma(9) = 1+3+9 = 13$

A `perfect number` *n* is defined by $\sigma(n) = 2n$.
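As a quick illustration, the definition can be checked by brute force (function name and search bound chosen arbitrarily):

```python
def sigma_naive(n):
    # sum of all divisors of n, by trial division up to sqrt(n)
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d   # the paired divisor n/d
        d += 1
    return total

# perfect numbers below 10000
perfect = [n for n in range(2, 10000) if sigma_naive(n) == 2 * n]
```

This finds the classical perfect numbers 6, 28, 496 and 8128.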

The following is the implementation of these two functions.

```python
# factor(n) is assumed here to return the prime factorization of n
# as (prime, exponent) pairs
from factor import factor

def phi(n):
    result = n
    for p, k in factor(n):
        result = result // p * (p - 1)            # multiply by (1 - 1/p)
    return result

def sigma(n):
    result = 1
    for p, k in factor(n):
        result *= (p ** (k + 1) - 1) // (p - 1)   # 1 + p + ... + p^k
    return result
```

## 0.5. Modulo equation

The following code can solve a linear modulo equation, or a group of them. More details and explanations will be supplied if I am not too busy.

Note that I use `--` to represent $\equiv$ in the Python code.

```python
import re
from math import gcd

def solve_congruence(s):
    # solve a*x -- b (mod m), given as a string such as '8x--4(mod 12)'
    # (`--` stands for the congruence sign); returns all solutions in [0, m)
    a, b, m = map(int, re.match(r'(-?\d+)x--(-?\d+)\(mod(\d+)\)',
                                s.replace(' ', '')).groups())
    g = gcd(a, m)
    if b % g:
        return []                        # no solution exists
    a, b, m2 = a // g, b // g, m // g
    x0 = pow(a, -1, m2) * b % m2         # unique solution modulo m/g
    return [x0 + i * m2 for i in range(g)]
```