Read part 1 here!

### QR Algorithm

From 1959 to 1961, John G. F. Francis worked on finding a stable method to compute eigenvalues, and the result was the QR algorithm. Eigenvalues are among the most essential numbers associated with a matrix; however, they can be quite difficult to compute.

This algorithm is based on the fact that it is relatively easy to transform a square matrix into a matrix that is *almost* upper triangular (with one extra set of non-zero entries just below the main diagonal). Building on the QR decomposition, which writes a matrix *A* as a product of an orthogonal matrix *Q* and an upper triangular matrix *R*, the QR algorithm iteratively factors *A*_{i} = *Q*_{i}*R*_{i} and forms *A*_{i+1} = *R*_{i}*Q*_{i}.

“The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.” — Wikipedia

Under certain conditions, the matrices *A*_{i} converge to a triangular matrix. Since the eigenvalues of a triangular matrix are listed on its diagonal, the eigenvalue problem is then solved.
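The iteration described above can be sketched in a few lines of NumPy. This is the basic *unshifted* version, not the practical algorithm (which first reduces the matrix to Hessenberg form and uses shifts to accelerate convergence); the function name and iteration count are illustrative choices, not part of the original method.

```python
import numpy as np

def qr_eigenvalues(A, iterations=500):
    """Unshifted QR iteration: factor A_i = Q_i R_i, then form
    A_{i+1} = R_i Q_i.  Each step is an orthogonal similarity
    transform, so the eigenvalues are preserved; for many matrices
    the iterates converge to a (near-)triangular matrix whose
    diagonal holds the eigenvalues."""
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)

# Symmetric example with distinct eigenvalues, for which the
# iterates converge to a diagonal matrix.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(np.sort(qr_eigenvalues(A)))
```

For this symmetric matrix the diagonal of the final iterate agrees with the eigenvalues NumPy computes directly.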

By the mid-1960s, the QR algorithm had turned once-formidable eigenvalue problems into routine calculations.

### Quicksort

The Quicksort algorithm was developed by Tony Hoare in 1962. It puts N things in numerical or alphabetical order by using a recursive strategy:

- Pick one element as a *pivot*
- Separate the rest into piles of *big* and *small* elements, as compared with the pivot
- Repeat this procedure on each pile

Although you can get unlucky and end up doing all N(N-1)/2 comparisons, on average Quicksort runs with O(N log N) efficiency, which is very fast. Its simplicity has made Quicksort a “*poster child of computational complexity*”.
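The three steps above translate almost directly into code. This is a minimal sketch for clarity, not Hoare's original in-place formulation (which partitions the array without allocating the extra piles):

```python
def quicksort(items):
    """Recursive quicksort: pick a pivot, split the rest into a
    small pile and a big pile, sort each pile, and concatenate."""
    if len(items) <= 1:
        return list(items)
    pivot, *rest = items
    small = [x for x in rest if x < pivot]   # elements below the pivot
    big = [x for x in rest if x >= pivot]    # elements at or above it
    return quicksort(small) + [pivot] + quicksort(big)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The worst case mentioned above occurs when the pivot is always the smallest or largest remaining element, e.g. on already-sorted input with this first-element pivot choice.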

### Fast Fourier Transform

The fast Fourier transform was developed by James Cooley and John Tukey in 1965, and it revolutionised signal processing. Although the idea can be traced back to Gauss, it was the paper by Cooley and Tukey that made it clear how easily Fourier transforms can be computed. The FFT relies on a ‘*divide-and-conquer*’ strategy, reducing the transform from an O(N^2) to an O(N log N) computation.
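A minimal sketch of the divide-and-conquer idea, assuming the input length is a power of two (the radix-2 case; the Cooley–Tukey paper handles general composite lengths):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT.  Split the input into even- and
    odd-indexed halves, transform each recursively, then combine
    them with 'twiddle factors' -- O(N log N) instead of O(N^2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k]
                for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])
```

Each level of recursion does O(N) work and there are log N levels, which is where the O(N log N) count comes from.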

### Integer Relation Detection Algorithm

Given real numbers x1, …, xn, are there integers a1, …, an (not all 0) for which a1x1 + … + anxn = 0? Helaman Ferguson and Rodney Forcade found an algorithm – the Integer Relation Detection Algorithm – to answer this question. The case n = 2 is easily solved by Euclid’s algorithm, which computes the terms in the continued-fraction expansion of x1/x2: if x1/x2 is rational, the expansion terminates.
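The n = 2 case can be illustrated concretely. A small sketch, assuming exact rational inputs (the names and the term limit are illustrative): Euclid's algorithm generates the continued-fraction terms of x1/x2, and termination certifies the relation q·x1 − p·x2 = 0 when x1/x2 = p/q.

```python
from fractions import Fraction

def continued_fraction(x, max_terms=30):
    """Continued-fraction expansion of a rational x via Euclid's
    algorithm: repeatedly split off the integer part and invert
    the remainder.  For rational x the expansion terminates."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator  # integer part
        terms.append(a)
        x = x - a                         # fractional remainder
        if x == 0:
            break                         # rational: expansion ends
        x = 1 / x
    return terms

print(continued_fraction(Fraction(415, 93)))  # [4, 2, 6, 7]
```

For irrational ratios the expansion never terminates, which is exactly why detecting relations among n > 2 real numbers needed the new algorithm.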

The detection algorithm has, for example, been used to find the precise coefficients of the polynomials satisfied by the third and fourth bifurcation points of the logistic map.

It has also proved useful in simplifying calculations with Feynman diagrams in quantum field theory.

### Fast Multipole Algorithm

This algorithm overcomes one of the biggest problems of N-body simulations: accurately calculating the motions of N particles interacting via gravitational or electrostatic forces seemingly requires O(N^2) calculations, i.e. one for each pair of particles. This algorithm, developed in 1987 by Leslie Greengard and Vladimir Rokhlin, does it with O(N) computations.

**How does it do this?** It uses multipole expansions to approximate the effects of a distant group of particles on a local group, defining ever-larger groups as distances increase. One of the big advantages of the fast multipole algorithm is that it comes with rigorous error estimates, a feature that many other methods lack.
