Catalan’s Conjecture

Although popularly known as Catalan’s conjecture, this is in fact a theorem in number theory, proven by Preda Mihăilescu in 2002, 158 years after it was conjectured in 1844 by the French and Belgian mathematician Eugène Charles Catalan.

The theorem states that the only solution in the natural numbers of

x^a – y^b = 1

for a, b > 1 and x, y > 0 is x = 3, a = 2, y = 2, b = 3; that is, 3^2 – 2^3 = 1.

Catalan ‘Near Misses’ (i.e. x^a – y^b ≤ 10; 2 ≤ x, y, a, b ≤ 100; a, b prime)

  • 3^3 – 5^2 = 2
  • 2^7 – 5^3 = 3
  • 2^3 – 2^2 = 6^2 – 2^5 = 5^3 – 11^2 = 4
  • 2^5 – 3^3 = 5
  • 2^5 – 5^2 = 4^2 – 3^2 = 2^7 – 11^2 = 7
  • 4^2 – 2^3 = 8
  • 5^2 – 4^2 = 6^2 – 3^3 = 15^2 – 6^3 = 9
  • 13^3 – 3^7 = 10

Proving the Conjecture

Similarly to Fermat’s Last Theorem, the proof of this conjecture was assembled over a long period of time. In 1850, Victor-Amédée Lebesgue established that b cannot equal 2. It was then only over a century later, in 1965, that Chao Ko proved the other quadratic case: a cannot equal 2 unless x = 3.

Hence, this left the case where a and b are odd primes. Expressing the equation as y^b = x^a – 1 = (x – 1)·(x^a – 1)/(x – 1), one can show that the greatest common divisor of the two factors on the right-hand side is either 1 or a. The case gcd = 1 was eliminated by J. W. S. Cassels in 1960. Hence, only the case gcd = a remained.

It was this “last, formidable, hurdle” that Mihăilescu surmounted. In 2000, he showed that a and b would have to be a ‘Wieferich pair’, i.e. they would have to satisfy a^(b-1) ≡ 1 (mod b^2) and b^(a-1) ≡ 1 (mod a^2). In 2002, he showed that such solutions are impossible!


M x


Algorithms: PART 2

Read part 1 here!

QR Algorithm

From 1959 to 1961, John G. F. Francis worked on finding a stable method to compute eigenvalues, and thus created the QR algorithm. Eigenvalues are among the most essential numbers associated with matrices; however, they can be quite difficult to compute.

This algorithm is based on the fact that it is relatively easy to transform a square matrix into one that is almost upper triangular (with one extra set of non-zero entries just below the main diagonal). It builds on the QR decomposition, which writes a matrix A as the product of an orthogonal matrix Q and an upper triangular matrix R: the QR algorithm iteratively factors A_i = Q_i R_i and forms A_(i+1) = R_i Q_i.
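In code, the basic unshifted iteration can be sketched as follows. This is a toy illustration only, leaning on NumPy’s built-in QR factorisation; Francis’s practical algorithm adds the Hessenberg reduction and shifts mentioned above, and the example matrix is made up.

```python
import numpy as np

def qr_eigenvalues(A, iters=200):
    """Unshifted QR iteration: factor A_i = Q_i R_i, then set A_{i+1} = R_i Q_i.
    Each step is an orthogonal similarity transform, so eigenvalues are preserved."""
    Ai = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ai)
        Ai = R @ Q
    # For suitable matrices, A_i converges to triangular form,
    # with the eigenvalues appearing on the diagonal.
    return np.sort(np.diag(Ai))

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric example with eigenvalues 1 and 3
```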

“The algorithm is numerically stable because it proceeds by orthogonal similarity transforms.”


Under certain conditions, the matrices A_i converge to a triangular matrix. The eigenvalues of a triangular matrix are listed on its diagonal, so the eigenvalue problem is solved.

By the mid-1960s, the QR algorithm had turned once-formidable eigenvalue problems into routine calculations.


Quicksort

The Quicksort algorithm was developed by Tony Hoare in 1962. It puts N things in numerical or alphabetical order using a recursive strategy:

  1. Pick one element as a pivot
  2. Separate the rest into piles of big and small elements, as compared with the pivot
  3. Repeat this procedure on each pile

Although you can get unlucky and end up doing all N(N–1)/2 comparisons, Quicksort runs on average in O(N log N) time, which is very fast. Its simplicity has made Quicksort a “poster child of computational complexity”.
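The three steps above can be sketched directly. This is a simple list-based version for clarity; Hoare’s original partitions the array in place, which is more memory-efficient.

```python
def quicksort(items):
    """Recursive quicksort: pick a pivot, split the rest into 'small'
    and 'big' piles relative to the pivot, and sort each pile."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    small = [x for x in rest if x < pivot]   # pile below the pivot
    big = [x for x in rest if x >= pivot]    # pile at or above the pivot
    return quicksort(small) + [pivot] + quicksort(big)
```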

Fast Fourier Transform

The Fast Fourier Transform was developed by James Cooley and John Tukey in 1965, and it revolutionised signal processing. Although the idea can be traced back to Gauss, it was the paper by Cooley and Tukey that made clear how easily Fourier transforms can be computed. It relies on a ‘divide-and-conquer’ strategy, which reduces the computation from O(N^2) to O(N log N).
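A minimal radix-2 sketch of the divide-and-conquer idea (the input length is assumed to be a power of two; real FFT libraries are far more general and heavily optimised):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # transform of the even-indexed samples
    odd = fft(x[1::2])   # transform of the odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```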

Integer Relation Detection Algorithm

Given real numbers x1, …, xn, are there integers a1, …, an (not all 0) for which a1x1 + … + anxn = 0? Helaman Ferguson and Rodney Forcade found an algorithm – the Integer Relation Detection Algorithm – to answer this question. The case n = 2 is easily solved by Euclid’s algorithm, which computes the terms of the continued-fraction expansion of x1/x2; if x1/x2 is rational, the expansion terminates.
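The n = 2 case can be sketched with exact rational arithmetic. This is a toy illustration: the inputs here are assumed to be exactly rational, whereas the Ferguson–Forcade algorithm targets general real inputs.

```python
from fractions import Fraction

def continued_fraction(x, max_terms=30):
    """Continued-fraction terms of x via Euclid's algorithm;
    for rational inputs the expansion terminates."""
    terms = []
    x = Fraction(x)
    for _ in range(max_terms):
        a = x.numerator // x.denominator  # integer part
        terms.append(a)
        x -= a
        if x == 0:
            break  # expansion terminated: the ratio is rational
        x = 1 / x
    return terms

def relation_n2(x1, x2):
    """Integers (a1, a2), not both 0, with a1*x1 + a2*x2 = 0, for rational inputs."""
    r = Fraction(x1) / Fraction(x2)  # x1/x2 = p/q  =>  q*x1 - p*x2 = 0
    return (r.denominator, -r.numerator)
```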

The detection algorithm has, for example, been used to find the precise coefficients of the polynomials satisfied by the third and fourth bifurcation points of the logistic map.

Logistic Map

It has also proved useful in simplifying calculations with Feynman diagrams in quantum field theory.

Fast Multipole Algorithm

This algorithm overcomes one of the biggest problems of N-body simulations: accurate calculation of the motions of N particles interacting via gravitational or electrostatic forces seemingly requires O(N^2) calculations, one for each pair of particles. This algorithm, developed in 1987 by Leslie Greengard and Vladimir Rokhlin, does it with O(N) computations.

How does it do this? It uses multipole expansions to approximate the effects of a distant group of particles on a local group. Then we define ever-larger groups as distances increase. One of the big advantages of the fast multipole algorithm is that it comes with rigorous error estimates, a feature that a lot of other methods lack.

M x

10 Algorithms: PART 1

Monte Carlo Method

In 1946, at the Los Alamos Scientific Laboratory, John von Neumann, Stan Ulam and Nick Metropolis created the Metropolis algorithm, also known as the Monte Carlo method. This algorithm obtains approximate solutions to numerical problems that have an unmanageable number of degrees of freedom, and to combinatorial problems of factorial size, by mimicking a random process.
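As a toy illustration of mimicking a random process (the classic textbook estimate of π from random points in a square, not the Metropolis algorithm itself):

```python
import random

def monte_carlo_pi(samples=100_000, seed=0):
    """Estimate pi from the fraction of random points in the unit square
    that fall inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples
```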

Read more here.

Simplex Method

In 1947, George Dantzig created the simplex method for linear programming. Linear programming dominates the world of industry, where “economic survival depends on the ability to optimise within budgetary and other constraints”. Its widespread application makes Dantzig’s algorithm one of the most successful of all time.

The simplex method is an elegant way of arriving at optimal answers, and in practice it is highly efficient.
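Dantzig’s method walks between vertices of the feasible region, exploiting the fact that an optimum of a linear program is attained at a vertex. As a toy illustration of that fact only (brute-force vertex enumeration on a made-up two-variable problem, not the simplex pivoting itself):

```python
from itertools import combinations

# Made-up toy LP: maximise 3x + 2y subject to
#   x + y <= 4,  x <= 3,  y <= 3,  x >= 0,  y >= 0
# each constraint stored as (a, b, c) meaning a*x + b*y <= c
constraints = [(1, 1, 4), (1, 0, 3), (0, 1, 3), (-1, 0, 0), (0, -1, 0)]

def feasible_vertices(cons):
    """Intersect constraint boundaries pairwise, keeping feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundary lines never meet
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

# The optimum of a linear program is attained at a vertex:
best = max(feasible_vertices(constraints), key=lambda p: 3 * p[0] + 2 * p[1])
```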

Krylov Subspace Iteration Methods

The development of the Krylov subspace iteration methods was initiated in 1950 by Magnus Hestenes, Eduard Stiefel and Cornelius Lanczos of the Institute for Numerical Analysis at the National Bureau of Standards. They address the simple-looking task of solving equations of the form Ax = b. Although this seems simple, when A is a massive n×n matrix the algebraic answer x = A^(-1)b is not easy to compute!

So iterative methods, such as solving equations of the form K x_(i+1) = K x_i + b – A x_i, where K is an easily inverted approximation to A, were introduced. This led to the study of Krylov subspaces, named after the Russian mathematician Nikolai Krylov, which are spanned by the powers of a matrix applied to an initial residual vector r_0 = b – A x_0.

Lanczos found a way to generate an orthogonal basis for such a subspace when the matrix is symmetric, and Hestenes and Stiefel then proposed an even better method, the conjugate gradient method, for systems that are both symmetric and positive definite.
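A compact sketch of the conjugate gradient method for a small symmetric positive-definite system (the matrix is an arbitrary example; library implementations add preconditioning and more careful stopping criteria):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Hestenes-Stiefel conjugate gradient for symmetric positive-definite A.
    The iterates live in the growing Krylov subspaces span{r0, A r0, A^2 r0, ...}."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x              # initial residual r0 = b - A x0
    p = r.copy()               # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p  # new direction, conjugate to the old ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # made-up SPD example
b = np.array([1.0, 2.0])
```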

Fortran Optimising Compiler

Developed in 1957 by a team at IBM led by John Backus, the Fortran optimising compiler is said to be one of the most important innovations in the history of computer programming. After its development, scientists could tell a computer what they wanted it to do without having to “descend into the netherworld of machine code”.

Fortran I consisted of 23,500 assembly-language instructions, and although this is not a lot compared to modern compilers, it was capable of a great number of sophisticated computations.

The compiler “produced code of such efficiency that its output would startle the programmers who studied it.” 

– Backus

Part 2 coming soon! M x

MATHS BITE: Steinmetz Solid

Named after the mathematician Charles Proteus Steinmetz, a Steinmetz solid is the solid body generated by the intersection of two or three cylinders of equal radius whose axes meet at right angles.


Bicylinder | Source: Wikipedia

These solids are named after Steinmetz as he managed to determine the volume of the intersection, though these solids were known long before he studied them.

If two cylinders intersect, the resulting solid is called a bicylinder; if three, a tricylinder.



Bicylinder | Source: Wikipedia

Surface Area: 16r^2

Volume: (16/3)r^3, where r is the radius of both cylinders.

The volume of a cube with side 2r minus the volume of the eight inscribed pyramids equals the volume of the bicylinder: the volume of the 8 pyramids is 8 × (1/3)r^2 × r = (8/3)r^3. The bicylinder volume is then:

(2r)^3 – (8/3)r^3 = (16/3)r^3
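As a numerical cross-check of this formula: every horizontal slice of the bicylinder at height z is a square of side 2√(r^2 – z^2), so integrating the slice areas 4(r^2 – z^2) from –r to r also gives (16/3)r^3. A small sketch (midpoint rule, with an arbitrary step count):

```python
def bicylinder_volume(r, n=100_000):
    """Integrate the square cross-sections of a bicylinder of radius r:
    V = integral of 4*(r^2 - z^2) dz over [-r, r], via the midpoint rule."""
    h = 2 * r / n
    total = 0.0
    for i in range(n):
        z = -r + (i + 0.5) * h  # midpoint of the i-th slice
        total += 4 * (r * r - z * z) * h
    return total
```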


For the tricylinder:

Surface Area: 3(16 – 8√2)r^2

Volume: (16 – 8√2)r^3


Tricylinder | Source: Wikipedia


M x


I love listening to podcasts, be it when I’m walking to university or simply to pass the time on an airplane; there is something about the format of a podcast that I thoroughly enjoy. I previously did a post on 5 Maths Podcasts and I thought today I would add to that list!

Math Ed Podcast

Hosted by Samuel Otten from the University of Missouri, this podcast consists of interviews with mathematics education researchers about recent studies. Perfect for those interested in the teaching of mathematics, this podcast allows you to keep up to date with the most current research in this field.

Women in Math: The Limit Does Not Exist

Inspired by the fact that women seem to be the vast minority in higher mathematics, this podcast aims to promote the visibility of women in mathematics, and so I could not fail to mention it. The episodes are short and sweet, allowing you to quickly get through a bunch of episodes (at least that’s what I did when I discovered it!).

Mathematical Moments from the American Mathematical Society

This podcast is great at describing how mathematics plays a part in science, nature, technology and human culture. From presenting realistic animation to beating cancer, it delves into various fields where researchers talk about how they use mathematics in their work.

Breaking Math

Personally my favourite out of all, Breaking Math aims to make math accessible to everyone and make it enjoyable – something I am also very passionate about. It airs every other week and explains confusing topics, such as chaos theory, in an accessible way. “If you have 45 or so minutes to spare, you’re almost guaranteed to learn something new!”  

M x

Casting Out Nines

Inspired by a recent Numberphile video, I thought I would share with you a cool trick, called ‘Casting Out Nines‘, for checking that your arithmetic is correct.

To check the result of a calculation using this technique, take the digital root of each number in the calculation. Then perform the calculation on these digital roots, and take the digital root of this answer. If no mistake has been made, the digital root of the result of this calculation should be the same as the digital root of the result of the original calculation.

What is a digital root?

The digital root of a number is the single digit obtained by repeatedly summing digits: sum the digits of the number, then sum the digits of that result, and so on until a single digit remains.

Example: Multiplication

12345 x 67890 = 838102050

The digital roots of 12345 and 67890 are 6 and 3 respectively. The product of these digital roots is 18. The digital root of 18 is 9.

The sum of the digits of 838102050 is 27, whose digital root is also 9.

Hence, this confirms that the calculation is correct. Pretty cool eh?
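The check above can be sketched in a few lines of Python (the function names are my own, for illustration):

```python
def digital_root(n):
    """Repeatedly sum the digits of n until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def check_product(a, b, result):
    """Casting out nines: compare digital roots. A mismatch proves an error;
    a match is consistent with (but does not prove) a correct result."""
    return digital_root(digital_root(a) * digital_root(b)) == digital_root(result)
```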

How does this work?

For any number

10^n·d_n + 10^(n-1)·d_(n-1) + ⋯ + d_0

the digit sum is d_n + d_(n-1) + ⋯ + d_0. The difference between the original number and its digit sum is

(10^n·d_n + 10^(n-1)·d_(n-1) + ⋯ + d_0) – (d_n + d_(n-1) + ⋯ + d_0)
= (10^n – 1)·d_n + (10^(n-1) – 1)·d_(n-1) + ⋯ + 9·d_1.

Noting that numbers of the form 10^i – 1 are always divisible by 9, replacing the original number by its digit sum has the effect of casting out

((10^n – 1)/9)·d_n + ((10^(n-1) – 1)/9)·d_(n-1) + ⋯ + d_1

lots of 9.

M x


6174 is known as Kaprekar’s Constant. Why is this number important? Perform the following process (called Kaprekar’s Routine):

  1. Take any four-digit number whose digits are not all identical.
  2. Arrange the digits in descending and then ascending order to get two four-digit numbers (keeping leading zeros).
  3. Subtract the smaller number from the bigger number.
  4. Go to step 2 and repeat.

This process will always reach its fixed point 6174 in at most 7 iterations. 6174 is a fixed point as once it has been reached, the process will continue yielding 7641 – 1467 = 6174.

Example: 3141

4311 – 1134 = 3177
7731 – 1377 = 6354
6543 – 3456 = 3087
8730 – 0378 = 8352
8532 – 2358 = 6174


The Maths Behind it

Each number in the sequence uniquely determines the next number. As there are only finitely many possibilities, eventually the sequence must return to a number it has already hit. This leads to a cycle.

So any starting number will give a sequence that eventually cycles.

There can be many cycles, but for four-digit numbers in base 10 there happens to be exactly one non-trivial cycle, which involves the number 6174.
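The routine itself is a few lines (padding with leading zeros so the number always has four digits; the loop assumes the digits are not all identical, otherwise the difference is 0 and it never reaches 6174):

```python
def kaprekar_step(n):
    """One step of Kaprekar's routine, padding to four digits with leading zeros."""
    digits = sorted(f"{n:04d}")
    ascending = int("".join(digits))
    descending = int("".join(reversed(digits)))
    return descending - ascending

def steps_to_6174(n):
    """Iterations needed to reach the fixed point 6174."""
    count = 0
    while n != 6174:
        n = kaprekar_step(n)
        count += 1
    return count
```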

To read more, click here.

Numberphile Video


Click here for my previous post about Kaprekar. M x