
MATHS BITE: The Kolakoski Sequence

The Kolakoski sequence is an infinite sequence of the symbols 1 and 2 that is its own “run-length encoding”. It is named after the mathematician William Kolakoski, who described it in 1965, although further research shows it was first discussed by Rufus Oldenburger in 1939.

This self-describing sequence consists of runs (“blocks”) of one or two 1s or 2s, where consecutive runs alternate between 1s and 2s and the sequence of run lengths is the sequence itself.

To construct the sequence, start with 1. The first term is 1, so the first run has length 1, and the next run must consist of 2s. The second term is 2, so that run has length 2, giving 1, 2, 2. The third term is 2, so the following run of 1s has length 2, giving 1, 2, 2, 1, 1, and so on. Continuing this indefinitely gives the Kolakoski sequence: 1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2, etc.
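A short Python sketch of this construction (not part of the original post): read the sequence’s own terms as run lengths, alternating the symbol between 1 and 2.

```python
def kolakoski(n):
    """Return the first n terms of the Kolakoski sequence 1, 2, 2, 1, 1, 2, ..."""
    seq = [1, 2, 2]
    i = 2  # index of the term giving the length of the next run
    while len(seq) < n:
        symbol = 1 if seq[-1] == 2 else 2   # runs alternate between 1s and 2s
        seq.extend([symbol] * seq[i])       # seq[i] is the length of this run
        i += 1
    return seq[:n]

print(kolakoski(15))  # [1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2]
```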

M x

Diophantine Approximation: Liouville’s Theorem

Diophantine approximation deals with the approximation of real numbers by rational numbers.

Liouville’s Theorem

In the 1840s Liouville obtained the first lower bound for the approximation of algebraic numbers by rationals:

Let α ∈ R be an irrational algebraic number satisfying f(α) = 0, where f ∈ Z[x] is a non-zero irreducible polynomial (one that cannot be factored into lower-degree integer polynomials) of degree d. Then there is a constant C > 0 such that for every fraction p/q

\left| \alpha - \frac{p}{q} \right| \geq \frac{C}{q^d}.

Proof

The proof utilises the mean value theorem. By this theorem, given p/q, there is a real ξ between α and p/q such that

f\left(\frac{p}{q}\right) = f\left(\frac{p}{q}\right) - f(\alpha) = f'(\xi)\left(\frac{p}{q} - \alpha\right).

Since f has integer coefficients and degree d, the value of f(p/q) is a rational number with denominator at worst q^d. Since f is irreducible of degree at least 2 (α is irrational), it has no rational roots, so f(p/q) cannot be equal to 0. Thus

\left| f\left(\frac{p}{q}\right) \right| \geq \frac{1}{q^d},

and so

\left| \alpha - \frac{p}{q} \right| = \frac{\left| f\left(\frac{p}{q}\right) \right|}{|f'(\xi)|} \geq \frac{C}{q^d},

where we may take C = 1 / \max_{|x - \alpha| \leq 1} |f'(x)| (when |α − p/q| > 1 the claimed inequality is immediate for any C ≤ 1).

A corollary of this result is that a number x which is well approximable by rational numbers, i.e. such that for every d ≥ 1 and every positive constant C there is a rational p/q with

\left| x - \frac{p}{q} \right| < \frac{C}{q^d},

must be transcendental.
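As a quick sanity check of the theorem (a Python sketch, not part of the original post), take α = √2, a root of f(x) = x² − 2, so d = 2. For the continued-fraction convergents p/q of √2, the quantity q²·|√2 − p/q| stays bounded away from zero, exactly as Liouville’s bound demands.

```python
import math

# Convergents p/q of sqrt(2) from its continued fraction [1; 2, 2, 2, ...]
p_prev, q_prev = 1, 0
p, q = 1, 1
for _ in range(12):
    p, p_prev = 2 * p + p_prev, p
    q, q_prev = 2 * q + q_prev, q
    # q^2 * |sqrt(2) - p/q| = |p^2 - 2*q^2| / (p/q + sqrt(2)); for convergents |p^2 - 2*q^2| = 1
    gap = abs(p * p - 2 * q * q) / (p / q + math.sqrt(2))
    print(f"p/q = {p}/{q},  q^2 * |sqrt(2) - p/q| ≈ {gap:.4f}")
# The values settle around 0.35, comfortably above the constant C = 1/(1 + 2*sqrt(2)) ≈ 0.26.
```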

Example

Let \beta = \sum_{n=1}^{\infty} 10^{-n!} (Liouville’s constant).

β is a real, transcendental number.

This is because there is a rational approximation

\frac{p_N}{q_N} = \sum_{n=1}^{N} 10^{-n!}, \qquad q_N = 10^{N!},

with

\left| \beta - \frac{p_N}{q_N} \right| = \sum_{n=N+1}^{\infty} 10^{-n!} \leq \frac{2}{10^{(N+1)!}} = \frac{2}{q_N^{\,N+1}}.

Analysing this inequality, the ratio

\frac{(N+1)!}{N!} = N + 1

is unbounded as N → +∞, so for any fixed d and C the bound 2/q_N^{N+1} eventually drops below C/q_N^d. Hence β is well approximable by rationals and, by the corollary above, transcendental.
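To see the inequality in action (a Python sketch, not from the original post), we can compute the partial sums exactly as fractions, use a much later partial sum as a stand-in for β, and read off the exponent e for which |β − p_N/q_N| ≈ q_N^(−e):

```python
from fractions import Fraction
from math import factorial, log10

def partial_sum(N):
    """p_N / q_N = sum of 10^(-n!) for n = 1..N, as an exact fraction."""
    return sum(Fraction(1, 10 ** factorial(n)) for n in range(1, N + 1))

beta = partial_sum(7)   # stand-in for beta; the neglected tail starts at 10^(-8!)

for N in range(1, 6):
    err = beta - partial_sum(N)              # ~ |beta - p_N/q_N|
    q_N = 10 ** factorial(N)
    e = (log10(err.denominator) - log10(err.numerator)) / log10(q_N)
    print(f"N = {N}:  |beta - p_N/q_N| ~ q_N^(-{e:.2f})")
# The exponents come out close to N + 1, so the error beats C / q^d for every
# fixed d once N is large enough, and beta cannot be algebraic.
```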

M x

 

Maths on Trial

Last week I had the pleasure of attending a talk by Leila Schneps on the mathematics of crime. A self-declared pure mathematician, Schneps recently became involved in studying the mathematics used in criminal trials. Rather than focusing on the mathematics of forensic data, such as DNA, the talk was on the use of Bayesian networks in analysing crime evidence.

One of the most challenging tasks for juries in criminal trials is to weigh the impact of different pieces of evidence, which may be dependent on each other. In fact, studies have shown that there are several fallacies that jury members fall into on a regular basis, such as overestimating the weight of a single piece of DNA evidence. This is because most people assume that the probability of an event A happening given that B happens, P(A|B), is equal to the probability of B happening given that A happens, P(B|A). However, this is NOT true: the two are connected by Bayes’ Rule, P(A|B) = P(B|A)·P(A)/P(B).

For example, a forensic specialist may say that a piece of DNA is found in 1 in every 1,000 people. The jury may take this to mean that the suspect must be guilty, since the chance of the match belonging to anyone else seems so low. However, this reasoning, often called the prosecutor’s fallacy, confuses P(match | innocent) with P(innocent | match): in a large pool of potential suspects, many other people would also match, so the evidence is far weaker than it appears.
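A toy Bayes’ rule calculation (hypothetical numbers chosen only for illustration, not figures from the talk) makes the gap between the two probabilities explicit:

```python
# Prosecutor's fallacy in numbers: P(match | innocent) is tiny,
# but P(guilty | match) can still be small if the pool of suspects is large.
p_match_given_innocent = 1 / 1000      # DNA profile frequency quoted by the specialist
population = 1_000_000                 # hypothetical pool of alternative suspects
p_guilty_prior = 1 / population        # flat prior over the pool
p_match_given_guilty = 1.0             # the culprit certainly matches

# Bayes' rule: P(guilty | match) = P(match | guilty) * P(guilty) / P(match)
p_match = (p_match_given_guilty * p_guilty_prior
           + p_match_given_innocent * (1 - p_guilty_prior))
p_guilty_given_match = p_match_given_guilty * p_guilty_prior / p_match

print(f"P(guilty | DNA match) ≈ {p_guilty_given_match:.3f}")   # ≈ 0.001, not 0.999
```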

Thus, Bayesian networks are a powerful tool to assess the weight of different kinds of evidence, taking into account their dependencies on one another, and what effect they have on the guilt of the suspect.

What are Bayesian Networks?

Bayesian networks are a type of statistical model which organises a body of knowledge in a given area, in this case the evidence in a criminal trial, by mapping out cause-and-effect relationships and encoding them with numbers that represent the extent to which one variable is likely to affect another. These networks are named after the British mathematician Reverend Thomas Bayes, due to their reliance on Bayes’ formula. The rule can be extended to multiple variables and multiple states, allowing complicated probabilities to be calculated.

Example

As a simple example, consider a Bayesian network with three nodes: the weather, a sprinkler and the state of the grass, with arrows from the weather to both the sprinkler and the grass, and from the sprinkler to the grass.

Let us say that the weather can only have three states (sunny, cloudy or rainy), that the grass can only be wet or dry, and that the sprinkler can be on or off. The arrows, which represent dependence, are drawn in this way because if it is rainy then the lawn will be wet, but if it is sunny for a long time then this will cause us to turn on the sprinkler, and hence the lawn will also be wet.

By inputting probabilities into this network that reflect real weather, lawn and sprinkler-use behaviour, it can be made to answer questions like:

“If the lawn is wet, what are the chances it was caused by rain or by the sprinkler?”

“If the chance of rain increases, how does that affect my having to budget time for watering the lawn?”
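The first of these questions can be answered by brute-force enumeration over the joint distribution. Here is a minimal Python sketch; all of the probability tables are made-up numbers chosen only to illustrate the mechanics, not values from the talk.

```python
from itertools import product

# Hypothetical conditional probability tables (illustrative only).
P_weather = {"sunny": 0.6, "cloudy": 0.25, "rainy": 0.15}
P_sprinkler_on = {"sunny": 0.5, "cloudy": 0.1, "rainy": 0.01}   # P(sprinkler on | weather)
P_wet = {                                                        # P(grass wet | weather, sprinkler)
    ("sunny", True): 0.90,  ("sunny", False): 0.01,
    ("cloudy", True): 0.90, ("cloudy", False): 0.05,
    ("rainy", True): 0.99,  ("rainy", False): 0.90,
}

def joint(weather, sprinkler_on, wet):
    """P(weather, sprinkler, grass) factorised along the network's arrows."""
    p = P_weather[weather]
    p *= P_sprinkler_on[weather] if sprinkler_on else 1 - P_sprinkler_on[weather]
    p *= P_wet[(weather, sprinkler_on)] if wet else 1 - P_wet[(weather, sprinkler_on)]
    return p

# P(rainy | grass is wet), marginalising over the sprinkler
p_wet = sum(joint(w, s, True) for w, s in product(P_weather, [True, False]))
p_rainy_and_wet = sum(joint("rainy", s, True) for s in [True, False])
print(f"P(rainy | wet grass) ≈ {p_rainy_and_wet / p_wet:.2f}")
```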

In her presentation, Leila Schneps talked briefly about a book she had released entitled ‘Math on Trial’, which describes 10 trials spanning the 19th century to today, in which mathematical arguments were used (and greatly misused) as evidence. The cases discussed include “Sally Clark, who was accused of murdering her children by a doctor with a faulty sense of calculation; of nineteenth-century tycoon Hetty Green, whose dispute over her aunt’s will became a signal case in the forensic use of mathematics; and of the case of Amanda Knox, in which a judge’s misunderstanding of probability led him to discount critical evidence – which might have kept her in jail.”

After hearing Schneps transmit her passion and excitement, I am fascinated with this subject and can’t wait to get my hands on this book to learn more! M x

Benford’s Law

Benford’s Law is named after the American physicist Frank Benford, who described it in 1938, although it had been previously mentioned by Simon Newcomb in 1881.

Benford’s Law states that in “naturally occurring collections of numbers” the leading significant digit is more likely to be small. For example, in sets of numbers which obey this law, the number 1 appears as the first significant digit about 30% of the time, which is much greater than if the digits were distributed uniformly: 11.1%.

In mathematics, a set of numbers satisfies this law if the leading digit, d, occurs with a probability:

P(d) = \log_{10}(d+1) - \log_{10}(d) = \log_{10}\left(\frac{d+1}{d}\right) = \log_{10}\left(1 + \frac{1}{d}\right).

Hence, if d = 1, then P(1) = \log_{10}(2) \approx 30.1\%.
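The whole distribution can be read straight off this formula; a short Python check:

```python
from math import log10

# Benford's law: expected frequency of each leading digit d = 1..9
for d in range(1, 10):
    print(f"P({d}) = {log10(1 + 1 / d):.1%}")
# 30.1%, 17.6%, 12.5%, 9.7%, 7.9%, 6.7%, 5.8%, 5.1%, 4.6%
```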

The leading digits have the following distribution in Benford’s law:

d       1       2       3       4      5      6      7      8      9
P(d)    30.1%   17.6%   12.5%   9.7%   7.9%   6.7%   5.8%   5.1%   4.6%

Source: Wikipedia

[Figure: bar chart of the Benford’s law leading-digit distribution | Source: Wolfram MathWorld]

As P(d) is proportional to the space between d and d+1 on a logarithmic scale, Benford’s law is equivalent to saying that the mantissae (fractional parts) of the logarithms of the numbers are uniformly distributed.
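Sequences whose log-mantissae equidistribute, such as the powers of 2, therefore obey the law. A quick Python check (a sketch, not part of the original post):

```python
from collections import Counter
from math import log10

# Leading digits of 2^1, ..., 2^5000 versus the Benford prediction
counts = Counter(int(str(2 ** n)[0]) for n in range(1, 5001))
for d in range(1, 10):
    observed = counts[d] / 5000
    expected = log10(1 + 1 / d)
    print(f"d = {d}: observed {observed:.1%}  vs  Benford {expected:.1%}")
```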

Applications

Benford’s law has found applications in a wide variety of data sets, such as stock prices, house prices, death rates and mathematical constants.

[Figure: frequency of first significant digit of physical constants plotted against Benford’s Law | Source: Wikipedia]

Because of this, fraud can be detected by applying Benford’s law to data sets. If a person fabricates ‘random’ values in an attempt not to appear suspicious, they will probably select numbers whose initial digits are roughly uniformly distributed, which, as explained above, is nothing like the distribution of naturally occurring data. In fact, this application of Benford’s law is so powerful that there is an “industry specialising in forensic accounting and auditing which uses these phenomena to look for inconsistencies in data.”
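A minimal sketch of such a screen, assuming a hypothetical list of claimed payment amounts (the threshold and the data below are made up purely for illustration):

```python
from collections import Counter
from math import log10

def leading_digit(x):
    """First significant digit of a non-zero number, read off its scientific notation."""
    return int(f"{abs(x):e}"[0])

def benford_screen(amounts):
    """Compare observed leading-digit frequencies with Benford's law."""
    counts = Counter(leading_digit(a) for a in amounts if a != 0)
    total = sum(counts.values())
    for d in range(1, 10):
        observed = counts[d] / total
        expected = log10(1 + 1 / d)
        flag = "  <-- deviates noticeably" if abs(observed - expected) > 0.05 else ""
        print(f"d = {d}: observed {observed:.1%}, Benford {expected:.1%}{flag}")

# Hypothetical invoice amounts; fabricated 'random-looking' figures tend to show
# a roughly flat digit profile instead of Benford's decreasing one.
benford_screen([4871.23, 1204.50, 7318.99, 2450.10, 9120.75, 1543.00, 3699.42, 8812.60])
```

In practice a forensic auditor would use far more data and a proper goodness-of-fit test; the sketch only shows the shape of the comparison.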

Datagenetics.com describes how:

“In 1993, in State of Arizona v. Wayne James Nelson (CV92-18841), the accused was found guilty of trying to defraud the state of nearly $2 million, by diverting funds to a bogus vendor.

The perpetrator selected payments with the intention of making them appear random: None of the check amounts was duplicated, there were no round-numbers, and all the values included dollars and cents amounts. However, as Benford’s Law is esoterically counterintuitive, he did not realize that his seemingly random looking selections were far from random.”

Sources: 1 | 2 | 3 | 4

 

After writing this I’ve realised that I touched on this law (in less detail) in a previous post during my Christmas series! M x

Choosing a Password

It has been argued that password systems are not a good way to authenticate, because passwords are either difficult to remember, or easy to remember but therefore also easy to crack. So how do we choose a good password? XKCD posted this image suggesting a strategy for creating one:

[Image: XKCD’s “Password Strength” comic]

This method tries to eradicate the age-old way of creating passwords that are, in fact, almost impossible for us to remember but relatively easy for a computer to crack.

The password suggested by XKCD (although no longer a good password, because everyone knows about it!) is practically resistant to a brute-force attack: although it is composed only of lowercase letters, it is very long. The method used to break this kind of password would therefore be a dictionary attack. However, these four words would probably not appear together in any phrase list, as they aren’t usually associated with one another.

But what happens when this method (stringing together four words) becomes common practice? A way to combat it might be to take the top 10,000 English words and try different combinations of them until the password is found. It is therefore safest to assume that the password cracker knows the method you are using, and so we should choose at least one uncommon word that is hard to guess, such as mirth, to include in the password. This will make it extremely difficult to crack.
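To put rough numbers on this (a Python sketch using the comic’s own assumptions of a roughly 2,048-word list and about 1,000 guesses per second; the 10,000-word figure is the attack list mentioned above):

```python
from math import log2

def passphrase_bits(wordlist_size, n_words):
    """Entropy in bits of n words chosen uniformly and independently from a wordlist."""
    return n_words * log2(wordlist_size)

print(f"4 words from a 2,048-word list:  {passphrase_bits(2_048, 4):.0f} bits")   # ~44 bits
print(f"4 words from a 10,000-word list: {passphrase_bits(10_000, 4):.0f} bits")  # ~53 bits

# At ~1,000 guesses per second, exhausting 2^44 combinations takes centuries.
years = 2 ** 44 / 1_000 / (3600 * 24 * 365)
print(f"2^44 guesses at 1,000/s ≈ {years:.0f} years")   # ≈ 558 years
```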

M x

NEWS: New Twin Primes Found

PrimeGrid is a collaborative website with the aim of searching for prime numbers. It is similar to GIMPS, which searches specifically for Mersenne primes. It works by allowing anyone to download its software and donate their “unused CPU time” to the search for primes. PrimeGrid is responsible for many of the recently discovered primes, including “several in the last few months which rank in the top 160 largest known primes“.

On the 14th of September they announced their most recent discovery, made by the user Tom Greer: a new pair of twin primes. (Note that twin primes are prime numbers that differ by two.)

2996863034895 · 2^1290000 ± 1

The primes are “388,342 digits long, eclipsing the previous record of 200,700 digits”. They have been entered into the database of The Largest Known Primes, which is maintained by Chris Caldwell; they are currently ranked 1st among twin primes, and each is ranked 4180th overall.
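Taking the primes as reported above, the digit count is easy to verify with a short Python sketch:

```python
from math import log10

# Number of decimal digits of k * 2^n (adding or subtracting 1 does not change it here)
k, n = 2996863034895, 1290000
digits = int(n * log10(2) + log10(k)) + 1
print(digits)   # 388342
```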

Source

M x