Influential Mathematicians: Gauss (3)

Probability and Statistics

Gauss introduced what is now known as the Gaussian distribution: he showed how probability can be represented by a bell-shaped curve that peaks around the mean and falls off quickly towards plus or minus infinity.

Normal distribution probability density function
Source: Wikipedia

He also created the Gaussian function: a function of the form

f(x) = ae^{-\frac{(x-b)^2}{2c^2}}

for arbitrary real constants a, b and c.
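To get a feel for this, here is a small Python sketch (my own illustration, not part of the original): choosing a = 1/(c√(2π)), b as the mean and c as the standard deviation turns the Gaussian function into the normal distribution's bell-shaped density.

```python
import numpy as np

def gaussian(x, a, b, c):
    """Gaussian function f(x) = a * exp(-(x - b)^2 / (2 c^2))."""
    return a * np.exp(-((x - b) ** 2) / (2 * c ** 2))

# With a = 1 / (c * sqrt(2*pi)), b = mean and c = standard deviation,
# the Gaussian function becomes the normal probability density.
mean, std = 0.0, 1.0
x = np.linspace(-4, 4, 9)
pdf = gaussian(x, a=1 / (std * np.sqrt(2 * np.pi)), b=mean, c=std)
print(pdf)  # largest at x = 0 (the mean), falling off towards the tails
```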

Modular Arithmetic

The modern approach to modular arithmetic was developed by Gauss in his book Disquisitiones Arithmeticae, published in 1801.  This now has application in number theory, abstract algebra, computer science, cryptography, and even in visual and musical art.
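As a small illustration of the idea (my own toy example): two integers are congruent modulo n when they leave the same remainder on division by n, and fast modular exponentiation is exactly the kind of arithmetic that modern cryptography relies on.

```python
# Two integers a and b are congruent modulo n if n divides (a - b),
# i.e. they leave the same remainder on division by n.
def congruent(a: int, b: int, n: int) -> bool:
    return (a - b) % n == 0

print(congruent(38, 14, 12))   # True: 38 ≡ 14 (mod 12), like clock arithmetic
print(17 % 12)                 # 5: "17 o'clock" is 5 o'clock

# Modular exponentiation, a workhorse of cryptography:
# Python's built-in pow computes (base ** exp) % mod efficiently.
print(pow(7, 128, 13))         # 3
```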

Surveying

Whilst doing a surveying job for the Royal House of Hanover in the years after 1818, Gauss was also looking into the shape of the Earth, and started to question what the shape of space itself was. This led him to question Euclidean geometry – one of the central tenets of the whole of mathematics – which presumes a flat space rather than a curved one. He later claimed that as early as 1800 he had already started to consider types of non-Euclidean geometry (in which the parallel axiom does not hold), which were consistent and free of contradiction. However, to avoid controversy, he did not publish anything in this area and left the field open to Bolyai and Lobachevsky, although he is still considered by some to be the pioneer of non-Euclidean geometry.

This survey work also fuelled Gauss’ interest in differential geometry, which uses differential calculus to study problems in geometry involving curves and surfaces. He developed what has become known as Gaussian curvature. This is an intrinsic measure of curvature that depends only on how distances are measured on the surface, not on the way it is embedded in space.

Positive, negative and zero Gaussian curvature of a shell
Source: shellbuckling.com
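A quick way to see the three cases in the picture above (my own illustrative numbers): Gaussian curvature is the product of the two principal curvatures, so a sphere has positive curvature, a cylinder zero, and a saddle negative.

```python
# Gaussian curvature K is the product of the two principal curvatures k1, k2.
def gaussian_curvature(k1: float, k2: float) -> float:
    return k1 * k2

R = 2.0
print(gaussian_curvature(1 / R, 1 / R))   # sphere: K = 1/R^2 > 0
print(gaussian_curvature(1 / R, 0.0))     # cylinder: K = 0 (intrinsically flat)
print(gaussian_curvature(1 / R, -1 / R))  # saddle: K < 0
```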

His achievements during these years, however, were not limited to pure mathematics. He invented the heliotrope, an instrument that uses a mirror to reflect sunlight over great distances to mark positions in a land survey.

Heliotrope | Source: Wikipedia

All in all, this period of time was one of the most fruitful periods of his academic life; he published over 70 papers between 1820 and 1830.

In later years, he worked with Wilhelm Weber to make measurements of the Earth’s magnetic field, and together they built one of the first electromagnetic telegraphs.

Read part 1 here and part 2 here.

Let me know what you think of this new series! M x

 

Modern Mathematicians: Hirotugu Akaike

On November 5th, a Google Doodle celebrated the 90th anniversary of Hirotugu Akaike’s birth. Having never heard of his work before, I was interested in finding out more about him.

Google Doodle celebrating Hirotugu Akaike’s 90th birthday
Google Doodle

Hirotugu Akaike was an award-winning Japanese statistician who worked in information theory (“studies the quantification, storage, and communication of information”).

He is known for the ‘Akaike Information Criterion’ (AIC), which he formulated in the 1970s. In post-war Japan, Akaike worked on problems that were unique to his country and that would help contribute to its reconstruction. Using fundamental concepts from information theory, he analysed the processing of “sericultural products, cement kiln controls, and thermal electric power plant controls”. In doing so, he was able to give a solution to the model selection problem: the Akaike Information Criterion.

AIC is widely used today as a guideline for the selection of statistical models in a variety of areas, such as medicine, engineering and economics, as well as in mathematics and statistics themselves.
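As a rough illustration of how AIC is used (this is my own toy example, not Akaike’s original application): AIC = 2k − 2 ln(L̂), where k is the number of fitted parameters and L̂ is the maximised likelihood, and the model with the lowest AIC is preferred. Below, data generated from a straight line is fitted with polynomials of increasing degree, assuming Gaussian errors.

```python
import numpy as np

def aic(k: int, log_likelihood: float) -> float:
    """Akaike Information Criterion: AIC = 2k - 2 ln(L_hat)."""
    return 2 * k - 2 * log_likelihood

def gaussian_log_likelihood(residuals: np.ndarray) -> float:
    """Maximised log-likelihood of a least-squares fit with Gaussian errors."""
    n = len(residuals)
    rss = np.sum(residuals ** 2)
    return -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)   # data from a straight line

for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    k = degree + 2   # polynomial coefficients plus the noise variance
    print(degree, round(aic(k, gaussian_log_likelihood(residuals)), 1))
# The lowest AIC should favour the simplest model that explains the data (degree 1).
```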

In 2006, Akaike was awarded the Kyoto Prize for “his major contribution to statistical science and modelling in the development of the Akaike information criterion”.

He passed away on August 4, 2009 at the age of 81.

M x

Seven Statistical Sins

Inspired by an article on phys.org, I decided to compile a list of seven statistical sins. Statistics is a vital tool for understanding the patterns in the world around us; however, our intuition often lets us down when it comes to interpreting those patterns.

1. Assuming small differences are meaningful

Examples of this include small fluctuations in the stock market, or polls where one party is ahead by one or two points. Such differences usually represent chance rather than anything meaningful.

To avoid drawing false conclusions from this statistical noise, we must consider the margin of error associated with the numbers. If the difference is smaller than the margin of error, it is likely not meaningful and is probably just random fluctuation.
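As a rough sketch (using the usual normal approximation, with made-up poll numbers), the 95% margin of error for a proportion p estimated from n respondents is about 1.96·√(p(1−p)/n):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
party_a, party_b = 0.51, 0.49              # party A "ahead" by two points
moe = margin_of_error(party_a, n)
print(f"margin of error: +/- {moe:.3f}")   # roughly +/- 0.031, i.e. about 3 points
print("meaningful lead?", (party_a - party_b) > 2 * moe)  # False: within the noise
```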

2. Equating statistical significance to real-world significance

A statistically significant difference between groups does not always translate into a meaningful real-world difference; take, for example, the stereotypes that women are more nurturing while men are physically stronger. Given a pile of data, if you were to pick two men at random there is likely to be quite a lot of difference in their physical strength; if you pick one man and one woman, they may end up being very similar in how nurturing they are, or the man may be more nurturing than the woman.

This error can be avoided by analysing the effect size of the differences between groups, which is a measure of how much the average of one group differs from the average of another. If the effect size is small, the two groups are very similar. Even if the effect size is large, each group will still show a lot of variation, so not every member of one group will differ from every member of the other (which is what gives rise to the error described above).
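One common effect-size measure is Cohen’s d: the difference in group means divided by the pooled standard deviation. Here is a small sketch with simulated (entirely made-up) data:

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * group_a.var(ddof=1) +
                  (nb - 1) * group_b.var(ddof=1)) / (na + nb - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
men = rng.normal(loc=100, scale=15, size=10_000)    # hypothetical strength scores
women = rng.normal(loc=90, scale=15, size=10_000)
print(round(cohens_d(men, women), 2))   # ~0.67: the group means differ, but the
                                        # distributions overlap heavily, so many
                                        # individual pairs go "the other way"
```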

3. Neglecting to look at the extremes

This is relevant when looking at normal distributions.

Normal distribution
Source: mathsisfun.com

In these cases, a small change in the group as a whole has little effect on the average person, but it changes the character of the extremes much more drastically. To avoid this error, we have to reflect on whether we are dealing with extreme cases or not. If we are, these small differences can radically affect the data.
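To make this concrete, here is a small sketch (illustrative numbers only): shifting a normal distribution’s mean by just a tenth of a standard deviation barely changes the typical value, yet increases the proportion beyond an extreme cutoff by roughly 40%.

```python
import math

def tail_prob(threshold: float, mean: float, sd: float) -> float:
    """P(X > threshold) for a normal distribution with the given mean and sd."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2)))

# Shift the group mean by a tenth of a standard deviation.
before = tail_prob(threshold=145, mean=100, sd=15)
after = tail_prob(threshold=145, mean=101.5, sd=15)
print(f"{before:.5f} -> {after:.5f} ({after / before:.1f}x more above the cutoff)")
# The average person barely moves, but the fraction beyond the extreme cutoff
# grows noticeably.
```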

4. Trusting coincidence

If we look hard enough, we can find patterns and correlations between the strangest things, which may be merely due to coincidence. So, when analysing data we have to ask ourselves how reliable the observed association is. Is it a one-off? Can future associations be predicted? If it has only been seen once, then it is probably only due to chance.
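As a quick demonstration (simulated data): if you compute every pairwise correlation among a large number of completely unrelated random series, some pairs will look strongly correlated purely by chance.

```python
import numpy as np

rng = np.random.default_rng(42)
series = rng.normal(size=(100, 20))   # 100 completely unrelated series, 20 points each
corr = np.corrcoef(series)            # all pairwise correlations
np.fill_diagonal(corr, 0)             # ignore each series' correlation with itself
print(f"strongest 'pattern' found: r = {np.abs(corr).max():.2f}")
# With enough comparisons, a strong-looking correlation appears purely by chance,
# and it will not hold up on fresh data.
```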

5. Getting causation backwards

When we find a correlation between two things, for example unemployment and mental health, it may be tempting to see a causal path in one direction: mental health problems lead to unemployment. However, sometimes the causal path goes in the other direction: unemployment leads to mental health problems.

To get the direction of the causal path correct, think about reverse causality when you see an association. Could it go in the other direction? Could it even go in both ways (called a feedback loop)?

6. Forgetting outside cases

Failure to consider a third factor that may create an association between two things can lead to an incorrect conclusion. For example, there may be an association between eating at restaurants and good cardiovascular health. However, this may be because those who can afford to eat at restaurants regularly are in a high socioeconomic bracket, which in turn means they can also afford better health care.

Therefore, it is crucial to think about possible third factors when you observe a correlation.
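A small simulation (made-up numbers) shows how this happens: here income is the hidden third factor driving both restaurant meals and heart health, and the two end up strongly correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(7)
income = rng.normal(size=5_000)                          # hidden third factor
restaurant_meals = 2 * income + rng.normal(size=5_000)   # driven by income
heart_health = 2 * income + rng.normal(size=5_000)       # also driven by income

r = np.corrcoef(restaurant_meals, heart_health)[0, 1]
print(f"correlation: {r:.2f}")   # strongly positive, yet neither causes the other
```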

7. Deceptive Graphs

A lot of deception can arise from the way that the axes are labelled on graphs (specifically the vertical axis). The labels should show a meaningful range for the data given; for example, by choosing a narrower range, a small difference is made to look much more impactful (and vice versa).

Source: phys.org
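Here is a minimal matplotlib sketch (illustrative numbers of my own) where the same two bars look nearly identical on a full axis and dramatically different on a truncated one:

```python
import matplotlib.pyplot as plt

categories = ["Group A", "Group B"]
values = [50.0, 51.5]                     # a difference of only 3%

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(categories, values)
ax1.set_ylim(0, 60)                       # honest axis: bars look nearly identical
ax1.set_title("Full axis")

ax2.bar(categories, values)
ax2.set_ylim(49.5, 52)                    # truncated axis: the gap looks dramatic
ax2.set_title("Truncated axis")

plt.tight_layout()
plt.show()
```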

In fact, check out this blog filled with bad graphs.

M x

 

Chemistry and Maths #1: Statistical Thermodynamics

Statistical mechanics is a branch of physics that uses probability theory to study the behaviour of a mechanical system whose state is uncertain. A common use of statistical mechanics is in the study of thermodynamic behaviour of large systems. Statistical thermodynamics “provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material“.

There are three main equilibrium ensembles of statistical mechanics, which can be defined for any isolated system of finite volume:

  • Microcanonical Ensemble – describes a completely isolated system with fixed energy and composition. This ensemble contains every possible state consistent with that energy and composition, each with equal probability.
  • Canonical Ensemble – describes a system in contact with a heat bath. This ensemble contains states of varying energy, but with identical composition. 
  • Grand Canonical Ensemble – describes a system in contact with a heat and particle bath. This ensemble contains states of varying energy and varying numbers of particles.

Microcanonical Ensemble

Fixed variables:

  • Total number of particles in the system, N.
  • System’s volume, V.
  • Total energy in the system, E.

Every microstate that has energy E has the same probability:

P=1/W,

where W is the number of microstates.

Entropy can be defined for this ensemble using the Boltzmann entropy formula:

S_{\mathrm{B}} = k\log W,

where k is Boltzmann’s constant.
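As a toy illustration (the value of W here is arbitrary, purely for the sake of example), the probabilities and the Boltzmann entropy follow directly:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

W = 10**6            # number of microstates with the fixed energy E (arbitrary)
P = 1 / W            # each microstate is equally likely
S = k_B * math.log(W)

print(f"P = {P:.1e}, S = {S:.3e} J/K")
```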

Canonical Ensemble

Fixed Variables:

  • Number of particles in the system, N.
  • Absolute temperature, T.
  • System’s volume, V.

In this ensemble, each microstate is assigned a probability, P, using the following formula:

P=e^{\frac {F-E}{kT}},

where k is Boltzmann’s constant.

The quantity F, known as the Helmholtz free energy, is a constant for the ensemble and is calculated by:

F=-kT\ln Z,

where Z is the canonical partition function (the sum of e^{-E/kT} over all microstates).
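Putting these formulas together for a toy two-level system (the energy values here are arbitrary), the probability P = e^{(F−E)/kT} is just the familiar Boltzmann weight e^{−E/kT}/Z:

```python
import numpy as np

k_B = 1.380649e-23                    # Boltzmann constant, J/K
T = 300.0                             # temperature, K
energies = np.array([0.0, 4.0e-21])   # toy two-level system, energies in J

Z = np.sum(np.exp(-energies / (k_B * T)))   # canonical partition function
F = -k_B * T * np.log(Z)                    # Helmholtz free energy
P = np.exp((F - energies) / (k_B * T))      # probability of each microstate

print(P, P.sum())   # probabilities from the formula above; they sum to 1
```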

Grand Canonical Ensemble

Fixed Variables:

  • Chemical potential, µ. This is a form of potential energy that can be absorbed or released during a chemical reaction.
  • Absolute temperature, T.
  • System’s Volume, V.

The probability, P, assigned to each distinct microstate is given by:

P=e^{\frac {\Omega +\mu N-E}{kT}},

where Ω is the ‘grand potential’.

The grand potential is a constant for this ensemble and can be calculated using the following equation:

\Omega =-kT\ln {\mathcal {Z}},

where \mathcal{Z} is the grand canonical partition function.
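The same recipe works for a toy grand canonical example (the values of µ and the site energy below are arbitrary): a single site that is either empty or holds one particle.

```python
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # temperature, K
mu = -1.0e-21                 # chemical potential, J (arbitrary)
eps = 2.0e-21                 # energy of an occupied site, J (arbitrary)

# Toy system: one site that is either empty (N=0, E=0) or occupied (N=1, E=eps).
N = np.array([0, 1])
E = np.array([0.0, eps])

Zg = np.sum(np.exp((mu * N - E) / (k_B * T)))   # grand partition function
Omega = -k_B * T * np.log(Zg)                   # grand potential
P = np.exp((Omega + mu * N - E) / (k_B * T))    # probability of each state

print(P, P.sum())   # probabilities sum to 1; the mean occupancy is P[1]
```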

Sources 1 | 2

I have another post on chemistry coming on Friday! Hope you enjoy. M x