## Elliptic Curves: The Basics

Working over the rationals (or, more precisely, any field of characteristic 0), an elliptic curve is a curve given by an equation of the form

y^2 = x^3 + Ax + B

such that the discriminant, which is 4A^3 + 27B^2 (up to a constant factor), is non-zero. Equivalently, the cubic on the right-hand side has distinct roots, ensuring that the curve is non-singular. Though we restrict our attention to these non-singular curves, we note that when the right-hand side is a cubic polynomial there are only two types of singular curve, corresponding to whether there is a double root (a node) or a triple root (a cusp).

### Point at Infinity

The point at infinity is an important point that always lies on an elliptic curve. For those who have studied algebraic geometry this is a familiar concept and comes from defining a projective closure of the equation defining the elliptic curve. However, informally it can be described as an idealised limiting point at the ‘end’ of each line.

Imagine a vertical straight line; it intersects the elliptic curve at most twice.

The point at infinity is the point at which the two ends of this vertical line ‘meet’.

The reason that elliptic curves are such amazing objects is that we can use geometry to make the points on the curve into a group. We can then use tools from algebraic number theory to study them.

### Making Points on an Elliptic Curve into a Group

This is done using the chord and tangent process:

We denote the point at infinity on an elliptic curve E over Q as OE. E meets each line in 3 points, counted with multiplicity. Given two points on E, say P and Q, let R be the third point of intersection of the line PQ with E. Then P ⊕ Q is the third point of intersection of the line OER (the vertical line through R) with E.

If P = Q we take the tangent at P instead of the line PQ.

Then E with the group law on points defined above, denoted by (E, ⊕), is an abelian group:

• The fact that ⊕ is abelian is clear by construction, since the line through P and Q does not depend on the order of the points
• Identity: OE – this is why the point at infinity is such an important point and exists on all elliptic curves.
• Inverses: Given a point P, let S be the point of intersection of the tangent at OE and E. Then let Q be the intersection of PS and E. Then the inverse of P is defined to be Q. Note that if OE is a point of inflection (point of multiplicity 3) then S = OE in the above.
• Associativity: This is much harder to prove. It can be done by identifying (E, ⊕) with a subgroup of the Picard Group, which is built from objects called divisors.

Divisors are a tool for keeping track of poles and zeroes. For example, suppose a function g has a zero of order 3 at a point P, a pole of order 2 at another point Q, and a pole of order 1 at O (note that the numbers of zeroes and poles, counted with multiplicity, are equal, as they must be for a function on a curve). Then using divisors, we can say all this concisely as follows:

div(g) = 3P − 2Q − O

More precisely, we can define a divisor D to be a ‘formal sum’ of points on E (meaning that we write a sum of points using a + symbol but no actual operation is defined), say

D = n_1·P_1 + n_2·P_2 + … + n_k·P_k, with each n_i an integer and each P_i a point of E.

Then the degree of a divisor is the sum of the coefficients: deg D = n_1 + … + n_k.

This set of divisors forms a group, Div(E), generated by the points on E. Immediately we can identify a subgroup of Div(E), namely the divisors of degree zero denoted Div0(E).
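
A toy Python sketch can make this bookkeeping concrete. Here a divisor is just a dictionary from point labels to integer coefficients (the representation and names are my own illustration, not standard notation):

```python
# Toy model of divisors: a 'formal sum' of points is a dict mapping
# point labels to integer coefficients.

def degree(divisor):
    """The degree of a divisor: the sum of its coefficients."""
    return sum(divisor.values())

def add_divisors(d1, d2):
    """The group law on Div(E): add coefficients pointwise."""
    result = dict(d1)
    for point, coeff in d2.items():
        result[point] = result.get(point, 0) + coeff
    return {p: c for p, c in result.items() if c != 0}

# div(g) = 3P - 2Q - O from the example above:
div_g = {"P": 3, "Q": -2, "O": -1}
print(degree(div_g))  # 0, so div(g) lies in Div0(E)
```

Since degrees add under this operation, sums of degree-zero divisors again have degree zero, which is why Div0(E) is a subgroup.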

We can also define an equivalence relation ~ on divisors: D1, D2 ∈ Div(E) are linearly equivalent, written D1 ~ D2, if there exists a function f such that div(f) = D1 − D2.

We can now introduce the Picard Group. It is defined by quotienting Div(E) by this equivalence relation:

Pic(E) = Div(E)/~

A subgroup of the Picard group is given by the classes of divisors of degree zero:

Pic0(E) = Div0(E)/~

We’re now ready to go back to talking about elliptic curves. The point of this discussion is that we know (Pic0(E), +) is a group which has the associative property. Furthermore, we can show that we have a bijection between (E, ⊕) and (Pic0(E), +) that preserves the group structure i.e. we have an isomorphism of groups. So, using this isomorphism we can identify the two groups and deduce that (E, ⊕) is also associative.

### Consequence

Say we started by looking at points defined over Q (denoted by E(Q)). A natural question to ask is: how do we know that sums and inverses of these points remain in E(Q)?

We defined the group law by looking at the intersections of lines and curves. So, working through the algebra, we can get explicit equations for the addition of points and inverses. For example if we have an elliptic curve E over Q and a point P = (x,y) in E(Q), then -P = (x, -y).
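
To make this concrete, here is a minimal Python sketch of the chord-and-tangent formulae for a curve y^2 = x^3 + Ax + B (the curve and points are my own example; no checking is done that inputs actually lie on the curve):

```python
from fractions import Fraction

# Chord-and-tangent addition on y^2 = x^3 + A*x + B over Q.
# Points are (x, y) tuples of Fractions; None represents the
# point at infinity O_E.

A, B = Fraction(0), Fraction(1)  # example curve: y^2 = x^3 + 1

def neg(P):
    """The inverse -P = (x, -y)."""
    if P is None:
        return None
    x, y = P
    return (x, -y)

def add(P, Q):
    """P + Q by the chord-and-tangent process."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return None                       # vertical line: P + (-P) = O_E
    if P == Q:
        lam = (3 * x1**2 + A) / (2 * y1)  # tangent slope at P
    else:
        lam = (y2 - y1) / (x2 - x1)       # chord slope through P and Q
    x3 = lam**2 - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

P = (Fraction(2), Fraction(3))   # 3^2 = 2^3 + 1, so P lies on E
two_P = add(P, P)                # tangent at P gives 2P = (0, 1)
three_P = add(P, two_P)          # chord through P and 2P gives 3P = (-1, 0)
print(two_P == (0, 1), three_P == (-1, 0))  # True True
```

Note that every coordinate stays a rational number: the formulae only ever add, multiply and divide the input coordinates, which is exactly the point made below.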

These explicit equations are useful because they tell us that the points do indeed remain defined over Q. More precisely, we find that (E(Q), ⊕) is a subgroup of (E, ⊕):

• The identity OE is in E(Q) by definition
• (E(Q), ⊕) is closed under addition and has inverses by the explicit formulae
• Associativity and commutativity are inherited from (E, ⊕).

Note: This in fact holds for any field K, not just Q, but we must be a bit more careful, as the elliptic curve may not be expressible in the nice form y^2 = x^3 + Ax + B, so the formulae are a bit messier. The reason why this is important is that we often want to consider elliptic curves over finite fields, something I will explore in future posts.

M x

## 2: Dedekind’s Criterion

In episode 1, I introduced the idea of prime ideals. Today we will extend this idea and prove a really important result in algebraic number theory: Dedekind’s Criterion.

We will use the following fact:

If P, contained in OL, is a non-zero prime ideal, then there is a unique prime number p such that p ∈ P.

For those who are more advanced, this is because the ideal generated by p, namely (p), is the kernel of the map Z → OL/P.

Then P | pOL and N(P) = p^f for some integer f > 0.

The proof of Dedekind’s Criterion uses a lot of Group Theory and therefore I will not prove it for you. However, it is a really useful tool in algebraic number theory and so I will state it and show how it can be used to factor ideals (remember that in episode 1 we showed that this factorisation is unique).

Before stating the theorem, let me define a few things:

• Let 𝛼 ∈ OL; then Z[𝛼] = { x + 𝛼y | x, y ∈ Z } (this description is valid when 𝛼 has degree 2, as in the example below).
• Let 𝐿/𝐾 be a field extension and let 𝛼 ∈ 𝐿 be algebraic over 𝐾 (i.e. there is a polynomial p with coefficients in 𝐾 such that p(𝛼) = 0). We call the minimal polynomial of 𝛼 over 𝐾 the monic polynomial 𝑓 with coefficients in K of least degree such that f(𝛼) = 0.
• Say we have a polynomial f(x) = a_n x^n + a_{n−1} x^{n−1} + … + a_1 x + a_0 with integer coefficients. Then its reduction mod p is the polynomial f̄(x) = ā_n x^n + … + ā_0, where ā_i ≡ a_i (mod p).
• In episode 1 we defined the degree of a field extension L/K. We denote this as [L:K].
• Z/pZ is the ring of integers mod p. For p prime, this is a finite field, which we usually denote as Fp.

Okay, now we’re ready for the theorem!

### Theorem: Dedekind’s Criterion

Let 𝛼 ∈ OL be such that 𝐿 = Q(𝛼). Let 𝑓(x), with integer coefficients, be its minimal polynomial and let 𝑝 be a prime integer such that 𝑝 does not divide the index [OL ∶ Z[𝛼]]. Let f̄(x) be its reduction mod p and factor it as

f̄(x) = ḡ1(x)^e1 · · · ḡr(x)^er,

where ḡ1(x), …, ḡr(x) ∈ F𝑝[x] are distinct monic irreducible polynomials. Let gi(x) ∈ Z[x] be any polynomial with gi(x) (mod 𝑝) = ḡi(x), and define

pi = (p, gi(𝛼)),

an ideal of OL. Let fi = deg ḡi(x).

Then p1, …, pr are distinct prime ideals of OL, with N(pi) = p^fi, and

pOL = p1^e1 · · · pr^er.
If you don’t quite understand the theorem, don’t worry! The first time I read this I was really confused as well. I think the more examples you see and the more you use it the easier it becomes to understand. Because of this, I will give you an example next.

### Example

Let L = Q(√−11) and p = 5. We will use the following result:

Let d ∈ Z be square-free and not equal to 0 or 1. Let L = Q(√d). Then OL = Z[√d] if d ≡ 2, 3 (mod 4), and OL = Z[(1+√d)/2] if d ≡ 1 (mod 4).

As −11 ≡ 1 (mod 4), OL = Z[(1+√−11)/2]. Then [OL : Z[√−11]] = 2, and 5 does not divide 2, so we can apply Dedekind’s criterion to 𝛼 = √−11 for p = 5. The minimal polynomial is f(x) = x^2 + 11, so f̄(x) = f(x) (mod 5) = x^2 + 1 = (x+2)(x+3) in F5[x].

Therefore, by Dedekind’s Criterion, 5OL = P·Q where

P = (5, √−11 + 2) and Q = (5, √ −11 + 3)

and P, Q are distinct prime ideals in OL. So we have found how 5 splits in Q(√−11).
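
We can sanity-check the polynomial arithmetic behind this example numerically. The short Python sketch below (my own check, not part of the theorem) finds the roots of f(x) = x^2 + 11 in F5 and confirms that (x+2)(x+3) agrees with f mod 5:

```python
# Reduce f(x) = x^2 + 11 mod p = 5, find its roots in F_5, and
# confirm (x+2)(x+3) matches f coefficientwise mod 5.
p = 5

roots = [x for x in range(p) if (x * x + 11) % p == 0]
print(roots)  # [2, 3]: f factors as (x - 2)(x - 3) = (x + 3)(x + 2) in F_5[x]

# (x + 2)(x + 3) = x^2 + 5x + 6 ≡ x^2 + 0x + 1 ≡ x^2 + 11 (mod 5)
lhs = [1, 5 % p, 6 % p]   # coefficients of (x+2)(x+3), reduced mod 5
rhs = [1, 0, 11 % p]      # coefficients of x^2 + 11, reduced mod 5
print(lhs == rhs)  # True
```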

In the next episode I will talk about Dirichlet’s Unit Theorem and then we will be ready to solve some problems in Number Theory!

M x

## 1: Unique Factorisation of Ideals

The next few posts will be me detailing some interesting results in the area of Maths that I hope to specialise in: Algebraic Number Theory. The first result will be the unique prime factorisation of ideals. But first, what is an ideal?

## Ideals

If you’re not familiar with the definition of a ring click here, as we’ll need this for the following discussion.

An ideal, I, is a subset of a ring R such that:

• It is closed under addition and has additive inverses (i.e. it is an additive subgroup of (R, +, 0));
• If a ∈ I and b ∈ R, then a · b ∈ I.

I is a proper ideal if I does not equal R.

## Ring of Integers

In order to prove the result, I need to introduce the concept of number fields and the ring of integers. A number field is a finite field extension over the rationals.

Field Extension: a field extension F of a field E (written F/E) is a field F such that the operations of E are those of F restricted to E, i.e. E is a subfield of F.

Given such a field extension, F is a vector space over E, and the dimension of this vector space is called the degree of the extension. If this degree is finite, we have a finite field extension.

So if F is a number field, E would be the rationals.

Suppose F is a number field. An algebraic integer is simply a fancy name for an element a of F for which there exists a monic polynomial f with integer coefficients (monic meaning the coefficient of the largest power of x is 1) such that f(a) = 0.

The algebraic integers in a number field F form a ring, called the ring of integers, which we denote as OF. It turns out that the ring of integers is very important in the study of number fields.

## Prime Ideals

If P is a prime ideal in a ring R, then for all x, y in R, if xy ∈ P, then x ∈ P or y ∈ P. As OF is a ring, we can consider prime ideals in OF.

## Division = Containment

We want to try to deal with ideals in the same way we deal with numbers, as ideals are easier to work with (ideals are a sort of abstraction of the concept of numbers). After formalising what it means to be an ideal and proving certain properties of ideals, we can prove that, given two ideals I and J, I dividing J (written I|J) is equivalent to I containing J.

## Three Key Results

Now, there are three results that we will need in order to prove the prime factorisation of ideals that I will simply state:

1. All prime ideals P in OF are maximal (in other words, there are no ideals strictly between P and OF). Furthermore, the converse also holds: all maximal ideals in OF are prime.
2. Analogously to numbers, if I, J are ideals in OF with J|I, there exists an ideal K such that I = JK.
3. For a prime ideal P and ideals I, J of OF, P | IJ implies P | I or P | J.

## Main Theorem

Theorem: Let I be a non-zero ideal in OF . Then I can be written uniquely as a product of prime ideals.

Proof:  There are two things we have to prove: the existence of such a factorisation and then its uniqueness.

Existence: If I is prime then we are done, so suppose it isn’t. Then it is not maximal (by 1), so there is some proper ideal J properly containing I. So J|I, and (by 2) there is an ideal K such that I = JK. We can continue factoring this way, and the process must stop eventually (for the curious, I make the technical note that it must stop because OF is Noetherian, so there is no infinite chain of strictly ascending ideals).

Uniqueness: If P1 · · · Pr = Q1 · · · Qs, with Pi , Qj prime, then we know P1 | Q1 · · · Qs, which implies P1 | Qi for some i (by 3), and without loss of generality i = 1. So Q1 is contained in P1. But Q1 is prime and hence maximal (by 1). So P1 = Q1. Simplifying we get P2 · · · Pr = Q2 · · · Qs. Repeating this we get r = s and Pi = Qi for all i (after renumbering if necessary).

## Why is this important?

For numbers, we only get unique prime factorisation in what is called a unique factorisation domain (UFD); the integers are the classic example. However, the integers mod 10 do not form a UFD because, for example, 2 · 2 = 4 = 7 · 2 (mod 10), giving two genuinely different factorisations of 4.
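
A quick brute-force search in Python illustrates this failure (my own check): listing all pairs of residues whose product is 4 mod 10 shows several genuinely different factorisations.

```python
# Find every unordered pair (a, b) of residues mod 10 with a*b ≡ 4 (mod 10).
pairs = [(a, b) for a in range(10) for b in range(a, 10) if (a * b) % 10 == 4]
print(pairs)  # [(1, 4), (2, 2), (2, 7), (3, 8), (4, 6), (6, 9), (8, 8)]
```

Both (2, 2) and (2, 7) appear, matching the example above.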

However, we have the unique prime factorisation of ideals in the ring of algebraic integers of any number field. This means that we can prove many cool results by using this unique prime factorisation, which we can then translate into results about numbers in that number field. I will detail some of these in future blog posts.

M x

## Influential Mathematicians: Gauss (2)

Read the first part of this series here.

Although Gauss made contributions in many fields of mathematics, number theory was his favourite. He said that

“mathematics is the queen of the sciences, and the theory of numbers is the queen of mathematics.”

A way in which Gauss revolutionised number theory was his work with complex numbers.

Gauss gave the first clear exposition of complex numbers and of the investigation of functions of complex variables. Although imaginary numbers had been used since the 16th century to solve equations that couldn’t be solved any other way, and although Euler made huge progress in this field in the 18th century, there was still no clear idea as to how imaginary numbers were connected with real numbers until the early 19th century. Gauss was not the first to picture complex numbers graphically (Jean-Robert Argand produced the Argand diagram in 1806). However, Gauss was the one who popularised this idea and introduced the standard notation a + bi. Hence, the study of complex numbers received a great expansion, allowing its full potential to be unleashed.

Furthermore, at the age of 22 he proved the Fundamental Theorem of Algebra which states:

Every non-constant single-variable polynomial over the complex numbers has at least one complex root.

This shows that the field of complex numbers is algebraically closed, unlike the real numbers.

Gauss also had a strong interest in astronomy, and was the Director of the astronomical observatory in Göttingen. When Ceres was lost shortly after its discovery in 1801, Gauss made a prediction of its position. This prediction was very different from those of other astronomers, but when Ceres was relocated at the end of 1801, it was almost exactly where Gauss had predicted. This was one of the first applications of the least squares approximation method, and Gauss claimed to have done the logarithmic calculations in his head.

Part 3 coming next week!

M x

## F.T.A. via Complex Analysis

Although this requires a bit of knowledge of Complex Analysis, I recently discovered this new way to prove the Fundamental Theorem of Algebra and I couldn’t help but share it.

First of all, what is the Fundamental Theorem of Algebra (FTA)? This very important (hence the name!) result states that:

Every non-constant polynomial with complex coefficients has a complex root.

In order to prove this, we must first be aware of Liouville’s Theorem:

Every bounded, entire function is constant.

Definitions

Bounded: a function f on a set X is said to be bounded if there exists a real number M such that

|f(x)| ≤ M

for all x in X.

Entire: An entire function is a holomorphic function on the entire complex plane.

Liouville’s theorem is proved using the Cauchy integral formula for a disc, one of the most important results in Complex Analysis. Although I will not describe how to prove it or what it states in this blog post, I encourage you to read about it, as it is truly a remarkable result.

Now armed with Liouville’s Theorem we can prove the FTA.

### Proof

Let P(z) = z^n + c_{n−1}z^{n−1} + … + c_1 z + c_0 be a polynomial of degree n > 0. Then |P(z)| → ∞ as |z| → ∞, so there exists R such that |P(z)| > 1 for all z with |z| > R.

Consider f(z) = 1/P(z). If P has no complex zeros then f is entire. So, as f is continuous, f is bounded on {|z| ≤ R}.

As |f(z)| < 1 when |z| > R, f is a bounded entire function, so by Liouville’s Theorem f is constant, which is a contradiction.

The only thing we assumed was that P had no complex zeros, and so we have contradicted this assumption. Hence, P must have at least one complex zero. Amazing, right?
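
The proof is non-constructive: it tells us a root exists without finding one. Numerically, though, roots are easy to approximate. Here is a minimal Newton iteration in Python (my own sketch, with a hand-picked polynomial and starting point) applied to P(z) = z^2 + 1, which has no real roots:

```python
# Newton's method over the complex numbers: z -> z - P(z)/P'(z).
# Starting from a non-real guess, it converges to the root i of z^2 + 1.
def newton(f, df, z, steps=50):
    for _ in range(steps):
        z = z - f(z) / df(z)
    return z

root = newton(lambda z: z * z + 1, lambda z: 2 * z, 0.5 + 0.5j)
print(abs(root - 1j) < 1e-9)  # True: the complex root guaranteed by the FTA
```

Note the starting point must be non-real here: Newton's method applied to z^2 + 1 from a real guess stays on the real line forever, precisely because the roots are not real.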

M x

## Boolean Algebra

Today I thought I would give you a short introduction on Boolean Algebra.

Boolean Algebra was named after the English mathematician, George Boole (1815 – 1864), who established a system of logic which is used in computers today. In Boolean algebra there are only two possible outcomes: either 1 or 0.

It must be noted that Boolean numbers are not the same as binary numbers: Boolean values represent a different system of mathematics from real numbers, whereas binary is simply an alternative notation for real numbers.

### Operations

Boolean logic statements can only ever be true or false, and the words ‘AND’, ‘OR’ and ‘NOT’ are used to string these statements together.

OR can be rewritten as a kind of addition:

0 + 0 = 0 (since “false OR false” is false)
1 + 0 = 0 + 1 = 1 (since “true OR false” and “false OR true” are both true)
1 + 1 = 1 (since “true OR true” is true)

OR is denoted by the symbol ∨ (so ‘A OR B’ is written A ∨ B).

AND can be rewritten as a kind of multiplication:

0 x 1 = 1 x 0 = 0 (since “false AND true” and “true AND false” are both false)
0 x 0 = 0 (since “false AND false” is false)
1 x 1 = 1 (since “true AND true” is true)

AND is denoted by the symbol ∧ (so ‘A AND B’ is written A ∧ B).

NOT can be defined as the complement:

If A = 1, then NOT A = 0
If A = 0, then NOT A = 1
A + NOT A = 1 (since “true OR false” is true)
A x NOT A = 0 (since “true AND false” is false)

This is denoted by ¬A, or by a prime: A′.

Expressions in Boolean algebra can be easily simplified. For example, the variable B in A + A x B is irrelevant as, no matter what value B has, if A is true, then A OR (A AND B) is true. Hence:

A + A x B = A

Furthermore, in Boolean algebra there is a sort of reverse duality between addition and multiplication, depicted by de Morgan’s Laws:

(A + B)′ = A′ x B′ and (A x B)′ = A′ + B′
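
Because each variable only takes the values 0 and 1, both the absorption law and de Morgan’s Laws can be checked exhaustively. A short Python check (my own, modelling OR as max, AND as min and NOT as 1 − x):

```python
from itertools import product

# Exhaustive truth-table check of A + A*B = A and both de Morgan laws.
def OR(a, b):  return max(a, b)
def AND(a, b): return min(a, b)
def NOT(a):    return 1 - a

for a, b in product([0, 1], repeat=2):
    assert OR(a, AND(a, b)) == a                 # absorption: A + A*B = A
    assert NOT(OR(a, b)) == AND(NOT(a), NOT(b))  # (A + B)' = A' * B'
    assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))  # (A * B)' = A' + B'
print("all identities hold")
```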

### Uses

In 1938, Shannon proved that Boolean algebra (using the two values 1 and 0) can describe the operation of a two-valued electrical switching circuit. Thus, in modern times, Boolean algebra is indispensable in the design of computer chips and integrated circuits.


## Roots of Unity

The nth Roots of Unity appear when we consider the complex roots of an equation of the form:

$z^n = 1.$

### Solving the Equation

As we have an nth degree polynomial, we will have n complex roots. By converting this to the polar form (by letting $z = re^{i\theta}$ and noting that $1 = e^{2\pi ik}$ for $k\in \mathbb{Z}$), we get the expression:

$r^ne^{ni\theta} = e^{2\pi ik}$

As the magnitude of the right hand side is 1, we can deduce that r = 1, leaving us with $e^{ni\theta} = e^{2\pi ik}$. Quick algebraic manipulation gives us:

$\theta=\frac{2\pi k}n$

Hence, we can conclude that the solutions of the polynomial are given by $z = e^{2\pi ik/n}$ for $k = 0, 1, \ldots, n-1$, which can be converted to trigonometric form using Euler’s formula:

$z = \cos\left(\frac{2\pi k}{n}\right) + i\sin\left(\frac{2\pi k}{n}\right)$

### Geometry

All roots of unity lie on the unit circle in the complex plane, as all roots have a magnitude of 1.

Additionally, if the nth roots of unity are connected in order, they form a regular n-sided polygon. This can easily be seen by analysing the arguments of the roots.

### Properties

• The sum of the nth roots of unity is 0.
• If $\zeta$ is a primitive nth root of unity, then the roots of unity can be expressed as $1, \zeta, \zeta^2,\ldots,\zeta^{n-1}$.
• A primitive nth root of unity is such that $\zeta^m\neq 1$ for $1\le m\le n-1$.
• This sequence of powers is periodic with period n, because $\zeta^{j+n} = \zeta^j\zeta^n = \zeta^j\cdot 1 = \zeta^j$ for all values of j.
• For each nth root of unity, $\zeta$, we have that $\zeta^n=1$. Although obvious, this property should not be forgotten as, for example, it can aid with algebraic manipulation.
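
These properties are easy to check numerically. The following Python sketch (my own, using the standard cmath module) computes the nth roots of unity directly from the formula $e^{2\pi ik/n}$ and verifies the properties above:

```python
import cmath

# The nth roots of unity: e^{2*pi*i*k/n} for k = 0, ..., n-1.
n = 5
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

print(all(abs(abs(z) - 1) < 1e-12 for z in roots))  # True: all on the unit circle
print(abs(sum(roots)) < 1e-12)                      # True: the roots sum to 0
print(all(abs(z**n - 1) < 1e-9 for z in roots))     # True: each satisfies z^n = 1
```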

Pretty huh? M x

## Forgotten Mathematicians: Indian Maths

I decided to continue with my ‘Forgotten Mathematicians’ series with Indian mathematics.

Mathematics owes a huge debt to the extraordinary contributions made by Indian mathematicians over many hundreds of years; however, there has been a reluctance to recognise this.

Vedic Period (between 1500 BC and 800 BC)

The earliest expression of mathematical understanding is linked with the origin of Hinduism, as mathematics forms an important part of the Sulbasutras (appendices of the Vedas, the original Hindu scriptures). They contained geometrical knowledge showing a development in mathematics, although it was purely for practical religious purposes. Additionally, there is evidence of the use of arithmetic operations including squares, cubes and roots.

The Sulbasutras were composed by Baudhayana (around 800 BC), Manava (about 750 BC), Apastamba (about 600 BC) and Katyayana (about 200 BC).

Before the end of this period – around the middle of the 3rd century BC – the Brahmi numerals began to appear. Indian mathematicians refined and perfected the numeral system, particularly with the representation of numerals and, thanks to its dissemination by medieval Arabic mathematicians, they developed into the numerals we use today.

Jaina Mathematics

Jainism was a religion and philosophy which was founded in India around the 6th century BC. The main topics of Jaina mathematics in around 150 BC were the theory of numbers, arithmetical operations, operations with fractions, simple equations, cubic equations, quartic equations, and permutations and combinations.

Furthermore, Jaina mathematicians, such as Yativrsabha, recognised five different types of infinities: infinity in one direction, two directions, in area, infinite everywhere and perpetually infinite.

Astronomy

Mathematical advances were often driven by the study of astronomy as it was the science at that time that required accurate information about the planets and other heavenly bodies.

Yavanesvara (2nd century AD) is credited with translating a Greek astrology text dating from 120 BC. In doing so, he adapted the text to make it work into Indian culture using Hindu images with the Indian caste system integrated into his text, thus popularising astrology in India.

Aryabhata was also an important mathematician. His work was a summary of Jaina mathematics as well as the beginning of the new era for astronomy and mathematics. He headed a research centre for mathematics and astronomy where he set the agenda for research in these areas for many centuries to come.

Brahmagupta (beginning of 7th century AD)

Brahmagupta made major contributions to the development of the number system with his remarkable work on negative numbers and zero. The use of zero as a number which could be used in calculations and mathematical investigations would revolutionise mathematics. He established the mathematical rules for using the number zero (except for division by zero), as well as establishing negative numbers and the rules for dealing with them, another huge conceptual leap which had profound consequences for future mathematics.

As well as this, he established the formula for the sum of the squares of the first n natural numbers as n(n+1)(2n+1)/6, and the sum of the cubes of the first n natural numbers as (n(n+1)/2)^2.

He even wrote down his concepts using the initials of the names of colours to represent unknowns in his equations. This is one of the earliest intimations of what we now know as algebra.

Additionally, he worked on solutions to general linear equations and quadratic equations, and even considered systems of simultaneous equations and quadratic equations with two unknowns, something which was not considered in the West until a thousand years later, when Fermat was studying similar problems in 1657. Furthermore, he dedicated a substantial portion of his work to geometry. His biggest achievements in this area were the formula for the area of a cyclic quadrilateral, now known as Brahmagupta’s Formula, and a celebrated theorem on the diagonals of a cyclic quadrilateral, usually referred to as Brahmagupta’s Theorem.

‘Golden Age’ (from 5th to 12th centuries)

In this period, fundamental advances were made in the theory of trigonometry. Indian mathematicians utilised sine, cosine and tangent functions to survey the land around them, navigate the seas and chart the skies. For example, Indian astronomers used trigonometry to calculate the relative distances between the Earth and the Moon and between the Earth and the Sun. They realised that when the Moon is half full and directly opposite the Sun, then the Sun, Moon and Earth form a right-angled triangle. By accurately measuring the angle, and using their sine tables, which gave a ratio for the sides of such a triangle of 400:1, they showed that the Sun is 400 times further away from the Earth than the Moon.

Bhaskara II lived in the 12th century and is considered one of the most accomplished of India’s mathematicians. He is credited with explaining that division by zero (a previously misunderstood calculation) yields infinity.

He also made important contributions to many different areas of mathematics including solutions of quadratic, cubic and quartic equations, solutions of Diophantine equations of the second order, mathematical analysis and spherical trigonometry. Some of his discoveries predate similar ones made in Europe by several centuries, and he made important contributions in terms of the systemisation of knowledge and improved methods for known solutions.

The Kerala School of Astronomy and Mathematics was founded in the late 14th century by Madhava of Sangamagrama. Madhava also developed an infinite series approximation for π. He did this by realising that, by successively adding and subtracting different odd-number fractions to infinity, he could establish an exact formula for π, a conclusion reached by Leibniz in Europe two centuries later. Applying this series, Madhava obtained a value for π correct to 13 decimal places! Using this mathematics, he went on to obtain infinite series expressions for sine, cosine, tangent and arctangent. Arguably more remarkable, though, was the fact that he gave estimates of the correction term, implying that he had an understanding of the limit nature of the infinite series.
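
The π series mentioned above is easy to experiment with. A small Python sketch (my own; it uses the plain alternating series without Madhava's correction terms, which is why convergence is so slow):

```python
from math import pi

# Madhava's series (rediscovered by Leibniz):
# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
def madhava_pi(terms):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(abs(madhava_pi(100_000) - pi) < 1e-4)  # True, but only after 100,000 terms
```

The slow convergence is exactly why Madhava's correction terms were such an important insight: they squeeze far more accuracy out of far fewer terms.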

In addition, he made contributions to geometry and algebra and laid the foundations for later development of calculus and analysis, such as the differentiation and integration for simple functions. It is argued that these may have been transmitted to Europe via Jesuit missionaries, making it possible that the later European development of calculus was influenced by his work to some extent.

In astronomy, Madhava discovered a procedure to determine the positions of the Moon every 36 minutes and methods to estimate the motions of the planets.

I have only included some of the earlier Indian mathematicians, missing out magnificent mathematicians such as Ramanujan, as I feel that these are the most forgotten. To find out more about other Indian mathematicians, this may be a good starting point.

Hope you enjoyed this post; I was thinking of doing Chinese mathematicians next. Let me know what you think! M x

## Modern Mathematicians: Alain Connes

Alain Connes was born in Draguignan, France, in 1947. He entered the École Normale Supérieure, one of the leading universities in Paris, in 1966 and graduated in 1970. American mathematician Robert Moore described his thesis, on the classification of type III factors of operator algebras (in particular of von Neumann algebras), as:

“a major, stunning breakthrough in the classification problem.”

Connes has received many awards for his work, including the:

• Prix Aimeé Berthé (1975)
• Prix Pecot-Vimont (1976)
• Gold Medal of the Centre National de la Recherche Scientifique (1977)
• Prix Ampère from the Académie des Sciences in Paris (1980)
• Prix de Electricité de France (1981)

However, Connes’ most notable achievement was being awarded the Fields Medal in 1982 (the ceremony was in 1983) for his work on operator theory and, in particular, as described by Japanese mathematician Huzihiro Araki, his:

(1) general classification and a structure theorem for factors of type III, obtained in his thesis;

(2) classification of automorphisms of the hyperfinite factor, which served as a preparation for the next contribution;

(3) classification of injective factors;

(4) application of the theory of C*-algebras to foliations and differential geometry in general.

The study of von Neumann algebras began in the 1930s, when their factors were first classified. In the late 1960s there was a resurgence of interest in this topic.

Connes unified a number of ideas in the area that had been previously considered disparate. He also worked on some applications of operator algebras, for example their application to differential geometry. Additionally, his application of operator theory to noncommutative geometry produced new geometries. Furthermore, his later work has had meaningful impact in ergodic theory, which is the study of systems whose final state is independent of their initial state.