New blogpost on the isogeny-based talks at Asiacrypt 2020! Just short descriptions of each paper for those who couldn’t attend or those interested in the most current research in isogeny-based cryptography.

Hope you enjoy! M x

# Tag: math

## Asiacrypt 2020

## SIDH

## Vélu’s Formulas for SIDH

## Post Quantum Cryptography – The Basics

## Code-Based Cryptography

## Lattice-Based Cryptography

## Hash-based Cryptography

## Multivariate Cryptography

## What about Elliptic-Curve Cryptography?

## Is the COVID-19 Tracing App secure?

### Centralised Systems

### Decentralised Systems

### In context of COVID-19 Tracing Apps

## UK App – Apple and Google are involved?

## 4: An Example

**Determine all the integer solutions of the equation x^{2} − 3y^{2} = m for m = −1 and m = 13.**

## m = -1

## m = 13

## 3: Dirichlet’s Unit Theorem

## Dirichlet’s Unit Theorem

## Why is this useful?

## Finding the Fundamental Unit

#### Example: d = 2

## 2: Dedekind’s Criterion

### Theorem: Dedekind’s Criterion

### Example

## 1: Unique Factorisation of Ideals

## Ideals

## Ring of Integers

## Prime Ideals

## Division = Containment

## Three Key Results

## Main Theorem

## Why is this important?

## Towers of Hanoi

## The Problem

## Obtaining a Recurrence

## Simplifying the Recurrence


New blogpost on SIDH, a key-exchange protocol based on supersingular isogenies.

Hope you enjoy! M x

A new post is up on my website on Vélu’s formulas for Supersingular Isogeny Diffie-Hellman! Click here to read it.

Hope you enjoy! – M

Post-quantum cryptography aims to find schemes that can withstand an attack from a quantum computer. This is becoming an increasingly important field of research as the development of quantum computers intensifies, and current cryptographic schemes relying on the difficulty of factorisation can be broken in polynomial time by quantum computers. So researchers have tried to find other ‘hard’ problems in mathematics (ones that take exponential time to solve) that can be exploited to create classical crypto schemes that withstand attacks from quantum computers (at least for now).

In general, post-quantum schemes require more resources than traditional cryptography, in particular elliptic curve cryptography; security against quantum computer attacks comes at a cost. There are four different families of post-quantum schemes that are considered to be the best candidates, and in this blog post I will briefly discuss each of them.

Traditionally, error-correction codes are used to detect and correct bit errors when messages are transmitted over an unreliable channel. The code can be chosen to fulfil the requirements of the channel; in particular the number *t* of correctable bit errors can be determined.

Decoding arbitrary codes is computationally hard and can be infeasible depending on the code parameters. However, there are specific codes for which efficient decoding algorithms are known. So in practice only such codes are used that have efficient decoding algorithms.

The main security assumption of code-based cryptography is the difficulty of decoding a random linear code. Even with quantum computers, only exponential-time algorithms are known.

The first code-based public-key cryptosystem was proposed by McEliece in 1978 and has not been fundamentally broken since. However, its main problem is the size of the keys: it requires keys of about 4MB in order to achieve post-quantum security. Niederreiter later introduced a scheme that improved the encryption and decryption cost and required smaller keys.

There have been many attempts to reduce key sizes by using different codes that have some redundant structure in the public key. But this structure has led to efficient classical attacks on the crypto-systems.

There are also signature schemes, hash functions and random number generators based on code-based cryptography.

The underlying ‘hard’ problem is the shortest vector problem (SVP): given a basis of a lattice *L* in a vector space *V* with norm *N*, one must find the shortest non-zero vector in *L*, as measured by *N*. It is believed that the SVP is computationally hard even for quantum computers.

NTRUEncrypt is a commercial public key encryption scheme based on lattices. Public key sizes of about 1.5kB to 2kB are required in order to achieve the desired security. Note that these are the requirements based on the algorithms that have (up to today) been developed to solve the SVP. These restrictions are called ‘parameters’.

Closely related is the signature scheme NTRUSign. However, the security of this is not yet well understood and so there are no recommendations for these signature schemes for post-quantum security.

There are also key exchange protocols built on lattice-based cryptography, such as NewHope, which has been experimentally adopted by Google. Unlike the Diffie-Hellman key exchange, NewHope is not symmetric and needs two rounds of agreement. Also, it does not include authentication, so this needs to be achieved by other means.

By switching to post-quantum key exchange now, an attacker in the future does not learn encryption keys even if he breaks long-term authentication keys. Therefore, combining a post-quantum key exchange with a classical authentication scheme provides a cost-effective, long-term secure authenticated key exchange for the interim period until all cryptographic primitives have been transitioned to post-quantum secure schemes.

Hash functions are one-way functions that map bit strings of an arbitrary length to relatively short, fixed-length bit strings called hash values. There are three required properties:

- **Pre-image resistance:** it must be hard to compute a pre-image of a hash value, i.e. a bit string that once hashed results in a given hash value.
- **Second pre-image resistance:** given a bit string, it must be hard to find a different bit string that has the same hash value.
- **Collision resistance:** it must be hard to find two arbitrary bit strings that have the same hash value.
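As a quick illustration of the fixed-length property, using Python's standard `hashlib` (SHA-256 is just one example of such a function, not one prescribed by the text):

```python
import hashlib

# SHA-256 maps inputs of any length to a 256-bit digest; a one-character
# change in the input yields a completely different hash value.
h1 = hashlib.sha256(b"post-quantum").hexdigest()
h2 = hashlib.sha256(b"post-quantum!").hexdigest()

print(len(h1) * 4)  # 256: digest length in bits, regardless of input length
print(h1 == h2)     # False: the two digests are unrelated
```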

Since by definition it is not computationally feasible to compute the inverse of a hash function, it is not known how to construct public-key encryption schemes using hash functions. However, it *is* possible to construct signature schemes, where in general the required signature size increases if more signatures for the same public key are needed.

Hash-based signature schemes have been around for a long time and so the estimates of the required parameters to achieve post-quantum security are very reliable.

This is based on the difficulty of the MQ-problem: solving multivariate quadratic systems of equations over finite fields is NP-hard (there is no known efficient algorithm for solving random multivariate polynomial systems). The difficulty depends on the size of the underlying field, the number of variables and the degree of the system. If the number of equations and variables is sufficiently large, then even quadratic equations over the field with two elements are hard to solve.
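To make this concrete, here is a toy, made-up MQ instance over **F**_{2} in three variables, solved by brute force in Python; real parameter sizes make this exhaustive search infeasible:

```python
from itertools import product

# A small quadratic system over F_2 (a hypothetical example, not from any
# real scheme):
#   x1*x2 + x3 = 0
#   x1 + x2    = 0
#   x2*x3 + x1 = 0
def system(x1, x2, x3):
    return (
        (x1 * x2 + x3) % 2,
        (x1 + x2) % 2,
        (x2 * x3 + x1) % 2,
    )

# Exhaustive search over all 2^3 assignments; real instances use far more
# variables, so the 2^n search space becomes astronomically large.
solutions = [v for v in product((0, 1), repeat=3) if system(*v) == (0, 0, 0)]
print(solutions)  # [(0, 0, 0), (1, 1, 1)]
```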

For constructing an asymmetric public-key scheme, the public key itself is a set of multivariate quadratic polynomials, and the private key is often the knowledge of a trapdoor that allows one to efficiently solve the multivariate system. The construction of the scheme allows the use of systems with more variables than equations, so there may be more than one solution. Any of these solutions is a valid signature.

The confidence in multivariate systems is quite high; however, a relatively large public key is required. One approach to reducing key sizes is to use fewer variables over larger fields, but the security of such systems is not well understood. The second approach is to use sparse systems in order to compress the public key, but this often leads to a loss in security.

For public-key encryption, we need the resulting equation system to have only **one solution**, and so we must have an overdetermined system, i.e. the public key must have more polynomials than variables. Currently there are not many multivariate public-key encryption schemes that are considered secure.

Multivariate cryptography can also be used for pseudo-random number generators, cryptographic hash functions and symmetric encryption, e.g. the symmetric block cipher QUAD.

I have talked about elliptic curve cryptography in previous blog posts, so why isn’t this mentioned?

Due to the novelty of these cryptographic schemes, there is not great confidence in them yet. So they are not yet generally considered candidates for post-quantum public-key encryption.

This is the first instalment in my series about post-quantum cryptography. Anything you want me to talk about in particular? M x

There has been a lot of talk about the COVID-19 tracing app that has recently been released. The simple premise is that the app will use Bluetooth signals to log when smartphone owners are close to each other, so if someone develops COVID-19 symptoms, an alert can be sent to other users they may have infected. However, the countries and organisations developing the apps have opted for two different types of systems to achieve this: centralised systems or decentralised systems. First, I will explain what these are in general.

Centralised systems rely on a single server as a core distribution centre in the communication with end users. From a functional perspective they often imply that each entity involved in the software stream has a specific, distinct role. For example, when streaming a movie on Netflix, Netflix’s central server provides the video; the user consumes it. However, this also means that if the central server is compromised, the entire network is too.

Decentralised systems eliminate the need for a central entity and rely on multiple servers or end-user devices to cooperate and undertake a given task. Each of these entities is tasked with a similar role. These systems are not a novel concept; they have been implemented in a vast array of settings, such as blockchain technology. Most privacy-preserving decentralised systems focus on ensuring either the confidentiality and anonymity of personal data (such as Tor) or users’ control over their own personal data (such as the MIT project ‘Enigma’).

Centralised contact-tracing apps store a user’s Bluetooth ID in a central database as well as the Bluetooth IDs of phones they have come into contact with (and any other data that’s being collected), enabling better visibility of data by health services or governments.

Decentralised contact-tracing apps do not store a user’s Bluetooth ID (or any other data being collected) in a central database: contact tracing is done at the phone level, without sharing the information centrally. This allows for stronger data privacy.

There is a key trade-off between these two approaches: do we want to maximise data availability to health services or do we want to protect privacy of individuals? Furthermore, should it be up to individual governments or companies to make this choice? This is especially important as these two systems don’t interact well with each other, meaning that the fact that various countries are opting for different systems may prove disastrous for allowing tracing across borders and so quarantine rules may have to stay in place even with the introduction of the app.

NHS officials initially decided to build a centralised system, but the NHS could not get the same accuracy as if it was using Apple and Google’s software. The second version of England’s app uses Apple and Google’s technology and opts for the decentralised approach.

The ecosystems of Apple and Google are fundamental to the NHS app. The app uses the Apple and Google system to conduct the Bluetooth contact tracing. This is achieved through an API and requires recent versions of the Android and iOS operating systems to run. The companies have favoured the method that puts people’s privacy at the forefront in order to stop the technology from being used maliciously.

What are your thoughts on the COVID-19 tracing app? M x

In this blog post I will show you how you can use the tools developed in parts 1, 2 and 3 to solve a problem in number theory.

Noting that x^{2} − 3y^{2} = (x − √3y)(x + √3y), first we find the fundamental unit, *e = a + b√3*, of the number field Q(√3):

For b = 1, 3b^{2} + 1 = 4 = 2^{2}, and so a = 2. Hence the fundamental unit is *e = 2 + √3*.
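This search for the least b can be checked mechanically. A minimal Python sketch of the computation above (the loop structure is my own):

```python
from math import isqrt

# For d = 3, find the least positive b with 3*b^2 + 1 or 3*b^2 - 1 a
# perfect square a^2, then check the norm N(a + b*sqrt(3)) = a^2 - 3*b^2.
d = 3
b = 1
while True:
    found = [isqrt(n) for n in (d * b * b + 1, d * b * b - 1) if isqrt(n) ** 2 == n]
    if found:
        a = found[0]
        break
    b += 1

print(a, b)               # 2 1: the fundamental unit is 2 + sqrt(3)
print(a * a - d * b * b)  # 1: so N(e) = 1
```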

First we note that the **norm** of an element in Q(√d) is:

**N(a + b√d) = a ^{2} − db^{2}**

We will use the following three facts:

- N(x + √3y) = x^{2} − 3y^{2} = m.
- If *u* is a unit, then N(*ua*) = N(*u*)N(*a*) = ±N(*a*) for any *a* in Q(√*d*), since the norm is multiplicative and N(*u*) = ±1.
- In the last post, we saw that the units in the ring of algebraic integers of L are of the form ±*e*^{n}, where *e* is the fundamental unit and *n* is an integer.

Thus, multiplying x + √3y by a unit is equal to multiplying x + √3y by a power of the fundamental unit *e*.

So how do we solve this equation? First, by fact 1, we have to find an element of Q(√3) that has norm +*m* or *−m* (there will be one such element for each, up to multiplication by a unit). Using facts 2 and 3, we deduce that all the solutions to the equation are given by **(x, y)** such that:

- **N(e) = −1, N(x + √3y) = −m:** solutions **e**^{2n+1}(x + √3y), for *n* a natural number.
- **N(e) = −1, N(x + √3y) = m:** solutions **e**^{2n}(x + √3y), for *n* a natural number.
- **N(e) = 1, N(x + √3y) = m:** solutions **e**^{n}(x + √3y), for *n* a natural number.

Note that for any element *a*, |N(a)| = 1 if and only if *a* is a unit. Hence, as the solutions would be given by (x, y) where N(x + √3y) = −1, x + √3y would have to be a unit. But N(e) = 2^{2} − 3·1^{2} = 1, so all units have norm 1, and thus there are NO integer solutions to this equation.

We need to find (x, y) such that N(x + √3y) = x^{2} − 3y^{2} = 13 (because N(*e*) = 1).

- When y = 1, x^{2} = 16, so x = 4.
- When y = 2, x^{2} = 25, so x = 5.

Hence (4, 1) and (5, 2) are both solutions. In fact 5 + 2√3 = *e*(4 − √3), so up to multiplication by units every solution comes from 4 + √3 or its conjugate 4 − √3, and all solutions are of the form:

**±e^{n}(4 ± √3), for n an integer and e = 2 + √3 the fundamental unit**
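The infinite family can be generated directly: multiplying by e = 2 + √3 sends (x, y) to (2x + 3y, x + 2y), since (x + y√3)(2 + √3) = (2x + 3y) + (x + 2y)√3. A small Python check starting from the solution (5, 2) found above:

```python
# Repeatedly multiply (5 + 2*sqrt(3)) by the fundamental unit e = 2 + sqrt(3)
# and verify each resulting pair still satisfies x^2 - 3*y^2 = 13.
solutions = [(5, 2)]
for _ in range(4):
    x, y = solutions[-1]
    solutions.append((2 * x + 3 * y, x + 2 * y))

assert all(x * x - 3 * y * y == 13 for x, y in solutions)
print(solutions)  # [(5, 2), (16, 9), (59, 34), (220, 127), (821, 474)]
```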

This is the last post of this series! Next I’ll be delving into Galois Theory. M x

Last time, I discussed Dedekind’s Criterion. In this post I aim to introduce Dirichlet’s Unit Theorem.

Let L be a number field (finite extension of the field of rational numbers).

We will define a **complex embedding of L** as a field homomorphism σ : L → **C**.

Then a **real embedding of L** is similarly defined as a field homomorphism σ : L → **R**.

We note that the non-real complex embeddings come in pairs (σ, σ̄), where we obtain one embedding by taking the complex conjugate of the other embedding in the pair.

Hence, given a number field L, we can label its real embeddings as σ_{1}, …, σ_{r} and its complex embeddings as σ_{r+1}, σ̄_{r+1}, …, σ_{r+s}, σ̄_{r+s}.

The degree, *n*, of the field extension L/Q (where Q is the field of rational numbers) is equal to the total number of embeddings, and hence we deduce that **n = 2s + r**. The fact that there is this equivalence is non-trivial, so don’t worry if you don’t immediately see why this is! In order to prove it we must use tools from Galois Theory.

*Let me know if you’re interested in seeing Galois Theory*

Letting *O_{L}*^{×} be the units in the ring of algebraic integers of L, we are now ready to state Dirichlet’s Unit Theorem: *O_{L}*^{×} is isomorphic to μ × **Z**^{r+s−1}, where μ is the finite cyclic group of roots of unity contained in L.

The proof of this is rather complicated and involves defining a map, whose kernel is this finite cyclic group, which takes *O_{L}*^{×} into a lattice. Since lattices are involved in the proof, you can start to see where the copies of **Z** in the isomorphism come from.

Take the case where r = 2 and s = 0, i.e. L = Q(√d) is a real quadratic number field (d > 1 square-free).

As *r + s − 1 = 1*, we get that *O_{L}*^{×} is infinite. By explicitly defining the isomorphism we can deduce that the finite cyclic group is simply {±1}, and the free part is generated by one element, the **fundamental unit** *e*, i.e. *O_{L}*^{×} = {±e^{n} : n ∈ **Z**}.

I’ve purposefully skipped a lot of the details, as what is important is the **result**. The proof requires a bit of *magic* and is not terribly enlightening. However, the result is extremely important – using this, for the case of real quadratic fields, we can completely specify *O_{L}*^{×}, provided we find the fundamental unit.

It turns out that finding the fundamental unit in this case is extremely straightforward. There are various different cases.

- If **d ≡ 2, 3 (mod 4)**, we characterise the fundamental unit *u* in the following way: let *b* be the least positive integer such that db^{2} + 1 or db^{2} − 1 is of the form a^{2} for some natural number *a*. Then *u = a + b√d*.
- If **d ≡ 1 (mod 4) and d is not equal to 5**: let *b* be the least positive integer such that db^{2} + 4 or db^{2} − 4 is of the form a^{2} for some natural number *a*. Then *u = (a + b√d)/2*.
- **If d = 5:** *u = (1 + √5)/2*.

(If you would like the proof as to **WHY** this is true, leave me a comment below and I’ll be happy to make a post!)

For d = 2, 𝑏 = 1 works since 2·1^{2} − 1 = 1^{2}, so **1 + √2 is the fundamental unit.**
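The case analysis above translates directly into a small search procedure. A Python sketch (the function name and return convention are my own):

```python
from math import isqrt

def fundamental_unit(d):
    """Fundamental unit of Q(sqrt(d)) for square-free d > 1.

    Returns (a, b, halved): the unit is a + b*sqrt(d), divided by 2 if halved.
    """
    if d == 5:
        return 1, 1, True  # special case: (1 + sqrt(5)) / 2
    targets = (4, -4) if d % 4 == 1 else (1, -1)  # d = 1 vs 2, 3 (mod 4)
    b = 1
    while True:
        for t in targets:
            n = d * b * b + t
            if n > 0:
                a = isqrt(n)
                if a * a == n:
                    return a, b, d % 4 == 1
        b += 1

print(fundamental_unit(2))   # (1, 1, False): 1 + sqrt(2)
print(fundamental_unit(3))   # (2, 1, False): 2 + sqrt(3)
print(fundamental_unit(13))  # (3, 1, True):  (3 + sqrt(13)) / 2
```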

In order to get a lot of the symbols I had to LaTeX some of the sections, so sorry if the formatting is a little off! Next time we will start solving some equations using algebra!

M x

In episode 1, I introduced the idea of prime ideals. Today we will extend this idea and prove a really important result in algebraic number theory: Dedekind’s Criterion.

We will use the following fact:

If **P**, contained in *O_{L}*, is a non-zero prime ideal, then there is a unique prime number *p* such that *p* ∈ **P**.

For those who are more advanced, this is because the ideal generated by p, namely (p), is the kernel of the natural map **Z** → *O_{L}*/**P**.

Then **P** | *pO_{L}* and N(**P**) = *p*^{f} for some integer 1 ≤ f ≤ [L : Q].

The proof of Dedekind’s Criterion uses a lot of Group Theory and therefore I will not prove it for you. However, it is a really useful tool in algebraic number theory and so I will state it and show how it can be used to factor ideals (remember that in episode 1 we showed that this factorisation is unique).

Before stating the theorem, let me define a few things:

- For 𝛼_{1}, …, 𝛼_{k} ∈ *O_{L}*, (𝛼_{1}, …, 𝛼_{k}) denotes the ideal of *O_{L}* generated by 𝛼_{1}, …, 𝛼_{k}, i.e. {x_{1}𝛼_{1} + … + x_{k}𝛼_{k} | x_{i} ∈ *O_{L}*}.
- Let 𝐿/𝐾 be a field extension and let 𝛼 ∈ 𝐿 be algebraic over 𝐾 (i.e. there is a polynomial *p* with coefficients in *K* such that *p(𝛼) = 0*). We call the **minimal polynomial** of 𝛼 over 𝐾 the monic polynomial 𝑓 with coefficients in *K* of least degree such that *f*(𝛼) = 0.
- Say we have a polynomial *p(x) = a_{n}x^{n} + a_{n−1}x^{n−1} + … + a_{1}x + a_{0}* with integer coefficients. Then its **reduction mod p** is defined as *p̄(x) = ā_{n}x^{n} + … + ā_{0}*, where *ā_{i}* ≡ *a_{i}* (mod p).
- In episode 1 we defined the degree of a field extension L/K. We denote this as [L:K].
- **Z**/p**Z** is the ring of integers mod p. For *p* prime, this is a finite field, which we usually denote as **F**_{p}.

Okay, now we’re ready for the theorem!

Let 𝛼 ∈ *O_{L}* be such that 𝐿 = Q(𝛼), let 𝑓(*x*) be the minimal polynomial of 𝛼 over Q, and let *p* be a prime that does not divide the index [*O_{L}* : **Z**[𝛼]]. Write the reduction of 𝑓 mod *p* as a product of powers of distinct monic irreducible polynomials:

𝑓(*x*) = g_{1}(*x*)^{e_1} · · · g_{r}(*x*)^{e_r} (mod *p*),

where g_{1}(*x*), … , g_{r}(*x*) ∈ **F**_{𝑝}[*x*] are distinct monic irreducibles. For each *i*, let

**p**_{i} = (*p*, g_{i}(𝛼)),

an ideal of *O_{L}*, and let f_{i} = deg g_{i}.

Then **p**_{1}, …, **p**_{r} are distinct prime ideals of *O_{L}*, with N(**p**_{i}) = *p*^{f_i} and *pO_{L}* = **p**_{1}^{e_1} · · · **p**_{r}^{e_r}.

If you don’t quite understand the theorem, don’t worry! The first time I read this I was really confused as well. I think the more examples you see and the more you use it the easier it becomes to understand. Because of this, I will give you an example next.

Let L = Q(√−11) and *p* = 5. We will use the following result:

Let *d* ∈ **Z** be square-free and not equal to 0 or 1, and let L = Q(√d). Then *O_{L}* = **Z**[√d] if d ≡ 2, 3 (mod 4), and *O_{L}* = **Z**[(1 + √d)/2] if d ≡ 1 (mod 4).

As −11 ≡ 1 (mod 4), *O_{L}* = **Z**[(1 + √−11)/2]. Then [*O_{L}* : **Z**[√−11]] = 2, which is not divisible by *p* = 5, so we may apply Dedekind’s Criterion with 𝛼 = √−11. The minimal polynomial of 𝛼 is 𝑓(x) = x^{2} + 11, and mod 5 we have 𝑓(x) = x^{2} + 1 = (x + 2)(x + 3).

Therefore, by Dedekind’s Criterion, 5*O_{L}* = **PQ**, where

**P** = (5, √−11 + 2) and **Q** = (5, √−11 + 3),

and **P**, **Q** are distinct prime ideals in *O_{L}*. So we have found how 5 splits in Q(√−11).
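The factorisation of x^{2} + 11 mod 5 used here is easy to verify by brute force over **F**_{5}:

```python
# Find the roots of f(x) = x^2 + 11 over F_5 by trying all residues.
p = 5
roots = [x for x in range(p) if (x * x + 11) % p == 0]
print(roots)  # [2, 3]

# So x^2 + 11 = (x - 2)(x - 3) = (x + 3)(x + 2) mod 5, matching the ideals
# P = (5, sqrt(-11) + 2) and Q = (5, sqrt(-11) + 3).
```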

In the next episode I will talk about Dirichlet’s Unit Theorem and then we will be ready to solve some problems in Number Theory!

M x

The next few posts will be me detailing some interesting results in the area of Maths that I hope to specialise in: Algebraic Number Theory. The first result will be the unique prime factorisation of ideals. But first, what is an ideal?

If you’re not familiar with the definition of a **ring** click here, as we’ll need this for the following discussion.

An ideal, *I*, is a subset of a ring *R* such that:

- it is closed under addition and has additive inverses (i.e. it is an additive subgroup of (*R*, +, 0));
- if *a* ∈ *I* and *b* ∈ *R*, then *a* · *b* ∈ *I*.

*I* is a proper ideal if *I *does not equal *R*.

In order to prove the result, I need to introduce the concept of number fields and the ring of integers. A *number field* is a finite field extension over the rationals.

**Field Extension:** a field extension *F* of a field *E* (written F/E) is such that the operations of *E* are those of *F* restricted to *E*, i.e. *E* is a subfield of *F*. Given such a field extension, *F* is a vector space over *E*, and the dimension of this vector space is called the degree of the extension. If this degree is finite, we have a finite field extension.

So if *F *is a number field, *E* would be the rationals.

Suppose *F *is a number field. An **algebraic integer **is simply a fancy name for an element *a *of F such that there exists a polynomial *f* with integer coefficients, which is monic (the coefficient of the largest power of *x* is 1), such that *f*(*a*) = 0.

The algebraic integers in a number field *F* form a ring, called the ring of integers, which we denote as *O _{F}*. It turns out that the ring of integers is very important in the study of number fields.

An ideal *P* in a ring *R* is a **prime ideal** if, for all *x*, *y* in *R*, *xy* ∈ *P* implies x ∈ *P* or y ∈ *P*. As *O_{F}* is a ring, we can consider prime ideals in *O_{F}*.

We want to try and deal with ideals in the same way we deal with numbers, as ideals are easier to deal with (ideals are a sort of abstraction of the concept of numbers). After formalising what it means to be an ideal and proving certain properties of ideals, we can prove that given two ideals *I* and *J*, *I* dividing *J* (written *I*|*J*) is equivalent to *I* containing *J*.

Now, there are three results that we will need in order to prove the prime factorisation of ideals that I will simply state:

1. All prime ideals *P* in *O_{F}* are maximal (in other words, there are no proper ideals strictly between *P* and *O_{F}*). Furthermore, the converse also holds: all maximal ideals in *O_{F}* are prime.
2. Analogously to numbers (elements of a number field *F*), if *I*, *J* are ideals in *O_{F}* with *J*|*I*, there exists an ideal *K* such that *I = JK*.
3. For a prime ideal *P* and ideals *I*, *J* of *O_{F}*, *P*|*IJ* implies *P*|*I* or *P*|*J*.

**Theorem:** Let *I* be a non-zero ideal in *O_{F}*. Then *I* can be written as a product of prime ideals, uniquely up to reordering.

**Proof:** There are two things we have to prove: the existence of such a factorisation and then its uniqueness.

Existence: If *I* is prime then we are done, so suppose it isn’t. Then it is not maximal (by 1), so there is some proper ideal *J* properly containing *I*. So *J*|*I*, so (by 2) there is an ideal *K* such that *I = JK*. We can continue factoring this way, and this must stop eventually (for the curious, I make the technical note that otherwise we would obtain an infinite chain of strictly ascending ideals, which is impossible as *O_{F}* is Noetherian).

Uniqueness: If *P_{1} · · · P_{r} = Q_{1} · · · Q_{s}*, with all *P_{i}*, *Q_{j}* prime, then *P_{1}* | *Q_{1} · · · Q_{s}*, so by 3, *P_{1}* divides (and hence contains) some *Q_{j}*. As *Q_{j}* is maximal (by 1) and *P_{1}* is proper, *P_{1} = Q_{j}*. Cancelling this factor from both sides and repeating, we conclude that r = s and the *Q_{j}* are a reordering of the *P_{i}*.

For numbers, we **only** get unique prime factorisation in what is called a unique factorisation domain (UFD). Examples of UFDs are the complex numbers and the rationals. However, the integers mod 10 do not form a UFD because, for example, 2·2 = 4 = 7·2 (mod 10).

However, we have the unique prime factorisation of ideals in the ring of algebraic integers of ** any** number field. This means that we can prove many cool results by using this unique prime factorisation, which we can then translate into results about numbers in that number field. I will detail some of these in future blog posts.

M x

The Tower of Hanoi is a famous problem first posed in 1883 by E. Lucas. There are 3 rods and *n* disks resting on one rod, sorted in size like a pyramid with the smallest disk on top. Given that you can only move the disks one at a time and that you can never place a bigger disk on a smaller disk, the aim is to move all the disks from the left-hand rod to the right-hand rod in the smallest number of moves possible.

Consider *n* *= 0, *the simplest case with no discs. As there are no discs to move, we cannot make any moves, so the number of steps required is 0. Letting S_{n} be the number of moves required for *n* disks, we get **S _{0 }= 0**.

Now, we must consider how the problem scales. With *n = 1*, a single step will solve the problem. With *n = 2,* the answer is 3 steps: one to move the top (small) disk to another rod, one step to move the big disk to the destination rod, and lastly one step to move the small disk on top of the big disk.

We now consider *n* discs. We need S_{n−1} steps to move all disks except the biggest one, then one step to move the biggest disk, then S_{n−1} steps again to move the sub-tower back on top of it. So we have the following upper bound: S_{n} ≤ 2S_{n−1} + 1.

What about a lower bound? At some point we must move the biggest disk to the destination rod. To get to the biggest disk, we must have moved all disks on top of it (the sub-tower) to another rod, and after having moved the biggest disk, we must move this sub-tower back on top of it. Due to these constraints imposed by the rules of the problem, we know that for *n > 0* we must take at least 2S_{n−1} + 1 steps.

Hence S_{n} = 2S_{n−1} + 1 for n > 0, with S_{0} = 0, and this recurrence relation gives us the exact number of moves needed.

Claim: **S_{n} = 2^{n} – 1** for all natural numbers *n*.

Proof (by Induction):

Clearly true for *n = 0*.

Then S_{k+1} = 2*S_{k} + 1 = 2*(2^{k} – 1) + 1, by the induction hypothesis.

So S_{k+1} = 2*2^{k} – 2 + 1 = 2^{k+1} – 1, as required.

So the claim holds by induction.
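The recurrence and the closed form can also be cross-checked by counting the moves the recursive strategy actually makes. A small Python sketch:

```python
def hanoi_moves(n):
    """Number of moves made by the standard recursive Tower of Hanoi strategy."""
    if n == 0:
        return 0
    # Move n-1 disks aside, move the biggest disk, move n-1 disks back on top.
    return 2 * hanoi_moves(n - 1) + 1

print([hanoi_moves(n) for n in range(6)])                   # [0, 1, 3, 7, 15, 31]
print(all(hanoi_moves(n) == 2**n - 1 for n in range(10)))   # True
```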

So we have found an easy formula for the number of steps needed to solve the Tower of Hanoi problem for any *n*!

M x