EFE_algo

By Wolfgang Keller
Originally written 2020-09-05
Last modified 2020-09-05

Exponentiation by squaring

What we want to find is an algorithm for \begin{align*} \mathit{exp}: \left(M \times \prod_{n \in \mathbb{Z}_{\geq 0}} \{0,1\}^{\{0, \ldots, n-1\}}\right) & \rightarrow M: \\ a\ n\ f & \mapsto a^{\sum_{i=0}^{n-1} f_i \cdot 2^i}. \end{align*} where \(M\) is a (multiplicatively written) monoid.

From highest to lowest bit

Basic algorithm

For \(k \in \{0, \ldots, n\}\), set \[ \texttt{R0}_k := a^{\sum_{i=k}^{n-1} f_i \cdot 2^i}. \] We have for \(k \in \{0, \ldots, n-1\}\): \[ \texttt{R0}_k = \begin{cases} \texttt{R0}_{k+1} \cdot \texttt{R0}_{k+1} & \text{if } f_k = 0, \\ \texttt{R0}_{k+1} \cdot \texttt{R0}_{k+1} \cdot a & \text{if } f_k = 1, \end{cases} \]

This yields the following algorithm:

Function exp(a, n, f)
    R0 := 1

    for k = (n-1) .. 0:
        R0 := R0 * R0

        if f[k] = 1:
            R0 := R0 * a
    
    return R0
Algorithm: Implementation of \(\mathit{exp}\) from highest to lowest bit.
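As an illustration, here is a direct Python transcription of this algorithm (a sketch; the function name and the bit-list convention, lowest index first, are ours):

```python
def exp_msb_first(a, n, f, identity=1):
    # Computes a ** sum(f[i] * 2**i for i in range(n)) in any monoid;
    # here `*` is the monoid operation and `identity` its neutral element.
    r0 = identity
    for k in range(n - 1, -1, -1):   # from highest to lowest bit
        r0 = r0 * r0                 # squaring step
        if f[k] == 1:
            r0 = r0 * a              # extra multiplication for a set bit
    return r0

# Example: 3 ** 13, with 13 = 0b1101, i.e. f = [1, 0, 1, 1] (f[0] is the lowest bit)
print(exp_msb_first(3, 4, [1, 0, 1, 1]))  # → 1594323
```

For integers this agrees with Python's built-in `pow`; the point of the monoid formulation is that the same loop works for matrices, residue classes, elliptic-curve points, and so on.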

Montgomery ladder

In particular for cryptographic applications, one often desires that the exact number of executed monoid multiplications is the same independently of the execution path that the code takes. The idea behind this is that for many cryptographic applications, the exponent \(e\) must be kept secret (with, for example, only \(\deg e\) known to the public).

One method for this, which we present in this section, is called a Montgomery ladder.

Important

Using a Montgomery ladder does, of course, not by itself ensure that the code is secure against timing attacks (even if the monoid operations are implemented in such a way that they execute in constant time). For example, if the operands of the monoid are stored in memory (because they do not fit into registers), cache timing information can leak which operands were used and thus which code branch was taken.

Because of this, in most cases a better implementation option for code that must be secure against timing attacks is to execute both possible branches (current bit is 0 vs. 1) and afterwards use a (constant-time) conditional move to select the result of the branch that the code “actually took”.
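The branch-free selection can be sketched as follows (a sketch only: CPython gives no real constant-time guarantees, so this merely illustrates the masking structure one would use in C or assembly; the function name and the fixed operand width are our assumptions):

```python
def ct_select(bit, x, y, width=64):
    # Branch-free select on `width`-bit nonnegative integers:
    # returns x if bit == 1, else y, without an if-statement.
    # mask is all ones when bit == 1 and all zeros when bit == 0.
    lim = (1 << width) - 1
    mask = (-bit) & lim
    return (x & mask) | (y & ~mask & lim)

print(ct_select(1, 7, 9))  # → 7
print(ct_select(0, 7, 9))  # → 9
```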

In the sense of the previous warning, we describe the Montgomery ladder mainly for the mathematical ideas behind it.

The first step is to observe that in the loop of the basic algorithm above, R0 is set to \[ \texttt{R0} := \begin{cases} \texttt{R0} \cdot \texttt{R0} & \text{if } f_k = 0, \\ \texttt{R0} \cdot \texttt{R0} \cdot a & \text{if } f_k = 1. \end{cases} \]

So, we introduce another variable R1 whose value is always R0 * a before and after each iteration of the loop. This yields the following algorithm:

Function exp(a, n, f)
    R0 := 1
    R1 := a

    for k = (n-1) .. 0:
        if f[k] = 0:
            R0 := R0 * R0
            R1 := R0 * a
        else # if f[k] = 1:
            R0 := R0 * R0
            R0 := R0 * a
            R1 := R0 * a
    
    return R0
Algorithm: An intermediate step towards the Montgomery ladder.
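A Python sketch of this intermediate version (function name ours) can also check the invariant R1 = R0 * a after every iteration:

```python
def exp_intermediate(a, n, f):
    # Maintains the invariant r1 == r0 * a before and after each iteration.
    r0, r1 = 1, a
    for k in range(n - 1, -1, -1):
        if f[k] == 0:
            r0 = r0 * r0
            r1 = r0 * a
        else:
            r0 = r0 * r0
            r0 = r0 * a
            r1 = r0 * a
        assert r1 == r0 * a   # the invariant introduced above
    return r0

print(exp_intermediate(3, 4, [1, 0, 1, 1]))  # → 1594323
```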

The next step is to observe that, because of the invariant \(\texttt{R1} = \texttt{R0} \cdot a\), the assignments in both branches can be rewritten in terms of R0 and R1 alone: for \(f_k = 0\) we have \(\texttt{R0} \cdot \texttt{R0} \cdot a = \texttt{R0} \cdot \texttt{R1}\), and for \(f_k = 1\) we have \(\texttt{R0} \cdot \texttt{R0} \cdot a = \texttt{R0} \cdot \texttt{R1}\) and \(\texttt{R0} \cdot \texttt{R0} \cdot a \cdot a = \texttt{R1} \cdot \texttt{R1}\). In particular, each branch then needs exactly two multiplications.

This yields the following algorithm, the Montgomery ladder:

Function exp(a, n, f)
    R0 := 1
    R1 := a

    for k = (n-1) .. 0:
        if f[k] = 0:
            R1 := R0 * R1
            R0 := R0 * R0
        else # if f[k] = 1:
            R0 := R0 * R1
            R1 := R1 * R1
    
    return R0
Algorithm: Montgomery ladder implementation of \(\mathit{exp}\).
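A minimal Python sketch of the ladder (function name ours); note that both branches perform exactly two multiplications:

```python
def exp_montgomery(a, n, f):
    # Montgomery ladder: the invariant r1 == r0 * a holds before and
    # after every iteration, and each branch does two multiplications.
    r0, r1 = 1, a
    for k in range(n - 1, -1, -1):
        if f[k] == 0:
            r1 = r0 * r1
            r0 = r0 * r0
        else:
            r0 = r0 * r1
            r1 = r1 * r1
    return r0

print(exp_montgomery(3, 4, [1, 0, 1, 1]))  # → 1594323
```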

A version of the Montgomery ladder that one often finds in the literature is based on the following observation: For some groups, the neutral element is “inconvenient” to use. For example, for the group of an elliptic curve, the neutral element is “the point at \(\infty\)”.

So, one demands that \(f_{n-1} = 1\), uses this fact to unroll the first loop iteration of the Montgomery ladder above, and obtains the following algorithm:

Function exp(a, n, f)
    R0 := 1
    R1 := a
    R0 := R0 * R1
    R1 := R1 * R1

    for k = (n-2) .. 0:
        if f[k] = 0:
            R1 := R0 * R1
            R0 := R0 * R0
        else # if f[k] = 1:
            R0 := R0 * R1
            R1 := R1 * R1
    
    return R0
Algorithm: An intermediate step towards a Montgomery ladder that uses \(f_{n-1} = 1\).

If one simplifies the initial code lines of the previous algorithm, one obtains:

Function exp(a, n, f)
    R0 := a
    R1 := a * a

    for k = (n-2) .. 0:
        if f[k] = 0:
            R1 := R0 * R1
            R0 := R0 * R0
        else # if f[k] = 1:
            R0 := R0 * R1
            R1 := R1 * R1
    
    return R0
Algorithm: Montgomery ladder implementation of \(\mathit{exp}\) that uses \(f_{n-1} = 1\).
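In Python (function name ours); the point of this variant is that the neutral element never appears as an operand:

```python
def exp_montgomery_top_bit_set(a, n, f):
    # Requires f[n-1] == 1; the neutral element is never multiplied.
    assert n >= 1 and f[n - 1] == 1
    r0, r1 = a, a * a          # the unrolled first iteration, simplified
    for k in range(n - 2, -1, -1):
        if f[k] == 0:
            r1 = r0 * r1
            r0 = r0 * r0
        else:
            r0 = r0 * r1
            r1 = r1 * r1
    return r0

# Example: 2 ** 13, with f = [1, 0, 1, 1] so that f[3] = 1
print(exp_montgomery_top_bit_set(2, 4, [1, 0, 1, 1]))  # → 8192
```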
Warning

Because of the \(f_{n-1} = 1\) condition, the last algorithm “leaks” the value of \(\left\lfloor \log_2 \left(\sum_{i=0}^{n-1} f_i \cdot 2^i\right)\right\rfloor = n-1\).

On the other hand, the original Montgomery ladder only leaks the upper bound \(n-1\) for \(\left\lfloor \log_2 \left(\sum_{i=0}^{n-1} f_i \cdot 2^i\right)\right\rfloor\): setting \(n' := \left\lfloor \log_2 \left(\sum_{i=0}^{n-1} f_i \cdot 2^i\right)\right\rfloor\), the situation \(n' < n-1\) can occur in that algorithm, because it does not require \(f_{n-1} = 1\). If \(n' < n-1\), we, of course, have \(f_{n'+1} = \ldots = f_{n-1} = 0\).

From lowest to highest bit

Basic algorithm

For \(k \in \{-1, \ldots, n-1\}\), set \begin{align*} \texttt{R0}_k & := a^{\sum_{i=0}^k f_i \cdot 2^i}, \\ \texttt{R1}_k & := a^{2^{k+1}}. \end{align*} We have for \(k \in \{0, \ldots, n-1\}\): \begin{align*} \texttt{R0}_k & = \begin{cases} \texttt{R0}_{k-1} & \text{if } f_k = 0, \\ \texttt{R0}_{k-1} \cdot \texttt{R1}_{k-1} & \text{if } f_k = 1, \end{cases} \\ \texttt{R1}_k & = \texttt{R1}_{k-1} \cdot \texttt{R1}_{k-1}. \end{align*}

This yields the following algorithm:

Function exp(a, n, f)
    R0 := 1
    R1 := a

    for k = 0 .. (n-1):
        if f[k] = 1:
            R0 := R0 * R1

        R1 := R1 * R1
    
    return R0
Algorithm: Implementation of \(\mathit{exp}\) from lowest to highest bit.
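In Python (function name ours), this variant reads as follows; note that the bits are consumed in the order in which they are most naturally extracted from an integer:

```python
def exp_lsb_first(a, n, f):
    # After step k, r0 == a ** (value of the bits f[0..k]) and
    # r1 == a ** (2 ** (k + 1)), matching R0_k and R1_k above.
    r0, r1 = 1, a
    for k in range(n):
        if f[k] == 1:
            r0 = r0 * r1
        r1 = r1 * r1
    return r0

print(exp_lsb_first(2, 4, [1, 0, 1, 1]))  # → 8192
```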

Application: Bootstrap sequences

Let \(R\) be a commutative ring, let \(M\) be an \(R\)-module, let \(k \in \mathbb{Z}_{\geq 1}\) and let \(f_{-(k-1)}, \ldots, f_0 \in M\) (start values) and \(a_{-(k-1)}, \ldots, a_0 \in R\) (coefficients) be given. Consider the sequence \begin{align*} f: \mathbb{Z}_{\geq -(k-1)} & \rightarrow M: \\ n & \mapsto \begin{cases} f_n & \text{if } n \in \{-(k-1), \ldots, 0\}, \\ \sum\limits_{i=-(k-1)}^0 a_i f_{n-1+i} & \text{otherwise.} \end{cases} \end{align*} Of course, we can use an iterative algorithm to compute the \(n\)th element of the sequence. But we want to use one of the exponentiation by squaring algorithms from the sections above to obtain a faster algorithm.

Let \begin{align*} A & := \left( \begin{array}{ccccc} a_0 & a_{-1} & \cdots & a_{-(k-2)} & a_{-(k-1)} \\ 1 & & & & \\ & 1 & & & \\ & & \ddots & & \\ & & & 1 & \end{array} \right), \\ f_{start} & := \left( \begin{array}{c} f_0 \\ f_{-1} \\ \vdots \\ f_{-(k-1)} \end{array} \right). \end{align*} The trick for the algorithm is to consider that for \(n \in \mathbb{Z}_{\geq 0}\), we have \[ f_n = \left( A^n \cdot f_{start} \right)_0. \]

Matrices over a commutative ring form a monoid with respect to multiplication. Thus, we can use one of the exponentiation by squaring algorithms from the sections above to compute \(A^n\).
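As a small worked example (our choice, not from the text): the Fibonacci-type recurrence \(f_n = f_{n-1} + f_{n-2}\) with \(k = 2\), \(a_0 = a_{-1} = 1\), \(f_0 = 1\), \(f_{-1} = 0\). A minimal Python sketch, using 2×2 integer matrices as the monoid and the lowest-to-highest-bit scheme above (with the bits taken directly from the integer exponent):

```python
def mat_mul(x, y):
    # 2x2 matrix product over the integers
    return [[x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
            [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]]]

def mat_pow(a, e):
    # Square-and-multiply from lowest to highest bit, as in the last algorithm.
    r0 = [[1, 0], [0, 1]]      # identity matrix = neutral element
    r1 = a
    while e > 0:
        if e & 1:
            r0 = mat_mul(r0, r1)
        r1 = mat_mul(r1, r1)
        e >>= 1
    return r0

def f_seq(n):
    # A is the companion matrix of the recurrence; f_start = (f_0, f_{-1}).
    A = [[1, 1], [1, 0]]
    f_start = (1, 0)
    An = mat_pow(A, n)
    return An[0][0] * f_start[0] + An[0][1] * f_start[1]   # (A^n . f_start)_0

print([f_seq(n) for n in range(8)])  # → [1, 1, 2, 3, 5, 8, 13, 21]
```

This takes \(O(\log n)\) matrix multiplications instead of \(O(n)\) ring operations for the naive iteration.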