This repository contains the source code of MPI programs for computing the probability distributions induced by:
- Shor's order-finding algorithm [Shor94] [Shor97]
- Seifert's order-finding algorithm with tradeoffs [Seifert01]
- Ekerå–Håstad's algorithm for computing short discrete logarithms with tradeoffs [EH17] (Slides) [E20]
- Ekerå–Håstad's algorithm for factoring RSA integers with tradeoffs [EH17] (Slides) [E20]
- Ekerå's algorithm for computing general discrete logarithms and orders with tradeoffs [E21]
- Shor's algorithm for computing general discrete logarithms [Shor94] [Shor97] with modifications [E19p]
Once computed, the distributions may be sampled to simulate the quantum algorithms. This is possible even for large, cryptographically relevant problem instances. Note, however, that the solution to the problem (i.e. the group order, the discrete logarithm, or in some cases both) must be known.
This repository furthermore contains the source code of MPI programs that estimate the number of vectors that need to be enumerated in the classical lattice-based post-processing algorithms of Ekerå and Ekerå–Håstad, and of MPI programs that execute the post-processing algorithms with respect to simulated outputs from the quantum algorithms. For completeness, implementations of Shor's original post-processing algorithms are also provided.
See [E21b], [E22p] and the factoritall repository for resources on efficiently factoring any integer after a single call to an order-finding algorithm. See [E23p] for a more recent analysis of the single-run complexity of Ekerå–Håstad's algorithm. See the Quaspy repository for resources on simulating the above algorithms in Python 3 when not making tradeoffs.
In the diagram below, the above algorithms are compared with respect to the number of group operations that they perform quantumly in each run when solving cryptographically relevant problems.
All problem instances considered in the diagram have a classical strength level of approximately 128 bits. The cost model is from [FIPS 140-2 IG]. The solid bars illustrate the number of group operations required by algorithms not making tradeoffs; a single run is in general sufficient for these algorithms. The hatched bars illustrate the number of group operations required by algorithms making tradeoffs, in the limit as the number of runs tends to infinity. In practice, the number of operations required is slightly higher, and these algorithms must be run multiple times to achieve the tradeoff. The more runs one is willing to make, the better the achievable tradeoff.
Note that the quantum circuit required to implement the group operation for elliptic curve groups differs significantly from the circuit required for the other groups. This must be accounted for when making comparisons.
Note that the source code in this repository was developed for academic research purposes. It grew out of our research project in an organic manner as research questions were posed and answered. It is distributed "as is" without warranty of any kind, either expressed or implied. For further details on the terms of use, see the license.
It is possible to further optimize portions of the code. However, the current code performs sufficiently well for our purposes. Note furthermore that the portions of the code that pertain to Shor's original algorithm for computing general discrete logarithms are based on a heuristic that lacks an error bound. These portions, and the heuristic, are currently a work in progress.
To compile and run these programs under e.g. Ubuntu 22.04 LTS, first execute:
$ sudo apt install libgmp-dev libmpfr-dev libfplll-dev libopenmpi-dev
$ sudo apt install gcc g++ make openmpi-bin
This installs libraries and header files for GMP, MPFR and fpLLL, as well as libraries, headers and binaries for OpenMPI and for compiling C and C++ sources. You may then proceed to compile the executables:
$ make
Under other Linux and Unix distributions, ensure that the tools contained in the aforementioned packages are installed and available in your search paths before running the above command. Under other operating systems, you may need to set up build scripts yourself.
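As a quick sanity check, the following standard commands should print version information if the required compilers, MPI wrappers and build tools are available in your search paths:

$ gcc --version
$ mpicc --version
$ mpic++ --version
$ make --version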
To build the documentation using Doxygen, first execute:
$ sudo apt install doxygen graphviz
You may then proceed to build the documentation in HTML format from the source files:
$ make documentation
To generate the probability distribution induced by Ekerå–Håstad's algorithm for a short discrete logarithm of length 256 bits, execute:
$ mpirun ./generate_linear_distribution -max -d 256 1
The resulting distribution will be written to a file in the distributions directory. It will automatically be assigned a name according to the parameters used to generate it. You may now use this distribution to estimate the number of runs required to solve for the logarithm with a given bound on the volume quotient, and to verify this estimate by solving simulated outputs.
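As an illustration of the naming convention, listing the distributions directory after the above command should show a file named as in the commands below:

$ ls distributions/
linear-distribution-max-dim-2048-d-m-256-s-1.txt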
To instead generate a distribution for Shor's or Seifert's order-finding algorithms, use the -r flag instead of the -d flag. The -max flag may be substituted for other flags to select the logarithm or order deterministically, explicitly, at random, and so forth. For further details, see the documentation for the generate_linear_distribution executable.
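For example, substituting -r for -d in the command above, and assuming the remaining arguments carry over unchanged, a distribution for order finding with a 256-bit order of maximal length might be generated as follows:

$ mpirun ./generate_linear_distribution -max -r 256 1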
Note: If you installed MPI under Ubuntu as described above, you must specify the number of processors using the -np flag whenever you invoke mpirun. At least two processors are required.
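For example, to run the above generation command on four processors:

$ mpirun -np 4 ./generate_linear_distribution -max -d 256 1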
To estimate the number of runs required to solve the distribution generated above, execute:
$ mpirun ./estimate_runs_linear_distribution \
distributions/linear-distribution-max-dim-2048-d-m-256-s-1.txt
The executable will compute volume quotients for different numbers of runs; the bound on the volume quotient may be controlled via the -v-bound flag. For further details, see the documentation for the estimate_runs_linear_distribution executable.
To solve for the logarithm with Ekerå–Håstad's lattice-based classical post-processing after a given number of simulated runs, here two, execute:
$ mpirun ./solve_linear_distribution \
distributions/linear-distribution-max-dim-2048-d-m-256-s-1.txt 2
By default, the solver does not enumerate the lattice. To enable enumeration, add the -enumerate flag. Note that the enumeration itself is not parallelized. There is a -timeout flag that controls the timeout for enumeration operations. By default, the timeout is set to 5 minutes.
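For example, re-using the distribution and run count from above, enumeration might be enabled as follows (the exact flag placement may differ; see the documentation for the precise synopsis):

$ mpirun -np 2 ./solve_linear_distribution -enumerate \
    distributions/linear-distribution-max-dim-2048-d-m-256-s-1.txt 2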
There are additional flags for selecting which lattice basis reduction algorithm to use. By default, the reduction algorithm is selected adaptively by the solver. For further details, see the documentation for the solve_linear_distribution executable. Note that there is also an executable that solves distributions for the order-finding problem with Shor's original continued-fractions-based post-processing algorithm; see the solve_linear_distribution_shor executable.
To plot the distribution, execute:
$ ./plot_linear_distribution \
distributions/linear-distribution-max-dim-2048-d-m-256-s-1.txt
The plot will be exported as a LaTeX source file to the plots directory. You may use pdflatex or some other LaTeX engine to convert this file into a PDF file for viewing. For further details, see the documentation for the plot_linear_distribution executable.
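For example, assuming the exported file is named linear-distribution-max-dim-2048-d-m-256-s-1.tex (a hypothetical name mirroring the distribution file; the actual name may differ), it may be converted to a PDF as follows:

$ pdflatex plots/linear-distribution-max-dim-2048-d-m-256-s-1.tex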
Note that the resolution of the distribution is automatically and temporarily reduced before plotting to avoid creating LaTeX source files of excessive size. This may affect the smoothness of the plot.
To sample the distribution, execute:
$ ./sample_linear_distribution \
distributions/linear-distribution-max-dim-2048-d-m-256-s-1.txt 10
This will print 10 samples from the distribution to standard output. For further details, see the documentation for the sample_linear_distribution executable.
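Since the samples are printed to standard output, they may be redirected to a file for later processing, e.g.:

$ ./sample_linear_distribution \
    distributions/linear-distribution-max-dim-2048-d-m-256-s-1.txt 1000 > samples.txt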
There are also executables for obtaining information on linear distributions, see the info_linear_distribution executable, and for comparing such distributions, see the compare_linear_distributions executable.
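As a hedged illustration, assuming that info_linear_distribution accepts the path to a distribution as its only argument (see the documentation for the exact synopsis), it might be invoked as:

$ ./info_linear_distribution \
    distributions/linear-distribution-max-dim-2048-d-m-256-s-1.txt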
Analogous executables are available for generating the two-dimensional probability distributions induced by Ekerå's algorithm for computing general discrete logarithms. To use these executables, simply drop "linear" from the executable names. The procedure is then essentially the same as described above, with some differences in synopsis and behavior between the one- and two-dimensional executables.
For further details, please see the documentation for
- the generate_distribution executable
- the filter_distribution executable
- the estimate_runs_distribution executable
- the solve_distribution executable
- the plot_distribution executable
- the sample_distribution executable
- the info_distribution executable
- the compare_distributions executable
Diagonal probability distributions are used to represent the probability distribution induced by Shor's original algorithm for computing general discrete logarithms when the group order is known. Analogous executables for generating, sampling and solving diagonal distributions are also available.
The generator and sampler are for the distribution that would be induced by Shor's original algorithm for computing general discrete logarithms, with the control register initialized to a uniform superposition over all register values, so as to enable control qubit recycling, and with a variable number of padding bits. The solver implements an improved version of Shor's original classical post-processing algorithm that performs a bounded search to increase the success probability. There is furthermore a lattice-based solver that supports tradeoffs.
The full modified quantum algorithm and the two post-processing algorithms are described in [E19p]. Note that the analysis in this paper is heuristic. The distribution is generated using this heuristic.
For further details, please see the documentation for
- the generate_diagonal_distribution executable
- the estimate_runs_diagonal_distribution executable
- the solve_diagonal_distribution executable
- the solve_diagonal_distribution_shor executable
- the plot_diagonal_distribution executable
- the sample_diagonal_distribution executable
- the info_diagonal_distribution executable
- the compare_diagonal_distributions executable
This source code was developed by Martin Ekerå, in part at KTH, the Royal Institute of Technology, in Stockholm, Sweden. Valuable comments and advice were provided by Johan Håstad throughout the development process.
Funding and support for this work were provided by the Swedish NCSA, which is a part of the Swedish Armed Forces.
Computations were performed on the Beskow Cray XC40 supercomputer and its pre- and post-processing cluster Tegner at PDC at KTH. Access was provided by the Swedish National Infrastructure for Computing (SNIC). This version of the source code is intended to be run on generic Linux-based clusters.
More recently, computations were performed on the Dardel HPE Cray EX supercomputer at PDC at KTH. Access was provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS).