diff --git a/README.rst b/README.rst
index 9cd5859..443565e 100644
--- a/README.rst
+++ b/README.rst
@@ -93,6 +93,7 @@ At present, this estimator is maintained by Martin Albrecht. Contributors are:
 - Fernando Virdia
 - Florian Göpfert
 - Hamish Hunt
+- Hunter Kippen
 - James Owen
 - Léo Ducas
 - Markus Schmidt
diff --git a/docs/api_doc.rst b/docs/api_doc.rst
index d95784a..844db2e 100644
--- a/docs/api_doc.rst
+++ b/docs/api_doc.rst
@@ -17,6 +17,9 @@ API Reference
    estimator.lwe_primal
    estimator.lwe_dual
    estimator.lwe
+   estimator.ntru_parameters
+   estimator.ntru_primal
+   estimator.ntru
    estimator.gb
    estimator.nd
    estimator.prob
diff --git a/docs/references.rst b/docs/references.rst
index 251d2f0..2601faa 100644
--- a/docs/references.rst
+++ b/docs/references.rst
@@ -4,6 +4,7 @@ References
 .. [AC:AGPS20] Albrecht, M. R., Gheorghiu, V., Postlethwaite, E. W., & Schanck, J. M. (2020). Estimating quantum speedups for lattice sieves. In S. Moriai, & H. Wang, ASIACRYPT 2020, Part II (pp. 583–613). : Springer, Heidelberg.
 .. [AC:AGVW17] Martin R. Albrecht, Florian Göpfert, Fernando Virdia & Thomas Wunderer. Revisiting the expected cost of solving uSVP and applications to LWE. In T. Takagi, & T. Peyrin, ASIACRYPT 2017, Part I (pp. 297–322). : Springer, Heidelberg.
 .. [AC:CheNgu11] Yuanmi Chen & Phong Q. Nguyen. BKZ 2.0: better lattice security estimates. In D. H. Lee, & X. Wang, ASIACRYPT 2011 (pp. 1–20): Springer, Heidelberg.
+.. [AC:DucWoe21] Ducas, L., & van Woerden, W. (2021). NTRU Fatigue: How Stretched is Overstretched? In M. Tibouchi, & H. Wang, ASIACRYPT 2021 (LNCS vol. 13093). Springer, Cham. https://doi.org/10.1007/978-3-030-92068-5_1
 .. [AC:GuoJoh21] Qian Guo, Thomas Johansson. Faster Dual Lattice Attacks for Solving LWE with Applications to CRYSTALS. In International Conference on the Theory and Application of Cryptology and Information Security (pp. 33-62). Springer, Cham.
 .. [ACISP:BaiGal14] Shi Bai & Steven D. Galbraith. Lattice decoding attacks on binary LWE. In W. Susilo, & Y. Mu, ACISP 14 (pp. 322–337). : Springer, Heidelberg.
 .. [C:ABFKSW20] Martin R. Albrecht, Shi Bai, Pierre-Alain Fouque, Paul Kirchner, Damien Stehlé and Weiqiang Wen. Faster Enumeration-based Lattice Reduction: Root Hermite Factor $beta^{1/(2k)}$ in Time $beta^{beta/8 + o(beta)}$. CRYPTO 2020.
@@ -15,6 +16,7 @@ References
 .. [CheNgu12] Yuanmi Chen and Phong Q. Nguyen. BKZ 2.0: Better lattice security estimates (Full Version). 2012. http://www.di.ens.fr/~ychen/research/Full_BKZ.pdf
 .. [EC:Albrecht17] Albrecht, M. R. (2017). On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL. In J. Coron, & J. B. Nielsen, EUROCRYPT 2017, Part II (pp. 103–129). : Springer, Heidelberg.
 .. [EC:Ducas18] Léo Ducas (2018). Shortest vector from lattice sieving: A few dimensions for free. In J. B. Nielsen, & V. Rijmen, EUROCRYPT 2018, Part I (pp. 125–145). : Springer, Heidelberg.
+.. [EC:KirFou17] Kirchner, P., & Fouque, P.-A. (2017). Revisiting Lattice Attacks on Overstretched NTRU Parameters. In J.-S. Coron, & J. B. Nielsen, EUROCRYPT 2017 (LNCS vol. 10210). Springer, Cham. https://doi.org/10.1007/978-3-319-56620-7_1
 .. [EPRINT:CHHS19] Cheon, J.H., Hhan, M., Hong, S. and Son, Y., 2019. A hybrid of dual and meet-in-the-middle attack on sparse and ternary secret LWE. IEEE Access, 7, pp.89497-89506. https://ia.cr/2019/1114
 .. [EPRINT:LaaMosPol14] Thijs Laarhoven, Michele Mosca, & Joop van de Pol. Finding shortest lattice vectors faster using quantum search. Cryptology ePrint Archive, Report 2014/907, 2014. https://eprint.iacr.org/2014/907.
 .. [EPRINT:SonChe19] Son, Y. and Cheon, J.H., 2019. Revisiting the Hybrid Attack on sparse abd ternary LWE. Workshop on Applied Homomorphic Cryptography, WAHC2019.
@@ -32,4 +34,4 @@ References
 .. [SAC:AlbCurWun19] Albrecht, M. R., Curtis, B. R., & Wunderer, T.. Exploring trade-offs in batch bounded distance decoding. In K. G. Paterson, & D. Stebila, SAC 2019 (pp. 467–491). : Springer, Heidelberg.
 .. [SODA:BDGL16] Becker, A., Ducas, L., Gama, N., & Laarhoven, T. (2016). New directions in nearest neighbor searching with applications to lattice sieving. In SODA 2016, (pp. 10–24).
 .. [Schnorr03] Claus-Peter Schnorr. Lattice Reduction by Random Sampling and Birthday Methods. In: STACS2003, 20th Annual Symposium on Theoretical Aspects of Computer Science, Berlin, Germany, February 27 - March 1, 2003, Proceedings. Ed. by Helmut Alt and Michel Habib. Vol. 2607. Lecture Notes in Computer Science. Springer, 2003, pp. 145–156.doi:10.1007/3-540-36494-3_14. url: http://dx.doi.org/10.1007/3-540-36494-3_14.
-.. [USENIX:ADPS16] Edem Alkim, Léo Ducas, Thomas Pöppelmann, & Peter Schwabe (2016). Post-quantum key exchange - A New Hope. In T. Holz, & S. Savage, 25th USENIX Security Symposium, USENIX Security 16 (pp. 327–343). USENIX Association.
+.. [USENIX:ADPS16] Edem Alkim, Léo Ducas, Thomas Pöppelmann, & Peter Schwabe (2016). Post-quantum key exchange - A New Hope. In T. Holz, & S. Savage, 25th USENIX Security Symposium, USENIX Security 16 (pp. 327–343). USENIX Association.
\ No newline at end of file
diff --git a/docs/schemes/nist-pqc-round-3.rst b/docs/schemes/nist-pqc-round-3.rst
index e78f0f3..eefde84 100644
--- a/docs/schemes/nist-pqc-round-3.rst
+++ b/docs/schemes/nist-pqc-round-3.rst
@@ -60,30 +60,30 @@ NIST PQC Round 3 Finalists
 
     >>> from estimator import *
     >>> schemes.NTRUHPS2048509Enc
-    LWEParameters(n=508, q=2048, Xs=D(σ=0.82), Xe=D(σ=0.71), m=508, tag='NTRUHPS2048509Enc')
-    >>> LWE.primal_bdd(schemes.NTRUHPS2048509Enc)
+    NTRUParameters(n=508, q=2048, Xs=D(σ=0.82), Xe=D(σ=0.71), m=508, tag='NTRUHPS2048509Enc', ntru_type='matrix')
+    >>> NTRU.primal_bdd(schemes.NTRUHPS2048509Enc)
     rop: ≈2^131.1, red: ≈2^130.1, svp: ≈2^130.2, β: 357, η: 390, d: 916, tag: bdd
 
 ::
 
     >>> from estimator import *
     >>> schemes.NTRUHPS2048677Enc
-    LWEParameters(n=676, q=2048, Xs=D(σ=0.82), Xe=D(σ=0.61), m=676, tag='NTRUHPS2048677Enc')
-    >>> LWE.primal_bdd(schemes.NTRUHPS2048677Enc)
+    NTRUParameters(n=676, q=2048, Xs=D(σ=0.82), Xe=D(σ=0.61), m=676, tag='NTRUHPS2048677Enc', ntru_type='matrix')
+    >>> NTRU.primal_bdd(schemes.NTRUHPS2048677Enc)
     rop: ≈2^170.8, red: ≈2^169.6, svp: ≈2^169.9, β: 498, η: 533, d: 1179, tag: bdd
 
 ::
 
     >>> from estimator import *
     >>> schemes.NTRUHPS4096821Enc
-    LWEParameters(n=820, q=4096, Xs=D(σ=0.82), Xe=D(σ=0.79), m=820, tag='NTRUHPS4096821Enc')
-    >>> LWE.primal_bdd(schemes.NTRUHPS4096821Enc)
+    NTRUParameters(n=820, q=4096, Xs=D(σ=0.82), Xe=D(σ=0.79), m=820, tag='NTRUHPS4096821Enc', ntru_type='matrix')
+    >>> NTRU.primal_bdd(schemes.NTRUHPS4096821Enc)
     rop: ≈2^199.7, red: ≈2^198.7, svp: ≈2^198.6, β: 601, η: 636, d: 1485, tag: bdd
 
 ::
 
     >>> from estimator import *
     >>> schemes.NTRUHRSS701Enc
-    LWEParameters(n=700, q=8192, Xs=D(σ=0.82), Xe=D(σ=0.82), m=700, tag='NTRUHRSS701')
-    >>> LWE.primal_bdd(schemes.NTRUHRSS701Enc)
-    rop: ≈2^158.9, red: ≈2^157.9, svp: ≈2^158.0, β: 455, η: 490, d: 1294, tag: bdd
+    NTRUParameters(n=700, q=8192, Xs=D(σ=0.82), Xe=D(σ=0.82), m=700, tag='NTRUHRSS701', ntru_type='matrix')
+    >>> NTRU.primal_bdd(schemes.NTRUHRSS701Enc)
+    rop: ≈2^158.7, red: ≈2^157.7, svp: ≈2^157.7, β: 454, η: 489, d: 1306, tag: bdd
\ No newline at end of file
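With these documentation changes, the NTRU schemes route through the new ``NTRU`` module rather than ``LWE``. For orientation, a minimal usage sketch of the new entry points (requires Sage and this branch; outputs elided)::

    >>> from estimator import *
    >>> params = schemes.NTRUHPS2048509Enc          # now an NTRUParameters instance
    >>> params.ntru_type
    'matrix'
    >>> cost = NTRU.primal_usvp(params)             # primal attacks, as for LWE
    >>> cost = NTRU.primal_dsd(params.updated(q=2**14))  # dense sublattice discovery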
diff --git a/estimator/__init__.py b/estimator/__init__.py
index e79eb3b..d9b20df 100644
--- a/estimator/__init__.py
+++ b/estimator/__init__.py
@@ -1,10 +1,11 @@
 # -*- coding: utf-8 -*-
 
-__all__ = ['ND', 'Logging', 'RC', 'Simulator', 'LWE', 'schemes']
+__all__ = ['ND', 'Logging', 'RC', 'Simulator', 'LWE', 'NTRU', 'schemes']
 
 from .nd import NoiseDistribution as ND
 from .io import Logging
 from .reduction import RC
 from . import simulator as Simulator
 from . import lwe as LWE
+from . import ntru as NTRU
 from . import schemes
diff --git a/estimator/conf.py b/estimator/conf.py
index 5dd33b4..a133d6c 100644
--- a/estimator/conf.py
+++ b/estimator/conf.py
@@ -3,11 +3,17 @@
 Default values.
 """
 
-from .reduction import RC
 from .simulator import GSA
+from .reduction import RC
+from sage.all import exp
 
 red_cost_model = RC.MATZOV
 red_cost_model_classical_poly_space = RC.ABLR21
 red_shape_model = "gsa"
 red_simulator = GSA
 mitm_opt = "analytical"
+max_n_cache = 10000
+
+
+def ntru_fatigue_lb(n):
+    return int((n**2.484)/exp(6))
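The new ``ntru_fatigue_lb`` is a conservative lower bound on the fatigue point, which [AC:DucWoe21]_ place at q = n^(2.484+o(1)) for ternary NTRU; dividing by e^6 errs on the side of also running the dense sublattice estimate on instances below the asymptotic value. A plain-Python sketch of the same check, consistent with the ``possibly_overstretched`` doctests introduced below::

    from math import exp

    def ntru_fatigue_lb(n):
        # conservative lower bound on the fatigue point q = n^2.484 [AC:DucWoe21]
        return int(n**2.484 / exp(6))

    print(ntru_fatigue_lb(508))           # ≈ 13000, so q=2048 is not flagged
    print(16536 >= ntru_fatigue_lb(508))  # True: q=16536 is possibly overstretched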
diff --git a/estimator/lwe_parameters.py b/estimator/lwe_parameters.py
index 7fb7870..6822cba 100644
--- a/estimator/lwe_parameters.py
+++ b/estimator/lwe_parameters.py
@@ -30,6 +30,10 @@ def __post_init__(self, **kwds):
         self.Xe = copy(self.Xe)
         self.Xe.n = self.m
 
+    @property
+    def _homogeneous(self):
+        return False
+
     def normalize(self):
         """
         EXAMPLES:
diff --git a/estimator/lwe_primal.py b/estimator/lwe_primal.py
index 9bca41f..6f505f0 100644
--- a/estimator/lwe_primal.py
+++ b/estimator/lwe_primal.py
@@ -46,9 +46,16 @@ def _solve_for_d(params, m, beta, tau, xi):
         # Find the smallest d ∈ [n,m] s.t. a*d^2 + b*d + c >= 0
         delta = deltaf(beta)
         a = -log(delta)
-        C = log(params.Xe.stddev**2 * (beta - 1) + tau**2) / 2.0
+
+        if not tau:
+            C = log(params.Xe.stddev**2 * (beta - 1)) / 2.0
+            c = params.n * log(xi) - (params.n + 1) * log(params.q)
+
+        else:
+            C = log(params.Xe.stddev**2 * (beta - 1) + tau**2) / 2.0
+            c = log(tau) + params.n * log(xi) - (params.n + 1) * log(params.q)
+
         b = log(delta) * (2 * beta - 1) + log(params.q) - C
-        c = log(tau) + params.n * log(xi) - (params.n + 1) * log(params.q)
         n = params.n
         if a * n * n + b * n + c >= 0:  # trivial case
             return n
@@ -87,6 +94,10 @@ cost_gsa(
         xi = PrimalUSVP._xi_factor(params.Xs, params.Xe)
         m = min(2 * ceil(sqrt(params.n * log(params.q) / log(delta))), m)
         tau = params.Xe.stddev if tau is None else tau
+        # Account for homogeneous instances
+        if params._homogeneous:
+            tau = False  # tau == False ==> instance is homogeneous
+
         d = PrimalUSVP._solve_for_d(params, m, beta, tau, xi) if d is None else d
         # if d == β we assume one SVP call, otherwise poly calls. This makes the cost curve jump, so
         # we avoid it here.
@@ -94,11 +105,19 @@ cost_gsa(
             d += 1
         assert d <= m + 1
 
-        lhs = log(sqrt(params.Xe.stddev**2 * (beta - 1) + tau**2))
-        rhs = RR(
-            log(delta) * (2 * beta - d - 1)
-            + (log(tau) + log(xi) * params.n + log(params.q) * (d - params.n - 1)) / d
-        )
+        if not tau:
+            lhs = log(sqrt(params.Xe.stddev**2 * (beta - 1)))
+            rhs = RR(
+                log(delta) * (2 * beta - d - 1)
+                + (log(xi) * params.n + log(params.q) * (d - params.n - 1)) / d
+            )
+
+        else:
+            lhs = log(sqrt(params.Xe.stddev**2 * (beta - 1) + tau**2))
+            rhs = RR(
+                log(delta) * (2 * beta - d - 1)
+                + (log(tau) + log(xi) * params.n + log(params.q) * (d - params.n - 1)) / d
+            )
 
         return costf(red_cost_model, beta, d, predicate=lhs <= rhs)
@@ -120,8 +139,18 @@ cost_simulator(
         xi = PrimalUSVP._xi_factor(params.Xs, params.Xe)
         tau = params.Xe.stddev if tau is None else tau
+        if params._homogeneous:
+            tau = False
+            d -= 1  # Remove extra dimension in homogeneous instances
+
         r = simulator(d=d, n=params.n, q=params.q, beta=beta, xi=xi, tau=tau)
-        lhs = params.Xe.stddev**2 * (beta - 1) + tau**2
+
+        if not tau:
+            lhs = params.Xe.stddev**2 * (beta - 1)
+
+        else:
+            lhs = params.Xe.stddev**2 * (beta - 1) + tau**2
+
         predicate = r[d - beta] > lhs
 
         return costf(red_cost_model, beta, d, predicate=predicate)
@@ -174,10 +203,8 @@ def __call__(
         """
         params = LWEParameters.normalize(params)
-        # allow for a larger embedding lattice dimension: Bai and Galbraith
         m = params.m + params.n if params.Xs <= params.Xe else params.m
-
         if red_shape_model == "gsa":
             with local_minimum(40, max(2 * params.n, 41), precision=5) as it:
                 for beta in it:
@@ -311,12 +338,17 @@ cost(
         delta = deltaf(beta)
         d = min(ceil(sqrt(params.n * log(params.q) / log(delta))), m) + 1
         d -= zeta
-        xi = PrimalUSVP._xi_factor(params.Xs, params.Xe)
+        xi = PrimalUSVP._xi_factor(params.Xs, params.Xe)
+        tau = 1
 
         # 1. Simulate BKZ-β
-        # TODO: pick τ
+        # TODO: pick τ as non-default value
+
+        if params._homogeneous:
+            tau = False
+            d -= 1
 
-        r = simulator(d, params.n - zeta, params.q, beta, xi=xi, dual=True)
+        r = simulator(d, params.n - zeta, params.q, beta, xi=xi, tau=tau, dual=True)
 
         bkz_cost = costf(red_cost_model, beta, d)
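The ``tau = False`` convention threads through ``_solve_for_d``, ``cost_gsa`` and ``cost_simulator``: homogeneous (NTRU-style) instances carry no Kannan embedding coordinate, so the τ term drops out of both the lattice volume and the success condition. Restated as a plain-Python predicate, as a reading aid for the branches above (not new functionality; ``log`` is the natural logarithm)::

    from math import log, sqrt

    def usvp_predicate(stddev, beta, d, n, q, delta, xi=1, tau=False):
        # success if the projected short vector beats the GSA estimate of b*_{d-beta}
        if not tau:  # homogeneous: no embedding coordinate
            lhs = log(sqrt(stddev**2 * (beta - 1)))
            rhs = log(delta)*(2*beta - d - 1) + (log(xi)*n + log(q)*(d - n - 1))/d
        else:        # inhomogeneous: embedding coordinate of length tau
            lhs = log(sqrt(stddev**2 * (beta - 1) + tau**2))
            rhs = log(delta)*(2*beta - d - 1) + (log(tau) + log(xi)*n + log(q)*(d - n - 1))/d
        return lhs <= rhs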
diff --git a/estimator/ntru.py b/estimator/ntru.py
new file mode 100644
index 0000000..d9cdc37
--- /dev/null
+++ b/estimator/ntru.py
@@ -0,0 +1,179 @@
+# -*- coding: utf-8 -*-
+"""
+High-level NTRU interface
+"""
+
+from functools import partial
+from sage.all import oo
+
+from .ntru_primal import primal_dsd, primal_usvp, primal_bdd, primal_hybrid
+from .lwe_bkw import coded_bkw  # noqa
+from .lwe_guess import exhaustive_search, mitm, distinguish, guess_composition  # noqa
+from .lwe_dual import dual, dual_hybrid  # noqa
+from .gb import arora_gb  # noqa
+from .ntru_parameters import NTRUParameters as Parameters  # noqa
+from .conf import (red_cost_model as red_cost_model_default,
+                   red_shape_model as red_shape_model_default)
+from .util import batch_estimate, f_name
+from .reduction import RC
+
+
+class Estimate:
+
+    def rough(self, params, jobs=1, catch_exceptions=True):
+        """
+        This function makes the following somewhat routine assumptions:
+
+        - The ZGSA holds.
+        - The Core-SVP model holds.
+
+        This function furthermore assumes the following heuristics:
+
+        - The primal hybrid attack only applies to sparse secrets.
+        - The dual hybrid MITM attack only applies to sparse secrets.
+        - The dense sublattice attack only applies to possibly overstretched parameters.
+
+        :param params: NTRU parameters.
+        :param jobs: Use multiple threads in parallel.
+        :param catch_exceptions: When an estimate fails, just print a warning.
+
+        EXAMPLE ::
+
+            >>> from estimator import *
+            >>> _ = NTRU.estimate.rough(schemes.NTRUHPS2048509Enc)
+            usvp                 :: rop: ≈2^109.2, red: ≈2^109.2, δ: 1.004171, β: 374, d: 643, tag: usvp
+
+        """
+        params = params.normalize()
+
+        algorithms = {}
+
+        # Only primal attacks apply to NTRU
+        algorithms["usvp"] = partial(primal_usvp, red_cost_model=RC.ADPS16, red_shape_model="zgsa")
+
+        if params.possibly_overstretched:
+            algorithms["dsd"] = partial(
+                primal_dsd, red_cost_model=RC.ADPS16, red_shape_model="zgsa"
+            )
+
+        if params.Xs.is_sparse:
+            algorithms["hybrid"] = partial(
+                primal_hybrid, red_cost_model=RC.ADPS16, red_shape_model="zgsa"
+            )
+
+        res_raw = batch_estimate(
+            params, algorithms.values(), log_level=1, jobs=jobs, catch_exceptions=catch_exceptions
+        )
+        res_raw = res_raw[params]
+        res = {
+            algorithm: v for algorithm, attack in algorithms.items()
+            for k, v in res_raw.items()
+            if f_name(attack) == k
+        }
+
+        for algorithm in algorithms:
+            if algorithm not in res:
+                continue
+            result = res[algorithm]
+            if result["rop"] != oo:
+                print(f"{algorithm:20s} :: {result!r}")
+
+        return res
+
+    def __call__(
+        self,
+        params,
+        red_cost_model=red_cost_model_default,
+        red_shape_model=red_shape_model_default,
+        deny_list=tuple(),
+        add_list=tuple(),
+        jobs=1,
+        catch_exceptions=True,
+    ):
+        """
+        Run all estimates.
+
+        :param params: NTRU parameters.
+        :param red_cost_model: How to cost lattice reduction.
+        :param red_shape_model: How to model the shape of a reduced basis (applies to primal attacks).
+        :param deny_list: Skip these algorithms.
+        :param add_list: Add these ``(name, function)`` pairs to the list of algorithms to estimate.
+        :param jobs: Use multiple threads in parallel.
+        :param catch_exceptions: When an estimate fails, just print a warning.
+
+        EXAMPLE ::
+
+            >>> from estimator import *
+            >>> _ = NTRU.estimate(schemes.NTRUHPS2048509Enc)
+            usvp                 :: rop: ≈2^134.5, red: ≈2^134.5, δ: 1.004179, β: 373, d: 923, tag: usvp
+            bdd                  :: rop: ≈2^130.9, red: ≈2^129.9, svp: ≈2^129.9, β: 356, η: 389, d: 917, tag: bdd
+            bdd_hybrid           :: rop: ≈2^130.9, red: ≈2^129.9, svp: ≈2^129.9, β: 356, η: 389, ζ: 0, |S|: 1, ...
+            bdd_mitm_hybrid      :: rop: ≈2^185.3, red: ≈2^184.5, svp: ≈2^184.2, β: 371, η: 2, ζ: 159, |S|: ≈2^228.0...
+
+            >>> params = NTRU.Parameters(n=113, q=512, Xs=ND.UniformMod(3), Xe=ND.UniformMod(3))
+            >>> _ = NTRU.estimate(params)
+            usvp                 :: rop: ≈2^46.0, red: ≈2^46.0, δ: 1.011516, β: 59, d: 221, tag: usvp
+            dsd                  :: rop: ≈2^37.9, red: ≈2^37.9, δ: 1.013310, β: 31, d: 226, tag: dsd
+            bdd                  :: rop: ≈2^42.4, red: ≈2^41.0, svp: ≈2^41.8, β: 41, η: 70, d: 225, tag: bdd
+            bdd_hybrid           :: rop: ≈2^42.4, red: ≈2^41.0, svp: ≈2^41.8, β: 41, η: 70, ζ: 0, |S|: 1, d: 226, ...
+            bdd_mitm_hybrid      :: rop: ≈2^55.6, red: ≈2^54.7, svp: ≈2^54.6, β: 41, η: 2, ζ: 32, |S|: ≈2^50.7, ...
+        """
+        params = params.normalize()
+
+        algorithms = {}
+
+        algorithms["usvp"] = partial(
+            primal_usvp, red_cost_model=red_cost_model, red_shape_model=red_shape_model
+        )
+        algorithms["dsd"] = partial(
+            primal_dsd, red_cost_model=red_cost_model, red_shape_model=red_shape_model
+        )
+
+        algorithms["bdd"] = partial(
+            primal_bdd, red_cost_model=red_cost_model, red_shape_model=red_shape_model
+        )
+        algorithms["bdd_hybrid"] = partial(
+            primal_hybrid,
+            mitm=False,
+            babai=False,
+            red_cost_model=red_cost_model,
+            red_shape_model=red_shape_model,
+        )
+        # we ignore the case of mitm=True, babai=False for now, due to it being overly optimistic
+        algorithms["bdd_mitm_hybrid"] = partial(
+            primal_hybrid,
+            mitm=True,
+            babai=True,
+            red_cost_model=red_cost_model,
+            red_shape_model=red_shape_model,
+        )
+
+        algorithms = {k: v for k, v in algorithms.items() if k not in deny_list}
+        algorithms.update(add_list)
+
+        res_raw = batch_estimate(
+            params, algorithms.values(), log_level=1, jobs=jobs, catch_exceptions=catch_exceptions
+        )
+        res_raw = res_raw[params]
+        res = {
+            algorithm: v
+            for algorithm, attack in algorithms.items()
+            for k, v in res_raw.items()
+            if f_name(attack) == k
+        }
+        for algorithm in algorithms:
+            if algorithm not in res:
+                continue
+            result = res[algorithm]
+            if result["rop"] == oo:
+                continue
+            if algorithm == "hybrid" and res["bdd"]["rop"] < result["rop"]:
+                continue
+            if algorithm == "dsd" and res["usvp"]["rop"] < result["rop"]:
+                continue
+            print(f"{algorithm:20s} :: {result!r}")
+
+        return res
+
+
+estimate = Estimate()
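``deny_list`` and ``add_list`` make the batch configurable; for example, the costly MITM hybrid can be skipped for a quicker pass (illustrative; any callable with a ``primal_*``-style signature can be supplied via ``add_list``)::

    >>> from estimator import *
    >>> _ = NTRU.estimate(schemes.NTRUHPS2048509Enc, deny_list=("bdd_mitm_hybrid",))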
diff --git a/estimator/ntru_parameters.py b/estimator/ntru_parameters.py
new file mode 100644
index 0000000..c36f12b
--- /dev/null
+++ b/estimator/ntru_parameters.py
@@ -0,0 +1,106 @@
+# -*- coding: utf-8 -*-
+from dataclasses import dataclass
+
+from .conf import ntru_fatigue_lb
+from .errors import InsufficientSamplesError
+from .lwe_parameters import LWEParameters
+
+
+@dataclass
+class NTRUParameters(LWEParameters):
+    """The parameters for an NTRU problem instance. The estimator treats regular NTRU parameters
+    like LWE, but overstretched parameters require a different estimation methodology.
+
+    :param ntru_type:
+        Specifies the type of NTRU instance the parameters represent. Currently supported
+        types are "matrix" for general matrix NTRU, "circulant" for circulant NTRU, and
+        "fixed" for circulant NTRU with a fixed geometry.
+
+    """
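``possibly_overstretched`` (defined below) is deliberately coarse. In the notation of [AC:DucWoe21]_, the fatigue point for ternary NTRU sits at

.. math::

    q^{*} = n^{2.484 + o(1)},

and the property tests the weaker condition :math:`q \ge n^{2.484}/e^{6}`, so the dense sublattice estimate is also run on instances somewhat below the asymptotic fatigue point.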
+
+    ntru_type: str = "matrix"
+
+    def __post_init__(self, **kwds):
+        super().__post_init__()
+        # set m = n
+        self.m = self.n
+
+    @property
+    def possibly_overstretched(self):
+        if self.q >= ntru_fatigue_lb(self.n):
+            return True
+
+        return False
+
+    @property
+    def _homogeneous(self):
+        return True
+
+    def normalize(self):
+        """
+        EXAMPLES:
+
+        We perform the normal form transformation if χ_e < χ_s and we got the samples;
+        for NTRU, m = n, so we swap the secret and the noise::
+
+            >>> from estimator import *
+            >>> Xs=ND.DiscreteGaussian(2.0)
+            >>> Xe=ND.DiscreteGaussian(1.58)
+            >>> NTRU.Parameters(n=512, q=8192, Xs=Xs, Xe=Xe, m=512).normalize()
+            NTRUParameters(n=512, q=8192, Xs=D(σ=1.58), Xe=D(σ=2.00), m=512, tag=None, ntru_type='matrix')
+
+        """
+        if self.m < 1:
+            raise InsufficientSamplesError(f"m={self.m} < 1")
+
+        # swap secret and noise
+        # TODO: this is somewhat arbitrary
+        if self.Xe < self.Xs and self.m < 2 * self.n:
+            return NTRUParameters(n=self.n, q=self.q, Xs=self.Xe, Xe=self.Xs, m=self.n,
+                                  tag=self.tag, ntru_type=self.ntru_type)
+
+        # nothing to do
+        return self
+
+    def updated(self, **kwds):
+        """
+        Return a new set of parameters updated according to ``kwds``.
+
+        :param kwds: We set ``key`` to ``value`` in the new set of parameters.
+
+        EXAMPLE::
+
+            >>> from estimator import *
+            >>> schemes.NTRUHPS2048509Enc  # doctest: +ELLIPSIS
+            NTRUParameters(n=508, q=2048, Xs=D(σ=0.82), Xe=D(σ=0.71), m=508, tag='NTRUHPS2048509Enc', ntru_type='ma...
+            >>> schemes.NTRUHPS2048509Enc.possibly_overstretched
+            False
+
+            >>> schemes.NTRUHPS2048509Enc.updated(q=16536)  # doctest: +ELLIPSIS
+            NTRUParameters(n=508, q=16536, Xs=D(σ=0.82), Xe=D(σ=0.71), ..., tag='NTRUHPS2048509Enc', ntru_type='matrix')
+            >>> schemes.NTRUHPS2048509Enc.updated(q=16536).possibly_overstretched
+            True
+        """
+        d = dict(self.__dict__)
+        d.update(kwds)
+        return NTRUParameters(**d)
+
+    def amplify_m(self, m):
+        """
+        Return NTRU parameters with ``m`` samples produced from the samples in this instance.
+
+        :param m: New number of samples.
+
+        """
+        raise NotImplementedError("Rerandomizing NTRU instances is not supported yet.")
+
+    def switch_modulus(self):
+        """
+        Apply modulus switching and return a new instance.
+
+        See [JMC:AlbPlaSco15]_ for details.
+        """
+        raise NotImplementedError("Modulus switching for NTRU is not supported yet.")
+
+    def __hash__(self):
+        return hash((self.n, self.q, self.Xs, self.Xe, self.m, self.tag, self.ntru_type))
diff --git a/estimator/ntru_primal.py b/estimator/ntru_primal.py
new file mode 100644
index 0000000..4b32aa2
--- /dev/null
+++ b/estimator/ntru_primal.py
@@ -0,0 +1,426 @@
+# -*- coding: utf-8 -*-
+"""
+Estimate cost of solving NTRU using primal attacks.
+
+See :ref:`LWE Primal Attacks` for an introduction to what is available.
+
+"""
+from sage.all import oo, log, RR, cached_function, exp, pi, floor, euler_gamma
+from math import lgamma
+from scipy.special import digamma
+from .reduction import cost as costf
+from .util import zeta_precomputed, zeta_prime_precomputed, gh_constant
+from .lwe_primal import PrimalUSVP, PrimalHybrid
+from .ntru_parameters import NTRUParameters
+from .simulator import normalize as simulator_normalize
+from .prob import conditional_chi_squared, chisquared_table
+from .io import Logging
+from .conf import red_cost_model as red_cost_model_default
+from .conf import red_shape_model as red_shape_model_default
+from .conf import max_n_cache
+
+
+class PrimalDSD():
+    """
+    Estimate cost of solving (overstretched) NTRU via dense sublattice discovery.
+    """
+    @staticmethod
+    @cached_function
+    def ball_log_vol(n):
+        return RR((n/2.)) * RR(log(pi)) - RR(lgamma(n/2. + 1))
+
+    @staticmethod
+    def log_gh(d, logvol=0):
+        if d < 49:
+            return RR(gh_constant[d]) + RR(logvol)/d
+
+        return RR(1./d) * RR(logvol - PrimalDSD.ball_log_vol(d))
+
+    @staticmethod
+    def DSL_logvol_matrix(n, sigmasq):
+        total = n*(RR(log(sigmasq)) + RR(log(2.)) + RR(digamma(n)))/2.
+        proj_loss = sum([(digamma((2*n-i)/2.) - digamma(n)) for i in range(n)])/2.
+        return total + proj_loss
+
+    @staticmethod
+    def DSL_logvol_circulant(n, sigmasq):
+        lambda0 = RR((log(2) - euler_gamma + log(n) + log(sigmasq))/2.)
+        lambdai = RR((n - 1)*(1 - euler_gamma + log(n) + log(sigmasq))/2.)
+        return lambda0 + lambdai
+
+    @staticmethod
+    def DSL_logvol_circulant_fixed(n, R):
+        lambda0 = RR((-euler_gamma + log(R))/2.)
+        lambdai = RR((n - 1)*(1 - euler_gamma + log(R) - log(2))/2.)
+        return lambda0 + lambdai
+
+    @staticmethod
+    @cached_function
+    def DSL_logvol(n, sigmasq, ntru="circulant"):
+        if ntru == "matrix":
+            return PrimalDSD.DSL_logvol_matrix(n, sigmasq)
+        if ntru == "circulant":
+            return PrimalDSD.DSL_logvol_circulant(n, sigmasq)
+        if ntru == "fixed":
+            return PrimalDSD.DSL_logvol_circulant_fixed(n, sigmasq)
+
+        raise ValueError(f"NTRU type: {ntru} is not supported.")
+
+    @staticmethod
+    @cached_function
+    def proj_logloss(d, k):
+        # log loss of length when projecting out k dimensions out of d
+        return (RR(digamma((d-k)/2.)) - RR(digamma(d/2.)))/2.
+
+    @staticmethod
+    def DSLI_vols(dsl_logvol, FL_shape):
+        n = len(FL_shape)//2
+        vols = (2*n+1)*[None]
+
+        dsl_dim = n
+        vols[2*n] = dsl_logvol
+
+        # Going to an intersection of dimension s
+        for s in range(2*n-1, n, -1):
+            # Negate, because it is really a dual vector
+            x = -FL_shape[s]
+            x += PrimalDSD.proj_logloss(s+1, n)
+            x += zeta_prime_precomputed[dsl_dim]/zeta_precomputed[dsl_dim]  # primitivity
+            dsl_logvol += x
+            vols[s] = dsl_logvol
+            dsl_dim -= 1
+
+        assert dsl_dim == 1
+        assert s == n+1
+
+        return vols
+
+    @staticmethod
+    @cached_function
+    def prob_dsd(
+        beta: int,
+        params: NTRUParameters,
+        simulator,
+        m: int = oo,
+        tau=None,
+        d=None,
+        dsl_logvol=None,
+        red_cost_model=red_cost_model_default,
+        log_level=None,
+    ):
+
+        if d is None:
+            d = m
+
+        xi = PrimalUSVP._xi_factor(params.Xs, params.Xe)
+        if dsl_logvol is None:
+            dsl_logvol = PrimalDSD.DSL_logvol(params.n, params.Xs.stddev**2, ntru=params.ntru_type)
+
+        B_shape = [log(r_)/2 for r_ in simulator(d, params.n, params.q, beta, xi=xi, tau=tau)]
+        dsli_vols = PrimalDSD.DSLI_vols(dsl_logvol, B_shape)
+        prob_all_not = RR(1.)
+        prob_pos = (2*params.n)*[RR(0)]
+        for i in range(1, params.n+1):
+            s = params.n + i
+
+            dslv_len = PrimalDSD.log_gh(i, dsli_vols[s])
+            sigma_sq = exp(2*dslv_len)/s
+
+            if sigma_sq > 10**10:
+                prob_pos[s-beta] = 0.
+                continue
+
+            norm_threshold = exp(2*(B_shape[s-beta]))/sigma_sq
+            proba_one = chisquared_table[beta].cum_distribution_function(norm_threshold)
+
+            if proba_one <= 10e-8:
+                continue
+
+            # account for pulling-back probability if beta is small
+            if beta <= 20:
+                for j in range(2, int(s/beta+1)):
+                    if proba_one < 10**(-6):
+                        proba_one = 0.
+                        break
+                    ind = s - j*(beta-1) - 1
+                    norm_bt = exp(2*B_shape[ind])/sigma_sq
+                    norm_b2 = exp(2*B_shape[ind+beta-1])/sigma_sq
+                    proba_one *= conditional_chi_squared(beta-1, s-ind-(beta-1), norm_bt, norm_b2)
+
+            prob_pos[s-beta] = proba_one
+            prob_all_not *= max(1.-proba_one, 0.)
+            Logging.log("dsd", log_level+1, f"Pr[dsd, {beta}] = {prob_all_not}")
+
+        return RR(1.-prob_all_not), prob_pos
+
+    def __call__(
+        self,
+        params: NTRUParameters,
+        red_cost_model=red_cost_model_default,
+        red_shape_model=red_shape_model_default,
+        log_level=1,
+        **kwds,
+    ):
+        """
+        Estimate the cost of solving (overstretched) NTRU via dense sublattice discovery.
+        The code is adapted from the NTRU fatigue estimator by Léo Ducas and Wessel van Woerden;
+        see https://github.com/WvanWoerden/NTRUFatigue for the original source.
+
+        :param params: NTRU parameters.
+        :param red_cost_model: How to cost lattice reduction.
+        :param red_shape_model: How to model the shape of a reduced basis.
+        :return: A cost dictionary.
+
+        The returned cost dictionary has the following entries:
+
+        - ``rop``: Total number of word operations (≈ CPU cycles).
+        - ``red``: Number of word operations in lattice reduction.
+        - ``δ``: Root-Hermite factor targeted by lattice reduction.
+        - ``β``: BKZ block size.
+        - ``d``: Lattice dimension.
+
+        EXAMPLE::
+
+            >>> from estimator import *
+            >>> NTRU.primal_dsd(schemes.NTRUHPS2048509Enc)
+            rop: ≈2^157.1, red: ≈2^157.1, δ: 1.003645, β: 453, d: 1016, tag: dsd
+
+            >>> params = NTRU.Parameters(n=113, q=512, Xs=ND.UniformMod(3), Xe=ND.UniformMod(3))
+            >>> NTRU.primal_dsd(params, red_shape_model="zgsa")
+            rop: ≈2^41.3, red: ≈2^41.3, δ: 1.012468, β: 42, d: 226, tag: dsd
+
+            >>> NTRU.primal_dsd(params, red_shape_model="cn11")
+            rop: ≈2^41.2, red: ≈2^41.2, δ: 1.012468, β: 41, d: 226, tag: dsd
+
+            >>> NTRU.primal_dsd(params, red_shape_model=Simulator.CN11)
+            rop: ≈2^41.2, red: ≈2^41.2, δ: 1.012468, β: 41, d: 226, tag: dsd
+
+        The success condition was formulated in [EC:KirFou17]_ and further refined in [AC:DucWoe21]_.
+
+        .. note :: Non-overstretched parameters (where the probability of dense sublattice
+           discovery is 0) will return β = d.
+        """
+        params = NTRUParameters.normalize(params)
+        # allow for a larger embedding lattice dimension: Bai and Galbraith
+        m = params.m + params.n if params.Xs <= params.Xe else params.m
+
+        try:
+            red_shape_model = simulator_normalize(red_shape_model)
+        except ValueError:
+            pass
+
+        if params.n > max_n_cache:
+            raise ValueError("Please increase the hardcoded value of max_n_cache to run the predictor for such large n.")
+
+        remaining_proba = RR(1.)
+        average_beta = RR(0.)
+        total_DSD_prob = RR(0.)
+        DSD_prob = RR(0.)
+        prob_pos_total = (2*params.n)*[RR(0.)]
+
+        for beta in range(2, params.n):
+            tours = floor(params.n**2 / beta**2) + 3
+
+            DSD_prob, DSD_prob_pos = self.prob_dsd(beta, params, red_shape_model, m=m,
+                                                   red_cost_model=red_cost_model, log_level=log_level)
+
+            if DSD_prob > 10e-8:
+                for t in range(tours):
+                    for i in range(2*params.n):
+                        prob_pos = DSD_prob_pos[i]
+                        average_beta += RR(beta) * remaining_proba * prob_pos
+                        prob_pos_total[i] += remaining_proba * prob_pos
+                        total_DSD_prob += remaining_proba * prob_pos
+                        remaining_proba *= (1. - prob_pos)
+
+                Logging.log("dsd", log_level+1, "β= %d,\t pr=%.4e, \t rem-pr=%.4e" % (beta, DSD_prob, remaining_proba))
+            if remaining_proba < 0.001:
+                average_beta += beta * remaining_proba
+                break
+
+        if not average_beta:
+            average_beta = m
+            predicate = False
+
+        else:
+            predicate = True
+
+        cost = costf(red_cost_model, RR(average_beta), m, predicate=predicate)
+        cost["tag"] = "dsd"
+        cost["problem"] = params
+        return cost.sanity_check()
+
+    __name__ = "primal_dsd"
+
+
+primal_dsd = PrimalDSD()
+
+
+class NTRUPrimalUSVP(PrimalUSVP):
+
+    def __call__(
+        self,
+        params: NTRUParameters,
+        red_cost_model=red_cost_model_default,
+        red_shape_model=red_shape_model_default,
+        optimize_d=True,
+        log_level=1,
+        **kwds,
+    ):
+        """
+        Estimate cost of solving NTRU via uSVP reduction.
+
+        :param params: NTRU parameters.
+        :param red_cost_model: How to cost lattice reduction.
+        :param red_shape_model: How to model the shape of a reduced basis.
+        :param optimize_d: Attempt to find minimal d, too.
+        :return: A cost dictionary.
+
+        The returned cost dictionary has the following entries:
+
+        - ``rop``: Total number of word operations (≈ CPU cycles).
+        - ``red``: Number of word operations in lattice reduction.
+        - ``δ``: Root-Hermite factor targeted by lattice reduction.
+        - ``β``: BKZ block size.
+        - ``d``: Lattice dimension.
+
+        EXAMPLE::
+
+            >>> from estimator import *
+            >>> NTRU.primal_usvp(schemes.NTRUHPS2048509Enc)
+            rop: ≈2^134.6, red: ≈2^134.6, δ: 1.004179, β: 373, d: 929, tag: usvp
+
+            >>> params = NTRU.Parameters(n=200, q=127, Xs=ND.UniformMod(3), Xe=ND.UniformMod(3))
+            >>> NTRU.primal_usvp(params, red_shape_model="cn11")
+            rop: ≈2^87.2, red: ≈2^87.2, δ: 1.006132, β: 208, d: 374, tag: usvp
+
+            >>> NTRU.primal_usvp(params, red_shape_model=Simulator.CN11)
+            rop: ≈2^87.2, red: ≈2^87.2, δ: 1.006132, β: 208, d: 374, tag: usvp
+
+            >>> NTRU.primal_usvp(params, red_shape_model=Simulator.CN11, optimize_d=False)
+            rop: ≈2^87.4, red: ≈2^87.4, δ: 1.006132, β: 208, d: 399, tag: usvp
+
+        The success condition was formulated in [USENIX:ADPS16]_ and studied/verified in
+        [AC:AGVW17]_, [C:DDGR20]_, [PKC:PosVir21]_. The treatment of small secrets is from
+        [ACISP:BaiGal14]_.
+
+        """
+        return super().__call__(params,
+                                red_cost_model=red_cost_model,
+                                red_shape_model=red_shape_model,
+                                optimize_d=optimize_d,
+                                log_level=log_level,
+                                **kwds)
+
+
+primal_usvp = NTRUPrimalUSVP()
+
+
+class NTRUPrimalHybrid(PrimalHybrid):
+
+    def __call__(
+        self,
+        params: NTRUParameters,
+        babai: bool = True,
+        zeta: int = None,
+        mitm: bool = True,
+        red_shape_model=red_shape_model_default,
+        red_cost_model=red_cost_model_default,
+        log_level=1,
+        **kwds,
+    ):
+        """
+        Estimate the cost of the hybrid attack and its variants.
+
+        :param params: NTRU parameters.
+        :param zeta: Guessing dimension ζ ≥ 0.
+        :param babai: Insist on Babai's algorithm for finding close vectors.
+        :param mitm: Simulate MITM approach (√ of search space).
+        :return: A cost dictionary.
+
+        The returned cost dictionary has the following entries:
+
+        - ``rop``: Total number of word operations (≈ CPU cycles).
+        - ``red``: Number of word operations in lattice reduction.
+        - ``δ``: Root-Hermite factor targeted by lattice reduction.
+        - ``β``: BKZ block size.
+        - ``η``: Dimension of the final BDD call.
+        - ``ζ``: Number of guessed coordinates.
+        - ``|S|``: Guessing search space.
+        - ``prob``: Probability of success in guessing.
+        - ``repeat``: How often to repeat the attack.
+        - ``d``: Lattice dimension.
+
+        - When ζ = 0 this function essentially estimates the BDD strategy as given in [RSA:LiuNgu13]_.
+        - When ζ ≠ 0 and ``babai=True`` this function estimates the hybrid attack as given in
+          [C:HowgraveGraham07]_.
+        - When ζ ≠ 0 and ``babai=False`` this function estimates the hybrid attack as given in
+          [SAC:AlbCurWun19]_.
+
+        EXAMPLES::
+
+            >>> from estimator import *
+            >>> NTRU.primal_hybrid(schemes.NTRUHPS2048509Enc.updated(Xs=ND.SparseTernary(508,16)),
+            ...                    mitm=False, babai=False)
+            rop: ≈2^87.8, red: ≈2^87.0, svp: ≈2^86.6, β: 116, η: 21, ζ: 302, |S|: ≈2^39.2, d: 372, prob: ≈2^-22.3, ...
+
+            >>> NTRU.primal_hybrid(schemes.NTRUHPS2048509Enc.updated(Xs=ND.SparseTernary(508,16)),
+            ...                    mitm=False, babai=True)
+            rop: ≈2^88.0, red: ≈2^87.4, svp: ≈2^86.4, β: 98, η: 2, ζ: 318, |S|: ≈2^39.6, d: 328, prob: ≈2^-27.9, ...
+
+            >>> NTRU.primal_hybrid(schemes.NTRUHPS2048509Enc.updated(Xs=ND.SparseTernary(508,16)),
+            ...                    mitm=True, babai=False)
+            rop: ≈2^80.1, red: ≈2^79.7, svp: ≈2^78.3, β: 170, η: 22, ζ: 254, |S|: ≈2^103.7, d: 495, prob: 0.708, ...
+
+            >>> NTRU.primal_hybrid(schemes.NTRUHPS2048509Enc.updated(Xs=ND.SparseTernary(508,16)),
+            ...                    mitm=True, babai=True)
+            rop: ≈2^85.1, red: ≈2^84.1, svp: ≈2^84.0, β: 105, η: 2, ζ: 363, |S|: ≈2^85.0, d: 294, prob: ≈2^-22.9, ...
+
+        TESTS:
+
+        We test a trivial instance::
+
+            >>> params = NTRU.Parameters(2**10, 2**100, ND.DiscreteGaussian(3.19), ND.DiscreteGaussian(3.19))
+            >>> NTRU.primal_bdd(params)
+            rop: ≈2^43.6, red: ≈2^43.6, svp: ≈2^35.0, β: 40, η: 46, d: 1461, tag: bdd
+
+        """
+        return super().__call__(params,
+                                babai=babai,
+                                zeta=zeta,
+                                mitm=mitm,
+                                red_shape_model=red_shape_model,
+                                red_cost_model=red_cost_model,
+                                log_level=log_level,
+                                **kwds)
+
+
+primal_hybrid = NTRUPrimalHybrid()
+
+
+def primal_bdd(
+    params: NTRUParameters,
+    red_shape_model=red_shape_model_default,
+    red_cost_model=red_cost_model_default,
+    log_level=1,
+    **kwds,
+):
+    """
+    Estimate the cost of the BDD approach as given in [RSA:LiuNgu13]_.
+
+    :param params: NTRU parameters.
+    :param red_cost_model: How to cost lattice reduction.
+    :param red_shape_model: How to model the shape of a reduced basis.
+
+    """
+
+    return primal_hybrid(
+        params,
+        zeta=0,
+        mitm=False,
+        babai=False,
+        red_shape_model=red_shape_model,
+        red_cost_model=red_cost_model,
+        log_level=log_level,
+        **kwds,
+    )
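Note that ``primal_dsd`` reports an expected block size: β is accumulated over tours, weighted by the per-tour probability that a dense sublattice vector is found. One way to see the fatigue effect is to sweep q (illustrative; runtimes grow quickly with n). At n=113 the fatigue bound already sits below q=512, so per the doctests above dsd beats usvp there, and the gap widens as q grows::

    >>> from estimator import *
    >>> for logq in (9, 11, 13, 15):
    ...     params = NTRU.Parameters(n=113, q=2**logq, Xs=ND.UniformMod(3), Xe=ND.UniformMod(3))
    ...     _ = print(logq, NTRU.primal_usvp(params)["rop"], NTRU.primal_dsd(params)["rop"])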
diff --git a/estimator/prob.py b/estimator/prob.py
index 3ffca9e..32e8754 100644
--- a/estimator/prob.py
+++ b/estimator/prob.py
@@ -2,6 +2,66 @@
 from sage.all import binomial, ZZ, log, ceil, RealField, oo, exp, pi
 from sage.all import RealDistribution, RR, sqrt, prod, erf
 from .nd import sigmaf
+from .conf import max_n_cache
+
+
+chisquared_table = {i: None for i in range(2*max_n_cache+1)}
+for i in range(2*max_n_cache+1):
+    chisquared_table[i] = RealDistribution('chisquared', i)
+
+
+def conditional_chi_squared(d1, d2, lt, l2):
+    """
+    Probability that a Gaussian sample (var=1) of dim d1+d2 has length at most
+    lt, knowing that the first d2 coordinates have length at most l2.
+
+    :param d1: Dimension of non-length-bounded coordinates.
+    :param d2: Dimension of length-bounded coordinates.
+    :param lt: Length threshold (maximum length of the whole vector).
+    :param l2: Length threshold for the first d2 coordinates.
+
+    EXAMPLE::
+
+        >>> from estimator import prob
+        >>> prob.conditional_chi_squared(100, 5, 105, 1)
+        0.6358492948586715
+
+        >>> prob.conditional_chi_squared(100, 5, 105, 5)
+        0.5764336909205551
+
+        >>> prob.conditional_chi_squared(100, 5, 105, 10)
+        0.5351747076352109
+
+        >>> prob.conditional_chi_squared(100, 5, 50, 10)
+        1.1707597206287592e-06
+
+        >>> prob.conditional_chi_squared(100, 5, 50, .7)
+        5.4021875103989546e-06
+    """
+    D1 = chisquared_table[d1].cum_distribution_function
+    D2 = chisquared_table[d2].cum_distribution_function
+    l2 = RR(l2)
+
+    PE2 = D2(l2)
+    # In large dimension we can get underflow leading to NaN; when this happens,
+    # raise rather than silently mis-estimating security.
+    if PE2 == 0:
+        raise ValueError("Numerical underflow in conditional_chi_squared")
+
+    steps = 5 * (d1 + d2)
+
+    # Numerical computation of the integral
+    proba = 0.
+    for i in range(steps)[::-1]:
+        l2_min = i * l2 / steps
+        l2_mid = (i + .5) * l2 / steps
+        l2_max = (i + 1) * l2 / steps
+
+        PC2 = (D2(l2_max) - D2(l2_min)) / PE2
+        PE1 = D1(lt - l2_mid)
+
+        proba += PC2 * PE1
+
+    return proba
 
 
 def mitm_babai_probability(r, stddev, q, fast=False):
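``conditional_chi_squared`` discretizes the conditional integral into ``5*(d1+d2)`` slices. It can be cross-checked without Sage by direct simulation; a minimal Monte Carlo sketch (numpy only, not part of the estimator)::

    import numpy as np

    def conditional_chi_squared_mc(d1, d2, lt, l2, trials=100_000, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.standard_normal((trials, d1 + d2))
        head = (x[:, :d2]**2).sum(axis=1)          # |first d2 coords|^2
        total = head + (x[:, d2:]**2).sum(axis=1)  # |whole vector|^2
        return (total[head <= l2] <= lt).mean()    # P[total <= lt | head <= l2]

    print(conditional_chi_squared_mc(100, 5, 105, 1))  # ≈ 0.636, cf. the doctest above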
diff --git a/estimator/schemes.py b/estimator/schemes.py
index af808bd..882706e 100644
--- a/estimator/schemes.py
+++ b/estimator/schemes.py
@@ -1,5 +1,6 @@
 from .nd import NoiseDistribution, stddevf
 from .lwe_parameters import LWEParameters
+from .ntru_parameters import NTRUParameters
 
 # NIST PQC Round 3 Finalists
@@ -77,7 +78,7 @@
     tag="FireSaber",
 )
 
-NTRUHPS2048509Enc = LWEParameters(
+NTRUHPS2048509Enc = NTRUParameters(
     n=508,
     q=2048,
     Xe=NoiseDistribution.SparseTernary(508, 2048 / 16 - 1),
@@ -86,7 +87,7 @@
     tag="NTRUHPS2048509Enc",
 )
 
-NTRUHPS2048677Enc = LWEParameters(
+NTRUHPS2048677Enc = NTRUParameters(
     n=676,
     q=2048,
     Xs=NoiseDistribution.UniformMod(3),
@@ -95,7 +96,7 @@
     tag="NTRUHPS2048677Enc",
 )
 
-NTRUHPS4096821Enc = LWEParameters(
+NTRUHPS4096821Enc = NTRUParameters(
     n=820,
     q=4096,
     Xs=NoiseDistribution.UniformMod(3),
@@ -104,7 +105,7 @@
     tag="NTRUHPS4096821Enc",
 )
 
-NTRUHRSS701Enc = LWEParameters(
+NTRUHRSS701Enc = NTRUParameters(
     n=700,
     q=8192,
     Xs=NoiseDistribution.UniformMod(3),
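Further NTRU schemes follow the same pattern; for reference, a hypothetical entry (all values illustrative, not a real parameter set)::

    MyNTRU = NTRUParameters(
        n=700,
        q=4096,
        Xs=NoiseDistribution.UniformMod(3),
        Xe=NoiseDistribution.SparseTernary(700, 128),
        m=700,
        tag="MyNTRU",
        ntru_type="circulant",  # "matrix" (default), "circulant" or "fixed"
    )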
diff --git a/estimator/simulator.py b/estimator/simulator.py
index 4f3e9d3..e2bf80f 100644
--- a/estimator/simulator.py
+++ b/estimator/simulator.py
@@ -17,7 +17,7 @@
 The last row is optional.
 """
 
-from sage.all import RR, log, line
+from sage.all import RR, log, line, cached_function, pi, exp
 
 
 def qary_simulator(f, d, n, q, beta, xi=1, tau=1, dual=False):
@@ -33,7 +33,7 @@ def qary_simulator(f, d, n, q, beta, xi=1, tau=1, dual=False):
     :param dual: perform reduction on the dual.
 
     """
-    if tau is None:
+    if not tau:
         r = [q**2] * (d - n) + [xi**2] * n
     else:
         r = [q**2] * (d - n - 1) + [xi**2] * n + [tau**2]
@@ -91,7 +91,7 @@ def GSA(d, n, q, beta, xi=1, tau=1, dual=False):
     """
     from .reduction import delta as deltaf
 
-    if tau is None:
+    if not tau:
         log_vol = RR(log(q, 2) * (d - n) + log(xi, 2) * n)
     else:
         log_vol = RR(log(q, 2) * (d - n - 1) + log(xi, 2) * n + log(tau, 2))
@@ -102,6 +102,118 @@ def GSA(d, n, q, beta, xi=1, tau=1, dual=False):
     return r
 
 
+def ZGSA(d, n, q, beta, xi=1, tau=1, dual=False):
+    """
+    Reduced lattice Z-shape following the Geometric Series Assumption, as specified
+    in the NTRU fatigue paper [AC:DucWoe21]_.
+
+    :param d: Lattice dimension.
+    :param n: The number of `q` vectors is `d-n`.
+    :param q: Modulus `q`.
+    :param beta: Block size β.
+    :param xi: Scaling factor ξ for identity part.
+    :param dual: ignored; the Z-shape is self-dual, so applying it to the dual is
+        equivalent to applying it to the primal.
+    :returns: Squared Gram-Schmidt norms.
+
+    EXAMPLES::
+
+        >>> from estimator.simulator import GSA, ZGSA, CN11
+        >>> n = 128
+        >>> d = 213
+        >>> q = 2048
+        >>> beta = 40
+        >>> xi = 1
+        >>> tau = 1
+        >>> zgsa_profile = ZGSA(d, n, q, beta, xi, tau)
+        >>> len(zgsa_profile)
+        214
+
+    Setting tau to False indicates a homogeneous instance::
+
+        >>> tau = False
+        >>> zgsa_profile = ZGSA(d, n, q, beta, xi, tau)
+        >>> len(zgsa_profile)
+        213
+
+    All three profiles should have the same product (they represent the same lattice volume)::
+
+        >>> gsa_profile = GSA(d, n, q, beta, xi, tau)
+        >>> cn11_profile = CN11(d, n, q, beta, xi, tau)
+        >>> sum([log(x) for x in cn11_profile])
+        1296.1852276471009
+        >>> sum([log(x) for x in zgsa_profile])
+        1296.18522764710
+        >>> sum([log(x) for x in gsa_profile])
+        1296.18522764710
+
+    Changing xi will change the volume of the lattice::
+
+        >>> xi = 2
+        >>> gsa_profile = GSA(d, n, q, beta, xi, tau)
+        >>> zgsa_profile = ZGSA(d, n, q, beta, xi, tau)
+        >>> cn11_profile = CN11(d, n, q, beta, xi, tau)
+        >>> sum([log(x) for x in gsa_profile])
+        1473.63090587044
+        >>> sum([log(x) for x in zgsa_profile])
+        1473.63090587044
+        >>> sum([log(x) for x in cn11_profile])
+        1473.630905870442
+    """
+    from math import lgamma
+    from .util import gh_constant, small_slope_t8
+
+    @cached_function
+    def ball_log_vol(n):
+        return RR((n/2.) * log(pi) - lgamma(n/2. + 1))
+
+    def log_gh(d, logvol=0):
+        if d < 49:
+            return RR(gh_constant[d] + logvol/d)
+
+        return RR(1./d * (logvol - ball_log_vol(d)))
+
+    def delta(k):
+        assert k >= 60
+        delta = exp(log_gh(k)/(k-1))
+        return RR(delta)
+
+    @cached_function
+    def slope(beta):
+        if beta <= 60:
+            return small_slope_t8[beta]
+        if beta <= 70:
+            # interpolate between experimental and asymptotic values
+            ratio = (70 - beta)/10.
+            return ratio*small_slope_t8[60] + (1. - ratio)*2*log(delta(70))
+        else:
+            return 2 * log(delta(beta))
+
+    if not tau:
+        L_log = (d - n)*[RR(log(q))] + n * [RR(log(xi))]
+    else:
+        L_log = (d - n)*[RR(log(q))] + n * [RR(log(xi))] + [RR(log(tau))]
+
+    slope_ = slope(beta)
+    diff = slope(beta)/2.
+
+    for i in range(d - n):
+        if diff > (RR(log(q)) - RR(log(xi)))/2.:
+            break
+
+        low = (d - n) - i - 1
+        high = (d - n) + i
+        if low >= 0:
+            L_log[low] = (RR(log(q)) + RR(log(xi)))/2. + diff
+
+        if high < len(L_log):
+            L_log[high] = (RR(log(q)) + RR(log(xi)))/2. - diff
+
+        diff += slope_
+
+    # Output basis profile as squared lengths, not ln(length)
+    L = [exp(2 * l_) for l_ in L_log]
+    return L
+
+
 def normalize(name):
     if str(name).upper() == "CN11":
         return CN11
@@ -109,6 +221,9 @@ def normalize(name):
     if str(name).upper() == "GSA":
         return GSA
 
+    if str(name).upper() == "ZGSA":
+        return ZGSA
+
     return name
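The Z-shape produced by ``ZGSA`` is the GSA line passing through log(q)/2 at the q-vector/unit-vector transition, clamped to the two plateaus. A standalone sketch of the log-profile (plain Python; the slope value is illustrative, the real one comes from ``slope(beta)``)::

    from math import log

    def zshape_log(d, n, logq, slope):
        # line through logq/2 at index d-n-0.5, clamped to [0, logq]
        center = d - n - 0.5
        return [min(logq, max(0.0, logq/2 + (center - i)*slope)) for i in range(d)]

    profile = zshape_log(d=213, n=128, logq=log(2048), slope=0.05)
    print(sum(profile), (213 - 128)*log(2048))  # volume is preserved when xi=1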
diff --git a/estimator/util.py b/estimator/util.py
index 1ed89b7..149441b 100644
--- a/estimator/util.py
+++ b/estimator/util.py
@@ -1,19 +1,64 @@
 import itertools as it
 from multiprocessing import Pool
 from functools import partial
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from typing import Any, Callable, NamedTuple
 
-from sage.all import ceil, floor, log, oo
+from sage.all import ceil, floor, log, oo, RR, cached_function, zeta
 
 from .io import Logging
 from .lwe_parameters import LWEParameters
+from .conf import max_n_cache
 
 
 def log2(x):
     return log(x, 2.0)
 
 
+@cached_function
+def zeta_prime(x):
+    # central finite-difference approximation to ζ'(x)
+    h = 1e-5
+    return RR((zeta(x+h) - zeta(x-h)))/(2*h)
+
+
+# Low-beta Gaussian heuristic constants for use in NTRU dense sublattice estimation.
+gh_constant = {1: 0.00000, 2: -0.50511, 3: -0.46488, 4: -0.39100, 5: -0.29759, 6: -0.24880, 7: -0.21970, 8: -0.15748,
+               9: -0.14673, 10: -0.07541, 11: -0.04870, 12: -0.01045, 13: 0.02298, 14: 0.04212, 15: 0.07014,
+               16: 0.09205, 17: 0.12004, 18: 0.14988, 19: 0.17351, 20: 0.18659, 21: 0.20971, 22: 0.22728, 23: 0.24951,
+               24: 0.26313, 25: 0.27662, 26: 0.29430, 27: 0.31399, 28: 0.32494, 29: 0.34796, 30: 0.36118, 31: 0.37531,
+               32: 0.39056, 33: 0.39958, 34: 0.41473, 35: 0.42560, 36: 0.44222, 37: 0.45396, 38: 0.46275, 39: 0.47550,
+               40: 0.48889, 41: 0.50009, 42: 0.51312, 43: 0.52463, 44: 0.52903, 45: 0.53930, 46: 0.55289, 47: 0.56343,
+               48: 0.57204, 49: 0.58184, 50: 0.58852}
+
+# Low-beta α_β quantity as defined in [AC:DucWoe21]_ for use in NTRU dense sublattice estimation.
+small_slope_t8 = {2: 0.04473, 3: 0.04472, 4: 0.04402, 5: 0.04407, 6: 0.04334, 7: 0.04326, 8: 0.04218, 9: 0.04237,
+                  10: 0.04144, 11: 0.04054, 12: 0.03961, 13: 0.03862, 14: 0.03745, 15: 0.03673, 16: 0.03585,
+                  17: 0.03477, 18: 0.03378, 19: 0.03298, 20: 0.03222, 21: 0.03155, 22: 0.03088, 23: 0.03029,
+                  24: 0.02999, 25: 0.02954, 26: 0.02922, 27: 0.02891, 28: 0.02878, 29: 0.02850, 30: 0.02827,
+                  31: 0.02801, 32: 0.02786, 33: 0.02761, 34: 0.02768, 35: 0.02744, 36: 0.02728, 37: 0.02713,
+                  38: 0.02689, 39: 0.02678, 40: 0.02671, 41: 0.02647, 42: 0.02634, 43: 0.02614, 44: 0.02595,
+                  45: 0.02583, 46: 0.02559, 47: 0.02534, 48: 0.02514, 49: 0.02506, 50: 0.02493, 51: 0.02475,
+                  52: 0.02454, 53: 0.02441, 54: 0.02427, 55: 0.02407, 56: 0.02393, 57: 0.02371, 58: 0.02366,
+                  59: 0.02341, 60: 0.02332}
+
+
+@dataclass
+class LazyEvaluation:
+    f: Callable
+    max_n_cache: int
+    eval: list = field(default_factory=lambda: [])
+
+    def __getitem__(self, key):
+        if not self.eval:
+            self.eval = [self.f(i) for i in range(self.max_n_cache + 1)]
+
+        return self.eval[key]
+
+
+zeta_precomputed = LazyEvaluation(lambda i: RR(zeta(i)) if i != 1 else RR(oo), max_n_cache)
+zeta_prime_precomputed = LazyEvaluation(zeta_prime, max_n_cache)
+
+
 class Bounds(NamedTuple):
     low: Any
     high: Any
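``zeta_prime`` is a central finite difference; ``LazyEvaluation`` then memoizes ζ(i) and ζ'(i) for all i ≤ ``max_n_cache`` on first access, so the primitivity correction in ``DSLI_vols`` costs a table lookup. The derivative trick in isolation (plain Python; mpmath users can compare against ``mpmath.zeta(x, derivative=1)``)::

    import math

    def derivative_central(f, x, h=1e-5):
        # symmetric difference quotient, O(h^2) error
        return (f(x + h) - f(x - h)) / (2*h)

    print(derivative_central(math.sin, 0.0))  # ≈ 1.0 == cos(0)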