On public key encryption from noisy codewords

Eli Ben-Sasson, Iddo Ben-Tov, Ivan Damgård, Yuval Ishai, Noga Ron-Zewi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, or alternatively they can work over the binary field but have a low noise entropy that gives rise to sub-exponential attacks. Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions. Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we study an abstract encryption scheme that unifies all three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions. Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the “approximate duality conjecture” from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time (Formula presented.), where n is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code). On the flip side, counterexamples to the above conjecture (if it is false) may lead to candidate public key encryption schemes with improved security guarantees. We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be unconditionally attacked in time (Formula presented.), where n is the ciphertext size.
Combining this attack with the security proof of Regev’s cryptosystem, we immediately obtain an algorithm that solves the learning parity with noise (LPN) problem in time (Formula presented.) using only n^{1+ε} samples, reproducing the result of Lyubashevsky (Random 2005) in a conceptually different way. Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families (Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010), we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.
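To make the underlying hardness assumption concrete, the following is a minimal illustrative sketch (not taken from the paper) of the "noisy linear encoding" at the heart of LPN: a secret s, a public random matrix A over GF(2), and labels b = A·s + e where e is Bernoulli noise. All names and parameter values here are hypothetical choices for illustration.

```python
import numpy as np

def lpn_samples(n=32, m=64, tau=0.125, seed=0):
    """Generate m LPN samples (A, b) with b = A·s + e over GF(2).

    n   -- secret length (security parameter, illustrative value)
    m   -- number of samples
    tau -- Bernoulli noise rate of each error bit
    """
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=n)          # secret vector over GF(2)
    A = rng.integers(0, 2, size=(m, n))     # public random matrix
    e = (rng.random(m) < tau).astype(int)   # Bernoulli(tau) noise bits
    b = (A @ s + e) % 2                     # noisy linear encoding of s
    return A, b, s, e

A, b, s, e = lpn_samples()
# Recovering s from (A, b) alone is the LPN problem; with e known
# (or zero) it reduces to Gaussian elimination over GF(2).
assert ((A @ s + e) % 2 == b).all()
```

The schemes surveyed in the abstract differ in the field size and the noise distribution plugged into this template; the paper's abstract framework makes both of these free parameters.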

Original language: English
Title of host publication: Public-Key Cryptography – PKC 2016 - 19th IACR International Conference on Practice and Theory in Public-Key Cryptography, Proceedings
Editors: Chen-Mou Cheng, Kai-Min Chung, Bo-Yin Yang, Giuseppe Persiano
Pages: 417-446
Number of pages: 30
DOIs
State: Published - 2016
Event: 19th IACR International Conference on Practice and Theory in Public-Key Cryptography, PKC 2016 - Taipei, Taiwan, Province of China
Duration: 6 Mar 2016 – 9 Mar 2016

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9615
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 19th IACR International Conference on Practice and Theory in Public-Key Cryptography, PKC 2016
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 6/03/16 – 9/03/16

Keywords

  • Additive combinatorics
  • Learning parity with noise
  • Noisy codewords
  • Public key encryption

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
