Slides from Politecnico di Torino on an Introduction to Cryptography. The PDF explores the fundamental concepts of cryptography, distinguishing between classic and modern approaches, and includes a recap of algebra and number theory, useful for university students of Computer Science.
Lesson 01: Introduction
Carlo Sanna
Group of Cryptography and Number Theory
Department of Mathematical Sciences
Politecnico di Torino, Italy

This material is by no means a replacement for the live lectures, whose attendance is strongly encouraged.

Table of Contents
- Introduction
  - Classic Cryptography
  - Modern Cryptography
  - Perfect Security
  - Computational Security
  - Cryptographic Assumptions
- Algorithms
  - Algorithms
  - Asymptotic Notation
  - Complexity of an Algorithm
  - Polynomial-Time Algorithms
  - Probabilistic Algorithms
  - PPT Algorithms
  - Expected PPT Algorithms
- Recap of Algebra and Number Theory
  - Groups
  - Rings and Fields
  - Modular Arithmetic
  - Finite Fields

Introduction
Classic Cryptography

Classic cryptography (say, before WWII) was mostly for military use and was concerned exclusively with secret-key encryption (also called symmetric encryption). The old way of designing secret-key encryption schemes could be summarized as follows:
(1) Come up with a clever method to encrypt and decrypt (Caesar cipher, Vigenère cipher, Enigma machine, ...);
(2) keep the method secret from the enemy (security through obscurity);
(3) hope that nobody is smarter than you.
This gave no guarantee on the security of the scheme, which indeed was often broken, since:
- the enemy learns of the secret method (espionage, corruption, human error, ...); and/or
- somebody smarter than you comes up with an attack on the scheme (frequency analysis, known-plaintext attacks, chosen-plaintext attacks, chosen-ciphertext attacks, man-in-the-middle, differential cryptanalysis, ...).
Modern Cryptography

Modern cryptography is used every day by everybody, and it is no longer limited to secret-key encryption, but ranges over a plethora of topics including: public-key encryption, digital signatures, secret sharing, zero-knowledge proofs, identification schemes, homomorphic encryption, multi-party computation, ... Most importantly, modern cryptography assumes that every attacker knows the design of the scheme (Kerckhoffs' principle), and it is based on rigorous principles and techniques that make it possible to mathematically prove (under robust assumptions) the security of the employed scheme.
Katz and Lindell [1] state the principles of modern cryptography as follows.
(1) Formal definitions, because "If you don't understand what you want to achieve, how can you possibly know when (or if) you have achieved it?"
(2) Precise assumptions, which state what the security of a cryptographic scheme is based on (more about this later).
(3) Proofs of security, which are mathematical proofs of the security of a cryptographic scheme.
Before looking at these formal definitions, we provide as motivation the informal definitions of perfect security and computational security.
Perfect Security

A first notion of security is perfect security, which we informally state below.

Definition (Perfect Security (Informal)). An encryption scheme achieves perfect security if an adversary with unlimited time and unlimited resources cannot obtain from the ciphertext any information about the plaintext without knowing the secret key.

The trouble with perfect security comes from the following theorem.

Theorem (Shannon's Theorem (Informal)). For an encryption scheme to achieve perfect security, the length of the key must be at least equal to the length of the plaintext.

The One-Time Pad (OTP) (or Vernam's cipher) does indeed achieve perfect security by employing a key as long as the message. However, it is not practical (how to share the key?).
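As an aside (not part of the slides), the XOR-based one-time pad can be sketched in a few lines of Python; the function names here are hypothetical:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Perfect secrecy requires a uniformly random key, as long as
    # the message, and used only once.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse: (p ^ k) ^ k == p, so decryption reuses encryption.
otp_decrypt = otp_encrypt

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # uniformly random, same length
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

Reusing the key even once breaks the scheme, since XORing two ciphertexts cancels the key and leaks the XOR of the two plaintexts.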
Computational Security

In light of Shannon's theorem and the impracticality of the one-time pad, the notion of perfect security must be relaxed to that of (t, ε)-security.

Definition ((t, ε)-Security (Informal)). A scheme achieves (t, ε)-security if any adversary running for time at most t succeeds in breaking the scheme with probability at most ε.

For example, a scheme that guarantees that no adversary running for at most 100 years can break it with probability greater than 10^-9 is fine for most practical purposes. However, (t, ε)-security is difficult to prove, because it requires accounting for many details, such as the precise computing power of the adversary.
This leads us to the definition of (asymptotic) computational security.

Definition (Computational Security (Informal)). A scheme achieves computational security if any "efficient adversary" succeeds in "breaking the scheme" with at most "negligible" probability.

To make this definition precise, we have to formalize the following concepts:
- "Efficient adversary", which will mean probabilistic polynomial-time algorithm.
- "Breaking the scheme", which will mean succeeding at an experiment (or game).
- "Negligible", which will mean less than any negative power of a security parameter λ.
Of utmost importance in computational security is the security parameter λ, which is a positive integer that quantifies the level of security provided by a cryptographic scheme. Informally, in order to break the scheme, an (unbounded) adversary has to perform a number of operations at least proportional to 2^λ. In particular, the secret key consists of at least λ bits to resist a brute-force attack. Commonly used values for λ are 128, 192, and 256 bits. Note that brute-forcing a 256-bit key with the world's fastest supercomputer (as of 2023, Frontier with 1.102 × 10^18 FLOPS), assuming that every single floating-point operation can test a key, would require about 10^51 years.
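The 10^51 figure can be checked with a back-of-the-envelope computation; the one-key-per-FLOP rate is the (wildly optimistic) assumption from the text:

```python
KEY_BITS = 256
KEYS_PER_SECOND = 1.102e18          # Frontier's FLOPS, one key test per FLOP
SECONDS_PER_YEAR = 365.25 * 24 * 3600

total_keys = 2 ** KEY_BITS          # about 1.16e77 candidate keys
years = total_keys / KEYS_PER_SECOND / SECONDS_PER_YEAR
print(f"{years:.1e} years")         # on the order of 10^51 years
```

Even testing a million keys per floating-point operation would only shave six orders of magnitude off a 51-digit number of years.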
Cryptographic Assumptions

Currently, the computational security of a cryptographic scheme is always proven conditionally, relying on specific cryptographic assumptions, which typically are assertions about the presumed difficulty of solving certain mathematical problems, primarily from algebra and number theory. Proofs of security are accomplished by reduction, that is, by showing that an efficient adversary who can break the scheme with non-negligible probability can also solve the problem underlying the cryptographic assumption, thus leading to a contradiction. The existence of unconditional computational security remains an exceptionally challenging open problem in mathematics and computer science, closely tied to the existence of one-way functions and the P vs NP problem.
Algorithms

Definition (Algorithm (Informal)). An algorithm is a finite sequence of rigorous instructions that, when executed on some given input, is guaranteed to terminate in a finite number of steps and to produce some output.

For example, consider Euclid's algorithm to compute the greatest common divisor of two integers a, b. We remark that:
- An algorithm is deterministic, that is, repeating the algorithm on the same input always produces the same output.
- An algorithm must always terminate. It cannot enter "infinite loops." If we allow non-termination, then we have instead a procedure.
- We assume that the amount of memory that an algorithm can write (and read later) is unbounded.
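Euclid's algorithm can be rendered in Python as follows (a sketch, not the slides' original pseudocode):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), until b = 0."""
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 198))  # -> 18
```

The loop always terminates because the second argument strictly decreases at every iteration while staying non-negative.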
The given definition of an algorithm is not a precise mathematical definition. For instance, it does not say which "rigorous instructions" are allowed or how they are "executed". Giving a precise definition of an algorithm requires first specifying the model of computation, which essentially idealizes the kind of computer on which the algorithm runs. There are several models of computation, such as Turing machines, random-access machines, lambda calculus, abstract rewriting systems, cellular automata, and so on. Studying these models falls under the realm of computability theory. However, for our purposes, all reasonable models of computation are equivalent, and we are perfectly fine with the informal definition. We only sketch the definition of a Turing machine.
Definition (Turing Machine (Informal)). A Turing machine is an abstract mathematical model of computation that consists of an infinite tape, a read/write head that moves left or right, and a finite set of states. The machine reads symbols from the tape, writes new symbols, changes its state, and follows a set of predefined rules, including conditional branching based on the current symbol and state.

Church-Turing Thesis. Everything that is computable is computable by a Turing machine.
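To make the definition concrete, here is a minimal hypothetical Turing machine simulated in Python (not from the slides): a single state scans rightward, flipping every bit, and halts at the first blank.

```python
# Transition table: (state, symbol) -> (next_state, symbol_to_write, head_move)
RULES = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),   # '_' is the blank symbol
}

def run(tape_input: str) -> str:
    tape = list(tape_input) + ["_"]    # finite input on an (idealized) infinite tape
    state, head = "scan", 0
    while state != "halt":
        state, symbol, move = RULES[(state, tape[head])]
        tape[head] = symbol
        head += move
    return "".join(tape).rstrip("_")

print(run("0110"))  # -> 1001
```

The essential ingredients of the definition all appear: tape, head position, finite state set, and rules that branch on the current (state, symbol) pair.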
Asymptotic Notation

Definition (Big-Oh and Little-Oh). Let f, g : N → R be two functions.
- We write f(n) = O(g(n)), and we say that "f is a Big-Oh of g", if there exists a constant C > 0 such that |f(n)| ≤ C|g(n)| for all n ∈ N.
- We write f(n) = o(g(n)), and we say that "f is a little-oh of g", if lim_{n→+∞} f(n)/g(n) = 0.

For example, 10n³ + n + log n = O(n³), 2^(√n + 1000) = O(2^√n), 5n(log n)^10 = o(n²), and n^c = o(2^n) for every fixed c ∈ R.

Occasionally, we write O(g(n)) in place of a generic function f such that f(n) = O(g(n)). Similarly for o(g(n)).

Note that the "=" in f(n) = O(g(n)) is not really an identity symbol, since it does not satisfy the symmetric property. For example, O(n) = O(n²) but O(n²) ≠ O(n). (Same for little-oh's.)
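The claim that n^c = o(2^n) can be illustrated numerically (a sketch with one arbitrarily fixed c; a finite check does not prove the limit, of course):

```python
from fractions import Fraction

c = 100
# Exact rational values of n^c / 2^n at a few growing values of n.
ratios = [Fraction(n ** c, 2 ** n) for n in (500, 1000, 2000, 4000)]

# The ratio shrinks toward 0: each term is strictly smaller than the last.
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```

Using `Fraction` avoids floating-point underflow: by n = 4000 the ratio is around 10^-844, far below the smallest positive float.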
Definition (Negligible Function). A function f : N → R is negligible if for every c > 0 it holds that f(n) = O(1/n^c).

For example, the functions 2^-n, 2^-√n, and n^-log n are all negligible, while the function n^-1000 is not negligible.

Lemma. If f(n), g(n) are two negligible functions and p(n) is a polynomial, then:
- f(n) + g(n) is a negligible function;
- p(n) · f(n) is a negligible function.
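The lemma's second item can be illustrated (not proved) with exact rational arithmetic; the specific exponents below are arbitrary choices for the sketch:

```python
from fractions import Fraction

def f(n: int) -> Fraction:
    return Fraction(1, 2 ** n)        # 2^-n is negligible

def g(n: int) -> Fraction:
    return Fraction(1, n ** 1000)     # n^-1000 is NOT negligible

n = 1000
# A polynomial factor does not spoil negligibility: n^10 * 2^-n is still
# far below 1/n^50 here (2^1000 ~ 10^301 dwarfs n^10 * n^50 ~ 10^180).
assert n ** 10 * f(n) < Fraction(1, n ** 50)
# By contrast, n^-1000 already fails the definition for c = 1001,
# since n^-1000 / n^-1001 = n is unbounded.
assert g(n) > Fraction(1, n ** 1001)
```

Intuitively, an exponentially small function beats every polynomial, while any fixed negative power of n is eventually overtaken by a slightly larger one.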