The rise of quantum computing poses a significant challenge to the cryptography we rely on today. The encryption standards we currently use to safeguard sensitive data and communications, such as RSA, DSA and those based on elliptic curves, will eventually be broken by quantum computers. Estimates vary on when, but at current rates of improvement, some predict this will happen towards the end of the next decade.
Michele Mosca, co-founder of the Institute for Quantum Computing at Canada’s University of Waterloo, has estimated that there is a 50% chance of a quantum computer powerful enough to break standard public-key encryption appearing within the next 15 years. This means many embedded systems in development now stand a reasonable chance of encountering such an attack within the working lives of the products they go into. It has also been posited that sensitive data can be harvested and stored today, then decrypted once quantum computers become powerful enough.
This threat extends across various industries, with financial institutions, health organizations and critical infrastructure—including energy and transport—most at risk.
In late 2023, the U.S. National Institute of Standards and Technology (NIST) took a significant step in post-quantum cryptography (PQC), announcing four standardized algorithms specifically designed to resist attacks from quantum computers.
The state of quantum computing
Currently, quantum computers remain in their infancy.
IBM’s Osprey is the leading publicly available machine, with 433 quantum bits (qubits). Unlike classical bits, qubits can take on many states at once, which in theory allows certain calculations to be performed much faster. Advancements are rapid, and experts predict significant increases in qubit count and processing power.
By 2030, quantum computers are expected to surpass traditional computers for specific tasks, with the gap widening further by 2040 and 2050. While not a perfect equivalent to Moore’s Law, the exponential growth in quantum computing capabilities necessitates proactive measures to protect cryptographic systems.
The core cryptographic methods
The two most common cryptographic requirements are for public key encryption and digital signatures.
Public key encryption is the mechanism for establishing a shared secret between two parties (e.g. you and your bank): the bank provides a public key, your side contributes fresh random material, and together these yield a shared secret that you use to encrypt your information.
Until NIST’s PQC algorithms, the leading standard was an algorithm based on elliptic curves, which in turn superseded RSA.
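For context, the following is a minimal sketch, in Python with the widely used cryptography package, of how that shared secret is established with today's elliptic-curve approach (X25519 key agreement followed by key derivation); the PQC key-establishment algorithms are designed to replace exactly this exchange. The bank/client naming is illustrative only.

# Minimal sketch of today's elliptic-curve key agreement, the exchange that
# post-quantum KEMs are intended to replace. Uses the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# The bank publishes a public key; the client generates its own key pair.
bank_private = X25519PrivateKey.generate()
client_private = X25519PrivateKey.generate()

def derive_key(own_private, peer_public):
    # Combine our private key with the peer's public key, then derive a
    # 256-bit symmetric key from the raw shared secret.
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session key").derive(shared)

client_key = derive_key(client_private, bank_private.public_key())
bank_key = derive_key(bank_private, client_private.public_key())
assert client_key == bank_key  # both ends now hold the same session key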
Digital signature algorithms are used to authenticate, for example, software releases and prevent message tampering. These use a private key (which is held by the sender) for signing a message, and a public key, which is given to the receiver for authenticating the signed message. Once again, existing algorithms are primarily based on elliptic curves (ECDSA, EdDSA).
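For comparison with what the PQC signature schemes must replace, here is a minimal sketch of that sign-and-verify flow using today's EdDSA (Ed25519) via the Python cryptography package; the firmware payload is a made-up placeholder.

# Sketch of a classical EdDSA signing flow with the 'cryptography' package:
# the sender signs with a private key, the receiver verifies with the
# matching public key; any tampering makes verification fail.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()    # held only by the sender
verify_key = signing_key.public_key()         # distributed to receivers

firmware_image = b"example firmware payload"  # placeholder message
signature = signing_key.sign(firmware_image)

try:
    verify_key.verify(signature, firmware_image)
    print("signature valid: image accepted")
except InvalidSignature:
    print("signature invalid: image rejected")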
The PQC algorithms
Through a collaborative effort, NIST has selected two core (CRYSTALS-Kyber and CRYSTALS-Dilithium) and two backup (FALCON and SPHINCS+) PQC algorithms.
Kyber is a key encapsulation mechanism (KEM) algorithm that uses lattice-based cryptography to keep key sizes small enough for resource-constrained devices. However, it generates larger ciphertexts than some alternatives.
Kyber is also faster than elliptic-curve and RSA algorithms, in both software and hardware, but it has a larger footprint, whether measured in software code size or in gate count.
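Concrete Kyber APIs differ between libraries, so the Python sketch below only illustrates the generic shape of a KEM rather than any particular package's interface: the receiver publishes a public key, the sender encapsulates against it to obtain a ciphertext plus a shared secret, and the receiver decapsulates the ciphertext to recover the same secret. For Kyber, the public key and ciphertext are each roughly a kilobyte, noticeably larger than their elliptic-curve counterparts.

# Generic shape of a key-encapsulation mechanism (KEM) such as Kyber.
# Illustrative contract only; real libraries expose equivalent operations
# under their own names and types.
from typing import Protocol, Tuple

class KEM(Protocol):
    def generate_keypair(self) -> Tuple[bytes, bytes]:
        """Return (public_key, secret_key)."""
        ...

    def encapsulate(self, public_key: bytes) -> Tuple[bytes, bytes]:
        """Sender side: derive (ciphertext, shared_secret) from the
        receiver's public key, with no interaction required."""
        ...

    def decapsulate(self, secret_key: bytes, ciphertext: bytes) -> bytes:
        """Receiver side: recover the same shared secret from the
        ciphertext using the secret key."""
        ...

def establish_session(kem: KEM) -> bytes:
    # Receiver publishes pk; sender encapsulates against it; both ends
    # now hold a secret suitable for keying a symmetric cipher.
    pk, sk = kem.generate_keypair()
    ciphertext, sender_secret = kem.encapsulate(pk)
    receiver_secret = kem.decapsulate(sk, ciphertext)
    assert sender_secret == receiver_secret
    return receiver_secret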
Dilithium is a digital signature algorithm designed to supersede today's signature standards such as DSA and ECDSA. Like Kyber, it uses a lattice-based approach to deliver highly secure and efficient signing operations for high-volume signing needs, although its signature sizes are larger than those of some competing algorithms.
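At the API level the calling pattern matches existing schemes, which is what makes Dilithium close to a drop-in replacement; the sketch below is a generic illustration of that contract, not a specific library's interface.

# Generic shape of a signature scheme such as Dilithium. The flow mirrors
# today's ECDSA/EdDSA usage; mainly the key and signature sizes grow.
from typing import Protocol, Tuple

class SignatureScheme(Protocol):
    def generate_keypair(self) -> Tuple[bytes, bytes]:
        """Return (public_key, secret_key)."""
        ...

    def sign(self, secret_key: bytes, message: bytes) -> bytes:
        """Produce a signature over the message; for Dilithium this is
        kilobytes rather than the 64 bytes of Ed25519."""
        ...

    def verify(self, public_key: bytes, message: bytes,
               signature: bytes) -> bool:
        """Return True if the signature is valid for this message."""
        ...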
From early 2023, it was clear that these would be the two core algorithms, and they are the two EnSilica has developed for ASIC implementation. The additional two algorithms should be seen as supplemental.
FALCON uses a heavier, floating-point arithmetic approach to digital signatures. This makes it slower than Dilithium, but it has the advantage of delivering a smaller signature and public key, which makes it better suited to bandwidth-constrained applications.
Like FALCON, SPHINCS+ is an alternative to Dilithium for digital signatures, again using an entirely different mathematical principle, in this case hash-based cryptography, which allows the scheme to be stateless. It has been included in case future weaknesses are found in Dilithium, but it has larger signature sizes and is arguably less mature than the other algorithms.
It is also important to note that these are just the first steps. More PQC algorithms are under development, and NIST intends to release additional options as the field matures.
Implementations
While PQC algorithms offer solutions, their implementation requires careful consideration. Systems developed today often have lifespans extending beyond the 2030s. As such, adoption has been mandated by national security bodies; in the U.S., for example, by the NSA through CNSA 2.0.
Several major companies are already exploring PQC implementations. At EnSilica, these are focussed on the core two algorithms. Elsewhere, a similar pattern can be seen. Google, for example, pilots Kyber in Chrome; Microsoft integrates both Dilithium and Kyber into its Azure platform; and IBM offers PQC-compatible libraries (its IBM z16 is underpinned by Kyber and Dilithium).
Industry alliances are also forming: The Linux Foundation has announced the launch of the Post-Quantum Cryptography Alliance (PQCA), with founding members including Amazon Web Services (AWS), Cisco, IBM, IntellectEU, NVIDIA, QuSecure, SandboxAQ and the University of Waterloo.
Can you deploy safely in software?
The above shows a mix of hardware and software approaches, but this is often out of necessity. Google, for example, does so because it is unable to control the hardware its web browser is running on.
The standard trade-offs between hardware and software implementations apply to these algorithms as well: hardware is both more resistant to side-channel attacks and more power-efficient. Indeed, the efficiency advantages of running these algorithms in hardware can translate into speed increases of up to 100×.
On the other hand, software implementations enable lower cost systems with the ability to be patched as new algorithms are developed.
There are, therefore, several reasons why you might consider a software implementation. For example, an embedded system that needs to be low cost, with a small area footprint, and where speed is not critical for the application, would be perfectly viable to run the operations in software.
It is also possible to run (and some companies have developed) a hybrid system with both hardware and software components to increase flexibility, albeit with a smaller (10×) increase in speed versus full software models.
On balance, we recommend that, for most embedded systems, fully hardware implementations should be the norm.
Forecasted vulnerabilities to address at the design phase
It should also be noted that implementations of these algorithms can introduce weaknesses of their own, which can be exploited to break them and extract keys.
One example is timing attacks: if execution time varies depending on the exact data the algorithm is processing, those variations can be exploited to extract information from it.
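A classic illustration of the timing problem, and of the kind of countermeasure meant here, is the comparison of secret values. The Python sketch below contrasts a naive early-exit comparison, whose running time depends on how many leading bytes match, with the standard library's constant-time alternative; it illustrates the principle only, not any specific product's countermeasure.

# Timing-attack illustration: comparing a secret (e.g. a MAC or key check)
# byte by byte leaks information, because the loop exits at the first
# mismatch and the attacker can measure how long that took.
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:        # early exit: run time depends on the secret data
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the inputs differ.
    return hmac.compare_digest(a, b)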
Another attack approach might be to measure how much power a chip is consuming and then correlate this to the execution of the algorithm to get information.
This means that for high-security implementations, whether credit cards or SIM cards through to more general systems holding sensitive data, hardware implementations are essential. Features that act as countermeasures against such attacks need to be built into the design of any embedded system.
—Christos Kasparis is senior principal systems engineer at the ASIC design house at EnSilica.