Fully homomorphic encryption (FHE) is a major breakthrough in data security. The ability to compute and collaborate on data while it remains encrypted allows us to do things that were previously impossible.
However, it’s also well-known for being extremely slow (about a million times slower than working on unencrypted data), and making it available at scale poses a massive engineering challenge.
Fortunately, Optalysys are here to help. We apply our groundbreaking Etech technology to overcome the limits of FHE, allowing it to become economical at scale.
Interested? In this special article by Dr Joseph Wilson and Dr Florent Michel, our vice-presidents for applications and cryptography, we start with a high-level view of how FHE works and where the hardware challenges lie, and then show how an optical computing process can be used to overcome these limits. We then translate this understanding into the hardware concepts that we implement in our Enable FHE accelerator.
The unique capabilities of FHE come from the ability to perform additions and multiplications on encrypted data. When we decrypt the final output, it correctly reflects the operations applied during the computation.
Supporting this capability means finding a cryptographic problem that also preserves the mathematical structure of the encrypted data when either of these operations are performed. This property is called homomorphism.
To enable arbitrary computation, we also have to be able to perform additions and multiplications indefinitely.
This is the difference between a fully homomorphic scheme and partially or levelled homomorphic schemes.
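To make the homomorphism property concrete, here is a toy symmetric LWE-style scheme (our own illustrative parameters, far too small to be secure): a message is scaled up, masked with a secret-dependent value, and perturbed with a little noise. Adding two ciphertexts adds the underlying messages.

```python
import random

q = 2**16      # ciphertext modulus
delta = 2**8   # scaling factor separating messages from noise
n = 16         # secret-key dimension (toy-sized; real schemes use hundreds)

def keygen():
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, m, noise=3):
    # ciphertext is (a, b) with b = <a, s> + delta*m + e (mod q)
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-noise, noise)
    b = (sum(ai * si for ai, si in zip(a, s)) + delta * m + e) % q
    return (a, b)

def decrypt(s, ct):
    a, b = ct
    noisy = (b - sum(ai * si for ai, si in zip(a, s))) % q
    # rounding to the nearest multiple of delta strips the small noise
    return round(noisy / delta) % (q // delta)

def add(ct1, ct2):
    # component-wise addition of ciphertexts adds the plaintexts
    a1, b1 = ct1
    a2, b2 = ct2
    return ([(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q)

s = keygen()
c1, c2 = encrypt(s, 3), encrypt(s, 4)
print(decrypt(s, add(c1, c2)))  # → 7
```

Multiplicative homomorphism works on the same principle but is considerably more involved, which is one reason multiplications dominate the cost of FHE.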
Notionally, any cryptographic trapdoor function with this homomorphic property can be used to construct an FHE scheme.
However, finding such a function and verifying that it is mathematically safe from attacks is extremely hard.
In practice, all contemporary FHE schemes are based on a problem in cryptography known as the Learning with Errors (LWE) problem, and more specifically the Ring-Learning with Errors (RLWE) problem.
The LWE and RLWE problems have some extremely useful properties. Not only do they preserve additive and multiplicative homomorphism, but they are also expected to be quantum-safe.
This means that next-generation quantum computers will not provide an advantage in attacking data that has been encrypted using LWE/RLWE techniques, ensuring the secrecy of sensitive data long into the future.
In fact, the National Institute of Standards and Technology (NIST) has selected a key encapsulation scheme based on a closely related structured-lattice problem (Module Learning with Errors) as the standard for next-generation post-quantum internet security.
However, the way LWE and RLWE make information secure has implications for the computational effort involved in FHE. Here’s the basic idea:
Under LWE/RLWE, information is secured in part by adding a small quantity of random noise to the ciphertext.
This destroys the ability of an adversary to retrieve the underlying data without the decryption key. When simply encrypting and decrypting information, this noise isn’t a problem.
However, when we perform additions or multiplications (especially multiplications), this noise also grows.
Decrypting the information depends in part on being able to round off to the correct value.
If the noise in the result grows too large, then the chance that the decryption will round to the wrong value increases; we would eventually lose the data.
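A deliberately simplified numeric sketch (our own toy numbers, not any real scheme’s parameters) shows why: each message is encoded as a multiple of a gap `delta` plus an error, and summing ciphertexts also sums the errors until rounding lands on the wrong value.

```python
# Toy model of noise growth: message m is encoded as m * delta plus an
# error; summing k ciphertexts sums both the payloads and the errors.
delta = 16   # gap between adjacent encoded messages
e = 2        # worst-case error carried by each fresh ciphertext

def decode(x):
    # rounding recovers the message only while the accumulated error
    # stays below delta / 2
    return round(x / delta)

for k in range(1, 7):
    payload = k * delta + k * e   # sum of k encryptions of the value 1
    ok = decode(payload) == k
    print(k, decode(payload), "ok" if ok else "DATA LOST")
```

In this sketch the sum of five or more ciphertexts already decodes incorrectly; real schemes manage the same trade-off with much larger parameters and careful noise budgeting.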
The solution to the problem of continually-growing noise is ultimately what makes FHE possible. The idea goes something like this:
“If we can execute *any* circuit on encrypted data, then what happens when we securely execute the decryption circuit for the scheme that we’re using?”
The rather remarkable result is that this process produces a ciphertext with the noise reduced to a level that permits more operations.
This ciphertext also correctly reflects the FHE operations that have been performed thus far.
This means that we can compute indefinitely, periodically stopping to perform this process that refreshes the noise, and then continuing.
This noise reduction technique is called bootstrapping. However, while this allows computations to continue indefinitely, it also involves massive amounts of additional computation.
Why is FHE quantum-safe?
FHE is often advertised as being secure against quantum computers, but why is this the case?
It can be shown that the LWE problems FHE is based on are equivalent* to a category of problems based on mathematical structures known as lattices.
It can then be shown that a quantum system that can break LWE would by implication be capable of solving these problems, which are believed to be hard for both quantum and classical computers.
Hence, we can conclude that breaking FHE is hard for quantum computers too.
* The underlying idea is the same, but expressed in a different way
FHE works because we can perform addition and multiplication on data without ever having to decrypt it. From these simple arithmetic operations, we can build up the same processes that make conventional CPUs and other digital processors so useful.
There’s another catch here: this means that we’re essentially using complicated mathematics to simulate what physical computers do! This itself makes FHE even more complex, with a final catch coming from the fact that encrypted data is in a very different form to unencrypted data.
Consider numbers in regular (unencrypted) computation. We can represent numbers with a sequence of binary values, and we can design computer hardware that efficiently represents the arithmetic operations we want to perform on these binary sequences.
As long as the numbers that we want to represent fit into the hardware that we’ve designed (most modern general-purpose CPUs are designed to accommodate 64-bit representations), we can perform calculations very quickly.
By contrast, encrypted representations of a value in FHE schemes are typically collections of numbers stored as a vector. Specifically, under RLWE, the numbers stored in these vectors represent the coefficients of a high-degree polynomial.
While most CPUs include dedicated vector processing capabilities, the scope of the calculations involved in a simple FHE addition or multiplication is still much bigger than the equivalent on unencrypted data.
The individual numbers stored in that vector might also have a bigger representation in terms of the number of bits than is typical (for example, 80 or 128 bits vs the usual 64 that most desktop and server processor architectures are designed around), which translates to extra processing time on conventional hardware. Addressing the computational side of FHE partly involves designing hardware around these unique requirements.
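To make the wide-coefficient point concrete, here is a minimal sketch (function names are our own, for illustration) of how a coefficient wider than a machine word gets split into the 64-bit limbs that a conventional processor actually operates on, multiplying the work per coefficient:

```python
MASK64 = (1 << 64) - 1

def to_limbs(x):
    # split a wide non-negative integer into 64-bit limbs,
    # least-significant limb first
    limbs = []
    while x:
        limbs.append(x & MASK64)
        x >>= 64
    return limbs or [0]

def from_limbs(limbs):
    # reassemble the original integer from its limbs
    return sum(limb << (64 * i) for i, limb in enumerate(limbs))

x = 2**100 + 12345          # a ~100-bit coefficient
print(len(to_limbs(x)))     # → 2 (needs two 64-bit words)
assert from_limbs(to_limbs(x)) == x
```

Every arithmetic operation on such a coefficient becomes several machine operations plus carry handling, which is part of what purpose-built hardware can streamline.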
This is what we’re doing with the conventional electronic sections of our Enable FHE accelerator platform; we’re designing electronic architectures that natively work with the encrypted data structures involved in an FHE calculation.
There are some further aspects of FHE that make this task simpler. For example, because we can’t evaluate the internal state of the computation (e.g. we can’t check the conditional statements normally used to control program flow), there are no jumps or changes to the order in which we execute instructions.
FHE circuits therefore have to be evaluated in full, and any given FHE program can be unrolled into a fixed sequence of instructions. And while the purpose of an FHE program can be as varied as any other program, the core mathematical primitives that drive the computation remain highly consistent even across different schemes and implementations.
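A common way of handling this (used across FHE programming in general, sketched here on plain numbers rather than real ciphertexts) is to replace branching with arithmetic: both sides of a conditional are evaluated, and an encrypted 0/1 bit blends the results.

```python
def select(cond, a, b):
    # branch-free replacement for `a if cond else b`:
    # cond is a 0/1 value (in FHE it would be an encrypted bit),
    # and both a and b are always computed
    return cond * a + (1 - cond) * b

print(select(1, 10, 20))  # → 10
print(select(0, 10, 20))  # → 20
```

Because both branches are always evaluated, the instruction sequence is identical whatever the data says, which is exactly what makes FHE programs unrollable into a fixed sequence.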
These factors mean that FHE workloads are static, predictable, and built from a small set of repeated mathematical primitives. This allows us to define more efficient FHE processing solutions using standard electronics principles, but some calculations remain intractable without a significant breakthrough.
Optalysys can provide this breakthrough through our specialist optical computing hardware.
While we can design improved electronic architectures for FHE, there’s a particular mathematical operation that is used extensively yet is very difficult to make efficient. When we need to multiply two ciphertexts together (an extremely common task), we need to multiply two vectors of polynomial coefficients.
The usual method of multiplying polynomials together, the one that’s typically taught in schools, involves a quadratic number of individual multiplications.
When doing this in electronics, this means we would need to multiply each element in one vector by every element in the other vector.
As we’re typically dealing with very large vectors, this would mean an enormous amount of multiplication.
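The schoolbook method described above can be sketched in a few lines; note the nested loop, which is what makes the cost grow quadratically with the length of the coefficient vectors:

```python
def schoolbook_mul(p, q):
    # naive polynomial multiplication: every coefficient of p is
    # multiplied by every coefficient of q (len(p) * len(q) products)
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2
print(schoolbook_mul([1, 2], [3, 4]))  # → [3, 10, 8]
```

For the vectors of thousands of coefficients used in real FHE parameters, this quadratic blow-up is exactly what makes the naive approach unworkable.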
The solution to this in contemporary FHE schemes is to convert the data into a form where multiplication is more efficient (where we only need to multiply corresponding elements), perform the multiplication, and then convert the result back into the original form.
This conversion is known as a Fourier transform, and all FHE schemes use Fourier transforms (or the closely related Number Theoretic Transform or NTT) to make multiplication more efficient.
This is more efficient than the alternative and is a universal practice in contemporary FHE libraries, but this conversion is still very expensive.
In fact, around 70-90% of FHE computation is dedicated solely to performing these transforms.
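The transform-multiply-transform pattern can be sketched with an ordinary floating-point FFT (a simplified stand-in: real FHE libraries typically use the exact NTT over modular integers, and far larger vectors):

```python
import cmath

def fft(a, invert=False):
    # recursive Cooley-Tukey FFT; len(a) must be a power of two
    n = len(a)
    if n == 1:
        return a[:]
    sign = -1 if invert else 1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_mul(p, q):
    # pad to a power of two large enough to hold the product
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    pa = fft(p + [0] * (n - len(p)))
    qa = fft(q + [0] * (n - len(q)))
    prod = [x * y for x, y in zip(pa, qa)]   # pointwise: the cheap part
    res = fft(prod, invert=True)             # transform back
    return [round(c.real / n) for c in res][: len(p) + len(q) - 1]

print(fft_mul([1, 2], [3, 4]))  # → [3, 10, 8], same as the schoolbook result
```

The pointwise multiplication in the middle is linear in the vector length; virtually all of the remaining cost sits in the two forward transforms and the inverse transform, which is why the transforms dominate FHE runtime.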
So from a hardware perspective, we now have two distinct avenues for accelerating FHE:

1. Designing electronic architectures that natively handle the data structures and instruction flows of FHE ciphertexts.

2. Finding a fundamentally faster way of performing the Fourier/NTT transforms at the heart of ciphertext multiplication.

The first approach is possible in the modern semiconductor manufacturing landscape and is one of our objectives with Enable, but the second is much harder.
The most efficient algorithms for accelerating these transforms have well-established theoretical limits for the number of electronic operations that must be performed over the course of the calculation.
This is true even if you can make use of parallelism. This introduces a minimum level of latency in electronically calculating a transform, and seriously affects efforts to accelerate FHE using digital systems.
The alternative is to find a fundamentally new approach to performing the necessary calculations.
Just as semiconductivity is the material property that allows us to construct transistors (and thus logic gates), we can look for a physical phenomenon in which the mathematics of the Fourier transform is embedded in the behaviour of the natural world.
To beat existing technologies, this behaviour also has to take place at very high speed and under favourable energy constraints. We then have to design hardware that can deploy this physics alongside conventional electronics.
For our purposes, the phenomenon we’re looking for is light. Under certain conditions, light will perform a Fourier transform through a process of free-space optical interference.
In optical systems, not only is the Fourier transform performed nearly instantaneously, but the calculation occurs in the same ultra-fast timeframe regardless of how much information is present in the optical field!
This is a remarkable property that offers vast improvements over electronics; there’s no upper bound on how much information can be transformed in a single operation, overcoming the theoretical minimum operation counts of electronic circuitry.
However, much less obvious is how we make this system interoperable with digital technology. This involves finding a way to convert digital information into the analog properties of light, execute the optical Fourier transform, and convert the result back into a digital form.
For the purposes of FHE, our calculations have to be exact too; the output of this process has to be directly equivalent to what a digital system would produce.
Our technology solves the above problems while turning the field of optical computing on its head. We accomplish all of the above in a microchip-scale photonic integrated circuit.
Silicon photonics allows us to construct photonic circuits on a silicon die, in much the same way that conventional electronic chips are made.
These circuits are composed of waveguides and other micro-scale structures that allow us to precisely direct and control the properties of light.
We can use these control structures to convert digital information into an optical representation in terms of the phase and amplitude of light.
By simultaneously encoding many such digital representations into individual beams of laser light and then emitting these beams into free space, we can execute a Fourier transform on the information contained in those beams.
We use photonic modulators taken from existing optical telecommunications technology to encode digital information into light. Aside from being well-validated technologies that are in common use today, these modulators are also capable of operating at extremely high speed.
By detecting the phase and amplitude of the transformed light and converting these signals into a discrete digital representation, we can also recover the result of the optical transform at high speed.
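The encode → transform → detect pipeline described above can be summarised in a purely conceptual numeric sketch (our own illustrative functions; the physical system is analog light, not code, and the interference step is modelled here as a discrete Fourier transform of the whole field at once):

```python
import cmath

def encode(value, levels=256):
    # represent a digital value as the phase of a unit-amplitude beam
    return cmath.exp(2j * cmath.pi * value / levels)

def free_space_transform(field):
    # the optical interference step acts like a discrete Fourier
    # transform applied to the entire field simultaneously
    n = len(field)
    return [sum(field[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def detect(sample):
    # a coherent detector recovers (amplitude, phase) from the output
    return cmath.polar(sample)

field = [encode(v) for v in [0, 64, 128, 192]]   # four encoded values
output = free_space_transform(field)
print([detect(s) for s in output])
```

In the physical device each step of this pipeline happens at optical speed, and the transform itself takes the same time however many values are encoded into the field.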
The result is a technology that can perform an enormous amount of transform calculation in a very short period of time. We can use the system to perform both Fourier and NTT transforms, allowing us to target acceleration for every FHE scheme regardless of the specific technique used to perform efficient multiplications.
Pairing these optical circuits with conventional electronic CMOS integrated on the same die yields a technology that can work at the very heart of digital systems while simultaneously providing the advantages of optics.
This allows us to unify the two hardware approaches to accelerating FHE; we have a way of accelerating transforms for multiplication, and we have electronic architecture solutions designed to work natively on ciphertexts.
By designing a photonic chiplet that can be integrated alongside the electronics, we can achieve an all-in-one solution to the challenge posed by FHE.