Quantum computing is expected to revolutionize many different industries, including finance. One of the archetypal examples within this sector (apart from portfolio optimization, which we covered in a previous post) that is set to benefit from quantum computing is option pricing. This problem is typically tackled using a quantum algorithm known as quantum amplitude estimation. In this article, we compare the performance of this algorithm on this financial problem with several recent, state-of-the-art variants, as well as with a classical approach. We discuss the performance improvements offered by these quantum algorithms and their requirements in terms of quantum hardware resources, which together give an idea of their potential for near-term quantum advantage.
This article consists of three main sections. First, we explain option contracts and possible ways to determine their price. Next, we discuss the quantum amplitude estimation algorithm and its main building blocks. Finally, we introduce five recent, improved variants of this algorithm and benchmark their performance and quantum hardware requirements.
Option contract basics
Derivative contracts constitute one of the main classes of financial assets. They derive their value from one or more underlying assets (hence the name), such as stocks or commodities, and are commonly used as a crucial part of hedging strategies or for speculation. There are several types of derivatives, such as futures and swaps, but here we will only talk about options. In their simplest form, the buyer of an option contract acquires the right, but not the obligation, to buy (this is known as a call option) or sell (this is known as a put option) the underlying asset for a predetermined price (the strike price) at a predetermined date (the expiration date). This means that, for a call option, if the price of the asset at the expiration date is above the strike price, the option contract can be exercised to buy the underlying asset for the strike price and immediately sell it for the higher market price, thereby locking in a profit. If, however, the asset price at expiration is below the strike price, the option contract simply isn't exercised. For put options it's the other way around: a profit can be made if the underlying asset price is below the strike price, by using the option contract to sell the asset for the strike price and then immediately buying it back for the lower market price, whereas the option contract isn't exercised if the asset price is above the strike price.
Determining a fair price
The above discussion could give the impression that entering an option contract offers the opportunity to realize risk-free profits. As we all know, however, in reality there is no such thing as a free lunch. In options trading this is reflected in the fact that a premium needs to be paid in order to enter an option contract. The challenge which option traders face (whether retail or institutional) is therefore to estimate whether the potential profit from exercising the option will outweigh the premium that has to be paid to enter the contract. For the simple option types discussed above, which are known as European options, there exists a closed-form analytical expression for the expected value O of the option. This is known as the Black-Scholes formula, and for a call option it is given by
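O = S₀ e^(rT) N(d₁) − K N(d₂)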
with K and T the strike price and time to maturity of the option contract, with S₀, r, and σ the spot price, drift (assumed to be equal to the risk-free market rate), and volatility of the underlying asset, with N the standard normal cumulative distribution function, and with
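d₁ = [ln(S₀/K) + (r + σ²/2)T] / (σ√T),   d₂ = d₁ − σ√T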
In principle this value O first needs to be discounted, taking into account the risk-free rate, before it can be compared with the option's current premium. However, since this is not a computationally challenging step (given current and future interest rates), we will not do so in this article and will only focus on the expected option value at expiration. It should be noted at this point that several significant assumptions need to be made in order to arrive at the above expression. Notable examples include the absence of arbitrage opportunities and dividends, but perhaps most important is the assumption that the price of the underlying asset follows a geometric Brownian motion with constant drift (the return of the asset) and volatility, which implies that the asset price follows a log-normal distribution at any given time. These assumptions limit the accuracy and applicability of the Black-Scholes formula, meaning that if we want to model options without these assumptions, or if we want to model more exotic options like Asian options or barrier options, we need to resort to numerical methods. One of the most widely used numerical methods in this context is the Monte Carlo method, where the general idea is to generate a large number of possible price paths for the underlying asset, calculate the option value for each of them, and then average across all the paths to obtain the expected option value. A simplified, very small-scale example of this method is shown here:
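The sketch below is illustrative only: since geometric Brownian motion implies a log-normal terminal price, it samples that price directly instead of simulating full price paths, and it assumes a 365-day year (using the same option parameters as in the benchmarks later on):

```python
import numpy as np

# Minimal Monte Carlo sketch for pricing a European call option.
# Under geometric Brownian motion the terminal price S_T is log-normal,
# so we sample it directly instead of simulating full price paths.
rng = np.random.default_rng(42)

S0, r, sigma = 2.0, 0.04, 0.1   # spot price, drift, volatility
K, T = 1.9, 300 / 365           # strike price, time to expiration (years)
M = 100_000                     # number of Monte Carlo samples

# S_T = S0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z), with Z ~ N(0, 1)
Z = rng.standard_normal(M)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoffs = np.maximum(ST - K, 0.0)       # call payoff for each sample
O = payoffs.mean()                      # expected (undiscounted) option value
err = payoffs.std(ddof=1) / np.sqrt(M)  # standard error, decays as 1/sqrt(M)
print(f"Expected option value: {O:.4f} +/- {err:.4f}")
```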
The estimation error ε of this approach scales as
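ε ∝ 1/√M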
with M the number of samples. As a result, large financial institutions typically run large-scale simulations overnight in order to achieve the desired accuracy for large portfolios of complicated options.
Taking the options to quantum town
The scaling of the Monte Carlo method and the associated computational cost put serious restrictions on the size and complexity of the portfolios that can be numerically simulated and/or on the accuracy of those simulations. This matters because the results of a simulation must not already be irrelevant or outdated by the time the calculations are completed, for example because the markets have opened again. This is where quantum computing could offer a very significant advantage over classical methods: by means of a quantum algorithm known as quantum amplitude estimation (QAE), quantum computers can perform Monte Carlo simulations for which the estimation error scales as 1/M, i.e. quadratically better than the 1/√M of classical Monte Carlo simulations. We will now discuss the three main parts which constitute this algorithm.
Loading the asset price probability distribution
The first step of the amplitude estimation algorithm consists of loading the desired probability distribution for the price of the underlying asset into a quantum register of n qubits. In mathematical terms, we need to construct an operator P which does the following:
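P|0…0⟩ = Σᵢ √pᵢ |xᵢ⟩,   i = 0, …, 2ⁿ−1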
i.e. it maps the all-zeroes initial state to a superposition of the possible spot prices of the underlying asset (represented by the integers xᵢ in the range [0, 2ⁿ−1]), with the weights given by the square roots of the corresponding spot price probabilities pᵢ. There are several ways to construct this operator. A first way is to use the generic algorithm which was proposed in this paper, and which is implemented, for example, in the initialize method of Qiskit. However, this method can yield sub-optimal quantum circuits whose depth in general scales exponentially with the number of qubits, making it unsuitable for real-life problems. A second way to load a probability distribution is the Grover-Rudolph method, which allows for efficient state preparation if the probability density function is efficiently integrable. The problem with this method, however, is that it itself already requires classical Monte Carlo samples, making it useless as a subroutine for quantum Monte Carlo simulation because it forfeits the quantum advantage. Another way to load a probability distribution is to use a quantum generative adversarial network (qGAN), which combines a quantum generator (a variational circuit) with a classical discriminator (a neural network). This allows generic probability distributions to be learned and approximately loaded by feeding data samples to the qGAN. The upside of this hybrid quantum-classical approach is that it leads to very shallow quantum circuits (making it very suitable for NISQ (noisy intermediate-scale quantum) devices), while the downside is that the efficiency and overhead of the classical training process are unclear. New and improved methods for loading data into a quantum register are continuously being developed, but for the small-scale demonstrations in this article we will stick to the generic approach discussed at the start of this paragraph.
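As a small illustration, the sketch below (assuming Qiskit and SciPy are available; the truncation points of the grid are an arbitrary choice) discretizes a log-normal distribution on a 2ⁿ-point grid and loads it using the initialize method:

```python
import numpy as np
from qiskit import QuantumCircuit
from scipy.stats import lognorm

n = 3                                       # number of qubits -> 2^n grid points
S0, r, sigma, T = 2.0, 0.04, 0.1, 300 / 365

# Log-normal distribution of the terminal asset price under GBM.
mu = np.log(S0) + (r - 0.5 * sigma**2) * T
s = sigma * np.sqrt(T)
dist = lognorm(s=s, scale=np.exp(mu))

# Discretize on an equidistant grid (truncated at the 0.1% and 99.9%
# quantiles) and renormalize; grid point i is mapped to basis state |i>.
x = np.linspace(dist.ppf(0.001), dist.ppf(0.999), 2**n)
p = dist.pdf(x)
p /= p.sum()

# Generic state preparation: the amplitudes are the square roots of the
# probabilities, so measuring the register samples from the distribution.
qc = QuantumCircuit(n)
qc.initialize(np.sqrt(p), range(n))
```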
Constructing the option payoff and Grover operators
The second step of QAE consists of constructing an operator F which realizes the following transformation:
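F|x⟩|0⟩ = |x⟩ (√(1−f(x)) |0⟩ + √f(x) |1⟩)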
with f(x) the function defining the option payoff as a function of the underlying asset price, rescaled to lie in [0, 1]; e.g. max(x−K, 0), suitably normalized, for a European call option with strike price K. This can be achieved approximately by means of elementary quantum operations and some ancilla qubits. Based on the above equation, we can see that we are interested in determining the total probability P₁ of obtaining the state 1 when measuring the last qubit, because it is given by
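P₁ = Σᵢ pᵢ f(xᵢ)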
which is exactly the option payoff averaged over the loaded probability distribution, i.e. it is equal to the expected value O of the option which we were looking for. Note that at this point we could simply measure the last qubit repeatedly and thus obtain an estimate for P₁; however, this approach does not scale better than the classical approach, as we will illustrate later. QAE, on the other hand, allows the efficient, quadratically-faster-than-classical estimation of P₁. For the reader with technical quantum computing knowledge (the non-technical reader needn't worry and can simply skip to the next subsection): QAE achieves this speedup by means of the Grover operator
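Q = A (2|0⟩⟨0| − I₂ⁿ⁺¹) A† (I₂ⁿ ⊗ Z)   (up to an overall sign convention)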
with Iᵤ the identity operator of dimension u, Z the Pauli-Z operator, and A=F(P⊗I₂). The Grover operator Q has two highly degenerate eigenvalues e^(±iθ) which are mapped to the desired probability according to P₁=sin(θ/2)².
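This relation can be checked numerically for a toy instance. The sketch below (plain NumPy, with randomly chosen pᵢ and f(xᵢ) values) exploits the fact that A(2|0⟩⟨0| − I)A† = 2|ψ⟩⟨ψ| − I with |ψ⟩ = A|0⟩, so Q can be built without constructing A itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
N = 2**n

p = rng.dirichlet(np.ones(N))      # random probabilities p_i
f = rng.random(N)                  # random payoff values f(x_i) in [0, 1]
P1 = np.sum(p * f)                 # the probability we want to recover

# The state |psi> = A|0>, with the ancilla as the last (fastest) qubit.
psi = np.empty(2 * N)
psi[0::2] = np.sqrt(p * (1 - f))   # ancilla |0> amplitudes
psi[1::2] = np.sqrt(p * f)         # ancilla |1> amplitudes

# Q = A (2|0><0| - I) A^dag (I ⊗ Z) = (2|psi><psi| - I)(I ⊗ Z)
IZ = np.kron(np.eye(N), np.diag([1.0, -1.0]))
Q = (2 * np.outer(psi, psi) - np.eye(2 * N)) @ IZ

# All eigenvalues of Q are +/-1 except for the pair e^(+/- i*theta).
phases = np.angle(np.linalg.eigvals(Q))
nontrivial = phases[(np.abs(phases) > 1e-6) &
                    (np.abs(np.abs(phases) - np.pi) > 1e-6)]
theta = np.abs(nontrivial[0])

print(np.sin(theta / 2) ** 2, P1)  # the two values should coincide
```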
Calculating the eigenvalues
The third and final step of QAE consists of actually determining the eigenvalues of Q. The original, canonical version of this algorithm leverages quantum phase estimation to accomplish this. However, this requires many sequential controlled applications of Q, leading to significant circuit depths and making it unsuitable for current NISQ devices. This is why several variants of QAE have recently been proposed that do not use quantum phase estimation, but still provide a quadratic speedup over classical methods. These include maximum likelihood QAE, simplified QAE, iterative QAE, faster QAE, and power law QAE. The general idea behind all of these is to measure Qᵏ A|0⟩ for cleverly selected values of k, such that combining the results for the different values of k leads to a quadratically improved estimate of P₁. The fact that any quantum circuit in these algorithms only involves a single power of a non-controlled version of Q makes them much more suitable for current NISQ devices.
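To make this idea concrete, the sketch below (plain NumPy, in the spirit of maximum likelihood QAE rather than a faithful implementation of any specific variant) simulates ancilla measurements of Qᵏ A|0⟩, whose success probability is sin²((2k+1)θ/2), and combines them into a single maximum likelihood estimate of P₁:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth, used here only to simulate the quantum measurements.
P1_true = 0.3
theta = 2 * np.arcsin(np.sqrt(P1_true))

# Measuring the ancilla of Q^k A|0> yields 1 with probability
# sin^2((2k+1) * theta / 2); take a few shots for several powers k.
ks = [0, 1, 2, 4, 8]
shots = 100
hits = [rng.binomial(shots, np.sin((2 * k + 1) * theta / 2) ** 2) for k in ks]

# Combine all outcomes into a joint log-likelihood and maximize over theta.
grid = np.linspace(0.0, np.pi, 100_000)
logL = np.zeros_like(grid)
for k, h in zip(ks, hits):
    q = np.clip(np.sin((2 * k + 1) * grid / 2) ** 2, 1e-12, 1 - 1e-12)
    logL += h * np.log(q) + (shots - h) * np.log(1 - q)

theta_hat = grid[np.argmax(logL)]
print(np.sin(theta_hat / 2) ** 2)  # estimate of P1, close to 0.3
```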
Algorithm benchmarking
For all of the following results, we consider a single European call option with a strike price of 1.9, a time to expiration of 300 days, and an underlying asset with S₀=2, r=0.04, and σ=0.1.
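For reference, the closed-form expression from the first section can be evaluated for these parameters with a few lines of Python (assuming a 365-day year):

```python
import numpy as np
from scipy.stats import norm

# Closed-form (undiscounted) Black-Scholes value for the benchmark option,
# assuming a 365-day year when converting the time to expiration.
S0, r, sigma = 2.0, 0.04, 0.1
K, T = 1.9, 300 / 365

d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
O = S0 * np.exp(r * T) * norm.cdf(d1) - K * norm.cdf(d2)
print(f"Expected option value at expiration: {O:.4f}")
```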
Comparing the QAE variants
We will now investigate how all of the QAE variants listed in the previous section compare when we apply them to the problem of option pricing. To do this we check how the estimation errors of these algorithms scale as a function of the number of samples used:
We can see that all of the QAE variants outperform classical Monte Carlo simulation above some threshold number of samples, beyond which they achieve better accuracy for the same number of samples or, equivalently, require fewer samples to achieve the same accuracy. Furthermore, all of the results show a linear dependence in this log-log plot. This means that the estimation error ε indeed decays according to a power law in the number of samples M, i.e. ε = aMᵇ. The values of a and b for each QAE variant can easily be determined by fitting the results, which gives:
Note that we did not include results for the “simplified” method because it has a very large constant factor, making it slow compared to the other methods and therefore impractical for real-life applications. Looking at the exponents (i.e. the values of b), we can see that, as expected, the error of the classical method scales as the inverse square root of the number of samples. Furthermore, the naive quantum method scales the same way (i.e. it has the same value of b) as the classical method, meaning that it does not provide any speedup. Turning to the QAE variants, we see that, apart from numerical deviations (especially for the “likelihood” and “faster” methods) caused by the stochastic nature of the algorithms and the limited number of simulation repetitions, the exponents are indeed close to -1, which shows that these algorithms indeed scale quadratically better than the classical (and naive quantum) methods. Taking the constant factors (i.e. the values of a) into account as well, it turns out that in practice the “iterative” and “likelihood” methods perform better than the “canonical” and “faster” methods. The “power” method is a special case because it features a parameter β which interpolates between QAE-like and classical error scaling: the closer it is to 0, the better the error scaling the algorithm achieves (as can be seen in the above table). However, this comes at the expense of an increase in required quantum hardware resources.
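As an aside, such a power-law fit ε = aMᵇ amounts to a simple linear regression in log-log space; the sketch below illustrates the procedure on hypothetical placeholder data rather than on the actual benchmark results:

```python
import numpy as np

# Power-law fit eps = a * M^b via linear regression in log-log space.
# The arrays below are hypothetical placeholders, not the benchmark data.
M_samples = np.array([1e3, 1e4, 1e5, 1e6])
eps_values = np.array([3e-2, 1e-2, 3e-3, 1e-3])

b, log_a = np.polyfit(np.log(M_samples), np.log(eps_values), 1)
a = np.exp(log_a)
print(f"eps ≈ {a:.3f} * M^({b:.3f})")
```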
Apart from the error scaling of the different methods, it's also crucial, in the context of current NISQ devices, to compare the circuit depths and the number of two-qubit gates (i.e. CNOT gates) required to achieve a certain accuracy (for this comparative table we took ε≈0.02) for each of the methods. We did this by transpiling all of the quantum circuits to the native gate set of the IBMQ Montreal device and obtained the following results:
This shows that the canonical QAE algorithm requires very deep quantum circuits containing a lot of two-qubit gates (on the order of millions), confirming the doubts we had already raised at the end of the previous section. The QAE variants, on the other hand, allow for much shallower quantum circuits containing far fewer two-qubit gates, with the “iterative” method again emerging as the best-performing one. It should be noted that the “power” method can yield shallower circuits (for sufficiently high β values); however, as we have seen in the previous table, this comes at the expense of less efficient error scaling. The same holds true for the “naive” method, which has by far the shallowest circuits but at the same time doesn't offer any advantage over classical methods.
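As a side note, obtaining such resource counts in Qiskit is straightforward. A minimal sketch (assuming a recent Qiskit version in which the fake backends live in qiskit.providers.fake_provider; the circuit here is a trivial placeholder rather than an actual QAE circuit):

```python
from qiskit import QuantumCircuit, transpile
from qiskit.providers.fake_provider import FakeMontreal

# Transpile a circuit to the native gate set of the (fake) IBMQ Montreal
# device and read off its depth and CNOT count.
backend = FakeMontreal()

circuit = QuantumCircuit(5)  # trivial placeholder; use a QAE circuit instead
circuit.h(range(5))

transpiled = transpile(circuit, backend=backend, optimization_level=3)
print("depth:", transpiled.depth())
print("CNOTs:", transpiled.count_ops().get("cx", 0))
```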
Performance on “real” quantum hardware
Considering the above results, it's worth investigating whether the “iterative” method can be used to obtain accurate, better-than-classical results when run on a real quantum computer. Up to now we have been using Qiskit's qasm simulator, which mimics a perfect, so-called fault-tolerant quantum computer. In reality, however, current quantum computers are imperfect and introduce many different errors. That is why we also ran the “iterative” method on Qiskit's fake Montreal and fake Guadalupe simulators, which mimic the corresponding real quantum devices using system snapshots. Doing this for the settings of the first data point of the “iterative” method in the previous figure (leading to circuit depths and CNOT counts on the order of several hundred) gives:
The results show that, as expected, the accuracy of the qasm simulator results is very good and doesn't deteriorate with increasing qubit count. When using a simulator mimicking a real quantum computer, however, the results are less accurate than the qasm simulator results and do deteriorate with increasing qubit count. The Montreal machine achieves more accurate results than the Guadalupe machine thanks to its lower error rates, although they may still not be sufficiently accurate for real-life use cases. It should be noted, however, that we did not employ any error mitigation techniques, which would help increase the accuracy of the results. Furthermore, the error rates of these quantum computers are expected to improve in the future, improving the accuracy of the results even further, especially once they reach the scale at which quantum error correction can be implemented.
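For readers who want to reproduce this kind of experiment, setting up such a snapshot-based noisy simulator takes only a couple of lines (a sketch; import paths vary between Qiskit versions):

```python
from qiskit_aer import AerSimulator
from qiskit.providers.fake_provider import FakeGuadalupe

# Build a simulator whose noise model, basis gates, and coupling map are
# taken from a snapshot of the real Guadalupe device; the iterative QAE
# routine is then run against this backend instead of the ideal simulator.
noisy_backend = AerSimulator.from_backend(FakeGuadalupe())
```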
In conclusion, we have compared the performance of different QAE variants in the context of option pricing and empirically found that all of them offer the same quadratic speedup (i.e. quantum advantage) over classical Monte Carlo methods as the canonical QAE algorithm, while their quantum hardware requirements are much less stringent, opening the door towards near-term real-life use cases on NISQ devices. Finally, note that even though this article focused on European call options, the same quantum algorithms and techniques can also be implemented (and we have in fact done so) to tackle the pricing of other, more exotic options, the pricing of other derivative types like futures and swaps, and even more general risk analysis.
Do you have additional questions, or do you want to have further discussions in a business and/or scientific context? Don’t hesitate to reach out to Matthias Van der Donck at matthias.vanderdonck@intellecteu.com