Discover how quantum computing can revolutionize finance
Quantum computing is expected to revolutionize many different industries, including finance. One of the archetypal examples within this sector set to benefit from quantum computing is portfolio optimization, which on near-term quantum hardware is typically tackled with a quantum algorithm known as the variational quantum eigensolver (VQE). In this article, we compare the performance of standard VQE on this financial problem with some recent, state-of-the-art, improved versions, as well as with classical algorithms. We also discuss whether these algorithms really need quantum computers to achieve quantum advantage for portfolio optimization, or whether they fall into the category of so-called quantum-inspired algorithms.
Portfolio optimization is the process of constructing a selection of financial assets and assigning weights/amounts to them such that they optimize a certain objective function. This objective is usually a combination of minimizing volatility (i.e. risk) and maximizing returns, often subject to one or more constraints such as a fixed total budget. There are many ways to model this objective, but one mathematical framework which has stood the test of time is modern portfolio theory, also known as mean-variance analysis. It was first introduced by Harry Markowitz in 1952, and its mathematical formulation is detailed in the methodology section at the bottom of this article.
There are two fundamentally different ways of tackling this problem, depending on whether the asset amounts are assumed to be continuous variables or discrete, integer-valued variables. The continuous approach generally permits finding the optimal solution efficiently, although it is susceptible to numerical instabilities depending on the data. These can usually be mitigated, and the continuous approach is therefore adequate for retail investors. Institutional investors, on the other hand, typically trade in large, discrete blocks of assets, e.g. in ETF-creation and ETF-redemption packages. In order to find the optimal portfolio in this case, one could start from the continuous solution and round the amounts to the nearest allowed integer value for each block of assets. However, this rounding only yields an approximation, and solving the discrete problem directly is therefore expected to produce a better portfolio. Given the substantial budgets deployed by institutional investors, a lot of money is potentially left on the table when using the continuous approach.
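To make the contrast concrete, here is a minimal sketch of the continuous approach followed by naive rounding, using SciPy and purely illustrative data (the objective mirrors the structure of the portfolio function defined in the methodology section, but all numbers are toy values, and the transaction-cost terms are omitted):

```python
# Minimal sketch: continuous mean-variance optimization, then naive rounding.
# The data (mu, sigma, prices, budget) is illustrative, not from the article.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_assets = 5
mu = rng.uniform(0.0, 0.1, n_assets)          # expected returns
A = rng.normal(size=(n_assets, n_assets))
sigma = A @ A.T / n_assets                     # toy covariance matrix
prices = rng.uniform(10, 100, n_assets)
budget = 500.0
gamma, rho = 0.5, 0.001                        # risk aversion, budget penalty

def objective(n):
    value = n * prices                         # capital per asset
    risk = gamma * value @ sigma @ value
    returns = mu @ value
    budget_penalty = rho * (value.sum() - budget) ** 2
    return risk - returns + budget_penalty

# Continuous relaxation: amounts can be any non-negative real number.
res = minimize(objective, x0=np.ones(n_assets),
               bounds=[(0, None)] * n_assets)
print("continuous amounts:", res.x)

# Naive rounding to whole blocks -- typically suboptimal for the
# discrete problem, which is the gap the quantum approach targets.
print("rounded amounts:   ", np.round(res.x))
print("objective (continuous):", objective(res.x))
print("objective (rounded):   ", objective(np.round(res.x)))
```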
Unfortunately, there are serious challenges associated with the discrete approach. Due to its non-convex nature and the fact that the number of possible solutions scales exponentially with the number of binary variables, it belongs to the infamous class of NP-hard problems, while not being part of the class NP itself. This means that not only is there no known efficient algorithm for finding the optimal solution, there is also no known efficient way to determine whether a given solution is optimal.
This is where quantum computers come into the picture. These are fundamentally new types of computers whose building blocks (quantum bits, or qubits) are governed by the quirky laws of quantum mechanics. As a result, completely new quantum algorithms can be developed which leverage the principles of quantum mechanics (e.g. superposition, entanglement, interference) and which run on quantum computers to tackle problems that are intractable for classical computers. While it is assumed (but not yet proven) that quantum computers will not be able to efficiently solve NP-hard problems, it is also believed (again not yet proven) that they can significantly speed up the search for an (approximately) optimal solution and/or find solutions closer to the optimum.
However, current quantum computers are limited not only in size (i.e. number of qubits) but also in quality, e.g. imperfect quantum gates (the fundamental building blocks of quantum algorithms) and limited qubit coherence times (the time span during which a qubit retains its state). As a result, we cannot just run any quantum algorithm on these NISQ (noisy intermediate-scale quantum) devices and expect useful results. Instead, we need to make sure that the total number of successive gates on each qubit (roughly the so-called circuit depth) and the total runtime of the algorithm stay under control.
A well-known quantum algorithm which ticks these boxes is the variational quantum eigensolver, or VQE. It starts by preparing a quantum state determined by a set of parameters, which is then used to estimate the expectation value of the portfolio function (i.e. the equation at the top of the methodology section) with respect to this state. Thanks to the superposition principle, this quantum state can be a combination of multiple candidate solutions, so the quantum computer effectively calculates the weighted average of the portfolio function over the solutions in the superposition. This quantum subroutine is enclosed in an outer loop which classically optimizes the parameters of the quantum state in order to minimize this expectation value. The result is referred to as a hybrid quantum-classical algorithm.
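As an illustration, here is a minimal, entanglement-free sketch of this hybrid loop in Python (toy QUBO data, not the article's portfolio instances). For a product state of Ry rotations the expectation value can be evaluated classically in closed form, which is exactly the quantum-inspired regime discussed later in this article:

```python
# Minimal sketch of the hybrid VQE loop for a QUBO, in the
# entanglement-free ("quantum-inspired") setting. A product of Ry
# rotations gives each qubit an independent probability sin^2(theta/2)
# of reading out 1, so the QUBO expectation value has a closed form.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 6
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                      # symmetric toy QUBO matrix

def expectation(theta):
    p = np.sin(theta / 2) ** 2         # P(qubit i reads 1)
    # E[x_i x_j] = p_i p_j for i != j, E[x_i] = p_i on the diagonal
    M = np.outer(p, p)
    np.fill_diagonal(M, p)
    return float(np.sum(Q * M))

# Classical outer loop: COBYLA adjusts the rotation angles to
# minimize the expectation value, as in standard VQE.
theta0 = np.full(n, np.pi / 2)         # start in |+>^n
res = minimize(expectation, theta0, method="COBYLA",
               options={"maxiter": 100})

best_bits = (np.sin(res.x / 2) ** 2 > 0.5).astype(int)
print("most likely bitstring:", best_bits)
print("its QUBO value:", best_bits @ Q @ best_bits)
```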
There are a few ways in which the standard VQE algorithm can be improved. We implemented and discuss three of them here. The first one is replacing the standard mean optimization with CVaR (conditional value at risk) optimization, which we will refer to as α-VQE. The idea is that instead of optimizing the weighted average of the portfolio function over the solutions in the quantum state, we are actually more interested in optimizing the lowest portfolio function value found among those solutions. However, simply using this lowest value leads to a non-smooth, ill-behaved objective function that is difficult to handle for classical optimizers. One way to avoid this issue while still keeping the emphasis on the lowest-found value is to optimize the conditional value at risk, i.e. the average of the α% lowest portfolio function values found by the quantum computer. Note that the parameter α interpolates between the two extremes: α = 100% recovers the average used in standard VQE, while the limit α → 0% corresponds to the minimum sampled value.
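A short sketch of this aggregation step, with placeholder sampled energies:

```python
# Sketch of the CVaR objective used by alpha-VQE: average the lowest
# alpha-fraction of the sampled portfolio-function values instead of
# the full mean. The sampled energies here are placeholders.
import numpy as np

def cvar(energies, alpha=0.2):
    """Mean of the lowest alpha-fraction of sampled energies."""
    energies = np.sort(np.asarray(energies))
    k = max(1, int(np.ceil(alpha * len(energies))))
    return energies[:k].mean()

samples = np.random.default_rng(2).normal(size=1024)  # 1024 shots
print(cvar(samples, alpha=1.0))   # alpha = 100%: standard VQE mean
print(cvar(samples, alpha=0.2))   # the alpha = 20% used in this article
print(cvar(samples, alpha=1e-9))  # alpha -> 0%: the sampled minimum
```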
The second improvement of VQE which we implemented is filtering VQE, which we will refer to as f-VQE. This algorithm is a generalization of VQE which approximates the repeated action of a filtering operator on an initial quantum state by classically optimizing the parameters of that state. Applying the filtering operator increases the weights of the solutions in the superposition that have low portfolio function values, and decreases the weights of those with high values. As a result, after each application of this operator the probability of finding the solution with the lowest possible portfolio function value increases.
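To make this reweighting concrete, here is a minimal numerical sketch with toy probabilities and energies, using the inverse-power filter from the methodology section (the actual algorithm realizes each filtering step variationally rather than by direct computation):

```python
# Sketch of how a filtering operator reweights a superposition: after
# applying f(H), the probability of bitstring x scales with
# p(x) * f(E(x))^2. With f(H) = H^(-tau), low-energy solutions gain
# weight. Energies must be positive for this filter; all values below
# are illustrative.
import numpy as np

def filtered_probs(probs, energies, tau):
    weights = probs * energies ** (-2.0 * tau)   # p(x) * f(E)^2
    return weights / weights.sum()

probs = np.array([0.25, 0.25, 0.25, 0.25])       # uniform start
energies = np.array([1.0, 2.0, 3.0, 4.0])        # E(x) per bitstring

p = probs
for step in range(3):
    p = filtered_probs(p, energies, tau=1.0)
    print(f"after filter step {step + 1}:", np.round(p, 4))
# The weight concentrates on the lowest-energy bitstring step by step;
# f-VQE approximates each of these steps variationally.
```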
The third and final way to improve VQE which we implemented is using a so-called warm start of the parameters, which we will denote with the suffix -w. We do this by solving the continuous relaxation of the QUBO (defined in the methodology section at the bottom of this article), i.e. where the variables can take on any value in the range [0, 1]. As mentioned before, this type of continuous problem is generally efficiently solvable. Each continuous solution is then transformed into a corresponding quantum state parameter such that, for example, a variable with a solution of 0.5 corresponds to an equal superposition of 0 and 1 for the corresponding qubit (more details can be found in the methodology section).
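A minimal sketch of this mapping, assuming the standard angle choice θ = 2·arcsin(√x) (our assumption of the exact form; it reproduces the equal-superposition behaviour described above):

```python
# Sketch of the warm-start mapping (our assumption of the exact form,
# consistent with the description above): a continuous solution
# x in [0, 1] becomes the Ry angle theta = 2*arcsin(sqrt(x)), so that
# the qubit reads 1 with probability exactly x. For x = 0.5 this gives
# an equal superposition of 0 and 1.
import numpy as np

def warm_start_angles(x_continuous):
    x = np.clip(np.asarray(x_continuous), 0.0, 1.0)
    return 2.0 * np.arcsin(np.sqrt(x))

x = np.array([0.0, 0.25, 0.5, 1.0])
theta = warm_start_angles(x)
print("angles:", theta)                      # 0 ... pi
print("P(1) =", np.sin(theta / 2) ** 2)      # recovers x exactly
```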
For all of the following results, we use real stock data for the year 2020, selecting assets randomly from a total set of 100 stocks to generate different instances of the portfolio optimization problem. Note that in our experiments the number of qubits required to solve the problem is equal to the number of assets. See the methodology section for further details.
Now that we have a high-level understanding of these different VQE algorithms, we can have a closer look at how they compare when we apply them to the problem of portfolio optimization. To do this we check how the weight of the optimal solution inside the final quantum state changes as a function of the number of assets in our portfolio:
We can see that, in terms of this metric, the standard VQE algorithm is substantially outperformed by α-VQE, which in turn is substantially outperformed by f-VQE. At the same time, the results of both VQE and α-VQE are significantly improved by employing the warm start, whereas for f-VQE there was little to no room left for improvement. It is important to note that the optimal solution weight should not be interpreted as the success probability of finding the optimal solution. Each of these algorithms leads to a final quantum state which is a superposition of different solutions, and the laws of quantum mechanics do not allow us to inspect this state directly. Instead, we can only sample it, obtaining each solution in the superposition with a probability equal to its weight. If this probability is greater than or equal to the inverse of the number of samples (in our case 1024), we can expect to observe the corresponding solution. Indeed, in the above figure we were able to find the optimal solution for all VQE variants, all asset numbers, and every problem instance. Extrapolating these results to higher asset numbers suggests that standard VQE will be the first to fail to find the optimal solution, followed by α-VQE, and finally by f-VQE.
The reason we could compare our results with a known optimal solution is that, for modest asset numbers, we can simply perform an exhaustive search over all possible solutions, which is of course guaranteed to find the optimum. However, the calculation time of this brute-force method scales exponentially, doubling every time an extra asset is added, making it impractical for real-life scenarios. Comparing the calculation time of the exhaustive search with that of f-VQE-w yields the following result:
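For reference, such an exhaustive search is only a few lines (toy QUBO data, not the article's instances):

```python
# Brute-force baseline: enumerate all 2^N bitstrings of the QUBO and
# keep the best. Guaranteed optimal, but the runtime doubles with each
# extra asset (at N_p = 1, one qubit per asset). Toy QUBO data again.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 12
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2

best_value, best_x = np.inf, None
for bits in itertools.product((0, 1), repeat=n):
    x = np.array(bits)
    value = x @ Q @ x
    if value < best_value:
        best_value, best_x = value, x

print("optimal bitstring:", best_x)
print("optimal value:", best_value)
```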
For f-VQE-w we stopped the calculations when the lowest-value solution in the superposition had a weight at least twice that of the second-highest-weight solution, or when no new lowest value had been found for 20 consecutive iterations. For all asset numbers in the above figure, f-VQE-w was able to find the optimal solution for every problem instance. We can therefore conclude that, for 22 or more assets, f-VQE-w is a more suitable method than an exhaustive search.
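A sketch of these two stopping rules, with illustrative weights and values:

```python
# Sketch of the two stopping rules described above: stop when the
# lowest-value solution carries at least twice the weight of the
# runner-up, or when no new lowest value has appeared for 20
# consecutive iterations. All numbers below are illustrative.
import numpy as np

def should_stop(probs, values, iters_without_improvement,
                ratio=2.0, patience=20):
    best = int(np.argmin(values))               # lowest-value solution
    runner_up = np.max(np.delete(probs, best))  # heaviest other weight
    dominant = probs[best] >= ratio * runner_up
    stalled = iters_without_improvement >= patience
    return dominant or stalled

probs = np.array([0.62, 0.25, 0.08, 0.05])
values = np.array([-3.1, -2.4, -1.0, 0.2])
print(should_stop(probs, values, iters_without_improvement=5))  # True
```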
However, because of this exponential scaling, exhaustive search is not used in real-life scenarios, so it is not really a fair classical baseline for our quantum(-inspired) results. Instead, practitioners often use heuristic algorithms which quickly find good approximate solutions. Here we choose one of the most common heuristics for this type of problem, simulated annealing, as the classical benchmark. Comparing the lowest value found by this algorithm with the lowest value found by α-VQE-w gives the following result:
Note that here we chose α-VQE-w instead of f-VQE-w because the former is significantly faster for higher asset numbers. Furthermore, since simulated annealing is in turn faster than α-VQE-w (obtaining an approximate solution takes 1–6 seconds for the above asset range), we use the lowest value across 100 different annealing runs to compare with the α-VQE-w result. That way, both methods are given roughly the same amount of time to return a solution. We can see that α-VQE-w outperforms simulated annealing, and that the gap between the two methods widens with increasing asset numbers.
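For completeness, here is a minimal simulated-annealing sketch for a QUBO (our own toy implementation with single-bit flips and a geometric temperature schedule; the exact solver and settings used for the benchmark above may differ):

```python
# Minimal simulated annealing for a QUBO: random single-bit flips,
# accepted via the Metropolis rule at a geometrically cooling
# temperature. Toy data; illustrative hyperparameters.
import numpy as np

def simulated_annealing(Q, n_steps=5000, t_start=2.0, t_end=0.01,
                        rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    value = x @ Q @ x
    best_x, best_value = x.copy(), value
    for step in range(n_steps):
        t = t_start * (t_end / t_start) ** (step / n_steps)
        i = rng.integers(n)
        x[i] ^= 1                        # propose: flip one bit
        new_value = x @ Q @ x
        if new_value <= value or rng.random() < np.exp(-(new_value - value) / t):
            value = new_value            # accept the move
            if value < best_value:
                best_x, best_value = x.copy(), value
        else:
            x[i] ^= 1                    # reject: flip back
    return best_x, best_value

rng = np.random.default_rng(4)
Q = rng.normal(size=(16, 16))
Q = (Q + Q.T) / 2
x, v = simulated_annealing(Q, rng=rng)
print("best value found:", v)
```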
There is one final but very important remark. All of the above results were obtained using a variational quantum state which doesn't contain any entanglement. As a result, these calculations can all be performed efficiently on a classical computer, even for higher asset numbers. Such algorithms are known as quantum-inspired algorithms: even though they cannot lead to so-called quantum advantage, they can still provide improvements in speed and/or quality. To find out whether these VQE variants have the potential to exhibit real quantum advantage in the future, we start by adding entangling gates (in our case, CNOT gates) to the variational state and check whether this improves the results:
The figure indicates that adding entangling gates can lead to improved results. This suggests that, when applied to problem instances of sufficiently large scale and executed on future fault-tolerant quantum computers, these VQE algorithms could achieve quantum advantage for portfolio optimization.
In conclusion, we have compared the performance of different VQE variants in the context of portfolio optimization and found that the recent, state-of-the-art, improved versions can significantly outperform the standard VQE algorithm. Some of these algorithms could already achieve a quantum-inspired advantage over standard classical algorithms today. Furthermore, when tackling a problem instance of sufficiently large scale on a fault-tolerant quantum computer, these algorithms could even achieve true quantum advantage in the future. Finally, note that even though this article discussed the specific case of portfolio optimization, similar conclusions are expected to apply to other combinatorial optimization problems such as arbitrage, transaction settlement, vehicle routing, …
Do you have additional questions, or do you want to have further discussions in a business and/or scientific context? Don’t hesitate to reach out to Matthias Van der Donck at matthias.vanderdonck@intellecteu.com
Methodology
The mathematical formulation of modern portfolio theory takes the following form (the exact transaction-cost and market-impact terms below are our reconstruction, consistent with the symbol definitions that follow):
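$$
H = -\sum_{i=1}^{N_a} \mu_i\, n_i p_i \;+\; \gamma \sum_{i,j=1}^{N_a} \sigma_{ij}\, n_i p_i\, n_j p_j \;+\; \rho \Bigl( \sum_{i=1}^{N_a} n_i p_i - B \Bigr)^{2} \;+\; \nu \sum_{i=1}^{N_a} (n_i - n_{i,0})^2\, p_i^2 \;+\; \nu' \sum_{i=1}^{N_a} (n_i - n_{i,0})^2\, n_i p_i
$$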
In this equation, nᵢ, nᵢ,₀, pᵢ, and μᵢ are the amount, initial amount, price, and expected return of asset i, and σᵢⱼ is the expected correlation between assets i and j. Nₐ is the total number of assets and B is the total budget. γ and ρ are parameters which control the risk aversion and the importance of the budget constraint, respectively. ν is the percentage transaction cost (note that in reality the transaction cost depends on the absolute value of the transaction amount, but for computational simplicity we use a quadratic dependence here, which has the same qualitative effect), and ν’ is a parameter which quantifies market impact.
The goal of portfolio optimization is to find the amounts nᵢ which minimize the above portfolio function H. The discrete version of this problem can be solved by transforming this equation into a QUBO (quadratic unconstrained binary optimization) problem using a binary encoding of the integer-valued variables, i.e. nᵢ = Σₛ 2^s·xᵢ,ₛ with binary variables xᵢ,ₛ and s running from 0 to Nₚ−1,
where Nₚ is the number of bits of precision used for the integer variables, i.e. it defines the maximum value of nᵢ as nᵢ = 2^Nₚ − 1. The total number of binary variables in this QUBO formulation (and thus the number of qubits required to solve it on a quantum computer) is given by N = Nₐ·Nₚ. Note that in the above form the nᵢ are all non-negative integers, meaning that we only consider long positions. However, this can easily be extended to allow for short positions by changing the above transformation such that it can also yield negative values. The QUBO formulation is in turn transformed into an Ising problem by promoting each binary variable to the Pauli Z quantum operator by means of the transformation xᵢ,ₛ = (1 − Zᵢ,ₛ)/2.
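This substitution can be carried out mechanically. The following sketch converts a QUBO matrix into Ising couplings J, local fields h, and a constant offset, and verifies that the two forms agree on every bitstring:

```python
# Sketch of the QUBO -> Ising mapping via x = (1 - z)/2, where
# z in {+1, -1} is the eigenvalue of the Pauli Z operator. Returns
# couplings J, fields h, and a constant offset such that
# x^T Q x = sum_{i<j} J_ij z_i z_j + sum_i h_i z_i + offset.
import numpy as np

def qubo_to_ising(Q):
    Q = np.asarray(Q, dtype=float)
    S = Q + Q.T
    np.fill_diagonal(S, 0.0)          # symmetrized off-diagonal part
    J = np.triu(S, k=1) / 4.0
    h = -S.sum(axis=1) / 4.0 - np.diag(Q) / 2.0
    offset = np.triu(S, k=1).sum() / 4.0 + np.diag(Q).sum() / 2.0
    return J, h, offset

# Sanity check on a random instance: both forms agree on every bitstring.
rng = np.random.default_rng(5)
Q = rng.normal(size=(4, 4))
J, h, offset = qubo_to_ising(Q)
for bits in range(2 ** 4):
    x = np.array([(bits >> k) & 1 for k in range(4)])
    z = 1 - 2 * x
    assert np.isclose(x @ Q @ x, z @ J @ z + h @ z + offset)
print("QUBO and Ising forms agree on all bitstrings")
```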
For the expected returns μᵢ and expected correlations σᵢⱼ we use the average values over the year 2020. For the prices pᵢ we use those of the last day of that period. The budget is set according to the formula
The initial amounts are chosen by selecting a random integer in the range [0, 2*(2^Nₚ - 1)] for each asset. Furthermore, we use the parameter values γ=0.5, ν=0.01, ν’=0.001, Nₚ=1, and ρ=0.001*N. For each portfolio optimization instance we choose Nₐ assets randomly from a total predefined selection of 100 assets. Note that as a result of Nₚ=1, the number of qubits for each instance is equal to Nₐ.
For the variational quantum state we use alternating layers of Ry rotation gates (whose angles are the variational parameters) and CNOT gates between subsequent qubit pairs (i.e. linear entanglement). When not using a warm start, the initial parameters are all set to 0 except those of the last rotation layer, which are set to π/2 such that the resulting initial state is |+⟩ⁿ. When using a warm start, we again set all parameters to 0 except those of the last layer, which are in this case defined in terms of the continuous solutions xᵢ,ₛ such that a solution of 0.5 again yields an equal superposition (a standard choice with this property is θᵢ,ₛ = 2·arcsin(√xᵢ,ₛ), as sketched earlier).
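As an illustration, such a circuit can be built with Qiskit's TwoLocal (a sketch; the qubit count and repetition count here are illustrative), including the cold-start parameter initialization described above:

```python
# Sketch of the variational circuit described above, built with
# Qiskit's TwoLocal: alternating Ry rotation layers and CNOTs between
# neighbouring qubits (linear entanglement). The cold-start parameters
# are all zero except the final rotation layer, which is set to pi/2
# so the circuit prepares |+> on every qubit.
import numpy as np
from qiskit.circuit.library import TwoLocal

n = 4
reps = 2
ansatz = TwoLocal(n, rotation_blocks="ry", entanglement_blocks="cx",
                  entanglement="linear", reps=reps)

params = np.zeros(ansatz.num_parameters)
params[-n:] = np.pi / 2            # final Ry layer -> |+> on each qubit
circuit = ansatz.assign_parameters(params)
print(circuit.decompose())
```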
Each quantum circuit is executed 1024 times. For VQE and α-VQE we use COBYLA as the classical optimizer, whereas for f-VQE we implement a standard gradient descent solver with a learning rate of 2. In all cases we use a maximum of 100 iterations. Furthermore, for α-VQE we use α = 20%, and for f-VQE we use the filtering operator H^(−τ) with τ = 200.
All algorithms are simulated on a classical computer in the absence of a noise model, i.e. we assume a fully fault-tolerant quantum computer.