---
author: KittyYeungQ
description: This document provides the technical details of the Microsoft QIO provider
ms.author: kitty
ms.date: 02/01/2021
ms.service: azure-quantum
ms.subservice: optimization
ms.topic: reference
title: Microsoft QIO provider overview
uid: microsoft.quantum.optimization.providers.microsoft.qio
---

# Microsoft QIO provider

The Microsoft QIO provider is enabled in every Azure Quantum workspace; a minimal sketch of connecting to a workspace from Python follows the provider details below.

- Publisher: Microsoft
- Provider ID: microsoft
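
The following is a minimal sketch of connecting to a workspace from Python with the `azure-quantum` package, so that the solvers described below can be instantiated. The subscription ID, resource group, workspace name, and location are placeholders that you would replace with your own values.

```python
# Minimal connection sketch using the azure-quantum Python package.
# All identifiers below are placeholders for your own workspace values.
from azure.quantum import Workspace

workspace = Workspace(
    subscription_id="<subscription-id>",  # placeholder
    resource_group="<resource-group>",    # placeholder
    name="<workspace-name>",              # placeholder
    location="<location>",                # e.g. "westus"
)
```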

## Targets

The Microsoft QIO provider makes the following targets available (a sketch showing how to instantiate these solvers from Python follows the list):

- Solver: Simulated Annealing (Parameter-free)
- Solver: Simulated Annealing (Parameter-free - FPGA)
- Solver: Simulated Annealing (Parameterized)
- Solver: Simulated Annealing (Parameterized - FPGA)
- Solver: Population Annealing (Parameterized)
- Solver: Parallel Tempering (Parameter-free)
- Solver: Parallel Tempering (Parameterized)
- Solver: Tabu Search (Parameter-free)
- Solver: Tabu Search (Parameterized)
- Solver: Quantum Monte Carlo (Parameterized)
- Solver: Substochastic Monte Carlo (Parameterized)
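
To illustrate how these targets are typically reached from the `azure.quantum.optimization` Python module, the sketch below instantiates a few of them. The parameter values are placeholders, and the parameter-free variants are obtained simply by omitting the tuning parameters; this is a sketch under those assumptions, not a parameter recommendation.

```python
# Sketch: instantiating a few Microsoft QIO solvers from Python.
# Assumes the `workspace` object from the earlier connection sketch;
# all numeric values are illustrative placeholders.
from azure.quantum.optimization import SimulatedAnnealing, ParallelTempering, Tabu

# Parameter-free variants: supply only a timeout (in seconds) and let the
# service choose the remaining parameters.
sa_parameter_free = SimulatedAnnealing(workspace, timeout=100)
pt_parameter_free = ParallelTempering(workspace, timeout=100)

# Parameterized variants: supply the tuning parameters explicitly.
sa_parameterized = SimulatedAnnealing(
    workspace, beta_start=0.1, beta_stop=1.0, sweeps=100, restarts=50
)
tabu_parameterized = Tabu(workspace, sweeps=100, tabu_tenure=10)
```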

## Target Comparison

The following table provides a brief comparison of the available targets:

| Name | Description | Best applicable scenario |
| ---- | ----------- | ------------------------ |
| Parallel Tempering | Rephrases the optimization problem as a thermodynamic system and runs multiple copies of the system, randomly initialized, at different temperatures. Then, based on a specific protocol, exchanges configurations at different temperatures to find the optimal configuration. | Generally outperforms Simulated Annealing on hard problems with rugged landscapes.<br>Very good at solving Ising problems. |
| Simulated Annealing | Rephrases the optimization problem as a thermodynamic system and considers the energy of a single system. Changes to the system are accepted if they decrease the energy or meet a criterion based on a decreasing temperature. | Convex landscapes. |
| Population Annealing | Aims to alleviate the susceptibility of the Metropolis algorithm to rough cost landscapes by simulating a population of Metropolis walkers, which continuously consolidates search efforts around favorable states. | Recommended for both sparse and dense graphs.<br>Might not be suitable for constraint problems with large penalty terms. |
| Quantum Monte Carlo | Similar to Simulated Annealing, but changes are made by simulating quantum tunneling through barriers rather than by using thermal energy jumps. | Optimization landscapes with tall, thin barriers.<br>Due to its large overhead, useful for small, hard problems. |
| Substochastic Monte Carlo | A diffusion Monte Carlo algorithm inspired by adiabatic quantum computation. It simulates the diffusion of a population of walkers in search space, removing or duplicating walkers based on how they perform according to the cost function. | Rough optimization landscapes where Simulated Annealing or Tabu Search might return a diverse set of solutions. |
| Tabu Search | Looks at neighboring configurations. It can accept worsening moves if no improving moves are available, and prohibits moves to previously visited solutions. | Convex landscapes, high-density problems, QUBO problems. |
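
To make the comparison concrete, here is a sketch of expressing a small Ising cost function and submitting it to one of the targets above. The term coefficients are arbitrary, and the choice of the parameter-free Parallel Tempering solver is only an example.

```python
# Sketch: submitting a small problem to one of the targets compared above.
# Assumes the `workspace` object from the earlier sketch; coefficients are arbitrary.
from azure.quantum.optimization import Problem, ProblemType, Term, ParallelTempering

problem = Problem(name="comparison-example", problem_type=ProblemType.ising)
problem.add_terms([
    Term(c=-9, indices=[0]),     # linear term on variable 0
    Term(c=5, indices=[1, 2]),   # coupling between variables 1 and 2
    Term(c=2, indices=[0, 2]),   # coupling between variables 0 and 2
])

solver = ParallelTempering(workspace, timeout=100)  # parameter-free variant
result = solver.optimize(problem)  # submits the job and waits for the result
print(result)  # includes the best configuration found and its cost
```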

## FPGA vs. CPU

For some solvers, two versions are offered: an unlabeled version that runs on traditional CPUs, and a version labeled FPGA that runs on field-programmable gate arrays. The following table lists the pros and cons of using FPGA solvers (a sketch showing how to select an FPGA solver from Python follows the table):

| Pros/Cons | FPGA solvers |
| --------- | ------------ |
| Pros | FPGA solvers run on highly optimized hardware that allows algorithms to be parallelized very efficiently, which can yield a significant performance gain over the corresponding CPU solvers.<br>FPGA solvers use a very condensed memory representation, so a problem with a large number of terms that fails on a CPU solver due to lack of memory may still run on the FPGA implementation of that solver. |
| Cons | FPGA solvers support at most 65,535 variables; this is a hard limit.<br>To achieve the best performance, FPGA solvers use 32-bit floating-point operations, so their accuracy is slightly lower than that of the CPU solvers. |
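
As referenced above, the following sketch shows how the FPGA version of a solver might be selected from Python. It assumes the `HardwarePlatform` option exposed by the `azure.quantum.optimization` module, and the timeout value is illustrative.

```python
# Sketch: selecting the FPGA version of a solver (where regionally available).
# Assumes the HardwarePlatform option of azure.quantum.optimization;
# the timeout value is illustrative.
from azure.quantum.optimization import SimulatedAnnealing, HardwarePlatform

fpga_solver = SimulatedAnnealing(workspace, timeout=100,
                                 platform=HardwarePlatform.FPGA)
```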

## FPGA Regional Availability

FPGA-based solvers are available only in a limited set of Azure regions. When creating your Azure Quantum workspace, you can check whether FPGA targets are available in the selected region by opening the Microsoft QIO provider blade on the Create screen. Regions that offer access to FPGA solvers show "FPGA simulated annealing" in their list of available targets.

For existing workspaces, check the "Providers" blade and select "Modify" to view your Microsoft QIO SKU. If your workspace is in a region where FPGA solvers are available, "FPGA simulated annealing" appears in the list of targets.

## Recommendations for FPGA solvers

FPGA solvers accept the same parameters as their corresponding CPU solvers, but for the best performance you should tune the parameters specifically for the FPGA versions rather than reusing the CPU solvers' parameters unchanged. For example, the FPGA solvers build roughly 200 parallel pipelines, each of which can handle one restart, so the number of restarts for an FPGA solver should be no less than 200.
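
For instance, assuming the parameterized FPGA simulated annealing solver accepts the same tuning parameters as its CPU counterpart, a tuned call might look like the sketch below; apart from the restart count, the values are illustrative placeholders rather than recommendations.

```python
# Sketch: tuning the parameterized FPGA simulated annealing solver.
# The restart count is set to 200 so that each of the ~200 parallel
# pipelines handles at least one restart; other values are placeholders.
from azure.quantum.optimization import SimulatedAnnealing, HardwarePlatform

fpga_tuned = SimulatedAnnealing(
    workspace,
    platform=HardwarePlatform.FPGA,
    beta_start=0.1,   # placeholder
    beta_stop=1.0,    # placeholder
    sweeps=100,       # placeholder
    restarts=200,     # at least 200: one restart per parallel pipeline
)
```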

FPGA solvers have an initialization overhead that can account for a large percentage of the total runtime on small problems. If a CPU solver can solve your problem within a few seconds, you are unlikely to see a performance gain by switching to FPGA. We recommend FPGA solvers when the CPU execution time is at least a couple of minutes.

## Pricing

For the most up-to-date pricing information on Microsoft's QIO offering, please refer to the Providers tab of your workspace on the Azure portal or visit the Azure Quantum pricing page.

## General advice for Microsoft QIO solvers

Here are some things to keep in mind when using our QIO solvers, and steps you can take to improve performance in certain cases. Note that other providers might have different requirements and recommendations specific to their solutions. The advice below applies to the terms that represent your problem and cost function; recall that a term is composed of a coefficient $c$ and a set of indices $\{i\}$. A preprocessing sketch illustrating all three steps follows the list.

  1. Remove terms whose coefficients fall below the computational precision:

    If the ratio of the largest to the smallest coefficient (in absolute value) is greater than $2^{64}$, the terms with small coefficients will likely not be taken into account and should be removed. In other words, you should remove any term with coefficient $|c_i| < \frac{\max_j |c_j|}{2^{64}}$.

  2. Merge duplicate terms:

    If you generate your terms automatically, you may encounter some that are duplicates of each other, that is, terms that contain the same set of decision variables (indices). Avoiding duplicate terms improves the performance of the solver, as it has fewer terms to handle.

    Multiple terms with the same set of variables should be merged into a single term by adding up the coefficients. For example, $3 \cdot x_2 x_4 x_1$ and $2 \cdot x_1 x_4 x_2$ can be merged into the single term $5 \cdot x_1 x_2 x_4$.

  3. Use integer coefficients:

    Whenever possible, use integer coefficients rather than floating-point numbers, as they provide greater precision.
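
The sketch below, referenced at the start of this list, illustrates all three recommendations in plain Python. The `preprocess_terms` helper is purely illustrative and not part of the azure-quantum SDK; only the `Term` class is taken from `azure.quantum.optimization`.

```python
# Sketch: preprocessing raw (coefficient, indices) pairs before building a Problem.
# The helper below is illustrative only and not part of the azure-quantum SDK.
from collections import defaultdict
from azure.quantum.optimization import Term

def preprocess_terms(raw_terms):
    """raw_terms: iterable of (coefficient, [indices]) pairs."""
    # 2. Merge duplicate terms: sum the coefficients of terms that share
    #    the same set of decision variables.
    merged = defaultdict(float)
    for c, indices in raw_terms:
        merged[tuple(sorted(indices))] += c

    # 1. Remove terms whose coefficients fall below the computational precision.
    max_c = max(abs(c) for c in merged.values())
    threshold = max_c / 2**64
    kept = {ids: c for ids, c in merged.items() if abs(c) >= threshold}

    # 3. Use integer coefficients whenever the value is a whole number.
    return [
        Term(c=int(c) if float(c).is_integer() else c, indices=list(ids))
        for ids, c in kept.items()
    ]

# Example: 3*x1*x2*x4 and 2*x1*x2*x4 are merged into 5*x1*x2*x4, and the
# vanishingly small term on x0 is dropped.
terms = preprocess_terms([(3, [2, 4, 1]), (2, [1, 4, 2]), (1e-30, [0])])
```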