Added XEB script #221

Merged
merged 5 commits into from Apr 30, 2024
Conversation

arthurostrauss
Contributor

This script implements Cross-Entropy Benchmarking (XEB), which leverages the real-time processing features of the OPX. Namely, the random circuit generation and execution are done directly on the OPX. We also propose tools from Qiskit (must be installed) and Cirq (does not need to be installed) for enriching and strengthening the post-processing.
@deanpoulos
Contributor

Since I'm unable to comment on the notebook itself, I will add comments roughly cell-by-cell.

@deanpoulos
Contributor

deanpoulos commented Apr 27, 2024

Cell 1
I don't think you should run !pip install qiskit by default, since this could mess with a user's fragile conda environment. Better to remove the try-except, even though it's helpful, and just leave the pip command as a comment, e.g.,

# !pip install qiskit
from qiskit.circuit import QuantumCircuit
from qiskit.circuit.library import get_standard_gate_name_mapping as gate_map, UnitaryGate, RYGate

@deanpoulos
Contributor

deanpoulos commented Apr 27, 2024

Cell 3
Just a thought for the future: when there are many inputs into the protocol (which you call hyperparameters), you can put them into a dataclass, e.g.,

from dataclasses import dataclass, field
import numpy as np

@dataclass
class XEBConfig:
    save_dir: str = ""  # Directory where the data will be saved
    should_save_data: bool = True  # Whether to save the data
    generate_new_data: bool = True  # Whether to generate new data
    seqs: int = 100  # Number of random sequences to run per depth
    max_depth: int = 3  # Max depth of the XEB sequence
    step: int = 1  # Step for choosing which depths to run up to max_depth
    depths: np.ndarray = field(init=False)  # Array of depths to iterate through
    n_shots: int = 101  # Number of averages per sequence
    impose_0_cycle: bool = False  # Whether to impose the first gate at 0-cycle
    apply_two_qb_gate: bool = False

    def __post_init__(self):
        self.depths = np.arange(1, self.max_depth + 1, self.step)

The benefit of doing this is that you can easily save the config parameters to file or serialize them into a dictionary, allowing you to attach them directly to the experiment results, e.g., as part of the attributes of an xarray DataArray or as keyword arguments to np.savez:

from dataclasses import asdict
from pathlib import Path

config = XEBConfig(...)
config_dict = asdict(config)
...
np.savez(
    Path(config.save_dir) / filename,
    ...
    **config_dict,
    ...
)

This isn't usually done in ML, but it can be very handy in physics, where you share data around for portable analysis by other researchers, or where you do so many runs that you forget which parameters a given run was performed with and later want to publish the results.
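
For the xarray route mentioned above, here is a minimal sketch of attaching the config to a result array via its attrs; the dimension names and result shape are hypothetical, not taken from the notebook:

import numpy as np
import xarray as xr
from dataclasses import asdict

config = XEBConfig()
# hypothetical measurement results, one row per depth and one column per random sequence
results = np.zeros((len(config.depths), config.seqs))
data = xr.DataArray(
    results,
    dims=["depth", "sequence"],
    coords={"depth": config.depths},
    attrs=asdict(config),  # the config parameters travel with the data
)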

For now it's fine, but I thought I'd mention it because this is what I'd do if I was writing a protocol from scratch.

@deanpoulos
Contributor

deanpoulos commented Apr 27, 2024

Cell 4

Path().cwd()

is this needed?

@deanpoulos
Contributor

deanpoulos commented Apr 27, 2024

Cell 5
Here you generate the gatesets, offered as a choice of 1 or 2. I suggest you move all this logic into a separate Python file behind a function, e.g.,

gatesets.py

gateset = generate_gateset(gateset="supremacy")

The notebook is currently designed to be very transparent, which is good, but for a new user looking to just run the notebook and see the results, this cell gives too much detail. If they want more detail, it is only ever a single function call away.

I also suggest passing the gateset choice as a string to make things more explicit.

Following from my previous comment, the gateset choice/type can also be made a config parameter, such that you can write e.g.,

gateset = generate_gateset(gateset=config.gateset_choice)
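
To make this concrete, here is a minimal sketch of what such a factory could look like; the gateset names and gate contents below are placeholders, not the actual definitions from Cell 5:

import numpy as np
from qiskit.circuit.library import RXGate, RYGate

def generate_gateset(gateset: str = "supremacy"):
    """Return the list of single-qubit gates to draw from at each XEB cycle."""
    gatesets = {
        # placeholder contents, e.g. pi/2 rotations about X and Y
        "supremacy": [RXGate(np.pi / 2), RYGate(np.pi / 2)],
        "simple": [RXGate(np.pi / 2)],
    }
    if gateset not in gatesets:
        raise ValueError(f"Unknown gateset '{gateset}', expected one of {list(gatesets)}")
    return gatesets[gateset]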

@deanpoulos
Contributor

deanpoulos commented Apr 27, 2024

Cell 6
Here you seem to define some critical parameters for running the experiment. I think it's important to prompt the user directly to change what is there, and keep the templates clear, e.g.,

# change this so that it follows the name of your qubit elements.
#  for example, this is how it would look if one of your elements was "qubit0$xy"
qubit_elements = [f"qubit{i}$xy" for i in qubits]

# change this so that it follows the name of your readout elements.
readout_elements = [f"qubit{i}$rr" for i in qubits]

# change this to configure CZ operations on different qubit pairs
#  key: tuple of qubit indices to perform CZ
#  value: tuple containing (element name, operation name)
CZ_operations = {
    (0, 2): ("qubit2$z", "Cz$flux_pulse_control_qubit0"),
    # (1, 2): ("qubit2$z", "Cz$flux_pulse_control_qubit1"),
    # (2, 1): ("qubit2$z", "Cz$flux_pulse_control_qubit1"),
    # (2, 0): ("qubit2$z", "Cz$flux_pulse_control_qubit0"),
    # (2, 3): ("qubit3$z", "Cz$flux_pulse_control_qubit2"),
    # (3, 2): ("qubit3$z", "Cz$flux_pulse_control_qubit2"),
}

I also commented out the pairs which I think are unused, given that the default is qubits = [0, 2].

@deanpoulos
Contributor

deanpoulos commented Apr 27, 2024

Cell 8
Here you define some useful QUA macros. First, I'm concerned that some of the macros implicitly assume a template for the names of some of the qubit elements; e.g., the following macro

def align_qubit(qubit: int):
    base_qubit = f"qubit{qubit}$"
    elements = [base_qubit + el for el in ["xy", "rr", "z"]]
    align(*elements)

assumes that your elements have a dollar sign at the end. Instead, the macros should use a template similar to what you define in Cell 6, for example,

qubit_element_template = "qubit{}"
...
def align_qubit(qubit: int, qubit_element_template: str):
    base_qubit = qubit_element_template.format(qubit)
    elements = [base_qubit + el for el in ["xy", "rr", "z"]]
    align(*elements)
...
align_qubit(qubit, qubit_element_template)

The same assumption is made in the readout macro. They should all piggyback off the templates you defined in Cell 6 somehow.

Next, I think you should separate out all of the macros that don't require user input into a separate file called macros.py. This way, you keep the entire top half of the notebook dedicated to "things users should change".
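
A minimal sketch of how this split could look from the notebook side, assuming align_qubit is moved into macros.py as suggested:

# macros.py holds the QUA helpers that take all device-specific names as arguments
from macros import align_qubit
...
align_qubit(qubit, qubit_element_template)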

@deanpoulos
Contributor

Cell 9
The program is very compact and makes use of macros, which is great and helps with readability. One thing I wanted to mention is that you use a lot of in-line comments, which can make the code a little hard to read. The official Python style guide (PEP 8) and coding books alike discourage in-line comments, saying they should be used sparingly. I think it's easier to read when the comment is above the line. The style guide is worth a skim if you haven't already seen it.

I also wanted to comment on the code that looks like this:

with while_(g[q][d_] == g[q][d_ - 1]):

With the comment you have, it is obvious what this code is doing, but in general one-letter variables can be hard to track because you have to map in your head what each variable refers to. Since in this case each variable can be represented with a short word (g --> gates, q --> qubit, d_ --> depth_), it doesn't increase the length too much to just use the full word, and it helps with readability.
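
For illustration, the same line with the variables renamed (a sketch, assuming the surrounding loop variables are renamed accordingly) would read:

with while_(gates[qubit][depth_] == gates[qubit][depth_ - 1]):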

Otherwise, I haven't tried too hard to verify the logic of your code, but I trust that it's right if the results look good in a real experiment.

@deanpoulos
Contributor

General Comments
The rest of the code seems to be post-processing, data loading, result verification, etc. On this, I want to add that:

  1. (optional) Since you allow loading old data, you could make a separate notebook for the analysis of results (see the sketch after this list). This makes life easier when re-visiting old data for analysis, since you don't get confused/distracted by the parameters/code above which are dedicated to running a new experiment.
  2. Try to put as many general functions for post-processing in separate file(s). This helps readability and testability.
  3. Anything dedicated to your personal testing/verification should be removed from the final notebook(s).
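
As a hedged sketch of point 1, the top of such an analysis notebook could simply reload a run saved with np.savez as above; the file name and array keys here are hypothetical:

import numpy as np

# Reload a previously saved run together with the config parameters stored alongside it
data = np.load("xeb_run.npz")
depths = data["depths"]  # hypothetical key for the swept depths
probabilities = data["probabilities"]  # hypothetical key for the measured state probabilities
n_shots = int(data["n_shots"])  # config parameter recovered from **config_dict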

Otherwise, it looks really good!

Most of the comments have been addressed here.
There is now a dataclass for XEBConfig as suggested; however, we leave it to the user to exploit this structure for saving hyperparameters. Moreover, the macros file contains both the QUA and Python functions used throughout the notebook.
@deanpoulos
Contributor

Cell 6
I like how you added the qubit_element_template. I have extended its usage into the xy and rr element definitions, so you only have to change the template in one place:

# Change this so that it follows the name of your qubit elements.
# This is how it would look if your elements start with "qubit0$",
#  since the qubit index will eventually fill in the {} curly braces
qubit_element_template = "qubit{}$"
# And this is how it would look if one of your elements was "qubit0$xy"
qubit_elements = [f"{qubit_element_template.format(i)}xy" for i in qubits]
readout_elements = [f"{qubit_element_template.format(i)}rr" for i in qubits]

@arthurostrauss arthurostrauss merged commit 1219ba4 into qua-platform:main Apr 30, 2024
1 check passed