Bugfix release version cannot run KaHyPar #100

Merged: 4 commits, Apr 17, 2024
Changes from 3 commits
5 changes: 5 additions & 0 deletions docs/changelog.rst
@@ -1,6 +1,11 @@
Changelog
~~~~~~~~~

0.6.1 (April 2024)
------------------

* When using ``simulate`` with the ``TTNxGate`` algorithm, the initial partition is obtained using NetworkX instead of KaHyPar by default. This makes setup easier and means that ``TTNxGate`` can now be used when installing from PyPI. KaHyPar can still be used if ``use_kahypar`` from ``Config`` is set to True.

0.6.0 (April 2024)
------------------

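A minimal usage sketch of the behaviour described in this changelog entry (not part of the diff). The import paths are assumed from the package layout visible in this PR (``pytket.extensions.cutensornet.structured_state``); the circuit and ``leaf_size`` value are invented for illustration, and running it requires a GPU with cuTensorNet available.

```python
from pytket import Circuit
from pytket.extensions.cutensornet import CuTensorNetHandle
from pytket.extensions.cutensornet.structured_state import (
    Config,
    SimulationAlgorithm,
    simulate,
)

# Toy 6-qubit circuit, used only to exercise the TTNxGate algorithm.
circ = Circuit(6)
for q in range(5):
    circ.H(q).CX(q, q + 1)

with CuTensorNetHandle() as libhandle:
    # Default after this PR: the initial TTN partition is computed with
    # NetworkX, so a plain PyPI install is enough.
    cfg = Config(leaf_size=3)
    state = simulate(libhandle, circ, SimulationAlgorithm.TTNxGate, cfg)

    # Opt back in to KaHyPar partitioning if the kahypar package is installed.
    cfg_kahypar = Config(leaf_size=3, use_kahypar=True)
    state_kahypar = simulate(
        libhandle, circ, SimulationAlgorithm.TTNxGate, cfg_kahypar
    )
```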
5 changes: 5 additions & 0 deletions pytket/extensions/cutensornet/structured_state/general.py
@@ -82,6 +82,7 @@ def __init__(
float_precision: Type[Any] = np.float64,
value_of_zero: float = 1e-16,
leaf_size: int = 8,
use_kahypar: bool = False,
k: int = 4,
optim_delta: float = 1e-5,
loglevel: int = logging.WARNING,
@@ -113,6 +114,9 @@ def __init__(
``np.float64`` precision (default) and ``1e-7`` for ``np.float32``.
leaf_size: For ``TTN`` simulation only. Sets the maximum number of
qubits in a leaf node when using ``TTN``. Default is 8.
use_kahypar: Use KaHyPar for graph partitioning (used in ``TTN``) if this
is True. Otherwise, use NetworkX (worse, but easy to set up). Defaults
to False.
k: For ``MPSxMPO`` simulation only. Sets the maximum number of layers
the MPO is allowed to have before being contracted. Increasing this
might increase fidelity, but it will also increase resource requirements
@@ -177,6 +181,7 @@ def __init__(
raise ValueError("Maximum allowed leaf_size is 65.")

self.leaf_size = leaf_size
self.use_kahypar = use_kahypar
self.k = k
self.optim_delta = 1e-5
self.loglevel = loglevel
19 changes: 14 additions & 5 deletions pytket/extensions/cutensornet/structured_state/simulation.py
@@ -100,7 +100,9 @@ def simulate(
sorted_gates = _get_sorted_gates(circuit, algorithm)

elif algorithm == SimulationAlgorithm.TTNxGate:
qubit_partition = _get_qubit_partition(circuit, config.leaf_size)
qubit_partition = _get_qubit_partition(
circuit, config.leaf_size, config.use_kahypar
)
state = TTNxGate( # type: ignore
libhandle,
qubit_partition,
@@ -163,7 +165,7 @@ def prepare_circuit_mps(circuit: Circuit) -> tuple[Circuit, dict[Qubit, Qubit]]:


def _get_qubit_partition(
circuit: Circuit, max_q_per_leaf: int
circuit: Circuit, max_q_per_leaf: int, use_kahypar: bool
) -> dict[int, list[Qubit]]:
"""Returns a qubit partition for a TTN.

@@ -174,6 +176,8 @@ def _get_qubit_partition(
Args:
circuit: The circuit to be simulated.
max_q_per_leaf: The maximum allowed number of qubits per leaf node.
use_kahypar: Use KaHyPar for graph partitioning if this is True.
Otherwise, use NetworkX (worse, but easy to set up).

Returns:
A dictionary describing the partition in the format expected by TTN.
@@ -214,9 +218,14 @@ def _get_qubit_partition(
old_partition = partition.copy()
for key, group in old_partition.items():
# Apply the balanced bisection on this group
(groupA, groupB) = _apply_kahypar_bisection(
connectivity_graph.subgraph(group),
)
if use_kahypar: # Using KaHyPar
(groupA, groupB) = _apply_kahypar_bisection(
connectivity_graph.subgraph(group),
)
else: # Using NetworkX
(groupA, groupB) = nx.community.kernighan_lin_bisection(
connectivity_graph.subgraph(group),
)
# Groups A and B are on the same subtree (key separated by +1)
partition[2 * key] = groupA
partition[2 * key + 1] = groupB
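A self-contained toy sketch of the NetworkX fallback added above (not part of the diff). The loop is a simplified paraphrase of ``_get_qubit_partition`` rather than the function itself, and the connectivity graph and ``max_q_per_leaf`` value are invented for illustration.

```python
import networkx as nx

# Toy connectivity graph: nodes stand in for qubits, edges for two-qubit gates.
graph = nx.Graph(
    [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4), (0, 4)]
)

max_q_per_leaf = 2
partition = {0: list(graph.nodes)}

# Repeatedly apply a balanced bisection until every group fits in a leaf node.
while any(len(group) > max_q_per_leaf for group in partition.values()):
    old_partition = partition
    partition = {}
    for key, group in old_partition.items():
        group_a, group_b = nx.community.kernighan_lin_bisection(
            graph.subgraph(group)
        )
        # Children of tree node `key` get keys 2*key and 2*key + 1, mirroring
        # the binary-tree numbering used for the TTN partition.
        partition[2 * key] = list(group_a)
        partition[2 * key + 1] = list(group_b)

print(partition)  # four groups of two qubits each
```

Kernighan-Lin is a local refinement heuristic, so its cuts are typically worse than KaHyPar's multilevel partitioning, which is why the expected fidelities in the tests below drop slightly.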
2 changes: 1 addition & 1 deletion setup.py
@@ -42,7 +42,7 @@
license="Apache 2",
packages=find_namespace_packages(include=["pytket.*"]),
include_package_data=True,
install_requires=["pytket ~= 1.26"],
install_requires=["pytket ~= 1.26", "networkx >= 2.8"],
Collaborator:
Suggested change
install_requires=["pytket ~= 1.26", "networkx >= 2.8"],
install_requires=["pytket ~= 1.26", "networkx ~= 3.0"],

I would be careful with allowing 4.0 here: do you need 2.x, or should we just ask for 3.x to be installed? (see suggestion)

Collaborator (Author):
Good point. I believe 3.x should work, but I'll quickly check.

Collaborator (Author):
Yep, it works 👍

classifiers=[
"Environment :: Console",
"Programming Language :: Python :: 3.10",
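For context on the version pin discussed in the thread above (not part of the review): under PEP 440, ``~= 3.0`` is the compatible-release specifier, equivalent to ``>= 3.0, == 3.*``, so it accepts any NetworkX 3.x release while ruling out a future 4.0, whereas ``>= 2.8`` would also admit 4.0. A hypothetical snippet showing the pin in isolation (not this repository's actual ``setup()`` call):

```python
from setuptools import setup

setup(
    name="example-package",  # placeholder metadata, not this repository's
    install_requires=[
        "pytket ~= 1.26",
        # ">= 2.8" accepts 2.8+, 3.x and any future 4.0 release;
        # "~= 3.0" restricts to the 3.x series, as the reviewer suggests.
        "networkx ~= 3.0",
    ],
)
```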
5 changes: 2 additions & 3 deletions tests/test_structured_state.py
@@ -63,7 +63,6 @@ def test_copy(algorithm: SimulationAlgorithm) -> None:
simple_circ = Circuit(2).H(0).H(1).CX(0, 1)

with CuTensorNetHandle() as libhandle:

# Default config
cfg = Config()
state = simulate(libhandle, simple_circ, algorithm, cfg)
@@ -531,15 +530,15 @@ def test_circ_approx_explicit_ttn(circuit: Circuit) -> None:
# Check for TTNxGate
cfg = Config(truncation_fidelity=0.99, leaf_size=3, float_precision=np.float32)
ttn_gate = simulate(libhandle, circuit, SimulationAlgorithm.TTNxGate, cfg)
assert np.isclose(ttn_gate.get_fidelity(), 0.769, atol=1e-3)
assert np.isclose(ttn_gate.get_fidelity(), 0.751, atol=1e-3)
Collaborator (Author):
Since NetworkX provides a worse partition than KaHyPar, the fidelity in the simulation drops by a bit.

Collaborator:
Are you sure that you want to update the expected value and not the tolerance?

Collaborator (Author):
Hmm, I've been using these tests for very rudimentary regression/improvement tracking, which is why they check for a very particular fidelity value. It has happened before that changes I did not expect to affect the fidelity did change it. If I made the tolerance margin larger, it's likely some of those cases wouldn't be caught.

assert ttn_gate.is_valid()
assert np.isclose(ttn_gate.vdot(ttn_gate), 1.0, atol=cfg._atol)

# Fixed virtual bond dimension
# Check for TTNxGate
cfg = Config(chi=120, leaf_size=3, float_precision=np.float32)
ttn_gate = simulate(libhandle, circuit, SimulationAlgorithm.TTNxGate, cfg)
assert np.isclose(ttn_gate.get_fidelity(), 0.857, atol=1e-3)
Collaborator:
See above

Collaborator (Author):
Yeah, I'd rather change the fidelity. The value of the fidelity is consistent between different runs using the same code.

assert np.isclose(ttn_gate.get_fidelity(), 0.854, atol=1e-3)
assert ttn_gate.is_valid()
assert np.isclose(ttn_gate.vdot(ttn_gate), 1.0, atol=cfg._atol)
