Now that #81 is merged, we need to start trying to break it -- it is still rather fragile.
Step 0:
If we are going to have many rounds of entanglement, we need some way to "consume" the final entangled pairs so that we do not run out of slots. A good way to do this would be an `EntanglementConsumer` protocol which attempts to consume a Bell pair at a fixed rate. It could look something like

`EntanglementConsumer(nodeA::Int, nodeB::Int, log::Vector{Any}, period::Float64)`

When started, this protocol will query both `nodeA` and `nodeB` to check whether they have a shared entangled pair; it will then measure the pair's ZZ and XX observables and store a `(time, observables)` entry in its log. After a successful query, the slots of the pair get freed up. When a query fails, it will just store a `(time, nothing)` entry. It will perform the query every `period` units of time.
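The loop described above could be sketched roughly as follows. This is only a sketch: the field names follow the signature proposed above, but `query_entangled_pair`, `observe`, `release!`, `now`, and `wait` are placeholders for whatever the actual QuantumSavory simulation and query APIs end up being.

```julia
# Sketch only -- not the real API. Fields follow the proposed signature;
# every function called on a pair or the simulation clock is a placeholder.
Base.@kwdef struct EntanglementConsumer
    nodeA::Int
    nodeB::Int
    log::Vector{Any} = Any[]
    period::Float64 = 1.0
end

function run_consumer(prot::EntanglementConsumer)
    while true
        pair = query_entangled_pair(prot.nodeA, prot.nodeB)  # hypothetical lookup
        if isnothing(pair)
            push!(prot.log, (now(), nothing))                # failed query
        else
            zz = observe(pair, :ZZ)                          # measure ZZ observable
            xx = observe(pair, :XX)                          # measure XX observable
            push!(prot.log, (now(), (zz, xx)))
            release!(pair)                                   # free up the pair's slots
        end
        wait(prot.period)                                    # run once per `period`
    end
end
```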
Step 1:
The first thing to do is to run tests similar to `test_entanglement_tracker`, but with `EntanglerProt(... rounds > 1)`. This will cause issues because the old "entanglement history" tags might mess up update messages coming from newer swaps. Maybe a proper ordering of the query lookup (FILO vs FIFO) is enough and we do not need to worry. Either way, we should have much more detailed tests for such situations. The test can simply verify that every set of observables in the `EntanglementConsumer` log is equal to 1.
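The check at the end of that test could look something like the sketch below, assuming the log layout proposed in Step 0 (a vector of `(time, observables)` tuples, with `nothing` on failed queries):

```julia
# Sketch of the verification loop: for a correctly tracked Bell pair,
# every successful consumption should report ZZ = XX = 1.
for (t, obs) in consumer.log
    isnothing(obs) && continue   # failed query, nothing to check
    zz, xx = obs
    @test zz ≈ 1 && xx ≈ 1
end
```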
Step 2:
Now we want to run with multiple `EntanglerProt` instances per pair of neighbors, not simply one `EntanglerProt(... rounds > 2)`.
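In other words, something like the following sketch: several concurrent single-round entanglers per neighboring pair instead of one multi-round one. The `@process` spawning and the keyword arguments are assumptions about the eventual API, not a definitive implementation.

```julia
# Sketch: spawn several concurrent single-round entanglers per neighbor
# pair. `EntanglerProt` arguments and `@process` are assumed, not exact.
for (a, b) in neighbor_pairs
    for _ in 1:n_entanglers_per_pair
        entangler = EntanglerProt(sim, net, a, b; rounds=1)
        @process run_entangler(entangler)
    end
end
```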
Step 3:
Start worrying about "memory leaks": we need to figure out how to deal with very old history tags -- at some point they should be deleted. The details still need figuring out...
Need for `uuid`s:
It is possible that we will need to add `uuid`s to the entanglement tags to solve some of these issues.
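One way this could look, as a rough sketch: stamp each entanglement-history record with a UUID when the swap happens, so that a later update message can be matched to (and discarded together with) exactly the record that produced it. The tag layout and the `tag!` call shown here are assumptions, not the library's actual interface.

```julia
# Sketch only -- the tag layout is hypothetical.
using UUIDs

id = uuid4()  # unique id for this particular swap's history record
# Hypothetical tagging call: the update message carries the same `id`,
# so a stale record can be identified unambiguously and deleted.
tag!(slot, :EntanglementHistory, id, remote_node, remote_slot)
```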
Documentation:
There will also be a lot of documentation work to figure out here. This will be a multi-week task of its own.