Implement locking in CommitMultiStore #46
Conversation
@roy-dydx your pull request is missing a changelog!
Force-pushed from 1b7c154 to cb7b174 (Compare)
}
}

func TestStore_AddLocksDuringTransaction(t *testing.T) {
For my understanding: doesn't this mean that lock ordering can be non-deterministic across txns, which can lead to deadlocks? Am I missing something obvious?
You're correct. I put this in the comment in cachemulti/store.go. It is the user's responsibility to acquire these locks in an order that won't result in deadlock.
For example, I might make it an invariant to always take clob-id-based locks before subaccount-based locks for some set of goroutines that may run concurrently. But it would also follow that taking subaccount-based locks in two different Lock() statements would be unsafe (unless I am able to further bifurcate the subaccount keys and guarantee that each key will always be in either the first or second lock statement).
require.Equal(t, []byte{100}, lockingCms.GetKVStore(storeKey).Get(key))
}

func TestStore_MaintainLockOverMultipleTransactions(t *testing.T) {
For my understanding: what is the intended use case for this?
Example:
Order placement involves matching with the best order, doing the balance changes for that match, and then repeating for the next order. However, doing the balance changes requires modifying the fee collector, and I don't want to hold a lock on the fee collector for the majority of the transaction; it would kill parallelism.
So I separate this into an individual commit per potential maker order. This allows me to take the fee collector lock at the very end and release it before going into the next potential match.
However, that would mean that individual matches for a clob pair might be interleaved with matches from another taker order. I want to hold a clob pair lock for the whole sequence of transactions to prevent this.
// Unlock releases exclusive locks on a set of keys.
func (cms Store) Unlock(keys [][]byte) {
	for _, key := range keys {
Just to confirm: unlock ordering doesn't matter here, right? Since we are not acquiring locks once we start unlocking.
// Ensure that we always operate in a deterministic ordering when acquiring locks to prevent deadlock.
stringLockedKeys := make([]string, len(keys))
for i, key := range keys {
	stringLockedKeys[i] = string(key)
nbd, but we could use bytes.Compare to sort; probably slightly less overhead?
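A minimal sketch of that suggestion, assuming the keys arrive as `[][]byte` (`sortKeys` is a hypothetical helper, not code from the PR): sorting with `sort.Slice` and `bytes.Compare` skips the per-key string allocations.

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// sortKeys sorts the byte-slice keys in place lexicographically,
// avoiding the []byte -> string conversions in the current diff.
func sortKeys(keys [][]byte) [][]byte {
	sort.Slice(keys, func(i, j int) bool {
		return bytes.Compare(keys[i], keys[j]) < 0
	})
	return keys
}

func main() {
	fmt.Println(sortKeys([][]byte{{3}, {1}, {2}})) // [[1] [2] [3]]
}
```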
s.Unlock()
// Lock, Unlock, RLockRW, LockRW, RUnlockRW, UnlockRW constitute a permissive locking interface
// that can be used to synchronize concurrent access to the store. Locking of a key should
// represent locking of some part of the store. Note that improper access is not enforced, and it is
Suggested change:
- // represent locking of some part of the store. Note that improper access is not enforced, and it is
+ // represent locking of some part of the store. Note that proper access is not enforced, and it is
// - Introducing data races by reading or writing state that is claimed by a competing goroutine
// - Introducing deadlocks by locking in different orders through multiple calls to locking methods.
//   i.e. if A calls Lock(a) followed by Lock(b), and B calls Lock(b) followed by Lock(a)
// - Using a key as an exclusive lock after it has already been initialized as a read-write lock
Question: what's the expected behavior here? Casting the sync.RWMutex to a sync.Mutex should fail, right?
Yeah, a failed type assertion will panic.
wg.Wait()
// At some point there should be more than one reader. If this test is flaky, sleep time
// can be added to the reader to deflake.
require.Less(t, []byte{1}, lockingCms.GetKVStore(storeKey).Get(maxNumReadersKey))
Test failing here, possibly flaky
@@ -124,7 +104,7 @@ func newCacheMultiStoreFromCMS(cms Store) Store {
 		stores[k] = v
 	}

-	return NewFromKVStore(cms.db, stores, nil, cms.traceWriter, cms.traceContext)
+	return NewFromKVStore(cms.db, stores, nil, cms.traceWriter, cms.traceContext, cms.locks)
Documenting discussion from online: we intentionally pass around a pointer to the sync.Map so that cached contexts respect the same locks.
This change moves locking for KVStore to the CommitMultiStore.
In the original design, it was possible to specify locked keys per individual KVStore. However, this is not especially helpful, because the mapping from a locked key to state is done by the user. It also suggests the need to lock mutexes for each store, which can be wasteful. It remains up to the user to logically map lock keys to state, but this is now done at the higher CommitMultiStore level.
This implementation enables some new functionality:
- Locks can be added during a transaction. See TestStore_AddLocksDuringTransaction for an example.
- A lock can be maintained over multiple transactions. See TestStore_MaintainLockOverMultipleTransactions for an example.
- Read-write locks are supported. See TestStore_ReadWriteLock for an example.