address race condition in DHT initialization #635
This PR attempts to address a race condition that can occur during expert bootstrapping. I spent several hours trying to identify the source of the problem, but ultimately fell back to an easy workaround that does not fix the underlying issue.
For some reason, calling `dht.get_visible_maddrs(True)` on the DHT node fixes the issue, so that is what we do now, every time a DHT node is started.

Since calling `get_visible_maddrs` apparently fixes the problem, we can assume that something this method does is important for expert initialization. Yet the only thing this method really does is use `Multiaddr`, so I can only assume there is some kind of failure involving `Multiaddr` somewhere else in the code.

This is far outside my area of expertise, and I have already spent a lot of time troubleshooting, so this is the best solution I could figure out for now.
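
For reference, a minimal sketch of the workaround as I understand it, assuming a default `hivemind.DHT` setup (the exact initialization arguments will differ in the actual code path):

```python
import hivemind

# Start a DHT node as usual (default bootstrap configuration assumed here).
dht = hivemind.DHT(start=True)

# Workaround applied in this PR: querying the node's visible multiaddrs
# with latest=True right after startup appears to avoid the race condition
# seen during expert bootstrapping, for reasons that are still unclear.
visible_maddrs = dht.get_visible_maddrs(True)
print(visible_maddrs)
```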