[Discussion] Extending functionality of Slik and refactorings #14
Hi, I am more than glad to welcome you. Feel free to suggest ideas and to improve the areas that require additional work.
I will add these points to the roadmap, thanks for the input!
About the JSON format for log entries: protobuf is redundant here because any …
Could you please add …
@Insvald, after a deep analysis of the existing code base, I found the root cause of the code complexity. The main problem is routing. The Slik server acts as a proxy when the node that accepted the request is not the leader node: it tries to redirect the request to the leader. From my point of view, this routing should be handled by the cache client, not by the server nodes. The main reason is redundant traffic:
The same story applies to the response. I think the problem comes from the chosen architecture. There are two common approaches for this kind of cache:
The first approach allows us to use gRPC or any other duplex protocol for communication between the clients and the grid. However, it should be wrapped into a client library. The library is responsible for caching the location of the leader, retry logic, communication with the leader, receiving updates from the grid, and keeping an LRU cache. The second approach doesn't require any special protocol, and you can use the messaging infrastructure from DotNext.Net.Cluster for communication between nodes in the cluster. At the moment, the current implementation tries to behave like a grid while at the same time trying to hide the complexity from the client behind a proxy node. A sketch of what client-side routing could look like follows below.
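To make the first approach concrete, here is a minimal sketch of client-side leader routing. Everything here is hypothetical: `LeaderAwareClient` and its `call` delegate are not part of Slik or .NEXT, they only illustrate the idea of the client caching the leader address and following leader hints itself.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical client-side router: it caches the leader address and follows
// leader hints itself, so follower nodes never have to proxy traffic.
public sealed class LeaderAwareClient
{
    private readonly IReadOnlyList<Uri> nodes;  // all known cluster members
    private Uri? cachedLeader;                  // last node known to be the leader

    public LeaderAwareClient(IReadOnlyList<Uri> nodes) => this.nodes = nodes;

    // 'call' performs one request against a node and reports whether that node
    // was the leader, plus an optional hint pointing at the real leader.
    public async Task<TResponse> SendAsync<TResponse>(
        Func<Uri, Task<(bool IsLeader, Uri? LeaderHint, TResponse Result)>> call)
    {
        var target = cachedLeader ?? nodes[0];

        for (var attempt = 0; attempt <= nodes.Count; attempt++)
        {
            var (isLeader, leaderHint, result) = await call(target);
            if (isLeader)
            {
                cachedLeader = target;  // remember the leader for future calls
                return result;
            }

            // a follower answered: follow its hint or probe the next member
            cachedLeader = null;
            target = leaderHint ?? nodes[attempt % nodes.Count];
        }

        throw new InvalidOperationException("cluster leader not found");
    }
}
```

With something like this inside the client library, a follower only ever answers with a small hint instead of relaying the full request and response through itself.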
One more thing: it is possible to combine both approaches. The .NEXT Raft library provides so-called standby nodes. These nodes never become leaders but participate in replication. As a result, the clients can be standby nodes and remain stateless. Their persistent WAL can be stored in …
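For illustration, a minimal sketch of hosting such a standby node, under stated assumptions: the `standby` flag and the election-timeout keys are recalled from the .NEXT documentation, and the `JoinCluster()` extension from DotNext.AspNetCore.Cluster is only referenced in a comment; verify all of them against the library version in use.

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureAppConfiguration(cfg => cfg.AddInMemoryCollection(
        new Dictionary<string, string?>
        {
            // participates in replication, never campaigns for leadership
            ["standby"] = "true",
            ["lowerElectionTimeout"] = "150",
            ["upperElectionTimeout"] = "300",
        }))
    // .JoinCluster()  // DotNext.AspNetCore.Cluster extension; enables the Raft transport
    .Build();

await host.RunAsync();
```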
@sakno I am looking at this project as a basis for lightweight orchestration, hopefully in-process, without any additional standalone services/nodes. In such a scenario, writes should be relatively rare events; I am mostly concerned with reads and ease of use for a consumer. Nevertheless, any ideas are welcome, as at the moment I'm stuck with the containerd driver.
With the second approach, we need to choose one of the following:
The last one is possible with the routing middleware shipped with the DotNext.AspNetCore.Cluster library, as described here. AFAIK, the gRPC client doesn't support transparent redirection via the 302 HTTP status, while a REST API can do that.
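Here is a sketch of what that redirection amounts to for a REST endpoint. DotNext.AspNetCore.Cluster ships a ready-made version of this middleware (per the .NEXT docs); the `ILeaderLocator` abstraction and the `/cache` route below are hypothetical, used only to keep the sketch self-contained.

```csharp
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

var app = WebApplication.CreateBuilder(args).Build();
var locator = app.Services.GetRequiredService<ILeaderLocator>(); // registration omitted

app.Use(async (ctx, next) =>
{
    if (ctx.Request.Path.StartsWithSegments("/cache") && !locator.IsLocalNodeLeader)
    {
        // Plain HTTP clients follow this 302 automatically; a gRPC channel
        // would not, which is why this trick only fits the REST-style API.
        ctx.Response.Redirect(new Uri(locator.LeaderAddress, ctx.Request.Path).ToString());
        return;
    }

    await next();
});

app.MapPost("/cache", () => Results.Ok("accepted by the leader"));

app.Run();

// Hypothetical abstraction over the cluster state (e.g. backed by IRaftCluster).
public interface ILeaderLocator
{
    bool IsLocalNodeLeader { get; }
    Uri LeaderAddress { get; }
}
```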
Hi @sakno / @Insvald, I'm really curious about this project; it's a good way to explore and understand an implementation of gRPC with Raft consensus. It's sad that this project is no longer active. If @sakno / @Insvald would still like to work on the project, I'd welcome help implementing a Lucene.NET-based search engine like yelp/nrtsearch, which is implemented using gRPC/protobuf, plus some other features for performance.
@Jeevananthan-23, I don't own this project. However, you can ask your question in the .NEXT repo.
Hi @Insvald, I would like to join your project. I see that some refactoring can be applied: