Describe the update
Need documentation explaining how to properly achieve high availability when using distributed hypertables.
Questions/items to be addressed:
What is the impact of using asynchronous replication for the access node? That is, what happens when a dirty failover is performed and the new access node is running with out-of-date information about the state of the data nodes?
What is the overhead of using synchronous replication on the access node? Since all data is sent to the data nodes and only metadata is stored on the access node, does synchronous replication have a near-negligible performance impact, essentially only when new chunks are created? (A configuration sketch follows this list.)
Can the access node replica be used for read queries?
Document that there is currently no way to rebalance data when a new data node is added. A workaround is to cordon the old nodes until the new node catches up in terms of data volume, but this makes the new node a hot spot (a cordoning sketch follows this list).
If replication_factor should not be used, do we have to use full replicas of the data nodes?
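For drafting purposes, here is a minimal sketch of what synchronous replication for the access node could look like. It uses only stock PostgreSQL streaming-replication settings, nothing TimescaleDB-specific, and the standby name an_replica is hypothetical:

```sql
-- On the access node primary: do not acknowledge a commit until the
-- standby (application_name 'an_replica', hypothetical) has confirmed it.
ALTER SYSTEM SET synchronous_standby_names = 'an_replica';
-- 'remote_apply' additionally guarantees that reads on the replica see
-- the committed metadata, which bears on the read-query question above.
ALTER SYSTEM SET synchronous_commit = 'remote_apply';
SELECT pg_reload_conf();
```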
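The cordoning workaround could be illustrated along these lines. The node names dn1/dn2/dn3 are hypothetical, and I am assuming block_new_chunks / allow_new_chunks are the right multi-node calls for this; please verify against the 2.0 API reference:

```sql
-- Hypothetical scenario: dn3 was just added. Stop placing new chunks
-- on the existing nodes so dn3 absorbs all incoming data.
SELECT block_new_chunks('dn1');
SELECT block_new_chunks('dn2');

-- Later, once dn3 has caught up in data volume, lift the cordon.
SELECT allow_new_chunks('dn1');
SELECT allow_new_chunks('dn2');
```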
Items not currently supported, but to be documented when they are:
How is high availability achieved with create_distributed_hypertable(replication_factor => N)? (See the sketch after this list.)
How do we rebalance data after adding a new data node? (Remove the note above regarding the lack of rebalancing.)
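For the replication_factor item, the call under discussion looks like this (table and column names are illustrative):

```sql
-- Each chunk of 'conditions' is written to 2 data nodes, so the
-- hypertable survives the loss of any single data node.
SELECT create_distributed_hypertable('conditions', 'time',
                                     replication_factor => 2);
```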
Add the appropriate label: 2.0
Location(s)
https://docs.timescale.com/beta-v2.0.0/getting-started/setup-multi-node
and/or
https://docs.timescale.com/beta-v2.0.0/tutorials/clustering
How soon is this needed?