Storage --> the same for all servers, defined by EBS: 16 TB total storage on a single server, max. This is not affected by the number of tables.
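A quick way to see how close you are to that ceiling is to check on-disk sizes directly. This is a sketch assuming a TimescaleDB 2.x install; `metrics` is a placeholder hypertable name:

```sql
-- Total on-disk size of the current database.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Size of one hypertable (all chunks included), via TimescaleDB's
-- hypertable_size() helper. 'metrics' is an illustrative name.
SELECT pg_size_pretty(hypertable_size('metrics'));
```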
Number of hypertables / chunks --> this can hit limitations in PostgreSQL and the way we plan how queries will execute. The more chunks on the system, the slower planning becomes, and it is much worse if all those chunks belong to a single hypertable than if they are spread across multiple hypertables. Either way it slows things down. There is no hard cap here, just things getting slower and less manageable.
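To see which hypertables are accumulating chunks, the informational views shipped with TimescaleDB 2.x can be queried directly. A sketch:

```sql
-- Count chunks per hypertable to spot the ones driving up planning cost.
SELECT hypertable_name, count(*) AS chunk_count
FROM timescaledb_information.chunks
GROUP BY hypertable_name
ORDER BY chunk_count DESC;
```

If one hypertable dominates this list, a longer chunk interval for it is usually the first thing to consider.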
Number of hypertables / chunks --> similar to how our compression or tiering jobs need to do work in the background to process every chunk, PostgreSQL has additional background workers (e.g. autovacuum) that also need to go over each chunk and do some processing to keep things "maintained". Too many tables or chunks means those workers don't have enough time to catch up, which can leave the system poorly maintained, or forces them to do more background work to keep up, slowing other things down.
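One hedged way to check whether maintenance is falling behind is to look at autovacuum activity on the chunk tables, which live in the `_timescaledb_internal` schema. A sketch using the standard PostgreSQL statistics view:

```sql
-- Chunks carrying many dead tuples that autovacuum has not visited
-- recently are a sign the background workers can't keep up.
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE schemaname = '_timescaledb_internal'
ORDER BY n_dead_tup DESC
LIMIT 20;
```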
we do not recommend tens of thousands of hypertables - PostgreSQL and Timescale are not built to run optimally with such setups.
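The usual alternative to many near-identical hypertables is a single hypertable with an identifier column. This is a sketch, not prescribed by the thread above; table and column names are illustrative:

```sql
-- One hypertable for all devices/tenants, instead of one per device.
CREATE TABLE metrics (
    ts        timestamptz NOT NULL,
    device_id text        NOT NULL,
    value     double precision
);
SELECT create_hypertable('metrics', 'ts');

-- Index to keep per-device queries fast.
CREATE INDEX ON metrics (device_id, ts DESC);
```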
Add a note in https://docs.timescale.com/use-timescale/latest/hypertables/about-hypertables/ to explain best practice for scaling and limits, covering the three limits above and the recommendation against tens of thousands of hypertables.