forked from scylladb/scylla-cluster-tests
test issue #1
Comments
test
fruch pushed a commit that referenced this issue on Apr 4, 2021:
Test asymmetric clusters:
- bootstrapping an asymmetric cluster
- adding asymmetric nodes to a cluster

SMP selection:
- minimum SMP should be calculated as 50% of the max SMP
- max SMP is calculated as it is now, by default
- new parameter: db_nodes_shards_selection (default | random)

Add 2 new longevities to use it:
1. One based on large-partitions-8h, but shortened to 3 hours and run daily (like all the others).
2. Another based on 200gb-48h, but shortened to 12h.

Task: https://trello.com/c/WYdqMLgp/2672-test-asymmetric-clusters-bootstrapping-an-asymmetric-cluster-adding-asymmetric-nodes-to-cluster-customer-issue

feature(asymmetric cluster): address comments #1
feature(asymmetric cluster): address comments #2
feature(asymmetric cluster): address comments #3
fruch pushed a commit that referenced this issue on Apr 4, 2021:
Transfer the nemesis name into the 'decommission' method called by the ShrinkCluster nemesis. This info will be printed in the email, in the 'Terminated by nemesis' column of the terminated-nodes list.

fix(shrink cluster): address comments #1
fruch pushed a commit that referenced this issue on Apr 4, 2021:
fruch pushed a commit that referenced this issue on Nov 4, 2021:
fruch pushed a commit that referenced this issue on Nov 4, 2021:
We use kubectl wait to wait until all resources reach the proper state. There are two problems with this:
1. kubectl wait fails when no resource matches the criteria.
2. If resources are provisioned gradually, kubectl wait can slip through the cracks when half of the resources are provisioned and the rest are not even deployed.

In some places in SCT we use sleeps to tackle #1, which leads to failures on slow machines. This PR addresses the problem by wrapping kubectl wait so that it is restarted when no resources are there, and by tracking the number of resources it reported, waiting and rerunning if the resource count has changed.
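The wrapper described above could look roughly like this. It is a sketch under stated assumptions: `count_resources` and `kubectl_wait` are injected callables (in practice they would shell out to something like `kubectl get <kind> -o name | wc -l` and the real `kubectl wait` command); the names are illustrative.

```python
import time
from typing import Callable

def wait_for_resources(count_resources: Callable[[], int],
                       kubectl_wait: Callable[[], bool],
                       poll_seconds: float = 1.0,
                       timeout: float = 300.0) -> bool:
    """Re-run kubectl wait until the matched-resource count is stable.

    count_resources returns how many resources currently match;
    kubectl_wait runs the real 'kubectl wait' and returns True on success.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        current = count_resources()
        if current == 0:
            # kubectl wait errors out when nothing matches yet; poll again.
            time.sleep(poll_seconds)
            continue
        if kubectl_wait() and count_resources() == current:
            # The wait succeeded and no new resources appeared meanwhile.
            return True
        # Count changed (gradual provisioning) or the wait failed: rerun.
        time.sleep(poll_seconds)
    return False
```

This covers both failure modes: the zero-match case is handled by polling instead of failing, and gradual provisioning is handled by re-checking the count after each successful wait.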
fruch pushed a commit that referenced this issue on Nov 24, 2022:
Test asymmetric clusters:
- bootstrapping an asymmetric cluster
- adding asymmetric nodes to a cluster

SMP selection:
- minimum SMP should be calculated as 50% of the max SMP
- max SMP is calculated as it is now, by default
- new parameter: db_nodes_shards_selection (default | random)

Add 2 new longevities to use it:
1. One based on large-partitions-8h, but shortened to 3 hours and run daily (like all the others).
2. Another based on 200gb-48h, but shortened to 12h.

Task: https://trello.com/c/WYdqMLgp/2672-test-asymmetric-clusters-bootstrapping-an-asymmetric-cluster-adding-asymmetric-nodes-to-cluster-customer-issue

feature(asymmetric cluster): address comments #1
feature(asymmetric cluster): address comments #2
feature(asymmetric cluster): address comments #3

(cherry picked from commit 9723038)
fruch pushed a commit that referenced this issue on Jan 2, 2025:
Fix: change the directory for startup script upload from /tmp to $HOME. The change is required for DB nodes deployed in Cloud, where the /tmp dir is mounted with the noexec option, which makes script execution impossible there. As per discussion (#1), for DB nodes deployed in SCT the startup_script can be executed from either $HOME or /tmp.

refs:
#1: scylladb#9608
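The noexec detection behind this decision can be illustrated with a small parser over /proc/mounts content. This is a hypothetical helper, not the actual SCT code; the function name and home path are made up for the example.

```python
def pick_script_dir(mounts: str, home: str = "/home/scyllaadm") -> str:
    """Choose where to upload the startup script.

    If /tmp is mounted with the noexec option (as on the Cloud images
    described above), scripts cannot be executed from it, so fall back
    to the home directory. `mounts` is text in /proc/mounts format:
    device, mountpoint, fstype, options, dump, pass.
    """
    for line in mounts.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == "/tmp":
            options = fields[3].split(",")
            if "noexec" in options:
                return home  # /tmp is non-executable on this image
    return "/tmp"
```

On a real node one would read `open("/proc/mounts").read()` and pass it in; injecting the text keeps the logic testable.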
fruch pushed a commit that referenced this issue on Jan 2, 2025:
Temporary workaround for docker installation on the RHEL9 distro because of the issue (1). In the provided fix, docker packages are installed manually from the repo, hardcoding the OS version ($releasever) to a specific value in the repo file /etc/yum.repos.d/docker-ce.repo. After the issue is resolved, we can return to the previous approach with the installation script.

refs:
#1: moby/moby#49169
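Hardcoding $releasever amounts to patching the baseurl in the repo file. A sketch of what the patched /etc/yum.repos.d/docker-ce.repo section might look like, assuming the standard Docker CE repo layout (the pinned version "9" and URL are illustrative):

```ini
[docker-ce-stable]
name=Docker CE Stable - $basearch
# $releasever pinned to "9" as a temporary workaround (see moby/moby#49169)
baseurl=https://download.docker.com/linux/rhel/9/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/rhel/gpg
```

Once the upstream issue is resolved, the literal "9" can be reverted back to $releasever (or to the vendor installation script).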
fruch pushed a commit that referenced this issue on Jan 2, 2025:
The firewall should be disabled for RHEL-like distributions. Otherwise, it blocks incoming requests to port 3000 on the monitoring node (1). The same operation has already been implemented for db nodes setup and is only refactored here.

refs:
#1: scylladb#9630
fruch pushed a commit that referenced this issue on Jan 7, 2025:
The error message Manager returns for the enospc scenario has been changed to a more generic one (#1), so it doesn't make much sense to verify it. Moreover, there is a plan to fix the free-disk-space check behaviour, and the whole test will probably require rework (#2).

refs:
#1 - scylladb/scylla-manager#4087
#2 - scylladb/scylla-manager#4184
fruch pushed a commit that referenced this issue on Jan 7, 2025:
According to the comment (1), setting the keyspace strategy and replication factor is not needed when restoring the schema within one DC. It should be brought back after the implementation of (2), which will unblock schema restore into a different DC. For now, it's possible to restore the schema only within one DC.

Refs:
#1: scylladb/scylla-manager#4041 (issuecomment-2565489699)
#2: scylladb/scylla-manager#4049