
disrupt_add_remove_dc should wait and verify altering keyspace before dropping it #9430

Status: Open
yarongilor opened this issue Dec 1, 2024 · 4 comments
Labels: Bug (Something isn't working right)

@yarongilor (Contributor) commented:

This is a follow-up of scylladb/scylladb#21337.

In the disrupt_add_remove_dc nemesis, before dropping the keyspace with:

    session.execute('DROP KEYSPACE IF EXISTS keyspace_new_dc')

the temporary_replication_strategy_setter (or any alternative code) should wait and verify that altering the keyspace has finished.
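
A minimal sketch of the kind of wait-and-verify step meant here (the helper name, the polling approach, and the timeout values are illustrative assumptions, not existing SCT code): poll system_schema.keyspaces until the keyspace's replication map matches the expected one, and only then proceed to the DROP.

    import time

    def wait_for_keyspace_replication(session, keyspace: str, expected_replication: dict,
                                      timeout: int = 300, poll_interval: int = 5):
        """Poll system_schema.keyspaces until the keyspace's replication map
        matches expected_replication, or raise after timeout seconds."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            row = session.execute(
                "SELECT replication FROM system_schema.keyspaces WHERE keyspace_name = %s",
                (keyspace,),
            ).one()
            # the driver returns the replication column as a mapping of option -> value
            if row and dict(row.replication) == expected_replication:
                return
            time.sleep(poll_interval)
        raise TimeoutError(f"replication of keyspace {keyspace} did not reach "
                           f"{expected_replication} within {timeout}s")

    # hypothetical usage before the DROP in disrupt_add_remove_dc:
    # wait_for_keyspace_replication(session, 'keyspace_new_dc', expected_map)
    # session.execute('DROP KEYSPACE IF EXISTS keyspace_new_dc')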

@yarongilor (Contributor, Author) commented:

@pehala, please assign this issue.

@soyacz (Contributor) commented Dec 1, 2024:

Probably this part of the ReplicationStrategy class should be adjusted:

    def apply(self, node: 'BaseNode', keyspace: str):
        cql = f'ALTER KEYSPACE {cql_quote_if_needed(keyspace)} WITH replication = {self}'
        with node.parent_cluster.cql_connection_patient(node) as session:
            session.execute(cql)
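
One possible shape for that adjustment, sketched under two assumptions: that the strategy object can expose its target replication map (the replication_map attribute below is hypothetical), and that a verification helper such as the wait_for_keyspace_replication() sketched above is available.

    def apply(self, node: 'BaseNode', keyspace: str):
        cql = f'ALTER KEYSPACE {cql_quote_if_needed(keyspace)} WITH replication = {self}'
        with node.parent_cluster.cql_connection_patient(node) as session:
            session.execute(cql)
            # assumed verification step: block until the cluster reports the new
            # replication map, so callers (e.g. the DROP KEYSPACE that follows in
            # disrupt_add_remove_dc) cannot race the ALTER
            wait_for_keyspace_replication(session, keyspace, self.replication_map)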

@timtimb0t (Contributor) commented:

I think the issue was reproduced here:
https://argus.scylladb.com/tests/scylla-cluster-tests/6f7baf8c-b3c0-4b6f-898a-872ed564d7fa
Backend: aws
Region: eu-west-1
Image id: ami-0f17eee5c68671398
SCT commit sha: 78c864c
SCT repository: [email protected]:scylladb/scylla-cluster-tests.git
SCT branch name: origin/master
Kernel version: 6.8.0-1021-aws
Scylla version: 6.3.0~dev-20241220.10c79a4d4745
Build id: db2221d5e59af515814b575e246164c1b5b703e7
Instance type: i4i.4xlarge
Node amount: 4

@timtimb0t (Contributor) commented:

Packages

Scylla version: 6.3.0~dev-20241226.3e22998dc131 with build-id 6e1de4e4527d9c6c1ab68f21d0cca6a8022cae06

Kernel Version: 6.8.0-1021-aws

Installation details

Cluster size: 6 nodes (i4i.4xlarge)

Scylla Nodes used in this run:

  • longevity-tls-50gb-3d-master-db-node-0c201083-9 (3.254.19.125 | 10.4.21.233) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-8 (54.74.118.53 | 10.4.20.165) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-7 (18.203.177.108 | 10.4.21.103) (shards: -1)
  • longevity-tls-50gb-3d-master-db-node-0c201083-6 (52.214.32.42 | 10.4.22.248) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-5 (63.32.226.69 | 10.4.21.147) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-4 (54.72.50.142 | 10.4.22.183) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-3 (52.208.113.14 | 10.4.21.38) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-2 (108.128.152.35 | 10.4.22.59) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-16 (54.155.34.66 | 10.4.21.198) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-15 (54.72.178.214 | 10.4.21.151) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-14 (54.73.1.89 | 10.4.22.199) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-13 (34.240.182.66 | 10.4.23.127) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-12 (54.220.31.62 | 10.4.21.152) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-11 (99.80.22.41 | 10.4.21.187) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-10 (54.195.255.128 | 10.4.23.161) (shards: 14)
  • longevity-tls-50gb-3d-master-db-node-0c201083-1 (54.72.98.140 | 10.4.22.20) (shards: 14)

OS / Image: ami-008e5a7c99f9a0a76 (aws: undefined_region)

Test: longevity-50gb-3days-test
Test id: 0c201083-b45e-437f-9114-2ce6cc806492
Test name: scylla-master/tier1/longevity-50gb-3days-test
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 0c201083-b45e-437f-9114-2ce6cc806492
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 0c201083-b45e-437f-9114-2ce6cc806492

Logs:

Jenkins job URL
Argus
