
[Bug]: Elasticache ReplicationGroup not adjusting replica count #1563

Open

KhasDenis opened this issue Nov 12, 2024 · 1 comment
KhasDenis commented Nov 12, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Affected Resource(s)

  • elasticache.aws.upbound.io/v1beta2 - ReplicationGroup

Resource MRs required to reproduce the bug

apiVersion: elasticache.aws.upbound.io/v1beta2
kind: ReplicationGroup
metadata:
  name: elasticcache-dkh
spec:
  deletionPolicy: Orphan
  forProvider:
    applyImmediately: true
    atRestEncryptionEnabled: true
    authTokenSecretRef:
      key: password
      name: elasticcache-dkh-secret
      namespace: default
    authTokenUpdateStrategy: ROTATE
    autoGenerateAuthToken: false
    autoMinorVersionUpgrade: "true"
    automaticFailoverEnabled: false
    description: elasticcache-dkh- Replication Group
    engine: redis
    engineVersion: "7.0"
    ipDiscovery: ipv4
    maintenanceWindow: sun:23:00-mon:00:00
    multiAzEnabled: false
    networkType: ipv4
    nodeType: cache.m5.large
    numCacheClusters: 2
    parameterGroupName: elasticcache-dkh
    port: 6379
    region: eu-central-1
    securityGroupIds:
    - sg-05cb53f9b298ee781
    snapshotWindow: 02:30-03:30
    subnetGroupName: nebula-npg-eks-ec-sub-nprd
    transitEncryptionEnabled: true
    transitEncryptionMode: required
  initProvider:
    authTokenUpdateStrategy: ROTATE
  managementPolicies:
  - '*'
  providerConfigRef:
    name: default
  writeConnectionSecretToRef:
    name: elasticcache-dkh-output
    namespace: default

Steps to Reproduce

  • Apply the manifest above
  • Wait until the resource is created and ready
  • Change numCacheClusters from 2 to 3 (see the patch sketch below)
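
For step 3, a minimal sketch of the in-place change, assuming kubectl access to the management cluster (ReplicationGroup is a cluster-scoped managed resource, so no namespace is needed):

kubectl patch replicationgroup.elasticache.aws.upbound.io elasticcache-dkh \
  --type merge \
  -p '{"spec":{"forProvider":{"numCacheClusters":3}}}'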

What happened?

Applying the manifest above creates the ElastiCache Redis cluster successfully, with cluster mode disabled and two nodes: a primary and a replica. But if I then edit the ReplicationGroup and change numCacheClusters from 2 to 3, the additional node enters a create/delete loop in AWS, while the ReplicationGroup stays SYNCED and READY in the cluster.
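
One way to observe the loop from the AWS side, assuming the replication group's external name matches the manifest's metadata.name, is to poll the member cluster list with the AWS CLI:

aws elasticache describe-replication-groups \
  --replication-group-id elasticcache-dkh \
  --region eu-central-1 \
  --query 'ReplicationGroups[0].MemberClusters'

While the loop is running, a third member cluster repeatedly appears and disappears in this output, even though the ReplicationGroup reports SYNCED and READY.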

Relevant Error Output Snippet

No response

Crossplane Version

1.16.0

Provider Version

1.9.0

Kubernetes Version

1.30

Kubernetes Distribution

EKS

Additional Info

No response

KhasDenis added the bug and needs:triage labels on Nov 12, 2024
turkenf (Collaborator) commented Dec 1, 2024

Hi @KhasDenis,

Thank you for the issue report. Could you please provide the provider logs, showing whether any diff is detected?
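
A sketch of how those logs could be collected, assuming the provider runs in the upbound-system namespace (the namespace and pod name are assumptions; adjust to your install, and note that diff output typically requires the provider to be started with --debug):

# Find the provider pod; the exact name varies per install and package revision
kubectl -n upbound-system get pods | grep provider-aws-elasticache

# Stream its logs while changing numCacheClusters, filtering for detected diffs
kubectl -n upbound-system logs <provider-aws-elasticache-pod> -f | grep -i diff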
