Hi. I've successfully taken a backup to S3 using the Backup CR. The backup restores fine if I set backupName in the Restore CR, but if I instead try to restore it by setting the s3 config in the backupSource field of the Restore CR, the restore job is created but fails.
More about the problem
Here's what I did. I installed operator version 1.15.1 with the Helm chart.
Helm values:
# Default values for pxc-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
operatorImageRepository: percona/percona-xtradb-cluster-operator
imagePullPolicy: IfNotPresent
image: ""
# set if you want to specify a namespace to watch
# defaults to `.Release.namespace` if left blank
# watchNamespace:
# set if operator should be deployed in cluster wide mode. defaults to false
watchAllNamespaces: true
# rbac: settings for deployer RBAC creation
rbac:
  # rbac.create: if false RBAC resources should be in place
  create: true
# serviceAccount: settings for Service Accounts used by the deployer
serviceAccount:
  # serviceAccount.create: Whether to create the Service Accounts or not
  create: true
# set if you want to use a different operator name
# defaults to `percona-xtradb-cluster-operator`
# operatorName:
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you don't want to specify resources, comment the following
  # lines and add the curly braces after 'resources:'.
  limits:
    cpu: 1
    memory: 2Gi
  requests:
    cpu: 100m
    memory: 20Mi
containerSecurityContext: {}
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
logStructured: false
logLevel: "INFO"
disableTelemetry: true
extraEnvVars: []
# - name: http_proxy
#   value: "example-proxy-http"
# - name: https_proxy
#   value: "example-proxy-https"
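For reference, the install itself was the standard chart install, something like the following (the release name is from memory; testtt is the namespace used throughout):

helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update
helm install pxc-operator percona/pxc-operator --version 1.15.1 \
  --namespace testtt --create-namespace -f values.yaml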
Helm created the operator deployment. Then I created a cluster with kubectl: first the pxc secrets, then the XtraDB cluster CR (roughly as in the sketch below).
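# Not my verbatim manifests -- a trimmed approximation. Passwords, the S3
# credentials secret, region and endpoint are placeholders, and the haproxy
# and backup images are whatever the 1.15.1 cr.yaml defaults to.
apiVersion: v1
kind: Secret
metadata:
  name: cluster1-secrets
type: Opaque
stringData:
  root: <root-password>
  xtrabackup: <xtrabackup-password>
  monitor: <monitor-password>
  proxyadmin: <proxyadmin-password>
  operator: <operator-password>
  replication: <replication-password>
---
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
spec:
  crVersion: 1.15.1
  secretsName: cluster1-secrets
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.36-28.1
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 6Gi
  haproxy:
    enabled: true
    size: 3
    image: <haproxy image from the 1.15.1 cr.yaml>
  backup:
    image: <backup image from the 1.15.1 cr.yaml>
    storages:
      lex-ir-s3:
        type: s3
        s3:
          bucket: backups-khuhtziv
          credentialsSecret: <s3-credentials-secret>
          region: <region>
          endpointUrl: <s3-endpoint>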
The cluster-1-pxc-0 and cluster-1-haproxy-0 pods become Running after a few minutes.
Then I port-forward the HAProxy service, connect to the database with DbGate, create a database and a table, and insert a record into it. It all works fine and the database now has data.
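From memory, that manual check went roughly like this (the service name and SQL are illustrative, not pasted from my terminal):

kubectl -n testtt port-forward svc/cluster1-haproxy 3306:3306

-- executed from DbGate over 127.0.0.1:3306:
CREATE DATABASE demo;
CREATE TABLE demo.t1 (id INT PRIMARY KEY, note VARCHAR(100));
INSERT INTO demo.t1 VALUES (1, 'hello from cluster1');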
Now I create a backup CR with this yaml:
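# Reconstructed rather than verbatim; the name, cluster and storage match the
# pxc-backup that kubectl lists further down.
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: cluster1
  storageName: lex-ir-s3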
The backup job is created and its status becomes Succeeded (the backup job logs confirm this). I can view the backup files in S3.
Now I want to create a new cluster (cluster2) and restore the backup to it. I use the same CR yaml as cluster1, changing only metadata.name to cluster2, and apply it.
Cluster2 pods become Running.
Now I want to restore the data to it, and this is where the problems start.
I followed this official doc: https://docs.percona.com/percona-operator-for-mysql/pxc/backups-restore-to-new-cluster.html
According to the doc, I needed to find the name of the pxc backup. I ran this command to show my pxc-backups:

mahdi@sajjad-debian12:~/code/xtradb-test$ lexkube -n testtt get pxc-backups
NAME      CLUSTER    STORAGE     DESTINATION                                                              STATUS      COMPLETED   AGE
backup1   cluster1   lex-ir-s3   s3://backups-khuhtziv/my-pxc-backup/cluster1-2024-10-30-14:11:23-full    Succeeded   8m48s       9m22s

The NAME is backup1. I created a Restore CR:
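# Not the verbatim manifest -- a reconstruction of the shape I used, following
# the doc's backupSource example. The restore name and target cluster match the
# operator error below; the credentials secret, region and endpoint stand in
# for my real values, and destination holds the backup NAME as I read the doc.
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore-to-cluster2
spec:
  pxcCluster: cluster2
  backupSource:
    destination: backup1
    s3:
      bucket: backups-khuhtziv
      credentialsSecret: <s3-credentials-secret>
      region: <region>
      endpointUrl: <s3-endpoint>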
The operator showed an error and did not create the restore job:

2024-10-30T14:23:56.687Z ERROR Reconciler error {"controller": "pxcrestore-controller", "namespace": "testtt", "name": "restore-to-cluster2", "reconcileID": "c049b4bf-4b13-40a1-a201-8a467c47aa5a", "error": "failed to validate restore job: failed to validate backup existence: backup not found", "errorVerbose": "backup not found\ngithub.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxcrestore.(*s3).Validate\n\t/go/src/github.com/percona/percona-xtradb-cluster-operator/pkg/controller/pxcrestore/restorer.go:73
It didn't make sense to me to put the backup "name" in destination either. I tried putting the destination from kubectl get pxc-backups into the destination field but got the same error.
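That is, with backupSource.destination switched to the full path from the listing:

    destination: s3://backups-khuhtziv/my-pxc-backup/cluster1-2024-10-30-14:11:23-full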
Then I put what I thought belonged there, and this Restore CR finally created a restore job. The job fails after a few seconds, though, as the restore job logs show.
But if I create a new cluster (cluster3) and restore using the backupName field, the job succeeds and the data is restored, which means the backup itself is healthy.
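For comparison, that working restore is just the documented backupName form, roughly (the restore name here is illustrative):

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore-to-cluster3
spec:
  pxcCluster: cluster3
  backupName: backup1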
So without access to the Backup CR (for example, when restoring into a separate Kubernetes cluster), it doesn't seem possible to restore data by specifying the s3 config in the Restore CR.
Steps to reproduce
Versions
Kubernetes
Client Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.13-eks-0a21954", GitCommit:"6305d65c340554ad8b4d7a5f21391c9fa34932cb", GitTreeState:"clean", BuildDate:"2023-04-15T00:37:31Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.6", GitCommit:"741c8db18a52787d734cbe4795f0b4ad860906d6", GitTreeState:"clean", BuildDate:"2023-09-13T09:14:09Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
Operator
1.15.1
Database
percona-xtradb-cluster:8.0.36-28.1
Anything else?
The "How to restore backup to a new Kubernetes-based environment" doc (I posted the link) was confusing and even at some point it linked to mongodb operator restore CR instead of mysql.