
[BUG] polardbx cluster creation fails with "Back-off restarting failed container" error #8765

tianyue86 opened this issue Jan 8, 2025 · 0 comments
Describe the env
Kubernetes: v1.31.1-aliyun.1
KubeBlocks: 1.0.0-beta.21
kbcli: 1.0.0-beta.8

To Reproduce
Steps to reproduce the behavior:

  1. Helm template the polardbx cluster YAML and apply it (rendered manifest attached; a command sketch follows)
    polar.yaml.txt
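
A rough sketch of the commands behind this step (the chart reference is a placeholder; the rendered output corresponds to the attached polar.yaml.txt):

# Render the polardbx addon chart into a plain manifest, then apply it.
# <polardbx-addon-chart> stands in for the actual KubeBlocks addon chart reference.
helm template polarc2 <polardbx-addon-chart> > polar.yaml
kubectl apply -f polar.yaml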
  2. Check the cluster and pod status
k get cluster              
NAME            CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
polarc2                              Delete               Abnormal   86m

k get pod
NAME                             READY   STATUS                  RESTARTS          AGE
polarc2-cdc-0                    0/2     Init:0/1                0                 86m
polarc2-cn-0                     0/2     Init:CrashLoopBackOff   24 (27s ago)      86m
polarc2-dn-0-0                   3/3     Running                 0                 86m
polarc2-dn-0-1                   3/3     Running                 0                 84m
polarc2-dn-0-2                   3/3     Running                 0                 83m
polarc2-gms-0                    3/3     Running                 0                 86m
polarc2-gms-1                    3/3     Running                 0                 84m
polarc2-gms-2                    3/3     Running                 0                 83m
  3. Describe the failing pod
k describe pod polarc2-cn-0
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       53m                    default-scheduler  Successfully assigned default/polarc2-cn-0 to cn-zhangjiakou.10.0.0.139
  Normal   AllocIPSucceed  53m                    terway-daemon      Alloc IP 10.0.0.142/24 took 100.792627ms
  Warning  BackOff         50m (x3 over 50m)      kubelet            Back-off restarting failed container metadb-init in pod polarc2-cn-0_default(58747da8-3d8a-4ff3-af74-691a2f40b8a5)
  Normal   Pulled          49m (x4 over 53m)      kubelet            Container image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/mysql:8.0.30" already present on machine
  Normal   Created         49m (x4 over 53m)      kubelet            Created container metadb-init
  Normal   Started         49m (x4 over 53m)      kubelet            Started container metadb-init
  Normal   Pulling         49m                    kubelet            Pulling image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/polardbx-init:v1.5.0"
  Normal   Pulled          49m                    kubelet            Successfully pulled image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/polardbx-init:v1.5.0" in 6.42s (6.42s including waiting). Image size: 5209394 bytes.
  Normal   Started         49m (x2 over 49m)      kubelet            Started container init
  Normal   Created         49m (x3 over 49m)      kubelet            Created container init
  Normal   Pulled          18m (x11 over 49m)     kubelet            Container image "apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/polardbx-init:v1.5.0" already present on machine
  Warning  BackOff         3m31s (x213 over 49m)  kubelet            Back-off restarting failed container init in pod polarc2-cn-0_default(58747da8-3d8a-4ff3-af74-691a2f40b8a5)
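
To surface the exit code of the crash-looping init container directly (container names taken from the events above), a query along these lines can help:

# Print name, termination reason, and exit code for each init container.
kubectl get pod polarc2-cn-0 -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{" reason="}{.lastState.terminated.reason}{" exitCode="}{.lastState.terminated.exitCode}{"\n"}{end}'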
  4. Check the container logs

iclogs.txt
miclogs.txt
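
For reference, a sketch of how these logs can be collected (the mapping of the attached files to the "init" and "metadb-init" containers is an assumption):

# Grab logs from the last failed run of each init container named in the events.
kubectl logs polarc2-cn-0 -c init --previous > iclogs.txt
kubectl logs polarc2-cn-0 -c metadb-init --previous > miclogs.txt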

Expected behavior
The polardbx cluster should be created successfully, with the cluster reaching Running status and the cn and cdc pods becoming Ready instead of crash-looping in their init containers.


@tianyue86 tianyue86 added the kind/bug Something isn't working label Jan 8, 2025
@tianyue86 tianyue86 added this to the Release 1.1 milestone Jan 8, 2025
@github-actions github-actions bot modified the milestones: Release 1.1.0, Release 0.9.3 Jan 14, 2025