The Dataset Operator was working fine until I had to do a rollout restart of my Kubernetes cluster (I am using Kops). After the restart, all my other DLF-related pods work fine, but the operator pod fails with an error and I can't provision new PVCs. The following error appears while the pod is running:
43364516593096e+09	ERROR	controller.dataset	Could not wait for Cache to sync	{"reconciler group": "com.ie.ibm.hpsys", "reconciler kind": "Dataset", "error": "failed to wait for dataset caches to sync: timed out waiting for cache to be synced"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@…/pkg/internal/controller/controller.go:208
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/go/pkg/mod/sigs.k8s.io/controller-runtime@…/pkg/internal/controller/controller.go:234
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@…/pkg/manager/runnable_group.go:218
43364516593094e+09	ERROR	controller.datasetinternal	Could not wait for Cache to sync	{"reconciler group": "com.ie.ibm.hpsys", "reconciler kind": "DatasetInternal", "error": "failed to wait for datasetinternal caches to sync: timed out waiting for cache to be synced"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@…/pkg/internal/controller/controller.go:208
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/go/pkg/mod/sigs.k8s.io/controller-runtime@…/pkg/internal/controller/controller.go:234
sigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@…/pkg/manager/runnable_group.go:218
43364516607056e+09	ERROR	error received after stop sequence was engaged	{"error": "failed to wait for datasetinternal caches to sync: timed out waiting for cache to be synced"}
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
	/go/pkg/mod/sigs.k8s.io/controller-runtime@…/pkg/manager/internal.go:541
43364516605525e+09	INFO	Stopping and waiting for non leader election runnables
43364516609583e+09	INFO	Stopping and waiting for leader election runnables
43364516613445e+09	INFO	Stopping and waiting for caches
43364516617835e+09	INFO	Stopping and waiting for webhooks
43364516626759e+09	INFO	controller-runtime.webhook	shutting down webhook server
43364516630316e+09	INFO	Wait completed, proceeding to shutdown the manager
43364516631134e+09	ERROR	dataset-operator-setup	problem running manager	{"error": "failed to wait for dataset caches to sync: timed out waiting for cache to be synced"}
The following error appears once the pod has failed: