Are you thinking of local storage on the cluster nodes the pods are running on, or of mounting volumes on the pods?
For the latter, this might be useful. For the former, I suppose it depends on which disk you request when asking the cloud provider to set up the Kubernetes cluster, rather than on anything specified in the Helm chart?
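If it helps, here is a minimal sketch of the volume-mounting approach: a PersistentVolumeClaim attached to a pod. The claim name, storage class, image, size, and mount path are all placeholders, not values from this chart.

```yaml
# Hypothetical sketch: request a cloud disk via a PVC and mount it into a pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp2          # whatever class your cloud provider offers
  resources:
    requests:
      storage: 500Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: my-worker-image     # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch    # where the extra space appears inside the pod
  volumes:
    - name: scratch
      persistentVolumeClaim:
        claimName: scratch-pvc
```

In a Helm chart, the size and storage class would typically be surfaced as values and templated into a manifest like this.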
Thanks @paciorek, that's the problem I'm having. Let's say I want to use a high-memory z1d AWS instance. The 6xlarge version supposedly comes with a 900GB NVMe drive. However, when I run df -h on a pod, I see that the drive only has 74GB. It's not clear to me how the pod chooses its defaults.
Is there a way to specify the available storage in the Helm chart? I have a few edge cases where pods are dying because they run out of space.
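A hedged guess at what's happening: df -h inside the pod is usually showing the node's root volume (often a small EBS disk), not the z1d instance-store NVMe, which has to be formatted and mounted on the node before pods can use it. If the NVMe were mounted on the node, a pod could reach it and the scheduler could account for scratch usage roughly like the sketch below; the mount path, image, and sizes are assumptions, not values from this chart.

```yaml
# Sketch only: assumes the node's NVMe instance store has already been
# formatted and mounted at /mnt/nvme (e.g. in the node bootstrap script).
apiVersion: v1
kind: Pod
metadata:
  name: big-scratch-worker
spec:
  containers:
    - name: worker
      image: my-worker-image          # placeholder
      resources:
        requests:
          ephemeral-storage: "200Gi"  # lets the scheduler account for scratch use
        limits:
          ephemeral-storage: "800Gi"
      volumeMounts:
        - name: nvme-scratch
          mountPath: /scratch
  volumes:
    - name: nvme-scratch
      hostPath:
        path: /mnt/nvme               # assumed node mount point for the 900GB NVMe
        type: Directory
```

Alternatively, if the pods only need more general-purpose disk rather than the instance store specifically, enlarging the node group's root volume or attaching a PVC (as in the earlier sketch) may be simpler.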