
Bumping storage #2

Open
1beb opened this issue Oct 23, 2021 · 3 comments


1beb commented Oct 23, 2021

Is there a way to specify the storage available in the helm chart? I have a few edge cases where pods are dying because they run out of space.
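(Editor's note, for context: Kubernetes itself does let a pod declare how much node-local scratch space it needs, via the ephemeral-storage resource; whether this chart exposes it is not shown here. A minimal sketch, with placeholder names and sizes:)

```yaml
# Sketch only (not from the chart in question): a pod can request
# node-local scratch space via the ephemeral-storage resource. The
# scheduler then avoids nodes without enough free disk, and the kubelet
# evicts the pod if it exceeds the limit.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: busybox          # placeholder image
      command: ["sleep", "3600"]
      resources:
        requests:
          ephemeral-storage: "10Gi"
        limits:
          ephemeral-storage: "20Gi"
```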

paciorek (Owner) commented:

Are you thinking of local storage on the cluster nodes the pods are running on or are you thinking of mounting volumes on the pods?

For the latter, this might be useful. For the former, I suppose it depends on the disk you request when asking the cloud provider to set up the Kubernetes cluster, rather than on anything specified in the Helm chart?
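(Editor's note: for the volume-mounting approach, a minimal sketch of a PersistentVolumeClaim mounted into a pod; the names, image, and size below are placeholders, not values from this chart:)

```yaml
# Hypothetical example: claim a 100Gi volume and mount it at /scratch.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: busybox          # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      persistentVolumeClaim:
        claimName: scratch-pvc
```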


1beb commented Oct 25, 2021

Thanks @paciorek, it's the local storage on the nodes that's the problem I'm having. Say I want to use a high-memory z1d AWS instance: the 6xlarge size supposedly comes with a 900 GB NVMe drive. However, when I run df -h inside a pod, the drive shows only 74 GB. It's not clear to me how the pod chooses its defaults.
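(Editor's note: one likely explanation, offered as an assumption since the cluster setup isn't shown: the z1d's 900 GB NVMe is an instance-store device that comes unformatted and unmounted, so pods never see it; df -h inside a pod typically reflects the node's root EBS volume, whose size is chosen when the node group is created. With eksctl, for example, that root volume size can be set explicitly; all names below are placeholders:)

```yaml
# Hypothetical eksctl node group config: the root EBS volume backs the
# pods' writable layers and emptyDir volumes, so this is the number
# df -h inside a pod will roughly reflect.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
nodeGroups:
  - name: highmem
    instanceType: z1d.6xlarge
    desiredCapacity: 2
    volumeSize: 500       # root EBS volume size in GiB
```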


paciorek commented Nov 2, 2021

Perhaps this is useful?
