
[k8s] Add validation for pod_config #4206 #4466

Open
wants to merge 4 commits into master

Conversation

chesterli29

Check pod_config when running 'sky check k8s' using the k8s API #4206

This commit extends the functionality of sky check k8s by adding a check for pod_config at this step. The check validates pod_config by calling the K8s API (a server-side dry run). This approach has some advantages and disadvantages:

  • Advantages:
  1. Minimal code changes and 100% reliable check results.
  2. Future K8s extensions to the pod format will not require any adjustments to SkyPilot's code.
  • Disadvantages:
  1. Requires interaction with the K8s cluster and cannot be validated locally offline.

Of course, any other suggestions are welcome for discussion.
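
For reference, below is a minimal sketch of the dry-run idea using the official Kubernetes Python client; the function name and error handling are illustrative, not the exact code in this PR:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

def check_pod_config(namespace: str, pod_config: dict) -> None:
    """Ask the API server to validate the pod without creating it."""
    config.load_kube_config()  # needs a reachable cluster (the disadvantage above)
    try:
        client.CoreV1Api().create_namespaced_pod(
            namespace,
            body=pod_config,
            dry_run='All',              # server validates; nothing is persisted
            field_validation='Strict')  # also reject unknown/duplicate fields (recent clients)
    except ApiException as e:
        # e.body carries the API server's exact validation message.
        raise ValueError(f'Invalid pod_config: {e.body}') from e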

The test config.yaml:

kubernetes:
  pod_config:
    metadata:
      name: local-test
      labels:
        my-label: my-value    # Custom labels for SkyPilot pods
    spec:
      #runtimeClassName: nvidia    # Custom runtimeClassName for GPU pods.
      imagePullSecrets:
        - name: my-secret     # Pull images from a private registry using a secret
      containers:
        - name: local_test
          image: test
          env:                # Custom environment variables for the pod, e.g., for proxy
          - name: HTTP_PROXY
            value: http://proxy-host:3128
          volumeMounts:       # Custom volume mounts for the pod
            - mountPath: /foo
              name: example-volume
              readOnly: true
      volumes:
        - name: example-volume
          hostPath:
            path: /tmp
            type: Directory
        - name: dshm          # Use this to modify the /dev/shm volume mounted by SkyPilot
          emptyDir:
            medium: Memory
            sizeLimit: 3Gi    # Set a size limit for the /dev/shm volume

And the check result:
[screenshot: check result]

Tested (run the relevant ones):

  • Code formatting: bash format.sh
  • Any manual or new tests for this PR (please specify below)
  • All smoke tests: pytest tests/test_smoke.py
  • Relevant individual smoke tests: pytest tests/test_smoke.py::test_fill_in_the_name
  • Backward compatibility tests: conda deactivate; bash -i tests/backward_compatibility_tests.sh

Check pod_config when running 'sky check k8s' using the k8s API
Collaborator

@romilbhardwaj left a comment


Thanks @chesterli29! Left some questions. We may need to use an alternate approach, since pod validation via the k8s API server may be too strict.

sky/provision/kubernetes/utils.py (outdated)
Comment on lines 917 to 922
kubernetes.core_api(context).create_namespaced_pod(
    namespace,
    body=pod_config,
    dry_run='All',              # server-side dry run: validate, persist nothing
    field_validation='Strict',  # reject unknown or duplicate fields
    _request_timeout=kubernetes.API_TIMEOUT)
Collaborator


Does this approach work even if the pod_config is partially specified? E.g.,

kubernetes:
  pod_config:
    spec:
      containers:
        - env:
            - name: MY_ENV_VAR
              value: "my_value"

My hunch is k8s will reject this pod spec since it's not a complete pod spec, but it's a valid pod_config in our case.
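
To make the concern concrete, here is a sketch (not output from this PR) of what a strict server-side dry run would do with that partial spec:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
partial = {
    'spec': {
        'containers': [{'env': [{'name': 'MY_ENV_VAR', 'value': 'my_value'}]}]
    }
}
try:
    client.CoreV1Api().create_namespaced_pod(
        'default', body=partial, dry_run='All', field_validation='Strict')
except ApiException as e:
    # Expected: HTTP 422, because metadata.name, containers[0].name and
    # containers[0].image are required for a complete Pod object.
    print(e.status, e.reason)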

Author


Yes, k8s will reject this pod spec.
If this pod_config is valid in this project, is there any definition for this config? For example, are some fields required and others optional? Or are all fields optional here, with the constraint that any field that is set must follow the k8s pod spec requirements?

Author
@chesterli29 Dec 12, 2024


Here is my proposed solution: we can check the pod config using the k8s API after combine_pod_config_fields and combine_metadata_fields during launch (i.e., at an early stage of launching).
It would be really hard and complex to track and maintain the k8s pod JSON/YAML schema in this project.
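
A rough sketch of that flow; the helper names combine_pod_config_fields and combine_metadata_fields come from the codebase, while the function below and its error handling are illustrative:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

def validate_pod_config_at_launch(namespace: str, merged_pod: dict) -> None:
    """Dry-run the pod spec *after* combine_pod_config_fields and
    combine_metadata_fields have merged the user's pod_config into the
    generated spec, so the user's overrides get validated too."""
    config.load_kube_config()
    try:
        client.CoreV1Api().create_namespaced_pod(
            namespace, body=merged_pod, dry_run='All',
            field_validation='Strict')
    except ApiException as e:
        # Re-raise with the server's message so the user sees exactly
        # which field of their pod_config is invalid.
        raise ValueError(
            f'pod_config validation failed during launch: {e.body}') from e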

Collaborator


are all fields optional here, with the constraint that any field that is set must follow the k8s pod spec requirements?

Yes, this is the definition of a valid pod_spec.

check the pod config using the k8s API after combine_pod_config_fields and combine_metadata_fields during launch (i.e., at an early stage of launching)

Yes, that sounds reasonable, as long as we can surface to the user where the error occurs in their pod config.

Collaborator


Have we considered having a simple local schema check, with the JSON schema fetched and flattened from something like https://github.com/instrumenta/kubernetes-json-schema/tree/master?

Author
@chesterli29 Dec 13, 2024


Have we considered having a simple local schema check, with the JSON schema fetched and flattened from something like https://github.com/instrumenta/kubernetes-json-schema/tree/master?

Yeah, I took a look at this before. The main problem with this setup is that it needs to grab JSON schema files from another repo, e.g. https://github.com/yannh/kubernetes-json-schema, depending on which k8s version the user is running. I'm not sure it's a good idea for sky to download dependencies onto the local machine while it's running. Plus, if we want to check pod_config locally using a JSON schema, we might need to let users choose their k8s version so we can fetch the right schema file.
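
For comparison, a hedged sketch of the local alternative; the pinned version in the URL is exactly the problem described above, and the path layout of the yannh repo is an assumption:

import json
import urllib.request

import jsonschema  # third-party: pip install jsonschema

# A pinned k8s version; the right choice depends on the user's cluster.
SCHEMA_URL = ('https://raw.githubusercontent.com/yannh/kubernetes-json-schema/'
              'master/v1.28.0-standalone-strict/pod-v1.json')

def validate_pod_config_locally(pod_config: dict) -> None:
    with urllib.request.urlopen(SCHEMA_URL) as resp:
        schema = json.load(resp)
    # Raises jsonschema.ValidationError pointing at the offending field.
    jsonschema.validate(instance=pod_config, schema=schema)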

Collaborator


Let's try the approach you proposed above (check the pod config using the k8s API after combine_pod_config_fields and combine_metadata_fields), provided it can surface the exact errors to the users.

If that does not work, we may need to do schema validation locally. The Pod API has been relatively stable, so it might not be too bad to have a fixed-version schema for validation.

Author


LGTM.
BTW, I found an error case when testing the approach with the JSON schema from kubernetes-json-schema.
Here is the relevant part of my test YAML:

containers:
  - name: local_test
    image: test

Note that the name local_test here contains an underscore: it is invalid when creating a pod, but it passes the JSON schema check.
[screenshot: schema check passing]
And if we use this config to create a sky cluster, it will fail later because of the invalid name.
[screenshot: launch failure]
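
The schema misses this because it only types the name as a string; the DNS-1123 label rule is enforced server-side. A local validator would need an extra check along these lines:

import re

# k8s container names must be DNS-1123 labels: lowercase alphanumerics and
# '-', starting and ending with an alphanumeric, at most 63 characters.
DNS1123_LABEL = re.compile(r'^[a-z0-9]([-a-z0-9]*[a-z0-9])?$')

def is_valid_container_name(name: str) -> bool:
    return len(name) <= 63 and DNS1123_LABEL.match(name) is not None

assert not is_valid_container_name('local_test')  # underscore: rejected by k8s
assert is_valid_container_name('local-test')      # valid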

Author


FYI, here is the output after the pod_config check fails during launch:
[screenshot: error output]

check merged pod_config during launch using k8s api
if there is no kube config in the env, ignore ValueError when launching
with dry run. For now, we don't support checking the schema offline.