ShinyProxy pods spun up in Kubernetes cannot be drained #501

Open
jonathantissot opened this issue Jun 4, 2024 · 3 comments

@jonathantissot

Hey Team!
When ShinyProxy spins up a pod in Kubernetes, that pod does not have an Owner Reference.
Without an Owner Reference, Kubernetes cannot identify which controller is in charge of the pod's lifecycle.
This could be worked around by putting a label on the pod and passing a selector to kubectl drain (source).
But if you are running a managed Kubernetes cluster such as AKS, EKS or GKE (Azure, AWS and GCP respectively), certain operations are performed by the cloud provider itself, such as node auto-scaling or node upgrades during a Kubernetes version update, and you cannot modify those operations to pass that flag.
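To illustrate the selector-based workaround I mean, here is a rough sketch. The `shinyproxy-app` label is purely hypothetical (ShinyProxy does not set such a label today); the idea is that the drain would then only consider pods that are safe to evict:

```sh
# Sketch of the workaround, assuming the app pods carried a hypothetical
# "shinyproxy-app" label. Drain would then only consider the other pods
# and would not fail on the unmanaged ShinyProxy app pods.
kubectl drain <node-name> \
  --ignore-daemonsets \
  --pod-selector='shinyproxy-app!=true'
```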
I wanted to check with you whether there is any way for us to have a fixed controller when deploying the backend in Kubernetes.

Cheers!

@LEDfan
Member

LEDfan commented Jun 10, 2024

Hi

The reason that we don't add the owner reference is that ShinyProxy needs full control over the pods. When using the Kubernetes operator (https://github.com/openanalytics/shinyproxy-operator), new ShinyProxy servers are started and removed automatically. If such a ShinyProxy pod owned the app pods, k8s would delete the running apps when that ShinyProxy server gets deleted, even if the apps are still in use. In addition, apps can run in any namespace and it's not possible to have cross-namespace owner references.
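For reference, this is roughly what such an owner reference would look like if it were added (it deliberately is not). All names and the UID are placeholders; note that the owner must live in the same namespace as the pod, which is exactly what breaks down when apps run in other namespaces:

```yaml
# Sketch only: ShinyProxy intentionally does NOT add this to app pods.
apiVersion: v1
kind: Pod
metadata:
  name: sp-example-app-pod          # hypothetical app pod name
  namespace: my-apps                # apps may run in a different namespace
  ownerReferences:                  # the owner must live in the same namespace
    - apiVersion: v1
      kind: Pod
      name: shinyproxy-0            # hypothetical ShinyProxy server pod
      uid: 00000000-0000-0000-0000-000000000000   # placeholder UID
      controller: true
      blockOwnerDeletion: true
spec:
  containers:
    - name: app
      image: example/shiny-app      # placeholder image
```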

The problem with draining nodes cannot be solved easily. ShinyProxy automatically removes app pods once an app is no longer in use, so if a node still contains app pods, those apps are still in use. If you drain that node, those pods will be removed and the users will notice that their apps have crashed. Since these apps typically store their state in memory (as is the case with Shiny, for example), it's impossible to move a pod to a different node without losing the state of the app.
The only way to handle this is to cordon the node and wait for ShinyProxy to remove the pods. This could be improved by having ShinyProxy use finalizers (https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/), but even then your cloud provider will throw an error because it runs into the finalizer's timeout.
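In practice that manual approach looks something like the sketch below (the node name is a placeholder):

```sh
# Cordon the node so no new app pods get scheduled on it ...
kubectl cordon <node-name>

# ... then wait until ShinyProxy has removed the remaining app pods.
# Repeat this until no app pods are left, before taking the node down:
kubectl get pods --all-namespaces \
  --field-selector spec.nodeName=<node-name>
```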

We are looking into ways to improve this behavior, but have not yet found a way to fully prevent timeouts when draining nodes.

P.S. for autoscaling this should not be an issue. We advise using the cluster-autoscaler and not an autoscaler provided by the cloud provider (e.g. Azure offers such an option). The cluster-autoscaler is fully aware that it should not try to remove nodes that contain ShinyProxy pods and therefore does not attempt to remove these nodes.
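For readers wondering about the mechanism: the cluster-autoscaler by default refuses to evict pods that are not backed by a controller, and the standard safe-to-evict annotation can make this explicit. The sketch below shows the generic annotation; whether you need to add it manually depends on your setup:

```yaml
# Generic cluster-autoscaler mechanism, not something ShinyProxy-specific:
# a pod carrying this annotation is never evicted to scale a node down.
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```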

@jonathantissot
Author

First and foremost, thank you @LEDfan for providing an answer on this topic.
Just an idea that crossed my mind: maybe instead of spinning up a single pod, there could be the option to spin up a ReplicaSet/Deployment with a Service on top of it (a rough sketch follows below).
That way there would be an owner and a single reference through the Service. ShinyProxy could still be the direct owner of the ReplicaSet, and if ShinyProxy handles the SIGTERM by not accepting any further connections [although this is Kubernetes behavior as well when rolling out changes] while a new pod is spun up, this would seamlessly:

  • Move the traffic to the other pod,
  • Terminate the previous pod [and the new pod wouldn't be scheduled on the same node, since any drain operation first cordons the node],
  • Allow operations from the cloud provider.

Admittedly, this might still time out if a connection is still being handled by the pod and the pod doesn't go down, but it would reduce the chances.
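Roughly what I have in mind per running app instance is sketched below. All names, labels and ports are placeholders, and this is only an illustration of the idea, not something ShinyProxy does today:

```yaml
# Sketch of the proposal: one Deployment + Service per running app instance.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sp-app-abc123                  # hypothetical per-instance name
spec:
  replicas: 1
  selector:
    matchLabels:
      sp-app-instance: abc123
  template:
    metadata:
      labels:
        sp-app-instance: abc123
    spec:
      containers:
        - name: app
          image: example/shiny-app     # placeholder image
          ports:
            - containerPort: 3838      # typical Shiny port, used as an example
---
apiVersion: v1
kind: Service
metadata:
  name: sp-app-abc123
spec:
  selector:
    sp-app-instance: abc123
  ports:
    - port: 80
      targetPort: 3838
```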

Regarding the autoscaling, that is correct for scaling out, but for scaling in [reducing the number of nodes], if there are ShinyProxy pods on all the nodes you'll face a similar issue, or you would end up not being able to properly optimize the infrastructure.

Cheers!

@LEDfan
Member

LEDfan commented Jul 2, 2024

Your proposal would work if the applications hosted by ShinyProxy were stateless. However, in our experience, the majority of applications deployed using ShinyProxy are in fact stateful. This is the case for e.g. Shiny apps, but also when using IDEs like RStudio, Jupyter notebooks etc. If the pod of a user gets moved to another node, they will lose whatever they were doing in the app. Therefore, during an update of k8s, you either have to accept that users lose their work (and then it's preferred to stop the apps of a user, such that they are aware their app was stopped, instead of resetting the state of the app) or you'll have to accept that some nodes will stay around until the users close their apps.

What kind of applications are you deploying using ShinyProxy?

To ensure pods are not kept around indefinitely (e.g. when a user never turns off their PC), you can have a look at the max-lifetime setting: https://shinyproxy.io/documentation/configuration/#max-lifetime . If you set this to e.g. 4 hours, app pods will be removed after at most 4 hours, so the corresponding nodes can be removed as well.
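For reference, a minimal sketch of such a configuration based on the linked documentation page. The app id and image are placeholders; please check the documentation for the exact key name and whether to set it globally or per app:

```yaml
proxy:
  specs:
    - id: my-app                        # placeholder app id
      container-image: example/my-app   # placeholder image
      max-lifetime: 240                 # in minutes: stop the app after at most 4 hours
```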

Regarding the autoscaling, I think this is a matter of tuning the cluster configuration for the specific use case. We have multiple setups where the cluster regularly scales up and down and does not waste node resources. However, I think this is a bit too much to explain here.

Finally, if you want 100% efficiency, you could have a look at services like AWS Fargate, either through k8s or directly using the recently added ECS support. We are always willing to implement additional backends for ShinyProxy, e.g. to use Container Instances on Azure.
