ingress charm can get stuck if LoadBalancer IP is slow to provision #17

Open
ca-scribner opened this issue Oct 11, 2024 · 0 comments
@ca-scribner (Contributor)
Bug Description

If the cluster is slow to provide our LoadBalancer Service with an IP (either because provisioning is slow, or because the cluster has no LoadBalancer provisioner set up at the moment), our ingress charm has at least two issues:

  • if we are related to a TLS provider and do not have an external_hostname config value set, we need the LoadBalancer IP to request certs (see the sketch after this list for how that IP is typically read)
  • _sync_all_resources() has an _is_ready() guard that, if the LoadBalancer IP is unavailable, prevents the creation of ingress resources (Gateway/HTTPRoute) and puts the charm into Blocked status
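
For context, here's a minimal sketch of how that IP is typically read (assuming lightkube; the function name and arguments are illustrative, not this charm's actual code). The key point is that status.loadBalancer.ingress stays empty until the cluster's provisioner assigns an address:

```python
from typing import Optional

from lightkube import Client
from lightkube.resources.core_v1 import Service


def get_loadbalancer_ip(name: str, namespace: str) -> Optional[str]:
    """Return the external address of a LoadBalancer Service, or None if not assigned yet."""
    svc = Client().get(Service, name=name, namespace=namespace)
    lb = svc.status.loadBalancer if svc.status else None
    ingress = (lb.ingress or []) if lb else []
    # Provisioners may hand out an IP or a hostname; either counts as "assigned".
    return (ingress[0].ip or ingress[0].hostname) if ingress else None
```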

For both of these problems, we have no guarantee that our charm will be woken up when the LoadBalancer Service obtains an IP. For example:

  • the cluster is provisioned, but no LoadBalancer provisioner is set up
  • the charm starts, creates a LoadBalancer Service, and waits for an IP, but none is provided
  • the cluster's LoadBalancer provisioner is configured, and our LoadBalancer Service is assigned an IP
  • (nothing happens here to wake our charm, which stays non-functional)

This is discussed more in this thread.

Ideally, we'd have a k8s watcher on the Service that could wake the charm whenever our Service changes. That's theoretically possible with Pebble notices, but I'm not sure whether it's practical. A rough sketch of the idea follows.
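
A purely illustrative sketch, assuming lightkube for the watch: the watcher process, the notice key, and the container name are all hypothetical. The watcher would run as an extra process inside the workload container, where `pebble notify` talks to that container's Pebble:

```python
import subprocess

import ops
from lightkube import Client
from lightkube.resources.core_v1 import Service


def watch_service(name: str, namespace: str) -> None:
    """Runs inside the workload container; posts a Pebble notice once the Service has an IP."""
    for _op, svc in Client().watch(Service, namespace=namespace, fields={"metadata.name": name}):
        lb = svc.status.loadBalancer if svc.status else None
        if lb and lb.ingress:
            # A custom notice wakes the charm with a pebble-custom-notice event.
            subprocess.run(["pebble", "notify", "example.com/lb-ip-assigned"], check=True)
            return


class IngressCharm(ops.CharmBase):
    def __init__(self, framework: ops.Framework):
        super().__init__(framework)
        # "workload" is a placeholder container name.
        framework.observe(self.on["workload"].pebble_custom_notice, self._on_pebble_custom_notice)

    def _on_pebble_custom_notice(self, event: ops.PebbleCustomNoticeEvent) -> None:
        if event.notice.key == "example.com/lb-ip-assigned":
            self._sync_all_resources()  # re-run the normal reconcile path
```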

Not directly related to the root cause here, but in hindsight I think we should reconsider what the _is_ready() guard blocks in _sync_all_resources(). Gateway/HTTPRoute resources can be created regardless of whether the LoadBalancer Service has an IP yet - we should just create them anyway and use _is_ready() only to set the charm status. There's no downside to doing this, and it feels more Kubernetes-native.
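
Concretely, something like the following sketch (the two create/update helpers are made up, and whether the not-ready status should be Waiting rather than Blocked is an open choice):

```python
import ops


class IngressCharm(ops.CharmBase):
    def _sync_all_resources(self) -> None:
        # Create/patch Gateway and HTTPRoute unconditionally; Kubernetes will
        # reconcile them on its own once the LoadBalancer Service has an address.
        self._create_or_update_gateway()     # hypothetical helper
        self._create_or_update_httproutes()  # hypothetical helper

        # _is_ready() now only drives status instead of gating resource creation.
        if self._is_ready():
            self.unit.status = ops.ActiveStatus()
        else:
            # Waiting (rather than Blocked) seems apt: no operator action is
            # needed, the LoadBalancer just hasn't assigned an IP yet.
            self.unit.status = ops.WaitingStatus("waiting for LoadBalancer IP")
```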

To Reproduce

See above.

Environment

Relevant log output

-

Additional context

No response
