Provide more details in Backstage cluster overview #3216
To support platform teams managing many clusters, I would like to focus on providing a much richer clusters list. Some information may only be accessible on a per-cluster basis, but here I want to propose some items and functions for the tabular list.
Adding a detail for the list view here: Let's show the GitOps icon for clusters managed through Flux. The logic for detecting this should be encoded in happa.
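As a minimal sketch of what that detection could look like: Flux's controllers label the objects they manage (e.g. `kustomize.toolkit.fluxcd.io/name`), so one simple heuristic is to check a cluster's metadata labels for Flux-owned keys. The `ClusterMetadata` shape below is illustrative, not happa's actual model.

```typescript
// Hypothetical sketch: decide whether a cluster is managed through Flux
// by checking for labels that Flux controllers apply to managed objects.
interface ClusterMetadata {
  labels?: Record<string, string>;
}

// Label keys applied by Flux's kustomize-controller and helm-controller.
const FLUX_LABEL_KEYS = [
  'kustomize.toolkit.fluxcd.io/name',
  'helm.toolkit.fluxcd.io/name',
];

export function isManagedByFlux(metadata: ClusterMetadata): boolean {
  const labels = metadata.labels ?? {};
  // Any Flux-owned label present means a Flux controller reconciles it.
  return FLUX_LABEL_KEYS.some(key => key in labels);
}
```

The list view could then conditionally render the GitOps icon per row based on this predicate.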
Regarding how customer users access workload clusters, I don't know what the current status is. Opened this thread to find out more. Team Shield is now responsible for the access part.
TODO: Marian reads through to check if it can get closed and moved to PDV.
Status:
@gusevda I would like to talk to you at some point about how to deal with cascading loading of details, e.g. if some are not available in the
@gusevda and I talked about this briefly today. We should do a spike towards populating list views based on requests for several types of resources. Example: In one column, show the AWS account ID, which is encoded in the
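To make the spike idea above concrete: each table row would be a join of the main Cluster resource with a provider-specific resource fetched separately, leaving the cell empty when the secondary resource is missing or not yet loaded. All type and field names here are illustrative, not an actual plugin API.

```typescript
// Hypothetical sketch: build list-view rows by joining the main Cluster
// resource with a separately fetched, provider-specific resource.
interface ClusterRow {
  name: string;
  awsAccountId?: string; // empty cell until the AWS resource is available
}

export function joinClusterRows(
  clusters: { name: string }[],
  awsClustersByName: Map<string, { accountId: string }>,
): ClusterRow[] {
  return clusters.map(c => ({
    name: c.name,
    // undefined when the provider-specific resource does not exist
    // (non-AWS cluster) or has not been fetched yet.
    awsAccountId: awsClustersByName.get(c.name)?.accountId,
  }));
}
```

The nice property of a pure join like this is that it can be re-run cheaply whenever one of the underlying resource queries resolves.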
@fiunchinho mentioned that it would be great to have the cluster app version in the list view. That's available in the main Cluster resource as annotation
Here is a new wireframe of some faceted table views for clusters and deployments.
@gusevda and I have discussed the implications of showing cluster details in the list view that are not available in the main
In order to determine the ID of the AWS account the cluster is running in, we must fetch the

The current situation with asynchronous queries to multiple management clusters already isn't ideal. The resulting table view gets re-rendered as a user reads the content, leading to rather unpredictable results, e.g. when clicking a row item in the moment the underlying table changes. These unwanted side effects of asynchronous requests would only get amplified if we decided to do more of them.

So instead we are planning to pre-fetch the entire table content before displaying results. While queries are in progress, we would show a status indicator with status info (in progress / done / failed) per management cluster. Our current assumption is that this would mostly affect users on their first visit to the list page; on subsequent visits, the client should have most data cached already.

Either way, the time to wait for the table to load will depend on the number of resource kinds to fetch. Hence we should be specific about which resources to fetch and avoid querying those that are not needed (e.g. because the details they provide are not shown in any visible columns activated by the user). Note: the need to fetch a resource will also be affected by filter facets, which are not in scope of this issue.

Some information to display in the table will be provider-specific (as shown in the example above). There are several ways this can play out in the UI. One option is to have very specific columns, e.g. one titled
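The pre-fetch strategy described above could be sketched roughly like this: fire all per-management-cluster queries up front, track a per-cluster status (in progress / done / failed) for the indicator, and only hand the table its data once every request has settled. The names (`prefetchAll`, `fetchRows`) are hypothetical, not the actual Backstage plugin API.

```typescript
// Illustrative sketch of pre-fetching all table content before rendering,
// with a per-management-cluster status for the loading indicator.
type FetchStatus = 'in-progress' | 'done' | 'failed';

interface FetchState<T> {
  status: FetchStatus;
  rows: T[];
}

export async function prefetchAll<T>(
  managementClusters: string[],
  fetchRows: (mc: string) => Promise<T[]>,
  onStatusChange: (mc: string, status: FetchStatus) => void,
): Promise<Map<string, FetchState<T>>> {
  const results = new Map<string, FetchState<T>>();
  await Promise.all(
    managementClusters.map(async mc => {
      onStatusChange(mc, 'in-progress');
      try {
        const rows = await fetchRows(mc);
        results.set(mc, { status: 'done', rows });
        onStatusChange(mc, 'done');
      } catch {
        // A failed management cluster yields an empty partition;
        // the UI can surface the failure via the status indicator.
        results.set(mc, { status: 'failed', rows: [] });
        onStatusChange(mc, 'failed');
      }
    }),
  );
  return results;
}
```

Because the table only renders after `prefetchAll` resolves, rows no longer shift under the user's cursor mid-read, which addresses the re-rendering problem described above.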
In https://github.com/giantswarm/backstage/pull/627 I'm changing the TYPE column to show an icon, making room for more details horizontally. |
https://github.com/giantswarm/backstage/pull/629 is for adding the RELEASE column. |
Above: Screenshot of the cluster overview as of 2025-01-09.
In our Clusters list in Backstage, as of now we only provide a minimal set of cluster details. To make this more useful to platform teams and admins with a larger number of clusters, we should add more (customizable) details as well as more filtering and sorting capabilities to this list.
Columns to add to the table:
cluster.x-k8s.io resource