I ran gke_scale_namespace_up_or_down.sh on my freshly created F5 instance to bring it down:
./gke_scale_namespace_up_or_down.sh down -c lw-sales-us-west1 -n carlos-wesco-poc -p lw-sales
and while that brought down most of the pods, a few stayed up:
$ kubectl get pods
NAME                                                              READY   STATUS             RESTARTS   AGE
carlos-wesco-poc-argo-ui-85465d7cb7-6j7zt                         1/1     Running            0          2d3h
carlos-wesco-poc-connector-plugin-service-box-56b77d9487-tpkb5    0/1     Running            1          40h
carlos-wesco-poc-connector-plugin-service-ldap-7fbf59bc97-p4pfx   0/1     Running            1          40h
carlos-wesco-poc-connector-plugin-service-sharepoint-85b5cc2hkl   0/1     Running            1          40h
carlos-wesco-poc-fusion-log-forwarder-fd6b9dc9-m5dlf              1/1     Running            0          40h
carlos-wesco-poc-pulsar-broker-0                                  0/1     CrashLoopBackOff   6          40h
carlos-wesco-poc-pulsar-broker-1                                  0/1     CrashLoopBackOff   6          40h
carlos-wesco-poc-templating-d664f7996-sdnzt                       1/1     Running            0          69m
After conferring with Connor for a few minutes, I added the following to line 171 of gke_scale_namespace_up_or_down.sh:
declare -a deployments=("admin-ui" "api-gateway" "auth-ui" "devops-ui" "fusion-admin" "fusion-indexing" "fusion-jupyter" "monitoring-grafana" "insights" "job-launcher" "job-rest-server" "ml-model-service" "pm-ui" "monitoring-prometheus-kube-state-metrics" "monitoring-prometheus-pushgateway" "query-pipeline" "rest-service" "rpc-service" "rules-ui" "solr-exporter" "webapps" "ambassador" "pulsar-broker" "workflow-controller" "ui" "sql-service-cm" "sql-service-cr" "argo-ui" "connector-plugin-service-box" "connector-plugin-service-ldap" "connector-plugin-service-sharepoint" "fusion-log-forwarder" "templating")
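For context, my understanding of the script (an assumption on my part, I only skimmed it) is that this array just feeds a loop that scales each named Deployment in the namespace, roughly like the sketch below; the variable names are placeholders, not the script's actual ones:

# Rough sketch of the scale loop as I understand it, not the script's exact code.
# RELEASE, NAMESPACE and REPLICAS stand in for whatever the script derives from its flags and the up/down argument.
for d in "${deployments[@]}"; do
  kubectl scale deployment "${RELEASE}-${d}" --namespace "${NAMESPACE}" --replicas="${REPLICAS}"
done

So adding the missing names to the array was enough to get those Deployments scaled down as well.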
That shut down all but pulsar-broker-[01].
I'm not sure what is happening with the Pulsar brokers, but the script should probably be updated to include the connector-specific deployments.
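My guess (only a guess, based on the pod names) is that the brokers survive because pulsar-broker-0/-1 look like StatefulSet pods rather than Deployment pods, so a deployment-only scale loop never touches them. Something along these lines would probably be needed too; the StatefulSet name here is assumed from the pod names:

# Assumption: the broker pods come from a StatefulSet named <release>-pulsar-broker,
# inferred from the -0/-1 pod suffixes; adjust the name if the chart calls it something else.
kubectl scale statefulset "${RELEASE}-pulsar-broker" --namespace "${NAMESPACE}" --replicas=0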
Thanks!