Add pinpoint pool variables to k8s #2558
Conversation
github-arc-ss-staging github-arc-controller 2 2024-04-16 19:43:09.519875873 +0000 UTC deployed gha-runner-scale-set-0.8.2 0.8.2
ingress nginx 2 2024-02-12 19:08:42.93215444 +0000 UTC deployed nginx-ingress-1.1.2 3.4.2
Comparing release=ingress, chart=charts/nginx-ingress
Comparing release=secrets-store-csi-driver, chart=secrets-store-csi-driver/secrets-store-csi-driver
Comparing release=aws-secrets-provider, chart=aws-secrets-manager/secrets-store-csi-driver-provider-aws
Comparing release=blazer, chart=stakater/application
Comparing release=github-arc, chart=charts/gha-runner-scale-set-controller
Comparing release=github-arc-ss-staging, chart=/tmp/helmfile346843534/github-arc-controller/staging/github-arc-ss-staging/gha-runner-scale-set/0.8.2/gha-runner-scale-set
Is the system smart enough to know not to do stuff if the values are blank?
Yes, but now you have me paranoid. I'll change the staging variables to be initially blank to verify.
Right on, let's test it with empty values.
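As a rough illustration of the "blank means off" behaviour discussed above, here is a minimal Python sketch. The environment variable names match this PR, but the helper and the fallback are hypothetical, not the actual api code:

```python
import os

# Sketch (not the actual api code): treat blank values as "not configured",
# so empty AWS_PINPOINT_SC_POOL_ID / AWS_PINPOINT_LC_POOL_ID leave behaviour unchanged.
AWS_PINPOINT_SC_POOL_ID = os.environ.get("AWS_PINPOINT_SC_POOL_ID", "")
AWS_PINPOINT_LC_POOL_ID = os.environ.get("AWS_PINPOINT_LC_POOL_ID", "")


def pinpoint_pools_configured() -> bool:
    # Empty strings are falsy, so blank staging values disable the new code path.
    return bool(AWS_PINPOINT_SC_POOL_ID and AWS_PINPOINT_LC_POOL_ID)


if not pinpoint_pools_configured():
    # Fall back to the existing sending path; nothing Pinpoint-pool-specific happens.
    pass
```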
What happens when your PR merges?
- feat: tag `main` as a new minor release

What are you changing?
Provide some background on the changes
Add `AWS_PINPOINT_SC_POOL_ID` and `AWS_PINPOINT_LC_POOL_ID` environment variables to k8s. These are referenced in the "Use pinpoint for designated templates" api PR, which uses the long code pool (`AWS_PINPOINT_LC_POOL_ID`) through Pinpoint to send when the template is not in `AWS_PINPOINT_SC_TEMPLATE_IDS`.
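For context, a short sketch of the selection logic that api PR describes, assuming the api reads these variables from the environment. The function name and the comma-separated parsing of `AWS_PINPOINT_SC_TEMPLATE_IDS` are illustrative assumptions, not the actual implementation:

```python
import os

# Illustrative sketch of the routing described above (hypothetical, not the api PR's code).
AWS_PINPOINT_SC_POOL_ID = os.environ.get("AWS_PINPOINT_SC_POOL_ID", "")
AWS_PINPOINT_LC_POOL_ID = os.environ.get("AWS_PINPOINT_LC_POOL_ID", "")
AWS_PINPOINT_SC_TEMPLATE_IDS = {
    t for t in os.environ.get("AWS_PINPOINT_SC_TEMPLATE_IDS", "").split(",") if t
}


def choose_pool_id(template_id: str) -> str:
    # Designated templates go out through the short code pool; everything else
    # is sent (through Pinpoint) with the long code pool.
    if template_id in AWS_PINPOINT_SC_TEMPLATE_IDS:
        return AWS_PINPOINT_SC_POOL_ID
    return AWS_PINPOINT_LC_POOL_ID
```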
So currently we have the api PR and this manifest PR. We should merge the api PR first. That will verify that everything still works with empty variables (as will initially be the case in prod while we test). Then we can merge the manifest PR and test sending with the pool.
If you are releasing a new version of Notify, what components are you updating?
Checklist if releasing new version:
Checklist if making changes to Kubernetes:
After merging this PR