
Nightly Mongo Dev Updater #748

Open · wants to merge 14 commits into gql
.github/workflows/cd-dev.yaml (34 additions, 0 deletions)
@@ -38,6 +38,40 @@ jobs:
      chart_ver: 0.1.0-dev.${{ needs.compute-sha.outputs.sha_short }}
    secrets: inherit

  limit-deploy:
    name: SSH and Limit Deployments
    runs-on: ubuntu-latest
    steps:
      - name: SSH and Check Deployments
        uses: appleboy/[email protected]
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            set -e # Exit immediately if a command fails

            # List bt-dev-app-* Deployments sorted by creation timestamp (oldest first)
            deployments=$(kubectl get deployments \
              --namespace=bt \
              --sort-by='.metadata.creationTimestamp' \
              -o custom-columns=":metadata.name,:metadata.creationTimestamp" | grep '^bt-dev-app') || true
            deployment_count=$(echo "$deployments" | wc -l)

            # Each app deployment consists of two Deployments (frontend and backend),
            # so 16 Deployments corresponds to 8 deployed app versions
            if [ "$deployment_count" -gt 16 ]; then
              echo "Too many deployments. Deleting the oldest deployment."

              # The list is sorted oldest first, so the first line holds the oldest deployment
              oldest_deployment=$(echo "$deployments" | head -n 1 | grep -o '^bt-dev-app-[a-f0-9]\{7\}')

              # Delete both Deployments belonging to the oldest app deployment
              kubectl delete deployment "${oldest_deployment}-frontend" --namespace=bt
              kubectl delete deployment "${oldest_deployment}-backend" --namespace=bt
            else
              echo "Deployment count is <= 16."
            fi

  deploy:
    name: SSH and Deploy
    needs: [compute-sha, build-push]
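The pruning logic in limit-deploy can also be exercised without going through SSH. Below is a minimal dry-run sketch, assuming kubectl on the workstation already targets the same cluster; the NAMESPACE and THRESHOLD variables are illustrative stand-ins for the values hard-coded in the workflow, and kubectl delete is called with --dry-run=client so nothing is actually removed.

#!/usr/bin/env bash
# Dry-run sketch of the limit-deploy pruning step; nothing is actually deleted.
set -euo pipefail

NAMESPACE=bt    # namespace used by the workflow
THRESHOLD=16    # 8 app deployments x 2 Deployments each (frontend + backend)

# Same listing as the workflow: bt-dev-app Deployments, oldest first
deployments=$(kubectl get deployments \
  --namespace="$NAMESPACE" \
  --sort-by='.metadata.creationTimestamp' \
  -o custom-columns=":metadata.name,:metadata.creationTimestamp" | grep '^bt-dev-app') || true

count=$(printf '%s\n' "$deployments" | grep -c '^bt-dev-app' || true)
echo "Found $count bt-dev-app Deployments."

if [ "$count" -gt "$THRESHOLD" ]; then
  oldest=$(printf '%s\n' "$deployments" | head -n 1 | grep -o '^bt-dev-app-[a-f0-9]\{7\}')
  echo "Would delete ${oldest}-frontend and ${oldest}-backend"
  kubectl delete deployment "${oldest}-frontend" "${oldest}-backend" \
    --namespace="$NAMESPACE" --dry-run=client
else
  echo "Below threshold; nothing to prune."
fi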
.github/workflows/mongo-reset.yaml (23 additions, 0 deletions)
@@ -0,0 +1,23 @@
name: Reset Dev Mongo

on:
  workflow_dispatch:

jobs:
  reset-mongo:
    name: SSH and Reset Dev MongoDB State
    runs-on: ubuntu-latest
    steps:
      - name: SSH and Reset MongoDB
        uses: appleboy/[email protected]
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            set -e # Exit immediately if a command fails

            # Create a one-off Job from the reset-dev-mongo CronJob
            # (defined in infra/base/templates/mongo-reset.yaml, namespace bt)
            kubectl create job --namespace=bt --from=cronjob/reset-dev-mongo mongo-reset-manual

            echo "MongoDB reset scheduled."
infra/base/templates/mongo-reset.yaml (46 additions, 0 deletions)
@@ -0,0 +1,46 @@
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reset-dev-mongo
  namespace: bt
spec:
  schedule: "0 0 * * *" # Daily at 12 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: reset-mongo
              image: alpine/k8s
              command:
                - sh
                - -c
                - |
                  set -e # Exit immediately if a command fails

                  # Find stage and dev MongoDB pods
                  stage_pod=$(kubectl get pods -n bt | grep 'bt-stage-mongo' | grep -o '^[^ ]*')
                  dev_pod=$(kubectl get pods -n bt | grep 'bt-dev-mongo' | grep -o '^[^ ]*')

                  # Dump staging MongoDB state
                  echo "Dumping staging MongoDB state..."
                  kubectl exec --namespace=bt \
                    "$stage_pod" -- mongodump --archive=/tmp/stage_backup.gz --gzip
                  kubectl cp --namespace=bt \
                    "$stage_pod:/tmp/stage_backup.gz" /tmp/stage_backup.gz
                  kubectl exec --namespace=bt \
                    "$stage_pod" -- rm /tmp/stage_backup.gz

                  # Restore dump into dev MongoDB
                  echo "Restoring dump into dev MongoDB..."
                  kubectl cp --namespace=bt \
                    /tmp/stage_backup.gz "$dev_pod:/tmp/stage_backup.gz"
                  kubectl exec --namespace=bt \
                    "$dev_pod" -- mongorestore --archive=/tmp/stage_backup.gz --gzip --drop
                  kubectl exec --namespace=bt \
                    "$dev_pod" -- rm /tmp/stage_backup.gz

                  # Cleanup local files
                  rm /tmp/stage_backup.gz
                  echo "MongoDB reset completed successfully!"
          restartPolicy: Never
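After a reset run, a quick sanity check is to compare database names and collection counts on the two pods. A sketch along these lines, assuming the MongoDB images ship the mongosh shell (the pod lookups mirror the CronJob script above):

# Compare database names and collection counts between stage and dev after a restore
stage_pod=$(kubectl get pods -n bt | grep 'bt-stage-mongo' | grep -o '^[^ ]*')
dev_pod=$(kubectl get pods -n bt | grep 'bt-dev-mongo' | grep -o '^[^ ]*')

for pod in "$stage_pod" "$dev_pod"; do
  echo "--- $pod ---"
  kubectl exec --namespace=bt "$pod" -- mongosh --quiet --eval \
    'db.getMongo().getDBNames().forEach(d => print(d, db.getSiblingDB(d).getCollectionNames().length))'
done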