cd customers
- Build:
./mvnw spring-boot:build-image
- Optionally change the Docker registry / image name:
./mvnw spring-boot:build-image \
-Dspring-boot.build-image.imageName=registry.s1t.k8s.camp/s1t/customers:0.0.1-SNAPSHOT
To run it with the embedded H2 SQL database:
docker run docker.io/library/customers:0.0.1-SNAPSHOT
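The app listens on 8080 inside the container (the standard Spring Boot port is assumed here); to reach it from your host, publish the port and poke the health endpoint once it's up:
docker run -p 8080:8080 docker.io/library/customers:0.0.1-SNAPSHOT
curl -s localhost:8080/actuator/health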
To run it so that it connects to PostgreSQL instead, provide four environment variables:
SPRING_PROFILES_ACTIVE=cloud
SPRING_R2DBC_URL=r2dbc:postgres://HOST:PORT/SCHEMA
SPRING_R2DBC_USERNAME=username
SPRING_R2DBC_PASSWORD=password
docker run -e SPRING_PROFILES_ACTIVE=cloud -e SPRING_R2DBC_URL=r2dbc:postgres://HOST:PORT/SCHEMA \
 -e SPRING_R2DBC_USERNAME=username -e SPRING_R2DBC_PASSWORD=password docker.io/library/customers:0.0.1-SNAPSHOT
- Build:
./mvnw spring-boot:build-image \
 -Dspring-boot.build-image.imageName=registry.s1t.k8s.camp/s1t/customers:0.0.1-SNAPSHOT
- Push:
docker push registry.s1t.k8s.camp/s1t/customers:0.0.1-SNAPSHOT
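If the registry requires authentication, log in first (with whatever credentials you were given for registry.s1t.k8s.camp):
docker login registry.s1t.k8s.camp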
- Create a namespace:
kubectl create namespace booternetes
- Create a Deployment:
kubectl -n booternetes create deployment \
 --image=registry.s1t.k8s.camp/s1t/customers:0.0.1-SNAPSHOT customer \
 -o yaml > k8s/manifests/deployment.yaml
- Create a Service:
kubectl -n booternetes expose deployment customer --port=8080 \
 -o yaml > k8s/manifests/service.yaml
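Sanity-check what's been created so far:
kubectl -n booternetes get deployment,service,pods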
- Test via port-forward:
kubectl -n booternetes port-forward deployment/customer 8080:8080 &
- Test the readiness / liveness actuators:
curl -s localhost:8080/actuator/health | jq
curl -s localhost:8080/actuator/health/readiness | jq
curl -s localhost:8080/actuator/health/liveness | jq
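If everything is wired up, each call should report a status of UP, roughly:
{ "status" : "UP" }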
- Configure readiness / liveness probes (edit k8s/manifests/deployment.yaml):
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - ...
    env:
    - name: MANAGEMENT_SERVER_PORT
      value: "9001"
    livenessProbe:
      httpGet:
        path: /actuator/health/liveness
        port: 9001
    readinessProbe:
      httpGet:
        path: /actuator/health/readiness
        port: 9001
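MANAGEMENT_SERVER_PORT maps, via Spring Boot's relaxed binding, to management.server.port, so the actuator moves to 9001 while the app itself stays on 8080. On Kubernetes the liveness/readiness health groups are exposed automatically; if you also want them when running locally, a property along these lines enables them (assumption: it isn't already set in this project):
management.endpoint.health.probes.enabled=true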
- Apply the changes:
kubectl -n booternetes apply -f k8s/manifests/deployment.yaml
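Then watch the rollout finish:
kubectl -n booternetes rollout status deployment/customer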
- Deploy an Ingress ... WOW, MAGIC DNS / TLS:
kubectl -n booternetes create ingress customer --class=default \
 --rule="crm.s1t.k8s.camp/*=customer:8080,tls=crm-secret" \
 --annotation="cert-manager.io/cluster-issuer=letsencrypt-prod" \
 -o yaml > k8s/manifests/ingress.yaml
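Once cert-manager has issued the certificate and DNS resolves, the app should answer on the public hostname:
curl -s https://crm.s1t.k8s.camp/actuator/health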
- Configure the app to use the database by reading the cloud properties into a Secret:
kubectl -n booternetes create secret generic customers --from-file ./src/main/resources/application-cloud.properties
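For reference, the cloud profile is expected to carry the same R2DBC coordinates as the environment variables above, so src/main/resources/application-cloud.properties looks roughly like this (values are placeholders):
spring.r2dbc.url=r2dbc:postgres://HOST:PORT/SCHEMA
spring.r2dbc.username=username
spring.r2dbc.password=password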
- Update the Deployment to mount the Secret as files (edit k8s/manifests/deployment.yaml):
spec:
  containers:
  - ...
    env:
    - name: SPRING_PROFILES_ACTIVE
      value: cloud
    volumeMounts:
    - mountPath: "/config"
      name: config
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: customers
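After re-applying the manifest, you can check that the properties file actually landed in the container:
kubectl -n booternetes apply -f k8s/manifests/deployment.yaml
kubectl -n booternetes exec deploy/customer -- ls /config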
I've put run.sh and build.sh in the customers module. Invoke build.sh and then run.sh.
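In other words, from the repository root (assuming the scripts are executable):
cd customers
./build.sh
./run.sh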