error.log
[2021-01-26T03:32:56.402Z] Error deploying db: ProgressDeadlineExceeded - ReplicaSet "db-84f8b4cdc5" has timed out progressing.
━━━ Events ━━━
Deployment db: ScalingReplicaSet - Scaled up replica set db-84f8b4cdc5 to 1
Pod db-84f8b4cdc5-cggc5: Scheduled - Successfully assigned ecom-default/db-84f8b4cdc5-cggc5 to minikube
Pod db-84f8b4cdc5-cggc5: Pulled - Container image "postgres:12-alpine" already present on machine
Pod db-84f8b4cdc5-cggc5: Created - Created container db
Pod db-84f8b4cdc5-cggc5: Started - Started container db
Pod db-84f8b4cdc5-cggc5: Unhealthy - Readiness probe failed: psql: error: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Pod db-84f8b4cdc5-cggc5: Unhealthy - Readiness probe failed: psql: error: FATAL: the database system is starting up
Pod db-84f8b4cdc5-cggc5: Unhealthy - Readiness probe failed: psql: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Pod db-84f8b4cdc5-cggc5: Unhealthy - Readiness probe failed: psql: error: FATAL: role "ecomuser" does not exist
Pod db-84f8b4cdc5-cggc5: BackOff - Back-off restarting failed container
━━━ Pod logs ━━━
<Showing last 30 lines per pod in this Deployment. Run the following command for complete logs>
$ kubectl -n ecom-default --context=minikube logs deployment/db
****** db-84f8b4cdc5-cggc5 ******
------ db ------2021-01-26 03:32:48.985 UTC [35] LOG: database system is ready to accept connections
done
server started
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down....2021-01-26 03:32:49.021 UTC [35] LOG: received fast shutdown request
2021-01-26 03:32:49.041 UTC [35] LOG: aborting any active transactions
2021-01-26 03:32:49.041 UTC [35] LOG: background worker "logical replication launcher" (PID 42) exited with exit code 1
2021-01-26 03:32:49.042 UTC [37] LOG: shutting down
2021-01-26 03:32:49.190 UTC [35] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2021-01-26 03:32:49.255 UTC [1] LOG: starting PostgreSQL 12.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.2.1_pre1) 10.2.1 20201203, 64-bit
2021-01-26 03:32:49.255 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-01-26 03:32:49.255 UTC [1] LOG: listening on IPv6 address "::", port 5432
2021-01-26 03:32:49.300 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-01-26 03:32:49.380 UTC [53] LOG: database system was shut down at 2021-01-26 03:32:49 UTC
2021-01-26 03:32:49.399 UTC [54] FATAL: the database system is starting up
2021-01-26 03:32:49.427 UTC [1] LOG: database system is ready to accept connections
2021-01-26 03:32:50.324 UTC [68] FATAL: role "ecomuser" does not exist
2021-01-26 03:32:51.313 UTC [75] FATAL: role "ecomuser" does not exist
2021-01-26 03:32:52.307 UTC [82] FATAL: role "ecomuser" does not exist
2021-01-26 03:32:53.304 UTC [89] FATAL: role "ecomuser" does not exist
2021-01-26 03:32:54.305 UTC [96] FATAL: role "ecomuser" does not exist
2021-01-26 03:32:55.310 UTC [104] FATAL: role "ecomuser" does not exist
2021-01-26 03:32:56.306 UTC [111] FATAL: role "ecomuser" does not exist
Error Details:
serviceName: db
status:
  state: unhealthy
  lastMessage: >-
    ProgressDeadlineExceeded - ReplicaSet "db-84f8b4cdc5" has timed out
    progressing.
  logs: "\e[37m━━━ Events ━━━\e[39m\n\e[94mDeployment db:\e[39m \e[37mScalingReplicaSet - Scaled up replica set db-84f8b4cdc5 to 1\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[37mScheduled - Successfully assigned ecom-default/db-84f8b4cdc5-cggc5 to minikube\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[37mPulled - Container image \"postgres:12-alpine\" already present on machine\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[37mCreated - Created container db\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[37mStarted - Started container db\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[33mUnhealthy - Readiness probe failed: psql: error: could not connect to server: No such file or directory\e[39m\n\e[33m\tIs the server running locally and accepting\e[39m\n\e[33m\tconnections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\e[39m\n\e[33m\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[33mUnhealthy - Readiness probe failed: psql: error: FATAL: the database system is starting up\e[39m\n\e[33m\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[33mUnhealthy - Readiness probe failed: psql: error: server closed the connection unexpectedly\e[39m\n\e[33m\tThis probably means the server terminated abnormally\e[39m\n\e[33m\tbefore or while processing the request.\e[39m\n\e[33m\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[33mUnhealthy - Readiness probe failed: psql: error: FATAL: role \"ecomuser\" does not exist\e[39m\n\e[33m\e[39m\n\e[94mPod db-84f8b4cdc5-cggc5:\e[39m \e[33mBackOff - Back-off restarting failed container\e[39m\e[37m\e[39m\n\e[37m\e[39m\n\e[37m━━━ Pod logs ━━━\e[39m\n\e[37m\e[39m\e[90m<Showing last 30 lines per pod in this Deployment. Run the following command for complete logs>\e[39m\n\e[90m$ kubectl -n ecom-default --context=minikube logs deployment/db\e[39m\n\e[94m\e[39m\n\e[94m****** db-84f8b4cdc5-cggc5 ******\e[39m\n\e[94m\e[39m\e[90m------ db ------\e[39m2021-01-26 03:32:48.985 UTC [35] LOG: database system is ready to accept connections\n done\nserver started\n\n/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*\n\nwaiting for server to shut down....2021-01-26 03:32:49.021 UTC [35] LOG: received fast shutdown request\n2021-01-26 03:32:49.041 UTC [35] LOG: aborting any active transactions\n2021-01-26 03:32:49.041 UTC [35] LOG: background worker \"logical replication launcher\" (PID 42) exited with exit code 1\n2021-01-26 03:32:49.042 UTC [37] LOG: shutting down\n2021-01-26 03:32:49.190 UTC [35] LOG: database system is shut down\n done\nserver stopped\n\nPostgreSQL init process complete; ready for start up.\n\n2021-01-26 03:32:49.255 UTC [1] LOG: starting PostgreSQL 12.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.2.1_pre1) 10.2.1 20201203, 64-bit\n2021-01-26 03:32:49.255 UTC [1] LOG: listening on IPv4 address \"0.0.0.0\", port 5432\n2021-01-26 03:32:49.255 UTC [1] LOG: listening on IPv6 address \"::\", port 5432\n2021-01-26 03:32:49.300 UTC [1] LOG: listening on Unix socket \"/var/run/postgresql/.s.PGSQL.5432\"\n2021-01-26 03:32:49.380 UTC [53] LOG: database system was shut down at 2021-01-26 03:32:49 UTC\n2021-01-26 03:32:49.399 UTC [54] FATAL: the database system is starting up\n2021-01-26 03:32:49.427 UTC [1] LOG: database system is ready to accept connections\n2021-01-26 03:32:50.324 UTC [68] FATAL: role \"ecomuser\" does not exist\n2021-01-26 03:32:51.313 UTC [75] FATAL: role \"ecomuser\" does not exist\n2021-01-26 03:32:52.307 UTC [82] FATAL: role \"ecomuser\" does not exist\n2021-01-26 03:32:53.304 UTC [89] FATAL: role \"ecomuser\" does not exist\n2021-01-26 03:32:54.305 UTC [96] FATAL: role \"ecomuser\" does not exist\n2021-01-26 03:32:55.310 UTC [104] FATAL: role \"ecomuser\" does not exist\n2021-01-26 03:32:56.306 UTC [111] FATAL: role \"ecomuser\" does not exist\n"
  resource:
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: db
      namespace: ecom-default
      uid: afa2efda-ffde-419a-9b15-3f26e90fa0fa
      resourceVersion: '8394'
      generation: 1
      creationTimestamp: '2021-01-26T03:10:55Z'
      labels:
        module: postgres
        service: db
      annotations:
        deployment.kubernetes.io/revision: '1'
        garden.io/configured.replicas: '1'
        garden.io/generated: 'true'
        garden.io/manifest-hash: e22314585a01f53729003b0490f5f8ebb049fe0fa7bb06f7328d021a6de6cb03
        garden.io/version: v-0acb088676
        kubectl.kubernetes.io/last-applied-configuration: >
          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"garden.io/configured.replicas":"1","garden.io/generated":"true","garden.io/manifest-hash":"e22314585a01f53729003b0490f5f8ebb049fe0fa7bb06f7328d021a6de6cb03","garden.io/version":"v-0acb088676"},"labels":{"module":"postgres","service":"db"},"name":"db","namespace":"ecom-default"},"spec":{"replicas":1,"revisionHistoryLimit":3,"selector":{"matchLabels":{"service":"db"}},"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":1},"type":"RollingUpdate"},"template":{"metadata":{"labels":{"module":"postgres","service":"db"}},"spec":{"containers":[{"env":[{"name":"GARDEN_VERSION","value":"v-0acb088676"},{"name":"GARDEN_MODULE_VERSION","value":"v-0acb088676"},{"name":"GARDEN_VARIABLES_POSTGRES_DATABASE","value":"ecom"},{"name":"GARDEN_VARIABLES_POSTGRES_USERNAME","value":"ecomuser"},{"name":"GARDEN_VARIABLES_POSTGRES_PASSWORD","value":"ecompass"},{"name":"GARDEN_VARIABLES_BASEHOSTNAME","value":"ecom.local.app.garden"},{"name":"GARDEN_DEPENDENCIES","value":"[{\"moduleName\":\"postgres\",\"name\":\"postgres\",\"type\":\"build\",\"version\":\"v-0acb088676\"}]"},{"name":"POSTGRES_DATABASE","value":"ecom"},{"name":"POSTGRES_USERNAME","value":"ecomuser"},{"name":"POSTGRES_PASSWORD","value":"ecompass"},{"name":"POD_HOST_IP","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"POD_IP","valueFrom":{"fieldRef":{"fieldPath":"status.podIP"}}},{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"POD_NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}},{"name":"POD_SERVICE_ACCOUNT","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"POD_UID","valueFrom":{"fieldRef":{"fieldPath":"metadata.uid"}}}],"image":"postgres:12-alpine","imagePullPolicy":"IfNotPresent","livenessProbe":{"exec":{"command":["psql","-w","-U","ecomuser","-d","ecom","-c","SELECT
          1"]},"failureThreshold":3,"initialDelaySeconds":90,"periodSeconds":5,"successThreshold":1,"timeoutSeconds":3},"name":"db","ports":[{"containerPort":5432,"name":"db","protocol":"TCP"}],"readinessProbe":{"exec":{"command":["psql","-w","-U","ecomuser","-d","ecom","-c","SELECT
          1"]},"failureThreshold":90,"initialDelaySeconds":2,"periodSeconds":1,"successThreshold":2,"timeoutSeconds":3},"resources":{"limits":{"cpu":"1","memory":"1Gi"},"requests":{"cpu":"10m","memory":"64Mi"}},"securityContext":{"allowPrivilegeEscalation":false},"volumeMounts":[{"mountPath":"/db-data","name":"data"}]}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","terminationGracePeriodSeconds":5,"volumes":[{"emptyDir":{},"name":"data"}]}}}}
      managedFields: []
    spec:
      replicas: 1
      selector:
        matchLabels:
          service: db
      template:
        metadata:
          creationTimestamp: null
          labels:
            module: postgres
            service: db
        spec:
          volumes: []
          containers: []
          restartPolicy: Always
          terminationGracePeriodSeconds: 5
          dnsPolicy: ClusterFirst
          securityContext: {}
          schedulerName: default-scheduler
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 3
      progressDeadlineSeconds: 600
    status:
      observedGeneration: 1
      replicas: 1
      updatedReplicas: 1
      unavailableReplicas: 1
      conditions: []
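
For readability, below is a sketch of the container configuration that the last-applied-configuration annotation above encodes as a single JSON line, limited to the environment variables and probes involved in this failure. It is hand-extracted and abbreviated (GARDEN_* variables, ports, resources and volumes omitted), not the complete manifest. Both probes authenticate as role "ecomuser", which the pod logs report as missing; one plausible explanation is that the stock postgres image only creates its initial role and database from POSTGRES_USER and POSTGRES_DB, so the POSTGRES_USERNAME and POSTGRES_DATABASE variables set here would be ignored by the image's init scripts and the "ecomuser" role would never be created.

containers:
  - name: db
    image: postgres:12-alpine
    env:
      # As set in the failing Deployment, per the annotation above.
      - name: POSTGRES_DATABASE     # the official postgres image reads POSTGRES_DB
        value: ecom
      - name: POSTGRES_USERNAME     # the official postgres image reads POSTGRES_USER
        value: ecomuser
      - name: POSTGRES_PASSWORD
        value: ecompass
    readinessProbe:
      exec:
        # This is the command behind the 'role "ecomuser" does not exist' probe failures.
        command: ["psql", "-w", "-U", "ecomuser", "-d", "ecom", "-c", "SELECT 1"]
      initialDelaySeconds: 2
      periodSeconds: 1
      timeoutSeconds: 3
      successThreshold: 2
      failureThreshold: 90
    livenessProbe:
      exec:
        command: ["psql", "-w", "-U", "ecomuser", "-d", "ecom", "-c", "SELECT 1"]
      initialDelaySeconds: 90
      periodSeconds: 5
      timeoutSeconds: 3
      successThreshold: 1
      failureThreshold: 3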