Add kubevirt configuration and troubleshoot rabbitmq disk issue
abelsiqueira committed Aug 17, 2023
1 parent befd379 commit 0509707
Showing 3 changed files with 110 additions and 0 deletions.
10 changes: 10 additions & 0 deletions 41-kubernetes-single-computer.md
@@ -120,3 +120,13 @@ volumes:
persistentVolumeClaim:
claimName: asreview-storage
```
## Multi-node minikube
If you are using a multi-node minikube setup (hopefully only for testing), you also need to disable minikube's default storage provisioner and replace the `standard` StorageClass with the kubevirt hostpath provisioner:
```bash
minikube addons disable storage-provisioner
kubectl delete storageclasses.storage.k8s.io standard
kubectl apply -f kubevirt-hostpath-provisioner.yml
```
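After applying the provisioner, you can sanity-check dynamic provisioning with a small test claim. This is only an illustrative sketch, not part of the commit; the name `test-claim` is made up:

```yaml
# Illustrative test claim: the new default StorageClass ("standard")
# should dynamically provision a volume for it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Because the StorageClass uses `volumeBindingMode: WaitForFirstConsumer`, the claim stays `Pending` until a pod references it; that is expected, not an error.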
8 changes: 8 additions & 0 deletions 42-kubernetes-cloud-provider.md
@@ -59,3 +59,11 @@ volumes:
server: NFS_SERVICE_IP
path: "/"
```
## StorageClass provisioner
If your cluster does not provide a StorageClass provisioner, you can try the kubevirt hostpath provisioner included in this repository:
```bash
kubectl apply -f kubevirt-hostpath-provisioner.yml
```
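To verify that the provisioner actually creates volumes, a minimal claim-plus-consumer pair can be applied. This is a sketch with illustrative names (`test-claim`, `test-pod`), not part of the commit; a pod is needed because the StorageClass only binds volumes on first use:

```yaml
# Illustrative smoke test for the hostpath provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo ok > /data/probe && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim
```

Once the pod is `Running`, `kubectl get pvc test-claim` should show the claim as `Bound`, and the backing directory should appear under `/tmp/hostpath-provisioner` on the node.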
92 changes: 92 additions & 0 deletions k8-config/kubevirt-hostpath-provisioner.yml
@@ -0,0 +1,92 @@
# https://stackoverflow.com/questions/75175620/why-cant-my-rabbitmq-cluster-on-k8s-multi-node-minikube-create-its-mnesia-dir
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubevirt.io/hostpath-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: kubevirt-hostpath-provisioner-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kubevirt-hostpath-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubevirt-hostpath-provisioner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubevirt-hostpath-provisioner-admin
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubevirt-hostpath-provisioner
  labels:
    k8s-app: kubevirt-hostpath-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kubevirt-hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: kubevirt-hostpath-provisioner
    spec:
      serviceAccountName: kubevirt-hostpath-provisioner-admin
      containers:
        - name: kubevirt-hostpath-provisioner
          image: quay.io/kubevirt/hostpath-provisioner
          imagePullPolicy: Always
          env:
            - name: USE_NAMING_PREFIX
              value: "false" # change to "true" to include the PVC name in the directory name
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /tmp/hostpath-provisioner
          volumeMounts:
            - name: pv-volume # root dir where your bind mounts will be on the node
              mountPath: /tmp/hostpath-provisioner/
      #nodeSelector:
      #- name: xxxxxx
      volumes:
        - name: pv-volume
          hostPath:
            path: /tmp/hostpath-provisioner/
