# Add kubevirt configuration and troubleshoot rabbitmq disk issue #23

Open · wants to merge 1 commit into `main`

`41-kubernetes-single-computer.md` (10 additions, 0 deletions)

@@ -120,3 +120,13 @@ volumes:
    persistentVolumeClaim:
      claimName: asreview-storage
```

## Multi-node minikube

If you are using a multi-node minikube setup (hopefully only for testing), the default storage-provisioner addon does not provision volumes correctly across nodes, so replace it by running the following:

```bash
# Disable minikube's default single-node provisioner
minikube addons disable storage-provisioner
# Remove the default StorageClass it created
kubectl delete storageclasses.storage.k8s.io standard
# Install the kubevirt hostpath provisioner (see k8-config/ in this repo)
kubectl apply -f kubevirt-hostpath-provisioner.yml
```
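
You can then verify that the replacement took effect; for example:

```bash
# The kubevirt provisioner should now back the default "standard" class
kubectl get storageclass
```
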
`42-kubernetes-cloud-provider.md` (8 additions, 0 deletions)

@@ -59,3 +59,11 @@ volumes:
      server: NFS_SERVICE_IP
      path: "/"
```

## StorageClass provisioner

If your cluster does not have a StorageClass provisioner, you can try the one from this repository, which installs a hostpath provisioner and registers it as the default `standard` StorageClass:

```bash
# Installs a hostpath provisioner and a default "standard" StorageClass
kubectl apply -f kubevirt-hostpath-provisioner.yml
```
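
To check that dynamic provisioning actually works, one option is a throwaway PersistentVolumeClaim (the name `test-pvc` below is just an example):

```bash
# Create a small test PVC against the default StorageClass
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# The StorageClass uses volumeBindingMode: WaitForFirstConsumer,
# so the claim stays Pending until a pod mounts it
kubectl get pvc test-pvc

# Clean up
kubectl delete pvc test-pvc
```
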
`k8-config/kubevirt-hostpath-provisioner.yml` (92 additions, 0 deletions, new file)

@@ -0,0 +1,92 @@
# https://stackoverflow.com/questions/75175620/why-cant-my-rabbitmq-cluster-on-k8s-multi-node-minikube-create-its-mnesia-dir
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubevirt.io/hostpath-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: kubevirt-hostpath-provisioner-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kubevirt-hostpath-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubevirt-hostpath-provisioner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]

  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubevirt-hostpath-provisioner-admin
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubevirt-hostpath-provisioner
  labels:
    k8s-app: kubevirt-hostpath-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kubevirt-hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: kubevirt-hostpath-provisioner
    spec:
      serviceAccountName: kubevirt-hostpath-provisioner-admin
      containers:
        - name: kubevirt-hostpath-provisioner
          image: quay.io/kubevirt/hostpath-provisioner
          imagePullPolicy: Always
          env:
            - name: USE_NAMING_PREFIX
              value: "false" # change to true, to have the name of the pvc be part of the directory
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /tmp/hostpath-provisioner
          volumeMounts:
            - name: pv-volume # root dir where your bind mounts will be on the node
              mountPath: /tmp/hostpath-provisioner/
      #nodeSelector:
      #- name: xxxxxx
      volumes:
        - name: pv-volume
          hostPath:
            path: /tmp/hostpath-provisioner/
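
After applying the manifest, a quick sanity check is that the DaemonSet runs a provisioner pod on every node and that the StorageClass is registered as the default:

```bash
# One provisioner pod should be running per node
kubectl -n kube-system get pods -l k8s-app=kubevirt-hostpath-provisioner -o wide

# "standard" should be marked (default) and use kubevirt.io/hostpath-provisioner
kubectl get storageclass standard
```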