diff --git a/CHANGELOG.md b/CHANGELOG.md index d64f3bd..ee0d3d3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,17 @@ +## Development + +### Kubernetes +* Fix usage of 'pullPolicy' values in deployments +* Change env vars setup for deployments. Set up only the vars necessary for the chosen bus type +* Documentation updates +* Add CoAP-WebSockets proxy deployment and service in chart +* Remove separate switch for deploying external WebSocket proxy. It is required by the Plugin management service and must be deployed whenever that service is enabled. +* Add top-level Ingress URLs to chart NOTES + +### Docker Compose +* Add compose file for CoAP-WebSockets proxy +* Add `DEBUG_RMI_HOSTNAME` variable for setting up JMX debug access via env + ## 3.5.0 / 2018-06-04 * k8s: add parameters for log level configuration in Java Server services diff --git a/README.md b/README.md index d65a933..342d0b1 100644 --- a/README.md +++ b/README.md @@ -28,10 +28,14 @@ More details in the [rdbms-image](rdbms-image/) subdirectory. Installation was tested on a machine with the CentOS 7 distribution. ## Kubernetes installation -DeviceHive can be installed on Kubernetes with provided [Helm chart](k8s/). This chart also installs PostgreSQL chart and Kafka chart from [Kubeapps](https://kubeapps.com) repositories. External installations of PostgreSQL and Kafka are not supported at the moment. +### DeviceHive +DeviceHive can be installed on Kubernetes with the provided [devicehive Helm chart](k8s/devicehive). This chart also installs the PostgreSQL and Kafka charts from [Kubeapps](https://kubeapps.com) repositories. External installations of PostgreSQL and Kafka are not supported at the moment. The previous installation method on Kubernetes, using the `kubectl` utility and plain YAML files, is now deprecated.
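For orientation, installing the renamed chart follows the usual Helm 2 workflow documented in the chart README further down in this diff; a minimal sketch, assuming the repository is cloned and Helm's Tiller is initialized in the cluster (the release name `my-release` is illustrative):

``` console
$ cd k8s
$ helm install ./devicehive --name my-release
```

This deploys DeviceHive with the default configuration; pass `-f values.yaml` to override parameters.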
Please [issue a ticket](https://github.com/devicehive/devicehive-docker/issues/new) in our [GitHub repository](https://github.com/devicehive/devicehive-docker/) if you have questions about migrating such an environment to one deployed with the Helm chart. +### Cassandra storage plugin +The DeviceHive Cassandra storage plugin can be installed on Kubernetes with the provided [devicehive-cassandra-plugin Helm chart](k8s/devicehive-cassandra-plugin). It requires an already running Cassandra cluster. The plugin README contains an [example installation of Cassandra](k8s/devicehive-cassandra-plugin/README.md#example-installation-with-cassandra-cluster-installed-via-helm) with Helm for testing. + ## Installation on Docker for Windows or Docker for Mac If you would like to try DeviceHive using Docker for Windows or Docker for Mac, please note that this software runs Docker in a special Virtual Machine (created automatically for you by the installer). By default these Virtual Machines have much lower parameters than DeviceHive requires: 2GB of RAM and 2 vCPU. Here is an example of how to change these parameters in Docker for Windows; on Mac this should be similar: diff --git a/k8s/README.md b/k8s/devicehive/README.md similarity index 67% rename from k8s/README.md rename to k8s/devicehive/README.md index f3d0ad1..5954c82 100644 --- a/k8s/README.md +++ b/k8s/devicehive/README.md @@ -38,17 +38,18 @@ The command deploys DeviceHive on the Kubernetes cluster in the default configur Default DeviceHive admin user has name `dhadmin` and password `dhadmin_#911`. ### Service endpoints -Table below lists endpoints where you can find various DeviceHive services. If `proxy.ingress` set to `true`, replace *localhost* with hostname(s) used in `proxy.ingress.hosts` parameter.
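The Cassandra plugin chart installs the same way as the main chart; a sketch, assuming the repository root as the working directory and a reachable Cassandra cluster (the release name is illustrative, and the connection parameters must be taken from the plugin chart's own README):

``` console
$ helm install ./k8s/devicehive-cassandra-plugin --name my-cassandra-plugin
```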
- -| Service | URL | Notes | -|----------------------|-----------------------------------|------------------------------| -| Admin Console | http://*localhost*/admin | | -| Frontend service API | http://*localhost*/api/rest | | -| Auth service API | http://*localhost*/auth/rest | | -| Plugin service API | http://*localhost*/plugin/rest | If enabled, see [Run with DeviceHive Plugin Service](#run-with-devicehive-plugin-service) section below | -| Frontend Swagger | http://*localhost*/api/swagger | | -| Auth Swagger | http://*localhost*/auth/swagger | | -| Plugin Swagger | http://*localhost*/plugin/swagger | If Plugin service is enabled | +The table below lists endpoints where you can find various DeviceHive services. If `ingress.enabled` is set to `true`, replace *localhost* with the hostname(s) used in the `ingress.hosts` parameter. + +| Service | URL | Notes | +|-------------------------------|-----------------------------------|------------------------------| +| Admin Console | http://*localhost*/admin | | +| Frontend service API | http://*localhost*/api/rest | | +| Auth service API | http://*localhost*/auth/rest | | +| Plugin management service API | http://*localhost*/plugin/rest | If enabled, see [Install with DeviceHive Plugin Management Service](#install-with-devicehive-plugin-management-service) section below | +| External WS Proxy for plugins | http://*localhost*/plugin/proxy | If Plugin service is enabled | +| Frontend Swagger | http://*localhost*/api/swagger | | +| Auth Swagger | http://*localhost*/auth/swagger | | +| Plugin Swagger | http://*localhost*/plugin/swagger | If Plugin service is enabled | ## Uninstalling the Chart @@ -62,7 +63,7 @@ The command removes all the Kubernetes components associated with the chart and ## Configuration -The following tables lists the configurable parameters of the DeviceHive chart and their default values. +The following table lists the configurable parameters of the DeviceHive chart and their default values.
Parameter | Description | Default --------- | ----------- | ------- @@ -102,6 +103,13 @@ Parameter | Description | Default `backendNode.loggerLevel` | Node backend logger level (levels: debug, info, warn, error ) | `info` `backendNode.replicaCount` | Desired number of Node backend pods | `1` `backendNode.resources` | Node backend resource requests and limits | `{}` +`coapProxy.enabled` | If true, CoAP-WebSockets proxy will be deployed | `false` +`coapProxy.image` | CoAP-WebSockets proxy image and tag | `devicehive/devicehive-coap-proxy:1.0.0` +`coapProxy.pullPolicy`| CoAP-WebSockets proxy image pull policy | `IfNotPresent` +`coapProxy.replicaCount` | Desired number of CoAP-WebSockets proxy pods | `1` +`coapProxy.resources` | CoAP-WebSockets proxy deployment resource requests and limits | `{}` +`coapProxy.service.type` | Type of CoAP-WebSockets proxy service to create | `ClusterIP` +`coapProxy.service.port` | CoAP-WebSockets proxy service port | `5683` `mqttBroker.enabled` | If true, DH MQTT broker will be deployed | `false` `mqttBroker.appLogLevel` | Application logger level (levels: debug, info, warn, error) | `info` `mqttBroker.image` | MQTT broker image and tag | `devicehive/devicehive-mqtt:1.1.0` @@ -121,7 +129,6 @@ Parameter | Description | Default `wsProxy.pullPolicy` | DH WS Proxy image pull policy | `IfNotPresent` `wsProxy.internal.replicaCount` | Desired number of internal WS Proxy service pods | `1` `wsProxy.internal.resources` | Internal WS Proxy service resource requests and limits | `{}` -`wsProxy.external.enabled` | If true, External WS Proxy deployment will be created. 
Requires `javaServer.plugin.enabled` set to `true` | `false` `wsProxy.external.replicaCount` | Desired number of external WS Proxy service pods | `1` `wsProxy.external.resources` | External WS Proxy service resource requests and limits | `{}` `nodeSelector` | Node labels for DeviceHive pods assignment | `{}` @@ -151,6 +158,26 @@ $ helm install ./devicehive --name my-release -f values.yaml > **Tip**: You can use the default [values.yaml](devicehive/values.yaml) +### Install with DeviceHive Plugin Management Service + +The Plugin management service is disabled by default. To enable it, you need to pass several values to `helm`. +Change `<hostname>` to a hostname pointing to your cluster. For example, if you set up an Ingress resource with host 'devicehive.example.com', then `pluginConnectUrl` will be 'ws://devicehive.example.com/plugin/proxy': +``` console +$ helm install \ + --name my-release \ + --set javaServer.plugin.enabled=true \ + --set javaServer.plugin.pluginConnectUrl=ws://<hostname>/plugin/proxy \ + ./devicehive +``` +or with the following parameters in a values file: +``` yaml +javaServer: + plugin: + enabled: true + pluginConnectUrl: ws://<hostname>/plugin/proxy +``` +Enabling the Plugin management service automatically enables the external WebSocket proxy for plugins. + ### RBAC Configuration First, Helm itself requires additional configuration to use on Kubernetes clusters where RBAC is enabled. Follow instructions in [Helm documentation](https://docs.helm.sh/using_helm/#role-based-access-control). @@ -160,3 +187,31 @@ To manually setup RBAC you need to set the parameter rbac.create=false and speci ### Ingress TLS Ingress TLS isn't supported by this Helm chart yet. + +### Setting up horizontal autoscaling for services + +Autoscaling DeviceHive in Kubernetes relies on the Horizontal Pod Autoscaler in your cluster. The DeviceHive Helm chart provides the ability to set resources for pods; the cluster administrator has to create the HPA manually.
+ +When deploying the application, specify `resources.requests` values; see the [Configuration section](#configuration) for available values. Here is an example from a `values.yaml` file used by `helm install --name test ./devicehive -f values.yaml`: +```yaml +javaServer: + backend: + resources: + requests: + cpu: 2 + memory: 1536Mi + frontend: + resources: + requests: + cpu: 2 + memory: 1536Mi +``` + +When `resources.requests` for the pods are set, create the HPA by issuing the following commands: +```console +$ kubectl autoscale deployment test-devicehive-backend --cpu-percent=70 --min=1 --max=3 +$ kubectl autoscale deployment test-devicehive-frontend --cpu-percent=70 --min=1 --max=3 +$ kubectl get hpa +``` + +> **Note**: the `resources.requests` values and HPA configuration provided above have to be tweaked for your deployment. Please consult the [HPA walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in the Kubernetes documentation for more details. diff --git a/k8s/devicehive/templates/NOTES.txt b/k8s/devicehive/templates/NOTES.txt index 85bfe21..dda0d3d 100644 --- a/k8s/devicehive/templates/NOTES.txt +++ b/k8s/devicehive/templates/NOTES.txt @@ -1,8 +1,13 @@ -Thank you for installing {{ .Chart.Name }}. +Thank you for installing the {{ .Chart.Name }} chart. Your release is named {{ .Release.Name }}. -{{ if .Values.proxy.ingress.enabled -}} +{{ if .Values.ingress.enabled -}} +From outside the cluster, DeviceHive Admin Console URL(s) are: +{{- range .Values.ingress.hosts }} +http://{{ . }}/admin/ +{{- end }} +{{- else if .Values.proxy.ingress.enabled -}} From outside the cluster, DeviceHive Admin Console URL(s) are: {{- range .Values.proxy.ingress.hosts }} http://{{ .
}}/admin/ diff --git a/k8s/devicehive/templates/coap-proxy-deployment.yaml b/k8s/devicehive/templates/coap-proxy-deployment.yaml new file mode 100644 index 0000000..88d7474 --- /dev/null +++ b/k8s/devicehive/templates/coap-proxy-deployment.yaml @@ -0,0 +1,42 @@ +{{- if .Values.coapProxy.enabled }} +apiVersion: apps/v1beta1 +kind: Deployment +metadata: + name: {{ .Release.Name }}-devicehive-coap-proxy + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + component: "coap-proxy" + heritage: "{{ .Release.Service }}" + release: "{{ .Release.Name }}" +spec: + replicas: {{ .Values.coapProxy.replicaCount }} + template: + metadata: + labels: + app: {{ .Release.Name }}-devicehive-coap-proxy + spec: + serviceAccountName: {{ if .Values.rbac.create }}{{ template "devicehive.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }} + containers: + - name: coap-proxy + image: {{ .Values.coapProxy.image | quote }} + imagePullPolicy: {{ .Values.coapProxy.pullPolicy }} + env: + - name: ENVSEPARATOR + value: "_" + - name: PROXY_HOST + value: 0.0.0.0 + - name: PROXY_PORT + value: "5683" + - name: PROXY_TARGET + value: "ws://{{ .Release.Name }}-devicehive-frontend:8080/api/websocket" + ports: + - name: coap + protocol: UDP + containerPort: 5683 + resources: +{{ toYaml .Values.coapProxy.resources | indent 10 }} + {{- with .Values.nodeSelector }} + nodeSelector: +{{ toYaml . 
| indent 8 }} + {{- end }} +{{- end }} diff --git a/k8s/devicehive/templates/coap-proxy-service.yaml b/k8s/devicehive/templates/coap-proxy-service.yaml new file mode 100644 index 0000000..0a9cdc7 --- /dev/null +++ b/k8s/devicehive/templates/coap-proxy-service.yaml @@ -0,0 +1,14 @@ +{{- if .Values.coapProxy.enabled }} +kind: Service +apiVersion: v1 +metadata: + name: {{ .Release.Name }}-devicehive-coap-proxy +spec: + selector: + app: {{ .Release.Name }}-devicehive-coap-proxy + type: {{ .Values.coapProxy.service.type }} + ports: + - protocol: UDP + port: {{ .Values.coapProxy.service.port | int }} + targetPort: coap +{{- end }} diff --git a/k8s/devicehive/templates/dh-auth-deployment.yaml b/k8s/devicehive/templates/dh-auth-deployment.yaml index fa829c8..b183213 100644 --- a/k8s/devicehive/templates/dh-auth-deployment.yaml +++ b/k8s/devicehive/templates/dh-auth-deployment.yaml @@ -18,21 +18,21 @@ spec: containers: - name: devicehive-auth image: "{{ .Values.javaServer.repository }}/devicehive-auth:{{ .Values.javaServer.tag }}" - imagePullPolicy: {{ .Values.javaServer.PullPolicy }} + imagePullPolicy: {{ .Values.javaServer.pullPolicy }} env: {{- if eq .Values.javaServer.bus "rpc" }} - name: SPRING_PROFILES_ACTIVE value: "rpc-client" - {{- else }} - - name: DH_WS_PROXY - value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" - {{- end }} - name: DH_KAFKA_BOOTSTRAP_SERVERS value: "{{ .Release.Name }}-kafka:9092" - name: DH_ZK_ADDRESS value: "{{ .Release.Name }}-zookeeper" - name: DH_ZK_PORT value: "2181" + {{- else }} + - name: DH_WS_PROXY + value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" + {{- end }} - name: DH_POSTGRES_ADDRESS value: "{{ .Release.Name }}-postgresql" - name: DH_POSTGRES_DB diff --git a/k8s/devicehive/templates/dh-backend-deployment.yaml b/k8s/devicehive/templates/dh-backend-deployment.yaml index 9203729..9247405 100644 --- a/k8s/devicehive/templates/dh-backend-deployment.yaml +++ b/k8s/devicehive/templates/dh-backend-deployment.yaml 
@@ -19,21 +19,21 @@ spec: containers: - name: devicehive-backend image: "{{ .Values.javaServer.repository }}/devicehive-backend:{{ .Values.javaServer.tag }}" - imagePullPolicy: {{ .Values.javaServer.PullPolicy }} + imagePullPolicy: {{ .Values.javaServer.pullPolicy }} env: {{- if eq .Values.javaServer.bus "rpc" }} - name: SPRING_PROFILES_ACTIVE value: "rpc-server" - {{- else }} - - name: DH_WS_PROXY - value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" - {{- end }} - name: DH_KAFKA_BOOTSTRAP_SERVERS value: "{{ .Release.Name }}-kafka:9092" - name: DH_ZK_ADDRESS value: "{{ .Release.Name }}-zookeeper" - name: DH_ZK_PORT value: "2181" + {{- else }} + - name: DH_WS_PROXY + value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" + {{- end }} - name: DH_POSTGRES_ADDRESS value: "{{ .Release.Name }}-postgresql" - name: DH_POSTGRES_DB diff --git a/k8s/devicehive/templates/dh-backend-node-deployment.yaml b/k8s/devicehive/templates/dh-backend-node-deployment.yaml index 0a3b50e..a0950bf 100644 --- a/k8s/devicehive/templates/dh-backend-node-deployment.yaml +++ b/k8s/devicehive/templates/dh-backend-node-deployment.yaml @@ -19,7 +19,7 @@ spec: containers: - name: devicehive-backend-node image: {{ .Values.backendNode.image | quote }} - imagePullPolicy: {{ .Values.backendNode.PullPolicy }} + imagePullPolicy: {{ .Values.backendNode.pullPolicy }} env: - name: ENVSEPARATOR value: '_' diff --git a/k8s/devicehive/templates/dh-frontend-deployment.yaml b/k8s/devicehive/templates/dh-frontend-deployment.yaml index 01cb063..235ffcd 100644 --- a/k8s/devicehive/templates/dh-frontend-deployment.yaml +++ b/k8s/devicehive/templates/dh-frontend-deployment.yaml @@ -18,21 +18,21 @@ spec: containers: - name: devicehive-frontend image: "{{ .Values.javaServer.repository }}/devicehive-frontend:{{ .Values.javaServer.tag }}" - imagePullPolicy: {{ .Values.javaServer.PullPolicy }} + imagePullPolicy: {{ .Values.javaServer.pullPolicy }} env: {{- if eq .Values.javaServer.bus "rpc" }} - - name: 
SPRING_PROFILES_ACTIVE - value: "rpc-client" - {{- else }} - - name: DH_WS_PROXY - value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" - {{- end }} - name: DH_KAFKA_BOOTSTRAP_SERVERS value: "{{ .Release.Name }}-kafka:9092" - name: DH_ZK_ADDRESS value: "{{ .Release.Name }}-zookeeper" - name: DH_ZK_PORT value: "2181" + - name: SPRING_PROFILES_ACTIVE + value: "rpc-client" + {{- else }} + - name: DH_WS_PROXY + value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" + {{- end }} - name: DH_AUTH_URL value: "http://{{ .Release.Name }}-devicehive-auth:8090/auth/rest" - name: DH_POSTGRES_ADDRESS diff --git a/k8s/devicehive/templates/dh-hazelcast-deployment.yaml b/k8s/devicehive/templates/dh-hazelcast-deployment.yaml index d65f46f..94644f1 100644 --- a/k8s/devicehive/templates/dh-hazelcast-deployment.yaml +++ b/k8s/devicehive/templates/dh-hazelcast-deployment.yaml @@ -18,7 +18,7 @@ spec: containers: - name: devicehive-hazelcast image: "{{ .Values.javaServer.repository }}/devicehive-hazelcast:{{ .Values.javaServer.tag }}" - imagePullPolicy: {{ .Values.javaServer.PullPolicy }} + imagePullPolicy: {{ .Values.javaServer.pullPolicy }} env: - name: MIN_HEAP_SIZE value: {{ .Values.javaServer.hazelcast.minHeapSize | quote }} diff --git a/k8s/devicehive/templates/dh-plugin-deployment.yaml b/k8s/devicehive/templates/dh-plugin-deployment.yaml index 77b122f..6a2a9f6 100644 --- a/k8s/devicehive/templates/dh-plugin-deployment.yaml +++ b/k8s/devicehive/templates/dh-plugin-deployment.yaml @@ -19,15 +19,11 @@ spec: containers: - name: devicehive-plugin image: "{{ .Values.javaServer.repository }}/devicehive-plugin:{{ .Values.javaServer.tag }}" - imagePullPolicy: {{ .Values.javaServer.PullPolicy }} + imagePullPolicy: {{ .Values.javaServer.pullPolicy }} env: {{- if eq .Values.javaServer.bus "rpc" }} - name: SPRING_PROFILES_ACTIVE value: "rpc-client" - {{- else }} - - name: DH_WS_PROXY - value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" - {{- end }} - name: 
DH_KAFKA_BOOTSTRAP_SERVERS value: "{{ .Release.Name }}-kafka:9092" - name: DH_ZK_ADDRESS @@ -36,6 +32,10 @@ spec: value: "2181" - name: DH_RPC_CLIENT_RES_CONS_THREADS value: "3" + {{- else }} + - name: DH_WS_PROXY + value: "{{ .Release.Name }}-devicehive-ws-proxy-internal:3000" + {{- end }} - name: DH_AUTH_URL value: "http://{{ .Release.Name }}-devicehive-auth:8090/auth/rest" - name: DH_POSTGRES_ADDRESS @@ -48,10 +48,8 @@ spec: value: "{{ .Values.postgresql.postgresUser }}" - name: DH_POSTGRES_PASSWORD value: "{{ .Values.postgresql.postgresPassword }}" -{{- if .Values.wsProxy.external.enabled }} - name: DH_PROXY_PLUGIN_CONNECT value: {{ .Values.javaServer.plugin.pluginConnectUrl | default "ws://localhost/plugin/proxy" | quote }} -{{- end }} - name: DH_ZK_ADDRESS value: "{{ .Release.Name }}-zookeeper" - name: DH_ZK_PORT diff --git a/k8s/devicehive/templates/dh-proxy-deployment.yaml b/k8s/devicehive/templates/dh-proxy-deployment.yaml index 9abd2ad..90be3f8 100644 --- a/k8s/devicehive/templates/dh-proxy-deployment.yaml +++ b/k8s/devicehive/templates/dh-proxy-deployment.yaml @@ -19,7 +19,7 @@ spec: containers: - name: devicehive-proxy image: {{ .Values.proxy.image | quote }} - imagePullPolicy: {{ .Values.proxy.PullPolicy }} + imagePullPolicy: {{ .Values.proxy.pullPolicy }} ports: - name: http containerPort: 8080 diff --git a/k8s/devicehive/templates/dh-proxy-locations-configmap.yaml b/k8s/devicehive/templates/dh-proxy-locations-configmap.yaml index dfd963e..744305d 100644 --- a/k8s/devicehive/templates/dh-proxy-locations-configmap.yaml +++ b/k8s/devicehive/templates/dh-proxy-locations-configmap.yaml @@ -62,8 +62,6 @@ data: proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Real-IP $remote_addr; } -{{- end }} -{{- if .Values.wsProxy.external.enabled }} location /plugin/proxy { proxy_redirect off; proxy_pass http://wsproxyext/; diff --git a/k8s/devicehive/templates/dh-proxy-upstreams-configmap.yaml b/k8s/devicehive/templates/dh-proxy-upstreams-configmap.yaml 
index 264ff69..927607a 100644 --- a/k8s/devicehive/templates/dh-proxy-upstreams-configmap.yaml +++ b/k8s/devicehive/templates/dh-proxy-upstreams-configmap.yaml @@ -14,8 +14,6 @@ data: upstream plugin_upstream { server {{ .Release.Name }}-devicehive-plugin:8110; } -{{- end }} -{{- if .Values.wsProxy.external.enabled }} upstream wsproxyext { server {{ .Release.Name }}-devicehive-ws-proxy-external:3000; } diff --git a/k8s/devicehive/templates/ingress.yaml b/k8s/devicehive/templates/ingress.yaml index a182765..ada6f07 100644 --- a/k8s/devicehive/templates/ingress.yaml +++ b/k8s/devicehive/templates/ingress.yaml @@ -1,7 +1,6 @@ {{- if .Values.ingress.enabled -}} {{- $releaseName := .Release.Name -}} {{- $javaServerPluginEnabled := .Values.javaServer.plugin.enabled -}} -{{- $wsProxyExternalEnabled := .Values.wsProxy.external.enabled -}} apiVersion: extensions/v1beta1 kind: Ingress metadata: @@ -43,8 +42,6 @@ spec: backend: serviceName: {{ $releaseName }}-devicehive-plugin servicePort: 8110 - {{- end }} - {{- if $wsProxyExternalEnabled }} - path: /plugin/proxy backend: serviceName: {{ $releaseName }}-devicehive-ws-proxy-external diff --git a/k8s/devicehive/templates/mqtt-broker-deployment.yaml b/k8s/devicehive/templates/mqtt-broker-deployment.yaml index 26dff27..9b3d969 100644 --- a/k8s/devicehive/templates/mqtt-broker-deployment.yaml +++ b/k8s/devicehive/templates/mqtt-broker-deployment.yaml @@ -19,7 +19,7 @@ spec: containers: - name: mqtt-broker image: {{ .Values.mqttBroker.image | quote }} - imagePullPolicy: {{ .Values.mqttBroker.PullPolicy }} + imagePullPolicy: {{ .Values.mqttBroker.pullPolicy }} env: - name: ENVSEPARATOR value: "_" @@ -32,7 +32,6 @@ spec: - name: BROKER_REDIS_SERVER_PORT value: "6379" - name: BROKER_APP_LOG_LEVEL - value: "debug" value: {{ .Values.mqttBroker.appLogLevel | quote }} - name: BROKER_WS_SERVER_URL value: "{{ .Release.Name }}-devicehive-frontend:8080/api/websocket" diff --git a/k8s/devicehive/templates/ws-proxy-external-deployment.yaml 
b/k8s/devicehive/templates/ws-proxy-external-deployment.yaml index 71585aa..c7d585d 100644 --- a/k8s/devicehive/templates/ws-proxy-external-deployment.yaml +++ b/k8s/devicehive/templates/ws-proxy-external-deployment.yaml @@ -1,4 +1,4 @@ -{{- if .Values.wsProxy.external.enabled }} +{{- if .Values.javaServer.plugin.enabled -}} apiVersion: apps/v1beta1 kind: Deployment metadata: @@ -19,7 +19,7 @@ spec: containers: - name: devicehive-ws-proxy-external image: {{ .Values.wsProxy.image | quote }} - imagePullPolicy: {{ .Values.wsProxy.PullPolicy }} + imagePullPolicy: {{ .Values.wsProxy.pullPolicy }} env: - name: ENVSEPARATOR value: '_' diff --git a/k8s/devicehive/templates/ws-proxy-external-service.yaml b/k8s/devicehive/templates/ws-proxy-external-service.yaml index 8bc48a8..7f25de7 100644 --- a/k8s/devicehive/templates/ws-proxy-external-service.yaml +++ b/k8s/devicehive/templates/ws-proxy-external-service.yaml @@ -1,4 +1,4 @@ -{{- if .Values.wsProxy.external.enabled }} +{{- if .Values.javaServer.plugin.enabled -}} kind: Service apiVersion: v1 metadata: diff --git a/k8s/devicehive/templates/ws-proxy-internal-deployment.yaml b/k8s/devicehive/templates/ws-proxy-internal-deployment.yaml index ae905fb..59f708d 100644 --- a/k8s/devicehive/templates/ws-proxy-internal-deployment.yaml +++ b/k8s/devicehive/templates/ws-proxy-internal-deployment.yaml @@ -19,7 +19,7 @@ spec: containers: - name: devicehive-ws-proxy-internal image: {{ .Values.wsProxy.image | quote }} - imagePullPolicy: {{ .Values.wsProxy.PullPolicy }} + imagePullPolicy: {{ .Values.wsProxy.pullPolicy }} env: - name: ENVSEPARATOR value: '_' diff --git a/k8s/devicehive/values.yaml b/k8s/devicehive/values.yaml index 7d82305..7a4ad98 100644 --- a/k8s/devicehive/values.yaml +++ b/k8s/devicehive/values.yaml @@ -52,6 +52,16 @@ backendNode: replicaCount: 1 resources: {} +coapProxy: + enabled: false + image: devicehive/devicehive-coap-proxy:1.0.0 + pullPolicy: IfNotPresent + replicaCount: 1 + resources: {} + service: + type: 
ClusterIP + port: 5683 + + mqttBroker: enabled: false image: devicehive/devicehive-mqtt:1.1.0 @@ -81,7 +91,6 @@ wsProxy: replicaCount: 1 resources: {} external: - enabled: false replicaCount: 1 resources: {} diff --git a/rdbms-image/README.md b/rdbms-image/README.md index 2264147..0a6337a 100644 --- a/rdbms-image/README.md +++ b/rdbms-image/README.md @@ -40,6 +40,7 @@ Table below lists endpoints where you can find various DeviceHive services. Repl | 1883 | MQTT brokers | If enabled | | 2181 | Zookeeper | | | 5432 | PostgreSQL DB | | +| 5683 | CoAP-WebSockets proxy | If enabled, see [CoAP-WebSockets proxy](#coap-websockets-proxy) section below | | 5701 | Hazelcast | | | 7071 | Kafka metrics | If enabled, see [Kafka metrics](#kafka-metrics) section below | | 8080 | Frontend service | | @@ -113,6 +114,19 @@ To enable DeviceHive to communicate over Apache Kafka message bus to scale out a * `DH_RPC_CLIENT_RES_CONS_THREADS` - Kafka response consumer threads in the Frontend, defaults to `3`. * `DH_AUTH_SPRING_PROFILES_ACTIVE`, `DH_FE_SPRING_PROFILES_ACTIVE`, `DH_BE_SPRING_PROFILES_ACTIVE` and `DH_PLUGIN_SPRING_PROFILES_ACTIVE` - Changes which Spring profile to use for the Auth, Frontend, Backend and Plugin services respectively. Defaults to `ws-kafka-proxy-frontend` for Frontend, `ws-kafka-proxy-backend` for Backend and `ws-kafka-proxy` for Auth/Plugin. Can be changed to `rpc-client` for Auth/Frontend/Plugin and `rpc-server` for Backend to use direct connection to Kafka instead of devicehive-ws-proxy service. +### CoAP-WebSockets proxy +The [devicehive-coap-proxy][coap-proxy-url] is a CoAP to WebSockets proxy between CoAP clients and the DeviceHive server. The proxy uses WebSocket sessions to communicate with DeviceHive Server and listens on the standard CoAP UDP port 5683 for clients. + +To enable the optional CoAP-WebSockets proxy, run DeviceHive with the following command.
This will start the CoAP-WebSockets proxy listening on UDP port 5683: + +``` +sudo docker-compose -f docker-compose.yml -f coap-proxy.yml up -d +``` + +Or add the line `COMPOSE_FILE=docker-compose.yml:coap-proxy.yml` to the `.env` file. + +[coap-proxy-url]: https://github.com/devicehive/devicehive-coap-proxy + ### MQTT brokers The [devicehive-mqtt plugin][dh-mqtt-url] is a MQTT transport layer between MQTT clients and DeviceHive server. The broker uses WebSocket sessions to communicate with DeviceHive Server and Redis server for persistence functionality. @@ -242,7 +256,7 @@ echo "developer readwrite" > jmxremote.access chmod 0400 jmxremote.password ``` -2. Open `jmx-remote.yml` file and replace `` in _JAVA_OPTIONS env vars with actual hostname of DeviceHive server. +2. Set the `DEBUG_RMI_HOSTNAME` variable (export it in the environment or add a line to the `.env` file) to the actual hostname of the DeviceHive server. 3. Run DeviceHive with the following command: ``` sudo docker-compose -f docker-compose.yml -f jmx-remote.yml diff --git a/rdbms-image/coap-proxy.yml b/rdbms-image/coap-proxy.yml new file mode 100644 index 0000000..5c54f3a --- /dev/null +++ b/rdbms-image/coap-proxy.yml @@ -0,0 +1,19 @@ +version: "3" +services: + coap_proxy: + image: devicehive/devicehive-coap-proxy:1.0.0 + links: + - dh_frontend + restart: unless-stopped + environment: + - PROXY.HOST=0.0.0.0 + - PROXY.PORT=5683 + - PROXY.TARGET=ws://dh_frontend:8080/api/websocket + + dh_proxy: + ports: + - "5683:5683/udp" + links: + - coap_proxy + volumes: + - "./nginx-coap-proxy.conf:/etc/nginx/stream.d/nginx-coap-proxy.conf:ro,Z" diff --git a/rdbms-image/jmx-remote.yml b/rdbms-image/jmx-remote.yml index 10a0d41..6eaaeea 100644 --- a/rdbms-image/jmx-remote.yml +++ b/rdbms-image/jmx-remote.yml @@ -5,7 +5,7 @@ services: - "9999:9999" - "10000:10000" environment: - _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname= -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10000" + _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname=${DEBUG_RMI_HOSTNAME} -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10000" volumes: - ./jmxremote.password:/opt/devicehive/jmxremote.password:ro,z - ./jmxremote.access:/opt/devicehive/jmxremote.access:ro,z @@ -15,7 +15,7 @@ services: - "10001:10001" - "10002:10002" environment: - _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=10001 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname= -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10002" + _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=10001 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname=${DEBUG_RMI_HOSTNAME} -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10002" volumes: - ./jmxremote.password:/opt/devicehive/jmxremote.password:ro,z - ./jmxremote.access:/opt/devicehive/jmxremote.access:ro,z @@ -25,7 +25,7 @@ services: - "10003:10003" - 
"10004:10004" environment: - _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=10003 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname= -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10004" + _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=10003 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname=${DEBUG_RMI_HOSTNAME} -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10004" volumes: - ./jmxremote.password:/opt/devicehive/jmxremote.password:ro,z - ./jmxremote.access:/opt/devicehive/jmxremote.access:ro,z @@ -35,7 +35,7 @@ services: - "10005:10005" - "10006:10006" environment: - _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=10005 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname= -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10006" + _JAVA_OPTIONS: "-Dcom.sun.management.jmxremote.port=10005 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.access.file=/opt/devicehive/jmxremote.access -Dcom.sun.management.jmxremote.password.file=/opt/devicehive/jmxremote.password -Djava.rmi.server.hostname=${DEBUG_RMI_HOSTNAME} -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.rmi.port=10006" volumes: - 
./jmxremote.password:/opt/devicehive/jmxremote.password:ro,z - - ./jmxremote.access:/opt/devicehive/jmxremote.access:ro,z \ No newline at end of file + - ./jmxremote.access:/opt/devicehive/jmxremote.access:ro,z diff --git a/rdbms-image/nginx-coap-proxy.conf b/rdbms-image/nginx-coap-proxy.conf new file mode 100644 index 0000000..ebaefc1 --- /dev/null +++ b/rdbms-image/nginx-coap-proxy.conf @@ -0,0 +1,10 @@ +upstream coap_proxy { + server coap_proxy:5683; + zone tcp_mem 64k; +} + +server { + listen 5683 udp; + proxy_pass coap_proxy; + proxy_connect_timeout 1s; +}
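
Putting the new chart option together: a values fragment that enables the CoAP-WebSockets proxy in the Helm chart, using only the `coapProxy.*` keys documented in the configuration table above (all values shown are the documented defaults apart from `enabled`):

``` yaml
coapProxy:
  enabled: true
  image: devicehive/devicehive-coap-proxy:1.0.0
  pullPolicy: IfNotPresent
  replicaCount: 1
  service:
    type: ClusterIP
    port: 5683
```

With `service.type` left at `ClusterIP`, the proxy is reachable only from inside the cluster on UDP port 5683; switch it to `NodePort` or `LoadBalancer` to expose the CoAP endpoint externally.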