diff --git a/docs/api_docs/kubeblocks-for-apecloud-mysql/high-availability/high-availability.md b/docs/api_docs/kubeblocks-for-apecloud-mysql/high-availability/high-availability.md index 1f0e7ba4bb6..bb2d6a20da7 100644 --- a/docs/api_docs/kubeblocks-for-apecloud-mysql/high-availability/high-availability.md +++ b/docs/api_docs/kubeblocks-for-apecloud-mysql/high-availability/high-availability.md @@ -37,23 +37,23 @@ The faults here are all simulated by deleting pods. When there are sufficient re ***Steps:*** -1. View the pod role of the ApeCloud MySQL RaftGroup Cluster. In this example, the leader pod's name is `mysql-cluster-1`. +1. View the pod role of the ApeCloud MySQL RaftGroup Cluster. In this example, the leader pod's name is `mycluster-mysql-1`. ```bash kubectl get pods --show-labels -n demo | grep role ``` ![describe_pod](./../../../img/api-ha-grep-role.png) -2. Delete the leader pod `mysql-cluster-mysql-1` to simulate a pod fault. +2. Delete the leader pod `mycluster-mysql-1` to simulate a pod fault. ```bash - kubectl delete pod mysql-cluster-mysql-1 -n demo + kubectl delete pod mycluster-mysql-1 -n demo ``` ![delete_pod](./../../../img/api-ha-delete-leader-pod.png) 3. Check the status of the pods and RaftGroup Cluster connection. - The following example shows that the roles of pods have changed after the old leader pod was deleted and `mysql-cluster-mysql-0` is elected as the new leader pod. + The following example shows that the roles of pods have changed after the old leader pod was deleted and `mycluster-mysql-0` is elected as the new leader pod. ```bash kubectl get pods --show-labels -n demo | grep role @@ -81,27 +81,27 @@ The faults here are all simulated by deleting pods. When there are sufficient re ***How the automatic recovery works*** - After the leader pod is deleted, the ApeCloud MySQL RaftGroup Cluster elects a new leader. In this example, `mysql-cluster-mysql-0` is elected as the new leader. 
KubeBlocks detects that the leader has changed, and sends a notification to update the access link. The original exception node automatically rebuilds and recovers to the normal RaftGroup Cluster state. It normally takes 30 seconds from exception to recovery. + After the leader pod is deleted, the ApeCloud MySQL RaftGroup Cluster elects a new leader. In this example, `mycluster-mysql-0` is elected as the new leader. KubeBlocks detects that the leader has changed, and sends a notification to update the access link. The original exception node automatically rebuilds and recovers to the normal RaftGroup Cluster state. It normally takes 30 seconds from exception to recovery. ### Single follower pod exception ***Steps:*** -1. View the pod role again and in this example, the follower pods are `mysql-cluster-mysql-1` and `mysql-cluster-mysql-2`. +1. View the pod role again. In this example, the follower pods are `mycluster-mysql-1` and `mycluster-mysql-2`. ```bash kubectl get pods --show-labels -n demo | grep role ``` ![describe_cluster](./../../../img/api-ha-grep-role-single-follower-pod.png) -2. Delete the follower pod `mysql-cluster-mysql-1`. +2. Delete the follower pod `mycluster-mysql-1`. ```bash kubectl delete pod mycluster-mysql-1 -n demo ``` ![delete_follower_pod](./../../../img/api-ha-single-follower-pod-delete.png) -3. Open another terminal page and view the pod status. You can find the follower pod `mysql-cluster-mysql-1` is `Terminating`. +3. Open another terminal page and view the pod status. You can see that the follower pod `mycluster-mysql-1` is `Terminating`. ```bash kubectl get pod -n demo @@ -135,7 +135,7 @@ In this way, whether exceptions occur to one leader and one follower or two foll ***Steps:*** -1. View the pod role again. In this example, the follower pods are `mysql-cluster-mysql-1` and `mysql-cluster-mysql-2`. +1. View the pod role again. In this example, the follower pods are `mycluster-mysql-1` and `mycluster-mysql-2`.
```bash kubectl get pods --show-labels -n demo | grep role @@ -149,7 +149,7 @@ In this way, whether exceptions occur to one leader and one follower or two foll ``` ![delete_two_pods](./../../../img/api-ha-two-pod-get-status.png) -3. Open another terminal page and view the pod status. You can find the follower pods `mysql-cluster-mysql-1` and `mysql-cluster-mysql-2` is `Terminating`. +3. Open another terminal page and view the pod status. You can see that the follower pods `mycluster-mysql-1` and `mycluster-mysql-2` are `Terminating`. ```bash kubectl get pod -n demo diff --git a/docs/api_docs/kubeblocks-for-apecloud-mysql/proxy/apecloud-mysql-proxy.md b/docs/api_docs/kubeblocks-for-apecloud-mysql/proxy/apecloud-mysql-proxy.md index 026e33bef4f..829412d137c 100644 --- a/docs/api_docs/kubeblocks-for-apecloud-mysql/proxy/apecloud-mysql-proxy.md +++ b/docs/api_docs/kubeblocks-for-apecloud-mysql/proxy/apecloud-mysql-proxy.md @@ -330,7 +330,7 @@ kubectl exec -it myproxy-cluster-vtgate-8659d5db95-4dzt5 -- bash ls /vtdataroot ``` -Enter the container and view more logs of VTTable. +Enter the container and view more logs of VTTablet. ```bash kubectl exec -it myproxy-cluster-mysql-0 -c vttablet -- bash diff --git a/docs/api_docs/kubeblocks-for-elasticsearch/manage-elasticsearch.md b/docs/api_docs/kubeblocks-for-elasticsearch/manage-elasticsearch.md index 36556586ec0..2de7a9d8f38 100644 --- a/docs/api_docs/kubeblocks-for-elasticsearch/manage-elasticsearch.md +++ b/docs/api_docs/kubeblocks-for-elasticsearch/manage-elasticsearch.md @@ -126,8 +126,6 @@ curl http://127.0.0.1:9200/_cat/nodes?v ## Scaling -Scaling function for vector databases is also supported. - ### Scale horizontally Horizontal scaling changes the amount of pods. For example, you can scale out replicas from three to five. @@ -197,7 +195,7 @@ There are two ways to apply horizontal scaling. EOF ``` -2. Check the operation status to validate the horizontal scaling status. +2.
Check the operation status to validate the horizontal scaling. ```bash kubectl get ops -n demo @@ -473,7 +471,7 @@ There are two ways to apply volume expansion. - + 1. Change the value of `spec.componentSpecs.volumeClaimTemplates.spec.resources` in the cluster YAML file. @@ -598,7 +596,7 @@ EOF - + Change replicas back to the original amount to start this cluster again. diff --git a/docs/api_docs/kubeblocks-for-kafka/configuration/configuration.md b/docs/api_docs/kubeblocks-for-kafka/configuration/configuration.md index a0643aa26ec..4c8bbf3f077 100644 --- a/docs/api_docs/kubeblocks-for-kafka/configuration/configuration.md +++ b/docs/api_docs/kubeblocks-for-kafka/configuration/configuration.md @@ -7,7 +7,7 @@ sidebar_position: 1 # Configure cluster parameters -This guide shows how to configure cluster parameters by creating an opsRequest. +This guide shows how to configure cluster parameters. ## Before you start @@ -19,7 +19,9 @@ This guide shows how to configure cluster parameters by creating an opsRequest. 1. Get the configuration file of this cluster. ```bash - kubectl edit configurations.apps.kubeblocks.io mycluster-kafka -n demo + kubectl get configurations.apps.kubeblocks.io -n demo + + kubectl edit configurations.apps.kubeblocks.io mycluster-kafka-combine -n demo ``` 2. Configure parameters according to your needs. The example below adds the `spec.configFileParams` part to configure `log.cleanup.policy`. @@ -30,18 +32,16 @@ This guide shows how to configure cluster parameters by creating an opsRequest. 
componentName: kafka configItemDetails: - configFileParams: - mongodb.cnf: + server.properties: parameters: - log.flush.interval.ms: "2000" + log.cleanup.policy: "compact" configSpec: - constraintRef: kafka-config-constraints - name: kafka-configuration + constraintRef: kafka-cc + name: kafka-configuration-tpl namespace: kb-system - templateRef: kafka3.3.2-config-template + templateRef: kafka-configuration-tpl volumeName: kafka-config - name: kafka-config - - configSpec: - defaultMode: 292 + name: kafka-configuration-tpl ``` 3. Connect to this cluster to verify whether the configuration takes effect as expected. @@ -55,7 +55,7 @@ This guide shows how to configure cluster parameters by creating an opsRequest. ## Configure cluster parameters with OpsRequest -1. Define an OpsRequest file and configure the parameters in the OpsRequest in a yaml file named `mycluster-configuring-demo.yaml`. In this example, `max_connections` is configured as `600`. +1. Define an OpsRequest file and configure the parameters in the OpsRequest in a yaml file named `mycluster-configuring-demo.yaml`. In this example, `log.cleanup.policy` is configured as `compact`. ```bash apiVersion: apps.kubeblocks.io/v1alpha1 @@ -71,8 +71,8 @@ This guide shows how to configure cluster parameters by creating an opsRequest. - keys: - key: server.properties parameters: - - key: log.flush.interval.ms - value: "2000" + - key: log.cleanup.policy + value: "compact" name: kafka-configuration-tpl preConditionDeadlineSeconds: 0 type: Reconfiguring @@ -100,13 +100,12 @@ This guide shows how to configure cluster parameters by creating an opsRequest. kubectl apply -f mycluster-configuring-demo.yaml ``` -3. Connect to this cluster to verify whether the configuration takes effect as expected. +3. Verify whether the configuration takes effect as expected. 
```bash - kbcli cluster describe-config mykafka --show-detail | grep log.cleanup.policy + kbcli cluster describe-config mycluster --show-detail | grep log.cleanup.policy > log.cleanup.policy = compact - mykafka-reconfiguring-wvqns mykafka broker kafka-configuration-tpl server.properties Succeed restart 1/1 May 10,2024 16:28 UTC+0800 {"server.properties":"{\"log.cleanup.policy\":\"compact\"}"} ``` :::note @@ -136,7 +135,7 @@ You can also view the details of this configuration file and parameters. * View the user guide of a specified parameter. ```bash - kbcli cluster explain-config mykafka --param=log.cleanup.policy + kbcli cluster explain-config mycluster --param=log.cleanup.policy ``` `--config-specs` is required to specify a configuration template since ApeCloud MySQL currently supports multiple templates. You can run `kbcli cluster describe-config mycluster` to view the all template names. @@ -147,7 +146,7 @@ You can also view the details of this configuration file and parameters. ```bash template meta: - ConfigSpec: kafka-configuration-tpl ComponentName: broker ClusterName: mykafka + ConfigSpec: kafka-configuration-tpl ComponentName: kafka-combine ClusterName: mycluster Configure Constraint: Parameter Name: log.cleanup.policy @@ -155,7 +154,7 @@ You can also view the details of this configuration file and parameters. Scope: Global Dynamic: false Type: string - Description: The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. + Description: The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. 
``` diff --git a/docs/user_docs/maintenance/_category_.yaml b/docs/user_docs/maintenance/_category_.yaml new file mode 100644 index 00000000000..38b1b111507 --- /dev/null +++ b/docs/user_docs/maintenance/_category_.yaml @@ -0,0 +1,4 @@ +position: 5 +label: maintenance +collapsible: true +collapsed: true \ No newline at end of file diff --git a/docs/user_docs/maintaince/backup-and-restore/_category_.yaml b/docs/user_docs/maintenance/backup-and-restore/_category_.yaml similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/_category_.yaml rename to docs/user_docs/maintenance/backup-and-restore/_category_.yaml diff --git a/docs/user_docs/maintaince/backup-and-restore/backup/_category_.yaml b/docs/user_docs/maintenance/backup-and-restore/backup/_category_.yaml similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/backup/_category_.yaml rename to docs/user_docs/maintenance/backup-and-restore/backup/_category_.yaml diff --git a/docs/user_docs/maintaince/backup-and-restore/backup/backup-repo.md b/docs/user_docs/maintenance/backup-and-restore/backup/backup-repo.md similarity index 97% rename from docs/user_docs/maintaince/backup-and-restore/backup/backup-repo.md rename to docs/user_docs/maintenance/backup-and-restore/backup/backup-repo.md index 1a0dde66033..0470b4b12ec 100644 --- a/docs/user_docs/maintaince/backup-and-restore/backup/backup-repo.md +++ b/docs/user_docs/maintenance/backup-and-restore/backup/backup-repo.md @@ -53,14 +53,14 @@ If you don't have an object storage service from a cloud provider, you can deplo Once you are logged in to the dashboard, you can generate an `access key` and `secret key`. - ![backup-and-restore-backup-repo-1](./../../../img/backup-and-restore-backup-repo-1.png) + ![backup-and-restore-backup-repo-1](./../../../../img/backup-and-restore-backup-repo-1.png) 3. Create a bucket. Create a bucket named `test-minio` for the test. 
- ![backup-and-restore-backup-repo-2](./../../../img/backup-and-restore-backup-repo-2.png) - ![backup-and-restore-backup-repo3](./../../../img/backup-and-restore-backup-repo-3.png) + ![backup-and-restore-backup-repo-2](./../../../../img/backup-and-restore-backup-repo-2.png) + ![backup-and-restore-backup-repo3](./../../../../img/backup-and-restore-backup-repo-3.png) :::note diff --git a/docs/user_docs/maintaince/backup-and-restore/backup/configure-backuppolicy.md b/docs/user_docs/maintenance/backup-and-restore/backup/configure-backuppolicy.md similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/backup/configure-backuppolicy.md rename to docs/user_docs/maintenance/backup-and-restore/backup/configure-backuppolicy.md diff --git a/docs/user_docs/maintaince/backup-and-restore/backup/on-demand-backup.md b/docs/user_docs/maintenance/backup-and-restore/backup/on-demand-backup.md similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/backup/on-demand-backup.md rename to docs/user_docs/maintenance/backup-and-restore/backup/on-demand-backup.md diff --git a/docs/user_docs/maintaince/backup-and-restore/backup/scheduled-backup.md b/docs/user_docs/maintenance/backup-and-restore/backup/scheduled-backup.md similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/backup/scheduled-backup.md rename to docs/user_docs/maintenance/backup-and-restore/backup/scheduled-backup.md diff --git a/docs/user_docs/maintaince/backup-and-restore/introduction.md b/docs/user_docs/maintenance/backup-and-restore/introduction.md similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/introduction.md rename to docs/user_docs/maintenance/backup-and-restore/introduction.md diff --git a/docs/user_docs/maintaince/backup-and-restore/restore/_category_.yaml b/docs/user_docs/maintenance/backup-and-restore/restore/_category_.yaml similarity index 100% rename from 
docs/user_docs/maintaince/backup-and-restore/restore/_category_.yaml rename to docs/user_docs/maintenance/backup-and-restore/restore/_category_.yaml diff --git a/docs/user_docs/maintaince/backup-and-restore/restore/pitr.md b/docs/user_docs/maintenance/backup-and-restore/restore/pitr.md similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/restore/pitr.md rename to docs/user_docs/maintenance/backup-and-restore/restore/pitr.md diff --git a/docs/user_docs/maintaince/backup-and-restore/restore/restore-data-from-backup-set.md b/docs/user_docs/maintenance/backup-and-restore/restore/restore-data-from-backup-set.md similarity index 100% rename from docs/user_docs/maintaince/backup-and-restore/restore/restore-data-from-backup-set.md rename to docs/user_docs/maintenance/backup-and-restore/restore/restore-data-from-backup-set.md diff --git a/docs/user_docs/maintaince/resource-scheduling/_category_.yml b/docs/user_docs/maintenance/resource-scheduling/_category_.yml similarity index 100% rename from docs/user_docs/maintaince/resource-scheduling/_category_.yml rename to docs/user_docs/maintenance/resource-scheduling/_category_.yml diff --git a/docs/user_docs/maintaince/resource-scheduling/resource-scheduling.md b/docs/user_docs/maintenance/resource-scheduling/resource-scheduling.md similarity index 100% rename from docs/user_docs/maintaince/resource-scheduling/resource-scheduling.md rename to docs/user_docs/maintenance/resource-scheduling/resource-scheduling.md
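The high-availability steps in this diff repeatedly grep pod role labels to find the leader before simulating a fault. The sketch below shows one way to pull the leader pod name out of that output with standard text tools. The sample output and the `kubeblocks.io/role` label name are assumptions for illustration; on a live cluster, pipe the real `kubectl get pods --show-labels -n demo` output instead of the here-string.

```bash
# Hypothetical `kubectl get pods --show-labels -n demo` output; substitute
# the real command on a live cluster.
sample_output='mycluster-mysql-0   4/4   Running   0   5m   kubeblocks.io/role=leader,app.kubernetes.io/instance=mycluster
mycluster-mysql-1   4/4   Running   0   5m   kubeblocks.io/role=follower,app.kubernetes.io/instance=mycluster
mycluster-mysql-2   4/4   Running   0   5m   kubeblocks.io/role=follower,app.kubernetes.io/instance=mycluster'

# Keep only the row whose labels mark it as leader, then take the pod name
# from the first column.
leader=$(printf '%s\n' "$sample_output" | grep 'role=leader' | awk '{print $1}')
echo "leader pod: $leader"
```

Re-running the same extraction after deleting the leader pod is a quick way to confirm that a follower (for example `mycluster-mysql-0` in the walkthrough above) has been promoted.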
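Several steps in the reconfiguration walkthrough check `kubectl get ops -n demo` until the OpsRequest reaches a terminal phase. A minimal sketch of gating a script on that phase is shown below; the table layout, column order, and OpsRequest name are assumptions based on the example in this diff, so adjust the field index to match your actual `kubectl get ops` columns.

```bash
# Hypothetical `kubectl get ops -n demo` output; on a live cluster use
# ops_output=$(kubectl get ops -n demo) instead.
ops_output='NAME                         TYPE            CLUSTER     STATUS    PROGRESS   AGE
mycluster-configuring-demo   Reconfiguring   mycluster   Succeed   1/1        2m'

# Pick the STATUS column (assumed to be field 4) for the named OpsRequest.
phase=$(printf '%s\n' "$ops_output" | awk '$1 == "mycluster-configuring-demo" {print $4}')

if [ "$phase" = "Succeed" ]; then
  echo "reconfiguration completed"
else
  echo "reconfiguration not finished: $phase" >&2
fi
```

Once the phase reads `Succeed`, the `kbcli cluster describe-config ... | grep log.cleanup.policy` check from the walkthrough should show the new value.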