docs: fix image errors (#7909)
michelle-0808 authored Aug 1, 2024
1 parent a8a785e commit 8a02436
Showing 17 changed files with 39 additions and 38 deletions.
@@ -37,23 +37,23 @@ The faults here are all simulated by deleting pods. When there are sufficient re

***Steps:***

-1. View the pod role of the ApeCloud MySQL RaftGroup Cluster. In this example, the leader pod's name is `mysql-cluster-1`.
+1. View the pod role of the ApeCloud MySQL RaftGroup Cluster. In this example, the leader pod's name is `mycluster-mysql-1`.

```bash
kubectl get pods --show-labels -n demo | grep role
```

![describe_pod](./../../../img/api-ha-grep-role.png)
-2. Delete the leader pod `mysql-cluster-mysql-1` to simulate a pod fault.
+2. Delete the leader pod `mycluster-mysql-1` to simulate a pod fault.

```bash
-kubectl delete pod mysql-cluster-mysql-1 -n demo
+kubectl delete pod mycluster-mysql-1 -n demo
```

![delete_pod](./../../../img/api-ha-delete-leader-pod.png)
3. Check the status of the pods and RaftGroup Cluster connection.

-The following example shows that the roles of pods have changed after the old leader pod was deleted and `mysql-cluster-mysql-0` is elected as the new leader pod.
+The following example shows that the roles of pods have changed after the old leader pod was deleted and `mycluster-mysql-0` is elected as the new leader pod.

```bash
kubectl get pods --show-labels -n demo | grep role
```

@@ -81,27 +81,27 @@ The faults here are all simulated by deleting pods. When there are sufficient re

***How the automatic recovery works***

-After the leader pod is deleted, the ApeCloud MySQL RaftGroup Cluster elects a new leader. In this example, `mysql-cluster-mysql-0` is elected as the new leader. KubeBlocks detects that the leader has changed, and sends a notification to update the access link. The original exception node automatically rebuilds and recovers to the normal RaftGroup Cluster state. It normally takes 30 seconds from exception to recovery.
+After the leader pod is deleted, the ApeCloud MySQL RaftGroup Cluster elects a new leader. In this example, `mycluster-mysql-0` is elected as the new leader. KubeBlocks detects that the leader has changed and sends a notification to update the access link. The faulty node automatically rebuilds and recovers to the normal RaftGroup Cluster state. Recovery normally takes about 30 seconds.
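
To watch the election as it happens, you can stream the role labels from a second terminal before deleting the leader. A minimal sketch, assuming the `demo` namespace and that KubeBlocks exposes the role through the `kubeblocks.io/role` pod label, which is what the `grep role` output above is matching on:

```bash
# -L prints the role label as a column; -w streams updates,
# so the new leader shows up as soon as the election completes.
kubectl get pods -n demo -L kubeblocks.io/role -w
```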

### Single follower pod exception

***Steps:***

-1. View the pod role again and in this example, the follower pods are `mysql-cluster-mysql-1` and `mysql-cluster-mysql-2`.
+1. View the pod role again. In this example, the follower pods are `mycluster-mysql-1` and `mycluster-mysql-2`.

```bash
kubectl get pods --show-labels -n demo | grep role
```

![describe_cluster](./../../../img/api-ha-grep-role-single-follower-pod.png)
-2. Delete the follower pod `mysql-cluster-mysql-1`.
+2. Delete the follower pod `mycluster-mysql-1`.

```bash
kubectl delete pod mycluster-mysql-1 -n demo
```

![delete_follower_pod](./../../../img/api-ha-single-follower-pod-delete.png)
-3. Open another terminal page and view the pod status. You can find the follower pod `mysql-cluster-mysql-1` is `Terminating`.
+3. Open another terminal and view the pod status. You can see that the follower pod `mycluster-mysql-1` is `Terminating`.

```bash
kubectl get pod -n demo
```

@@ -135,7 +135,7 @@ In this way, whether exceptions occur to one leader and one follower or two foll

***Steps:***

-1. View the pod role again. In this example, the follower pods are `mysql-cluster-mysql-1` and `mysql-cluster-mysql-2`.
+1. View the pod role again. In this example, the follower pods are `mycluster-mysql-1` and `mycluster-mysql-2`.

```bash
kubectl get pods --show-labels -n demo | grep role
```

@@ -149,7 +149,7 @@ In this way, whether exceptions occur to one leader and one follower or two foll
```

![delete_two_pods](./../../../img/api-ha-two-pod-get-status.png)
-3. Open another terminal page and view the pod status. You can find the follower pods `mysql-cluster-mysql-1` and `mysql-cluster-mysql-2` is `Terminating`.
+3. Open another terminal and view the pod status. You can see that the follower pods `mycluster-mysql-1` and `mycluster-mysql-2` are `Terminating`.

```bash
kubectl get pod -n demo
```
@@ -330,7 +330,7 @@

```bash
kubectl exec -it myproxy-cluster-vtgate-8659d5db95-4dzt5 -- bash
ls /vtdataroot
```

-Enter the container and view more logs of VTTable.
+Enter the container and view more logs of VTTablet.

```bash
kubectl exec -it myproxy-cluster-mysql-0 -c vttablet -- bash
```
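
If you only need recent log output rather than an interactive shell, `kubectl logs` can read the same container directly. A hedged sketch, reusing the pod and container names above and assuming your current kubectl context points at the cluster's namespace:

```bash
# Tail the last 100 VTTablet log lines and keep following (-f).
kubectl logs myproxy-cluster-mysql-0 -c vttablet --tail=100 -f
```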
@@ -126,8 +126,6 @@ curl http://127.0.0.1:9200/_cat/nodes?v

## Scaling

-Scaling function for vector databases is also supported.
-
### Scale horizontally

Horizontal scaling changes the number of pods. For example, you can scale out replicas from three to five, as in the sketch below.
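
One way to apply it is with a `HorizontalScaling` OpsRequest. A minimal sketch under stated assumptions: the cluster is named `mycluster` in the `demo` namespace, the component name is a placeholder, and the field layout follows the OpsRequest examples elsewhere in these docs and may differ across KubeBlocks versions:

```bash
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: ops-horizontal-scaling
  namespace: demo
spec:
  clusterName: mycluster       # assumed cluster name
  type: HorizontalScaling
  horizontalScaling:
  - componentName: vector-db   # placeholder component name
    replicas: 5                # scale out from three to five
EOF
```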
@@ -197,7 +195,7 @@ There are two ways to apply horizontal scaling.

```bash
EOF
```
-2. Check the operation status to validate the horizontal scaling status.
+2. Check the operation status to validate the horizontal scaling.
```bash
kubectl get ops -n demo
```

@@ -473,7 +471,7 @@ There are two ways to apply volume expansion.
</TabItem>
-  <TabItem value="Edit cluster YAML file" label="Edit cluster YAML fil">
+  <TabItem value="Edit cluster YAML file" label="Edit cluster YAML file">
1. Change the value of `spec.componentSpecs.volumeClaimTemplates.spec.resources` in the cluster YAML file.
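
   The fragment below is a sketch of where that field sits in the cluster manifest; the component name and the new size are placeholders:

```yaml
spec:
  componentSpecs:
  - name: vector-db            # placeholder component name
    volumeClaimTemplates:
    - name: data
      spec:
        resources:
          requests:
            storage: 40Gi      # the new, larger size
```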
@@ -598,7 +596,7 @@ EOF
</TabItem>
-  <TabItem value="Edit cluster YAML filee" label="Edit cluster YAML file">
+  <TabItem value="Edit cluster YAML file" label="Edit cluster YAML file">
Change replicas back to the original amount to start this cluster again.
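If you would rather not open an editor, a JSON patch can set the replica count in place. A hedged sketch, assuming the component is the first entry in `spec.componentSpecs` and originally ran three replicas:

```bash
# Restore the original replica count on the first component.
kubectl patch cluster mycluster -n demo --type json \
  -p '[{"op": "replace", "path": "/spec/componentSpecs/0/replicas", "value": 3}]'
```
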
37 changes: 18 additions & 19 deletions docs/api_docs/kubeblocks-for-kafka/configuration/configuration.md
@@ -7,7 +7,7 @@ sidebar_position: 1

# Configure cluster parameters

-This guide shows how to configure cluster parameters by creating an opsRequest.
+This guide shows how to configure cluster parameters.

## Before you start

@@ -19,7 +19,9 @@ This guide shows how to configure cluster parameters by creating an opsRequest.
1. Get the configuration file of this cluster.

```bash
-kubectl get configurations.apps.kubeblocks.io mycluster-kafka -n demo
+kubectl get configurations.apps.kubeblocks.io -n demo
+
+kubectl edit configurations.apps.kubeblocks.io mycluster-kafka-combine -n demo
```

2. Configure parameters according to your needs. The example below adds the `spec.configFileParams` part to configure `log.cleanup.policy`.
@@ -30,18 +32,16 @@ This guide shows how to configure cluster parameters by creating an opsRequest.

```yaml
  componentName: kafka
  configItemDetails:
  - configFileParams:
-     mongodb.cnf:
+     server.properties:
        parameters:
-         log.flush.interval.ms: "2000"
+         log.cleanup.policy: "compact"
    configSpec:
-     constraintRef: kafka-config-constraints
-     name: kafka-configuration
+     constraintRef: kafka-cc
+     name: kafka-configuration-tpl
      namespace: kb-system
-     templateRef: kafka3.3.2-config-template
+     templateRef: kafka-configuration-tpl
      volumeName: kafka-config
    name: kafka-config
  - configSpec:
      defaultMode: 292
      name: kafka-configuration-tpl
```
3. Connect to this cluster to verify whether the configuration takes effect as expected.
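
   One hedged way to check from inside a broker, assuming a combined-mode pod named `mycluster-kafka-combine-0` and that the stock Kafka CLI tools are on the image's PATH:

```bash
# Ask the broker for its effective default config and filter for the key.
kubectl exec -it mycluster-kafka-combine-0 -n demo -- \
  kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default --describe --all | grep log.cleanup.policy
```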
@@ -55,7 +55,7 @@ This guide shows how to configure cluster parameters by creating an opsRequest.

## Configure cluster parameters with OpsRequest

-1. Define an OpsRequest file and configure the parameters in the OpsRequest in a yaml file named `mycluster-configuring-demo.yaml`. In this example, `max_connections` is configured as `600`.
+1. Define an OpsRequest and configure its parameters in a YAML file named `mycluster-configuring-demo.yaml`. In this example, `log.cleanup.policy` is configured as `compact`.

```bash
apiVersion: apps.kubeblocks.io/v1alpha1
```

@@ -71,8 +71,8 @@ This guide shows how to configure cluster parameters by creating an opsRequest.

```bash
    - keys:
      - key: server.properties
        parameters:
-       - key: log.flush.interval.ms
-         value: "2000"
+       - key: log.cleanup.policy
+         value: "compact"
      name: kafka-configuration-tpl
    preConditionDeadlineSeconds: 0
  type: Reconfiguring
```

@@ -100,13 +100,12 @@ This guide shows how to configure cluster parameters by creating an opsRequest.

```bash
kubectl apply -f mycluster-configuring-demo.yaml
```
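
   To follow the reconfiguration as it runs, you can watch the OpsRequest object. A sketch that assumes the `metadata.name` in the file above matches the file name:

```bash
# -w streams status transitions (e.g. Running -> Succeed) as they happen.
kubectl get ops mycluster-configuring-demo -n demo -w
```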

-3. Connect to this cluster to verify whether the configuration takes effect as expected.
+3. Verify whether the configuration takes effect as expected.

```bash
-kbcli cluster describe-config mykafka --show-detail | grep log.cleanup.policy
+kbcli cluster describe-config mycluster --show-detail | grep log.cleanup.policy
>
log.cleanup.policy = compact
-mykafka-reconfiguring-wvqns mykafka broker kafka-configuration-tpl server.properties Succeed restart 1/1 May 10,2024 16:28 UTC+0800 {"server.properties":"{\"log.cleanup.policy\":\"compact\"}"}
```

:::note
@@ -136,7 +135,7 @@ You can also view the details of this configuration file and parameters.
* View the user guide of a specified parameter.

```bash
-kbcli cluster explain-config mykafka --param=log.cleanup.policy
+kbcli cluster explain-config mycluster --param=log.cleanup.policy
```

`--config-specs` is required to specify a configuration template since Kafka currently supports multiple templates. You can run `kbcli cluster describe-config mycluster` to view all the template names.
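
A sketch of passing the template explicitly, using the template name that appears in the `describe-config` output:

```bash
kbcli cluster explain-config mycluster --param=log.cleanup.policy --config-specs=kafka-configuration-tpl
```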
@@ -147,15 +146,15 @@

```bash
template meta:
-ConfigSpec: kafka-configuration-tpl ComponentName: broker ClusterName: mykafka
+ConfigSpec: kafka-configuration-tpl ComponentName: kafka-combine ClusterName: mycluster

Configure Constraint:
Parameter Name: log.cleanup.policy
Allowed Values: "compact","delete"
Scope: Global
Dynamic: false
Type: string
Description: The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies.
```

</details>
4 changes: 4 additions & 0 deletions docs/user_docs/maintenance/_category_.yaml
@@ -0,0 +1,4 @@
+position: 5
+label: maintenance
+collapsible: true
+collapsed: true
@@ -53,14 +53,14 @@ If you don't have an object storage service from a cloud provider, you can deplo

Once you are logged in to the dashboard, you can generate an `access key` and `secret key`.

-![backup-and-restore-backup-repo-1](./../../../img/backup-and-restore-backup-repo-1.png)
+![backup-and-restore-backup-repo-1](./../../../../img/backup-and-restore-backup-repo-1.png)

3. Create a bucket.

Create a bucket named `test-minio` for the test.

-![backup-and-restore-backup-repo-2](./../../../img/backup-and-restore-backup-repo-2.png)
-![backup-and-restore-backup-repo3](./../../../img/backup-and-restore-backup-repo-3.png)
+![backup-and-restore-backup-repo-2](./../../../../img/backup-and-restore-backup-repo-2.png)
+![backup-and-restore-backup-repo3](./../../../../img/backup-and-restore-backup-repo-3.png)
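
   If you prefer the command line to the web console, the MinIO client (`mc`) can create the same bucket; the endpoint and credentials below are placeholders for the values from step 2:

```bash
# Register the MinIO endpoint under an alias, then create the test bucket.
mc alias set local http://127.0.0.1:9000 <ACCESS_KEY> <SECRET_KEY>
mc mb local/test-minio
```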

:::note

