+
+## More
+
+Check out the documentation for the [full list of runtime APIs](https://vitepress.dev/reference/runtime-api#usedata).
diff --git a/docs_new/architecture.md b/docs_new/architecture.md
new file mode 100644
index 0000000000..89202aa47d
--- /dev/null
+++ b/docs_new/architecture.md
@@ -0,0 +1,396 @@
+# Architecture
+
+The design of Kanister follows the operator pattern. This means Kanister
+defines its own resources and interacts with those resources through a
+controller. [This blog
+post](https://www.redhat.com/en/blog/operators-over-easy-introduction-kubernetes-operators)
+describes the pattern in detail.
+
+In particular, Kanister is composed of three main components: the
+Controller and two Custom Resources - ActionSets and Blueprints. The
+diagram below illustrates their relationship and how they fit together:
+
+![image](/kanister_workflow.png)
+
+## Kanister Workflow
+
+As seen in the above diagram and described in detail below, all Kanister
+operations are declarative and require an ActionSet to be created by the
+user. Once the ActionSet is detected by the Kanister controller, it
+examines the environment for the Blueprint referenced in the ActionSet
+(along with other required configuration). If all requirements are
+satisfied, it will then use the discovered Blueprint to complete the
+action (e.g., backup) specified in the ActionSet. Finally, the original
+ActionSet will be updated by the controller with status and other
+metadata generated by the action execution.
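+
+As an illustrative sketch (all names here are hypothetical), the
+ActionSet a user creates to trigger this workflow might look like:
+
+``` yaml
+apiVersion: cr.kanister.io/v1alpha1
+kind: ActionSet
+metadata:
+  name: example-backup
+  namespace: kanister
+spec:
+  actions:
+    - name: backup
+      blueprint: example-blueprint
+      object:
+        kind: Deployment
+        name: example-deployment
+        namespace: example-namespace
+```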
+
+## Custom Resources
+
+Users interact with Kanister through Kubernetes resources known as
+CustomResources (CRs). When the controller starts, it creates the CR
+definitions called CustomResourceDefinitions (CRDs).
+[CRDs](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)
+were introduced in Kubernetes 1.7 and replaced TPRs. The lifecycle of
+these objects can be managed entirely through kubectl. Kanister uses
+Kubernetes' code generation tools to create Go client libraries for its
+CRs.
+
+The schemas of the Kanister CRDs can be found in
+[types.go](https://github.com/kanisterio/kanister/tree/master/pkg/apis/cr/v1alpha1/types.go)
+
+### Blueprints
+
+Blueprint CRs are a set of instructions that tell the controller how to
+perform actions on a specific application.
+
+A Blueprint contains a field called `Actions` which is a mapping of
+Action Name to `BlueprintAction`.
+
+The definition of a `BlueprintAction` is:
+
+``` go
+// BlueprintAction describes the set of phases that constitute an action.
+type BlueprintAction struct {
+ Name string `json:"name"`
+ Kind string `json:"kind"`
+ ConfigMapNames []string `json:"configMapNames"`
+ SecretNames []string `json:"secretNames"`
+ InputArtifactNames []string `json:"inputArtifactNames"`
+ OutputArtifacts map[string]Artifact `json:"outputArtifacts"`
+ Phases []BlueprintPhase `json:"phases"`
+ DeferPhase *BlueprintPhase `json:"deferPhase,omitempty"`
+}
+```
+
+- `Kind` represents the type of Kubernetes object this BlueprintAction
+  is written for. Specifying it is optional; going forward, if it is
+  specified, Kanister will enforce that it matches the `Object` kind
+  specified in an ActionSet referencing this BlueprintAction.
+- `ConfigMapNames`, `SecretNames`, `InputArtifactNames` are optional
+ but, if specified, they list named parameters that must be included
+ by the `ActionSet`.
+- `OutputArtifacts` is an optional map of rendered parameters made
+ available to the `BlueprintAction`.
+- `Phases` is a required list of `BlueprintPhases`. These phases are
+ invoked in order when executing this Action.
+- `DeferPhase` is an optional `BlueprintPhase` invoked after the
+ execution of `Phases` defined above. A `DeferPhase`, when specified,
+ is executed regardless of the statuses of the `Phases`. A
+ `DeferPhase` can be used for cleanup operations at the end of an
+ `Action`.
+
+``` go
+// BlueprintPhase is an individual unit of execution.
+type BlueprintPhase struct {
+ Func string `json:"func"`
+ Name string `json:"name"`
+ ObjectRefs map[string]ObjectReference `json:"objects"`
+ Args map[string]interface{} `json:"args"`
+}
+```
+
+- `Func` is required as the name of a registered Kanister function.
+ See [Functions](functions.md) for the list of
+ functions supported by the controller.
+- `Name` is mostly cosmetic. It is useful in quickly identifying which
+ phases the controller has finished executing.
+- `ObjectRefs` is a map of references to the Kubernetes objects on
+  which the action will be performed.
+- `Args` is a map of named arguments that the controller will pass to
+ the Kanister function. String argument values can be templates that
+ the controller will render using the template parameters. Each
+ argument is rendered individually.
+
+As a reference, below is an example of a BlueprintAction.
+
+``` yaml
+actions:
+ example-action:
+ phases:
+ - func: KubeExec
+ name: examplePhase
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ pod: "{{ index .Deployment.Pods 0 }}"
+ container: kanister-sidecar
+ command:
+ - bash
+ - -c
+ - |
+ echo "Example Action"
+```
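+
+The example above does not use `deferPhase`. A hypothetical sketch of
+an action that adds a cleanup `deferPhase`, which runs regardless of
+the statuses of the phases in `phases`, might look like:
+
+``` yaml
+actions:
+  example-action:
+    phases:
+      - func: KubeExec
+        name: mainPhase
+        args:
+          namespace: "{{ .Deployment.Namespace }}"
+          pod: "{{ index .Deployment.Pods 0 }}"
+          command:
+            - bash
+            - -c
+            - echo "Main work happens here"
+    deferPhase:
+      func: KubeExec
+      name: cleanupPhase
+      args:
+        namespace: "{{ .Deployment.Namespace }}"
+        pod: "{{ index .Deployment.Pods 0 }}"
+        command:
+          - bash
+          - -c
+          - echo "Cleanup runs even if mainPhase fails"
+```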
+
+### ActionSets
+
+Creating an ActionSet instructs the controller to run an action now. The
+user specifies the runtime parameters inside the spec of the ActionSet.
+Based on the parameters, the Controller populates the Status of the
+object, executes the actions, and updates the ActionSet's status.
+
+An ActionSetSpec contains a list of ActionSpecs. An ActionSpec is
+defined as follows:
+
+``` go
+// ActionSpec is the specification for a single Action.
+type ActionSpec struct {
+ Name string `json:"name"`
+ Object ObjectReference `json:"object"`
+ Blueprint string `json:"blueprint,omitempty"`
+ Artifacts map[string]Artifact `json:"artifacts,omitempty"`
+ ConfigMaps map[string]ObjectReference `json:"configMaps"`
+ Secrets map[string]ObjectReference `json:"secrets"`
+ Options map[string]string `json:"options"`
+ Profile *ObjectReference `json:"profile"`
+ PodOverride map[string]interface{} `json:"podOverride,omitempty"`
+}
+```
+
+- `Name` is required and specifies the action in the Blueprint.
+- `Object` is a required reference to the Kubernetes object on which
+ the action will be performed.
+- `Blueprint` is a required name of the Blueprint that contains the
+ action to run.
+- `Artifacts` are input Artifacts passed to the Blueprint. This must
+  contain an Artifact for each name listed in the BlueprintAction's
+  `InputArtifactNames`.
+- `ConfigMaps` and `Secrets`, similar to `Artifacts`, are mappings of
+  names specified in the Blueprint to the Kubernetes objects to be
+  used.
+- `Profile` is a reference to a [Profile](#profiles) Kubernetes
+  CustomResource that will be made available to the Blueprint.
+- `Options` is used to specify additional values to be used in the
+  Blueprint.
+- `PodOverride` is used to specify pod specs that will override
+ default specs of the Pod created while executing functions like
+ KubeTask, PrepareData, etc.
+
+As a reference, below is an example of an ActionSpec.
+
+``` yaml
+spec:
+ actions:
+ - name: example-action
+ blueprint: example-blueprint
+ object:
+ kind: Deployment
+ name: example-deployment
+ namespace: example-namespace
+ profile:
+ apiVersion: v1alpha1
+ kind: profile
+ name: example-profile
+ namespace: example-namespace
+```
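+
+As a further hedged sketch (names hypothetical), the optional
+`configMaps`, `secrets`, and `options` mappings described above could
+be supplied as follows:
+
+``` yaml
+spec:
+  actions:
+    - name: example-action
+      blueprint: example-blueprint
+      object:
+        kind: Deployment
+        name: example-deployment
+        namespace: example-namespace
+      configMaps:
+        exampleConfig:
+          name: example-configmap
+          namespace: example-namespace
+      secrets:
+        exampleSecret:
+          name: example-secret
+          namespace: example-namespace
+      options:
+        exampleOption: example-value
+```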
+
+In addition to the Spec, an ActionSet also contains an ActionSetStatus
+which mirrors the Spec, but contains the phases of execution, their
+state, and the overall execution progress.
+
+``` go
+// ActionStatus is updated as we execute phases.
+type ActionStatus struct {
+ Name string `json:"name"`
+ Object ObjectReference `json:"object"`
+ Blueprint string `json:"blueprint"`
+ Phases []Phase `json:"phases"`
+ Artifacts map[string]Artifact `json:"artifacts"`
+}
+```
+
+Unlike in the ActionSpec, the Artifacts in the ActionStatus are the
+rendered output artifacts from the Blueprint. These are rendered and
+populated once the action is complete.
+
+Each phase in the ActionStatus phases list contains the phase name of
+the Blueprint phase along with its state of execution and output.
+
+``` go
+// Phase is a subcomponent of an action.
+type Phase struct {
+ Name string `json:"name"`
+ State State `json:"state"`
+ Output map[string]interface{} `json:"output"`
+}
+```
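+
+As an illustrative sketch (values hypothetical), the status of a
+completed ActionSet might look like:
+
+``` yaml
+status:
+  state: complete
+  actions:
+    - name: example-action
+      blueprint: example-blueprint
+      object:
+        kind: Deployment
+        name: example-deployment
+        namespace: example-namespace
+      phases:
+        - name: examplePhase
+          state: complete
+          output:
+            exampleKey: example-value
+```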
+
+Deleting an ActionSet causes the controller to stop execution of the
+actions it specifies.
+
+``` bash
+$ kubectl --namespace kanister delete actionset s3backup-j4z6f
+ actionset.cr.kanister.io "s3backup-j4z6f" deleted
+```
+
+::: tip NOTE
+
+Since ActionSets are `Custom Resources`, Kubernetes allows users to
+delete them like any other API objects. Currently, *deleting* an
+ActionSet to stop execution is an **alpha** feature.
+:::
+
+### Profiles
+
+Profile CRs capture information about a location for data operation
+artifacts and corresponding credentials that will be made available to a
+Blueprint.
+
+The definition of a `Profile` is:
+
+``` go
+// Profile
+type Profile struct {
+ Location Location `json:"location"`
+ Credential Credential `json:"credential"`
+ SkipSSLVerify bool `json:"skipSSLVerify"`
+}
+```
+
+- `SkipSSLVerify` is a boolean that specifies whether SSL verification
+  should be skipped when operating with the `Location`. If omitted
+  from a CR definition, it defaults to `false`.
+
+- `Location` is required and used to specify the location that the
+ Blueprint can use. Currently, only s3 compliant locations are
+ supported. If any of the sub-components are omitted, they will be
+  treated as `""`.
+
+ The definition of `Location` is as follows:
+
+``` go
+// LocationType
+type LocationType string
+
+const (
+ LocationTypeGCS LocationType = "gcs"
+ LocationTypeS3Compliant LocationType = "s3Compliant"
+ LocationTypeAzure LocationType = "azure"
+)
+
+// Location
+type Location struct {
+ Type LocationType `json:"type"`
+ Bucket string `json:"bucket"`
+ Endpoint string `json:"endpoint"`
+ Prefix string `json:"prefix"`
+ Region string `json:"region"`
+}
+```
+
+- `Credential` is required and used to specify the credentials
+  associated with the `Location`. Currently, only key pair credentials
+  for s3, gcs and azure locations are supported.
+
+ The definition of `Credential` is as follows:
+
+``` go
+// CredentialType
+type CredentialType string
+
+const (
+ CredentialTypeKeyPair CredentialType = "keyPair"
+)
+
+// Credential
+type Credential struct {
+ Type CredentialType `json:"type"`
+ KeyPair *KeyPair `json:"keyPair"`
+}
+
+// KeyPair
+type KeyPair struct {
+ IDField string `json:"idField"`
+ SecretField string `json:"secretField"`
+ Secret ObjectReference `json:"secret"`
+}
+```
+
+- `IDField` and `SecretField` are required and specify the
+ corresponding keys in the secret under which the `KeyPair`
+ credentials are stored.
+- `Secret` is a required reference to a Kubernetes Secret object
+  storing the `KeyPair` credentials.
+
+As a reference, below is an example of a Profile and the corresponding
+secret.
+
+``` yaml
+apiVersion: cr.kanister.io/v1alpha1
+kind: Profile
+metadata:
+ name: example-profile
+ namespace: example-namespace
+location:
+ type: s3Compliant
+ bucket: example-bucket
+ endpoint: :
+ prefix: ""
+ region: ""
+credential:
+ type: keyPair
+ keyPair:
+ idField: example_key_id
+ secretField: example_secret_access_key
+ secret:
+ apiVersion: v1
+ kind: Secret
+ name: example-secret
+ namespace: example-namespace
+skipSSLVerify: true
+---
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+ name: example-secret
+ namespace: example-namespace
+data:
+ example_key_id:
+ example_secret_access_key:
+```
+
+## Controller
+
+The Kanister controller is a Kubernetes Deployment and is installed
+easily using `kubectl`. See [Installation](install.md) for
+more information on deploying the controller.
+
+### Execution Walkthrough
+
+The controller watches for new/updated ActionSets in the same namespace
+in which it is deployed. When it sees an ActionSet with a nil status
+field, it immediately initializes the ActionSet\'s status to the Pending
+State. The status is also prepopulated with the pending phases.
+
+Execution begins by resolving all the [template
+parameters](templates.md). If any required object references or artifacts are missing
+from the ActionSet, the ActionSet status is marked as failed. Otherwise,
+the template params are used to render the output Artifacts, and then
+the args in the Blueprint.
+
+For each action, all phases are executed in-order. The rendered args
+are passed to the [Kanister Function](functions.md) that corresponds to
+each phase. When a phase completes, the status of the phase is
+updated. If any single phase fails, the entire ActionSet is marked as
+failed. Upon failure, the controller ceases execution of the ActionSet.
+
+Within an ActionSet, individual Actions are run in parallel.
+
+Currently the user is responsible for cleaning up ActionSets once they
+complete.
+
+During execution, the Kanister controller emits events to the
+respective ActionSets. In the above example, the execution transitions
+of ActionSet
+`s3backup-j4z6f` can be seen by using the following command:
+
+``` bash
+$ kubectl --namespace kanister describe actionset s3backup-j4z6f
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Started Action 23s Kanister Controller Executing action backup
+ Normal Started Phase 23s Kanister Controller Executing phase backupToS3
+ Normal Update Complete 19s Kanister Controller Updated ActionSet 's3backup-j4z6f' Status->complete
+ Normal Ended Phase 19s Kanister Controller Completed phase backupToS3
+```
diff --git a/docs_new/functions.md b/docs_new/functions.md
new file mode 100644
index 0000000000..c86145464e
--- /dev/null
+++ b/docs_new/functions.md
@@ -0,0 +1,1544 @@
+# Functions
+
+Kanister Functions are written in Go and are compiled when building the
+controller. They are referenced by Blueprint phases. A Kanister
+Function implements the following Go interface:
+
+``` go
+// Func allows custom actions to be executed.
+type Func interface {
+ Name() string
+    Exec(ctx context.Context, args ...string) (map[string]interface{}, error)
+ RequiredArgs() []string
+ Arguments() []string
+}
+```
+
+Kanister Functions are registered by the return value of `Name()`, which
+must be static.
+
+Each phase in a Blueprint executes a Kanister Function. The `Func` field
+in a `BlueprintPhase` is used to lookup a Kanister Function. After
+`BlueprintPhase.Args` are rendered, they are passed into the Kanister
+Function\'s `Exec()` method.
+
+The `RequiredArgs` method returns the list of argument names that are
+required, while the `Arguments` method returns the list of all argument
+names supported by the function.
+
+## Existing Functions
+
+The Kanister controller ships with the following Kanister Functions
+out-of-the-box that provide integration with Kubernetes:
+
+### KubeExec
+
+KubeExec is similar to running
+
+``` bash
+kubectl exec -it --namespace <namespace> <pod> -c <container> -- [CMD LIST...]
+```
+
+| Argument | Required | Type | Description |
+| ---------- | :------: | ----------- | ----------- |
+| namespace | Yes | string | namespace in which to execute |
+| pod | Yes | string | name of the pod in which to execute |
+| container | No | string | (required if pod contains more than 1 container) name of the container in which to execute |
+| command | Yes | []string | command list to execute |
+
+Example:
+
+``` yaml
+- func: KubeExec
+ name: examplePhase
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ pod: "{{ index .Deployment.Pods 0 }}"
+ container: kanister-sidecar
+ command:
+ - sh
+ - -c
+ - |
+ echo "Example"
+```
+
+### KubeExecAll
+
+KubeExecAll is similar to running KubeExec on all the specified
+containers of the given pods in parallel. In the example below, the
+command is executed in both containers of each of the given pods.
+
+ | Argument | Required | Type | Description |
+ | ---------- | :------: | ----------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | pods | Yes | string | space separated list of names of pods in which to execute |
+ | containers | Yes | string | space separated list of names of the containers in which to execute |
+ | command | Yes | []string | command list to execute |
+
+Example:
+
+``` yaml
+- func: KubeExecAll
+ name: examplePhase
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ pods: "{{ index .Deployment.Pods 0 }} {{ index .Deployment.Pods 1 }}"
+ containers: "container1 container2"
+ command:
+ - sh
+ - -c
+ - |
+ echo "Example"
+```
+
+### KubeTask
+
+KubeTask spins up a new container and executes a command via a Pod. This
+allows you to run a new Pod from a Blueprint.
+
+ | Argument | Required | Type | Description |
+ | ----------- | :------: | ----------------------- | ----------- |
+ | namespace   | No       | string                  | namespace in which to execute (the pod will be created in the controller's namespace if not specified) |
+ | image | Yes | string | image to be used for executing the task |
+ | command | Yes | []string | command list to execute |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+
+Example:
+
+``` yaml
+- func: KubeTask
+ name: examplePhase
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ image: busybox
+ podOverride:
+ containers:
+ - name: container
+ imagePullPolicy: IfNotPresent
+ command:
+ - sh
+ - -c
+ - |
+ echo "Example"
+```
+
+### ScaleWorkload
+
+ScaleWorkload is used to scale up or scale down a Kubernetes workload.
+It also sets the original replica count of the workload as output
+artifact with the key `originalReplicaCount`. The function only returns
+after the desired replica state is achieved:
+
+- When reducing the replica count, wait until all terminating pods
+ complete.
+- When increasing the replica count, wait until all pods are ready.
+
+Currently the function supports Deployments, StatefulSets and
+DeploymentConfigs.
+
+It is similar to running
+
+``` bash
+kubectl scale deployment <name> --replicas=<replicas> --namespace <namespace>
+```
+
+This can be useful if the workload needs to be shutdown before
+processing certain data operations. For example, it may be useful to use
+`ScaleWorkload` to stop a database process before restoring files. See
+[Using ScaleWorkload function with output artifact](tasks/scaleworkload.md)
+for an example using the new `ScaleWorkload` function.
+
+ | Argument | Required | Type | Description |
+ | ------------ | :------: | ------- | ----------- |
+ | namespace | No | string | namespace in which to execute |
+ | name | No | string | name of the workload to scale |
+ | kind         | No       | string  | `deployment` or `statefulset` |
+ | replicas | Yes | int | The desired number of replicas |
+ | waitForReady | No | bool | Whether to wait for the workload to be ready before executing next steps. Default Value is `true` |
+
+Example of scaling down:
+
+``` yaml
+- func: ScaleWorkload
+ name: examplePhase
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ name: "{{ .Deployment.Name }}"
+ kind: deployment
+ replicas: 0
+```
+
+Example of scaling up:
+
+``` yaml
+- func: ScaleWorkload
+ name: examplePhase
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ name: "{{ .Deployment.Name }}"
+ kind: deployment
+ replicas: 1
+ waitForReady: false
+```
+
+### PrepareData
+
+This function allows running a new Pod that will mount one or more PVCs
+and execute a command or script that manipulates the data on the PVCs.
+
+The function can be useful when it is necessary to perform operations on
+the data volumes that are used by one or more application containers.
+The typical sequence is to stop the application using ScaleWorkload,
+perform the data manipulation using PrepareData, and then restart the
+application using ScaleWorkload.
+
+::: tip NOTE
+
+It is extremely important that, if PrepareData modifies the underlying
+data, the PVCs must not be currently in use by an active application
+container (ensure this by using ScaleWorkload with replicas=0 first). For
+advanced use cases, it is possible to have concurrent access but the PV
+needs to have RWX mode enabled and the volume needs to use a clustered
+file system that supports concurrent access.
+:::
+
+ | Argument | Required | Type | Description |
+ | -------------- | :------: | ----------------------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | image          | Yes      | string                  | image to be used to run the command |
+ | volumes | No | map[string]string | Mapping of `pvcName` to `mountPath` under which the volume will be available |
+ | command | Yes | []string | command list to execute |
+ | serviceaccount | No | string | service account info |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+
+::: tip NOTE
+
+The `volumes` argument does not support `subPath` mounts so the data
+manipulation logic needs to be aware of any `subPath` mounts that may
+have been used when mounting a PVC in the primary application container.
+If the `volumes` argument is not specified, all volumes belonging to
+the protected object will be mounted at the predefined path
+`/mnt/prepare_data/`.
+:::
+
+Example:
+
+``` yaml
+- func: ScaleWorkload
+ name: ShutdownApplication
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ name: "{{ .Deployment.Name }}"
+ kind: deployment
+ replicas: 0
+- func: PrepareData
+ name: ManipulateData
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ image: busybox
+ volumes:
+ application-pvc-1: "/data"
+ application-pvc-2: "/restore-data"
+ command:
+ - sh
+ - -c
+ - |
+ cp /restore-data/file_to_replace.data /data/file.data
+```
+
+### BackupData
+
+This function backs up data from a container into any object store
+supported by Kanister.
+
+::: tip WARNING
+
+The *BackupData* function will be deprecated soon. We recommend using
+[CopyVolumeData](#copyvolumedata) instead. However, [RestoreData](#restoredata) and
+[DeleteData](#deletedata) will continue to be available, ensuring you retain control
+over your existing backups.
+:::
+
+::: tip NOTE
+
+It is important that the application includes a `kanister-tools` sidecar
+container. This sidecar is necessary to run the tools that capture a
+path on a volume and store it in the object store.
+:::
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | pod | Yes | string | pod in which to execute |
+ | container | Yes | string | container in which to execute |
+ | includePath | Yes | string | path of the data to be backed up |
+ | backupArtifactPrefix | Yes | string | path to store the backup on the object store |
+ | encryptionKey | No | string | encryption key to be used for backups |
+ | insecureTLS | No | bool | enables insecure connection for data mover |
+
+Outputs:
+
+ | Output | Type | Description |
+ | --------- | ------ | ----------- |
+ | backupTag | string | unique tag added to the backup |
+ | backupID | string | unique snapshot id generated during backup |
+
+Example:
+
+``` yaml
+actions:
+ backup:
+ outputArtifacts:
+ backupInfo:
+ keyValue:
+ backupIdentifier: "{{ .Phases.BackupToObjectStore.Output.backupTag }}"
+ phases:
+ - func: BackupData
+ name: BackupToObjectStore
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ pod: "{{ index .Deployment.Pods 0 }}"
+ container: kanister-tools
+ includePath: /mnt/data
+ backupArtifactPrefix: s3-bucket/path/artifactPrefix
+```
+
+### BackupDataAll
+
+This function concurrently backs up data from one or more pods into any
+object store supported by Kanister.
+
+::: tip WARNING
+
+The *BackupDataAll* function will be deprecated soon. However, [RestoreDataAll](#restoredataall) and
+[DeleteDataAll](#deletedataall) will continue to be available, ensuring you retain control
+over your existing backups.
+:::
+
+::: tip NOTE
+
+It is important that the application includes a `kanister-tools` sidecar
+container. This sidecar is necessary to run the tools that capture a
+path on a volume and store it in the object store.
+:::
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | pods | No | string | pods in which to execute (by default runs on all the pods) |
+ | container | Yes | string | container in which to execute |
+ | includePath | Yes | string | path of the data to be backed up |
+ | backupArtifactPrefix | Yes | string | path to store the backup on the object store appended by pod name later |
+ | encryptionKey | No | string | encryption key to be used for backups |
+ | insecureTLS | No | bool | enables insecure connection for data mover |
+
+Outputs:
+
+ | Output | Type | Description |
+ | ------------- | ------ | ----------- |
+ | BackupAllInfo | string | info about backup tag and identifier required for restore |
+
+Example:
+
+``` yaml
+actions:
+ backup:
+ outputArtifacts:
+ params:
+ keyValue:
+ backupInfo: "{{ .Phases.backupToObjectStore.Output.BackupAllInfo }}"
+ phases:
+ - func: BackupDataAll
+ name: BackupToObjectStore
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ container: kanister-tools
+ includePath: /mnt/data
+ backupArtifactPrefix: s3-bucket/path/artifactPrefix
+```
+
+### RestoreData
+
+This function restores data backed up by the
+[BackupData](#backupdata) function. It creates a new
+Pod that mounts the PVCs referenced by the specified Pod and restores
+data to the specified path.
+
+::: tip NOTE
+
+It is extremely important that the PVCs are not currently in use by an
+active application container, as they are required to be mounted to the
+new Pod (ensure this by using ScaleWorkload with replicas=0 first). For
+advanced use cases, it is possible to have concurrent access but the PV
+needs to have RWX mode enabled and the volume needs to use a clustered
+file system that supports concurrent access.
+:::
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ----------------------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | image | Yes | string | image to be used for running restore |
+ | backupArtifactPrefix | Yes | string | path to the backup on the object store |
+ | backupIdentifier | No | string | (required if backupTag not provided) unique snapshot id generated during backup |
+ | backupTag | No | string | (required if backupIdentifier not provided) unique tag added during the backup |
+ | restorePath | No | string | path where data is restored |
+ | pod | No | string | pod to which the volumes are attached |
+ | volumes              | No       | map[string]string       | Mapping of `pvcName` to `mountPath` under which the volume will be available |
+ | encryptionKey | No | string | encryption key to be used during backups |
+ | insecureTLS | No | bool | enables insecure connection for data mover |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+
+::: tip NOTE
+
+The `image` argument requires the use of
+`ghcr.io/kanisterio/kanister-tools` image since it includes the required
+tools to restore data from the object store. Between the `pod` and
+`volumes` arguments, exactly one argument must be specified.
+:::
+
+Example:
+
+Consider a scenario where you wish to restore the data backed up by the
+[BackupData](#backupdata) function. We will first scale
+down the application, restore the data and then scale it back up. For
+this phase, we will use the `backupInfo` Artifact provided by backup
+function.
+
+``` yaml
+- func: ScaleWorkload
+  name: ShutdownApplication
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    name: "{{ .Deployment.Name }}"
+    kind: Deployment
+    replicas: 0
+- func: RestoreData
+  name: RestoreFromObjectStore
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    pod: "{{ index .Deployment.Pods 0 }}"
+    image: ghcr.io/kanisterio/kanister-tools:
+    backupArtifactPrefix: s3-bucket/path/artifactPrefix
+    backupTag: "{{ .ArtifactsIn.backupInfo.KeyValue.backupIdentifier }}"
+- func: ScaleWorkload
+  name: StartupApplication
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    name: "{{ .Deployment.Name }}"
+    kind: Deployment
+    replicas: 1
+```
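+
+Since exactly one of `pod` and `volumes` must be specified, a
+hypothetical variant of the restore phase that names the PVCs directly
+via `volumes` (with `<version>` as a placeholder tag) might look like:
+
+``` yaml
+- func: RestoreData
+  name: RestoreFromObjectStore
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    image: ghcr.io/kanisterio/kanister-tools:<version>
+    backupArtifactPrefix: s3-bucket/path/artifactPrefix
+    backupTag: "{{ .ArtifactsIn.backupInfo.KeyValue.backupIdentifier }}"
+    volumes:
+      application-pvc-1: "/mnt/data"
+```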
+
+### RestoreDataAll
+
+This function concurrently restores data backed up by the
+[BackupDataAll](#backupdataall) function, on one or more
+pods. It concurrently runs a job Pod for each workload Pod, that mounts
+the respective PVCs and restores data to the specified path.
+
+::: tip NOTE
+
+It is extremely important that the PVCs are not currently in use by an
+active application container, as they are required to be mounted to the
+new Pod (ensure this by using ScaleWorkload with replicas=0 first). For
+advanced use cases, it is possible to have concurrent access but the PV
+needs to have RWX mode enabled and the volume needs to use a clustered
+file system that supports concurrent access.
+:::
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ----------------------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | image | Yes | string | image to be used for running restore |
+ | backupArtifactPrefix | Yes | string | path to the backup on the object store |
+ | restorePath | No | string | path where data is restored |
+ | pods | No | string | pods to which the volumes are attached |
+ | encryptionKey | No | string | encryption key to be used during backups |
+ | backupInfo | Yes | string | snapshot info generated as output in BackupDataAll function |
+ | insecureTLS | No | bool | enables insecure connection for data mover |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+
+::: tip NOTE
+
+The `image` argument requires the use of the
+`ghcr.io/kanisterio/kanister-tools` image since it includes the
+required tools to restore data from the object store. Between the
+`pod` and `volumes` arguments, exactly one argument must be specified.
+:::
+
+Example:
+
+Consider a scenario where you wish to restore the data backed up by the
+[BackupDataAll](#backupdataall) function. We will first
+scale down the application, restore the data and then scale it back up.
+We will not specify `pods` in args, so this function will restore data
+on all pods concurrently. For this phase, we will use the `params`
+Artifact provided by BackupDataAll function.
+
+``` yaml
+- func: ScaleWorkload
+  name: ShutdownApplication
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    name: "{{ .Deployment.Name }}"
+    kind: Deployment
+    replicas: 0
+- func: RestoreDataAll
+  name: RestoreFromObjectStore
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    image: ghcr.io/kanisterio/kanister-tools:
+    backupArtifactPrefix: s3-bucket/path/artifactPrefix
+    backupInfo: "{{ .ArtifactsIn.params.KeyValue.backupInfo }}"
+- func: ScaleWorkload
+  name: StartupApplication
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    name: "{{ .Deployment.Name }}"
+    kind: Deployment
+    replicas: 2
+```
+
+### CopyVolumeData
+
+This function copies data from the specified volume (referenced by a
+Kubernetes PersistentVolumeClaim) into an object store. This data can be
+restored into a volume using the [RestoreData](#restoredata) function.
+
+::: tip NOTE
+
+The PVC must not be in use (attached to a running Pod).
+
+If data needs to be copied from a running workload without stopping it,
+use the [BackupData](#backupdata) function
+:::
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | ------------------ | :------: | ----------------------- | ----------- |
+ | namespace | Yes | string | namespace the source PVC is in |
+ | volume | Yes | string | name of the source PVC |
+ | dataArtifactPrefix | Yes | string | path on the object store to store the data in |
+ | encryptionKey | No | string | encryption key to be used during backups |
+ | insecureTLS | No | bool | enables insecure connection for data mover |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+
+Outputs:
+
+ | Output | Type | Description |
+ | ---------------------- | ------ | ----------- |
+ | backupID | string | unique snapshot id generated when data was copied |
+ | backupRoot | string | parent directory location of the data copied from |
+ | backupArtifactLocation | string | location in objectstore where data was copied |
+ | backupTag | string | unique string to identify this data copy |
+
+Example:
+
+If the ActionSet `Object` is a PersistentVolumeClaim:
+
+``` yaml
+- func: CopyVolumeData
+ args:
+ namespace: "{{ .PVC.Namespace }}"
+ volume: "{{ .PVC.Name }}"
+ dataArtifactPrefix: s3-bucket-name/path
+```
+
+### DeleteData
+
+This function deletes the snapshot data backed up by the
+[BackupData](#backupdata) function.
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ----------------------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | backupArtifactPrefix | Yes | string | path to the backup on the object store |
+ | backupID | No | string | (required if backupTag not provided) unique snapshot id generated during backup |
+ | backupTag | No | string | (required if backupID not provided) unique tag added during the backup |
+ | encryptionKey | No | string | encryption key to be used during backups |
+ | insecureTLS | No | bool | enables insecure connection for data mover |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+
+Example:
+
+Consider a scenario where you wish to delete the data backed up by the
+[BackupData](#backupdata) function. For this phase, we
+will use the `backupInfo` Artifact provided by the backup function.
+
+``` yaml
+- func: DeleteData
+ name: DeleteFromObjectStore
+ args:
+ namespace: "{{ .Namespace.Name }}"
+ backupArtifactPrefix: s3-bucket/path/artifactPrefix
+ backupTag: "{{ .ArtifactsIn.backupInfo.KeyValue.backupIdentifier }}"
+```
+
+### DeleteDataAll
+
+This function concurrently deletes the snapshot data backed up by the
+BackupDataAll function.
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ----------------------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | backupArtifactPrefix | Yes | string | path to the backup on the object store |
+ | backupInfo | Yes | string | snapshot info generated as output in BackupDataAll function |
+ | encryptionKey | No | string | encryption key to be used during backups |
+ | reclaimSpace | No | bool | provides a way to specify if space should be reclaimed |
+ | insecureTLS | No | bool | enables insecure connection for data mover |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+
+Example:
+
+Consider a scenario where you wish to delete all the data backed up by
+the [BackupDataAll](#backupdataall) function. For this
+phase, we will use the `params` Artifact provided by the backup function.
+
+``` yaml
+- func: DeleteDataAll
+ name: DeleteFromObjectStore
+ args:
+ namespace: "{{ .Namespace.Name }}"
+ backupArtifactPrefix: s3-bucket/path/artifactPrefix
+ backupInfo: "{{ .ArtifactsIn.params.KeyValue.backupInfo }}"
+ reclaimSpace: true
+```
+
+### LocationDelete
+
+This function uses a new Pod to delete the specified artifact from an
+object store.
+
+ | Argument | Required | Type | Description |
+ | -------- | :------: | ------ | ----------- |
+ | artifact | Yes | string | artifact to be deleted from the object store |
+
+::: tip NOTE
+
+The Kubernetes job uses the `ghcr.io/kanisterio/kanister-tools` image,
+since it includes all the tools required to delete the artifact from an
+object store.
+:::
+
+Example:
+
+``` yaml
+- func: LocationDelete
+ name: LocationDeleteFromObjectStore
+ args:
+ artifact: s3://bucket/path/artifact
+```
+
+### CreateVolumeSnapshot
+
+This function is used to create snapshots of one or more PVCs associated
+with an application. It takes an individual snapshot of each PVC, which
+can then be restored later. It generates an output that contains the
+snapshot info required for restoring the PVCs.
+
+::: tip NOTE
+
+Currently we only support PVC snapshots on AWS EBS. Support for more
+storage providers is coming soon!
+:::
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | --------- | :------: | ---------- | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | pvcs | No | []string | list of names of PVCs to be backed up |
+ | skipWait | No | bool | initiate but do not wait for the snapshot operation to complete |
+
+When no PVCs are specified in the `pvcs` argument above, all PVCs in use
+by a Deployment or StatefulSet will be backed up.
+
+Outputs:
+
+ | Output | Type | Description |
+ | ------------------- | ------ | ----------- |
+ | volumeSnapshotInfo | string | Snapshot info required while restoring the PVCs |
+
+Example:
+
+Consider a scenario where you wish to backup all PVCs of a deployment.
+The output of this phase is saved to an Artifact named `backupInfo`,
+shown below:
+
+``` yaml
+actions:
+ backup:
+ outputArtifacts:
+ backupInfo:
+ keyValue:
+ manifest: "{{ .Phases.backupVolume.Output.volumeSnapshotInfo }}"
+ phases:
+ - func: CreateVolumeSnapshot
+ name: backupVolume
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+```
+
+### WaitForSnapshotCompletion
+
+This function is used to wait for completion of snapshot operations
+initiated using the [CreateVolumeSnapshot](#createvolumesnapshot)
+function.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | --------- | :------: | ------ | ----------- |
+ | snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function |
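+
+Example:
+
+A minimal sketch of waiting on snapshots that were initiated with
+`skipWait: true` (the phase name `backupVolume` follows the
+CreateVolumeSnapshot example above and is illustrative):
+
+``` yaml
+- func: WaitForSnapshotCompletion
+  name: waitOnSnapshots
+  args:
+    # snapshot info produced by a prior CreateVolumeSnapshot phase
+    snapshots: "{{ .Phases.backupVolume.Output.volumeSnapshotInfo }}"
+```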
+
+### CreateVolumeFromSnapshot
+
+This function is used to restore one or more PVCs of an application from
+the snapshots taken using the
+[CreateVolumeSnapshot](#createvolumesnapshot) function. It deletes old
+PVCs, if present, and creates new PVCs from the snapshots taken earlier.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | --------- | :------: | ------ | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function |
+
+Example:
+
+Consider a scenario where you wish to restore all PVCs of a deployment.
+We will first scale down the application, restore PVCs and then scale
+up. For this phase, we will make use of the backupInfo Artifact provided
+by the [CreateVolumeSnapshot](#createvolumesnapshot) function.
+
+``` yaml
+- func: ScaleWorkload
+ name: shutdownPod
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ name: "{{ .Deployment.Name }}"
+ kind: Deployment
+ replicas: 0
+- func: CreateVolumeFromSnapshot
+ name: restoreVolume
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ snapshots: "{{ .ArtifactsIn.backupInfo.KeyValue.manifest }}"
+- func: ScaleWorkload
+ name: bringupPod
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ name: "{{ .Deployment.Name }}"
+ kind: Deployment
+ replicas: 1
+```
+
+### DeleteVolumeSnapshot
+
+This function is used to delete snapshots of PVCs taken using the
+[CreateVolumeSnapshot](#createvolumesnapshot) function.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | --------- | :------: | ------ | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function |
+
+Example:
+
+``` yaml
+- func: DeleteVolumeSnapshot
+ name: deleteVolumeSnapshot
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ snapshots: "{{ .ArtifactsIn.backupInfo.KeyValue.manifest }}"
+```
+
+### BackupDataStats
+
+This function gets stats for the backed up data from the object store
+location.
+
+::: tip NOTE
+
+It is important that the application includes a `kanister-tools` sidecar
+container. This sidecar is necessary to run the tools that get the
+information from the object store.
+:::
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ------ | ----------- |
+ | namespace | Yes | string | namespace in which to execute |
+ | backupArtifactPrefix | Yes | string | path to the object store location |
+ | backupID | Yes | string | unique snapshot id generated during backup |
+ | mode | No | string | mode in which stats are expected |
+ | encryptionKey | No | string | encryption key to be used for backups |
+
+Outputs:
+
+ | Output | Type | Description |
+ | -------- | ------ | ----------- |
+ | mode | string | mode of the output stats |
+ | fileCount| string | number of files in backup |
+ | size     | string | size of the files in backup |
+
+Example:
+
+``` yaml
+actions:
+ backupStats:
+ outputArtifacts:
+ backupStats:
+ keyValue:
+ mode: "{{ .Phases.BackupDataStatsFromObjectStore.Output.mode }}"
+ fileCount: "{{ .Phases.BackupDataStatsFromObjectStore.Output.fileCount }}"
+ size: "{{ .Phases.BackupDataStatsFromObjectStore.Output.size }}"
+ phases:
+ - func: BackupDataStats
+ name: BackupDataStatsFromObjectStore
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ backupArtifactPrefix: s3-bucket/path/artifactPrefix
+ mode: restore-size
+ backupID: "{{ .ArtifactsIn.snapshot.KeyValue.backupIdentifier }}"
+```
+
+### CreateRDSSnapshot
+
+This function creates an RDS snapshot of a running RDS instance.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | ---------- | :------: | ------ | ----------- |
+ | instanceID | Yes | string | ID of RDS instance you want to create snapshot of |
+ | dbEngine   | No       | string | required in case of an RDS Aurora instance. Supported DB engines: `aurora`, `aurora-mysql` and `aurora-postgresql` |
+
+Outputs:
+
+ | Output | Type | Description |
+ | ---------------- | ---------- | ----------- |
+ | snapshotID | string | ID of the RDS snapshot that has been created |
+ | instanceID | string | ID of the RDS instance |
+ | securityGroupID | []string | AWS Security Group IDs associated with the RDS instance |
+ | allocatedStorage | string | Specifies the allocated storage size in gibibytes (GiB) |
+ | dbSubnetGroup | string | Specifies the DB Subnet group associated with the RDS instance |
+
+Example:
+
+``` yaml
+actions:
+ backup:
+ outputArtifacts:
+ backupInfo:
+ keyValue:
+ snapshotID: "{{ .Phases.createSnapshot.Output.snapshotID }}"
+ instanceID: "{{ .Phases.createSnapshot.Output.instanceID }}"
+ securityGroupID: "{{ .Phases.createSnapshot.Output.securityGroupID }}"
+ allocatedStorage: "{{ .Phases.createSnapshot.Output.allocatedStorage }}"
+ dbSubnetGroup: "{{ .Phases.createSnapshot.Output.dbSubnetGroup }}"
+ configMapNames:
+ - dbconfig
+ phases:
+ - func: CreateRDSSnapshot
+ name: createSnapshot
+ args:
+ instanceID: '{{ index .ConfigMaps.dbconfig.Data "postgres.instanceid" }}'
+```
+
+### ExportRDSSnapshotToLocation
+
+This function spins up a temporary RDS instance from the given snapshot,
+extracts database dump and uploads that dump to the configured object
+storage.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ---------- | ----------- |
+ | instanceID | Yes | string | RDS db instance ID |
+ | namespace | Yes | string | namespace in which to execute the Kanister tools pod for this function |
+ | snapshotID | Yes | string | ID of the RDS snapshot |
+ | dbEngine | Yes | string | one of the RDS db engines. Supported engine(s): `PostgreSQL` |
+ | username | No | string | username of the RDS database instance |
+ | password | No | string | password of the RDS database instance |
+ | backupArtifactPrefix | No | string | path to store the backup on the object store |
+ | databases | No | []string | list of databases to take backup of |
+ | securityGroupID | No | []string | list of `securityGroupID` to be passed to temporary RDS instance |
+ | dbSubnetGroup | No | string | DB Subnet Group to be passed to temporary RDS instance |
+
+::: tip NOTE
+
+- If the `databases` argument is not set, all databases will be backed
+  up.
+- If the `securityGroupID` argument is not set,
+  `ExportRDSSnapshotToLocation` will look up the Security Group IDs
+  associated with the `instanceID` instance and pass those.
+- If the `backupArtifactPrefix` argument is not set, `instanceID` will
+  be used as the `backupArtifactPrefix`.
+- If the `dbSubnetGroup` argument is not set, the `default` DB Subnet
+  Group will be used.
+:::
+
+Outputs:
+
+ | Output | Type | Description |
+ | --------------- | ---------- | ----------- |
+ | snapshotID | string | ID of the RDS snapshot that has been created |
+ | instanceID | string | ID of the RDS instance |
+ | backupID | string | unique backup id generated during storing data into object storage |
+ | securityGroupID | []string | AWS Security Group IDs associated with the RDS instance |
+
+Example:
+
+``` yaml
+actions:
+ backup:
+ outputArtifacts:
+ backupInfo:
+ keyValue:
+ snapshotID: "{{ .Phases.createSnapshot.Output.snapshotID }}"
+ instanceID: "{{ .Phases.createSnapshot.Output.instanceID }}"
+ securityGroupID: "{{ .Phases.createSnapshot.Output.securityGroupID }}"
+ backupID: "{{ .Phases.exportSnapshot.Output.backupID }}"
+ dbSubnetGroup: "{{ .Phases.createSnapshot.Output.dbSubnetGroup }}"
+ configMapNames:
+ - dbconfig
+ phases:
+
+ - func: CreateRDSSnapshot
+ name: createSnapshot
+ args:
+ instanceID: '{{ index .ConfigMaps.dbconfig.Data "postgres.instanceid" }}'
+
+ - func: ExportRDSSnapshotToLocation
+ name: exportSnapshot
+ objects:
+ dbsecret:
+ kind: Secret
+ name: '{{ index .ConfigMaps.dbconfig.Data "postgres.secret" }}'
+ namespace: "{{ .Namespace.Name }}"
+ args:
+ namespace: "{{ .Namespace.Name }}"
+ instanceID: "{{ .Phases.createSnapshot.Output.instanceID }}"
+ securityGroupID: "{{ .Phases.createSnapshot.Output.securityGroupID }}"
+ username: '{{ index .Phases.exportSnapshot.Secrets.dbsecret.Data "username" | toString }}'
+ password: '{{ index .Phases.exportSnapshot.Secrets.dbsecret.Data "password" | toString }}'
+ dbEngine: "PostgreSQL"
+ databases: '{{ index .ConfigMaps.dbconfig.Data "postgres.databases" }}'
+ snapshotID: "{{ .Phases.createSnapshot.Output.snapshotID }}"
+ backupArtifactPrefix: test-postgresql-instance/postgres
+ dbSubnetGroup: "{{ .Phases.createSnapshot.Output.dbSubnetGroup }}"
+```
+
+### RestoreRDSSnapshot
+
+This function restores the RDS DB instance either from an RDS snapshot
+or from a data dump stored in object storage (if `snapshotID` is not
+set).
+
+::: tip NOTE
+
+- If `snapshotID` is set, the function restores the RDS instance from
+  the RDS snapshot. Otherwise, `backupID` must be set to restore the RDS
+  instance from the data dump.
+- When restoring from an RDS snapshot, if the target RDS instance
+  doesn't exist, it will be created. When restoring from a data dump in
+  object storage, the RDS instance must already exist; otherwise the
+  function results in an error.
+:::
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------------------- | :------: | ---------- | ----------- |
+ | instanceID | Yes | string | RDS db instance ID |
+ | snapshotID | No | string | ID of the RDS snapshot |
+ | username | No | string | username of the RDS database instance |
+ | password | No | string | password of the RDS database instance |
+ | backupArtifactPrefix | No | string | path to store the backup on the object store |
+ | backupID | No | string | unique backup id generated during storing data into object storage |
+ | securityGroupID | No | []string | list of `securityGroupID` to be passed to restored RDS instance |
+ | namespace | No | string | namespace in which to execute. Required if `snapshotID` is nil |
+ | dbEngine | No | string | one of the RDS db engines. Supported engines: `PostgreSQL`, `aurora`, `aurora-mysql` and `aurora-postgresql`. Required if `snapshotID` is nil or Aurora is run in RDS instance |
+ | dbSubnetGroup | No | string | DB Subnet Group to be passed to restored RDS instance |
+
+::: tip NOTE
+
+- If `snapshotID` is not set, the restore is performed from the data
+  dump; in that case the `backupID` argument is required.
+- If the `securityGroupID` argument is not set, `RestoreRDSSnapshot`
+  will look up the Security Group IDs associated with the `instanceID`
+  instance and pass those.
+- If the `dbSubnetGroup` argument is not set, the `default` DB Subnet
+  Group will be used.
+:::
+
+Outputs:
+
+ | Output | Type | Description |
+ | ------- | ------ | ----------- |
+ | endpoint| string | endpoint of the RDS instance |
+
+Example:
+
+``` yaml
+restore:
+ inputArtifactNames:
+ - backupInfo
+ kind: Namespace
+ phases:
+ - func: RestoreRDSSnapshot
+ name: restoreSnapshots
+ objects:
+ dbsecret:
+ kind: Secret
+ name: '{{ index .ConfigMaps.dbconfig.Data "postgres.secret" }}'
+ namespace: "{{ .Namespace.Name }}"
+ args:
+ namespace: "{{ .Namespace.Name }}"
+ backupArtifactPrefix: test-postgresql-instance/postgres
+ instanceID: "{{ .ArtifactsIn.backupInfo.KeyValue.instanceID }}"
+ backupID: "{{ .ArtifactsIn.backupInfo.KeyValue.backupID }}"
+ securityGroupID: "{{ .ArtifactsIn.backupInfo.KeyValue.securityGroupID }}"
+ username: '{{ index .Phases.restoreSnapshots.Secrets.dbsecret.Data "username" | toString }}'
+ password: '{{ index .Phases.restoreSnapshots.Secrets.dbsecret.Data "password" | toString }}'
+ dbEngine: "PostgreSQL"
+ dbSubnetGroup: "{{ .ArtifactsIn.backupInfo.KeyValue.dbSubnetGroup }}"
+```
+
+### DeleteRDSSnapshot
+
+This function deletes the RDS snapshot specified by `snapshotID`.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | ---------- | :------: | ------ | ----------- |
+ | snapshotID | No | string | ID of the RDS snapshot |
+
+Example:
+
+``` yaml
+actions:
+ delete:
+ kind: Namespace
+ inputArtifactNames:
+ - backupInfo
+ phases:
+ - func: DeleteRDSSnapshot
+ name: deleteSnapshot
+ args:
+ snapshotID: "{{ .ArtifactsIn.backupInfo.KeyValue.snapshotID }}"
+```
+
+### KubeOps
+
+This function is used to create or delete Kubernetes resources.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | --------------- | :------: | ------------------------ | ----------- |
+ | operation | Yes | string | `create` or `delete` Kubernetes resource |
+ | namespace | No | string | namespace in which the operation is executed |
+ | spec | No | string | resource spec that needs to be created |
+ | objectReference | No | map[string]interface{} | object reference for delete operation |
+
+Example:
+
+``` yaml
+- func: KubeOps
+ name: createDeploy
+ args:
+ operation: create
+ namespace: "{{ .Deployment.Namespace }}"
+ spec: |-
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: "{{ .Deployment.Name }}"
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: example
+ template:
+ metadata:
+ labels:
+ app: example
+ spec:
+ containers:
+ - image: busybox
+ imagePullPolicy: IfNotPresent
+ name: container
+ ports:
+ - containerPort: 80
+ name: http
+ protocol: TCP
+- func: KubeOps
+ name: deleteDeploy
+ args:
+ operation: delete
+ objectReference:
+ apiVersion: "{{ .Phases.createDeploy.Output.apiVersion }}"
+ group: "{{ .Phases.createDeploy.Output.group }}"
+ resource: "{{ .Phases.createDeploy.Output.resource }}"
+ name: "{{ .Phases.createDeploy.Output.name }}"
+ namespace: "{{ .Phases.createDeploy.Output.namespace }}"
+```
+
+### WaitV2
+
+This function is used to wait on a Kubernetes resource until a desired
+state is reached. The wait condition is defined in a Go template syntax.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | ---------- | :------: | ------------------------ | ----------- |
+ | timeout | Yes | string | wait timeout |
+ | conditions | Yes | map[string]interface{} | keys should be `allOf` and/or `anyOf` with value as `[]Condition` |
+
+`Condition` struct:
+
+``` yaml
+condition: "Go template condition that returns true or false"
+objectReference:
+ apiVersion: "Kubernetes resource API version"
+ resource: "Type of resource to wait for"
+ name: "Name of the resource"
+```
+
+The Go template conditions can be validated using kubectl commands with
+the `-o go-template` flag. For example, to check if a Deployment is
+ready, the following Go template can be used with kubectl:
+
+``` bash
+kubectl get deploy -n $NAMESPACE $DEPLOY_NAME \
+ -o go-template='{{ $available := false }}{{ range $condition := $.status.conditions }}{{ if and (eq .type "Available") (eq .status "True") }}{{ $available = true }}{{ end }}{{ end }}{{ $available }}'
+```
+
+The same Go template can be used as a condition in the WaitV2 function.
+
+Example:
+
+``` yaml
+- func: WaitV2
+ name: waitForDeploymentReady
+ args:
+ timeout: 5m
+ conditions:
+ anyOf:
+ - condition: '{{ $available := false }}{{ range $condition := $.status.conditions }}{{ if and (eq .type "Available") (eq .status "True") }}{{ $available = true }}{{ end }}{{ end }}{{ $available }}'
+ objectReference:
+ apiVersion: "v1"
+ group: "apps"
+ name: "{{ .Object.metadata.name }}"
+ namespace: "{{ .Object.metadata.namespace }}"
+ resource: "deployments"
+```
+
+### Wait (deprecated)
+
+This function is used to wait on a Kubernetes resource until a desired
+state is reached.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | ---------- | :------: | ------------------------ | ----------- |
+ | timeout | Yes | string | wait timeout |
+ | conditions | Yes | map[string]interface{} | keys should be `allOf` and/or `anyOf` with value as `[]Condition` |
+
+`Condition` struct:
+
+``` yaml
+condition: "Go template condition that returns true or false"
+objectReference:
+ apiVersion: "Kubernetes resource API version"
+ resource: "Type of resource to wait for"
+ name: "Name of the resource"
+```
+
+::: tip NOTE
+
+The object's key-values can be referenced in the Go template condition
+using JSON-path syntax with a `$` prefix.
+:::
+
+Example:
+
+``` yaml
+- func: Wait
+ name: waitNsReady
+ args:
+ timeout: 60s
+ conditions:
+ allOf:
+ - condition: '{{ if (eq "{ $.status.phase }" "Invalid")}}true{{ else }}false{{ end }}'
+ objectReference:
+ apiVersion: v1
+ resource: namespaces
+ name: "{{ .Namespace.Name }}"
+ - condition: '{{ if (eq "{ $.status.phase }" "Active")}}true{{ else }}false{{ end }}'
+ objectReference:
+ apiVersion: v1
+ resource: namespaces
+ name: "{{ .Namespace.Name }}"
+```
+
+### CreateCSISnapshot
+
+This function is used to create a CSI VolumeSnapshot for a
+PersistentVolumeClaim. By default, it waits for the VolumeSnapshot to be
+`ReadyToUse`.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------------- | :------: | -------------------- | ----------- |
+ | name          | No       | string               | name of the VolumeSnapshot. Default value is `<pvc-name>-snapshot-<random-suffix>` |
+ | pvc | Yes | string | name of the PersistentVolumeClaim to be captured |
+ | namespace | Yes | string | namespace of the PersistentVolumeClaim and resultant VolumeSnapshot |
+ | snapshotClass | Yes | string | name of the VolumeSnapshotClass |
+ | labels | No | map[string]string | labels for the VolumeSnapshot |
+
+Outputs:
+
+ | Output | Type | Description |
+ | --------------- | ------ | ----------- |
+ | name | string | name of the CSI VolumeSnapshot |
+ | pvc | string | name of the captured PVC |
+ | namespace | string | namespace of the captured PVC and VolumeSnapshot |
+ | restoreSize     | string | minimum volume size required to restore the PVC |
+ | snapshotContent | string | name of the VolumeSnapshotContent |
+
+Example:
+
+``` yaml
+actions:
+ backup:
+ outputArtifacts:
+ snapshotInfo:
+ keyValue:
+ name: "{{ .Phases.createCSISnapshot.Output.name }}"
+ pvc: "{{ .Phases.createCSISnapshot.Output.pvc }}"
+ namespace: "{{ .Phases.createCSISnapshot.Output.namespace }}"
+ restoreSize: "{{ .Phases.createCSISnapshot.Output.restoreSize }}"
+ snapshotContent: "{{ .Phases.createCSISnapshot.Output.snapshotContent }}"
+ phases:
+ - func: CreateCSISnapshot
+ name: createCSISnapshot
+ args:
+ pvc: "{{ .PVC.Name }}"
+ namespace: "{{ .PVC.Namespace }}"
+ snapshotClass: do-block-storage
+```
+
+### CreateCSISnapshotStatic
+
+This function creates a pair of CSI `VolumeSnapshot` and
+`VolumeSnapshotContent` resources, assuming that the underlying *real*
+storage volume snapshot already exists. The deletion behavior is defined
+by the `deletionPolicy` property (`Retain`, `Delete`) of the snapshot
+class.
+
+For more information on pre-provisioned volume snapshots and snapshot
+deletion policy, see the Kubernetes
+[documentation](https://kubernetes.io/docs/concepts/storage/volume-snapshots/).
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------------- | :------: | ------ | ----------- |
+ | name | Yes | string | name of the new CSI `VolumeSnapshot` |
+ | namespace | Yes | string | namespace of the new CSI `VolumeSnapshot` |
+ | driver | Yes | string | name of the CSI driver for the new CSI `VolumeSnapshotContent` |
+ | handle | Yes | string | unique identifier of the volume snapshot created on the storage backend used as the source of the new `VolumeSnapshotContent` |
+ | snapshotClass | Yes | string | name of the `VolumeSnapshotClass` to use |
+
+Outputs:
+
+ | Output | Type | Description |
+ | --------------- | ------ | ----------- |
+ | name | string | name of the new CSI `VolumeSnapshot` |
+ | namespace | string | namespace of the new CSI `VolumeSnapshot` |
+ | restoreSize     | string | minimum volume size required to restore the volume |
+ | snapshotContent | string | name of the new CSI `VolumeSnapshotContent` |
+
+Example:
+
+``` yaml
+actions:
+ createStaticSnapshot:
+ phases:
+ - func: CreateCSISnapshotStatic
+ name: createCSISnapshotStatic
+ args:
+ name: volume-snapshot
+ namespace: default
+ snapshotClass: csi-hostpath-snapclass
+ driver: hostpath.csi.k8s.io
+ handle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002
+```
+
+### RestoreCSISnapshot
+
+This function restores a new PersistentVolumeClaim using a CSI
+VolumeSnapshot.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | ------------- | :------: | -------------------- | ----------- |
+ | name | Yes | string | name of the VolumeSnapshot |
+ | pvc | Yes | string | name of the new PVC |
+ | namespace | Yes | string | namespace of the VolumeSnapshot and resultant PersistentVolumeClaim |
+ | storageClass | Yes | string | name of the StorageClass |
+ | restoreSize   | Yes      | string               | minimum volume size required to restore the PVC; must be greater than zero |
+ | accessModes | No | []string | access modes for the underlying PV (Default is `["ReadWriteOnce"]`) |
+ | volumeMode | No | string | mode of volume (Default is `"Filesystem"`) |
+ | labels | No | map[string]string | optional labels for the PersistentVolumeClaim |
+
+::: tip NOTE
+
+Output artifact `snapshotInfo` from `CreateCSISnapshot` function can be
+used as an input artifact in this function.
+:::
+
+Example:
+
+``` yaml
+actions:
+ restore:
+ inputArtifactNames:
+ - snapshotInfo
+ phases:
+ - func: RestoreCSISnapshot
+ name: restoreCSISnapshot
+ args:
+ name: "{{ .ArtifactsIn.snapshotInfo.KeyValue.name }}"
+ pvc: "{{ .ArtifactsIn.snapshotInfo.KeyValue.pvc }}-restored"
+ namespace: "{{ .ArtifactsIn.snapshotInfo.KeyValue.namespace }}"
+ storageClass: do-block-storage
+ restoreSize: "{{ .ArtifactsIn.snapshotInfo.KeyValue.restoreSize }}"
+ accessModes: ["ReadWriteOnce"]
+ volumeMode: "Filesystem"
+```
+
+### DeleteCSISnapshot
+
+This function deletes a VolumeSnapshot from the given namespace.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | --------- | :------: | ------ | ----------- |
+ | name | Yes | string | name of the VolumeSnapshot |
+ | namespace | Yes | string | namespace of the VolumeSnapshot |
+
+::: tip NOTE
+
+Output artifact `snapshotInfo` from `CreateCSISnapshot` function can be
+used as an input artifact in this function.
+:::
+
+Example:
+
+``` yaml
+actions:
+ delete:
+ inputArtifactNames:
+ - snapshotInfo
+ phases:
+ - func: DeleteCSISnapshot
+ name: deleteCSISnapshot
+ args:
+ name: "{{ .ArtifactsIn.snapshotInfo.KeyValue.name }}"
+ namespace: "{{ .ArtifactsIn.snapshotInfo.KeyValue.namespace }}"
+```
+
+### DeleteCSISnapshotContent
+
+This function deletes an unbound `VolumeSnapshotContent` resource. It
+has no effect on bound `VolumeSnapshotContent` resources, which are
+protected by the CSI controller.
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | -------- | :------: | ------ | ----------- |
+ | name | Yes | string | name of the `VolumeSnapshotContent` |
+
+Example:
+
+``` yaml
+actions:
+ deleteVSC:
+ phases:
+ - func: DeleteCSISnapshotContent
+ name: deleteCSISnapshotContent
+ args:
+ name: "test-snapshot-content-content-dfc8fa67-8b11-4fdf-bf94-928589c2eed8"
+```
+
+### BackupDataUsingKopiaServer
+
+This function backs up data from a container into any object store
+supported by Kanister, using the Kopia Repository Server as the data
+mover.
+
+::: tip NOTE
+
+It is important that the application includes a `kanister-tools` sidecar
+container. This sidecar is necessary to run the tools that back up the
+volume and store it on the object store.
+
+Additionally, in order to use this function, a RepositoryServer CR is
+needed while creating the [ActionSets](./architecture.md#actionsets).
+:::
+
+Arguments:
+
+ | Argument | Required | Type | Description |
+ | --------------------------- | :------: | ------ | ----------- |
+ | namespace                   | Yes      | string | namespace of the container whose data you want to back up |
+ | pod                         | Yes      | string | name of the pod running the container whose data you want to back up |
+ | container                   | Yes      | string | name of the kanister sidecar container |
+ | includePath | Yes | string | path of the data to be backed up |
+ | snapshotTags | No | string | custom tags to be provided to the kopia snapshots |
+ | repositoryServerUserHostname| No | string | user's hostname to access the kopia repository server. Hostname would be available in the user access credential secret |
+
+Outputs:
+
+ | Output | Type | Description |
+ | -------- | ------ | ----------- |
+ | backupID | string | unique snapshot id generated during backup |
+ | size | string | size of the backup |
+ | phySize | string | physical size of the backup |
+
+Example:
+
+``` yaml
+actions:
+  backup:
+    outputArtifacts:
+      backupIdentifier:
+        keyValue:
+          id: "{{ .Phases.backupToS3.Output.backupID }}"
+    phases:
+    - func: BackupDataUsingKopiaServer
+      name: backupToS3
+      args:
+        namespace: "{{ .Deployment.Namespace }}"
+        pod: "{{ index .Deployment.Pods 0 }}"
+        container: kanister-tools
+        includePath: /mnt/data
+```
+
+### RestoreDataUsingKopiaServer
+
+This function restores data backed up by the
+`BackupDataUsingKopiaServer` function. It creates a new Pod that mounts
+the PVCs referenced by the Pod specified in the function argument and
+restores data to the specified path.
+
+::: tip NOTE
+
+It is extremely important that the PVCs are not currently in use by an
+active application container, as they must be mounted to the new Pod
+(ensure this by using `ScaleWorkload` with `replicas: 0` first). For
+advanced use cases, concurrent access is possible, but the PV needs the
+`RWX` access mode and the volume must use a clustered file system that
+supports concurrent access.
+:::
+
+ | Argument | Required | Type | Description |
+ | --------------------------- | :------: | ---------------------- | ----------- |
+ | namespace                   | Yes      | string                 | namespace of the application in which to restore the data |
+ | image                       | Yes      | string                 | image to be used for running the restore job (should contain the kopia binary) |
+ | backupIdentifier            | Yes      | string                 | unique snapshot id generated during backup |
+ | restorePath                 | Yes      | string                 | path where the data is to be restored |
+ | pod                         | No       | string                 | pod to which the volumes are attached |
+ | volumes                     | No       | map[string]string      | mapping of `pvcName` to `mountPath` under which the volume will be available |
+ | podOverride | No | map[string]interface{} | specs to override default pod specs with |
+ | repositoryServerUserHostname| No | string | hostname of the user used to access the Kopia repository server; available in the user access credential secret |
+
+::: tip NOTE
+
+The `image` argument requires the
+`ghcr.io/kanisterio/kanister-tools` image, since it includes the tools
+required to restore data from the object store.
+
+Either the `pod` or the `volumes` argument must be specified, depending
+on the function that was used to back up the data: specify `pod` if
+[BackupDataUsingKopiaServer](#backupdatausingkopiaserver) was used, and
+`volumes` if
+[CopyVolumeDataUsingKopiaServer](#copyvolumedatausingkopiaserver) was
+used.
+
+Additionally, in order to use this function, a RepositoryServer CR is
+required.
+:::
+
+Example:
+
+Consider a scenario where you wish to restore the data backed up by the
+[BackupDataUsingKopiaServer](#backupdatausingkopiaserver) function. We
+will first scale down the application, restore the data, and then scale
+it back up. For this phase, we will use the `backupIdentifier` artifact
+provided by the backup function.
+
+``` yaml
+- func: ScaleWorkload
+  name: shutdownPod
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    name: "{{ .Deployment.Name }}"
+    kind: Deployment
+    replicas: 0
+- func: RestoreDataUsingKopiaServer
+  name: restoreFromS3
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    pod: "{{ index .Deployment.Pods 0 }}"
+    backupIdentifier: "{{ .ArtifactsIn.backupIdentifier.KeyValue.id }}"
+    restorePath: /mnt/data
+- func: ScaleWorkload
+  name: bringupPod
+  args:
+    namespace: "{{ .Deployment.Namespace }}"
+    name: "{{ .Deployment.Name }}"
+    kind: Deployment
+    replicas: 1
+```
+
+### DeleteDataUsingKopiaServer
+
+This function deletes the snapshot data backed up by the
+`BackupDataUsingKopiaServer` function. It creates a new Pod that runs
+the snapshot delete command.
+
+::: tip NOTE
+
+The `image` argument requires the
+`ghcr.io/kanisterio/kanister-tools` image, since it includes the tools
+required to delete the snapshot from the object store.
+
+Additionally, in order to use this function, a RepositoryServer CR is
+required.
+:::
+
+ | Argument | Required | Type | Description |
+ | --------------------------- | :------: | ------ | ----------- |
+ | namespace | Yes | string | namespace in which to execute the delete job |
+ | backupID | Yes | string | unique snapshot id generated during backup |
+ | image | Yes | string | image to be used for running delete job (should contain kopia binary) |
+ | repositoryServerUserHostname| No | string | hostname of the user used to access the Kopia repository server; available in the user access credential secret |
+
+Example:
+
+Consider a scenario where you wish to delete the data backed up by the
+[BackupDataUsingKopiaServer](#backupdatausingkopiaserver) function. For
+this phase, we will use the `backupIdentifier` artifact provided by the
+backup function.
+
+``` yaml
+- func: DeleteDataUsingKopiaServer
+ name: DeleteFromObjectStore
+ args:
+ namespace: "{{ .Deployment.Namespace }}"
+ backupID: "{{ .ArtifactsIn.backupIdentifier.KeyValue.id }}"
+ image: ghcr.io/kanisterio/kanister-tools:0.89.0
+```
+
+### Registering Functions
+
+Kanister can be extended by registering new Kanister Functions.
+
+Kanister Functions are registered using a similar mechanism to
+[database/sql](https://golang.org/pkg/database/sql/) drivers. To
+register new Kanister Functions, import a package with those new
+functions into the controller and recompile it. -->
diff --git a/docs_new/generateSidebar.js b/docs_new/generateSidebar.js
new file mode 100644
index 0000000000..71027d0fe8
--- /dev/null
+++ b/docs_new/generateSidebar.js
@@ -0,0 +1,38 @@
+const fs = require("fs");
+const path = require("path");
+
+const ignoredFolders = ["_static", ".vitepress", "node_modules"];
+
+function generateSidebarConfig(basePath, folderPath = "") {
+ const folderDir = path.join(basePath, folderPath);
+ const contents = fs.readdirSync(folderDir, { withFileTypes: true });
+
+ const sidebarSections = contents
+ .filter((item) => !ignoredFolders.includes(item.name))
+ .map((item) => {
+ const itemPath = path.join(folderPath, item.name);
+ if (item.isDirectory()) {
+ return {
+ text: item.name.charAt(0).toUpperCase() + item.name.slice(1),
+ items: generateSidebarConfig(basePath, itemPath),
+ };
+ } else if (item.isFile() && item.name.endsWith(".md")) {
+ return {
+        text:
+          item.name.replace(/\.md$/, "").charAt(0).toUpperCase() +
+          item.name.replace(/\.md$/, "").slice(1),
+ link: `/${itemPath.replace(/\.md$/, "")}`,
+ };
+ }
+ return null;
+ });
+
+ return sidebarSections.filter(Boolean);
+}
+
+const generatedSidebar = generateSidebarConfig("./");
+
+fs.writeFileSync(
+ "./generatedSidebar.js",
+ `module.exports = ${JSON.stringify(generatedSidebar)};`
+);
diff --git a/docs_new/generatedSidebar.js b/docs_new/generatedSidebar.js
new file mode 100644
index 0000000000..2d47e44dae
--- /dev/null
+++ b/docs_new/generatedSidebar.js
@@ -0,0 +1,24 @@
+module.exports = [
+  { text: "Api-examples", link: "/api-examples" },
+  { text: "Architecture", link: "/architecture" },
+  { text: "Functions", link: "/functions" },
+  { text: "Index 2", link: "/index 2" },
+  { text: "Index", link: "/index" },
+  { text: "Install", link: "/install" },
+  { text: "Markdown-examples", link: "/markdown-examples" },
+  { text: "Overview", link: "/overview" },
+  {
+    text: "Tasks",
+    items: [
+      { text: "Argo", link: "/tasks/argo" },
+      { text: "Logs", link: "/tasks/logs" },
+      { text: "Logs_level", link: "/tasks/logs_level" },
+      { text: "Scaleworkload", link: "/tasks/scaleworkload" },
+    ],
+  },
+  { text: "Tasks", link: "/tasks" },
+  { text: "Templates", link: "/templates" },
+  { text: "Tooling", link: "/tooling" },
+  { text: "Troubleshooting", link: "/troubleshooting" },
+  { text: "Tutorial", link: "/tutorial" },
+];
diff --git a/docs_new/index.md b/docs_new/index.md
new file mode 100644
index 0000000000..f3cc639b2d
--- /dev/null
+++ b/docs_new/index.md
@@ -0,0 +1,27 @@
+---
+# https://vitepress.dev/reference/default-theme-home-page
+layout: home
+
+hero:
+ name: "Kanister"
+ text: "Application-Specific Data Management"
+ tagline: Kanister is a data protection workflow management tool. It provides a set of cohesive APIs for defining and curating data operations by abstracting away tedious details around executing data operations on Kubernetes. It's extensible and easy to install, operate and scale.
+ image:
+ src: /kanister.svg
+    alt: Kanister
+ actions:
+ - theme: brand
+ text: Overview
+ link: /overview
+ - theme: alt
+ text: Install
+ link: /install
+
+features:
+ # - title: Feature A
+ # details: Lorem ipsum dolor sit amet, consectetur adipiscing elit
+ # - title: Feature B
+ # details: Lorem ipsum dolor sit amet, consectetur adipiscing elit
+ # - title: Feature C
+ # details: Lorem ipsum dolor sit amet, consectetur adipiscing elit
+---
diff --git a/docs_new/install.md b/docs_new/install.md
new file mode 100644
index 0000000000..af8b4d7d25
--- /dev/null
+++ b/docs_new/install.md
@@ -0,0 +1,105 @@
+# Installation {#install}
+
+Kanister can be easily installed and managed with
+[Helm](https://helm.sh). You will need to configure your `kubectl` CLI
+tool to target the Kubernetes cluster you want to install Kanister on.
+
+Start by adding the Kanister repository to your local setup:
+
+``` bash
+helm repo add kanister https://charts.kanister.io/
+```
+
+Use the `helm install` command to install Kanister in the `kanister`
+namespace:
+
+``` bash
+helm -n kanister upgrade --install kanister --create-namespace \
+  kanister/kanister-operator
+```
+
+Confirm that the Kanister workloads are ready:
+
+``` bash
+kubectl -n kanister get po
+```
+
+You should see the operator pod in the `Running` state:
+
+``` bash
+NAME READY STATUS RESTARTS AGE
+kanister-kanister-operator-85c747bfb8-dmqnj 1/1 Running 0 15s
+```
+
+::: tip NOTE
+
+Kanister is guaranteed to work with the 3 most recent versions of
+Kubernetes. For example, if the latest version of Kubernetes is 1.24,
+Kanister will work with 1.24, 1.23, and 1.22. Support for older versions
+is provided on a best-effort basis. If you are using an older version of
+Kubernetes, please consider upgrading to a newer version.
+:::
+
+## Configuring Kanister
+
+Use the `helm show values` command to list the configurable options:
+
+``` bash
+helm show values kanister/kanister-operator
+```
+
+For example, you can use the `image.tag` value to specify the Kanister
+version to install.
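For instance, pinning a specific Kanister version via `image.tag` might look like the following; note that `0.105.0` is a placeholder, and the tag to use should come from the Kanister releases page. Running this requires `helm` and access to a Kubernetes cluster:

``` bash
# Pin the operator to a specific release via the image.tag value.
# "0.105.0" is a placeholder tag; substitute a real release.
helm -n kanister upgrade --install kanister --create-namespace \
  kanister/kanister-operator --set image.tag=0.105.0
```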
+
+The source of the `values.yaml` file can be found on
+[GitHub](https://github.com/kanisterio/kanister/blob/master/helm/kanister-operator/values.yaml).
+
+## Managing Custom Resource Definitions (CRDs)
+
+The default RBAC settings in the Helm chart permit Kanister to manage
+and auto-update its own custom resource definitions, easing the user's
+operational burden. If your setup requires the removal of these
+settings, install Kanister with the
+`--set controller.updateCRDs=false` option:
+
+``` bash
+helm -n kanister upgrade --install kanister --create-namespace \
+  kanister/kanister-operator --set controller.updateCRDs=false
+```
+
+This option lets Helm manage the CRD resources.
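Regardless of which component manages the CRDs, one way to confirm they are present after installation is to list them; this assumes `kubectl` access to the cluster and that the CRD group is `cr.kanister.io`:

``` bash
# List the Kanister CRDs (actionsets, blueprints, profiles, ...).
kubectl get crds | grep kanister
```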
+
+## Using custom certificates with the Validating Webhook Controller
+
+Kanister installation also creates a validating admission webhook server
+that is invoked each time a new Blueprint is created.
+
+By default, the Helm chart is configured to automatically generate a
+self-signed certificate for the admission webhook server. If your setup
+requires custom certificates, install Kanister with the
+`--set bpValidatingWebhook.tls.mode=custom` option along with the other
+certificate details.
+
+Create a Secret that stores the TLS key and certificate for the webhook
+admission server:
+
+``` bash
+kubectl create secret tls my-tls-secret --cert /path/to/tls.crt \
+  --key /path/to/tls.key -n kanister
+```
+
+Install Kanister, providing the PEM-encoded CA bundle and the name of
+the TLS secret created above:
+
+``` bash
+helm upgrade --install kanister kanister/kanister-operator \
+  --namespace kanister --create-namespace \
+  --set bpValidatingWebhook.tls.mode=custom \
+  --set bpValidatingWebhook.tls.caBundle=$(cat /path/to/ca.pem | base64 -w 0) \
+  --set bpValidatingWebhook.tls.secretName=my-tls-secret
+```
+
+## Building and Deploying from Source
+
+Follow the instructions in the `BUILD.md` file in the [Kanister GitHub
+repository](https://github.com/kanisterio/kanister/blob/master/BUILD.md)
+to build Kanister from source code.
diff --git a/docs_new/markdown-examples.md b/docs_new/markdown-examples.md
new file mode 100644
index 0000000000..3ea9aa9f2a
--- /dev/null
+++ b/docs_new/markdown-examples.md
@@ -0,0 +1,85 @@
+# Markdown Extension Examples
+
+This page demonstrates some of the built-in markdown extensions provided by VitePress.
+
+## Syntax Highlighting
+
+VitePress provides Syntax Highlighting powered by [Shikiji](https://github.com/antfu/shikiji), with additional features like line-highlighting:
+
+**Input**
+
+````md
+```js{4}
+export default {
+ data () {
+ return {
+ msg: 'Highlighted!'
+ }
+ }
+}
+```
+````
+
+**Output**
+
+```js{4}
+export default {
+ data () {
+ return {
+ msg: 'Highlighted!'
+ }
+ }
+}
+```
+
+## Custom Containers
+
+**Input**
+
+```md
+::: info
+This is an info box.
+:::
+
+::: tip
+This is a tip.
+:::
+
+::: warning
+This is a warning.
+:::
+
+::: danger
+This is a dangerous warning.
+:::
+
+::: details
+This is a details block.
+:::
+```
+
+**Output**
+
+::: info
+This is an info box.
+:::
+
+::: tip
+This is a tip.
+:::
+
+::: warning
+This is a warning.
+:::
+
+::: danger
+This is a dangerous warning.
+:::
+
+::: details
+This is a details block.
+:::
+
+## More
+
+Check out the documentation for the [full list of markdown extensions](https://vitepress.dev/guide/markdown).
diff --git a/docs_new/overview.md b/docs_new/overview.md
new file mode 100644
index 0000000000..3453d99279
--- /dev/null
+++ b/docs_new/overview.md
@@ -0,0 +1,38 @@
+# Overview
+
+|[![image](https://goreportcard.com/badge/github.com/kanisterio/kanister)](https://goreportcard.com/report/github.com/kanisterio/kanister)|[![image](https://github.com/kanisterio/kanister/actions/workflows/main.yaml/badge.svg?branch=master)](https://github.com/kanisterio/kanister/actions/workflows/main.yaml)|
+|-------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+
+## Design Goals
+
+The design of Kanister was driven by the following main goals:
+
+1. **Application-Centric:** Given the increasingly complex and
+ distributed nature of cloud-native data services, there is a growing
+ need for data management tasks to be at the *application* level.
+   Experts who possess domain knowledge of a specific application's
+ needs should be able to capture these needs when performing data
+ operations on that application.
+2. **API Driven:** Data management tasks for each specific application
+ may vary widely, and these tasks should be encapsulated by a
+ well-defined API so as to provide a uniform data management
+ experience. Each application expert can provide an
+ application-specific pluggable implementation that satisfies this
+ API, thus enabling a homogeneous data management experience of
+ diverse and evolving data services.
+3. **Extensible:** Any data management solution capable of managing a
+ diverse set of applications must be flexible enough to capture the
+ needs of custom data services running in a variety of environments.
+ Such flexibility can only be provided if the solution itself can
+ easily be extended.
+
+## Getting Started
+
+Follow the instructions in the [Installation](install.md) section to get
+Kanister up and running on your Kubernetes cluster. Then see Kanister in
+action by going through the walkthrough under [Tutorial](tutorial.md).
+
+The [Architecture](architecture.md) section provides
+architectural insights into how things work. We recommend that you take
+a look at it.
diff --git a/docs_new/package.json b/docs_new/package.json
new file mode 100644
index 0000000000..4cfafc93ab
--- /dev/null
+++ b/docs_new/package.json
@@ -0,0 +1,11 @@
+{
+ "devDependencies": {
+ "vitepress": "1.0.0-rc.40",
+ "vue": "^3.4.15"
+ },
+ "scripts": {
+ "docs:dev": "vitepress dev",
+ "docs:build": "vitepress build",
+ "docs:preview": "vitepress preview"
+ }
+}
diff --git a/docs_new/pnpm-lock.yaml b/docs_new/pnpm-lock.yaml
new file mode 100644
index 0000000000..5f3a831f75
--- /dev/null
+++ b/docs_new/pnpm-lock.yaml
@@ -0,0 +1,1012 @@
+lockfileVersion: '6.0'
+
+settings:
+ autoInstallPeers: true
+ excludeLinksFromLockfile: false
+
+devDependencies:
+ vitepress:
+ specifier: 1.0.0-rc.40
+ version: 1.0.0-rc.40(@algolia/client-search@4.22.1)(search-insights@2.13.0)
+ vue:
+ specifier: ^3.4.15
+ version: 3.4.15
+
+packages:
+
+ /@algolia/autocomplete-core@1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)(search-insights@2.13.0):
+ resolution: {integrity: sha512-009HdfugtGCdC4JdXUbVJClA0q0zh24yyePn+KUGk3rP7j8FEe/m5Yo/z65gn6nP/cM39PxpzqKrL7A6fP6PPw==}
+ dependencies:
+ '@algolia/autocomplete-plugin-algolia-insights': 1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)(search-insights@2.13.0)
+ '@algolia/autocomplete-shared': 1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)
+ transitivePeerDependencies:
+ - '@algolia/client-search'
+ - algoliasearch
+ - search-insights
+ dev: true
+
+ /@algolia/autocomplete-plugin-algolia-insights@1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)(search-insights@2.13.0):
+ resolution: {integrity: sha512-a/yTUkcO/Vyy+JffmAnTWbr4/90cLzw+CC3bRbhnULr/EM0fGNvM13oQQ14f2moLMcVDyAx/leczLlAOovhSZg==}
+ peerDependencies:
+ search-insights: '>= 1 < 3'
+ dependencies:
+ '@algolia/autocomplete-shared': 1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)
+ search-insights: 2.13.0
+ transitivePeerDependencies:
+ - '@algolia/client-search'
+ - algoliasearch
+ dev: true
+
+ /@algolia/autocomplete-preset-algolia@1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1):
+ resolution: {integrity: sha512-d4qlt6YmrLMYy95n5TB52wtNDr6EgAIPH81dvvvW8UmuWRgxEtY0NJiPwl/h95JtG2vmRM804M0DSwMCNZlzRA==}
+ peerDependencies:
+ '@algolia/client-search': '>= 4.9.1 < 6'
+ algoliasearch: '>= 4.9.1 < 6'
+ dependencies:
+ '@algolia/autocomplete-shared': 1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)
+ '@algolia/client-search': 4.22.1
+ algoliasearch: 4.22.1
+ dev: true
+
+ /@algolia/autocomplete-shared@1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1):
+ resolution: {integrity: sha512-Wnm9E4Ye6Rl6sTTqjoymD+l8DjSTHsHboVRYrKgEt8Q7UHm9nYbqhN/i0fhUYA3OAEH7WA8x3jfpnmJm3rKvaQ==}
+ peerDependencies:
+ '@algolia/client-search': '>= 4.9.1 < 6'
+ algoliasearch: '>= 4.9.1 < 6'
+ dependencies:
+ '@algolia/client-search': 4.22.1
+ algoliasearch: 4.22.1
+ dev: true
+
+ /@algolia/cache-browser-local-storage@4.22.1:
+ resolution: {integrity: sha512-Sw6IAmOCvvP6QNgY9j+Hv09mvkvEIDKjYW8ow0UDDAxSXy664RBNQk3i/0nt7gvceOJ6jGmOTimaZoY1THmU7g==}
+ dependencies:
+ '@algolia/cache-common': 4.22.1
+ dev: true
+
+ /@algolia/cache-common@4.22.1:
+ resolution: {integrity: sha512-TJMBKqZNKYB9TptRRjSUtevJeQVXRmg6rk9qgFKWvOy8jhCPdyNZV1nB3SKGufzvTVbomAukFR8guu/8NRKBTA==}
+ dev: true
+
+ /@algolia/cache-in-memory@4.22.1:
+ resolution: {integrity: sha512-ve+6Ac2LhwpufuWavM/aHjLoNz/Z/sYSgNIXsinGofWOysPilQZPUetqLj8vbvi+DHZZaYSEP9H5SRVXnpsNNw==}
+ dependencies:
+ '@algolia/cache-common': 4.22.1
+ dev: true
+
+ /@algolia/client-account@4.22.1:
+ resolution: {integrity: sha512-k8m+oegM2zlns/TwZyi4YgCtyToackkOpE+xCaKCYfBfDtdGOaVZCM5YvGPtK+HGaJMIN/DoTL8asbM3NzHonw==}
+ dependencies:
+ '@algolia/client-common': 4.22.1
+ '@algolia/client-search': 4.22.1
+ '@algolia/transporter': 4.22.1
+ dev: true
+
+ /@algolia/client-analytics@4.22.1:
+ resolution: {integrity: sha512-1ssi9pyxyQNN4a7Ji9R50nSdISIumMFDwKNuwZipB6TkauJ8J7ha/uO60sPJFqQyqvvI+px7RSNRQT3Zrvzieg==}
+ dependencies:
+ '@algolia/client-common': 4.22.1
+ '@algolia/client-search': 4.22.1
+ '@algolia/requester-common': 4.22.1
+ '@algolia/transporter': 4.22.1
+ dev: true
+
+ /@algolia/client-common@4.22.1:
+ resolution: {integrity: sha512-IvaL5v9mZtm4k4QHbBGDmU3wa/mKokmqNBqPj0K7lcR8ZDKzUorhcGp/u8PkPC/e0zoHSTvRh7TRkGX3Lm7iOQ==}
+ dependencies:
+ '@algolia/requester-common': 4.22.1
+ '@algolia/transporter': 4.22.1
+ dev: true
+
+ /@algolia/client-personalization@4.22.1:
+ resolution: {integrity: sha512-sl+/klQJ93+4yaqZ7ezOttMQ/nczly/3GmgZXJ1xmoewP5jmdP/X/nV5U7EHHH3hCUEHeN7X1nsIhGPVt9E1cQ==}
+ dependencies:
+ '@algolia/client-common': 4.22.1
+ '@algolia/requester-common': 4.22.1
+ '@algolia/transporter': 4.22.1
+ dev: true
+
+ /@algolia/client-search@4.22.1:
+ resolution: {integrity: sha512-yb05NA4tNaOgx3+rOxAmFztgMTtGBi97X7PC3jyNeGiwkAjOZc2QrdZBYyIdcDLoI09N0gjtpClcackoTN0gPA==}
+ dependencies:
+ '@algolia/client-common': 4.22.1
+ '@algolia/requester-common': 4.22.1
+ '@algolia/transporter': 4.22.1
+ dev: true
+
+ /@algolia/logger-common@4.22.1:
+ resolution: {integrity: sha512-OnTFymd2odHSO39r4DSWRFETkBufnY2iGUZNrMXpIhF5cmFE8pGoINNPzwg02QLBlGSaLqdKy0bM8S0GyqPLBg==}
+ dev: true
+
+ /@algolia/logger-console@4.22.1:
+ resolution: {integrity: sha512-O99rcqpVPKN1RlpgD6H3khUWylU24OXlzkavUAMy6QZd1776QAcauE3oP8CmD43nbaTjBexZj2nGsBH9Tc0FVA==}
+ dependencies:
+ '@algolia/logger-common': 4.22.1
+ dev: true
+
+ /@algolia/requester-browser-xhr@4.22.1:
+ resolution: {integrity: sha512-dtQGYIg6MteqT1Uay3J/0NDqD+UciHy3QgRbk7bNddOJu+p3hzjTRYESqEnoX/DpEkaNYdRHUKNylsqMpgwaEw==}
+ dependencies:
+ '@algolia/requester-common': 4.22.1
+ dev: true
+
+ /@algolia/requester-common@4.22.1:
+ resolution: {integrity: sha512-dgvhSAtg2MJnR+BxrIFqlLtkLlVVhas9HgYKMk2Uxiy5m6/8HZBL40JVAMb2LovoPFs9I/EWIoFVjOrFwzn5Qg==}
+ dev: true
+
+ /@algolia/requester-node-http@4.22.1:
+ resolution: {integrity: sha512-JfmZ3MVFQkAU+zug8H3s8rZ6h0ahHZL/SpMaSasTCGYR5EEJsCc8SI5UZ6raPN2tjxa5bxS13BRpGSBUens7EA==}
+ dependencies:
+ '@algolia/requester-common': 4.22.1
+ dev: true
+
+ /@algolia/transporter@4.22.1:
+ resolution: {integrity: sha512-kzWgc2c9IdxMa3YqA6TN0NW5VrKYYW/BELIn7vnLyn+U/RFdZ4lxxt9/8yq3DKV5snvoDzzO4ClyejZRdV3lMQ==}
+ dependencies:
+ '@algolia/cache-common': 4.22.1
+ '@algolia/logger-common': 4.22.1
+ '@algolia/requester-common': 4.22.1
+ dev: true
+
+ /@babel/helper-string-parser@7.23.4:
+ resolution: {integrity: sha512-803gmbQdqwdf4olxrX4AJyFBV/RTr3rSmOj0rKwesmzlfhYNDEs+/iOcznzpNWlJlIlTJC2QfPFcHB6DlzdVLQ==}
+ engines: {node: '>=6.9.0'}
+ dev: true
+
+ /@babel/helper-validator-identifier@7.22.20:
+ resolution: {integrity: sha512-Y4OZ+ytlatR8AI+8KZfKuL5urKp7qey08ha31L8b3BwewJAoJamTzyvxPR/5D+KkdJCGPq/+8TukHBlY10FX9A==}
+ engines: {node: '>=6.9.0'}
+ dev: true
+
+ /@babel/parser@7.23.9:
+ resolution: {integrity: sha512-9tcKgqKbs3xGJ+NtKF2ndOBBLVwPjl1SHxPQkd36r3Dlirw3xWUeGaTbqr7uGZcTaxkVNwc+03SVP7aCdWrTlA==}
+ engines: {node: '>=6.0.0'}
+ hasBin: true
+ dependencies:
+ '@babel/types': 7.23.9
+ dev: true
+
+ /@babel/types@7.23.9:
+ resolution: {integrity: sha512-dQjSq/7HaSjRM43FFGnv5keM2HsxpmyV1PfaSVm0nzzjwwTmjOe6J4bC8e3+pTEIgHaHj+1ZlLThRJ2auc/w1Q==}
+ engines: {node: '>=6.9.0'}
+ dependencies:
+ '@babel/helper-string-parser': 7.23.4
+ '@babel/helper-validator-identifier': 7.22.20
+ to-fast-properties: 2.0.0
+ dev: true
+
+ /@docsearch/css@3.5.2:
+ resolution: {integrity: sha512-SPiDHaWKQZpwR2siD0KQUwlStvIAnEyK6tAE2h2Wuoq8ue9skzhlyVQ1ddzOxX6khULnAALDiR/isSF3bnuciA==}
+ dev: true
+
+ /@docsearch/js@3.5.2(@algolia/client-search@4.22.1)(search-insights@2.13.0):
+ resolution: {integrity: sha512-p1YFTCDflk8ieHgFJYfmyHBki1D61+U9idwrLh+GQQMrBSP3DLGKpy0XUJtPjAOPltcVbqsTjiPFfH7JImjUNg==}
+ dependencies:
+ '@docsearch/react': 3.5.2(@algolia/client-search@4.22.1)(search-insights@2.13.0)
+ preact: 10.19.3
+ transitivePeerDependencies:
+ - '@algolia/client-search'
+ - '@types/react'
+ - react
+ - react-dom
+ - search-insights
+ dev: true
+
+ /@docsearch/react@3.5.2(@algolia/client-search@4.22.1)(search-insights@2.13.0):
+ resolution: {integrity: sha512-9Ahcrs5z2jq/DcAvYtvlqEBHImbm4YJI8M9y0x6Tqg598P40HTEkX7hsMcIuThI+hTFxRGZ9hll0Wygm2yEjng==}
+ peerDependencies:
+ '@types/react': '>= 16.8.0 < 19.0.0'
+ react: '>= 16.8.0 < 19.0.0'
+ react-dom: '>= 16.8.0 < 19.0.0'
+ search-insights: '>= 1 < 3'
+ peerDependenciesMeta:
+ '@types/react':
+ optional: true
+ react:
+ optional: true
+ react-dom:
+ optional: true
+ search-insights:
+ optional: true
+ dependencies:
+ '@algolia/autocomplete-core': 1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)(search-insights@2.13.0)
+ '@algolia/autocomplete-preset-algolia': 1.9.3(@algolia/client-search@4.22.1)(algoliasearch@4.22.1)
+ '@docsearch/css': 3.5.2
+ algoliasearch: 4.22.1
+ search-insights: 2.13.0
+ transitivePeerDependencies:
+ - '@algolia/client-search'
+ dev: true
+
+ /@esbuild/aix-ppc64@0.19.12:
+ resolution: {integrity: sha512-bmoCYyWdEL3wDQIVbcyzRyeKLgk2WtWLTWz1ZIAZF/EGbNOwSA6ew3PftJ1PqMiOOGu0OyFMzG53L0zqIpPeNA==}
+ engines: {node: '>=12'}
+ cpu: [ppc64]
+ os: [aix]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/android-arm64@0.19.12:
+ resolution: {integrity: sha512-P0UVNGIienjZv3f5zq0DP3Nt2IE/3plFzuaS96vihvD0Hd6H/q4WXUGpCxD/E8YrSXfNyRPbpTq+T8ZQioSuPA==}
+ engines: {node: '>=12'}
+ cpu: [arm64]
+ os: [android]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/android-arm@0.19.12:
+ resolution: {integrity: sha512-qg/Lj1mu3CdQlDEEiWrlC4eaPZ1KztwGJ9B6J+/6G+/4ewxJg7gqj8eVYWvao1bXrqGiW2rsBZFSX3q2lcW05w==}
+ engines: {node: '>=12'}
+ cpu: [arm]
+ os: [android]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/android-x64@0.19.12:
+ resolution: {integrity: sha512-3k7ZoUW6Q6YqhdhIaq/WZ7HwBpnFBlW905Fa4s4qWJyiNOgT1dOqDiVAQFwBH7gBRZr17gLrlFCRzF6jFh7Kew==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [android]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/darwin-arm64@0.19.12:
+ resolution: {integrity: sha512-B6IeSgZgtEzGC42jsI+YYu9Z3HKRxp8ZT3cqhvliEHovq8HSX2YX8lNocDn79gCKJXOSaEot9MVYky7AKjCs8g==}
+ engines: {node: '>=12'}
+ cpu: [arm64]
+ os: [darwin]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/darwin-x64@0.19.12:
+ resolution: {integrity: sha512-hKoVkKzFiToTgn+41qGhsUJXFlIjxI/jSYeZf3ugemDYZldIXIxhvwN6erJGlX4t5h417iFuheZ7l+YVn05N3A==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [darwin]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/freebsd-arm64@0.19.12:
+ resolution: {integrity: sha512-4aRvFIXmwAcDBw9AueDQ2YnGmz5L6obe5kmPT8Vd+/+x/JMVKCgdcRwH6APrbpNXsPz+K653Qg8HB/oXvXVukA==}
+ engines: {node: '>=12'}
+ cpu: [arm64]
+ os: [freebsd]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/freebsd-x64@0.19.12:
+ resolution: {integrity: sha512-EYoXZ4d8xtBoVN7CEwWY2IN4ho76xjYXqSXMNccFSx2lgqOG/1TBPW0yPx1bJZk94qu3tX0fycJeeQsKovA8gg==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [freebsd]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-arm64@0.19.12:
+ resolution: {integrity: sha512-EoTjyYyLuVPfdPLsGVVVC8a0p1BFFvtpQDB/YLEhaXyf/5bczaGeN15QkR+O4S5LeJ92Tqotve7i1jn35qwvdA==}
+ engines: {node: '>=12'}
+ cpu: [arm64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-arm@0.19.12:
+ resolution: {integrity: sha512-J5jPms//KhSNv+LO1S1TX1UWp1ucM6N6XuL6ITdKWElCu8wXP72l9MM0zDTzzeikVyqFE6U8YAV9/tFyj0ti+w==}
+ engines: {node: '>=12'}
+ cpu: [arm]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-ia32@0.19.12:
+ resolution: {integrity: sha512-Thsa42rrP1+UIGaWz47uydHSBOgTUnwBwNq59khgIwktK6x60Hivfbux9iNR0eHCHzOLjLMLfUMLCypBkZXMHA==}
+ engines: {node: '>=12'}
+ cpu: [ia32]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-loong64@0.19.12:
+ resolution: {integrity: sha512-LiXdXA0s3IqRRjm6rV6XaWATScKAXjI4R4LoDlvO7+yQqFdlr1Bax62sRwkVvRIrwXxvtYEHHI4dm50jAXkuAA==}
+ engines: {node: '>=12'}
+ cpu: [loong64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-mips64el@0.19.12:
+ resolution: {integrity: sha512-fEnAuj5VGTanfJ07ff0gOA6IPsvrVHLVb6Lyd1g2/ed67oU1eFzL0r9WL7ZzscD+/N6i3dWumGE1Un4f7Amf+w==}
+ engines: {node: '>=12'}
+ cpu: [mips64el]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-ppc64@0.19.12:
+ resolution: {integrity: sha512-nYJA2/QPimDQOh1rKWedNOe3Gfc8PabU7HT3iXWtNUbRzXS9+vgB0Fjaqr//XNbd82mCxHzik2qotuI89cfixg==}
+ engines: {node: '>=12'}
+ cpu: [ppc64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-riscv64@0.19.12:
+ resolution: {integrity: sha512-2MueBrlPQCw5dVJJpQdUYgeqIzDQgw3QtiAHUC4RBz9FXPrskyyU3VI1hw7C0BSKB9OduwSJ79FTCqtGMWqJHg==}
+ engines: {node: '>=12'}
+ cpu: [riscv64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-s390x@0.19.12:
+ resolution: {integrity: sha512-+Pil1Nv3Umes4m3AZKqA2anfhJiVmNCYkPchwFJNEJN5QxmTs1uzyy4TvmDrCRNT2ApwSari7ZIgrPeUx4UZDg==}
+ engines: {node: '>=12'}
+ cpu: [s390x]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/linux-x64@0.19.12:
+ resolution: {integrity: sha512-B71g1QpxfwBvNrfyJdVDexenDIt1CiDN1TIXLbhOw0KhJzE78KIFGX6OJ9MrtC0oOqMWf+0xop4qEU8JrJTwCg==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/netbsd-x64@0.19.12:
+ resolution: {integrity: sha512-3ltjQ7n1owJgFbuC61Oj++XhtzmymoCihNFgT84UAmJnxJfm4sYCiSLTXZtE00VWYpPMYc+ZQmB6xbSdVh0JWA==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [netbsd]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/openbsd-x64@0.19.12:
+ resolution: {integrity: sha512-RbrfTB9SWsr0kWmb9srfF+L933uMDdu9BIzdA7os2t0TXhCRjrQyCeOt6wVxr79CKD4c+p+YhCj31HBkYcXebw==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [openbsd]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/sunos-x64@0.19.12:
+ resolution: {integrity: sha512-HKjJwRrW8uWtCQnQOz9qcU3mUZhTUQvi56Q8DPTLLB+DawoiQdjsYq+j+D3s9I8VFtDr+F9CjgXKKC4ss89IeA==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [sunos]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/win32-arm64@0.19.12:
+ resolution: {integrity: sha512-URgtR1dJnmGvX864pn1B2YUYNzjmXkuJOIqG2HdU62MVS4EHpU2946OZoTMnRUHklGtJdJZ33QfzdjGACXhn1A==}
+ engines: {node: '>=12'}
+ cpu: [arm64]
+ os: [win32]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/win32-ia32@0.19.12:
+ resolution: {integrity: sha512-+ZOE6pUkMOJfmxmBZElNOx72NKpIa/HFOMGzu8fqzQJ5kgf6aTGrcJaFsNiVMH4JKpMipyK+7k0n2UXN7a8YKQ==}
+ engines: {node: '>=12'}
+ cpu: [ia32]
+ os: [win32]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@esbuild/win32-x64@0.19.12:
+ resolution: {integrity: sha512-T1QyPSDCyMXaO3pzBkF96E8xMkiRYbUEZADd29SyPGabqxMViNoii+NcK7eWJAEoU6RZyEm5lVSIjTmcdoB9HA==}
+ engines: {node: '>=12'}
+ cpu: [x64]
+ os: [win32]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@jridgewell/sourcemap-codec@1.4.15:
+ resolution: {integrity: sha512-eF2rxCRulEKXHTRiDrDy6erMYWqNw4LPdQ8UQA4huuxaQsVeRPFl2oM8oDGxMFhJUWZf9McpLtJasDDZb/Bpeg==}
+ dev: true
+
+ /@rollup/rollup-android-arm-eabi@4.9.6:
+ resolution: {integrity: sha512-MVNXSSYN6QXOulbHpLMKYi60ppyO13W9my1qogeiAqtjb2yR4LSmfU2+POvDkLzhjYLXz9Rf9+9a3zFHW1Lecg==}
+ cpu: [arm]
+ os: [android]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-android-arm64@4.9.6:
+ resolution: {integrity: sha512-T14aNLpqJ5wzKNf5jEDpv5zgyIqcpn1MlwCrUXLrwoADr2RkWA0vOWP4XxbO9aiO3dvMCQICZdKeDrFl7UMClw==}
+ cpu: [arm64]
+ os: [android]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-darwin-arm64@4.9.6:
+ resolution: {integrity: sha512-CqNNAyhRkTbo8VVZ5R85X73H3R5NX9ONnKbXuHisGWC0qRbTTxnF1U4V9NafzJbgGM0sHZpdO83pLPzq8uOZFw==}
+ cpu: [arm64]
+ os: [darwin]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-darwin-x64@4.9.6:
+ resolution: {integrity: sha512-zRDtdJuRvA1dc9Mp6BWYqAsU5oeLixdfUvkTHuiYOHwqYuQ4YgSmi6+/lPvSsqc/I0Omw3DdICx4Tfacdzmhog==}
+ cpu: [x64]
+ os: [darwin]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-linux-arm-gnueabihf@4.9.6:
+ resolution: {integrity: sha512-oNk8YXDDnNyG4qlNb6is1ojTOGL/tRhbbKeE/YuccItzerEZT68Z9gHrY3ROh7axDc974+zYAPxK5SH0j/G+QQ==}
+ cpu: [arm]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-linux-arm64-gnu@4.9.6:
+ resolution: {integrity: sha512-Z3O60yxPtuCYobrtzjo0wlmvDdx2qZfeAWTyfOjEDqd08kthDKexLpV97KfAeUXPosENKd8uyJMRDfFMxcYkDQ==}
+ cpu: [arm64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-linux-arm64-musl@4.9.6:
+ resolution: {integrity: sha512-gpiG0qQJNdYEVad+1iAsGAbgAnZ8j07FapmnIAQgODKcOTjLEWM9sRb+MbQyVsYCnA0Im6M6QIq6ax7liws6eQ==}
+ cpu: [arm64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-linux-riscv64-gnu@4.9.6:
+ resolution: {integrity: sha512-+uCOcvVmFUYvVDr27aiyun9WgZk0tXe7ThuzoUTAukZJOwS5MrGbmSlNOhx1j80GdpqbOty05XqSl5w4dQvcOA==}
+ cpu: [riscv64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-linux-x64-gnu@4.9.6:
+ resolution: {integrity: sha512-HUNqM32dGzfBKuaDUBqFB7tP6VMN74eLZ33Q9Y1TBqRDn+qDonkAUyKWwF9BR9unV7QUzffLnz9GrnKvMqC/fw==}
+ cpu: [x64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-linux-x64-musl@4.9.6:
+ resolution: {integrity: sha512-ch7M+9Tr5R4FK40FHQk8VnML0Szi2KRujUgHXd/HjuH9ifH72GUmw6lStZBo3c3GB82vHa0ZoUfjfcM7JiiMrQ==}
+ cpu: [x64]
+ os: [linux]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-win32-arm64-msvc@4.9.6:
+ resolution: {integrity: sha512-VD6qnR99dhmTQ1mJhIzXsRcTBvTjbfbGGwKAHcu+52cVl15AC/kplkhxzW/uT0Xl62Y/meBKDZvoJSJN+vTeGA==}
+ cpu: [arm64]
+ os: [win32]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-win32-ia32-msvc@4.9.6:
+ resolution: {integrity: sha512-J9AFDq/xiRI58eR2NIDfyVmTYGyIZmRcvcAoJ48oDld/NTR8wyiPUu2X/v1navJ+N/FGg68LEbX3Ejd6l8B7MQ==}
+ cpu: [ia32]
+ os: [win32]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@rollup/rollup-win32-x64-msvc@4.9.6:
+ resolution: {integrity: sha512-jqzNLhNDvIZOrt69Ce4UjGRpXJBzhUBzawMwnaDAwyHriki3XollsewxWzOzz+4yOFDkuJHtTsZFwMxhYJWmLQ==}
+ cpu: [x64]
+ os: [win32]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /@types/estree@1.0.5:
+ resolution: {integrity: sha512-/kYRxGDLWzHOB7q+wtSUQlFrtcdUccpfy+X+9iMBpHK8QLLhx2wIPYuS5DYtR9Wa/YlZAbIovy7qVdB1Aq6Lyw==}
+ dev: true
+
+ /@types/linkify-it@3.0.5:
+ resolution: {integrity: sha512-yg6E+u0/+Zjva+buc3EIb+29XEg4wltq7cSmd4Uc2EE/1nUVmxyzpX6gUXD0V8jIrG0r7YeOGVIbYRkxeooCtw==}
+ dev: true
+
+ /@types/markdown-it@13.0.7:
+ resolution: {integrity: sha512-U/CBi2YUUcTHBt5tjO2r5QV/x0Po6nsYwQU4Y04fBS6vfoImaiZ6f8bi3CjTCxBPQSO1LMyUqkByzi8AidyxfA==}
+ dependencies:
+ '@types/linkify-it': 3.0.5
+ '@types/mdurl': 1.0.5
+ dev: true
+
+ /@types/mdurl@1.0.5:
+ resolution: {integrity: sha512-6L6VymKTzYSrEf4Nev4Xa1LCHKrlTlYCBMTlQKFuddo1CvQcE52I0mwfOJayueUC7MJuXOeHTcIU683lzd0cUA==}
+ dev: true
+
+ /@types/web-bluetooth@0.0.20:
+ resolution: {integrity: sha512-g9gZnnXVq7gM7v3tJCWV/qw7w+KeOlSHAhgF9RytFyifW6AF61hdT2ucrYhPq9hLs5JIryeupHV3qGk95dH9ow==}
+ dev: true
+
+ /@vitejs/plugin-vue@5.0.3(vite@5.0.12)(vue@3.4.15):
+ resolution: {integrity: sha512-b8S5dVS40rgHdDrw+DQi/xOM9ed+kSRZzfm1T74bMmBDCd8XO87NKlFYInzCtwvtWwXZvo1QxE2OSspTATWrbA==}
+ engines: {node: ^18.0.0 || >=20.0.0}
+ peerDependencies:
+ vite: ^5.0.0
+ vue: ^3.2.25
+ dependencies:
+ vite: 5.0.12
+ vue: 3.4.15
+ dev: true
+
+ /@vue/compiler-core@3.4.15:
+ resolution: {integrity: sha512-XcJQVOaxTKCnth1vCxEChteGuwG6wqnUHxAm1DO3gCz0+uXKaJNx8/digSz4dLALCy8n2lKq24jSUs8segoqIw==}
+ dependencies:
+ '@babel/parser': 7.23.9
+ '@vue/shared': 3.4.15
+ entities: 4.5.0
+ estree-walker: 2.0.2
+ source-map-js: 1.0.2
+ dev: true
+
+ /@vue/compiler-dom@3.4.15:
+ resolution: {integrity: sha512-wox0aasVV74zoXyblarOM3AZQz/Z+OunYcIHe1OsGclCHt8RsRm04DObjefaI82u6XDzv+qGWZ24tIsRAIi5MQ==}
+ dependencies:
+ '@vue/compiler-core': 3.4.15
+ '@vue/shared': 3.4.15
+ dev: true
+
+ /@vue/compiler-sfc@3.4.15:
+ resolution: {integrity: sha512-LCn5M6QpkpFsh3GQvs2mJUOAlBQcCco8D60Bcqmf3O3w5a+KWS5GvYbrrJBkgvL1BDnTp+e8q0lXCLgHhKguBA==}
+ dependencies:
+ '@babel/parser': 7.23.9
+ '@vue/compiler-core': 3.4.15
+ '@vue/compiler-dom': 3.4.15
+ '@vue/compiler-ssr': 3.4.15
+ '@vue/shared': 3.4.15
+ estree-walker: 2.0.2
+ magic-string: 0.30.5
+ postcss: 8.4.33
+ source-map-js: 1.0.2
+ dev: true
+
+ /@vue/compiler-ssr@3.4.15:
+ resolution: {integrity: sha512-1jdeQyiGznr8gjFDadVmOJqZiLNSsMa5ZgqavkPZ8O2wjHv0tVuAEsw5hTdUoUW4232vpBbL/wJhzVW/JwY1Uw==}
+ dependencies:
+ '@vue/compiler-dom': 3.4.15
+ '@vue/shared': 3.4.15
+ dev: true
+
+ /@vue/devtools-api@6.5.1:
+ resolution: {integrity: sha512-+KpckaAQyfbvshdDW5xQylLni1asvNSGme1JFs8I1+/H5pHEhqUKMEQD/qn3Nx5+/nycBq11qAEi8lk+LXI2dA==}
+ dev: true
+
+ /@vue/reactivity@3.4.15:
+ resolution: {integrity: sha512-55yJh2bsff20K5O84MxSvXKPHHt17I2EomHznvFiJCAZpJTNW8IuLj1xZWMLELRhBK3kkFV/1ErZGHJfah7i7w==}
+ dependencies:
+ '@vue/shared': 3.4.15
+ dev: true
+
+ /@vue/runtime-core@3.4.15:
+ resolution: {integrity: sha512-6E3by5m6v1AkW0McCeAyhHTw+3y17YCOKG0U0HDKDscV4Hs0kgNT5G+GCHak16jKgcCDHpI9xe5NKb8sdLCLdw==}
+ dependencies:
+ '@vue/reactivity': 3.4.15
+ '@vue/shared': 3.4.15
+ dev: true
+
+ /@vue/runtime-dom@3.4.15:
+ resolution: {integrity: sha512-EVW8D6vfFVq3V/yDKNPBFkZKGMFSvZrUQmx196o/v2tHKdwWdiZjYUBS+0Ez3+ohRyF8Njwy/6FH5gYJ75liUw==}
+ dependencies:
+ '@vue/runtime-core': 3.4.15
+ '@vue/shared': 3.4.15
+ csstype: 3.1.3
+ dev: true
+
+ /@vue/server-renderer@3.4.15(vue@3.4.15):
+ resolution: {integrity: sha512-3HYzaidu9cHjrT+qGUuDhFYvF/j643bHC6uUN9BgM11DVy+pM6ATsG6uPBLnkwOgs7BpJABReLmpL3ZPAsUaqw==}
+ peerDependencies:
+ vue: 3.4.15
+ dependencies:
+ '@vue/compiler-ssr': 3.4.15
+ '@vue/shared': 3.4.15
+ vue: 3.4.15
+ dev: true
+
+ /@vue/shared@3.4.15:
+ resolution: {integrity: sha512-KzfPTxVaWfB+eGcGdbSf4CWdaXcGDqckoeXUh7SB3fZdEtzPCK2Vq9B/lRRL3yutax/LWITz+SwvgyOxz5V75g==}
+ dev: true
+
+ /@vueuse/core@10.7.2(vue@3.4.15):
+ resolution: {integrity: sha512-AOyAL2rK0By62Hm+iqQn6Rbu8bfmbgaIMXcE3TSr7BdQ42wnSFlwIdPjInO62onYsEMK/yDMU8C6oGfDAtZ2qQ==}
+ dependencies:
+ '@types/web-bluetooth': 0.0.20
+ '@vueuse/metadata': 10.7.2
+ '@vueuse/shared': 10.7.2(vue@3.4.15)
+ vue-demi: 0.14.6(vue@3.4.15)
+ transitivePeerDependencies:
+ - '@vue/composition-api'
+ - vue
+ dev: true
+
+ /@vueuse/integrations@10.7.2(focus-trap@7.5.4)(vue@3.4.15):
+ resolution: {integrity: sha512-+u3RLPFedjASs5EKPc69Ge49WNgqeMfSxFn+qrQTzblPXZg6+EFzhjarS5edj2qAf6xQ93f95TUxRwKStXj/sQ==}
+ peerDependencies:
+ async-validator: '*'
+ axios: '*'
+ change-case: '*'
+ drauu: '*'
+ focus-trap: '*'
+ fuse.js: '*'
+ idb-keyval: '*'
+ jwt-decode: '*'
+ nprogress: '*'
+ qrcode: '*'
+ sortablejs: '*'
+ universal-cookie: '*'
+ peerDependenciesMeta:
+ async-validator:
+ optional: true
+ axios:
+ optional: true
+ change-case:
+ optional: true
+ drauu:
+ optional: true
+ focus-trap:
+ optional: true
+ fuse.js:
+ optional: true
+ idb-keyval:
+ optional: true
+ jwt-decode:
+ optional: true
+ nprogress:
+ optional: true
+ qrcode:
+ optional: true
+ sortablejs:
+ optional: true
+ universal-cookie:
+ optional: true
+ dependencies:
+ '@vueuse/core': 10.7.2(vue@3.4.15)
+ '@vueuse/shared': 10.7.2(vue@3.4.15)
+ focus-trap: 7.5.4
+ vue-demi: 0.14.6(vue@3.4.15)
+ transitivePeerDependencies:
+ - '@vue/composition-api'
+ - vue
+ dev: true
+
+ /@vueuse/metadata@10.7.2:
+ resolution: {integrity: sha512-kCWPb4J2KGrwLtn1eJwaJD742u1k5h6v/St5wFe8Quih90+k2a0JP8BS4Zp34XUuJqS2AxFYMb1wjUL8HfhWsQ==}
+ dev: true
+
+ /@vueuse/shared@10.7.2(vue@3.4.15):
+ resolution: {integrity: sha512-qFbXoxS44pi2FkgFjPvF4h7c9oMDutpyBdcJdMYIMg9XyXli2meFMuaKn+UMgsClo//Th6+beeCgqweT/79BVA==}
+ dependencies:
+ vue-demi: 0.14.6(vue@3.4.15)
+ transitivePeerDependencies:
+ - '@vue/composition-api'
+ - vue
+ dev: true
+
+ /algoliasearch@4.22.1:
+ resolution: {integrity: sha512-jwydKFQJKIx9kIZ8Jm44SdpigFwRGPESaxZBaHSV0XWN2yBJAOT4mT7ppvlrpA4UGzz92pqFnVKr/kaZXrcreg==}
+ dependencies:
+ '@algolia/cache-browser-local-storage': 4.22.1
+ '@algolia/cache-common': 4.22.1
+ '@algolia/cache-in-memory': 4.22.1
+ '@algolia/client-account': 4.22.1
+ '@algolia/client-analytics': 4.22.1
+ '@algolia/client-common': 4.22.1
+ '@algolia/client-personalization': 4.22.1
+ '@algolia/client-search': 4.22.1
+ '@algolia/logger-common': 4.22.1
+ '@algolia/logger-console': 4.22.1
+ '@algolia/requester-browser-xhr': 4.22.1
+ '@algolia/requester-common': 4.22.1
+ '@algolia/requester-node-http': 4.22.1
+ '@algolia/transporter': 4.22.1
+ dev: true
+
+ /csstype@3.1.3:
+ resolution: {integrity: sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw==}
+ dev: true
+
+ /entities@4.5.0:
+ resolution: {integrity: sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==}
+ engines: {node: '>=0.12'}
+ dev: true
+
+ /esbuild@0.19.12:
+ resolution: {integrity: sha512-aARqgq8roFBj054KvQr5f1sFu0D65G+miZRCuJyJ0G13Zwx7vRar5Zhn2tkQNzIXcBrNVsv/8stehpj+GAjgbg==}
+ engines: {node: '>=12'}
+ hasBin: true
+ requiresBuild: true
+ optionalDependencies:
+ '@esbuild/aix-ppc64': 0.19.12
+ '@esbuild/android-arm': 0.19.12
+ '@esbuild/android-arm64': 0.19.12
+ '@esbuild/android-x64': 0.19.12
+ '@esbuild/darwin-arm64': 0.19.12
+ '@esbuild/darwin-x64': 0.19.12
+ '@esbuild/freebsd-arm64': 0.19.12
+ '@esbuild/freebsd-x64': 0.19.12
+ '@esbuild/linux-arm': 0.19.12
+ '@esbuild/linux-arm64': 0.19.12
+ '@esbuild/linux-ia32': 0.19.12
+ '@esbuild/linux-loong64': 0.19.12
+ '@esbuild/linux-mips64el': 0.19.12
+ '@esbuild/linux-ppc64': 0.19.12
+ '@esbuild/linux-riscv64': 0.19.12
+ '@esbuild/linux-s390x': 0.19.12
+ '@esbuild/linux-x64': 0.19.12
+ '@esbuild/netbsd-x64': 0.19.12
+ '@esbuild/openbsd-x64': 0.19.12
+ '@esbuild/sunos-x64': 0.19.12
+ '@esbuild/win32-arm64': 0.19.12
+ '@esbuild/win32-ia32': 0.19.12
+ '@esbuild/win32-x64': 0.19.12
+ dev: true
+
+ /estree-walker@2.0.2:
+ resolution: {integrity: sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==}
+ dev: true
+
+ /focus-trap@7.5.4:
+ resolution: {integrity: sha512-N7kHdlgsO/v+iD/dMoJKtsSqs5Dz/dXZVebRgJw23LDk+jMi/974zyiOYDziY2JPp8xivq9BmUGwIJMiuSBi7w==}
+ dependencies:
+ tabbable: 6.2.0
+ dev: true
+
+ /fsevents@2.3.3:
+ resolution: {integrity: sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==}
+ engines: {node: ^8.16.0 || ^10.6.0 || >=11.0.0}
+ os: [darwin]
+ requiresBuild: true
+ dev: true
+ optional: true
+
+ /magic-string@0.30.5:
+ resolution: {integrity: sha512-7xlpfBaQaP/T6Vh8MO/EqXSW5En6INHEvEXQiuff7Gku0PWjU3uf6w/j9o7O+SpB5fOAkrI5HeoNgwjEO0pFsA==}
+ engines: {node: '>=12'}
+ dependencies:
+ '@jridgewell/sourcemap-codec': 1.4.15
+ dev: true
+
+ /mark.js@8.11.1:
+ resolution: {integrity: sha512-1I+1qpDt4idfgLQG+BNWmrqku+7/2bi5nLf4YwF8y8zXvmfiTBY3PV3ZibfrjBueCByROpuBjLLFCajqkgYoLQ==}
+ dev: true
+
+ /minisearch@6.3.0:
+ resolution: {integrity: sha512-ihFnidEeU8iXzcVHy74dhkxh/dn8Dc08ERl0xwoMMGqp4+LvRSCgicb+zGqWthVokQKvCSxITlh3P08OzdTYCQ==}
+ dev: true
+
+ /nanoid@3.3.7:
+ resolution: {integrity: sha512-eSRppjcPIatRIMC1U6UngP8XFcz8MQWGQdt1MTBQ7NaAmvXDfvNxbvWV3x2y6CdEUciCSsDHDQZbhYaB8QEo2g==}
+ engines: {node: ^10 || ^12 || ^13.7 || ^14 || >=15.0.1}
+ hasBin: true
+ dev: true
+
+ /picocolors@1.0.0:
+ resolution: {integrity: sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ==}
+ dev: true
+
+ /postcss@8.4.33:
+ resolution: {integrity: sha512-Kkpbhhdjw2qQs2O2DGX+8m5OVqEcbB9HRBvuYM9pgrjEFUg30A9LmXNlTAUj4S9kgtGyrMbTzVjH7E+s5Re2yg==}
+ engines: {node: ^10 || ^12 || >=14}
+ dependencies:
+ nanoid: 3.3.7
+ picocolors: 1.0.0
+ source-map-js: 1.0.2
+ dev: true
+
+ /preact@10.19.3:
+ resolution: {integrity: sha512-nHHTeFVBTHRGxJXKkKu5hT8C/YWBkPso4/Gad6xuj5dbptt9iF9NZr9pHbPhBrnT2klheu7mHTxTZ/LjwJiEiQ==}
+ dev: true
+
+ /rollup@4.9.6:
+ resolution: {integrity: sha512-05lzkCS2uASX0CiLFybYfVkwNbKZG5NFQ6Go0VWyogFTXXbR039UVsegViTntkk4OglHBdF54ccApXRRuXRbsg==}
+ engines: {node: '>=18.0.0', npm: '>=8.0.0'}
+ hasBin: true
+ dependencies:
+ '@types/estree': 1.0.5
+ optionalDependencies:
+ '@rollup/rollup-android-arm-eabi': 4.9.6
+ '@rollup/rollup-android-arm64': 4.9.6
+ '@rollup/rollup-darwin-arm64': 4.9.6
+ '@rollup/rollup-darwin-x64': 4.9.6
+ '@rollup/rollup-linux-arm-gnueabihf': 4.9.6
+ '@rollup/rollup-linux-arm64-gnu': 4.9.6
+ '@rollup/rollup-linux-arm64-musl': 4.9.6
+ '@rollup/rollup-linux-riscv64-gnu': 4.9.6
+ '@rollup/rollup-linux-x64-gnu': 4.9.6
+ '@rollup/rollup-linux-x64-musl': 4.9.6
+ '@rollup/rollup-win32-arm64-msvc': 4.9.6
+ '@rollup/rollup-win32-ia32-msvc': 4.9.6
+ '@rollup/rollup-win32-x64-msvc': 4.9.6
+ fsevents: 2.3.3
+ dev: true
+
+ /search-insights@2.13.0:
+ resolution: {integrity: sha512-Orrsjf9trHHxFRuo9/rzm0KIWmgzE8RMlZMzuhZOJ01Rnz3D0YBAe+V6473t6/H6c7irs6Lt48brULAiRWb3Vw==}
+ dev: true
+
+ /shikiji-core@0.10.2:
+ resolution: {integrity: sha512-9Of8HMlF96usXJHmCL3Gd0Fcf0EcyJUF9m8EoAKKd98mHXi0La2AZl1h6PegSFGtiYcBDK/fLuKbDa1l16r1fA==}
+ dev: true
+
+ /shikiji-transformers@0.10.2:
+ resolution: {integrity: sha512-7IVTwl1af205ywYEq5bOAYOTOFW4V1dVX1EablP0nWKErqZeD1o93VMytxmtJomqS+YwbB8doY8SE3MFMn0aPQ==}
+ dependencies:
+ shikiji: 0.10.2
+ dev: true
+
+ /shikiji@0.10.2:
+ resolution: {integrity: sha512-wtZg3T0vtYV2PnqusWQs3mDaJBdCPWxFDrBM/SE5LfrX92gjUvfEMlc+vJnoKY6Z/S44OWaCRzNIsdBRWcTAiw==}
+ dependencies:
+ shikiji-core: 0.10.2
+ dev: true
+
+ /source-map-js@1.0.2:
+ resolution: {integrity: sha512-R0XvVJ9WusLiqTCEiGCmICCMplcCkIwwR11mOSD9CR5u+IXYdiseeEuXCVAjS54zqwkLcPNnmU4OeJ6tUrWhDw==}
+ engines: {node: '>=0.10.0'}
+ dev: true
+
+ /tabbable@6.2.0:
+ resolution: {integrity: sha512-Cat63mxsVJlzYvN51JmVXIgNoUokrIaT2zLclCXjRd8boZ0004U4KCs/sToJ75C6sdlByWxpYnb5Boif1VSFew==}
+ dev: true
+
+ /to-fast-properties@2.0.0:
+ resolution: {integrity: sha512-/OaKK0xYrs3DmxRYqL/yDc+FxFUVYhDlXMhRmv3z915w2HF1tnN1omB354j8VUGO/hbRzyD6Y3sA7v7GS/ceog==}
+ engines: {node: '>=4'}
+ dev: true
+
+ /vite@5.0.12:
+ resolution: {integrity: sha512-4hsnEkG3q0N4Tzf1+t6NdN9dg/L3BM+q8SWgbSPnJvrgH2kgdyzfVJwbR1ic69/4uMJJ/3dqDZZE5/WwqW8U1w==}
+ engines: {node: ^18.0.0 || >=20.0.0}
+ hasBin: true
+ peerDependencies:
+ '@types/node': ^18.0.0 || >=20.0.0
+ less: '*'
+ lightningcss: ^1.21.0
+ sass: '*'
+ stylus: '*'
+ sugarss: '*'
+ terser: ^5.4.0
+ peerDependenciesMeta:
+ '@types/node':
+ optional: true
+ less:
+ optional: true
+ lightningcss:
+ optional: true
+ sass:
+ optional: true
+ stylus:
+ optional: true
+ sugarss:
+ optional: true
+ terser:
+ optional: true
+ dependencies:
+ esbuild: 0.19.12
+ postcss: 8.4.33
+ rollup: 4.9.6
+ optionalDependencies:
+ fsevents: 2.3.3
+ dev: true
+
+ /vitepress@1.0.0-rc.40(@algolia/client-search@4.22.1)(search-insights@2.13.0):
+ resolution: {integrity: sha512-1x9PCrcsJwqhpccyTR93uD6jpiPDeRC98CBCAQLLBb44a3VSXYBPzhCahi+2kwAYylu49p0XhseMPVM4IVcWcw==}
+ hasBin: true
+ peerDependencies:
+ markdown-it-mathjax3: ^4.3.2
+ postcss: ^8.4.33
+ peerDependenciesMeta:
+ markdown-it-mathjax3:
+ optional: true
+ postcss:
+ optional: true
+ dependencies:
+ '@docsearch/css': 3.5.2
+ '@docsearch/js': 3.5.2(@algolia/client-search@4.22.1)(search-insights@2.13.0)
+ '@types/markdown-it': 13.0.7
+ '@vitejs/plugin-vue': 5.0.3(vite@5.0.12)(vue@3.4.15)
+ '@vue/devtools-api': 6.5.1
+ '@vueuse/core': 10.7.2(vue@3.4.15)
+ '@vueuse/integrations': 10.7.2(focus-trap@7.5.4)(vue@3.4.15)
+ focus-trap: 7.5.4
+ mark.js: 8.11.1
+ minisearch: 6.3.0
+ shikiji: 0.10.2
+ shikiji-core: 0.10.2
+ shikiji-transformers: 0.10.2
+ vite: 5.0.12
+ vue: 3.4.15
+ transitivePeerDependencies:
+ - '@algolia/client-search'
+ - '@types/node'
+ - '@types/react'
+ - '@vue/composition-api'
+ - async-validator
+ - axios
+ - change-case
+ - drauu
+ - fuse.js
+ - idb-keyval
+ - jwt-decode
+ - less
+ - lightningcss
+ - nprogress
+ - qrcode
+ - react
+ - react-dom
+ - sass
+ - search-insights
+ - sortablejs
+ - stylus
+ - sugarss
+ - terser
+ - typescript
+ - universal-cookie
+ dev: true
+
+ /vue-demi@0.14.6(vue@3.4.15):
+ resolution: {integrity: sha512-8QA7wrYSHKaYgUxDA5ZC24w+eHm3sYCbp0EzcDwKqN3p6HqtTCGR/GVsPyZW92unff4UlcSh++lmqDWN3ZIq4w==}
+ engines: {node: '>=12'}
+ hasBin: true
+ requiresBuild: true
+ peerDependencies:
+ '@vue/composition-api': ^1.0.0-rc.1
+ vue: ^3.0.0-0 || ^2.6.0
+ peerDependenciesMeta:
+ '@vue/composition-api':
+ optional: true
+ dependencies:
+ vue: 3.4.15
+ dev: true
+
+ /vue@3.4.15:
+ resolution: {integrity: sha512-jC0GH4KkWLWJOEQjOpkqU1bQsBwf4R1rsFtw5GQJbjHVKWDzO6P0nWWBTmjp1xSemAioDFj1jdaK1qa3DnMQoQ==}
+ peerDependencies:
+ typescript: '*'
+ peerDependenciesMeta:
+ typescript:
+ optional: true
+ dependencies:
+ '@vue/compiler-dom': 3.4.15
+ '@vue/compiler-sfc': 3.4.15
+ '@vue/runtime-dom': 3.4.15
+ '@vue/server-renderer': 3.4.15(vue@3.4.15)
+ '@vue/shared': 3.4.15
+ dev: true
diff --git a/docs_new/public/favicon.ico b/docs_new/public/favicon.ico
new file mode 100644
index 0000000000..0d442e763e
Binary files /dev/null and b/docs_new/public/favicon.ico differ
diff --git a/docs_new/public/kanister-logo.png b/docs_new/public/kanister-logo.png
new file mode 100644
index 0000000000..18a8875f4d
Binary files /dev/null and b/docs_new/public/kanister-logo.png differ
diff --git a/docs_new/public/kanister.svg b/docs_new/public/kanister.svg
new file mode 100644
index 0000000000..3e16a05930
--- /dev/null
+++ b/docs_new/public/kanister.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs_new/public/kanister_workflow.png b/docs_new/public/kanister_workflow.png
new file mode 100644
index 0000000000..d5ec891323
Binary files /dev/null and b/docs_new/public/kanister_workflow.png differ
diff --git a/docs_new/public/tasks/argo-cron-architecture.png b/docs_new/public/tasks/argo-cron-architecture.png
new file mode 100644
index 0000000000..01b342175d
Binary files /dev/null and b/docs_new/public/tasks/argo-cron-architecture.png differ
diff --git a/docs_new/public/tasks/argo-cron-created-ui-desc.png b/docs_new/public/tasks/argo-cron-created-ui-desc.png
new file mode 100644
index 0000000000..864b0e9a10
Binary files /dev/null and b/docs_new/public/tasks/argo-cron-created-ui-desc.png differ
diff --git a/docs_new/public/tasks/argo-cron-created-ui-list.png b/docs_new/public/tasks/argo-cron-created-ui-list.png
new file mode 100644
index 0000000000..2746b7a432
Binary files /dev/null and b/docs_new/public/tasks/argo-cron-created-ui-list.png differ
diff --git a/docs_new/public/tasks/argo-default-ui.png b/docs_new/public/tasks/argo-default-ui.png
new file mode 100644
index 0000000000..f51d2723e1
Binary files /dev/null and b/docs_new/public/tasks/argo-default-ui.png differ
diff --git a/docs_new/public/tasks/logs-grafana-data-source.png b/docs_new/public/tasks/logs-grafana-data-source.png
new file mode 100644
index 0000000000..6ad809ae62
Binary files /dev/null and b/docs_new/public/tasks/logs-grafana-data-source.png differ
diff --git a/docs_new/public/tasks/logs-grafana-login.png b/docs_new/public/tasks/logs-grafana-login.png
new file mode 100644
index 0000000000..3745c0fc4e
Binary files /dev/null and b/docs_new/public/tasks/logs-grafana-login.png differ
diff --git a/docs_new/public/tasks/logs-grafana-loki-test.png b/docs_new/public/tasks/logs-grafana-loki-test.png
new file mode 100644
index 0000000000..11b486a640
Binary files /dev/null and b/docs_new/public/tasks/logs-grafana-loki-test.png differ
diff --git a/docs_new/public/tasks/logs-kanister-all-logs.png b/docs_new/public/tasks/logs-kanister-all-logs.png
new file mode 100644
index 0000000000..171b4c3810
Binary files /dev/null and b/docs_new/public/tasks/logs-kanister-all-logs.png differ
diff --git a/docs_new/public/tasks/logs-kanister-datapath-logs.png b/docs_new/public/tasks/logs-kanister-datapath-logs.png
new file mode 100644
index 0000000000..bc79ffe791
Binary files /dev/null and b/docs_new/public/tasks/logs-kanister-datapath-logs.png differ
diff --git a/docs_new/tasks/argo.md b/docs_new/tasks/argo.md
new file mode 100644
index 0000000000..39ae367987
--- /dev/null
+++ b/docs_new/tasks/argo.md
@@ -0,0 +1,269 @@
+# Automating ActionSet Creation using Argo Cron Workflows
+
+Argo Workflows enables us to schedule operations. In the Kanister
+project, Argo Cron Workflows will be used to automate the creation of
+ActionSets to execute Blueprint actions at regular intervals.
+
+To summarize, ActionSets are CRs that are used to execute actions from
+Blueprint CRs. The Kanister controller watches for the creation of
+ActionSets and executes the specified action.
+
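+As a quick illustration, a minimal backup ActionSet might look like the
+following sketch. The blueprint and object names here are placeholders
+for your own resources:
+
+``` yaml
+apiVersion: cr.kanister.io/v1alpha1
+kind: ActionSet
+metadata:
+  name: backup-example
+  namespace: kanister
+spec:
+  actions:
+  - name: backup
+    blueprint: example-blueprint
+    object:
+      kind: StatefulSet
+      name: example-app
+      namespace: example-ns
+```
+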
+In this tutorial, you will schedule the creation of a backup ActionSet
+using Argo Cron Workflows.
+
+## Prerequisites
+
+- Kubernetes `1.20` or higher.
+- A running Kanister controller in the `kanister` namespace. See
+  [Installation](/install.md).
+- `kanctl` CLI installed. See
+ [Tools](https://docs.kanister.io/tooling.html#install-the-tools).
+
+## Architecture
+
+![image](/tasks/argo-cron-architecture.png)
+
+## Steps
+
+### Step 1 - Setting up Argo
+
+Download the Argo CLI from their
+[Releases](https://github.com/argoproj/argo-workflows/releases/latest)
+page.
+
+Create a separate namespace for the Workflows.
+
+``` bash
+kubectl create ns argo
+```
+
+In this tutorial, the Argo Workflows CRDs and other resources will be
+deployed on the Kubernetes cluster using the minimal manifest file.
+
+``` bash
+kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-minimal.yaml -n argo
+```
+
+You can install Argo in either cluster-scoped or namespace-scoped
+configurations. To deploy Argo with a custom configuration, download the
+minimal manifest file and apply the necessary changes. For more
+information, see
+[ManagedNamespaces](https://argoproj.github.io/argo-workflows/managed-namespace/).
+
+Use `port-forward` to forward a local port to the argo-server pod's
+port to view the Argo UI:
+
+``` bash
+kubectl -n argo port-forward deployment/argo-server 2746:2746
+```
+
+Open a web browser and navigate to `localhost:2746`.
+
+![image](/tasks/argo-default-ui.png)
+
+### Step 2 - Setting up a sample application to backup
+
+Here, you will reference the
+[MySQL](https://github.com/kanisterio/kanister/tree/master/examples/mysql)
+example from Kanister.
+
+1. Install the chart and set up MySQL in the `mysql-test` namespace.
+2. Integrate it with Kanister by creating a Profile CR in the
+ `mysql-test` namespace and a Blueprint in the `kanister` namespace.
+3. Copy and save the names of the MySQL StatefulSet, secrets, Kanister
+ Blueprint, and the Profile CR for the next step.
+
+### Step 3 - Creating a Cron Workflow
+
+Now, create a Cron Workflow to automate the creation of an ActionSet to
+backup the MySQL application. The workflow will use `kanctl` to achieve
+this. Modify the `kanctl` command in the YAML below to specify the names
+of the Blueprint, Profile, MySQL StatefulSet, and secrets created in the
+previous step.
+
+``` bash
+kanctl create actionset --action backup --namespace kanister --blueprint <blueprint-name> --statefulset <namespace>/<statefulset-name> --profile <namespace>/<profile-name> --secrets <ref>=<namespace>/<secret-name>
+```
+
+Then execute:
+
+``` yaml
+cat <<EOF > mysql-cron-wf.yaml
+apiVersion: argoproj.io/v1alpha1
+kind: CronWorkflow
+metadata:
+ name: mysql-cron-wf
+spec:
+ schedule: "*/5 * * * *"
+ concurrencyPolicy: "Replace"
+ workflowSpec:
+ entrypoint: automate-actionset
+ templates:
+ - name: automate-actionset
+ container:
+ image: ghcr.io/kanisterio/kanister-tools:0.81.0
+ command:
+ - /bin/bash
+ - -c
+ - |
+ microdnf install tar
+ curl -LO https://github.com/kanisterio/kanister/releases/download/0.81.0/kanister_0.81.0_linux_amd64.tar.gz
+ tar -C /usr/local/bin -xvf kanister_0.81.0_linux_amd64.tar.gz
+ kanctl create actionset --action backup --namespace kanister --blueprint mysql-blueprint --statefulset mysql-test/mysql-release --profile mysql-test/s3-profile-gd4kx --secrets mysql=mysql-test/mysql-release
+EOF
+```
+
+::: tip NOTE
+
+Here, the cron job is scheduled to run every 5 minutes. This means that
+an ActionSet is created every 5 minutes to perform a backup operation.
+You may schedule it to run as per your requirements.
+:::
+
+### Step 4 - Granting RBAC permissions
+
+Next, you will grant the required permissions to the Service Account in
+the `argo` namespace to access resources in the `kanister` and
+`mysql-test` namespaces. This is required to create CRs based on the
+Secrets and StatefulSet that you provided in the previous step. For more
+information about RBAC authorization, see
+[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).
+
+1. Create a RoleBinding named `cron-wf-manager` in the `kanister` and
+ `mysql-test` namespaces.
+2. Grant the permissions in ClusterRole `cluster-admin` to the default
+ ServiceAccount named `default` in the `argo` namespace.
+
+Execute the following commands:
+
+``` bash
+kubectl create rolebinding cron-wf-manager --clusterrole=cluster-admin --serviceaccount=argo:default -n kanister
+```
+
+``` bash
+kubectl create rolebinding cron-wf-manager --clusterrole=cluster-admin --serviceaccount=argo:default -n mysql-test
+```
+
+::: tip NOTE
+
+It is not recommended to grant the `cluster-admin` privileges to the
+`default` ServiceAccount in production. You must create a separate Role
+or a ClusterRole to grant specific access for allowing the creation of
+Custom Resources (ActionSets) in the `kanister` namespace.
+:::
+
+### Step 5 - Launching the Cron Workflow
+
+Launch the workflow in the `argo` namespace by running the following
+command:
+
+``` bash
+argo cron create mysql-cron-wf.yaml -n argo
+```
+
+Check if the workflow was created by running:
+
+``` bash
+argo cron list -n argo
+```
+
+When the workflow runs, check if the ActionSet was created in the
+`kanister` namespace:
+
+``` bash
+kubectl get actionsets.cr.kanister.io -n kanister
+```
+
+The output should be similar to the sample output below.
+
+``` bash
+$ argo cron create mysql-cron-wf.yaml -n argo
+> Name: mysql-cron-wf
+ Namespace: argo
+ Created: Fri Jul 22 10:23:09 -0400 (now)
+ Schedule: */5 * * * *
+ Suspended: false
+ ConcurrencyPolicy: Replace
+ NextScheduledTime: Fri Jul 22 10:25:00 -0400 (1 minute from now) (assumes workflow-controller is in UTC)
+
+$ argo cron list -n argo
+> NAME AGE LAST RUN NEXT RUN SCHEDULE TIMEZONE SUSPENDED
+ mysql-cron-wf 12s N/A 1m */5 * * * * false
+
+$ argo cron list -n argo
+> NAME AGE LAST RUN NEXT RUN SCHEDULE TIMEZONE SUSPENDED
+ mysql-cron-wf 4m 2m 2m */5 * * * * false
+
+$ kubectl get actionsets.cr.kanister.io -n kanister
+> NAME AGE
+ backup-478lk 2m28s
+```
+
+In the above example, the workflow was created and scheduled to run in 1
+minute. For you, this scheduled time can be anywhere between 1 and 5
+minutes. Once the workflow runs successfully, the `LAST RUN` field is
+updated with the timestamp of the last run. Along with this, a backup
+ActionSet should have been created; its creation time is indicated by
+the `AGE` field, as seen above.
+
+You should see the workflow on the Argo UI under the Cron Workflows tab.
+
+![image](/tasks/argo-cron-created-ui-list.png)
+
+Click the workflow name to see its status.
+
+![image](/tasks/argo-cron-created-ui-desc.png)
+
+## Troubleshooting
+
+If the Cron Workflow does not run, check if the pod to run the workflow
+was created in the `argo` namespace. Examine the logs of this pod.
+
+``` bash
+kubectl logs <workflow-pod-name> -n argo
+```
+
+If this pod was not created, examine the logs of the Argo Workflow
+Controller in the `argo` namespace.
+
+``` bash
+kubectl logs deploy/workflow-controller -n argo
+```
+
+If the logs mention that you have not granted the right permissions to
+the ServiceAccounts, circle back to Step 4 and verify your RBAC
+configuration. Your ServiceAccount should have access to the requested
+resources.
+
+``` bash
+kubectl get serviceaccounts -n argo
+```
+
+## Cleanup
+
+Delete the cron workflow by running the following commands. Verify the
+name of your workflow before deleting it.
+
+Verify workflow name:
+
+``` bash
+argo cron list -n argo
+```
+
+Delete workflow:
+
+``` bash
+argo cron delete mysql-cron-wf -n argo
+```
+
+Delete the Argo CRDs and other resources:
+
+``` bash
+kubectl delete -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-minimal.yaml -n argo
+```
+
+Delete the Argo namespace:
+
+``` bash
+kubectl delete namespace argo
+```
diff --git a/docs_new/tasks/logs.md b/docs_new/tasks/logs.md
new file mode 100644
index 0000000000..3b30a67a1f
--- /dev/null
+++ b/docs_new/tasks/logs.md
@@ -0,0 +1,245 @@
+# Segregate Controller And Datapath Logs
+
+Kanister uses structured logging to ensure that its logs can be easily
+categorized, indexed and searched by downstream log aggregation
+software.
+
+By default, Kanister logs are output to the controller's `stderr` in
+JSON format. Generally, these logs can be categorized into *system logs*
+and *datapath logs*.
+
+System logs are logs emitted by Kanister to track important controller
+events, such as interactions with the Kubernetes APIs and CRUD
+operations on blueprints and actionsets.
+
+Datapath logs, on the other hand, are logs emitted by task pods created
+by Kanister. These logs are streamed to the Kanister controller before
+the task pods are terminated to ensure they are not lost inadvertently.
+Datapath log lines usually include the `LogKind` field, with its value
+set to `datapath`.
+
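+Once the logs are shipped to Loki (as set up below), this field makes
+the split straightforward. For example, assuming the controller pods
+carry an `app=kanister-operator` label (adjust to your deployment), a
+LogQL query along these lines selects only the datapath lines:
+
+``` logql
+{app="kanister-operator"} | json | LogKind="datapath"
+```
+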
+The rest of this documentation provides instructions on how to segregate
+Kanister's system logs from datapath logs using
+[Loki](https://grafana.com/oss/loki/) and
+[Grafana](https://grafana.com/oss/grafana).
+
+To run the provided commands, access to a Kubernetes cluster using the
+`kubectl` and `helm` command-line tools is required.
+
+Follow the instructions in the [installation](../install.html) page to
+deploy Kanister on the cluster.
+
+## Deployments Setup
+
+The commands and screenshots in this documentation are tested with the
+following software versions:
+
+- Loki 2.5.0
+- Grafana 8.5.3
+- Promtail 2.5.0
+
+Let's begin by installing Loki. Loki is a datastore optimized for
+holding log data. It indexes log data via streams made up of logs, where
+each stream is associated with a unique set of labels.
+
+``` bash
+helm repo add grafana https://grafana.github.io/helm-charts
+
+helm repo update
+
+helm -n loki install --create-namespace loki grafana/loki \
+ --set image.tag=2.5.0
+```
+
+Confirm that the Loki StatefulSet is successfully rolled out:
+
+``` bash
+kubectl -n loki rollout status sts/loki
+```
+
+::: tip NOTE
+
+The Loki configuration used in this installation is meant for
+demonstration purposes only. The Helm chart deploys a non-HA single
+instance of Loki, managed by a StatefulSet workload. See the [Loki
+installation
+documentation](https://grafana.com/docs/loki/latest/installation/) for
+other installation methods that may be more suitable for your
+requirements.
+:::
+
+Use Helm to install Grafana with a pre-configured Loki data source:
+
+``` bash
+svc_url=$(kubectl -n loki get svc loki -ojsonpath='{.metadata.name}.{.metadata.namespace}:{.spec.ports[?(@.name=="http-metrics")].port}')
+
+cat <<EOF | helm -n loki upgrade --install --create-namespace grafana grafana/grafana \
+  --set image.tag=8.5.3 -f -
+datasources:
+  datasources.yaml:
+    apiVersion: 1
+    datasources:
+    - name: Loki
+      type: loki
+      url: http://${svc_url}
+      access: proxy
+      isDefault: true
+EOF
+```
+
+Access the Grafana UI in a web browser (for example, by port-forwarding
+the `grafana` service) and navigate to Configuration >
+`Data Sources` using the left-hand panel.
+
+Confirm that the `Loki` data source has already been added as part of
+the Grafana installation:
+
+![image](/tasks/logs-grafana-data-source.png)
+
+Access the `Loki` data source configuration page.
+
+Use the `Test` button near the bottom of the page to test the
+connectivity between Grafana and Loki:
+
+![image](/tasks/logs-grafana-loki-test.png)
+
+The final step in the setup involves installing Promtail. Promtail is an
+agent that can be used to discover log targets and stream their logs to
+Loki:
+
+``` bash
+svc_url=$(kubectl -n loki get svc loki -ojsonpath='{.metadata.name}.{.metadata.namespace}:{.spec.ports[?(@.name=="http-metrics")].port}')
+
+helm -n loki upgrade --install --create-namespace promtail grafana/promtail \
+ --set image.tag=2.5.0 \
+ --set "config.clients[0].url=http://${svc_url}/loki/api/v1/push"
+```
+
+Confirm that the Promtail DaemonSet is successfully rolled out:
+
+``` bash
+kubectl -n loki rollout status ds/promtail
+```
+
+## Logs Segregation
+
+To simulate a steady stream of log lines, the next step defines a
+blueprint that uses [flog](https://github.com/mingrammer/flog) to
+generate Apache common and error logs:
+
+``` bash
+cat <<EOF | kubectl apply -f -
+...
+EOF
+```
+
+``` bash
+$ kanctl create --help
+...
+Available Commands:
+  actionset         Create a new ActionSet
+ profile Create a new profile
+ repository-server Create a new kopia repository server
+
+Flags:
+ --dry-run if set, resource YAML will be printed but not created
+ -h, --help help for create
+ --skip-validation if set, resource is not validated before creation
+
+Global Flags:
+ -n, --namespace string Override namespace obtained from kubectl context
+
+Use "kanctl create [command] --help" for more information about a command.
+```
+
+As seen above, both ActionSets and profiles can be created using
+`kanctl create`.
+
+``` bash
+$ kanctl create actionset --help
+Create a new ActionSet or override a ActionSet
+
+Usage:
+ kanctl create actionset [flags]
+
+Flags:
+ -a, --action string action for the action set (required if creating a new action set)
+ -b, --blueprint string blueprint for the action set (required if creating a new action set)
+ -c, --config-maps strings config maps for the action set, comma separated ref=namespace/name pairs (eg: --config-maps ref1=namespace1/name1,ref2=namespace2/name2)
+ -d, --deployment strings deployment for the action set, comma separated namespace/name pairs (eg: --deployment namespace1/name1,namespace2/name2)
+ -f, --from string specify name of the action set
+ -h, --help help for actionset
+ -k, --kind string resource kind to apply selector on. Used along with the selector specified using --selector/-l (default "all")
+ -T, --namespacetargets strings namespaces for the action set, comma separated list of namespaces (eg: --namespacetargets namespace1,namespace2)
+ -O, --objects strings objects for the action set, comma separated list of object references (eg: --objects group/version/resource/namespace1/name1,group/version/resource/namespace2/name2)
+ -o, --options strings specify options for the action set, comma separated key=value pairs (eg: --options key1=value1,key2=value2)
+ -p, --profile string profile for the action set
+ -v, --pvc strings pvc for the action set, comma separated namespace/name pairs (eg: --pvc namespace1/name1,namespace2/name2)
+ -s, --secrets strings secrets for the action set, comma separated ref=namespace/name pairs (eg: --secrets ref1=namespace1/name1,ref2=namespace2/name2)
+ -l, --selector string k8s selector for objects
+ --selector-namespace string namespace to apply selector on. Used along with the selector specified using --selector/-l
+ -t, --statefulset strings statefulset for the action set, comma separated namespace/name pairs (eg: --statefulset namespace1/name1,namespace2/name2)
+
+Global Flags:
+ --dry-run if set, resource YAML will be printed but not created
+ -n, --namespace string Override namespace obtained from kubectl context
+ --skip-validation if set, resource is not validated before creation
+```
+
+`kanctl create actionset` helps create ActionSets in a couple of
+different ways. A common backup/restore scenario is demonstrated below.
+
+Create a new Backup ActionSet
+
+``` bash
+# Action name and blueprint are required
+$ kanctl create actionset --action backup --namespace kanister --blueprint time-log-bp \
+ --deployment kanister/time-logger \
+ --profile s3-profile
+actionset backup-9gtmp created
+
+# View the progress of the ActionSet
+$ kubectl --namespace kanister describe actionset backup-9gtmp
+```
+
+Restore from the backup we just created
+
+``` bash
+# If necessary you can override the secrets, profile, config-maps, options etc obtained from the parent ActionSet
+$ kanctl create actionset --action restore --from backup-9gtmp --namespace kanister
+actionset restore-backup-9gtmp-4p6mc created
+
+# View the progress of the ActionSet
+$ kubectl --namespace kanister describe actionset restore-backup-9gtmp-4p6mc
+```
+
+Delete the Backup we created
+
+``` bash
+$ kanctl create actionset --action delete --from backup-9gtmp --namespace kanister
+actionset delete-backup-9gtmp-fc857 created
+
+# View the progress of the ActionSet
+$ kubectl --namespace kanister describe actionset delete-backup-9gtmp-fc857
+```
+
+To make the selection of objects (resources on which actions are
+performed) easier, you can filter on K8s labels using `--selector`.
+
+``` bash
+# backup deployment time-logger in namespace kanister using selectors
+# if --kind deployment is not specified, all deployments, statefulsets and pvc matching the
+# selector will be chosen for the action. You can also narrow down the search by setting the
+# --selector-namespace flag
+$ kanctl create actionset --action backup --namespace kanister --blueprint time-log-bp \
+ --selector app=time-logger \
+ --kind deployment \
+ --selector-namespace kanister --profile s3-profile
+actionset backup-8f827 created
+```
+
+The `--dry-run` flag will print the YAML of the ActionSet without
+actually creating it.
+
+``` bash
+# ActionSet creation with --dry-run
+$ kanctl create actionset --action backup --namespace kanister --blueprint time-log-bp \
+ --selector app=time-logger \
+ --kind deployment \
+ --selector-namespace kanister \
+ --profile s3-profile \
+ --dry-run
+apiVersion: cr.kanister.io/v1alpha1
+kind: ActionSet
+metadata:
+ creationTimestamp: null
+ generateName: backup-
+spec:
+ actions:
+ - blueprint: time-log-bp
+ configMaps: {}
+ name: backup
+ object:
+ apiVersion: ""
+ kind: deployment
+ name: time-logger
+ namespace: kanister
+ options: {}
+ profile:
+ apiVersion: ""
+ kind: ""
+ name: s3-profile
+ namespace: kanister
+ secrets: {}
+```
+
+Profile creation using `kanctl create`
+
+``` bash
+$ kanctl create profile --help
+Create a new profile
+
+Usage:
+ kanctl create profile [command]
+
+Available Commands:
+ s3compliant Create new S3 compliant profile
+
+Flags:
+ -h, --help help for profile
+ --skip-SSL-verification if set, SSL verification is disabled for the profile
+
+Global Flags:
+ --dry-run if set, resource YAML will be printed but not created
+ -n, --namespace string Override namespace obtained from kubectl context
+ --skip-validation if set, resource is not validated before creation
+
+Use "kanctl create profile [command] --help" for more information about a command.
+```
+
+A new S3Compliant profile can be created using the s3compliant
+subcommand
+
+``` bash
+$ kanctl create profile s3compliant --help
+Create new S3 compliant profile
+
+Usage:
+ kanctl create profile s3compliant [flags]
+
+Flags:
+ -a, --access-key string access key of the s3 compliant bucket
+ -b, --bucket string s3 bucket name
+ -e, --endpoint string endpoint URL of the s3 bucket
+ -h, --help help for s3compliant
+ -p, --prefix string prefix URL of the s3 bucket
+ -r, --region string region of the s3 bucket
+ -s, --secret-key string secret key of the s3 compliant bucket
+
+Global Flags:
+ --dry-run if set, resource YAML will be printed but not created
+ -n, --namespace string Override namespace obtained from kubectl context
+ --skip-SSL-verification if set, SSL verification is disabled for the profile
+ --skip-validation if set, resource is not validated before creation
+```
+
+``` bash
+$ kanctl create profile s3compliant --bucket <bucket-name> --access-key ${AWS_ACCESS_KEY_ID} \
+ --secret-key ${AWS_SECRET_ACCESS_KEY} \
+ --region us-west-1 \
+ --namespace kanister
+secret 's3-secret-chst2' created
+profile 's3-profile-5mmkj' created
+```
+
+Kopia Repository Server resource creation using `kanctl create`
+
+``` bash
+$ kanctl create repository-server --help
+Create a new RepositoryServer
+
+Usage:
+ kanctl create repository-server [flags]
+
+Flags:
+ -a, --admin-user-access-secret string name of the secret having admin credentials to connect to connect to kopia repository server
+ -r, --kopia-repository-password-secret string name of the secret containing password for the kopia repository
+ -k, --kopia-repository-user string name of the user for accessing the kopia repository
+ -c, --location-creds-secret string name of the secret containing kopia repository storage credentials
+ -l, --location-secret string name of the secret containing kopia repository storage location details
+ -p, --prefix string prefix to be set in kopia repository
+ -t, --tls-secret string name of the tls secret needed for secure kopia client and kopia repository server communication
+ -u, --user string name of the user to be created for the kopia repository server
+ -s, --user-access-secret string name of the secret having access credentials of the users that can connect to kopia repository server
+ -w, --wait wait for the kopia repository server to be in ready state after creation
+ -h, --help help for repository-server
+
+Global Flags:
+ --dry-run if set, resource YAML will be printed but not created
+ -n, --namespace string Override namespace obtained from kubectl context
+ --skip-validation if set, resource is not validated before creation
+ --verbose Display verbose output
+```
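+
+As with ActionSets and profiles, a repository server is created by combining
+the flags shown above. The invocation below is a hypothetical sketch: every
+secret name is an illustrative placeholder and must already exist in the
+target namespace.
+
+``` bash
+# Hypothetical example: all secret names are placeholders.
+$ kanctl create repository-server \
+    --tls-secret repository-server-tls-cert \
+    --user kanisteruser \
+    --user-access-secret repository-server-user-access \
+    --admin-user-access-secret repository-admin-user-access \
+    --kopia-repository-user repositoryuser \
+    --kopia-repository-password-secret repository-pass \
+    --location-secret s3-location \
+    --location-creds-secret s3-location-creds \
+    --prefix repo-data \
+    --wait \
+    --namespace kanister
+```
+
+The `--wait` flag blocks until the repository server reports a ready state,
+and `--dry-run` can be added to inspect the generated YAML first.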
+
+### kanctl validate
+
+Profile and Blueprint resources can be validated using the
+`kanctl validate` command.
+
+``` bash
+$ kanctl validate --help
+Validate custom Kanister resources
+
+Usage:
+ kanctl validate [flags]
+
+Flags:
+ -f, --filename string yaml or json file of the custom resource to validate
+ -v, --functionVersion string kanister function version, e.g., v0.0.0 (defaults to v0.0.0)
+ -h, --help help for validate
+ --name string specify the K8s name of the custom resource to validate
+ --resource-namespace string namespace of the custom resource. Used when validating resource specified using
+ --name. (default "default")
+ --schema-validation-only if set, only schema of resource will be validated
+
+Global Flags:
+ -n, --namespace string Override namespace obtained from kubectl context
+```
+
+You can either validate an existing profile in K8s or a new profile yet
+to be created.
+
+``` bash
+# validation of a yet to be created profile
+$ cat << EOF | kanctl validate profile -f -
+apiVersion: cr.kanister.io/v1alpha1
+kind: Profile
+metadata:
+ name: s3-profile
+ namespace: kanister
+location:
+ type: s3Compliant
+ s3Compliant:
+ bucket: XXXX
+ endpoint: XXXX
+ prefix: XXXX
+ region: XXXX
+credential:
+ type: keyPair
+ keyPair:
+ idField: aws_access_key_id
+ secretField: aws_secret_access_key
+ secret:
+ apiVersion: v1
+ kind: Secret
+ name: aws-creds
+ namespace: kanister
+skipSSLVerify: false
+EOF
+Passed the 'Validate Profile schema' check.. ✅
+Passed the 'Validate bucket region specified in profile' check.. ✅
+Passed the 'Validate read access to bucket specified in profile' check.. ✅
+Passed the 'Validate write access to bucket specified in profile' check.. ✅
+All checks passed.. ✅
+```
+
+Blueprint resources can be validated by passing a locally present blueprint
+manifest via the `-f` flag and, optionally, the `-v` flag to select the
+Kanister function version.
+
+``` bash
+# Download the mysql blueprint locally
+$ curl -O
+
+# Run blueprint validator
+$ kanctl validate blueprint -f mysql-blueprint.yaml
+Passed the 'validation of phase dumpToObjectStore in action backup' check.. ✅
+Passed the 'validation of phase deleteFromBlobStore in action delete' check.. ✅
+Passed the 'validation of phase restoreFromBlobStore in action restore' check..
+✅
+```
+
+`kanctl validate blueprint` currently verifies the Kanister function
+names and presence of the mandatory arguments to those functions.
+
+## Kando
+
+A common use case for Kanister is to transfer data between Kubernetes
+and an object store like AWS S3. We've found it can be cumbersome to
+pass Profile configuration to tools like the AWS command line from
+inside Blueprints.
+
+`kando` is a tool to simplify object store interactions from within
+blueprints. It also provides a way to create desired output from a
+blueprint phase.
+
+It has the following commands:
+
+- `location push`
+- `location pull`
+- `location delete`
+- `output`
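+
+For example, within a Blueprint phase the location commands are typically
+combined with a database client in a pipe, using `-` to read from stdin or
+write to stdout. The following is an illustrative sketch only: the database
+commands, `${PROFILE}` variable, and path are placeholders, not values the
+chunk above defines.
+
+``` bash
+# Illustrative sketch: ${PROFILE} stands for the rendered Profile JSON
+# passed into the phase; backup.sql is an arbitrary path suffix.
+pg_dump mydb | kando location push --profile "${PROFILE}" --path backup.sql -
+kando location pull --profile "${PROFILE}" --path backup.sql - | psql mydb
+
+# Surface a key/value pair as phase output for use by later phases:
+kando output backupPath backup.sql
+```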
+
+The usage for these commands can be displayed using the `--help` flag:
+
+``` bash
+$ kando location pull --help
+Pull from s3-compliant object storage to a file or stdout
+
+Usage:
+ kando location pull [flags]
+
+Flags:
+ -h, --help help for pull
+
+Global Flags:
+ -s, --path string Specify a path suffix (optional)
+ -p, --profile string Pass a Profile as a JSON string (required)
+```
+
+``` bash
+$ kando location push --help
+Push a source file or stdin stream to s3-compliant object storage
+
+Usage:
+ kando location push