diff --git a/docs/docs-content/integrations/portworx_operator.md b/_partials/packs/_portworkx-operator.mdx
similarity index 79%
rename from docs/docs-content/integrations/portworx_operator.md
rename to _partials/packs/_portworkx-operator.mdx
index b01cceede2..8abbdec79d 100644
--- a/docs/docs-content/integrations/portworx_operator.md
+++ b/_partials/packs/_portworkx-operator.mdx
@@ -1,86 +1,13 @@
---
-sidebar_label: "Portworx /w Operator"
-title: "Portworx Operator"
-description: "Portworx storage CSI for all use cases"
-hide_table_of_contents: true
-type: "integration"
-category: ["storage", "amd64"]
-sidebar_class_name: "hide-from-sidebar"
-logoUrl: "https://registry.spectrocloud.com/v1/csi-portworx/blobs/sha256:e27bc9aaf22835194ca38062061c29b5921734eed922e57d693d15818ade7486?type=image.webp"
-tags: ["packs", "portworx", "storage"]
+partial_category: packs
+partial_name: portworx-operator
---
-[Portworx](https://portworx.com/) is a software-defined persistent storage solution designed and purpose-built for
-applications deployed as containers via container orchestrators such as Kubernetes. You can include Portworx in your
-Kubernetes cluster by using the Portworx Operator pack.
-
## Versions Supported
-
+
-## Prerequisites
-
-Portworx Operator has the following prerequisites for installation. You can learn more about all the required Portworx
-requirements in the [Portworx documentation](https://docs.portworx.com/install-portworx/prerequisites).
-
-- The Kubernetes cluster must have at least three nodes of the type bare metal or virtual machine.
-
-- Storage drives must be unmounted block storage. You can use either, raw disks, drive partitions, LVM, or cloud block
- storage.
-
-- The backing drive must be at least 8 GB in size.
-
-- The following disk folder require enough space to store Portworx metadata:
-
- - **/var** - 2 GB
-
- - **/opt** - 3 GB
-
-- The operating system root partition must be at least 64 GB is the minimum.
-
-- The minimum hardware requirements for each node are:
-
- - 4 CPU cores
-
- - 8 GB RAM
-
- - 50 GB disk space
-
- - 1 Gbps network connectivity
-
-- A Linux kernel version of 3.10 or higher is required.
-
-* Docker version 1.13.1 or higher is required.
-
-- Ensure you use a
- [supported Kubernetes version](https://docs.portworx.com/portworx-enterprise/install-portworx/prerequisites#supported-kubernetes-versions).
-
-- Identify and set up the type of storage you want to use.
-
-:::warning
-
-Starting with Portworx version 3.x.x and greater. Lighthouse is no longer available in the pack itself. Instead you can
-install [Portworx Central](https://docs.portworx.com/portworx-central-on-prem/install/px-central.html), which provides
-monitoring capabilities.
-
-:::
-
-## Parameters
-
-The following parameters are highlighted for this version of the pack and provide a preset option when configured
-through the UI. These parameters are not exhaustive and you can configure additional parameters as needed.
-
-| Parameter | Description | Default |
-| :------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :----------- |
-| `portworx-generic.activateLicense` | Set to `true` to activate the Portworx license. | `true` |
-| `portworx-generic.license.type` | Allowed values are: `essentials`, `saas`, `enterprise`. If you want to deploy the PX Enterprise Trial version, or need manual offline activation, select the **PX Enterprise** type and set `activateLicense` to `false`. | `essentials` |
-| `portworx-generic.Storagecluster.spec` | Define the storage type and behavior for Portworx.Refer to the Storage Specification section below to learn more. | `{}` |
-| `portworx-generic.externalKvdb` | Define the external Key Value Database (KVDB) configuration for Portworx. Refer to the Integration With External etcd section below to learn more. | `{}` |
-| `portworx-generic.storageCluster.env` | Specify environment variables, such as HTTP Proxy settings, for Portworx. | `{}` |
-
-## Usage
-
The default installation of Portworx /w Operator will deploy the following components in the Kubernetes cluster:
- Portworx Operator
@@ -692,68 +619,6 @@ Use the following steps to integrate Portworx to an external etcd server by foll
-## Prerequisites
-
-Portworx Operator has the following prerequisites for installation. You can learn more about all the required Portworx
-requirements in the [Portworx documentation](https://docs.portworx.com/install-portworx/prerequisites).
-
-- The Kubernetes cluster must have at least three nodes of the type bare metal or virtual machine.
-
-- Storage drives must be unmounted block storage. You can use either, raw disks, drive partitions, LVM, or cloud block
- storage.
-
-- The backing drive must be at least 8 GB in size.
-
-- The following disk folder require enough space to store Portworx metadata:
-
- - **/var** - 2 GB
-
- - **/opt** - 3 GB
-
-- The operating system root partition must be at least 64 GB is the minimum.
-
-- The minimum hardware requirements for each node are:
-
- - 4 CPU cores
-
- - 8 GB RAM
-
- - 50 GB disk space
-
- - 1 Gbps network connectivity
-
-- A Linux kernel version of 3.10 or higher is required.
-
-* Docker version 1.13.1 or higher is required.
-
-- Ensure you use a
- [supported Kubernetes version](https://docs.portworx.com/portworx-enterprise/install-portworx/prerequisites#supported-kubernetes-versions).
-
-- Identify and set up the type of storage you want to use.
-
-:::warning
-
-Starting with Portworx version 3.x.x and greater. Lighthouse is no longer available in the pack itself. Instead you can
-install [Portworx Central](https://docs.portworx.com/portworx-central-on-prem/install/px-central.html), which provides
-monitoring capabilities.
-
-:::
-
-## Parameters
-
-The following parameters are highlighted for this version of the pack and provide a preset option when configured
-through the UI. These parameters are not exhaustive and you can configure additional parameters as needed.
-
-| Parameter | Description | Default |
-| :------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------- |
-| `portworx-generic.activateLicense` | Set to `true` to activate the Portworx license. | `true` |
-| `portworx-generic.license.type` | Allowed values are: `essentials`, `saas`, `enterprise`. If you want to deploy the PX Enterprise Trial version, or need manual offline activation, select **PX Enterprise** type and set `activateLicense` to `false`. | `essentials` |
-| `portworx-generic.Storagecluster.spec` | Define the storage type and behavior for Portworx.Refer to the Storage Specification section below to learn more. | `{}` |
-| `portworx-generic.externalKvdb` | Define the external Key Value Database (KVDB) configuration for Portworx. Refer to the Integration With External etcd section below to learn more. | `{}` |
-| `portworx-generic.storageCluster.env` | Specify environment variables, such as HTTP Proxy settings, for Portworx. | `{}` |
-
-## Usage
-
The default installation of Portworx /w Operator will deploy the following components in the Kubernetes cluster:
- Portworx Operator
@@ -1281,68 +1146,6 @@ Use the following steps to integrate Portworx to an external etcd server by foll
-## Prerequisites
-
-Portworx Operator has the following prerequisites for installation. You can learn more about all the required Portworx
-requirements in the [Portworx documentation](https://docs.portworx.com/install-portworx/prerequisites).
-
-- The Kubernetes cluster must have at least three nodes of the type bare metal or virtual machine.
-
-- Storage drives must be unmounted block storage. You can use either, raw disks, drive partitions, LVM, or cloud block
- storage.
-
-- The backing drive must be at least 8 GB in size.
-
-- The following disk folder require enough space to store Portworx metadata:
-
- - **/var** - 2 GB
-
- - **/opt** - 3 GB
-
-- The operating system root partition must be at least 64 GB is the minimum.
-
-- The minimum hardware requirements for each node are:
-
- - 4 CPU cores
-
- - 8 GB RAM
-
- - 50 GB disk space
-
- - 1 Gbps network connectivity
-
-- A Linux kernel version of 3.10 or higher is required.
-
-* Docker version 1.13.1 or higher is required.
-
-- Ensure you use a
- [supported Kubernetes version](https://docs.portworx.com/portworx-enterprise/install-portworx/prerequisites#supported-kubernetes-versions).
-
-- Identify and set up the type of storage you want to use.
-
-:::warning
-
-Starting with Portworx version 3.x.x and greater. Lighthouse is no longer available in the pack itself. Instead you can
-install [Portworx Central](https://docs.portworx.com/portworx-central-on-prem/install/px-central.html), which provides
-monitoring capabilities.
-
-:::
-
-## Parameters
-
-The following parameters are highlighted for this version of the pack and provide a preset option when configured
-through the UI. These parameters are not exhaustive and you can configure additional parameters as needed.
-
-| Parameter | Description | Default |
-| :------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------- |
-| `portworx-generic.activateLicense` | Set to `true` to activate the Portworx license. | `true` |
-| `portworx-generic.license.type` | Allowed values are: `essentials`, `saas`, `enterprise`. If you want to deploy the PX Enterprise Trial version, or need manual offline activation, select **PX Enterprise** and set `activateLicense` to `false`. | `essentials` |
-| `portworx-generic.Storagecluster.spec` | Define the storage type and behavior for Portworx.Refer to the Storage Specification section below to learn more. | `{}` |
-| `portworx-generic.externalKvdb` | Define the external Key Value Database (KVDB) configuration for Portworx. Refer to the Integration With External etcd section below to learn more. | `{}` |
-| `portworx-generic.storageCluster.env` | Specify environment variables, such as HTTP Proxy settings, for Portworx. | `{}` |
-
-## Usage
-
The default installation of Portworx /w Operator will deploy the following components in the Kubernetes cluster:
- Portworx Operator
@@ -1867,47 +1670,4 @@ certificates will not be imported correctly and will result in Portworx deployme
-
-
-:::warning
-
-All versions less than 2.12.x are considered deprecated. Upgrade to a newer version to take advantage of new features.
-
-:::
-
-
-
-
-
-
-
-## Terraform
-
-Use the following Terraform code to interact with the Portworx Operator pack in your Terraform scripts.
-
-```hcl
-data "spectrocloud_registry" "public_registry" {
- name = "Public Repo"
-}
-
-data "spectrocloud_pack_simple" "portworx-operator" {
- name = "csi-portworx-generic"
- version = "3.0.0"
- type = "operator-instance"
- registry_uid = data.spectrocloud_registry.public_registry.id
-}
-```
-
-## References
-
-- [Portworx Install with Kubernetes](https://docs.portworx.com/portworx-install-with-kubernetes/)
-
-- [Installation Prerequisites](https://docs.portworx.com/install-portworx/prerequisites/)
-
-- [Portworx Supported Kubernetes versions](https://docs.portworx.com/portworx-enterprise/install-portworx/prerequisites#supported-kubernetes-versions)
-
-- [Stork](https://docs.portworx.com/portworx-enterprise/operations/operate-kubernetes/storage-operations/stork.html)
-
-- [Portworx Central](https://docs.portworx.com/portworx-central-on-prem/install/px-central.html)
-
-- [Flash Array](https://docs.portworx.com/portworx-enterprise/install-portworx/kubernetes/flasharray)
+
\ No newline at end of file
diff --git a/docs/docs-content/integrations/rook-ceph.md b/_partials/packs/_rook-ceph.mdx
similarity index 65%
rename from docs/docs-content/integrations/rook-ceph.md
rename to _partials/packs/_rook-ceph.mdx
index e36eca9b65..9231569f56 100644
--- a/docs/docs-content/integrations/rook-ceph.md
+++ b/_partials/packs/_rook-ceph.mdx
@@ -1,58 +1,101 @@
---
-sidebar_label: "rook-ceph"
-title: "Rook Ceph"
-description: "Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for Ceph
-storage to natively integrate with cloud-native environments. Ceph is a distributed storage system that provides file,
-block, and object storage and is deployed in large-scale production clusters. This page talks about how to use the Rook Ceph storage pack in Spectro Cloud"
-hide_table_of_contents: true
-type: "integration"
-category: ["storage", "amd64"]
-sidebar_class_name: "hide-from-sidebar"
-logoUrl:
- " https://registry.dev.spectrocloud.com/v1/csi-rook-ceph/blobs/sha256:2817270f4eecbc2eea0740c55c7611d1a538a3e17da610a3487bb11b067076d1?type=image.webp"
-tags: ["packs", "rook-ceph", "storage"]
+partial_category: packs
+partial_name: rook-ceph
---
-Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for Ceph
-storage to natively integrate with cloud-native environments. Ceph is a distributed storage system that provides file,
-block, and object storage and is deployed in large-scale production clusters.
+## Versions Supported
-Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It automates
-deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring,
-and resource management. Rook uses the facilities provided by the underlying cloud-native container management,
-scheduling, and orchestration platform to perform its duties.
+
-The pack has two presets that provide the following two configurations:
+
-- A multi-node Ceph cluster.
-- A single-node Ceph cluster.
+### Rook on Edge Clusters
-## Versions Supported
+To use Rook-Ceph on Edge clusters, you need to make a few changes to the cluster profile depending on your cluster
+configuration.
-
+1. In the YAML file for the BYO-OS pack, add the following blocks to the `stages` configuration of the OS layer in
+   the cluster profile.
-
+ ```yaml
+ stages:
+ initramfs:
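+       # Write /etc/modules-load.d/ceph.conf so the rbd and ceph kernel modules are loaded at boot.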
+ - files:
+ - path: /etc/modules-load.d/ceph.conf
+ permissions: 644
+ owner: 0
+ group: 0
+ content: |
+ rbd
+ ceph
+ encoding: ""
+ ownerstring: ""
+ after-upgrade:
+ - name: "Erase Old Partitions on Boot Disk"
+ commands:
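+            # wipefs erases existing disk signatures. Adjust /dev/sdb to match your Edge host's disk.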
+ - wipefs -a /dev/sdb
+ ```
+
+2. Click on the Rook-Ceph layer. In the upper-right corner of the layer's YAML editing interface, click **Presets**.
+   Set the preset to either single-node or multi-node depending on your cluster configuration.
+
+3. If you chose the **Single Node Cluster** preset, skip this step.
+
+ If you chose the **Multi Node Cluster with Replicas** preset, set the value of
+ `manifests.storageClass.volumeBindingMode` to `Immediate`.
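+
+   For example, here is a minimal sketch of the corresponding pack values, assuming your Rook-Ceph layer uses the
+   default `manifests.storageClass` layout referenced above:
+
+   ```yaml
+   manifests:
+     storageClass:
+       volumeBindingMode: Immediate
+   ```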
+
+### Access Ceph Dashboard
+
+The Ceph dashboard gives you an overview of the status of your Ceph cluster, including overall health and the status
+of all Ceph daemons. By default, the dashboard is exposed as a ClusterIP-type service on port 7000 on single-node
+clusters.
+
+1. Issue the following command to view the service and find its cluster IP and port.
+
+ ```shell
+   kubectl --namespace rook-ceph get svc | grep dashboard
+ ```
+
+ ```hideClipboard
+ rook-ceph-mgr-dashboard ClusterIP 192.169.32.142 7000/TCP 15m
+ ```
-## Prerequisites
+2. If you are on a node of the cluster, you can access the dashboard at the cluster IP and the exposed port.
-- Kubernetes v1.21 or higher.
+ If you are remotely accessing the cluster, you can issue the following command to enable port forwarding from your
+ local machine to the dashboard service.
-- If you are using Rook on Edge, the Edge host needs to be created with at least two hard disks.
+ ```shell
+   kubectl port-forward svc/rook-ceph-mgr-dashboard -n rook-ceph 8443:7000 &
+ ```
-- If you are using Rook on Edge, you must have create a bind mount for the `/var/lib/rook` folder on the Edge host. For
- more information, refer to
- [Create Bind Mounts](../clusters/edge/edgeforge-workflow/prepare-user-data.md#create-bind-mounts).
+   If your dashboard service is exposed on a different port, replace 7000 with that port.
-## Parameters
+3. Once you can connect to the dashboard, you need to provide login credentials to access it. Rook creates a default
+   user named `admin` and generates a secret called `rook-ceph-dashboard-password` in the namespace of the Rook-Ceph
+   cluster. To retrieve the generated password, issue the following command:
-| Parameter | Description | Default |
-| ------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | ------------ |
-| cluster.contents.spec.storage.useAllDevices | Allows the cluster to use all available devices on the nodes for storage. | true |
-| cluster.contents.spec.storage.deviceFilter | A regex filter that filters storage devices. Only device names that match the filter are used by Ceph clusters. | Empty String |
-| cluster.contents.spec.dashboard.enabled | Whether to enable the Ceph dashboard. | true |
-| cluster.operator.contents.data.LOG_LEVEL | The log level of Rook Operator. Accepted values are `DEBUG`, `INFO`. `WARNING`, and `ERROR`. | `INFO` |
+ ```shell
+ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
+ ```
+
+4. Use the password you receive in the output with the username `admin` to log in to the Ceph dashboard.
+
+### Known Issues
+
+- If a cluster experiences network issues, it's possible for the file mount to become unavailable and remain unavailable
+  even after the network is restored. This is a known issue disclosed in the
+ [Rook GitHub repository](https://github.com/rook/rook/issues/13818). Refer to the
+ [Troubleshooting section](#file-mount-becomes-unavailable-after-cluster-experiences-network-issues) for a workaround
+ if you observe this issue in your cluster.
+
+
+
+
+
+
-## Usage
### Rook on Edge Clusters
@@ -139,27 +182,6 @@ clusters.
-## Prerequisites
-
-- Kubernetes v1.21 or higher.
-
-- If you are using Rook on Edge, the Edge host needs to be created with at least two hard disks. The actual required
- number of disks depend on your cluster configuration.
-
-- If you are using Rook on Edge, you must have create a bind mount for the `/var/lib/rook` folder on the Edge host. For
- more information, refer to
- [Create Bind Mounts](../clusters/edge/edgeforge-workflow/prepare-user-data.md#create-bind-mounts).
-
-## Parameters
-
-| Parameter | Description | Default |
-| ------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | ------------ |
-| cluster.contents.spec.storage.useAllDevices | Allows the cluster to use all available devices on the nodes for storage. | true |
-| cluster.contents.spec.storage.deviceFilter | A regex filter that filters storage devices. Only device names that match the filter are used by Ceph clusters. | Empty String |
-| cluster.contents.spec.dashboard.enabled | Whether to enable the Ceph dashboard. | true |
-| cluster.operator.contents.data.LOG_LEVEL | The log level of Rook Operator. Accepted values are `DEBUG`, `INFO`. `WARNING`, and `ERROR`. | `INFO` |
-
-## Usage
### Rook on Edge Clusters
@@ -246,27 +268,6 @@ clusters.
-## Prerequisites
-
-- Kubernetes v1.21 or higher.
-
-- If you are using Rook on Edge, the Edge host needs to be created with at least two hard disks. The actual required
- number of disks depend on your cluster configuration.
-
-- If you are using Rook on Edge, you must have create a bind mount for the `/var/lib/rook` folder on the Edge host. For
- more information, refer to
- [Create Bind Mounts](../clusters/edge/edgeforge-workflow/prepare-user-data.md#create-bind-mounts).
-
-## Parameters
-
-| Parameter | Description | Default |
-| ------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | ------------ |
-| cluster.contents.spec.storage.useAllDevices | Allows the cluster to use all available devices on the nodes for storage. | true |
-| cluster.contents.spec.storage.deviceFilter | A regex filter that filters storage devices. Only device names that match the filter are used by Ceph clusters. | Empty String |
-| cluster.contents.spec.dashboard.enabled | Whether to enable the Ceph dashboard. | true |
-| cluster.operator.contents.data.LOG_LEVEL | The log level of Rook Operator. Accepted values are `DEBUG`, `INFO`. `WARNING`, and `ERROR`. | `INFO` |
-
-## Usage
### Rook on Edge Clusters
@@ -420,24 +421,3 @@ issues are resolved.
unmount will not happen and the issue will not be resolved.
6. Scale the workloads back to their original state.
-
-## Terraform
-
-```tf
-data "spectrocloud_registry" "registry" {
- name = "Public Repo"
-}
-
-data "spectrocloud_pack_simple" "pack" {
- name = "csi-rook-ceph-addon"
- version = "1.12.7"
- type = "helm"
- registry_uid = data.spectrocloud_registry.registry.id
-}
-```
-
-## References
-
-- [Rook Ceph Documentation](https://rook.io/docs/rook/v1.10/Getting-Started/intro/)
-
-- [Ceph Dashboard](https://rook.io/docs/rook/latest-release/Storage-Configuration/Monitoring/ceph-dashboard/)
diff --git a/docs/docs-content/integrations/csi-nfs-subdir-external.md b/docs/docs-content/integrations/csi-nfs-subdir-external.md
new file mode 100644
index 0000000000..3f0039f395
--- /dev/null
+++ b/docs/docs-content/integrations/csi-nfs-subdir-external.md
@@ -0,0 +1,40 @@
+---
+sidebar_label: "nfs-subdir-External"
+title: "Kubernetes NFS Subdir External Provisioner"
+description: "NFS-Subdir-External Provisioner pack in Spectro Cloud"
+type: "integration"
+category: ["storage", "amd64"]
+hide_table_of_contents: true
+sidebar_class_name: "hide-from-sidebar"
+logoUrl: "https://registry.dev.spectrocloud.com/v1/csi-nfs-subdir-external/blobs/sha256:4b40eb85382d04dc4dcfc174b5e288b963b6201f6915e14b07bd8a5c4323b51b?type=image.webp"
+tags: ["packs", "nfs-subdir-external", "storage"]
+---
+
+## Versions Supported
+
+
+
+
+
+
+
+
+
+## Terraform
+
+Use the following Terraform snippet to reference the NFS-Subdir-External Provisioner pack in your Terraform template.
+Update the version number as needed.
+
+```hcl
+data "spectrocloud_registry" "community_registry" {
+ name = "Palette Community Registry"
+}
+
+data "spectrocloud_pack_simple" "csi-nfs-subdir-external" {
+ name = "csi-nfs-subdir-external"
+ version = "4.0.13"
+ type = "helm"
+ registry_uid = data.spectrocloud_registry.community_registry.id
+}
+```
diff --git a/docs/docs-content/integrations/csi-portworx-generic.md b/docs/docs-content/integrations/csi-portworx-generic.md
new file mode 100644
index 0000000000..e59f321001
--- /dev/null
+++ b/docs/docs-content/integrations/csi-portworx-generic.md
@@ -0,0 +1,29 @@
+---
+sidebar_label: "Portworx /w Operator"
+title: "Portworx Operator"
+description: "Portworx storage CSI for all use cases"
+hide_table_of_contents: true
+type: "integration"
+category: ["storage", "amd64"]
+sidebar_class_name: "hide-from-sidebar"
+tags: ["packs", "portworx", "storage"]
+---
+
+
+
+## Terraform
+
+Use the following Terraform code to interact with the Portworx Operator pack in your Terraform scripts.
+
+```hcl
+data "spectrocloud_registry" "public_registry" {
+ name = "Public Repo"
+}
+
+data "spectrocloud_pack_simple" "portworx-operator" {
+ name = "csi-portworx-generic"
+ version = "3.0.0"
+ type = "operator-instance"
+ registry_uid = data.spectrocloud_registry.public_registry.id
+}
+```
diff --git a/docs/docs-content/integrations/csi-rook-ceph-addon.md b/docs/docs-content/integrations/csi-rook-ceph-addon.md
new file mode 100644
index 0000000000..1181b6f1c6
--- /dev/null
+++ b/docs/docs-content/integrations/csi-rook-ceph-addon.md
@@ -0,0 +1,29 @@
+---
+sidebar_label: "rook-ceph"
+title: "Rook Ceph"
+description: "Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for Ceph
+storage to natively integrate with cloud-native environments. Ceph is a distributed storage system that provides file,
+block, and object storage and is deployed in large-scale production clusters."
+hide_table_of_contents: true
+type: "integration"
+category: ["storage", "amd64"]
+sidebar_class_name: "hide-from-sidebar"
+tags: ["packs", "rook-ceph", "storage"]
+---
+
+
+
+## Terraform
+
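+Use the following Terraform code to reference the Rook Ceph pack in your Terraform scripts. Update the version number
+as needed.
+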
+```hcl
+data "spectrocloud_registry" "registry" {
+ name = "Public Repo"
+}
+
+data "spectrocloud_pack_simple" "csi-rook-ceph" {
+ name = "csi-rook-ceph-addon"
+ version = "1.14.0"
+ type = "helm"
+ registry_uid = data.spectrocloud_registry.registry.id
+}
+```
diff --git a/docs/docs-content/integrations/csi-rook-ceph.md b/docs/docs-content/integrations/csi-rook-ceph.md
new file mode 100644
index 0000000000..d08f6f6cdf
--- /dev/null
+++ b/docs/docs-content/integrations/csi-rook-ceph.md
@@ -0,0 +1,29 @@
+---
+sidebar_label: "rook-ceph"
+title: "Rook Ceph"
+description: "Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for Ceph
+storage to natively integrate with cloud-native environments. Ceph is a distributed storage system that provides file,
+block, and object storage and is deployed in large-scale production clusters."
+hide_table_of_contents: true
+type: "integration"
+category: ["storage", "amd64"]
+sidebar_class_name: "hide-from-sidebar"
+tags: ["packs", "rook-ceph", "storage"]
+---
+
+
+
+## Terraform
+
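+Use the following Terraform code to reference the Rook Ceph pack in your Terraform scripts. Update the version number
+as needed.
+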
+```hcl
+data "spectrocloud_registry" "registry" {
+ name = "Public Repo"
+}
+
+data "spectrocloud_pack_simple" "csi-rook-ceph" {
+ name = "csi-rook-ceph"
+ version = "1.14.0"
+ type = "helm"
+ registry_uid = data.spectrocloud_registry.registry.id
+}
+```
diff --git a/docs/docs-content/integrations/nfs-subdir-external.md b/docs/docs-content/integrations/nfs-subdir-external.md
deleted file mode 100644
index 1e0503c5cb..0000000000
--- a/docs/docs-content/integrations/nfs-subdir-external.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-sidebar_label: "nfs-subdir-External"
-title: "Kubernetes NFS Subdir External Provisionerl"
-description: "NFS-Subdir-External Provisioner pack in Spectro Cloud"
-type: "integration"
-category: ["storage", "amd64"]
-hide_table_of_contents: true
-sidebar_class_name: "hide-from-sidebar"
-logoUrl: "https://registry.dev.spectrocloud.com/v1/csi-nfs-subdir-external/blobs/sha256:4b40eb85382d04dc4dcfc174b5e288b963b6201f6915e14b07bd8a5c4323b51b?type=image.webp"
-tags: ["packs", "nfs-subdir-external", "storage"]
----
-
-NFS Subdir External Provisioner is an automatic provisioner for Kubernetes that uses the already configured NFS server,
-automatically creating Persistent storage volumes. It installs the storage classes and NFS client provisioner into the
-workload cluster
-
-## Prerequisites
-
-Kubernetes >=1.9
-
-## Versions Supported
-
-
-
-
-
-**1.0**
-
-
-
-
-
-## References
-
-- [Kubernetes NFS Subdir External Provisioner GitHub](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
-
-- [Kubernetes NFS Subdir External Provisioner Documentation](https://artifacthub.io/docs)
diff --git a/docs/docs-content/integrations/portworx-add-on.md b/docs/docs-content/integrations/portworx-add-on.md
new file mode 100644
index 0000000000..8cca604c95
--- /dev/null
+++ b/docs/docs-content/integrations/portworx-add-on.md
@@ -0,0 +1,29 @@
+---
+sidebar_label: "Portworx /w Operator Add-on"
+title: "Portworx Operator Add-on"
+description: "Portworx storage CSI for all use cases"
+hide_table_of_contents: true
+type: "integration"
+category: ["storage", "amd64"]
+sidebar_class_name: "hide-from-sidebar"
+tags: ["packs", "portworx", "storage"]
+---
+
+
+
+## Terraform
+
+Use the following Terraform code to interact with the Portworx Operator pack in your Terraform scripts.
+
+```hcl
+data "spectrocloud_registry" "public_registry" {
+ name = "Public Repo"
+}
+
+data "spectrocloud_pack_simple" "portworx-operator" {
+ name = "csi-portworx-add-on"
+ version = "3.0.0"
+ type = "operator-instance"
+ registry_uid = data.spectrocloud_registry.public_registry.id
+}
+```
diff --git a/docs/docs-content/integrations/portworx.md b/docs/docs-content/integrations/portworx.md
deleted file mode 100644
index 1a8881baef..0000000000
--- a/docs/docs-content/integrations/portworx.md
+++ /dev/null
@@ -1,530 +0,0 @@
----
-sidebar_label: "Portworx"
-title: "Portworx"
-description: "Portworx storage integration for on-prem installations"
-hide_table_of_contents: true
-type: "integration"
-category: ["storage", "amd64"]
-sidebar_class_name: "hide-from-sidebar"
-logoUrl: "https://registry.spectrocloud.com/v1/csi-portworx/blobs/sha256:e27bc9aaf22835194ca38062061c29b5921734eed922e57d693d15818ade7486?type=image.webp"
-tags: ["packs", "portworx", "storage"]
----
-
-[Portworx](https://portworx.com/) is a software-defined persistent storage solution designed and purpose-built for
-applications deployed as containers, via container orchestrators such as Kubernetes. You can use Palette to install
-Portworx on the cloud or on-premises.
-
-## Versions Supported
-
-
-
-
-
-
-
-- **2.11.2**
-
-
-
-
-
-- **2.10.0**
-
-
-
-
-- **2.9.0**
-
-
-
-
-- **2.8.0**
-
-
-
-
-- **2.6.1**
-
-
-
-
-## Prerequisites
-
-For deploying Portworx for Kubernetes, make sure to configure the properties in the pack:
-
-- Have at least three nodes with the proper
- [hardware, software, and network requirements](https://docs.portworx.com/install-portworx/prerequisites).
-
-- Ensure you are using a supported Kubernetes version.
-
-- Identify and set up the storageType.
-
-
-
-## Contents
-
-The default installation of Portworx will deploy the following components in the Kubernetes cluster.
-
-- Portworx
-
-- CSI Provisioner
-
-- [Lighthouse](https://portworx.com/blog/manage-portworx-clusters-using-lighthouse/)
-
-- [Stork](https://github.com/libopenstorage/stork) and
- [Stork on Portworx](https://docs.portworx.com/portworx-enterprise/platform/openshift/ocp-gcp/operations/storage-operations/stork.html)
-
-- Storage class making use of portworx-volume provisioner.
-
-## Parameters
-
-### Manifests - Portworx
-
-```yaml
-manifests:
- portworx:
- # The namespace to install Portworx resources
- namespace: "portworx"
-
- # Portworx storage type and size
- storageType: "type=zeroedthick,size=150"
-
- # Max storgae nodes per zone
- maxStorageNodesPerZone: 3
-
- # Node recovery timeout in seconds
- nodeRecoveryTimeout: 1500
-
- # Portworx storage class config
- storageClass:
- enabled: true
- isDefaultStorageClass: true
- allowVolumeExpansion: true
- reclaimPolicy: Retain
- volumeBindingMode: Immediate
- parameters:
- repl: "3"
- priority_io: "high"
- #sharedv4: true
-
- k8sVersion: "{{.spectro.system.kubernetes.version}}"
-
- templateVersion: "v4"
-
- # List of additional container args to be passed
- args:
- ociMonitor:
- #- "-dedicated_cache"
- #- "-a"
- storkDeployment:
- #- "--app-initializer=true"
- storkScheduler:
- #- "--scheduler-name=xyz"
- autoPilot:
- csiProvisioner:
- csiSnapshotter:
- csiSnapshotController:
- csiResizer:
-
- # The private registry from where images will be pulled from. When left empty, images will be pulled from the public registry
- # Example, imageRegistry: "harbor.company.com/portworx"
- imageRegistry: ""
-```
-
-# Integration With External etcd
-
-Starting Portworx v2.6.1, you can use the presets feature to toggle between the available ETCD options.
-
-By default, Portworx is set to use internal KVDB. However, you can integrate Portworx to an external etcd server by
-following the steps below.
-
-1. Enable `useExternalKvdb` flag by setting it to _true_.
-
-2. Configure the external etcd endpoints in `externalKvdb.endpoints`.
-
-If the external etcd server is configured to authenticate via certificates, additionally you may want to set up the
-following:
-
-1. Enable `externalKvdb.useCertsForSSL` flag by setting it to _true_.
-
-2. Setup certificate related configuration in `externalKvdb.cacert`, `externalKvdb.cert`, and `externalKvdb.key`.
-
-:::warning
-
-Make sure to follow the correct indentation style; otherwise, certs will not be imported correctly and will result in
-Portworx deployment failure.
-
-:::
-
-## Etcd Presets
-
-These are the three types of Presets that can be selected and modified.
-
-
-
-
-
-
-## Use Internal KVDB
-
-```yaml
-# ECTD selection
- useExternalKvdb: false
-
- # External kvdb related config
- externalKvdb:
-
- useCertsForSSL: false
-
-vsphere-cloud-controller-manager:
- k8sVersion: '{{.spectro.system.kubernetes.version}}'
-```
-
-
-
-
-## Use Non-Secure KVDB Endpoints
-
-```yaml
-# External kvdb related config
- externalKvdb:
- # List of External KVDB endpoints to use with Portworx. Used only when useExternalKvdb is true
- endpoints:
- - etcd:http://100.26.199.167:2379
- - etcd:http://100.26.199.168:2379
- - etcd:http://100.26.199.169:2379
- useCertsForSSL: false
- useExternalKvdb: true
- vsphere-cloud-controller-manager:
- k8sVersion: '{{.spectro.system.kubernetes.version}}'
-```
-
-
-
-
-
-## Use Certs Secured KVDB Endpoints
-
-```yaml
-
-# External KVDB Related Configuration
- externalKvdb:
- # List of External KVDB endpoints to use with Portworx. Used only when useExternalKvdb is true
- endpoints:
- - etcd:https://100.26.199.167:2379
- - etcd:https://100.26.199.168:2379
- - etcd:https://100.26.199.169:2379
- useCertsForSSL: true
- # The CA cert to use for etcd authentication. Make sure to follow the same indentation style as given in the example below
- cacert: |-
- -----BEGIN CERTIFICATE-----
- MIIC3DCCAcQCCQCr1j968rOV3zANBgkqhkiG9w0BAQsFADAwMQswCQYDVQQGEwJV
- UzELMAkGA1UECAwCQ0ExFDASBgNVBAcMC1NhbnRhIENsYXJhMB4XDTIwMDkwNDA1
- MzcyNFoXDTI1MDkwMzA1MzcyNFowMDELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNB
- MRQwEgYDVQQHDAtTYW50YSBDbGFyYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC
- AQoCggEBALt2CykKKwncWNQqB6Jg0QXd58qeDk40OF4Ti8DewZiZgpQOgA/+GYO7
- bx2/oQyAwjvhpTYjmMN5zORJpE3p9A+o57An1+B9D8gm1W1uABVEmwiKZhXpa+3H
- Zlon58GR+kAJPbMIpvWbjMZb4fxZM0BPo0PHzzITccoaTV4+HY4YoDNAVjfZ1cEn
- Hu2PUyN8M4RM+HdE4MOQVwqFDq/Fr6mLBMV0PdiwML0tjZ7GSGSjv1hme3mOLvKP
- qSWx4hCd5oTegEfneUKKnVhH3JLpSU1NaC6jU3vhyowRNOShi77/uJCnkx3mp9JG
- c4YruKrGc997wmUMsIv0owt49Y3dAi8CAwEAATANBgkqhkiG9w0BAQsFAAOCAQEA
- kEXPdtpOURiZIi01aNJkzLvm55CAhCg57ZVeyZat4/LOHdvo+eXeZ2LHRvEpbakU
- 4h1TQJqeNTd3txI0eIx8WxpwbJNxesuTecCWSIeaN2AApIWzHev/N7ZYJsZ0EM2f
- +rYVcX8mcOkLeyKDInCKySxIPok8kU4qQLTWytJbeRYhxh7mSMuZXu7mtSh0HdP1
- C84Ml+Ib9uY2lbr1+15MhfSKdpvmLVOibRIrdqQirNhl8uU9I1/ExDxXyR2NBMLW
- tzGgsz5dfFDZ4oMqAc8Nqm9LuvmIZYMCunMZedI2h7jGH3LVQXdM81iZCgJdTgKf
- i9CNyx+CcwUCkWQzhrHBQA==
- -----END CERTIFICATE-----
- # The cert to use for etcd authentication. Make sure to follow the same indentation style as given in the example below
- cert: |-
- -----BEGIN CERTIFICATE-----
- MIIDaTCCAlGgAwIBAgIJAPLC+6M3EezhMA0GCSqGSIb3DQEBCwUAMDAxCzAJBgNV
- BAYTAlVTMQswCQYDVQQIDAJDQTEUMBIGA1UEBwwLU2FudGEgQ2xhcmEwHhcNMjAw
- OTA0MDUzODIyWhcNMjIxMjA4MDUzODIyWjA4MQswCQYDVQQGEwJVUzETMBEGA1UE
- CAwKQ2FsaWZvcm5pYTEUMBIGA1UEBwwLU2FudGEgQ2xhcmEwggEiMA0GCSqGSIb3
- DQEBAQUAA4IBDwAwggEKAoIBAQCycmCHPrX0YNk75cu3H5SQv/D1qND2+2rGvv0Z
- x28A98KR/Bdchk1QaE+UHYPWejsRWUtEB0Q0KreyxpwH1B4EHNKpP+jV9YqCo5fW
- 3QRipWONKgvrSKkjVp/4U/NAAWCHfruB1d9u/qR4utY7sEKHE9AxmbyG+K19mOB2
- FJc7NOsTwN8d6uA5ZfFKmv3VtZzl0+Vq1qFSyIZT9zXYM22YjBAqXk9FVoI0FoQt
- zpymQrsajfS+hNX7lSUVKKv3IplpNqSOyTHRF7TWo5NOH+YRWJHLAgZoq2w/yaEi
- 5IdjLdb1JXmVUyBgq590WcJZDakwD9SPOHrM9K1vTl9I41q7AgMBAAGjfjB8MEoG
- A1UdIwRDMEGhNKQyMDAxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEUMBIGA1UE
- BwwLU2FudGEgQ2xhcmGCCQCr1j968rOV3zAJBgNVHRMEAjAAMAsGA1UdDwQEAwIE
- 8DAWBgNVHREEDzANggtleGFtcGxlLmNvbTANBgkqhkiG9w0BAQsFAAOCAQEAUOBn
- YdTif6WlRpQOj+3quGrafJSNL8TqHkpmgaInSpMVFwDsmPF/HoAVVpX+H3oMY8p7
- Ll4I1Q7szpGRnKpJuzMZp5+gNpmwAz2MdAr7Ae9wH/+o8c2avbfpaHFWVTJZJ6X1
- Q6m6jmXcU0QSS4zj+lyxDNKnXfwVL8hVp0mXRFfPpb4l5ZCBoj4IA2UgyeU7F/nn
- nvR5rmg781zc0lUL6X7HaSfQjtPDTSZYFqwE93vSe42JP7NWM96lZHy2IlfE88Wp
- jUvOOJjaFVuluaJ78uCydMGEkJmipxH+1YXicH47RQ30tD5QyXxGBi+8jw5z0RiR
- ptWD/oDFCiCjlffyzg==
- -----END CERTIFICATE-----
- # The key to use for etcd authentication. Make sure to follow the same indentation style as given in the example below
- key: |-
- -----BEGIN RSA PRIVATE KEY-----
- MIIEogIBAAKCAQEAsnJghz619GDZO+XLtx+UkL/w9ajQ9vtqxr79GcdvAPfCkfwX
- XIZNUGhPlB2D1no7EVlLRAdENCq3ssacB9QeBBzSqT/o1fWKgqOX1t0EYqVjjSoL
- 60ipI1af+FPzQAFgh367gdXfbv6keLrWO7BChxPQMZm8hvitfZjgdhSXOzTrE8Df
- HergOWXxSpr91bWc5dPlatahUsiGU/c12DNtmIwQKl5PRVaCNBaELc6cpkK7Go30
- voTV+5UlFSir9yKZaTakjskx0Re01qOTTh/mEViRywIGaKtsP8mhIuSHYy3W9SV5
- lVMgYKufdFnCWQ2pMA/Ujzh6zPStb05fSONauwIDAQABAoIBAGHELIKspv/m993L
- Pttrn/fWUWwmO6a1hICzLvQqwfRjyeQ1m48DveQp4j+iFBM0EJymsYfp+0IhjVeT
- XPUlD/Ts3bYA384pouOEQbJkkPyC5JH40WLtAk3sLeTeCc2tc3eIxa6SwMGNHgtP
- QgSdwzVCc7RZKGNCZ7sCQSgwi9LRdyjHU0z0KW3lHqsMkK+yEg8zuH2DpIgvFej8
- KxjwF9ZEsnYDcERdd4TOu2NTEIl5N7F8E6di/CLP/wkfHazjX+qGcuBXjeGhPgdb
- fKCcrFxhbavaJRMGLqnOD99l/zvySnA+LUSZ35KB/2ZfLMv71Z9oABTlyiR+76GW
- 0lcQjmECgYEA2Jrq2qe7IUZ8CURWJ6rDKgD83LGRCHAWZ+dYvFmdsyfAGMV4+p4V
- zKSidiTWAgl7ppiZdaEPu/2cH8uohDkdx2CTSUKPUM6+PBhE4hwSA42RlnIpGWbf
- YEqcZ/qeo1IFb1A1YslwdslCVLc3INEbWairBEGis8aAxUaoEiTiPTMCgYEA0ubQ
- 05BijLK6XH6YfASDLxwRg6jxn3mBqh+pAwE4tVVJVI9yXnNzN4/WKJJM+mdSGfpv
- UcJy86ZcmHNzanZUPWh80U2pyRoVXvVQpY8hdMQ3neya60mc6+Nneba2LflkBVmd
- cdoNGO0zAcGb0FKDCF2H3fizDxcoOyUjeKlLnFkCgYABU0lWlyok9PpzUBC642eY
- TTM+4nNBuvXYIuk/FclKPFcHj8XCus7lVqiL0oPgtVAlX8+okZi4DMA0zZk1XegZ
- vTSJgTfBRdKSKY/aVlOh4+7dHcu0lRWO0EYOuNDZrPnNiY8aEKN4hpi6TfivYbgq
- H0cUmpY1RWSqUFlc6w7bUwKBgEMINctoksohbHZFjnWsgX2RsEdmhRWo6vuFgJSB
- 6OJJrzr/NNysWSyJvQm8JldYS5ISNRuJcDvc3oVd/IsT/QZflXx48MQIVE6QLgfR
- DFMuonbBYyPxi7y11Ies+Q53u8CvkQlEwvDvQ00Fml6GOzuHbs2wZEkhlRnnXfTV
- 6kBRAoGAP9NUZox5ZrwkOx7iH/zEx3X3qzFoN/zSI2iUi2XRWaglGbNAxqX5/ug8
- xJIi1Z9xbsZ/3cPEdPif2VMdvIy9ZSsBwIEuzRf8YNw6ZGphsO95FKrgmoqA44mm
- WsqUCBt5+DnOaDyvMkokP+T5tj/2LXemuIi4Q5nrOmw/WwVGGGs=
- -----END RSA PRIVATE KEY-----
- useExternalKvdb: true
-vsphere-cloud-controller-manager:
- k8sVersion: '{{.spectro.system.kubernetes.version}}'
-
-```
-
-
-
-
-# Environments
-
-
-
-
-
-
-## vSphere Environment
-
-For deploying Portworx storage on vSphere environments, make sure to configure the following properties in the pack:
-
-- vSphere Configuration file
-
-- Storage Type
-
-- Kubernetes Version
-
-### vSphere Manifest
-
-Additional parameters for the manifest is as follows:
-
-
-
-```yaml
-# VSphere cloud configurations
-vsphereConfig:
- insecure: "true"
- host: ""
- port: "443"
- datastorePrefix: "datastore"
- installMode: "shared"
- userName: ""
- password: ""
- # Enter the name of the secret which has vsphere user credentials (Use keys VSPHERE_USER, VSPHERE_PASSWORD)
- userCredsSecret: ""
-```
-
-
-
-## Using Secrets for vSphere User Credentials
-
-Portworx pack values allow you to configure vSphere user credentials in two ways:
-
-1. Username & password - (`portworx.vsphereConfig.userName` and `portworx.vsphereConfig.password`).
-
-2. Secret - (`portworx.vsphereConfig.userCredsSecret` is available with v2.6.1 and above).
-
-If you chose the latter, make sure to create the secret in the target cluster manually or by bringing your own (BYO)
-manifest Add-on pack.
-
-
-
-:::warning
-
-Until the secret is created in the cluster, Portworx deployments might fail to run. When secret is configured,
-reconciliation should recover Portworx.
-
-:::
-
-Secret can be created using the spec below,
-
-
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: px-vsphere-secret
- namespace: kube-system
-type: Opaque
-data:
- VSPHERE_USER: "b64 encoded admin username"
- VSPHERE_PASSWORD: "b64 encoded admin password"
-```
-
-and this secret can be referenced in the Portworx pack values as shown below:
-
-
-
-```
-manifests:
- portworx:
- vsphereConfig:
- userCredsSecret: "px-vsphere-secret"
-```
-
-Ensure to follow the correct indentation style; otherwise, certificates will not be imported correctly and resulting in
-a Portworx deployment failure.
-
-
-
-
-## AWS Environment
-
-Palette provisions Portworx in an AWS environment. The following are the packs supported:
-
-### Packs Supported
-
-
-
-
-**portworx-aws-2.9**
-
-
-
-
-**portworx-aws-2.10**
-
-
-
-
-
-
-### Prerequisites
-
-To deploy Portworx in an AWS environment, have the following prerequisites in place.
-
-- Ensure the Portworx Nodes have the TCP ports open at **9001-9022**.
-
-- Ensure there is an open UDP port at **9002**.
-
-- Apply the following policy to the **User** in AWS:
-
-```yaml
-{
- "Version": "2012-10-17",
- "Statement":
- [
- {
- "Sid": "",
- "Effect": "Allow",
- "Action":
- [
- "ec2:AttachVolume",
- "ec2:ModifyVolume",
- "ec2:DetachVolume",
- "ec2:CreateTags",
- "ec2:CreateVolume",
- "ec2:DeleteTags",
- "ec2:DeleteVolume",
- "ec2:DescribeTags",
- "ec2:DescribeVolumeAttribute",
- "ec2:DescribeVolumesModifications",
- "ec2:DescribeVolumeStatus",
- "ec2:DescribeVolumes",
- "ec2:DescribeInstances",
- "autoscaling:DescribeAutoScalingGroups",
- ],
- "Resource": ["*"],
- },
- ],
-}
-```
-
-
-
-## AWS Manifest
-
-```yaml
-manifests:
- portworx:
- # The namespace to install Portworx resources
- namespace: "portworx"
-
- # Portworx storage type and size
- storageType: "type=gp3,size=150"
-
- # Max storage nodes per zone
- maxStorageNodesPerZone: 3
-
- # Node recovery timeout in seconds
- nodeRecoveryTimeout: 1500
-
- # Portworx storage class config
- storageClass:
- enabled: true
- isDefaultStorageClass: true
- allowVolumeExpansion: true
- reclaimPolicy: Retain
- volumeBindingMode: Immediate
- parameters:
- repl: "3"
- priority_io: "high"
- #sharedv4: true
-
- # Kubernetes version.
- k8sVersion: "{{.spectro.system.kubernetes.version}}"
-
- templateVersion: "v4"
-
- # List of additional container args to be passed
- args:
- ociMonitor:
- #- "-dedicated_cache"
- #- "-a"
- storkDeployment:
- #- "--app-initializer=true"
- storkScheduler:
- #- "--scheduler-name=xyz"
- autoPilot:
- csiProvisioner:
- csiSnapshotter:
- csiSnapshotController:
- csiResizer:
-
- # The private registry from where images will be pulled from. When left empty, images will be pulled from the public registry
- # Example, imageRegistry: "harbor.company.com/portworx"
- imageRegistry: ""
-
- # ECTD selection
- useExternalKvdb: false
-
- # External kvdb related config
- externalKvdb:
- useCertsForSSL: false
-```
-
-
-
-
-
-
-
-
-
-## References
-
-- [Portworx Install with Kubernetes](https://docs.portworx.com/portworx-install-with-kubernetes/)
-
-- [Installation Prerequisites](https://docs.portworx.com/install-portworx/prerequisites/)
-
-- [Install Portworx on AWS ASG](https://docs.portworx.com/portworx-enterprise/install-portworx/kubernetes/aws/aws-asg)
diff --git a/static/packs-data/exclude_packs.json b/static/packs-data/exclude_packs.json
index 2d038757e2..e525d90e74 100644
--- a/static/packs-data/exclude_packs.json
+++ b/static/packs-data/exclude_packs.json
@@ -1 +1,9 @@
-["palette-upgrader", "csi-aws-new"]
+[
+ "palette-upgrader",
+ "csi-aws-new",
+ "csi-portworx-gcp",
+ "csi-portworx-aws",
+ "csi-portworx-vsphere",
+ "csi-rook-ceph-helm",
+ "csi-rook-ceph-helm-addon"
+]
diff --git a/static/packs-data/packs_information.json b/static/packs-data/packs_information.json
index aeef4d3a6b..b40439119a 100644
--- a/static/packs-data/packs_information.json
+++ b/static/packs-data/packs_information.json
@@ -215,6 +215,58 @@
"name": "csi-longhorn-addon",
"description": "Longhorn is a lightweight distributed block storage system for cloud native storage Kubernetes that allows you to replicate storage to Kubernetes clusters. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster using containers and microservices."
},
+ {
+ "name": "csi-maas-volume",
+ "description": "The MAAS Volume CSI driver allows Kubernetes to access MAAS volumes. The driver is implemented as a Container Storage Interface (CSI) plugin."
+ },
+ {
+ "name": "csi-nfs",
+ "description": "The NFS Container Storage Interface (CSI) Driver provides a CSI interface used by Kubernetes to manage the lifecycle of NFS volumes."
+ },
+ {
+ "name": "csi-nfs-subdir-external",
+    "description": "NFS subdir external provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims."
+ },
+ {
+ "name": "csi-openstack-cinder",
+ "description": "The Cinder CSI Driver is a CSI Specification compliant driver used by Container Orchestrators to manage the lifecycle of OpenStack Cinder Volumes."
+ },
+ {
+ "name": "csi-portworx-aws",
+ "description": "The Portworx CSI Driver provides a standardized way to manage storage resources in containerized environments. This driver supports the full range of Portworx features and most of the CSI specifications, facilitating seamless integration and management of storage across different platforms."
+ },
+ {
+ "name": "csi-portworx-gcp",
+ "description": "The Portworx CSI Driver provides a standardized way to manage storage resources in containerized environments. This driver supports the full range of Portworx features and most of the CSI specifications, facilitating seamless integration and management of storage across different platforms."
+ },
+ {
+ "name": "csi-portworx-generic",
+ "description": "The Portworx CSI Driver provides a standardized way to manage storage resources in containerized environments. This driver supports the full range of Portworx features and most of the CSI specifications, facilitating seamless integration and management of storage across different platforms."
+ },
+ {
+ "name": "csi-portworx-vsphere",
+ "description": "The Portworx CSI Driver provides a standardized way to manage storage resources in containerized environments. This driver supports the full range of Portworx features and most of the CSI specifications, facilitating seamless integration and management of storage across different platforms."
+ },
+ {
+ "name": "portworx-add-on",
+ "description": "The Portworx CSI Driver provides a standardized way to manage storage resources in containerized environments. This driver supports the full range of Portworx features and most of the CSI specifications, facilitating seamless integration and management of storage across different platforms."
+ },
+ {
+ "name": "csi-rook-ceph",
+ "description": "Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for Ceph storage to natively integrate with cloud-native environments."
+ },
+ {
+ "name": "csi-rook-ceph-addon",
+ "description": "Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for Ceph storage to natively integrate with cloud-native environments."
+ },
+ {
+ "name": "csi-rook-ceph-helm",
+ "description": "Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for Ceph storage to natively integrate with cloud-native environments."
+ },
+ {
+ "name": "csi-rook-ceph-helm-addon",
+ "description": "Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for Ceph storage to natively integrate with cloud-native environments."
+ },
{
"name": "csi-rook",
"description": "Rook is a cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments. Rook turns storage"