+
+ Pack Registry
+ URL: https://10.10.249.12:5000
+ Username: XXXXXXXXX
+ Password: XXXXXXXXX
+ ```
+
+15. If you need to configure the instance with proxy settings, go ahead and do so now. You can configure proxy settings by using environment variables. Replace the values with your environment's respective values.
+
+
+
+ ```shell
+ export http_proxy=http://10.1.1.1:8888
+ export https_proxy=https://10.1.1.1:8888
+ export no_proxy=.example.dev,10.0.0.0/8
+ ```
+
+16. The next set of steps will download the required binaries to support a Palette installation, such as the Palette Installer, required Kubernetes packages, and kubeadm packages. You can download these artifacts from the instance, or download them externally and transfer them to the instance. Click on each tab for further guidance.
+
+
+
+ :::caution
+
+ You must download the following three resources. Our support team will provide you with the credentials and download URL.
+ Click on each tab to learn more about each resource and steps for downloading.
+
+ :::
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/airgap-k8s-v3.3.15.bin \
+ --output airgap-k8s-v3.3.15.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
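+
+ For example, the same download command with certificate verification disabled could look like the following. It assumes the same placeholder credentials and download URL as above.
+
+ ```shell
+ curl --insecure --user XXXX:YYYYY https:///airgap/packs/airgap-k8s-v3.3.15.bin \
+ --output airgap-k8s-v3.3.15.bin
+ ```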
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-k8s-v3.3.15.bin && sudo ./airgap-k8s-v3.3.15.bin
+ ```
+
+ Example Output:
+ ```shell
+ sudo ./airgap-k8s-v3.3.15.bin
+ Verifying archive integrity... 100% MD5 checksums are OK. All good.
+ Uncompressing Airgap K8S Images Setup - Version 3.3.15 100%
+ Setting up Packs
+ Setting up Images
+ - Pushing image k8s.gcr.io/kube-controller-manager:v1.22.10
+ - Pushing image k8s.gcr.io/kube-proxy:v1.22.10
+ - Pushing image k8s.gcr.io/kube-apiserver:v1.22.10
+ - Pushing image k8s.gcr.io/kube-scheduler:v1.22.10
+ …
+ Setup Completed
+ ```
+
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/airgap-k8s-v3.3.15.bin \
+ --output airgap-k8s-v3.3.15.bin
+ ```
+
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-k8s-v3.3.15.bin && sudo ./airgap-k8s-v3.3.15.bin
+ ```
+
+ Example Output:
+ ```shell
+ sudo ./airgap-k8s-v3.3.15.bin
+ Verifying archive integrity... 100% MD5 checksums are OK. All good.
+ Uncompressing Airgap K8S Images Setup - Version 3.3.15 100%
+ Setting up Packs
+ Setting up Images
+ - Pushing image k8s.gcr.io/kube-controller-manager:v1.22.10
+ - Pushing image k8s.gcr.io/kube-proxy:v1.22.10
+ - Pushing image k8s.gcr.io/kube-apiserver:v1.22.10
+ - Pushing image k8s.gcr.io/kube-scheduler:v1.22.10
+ …
+ Setup Completed
+ ```
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-kubeadm.bin \
+ --output airgap-edge-kubeadm.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-kubeadm.bin && sudo ./airgap-edge-kubeadm.bin
+ ```
+
+ Example Output:
+ ```shell
+ sudo ./airgap-edge-kubeadm.bin
+ Verifying archive integrity... 100% MD5 checksums are OK. All good.
+ Uncompressing Airgap Edge Packs - Kubeadm Images 100%
+ Setting up Images
+ - Skipping image k8s.gcr.io/coredns/coredns:v1.8.6
+ - Pushing image k8s.gcr.io/etcd:3.5.1-0
+ - Pushing image k8s.gcr.io/kube-apiserver:v1.23.12
+ - Pushing image k8s.gcr.io/kube-controller-manager:v1.23.12
+ - Pushing image k8s.gcr.io/kube-proxy:v1.23.12
+ …
+ Setup Completed
+ ```
+
+
+
+
+
+
+
+
+17. If you will be using Edge deployments, download the packages your Edge deployments require. If you are not planning to use Edge, skip to the end. You can return to this step later and add the packages if needed. Click on the `...` tab for additional options.
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu22-k3s.bin \
+ --output airgap-edge-ubuntu22-k3s.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu22-k3s.bin && sudo ./airgap-edge-ubuntu22-k3s.bin
+ ```
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu22-rke.bin \
+ --output airgap-edge-ubuntu22-rke.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu22-rke.bin && sudo ./airgap-edge-ubuntu22-rke.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu22-kubeadm.bin \
+ --output airgap-edge-ubuntu22-kubeadm.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu22-kubeadm.bin && sudo ./airgap-edge-ubuntu22-kubeadm.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu20-k3s.bin \
+ --output airgap-edge-ubuntu20-k3s.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu20-k3s.bin && sudo ./airgap-edge-ubuntu20-k3s.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu20-rke.bin \
+ --output airgap-edge-ubuntu20-rke.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu20-rke.bin && sudo ./airgap-edge-ubuntu20-rke.bin
+ ```
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu20-kubeadm.bin \
+ --output airgap-edge-ubuntu20-kubeadm.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu20-kubeadm.bin && sudo ./airgap-edge-ubuntu20-kubeadm.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-opensuse-k3s.bin \
+ --output airgap-edge-opensuse-k3s.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-opensuse-k3s.bin && sudo ./airgap-edge-opensuse-k3s.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-opensuse-rke.bin \
+ --output airgap-edge-opensuse-rke.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-opensuse-rke.bin && sudo ./airgap-edge-opensuse-rke.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-opensuse-kubeadm.bin \
+ --output airgap-edge-opensuse-kubeadm.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-opensuse-kubeadm.bin && sudo ./airgap-edge-opensuse-kubeadm.bin
+ ```
+
+
+
+
+
+
+
+----
+
+
+The next step of the installation process is to begin the deployment of an appliance using the instructions in the [Instructions](install.md) guide. If you need to review the Spectro Cloud Repository details, issue the following command for detailed output.
+
+
+
+```shell
+sudo /bin/airgap-setup.sh
+```
+
+
+
+:::info
+
+You can review all the logs related to the setup of the private Spectro repository in **/tmp/airgap-setup.log**.
+
+:::
+
+
+## Validate
+
+You can validate that the Spectro Repository you deployed is available and ready for the next steps of the installation process. If you provided the appliance with an SSH key, you can skip to step five.
+
+
+1. Log in to vCenter Server by using the vSphere Client.
+
+
+2. Navigate to your Datacenter and locate your VM. Click on the VM to access its details page.
+
+
+3. Power on the VM.
+
+
+4. Click on **Launch Web Console** to access the terminal.
+
+
+5. Log in with the user `ubuntu` and the user password you specified during the installation. If you are using SSH, use the following command, and ensure you specify the path to your SSH private key and replace the IP address with your appliance's static IP.
+
+
+
+ ```shell
+ ssh -i ~/path/to/your/file ubuntu@10.1.1.1
+ ```
+
+
+6. Verify the registry server is up and available. Replace the `10.1.1.1` value with your appliance's IP address.
+
+
+
+ ```shell
+ curl --insecure https://10.1.1.1:5000/health
+ ```
+
+ Example Output:
+ ```shell
+ {"status":"UP"}
+ ```
+
+7. Ensure you can log into your registry server. Use the credentials provided to you by the `airgap-setup.sh` script. Replace the `10.1.1.1` value with your appliance's IP address.
+
+
+
+ ```shell
+ curl --insecure --user admin:admin@airgap https://10.1.1.1:5000/v1/_catalog
+ ```
+
+ Example Output:
+ ```
+ {"metadata":{"lastUpdatedTime":"2023-04-11T21:12:09.647295105Z"},"repositories":[{"name":"amazon-linux-eks","tags":[]},{"name":"aws-efs","tags":[]},{"name":"centos-aws","tags":[]},{"name":"centos-azure","tags":[]},{"name":"centos-gcp","tags":[]},{"name":"centos-libvirt","tags":[]},{"name":"centos-vsphere","tags":[]},{"name":"cni-aws-vpc-eks","tags":[]},{"name":"cni-aws-vpc-eks-helm","tags":[]},{"name":"cni-azure","tags":[]},{"name":"cni-calico","tags":[]},{"name":"cni-calico-azure","tags":[]},{"name":"cni-cilium-oss","tags":[]},{"name":"cni-custom","tags":[]},{"name":"cni-kubenet","tags":[]},{"name":"cni-tke-global-router","tags":[]},{"name":"csi-aws","tags":[]},{"name":"csi-aws-ebs","tags":[]},{"name":"csi-aws-efs","tags":[]},{"name":"csi-azure","tags":[]},{"name":"csi-gcp","tags":[]},{"name":"csi-gcp-driver","tags":[]},{"name":"csi-longhorn","tags":[]},{"name":"csi-longhorn-addon","tags":[]},{"name":"csi-maas-volume","tags":[]},{"name":"csi-nfs-subdir-external","tags":[]},{"name":"csi-openstack-cinder","tags":[]},{"name":"csi-portworx-aws","tags":[]},{"name":"csi-portworx-gcp","tags":[]},{"name":"csi-portworx-generic","tags":[]},{"name":"csi-portworx-vsphere","tags":[]},{"name":"csi-rook-ceph","tags":[]},{"name":"csi-rook-ceph-addon","tags":[]},{"name":"csi-tke","tags":[]},{"name":"csi-topolvm-addon","tags":[]},{"name":"csi-vsphere-csi","tags":[]},{"name":"csi-vsphere-volume","tags":[]},{"name":"edge-k3s","tags":[]},{"name":"edge-k8s","tags":[]},{"name":"edge-microk8s","tags":[]},{"name":"edge-native-byoi","tags":[]},{"name":"edge-native-opensuse","tags":[]},{"name":"edge-native-ubuntu","tags":[]},{"name":"edge-rke2","tags":[]},{"name":"external-snapshotter","tags":[]},{"name":"generic-byoi","tags":[]},{"name":"kubernetes","tags":[]},{"name":"kubernetes-aks","tags":[]},{"name":"kubernetes-coxedge","tags":[]},{"name":"kubernetes-eks","tags":[]},{"name":"kubernetes-eksd","tags":[]},{"name":"kubernetes-konvoy","tags":[]},{"name":"kubernetes-microk8s","tags":[]},{"name":"kubernetes-rke2","tags":[]},{"name":"kubernetes-tke","tags":[]},{"name":"portworx-add-on","tags":[]},{"name":"spectro-mgmt","tags":[]},{"name":"tke-managed-os","tags":[]},{"name":"ubuntu-aks","tags":[]},{"name":"ubuntu-aws","tags":[]},{"name":"ubuntu-azure","tags":[]},{"name":"ubuntu-coxedge","tags":[]},{"name":"ubuntu-edge","tags":[]},{"name":"ubuntu-gcp","tags":[]},{"name":"ubuntu-libvirt","tags":[]},{"name":"ubuntu-maas","tags":[]},{"name":"ubuntu-openstack","tags":[]},{"name":"ubuntu-vsphere","tags":[]},{"name":"volume-snapshot-controller","tags":[]}],"listMeta":{"continue":""}}
+ ```
+
+
+8. Next, validate the Spectro repository is available. Replace the IP with your appliance's IP address.
+
+ ```shell
+ curl --insecure --user spectro:admin@airgap https://10.1.1.1
+ ```
+
+ Output:
+ ```html hideClipboard
+ <!DOCTYPE html>
+ <html>
+ <head>
+ <title>Welcome to nginx!</title>
+ </head>
+ <body>
+ <h1>Welcome to nginx!</h1>
+ <p>If you see this page, the nginx web server is successfully installed and
+ working. Further configuration is required.</p>
+
+ <p>For online documentation and support please refer to
+ <a href="http://nginx.org/">nginx.org</a>.<br/>
+ Commercial support is available at
+ <a href="http://nginx.com/">nginx.com</a>.</p>
+
+ <p><em>Thank you for using nginx.</em></p>
+ </body>
+ </html>
+ ```
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md
new file mode 100644
index 0000000000..5382ec10da
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install-on-kubernetes.md
@@ -0,0 +1,24 @@
+---
+sidebar_label: "Kubernetes"
+title: "Kubernetes"
+description: "Learn how to install Palette on Kubernetes."
+icon: ""
+hide_table_of_contents: false
+tags: ["palette", "self-hosted", "kubernetes"]
+---
+
+
+Palette can be installed on a Kubernetes cluster with internet connectivity or in an airgap environment. When you install Palette, a three-node cluster is created. You install Palette on Kubernetes by using a Helm Chart our support team provides. Refer to [Access Palette](../../enterprise-version.md#access-palette) for instructions on requesting access to the Helm Chart.
+
+
+To get started with Palette on Kubernetes, refer to the [Install Instructions](install.md) guide.
+
+## Resources
+
+- [Install Instructions](install.md)
+
+
+- [Airgap Install Instructions](airgap-instructions.md)
+
+
+- [Helm Configuration Reference](palette-helm-ref.md)
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install.md b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install.md
new file mode 100644
index 0000000000..6fa86484c6
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/install.md
@@ -0,0 +1,308 @@
+---
+sidebar_label: "Instructions"
+title: "Instructions"
+description: "Learn how to deploy self-hosted Palette to a Kubernetes cluster using a Helm Chart."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 10
+tags: ["self-hosted", "enterprise"]
+---
+
+
+You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes cluster in your production environment.
+
+This installation method is common in secure environments with restricted network access that prohibits using Palette SaaS. Review our [architecture diagrams](../../../architecture/networking-ports.md) to ensure your Kubernetes cluster has the necessary network connectivity for Palette to operate successfully.
+
+
+
+## Prerequisites
+
+- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) is installed and available.
+
+
+- [Helm](https://helm.sh/docs/intro/install/) is installed and available.
+
+
+- Access to the target Kubernetes cluster's kubeconfig file. You must be able to interact with the cluster using `kubectl` commands and have sufficient permissions to install Palette. We recommend using a role with cluster-admin permissions to install Palette.
+
+
+- The Kubernetes cluster must be set up on a supported version of Kubernetes, which includes versions v1.25 to v1.27.
+
+
+
+- Ensure the Kubernetes cluster does not have Cert Manager installed. Palette requires a unique Cert Manager configuration to be installed as part of the installation process. If Cert Manager is already installed, you must uninstall it before installing Palette.
+
+
+- The Kubernetes cluster must have a Container Storage Interface (CSI) installed and configured. Palette requires a CSI to store persistent data. You may install any CSI that is compatible with your Kubernetes cluster.
+
+
+
+- We recommend the following resources for Palette. Refer to the [Palette size guidelines](../install-palette.md#size-guidelines) for additional sizing information.
+
+ - 8 CPUs per node.
+
+ - 16 GB Memory per node.
+
+ - 100 GB Disk Space per node.
+
+ - A Container Storage Interface (CSI) for persistent data.
+
+ - A minimum of three worker nodes or three untainted control plane nodes.
+
+
+- The following network ports must be accessible for Palette to operate successfully.
+
+ - TCP/443: Inbound and outbound to and from the Palette management cluster.
+
+ - TCP/6443: Outbound traffic from the Palette management cluster to the deployed clusters' Kubernetes API server.
+
+
+- Ensure you have an SSL certificate that matches the domain name you will assign to Palette. You will need this to enable HTTPS encryption for Palette. Reach out to your network administrator or security team to obtain the SSL certificate. A sketch of commands for base64 encoding these files is shown after this list. You need the following files:
+
+ - x509 SSL certificate file in base64 format.
+
+ - x509 SSL certificate key file in base64 format.
+
+ - x509 SSL certificate authority file in base64 format.
+
+
+- Ensure the OS and Kubernetes cluster you are installing Palette onto are FIPS-compliant. Otherwise, Palette and its operations will not be FIPS-compliant.
+
+
+- A custom domain and the ability to update Domain Name System (DNS) records. You will need this to enable HTTPS encryption for Palette.
+
+
+- Access to the Palette Helm Charts. Refer to [Access Palette](../../enterprise-version.md#access-palette) for instructions on how to request access to the Helm Chart.
+
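+
+The following is a minimal sketch of commands you can use to check some of the prerequisites above. It assumes your kubeconfig already points at the target cluster and uses hypothetical file names for the SSL certificate files.
+
+```shell
+# Confirm the cluster is reachable and that you have broad enough permissions.
+kubectl auth can-i '*' '*' --all-namespaces
+
+# Confirm Cert Manager is not already installed.
+kubectl get pods --all-namespaces | grep -i cert-manager
+
+# Base64 encode the SSL certificate files referenced in the values.yaml file.
+# Replace the file names with your own certificate files.
+base64 -w 0 palette.crt
+base64 -w 0 palette.key
+base64 -w 0 palette-ca.crt
+```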
+
+
+
+
+:::caution
+
+Do not use a Palette-managed Kubernetes cluster when installing Palette. Palette-managed clusters contain the Palette agent and Palette-created Kubernetes resources that will interfere with the installation of Palette.
+
+:::
+
+
+## Install Palette
+
+Use the following steps to install Palette on Kubernetes.
+
+
+:::info
+
+The following instructions are written agnostic to the Kubernetes distribution you are using. Depending on the underlying infrastructure provider and your Kubernetes distribution, you may need to modify the instructions to match your environment. Reach out to our support team if you need assistance.
+
+:::
+
+
+1. Open a terminal session and navigate to the directory where you downloaded the Palette Helm Charts provided by our support team. We recommend you place all the downloaded files into the same directory. You should have the following Helm Charts:
+
+ - Spectro Management Plane Helm Chart.
+
+ - Cert Manager Helm Chart.
+
+
+2. Extract each Helm Chart into its own directory. Use the commands below as a reference. Do this for all the provided Helm Charts.
+
+
+
+ ```shell
+ tar xzvf spectro-mgmt-plane-*.tgz
+ ```
+
+
+
+ ```shell
+ tar xzvf cert-manager-*.tgz
+ ```
+
+
+3. Install Cert Manager using the following command. Replace the Cert Manager Helm Chart file name with the name of the file you downloaded, as the version number may differ.
+
+
+
+ ```shell
+ helm upgrade --values cert-manager/values.yaml cert-manager cert-manager-1.11.0.tgz --install
+ ```
+
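+
+ Optionally, verify that the Cert Manager pods are up before continuing. The command below assumes the chart installs Cert Manager into a namespace named `cert-manager`; adjust the namespace if your chart's **values.yaml** specifies a different one.
+
+ ```shell
+ kubectl get pods --namespace cert-manager
+ ```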
+
+
+ :::info
+
+ The Cert Manager Helm Chart provided by our support team is configured for Palette. Do not modify the **values.yaml** file unless instructed to do so by our support team.
+
+ :::
+
+
+4. Open the **values.yaml** file in the **spectro-mgmt-plane** folder with a text editor of your choice. The **values.yaml** file contains the default values for the Palette installation parameters. However, you must populate the following parameters before installing Palette.
+
+
+
+ | **Parameter** | **Description** | **Type** |
+ | --- | --- | --- |
+ | `env.rootDomain` | The URL name or IP address you will use for the Palette installation. | string |
+ | `ociPackRegistry` or `ociPackEcrRegistry` | The OCI registry credentials for Palette FIPS packs.| object |
+ | `scar` | The Spectro Cloud Artifact Repository (SCAR) credentials for Palette FIPS images. These credentials are provided by our support team. | object |
+
+
+ Save the **values.yaml** file after you have populated the required parameters mentioned in the table.
+
+
+
+ :::info
+
+ You can learn more about the parameters in the **values.yaml** file in the [Helm Configuration Reference](palette-helm-ref.md) page.
+
+ :::
+
+
+
+5. Install the Palette Helm Chart using the following command.
+
+
+
+ ```shell
+ helm upgrade --values spectro-mgmt-plane/values.yaml hubble spectro-mgmt-plane-0.0.0.tgz --install
+ ```
+
+
+6. Track the installation process using the command below. Palette is ready when the deployments in the namespaces `cp-system`, `hubble-system`, `ingress-nginx`, `jet-system`, and `ui-system` reach the *Ready* state. The installation takes two to three minutes to complete.
+
+
+
+ ```shell
+ kubectl get pods --all-namespaces --watch
+ ```
+
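+
+ As an alternative to watching individual pods, you can list the deployments in the relevant namespaces. The following is a sketch that filters the output with `grep`.
+
+ ```shell
+ kubectl get deployments --all-namespaces | grep -E 'cp-system|hubble-system|ingress-nginx|jet-system|ui-system'
+ ```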
+
+7. Create a DNS CNAME record that is mapped to the Palette `ingress-nginx-controller` load balancer. You can use the following command to retrieve the load balancer IP address. You may require the assistance of your network administrator to create the DNS record.
+
+
+
+ ```shell
+ kubectl get service ingress-nginx-controller --namespace ingress-nginx --output jsonpath='{.status.loadBalancer.ingress[0].hostname}'
+ ```
+
+
+
+ :::info
+
+ As you create tenants in Palette, the tenant name is prefixed to the domain name you assigned to Palette. For example, if you create a tenant named `tenant1` and the domain name you assigned to Palette is `palette.example.com`, the tenant URL will be `tenant1.palette.example.com`. You can create an additional wildcard DNS record to map all tenant URLs to the Palette load balancer.
+
+ :::
+
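+
+ Once the DNS records are created, you can confirm they resolve as expected. The following sketch assumes a hypothetical domain `palette.example.com` and a tenant record `tenant1.palette.example.com`.
+
+ ```shell
+ dig +short CNAME palette.example.com
+ dig +short CNAME tenant1.palette.example.com
+ ```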
+
+8. Use the custom domain name or the IP address of the load balancer to visit the Palette system console. Open a web browser, paste the custom domain URL in the address bar, and append the value `/system`. Alternatively, you can use the load balancer IP address with the value `/system` appended.
+
+
+
+ :::info
+
+ The first time you visit the Palette system console, a warning message about an untrusted SSL certificate may appear. This is expected, as you have not yet uploaded your SSL certificate to Palette. You can ignore this warning message and proceed.
+
+ :::
+
+
+
+ ![Screenshot of the Palette system console showing Username and Password fields.](/palette_installation_install-on-vmware_palette-system-console.png)
+
+
+9. Log in to the system console using the following default credentials.
+
+
+
+ | **Parameter** | **Value** |
+ | --- | --- |
+ | Username | `admin` |
+ | Password | `admin` |
+
+
+
+ After login, you will be prompted to create a new password. Enter a new password and save your changes. You will be redirected to the Palette system console.
+
+
+
+10. After login, a summary page is displayed. Palette is installed with a self-signed SSL certificate. To assign a different SSL certificate you must upload the SSL certificate, SSL certificate key, and SSL certificate authority files to Palette. You can upload the files using the Palette system console. Refer to the [Configure HTTPS Encryption](../../system-management/ssl-certificate-management.md) page for instructions on how to upload the SSL certificate files to Palette.
+
+
+
+
+:::caution
+
+If you plan to deploy host clusters into different networks, you may require a reverse proxy. Check out the [Configure Reverse Proxy](../../system-management/reverse-proxy.md) guide for instructions on how to configure a reverse proxy for Palette.
+
+:::
+
+
+You now have a self-hosted instance of Palette installed in a Kubernetes cluster. Make sure you retain the **values.yaml** file as you may need it for future upgrades.
+
+
+## Validate
+
+Use the following steps to validate the Palette installation.
+
+
+
+
+1. Open a web browser and navigate to the Palette system console. Paste the Palette URL in the address bar and append the value `/system`. Replace the domain name in the URL with your custom domain name or the IP address of the load balancer.
+
+
+
+2. Log in using the credentials you received from our support team. After login, you will be prompted to create a new password. Enter a new password and save your changes. You will be redirected to the Palette system console.
+
+
+3. Open a terminal session and issue the following command to verify the Palette installation. The command should return a list of pods in the `cp-system`, `hubble-system`, `ingress-nginx`, `jet-system`, and `ui-system` namespaces.
+
+
+
+ ```shell
+ kubectl get pods --all-namespaces --output custom-columns="NAMESPACE:metadata.namespace,NAME:metadata.name,STATUS:status.phase" \
+ | grep -E '^(cp-system|hubble-system|ingress-nginx|jet-system|ui-system)\s'
+ ```
+
+ Your output should look similar to the following.
+
+ ```shell hideClipboard
+ cp-system spectro-cp-ui-689984f88d-54wsw Running
+ hubble-system auth-85b748cbf4-6drkn Running
+ hubble-system auth-85b748cbf4-dwhw2 Running
+ hubble-system cloud-fb74b8558-lqjq5 Running
+ hubble-system cloud-fb74b8558-zkfp5 Running
+ hubble-system configserver-685fcc5b6d-t8f8h Running
+ hubble-system event-68568f54c7-jzx5t Running
+ hubble-system event-68568f54c7-w9rnh Running
+ hubble-system foreq-6b689f54fb-vxjts Running
+ hubble-system hashboard-897bc9884-pxpvn Running
+ hubble-system hashboard-897bc9884-rmn69 Running
+ hubble-system hutil-6d7c478c96-td8q4 Running
+ hubble-system hutil-6d7c478c96-zjhk4 Running
+ hubble-system mgmt-85dbf6bf9c-jbggc Running
+ hubble-system mongo-0 Running
+ hubble-system mongo-1 Running
+ hubble-system mongo-2 Running
+ hubble-system msgbroker-6c9b9fbf8b-mcsn5 Running
+ hubble-system oci-proxy-7789cf9bd8-qcjkl Running
+ hubble-system packsync-28205220-bmzcg Succeeded
+ hubble-system spectrocluster-6c57f5775d-dcm2q Running
+ hubble-system spectrocluster-6c57f5775d-gmdt2 Running
+ hubble-system spectrocluster-6c57f5775d-sxks5 Running
+ hubble-system system-686d77b947-8949z Running
+ hubble-system system-686d77b947-cgzx6 Running
+ hubble-system timeseries-7865bc9c56-5q87l Running
+ hubble-system timeseries-7865bc9c56-scncb Running
+ hubble-system timeseries-7865bc9c56-sxmgb Running
+ hubble-system user-5c9f6c6f4b-9dgqz Running
+ hubble-system user-5c9f6c6f4b-hxkj6 Running
+ ingress-nginx ingress-nginx-controller-2txsv Running
+ ingress-nginx ingress-nginx-controller-55pk2 Running
+ ingress-nginx ingress-nginx-controller-gmps9 Running
+ jet-system jet-6599b9856d-t9mr4 Running
+ ui-system spectro-ui-76ffdf67fb-rkgx8 Running
+ ```
+
+
+## Next Steps
+
+You have successfully installed Palette in a Kubernetes cluster. Your next steps are to configure Palette for your organization. Start by creating the first tenant to host your users. Use the [Create a Tenant](../../system-management/tenant-management.md) page for instructions on how to create a tenant.
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref.md b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref.md
new file mode 100644
index 0000000000..79ee713604
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref.md
@@ -0,0 +1,451 @@
+---
+sidebar_label: "Helm Chart Install Reference"
+title: "Helm Chart Install References"
+description: "Reference for Palette Helm Chart installation parameters."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 30
+tags: ["self-hosted", "enterprise"]
+---
+
+
+You can use the Palette Helm Chart to install Palette in a multi-node Kubernetes cluster in your production environment. The Helm Chart allows you to customize values in the **values.yaml** file. This reference lists and describes the parameters available in the **values.yaml** file from the Helm Chart for your installation. To learn how to install Palette using the Helm Chart, refer to the [Palette Helm install](install.md) guide.
+
+
+
+
+
+
+
+### Required Parameters
+
+The following parameters are required for a successful installation of Palette.
+
+
+| **Parameters** | **Description** | **Type** |
+| --- | --- | --- |
+| `config.env.rootDomain` | Used to configure the domain for the Palette installation. We recommend you create a CNAME DNS record that supports multiple subdomains. You can achieve this by using a wildcard prefix, for example `*.palette.abc.com`. Review the [Environment parameters](#environment) to learn more. | String |
+| `config.ociPackRegistry` or `config.ociPackEcrRegistry`| Specifies the FIPS image registry for Palette. You can use a self-hosted OCI registry or a public OCI registry we maintain and support. For more information, refer to the [Registries](#registries) section. | Object |
+| `scar`| The Spectro Cloud Artifact Repository (SCAR) credentials for Palette FIPS images. Our support team provides these credentials. For more information, refer to the [Registry](#registries) section. | Object |
+
+
+:::caution
+
+If you are installing an air-gapped version of Palette, you must provide the image swap configuration. For more information, refer to the [Image Swap Configuration](#image-swap-configuration) section.
+
+
+:::
+
+
+### MongoDB
+
+Palette uses MongoDB Enterprise as its internal database and supports two modes of deployment:
+
+- MongoDB Enterprise deployed and active inside the cluster.
+
+
+- MongoDB Enterprise is hosted on a software-as-a-service (SaaS) platform, such as MongoDB Atlas.
+
+The table below lists the parameters used to configure a MongoDB deployment.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `internal` | Specifies whether MongoDB is deployed in-cluster (`true`) or hosted on a SaaS platform such as MongoDB Atlas (`false`). | Boolean | `true` |
+| `databaseUrl`| The URL for MongoDB Enterprise. If using a remote MongoDB Enterprise instance, provide the remote URL. This parameter must be updated if `mongo.internal` is set to `false`. | String | `mongo-0.mongo,mongo-1.mongo,mongo-2.mongo` |
+| `databasePassword`| The base64-encoded MongoDB Enterprise password. If you don't provide a value, a random password will be auto-generated. | String | `""` |
+| `replicas`| The number of MongoDB replicas to start. | Integer | `3` |
+| `memoryLimit`| Specifies the memory limit for each MongoDB Enterprise replica.| String | `4Gi` |
+| `cpuLimit` | Specifies the CPU limit for each MongoDB Enterprise member.| String | `2000m` |
+| `pvcSize`| The storage settings for the MongoDB Enterprise database. Use increments of `5Gi` when specifying the storage size. The storage size applies to each replica instance. The total storage size for the cluster is `replicas` * `pvcSize`. | String | `20Gi`|
+| `storageClass`| The storage class for the MongoDB Enterprise database. | String | `""` |
+
+
+```yaml
+mongo:
+ internal: true
+ databaseUrl: "mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
+ databasePassword: ""
+ replicas: 3
+ cpuLimit: "2000m"
+ memoryLimit: "4Gi"
+ pvcSize: "20Gi"
+ storageClass: ""
+```
+
+### Config
+
+Review the following parameters to configure Palette for your environment. The `config` section contains the following subsections:
+
+
+#### Install Mode
+
+You can install Palette in connected or air-gapped mode. The table lists the parameters to configure the installation mode.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `installationMode` | Specifies the installation mode. Allowed values are `connected` or `airgap`. Set the value to `airgap` when installing in an air-gapped environment. | String | `connected` |
+
+```yaml
+config:
+ installationMode: "connected"
+```
+
+#### SSO
+
+You can configure Palette to use Single Sign-On (SSO) for user authentication. Configure the SSO parameters to enable SSO for Palette. You can also configure different SSO providers for each tenant after installation. Check out the [SAML & SSO Setup](../../../user-management/saml-sso/saml-sso.md) documentation for additional guidance.
+
+To configure SSO, you must provide the following parameters.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `saml.enabled` | Specifies whether to enable SSO SAML configuration by setting it to true. | Boolean | `false` |
+| `saml.acsUrlRoot` | The root URL of the Assertion Consumer Service (ACS).| String | `myfirstpalette.spectrocloud.com`|
+| `saml.acsUrlScheme` | The URL scheme of the ACS: `http` or `https`. | String | `https` |
+| `saml.audienceUrl` | The URL of the intended audience for the SAML response.| String| `https://www.spectrocloud.com` |
+| `saml.entityId` | The Entity ID of the Service Provider.| String | `https://www.spectrocloud.com`|
+| `saml.apiVersion` | Specify the SSO SAML API version to use.| String | `v1` |
+
+```yaml
+config:
+ sso:
+ saml:
+ enabled: false
+ acsUrlRoot: "myfirstpalette.spectrocloud.com"
+ acsUrlScheme: "https"
+ audienceUrl: "https://www.spectrocloud.com"
+ entityId: "https://www.spectrocloud.com"
+ apiVersion: "v1"
+```
+
+#### Email
+
+Palette uses email to send notifications to users. Email notifications are sent when inviting new users to the platform, for password resets, and when [webhook alerts](../../../clusters/cluster-management/health-alerts.md) are triggered. Use the following parameters to configure email settings for Palette.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `enabled` | Specifies whether to enable email configuration. | Boolean| `false`|
+| `emailId` | The email address for sending mail.| String| `noreply@spectrocloud.com` |
+| `smtpServer` | Simple Mail Transfer Protocol (SMTP) server used for sending mail. | String | `smtp.gmail.com` |
+| `smtpPort` | SMTP port used for sending mail.| Integer | `587` |
+| `insecureSkipVerifyTls` | Specifies whether to skip Transport Layer Security (TLS) verification for the SMTP connection.| Boolean | `true` |
+| `fromEmailId` | The email address used as the ***From*** address.| String | `noreply@spectrocloud.com` |
+| `password` | The base64-encoded SMTP password when sending emails.| String | `""` |
+
+```yaml
+config:
+ email:
+ enabled: false
+ emailId: "noreply@spectrocloud.com"
+ smtpServer: "smtp.gmail.com"
+ smtpPort: 587
+ insecureSkipVerifyTls: true
+ fromEmailId: "noreply@spectrocloud.com"
+ password: ""
+```
+
+#### Environment
+
+The following parameters are used to configure the environment.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `env.rootDomain` | Specifies the URL name assigned to Palette. The value assigned should have a Domain Name System (DNS) CNAME record mapped to the exposed IP address or the load balancer URL of the *ingress-nginx-controller* service. Optionally, if `ingress.ingressStaticIP` is assigned a value, you can use the same static IP address as the value for this parameter.| String| `""` |
+| `env.installerMode` | Specifies the installer mode. Do not modify the value.| String| `self-hosted` |
+| `env.installerCloud` | Specifies the cloud provider. Leave this parameter empty if you are installing a self-hosted Palette. | String | `""` |
+
+```yaml
+config:
+ env:
+ rootDomain: ""
+```
+
+
+:::caution
+
+As you create tenants in Palette, the tenant name is prefixed to the domain name you assigned to Palette. For example, if you create a tenant named tenant1 and the domain name you assigned to Palette is `palette.example.com`, the tenant URL will be `tenant1.palette.example.com`. We recommend you create an additional wildcard DNS record to map all tenant URLs to the Palette load balancer. For example, `*.palette.example.com`.
+
+:::
+
+#### Cluster
+
+Use the following parameters to configure the Kubernetes cluster.
+
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `stableEndpointAccess` | Set to `true` if the Kubernetes cluster is deployed with a public endpoint. If the cluster is deployed in a private network through a stable private endpoint, set to `false`. | Boolean | `false` |
+
+```yaml
+config:
+ cluster:
+ stableEndpointAccess: false
+```
+
+### Registries
+
+Palette requires credentials to access the required Palette images. You can configure different types of registries for Palette to download the required images. You must configure at least one Open Container Initiative (OCI) registry for Palette. You must also provide the credentials for the Spectro Cloud Artifact Repository (SCAR) to download the required FIPS images.
+
+
+
+#### OCI Registry
+
+
+Palette requires access to an OCI registry that contains all the required FIPS packs. You can host your own OCI registry and configure Palette to reference the registry. Alternatively, you can use the public OCI registry that we provide. Refer to the [`ociPackEcrRegistry`](#oci-ecr-registry) section to learn more about the publicly available OCI registry.
+
+
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `ociPackRegistry.endpoint` | The endpoint URL for the registry. | String| `""` |
+| `ociPackRegistry.name` | The name of the registry. | String| `""` |
+| `ociPackRegistry.password` | The base64-encoded password for the registry. | String| `""` |
+| `ociPackRegistry.username` | The username for the registry. | String| `""` |
+| `ociPackRegistry.baseContentPath`| The base path for the registry. | String | `""` |
+| `ociPackRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection. | Boolean | `false` |
+| `ociPackRegistry.caCert` | The registry's base64-encoded certificate authority (CA) certificate. | String | `""` |
+
+
+```yaml
+config:
+ ociPackRegistry:
+ endpoint: ""
+ name: ""
+ password: ""
+ username: ""
+ baseContentPath: ""
+ insecureSkipVerify: false
+ caCert: ""
+```
+
+#### OCI ECR Registry
+
+We expose a public OCI ECR registry that you can configure Palette to reference. If you want to host your own OCI registry, refer to the [OCI Registry](#oci-registry) section instead.
+The public registry is hosted in Amazon Elastic Container Registry (ECR). Our support team provides the credentials for the OCI ECR registry.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `ociPackEcrRegistry.endpoint` | The endpoint URL for the registry. | String| `""` |
+| `ociPackEcrRegistry.name` | The name of the registry. | String| `""` |
+| `ociPackEcrRegistry.accessKey` | The base64-encoded access key for the registry. | String| `""` |
+| `ociPackEcrRegistry.secretKey` | The base64-encoded secret key for the registry. | String| `""` |
+| `ociPackEcrRegistry.baseContentPath`| The base path for the registry. | String | `""` |
+| `ociPackEcrRegistry.isPrivate` | Specifies whether the registry is private. | Boolean | `true` |
+| `ociPackEcrRegistry.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the registry connection. | Boolean | `false` |
+| `ociPackEcrRegistry.caCert` | The registry's base64-encoded certificate authority (CA) certificate. | String | `""` |
+
+```yaml
+config:
+ ociPackEcrRegistry:
+ endpoint: ""
+ name: ""
+ accessKey: ""
+ secretKey: ""
+ baseContentPath: ""
+ isPrivate: true
+ insecureSkipVerify: false
+ caCert: ""
+```
+
+#### Spectro Cloud Artifact Repository (SCAR)
+
+SCAR credentials are required to download the necessary FIPS manifests. Our support team provides the SCAR credentials.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `scar.endpoint` | The endpoint URL of SCAR. | String| `""` |
+| `scar.username` |The username for SCAR. | String| `""` |
+| `scar.password` | The base64-encoded password for the SCAR. | String| `""` |
+| `scar.insecureSkipVerify` | Specifies whether to skip Transport Layer Security (TLS) verification for the SCAR connection. | Boolean | `false` |
+| `scar.caCert` | The base64-encoded certificate authority (CA) certificate for SCAR. | String | `""` |
+
+
+
+ ```yaml
+ config:
+ scar:
+ endpoint: ""
+ username: ""
+ password: ""
+ insecureSkipVerify: false
+ caCert: ""
+ ```
+
+#### Image Swap Configuration
+
+You can configure Palette to use image swap to download the required images. This is an advanced configuration option, and it is only required for air-gapped deployments. You must also install the Palette Image Swap Helm chart to use this option; otherwise, Palette will ignore the configuration.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `imageSwapInitImage` | The image swap init image. | String | `gcr.io/spectro-images-public/thewebroot/imageswap-init:v1.5.2` |
+| `imageSwapImage` | The image swap image. | String | `gcr.io/spectro-images-public/thewebroot/imageswap:v1.5.2` |
+| `imageSwapConfig`| The image swap configuration for specific environments. | String | `""` |
+| `imageSwapConfig.isEKSCluster` | Specifies whether the cluster is an Amazon EKS cluster. Set to `false` if the Kubernetes cluster is not an EKS cluster. | Boolean | `true` |
+
+
+
+ ```yaml
+ config:
+ imageSwapImages:
+ imageSwapInitImage: "gcr.io/spectro-images-public/thewebroot/imageswap-init:v1.5.2"
+ imageSwapImage: "gcr.io/spectro-images-public/thewebroot/imageswap:v1.5.2"
+
+ imageSwapConfig:
+ isEKSCluster: true
+ ```
+
+### NATS
+
+Palette uses [NATS](https://nats.io) and gRPC for communication between Palette components. Dual support for NATS and gRPC is available. You can enable the deployment of an additional load balancer for NATS. Host clusters deployed by Palette use the load balancer to communicate with the Palette control plane. This is an advanced configuration option and is not required for most deployments. Speak with your support representative before enabling this option.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `nats.enabled`| Specifies whether to enable the deployment of a NATS load balancer. | Boolean | `true` |
+| `nats.internal`| Specifies whether to deploy a load balancer or use the host network. If this value is set to `true`, then the remaining NATS parameters are ignored. | Boolean | `true` |
+| `nats.natsUrl`| The NATS URL. This can be a comma separated list of mappings for the NATS load balancer service. For example, "message1.dev.spectrocloud.com:4222,message2.dev.spectrocloud.com:4222". This parameter is mandatory if `nats.internal` is set to `false`. If `nats.internal` is set to `true`, you can leave this parameter empty. | String | `""` |
+| `nats.annotations`| A map of key-value pairs that specifies load balancer annotations for NATS. You can use annotations to change the behavior of the load balancer and the Nginx configuration. This is an advanced setting. We recommend you consult with your assigned support team representative prior to modification. | Object | `{}` |
+| `nats.natsStaticIP`| Specify a static IP address for the NATS load balancer service. If empty, a dynamic IP address will be assigned to the load balancer. | String | `""` |
+
+
+
+
+```yaml
+nats:
+  enabled: true
+  internal: true
+  natsUrl: ""
+  annotations: {}
+  natsStaticIP: ""
+```
+
+
+
+
+### gRPC
+
+gRPC is used for communication between Palette components. You can enable the deployment of an additional load balancer for gRPC. Host clusters deployed by Palette use the load balancer to communicate with the Palette control plane. This is an advanced configuration option, and it is not required for most deployments. Speak with your support representative before enabling this option. Dual support for NATS and gRPC is available.
+
+If you want to use an external gRPC endpoint, you must provide a domain name for the gRPC endpoint and a valid x509 certificate. Additionally, you must provide a custom domain name for the endpoint. A CNAME DNS record must point to the IP address of the gRPC load balancer. For example, if your Palette domain name is `palette.example.com`, you could create a CNAME DNS record for `grpc.palette.example.com` that points to the IP address of the load balancer dedicated to gRPC.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `external`| Specifies whether to use an external gRPC endpoint. | Boolean | `false` |
+| `endpoint`| The gRPC endpoint. | String | `""` |
+| `caCertificateBase64`| The base64-encoded certificate authority (CA) certificate for the gRPC endpoint. | String | `""` |
+| `serverCrtBase64`| The base64-encoded server certificate for the gRPC endpoint. | String | `""` |
+| `serverKeyBase64`| The base64-encoded server key for the gRPC endpoint. | String | `""` |
+| `insecureSkipVerify`| Specifies whether to skip Transport Layer Security (TLS) verification for the gRPC endpoint. | Boolean | `false` |
+
+
+
+
+```yaml
+grpc:
+ external: false
+ endpoint: ""
+ caCertificateBase64: ""
+ serverCrtBase64: ""
+ serverKeyBase64: ""
+ insecureSkipVerify: false
+```
+
+### Ingress
+
+Palette deploys an Nginx Ingress Controller. This controller is used to route traffic to the Palette control plane. You can change the default behavior and omit the deployment of an Nginx Ingress Controller.
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `enabled`| Specifies whether to deploy an Nginx controller. Set to `false` if you do not want an Nginx controller deployed. | Boolean | `true` |
+| `ingress.internal`| Specifies whether to deploy a load balancer or use the host network. | Boolean | `false` |
+| `ingress.certificate`| Specify the base64-encoded x509 SSL certificate for the Nginx Ingress Controller. If left blank, the Nginx Ingress Controller will generate a self-signed certificate. | String | `""` |
+| `ingress.key`| Specify the base64-encoded x509 SSL certificate key for the Nginx Ingress Controller. | String | `""` |
+| `ingress.annotations`| A map of key-value pairs that specifies load balancer annotations for ingress. You can use annotations to change the behavior of the load balancer and the Nginx configuration. This is an advanced setting. We recommend you consult with your assigned support team representative prior to modification. | Object | `{}` |
+| `ingress.ingressStaticIP`| Specify a static IP address for the ingress load balancer service. If empty, a dynamic IP address will be assigned to the load balancer. | String | `""` |
+| `ingress.terminateHTTPSAtLoadBalancer`| Specifies whether to terminate HTTPS at the load balancer. | Boolean | `false` |
+
+
+```yaml
+ingress:
+ enabled: true
+ ingress:
+ internal: false
+ certificate: ""
+ key: ""
+ annotations: {}
+ ingressStaticIP: ""
+ terminateHTTPSAtLoadBalancer: false
+```
+
+### Spectro Proxy
+
+You can specify a reverse proxy server that clusters deployed through Palette can use to facilitate network connectivity to the cluster's Kubernetes API server. Host clusters deployed in private networks can use the [Spectro Proxy pack](../../../integrations/frp.md) to expose the cluster's Kubernetes API to downstream clients that are not in the same network. Check out the [Reverse Proxy](../../system-management/reverse-proxy.md) documentation to learn more about setting up a reverse proxy server for Palette.
+
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `frps.enabled`| Specifies whether to enable the Spectro server-side proxy. | Boolean | `false` |
+| `frps.frpHostURL`| The Spectro server-side proxy URL. | String | `""` |
+| `frps.server.crt`| The base64-encoded server certificate for the Spectro server-side proxy. | String | `""` |
+| `frps.server.key`| The base64-encoded server key for the Spectro server-side proxy. | String | `""` |
+| `frps.ca.crt`| The base64-encoded certificate authority (CA) certificate for the Spectro server-side proxy. | String | `""` |
+
+```yaml
+frps:
+  frps:
+    enabled: false
+    frpHostURL: ""
+    server:
+      crt: ""
+      key: ""
+    ca:
+      crt: ""
+```
+
+### UI System
+
+The table lists parameters to configure the Palette User Interface (UI) behavior. You can disable the UI or the Network Operations Center (NOC) UI. You can also specify the MapBox access token and style layer ID for the NOC UI. MapBox is a third-party service that provides mapping and location services. To learn more about MapBox and how to obtain an access token, refer to the [MapBox Access tokens](https://docs.mapbox.com/help/getting-started/access-tokens) guide.
+
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `enabled`| Specifies whether to enable the Palette UI. | Boolean | `true` |
+| `ui.nocUI.enable`| Specifies whether to enable the Palette Network Operations Center (NOC) UI. Enabling this parameter requires the `ui.nocUI.mapBoxAccessToken`. Once enabled, all cluster locations will be reported to MapBox. This feature is not FIPS compliant. | Boolean | `false` |
+| `ui.nocUI.mapBoxAccessToken`| The MapBox access token for the Palette NOC UI. | String | `""` |
+| `ui.nocUI.mapBoxStyledLayerID`| The MapBox style layer ID for the Palette NOC UI. | String | `""` |
+
+
+
+```yaml
+ui-system:
+ enabled: true
+ ui:
+ nocUI:
+ enable: false
+ mapBoxAccessToken: ""
+ mapBoxStyledLayerID: ""
+```
+
+
+
+
+### Reach System
+
+You can configure Palette to use a proxy server to access the internet. Set the parameter `reach-system.reachSystem.enabled` to `true` to enable the proxy server. Proxy settings are configured in the `reach-system.reachSystem.proxySettings` section.
+
+
+| **Parameters** | **Description** | **Type** | **Default value** |
+| --- | --- | --- | --- |
+| `reachSystem.enabled`| Specifies whether to enable the usage of a proxy server for Palette. | Boolean | `false` |
+| `reachSystem.proxySettings.http_proxy`| The HTTP proxy server URL. | String | `""` |
+| `reachSystem.proxySettings.https_proxy`| The HTTPS proxy server URL. | String | `""` |
+| `reachSystem.proxySettings.no_proxy`| A list of hostnames or IP addresses that should not be proxied. | String | `""` |
+
+
+```yaml
+reach-system:
+  reachSystem:
+    enabled: false
+    proxySettings:
+      http_proxy: ""
+      https_proxy: ""
+      no_proxy: ""
+```
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/_category_.json b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/_category_.json
new file mode 100644
index 0000000000..3fca6fb9f9
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/_category_.json
@@ -0,0 +1,3 @@
+{
+ "position": 0
+}
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-instructions.md b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-instructions.md
new file mode 100644
index 0000000000..ef4be316eb
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/airgap-instructions.md
@@ -0,0 +1,716 @@
+---
+sidebar_label: "Airgap Instructions"
+title: "Install in an Air Gap Environment"
+description: "Learn how to install Palette into an air gap environment."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 20
+tags: ["self-hosted", "enterprise", "air-gap"]
+---
+
+You can install a self-hosted version of Palette into a VMware environment without direct internet access. This type of installation is referred to as an *air gap* installation.
+
+In a standard Palette installation, the following artifacts are downloaded by default from the public Palette repository.
+
+* Palette platform manifests and required platform packages.
+
+
+* Container images for core platform components and 3rd party dependencies.
+
+
+* Palette Packs.
+
+
+The installation process changes a bit in an air gap environment due to the lack of internet access. Before the primary Palette installation step, you must download the three required Palette artifacts mentioned above. The other significant change is that Palette's default public repository is not used. Instead, a private repository supports all Palette operations pertaining to storing images and packages.
+
+The following diagram is a high-level overview of the order of operations required to deploy a self-hosted instance of Palette in an airgap environment.
+
+
+![An architecture diagram outlining the five different install phases](/enterprise-version_air-gap-repo_overview-order-diagram.png)
+
+
+The airgap installation can be simplified into five major phases.
+
+
+1. Download the Open Virtual Appliance (OVA) image and deploy the instance hosting the private repository that supports the airgap environment.
+
+
+2. The private Spectro Cloud repository is initialized, and all the Palette-required artifacts are downloaded and available.
+
+
+3. The Palette Install OVA is deployed, configured, and initialized.
+
+
+4. The scale-up process to a highly available three-node installation begins.
+
+
+5. Palette is ready for usage.
+
+
+This guide focuses on the first two installation phases, as the remaining ones are covered in the [Instructions](install.md) guide.
+
+
+## Prerequisites
+
+* The following minimum resources are required to deploy Palette.
+ * 2 vCPU
+ * 4 GB of Memory
+ * 100 GB of Storage. Storage sizing depends on your intended update frequency and data retention model.
+
+* Ensure the following ports allow inbound network traffic. A quick way to check port reachability is shown at the end of this section.
+ * 80
+ * 443
+ * 5000
+ * 8000
+
+
+* Request the Palette self-hosted installer image and the Palette air gap installer image. To request the installer images, please contact our support team by sending an email to support@spectrocloud.com. Kindly provide the following information in your email:
+
+ - Your full name
+ - Organization name (if applicable)
+ - Email address
+ - Phone number (optional)
+ - A brief description of your intended use for the Palette Self-host installer image.
+
+Our dedicated support team will promptly get in touch with you to provide the necessary assistance and share the installer image.
+
+If you have any questions or concerns, please feel free to contact support@spectrocloud.com.
+
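+
+Once the appliance is deployed and reachable on the network, you can spot-check that the required ports accept connections from another host. The following sketch uses `nc` and assumes a hypothetical appliance IP address of `10.10.249.12`.
+
+```shell
+for port in 80 443 5000 8000; do
+  nc -zv 10.10.249.12 "$port"
+done
+```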
+
+## Deploy Air Gapped Appliance
+
+
+1. Log in to vCenter Server by using the vSphere Client.
+
+
+2. Navigate to the Datacenter and select the cluster you want to use for the installation. Right-click on the cluster and select **Deploy OVF Template**.
+
+
+3. Select the airgap OVA installer image you downloaded after receiving guidance from our support team.
+
+
+4. Select the folder where you want to install the Virtual Machine (VM) and assign a name to the VM.
+
+
+5. Next, select the compute resource.
+
+
+6. Review the details page. You may get a warning message stating the certificate is not trusted. You can ignore the message and click **Next**.
+
+
+7. Select your storage device and storage policy. Click on **Next** to proceed.
+
+
+8. Choose a network for your appliance and select **Next**.
+
+
+9. Fill out the remaining template customization options. You can modify the following input fields.
+
+ | Parameter | Description | Default Value |
+ | --- | --- | -- |
+ | **Encoded user-data** | In order to fit into an XML attribute, this value is base64 encoded. This value will be decoded, and then processed normally as user-data. | - |
+ | **ssh public keys** | This field is optional but indicates that the instance should populate the default user's `authorized_keys` with the provided public key. | -|
+ | **Default User's password** | Setting this value allows password-based login. The password will be good for only a single login. If set to the string `RANDOM` then a random password will be generated, and written to the console. | - |
+ | **A Unique Instance ID for this instance** | Specifies the instance id. This is required and used to determine if the machine should take "first boot" actions| `id-ovf`|
+ | **Hostname** | Specifies the hostname for the appliance. | `ubuntuguest` |
+ | **URL to seed instance data from** | This field is optional but indicates that the instance should 'seed' user-data and meta-data from the given URL.| -|
+
+10. Click on **Next** to complete the deployment wizard. Upon completion, the cloning process will begin. The cloning process takes a few minutes to complete.
+
+
+11. Power on the VM and click on the **Launch Web Console** button to access the instance's terminal.
+
+
+12. Configure a static IP address on the node by editing **/etc/netplan/50-cloud-init.yaml**.
+
+ ```shell
+ sudo vi /etc/netplan/50-cloud-init.yaml
+ ```
+
+ Use the following sample configuration as a starting point but feel free to change the configuration file as required for your environment. To learn more about Netplan, check out the [Netplan configuration examples](https://netplan.io/examples) from Canonical.
+
+
+
+ ```yaml
+ network:
+ version: 2
+ renderer: networkd
+ ethernets:
+ ens192:
+ dhcp4: false
+ addresses:
+ - 10.10.244.9/18 # your static IP and subnet mask
+ gateway4: 10.10.192.1 # your gateway IP
+ nameservers:
+ addresses: [10.10.128.8] # your DNS nameserver IP address.
+ ```
+
+ To exit Vi, press the **ESC** key and type `:wq` followed by the **Enter** key.
+
+13. Issue the `netplan` command to update the network settings.
+
+
+
+ ```shell
+ sudo netplan apply
+ ```
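+
+ To confirm the new settings took effect, you can inspect the interface and default route. This is an optional check; `ens192` is the interface name from the sample configuration above.
+
+ ```shell
+ ip addr show ens192
+ ip route show default
+ ```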
+
+14. Give the instance one to two minutes to finish booting before issuing the following command. The next step is to start the airgap setup script that stands up the Spectro Repository. Issue the command below and replace `X.X.X.X` with the static IP you provided in the Netplan configuration file.
+
+
+
+ ```shell
+ sudo /opt/spectro/airgap-setup.sh X.X.X.X
+ ```
+
+ Record the output of the setup command, as you will use it when deploying the Quick Start appliance later in the installation process.
+
+ Example Output:
+ ```shell hideClipboard
+ Setting up Manifests
+ Setting up Manifests
+ Setting up SSL Certs
+ Setup Completed
+
+ Details:
+ -------
+ Spectro Cloud Repository
+ UserName: XXXXXXXXX
+ Password: XXXXXXXXXX
+ Location: https://10.10.249.12
+ Artifact Repo Certificate:
+ LS0tLS1CRUdJ.............
+
+ Pack Registry
+ URL: https://10.10.249.12:5000
+ Username: XXXXXXXXX
+ Password: XXXXXXXXX
+ ```
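+
+ If you prefer to capture these details to a file as the script runs, you can pipe the output through `tee`. This is optional, and the output file path is only an example.
+
+ ```shell
+ sudo /opt/spectro/airgap-setup.sh X.X.X.X | tee ~/airgap-setup-details.txt
+ ```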
+
+15. If you need to configure the instance with proxy settings, go ahead and do so now. You can configure proxy settings by using environment variables. Replace the values with your environment's respective values.
+
+
+
+ ```shell
+ export http_proxy=http://10.1.1.1:8888
+ export https_proxy=https://10.1.1.1:8888
+ export no_proxy=.example.dev,10.0.0.0/8
+ ```
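+
+ The exports above apply only to the current shell session. If you want the proxy settings to persist across logins, one option is to append them to the default user's shell profile. This is a sketch; adjust it to your environment's conventions.
+
+ ```shell
+ # Persist the proxy settings for future sessions of the ubuntu user.
+ cat <<'EOF' >> ~/.bashrc
+ export http_proxy=http://10.1.1.1:8888
+ export https_proxy=https://10.1.1.1:8888
+ export no_proxy=.example.dev,10.0.0.0/8
+ EOF
+ ```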
+
+16. The next set of steps will download the required binaries to support a Palette installation, such as the Palette Installer, required Kubernetes packages, and kubeadm packages. You can download these artifacts directly from the instance, or download them externally and transfer them to the instance, as shown in the `scp` example below. Click on each tab for further guidance.
+
+
+
+ :::caution
+
+ You must download the following three resources. Our support team will provide you with the credentials and download URL.
+ Click on each tab to learn more about each resource and steps for downloading.
+
+ :::
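+
+ If you download a binary on another machine instead of on the appliance, you can copy it to the appliance over SSH. The sketch below uses `scp`; the key path, file name, and IP address are placeholders for your values.
+
+ ```shell
+ scp -i ~/path/to/your/private-key ./airgap-v3.3.15.bin ubuntu@10.1.1.1:/home/ubuntu/
+ ```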
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/airgap-v3.3.15.bin \
+  --output airgap-v3.3.15.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-v3.3.15.bin && sudo ./airgap-v3.3.15.bin
+ ```
+
+ Example Output:
+ ```shell
+ sudo ./airgap-v3.3.15.bin
+ Verifying archive integrity... 100% MD5 checksums are OK. All good.
+ Uncompressing Airgap Setup - Version 3.3.15 100%
+ Setting up Packs
+ Setting up Images
+ - Pushing image k8s.gcr.io/kube-controller-manager:v1.22.10
+ - Pushing image k8s.gcr.io/kube-proxy:v1.22.10
+ - Pushing image k8s.gcr.io/kube-apiserver:v1.22.10
+ - Pushing image k8s.gcr.io/kube-scheduler:v1.22.10
+ …
+ Setup Completed
+ ```
+
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/airgap-k8s-v3.3.15.bin \
+ --output airgap-k8s-v3.3.15.bin
+ ```
+
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-k8s-v3.3.15.bin && sudo ./airgap-k8s-v3.3.15.bin
+ ```
+
+ Example Output:
+ ```shell
+ sudo ./airgap-k8s-v3.3.15.bin
+ Verifying archive integrity... 100% MD5 checksums are OK. All good.
+ Uncompressing Airgap K8S Images Setup - Version 3.3.15 100%
+ Setting up Packs
+ Setting up Images
+ - Pushing image k8s.gcr.io/kube-controller-manager:v1.22.10
+ - Pushing image k8s.gcr.io/kube-proxy:v1.22.10
+ - Pushing image k8s.gcr.io/kube-apiserver:v1.22.10
+ - Pushing image k8s.gcr.io/kube-scheduler:v1.22.10
+ …
+ Setup Completed
+ ```
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-kubeadm.bin \
+ --output airgap-edge-kubeadm.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-kubeadm.bin && sudo ./airgap-edge-kubeadm.bin
+ ```
+
+ Example Output:
+ ```shell
+ sudo ./airgap-edge-kubeadm.bin
+ Verifying archive integrity... 100% MD5 checksums are OK. All good.
+ Uncompressing Airgap Edge Packs - Kubeadm Images 100%
+ Setting up Images
+ - Skipping image k8s.gcr.io/coredns/coredns:v1.8.6
+ - Pushing image k8s.gcr.io/etcd:3.5.1-0
+ - Pushing image k8s.gcr.io/kube-apiserver:v1.23.12
+ - Pushing image k8s.gcr.io/kube-controller-manager:v1.23.12
+ - Pushing image k8s.gcr.io/kube-proxy:v1.23.12
+ …
+ Setup Completed
+ ```
+
+
+
+
+
+
+
+
+17. If you plan to use Edge deployments, download the packages your Edge deployments require. If you do not plan to use Edge, skip to the end of this section. You can come back to this step in the future and add the packages if needed. Click on the `...` tab for additional options.
+
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu22-k3s.bin \
+ --output airgap-edge-ubuntu22-k3s.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu22-k3s.bin && sudo ./airgap-edge-ubuntu22-k3s.bin
+ ```
+
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu22-rke.bin \
+ --output airgap-edge-ubuntu22-rke.bin
+ ```
+
+:::tip
+
+ If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu22-rke.bin && sudo ./airgap-edge-ubuntu22-rke.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu22-kubeadm.bin \
+ --output airgap-edge-ubuntu22-kubeadm.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu22-kubeadm.bin && sudo ./airgap-edge-ubuntu22-kubeadm.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu20-k3s.bin \
+ --output airgap-edge-ubuntu20-k3s.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu20-k3s.bin && sudo ./airgap-edge-ubuntu20-k3s.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu20-rke.bin \
+ --output airgap-edge-ubuntu20-rke.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu20-rke.bin && sudo ./airgap-edge-ubuntu20-rke.bin
+ ```
+
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-ubuntu20-kubeadm.bin \
+ --output airgap-edge-ubuntu20-kubeadm.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-ubuntu20-kubeadm.bin && sudo ./airgap-edge-ubuntu20-kubeadm.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-opensuse-k3s.bin \
+ --output airgap-edge-opensuse-k3s.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-opensuse-k3s.bin && sudo ./airgap-edge-opensuse-k3s.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-opensuse-rke.bin \
+ --output airgap-edge-opensuse-rke.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-opensuse-rke.bin && sudo ./airgap-edge-opensuse-rke.bin
+ ```
+
+
+
+
+
+ Download the binary by using the URL provided by the Palette support team. Change the version number as needed.
+
+
+
+ ```shell
+ curl --user XXXX:YYYYY https:///airgap/packs/3.3/airgap-edge-opensuse-kubeadm.bin \
+ --output airgap-edge-opensuse-kubeadm.bin
+ ```
+
+:::tip
+
+If you receive a certificate error, use the `-k` or `--insecure` flag.
+
+:::
+
+ Assign the proper permissions and start the download script.
+
+
+
+ ```shell
+ sudo chmod 755 ./airgap-edge-opensuse-kubeadm.bin && sudo ./airgap-edge-opensuse-kubeadm.bin
+ ```
+
+
+
+
+
+
+
+----
+
+
+The next step of the installation process is to deploy the Palette appliance by following the steps in the [Instructions](install.md) guide. If you need to review the Spectro Cloud Repository details, issue the following command for detailed output.
+
+
+
+```shell
+sudo /bin/airgap-setup.sh
+```
+
+
+
+:::info
+
+You can review all the logs related to the setup of the private Spectro repository in **/tmp/airgap-setup.log**.
+
+:::
+
+
+## Validate
+
+You can validate that the Spectro Repository you deployed is available and ready for the next steps of the installation process. If you provided the appliance with an SSH key, you can skip to step five.
+
+
+1. Log in to vCenter Server by using the vSphere Client.
+
+
+2. Navigate to your Datacenter and locate your VM. Click on the VM to access its details page.
+
+
+3. Power on the VM.
+
+
+4. Click on **Launch Web Console** to access the terminal.
+
+
+5. Log in with the user `ubuntu` and the user password you specified during the installation. If you are using SSH, use the following command, and ensure you specify the path to your SSH private key and replace the IP address with your appliance's static IP.
+
+
+
+ ```shell
+ ssh -i ~/path/to/your/file ubuntu@10.1.1.1
+ ```
+
+
+6. Verify the registry server is up and available. Replace the `10.1.1.1` value with your appliance's IP address.
+
+
+
+ ```shell
+ curl --insecure https://10.1.1.1:5000/health
+ ```
+
+ Example Output:
+ ```shell
+ {"status":"UP"}
+ ```
+
+7. Ensure you can log into your registry server. Use the credentials provided to you by the `airgap-setup.sh` script. Replace the `10.1.1.1` value with your appliance's IP address.
+
+
+
+ ```shell
+ curl --insecure --user admin:admin@airgap https://10.1.1.1:5000/v1/_catalog
+ ```
+
+ Example Output:
+ ```
+ {"metadata":{"lastUpdatedTime":"2023-04-11T21:12:09.647295105Z"},"repositories":[{"name":"amazon-linux-eks","tags":[]},{"name":"aws-efs","tags":[]},{"name":"centos-aws","tags":[]},{"name":"centos-azure","tags":[]},{"name":"centos-gcp","tags":[]},{"name":"centos-libvirt","tags":[]},{"name":"centos-vsphere","tags":[]},{"name":"cni-aws-vpc-eks","tags":[]},{"name":"cni-aws-vpc-eks-helm","tags":[]},{"name":"cni-azure","tags":[]},{"name":"cni-calico","tags":[]},{"name":"cni-calico-azure","tags":[]},{"name":"cni-cilium-oss","tags":[]},{"name":"cni-custom","tags":[]},{"name":"cni-kubenet","tags":[]},{"name":"cni-tke-global-router","tags":[]},{"name":"csi-aws","tags":[]},{"name":"csi-aws-ebs","tags":[]},{"name":"csi-aws-efs","tags":[]},{"name":"csi-azure","tags":[]},{"name":"csi-gcp","tags":[]},{"name":"csi-gcp-driver","tags":[]},{"name":"csi-longhorn","tags":[]},{"name":"csi-longhorn-addon","tags":[]},{"name":"csi-maas-volume","tags":[]},{"name":"csi-nfs-subdir-external","tags":[]},{"name":"csi-openstack-cinder","tags":[]},{"name":"csi-portworx-aws","tags":[]},{"name":"csi-portworx-gcp","tags":[]},{"name":"csi-portworx-generic","tags":[]},{"name":"csi-portworx-vsphere","tags":[]},{"name":"csi-rook-ceph","tags":[]},{"name":"csi-rook-ceph-addon","tags":[]},{"name":"csi-tke","tags":[]},{"name":"csi-topolvm-addon","tags":[]},{"name":"csi-vsphere-csi","tags":[]},{"name":"csi-vsphere-volume","tags":[]},{"name":"edge-k3s","tags":[]},{"name":"edge-k8s","tags":[]},{"name":"edge-microk8s","tags":[]},{"name":"edge-native-byoi","tags":[]},{"name":"edge-native-opensuse","tags":[]},{"name":"edge-native-ubuntu","tags":[]},{"name":"edge-rke2","tags":[]},{"name":"external-snapshotter","tags":[]},{"name":"generic-byoi","tags":[]},{"name":"kubernetes","tags":[]},{"name":"kubernetes-aks","tags":[]},{"name":"kubernetes-coxedge","tags":[]},{"name":"kubernetes-eks","tags":[]},{"name":"kubernetes-eksd","tags":[]},{"name":"kubernetes-konvoy","tags":[]},{"name":"kubernetes-microk8s","tags":[]},{"name":"kubernetes-rke2","tags":[]},{"name":"kubernetes-tke","tags":[]},{"name":"portworx-add-on","tags":[]},{"name":"spectro-mgmt","tags":[]},{"name":"tke-managed-os","tags":[]},{"name":"ubuntu-aks","tags":[]},{"name":"ubuntu-aws","tags":[]},{"name":"ubuntu-azure","tags":[]},{"name":"ubuntu-coxedge","tags":[]},{"name":"ubuntu-edge","tags":[]},{"name":"ubuntu-gcp","tags":[]},{"name":"ubuntu-libvirt","tags":[]},{"name":"ubuntu-maas","tags":[]},{"name":"ubuntu-openstack","tags":[]},{"name":"ubuntu-vsphere","tags":[]},{"name":"volume-snapshot-controller","tags":[]}],"listMeta":{"continue":""}}
+ ```
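+
+ The catalog response is easier to scan if you extract only the repository names. This is optional and assumes `jq` is installed on the machine where you issue the command.
+
+ ```shell
+ curl --insecure --user admin:admin@airgap https://10.1.1.1:5000/v1/_catalog | \
+  jq --raw-output '.repositories[].name'
+ ```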
+
+
+8. Next, validate the Spectro repository is available. Replace the IP with your appliance's IP address.
+
+
+
+ ```shell
+ curl --insecure --user spectro:admin@airgap https://10.1.1.1
+ ```
+
+ Output:
+ ```html hideClipboard
+ <!DOCTYPE html>
+ <html>
+ <head>
+ <title>Welcome to nginx!</title>
+ </head>
+ <body>
+ <h1>Welcome to nginx!</h1>
+ <p>If you see this page, the nginx web server is successfully installed and
+ working. Further configuration is required.</p>
+
+ <p>For online documentation and support please refer to
+ <a href="http://nginx.org/">nginx.org</a>.<br/>
+ Commercial support is available at
+ <a href="http://nginx.com/">nginx.com</a>.</p>
+
+ <p><em>Thank you for using nginx.</em></p>
+ </body>
+ </html>
+
+
+ ```
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install-on-vmware.md b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install-on-vmware.md
new file mode 100644
index 0000000000..e16e808359
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install-on-vmware.md
@@ -0,0 +1,24 @@
+---
+sidebar_label: "VMware"
+title: "Install Palette on VMware"
+description: "Learn how to install Palette on VMware."
+icon: ""
+hide_table_of_contents: false
+tags: ["palette", "self-hosted", "vmware"]
+---
+
+
+
+
+Palette can be installed on VMware vSphere with internet connectivity or in an airgap environment. When you install Palette, a three-node cluster is created. You use the interactive Palette CLI to install Palette on VMware vSphere. Refer to [Access Palette](../../enterprise-version.md#access-palette) for instructions on requesting repository access.
+
+## Resources
+
+- [Install on VMware](install.md)
+
+
+- [Airgap Install Instructions](airgap-instructions.md)
+
+
+- [VMware System Requirements](vmware-system-requirements.md)
+
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install.md b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install.md
new file mode 100644
index 0000000000..3db88a10c2
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/install.md
@@ -0,0 +1,84 @@
+---
+sidebar_label: "Instructions"
+title: "Install Palette on VMware"
+description: "Learn how to install Palette on VMware."
+icon: ""
+sidebar_position: 10
+hide_table_of_contents: false
+tags: ["palette", "self-hosted", "vmware"]
+---
+
+
+
+
+
+Deployment of an enterprise cluster is a migration process from the quick start mode. You may choose to deploy the enterprise cluster on day one, right after instantiating the platform installer VM, or use the system in quick start mode initially and invoke the enterprise cluster migration wizard at a later point. All data from the quick start mode is migrated to the enterprise cluster as part of this migration process.
+
+1. Log in to the vSphere console and navigate to VMs and Templates.
+
+2. Navigate to the Datacenter and folder you would like to use for the installation.
+
+3. Right-click on the folder and invoke the VM creation wizard by selecting the option to Deploy OVF Template.
+
+4. Complete all the steps of the OVF deployment wizard. Provide values for various fields as follows:
+ * URL: <Location of the platform installer>
+ * Virtual Machine Name: <vm name>
+ * Folder: <Select desired folder>
+ * Select the desired Datacenter, Storage, and Network for the platform installer VM as you proceed through the next steps. The Platform installer VM requires an outgoing internet connection. Select a network that provides this access directly, or via a proxy.
+ * Customize the template as follows:
+ * Name: <The name to identify the platform installer>
+ * SSH Public Keys: Create a new SSH key pair (or pick an existing one). Enter the public key in this field. The public key will be installed in the installer VM to provide SSH access, as the user `ubuntu`. This is useful for troubleshooting purposes.
+ * Monitoring Console Password: A monitoring console is deployed in the platform installer VM to provide detailed information about the installation progress as well as to provide access to various logs. This console can be accessed after the VM is powered on at https://<VM IP Address>:5080. The default monitoring console credentials are:
+
+ * User Name: admin
+ * Password: admin
+
+ Provide a different password for the monitoring console if desired. Leave the field blank to accept the default password.
+ * Pod CIDR: Optional - provide an IP range exclusive to pods. This range must not overlap with your network CIDR. (e.g: 192.168.0.0/16)
+ * Service cluster IP range: Optional - assign an IP range in the CIDR format exclusive to the service clusters. This range also must not overlap with either the pod CIDR range or your network CIDR. (e.g: 10.96.0.0/12)
+ * Static IP Address: <VM IP Address> Optional IP address (e.g: 192.168.10.15) to be specified only if static IP allocation is desired. DHCP is used by default.
+ * Static IP subnet prefix: <Network Prefix> Static IP subnet prefix (e.g: 18), required only for static IP allocation.
+ * Static IP gateway: <Gateway IP Address> (e.g: 192.168.0.1) required only for static IP allocation.
+ * Static IP DNS: <Name servers> Comma separated DNS addresses (e.g: 8.8.8.8, 192.168.0.8), required only for static IP allocation.
+ * HTTP Proxy: <endpoint for the http proxy server>, e.g: _http://USERNAME:PASSWORD@PROXYIP:PROXYPORT_. An optional setting, required only if a proxy is used for outbound connections.
+ * HTTPS Proxy: <endpoint for the https proxy server>, e.g: _http://USERNAME:PASSWORD@PROXYIP:PROXYPORT_. An optional setting, required only if a proxy is used for outbound connections.
+ * NO Proxy: <comma-separated list of vCenter server, local network CIDR, hostnames, domain names that should be excluded from proxying>, e.g: _vcenter.company.com_,10.10.0.0/16.
+ * Spectro Cloud Repository settings: The platform installer downloads various platform artifacts from a repository. Currently, this repository is hosted by Palette and the installer VM needs to have an outgoing internet connection to the repository. Upcoming releases will enable the option to privately host a dedicated repository to avoid having to connect outside. This option is currently unavailable. Leave all the fields under the repository settings blank.
+ * Finish the OVF deployment wizard and wait for the template to be created. This may take a few minutes as the template is initially downloaded.
+5. Power on the VM.
+
+
+6. Open the On-Prem system console from a browser window by navigating to https://<VM IP Address>/system and log in.
+
+
+7. Navigate to the Enterprise Cluster Migration wizard from the menu on the left-hand side.
+
+
+8. Enter the vCenter credentials to be used to launch the enterprise cluster. Provide the vCenter server, username, and password. Check the `Use self-signed certificates` box if applicable. Validate your credentials and click the `Next` button to proceed to IP Pool Configuration.
+
+
+9. Enter the IPs to be used for the enterprise cluster VMs as a `Range` or a `Subnet`. At least five IP addresses are required in the range for the installation and ongoing management. Provide the details of the `Gateway` and the `Nameserver addresses`. Any search suffixes being used can be entered in the `Nameserver search suffix` box. Click on `Next` to proceed to Cloud Settings.
+
+
+10. Select the datacenter and the folder to be used for the enterprise cluster VMs. Select the desired compute cluster, resource pools, datastore, and network. For high availability purposes, you may choose to distribute the three VMs across multiple compute clusters. If this is desired, invoke the "Add Domain" option to enter multiple sets of properties.
+
+
+11. Add the SSH public key and, optionally, NTP servers, and click "Confirm".
+
+
+12. The enterprise cluster deployment will proceed through the following three steps:
+ * Deployment - A 3 node Kubernetes cluster is launched and Palette Platform is deployed on it. This typically takes 10 mins.
+ * Data Migration - Data from the installer VM is migrated to the newly created enterprise cluster.
+ * Tenant Migration - If any tenants were created prior to the enterprise cluster migration, which would typically be the case if the system was used in the quick start mode initially, all those tenants, as well as the management of any such tenant clusters previously deployed, will be migrated to the enterprise cluster.
+
+
+13. Once the enterprise cluster is fully deployed, the On-Prem System and Management Consoles should be accessed on this new cluster. The platform installer VM can be safely powered off at this point.
+
+
+## Resources
+
+- [Palette CLI](../../../palette-cli/install-palette-cli.md)
+
+- [Airgap Install Instructions](airgap-instructions.md)
+
+- [VMware vSphere permissions](vmware-system-requirements.md)
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md
new file mode 100644
index 0000000000..c5f394f4ca
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-on-vmware/vmware-system-requirements.md
@@ -0,0 +1,300 @@
+---
+sidebar_label: "VMware System and Permission Requirements"
+title: "VMware System and Permission Requirements"
+description: "Review VMware system requirements and cloud account permissions."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 30
+tags: ["palette", "self-hosted", "vmware"]
+---
+
+
+Before installing Palette on VMware, review the following system requirements and permissions. The vSphere user account used to deploy Palette must have the required permissions to access the proper roles and objects in vSphere.
+
+Start by reviewing the required action items below:
+
+1. Create the two custom vSphere roles. Check out the [Create Required Roles](#create-required-roles) section to create the required roles in vSphere.
+
+2. Review the [vSphere Permissions](#vsphere-permissions) section to ensure the created roles have the required vSphere privileges and permissions.
+
+3. Create node zones and regions for your Kubernetes clusters. Refer to the [Zone Tagging](#zone-tagging) section to create the required tags in vSphere for proper resource allocation across fault domains.
+
+
+:::info
+
+The permissions listed on this page are also needed for deploying a Private Cloud Gateway (PCG) and workload clusters in vSphere through Palette.
+:::
+
+
+## Create Required Roles
+
+Palette requires two custom roles to be created in vSphere before the installation. Refer to the [Create a Custom Role](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-18071E9A-EED1-4968-8D51-E0B4F526FDA3.html?hWord=N4IghgNiBcIE4HsIFMDOIC+Q) guide if you need help creating a custom role in vSphere. The required custom roles are:
+
+* A root-level role with access to higher-level vSphere objects. This role is referred to as the *spectro root role*. Check out the [Root-Level Role Privileges](#root-level-role-privileges) table for the list of privileges required for the root-level role.
+
+* A role with the required privileges for deploying VMs. This role is referred to as the *Spectro role*. Review the [Spectro Role Privileges](#spectro-role-privileges) table for the list of privileges required for the Spectro role.
+
+
+The user account you use to deploy Palette must have access to both roles. Each vSphere object required by Palette must have a [Permission](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.security.doc/GUID-4B47F690-72E7-4861-A299-9195B9C52E71.html) entry for the respective Spectro role. The following tables list the privileges required for each custom role.
+
+
+
+
+:::info
+
+For an in-depth explanation of vSphere authorization and permissions, check out the [Understanding Authorization in vSphere](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-security/GUID-74F53189-EF41-4AC1-A78E-D25621855800.html) resource.
+
+:::
+
+
+## vSphere Permissions
+
+The vSphere user account that deploys Palette requires access to the vSphere objects and permissions listed in the following tables. Review the vSphere objects and privileges to ensure each role is assigned the required privileges.
+
+### Spectro Root Role Privileges
+
+
+The spectro root role privileges are only applied to root objects and data center objects. Select the tab for the vSphere version you are using to view the required privileges for the spectro root role.
+
+
+
+
+
+
+| **vSphere Object** | **Privilege** |
+|--------------------|-----------------------------------------|
+| **CNS** | Searchable |
+| **Datastore** | Browse datastore |
+| **Host** | Configuration<br />Storage partition configuration |
+| **vSphere Tagging** | Create and edit vSphere tags |
+| **Network** | Assign network |
+| **Sessions** | Validate session |
+| **VM Storage Policies**| View VM storage policies |
+| **Storage views** | View |
+
+
+
+
+
+
+
+| **vSphere Object**| **Privileges** |
+|-------------------|---------------------------------------------|
+| **CNS** | Searchable |
+| **Datastore** | Browse datastore |
+| **Host** | Configuration<br />Storage partition configuration |
+| **vSphere tagging** | Create vSphere Tag<br />Edit vSphere Tag |
+| **Network** | Assign network |
+| **Profile-driven storage** | View |
+| **Sessions** | Validate session |
+| **Storage views** | View |
+
+
+
+
+
+
+| **vSphere Object**| **Privileges** |
+|-------------------|---------------------------------------------|
+| **CNS** | Searchable |
+| **Datastore** | Browse datastore |
+| **Host** | Configuration<br />Storage partition configuration |
+| **vSphere tagging** | Create vSphere Tag<br />Edit vSphere Tag |
+| **Network** | Assign network |
+| **Profile-driven storage** | Profile-driven storage view |
+| **Sessions** | Validate session |
+| **Storage views** | View |
+
+
+
+
+
+:::caution
+
+If the network is a Distributed Port Group under a vSphere Distributed Switch (VDS), *ReadOnly* access to the VDS without “Propagate to children” is required.
+
+:::
+
+
+### Spectro Role Privileges
+
+Apply the Spectro role privileges to the vSphere objects you intend to use for the Palette installation, as listed in the table. A separate table lists the Spectro role privileges for VMs by category.
+
+During the installation, images and Open Virtual Appliance (OVA) files are downloaded to the folder you selected. These images are cloned from the folder and applied to the VMs that are deployed during the installation.
+
+Select the tab for the vSphere version you are using to view the required privileges for the spectro role.
+
+
+
+
+
+
+
+| **vSphere Object**| **Privileges** |
+|-------------------|---------------------------------------------|
+| **CNS** | Searchable |
+| **Datastore** | Allocate space<br />Browse datastore<br />Low-level file operations<br />Remove file<br />Update VM files<br />Update VM metadata |
+| **Folder** | Create folder<br />Delete folder<br />Move folder<br />Rename folder |
+| **Host** | Local operations: Reconfigure VM |
+| **Network** | Assign network |
+| **Resource** | Apply recommendation<br />Assign VM to resource pool<br />Migrate powered off VM<br />Migrate powered on VM<br />Query vMotion |
+| **Sessions** | Validate sessions |
+| **Storage policies** | View access for VM storage policies is required.<br />Ensure `StorageProfile.View` is available. |
+| **spectro-templates** | Read only. This is the vSphere folder created during the install. For airgap installs, you must manually create this folder. |
+| **Storage views** | View |
+| **Tasks** | Create task<br />Update task |
+| **vApp** | Import<br />View OVF environment<br />Configure vApp application<br />Configure vApp instance |
+| **vSphere tagging** | Assign or unassign vSphere Tag<br />Create vSphere Tag<br />Delete vSphere Tag<br />Edit vSphere Tag |
+
+
+The following table lists spectro role privileges for VMs by category. All privileges are for the vSphere object, Virtual Machines.
+
+ **Category** | **Privileges** |
+|----------------------|--------------------|
+| Change Configuration | Acquire disk lease<br />Add existing disk<br />Add new disk<br />Add or remove device<br />Advanced configuration<br />Change CPU count<br />Change memory<br />Change settings<br />Change swapfile placement<br />Change resource<br />Change host USB device<br />Configure raw device<br />Configure managedBy<br />Display connection settings<br />Extend virtual disk<br />Modify device settings<br />Query fault tolerance compatibility<br />Query unowned files<br />Reload from path<br />Remove disk<br />Rename<br />Reset guest information<br />Set annotation<br />Toggle disk change tracking<br />Toggle fork parent<br />Upgrade VM compatibility |
+| Edit Inventory | Create from existing<br />Create new<br />Move<br />Register<br />Remove<br />Unregister |
+| Guest Operations | Alias modification<br />Alias query<br />Modify guest operations<br />Invoke programs<br />Queries |
+| Interaction | Console Interaction<br />Power on/off |
+| Provisioning | Allow disk access<br />Allow file access<br />Allow read-only disk access<br />Allow VM download<br />Allow VM files upload<br />Clone template<br />Clone VM<br />Create template from VM<br />Customize guest<br />Deploy template<br />Mark as template<br />Mark as VM<br />Modify customization specification<br />Promote disks<br />Read customization specifications |
+| Service Configuration | Allow notifications<br />Allow polling of global event notifications<br />Manage service configurations<br />Modify service configurations<br />Query service configurations<br />Read service configurations |
+| Snapshot Management | Create snapshot<br />Remove snapshot<br />Rename snapshot<br />Revert to snapshot |
+| vSphere Replication | Configure replication<br />Manage replication<br />Monitor replication |
+| vSAN | Cluster: ShallowRekey |
+
+
+
+
+
+
+
+
+
+| **vSphere Object**| **Privileges** |
+|-------------------|---------------------------------------------|
+| **CNS** | Searchable |
+| **Datastore** | Allocate space<br />Browse datastore<br />Low-level file operations<br />Remove file<br />Update VM files<br />Update VM metadata |
+| **Folder** | Create folder<br />Delete folder<br />Move folder<br />Rename folder |
+| **Host** | Local operations: Reconfigure VM |
+| **Network** | Assign network |
+| **Resource** | Apply recommendation<br />Assign VM to resource pool<br />Migrate powered off VM<br />Migrate powered on VM<br />Query vMotion |
+| **Profile-driven storage** | Profile-driven storage view |
+| **Sessions** | Validate session |
+| **spectro-templates** | Read only. This is the vSphere folder created during the install. For airgap installs, you must manually create this folder. |
+| **Storage views** | Configure service<br />View |
+| **Tasks** | Create task<br />Update task |
+| **vApp** | Import<br />View OVF environment<br />Configure vApp applications<br />Configure vApp instances |
+| **vSphere tagging** | Assign or unassign vSphere Tag<br />Create vSphere Tag<br />Delete vSphere Tag<br />Edit vSphere Tag |
+
+
+
+The following table lists spectro role privileges for VMs by category. All privileges are for the vSphere object, Virtual Machines.
+
+ **Category** | **Privileges** |
+|-------------------|-------------|
+| Change Configuration | Acquire disk lease<br />Add existing disk<br />Add new disk<br />Add or remove device<br />Advanced configuration<br />Change CPU count<br />Change memory<br />Change Settings<br />Change Swapfile placement<br />Change resource<br />Change host USB device<br />Configure Raw device<br />Configure managedBy<br />Display connection settings<br />Extend virtual disk<br />Modify device settings<br />Query fault tolerance compatibility<br />Query unowned files<br />Reload from path<br />Remove disk<br />Rename<br />Reset guest information<br />Set annotation<br />Toggle disk change tracking<br />Toggle fork parent<br />Upgrade VM compatibility |
+| Edit Inventory | Create from existing<br />Create new<br />Move<br />Register<br />Remove<br />Unregister |
+| Guest Operations | Alias modification<br />Alias query<br />Modify guest operations<br />Invoke programs<br />Query guest operations |
+| Interaction | Console Interaction<br />Power on/off |
+| Provisioning | Allow disk access<br />Allow file access<br />Allow read-only disk access<br />Allow VM download<br />Allow VM upload<br />Clone template<br />Clone VM<br />Create template from VM<br />Customize guest<br />Deploy template<br />Mark as template<br />Modify customization specifications<br />Promote disks<br />Read customization specifications |
+| Service Configuration | Allow notifications<br />Allow polling of global event notifications<br />Manage service configurations<br />Modify service configurations<br />Query service configurations<br />Read service configurations |
+| Snapshot Management | Create snapshot<br />Remove snapshot<br />Rename snapshot<br />Revert to snapshot |
+| vSphere Replication | Configure replication<br />Manage replication<br />Monitor replication |
+| vSAN | Cluster: ShallowRekey |
+
+
+
+
+
+
+
+
+
+| **vSphere Object**| **Privileges** |
+|-------------------|---------------------------------------------|
+| **CNS** | Searchable |
+| **Datastore** | Allocate space<br />Browse datastore<br />Low-level file operations<br />Remove file<br />Update VM files<br />Update VM metadata |
+| **Folder** | Create folder<br />Delete folder<br />Move folder<br />Rename folder |
+| **Host** | Local operations: Reconfigure VM |
+| **Network** | Assign network |
+| **Profile-driven storage** | Profile-driven storage view |
+| **Resource** | Apply recommendation<br />Assign VM to resource pool<br />Migrate powered off VM<br />Migrate powered on VM<br />Query vMotion |
+| **Sessions** | Validate session |
+| **spectro-templates** | Read only. This is the vSphere folder created during the install. For airgap installs, you must manually create this folder. |
+| **Storage views** | View |
+| **Tasks** | Create task<br />Update task |
+| **vApp** | Import<br />View OVF environment<br />Configure vApp applications<br />Configure vApp instances |
+| **vSphere tagging** | Assign or unassign vSphere Tag<br />Create vSphere Tag<br />Delete vSphere Tag<br />Edit vSphere Tag |
+
+
+
+The following table lists spectro role privileges for VMs by category. All privileges are for the vSphere object, Virtual Machines.
+
+ **Category** | **Privileges** |
+---------------------|--------------------|
+| Change Configuration | Acquire disk lease<br />Add existing disk<br />Add new disk<br />Add or remove device<br />Advanced configuration<br />Change CPU count<br />Change memory<br />Change Settings<br />Change Swapfile placement<br />Change resource<br />Change host USB device<br />Configure Raw device<br />Configure managedBy<br />Display connection settings<br />Extend virtual disk<br />Modify device settings<br />Query fault tolerance compatibility<br />Query unowned files<br />Reload from path<br />Remove disk<br />Rename<br />Reset guest information<br />Set annotation<br />Toggle disk change tracking<br />Toggle fork parent<br />Upgrade VM compatibility |
+| Edit Inventory | Create from existing<br />Create new<br />Move<br />Register<br />Remove<br />Unregister |
+| Guest Operations | Alias modification<br />Alias query<br />Modify guest operations<br />Invoke programs<br />Query guest operations |
+| Interaction | Console Interaction<br />Power on/off |
+| Provisioning | Allow disk access<br />Allow file access<br />Allow read-only disk access<br />Allow VM download<br />Allow VM upload<br />Clone template<br />Clone VM<br />Create template from VM<br />Customize guest<br />Deploy template<br />Mark as template<br />Modify customization specifications<br />Promote disks<br />Read customization specifications |
+| Service Configuration | Allow notifications<br />Allow polling of global event notifications<br />Manage service configurations<br />Modify service configurations<br />Query service configurations<br />Read service configurations |
+| Snapshot Management | Create snapshot<br />Remove snapshot<br />Rename snapshot<br />Revert to snapshot |
+| vSphere Replication | Configure replication<br />Manage replication<br />Monitor replication |
+| vSAN | Cluster: ShallowRekey |
+
+
+
+
+
+
+
+
+## Zone Tagging
+
+You can use tags to create node zones and regions for your Kubernetes clusters. The node zones and regions can be used to dynamically place Kubernetes workloads and achieve higher availability. Kubernetes nodes inherit the zone and region tags as [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). Kubernetes workloads can use the node labels to ensure that the workloads are deployed to the correct zone and region.
+
+The following is an example of node labels that are discovered and inherited from vSphere tags. The tag values are applied to Kubernetes nodes in vSphere.
+
+```yaml hideClipboard
+ topology.kubernetes.io/region=usdc
+ topology.kubernetes.io/zone=zone3
+ failure-domain.beta.kubernetes.io/region=usdc
+ failure-domain.beta.kubernetes.io/zone=zone3
+```
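+
+After you deploy a cluster through Palette, you can confirm the zone and region labels were applied to the nodes with standard kubectl. The label keys are the ones shown above.
+
+```shell
+kubectl get nodes --label-columns topology.kubernetes.io/region,topology.kubernetes.io/zone
+```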
+
+
+:::info
+
+To learn more about node zones and regions, refer to the [Node Zones/Regions Topology](https://cloud-provider-vsphere.sigs.k8s.io/cloud_provider_interface.html) section of the Cloud Provider Interface documentation.
+
+:::
+
+
+Zone tagging is required to install Palette and is helpful for Kubernetes workloads deployed in vSphere clusters through Palette if they have persistent storage needs. Use vSphere tags on data centers and compute clusters to create distinct zones in your environment. You can use vSphere [Tag Categories and Tags](https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vcenter-esxi-management/GUID-16422FF7-235B-4A44-92E2-532F6AED0923.html) to create zones in your vSphere environment and assign them to vSphere objects.
+
+
+The zone tags you assign to your vSphere objects, such as a datacenter and clusters, are applied to the Kubernetes nodes you deploy through Palette into your vSphere environment. Kubernetes clusters deployed to other infrastructure providers, such as public clouds, may have other native mechanisms for automatic zone discovery.
+
+For example, assume a vCenter environment contains three compute clusters: cluster-1, cluster-2, and cluster-3. To support this environment, you create the tag categories `k8s-region` and `k8s-zone`. The `k8s-region` tag is assigned to the datacenter, and the `k8s-zone` tag is assigned to the compute clusters.
+
+The following table lists the tag values for the data center and compute clusters.
+
+| **vSphere Object** | **Assigned Name** | **Tag Category** | **Tag Value** |
+|------------------- |--------------------|------------------|---------------|
+| **Datacenter** | dc-1 | k8s-region | region1 |
+| **Cluster** | cluster-1 | k8s-zone | az1 |
+| **Cluster** | cluster-2 | k8s-zone | az2 |
+| **Cluster** | cluster-3 | k8s-zone | az3 |
+
+
+Create a tag category and tag values for each datacenter and cluster in your environment. Use the tag categories to create zones. Use a name that is meaningful and that complies with the tag requirements listed in the following section.
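+
+If you prefer to script the tag setup, the following sketch uses the `govc` CLI. It assumes `govc` is installed and configured with your vCenter credentials (for example, through the `GOVC_URL`, `GOVC_USERNAME`, and `GOVC_PASSWORD` environment variables), and that the inventory paths match the example above. You can create the same categories and tags through the vSphere Client instead.
+
+```shell
+# Create the tag categories.
+govc tags.category.create k8s-region
+govc tags.category.create k8s-zone
+
+# Create the tag values from the table above.
+govc tags.create -c k8s-region region1
+govc tags.create -c k8s-zone az1
+
+# Attach the tags to the datacenter and a compute cluster. The inventory paths are examples.
+govc tags.attach -c k8s-region region1 /dc-1
+govc tags.attach -c k8s-zone az1 /dc-1/host/cluster-1
+```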
+
+### Tag Requirements
+
+The following requirements apply to tags:
+
+- A valid tag must consist of alphanumeric characters.
+
+
+- The tag must start and end with an alphanumeric character.
+
+
+- The regex used for tag validation is `(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?`
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/install-palette/install-palette.md b/docs/docs-content/enterprise-version/install-palette/install-palette.md
new file mode 100644
index 0000000000..e82205846d
--- /dev/null
+++ b/docs/docs-content/enterprise-version/install-palette/install-palette.md
@@ -0,0 +1,89 @@
+---
+sidebar_label: "Installation"
+title: "Installation"
+description: "Review Palette system requirements and learn more about the various install methods."
+icon: ""
+hide_table_of_contents: false
+tags: ["palette", "self-hosted"]
+---
+
+
+Palette is available as a self-hosted application that you install in your environment. The self-hosted version is a dedicated Palette environment hosted on VMware instances or in an existing Kubernetes cluster. Palette is available in the following modes:
+
+| **Supported Platform** | **Description** | **Install Guide** |
+|------------------------|------------------------------------| ------------------|
+| VMware | Install Palette in a VMware environment. | [Install on VMware](install-on-vmware/install-on-vmware.md) |
+| Kubernetes | Install Palette using a Helm Chart in an existing Kubernetes cluster. | [Install on Kubernetes](install-on-kubernetes/install.md) |
+
+
+
+
+The next sections provide sizing guidelines we recommend you review before installing Palette in your environment.
+
+
+
+## Size Guidelines
+
+This section lists resource requirements for Palette for various capacity levels. The terms *small*, *medium*, and *large* describe the instance size of the worker pools that Palette is installed on. The following table lists the resource requirements for each size.
+
+
+
+
+:::caution
+
+Do not exceed the recommended maximum number of deployed nodes and clusters. We have tested Palette's performance up to the recommended maximums, and exceeding them can negatively impact performance and result in instability. The active workload limit refers to the maximum number of active nodes and pods at any given time.
+
+:::
+
+
+
+
+
+| **Size** | **Nodes**| **CPU**| **Memory**| **Storage**| **MongoDB Storage Limit**| **MongoDB Memory Limit**| **MongoDB CPU Limit** |**Total Deployed Nodes**| **Deployed Clusters with 10 Nodes**|
+|----------|----------|--------|-----------|------------|--------------------|-------------------|------------------|----------------------------|----------------------|
+| Small | 3 | 8 | 16 GB | 60 GB | 20 GB | 4 GB | 2 | 1000 | 100 |
+| Medium (Recommended) | 3 | 16 | 32 GB | 100 GB | 60 GB | 8 GB | 4 | 3000 | 300 |
+| Large | 3 | 32 | 64 GB | 120 GB | 80 GB | 12 GB | 6 | 5000 | 500 |
+
+
+#### Instance Sizing
+
+| **Configuration** | **Active Workload Limit** |
+|---------------------|---------------------------------------------------|
+| Small | Up to 1000 Nodes each with 30 Pods (30,000 Pods) |
+| Medium (Recommended) | Up to 3000 Nodes each with 30 Pods (90,000 Pods)|
+| Large | Up to 5000 Nodes each with 30 Pods (150,000 Pods) |
+
+
+
+## Proxy Requirements
+
+- A proxy used for outgoing connections should support both HTTP and HTTPS traffic.
+
+
+- Allow connectivity to the domains and ports listed in the following table.
+
+
+
+ | **Top-Level Domain** | **Port** | **Description** |
+ |----------------------------|----------|-------------------------------------------------|
+ | spectrocloud.com | 443 | Spectro Cloud content repository and pack registry |
+ | s3.amazonaws.com | 443 | Spectro Cloud VMware OVA files |
+ | gcr.io | 443 | Spectro Cloud and common third party container images |
+ | ghcr.io | 443 | Kubernetes VIP images |
+ | docker.io | 443 | Common third party content |
+ | googleapis.com | 443 | For pulling Spectro Cloud images |
+ | docker.com | 443 | Common third party container images |
+ | raw.githubusercontent.com | 443 | Common third party content |
+ | projectcalico.org | 443 | Calico container images |
+ | quay.io | 443 | Common third-party container images |
+ | grafana.com | 443 | Grafana container images and manifests |
+ | github.com | 443 | Common third party content |
+
+## Resources
+
+- [Install on VMware](install-on-vmware/install-on-vmware.md)
+
+- [Install on Kubernetes](install-on-kubernetes/install.md)
+
+- [Architecture Diagram and Network Ports](../../architecture/networking-ports.md#self-hosted-network-communications-and-ports)
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/system-management/_category_.json b/docs/docs-content/enterprise-version/system-management/_category_.json
new file mode 100644
index 0000000000..455b8e4969
--- /dev/null
+++ b/docs/docs-content/enterprise-version/system-management/_category_.json
@@ -0,0 +1,3 @@
+{
+ "position": 20
+}
diff --git a/docs/docs-content/enterprise-version/system-management/backup-restore.md b/docs/docs-content/enterprise-version/system-management/backup-restore.md
new file mode 100644
index 0000000000..5811fe9c90
--- /dev/null
+++ b/docs/docs-content/enterprise-version/system-management/backup-restore.md
@@ -0,0 +1,147 @@
+---
+sidebar_label: "Backup and Restore"
+title: "Backup and Restore"
+description: "Learn how to enable backup and restore for self-hosted Palette."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 50
+tags: ["palette", "management", "self-hosted", "backup", "restore"]
+---
+
+You can enable backup and restore for your self-hosted Palette cluster to ensure that your Palette configuration data is backed up and can be restored in case of a disaster or a cluster failure. Palette supports two backup modes:
+
+* File Transfer Protocol (FTP) - Send the backup data of your enterprise cluster to a dedicated FTP server. Refer to the [FTP](#ftp) section for more information.
+
+
+* Amazon Simple Storage Service (S3) - Send the backup data of your enterprise cluster to object storage using AWS S3. Refer to the [S3](#s3) section for more information.
+
+
+## FTP
+
+Use the following instructions to configure FTP backup for your enterprise cluster.
+
+### Prerequisites
+
+* A dedicated FTP server with sufficient storage space to store the backup data.
+
+
+* Credentials to access the FTP server.
+
+
+### Instructions
+
+1. Log in to the Palette system console as an administrator. Refer to the [Access the System Console](../system-management/system-management.md#access-the-system-console) section for more information.
+
+
+2. From the left **Main Menu**, select **Administration**.
+
+
+3. Click on the **Backup/Restore** tab.
+
+
+4. Select the **FTP** tab and fill out the following fields:
+
+ | **Field** | **Description** |
+ | --- | --- |
+ | **Server** | The FTP server URL. |
+ | **Directory** | The directory name for the backup storage. |
+ | **Username** | The username to log in to the FTP server. |
+ | **Password** | The password to log in to the FTP server. |
+ | **Interval** | The number of days between backups. |
+ | **Retention Period** | The number of days to retain the backup. |
+ | **Hours of the day** | The time of the day to take the backup. The time of day is in UTC format. |
+
+
+5. Click on **Validate** to validate the FTP server configuration. If the validation is successful, the **Save** button is enabled. Otherwise, an error message is displayed. In case of an error, verify and correct the FTP server configuration and click on **Validate** again.
+
+
+### Validate
+
+Validation is part of the backup configuration wizard. You can verify that a backup initiates at the scheduled time and is successfully uploaded to the FTP server.
+
+
+## S3
+
+Use the following instructions to configure S3 backup for your enterprise cluster.
+
+
+
+### Prerequisites
+
+- An Amazon Web Services (AWS) account.
+
+- An AWS S3 bucket.
+
+- An AWS IAM user with the following IAM permissions attached. Ensure you replace the bucket name in the `Resource` field with the name of your S3 bucket.
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "s3Permissions",
+ "Effect": "Allow",
+ "Action": [
+ "s3:GetObject",
+ "s3:DeleteObject",
+ "s3:PutObject",
+ "s3:AbortMultipartUpload",
+ "s3:ListMultipartUploadParts"
+ ],
+ "Resource": [
+ "arn:aws:s3:::REPLACE_ME_WITH_YOUR_BUCKET_NAME",
+ "arn:aws:s3:::REPLACE_ME_WITH_YOUR_BUCKET_NAME/*"
+ ]
+ },
+ {
+ "Sid": "ec2Permissions",
+ "Effect": "Allow",
+ "Action": [
+ "ec2:DescribeVolumes",
+ "ec2:DescribeSnapshots",
+ "ec2:CreateTags",
+ "ec2:CreateVolume",
+ "ec2:CreateSnapshot",
+ "ec2:DeleteSnapshot"
+ ],
+ "Resource": [
+ "*"
+ ]
+ }
+ ]
+ }
+ ```
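+
+ If you manage IAM from the command line, you can create this policy with the AWS CLI. This is optional and assumes the AWS CLI is configured; `palette-backup-policy.json` is a placeholder file containing the JSON above, and attaching the policy to your IAM user is a separate step.
+
+ ```shell
+ aws iam create-policy \
+  --policy-name palette-backup-policy \
+  --policy-document file://palette-backup-policy.json
+ ```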
+
+
+- Credentials to the IAM user. You need the AWS access key ID and the AWS secret access key.
+
+
+### Instructions
+
+1. Log into the Palette system console as an administrator. Refer to the [Access the System Console](../system-management/system-management.md#access-the-system-console) section for more information.
+
+
+2. From the left **Main Menu**, select **Administration**.
+
+
+3. Click on the **Backup/Restore** tab.
+
+
+4. Select the **S3** tab and fill out the required fields. Provide your S3 bucket details and the credentials of the IAM user you created as part of the prerequisites, and set the backup schedule values, such as the interval, retention period, and hours of the day.
+
+
+5. Click on **Validate** to validate the S3 configuration. If the validation is successful, the **Save** button is enabled. Otherwise, an error message is displayed. In case of an error, verify and correct the S3 configuration and click on **Validate** again.
+
+### Validate
+
+Validation is part of the backup configuration wizard. You can validate a backup initiates at the scheduled time and successfully uploads to the S3 bucket.
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/system-management/reverse-proxy.md b/docs/docs-content/enterprise-version/system-management/reverse-proxy.md
new file mode 100644
index 0000000000..f74d3e3833
--- /dev/null
+++ b/docs/docs-content/enterprise-version/system-management/reverse-proxy.md
@@ -0,0 +1,255 @@
+---
+sidebar_label: "Configure Reverse Proxy"
+title: "Configure Reverse Proxy"
+description: "Learn how to configure a reverse proxy for Palette."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 40
+tags: ["palette", "management"]
+---
+
+
+
+You can configure a reverse proxy for Palette. Host clusters deployed in a private network are not accessible from the public internet or by users in different networks. You can use a reverse proxy to access such a cluster's Kubernetes API server from a different network.
+
+When you configure a reverse proxy server for Palette, clusters that use the [Spectro Proxy pack](../../integrations/frp.md) use the reverse proxy server address in the kubeconfig file. Clusters that do not use the Spectro Proxy pack use the default cluster address in the kubeconfig file.
+
+
+Use the following steps to configure a reverse proxy server for Palette.
+
+## Prerequisites
+
+
+- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) is installed and available.
+
+
+- [Helm](https://helm.sh/docs/intro/install/) is installed and available.
+
+
+- Access to the kubeconfig file of the Palette Kubernetes cluster. You can download the kubeconfig file from the Palette system console. Navigate to **Enterprise System Migration**, select the Palette cluster, and click the **Download Kubeconfig** button for the cluster.
+
+
+- A domain name that you can use for the reverse proxy server. You will also need access to the DNS records for the domain so that you can create a CNAME DNS record for the reverse proxy server load balancer.
+
+
+- Ensure you have an SSL certificate that matches the domain name you will assign to Spectro Proxy. You will need this to enable HTTPS encryption for the Spectro Proxy. Contact your network administrator or security team to obtain the SSL certificate. You need the following files:
+ - x509 SSL certificate file in base64 format.
+
+ - x509 SSL certificate key file in base64 format.
+
+ - x509 SSL certificate authority file in base64 format.
+
+
+- The Spectro Proxy server must have internet access and network connectivity to the private network where the Kubernetes clusters are deployed.
+
+
+## Enablement
+
+1. Open a terminal session and navigate to the directory where you stored the **values.yaml** for the Palette installation.
+
+
+2. Use a text editor and open the **values.yaml** file. Locate the `frps` section and update the following values in the **values.yaml** file. Refer to the [Spectro Proxy Helm Configuration](../install-palette/install-on-kubernetes/palette-helm-ref.md#spectro-proxy) to learn more about the configuration options.
+
+
+
+ | **Parameter** | **Description** | **Type** |
+ | --- | --- | ---|
+ | `enabled`| Set to `true` to enable the Spectro Proxy server. | boolean |
+ | `frps.frpHostURL`| The domain name you will use for the Spectro Proxy server. For example, `frps.palette.example.com`. | string |
+ | `server.crt`| The x509 SSL certificate file in base64 format. | string |
+ | `server.key`| The x509 SSL certificate key file in base64 format. | string |
+ | `ca.crt`| The x509 SSL certificate authority file in base64 format. | string |
+
+
+
+ The following is an example of the `frps` section in the **values.yaml** file. The SSL certificate files are truncated for brevity.
+
+
+
+ ```yaml
+ frps:
+ frps:
+ enabled: true
+ frpHostURL: "frps.palette.example.com"
+ server:
+ crt: "LS0tLS1CRU...........tCg=="
+ key: "LS0tLS1CRU...........tCg=="
+ ca:
+ crt : "LS0tLS1CRU...........tCg=="
+ ```
+
+
+3. Issue the `helm upgrade` command to update the Palette Kubernetes configuration. The command below assumes you are in the directory that contains the **values.yaml** file and the Palette Helm chart. Change the directory path if needed.
+
+
+
+ ```bash
+ helm upgrade --values values.yaml hubble spectro-mgmt-plane-0.0.0.tgz --install
+ ```
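+
+
+    Optionally, you can watch the Spectro Proxy components roll out after the upgrade. The sketch below monitors the pods in the `proxy-system` namespace, the same namespace queried in the next step. Press `Ctrl+C` to stop watching.
+
+    ```bash
+    kubectl get pods --namespace proxy-system --watch
+    ```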
+
+
+4. After the new configurations are accepted, use the following command to get the Spectro Proxy server's load balancer IP address.
+
+
+
+ ```bash
+ kubectl get svc --namespace proxy-system spectro-proxy-svc
+ ```
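+
+
+    The external address appears in the **EXTERNAL-IP** column of the output. As a sketch, you can also capture it directly with a JSONPath query. Depending on your infrastructure, the load balancer may expose a hostname instead of an IP address.
+
+    ```bash
+    kubectl get svc --namespace proxy-system spectro-proxy-svc \
+      --output jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}{"\n"}'
+    ```
+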
+5. Update the DNS records for the domain name you used for the Spectro Proxy server. Create a CNAME record that points to the load balancer hostname, or an A record if your load balancer exposes an IP address.
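+
+
+    After the DNS change propagates, you can confirm that the record resolves. A quick check with `dig`, using the example domain from this guide, might look like the following.
+
+    ```bash
+    dig +short frps.palette.example.com
+    ```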
+
+
+6. Log in to the Palette System API by using the `/v1/auth/syslogin` endpoint. Use the `curl` command below and replace the URL with the custom domain URL you assigned to Palette or use the IP address. Ensure you replace the credentials below with your system console credentials.
+
+
+
+ ```bash
+ curl --insecure --location 'https://palette.example.com/v1/auth/syslogin' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "password": "**********",
+ "username": "**********"
+ }'
+ ```
+ Output
+ ```json hideClipboard
+ {
+ "Authorization": "**********.",
+ "IsPasswordReset": true
+ }
+ ```
+
+7. Using the output you received, copy the authorization value to your clipboard and assign it to a shell variable. Replace the authorization value below with the value from the output.
+
+
+
+ ```shell hideClipboard
+ TOKEN=**********
+ ```
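+
+    Alternatively, if `jq` is available, you can log in and capture the token in a single step. The sketch below reuses the example endpoint from step 6 and assumes your system console credentials are exported as the hypothetical `SYS_USERNAME` and `SYS_PASSWORD` environment variables.
+
+    ```shell
+    # Log in and extract the Authorization value from the JSON response.
+    TOKEN=$(curl --insecure --silent --location 'https://palette.example.com/v1/auth/syslogin' \
+      --header 'Content-Type: application/json' \
+      --data "{\"username\": \"$SYS_USERNAME\", \"password\": \"$SYS_PASSWORD\"}" | jq -r '.Authorization')
+    ```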
+
+8. Next, prepare a payload for the `/v1/system/config/reverseproxy` endpoint. This endpoint configures Palette to use a reverse proxy. The payload requires the following parameters:
+
+
+
+ | **Parameter** | **Description** | **Type** |
+ | --- | --- | --- |
+ | `caCert`| The x509 SSL certificate authority file in base64 format. | string |
+ | `clientCert`| The x509 SSL certificate file in base64 format. | string |
+ | `clientKey`| The x509 SSL certificate key file in base64 format. | string |
+ | `port` | The port number for the reverse proxy server. We recommend using port `443`. | integer |
+ | `protocol` | The protocol to use for the reverse proxy server. We recommend using `https`. | string |
+ | `server`| The domain name you will use for the Spectro Proxy server. For example, `frps.palette.example.com`. Do not include the URL scheme, such as `https://`, in the value. | string |
+
+ The following is an example payload. The SSL certificate files are truncated for brevity.
+
+
+
+ ```json hideClipboard
+ {
+ "caCert": "-----BEGIN CERTIFICATE-----\n.............\n-----END CERTIFICATE-----",
+ "clientCert": "-----BEGIN CERTIFICATE-----\n..........\n-----END CERTIFICATE-----",
+ "clientKey": "-----BEGIN RSA PRIVATE KEY-----\n........\n-----END RSA PRIVATE KEY-----",
+ "port": 443,
+ "protocol": "https",
+ "server": "frps.palette.example.com.com"
+ }
+ ```
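+
+    Escaping the newline characters in the PEM files by hand is error prone. As a sketch, `jq` version 1.6 or later can build the payload file for you. The file names below are placeholders, and the `server` value is the example domain used throughout this guide.
+
+    ```shell
+    # Read the raw PEM files and emit a JSON payload with properly escaped newlines.
+    jq --null-input \
+      --rawfile ca ca.crt \
+      --rawfile cert server.crt \
+      --rawfile key server.key \
+      '{caCert: $ca, clientCert: $cert, clientKey: $key, port: 443, protocol: "https", server: "frps.palette.example.com"}' \
+      > payload.json
+    ```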
+
+ :::info
+
+ You can save the payload to a file and read the file contents into the `curl` command, as shown in the sketch after this note. You can also save the payload as a shell variable and reference the variable in the `curl` command.
+
+ :::
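+
+    For example, if the payload is saved as **payload.json**, the request in the next step can read it with command substitution, or with curl's `--data @payload.json` form:
+
+    ```shell
+    curl --insecure --location --request PUT 'https://palette.example.com/v1/system/config/reverseproxy' \
+      --header "Authorization: $TOKEN" \
+      --header 'Content-Type: application/json' \
+      --data "$(cat payload.json)"
+    ```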
+
+
+
+
+9. Issue a PUT request using the following `curl` command. Replace the URL with the custom domain URL you assigned to Palette or use the IP address. You can use the `TOKEN` variable you created earlier for the authorization header. Ensure you replace the payload below with the payload you created in the previous step.
+
+
+
+ ```bash
+ curl --insecure --silent --include --output /dev/null -w "%{http_code}" --location --request PUT 'https://palette.example.com/v1/system/config/reverseproxy' \
+ --header "Authorization: $TOKEN" \
+ --header 'Content-Type: application/json' \
+ --data ' {
+ "caCert": "-----BEGIN CERTIFICATE-----\n................\n-----END CERTIFICATE-----\n",
+ "clientCert": "-----BEGIN CERTIFICATE-----\n.............\n-----END CERTIFICATE-----",
+ "clientKey": "-----BEGIN RSA PRIVATE KEY-----\n............\n-----END RSA PRIVATE KEY-----\n",
+ "port": 443,
+ "protocol": "https",
+ "server": "frps.palette.example.com.com"
+ }'
+ ```
+
+ A successful response returns a `204` status code.
+
+ Output
+ ```shell hideClipboard
+ 204
+ ```
+
+You now have a Spectro Proxy server that you can use to access Palette clusters deployed in a different network. Make sure you add the [Spectro Proxy pack](../../integrations/frp.md) to the clusters you want to access using the Spectro Proxy server.
+
+
+## Validate
+
+Use the following steps to validate that the Spectro Proxy server is active.
+
+
+
+
+
+1. Open a terminal session.
+
+
+2. Log in to the Palette System API by using the `/v1/auth/syslogin` endpoint. Use the `curl` command below and replace the URL with the custom domain URL you assigned to Palette or use the IP address. Ensure you replace the credentials below with your system console credentials.
+
+
+
+ ```bash
+ curl --insecure --location 'https://palette.example.com/v1/auth/syslogin' \
+ --header 'Content-Type: application/json' \
+ --data '{
+ "password": "**********",
+ "username": "**********"
+ }'
+ ```
+ Output
+ ```json hideClipboard
+ {
+ "Authorization": "**********.",
+ "IsPasswordReset": true
+ }
+ ```
+
+3. Using the output you received, copy the authorization value to your clipboard and assign it to a shell variable. Replace the authorization value below with the value from the output.
+
+
+
+ ```shell hideClipboard
+ TOKEN=**********
+ ```
+
+4. Query the system API endpoint `/v1/system/config/reverseproxy` to verify the current reverse proxy settings applied to Palette. Use the `curl` command below and replace the URL with the custom domain URL you assigned to Palette or use the IP address. You can use the `TOKEN` variable you created earlier for the authorization header.
+
+
+
+ ```bash
+ curl --location --request GET 'https://palette.example.com/v1/system/config/reverseproxy' \
+ --header "Authorization: $TOKEN"
+ ```
+
+ If the proxy server is configured correctly, you will receive an output similar to the following containing your settings. The SSL certificate outputs are truncated for brevity.
+
+
+
+ ```json hideClipboard
+ {
+ "caCert": "-----BEGIN CERTIFICATE-----\n...............\n-----END CERTIFICATE-----\n",
+ "clientCert": "-----BEGIN CERTIFICATE-----\n...........\n-----END CERTIFICATE-----",
+ "clientKey": "-----BEGIN RSA PRIVATE KEY-----\n........\n-----END RSA PRIVATE KEY-----\n",
+ "port": 443,
+ "protocol": "https",
+ "server": "frps.palette.example.com"
+ }
+ ```
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/system-management/ssl-certificate-management.md b/docs/docs-content/enterprise-version/system-management/ssl-certificate-management.md
new file mode 100644
index 0000000000..55089d8daf
--- /dev/null
+++ b/docs/docs-content/enterprise-version/system-management/ssl-certificate-management.md
@@ -0,0 +1,84 @@
+---
+sidebar_label: "SSL Certificate Management"
+title: "SSL Certificate"
+description: "Upload and manage SSL certificates in Palette."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 30
+tags: ["palette", "management"]
+---
+
+
+When you install Palette, a self-signed certificate is generated and used by default. You can upload your own SSL certificate to replace the default certificate.
+
+Palette uses SSL certificates to secure external communication. Communication between Palette's internal components is secured by default and uses HTTPS. External communication with Palette, such as the system console, gRPC endpoint, and API endpoint, requires you to upload an SSL certificate to enable HTTPS.
+
+
+:::info
+
+Enabling HTTPS is a non-disruptive operation. You can enable HTTPS at any time without affecting the system's functionality.
+
+:::
+
+
+## Upload an SSL Certificate
+
+You can upload an SSL certificate in Palette by using the following steps.
+
+
+### Prerequisites
+
+- Access to the Palette system console.
+
+
+- You need an x509 certificate and a key file in PEM format. The certificate file must contain the full certificate chain. Reach out to your network administrator or security team if you do not have these files. An inspection sketch follows this list.
+
+
+- Ensure the certificate is created for the custom domain name you specified for your Palette installation. If you did not specify a custom domain name, the certificate must be created for the Palette system console's IP address. You can also specify a load balancer's IP address if you are using a load balancer to access Palette.
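+
+
+The following is a sketch for inspecting the files before you upload them. It uses the hypothetical file names **certificate.pem** and **key.pem**; replace them with your own.
+
+```shell
+# Check the certificate subject and expiry date.
+openssl x509 -in certificate.pem -noout -subject -enddate
+
+# Confirm the certificate and key belong together by comparing their public key digests.
+openssl x509 -in certificate.pem -noout -pubkey | openssl sha256
+openssl pkey -in key.pem -pubout | openssl sha256
+```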
+
+
+### Enablement
+
+1. Log in to the Palette system console.
+
+
+2. Navigate to the left **Main Menu** and select **Administration**.
+
+
+3. Select the tab titled **Certificates**.
+
+
+4. Copy and paste the certificate into the **Certificate** field.
+
+
+5. Copy and paste the certificate key into the **Key** field.
+
+
+6. Copy and paste the certificate authority into the **Certificate authority** field.
+
+
+
+
+ ![A view of the certificate upload screen](/palette_system-management_ssl-certifiacte-management_certificate-upload.png)
+
+
+
+7. Save your changes.
+
+If the certificate is invalid, you will receive an error message. Once the certificate is uploaded successfully, Palette will refresh its listening ports and start using the new certificate.
+
+
+### Validate
+
+You can validate that your certificate is uploaded correctly by using the following steps.
+
+
+
+
+1. Log out of the Palette system console. If you are already logged in, log out and close your browser session. Browsers cache connections and may continue to use a cached HTTP connection instead of the newly enabled HTTPS connection.
+
+
+2. Log back into the Palette system console. Ensure the connection is secure by checking the URL. The URL should start with `https://`.
+
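+
+3. Optionally, confirm from the command line that Palette serves your certificate instead of the default self-signed one. The sketch below uses `openssl s_client`; replace `palette.example.com` with your Palette domain or IP address.
+
+
+   ```shell
+   openssl s_client -connect palette.example.com:443 -servername palette.example.com </dev/null 2>/dev/null \
+     | openssl x509 -noout -subject -issuer -enddate
+   ```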
+
+Palette is now using your uploaded certificate to create a secure HTTPS connection with external clients. Users can now securely access the system console, gRPC endpoint, and API endpoint.
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/system-management/system-management.md b/docs/docs-content/enterprise-version/system-management/system-management.md
new file mode 100644
index 0000000000..0aa581280e
--- /dev/null
+++ b/docs/docs-content/enterprise-version/system-management/system-management.md
@@ -0,0 +1,69 @@
+---
+sidebar_label: "System Management"
+title: "System Management"
+description: "Manage your Palette system settings."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 20
+tags: ["palette", "self-hosted", "management"]
+---
+
+Palette contains many system settings you can configure to meet your organization's needs. These settings are available at the system level and are applied to all [tenants](../../glossary-all.md#tenant) in the system.
+
+
+
+:::caution
+
+Exercise caution when changing system settings, as the changes will be applied to all tenants in the system.
+
+:::
+
+
+
+## System Console
+
+The system console enables you to complete the initial setup and onboarding, and to manage the overall Palette environment.
+
+### Access the System Console
+
+You can access the system console by visiting the IP address or the custom domain name assigned to your Palette cluster and appending the `/system` path to the URL. For example, if your Palette cluster is hosted at `https://palette.abc.com`, you can access the system console at `https://palette.abc.com/system`.
+
+
+## Administration and Management
+
+Platform administrators can use this console to perform the following operations:
+
+- Configure and manage SMTP settings.
+
+- Configure and manage Pack registries.
+
+- [Configure and manage SSL certificates](ssl-certificate-management.md).
+
+- [Enable backup and restore](backup-restore.md).
+
+- Configure DNS settings.
+
+- Set up alerts and notifications.
+
+- Enable metrics collection.
+
+- Manage Palette platform upgrades.
+
+- Configure the login banner.
+
+- [Manage tenants](tenant-management.md).
+
+- Manage the Enterprise cluster and the profile layers and pack integrations that make up the Enterprise cluster.
+
+Check out the following resources to learn more about these operations.
+
+## Resources
+
+
+* [Tenant Management](tenant-management.md)
+
+
+* [Configure Reverse Proxy](reverse-proxy.md)
+
+
+* [SSL Certificate Management](ssl-certificate-management.md)
diff --git a/docs/docs-content/enterprise-version/system-management/tenant-management.md b/docs/docs-content/enterprise-version/system-management/tenant-management.md
new file mode 100644
index 0000000000..c61f856b09
--- /dev/null
+++ b/docs/docs-content/enterprise-version/system-management/tenant-management.md
@@ -0,0 +1,118 @@
+---
+sidebar_label: "Tenant Management"
+title: "Tenant Management"
+description: "Learn how to create and remove tenants in Palette."
+icon: ""
+hide_table_of_contents: false
+sidebar_position: 10
+tags: ["palette", "self-hosted", "management"]
+---
+
+
+Tenants are isolated environments in Palette that contain their own clusters, users, and resources. You can create multiple tenants in Palette to support multiple teams or projects. Instructions for creating and removing tenants are provided below.
+
+
+
+
+## Create a Tenant
+
+You can create a tenant in Palette by following these steps.
+
+
+### Prerequisites
+
+* Access to the Palette system console.
+
+
+### Enablement
+
+1. Log in to the Palette system console.
+
+
+2. Navigate to the left **Main Menu** and select **Tenant Management**.
+
+
+3. Click **Create New Tenant**.
+
+
+4. Fill out the **Org Name** and the properties of the admin user by providing the **First Name**, **Last Name**, and **Email**.
+
+
+5. Confirm your changes.
+
+
+6. From the tenant list view, find your newly created tenant and click on the **three dots Menu**. Select **Activate** to activate the tenant.
+
+
+
+ ![View of a tenant activation option](/enterprise-version_system-management_tenant-management_activate-tenant.png)
+
+
+
+7. A pop-up box will present you with an activation URL. Copy the URL and paste it into your browser to activate the tenant.
+
+
+8. Provide the admin user with a new password.
+
+
+9. Log in to the tenant console using the admin user credentials.
+
+
+### Validate
+
+1. Log in to Palette.
+
+
+2. Verify that you can access the tenant as the admin user.
+
+
+
+## Remove a Tenant
+
+You can remove a tenant in Palette using the following steps.
+
+### Prerequisites
+
+* Access to the Palette system console.
+
+### Removal
+
+1. Log in to the Palette system console.
+
+
+2. Navigate to the left **Main Menu** and select **Tenant Management**.
+
+
+3. From the tenant list view, select the tenant you want to remove and click on the **three dots Menu**.
+
+
+4. Select **Delete** to prepare the tenant for removal.
+
+
+5. Click on your tenant's **three dots Menu** and select **Clean up** to remove all the tenant's resources.
+
+
+
+ ![View of a tenant deletion option](/enterprise_version_system-management_tenant-management_remove-tenant.png)
+
+
+
+:::caution
+
+If you do not clean up the tenant's resources, such as clusters and Private Cloud Gateways (PCGs), the tenant will remain in a **Deleting** state. You can use **Force Cleanup & Delete** to proceed with deletion without manually cleaning up tenant resources.
+
+:::
+
+
+After the cleanup process completes, the tenant will be removed from the tenant list view.
+
+### Validate
+
+
+1. Log in to the Palette system console. Refer to the [Access Palette](../enterprise-version.md#access-palette) section for instructions on how to access the Palette system console.
+
+
+2. Navigate to the left **Main Menu** and select **Tenant Management**.
+
+
+3. Validate that the tenant was removed by checking the tenant list view.
\ No newline at end of file
diff --git a/docs/docs-content/enterprise-version/upgrade.md b/docs/docs-content/enterprise-version/upgrade.md
index c9188e928a..83ca049d57 100644
--- a/docs/docs-content/enterprise-version/upgrade.md
+++ b/docs/docs-content/enterprise-version/upgrade.md
@@ -5,6 +5,7 @@ description: "Spectro Cloud upgrade notes for specific Palette versions."
icon: ""
hide_table_of_contents: false
sidebar_position: 100
+tags: ["palette", "self-hosted", "upgrade"]
---
This page is a reference resource to help you better prepare for a Palette upgrade. Review each version's upgrade notes for more information about required actions and other important messages to be aware of. If you have questions or concerns, reach out to our support team by opening up a ticket through our [support page](http://support.spectrocloud.io/).
diff --git a/docs/docs-content/release-notes.md b/docs/docs-content/release-notes.md
index 5bd85f2c1e..3d1dc464ba 100644
--- a/docs/docs-content/release-notes.md
+++ b/docs/docs-content/release-notes.md
@@ -28,7 +28,7 @@ Palette 3.4.0 has various security upgrades, better support for multiple Kuberne
#### Breaking Changes
-- Installations of self-hosted Palette in a Kubernetes cluster now require [cert-manager](https://cert-manager.io/docs/installation/) to be available before installing Palette. Cert-manager is used to enable Mutual TLS (mTLS) between all of Palette's internal components. Refer to the prerequisites section of [Installing Palette using Helm Charts](enterprise-version/deploying-palette-with-helm.md) guide for more details.
+- Installations of self-hosted Palette in a Kubernetes cluster now require [cert-manager](https://cert-manager.io/docs/installation/) to be available before installing Palette. Cert-manager is used to enable Mutual TLS (mTLS) between all of Palette's internal components. Refer to the prerequisites section of [Installing Palette using Helm Charts](enterprise-version/install-palette/install-on-kubernetes/install.md) guide for more details.
- Self-hosted Palette for Kubernetes now installs Palette Ingress resources in a namespace that Palette manages. Prior versions of Palette installed internal components ingress resources in the default namespace. Review the [Upgrade Notes](enterprise-version/upgrade.md#palette-34) to learn more about this change and how to upgrade.
@@ -606,7 +606,7 @@ Spectro Cloud Palette 2.7 is released with advanced features supporting Windows
**Enhancements:**
* Palette [Azure CNI Pack](/integrations/azure-cni#azurecni) ensures advanced traffic flow control using Calico Policies for AKS clusters.
-* Palette supports the [migration of Private Cloud Gateway (PCG)](/enterprise-version/enterprise-cluster-management#palettepcgmigration) traffic from unhealthy to healthy PCG without compromising service availability.
+* Palette supports the [migration of Private Cloud Gateway (PCG)](clusters/clusters.md) traffic from unhealthy to healthy PCG without compromising service availability.
* Palette Workspace upgraded with
* [Resource Quota](/workspace/workload-features#workspacequota) allocation for Workspaces, Namespaces, and Clusters.
* [Restricted Container Images](/workspace/workload-features#restrictedcontainerimages) feature to restrict the accidental deployment of a delisted or unwanted container to a specific namespace.
@@ -843,7 +843,7 @@ Our on-premises version gets attention to finer details with this release:
- The Spectro Cloud database can now be backed up and restored.
- Whereas previous on-premises versions allowed upgrading only to major versions, this release allows upgrading to minor versions of the Spectro Cloud platform. Upgrades to the Spectro Cloud platform are published to the Spectro Cloud repository, and a notification is displayed on the console when new versions are available.
-- Monitoring the installation using the dedicated UI}>The platform installer contains a web application called the Supervisor, to provide detailed progress of the installation. now provides more details when [migrating](/enterprise-version/deploying-an-enterprise-cluster/#migratequickstartmodeclustertoenterprise) from the quick start version to the enterprise version.
+- Monitoring the installation using the dedicated UI now provides more details when migrating from the quick start version to the enterprise version. The platform installer contains a web application called the Supervisor that provides detailed progress of the installation.
- AWS and GCP clusters can now be provisioned from an on-premises Spectro Cloud system.
On the VMware front, we have:
@@ -863,7 +863,7 @@ Other new features:
In this hotfix, we added:
- Compatibility for [Calico 3.16](https://www.projectcalico.org/whats-new-in-calico-3-16/).
-- The on-premises version now allows specifying [CIDR for pods](/enterprise-version/deploying-the-platform-installer/#deployplatforminstaller) to allocate them an exclusive IP range.
+- The on-premises version now allows specifying CIDR for pods to allocate them an exclusive IP range.
- It also allows allocating an IP range in the CIDR format exclusive to the service clusters.
The IP ranges for the pods, service clusters, and your IP network must not overlap with one another. This hotfix provides options to prevent node creation errors due to IP conflicts.
diff --git a/docs/docs-content/security/product-architecture/self-hosted-operation.md b/docs/docs-content/security/product-architecture/self-hosted-operation.md
index d1f9102579..7a75abb317 100644
--- a/docs/docs-content/security/product-architecture/self-hosted-operation.md
+++ b/docs/docs-content/security/product-architecture/self-hosted-operation.md
@@ -15,7 +15,7 @@ tags: ["security"]
In self-hosted operation, where Palette is typically deployed on-prem behind a firewall, you must ensure your environment has security controls. Palette automatically generates security keys at installation and stores them in the management cluster. You can import an optional certificate and private key to match the Fully Qualified Domain Name (FQDN) management cluster. Palette supports enabling disk encryption policies for management cluster virtual machines (VMs) if required. For information about deploying Palette in a self-hosted environment, review the [Self-Hosted Installation](../../enterprise-version/enterprise-version.md) guide.
-In self-hosted deployments, the Open Virtualization Appliance (OVA) can operate in stand-alone mode for quick Proof of Concept (POC) or in enterprise mode, which launches a three-node High Availability (HA) cluster as the Palette management cluster. The management cluster provides a browser-based web interface that allows you to set up a tenant and provision and manage tenant clusters. You can also deploy Palette to a Kubernetes cluster by using the Palette Helm Chart. To learn more, review the [Install Using Helm Chart](../../enterprise-version/deploying-palette-with-helm.md) guide.
+In self-hosted deployments, the Open Virtualization Appliance (OVA) can operate in stand-alone mode for quick Proof of Concept (POC) or in enterprise mode, which launches a three-node High Availability (HA) cluster as the Palette management cluster. The management cluster provides a browser-based web interface that allows you to set up a tenant and provision and manage tenant clusters. You can also deploy Palette to a Kubernetes cluster by using the Palette Helm Chart. To learn more, review the [Install Using Helm Chart](../../enterprise-version/install-palette/install-on-kubernetes/install.md) guide.
The following points apply to self-hosted deployments:
diff --git a/redirects.js b/redirects.js
index 481eebf04c..236af44460 100644
--- a/redirects.js
+++ b/redirects.js
@@ -190,6 +190,50 @@ const redirects = [
{
from: `/integrations/EKS-D`,
to: `/integrations`,
+ },
+ {
+ from: `/enterprise-version/on-prem-system-requirements`,
+ to: `/enterprise-version/install-palette`,
+ },
+ {
+ from: `/enterprise-version/deploying-the-platform-installer`,
+ to: `/enterprise-version/install-palette`,
+ },
+ {
+ from: `/enterprise-version/deploying-an-enterprise-cluster`,
+ to: `/enterprise-version/install-palette`,
+ },
+ {
+ from: `/enterprise-version/deploying-palette-with-helm`,
+ to: `/enterprise-version/install-palette/install-on-kubernetes/install`
+ },
+ {
+ from: `/enterprise-version/helm-chart-install-reference`,
+ to: `/enterprise-version/install-palette/install-on-kubernetes/palette-helm-ref`
+ },
+ {
+ from: `/enterprise-version/system-console-dashboard`,
+ to: `/enterprise-version/system-management`
+ },
+ {
+ from: `/enterprise-version/enterprise-cluster-management`,
+ to: `/enterprise-version/system-management`
+ },
+ {
+ from: `/enterprise-version/monitoring`,
+ to: `/enterprise-version/system-management`
+ },
+ {
+ from: `/enterprise-version/air-gap-repo`,
+ to: `/enterprise-version/install-palette`
+ },
+ {
+ from: `/enterprise-version/reverse-proxy`,
+ to: `/enterprise-version/system-management/reverse-proxy`
+ },
+ {
+ from: `/enterprise-version/ssl-certificate-management`,
+ to: `/enterprise-version/system-management/ssl-certificate-management`
}
];
diff --git a/static/assets/docs/images/enterprise-version_system-management_tenant-management_activate-tenant.png b/static/assets/docs/images/enterprise-version_system-management_tenant-management_activate-tenant.png
new file mode 100644
index 0000000000..32ded13840
Binary files /dev/null and b/static/assets/docs/images/enterprise-version_system-management_tenant-management_activate-tenant.png differ
diff --git a/static/assets/docs/images/enterprise_version_system-management_tenant-management_remove-tenant.png b/static/assets/docs/images/enterprise_version_system-management_tenant-management_remove-tenant.png
new file mode 100644
index 0000000000..9898153b18
Binary files /dev/null and b/static/assets/docs/images/enterprise_version_system-management_tenant-management_remove-tenant.png differ
diff --git a/static/assets/docs/images/palette_installation_install-on-vmware_palette-system-console.png b/static/assets/docs/images/palette_installation_install-on-vmware_palette-system-console.png
new file mode 100644
index 0000000000..9f169a211e
Binary files /dev/null and b/static/assets/docs/images/palette_installation_install-on-vmware_palette-system-console.png differ
diff --git a/static/assets/docs/images/palette_system-management_ssl-certifiacte-management_certificate-upload.png b/static/assets/docs/images/palette_system-management_ssl-certifiacte-management_certificate-upload.png
new file mode 100644
index 0000000000..41ea49cd14
Binary files /dev/null and b/static/assets/docs/images/palette_system-management_ssl-certifiacte-management_certificate-upload.png differ