diff --git a/modules/chapter1/images/pipeline_install1.png b/modules/chapter1/images/pipeline_install1.png
new file mode 100644
index 0000000..04f2129
Binary files /dev/null and b/modules/chapter1/images/pipeline_install1.png differ
diff --git a/modules/chapter1/images/pipeline_install2.png b/modules/chapter1/images/pipeline_install2.png
new file mode 100644
index 0000000..2e9d503
Binary files /dev/null and b/modules/chapter1/images/pipeline_install2.png differ
diff --git a/modules/chapter1/images/pipeline_install3.png b/modules/chapter1/images/pipeline_install3.png
new file mode 100644
index 0000000..a4d40ab
Binary files /dev/null and b/modules/chapter1/images/pipeline_install3.png differ
diff --git a/modules/chapter1/images/pipeline_install4.png b/modules/chapter1/images/pipeline_install4.png
new file mode 100644
index 0000000..0853644
Binary files /dev/null and b/modules/chapter1/images/pipeline_install4.png differ
diff --git a/modules/chapter1/images/pipeline_search.png b/modules/chapter1/images/pipeline_search.png
new file mode 100644
index 0000000..1355e87
Binary files /dev/null and b/modules/chapter1/images/pipeline_search.png differ
diff --git a/modules/chapter1/images/rhods2-clusters.png b/modules/chapter1/images/rhods2-clusters.png
new file mode 100644
index 0000000..75d061d
Binary files /dev/null and b/modules/chapter1/images/rhods2-clusters.png differ
diff --git a/modules/chapter1/images/rhods2-conditions.png b/modules/chapter1/images/rhods2-conditions.png
new file mode 100644
index 0000000..0572eb8
Binary files /dev/null and b/modules/chapter1/images/rhods2-conditions.png differ
diff --git a/modules/chapter1/images/rhods2-create-cluster.png b/modules/chapter1/images/rhods2-create-cluster.png
new file mode 100644
index 0000000..f05b6fd
Binary files /dev/null and b/modules/chapter1/images/rhods2-create-cluster.png differ
diff --git a/modules/chapter1/images/rhods2-install-finished.png b/modules/chapter1/images/rhods2-install-finished.png
new file mode 100644
index 0000000..07a91db
Binary files /dev/null and b/modules/chapter1/images/rhods2-install-finished.png differ
diff --git a/modules/chapter1/images/rhods2-install-view.png b/modules/chapter1/images/rhods2-install-view.png
new file mode 100644
index 0000000..df41994
Binary files /dev/null and b/modules/chapter1/images/rhods2-install-view.png differ
diff --git a/modules/chapter1/images/rhods2-install.png b/modules/chapter1/images/rhods2-install.png
new file mode 100644
index 0000000..5717202
Binary files /dev/null and b/modules/chapter1/images/rhods2-install.png differ
diff --git a/modules/chapter1/images/rhods_install1.png b/modules/chapter1/images/rhods_install1.png
new file mode 100644
index 0000000..17d3b87
Binary files /dev/null and b/modules/chapter1/images/rhods_install1.png differ
diff --git a/modules/chapter1/images/rhods_install2.png b/modules/chapter1/images/rhods_install2.png
new file mode 100644
index 0000000..e6fa902
Binary files /dev/null and b/modules/chapter1/images/rhods_install2.png differ
diff --git a/modules/chapter1/images/rhods_install3.png b/modules/chapter1/images/rhods_install3.png
new file mode 100644
index 0000000..d4dc870
Binary files /dev/null and b/modules/chapter1/images/rhods_install3.png differ
diff --git a/modules/chapter1/images/rhods_install4.png b/modules/chapter1/images/rhods_install4.png
new file mode 100644
index 0000000..99ca9fe
Binary files /dev/null and b/modules/chapter1/images/rhods_install4.png differ
diff --git a/modules/chapter1/images/rhods_install5.png b/modules/chapter1/images/rhods_install5.png
new file mode 100644
index 0000000..3a39295
Binary files /dev/null and b/modules/chapter1/images/rhods_install5.png differ
diff --git a/modules/chapter1/images/rhods_verify1.png b/modules/chapter1/images/rhods_verify1.png
new file mode 100644
index 0000000..d870522
Binary files /dev/null and b/modules/chapter1/images/rhods_verify1.png differ
diff --git a/modules/chapter1/images/rhods_verify2.png b/modules/chapter1/images/rhods_verify2.png
new file mode 100644
index 0000000..ab73d0f
Binary files /dev/null and b/modules/chapter1/images/rhods_verify2.png differ
diff --git a/modules/chapter1/images/rhods_verify_pods.png b/modules/chapter1/images/rhods_verify_pods.png
new file mode 100644
index 0000000..1c87298
Binary files /dev/null and b/modules/chapter1/images/rhods_verify_pods.png differ
diff --git a/modules/chapter1/pages/section1.adoc b/modules/chapter1/pages/section1.adoc
index 35361c6..2b06f4d 100644
--- a/modules/chapter1/pages/section1.adoc
+++ b/modules/chapter1/pages/section1.adoc
@@ -1,4 +1,3 @@
-//find a better title
 = General Information about Installation

 Red{nbsp}Hat Openshift Data Science is available to install a self-managed version as an operator through OperatorHub or as a fully managed solution through OpenShift Marketplace.
diff --git a/modules/chapter1/pages/section2.adoc b/modules/chapter1/pages/section2.adoc
index c2bf1d5..7a2ce84 100644
--- a/modules/chapter1/pages/section2.adoc
+++ b/modules/chapter1/pages/section2.adoc
@@ -1 +1,176 @@
-= Section 2
+= Installation using the Web Console
+
+*Red{nbsp}Hat OpenShift Data Science* is available as an operator via the OpenShift OperatorHub. In this section, you will install the *Red{nbsp}Hat OpenShift Data Science operator V2* using the OpenShift web console.
+
+== Installation of OpenShift Data Science dependencies
+
+As described in the xref:section1.adoc[General Information about Installation] section, you may need to install other operators depending on the components and features of OpenShift Data Science you want to use.
+Installing *Red{nbsp}Hat OpenShift Data Science* before its dependencies does not impact the installation process itself; however, it may prevent the components that depend on them from initializing. It is therefore better to install the dependencies beforehand.
+
+https://www.redhat.com/en/technologies/cloud-computing/openshift/pipelines[Red{nbsp}Hat OpenShift Pipelines Operator]::
+The *Red{nbsp}Hat OpenShift Pipelines Operator* is required if you want to install the *Red{nbsp}Hat OpenShift Data Science Pipelines* component.
+https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html[NVIDIA GPU Operator]::
+The *NVIDIA GPU Operator* is required for GPU support in *Red{nbsp}Hat OpenShift Data Science*.
+https://docs.openshift.com/container-platform/4.13/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator]::
+The *Node Feature Discovery Operator* is a prerequisite for the *NVIDIA GPU Operator*.
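+
+The demonstrations below install these operators from OperatorHub in the web console, but the same result can be achieved declaratively by creating a `Subscription` resource. The following is a minimal sketch for the *Red{nbsp}Hat OpenShift Pipelines Operator*; the package name, channel, and catalog source shown here are the values typically found in the Red{nbsp}Hat catalog, so verify them against the OperatorHub entry on your cluster before applying.
+
+----
+# Minimal sketch: a CLI equivalent of the OperatorHub installation below.
+# Verify the channel, package name, and catalog source on your cluster.
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: openshift-pipelines-operator-rh
+  namespace: openshift-operators          # the fixed "Installed namespace"
+spec:
+  channel: latest                         # the "Update channel"
+  installPlanApproval: Automatic          # the "Update approval"
+  name: openshift-pipelines-operator-rh   # package name in the catalog
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+----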
+
+The following demonstration shows the installation of the https://www.redhat.com/en/technologies/cloud-computing/openshift/pipelines[Red{nbsp}Hat OpenShift Pipelines Operator], which is a dependency of the *Data Science Pipelines* component installed by default. The installation of the other two operators is very similar.
+
+=== Demo: Installation of the *Red{nbsp}Hat OpenShift Pipelines* operator
+
+1. Log in to Red{nbsp}Hat OpenShift as a user with the _cluster-admin_ role.
+2. Navigate to **Operators** -> **OperatorHub** and search for *Red{nbsp}Hat OpenShift Pipelines*.
++
+image::pipeline_search.png[width=800]
+
+3. Click the *Red{nbsp}Hat OpenShift Pipelines* operator and, in the pop-up window, click **Install** to open the operator's installation view.
++
+image::pipeline_install1.png[width=800]
+
+4. In the installation view, choose the *Update{nbsp}channel* and *Update{nbsp}approval* parameters. You can accept the default values. The *Installation{nbsp}mode* and *Installed{nbsp}namespace* parameters are fixed.
++
+image::pipeline_install2.png[width=800]
+
+5. Click the **Install** button at the bottom of the view to proceed with the installation. A window showing the installation progress will pop up.
++
+image::pipeline_install3.png[width=800]
+
+6. When the installation finishes, the operator is ready to be used by *Red{nbsp}Hat OpenShift Data Science*.
++
+image::pipeline_install4.png[width=800]
+
+== Demo: Installation of the Red{nbsp}Hat OpenShift Data Science operator
+
+IMPORTANT: The installation requires a user with the _cluster-admin_ role.
+
+1. Log in to Red{nbsp}Hat OpenShift as a user with the _cluster-admin_ role.
+
+2. Navigate to **Operators** -> **OperatorHub** and search for *Red{nbsp}Hat OpenShift Data Science*.
++
+image::rhods_install1.png[width=800]
+
+3. Click the Red{nbsp}Hat OpenShift Data Science operator and, in the pop-up window, click **Install** to open the operator's installation view.
++
+IMPORTANT: Make sure you select the *OpenShift Data Science* operator provided by *Red{nbsp}Hat*, not the *Community* version.
++
+image::rhods_install2.png[width=800]
+
+4. In the installation view, choose the _alpha_ *Update channel* and _Automatic_ *Update approval*, and keep the default *Installed Namespace*. Click the *Install* button to start the installation.
++
+image::rhods2-install-view.png[width=800]
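++
+For reference, the choices made in this view map directly onto a `Subscription` resource; a minimal sketch of the declarative equivalent of this step is shown below. The package name and namespace are those commonly used by the self-managed operator, so confirm them against the OperatorHub entry on your cluster. Note that, unlike the web console flow, a CLI installation would also require creating the target namespace and an OperatorGroup in it first.
++
+----
+# Minimal sketch: a declarative equivalent of this installation step.
+# Confirm the package name, channel, and namespace on your cluster.
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: rhods-operator
+  namespace: redhat-ods-operator   # the default "Installed Namespace"
+spec:
+  channel: alpha                   # the "Update channel" chosen above
+  installPlanApproval: Automatic   # the "Update approval" chosen above
+  name: rhods-operator             # package name in the catalog
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+----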
++
+A window showing the installation progress will pop up. The installation may take a couple of minutes.
++
+image::rhods2-install.png[width=800]
+
+5. When the operator's installation is finished, click the *Create DataScienceCluster* button to create and configure your cluster.
++
+image::rhods2-install-finished.png[width=800]
+
+6. In the *Create DataScienceCluster* view, select the components that will be installed and managed by the operator.
++
+The following components are available:
++
+ * *CodeFlare:* CodeFlare is an open-source framework that simplifies the integration, scaling, and acceleration of complex multi-step analytics and machine learning pipelines on the hybrid cloud. It is built on top of Ray, an emerging open-source distributed computing framework for machine learning applications, and extends Ray's capabilities with specific elements that make scaling workflows easier.
++
+ * *Ray:* Ray is an open technology for “fast and simple distributed computing.” It makes it easy for data scientists and application developers to run their code in a distributed fashion. It also provides a lean and easy interface for distributed programming with many different libraries, best suited to performing machine learning and other compute-intensive tasks.
++
+ * *Dashboard:* A web dashboard that displays the installed *Data Science* components, with easy access to component UIs and documentation.
++
+ * *Data Science Pipelines:* Data Science Pipelines allow building portable machine learning workflows using Docker containers. This enables you to standardize and automate machine learning workflows so that you can develop and deploy your data science models.
++
+ * *KServe:* KServe, formerly known as KFServing, is a Kubernetes-based serverless framework for inferencing (scoring) deep learning models. It provides a consistent and Kubernetes-native way to deploy, serve, and manage machine learning models in production environments. KServe is designed to be scalable and efficient, allowing for automatic scaling of model serving based on demand.
++
+ * *ModelMeshServing:* ModelMesh Serving is the controller for managing ModelMesh, a general-purpose model serving management and routing layer.
++
+ * *Workbenches:* Workbenches allow you to examine and work with data models in an isolated area. You can create a new Jupyter notebook from an existing notebook container image to access its resources and properties. For data science projects that require data to be retained, you can add container storage to the workbench you are creating.
++
+For this demonstration, accept the default (pre-selected) components: Dashboard, Data Science Pipelines, ModelMesh Serving, and Workbenches.
++
+You can create the DataScienceCluster using either the _Form view_ or the _YAML view_. The _Form view_ is a web-based form, while the _YAML view_ is based on a YAML definition of the DataScienceCluster resource. The following picture shows the _Form view_.
++
+image::rhods2-create-cluster.png[width=800]
++
+If you choose the _YAML view_, you are presented with a template of the DataScienceCluster resource definition similar to the one below.
++
+----
+apiVersion: datasciencecluster.opendatahub.io/v1
+kind: DataScienceCluster
+metadata:
+  name: default
+  labels:
+    app.kubernetes.io/name: datasciencecluster
+    app.kubernetes.io/instance: default
+    app.kubernetes.io/part-of: rhods-operator
+    app.kubernetes.io/managed-by: kustomize
+    app.kubernetes.io/created-by: rhods-operator
+spec:
+  components:
+    codeflare:
+      managementState: Removed <1>
+    dashboard:
+      managementState: Managed <2>
+    datasciencepipelines:
+      managementState: Managed
+    kserve:
+      managementState: Removed
+    modelmeshserving:
+      managementState: Managed
+    ray:
+      managementState: Removed
+    workbenches:
+      managementState: Managed
+----
+<1> For components you *do not* want to install, use *Removed*.
+<2> For components you *want* the operator to install and manage, use *Managed*.
++
+After naming the cluster and choosing the components you wish the operator to install and manage, click the *Create* button.
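++
+The component selection is not final. A minimal sketch of a later edit, assuming you decide to enable the KServe component after the initial installation: change its `managementState` in the DataScienceCluster resource, and the operator should reconcile the change.
++
+----
+# Excerpt of the DataScienceCluster spec shown above; switching a
+# component from Removed to Managed asks the operator to install it.
+spec:
+  components:
+    kserve:
+      managementState: Managed   # was Removed in the template above
+----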
+
+7. After creating the DataScienceCluster, a view showing the DataScienceCluster details opens. Wait until the status of the cluster reads *Phase: Ready*. This represents the status of the whole cluster.
++
+image::rhods2-clusters.png[width=800]
++
+You may also check the status of the individual installed components by looking at their conditions. Click the *default* cluster and switch to the YAML view, then scroll down to view the *conditions*.
++
+image::rhods2-conditions.png[width=800]
++
+Each condition is represented by a *type* and a *status*. The *type* is a string describing the condition, for instance _odh-dashboardReady_, and the *status* states whether it is _true_ or not. The following example shows the *Ready* status of the Dashboard component.
++
+[subs=+quotes]
+----
+- lastHeartbeatTime: '2023-11-13T10:53:20Z'
+  lastTransitionTime: '2023-11-13T10:53:20Z'
+  message: Component reconciled successfully
+  reason: ReconcileCompleted
+  #status: 'True'#
+  #type: odh-dashboardReady#
+----
++
+The following list shows the conditions you may want to check:
++
+ * rayReady
+ * codeflareReady
+ * model-meshReady
+ * workbenchesReady
+ * data-science-pipelines-operatorReady
+ * odh-dashboardReady
+
+8. The operator should now be installed and configured.
+In the applications menu in the upper right corner of the screen, the *Red{nbsp}Hat OpenShift Data Science* dashboard should be available.
++
+image::rhods_verify1.png[width=800]
+
+9. Click the *Red{nbsp}Hat OpenShift Data Science* button to log in to the *Red{nbsp}Hat OpenShift Data Science* dashboard.
++
+image::rhods_verify2.png[width=800]
++
+IMPORTANT: It may take a while to start all the service pods; hence, the login window may not be accessible immediately. If you get an error, check the status of the pods in the *redhat-ods-applications* project:
+navigate to *Workloads* -> *Pods* and select the *redhat-ods-applications* project. All pods must be running and ready. If they are not, wait until they are.
++
+image::rhods_verify_pods.png[width=800]