HPCC-27610 Update Container Placements Documentation
Signed-off-by: g-pan <[email protected]>
g-pan committed Oct 25, 2023
1 parent bf71a79 commit 31433aa
Showing 1 changed file with 69 additions and 50 deletions.
119 changes: 69 additions & 50 deletions docs/EN_US/ContainerizedHPCC/ContainerizedMods/ConfigureValues.xml
@@ -1214,8 +1214,8 @@ thor:
needs.</para>

<para>You can deploy these values either using the values.yaml file or you
can place into an file and have Kubernetes instead read the values from
the supplied file. See the above section <emphasis>Customization
can place them into a file and have Kubernetes read the values from
the supplied file. See the above section <emphasis>Customization
Techniques</emphasis> for more information about customizing your
deployment.</para>
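<para>As a sketch of the second approach (assuming a standard Helm
workflow for the HPCC Systems chart; the file name
<emphasis>placements.yaml</emphasis> is illustrative), the separate
values file can be supplied on the command line:</para>

<programlisting>helm install mycluster hpcc/hpcc -f placements.yaml</programlisting>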

@@ -1260,45 +1260,63 @@ thor:

<para>The pods: [list] item can contain one of the following:</para>

<para><orderedlist>
<listitem>
<para>Type: &lt;component&gt; Covers all pods/jobs under this
type of component. This is commonly used for HPCC Systems
components. For example, the <emphasis>type:thor</emphasis>
which will apply to any of the Thor type components; thoragent,
thormanager, thoragent and thorworker, etc.</para>
</listitem>

<listitem>
<para>Target: &lt;name&gt; The "name" field of each component,
typical usage for HPCC Systems components referrs to the cluster
name. For example <emphasis>Roxie: -name: roxie</emphasis> which
will be the "Roxie" target (cluster). You can also define
multiple targets with each having a unique name such as "roxie",
"roxie2", "roxie-web" etc</para>
</listitem>

<listitem>
<para>Pod: This is the "Deployment" metadata name from the name
of the array item of a type. For example, "eclwatch-",
"mydali-", "thor-thoragent" This can be a regular expression
since Kubernetes will use the metadata name as a prefix and
dynamically generate the pod name such as,
eclwatch-7f4dd4dd44cb-c0w3x.</para>
</listitem>

<listitem>
<para>Job name: The job name is typically a regular expression
as well, since the job name is generated dynamically. For
example, a compile job compile-54eB67e567e, could use "compile-"
or "compile-*" or the exact match "^compile-.$"</para>
</listitem>

<listitem>
<para>All: applies for all HPCC Systems components. The default
placements for pods delivered is [all]</para>
</listitem>
</orderedlist></para>
<informaltable colsep="1" frame="all" rowsep="1">
<tgroup cols="2">
<colspec colwidth="125.55pt" />

<colspec />

<tbody>
<row>
<entry>Type: &lt;component&gt;</entry>

<entry>Covers all pods/jobs under this type of component. This
is commonly used for HPCC Systems components. For example,
<emphasis>type:thor</emphasis> applies to any of the Thor type
components: thoragent, thormanager, thorworker, etc.</entry>
</row>

<row>
<entry>Target: &lt;name&gt;</entry>

<entry>The "name" field of each component; for HPCC Systems
components this typically refers to the cluster name. For
example, <emphasis>Roxie: -name: roxie</emphasis> will be the
"Roxie" target (cluster). You can also define multiple targets,
each with a unique name, such as "roxie", "roxie2", "roxie-web",
etc.</entry>
</row>

<row>
<entry>Pod: &lt;name&gt;</entry>

<entry>This is the "Deployment" metadata name derived from the
name of the array item of a type. For example, "eclwatch-",
"mydali-", or "thor-thoragent". This can be a regular
expression, since Kubernetes uses the metadata name as a prefix
and dynamically generates the pod name, such as
eclwatch-7f4dd4dd44cb-c0w3x.</entry>
</row>

<row>
<entry>Job name:</entry>

<entry>The job name is typically a regular expression as well,
since the job name is generated dynamically. For example, a
compile job named compile-54eB67e567e could be matched by
"compile-", "compile-*", or the anchored expression
"^compile-.*$".</entry>
</row>

<row>
<entry>All:</entry>

<entry>Applies to all HPCC Systems components. The default
placement delivered for pods is [all].</entry>
</row>
</tbody>
</tgroup>
</informaltable>
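<para>To illustrate (a hypothetical fragment, combining the selector
forms above with the placement syntax shown later in this section), a
pods list can mix these forms:</para>

<programlisting>placements:
- pods: ["type:thor", "target:roxie", "eclwatch-", "compile-"]
  placement:
    nodeSelector:
      group: "hpcc"</programlisting>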

<para>Regardless of the order the placements appear in the
configuration, they will be processed in the following order: "all",
@@ -1321,7 +1339,7 @@ thor:
<sect2 id="S2NodeSelection">
<title>Node Selection</title>

<para>In a Kubernetes container environment there are several ways to
<para>In a Kubernetes container environment, there are several ways to
schedule your nodes. The recommended approaches all use label selectors
to facilitate the selection. Generally, you may not need to set such
constraints; as the scheduler usually does reasonably acceptable
@@ -1393,8 +1411,8 @@ thor:
nodeSelector:
group: "hpcc"</programlisting></para>

<para>Note:the label: group:hpcc matches the node pool label:
"hpcc".</para>
<para><emphasis role="bold">Note:</emphasis> The label group:hpcc
matches the node pool label:hpcc.</para>

<para>This next example shows how to prevent scheduling a Dali
component onto a node pool labelled with the key
@@ -1418,9 +1436,9 @@ thor:
<title>Taints and Tolerations</title>

<para>Taints and Tolerations are types of Kubernetes node constraints
also referred to by Node Affinity. Only one "affinity" can be applied
to a pod. If a pod matches multiple placement 'pods' lists, then only
the last "affinity" definition will apply.</para>
also referred to as node affinity. Only one affinity can be applied
to a pod. If a pod matches multiple placement 'pods' lists, then only
the last affinity definition applies.</para>
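<para>For example (a sketch only; the taint key, value, and effect
are hypothetical and must match the taint applied to your node pool),
a toleration allowing Dali pods onto nodes tainted with
group=hpcc:NoSchedule could look like this:</para>

<programlisting>- pods: ["type:dali"]
  placement:
    tolerations:
    - key: "group"
      operator: "Equal"
      value: "hpcc"
      effect: "NoSchedule"</programlisting>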

<para>Taints and tolerations work together to ensure that pods are not
scheduled onto inappropriate nodes. Tolerations are applied to pods,
@@ -1524,7 +1542,7 @@ thor:
respectively. The Roxie pods will be evenly scheduled on the two node
pools.</para>
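<para>A spread of this kind can be sketched as follows (hypothetical
only; the <emphasis>topologyKey</emphasis> and labels must match your
own node pool and pod labels):</para>

<programlisting>- pods: ["target:roxie"]
  placement:
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: "agentpool"
      whenUnsatisfiable: "ScheduleAnyway"
      labelSelector:
        matchLabels:
          app: "roxie"</programlisting>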

<para>After deployment you can verify by issuing the following:</para>
<para>After deployment, you can verify by issuing the following
command:</para>

<programlisting>kubectl get pod -o wide | grep roxie</programlisting>

@@ -1570,7 +1589,7 @@ thor:

<para>There is no schema check for the content of affinity. Only one
affinity can be applied to a pod or job. If a pod/job matches
multiple placement 'pods' lists, then only the last affinity
multiple placement pods lists, then only the last affinity
definition applies.</para>
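<para>A minimal affinity sketch (assuming the group=hpcc node label
used earlier in this section) might look like this:</para>

<programlisting>- pods: ["all"]
  placement:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: "group"
              operator: "In"
              values: ["hpcc"]</programlisting>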

<para>For more information, see <ulink
@@ -1652,7 +1671,7 @@ thor:

<para>Only one "schedulerName" can be applied to any pod/job.</para>

<para>A SchedulerName example:</para>
<para>A schedulerName example:</para>

<programlisting>- pods: ["target:roxie"]
placement:
