---
title: Installing PKS on vSphere with NSX-T
owner: PKS
iaas: vSphere-NSX-T
---
<strong><%= modified_date %></strong>
This topic describes how to install and configure Pivotal Container Service (PKS) on vSphere with NSX-T integration.
##<a id='prerequisites'></a>Prerequisites
Before you begin this procedure, ensure that you have successfully completed all preceding steps for installing PKS on vSphere with NSX-T, including:
<ul>
<li>
<a href="./nsxt-deploy.html">Deploying NSX-T for PKS</a>
</li>
<li>
<a href="./nsxt-prepare-mgmt-plane.html">Creating the PKS Management Plane</a>
</li>
<li>
<a href="./nsxt-prepare-compute-plane.html">Creating the PKS Compute Plane</a>
</li>
<li>
<a href="./vsphere-nsxt-om-deploy.html">Deploying Ops Manager with NSX-T for PKS</a>
</li>
<li>
<a href="./generate-nsx-ca-cert.html">Generating and Registering the NSX Manager Certificate for PKS</a>
</li>
<li>
<a href="./vsphere-nsxt-om-config.html">Configuring BOSH Director with NSX-T for PKS</a>
</li>
<li>
<a href="./generate-nsx-pi-cert.html">Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key for PKS</a>
</li>
<li>
<a href="./nsxt-create-objects.html">Creating NSX-T Objects for PKS</a>
</li>
</ul>
##<a id='install'></a> Step 1: Install PKS
<%= partial 'install-pks' %>
##<a id='configure'></a> Step 2: Configure PKS
Click the orange **Pivotal Container Service** tile to start the configuration process.
<p class="note"><strong>Note</strong>: Configuration of NSX-T or Flannel <strong>cannot</strong> be changed after initial installation and configuration of PKS.</p>
![Pivotal Container Service tile on the Ops Manager installation dashboard](images/pks-tile-orange.png)
<p class="note warning"><strong>WARNING</strong>: When you configure the PKS tile,
do not use spaces in any field entries. This includes spaces between characters as well as
leading and trailing spaces. If you use a space in any field entry, the deployment of PKS fails.</p>
###<a id='azs-networks'></a> Assign AZs and Networks
Perform the following steps:
1. Click **Assign AZs and Networks**.
1. Select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.
<p class="note"><strong>Note</strong>: You must select an additional AZ for balancing other jobs before clicking <strong>Save</strong>, but this selection has no effect in the current version of PKS.</p>
![Assign AZs and Networks pane in Ops Manager](images/azs-networks.png)
1. Under **Network**, select the PKS Management Network linked to the `ls-pks-mgmt` NSX-T logical switch you created in the [Create Networks Page](vsphere-nsxt-om-config.html#create-networks) step of _Configuring Ops Manager on vSphere with NSX-T Integration_. This network provides placement for the PKS API VM.
1. Under **Service Network**, your selection depends on whether you are upgrading from a previous PKS version or installing PKS for the first time.
* If you are deploying PKS with NSX-T for the first time, the **Service Network** field does not apply because PKS creates the service network for you during the installation process. However, the PKS tile requires you to make a selection. Therefore, select the same network you specified in the **Network** field.
* If you are upgrading from a previous PKS version, select the **Service Network** linked to the `ls-pks-service` NSX-T logical switch that is created by PKS during installation. The service network provides network placement for the already existing on-demand Kubernetes cluster service instances created by the PKS broker.
1. Click **Save**.
###<a id='pks-api'></a> PKS API
<%= partial 'pks-api' %>
###<a id='plans'></a> Plans
<%= partial 'plans' %>
###<a id='cloud-provider'></a> Kubernetes Cloud Provider
<%= partial 'cloud-provider' %>
###<a id='syslog'></a> (Optional) Logging
<%= partial 'logging' %>
###<a id='networking'></a> Networking
To configure networking, do the following:
1. Click **Networking**.
1. Under **Container Networking Interface**, select **NSX-T**.
![NSX-T Networking configuration pane in Ops Manager](images/networking-nsx-t.png)
1. For **NSX Manager hostname**, enter the hostname or IP address of your NSX Manager.
1. For **NSX Manager Super User Principal Identity Certificate**, copy and paste the contents of the Principal Identity certificate and its private key that you created in [Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key](generate-nsx-pi-cert.html).
1. For **NSX Manager CA Cert**, copy and paste the contents of the NSX Manager CA certificate you created in [Generating and Registering the NSX Manager Certificate](generate-nsx-ca-cert.html). Use this certificate and key to connect to the NSX Manager.
1. The **Disable SSL certificate verification** checkbox is **not** selected by default. To disable TLS verification, select the checkbox. You might want to disable TLS verification if you did not enter a CA certificate or if your CA certificate is self-signed.
<p class="note"><strong>Note</strong>: The <strong>NSX Manager CA Cert</strong> field and the <strong>Disable SSL certificate verification</strong> option are mutually exclusive settings. If you disable SSL certificate verification, leave the CA certificate field blank. If you enter a certificate in the <strong>NSX Manager CA Cert</strong> field, do not disable SSL certificate verification. If you populate the certificate field and disable certificate validation, the PKS installation fails. If you populate the CA certificate field and later decide to disable SSL certificate verification, you must remove the certificate text from the field.</p>
1. If you are using a NAT deployment topology, leave the **NAT mode** checkbox selected. If you are using a No-NAT topology, clear this checkbox. For more information, see [Deployment Topologies](nsxt-topologies.html).
1. Enter the following IP Block settings:
<img src="images/networking-nsx-t-2.png" alt="NSX-T Networking configuration pane in Ops Manager" width="375">
* **Pods IP Block ID**: Enter the UUID of the IP block to be used for Kubernetes pods. PKS allocates IP addresses for the pods when they are created in Kubernetes. Each time a namespace is created in Kubernetes, a subnet from this IP block is allocated. Each subnet is created as a /24, so a maximum of 256 pods can be created per namespace.
* **Nodes IP Block ID**: Enter the UUID of the IP block to be used for Kubernetes nodes. PKS allocates IP addresses for the nodes when they are created in Kubernetes. The node networks are created in a separate IP address space from the pod networks. Each subnet is created as a /24, so a maximum of 256 nodes can be created per cluster.
For more information, including sizes and the IP blocks to avoid using, see [Plan IP Blocks](nsxt-prepare-env.html#plan-ip-blocks) in _Preparing NSX-T Before Deploying PKS_.
1. For **T0 Router ID**, enter the UUID of the `t0-pks` T0 router. You can locate this value in the router overview in the NSX-T UI, or look it up with the NSX Manager API as shown in the sketch after this list.
1. For **Floating IP Pool ID**, enter the `ip-pool-vips` ID that you created for load balancer VIPs. For more information, see [Plan Network CIDRs](nsxt-prepare-env.html#plan-cidrs). PKS uses the floating IP pool to allocate IP addresses to the load balancers created for each of the clusters. The load balancer routes the API requests to the master nodes and the data plane.
1. For **Nodes DNS**, enter one or more Domain Name Servers used by the Kubernetes nodes.
1. For **vSphere Cluster Names**, enter a comma-separated list of the vSphere clusters where you will deploy Kubernetes clusters.
The NSX-T pre-check errand uses this field to verify that the hosts from the specified clusters are available in NSX-T. You can specify clusters in this format: `cluster1,cluster2,cluster3`.
1. (Optional) Configure a global proxy for all outgoing HTTP and HTTPS traffic from your Kubernetes clusters and the PKS API server. See [Using Proxies with PKS on NSX-T](proxies.html) for instructions on how to enable a proxy.
1. Under **Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent)**, ignore the **Enable outbound internet access** checkbox.
1. Click **Save**.
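If you prefer to look up the NSX-T object UUIDs referenced above (the pods and nodes IP blocks, the floating IP pool, and the T0 router) from the command line rather than the NSX Manager UI, you can query the NSX Manager API. The following is a minimal sketch, not an exact procedure: it assumes the NSX Manager is reachable at `NSX-MANAGER-IP`, that you authenticate with the `admin` account, and that skipping certificate verification with `-k` is acceptable in your environment.

```
# List IP blocks and note the "id" values of the blocks you created for pods and nodes.
curl -k -u 'admin:NSX-MANAGER-PASSWORD' https://NSX-MANAGER-IP/api/v1/pools/ip-blocks

# List IP pools and note the "id" value of the ip-pool-vips floating IP pool.
curl -k -u 'admin:NSX-MANAGER-PASSWORD' https://NSX-MANAGER-IP/api/v1/pools/ip-pools

# List logical routers and note the "id" value of the t0-pks T0 router.
curl -k -u 'admin:NSX-MANAGER-PASSWORD' https://NSX-MANAGER-IP/api/v1/logical-routers
```

Copy the `id` values from the JSON responses into the corresponding fields in the tile.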
###<a id='uaa'></a> UAA
<%= partial 'uaa' %>
###<a id='monitoring'></a> (Optional) Monitoring
<%= partial 'monitoring' %>
###<a id='usage'></a> Usage Data
<%= partial 'usage-data' %>
###<a id='errands'></a> Errands
Errands are scripts that run at designated points during an installation.
To configure when post-deploy and pre-delete errands for PKS are run, make a selection in the dropdown next to the errand.
<p class="note warning"><strong>WARNING</strong>: You must enable the NSX-T Validation errand to verify and tag required NSX-T objects.</p>
![Errand configuration pane](images/nsxt/nsxt-validation-errand-ON.png)
For more information about errands and their configuration state, see [Managing Errands in Ops Manager](https://docs.pivotal.io/pivotalcf/customizing/managing_errands.html).
<p class="note warning"><strong>WARNING</strong>: Because PKS uses floating stemcells, updating the PKS tile with a new stemcell triggers the rolling of every VM in each cluster. Also, updating other product tiles in your deployment with a new stemcell causes the PKS tile to roll VMs. This rolling is enabled by the <strong>Upgrade all clusters errand</strong>. We recommend that you keep this errand turned on because automatic rolling of VMs ensures that all deployed cluster VMs are patched. However, automatic rolling can cause downtime in your deployment.</p>
###<a id='resource-config'></a> Resource Config
To modify the resource usage of PKS, click **Resource Config** and edit the **Pivotal Container Service** job.
![Resource pane configuration](images/resources.png)
<p class="note"><strong>Note</strong>: If you experience timeouts or slowness when interacting with the PKS API, select a <strong>VM Type</strong> with greater CPU and memory resources for the <strong>Pivotal Container Service</strong> job.</p>
## <a id='apply-changes'></a> Step 3: Apply Changes
After configuring the PKS tile, follow the steps below to deploy the tile:
<%= partial 'apply-changes' %>
## <a id='clis'></a> Step 4: Install the PKS and Kubernetes CLIs
<%= partial 'install-cli' %>
## <a id='retrieve-endpoint'></a>Step 5: Share the PKS API Endpoint
You must share the PKS API endpoint to allow your organization to use the API to create, update, and delete clusters.
For more information, see [Creating Clusters](create-cluster.html).
1. When the installation is complete, retrieve the PKS endpoint by performing the following steps:
    1. From the Ops Manager Installation Dashboard, click the **Pivotal Container Service** tile.
    1. Click the **Status** tab and record the IP address assigned to the `Pivotal Container Service` job.
1. Create a DNAT rule on the `t1-pks-mgmt` T1 router to map an external IP address from the **PKS MANAGEMENT CIDR** to the PKS endpoint. For example, create a DNAT rule that maps `10.172.1.4` to `172.31.0.4`, where `172.31.0.4` is the PKS endpoint IP address on the `ls-pks-mgmt` NSX-T logical switch. For one way to create this rule with the NSX Manager API, see the sketch after the note below.
<p class="note"><strong>Note</strong>: Ensure that you have no overlapping NAT rules. If your NAT rules overlap, you cannot reach Ops Manager from VMs in the vCenter network.</p>
Developers should use the DNAT IP address when logging in with the PKS CLI. For more information, see [Using PKS](using.html).
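For example, after you complete the remaining steps to configure PKS API access and authentication, a developer can log in with the PKS CLI against the DNAT IP address (or a DNS name that resolves to it). The username, password, and certificate path below are placeholders.

```
# Log in to the PKS API through the external (DNAT) address.
pks login -a 10.172.1.4 -u USERNAME -p PASSWORD --ca-cert /PATH/TO/CERT

# For a quick test in a lab environment, you can skip certificate validation instead.
pks login -a 10.172.1.4 -u USERNAME -p PASSWORD -k
```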
## <a id='api'></a> Step 6: Configure PKS API Access
Follow the procedures in [Configuring PKS API Access](configure-api.html).
## <a id='auth'></a> Step 7: Configure Authentication for PKS
<%= partial 'configure-auth' %>
##<a id='next-steps'></a> Next Steps
After installing PKS on vSphere with NSX-T integration, you may want to do one or more of the following:
* <%= partial 'harbor' %>
* Create your first PKS cluster, as sketched below. For more information, see [Creating Clusters](create-cluster.html).
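For example, a minimal cluster creation with the PKS CLI looks like the following; the cluster name, external hostname, and plan name are placeholders, and the plan must be one you enabled in the **Plans** pane of the tile.

```
# Create a cluster using one of the plans configured in the PKS tile.
pks create-cluster my-cluster --external-hostname my-cluster.example.com --plan small

# Check provisioning status.
pks cluster my-cluster
```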