###########
# FASTRAX #
###########
osadm new-project demo --display-name="OpenShift 3 Training" --description="OpenShift Training Project" --node-selector='region=primary' --admin='andrew'
oc new-project <projectname>
oadm router --replicas=2 --credentials='/etc/openshift/master/openshift-router.kubeconfig' \
--images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
--selector='region=infra'
oadm registry --config=/etc/openshift/master/admin.kubeconfig \
--credentials=/etc/openshift/master/openshift-registry.kubeconfig \
--images='registry.access.redhat.com/openshift3/ose-${component}:v3.0.0.0' \
--selector='region=infra'
oc login -u andrew --server=https://ose3-master.example.com:8443
oc get pods
oc create -f hello-pod.json
oc get routes
oadm policy add-role-to-user admin andrew -n <projectname>
oc new-app https://github.com/openshift/simple-openshift-sinatra-sti.git -o json | tee ~/simple-sinatra.json
oc create -f ~/simple-sinatra.json
for i in imagerepository buildconfig deploymentconfig service; do \
  echo $i; oc get $i; echo -e "\n\n"; done
oc get builds
oc build-logs sin-simple-openshift-sinatra-sti-1
https://github.com/openshift/training
https://blog.openshift.com/openshift-v3-deep-dive-docker-kubernetes/
https://blog.openshift.com/builds-deployments-services-v3/
https://docs.docker.com/introduction/understanding-docker/
##################
# IMPLEMENTATION #
##################
Quick Install
Lets you use interactive CLI utility to install OpenShift across set of hosts
Installer made available by installing utility package (atomic-openshift-utils) on provisioning host
https://install.openshift.com
Uses Ansible playbooks in background
Does not assume familiarity with Ansible
Advanced Install
For complex environments requiring deeper customization of installation and maintenance
Uses Ansible playbooks
Assumes familiarity with Ansible
Prerequisites
System requirements
Set up DNS
Prepare host
OpenShift Enterprise installation
Download and run installation utility
Post-install tasks
Deploy integrated Docker registry
Deploy HAProxy router
Populate OpenShift installation with image streams and templates
Configure authentication and create project for users
Set up and configure NFS server for use with persistent volumes
DNS Setup
To make environment accessible externally, create wildcard DNS entry
Points to node hosting Default Router Container
Resolves to OpenShift router IP address
In lab and examples, this is infranode00 server
If environment uses multiple routers (HAProxy instances), use external load balancer or round-robin setting
Example: Create wildcard DNS entry for cloudapps in DNS server
Has low TTL
Points to public IP address of host where the router is deployed:
*.cloudapps.example.com. 300 IN A 85.1.3.5
Overview
To prepare your hosts for OpenShift Enterprise 3:
Install Red Hat Enterprise Linux 7.2
Register hosts with subscription-manager
Manage base packages:
git
net-tools
bind-utils
iptables-services
Manage services:
Disable firewalld
Enable iptables-services
Install Docker 1.8.2 or later
Make sure master does not require password for communication
Password-Less Communication
Ensure installer has password-less access to hosts
Ansible requires user with access to all hosts
To run installer as non-root user, configure password-less sudo rights on each destination host
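A minimal way to set this up from the provisioning host, for example (host names are placeholders; adjust the remote user if not running as root):
$ ssh-keygen -f ~/.ssh/id_rsa -N ''
$ for host in master00.example.com node00.example.com node01.example.com; do \
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host; done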
Firewall:
  Node to Node
    4789 (UDP)   Required between nodes for SDN communication between pods on separate hosts
  Node to Master
    53           Provides DNS services within the environment (not DNS for external access)
    8443         Provides access to the API
  Master to Node
    10250        Endpoint for master communication with nodes
  Master to Master
    4789 (UDP)   Required between nodes for SDN communication between pods on separate hosts
    53           Provides internal DNS services
    2379         Used for standalone etcd (clustered) to accept changes in state
    2380         etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered)
    4001         Used for embedded etcd (non-clustered) to accept changes in state
  External to Master
    8443         CLI and IDE plug-ins communicate via REST to this port; web console runs on this port
  External to Node (or nodes) hosting Default Router (HAProxy) container
    80, 443      Ports opened and bound to Default Router container; proxy communication from external world to pods (containers) internally
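Illustrative only (the Ansible installer normally manages these rules): with iptables-services enabled, a port such as the master API port can be opened manually along these lines:
# iptables -A INPUT -p tcp --dport 8443 -j ACCEPT
# service iptables save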
Sample topology:
Infrastructure nodes running in DMZ
Application hosting nodes, master, other supporting infrastructure running in more secure network
yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
yum update -y
yum install docker
Edit /etc/sysconfig/docker and add --insecure-registry 172.30.0.0/16 to OPTIONS parameter (OPTIONS=--selinux-enabled --insecure-registry 172.30.0.0/16)
Docker Storage Configuration
Docker default loopback storage mechanism:
Not supported for production
Appropriate for proof of concept environments
For production environments:
Create thin-pool logical volume
Reconfigure Docker to use volume
To do this use docker-storage-setup script after installing but before using Docker
Script reads configuration options from /etc/sysconfig/docker-storage-setup
Storage Options
When configuring docker-storage-setup, examine available options
Before starting docker-storage-setup, reinitialize Docker:
# systemctl stop docker
# rm -rf /var/lib/docker/*
Create thin-pool volume from free space in volume group where root filesystem resides:
Requires no configuration
# docker-storage-setup
Use existing volume group to create thin-pool:
Example: docker-vg
# cat /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
# docker-storage-setup
Storage Options: Example
Use unpartitioned block device to create new volume group and thin-pool:
Example: Use /dev/vdc device to create docker-vg:
# cat /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
SETUP_LVM_THIN_POOL=yes
# docker-storage-setup
Verify configuration:
Should have dm.thinpooldev value in /etc/sysconfig/docker-storage and docker-pool device
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move
docker-pool docker-vg twi-a-tz-- 48.95g 0.00 0.44
# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool
Restart Docker daemon
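For example, using systemd, then confirming the thin pool is in use:
# systemctl restart docker
# docker info | grep -i pool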
Install OpenShift utils package that includes installer:
# yum -y install atomic-openshift-utils
Run following on host that has SSH access to intended master and nodes:
$ atomic-openshift-installer install
Follow onscreen instructions to install OpenShift Enterprise
Installer asks for hostnames or IPs of masters and nodes and configures them accordingly
Configuration file with all information provided is saved in ~/.config/openshift/installer.cfg.yml
Can use this as answer file
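For example, a later unattended run can reuse the saved answers (the -u flag and default config path are assumptions; verify with atomic-openshift-installer --help):
$ atomic-openshift-installer -u install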
After installation, need to label nodes
Lets scheduler use logic defined in scheduler.json when provisioning pods
OpenShift Enterprise 2.0 introduced regions and zones
Let organizations provide topologies for application resiliency
Apps spread throughout zones within region
Can make different regions accessible to users
OpenShift Enterprise 3 topology-agnostic
Provides advanced controls for implementing any topologies
Example: Use regions and zones
Other options: Prod and Dev, Secure and Insecure, Rack and Power
Labels on nodes handle assignments of regions and zones at node level
# oc label node master00-$guid.oslab.opentlc.com region="infra" zone="na"
# oc label node infranode00-$guid.oslab.opentlc.com region="infra" zone="infranodes"
# oc label node node00-$guid.oslab.opentlc.com region="primary" zone="east"
# oc label node node01-$guid.oslab.opentlc.com region="primary" zone="west"
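To confirm the assignments, list the nodes with their labels:
# oc get nodes --show-labels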
Registry Container
OpenShift Enterprise:
Builds Docker images from source code
Deploys them
Manages lifecycle
To enable this, deploy Docker registry in OpenShift Enterprise environment
OpenShift Enterprise runs registry in pod on node, just like any other workload
Deploying registry creates service and deployment configuration
Both called docker-registry
After deployment, pod created with name similar to docker-registry-1-cpty9
To control where registry is deployed, use --selector flag to specify desired target
Deploying Registry
Environment includes infra region and dedicated infranode00 host
Good practice for highly scalable environment
Use better-performing servers for nodes or place them in DMZ for external access only
To deploy registry anywhere in environment:
$ oadm registry --config=admin.kubeconfig \
--credentials=openshift-registry.kubeconfig
To ensure registry pod is hosted in infra region only:
$ oadm registry --config=admin.kubeconfig \
--credentials=openshift-registry.kubeconfig \
--selector='region=infra'
NFS Storage for the Registry
Registry stores Docker images, metadata
If you deploy a pod with registry:
Uses ephemeral volume
Destroyed if pod exits
Images built or pushed into registry disappear
For production:
Use persistent storage
Use PersistentVolume and PersistentVolumeClaim objects for storage for registry
For non-production:
Other options exist
Example: --mount-host:
$ oadm registry --config=admin.kubeconfig \
--credentials=openshift-registry.kubeconfig \
--selector='region=infra' \
--mount-host host:/export/dirname
Mounts directory from node on which registry container lives
If you scale up docker-registry deployment configuration, registry pods and containers might run on different nodes
Default Router (aka Default HA-Proxy Router, other names):
Modified deployment of HAProxy
Entry point for traffic destined for services in OpenShift Enterprise installation
HAProxy-based router implementation provided as default template router plug-in
Uses openshift3/ose-haproxy-router image to run HAProxy instance alongside the router plug-in
Supports HTTP(S) traffic and TLS-enabled traffic via SNI only
Hosted inside OpenShift Enterprise
Essentially a proxy
Default router’s pod listens on host network interface on ports 80 and 443
Default router’s container listens on external/public ports
Router proxies external requests for route names to IPs of actual pods identified by service associated with route
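A route object ties a name under the wildcard DNS domain to a service; a minimal sketch (host and names are placeholders):
apiVersion: v1
kind: Route
metadata:
  name: hello-route
spec:
  host: hello.cloudapps.example.com
  to:
    kind: Service
    name: hello-service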
Can populate OpenShift Enterprise installation with Red Hat-provided image streams and templates
Make it easy to create new applications
Template: Set of resources you can customize and process to produce configuration
Defines list of parameters you can modify for consumption by containers
Image Stream:
Comprises one or more Docker images identified by tags
Presents single virtual view of related images
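A minimal image stream definition, for illustration (the image reference is an example):
apiVersion: v1
kind: ImageStream
metadata:
  name: ruby
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: registry.access.redhat.com/openshift3/ruby-20-rhel7:latest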
Image Streams
xPaaS middleware image streams provide images for:
Red Hat JBoss Enterprise Application Platform
Red Hat JBoss Web Server
Red Hat JBoss A-MQ
Can use images to build applications for those platforms
To create or delete core set of image streams that use Red Hat Enterprise Linux 7-based images:
oc create|delete -f \
examples/image-streams/image-streams-rhel7.json \
-n openshift
To create image streams for xPaaS middleware images:
$ oc create|delete -f \
examples/xpaas-streams/jboss-image-streams.json \
-n openshift
Database Service Templates
Database service templates make it easy to run database instance
Other components can use
Two templates provided for each database
To create core set of database templates:
$ oc create -f \
examples/db-templates -n openshift
Can easily instantiate templates after creating them
Gives quick access to database deployment
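Example of instantiating one of them (template and parameter names vary by release; check with oc get templates -n openshift):
$ oc new-app --template=mysql-ephemeral \
    -p MYSQL_USER=user -p MYSQL_PASSWORD=passwd -p MYSQL_DATABASE=mydb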
QuickStart Templates
Define full set of objects for running application:
Build configurations: Build application from source located in GitHub public repository
Deployment configurations: Deploy application image after it is built
Services: Provide internal load balancing for application pods
Routes: Provide external access and load balancing to application
To create core QuickStart templates:
$ oc create|delete -f \
examples/quickstart-templates -n openshift
Persistent Volume Object Definition
{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": {
"name": "pv0001"
},
"spec": {
"capacity": {
"storage": "5Gi"
},
"accessModes": [ "ReadWriteOnce" ],
"nfs": {
"path": "/tmp",
"server": "172.17.0.2"
},
"persistentVolumeReclaimPolicy": "Recycle"
}
}
To create a persistent volume that can be claimed by a pod, you must first create a PersistentVolume object
After PersistentVolume is created, a PersistentVolumeClaim must be created in the pod's project to bind the volume so other pods and projects do not try to use it
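A minimal claim to pair with the volume above (name and size are placeholders):
{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "nfs-claim"
  },
  "spec": {
    "accessModes": [ "ReadWriteOnce" ],
    "resources": {
      "requests": {
        "storage": "5Gi"
      }
    }
  }
}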
Volume Security
PersistentVolume objects are cluster resources; claims against them are made in context of project
Users request storage with PersistentVolumeClaim object in their project
Claim lives only in user's namespace
Can be referenced by pod within same namespace
Attempt to access persistent volume across projects causes pod to fail
NFS volume must be mountable by all nodes in cluster
SELinux and NFS Export Settings
Default: SELinux does not allow writing from pod to remote NFS server
NFS volume mounts correctly but is read-only
To enable writing in SELinux on each node:
# setsebool -P virt_use_nfs 1
Each exported volume on NFS server should conform to following:
Set each export option in /etc/exports as follows:
/example_fs *(rw,all_squash)
Each export must be owned by nfsnobody and have following permissions:
# chown -R nfsnobody:nfsnobody /example_fs
# chmod 777 /example_fs
Resource Reclamation
OpenShift Enterprise implements Kubernetes Recyclable plug-in interface
Reclamation tasks based on policies set by persistentVolumeReclaimPolicy key in PersistentVolume object definition
Can reclaim volume after it is released from claim
Can set persistentVolumeReclaimPolicy to Retain or Recycle:
Retain: Volumes not deleted
Default setting for key
Recycle: Volumes scrubbed after being released from claim
Once recycled, can bind NFS volume to new claim
Automation
Can provision OpenShift Enterprise clusters with persistent storage using NFS:
Use disk partitions to enforce storage quotas
Enforce security by restricting volumes to namespace that has claim to them
Configure reclamation of discarded resources for each persistent volume
Can use scripts to automate these tasks
See sample Ansible playbook: https://github.com/openshift/openshift-ansible/tree/master/roles/kube_nfs_volumes
Pods Overview
OpenShift Enterprise leverages Kubernetes concept of pod
Pod: One or more containers deployed together on host
Smallest compute unit you can define, deploy, manage
Pods are the rough equivalent of OpenShift Enterprise 2 gears
Each pod allocated own internal IP address, owns entire port range
Containers within pods can share local storage and networking
Pod Changes and Management
OpenShift Enterprise treats pods as static objects
Cannot change pod definition while running
To implement changes, OpenShift Enterprise:
Terminates existing pod
Recreates it with modified configuration, base image(s), or both
Pods are expendable, do not maintain state when recreated
Should usually be managed by higher-level controllers rather than directly by users
Pods Lifecycle
Lifecycle:
Pod is defined
Assigned to run on node
Runs until containers exit or pods are removed
Pods Definition File/Manifest
apiVersion: v1
kind: Pod
metadata:
annotations: { ... }
labels:
deployment: example-name-1
deploymentconfig: example-name
example-name: default
generateName: example-name-1-
spec:
containers:
- env:
- name: OPENSHIFT_CA_DATA
value: ...
- name: OPENSHIFT_CERT_DATA
value: ...
- name: OPENSHIFT_INSECURE
value: "false"
- name: OPENSHIFT_KEY_DATA
value: ...
- name: OPENSHIFT_MASTER
value: https://master.example.com:8443
image: openshift3/example-image:v1.1.0.6
imagePullPolicy: IfNotPresent
name: registry
ports:
- containerPort: 5000
protocol: TCP
resources: {}
securityContext: { ... }
volumeMounts:
- mountPath: /registry
name: registry-storage
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-br6yz
readOnly: true
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: default-dockercfg-at06w
restartPolicy: Always
serviceAccount: default
volumes:
- emptyDir: {}
name: registry-storage
- name: default-token-br6yz
secret:
secretName: default-token-br6yz
Services
Kubernetes service serves as internal load balancer
Identifies set of replicated pods
Proxies connections it receives to identified pods
Can add or remove backing pods to or from service while service remains consistently available
Lets anything depending on service refer to it at consistent internal address
Assign services IP address and port pair
Proxy to appropriate backing pod when accessed
Service uses label selector to find running containers that provide certain network service on certain port
Can access service by IP address and DNS name
Name created and resolved by local DNS server on master
apiVersion: v1
kind: Service
metadata:
name: example-name
spec:
selector:
example-label: example-value
portalIP: 172.30.136.123
ports:
- nodePort: 0
port: 5000
protocol: TCP
targetPort: 5000
Labels
Use labels to organize, group, choose API objects
Example: Tag pods with labels so services can use label selectors to identify pods to which they proxy
Lets services reference groups of pods
Can treat pods with different Docker containers as related entities
Most objects can include labels in metadata
Can use labels to group arbitrarily related objects
Labels: Examples
Labels = Simple key/value pairs:
labels:
key1: value1
key2: value2
Scenario:
Pod consisting of nginx Docker container, with role=webserver label
Pod consisting of Apache httpd Docker container, also with role=webserver label
Service or replication controller defined to use pods with role=webserver label treats both pods as part of same group
Example: To remove all components with the label app=mytest:
# oc delete all -l app=mytest
The scheduler:
Determines placement of new pods onto nodes within OpenShift Enterprise cluster
Reads pod data and tries to find node that is good fit
Is independent, standalone, pluggable solution
Does not modify pod, merely creates binding that ties pod to node
Generic Scheduler
OpenShift Enterprise provides generic scheduler
Default scheduling engine
Selects node to host pod in three-step operation:
Filter nodes based on specified constraints/requirements
Runs nodes through list of filter functions called predicates
Prioritize qualifying nodes
Pass each node through series of priority functions
Assign node score between 0 - 10
0 indicates bad fit, 10 indicates good fit
Select the best fit node
Sort nodes based on scores
Select node with highest score to host pod
If multiple nodes have same high score, select one at random
Priority functions equally weighted by default; more important priorities can receive higher weight
Scheduler Policy
Selection of predicates and priority functions defines scheduler policy
Administrators can provide JSON file that specifies predicates and priority functions to configure scheduler
Overrides default scheduler policy
If default predicates or priority functions required, must specify them in file
Can specify path to scheduler policy file in master configuration file
Default configuration applied if no scheduler policy file exists
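The path is set in master-config.yaml; a fragment along these lines (the file path is an example):
kubernetesMasterConfig:
  schedulerConfigFile: /etc/openshift/master/scheduler.json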
Default Scheduler Policy
Includes following predicates:
PodFitsPorts
PodFitsResources
NoDiskConflict
MatchNodeSelector
HostName
Includes following priority functions:
LeastRequestedPriority
BalancedResourceAllocation
ServiceSpreadingPriority
Each has weight of 1 applied
Available Predicates
OpenShift Enterprise 3 provides predicates out of the box
Can customize by providing parameters
Can combine to provide additional node filtering
Two kinds of predicates: static and configurable
Static Predicates
Fixed names and configuration parameters that users cannot change
Kubernetes provides following out of box:
PodFitsPorts - Deems node fit for hosting pod based on absence of port conflicts
PodFitsResources - Determines fit based on resource availability
Nodes declare resource capacities, pods specify what resources they require
Fit based on requested, rather than used, resources
NoDiskConflict - Determines fit based on nonconflicting disk volumes
Evaluates if pod can fit based on volumes requested and those already mounted
MatchNodeSelector - Determines fit based on node selector query defined in pod
HostName - Determines fit based on presence of host parameter and string match with host name
Configurable Predicates
User can configure to tweak function
Can give them user-defined names
Identified by arguments they take
Can:
Configure predicates of same type with different parameters
Combine them by applying different user-defined names
Configurable Predicates: ServiceAffinity and LabelsPresence
ServiceAffinity: Filters out nodes that do not belong to topological level defined by provided labels
Takes in list of labels
Ensures affinity within nodes with same label values for pods belonging to same service
If pod specifies label value in NodeSelector:
Pod scheduled on nodes matching labels only
{"name" : "Zone", "argument" : {"serviceAffinity" : {"labels" : ["zone"]}}}
LabelsPresence: Checks whether node has certain label defined, regardless of value
{"name" : "ZoneRequired", "argument" : {"labels" : ["retiring"], "presence" : false}}
Available Priority Functions
Can specify custom set of priority functions to configure scheduler
OpenShift Enterprise provides several priority functions out of the box
Can customize some priority functions by providing parameters
Can combine priority functions and give different weights to influence prioritization results
Weight required, must be greater than 0
Static Priority Functions
Do not take configuration parameters or inputs from user
Specified in scheduler configuration using predefined names and weight calculations
LeastRequestedPriority - Favors nodes with fewer requested resources
Calculates percentage of memory and CPU requested by pods scheduled on node
Prioritizes nodes with highest available or remaining capacity
BalancedResourceAllocation - Favors nodes with balanced resource usage rate
Calculates difference between consumed CPU and memory as fraction of capacity
Prioritizes nodes with smallest difference
Should always use with LeastRequestedPriority
ServiceSpreadingPriority - Spreads pods by minimizing number of pods belonging to same service onto same machine
EqualPriority - Gives equal weight of 1 to all nodes
Not required/recommended outside of testing.
Configurable Priority Functions
User can configure by providing certain parameters.
Can give them user-defined name
Identified by the argument they take
ServiceAntiAffinity: Takes label
Ensures spread of pods belonging to same service across group of nodes based on label values
Gives same score to all nodes with same value for specified label
Gives higher score to nodes within group with least concentration of pods
LabelPreference: Prefers either nodes that have particular label defined or those that do not, regardless of value
Use Cases
Important use case for scheduling within OpenShift Enterprise: Support affinity and anti-affinity policies
OpenShift Enterprise can implement multiple infrastructure topological levels
Administrators can define multiple topological levels for infrastructure (nodes)
To do this, specify labels on nodes
Example: region = r1, zone = z1, rack = s1
Label names have no particular meaning
Administrators can name infrastructure levels anything
Examples: City, building, room
Administrators can define any number of levels for infrastructure topology
Three levels usually adequate
Example: regions → zones → racks
Administrators can specify combination of affinity/anti-affinity rules at each level
Affinity
Administrators can configure scheduler to specify affinity at any topological level or multiple levels
Affinity indicates all pods belonging to same service are scheduled onto nodes belonging to same level
Handles application latency requirements by letting administrators ensure peer pods do not end up being too geographically separated
If no node available within same affinity group to host pod, pod not scheduled
Anti-Affinity
Administrators can configure scheduler to specify anti-affinity at any topological level or multiple levels
Anti-affinity (or spread) indicates that all pods belonging to same service are spread across nodes belonging to that level
Ensures that application is well spread for high availability
Scheduler tries to balance service pods evenly across applicable nodes
Sample Policy Configuration
{
"kind" : "Policy",
"version" : "v1",
"predicates" : [
{"name" : "PodFitsPorts"},
{"name" : "PodFitsResources"},
{"name" : "NoDiskConflict"},
{"name" : "MatchNodeSelector"},
{"name" : "HostName"}
],
"priorities" : [
{"name" : "LeastRequestedPriority", "weight" : 1},
{"name" : "BalancedResourceAllocation", "weight" : 1},
{"name" : "ServiceSpreadingPriority", "weight" : 1}
]
}
Topology Example 1
Example: Three topological levels
Levels: region (affinity) → zone (affinity) → rack (anti-affinity)
{
"kind" : "Policy",
"version" : "v1",
"predicates" : [
...
{"name" : "RegionZoneAffinity", "argument" : {"serviceAffinity" : {"labels" : ["region", "zone"]}}}
],
"priorities" : [
...
{"name" : "RackSpread", "weight" : 1, "argument" : {"serviceAntiAffinity" : {"label" : "rack"}}}
]
}
Topology Example 2
Example: Three topological levels
Levels: city (affinity) → building (anti-affinity) → room (anti-affinity)
{
"kind" : "Policy",
"version" : "v1",
"predicates" : [
...
{"name" : "CityAffinity", "argument" : {"serviceAffinity" : {"labels" : ["city"]}}}
],
"priorities" : [
...
{"name" : "BuildingSpread", "weight" : 1, "argument" : {"serviceAntiAffinity" : {"label" : "building"}}},
{"name" : "RoomSpread", "weight" : 1, "argument" : {"serviceAntiAffinity" : {"label" : "room"}}}
]
}
Topology Example 3
Only use nodes with region label defined
Prefer nodes with zone label defined
{
"kind" : "Policy",
"version" : "v1",
"predicates" : [
...
{"name" : "RequireRegion", "argument" : {"labelsPresence" : {"labels" : ["region"], "presence" : true}}}
],
"priorities" : [
...
{"name" : "ZonePreferred", "weight" : 1, "argument" : {"labelPreference" : {"label" : "zone", "presence" : true}}}
]
}
Builds Overview
Build: Process of transforming input parameters into resulting object
Most often used to transform source code into runnable image
BuildConfig object: Definition of entire build process
OpenShift Enterprise build system provides extensible support for build strategies
Based on selectable types specified in build API
Three build strategies available:
Docker build
S2I build
Custom build
Docker and S2I builds supported by default
Builds Overview: Resulting Objects
Resulting object of build depends on type of builder used
Docker and S2I builds: Resulting objects are runnable images
Custom builds: Resulting objects are whatever author of builder image specifies
For list of build commands, see Developer’s Guide: https://docs.openshift.com/enterprise/latest/architecture/core_concepts/builds_and_image_streams.html
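For orientation, a minimal S2I (Source) build configuration might look like this (repository URL from the example earlier in these notes; image stream names are placeholders):
apiVersion: v1
kind: BuildConfig
metadata:
  name: simple-sinatra
spec:
  source:
    type: Git
    git:
      uri: https://github.com/openshift/simple-openshift-sinatra-sti.git
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: ruby:latest
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: simple-sinatra:latest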
Builds and Image Streams
Docker Build