Kubernetes K8S: Scheduling Pods to Fixed Nodes with nodeName and nodeSelector, Explained with Examples
Host Configuration Plan
Hostname | OS Version | Specs (CPU/Mem/Disk) | Internal IP | External IP (simulated) |
---|---|---|---|---|
k8s-master | CentOS7.7 | 2C/4G/20G | 172.16.1.110 | 10.0.0.110 |
k8s-node01 | CentOS7.7 | 2C/4G/20G | 172.16.1.111 | 10.0.0.111 |
k8s-node02 | CentOS7.7 | 2C/4G/20G | 172.16.1.112 | 10.0.0.112 |
nodeName Scheduling
nodeName is the simplest form of node selection constraint, but because of its limitations it is rarely used. nodeName is a field of the PodSpec.
pod.spec.nodeName binds a Pod directly to the named node, bypassing the scheduler's scheduling policy entirely; the match is mandatory. Because the scheduler is skipped, the Pod can even land on a node in spite of its taints. A minimal sketch follows the list below.
Some of the limitations of using nodeName to select nodes are:
- If the named node does not exist, the Pod will not run, and in some cases it may be automatically deleted.
- If the named node does not have enough resources to accommodate the Pod, the Pod will fail and the reason will be indicated, for example OutOfmemory or OutOfcpu.
- Node names in cloud environments are not always predictable or stable.
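Before the Deployment-based example below, here is a minimal bare-Pod sketch of the field itself. The Pod name and the target node k8s-node01 are purely illustrative; the image is the one used throughout this article.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodename-demo            # illustrative name, not part of the examples below
spec:
  # Bind this Pod directly to k8s-node01; the scheduler is bypassed entirely
  nodeName: k8s-node01
  containers:
  - name: myapp-pod
    image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
    ports:
    - containerPort: 80
```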
nodeName Example
Get the current node information
```
[root@k8s-master scheduler]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    master   42d   v1.17.4   172.16.1.110   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8
k8s-node01   Ready    <none>   42d   v1.17.4   172.16.1.111   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8
k8s-node02   Ready    <none>   42d   v1.17.4   172.16.1.112   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8
```
When the node named by nodeName exists
The YAML file to run
```
[root@k8s-master scheduler]# pwd
/root/k8s_practice/scheduler
[root@k8s-master scheduler]# cat scheduler_nodeName.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler-nodename-deploy
  labels:
    app: nodename-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      # Run on the specified node
      nodeName: k8s-master
```
Apply the YAML file and check the results
```
[root@k8s-master scheduler]# kubectl apply -f scheduler_nodeName.yaml
deployment.apps/scheduler-nodename-deploy created
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get deploy -o wide
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodename-deploy   0/5     5            0           6s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get rs -o wide
NAME                                  DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodename-deploy-d5c9574bd   5         5         5       15s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=d5c9574bd
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get pod -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
scheduler-nodename-deploy-d5c9574bd-6l9d8   1/1     Running   0          23s   10.244.0.123   k8s-master   <none>           <none>
scheduler-nodename-deploy-d5c9574bd-c82cc   1/1     Running   0          23s   10.244.0.119   k8s-master   <none>           <none>
scheduler-nodename-deploy-d5c9574bd-dkkjg   1/1     Running   0          23s   10.244.0.122   k8s-master   <none>           <none>
scheduler-nodename-deploy-d5c9574bd-hcn77   1/1     Running   0          23s   10.244.0.121   k8s-master   <none>           <none>
scheduler-nodename-deploy-d5c9574bd-zstjx   1/1     Running   0          23s   10.244.0.120   k8s-master   <none>           <none>
```
As shown above, nodeName: k8s-master in the YAML file took effect and all Pods were scheduled onto the k8s-master node. Had it been nodeName: k8s-node02 instead, the Pods would have been placed directly on k8s-node02.
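As a quick cross-check (a hedged sketch using the node from the example above), a field selector lists only the Pods bound to a given node; field selectors on spec.nodeName are supported for Pods:

```bash
# List only the Pods whose spec.nodeName is k8s-master
kubectl get pods --field-selector spec.nodeName=k8s-master -o wide
```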
When the node named by nodeName does not exist
The YAML file to run
```
[root@k8s-master scheduler]# pwd
/root/k8s_practice/scheduler
[root@k8s-master scheduler]# cat scheduler_nodeName_02.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler-nodename-deploy
  labels:
    app: nodename-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      # Run on the specified node; this node does not exist
      nodeName: k8s-node08
```
Apply the YAML file and check the results
```
[root@k8s-master scheduler]# kubectl apply -f scheduler_nodeName_02.yaml
deployment.apps/scheduler-nodename-deploy created
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get deploy -o wide
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodename-deploy   0/5     5            0           4s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get rs -o wide
NAME                                   DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodename-deploy-75944bdc5d   5         5         0       9s    myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=75944bdc5d
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get pod -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP       NODE         NOMINATED NODE   READINESS GATES
scheduler-nodename-deploy-75944bdc5d-c8f5d   0/1     Pending   0          13s   <none>   k8s-node08   <none>           <none>
scheduler-nodename-deploy-75944bdc5d-hfdlv   0/1     Pending   0          13s   <none>   k8s-node08   <none>           <none>
scheduler-nodename-deploy-75944bdc5d-q9qgt   0/1     Pending   0          13s   <none>   k8s-node08   <none>           <none>
scheduler-nodename-deploy-75944bdc5d-q9zl7   0/1     Pending   0          13s   <none>   k8s-node08   <none>           <none>
scheduler-nodename-deploy-75944bdc5d-wxsnv   0/1     Pending   0          13s   <none>   k8s-node08   <none>           <none>
```
As shown above, if the named node does not exist, the Pods never run and remain stuck in the Pending state.
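To dig into a stuck Pod, kubectl describe shows its status (a hedged sketch; the Pod name is simply one of the replicas in the output above). Since nodeName bypasses the scheduler, the Pod is just waiting for a kubelet named k8s-node08 that never shows up. Deleting the Deployment cleans up before the next example:

```bash
# Inspect one of the Pending Pods
kubectl describe pod scheduler-nodename-deploy-75944bdc5d-c8f5d

# Clean up this example
kubectl delete -f scheduler_nodeName_02.yaml
```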
nodeSelector Scheduling
nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of the PodSpec. It specifies a map of key-value pairs.
Pod.spec.nodeSelector selects nodes through the Kubernetes label-selector mechanism: the scheduler matches node labels against the selector and then places the Pod on a matching node. The constraint is mandatory. Because placement goes through the scheduler, it cannot bypass node taints.
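For a node to qualify, it must carry every key-value pair listed under nodeSelector; the entries are ANDed. A minimal sketch follows, with the Pod name being illustrative and the labels taken from the labeling plan described later in this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo        # illustrative name, not part of the examples below
spec:
  # A node must carry BOTH labels for this Pod to be schedulable on it
  nodeSelector:
    disk-type: ssd
    service-type: web
  containers:
  - name: myapp-pod
    image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
    ports:
    - containerPort: 80
```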
nodeSelector Example
Get the current node information
```
[root@k8s-master ~]# kubectl get node -o wide --show-labels
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME   LABELS
k8s-master   Ready    master   42d   v1.17.4   172.16.1.110   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node01   Ready    <none>   42d   v1.17.4   172.16.1.111   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>   42d   v1.17.4   172.16.1.112   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.8     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
```
Add a label
Run kubectl get nodes to get the names of the cluster nodes, then add labels to specific nodes. For example: if k8s-node01 has SSD disks, add disk-type=ssd; if k8s-node02 has a high CPU core count, add cpu-type=hight; if a machine serves web traffic, add service-type=web. Which labels to add depends on your actual planning.
```
### Add the specified label to k8s-node01
[root@k8s-master ~]# kubectl label nodes k8s-node01 disk-type=ssd
node/k8s-node01 labeled
#### To remove the label: kubectl label nodes k8s-node01 disk-type-
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
k8s-master   Ready    master   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node01   Ready    <none>   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk-type=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   Ready    <none>   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
```
As shown above, the disk-type=ssd label has now been added to the k8s-node01 node.
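Instead of scanning the full --show-labels output, you can also filter nodes by the label directly (a hedged alternative check using the same label as above):

```bash
# Show only the nodes carrying the disk-type=ssd label
kubectl get nodes -l disk-type=ssd
```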
When the nodeSelector label exists
The YAML file to run
```
[root@k8s-master scheduler]# pwd
/root/k8s_practice/scheduler
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# cat scheduler_nodeSelector.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler-nodeselector-deploy
  labels:
    app: nodeselector-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      # Select nodes by label; the label exists
      nodeSelector:
        disk-type: ssd
```
Apply the YAML file and check the results
```
[root@k8s-master scheduler]# kubectl apply -f scheduler_nodeSelector.yaml
deployment.apps/scheduler-nodeselector-deploy created
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get deploy -o wide
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodeselector-deploy   5/5     5            5           10s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get rs -o wide
NAME                                       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodeselector-deploy-79455db454   5         5         5       14s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=79455db454
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get pod -o wide
NAME                                             READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
scheduler-nodeselector-deploy-79455db454-745ph   1/1     Running   0          19s   10.244.4.154   k8s-node01   <none>           <none>
scheduler-nodeselector-deploy-79455db454-bmjvd   1/1     Running   0          19s   10.244.4.151   k8s-node01   <none>           <none>
scheduler-nodeselector-deploy-79455db454-g5cg2   1/1     Running   0          19s   10.244.4.153   k8s-node01   <none>           <none>
scheduler-nodeselector-deploy-79455db454-hw8jv   1/1     Running   0          19s   10.244.4.152   k8s-node01   <none>           <none>
scheduler-nodeselector-deploy-79455db454-zrt8d   1/1     Running   0          19s   10.244.4.155   k8s-node01   <none>           <none>
```
As shown above, all Pods were scheduled onto the k8s-node01 node. Of course, if other nodes also carried the disk-type=ssd label, Pods could be scheduled onto them as well.
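To spread the replicas across more nodes (a hedged sketch extending the example above), you could give k8s-node02 the same label. Note that adding a label does not move Pods that are already Running; only newly scheduled Pods take the new label into account, so a rollout restart is one way to re-run scheduling for the Deployment:

```bash
# Make k8s-node02 eligible for the same nodeSelector
kubectl label nodes k8s-node02 disk-type=ssd

# Recreate the replicas so the scheduler considers both labeled nodes (kubectl >= 1.15)
kubectl rollout restart deployment scheduler-nodeselector-deploy
```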
When the nodeSelector label does not exist
The YAML file to run
```
[root@k8s-master scheduler]# pwd
/root/k8s_practice/scheduler
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# cat scheduler_nodeSelector_02.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scheduler-nodeselector-deploy
  labels:
    app: nodeselector-deploy
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-pod
        image: registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      # Select nodes by label; the label does not exist
      nodeSelector:
        service-type: web
```
Apply the YAML file and check the results
```
[root@k8s-master scheduler]# kubectl apply -f scheduler_nodeSelector_02.yaml
deployment.apps/scheduler-nodeselector-deploy created
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get deploy -o wide
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodeselector-deploy   0/5     5            0           26s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get rs -o wide
NAME                                       DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                                       SELECTOR
scheduler-nodeselector-deploy-799d748db6   5         5         0       30s   myapp-pod    registry.cn-beijing.aliyuncs.com/google_registry/myapp:v1   app=myapp,pod-template-hash=799d748db6
[root@k8s-master scheduler]#
[root@k8s-master scheduler]# kubectl get pod -o wide
NAME                                             READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
scheduler-nodeselector-deploy-799d748db6-92mqj   0/1     Pending   0          40s   <none>   <none>   <none>           <none>
scheduler-nodeselector-deploy-799d748db6-c2w25   0/1     Pending   0          40s   <none>   <none>   <none>           <none>
scheduler-nodeselector-deploy-799d748db6-c8tlx   0/1     Pending   0          40s   <none>   <none>   <none>           <none>
scheduler-nodeselector-deploy-799d748db6-tc5n7   0/1     Pending   0          40s   <none>   <none>   <none>           <none>
scheduler-nodeselector-deploy-799d748db6-z8c57   0/1     Pending   0          40s   <none>   <none>   <none>           <none>
```
As shown above, if no node carries a label matching the nodeSelector, the Pods never run and remain stuck in the Pending state.
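Running kubectl describe against one of the Pending Pods (a hedged sketch; the Pod name is one of the replicas above) shows a FailedScheduling event reporting that no node matched the node selector. Once some node is given the missing service-type=web label, the scheduler will place the pending Pods on it without any further action:

```bash
# The Events section should contain a FailedScheduling event: no node matched the selector
kubectl describe pod scheduler-nodeselector-deploy-799d748db6-92mqj

# Adding the missing label lets the scheduler resolve the pending Pods
kubectl label nodes k8s-node02 service-type=web
```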
Related Reading
1. Official docs: Assigning Pods to Nodes
2. Kubernetes K8S: the kube-scheduler in detail
3. Kubernetes K8S: affinity and anti-affinity explained with examples
4. Kubernetes K8S: Taints and Tolerations in detail
Done!