06 Kubernetes Resource Management and Quality of Service

unlisted · suofeiya's blog

#kubernetes


Preface #

The previous article in this series, Kubernetes Tutorial (5): Mastering the Core Concept of Pods, used YAML to give a first look at pods, one of the most important concepts in Kubernetes. This article continues the series with pod resource management (resources) and pod Quality of Service (QoS).

1. Pod Resource Management #

1.1 Defining resources #

A running container needs resources allocated to it, so how does Kubernetes cooperate with cgroups to achieve that? The answer is the resources definition. Resources are allocated mainly in units of cpu and memory, and the definition comes in two kinds: requests and limits. requests declares the requested amount, which the Kubernetes scheduler uses when initially placing the pod and which represents an allocation that must be satisfied; limits declares the upper bound that the pod may not exceed, enforced through cgroups. A pod's resources can therefore be defined through the following four fields: requests.cpu, requests.memory, limits.cpu, and limits.memory.

1. Let's start by defining a pod's resources. In the nginx-demo example below, the container requests 250m of cpu with a limit of 500m, and requests 128Mi of memory with a limit of 256Mi. Resources can of course be defined for multiple containers, in which case the pod's total is the sum across its containers:

[root@node-1 demo]# cat nginx-resource.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  containers:
  - name: nginx-demo
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 0.25
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi

2. Apply the pod definition (if the pod from the previous article still exists, delete it first with kubectl delete pod nginx-demo, or give this pod a different name):

[root@node-1 demo]# kubectl apply -f nginx-resource.yaml
pod/nginx-demo created

3. Check the pod's resource allocation details:

[root@node-1 demo]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
demo-7b86696648-8bq7h   1/1     Running   0          12d
demo-7b86696648-8qp46   1/1     Running   0          12d
demo-7b86696648-d6hfw   1/1     Running   0          12d
nginx-demo              1/1     Running   0          94s

[root@node-1 demo]# kubectl describe pods nginx-demo
Name:         nginx-demo
Namespace:    default
Priority:     0
Node:         node-3/10.254.100.103
Start Time:   Sat, 28 Sep 2019 12:10:49 +0800
Labels:       name=nginx-demo
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-demo"},"name":"nginx-demo","namespace":"default"},"sp...
Status:       Running
IP:           10.244.2.13
Containers:
  nginx-demo:
    Container ID:   docker://55d28fdc992331c5c58a51154cd072cd6ae37e03e05ae829a97129f85eb5ed79
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 12:10:51 +0800
    Ready:          True
    Restart Count:  0
    Limits:        # resource limits
      cpu:     500m
      memory:  256Mi
    Requests:      # resource requests
      cpu:        250m
      memory:     128Mi
    Environment:  <none>
    ...omitted...

4. Where are a pod's resources allocated from? From the node, naturally. When a pod with requests set is created, the Kubernetes scheduler kube-scheduler runs two phases: filtering and scoring. It first filters the nodes against the requested resources to find those that qualify, then ranks the remaining candidates and picks the node that best fits the pod, and finally runs the pod on that node. For the algorithm and its details, refer to the Kubernetes scheduling-algorithm introduction.
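As a quick illustration of the filter phase's inputs, a pod's requests and a node's allocatable resources can be read side by side; the commands below are a sketch, but both jsonpath field paths are standard:

[root@node-1 ~]# kubectl get pod nginx-demo -o jsonpath='{.spec.containers[0].resources.requests}'
[root@node-1 ~]# kubectl get node node-3 -o jsonpath='{.status.allocatable}'

Below is the detailed resource allocation on node-3: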

[root@node-1 ~]# kubectl describe node node-3
...omitted...
Capacity:    # total resources on the node: 1 cpu, ~2G memory, up to 110 pods
 cpu:                1
 ephemeral-storage:  51473888Ki
 hugepages-2Mi:      0
 memory:             1882352Ki
 pods:               110
Allocatable: # resources available for pods; reserved resources are excluded from Allocatable
 cpu:                1
 ephemeral-storage:  47438335103
 hugepages-2Mi:      0
 memory:             1779952Ki
 pods:               110
System Info:
 Machine ID:                 0ea734564f9a4e2881b866b82d679dfc
 System UUID:                FFCD2939-1BF2-4200-B4FD-8822EBFFF904
 Boot ID:                    293f49fd-8a7c-49e2-8945-7a4addbd88ca
 Kernel Version:             3.10.0-957.21.3.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.3
 Kubelet Version:            v1.15.3
 Kube-Proxy Version:         v1.15.3
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (3 in total) # pods running on this node; besides nginx-demo there are others
  Namespace                  Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                           ------------  ----------  ---------------  -------------  ---
  default                    nginx-demo                     250m (25%)    500m (50%)  128Mi (7%)       256Mi (14%)    63m
  kube-system                kube-flannel-ds-amd64-jp594    100m (10%)    100m (10%)  50Mi (2%)        50Mi (2%)      14d
  kube-system                kube-proxy-mh2gq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
Allocated resources:  # cpu and memory already allocated
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                350m (35%)   600m (60%)
  memory             178Mi (10%)  306Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:              <none>
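A side note on the gap between Capacity and Allocatable above: the difference is what the kubelet holds back for the system. Reading the memory numbers from this output (the flags named below are the standard kubelet reservation knobs, mentioned as background rather than values confirmed from this cluster):

# Capacity − Allocatable for memory on node-3, from the output above:
#   1882352Ki − 1779952Ki = 102400Ki (100Mi) held in reserve
# Such reservations are typically configured with the kubelet flags
#   --kube-reserved, --system-reserved and --eviction-hard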

1.2 How resource allocation works #

The requests and limits defined on a pod guide the Kubernetes scheduler kube-scheduler, but the cpu and memory values are actually applied to the container, where isolation is enforced through the container's cgroups. Let's look at how this allocation works.

Taking the nginx-demo pod defined above as an example, let's examine which docker parameters the pod's requests and limits translate into:

1. Find the node the pod runs on; nginx-demo was scheduled to node-3:

[root@node-1 ~]# kubectl get pods -o wide nginx-demo
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx-demo   1/1     Running   0          96m   10.244.2.13   node-3   <none>           <none>

2. Get the container ID. You can take the Container ID from kubectl describe pods nginx-demo, or log in to node-3 and filter by name. By default a pod has two containers: one created from the pause image and the other from the application image.
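For the first approach, a jsonpath one-liner works (a sketch; the field path is standard and returns the full ID that the listing below abbreviates):

[root@node-1 ~]# kubectl get pods nginx-demo -o jsonpath='{.status.containerStatuses[0].containerID}'
docker://55d28fdc992331c5c58a51154cd072cd6ae37e03e05ae829a97129f85eb5ed79

And on node-3 itself: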

[root@node-3 ~]# docker container list | grep nginx
55d28fdc9923        84581e99d807           "nginx -g 'daemon of…"   2 hours ago         Up 2 hours                                   k8s_nginx-demo_nginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0
2fe0498ea9b5        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                   k8s_POD_nginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0

3. Inspect the docker container details:

[root@node-3 ~]# docker container inspect 55d28fdc9923
[
...partial output omitted...
    {
        "Image": "sha256:84581e99d807a703c9c03bd1a31cd9621815155ac72a7365fd02311264512656",
        "ResolvConfPath": "/var/lib/docker/containers/2fe0498ea9b5dfe1eb63eba09b1598a8dfd60ef046562525da4dcf7903a25250/resolv.conf",
        "HostConfig": {
            "Binds": [
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/volumes/kubernetes.io~secret/default-token-5qwmc:/var/run/secrets/kubernetes.io/serviceaccount:ro",
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/containers/nginx-demo/1cc072ca:/dev/termination-log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-size": "100m"
                }
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 256,        # CPU weight; derives from requests.cpu
            "Memory": 268435456,     # memory cap in bytes; derives from limits.memory
            "NanoCpus": 0,
            "CgroupParent": "kubepods-burstable-pod66958ef7_507a_41cd_a688_7a4976c6a71e.slice",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 100000,    # CFS period; together with CpuQuota it enforces limits.cpu
            "CpuQuota": 50000,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 268435456,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
        },
    }
]
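A quick sanity check ties the three annotated fields back to the pod spec; this arithmetic is the standard kubelet-to-cgroup translation, and the results match the inspect output above:

# kubelet → docker/cgroup translation for nginx-demo:
#   CpuShares = requests.cpu × 1024     → 0.25 × 1024  = 256
#   CpuQuota  = limits.cpu × CpuPeriod  → 0.5 × 100000 = 50000 (CpuPeriod is 100000 µs)
#   Memory    = limits.memory in bytes  → 256Mi        = 268435456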

1.3 CPU resource test #

Pod cpu limits are defined mainly through requests.cpu and limits.cpu, where limits is the cpu amount that may not be exceeded. We verify this with the stress image, a cpu and memory stress-testing tool whose load is set via the args field. Pod cpu and memory usage can be monitored with kubectl top, which depends on a monitoring component such as metrics-server or Prometheus; since none is installed here, we use docker stats instead.

1. Define a pod from the stress image, requesting 0.25 core and limiting usage to at most 0.5 core:

[root@node-1 demo]# cat cpu-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: default
  annotations:
    kubernetes.io/description: "demo for cpu requests and"
spec:
  containers:
  - name: stress-cpu
    image: vish/stress
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    args:
    - -cpus
    - "1"

2. Apply the yaml file to create the pod:

[root@node-1 demo]# kubectl apply -f cpu-demo.yaml
pod/cpu-demo created

3. Check the pod's resource allocation details:

[root@node-1 demo]# kubectl describe pods cpu-demo
Name:         cpu-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 14:33:12 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"demo for cpu requests and"},"name":"cpu-demo","nam...
              kubernetes.io/description: demo for cpu requests and
Status:       Running
IP:           10.244.1.14
Containers:
  stress-cpu:
    Container ID:  docker://14f93767ad37b92beb91e3792678f60c9987bbad3290ae8c29c35a2a80101836
    Image:         progrium/stress
    Image ID:      docker-pullable://progrium/stress@sha256:e34d56d60f5caae79333cee395aae93b74791d50e3841986420d23c2ee4697bf
    Port:          <none>
    Host Port:     <none>
    Args:
      -cpus
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 14:34:28 +0800
      Finished:     Sat, 28 Sep 2019 14:34:28 +0800
    Ready:          False
    Restart Count:  3
    Limits:         # cpu limit
      cpu:  500m
    Requests:       # cpu request
      cpu:  250m

4. Log in to the node the pod runs on and inspect the container's resource usage with docker container stats. (The original post showed a screenshot here of the limits.cpu usage rate.)

Viewed with top on the pod's node, the cpu usage is capped at 50%. (Screenshot not reproduced.)
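For reference, this is the kind of reading the screenshot captured (a sketch: the name filter relies on the k8s_<container>_<pod>_... naming pattern kubelet uses; output not reproduced here):

[root@node-2 ~]# docker stats --no-stream $(docker ps -q --filter name=stress-cpu)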

The verification above supports the conclusion: we told the stress container to burn 1 core, limits.cpu capped usable cpu at 500m, and the pod's usage, measured both inside the container and on the host, was held strictly at 50% (the node has a single cpu; on a 2-cpu node this would show as roughly 25% per cpu).
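The 50% figure follows directly from the CFS settings seen in the docker inspect output in section 1.2 (cgroup v1 file names given for reference):

# CFS throttling: cpu.cfs_quota_us / cpu.cfs_period_us
#   50000 / 100000 = 0.5 core
#   → 50% of this 1-cpu node, or about 25% per cpu on a 2-cpu node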

1.4 Memory resource test #

1. Use the stress image to verify what requests.memory and limits.memory govern. limits.memory defines the memory the container may use; once it is exceeded the container is OOM-killed. Below we define a test container whose memory may not exceed 512M, with the stress load set to 256M via --vm-bytes:

[root@node-1 demo]# cat memory-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "256M", "--vm-hang", "1"]

2. Apply the yaml file to create the pod:

[root@node-1 demo]# kubectl apply -f memory-demo.yaml
pod/memory-stress-demo created

[root@node-1 demo]# kubectl get pods memory-stress-demo -o wide
NAME                 READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
memory-stress-demo   1/1     Running   0          41s   10.244.1.19   node-2   <none>           <none>

3. Check the resource allocation:

[root@node-1 demo]# kubectl describe pods memory-stress-demo
Name:         memory-stress-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 15:13:06 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"stress demo for memory limits"},"name":"memory-str...
              kubernetes.io/description: stress demo for memory limits
Status:       Running
IP:           10.244.1.16
Containers:
  memory-stress-limits:
    Container ID:  docker://c7408329cffab2f10dd860e50df87bd8671e65a0f8abb4dae96d059c0cb6bb2d
    Image:         polinux/stress
    Image ID:      docker-pullable://polinux/stress@sha256:6d1825288ddb6b3cec8d3ac8a488c8ec2449334512ecb938483fc2b25cbbdb9a
    Port:          <none>
    Host Port:     <none>
    Command:
      stress
    Args:
      --vm
      1
      --vm-bytes
      256Mi
      --vm-hang
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 15:14:08 +0800
      Finished:     Sat, 28 Sep 2019 15:14:08 +0800
    Ready:          False
    Restart Count:  3
    Limits:          # memory limit
      memory:  512Mi
    Requests:         # memory request
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)

4. Check the container's memory usage: 256M is allocated out of a 512Mi maximum, about 50% utilization. The limit is not exceeded, so the container runs normally.

(Screenshot: memory usage under the limits.memory cap.)
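The reading behind that screenshot can be reproduced on node-2 with the same kind of command as in the cpu test (a sketch; output not shown):

[root@node-2 ~]# docker stats --no-stream $(docker ps -q --filter name=memory-stress-limits)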

5. What happens when the container exceeds its memory limit? Below we raise --vm-bytes past the limit, to 520M. The container keeps trying to run, is OOM-killed once it crosses the limit, and the kubelet keeps restarting it, so the RESTARTS count climbs steadily.

[root@node-1 demo]# cat memory-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "520M", "--vm-hang", "1"]  # stress allocates 520M, above the 512Mi limit

The container status shows OOMKilled, and RESTARTS keeps climbing as the kubelet retries:

[root@node-1 demo]# kubectl get pods memory-stress-demo
NAME                 READY   STATUS      RESTARTS   AGE
memory-stress-demo   0/1     OOMKilled   3          60s
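The kill reason is also recorded on the container status and can be read directly (a sketch; the field path is standard):

[root@node-1 demo]# kubectl get pods memory-stress-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
OOMKilled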

2. Pod Quality of Service #

Quality of Service (QoS) is an important factor in pod scheduling and eviction decisions: different QoS classes receive different treatment and map to different priorities. There are three QoS classes, BestEffort, Burstable, and Guaranteed, covered in turn below.
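Before walking through the three classes, note that the class assigned to any pod can be read straight from its status; status.qosClass is the real field, as the full yaml output in section 2.1 shows (pod name is a placeholder):

[root@node-1 demo]# kubectl get pods <pod-name> -o jsonpath='{.status.qosClass}'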

2.1 BestEffort #

1. A pod with no resources defined defaults to the BestEffort QoS class, the lowest priority. When resources are tight and pods must be evicted, BestEffort pods are evicted first. Below we define a BestEffort pod:

[root@node-1 demo]# cat nginx-qos-besteffort.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-besteffort
  labels:
    name: nginx-qos-besteffort
spec:
  containers:
  - name: nginx-qos-besteffort
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources: {}

2. Create the pod and check its QoS class; qosClass is BestEffort:

[root@node-1 demo]# kubectl apply -f nginx-qos-besteffort.yaml
pod/nginx-qos-besteffort created

Check the QoS class:
[root@node-1 demo]# kubectl get pods nginx-qos-besteffort -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-besteffort"},"name":"nginx-qos-besteffort","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.7.9","imagePullPolicy":"IfNotPresent","name":"nginx-qos-besteffort","ports":[{"containerPort":80,"name":"nginx-port-80","protocol":"TCP"}],"resources":{}}]}}
  creationTimestamp: "2019-09-28T11:12:03Z"
  labels:
    name: nginx-qos-besteffort
  name: nginx-qos-besteffort
  namespace: default
  resourceVersion: "1802411"
  selfLink: /api/v1/namespaces/default/pods/nginx-qos-besteffort
  uid: 56e4a2d5-8645-485d-9362-fe76aad76e74
spec:
  containers:
  - image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    name: nginx-qos-besteffort
    ports:
    - containerPort: 80
      name: nginx-port-80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
...omitted...
status:
  hostIP: 10.254.100.102
  phase: Running
  podIP: 10.244.1.21
  qosClass: BestEffort  # QoS class
  startTime: "2019-09-28T11:12:03Z"

3. Delete the test pod:

[root@node-1 demo]# kubectl delete pods nginx-qos-besteffort
pod "nginx-qos-besteffort" deleted

2.2 Burstable #

1. The Burstable QoS class ranks just below Guaranteed. A pod is Burstable when at least one of its containers defines requests or limits but the pod does not meet the Guaranteed criteria, for example when requests are set lower than limits:

[root@node-1 demo]# cat nginx-qos-burstable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-burstable
  labels:
    name: nginx-qos-burstable
spec:
  containers:
  - name: nginx-qos-burstable
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi

2. Apply the yaml file to create the pod and check its QoS class:

[root@node-1 demo]# kubectl apply -f nginx-qos-burstable.yaml
pod/nginx-qos-burstable created

Check the QoS class:
[root@node-1 demo]# kubectl describe pods nginx-qos-burstable
Name:         nginx-qos-burstable
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:27:37 +0800
Labels:       name=nginx-qos-burstable
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-burstable"},"name":"nginx-qos-burstable","namespa...
Status:       Running
IP:           10.244.1.22
Containers:
  nginx-qos-burstable:
    Container ID:   docker://d1324b3953ba6e572bfc63244d4040fee047ed70138b5a4bad033899e818562f
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:27:39 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Burstable  # the QoS class is Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  95s   default-scheduler  Successfully assigned default/nginx-qos-burstable to node-2
  Normal  Pulled     94s   kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    94s   kubelet, node-2    Created container nginx-qos-burstable
  Normal  Started    93s   kubelet, node-2    Started container nginx-qos-burstable

2.3 Guaranteed #

1. For the Guaranteed class, every container must define both requests and limits for cpu and memory, and the requests must equal the limits. This class has the highest priority and is protected first when scheduling and eviction decisions are made. Below we define an nginx-qos-guaranteed pod whose requests.cpu equals limits.cpu, and likewise for requests.memory and limits.memory:

[root@node-1 demo]# cat nginx-qos-guaranteed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-guaranteed
  labels:
    name: nginx-qos-guaranteed
spec:
  containers:
  - name: nginx-qos-guaranteed
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 256Mi

2. Apply the yaml file to create the pod and confirm its QoS class is Guaranteed:

[root@node-1 demo]# kubectl apply -f nginx-qos-guaranteed.yaml
pod/nginx-qos-guaranteed created

[root@node-1 demo]# kubectl describe pods nginx-qos-guaranteed
Name:         nginx-qos-guaranteed
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:37:15 +0800
Labels:       name=nginx-qos-guaranteed
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-guaranteed"},"name":"nginx-qos-guaranteed","names...
Status:       Running
IP:           10.244.1.23
Containers:
  nginx-qos-guaranteed:
    Container ID:   docker://cf533e0e331f49db4e9effb0fbb9249834721f8dba369d281c8047542b9f032c
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:37:16 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:        200m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Guaranteed # the QoS class is Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25s   default-scheduler  Successfully assigned default/nginx-qos-guaranteed to node-2
  Normal  Pulled     24s   kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    24s   kubelet, node-2    Created container nginx-qos-guaranteed
  Normal  Started    24s   kubelet, node-2    Started container nginx-qos-guaranteed

Closing Notes #

This is the sixth article in the Kubernetes tutorial series. It walked through resource allocation with requests and limits and through pod Quality of Service (QoS); when sizing pod resources, keep in mind that requests drive scheduling decisions while limits are enforced on the node through cgroups.

附录 #

Managing compute resources for containers: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

Assigning memory resources to pods: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/

Assigning cpu resources to pods: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/

Quality of Service for pods: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/

Docker CPU limits: https://www.cnblogs.com/sparkdev/p/8052522.html

[Repost] This article was sourced from the internet and will be taken down on request.