k8s LimitRange + HPA in Practice


Why test LimitRange and HPA together?
Because an HPA needs the pods' CPU request values to compute utilization, and this article uses a LimitRange to enforce resource limits and defaults for a namespace.
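For example, if the LimitRange injects a default request of 50m CPU and a pod's container is actually using 100m, the HPA calculates 200% CPU utilization for that pod (utilization = current usage / request).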

1. Create a LimitRange named limit-test in the test namespace

[root@k8s-master namespace]# vim limit-test.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-test
  namespace: test
spec:
  limits:
  - max:
      cpu: "200m"            ## maximum CPU limit for a single container
      memory: "1Gi"          ## maximum memory limit for a single container
    min:
      cpu: "10m"             ## minimum CPU for a single container
      memory: "100Mi"        ## minimum memory for a single container
    default:
      cpu: "100m"            ## default CPU limit for a container
      memory: "200Mi"        ## default memory limit for a container
    defaultRequest:
      cpu: "50m"             ## default CPU request for a container
      memory: "100Mi"        ## default memory request for a container
    type: Container
  - max:
      cpu: "2"               ## upper bound on the sum of CPU limits of all containers in a pod
      memory: "2Gi"          ## upper bound on the sum of memory limits of all containers in a pod
    min:
      cpu: "50m"             ## lower bound on the sum of CPU requests of all containers in a pod
      memory: "50Mi"         ## lower bound on the sum of memory requests of all containers in a pod
    type: Pod

Note: resource limits are defined both per pod and per container. Since a pod may contain multiple containers, the two levels have to be constrained separately.
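
After applying the manifest, kubectl describe shows the effective min/max and default values for the namespace (commands only, output omitted here):

[root@k8s-master namespace]# kubectl apply -f limit-test.yaml
[root@k8s-master namespace]# kubectl describe limitrange limit-test -n test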
2. Create test pods (a Deployment and a Service)

[root@k8s-master ingress]# vim service-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
  namespace: test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
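
Apply the manifest and confirm that the LimitRange defaults were injected into the pods even though the Deployment declares no resources of its own (a quick check; the jsonpath simply prints the first container's resources):

[root@k8s-master ingress]# kubectl apply -f service-nginx.yaml
[root@k8s-master ingress]# kubectl get pod -n test -l app=nginx -o jsonpath='{.items[0].spec.containers[0].resources}'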

3. Create the HPA through Kuboard
(screenshot: creating the HPA for the nginx-app Deployment in the Kuboard UI)
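
If you prefer the CLI over Kuboard, a minimal HPA manifest along the following lines should be equivalent. The name nginx-app, the 2–4 replica range and the 50% CPU target are assumptions for illustration; on clusters older than v1.23 use apiVersion autoscaling/v2beta2 instead:

[root@k8s-master ~]# vim nginx-hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-app             ## assumed name; Kuboard may generate a different one
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-app
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   ## assumed threshold: scale out when average CPU usage exceeds 50% of the request
[root@k8s-master ~]# kubectl apply -f nginx-hpa.yaml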

4. If creating the HPA fails with the error k8s hpa failed to get cpu utilization: missing request for cpu, use kubectl get apiservice to check which Service backs the metrics APIService

[root@k8s-master ingress]# kubectl get apiservice | grep metrics.k8s
v1beta1.metrics.k8s.io                 kuboard/promethues-adpter   True        53m        ## the metrics APIService points to the promethues-adpter Service in the kuboard namespace, left over from a Prometheus Operator install; create a new APIService to point it back at metrics-server

5. Create a new APIService

[root@k8s-master metric-API]# vim metrci-api.yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
[root@k8s-master metric-API]# kubectl apply -f metrci-api.yaml
[root@k8s-master metric-API]# kubectl get apiservice | grep metrics.k8s
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        2m5s ## the backing Service has been updated
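
Before relying on the HPA it is worth checking that pod metrics actually come back through the repaired API (a quick sanity check, output omitted):

[root@k8s-master metric-API]# kubectl top nodes
[root@k8s-master metric-API]# kubectl top pods -n test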

6. Load-test with ab to trigger the HPA

[root@k8s-master metric-API]# yum -y install httpd-tools-2.4.6-95.el7.centos.x86_64
[root@k8s-master metric-API]# ab -n 200000 -c 5000 http://10.109.154.14/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/


Benchmarking 10.109.154.14 (be patient)
Completed 20000 requests
Completed 40000 requests
Completed 60000 requests
Completed 80000 requests
Completed 100000 requests
Completed 120000 requests
Completed 140000 requests
Completed 160000 requests
Completed 180000 requests
apr_socket_recv: Connection timed out (110)
Total of 199929 requests completed
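
While ab is running you can also watch the scale-out from the command line (assuming the HPA ended up named nginx-app; adjust to whatever name Kuboard actually created):

[root@k8s-master metric-API]# kubectl get hpa -n test -w
[root@k8s-master metric-API]# kubectl get pods -n test -w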

7. Check the pod scaling in the Kuboard UI: the replica count has grown from 2 to 4
(screenshot: Kuboard showing the Deployment scaled from 2 to 4 replicas)

