16 Exposing services through Ingress with HAProxy

Preface #

Earlier articles covered the nginx-based ingress controller. This chapter continues the kubernetes tutorial series with its sister open-source load-balancing controller: the HAProxy ingress controller.

1. The HAProxy Ingress controller #

1.1 HAProxy Ingress overview #

(Figure: what HAProxy Ingress watches in the k8s cluster and how it builds the HAProxy configuration)

Much like nginx, HAProxy watches the kubernetes API to obtain the state of the pods behind a service and dynamically updates the haproxy configuration file, implementing layer-7 load balancing.

(Figure: HAProxy Ingress overview)

The features of the HAProxy ingress controller are shown in the figure below:

(Figure: HAProxy ingress controller versions and features)

1.2 Installing the HAProxy controller #

Installing haproxy ingress is relatively simple: the project publishes an installation yaml file. Download the file first and take a look at the kubernetes resource configuration and the resource kinds it contains.

Installation manifest: https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
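
A quick way to list the resource kinds the manifest defines (a minimal sketch; the kind field sits at the top level of each YAML document in the file, so a plain grep is enough):

# download the official manifest and print the kind of every object it defines
wget https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
grep '^kind:' haproxy-ingress.yaml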

1. Create the namespace. haproxy ingress is deployed in the ingress-controller namespace, so create that namespace first:

 1[root@node-1 ~]# kubectl create namespace ingress-controller
 2namespace/ingress-controller created
 3
 4[root@node-1 ~]# kubectl get namespaces ingress-controller -o yaml
 5apiVersion: v1
 6kind: Namespace
 7metadata:
 8  creationTimestamp: "2019-12-27T09:56:04Z"
 9  name: ingress-controller
10  resourceVersion: "13946553"
11  selfLink: /api/v1/namespaces/ingress-controller
12  uid: ea70b2f7-efe4-43fd-8ce9-3b917b09b533
13spec:
14  finalizers:
15  - kubernetes
16status:
17  phase: Active

2. Install the haproxy ingress controller

 1[root@node-1 ~]# wget  https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
 2[root@node-1 ~]# kubectl apply -f haproxy-ingress.yaml 
 3serviceaccount/ingress-controller created
 4clusterrole.rbac.authorization.k8s.io/ingress-controller created
 5role.rbac.authorization.k8s.io/ingress-controller created
 6clusterrolebinding.rbac.authorization.k8s.io/ingress-controller created
 7rolebinding.rbac.authorization.k8s.io/ingress-controller created
 8deployment.apps/ingress-default-backend created
 9service/ingress-default-backend created
10configmap/haproxy-ingress created
11daemonset.apps/haproxy-ingress created

3. Check the haproxy ingress installation. Inspecting the core haproxy-ingress DaemonSet shows that no Pods have been deployed: the configuration file defines a nodeSelector, so the nodes need to be given the matching label (the relevant excerpt from the DaemonSet spec is shown after the listing).

1[root@node-1 ~]# kubectl get daemonsets -n ingress-controller 
2NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
3haproxy-ingress   0         0         0       0            0           role=ingress-controller   5m51s
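
The scheduling constraint comes from the DaemonSet spec in the official manifest (the full file is reproduced in the appendix); the relevant excerpt is:

    spec:
      hostNetwork: true              # pods share the host network, so ports 80/443 are served directly on the node
      nodeSelector:                  # only nodes carrying this label run the controller
        role: ingress-controller
      serviceAccountName: ingress-controller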

4. Label the nodes so the Pods managed by the DaemonSet can be scheduled onto them. In production, choose these nodes deliberately: run the haproxy ingress function on dedicated nodes and place a load balancer in front of them for unified access. Since this article only explores haproxy ingress functionality, no load balancer is deployed; readers can add one as their environment requires. Using node-1 and node-2 as examples:

 1[root@node-1 ~]# kubectl label node node-1 role=ingress-controller
 2node/node-1 labeled
 3[root@node-1 ~]# kubectl label node node-2 role=ingress-controller
 4node/node-2 labeled
 5
 6#view the node labels
 7[root@node-1 ~]# kubectl get nodes --show-labels 
 8NAME     STATUS   ROLES    AGE    VERSION   LABELS
 9node-1   Ready    master   104d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,role=ingress-controller
10node-2   Ready    <none>   104d   v1.15.3   app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux,label=test,role=ingress-controller
11node-3   Ready    <none>   104d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux

5. Check the ingress deployment again: the DaemonSet is now fully deployed, with Pods scheduled onto node-1 and node-2.

1[root@node-1 ~]# kubectl get daemonsets -n ingress-controller 
2NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
3haproxy-ingress   2         2         2       2            2           role=ingress-controller   15m
4
5[root@node-1 ~]# kubectl get pods -n ingress-controller -o wide 
6NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
7haproxy-ingress-bdns8                      1/1     Running   0          2m27s   10.254.100.102   node-2   <none>           <none>
8haproxy-ingress-d5rnl                      1/1     Running   0          2m31s   10.254.100.101   node-1   <none>           <none>

When haproxy ingress is deployed, a default backend service is also created through a Deployment. This backend is mandatory: without it the ingress controller cannot start. It can be confirmed from the Deployment list below; the controller argument that references it is shown after the listing.

1[root@node-1 ~]# kubectl get deployments -n ingress-controller 
2NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
3ingress-default-backend   1/1     1            1           18m
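
The controller is pointed at this backend by a startup argument on the DaemonSet (excerpt from the manifest in the appendix):

        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend   # requests matching no Ingress rule are sent here
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --sort-backends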

6. Check the haproxy ingress logs. The logs show that multiple haproxy ingress replicas achieve high availability (HA) through leader election.

(Figure: haproxy ingress logs)
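
To check this yourself, grep the controller logs for the election messages (a sketch; the pod name is taken from the earlier listing, and the exact wording of the election log lines may differ between versions):

kubectl logs -n ingress-controller haproxy-ingress-bdns8 | grep -i leader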

The remaining resources (ServiceAccount, ClusterRole, ConfigMap and so on) can be verified individually; at this point the HAProxy ingress controller deployment is complete. The official documentation also describes two other ways to deploy it.

2. Using haproxy ingress #

2.1 haproxy ingress basics #

With the Ingress controller deployed, Ingress rules must be defined so the controller can discover the Pod resources behind a service. This section introduces how Ingress is used with the HAProxy Ingress Controller.

1. Prepare the environment: create a deployment and expose its port

 1#create the application and expose its port
 2[root@node-1 haproxy-ingress]# kubectl run haproxy-ingress-demo --image=nginx:1.7.9 --port=80 --replicas=1 --expose
 3kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
 4service/haproxy-ingress-demo created
 5deployment.apps/haproxy-ingress-demo created
 6
 7#check the deployment
 8[root@node-1 haproxy-ingress]# kubectl get deployments haproxy-ingress-demo 
 9NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
10haproxy-ingress-demo   1/1     1            1           10s
11
12#check the service
13[root@node-1 haproxy-ingress]# kubectl get services haproxy-ingress-demo 
14NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
15haproxy-ingress-demo   ClusterIP   10.106.199.102   <none>        80/TCP    17s

2. Create the Ingress rule. If several ingress controllers are running, the ingress.class annotation selects haproxy:

 1apiVersion: extensions/v1beta1
 2kind: Ingress
 3metadata:
 4  name: haproxy-ingress-demo 
 5  labels:
 6    ingresscontroller: haproxy 
 7  annotations:
 8    kubernetes.io/ingress.class: haproxy 
 9spec:
10  rules:
11  - host: www.happylau.cn 
12    http:
13      paths:
14      - path: /
15        backend:
16          serviceName: haproxy-ingress-demo 
17          servicePort: 80

3. Apply the Ingress rule and inspect its details; the Events log shows the controller has picked up the update.

 1[root@node-1 haproxy-ingress]# kubectl apply -f ingress-demo.yaml 
 2ingress.extensions/haproxy-ingress-demo created
 3
 4#view the details
 5[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-demo 
 6Name:             haproxy-ingress-demo
 7Namespace:        default
 8Address:          
 9Default backend:  default-http-backend:80 (<none>)
10Rules:
11  Host             Path  Backends
12  ----             ----  --------
13  www.happylau.cn  
14                   /   haproxy-ingress-demo:80 (10.244.2.166:80)
15Annotations:
16  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"labels":{"ingresscontroller":"haproxy"},"name":"haproxy-ingress-demo","namespace":"default"},"spec":{"rules":[{"host":"www.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-ingress-demo","servicePort":80},"path":"/"}]}}]}}
17
18  kubernetes.io/ingress.class:  haproxy
19Events:
20  Type    Reason  Age   From                Message
21  ----    ------  ----  ----                -------
22  Normal  CREATE  27s   ingress-controller  Ingress default/haproxy-ingress-demo
23  Normal  CREATE  27s   ingress-controller  Ingress default/haproxy-ingress-demo
24  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-demo
25  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-demo

4. Test and verify the ingress rule. You can write the domain into the hosts file; here we simply test with curl, resolving the name to either node-1 or node-2:

 1[root@node-1 haproxy-ingress]# curl  http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.101
 2<!DOCTYPE html>
 3<html>
 4<head>
 5<title>Welcome to nginx!</title>
 6<style>
 7    body {
 8        width: 35em;
 9        margin: 0 auto;
10        font-family: Tahoma, Verdana, Arial, sans-serif;
11    }
12</style>
13</head>
14<body>
15<h1>Welcome to nginx!</h1>
16<p>If you see this page, the nginx web server is successfully installed and
17working. Further configuration is required.</p>
18
19<p>For online documentation and support please refer to
20<a href="http://nginx.org/">nginx.org</a>.<br/>
21Commercial support is available at
22<a href="http://nginx.com/">nginx.com</a>.</p>
23
24<p><em>Thank you for using nginx.</em></p>
25</body>
26</html>

5. The test passes. Next, go into the haproxy ingress controller pod and look at the rule configuration that was generated:

  1[root@node-1 ~]# kubectl exec -it haproxy-ingress-bdns8 -n ingress-controller /bin/sh
  2
  3#view the configuration file
  4/etc/haproxy # cat /etc/haproxy/haproxy.cfg 
  5  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
  6# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
  7# #
  8# #   HAProxy Ingress Controller
  9# #   --------------------------
 10# #   This file is automatically updated, do not edit
 11# #
 12# global configuration
 13global
 14    daemon
 15    nbthread 2
 16    cpu-map auto:1/1-2 0-1
 17    stats socket /var/run/haproxy-stats.sock level admin expose-fd listeners
 18    maxconn 2000
 19    hard-stop-after 10m
 20    lua-load /usr/local/etc/haproxy/lua/send-response.lua
 21    lua-load /usr/local/etc/haproxy/lua/auth-request.lua
 22    tune.ssl.default-dh-param 2048
 23    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
 24    ssl-default-bind-options no-sslv3 no-tls-tickets
 25
 26#defaults section
 27defaults
 28    log global
 29    maxconn 2000
 30    option redispatch
 31    option dontlognull
 32    option http-server-close
 33    option http-keep-alive
 34    timeout client          50s
 35    timeout client-fin      50s
 36    timeout connect         5s
 37    timeout http-keep-alive 1m
 38    timeout http-request    5s
 39    timeout queue           5s
 40    timeout server          50s
 41    timeout server-fin      50s
 42    timeout tunnel          1h
 43
 44#backend servers, associated with the backend Pods through the service discovery mechanism
 45backend default_haproxy-ingress-demo_80
 46    mode http
 47    balance roundrobin
 48    acl https-request ssl_fc
 49    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
 50    http-request del-header x-forwarded-for
 51    option forwardfor
 52    http-response set-header Strict-Transport-Security "max-age=15768000"
 53    server srv001 10.244.2.166:80 weight 1 check inter 2s   #address of the backend Pod
 54    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
 55    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
 56    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
 57    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
 58    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
 59    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
 60
 61#default backend created at install time; required for the initial installation
 62backend _default_backend
 63    mode http
 64    balance roundrobin
 65    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
 66    http-request del-header x-forwarded-for
 67    option forwardfor
 68    server srv001 10.244.2.165:8080 weight 1 check inter 2s
 69    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
 70    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
 71    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
 72    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
 73    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
 74    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
 75
 76backend _error413
 77    mode http
 78    errorfile 400 /usr/local/etc/haproxy/errors/413.http
 79    http-request deny deny_status 400
 80backend _error495
 81    mode http
 82    errorfile 400 /usr/local/etc/haproxy/errors/495.http
 83    http-request deny deny_status 400
 84backend _error496
 85    mode http
 86    errorfile 400 /usr/local/etc/haproxy/errors/496.http
 87    http-request deny deny_status 400
 88
 89#frontend on port 80, with the https redirect; the host mappings are defined in /etc/haproxy/maps/_global_http_front.map
 90frontend _front_http
 91    mode http
 92    bind *:80
 93    http-request set-var(req.base) base,lower,regsub(:[0-9]+/,/)
 94    http-request redirect scheme https if { var(req.base),map_beg(/etc/haproxy/maps/_global_https_redir.map,_nomatch) yes }
 95    http-request set-header X-Forwarded-Proto http
 96    http-request del-header X-SSL-Client-CN
 97    http-request del-header X-SSL-Client-DN
 98    http-request del-header X-SSL-Client-SHA1
 99    http-request del-header X-SSL-Client-Cert
100    http-request set-var(req.backend) var(req.base),map_beg(/etc/haproxy/maps/_global_http_front.map,_nomatch)
101    use_backend %[var(req.backend)] unless { var(req.backend) _nomatch }
102    default_backend _default_backend
103
104#frontend on port 443; the corresponding domains are listed in /etc/haproxy/maps/_front001_host.map
105frontend _front001
106    mode http
107    bind *:443 ssl alpn h2,http/1.1 crt /ingress-controller/ssl/default-fake-certificate.pem
108    http-request set-var(req.hostbackend) base,lower,regsub(:[0-9]+/,/),map_beg(/etc/haproxy/maps/_front001_host.map,_nomatch)
109    http-request set-header X-Forwarded-Proto https
110    http-request del-header X-SSL-Client-CN
111    http-request del-header X-SSL-Client-DN
112    http-request del-header X-SSL-Client-SHA1
113    http-request del-header X-SSL-Client-Cert
114    use_backend %[var(req.hostbackend)] unless { var(req.hostbackend) _nomatch }
115    default_backend _default_backend
116
117#stats listener
118listen stats
119    mode http
120    bind *:1936
121    stats enable
122    stats uri /
123    no log
124    option forceclose
125    stats show-legends
126
127#health-check endpoint
128frontend healthz
129    mode http
130    bind *:10253
131    monitor-uri /healthz

Inspect the host-name mapping file, which maps frontend host names to the backend names they are forwarded to:

 1/etc/haproxy/maps # cat /etc/haproxy/maps/_global_http_front.map 
 2# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
 3# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
 4# #
 5# #   HAProxy Ingress Controller
 6# #   --------------------------
 7# #   This file is automatically updated, do not edit
 8# #
 9#
10www.happylau.cn/ default_haproxy-ingress-demo_80

With this basic configuration, layer-7 load balancing based on haproxy is in place: the haproxy ingress controller uses the kubernetes API to dynamically discover the rules and backends behind the service and writes them into the haproxy.cfg configuration file.

2.2 Dynamic updates and load balancing #

Backend Pods change constantly. Through the service discovery mechanism, haproxy ingress detects changes in the backend Pods, updates haproxy.cfg dynamically, and reloads the configuration (in practice without restarting the haproxy service). This section demonstrates haproxy ingress dynamic updates and load balancing.

1. Dynamic update: as an example, scale the Pod replicas from replicas=1 up to 3

 1[root@node-1 ~]# kubectl scale --replicas=3 deployment haproxy-ingress-demo 
 2deployment.extensions/haproxy-ingress-demo scaled
 3[root@node-1 ~]# kubectl get deployments haproxy-ingress-demo 
 4NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
 5haproxy-ingress-demo   3/3     3            3           43m
 6
 7#check the Pod IP addresses after scaling up
 8[root@node-1 ~]# kubectl get pods -o wide
 9NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
10haproxy-ingress-demo-5d487d4fc-5pgjt   1/1     Running   0          43m     10.244.2.166   node-3   <none>           <none>
11haproxy-ingress-demo-5d487d4fc-pst2q   1/1     Running   0          18s     10.244.0.52    node-1   <none>           <none>
12haproxy-ingress-demo-5d487d4fc-sr8tm   1/1     Running   0          18s     10.244.1.149   node-2   <none>           <none>

2. Check the haproxy configuration file: the backend server list has dynamically discovered the newly added pod addresses

 1backend default_haproxy-ingress-demo_80
 2    mode http
 3    balance roundrobin
 4    acl https-request ssl_fc
 5    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
 6    http-request del-header x-forwarded-for
 7    option forwardfor
 8    http-response set-header Strict-Transport-Security "max-age=15768000"
 9    server srv001 10.244.2.166:80 weight 1 check inter 2s   #the newly added pod addresses appear below
10    server srv002 10.244.0.52:80 weight 1 check inter 2s
11    server srv003 10.244.1.149:80 weight 1 check inter 2s
12    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
13    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
14    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
15    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s

3. Check the haproxy ingress logs. The message "HAProxy updated without needing to reload" shows the configuration was picked up dynamically, without restarting the haproxy service. Since version 1.8, haproxy has supported dynamic configuration updates to suit microservice scenarios; see the linked article for details.

1[root@node-1 ~]# kubectl logs haproxy-ingress-bdns8 -n ingress-controller -f
2I1227 12:21:11.523066       6 controller.go:274] Starting HAProxy update id=20
3I1227 12:21:11.561001       6 instance.go:162] HAProxy updated without needing to reload. Commands sent: 3
4I1227 12:21:11.561057       6 controller.go:325] Finish HAProxy update id=20: ingress=0.149764ms writeTmpl=37.738947ms total=37.888711ms

4. Next, test the load balancing. To make the effect visible, write different content into each pod:

1[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-5pgjt /bin/bash
2root@haproxy-ingress-demo-5d487d4fc-5pgjt:/# echo "web-1" > /usr/share/nginx/html/index.html
3
4[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-pst2q /bin/bash
5root@haproxy-ingress-demo-5d487d4fc-pst2q:/# echo "web-2" > /usr/share/nginx/html/index.html
6
7[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-sr8tm /bin/bash
8root@haproxy-ingress-demo-5d487d4fc-sr8tm:/# echo "web-3" > /usr/share/nginx/html/index.html

5. Verify the load-balancing behaviour: haproxy uses the round-robin scheduling algorithm, so the rotation is clearly visible

1[root@node-1 ~]# curl  http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
2web-1
3[root@node-1 ~]# curl  http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
4web-2
5[root@node-1 ~]# curl  http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
6web-3

This section demonstrated the haproxy ingress controller's ability to update its configuration dynamically. Unlike the nginx ingress controller, haproxy ingress does not need to reload its service process to pick up configuration changes, which is a significant advantage in microservice scenarios; the example also verified the ingress load-balancing behaviour.

2.3 Name-based virtual hosts #

This section demonstrates name-based virtual hosting with haproxy ingress: two virtual hosts, news.happylau.cn and sports.happylau.cn, are defined, and their requests are forwarded to haproxy-1 and haproxy-2 respectively.

1. Prepare the test environment: create the two applications haproxy-1 and haproxy-2 and expose their service ports

 1[root@node-1 ~]# kubectl run haproxy-1 --image=nginx:1.7.9 --port=80 --replicas=1 --expose=true
 2[root@node-1 ~]# kubectl run haproxy-2 --image=nginx:1.7.9 --port=80 --replicas=1 --expose=true
 3
 4#check the deployments
 5[root@node-1 ~]# kubectl get deployments 
 6NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
 7haproxy-1              1/1     1            1           39s
 8haproxy-2              1/1     1            1           36s
 9
10#check the services
11[root@node-1 ~]# kubectl get services 
12NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
13haproxy-1              ClusterIP   10.100.239.114   <none>        80/TCP    55s
14haproxy-2              ClusterIP   10.100.245.28    <none>        80/TCP    52s

2. Define the Ingress rule, declaring the two hosts and forwarding their requests to the respective services

 1apiVersion: extensions/v1beta1
 2kind: Ingress
 3metadata:
 4  name: haproxy-ingress-virtualhost
 5  annotations:
 6    kubernetes.io/ingress.class: haproxy 
 7spec:
 8  rules:
 9  - host: news.happylau.cn    
10    http:
11      paths:
12      - path: /
13        backend:
14          serviceName: haproxy-1
15          servicePort: 80
16  - host: sports.happylau.cn 
17    http:
18      paths:
19      - path: /
20        backend:
21          serviceName: haproxy-2
22          servicePort: 80
23
24#apply the ingress rule and list it
25[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml 
26ingress.extensions/haproxy-ingress-virtualhost created
27[root@node-1 haproxy-ingress]# kubectl get ingresses haproxy-ingress-virtualhost 
28NAME                          HOSTS                                 ADDRESS   PORTS   AGE
29haproxy-ingress-virtualhost   news.happylau.cn,sports.happylau.cn             80      8s
30
31#view the ingress rule details
32[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost 
33Name:             haproxy-ingress-virtualhost
34Namespace:        default
35Address:          
36Default backend:  default-http-backend:80 (<none>)
37Rules:
38  Host                Path  Backends
39  ----                ----  --------
40  news.happylau.cn    
41                      /   haproxy-1:80 (10.244.2.168:80)
42  sports.happylau.cn  
43                      /   haproxy-2:80 (10.244.2.169:80)
44Annotations:
45  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}]}}
46
47  kubernetes.io/ingress.class:  haproxy
48Events:
49  Type    Reason  Age   From                Message
50  ----    ------  ----  ----                -------
51  Normal  CREATE  37s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
52  Normal  CREATE  37s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
53  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
54  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-virtualhost

3. Verify the virtual host configuration, either with curl's --resolve option or by adding entries to the hosts file

(Figure: haproxy ingress virtual host verification)
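
A sketch of the same check with curl, reusing the node IP 10.254.100.101 from earlier (either labelled node works):

curl http://news.happylau.cn --resolve news.happylau.cn:80:10.254.100.101
curl http://sports.happylau.cn --resolve sports.happylau.cn:80:10.254.100.101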

4. Inspect the configuration file: the frontend and backend sections of haproxy.cfg have been updated

 1#contents of the /etc/haproxy/haproxy.cfg configuration file
 2backend default_haproxy-1_80    #backend for haproxy-1
 3    mode http
 4    balance roundrobin
 5    acl https-request ssl_fc
 6    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
 7    http-request del-header x-forwarded-for
 8    option forwardfor
 9    http-response set-header Strict-Transport-Security "max-age=15768000"
10    server srv001 10.244.2.168:80 weight 1 check inter 2s
11    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
12    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
13    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
14    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
15    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
16    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
17
18#backend for haproxy-2
19backend default_haproxy-2_80
20    mode http
21    balance roundrobin
22    acl https-request ssl_fc
23    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
24    http-request del-header x-forwarded-for
25    option forwardfor
26    http-response set-header Strict-Transport-Security "max-age=15768000"
27    server srv001 10.244.2.169:80 weight 1 check inter 2s
28    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
29    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
30    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
31    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
32    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
33    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
34
35#the associated host map file
36/ # cat /etc/haproxy/maps/_global_http_front.map 
37# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
38# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
39# #
40# #   HAProxy Ingress Controller
41# #   --------------------------
42# #   This file is automatically updated, do not edit
43# #
44#
45news.happylau.cn/ default_haproxy-1_80
46sports.happylau.cn/ default_haproxy-2_80

2.4 Automatic URL redirection #

haproxy ingress supports automatic redirection, configured through annotations. Setting ingress.kubernetes.io/ssl-redirect (default false) to true enables redirection from http to https; the same setting can also be placed in the ConfigMap to make the redirect the default behaviour. This article uses the annotation to redirect http access to https.
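
For the ConfigMap variant, a minimal sketch is shown below; it assumes the global key carries the same ssl-redirect name as the annotation, and it reuses the haproxy-ingress ConfigMap created by the installation manifest:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
data:
  ssl-redirect: "true"    # assumed global key; enables the http-to-https redirect by default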

1. Define the Ingress rule and set ingress.kubernetes.io/ssl-redirect to enable the redirect

 1apiVersion: extensions/v1beta1
 2kind: Ingress
 3metadata:
 4  name: haproxy-ingress-virtualhost
 5  annotations:
 6    kubernetes.io/ingress.class: haproxy 
 7    ingress.kubernetes.io/ssl-redirect: "true"    #enable the http-to-https redirect (annotation values must be quoted strings)
 8spec:
 9  rules:
10  - host: news.happylau.cn
11    http:
12      paths:
13      - path: /
14        backend:
15          serviceName: haproxy-1
16          servicePort: 80
17  - host: sports.happylau.cn 
18    http:
19      paths:
20      - path: /
21        backend:
22          serviceName: haproxy-2
23          servicePort: 80

Testing the configuration above did not produce the redirect. I could not find further documentation on this in the open-source version, and since the enterprise edition image requires licensed access to download, no further verification was done.
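
A quick way to check whether the redirect is applied (a sketch; a working redirect should return a 302 with a Location header pointing at https):

curl -I http://news.happylau.cn --resolve news.happylau.cn:80:10.254.100.101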

2.5 TLS encryption #

By default, haproxy ingress ships with a built-in self-signed certificate (the default-fake-certificate.pem referenced by the frontend configuration shown earlier). To serve a domain with your own certificate, create a TLS Secret and reference it from the Ingress rule, as the following steps show.

1. Generate a self-signed certificate and private key

 1[root@node-1 haproxy-ingress]#  openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt
 2Generating a 2048 bit RSA private key
 3...........+++
 4.......+++
 5writing new private key to 'tls.key'
 6-----
 7You are about to be asked to enter information that will be incorporated
 8into your certificate request.
 9What you are about to enter is what is called a Distinguished Name or a DN.
10There are quite a few fields but you can leave some blank
11For some fields there will be a default value,
12If you enter '.', the field will be left blank.
13-----
14Country Name (2 letter code) [XX]:CN
15State or Province Name (full name) []:GD
16Locality Name (eg, city) [Default City]:ShenZhen
17Organization Name (eg, company) [Default Company Ltd]:Tencent
18Organizational Unit Name (eg, section) []:HappyLau
19Common Name (eg, your name or your server's hostname) []:www.happylau.cn
20Email Address []:573302346@qq.com

2. Create the Secret, associating the certificate and the private key

 1[root@node-1 haproxy-ingress]# kubectl create secret tls haproxy-tls --cert=tls.crt --key=tls.key 
 2secret/haproxy-tls created
 3
 4[root@node-1 haproxy-ingress]# kubectl describe secrets haproxy-tls 
 5Name:         haproxy-tls
 6Namespace:    default
 7Labels:       <none>
 8Annotations:  <none>
 9
10Type:  kubernetes.io/tls
11
12Data
13====
14tls.crt:  1424 bytes
15tls.key:  1704 bytes

3. Write the Ingress rule, referencing the Secret in the tls section

 1apiVersion: extensions/v1beta1
 2kind: Ingress
 3metadata:
 4  name: haproxy-ingress-virtualhost
 5  annotations:
 6    kubernetes.io/ingress.class: haproxy 
 7spec:
 8  tls:
 9  - hosts:
10    - news.happylau.cn
11    - sports.happylau.cn
12    secretName: haproxy-tls
13  rules:
14  - host: news.happylau.cn
15    http:
16      paths:
17      - path: /
18        backend:
19          serviceName: haproxy-1
20          servicePort: 80
21  - host: sports.happylau.cn 
22    http:
23      paths:
24      - path: /
25        backend:
26          serviceName: haproxy-2
27          servicePort: 80

4. Apply the configuration and inspect the details; the associated certificate appears under TLS

 1[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml 
 2ingress.extensions/haproxy-ingress-virtualhost configured
 3
 4[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost 
 5Name:             haproxy-ingress-virtualhost
 6Namespace:        default
 7Address:          
 8Default backend:  default-http-backend:80 (<none>)
 9TLS:
10  haproxy-tls terminates news.happylau.cn,sports.happylau.cn
11Rules:
12  Host                Path  Backends
13  ----                ----  --------
14  news.happylau.cn    
15                      /   haproxy-1:80 (10.244.2.168:80)
16  sports.happylau.cn  
17                      /   haproxy-2:80 (10.244.2.169:80)
18Annotations:
19  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["news.happylau.cn","sports.happylau.cn"],"secretName":"haproxy-tls"}]}}
20
21  kubernetes.io/ingress.class:  haproxy
22Events:
23  Type    Reason  Age               From                Message
24  ----    ------  ----              ----                -------
25  Normal  CREATE  37m               ingress-controller  Ingress default/haproxy-ingress-virtualhost
26  Normal  CREATE  37m               ingress-controller  Ingress default/haproxy-ingress-virtualhost
27  Normal  UPDATE  7s (x2 over 37m)  ingress-controller  Ingress default/haproxy-ingress-virtualhost
28  Normal  UPDATE  7s (x2 over 37m)  ingress-controller  Ingress default/haproxy-ingress-virtualhost

5. Test access to the https site; the request is now served over https

(Figure: haproxy ingress https test)
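
The same check from the command line (a sketch; -k is required because the certificate is self-signed):

curl -k https://news.happylau.cn --resolve news.happylau.cn:443:10.254.100.101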

Closing remarks #

haproxy-based ingress works by updating the haproxy.cfg configuration and combining it with the service discovery mechanism to complete ingress access dynamically; compared with nginx, haproxy does not need to reload to apply configuration changes. During testing, some feature configurations did not behave as expected; richer functionality is available in the haproxy ingress enterprise edition, while the community edition supports blue-green deployment and WAF security scanning. See the community documentation on haproxy blue-green deployment and WAF support for details.

Community activity around the haproxy ingress controller is currently moderate; it still lags behind nginx, traefik and istio, so the community edition of haproxy ingress is not recommended for production environments.

References #

Official installation guide: https://haproxy-ingress.github.io/docs/getting-started/

HAProxy ingress official configuration reference: https://www.haproxy.com/documentation/hapee/1-7r2/traffic-management/k8s-image-controller/


When your talent cannot yet support your ambition, it is time to settle down and learn.

Back to the kubernetes tutorial series index

If you found this article helpful, please subscribe to the column and share it with friends who may need it 😊

About the author: Liu Haiping (HappyLau), senior cloud computing consultant, currently working on public cloud at Tencent Cloud; previously at KuGou and EasyStack. He has many years of experience in public and private cloud architecture design, operations, and delivery, including large private cloud builds for KuGou, China Southern Power Grid, and Guotai Junan, and is proficient in Linux, Kubernetes, OpenStack, Ceph and other open-source technologies, with extensive hands-on cloud experience and RHCA/OpenStack/Linux teaching experience.

Appendix #

  1#RBAC service account used by the controller
  2apiVersion: v1
  3kind: ServiceAccount
  4metadata:
  5  name: ingress-controller
  6  namespace: ingress-controller
  7---
  8# cluster role: the resources the controller can access and the allowed verbs
  9apiVersion: rbac.authorization.k8s.io/v1beta1
 10kind: ClusterRole
 11metadata:
 12  name: ingress-controller
 13rules:
 14  - apiGroups:
 15      - ""
 16    resources:
 17      - configmaps
 18      - endpoints
 19      - nodes
 20      - pods
 21      - secrets
 22    verbs:
 23      - list
 24      - watch
 25  - apiGroups:
 26      - ""
 27    resources:
 28      - nodes
 29    verbs:
 30      - get
 31  - apiGroups:
 32      - ""
 33    resources:
 34      - services
 35    verbs:
 36      - get
 37      - list
 38      - watch
 39  - apiGroups:
 40      - "extensions"
 41    resources:
 42      - ingresses
 43    verbs:
 44      - get
 45      - list
 46      - watch
 47  - apiGroups:
 48      - ""
 49    resources:
 50      - events
 51    verbs:
 52      - create
 53      - patch
 54  - apiGroups:
 55      - "extensions"
 56    resources:
 57      - ingresses/status
 58    verbs:
 59      - update
 60
 61---
 62#role definition
 63apiVersion: rbac.authorization.k8s.io/v1beta1
 64kind: Role
 65metadata:
 66  name: ingress-controller
 67  namespace: ingress-controller
 68rules:
 69  - apiGroups:
 70      - ""
 71    resources:
 72      - configmaps
 73      - pods
 74      - secrets
 75      - namespaces
 76    verbs:
 77      - get
 78  - apiGroups:
 79      - ""
 80    resources:
 81      - configmaps
 82    verbs:
 83      - get
 84      - update
 85  - apiGroups:
 86      - ""
 87    resources:
 88      - configmaps
 89    verbs:
 90      - create
 91  - apiGroups:
 92      - ""
 93    resources:
 94      - endpoints
 95    verbs:
 96      - get
 97      - create
 98      - update
 99
100---
101#cluster role binding: associates the ServiceAccount with the ClusterRole
102apiVersion: rbac.authorization.k8s.io/v1beta1
103kind: ClusterRoleBinding
104metadata:
105  name: ingress-controller
106roleRef:
107  apiGroup: rbac.authorization.k8s.io
108  kind: ClusterRole
109  name: ingress-controller
110subjects:
111  - kind: ServiceAccount
112    name: ingress-controller
113    namespace: ingress-controller
114  - apiGroup: rbac.authorization.k8s.io
115    kind: User
116    name: ingress-controller
117
118---
119#role binding
120apiVersion: rbac.authorization.k8s.io/v1beta1
121kind: RoleBinding
122metadata:
123  name: ingress-controller
124  namespace: ingress-controller
125roleRef:
126  apiGroup: rbac.authorization.k8s.io
127  kind: Role
128  name: ingress-controller
129subjects:
130  - kind: ServiceAccount
131    name: ingress-controller
132    namespace: ingress-controller
133  - apiGroup: rbac.authorization.k8s.io
134    kind: User
135    name: ingress-controller
136
137---
138#default backend application; haproxy ingress requires this associated backend
139apiVersion: apps/v1
140kind: Deployment
141metadata:
142  labels:
143    run: ingress-default-backend
144  name: ingress-default-backend
145  namespace: ingress-controller
146spec:
147  selector:
148    matchLabels:
149      run: ingress-default-backend
150  template:
151    metadata:
152      labels:
153        run: ingress-default-backend
154    spec:
155      containers:
156      - name: ingress-default-backend
157        image: gcr.io/google_containers/defaultbackend:1.0
158        ports:
159        - containerPort: 8080
160        resources:
161          limits:
162            cpu: 10m
163            memory: 20Mi
164
165---
166#service for the default backend application
167apiVersion: v1
168kind: Service
169metadata:
170  name: ingress-default-backend
171  namespace: ingress-controller
172spec:
173  ports:
174  - port: 8080
175  selector:
176    run: ingress-default-backend
177
178---
179#ConfigMap for customizing the haproxy ingress configuration
180apiVersion: v1
181kind: ConfigMap
182metadata:
183  name: haproxy-ingress
184  namespace: ingress-controller
185
186---
187#the core haproxy ingress DaemonSet
188apiVersion: apps/v1
189kind: DaemonSet
190metadata:
191  labels:
192    run: haproxy-ingress
193  name: haproxy-ingress
194  namespace: ingress-controller
195spec:
196  updateStrategy:
197    type: RollingUpdate
198  selector:
199    matchLabels:
200      run: haproxy-ingress
201  template:
202    metadata:
203      labels:
204        run: haproxy-ingress
205    spec:
206      hostNetwork: true         #hostNetwork mode: the pod uses the host's network
207      nodeSelector:               #node selector: schedule only onto nodes carrying this label
208        role: ingress-controller
209      serviceAccountName: ingress-controller    #service account for RBAC authorization
210      containers:
211      - name: haproxy-ingress
212        image: quay.io/jcmoraisjr/haproxy-ingress
213        args:
214        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
215        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
216        - --sort-backends
217        ports:
218        - name: http
219          containerPort: 80
220        - name: https
221          containerPort: 443
222        - name: stat
223          containerPort: 1936
224        livenessProbe:
225          httpGet:
226            path: /healthz
227            port: 10253
228        env:
229        - name: POD_NAME
230          valueFrom:
231            fieldRef:
232              fieldPath: metadata.name
233        - name: POD_NAMESPACE
234          valueFrom:
235            fieldRef:
236              fieldPath: metadata.namespace
