Installing the Latest Kubernetes 1.15 with kubeadm
Introduction: kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and each release adjusts some of kubeadm's cluster-configuration practices, so experimenting with kubeadm is a good way to learn the Kubernetes project's latest best practices for cluster configuration.
Author: 青蛙小白
Original post: https://blog.frognew.com/2019/07/kubeadm-install-kubernetes-1.15.html
In the recently released Kubernetes 1.15, kubeadm's support for configuring HA clusters has reached beta, which shows that kubeadm is getting ever closer to being production-ready.
1. Preparation
1.1 System configuration
Before installing, do the following preparation. The two CentOS 7.6 hosts are:
cat /etc/hosts
192.168.99.11 node1
192.168.99.12 node2
If the firewall is enabled on the hosts, the ports required by the Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, the firewall is disabled on each node here:
systemctl stop firewalld
systemctl disable firewalld
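If you would rather keep firewalld running, a hedged sketch of opening the ports from the "Check required ports" list looks like the following; your CNI plugin may need extra ports (for example flannel's VXLAN traffic on 8472/udp):
# on the master (node1)
firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd server client API
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, kube-scheduler, kube-controller-manager
# on worker nodes (node2)
firewall-cmd --permanent --add-port=10250/tcp         # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort Services
firewall-cmd --reload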
Disable SELinux:
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
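If you prefer a non-interactive change over editing the file in vi, a one-line sketch (assuming the standard SELINUX=... line is present in /etc/selinux/config) is:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
getenforce   # still reports Permissive until the next reboot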
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
1.2 Prerequisites for enabling IPVS in kube-proxy
Since IPVS has been merged into the mainline kernel, enabling IPVS for kube-proxy first requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following script on all Kubernetes nodes (node1 and node2):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules were loaded correctly.
Next, make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm (yum install ipvsadm).
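On CentOS both packages can be installed in one step:
yum install -y ipset ipvsadm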
If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables IPVS mode.
1.3 Installing Docker
Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.
Set up the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
List the available Docker versions:
yum list docker-ce.x86_64 --showduplicates |sort -r
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
The Docker versions currently supported by Kubernetes 1.15 are 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09. Here Docker 18.09.7 is installed on each node:
yum makecache fast
yum install -y --setopt=obsoletes=0 \
docker-ce-18.09.7-3.el7
systemctl start docker
systemctl enable docker
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
    0     0 DOCKER-USER  all  --  *       *        0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *       *        0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *        docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *        docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0  !docker0 0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0  docker0  0.0.0.0/0            0.0.0.0/0
1.4 Changing the Docker cgroup driver to systemd
According to the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so here the Docker cgroup driver on each node is changed to systemd.
Create or edit /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Restart Docker:
systemctl restart docker
docker info | grep Cgroup
Cgroup Driver: systemd
2. Deploying Kubernetes with kubeadm
2.1 Installing kubeadm and kubelet
Install kubeadm and kubelet on each node:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy or an accessible mirror.
curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
yum makecache fast
yum install -y kubelet kubeadm kubectl
...
Installed:
  kubeadm.x86_64 0:1.15.0-0      kubectl.x86_64 0:1.15.0-0      kubelet.x86_64 0:1.15.0-0
Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7        cri-tools.x86_64 0:1.12.0-0        kubernetes-cni.x86_64 0:0.7.5-0        libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7        libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
The installation output shows that the dependencies cri-tools, kubernetes-cni and socat were also pulled in:
starting with Kubernetes 1.14, the CNI dependency was bumped to version 0.7.5
socat is a dependency of the kubelet
cri-tools is the command-line tool for the CRI (Container Runtime Interface)
Running kubelet --help shows that most of the kubelet's command-line flags are now DEPRECATED, for example:
......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......
Instead, the recommended approach is to use --config to point the kubelet at a configuration file and set these options there; see Set Kubelet parameters via a config file for details. This is also how Kubernetes supports Dynamic Kubelet Configuration, as described in Reconfigure a Node's Kubelet in a Live Cluster.
The kubelet configuration file must be in JSON or YAML format; see the documentation for the specifics.
Since Kubernetes 1.8, swap must be turned off on the system; otherwise, with the default configuration, the kubelet will not start. Disable swap as follows:
swapoff -a
Edit /etc/fstab and comment out the swap mount entry, then confirm with free -m that swap is off. Also tune swappiness by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
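A hedged sketch of doing the swap steps non-interactively (the sed pattern assumes a conventional swap entry in /etc/fstab; double-check the file afterwards):
swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab   # comment out the swap mount entry
free -m                                # the Swap: line should now show 0 total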
Because the two hosts used for this test also run other services, and turning off swap could affect them, we instead relax the kubelet's restriction. Use the kubelet start parameter --fail-swap-on=false to drop the requirement that swap must be off, by adding the following to /etc/sysconfig/kubelet:
KUBELET_EXTRA_ARGS=--fail-swap-on=false
2.2 Initializing the cluster with kubeadm init
Enable the kubelet service on each node so it starts at boot:
systemctl enable kubelet.service
kubeadm config print init-defaults prints the default configuration used for cluster initialization:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
The default configuration shows that imageRepository can be used to customize where the images needed for cluster initialization are pulled from. Based on these defaults, create the configuration file kubeadm.yaml used to initialize the cluster here:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.99.11
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  podSubnet: 10.244.0.0/16
A cluster initialized with kubeadm's defaults taints the master node with node-role.kubernetes.io/master:NoSchedule, which prevents the master from running workloads. Since this test environment has only two nodes, the taint is changed to node-role.kubernetes.io/master:PreferNoSchedule.
Before initializing the cluster, you can run kubeadm config images pull on each node to pre-pull the Docker images Kubernetes needs.
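With a kubeadm configuration file in use, the images can be listed and pre-pulled against that same file so the versions match, for example:
kubeadm config images list --config kubeadm.yaml   # show which images will be used
kubeadm config images pull --config kubeadm.yaml   # pre-pull them on this node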
Now initialize the cluster with kubeadm. node1 is the master node; run the following on node1:
kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap

[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.99.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.004907 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: 4qcl2f.gtl3h8e5kjltuo0r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
The full initialization output is recorded above, and it shows essentially all of the key steps involved in setting up a Kubernetes cluster by hand. The important parts are:
[kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
[certs] generates the various certificates
[kubeconfig] generates the kubeconfig files
[control-plane] creates the static Pods for the apiserver, controller-manager and scheduler from the YAML files in the /etc/kubernetes/manifests directory
[bootstrap-token] generates a token; record it, since it is needed later when adding nodes to the cluster with kubeadm join
The following commands configure how a regular user uses kubectl to access the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Finally, the output gives the command for adding nodes to the cluster: kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
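Note that the bootstrap token printed above expires after 24 hours (the ttl in the default configuration). If a node is added later and the token has expired, a fresh join command can be generated on the master with:
kubeadm token create --print-join-command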
Check the cluster status and confirm that every component is healthy:
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
If cluster initialization runs into problems, the following commands clean things up so you can start over:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
2.3 Installing the Pod Network
Next, install the flannel network add-on:
mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
(output trimmed: kubectl apply reports each resource defined in kube-flannel.yml as created)
Note that the flannel image referenced in kube-flannel.yml is 0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.
If a node has multiple network interfaces, then per flannel issue 39701 you currently need to use the --iface parameter in kube-flannel.yml to specify the name of the host's internal NIC; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......
Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.
kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
(pod names were lost in the captured output; the coredns, etcd-node1, kube-apiserver-node1, kube-controller-manager-node1, kube-scheduler-node1, kube-proxy and kube-flannel Pods all show 1/1 Running)
2.4 Testing that cluster DNS works
kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$
Once inside, run nslookup kubernetes.default and confirm that the name resolves correctly:
nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
2.5 Adding nodes to the Kubernetes cluster
Next, add the host node2 to the Kubernetes cluster. On node2 run:
kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
--discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e \
--ignore-preflight-errors=Swap

[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
node2 joined the cluster without a hitch. Now run the following on the master node to list the cluster's nodes:
kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   57m   v1.15.0
node2   Ready    <none>   11s   v1.15.0
2.5.1 Removing a node from the cluster
To remove node2 from the cluster, run the following commands.
On the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
On node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
On node1:
kubectl delete node node2
2.6 Enabling IPVS in kube-proxy
Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
kubectl edit cm kube-proxy -n kube-system
Then restart the kube-proxy Pods on each node:
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg   1/1     Running   0          3s
kube-proxy-k8vhm   1/1     Running   0          9s
kubectl logs kube-proxy-7fsrg -n kube-system
I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
W0703 04:42:33.309074       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0703 04:42:33.309831       1 server.go:534] Version: v1.15.0
I0703 04:42:33.320088       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0703 04:42:33.320365       1 config.go:96] Starting endpoints config controller
I0703 04:42:33.320393       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0703 04:42:33.320455       1 config.go:187] Starting service config controller
I0703 04:42:33.320470       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0703 04:42:33.420899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0703 04:42:33.420969       1 controller_utils.go:1036] Caches are synced for service config controller
The log line "Using ipvs Proxier" confirms that IPVS mode is enabled.
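Since ipvsadm was installed in section 1.2, the rules kube-proxy programs can also be inspected directly; each Service's ClusterIP and port should appear as a virtual server with its endpoints listed as real servers (rr scheduler by default):
ipvsadm -ln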
3. Deploying common Kubernetes components
More and more companies and teams are using Helm, the Kubernetes package manager, so Helm is also used here to install the common Kubernetes components.
3.1 Installing Helm
Helm consists of the helm command-line client and the server-side tiller. Installing it is straightforward. Download the helm CLI to /usr/local/bin on the master node node1; version 2.14.1 is used here:
curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
To install the server-side tiller, this machine also needs kubectl and a kubeconfig file configured, so that kubectl can reach the apiserver and work normally; node1 already has kubectl set up.
Because the Kubernetes API server has RBAC enabled, a service account for tiller must be created and given suitable roles; see Role-based Access Control in the Helm documentation for details. For simplicity, the built-in cluster-admin ClusterRole is bound to it directly. Create helm-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Next, deploy tiller with helm init:
helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
tiller is deployed into the kube-system namespace of the cluster by default:
kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv 1/1 Running 0 83s
helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Note that tiller needs network access to gcr.io and kubernetes-charts.storage.googleapis.com; if that is not possible, point helm at a tiller image in a private registry with helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.14.1 --skip-refresh.
Finally, on node1, switch the helm chart repository to the mirror hosted on Azure:
helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories
helm repo list
NAME URL
stable   http://mirror.azure.cn/kubernetes/charts
local    http://127.0.0.1:8879/charts
3.2 Deploying Nginx Ingress with Helm
To expose services in the cluster to the outside, an Ingress is needed. Next we use Helm to deploy Nginx Ingress on Kubernetes. The Nginx Ingress Controller is deployed on the cluster's edge node; for more on highly available Kubernetes edge nodes, see the earlier write-up on bare-metal Kubernetes Ingress edge node HA. The Ingress Controller uses hostNetwork.
Use node1 (192.168.99.11) as the edge node and label it:
kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl get node
NAME    STATUS   ROLES         AGE    VERSION
node1   Ready    edge,master   138m   v1.15.0
node2   Ready    <none>        82m    v1.15.0
The values file ingress-nginx.yaml for the stable/nginx-ingress chart is as follows:
controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
The nginx ingress controller replicaCount is 1, and it will be scheduled onto the edge node node1. No externalIPs are specified for the nginx ingress controller Service; instead, hostNetwork: true makes the controller use the host network.
helm repo update
helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx \
-f ingress-nginx.yaml
kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-cc9b6d55b-pr8vr        1/1     Running   0          10m   192.168.99.11   node1   <none>           <none>
nginx-ingress-default-backend-cc888fd56-bf4h2 1/1 Running 0 10m 10.244.0.14 node1 <none> <none>
If visiting http://192.168.99.11 returns the default backend, the deployment is complete.
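A quick check from any machine that can reach 192.168.99.11 (the 404 body shown in the comment is the typical response of the default backend):
curl http://192.168.99.11
# expected: default backend - 404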
3.3 Deploying the dashboard with Helm
kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
      - k8s.frognew.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true
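Note that the values file above references a TLS secret named frognew-com-tls-secret for k8s.frognew.com. It must exist in the same namespace as the dashboard (kube-system here) before the Ingress can serve HTTPS; a sketch, assuming you already have a certificate and key file for the domain (the file names below are placeholders):
kubectl -n kube-system create secret tls frognew-com-tls-secret \
  --cert=k8s.frognew.com.crt \
  --key=k8s.frognew.com.key
Then install the chart with the values file: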
helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s   kubernetes.io/service-account-token   3      3m7s
kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name: kubernetes-dashboard-token-pkm2s
Namespace: kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wa20ycyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmMDc4MWRkLTE1NmEtMTFlOS1iMGYwLTA4MDAyN2JiN2M0MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.24ad6ZgZMxdydpwlmYAiMxZ9VSIN7dDR7Q6-RLW0qC81ajXoQKHAyrEGpIonfld3gqbE0xO8nisskpmlkQra72-9X6sBPoByqIKyTsO83BQlME2sfOJemWD0HqzwSCjvSQa0x-bUlq9HgH2vEXzpFuSS6Svi7RbfzLXlEuggNoC4MfA4E2hF1OX_ml8iAKx-49y1BQQe5FGWyCyBSi1TD_-ZpVs44H5gIvsGK2kcvi0JT4oHXtWjjQBKLIWL7xxyRCSE4HmUZT2StIHnOwlX7IEIB0oBX4mPg2_xNGnqwcu-8OERU9IoqAAE2cZa0v3b5O2LMcJPrcxrVOukvRIumA
Use the token above to log in on the dashboard's login screen.
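The token can also be extracted non-interactively, which is handy for scripting; a sketch that relies on the secret name pattern shown above:
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d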
3.4 Deploying metrics-server with Helm
Heapster's GitHub page (https://github.com/kubernetes/heapster) shows that Heapster has been DEPRECATED; see its deprecation timeline. Heapster was removed from the various Kubernetes installation scripts starting with Kubernetes 1.12.
Kubernetes recommends metrics-server instead. We also deploy metrics-server with Helm here.
metrics-server.yaml:
args:
- --logtostderr
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
helm install stable/metrics-server \
-n metrics-server \
--namespace kube-system \
-f metrics-server.yaml
Basic metrics for the cluster nodes and Pods can now be retrieved with the following commands:
kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%

kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-5c98db65d4-dr8lf 8m 7Mi
coredns-5c98db65d4-lp8dg 6m 8Mi
etcd-node1 44m 46Mi
kube-apiserver-node1 74m 295Mi
kube-controller-manager-node1 35m 50Mi
kube-flannel-ds-amd64-7lwm9 2m 8Mi
kube-flannel-ds-amd64-mm296 5m 9Mi
kube-proxy-7fsrg 1m 11Mi
kube-proxy-k8vhm 3m 11Mi
kube-scheduler-node1 8m 15Mi
kubernetes-dashboard-848b8dd798-c4sc2 2m 14Mi
metrics-server-8456fb6676-fwh2t 10m 19Mi
tiller-deploy-7bf78cdbf7-9q94c 1m 16Mi
Unfortunately, the Kubernetes Dashboard does not yet support metrics-server, so if metrics-server replaces Heapster, the dashboard can no longer graph Pod memory and CPU usage. In practice this matters little here, since we monitor the Pods in the cluster with Prometheus and Grafana anyway. There is plenty of discussion about this on the Dashboard GitHub repo, for example https://github.com/kubernetes/dashboard/issues/2986, and the Dashboard plans to support metrics-server at some point. Since metrics-server and the metrics pipeline are clearly the future of Kubernetes monitoring, metrics-server is the recommended choice.
4. Summary
Docker images involved in this installation:
# network and dns
quay.io/coreos/flannel:v0.11.0-amd64
k8s.gcr.io/coredns:1.3.1

# helm and tiller
gcr.io/kubernetes-helm/tiller:v2.14.1

# nginx ingress
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
k8s.gcr.io/defaultbackend:1.5

# dashboard and metrics-server
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
gcr.io/google_containers/metrics-server-amd64:v0.3.2
References:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://docs.docker.com/engine/installation/linux/docker-ce/centos/
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2