Configuring an Etcd Cluster with TLS Certificates


I read in some technical document about deploying the Kubernetes master and etcd separately: since all of the cluster state lives in etcd, if the Kubernetes master goes down, functions that go through the API Server (such as scaling) become unavailable, but the Pods that are already deployed keep running.

With that in mind, I first deployed a three-node etcd cluster via yum, configuring it by editing etcd.conf. For enterprise use, however, even when access stays inside the LAN you usually still need TLS certificates, much like many government departments have to configure SSL in WebLogic to meet Level 3 classified protection (三级等保) requirements. My attempts to enable TLS by editing the conf file in that environment failed with all sorts of startup errors, yet with the same certificates it worked once I switched to command-line flags and a manually installed etcd. Here are my notes:

  • Install cfssl

    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

    chmod +x cfssl_linux-amd64 cfssljson_linux-amd64

    mv cfssl_linux-amd64 /usr/local/bin/cfssl

    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
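If the binaries landed in the right place, cfssl can report its version as a quick sanity check:

# cfssl version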

  • Create the CA

    mkdir /root/ssl

    cd /root/ssl

    cfssl print-defaults config > ca-config.json

    cfssl print-defaults csr > ca-csr.json

Edit ca-config.json:

[root@etc0 ssl]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "8760h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

server auth means that a client can use this CA to verify the certificate presented by the server.

client auth means that the server can use this CA to verify the certificate presented by the client.

Create the certificate signing request:

[root@etc0 ssl]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "CA",
      "ST": "San Francisco",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the CA certificate and private key:

# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# ls ca*

ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
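Optionally, cfssl can decode the generated CA certificate so you can check its subject and validity period:

# cfssl certinfo -cert ca.pem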

  • Create the kubernetes certificate

    [root@etc0 ssl]# cat kubernetes-csr.json
    {
      "CN": "kubernetes",
      "hosts": [
        "127.0.0.1",
        "192.168.0.102",
        "192.168.0.103",
        "192.168.0.104",
        "192.168.0.105",
        "192.168.0.106",
        "10.254.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }

As you can see, this certificate includes all of the etcd cluster IPs, all of the Kubernetes master IPs, and the Kubernetes service IP (10.254.0.1), so they can all share the same certificate and key.

Generate the Kubernetes certificate and key:

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# ls kubernetes*

kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
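To double-check that all of the IPs and DNS names listed in kubernetes-csr.json actually ended up in the certificate as Subject Alternative Names, openssl (installed by default on CentOS) can print them:

# openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"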

  • Distribute the certificate files

Run the following on every etcd machine:

# mkdir -p /etc/kubernetes/ssl

# cp *.pem /etc/kubernetes/ssl
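The pem files only exist on etc0, where they were generated, so after creating the directory on the other two nodes they still have to be copied over. A minimal sketch, run from /root/ssl on etc0 and assuming root SSH access between the nodes:

# scp ca.pem kubernetes.pem kubernetes-key.pem root@192.168.0.103:/etc/kubernetes/ssl/
# scp ca.pem kubernetes.pem kubernetes-key.pem root@192.168.0.104:/etc/kubernetes/ssl/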

Etcd cluster configuration

Add the node addresses to /etc/hosts:

[root@etc0 ssl]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.102 etc0
192.168.0.103 etc1
192.168.0.104 etc2
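A quick check on each node that the names resolve as expected:

# getent hosts etc0 etc1 etc2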

  • Download and install etcd manually

Go to https://github.com/coreos/etcd/releases; I downloaded version 3.2.9:

https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz

Run on every machine (create /root/local/bin first, otherwise the mv fails):

tar -xvf etcd-v3.2.9-linux-amd64.tar.gz

mkdir -p /root/local/bin

mv etcd-v3.2.9-linux-amd64/etcd* /root/local/bin

mkdir -p /var/lib/etcd
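Verify that the binaries run and report the expected version:

# /root/local/bin/etcd --version
# /root/local/bin/etcdctl --version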

Create an etcd.service file with the following content (on each etcd node, change the IP addresses and the name accordingly):

[root@etc0 ssl]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /root/local/bin/etcd --name=etc0 --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --trusted-ca-file=/etc/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem --initial-advertise-peer-urls=https://192.168.0.102:2380 --listen-peer-urls=https://192.168.0.102:2380 --listen-client-urls=https://192.168.0.102:2379,https://127.0.0.1:2379 --advertise-client-urls=https://192.168.0.102:2379 --initial-cluster-token=etcd-cluster-0 --initial-cluster=\"etc0=https://192.168.0.102:2380,etc1=https://192.168.0.103:2380,etc2=https://192.168.0.104:2380\" --initial-cluster-state=new --data-dir=/var/lib/etcd"
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
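For reference, the unit already points at EnvironmentFile=-/etc/etcd/etcd.conf, and etcd accepts every flag above as an ETCD_* environment variable, so the conf-file variant I originally attempted would presumably look roughly like this for the etc0 node (a sketch only; this is exactly the path that failed in my environment):

# cat /etc/etcd/etcd.conf   (sketch, untested)
ETCD_NAME="etc0"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.102:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.102:2379,https://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.102:2379"
ETCD_INITIAL_CLUSTER="etc0=https://192.168.0.102:2380,etc1=https://192.168.0.103:2380,etc2=https://192.168.0.104:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-0"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/kubernetes/ssl/kubernetes.pem"
ETCD_KEY_FILE="/etc/kubernetes/ssl/kubernetes-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/kubernetes/ssl/kubernetes.pem"
ETCD_PEER_KEY_FILE="/etc/kubernetes/ssl/kubernetes-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"

With a file like this in place, the ExecStart= line would be reduced to just /root/local/bin/etcd so that the environment variables and command-line flags do not overlap.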

Start the etcd service on every etcd node:

# mv etcd.service /etc/systemd/system/

# systemctl daemon-reload

# systemctl enable etcd

# systemctl start etcd

# systemctl status etcd

[root@etc1 bin]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/etc/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2017-10-19 18:21:28 CST; 1h 20min ago
Docs: https://github.com/coreos
Main PID: 9178 (etcd)
CGroup: /system.slice/etcd.service
└─9178 /root/local/bin/etcd --name=etc1 --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem...
Oct 19 18:22:19 etc1 etcd[9178]: lost the TCP streaming connection with peer 70df68d0b37fcd43 (stream MsgApp v2 reader)
Oct 19 18:22:20 etc1 etcd[9178]: failed to dial 70df68d0b37fcd43 on stream Message (dial tcp 192.168.0.104:2380: getsockopt: connection refused)
Oct 19 18:22:20 etc1 etcd[9178]: peer 70df68d0b37fcd43 became inactive
Oct 19 18:22:21 etc1 etcd[9178]: peer 70df68d0b37fcd43 became active
Oct 19 18:22:21 etc1 etcd[9178]: closed an existing TCP streaming connection with peer 70df68d0b37fcd43 (stream MsgApp v2 writer)
Oct 19 18:22:21 etc1 etcd[9178]: established a TCP streaming connection with peer 70df68d0b37fcd43 (stream MsgApp v2 writer)
Oct 19 18:22:21 etc1 etcd[9178]: closed an existing TCP streaming connection with peer 70df68d0b37fcd43 (stream Message writer)
Oct 19 18:22:21 etc1 etcd[9178]: established a TCP streaming connection with peer 70df68d0b37fcd43 (stream Message writer)
Oct 19 18:22:21 etc1 etcd[9178]: established a TCP streaming connection with peer 70df68d0b37fcd43 (stream Message reader)
Oct 19 18:22:21 etc1 etcd[9178]: established a TCP streaming connection with peer 70df68d0b37fcd43 (stream MsgApp v2 reader)

If anything goes wrong, journalctl -xe shows the startup details.
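Scoping the journal to the etcd unit keeps the output readable, for example:

# journalctl -u etcd --no-pager -n 50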

  • Verify the service

    [root@etc1 bin]# ./etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem --endpoints=https://192.168.0.102:2379,https://192.168.0.103:2379,https://192.168.0.104:2379 cluster-health
    member 4701789a3a673ef5 is healthy: got healthy result from https://192.168.0.103:2379
    member 70df68d0b37fcd43 is healthy: got healthy result from https://192.168.0.104:2379
    member 90262c9df511cc4d is healthy: got healthy result from https://192.168.0.102:2379
    cluster is healthy

If you access it without the certificates, the error is:

[root@etc1 bin]# ./etcdctl  --endpoints=https://192.168.0.102:2379,https://192.168.0.103:2379,https://192.168.0.104:2379 cluster-health
cluster may be unhealthy: failed to list members
Error: client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate signed by unknown authority
; error #1: x509: certificate signed by unknown authority
; error #2: x509: certificate signed by unknown authority
error #0: x509: certificate signed by unknown authority
error #1: x509: certificate signed by unknown authority
error #2: x509: certificate signed by unknown authority
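etcd 3.2 also ships the v3 API in the same etcdctl binary. The equivalent health check there should look roughly like the following; note that the TLS flag names change from --ca-file/--cert-file/--key-file to --cacert/--cert/--key:

# ETCDCTL_API=3 /root/local/bin/etcdctl --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/kubernetes.pem --key=/etc/kubernetes/ssl/kubernetes-key.pem --endpoints=https://192.168.0.102:2379,https://192.168.0.103:2379,https://192.168.0.104:2379 endpoint health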

  • The remaining question:

Why does the same TLS configuration fail when it is done by editing the conf file by hand?


Original post: https://www.cnblogs.com/ericnie/p/7694592.html
