
Using Ceph Storage with Kubernetes (k8s)

Ceph provides the underlying storage. CephFS supports all three Kubernetes PV access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany), while RBD supports only ReadWriteOnce and ReadOnlyMany.

Dynamic provisioning means PVs are created for you automatically, sized to whatever the claim requests: Kubernetes creates the PV, and creating a PVC simply calls the storage class through the API to find (or provision) a matching volume.

With static provisioning, by contrast, we have to create PVs by hand, and if no existing PV has enough capacity to satisfy a claim, the Pod sits in Pending. Dynamic provisioning is implemented by the StorageClass object: it declares which storage backend to use, connects to it, and creates PVs on demand.
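For comparison, a statically provisioned RBD PV has to be written by hand, roughly like the sketch below. This is a minimal sketch, assuming the kube pool and ceph-user-secret created later in this article; the PV name and RBD image are placeholders, and the image itself would also have to be created manually (e.g. rbd create kube/static-image --size 2048).

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-rbd-pv          # placeholder name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors:
      - 192.168.3.27:6789
    pool: kube                 # pool created in the RBD section below
    image: static-image        # a pre-created RBD image (placeholder)
    user: kube
    secretRef:
      name: ceph-user-secret   # secret created in the RBD section below
    fsType: ext4

Every volume like this must be sized and created ahead of time; the StorageClass approach below removes exactly that bookkeeping.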

Using Ceph RBD as a Persistent Volume
Configure the rbd-provisioner
1. Write the YAML file

[root@k8s-master ~]# cat >external-storage-rbd-provisioner.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: rbd-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "registry.cn-chengdu.aliyuncs.com/ives/rbd-provisioner:v2.0.0-k8s1.11"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF

2. Create the resources

[root@k8s-master ~]# kubectl apply -f external-storage-rbd-provisioner.yaml

[root@k8s-master ~]# kubectl get pods -n kube-system |grep rbd
rbd-provisioner-7c77dcfd67-9xv2m 1/1 Running 0 59s
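If the Pod does not reach Running, the provisioner's own logs are the first place to look; a quick check (kubectl can address the Deployment directly):

[root@k8s-master ~]# kubectl logs -n kube-system deploy/rbd-provisioner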

Configure the StorageClass

When a Pod using such a volume is created, the kubelet runs the rbd command to detect and map the Ceph image behind the PV, so the Ceph client package ceph-common must be installed on every Kubernetes worker node. Also copy Ceph's ceph.client.admin.keyring and ceph.conf into /etc/ceph on the master.

1. Install ceph-common (on all Kubernetes worker nodes)

# yum -y install ceph-common
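One simple way to distribute the config and keyring is to copy them from a Ceph node; a sketch, assuming the hostnames used in this article:

[root@ceph_node1 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@k8s-master:/etc/ceph/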
2. Create an OSD pool, on a Ceph mon or admin node

[root@ceph_node1 ~]# ceph osd pool create kube 8
pool 'kube' created

[root@ceph_node1 ~]# ceph osd pool ls
kube

3. Create the user Kubernetes will use to access Ceph, on a Ceph mon or admin node

[root@ceph_node1 ~]# ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
4. View the keys, on a Ceph mon or admin node

[root@ceph_node1 ~]# ceph auth get-key client.admin
AQCzcPFeYnOoABAATaM1Wt8tMgvYTQjj6YEuVg==

[root@ceph_node1 ~]# ceph auth get-key client.kube
AQC5+vJehk7XIRAAr9mtGFHlUSfT7yQMANeWaw==

5. Create the admin secret, on the Kubernetes master

# Replace CEPH_ADMIN_SECRET with the key obtained from client.admin
[root@k8s-master ~]# export CEPH_ADMIN_SECRET='AQCzcPFeYnOoABAATaM1Wt8tMgvYTQjj6YEuVg=='

[root@k8s-master ~]# kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key=$CEPH_ADMIN_SECRET \
  --namespace=kube-system

6. In the default namespace, create the secret that PVCs will use to access Ceph, on the Kubernetes master

# Replace CEPH_KUBE_SECRET with the key obtained from client.kube
[root@k8s-master ~]# export CEPH_KUBE_SECRET='AQC5+vJehk7XIRAAr9mtGFHlUSfT7yQMANeWaw=='

[root@k8s-master ~]# kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
  --from-literal=key=$CEPH_KUBE_SECRET \
  --namespace=default

7. View the secrets

[root@k8s-master ~]# kubectl get secret ceph-user-secret -o yaml
[root@k8s-master ~]# kubectl get secret ceph-secret -n kube-system -o yaml

8. Define the StorageClass

[root@k8s-master ~]# cat >storageclass-ceph-rdb.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-ceph-rdb
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.3.27:6789,192.168.3.60:6789,192.168.3.95:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
EOF
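Optionally, the class can be marked as the cluster default so that PVCs without an explicit storageClassName also use it; this is not required for the steps below:

[root@k8s-master ~]# kubectl patch storageclass dynamic-ceph-rdb -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'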

9. Create the StorageClass

[root@k8s-master ~]# kubectl apply -f storageclass-ceph-rdb.yaml
10. Verify

[root@k8s-master ~]# kubectl get sc


Testing
1. Create a test PVC

[root@k8s-master ~]# cat >ceph-rdb-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rdb-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dynamic-ceph-rdb
  resources:
    requests:
      storage: 2Gi
EOF

[root@k8s-master ~]# kubectl apply -f ceph-rdb-pvc-test.yaml
persistentvolumeclaim/ceph-rdb-claim created

2. View the PVC and PV

[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ceph-rdb-claim Bound pvc-bd2363f1-a841-46d0-ad54-99267173bc04 2Gi RWO dynamic-ceph-rdb 16s

[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-bd2363f1-a841-46d0-ad54-99267173bc04 2Gi RWO Delete Bound default/ceph-rdb-claim dynamic-ceph-rdb 29s
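On the Ceph side, each dynamically provisioned PV is backed by an RBD image in the kube pool. The image name is generated by the provisioner and will differ per cluster, so <image-name> below is a placeholder:

[root@ceph_node1 ~]# rbd ls -p kube
[root@ceph_node1 ~]# rbd info kube/<image-name>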

3. Write an nginx Pod manifest for testing

[root@k8s-master ~]# cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod1
  labels:
    name: nginx-pod1
spec:
  containers:
    - name: nginx-pod1
      image: nginx:alpine
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: ceph-rdb
          mountPath: /usr/share/nginx/html
  volumes:
    - name: ceph-rdb
      persistentVolumeClaim:
        claimName: ceph-rdb-claim
EOF

4. Create the Pod and check it

[root@k8s-master ~]# kubectl apply -f nginx-pod.yaml
pod/nginx-pod1 created

[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-pod1 1/1 Running 0 2m25s

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-pod1 1/1 Running 0 2m34s 10.244.1.5 k8s-node1 <none> <none>

5. Write a test file into the volume

[root@k8s-master ~]# kubectl exec -it nginx-pod1 -- /bin/sh -c 'echo Hello World from Ceph RBD!!! > /usr/share/nginx/html/index.html'
6. Access test

[root@k8s-master ~]# POD_IP=$(kubectl get pods -o wide |grep nginx-pod1 |awk '{print $(NF-3)}')
[root@k8s-master ~]# curl $POD_IP
Hello World from Ceph RBD!!!
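Because the page now lives on the RBD image rather than inside the container, it should survive Pod deletion. A quick check, reusing the manifest from step 3:

[root@k8s-master ~]# kubectl delete pod nginx-pod1
[root@k8s-master ~]# kubectl apply -f nginx-pod.yaml
[root@k8s-master ~]# kubectl get pods -o wide    # the new Pod may get a different IP

curl against the new Pod IP should return the same page.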

7. Clean up

[root@k8s-master ~]# kubectl delete -f nginx-pod.yaml

[root@k8s-master ~]# kubectl delete -f ceph-rdb-pvc-test.yaml

Using CephFS as a Persistent Volume
Create the CephFS pools on the Ceph side
1. Create two pools, one for data and one for metadata, on a Ceph mon or admin node (this is just a test setup, so each pool gets only 8 PGs)

[root@ceph_node1 ~]# ceph osd pool create fs_data 8
pool 'fs_data' created
[root@ceph_node1 ~]# ceph osd pool create fs_metadata 8
pool 'fs_metadata' created

2. Create a CephFS, on a Ceph mon or admin node

[root@ceph_node1 ~]# ceph fs new cephfs fs_metadata fs_data
new fs with metadata pool 8 and data pool 7

3. Verify

[root@ceph_node1 ~]# ceph fs ls
name: cephfs, metadata pool: fs_metadata, data pools: [fs_data ]
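CephFS will not serve I/O until an MDS is up and active, so it is worth confirming the MDS state as well:

[root@ceph_node1 ~]# ceph mds stat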

Configure the cephfs-provisioner

CephFS has no official dynamic-volume support, so we use the community-maintained cephfs-provisioner.

1. Write the YAML file

[root@k8s-master ~]# cat >external-storage-cephfs-provisioner.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cephfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
        - name: cephfs-provisioner
          image: "registry.cn-chengdu.aliyuncs.com/ives/cephfs-provisioner:latest"
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs
          command:
            - "/usr/local/bin/cephfs-provisioner"
          args:
            - "-id=cephfs-provisioner-1"
      serviceAccount: cephfs-provisioner
EOF

2. Create the resources

[root@k8s-master ~]# kubectl apply -f external-storage-cephfs-provisioner.yaml

[root@k8s-master ~]# kubectl get pods -n kube-system |grep cephfs
cephfs-provisioner-6d76ff6bd5-zzlmt 1/1 Running 0 28s
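The Role and ClusterRole above grant the service account the secrets permissions the provisioner needs; this can be spot-checked with kubectl's impersonation support (the answer should be yes):

[root@k8s-master ~]# kubectl auth can-i create secrets -n kube-system --as=system:serviceaccount:kube-system:cephfs-provisioner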


Configure the StorageClass
1. View the key, on a Ceph mon or admin node

[root@ceph_node1 ~]# ceph auth get-key client.admin
AQCzcPFeYnOoABAATaM1Wt8tMgvYTQjj6YEuVg==
2. Create the admin secret, on the Kubernetes master (if the ceph-secret from the RBD section above already exists, this step can be skipped)

# Replace CEPH_ADMIN_SECRET with the key obtained from client.admin
[root@k8s-master ~]# export CEPH_ADMIN_SECRET='AQCzcPFeYnOoABAATaM1Wt8tMgvYTQjj6YEuVg=='

[root@k8s-master ~]# kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key=$CEPH_ADMIN_SECRET \
  --namespace=kube-system
3. View the secret

[root@k8s-master ~]# kubectl get secret ceph-secret -n kube-system -o yaml
4. Define the StorageClass

[root@k8s-master ~]# cat >storageclass-cephfs.yaml<<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 192.168.3.27:6789,192.168.3.60:6789,192.168.3.95:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "kube-system"
  claimRoot: /volumes/kubernetes
EOF
5. Create the StorageClass

[root@k8s-master ~]# kubectl apply -f storageclass-cephfs.yaml
storageclass.storage.k8s.io/dynamic-cephfs created
6. Verify

[root@k8s-master ~]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
dynamic-cephfs ceph.com/cephfs Delete Immediate false 17s


Testing
1. Create a test PVC

[root@k8s-master ~]# cat >cephfs-pvc-test.yaml<<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi
EOF

[root@k8s-master ~]# kubectl apply -f cephfs-pvc-test.yaml
persistentvolumeclaim/cephfs-claim created

2. View the PVC and PV

[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cephfs-claim Bound pvc-b8194840-2664-418c-bad1-df1a4b028f30 2Gi RWX dynamic-cephfs 3s

[root@k8s-master ~]# kubectl get pv |grep pvc
pvc-b8194840-2664-418c-bad1-df1a4b028f30 2Gi RWX Delete Bound default/cephfs-claim dynamic-cephfs 33s
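The PV records the CephFS path the provisioner carved out under claimRoot; it can be read straight from the object (the PV name here is taken from the output above and will differ in your cluster):

[root@k8s-master ~]# kubectl get pv pvc-b8194840-2664-418c-bad1-df1a4b028f30 -o jsonpath='{.spec.cephfs.path}'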

3. Write an nginx Pod manifest for testing

[root@k8s-master ~]# cat >nginx-pod.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod2
  labels:
    name: nginx-pod2
spec:
  containers:
    - name: nginx-pod2
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: cephfs
          mountPath: /usr/share/nginx/html
  volumes:
    - name: cephfs
      persistentVolumeClaim:
        claimName: cephfs-claim
EOF

4. Create the Pod and check it

[root@k8s-master ~]# kubectl apply -f nginx-pod.yaml
pod/nginx-pod2 created

[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-pod2 1/1 Running 0 16s

[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-pod2 1/1 Running 0 88s 10.244.1.7 k8s-node1 <none> <none>

5. Write a test file into the volume

[root@k8s-master ~]# kubectl exec -it nginx-pod2 -- /bin/sh -c 'echo Hello World from CephFS!!! > /usr/share/nginx/html/index.html'
6. Access test

[root@k8s-master ~]# POD_IP=$(kubectl get pods -o wide |grep nginx-pod2 |awk '{print $(NF-3)}')

[root@k8s-master ~]# curl $POD_IP
Hello World from CephFS!!!
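Since the claim is ReadWriteMany, a second Pod can mount the same PVC at the same time, which CephFS supports and RBD does not. A quick sketch that clones the manifest from step 3 under the hypothetical name nginx-pod3:

[root@k8s-master ~]# sed 's/nginx-pod2/nginx-pod3/' nginx-pod.yaml | kubectl apply -f -
[root@k8s-master ~]# kubectl get pods -o wide |grep nginx-pod3

curl against the new Pod's IP should return the same index.html written above. (Remember to kubectl delete pod nginx-pod3 before the cleanup step.)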
7. Clean up

[root@k8s-master ~]# kubectl delete -f nginx-pod.yaml

[root@k8s-master ~]# kubectl delete -f cephfs-pvc-test.yaml
