1. Create the pool (we mainly use a StorageClass to mount persistent volumes; the other mount methods are hard to use and too cumbersome):
ceph osd pool create k8s 64
tee /etc/yum.repos.d/ceph.repo <<-'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
EOF
Install dependencies (EPEL):
yum install -y yum-utils && \
yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && \
yum install --nogpgcheck -y epel-release && \
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && \
rm -f /etc/yum.repos.d/dl.fedoraproject.org*
Install Ceph:
1. Install:
yum -y install ceph-common
ceph --version
2. Place the Ceph configuration file ceph.conf in the /etc/ceph directory on all nodes:
scp ceph.conf root@192.168.73.64:/etc/ceph
scp ceph.conf root@192.168.73.65:/etc/ceph
scp ceph.conf root@192.168.73.66:/etc/ceph
3. Place the Ceph cluster's ceph.client.admin.keyring file in the /etc/ceph directory on the k8s control node:
scp ceph.client.admin.keyring root@192.168.73.66:/etc/ceph
4. Generate the encrypted key:
grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
5. Create the Ceph secret:
cat ceph-secret.yaml
**********************
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
kubectl create -f ceph-secret.yaml
kubectl get secret
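The key: value in the Secret above is simply the output of the step 4 pipeline. A self-contained sketch, run against a sample keyring file (the path is illustrative; the key is the one already published in ceph-secret.yaml above, used here as test data):

```shell
# Write a sample keyring in the same format as /etc/ceph/ceph.client.admin.keyring
cat > /tmp/sample.keyring <<'EOF'
[client.admin]
    key = AQCM9W9aN2OHGxAAvrR1ctbCHZheKfrF47kY9A==
EOF
# Same pipeline as step 4: grep the key line, keep its last field,
# and base64-encode it without a trailing newline
ENCODED=$(grep key /tmp/sample.keyring | awk '{printf "%s", $NF}' | base64)
echo "$ENCODED"   # matches the key: field in ceph-secret.yaml
```

Note the `printf "%s"` in the awk stage: it suppresses the newline, so the base64 output encodes only the key itself.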
6. Create the StorageClass:
cat ceph-class.yaml
**********************
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-web
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.78.101:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s
  userId: admin
  userSecretName: ceph-secret
kubectl create -f ceph-class.yaml
7. Create the PersistentVolumeClaim:
*****************************
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-web
  resources:
    requests:
      storage: 100G
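One caveat: with the kubernetes.io/rbd provisioner, adminSecretNamespace only applies to the admin secret, while userSecretName is looked up in the namespace of the PVC. Since this claim lives in kube-system but ceph-secret was created in default, the Secret likely needs a copy there too (a minimal sketch, duplicating the Secret from step 5):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: QVFDTTlXOWFOMk9IR3hBQXZyUjFjdGJDSFpoZUtmckY0N2tZOUE9PQ==
```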
8. Create the pod:
cat ceph-pod.yaml
*******************
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-rbd-vol1
      mountPath: /mnt/ceph-rbd-pvc/busybox
      readOnly: false
  volumes:
  - name: ceph-rbd-vol1
    persistentVolumeClaim:
      claimName: grafana
kubectl get pod
kubectl describe pod ceph-pod1
Only a StatefulSet can use volumeClaimTemplates:
volumeClaimTemplates:
- metadata:
    name: rabbitmq-run
    annotations:
      volume.beta.kubernetes.io/storage-class: "ceph-web"
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 50Gi
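The volumeClaimTemplates fragment above only makes sense inside a StatefulSet spec. A minimal sketch of where it sits, assuming an illustrative rabbitmq StatefulSet (names, image, and mount path are examples, not from the original):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3
        volumeMounts:
        # Mounts the PVC stamped out from the template below;
        # each replica gets its own claim (rabbitmq-run-rabbitmq-0, ...)
        - name: rabbitmq-run
          mountPath: /var/lib/rabbitmq
  volumeClaimTemplates:
  - metadata:
      name: rabbitmq-run
      annotations:
        volume.beta.kubernetes.io/storage-class: "ceph-web"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
```

Unlike the standalone PVC in step 7, these claims are created per replica and survive pod rescheduling, which is why only StatefulSets support the template form.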