Rook Ceph Deployment
2019-05-06 06:51:53
louyj
Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments. Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling, and orchestration platform to perform its duties.

This guide walks you through the basic setup of a Ceph cluster and enables you to consume block, object, and file storage from other pods running in your cluster. Rook supports Kubernetes v1.8 or higher.

# Deployment rook ceph

## Fetch Source

```
git clone https://github.com/rook/rook.git
```

## Deploy the Rook Operator

The first step is to deploy the Rook system components, which include the Rook agent running on each node in your cluster as well as the Rook operator pod.

```
cd cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml
```

Verify that the rook-ceph-operator, rook-ceph-agent, and rook-discover pods are in the `Running` state before proceeding:

```
kubectl -n rook-ceph-system get pod
```

## Create a Rook Cluster

Now that the Rook operator, agent, and discover pods are running, we can create the Rook cluster. For the cluster to survive reboots, make sure you set the `dataDirHostPath` property to a path that is valid for your hosts. For more settings, see the documentation on configuring the cluster.

Change the data directory:

```
sed -i 's#/var/lib/rook#/data#g' cluster.yaml
```

Create the cluster:

```
kubectl create -f cluster.yaml
```

Use `kubectl` to list pods in the `rook-ceph` namespace. You should be able to see the following pods once they are all running. The number of osd pods will depend on the number of nodes in the cluster and the number of devices and directories configured.

```
kubectl -n rook-ceph get pod
```

# Ceph Dashboard

Ceph has a dashboard in which you can view the status of your cluster. The dashboard is a very helpful tool that gives you an overview of the state of your cluster, including overall health, the status of the mon quorum, the status of the mgr, osd, and other Ceph daemons, pool and PG status, logs for the daemons, and more. Rook makes it simple to enable the dashboard.

The dashboard can be enabled with settings in the cluster CRD. The cluster CRD must have the dashboard `enabled` setting set to `true`. This is the default setting in the example manifests.

```
spec:
  dashboard:
    enabled: true
```

A K8s service will be created to expose that port inside the cluster. The ports enabled by Rook will depend on the version of Ceph that is running. This example shows that `port 8443` was configured for Mimic or newer:

```
kubectl -n rook-ceph get service
```

The `rook-ceph-mgr` service reports the Prometheus metrics, while the `rook-ceph-mgr-dashboard` service serves the dashboard.
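If you just want to confirm which port the dashboard listens on inside the cluster, a small optional check like the sketch below (not part of the original walkthrough) can help; it only relies on the `rook-ceph-mgr-dashboard` service name mentioned above and should print 8443 for Mimic or newer, or 7000 for Luminous.

```
# Optional check: print the in-cluster port of the dashboard service
kubectl -n rook-ceph get service rook-ceph-mgr-dashboard -o jsonpath='{.spec.ports[0].port}'
```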
## Credentials

After you connect to the dashboard you will need to log in for secure access. Rook creates a default user named `admin` and generates a secret called `rook-ceph-dashboard-password` in the namespace where Rook is running. To retrieve the generated password, you can run the following:

```
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode
```

## Viewing the Dashboard External to the Cluster

There are several ways to expose a service, depending on the environment you are running in. You can use an Ingress controller or other methods for exposing services such as NodePort, LoadBalancer, or ExternalIPs. The simplest way to expose the service in minikube or a similar environment is to use a NodePort to open a port on the VM that can be accessed by the host.

To create a service with a NodePort, save the following yaml as `dashboard-external-https.yaml`. (For Luminous you will need to set the port and targetPort to 7000 and connect via http.)

via https:

```
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
```

via http (save as `dashboard-external-http.yaml`):

```
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-http
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 7000
    protocol: TCP
    targetPort: 7000
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
```

Now create the service:

```
kubectl create -f dashboard-external-https.yaml
```

You will see the new service `rook-ceph-mgr-dashboard-external-https` created:

```
kubectl -n rook-ceph get service
```

In this example, port 31176 will be opened to expose port 8443 from the ceph-mgr pod. Find the IP address of the VM; if using minikube, you can run `minikube ip` to find it. Now you can enter a URL such as https://192.168.99.110:31176 in your browser and the dashboard will appear.

# Storage

## Block Storage

Block storage allows you to mount storage to a single pod.

### Provision Storage

Before Rook can start provisioning storage, a StorageClass and its storage pool need to be created. This is needed for Kubernetes to interoperate with Rook for provisioning persistent volumes.

Save this storage class definition as `storageclass.yaml`:

```
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exists
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
```

Create the storage class:

```
kubectl create -f storageclass.yaml
```
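Before consuming the class, it can be worth confirming that both objects were actually created. The commands below are a small optional check (not in the original walkthrough) using only the names defined above:

```
# Optional check: confirm the pool CRD and the storage class exist
kubectl -n rook-ceph get cephblockpool replicapool
kubectl get storageclass rook-ceph-block
```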
### Consume the storage: PersistentVolumeClaim

We create a sample app to consume the block storage provisioned by Rook with the classic wordpress and mysql apps. Both of these apps will make use of block volumes provisioned by Rook. Save the mysql spec below as `mysql.yaml` (the Rook examples include a companion `wordpress.yaml` that consumes its volume the same way):

```
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
```
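Assuming you saved the spec as `mysql.yaml`, something like the following (a sketch, not shown in the original post) would apply it and confirm that Rook bound a volume to the claim:

```
# Apply the mysql spec and check that the claim gets Bound to a Rook-provisioned PV
kubectl create -f mysql.yaml
kubectl get pvc mysql-pv-claim
kubectl get pv
```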
### Consume the storage: Toolbox

With the pool that was created above, we can also create a block image and mount it directly in a pod. The Rook toolbox is a container with common tools used for Rook debugging and testing. The toolbox is based on CentOS, so more tools of your choosing can be easily installed with yum.

Save the tools spec as `toolbox.yaml`:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: rook-ceph-tools
        image: rook/ceph:v0.9.2
        command: ["/tini"]
        args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
        imagePullPolicy: IfNotPresent
        env:
        - name: ROOK_ADMIN_SECRET
          valueFrom:
            secretKeyRef:
              name: rook-ceph-mon
              key: admin-secret
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /dev
          name: dev
        - mountPath: /sys/bus
          name: sysbus
        - mountPath: /lib/modules
          name: libmodules
        - name: mon-endpoint-volume
          mountPath: /etc/rook
      # if hostNetwork: false, the "rbd map" command hangs, see https://github.com/rook/rook/issues/2021
      hostNetwork: true
      volumes:
      - name: dev
        hostPath:
          path: /dev
      - name: sysbus
        hostPath:
          path: /sys/bus
      - name: libmodules
        hostPath:
          path: /lib/modules
      - name: mon-endpoint-volume
        configMap:
          name: rook-ceph-mon-endpoints
          items:
          - key: data
            path: mon-endpoints
```

Launch the rook-ceph-tools pod:

```
kubectl create -f toolbox.yaml
```

Wait for the toolbox pod to download its container and get to the running state:

```
kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
```

Once the rook-ceph-tools pod is running, you can connect to it with:

```
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
```

All available tools in the toolbox are ready for your troubleshooting needs. Example:

```
ceph status
ceph osd status
ceph df
rados df
```

When you are done with the toolbox, you can remove the deployment:

```
kubectl -n rook-ceph delete deployment rook-ceph-tools
```

### Direct Tools

After you have created a pool as described in the Block Storage topic, you can create a block image and mount it directly in a pod. After you have started and connected to the Rook toolbox, proceed with the following commands in the toolbox.

Create a volume image (10MB):

```
rbd create replicapool/test --size 10
rbd info replicapool/test
# Disable the rbd features that are not in the kernel module
rbd feature disable replicapool/test fast-diff deep-flatten object-map
```

Map the block volume, format it, and mount it:

```
# Map the rbd device. If the toolbox was started with "hostNetwork: false" this hangs
# and you have to stop it with Ctrl-C, however the command still succeeds;
# see https://github.com/rook/rook/issues/2021
rbd map replicapool/test

# Find the device name, such as rbd0
lsblk | grep rbd

# Format the volume (only do this the first time or you will lose data)
mkfs.ext4 -m0 /dev/rbd0

# Mount the block device
mkdir /tmp/rook-volume
mount /dev/rbd0 /tmp/rook-volume
```

Write and read a file:

```
echo "Hello Rook" > /tmp/rook-volume/hello
cat /tmp/rook-volume/hello
```

Unmount the volume and unmap the kernel device:

```
umount /tmp/rook-volume
rbd unmap /dev/rbd0
```

### Teardown

To clean up all the artifacts created by the block demo:

```
kubectl delete -f wordpress.yaml
kubectl delete -f mysql.yaml
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
```

## Shared File System

A shared file system can be mounted with read/write permission from multiple pods. This may be useful for applications which can be clustered using a shared filesystem.

**Multiple File Systems Not Supported.** By default only one shared file system can be created with Rook. Multiple file system support in Ceph is still considered experimental and can be enabled with the environment variable **ROOK_ALLOW_MULTIPLE_FILESYSTEMS** defined in `operator.yaml`.

### Create the File System

Create the file system by specifying the desired settings for the metadata pool, data pools, and metadata server in the `CephFilesystem` CRD. In this example we create the metadata pool and a single data pool, each with replication of three. For more options, see the documentation on creating shared file systems.

Save this shared file system definition as `filesystem.yaml`:

```
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
  - replicated:
      size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
```

The Rook operator will create all the pools and other resources necessary to start the service. This may take a minute to complete.

```
# Create the file system
$ kubectl create -f filesystem.yaml

# To confirm the file system is configured, wait for the mds pods to start
$ kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                  READY     STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-7d59fdfcf4-h8kw9   1/1       Running   0          12s
rook-ceph-mds-myfs-7d59fdfcf4-kgkjp   1/1       Running   0          12s
```

To see detailed status of the file system, start and connect to the Rook toolbox. A new line will be shown with `ceph status` for the `mds` service. In this example, there is one active MDS instance which is up, with one MDS instance in `standby-replay` mode in case of failover.

```
ceph status
...
  services:
    mds: myfs-1/1/1 up {[myfs:0]=mzw58b=up:active}, 1 up:standby-replay
```
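From the same toolbox session it can also be worth a quick check that the filesystem and its backing pools exist. The commands below are standard Ceph CLI (an optional extra, not in the original post) and only assume the `myfs` name used above:

```
# Optional check from the toolbox: list CephFS filesystems and the pools Rook created for myfs
ceph fs ls
ceph osd pool ls | grep myfs
```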
### Consume the Shared File System

As an example, we will start the kube-registry pod with the shared file system as the backing store. Save the following spec as `kube-registry.yaml`:

```
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    k8s-app: kube-registry
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        flexVolume:
          driver: ceph.rook.io/rook
          fsType: ceph
          options:
            fsName: myfs # name of the filesystem specified in the filesystem CRD.
            clusterNamespace: rook-ceph # namespace where the Rook cluster is deployed
            # by default the path is /, but you can override and mount a specific path of the filesystem by using the path attribute
            # the path must exist on the filesystem, otherwise mounting the filesystem at that path will fail
            # path: /some/path/inside/cephfs
```

### Teardown

To clean up all the artifacts created by the file system demo:

```
kubectl delete -f kube-registry.yaml
```

To delete the filesystem components and backing data, delete the Filesystem CRD. **Warning: data will be deleted.**

```
kubectl -n rook-ceph delete cephfilesystem myfs
```

# Monitoring

Each Rook Ceph cluster has some built-in metrics collectors/exporters for monitoring with Prometheus.

## Prometheus Operator

First the Prometheus operator needs to be started in the cluster so it can watch for our requests to start monitoring Rook and respond by deploying the correct Prometheus pods and configuration.

```
kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/v0.26.0/bundle.yaml
```

This will start the Prometheus operator, but before moving on, wait until the operator is in the Running state:

```
kubectl get pod
```

## Prometheus Instances

With the Prometheus operator running, we can create a service monitor that will watch the Rook cluster and collect metrics regularly. From the root of your locally cloned Rook repo, go to the monitoring directory:

```
cd cluster/examples/kubernetes/ceph/monitoring
```

Create the service monitor as well as the Prometheus server pod and service:

```
kubectl create -f service-monitor.yaml
kubectl create -f prometheus.yaml
kubectl create -f prometheus-service.yaml
```

Ensure that the Prometheus server pod gets created and advances to the Running state before moving on:

```
kubectl -n rook-ceph get pod prometheus-rook-prometheus-0
```

## Prometheus Web Console

Once the Prometheus server is running, you can open a web browser and go to the URL that is output from this command:

```
echo "http://$(kubectl -n rook-ceph -o jsonpath={.status.hostIP} get pod prometheus-rook-prometheus-0):30900"
```
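If you want to confirm that the `rook-ceph-mgr` metrics endpoint mentioned earlier is actually serving data before looking at the Prometheus console, a rough check like the one below works from a workstation with kubectl access. This is a sketch, not part of the original guide, and it assumes the Ceph mgr Prometheus module's default port 9283:

```
# Optional sanity check: forward the mgr metrics port locally and peek at the raw Prometheus metrics
kubectl -n rook-ceph port-forward service/rook-ceph-mgr 9283:9283 &
curl -s http://localhost:9283/metrics | head
```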