Kasten K10 in Practice, Part 11 - Kasten K10 Generic Storage Backup and Restore

1. Background

Kubernetes clusters often deploy lightweight applications on local storage (for example, local SSDs), and sometimes those applications also need protection. Another common case is that K10 has no storage integration for the underlying storage, and the storage lacks CSI snapshot support. To protect data in these situations, K10 can leverage Kanister, which lets you add backup, restore, and migration capabilities for application data efficiently and transparently, with minimal application changes.

Below is a complete example. The only changes required are adding a Kanister sidecar to your existing application Deployment (the sidecar mounts the application's data volume) and an annotation requesting generic backup.
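For reference, the injected sidecar is simply a long-running kanister-tools container that shares the application's data volume. A minimal sketch of what it looks like inside the pod spec (the image, command, and mount path mirror the `kubectl describe pod` output later in this article; with automatic injection enabled you do not write this yourself):

```yaml
# Sketch of the Kanister sidecar that K10 injects into the pod template.
# Fields mirror the pod description shown later in this walkthrough.
- name: kanister-sidecar
  image: ccr.ccs.tencentyun.com/kasten-k10/kanister-tools:k10-0.61.0
  command: ["bash", "-c"]
  args: ["tail -f /dev/null"]        # keep the container alive for backup operations
  volumeMounts:
  - name: data
    mountPath: /data/data            # each app volume is remounted under /data/<name>
```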

2. Pre-flight Checks

First, run the pre-flight checks. The output shows "Cluster isn't CSI capable", meaning CSI is not available in this environment. It also shows that the cbs StorageClass is supported via K10 Generic Volume Backup.

$ curl https://kasten-1257130361.cos.ap-chengdu.myqcloud.com/k10_primer.sh | bash

[#truncated...#]   
CSI Capabilities Check:
  Cluster isn't CSI capable

Validating Provisioners: 
cloud.tencent.com/qcloud-cbs:
  Storage Classes:
    cbs
      Supported via K10 Generic Volume Backup. See https://docs.kasten.io/latest/install/generic.html.

[#truncated...#]   

3. Test Walkthrough

3.1 Create the kasten-io namespace

First, create the kasten-io namespace:

$ kubectl create namespace kasten-io
namespace/kasten-io created

3.2 Install Kasten K10

Install Kasten K10 into the kasten-io namespace. Note that the following two advanced options, which enable the Kanister sidecar, are set to true:

<mark>
--set injectKanisterSidecar.enabled=true \
--set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true </mark>

$ helm install k10 k10-4.0.5.tgz --namespace kasten-io --set global.airgapped.repository=ccr.ccs.tencentyun.com/kasten-k10 \
  --set global.persistence.metering.size=20Gi \
  --set prometheus.server.persistentVolume.size=20Gi \
  --set global.persistence.catalog.size=20Gi \
  --set externalGateway.create=true \
  --set auth.tokenAuth.enabled=true \
  --set metering.mode=airgap \
  --set injectKanisterSidecar.enabled=true \
  --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true

Check the installation status. All of the Kasten pods should be in the Running state:

$ kubectl get po -n kasten-io 
NAME                                  READY   STATUS    RESTARTS   AGE
aggregatedapis-svc-5d585974d9-nsgm4   1/1     Running   0          117s
auth-svc-865fc676d6-mvfr9             1/1     Running   0          117s
catalog-svc-7cb86f96cf-6wtng          2/2     Running   0          117s
config-svc-6c9f5dc695-xd5nn           1/1     Running   0          117s
crypto-svc-796c7f6c68-wdqkf           1/1     Running   0          117s
dashboardbff-svc-97b8f8ccb-fvld6      1/1     Running   0          117s
executor-svc-6cd8547867-8fhmt         2/2     Running   0          117s
executor-svc-6cd8547867-lq8sd         2/2     Running   0          117s
executor-svc-6cd8547867-xb6xd         2/2     Running   0          117s
frontend-svc-6d5bc5b4f6-bzbqw         1/1     Running   0          117s
gateway-779686f446-7qw74              1/1     Running   0          117s
jobs-svc-85bc8446bf-6mzsk             1/1     Running   0          117s
kanister-svc-7668fd974b-nk6zq         1/1     Running   0          117s
logging-svc-69cd88456-m9klv           1/1     Running   0          117s
metering-svc-5f958567b4-pv525         1/1     Running   0          117s
prometheus-server-5f55997d87-2hqcb    2/2     Running   0          117s
state-svc-85d456bf86-gkrkg            1/1     Running   0          117s

3.3 Access the K10 dashboard

Get the external access endpoint for Kasten K10:

$ kubectl get svc -n kasten-io |grep gateway-ext
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                        AGE
gateway-ext             LoadBalancer   172.16.252.140   139.155.61.198   80:30644/TCP                   2m33s

Open http://139.155.61.198/k10/#

You will see the following screen, which prompts for a token:

[Screenshot: K10 login page prompting for a token]

$ sa_secret=$(kubectl get serviceaccount k10-k10 -o jsonpath="{.secrets[0].name}" --namespace kasten-io) && \
  kubectl get secret $sa_secret --namespace kasten-io -ojsonpath="{.data.token}{'\n'}" | base64 --decode
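The command above works by reading the service account secret's base64-encoded token field and decoding it. A minimal sketch of just the decoding step, using a dummy value rather than a real token:

```shell
# Kubernetes secrets store their data base64-encoded; decoding recovers the
# plain value. The string here is a dummy stand-in, not a real K10 token.
encoded=$(printf 'my-demo-token' | base64)
printf '%s' "$encoded" | base64 --decode
# prints: my-demo-token
```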

Enter your company name and email address:

[Screenshot: K10 sign-in form asking for company name and email]

The Kasten K10 dashboard now appears in the browser:

[Screenshot: the Kasten K10 dashboard]

4. Testing Generic Storage Backup and Restore

4.1 Create a sample application Deployment

To start the test, create a file named deployment.yaml:

vim deployment.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
  labels:
    app: demo
    pvc: demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo-container
        image: alpine:3.7
        resources:
            requests:
              memory: 256Mi
              cpu: 100m
        command: ["tail"]
        args: ["-f", "/dev/null"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: demo-pvc

4.2 Create the application namespace

$ kubectl create namespace filetest 
namespace/filetest created

4.3 Label the namespace

Label the namespace so that it is eligible for Kanister sidecar injection:

$ kubectl label namespace filetest k10/injectKanisterSidecar=true 
namespace/filetest labeled

4.4 Create the test application

Create the application and confirm that the Kanister sidecar has started.

# Create the application
$ kubectl apply --namespace=filetest -f deployment.yaml

# Confirm the pod status is Running and shows 2/2, which proves the sidecar has started

$ kubectl get pods --namespace=filetest | grep demo-app 
demo-app-594d5db749-htrmq   2/2     Running   0          25s

# Inspect the pod details and confirm the sidecar has mounted /data
$ kubectl describe pod -n filetest
Name:         demo-app-594d5db749-htrmq
Namespace:    filetest
Priority:     0
Node:         172.27.0.5/172.27.0.5
Start Time:   Fri, 09 Jul 2021 19:53:18 +0800
Labels:       app=demo
              pod-template-hash=594d5db749
Annotations:  tke.cloud.tencent.com/networks-status:
                [{
                    "name": "tke-bridge",
                    "interface": "eth0",
                    "ips": [
                        "172.16.0.23"
                    ],
                    "mac": "1a:fe:37:d6:dd:b1",
                    "default": true,
                    "dns": {}
                }]
Status:       Running
IP:           172.16.0.23
IPs:
  IP:           172.16.0.23
Controlled By:  ReplicaSet/demo-app-594d5db749
Containers:
  demo-container:
    Container ID:  docker://a11a29888dfd00005cf9f396a76ca0e91615765ea7d8439a29250ee33832f218
    Image:         alpine:3.7
    Image ID:      docker-pullable://alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10
    Port:          <none>
    Host Port:     <none>
    Command:
      tail
    Args:
      -f
      /dev/null
    State:          Running
      Started:      Fri, 09 Jul 2021 19:53:29 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8tghr (ro)
  kanister-sidecar:
    Container ID:  docker://8224195c7592df2259058f868febbe69e4c6180d62cd3507a14bd54200aaeb28
    Image:         ccr.ccs.tencentyun.com/kasten-k10/kanister-tools:k10-0.61.0
    Image ID:      docker-pullable://ccr.ccs.tencentyun.com/kasten-k10/kanister-tools@sha256:e33203b51a1905204b9a6b474641a1b82a0db71e4cca55ddb10b5f390babeb29
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
    Args:
      tail -f /dev/null
    State:          Running
      Started:      Fri, 09 Jul 2021 19:53:30 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8tghr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  demo-pvc
    ReadOnly:   false
  default-token-8tghr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8tghr
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        63s (x3 over 67s)  default-scheduler        running "VolumeBinding" filter plugin for pod "demo-app-594d5db749-htrmq": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled               58s                default-scheduler        Successfully assigned filetest/demo-app-594d5db749-htrmq to 172.27.0.5
  Normal   SuccessfulAttachVolume  51s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-410142a5-80d6-4be8-84c6-2f91ee9caa05"
  Normal   Pulling                 49s                kubelet                  Pulling image "alpine:3.7"
  Normal   Pulled                  47s                kubelet                  Successfully pulled image "alpine:3.7"
  Normal   Created                 47s                kubelet                  Created container demo-container
  Normal   Started                 47s                kubelet                  Started container demo-container
  Normal   Pulled                  47s                kubelet                  Container image "ccr.ccs.tencentyun.com/kasten-k10/kanister-tools:k10-0.61.0" already present on machine
  Normal   Created                 47s                kubelet                  Created container kanister-sidecar
  Normal   Started                 46s                kubelet                  Started container kanister-sidecar

4.5 Copy test data into the container's data volume

$ kubectl get pods --namespace=filetest | grep demo-app
demo-app-594d5db749-htrmq   2/2     Running   0          9m51s

$ kubectl cp deployment.yaml filetest/demo-app-594d5db749-htrmq:/data/ 
Defaulting container name to demo-container.

$ kubectl exec --namespace=filetest demo-app-594d5db749-htrmq -- ls -l /data 
Defaulting container name to demo-container.
Use 'kubectl describe pod/demo-app-594d5db749-htrmq -n filetest' to see all of the containers in this pod.
total 20
-rw-r--r--    1 501      dialout        812 Jul  9 12:03 deployment.yaml
drwx------    2 root     root         16384 Jul  9 11:53 lost+found

4.6 Configure a backup location profile in the K10 UI

[Screenshot: configuring an object storage backup location profile]

4.7 Create a backup policy with Kanister enabled

[Screenshot: creating a backup policy with Kanister generic backup enabled]

4.8 Check the job execution

[Screenshot: the backup job running in the K10 dashboard]

4.9 Simulate deletion of the container data

$ kubectl exec --namespace=filetest demo-app-594d5db749-htrmq -- rm -rf /data/deployment.yaml 
Defaulting container name to demo-container.
Use 'kubectl describe pod/demo-app-594d5db749-htrmq -n filetest' to see all of the containers in this pod.

$ kubectl exec --namespace=filetest demo-app-594d5db749-htrmq -- ls -l /data  
Defaulting container name to demo-container.
Use 'kubectl describe pod/demo-app-594d5db749-htrmq -n filetest' to see all of the containers in this pod.
total 16
drwx------    2 root     root         16384 Jul  9 11:53 lost+found

4.10 Restore the deleted data

[Screenshot: restoring the application from a restore point]

4.11 Check the restore status

The restore job has completed:
[Screenshot: the completed restore job]

Verify that the data was restored. Note that the restore recreated the pod, so the old pod name no longer exists:

$ kubectl exec --namespace=filetest demo-app-594d5db749-htrmq -- ls -l /data 
Error from server (NotFound): pods "demo-app-594d5db749-htrmq" not found

$ kubectl get pods --namespace=filetest | grep demo-app 
demo-app-594d5db749-gr5lh   2/2     Running   0          38s
$ kubectl exec --namespace=filetest demo-app-594d5db749-gr5lh -- ls -l /data  
Defaulting container name to demo-container.
Use 'kubectl describe pod/demo-app-594d5db749-gr5lh -n filetest' to see all of the containers in this pod.
total 20
-rw-r--r--    1 501      dialout        812 Jul  9 12:03 deployment.yaml
drwx------    2 root     root         16384 Jul  9 11:53 lost+found

4.12 Validate the data

After the restore, you should verify that the data is intact. One way to do this is with an MD5 checksum tool.

$ md5 deployment.yaml
MD5 (deployment.yaml) = 1b81e19673295c8a1f27c4869025092c

$ kubectl cp filetest/demo-app-594d5db749-gr5lh:/data/deployment.yaml deployment-restore.yaml
Defaulting container name to demo-container.
tar: removing leading '/' from member names

$ md5 deployment-restore.yaml
MD5 (deployment-restore.yaml) = 1b81e19673295c8a1f27c4869025092c
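The `md5` command used above is the macOS variant; on most Linux hosts the equivalent is `md5sum`. A small sketch of the same original-vs-restored comparison using `md5sum` (the file names and contents here are illustrative, not taken from the cluster):

```shell
# Create two files with identical content and compare their MD5 digests,
# mirroring the original-vs-restored check above.
printf 'demo content\n' > original.yaml
printf 'demo content\n' > restored.yaml
sum_a=$(md5sum original.yaml | awk '{print $1}')
sum_b=$(md5sum restored.yaml | awk '{print $1}')
[ "$sum_a" = "$sum_b" ] && echo "checksums match"
# prints: checksums match
```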

5. Generic Backup and Restore on Unmounted PVCs

Generic backup and restore can also be enabled for unmounted PVCs, by adding the following annotation to the StorageClass that provisioned the volumes:

k10.kasten.io/forcegenericbackup
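As a sketch, the annotation sits on the StorageClass object itself. Assuming the cbs class and provisioner from the pre-flight check earlier, and an annotation value of "true" (the value is an assumption; consult the K10 documentation for your version), it would look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cbs
  annotations:
    # Forces K10 to use generic (Kanister-based) backup for volumes of this class.
    # The "true" value is an assumption; check the K10 docs for your version.
    k10.kasten.io/forcegenericbackup: "true"
provisioner: cloud.tencent.com/qcloud-cbs
```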
