Introduction
The sections are as follows (a skeleton of the spec is sketched right after this list):
In the images section we specify the container image names to use. In the backup section we specify whether the database will be backed up.
In the cells section we specify the names of the availability zones in use. These settings are actually applied to vtgate.
In the keyspaces section we specify the databases. For each database we specify which tablets run in which cell.
The globalLockserver section configures the global topology lock server (etcd).
The vitessDashboard section configures vtctld, the Vitess dashboard. Here we specify which cells it will watch.
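Putting these together, a minimal skeleton of the spec looks roughly like this (a sketch only; the field names come from the full examples further down, values are omitted):

apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  images:            # container images for vtctld, vtgate, vttablet, mysqld, ...
    ...
  backup:            # backup engine and storage locations
    ...
  globalLockserver:  # global topology (etcd) settings
    ...
  cells:             # one entry per availability zone; each gets a vtgate pool
    ...
  vitessDashboard:   # vtctld settings and the cells it watches
    ...
  keyspaces:         # databases, their shards and tablet pools per cell
    ...
  updateStrategy:    # how the operator rolls out spec changes
    ...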
updateStrategy Field
Example
We do it like this:
updateStrategy:
  type: Immediate
The explanation is as follows. With Immediate, the Vitess Operator applies changes to the VitessCluster.yaml file immediately.
Type selects the overall update strategy.
Supported options are:
External: Schedule updates on objects that should be updated, but wait for an external tool to release them by adding the 'rollout.planetscale.com/released' annotation.
Immediate: Release updates to all cells, keyspaces, and shards as soon as the VitessCluster spec is changed. Perform rolling restart of one tablet Pod per shard at a time, with automatic planned reparents whenever possible to avoid master downtime.
Default: External
If it is External, we release the tablet restart manually like this:
kubectl annotate pod my-vttablet-zone1 "rollout.planetscale.com/released=true"
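For reference, the corresponding spec setting is just the type value; a minimal sketch mirroring the Immediate example above:

updateStrategy:
  type: External   # the operator then waits for the rollout.planetscale.com/released annotation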
cells Field
This is the list of "availability zones", i.e. "cells". A VtGate is created for each cell.
Example
We do it like this:
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: {{ $.Values.keyspaceName }}-vitess-cluster
spec:
  images:
    ...
  backup:
    ...
  globalLockserver:
    ...
  cells:
    - name: az1
      gateway:
        replicas: 2
        extraFlags:
          mysql_server_version: "8.0.23-Vitess"
          mysql_auth_server_impl: "none"
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            memory: 256Mi
To expose a single Service for all the cells, i.e. for the whole VtGate pool, we do it like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    planetscale.com/cluster: adv-vitess-cluster
    planetscale.com/component: vtgate
  name: adv-vtgate
spec:
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: web
      port: 15000
      protocol: TCP
      targetPort: web
      nodePort: 32090
    - name: grpc
      port: 15999
      protocol: TCP
      targetPort: grpc
      nodePort: 32100
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: mysql
      nodePort: 32110
  selector:
    planetscale.com/cluster: adv-vitess-cluster
    planetscale.com/component: vtgate
  sessionAffinity: None
  type: NodePort
keyspaces Field
It also specifies which cell the database lives in.
Example
The full example is as follows:
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  images:
    vtctld: vitess/lite:mysql80
    vtgate: vitess/lite:mysql80
    vttablet: vitess/lite:mysql80
    vtbackup: vitess/lite:mysql80
    mysqld:
      mysql80Compatible: vitess/lite:mysql80
    mysqldExporter: prom/mysqld-exporter:v0.11.0
  cells:
    - name: zone1
      gateway:
        authentication:
          static:
            secret:
              name: example-cluster-config
              key: users.json
        replicas: 1
        extraFlags:
          mysql_server_version: "8.0.13-Vitess"
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            memory: 256Mi
  vitessDashboard:
    cells:
      - zone1
    extraFlags:
      security_policy: read-only
    replicas: 1
    resources:
      limits:
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
  keyspaces:
    - name: ADV
      turndownPolicy: Immediate
      partitionings:
        - equal:
            parts: 1
            shardTemplate:
              databaseInitScriptSecret:
                name: example-cluster-config
                key: init_db.sql
              replication:
                enforceSemiSync: false
              tabletPools:
                - cell: zone1
                  type: replica
                  replicas: 1
                  vttablet:
                    extraFlags:
                      db_charset: utf8mb4
                    resources:
                      limits:
                        memory: 256Mi
                      requests:
                        cpu: 100m
                        memory: 256Mi
                  mysqld:
                    resources:
                      limits:
                        memory: 1024Mi
                      requests:
                        cpu: 100m
                        memory: 512Mi
                    configOverrides: |
                      [mysqld]
                      lower_case_table_names = 1
                  dataVolumeClaimTemplate:
                    accessModes: ["ReadWriteOnce"]
                    resources:
                      requests:
                        storage: 1Gi
Example - keyspaces/replication/initializeBackup Field
The explanation is as follows:
Vitess Replication Spec has a field called initializeBackup which defaults to true. Setting that to false will prevent initial backups from happening
The explanation is as follows:
if backups already exist, the operator should not create a vtbackup-init pod... that said, you can force it not to create it with shardTemplate.replication.initializeBackup = false
The explanation is as follows. In other words, vtbackup-init creates an initial backup if none exists yet:
Q: I'm adding backup storage to an existing keyspace but the vtbackup-init is failing with
"Can't take backup: refusing to upload initial backup of empty database: the shard live/-80 already has at least one tablet that may be serving (zone1-1672169672); you must take a backup from a live tablet instead"
1. should I have initializeBackup as false?
2. I have to run vtctlclient -server localhost:15999 Backup zone1-3423003548?
A: number 2 should be enough to let you bootstrap from a backups-not-configured state to a backups-configured state, assuming the tablets have been restarted with the new backup settings (which should happen automatically if updateStrategy.type = Immediate). (edited)
Q: If the mysql instances have no data will the vttablet restart with success? it seems that if the backup is enabled for an empty instance it doesn't skip the initial backup and the vttablet never be ready
A: if mysql is empty and backups are enabled, vttablet will look for a backup to restore upon startup. after you've run vtctlclient Backup once on that shard, this restore should work fine.
A: the operator uses vtbackup-init once per shard (not per tablet startup) to seed an empty backup if no backups exist yet, but this would be incorrect if any data has already been loaded into tablets. that's why vtbackup-init checks for this possibility and refuses to clobber data.
So when this field is true, a pod named vtbackup-init is created and is removed once its work is done. The pod output looks like this:
$ kubectl get pods -n rlwy-08
NAME                                                            READY   STATUS        RESTARTS   AGE
adv-vitess-cluster-38e97f2b-vitessbackupstorage-subcontroller   0/1     Terminating   0          34s
adv-vitess-cluster-adv-x-x-vtbackup-init-61d77b76               0/1     Completed     1          4h
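To skip this initial backup, the field sits under shardTemplate.replication, as the quotes above describe. A minimal sketch (surrounding structure taken from the full example earlier; other fields omitted):

keyspaces:
  - name: ADV
    partitionings:
      - equal:
          parts: 1
          shardTemplate:
            replication:
              enforceSemiSync: false
              initializeBackup: false   # default is true; false prevents the vtbackup-init pod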
Example - keyspaces/mysqld
In one example I did the following; that is, I used the local volume whose StorageClass is named "standard":
mysqld:
  storageSize: 1Gi              # hostpath for docker k8s
  storageClassName: standard    # Creates a Persistent Volume Claim
  resources:
    limits:
      memory: 1256Mi
    requests:
      cpu: 200m
      memory: 256Mi
Looking with kubectl, the result was as follows. The etcd instances were using rook-ceph, while vttablet was using the standard local volume:
$ kubectl get pvc
NAME                                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
adv-vitess-cluster-etcd-07a83994-1                    Bound    pvc-1e439bae-2835-498b-a397-71be34f5762a   1Gi        RWO            rook-ceph-block   27d
adv-vitess-cluster-etcd-07a83994-2                    Bound    pvc-fd429045-f9b8-4ec6-9c96-e0ee6b92be32   1Gi        RWO            rook-ceph-block   27d
adv-vitess-cluster-etcd-07a83994-3                    Bound    pvc-378b1842-9f67-474b-b5fb-176f84ee88e8   1Gi        RWO            rook-ceph-block   27d
adv-vitess-cluster-vttablet-az1-4135592426-c2dc2c3d   Bound    pvc-3a2010a3-7203-4d2c-81ba-621d80fd12ab   1Gi        RWO            standard          27d

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                        STORAGECLASS   REASON   AGE
pvc-3a2010a3-7203-4d2c-81ba-621d80fd12ab   1Gi        RWO            Delete           Bound    rlwy03/adv-vitess-cluster-vttablet-az1-4135592426-c2dc2c3d   standard                27d
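In the VitessCluster spec itself, the storage class of the tablet data volume can also be set through the tablet pool's dataVolumeClaimTemplate, which accepts the usual PersistentVolumeClaim fields. A sketch under that assumption (structure taken from the full example above):

tabletPools:
  - cell: az1
    type: replica
    replicas: 1
    dataVolumeClaimTemplate:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard   # the local "standard" StorageClass seen in the PVC listing above
      resources:
        requests:
          storage: 1Gi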
globalLockserver Field
Example
We do it like this:
apiVersion: planetscale.com/v2
kind: VitessCluster
spec:
  images:
    ...
  backup:
    ...
  globalLockserver:
    etcd:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: planetscale.com/component
                      operator: In
                      values:
                        - vttablet
                topologyKey: "kubernetes.io/hostname"
      dataVolumeClaimTemplate:
        accessModes:
          - ReadWriteOnce
        storageClassName: rook-ceph-block
        resources:
          requests:
            storage: 2Gi
      resources:
        limits:
          cpu: 200m
          memory: 128Mi
        requests:
          cpu: 200m
          memory: 128Mi
backup Field
There are two resources worth reading on this.
If no backup storage is configured, we get an error like this:
$ /vt/bin/vtctlclient --server :15999 ListBackups adv/-
ListBackups Error: rpc error: code = Unknown desc = no registered implementation of BackupStorage
E0903 09:18:47.577379      34 main.go:103] remote error: rpc error: code = Unknown desc = no registered implementation of BackupStorage
When this field is used, a new pod is created, like this:
$ kubectl get pods -n rlwy-08
NAME                                                            READY   STATUS
adv-vitess-cluster-38e97f2b-vitessbackupstorage-subcontroller   1/1     Running
The explanation is as follows:
there is a vitessbackupstorage subcontroller pod per backup location, this actually just watches the storage for backups and populates k8s metadata for easy inspection, but afaik it is not critical to proper functioning of the operator
The inside of the pod looks like this. Interestingly, it runs the vitess-operator command:
$ kubectl describe pod adv-vitess-cluster-38e97f2b-vitessbackupstorage-subcontroller -n rlwy-08
Name:                 adv-vitess-cluster-38e97f2b-vitessbackupstorage-subcontroller
Namespace:            rlwy-08
Priority:             5000
Priority Class Name:  vitess-operator-control-plane
Node:                 rlwy-08-b7pm7-worker-a-rmgg2.c.product-oce-private.internal/172.18.16.126
Start Time:           Sat, 03 Sep 2022 07:28:55 +0000
Labels:               backup.planetscale.com/location=
                      planetscale.com/cluster=adv-vitess-cluster
                      planetscale.com/component=vbs-subcontroller
Annotations:          k8s.v1.cni.cncf.io/network-status:
                        [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.39" ], "default": true, "dns": {} }]
                      k8s.v1.cni.cncf.io/networks-status:
                        [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.39" ], "default": true, "dns": {} }]
                      openshift.io/scc: restricted
                      planetscale.com/desired-state-hash: d6c24775f5bf18ed4a7a65d19867e444
Status:               Running
IP:                   10.128.2.39
IPs:
  IP:           10.128.2.39
Controlled By:  VitessBackupStorage/adv-vitess-cluster-38e97f2b
Containers:
  vitess-operator:
    Container ID:  cri-o://bce90f255f40f33256fcf08c730dfe1aa86656409ddbe90c9a8bf88e817ab557
    Image:         gcr.io/product-spanner/oce/planetscale/vitess-operator:v2.7.2
    Image ID:      gcr.io/product-spanner/oce/planetscale/vitess-operator@sha256:f5dd9add128c9f4e5a4c1e9ad478b81e2141af1d0ebdbc9bc3c5ac243171f002
    Port:          <none>
    Host Port:     <none>
    Command:
      vitess-operator
    Args:
      --logtostderr
      -v=4
      --default_etcd_image=gcr.io/product-spanner/oce/coreos/etcd:v3.3.13
      --backup_storage_implementation=file
      --file_backup_storage_root=/vt/backups/adv-vitess-cluster
    State:          Running
      Started:      Sat, 03 Sep 2022 07:29:19 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  128Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:
      WATCH_NAMESPACE:            rlwy-08
      POD_NAME:                   adv-vitess-cluster-38e97f2b-vitessbackupstorage-subcontroller (v1:metadata.name)
      PS_OPERATOR_POD_NAMESPACE:  rlwy-08 (v1:metadata.namespace)
      PS_OPERATOR_POD_NAME:       adv-vitess-cluster-38e97f2b-vitessbackupstorage-subcontroller (v1:metadata.name)
      OPERATOR_NAME:              vitess-operator
      PS_OPERATOR_FORK_PATH:      vitessbackupstorage-subcontroller
      PS_OPERATOR_VBS_NAMESPACE:  rlwy-08
      PS_OPERATOR_VBS_NAME:       adv-vitess-cluster-38e97f2b
      HOME:                       /home/vitess
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k2nlr (ro)
      /vt/backups from vitess-backups (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-k2nlr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
  vitess-backups:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  adv-vitess-backup
    ReadOnly:   false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               10m   default-scheduler        Successfully assigned rlwy-08/adv-vitess-cluster-38e97f2b-vitessbackupstorage-subcontroller to rlwy-08-b7pm7-worker-a-rmgg2.c.product-oce-private.internal
  Normal  SuccessfulAttachVolume  10m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-8fe67bb0-0254-49ca-95d9-4231c1e1fe28"
  Normal  AddedInterface          10m   multus                   Add eth0 [10.128.2.39/23] from openshift-sdn
  Normal  Pulling                 10m   kubelet                  Pulling image "gcr.io/product-spanner/oce/planetscale/vitess-operator:v2.7.2"
  Normal  Pulled                  10m   kubelet                  Successfully pulled image "gcr.io/product-spanner/oce/planetscale/vitess-operator:v2.7.2" in 18.107677153s
  Normal  Created                 10m   kubelet                  Created container vitess-operator
  Normal  Started                 10m   kubelet                  Started container vitess-operator
Another one looks like this. Here S3 is used:
$ kubectl describe pod/vt-9f600bb7-vitessbackupstorage-subcontroller
Name:                 vt-9f600bb7-vitessbackupstorage-subcontroller
Namespace:            default
Priority:             5000
Priority Class Name:  vitess-operator-control-plane
Node:                 vmi688654.vpsprovider.net/111.11.11.111
Start Time:           Sun, 24 Oct 2021 18:07:41 +0300
Labels:               backup.planetscale.com/location=
                      planetscale.com/cluster=vt
                      planetscale.com/component=vbs-subcontroller
Annotations:          cni.projectcalico.org/podIP: 10.42.0.94/32
                      cni.projectcalico.org/podIPs: 10.42.0.94/32
                      kubernetes.io/psp: global-unrestricted-psp
                      planetscale.com/desired-state-hash: 43e34ab9aabc72d1f12a9d40d4fe58be
Status:               Running
IP:                   10.42.0.94
IPs:
  IP:           10.42.0.94
Controlled By:  VitessBackupStorage/vt-9f600bb7
Containers:
  vitess-operator:
    Container ID:  containerd://98c08817cef23757ab33402cddc0efe85840365dddd07421ce282256111c8ccf
    Image:         planetscale/vitess-operator:v2.5.0
    Image ID:      docker.io/planetscale/vitess-operator@sha256:04a3988f3563b4ff756d410c15fcab92c7a6211dd16313907985c75365f1db7a
    Port:          <none>
    Host Port:     <none>
    Command:
      vitess-operator
    Args:
      --logtostderr
      -v=4
      --backup_storage_implementation=s3
      --s3_backup_aws_endpoint=s3.endpoint.com
      --s3_backup_aws_region=eu-central-003
      --s3_backup_storage_bucket=fake-bucket-name
      --s3_backup_storage_root=vt
    State:          Running
      Started:      Sun, 24 Oct 2021 18:07:44 +0300
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  128Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:
      WATCH_NAMESPACE:              default
      POD_NAME:                     vt-9f600bb7-vitessbackupstorage-subcontroller (v1:metadata.name)
      PS_OPERATOR_POD_NAMESPACE:    default (v1:metadata.namespace)
      PS_OPERATOR_POD_NAME:         vt-9f600bb7-vitessbackupstorage-subcontroller (v1:metadata.name)
      OPERATOR_NAME:                vitess-operator
      PS_OPERATOR_FORK_PATH:        vitessbackupstorage-subcontroller
      PS_OPERATOR_VBS_NAMESPACE:    default
      PS_OPERATOR_VBS_NAME:         vt-9f600bb7
      HOME:                         /home/vitess
      AWS_SHARED_CREDENTIALS_FILE:  /vt/secrets/s3-backup-auth/backblaze-vitess-backup
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4k4sf (ro)
      /vt/secrets/s3-backup-auth from s3-backup-auth-secret (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-4k4sf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  s3-backup-auth-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vt-backup-secret
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
Example - hostPath
We do it like this:
backup:
  engine: xtrabackup
  locations:
    - volume:
        hostPath:
          path: /tmp
          type: Directory
Example - PersistentVolumeClaim
We do it like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: adv-vitess-backup
  labels:
    app: adv-vitess-backup
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-cephfs
Then I did the following:
backup:
  engine: xtrabackup
  locations:
    - volume:
        persistentVolumeClaim:
          claimName: adv-vitess-backup
Example - gcs
We do it like this:
# Version: 20200113
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  backup:
    locations:
      - gcs:
          bucket: mybucketname1
          authSecret:
            name: gcs-secret
            key: gcs_key.json
Example - gcs
We do it like this:
spec:
  backup:
    locations:
      - gcs:
          bucket: mybucketname
          authSecret:
            name: gcs-secret
            key: gcs_key.json
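The gcs-secret referenced by authSecret must exist in the same namespace and contain the key gcs_key.json. A minimal sketch, assuming the content is a GCP service account key file (the JSON body below is only a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: gcs-secret
type: Opaque
stringData:
  gcs_key.json: |
    {
      "type": "service_account",
      "project_id": "my-project"
    }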
vitessDashboard Field
Example
We do it like this:
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: adv-vitess-cluster
spec:
  images:
    ...
  backup:
    ...
  globalLockserver:
    ...
  cells:
    ...
  vitessDashboard:
    cells:
      - az1
    extraFlags:
      security_policy: read-only
      backup_engine_implementation: xtrabackup
      backup_storage_compress: "true"
      backup_storage_implementation: file
      file_backup_storage_root: /vt/backups/az1-vitess-cluster
      xbstream_restore_flags: "--parallel=3"
      xtrabackup_backup_flags: "--parallel=1"
      xtrabackup_stream_mode: xbstream
      xtrabackup_stripes: "8"
      xtrabackup_user: vt_dba
    extraVolumes:
      - name: vitess-backups
        persistentVolumeClaim:
          claimName: adv-vitess-backup
    extraVolumeMounts:
      - mountPath: /vt/backups
        name: vitess-backups
    replicas: 2
    resources:
      limits:
        memory: 128Mi
      requests:
        cpu: 200m
        memory: 128Mi