Friday, September 2, 2022

What Is Vitess VtTablet?

Introduction
The explanation is as follows:
Each MySQL server is assigned a vttablet
- A daemon/sidecar
- Controls the mysqld process
- Interacts with the mysqld server
- Typically on same host as mysqld
It also does connection pooling and has its own cache mechanism. There are 3 containers inside the VtTablet pod, named vttablet, mysqld, and mysqld-exporter.
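To see these container names, we can query the pod spec directly. A minimal sketch, using the pod and namespace from the example below:
$ kubectl get pod adv-vitess-cluster-vttablet-az1-1330809953-8066577e -n rlwy-08 \
    -o jsonpath='{.spec.containers[*].name}'
vttablet mysqld mysqld-exporter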

VtTablet Container Errors
Example - logs
We do the following. Here the tablet is looking for a primary but cannot find one. It then tries to transition to REPLICA Serving, but cannot connect to the vt_adv database:
$ kubectl logs adv-vitess-cluster-vttablet-az1-1330809953-8066577e -c vttablet -n rlwy-08 | more
ERROR: logging before flag.Parse: E0902 18:00:44.370487       1 syslogger.go:149] can't connect to syslog
I0902 18:00:44.419559       1 servenv.go:100] Version: 14.0.1 (Git revision 4c118cbe48a33fab1cfdbb21574debeb23e5bcec branch 'HEAD') built on Tue Jul 26 23:14:06 UTC 2022 by vitess@b321e9084996 using go1.18.4 linux/amd64
I0902 18:00:44.419736       1 streamlog.go:158] Streaming logs from TabletServer at /debug/querylog.
I0902 18:00:44.419756       1 streamlog.go:158] Streaming logs from TxLog at /debug/txlog.
I0902 18:00:44.420854       1 vttablet.go:145] Loaded config file  successfully:
cacheResultFields: true
consolidator: enable
db:
  allprivs:
    password: '****'
    useSsl: true
    user: vt_allprivs
  app:
    password: '****'
    useSsl: true
    user: vt_app
  appdebug:
    password: '****'
    useSsl: true
    user: vt_appdebug
  charset: utf8mb4
  dba:
    password: '****'
    useSsl: true
    user: vt_dba
  filtered:
    password: '****'
    useSsl: true
    user: vt_filtered
  repl:
    password: '****'
    useSsl: true
    user: vt_repl
gracePeriods: {}
healthcheck:
  degradedThresholdSeconds: 30
  intervalSeconds: 5
  unhealthyThresholdSeconds: 7200
hotRowProtection:
  maxConcurrency: 5
  maxGlobalQueueSize: 1000
  maxQueueSize: 20
  mode: disable
messagePostponeParallelism: 4
olapReadPool:
  idleTimeoutSeconds: 1800
  size: 96
oltp:
  maxRows: 100000
  queryTimeoutSeconds: 900
  txTimeoutSeconds: 30
oltpReadPool:
  idleTimeoutSeconds: 1800
  maxWaiters: 5000
  size: 96
queryCacheLFU: true
queryCacheMemory: 33554432
queryCacheSize: 5000
replicationTracker:
  heartbeatIntervalSeconds: 1
  mode: polling
rowStreamer:
  maxInnoDBTrxHistLen: 1000000
  maxMySQLReplLagSecs: 43200
schemaReloadIntervalSeconds: 1800
signalSchemaChangeReloadIntervalSeconds: 5
signalWhenSchemaChange: true
streamBufferSize: 32768
txPool:
  idleTimeoutSeconds: 1800
  maxWaiters: 5000
  size: 300
  timeoutSeconds: 1
I0902 18:00:44.420892       1 mycnf_flag.go:123] No mycnf_server_id, no mycnf-file specified, using default config for server id 1330809953: /vt/vtdataroot/vt_1330809953/my.cnf
I0902 18:00:44.421282       1 dbconfigs.go:388] DBConfigs: allprivs:
  password: '****'
  useSsl: true
  user: vt_allprivs
app:
  password: '****'
  useSsl: true
  user: vt_app
appdebug:
  password: '****'
  useSsl: true
  user: vt_appdebug
charset: utf8mb4
dba:
  password: '****'
  useSsl: true
  user: vt_dba
filtered:
  password: '****'
  useSsl: true
  user: vt_filtered
repl:
  password: '****'
  useSsl: true
  user: vt_repl

I0902 18:00:44.428869       1 tx_throttler.go:95] Initialized transaction throttler with config: &{enabled:false topoServer:<nil> throttlerConfig:<nil> healthCheckCells:[]}
I0902 18:00:44.431462       1 mysqld.go:590] execCmd: /usr/sbin/mysqld /usr/sbin/mysqld [--version]
I0902 18:00:44.448218       1 mysqld.go:604] execCmd: /usr/sbin/mysqld output: /usr/sbin/mysqld  Ver 8.0.23 for Linux on x86_64 (MySQL Community Server - GPL)
I0902 18:00:44.448252       1 mysqld.go:172] Using flavor: mysql, version: {8 0 23}
I0902 18:00:44.486252       1 tm_init.go:211] Using hostname: 10.129.4.27 from -tablet_hostname flag.
I0902 18:00:44.486786       1 tm_init.go:477] Reading/creating keyspace and shard records for ADV/-
I0902 18:00:44.495104       1 syslogger.go:129] ADV [keyspace] created value:
I0902 18:00:44.503160       1 vschema.go:51] successfully updated vschema for keyspace ADV:
I0902 18:00:44.503215       1 locks.go:234] Locking keyspace ADV for action CreateShard
I0902 18:00:44.528213       1 syslogger.go:129] ADV/- [shard] created value: key_range:{} is_primary_serving:true
I0902 18:00:44.528266       1 locks.go:273] Unlocking keyspace ADV for successful action CreateShard
I0902 18:00:44.555520       1 locks.go:234] Locking keyspace ADV for action RebuildKeyspace
E0902 18:00:44.596535       1 tablet.go:244] unable to connect to tablet cell:"az1" uid:1330809953: node doesn't exist: /vitess/adv-vitess-cluster/local/az1/tablets/az1-1330809953/Tablet
I0902 18:00:44.603558       1 locks.go:273] Unlocking keyspace ADV for successful action RebuildKeyspace
I0902 18:00:44.617092       1 tm_init.go:544] Keyspace rebuilt: ADV
I0902 18:00:44.628763       1 syslogger.go:129] ADV/-/az1-1330809953 [tablet] created
I0902 18:00:44.629375       1 tablegc.go:154] TableGC: init
I0902 18:00:44.629511       1 tablegc.go:240] TableGC: operating
I0902 18:00:44.629638       1 shard_sync.go:70] Change to tablet state
I0902 18:00:44.629826       1 replmanager.go:87] Replication Manager: starting
I0902 18:00:44.638139       1 replmanager.go:120] Replication is stopped, reconnecting to primary.
I0902 18:00:44.639637       1 replmanager.go:127] Failed to reconnect to primary: no primary tablet for shard ADV/-, will keep retrying.
I0902 18:00:44.639670       1 updatestreamctl.go:228] Enabling update stream, dbname: vt_ADV
I0902 18:00:44.639727       1 state_manager.go:214] Starting transition to REPLICA Serving, timestamp: 0001-01-01 00:00:00 +0000 UTC
I0902 18:00:44.639766       1 executor.go:314] onlineDDL Executor - Acquiring lock - initMutex
I0902 18:00:44.639770       1 executor.go:316] onlineDDL Executor - Acquired lock - initMutex
I0902 18:00:44.639775       1 tablegc.go:205] TableGC - started execution of Close. Acquiring initMutex lock
I0902 18:00:44.639779       1 tablegc.go:207] TableGC - acquired lock
I0902 18:00:44.639782       1 tablegc.go:210] TableGC - no collector is open
I0902 18:00:44.639790       1 engine.go:90] messager Engine - started execution of Close. Acquiring mu lock
I0902 18:00:44.639793       1 engine.go:92] messager Engine - acquired mu lock
I0902 18:00:44.639796       1 engine.go:95] messager Engine is not open
E0902 18:00:44.641042       1 state_manager.go:278] Error transitioning to the desired state: REPLICA, Serving, will keep retrying: Unknown database 'vt_adv' (errno 1049) (sqlstate 42000)
I0902 18:00:44.641072       1 state_manager.go:675] State: exiting lameduck
E0902 18:00:44.641084       1 tm_state.go:323] Cannot start query service: Unknown database 'vt_adv' (errno 1049) (sqlstate 42000)
I0902 18:00:44.641099       1 tm_state.go:400] Publishing state: alias:{cell:"az1" uid:1330809953} hostname:"10.129.4.27" port_map:{key:"grpc" value:15999} port_map:{key:"vt" value:15000} keyspace:"ADV" shard:
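The root cause above is that the shard has no primary yet ("no primary tablet for shard ADV/-"); the REPLICA Serving transition then fails because the vt_adv database has never been initialized. The topology side can be checked with vtctlclient. A minimal sketch; the vtctld address adv-vitess-cluster-vtctld:15999 is an assumption, substitute the real service:
# Does the shard record point at a primary tablet?
$ vtctlclient --server adv-vitess-cluster-vtctld:15999 GetShard ADV/-
# List every tablet and its type; a healthy shard shows exactly one primary
$ vtctlclient --server adv-vitess-cluster-vtctld:15999 ListAllTablets
If no primary exists, one is normally elected with PlannedReparentShard (InitShardPrimary on older versions), after which the replica can initialize and start serving.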
Example - describe
The general error looks like this. Here neither vttablet nor mysqld is ready:
Warning  Unhealthy  4s (x3 over 6s)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500
Warning  Unhealthy  2s (x5 over 6s)  kubelet  Readiness probe failed: dial tcp 10.129.3.183:3306: connect: connection refused
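These two probes are vttablet's HTTP /healthz endpoint on port 15000 and mysqld's TCP socket on port 3306, as the describe output below shows. Both can be tested by hand. A minimal sketch using port-forwarding:
$ kubectl port-forward pod/adv-vitess-cluster-vttablet-az1-1330809953-8066577e 15000:15000 3306:3306 -n rlwy-08 &
# vttablet readiness: anything other than HTTP 200 fails the probe
$ curl -i http://localhost:15000/healthz
# mysqld readiness is a plain TCP connect
$ nc -zv localhost 3306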
Example - describe
We do the following. When we describe the pod, we see that Kubernetes cannot complete the readiness probes against ports 3306 and 15000:
$ kubectl describe pod adv-vitess-cluster-vttablet-az1-1330809953-8066577e -n rlwy-08
Name:                 adv-vitess-cluster-vttablet-az1-1330809953-8066577e
Namespace:            rlwy-08
Priority:             1000
Priority Class Name:  vitess
Node:                 rlwy-08-59r5w-worker-a-th48t.c.product-oce-private.internal/172.18.17.100
Start Time:           Fri, 02 Sep 2022 18:00:04 +0000
Labels:               planetscale.com/cell=az1
                      planetscale.com/cluster=adv-vitess-cluster
                      planetscale.com/component=vttablet
                      planetscale.com/keyspace=ADV
                      planetscale.com/shard=x-x
                      planetscale.com/tablet-index=1
                      planetscale.com/tablet-type=replica
                      planetscale.com/tablet-uid=1330809953
Annotations:          drain.planetscale.com/supported: ensure that the tablet is not a primary
                      k8s.v1.cni.cncf.io/network-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.129.4.27" ], "default": true, "dns": {} }]
                      k8s.v1.cni.cncf.io/networks-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.129.4.27" ], "default": true, "dns": {} }]
                      openshift.io/scc: oce
                      planetscale.com/desired-state-hash: 1598ad46c5d6ee6fe2839415cc44a896
                      planetscale.com/mysqld-config-overrides: [mysqld] lower_case_table_names = 1
                      planetscale.com/observed-shard-generation: 1
                      rollout.planetscale.com/scheduled: spec: imagePullSecrets: null
Status:               Running
IP:                   10.129.4.27
IPs:
  IP:           10.129.4.27
Controlled By:  VitessShard/adv-vitess-cluster-adv-x-x-6510f0e0
Init Containers:
  init-vt-root:
    Container ID:  cri-o://06b5eac928f97be1d41083458e5db8916bb1ce919d3420cb67fb9d30b44419a9
    Image:         gcr.io/product-spanner/oce/vitess/lite:v14.0.1-mysql80
    Image ID:      gcr.io/product-spanner/oce/vitess/lite@sha256:1b0b253aaed5eb27e92d37264edcd4844a012bf4026545ac8a6d4d35b79afc8b
    Port:          <none>
    Host Port:     <none>
    Command:
      bash
      -c
    Args:
      set -ex
      mkdir -p /mnt/vt/bin
      cp --no-clobber /vt/bin/mysqlctld /mnt/vt/bin/
      mkdir -p /mnt/vt/config
      if [[ -d /vt/config/mycnf ]]; then
        cp --no-clobber -R /vt/config/mycnf /mnt/vt/config/
      else
        mkdir -p /mnt/vt/config/mycnf
      fi
      mkdir -p /mnt/vt/vtdataroot
      ln -sf /dev/stderr /mnt/vt/config/stderr.symlink
      echo "log-error = /vt/config/stderr.symlink" > /mnt/vt/config/mycnf/log-error.cnf
      echo "binlog_format=row" > /mnt/vt/config/mycnf/rbr.cnf
      echo "socket = /vt/socket/mysql.sock" > /mnt/vt/config/mycnf/socket.cnf
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 02 Sep 2022 18:00:25 +0000
      Finished:     Fri, 02 Sep 2022 18:00:25 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/vt from vt-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjdjk (ro)
Containers:
  vttablet:
    Container ID:  cri-o://191b9d30157493eff6516f6fb27b7980c6616d16f45718af8853f1ac6c22fd49
    Image:         gcr.io/product-spanner/oce/vitess/lite:v14.0.1-mysql80
    Image ID:      gcr.io/product-spanner/oce/vitess/lite@sha256:1b0b253aaed5eb27e92d37264edcd4844a012bf4026545ac8a6d4d35b79afc8b
    Ports:         15000/TCP, 15999/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /vt/bin/vttablet
    Args:
      --backup_engine_implementation=xtrabackup
      --backup_storage_compress=true
      --backup_storage_implementation=file
      --binlog_use_v3_resharding_mode=true
      --db-config-app-uname=vt_app
      --db-config-dba-uname=vt_dba
      --db-config-filtered-uname=vt_filtered
      --db-config-repl-uname=vt_repl
      --db_charset=utf8mb4
      --enable_replication_reporter=true
      --enable_semi_sync=false
      --enforce_strict_trans_tables=true
      --file_backup_storage_root=/vt/backups/adv-vitess-cluster
      --grpc_max_message_size=67108864
      --grpc_port=15999
      --health_check_interval=5s
      --init_db_name_override=vt_ADV
      --init_keyspace=ADV
      --init_shard=-
      --init_tablet_type=replica
      --logtostderr=true
      --mycnf_socket_file=/vt/socket/mysql.sock
      --mysqlctl_socket=/vt/socket/mysqlctl.sock
      --port=15000
      --queryserver-config-max-result-size=100000
      --queryserver-config-pool-size=96
      --queryserver-config-query-timeout=900
      --queryserver-config-stream-pool-size=96
      --queryserver-config-transaction-cap=300
      --restore_concurrency=10
      --restore_from_backup=false
      --service_map=grpc-queryservice,grpc-tabletmanager,grpc-updatestream
      --tablet-path=az1-1330809953
      --tablet_hostname=$(POD_IP)
      --topo_global_root=/vitess/adv-vitess-cluster/global
      --topo_global_server_address=adv-vitess-cluster-etcd-07a83994-client.rlwy-08.svc:2379
      --topo_implementation=etcd2
      --wait_for_backup_interval=10s
      --xbstream_restore_flags=--parallel=2
      --xtrabackup_backup_flags=--parallel=1
      --xtrabackup_stream_mode=xbstream
      --xtrabackup_stripes=8
      --xtrabackup_user=vt_dba
    State:          Running
      Started:      Fri, 02 Sep 2022 18:00:44 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 02 Sep 2022 18:00:28 +0000
      Finished:     Fri, 02 Sep 2022 18:00:33 +0000
    Ready:          False
    Restart Count:  2
    Limits:
      memory:  256Mi
    Requests:
      cpu:     200m
      memory:  256Mi
    Liveness:   http-get http://:web/debug/status delay=300s timeout=1s period=10s #success=1 #failure=30
    Readiness:  http-get http://:web/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      VTROOT:         /vt
      VTDATAROOT:     /vt/vtdataroot
      VT_MYSQL_ROOT:  /usr
      MYSQL_FLAVOR:   MySQL80
      EXTRA_MY_CNF:   /vt/config/mycnf/rbr.cnf:/vt/config/mycnf/log-error.cnf:/vt/config/mycnf/socket.cnf:/pod-config/mysqld-config-overrides:/vt/config/mycnf/vtbackup.cnf
      POD_IP:          (v1:status.podIP)
    Mounts:
      /pod-config from pod-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjdjk (ro)
      /vt/backups from vitess-backups (rw)
      /vt/config from vt-root (ro,path="config")
      /vt/secrets/db-init-script from db-init-script-secret (ro)
      /vt/socket from vt-root (rw,path="socket")
      /vt/vtdataroot from vt-root (rw,path="vtdataroot")
  mysqld:
    Container ID:  cri-o://d51134e7ce585abe628ec2ca050a56280a7eb4fe3eef4f0c09ed92535e9cbca2
    Image:         gcr.io/product-spanner/oce/vitess/lite:v14.0.1-mysql80
    Image ID:      gcr.io/product-spanner/oce/vitess/lite@sha256:1b0b253aaed5eb27e92d37264edcd4844a012bf4026545ac8a6d4d35b79afc8b
    Port:          3306/TCP
    Host Port:     0/TCP
    Command:
      /vt/bin/mysqlctld
    Args:
      --db-config-dba-uname=vt_dba
      --db_charset=utf8mb4
      --init_db_sql_file=/vt/secrets/db-init-script/init_db.sql
      --logtostderr=true
      --mysql_socket=/vt/socket/mysql.sock
      --socket_file=/vt/socket/mysqlctl.sock
      --tablet_uid=1330809953
      --wait_time=2h0m0s
    State:          Running
      Started:      Fri, 02 Sep 2022 18:00:26 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1256Mi
    Requests:
      cpu:     200m
      memory:  256Mi
    Readiness:  tcp-socket :3306 delay=0s timeout=1s period=2s #success=1 #failure=3
    Environment:
      VTROOT:         /vt
      VTDATAROOT:     /vt/vtdataroot
      VT_MYSQL_ROOT:  /usr
      MYSQL_FLAVOR:   MySQL80
      EXTRA_MY_CNF:   /vt/config/mycnf/rbr.cnf:/vt/config/mycnf/log-error.cnf:/vt/config/mycnf/socket.cnf:/pod-config/mysqld-config-overrides:/vt/config/mycnf/vtbackup.cnf
      POD_IP:          (v1:status.podIP)
    Mounts:
      /pod-config from pod-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjdjk (ro)
      /vt/bin from vt-root (ro,path="bin")
      /vt/config from vt-root (ro,path="config")
      /vt/secrets/db-init-script from db-init-script-secret (ro)
      /vt/socket from vt-root (rw,path="socket")
      /vt/vtdataroot from vt-root (rw,path="vtdataroot")
  mysqld-exporter:
    Container ID:  cri-o://b74444c299442ad1365a09e5d2aad1cd1b10e41c145ed6eb29e4c700599a4afd
    Image:         gcr.io/product-spanner/oce/prom/mysqld-exporter:v0.11.0
    Image ID:      gcr.io/product-spanner/oce/prom/mysqld-exporter@sha256:9f4fb61cca309cb4a8c1b9ed9fb4aa75af0f7a21f36d3954667db37c062a0172
    Port:          9104/TCP
    Host Port:     0/TCP
    Command:
      /bin/mysqld_exporter
    Args:
      --config.my-cnf=/vt/vtdataroot/vt_1330809953/my.cnf
      --collect.info_schema.tables.databases=sys,_vt
    State:          Running
      Started:      Fri, 02 Sep 2022 18:00:28 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  128Mi
    Requests:
      cpu:     10m
      memory:  32Mi
    Environment:
      DATA_SOURCE_NAME:  vt_dba@unix(/vt/socket/mysql.sock)/
    Mounts:
      /pod-config from pod-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjdjk (ro)
      /vt/bin from vt-root (ro,path="bin")
      /vt/config from vt-root (ro,path="config")
      /vt/secrets/db-init-script from db-init-script-secret (ro)
      /vt/socket from vt-root (rw,path="socket")
      /vt/vtdataroot from vt-root (rw,path="vtdataroot")
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  vitess-backups:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  adv-vitess-backup
    ReadOnly:   false
  vt-root:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  pod-config:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.annotations['planetscale.com/mysqld-config-overrides'] -> mysqld-config-overrides
  db-init-script-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  adv-vitess-cluster-config
    Optional:    false
  kube-api-access-rjdjk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                     From               Message
  ----     ------          ----                    ----               -------
  Normal   Scheduled       143m                    default-scheduler  Successfully assigned rlwy-08/adv-vitess-cluster-vttablet-az1-1330809953-8066577e to rlwy-08-59r5w-worker-a-th48t.c.product-oce-private.internal
  Normal   AddedInterface  143m                    multus             Add eth0 [10.129.4.27/23] from openshift-sdn
  Normal   Pulling         143m                    kubelet            Pulling image "gcr.io/product-spanner/oce/vitess/lite:v14.0.1-mysql80"
  Normal   Pulled          143m                    kubelet            Successfully pulled image "gcr.io/product-spanner/oce/vitess/lite:v14.0.1-mysql80" in 15.951468551s
  Normal   Created         143m                    kubelet            Created container init-vt-root
  Normal   Started         143m                    kubelet            Started container init-vt-root
  Normal   Pulling         143m                    kubelet            Pulling image "gcr.io/product-spanner/oce/prom/mysqld-exporter:v0.11.0"
  Normal   Started         143m                    kubelet            Started container mysqld
  Normal   Created         143m                    kubelet            Created container mysqld
  Normal   Pulled          143m                    kubelet            Container image "gcr.io/product-spanner/oce/vitess/lite:v14.0.1-mysql80" already present on machine
  Normal   Pulled          143m                    kubelet            Successfully pulled image "gcr.io/product-spanner/oce/prom/mysqld-exporter:v0.11.0" in 1.025161s
  Normal   Started         143m (x2 over 143m)     kubelet            Started container vttablet
  Normal   Created         143m (x2 over 143m)     kubelet            Created container vttablet
  Normal   Pulled          143m (x2 over 143m)     kubelet            Container image "gcr.io/product-spanner/oce/vitess/lite:v14.0.1-mysql80" already present on machine
  Normal   Created         143m                    kubelet            Created container mysqld-exporter
  Normal   Started         143m                    kubelet            Started container mysqld-exporter
  Warning  Unhealthy       143m (x5 over 143m)     kubelet            Readiness probe failed: dial tcp 10.129.4.27:3306: connect: connection refused
  Warning  Unhealthy       143m (x3 over 143m)     kubelet            Readiness probe failed: Get "http://10.129.4.27:15000/healthz": dial tcp 10.129.4.27:15000: connect: connection refused
  Warning  Unhealthy       3m37s (x949 over 143m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
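Instead of scanning the full describe output, the probe events alone can be pulled with a field selector. A minimal sketch:
$ kubectl get events -n rlwy-08 \
    --field-selector involvedObject.name=adv-vitess-cluster-vttablet-az1-1330809953-8066577e \
    --sort-by=.lastTimestamp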
