Introduction
It uses the /etc/mysql/my.cnf file.
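A typical minimal [mysqld] section in that file might look like the sketch below; the paths are common Debian/Ubuntu defaults and are assumptions for illustration, not values taken from this note.
[mysqld]
# assumed example values; adjust for your installation
datadir      = /var/lib/mysql
socket       = /var/run/mysqld/mysqld.sock
bind-address = 127.0.0.1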
--ansi option
We do it like this:
mysqld --ansi
--innodb-write-io-threads option
We do it like this:
mysqld --innodb-write-io-threads=#
--verbose option
Shows all the options that can be used. We do it like this:
mysqld --verbose --help
mysqldump -u [user name] -p [password] [options] [database_name] [tablename] > [dumpfilename.sql]
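For example, to dump a single database to a file (the database and file names here are just illustrative):
mysqldump -u root -p mydb > mydb-backup.sql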
You can back up and restore the entire database much faster by copying the data files instead of dumping SQL.
As your database grows, dumping SQL files gets slower and restoring them takes even longer; for a 100 GB database, for example, the restore alone might take around a day or more, which is not ideal.
Things to consider before doing this (a sketch follows this list):
- You MUST stop the database before copying, and the shutdown SHOULD be graceful. For example, if you're using Docker, consider the -t option to increase the stop timeout, e.g. docker stop -t 8000.
- You SHOULD also back up your database configuration files and the exact MySQL version for the restore; Docker helps a lot here as well, since you can keep them as code.
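A minimal sketch of such a file-level backup, assuming the data directory is at /var/lib/mysql on the host and the server runs in a Docker container named mysql (both names are assumptions for illustration):
# stop the server gracefully, giving it plenty of time to shut down cleanly
docker stop -t 8000 mysql
# copy the data directory and the configuration to the backup location
cp -a /var/lib/mysql /backup/mysql-data
cp -a /etc/mysql /backup/mysql-config
# start the server again
docker start mysql
Restoring is the reverse: copy the files back onto a host running the exact same MySQL version, then start the server.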
mysqldump -uroot -p --host=127.0.0.1 --port=3306 --all-databases \
  --master-data=2 > replicationdump.sql
Here we use the --master-data=2 option so that the backup file contains a comment with a CHANGE MASTER statement. That comment records the replication coordinates at the time of the backup, and we will need those coordinates later to update the master information on the slave instance. Here is an example of that comment:
--
-- Position to start replication or point-in-time recovery from
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=349;
mysql> CHANGE MASTER TO
    ->   MASTER_HOST='127.0.0.1',
    ->   MASTER_USER='replication',
    ->   MASTER_PASSWORD='replication',
    ->   MASTER_LOG_FILE='mysql-bin.000001',
    ->   MASTER_LOG_POS=349;
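Putting the pieces together on the slave side, a typical sequence (a sketch using the same placeholder credentials and coordinates as above) is to load the dump, run the CHANGE MASTER statement with the coordinates taken from the comment, and then start replication:
mysql -u root -p < replicationdump.sql
mysql> CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_USER='replication', MASTER_PASSWORD='replication', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=349;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G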
If you want to generate a backup of just the database structure, use the --no-data option with the mysqldump command.
mysqldump -h 127.0.0.1 -P 15306 -u user --no-data ADV
mysqldump -u root -p dbName tableName --where="id>=10000 AND id<20000" > file.sql
Federation allows separate Vitess Operator instances, in separate Kubernetes clusters, to coordinate to deploy and manage a single Vitess Cluster that spans multiple Kubernetes clusters. Note that this support consists of low-level capabilities that must be combined with additional Kubernetes plug-ins (like some form of cross-cluster LB) and other capabilities (like federated etcd) to assemble a federated system.
...
The basic principle of Vitess Operator federation is to write a set of VitessCluster object specifications that, when deployed in separate Kubernetes clusters, each bring up and manage the pieces of the Vitess cluster that live in that Kubernetes cluster. These pieces should then have some way to discover each other and connect up to form a single Vitess cluster.
Ordinarily, deploying several VitessCluster CRDs in several different Kubernetes clusters would result in completely independent Vitess clusters that don't know about each other. The key to federation is ensuring that all these Vitess components are pointed at a shared, global Vitess lockserver, which typically takes the form of an etcd cluster.
Once Vitess components are pointed at a shared, global topology service, they will use that to find each other's addresses to perform query routing and set up MySQL replication.
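A sketch of the key idea: each VitessCluster spec points at the same shared global lockserver instead of letting the operator deploy its own. The etcd address and root path below are made-up placeholders, not values from the original text.
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  # point every federated cluster at the same externally managed global etcd
  globalLockserver:
    external:
      implementation: etcd2
      address: global-etcd.example.com:2379
      rootPath: /vitess/global
  ...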
> kubectl apply -f operator.yaml
customresourcedefinition.apiextensions.k8s.io/etcdlockservers.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessbackups.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessbackupstorages.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitesscells.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessclusters.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitesskeyspaces.planetscale.com created
customresourcedefinition.apiextensions.k8s.io/vitessshards.planetscale.com created
serviceaccount/vitess-operator created
role.rbac.authorization.k8s.io/vitess-operator created
rolebinding.rbac.authorization.k8s.io/vitess-operator created
deployment.apps/vitess-operator created
priorityclass.scheduling.k8s.io/vitess-operator-control-plane created
priorityclass.scheduling.k8s.io/vitess created
default   vitess-operator-7794c74b9b-5hcxn   0/1   ContainerCreating   0   8s
and later
default   vitess-operator-7794c74b9b-5hcxn   1/1   Running             0   97s
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
...
vitess-operator-8454d86687-4wfnc   1/1     Running   0          2m29s
minikube start --cpus=4 --memory=4000 --disk-size=32g
kubectl apply -f operator.yaml
cd vitess/examples/operator
kubectl apply -f 101_initial_cluster.yaml
tabletPools:
- cell: zone1
  type: replica
  replicas: 2
$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS   AGE
example-etcd-faf13de3-1                          1/1     Running   0          78s
example-etcd-faf13de3-2                          1/1     Running   0          78s
example-etcd-faf13de3-3                          1/1     Running   0          78s
example-vttablet-zone1-2469782763-bfadd780       3/3     Running   1          78s
example-vttablet-zone1-2548885007-46a852d0       3/3     Running   1          78s
example-zone1-vtctld-1d4dcad0-59d8498459-kwz6b   1/1     Running   2          78s
example-zone1-vtgate-bc6cde92-6bd99c6888-vwcj5   1/1     Running   2          78s
vitess-operator-8454d86687-4wfnc                 1/1     Running   0          2m29s
kubectl get pods
NAME                                                      READY   STATUS    RESTARTS       AGE
adv-vitess-cluster-az1-vtctld-a22f4b1a-86f6d4b78c-ldz9h   1/1     Running   3 (2m6s ago)   10m
adv-vitess-cluster-az1-vtctld-a22f4b1a-86f6d4b78c-vwdhp   1/1     Running   4 (105s ago)   10m
adv-vitess-cluster-az1-vtgate-498e7697-5458d77dc8-mr6pn   1/1     Running   4 (2m3s ago)   10m
adv-vitess-cluster-az1-vtgate-498e7697-5458d77dc8-thgmg   1/1     Running   4 (102s ago)   10m
adv-vitess-cluster-az2-vtctld-d97301ea-764d4ddc6c-d7fpq   1/1     Running   3 (2m3s ago)   10m
adv-vitess-cluster-az2-vtctld-d97301ea-764d4ddc6c-jkmzd   1/1     Running   4 (104s ago)   10m
adv-vitess-cluster-az2-vtgate-9ea92c94-6cc44cd6b-5jcqm    1/1     Running   4 (113s ago)   10m
adv-vitess-cluster-az2-vtgate-9ea92c94-6cc44cd6b-pltm2    1/1     Running   4 (111s ago)   10m
adv-vitess-cluster-etcd-07a83994-1                        1/1     Running   1 (116s ago)   10m
adv-vitess-cluster-etcd-07a83994-2                        1/1     Running   1 (113s ago)   10m
adv-vitess-cluster-etcd-07a83994-3                        1/1     Running   1 (110s ago)   10m
adv-vitess-cluster-vttablet-az1-1330809953-8066577e       3/3     Running   2 (2m ago)     10m
adv-vitess-cluster-vttablet-az1-3415112598-0c0e8ee0       3/3     Running   2 (119s ago)   10m
adv-vitess-cluster-vttablet-az1-4135592426-c2dc2c3d       3/3     Running   2 (112s ago)   10m
adv-vitess-cluster-vttablet-az2-0915606989-18937e48       3/3     Running   2 (117s ago)   10m
adv-vitess-cluster-vttablet-az2-1366268705-cdd98d67       3/3     Running   2 (114s ago)   10m
adv-vitess-cluster-vttablet-az2-4058700183-5f0ba1e4       3/3     Running   2 (115s ago)   10m
vitess-operator-7794c74b9b-s6gc8                          1/1     Running   0              11m
$ kubectl logs example-vttablet-zone1-2469782763-bfadd780
error: a container name must be specified for pod example-vttablet-zone1-2469782763-bfadd780, choose one of: [vttablet mysqld mysqld-exporter] or one of the init containers: [init-vt-root init-mysql-socket]
$ kubectl get pods example-vttablet-zone1-2548885007-46a852d0 -o jsonpath={.spec.containers[*].name}
vttablet mysqld mysqld-exporter
kubectl logs example-vttablet-zone1-2469782763-bfadd780 -c vttablet
kubectl describe pods <podname>
./pf.sh &
alias vtctlclient="vtctlclient -server=localhost:15999"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u user"
#!/bin/sh

kubectl port-forward --address localhost "$(kubectl get service --selector="planetscale.com/component=vtctld" -o name | head -n1)" 15000 15999 &
process_id1=$!
kubectl port-forward --address localhost "$(kubectl get service --selector="planetscale.com/component=vtgate,!planetscale.com/cell" -o name | head -n1)" 15306:3306 &
process_id2=$!
sleep 2
echo "You may point your browser to http://localhost:15000, use the following aliases as shortcuts:"
echo 'alias vtctlclient="vtctlclient -server=localhost:15999 -logtostderr"'
echo 'alias mysql="mysql -h 127.0.0.1 -P 15306 -u user"'
echo "Hit Ctrl-C to stop the port forwards"
wait $process_id1
wait $process_id2
--buffer_max_failover_duration=10s
--buffer_min_time_between_failovers=20s
--buffer_size=1000
--cell=az1
--cells_to_watch=az1,az2
--enable_buffer=true
--grpc_max_message_size=67108864
--grpc_port=15999
--logtostderr=true
--mysql_auth_server_impl=static
--mysql_auth_server_static_file=/vt/secrets/vtgate-static-auth/users.json
--mysql_auth_static_reload_interval=30s
--mysql_server_port=3306
--mysql_server_version=8.0.13-Vitess
--port=15000
--service_map=grpc-vtgateservice
--tablet_types_to_wait=MASTER,REPLICA
--topo_global_root=/vitess/adv-vitess-cluster/global
--topo_global_server_address=adv-vitess-cluster-etcd-07a83994-client.default.svc:2379
--topo_implementation=etcd2
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  images:
    vtctld: vitess/lite:latest
    vtgate: vitess/lite:latest
    vttablet: vitess/lite:latest
    vtbackup: vitess/lite:latest
    mysqld:
      mysql56Compatible: vitess/lite:latest
    mysqldExporter: prom/mysqld-exporter:v0.11.0
  cells:
  - name: zone1
    gateway:
      authentication:
        static:
          secret:
            name: example-cluster-config
            key: users.json
      replicas: 1
      extraFlags:
        mysql_server_version: "8.0.13-Vitess"
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          memory: 256Mi
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  images:
    vtctld: vitess/lite:mysql80
    vtgate: vitess/lite:mysql80
    vttablet: vitess/lite:mysql80
    vtbackup: vitess/lite:mysql80
    mysqld:
      mysql80Compatible: vitess/lite:mysql80
    mysqldExporter: prom/mysqld-exporter:v0.11.0
  cells:
  - name: zone1
    gateway:
      authentication:
        static:
          secret:
            name: example-cluster-config
            key: users.json
      replicas: 1
      extraFlags:
        mysql_server_version: "8.0.13-Vitess"
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          memory: 256Mi
$ kubectl get pods
NAME                                             READY   STATUS    RESTARTS      AGE
example-etcd-faf13de3-1                          1/1     Running   1 (10m ago)   17m
example-etcd-faf13de3-2                          1/1     Running   1 (10m ago)   17m
example-etcd-faf13de3-3                          1/1     Running   1 (10m ago)   17m
example-vttablet-zone1-1168688798-4251d3e4       1/3     Running   2 (10m ago)   17m
example-vttablet-zone1-1385747125-70285362       1/3     Running   2 (10m ago)   17m
example-vttablet-zone1-2469782763-bfadd780       1/3     Running   2 (10m ago)   17m
example-vttablet-zone1-2548885007-46a852d0       1/3     Running   2 (10m ago)   17m
example-vttablet-zone1-3798380744-870319fc       1/3     Running   2 (10m ago)   17m
example-zone1-vtctld-1d4dcad0-5d7ffbfc65-bqkkb   1/1     Running   3 (10m ago)   17m
example-zone1-vtgate-bc6cde92-6dd4b45794-5gqdh   1/1     Running   3 (10m ago)   17m
vitess-operator-5f47c6c45d-v7pn6                 1/1     Running   0             18m
kubectl get events
E0317 08:36:11.606146       1 srv_vschema.go:207] node doesn't exist: /vitess/example/global/cells/zone1/CellInfo: UpdateSrvVSchema(zone1) failed
F0317 08:36:11.606193       1 vttablet.go:109] failed to parse -tablet-path or initialize DB credentials: node doesn't exist: /vitess/example/global/cells/zone1/CellInfo initeKeyspaceShardTopo: failed to RebuildSrvVSchema
I0317 08:35:57.155966       1 mysqld.go:398] Mysqld.Start(1647506156) stderr: 2022-03-17T08:35:57.150820Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.23) starting as process 567
I0317 08:35:57.157253       1 mysqld.go:398] Mysqld.Start(1647506156) stderr: 2022-03-17T08:35:57.157109Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
I0317 08:35:57.682198       1 mysqld.go:398] Mysqld.Start(1647506156) stderr: Killed
I0317 08:35:57.685847       1 mysqld.go:404] Mysqld.Start(1647506156) stdout: 2022-03-17T08:35:57.685469Z mysqld_safe mysqld from pid file /vt/vtdataroot/vt_1168688798/mysql.pid ended
I0317 08:35:57.685880       1 mysqld.go:398] Mysqld.Start(1647506156) stderr: 2022-03-17T08:35:57.685469Z mysqld_safe mysqld from pid file /vt/vtdataroot/vt_1168688798/mysql.pid ended
I0317 08:35:57.686575       1 mysqld.go:417] Mysqld.Start(1647506156) exit: <nil>
$ minikube ssh --user root
docker@minikube $ docker ps
docker@minikube $ sudo docker exec -it FQDN_CONTAINER bash
vitess@example-vttablet-zone1-1385747125-70285362:/$ /vt/bin/mysqlctld \
  --db-config-dba-uname=vt_dba \
  --db_charset=utf8mb4 \
  --init_db_sql_file=/vt/secrets/db-init-script/init_db.sql \
  --logtostderr=true \
  --mysql_socket=/vt/socket/mysql.sock \
  --socket_file=/vt/socket/mysqlctl.sock \
  --tablet_uid=1385747125 \
  --wait_time=2h0m0s
apiVersion: planetscale.com/v2
kind: VitessCluster
metadata:
  name: example
spec:
  images:
    vtctld: vitess/lite:mysql80
    vtgate: vitess/lite:mysql80
    vttablet: vitess/lite:mysql80
    vtbackup: vitess/lite:mysql80
    mysqld:
      mysql80Compatible: vitess/lite:mysql80
    mysqldExporter: prom/mysqld-exporter:v0.11.0
  cells:
  - name: zone1
    gateway:
      authentication:
        static:
          secret:
            name: example-cluster-config
            key: users.json
      replicas: 1
      extraFlags:
        mysql_server_version: "8.0.13-Vitess"
      resources:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          memory: 256Mi
  vitessDashboard:
    cells:
    - zone1
    extraFlags:
      security_policy: read-only
    replicas: 1
    resources:
      limits:
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
  keyspaces:
  - name: ADV
    turndownPolicy: Immediate
    partitionings:
    - equal:
        parts: 1
        shardTemplate:
          databaseInitScriptSecret:
            name: example-cluster-config
            key: init_db.sql
          replication:
            enforceSemiSync: false
          tabletPools:
          - cell: zone1
            type: replica
            replicas: 1
            vttablet:
              extraFlags:
                db_charset: utf8mb4
              resources:
                limits:
                  memory: 256Mi
                requests:
                  cpu: 100m
                  memory: 256Mi
            mysqld:
              resources:
                limits:
                  memory: 1024Mi
                requests:
                  cpu: 100m
                  memory: 512Mi
              configOverrides: |
                [mysqld]
                lower_case_table_names = 1
            dataVolumeClaimTemplate:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
EnforceSemiSync means Vitess will configure MySQL to require semi-sync acknowledgement of all transactions while forbidding fallback to asynchronous replication under any circumstance.
...
WARNING: Do not enable this if the shard has fewer than 3 master-eligible replicas, as that may lead to master unavailability during routine maintenance.
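A fragment-level sketch (not taken from the original manifest) of what the shardTemplate might look like when enabling it with three master-eligible replicas, following the same structure as the spec above:
shardTemplate:
  replication:
    # require semi-sync acknowledgement; only safe with >= 3 master-eligible replicas
    enforceSemiSync: true
  tabletPools:
  - cell: zone1
    type: replica
    replicas: 3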
$ oc -n rlwy03 get pods
NAME                                                      READY   STATUS    RESTARTS        AGE
adv-vitess-cluster-az1-vtctld-a22f4b1a-6947f5bbb6-627hb   1/1     Running   0               3m40s
adv-vitess-cluster-az1-vtctld-a22f4b1a-6947f5bbb6-86t4t   1/1     Running   1 (3m16s ago)   3m40s
adv-vitess-cluster-az1-vtgate-498e7697-74c5dd4fdc-9rtr5   1/1     Running   2 (3m10s ago)   3m40s
adv-vitess-cluster-az1-vtgate-498e7697-74c5dd4fdc-hfhmc   1/1     Running   2 (3m8s ago)    3m40s
adv-vitess-cluster-az2-vtctld-d97301ea-6fcd788464-w4ppn   1/1     Running   2 (3m10s ago)   3m39s
adv-vitess-cluster-az2-vtctld-d97301ea-6fcd788464-z7bzq   1/1     Running   0               3m40s
adv-vitess-cluster-az2-vtgate-9ea92c94-85587bb7f7-cddb7   1/1     Running   0               3m40s
adv-vitess-cluster-az2-vtgate-9ea92c94-85587bb7f7-f66zb   1/1     Running   2 (3m8s ago)    3m40s
adv-vitess-cluster-etcd-07a83994-1                        1/1     Running   0               3m40s
adv-vitess-cluster-etcd-07a83994-2                        1/1     Running   0               3m40s
adv-vitess-cluster-etcd-07a83994-3                        1/1     Running   0               3m40s
adv-vitess-cluster-vttablet-az1-1330809953-8066577e       2/3     Running   2 (3m8s ago)    3m40s
adv-vitess-cluster-vttablet-az1-3415112598-0c0e8ee0       2/3     Running   1 (2m45s ago)   3m40s
adv-vitess-cluster-vttablet-az1-4135592426-c2dc2c3d       0/3     Pending   0               3m40s
adv-vitess-cluster-vttablet-az2-0915606989-18937e48       2/3     Running   2 (3m11s ago)   3m40s
adv-vitess-cluster-vttablet-az2-1366268705-cdd98d67       0/3     Pending   0               3m40s
adv-vitess-cluster-vttablet-az2-4058700183-5f0ba1e4       2/3     Running   1 (3m ago)      3m40s
... let’s say we think our primary is not in good shape and we’d like to force a failover to the replica. We could use the drain feature of the operator to request a graceful failover to a replica. The operator will choose another suitable replica if one is available, healthy, and not itself drained.
$ kubectl annotate pod <podname> drain.planetscale.com/started="Draining for blog"
pod/example-vttablet-zone1-2469782763-bfadd780 annotated
$ kubectl annotate pod -l planetscale.com/component=vttablet drain.planetscale.com/started-
Example
We do it like this:
CREATE EVENT myevent ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 HOUR DO UPDATE myschema.mytable SET myc...