# Setting Up a MySQL Master-Slave Architecture with Docker
This post demonstrates how to build a local one-master, one-slave MySQL environment with Docker Compose.
Note:
- This is a quick experimental setup; a real production deployment would differ in several ways.
# Traditional master-slave replication
Create a `mysql_ms` folder and put the following `docker-compose.yml` in it:
```yaml
version: '3.1'
services:
  mysql-master:
    restart: always
    image: mysql:5.7.28
    container_name: mysql-master
    ports:
      - 33061:3306
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: 123456
    command:
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
      --max_allowed_packet=128M
      --sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO"
      --log-bin=mysqlbinlog
      --binlog-format=ROW
      --server-id=1
      --sync_binlog=1
      --binlog-ignore-db=information_schema
      --binlog-ignore-db=performance_schema
      --binlog-ignore-db=sys
    volumes:
      - /Users/wenwl/docker/mysql_ms/mconf:/etc/mysql
      - /Users/wenwl/docker/mysql_ms/mlogs:/var/log/mysql
      - /Users/wenwl/docker/mysql_ms/mdata:/var/lib/mysql
  mysql-slave-1:
    restart: always
    image: mysql:5.7.28
    container_name: mysql-slave-1
    ports:
      - 33071:3306
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: 123456
    command:
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
      --max_allowed_packet=128M
      --sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO"
      --server-id=2
      --relay_log=mysql-relay-bin
      --read_only=1
    volumes:
      - /Users/wenwl/docker/mysql_ms/s1onf:/etc/mysql
      - /Users/wenwl/docker/mysql_ms/s1logs:/var/log/mysql
      - /Users/wenwl/docker/mysql_ms/s1data:/var/lib/mysql
```
Log in to the master, grant the replication privileges, and check the binlog coordinates:
```sql
# Grant privileges
mysql> grant replication slave on *.* to 'root'@'%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> grant all privileges on *.* to 'root'@'%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
# Check the master status; the slave configuration below needs this binlog file and position
mysql> show master status;
+--------------------+----------+--------------+-------------------------------------------+-------------------+
| File               | Position | Binlog_Do_DB | Binlog_Ignore_DB                          | Executed_Gtid_Set |
+--------------------+----------+--------------+-------------------------------------------+-------------------+
| mysqlbinlog.000004 |      154 |              | information_schema,performance_schema,sys |                   |
+--------------------+----------+--------------+-------------------------------------------+-------------------+
1 row in set (0.01 sec)
```
Then log in to the slave and run:
```sql
# Check the slave status
show slave status \G
# Point the slave at the master, using the coordinates from SHOW MASTER STATUS above
change master to master_host='192.168.0.100', master_port=33061, master_user='root', master_password='123456', master_log_file='mysqlbinlog.000004', master_log_pos=154;
# Start/stop replication
start slave;
# stop slave;
```
Here I used the host-exposed address and port; alternatively, run `docker network inspect mysql_ms_default` to look up the containers' internal IP addresses (and change the port accordingly). Adjust these values to your own environment.
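If you go the `docker network inspect` route, the container IPs are buried in a fairly large JSON document. A minimal sketch of pulling them out, assuming the usual shape of the inspect output (the sample JSON below is abridged and illustrative, not a real capture):

```python
import json

# Abridged example of `docker network inspect mysql_ms_default` output;
# the real command prints a JSON array whose first element has a "Containers" map.
inspect_output = """
[
  {
    "Name": "mysql_ms_default",
    "Containers": {
      "abc123": {"Name": "mysql-master", "IPv4Address": "172.18.0.2/16"},
      "def456": {"Name": "mysql-slave-1", "IPv4Address": "172.18.0.3/16"}
    }
  }
]
"""

def container_ips(inspect_json: str) -> dict:
    """Map container name -> IP address (CIDR suffix stripped)."""
    network = json.loads(inspect_json)[0]
    return {c["Name"]: c["IPv4Address"].split("/")[0]
            for c in network["Containers"].values()}

print(container_ips(inspect_output))
# {'mysql-master': '172.18.0.2', 'mysql-slave-1': '172.18.0.3'}
```

With the internal IPs you would then use port 3306 directly instead of the host-mapped ports.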
One thing worth thinking about in the slave's `change master to ...` statement is the `master_log_file` and `master_log_pos` settings. If we add a brand-new slave later, does it have to replay everything from that old position? No: the recommended approach is to load the master's existing data into the new slave with `mysqldump` first, and then start replicating from the newer position recorded at dump time, which reduces the load on the master.
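When the dump is taken with `mysqldump --master-data=2`, the binlog coordinates are written near the top of the dump file as a commented-out `CHANGE MASTER TO` line. A small sketch of extracting them, assuming that header shape (the sample header below is illustrative):

```python
import re

# Assumed first lines of a dump taken with `mysqldump --master-data=2`;
# the coordinates appear as a comment near the top of the file.
dump_header = """-- MySQL dump 10.13  Distrib 5.7.28
--
-- CHANGE MASTER TO MASTER_LOG_FILE='mysqlbinlog.000004', MASTER_LOG_POS=154;
"""

def master_coordinates(dump_text: str):
    """Extract the (log_file, log_pos) pair recorded in the dump."""
    m = re.search(
        r"CHANGE MASTER TO MASTER_LOG_FILE='([^']+)', MASTER_LOG_POS=(\d+)",
        dump_text)
    if not m:
        raise ValueError("no CHANGE MASTER line found in dump")
    return m.group(1), int(m.group(2))

print(master_coordinates(dump_header))  # ('mysqlbinlog.000004', 154)
```

Those are exactly the values to plug into the new slave's `change master to ...` statement after restoring the dump.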
Run `show slave status \G` again; replication is working if you see the following:
```
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
```
From here you can test that writes on the master replicate to the slave.
Since I'm already familiar with the process, I skipped many of the intermediate steps. For a more complete walkthrough, beginners can follow this article (kept here as a bookmark):
- https://www.cnblogs.com/haima/p/14341903.html
# Introducing semi-synchronous replication
Install the `rpl_semi_sync_master` plugin on the master:
```sql
# Install the plugin; you can check whether dynamic plugin loading is supported with: select @@have_dynamic_loading;
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.01 sec)
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | OFF        |
| rpl_semi_sync_master_timeout              | 10000      |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
+-------------------------------------------+------------+
6 rows in set (0.00 sec)
# Enable the plugin and tune it
mysql> set global rpl_semi_sync_master_enabled=1;
Query OK, 0 rows affected (0.00 sec)
mysql> set global rpl_semi_sync_master_timeout=1000;
Query OK, 0 rows affected (0.00 sec)
```
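The `rpl_semi_sync_master_timeout` setting (milliseconds) controls the key behavior: the master waits for a slave acknowledgment for at most that long, and if no ACK arrives it degrades to plain asynchronous replication rather than blocking commits forever. A purely illustrative toy model of that decision, not actual server code:

```python
# Toy model of rpl_semi_sync_master_timeout: the master waits up to the
# timeout for a slave ACK (AFTER_SYNC wait point); if the ACK does not
# arrive in time, it falls back to asynchronous replication.

def commit(ack_delay_ms: int, timeout_ms: int = 1000) -> str:
    """Return which mode this commit effectively used."""
    if ack_delay_ms <= timeout_ms:
        return "semi-sync"       # slave ACK arrived within the timeout
    return "async-fallback"      # timed out; master degrades to async

print(commit(ack_delay_ms=20))    # semi-sync
print(commit(ack_delay_ms=5000))  # async-fallback
```

This is why setting the timeout to 1000 ms above is a trade-off: a shorter timeout bounds commit latency but falls back to async sooner when a slave lags.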
Install the `rpl_semi_sync_slave` plugin on every slave, and restart the slave threads once it's configured:
```sql
# Install the plugin
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.00 sec)
mysql> show variables like '%semi%';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_enabled     | OFF   |
| rpl_semi_sync_slave_trace_level | 32    |
+---------------------------------+-------+
2 rows in set (0.00 sec)
# Enable the plugin
mysql> set global rpl_semi_sync_slave_enabled=1;
Query OK, 0 rows affected (0.00 sec)
# Restart the slave threads
mysql> stop slave;
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
```
Now you can test whether the semi-synchronous replication mechanism is in effect. After data replicates successfully, the master's log also confirms that MySQL really used semi-sync replication:
```
2022-03-22T17:34:08.587504Z 2 [Note] Semi-sync replication initialized for transactions.
2022-03-22T17:34:08.587582Z 2 [Note] Semi-sync replication enabled on the master.
2022-03-22T17:34:08.587832Z 0 [Note] Starting ack receiver thread
2022-03-22T17:41:46.873872Z 35 [Note] While initializing dump thread for slave with UUID <3cf9fa4f-a9f9-11ec-bf80-0242ac120002>, found a zombie dump thread with the same UUID. Master is killing the zombie dump thread(33).
2022-03-22T17:41:46.874395Z 33 [Note] Stop asynchronous binlog_dump to slave (server_id: 2)
2022-03-22T17:41:46.874927Z 35 [Note] Start binlog_dump to master_thread_id(35) slave_server(2), pos(mysqlbinlog.000004, 2253)
2022-03-22T17:41:46.874995Z 35 [Note] Start semi-sync binlog_dump to slave (server_id: 2), pos(mysqlbinlog.000004, 2253)
```
# Introducing parallel replication
For parallel replication I use configuration-file settings, because some of these variables are read-only at runtime:
```sql
mysql> set global relay_log_recovery=1;
ERROR 1238 (HY000): Variable 'relay_log_recovery' is a read only variable
```
In production everything should go in the configuration file anyway; the command-line approach used in the earlier sections was just for quick testing.
Configure binlog group commit on the master with the following parameters:
```
binlog_group_commit_sync_delay=1000
binlog_group_commit_sync_no_delay_count=100
```
```sql
# Inside the container, check that the settings took effect
mysql> show variables like '%binlog_group%';
```
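These two parameters work as a pair: the master delays the binlog fsync by up to `binlog_group_commit_sync_delay` microseconds so more concurrent transactions can join the same commit group, but flushes early once `binlog_group_commit_sync_no_delay_count` transactions are already waiting. A toy illustration of that batching logic (purely illustrative; the function and its grouping model are my own simplification, not server code):

```python
# Toy model of binlog group commit: delay the fsync up to `sync_delay`
# microseconds to batch transactions, but flush immediately once
# `no_delay_count` transactions are waiting in the group.

def group_commits(arrivals_us, sync_delay=1000, no_delay_count=100):
    """Partition transaction arrival times (in µs) into fsync groups."""
    groups, current, window_end = [], [], None
    for t in arrivals_us:
        # Close the current group if the delay window expired
        # or the group already reached the no-delay count.
        if current and (t > window_end or len(current) >= no_delay_count):
            groups.append(current)
            current = []
        if not current:
            window_end = t + sync_delay  # new group opens a new delay window
        current.append(t)
    if current:
        groups.append(current)
    return groups

# Three transactions arriving within 1000 µs share one fsync;
# a later one gets its own group.
print(group_commits([0, 200, 900, 5000]))  # [[0, 200, 900], [5000]]
```

Larger groups mean fewer fsyncs on the master and, crucially for the next section, larger batches that the slave can apply in parallel.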
Configure parallel replication on the slave (these settings mostly live on the slave side):
```
slave_parallel_type=LOGICAL_CLOCK
slave_parallel_workers=8
relay_log_recovery=1
relay_log_info_repository=TABLE
```
```sql
# Inside the container, check that the settings took effect
mysql> show variables like '%slave_parallel%';
mysql> show variables like '%relay_log%';
```
After restarting the servers, test that master-slave replication still works with parallel apply enabled.
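With `slave_parallel_type=LOGICAL_CLOCK`, the slave decides what can run in parallel from the `sequence_number`/`last_committed` pairs the master writes into the binlog: transactions that were group-committed together on the master can be dispatched to different workers. A simplified sketch of that idea (the sample pairs and the "same `last_committed` means same group" rule are a deliberate simplification of the real interval-overlap check):

```python
from itertools import groupby

# (sequence_number, last_committed) pairs, roughly as mysqlbinlog would
# show them. Consecutive transactions sharing a last_committed value were
# group-committed on the master and may be applied in parallel.
events = [(1, 0), (2, 0), (3, 0), (4, 3), (5, 3), (6, 5)]

def parallel_batches(evts):
    """Group sequence numbers by last_committed: each batch can run in parallel."""
    return [[seq for seq, _ in grp]
            for _, grp in groupby(evts, key=lambda e: e[1])]

print(parallel_batches(events))  # [[1, 2, 3], [4, 5], [6]]
```

This is also why the master-side group commit tuning above matters: the bigger the commit groups on the master, the more work the slave's 8 workers can actually do concurrently.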
We can check how the workers are doing with the following query:
```sql
mysql> select * from performance_schema.replication_applier_status_by_worker \G
*************************** 1. row ***************************
         CHANNEL_NAME:
            WORKER_ID: 1
            THREAD_ID: 27
        SERVICE_STATE: ON
LAST_SEEN_TRANSACTION: ANONYMOUS
    LAST_ERROR_NUMBER: 0
   LAST_ERROR_MESSAGE:
 LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
         CHANNEL_NAME:
            WORKER_ID: 2
            THREAD_ID: 28
        SERVICE_STATE: ON
LAST_SEEN_TRANSACTION:
    LAST_ERROR_NUMBER: 0
   LAST_ERROR_MESSAGE:
 LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
......
```