Chapter 16 Replication

Table of Contents

16.1 Configuring Replication
16.1.1 Binary Log File Position Based Replication Configuration Overview
16.1.2 Setting Up Binary Log File Position Based Replication
16.1.3 Replication with Global Transaction Identifiers
16.1.4 Changing Replication Modes on Online Servers
16.1.5 MySQL Multi-Source Replication
16.1.6 Replication and Binary Logging Options and Variables
16.1.7 Common Replication Administration Tasks
16.2 Replication Implementation
16.2.1 Replication Formats
16.2.2 Replication Channels
16.2.3 Replication Threads
16.2.4 Relay Log and Replication Metadata Repositories
16.2.5 How Servers Evaluate Replication Filtering Rules
16.3 Replication Solutions
16.3.1 Using Replication for Backups
16.3.2 Handling an Unexpected Halt of a Replica
16.3.3 Using Replication with Different Source and Replica Storage Engines
16.3.4 Using Replication for Scale-Out
16.3.5 Replicating Different Databases to Different Replicas
16.3.6 Improving Replication Performance
16.3.7 Switching Sources During Failover
16.3.8 Setting Up Replication to Use Encrypted Connections
16.3.9 Semisynchronous Replication
16.3.10 Delayed Replication
16.4 Replication Notes and Tips
16.4.1 Replication Features and Issues
16.4.2 Replication Compatibility Between MySQL Versions
16.4.3 Upgrading a Replication Setup
16.4.4 Troubleshooting Replication
16.4.5 How to Report Replication Bugs or Problems

Replication enables data from one MySQL database server (the source) to be copied to one or more MySQL database servers (the replicas). Replication is asynchronous by default; replicas do not need to be connected permanently to receive updates from the source. Depending on the configuration, you can replicate all databases, selected databases, or even selected tables within a database.

Advantages of replication in MySQL include:

  • Scale-out solutions: spreading the load among multiple replicas to improve performance. In this environment, all writes and updates must take place on the source server, while reads may take place on one or more replicas. This model can improve the performance of writes (since the source is dedicated to updates), while dramatically increasing read speed across an increasing number of replicas.

  • Data security: because the replica can pause the replication process, it is possible to run backup services on the replica without corrupting the corresponding source data.

  • Analytics: live data can be created on the source, while the analysis of the information can take place on the replica without affecting the performance of the source.

  • Long-distance data distribution: you can use replication to create a local copy of the data for a remote site to use, without permanent access to the source.

For information on how to use replication in such scenarios, see Section 16.3, “Replication Solutions”.

MySQL 5.7 supports different methods of replication. The traditional method is based on replicating events from the source's binary log, and requires the log files and positions in them to be synchronized between source and replica. The newer method based on global transaction identifiers (GTIDs) is transactional and therefore does not require working with log files or positions within these files, which greatly simplifies many common replication tasks. Replication using GTIDs guarantees consistency between source and replica as long as all transactions committed on the source have also been applied on the replica. For more information about GTIDs and GTID-based replication in MySQL, see Section 16.1.3, “Replication with Global Transaction Identifiers”. For information on using binary log file position based replication, see Section 16.1, “Configuring Replication”.

Replication in MySQL supports different types of synchronization. The original type of synchronization is one-way, asynchronous replication, in which one server acts as the source, while one or more other servers act as replicas. This is in contrast to the synchronous replication which is a characteristic of NDB Cluster (see Chapter 20, MySQL NDB Cluster 7.5 and NDB Cluster 7.6). In MySQL 5.7, semisynchronous replication is supported in addition to the built-in asynchronous replication. With semisynchronous replication, a commit performed on the source blocks before returning to the session that performed the transaction until at least one replica acknowledges that it has received and logged the events for the transaction; see Section 16.3.9, “Semisynchronous Replication”. MySQL 5.7 also supports delayed replication such that a replica deliberately lags behind the source by at least a specified amount of time; see Section 16.3.10, “Delayed Replication”. For scenarios where synchronous replication is required, use NDB Cluster (see Chapter 20, MySQL NDB Cluster 7.5 and NDB Cluster 7.6).

There are a number of solutions available for setting up replication between servers, and the best method to use depends on the presence of data and the engine types you are using. For more information on the available options, see Section 16.1.2, “Setting Up Binary Log File Position Based Replication”.

There are two core types of replication format, Statement Based Replication (SBR), which replicates entire SQL statements, and Row Based Replication (RBR), which replicates only the changed rows. You can also use a third variety, Mixed Based Replication (MBR). For more information on the different replication formats, see Section 16.2.1, “Replication Formats”.

Replication is controlled through a number of different options and variables. For more information, see Section 16.1.6, “Replication and Binary Logging Options and Variables”.

You can use replication to solve a number of different problems, including performance, supporting the backup of different databases, and as part of a larger solution to alleviate system failures. For information on how to address these issues, see Section 16.3, “Replication Solutions”.

For notes and tips on how different data types and statements are treated during replication, including details of replication features, version compatibility, upgrades, and potential problems and their resolution, see Section 16.4, “Replication Notes and Tips”. For answers to some questions often asked by those who are new to MySQL Replication, see Section A.14, “MySQL 5.7 FAQ: Replication”.

For detailed information on the implementation of replication, how replication works, the process and contents of the binary log, background threads and the rules used to decide how statements are recorded and replicated, see Section 16.2, “Replication Implementation”.

16.1 Configuring Replication

This section describes how to configure the different types of replication available in MySQL, and covers the setup and configuration required for a replication environment, including step-by-step instructions for creating one. The major components of this section are described in the subsections that follow.

16.1.1 Binary Log File Position Based Replication Configuration Overview

This section describes replication between MySQL servers based on the binary log file position method, where the MySQL instance operating as the source (where the database changes originate) writes updates and changes as events to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Replicas are configured to read the binary log from the source and to execute the events in the binary log on the replica's local database.

Each replica receives a copy of the entire contents of the binary log. It is the responsibility of the replica to decide which statements in the binary log should be executed. Unless you specify otherwise, all events in the source's binary log are executed on the replica. If required, you can configure the replica to process only events that apply to particular databases or tables.

Important

You cannot configure the source to log only certain events.

Each replica keeps a record of the binary log coordinates: the file name and position within the file that it has read and processed from the source. This means that multiple replicas can be connected to the source and executing different parts of the same binary log. Because the replicas control this process, individual replicas can be connected and disconnected from the server without affecting the source's operation. Also, because each replica records the current position within the binary log, it is possible for replicas to be disconnected, reconnect and then resume processing.

The source and each replica must be configured with a unique ID (using the server_id system variable). In addition, each replica must be configured with information about the source's host name, log file name, and position within that file. These details can be controlled from within a MySQL session using the CHANGE MASTER TO statement on the replica. The details are stored within the replica's connection metadata repository, which can be either a file or a table (see Section 16.2.4, “Relay Log and Replication Metadata Repositories”).

16.1.2 Setting Up Binary Log File Position Based Replication

This section describes how to set up a MySQL server to use binary log file position based replication. There are a number of different methods for setting up replication, and the exact method to use depends on how you are setting up replication, and whether you already have data in the database on the source.

There are some generic tasks that are common to all setups:

Note

Certain steps within the setup process require the SUPER privilege. If you do not have this privilege, it might not be possible to enable replication.

After configuring the basic options, select your scenario:

Before administering MySQL replication servers, read this entire chapter and try all statements mentioned in Section 13.4.1, “SQL Statements for Controlling Replication Source Servers”, and Section 13.4.2, “SQL Statements for Controlling Replica Servers”. Also familiarize yourself with the replication startup options described in Section 16.1.6, “Replication and Binary Logging Options and Variables”.

16.1.2.1 Setting the Replication Source Configuration

To configure a source to use binary log file position based replication, you must ensure that binary logging is enabled, and establish a unique server ID.

Each server within a replication topology must be configured with a unique server ID, which you can specify using the server_id system variable. This server ID is used to identify individual servers within the replication topology, and must be a positive integer between 1 and (2^32) − 1. You can change the server_id value dynamically by issuing a statement like this:

SET GLOBAL server_id = 2;

With the default server ID of 0, a source refuses any connections from replicas, and a replica refuses to connect to a source, so this value cannot be used in a replication topology. Other than that, how you organize and select the server IDs is your choice, so long as each server ID is different from every other server ID in use by any other server in the replication topology. Note that if a value of 0 was set previously for the server ID, you must restart the server to initialize the source with your new nonzero server ID. Otherwise, a server restart is not needed, unless you need to enable binary logging or make other configuration changes that require a restart.

Binary logging must be enabled on the source because the binary log is the basis for replicating changes from the source to its replicas. If binary logging is not enabled on the source using the log-bin option, replication is not possible. To enable binary logging on a server where it is not already enabled, you must restart the server. In this case, shut down the MySQL server and edit the my.cnf or my.ini file. Within the [mysqld] section of the configuration file, add the log-bin and server-id options. If these options already exist, but are commented out, uncomment the options and alter them according to your needs. For example, to enable binary logging using a log file name prefix of mysql-bin, and configure a server ID of 1, use these lines:

[mysqld]
log-bin=mysql-bin
server-id=1

After making the changes, restart the server.

Note

The following options have an impact on this procedure:

  • For the greatest possible durability and consistency in a replication setup using InnoDB with transactions, you should use innodb_flush_log_at_trx_commit=1 and sync_binlog=1 in the source's my.cnf file, as illustrated in the configuration sketch following this list.

  • Ensure that the skip_networking system variable is not enabled on your source. If networking has been disabled, the replica cannot communicate with the source and replication fails.
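
For reference, the following option file fragment is a minimal sketch that combines the binary logging settings shown earlier with these durability settings; the log file name prefix and server ID are only example values:

[mysqld]
log-bin=mysql-bin
server-id=1
# durability settings recommended for replication with InnoDB
innodb_flush_log_at_trx_commit=1
sync_binlog=1
# do not enable skip_networking; replicas must connect over TCP/IP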

16.1.2.2 Creating a User for Replication

Each replica connects to the source using a MySQL user name and password, so there must be a user account on the source that the replica can use to connect. The user name is specified by the MASTER_USER option on the CHANGE MASTER TO command when you set up a replica. Any account can be used for this operation, providing it has been granted the REPLICATION SLAVE privilege. You can choose to create a different account for each replica, or connect to the source using the same account for each replica.

Although you do not have to create an account specifically for replication, you should be aware that the replication user name and password are stored in plain text in the replication metadata repositories (see Section 16.2.4.2, “Replication Metadata Repositories”). Therefore, you may want to create a separate account that has privileges only for the replication process, to minimize the possibility of compromise to other accounts.

To create a new account, use CREATE USER. To grant this account the privileges required for replication, use the GRANT statement. If you create an account solely for the purposes of replication, that account needs only the REPLICATION SLAVE privilege. For example, to set up a new user, repl, that can connect for replication from any host within the example.com domain, issue these statements on the source:

mysql> CREATE USER 'repl'@'%.example.com' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.example.com';

See Section 13.7.1, “Account Management Statements”, for more information on statements for manipulation of user accounts.

16.1.2.3 Obtaining the Replication Source's Binary Log Coordinates

To configure the replica to start the replication process at the correct point, you need to note the source's current coordinates within its binary log.

Warning

This procedure uses FLUSH TABLES WITH READ LOCK, which blocks COMMIT operations for InnoDB tables.

If you are planning to shut down the source to create a data snapshot, you can optionally skip this procedure and instead store a copy of the binary log index file along with the data snapshot. In that situation, the source creates a new binary log file on restart. The source's binary log coordinates where the replica must start the replication process are therefore the start of that new file, which is the next binary log file on the source following after the files that are listed in the copied binary log index file.

To obtain the source's binary log coordinates, follow these steps:

  1. Start a session on the source by connecting to it with the command-line client, and flush all tables and block write statements by executing the FLUSH TABLES WITH READ LOCK statement:

    mysql> FLUSH TABLES WITH READ LOCK;
    
    Warning

    Leave the client from which you issued the FLUSH TABLES statement running so that the read lock remains in effect. If you exit the client, the lock is released.

  2. In a different session on the source, use the SHOW MASTER STATUS statement to determine the current binary log file name and position:

    mysql> SHOW MASTER STATUS;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000003 | 73       | test         | manual,mysql     |
    +------------------+----------+--------------+------------------+
    

    The File column shows the name of the log file and the Position column shows the position within the file. In this example, the binary log file is mysql-bin.000003 and the position is 73. Record these values. You need them later when you are setting up the replica. They represent the replication coordinates at which the replica should begin processing new updates from the source.

    If the source has been running previously without binary logging enabled, the log file name and position values displayed by SHOW MASTER STATUS or mysqldump --master-data are empty. In that case, the values that you need to use later when specifying the source's log file and position are the empty string ('') and 4.
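
    For example, in that case the coordinate options of the CHANGE MASTER TO statement issued later on the replica would look like the following sketch; the connection options described in Section 16.1.2.5.2, “Setting the Source Configuration on the Replica”, are omitted here:

    mysql> CHANGE MASTER TO
        ->     MASTER_LOG_FILE='',
        ->     MASTER_LOG_POS=4;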

You now have the information you need to enable the replica to start reading from the binary log in the correct place to start replication.

The next step depends on whether you have existing data on the source. Choose one of the following options:

16.1.2.4 Choosing a Method for Data Snapshots

If the database on the source contains existing data it is necessary to copy this data to each replica. There are different ways to dump the data from the source. The following sections describe possible options.

To select the appropriate method of dumping the database, choose between these options:

  • Use the mysqldump tool to create a dump of all the databases you want to replicate. This is the recommended method, especially when using InnoDB.

  • If your database is stored in binary portable files, you can copy the raw data files to a replica. This can be more efficient than using mysqldump and importing the file on each replica, because it skips the overhead of updating indexes as the INSERT statements are replayed. With storage engines such as InnoDB this is not recommended.

16.1.2.4.1 Creating a Data Snapshot Using mysqldump

To create a snapshot of the data in an existing source, use the mysqldump tool. Once the data dump has been completed, import this data into the replica before starting the replication process.

The following example dumps all databases to a file named dbdump.db, and includes the --master-data option which automatically appends the CHANGE MASTER TO statement required on the replica to start the replication process:

shell> mysqldump --all-databases --master-data > dbdump.db

Note

If you do not use --master-data, then it is necessary to lock all tables in a separate session manually. See Section 16.1.2.3, “Obtaining the Replication Source's Binary Log Coordinates”.

It is possible to exclude certain databases from the dump using the mysqldump tool. If you want to choose which databases to include in the dump, do not use --all-databases. Choose one of these options:

  • Exclude all the tables in the database using --ignore-table option.

  • Name only those databases which you want dumped using the --databases option.
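
For example, the following command is a sketch of a dump limited to two hypothetical databases, db1 and db2, that still records the source's binary log coordinates:

shell> mysqldump --databases db1 db2 --master-data > dbdump.db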

For more information, see Section 4.5.4, “mysqldump — A Database Backup Program”.

To import the data, either copy the dump file to the replica, or access the file from the source when connecting remotely to the replica.
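
For example, assuming the dump file is accessible on the source host, a sketch of a remote import looks like this, where replica_host_name is a placeholder for the replica's host name:

shell> mysql -h replica_host_name < dbdump.db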

16.1.2.4.2 Creating a Data Snapshot Using Raw Data Files

This section describes how to create a data snapshot using the raw files which make up the database. Employing this method with a table using a storage engine that has complex caching or logging algorithms requires extra steps to produce a perfect point in time snapshot: the initial copy command could leave out cache information and logging updates, even if you have acquired a global read lock. How the storage engine responds to this depends on its crash recovery abilities.

If you use InnoDB tables, you can use the mysqlbackup command from the MySQL Enterprise Backup component to produce a consistent snapshot. This command records the log name and offset corresponding to the snapshot to be used on the replica. MySQL Enterprise Backup is a commercial product that is included as part of a MySQL Enterprise subscription. See Section 27.2, “MySQL Enterprise Backup Overview” for detailed information.

This method also does not work reliably if the source and replica have different values for ft_stopword_file, ft_min_word_len, or ft_max_word_len and you are copying tables having full-text indexes.

Assuming the above exceptions do not apply to your database, use the cold backup technique to obtain a reliable binary snapshot of InnoDB tables: do a slow shutdown of the MySQL Server, then copy the data files manually.
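
The following commands are a minimal sketch of that procedure on a Unix-like system; they assume that the data directory is ./data, as in the other examples in this section:

mysql> SET GLOBAL innodb_fast_shutdown = 0;  -- request a slow (full) shutdown
shell> mysqladmin shutdown
shell> cp -r ./data /tmp/dbdata-snapshot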

To create a raw data snapshot of MyISAM tables when your MySQL data files exist on a single file system, you can use standard file copy tools such as cp or copy, a remote copy tool such as scp or rsync, an archiving tool such as zip or tar, or a file system snapshot tool such as dump. If you are replicating only certain databases, copy only those files that relate to those tables. For InnoDB, all tables in all databases are stored in the system tablespace files, unless you have the innodb_file_per_table option enabled.

The following files are not required for replication:

  • Files relating to the mysql database.

  • The replica's connection metadata repository file, if used (see Section 16.2.4, “Relay Log and Replication Metadata Repositories”).

  • The source's binary log files, with the exception of the binary log index file if you are going to use this to locate the source's binary log coordinates for the replica.

  • Any relay log files.

Depending on whether you are using InnoDB tables or not, choose one of the following:

If you are using InnoDB tables, and also to get the most consistent results with a raw data snapshot, shut down the source server during the process, as follows:

  1. Acquire a read lock and get the source's status. See Section 16.1.2.3, “Obtaining the Replication Source's Binary Log Coordinates”.

  2. In a separate session, shut down the source server:

    shell> mysqladmin shutdown
    
  3. Make a copy of the MySQL data files. The following examples show common ways to do this. You need to choose only one of them:

    shell> tar cf /tmp/db.tar ./data
    shell> zip -r /tmp/db.zip ./data
    shell> rsync --recursive ./data /tmp/dbdata
    
  4. Restart the source server.

If you are not using InnoDB tables, you can get a snapshot of the system from a source without shutting down the server as described in the following steps:

  1. Acquire a read lock and get the source's status. See Section 16.1.2.3, “Obtaining the Replication Source's Binary Log Coordinates”.

  2. Make a copy of the MySQL data files. The following examples show common ways to do this. You need to choose only one of them:

    shell> tar cf /tmp/db.tar ./data
    shell> zip -r /tmp/db.zip ./data
    shell> rsync --recursive ./data /tmp/dbdata
    
  3. In the client where you acquired the read lock, release the lock:

    mysql> UNLOCK TABLES;
    

Once you have created the archive or copy of the database, copy the files to each replica before starting the replication process.

16.1.2.5 Setting Up Replicas

The following sections describe how to set up replicas. Before you proceed, ensure that you have:

16.1.2.5.1 Setting the Replica Configuration

Each replica must have a unique server ID, as specified by the server_id system variable. If you are setting up multiple replicas, each one must have a unique server_id value that differs from that of the source and from any of the other replicas. If the replica's server ID is not already set, or the current value conflicts with the value that you have chosen for the source server or another replica, you must change it. With the default server_id value of 0, a replica refuses to connect to a source.

You can change the server_id value dynamically by issuing a statement like this:

SET GLOBAL server_id = 21;

If the default server_id value of 0 was set previously, you must restart the server to initialize the replica with your new nonzero server ID. Otherwise, a server restart is not needed when you change the server ID, unless you make other configuration changes that require it. For example, if binary logging was disabled on the server and you want it enabled for your replica, a server restart is required to enable this.

If you are shutting down the replica server, you can edit the [mysqld] section of the configuration file to specify a unique server ID. For example:

[mysqld]
server-id=21

A replica is not required to have binary logging enabled for replication to take place. However, binary logging on a replica means that the replica's binary log can be used for data backups and crash recovery. Replicas that have binary logging enabled can also be used as part of a more complex replication topology. If you want to enable binary logging on a replica, use the log-bin option in the [mysqld] section of the configuration file. A server restart is required to start binary logging on a server that did not previously use it.
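
For example, a replica that should also write its own binary log might use an option file fragment like the following sketch; the server ID and log file name prefix are only illustrative values:

[mysqld]
server-id=21
log-bin=replica-bin
# add log-slave-updates=1 if this replica should also act as a source for other replicas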

16.1.2.5.2 Setting the Source Configuration on the Replica

To set up the replica to communicate with the source for replication, configure the replica with the necessary connection information. To do this, execute the following statement on the replica, replacing the option values with the actual values relevant to your system:

mysql> CHANGE MASTER TO
    ->     MASTER_HOST='source_host_name',
    ->     MASTER_USER='replication_user_name',
    ->     MASTER_PASSWORD='replication_password',
    ->     MASTER_LOG_FILE='recorded_log_file_name',
    ->     MASTER_LOG_POS=recorded_log_position;

Note

Replication cannot use Unix socket files. You must be able to connect to the source MySQL server using TCP/IP.

The CHANGE MASTER TO statement has other options as well. For example, it is possible to set up secure replication using SSL. For a full list of options, and information about the maximum permissible length for the string-valued options, see Section 13.4.2.1, “CHANGE MASTER TO Statement”.
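
For example, the following statement is a sketch of a CHANGE MASTER TO statement that requests an encrypted connection; the certificate and key file paths are hypothetical and must correspond to the files actually configured on your servers, as described in Section 16.3.8, “Setting Up Replication to Use Encrypted Connections”:

mysql> CHANGE MASTER TO
    ->     MASTER_HOST='source_host_name',
    ->     MASTER_USER='replication_user_name',
    ->     MASTER_PASSWORD='replication_password',
    ->     MASTER_SSL=1,
    ->     MASTER_SSL_CA='/path/to/ca.pem',
    ->     MASTER_SSL_CERT='/path/to/client-cert.pem',
    ->     MASTER_SSL_KEY='/path/to/client-key.pem';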

The next steps depend on whether you have existing data to import to the replica or not. See Section 16.1.2.4, “Choosing a Method for Data Snapshots” for more information. Choose one of the following:

16.1.2.5.3 Setting Up Replication between a New Source and Replicas

When there is no snapshot of a previous database to import, configure the replica to start replication from the new source.

To set up replication between a source and a new replica:

  1. Start up the replica and connect to it.

  2. Execute a CHANGE MASTER TO statement to set the source configuration. See Section 16.1.2.5.2, “Setting the Source Configuration on the Replica”.

Perform these setup steps on each replica.

This method can also be used if you are setting up new servers but have an existing dump of the databases from a different server that you want to load into your replication configuration. By loading the data into a new source, the data is automatically replicated to the replicas.

If you are setting up a new replication environment using the data from a different existing database server to create a new source, run the dump file generated from that server on the new source. The database updates are automatically propagated to the replicas:

shell> mysql -h master < fulldb.dump

16.1.2.5.4 Setting Up Replication with Existing Data

When setting up replication with existing data, transfer the snapshot from the source to the replica before starting replication. The process for importing data to the replica depends on how you created the snapshot of data on the source.

Choose one of the following:

If you used mysqldump:

  1. Start the replica, using the --skip-slave-start option so that replication does not start.

  2. Import the dump file:

    shell> mysql < fulldb.dump
    

If you created a snapshot using the raw data files:

  1. Extract the data files into the replica's data directory. For example:

    shell> tar xvf dbdump.tar
    

    You may need to set permissions and ownership on the files so that the replica server can access and modify them.

  2. Start the replica, using the --skip-slave-start option so that replication does not start.

  3. Configure the replica with the replication coordinates from the source. This tells the replica the binary log file and position within the file where replication needs to start. Also, configure the replica with the login credentials and host name of the source. For more information on the CHANGE MASTER TO statement required, see Section 16.1.2.5.2, “Setting the Source Configuration on the Replica”.

  4. Start the replication threads:

    mysql> START SLAVE;
    

After you have performed this procedure, the replica connects to the source and replicates any updates that have occurred on the source since the snapshot was taken.
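
To confirm that the replica has connected and that both replication threads are running, check the replica status. The following abbreviated sketch shows the kind of output to look for; most fields are omitted here:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                          ...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                          ...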

If the server_id system variable for the source is not correctly set, replicas cannot connect to it. Similarly, if you have not set server_id correctly for the replica, you get the following error in the replica's error log:

Warning: You should set server-id to a non-0 value if master_host
is set; we will force server id to 2, but this MySQL server will
not act as a slave.

You also find error messages in the replica's error log if it is not able to replicate for any other reason.

The replica stores information about the source you have configured in its connection metadata repository. The connection metadata repository can be in the form of files or a table, as determined by the value set for the master_info_repository system variable. When a replica runs with master_info_repository=FILE, two files are stored in the data directory, named master.info and relay-log.info. If master_info_repository=TABLE instead, this information is saved in the slave_master_info table in the mysql database. In either case, do not remove or edit the files or table. Always use the CHANGE MASTER TO statement to change replication parameters. The replica can use the values specified in the statement to update the status files automatically. See Section 16.2.4, “Relay Log and Replication Metadata Repositories”, for more information.

Note

The contents of the connection metadata repository override some of the server options specified on the command line or in my.cnf. See Section 16.1.6, “Replication and Binary Logging Options and Variables”, for more details.
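
If you prefer the table-based repositories, an option file fragment like the following sketch can be used on the replica; the same settings can also be changed with SET GLOBAL while the replication threads are stopped:

[mysqld]
master-info-repository=TABLE
relay-log-info-repository=TABLE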

A single snapshot of the source suffices for multiple replicas. To set up additional replicas, use the same source snapshot and follow the replica portion of the procedure just described.

16.1.2.6 Adding Replicas to a Replication Topology

You can add another replica to an existing replication configuration without stopping the source server. To do this, you can set up the new replica by copying the data directory of an existing replica, and giving the new replica a different server ID (which is user-specified) and server UUID (which is generated at startup).

To duplicate an existing replica:

  1. Stop the existing replica and record the replica status information, particularly the source's binary log file and relay log file positions. You can view the replica status either in the Performance Schema replication tables (see Section 24.12.11, “Performance Schema Replication Tables”), or by issuing SHOW SLAVE STATUS as follows:

    mysql> STOP SLAVE;
    mysql> SHOW SLAVE STATUS\G
    
  2. Shut down the existing replica:

    shell> mysqladmin shutdown
    
  3. Copy the data directory from the existing replica to the new replica, including the log files and relay log files. You can do this by creating an archive using tar or WinZip, or by performing a direct copy using a tool such as cp or rsync.

    Important
    • Before copying, verify that all the files relating to the existing replica actually are stored in the data directory. For example, the InnoDB system tablespace, undo tablespace, and redo log might be stored in an alternative location. InnoDB tablespace files and file-per-table tablespaces might have been created in other directories. The binary logs and relay logs for the replica might be in their own directories outside the data directory. Check through the system variables that are set for the existing replica and look for any alternative paths that have been specified. If you find any, copy these directories over as well.

    • During copying, if files have been used for the replication metadata repositories (see Section 16.2.4, “Relay Log and Replication Metadata Repositories”), which is the default in MySQL 5.7, ensure that you also copy these files from the existing replica to the new replica. If tables have been used for the repositories, the tables are in the data directory.

    • After copying, delete the auto.cnf file from the copy of the data directory on the new replica, so that the new replica is started with a different generated server UUID. The server UUID must be unique.

    A common problem that is encountered when adding new replicas is that the new replica fails with a series of warning and error messages like these:

    071118 16:44:10 [Warning] Neither --relay-log nor --relay-log-index were used; so
    replication may break when this MySQL server acts as a slave and has his hostname
    changed!! Please use '--relay-log=new_replica_hostname-relay-bin' to avoid this problem.
    071118 16:44:10 [ERROR] Failed to open the relay log './old_replica_hostname-relay-bin.003525'
    (relay_log_pos 22940879)
    071118 16:44:10 [ERROR] Could not find target log during relay log initialization
    071118 16:44:10 [ERROR] Failed to initialize the master info structure
    

    This situation can occur if the relay_log system variable is not specified, as the relay log files contain the host name as part of their file names. This is also true of the relay log index file if the relay_log_index system variable is not used. For more information about these variables, see Section 16.1.6, “Replication and Binary Logging Options and Variables”.

    To avoid this problem, use the same value for relay_log on the new replica that was used on the existing replica. If this option was not set explicitly on the existing replica, use existing_replica_hostname-relay-bin. If this is not possible, copy the existing replica's relay log index file to the new replica and set the relay_log_index system variable on the new replica to match what was used on the existing replica. If this option was not set explicitly on the existing replica, use existing_replica_hostname-relay-bin.index. Alternatively, if you have already tried to start the new replica after following the remaining steps in this section and have encountered errors like those described previously, then perform the following steps:

    1. If you have not already done so, issue STOP SLAVE on the new replica.

      If you have already started the existing replica again, issue STOP SLAVE on the existing replica as well.

    2. Copy the contents of the existing replica's relay log index file into the new replica's relay log index file, making sure to overwrite any content already in the file.

    3. Proceed with the remaining steps in this section.

  4. When copying is complete, restart the existing replica.

  5. On the new replica, edit the configuration and give the new replica a unique server ID (using the server_id system variable) that is not used by the source or any of the existing replicas.

  6. Start the new replica server, specifying the --skip-slave-start option so that replication does not start yet. Use the Performance Schema replication tables or issue SHOW SLAVE STATUS to confirm that the new replica has the correct settings when compared with the existing replica. Also display the server ID and server UUID and verify that these are correct and unique for the new replica (a query for displaying these values is sketched after this procedure).

  7. Start the replication threads by issuing a START SLAVE statement:

    mysql> START SLAVE;

    The new replica now uses the information in its connection metadata repository to start the replication process.
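
As a quick check for step 6, the new replica's server ID and server UUID can be displayed with a query such as this sketch:

mysql> SELECT @@GLOBAL.server_id, @@GLOBAL.server_uuid;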

16.1.3 Replication with Global Transaction Identifiers

This section explains transaction-based replication using global transaction identifiers (GTIDs). When using GTIDs, each transaction can be identified and tracked as it is committed on the originating server and applied by any replicas; this means that it is not necessary when using GTIDs to refer to log files or positions within those files when starting a new replica or failing over to a new source, which greatly simplifies these tasks. Because GTID-based replication is completely transaction-based, it is simple to determine whether sources and replicas are consistent; as long as all transactions committed on a source are also committed on a replica, consistency between the two is guaranteed. You can use either statement-based or row-based replication with GTIDs (see Section 16.2.1, “Replication Formats”); however, for best results, we recommend that you use the row-based format.

GTIDs are always preserved between source and replica. This means that you can always determine the source for any transaction applied on any replica by examining its binary log. In addition, once a transaction with a given GTID is committed on a given server, any subsequent transaction having the same GTID is ignored by that server. Thus, a transaction committed on the source can be applied no more than once on the replica, which helps to guarantee consistency.

This section discusses the following topics:

For information about MySQL Server options and variables relating to GTID-based replication, see Section 16.1.6.5, “Global Transaction ID System Variables”. See also Section 12.19, “Functions Used with Global Transaction Identifiers (GTIDs)”, which describes SQL functions supported by MySQL 5.7 for use with GTIDs.

16.1.3.1 GTID Format and Storage

A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of origin (the source). This identifier is unique not only to the server on which it originated, but is unique across all servers in a given replication topology.

GTID assignment distinguishes between client transactions, which are committed on the source, and replicated transactions, which are reproduced on a replica. When a client transaction is committed on the source, it is assigned a new GTID, provided that the transaction was written to the binary log. Client transactions are guaranteed to have monotonically increasing GTIDs without gaps between the generated numbers. If a client transaction is not written to the binary log (for example, because the transaction was filtered out, or the transaction was read-only), it is not assigned a GTID on the server of origin.

Replicated transactions retain the same GTID that was assigned to the transaction on the server of origin. The GTID is present before the replicated transaction begins to execute, and is persisted even if the replicated transaction is not written to the binary log on the replica, or is filtered out on the replica. The MySQL system table mysql.gtid_executed is used to preserve the assigned GTIDs of all the transactions applied on a MySQL server, except those that are stored in a currently active binary log file.

The auto-skip function for GTIDs means that a transaction committed on the source can be applied no more than once on the replica, which helps to guarantee consistency. Once a transaction with a given GTID has been committed on a given server, any attempt to execute a subsequent transaction with the same GTID is ignored by that server. No error is raised, and no statement in the transaction is executed.

If a transaction with a given GTID has started to execute on a server, but has not yet committed or rolled back, any attempt to start a concurrent transaction on the server with the same GTID blocks. The server neither begins to execute the concurrent transaction nor returns control to the client. Once the first attempt at the transaction commits or rolls back, concurrent sessions that were blocking on the same GTID may proceed. If the first attempt rolled back, one concurrent session proceeds to attempt the transaction, and any other concurrent sessions that were blocking on the same GTID remain blocked. If the first attempt committed, all the concurrent sessions stop being blocked, and auto-skip all the statements of the transaction.

A GTID is represented as a pair of coordinates, separated by a colon character (:), as shown here:

GTID = source_id:transaction_id

The source_id identifies the originating server. Normally, the source's server_uuid is used for this purpose. The transaction_id is a sequence number determined by the order in which the transaction was committed on the source. For example, the first transaction to be committed has 1 as its transaction_id, and the tenth transaction to be committed on the same originating server is assigned a transaction_id of 10. It is not possible for a transaction to have 0 as a sequence number in a GTID. For example, the twenty-third transaction to be committed originally on the server with the UUID 3E11FA47-71CA-11E1-9E33-C80AA9429562 has this GTID:

3E11FA47-71CA-11E1-9E33-C80AA9429562:23

The GTID for a transaction is shown in the output from mysqlbinlog, and it is used to identify an individual transaction in the Performance Schema replication status tables, for example, replication_applier_status_by_worker. The value stored by the gtid_next system variable (@@GLOBAL.gtid_next) is a single GTID.

GTID Sets

A GTID set is a set comprising one or more single GTIDs or ranges of GTIDs. GTID sets are used in a MySQL server in several ways. For example, the values stored by the gtid_executed and gtid_purged system variables are GTID sets. The START SLAVE clauses UNTIL SQL_BEFORE_GTIDS and UNTIL SQL_AFTER_GTIDS can be used to make a replica process transactions only up to the first GTID in a GTID set, or stop after the last GTID in a GTID set. The built-in functions GTID_SUBSET() and GTID_SUBTRACT() require GTID sets as input.
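
For example, GTID_SUBSET() reports whether every GTID in its first argument is also contained in its second argument; the following sketch returns 1 (true) because transaction 23 falls within the range 1-30:

mysql> SELECT GTID_SUBSET('3E11FA47-71CA-11E1-9E33-C80AA9429562:23',
    ->                   '3E11FA47-71CA-11E1-9E33-C80AA9429562:1-30');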

A range of GTIDs originating from the same server can be collapsed into a single expression, as shown here:

3E11FA47-71CA-11E1-9E33-C80AA9429562:1-5

The above example represents the first through fifth transactions originating on the MySQL server whose server_uuid is 3E11FA47-71CA-11E1-9E33-C80AA9429562. Multiple single GTIDs or ranges of GTIDs originating from the same server can also be included in a single expression, with the GTIDs or ranges separated by colons, as in the following example:

3E11FA47-71CA-11E1-9E33-C80AA9429562:1-3:11:47-49

A GTID set can include any combination of single GTIDs and ranges of GTIDs, and it can include GTIDs originating from different servers. This example shows the GTID set stored in the gtid_executed system variable (@@GLOBAL.gtid_executed) of a replica that has applied transactions from more than one source:

2174B383-5441-11E8-B90A-C80AA9429562:1-3, 24DA167-0C0C-11E8-8442-00059A3C7B00:1-19

When GTID sets are returned from server variables, UUIDs are in alphabetical order, and numeric intervals are merged and in ascending order.

The syntax for a GTID set is as follows:

gtid_set:
    uuid_set [, uuid_set] ...
    | ''

uuid_set:
    uuid:interval[:interval]...

uuid:
    hhhhhhhh-hhhh-hhhh-hhhh-hhhhhhhhhhhh

h:
    [0-9|A-F]

interval:
    n[-n]

    (n >= 1)

mysql.gtid_executed Table

GTIDs are stored in a table named gtid_executed, in the mysql database. A row in this table contains, for each GTID or set of GTIDs that it represents, the UUID of the originating server, and the starting and ending transaction IDs of the set; for a row referencing only a single GTID, these last two values are the same.

The mysql.gtid_executed table is created (if it does not already exist) when MySQL Server is installed or upgraded, using a CREATE TABLE statement similar to that shown here:

CREATE TABLE gtid_executed (
    source_uuid CHAR(36) NOT NULL,
    interval_start BIGINT(20) NOT NULL,
    interval_end BIGINT(20) NOT NULL,
    PRIMARY KEY (source_uuid, interval_start)
)

Warning

As with other MySQL system tables, do not attempt to create or modify this table yourself.

The mysql.gtid_executed table is provided for internal use by the MySQL server. It enables a replica to use GTIDs when binary logging is disabled on the replica, and it enables retention of the GTID state when the binary logs have been lost. Note that the mysql.gtid_executed table is cleared if you issue RESET MASTER.

GTIDs are stored in the mysql.gtid_executed table only when gtid_mode is ON or ON_PERMISSIVE. The point at which GTIDs are stored depends on whether binary logging is enabled or disabled:

  • If binary logging is disabled (log_bin is OFF), or if log_slave_updates is disabled, the server stores the GTID belonging to each transaction together with the transaction in the table. In addition, the table is compressed periodically at a user-configurable rate; see mysql.gtid_executed Table Compression, for more information. This situation can only apply on a replica where binary logging or replica update logging is disabled. It does not apply on a replication source server, because on the source, binary logging must be enabled for replication to take place.

  • If binary logging is enabled (log_bin is ON), whenever the binary log is rotated or the server is shut down, the server writes GTIDs for all transactions that were written into the previous binary log into the mysql.gtid_executed table. This situation applies on a replication source server, or a replica where binary logging is enabled.

    In the event of the server stopping unexpectedly, the set of GTIDs from the current binary log file is not saved in the mysql.gtid_executed table. These GTIDs are added to the table from the binary log file during recovery. The exception to this is if binary logging is not enabled when the server is restarted. In this situation, the server cannot access the binary log file to recover the GTIDs, so replication cannot be started.

    When binary logging is enabled, the mysql.gtid_executed table does not hold a complete record of the GTIDs for all executed transactions. That information is provided by the global value of the gtid_executed system variable. Always use @@GLOBAL.gtid_executed, which is updated after every commit, to represent the GTID state for the MySQL server, and do not query the mysql.gtid_executed table.
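
    For example, to see the complete GTID state of the server, query the system variable rather than the table:

    mysql> SELECT @@GLOBAL.gtid_executed;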

mysql.gtid_executed Table Compression

Over the course of time, the mysql.gtid_executed table can become filled with many rows referring to individual GTIDs that originate on the same server, and whose transaction IDs make up a range, similar to what is shown here:

+--------------------------------------+----------------+--------------+
| source_uuid                          | interval_start | interval_end |
+--------------------------------------+----------------+--------------+
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 37             | 37           |
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 38             | 38           |
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 39             | 39           |
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 40             | 40           |
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 41             | 41           |
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 42             | 42           |
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 43             | 43           |
...

To save space, the MySQL server compresses the mysql.gtid_executed table periodically by replacing each such set of rows with a single row that spans the entire interval of transaction identifiers, like this:

+--------------------------------------+----------------+--------------+
| source_uuid                          | interval_start | interval_end |
+--------------------------------------+----------------+--------------+
| 3E11FA47-71CA-11E1-9E33-C80AA9429562 | 37             | 43           |
...

You can control the number of transactions that are allowed to elapse before the table is compressed, and thus the compression rate, by setting the gtid_executed_compression_period system variable. This variable's default value is 1000, meaning that by default, compression of the table is performed after each 1000 transactions. Setting gtid_executed_compression_period to 0 prevents the compression from being performed at all, and you should be prepared for a potentially large increase in the amount of disk space that may be required by the gtid_executed table if you do this.

Note

When binary logging is enabled, the value of gtid_executed_compression_period is not used and the mysql.gtid_executed table is compressed on each binary log rotation.
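
For example, on a replica where binary logging is disabled, the compression rate can be adjusted at runtime; the value used here is only illustrative:

mysql> SET GLOBAL gtid_executed_compression_period = 10000;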

Compression of the mysql.gtid_executed table is performed by a dedicated foreground thread named thread/sql/compress_gtid_table. This thread is not listed in the output of SHOW PROCESSLIST, but it can be viewed as a row in the threads table, as shown here:

mysql> SELECT * FROM performance_schema.threads WHERE NAME LIKE '%gtid%'\G
*************************** 1. row ***************************
          THREAD_ID: 26
               NAME: thread/sql/compress_gtid_table
               TYPE: FOREGROUND
     PROCESSLIST_ID: 1
   PROCESSLIST_USER: NULL
   PROCESSLIST_HOST: NULL
     PROCESSLIST_DB: NULL
PROCESSLIST_COMMAND: Daemon
   PROCESSLIST_TIME: 1509
  PROCESSLIST_STATE: Suspending
   PROCESSLIST_INFO: NULL
   PARENT_THREAD_ID: 1
               ROLE: NULL
       INSTRUMENTED: YES
            HISTORY: YES
    CONNECTION_TYPE: NULL
       THREAD_OS_ID: 18677

The thread/sql/compress_gtid_table thread normally sleeps until gtid_executed_compression_period transactions have been executed, then wakes up to perform compression of the mysql.gtid_executed table as described previously. It then sleeps until another gtid_executed_compression_period transactions have taken place, then wakes up to perform the compression again, repeating this loop indefinitely. Setting this value to 0 when binary logging is disabled means that the thread always sleeps and never wakes up.

16.1.3.2 GTID Life Cycle

The life cycle of a GTID consists of the following steps:

  1. A transaction is executed and committed on the replication source server. This client transaction is assigned a GTID composed of the source's UUID and the smallest nonzero transaction sequence number not yet used on this server. The GTID is written to the source's binary log (immediately preceding the transaction itself in the log). If a client transaction is not written to the binary log (for example, because the transaction was filtered out, or the transaction was read-only), it is not assigned a GTID.

  2. If a GTID was assigned for the transaction, the GTID is persisted atomically at commit time by writing it to the binary log at the beginning of the transaction (as a Gtid_log_event). Whenever the binary log is rotated or the server is shut down, the server writes GTIDs for all transactions that were written into the previous binary log file into the mysql.gtid_executed table.

  3. If a GTID was assigned for the transaction, the GTID is externalized non-atomically (very shortly after the transaction is committed) by adding it to the set of GTIDs in the gtid_executed system variable (@@GLOBAL.gtid_executed). This GTID set contains a representation of the set of all committed GTID transactions, and it is used in replication as a token that represents the server state. With binary logging enabled (as required for the source), the set of GTIDs in the gtid_executed system variable is a complete record of the transactions applied, but the mysql.gtid_executed table is not, because the most recent history is still in the current binary log file.

  4. After the binary log data is transmitted to the replica and stored in the replica's relay log (using established mechanisms for this process, see Section 16.2, “Replication Implementation”, for details), the replica reads the GTID and sets the value of its gtid_next system variable as this GTID. This tells the replica that the next transaction must be logged using this GTID. It is important to note that the replica sets gtid_next in a session context.

  5. The replica verifies that no thread has yet taken ownership of the GTID in gtid_next in order to process the transaction. By reading and checking the replicated transaction's GTID first, before processing the transaction itself, the replica guarantees not only that no previous transaction having this GTID has been applied on the replica, but also that no other session has already read this GTID but has not yet committed the associated transaction. So if multiple clients attempt to apply the same transaction concurrently, the server resolves this by letting only one of them execute. The gtid_owned system variable (@@GLOBAL.gtid_owned) for the replica shows each GTID that is currently in use and the ID of the thread that owns it. If the GTID has already been used, no error is raised, and the auto-skip function is used to ignore the transaction.

  6. If the GTID has not been used, the replica applies the replicated transaction. Because gtid_next is set to the GTID already assigned by the source, the replica does not attempt to generate a new GTID for this transaction, but instead uses the GTID stored in gtid_next.

  7. If binary logging is enabled on the replica, the GTID is persisted atomically at commit time by writing it to the binary log at the beginning of the transaction (as a Gtid_log_event). Whenever the binary log is rotated or the server is shut down, the server writes GTIDs for all transactions that were written into the previous binary log file into the mysql.gtid_executed table.

  8. If binary logging is disabled on the replica, the GTID is persisted atomically by writing it directly into the mysql.gtid_executed table. MySQL appends a statement to the transaction to insert the GTID into the table. In this situation, the mysql.gtid_executed table is a complete record of the transactions applied on the replica. Note that in MySQL 5.7, the operation to insert the GTID into the table is atomic for DML statements, but not for DDL statements, so if the server exits unexpectedly after a transaction involving DDL statements, the GTID state might become inconsistent. From MySQL 8.0, the operation is atomic for DDL statements as well as for DML statements.

  9. Very shortly after the replicated transaction is committed on the replica, the GTID is externalized non-atomically by adding it to the set of GTIDs in the gtid_executed system variable (@@GLOBAL.gtid_executed) for the replica. As for the source, this GTID set contains a representation of the set of all committed GTID transactions. If binary logging is disabled on the replica, the mysql.gtid_executed table is also a complete record of the transactions applied on the replica. If binary logging is enabled on the replica, meaning that some GTIDs are only recorded in the binary log, the set of GTIDs in the gtid_executed system variable is the only complete record.

Client transactions that are completely filtered out on the source are not assigned a GTID, therefore they are not added to the set of transactions in the gtid_executed system variable, or added to the mysql.gtid_executed table. However, the GTIDs of replicated transactions that are completely filtered out on the replica are persisted. If binary logging is enabled on the replica, the filtered-out transaction is written to the binary log as a Gtid_log_event followed by an empty transaction containing only BEGIN and COMMIT statements. If binary logging is disabled, the GTID of the filtered-out transaction is written to the mysql.gtid_executed table. Preserving the GTIDs for filtered-out transactions ensures that the mysql.gtid_executed table and the set of GTIDs in the gtid_executed system variable can be compressed. It also ensures that the filtered-out transactions are not retrieved again if the replica reconnects to the source, as explained in Section 16.1.3.3, “GTID Auto-Positioning”.

On a multithreaded replica (with slave_parallel_workers > 0), transactions can be applied in parallel, so replicated transactions can commit out of order (unless slave_preserve_commit_order=1 is set). When that happens, the set of GTIDs in the gtid_executed system variable contains multiple GTID ranges with gaps between them. (On a source or a single-threaded replica, there are monotonically increasing GTIDs without gaps between the numbers.) Gaps on multithreaded replicas only occur among the most recently applied transactions, and are filled in as replication progresses. When replication threads are stopped cleanly using the STOP SLAVE statement, ongoing transactions are applied so that the gaps are filled in. In the event of a shutdown such as a server failure or the use of the KILL statement to stop replication threads, the gaps might remain.

What changes are assigned a GTID?

The typical scenario is that the server generates a new GTID for a committed transaction. However, GTIDs can also be assigned to other changes besides transactions, and in some cases a single transaction can be assigned multiple GTIDs.

Every database change (DDL or DML) that is written to the binary log is assigned a GTID. This includes changes that are autocommitted, and changes that are committed using BEGIN and COMMIT or START TRANSACTION statements. A GTID is also assigned to the creation, alteration, or deletion of a database, and of a non-table database object such as a procedure, function, trigger, event, view, user, role, or grant.

Non-transactional updates as well as transactional updates are assigned GTIDs. In addition, for a non-transactional update, if a disk write failure occurs while attempting to write to the binary log cache and a gap is therefore created in the binary log, the resulting incident log event is assigned a GTID.

When a table is automatically dropped by a generated statement in the binary log, a GTID is assigned to the statement. Temporary tables are dropped automatically when a replica begins to apply events from a source that has just been started, and when statement-based replication is in use (binlog_format=STATEMENT) and a user session that has open temporary tables disconnects. Tables that use the MEMORY storage engine are deleted automatically the first time they are accessed after the server is started, because rows might have been lost during the shutdown.

When a transaction is not written to the binary log on the server of origin, the server does not assign a GTID to it. This includes transactions that are rolled back and transactions that are executed while binary logging is disabled on the server of origin, either globally (with --skip-log-bin specified in the server's configuration) or for the session (SET @@SESSION.sql_log_bin = 0). This also includes no-op transactions when row-based replication is in use (binlog_format=ROW).

XA transactions are assigned separate GTIDs for the XA PREPARE phase of the transaction and the XA COMMIT or XA ROLLBACK phase of the transaction. XA transactions are persistently prepared so that users can commit them or roll them back in the case of a failure (which in a replication topology might include a failover to another server). The two parts of the transaction are therefore replicated separately, so they must have their own GTIDs, even though a non-XA transaction that is rolled back would not have a GTID.
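
As an illustrative sketch, the two replicated parts of an XA transaction correspond to statements such as the following; the xid 'xa1' and table t1 are arbitrary examples. The XA PREPARE phase receives one GTID and the XA COMMIT phase receives another:

XA START 'xa1';
INSERT INTO t1 VALUES (1);
XA END 'xa1';
XA PREPARE 'xa1';   -- assigned its own GTID
XA COMMIT 'xa1';    -- assigned a separate GTID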

In the following special cases, a single statement can generate multiple transactions, and therefore be assigned multiple GTIDs:

  • A stored procedure is invoked that commits multiple transactions. One GTID is generated for each transaction that the procedure commits.

  • A multi-table DROP TABLE statement drops tables of different types.

  • A CREATE TABLE ... SELECT statement is issued when row-based replication is in use (binlog_format=ROW). One GTID is generated for the CREATE TABLE action and one GTID is generated for the row-insert actions.

The gtid_next System Variable

By default, for new transactions committed in user sessions, the server automatically generates and assigns a new GTID. When the transaction is applied on a replica, the GTID from the server of origin is preserved. You can change this behavior by setting the session value of the gtid_next system variable:

  • When gtid_next is set to AUTOMATIC, which is the default, and a transaction is committed and written to the binary log, the server automatically generates and assigns a new GTID. If a transaction is rolled back or not written to the binary log for another reason, the server does not generate and assign a GTID.

  • If you set gtid_next to a valid GTID (consisting of a UUID and a transaction sequence number, separated by a colon), the server assigns that GTID to your transaction. This GTID is assigned and added to gtid_executed even when the transaction is not written to the binary log, or when the transaction is empty.

Note that after you set gtid_next to a specific GTID, and the transaction has been committed or rolled back, an explicit SET @@SESSION.gtid_next statement must be issued before any other statement. You can use this to set the GTID value back to AUTOMATIC if you do not want to assign any more GTIDs explicitly.
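
For example, to assign a specific GTID to the next transaction and then return to automatic assignment (the UUID shown is a placeholder):

mysql> SET @@SESSION.gtid_next = '3E11FA47-71CA-11E1-9E33-C80AA9429562:23';
mysql> BEGIN;
mysql> COMMIT;
mysql> SET @@SESSION.gtid_next = 'AUTOMATIC';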

When replication applier threads apply replicated transactions, they use this technique, setting @@SESSION.gtid_next explicitly to the GTID of the replicated transaction as assigned on the server of origin. This means the GTID from the server of origin is retained, rather than a new GTID being generated and assigned by the replica. It also means the GTID is added to gtid_executed on the replica even when binary logging or replica update logging is disabled on the replica, or when the transaction is a no-op or is filtered out on the replica.

It is possible for a client to simulate a replicated transaction by setting @@SESSION.gtid_next to a specific GTID before executing the transaction. This technique is used by mysqlbinlog to generate a dump of the binary log that the client can replay to preserve GTIDs. A simulated replicated transaction committed through a client is completely equivalent to a replicated transaction committed through a replication applier thread, and they cannot be distinguished after the fact.

The gtid_purged System Variable

The set of GTIDs in the gtid_purged system variable (@@GLOBAL.gtid_purged) contains the GTIDs of all the transactions that have been committed on the server, but do not exist in any binary log file on the server. gtid_purged is a subset of gtid_executed. The following categories of GTIDs are in gtid_purged:

  • GTIDs of replicated transactions that were committed with binary logging disabled on the replica.

  • GTIDs of transactions that were written to a binary log file that has now been purged.

  • GTIDs that were added explicitly to the set by the statement SET @@GLOBAL.gtid_purged.

You can change the value of gtid_purged in order to record on the server that the transactions in a certain GTID set have been applied, although they do not exist in any binary log on the server. When you add GTIDs to gtid_purged, they are also added to gtid_executed. An example use case for this action is when you are restoring a backup of one or more databases on a server, but you do not have the relevant binary logs containing the transactions on the server. In MySQL 5.7, you can only change the value of gtid_purged when gtid_executed (and therefore gtid_purged) is empty. For details of how to do this, see the description for gtid_purged.
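
For example, after restoring such a backup on a server whose GTID execution history is empty (and whose existing binary logs are not needed), you could record the backed-up server's gtid_executed set as already applied; the GTID set shown is a placeholder:

mysql> RESET MASTER;
mysql> SET @@GLOBAL.gtid_purged = '3E11FA47-71CA-11E1-9E33-C80AA9429562:1-1000';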

The sets of GTIDs in the gtid_executed and gtid_purged system variables are initialized when the server starts. Every binary log file begins with the event Previous_gtids_log_event, which contains the set of GTIDs in all previous binary log files (composed from the GTIDs in the preceding file's Previous_gtids_log_event, and the GTIDs of every Gtid_log_event in the preceding file itself). The contents of Previous_gtids_log_event in the oldest and most recent binary log files are used to compute the gtid_executed and gtid_purged sets at server startup:

  • gtid_executed is computed as the union of the GTIDs in Previous_gtids_log_event in the most recent binary log file, the GTIDs of transactions in that binary log file, and the GTIDs stored in the mysql.gtid_executed table. This GTID set contains all the GTIDs that have been used (or added explicitly to gtid_purged) on the server, whether or not they are currently in a binary log file on the server. It does not include the GTIDs for transactions that are currently being processed on the server (@@GLOBAL.gtid_owned).

  • gtid_purged is computed by first adding the GTIDs in Previous_gtids_log_event in the most recent binary log file and the GTIDs of transactions in that binary log file. This step gives the set of GTIDs that are currently, or were once, recorded in a binary log on the server (gtids_in_binlog). Next, the GTIDs in Previous_gtids_log_event in the oldest binary log file are subtracted from gtids_in_binlog. This step gives the set of GTIDs that are currently recorded in a binary log on the server (gtids_in_binlog_not_purged). Finally, gtids_in_binlog_not_purged is subtracted from gtid_executed. The result is the set of GTIDs that have been used on the server, but are not currently recorded in a binary log file on the server, and this result is used to initialize gtid_purged.

If binary logs from MySQL 5.7.7 or older are involved in these computations, it is possible for incorrect GTID sets to be computed for gtid_executed and gtid_purged, and they remain incorrect even if the server is later restarted. For details, see the description for the binlog_gtid_simple_recovery system variable, which controls how the binary logs are iterated to compute the GTID sets. If one of the situations described there applies on a server, set binlog_gtid_simple_recovery=FALSE in the server's configuration file before starting it. That setting makes the server iterate all the binary log files (not just the newest and oldest) to find where GTID events start to appear. This process could take a long time if the server has a large number of binary log files without GTID events.

Resetting the GTID Execution History

If you need to reset the GTID execution history on a server, use the RESET MASTER statement. For example, you might need to do this after carrying out test queries to verify a replication setup on new GTID-enabled servers, or when you want to join a new server to a replication group but it contains some unwanted local transactions that are not accepted by Group Replication.

Warning

Use RESET MASTER with caution to avoid losing any wanted GTID execution history and binary log files.

Before issuing RESET MASTER, ensure that you have backups of the server's binary log files and binary log index file, if any, and obtain and save the GTID set held in the global value of the gtid_executed system variable (for example, by issuing a SELECT @@GLOBAL.gtid_executed statement and saving the results). If you are removing unwanted transactions from that GTID set, use mysqlbinlog to examine the contents of the transactions to ensure that they have no value, contain no data that must be saved or replicated, and did not result in data changes on the server.
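
For example, you might save the current GTID set and use mysqlbinlog to inspect a suspect transaction before resetting; the GTID and binary log file name are placeholders:

mysql> SELECT @@GLOBAL.gtid_executed;
shell> mysqlbinlog --include-gtids='3E11FA47-71CA-11E1-9E33-C80AA9429562:23' binlog.000001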

When you issue RESET MASTER, the following reset operations are carried out:

  • The value of the gtid_purged system variable is set to an empty string ('').

  • The global value (but not the session value) of the gtid_executed system variable is set to an empty string.

  • The mysql.gtid_executed table is cleared (see mysql.gtid_executed Table).

  • If the server has binary logging enabled, the existing binary log files are deleted and the binary log index file is cleared.

Note that RESET MASTER is the method to reset the GTID execution history even if the server is a replica where binary logging is disabled. RESET SLAVE has no effect on the GTID execution history.

16.1.3.3 GTID Auto-Positioning

GTIDs replace the file-offset pairs previously required to determine points for starting, stopping, or resuming the flow of data between source and replica. When GTIDs are in use, all the information that the replica needs for synchronizing with the source is obtained directly from the replication data stream.

To start a replica using GTID-based replication, you do not include MASTER_LOG_FILE or MASTER_LOG_POS options in the CHANGE MASTER TO statement used to direct the replica to replicate from a given source. These options specify the name of the log file and the starting position within the file, but with GTIDs the replica does not need this nonlocal data. Instead, you need to enable the MASTER_AUTO_POSITION option. For full instructions to configure and start sources and replicas using GTID-based replication, see Section 16.1.3.4, “Setting Up Replication Using GTIDs”.

The MASTER_AUTO_POSITION option is disabled by default. If multi-source replication is enabled on the replica, you need to set the option for each applicable replication channel. Disabling the MASTER_AUTO_POSITION option again makes the replica revert to file-based replication, in which case you must also specify one or both of the MASTER_LOG_FILE or MASTER_LOG_POS options.
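
For example, on a multi-source replica you might enable auto-positioning for one channel, or later revert that channel to file-based positioning (stop the channel with STOP SLAVE FOR CHANNEL before issuing CHANGE MASTER TO); the channel name, log file name, and position shown are placeholders:

mysql> CHANGE MASTER TO MASTER_AUTO_POSITION = 1 FOR CHANNEL 'source_1';

mysql> CHANGE MASTER TO MASTER_AUTO_POSITION = 0,
     >     MASTER_LOG_FILE = 'source1-bin.000002',
     >     MASTER_LOG_POS = 4
     >     FOR CHANNEL 'source_1';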

When a replica has GTIDs enabled (GTID_MODE=ON, ON_PERMISSIVE, or OFF_PERMISSIVE ) and the MASTER_AUTO_POSITION option enabled, auto-positioning is activated for connection to the source. The source must have GTID_MODE=ON set in order for the connection to succeed. In the initial handshake, the replica sends a GTID set containing the transactions that it has already received, committed, or both. This GTID set is equal to the union of the set of GTIDs in the gtid_executed system variable (@@GLOBAL.gtid_executed), and the set of GTIDs recorded in the Performance Schema replication_connection_status table as received transactions (the result of the statement SELECT RECEIVED_TRANSACTION_SET FROM PERFORMANCE_SCHEMA.replication_connection_status).

The source responds by sending all transactions recorded in its binary log whose GTID is not included in the GTID set sent by the replica. To do this, the source first identifies the appropriate binary log file to begin working with, by checking the Previous_gtids_log_event in the header of each of its binary log files, starting with the most recent. When the source finds the first Previous_gtids_log_event which contains no transactions that the replica is missing, it begins with that binary log file. This method is efficient and only takes a significant amount of time if the replica is behind the source by a large number of binary log files. The source then reads the transactions in that binary log file and subsequent files up to the current one, sending the transactions with GTIDs that the replica is missing, and skipping the transactions that were in the GTID set sent by the replica. The elapsed time until the replica receives the first missing transaction depends on its offset in the binary log file. This exchange ensures that the source only sends the transactions with a GTID that the replica has not already received or committed. If the replica receives transactions from more than one source, as in the case of a diamond topology, the auto-skip function ensures that the transactions are not applied twice.

If any of the transactions that should be sent by the source have been purged from the source's binary log, or added to the set of GTIDs in the gtid_purged system variable by another method, the source sends the error ER_MASTER_HAS_PURGED_REQUIRED_GTIDS to the replica, and replication does not start. The GTIDs of the missing purged transactions are identified and listed in the source's error log in the warning message ER_FOUND_MISSING_GTIDS. The replica cannot recover automatically from this error because parts of the transaction history that are needed to catch up with the source have been purged. Attempting to reconnect without the MASTER_AUTO_POSITION option enabled only results in the loss of the purged transactions on the replica. The correct approach to recover from this situation is for the replica to replicate the missing transactions listed in the ER_FOUND_MISSING_GTIDS message from another source, or for the replica to be replaced by a new replica created from a more recent backup. Consider revising the binary log expiration period on the source to ensure that the situation does not occur again.
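
For example, before connecting a new replica you could check which GTIDs the source can no longer supply, and lengthen the binary log retention period; the retention value is an arbitrary example:

mysql> SELECT @@GLOBAL.gtid_purged;
mysql> SET @@GLOBAL.expire_logs_days = 7;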

If during the exchange of transactions it is found that the replica has received or committed transactions with the source's UUID in the GTID, but the source itself does not have a record of them, the source sends the error ER_SLAVE_HAS_MORE_GTIDS_THAN_MASTER to the replica and replication does not start. This situation can occur if a source that does not have sync_binlog=1 set experiences a power failure or operating system crash, and loses committed transactions that have not yet been synchronized to the binary log file, but have been received by the replica. The source and replica can diverge if any clients commit transactions on the source after it is restarted, which can lead to the situation where the source and replica are using the same GTID for different transactions. The correct approach to recover from this situation is to check manually whether the source and replica have diverged. If the same GTID is now in use for different transactions, you either need to perform manual conflict resolution for individual transactions as required, or remove either the source or the replica from the replication topology. If the issue is only missing transactions on the source, you can make the source into a replica instead, allow it to catch up with the other servers in the replication topology, and then make it a source again if needed.

16.1.3.4 Setting Up Replication Using GTIDs

This section describes a process for configuring and starting GTID-based replication in MySQL 5.7. This is a cold start procedure that assumes either that you are starting the replication source server for the first time, or that it is possible to stop it; for information about provisioning replicas using GTIDs from a running source, see Section 16.1.3.5, “Using GTIDs for Failover and Scaleout”. For information about changing GTID mode on servers online, see Section 16.1.4, “Changing Replication Modes on Online Servers”.

The key steps in this startup process for the simplest possible GTID replication topology, consisting of one source and one replica, are as follows:

  1. If replication is already running, synchronize both servers by making them read-only.

  2. Stop both servers.

  3. Restart both servers with GTIDs enabled and the correct options configured.

    The mysqld options necessary to start the servers as described are discussed in the example that follows later in this section.

  4. Instruct the replica to use the source as the replication data source and to use auto-positioning. The SQL statements needed to accomplish this step are described in the example that follows later in this section.

  5. Take a new backup. Binary logs containing transactions without GTIDs cannot be used on servers where GTIDs are enabled, so backups taken before this point cannot be used with your new configuration.

  6. Start the replica, then disable read-only mode on both servers, so that they can accept updates.

In the following example, two servers are already running as source and replica, using MySQL's binary log position-based replication protocol. If you are starting with new servers, see Section 16.1.2.2, “Creating a User for Replication” for information about adding a specific user for replication connections and Section 16.1.2.1, “Setting the Replication Source Configuration” for information about setting the server_id variable. The following examples show how to store mysqld startup options in the server's option file; see Section 4.2.2.2, “Using Option Files” for more information. Alternatively, you can specify these settings as startup options when running mysqld.

Most of the steps that follow require the use of the MySQL root account or another MySQL user account that has the SUPER privilege. mysqladmin shutdown requires either the SUPER privilege or the SHUTDOWN privilege.

Step 1: Synchronize the servers.  This step is only required when working with servers which are already replicating without using GTIDs. For new servers, proceed to Step 3. Make the servers read-only by setting the read_only system variable to ON on each server, issuing the following statement:

mysql> SET @@GLOBAL.read_only = ON;

Wait for all ongoing transactions to commit or roll back. Then, allow the replica to catch up with the source. It is extremely important that you make sure the replica has processed all updates before continuing.

If you use binary logs for anything other than replication, for example for point-in-time backup and restore, wait until you no longer need the old binary logs containing transactions without GTIDs. Ideally, wait for the server to purge all binary logs, and wait for any existing backup to expire.

Important

It is important to understand that logs containing transactions without GTIDs cannot be used on servers where GTIDs are enabled. Before proceeding, you must be sure that transactions without GTIDs do not exist anywhere in the topology.

Step 2: Stop both servers.  Stop each server using mysqladmin as shown here, where username is the user name for a MySQL user having sufficient privileges to shut down the server:

shell> mysqladmin -uusername -p shutdown

Then supply this user's password at the prompt.

Step 3: Start both servers with GTIDs enabled.  To enable GTID-based replication, each server must be started with GTID mode enabled by setting the gtid_mode variable to ON, and with the enforce_gtid_consistency variable enabled to ensure that only statements which are safe for GTID-based replication are logged. For example:

gtid_mode=ON
enforce-gtid-consistency=ON

In addition, you should start replicas with the --skip-slave-start option before configuring the replica settings. For more information on GTID related options and variables, see Section 16.1.6.5, “Global Transaction ID System Variables”.

It is not mandatory to have binary logging enabled on a replica in order to use GTIDs, because the mysql.gtid_executed table records the GTID state. A replication source server must always have binary logging enabled in order to be able to replicate. However, replica servers can use GTIDs without binary logging. If you need to disable binary logging on a replica, you can do this by specifying the --skip-log-bin and --log-slave-updates=OFF options for the replica.
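
As a sketch, a replica option file for GTID-based replication without binary logging might contain entries such as the following:

[mysqld]
gtid_mode=ON
enforce-gtid-consistency=ON
skip-log-bin
log-slave-updates=OFF
skip-slave-start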

Step 4: Configure the replica to use GTID-based auto-positioning.  Tell the replica to use the source with GTID based transactions as the replication data source, and to use GTID-based auto-positioning rather than file-based positioning. Issue a CHANGE MASTER TO statement on the replica, including the MASTER_AUTO_POSITION option in the statement to tell the replica that the source's transactions are identified by GTIDs.

You may also need to supply appropriate values for the source's host name and port number as well as the user name and password for a replication user account which can be used by the replica to connect to the source; if these have already been set prior to Step 1 and no further changes need to be made, the corresponding options can safely be omitted from the statement shown here.

mysql> CHANGE MASTER TO
     >     MASTER_HOST = host,
     >     MASTER_PORT = port,
     >     MASTER_USER = user,
     >     MASTER_PASSWORD = password,
     >     MASTER_AUTO_POSITION = 1;

Neither the MASTER_LOG_FILE option nor the MASTER_LOG_POS option may be used with MASTER_AUTO_POSITION set equal to 1. Attempting to do so causes the CHANGE MASTER TO statement to fail with an error.

Step 5: Take a new backup.  Existing backups that were made before you enabled GTIDs can no longer be used on these servers now that you have enabled GTIDs. Take a new backup at this point, so that you are not left without a usable backup.

For instance, you can execute FLUSH LOGS on the server where you are taking backups. Then either explicitly take a backup or wait for the next iteration of any periodic backup routine you may have set up.

Step 6: Start the replica and disable read-only mode.  Start the replica like this:

mysql> START SLAVE;

The following step is only necessary if you configured a server to be read-only in Step 1. To allow the server to begin accepting updates again, issue the following statement:

mysql> SET @@GLOBAL.read_only = OFF;
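
To confirm that the replica is connected and using auto-positioning, you can check the replication status, for example:

mysql> SHOW SLAVE STATUS\G

In the output, Auto_Position should be 1, and the Retrieved_Gtid_Set and Executed_Gtid_Set fields show the GTIDs received and applied by the replica.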

GTID-based replication should now be running, and you can begin (or resume) activity on the source as before. Section 16.1.3.5, “Using GTIDs for Failover and Scaleout”, discusses creation of new replicas when using GTIDs.

16.1.3.5 Using GTIDs for Failover and Scaleout

When using MySQL Replication with Global Transaction Identifiers (GTIDs), there are a number of techniques for provisioning a new replica, which can then be used for scaleout and promoted to source as necessary for failover. This section describes several of these techniques.

Global transaction identifiers were added to MySQL Replication for the purpose of simplifying management of the replication data flow in general, and of failover activities in particular. Each identifier uniquely identifies a set of binary log events that together make up a transaction. GTIDs play a key role in applying changes to the database: the server automatically skips any transaction having an identifier which the server recognizes as one that it has processed before. This behavior is critical for automatic replication positioning and correct failover.

The mapping between identifiers and sets of events comprising a given transaction is captured in the binary log. This poses some challenges when provisioning a new server with data from another existing server. To reproduce the identifier set on the new server, it is necessary to copy the identifiers from the old server to the new one, and to preserve the relationship between the identifiers and the actual events. This is necessary for restoring a replica that is immediately available as a candidate to become a new source on failover or switchover.

Simple replication.  The easiest way to reproduce all identifiers and transactions on a new server is to make the new server into the replica of a source that has the entire execution history, and enable global transaction identifiers on both servers. See Section 16.1.3.4, “Setting Up Replication Using GTIDs”, for more information.

Once replication is started, the new server copies the entire binary log from the source and thus obtains all information about all GTIDs.

This method is simple and effective, but requires the replica to read the binary log from the source; it can sometimes take a comparatively long time for the new replica to catch up with the source, so this method is not suitable for fast failover or restoring from backup. This section explains how to avoid fetching all of the execution history from the source by copying binary log files to the new server.

Copying data and transactions to the replica.  Executing the entire transaction history can be time-consuming when the source server has processed a large number of transactions previously, and this can represent a major bottleneck when setting up a new replica. To eliminate this requirement, a snapshot of the data set, together with the binary logs and global transaction information that the source server contains, can be imported to the new replica. The server from which the snapshot is taken can be either the source or one of its replicas, but you must ensure that this server has processed all required transactions before copying the data.

There are several variants of this method, differing in the manner in which data dumps and transactions from binary logs are transferred to the replica, as outlined here:

Data Set
  1. Create a dump file using mysqldump on the source server. Set the mysqldump option --master-data (with the default value of 1) to include a CHANGE MASTER TO statement with binary logging information. Set the --set-gtid-purged option to AUTO (the default) or ON, to include information about executed transactions in the dump. Then use the mysql client to import the dump file on the target server; see the example commands following this list.

  2. Alternatively, create a data snapshot of the source server using raw data files, then copy these files to the target server, following the instructions in Section 16.1.2.4, “Choosing a Method for Data Snapshots”. If you use InnoDB tables, you can use the mysqlbackup command from the MySQL Enterprise Backup component to produce a consistent snapshot. This command records the log name and offset corresponding to the snapshot to be used on the replica. MySQL Enterprise Backup is a commercial product that is included as part of a MySQL Enterprise subscription. See Section 27.2, “MySQL Enterprise Backup Overview” for detailed information.

  3. Alternatively, stop both the source and target servers, copy the contents of the source's data directory to the new replica's data directory, then restart the replica. If you use this method, the replica must be configured for GTID-based replication, in other words with gtid_mode=ON. For instructions and important information for this method, see Section 16.1.2.6, “Adding Replicas to a Replication Topology”.
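
As an illustration of the first method in the list above, the dump could be created and imported with commands such as the following; the connection options and file name are placeholders:

shell> mysqldump --all-databases --single-transaction --routines --events \
           --master-data --set-gtid-purged=AUTO -u root -p > dump.sql
shell> mysql -u root -p < dump.sql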

Transaction History

If the source server has a complete transaction history in its binary logs (that is, the GTID set @@GLOBAL.gtid_purged is empty), you can use these methods.

  1. Import the binary logs from the source server to the new replica using mysqlbinlog, with the --read-from-remote-server and --read-from-remote-master options.

  2. Alternatively, copy the source server's binary log files to the replica. You can make copies from the replica using mysqlbinlog with the --read-from-remote-server and --raw options. The copied binary log files can then be read into the replica by using mysqlbinlog > file (without the --raw option) to export them to SQL files, and then passing these files to the mysql client for processing. Ensure that all of the binary log files are processed using a single mysql process, rather than multiple connections. For example:

    shell> mysqlbinlog copied-binlog.000001 copied-binlog.000002 | mysql -u root -p
    

    For more information, see Section 4.6.7.3, “Using mysqlbinlog to Back Up Binary Log Files”.

This method has the advantage that a new server is available almost immediately; only those transactions that were committed while the snapshot or dump file was being replayed still need to be obtained from the existing source. This means that the replica's availability is not instantaneous, but only a relatively short amount of time should be required for the replica to catch up with these few remaining transactions.

Copying over binary logs to the target server in advance is usually faster than reading the entire transaction execution history from the source in real time. However, it may not always be feasible to move these files to the target when required, due to size or other considerations. The two remaining methods for provisioning a new replica discussed in this section use other means to transfer information about transactions to the new replica.

Injecting empty transactions.  The source's global gtid_executed variable contains the set of all transactions executed on the source. Rather than copy the binary logs when taking a snapshot to provision a new server, you can instead note the content of gtid_executed on the server from which the snapshot was taken. Before adding the new server to the replication chain, simply commit an empty transaction on the new server for each transaction identifier contained in the source's gtid_executed, like this:

SET GTID_NEXT='aaa-bbb-ccc-ddd:N';

BEGIN;
COMMIT;

SET GTID_NEXT='AUTOMATIC';

Once all transaction identifiers have been reinstated in this way using empty transactions, you must flush and purge the replica's binary logs, as shown here, where N is the nonzero suffix of the current binary log file name:

FLUSH LOGS;
PURGE BINARY LOGS TO 'source-bin.00000N';

You should do this to prevent this server from flooding the replication stream with false transactions in the event that it is later promoted to source. (The FLUSH LOGS statement forces the creation of a new binary log file; PURGE BINARY LOGS purges the empty transactions, but retains their identifiers.)

This method creates a server that is essentially a snapshot, but in time is able to become a source as its binary log history converges with that of the replication stream (that is, as it catches up with the source or sources). This outcome is similar in effect to that obtained using the remaining provisioning method, which we discuss in the next few paragraphs.

Excluding transactions with gtid_purged.  The source's global gtid_purged variable contains the set of all transactions that have been purged from the source's binary log. As with the method discussed previously (see Injecting empty transactions), you can record the value of gtid_executed on the server from which the snapshot was taken (in place of copying the binary logs to the new server). Unlike the previous method, there is no need to commit empty transactions (or to issue PURGE BINARY LOGS); instead, you can set gtid_purged on the replica directly, based on the value of gtid_executed on the server from which the backup or snapshot was taken.

As with the method using empty transactions, this method creates a server that is functionally a snapshot, but in time is able to become a source as its binary log history converges with that of the replication source server or the group.

Restoring GTID mode replicas.  When restoring a replica in a GTID based replication setup that has encountered an error, injecting an empty transaction may not solve the problem because an event does not have a GTID.

Use mysqlbinlog to find the next transaction, which is probably the first transaction in the next log file after the event. Copy everything up to the COMMIT for that transaction, being sure to include the SET @@SESSION.GTID_NEXT. Even if you are not using row-based replication, you can still run binary log row events in the command line client.

Stop the replica and run the transaction you copied. The mysqlbinlog output sets the delimiter to /*!*/;, so set it back:

mysql> DELIMITER ;

Restart replication from the correct position automatically:

mysql> SET GTID_NEXT=automatic;
mysql> RESET SLAVE;
mysql> START SLAVE;

16.1.3.6 Restrictions on Replication with GTIDs

Because GTID-based replication is dependent on transactions, some features otherwise available in MySQL are not supported when using it. This section provides information about restrictions on and limitations of replication with GTIDs.

Updates involving nontransactional storage engines.  When using GTIDs, updates to tables using nontransactional storage engines such as MyISAM cannot be made in the same statement or transaction as updates to tables using transactional storage engines such as InnoDB.

This restriction is due to the fact that updates to tables that use a nontransactional storage engine mixed with updates to tables that use a transactional storage engine within the same transaction can result in multiple GTIDs being assigned to the same transaction.

Such problems can also occur when the source and the replica use different storage engines for their respective versions of the same table, where one storage engine is transactional and the other is not. Also be aware that triggers that are defined to operate on nontransactional tables can be the cause of these problems.

In any of the cases just mentioned, the one-to-one correspondence between transactions and GTIDs is broken, with the result that GTID-based replication cannot function correctly.

CREATE TABLE ... SELECT statements.  CREATE TABLE ... SELECT statements are not allowed when using GTID-based replication. When binlog_format is set to STATEMENT, a CREATE TABLE ... SELECT statement is recorded in the binary log as one transaction with one GTID, but if ROW format is used, the statement is recorded as two transactions with two GTIDs. If a source used STATEMENT format and a replica used ROW format, the replica would be unable to handle the transaction correctly, therefore the CREATE TABLE ... SELECT statement is disallowed with GTIDs to prevent this scenario.

Temporary tables.  CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE statements are not supported inside transactions, procedures, functions, and triggers when using GTIDs (that is, when the enforce_gtid_consistency system variable is set to ON). It is possible to use these statements with GTIDs enabled, but only outside of any transaction, and only with autocommit=1.
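
For example, the following usage is permitted with GTIDs enabled, because the statements are issued outside any transaction with autocommit enabled; the table name is an arbitrary example:

mysql> SET autocommit = 1;
mysql> CREATE TEMPORARY TABLE tmp1 (c1 INT);
mysql> DROP TEMPORARY TABLE tmp1;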

Preventing execution of unsupported statements.  To prevent execution of statements that would cause GTID-based replication to fail, all servers must be started with the --enforce-gtid-consistency option when enabling GTIDs. This causes statements of any of the types discussed previously in this section to fail with an error.

Note that --enforce-gtid-consistency only takes effect if binary logging takes place for a statement. If binary logging is disabled on the server, or if statements are not written to the binary log because they are removed by a filter, GTID consistency is not checked or enforced for the statements that are not logged.

For information about other required startup options when enabling GTIDs, see Section 16.1.3.4, “Setting Up Replication Using GTIDs”.

Skipping transactions.  sql_slave_skip_counter is not supported when using GTIDs. If you need to skip transactions, use the value of the source's gtid_executed variable instead. For instructions, see Section 16.1.7.3, “Skipping Transactions”.

Ignoring servers.  The IGNORE_SERVER_IDS option of the CHANGE MASTER TO statement is deprecated when using GTIDs, because transactions that have already been applied are automatically ignored. Before starting GTID-based replication, check for and clear all ignored server ID lists that have previously been set on the servers involved. The SHOW SLAVE STATUS statement, which can be issued for individual channels, displays the list of ignored server IDs if there is one. If there is no list, the Replicate_Ignore_Server_Ids field is blank.
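
For example, to check for and clear an ignored server ID list on a channel (the channel name is a placeholder; FOR CHANNEL can be omitted on a single-source replica):

mysql> SHOW SLAVE STATUS FOR CHANNEL 'source_1'\G
mysql> STOP SLAVE FOR CHANNEL 'source_1';
mysql> CHANGE MASTER TO IGNORE_SERVER_IDS = () FOR CHANNEL 'source_1';
mysql> START SLAVE FOR CHANNEL 'source_1';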

GTID mode and mysqldump.  It is possible to import a dump made using mysqldump into a MySQL server running with GTID mode enabled, provided that there are no GTIDs in the target server's binary log.

GTID mode and mysql_upgrade.  When the server is running with global transaction identifiers (GTIDs) enabled (gtid_mode=ON), do not enable binary logging by mysql_upgrade (the --write-binlog option).

16.1.3.7 Stored Function Examples to Manipulate GTIDs

MySQL includes some built-in (native) functions for use with GTID-based replication. These functions are as follows:

GTID_SUBSET(set1,set2)

Given two sets of global transaction identifiers set1 and set2, returns true if all GTIDs in set1 are also in set2. Returns false otherwise.

GTID_SUBTRACT(set1,set2)

Given two sets of global transaction identifiers set1 and set2, returns only those GTIDs from set1 that are not in set2.

WAIT_FOR_EXECUTED_GTID_SET(gtid_set[, timeout])

Wait until the server has applied all of the transactions whose global transaction identifiers are contained in gtid_set. The optional timeout stops the function from waiting after the specified number of seconds have elapsed.

WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS(gtid_set[, timeout][,channel])

Like WAIT_FOR_EXECUTED_GTID_SET(), but for a single started replication channel. Use WAIT_FOR_EXECUTED_GTID_SET() instead to ensure all channels are covered in all states.

For details of these functions, see Section 12.19, “Functions Used with Global Transaction Identifiers (GTIDs)”.
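
For example, the built-in functions can be called directly with literal GTID sets; the UUID shown is a placeholder:

mysql> SELECT GTID_SUBSET('3E11FA47-71CA-11E1-9E33-C80AA9429562:23',
     >                    '3E11FA47-71CA-11E1-9E33-C80AA9429562:21-57');
mysql> SELECT GTID_SUBTRACT('3E11FA47-71CA-11E1-9E33-C80AA9429562:21-57',
     >                      '3E11FA47-71CA-11E1-9E33-C80AA9429562:20-25');

The first statement returns 1, because the single GTID in the first set falls within the second set; the second statement returns the GTIDs numbered 26 to 57, which are in the first set but not the second.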

You can define your own stored functions to work with GTIDs. For information on defining stored functions, see Chapter 22, Stored Objects. The following examples show some useful stored functions that can be created based on the built-in GTID_SUBSET() and GTID_SUBTRACT() functions.

Note that in these stored functions, the delimiter command has been used to change the MySQL statement delimiter to a vertical bar, as follows:

mysql> delimiter |

All of these functions take string representations of GTID sets as arguments, so GTID sets must always be quoted when used with them.

This function returns nonzero (true) if two GTID sets are the same set, even if they are not formatted in the same way.

CREATE FUNCTION GTID_IS_EQUAL(gtid_set_1 LONGTEXT, gtid_set_2 LONGTEXT)
RETURNS INT
  RETURN GTID_SUBSET(gtid_set_1, gtid_set_2) AND GTID_SUBSET(gtid_set_2, gtid_set_1)|

This function returns nonzero (true) if two GTID sets are disjoint.

CREATE FUNCTION GTID_IS_DISJOINT(gtid_set_1 LONGTEXT, gtid_set_2 LONGTEXT)
RETURNS INT
  RETURN GTID_SUBSET(gtid_set_1, GTID_SUBTRACT(gtid_set_1, gtid_set_2))|

This function returns nonzero (true) if two GTID sets are disjoint, and sum is the union of the two sets.

CREATE FUNCTION GTID_IS_DISJOINT_UNION(gtid_set_1 LONGTEXT, gtid_set_2 LONGTEXT, sum LONGTEXT)
RETURNS INT
  RETURN GTID_IS_EQUAL(GTID_SUBTRACT(sum, gtid_set_1), gtid_set_2) AND
         GTID_IS_EQUAL(GTID_SUBTRACT(sum, gtid_set_2), gtid_set_1)|

This function returns a normalized form of the GTID set, in all uppercase, with no whitespace and no duplicates. The UUIDs are arranged in alphabetic order and intervals are arranged in numeric order.

CREATE FUNCTION GTID_NORMALIZE(g LONGTEXT)
RETURNS LONGTEXT
RETURN GTID_SUBTRACT(g, '')|

This function returns the union of two GTID sets.

CREATE FUNCTION GTID_UNION(gtid_set_1 LONGTEXT, gtid_set_2 LONGTEXT)
RETURNS LONGTEXT
  RETURN GTID_NORMALIZE(CONCAT(gtid_set_1, ',', gtid_set_2))|

This function returns the intersection of two GTID sets.

CREATE FUNCTION GTID_INTERSECTION(gtid_set_1 LONGTEXT, gtid_set_2 LONGTEXT)
RETURNS LONGTEXT
  RETURN GTID_SUBTRACT(gtid_set_1, GTID_SUBTRACT(gtid_set_1, gtid_set_2))|

This function returns the symmetric difference between two GTID sets, that is, the GTIDs that exist in gtid_set_1 but not in gtid_set_2, and also the GTIDs that exist in gtid_set_2 but not in gtid_set_1.

CREATE FUNCTION GTID_SYMMETRIC_DIFFERENCE(gtid_set_1 LONGTEXT, gtid_set_2 LONGTEXT)
RETURNS LONGTEXT
  RETURN GTID_SUBTRACT(CONCAT(gtid_set_1, ',', gtid_set_2), GTID_INTERSECTION(gtid_set_1, gtid_set_2))|

This function removes from a GTID set all the GTIDs from a specified origin, and returns the remaining GTIDs, if any. The UUID is the identifier used by the server where the transaction originated, which is normally the server_uuid value.

CREATE FUNCTION GTID_SUBTRACT_UUID(gtid_set LONGTEXT, uuid TEXT)
RETURNS LONGTEXT
  RETURN GTID_SUBTRACT(gtid_set, CONCAT(uuid, ':1-', (1 << 63) - 2))|

This function reverses the previously listed function to return only those GTIDs from the GTID set that originate from the server with the specified identifier (UUID).

CREATE FUNCTION GTID_INTERSECTION_WITH_UUID(gtid_set LONGTEXT, uuid TEXT)
RETURNS LONGTEXT
  RETURN GTID_SUBTRACT(gtid_set, GTID_SUBTRACT_UUID(gtid_set, uuid))|

Example 16.1 Verifying that a replica is up to date

The built-in functions GTID_SUBSET and GTID_SUBTRACT can be used to check that a replica has applied at least every transaction that a source has applied.

To perform this check with GTID_SUBSET, execute the following statement on the replica:

SELECT GTID_SUBSET(source_gtid_executed, replica_gtid_executed)

If this returns 0 (false), some GTIDs in source_gtid_executed are not present in replica_gtid_executed, so the source has applied some transactions that the replica has not applied, and the replica is therefore not up to date.

To perform the check with GTID_SUBTRACT, execute the following statement on the replica:

SELECT GTID_SUBTRACT(source_gtid_executed, replica_gtid_executed)

This statement returns any GTIDs that are in source_gtid_executed but not in replica_gtid_executed. If any GTIDs are returned, the source has applied some transactions that the replica has not applied, and the replica is therefore not up to date.


Example 16.2 Backup and restore scenario

The stored functions GTID_IS_EQUAL, GTID_IS_DISJOINT, and GTID_IS_DISJOINT_UNION could be used to verify backup and restore operations involving multiple databases and servers. In this example scenario, server1 contains database db1, and server2 contains database db2. The goal is to copy database db2 to server1, and the result on server1 should be the union of the two databases. The procedure used is to back up server2 using mysqlpump or mysqldump, then restore this backup on server1.

Provided the backup program's option --set-gtid-purged was set to ON or the default of AUTO, the program's output contains a SET @@GLOBAL.gtid_purged statement that adds the gtid_executed set from server2 to the gtid_purged set on server1. The gtid_purged set contains the GTIDs of all the transactions that have been committed on a server but do not exist in any binary log file on the server. When database db2 is copied to server1, the GTIDs of the transactions committed on server2, which are not in the binary log files on server1, must be added to the gtid_purged set for server1 to make the set complete.

The stored functions can be used to assist with the following steps in this scenario:

  • Use GTID_IS_EQUAL to verify that the backup operation computed the correct GTID set for the SET @@GLOBAL.gtid_purged statement. On server2, extract that statement from the mysqlpump or mysqldump output, and store the GTID set into a local variable, such as $gtid_purged_set. Then execute the following statement:

    server2> SELECT GTID_IS_EQUAL($gtid_purged_set, @@GLOBAL.gtid_executed); 

    If the result is 1, the two GTID sets are equal, and the set has been computed correctly.

  • Use GTID_IS_DISJOINT to verify that the GTID set in the mysqlpump or mysqldump output does not overlap with the gtid_executed set on server1. If there is any overlap, with identical GTIDs present on both servers for some reason, copying database db2 to server1 produces errors. To check, on server1, extract and store the gtid_purged set from the output into a local variable as above, then execute the following statement:

    server1> SELECT GTID_IS_DISJOINT($gtid_purged_set, @@GLOBAL.gtid_executed); 

    If the result is 1, there is no overlap between the two GTID sets, so no duplicate GTIDs are present.

  • Use GTID_IS_DISJOINT_UNION to verify that the restore operation resulted in the correct GTID state on server1. Before restoring the backup, on server1, obtain the existing gtid_executed set by executing the following statement:

    server1> SELECT @@GLOBAL.gtid_executed;

    Store the result in a local variable $original_gtid_executed. Also store the gtid_purged set in a local variable as described above. When the backup from server2 has been restored onto server1, execute the following statement to verify the GTID state:

    server1> SELECT GTID_IS_DISJOINT_UNION($original_gtid_executed,
                                           $gtid_purged_set,
                                           @@GLOBAL.gtid_executed); 

    If the result is 1, the stored function has verified that the original gtid_executed set from server1 ($original_gtid_executed) and the gtid_purged set that was added from server2 ($gtid_purged_set) have no overlap, and also that the updated gtid_executed set on server1 now consists of the previous gtid_executed set from server1 plus the gtid_purged set from server2, which is the desired result. Ensure that this check is carried out before any further transactions take place on server1; otherwise, the new transactions in the gtid_executed set cause it to fail.


Example 16.3 Selecting the most up-to-date replica for manual failover

The stored function GTID_UNION could be used to identify the most up-to-date replica from a set of replicas, in order to perform a manual failover operation after a replication source server has stopped unexpectedly. If some of the replicas are experiencing replication lag, this stored function can be used to compute the most up-to-date replica without waiting for all the replicas to apply their existing relay logs, and therefore to minimize the failover time. The function can return the union of the gtid_executed set on each replica with the set of transactions received by the replica, which is recorded in the Performance Schema table replication_connection_status. You can compare these results to find which replica's record of transactions is the most up-to-date, even if not all of the transactions have been committed yet.

On each replica, compute the complete record of transactions by issuing the following statement:

SELECT GTID_UNION(RECEIVED_TRANSACTION_SET, @@GLOBAL.gtid_executed)
    FROM performance_schema.replication_connection_status
    WHERE channel_name = 'name';

You can then compare the results from each replica to see which one has the most up-to-date record of transactions, and use this replica as the new replication source server.


Example 16.4 Checking for extraneous transactions on a replica

The stored function GTID_SUBTRACT_UUID could be used to check whether a replica has received transactions that did not originate from its designated source or sources. If it has, there might be an issue with your replication setup, or with a proxy, router, or load balancer. This function works by removing from a GTID set all the GTIDs from a specified originating server, and returning the remaining GTIDs, if any.

For a replica with a single source, issue the following statement, giving the identifier of the originating source, which is normally the server_uuid value:

SELECT GTID_SUBTRACT_UUID(@@GLOBAL.gtid_executed, server_uuid_of_source);

If the result is not empty, the transactions returned are extra transactions that did not originate from the designated source.

For a replica in a multi-source replication topology, repeat the function for each source, for example:

SELECT GTID_SUBTRACT_UUID(GTID_SUBTRACT_UUID(@@GLOBAL.gtid_executed,
                                             server_uuid_of_source_1),
                                             server_uuid_of_source_2);

If the result is not empty, the transactions returned are extra transactions that did not originate from any of the designated sources.


Example 16.5 Verifying that a server in a replication topology is read-only

The stored function GTID_INTERSECTION_WITH_UUID could be used to verify that a server has not originated any GTIDs and is in a read-only state. The function returns only those GTIDs from the GTID set that originate from the server with the specified identifier. If any of the transactions in the server's gtid_executed set have the server's own identifier, the server itself originated those transactions. You can issue the following statement on the server to check:

SELECT GTID_INTERSECTION_WITH_UUID(@@GLOBAL.gtid_executed, my_server_uuid);


Example 16.6 Validating an additional replica in a multi-source replication setup

The stored function GTID_INTERSECTION_WITH_UUID could be used to find out if a replica attached to a multi-source replication setup has applied all the transactions originating from one particular source. In this scenario, source1 and source2 are both sources and replicas and replicate to each other. source2 also has its own replica. The replica also receives and applies transactions from source1 if source2 is configured with log_slave_updates=ON, but it does not do so if source2 uses log_slave_updates=OFF. Whatever the case, we currently only want to find out if the replica is up to date with source2. In this situation, the stored function GTID_INTERSECTION_WITH_UUID can be used to identify the transactions that source2 originated, discarding the transactions that source2 has replicated from source1. The built-in function GTID_SUBSET can then be used to compare the result to the gtid_executed set on the replica. If the replica is up to date with source2, the gtid_executed set on the replica contains all the transactions in the intersection set (the transactions that originated from source2).

To carry out this check, store source2's gtid_executed set, source2's server UUID, and the replica's gtid_executed set, into client-side variables as follows:

    $source2_gtid_executed :=
      source2> SELECT @@GLOBAL.gtid_executed;
    $source2_server_uuid :=
      source2> SELECT @@GLOBAL.server_uuid;
    $replica_gtid_executed :=
      replica> SELECT @@GLOBAL.gtid_executed;

Then use GTID_INTERSECTION_WITH_UUID and GTID_SUBSET with these variables as input, as follows:

SELECT GTID_SUBSET(GTID_INTERSECTION_WITH_UUID($source2_gtid_executed,
                                               $source2_server_uuid),
                                               $replica_gtid_executed);

The server identifier from source2 ($source2_server_uuid) is used with GTID_INTERSECTION_WITH_UUID to identify and return only those GTIDs from source2's gtid_executed set that originated on source2, omitting those that originated on source1. The resulting GTID set is then compared with the set of all executed GTIDs on the replica, using GTID_SUBSET. If this statement returns nonzero (true), all the identified GTIDs from source2 (the first set input) are also in the replica's gtid_executed set (the second set input), meaning that the replica has replicated all the transactions that originated from source2.


16.1.4 Changing Replication Modes on Online Servers

This section describes how to change the mode of replication being used without having to take the server offline.

16.1.4.1 Replication Mode Concepts

To be able to safely configure the replication mode of an online server it is important to understand some key concepts of replication. This section explains these concepts and is essential reading before attempting to modify the replication mode of an online server.

The modes of replication available in MySQL rely on different techniques for identifying transactions which are logged. The types of transactions used by replication are as follows:

  • GTID transactions are identified by a global transaction identifier (GTID) in the form UUID:NUMBER. Every GTID transaction in a log is always preceded by a Gtid_log_event. GTID transactions can be addressed using either the GTID or using the file name and position.

  • Anonymous transactions do not have a GTID assigned, and MySQL ensures that every anonymous transaction in a log is preceded by an Anonymous_gtid_log_event. In previous versions, anonymous transactions were not preceded by any particular event. Anonymous transactions can only be addressed using file name and position.

When using GTIDs you can take advantage of auto-positioning and automatic failover, as well as use WAIT_FOR_EXECUTED_GTID_SET() and session_track_gtids, and monitor replicated transactions using Performance Schema tables. With GTIDs enabled, you cannot use sql_slave_skip_counter; instead, use empty transactions.
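
For example, to wait up to 10 seconds for the server to apply a given set of transactions (the GTID set shown is a placeholder):

mysql> SELECT WAIT_FOR_EXECUTED_GTID_SET('3E11FA47-71CA-11E1-9E33-C80AA9429562:1-5', 10);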

Transactions in a relay log that was received from a source running a previous version of MySQL may not be preceded by any particular event at all, but after being replayed and logged in the replica's binary log, they are preceded with an Anonymous_gtid_log_event.

The ability to configure the replication mode online means that the gtid_mode and enforce_gtid_consistency variables are now both dynamic and can be set from a top-level statement by an account that has privileges sufficient to set global system variables. See Section 5.1.8.1, “System Variable Privileges”. In previous versions, both of these variables could only be configured using the appropriate option at server start, meaning that changes to the replication mode required a server restart. In all versions gtid_mode could be set to ON or OFF, which corresponded to whether GTIDs were used to identify transactions or not. When gtid_mode=ON it is not possible to replicate anonymous transactions, and when gtid_mode=OFF only anonymous transactions can be replicated. As of MySQL 5.7.6, the gtid_mode variable has two additional states, OFF_PERMISSIVE and ON_PERMISSIVE. When gtid_mode=OFF_PERMISSIVE then new transactions are anonymous while permitting replicated transactions to be either GTID or anonymous transactions. When gtid_mode=ON_PERMISSIVE then new transactions use GTIDs while permitting replicated transactions to be either GTID or anonymous transactions. This means it is possible to have a replication topology that has servers using both anonymous and GTID transactions. For example a source with gtid_mode=ON could be replicating to a replica with gtid_mode=ON_PERMISSIVE. The valid values for gtid_mode are as follows and in this order:

  • OFF

  • OFF_PERMISSIVE

  • ON_PERMISSIVE

  • ON

It is important to note that the state of gtid_mode can only be changed by one step at a time based on the above order. For example, if gtid_mode is currently set to OFF_PERMISSIVE, it is possible to change to OFF or ON_PERMISSIVE but not to ON. This is to ensure that the process of changing from anonymous transactions to GTID transactions online is correctly handled by the server. When you switch between gtid_mode=ON and gtid_mode=OFF, the GTID state (in other words the value of gtid_executed) is persistent. This ensures that the GTID set that has been applied by the server is always retained, regardless of changes between types of gtid_mode.
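
For example, on a server currently running with gtid_mode=OFF, the only permitted online change is to the adjacent state; attempting to jump straight to ON returns an error:

mysql> SET @@GLOBAL.gtid_mode = OFF_PERMISSIVE;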

As part of the changes introduced by MySQL 5.7.6, the fields related to GTIDs have been modified so that they display the correct information regardless of the currently selected gtid_mode. This means that fields which display GTID sets, such as gtid_executed, gtid_purged, RECEIVED_TRANSACTION_SET in the replication_connection_status Performance Schema table, and the GTID related results of SHOW SLAVE STATUS, now return the empty string when there are no GTIDs present. Fields that display a single GTID, such as CURRENT_TRANSACTION in the Performance Schema replication_applier_status_by_worker table, now display ANONYMOUS when GTID transactions are not being used.

Replication from a source using gtid_mode=ON provides the ability to use auto-positioning, configured using the CHANGE MASTER TO MASTER_AUTO_POSITION = 1 statement. The replication topology in use affects whether it is possible to enable auto-positioning, because this feature relies on GTIDs and is not compatible with anonymous transactions. An error is generated if auto-positioning is enabled and an anonymous transaction is encountered. It is strongly recommended to ensure there are no anonymous transactions remaining in the topology before enabling auto-positioning; see Section 16.1.4.2, “Enabling GTID Transactions Online”. The valid combinations of gtid_mode and auto-positioning on source and replica are shown in the following table, where the source's gtid_mode is shown across the columns and the replica's gtid_mode down the rows:

Table 16.1 Valid Combinations of Source and Replica gtid_mode

Replica gtid_mode        Source OFF   Source OFF_PERMISSIVE   Source ON_PERMISSIVE   Source ON
Replica OFF              Y            Y                       N                      N
Replica OFF_PERMISSIVE   Y            Y                       Y                      Y*
Replica ON_PERMISSIVE    Y            Y                       Y                      Y*
Replica ON               N            N                       Y                      Y*


In the above table, the entries are:

  • Y: the gtid_mode of source and replica is compatible

  • N: the gtid_mode of source and replica is not compatible

  • *: auto-positioning can be used

The currently selected gtid_mode also affects the gtid_next variable. The following table shows the behavior of the server for the different values of gtid_mode and gtid_next.

Table 16.2 Valid Combinations of gtid_mode and gtid_next

gtid_mode         gtid_next=AUTOMATIC   gtid_next=AUTOMATIC   gtid_next=ANONYMOUS   gtid_next=UUID:NUMBER
                  (binary log on)       (binary log off)
OFF               ANONYMOUS             ANONYMOUS             ANONYMOUS             Error
OFF_PERMISSIVE    ANONYMOUS             ANONYMOUS             ANONYMOUS             UUID:NUMBER
ON_PERMISSIVE     New GTID              ANONYMOUS             ANONYMOUS             UUID:NUMBER
ON                New GTID              ANONYMOUS             Error                 UUID:NUMBER

In the above table, the entries are:

  • ANONYMOUS: generate an anonymous transaction.

  • Error: generate an error and fail to execute SET GTID_NEXT.

  • UUID:NUMBER: generate a GTID with the specified UUID:NUMBER.

  • New GTID: generate a GTID with an automatically generated number.

When the binary log is off and gtid_next is set to AUTOMATIC, then no GTID is generated. This is consistent with the behavior of previous versions.
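
As an illustration of the ANONYMOUS column, on a server where gtid_mode is not ON you can force the next transaction in the session to be anonymous. A minimal sketch, using a hypothetical table:

    SET GTID_NEXT = 'ANONYMOUS';
    BEGIN;
    INSERT INTO db1.t1 VALUES (1);   -- hypothetical table
    COMMIT;
    SET GTID_NEXT = 'AUTOMATIC';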

16.1.4.2 Enabling GTID Transactions Online

This section describes how to enable GTID transactions, and optionally auto-positioning, on servers that are already online and using anonymous transactions. This procedure does not require taking the server offline and is suited to use in production. However, if you are able to take the servers offline while enabling GTID transactions, that process is easier.

Before you start, ensure that the servers meet the following pre-conditions:

  • All servers in your topology must use MySQL 5.7.6 or later. You cannot enable GTID transactions online on any single server unless all servers in the topology are using this version.

  • All servers have gtid_mode set to the default value OFF.
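
As a quick check of both preconditions, you can run the following on each server (a sketch; the output should show a server version of 5.7.6 or later and gtid_mode OFF):

    SELECT VERSION(), @@GLOBAL.gtid_mode, @@GLOBAL.enforce_gtid_consistency;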

The following procedure can be paused at any time and later resumed where it was, or reversed by jumping to the corresponding step of Section 16.1.4.3, “Disabling GTID Transactions Online”, the online procedure to disable GTIDs. This makes the procedure fault-tolerant because any unrelated issues that may appear in the middle of the procedure can be handled as usual, and then the procedure continued where it was left off.

Note

It is crucial that you complete every step before continuing to the next step.

To enable GTID transactions:

  1. On each server, execute:

    SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = WARN;

    Let the server run for a while with your normal workload and monitor the logs. If this step causes any warnings in the log, adjust your application so that it only uses GTID-compatible features and does not generate any warnings.

    Important

    This is the first important step. You must ensure that no warnings are being generated in the error logs before going to the next step.

  2. On each server, execute:

    SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;
  3. On each server, execute:

    SET @@GLOBAL.GTID_MODE = OFF_PERMISSIVE;

    It does not matter which server executes this statement first, but it is important that all servers complete this step before any server begins the next step.

  4. On each server, execute:

    SET @@GLOBAL.GTID_MODE = ON_PERMISSIVE;

    It does not matter which server executes this statement first.

  5. On each server, wait until the status variable ONGOING_ANONYMOUS_TRANSACTION_COUNT is zero. This can be checked using:

    SHOW STATUS LIKE 'ONGOING_ANONYMOUS_TRANSACTION_COUNT';
    Note

    On a replica, it is theoretically possible that this shows zero and then nonzero again. This is not a problem; it suffices that it shows zero once.

  6. Wait for all transactions generated up to step 5 to replicate to all servers. You can do this without stopping updates: the only important thing is that all anonymous transactions get replicated.

    See Section 16.1.4.4, “Verifying Replication of Anonymous Transactions” for one method of checking that all anonymous transactions have replicated to all servers.

  7. If you use binary logs for anything other than replication, for example point-in-time backup and restore, wait until you no longer need the old binary logs that contain transactions without GTIDs.

    For instance, after step 6 has completed, you can execute FLUSH LOGS on the server where you are taking backups. Then either explicitly take a backup or wait for the next iteration of any periodic backup routine you may have set up.

    Ideally, wait for the server to purge all binary logs that existed when step 6 was completed. Also wait for any backup taken before step 6 to expire.

    Important

    This is the second important point. It is vital to understand that binary logs containing anonymous transactions (that is, transactions without GTIDs) cannot be used after the next step. After this step, you must be sure that transactions without GTIDs do not exist anywhere in the topology.
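
    As a sketch of the backup-server handling described in this step (the statements are standard; the retention interval shown is illustrative), you might rotate and later purge the pre-GTID binary logs like this:

    FLUSH LOGS;
    -- later, once no backup or recovery scenario still needs binary logs
    -- containing anonymous transactions:
    PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;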

  8. On each server, execute:

    SET @@GLOBAL.GTID_MODE = ON;
  9. On each server, add gtid_mode=ON and enforce_gtid_consistency=ON to my.cnf.

    You are now guaranteed that all transactions have a GTID (except transactions generated in step 5 or earlier, which have already been processed). To start using the GTID protocol so that you can later perform automatic fail-over, execute the following on each replica. Optionally, if you use multi-source replication, do this for each channel and include the FOR CHANNEL channel clause:

    STOP SLAVE [FOR CHANNEL 'channel'];
    CHANGE MASTER TO MASTER_AUTO_POSITION = 1 [FOR CHANNEL 'channel'];
    START SLAVE [FOR CHANNEL 'channel'];
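
    A minimal option-file fragment for this step might look like the following (merge it into the existing [mysqld] section of my.cnf on each server):

    [mysqld]
    gtid_mode=ON
    enforce_gtid_consistency=ON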

16.1.4.3 Disabling GTID Transactions Online

This section describes how to disable GTID transactions on servers that are already online. This procedure does not require taking the server offline and is suited to use in production. However, if you are able to take the servers offline while disabling GTID transactions, that process is easier.

The process is similar to enabling GTID transactions while the server is online, but reversing the steps. The only thing that differs is the point at which you wait for logged transactions to replicate.

Before you start, ensure that the servers meet the following pre-conditions:

  • All servers in your topology must use MySQL 5.7.6 or later. You cannot disable GTID transactions online on any single server unless all servers in the topology are using this version.

  • All servers have gtid_mode set to ON.

  1. Execute the following on each replica, and if you are using multi-source replication, do it for each channel and include the FOR CHANNEL channel clause:

    STOP SLAVE [FOR CHANNEL 'channel'];
    CHANGE MASTER TO MASTER_AUTO_POSITION = 0, MASTER_LOG_FILE = file, \
    MASTER_LOG_POS = position [FOR CHANNEL 'channel'];
    START SLAVE [FOR CHANNEL 'channel'];
     
  2. On each server, execute:

    SET @@GLOBAL.GTID_MODE = ON_PERMISSIVE;
  3. On each server, execute:

    SET @@GLOBAL.GTID_MODE = OFF_PERMISSIVE;
  4. On each server, wait until the variable @@GLOBAL.GTID_OWNED is equal to the empty string. This can be checked using:

    SELECT @@GLOBAL.GTID_OWNED;

    On a replica, it is theoretically possible that this is empty and then nonempty again. This is not a problem; it suffices that it is empty once.

  5. Wait for all transactions that currently exist in any binary log to replicate to all replicas. See Section 16.1.4.4, “Verifying Replication of Anonymous Transactions” for one method of checking that all anonymous transactions have replicated to all servers.

  6. If you use binary logs for anything other than replication, for example point-in-time backup or restore, wait until you no longer need the old binary logs that contain GTID transactions.

    For instance, after step 5 has completed, you can execute FLUSH LOGS on the server where you are taking the backup. Then either explicitly take a backup or wait for the next iteration of any periodic backup routine you may have set up.

    Ideally, wait for the server to purge all binary logs that existed when step 5 was completed. Also wait for any backup taken before step 5 to expire.

    Important

    This is the one important point during this procedure. It is vital to understand that binary logs containing GTID transactions cannot be used after the next step. Before proceeding, you must be sure that GTID transactions do not exist anywhere in the topology.

  7. On each server, execute:

    SET @@GLOBAL.GTID_MODE = OFF;
  8. On each server, set gtid_mode=OFF in my.cnf.

    If you want to set enforce_gtid_consistency=OFF, you can do so now. After setting it, you should add enforce_gtid_consistency=OFF to your configuration file.

If you want to downgrade to an earlier version of MySQL, you can do so now, using the normal downgrade procedure.

16.1.4.4 Verifying Replication of Anonymous Transactions

This section explains how to monitor a replication topology and verify that all anonymous transactions have been replicated. This is helpful when changing the replication mode online as you can verify that it is safe to change to GTID transactions.

There are several possible ways to wait for transactions to replicate:

The simplest method, which works regardless of your topology but relies on timing, is as follows: if you are sure that the replica never lags more than N seconds, just wait for a bit more than N seconds, or wait for a day, or for whatever time period you consider safe for your deployment.

A safer method, in the sense that it does not depend on timing, applies if you have only a source with one or more replicas. In that case, do the following:

  1. On the source, execute:

    SHOW MASTER STATUS;

    Note down the values in the File and Position columns.

  2. On every replica, use the file and position information from the source to execute:

    SELECT MASTER_POS_WAIT(file, position);
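
    For example, if SHOW MASTER STATUS on the source reported the hypothetical values File: source-bin.000123 and Position: 45678, each replica would run the following, which blocks until the replica has applied events up to that position:

    SELECT MASTER_POS_WAIT('source-bin.000123', 45678);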

If you have a source and multiple levels of replicas, or in other words you have replicas of replicas, repeat step 2 on each level, starting from the source, then all the direct replicas, then all the replicas of replicas, and so on.

If you use a circular replication topology where multiple servers may have write clients, perform step 2 for each source-replica connection, until you have completed the full circle. Repeat the whole process so that you do the full circle twice.

For example, suppose you have three servers A, B, and C, replicating in a circle so that A -> B -> C -> A. The procedure is then:

  • Do step 1 on A and step 2 on B.

  • Do step 1 on B and step 2 on C.

  • Do step 1 on C and step 2 on A.

  • Do step 1 on A and step 2 on B.

  • Do step 1 on B and step 2 on C.

  • Do step 1 on C and step 2 on A.

16.1.5 MySQL Multi-Source Replication

MySQL multi-source replication enables a replica to receive transactions from multiple immediate sources in parallel. In a multi-source replication topology, a replica creates a replication channel for each source that it should receive transactions from. For more information on how replication channels function, see Section 16.2.2, “Replication Channels”.

You might choose to implement multi-source replication to achieve goals like these:

  • Backing up multiple servers to a single server.

  • Merging table shards.

  • Consolidating data from multiple servers to a single server.

Multi-source replication does not implement any conflict detection or resolution when applying transactions, and those tasks are left to the application if required.

Note

Each channel on a multi-source replica must replicate from a different source. You cannot set up multiple replication channels from a single replica to a single source. This is because the server IDs of replicas must be unique in a replication topology. The source distinguishes replicas only by their server IDs, not by the names of the replication channels, so it cannot recognize different replication channels from the same replica.

A multi-source replica can also be set up as a multi-threaded replica, by setting the slave_parallel_workers system variable to a value greater than 0. When you do this on a multi-source replica, each channel on the replica has the specified number of applier threads, plus a coordinator thread to manage them. You cannot configure the number of applier threads for individual channels.

This section provides tutorials on how to configure sources and replicas for multi-source replication, how to start, stop and reset multi-source replicas, and how to monitor multi-source replication.

16.1.5.1 Configuring Multi-Source Replication

A multi-source replication topology requires at least two sources and one replica configured. In these tutorials, we assume you have two sources source1 and source2, and a replica replicahost. The replica replicates one database from each of the sources, db1 from source1 and db2 from source2.

Sources in a multi-source replication topology can be configured to use either GTID-based replication, or binary log position-based replication. See Section 16.1.3.4, “Setting Up Replication Using GTIDs” for how to configure a source using GTID-based replication. See Section 16.1.2.1, “Setting the Replication Source Configuration” for how to configure a source using file position based replication.

Replicas in a multi-source replication topology require TABLE repositories for the connection metadata repository and applier metadata repository, as specified by the master_info_repository and relay_log_info_repository system variables. Multi-source replication is not compatible with FILE repositories.

To modify an existing replica that is using FILE repositories for the replication metadata repositories to use TABLE repositories, you can convert the existing repositories dynamically by using the mysql client to issue the following statements on the replica:

mysql> STOP SLAVE;
mysql> SET GLOBAL master_info_repository = 'TABLE';
mysql> SET GLOBAL relay_log_info_repository = 'TABLE';
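
These SET GLOBAL settings do not survive a server restart. Assuming you also want the TABLE repositories to be used after a restart, a minimal sketch of the corresponding entries in the replica's my.cnf file is:

[mysqld]
master_info_repository=TABLE
relay_log_info_repository=TABLE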

Create a suitable user account on all the replication source servers that the replica can use to connect. You can use the same account on all the sources, or a different account on each. If you create an account solely for the purposes of replication, that account needs only the REPLICATION SLAVE privilege. For example, to set up a new user, ted, that can connect from the replica replicahost, use the mysql client to issue these statements on each of the sources:

mysql> CREATE USER 'ted'@'replicahost' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'ted'@'replicahost';

For more details, see Section 16.1.2.2, “Creating a User for Replication”.

16.1.5.2 Provisioning a Multi-Source Replica for GTID-Based Replication

If the sources in the multi-source replication topology have existing data, it can save time to provision the replica with the relevant data before starting replication. In a multi-source replication topology, copying the data directory cannot be used to provision the replica with data from all of the sources, and you might also want to replicate only specific databases from each source. The best strategy for provisioning such a replica is therefore to use mysqldump to create an appropriate dump file on each source, then use the mysql client to import the dump file on the replica.

If you are using GTID-based replication, you need to pay attention to the SET @@GLOBAL.gtid_purged statement that mysqldump places in the dump output. This statement transfers the GTIDs for the transactions executed on the source to the replica, and the replica requires this information. However, for any case more complex than provisioning one new, empty replica from one source, you need to check what effect the statement has in the replica's version of MySQL, and handle the statement accordingly. The following guidance summarizes suitable actions, but for more details, see the mysqldump documentation.

In MySQL 5.6 and 5.7, the SET @@GLOBAL.gtid_purged statement written by mysqldump replaces the value of gtid_purged on the replica. Also, in those releases, that value can be changed only when the replica's record of transactions with GTIDs (the gtid_executed set) is empty. In a multi-source replication topology, you must therefore remove the SET @@GLOBAL.gtid_purged statement from the dump output before replaying the dump files, because you cannot apply a second or subsequent dump file including this statement. As an alternative to removing the SET @@GLOBAL.gtid_purged statement, if you are provisioning the replica with two partial dumps from the same source, and the GTID set in the second dump is the same as the first (so no new transactions have been executed on the source in between the dumps), you can set mysqldump's --set-gtid-purged option to OFF when you output the second dump file, to omit the statement.

For MySQL 5.6 and 5.7, these limitations mean all the dump files from the sources must be applied in a single operation on a replica with an empty gtid_executed set. You can clear a replica's GTID execution history by issuing RESET MASTER on the replica, but if you have other, wanted transactions with GTIDs on the replica, choose an alternative method of provisioning from those described in Section 16.1.3.5, “Using GTIDs for Failover and Scaleout”.

In the following provisioning example, we assume that the SET @@GLOBAL.gtid_purged statement needs to be removed from the files and handled manually. We also assume that there are no wanted transactions with GTIDs on the replica before provisioning starts.

  1. To create dump files for a database named db1 on source1 and a database named db2 on source2, run mysqldump for source1 as follows:

    mysqldump -u<user> -p<password> --single-transaction --triggers --routines --set-gtid-purged=ON --databases db1 > dumpM1.sql 
    

    Then run mysqldump for source2 as follows:

    mysqldump -u<user> -p<password> --single-transaction --triggers --routines --set-gtid-purged=ON --databases db2 > dumpM2.sql 
    
  2. Record the gtid_purged value that mysqldump added to each of the dump files. For example, for dump files created on MySQL 5.6 or 5.7, you can extract the value like this:

    cat dumpM1.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d$'\''
    cat dumpM2.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d$'\'' 
    

    The result in each case should be a GTID set, for example:

    source1:   2174B383-5441-11E8-B90A-C80AA9429562:1-1029
    source2:   224DA167-0C0C-11E8-8442-00059A3C7B00:1-2695
    
  3. Remove the line from each dump file that contains the SET @@GLOBAL.gtid_purged statement. For example:

    sed '/GTID_PURGED/d' dumpM1.sql > dumpM1_nopurge.sql
    sed '/GTID_PURGED/d' dumpM2.sql > dumpM2_nopurge.sql 
    
  4. Use the mysql client to import each edited dump file into the replica. For example:

    mysql -u<user> -p<password> < dumpM1_nopurge.sql
    mysql -u<user> -p<password> < dumpM2_nopurge.sql 
    
  5. On the replica, issue RESET MASTER to clear the GTID execution history (assuming, as explained above, that all the dump files have been imported and that there are no wanted transactions with GTIDs on the replica). Then issue a SET @@GLOBAL.gtid_purged statement to set the gtid_purged value to the union of all the GTID sets from all the dump files, as you recorded in Step 2. For example:

    mysql> RESET MASTER;
    mysql> SET @@GLOBAL.gtid_purged = "2174B383-5441-11E8-B90A-C80AA9429562:1-1029, 224DA167-0C0C-11E8-8442-00059A3C7B00:1-2695";
    

    If there are, or might be, overlapping transactions between the GTID sets in the dump files, you can use the stored functions described in Section 16.1.3.7, “Stored Function Examples to Manipulate GTIDs” to check this beforehand and to calculate the union of all the GTID sets.

16.1.5.3 Adding GTID-Based Sources to a Multi-Source Replica

These steps assume you have enabled GTIDs for transactions on the replication source servers using gtid_mode=ON, created a replication user, ensured that the replica is using TABLE based replication metadata repositories, and provisioned the replica with data from the sources if appropriate.

Use the CHANGE MASTER TO statement to configure a replication channel for each source on the replica (see Section 16.2.2, “Replication Channels”). The FOR CHANNEL clause is used to specify the channel. For GTID-based replication, GTID auto-positioning is used to synchronize with the source (see Section 16.1.3.3, “GTID Auto-Positioning”). The MASTER_AUTO_POSITION option is set to specify the use of auto-positioning.

For example, to add source1 and source2 as sources to the replica, use the mysql client to issue the CHANGE MASTER TO statement twice on the replica, like this:

mysql> CHANGE MASTER TO MASTER_HOST="source1", MASTER_USER="ted", \
MASTER_PASSWORD="password", MASTER_AUTO_POSITION=1 FOR CHANNEL "source_1";
mysql> CHANGE MASTER TO MASTER_HOST="source2", MASTER_USER="ted", \
MASTER_PASSWORD="password", MASTER_AUTO_POSITION=1 FOR CHANNEL "source_2";

For the full syntax of the CHANGE MASTER TO statement and other available options, see Section 13.4.2.1, “CHANGE MASTER TO Statement”.

16.1.5.4 Adding a Binary Log Based Source to a Multi-Source Replica

These steps assume that you have enabled binary logging on the replication source server using --log-bin, that the replica is using TABLE based replication metadata repositories, and that you have created a replication user and noted the current binary log position. You need to know the source's current binary log file name and position (the MASTER_LOG_FILE and MASTER_LOG_POS values used in the CHANGE MASTER TO statement).

Use the CHANGE MASTER TO statement to configure a replication channel for each source on the replica (see Section 16.2.2, “Replication Channels”). The FOR CHANNEL clause is used to specify the channel. For example, to add source1 and source2 as sources to the replica, use the mysql client to issue the CHANGE MASTER TO statement twice on the replica, like this:

mysql> CHANGE MASTER TO MASTER_HOST="source1", MASTER_USER="ted", MASTER_PASSWORD="password", \
MASTER_LOG_FILE='source1-bin.000006', MASTER_LOG_POS=628 FOR CHANNEL "source_1";
mysql> CHANGE MASTER TO MASTER_HOST="source2", MASTER_USER="ted", MASTER_PASSWORD="password", \
MASTER_LOG_FILE='source2-bin.000018', MASTER_LOG_POS=104 FOR CHANNEL "source_2";

For the full syntax of the CHANGE MASTER TO statement and other available options, see Section 13.4.2.1, “CHANGE MASTER TO Statement”.

16.1.5.5 Starting Multi-Source Replicas

Once you have added channels for all of the sources, issue a START SLAVE statement to start replication. When you have enabled multiple channels on a replica, you can choose to either start all channels, or select a specific channel to start. For example, to start the two channels separately, use the mysql client to issue the following statements:

mysql> START SLAVE FOR CHANNEL "source_1";
mysql> START SLAVE FOR CHANNEL "source_2";

For the full syntax of the START SLAVE command and other available options, see Section 13.4.2.5, “START SLAVE Statement”.

To verify that both channels have started and are operating correctly, you can issue SHOW SLAVE STATUS statements on the replica, for example:

mysql> SHOW SLAVE STATUS FOR CHANNEL "source_1"\G
mysql> SHOW SLAVE STATUS FOR CHANNEL "source_2"\G

16.1.5.6 Stopping Multi-Source Replicas

The STOP SLAVE statement can be used to stop a multi-source replica. By default, if you use the STOP SLAVE statement on a multi-source replica all channels are stopped. Optionally, use the FOR CHANNEL channel clause to stop only a specific channel.

  • To stop all currently configured replication channels:

    STOP SLAVE;
    
  • To stop only a named channel, use a FOR CHANNEL channel clause:

    STOP SLAVE FOR CHANNEL "source_1";
    

For the full syntax of the STOP SLAVE command and other available options, see Section 13.4.2.6, “STOP SLAVE Statement”.

16.1.5.7 Resetting Multi-Source Replicas

The RESET SLAVE statement can be used to reset a multi-source replica. By default, if you use the RESET SLAVE statement on a multi-source replica all channels are reset. Optionally, use the FOR CHANNEL channel clause to reset only a specific channel.

  • To reset all currently configured replication channels:

    RESET SLAVE;
    
  • To reset only a named channel, use a FOR CHANNEL channel clause:

    RESET SLAVE FOR CHANNEL "source_1";
    

For GTID-based replication, note that RESET SLAVE has no effect on the replica's GTID execution history. If you want to clear this, issue RESET MASTER on the replica.

RESET SLAVE makes the replica forget its replication position, and clears the relay log, but it does not change any replication connection parameters, such as the source's host name. If you want to remove these for a channel, issue RESET SLAVE ALL.
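
For example, to remove the connection parameters for only the source_1 channel used in the earlier examples:

    RESET SLAVE ALL FOR CHANNEL "source_1";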

For the full syntax of the RESET SLAVE command and other available options, see Section 13.4.2.4, “RESET SLAVE Statement”.

16.1.5.8 Multi-Source Replication Monitoring

To monitor the status of replication channels the following options exist:

  • Using the replication Performance Schema tables. The first column of these tables is Channel_Name. This enables you to write complex queries based on Channel_Name as a key. See Section 24.12.11, “Performance Schema Replication Tables”.

  • Using SHOW SLAVE STATUS FOR CHANNEL channel. By default, if the FOR CHANNEL channel clause is not used, this statement shows the replica status for all channels with one row per channel. The identifier Channel_name is added as a column in the result set. If a FOR CHANNEL channel clause is provided, the results show the status of only the named replication channel.

Note

The SHOW VARIABLES statement does not work with multiple replication channels. The information that was available through these variables has been migrated to the replication Performance Schema tables. Using a SHOW VARIABLES statement in a topology with multiple channels shows the status of only the default channel.

16.1.5.8.1 Monitoring Channels Using Performance Schema Tables

This section explains how to use the replication Performance Schema tables to monitor channels. You can choose to monitor all channels, or a subset of the existing channels.

To monitor the connection status of all channels:

mysql> SELECT * FROM replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: source_1
GROUP_NAME:
SOURCE_UUID: 046e41f8-a223-11e4-a975-0811960cc264
THREAD_ID: 24
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 046e41f8-a223-11e4-a975-0811960cc264:4-37
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
CHANNEL_NAME: source_2
GROUP_NAME:
SOURCE_UUID: 7475e474-a223-11e4-a978-0811960cc264
THREAD_ID: 26
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 7475e474-a223-11e4-a978-0811960cc264:4-6
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
2 rows in set (0.00 sec)
	    

In the above output there are two channels enabled, and as shown by the CHANNEL_NAME field they are called source_1 and source_2.

The addition of the CHANNEL_NAME field enables you to query the Performance Schema tables for a specific channel. To monitor the connection status of a named channel, use a WHERE CHANNEL_NAME=channel clause:

mysql> SELECT * FROM replication_connection_status WHERE CHANNEL_NAME='source_1'\G
*************************** 1. row ***************************
CHANNEL_NAME: source_1
GROUP_NAME:
SOURCE_UUID: 046e41f8-a223-11e4-a975-0811960cc264
THREAD_ID: 24
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 046e41f8-a223-11e4-a975-0811960cc264:4-37
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

Similarly, the WHERE CHANNEL_NAME=channel clause can be used to monitor the other replication Performance Schema tables for a specific channel. For more information, see Section 24.12.11, “Performance Schema Replication Tables”.
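
For example (a sketch, assuming performance_schema is the current database as in the preceding examples), to check the applier status for the same channel:

mysql> SELECT WORKER_ID, SERVICE_STATE, LAST_ERROR_MESSAGE
       FROM replication_applier_status_by_worker
       WHERE CHANNEL_NAME='source_1'\G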

16.1.6 Replication and Binary Logging Options and Variables

The following sections contain information about mysqld options and server variables that are used in replication and for controlling the binary log. Options and variables for use on sources and replicas are covered separately, as are options and variables relating to binary logging and global transaction identifiers (GTIDs). A set of quick-reference tables providing basic information about these options and variables is also included.

Of particular importance is the server_id system variable.

Command-Line Format --server-id=#
System Variable server_id
Scope Global
Dynamic Yes
Type Integer
Default Value 0
Minimum Value 0
Maximum Value 4294967295

This variable specifies the server ID. In MySQL 5.7, server_id must be specified if binary logging is enabled, otherwise the server is not allowed to start.

server_id is set to 0 by default. On a replication source server and each replica, you must specify server_id to establish a unique replication ID in the range from 1 to 2^32 − 1. Unique means that each ID must be different from every other ID in use by any other source or replica in the replication topology. For additional information, see Section 16.1.6.2, “Replication Source Options and Variables”, and Section 16.1.6.3, “Replica Server Options and Variables”.

If the server ID is set to 0, binary logging takes place, but a source with a server ID of 0 refuses any connections from replicas, and a replica with a server ID of 0 refuses to connect to a source. Note that although you can change the server ID dynamically to a nonzero value, doing so does not enable replication to start immediately. You must change the server ID and then restart the server to initialize the replica.

For more information, see Section 16.1.2.5.1, “Setting the Replica Configuration”.
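
For example (the ID shown is illustrative; it must be unique within your topology), you can change the server ID at runtime and then make it permanent in the option file before restarting the server:

    SET GLOBAL server_id = 21;

To persist the value, also add server-id=21 under the [mysqld] section of my.cnf.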

server_uuid

In MySQL 5.7, the server generates a true UUID in addition to the server_id value supplied by the user. This is available as the global, read-only server_uuid system variable.

Note

The presence of the server_uuid system variable in MySQL 5.7 does not change the requirement for setting a unique server_id value for each MySQL server as part of preparing and running MySQL replication, as described earlier in this section.

System Variable server_uuid
Scope Global
Dynamic No
Type String

When starting, the MySQL server automatically obtains a UUID as follows:

  1. Attempt to read and use the UUID written in the file data_dir/auto.cnf (where data_dir is the server's data directory).

  2. If data_dir/auto.cnf is not found, generate a new UUID and save it to this file, creating the file if necessary.

The auto.cnf file has a format similar to that used for my.cnf or my.ini files. In MySQL 5.7, auto.cnf has only a single [auto] section containing a single server_uuid setting and value; the file's contents appear similar to what is shown here:

[auto]
server_uuid=8a94f357-aab4-11df-86ab-c80aa9429562
Important

The auto.cnf file is automatically generated; do not attempt to write or modify this file.

When using MySQL replication, sources and replicas know each other's UUIDs. The value of a replica's UUID can be seen in the output of SHOW SLAVE HOSTS. Once START SLAVE has been executed, the value of the source's UUID is available on the replica in the output of SHOW SLAVE STATUS.

Note

Issuing a STOP SLAVE or RESET SLAVE statement does not reset the source's UUID as used on the replica.

A server's server_uuid is also used in GTIDs for transactions originating on that server. For more information, see Section 16.1.3, “Replication with Global Transaction Identifiers”.

When starting, the replication I/O thread generates an error and aborts if its source's UUID is equal to its own unless the --replicate-same-server-id option has been set. In addition, the replication I/O thread generates a warning if either of the following is true:

16.1.6.1 Replication and Binary Logging Option and Variable Reference

The following two sections provide basic information about the MySQL command-line options and system variables applicable to replication and the binary log.

Replication Options and Variables

The command-line options and system variables in the following list relate to replication source servers and replicas. Section 16.1.6.2, “Replication Source Options and Variables” provides more detailed information about options and variables relating to replication source servers. For more information about options and variables relating to replicas, see Section 16.1.6.3, “Replica Server Options and Variables”.

For a listing of all command-line options, system variables, and status variables used with mysqld, see Section 5.1.3, “Server Option, System Variable, and Status Variable Reference”.

Binary Logging Options and Variables

The command-line options and system variables in the following list relate to the binary log. Section 16.1.6.4, “Binary Logging Options and Variables”, provides more detailed information about options and variables relating to binary logging. For additional general information about the binary log, see Section 5.4.4, “The Binary Log”.

For a listing of all command-line options, system and status variables used with mysqld, see Section 5.1.3, “Server Option, System Variable, and Status Variable Reference”.

16.1.6.2 Replication Source Options and Variables

This section describes the server options and system variables that you can use on replication source servers. You can specify the options either on the command line or in an option file. You can specify system variable values using SET.

On the source and each replica, you must set the server_id system variable to establish a unique replication ID. For each server, you should pick a unique positive integer in the range from 1 to 2^32 − 1, and each ID must be different from every other ID in use by any other source or replica in the replication topology. Example: server-id=3.

For options used on the source for controlling binary logging, see Section 16.1.6.4, “Binary Logging Options and Variables”.

Startup Options for Replication Source Servers

The following list describes startup options for controlling replication source servers. Replication-related system variables are discussed later in this section.

System Variables Used on Replication Source Servers

The following system variables are used to control sources:

  • auto_increment_increment

    Command-Line Format --auto-increment-increment=#
    System Variable auto_increment_increment
    Scope Global, Session
    Dynamic Yes
    Type Integer
    Default Value 1
    Minimum Value 1
    Maximum Value 65535

    auto_increment_increment and auto_increment_offset are intended for use with source-to-source replication, and can be used to control the operation of AUTO_INCREMENT columns. Both variables have global and session values, and each can assume an integer value between 1 and 65,535 inclusive. Setting the value of either of these two variables to 0 causes its value to be set to 1 instead. Attempting to set the value of either of these two variables to an integer greater than 65,535 or less than 0 causes its value to be set to 65,535 instead. Attempting to set the value of auto_increment_increment or auto_increment_offset to a noninteger value produces an error, and the actual value of the variable remains unchanged.

    Note

    auto_increment_increment is also supported for use with NDB tables.

    When Group Replication is started on a server, the value of auto_increment_increment is changed to the value of group_replication_auto_increment_increment, which defaults to 7, and the value of auto_increment_offset is changed to the server ID. The changes are reverted when Group Replication is stopped. These changes are only made and reverted if auto_increment_increment and auto_increment_offset each have their default value of 1. If their values have already been modified from the default, Group Replication does not alter them.

    auto_increment_increment and auto_increment_offset affect AUTO_INCREMENT column behavior as follows:

    • auto_increment_increment controls the interval between successive column values. For example:

      mysql> SHOW VARIABLES LIKE 'auto_inc%';
      +--------------------------+-------+
      | Variable_name            | Value |
      +--------------------------+-------+
      | auto_increment_increment | 1     |
      | auto_increment_offset    | 1     |
      +--------------------------+-------+
      2 rows in set (0.00 sec)
      
      mysql> CREATE TABLE autoinc1
          -> (col INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
        Query OK, 0 rows affected (0.04 sec)
      
      mysql> SET @@auto_increment_increment=10;
      Query OK, 0 rows affected (0.00 sec)
      
      mysql> SHOW VARIABLES LIKE 'auto_inc%';
      +--------------------------+-------+
      | Variable_name            | Value |
      +--------------------------+-------+
      | auto_increment_increment | 10    |
      | auto_increment_offset    | 1     |
      +--------------------------+-------+
      2 rows in set (0.01 sec)
      
      mysql> INSERT INTO autoinc1 VALUES (NULL), (NULL), (NULL), (NULL);
      Query OK, 4 rows affected (0.00 sec)
      Records: 4  Duplicates: 0  Warnings: 0
      
      mysql> SELECT col FROM autoinc1;
      +-----+
      | col |
      +-----+
      |   1 |
      |  11 |
      |  21 |
      |  31 |
      +-----+
      4 rows in set (0.00 sec)
      
    • auto_increment_offset determines the starting point for the AUTO_INCREMENT column value. Consider the following, assuming that these statements are executed during the same session as the example given in the description for auto_increment_increment:

      mysql> SET @@auto_increment_offset=5;
      Query OK, 0 rows affected (0.00 sec)
      
      mysql> SHOW VARIABLES LIKE 'auto_inc%';
      +--------------------------+-------+
      | Variable_name            | Value |
      +--------------------------+-------+
      | auto_increment_increment | 10    |
      | auto_increment_offset    | 5     |
      +--------------------------+-------+
      2 rows in set (0.00 sec)
      
      mysql> CREATE TABLE autoinc2
          -> (col INT NOT NULL AUTO_INCREMENT PRIMARY KEY);
      Query OK, 0 rows affected (0.06 sec)
      
      mysql> INSERT INTO autoinc2 VALUES (NULL), (NULL), (NULL), (NULL);
      Query OK, 4 rows affected (0.00 sec)
      Records: 4  Duplicates: 0  Warnings: 0
      
      mysql> SELECT col FROM autoinc2;
      +-----+
      | col |
      +-----+
      |   5 |
      |  15 |
      |  25 |
      |  35 |
      +-----+
      4 rows in set (0.02 sec)
      

      When the value of auto_increment_offset is greater than that of auto_increment_increment, the value of auto_increment_offset is ignored.

    If either of these variables is changed, and then new rows inserted into a table containing an AUTO_INCREMENT column, the results may seem counterintuitive because the series of AUTO_INCREMENT values is calculated without regard to any values already present in the column, and the next value inserted is the least value in the series that is greater than the maximum existing value in the AUTO_INCREMENT column. The series is calculated like this:

    auto_increment_offset + N × auto_increment_increment

    where N is a positive integer value in the series [1, 2, 3, ...]. For example:

    mysql> SHOW VARIABLES LIKE 'auto_inc%';
    +--------------------------+-------+
    | Variable_name            | Value |
    +--------------------------+-------+
    | auto_increment_increment | 10    |
    | auto_increment_offset    | 5     |
    +--------------------------+-------+
    2 rows in set (0.00 sec)
    
    mysql> SELECT col FROM autoinc1;
    +-----+
    | col |
    +-----+
    |   1 |
    |  11 |
    |  21 |
    |  31 |
    +-----+
    4 rows in set (0.00 sec)
    
    mysql> INSERT INTO autoinc1 VALUES (NULL), (NULL), (NULL), (NULL);
    Query OK, 4 rows affected (0.00 sec)
    Records: 4  Duplicates: 0  Warnings: 0
    
    mysql> SELECT col FROM autoinc1;
    +-----+
    | col |
    +-----+
    |   1 |
    |  11 |
    |  21 |
    |  31 |
    |  35 |
    |  45 |
    |  55 |
    |  65 |
    +-----+
    8 rows in set (0.00 sec)
    

    The values shown for auto_increment_increment and auto_increment_offset generate the series 5 + N × 10, that is, [5, 15, 25, 35, 45, ...]. The highest value present in the col column prior to the INSERT is 31, and the next available value in the AUTO_INCREMENT series is 35, so the inserted values for col begin at that point and the results are as shown for the SELECT query.

    It is not possible to restrict the effects of these two variables to a single table; these variables control the behavior of all AUTO_INCREMENT columns in all tables on the MySQL server. If the global value of either variable is set, its effects persist until the global value is changed or overridden by setting the session value, or until mysqld is restarted. If the local value is set, the new value affects AUTO_INCREMENT columns for all tables into which new rows are inserted by the current user for the duration of the session, unless the values are changed during that session.

    The default value of auto_increment_increment is 1. See Section 16.4.1.1, “Replication and AUTO_INCREMENT”.

  • auto_increment_offset

    Command-Line Format --auto-increment-offset=#
    System Variable auto_increment_offset
    Scope Global, Session
    Dynamic Yes
    Type Integer
    Default Value 1
    Minimum Value 1
    Maximum Value 65535

    This variable has a default value of 1. If it is left with its default value, and Group Replication is started on the server, it is changed to the server ID. For more information, see the description for auto_increment_increment.

    Note

    auto_increment_offset is also supported for use with NDB tables.

  • rpl_semi_sync_master_enabled

    Command-Line Format --rpl-semi-sync-master-enabled[={OFF|ON}]
    System Variable rpl_semi_sync_master_enabled
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    Controls whether semisynchronous replication is enabled on the source. To enable or disable the plugin, set this variable to ON or OFF (or 1 or 0), respectively. The default is OFF.

    This variable is available only if the source-side semisynchronous replication plugin is installed.
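
    As a sketch (the .so suffix assumes a Unix build; on Windows the plugin library is a .dll), the plugin can be installed and semisynchronous replication enabled on the source at runtime like this:

    INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
    SET GLOBAL rpl_semi_sync_master_enabled = ON;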

  • rpl_semi_sync_master_timeout

    Command-Line Format --rpl-semi-sync-master-timeout=#
    System Variable rpl_semi_sync_master_timeout
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 10000

    A value in milliseconds that controls how long the source waits on a commit for acknowledgment from a replica before timing out and reverting to asynchronous replication. The default value is 10000 (10 seconds).

    This variable is available only if the source-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_master_trace_level

    Command-Line Format --rpl-semi-sync-master-trace-level=#
    System Variable rpl_semi_sync_master_trace_level
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 32

    The semisynchronous replication debug trace level on the source. Four levels are defined:

    • 1 = general level (for example, time function failures)

    • 16 = detail level (more verbose information)

    • 32 = net wait level (more information about network waits)

    • 64 = function level (information about function entry and exit)

    This variable is available only if the source-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_master_wait_for_slave_count

    Command-Line Format --rpl-semi-sync-master-wait-for-slave-count=#
    System Variable rpl_semi_sync_master_wait_for_slave_count
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 1
    Minimum Value 1
    Maximum Value 65535

    The number of replica acknowledgments the source must receive per transaction before proceeding. By default rpl_semi_sync_master_wait_for_slave_count is 1, meaning that semisynchronous replication proceeds after receiving a single replica acknowledgment. Performance is best for small values of this variable.

    For example, if rpl_semi_sync_master_wait_for_slave_count is 2, then 2 replicas must acknowledge receipt of the transaction before the timeout period configured by rpl_semi_sync_master_timeout for semisynchronous replication to proceed. If fewer replicas acknowledge receipt of the transaction during the timeout period, the source reverts to normal replication.

    Note

    This behavior also depends on rpl_semi_sync_master_wait_no_slave

    This variable is available only if the source-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_master_wait_no_slave

    Command-Line Format --rpl-semi-sync-master-wait-no-slave[={OFF|ON}]
    System Variable rpl_semi_sync_master_wait_no_slave
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value ON

    Controls whether the source waits for the timeout period configured by rpl_semi_sync_master_timeout to expire, even if the replica count drops to less than the number of replicas configured by rpl_semi_sync_master_wait_for_slave_count during the timeout period.

    When the value of rpl_semi_sync_master_wait_no_slave is ON (the default), it is permissible for the replica count to drop to less than rpl_semi_sync_master_wait_for_slave_count during the timeout period. As long as enough replicas acknowledge the transaction before the timeout period expires, semisynchronous replication continues.

    When the value of rpl_semi_sync_master_wait_no_slave is OFF, if the replica count drops to less than the number configured in rpl_semi_sync_master_wait_for_slave_count at any time during the timeout period configured by rpl_semi_sync_master_timeout, the source reverts to normal replication.

    This variable is available only if the source-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_master_wait_point

    Command-Line Format --rpl-semi-sync-master-wait-point=value
    System Variable rpl_semi_sync_master_wait_point
    Scope Global
    Dynamic Yes
    Type Enumeration
    Default Value AFTER_SYNC
    Valid Values

    AFTER_SYNC

    AFTER_COMMIT

    This variable controls the point at which a semisynchronous source waits for replica acknowledgment of transaction receipt before returning a status to the client that committed the transaction. These values are permitted:

    • AFTER_SYNC (the default): The source writes each transaction to its binary log and the replica, and syncs the binary log to disk. The source waits for replica acknowledgment of transaction receipt after the sync. Upon receiving acknowledgment, the source commits the transaction to the storage engine and returns a result to the client, which then can proceed.

    • AFTER_COMMIT: The source writes each transaction to its binary log and the replica, syncs the binary log, and commits the transaction to the storage engine. The source waits for replica acknowledgment of transaction receipt after the commit. Upon receiving acknowledgment, the source returns a result to the client, which then can proceed.

    The replication characteristics of these settings differ as follows:

    • With AFTER_SYNC, all clients see the committed transaction at the same time: After it has been acknowledged by the replica and committed to the storage engine on the source. Thus, all clients see the same data on the source.

      In the event of source failure, all transactions committed on the source have been replicated to the replica (saved to its relay log). An unexpected exit of the source and failover to the replica is lossless because the replica is up to date. Note, however, that the source cannot be restarted in this scenario and must be discarded, because its binary log might contain uncommitted transactions that would cause a conflict with the replica when externalized after binary log recovery.

    • With AFTER_COMMIT, the client issuing the transaction gets a return status only after the server commits to the storage engine and receives replica acknowledgment. After the commit and before replica acknowledgment, other clients can see the committed transaction before the committing client.

      If something goes wrong such that the replica does not process the transaction, then in the event of an unexpected source exit and failover to the replica, it is possible for such clients to see a loss of data relative to what they saw on the source.

    This variable is available only if the source-side semisynchronous replication plugin is installed.

    rpl_semi_sync_master_wait_point was added in MySQL 5.7.2. For older versions, semisynchronous source behavior is equivalent to a setting of AFTER_COMMIT.

    This change introduces a version compatibility constraint because it increments the semisynchronous interface version: Servers for MySQL 5.7.2 and up do not work with semisynchronous replication plugins from older versions, nor do servers from older versions work with semisynchronous replication plugins for MySQL 5.7.2 and up.

16.1.6.3 Replica Server Options and Variables

This section explains the server options and system variables that apply to replicas and contains the following:

Specify the options either on the command line or in an option file. Many of the options can be set while the server is running by using the CHANGE MASTER TO statement. Specify system variable values using SET.

Server ID.  On the source and each replica, you must set the server_id system variable to establish a unique replication ID in the range from 1 to 2^32 − 1. Unique means that each ID must be different from every other ID in use by any other source or replica in the replication topology. Example my.cnf file:

[mysqld]
server-id=3
Startup Options for Replicas

This section explains startup options for controlling replica servers. Many of these options can be set while the server is running by using the CHANGE MASTER TO statement. Others, such as the --replicate-* options, can be set only when the replica server starts. Replication-related system variables are discussed later in this section.

  • --log-warnings[=level]

    Command-Line Format --log-warnings[=#]
    Deprecated Yes
    System Variable log_warnings
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 2
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295
    Note

    The log_error_verbosity system variable is preferred over, and should be used instead of, the --log-warnings option or log_warnings system variable. For more information, see the descriptions of log_error_verbosity and log_warnings. The --log-warnings command-line option and log_warnings system variable are deprecated; expect them to be removed in a future MySQL release.

    Causes the server to record more messages to the error log about what it is doing. With respect to replication, the server generates warnings that it succeeded in reconnecting after a network or connection failure, and provides information about how each replication thread started. This variable is set to 2 by default. To disable it, set it to 0. The server logs messages about statements that are unsafe for statement-based logging if the value is greater than 0. Aborted connections and access-denied errors for new connection attempts are logged if the value is greater than 1. See Section B.3.2.9, “Communication Errors and Aborted Connections”.

    Note

    The effects of this option are not limited to replication. It affects diagnostic messages across a spectrum of server activities.

  • --master-info-file=file_name

    Command-Line Format --master-info-file=file_name
    Type File name
    Default Value master.info

    The name to use for the file in which the replica records information about the source. The default name is master.info in the data directory. For information about the format of this file, see Section 16.2.4.2, “Replication Metadata Repositories”.

  • --master-retry-count=count

    Command-Line Format --master-retry-count=#
    Deprecated Yes
    Type Integer
    Default Value 86400
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    The number of times that the replica tries to reconnect to the source before giving up. The default value is 86400 times. A value of 0 means infinite, and the replica attempts to connect forever. Reconnection attempts are triggered when the replica reaches its connection timeout (specified by the slave_net_timeout system variable) without receiving data or a heartbeat signal from the source. Reconnection is attempted at intervals set by the MASTER_CONNECT_RETRY option of the CHANGE MASTER TO statement (which defaults to every 60 seconds).

    This option is deprecated; expect it to be removed in a future MySQL release. Use the MASTER_RETRY_COUNT option of the CHANGE MASTER TO statement instead.

  • --max-relay-log-size=size

    Command-Line Format --max-relay-log-size=#
    System Variable max_relay_log_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 1073741824

    The size at which the server rotates relay log files automatically. If this value is nonzero, the relay log is rotated automatically when its size exceeds this value. If this value is zero (the default), the size at which relay log rotation occurs is determined by the value of max_binlog_size. For more information, see Section 16.2.4.1, “The Relay Log”.

  • --relay-log-purge={0|1}

    Command-Line Format --relay-log-purge[={OFF|ON}]
    System Variable relay_log_purge
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value ON

    Disable or enable automatic purging of relay logs as soon as they are no longer needed. The default value is 1 (enabled). This is a global variable that can be changed dynamically with SET GLOBAL relay_log_purge = N. Disabling purging of relay logs when enabling the --relay-log-recovery option puts data consistency at risk.

  • --relay-log-space-limit=size

    Command-Line Format --relay-log-space-limit=#
    System Variable relay_log_space_limit
    Scope Global
    Dynamic No
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    This option places an upper limit on the total size in bytes of all relay logs on the replica. A value of 0 means no limit. This is useful for a replica server host that has limited disk space. When the limit is reached, the replication I/O thread stops reading binary log events from the source until the replication SQL thread has caught up and deleted some unused relay logs. Note that this limit is not absolute: There are cases where the SQL thread needs more events before it can delete relay logs. In that case, the I/O thread exceeds the limit until it becomes possible for the SQL thread to delete some relay logs because not doing so would cause a deadlock. You should not set --relay-log-space-limit to less than twice the value of --max-relay-log-size (or --max-binlog-size if --max-relay-log-size is 0). In that case, there is a chance that the I/O thread waits for free space because --relay-log-space-limit is exceeded, but the SQL thread has no relay log to purge and is unable to satisfy the I/O thread. This forces the I/O thread to ignore --relay-log-space-limit temporarily.
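
    A sketch of option-file settings consistent with that guideline (the sizes shown are illustrative):

    [mysqld]
    max_relay_log_size=268435456        # 256MB
    relay_log_space_limit=1073741824    # 1GB, at least twice max_relay_log_size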

  • --replicate-do-db=db_name

    Command-Line Format --replicate-do-db=name
    Type String

    Creates a replication filter using the name of a database. Such filters can also be created using CHANGE REPLICATION FILTER REPLICATE_DO_DB. The precise effect of this filtering depends on whether statement-based or row-based replication is in use, and is described in the next several paragraphs.

    Note

    Replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.

    Statement-based replication.  Tell the replication SQL thread to restrict replication to statements where the default database (that is, the one selected by USE) is db_name. To specify more than one database, use this option multiple times, once for each database; however, doing so does not replicate cross-database statements such as UPDATE some_db.some_table SET foo='bar' while a different database (or no database) is selected.

    Warning

    To specify multiple databases you must use multiple instances of this option. Because database names can contain commas, if you supply a comma separated list then the list is treated as the name of a single database.

    An example of what does not work as you might expect when using statement-based replication: If the replica is started with --replicate-do-db=sales and you issue the following statements on the source, the UPDATE statement is not replicated:

    USE prices;
    UPDATE sales.january SET amount=amount+1000;
    

    The main reason for this “check just the default database” behavior is that it is difficult to determine from the statement alone whether it should be replicated (for example, if you are using multiple-table DELETE statements or multiple-table UPDATE statements that act across multiple databases). It is also faster to check only the default database rather than all databases when there is no need to do so.

    Row-based replication.  Tells the replication SQL thread to restrict replication to database db_name. Only tables belonging to db_name are changed; the current database has no effect on this. Suppose that the replica is started with --replicate-do-db=sales and row-based replication is in effect, and then the following statements are run on the source:

    USE prices;
    UPDATE sales.february SET amount=amount+100;
    

    The february table in the sales database on the replica is changed in accordance with the UPDATE statement; this occurs whether or not the USE statement was issued. However, issuing the following statements on the source has no effect on the replica when using row-based replication and --replicate-do-db=sales:

    USE prices;
    UPDATE prices.march SET amount=amount-25;
    

    Even if the statement USE prices were changed to USE sales, the UPDATE statement's effects would still not be replicated.

    Another important difference in how --replicate-do-db is handled in statement-based replication as opposed to row-based replication occurs with regard to statements that refer to multiple databases. Suppose that the replica is started with --replicate-do-db=db1, and the following statements are executed on the source:

    USE db1;
    UPDATE db1.table1, db2.table2 SET db1.table1.col1 = 10, db2.table2.col2 = 20;
    

    If you are using statement-based replication, then both tables are updated on the replica. However, when using row-based replication, only table1 is affected on the replica; since table2 is in a different database, table2 on the replica is not changed by the UPDATE. Now suppose that, instead of the USE db1 statement, a USE db4 statement had been used:

    USE db4;
    UPDATE db1.table1, db2.table2 SET db1.table1.col1 = 10, db2.table2.col2 = 20;
    

    In this case, the UPDATE statement would have no effect on the replica when using statement-based replication. However, if you are using row-based replication, the UPDATE would change table1 on the replica, but not table2—in other words, only tables in the database named by --replicate-do-db are changed, and the choice of default database has no effect on this behavior.

    If you need cross-database updates to work, use --replicate-wild-do-table=db_name.% instead. See Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”.

    Note

    This option affects replication in the same manner that --binlog-do-db affects binary logging, and the effects of the replication format on how --replicate-do-db affects replication behavior are the same as those of the logging format on the behavior of --binlog-do-db.

    This option has no effect on BEGIN, COMMIT, or ROLLBACK statements.
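
    If cross-database updates to a given database must replicate, a table-pattern filter can be used instead, as noted above. A minimal sketch using the hypothetical sales database, set at runtime (the replication SQL thread must be stopped before issuing CHANGE REPLICATION FILTER):

    STOP SLAVE SQL_THREAD;
    CHANGE REPLICATION FILTER REPLICATE_WILD_DO_TABLE = ('sales.%');
    START SLAVE SQL_THREAD;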

  • --replicate-ignore-db=db_name

    Command-Line Format --replicate-ignore-db=name
    Type String

    Creates a replication filter using the name of a database. Such filters can also be created using CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB. As with --replicate-do-db, the precise effect of this filtering depends on whether statement-based or row-based replication is in use, and is described in the next several paragraphs.

    Note

    Replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.

    Statement-based replication.  Tells the replication SQL thread not to replicate any statement where the default database (that is, the one selected by USE) is db_name.

    Row-based replication.  Tells the replication SQL thread not to update any tables in the database db_name. The default database has no effect.

    When using statement-based replication, the following example does not work as you might expect. Suppose that the replica is started with --replicate-ignore-db=sales and you issue the following statements on the source:

    USE prices;
    UPDATE sales.january SET amount=amount+1000;
    

    The UPDATE statement is replicated in such a case because --replicate-ignore-db applies only to the default database (determined by the USE statement). Because the sales database was specified explicitly in the statement, the statement has not been filtered. However, when using row-based replication, the UPDATE statement's effects are not propagated to the replica, and the replica's copy of the sales.january table is unchanged; in this instance, --replicate-ignore-db=sales causes all changes made to tables in the source's copy of the sales database to be ignored by the replica.

    To specify more than one database to ignore, use this option multiple times, once for each database. Because database names can contain commas, if you supply a comma separated list then the list is treated as the name of a single database.

    You should not use this option if you are using cross-database updates and you do not want these updates to be replicated. See Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”.

    If you need cross-database updates to work, use --replicate-wild-ignore-table=db_name.% instead. See Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”.

    Note

    This option affects replication in the same manner that --binlog-ignore-db affects binary logging, and the effects of the replication format on how --replicate-ignore-db affects replication behavior are the same as those of the logging format on the behavior of --binlog-ignore-db.

    This option has no effect on BEGIN, COMMIT, or ROLLBACK statements.

  • --replicate-do-table=db_name.tbl_name

    Command-Line Format --replicate-do-table=name
    Type String

    Creates a replication filter by telling the replication SQL thread to restrict replication to a given table. To specify more than one table, use this option multiple times, once for each table. This works for both cross-database updates and default database updates, in contrast to --replicate-do-db. See Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”. You can also create such a filter by issuing a CHANGE REPLICATION FILTER REPLICATE_DO_TABLE statement.

    Note

    Replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.

    This option affects only statements that apply to tables. It does not affect statements that apply only to other database objects, such as stored routines. To filter statements operating on stored routines, use one or more of the --replicate-*-db options.
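
    The equivalent runtime filter can be set with CHANGE REPLICATION FILTER while the replication SQL thread is stopped; a minimal sketch using hypothetical database and table names:

    STOP SLAVE SQL_THREAD;
    CHANGE REPLICATION FILTER REPLICATE_DO_TABLE = (db1.table1, db1.table2);
    START SLAVE SQL_THREAD;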

  • --replicate-ignore-table=db_name.tbl_name

    Command-Line Format --replicate-ignore-table=name
    Type String

    Creates a replication filter by telling the replication SQL thread not to replicate any statement that updates the specified table, even if any other tables might be updated by the same statement. To specify more than one table to ignore, use this option multiple times, once for each table. This works for cross-database updates, in contrast to --replicate-ignore-db. See Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”. You can also create such a filter by issuing a CHANGE REPLICATION FILTER REPLICATE_IGNORE_TABLE statement.

    Note

    Replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.

    This option affects only statements that apply to tables. It does not affect statements that apply only to other database objects, such as stored routines. To filter statements operating on stored routines, use one or more of the --replicate-*-db options.

  • --replicate-rewrite-db=from_name->to_name

    Command-Line Format --replicate-rewrite-db=old_name->new_name
    Type String

    Tells the replica to create a replication filter that translates the specified database to to_name if it was from_name on the source. Only statements involving tables are affected, not statements such as CREATE DATABASE, DROP DATABASE, and ALTER DATABASE.

    To specify multiple rewrites, use this option multiple times. The server uses the first one with a from_name value that matches. The database name translation is done before the --replicate-* rules are tested. You can also create such a filter by issuing a CHANGE REPLICATION FILTER REPLICATE_REWRITE_DB statement.
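
    A minimal sketch of the runtime equivalent, using the same hypothetical olddb and newdb names as the command-line example below (the replication SQL thread must be stopped first):

    STOP SLAVE SQL_THREAD;
    CHANGE REPLICATION FILTER REPLICATE_REWRITE_DB = ((olddb, newdb));
    START SLAVE SQL_THREAD;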

    If you use the --replicate-rewrite-db option on the command line and the > character is special to your command interpreter, quote the option value. For example:

    shell> mysqld --replicate-rewrite-db="olddb->newdb"
    

    The effect of the --replicate-rewrite-db option differs depending on whether statement-based or row-based binary logging format is used for the query. With statement-based format, DML statements are translated based on the current database, as specified by the USE statement. With row-based format, DML statements are translated based on the database where the modified table exists. DDL statements are always filtered based on the current database, as specified by the USE statement, regardless of the binary logging format.

    To ensure that rewriting produces the expected results, particularly in combination with other replication filtering options, follow these recommendations when you use the --replicate-rewrite-db option:

    • Create the from_name and to_name databases manually on the source and the replica with different names.

    • If you use statement-based or mixed binary logging format, do not use cross-database queries, and do not specify database names in queries. For both DDL and DML statements, rely on the USE statement to specify the current database, and use only the table name in queries.

    • If you use row-based binary logging format exclusively, for DDL statements, rely on the USE statement to specify the current database, and use only the table name in queries. For DML statements, you can use a fully qualified table name (db.table) if you want.

    If these recommendations are followed, it is safe to use the --replicate-rewrite-db option in combination with table-level replication filtering options such as --replicate-do-table.

    Note

    Global replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.

  • --replicate-same-server-id

    Command-Line Format --replicate-same-server-id[={OFF|ON}]
    Type Boolean
    Default Value OFF

    To be used on replica servers. Usually you should use the default setting of 0, to prevent infinite loops caused by circular replication. If set to 1, the replica does not skip events having its own server ID. Normally, this is useful only in rare configurations. Cannot be set to 1 if log_slave_updates is enabled. By default, the replication I/O thread does not write binary log events to the relay log if they have the replica's server ID (this optimization helps save disk usage). If you want to use --replicate-same-server-id, be sure to start the replica with this option before you make the replica read its own events that you want the replication SQL thread to execute.

  • --replicate-wild-do-table=db_name.tbl_name

    Command-Line Format --replicate-wild-do-table=name
    Type String

    Creates a replication filter by telling the replication SQL thread to restrict replication to statements where any of the updated tables match the specified database and table name patterns. Patterns can contain the % and _ wildcard characters, which have the same meaning as for the LIKE pattern-matching operator. To specify more than one table, use this option multiple times, once for each table. This works for cross-database updates. See Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”. You can also create such a filter by issuing a CHANGE REPLICATION FILTER REPLICATE_WILD_DO_TABLE statement.

    Note

    Replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.

    This option applies to tables, views, and triggers. It does not apply to stored procedures and functions, or events. To filter statements operating on the latter objects, use one or more of the --replicate-*-db options.

    As an example, --replicate-wild-do-table=foo%.bar% replicates only updates that use a table where the database name starts with foo and the table name starts with bar.

    If the table name pattern is %, it matches any table name and the option also applies to database-level statements (CREATE DATABASE, DROP DATABASE, and ALTER DATABASE). For example, if you use --replicate-wild-do-table=foo%.%, database-level statements are replicated if the database name matches the pattern foo%.

    To include literal wildcard characters in the database or table name patterns, escape them with a backslash. For example, to replicate all tables of a database that is named my_own%db, but not replicate tables from the my1ownAABCdb database, you should escape the _ and % characters like this: --replicate-wild-do-table=my\_own\%db. If you use the option on the command line, you might need to double the backslashes or quote the option value, depending on your command interpreter. For example, with the bash shell, you would need to type --replicate-wild-do-table=my\\_own\\%db.
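
    In an option file, repeat the option once per pattern; a minimal sketch using hypothetical database and table name patterns:

    [mysqld]
    replicate-wild-do-table = sales.%
    replicate-wild-do-table = foo%.bar%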

  • --replicate-wild-ignore-table=db_name.tbl_name

    Command-Line Format --replicate-wild-ignore-table=name
    Type String

    Creates a replication filter which keeps the replication SQL thread from replicating a statement in which any table matches the given wildcard pattern. To specify more than one table to ignore, use this option multiple times, once for each table. This works for cross-database updates. See Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”. You can also create such a filter by issuing a CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE statement.

    Note

    Replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.

    As an example, --replicate-wild-ignore-table=foo%.bar% does not replicate updates that use a table where the database name starts with foo and the table name starts with bar.

    For information about how matching works, see the description of the --replicate-wild-do-table option. The rules for including literal wildcard characters in the option value are the same as for --replicate-wild-do-table as well.

  • --skip-slave-start

    Command-Line Format --skip-slave-start[={OFF|ON}]
    Type Boolean
    Default Value OFF

    Tells the replica server not to start the replication threads when the server starts. To start the threads later, use a START SLAVE statement.
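
    For example, a minimal option-file sketch for a replica whose replication threads should be started manually after the configuration has been verified:

    [mysqld]
    skip-slave-start

    Once the replica is ready, issue START SLAVE to begin replication.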

  • --slave-skip-errors=[err_code1,err_code2,...|all|ddl_exist_errors]

    Command-Line Format --slave-skip-errors=name
    System Variable slave_skip_errors
    Scope Global
    Dynamic No
    Type String
    Default Value OFF
    Valid Values

    OFF

    [list of error codes]

    all

    ddl_exist_errors

    Normally, replication stops when an error occurs on the replica, which gives you the opportunity to resolve the inconsistency in the data manually. This option causes the replication SQL thread to continue replication when a statement returns any of the errors listed in the option value.

    Do not use this option unless you fully understand why you are getting errors. If there are no bugs in your replication setup and client programs, and no bugs in MySQL itself, an error that stops replication should never occur. Indiscriminate use of this option results in replicas becoming hopelessly out of synchrony with the source, with you having no idea why this has occurred.

    For error codes, you should use the numbers provided by the error message in the replica's error log and in the output of SHOW SLAVE STATUS. Appendix B, Error Messages and Common Problems, lists server error codes.

    The shorthand value ddl_exist_errors is equivalent to the error code list 1007,1008,1050,1051,1054,1060,1061,1068,1094,1146.

    You can also (but should not) use the value all to cause the replica to ignore all error messages and keep going regardless of what happens. Needless to say, if you use all, there are no guarantees regarding the integrity of your data. Please do not complain (or file bug reports) in this case if the replica's data is not anywhere close to what it is on the source. You have been warned.

    Examples:

    --slave-skip-errors=1062,1053
    --slave-skip-errors=all
    --slave-skip-errors=ddl_exist_errors
    
  • --slave-sql-verify-checksum={0|1}

    Command-Line Format --slave-sql-verify-checksum[={OFF|ON}]
    Type Boolean
    Default Value ON

    When this option is enabled, the replica verifies checksums read from the relay log. In the event of a mismatch, the replica stops with an error.

The following options are used internally by the MySQL test suite for replication testing and debugging. They are not intended for use in a production setting.

  • --abort-slave-event-count

    Command-Line Format --abort-slave-event-count=#
    Type Integer
    Default Value 0
    Minimum Value 0

    When this option is set to some positive integer value other than 0 (the default), it affects replication behavior as follows: after the replication SQL thread has started, the number of log events specified by the option value are permitted to be executed; after that, the replication SQL thread does not receive any more events, just as if the network connection from the source were cut. The replication SQL thread continues to run, and the output from SHOW SLAVE STATUS displays Yes in both the Slave_IO_Running and the Slave_SQL_Running columns, but no further events are read from the relay log.

  • --disconnect-slave-event-count

    Command-Line Format --disconnect-slave-event-count=#
    Type Integer
    Default Value 0

    This option is used internally by the MySQL test suite for replication testing and debugging.

Options for Logging Replica Status to Tables

MySQL 5.7 supports logging of replication metadata to tables rather than files. Writing of the replica's connection metadata repository and applier metadata repository can be configured separately using these two system variables:

  • master_info_repository

  • relay_log_info_repository

For information about these variables, see Section 16.1.6.3, “Replica Server Options and Variables”.

These variables can be used to make a replica resilient to unexpected halts. See Section 16.3.2, “Handling an Unexpected Halt of a Replica”, for more information.

The info log tables and their contents are considered local to a given MySQL Server. They are not replicated, and changes to them are not written to the binary log.

For more information, see Section 16.2.4, “Relay Log and Replication Metadata Repositories”.

System Variables Used on Replicas

The following list describes system variables for controlling replica servers. They can be set at server startup and some of them can be changed at runtime using SET. Server options used with replicas are listed earlier in this section.

  • init_slave

    Command-Line Format --init-slave=name
    System Variable init_slave
    Scope Global
    Dynamic Yes
    Type String

    This variable is similar to init_connect, but is a string to be executed by a replica server each time the replication SQL thread starts. The format of the string is the same as for the init_connect variable. The setting of this variable takes effect for subsequent START SLAVE statements.

    Note

    The replication SQL thread sends an acknowledgment to the client before it executes init_slave. Therefore, it is not guaranteed that init_slave has been executed when START SLAVE returns. See Section 13.4.2.5, “START SLAVE Statement”, for more information.

  • log_slow_slave_statements

    Command-Line Format --log-slow-slave-statements[={OFF|ON}]
    System Variable log_slow_slave_statements
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    When the slow query log is enabled, this variable enables logging for queries that have taken more than long_query_time seconds to execute on the replica. Note that if row-based replication is in use (binlog_format=ROW), log_slow_slave_statements has no effect. Queries are only added to the replica's slow query log when they are logged in statement format in the binary log, that is, when binlog_format=STATEMENT is set, or when binlog_format=MIXED is set and the statement is logged in statement format. Slow queries that are logged in row format when binlog_format=MIXED is set, or that are logged when binlog_format=ROW is set, are not added to the replica's slow query log, even if log_slow_slave_statements is enabled.

    Setting log_slow_slave_statements has no immediate effect. The state of the variable applies on all subsequent START SLAVE statements. Also note that the global setting for long_query_time applies for the lifetime of the SQL thread. If you change that setting, you must stop and restart the replication SQL thread to implement the change there (for example, by issuing STOP SLAVE and START SLAVE statements with the SQL_THREAD option).
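
    Because the setting applies only to subsequent START SLAVE statements, a typical sequence is to enable the variable and then restart the replication SQL thread; a minimal sketch:

    SET GLOBAL log_slow_slave_statements = ON;
    STOP SLAVE SQL_THREAD;
    START SLAVE SQL_THREAD;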

  • master_info_repository

    Command-Line Format --master-info-repository={FILE|TABLE}
    System Variable master_info_repository
    Scope Global
    Dynamic Yes
    Type String
    Default Value FILE
    Valid Values

    FILE

    TABLE

    The setting of this variable determines whether the replica records metadata about the source, consisting of status and connection information, to an InnoDB table in the mysql system database, or as a file in the data directory. For more information on the connection metadata repository, see Section 16.2.4, “Relay Log and Replication Metadata Repositories”.

    The default setting is FILE. As a file, the replica's connection metadata repository is named master.info by default. You can change this name using the --master-info-file option.

    The alternative setting is TABLE. As an InnoDB table, the replica's connection metadata repository is named mysql.slave_master_info. The TABLE setting is required when multiple replication channels are configured.

    This variable must be set to TABLE before configuring multiple replication channels. If you are using multiple replication channels, you cannot set the value back to FILE.

    The setting for the location of the connection metadata repository has a direct influence on the effect of the sync_master_info system variable setting. You can change the setting only when no replication threads are executing.
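
    For example, to switch the connection metadata repository to a table after stopping the replication threads (a minimal sketch; the applier metadata repository is typically switched at the same time when preparing for multiple channels):

    STOP SLAVE;
    SET GLOBAL master_info_repository = 'TABLE';
    SET GLOBAL relay_log_info_repository = 'TABLE';
    START SLAVE;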

  • max_relay_log_size

    Command-Line Format --max-relay-log-size=#
    System Variable max_relay_log_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 1073741824

    If a write by a replica to its relay log causes the current log file size to exceed the value of this variable, the replica rotates the relay logs (closes the current file and opens the next one). If max_relay_log_size is 0, the server uses max_binlog_size for both the binary log and the relay log. If max_relay_log_size is greater than 0, it constrains the size of the relay log, which enables you to have different sizes for the two logs. You must set max_relay_log_size to between 4096 bytes and 1GB (inclusive), or to 0. The default value is 0. See Section 16.2.3, “Replication Threads”.

  • relay_log

    Command-Line Format --relay-log=file_name
    System Variable relay_log
    Scope Global
    Dynamic No
    Type File name

    The base name for relay log files. For the default replication channel, the default base name for relay logs is host_name-relay-bin. For non-default replication channels, the default base name for relay logs is host_name-relay-bin-channel, where channel is the name of the replication channel recorded in this relay log.

    The server writes the file in the data directory unless the base name is given with a leading absolute path name to specify a different directory. The server creates relay log files in sequence by adding a numeric suffix to the base name.

    Due to the manner in which MySQL parses server options, if you specify this variable at server startup, you must supply a value; the default base name is used only if the option is not actually specified. If you specify the relay_log system variable at server startup without specifying a value, unexpected behavior is likely to result; this behavior depends on the other options used, the order in which they are specified, and whether they are specified on the command line or in an option file. For more information about how MySQL handles server options, see Section 4.2.2, “Specifying Program Options”.

    If you specify this variable, the value specified is also used as the base name for the relay log index file. You can override this behavior by specifying a different relay log index file base name using the relay_log_index system variable.

    When the server reads an entry from the index file, it checks whether the entry contains a relative path. If it does, the relative part of the path is replaced with the absolute path set using the relay_log system variable. An absolute path remains unchanged; in such a case, the index must be edited manually to enable the new path or paths to be used.

    You may find the relay_log system variable useful in performing the following tasks:

    • Creating relay logs whose names are independent of host names.

    • Placing the relay logs in some area other than the data directory, for example because your relay logs tend to be very large and you do not want to decrease max_relay_log_size.

    • Increasing speed by using load-balancing between disks.

    You can obtain the relay log file name (and path) from the relay_log_basename system variable.
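
    A minimal option-file sketch that places the relay logs and relay log index on a separate file system, using a hypothetical path:

    [mysqld]
    relay-log       = /data/relay/replica-relay-bin
    relay-log-index = /data/relay/replica-relay-bin.index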

  • relay_log_basename

    System Variable relay_log_basename
    Scope Global
    Dynamic No
    Type File name
    Default Value datadir + '/' + hostname + '-relay-bin'

    Holds the base name and complete path to the relay log file. The maximum variable length is 256. This variable is set by the server and is read only.

  • relay_log_index

    Command-Line Format --relay-log-index=file_name
    System Variable relay_log_index
    Scope Global
    Dynamic No
    Type File name
    Default Value *host_name*-relay-bin.index

    The name for the relay log index file. The maximum variable length is 256. For the default replication channel, the default name is host_name-relay-bin.index. For non-default replication channels, the default name is host_name-relay-bin-channel.index, where channel is the name of the replication channel recorded in this relay log index.

    The server writes the file in the data directory unless the name is given with a leading absolute path name to specify a different directory.

    Due to the manner in which MySQL parses server options, if you specify this variable at server startup, you must supply a value; the default base name is used only if the option is not actually specified. If you specify the relay_log_index system variable at server startup without specifying a value, unexpected behavior is likely to result; this behavior depends on the other options used, the order in which they are specified, and whether they are specified on the command line or in an option file. For more information about how MySQL handles server options, see Section 4.2.2, “Specifying Program Options”.

  • relay_log_info_file

    Command-Line Format --relay-log-info-file=file_name
    System Variable relay_log_info_file
    Scope Global
    Dynamic No
    Type File name
    Default Value relay-log.info

    The name of the file in which the replica records information about the relay logs, when relay_log_info_repository=FILE. If relay_log_info_repository=TABLE, this is the file name that would be used if the repository were changed to FILE. The default name is relay-log.info in the data directory. For information about the applier metadata repository, see Section 16.2.4.2, “Replication Metadata Repositories”.

  • relay_log_info_repository

    Command-Line Format --relay-log-info-repository=value
    System Variable relay_log_info_repository
    Scope Global
    Dynamic Yes
    Type String
    Default Value FILE
    Valid Values

    FILE

    TABLE

    The setting of this variable determines whether the replica server stores its applier metadata repository as an InnoDB table in the mysql system database, or as a file in the data directory. For more information on the applier metadata repository, see Section 16.2.4, “Relay Log and Replication Metadata Repositories”.

    The default setting is FILE. As a file, the replica's applier metadata repository is named relay-log.info by default, and you can change this name using the relay_log_info_file system variable.

    With the setting TABLE, as an InnoDB table, the replica's applier metadata repository is named mysql.slave_relay_log_info. The TABLE setting is required when multiple replication channels are configured. The TABLE setting for the replica's applier metadata repository is also required to make replication resilient to unexpected halts. See Section 16.3.2, “Handling an Unexpected Halt of a Replica” for more information.

    This variable must be set to TABLE before configuring multiple replication channels. If you are using multiple replication channels then you cannot set the value back to FILE.

    The setting for the location of the applier metadata repository has a direct influence on the effect of the sync_relay_log_info system variable setting. You can change the setting only when no replication threads are executing.

  • relay_log_purge

    Command-Line Format --relay-log-purge[={OFF|ON}]
    System Variable relay_log_purge
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value ON

    Disables or enables automatic purging of relay log files as soon as they are not needed any more. The default value is 1 (ON).

  • relay_log_recovery

    Command-Line Format --relay-log-recovery[={OFF|ON}]
    System Variable relay_log_recovery
    Scope Global
    Dynamic No
    Type Boolean
    Default Value OFF

    If enabled, this variable enables automatic relay log recovery immediately following server startup. The recovery process creates a new relay log file, initializes the SQL thread position to this new relay log, and initializes the I/O thread to the SQL thread position. Reading of the relay log from the source then continues.

    This global variable is read-only at runtime. Its value can be set with the --relay-log-recovery option at replica server startup, which should be used following an unexpected halt of a replica to ensure that no possibly corrupted relay logs are processed, and must be used in order to guarantee a crash-safe replica. The default value is 0 (disabled). For information on the combination of settings on a replica that is most resilient to unexpected halts, see Section 16.3.2, “Handling an Unexpected Halt of a Replica”.

    This variable also interacts with the relay_log_purge variable, which controls purging of logs when they are no longer needed. Enabling relay_log_recovery when relay_log_purge is disabled risks reading the relay log from files that were not purged, leading to data inconsistency.

    For a multithreaded replica (where slave_parallel_workers is greater than 0), from MySQL 5.7.13, setting relay_log_recovery = ON automatically handles any inconsistencies and gaps in the sequence of transactions that have been executed from the relay log. These gaps can occur when file position based replication is in use. (For more details, see Section 16.4.1.32, “Replication and Transaction Inconsistencies”.) The relay log recovery process deals with gaps using the same method as the START SLAVE UNTIL SQL_AFTER_MTS_GAPS statement would. When the replica reaches a consistent gap-free state, the relay log recovery process goes on to fetch further transactions from the source beginning at the replication SQL thread position. In MySQL versions prior to MySQL 5.7.13, this process was not automatic and required starting the server with relay_log_recovery=0, starting the replica with START SLAVE UNTIL SQL_AFTER_MTS_GAPS to fix any transaction inconsistencies, and then restarting the replica with relay_log_recovery=1. When GTID-based replication is in use, this process is unnecessary, and from MySQL 5.7.28 a multithreaded replica automatically skips relay log recovery when MASTER_AUTO_POSITION is set to ON, so the setting for relay_log_recovery makes no difference.

    Note

    This variable does not affect the following Group Replication channels:

    • group_replication_applier

    • group_replication_recovery

    Any other channels running on a group are affected, such as a channel which is replicating from an outside source or another group.
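
    A minimal option-file sketch for the crash-safe combination described above (automatic relay log recovery together with the table-based applier metadata repository):

    [mysqld]
    relay-log-recovery        = ON
    relay-log-info-repository = TABLE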

  • relay_log_space_limit

    Command-Line Format --relay-log-space-limit=#
    System Variable relay_log_space_limit
    Scope Global
    Dynamic No
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    The maximum amount of space to use for all relay logs.

  • report_host

    Command-Line Format --report-host=host_name
    System Variable report_host
    Scope Global
    Dynamic No
    Type String

    The host name or IP address of the replica to be reported to the source during replica registration. This value appears in the output of SHOW SLAVE HOSTS on the source server. Leave the value unset if you do not want the replica to register itself with the source.

    Note

    It is not sufficient for the source to simply read the IP address of the replica from the TCP/IP socket after the replica connects. Due to NAT and other routing issues, that IP may not be valid for connecting to the replica from the source or other hosts.

  • report_password

    Command-Line Format --report-password=name
    System Variable report_password
    Scope Global
    Dynamic No
    Type String

    The replication user account password of the replica to be reported to the source during replica registration. This value appears in the output of SHOW SLAVE HOSTS on the source server if the source was started with --show-slave-auth-info.

    Although the name of this variable might imply otherwise, report_password is not connected to the MySQL user privilege system and so is not necessarily (or even likely to be) the same as the password for the MySQL replication user account.

  • report_port

    Command-Line Format --report-port=port_num
    System Variable report_port
    Scope Global
    Dynamic No
    Type Integer
    Default Value [slave_port]
    Minimum Value 0
    Maximum Value 65535

    The TCP/IP port number for connecting to the replica, to be reported to the source during replica registration. Set this only if the replica is listening on a nondefault port or if you have a special tunnel from the source or other clients to the replica. If you are not sure, do not use this option.

    The default value for this option is the port number actually used by the replica. This is also the default value displayed by SHOW SLAVE HOSTS.
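
    A minimal option-file sketch for a replica behind NAT or a tunnel, using hypothetical host and port values:

    [mysqld]
    report-host = replica1.example.com
    report-port = 3307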

  • report_user

    Command-Line Format --report-user=name
    System Variable report_user
    Scope Global
    Dynamic No
    Type String

    The account user name of the replica to be reported to the source during replica registration. This value appears in the output of SHOW SLAVE HOSTS on the source server if the source was started with --show-slave-auth-info.

    Although the name of this variable might imply otherwise, report_user is not connected to the MySQL user privilege system and so is not necessarily (or even likely to be) the same as the name of the MySQL replication user account.

  • rpl_semi_sync_slave_enabled

    Command-Line Format --rpl-semi-sync-slave-enabled[={OFF|ON}]
    System Variable rpl_semi_sync_slave_enabled
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    Controls whether semisynchronous replication is enabled on the replica. To enable or disable the plugin, set this variable to ON or OFF (or 1 or 0), respectively. The default is OFF.

    This variable is available only if the replica-side semisynchronous replication plugin is installed.

  • rpl_semi_sync_slave_trace_level

    Command-Line Format --rpl-semi-sync-slave-trace-level=#
    System Variable rpl_semi_sync_slave_trace_level
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 32

    The semisynchronous replication debug trace level on the replica. See rpl_semi_sync_master_trace_level for the permissible values.

    This variable is available only if the replica-side semisynchronous replication plugin is installed.

  • rpl_stop_slave_timeout

    Command-Line Format --rpl-stop-slave-timeout=seconds
    System Variable rpl_stop_slave_timeout
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 31536000
    Minimum Value 2
    Maximum Value 31536000

    You can control the length of time (in seconds) that STOP SLAVE waits before timing out by setting this variable. This can be used to avoid deadlocks between STOP SLAVE and other SQL statements using different client connections to the replica.

    The maximum and default value of rpl_stop_slave_timeout is 31536000 seconds (1 year). The minimum is 2 seconds. Changes to this variable take effect for subsequent STOP SLAVE statements.

    This variable affects only the client that issues a STOP SLAVE statement. When the timeout is reached, the issuing client returns an error message stating that the command execution is incomplete. The client then stops waiting for the replication threads to stop, but the replication threads continue to try to stop, and the STOP SLAVE instruction remains in effect. Once the replication threads are no longer busy, the STOP SLAVE statement is executed and the replica stops.
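
    For example, to make STOP SLAVE give up waiting after one minute instead of the default one year (an illustrative value only):

    SET GLOBAL rpl_stop_slave_timeout = 60;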

  • slave_checkpoint_group

    Command-Line Format --slave-checkpoint-group=#
    System Variable slave_checkpoint_group
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 512
    Minimum Value 32
    Maximum Value 524280
    Block Size 8

    Sets the maximum number of transactions that can be processed by a multithreaded replica before a checkpoint operation is called to update its status as shown by SHOW SLAVE STATUS. Setting this variable has no effect on replicas for which multithreading is not enabled. Setting this variable has no immediate effect. The state of the variable applies on all subsequent START SLAVE commands.

    Note

    Multithreaded replicas are not currently supported by NDB Cluster, which silently ignores the setting for this variable. See Section 20.6.3, “Known Issues in NDB Cluster Replication”, for more information.

    This variable works in combination with the slave_checkpoint_period system variable in such a way that, when either limit is exceeded, the checkpoint is executed and the counters tracking both the number of transactions and the time elapsed since the last checkpoint are reset.

    The minimum allowed value for this variable is 32, unless the server was built using -DWITH_DEBUG, in which case the minimum value is 1. The effective value is always a multiple of 8; you can set it to a value that is not such a multiple, but the server rounds it down to the next lower multiple of 8 before storing the value. (Exception: No such rounding is performed by the debug server.) Regardless of how the server was built, the default value is 512, and the maximum allowed value is 524280.

  • slave_checkpoint_period

    Command-Line Format --slave-checkpoint-period=#
    System Variable slave_checkpoint_period
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 300
    Minimum Value 1
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295
    Unit milliseconds

    Sets the maximum time (in milliseconds) that is allowed to pass before a checkpoint operation is called to update the status of a multithreaded replica as shown by SHOW SLAVE STATUS. Setting this variable has no effect on replicas for which multithreading is not enabled. Setting this variable takes effect for all replication channels immediately, including running channels.

    Note

    Multithreaded replicas are not currently supported by NDB Cluster, which silently ignores the setting for this variable. See Section 20.6.3, “Known Issues in NDB Cluster Replication”, for more information.

    This variable works in combination with the slave_checkpoint_group system variable in such a way that, when either limit is exceeded, the checkpoint is executed and the counters tracking both the number of transactions and the time elapsed since the last checkpoint are reset.

    The minimum allowed value for this variable is 1, unless the server was built using -DWITH_DEBUG, in which case the minimum value is 0. Regardless of how the server was built, the default value is 300, and the maximum possible value is 4294967296 (4GB).

  • slave_compressed_protocol

    Command-Line Format --slave-compressed-protocol[={OFF|ON}]
    System Variable slave_compressed_protocol
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    Whether to use compression of the source/replica protocol if both source and replica support it. If this variable is disabled (the default), connections are uncompressed. Changes to this variable take effect on subsequent connection attempts; this includes after issuing a START SLAVE statement, as well as reconnections made by a running replication I/O thread (for example, after setting the MASTER_RETRY_COUNT option for the CHANGE MASTER TO statement). See also Section 4.2.6, “Connection Compression Control”.

  • slave_exec_mode

    Command-Line Format --slave-exec-mode=mode
    System Variable slave_exec_mode
    Scope Global
    Dynamic Yes
    Type Enumeration
    Default Value

    IDEMPOTENT (NDB)

    STRICT (Other)

    Valid Values

    IDEMPOTENT

    STRICT

    Controls how a replication thread resolves conflicts and errors during replication. IDEMPOTENT mode causes suppression of duplicate-key and no-key-found errors; STRICT means no such suppression takes place.

    IDEMPOTENT mode is intended for use in multi-source replication, circular replication, and some other special replication scenarios for NDB Cluster Replication. (See Section 20.6.10, “NDB Cluster Replication: Bidirectional and Circular Replication”, and Section 20.6.11, “NDB Cluster Replication Conflict Resolution”, for more information.) NDB Cluster ignores any value explicitly set for slave_exec_mode, and always treats it as IDEMPOTENT.

    In MySQL Server 5.7, STRICT mode is the default value.

    For storage engines other than NDB, IDEMPOTENT mode should be used only when you are absolutely sure that duplicate-key errors and key-not-found errors can safely be ignored. It is meant to be used in fail-over scenarios for NDB Cluster where multi-source replication or circular replication is employed, and is not recommended for use in other cases.

  • slave_load_tmpdir

    Command-Line Format --slave-load-tmpdir=dir_name
    System Variable slave_load_tmpdir
    Scope Global
    Dynamic No
    Type Directory name
    Default Value Value of --tmpdir

    The name of the directory where the replica creates temporary files. Setting this variable takes effect for all replication channels immediately, including running channels. The variable value is by default equal to the value of the tmpdir system variable, or the default that applies when that system variable is not specified.

    When the replication SQL thread replicates a LOAD DATA statement, it extracts the file to be loaded from the relay log into temporary files, and then loads these into the table. If the file loaded on the source is huge, the temporary files on the replica are huge, too. Therefore, it might be advisable to use this option to tell the replica to put temporary files in a directory located in some file system that has a lot of available space. In that case, the relay logs are huge as well, so you might also want to set the relay_log system variable to place the relay logs in that file system.

    The directory specified by this option should be located in a disk-based file system (not a memory-based file system) so that the temporary files used to replicate LOAD DATA statements can survive machine restarts. The directory also should not be one that is cleared by the operating system during the system startup process. However, replication can now continue after a restart if the temporary files have been removed.

  • slave_max_allowed_packet

    Command-Line Format --slave-max-allowed-packet=#
    System Variable slave_max_allowed_packet
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 1073741824
    Minimum Value 1024
    Maximum Value 1073741824

    This variable sets the maximum packet size for the replication SQL and I/O threads, so that large updates using row-based replication do not cause replication to fail because an update exceeded max_allowed_packet. Setting this variable takes effect for all replication channels immediately, including running channels.

    This global variable always has a value that is a positive integer multiple of 1024; if you set it to some value that is not, the value is rounded down to the nearest multiple of 1024 before it is stored or used; setting slave_max_allowed_packet to 0 causes 1024 to be used. (A truncation warning is issued in all such cases.) The default and maximum value is 1073741824 (1 GB); the minimum is 1024.

  • slave_net_timeout

    Command-Line Format --slave-net-timeout=#
    System Variable slave_net_timeout
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 60
    Minimum Value 1
    Maximum Value 4294967295

    The number of seconds to wait for more data or a heartbeat signal from the source before the replica considers the connection broken, aborts the read, and tries to reconnect. Setting this variable has no immediate effect. The state of the variable applies on all subsequent START SLAVE commands.

    The first retry occurs immediately after the timeout. The interval between retries is controlled by the MASTER_CONNECT_RETRY option for the CHANGE MASTER TO statement, and the number of reconnection attempts is limited by the MASTER_RETRY_COUNT option for the CHANGE MASTER TO statement.

    The heartbeat interval, which stops the connection timeout occurring in the absence of data if the connection is still good, is controlled by the MASTER_HEARTBEAT_PERIOD option for the CHANGE MASTER TO statement. The heartbeat interval defaults to half the value of slave_net_timeout, and it is recorded in the replica's connection metadata repository and shown in the replication_connection_configuration Performance Schema table. Note that a change to the value or default setting of slave_net_timeout does not automatically change the heartbeat interval, whether that has been set explicitly or is using a previously calculated default. If the connection timeout is changed, you must also issue CHANGE MASTER TO to adjust the heartbeat interval to an appropriate value so that it occurs before the connection timeout.
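
    For example, to raise the connection timeout and keep the heartbeat interval below it, as described above (the values are illustrative, and the replication threads must be stopped before issuing CHANGE MASTER TO):

    STOP SLAVE;
    SET GLOBAL slave_net_timeout = 120;
    CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 60;
    START SLAVE;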

  • slave_parallel_type

    Command-Line Format --slave-parallel-type=value
    System Variable slave_parallel_type
    Scope Global
    Dynamic Yes
    Type Enumeration
    Default Value DATABASE
    Valid Values

    DATABASE

    LOGICAL_CLOCK

    When using a multithreaded replica (slave_parallel_workers is greater than 0), this variable specifies the policy used to decide which transactions are allowed to execute in parallel on the replica. The variable has no effect on replicas for which multithreading is not enabled. The possible values are:

    • LOGICAL_CLOCK: Transactions that are part of the same binary log group commit on a source are applied in parallel on a replica. The dependencies between transactions are tracked based on their timestamps to provide additional parallelization where possible. When this value is set, the binlog_transaction_dependency_tracking system variable can be used on the source to specify that write sets are used for parallelization in place of timestamps, if a write set is available for the transaction and gives improved results compared to timestamps.

    • DATABASE: Transactions that update different databases are applied in parallel. This value is only appropriate if data is partitioned into multiple databases which are being updated independently and concurrently on the source. There must be no cross-database constraints, as such constraints may be violated on the replica.

    When slave_preserve_commit_order=1 is set, you can only use LOGICAL_CLOCK.

    If your replication topology uses multiple levels of replicas, LOGICAL_CLOCK may achieve less parallelization for each level the replica is away from the source. You can reduce this effect by using binlog_transaction_dependency_tracking on the source to specify that write sets are used instead of timestamps for parallelization where possible.

  • slave_parallel_workers

    Command-Line Format --slave-parallel-workers=#
    System Variable slave_parallel_workers
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 1024

    Sets the number of applier threads for executing replication transactions in parallel. Setting this variable to a number greater than 0 creates a multithreaded replica with this number of applier threads. When set to 0 (the default) parallel execution is disabled and the replica uses a single applier thread. Setting slave_parallel_workers has no immediate effect. The state of the variable applies on all subsequent START SLAVE statements.

    Note

    Multithreaded replicas are not currently supported by NDB Cluster, which silently ignores the setting for this variable. See Section 20.6.3, “Known Issues in NDB Cluster Replication”, for more information.

    A multithreaded replica provides parallel execution by using a coordinator thread and the number of applier threads configured by this variable. The way in which transactions are distributed among applier threads is configured by slave_parallel_type. The transactions that the replica applies in parallel may commit out of order, unless slave_preserve_commit_order=1. Therefore, checking for the most recently executed transaction does not guarantee that all previous transactions from the source have been executed on the replica. This has implications for logging and recovery when using a multithreaded replica. For example, on a multithreaded replica the START SLAVE UNTIL statement only supports using SQL_AFTER_MTS_GAPS.

    In MySQL 5.7, retrying of transactions is supported when multithreading is enabled on a replica. In previous versions, slave_transaction_retries was treated as equal to 0 when using multithreaded replicas.
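
    A minimal option-file sketch for a multithreaded replica that also preserves commit order (the number of applier threads is an arbitrary illustrative value; the binary log and log-slave-updates are required when slave_preserve_commit_order is enabled):

    [mysqld]
    slave-parallel-type         = LOGICAL_CLOCK
    slave-parallel-workers      = 4
    slave-preserve-commit-order = 1
    log-bin                     = binlog
    log-slave-updates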

  • slave_pending_jobs_size_max

    Command-Line Format --slave-pending-jobs-size-max=#
    System Variable slave_pending_jobs_size_max
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 16M
    Minimum Value 1024
    Maximum Value 16EiB
    Unit bytes
    Block Size 1024

    For multithreaded replicas, this variable sets the maximum amount of memory (in bytes) available to worker queues holding events not yet applied. Setting this variable has no effect on replicas for which multithreading is not enabled. Setting this variable has no immediate effect. The state of the variable applies on all subsequent START SLAVE commands.

    The minimum possible value for this variable is 1024; the default is 16MB. The maximum possible value is 18446744073709551615 (16 exabytes). Values that are not exact multiples of 1024 are rounded down to the nearest multiple of 1024 prior to being stored.

    The value of this variable is a soft limit and can be set to match the normal workload. If an unusually large event exceeds this size, the transaction is held until all the worker threads have empty queues, and then processed. All subsequent transactions are held until the large transaction has been completed.

  • slave_preserve_commit_order

    Command-Line Format --slave-preserve-commit-order[={OFF|ON}]
    System Variable slave_preserve_commit_order
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    For multithreaded replicas, the setting 1 for this variable ensures that transactions are externalized on the replica in the same order as they appear in the replica's relay log, and prevents gaps in the sequence of transactions that have been executed from the relay log. This variable has no effect on replicas for which multithreading is not enabled. Note that slave_preserve_commit_order=1 does not preserve the order of non-transactional DML updates, so these might commit before transactions that precede them in the relay log, which might result in gaps.

    slave_preserve_commit_order=1 requires that --log-bin and --log-slave-updates are enabled on the replica, and slave_parallel_type is set to LOGICAL_CLOCK. Before changing this variable, all replication threads (for all replication channels if you are using multiple replication channels) must be stopped.

    With slave_preserve_commit_order enabled, the executing thread waits until all previous transactions are committed before committing. While the thread is waiting for other workers to commit their transactions it reports its status as Waiting for preceding transaction to commit. (Prior to MySQL 5.7.8, this was shown as Waiting for its turn to commit.) Enabling this mode on a multithreaded replica ensures that it never enters a state that the source was not in. This supports the use of replication for read scale-out. See Section 16.3.4, “Using Replication for Scale-Out”.

    If slave_preserve_commit_order=0 is set, the transactions that the replica applies in parallel may commit out of order. Therefore, checking for the most recently executed transaction does not guarantee that all previous transactions from the source have been executed on the replica. There is a chance of gaps in the sequence of transactions that have been executed from the replica's relay log. This has implications for logging and recovery when using a multithreaded replica. Note that the setting slave_preserve_commit_order=1 prevents gaps, but does not prevent source binary log position lag (where Exec_master_log_pos is behind the position up to which transactions have been executed). See Section 16.4.1.32, “Replication and Transaction Inconsistencies” for more information.

  • slave_rows_search_algorithms

    Command-Line Format --slave-rows-search-algorithms=value
    System Variable slave_rows_search_algorithms
    Scope Global
    Dynamic Yes
    Type Set
    Default Value TABLE_SCAN,INDEX_SCAN
    Valid Values

    TABLE_SCAN,INDEX_SCAN

    INDEX_SCAN,HASH_SCAN

    TABLE_SCAN,HASH_SCAN

    TABLE_SCAN,INDEX_SCAN,HASH_SCAN (equivalent to INDEX_SCAN,HASH_SCAN)

    When preparing batches of rows for row-based logging and replication, this variable controls how the rows are searched for matches, in particular whether hash scans are used. Setting this variable takes effect for all replication channels immediately, including running channels.

    Specify a comma-separated list of any two or three of the values INDEX_SCAN, TABLE_SCAN, HASH_SCAN. The value is expected as a string, so if it is set at runtime rather than at server startup, it must be quoted. In addition, the value must not contain any spaces. The recommended combinations (lists) and their effects are shown in the following table:

    Index used / option value   INDEX_SCAN,HASH_SCAN   INDEX_SCAN,TABLE_SCAN
    Primary key or unique key   Index scan             Index scan
    (Other) Key                 Hash scan over index   Index scan
    No index                    Hash scan              Table scan

    • The default value is TABLE_SCAN,INDEX_SCAN, which means that all searches that can use indexes do use them, and searches without any indexes use table scans.

    • To use hashing for any searches that do not use a primary or unique key, set INDEX_SCAN,HASH_SCAN. Specifying INDEX_SCAN,HASH_SCAN has the same effect as specifying INDEX_SCAN,TABLE_SCAN,HASH_SCAN, which is allowed.

    • Do not use the combination TABLE_SCAN,HASH_SCAN. This setting forces hashing for all searches. It has no advantage over INDEX_SCAN,HASH_SCAN, and it can lead to record not found errors or duplicate key errors in the case of a single event containing multiple updates to the same row, or updates that are order-dependent.

    The order in which the algorithms are specified in the list makes no difference to the order in which they are displayed by a SELECT or SHOW VARIABLES statement.

    It is possible to specify a single value, but this is not optimal, because setting a single value limits searches to using only that algorithm. In particular, setting INDEX_SCAN alone is not recommended, as in that case searches are unable to find rows at all if no index is present.
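
    For example, to use hashing for searches that do not use a primary or unique key, the variable can be set at runtime; note that the value is quoted and contains no spaces:

    SET GLOBAL slave_rows_search_algorithms = 'INDEX_SCAN,HASH_SCAN';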

  • slave_skip_errors

    Command-Line Format --slave-skip-errors=name
    System Variable slave_skip_errors
    Scope Global
    Dynamic No
    Type String
    Default Value OFF
    Valid Values

    OFF

    [list of error codes]

    all

    ddl_exist_errors

    Normally, replication stops when an error occurs on the replica, which gives you the opportunity to resolve the inconsistency in the data manually. This variable causes the replication SQL thread to continue replication when a statement returns any of the errors listed in the variable value.
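
    Examples of the option syntax (the error codes shown are illustrative; 1062 is a duplicate-key error):

    --slave-skip-errors=1062,1053
    --slave-skip-errors=all
    --slave-skip-errors=ddl_exist_errors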

  • slave_sql_verify_checksum

    Command-Line Format --slave-sql-verify-checksum[={OFF|ON}]
    System Variable slave_sql_verify_checksum
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value ON

    Causes the replication SQL thread to verify data using the checksums read from the relay log. In the event of a mismatch, the replica stops with an error. Setting this variable takes effect for all replication channels immediately, including running channels.

    Note

    The replication I/O thread always reads checksums if possible when accepting events from over the network.

  • slave_transaction_retries

    Command-Line Format --slave-transaction-retries=#
    System Variable slave_transaction_retries
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 10
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    If a replication SQL thread fails to execute a transaction because of an InnoDB deadlock or because the transaction's execution time exceeded InnoDB's innodb_lock_wait_timeout or NDB's TransactionDeadlockDetectionTimeout or TransactionInactiveTimeout, it automatically retries slave_transaction_retries times before stopping with an error. Transactions with a non-temporary error are not retried.

    The default value for slave_transaction_retries is 10. Setting the variable to 0 disables automatic retrying of transactions. Setting the variable takes effect for all replication channels immediately, including running channels.

    As of MySQL 5.7.5, retrying of transactions is supported when multithreading is enabled on a replica. In previous versions, slave_transaction_retries was treated as equal to 0 when using multithreaded replicas.

    The Performance Schema table replication_applier_status shows the number of retries that took place on each replication channel, in the COUNT_TRANSACTIONS_RETRIES column.
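
    For example, the per-channel retry counts can be inspected with a query such as the following:

    SELECT CHANNEL_NAME, COUNT_TRANSACTIONS_RETRIES
      FROM performance_schema.replication_applier_status;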

  • slave_type_conversions

    Command-Line Format --slave-type-conversions=set
    System Variable slave_type_conversions
    Scope Global
    Dynamic Yes
    Type Set
    Default Value
    Valid Values

    ALL_LOSSY

    ALL_NON_LOSSY

    ALL_SIGNED

    ALL_UNSIGNED

    Controls the type conversion mode in effect on the replica when using row-based replication. In MySQL 5.7.2 and higher, its value is a comma-delimited set of zero or more elements from the list: ALL_LOSSY, ALL_NON_LOSSY, ALL_SIGNED, ALL_UNSIGNED. Set this variable to an empty string to disallow type conversions between the source and the replica. Setting this variable takes effect for all replication channels immediately, including running channels.

    ALL_SIGNED and ALL_UNSIGNED were added in MySQL 5.7.2 (Bug#15831300). For additional information on type conversion modes applicable to attribute promotion and demotion in row-based replication, see Row-based replication: attribute promotion and demotion.
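
    For example, to permit only lossless conversions (an illustrative choice, not a recommendation):

    SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';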

  • sql_slave_skip_counter

    System Variable sql_slave_skip_counter
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    The number of events from the source that a replica should skip. Setting the option has no immediate effect. The variable applies to the next START SLAVE statement; the next START SLAVE statement also changes the value back to 0. When this variable is set to a nonzero value and there are multiple replication channels configured, the START SLAVE statement can only be used with the FOR CHANNEL channel clause.

    This option is incompatible with GTID-based replication, and must not be set to a nonzero value when gtid_mode=ON. If you need to skip transactions when employing GTIDs, use gtid_executed from the source instead. See Section 16.1.7.3, “Skipping Transactions”.

    Important

    If skipping the number of events specified by setting this variable would cause the replica to begin in the middle of an event group, the replica continues to skip until it finds the beginning of the next event group and begins from that point. For more information, see Section 16.1.7.3, “Skipping Transactions”.
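
    When GTIDs are not in use, the usual pattern is to stop the replica, set the counter, and restart replication; this is an illustration only (with multiple replication channels configured, add FOR CHANNEL channel to the START SLAVE statement):

    STOP SLAVE;
    SET GLOBAL sql_slave_skip_counter = 1;  -- skip the next event group
    START SLAVE;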

  • sync_master_info

    Command-Line Format --sync-master-info=#
    System Variable sync_master_info
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 10000
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    The effects of this variable on a replica depend on whether the replica's master_info_repository is set to FILE or TABLE, as explained in the following paragraphs.

    master_info_repository = FILE.  If the value of sync_master_info is greater than 0, the replica synchronizes its master.info file to disk (using fdatasync()) after every sync_master_info events. If it is 0, the MySQL server performs no synchronization of the master.info file to disk; instead, the server relies on the operating system to flush its contents periodically as with any other file.

    master_info_repository = TABLE.  If the value of sync_master_info is greater than 0, the replica updates its connection metadata repository table after every sync_master_info events. If it is 0, the table is never updated.

    The default value for sync_master_info is 10000. Setting this variable takes effect for all replication channels immediately, including running channels.

  • sync_relay_log

    Command-Line Format --sync-relay-log=#
    System Variable sync_relay_log
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 10000
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    If the value of this variable is greater than 0, the MySQL server synchronizes its relay log to disk (using fdatasync()) after every sync_relay_log events are written to the relay log. Setting this variable takes effect for all replication channels immediately, including running channels.

    Setting sync_relay_log to 0 causes no synchronization to be done to disk; in this case, the server relies on the operating system to flush the relay log's contents from time to time as for any other file.

    A value of 1 is the safest choice because in the event of an unexpected halt you lose at most one event from the relay log. However, it is also the slowest choice (unless the disk has a battery-backed cache, which makes synchronization very fast). For information on the combination of settings on a replica that is most resilient to unexpected halts, see Section 16.3.2, “Handling an Unexpected Halt of a Replica”.
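
    For example, to favor relay log durability over write performance (see the trade-off described above):

    SET GLOBAL sync_relay_log = 1;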

  • sync_relay_log_info

    Command-Line Format --sync-relay-log-info=#
    System Variable sync_relay_log_info
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 10000
    Minimum Value 0
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    The default value for sync_relay_log_info is 10000. Setting this variable takes effect for all replication channels immediately, including running channels.

    The effects of this variable on the replica depend on the server's relay_log_info_repository setting (FILE or TABLE). If the setting is TABLE, the effects of the variable also depend on whether the storage engine used by the relay log info table is transactional (such as InnoDB) or not transactional (MyISAM). The effects of these factors on the behavior of the server for sync_relay_log_info values of zero and greater than zero are as follows:

    sync_relay_log_info = 0
    • If relay_log_info_repository is set to FILE, the MySQL server performs no synchronization of the relay-log.info file to disk; instead, the server relies on the operating system to flush its contents periodically as with any other file.

    • If relay_log_info_repository is set to TABLE, and the storage engine for that table is transactional, the table is updated after each transaction. (The sync_relay_log_info setting is effectively ignored in this case.)

    • If relay_log_info_repository is set to TABLE, and the storage engine for that table is not transactional, the table is never updated.

    sync_relay_log_info = N > 0
    • If relay_log_info_repository is set to FILE, the replica synchronizes its relay-log.info file to disk (using fdatasync()) after every N transactions.

    • If relay_log_info_repository is set to TABLE, and the storage engine for that table is transactional, the table is updated after each transaction. (The sync_relay_log_info setting is effectively ignored in this case.)

    • If relay_log_info_repository is set to TABLE, and the storage engine for that table is not transactional, the table is updated after every N events.

16.1.6.4 Binary Logging Options and Variables

You can use the mysqld options and system variables that are described in this section to affect the operation of the binary log as well as to control which statements are written to the binary log. For additional information about the binary log, see Section 5.4.4, “The Binary Log”. For additional information about using MySQL server options and system variables, see Section 5.1.6, “Server Command Options”, and Section 5.1.7, “Server System Variables”.

Startup Options Used with Binary Logging

The following list describes startup options for enabling and configuring the binary log. System variables used with binary logging are discussed later in this section.

  • --binlog-row-event-max-size=N

    Command-Line Format --binlog-row-event-max-size=#
    Type Integer
    Default Value 8192
    Minimum Value 256
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    Specify the maximum size of a row-based binary log event, in bytes. Rows are grouped into events smaller than this size if possible. The value should be a multiple of 256. The default is 8192. See Section 16.2.1, “Replication Formats”.

  • --log-bin[=base_name]

    Command-Line Format --log-bin=file_name
    Type File name

    Enables binary logging. With binary logging enabled, the server logs all statements that change data to the binary log, which is used for backup and replication. The binary log is a sequence of files with a base name and numeric extension. For information on the format and management of the binary log, see Section 5.4.4, “The Binary Log”.

    If you supply a value for the --log-bin option, the value is used as the base name for the log sequence. The server creates binary log files in sequence by adding a numeric suffix to the base name. In MySQL 5.7, the base name defaults to host_name-bin, using the name of the host machine. It is recommended that you specify a base name, so that you can continue to use the same binary log file names regardless of changes to the default name.

    The default location for binary log files is the data directory. You can use the --log-bin option to specify an alternative location, by adding a leading absolute path name to the base name to specify a different directory. When the server reads an entry from the binary log index file, which tracks the binary log files that have been used, it checks whether the entry contains a relative path. If it does, the relative part of the path is replaced with the absolute path set using the --log-bin option. An absolute path recorded in the binary log index file remains unchanged; in such a case, the index file must be edited manually to enable a new path or paths to be used. (In older versions of MySQL, manual intervention was required whenever relocating the binary log or relay log files.) (Bug #11745230, Bug #12133)

    Setting this option causes the log_bin system variable to be set to ON (or 1), and not to the base name. The binary log file base name and any specified path are available as the log_bin_basename system variable.

    If you specify the --log-bin option without also specifying the server_id system variable, the server is not allowed to start. (Bug #11763963, Bug #56739)

    When GTIDs are in use on the server, if binary logging is not enabled when restarting the server after an abnormal shutdown, some GTIDs are likely to be lost, causing replication to fail. In a normal shutdown, the set of GTIDs from the current binary log file is saved in the mysql.gtid_executed table. Following an abnormal shutdown where this did not happen, during recovery the GTIDs are added to the table from the binary log file, provided that binary logging is still enabled. If binary logging is disabled for the server restart, the server cannot access the binary log file to recover the GTIDs, so replication cannot be started. Binary logging can be disabled safely after a normal shutdown.

    If you want to disable binary logging for a server start but keep the --log-bin setting intact, you can specify the --skip-log-bin or --disable-log-bin option at startup. Specify the option after the --log-bin option, so that it takes precedence. When binary logging is disabled, the log_bin system variable is set to OFF.
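
    A minimal option file illustration only, in which mysql-bin is an arbitrary base name and the server ID is an example value:

    [mysqld]
    server-id=1          # example value; must be set when --log-bin is used
    log-bin=mysql-bin    # arbitrary base name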

  • --log-bin-index[=file_name]

    Command-Line Format --log-bin-index=file_name
    System Variable log_bin_index
    Scope Global
    Dynamic No
    Type File name

    The name for the binary log index file, which contains the names of the binary log files. By default, it has the same location and base name as the value specified for the binary log files using the --log-bin option, plus the extension .index. If you omit the file name, and if you did not also specify one with --log-bin, the default binary log index file name is host_name-bin.index, using the name of the host machine.

    For information on the format and management of the binary log, see Section 5.4.4, “The Binary Log”.

Statement selection options.  The options in the following list affect which statements are written to the binary log, and thus sent by a replication source server to its replicas. There are also options for replica servers that control which statements received from the source should be executed or ignored. For details, see Section 16.1.6.3, “Replica Server Options and Variables”.

  • --binlog-do-db=db_name

    Command-Line Format --binlog-do-db=name
    Type String

    This option affects binary logging in a manner similar to the way that --replicate-do-db affects replication.

    The effects of this option depend on whether the statement-based or row-based logging format is in use, in the same way that the effects of --replicate-do-db depend on whether statement-based or row-based replication is in use. You should keep in mind that the format used to log a given statement may not necessarily be the same as that indicated by the value of binlog_format. For example, DDL statements such as CREATE TABLE and ALTER TABLE are always logged as statements, without regard to the logging format in effect, so the following statement-based rules for --binlog-do-db always apply in determining whether or not the statement is logged.

    Statement-based logging.  Only those statements are written to the binary log where the default database (that is, the one selected by USE) is db_name. To specify more than one database, use this option multiple times, once for each database; however, doing so does not cause cross-database statements such as UPDATE some_db.some_table SET foo='bar' to be logged while a different database (or no database) is selected.

    Warning

    To specify multiple databases you must use multiple instances of this option. Because database names can contain commas, the list is treated as the name of a single database if you supply a comma-separated list.

    An example of what does not work as you might expect when using statement-based logging: If the server is started with --binlog-do-db=sales and you issue the following statements, the UPDATE statement is not logged:

    USE prices;
    UPDATE sales.january SET amount=amount+1000;
    

    The main reason for this “check only the default database” behavior is that it is difficult from the statement alone to know whether it should be replicated (for example, if you are using multiple-table DELETE statements or multiple-table UPDATE statements that act across multiple databases). It is also faster to check only the default database rather than all databases if there is no need.

    Another case which may not be self-evident occurs when a given database is replicated even though it was not specified when setting the option. If the server is started with --binlog-do-db=sales, the following UPDATE statement is logged even though prices was not included when setting --binlog-do-db:

    USE sales;
    UPDATE prices.discounts SET percentage = percentage + 10;
    

    Because sales is the default database when the UPDATE statement is issued, the UPDATE is logged.

    Row-based logging.  Logging is restricted to database db_name. Only changes to tables belonging to db_name are logged; the default database has no effect on this. Suppose that the server is started with --binlog-do-db=sales and row-based logging is in effect, and then the following statements are executed:

    USE prices;
    UPDATE sales.february SET amount=amount+100;
    

    The changes to the february table in the sales database are logged in accordance with the UPDATE statement; this occurs whether or not the USE statement was issued. However, when using the row-based logging format and --binlog-do-db=sales, changes made by the following UPDATE are not logged:

    USE prices;
    UPDATE prices.march SET amount=amount-25;
    

    Even if the USE prices statement were changed to USE sales, the UPDATE statement's effects would still not be written to the binary log.

    Another important difference in --binlog-do-db handling for statement-based logging as opposed to the row-based logging occurs with regard to statements that refer to multiple databases. Suppose that the server is started with --binlog-do-db=db1, and the following statements are executed:

    USE db1;
    UPDATE db1.table1, db2.table2 SET db1.table1.col1 = 10, db2.table2.col2 = 20;
    

    If you are using statement-based logging, the updates to both tables are written to the binary log. However, when using the row-based format, only the changes to table1 are logged; because table2 is in a different database, its changes are not logged by the UPDATE. Now suppose that, instead of the USE db1 statement, a USE db4 statement had been used:

    USE db4;
    UPDATE db1.table1, db2.table2 SET db1.table1.col1 = 10, db2.table2.col2 = 20;
    

    In this case, the UPDATE statement is not written to the binary log when using statement-based logging. However, when using row-based logging, the change to table1 is logged, but not that to table2—in other words, only changes to tables in the database named by --binlog-do-db are logged, and the choice of default database has no effect on this behavior.

  • --binlog-ignore-db=db_name

    Command-Line Format --binlog-ignore-db=name
    Type String

    This option affects binary logging in a manner similar to the way that --replicate-ignore-db affects replication.

    The effects of this option depend on whether the statement-based or row-based logging format is in use, in the same way that the effects of --replicate-ignore-db depend on whether statement-based or row-based replication is in use. You should keep in mind that the format used to log a given statement may not necessarily be the same as that indicated by the value of binlog_format. For example, DDL statements such as CREATE TABLE and ALTER TABLE are always logged as statements, without regard to the logging format in effect, so the following statement-based rules for --binlog-ignore-db always apply in determining whether or not the statement is logged.

    Statement-based logging.  Tells the server to not log any statement where the default database (that is, the one selected by USE) is db_name.

    Prior to MySQL 5.7.2, this option caused any statements containing fully qualified table names not to be logged if there was no default database specified (that is, when SELECT DATABASE() returned NULL). In MySQL 5.7.2 and higher, when there is no default database, no --binlog-ignore-db options are applied, and such statements are always logged. (Bug #11829838, Bug #60188)

    Row-based format.  Tells the server not to log updates to any tables in the database db_name. The current database has no effect.

    When using statement-based logging, the following example does not work as you might expect. Suppose that the server is started with --binlog-ignore-db=sales and you issue the following statements:

    USE prices;
    UPDATE sales.january SET amount=amount+1000;
    

    The UPDATE statement is logged in such a case because --binlog-ignore-db applies only to the default database (determined by the USE statement). Because the sales database was specified explicitly in the statement, the statement has not been filtered. However, when using row-based logging, the UPDATE statement's effects are not written to the binary log, which means that no changes to the sales.january table are logged; in this instance, --binlog-ignore-db=sales causes all changes made to tables in the source's copy of the sales database to be ignored for purposes of binary logging.

    To specify more than one database to ignore, use this option multiple times, once for each database. Because database names can contain commas, the list is treated as the name of a single database if you supply a comma-separated list.

    You should not use this option if you are using cross-database updates and you do not want these updates to be logged.

Checksum options.  MySQL supports reading and writing of binary log checksums. These are enabled using the two options listed here:

  • --binlog-checksum={NONE|CRC32}

    Command-Line Format --binlog-checksum=type
    Type String
    Default Value CRC32
    Valid Values

    NONE

    CRC32

    Enabling this option causes the source to write checksums for events written to the binary log. Set to NONE to disable, or the name of the algorithm to be used for generating checksums; currently, only CRC32 checksums are supported, and CRC32 is the default. You cannot change the setting for this option within a transaction.

To control reading of checksums by the replica (from the relay log), use the --slave-sql-verify-checksum option.

Testing and debugging options.  The following binary log options are used in replication testing and debugging. They are not intended for use in normal operations.

  • --max-binlog-dump-events=N

    Command-Line Format --max-binlog-dump-events=#
    Type Integer
    Default Value 0

    This option is used internally by the MySQL test suite for replication testing and debugging.

  • --sporadic-binlog-dump-fail

    Command-Line Format --sporadic-binlog-dump-fail[={OFF|ON}]
    Type Boolean
    Default Value OFF

    This option is used internally by the MySQL test suite for replication testing and debugging.

System Variables Used with Binary Logging

The following list describes system variables for controlling binary logging. They can be set at server startup and some of them can be changed at runtime using SET. Server options used to control binary logging are listed earlier in this section.

  • binlog_cache_size

    Command-Line Format --binlog-cache-size=#
    System Variable binlog_cache_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 32768
    Minimum Value 4096
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    The size of the cache to hold changes to the binary log during a transaction. A binary log cache is allocated for each client if the server supports any transactional storage engines and if the server has the binary log enabled (--log-bin option). If you often use large transactions, you can increase this cache size to get better performance. The Binlog_cache_use and Binlog_cache_disk_use status variables can be useful for tuning the size of this variable. See Section 5.4.4, “The Binary Log”.

    binlog_cache_size sets the size for the transaction cache only; the size of the statement cache is governed by the binlog_stmt_cache_size system variable.
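
    For example, a simple tuning check: if Binlog_cache_disk_use grows relative to Binlog_cache_use, the cache may be too small and can be enlarged at runtime (the value shown is illustrative):

    SHOW GLOBAL STATUS LIKE 'Binlog_cache%';
    SET GLOBAL binlog_cache_size = 65536;  -- example value; affects sessions started after the change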

  • binlog_checksum

    Command-Line Format --binlog-checksum=name
    System Variable binlog_checksum
    Scope Global
    Dynamic Yes
    Type String
    Default Value CRC32
    Valid Values

    NONE

    CRC32

    When enabled, this variable causes the source to write a checksum for each event in the binary log. binlog_checksum supports the values NONE (disabled) and CRC32. The default is CRC32. You cannot change the value of binlog_checksum within a transaction.

    When binlog_checksum is disabled (value NONE), the server verifies that it is writing only complete events to the binary log by writing and checking the event length (rather than a checksum) for each event.

    Changing the value of this variable causes the binary log to be rotated; checksums are always written to an entire binary log file, and never to only part of one.

    Setting this variable on the source to a value unrecognized by the replica causes the replica to set its own binlog_checksum value to NONE, and to stop replication with an error. (Bug #13553750, Bug #61096) If backward compatibility with older replicas is a concern, you may want to set the value explicitly to NONE.

  • binlog_direct_non_transactional_updates

    Command-Line Format --binlog-direct-non-transactional-updates[={OFF|ON}]
    System Variable binlog_direct_non_transactional_updates
    Scope Global, Session
    Dynamic Yes
    Type Boolean
    Default Value OFF

    Due to concurrency issues, a replica can become inconsistent when a transaction contains updates to both transactional and nontransactional tables. MySQL tries to preserve causality among these statements by writing nontransactional statements to the transaction cache, which is flushed upon commit. However, problems arise when modifications done to nontransactional tables on behalf of a transaction become immediately visible to other connections because these changes may not be written immediately into the binary log.

    The binlog_direct_non_transactional_updates variable offers one possible workaround to this issue. By default, this variable is disabled. Enabling binlog_direct_non_transactional_updates causes updates to nontransactional tables to be written directly to the binary log, rather than to the transaction cache.

    binlog_direct_non_transactional_updates works only for statements that are replicated using the statement-based binary logging format; that is, it works only when the value of binlog_format is STATEMENT, or when binlog_format is MIXED and a given statement is being replicated using the statement-based format. This variable has no effect when the binary log format is ROW, or when binlog_format is set to MIXED and a given statement is replicated using the row-based format.

    Important

    Before enabling this variable, you must make certain that there are no dependencies between transactional and nontransactional tables; an example of such a dependency would be the statement INSERT INTO myisam_table SELECT * FROM innodb_table. Otherwise, such statements are likely to cause the replica to diverge from the source.

    As noted above, this variable has no effect when the binary log format is ROW, or for statements that are logged using the row-based format when the format is MIXED.

  • binlog_error_action

    Command-Line Format --binlog-error-action[=value]
    System Variable binlog_error_action
    Scope Global
    Dynamic Yes
    Type Enumeration
    Default Value ABORT_SERVER
    Valid Values

    IGNORE_ERROR

    ABORT_SERVER

    Controls what happens when the server encounters an error such as not being able to write to, flush or synchronize the binary log, which can cause the source's binary log to become inconsistent and replicas to lose synchronization.

    In MySQL 5.7.7 and higher, this variable defaults to ABORT_SERVER, which makes the server halt logging and shut down whenever it encounters such an error with the binary log. On restart, recovery proceeds as in the case of an unexpected server halt (see Section 16.3.2, “Handling an Unexpected Halt of a Replica”).

    When binlog_error_action is set to IGNORE_ERROR, if the server encounters such an error, it continues the ongoing transaction, logs the error, then halts logging, and continues performing updates. To resume binary logging, log_bin must be enabled again, which requires a server restart. This setting provides backward compatibility with older versions of MySQL.

    In previous releases this variable was named binlogging_impossible_mode.

  • binlog_format

    Command-Line Format --binlog-format=format
    System Variable binlog_format
    Scope Global, Session
    Dynamic Yes
    Type Enumeration
    Default Value ROW
    Valid Values

    ROW

    STATEMENT

    MIXED

    This variable sets the binary logging format, and can be any one of STATEMENT, ROW, or MIXED. See Section 16.2.1, “Replication Formats”.

    binlog_format can be set at startup or at runtime, except that under some conditions, changing this variable at runtime is not possible or causes replication to fail, as described later.

    Prior to MySQL 5.7.7, the default format was STATEMENT. In MySQL 5.7.7 and higher, the default is ROW. Exception: In NDB Cluster, the default is MIXED; statement-based replication is not supported for NDB Cluster.

    Setting the session value of this system variable is a restricted operation. The session user must have privileges sufficient to set restricted session variables. See Section 5.1.8.1, “System Variable Privileges”.

    The rules governing when changes to this variable take effect and how long the effect lasts are the same as for other MySQL server system variables. For more information, see Section 13.7.4.1, “SET Syntax for Variable Assignment”.

    When MIXED is specified, statement-based replication is used, except for cases where only row-based replication is guaranteed to lead to proper results. For example, this happens when statements contain user-defined functions (UDF) or the UUID() function.

    For details of how stored programs (stored procedures and functions, triggers, and events) are handled when each binary logging format is set, see Section 22.7, “Stored Program Binary Logging”.

    There are exceptions when you cannot switch the replication format at runtime:

    • From within a stored function or a trigger.

    • If the session is currently in row-based replication mode and has open temporary tables.

    • From within a transaction.

    Trying to switch the format in those cases results in an error.

    Changing the logging format on a replication source server does not cause a replica to change its logging format to match. Switching the replication format while replication is ongoing can cause issues if a replica has binary logging enabled, and the change results in the replica using STATEMENT format logging while the source is using ROW or MIXED format logging. A replica is not able to convert binary log entries received in ROW logging format to STATEMENT format for use in its own binary log, so this situation can cause replication to fail. For more information, see Section 5.4.4.2, “Setting The Binary Log Format”.

    The binary log format affects the behavior of the following server options:

    • --replicate-do-db

    • --replicate-ignore-db

    • --binlog-do-db

    • --binlog-ignore-db

    These effects are discussed in detail in the descriptions of the individual options.
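
    For example, when none of the restrictions listed above apply, the format can be changed at runtime (setting the session value requires the privileges noted above):

    SET GLOBAL binlog_format = 'ROW';        -- applies to sessions that connect afterward
    SET SESSION binlog_format = 'STATEMENT'; -- current session only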

  • binlog_group_commit_sync_delay

    Command-Line Format --binlog-group-commit-sync-delay=#
    System Variable binlog_group_commit_sync_delay
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 1000000

    Controls how many microseconds the binary log commit waits before synchronizing the binary log file to disk. By default binlog_group_commit_sync_delay is set to 0, meaning that there is no delay. Setting binlog_group_commit_sync_delay to a microsecond delay enables more transactions to be synchronized together to disk at once, reducing the overall time to commit a group of transactions because the larger groups require fewer time units per group.

    When sync_binlog=0 or sync_binlog=1 is set, the delay specified by binlog_group_commit_sync_delay is applied for every binary log commit group before synchronization (or in the case of sync_binlog=0, before proceeding). When sync_binlog is set to a value n greater than 1, the delay is applied after every n binary log commit groups.

    Setting binlog_group_commit_sync_delay can increase the number of parallel committing transactions on any server that has (or might have after a failover) a replica, and therefore can increase parallel execution on the replicas. To benefit from this effect, the replica servers must have slave_parallel_type=LOGICAL_CLOCK set, and the effect is more significant when binlog_transaction_dependency_tracking=COMMIT_ORDER is also set. It is important to take into account both the source's throughput and the replicas' throughput when you are tuning the setting for binlog_group_commit_sync_delay.

    Setting binlog_group_commit_sync_delay can also reduce the number of fsync() calls to the binary log on any server (source or replica) that has a binary log.

    Note that setting binlog_group_commit_sync_delay increases the latency of transactions on the server, which might affect client applications. Also, on highly concurrent workloads, it is possible for the delay to increase contention and therefore reduce throughput. Typically, the benefits of setting a delay outweigh the drawbacks, but tuning should always be carried out to determine the optimal setting.

  • binlog_group_commit_sync_no_delay_count

    Command-Line Format --binlog-group-commit-sync-no-delay-count=#
    System Variable binlog_group_commit_sync_no_delay_count
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 1000000

    The maximum number of transactions to wait for before aborting the current delay as specified by binlog_group_commit_sync_delay. If binlog_group_commit_sync_delay is set to 0, then this option has no effect.
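
    The two group commit variables are typically tuned together; the values in the following illustration are examples only, not recommendations:

    SET GLOBAL binlog_group_commit_sync_delay = 20;           -- wait up to 20 microseconds (example)
    SET GLOBAL binlog_group_commit_sync_no_delay_count = 10;  -- or until 10 transactions are waiting (example)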

  • binlog_max_flush_queue_time

    Command-Line Format --binlog-max-flush-queue-time=#
    Deprecated Yes
    System Variable binlog_max_flush_queue_time
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 100000

    Formerly, this controlled the time in microseconds to continue reading transactions from the flush queue before proceeding with group commit. In MySQL 5.7, this variable no longer has any effect.

    binlog_max_flush_queue_time is deprecated as of MySQL 5.7.9, and is marked for eventual removal in a future MySQL release.

  • binlog_order_commits

    Command-Line Format --binlog-order-commits[={OFF|ON}]
    System Variable binlog_order_commits
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value ON

    When this variable is enabled on a replication source server (which is the default), transaction commit instructions issued to storage engines are serialized on a single thread, so that transactions are always committed in the same order as they are written to the binary log. Disabling this variable permits transaction commit instructions to be issued using multiple threads. Used in combination with binary log group commit, this prevents the commit rate of a single transaction being a bottleneck to throughput, and might therefore produce a performance improvement.

    Transactions are written to the binary log at the point when all the storage engines involved have confirmed that the transaction is prepared to commit. The binary log group commit logic then commits a group of transactions after their binary log write has taken place. When binlog_order_commits is disabled, because multiple threads are used for this process, transactions in a commit group might be committed in a different order from their order in the binary log. (Transactions from a single client always commit in chronological order.) In many cases this does not matter, as operations carried out in separate transactions should produce consistent results, and if that is not the case, a single transaction ought to be used instead.

    If you want to ensure that the transaction history on the source and on a multithreaded replica remains identical, set slave_preserve_commit_order=1 on the replica.

  • binlog_row_image

    Command-Line Format --binlog-row-image=image_type
    System Variable binlog_row_image
    Scope Global, Session
    Dynamic Yes
    Type Enumeration
    Default Value full
    Valid Values

    full (Log all columns)

    minimal (Log only changed columns, and columns needed to identify rows)

    noblob (Log all columns, except for unneeded BLOB and TEXT columns)

    For MySQL row-based replication, this variable determines how row images are written to the binary log.

    In MySQL row-based replication, each row change event contains two images, a before image whose columns are matched against when searching for the row to be updated, and an after image containing the changes. Normally, MySQL logs full rows (that is, all columns) for both the before and after images. However, it is not strictly necessary to include every column in both images, and we can often save disk, memory, and network usage by logging only those columns which are actually required.

    Note

    When deleting a row, only the before image is logged, since there are no changed values to propagate following the deletion. When inserting a row, only the after image is logged, since there is no existing row to be matched. Only when updating a row are both the before and after images required, and both written to the binary log.

    For the before image, it is necessary only that the minimum set of columns required to uniquely identify rows is logged. If the table containing the row has a primary key, then only the primary key column or columns are written to the binary log. Otherwise, if the table has a unique key all of whose columns are NOT NULL, then only the columns in the unique key need be logged. (If the table has neither a primary key nor a unique key without any NULL columns, then all columns must be used in the before image, and logged.) In the after image, it is necessary to log only the columns which have actually changed.

    You can cause the server to log full or minimal rows using the binlog_row_image system variable. This variable actually takes one of three possible values, as shown in the following list:

    • full: Log all columns in both the before image and the after image.

    • minimal: Log only those columns in the before image that are required to identify the row to be changed; log only those columns in the after image where a value was specified by the SQL statement, or generated by auto-increment.

    • noblob: Log all columns (same as full), except for BLOB and TEXT columns that are not required to identify rows, or that have not changed.

    Note

    This variable is not supported by NDB Cluster; setting it has no effect on the logging of NDB tables.

    The default value is full.

    When using minimal or noblob, deletes and updates are guaranteed to work correctly for a given table if and only if the following conditions are true for both the source and destination tables:

    • All columns must be present and in the same order; each column must use the same data type as its counterpart in the other table.

    • The tables must have identical primary key definitions.

    (In other words, the tables must be identical with the possible exception of indexes that are not part of the tables' primary keys.)

    If these conditions are not met, it is possible that the primary key column values in the destination table may prove insufficient to provide a unique match for a delete or update. In this event, no warning or error is issued; the source and replica silently diverge, thus breaking consistency.

    Setting this variable has no effect when the binary logging format is STATEMENT. When binlog_format is MIXED, the setting for binlog_row_image is applied to changes that are logged using row-based format, but this setting has no effect on changes logged as statements.

    Setting binlog_row_image on either the global or session level does not cause an implicit commit; this means that this variable can be changed while a transaction is in progress without affecting the transaction.
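
    For example, provided that the conditions described above hold for the source and replica tables, row image logging can be reduced at runtime (illustration only):

    SET GLOBAL binlog_row_image = 'minimal';   -- applies to sessions that connect afterward
    SET SESSION binlog_row_image = 'minimal';  -- current session; causes no implicit commit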

  • binlog_rows_query_log_events

    Command-Line Format --binlog-rows-query-log-events[={OFF|ON}]
    System Variable binlog_rows_query_log_events
    Scope Global, Session
    Dynamic Yes
    Type Boolean
    Default Value OFF

    This system variable affects row-based logging only. When enabled, it causes the server to write informational log events such as row query log events into its binary log. This information can be used for debugging and related purposes, such as obtaining the original query issued on the source when it cannot be reconstructed from the row updates.

    These informational events are normally ignored by MySQL programs reading the binary log and so cause no issues when replicating or restoring from backup. To view them, increase the verbosity level by using mysqlbinlog's --verbose option twice, either as -vv or --verbose --verbose.
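
    For example, to enable these informational events (illustration only); they can then be viewed with mysqlbinlog -vv as described above:

    SET GLOBAL binlog_rows_query_log_events = ON;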

  • binlog_stmt_cache_size

    Command-Line Format --binlog-stmt-cache-size=#
    System Variable binlog_stmt_cache_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 32768
    Minimum Value 4096
    Maximum Value (64-bit platforms) 18446744073709551615
    Maximum Value (32-bit platforms) 4294967295

    This variable determines the size of the cache for the binary log to hold nontransactional statements issued during a transaction. Separate binary log transaction and statement caches are allocated for each client if the server supports any transactional storage engines and if the server has the binary log enabled (--log-bin option). If you often use large nontransactional statements during transactions, you can increase this cache size to get better performance. The Binlog_stmt_cache_use and Binlog_stmt_cache_disk_use status variables can be useful for tuning the size of this variable. See Section 5.4.4, “The Binary Log”.

    The binlog_cache_size system variable sets the size for the transaction cache.

  • binlog_transaction_dependency_tracking

    Command-Line Format --binlog-transaction-dependency-tracking=value
    Introduced 5.7.22
    System Variable binlog_transaction_dependency_tracking
    Scope Global
    Dynamic Yes
    Type Enumeration
    Default Value COMMIT_ORDER
    Valid Values

    COMMIT_ORDER

    WRITESET

    WRITESET_SESSION

    The source of dependency information that the source uses to determine which transactions can be executed in parallel by the replica's multithreaded applier. This variable can take one of the three values described in the following list:

    • COMMIT_ORDER: Dependency information is generated from the source's commit timestamps. This is the default. This mode is also used for any transactions without write sets, even if this variable's value is WRITESET or WRITESET_SESSION; this is also the case for transactions updating tables without primary keys and transactions updating tables having foreign key constraints.

    • WRITESET: Dependency information is generated from the source's write set, and any transactions which write different tuples can be parallelized.

    • WRITESET_SESSION: Dependency information is generated from the source's write set, but no two updates from the same session can be reordered.

    WRITESET and WRITESET_SESSION modes do not deliver any transaction dependencies that are newer than those that would have been returned in COMMIT_ORDER mode.

    The value of this variable cannot be set to anything other than COMMIT_ORDER if transaction_write_set_extraction is OFF. You should also note that the value of transaction_write_set_extraction cannot be changed if the current value of binlog_transaction_dependency_tracking is WRITESET or WRITESET_SESSION.

    The number of row hashes to be kept and checked for the latest transaction to have changed a given row is determined by the value of binlog_transaction_dependency_history_size.

  • binlog_transaction_dependency_history_size

    Command-Line Format --binlog-transaction-dependency-history-size=#
    Introduced 5.7.22
    System Variable binlog_transaction_dependency_history_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 25000
    Minimum Value 1
    Maximum Value 1000000

    Sets an upper limit on the number of row hashes which are kept in memory and used for looking up the transaction that last modified a given row. Once this number of hashes has been reached, the history is purged.

  • expire_logs_days

    Command-Line Format --expire-logs-days=#
    System Variable expire_logs_days
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 0
    Minimum Value 0
    Maximum Value 99

    The number of days for automatic binary log file removal. The default is 0, which means no automatic removal. Possible removals happen at startup and when the binary log is flushed. Log flushing occurs as indicated in Section 5.4, “MySQL Server Logs”.

    To remove binary log files manually, use the PURGE BINARY LOGS statement. See Section 13.4.1.1, “PURGE BINARY LOGS Statement”.

  • log_bin

    System Variable log_bin
    Scope Global
    Dynamic No
    Type Boolean

    Whether the binary log is enabled. If the --log-bin option is used, then the value of this variable is ON; otherwise it is OFF. This variable reports only on the status of binary logging (enabled or disabled); it does not actually report the value to which --log-bin is set.

    See Section 5.4.4, “The Binary Log”.

  • log_bin_basename

    System Variable log_bin_basename
    Scope Global
    Dynamic No
    Type File name

    Holds the base name and path for the binary log files, which can be set with the --log-bin server option. The maximum variable length is 256. In MySQL 5.7, the default base name is the name of the host machine with the suffix -bin. The default location is the data directory.

  • log_bin_index

    Command-Line Format --log-bin-index=file_name
    System Variable log_bin_index
    Scope Global
    Dynamic No
    Type File name

    Holds the base name and path for the binary log index file, which can be set with the --log-bin-index server option. The maximum variable length is 256.

  • log_bin_trust_function_creators

    Command-Line Format --log-bin-trust-function-creators[={OFF|ON}]
    System Variable log_bin_trust_function_creators
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    This variable applies when binary logging is enabled. It controls whether stored function creators can be trusted not to create stored functions that cause unsafe events to be written to the binary log. If set to 0 (the default), users are not permitted to create or alter stored functions unless they have the SUPER privilege in addition to the CREATE ROUTINE or ALTER ROUTINE privilege. A setting of 0 also enforces the restriction that a function must be declared with the DETERMINISTIC characteristic, or with the READS SQL DATA or NO SQL characteristic. If the variable is set to 1, MySQL does not enforce these restrictions on stored function creation. This variable also applies to trigger creation. See Section 22.7, “Stored Program Binary Logging”.

  • log_bin_use_v1_row_events

    Command-Line Format --log-bin-use-v1-row-events[={OFF|ON}]
    System Variable log_bin_use_v1_row_events
    Scope Global
    Dynamic No
    Type Boolean
    Default Value OFF

    Whether the server writes Version 1 binary log row events instead of the Version 2 events used by default in MySQL 5.7. If this variable is 0 (disabled, the default), Version 2 binary log events are in use. If this variable is 1 (enabled), the server writes the binary log using Version 1 logging events (the only version of binary log events used in previous releases), and thus produces a binary log that can be read by older replicas.

    MySQL 5.7 uses Version 2 binary log row events by default. However, Version 2 events cannot be read by MySQL Server releases prior to MySQL 5.6.6. Enabling log_bin_use_v1_row_events causes mysqld to write the binary log using Version 1 logging events.

    This variable is read-only at runtime. To switch between Version 1 and Version 2 binary event binary logging, it is necessary to set log_bin_use_v1_row_events at server startup.

    Other than when performing upgrades of NDB Cluster Replication, log_bin_use_v1_row_events is chiefly of interest when setting up replication conflict detection and resolution using NDB$EPOCH_TRANS() as the conflict detection function, which requires Version 2 binary log row events. Thus, this variable and --ndb-log-transaction-id are not compatible.

    Note

    MySQL NDB Cluster 7.5 uses Version 2 binary log row events by default. You should keep this in mind when planning upgrades or downgrades, and for setups using NDB Cluster Replication.

    For more information, see Section 20.6.11, “NDB Cluster Replication Conflict Resolution”.

  • log_builtin_as_identified_by_password

    Command-Line Format --log-builtin-as-identified-by-password[={OFF|ON}]
    System Variable log_builtin_as_identified_by_password
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    This variable affects binary logging of user-management statements. When enabled, the variable has the following effects:

    • Binary logging for CREATE USER statements involving built-in authentication plugins rewrites the statements to include an IDENTIFIED BY PASSWORD clause.

    • SET PASSWORD statements are logged as SET PASSWORD statements, rather than being rewritten to ALTER USER statements.

    • SET PASSWORD statements are changed to log the hash of the password instead of the supplied cleartext (unencrypted) password.

    Enabling this variable ensures better compatibility for cross-version replication with 5.6 and pre-5.7.6 replicas, and for applications that expect this syntax in the binary log.

  • log_slave_updates

    Command-Line Format --log-slave-updates[={OFF|ON}]
    System Variable log_slave_updates
    Scope Global
    Dynamic No
    Type Boolean
    Default Value OFF

    Whether updates received by a replica server from a source server should be logged to the replica's own binary log.

    Normally, a replica does not log to its own binary log any updates that are received from a source server. Enabling this variable causes the replica to write the updates performed by its replication SQL thread to its own binary log. For this option to have any effect, the replica must also be started with the --log-bin option to enable binary logging. See Section 16.1.6, “Replication and Binary Logging Options and Variables”.

    log_slave_updates is enabled when you want to chain replication servers. For example, you might want to set up replication servers using this arrangement:

    A -> B -> C
    

    Here, A serves as the source for the replica B, and B serves as the source for the replica C. For this to work, B must be both a source and a replica. You must start both A and B with --log-bin to enable binary logging, and B with log_slave_updates enabled so that updates received from A are logged by B to its binary log.

  • log_statements_unsafe_for_binlog

    Command-Line Format --log-statements-unsafe-for-binlog[={OFF|ON}]
    Introduced 5.7.11
    System Variable log_statements_unsafe_for_binlog
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value ON

    Controls whether the warnings generated when a statement is unsafe for statement-based logging (error 1592) are written to the error log.

  • master_verify_checksum

    Command-Line Format --master-verify-checksum[={OFF|ON}]
    System Variable master_verify_checksum
    Scope Global
    Dynamic Yes
    Type Boolean
    Default Value OFF

    Enabling this variable causes the source to verify events read from the binary log by examining checksums, and to stop with an error in the event of a mismatch. master_verify_checksum is disabled by default; in this case, the source uses the event length from the binary log to verify events, so that only complete events are read from the binary log.

  • max_binlog_cache_size

    Command-Line Format --max-binlog-cache-size=#
    System Variable max_binlog_cache_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 18446744073709551615
    Minimum Value 4096
    Maximum Value 18446744073709551615

    If a transaction requires more than this many bytes of memory, the server generates a Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage error. The minimum value is 4096. The maximum possible value is 16EB (exabytes). The maximum recommended value is 4GB, because MySQL currently cannot work with binary log positions greater than 4GB.

    max_binlog_cache_size sets the size for the transaction cache only; the upper limit for the statement cache is governed by the max_binlog_stmt_cache_size system variable.

    The visibility to sessions of max_binlog_cache_size matches that of the binlog_cache_size system variable; in other words, changing its value affects only new sessions that are started after the value is changed.

  • max_binlog_size

    Command-Line Format --max-binlog-size=#
    System Variable max_binlog_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 1073741824
    Minimum Value 4096
    Maximum Value 1073741824

    If a write to the binary log causes the current log file size to exceed the value of this variable, the server rotates the binary logs (closes the current file and opens the next one). The minimum value is 4096 bytes. The maximum and default value is 1GB.

    A transaction is written in one chunk to the binary log, so it is never split between several binary logs. Therefore, if you have big transactions, you might see binary log files larger than max_binlog_size.

    If max_relay_log_size is 0, the value of max_binlog_size applies to relay logs as well.

  • max_binlog_stmt_cache_size

    Command-Line Format --max-binlog-stmt-cache-size=#
    System Variable max_binlog_stmt_cache_size
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 18446744073709547520
    Minimum Value 4096
    Maximum Value 18446744073709547520

    If nontransactional statements within a transaction require more than this many bytes of memory, the server generates an error. The minimum value is 4096. The maximum and default values are 4GB on 32-bit platforms and 16EB (exabytes) on 64-bit platforms.

    max_binlog_stmt_cache_size sets the size for the statement cache only; the upper limit for the transaction cache is governed exclusively by the max_binlog_cache_size system variable.

  • sql_log_bin

    System Variable sql_log_bin
    Scope Session
    Dynamic Yes
    Type Boolean
    Default Value ON

    This variable controls whether logging to the binary log is enabled for the current session (assuming that the binary log itself is enabled). The default value is ON. To disable or enable binary logging for the current session, set the session sql_log_bin variable to OFF or ON.

    Set this variable to OFF for a session to temporarily disable binary logging while making changes to the source you do not want replicated to the replica.
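
    For example, the following sequence applies a change on the source without writing it to the binary log, so the change is not replicated (the table name here is purely illustrative):

    SET @@SESSION.sql_log_bin = OFF;
    DELETE FROM test.maintenance_log WHERE created < NOW() - INTERVAL 30 DAY;
    SET @@SESSION.sql_log_bin = ON;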

    Setting the session value of this system variable is a restricted operation. The session user must have privileges sufficient to set restricted session variables. See Section 5.1.8.1, “System Variable Privileges”.

    It is not possible to set the session value of sql_log_bin within a transaction or subquery.

    Setting this variable to OFF prevents GTIDs from being assigned to transactions in the binary log. If you are using GTIDs for replication, this means that even when binary logging is later enabled again, the GTIDs written into the log from this point do not account for any transactions that occurred in the meantime, so in effect those transactions are lost.

    The global sql_log_bin variable is read-only and cannot be modified. The global scope is deprecated and is expected to be removed in a future MySQL release.

  • sync_binlog

    Command-Line Format --sync-binlog=#
    System Variable sync_binlog
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 1
    Minimum Value 0
    Maximum Value 4294967295

    Controls how often the MySQL server synchronizes the binary log to disk.

    • sync_binlog=0: Disables synchronization of the binary log to disk by the MySQL server. Instead, the MySQL server relies on the operating system to flush the binary log to disk from time to time as it does for any other file. This setting provides the best performance, but in the event of a power failure or operating system crash, it is possible that the server has committed transactions that have not been synchronized to the binary log.

    • sync_binlog=1: Enables synchronization of the binary log to disk before transactions are committed. This is the safest setting but can have a negative impact on performance due to the increased number of disk writes. In the event of a power failure or operating system crash, transactions that are missing from the binary log are only in a prepared state. This permits the automatic recovery routine to roll back the transactions, which guarantees that no transaction is lost from the binary log.

    • sync_binlog=N, where N is a value other than 0 or 1: The binary log is synchronized to disk after N binary log commit groups have been collected. In the event of a power failure or operating system crash, it is possible that the server has committed transactions that have not been flushed to the binary log. This setting can have a negative impact on performance due to the increased number of disk writes. A higher value improves performance, but with an increased risk of data loss.

    For the greatest possible durability and consistency in a replication setup that uses InnoDB with transactions, use these settings:

    • sync_binlog=1.

    • innodb_flush_log_at_trx_commit=1.

    Caution

    Many operating systems and some disk hardware fool the flush-to-disk operation. They may tell mysqld that the flush has taken place, even though it has not. In this case, the durability of transactions is not guaranteed even with the recommended settings, and in the worst case, a power outage can corrupt InnoDB data. Using a battery-backed disk cache in the SCSI disk controller or in the disk itself speeds up file flushes, and makes the operation safer. You can also try to disable the caching of disk writes in hardware caches.

  • transaction_write_set_extraction

    Command-Line Format --transaction-write-set-extraction[=value]
    System Variable transaction_write_set_extraction
    Scope Global, Session
    Dynamic Yes
    Type Enumeration
    Default Value OFF
    Valid Values (≥ 5.7.14)

    OFF

    MURMUR32

    XXHASH64

    Valid Values (≤ 5.7.13)

    OFF

    MURMUR32

    Defines the algorithm used to generate a hash identifying the writes associated with a transaction. If you are using Group Replication, the hash value is used for distributed conflict detection and handling. On 64-bit systems running Group Replication, we recommend setting this to XXHASH64 in order to avoid unnecessary hash collisions which result in certification failures and the roll back of user transactions. See Section 17.7.1, “Group Replication Requirements”.

    Note

    The value of this variable cannot be changed when binlog_transaction_dependency_tracking is set to either WRITESET or WRITESET_SESSION.

    binlog_format must be set to ROW to change the value of this variable.
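
    For example, on a 64-bit server that is to take part in Group Replication, the recommendation above might be applied in the option file as follows (a minimal sketch; the remaining Group Replication configuration is omitted):

    [mysqld]
    binlog_format                    = ROW
    transaction_write_set_extraction = XXHASH64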

16.1.6.5 Global Transaction ID System Variables

The MySQL Server system variables described in this section are used to monitor and control Global Transaction Identifiers (GTIDs). For additional information, see Section 16.1.3, “Replication with Global Transaction Identifiers”.

  • binlog_gtid_simple_recovery

    Command-Line Format --binlog-gtid-simple-recovery[={OFF|ON}]
    System Variable binlog_gtid_simple_recovery
    Scope Global
    Dynamic No
    Type Boolean
    Default Value ON

    This variable controls how binary log files are iterated during the search for GTIDs when MySQL starts or restarts.

    When binlog_gtid_simple_recovery=TRUE, which is the default, the values of gtid_executed and gtid_purged are computed at startup based on the values of Previous_gtids_log_event in the most recent and oldest binary log files. For a description of the computation, see The gtid_purged System Variable. This setting accesses only two binary log files during server restart. If all binary logs on the server were generated using MySQL 5.7.8 or later and you are using MySQL 5.7.8 or later, binlog_gtid_simple_recovery=TRUE can always safely be used.

    With binlog_gtid_simple_recovery=TRUE, gtid_executed and gtid_purged might be initialized incorrectly in the following two situations:

    • The newest binary log was generated by MySQL 5.7.5 or earlier, and gtid_mode was ON for some binary logs but OFF for the newest binary log.

    • A SET @@GLOBAL.gtid_purged statement was issued on MySQL 5.7.7 or earlier, and the binary log that was active at the time of the SET @@GLOBAL.gtid_purged statement has not yet been purged.

    If an incorrect GTID set is computed in either situation, it remains incorrect even if the server is later restarted with binlog_gtid_simple_recovery=FALSE. If either of these situations applies on the server, set binlog_gtid_simple_recovery=FALSE before starting or restarting the server. To check for the second situation on MySQL 5.7.7 or earlier, note down the name of the current binary log file (shown by SHOW MASTER STATUS) after issuing a SET @@GLOBAL.gtid_purged statement; if the server is restarted before that file has been purged, set binlog_gtid_simple_recovery=FALSE.

    When binlog_gtid_simple_recovery=FALSE is set, the method of computing gtid_executed and gtid_purged as described in The gtid_purged System Variable is changed to iterate the binary log files as follows:

    • Instead of using the value of Previous_gtids_log_event and GTID log events from the newest binary log file, the computation for gtid_executed iterates from the newest binary log file, and uses the value of Previous_gtids_log_event and any GTID log events from the first binary log file where it finds a Previous_gtids_log_event value. If the server's most recent binary log files do not have GTID log events, for example if gtid_mode=ON was used but the server was later changed to gtid_mode=OFF, this process can take a long time.

    • Instead of using the value of Previous_gtids_log_event from the oldest binary log file, the computation for gtid_purged iterates from the oldest binary log file, and uses the value of Previous_gtids_log_event from the first binary log file where it finds either a nonempty Previous_gtids_log_event value, or at least one GTID log event (indicating that the use of GTIDs starts at that point). If the server's older binary log files do not have GTID log events, for example if gtid_mode=ON was only set recently on the server, this process can take a long time.

    In MySQL version 5.7.5, this variable was added as simplified_binlog_gtid_recovery and in MySQL version 5.7.6 it was renamed to binlog_gtid_simple_recovery.

  • enforce_gtid_consistency

    Command-Line Format --enforce-gtid-consistency[=value]
    System Variable enforce_gtid_consistency
    Scope Global
    Dynamic Yes
    Type Enumeration
    Default Value OFF
    Valid Values

    OFF

    ON

    WARN

    Depending on the value of this variable, the server enforces GTID consistency by allowing execution of only statements that can be safely logged using a GTID. You must set this variable to ON before enabling GTID based replication.

    The values that enforce_gtid_consistency can be configured to are:

    • OFF: all transactions are allowed to violate GTID consistency.

    • ON: no transaction is allowed to violate GTID consistency.

    • WARN: all transactions are allowed to violate GTID consistency, but a warning is generated in this case. WARN was added in MySQL 5.7.6.

    Only statements that can be safely logged using a GTID can be logged when enforce_gtid_consistency is set to ON, so the operations listed here cannot be used with this option:

    • CREATE TABLE ... SELECT statements

    • CREATE TEMPORARY TABLE or DROP TEMPORARY TABLE statements inside transactions

    • Transactions or statements that update both transactional and nontransactional tables. As an exception, nontransactional DML is allowed in the same transaction or in the same statement as transactional DML, provided that all nontransactional tables affected are temporary tables.

    --enforce-gtid-consistency only takes effect if binary logging takes place for a statement. If binary logging is disabled on the server, or if statements are not written to the binary log because they are removed by a filter, GTID consistency is not checked or enforced for the statements that are not logged.

    For more information, see Section 16.1.3.6, “Restrictions on Replication with GTIDs”.

    Prior to MySQL 5.7.6, the boolean enforce_gtid_consistency defaulted to OFF. To maintain compatibility with previous releases, in MySQL 5.7.6 the enumeration defaults to OFF, and setting --enforce-gtid-consistency without a value is interpreted as setting the value to ON. The variable also has multiple textual aliases for the values: 0=OFF=FALSE, 1=ON=TRUE, 2=WARN. This differs from other enumeration types but maintains compatibility with the boolean type used in previous versions. These changes affect what is returned by the variable. Using SELECT @@ENFORCE_GTID_CONSISTENCY, SHOW VARIABLES LIKE 'ENFORCE_GTID_CONSISTENCY', and SELECT * FROM INFORMATION_SCHEMA.VARIABLES WHERE VARIABLE_NAME = 'ENFORCE_GTID_CONSISTENCY' all return the textual form, not the numeric form. This is an incompatible change, since @@ENFORCE_GTID_CONSISTENCY returns the numeric form for booleans but returns the textual form for SHOW and the Information Schema.
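
    For example, when preparing to enable GTID-based replication, you might first run with WARN to identify statements that violate GTID consistency, and only then switch to ON (a sketch of the dynamic settings only; see Section 16.1.4, “Changing Replication Modes on Online Servers”, for the complete procedure):

    SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = WARN;
    -- monitor the error log, correct any offending statements, then:
    SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;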

  • gtid_executed

    System Variable gtid_executed
    Scope (≥ 5.7.7) Global
    Scope (≤ 5.7.6) Global, Session
    Dynamic No
    Type String
    Unit set of GTIDs

    When used with global scope, this variable contains a representation of the set of all transactions executed on the server and GTIDs that have been set by a SET gtid_purged statement. This is the same as the value of the Executed_Gtid_Set column in the output of SHOW MASTER STATUS and SHOW SLAVE STATUS. The value of this variable is a GTID set; see GTID Sets for more information.

    When the server starts, @@GLOBAL.gtid_executed is initialized. See binlog_gtid_simple_recovery for more information on how binary logs are iterated to populate gtid_executed. GTIDs are then added to the set as transactions are executed, or if any SET gtid_purged statement is executed.

    The set of transactions that can be found in the binary logs at any given time is equal to GTID_SUBTRACT(@@GLOBAL.gtid_executed, @@GLOBAL.gtid_purged); that is, to all transactions in the binary log that have not yet been purged.
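
    For example, you can compute this set directly with a query along the following lines:

    SELECT GTID_SUBTRACT(@@GLOBAL.gtid_executed, @@GLOBAL.gtid_purged) AS gtids_in_binlog;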

    Issuing RESET MASTER causes the global value (but not the session value) of this variable to be reset to an empty string. GTIDs are not otherwise removed from this set; it is cleared only by RESET MASTER.

    Prior to MySQL 5.7.7, this variable could also be used with session scope, where it contained a representation of the set of transactions that are written to the cache in the current session. The session scope was deprecated in MySQL 5.7.7.

  • gtid_executed_compression_period

    Command-Line Format --gtid-executed-compression-period=#
    System Variable gtid_executed_compression_period
    Scope Global
    Dynamic Yes
    Type Integer
    Default Value 1000
    Minimum Value 0
    Maximum Value 4294967295

    Compress the mysql.gtid_executed table each time this many transactions have been processed. When binary logging is enabled on the server, this compression method is not used, and instead the mysql.gtid_executed table is compressed on each binary log rotation. When binary logging is disabled on the server, the compression thread sleeps until the specified number of transactions have been executed, then wakes up to perform compression of the mysql.gtid_executed table. Setting the value of this system variable to 0 means that the thread never wakes up, so this compression method is not used.

    See mysql.gtid_executed Table Compression for more information.

    This variable was added in MySQL version 5.7.5 as executed_gtids_compression_period and renamed in MySQL version 5.7.6 to gtid_executed_compression_period.

  • gtid_mode

    Command-Line Format --gtid-mode=MODE
    System Variable gtid_mode
    Scope Global
    Dynamic Yes
    Type Enumeration
    Default Value OFF
    Valid Values

    OFF

    OFF_PERMISSIVE

    ON_PERMISSIVE

    ON

    Controls whether GTID based logging is enabled and what type of transactions the logs can contain. Prior to MySQL 5.7.6, this variable was read-only and was set using --gtid-mode at server startup only. Prior to MySQL 5.7.5, starting the server with --gtid-mode=ON required that the server also be started with the --log-bin and --log-slave-updates options. As of MySQL 5.7.5, this is no longer a requirement. See mysql.gtid_executed Table.

    MySQL 5.7.6 enables this variable to be set dynamically. You must have privileges sufficient to set global system variables. See Section 5.1.8.1, “System Variable Privileges”. enforce_gtid_consistency must be true before you can set gtid_mode=ON. Before modifying this variable, see Section 16.1.4, “Changing Replication Modes on Online Servers”.

    Transactions logged in MySQL 5.7.6 and higher can be either anonymous or use GTIDs. Anonymous transactions rely on binary log file and position to identify specific transactions. GTID transactions have a unique identifier that is used to refer to transactions. The OFF_PERMISSIVE and ON_PERMISSIVE modes added in MySQL 5.7.6 permit a mix of these transaction types in the topology. The different modes are now:

    • OFF: Both new and replicated transactions must be anonymous.

    • OFF_PERMISSIVE: New transactions are anonymous. Replicated transactions can be either anonymous or GTID transactions.

    • ON_PERMISSIVE: New transactions are GTID transactions. Replicated transactions can be either anonymous or GTID transactions.

    • ON: Both new and replicated transactions must be GTID transactions.

    Changes from one value to another can only be one step at a time. For example, if gtid_mode is currently set to OFF_PERMISSIVE, it is possible to change to OFF or ON_PERMISSIVE but not to ON.
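
    For example, to take a server that currently has gtid_mode=OFF to gtid_mode=ON, change the value one step at a time, letting the whole topology catch up at each step (a simplified sketch; see Section 16.1.4, “Changing Replication Modes on Online Servers”, for the complete online procedure):

    SET @@GLOBAL.GTID_MODE = OFF_PERMISSIVE;
    SET @@GLOBAL.GTID_MODE = ON_PERMISSIVE;
    -- wait until the Ongoing_anonymous_transaction_count status variable is 0 on every server, then:
    SET @@GLOBAL.GTID_MODE = ON;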

    In MySQL 5.7.6 and higher, the values of gtid_purged and gtid_executed are persistent regardless of the value of gtid_mode. Therefore even after changing the value of gtid_mode, these variables contain the correct values. In MySQL 5.7.5 and earlier, the values of gtid_purged and gtid_executed are not persistent while gtid_mode=OFF. Therefore, after changing gtid_mode to OFF, once all binary logs containing GTIDs are purged, the values of these variables are lost.

  • gtid_next

    System Variable gtid_next
    Scope Session
    Dynamic Yes
    Type Enumeration
    Default Value AUTOMATIC
    Valid Values

    AUTOMATIC

    ANONYMOUS

    UUID:NUMBER

    This variable is used to specify whether and how the next GTID is obtained.

    Setting the session value of this system variable is a restricted operation. The session user must have privileges sufficient to set restricted session variables. See Section 5.1.8.1, “System Variable Privileges”.

    gtid_next can take any of the following values:

    • AUTOMATIC: Use the next automatically-generated global transaction ID.

    • ANONYMOUS: Transactions do not have global identifiers, and are identified by file and position only.

    • A global transaction ID in UUID:NUMBER format.

    Exactly which of the above options are valid depends on the setting of gtid_mode; see Section 16.1.4.1, “Replication Mode Concepts”, for more information. Setting this variable has no effect if gtid_mode is OFF.

    After this variable has been set to UUID:NUMBER, and a transaction has been committed or rolled back, an explicit SET GTID_NEXT statement must again be issued before any other statement.

    In MySQL 5.7.5 and higher, DROP TABLE or DROP TEMPORARY TABLE fails with an explicit error when used on a combination of nontemporary tables with temporary tables, or of temporary tables using transactional storage engines with temporary tables using nontransactional storage engines. Prior to MySQL 5.7.5, when GTIDs were enabled but gtid_next was not AUTOMATIC, DROP TABLE did not work correctly when used with either of these combinations of tables. (Bug #17620053)

    In MySQL 5.7.1, you cannot execute any of the statements CHANGE MASTER TO, START SLAVE, STOP SLAVE, REPAIR TABLE, OPTIMIZE TABLE, ANALYZE TABLE, CHECK TABLE, CREATE SERVER, ALTER SERVER, DROP SERVER, CACHE INDEX, LOAD INDEX INTO CACHE, FLUSH, or RESET when gtid_next is set to any value other than AUTOMATIC; in such cases, the statement fails with an error. Such statements are not disallowed in MySQL 5.7.2 and later. (Bug #16062608, Bug #16715809, Bug #69045) (Bug #16062608)

  • gtid_owned

    System Variable gtid_owned
    Scope Global, Session
    Dynamic No
    Type String
    Unit set of GTIDs

    This read-only variable is primarily for internal use. Its contents depend on its scope.

    • When used with global scope, gtid_owned holds a list of all the GTIDs that are currently in use on the server, with the IDs of the threads that own them. This variable is mainly useful for a multi-threaded replica to check whether a transaction is already being applied on another thread. An applier thread takes ownership of a transaction's GTID all the time it is processing the transaction, so @@global.gtid_owned shows the GTID and owner for the duration of processing. When a transaction has been committed (or rolled back), the applier thread releases ownership of the GTID.

    • When used with session scope, gtid_owned holds a single GTID that is currently in use by and owned by this session. This variable is mainly useful for testing and debugging the use of GTIDs when the client has explicitly assigned a GTID for the transaction by setting gtid_next. In this case, @@session.gtid_owned displays the GTID all the time the client is processing the transaction, until the transaction has been committed (or rolled back). When the client has finished processing the transaction, the variable is cleared. If gtid_next=AUTOMATIC is used for the session, gtid_owned is only populated briefly during the execution of the commit statement for the transaction, so it cannot be observed from the session concerned, although it is listed if @@global.gtid_owned is read at the right point. If you have a requirement to track the GTIDs that are handled by a client in a session, you can enable the session state tracker controlled by the session_track_gtids system variable.

  • gtid_purged

    System Variable gtid_purged
    Scope Global
    Dynamic Yes
    Type String
    Unit set of GTIDs

    The global value of the gtid_purged system variable (@@GLOBAL.gtid_purged) is a GTID set consisting of the GTIDs of all the transactions that have been committed on the server, but do not exist in any binary log file on the server. gtid_purged is a subset of gtid_executed. The following categories of GTIDs are in gtid_purged:

    • GTIDs of replicated transactions that were committed with binary logging disabled on the replica.

    • GTIDs of transactions that were written to a binary log file that has now been purged.

    • GTIDs that were added explicitly to the set by the statement SET @@GLOBAL.gtid_purged.

    When the server starts or restarts, the global value of gtid_purged is initialized to a set of GTIDs. For information on how this GTID set is computed, see The gtid_purged System Variable. If binary logs from MySQL 5.7.7 or older are present on the server, you might need to set binlog_gtid_simple_recovery=FALSE in the server's configuration file to produce the correct computation. See the description for binlog_gtid_simple_recovery for details of the situations in which this setting is needed.

    Issuing RESET MASTER causes the value of gtid_purged to be reset to an empty string.

    You can set the value of gtid_purged in order to record on the server that the transactions in a certain GTID set have been applied, although they do not exist in any binary log on the server. An example use case for this action is when you are restoring a backup of one or more databases on a server, but you do not have the relevant binary logs containing the transactions on the server.

    In MySQL 5.7, it is possible to update the value of gtid_purged only when gtid_executed is the empty string, and therefore gtid_purged is the empty string. This is the case either when replication has not been started previously, or when replication did not previously use GTIDs. Prior to MySQL 5.7.6, gtid_purged was also settable only when gtid_mode=ON. In MySQL 5.7.6 and higher, gtid_purged is settable regardless of the value of gtid_mode.

    To replace the value of gtid_purged with your specified GTID set, use the following statement:

    SET @@GLOBAL.gtid_purged = 'gtid_set'
    Note

    If you are using MySQL 5.7.7 or earlier, after issuing a SET @@GLOBAL.gtid_purged statement, you might need to set binlog_gtid_simple_recovery=FALSE in the server's configuration file before restarting the server, otherwise gtid_purged can be computed incorrectly. See the description for binlog_gtid_simple_recovery for details of the situations in which this setting is needed. If all binary logs on the server were generated using MySQL 5.7.8 or later and you are using MySQL 5.7.8 or later, binlog_gtid_simple_recovery=TRUE (which is the default setting from MySQL 5.7.7) can always safely be used.
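
    For example, after restoring a backup onto a server that does not have the binary logs containing those transactions, you might record them as applied as follows (a sketch only; the GTID set shown is illustrative, and RESET MASTER is used here to ensure that gtid_executed is empty before gtid_purged is set):

    RESET MASTER;
    SET @@GLOBAL.gtid_purged = '3E11FA47-71CA-11E1-9E33-C80AA9429562:1-725';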

16.1.7 Common Replication Administration Tasks

Once replication has been started, it executes without requiring much regular administration. This section describes how to check the status of replication and how to pause a replica.

16.1.7.1 Checking Replication Status

The most common task when managing a replication process is to ensure that replication is taking place and that there have been no errors between the replica and the source.

The SHOW SLAVE STATUS statement, which you must execute on each replica, provides information about the configuration and status of the connection between the replica server and the source server. From MySQL 5.7, the Performance Schema has replication tables that provide this information in a more accessible form. See Section 24.12.11, “Performance Schema Replication Tables”.

The SHOW STATUS statement also provided some information relating specifically to replicas. As of MySQL version 5.7.5, the following status variables previously monitored using SHOW STATUS were deprecated and moved to the Performance Schema replication tables:

  • Slave_heartbeat_period

  • Slave_last_heartbeat

  • Slave_received_heartbeats

  • Slave_retried_transactions

  • Slave_running

The replication heartbeat information shown in the Performance Schema replication tables lets you check that the replication connection is active even if the source has not sent events to the replica recently. The source sends a heartbeat signal to a replica if there are no updates to, and no unsent events in, the binary log for a longer period than the heartbeat interval. The MASTER_HEARTBEAT_PERIOD setting on the source (set by the CHANGE MASTER TO statement) specifies the frequency of the heartbeat, which defaults to half of the connection timeout interval for the replica (slave_net_timeout). The replication_connection_status Performance Schema table shows when the most recent heartbeat signal was received by a replica, and how many heartbeat signals it has received.
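
For example, a query along the following lines on the replica shows the heartbeat information for each replication channel:

SELECT CHANNEL_NAME, COUNT_RECEIVED_HEARTBEATS, LAST_HEARTBEAT_TIMESTAMP
  FROM performance_schema.replication_connection_status\G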

If you are using the SHOW SLAVE STATUS statement to check on the status of an individual replica, the statement provides the following information:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: source1
                  Master_User: root
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000004
          Read_Master_Log_Pos: 931
               Relay_Log_File: replica1-relay-bin.000056
                Relay_Log_Pos: 950
        Relay_Master_Log_File: mysql-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 931
              Relay_Log_Space: 1365
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids: 0

The key fields from the status report to examine are:

  • Slave_IO_State: The current status of the replica. See Section 8.14.6, “Replication Replica I/O Thread States”, and Section 8.14.7, “Replication Replica SQL Thread States”, for more information.

  • Slave_IO_Running: Whether the I/O thread for reading the source's binary log is running. Normally, you want this to be Yes unless you have not yet started replication or have explicitly stopped it with STOP SLAVE.

  • Slave_SQL_Running: Whether the SQL thread for executing events in the relay log is running. As with the I/O thread, this should normally be Yes.

  • Last_IO_Error, Last_SQL_Error: The last errors registered by the I/O and SQL threads when processing the relay log. Ideally these should be blank, indicating no errors.

  • Seconds_Behind_Master: The number of seconds that the replication SQL thread is behind processing the source's binary log. A high number (or an increasing one) can indicate that the replica is unable to handle events from the source in a timely fashion.

    A value of 0 for Seconds_Behind_Master can usually be interpreted as meaning that the replica has caught up with the source, but there are some cases where this is not strictly true. For example, this can occur if the network connection between source and replica is broken but the replication I/O thread has not yet noticed this—that is, slave_net_timeout has not yet elapsed.

    It is also possible that transient values for Seconds_Behind_Master may not reflect the situation accurately. When the replication SQL thread has caught up on I/O, Seconds_Behind_Master displays 0; but when the replication I/O thread is still queuing up a new event, Seconds_Behind_Master may show a large value until the SQL thread finishes executing the new event. This is especially likely when the events have old timestamps; in such cases, if you execute SHOW SLAVE STATUS several times in a relatively short period, you may see this value change back and forth repeatedly between 0 and a relatively large value.

Several pairs of fields provide information about the progress of the replica in reading events from the source's binary log and processing them in the relay log:

  • (Master_Log_File, Read_Master_Log_Pos): Coordinates in the source's binary log indicating how far the replication I/O thread has read events from that log.

  • (Relay_Master_Log_File, Exec_Master_Log_Pos): Coordinates in the source's binary log indicating how far the replication SQL thread has executed events received from that log.

  • (Relay_Log_File, Relay_Log_Pos): Coordinates in the replica's relay log indicating how far the replication SQL thread has executed the relay log. These correspond to the preceding coordinates, but are expressed in the replica's relay log coordinates rather than the source's binary log coordinates.

On the source, you can check the status of connected replicas using SHOW PROCESSLIST to examine the list of running processes. Replica connections have Binlog Dump in the Command field:

mysql> SHOW PROCESSLIST\G
*************************** 4. row ***************************
     Id: 10
   User: root
   Host: replica1:58371
     db: NULL
Command: Binlog Dump
   Time: 777
  State: Has sent all binlog to slave; waiting for binlog to be updated
   Info: NULL

Because it is the replica that drives the replication process, very little information is available in this report.

For replicas that were started with the --report-host option and are connected to the source, the SHOW SLAVE HOSTS statement on the source shows basic information about the replicas. The output includes the ID of the replica server, the value of the --report-host option, the connecting port, and source ID:

mysql> SHOW SLAVE HOSTS;
+-----------+----------+------+-----------+
| Server_id | Host     | Port | Master_id |
+-----------+----------+------+-----------+
|        10 | replica1 | 3306 |         1 |
+-----------+----------+------+-----------+
1 row in set (0.00 sec)

16.1.7.2 Pausing Replication on the Replica

You can stop and start replication on the replica using the STOP SLAVE and START SLAVE statements.

To stop processing of the binary log from the source, use STOP SLAVE:

mysql> STOP SLAVE;

When replication is stopped, the replication I/O thread stops reading events from the source's binary log and writing them to the relay log, and the replication SQL thread stops reading events from the relay log and executing them. You can pause the replication I/O and SQL threads individually by specifying the thread type:

mysql> STOP SLAVE IO_THREAD;
mysql> STOP SLAVE SQL_THREAD;

To start execution again, use the START SLAVE statement:

mysql> START SLAVE;

To start a particular thread, specify the thread type:

mysql> START SLAVE IO_THREAD;
mysql> START SLAVE SQL_THREAD;

For a replica that performs updates only by processing events from the source, stopping only the replication SQL thread can be useful if you want to perform a backup or other task. The replication I/O thread continues to read events from the source but they are not executed. This makes it easier for the replica to catch up when you restart the replication SQL thread.

Stopping only the replication I/O thread enables the events in the relay log to be executed by the replication SQL thread up to the point where the relay log ends. This can be useful when you want to pause execution to catch up with events already received from the source, or when you want to perform administration on the replica while ensuring that it has processed all updates up to a specific point. This method can also be used to pause event receipt on the replica while you conduct administration on the source. Stopping the I/O thread but permitting the SQL thread to run helps ensure that there is not a massive backlog of events to be executed when replication is started again.

16.1.7.3 Skipping Transactions

If replication stops due to an issue with an event in a replicated transaction, you can resume replication by skipping the failed transaction on the replica. Before skipping a transaction, ensure that the replication I/O thread is stopped as well as the replication SQL thread.

First you need to identify the replicated event that caused the error. Details of the error and the last successfully applied transaction are recorded in the Performance Schema table replication_applier_status_by_worker. You can use mysqlbinlog to retrieve and display the events that were logged around the time of the error. For instructions to do this, see Section 7.5, “Point-in-Time (Incremental) Recovery”. Alternatively, you can issue SHOW RELAYLOG EVENTS on the replica or SHOW BINLOG EVENTS on the source.
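
For example, the following query on the replica shows the error details and the last transaction seen by each applier worker:

SELECT WORKER_ID, LAST_SEEN_TRANSACTION, LAST_ERROR_NUMBER,
       LAST_ERROR_MESSAGE, LAST_ERROR_TIMESTAMP
  FROM performance_schema.replication_applier_status_by_worker\G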

Before skipping the transaction and restarting the replica, check these points:

  • Is the transaction that stopped replication from an unknown or untrusted source? If so, investigate the cause in case there are any security considerations that indicate the replica should not be restarted.

  • Does the transaction that stopped replication need to be applied on the replica? If so, either make the appropriate corrections and reapply the transaction, or manually reconcile the data on the replica.

  • Did the transaction that stopped replication need to be applied on the source? If not, undo the transaction manually on the server where it originally took place.

To skip the transaction, choose one of the following methods as appropriate:

  • If the replica uses GTIDs (gtid_mode is ON), see Section 16.1.7.3.1, “Skipping Transactions With GTIDs”.

  • If GTIDs are not in use or are being phased in, see Section 16.1.7.3.2, “Skipping Transactions Without GTIDs”.

To restart replication after skipping the transaction, issue START SLAVE, with the FOR CHANNEL clause if the replica is a multi-source replica.

16.1.7.3.1 Skipping Transactions With GTIDs

When GTIDs are in use (gtid_mode is ON), the GTID for a committed transaction is persisted on the replica even if the content of the transaction is filtered out. This feature prevents a replica from retrieving previously filtered transactions when it reconnects to the source using GTID auto-positioning. It can also be used to skip a transaction on the replica, by committing an empty transaction in place of the failing transaction.

If the failing transaction generated an error in a worker thread, you can obtain its GTID directly from the LAST_SEEN_TRANSACTION field in the Performance Schema table replication_applier_status_by_worker. To see what the transaction is, issue SHOW RELAYLOG EVENTS on the replica or SHOW BINLOG EVENTS on the source, and search the output for a transaction preceded by that GTID.

When you have assessed the failing transaction for any other appropriate actions as described previously (such as security considerations), to skip it, commit an empty transaction on the replica that has the same GTID as the failing transaction. For example:

SET GTID_NEXT='aaa-bbb-ccc-ddd:N';
BEGIN;
COMMIT;
SET GTID_NEXT='AUTOMATIC';

The presence of this empty transaction on the replica means that when you issue a START SLAVE statement to restart replication, the replica uses the auto-skip function to ignore the failing transaction, because it sees a transaction with that GTID has already been applied. If the replica is a multi-source replica, you do not need to specify the channel name when you commit the empty transaction, but you do need to specify the channel name when you issue START SLAVE.

Note that if binary logging is in use on this replica, the empty transaction enters the replication stream if the replica becomes a source or primary in the future. If you need to avoid this possibility, consider flushing and purging the replica's binary logs, as in this example:

FLUSH LOGS;
PURGE BINARY LOGS TO 'binlog.000146';

The GTID of the empty transaction is persisted, but the transaction itself is removed by purging the binary log files.

16.1.7.3.2 Skipping Transactions Without GTIDs

To skip failing transactions when GTIDs are not in use or are being phased in (gtid_mode is OFF, OFF_PERMISSIVE, or ON_PERMISSIVE), you can skip a specified number of events by issuing a SET GLOBAL sql_slave_skip_counter statement. Alternatively, you can skip past an event or events by issuing a CHANGE MASTER TO statement to move the source's binary log position forward.

When you use these methods, it is important to understand that you are not necessarily skipping a complete transaction, as is always the case with the GTID-based method described previously. These non-GTID-based methods are not aware of transactions as such, but instead operate on events. The binary log is organized as a sequence of groups known as event groups, and each event group consists of a sequence of events.

  • For transactional tables, an event group corresponds to a transaction.

  • For nontransactional tables, an event group corresponds to a single SQL statement.

A single transaction can contain changes to both transactional and nontransactional tables.

When you use a SET GLOBAL sql_slave_skip_counter statement to skip events and the resulting position is in the middle of an event group, the replica continues to skip events until it reaches the end of the group. Execution then starts with the next event group. The CHANGE MASTER TO statement does not have this function, so you must be careful to identify the correct location to restart replication at the beginning of an event group. However, using CHANGE MASTER TO means you do not have to count the events that need to be skipped, as you do with a SET GLOBAL sql_slave_skip_counter, and instead you can just specify the location to restart.

16.1.7.3.2.1 Skipping Transactions With SET GLOBAL sql_slave_skip_counter

When you have assessed the failing transaction for any other appropriate actions as described previously (such as security considerations), count the number of events that you need to skip. One event normally corresponds to one SQL statement in the binary log, but note that statements that use AUTO_INCREMENT or LAST_INSERT_ID() count as two events in the binary log.

If you want to skip the complete transaction, you can count the events to the end of the transaction, or you can just skip the relevant event group. Remember that with SET GLOBAL sql_slave_skip_counter, the replica continues to skip to the end of an event group. Make sure you do not skip too far forward and go into the next event group or transaction, as this then causes it to be skipped as well.

Issue the SET statement as follows, where N is the number of events from the source to skip:

SET GLOBAL sql_slave_skip_counter = N

This statement cannot be issued if gtid_mode=ON is set, or if the replica threads are running.

The SET GLOBAL sql_slave_skip_counter statement has no immediate effect. When you issue the START SLAVE statement for the next time following this SET statement, the new value for the system variable sql_slave_skip_counter is applied, and the events are skipped. That START SLAVE statement also automatically sets the value of the system variable back to 0. If the replica is a multi-source replica, when you issue that START SLAVE statement, the FOR CHANNEL clause is required. Make sure that you name the correct channel, otherwise events are skipped on the wrong channel.
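
A typical sequence for skipping a single event group on a replica that is not using GTIDs therefore looks like this:

STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;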

16.1.7.3.2.2 Skipping Transactions With CHANGE MASTER TO

When you have assessed the failing transaction for any other appropriate actions as described previously (such as security considerations), identify the coordinates (file and position) in the source's binary log that represent a suitable position to restart replication. This can be the start of the event group following the event that caused the issue, or the start of the next transaction. The replication I/O thread begins reading from the source at these coordinates the next time the thread starts, skipping the failing event. Make sure that you have identified the position accurately, because this statement does not take event groups into account.

Issue the CHANGE MASTER TO statement as follows, where source_log_name is the binary log file that contains the restart position, and source_log_pos is the number representing the restart position as stated in the binary log file:

CHANGE MASTER TO MASTER_LOG_FILE='source_log_name', MASTER_LOG_POS=source_log_pos;

If the replica is a multi-source replica, you must use the FOR CHANNEL clause to name the appropriate channel on the CHANGE MASTER TO statement.

This statement cannot be issued if MASTER_AUTO_POSITION=1 is set, or if the replication threads are running. If you need to use this method of skipping a transaction when MASTER_AUTO_POSITION=1 is normally set, you can change the setting to MASTER_AUTO_POSITION=0 while issuing the statement, then change it back again afterwards. For example:

CHANGE MASTER TO MASTER_AUTO_POSITION=0, MASTER_LOG_FILE='binlog.000145', MASTER_LOG_POS=235;
CHANGE MASTER TO MASTER_AUTO_POSITION=1;

16.2 Replication Implementation

Replication is based on the replication source server keeping track of all changes to its databases (updates, deletes, and so on) in its binary log. The binary log serves as a written record of all events that modify database structure or content (data) from the moment the server was started. Typically, SELECT statements are not recorded because they modify neither database structure nor content.

Each replica that connects to the source requests a copy of the binary log. That is, it pulls the data from the source, rather than the source pushing the data to the replica. The replica also executes the events from the binary log that it receives. This has the effect of repeating the original changes just as they were made on the source. Tables are created or their structure modified, and data is inserted, deleted, and updated according to the changes that were originally made on the source.

Because each replica is independent, the replaying of the changes from the source's binary log occurs independently on each replica that is connected to the source. In addition, because each replica receives a copy of the binary log only by requesting it from the source, the replica is able to read and update the copy of the database at its own pace and can start and stop the replication process at will without affecting the ability to update to the latest database status on either the source or replica side.

For more information on the specifics of the replication implementation, see Section 16.2.3, “Replication Threads”.

Sources and replicas regularly report their status with respect to the replication process so that you can monitor them. See Section 8.14, “Examining Server Thread (Process) Information”, for descriptions of all replication-related states.

The source's binary log is written to a local relay log on the replica before it is processed. The replica also records information about the current position within the source's binary log and within its own relay log. See Section 16.2.4, “Relay Log and Replication Metadata Repositories”.

Database changes are filtered on the replica according to a set of rules that are applied according to the various configuration options and variables that control event evaluation. For details on how these rules are applied, see Section 16.2.5, “How Servers Evaluate Replication Filtering Rules”.

16.2.1 Replication Formats

Replication works because events written to the binary log are read from the source and then processed on the replica. The events are recorded within the binary log in different formats according to the type of event. The different replication formats used correspond to the binary logging format used when the events were recorded in the source's binary log. The correlation between binary logging formats and the terms used during replication is:

  • When using statement-based binary logging, the source writes SQL statements to the binary log. Replication of the source to the replica works by executing the SQL statements on the replica. This is called statement-based replication (which can be abbreviated as SBR), which corresponds to the MySQL statement-based binary logging format.

  • When using row-based logging, the source writes events to the binary log that indicate how individual table rows are changed. Replication of the source to the replica works by copying the events representing the changes to the table rows to the replica. This is called row-based replication (which can be abbreviated as RBR).

  • You can also configure MySQL to use a mix of both statement-based and row-based logging, depending on which is most appropriate for the change to be logged. This is called mixed-format logging. When using mixed-format logging, a statement-based log is used by default. Depending on certain statements, and also the storage engine being used, the log is automatically switched to row-based in particular cases. Replication using the mixed format is referred to as mixed-based replication or mixed-format replication. For more information, see Section 5.4.4.3, “Mixed Binary Logging Format”.

Prior to MySQL 5.7.7, statement-based format was the default. In MySQL 5.7.7 and later, row-based format is the default.

NDB Cluster.  The default binary logging format in MySQL NDB Cluster 7.5 is MIXED. You should note that NDB Cluster Replication always uses row-based replication, and that the NDB storage engine is incompatible with statement-based replication. See Section 20.6.2, “General Requirements for NDB Cluster Replication”, for more information.

When using MIXED format, the binary logging format is determined in part by the storage engine being used and the statement being executed. For more information on mixed-format logging and the rules governing the support of different logging formats, see Section 5.4.4.3, “Mixed Binary Logging Format”.

The logging format in a running MySQL server is controlled by setting the binlog_format server system variable. This variable can be set with session or global scope. The rules governing when and how the new setting takes effect are the same as for other MySQL server system variables. Setting the variable for the current session lasts only until the end of that session, and the change is not visible to other sessions. Setting the variable globally takes effect for clients that connect after the change, but not for any current client sessions, including the session where the variable setting was changed. To make the global system variable setting permanent so that it applies across server restarts, you must set it in an option file. For more information, see Section 13.7.4.1, “SET Syntax for Variable Assignment”.
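
For example, the logging format can be changed at runtime for the global scope or for a single session, subject to the privilege requirements and the restrictions described below:

SET GLOBAL binlog_format = 'ROW';
SET SESSION binlog_format = 'STATEMENT';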

There are conditions under which you cannot change the binary logging format at runtime or doing so causes replication to fail. See Section 5.4.4.2, “Setting The Binary Log Format”.

Changing the global binlog_format value requires privileges sufficient to set global system variables. Changing the session binlog_format value requires privileges sufficient to set restricted session system variables. See Section 5.1.8.1, “System Variable Privileges”.

The statement-based and row-based replication formats have different issues and limitations. For a comparison of their relative advantages and disadvantages, see Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”.

With statement-based replication, you may encounter issues with replicating stored routines or triggers. You can avoid these issues by using row-based replication instead. For more information, see Section 22.7, “Stored Program Binary Logging”.

16.2.1.1 Advantages and Disadvantages of Statement-Based and Row-Based Replication

Each binary logging format has advantages and disadvantages. For most users, the mixed replication format should provide the best combination of data integrity and performance. If, however, you want to take advantage of the features specific to the statement-based or row-based replication format when performing certain tasks, you can use the information in this section, which provides a summary of their relative advantages and disadvantages, to determine which is best for your needs.

Advantages of statement-based replication
  • Proven technology.

  • Less data written to log files. When updates or deletes affect many rows, this results in much less storage space required for log files. This also means that taking and restoring from backups can be accomplished more quickly.

  • Log files contain all statements that made any changes, so they can be used to audit the database.

Disadvantages of statement-based replication
  • Statements that are unsafe for SBR.  Not all statements which modify data (such as INSERT, DELETE, UPDATE, and REPLACE statements) can be replicated using statement-based replication. Any nondeterministic behavior is difficult to replicate when using statement-based replication. Examples of such Data Modification Language (DML) statements include the following:

    • A statement that depends on a UDF or stored program that is nondeterministic.

    • DELETE and UPDATE statements that use a LIMIT clause without an ORDER BY, because the order in which rows are affected is nondeterministic.

    • Statements using functions whose results are nondeterministic or depend on the server on which they execute, such as LOAD_FILE(), UUID(), UUID_SHORT(), USER(), FOUND_ROWS(), SYSDATE() (unless both the source and the replica are started with the --sysdate-is-now option), RAND(), and VERSION().

    Statements that cannot be replicated correctly using statement-based replication are logged with a warning like the one shown here:

    [Warning] Statement is not safe to log in statement format.
    

    A similar warning is also issued to the client in such cases. The client can display it using SHOW WARNINGS.

  • INSERT ... SELECT requires a greater number of row-level locks than with row-based replication.

  • UPDATE statements that require a table scan (because no index is used in the WHERE clause) must lock a greater number of rows than with row-based replication.

  • For InnoDB: An INSERT statement that uses AUTO_INCREMENT blocks other nonconflicting INSERT statements.

  • For complex statements, the statement must be evaluated and executed on the replica before the rows are updated or inserted. With row-based replication, the replica only has to modify the affected rows, not execute the full statement.

  • If there is an error in evaluation on the replica, particularly when executing complex statements, statement-based replication may slowly increase the margin of error across the affected rows over time. See Section 16.4.1.27, “Replica Errors During Replication”.

  • Stored functions execute with the same NOW() value as the calling statement. However, this is not true of stored procedures.

  • Deterministic UDFs must be applied on the replicas.

  • Table definitions must be (nearly) identical on source and replica. See Section 16.4.1.10, “Replication with Differing Table Definitions on Source and Replica”, for more information.

Advantages of row-based replication
  • All changes can be replicated. This is the safest form of replication.

    Note

    Statements that update the information in the mysql system database, such as GRANT, REVOKE and the manipulation of triggers, stored routines (including stored procedures), and views, are all replicated to replicas using statement-based replication.

    For statements such as CREATE TABLE ... SELECT, a CREATE statement is generated from the table definition and replicated using statement-based format, while the row insertions are replicated using row-based format.

  • Fewer row locks are required on the source, which thus achieves higher concurrency, for the following types of statements:

    • INSERT ... SELECT statements

    • INSERT statements with AUTO_INCREMENT

    • UPDATE or DELETE statements with WHERE clauses that do not use keys or do not change most of the examined rows

  • Fewer row locks are required on the replica for any INSERT, UPDATE, or DELETE statement.

Disadvantages of row-based replication
  • RBR can generate more data that must be logged. To replicate a DML statement (such as an UPDATE or DELETE statement), statement-based replication writes only the statement to the binary log. By contrast, row-based replication writes each changed row to the binary log. If the statement changes many rows, row-based replication may write significantly more data to the binary log; this is true even for statements that are rolled back. This also means that making and restoring a backup can require more time. In addition, the binary log is locked for a longer time to write the data, which may cause concurrency problems. Use binlog_row_image=minimal to reduce the disadvantage considerably.

  • Deterministic UDFs that generate large BLOB values take longer to replicate with row-based replication than with statement-based replication. This is because the BLOB column value is logged, rather than the statement generating the data.

  • You cannot see on the replica what statements were received from the source and executed. However, you can see what data was changed using mysqlbinlog with the options --base64-output=DECODE-ROWS and --verbose.

    Alternatively, use the binlog_rows_query_log_events variable, which if enabled adds a Rows_query event with the statement to mysqlbinlog output when the -vv option is used.

  • For tables using the MyISAM storage engine, a stronger lock is required on the replica for INSERT statements when applying them as row-based events to the binary log than when applying them as statements. This means that concurrent inserts on MyISAM tables are not supported when using row-based replication.

16.2.1.2 Usage of Row-Based Logging and Replication

MySQL uses statement-based logging (SBL), row-based logging (RBL) or mixed-format logging. The type of binary log used impacts the size and efficiency of logging. Therefore the choice between row-based replication (RBR) or statement-based replication (SBR) depends on your application and environment. This section describes known issues when using a row-based format log, and describes some best practices using it in replication.

For additional information, see Section 16.2.1, “Replication Formats”, and Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”.

For information about issues specific to NDB Cluster Replication (which depends on row-based replication), see Section 20.6.3, “Known Issues in NDB Cluster Replication”.

  • Row-based logging of temporary tables.  As noted in Section 16.4.1.29, “Replication and Temporary Tables”, temporary tables are not replicated when using row-based format. When using mixed format logging, safe statements involving temporary tables are logged using statement-based format. For more information, see Section 16.2.1.1, “Advantages and Disadvantages of Statement-Based and Row-Based Replication”.

    Temporary tables are not replicated when using row-based format because there is no need. In addition, because temporary tables can be read only from the thread which created them, there is seldom if ever any benefit obtained from replicating them, even when using statement-based format.

    You can switch from statement-based to row-based binary logging format at runtime even when temporary tables have been created. From MySQL 5.7.25, the MySQL server tracks the logging mode that was in effect when each temporary table was created. When a given client session ends, the server logs a DROP TEMPORARY TABLE IF EXISTS statement for each temporary table that still exists and was created when statement-based binary logging was in use. If row-based or mixed format binary logging was in use when the table was created, the DROP TEMPORARY TABLE IF EXISTS statement is not logged. In previous releases, the DROP TEMPORARY TABLE IF EXISTS statement was logged regardless of the logging mode that was in effect.

    Nontransactional DML statements involving temporary tables are allowed when using binlog_format=ROW, as long as any nontransactional tables affected by the statements are temporary tables (Bug #14272672).
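
    As an illustration of switching the logging format at runtime, the following minimal sketch checks the formats currently in effect and changes the global default; sessions that are already connected keep their existing session value:

    -- Check the logging format in effect globally and for the current session
    SELECT @@GLOBAL.binlog_format, @@SESSION.binlog_format;
    -- Change the global default; new sessions use row-based logging
    SET GLOBAL binlog_format = 'ROW';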

  • RBL and synchronization of nontransactional tables.  When many rows are affected, the set of changes is split into several events; when the statement commits, all of these events are written to the binary log. When executing on the replica, a table lock is taken on all tables involved, and then the rows are applied in batch mode. Depending on the engine used for the replica's copy of the table, this may or may not be effective.

  • Latency and binary log size.  RBL writes changes for each row to the binary log and so its size can increase quite rapidly. This can significantly increase the time required to make changes on the replica that match those on the source. You should be aware of the potential for this delay in your applications.

  • Reading the binary log.  mysqlbinlog displays row-based events in the binary log using the BINLOG statement (see Section 13.7.6.1, “BINLOG Statement”). This statement displays an event as a base 64-encoded string, the meaning of which is not evident. When invoked with the --base64-output=DECODE-ROWS and --verbose options, mysqlbinlog formats the contents of the binary log to be human readable. When binary log events have been written in row-based format and you want to read them or recover from a replication or database failure, you can use this command to read the contents of the binary log. For more information, see Section 4.6.7.2, “mysqlbinlog Row Event Display”.

  • Binary log execution errors and replica execution mode.  Using slave_exec_mode=IDEMPOTENT is generally only useful with MySQL NDB Cluster replication, for which IDEMPOTENT is the default value. (See Section 20.6.10, “NDB Cluster Replication: Bidirectional and Circular Replication”). When slave_exec_mode is IDEMPOTENT, a failure to apply changes from RBL because the original row cannot be found does not trigger an error or cause replication to fail. This means that it is possible that updates are not applied on the replica, so that the source and replica are no longer synchronized. Latency issues and use of nontransactional tables with RBR when slave_exec_mode is IDEMPOTENT can cause the source and replica to diverge even further. For more information about slave_exec_mode, see Section 5.1.7, “Server System Variables”.

    For other scenarios, setting slave_exec_mode to STRICT is normally sufficient; this is the default value for storage engines other than NDB.
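
    For example, because slave_exec_mode is a dynamic global variable, it can be checked and changed at runtime, as in this minimal sketch:

    SELECT @@GLOBAL.slave_exec_mode;
    -- Suppress duplicate-key and key-not-found errors when applying row events
    -- (generally appropriate only for NDB Cluster replication)
    SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
    -- Return to the default behavior for storage engines other than NDB
    SET GLOBAL slave_exec_mode = 'STRICT';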

  • Filtering based on server ID not supported.  You can filter based on server ID by using the IGNORE_SERVER_IDS option for the CHANGE MASTER TO statement. This option works with both statement-based and row-based logging formats. Another way to filter out changes on some replicas is to add a WHERE clause of the form @@server_id <> id_value to UPDATE and DELETE statements (for example, WHERE @@server_id <> 1). However, this does not work correctly with row-based logging. To use the server_id system variable for statement filtering, use statement-based logging.
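
    For example, to ignore events that originated on a server with a particular server ID regardless of the logging format, you can use a sketch like the following; the server ID 3 is hypothetical, and the replica must be stopped before CHANGE MASTER TO is issued:

    STOP SLAVE;
    CHANGE MASTER TO IGNORE_SERVER_IDS = (3);
    START SLAVE;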

  • RBL, nontransactional tables, and stopped replicas.  When using row-based logging, if the replica server is stopped while a replication thread is updating a nontransactional table, the replica database can reach an inconsistent state. For this reason, it is recommended that you use a transactional storage engine such as InnoDB for all tables replicated using the row-based format. Use of STOP SLAVE or STOP SLAVE SQL_THREAD prior to shutting down the replica server helps prevent issues from occurring, and is always recommended regardless of the logging format or storage engine you use.
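
    For example, a controlled shutdown of a replica might be performed as in the following minimal sketch (the SHUTDOWN statement is available from MySQL 5.7.9; mysqladmin shutdown can be used instead):

    -- Stop the applier and receiver threads cleanly
    STOP SLAVE;
    -- Then shut down the server
    SHUTDOWN;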

16.2.1.3 Determination of Safe and Unsafe Statements in Binary Logging

The safeness of a statement in MySQL Replication refers to whether the statement and its effects can be replicated correctly using statement-based format. If this is true of the statement, we refer to the statement as safe; otherwise, we refer to it as unsafe.

In general, a statement is safe if it is deterministic, and unsafe if it is not. However, certain nondeterministic functions are not considered unsafe (see Nondeterministic functions not considered unsafe, later in this section). In addition, statements using results from floating-point math functions (which are hardware-dependent) are always considered unsafe (see Section 16.4.1.12, “Replication and Floating-Point Values”).

Handling of safe and unsafe statements.  A statement is treated differently depending on whether the statement is considered safe, and with respect to the binary logging format (that is, the current value of binlog_format).

  • When using row-based logging, no distinction is made in the treatment of safe and unsafe statements.

  • When using mixed-format logging, statements flagged as unsafe are logged using the row-based format; statements regarded as safe are logged using the statement-based format.

  • When using statement-based logging, statements flagged as being unsafe generate a warning to this effect. Safe statements are logged normally.

Each statement flagged as unsafe generates a warning. Formerly, if a large number of such statements were executed on the source, this could lead to excessively large error log files. To prevent this, MySQL 5.7 provides a warning suppression mechanism, which behaves as follows: Whenever the 50 most recent ER_BINLOG_UNSAFE_STATEMENT warnings have been generated more than 50 times in any 50-second period, warning suppression is enabled. When activated, this causes such warnings not to be written to the error log; instead, for each 50 warnings of this type, a note The last warning was repeated N times in last S seconds is written to the error log. This continues as long as the 50 most recent such warnings were issued in 50 seconds or less; once the rate has decreased below this threshold, the warnings are once again logged normally. Warning suppression has no effect on how the safety of statements for statement-based logging is determined, nor on how warnings are sent to the client. MySQL clients still receive one warning for each such statement.
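
As an illustration, the following minimal sketch produces an ER_BINLOG_UNSAFE_STATEMENT warning when binary logging is enabled with statement-based format; the table name t1 is hypothetical:

SET SESSION binlog_format = 'STATEMENT';
CREATE TABLE t1 (id CHAR(36));
INSERT INTO t1 VALUES (UUID());   -- UUID() is nondeterministic, so the statement is unsafe
SHOW WARNINGS;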

For more information, see Section 16.2.1, “Replication Formats”.

Statements considered unsafe.  Statements with the following characteristics are considered unsafe:

For additional information, see Section 16.4.1, “Replication Features and Issues”.

16.2.2 Replication Channels

In MySQL multi-source replication, a replica opens multiple replication channels, one for each replication source server. The replication channels represent the path of transactions flowing from a source to the replica. Each replication channel has its own receiver (I/O) thread, one or more applier (SQL) threads, and relay log. When transactions from a source are received by a channel's receiver thread, they are added to the channel's relay log file and passed through to the channel's applier threads. This enables each channel to function independently.

This section describes how channels can be used in a replication topology, and the impact they have on single-source replication. For instructions to configure sources and replicas for multi-source replication, to start, stop and reset multi-source replicas, and to monitor multi-source replication, see Section 16.1.5, “MySQL Multi-Source Replication”.

The maximum number of channels that can be created on one replica in a multi-source replication topology is 256. Each replication channel must have a unique (nonempty) name, as explained in Section 16.2.2.4, “Replication Channel Naming Conventions”. The error codes and messages that are issued when multi-source replication is enabled specify the channel that generated the error.

Note

Each channel on a multi-source replica must replicate from a different source. You cannot set up multiple replication channels from a single replica to a single source. This is because the server IDs of replicas must be unique in a replication topology. The source distinguishes replicas only by their server IDs, not by the names of the replication channels, so it cannot recognize different replication channels from the same replica.

A multi-source replica can also be set up as a multi-threaded replica, by setting the slave_parallel_workers system variable to a value greater than 0. When you do this on a multi-source replica, each channel on the replica has the specified number of applier threads, plus a coordinator thread to manage them. You cannot configure the number of applier threads for individual channels.

To provide compatibility with previous versions, the MySQL server automatically creates on startup a default channel whose name is the empty string (""). This channel is always present; it cannot be created or destroyed by the user. If no other channels (having nonempty names) have been created, replication statements act on the default channel only, so that all replication statements from older replicas function as expected (see Section 16.2.2.2, “Compatibility with Previous Replication Statements”). Statements applying to replication channels as described in this section can be used only when there is at least one named channel.
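
For example, on a multi-source replica, an individual channel is configured and controlled by adding a FOR CHANNEL clause to the usual replication statements, as in the following minimal sketch; the channel name source_1, host name, and account are hypothetical, MASTER_AUTO_POSITION=1 assumes GTID-based replication is in use, and multi-source replication also requires master_info_repository=TABLE and relay_log_info_repository=TABLE:

CHANGE MASTER TO
    MASTER_HOST = 'source1.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'password',
    MASTER_AUTO_POSITION = 1
    FOR CHANNEL 'source_1';

START SLAVE FOR CHANNEL 'source_1';
SHOW SLAVE STATUS FOR CHANNEL 'source_1'\G
STOP SLAVE FOR CHANNEL 'source_1';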

16.2.2.1 Commands for Operations on a Single Channel

To enable MySQL replication operations to act on individual replication channels, use the FOR CHANNEL channel clause with the following replication statements:

Similarly, an additional channel parameter is introduced for the following functions:

The following statements are disallowed for the group_replication_recovery channel:

The following statements are disallowed for the group_replication_applier channel:

16.2.2.2 Compatibility with Previous Replication Statements

When a replica has multiple channels and a FOR CHANNEL channel option is not specified, a valid statement generally acts on all available channels, with some specific exceptions.

For example, the following statements behave as expected for all except certain Group Replication channels:

  • START SLAVE starts replication threads for all channels, except the group_replication_recovery and group_replication_applier channels.

  • STOP SLAVE stops replication threads for all channels, except the group_replication_recovery and group_replication_applier channels.

  • SHOW SLAVE STATUS reports the status for all channels, except the group_replication_applier channel.

  • FLUSH RELAY LOGS flushes the relay logs for all channels, except the group_replication_applier channel.

  • RESET SLAVE resets all channels.

Warning

Use RESET SLAVE with caution as this statement deletes all existing channels, purges their relay log files, and recreates only the default channel.

Some replication statements cannot operate on all channels. In this case, error 1964 Multiple channels exist on the slave. Please provide channel name as an argument. is generated. The following statements and functions generate this error when used in a multi-source replication topology and a FOR CHANNEL channel option is not used to specify which channel to act on:

Note that a default channel always exists in a single-source replication topology, where statements and functions behave as in previous versions of MySQL.

16.2.2.3 Startup Options and Replication Channels

This section describes startup options which are impacted by the addition of replication channels.

The following startup settings must be configured correctly to use multi-source replication.

The following startup options now affect all channels in a replication topology.

The values set for the following startup options apply to each channel; because these are mysqld startup options, the same value is used on every channel.

  • --max-relay-log-size=size

    Maximum size of the individual relay log file for each channel; after reaching this limit, the file is rotated.

  • --relay-log-space-limit=size

    Upper limit for the total size of all relay logs combined, for each individual channel. For N channels, the combined size of these logs is limited to relay_log_space_limit * N.

  • --slave-parallel-workers=value

    Number of worker threads per channel.

  • slave_checkpoint_group

    Maximum number of transactions that can be processed by a multithreaded applier on each channel before a checkpoint operation is called to update its status.

  • --relay-log-index=filename

    Base name for each channel's relay log index file. See Section 16.2.2.4, “Replication Channel Naming Conventions”.

  • --relay-log=filename

    Denotes the base name of each channel's relay log file. See Section 16.2.2.4, “Replication Channel Naming Conventions”.

  • --slave-net-timeout=N

    This value is set per channel, so that each channel waits for N seconds to check for a broken connection.

  • sql_slave_skip_counter=N

    This value is set per channel, so that each channel skips N events from its source.

16.2.2.4 Replication Channel Naming Conventions

This section describes how naming conventions are impacted by replication channels.

Each replication channel has a unique name which is a string with a maximum length of 64 characters and is case-insensitive. Because channel names are used in replication metadata repositories, the character set used for these is always UTF-8. Although you are generally free to use any name for channels, the following names are reserved:

  • group_replication_applier

  • group_replication_recovery

The name you choose for a replication channel also influences the file names used by a multi-source replica. The relay log files and index files for each channel are named relay_log_basename-channel.xxxxxx, where relay_log_basename is a base name specified using the relay_log system variable, and channel is the name of the channel logged to this file. If you do not specify the relay_log system variable, a default file name is used that also includes the name of the channel.

16.2.3 Replication Threads

MySQL replication capabilities are implemented using three main threads, one on the source server and two on the replica:

  • Binary log dump thread.  The source creates a thread to send the binary log contents to a replica when the replica connects. This thread can be identified in the output of SHOW PROCESSLIST on the source as the Binlog Dump thread.

    The binary log dump thread acquires a lock on the source's binary log for reading each event that is to be sent to the replica. As soon as the event has been read, the lock is released, even before the event is sent to the replica.

  • Replication I/O thread.  When a START SLAVE statement is issued on a replica server, the replica creates an I/O thread, which connects to the source and asks it to send the updates recorded in its binary logs.

    The replication I/O thread reads the updates that the source's Binlog Dump thread sends (see previous item) and copies them to local files that comprise the replica's relay log.

    The state of this thread is shown as Slave_IO_running in the output of SHOW SLAVE STATUS.

  • Replication SQL thread.  The replica creates an SQL thread to read the relay log that is written by the replication I/O thread and execute the transactions contained in it.

There are three main threads for each source/replica connection. A source that has multiple replicas creates one binary log dump thread for each currently connected replica, and each replica has its own replication I/O and SQL threads.

A replica uses two threads to separate reading updates from the source and executing them into independent tasks. Thus, the task of reading transactions is not slowed down if the process of applying them is slow. For example, if the replica server has not been running for a while, its I/O thread can quickly fetch all the binary log contents from the source when the replica starts, even if the SQL thread lags far behind. If the replica stops before the SQL thread has executed all the fetched statements, the I/O thread has at least fetched everything so that a safe copy of the transactions is stored locally in the replica's relay logs, ready for execution the next time that the replica starts.

You can enable further parallelization for tasks on a replica by setting the slave_parallel_workers system variable to a value greater than 0 (the default). When this system variable is set, the replica creates the specified number of worker threads to apply transactions, plus a coordinator thread to manage them. If you are using multiple replication channels, each channel has this number of threads. A replica with slave_parallel_workers set to a value greater than 0 is called a multithreaded replica. With this setup, transactions that fail can be retried.

Note

Multithreaded replicas are not currently supported by NDB Cluster, which silently ignores the setting for this variable. See Section 20.6.3, “Known Issues in NDB Cluster Replication” for more information.
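
For example, an existing replica can be converted to a multithreaded replica at runtime as in the following minimal sketch; the applier threads must be stopped before these variables are changed, and the choice of four worker threads is arbitrary:

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';  -- parallelize based on the source's commit grouping
SET GLOBAL slave_parallel_workers = 4;
START SLAVE SQL_THREAD;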

16.2.3.1 Monitoring Replication Main Threads

The SHOW PROCESSLIST statement provides information that tells you what is happening on the source and on the replica regarding replication. For information on source states, see Section 8.14.5, “Replication Source Thread States”. For replica states, see Section 8.14.6, “Replication Replica I/O Thread States”, and Section 8.14.7, “Replication Replica SQL Thread States”.

The following example illustrates how the three main replication threads, the binary log dump thread, the replication I/O thread, and the replication SQL thread, show up in the output from SHOW PROCESSLIST.

On the source server, the output from SHOW PROCESSLIST looks like this:

mysql> SHOW PROCESSLIST\G
*************************** 1. row ***************************
     Id: 2
   User: root
   Host: localhost:32931
     db: NULL
Command: Binlog Dump
   Time: 94
  State: Has sent all binlog to slave; waiting for binlog to
         be updated
   Info: NULL

Here, thread 2 is a Binlog Dump thread that services a connected replica. The State information indicates that all outstanding updates have been sent to the replica and that the source is waiting for more updates to occur. If you see no Binlog Dump threads on a source server, this means that replication is not running; that is, no replicas are currently connected.

On a replica server, the output from SHOW PROCESSLIST looks like this:

mysql> SHOW PROCESSLIST\G
*************************** 1. row ***************************
     Id: 10
   User: system user
   Host:
     db: NULL
Command: Connect
   Time: 11
  State: Waiting for master to send event
   Info: NULL
*************************** 2. row ***************************
     Id: 11
   User: system user
   Host:
     db: NULL
Command: Connect
   Time: 11
  State: Has read all relay log; waiting for the slave I/O
         thread to update it
   Info: NULL

The State information indicates that thread 10 is the replication I/O thread that is communicating with the source server, and thread 11 is the replication SQL thread that is processing the updates stored in the relay logs. At the time that SHOW PROCESSLIST was run, both threads were idle, waiting for further updates.

The value in the Time column can show how late the replica is compared to the source. See Section A.14, “MySQL 5.7 FAQ: Replication”. If sufficient time elapses on the source side without activity on the Binlog Dump thread, the source determines that the replica is no longer connected. As for any other client connection, the timeouts for this depend on the values of net_write_timeout and net_retry_count; for more information about these, see Section 5.1.7, “Server System Variables”.

The SHOW SLAVE STATUS statement provides additional information about replication processing on a replica server. See Section 16.1.7.1, “Checking Replication Status”.

16.2.3.2 Monitoring Replication Applier Worker Threads

On a multithreaded replica, the Performance Schema tables replication_applier_status_by_coordinator and replication_applier_status_by_worker show status information for the replica's coordinator thread and applier worker threads respectively. For a replica with multiple channels, the threads for each channel are identified.
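
For example, the state of each applier worker thread, including any error it has encountered, can be checked with a query such as the following minimal sketch:

SELECT CHANNEL_NAME, WORKER_ID, SERVICE_STATE,
       LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE
    FROM performance_schema.replication_applier_status_by_worker;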

A multithreaded replica's coordinator thread also prints statistics to the replica's error log on a regular basis if the verbosity setting is set to display informational messages. The statistics are printed depending on the volume of events that the coordinator thread has assigned to applier worker threads, with a maximum frequency of once every 120 seconds. The message lists the following statistics for the relevant replication channel, or the default replication channel (which is not named):

Seconds elapsed

The difference in seconds between the current time and the last time this information was printed to the error log.

Events assigned

The total number of events that the coordinator thread has queued to all applier worker threads since the coordinator thread was started.

Worker queues filled over overrun level

The current number of events that are queued to any of the applier worker threads in excess of the overrun level, which is set at 90% of the maximum queue length of 16384 events. If this value is zero, no applier worker threads are operating at the upper limit of their capacity.

Waited due to worker queue full

The number of times that the coordinator thread had to wait to schedule an event because an applier worker thread's queue was full. If this value is zero, no applier worker threads exhausted their capacity.

Waited due to the total size

The number of times that the coordinator thread had to wait to schedule an event because the slave_pending_jobs_size_max limit had been reached. This system variable sets the maximum amount of memory (in bytes) available to applier worker thread queues holding events not yet applied. If an unusually large event exceeds this size, the transaction is held until all the applier worker threads have empty queues, and then processed. All subsequent transactions are held until the large transaction has been completed.

Waited at clock conflicts

The number of nanoseconds that the coordinator thread had to wait to schedule an event because a transaction that the event depended on had not yet been committed. If slave_parallel_type is set to DATABASE (rather than LOGICAL_CLOCK), this value is always zero.

Waited (count) when workers occupied

The number of times that the coordinator thread slept for a short period, which it might do in two situations. The first situation is where the coordinator thread assigns an event and finds that the applier worker thread's queue is filled beyond the underrun level of 10% of the maximum queue length, in which case it sleeps for a maximum of 1 millisecond. The second situation is where slave_parallel_type is set to LOGICAL_CLOCK and the coordinator thread needs to assign the first event of a transaction to an applier worker thread's queue; it assigns such an event only to a worker with an empty queue, so if no queues are empty, the coordinator thread sleeps until one becomes empty.

Waited when workers occupied

The number of nanoseconds that the coordinator thread slept while waiting for an empty applier worker thread queue (that is, in the second situation described above, where slave_parallel_type is set to LOGICAL_CLOCK and the first event of a transaction needs to be assigned).
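
If the "Waited due to the total size" counter grows steadily, the memory available to the applier worker queues can be increased at runtime, as in this minimal sketch; the 128MB figure is arbitrary and should be tuned to the workload:

SET GLOBAL slave_pending_jobs_size_max = 134217728;  -- 128MB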

16.2.4 Relay Log and Replication Metadata Repositories

A replica server creates several repositories of information to use for the replication process:

  • The relay log, which is written by the replication I/O thread, contains the transactions read from the replication source server's binary log. The transactions in the relay log are applied on the replica by the replication SQL thread. For information about the relay log, see Section 16.2.4.1, “The Relay Log”.

  • The replica's connection metadata repository contains information that the replication I/O thread needs to connect to the replication source server and retrieve transactions from the source's binary log. The connection metadata repository is written to the mysql.slave_master_info table or to a file.

  • The replica's applier metadata repository contains information that the replication SQL thread needs to read and apply transactions from the replica's relay log. The applier metadata repository is written to the mysql.slave_relay_log_info table or to a file.

The connection metadata repository and applier metadata repository are collectively known as the replication metadata repositories. For information about these, see Section 16.2.4.2, “Replication Metadata Repositories”.

Making replication resilient to unexpected halts.  The mysql.slave_master_info and mysql.slave_relay_log_info tables are created using the transactional storage engine InnoDB. Updates to the replica's applier metadata repository table are committed together with the transactions, meaning that the replica's progress information recorded in that repository is always consistent with what has been applied to the database, even in the event of an unexpected server halt. For information on the combination of settings on the replica that is most resilient to unexpected halts, see Section 16.3.2, “Handling an Unexpected Halt of a Replica”.

16.2.4.1 The Relay Log

The relay log, like the binary log, consists of a set of numbered files containing events that describe database changes, and an index file that contains the names of all used relay log files.

The term relay log file generally denotes an individual numbered file containing database events. The term relay log collectively denotes the set of numbered relay log files plus the index file.

Relay log files have the same format as binary log files and can be read using mysqlbinlog (see Section 4.6.7, “mysqlbinlog — Utility for Processing Binary Log Files”).

By default, relay log file names have the form host_name-relay-bin.nnnnnn in the data directory, where host_name is the name of the replica server host and nnnnnn is a sequence number. Successive relay log files are created using successive sequence numbers, beginning with 000001. The replica uses an index file to track the relay log files currently in use. The default relay log index file name is host_name-relay-bin.index in the data directory.

The default relay log file and relay log index file names can be overridden with, respectively, the relay_log and relay_log_index system variables (see Section 16.1.6, “Replication and Binary Logging Options and Variables”).

If a replica uses the default host-based relay log file names, changing a replica's host name after replication has been set up can cause replication to fail with the errors Failed to open the relay log and Could not find target log during relay log initialization. This is a known issue (see Bug #2122). If you anticipate that a replica's host name might change in the future (for example, if networking is set up on the replica such that its host name can be modified using DHCP), you can avoid this issue entirely by using the relay_log and relay_log_index system variables to specify relay log file names explicitly when you initially set up the replica. This makes the names independent of server host name changes.

If you encounter the issue after replication has already begun, one way to work around it is to stop the replica server, prepend the contents of the old relay log index file to the new one, and then restart the replica. On a Unix system, this can be done as shown here:

shell> cat new_relay_log_name.index >> old_relay_log_name.index
shell> mv old_relay_log_name.index new_relay_log_name.index

A replica server creates a new relay log file under the following conditions:

  • Each time the replication I/O thread starts.

  • When the logs are flushed (for example, with FLUSH LOGS or mysqladmin flush-logs).

  • When the size of the current relay log file becomes too large, determined as follows:

The replication SQL thread automatically deletes each relay log file after it has executed all events in the file and no longer needs it. There is no explicit mechanism for deleting relay logs because the replication SQL thread takes care of doing so. However, FLUSH LOGS rotates relay logs, which influences when the replication SQL thread deletes them.
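
For example, the relay log can be rotated explicitly, either for all channels or for a single channel; the channel name source_1 in this minimal sketch is hypothetical:

FLUSH RELAY LOGS;
FLUSH RELAY LOGS FOR CHANNEL 'source_1';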

16.2.4.2 Replication Metadata Repositories

A replica server creates two replication metadata repositories, the connection metadata repository and the applier metadata repository. The replication metadata repositories survive a replica server's shutdown. If binary log file position based replication is in use, when the replica restarts, it reads the two repositories to determine how far it previously proceeded in reading the binary log from the source and in processing its own relay log. If GTID-based replication is in use, the replica does not use the replication metadata repositories for that purpose, but does need them for the other metadata that they contain.

  • The replica's connection metadata repository contains information that the replication I/O thread needs to connect to the replication source server and retrieve transactions from the source's binary log. The metadata in this repository includes the connection configuration, the replication user account details, the SSL settings for the connection, and the file name and position where the replication I/O thread is currently reading from the source's binary log.

  • The replica's applier metadata repository contains information that the replication SQL thread needs to read and apply transactions from the replica's relay log. The metadata in this repository includes the file name and position up to which the replication SQL thread has executed the transactions in the relay log, and the equivalent position in the source's binary log. It also includes metadata for the process of applying transactions, such as the number of worker threads.

By default, the replication metadata repositories are created as files in the data directory named master.info and relay-log.info, or with alternative names and locations specified by the --master-info-file option and relay_log_info_file system variable. To create the replication metadata repositories as tables, specify master_info_repository=TABLE and relay_log_info_repository=TABLE at server startup. In that case, the replica's connection metadata repository is written to the slave_master_info table in the mysql system schema, and the replica's applier metadata repository is written to the slave_relay_log_info table in the mysql system schema. A warning message is issued if mysqld is unable to initialize the tables for the replication metadata repositories, but the replica is allowed to continue starting. This situation is most likely to occur when upgrading from a version of MySQL that does not support the use of tables for the repositories to one in which they are supported.
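
For example, an existing replica can be switched from file-based to table-based repositories at runtime, as in the following minimal sketch; the replication threads must be stopped before these variables are changed:

STOP SLAVE;
SET GLOBAL master_info_repository = 'TABLE';
SET GLOBAL relay_log_info_repository = 'TABLE';
START SLAVE;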

Important
  1. Do not attempt to update or insert rows in the mysql.slave_master_info or mysql.slave_relay_log_info tables manually. Doing so can cause undefined behavior, and is not supported. Execution of any statement requiring a write lock on either or both of the slave_master_info and slave_relay_log_info tables is disallowed while replication is ongoing (although statements that perform only reads are permitted at any time).

  2. Access to the replica's connection metadata repository file or table should be restricted to the database administrator, because it contains the replication user account name and password for connecting to the source. Use a restricted access mode to protect database backups that include this repository.

RESET SLAVE clears the data in the replication metadata repositories, with the exception of the replication connection parameters (depending on the MySQL Server release and repository type). For details, see the description for RESET SLAVE.

If you set master_info_repository and relay_log_info_repository to TABLE, the mysql.slave_master_info and mysql.slave_relay_log_info tables are created using the InnoDB transactional storage engine. Updates to the replica's applier metadata repository table are committed together with the transactions, meaning that the replica's progress information recorded in that repository is always consistent with what has been applied to the database, even in the event of an unexpected server halt. The --relay-log-recovery option must be enabled on the replica to guarantee resilience. For more details, see Section 16.3.2, “Handling an Unexpected Halt of a Replica”.

When you back up the replica's data or transfer a snapshot of its data to create a new replica, ensure that you include the mysql.slave_master_info and mysql.slave_relay_log_info tables containing the replication metadata repositories, or the equivalent files (master.info and relay-log.info in the data directory, unless you specified alternative names and locations). When binary log file position based replication is in use, the replication metadata repositories are needed to resume replication after restarting the restored or copied replica. If you do not have the relay log files, but still have the replica's applier metadata repository, you can check it to determine how far the replication SQL thread has executed in the source's binary log. Then you can use a CHANGE MASTER TO statement with the MASTER_LOG_FILE and MASTER_LOG_POS options to tell the replica to re-read the binary logs from the source from that point (provided that the required binary logs still exist on the source).
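
For example, after restoring such a backup, replication can be resumed from the recorded coordinates as in this minimal sketch; the file name and position shown are hypothetical and should be taken from the applier metadata repository:

CHANGE MASTER TO
    MASTER_LOG_FILE = 'source-bin.000123',
    MASTER_LOG_POS = 154;
START SLAVE;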

One additional repository, the applier worker metadata repository, is created primarily for internal use, and holds status information about worker threads on a multithreaded replica. The applier worker metadata repository includes the names and positions for the relay log file and the source's binary log file for each worker thread. If the replica's applier metadata repository is created as a table, which is the default, the applier worker metadata repository is written to the mysql.slave_worker_info table. If the applier metadata repository is written to a file, the applier worker metadata repository is written to the worker-relay-log.info file. For external use, status information for worker threads is presented in the Performance Schema replication_applier_status_by_worker table.

The replication metadata repositories originally contained information similar to that shown in the output of the SHOW SLAVE STATUS statement, which is discussed in Section 13.4.2, “SQL Statements for Controlling Replica Servers”. Further information has since been added to the replication metadata repositories which is not displayed by the SHOW SLAVE STATUS statement.

For the connection metadata repository, the following table shows the correspondence between the columns in the mysql.slave_master_info table, the columns displayed by SHOW SLAVE STATUS, and the lines in the master.info file.

master.info File Line | slave_master_info Table Column | SHOW SLAVE STATUS Column | Description
1 | Number_of_lines | [None] | Number of lines in the file, or columns in the table
2 | Master_log_name | Master_Log_File | The name of the binary log currently being read from the source
3 | Master_log_pos | Read_Master_Log_Pos | The current position within the binary log that has been read from the source
4 | Host | Master_Host | The host name of the source server
5 | User_name | Master_User | The replication user name used to connect to the source
6 | User_password | Password (not shown by SHOW SLAVE STATUS) | The password used to connect to the source
7 | Port | Master_Port | The network port used to connect to the source
8 | Connect_retry | Connect_Retry | The period (in seconds) that the replica waits before trying to reconnect to the source
9 | Enabled_ssl | Master_SSL_Allowed | Indicates whether the server supports SSL connections
10 | Ssl_ca | Master_SSL_CA_File | The file used for the Certificate Authority (CA) certificate
11 | Ssl_capath | Master_SSL_CA_Path | The path to the Certificate Authority (CA) certificates
12 | Ssl_cert | Master_SSL_Cert | The name of the SSL certificate file
13 | Ssl_cipher | Master_SSL_Cipher | The list of possible ciphers used in the handshake for the SSL connection
14 | Ssl_key | Master_SSL_Key | The name of the SSL key file
15 | Ssl_verify_server_cert | Master_SSL_Verify_Server_Cert | Whether to verify the server certificate
16 | Heartbeat | [None] | Interval between replication heartbeats, in seconds
17 | Bind | Master_Bind | Which of the replica's network interfaces should be used for connecting to the source
18 | Ignored_server_ids | Replicate_Ignore_Server_Ids | The list of server IDs to be ignored. Note that for Ignored_server_ids the list of server IDs is preceded by the total number of server IDs to ignore.
19 | Uuid | Master_UUID | The source's unique ID
20 | Retry_count | Master_Retry_Count | Maximum number of reconnection attempts permitted
21 | Ssl_crl | [None] | Path to an SSL certificate revocation-list file
22 | Ssl_crlpath | [None] | Path to a directory containing SSL certificate revocation-list files
23 | Enabled_auto_position | Auto_position | If autopositioning is in use or not
24 | Channel_name | Channel_name | The name of the replication channel
25 | Tls_version | Master_TLS_Version | TLS version on source

For the applier metadata repository, the following table shows the correspondence between the columns in the mysql.slave_relay_log_info table, the columns displayed by SHOW SLAVE STATUS, and the lines in the relay-log.info file.

Line in relay-log.info | slave_relay_log_info Table Column | SHOW SLAVE STATUS Column | Description
1 | Number_of_lines | [None] | Number of lines in the file or columns in the table
2 | Relay_log_name | Relay_Log_File | The name of the current relay log file
3 | Relay_log_pos | Relay_Log_Pos | The current position within the relay log file; events up to this position have been executed on the replica database
4 | Master_log_name | Relay_Master_Log_File | The name of the source's binary log file from which the events in the relay log file were read
5 | Master_log_pos | Exec_Master_Log_Pos | The equivalent position within the source's binary log file of events that have already been executed
6 | Sql_delay | SQL_Delay | The number of seconds that the replica must lag the source
7 | Number_of_workers | [None] | The number of worker threads on the replica for executing replication events (transactions) in parallel
8 | Id | [None] | ID used for internal purposes; currently this is always 1
9 | Channel_name | Channel_name | The name of the replication channel

In versions of MySQL prior to MySQL 5.6, the relay-log.info file does not include a line count or a delay value (and the slave_relay_log_info table is not available).

Line | Status Column | Description
1 | Relay_Log_File | The name of the current relay log file
2 | Relay_Log_Pos | The current position within the relay log file; events up to this position have been executed on the replica database
3 | Relay_Master_Log_File | The name of the source's binary log file from which the events in the relay log file were read
4 | Exec_Master_Log_Pos | The equivalent position within the source's binary log file of events that have already been executed
Note

If you downgrade a replica server to a version older than MySQL 5.6, the older server does not read the relay-log.info file correctly. To address this, modify the file in a text editor by deleting the initial line containing the number of lines.

The contents of the relay-log.info file and the states shown by the SHOW SLAVE STATUS statement might not match if the relay-log.info file has not been flushed to disk. Ideally, you should only view relay-log.info on a replica that is offline (that is, mysqld is not running). For a running system, you can use SHOW SLAVE STATUS, or query the mysql.slave_master_info and mysql.slave_relay_log_info tables if you are writing the replication metadata repositories to tables.

16.2.5 How Servers Evaluate Replication Filtering Rules

If a replication source server does not write a statement to its binary log, the statement is not replicated. If the server does log the statement, the statement is sent to all replicas and each replica determines whether to execute it or ignore it.

On the source, you can control which databases to log changes for by using the --binlog-do-db and --binlog-ignore-db options to control binary logging. For a description of the rules that servers use in evaluating these options, see Section 16.2.5.1, “Evaluation of Database-Level Replication and Binary Logging Options”. You should not use these options to control which databases and tables are replicated. Instead, use filtering on the replica to control the events that are executed on the replica.

On the replica side, decisions about whether to execute or ignore statements received from the source are made according to the --replicate-* options that the replica was started with. (See Section 16.1.6, “Replication and Binary Logging Options and Variables”.) The filters governed by these options can also be set dynamically using the CHANGE REPLICATION FILTER statement. The rules governing such filters are the same whether they are created on startup using --replicate-* options or while the replica server is running by CHANGE REPLICATION FILTER. Note that replication filters cannot be used on a MySQL server instance that is configured for Group Replication, because filtering transactions on some servers would make the group unable to reach agreement on a consistent state.
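
For example, the same filters that can be set with --replicate-* startup options can be changed dynamically, as in the following minimal sketch; the database and table names are hypothetical, and the replication SQL thread must be stopped before the statement is issued:

STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER
    REPLICATE_DO_DB = (db1, db2),
    REPLICATE_WILD_IGNORE_TABLE = ('db1.log%');
START SLAVE SQL_THREAD;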

In the simplest case, when there are no --replicate-* options, the replica executes all statements that it receives from the source. Otherwise, the result depends on the particular options given.

Database-level options (--replicate-do-db, --replicate-ignore-db) are checked first; see Section 16.2.5.1, “Evaluation of Database-Level Replication and Binary Logging Options”, for a description of this process. If no database-level options are used, option checking proceeds to any table-level options that may be in use (see Section 16.2.5.2, “Evaluation of Table-Level Replication Options”, for a discussion of these). If one or more database-level options are used but none are matched, the statement is not replicated.

For statements affecting databases only (that is, CREATE DATABASE, DROP DATABASE, and ALTER DATABASE), database-level options always take precedence over any --replicate-wild-do-table options. In other words, for such statements, --replicate-wild-do-table options are checked if and only if there are no database-level options that apply. This is a change in behavior from previous versions of MySQL, where the statement CREATE DATABASE dbx was not replicated if the replica had been started with --replicate-do-db=dbx --replicate-wild-do-table=db%.t1. (Bug #46110)

To make it easier to determine what effect an option set has, it is recommended that you avoid mixing do and ignore options, or wildcard and nonwildcard options.

If any --replicate-rewrite-db options were specified, they are applied before the --replicate-* filtering rules are tested.

Note

All replication filtering options follow the same rules for case sensitivity that apply to names of databases and tables elsewhere in the MySQL server, including the effects of the lower_case_table_names system variable.

16.2.5.1 Evaluation of Database-Level Replication and Binary Logging Options

When evaluating replication options, the replica begins by checking to see whether there are any --replicate-do-db or --replicate-ignore-db options that apply. When using --binlog-do-db or --binlog-ignore-db, the process is similar, but the options are checked on the source.

The database that is checked for a match depends on the binary log format of the statement that is being handled. If the statement has been logged using the row format, the database where data is to be changed is the database that is checked. If the statement has been logged using the statement format, the default database (specified with a USE statement) is the database that is checked.

Note

Only DML statements can be logged using the row format. DDL statements are always logged as statements, even when binlog_format=ROW. All DDL statements are therefore always filtered according to the rules for statement-based replication. This means that you must select the default database explicitly with a USE statement in order for a DDL statement to be applied.
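
For example, if a replica is started with --replicate-do-db=db1 (a hypothetical filter), a DDL statement is filtered according to the default database even under row-based logging, so it should be issued as follows:

USE db1;
ALTER TABLE t1 ADD COLUMN c2 INT;   -- filtered using the default database db1, even with binlog_format=ROW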

For replication, the steps involved are listed here:

  1. Which logging format is used?

    • STATEMENT.  Test the default database.

    • ROW.  Test the database affected by the changes.

  2. Are there any --replicate-do-db options?

    • Yes.  Does the database match any of them?

      • Yes.  Continue to Step 4.

      • No.  Ignore the update and exit.

    • No.  Continue to step 3.

  3. Are there any --replicate-ignore-db options?

    • Yes.  Does the database match any of them?

      • Yes.  Ignore the update and exit.

      • No.  Continue to step 4.

    • No.  Continue to step 4.

  4. Proceed to checking the table-level replication options, if there are any. For a description of how these options are checked, see Section 16.2.5.2, “Evaluation of Table-Level Replication Options”.

    Important

    A statement that is still permitted at this stage is not yet actually executed. The statement is not executed until all table-level options (if any) have also been checked, and the outcome of that process permits execution of the statement.

For binary logging, the steps involved are listed here:

  1. Are there any --binlog-do-db or --binlog-ignore-db options?

    • Yes.  Continue to step 2.

    • No.  Log the statement and exit.

  2. Is there a default database (has any database been selected by USE)?

    • Yes.  Continue to step 3.

    • No.  Ignore the statement and exit.

  3. There is a default database. Are there any --binlog-do-db options?

    • Yes.  Do any of them match the database?

      • Yes.  Log the statement and exit.

      • No.  Ignore the statement and exit.

    • No.  Continue to step 4.

  4. Do any of the --binlog-ignore-db options match the database?

    • Yes.  Ignore the statement and exit.

    • No.  Log the statement and exit.

Important

For statement-based logging, an exception is made in the rules just given for the CREATE DATABASE, ALTER DATABASE, and DROP DATABASE statements. In those cases, the database being created, altered, or dropped replaces the default database when determining whether to log or ignore updates.

--binlog-do-db can sometimes mean ignore other databases. For example, when using statement-based logging, a server running with only --binlog-do-db=sales does not write to the binary log statements for which the default database differs from sales. When using row-based logging with the same option, the server logs only those updates that change data in sales.

16.2.5.2 Evaluation of Table-Level Replication Options

The replica checks for and evaluates table options only if either of the following two conditions is true:

First, as a preliminary condition, the replica checks whether statement-based replication is enabled. If so, and the statement occurs within a stored function, the replica executes the statement and exits. If row-based replication is enabled, the replica does not know whether a statement occurred within a stored function on the source, so this condition does not apply.

Note

For statement-based replication, replication events represent statements (all changes making up a given event are associated with a single SQL statement); for row-based replication, each event represents a change in a single table row (thus a single statement such as UPDATE mytable SET mycol = 1 may yield many row-based events). When viewed in terms of events, the process of checking table options is the same for both row-based and statement-based replication.

Having reached this point, if there are no table options, the replica simply executes all events. If there are any --replicate-do-table or --replicate-wild-do-table options, the event must match one of these if it is to be executed; otherwise, it is ignored. If there are any --replicate-ignore-table or --replicate-wild-ignore-table options, all events are executed except those that match any of these options.

The following steps describe this evaluation in more detail. The starting point is the end of the evaluation of the database-level options, as described in Section 16.2.5.1, “Evaluation of Database-Level Replication and Binary Logging Options”.

  1. Are there any table replication options?

    • Yes.  Continue to step 2.

    • No.  Execute the update and exit.

  2. Which logging format is used?

    • STATEMENT.  Carry out the remaining steps for each statement that performs an update.

    • ROW.  Carry out the remaining steps for each update of a table row.

  3. Are there any --replicate-do-table options?

    • Yes.  Does the table match any of them?

      • Yes.  Execute the update and exit.

      • No.  Continue to step 4.

    • No.  Continue to step 4.

  4. Are there any --replicate-ignore-table options?

    • Yes.  Does the table match any of them?

      • Yes.  Ignore the update and exit.

      • No.  Continue to step 5.

    • No.  Continue to step 5.

  5. Are there any --replicate-wild-do-table options?

    • Yes.  Does the table match any of them?

      • Yes.  Execute the update and exit.

      • No.  Continue to step 6.

    • No.  Continue to step 6.

  6. Are there any --replicate-wild-ignore-table options?

    • Yes.  Does the table match any of them?

      • Yes.  Ignore the update and exit.

      • No.  Continue to step 7.

    • No.  Continue to step 7.

  7. Is there another table to be tested?

    • Yes.  Go back to step 3.

    • No.  Continue to step 8.

  8. Are there any --replicate-do-table or --replicate-wild-do-table options?

    • Yes.  Ignore the update and exit.

    • No.  Execute the update and exit.

Note

Statement-based replication stops if a single SQL statement operates on both a table that is included by a --replicate-do-table or --replicate-wild-do-table option, and another table that is ignored by a --replicate-ignore-table or --replicate-wild-ignore-table option. The replica must either execute or ignore the complete statement (which forms a replication event), and it cannot logically do this. This also applies to row-based replication for DDL statements, because DDL statements are always logged as statements, without regard to the logging format in effect. The only type of statement that can update both an included and an ignored table and still be replicated successfully is a DML statement that has been logged with binlog_format=ROW.

16.2.5.3 Interactions Between Replication Filtering Options

If you use a combination of database-level and table-level replication filtering options, the replica first accepts or ignores events using the database options, then it evaluates all events permitted by those options according to the table options. This can sometimes lead to results that seem counterintuitive. It is also important to note that the results vary depending on whether the operation is logged using statement-based or row-based binary logging format. If you want to be sure that your replication filters always operate in the same way independently of the binary logging format, which is particularly important if you are using mixed binary logging format, follow the guidance in this topic.

The effect of the replication filtering options differs between binary logging formats because of the way the database name is identified. With statement-based format, DML statements are handled based on the current database, as specified by the USE statement. With row-based format, DML statements are handled based on the database where the modified table exists. DDL statements are always filtered based on the current database, as specified by the USE statement, regardless of the binary logging format.

An operation that involves multiple tables can also be affected differently by replication filtering options depending on the binary logging format. Operations to watch out for include transactions involving multi-table UPDATE statements, triggers, cascading foreign keys, stored functions that update multiple tables, and DML statements that invoke stored functions that update one or more tables. If these operations update both filtered-in and filtered-out tables, the results can vary with the binary logging format.

If you need to guarantee that your replication filters operate consistently regardless of the binary logging format, particularly if you are using mixed binary logging format (binlog_format=MIXED), use only table-level replication filtering options, and do not use database-level replication filtering options. Also, do not use multi-table DML statements that update both filtered-in and filtered-out tables.

If you need to use a combination of database-level and table-level replication filters, and want these to operate as consistently as possible, choose one of the following strategies:

  1. If you use row-based binary logging format (binlog_format=ROW), for DDL statements, rely on the USE statement to set the database and do not specify the database name. You can consider changing to row-based binary logging format for improved consistency with replication filtering. See Section 5.4.4.2, “Setting The Binary Log Format” for the conditions that apply to changing the binary logging format.

  2. If you use statement-based or mixed binary logging format (binlog_format=STATEMENT or MIXED), for both DML and DDL statements, rely on the USE statement and do not use the database name. Also, do not use multi-table DML statements that update both filtered-in and filtered-out tables.

Example 16.7 A --replicate-ignore-db option and a --replicate-do-table option

On the source, the following statements are issued:

USE db1;
CREATE TABLE t2 LIKE t1;
INSERT INTO db2.t3 VALUES (1);

The replica has the following replication filtering options set:

replicate-ignore-db = db1
replicate-do-table = db2.t3

The DDL statement CREATE TABLE creates the table in db1, as specified by the preceding USE statement. The replica filters out this statement according to its --replicate-ignore-db = db1 option, because db1 is the current database. This result is the same whatever the binary logging format is on the source. However, the result of the DML INSERT statement is different depending on the binary logging format:

  • If row-based binary logging format is in use on the source (binlog_format=ROW), the replica evaluates the INSERT operation using the database where the table exists, which is named as db2. The database-level option --replicate-ignore-db = db1, which is evaluated first, therefore does not apply. The table-level option --replicate-do-table = db2.t3 does apply, so the replica applies the change to table t3.

  • If statement-based binary logging format is in use on the source (binlog_format=STATEMENT), the replica evaluates the INSERT operation using the default database, which was set by the USE statement to db1 and has not been changed. According to its database-level --replicate-ignore-db = db1 option, it therefore ignores the operation and does not apply the change to table t3. The table-level option --replicate-do-table = db2.t3 is not checked, because the statement already matched a database-level option and was ignored.

If the --replicate-ignore-db = db1 option on the replica is necessary, and the use of statement-based (or mixed) binary logging format on the source is also necessary, the results can be made consistent by omitting the database name from the INSERT statement and relying on a USE statement instead, as follows:

USE db1;
CREATE TABLE t2 LIKE t1;
USE db2;
INSERT INTO t3 VALUES (1);

In this case, the replica always evaluates the INSERT statement based on the database db2. Whether the operation is logged in statement-based or row-based binary format, the results remain the same.


16.3 Replication Solutions

Replication can be used in many different environments for a range of purposes. This section provides general notes and advice on using replication for specific solution types.

For information on using replication in a backup environment, including notes on the setup, backup procedure, and files to back up, see Section 16.3.1, “Using Replication for Backups”.

For advice and tips on using different storage engines on the source and replicas, see Section 16.3.3, “Using Replication with Different Source and Replica Storage Engines”.

Using replication as a scale-out solution requires some changes in the logic and operation of applications that use the solution. See Section 16.3.4, “Using Replication for Scale-Out”.

For performance or data distribution reasons, you may want to replicate different databases to different replicas. See Section 16.3.5, “Replicating Different Databases to Different Replicas”.

As the number of replicas increases, the load on the source can increase and lead to reduced performance (because of the need to replicate the binary log to each replica). For tips on improving your replication performance, including using a single secondary server as a replication source server, see Section 16.3.6, “Improving Replication Performance”.

For guidance on switching sources, or converting replicas into sources as part of an emergency failover solution, see Section 16.3.7, “Switching Sources During Failover”.

To secure your replication communication, you can encrypt the communication channel. For step-by-step instructions, see Section 16.3.8, “Setting Up Replication to Use Encrypted Connections”.

16.3.1 Using Replication for Backups

To use replication as a backup solution, replicate data from the source to a replica, and then back up the replica. The replica can be paused and shut down without affecting the running operation of the source, so you can produce an effective snapshot of live data that would otherwise require the source to be shut down.

How you back up a database depends on its size and whether you are backing up only the data, or the data and the replica state so that you can rebuild the replica in the event of failure. There are therefore two choices: use mysqldump to dump the data (see Section 16.3.1.1, “Backing Up a Replica Using mysqldump”), or copy the raw data files (see Section 16.3.1.2, “Backing Up Raw Data from a Replica”).

Another backup strategy, which can be used for either source or replica servers, is to put the server in a read-only state. The backup is performed against the read-only server, which then is changed back to its usual read/write operational status. See Section 16.3.1.3, “Backing Up a Source or Replica by Making It Read Only”.

16.3.1.1 Backing Up a Replica Using mysqldump

Using mysqldump to create a copy of a database enables you to capture all of the data in the database in a format that enables the information to be imported into another instance of MySQL Server (see Section 4.5.4, “mysqldump — A Database Backup Program”). Because the format of the information is SQL statements, the file can easily be distributed and applied to running servers in the event that you need access to the data in an emergency. However, if the size of your data set is very large, mysqldump may be impractical.

When using mysqldump, you should stop replication on the replica before starting the dump process to ensure that the dump contains a consistent set of data:

  1. Stop the replica from processing requests. You can stop replication completely on the replica using mysqladmin:

    shell> mysqladmin stop-slave

    Alternatively, you can stop only the replication SQL thread to pause event execution:

    shell> mysql -e 'STOP SLAVE SQL_THREAD;'

    This enables the replica to continue to receive data change events from the source's binary log and store them in the relay logs using the I/O thread, but prevents the replica from executing these events and changing its data. Within busy replication environments, permitting the I/O thread to run during backup may speed up the catch-up process when you restart the replication SQL thread.

  2. Run mysqldump to dump your databases. You may either dump all databases or select databases to be dumped. For example, to dump all databases:

    shell> mysqldump --all-databases > fulldb.dump
  3. Once the dump has completed, start replica operations again:

    shell> mysqladmin start-slave

In the preceding example, you may want to add login credentials (user name, password) to the commands, and bundle the process up into a script that you can run automatically each day.
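
A minimal sketch of such a script is shown below. It assumes that credentials are stored in a login path named backup (created with mysql_config_editor) and that /backups is the destination directory; both are assumptions you should adapt to your environment:

#!/bin/sh
# Pause replication, dump all databases, then resume replication.
# The "backup" login path and the /backups directory are examples only.
mysqladmin --login-path=backup stop-slave
mysqldump --login-path=backup --all-databases > /backups/fulldb.$(date +%F).dump
mysqladmin --login-path=backup start-slave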

If you use this approach, make sure you monitor the replication process to ensure that the time taken to run the backup does not affect the replica's ability to keep up with events from the source. See Section 16.1.7.1, “Checking Replication Status”. If the replica is unable to keep up, you may want to add another replica and distribute the backup process. For an example of how to configure this scenario, see Section 16.3.5, “Replicating Different Databases to Different Replicas”.

16.3.1.2 Backing Up Raw Data from a Replica

To guarantee the integrity of the files that are copied, backing up the raw data files on your MySQL replica should take place while your replica server is shut down. If the MySQL server is still running, background tasks may still be updating the database files, particularly those involving storage engines with background processes such as InnoDB. With InnoDB, these problems should be resolved during crash recovery, but since the replica server can be shut down during the backup process without affecting the execution of the source, it makes sense to take advantage of this capability.

To shut down the server and back up the files:

  1. Shut down the replica MySQL server:

    shell> mysqladmin shutdown
  2. Copy the data files. You can use any suitable copying or archive utility, including cp, tar or WinZip. For example, assuming that the data directory is located under the current directory, you can archive the entire directory as follows:

    shell> tar cf /tmp/dbbackup.tar ./data
  3. Start the MySQL server again. Under Unix:

    shell> mysqld_safe &

    Under Windows:

    C:\> "C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqld"

Normally you should back up the entire data directory for the replica MySQL server. If you want to be able to restore the data and operate as a replica (for example, in the event of failure of the replica), then in addition to the replica's data, you should also back up the replica status files, the replication metadata repositories, and the relay log files. These files are needed to resume replication after you restore the replica's data.

If you lose the relay logs but still have the relay-log.info file, you can check it to determine how far the replication SQL thread has executed in the source's binary logs. Then you can use CHANGE MASTER TO with the MASTER_LOG_FILE and MASTER_LOG_POS options to tell the replica to re-read the binary logs from that point. This requires that the binary logs still exist on the source server.
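
For example, if relay-log.info shows that the replication SQL thread had executed up to binary log file mysql-bin.000123, position 45678 (illustrative values), you could reposition the replica as follows:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    -> MASTER_LOG_FILE='mysql-bin.000123',
    -> MASTER_LOG_POS=45678;
mysql> START SLAVE;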

If your replica is replicating LOAD DATA statements, you should also back up any SQL_LOAD-* files that exist in the directory that the replica uses for this purpose. The replica needs these files to resume replication of any interrupted LOAD DATA operations. The location of this directory is the value of the slave_load_tmpdir system variable. If the server was not started with that variable set, the directory location is the value of the tmpdir system variable.

16.3.1.3 Backing Up a Source or Replica by Making It Read Only

It is possible to back up either source or replica servers in a replication setup by acquiring a global read lock and manipulating the read_only system variable to change the read-only state of the server to be backed up:

  1. Make the server read-only, so that it processes only retrievals and blocks updates.

  2. Perform the backup.

  3. Change the server back to its normal read/write state.

Note

The instructions in this section place the server to be backed up in a state that is safe for backup methods that get the data from the server, such as mysqldump (see Section 4.5.4, “mysqldump — A Database Backup Program”). You should not attempt to use these instructions to make a binary backup by copying files directly because the server may still have modified data cached in memory and not flushed to disk.

The following instructions describe how to do this for a source server and for a replica server. For both scenarios discussed here, suppose that you have the following replication setup:

  • A source server S1

  • A replica server R1 that has S1 as its source

  • A client C1 connected to S1

  • A client C2 connected to R1

In either scenario, the statements to acquire the global read lock and manipulate the read_only variable are performed on the server to be backed up and do not propagate to any replicas of that server.

Scenario 1: Backup with a Read-Only Source

Put the source S1 in a read-only state by executing these statements on it:

mysql> FLUSH TABLES WITH READ LOCK;
mysql> SET GLOBAL read_only = ON;

While S1 is in a read-only state, the following properties are true:

  • Requests for updates sent by C1 to S1 block because the server is in read-only mode.

  • Requests for query results sent by C1 to S1 succeed.

  • Making a backup on S1 is safe.

  • Making a backup on R1 is not safe. This server is still running, and might be processing the binary log or update requests coming from client C2.

While S1 is read only, perform the backup. For example, you can use mysqldump.
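
For example, from a host that can reach S1 (the host name and output file name here are placeholders):

shell> mysqldump --host=S1 --all-databases > s1_backup.sql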

After the backup operation on S1 completes, restore S1 to its normal operational state by executing these statements:

mysql> SET GLOBAL read_only = OFF;
mysql> UNLOCK TABLES;

Although performing the backup on S1 is safe (as far as the backup is concerned), it is not optimal for performance because clients of S1 are blocked from executing updates.

This strategy applies to backing up a source server in a replication setup, but can also be used for a single server in a nonreplication setting.

Scenario 2: Backup with a Read-Only Replica

Put the replica R1 in a read-only state by executing these statements on it:

mysql> FLUSH TABLES WITH READ LOCK;
mysql> SET GLOBAL read_only = ON;

While R1 is in a read-only state, the following properties are true:

  • The source S1 continues to operate, so making a backup on the source is not safe.

  • The replica R1 is stopped, so making a backup on the replica R1 is safe.

These properties provide the basis for a popular backup scenario: Having one replica busy performing a backup for a while is not a problem because it does not affect the entire network, and the system is still running during the backup. In particular, clients can still perform updates on the source server, which remains unaffected by backup activity on the replica.

While R1 is read only, perform the backup. For example, you can use mysqldump.

After the backup operation on R1 completes, restore R1 to its normal operational state by executing these statements:

mysql> SET GLOBAL read_only = OFF;
mysql> UNLOCK TABLES;

After the replica is restored to normal operation, it again synchronizes to the source by catching up with any outstanding updates from the binary log of the source.

16.3.2 Handling an Unexpected Halt of a Replica

In order for replication to be resilient to unexpected halts of the server (sometimes described as crash-safe) it must be possible for the replica to recover its state before halting. This section describes the impact of an unexpected halt of a replica during replication, and how to configure a replica for the best chance of recovery to continue replication.

After an unexpected halt of a replica, upon restart the replication SQL thread must recover information about which transactions have been executed already. The information required for recovery is stored in the replica's applier metadata repository. In older MySQL Server versions, this repository could only be created as a file in the data directory that was updated after the transaction had been applied. In MySQL 5.7 you can instead use an InnoDB table named mysql.slave_relay_log_info to store the applier metadata repository. As a table, updates to the applier metadata repository are committed together with the transactions, meaning that the replica's progress information recorded in that repository is always consistent with what has been applied to the database, even in the event of an unexpected server halt. To configure MySQL 5.7 to store the applier metadata repository as an InnoDB table, set the system variable relay_log_info_repository to TABLE. For more information on the applier metadata repository, see Section 16.2.4, “Relay Log and Replication Metadata Repositories”.
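
For example, you can make the setting permanent in the replica's option file:

[mysqld]
relay_log_info_repository = TABLE

or change it at runtime while the replication threads are stopped:

mysql> STOP SLAVE;
mysql> SET GLOBAL relay_log_info_repository = 'TABLE';
mysql> START SLAVE;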

The recovery process by which a replica recovers from an unexpected halt varies depending on the configuration of the replica. The details of the recovery process are influenced by the chosen method of replication, whether the replica is single-threaded or multithreaded, and the setting of relevant system variables. The overall aim of the recovery process is to identify what transactions had already been applied on the replica's database before the unexpected halt occurred, and retrieve and apply the transactions that the replica missed following the unexpected halt.

  • For GTID-based replication, the recovery process needs the GTIDs of the transactions that were already received or committed by the replica. The missing transactions can be retrieved from the source using GTID auto-positioning, which automatically compares the source's transactions to the replica's transactions and identifies the missing transactions.

  • For file position based replication, the recovery process needs an accurate replication SQL thread (applier) position showing the last transaction that was applied on the replica. Based on that position, the replication I/O thread (receiver) retrieves from the source's binary log all of the transactions that should be applied on the replica from that point on.

Using GTID-based replication makes it easiest to configure replication to be resilient to unexpected halts. GTID auto-positioning means the replica can reliably identify and retrieve missing transactions, even if there are gaps in the sequence of applied transactions.

The following information provides combinations of settings that are appropriate for different types of replica to guarantee recovery as far as this is under the control of replication.

Important

Some factors outside the control of replication can have an impact on the replication recovery process and the overall state of replication after the recovery process. In particular, the settings that influence the recovery process for individual storage engines might result in transactions being lost in the event of an unexpected halt of a replica, and therefore unavailable to the replication recovery process. The innodb_flush_log_at_trx_commit=1 setting mentioned in the list below is a key setting for a replication setup that uses InnoDB with transactions. However, other settings specific to InnoDB or to other storage engines, especially those relating to flushing or synchronization, can also have an impact. Always check for and apply recommendations made by your chosen storage engines about crash-safe settings.

The following combination of settings on a replica is the most resilient to unexpected halts (a consolidated option-file sketch appears at the end of this section):

  • When GTID-based replication is in use (gtid_mode=ON), set MASTER_AUTO_POSITION=1, which activates GTID auto-positioning for the connection to the source to automatically identify and retrieve missing transactions. This option is set using a CHANGE MASTER TO statement. If the replica has multiple replication channels, you need to set this option for each channel individually. For details of how GTID auto-positioning works, see Section 16.1.3.3, “GTID Auto-Positioning”. When file position based replication is in use, MASTER_AUTO_POSITION=1 is not used, and instead the binary log position or relay log position is used to control where replication starts.

  • Set sync_relay_log=1, which instructs the replication I/O thread to synchronize the relay log to disk after each received transaction is written to it. This means the replica's record of the current position read from the source's binary log (in the source metadata repository) is never ahead of the record of transactions saved in the relay log. Note that although this setting is the safest, it is also the slowest due to the number of disk writes involved. With sync_relay_log > 1, or sync_relay_log=0 (where synchronization is handled by the operating system), in the event of an unexpected halt of a replica there might be committed transactions that have not been synchronized to disk. Such transactions can cause the recovery process to fail if the recovering replica, based on the information it has in the relay log as last synchronized to disk, tries to retrieve and apply the transactions again instead of skipping them. Setting sync_relay_log=1 is particularly important for a multi-threaded replica, where the recovery process fails if gaps in the sequence of transactions cannot be filled using the information in the relay log. For a single-threaded replica, the recovery process only needs to use the relay log if the relevant information is not available in the applier metadata repository.

  • Set innodb_flush_log_at_trx_commit=1, which synchronizes the InnoDB logs to disk before each transaction is committed. This setting, which is the default, ensures that InnoDB tables and the InnoDB logs are saved on disk so that there is no longer a requirement for the information in the relay log regarding the transaction. Combined with the setting sync_relay_log=1, this setting further ensures that the content of the InnoDB tables and the InnoDB logs is consistent with the content of the relay log at all times, so that purging the relay log files cannot cause unfillable gaps in the replica's history of transactions in the event of an unexpected halt.

  • Set relay_log_info_repository = TABLE, which stores the replication SQL thread position in the InnoDB table mysql.slave_relay_log_info, and updates it together with the transaction commit to ensure a record that is always accurate. This setting is not the default in MySQL 5.7. If the default FILE setting is used, the information is stored in a file in the data directory that is updated after the transaction has been applied. This creates a risk of losing synchrony with the source depending at which stage of processing a transaction the replica halts at, or even corruption of the file itself. With the setting relay_log_info_repository = FILE, recovery is not guaranteed.

  • Set relay_log_recovery = ON, which enables automatic relay log recovery immediately following server startup. This global variable defaults to OFF and is read-only at runtime, but you can set it to ON with the --relay-log-recovery option at replica startup following an unexpected halt of a replica. Note that this setting ignores the existing relay log files, in case they are corrupted or inconsistent. The relay log recovery process starts a new relay log file and fetches transactions from the source beginning at the replication SQL thread position recorded in the applier metadata repository. The previous relay log files are removed over time by the replica's normal purge mechanism.

For a multithreaded replica, from MySQL 5.7.13, setting relay_log_recovery = ON automatically handles any inconsistencies and gaps in the sequence of transactions that have been executed from the relay log. These gaps can occur when file position based replication is in use. (For more details, see Section 16.4.1.32, “Replication and Transaction Inconsistencies”.) The relay log recovery process deals with gaps using the same method as the START SLAVE UNTIL SQL_AFTER_MTS_GAPS statement would. When the replica reaches a consistent gap-free state, the relay log recovery process goes on to fetch further transactions from the source beginning at the replication SQL thread position. In MySQL versions prior to MySQL 5.7.13, this process was not automatic and required starting the server with relay_log_recovery = OFF, starting the replica with START SLAVE UNTIL SQL_AFTER_MTS_GAPS to fix any transaction inconsistencies, and then restarting the replica with relay_log_recovery = ON. When GTID-based replication is in use, this process is unnecessary, and from MySQL 5.7.28 a multithreaded replica automatically skips relay log recovery when MASTER_AUTO_POSITION is set to ON, so the setting for relay_log_recovery makes no difference.
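
Taken together, these recommendations correspond to a replica option file along the following lines for a GTID-based setup. This is a sketch: the server ID is illustrative, and MASTER_AUTO_POSITION=1 is set separately with CHANGE MASTER TO rather than in the option file:

[mysqld]
server-id                      = 2
gtid-mode                      = ON
enforce-gtid-consistency       = ON
sync_relay_log                 = 1
innodb_flush_log_at_trx_commit = 1
relay_log_info_repository      = TABLE
relay_log_recovery             = ON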

16.3.3 Using Replication with Different Source and Replica Storage Engines

It does not matter for the replication process whether the source table on the source and the replicated table on the replica use different engine types. In fact, the default_storage_engine system variable is not replicated.

This provides a number of benefits in the replication process in that you can take advantage of different engine types for different replication scenarios. For example, in a typical scale-out scenario (see Section 16.3.4, “Using Replication for Scale-Out”), you want to use InnoDB tables on the source to take advantage of the transactional functionality, but use MyISAM on the replicas where transaction support is not required because the data is only read. When using replication in a data-logging environment you may want to use the Archive storage engine on the replica.

Configuring different engines on the source and replica depends on how you set up the initial replication process:

  • If you used mysqldump to create the database snapshot on your source, you could edit the dump file text to change the engine type used on each table.

    Another alternative for mysqldump is to disable engine types that you do not want to use on the replica before using the dump to build the data on the replica. For example, you can add the --skip-federated option on your replica to disable the FEDERATED engine. If a specific engine does not exist for a table to be created, MySQL uses the default engine type instead (InnoDB by default in MySQL 5.7). (This requires that the NO_ENGINE_SUBSTITUTION SQL mode is not enabled.) If you want to disable additional engines in this way, you may want to consider building a special binary to be used on the replica that supports only the engines you want.

  • If you are using raw data files (a binary backup) to set up the replica, you cannot change the initial table format. Instead, use ALTER TABLE to change the table types after the replica has been started.

  • For new source/replica replication setups where there are currently no tables on the source, avoid specifying the engine type when creating new tables.

If you are already running a replication solution and want to convert your existing tables to another engine type, follow these steps:

  1. Stop the replica from running replication updates:

    mysql> STOP SLAVE;
    

    This enables you to change engine types without interruptions.

  2. Execute an ALTER TABLE ... ENGINE='engine_type' for each table to be changed.

  3. Start the replication process again:

    mysql> START SLAVE;
    

Although the default_storage_engine variable is not replicated, be aware that CREATE TABLE and ALTER TABLE statements that include the engine specification are correctly replicated to the replica. For example, if you have a CSV table and you execute:

mysql> ALTER TABLE csvtable ENGINE='MyISAM';

The previous statement is replicated to the replica and the engine type on the replica is converted to MyISAM, even if you have previously changed the table type on the replica to an engine other than CSV. If you want to retain engine differences on the source and replica, you should be careful to use the default_storage_engine variable on the source when creating a new table. For example, instead of:

mysql> CREATE TABLE tablea (columna int) ENGINE=MyISAM;

Use this format:

mysql> SET default_storage_engine=MyISAM;
mysql> CREATE TABLE tablea (columna int);

When replicated, the default_storage_engine variable will be ignored, and the CREATE TABLE statement executes on the replica using the replica's default engine.

16.3.4 Using Replication for Scale-Out

You can use replication as a scale-out solution; that is, where you want to split up the load of database queries across multiple database servers, within some reasonable limitations.

Because replication works from the distribution of one source to one or more replicas, using replication for scale-out works best in an environment where you have a high number of reads and low number of writes/updates. Most websites fit into this category, where users are browsing the website, reading articles, posts, or viewing products. Updates only occur during session management, or when making a purchase or adding a comment/message to a forum.

Replication in this situation enables you to distribute the reads over the replicas, while still enabling your web servers to communicate with the source when a write is required. You can see a sample replication layout for this scenario in Figure 16.1, “Using Replication to Improve Performance During Scale-Out”.

Figure 16.1 Using Replication to Improve Performance During Scale-Out

Incoming requests from clients are directed to a load balancer, which distributes client data among a number of web clients. Writes made by web clients are directed to a single MySQL source server, and reads made by web clients are directed to one of three MySQL replica servers. Replication takes place from the MySQL source server to the three MySQL replica servers.

If the part of your code that is responsible for database access has been properly abstracted/modularized, converting it to run with a replicated setup should be very smooth and easy. Change the implementation of your database access to send all writes to the source, and to send reads to either the source or a replica. If your code does not have this level of abstraction, setting up a replicated system gives you the opportunity and motivation to clean it up. Start by creating a wrapper library or module that implements the following functions:

  • safe_writer_connect()

  • safe_reader_connect()

  • safe_reader_statement()

  • safe_writer_statement()

safe_ in each function name means that the function takes care of handling all error conditions. You can use different names for the functions. The important thing is to have a unified interface for connecting for reads, connecting for writes, doing a read, and doing a write.

Then convert your client code to use the wrapper library. This may be a painful and scary process at first, but it pays off in the long run. All applications that use the approach just described are able to take advantage of a source/replica configuration, even one involving multiple replicas. The code is much easier to maintain, and adding troubleshooting options is trivial. You need modify only one or two functions (for example, to log how long each statement took, or which statement among those issued gave you an error).

If you have written a lot of code, you may want to automate the conversion task by using the replace utility that comes with standard MySQL distributions, or write your own conversion script. Ideally, your code uses consistent programming style conventions. If not, then you are probably better off rewriting it anyway, or at least going through and manually regularizing it to use a consistent style.

16.3.5 Replicating Different Databases to Different Replicas

There may be situations where you have a single source and want to replicate different databases to different replicas. For example, you may want to distribute different sales data to different departments to help spread the load during data analysis. A sample of this layout is shown in Figure 16.2, “Replicating Databases to Separate Replicas”.

Figure 16.2 Replicating Databases to Separate Replicas

The MySQL source has three databases, databaseA, databaseB, and databaseC. databaseA is replicated only to MySQL Replica 1, databaseB is replicated only to MySQL Replica 2, and databaseC is replicated only to MySQL Replica 3.

You can achieve this separation by configuring the source and replicas as normal, and then limiting the binary log statements that each replica processes by using the --replicate-wild-do-table configuration option on each replica.

Important

You should not use --replicate-do-db for this purpose when using statement-based replication, since statement-based replication causes this option's effects to vary according to the database that is currently selected. This applies to mixed-format replication as well, since this enables some updates to be replicated using the statement-based format.

However, it should be safe to use --replicate-do-db for this purpose if you are using row-based replication only, since in this case the currently selected database has no effect on the option's operation.

For example, to support the separation as shown in Figure 16.2, “Replicating Databases to Separate Replicas”, you should configure each replica as follows, before executing START SLAVE:

  • Replica 1 should use --replicate-wild-do-table=databaseA.%.

  • Replica 2 should use --replicate-wild-do-table=databaseB.%.

  • Replica 3 should use --replicate-wild-do-table=databaseC.%.

Each replica in this configuration receives the entire binary log from the source, but executes only those events from the binary log that apply to the databases and tables included by the --replicate-wild-do-table option in effect on that replica.
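
For example, Replica 1 could include the following in its option file; Replica 2 and Replica 3 would name databaseB.% and databaseC.% respectively (the server ID shown is illustrative):

[mysqld]
server-id = 11
replicate-wild-do-table = databaseA.%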

If you have data that must be synchronized to the replicas before replication starts, you have a number of choices:

  • Synchronize all the data to each replica, and delete the databases, tables, or both that you do not want to keep.

  • Use mysqldump to create a separate dump file for each database and load the appropriate dump file on each replica.

  • Use a raw data file dump and include only the specific files and databases that you need for each replica.

    Note

    This does not work with InnoDB databases unless you use innodb_file_per_table.

16.3.6 Improving Replication Performance

As the number of replicas connecting to a source increases, the load, although minimal, also increases, as each replica uses a client connection to the source. Also, as each replica must receive a full copy of the source's binary log, the network load on the source may also increase and create a bottleneck.

If you are using a large number of replicas connected to one source, and that source is also busy processing requests (for example, as part of a scale-out solution), then you may want to improve the performance of the replication process.

One way to improve the performance of the replication process is to create a deeper replication structure that enables the source to replicate to only one replica, and for the remaining replicas to connect to this primary replica for their individual replication requirements. A sample of this structure is shown in Figure 16.3, “Using an Additional Replication Source to Improve Performance”.

Figure 16.3 Using an Additional Replication Source to Improve Performance

The server MySQL Source 1 replicates to the server MySQL Source 2, which in turn replicates to the servers MySQL Replica 1, MySQL Replica 2, and MySQL Replica 3.

For this to work, you must configure the MySQL instances as follows:

  • Source 1 is the primary source where all changes and updates are written to the database. Binary logging should be enabled on this machine.

  • Source 2 is the replica of Source 1 that provides the replication functionality to the remainder of the replicas in the replication structure. Source 2 is the only machine permitted to connect to Source 1. Source 2 also has binary logging enabled, and the log_slave_updates system variable enabled so that replication instructions from Source 1 are also written to Source 2's binary log so that they can then be replicated to the true replicas.

  • Replica 1, Replica 2, and Replica 3 act as replicas to Source 2, and replicate the information from Source 2, which actually consists of the updates logged on Source 1.

The above solution reduces the client load and the network interface load on the primary source, which should improve the overall performance of the primary source when used as a direct database solution.
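
As a sketch, Source 2 might be configured along these lines; the server ID and log base names are illustrative, and the connection to Source 1 is configured separately with CHANGE MASTER TO:

[mysqld]
server-id = 2
log-bin   = source2-bin
log-slave-updates
relay-log = source2-relay-bin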

If your replicas are having trouble keeping up with the replication process on the source, there are a number of options available:

  • If possible, put the relay logs and the data files on different physical drives. To do this, set the relay_log system variable to specify the location of the relay log.

  • If the replicas are significantly slower than the source, you may want to divide up the responsibility for replicating different databases to different replicas. See Section 16.3.5, “Replicating Different Databases to Different Replicas”.

  • If your source makes use of transactions and you are not concerned about transaction support on your replicas, use MyISAM or another nontransactional engine on the replicas. See Section 16.3.3, “Using Replication with Different Source and Replica Storage Engines”.

  • If your replicas are not acting as sources, and you have a potential solution in place to ensure that you can bring up a source in the event of failure, then you can disable the log_slave_updates system variable on the replicas. This prevents such replicas from also logging events they have executed into their own binary log.

16.3.7 Switching Sources During Failover

You can tell a replica to change to a new source using the CHANGE MASTER TO statement. The replica does not check whether the databases on the source are compatible with those on the replica; it simply begins reading and executing events from the specified coordinates in the new source's binary log. In a failover situation, all the servers in the group are typically executing the same events from the same binary log file, so changing the source of the events should not affect the structure or integrity of the database, provided that you exercise care in making the change.

Replicas should be run with the --log-bin option, and if not using GTIDs then they should also be run without enabling the log_slave_updates system variable. In this way, the replica is ready to become a source without restarting the replica mysqld. Assume that you have the structure shown in Figure 16.4, “Redundancy Using Replication, Initial Structure”.

Figure 16.4 Redundancy Using Replication, Initial Structure

Two web clients direct both database reads and database writes to a single MySQL source server. The MySQL source server replicates to Replica 1, Replica 2, and Replica 3.

In this diagram, the MySQL Source holds the source database, the Replica hosts are replicas, and the Web Client machines are issuing database reads and writes. Web clients that issue only reads (and would normally be connected to the replicas) are not shown, as they do not need to switch to a new server in the event of failure. For a more detailed example of a read/write scale-out replication structure, see Section 16.3.4, “Using Replication for Scale-Out”.

Each MySQL replica (Replica 1, Replica 2, and Replica 3) is a replica running with --log-bin and without enabling the log_slave_updates system variable. Because updates received by a replica from the source are not logged in the binary log unless log_slave_updates is enabled, the binary log on each replica is empty initially. If for some reason MySQL Source becomes unavailable, you can pick one of the replicas to become the new source. For example, if you pick Replica 1, all Web Clients should be redirected to Replica 1, which writes the updates to its binary log. Replica 2 and Replica 3 should then replicate from Replica 1.

The reason for running the replica without log_slave_updates enabled is to prevent replicas from receiving updates twice in case you cause one of the replicas to become the new source. If Replica 1 has log_slave_updates enabled, it writes any updates that it receives from MySQL Source in its own binary log. This means that, when Replica 2 changes from MySQL Source to Replica 1 as its source, it may receive updates from Replica 1 that it has already received from MySQL Source.

Make sure that all replicas have processed any statements in their relay log. On each replica, issue STOP SLAVE IO_THREAD, then check the output of SHOW PROCESSLIST until you see Has read all relay log. When this is true for all replicas, they can be reconfigured to the new setup. On Replica 1, the replica being promoted to become the source, issue STOP SLAVE and RESET MASTER.

On the other replicas Replica 2 and Replica 3, use STOP SLAVE and CHANGE MASTER TO MASTER_HOST='Replica1' (where 'Replica1' represents the real host name of Replica 1). To use CHANGE MASTER TO, add all information about how to connect to Replica 1 from Replica 2 or Replica 3 (user, password, port). When issuing the CHANGE MASTER TO statement in this case, there is no need to specify the name of the Replica 1 binary log file or log position to read from, since the first binary log file and position 4 are the defaults. Finally, execute START SLAVE on Replica 2 and Replica 3.
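
Putting these steps together, the statement sequence looks roughly like this; the host name 'Replica1' and the replication account details are placeholders. On every replica, first issue:

mysql> STOP SLAVE IO_THREAD;

and wait until SHOW PROCESSLIST reports Has read all relay log. Then, on Replica 1 (the replica being promoted):

mysql> STOP SLAVE;
mysql> RESET MASTER;

On Replica 2 and Replica 3:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    -> MASTER_HOST='Replica1',
    -> MASTER_USER='repl',
    -> MASTER_PASSWORD='password';
mysql> START SLAVE;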

Once the new replication setup is in place, you need to tell each Web Client to direct its statements to Replica 1. From that point on, all update statements sent by the Web Clients to Replica 1 are written to the binary log of Replica 1, which then contains every update statement sent to Replica 1 since MySQL Source failed.

The resulting server structure is shown in Figure 16.5, “Redundancy Using Replication, After Source Failure”.

Figure 16.5 Redundancy Using Replication, After Source Failure

The MySQL source server has failed, and is no longer connected into the replication topology. The two web clients now direct both database reads and database writes to Replica 1, which is the new source. Replica 1 replicates to Replica 2 and Replica 3.

When MySQL Source becomes available again, you should make it a replica of Replica 1. To do this, issue on MySQL Source the same CHANGE MASTER TO statement as that issued on Replica 2 and Replica 3 previously. MySQL Source then becomes a replica of Replica 1 and picks up the Web Client writes that it missed while it was offline.

To make MySQL Source a source again, use the preceding procedure as if Replica 1 was unavailable and MySQL Source was to be the new source. During this procedure, do not forget to run RESET MASTER on MySQL Source before making Replica 1, Replica 2, and Replica 3 replicas of MySQL Source. If you fail to do this, the replicas may pick up stale writes from the Web Client applications dating from before the point at which MySQL Source became unavailable.

You should be aware that there is no synchronization between replicas, even when they share the same source, and thus some replicas might be considerably ahead of others. This means that in some cases the procedure outlined in the previous example might not work as expected. In practice, however, relay logs on all replicas should be relatively close together.

One way to keep applications informed about the location of the source is to have a dynamic DNS entry for the source. With BIND, you can use nsupdate to update the DNS dynamically.

16.3.8 Setting Up Replication to Use Encrypted Connections

To use an encrypted connection for the transfer of the binary log required during replication, both the source and the replica servers must support encrypted network connections. If either server does not support encrypted connections (because it has not been compiled or configured for them), replication through an encrypted connection is not possible.

Setting up encrypted connections for replication is similar to doing so for client/server connections. You must obtain (or create) a suitable security certificate that you can use on the source, and a similar certificate (from the same certificate authority) on each replica. You must also obtain suitable key files.

For more information on setting up a server and client for encrypted connections, see Section 6.3.1, “Configuring MySQL to Use Encrypted Connections”.

To enable encrypted connections on the source, you must create or obtain suitable certificate and key files, and then add the following configuration parameters to the source's configuration within the [mysqld] section of the source's my.cnf file, changing the file names as necessary:

[mysqld]
ssl_ca=cacert.pem
ssl_cert=server-cert.pem
ssl_key=server-key.pem

The paths to the files may be relative or absolute; we recommend that you always use complete paths for this purpose.

The configuration parameters are as follows:

  • ssl_ca: The path name of the Certificate Authority (CA) certificate file. (--ssl-capath is similar but specifies the path name of a directory of CA certificate files.)

  • ssl_cert: The path name of the server public key certificate file. This certificate can be sent to the client and authenticated against the CA certificate that it has.

  • ssl_key: The path name of the server private key file.

To enable encrypted connections on the replica, use the CHANGE MASTER TO statement. You can either name the replica certificate and SSL private key files required for the encrypted connection in the [client] section of the replica's my.cnf file, or you can explicitly specify that information using the CHANGE MASTER TO statement. For more information on the CHANGE MASTER TO statement, see Section 13.4.2.1, “CHANGE MASTER TO Statement”.

  • To name the replica certificate and key files using an option file, add the following lines to the [client] section of the replica's my.cnf file, changing the file names as necessary:

    [client]
    ssl-ca=cacert.pem
    ssl-cert=client-cert.pem
    ssl-key=client-key.pem
    
  • Restart the replica server, using the --skip-slave-start option to prevent the replica from connecting to the source. Use CHANGE MASTER TO to specify the source configuration, and add the MASTER_SSL option to connect using encryption:

    mysql> CHANGE MASTER TO
        -> MASTER_HOST='source_hostname',
        -> MASTER_USER='repl',
        -> MASTER_PASSWORD='password',
        -> MASTER_SSL=1;
    

    Setting MASTER_SSL=1 for a replication connection and then setting no further MASTER_SSL_xxx options corresponds to setting --ssl-mode=REQUIRED for the client, as described in Command Options for Encrypted Connections. With MASTER_SSL=1, the connection attempt only succeeds if an encrypted connection can be established. A replication connection does not fall back to an unencrypted connection, so there is no setting corresponding to the --ssl-mode=PREFERRED setting for replication. If MASTER_SSL=0 is set, this corresponds to --ssl-mode=DISABLED.

  • To name the replica certificate and SSL private key files using the CHANGE MASTER TO statement, if you did not do this in the replica's my.cnf file, add the appropriate MASTER_SSL_xxx options:

        -> MASTER_SSL_CA = 'ca_file_name',
        -> MASTER_SSL_CAPATH = 'ca_directory_name',
        -> MASTER_SSL_CERT = 'cert_file_name',
        -> MASTER_SSL_KEY = 'key_file_name',
    

    These options correspond to the --ssl-xxx options with the same names, as described in Command Options for Encrypted Connections. For these options to take effect, MASTER_SSL=1 must also be set. For a replication connection, specifying a value for either of MASTER_SSL_CA or MASTER_SSL_CAPATH, or specifying these options in the replica's my.cnf file, corresponds to setting --ssl-mode=VERIFY_CA. The connection attempt only succeeds if a valid matching Certificate Authority (CA) certificate is found using the specified information.

  • To activate host name identity verification, add the MASTER_SSL_VERIFY_SERVER_CERT option:

        -> MASTER_SSL_VERIFY_SERVER_CERT=1,
    

    This option corresponds to the --ssl-verify-server-cert option, which is deprecated as of MySQL 5.7.11 and is removed in MySQL 8.0. For a replication connection, specifying MASTER_SSL_VERIFY_SERVER_CERT=1 corresponds to setting --ssl-mode=VERIFY_IDENTITY, as described in Command Options for Encrypted Connections. For this option to take effect, MASTER_SSL=1 must also be set. Host name identity verification does not work with self-signed certificates.

  • To activate certificate revocation list (CRL) checks, add the MASTER_SSL_CRL or MASTER_SSL_CRLPATH option:

        -> MASTER_SSL_CRL = 'crl_file_name',
        -> MASTER_SSL_CRLPATH = 'crl_directory_name',

    These options correspond to the --ssl-xxx options with the same names, as described in Command Options for Encrypted Connections. If they are not specified, no CRL checking takes place.

  • To specify lists of ciphers and encryption protocols permitted by the replica for the replication connection, add the MASTER_SSL_CIPHER and MASTER_TLS_VERSION options:

        -> MASTER_SSL_CIPHER = 'cipher_list',
        -> MASTER_TLS_VERSION = 'protocol_list',

    The MASTER_SSL_CIPHER option specifies the list of ciphers permitted by the replica for the replication connection, with one or more cipher names separated by colons. The MASTER_TLS_VERSION option specifies the encryption protocols permitted by the replica for the replication connection. The format is like that for the tls_version system variable, with one or more comma-separated protocol versions. The protocols and ciphers that you can use in these lists depend on the SSL library used to compile MySQL. For information about the formats and permitted values, see Section 6.3.2, “Encrypted Connection TLS Protocols and Ciphers”.

  • After the source information has been updated, start the replication process:

    mysql> START SLAVE;
    

    You can use the SHOW SLAVE STATUS statement to confirm that an encrypted connection was established successfully.

  • Requiring encrypted connections on the replica does not ensure that the source requires encrypted connections from replicas. If you want to ensure that the source only accepts replicas that connect using encrypted connections, create a replication user account on the source using the REQUIRE SSL option, then grant that user the REPLICATION SLAVE privilege. For example:

    mysql> CREATE USER 'repl'@'%.example.com' IDENTIFIED BY 'password'
        -> REQUIRE SSL;
    mysql> GRANT REPLICATION SLAVE ON *.*
        -> TO 'repl'@'%.example.com';
    

    If you have an existing replication user account on the source, you can add REQUIRE SSL to it with this statement:

    mysql> ALTER USER 'repl'@'%.example.com' REQUIRE SSL;
    

16.3.9 Semisynchronous Replication

In addition to the built-in asynchronous replication, MySQL 5.7 supports an interface to semisynchronous replication that is implemented by plugins. This section discusses what semisynchronous replication is and how it works. The following sections cover the administrative interface to semisynchronous replication and how to install, configure, and monitor it.

MySQL replication by default is asynchronous. The source writes events to its binary log and replicas request them when they are ready. The source does not know whether or when a replica has retrieved and processed the transactions, and there is no guarantee that any event ever reaches any replica. With asynchronous replication, if the source crashes, transactions that it has committed might not have been transmitted to any replica. Failover from source to replica in this case might result in failover to a server that is missing transactions relative to the source.

With fully synchronous replication, when a source commits a transaction, all replicas must also have committed the transaction before the source returns to the session that performed the transaction. Fully synchronous replication means failover from the source to any replica is possible at any time. The drawback of fully synchronous replication is that there might be a lot of delay to complete a transaction.

Semisynchronous replication falls between asynchronous and fully synchronous replication. The source waits until at least one replica has received and logged the events (the required number of replicas is configurable), and then commits the transaction. The source does not wait for all replicas to acknowledge receipt, and it requires only an acknowledgement from the replicas, not that the events have been fully executed and committed on the replica side. Semisynchronous replication therefore guarantees that if the source crashes, all the transactions that it has committed have been transmitted to at least one replica.

Compared to asynchronous replication, semisynchronous replication provides improved data integrity, because when a commit returns successfully, it is known that the data exists in at least two places. Until a semisynchronous source receives acknowledgment from the required number of replicas, the transaction is on hold and not committed.

Compared to fully synchronous replication, semisynchronous replication is faster, because it can be configured to balance your requirements for data integrity (the number of replicas acknowledging receipt of the transaction) with the speed of commits, which are slower due to the need to wait for replicas.

Important

With semisynchronous replication, if the source crashes and a failover to a replica is carried out, the failed source should not be reused as the replication source server, and should be discarded. It could have transactions that were not acknowledged by any replica, which were therefore not committed before the failover.

If your goal is to implement a fault-tolerant replication topology where all the servers receive the same transactions in the same order, and a server that crashes can rejoin the group and be brought up to date automatically, you can use Group Replication to achieve this. For information, see Chapter 17, Group Replication.

The performance impact of semisynchronous replication compared to asynchronous replication is the tradeoff for increased data integrity. The amount of slowdown is at least the TCP/IP roundtrip time to send the commit to the replica and wait for the acknowledgment of receipt by the replica. This means that semisynchronous replication works best for close servers communicating over fast networks, and worst for distant servers communicating over slow networks. Semisynchronous replication also places a rate limit on busy sessions by constraining the speed at which binary log events can be sent from source to replica. When one user session is too busy, this slows it down, which can be useful in some deployment situations.

Semisynchronous replication between a source and its replicas operates as follows:

  • A replica indicates whether it is semisynchronous-capable when it connects to the source.

  • If semisynchronous replication is enabled on the source side and there is at least one semisynchronous replica, a thread that performs a transaction commit on the source blocks and waits until at least one semisynchronous replica acknowledges that it has received all events for the transaction, or until a timeout occurs.

  • The replica acknowledges receipt of a transaction's events only after the events have been written to its relay log and flushed to disk.

  • If a timeout occurs without any replica having acknowledged the transaction, the source reverts to asynchronous replication. When at least one semisynchronous replica catches up, the source returns to semisynchronous replication.

  • Semisynchronous replication must be enabled on both the source and replica sides. If semisynchronous replication is disabled on the source, or enabled on the source but on no replicas, the source uses asynchronous replication.

While the source is blocking (waiting for acknowledgment from a replica), it does not return to the session that performed the transaction. When the block ends, the source returns to the session, which then can proceed to execute other statements. At this point, the transaction has committed on the source side, and receipt of its events has been acknowledged by at least one replica. The number of replica acknowledgments the source must receive per transaction before returning to the session is configurable using the rpl_semi_sync_master_wait_for_slave_count system variable, for which the default value is 1.
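
For example, to require acknowledgment from two replicas before the source returns to the committing session, assuming the semisynchronous source plugin is already installed and enabled as described in the following sections:

mysql> SET GLOBAL rpl_semi_sync_master_wait_for_slave_count = 2;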

Blocking also occurs after rollbacks that are written to the binary log, which occurs when a transaction that modifies nontransactional tables is rolled back. The rolled-back transaction is logged even though it has no effect for transactional tables because the modifications to the nontransactional tables cannot be rolled back and must be sent to replicas.

For statements that do not occur in transactional context (that is, when no transaction has been started with START TRANSACTION or SET autocommit = 0), autocommit is enabled and each statement commits implicitly. With semisynchronous replication, the source blocks for each such statement, just as it does for explicit transaction commits.

The rpl_semi_sync_master_wait_point system variable controls the point at which a semisynchronous replication source waits for replica acknowledgment of transaction receipt before returning a status to the client that committed the transaction. These values are permitted:

  • AFTER_SYNC (the default): The source writes each transaction to its binary log and the replica, and syncs the binary log to disk. The source waits for replica acknowledgment of transaction receipt after the sync. Upon receiving acknowledgment, the source commits the transaction to the storage engine and returns a result to the client, which then can proceed.

  • AFTER_COMMIT: The source writes each transaction to its binary log and the replica, syncs the binary log, and commits the transaction to the storage engine. The source waits for replica acknowledgment of transaction receipt after the commit. Upon receiving acknowledgment, the source returns a result to the client, which then can proceed.

The replication characteristics of these settings differ as follows:

  • With AFTER_SYNC, all clients see the committed transaction at the same time, which is after it has