If you run a replicated MySQL setup and the master fails, you may see the following error in the slave's error log.
[ERROR] Error reading packet from server: Client requested master to start replication from impossible position ( server_errno=1236)
At this point your slave will no longer be replicating and you will need to recover from the situation. But why did the error occur in the first place, and what does it mean for your data integrity?
Why it occurs
This error occurs when the master stops unexpectedly, for example as a result of a hardware failure (such as a disk error) or a power loss. Whenever this happens, a chain of events leads to the problem.
First of all, MySQL holds a lot of information in memory. Even with InnoDB there may be a pending transaction that has not yet been written to the log, but more importantly the server may not have been able to write the Stop event to the binlog. It is this missing event that causes the error. To understand why, and why the slave can't simply carry on, we need to look at how slaves handle binlog rollovers.
When a SQL statement that potentially modifies data is executed, that statement - or the resulting row changes - is written to the binary log (binlog). Several other events are written as well. The first event in a binlog is always the Start event, which mysqlbinlog displays as something like:
# at 4
#110901 14:06:01 server id 1  end_log_pos 106   Start: binlog v 4, server v 5.1.54-1ubuntu4-log created 110901 14:06:01 at startup
ROLLBACK/*!*/;
BINLOG '
qQRfTg8BAAAAZgAAAGoAAAAAAAQANS4xLjU0LTF1YnVudHU0LWxvZwAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAACpBF9OEzgNAAgAEgAEBAQEEgAAUwAEGggAAAAICAgC
'/*!*/;
The first line, # at 4, indicates the position of the event within the binlog file. As this is the first event, it is always at position 4 (immediately after the four-byte magic number at the start of the file). The important part is the 'Start: ' section, which tells you this is the Start event. The end_log_pos tells the slave that the next event starts 106 bytes from the start of the file. The slave uses this end position to request the next event from the master.
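You can also see these positions without parsing mysqlbinlog's raw output: the SHOW BINLOG EVENTS statement reports the same position and end position for each event. The binlog file name below is only an example; substitute one of your own.

```sql
-- Run on the master. Lists the first few events of the named binlog,
-- including each event's position (Pos) and end position (End_log_pos).
SHOW BINLOG EVENTS IN 'mysql-bin.000002' FROM 4 LIMIT 3;
```

The first row returned is the Start event at position 4 (reported with the event type Format_desc in 5.1).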
During normal operation the binlog may reach its maximum size (or age) and need to be rolled over to a new binlog. To tell the slave about this the server writes a Rotate event to the binlog, which will look something like:
# at 3128
#110901 14:32:37 server id 1  end_log_pos 3171  Rotate to mysql-bin.000003  pos: 4
Note that it tells the slave what the new binlog file will be and what position to start at.
When a MySQL master is shut down cleanly, it writes a Stop event to the binlog. This tells the slave that there is no more work to do in this binlog and that it should expect a new binlog to be opened the next time the server starts. Every time the MySQL server starts it creates a new binlog rather than reusing an existing one, which allows slaves to recover from a master being taken offline and restarted.
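Because a new binlog is created on every restart, you can see the rollover history directly on the master:

```sql
-- Run on the master: lists every binlog the server knows about,
-- with the size of each file.
SHOW BINARY LOGS;

-- Shows the binlog the master is currently writing to, and the
-- position at which the next event will be written.
SHOW MASTER STATUS;
```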
Now consider how the slave requests events. The master does not push binlogs out to the slave; the slave asks for each event, so this position information is vital to ensure the slave asks the master for the correct data. If neither the Stop nor the Rotate event is written, the slave only has the last event it received, whose end position points at the next event - which in this case does not exist. This is what the master is complaining about.
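On the slave you can see exactly which file and position it will request next. When this error strikes, the requested position points past the end of the master's last binlog:

```sql
-- Run on the slave. The fields Master_Log_File and Read_Master_Log_Pos
-- show the file and offset of the next event the I/O thread will request;
-- Last_IO_Error will contain the "impossible position" message.
SHOW SLAVE STATUS\G
```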
Why the slave can't auto recover
The question you are probably asking is why the master doesn't just tell the slave to request the new binlog instead, or why the slave doesn't simply back off and retry with the next binlog sequence number.
At the time of this error the master doesn't know why the binlog was not closed properly. It is a new instance of the master reading the file and sending the information to the slave; it cannot tell whether there was a disk failure, a massive loss of transactions, or even a rogue slave. Instead it must report an error that it cannot resolve, and the slave should not try to resolve it on its own. It is far better for replication to stop until a human can intervene and check the problem than for it to continue, only to break later when inconsistencies between the slave and master surface.
How to recover
Getting replication started again is relatively easy. Use the CHANGE MASTER statement to tell the slave about the new binlog file and position. Note that new binlogs always start at position 4. The syntax is:
CHANGE MASTER TO MASTER_LOG_FILE='binlog-name', MASTER_LOG_POS=4;
binlog-name should be the name of the new binary log, i.e. the one the master created when it restarted.
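Putting it together, a typical recovery session looks like this. The binlog name is an example; get the real one from SHOW MASTER STATUS or SHOW BINARY LOGS on the master.

```sql
-- On the master: find the binlog created when the server restarted.
SHOW MASTER STATUS;

-- On the slave: stop replication and point it at the new binlog.
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE='mysql-bin.000004',  -- example name from SHOW MASTER STATUS
    MASTER_LOG_POS=4;                    -- new binlogs always start at 4
-- Hold off on START SLAVE until you have checked data consistency.
```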
What is harder to handle is the potential inconsistency caused by missing transactions. If the master only failed to close the binlog file cleanly, this may not be an issue; but if other transactions were never written to the binlog, your slave may be in an inconsistent state. In that case the only way you will find out is to compare the data written on the master around the time of the failure with the corresponding data on the slave.
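One quick, if coarse, way to compare is to checksum the tables that were being written to at the time of the crash. The table name here is only an illustration; dedicated tools such as pt-table-checksum do this more thoroughly.

```sql
-- Run the same statement on both master and slave and compare the results.
-- Differing checksums mean the slave is missing (or has extra) changes.
CHECKSUM TABLE mydb.orders EXTENDED;
```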
Once you are happy with the state of the slave, you can issue the START SLAVE command.