How to: Move / Copy a live MySQL database (and what not to do)
Mike Peters, 04-16-2007
The Challenge:
Moving one of our MySQL database servers to a new machine. Database is live and under constant heavy load, 80GB in size, with 5,000+ tables. Taking the machine down for 'scheduled maintenance' is not an option. With the database handling financial transactions and campaign stats tracking, we couldn't afford any mistakes or the slightest chance of data loss.
Suggested solutions:
* MySQLHotCopy - a Perl script that locks MySQL tables, then uses system-level "cp" or "scp" to make a database copy quickly. Upon completion, MySQLHotCopy flushes the tables and releases the lock. The fastest way, but it only works if the source and destination are on the same machine.
Not suited for our case, and I'm also not clear on the usefulness of this script. If you're looking to create a database backup for disaster recovery, you're better off backing up to a remote machine.
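For reference, a typical invocation looks something like this (database name and backup path are placeholders):
mysqlhotcopy --user=root --password=PASSWORD db_name /path/to/backup   # db_name and path are placeholders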
* rsync - a utility that provides fast incremental file transfer.
RSync-ing directories on two remote machines is as simple as:
rsync --progress --stats --compress --rsh=/usr/local/bin/ssh /www/* webserver:simple_path_name
Works great for daily file backups but not suitable for a MySQL database. If you're rsyncing while your database is live, you'll never reach a "safe" point where the database DNS record can be switched over (causing all future requests to land on the new machine) with a guarantee that all data up to that point has been transferred successfully.
Depending on your load, you will experience some degree of data loss.
* MySQLDump - Suggested by our hosting provider, MySQLDump creates a file of MySQL statements that can later be executed on the destination server to safely recreate the database.
mysqldump -hREMOTE_HOST -uroot -pREMOTEpwd --opt --compress REMOTEdb | mysql -uroot -pLOCALpwd LOCALdb
This command can be executed on the destination server and it will pull the contents of the source database, recreating all tables and data.
Unfortunately using this approach would require taking the source database server down to avoid loss of data. In our case, even shutting down for a few minutes was not an option we were willing to live with.
Chosen Method - Replication:
MySQL Replication, a solution suggested by our resident database guru, is the process of setting up a master/slave relationship between two database servers, where the slave continually and automatically gets fed all MySQL queries from the master machine, resulting in a replication of data across the two machines.
Once replication has been configured and activated, the slave machine will act as a live MySQL database, holding an up-to-the-minute snapshot of all data from the master machine.
What we love about this solution is that it leaves no "holes". No potential for data loss and no need to ever take the database down.
Once the slave machine is ready, with the flip of a switch, we could redirect all queries to the new machine, terminate the master/slave relationship and we'll be up and running with the new machine.
Here's how the actual implementation went:
Step 1: Set up an account on the master server that the slave server can use to connect. This account must be given the REPLICATION SLAVE privilege.
mysql> use mysql;
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.mydomain.com' IDENTIFIED BY 'slavepass';
Step 2: Turn on binary logging of MySQL transactions. The server cannot act as a replication master unless binary logging is enabled. Locate the my.cnf file on the master database server machine and add these two lines to the file, under the [mysqld] section:
[mysqld]
log-bin=mysql-bin
server-id=1
If these options were not already present (ours were), you will have to restart the server for them to take effect.
Step 3: Flush all the tables and block write statements by executing a FLUSH TABLES WITH READ LOCK statement - this is important so that we can safely copy the database files, with no fear of the files being changed during the copy operation. Make sure you keep the shell window where you issue the command below open throughout the process, until the tar finishes. Once this window closes, the lock will be automatically released.
mysql> SET GLOBAL WAIT_TIMEOUT=600000; SET WAIT_TIMEOUT = 600000; FLUSH TABLES WITH READ LOCK;
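As an extra safeguard against a dropped shell (see the comments below), you can hold the lock inside a detachable screen session; a sketch:
screen -S dblock   # detachable session survives SSH disconnects
mysql -u root -p   # issue the FLUSH TABLES WITH READ LOCK from here
(if the connection drops, reattach with: screen -r dblock)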
Step 4: Take a snapshot of all data on the master database server. The easiest way to create a snapshot is to use tar. (Make sure you have sufficient storage space on your master server to hold the tar. If unsure, calculate the current size of your database and plan for an additional 70% of available space)
cd /var/db/mysql
tar -cvf /tmp/mysql-snapshot.tar ./ --exclude mysql &
On our system this operation took a little over four hours to complete. Notice the "&" operator - this causes tar to run as a background process. Useful if your shell gets disconnected. You'll know it's done by periodically viewing the process list using "ps -ax".
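For example (the brackets keep grep from matching its own process):
ps -ax | grep '[t]ar'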
Step 5: While the read lock placed by FLUSH TABLES WITH READ LOCK is in effect, read the value of the current binary log name and offset on the master:
mysql> SHOW MASTER STATUS;
+---------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| mysql-bin.003 | 73 | test | manual,mysql |
+---------------+----------+--------------+------------------+
The File column shows the name of the log and Position shows the offset within the file. In this example, the binary log file is mysql-bin.003 and the offset is 73. Record these values. You need them later when you are setting up the slave. They represent the replication coordinates at which the slave should begin processing new updates from the master.
Step 6: Now that you have taken the snapshot and recorded the log name and offset, you can re-enable write activity on the master:
mysql> UNLOCK TABLES;
Step 7: Hop on the new database server machine (slave). Shut down the database, then locate the my.cnf file and add these lines to the file:
[mysqld]
server-id=2
Step 8: Copy and extract the master database snapshot to the slave server under /var/db:
cd /var/db
ftp
open masterdb.com
cd /tmp
get mysql-snapshot.tar
exit
tar --extract --file=mysql-snapshot.tar
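After extracting, make sure the restored files are owned by the user MySQL runs as, or the server may refuse to start (user and group names assumed; adjust for your system):
chown -R mysql:mysql /var/db/mysql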
Step 9: Start the slave database and issue these commands, replacing the option values with the actual values relevant to your system:
mysql> CHANGE MASTER TO
MASTER_HOST='master_host_name',
MASTER_USER='replication_user_name',
MASTER_PASSWORD='replication_password',
MASTER_LOG_FILE='recorded_log_file_name',
MASTER_LOG_POS=recorded_log_position;
Step 10: Start the slave
mysql> START SLAVE;
If everything went well, your slave database will now connect to the master database, downloading all transactions following the time you took the snapshot. From this moment on, the slave database will continually feed itself with data from the master database, always staying in sync.
Confirm the slave server is up-to-date by running:
mysql> SHOW SLAVE STATUS;
and comparing the position markers with what the master database reports under:
mysql> SHOW MASTER STATUS;
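In the SHOW SLAVE STATUS output, the fields worth watching (a quick sanity check; output abridged) are:
mysql> SHOW SLAVE STATUS \G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
Both threads should say Yes and Seconds_Behind_Master should reach 0 before you cut over.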
Step 11: Update database-connect functions
Once the slave database is up-to-date, it is now time to update your application code, routing all calls to the new server.
Assuming you have all database access functions encapsulated in a database.php / database.c layer, locate the function that establishes a connection to the database and update the IP address or domain name to point to the slave database.
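If the hostname lives on a single line of that file, the switch can be as small as this sketch (file name from the article; hostnames hypothetical):
sed -i.bak 's/old-db.example.com/new-db.example.com/' database.php   # keeps a .bak copy of the old file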
Voila! There you have it -- migration of MySQL from machine A to machine B without ever taking the database down.
Step 12: Test
Run a tail on the master database MySQL log file, or run SHOW PROCESSLIST, and make sure no one is accessing the master database anymore. Once confirmed, shut down the master database and test for any strange side effects.
I recommend you keep the old database server up for a few days until you're absolutely certain the transfer went through smoothly.
Running a mysqlcheck --all-databases on the new database is highly recommended.
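For example, prompting for the password:
mysqlcheck --all-databases -u root -p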
Any comments, tips or insight are welcome!
Mike Peters, 04-16-2007
Update:
If you don't have enough storage space on the master server to hold the tar file, you could tar across ssh using this command:
tar -czpf - ./ --exclude mysql | ssh -lUSERNAME NEWHOST.COM tar -xzpf - -C /var/db/mysql
Replace USERNAME with a login on the slave machine
Replace NEWHOST.COM with the domain of the slave machine
Chris Tata, 04-16-2007
RSync would also work for the snapshot part.
Something like this:
for i in `ls /var/db/mysql/ | grep -v NEWHOST.COM | grep -v log | grep -v USERNAME`; do rsync -ru -e ssh /var/db/mysql/$i spidb1:/var/db/mysql; done
Jon, 06-01-2007
If your shell gets disconnected, "&" won't help you much. You might want to look into nohup or screen (or maybe nohup screen ;)
Mike Peters, 07-12-2007
Thank you Jon, that is a very valuable tip.
Most people don't realize that once you do FLUSH TABLES WITH READ LOCK, if the shell gets disconnected or MySQL goes away (because of its internal timeout setting), the lock gets released, and if you're still tarring, your tar will be corrupted.
I updated the guide above, taking an extra step to ensure this doesn't happen.
Adrian Singer, 08-08-2007
This page has become quite popular, with about 30 hits per day, from developers looking to "binary copy mysql server", "move mysql database", "copy mysql" etc.
If you're reading this page and you are looking to move your MySQL database, be prepared to invest about 5-10 hours into the process. We found using remote tar works the best, but once the new database is up, there are typically some issues to deal with, especially if you're migrating MySQL4 to MySQL5.
How about letting us handle the migration/replication for you? We can get it done in 5 hours guaranteed and get started right away. For more info about using SoftwareProjects database services - click here
Scott McDonald, 08-31-2007
I got all excited about this page - until I set up a test environment to see if it would work for getting a new slave (in one city) to sync everything off a master (in another city), as preparation for a datacenter cutover in a high MySQL load environment with huge databases. The production master database cannot be stopped - or prevented from accepting writes - for anything more than a few minutes without causing financial loss.
After issuing "FLUSH TABLES WITH READ LOCK" on the master in the first mysql client session, a simple insert attempted in a second mysql client session simply froze - until "UNLOCK TABLES" was issued on the master in the first session, at which point the second session was able to finish the insert query.
What that would translate to in production:
* OldMasterServer "FLUSH TABLES WITH READ LOCK"
* get the master position, copy the database files to NewSlaveServer - a copy that will take days due to large mysql database size and small pipe bandwidth between sites (due to vpn tunnel overhead between city A and B)
* all new transactions coming in from the web are not being written to OldMasterServer while the read lock is in place - the cgi programs trying to do the inserts would eventually time out or die etc... new data coming in would be lost
* UNLOCK TABLES on OldMasterServer - writes start happening again
* restart mysql on NewSlaveServer with the copied data in place, CHANGE MASTER blah blah to get the slave started pulling from the master at the position that was recorded during the read lock
I don't see how this "FLUSH TABLES WITH READ LOCK" approach to this problem has any advantages over mysqlhotcopy or mysqldumps... none of these options are going to work for a server cutover in an environment with LARGE mysql databases that CANNOT have inserts blocked.
The author in the beginning said "Taking the machine down for 'scheduled maintenance' is not an option." Blocking inserts from web applications for many hours by using a read lock is a much worse option (IMHO).
Mike Peters, 09-01-2007
Scott,
Good point. There's no way around FLUSH TABLES WITH READ LOCK; you absolutely have to issue this command prior to taking a snapshot of your database.
You are 100% correct that while FLUSH TABLES WITH READ LOCK is in effect, all INSERT queries will wait and not execute. This behavior is by design.
So what can you do?
Fortunately there's a way around the issue you are describing. From your comment I understand you are trying to create the snapshot directly on the new database and the pipe is tiny, causing this operation to possibly take too long to complete.
The way to get around this is to create the snapshot locally (on the current database machine) using tar with compression turned on, or using Veritas. If you don't have enough storage space on the current machine to store the snapshot, create another machine on the same subnet and tar the snapshot over there.
All you have to do is create the snapshot locally (or at least on the same subnet) so that this operation can complete as quickly as possible. As soon as you have a snapshot, you can immediately UNLOCK TABLES. There's no need to wait until the snapshot is transferred to the destination machine. If you have access to a RAID5 setup or Veritas, grabbing a snapshot can typically be done in less than a minute, regardless of the database size.
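A minimal sketch of that sequence, reusing the paths and placeholders from the article - the snapshot is taken locally and the tables are unlocked before the slow network transfer begins:
mysql> FLUSH TABLES WITH READ LOCK;
(from a second shell on the master)
cd /var/db/mysql && tar -czf /tmp/mysql-snapshot.tar.gz ./ --exclude mysql   # local snapshot only
mysql> SHOW MASTER STATUS;   (record File and Position)
mysql> UNLOCK TABLES;
(writes resume; the transfer can now take as long as it needs)
scp /tmp/mysql-snapshot.tar.gz USERNAME@NEWHOST.COM:/var/db/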
Please feel free to contact me directly or post a project here and we'll be glad to assist.
Marcus Herou, 02-26-2008
Hey, why not use a similar pattern as suggested
(lock -> full dump -> unlock), but with LVM instead?
lock -> snapshot -> unlock -> copy snapshot to backup
I've done it for a couple of years and it is flawless compared to mysqldump. mysqldump, in my opinion, cannot really be used on a live server efficiently.
I've created my own LVM script which is suited for master-replication situations, but for pure backups mylvmbackup will help you: http://lenz.homelinux.org/mylvmbackup/
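The sequence Marcus describes might look like this (a sketch only; volume group and snapshot names assumed, and the lock is held just long enough to create the snapshot):
mysql> FLUSH TABLES WITH READ LOCK;
lvcreate --snapshot --size 2G --name mysql-snap /dev/vg0/mysql   # volume names assumed
mysql> UNLOCK TABLES;
mount /dev/vg0/mysql-snap /mnt/mysql-snap
tar -czf /backup/mysql-snapshot.tar.gz -C /mnt/mysql-snap .
umount /mnt/mysql-snap && lvremove -f /dev/vg0/mysql-snap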
PS: I can give away my script if wanted.
Kindly
//Marcus
Tiago, 06-08-2008
Thanks for this post! It's very useful.
To reduce the time of the snapshot, you can run an rsync command while the database is live, without the LOCK:
nohup rsync --size-only -zav ./mysql /local/ &> log.txt &
Then take the LOCK and run this command again.
It will only copy the changed files.
* Note: This works fine and fast for systems with a lot of tables. Systems with a dozen big tables won't improve very much.
kroshka, 08-14-2008
On Debian Linux the databases are stored in /var/lib/mysql/. The same should be the case for Debian derived systems such as Ubuntu.
Robert, 09-19-2008
Has anyone investigated the options you get for such an endeavour when you sign up for MySQL Enterprise support?
tpol, 10-29-2008
I had a database crash when trying to rsync it with the error:
InnoDB: Error: Write to file ./ibdata1 failed at offset 0 1048576.
InnoDB: 1048576 bytes should have been written, only -1 were written.
Given the fact that this occurred while I was trying to rsync my suspicion is that something about the copy caused the locking issue. Has anyone encountered this before?
Imran Chaudhry, 11-11-2008
But remember that replication is not a be-all-end-all backup solution. Replication can break and sometimes the datasets may not even match in some ways.
We use a combination of mysqldump, tarballs and replication, we also use Munin and a plug-in based on mk-heartbeat to monitor replication (see mysqltoolkit).
What I'd like to do is use rsync over the raw /var/lib/mysql/* without worrying about data corruption. This way, I can achieve more efficient use of backup media by only syncing the differences. A good use of the slaves here is to do the backups off them, so the FLUSH TABLES WITH READ LOCK statements do not impact production performance.
Steve Briggs, 01-31-2009
I am setting up a new Windows server and found a quick GUI way to copy the databases from the old server to the new server:
1) Download the MySQL GUI Tools from http://dev.mysql.com/downloads/ and install on the NEW server.
2) Open the MySQL Administrator GUI Tool
3) connect to the OLD server
4) click on the Backup icon
5) click the New Project button at the bottom and give it a name
6) select each database you want to copy over and click the right arrow to move them to the right-hand Backup Content window
7) click "Save Project" so you can use it again if you want
8) click "Execute Backup Now" and name the file
9) when the backup is complete, quit MySQL Administrator
10) open MySQL Administrator again
11) connect to the NEW server
12) click on the Restore icon
13) click on the "Open Backup File" button
14) click "Start Restore"
That's it!
Michel Bisson, 02-05-2009
Here is a tip you can use to really copy MySQL data LIVE, without interrupting (locking) the master, from a live master to a newly created slave.
During these operations, the following will happen:
- Copy the 'mysql' database only to the slave
- Start the slave
- Copy all other databases live from the master to the slave
- This operation makes sure that ALL data is transferred, binlogs are flushed and deleted, more data is transferred (gotten from the flushed binlogs), and the new binlog position is set to position 4.
- The master will continue accumulating changes from the application at its new binlog position (4)
- Sync the replication slave to this new master binlog file and position (4). The slave gets all the last changes from the master starting at position (4) till now.
- You now have a complete copy of the master and it stays in sync till you stop the replication.
STEPS:
=====
NOTE 1: Make sure you configure the replication master and slave as shown in this article above.
NOTE 2: Make sure no changes are made in the 'mysql' database during this operation.
Variables description:
# $Qhost = the running master host
# $Dhost = the running destination slave host
# $rootpw= mysql root password
The following commands are all given from the destination host ($Dhost):
A) Copy the mysql database only to the destination slave
scp -r ${Qhost}:/var/lib/mysql/mysql /var/lib/mysql/
B) Start the slave mysql server
(use your appropriate start script)
C) Live transfer the data from the master to the slave
mysqldump -h $Qhost -u root --password=$rootpw --single-transaction --flush-logs --master-data=2 --all-databases --delete-master-logs \
| mysql -h $Dhost -u root --password=$rootpw
D) Sync the slave to the master at position 4
masterlogfile=$(echo "SHOW MASTER STATUS\G;" | mysql --host=$Qhost -u root --password=$rootpw | grep "File:" | cut -d: -f2 | cut -d" " -f2)
echo "CHANGE MASTER TO master_log_file='$masterlogfile',master_log_pos=4; " | mysql -h $Dhost -u root --password=$rootpw
E) Start the replication on the slave
echo "START SLAVE;" | mysql -h $Dhost -u root --password=$rootpw
That's it
Michel Bisson, 02-05-2009
Sorry for the typing mistake in the comment above:
I meant:
Here is a tip you can use to really LIVE copy all mysql databases without interrupting or LOCKING the master to a newly created slave.
Dawn Rossi, 02-26-2009
Michel,
Thank you for the tip.
You mention "NOTE 2: Make sure no changes are made in the 'mysql' database during this operation"
This is not possible in a live environment when the database you are looking to copy/move is constantly hit by read and write queries.
Rsync and scp are very dangerous when handling InnoDB databases.
Dave LeJeune, 04-10-2009
Has anyone worked with a DR plan that uses shared storage for the database files (SAN) and simply brings them up on another failover/backup MySQL server in the case of a planned maintenance outage of the primary server?
Shawn T., 04-23-2009
I'm trying to back up a few MySQL databases on a VPS to somewhere in /var, as the hosting company is upgrading the OS to Debian and they asked me to do this before going ahead with the reinstall.
Can anybody tell me what would be the best command to do this? The site is not live yet so there are no users connected.
Thanks in advance.
Jon, 05-29-2009
A question.
I see most of the suggestions here include moving the database files by "hand" (scp, rsync, tar). Is there no way of having replication do the moving/copying/inserting of data?
/jon
Saurabh Aggarwal, 06-15-2009
One more gotcha that you all need to worry about -
SELINUX.
After copying the binary files and trying to start mysqld, it failed. Looking at dmesg, SELinux was giving errors. Turned off SELinux, and it started working.
Adrian Hosey, 06-26-2009
I'm looking at this same situation, and I'm worried that there is a race condition in Step 11. There is always some latency in the replication between master and slave. So when I "flip" my application code to start talking to the slave, isn't there a chance that the slave will receive operations which are dependent on table rows which have yet to be replicated? It seems that the DB needs to be quiescent during this step, which of course means our cutover is no longer "live."
Atif, 09-11-2009
Isn't there a way to re-create all the binary log files on the master?
Maybe some sort of a command like RESET MASTER, though I know this command will not recreate all the binary logs right from the very beginning.
The reasoning for this is, if this can happen, then there is going to be less than 10 seconds of downtime for the master server, perhaps just to restart it.
1) Issue RE-Create command on master.
2) Setup new slave server and attach it to Master.
3) The new slave server gets all the records from the master's binary log files, to create all the databases, tables, insert all records in tables etc.
Sure, it will take the slave a long time to catch up, but the master never had to go offline, and once the slave shows that it has caught up, then it can be used.
Pascal, 10-11-2009
You can also copy your files to tmpfs, which is much faster, and then release the locks.
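A sketch of that idea (mount point and size assumed; the archive has to fit in RAM):
mount -t tmpfs -o size=8g tmpfs /mnt/snap
cd /var/db/mysql && tar -cf /mnt/snap/mysql-snapshot.tar ./ --exclude mysql   # fast, memory-backed copy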
Mike Peters, 01-14-2010
If you need to mysqldump stored-procedures only, use this:
mysqldump --routines --no-create-info --no-data --no-create-db --skip-opt -uUSER -pPASSWORD DATABASE > outputfile.sql
craigpj, 02-05-2010
Just take the db down for the time you need to make the switch. Stop letting non-tech people dictate how tech issues should be handled. If there is a serious tech issue to be dealt with, deal with it. Customers and users appreciate reliability, accuracy and forewarning. You'd be surprised. Just take it down, sort it out fast and sort it out right. Warn all those affected. Offer 100% solid tech support. It's a win-win approach. Techies are sometimes "the geeks" but YOU try doing the work. Put a parallel system in place if you need transactional granularity.
craigpj, 02-05-2010
Here's another solution... Queues. Stack 'em up and process them when your system is back up and ready. Think asynchronously.
Dawn Rossi, 03-16-2010
Craig,
"Just taking the database down" is often not an acceptable solution with mission critical systems.
That being said, I do agree that no production environment should risk having a single front-facing database per site. Once you have more than a single database server, it is more straightforward to route traffic to a secondary database while you restart MySQL on a unique port number and run mysqldump without the database being accessed.
"Just taking the database down" is often not an acceptable solution with mission critical systems.
That being said, I do agree that no production environment should risk having a single front-facing database per site. Once you have more than a single database server, it is more straightforward to route traffic to a secondary database while you restart MySQL on a unique port number and run mysqldump without the database being accessed.
Zahra, 01-06-2011
Is there any way to have several masters & one slave? The slave should sync with all of the masters. All of the masters have the same database & only some records of some tables on them differ. The slave should have all of those records.
AB, 02-10-2011
Can anyone tell me how to move some binlog files, as the size is quite large now?
Please email me at [email protected]
or post it. Thank you!
Mark Burdis, 02-25-2011
I have a system where I have a Master on an ailing machine which I need to replace. There are about 30 slaves replicating from this machine, and I would like to replace the master with one of the newer slaves before it dies. I have ensured that the ODBC links from the slaves are ip addresses, as my idea is that I would replace the master, and change the ip address of the new machine to match the old machine without having to modify each slave. Is there any way to make this happen smoothly? My worry is that the MASTER_LOG_POS will be different causing the replication to break.
All help gratefully received...
John Gruska, 03-02-2011
Does MySQL support triggers, etc? If so, couldn't you just have every table replicate itself automatically?
Steve, 06-01-2011
I am currently repairing a broken replication at the company I work for, and had to deal with the problem of locking the tables not being an option.
Our situation was that the slave reported Seconds_Behind_Master =0, but with a select count(*), we could see there was data missing. So we decided to drop the database and start the replication from scratch.
At the end of all my research, I came to the conclusion that you do have to do a lock on tables, although I only had to lock for about 10 minutes. It took that long because of the amount of binary logs the master had to delete when I executed "Reset Master", which took just under 9 minutes.
Many of the steps might take several hours, depending on the size of the database.
A few assumptions before getting started:
*You are repairing a previously existing master/slave replication process. If not, you'll have to add a user to Master who the Slave can use to get its info. Will also have to set up some other variables etc. Can find that info somewhere.
*If your server/database is live and are worried about performance, use:
"nice" before linux commands
"--skip-lock-tables" on mysqldumps to prevent locking insert/reads
"nohup ... &" to execute the linux/unix command inthe background, unattended
Here is the process I did:
1) Drop Slave database.
mysql>drop database x;
2) Create dump of Master's create table info:
nohup mysqldump --no-data --skip-lock-tables --single-transaction -uuser -ppassword db > file.sql &
3) Dump any tables you know won't change or that you can stop from changing without problems. Use same command as in 2)
4) **This is the part you need to lock tables
ON MASTER:
mysql>reset master; **This took me ~9 minutes
mysql>flush tables with read lock; **~17 seconds
mysql>show master status;
(VERY IMPORTANT)
Copy down the File and Position information
(From another terminal, still on master's server)
linux>nohup mysqldump --skip-lock-tables --single-transaction --no-create-info --routines --ignore-table=db.table_already_dumped -uuser -ppassword db > file2.sql &
mysql>UNLOCK TABLES;
(If databases are on different servers)
5)Copy dumps to other server
nohup nice scp file1.sql [email protected]:/path/ &
nohup nice scp file2.sql [email protected]:/path/ &
(Include other dump(s) from step 3) )
(ON SLAVE)
6)Import dumps
mysql>\. file.sql
*Might want to use nohup mysqlimport .... if you want to leave UNATTENDED!
(Still on Slave)
*The x,y on the change master... command are the data you got from "show master status" in step 4)
7)Start replication
mysql>stop slave;
mysql>reset slave; (This will delete the relay logs, etc)
mysql>change master to MASTER_LOG_FILE='x', MASTER_LOG_POS=y;
mysql>start slave;
mysql>show slave status \G
GOOD LUCK
-Steve
Wayne, 07-11-2011
Just a note: make sure the relay log files and the relay-log.info file are removed before starting up the new server. Leaving them in place will cause issues.
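Something like this, using the data directory from the article (relay log file names vary with your relay-log settings):
cd /var/db/mysql && rm -f relay-log.info *-relay-bin.*   # file names assumed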
Joe, 12-16-2011
Thanks very much for the article! We were able to perform the tar at a relatively quiet time, and in under 10 minutes, so that part went well.
My question is about the 'mysql' database proper (within /var/db/mysql). In your steps, you don't copy that over. On the new server, however, should I be extracting the tar into an empty mysql directory (with proper ownership)? Is the mysql database recreated once the server is restarted? (I should probably know the answer to this already, but humor me for a sec - this is the first time I've ever not copied "the entire thing" over!)
Some Guy, 12-23-2011
Problem here is, if the master is corrupted, the slave is also.
Bjørn, 02-23-2012
I have a question regarding replication. Like the author, I have multiple databases and tables. Another guide, written for "one" database, specified which database was to be replicated with binlog_do_db = "dbasename". Another suggestion for multiple databases was to simply add more binlog_do_db lines, one per database. My question: if log_bin is specified, is binlog_do_db not really needed if you want to do all databases anyway? Can I safely delete those lines from the my.cnf of both master and slave, and will the slave still log ALL SQL queries in the bin file, even those relating to new databases being created and changes in existing ones?
regards
Bjørn
George B, 03-02-2012
I have a question on a database move: I have MySQL 3.23 installed on the old server and the new server has MySQL 5.0. Is it possible to move the MySQL database to the new server, or must the MySQL version be the same? Will there be any issues if I copy the database to the new server? Can someone help?
Thanks
George
Green, 03-05-2012
Hi, it's a good explanation you made. Please, I want to seek your advice: I am working on a database and want to move part of the data (images), i.e. medical records that are older, entirely to a new database which will serve as an archive, and still link it to the original database. Please can you tell me how this can be done safely. Thanks!
Ravi Simon Shah, 03-20-2012
I am a fresher and was always confused with moving a live database.
Your article is really helpful.
Thanks a lot...
Jon, 07-24-2012
You said the snapshot took 4 hours and that you shouldn't release the lock till after the snapshot was complete. Doesn't that mean that your database was locked (down) for 4 hours?
raj, 11-20-2012
I am getting error no. 2013 when the slave is connected to the master; after some time it automatically reconnects to the master. What is the problem?
Thanks in Advance.
sem, 12-18-2012
Problem here is, if the master is corrupted, the slave is also.