MySQL my.cnf tweak for faster importing large database

Rumi, March 3, 2018

A few small hacks will help you import a large MySQL dump a bit faster. The trick worked for me on a 70 GB dumped SQL file; I can't guarantee it'll work for you!

Step-1

Change the following settings:

```
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0
```

Why these settings?

- innodb_buffer_pool_size: caches frequently read data.
- innodb_log_buffer_size: a larger buffer reduces write I/O to the transaction logs.
- innodb_log_file_size: a larger log file reduces checkpointing and write I/O.
- innodb_write_io_threads: services write operations to .ibd files. According to the MySQL documentation on Configuring the Number of Background InnoDB I/O Threads, each thread can handle up to 256 pending I/O requests. The default is 4 for MySQL, 8 for Percona Server; the maximum is 64.
- innodb_flush_log_at_trx_commit: in the event of a crash, both 0 and 2 can lose up to one second of data. The tradeoff is that both 0 and 2 increase write performance. I chose 0 over 2 because 0 flushes the InnoDB log buffer to the transaction logs (ib_logfile0, ib_logfile1) once per second, with or without a commit, while setting 2 flushes the log buffer only on commit. There are other advantages to setting 0, mentioned by @jynus, a former Percona instructor.

Restart MySQL like this:

```
service mysql restart --innodb-doublewrite=0
```

This disables the InnoDB doublewrite buffer. Import your data. When done, restart MySQL normally; this re-enables the InnoDB doublewrite buffer:

```
service mysql restart
```

Step-2

Add these 2 lines:

```
max_allowed_packet=256M
wait_timeout=30000
```

Then restart MySQL once more:

```
service mysql restart
```
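For context, all of the variables above are server-side settings, so they belong in the [mysqld] section of my.cnf. A minimal sketch of what the edited file might look like (the section layout is standard; the values are the ones from the steps above):

```ini
[mysqld]
# Step-1: InnoDB tuning for the duration of the import
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_write_io_threads = 16
innodb_flush_log_at_trx_commit = 0

# Step-2: allow large statements and long-running sessions
max_allowed_packet = 256M
wait_timeout = 30000
```

Remember that innodb_flush_log_at_trx_commit = 0 trades crash safety for speed, so consider reverting it (the default is 1) once the import is done.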
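The whole procedure can be sketched as a short shell script. This is only an illustration: DUMP and DB are placeholder names, and the script prints the commands instead of executing them (drop the echo in run() to actually run them):

```shell
#!/bin/sh
# Sketch of the import workflow described above.
DUMP=/path/to/dump.sql   # hypothetical path to your dump file
DB=mydb                  # hypothetical target database name

run() {
    # Print each command rather than executing it; remove 'echo' to go live.
    echo "$@"
}

# Restart MySQL with the InnoDB doublewrite buffer disabled.
run service mysql restart --innodb-doublewrite=0

# Import the dump (redirect shown literally since we only print the command).
run mysql -u root -p "$DB" "<" "$DUMP"

# When the import finishes, restart normally to re-enable doublewrite.
run service mysql restart
```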