The problem of slow MySQL queries
Slow MySQL queries are one of the main sources of performance issues. Whether your website is powered by a framework like WordPress or Magento, or by a custom-built Laravel application, sooner or later you may want to optimize your MySQL queries.
MySQL on Citrus servers is powered by Percona MySQL, so if you are a Percona MySQL user elsewhere, this article is useful for you too.
How to configure and enable the slow query log
Quick configuration
If you’re in a rush, like me, run this batch of commands to enable the slow query log:
mysql -e "set global long_query_time = 0.150000;"
mysql -e "set global log_slow_rate_limit = 10;"
mysql -e "set global slow_query_log_always_write_time = 1.000000;"
mysql -e "set global slow_query_log_timestamp_precision = 'microsecond';"
mysql -e "set global max_slowlog_size = 104857600;"
mysql -e "set global max_slowlog_files = 3;"
mysql -e "set global slow_query_log = 'ON';"
Step-by-step guide
If you want to take it slow and understand the commands above, here is a step-by-step guide to enabling the slow query log.
Before enabling the slow query log, we should configure it. Percona MySQL comes with parameters that let you unconditionally log slow queries which exceed a specific execution time, with microsecond resolution.
Open a MySQL shell with the mysql command in an SSH terminal and run the following:
set global long_query_time = 0.150000;
This tells MySQL to log every query that takes longer than 150 milliseconds to run. However, on a busy website this consumes I/O bandwidth and causes the log file to grow large, so we are going to adjust it so that only 1 in 10 slow queries is logged:
set global log_slow_rate_limit = 10;
When a query is very slow, we may want to log it unconditionally, regardless of the rate limit. Let’s always log every query that is slower than 1 second:
set global slow_query_log_always_write_time = 1.000000;
We also set the timestamp resolution to microseconds, make sure that the slow log is rotated when it reaches 100 MB in size, and tell MySQL to keep at most 3 rotated logs to save space:
set global slow_query_log_timestamp_precision = 'microsecond';
set global max_slowlog_size = 104857600;
set global max_slowlog_files = 3;
Finally, let’s enable the slow query log:
set global slow_query_log = 'ON';
If you also want to log queries that do not use indexes, you can run:
set global log_queries_not_using_indexes = 'ON';
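On a busy site this alone can flood the log with entries. If your server is recent enough (MySQL and Percona Server 5.6.5 or later), you can additionally cap how many such queries are logged per minute; this throttle is optional and not part of the setup above:
set global log_throttle_queries_not_using_indexes = 10;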
You can verify the variables related to the slow query log like this:
SHOW GLOBAL VARIABLES LIKE '%slow%';
SHOW GLOBAL VARIABLES LIKE '%long%';
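If everything worked, the output reflects the values just set. Here is an abridged, illustrative combination of the two result sets (your values and file paths may differ):
+----------------------------------+-----------------------------+
| Variable_name                    | Value                       |
+----------------------------------+-----------------------------+
| log_slow_rate_limit              | 10                          |
| long_query_time                  | 0.150000                    |
| slow_query_log                   | ON                          |
| slow_query_log_always_write_time | 1.000000                    |
| slow_query_log_file              | /var/lib/mysql/web-slow.log |
+----------------------------------+-----------------------------+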
The slow query log is now being populated. The typical location is /var/lib/mysql/web-slow.log.
Working with the slow query log
You can use less or another pager of your preference to read the slow query log, e.g. less web-slow.log.
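Each entry in the raw log records the query together with its execution statistics. A hypothetical entry looks roughly like this (the exact header format varies between MySQL and Percona Server versions):
# Time: 2017-02-07T20:02:09.452373
# User@Host: mrprint1_wor3[mrprint1_wor3] @ localhost []
# Query_time: 7.689800  Lock_time: 3.884000  Rows_sent: 0  Rows_examined: 2065694
SET timestamp=1486497729;
UPDATE bai_wp_eStore_download_links_tbl SET access_count = '1' WHERE download_key = 's8jM9+LO0CcVzVkpPEY=';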
On a production system, the slow query log can grow large and contain tons of queries. It can be a burden to investigate it and figure out which queries are the most frequent.
Percona’s YUM repository contains a useful package for generating a digest of slow queries. Let’s install it first:
yum -y install percona-toolkit
Percona Toolkit can help you generate a digest of the slow queries found. Let’s do that:
cd /var/lib/mysql
pt-query-digest $(hostname -s)-slow.log > $(hostname -s)-slow-digest.log
Example output:
# Profile
# Rank Query ID           Response time   Calls R/Call V/M   Item
# ==== ================== =============== ===== ====== ===== =============
#    1 0x13C86AF70C18C60A 1622.5487 83.0%   211 7.6898  2.91 UPDATE bai_wp_eStore_download_links_tbl
#    2 0xD5613DAE943A30BE  287.7647 14.7%   213 1.3510  2.99 SELECT bai_wp_eStore_download_links_tbl
# MISC 0xMISC                44.8708  2.3%  2570 0.0175   0.0 <51 ITEMS>

# Query 1: 0.18 QPS, 1.37x concurrency, ID 0x13C86AF70C18C60A at byte 143788
# This item is included in the report because it matches --limit.
# Scores: V/M = 2.91
# Time range: 2017-02-07 20:02:09 to 20:21:56
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          7     211
# Exec time     82   1623s      4s     24s      8s     19s      5s      5s
# Lock time     91    820s    44us     20s      4s     15s      5s   901ms
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine  45 415.15M   1.97M   1.97M   1.97M   1.95M       0   1.95M
# Rows affecte  80     221       1       3    1.05    0.99    0.23    0.99
# Bytes sent     0  10.71k      52      52      52      52       0      52
# Query size     7  22.43k     105     109  108.85  107.34    0.98  107.34
# String:
# Databases    mrprint1_wor3
# Hosts        localhost
# Last errno   0
# Users        mrprint1_wor3
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#   1s  ################################################################
#  10s+ #######################
# Tables
#    SHOW TABLE STATUS FROM `mrprint1_wor3` LIKE 'bai_wp_eStore_download_links_tbl'\G
#    SHOW CREATE TABLE `mrprint1_wor3`.`bai_wp_eStore_download_links_tbl`\G
UPDATE bai_wp_eStore_download_links_tbl SET access_count = '1' WHERE download_key = 's8jM9+LO0CcVzVkpPEY='\G
# Converted for EXPLAIN
# EXPLAIN /*!50100 PARTITIONS*/
select access_count = '1' from bai_wp_eStore_download_links_tbl where download_key = 's8jM9+LO0CcVzVkpPEY='\G
This example digest tells us that there is one highly inefficient query we need to take care of:
UPDATE bai_wp_eStore_download_links_tbl SET access_count = '1' WHERE download_key = 's8jM9+LO0CcVzVkpPEY='
To understand what can be done to optimize the query, the digest suggests that we run an EXPLAIN statement for it. So we do that:
EXPLAIN select access_count = '1' from bai_wp_eStore_download_links_tbl where download_key = 's8jM9+LO0CcVzVkpPEY='
When you look into your particular query, you need to make sure that EXPLAIN reports the use of indexes. If indexes are missing, you will need to add them.
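For illustration, a hypothetical EXPLAIN ... \G output for a query that cannot use any index would look roughly like this; type: ALL together with key: NULL means a full table scan:
           id: 1
  select_type: SIMPLE
        table: bai_wp_eStore_download_links_tbl
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2065694
        Extra: Using where
If EXPLAIN instead shows a named index under key and a small rows estimate, the query is in good shape.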
When you troubleshoot slow queries in WordPress, you might find one particular missing index that is killing your performance. In some WordPress installations (due to a failed upgrade, an outdated version, or similar), it is the index on the autoload field in the wp_options table. Here is how to add it and fix performance quickly:
CREATE INDEX autoload ON wp_options(autoload, option_name);
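To confirm that the new index exists and is actually used, you can list the table’s indexes and re-run EXPLAIN on the typical WordPress autoload query (shown here as an illustration):
SHOW INDEX FROM wp_options;
EXPLAIN SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';
EXPLAIN should now report the autoload index under the key column.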