Leveraging the MySQL Slow Query Log for Tuning
Jul 10, 2025

MySQL's slow query log is a key tool for database performance tuning: it records SQL statements whose execution time exceeds a configurable threshold, making bottlenecks easy to locate. In brief: 1. Enable the log by setting the slow_query_log, slow_query_log_file, and long_query_time parameters in the configuration file. 2. Analyze it with mysqldumpslow or pt-query-digest, focusing on the Query_time, Lock_time, Rows_sent, and Rows_examined fields. 3. Common causes of slow queries include missing indexes (full table scans), poorly designed queries, and inefficient sorting or grouping; they can be fixed by adding indexes, rewriting the queries, or creating suitable composite indexes. Reviewing the log regularly, in the context of your business workload, helps surface hidden performance problems.
MySQL's slow query log is an important tool for optimizing database performance. If your database is responding slowly, or you want to catch potential performance bottlenecks early, enabling and analyzing the slow query log is a good starting point.

What is a slow query log?
The slow query log records SQL statements whose execution time exceeds a specified threshold. By default the threshold is 10 seconds, but you can adjust it (via long_query_time) to match your needs.

Once enabled, MySQL records every query that meets the criteria in the log, along with its execution time, the number of rows scanned, whether an index was used, and so on. This information helps you quickly pinpoint which SQL statements are dragging performance down.
To enable the slow query log, add the following under the [mysqld] section of your configuration file (my.cnf or my.ini):

slow_query_log = 1
slow_query_log_file = /path/to/your/slow-query.log
long_query_time = 1

Here long_query_time is set to 1 second, a threshold that suits most production environments.
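If a server restart is inconvenient, the same settings can also be applied at runtime. A minimal sketch (the file path is a placeholder; these settings revert on restart unless they are also written to the config file):

```sql
-- Enable the slow query log on a running server (reverts on restart).
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/path/to/your/slow-query.log';
SET GLOBAL long_query_time = 1;  -- seconds; applies to new connections

-- Verify the current settings:
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';
```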
How to analyze slow query logs?
Once the log exists, the next step is to analyze it. You can open the log file and read it directly, but that quickly becomes unwieldy. A few tools help summarize the content, for example:
mysqldumpslow: a command-line tool that ships with MySQL; it summarizes and sorts slow queries.
Example:
mysqldumpslow -s at -t 10 /path/to/slow-query.log
The command above sorts by average query time and shows the top 10 slowest query patterns.
pt-query-digest: part of Percona Toolkit, this tool is more powerful and supports more sophisticated analysis, such as aggregating statistics by user, host, or execution plan.
Beyond the tools, you should also pay attention to several key fields in the log:
- Query_time: total execution time
- Lock_time: time spent waiting for locks
- Rows_sent: number of rows returned to the client
- Rows_examined: number of rows scanned
If a query scans a huge number of rows but returns only a few, it is probably missing a suitable index, or its filter conditions are not selective enough.
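To make these fields concrete, here is a small Python sketch that parses the header line of one slow-log entry and computes the examined-to-sent ratio; the sample entry is fabricated for illustration, and a high ratio is only a heuristic hint that an index may be missing:

```python
import re

# Fabricated sample entry in the slow-log text format (values are made up).
SAMPLE_ENTRY = """\
# Query_time: 3.214562  Lock_time: 0.000121 Rows_sent: 2  Rows_examined: 1048576
SELECT * FROM orders WHERE customer_id = 123;
"""

def parse_stats(entry):
    """Extract the key numeric fields from one slow-log entry header."""
    pattern = (r"# Query_time: (?P<query_time>[\d.]+)\s+"
               r"Lock_time: (?P<lock_time>[\d.]+)\s+"
               r"Rows_sent: (?P<rows_sent>\d+)\s+"
               r"Rows_examined: (?P<rows_examined>\d+)")
    m = re.search(pattern, entry)
    if m is None:
        return None
    stats = {k: float(v) for k, v in m.groupdict().items()}
    # A large examined/sent ratio suggests a missing or unused index.
    stats["examined_per_sent"] = (
        stats["rows_examined"] / stats["rows_sent"]
        if stats["rows_sent"] else float("inf")
    )
    return stats

stats = parse_stats(SAMPLE_ENTRY)
print(stats["query_time"])              # 3.214562
print(int(stats["examined_per_sent"]))  # 524288
```

In practice, pt-query-digest performs this kind of aggregation far more thoroughly; the point here is only to show what the fields mean.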
Common Problems and Optimization Suggestions
After analyzing the slow queries, you will usually run into the following situations:
1. Missing indexes cause full table scans
This type of problem typically shows up as a very large Rows_examined combined with a very small Rows_sent. For example:
SELECT * FROM orders WHERE customer_id = 123;
If customer_id has no index, every execution scans the entire orders table. The fix is simple: add an index on customer_id.
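For the orders example above, the fix might look like this (the index name idx_customer_id is an arbitrary choice):

```sql
-- Add a secondary index so lookups by customer no longer scan the table.
CREATE INDEX idx_customer_id ON orders (customer_id);

-- Confirm it is used: EXPLAIN should now show type=ref with
-- key=idx_customer_id instead of a full table scan (type=ALL).
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
```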
2. Poorly designed queries
Some queries are written in unnecessarily complex ways: deeply nested subqueries, joins across more tables than needed, or result sets with no pagination (LIMIT). Such problems can be addressed by simplifying the logic or splitting the query.
For example:
SELECT * FROM users WHERE id IN (SELECT user_id FROM orders WHERE total > 1000);
This form can be inefficient with large data volumes (recent MySQL versions often convert such IN subqueries into semijoins automatically, but it is still worth checking), so consider rewriting it as a JOIN:
SELECT DISTINCT u.* FROM users u JOIN orders o ON u.id = o.user_id WHERE o.total > 1000;
Note the added DISTINCT: without it, a user with several matching orders would appear once per order.
3. Expensive sorting and grouping
If EXPLAIN shows Using filesort or Using temporary for a query, MySQL cannot use an index effectively for the sort or grouping. In that case, check whether a suitable composite index could support these operations.
For example, the following statement:
SELECT product_id, COUNT(*) FROM sales GROUP BY product_id ORDER BY COUNT(*) DESC;
can be very slow if there is no suitable index. An index on product_id lets MySQL read the groups in index order instead of building a temporary table; the ORDER BY COUNT(*) still requires a sort, since the counts are only known after aggregation.
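A sketch of the index for the grouping example, using the table and column names from the statement above (verify the effect with EXPLAIN on your own data):

```sql
-- An index on the GROUP BY column lets MySQL read groups in index order
-- instead of building a temporary table.
CREATE INDEX idx_product_id ON sales (product_id);

-- "Using temporary" should disappear from the EXPLAIN output; the final
-- ORDER BY COUNT(*) still needs a filesort, which is expected.
EXPLAIN SELECT product_id, COUNT(*) FROM sales
GROUP BY product_id ORDER BY COUNT(*) DESC;
```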
That's basically it. The slow query log is not a cure-all, but it is a great place to start. Check it regularly and interpret it against your actual business scenarios, and you will often uncover hidden performance problems.
