Chopping up a large DELETE statement and using medium-size queries can improve performance considerably, and reduce replication lag when a query is replicated. For example, instead of running one monolithic DELETE, you can do the work in smaller pieces.
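
A minimal sketch of the chunked approach in application code, assuming PyMySQL and a hypothetical messages table with a created timestamp column (the table, column, and connection details are illustrative, not from the original example):

    import time
    import pymysql

    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="app", autocommit=True)

    # Instead of one monolithic statement:
    #   DELETE FROM messages WHERE created < DATE_SUB(NOW(), INTERVAL 3 MONTH)
    # delete in medium-size chunks until nothing is left to delete.
    chunked_delete = """
        DELETE FROM messages
        WHERE created < DATE_SUB(NOW(), INTERVAL 3 MONTH)
        LIMIT 10000
    """

    with conn.cursor() as cursor:
        while True:
            rows_affected = cursor.execute(chunked_delete)
            if rows_affected == 0:
                break
            time.sleep(0.5)   # optional pause to spread the load, as discussed below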

Deleting 10,000 rows at a time is typically a large enough task to make each query efficient, and a short enough task to minimize the impact on the server (transactional storage engines may benefit from smaller transactions).

It may also be a good idea to add some sleep time between the DELETE statements to spread the load over time and reduce the amount of time locks are held.

Many high-performance web sites use join decomposition. You can decompose a join by running multiple single-table queries instead of a multitable join, and then performing the join in the application.
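
For instance, a classic tags-and-posts lookup could be decomposed as follows; the tag, tag_post, and post tables and the DB-API-style cursor are assumptions for illustration:

    def posts_tagged(cursor, tag_name):
        # Single multitable query this replaces:
        #   SELECT * FROM tag
        #     JOIN tag_post ON tag_post.tag_id = tag.id
        #     JOIN post     ON tag_post.post_id = post.id
        #   WHERE tag.tag = 'mysql'
        cursor.execute("SELECT id FROM tag WHERE tag = %s", (tag_name,))
        tag_ids = [row[0] for row in cursor.fetchall()]
        if not tag_ids:
            return []

        marks = ",".join(["%s"] * len(tag_ids))
        cursor.execute("SELECT post_id FROM tag_post WHERE tag_id IN (%s)" % marks,
                       tag_ids)
        post_ids = [row[0] for row in cursor.fetchall()]
        if not post_ids:
            return []

        # The application could drop any post IDs it already has cached here,
        # shrinking the IN() list before the final query.
        marks = ",".join(["%s"] * len(post_ids))
        cursor.execute("SELECT * FROM post WHERE id IN (%s)" % marks, post_ids)
        return cursor.fetchall()

Feeding the IDs returned by one query into the next is, in effect, performing the join in the application.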

In the sketch above, the work of one multitable join is done by several single-table queries whose results are combined in the application. At first glance this looks wasteful, because it replaces one query with several. However, such restructuring can actually give significant performance advantages. Caching can be more efficient.

In this example, if the row for the tag mysql is already cached, the application can skip the first query, and if it finds some of the posts already in the cache, it can remove their IDs from the IN() list before running the last query. The query cache might also benefit from this strategy. If only one of the tables changes frequently, decomposing a join can reduce the number of cache invalidations. For MyISAM tables, performing one query per table uses table locks more efficiently: the queries will lock the tables individually and relatively briefly, instead of locking them all for a longer time.

Doing joins in the application makes it easier to scale the database by placing tables on different servers. The queries themselves can be more efficient. We explain this in more detail later. You can reduce redundant row accesses. Doing a join in the application means you retrieve each row only once, whereas a join in the query is essentially a denormalization that might repeatedly access the same data. For the same reason, such restructuring might also reduce the total network traffic and memory usage.

To some extent, you can view this technique as manually implementing a hash join instead of the nested loops algorithm MySQL uses to execute a join. A hash join may be more efficient, and decomposing joins tends to pay off most when you cache and reuse a lot of data from earlier queries.

If you need to get high performance from your MySQL server, one of the best ways to invest your time is in learning how MySQL optimizes and executes queries.

Once you understand this, much of query optimization is simply a matter of reasoning from principles, and query optimization becomes a very logical process. In outline, MySQL executes a query like this: the client sends the SQL statement to the server; the server checks the query cache and, if there is a hit, returns the stored result immediately; otherwise the server parses, preprocesses, and optimizes the SQL into a query execution plan; the query execution engine executes the plan by making calls to the storage engine API; and finally the server sends the result back to the client.

Each of these steps has some extra complexity, which we discuss in the following sections. We also explain which states the query will be in during each step. The query optimization process is particularly complex and important to understand.

The first thing to look at is the MySQL client/server protocol. The protocol is half-duplex, which means that at any given time the MySQL server can be either sending or receiving messages, but not both.

It also means there is no way to cut a message short. This protocol makes MySQL communication simple and fast, but it limits it in some ways too. The client sends a query to the server as a single packet of data. In contrast, the response from the server usually consists of many packets of data. When the server responds, the client has to receive the entire result set.

It cannot simply fetch a few rows and then ask the server not to bother sending the rest. Although it is natural to think of the client as "fetching" rows, the truth is that the MySQL server is pushing the rows as it generates them. The client is only receiving the pushed rows; there is no way for it to tell the server to stop sending them. Most libraries that connect to MySQL let you either fetch the whole result set and buffer it in memory, or fetch each row as you need it.

The default behavior is generally to fetch the whole result and buffer it in memory. This is important because until all the rows have been fetched, the MySQL server will not release the locks and other resources required by the query. When the client library fetches the results all at once, it reduces the amount of work the server needs to do: the server can finish and clean up the query as quickly as possible.

You can use less memory, and start working on the result sooner, if you instruct the library not to buffer the result. The downside is that the locks and other resources on the server will remain open while your application is interacting with the library.
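
A rough sketch of unbuffered fetching with PyMySQL; the library choice, table name, and connection details are assumptions, and other client libraries expose the same idea differently:

    import pymysql
    import pymysql.cursors

    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="app",
                           cursorclass=pymysql.cursors.SSCursor)  # unbuffered

    with conn.cursor() as cursor:
        cursor.execute("SELECT id, body FROM messages")   # hypothetical table
        for row in cursor:      # rows are pulled from the wire as you iterate
            print(row)          # stand-in for per-row work; keep it fast, because
                                # the server holds locks and other resources until
                                # the result is fully consumed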

With the default buffered behavior, code that loops over a result set may seem to fetch rows only as it needs them, but the while loop is simply iterating through a buffer the library has already retrieved; the unbuffered cursor in the sketch above really does pull rows from the server one at a time. Programming languages and client libraries have different ways to override buffering, and you can often also request unbuffered mode when connecting, which will make every statement unbuffered.

Each MySQL connection, or thread, has a state that shows what it is doing at any given time. As a query progresses through its lifecycle, its state changes many times, and there are dozens of states.

The MySQL manual is the authoritative source of information for all the states, but we list a few here and explain what they mean. In the Sleep state, the thread is waiting for a new query from the client. In the Query state, the thread is either executing the query or sending the result back to the client. In the Locked state, the thread is waiting for a table lock to be granted at the server level. In the statistics state, the thread is checking storage engine statistics and optimizing the query.

The Sending data state can mean several things: the thread might be sending data between stages of the query, generating the result set, or returning the result set to the client. On very busy servers, you might see an unusual or normally brief state, such as statistics, begin to take a significant amount of time. This usually indicates that something is wrong.
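
If you want to watch for this, you can poll the process list. A rough sketch, with illustrative connection details and thresholds:

    import pymysql
    import pymysql.cursors

    conn = pymysql.connect(host="localhost", user="monitor", password="secret")

    def report_long_running(threshold_seconds=10):
        """Print threads that have spent a suspiciously long time in one state."""
        with conn.cursor(pymysql.cursors.DictCursor) as cursor:
            cursor.execute("SHOW FULL PROCESSLIST")
            for thread in cursor.fetchall():
                if thread["Command"] == "Sleep":
                    continue          # idle connections are not interesting here
                if thread["Time"] >= threshold_seconds:
                    print(thread["Id"], thread["State"], thread["Time"],
                          (thread["Info"] or "")[:80])

    report_long_running()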

Before even parsing a query, MySQL checks for it in the query cache, if the cache is enabled. This operation is a case-sensitive hash lookup, so if the query differs from a cached query by even a single byte, it will not match. If MySQL does find a match in the query cache, it must check privileges before returning the cached result. This is possible without parsing the query, because MySQL stores table information with the cached query. If the privileges are OK, MySQL retrieves the stored result from the query cache and sends it to the client, bypassing every other stage in query execution.

The query is never parsed, optimized, or executed. You can learn more about the query cache in Chapter 5.

The next step in the query lifecycle turns a SQL query into an execution plan for the query execution engine. It has several sub-steps: parsing, preprocessing, and optimization. Errors (for example, syntax errors) can be raised at any point in the process. Our goal is simply to help you understand how MySQL executes queries so that you can write better ones.

The parser first breaks the query into tokens and builds a parse tree from them, checking that the query follows MySQL's SQL grammar. The preprocessor then checks the parse tree for things the parser cannot resolve on its own, such as whether the referenced tables and columns exist, and it resolves names and aliases to make sure references are unambiguous. Next, the preprocessor checks privileges. This is normally very fast unless your server has large numbers of privileges. See Chapter 12 for more on privileges and security. The parse tree is now valid and ready for the optimizer to turn it into a query execution plan.

A query can often be executed many different ways and produce the same result. MySQL uses a cost-based optimizer, which means it tries to predict the cost of various execution plans and choose the least expensive. The unit of cost is a single random four-kilobyte data page read. You can see the optimizer's estimate for the last query you ran by checking the Last_query_cost session status variable; the value is roughly the number of random data page reads the optimizer estimated it would need to execute the query.
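
For example, a sketch of checking the estimate via Last_query_cost; PyMySQL and the sakila sample schema are assumptions:

    import pymysql

    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="sakila")

    with conn.cursor() as cursor:
        cursor.execute("SELECT SQL_NO_CACHE COUNT(*) FROM film_actor")
        cursor.fetchall()
        cursor.execute("SHOW STATUS LIKE 'Last_query_cost'")
        print(cursor.fetchone())
        # Prints ('Last_query_cost', <estimated number of random page reads>);
        # the value depends on your data and statistics.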

It bases the estimate on statistics: the number of pages per table or index, the cardinality (number of distinct values) of indexes, the length of rows and keys, and key distribution. The statistics could be wrong. The server relies on storage engines to provide statistics, and they can range from exactly correct to wildly inaccurate. There are two basic types of optimizations, which we call static and dynamic.

Static optimizations can be performed simply by inspecting the parse tree. For example, the optimizer can transform the WHERE clause into an equivalent form by applying algebraic rules.

They can be performed once and will always be valid, even when the query is reexecuted with different values. In contrast, dynamic optimizations are based on context and can depend on many factors, such as which value is in a WHERE clause or how many rows are in an index.

They must be reevaluated each time the query is executed. The difference is important in executing prepared statements or stored procedures.

MySQL can do static optimizations once, but it must reevaluate dynamic optimizations every time it executes a query. MySQL sometimes even reoptimizes the query as it executes it. Here are some types of optimizations MySQL knows how to do. It can reorder joins, because tables don't always have to be joined in the order you specify in the query. It can convert OUTER JOINs to INNER JOINs: factors such as the WHERE clause and the table schema can make an outer join equivalent to an inner join, and MySQL can recognize this and rewrite the join, which makes it eligible for reordering. MySQL applies algebraic transformations to simplify and canonicalize expressions. It can also fold and reduce constants, eliminating impossible constraints and constant conditions.

These rules are very useful for writing conditional queries, which we discuss later in the chapter. Indexes and column nullability can often help MySQL optimize away expressions such as COUNT(), MIN(), and MAX(). For example, to find the minimum value of a column that is leftmost in a B-Tree index, MySQL can simply request the first row in the index. It can even do this in the query optimization stage, and treat the value as a constant for the rest of the query. Similarly, to find the maximum value in a B-Tree index, the server reads the last row.

When the optimizer applies this optimization, EXPLAIN shows "Select tables optimized away" in the query plan, which literally means the optimizer has removed the table from the query plan and replaced it with a constant. When MySQL detects that an expression can be reduced to a constant, it will do so during optimization. Arithmetic expressions are another example. Perhaps surprisingly, even something you might consider to be a query can be reduced to a constant during the optimization phase. One example is MIN() on an index; a sketch follows.
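
A sketch of what this looks like, assuming the sakila sample schema and a PyMySQL connection:

    import pymysql

    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="sakila")   # sample schema assumed

    with conn.cursor() as cursor:
        # MIN() on an indexed column is answered by reading one end of the
        # B-Tree, so the optimizer replaces the table access with a constant.
        cursor.execute("EXPLAIN SELECT MIN(actor_id) FROM actor")
        for row in cursor.fetchall():
            print(row)
        # Expect the Extra column to report something like
        # "Select tables optimized away".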

This can even be extended to a constant lookup on a primary key or unique index: if a WHERE clause supplies a constant value for such an index, the optimizer knows the lookup can match at most one row and can treat that row's values as constants for the rest of the query. In a join driven by such a lookup, MySQL executes the query in two steps, which correspond to the two rows in the EXPLAIN output. The first step is to find the desired row in the film table; the second step then uses the values from that row as constants. It can do this because the optimizer knows that by the time the query reaches the second step, it will know all the values from the first step. MySQL can sometimes use an index to avoid reading row data, when the index contains all the columns the query needs.

We discussed covering indexes at length in Chapter 3. MySQL can convert some types of subqueries into more efficient alternative forms, reducing them to index lookups instead of separate queries. MySQL can stop processing a query or a step in a query as soon as it fulfills the query or step.

For instance, if MySQL detects an impossible condition, it can abort the entire query. You can see this in the following example:
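
A sketch of such a query, assuming the sakila sample schema, where film_id is an unsigned primary key and so can never be negative:

    impossible = "EXPLAIN SELECT film.film_id FROM film WHERE film.film_id = -1"
    # Running this EXPLAIN shows an Extra value along the lines of
    # "Impossible WHERE noticed after reading const tables": MySQL determined
    # during optimization that the query can return no rows, so it never
    # reaches the execution stage.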

This query stopped during the optimization step, but MySQL can also terminate execution sooner in some cases. For example, consider a query that finds all movies without any actors, sketched below. Such a query works by eliminating any films that have actors. Each film might have many actors, but as soon as it finds one actor, it stops processing the current film and moves to the next one, because it knows the WHERE clause prohibits outputting that film.
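
A sketch of the no-actors query, again assuming the sakila sample schema:

    films_without_actors = """
        SELECT film.film_id
        FROM film
            LEFT OUTER JOIN film_actor USING(film_id)
        WHERE film_actor.film_id IS NULL
    """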

Early termination applies in other situations as well. Another optimization worth knowing about is how MySQL handles IN() lists. In many database servers, IN() is just a synonym for multiple OR clauses, because the two are logically equivalent. Not so in MySQL, which sorts the values in the IN() list and uses a fast binary search to see whether a value is in the list.

This is O(log n) in the size of the list, whereas an equivalent series of OR clauses is O(n) in the size of the list (i.e., much slower for large lists). MySQL knows many more optimizations than we can cover here, but the general lesson is the same: it is usually a mistake to try to outsmart the optimizer. You may end up just defeating it, or making your queries more complicated and harder to maintain for zero benefit.

In general, you should let the optimizer do its work. If it doesn't give good results, your options are to add a hint to the query, rewrite the query, redesign your schema, or add indexes. The optimizer relies on the storage engines for table and index statistics: the engines may provide the optimizer with statistics such as the number of pages per table or index, the cardinality of tables and indexes, the length of rows and keys, and key distribution information.

The optimizer can use this information to help it decide on the best execution plan.

MySQL considers every query a join: not just every query that matches rows from two tables, but every query, period, including subqueries and even a SELECT against a single table. A UNION, for example, is executed as a series of single queries whose results are spooled into a temporary table; each of the individual queries is a join, in MySQL terminology, and so is the act of reading from the resulting temporary table.

MySQL's join execution strategy is simple: it treats every join as a nested-loop join. This means MySQL runs a loop to find a row from a table, then runs a nested loop to find a matching row in the next table. It continues until it has found a matching row in each table in the join, then builds and outputs a result row from the columns named in the SELECT list. It tries to build the next row by looking for more matching rows in the last table; if there aren't any, it backtracks one table and looks for more rows there. It keeps backtracking until it finds another row in some table, at which point it looks for a matching row in the next table, and so on.
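
The following Python pseudocode is an illustrative sketch of the strategy; it is not MySQL's actual implementation:

    def nested_loop_join(outer_rows, inner_rows, key):
        """For each row found in the outer table, scan the inner table and emit
        a combined row for every match, then move back (backtrack) to the outer
        table for its next row."""
        for outer in outer_rows:
            for inner in inner_rows:
                if inner[key] == outer[key]:
                    yield {**outer, **inner}

    # Tiny in-memory stand-ins for two joined tables:
    films  = [{"film_id": 1, "title": "FIRST FILM"},
              {"film_id": 2, "title": "SECOND FILM"}]
    actors = [{"film_id": 1, "actor_id": 10},
              {"film_id": 1, "actor_id": 11}]

    for row in nested_loop_join(films, actors, "film_id"):
        print(row)   # film 1 appears once per matching actor; film 2 not at all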

This query execution plan applies as easily to a single-table query as it does to a many-table query, which is why even a single-table query can be considered a join: the single-table join is the basic operation from which more complex joins are composed.

(Figure: swim-lane diagram illustrating retrieving rows using a join; read it from left to right and top to bottom.)

MySQL executes every kind of query in essentially the same way.

In short, MySQL coerces every kind of query into this execution plan. Not every query can be expressed this way, however; for example, a FULL OUTER JOIN cannot be executed with nested loops, which is one reason MySQL does not support it. Still other queries can be executed with nested loops, but perform very badly as a result. We look at some of those later.

MySQL does not generate byte-code to execute a query, as many other database products do. Instead, the query execution plan is actually a tree of instructions that the query execution engine follows to produce the query results.

The final plan contains enough information to reconstruct the original query. Any multitable query can conceptually be represented as a tree. For example, it might be possible to execute a four-table join by joining two pairs of tables and then joining the two intermediate results, which is what computer scientists call a balanced tree.

This is not how MySQL executes the query, though. As we described in the previous section, MySQL always begins with one table and finds matching rows in the next table, so its plans always take the form of a left-deep tree.

The most important part of the MySQL query optimizer is the join optimizer, which decides the best order of execution for multitable queries. It is often possible to join the tables in several different orders and get the same results. The join optimizer estimates the cost for various plans and tries to choose the least expensive one that gives the same result.

You can probably think of a few different query plans for a simple two-table join. Starting with the first table named in the query and looking up matching rows in the second should be efficient, right? Yet when you EXPLAIN such a query, you may find that MySQL has chosen quite a different plan, starting with the other table. Is that really more efficient? Often it is, and this shows why MySQL wants to reverse the join order: doing so will enable it to examine fewer rows in the first table.

The difference is how many of these indexed lookups it will have to do: if the server scans the actor table first, which holds far fewer matching rows, it will have to do far fewer index lookups into the later tables.

In other words, the reversed join order will require less backtracking and rereading. In our test, the reordered query had a much lower estimated cost than the same query with the join order forced (for example, with STRAIGHT_JOIN). Reordering joins is usually a very effective optimization. In most cases, the join optimizer will outperform a human.

The join optimizer tries to produce a query execution plan tree with the lowest achievable cost. When possible, it examines all potential combinations of subtrees, beginning with all one-table plans.

Unfortunately, a join over n tables has n-factorial (n!) possible join orders to examine. This is called the search space of all possible query plans, and it grows very quickly: a ten-table join can be executed in up to 3,628,800 different ways!
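
A quick worked illustration of how fast the search space grows:

    import math

    # The number of possible join orders grows factorially with the number of
    # tables being joined.
    for n in (2, 5, 7, 10):
        print(n, "tables:", math.factorial(n), "possible join orders")
    # 2 tables: 2 ... 10 tables: 3628800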

When the search space grows too large, it can take far too long to optimize the query, so the server stops doing a full analysis. MySQL has many heuristics, accumulated through years of research and experimentation, that it uses to speed up the optimization stage. Also, some joins simply cannot be reordered; a LEFT JOIN is a good example, as is a correlated subquery, because the results for one table depend on data retrieved from another table.

These dependencies help the join optimizer reduce the search space by eliminating choices.

Sorting results can be a costly operation, so you can often improve performance by avoiding sorts or by performing them on fewer rows.

We showed you how to use indexes for sorting in Chapter 3. When MySQL cannot use an index to produce a sorted result, it must sort the rows itself, an operation it calls a filesort even when the sort happens entirely in memory. If the values to be sorted fit into the sort buffer, MySQL can perform the sort entirely in memory with a quicksort. If they don't fit, it sorts the data on disk in chunks: it uses a quicksort to sort each chunk and then merges the sorted chunks into the final result.

MySQL has two filesort algorithms. The older two-pass algorithm reads row pointers and ORDER BY columns, sorts them, and then rereads the rows in sorted order to produce the output. The second pass causes a lot of random I/O; on the other hand, the algorithm stores a minimal amount of data during the sort, so if the rows to be sorted are completely in memory, it can be cheaper to store less data and reread the rows to generate the final result. The newer single-pass algorithm reads all the columns needed for the query, sorts them by the ORDER BY columns, and then scans the sorted list and outputs the specified columns.

The single-pass algorithm is available only in MySQL 4.1 and newer. It avoids reading the rows twice, but it has the potential to use a lot more space, because it holds all desired columns from each row, not just the columns needed to sort the rows. This means fewer tuples will fit into the sort buffer, and the filesort will have to perform more sort merge passes. When sorting a join, MySQL may perform the filesort at either of two stages during the query execution, depending on whether the ORDER BY clause can be satisfied from the first table alone or needs columns from later tables.

The plan is a data structure; it is not executable byte-code, which is how many other databases execute queries.

In contrast to the optimization stage, the execution stage is usually not all that complex: MySQL simply follows the instructions given in the query execution plan. Many of the operations in the plan invoke methods implemented by the storage engine interface, also known as the handler API. Each table in the query is represented by an instance of a handler.

If a table appears three times in the query, for example, the server creates three handler instances. Though we glossed over this before, MySQL actually creates the handler instances early in the optimization stage. The optimizer uses them to get information about the tables, such as their column names and index statistics. The storage engine interface has lots of functionality, but the execution plan needs only a handful of building-block operations: for example, one operation begins an index scan, and another reads the next row in an index. This is enough for a query that does an index scan. Not everything is a handler operation. For example, the server manages table locks. As explained in Chapter 1, anything that all storage engines share is implemented in the server, such as date and time functions, views, and triggers.
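
As a very rough illustration, the instructions for a simple indexed lookup might boil down to a loop like the following; the method names here are invented pseudocode, not the real handler API:

    def execute_index_lookup(handler, index_name, key, send_to_client):
        """Illustrative pseudocode only: drive one storage engine handler
        through an index scan and emit each matching row as it is found."""
        handler.init_index(index_name)            # begin an index scan
        row = handler.index_read(key)             # read the first matching row
        while row is not None:
            send_to_client(row)                   # emit each row as it is built
            row = handler.index_next_same(key)    # read the next matching row
        handler.end_index()                       # release the scan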

To execute the query, the server just repeats the instructions until there are no more rows to examine. The final step in executing a query is to reply to the client.

If the query is cacheable, MySQL will also place the results into the query cache at this stage. The server generates and sends results incrementally. Think back to the single-sweep multijoin method we mentioned earlier. As soon as MySQL processes the last table and generates one row successfully, it can and should send that row to the client.

This has two benefits: it lets the server avoid holding the row in memory, and it means the client starts getting the results as soon as possible.

The MySQL query optimizer does have limitations, though. Some of these limitations will probably be eased or removed entirely in future versions, and some have already been fixed in versions not yet released as GA (generally available). In particular, there are a number of subquery optimizations in the MySQL 6 source code, and more are in progress. MySQL sometimes optimizes subqueries very badly.

Consider a query that finds rows in one table that match rows in another, such as all the films a given actor appears in (the example in the sketch below). This feels natural to write with a subquery using IN(), and since we said an IN() list is generally very fast, you might expect MySQL to execute the subquery first and rewrite the outer query to use the resulting list of constants. Unfortunately, exactly the opposite happens: MySQL pushes a reference to the outer table down into the subquery, turning it into a correlated (dependent) subquery that must be executed once for every row in the outer table. Sometimes this can be faster than a JOIN. MySQL has been criticized thoroughly for this particular type of subquery execution plan. Although it definitely needs to be fixed, the criticism often confuses two different issues: execution order and caching.
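
Here is a sketch of the pattern, using the sakila sample schema as a stand-in for the original example (table and column names assumed):

    # The natural way to write "films that actor 1 appears in":
    natural = """
        SELECT * FROM film
        WHERE film_id IN (SELECT film_id FROM film_actor WHERE actor_id = 1)
    """

    # Older MySQL versions effectively turn the IN() subquery into a correlated
    # (dependent) subquery, roughly equivalent to this, which must be run once
    # for every row of the outer film table:
    as_executed = """
        SELECT * FROM film
        WHERE EXISTS (SELECT * FROM film_actor
                      WHERE actor_id = 1
                        AND film_actor.film_id = film.film_id)
    """

    # Rewriting it yourself as a join puts you back in control of the
    # execution order:
    manual_rewrite = """
        SELECT film.* FROM film
            INNER JOIN film_actor USING(film_id)
        WHERE actor_id = 1
    """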

Rewriting the query yourself lets you take control over both aspects. Future versions of MySQL should be able to optimize this type of query much better, although this is no easy task. There are very bad worst cases for any execution plan, including the inside-out execution plan that some people think would be simple to optimize. Rather than assuming a subquery is always the problem, benchmark and make your own decision. Sometimes a correlated subquery is a perfectly reasonable, or even optimal, way to get a result.

Either form can stop examining a film's actors as soon as it finds the first match; this is an example of the early-termination algorithm we mentioned earlier in this chapter. So, in theory, MySQL will execute the two queries almost identically.

In reality, benchmarking is the only way to tell which approach is really faster. We benchmarked both queries on our standard setup. Sometimes a subquery can be faster: for example, it can work well when you just want to see rows from one table that match rows in another table. The following join, which is designed to find every film that has an actor, will return duplicates because some films have multiple actors (see the sketch below):
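
A sketch of the three forms, again with the sakila schema as a stand-in:

    # The join form returns one row per (film, actor) pair, so films with
    # several actors appear several times:
    join_with_duplicates = """
        SELECT film.film_id
        FROM film
            INNER JOIN film_actor USING(film_id)
    """

    # Two ways to express "films that have at least one actor" without
    # duplicates: suppress them after the fact, or ask the question directly.
    distinct_form = """
        SELECT DISTINCT film.film_id
        FROM film
            INNER JOIN film_actor USING(film_id)
    """
    exists_form = """
        SELECT film_id FROM film
        WHERE EXISTS (SELECT * FROM film_actor
                      WHERE film_actor.film_id = film.film_id)
    """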

But what are we really trying to express with this query, and is it obvious from the SQL? The EXISTS form in the sketch states the intent, "films that have at least one actor," much more directly. Again, we benchmarked to see which strategy was faster. In this example, the subquery performs much faster than the join.

We showed this lengthy example to illustrate two points: you should not heed categorical advice about subqueries, and you should use benchmarks to prove your assumptions about query plans and execution speed.

Index merge algorithms, introduced in MySQL 5.0, let MySQL use more than one index per table in a query.

In MySQL 5.0 and newer, there are three variations on the algorithm: union for OR conditions, intersection for AND conditions, and unions of intersections for combinations of the two.

A query whose WHERE clause ORs together conditions on two differently indexed columns, for example, can use a union of two index scans, as you can see by examining the Extra column of EXPLAIN (see the sketch below). The strategy is not always a win, though; the buffering, sorting, and merging it requires can cost more than it saves. This is especially true if not all of the indexes are very selective, so the parallel scans return lots of rows to the merge operation. This is another reason to design realistic benchmarks.
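
A sketch, assuming the sakila film_actor table and its index names:

    index_merge_union = """
        EXPLAIN SELECT film_id, actor_id
        FROM film_actor
        WHERE actor_id = 1 OR film_id = 1
    """
    # The Extra column reports something along the lines of
    #   Using union(PRIMARY,idx_fk_film_id); Using where
    # i.e. two index scans whose results are merged with a union
    # (the exact index names depend on your schema).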

Equality propagation can have unexpected costs sometimes. For example, if you have a huge IN() list on a column that the optimizer knows is equal to columns on other tables, because of a WHERE, ON, or USING clause that sets the columns equal to each other, the optimizer will share the list by copying it to the corresponding columns in all related tables. This is normally helpful, because it gives the query optimizer and execution engine more options for where to actually execute the IN() check. But when the list is very large, it can result in slower optimization and execution.

Hash joins are a feature offered by some other database servers, but not by MySQL; however, you can emulate hash joins using hash indexes. MySQL has historically been unable to do loose index scans, which scan noncontiguous ranges of an index. MySQL's index scans generally require a defined start point and end point in the index, and MySQL will scan the entire range of rows within these end points, even when only a few of those rows are actually needed. An example will help clarify this. Suppose we have a table with an index on columns (a, b), and we want to run the following query:
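
A sketch with a hypothetical table tbl and an index on (a, b):

    range_on_second_column = """
        SELECT COUNT(*) FROM tbl WHERE b BETWEEN 2 AND 3
    """
    # Because b is not the leading column of the index, MySQL cannot use the
    # index to narrow this down; it scans everything between the index's start
    # and end points. The usual workaround is to pin the leading column to a
    # known list of constants, turning the query into ordinary range accesses:
    workaround = """
        SELECT COUNT(*) FROM tbl
        WHERE a IN (1, 2, 3)          -- illustrative list of known values of a
          AND b BETWEEN 2 AND 3
    """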

Because the index begins with column a and the query places no condition on a, MySQL cannot use the index to skip straight to the qualifying values of b. A loose index scan, which MySQL cannot currently do in the general case, would be more efficient, because it could scan just the relevant range of b within each distinct value of a. Beginning in MySQL 5.0, loose index scans are possible in certain limited circumstances, such as queries that find maximum and minimum values in a grouped query. This is a good optimization for this special purpose, but it is not a general-purpose loose index scan. Until MySQL supports general-purpose loose index scans, the workaround is to supply a constant or list of constants for the leading columns of the index, as in the sketch above. We showed several examples of how to get good performance with these types of queries in our indexing case study in the previous chapter.

MySQL also doesn't optimize certain MIN() and MAX() queries very well. When the WHERE clause filters on a column that isn't indexed, MySQL cannot use the read-one-end-of-the-index trick described earlier; in this case, MySQL will scan the whole table, which you can verify by profiling the query. One workaround is to rewrite the query so that MySQL reads an index in order and stops at the first matching row, rather than asking for MIN() or MAX() directly. This general strategy often works well when MySQL would otherwise choose to scan more rows than necessary. Purists will object that such a query no longer says what it means. True, but sometimes you have to compromise your principles to get high performance.

Another limitation is that MySQL doesn't let you SELECT from a table while simultaneously running an UPDATE on it. Suppose you want to update each row with the number of similar rows in the table: written naively, the UPDATE's subquery selects from the table being updated, and MySQL rejects it. To work around this limitation, you can use a derived table, because MySQL materializes it as a temporary table; both forms are sketched below.
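
A sketch, using a hypothetical table tbl with columns type and cnt:

    # MySQL rejects this form, because it selects from the table it is updating:
    rejected = """
        UPDATE tbl AS outer_tbl
        SET cnt = (SELECT COUNT(*) FROM tbl AS inner_tbl
                   WHERE inner_tbl.type = outer_tbl.type)
    """

    # A derived table is materialized into a temporary table first, so this
    # equivalent form is allowed:
    workaround = """
        UPDATE tbl
            INNER JOIN (
                SELECT type, COUNT(*) AS cnt
                FROM tbl
                GROUP BY type
            ) AS der USING(type)
        SET tbl.cnt = der.cnt
    """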

In this section, we give advice on how to optimize certain kinds of queries. Most of the advice in this section is version-dependent, and it may not hold for future versions of MySQL.


