SQL unleashed: 17 ways to speed your SQL queries
It’s easy to create database code that slows down query results or ties up the database unnecessarily—unless you follow these tips
SQL developers on every platform are struggling, seemingly stuck in a DO WHILE loop that makes them repeat the same mistakes again and again. That’s because the database field is still relatively immature. Sure, vendors are making some strides, but they continue to grapple with the bigger issues. Concurrency, resource management, space management, and speed still plague SQL developers whether they’re coding on SQL Server, Oracle, DB2, Sybase, MySQL, or any other relational platform.
Part of the problem is that there is no magic bullet, and for almost every best practice, I can show you at least one exception. Typically, a developer finds his or her own favorite methods—though usually they don’t include any constructs for performance or concurrency—and doesn’t bother exploring other options. Maybe that’s a symptom of lack of education, or the developers are just too close to the process to recognize when they’re doing something wrong. Maybe the query runs well on a local set of test data but fails miserably on the production system.
I don’t expect SQL developers to become administrators, but they must take production issues into account when writing their code. If they don’t do it during initial development, the DBAs will just make them go back and do it later—and the users suffer in the interim.
There’s a reason why we say tuning a database is both an art and a science. It’s because very few hard-and-fast rules exist that apply across the board. The problems you’ve solved on one system aren’t issues on another, and vice versa. There’s no right answer when it comes to tuning queries, but that doesn’t mean you should give up.
There are some good principles you can follow that should yield results in one combination or another. I’ve encapsulated them in a list of SQL dos and don’ts that often get overlooked or are hard to spot. These techniques should give you a little more insight into the minds of your DBAs, as well as the ability to start thinking of processes in a production-oriented way.
1. Don’t use UPDATE instead of CASE
This issue is very common, and though it’s not hard to spot, many developers often overlook it because using UPDATE has a natural flow that seems logical.
Take this scenario, for instance: You’re inserting data into a temp table and need it to display a certain value if another value exists. Maybe you’re pulling from the Customer table and you want anyone with more than $100,000 in orders to be labeled as “Preferred.” Thus, you insert the data into the table and run an UPDATE statement to set the CustomerRank column to “Preferred” for anyone who has more than $100,000 in orders. The problem is that the UPDATE statement is logged, which means it has to write twice for every single write to the table. The way around this, of course, is to use an inline CASE statement in the SQL query itself. This tests every row for the order amount condition and sets the “Preferred” label before it’s written to the table. The performance increase can be staggering.
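Here’s a rough sketch of the difference. The table and column names (Customers, TotalOrders, #CustomerStage) are hypothetical stand-ins for whatever your schema actually uses:

```sql
-- Hypothetical staging temp table
CREATE TABLE #CustomerStage (
    CustomerID   int,
    TotalOrders  money,
    CustomerRank varchar(20)
);

-- Slower pattern: insert, then a logged UPDATE touches every qualifying row a second time
INSERT INTO #CustomerStage (CustomerID, TotalOrders, CustomerRank)
SELECT CustomerID, TotalOrders, NULL
FROM dbo.Customers;

UPDATE #CustomerStage
SET CustomerRank = 'Preferred'
WHERE TotalOrders > 100000;

-- Faster pattern: decide the label inline with CASE, so each row is written only once
INSERT INTO #CustomerStage (CustomerID, TotalOrders, CustomerRank)
SELECT CustomerID,
       TotalOrders,
       CASE WHEN TotalOrders > 100000 THEN 'Preferred' ELSE 'Standard' END
FROM dbo.Customers;
```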
2. Don’t blindly reuse code
This issue is also very common. It’s very easy to copy someone else’s code because you know it pulls the data you need. The problem is that quite often it pulls much more data than you need, and developers rarely bother trimming it down, so they end up with a huge superset of data. This usually comes in the form of an extra outer join or an extra condition in the WHERE clause. You can get huge performance gains if you trim reused code to your exact needs.
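As a small illustration (all object names here are hypothetical), trimming often means dropping a join and columns the new report never touches:

```sql
-- Reused query: drags along an outer join and columns this report never uses
SELECT c.CustomerID, c.State, o.OrderTotal, n.NoteText
FROM dbo.Customers c
JOIN dbo.Orders o          ON o.CustomerID = c.CustomerID
LEFT JOIN dbo.OrderNotes n ON n.OrderID = o.OrderID   -- not needed for this report
WHERE c.State = 'CA';

-- Trimmed to exactly what this report needs
SELECT c.CustomerID, o.OrderTotal
FROM dbo.Customers c
JOIN dbo.Orders o ON o.CustomerID = c.CustomerID
WHERE c.State = 'CA';
```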
3. Do pull only the number of columns you need
This issue is similar to issue No. 2, but it’s specific to columns. It’s all too easy to code all your queries with SELECT * instead of listing the columns individually. The problem again is that it pulls more data than you need. I’ve seen this error dozens and dozens of times. A developer does a SELECT * query against a table with 120 columns and millions of rows, but winds up using only three to five of them. At that point, you’re processing so much more data than you need it’s a wonder the query returns at all. You’re not only processing more data than you need, but you’re also taking resources away from other processes.
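In sketch form (hypothetical table and columns), the fix is simply to name what you use:

```sql
-- Pulls all 120 columns even though the report uses three of them
SELECT *
FROM dbo.Customers;

-- Pulls only what the report actually needs
SELECT CustomerID, State, Income
FROM dbo.Customers;
```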
4. Don’t double-dip
Here’s another one I’ve seen more times than I should have: A stored procedure is written to pull data from a table with hundreds of millions of rows. The developer needs customers who live in California and have incomes of more than $40,000. So he queries for customers that live in California and puts the results into a temp table; then he queries for customers with incomes above $40,000 and puts those results into another temp table. Finally, he joins both tables to get the final product.
Are you kidding me? This should be done in a single query; instead, you’re double-dipping a superlarge table. Don’t be a moron: Query large tables only once whenever possible—you’ll find your procedures perform much better.
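A minimal sketch of the two approaches, using hypothetical names, looks something like this:

```sql
-- Double-dip pattern: two scans of a huge table, then a join of the pieces
SELECT CustomerID, State, Income
INTO #CaliforniaCustomers
FROM dbo.Customers
WHERE State = 'CA';

SELECT CustomerID, Income
INTO #HighIncomeCustomers
FROM dbo.Customers
WHERE Income > 40000;

SELECT c.CustomerID, c.State, c.Income
FROM #CaliforniaCustomers c
JOIN #HighIncomeCustomers h ON h.CustomerID = c.CustomerID;

-- Single pass: one query, one scan of the big table
SELECT CustomerID, State, Income
FROM dbo.Customers
WHERE State = 'CA'
  AND Income > 40000;
```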
5. Do know when to use temp tables
A slightly different scenario is when a subset of a large table is needed by several steps in a process, which causes the large table to be queried each time. Avoid this by querying for the subset once and persisting it in a temp table, then pointing the subsequent steps to that smaller data set.
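Something like the following sketch, where the table names and the IsActive filter are hypothetical:

```sql
-- Pull the shared subset once, then point later steps at it
SELECT CustomerID, State, Income, CustomerRank
INTO #ActiveCustomers
FROM dbo.Customers           -- the huge table, touched only once
WHERE IsActive = 1;

-- Step 1 works against the small set
SELECT CustomerRank, COUNT(*) AS Customers
FROM #ActiveCustomers
GROUP BY CustomerRank;

-- Step 2 joins the small set instead of hitting dbo.Customers again
SELECT a.CustomerID, o.OrderTotal
FROM #ActiveCustomers a
JOIN dbo.Orders o ON o.CustomerID = a.CustomerID;
```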
6. Do pre-stage data
This is one of my favorite topics because it’s an old technique that’s often overlooked. If you have a report or a procedure (or better yet, a set of them) that will do similar joins to large tables, it can be a benefit for you to pre-stage the data by joining the tables ahead of time and persisting them into a table. Now the reports can run against that pre-staged table and avoid the large join.
You’re not always able to use this technique, but when you can, you’ll find it is an excellent way to save server resources.
Note that many developers get around this join problem by concentrating on the query itself and creating a view around the join so that they don’t have to type the join conditions again and again. But the problem with this approach is that the query still runs for every report that needs it. By pre-staging the data, you run the join just once (say, 10 minutes before the reports) and everyone else avoids the big join. I can’t tell you how much I love this technique; in most environments, there are popular tables that get joined all the time, so there’s no reason why they can’t be pre-staged.
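A rough sketch of the idea, with hypothetical names, where a scheduled step does the big join once and the reports read the result:

```sql
-- Scheduled step (run shortly before the reports) does the large join one time
IF OBJECT_ID('dbo.CustomerOrderStage') IS NOT NULL
    DROP TABLE dbo.CustomerOrderStage;

SELECT c.CustomerID, c.State, o.OrderID, o.OrderDate, o.OrderTotal
INTO dbo.CustomerOrderStage
FROM dbo.Customers c
JOIN dbo.Orders o ON o.CustomerID = c.CustomerID;

-- Each report then reads the pre-staged table and skips the join entirely
SELECT State, SUM(OrderTotal) AS Sales
FROM dbo.CustomerOrderStage
WHERE OrderDate >= '20240101'
GROUP BY State;
```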
7. Do delete and update in batches
Here’s another easy technique that gets overlooked a lot. Deleting or updating large amounts of data from huge tables can be a nightmare if you don’t do it right. The problem is that both of these statements run as a single transaction, and if you need to kill them or if something happens to the system while they’re working, the system has to roll back the entire transaction. This can take a very long time. These operations can also block other transactions for their duration, essentially bottlenecking the system.
The solution is to do deletes or updates in smaller batches. This solves your problem in a couple ways. First, if the transaction gets killed for whatever reason, it only has a small number of rows to roll back, so the database returns online much quicker. Second, while the smaller batches are committing to disk, others can sneak in and do some work, so concurrency is greatly enhanced.
Along these lines, many developers have it stuck in their heads that these delete and update operations must be completed the same day. That’s not always true, especially if you’re archiving. You can stretch that operation out as long as you need to, and the smaller batches help accomplish that. If you can take longer to do these intensive operations, spend the extra time and don’t bring your system down.
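One common way to batch a big delete in T-SQL is a loop like the sketch below; the table name, date cutoff, and batch size are all hypothetical, and the right batch size depends on your system:

```sql
-- Delete in small batches so each transaction commits (or rolls back) quickly
DECLARE @BatchSize   int = 10000;
DECLARE @RowsDeleted int = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.AuditLog            -- hypothetical archive target
    WHERE LogDate < '20200101';

    SET @RowsDeleted = @@ROWCOUNT;
END;
```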
8. Do use temp tables to improve cursor performance
I hope we all know by now that it’s best to stay away from cursors if at all possible. Cursors not only suffer from speed problems, which in itself can be an issue with many operations, but they can also cause your operation to block other operations for a lot longer than is necessary. This greatly decreases concurrency in your system.
However, you can’t always avoid using cursors, and when those times arise, you may be able to get away from cursor-induced performance issues by doing the cursor operations against a temp table instead. Take, for example, a cursor that goes through a table and updates a couple of columns based on some comparison results. Instead of doing the comparison against the live table, you may be able to put that data into a temp table and do the comparison against that instead. Then you have a single UPDATE statement against the live table that’s much smaller and holds locks only for a short time.
Sniping your data modifications like this can greatly increase concurrency. I’ll finish by saying you almost never need to use a cursor. There’s almost always a set-based solution; you need to learn to see it.
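Here’s a sketch of the pattern with hypothetical names; the cursor body stands in for whatever row-by-row logic you truly can’t express as a set:

```sql
-- Copy just the rows and columns the cursor needs into a temp table
SELECT CustomerID, Income, CustomerRank
INTO #RankWork
FROM dbo.Customers
WHERE State = 'CA';

-- The cursor works against #RankWork, so it never holds locks on the live table
DECLARE @CustomerID int, @Income money;
DECLARE rank_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT CustomerID, Income FROM #RankWork;
OPEN rank_cur;
FETCH NEXT FROM rank_cur INTO @CustomerID, @Income;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE #RankWork
    SET CustomerRank = CASE WHEN @Income > 100000 THEN 'Preferred' ELSE 'Standard' END
    WHERE CustomerID = @CustomerID;

    FETCH NEXT FROM rank_cur INTO @CustomerID, @Income;
END;
CLOSE rank_cur;
DEALLOCATE rank_cur;

-- One short UPDATE applies the results to the live table and is done quickly
UPDATE c
SET c.CustomerRank = w.CustomerRank
FROM dbo.Customers c
JOIN #RankWork w ON w.CustomerID = c.CustomerID;
```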
9. Don’t nest views
Views can be convenient, but you need to be careful when using them. While views can help to obscure large queries from users and to standardize data access, you can easily find yourself in a situation where you have views that call views that call views that call views. This is called nesting views, and it can cause severe performance issues, particularly in two ways:
- First, you will very likely have much more data coming back than you need.
- Second, the query optimizer will give up and return a bad query plan.
I once had a client that loved nesting views. The client had one view it used for almost everything because it had two important joins. The problem was that the view returned a column with 2MB documents in it. Some of the documents were even larger. The client was pushing at least an extra 2MB across the network for every single row in almost every single query it ran. Naturally, query performance was abysmal.
And none of the queries actually used that column! Of course, the column was buried seven views deep, so even finding it was difficult. When I removed the document column from the view, the time for the biggest query went from 2.5 hours to 10 minutes. When I finally unraveled the nested views, which had several unnecessary joins and columns, and wrote a plain query, the time for that same query dropped to subseconds.