Don’t use DISTINCT as a “join-fixer”

I’ve quietly resolved performance issues by rewriting slow queries to avoid DISTINCT. Often, the DISTINCT is there only to serve as a “join-fixer,” and I can explain what that means using an example.

Let’s say we have the following grossly simplified schema, representing customers, products, and product categories:
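
Something along these lines, for example (the exact table, column, and data type choices here are assumptions for illustration):

CREATE TABLE dbo.Customers
(
    CustomerID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name       nvarchar(100) NOT NULL,
    Email      nvarchar(320) NOT NULL
);

CREATE TABLE dbo.Categories
(
    CategoryID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name       nvarchar(50) NOT NULL
);

CREATE TABLE dbo.Products
(
    ProductID  int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name       nvarchar(100) NOT NULL,
    CategoryID int NOT NULL
        FOREIGN KEY REFERENCES dbo.Categories(CategoryID)
);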

And then we have tables for orders and order details:
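
Again, a rough sketch with assumed names and types:

CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL
        FOREIGN KEY REFERENCES dbo.Customers(CustomerID),
    OrderDate  datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
);

CREATE TABLE dbo.OrderDetails
(
    OrderID   int NOT NULL
        FOREIGN KEY REFERENCES dbo.Orders(OrderID),
    ProductID int NOT NULL
        FOREIGN KEY REFERENCES dbo.Products(ProductID),
    Quantity  int NOT NULL DEFAULT (1)
);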

And some sample data:
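
For example, a handful of made-up rows (the hard-coded ID values assume the identity columns above start at 1):

INSERT dbo.Customers(Name, Email) VALUES
    (N'Customer A', N'a@example.com'),
    (N'Customer B', N'b@example.com');

INSERT dbo.Categories(Name) VALUES
    (N'Beauty'),        -- CategoryID = 1
    (N'Electronics');   -- CategoryID = 2

INSERT dbo.Products(Name, CategoryID) VALUES
    (N'Moisturizer', 1),
    (N'Shampoo',     1),
    (N'Headphones',  2);

INSERT dbo.Orders(CustomerID) VALUES (1), (2);

INSERT dbo.OrderDetails(OrderID, ProductID, Quantity) VALUES
    (1, 1, 1),   -- customer 1 ordered one beauty product
    (2, 3, 1);   -- customer 2 ordered headphones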

Marketing says we want to send an e-mail or give a discount code to all the customers who have ordered a product from the beauty category. The initial attempt at a query for this might be something like this:
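
Using the assumed schema above, a join-everything attempt might look roughly like this:

SELECT c.CustomerID, c.Email
  FROM dbo.Customers AS c
  INNER JOIN dbo.Orders AS o
          ON o.CustomerID = c.CustomerID
  INNER JOIN dbo.OrderDetails AS od
          ON od.OrderID = o.OrderID
  INNER JOIN dbo.Products AS p
          ON p.ProductID = od.ProductID
  INNER JOIN dbo.Categories AS cat
          ON cat.CategoryID = p.CategoryID
  WHERE cat.Name = N'Beauty';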

The plan doesn’t look so bad (yet):

A simple plan

And with local or test data, the output might look right, since we may have inserted only a single row into OrderDetails to match our criteria (and to make our tests pass). But what if I have ordered two products from the beauty category (in the same order, or across multiple orders)?
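
For example, one more assumed row, adding a second beauty product to the same customer's existing order:

INSERT dbo.OrderDetails(OrderID, ProductID, Quantity) VALUES
    (1, 2, 1);   -- same customer, second beauty product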

Now the query returns that customer twice! We certainly don’t want to send them two e-mails, or issue multiple discount codes to the same customer. And the plan, on its own, can’t really provide any obvious clues that there are duplicate rows:

Hidden duplicates

But you sure will notice if you inspect the results, or an end user will notice if you unleash this in production. The quick fix tends to be: slap a big ol’ DISTINCT on there which, indeed, fixes the symptom by eliminating duplicates:
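
In other words, the same join query with DISTINCT bolted on:

SELECT DISTINCT c.CustomerID, c.Email
  FROM dbo.Customers AS c
  INNER JOIN dbo.Orders AS o
          ON o.CustomerID = c.CustomerID
  INNER JOIN dbo.OrderDetails AS od
          ON od.OrderID = o.OrderID
  INNER JOIN dbo.Products AS p
          ON p.ProductID = od.ProductID
  INNER JOIN dbo.Categories AS cat
          ON cat.CategoryID = p.CategoryID
  WHERE cat.Name = N'Beauty';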

But at what cost? A distinct sort, that’s what!

Pain caused by that DISTINCT

If I’m testing changes to this query in my local environment, and maybe only checking the output and that it comes back quickly, I might miss the clues in the plan and be pretty satisfied that adding DISTINCT fixed the issue without impacting performance.

This will only get worse with more data.

And while we could spend a lot of time tuning indexes on all the involved tables to make that sort hurt less, this multi-table join is always going to produce rows you never ultimately need. Think about SQL Server’s job: yes, it needs to return correct results, but it should also do so in the most efficient way possible. Reading all the data (and then sorting it), only to throw away some or most of it, is very wasteful.

Can we express the query without DISTINCT?

When I know I need to “join” to tables but only care about existence of rows and not any of the output from those tables, I turn to EXISTS. I also try to eliminate looking up values that I know are going to be the same on every row. In this case, I don’t need to join to Categories every time if CategoryID is effectively a constant.

One way to express this same query, ensuring no duplicate customers and, hopefully, reducing the cost of sorting:
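
For example, something along these lines (again using the assumed schema), looking up the beauty CategoryID once and letting EXISTS handle the rest:

SELECT c.CustomerID, c.Email
  FROM dbo.Customers AS c
  WHERE EXISTS
  (
    SELECT 1
      FROM dbo.Orders AS o
      INNER JOIN dbo.OrderDetails AS od
              ON od.OrderID = o.OrderID
      INNER JOIN dbo.Products AS p
              ON p.ProductID = od.ProductID
      WHERE o.CustomerID = c.CustomerID
        AND p.CategoryID =
        (
          SELECT CategoryID
            FROM dbo.Categories
            WHERE Name = N'Beauty'
        )
  );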

There’s a simple, additional index seek against Categories, of course, but the plan for the overall query is drastically more efficient (we’re down to 2 scans and 2 seeks).

A plan using EXISTS

Another way to express the same query is to force Orders to be scanned later:
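
One possible shape for that (again, a sketch against the assumed schema) is to push Orders down into its own nested EXISTS:

SELECT c.CustomerID, c.Email
  FROM dbo.Customers AS c
  WHERE EXISTS
  (
    SELECT 1
      FROM dbo.OrderDetails AS od
      INNER JOIN dbo.Products AS p
              ON p.ProductID = od.ProductID
      WHERE p.CategoryID =
      (
        SELECT CategoryID
          FROM dbo.Categories
          WHERE Name = N'Beauty'
      )
        AND EXISTS
        (
          SELECT 1
            FROM dbo.Orders AS o
            WHERE o.OrderID    = od.OrderID
              AND o.CustomerID = c.CustomerID
        )
  );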

This can be beneficial if you have more Orders than Customers (I certainly hope that’s the case). Notice in the plan that Orders is scanned later, hopefully after many irrelevant orders have been filtered out.

A slightly different EXISTS plan

Conclusion

DISTINCT is often hiding flaws in the underlying logic, and it can really pay off to explore other ways to write your queries without it. There was another interesting use case I wrote about a few years ago that showed how changing DISTINCT to GROUP BY – even though it carries the same semantics and produces the same results – can help SQL Server filter out duplicates earlier and have a serious impact on performance.

About the author

Aaron Bertrand

Aaron Bertrand (he/him), a cheery Canadian with industry experience dating back to SQL Server 6.5, is a Staff Database Reliability Engineer for Stack Overflow. He also blogs at sqlblog.org, mssqltips.com, and thebertrandfamily.com.