When SQL Server Performance Goes Bad: Rogue Indexes

Phil Factor explains how SQL Monitor helps focus performance tuning efforts on the tables and queries where 'rogue indexes' might be a significant problem, and then how to identify both 'missing' indexes that might be beneficial to the overall workload, and those that are unused or duplicated, and so are doing more harm than good.

Guest post

This is a guest post from Phil Factor. Phil Factor (real name withheld to protect the guilty), aka Database Mole, has 30 years of experience with database-intensive applications.

Despite having once been shouted at by a furious Bill Gates at an exhibition in the early 1980s, he has remained resolutely anonymous throughout his career.

He is a regular contributor to Simple Talk and SQLServerCentral.

The right set of indexes

When SQL Server performance goes bad, all the main metrics for resource usage, such as CPU, IO, memory, wait times, and so on, will tell you that the server is under considerable stress. There are many possible causes for this, of course, one of which is missing, or ‘inappropriate’, indexes.

Having the right set of indexes in your SQL Server databases will increase the efficiency with which SQL Server can gather the data, and therefore the performance of your most important queries and processes. For each table in the database this generally means choosing the right clustered index, and then a supporting set of narrow, well-designed nonclustered indexes, selective for the filters, or ‘predicates’, used by the most important queries.

If SQL Server often has to read every row in a 2 million row table, just to find the 20 rows it needs, then we may need to consider modifying an existing index on that table, or adding a new one, in order to improve query performance. Conversely, if we try to speed up every ‘read’ just by adding another nonclustered index, all we’ll do is cause a performance problem for our ‘writes’, because all these indexes must be maintained, as data is modified.
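As a purely hypothetical illustration (the table, column and index names below are invented, not taken from any particular database), a narrow nonclustered index that is selective for the predicate of a frequent query might look like this:

```sql
-- Hypothetical example: a narrow, selective nonclustered index to support a query that
-- filters on CustomerID and OrderDate; TotalDue is included to make the index covering.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate)
    INCLUDE (TotalDue);
```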

In terms of how often SQL Server uses nonclustered indexes, there are three varieties: those that it uses often, those it uses seldom or not at all, and those it wishes it had but which are conspicuous by their absence. This article is about how to use SQL Monitor, plus various dynamic management views and system catalog views, to ensure you have mostly just the first variety of index.

I won’t, in this article, discuss cases where a valid index exists but is ‘ignored’ by the query optimizer due to inaccurate cardinality estimates, caused by issues such as stale or inaccurate statistics for the data stored within the index, nor can I delve into the details of index design, selectivity and cardinality.

Missing indexes

When an appropriate index doesn’t exist for a query, the optimizer will have to devise other means to get the data, often resulting in an execution plan that scans another index, such as the clustered index. This means that it reads all or most of the rows stored in that index. Similarly, the lack of an appropriate index might mean the optimizer has to use a Sort operator to get the data into the right order to perform a join, or other operation.

There is nothing inherently bad about scans but, as noted above, if a frequently executed query results in SQL Server repeatedly reading far more rows than it needs, then such queries will cause significant CPU, IO and memory use, much of which is unnecessary. They can also block and impede the performance of any concurrent queries. Similarly, sort operations become more expensive the greater the number of rows that need to be processed. In these cases, you’ll want to investigate whether you can supply a selective index, enabling the optimizer to choose a more effective plan.

Finding expensive queries with missing indexes using SQL Monitor

When performance goes bad, on the main activity graph for the SQL Server you’ll likely see that the CPU (the blue line) is under sustained load, with dramatic spikes in disk IO times (green), periods of high wait times (orange), and probably increasing or sustained high memory use (purple).

There are several possible causes for this, and missing indexes are just one of them. If the query optimizer, when trying to come up with an efficient way of executing a SQL statement, has to reject a good strategy because the index it requires isn’t there, it puts a warning into the execution plan.

The nice thing about using a tool like SQL Monitor is that it provides, among its sea of data, a list of the expensive queries running over the period of the performance slowdown:

By plotting and analyzing various of the SQL Server metrics that SQL Monitor collects, as well as analyzing the general wait types seen over the period, you can find out which resources are currently limiting performance. You can then filter the expensive queries list according to the limiting resource and focus on the queries that are causing problems for the system.

If you click on one of those, you will find both the source code of the query and its execution plan:

Click on View Query Plan, and you’ll see a diagram for the cached execution plan for the query, in this case with a Warning that the cost of this query could be substantially reduced, if the optimizer had a “missing index”:

Finally, click on that and you’ll see the suggested index:

Note that there may be more than one missing index, and this will only list the first one. Check the MissingIndexes property of the SELECT operator to see if there are any others.
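If you want to see every cached plan that carries such a warning, rather than just the one you happen to be looking at, a sketch along the following lines is one way to do it (it trawls the whole plan cache, which can be expensive on a busy server, hence the TOP clause):

```sql
-- A sketch: cached plans whose XML contains a MissingIndexes element, ordered by CPU time.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time,
       qp.query_plan        -- open this plan to inspect all of its MissingIndex elements
  FROM sys.dm_exec_query_stats AS qs
       CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
 WHERE qp.query_plan.exist('//MissingIndexes') = 1
 ORDER BY qs.total_worker_time DESC;
```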

However, if you are misled by the index warnings that you get from query plans, and take every piece of the machine’s machine-like advice, you could easily end up with a rich undergrowth of infrequently-used and overlapping indexes that will bring the performance of your write operations (inserts, updates and so on) to its knees.

Each missing index warning is made purely from the perspective of that single query. They take no account of the wider context of the database application and the patterns of usage; the index may be non-selective and so completely useless for any other queries on that table. They also consider only the estimated cost saving for that query, making no allowance for the impact on the performance of write operations against the table, where every data modification means all of its indexes must be maintained.

In short, you need more information.

Drilling into missing index details

Index strategies aren’t always obvious. Before you go ahead and create any new index, you need to be certain of its net benefit for the workload as a whole, and also sure that you could not satisfy the “missing index” request with a simple modification to an existing index. If you fail to do this, you’ll inevitably, over time, end up with many indexes that are almost the same, but with subtly different index key column definitions and column orders, or included column definitions and orders.

We can see the current warnings of missing indexes, and the indexes that are suggested, by querying the missing-index DMVs.
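A sketch of such a query, joining sys.dm_db_missing_index_details to the group and group-stats DMVs (the ‘RoughBenefit’ calculation is just one common way of ranking the suggestions), might look like this:

```sql
-- A sketch: missing-index suggestions recorded since the last service restart,
-- with a rough estimate of how much benefit each one might have delivered.
SELECT Db_Name(mid.database_id) AS TheDatabase,
       mid.statement AS TheTable,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks + migs.user_scans AS TimesItCouldHaveBeenUsed,
       Cast(migs.avg_total_user_cost * migs.avg_user_impact
            * (migs.user_seeks + migs.user_scans) / 100 AS numeric(18, 2)) AS RoughBenefit
  FROM sys.dm_db_missing_index_details AS mid
       JOIN sys.dm_db_missing_index_groups AS mig
         ON mig.index_handle = mid.index_handle
       JOIN sys.dm_db_missing_index_group_stats AS migs
         ON migs.group_handle = mig.index_group_handle
 ORDER BY RoughBenefit DESC;
```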

Again though, care is needed, and you’ll see that indexes are often suggested that would hardly ever be used, and which just wouldn’t make much of a difference.

Every index should be tested, to confirm that it is useful, before deploying it to production. As with checking execution plans for problems like implicit conversions, it’s best to verify any new indexing strategy during development. You can run the database through a range of integration tests, check the output of the above query along with index usage statistics (see “Finding Unused Indexes” later in the article), adapt the indexes as required and test the overall performance impact by rerunning the workload. The goal is to have as few indexes as possible that, between them, satisfy as many of the most important queries as possible.

Missing indexes on Foreign Keys

While on the topic of missing indexes, we should give a special mention to foreign keys. A common cause of poor performance of queries is that foreign keys, unlike primary keys, aren’t automatically indexed in SQL Server. Although it is sometimes better to omit this index for a rarely-used foreign key, it is a good general practice to create an index on all foreign key columns that are often used in joins, to enhance the performance of the joins between the primary and foreign keys, and also reduce the cost of enforcing the relationship between the related tables. It is easy to experiment with adding an index and checking the performance of queries that join on that foreign key. This query will flush out all the foreign keys that aren’t supported by an index.
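A sketch of such a query, simplified (as an assumption) to check each foreign key column individually rather than handling multi-column keys as a whole, might look like this:

```sql
-- A sketch: foreign keys on the referencing table where the referencing column
-- does not lead the key of any index on that table.
SELECT Object_Schema_Name(fk.parent_object_id) + '.'
         + Object_Name(fk.parent_object_id) AS TheTable,
       fk.name AS TheForeignKey,
       c.name AS TheColumn
  FROM sys.foreign_keys AS fk
       JOIN sys.foreign_key_columns AS fkc
         ON fkc.constraint_object_id = fk.object_id
       JOIN sys.columns AS c
         ON c.object_id = fkc.parent_object_id
        AND c.column_id = fkc.parent_column_id
 WHERE NOT EXISTS
         (SELECT *
            FROM sys.index_columns AS ic
           WHERE ic.object_id = fkc.parent_object_id
             AND ic.column_id = fkc.parent_column_id
             AND ic.key_ordinal = 1          -- the column leads an index key
             AND ic.is_included_column = 0)
 ORDER BY TheTable, TheForeignKey;
```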

Finding Unused Indexes

It is a waste of resources to keep an index around if it doesn’t get used or gets used very infrequently. Of course, if an index gets used only once every quarter for a financial reconciliation process, you need it to be available, but it’s probably best to create it at the start of the run of the process and delete it at the end.

Unused indexes accumulate in the database over time, often precisely because they were created for “one specific query” and are useless in general, or because changes in data distribution have caused the optimizer to change its strategy. These indexes still incur an overhead on every write operation: entries are written to them as data is inserted into the clustered index, table or view, and index entries must be modified or deleted as that underlying data is altered or deleted. This results in index fragmentation, and you often end up using even more resources to perform index rebuilds or reorganizations on indexes that are rarely used!

To decide if an index is likely to be useful, it is worth predicting the level of use in a live system, understanding the relative importance of quick reads versus quick writes in the typical usage of the table, and looking at factors such as the width and likely length of the table, which will give you a good idea of how much space a new index will take up.

Here is a query to list the twenty most useless indexes in your database since the service was last started. Again, you would use this in development after running a complete set of integration tests on the database. It will also help production maintenance work because it will tell you which indexes are lightly or never used in the production environment, which could be very different from the case in development.
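A sketch of such a query, based on sys.dm_db_index_usage_stats and restricted, as an assumption, to nonclustered indexes that don’t enforce a primary key or unique constraint, might look like this:

```sql
-- A sketch: the twenty least-read nonclustered indexes in the current database since
-- the last service restart, alongside the write activity they still incur.
SELECT TOP (20)
       Object_Schema_Name(i.object_id) + '.' + Object_Name(i.object_id) AS TheTable,
       i.name AS TheIndex,
       Coalesce(us.user_seeks, 0) + Coalesce(us.user_scans, 0)
         + Coalesce(us.user_lookups, 0) AS Reads,
       Coalesce(us.user_updates, 0) AS Writes
  FROM sys.indexes AS i
       LEFT OUTER JOIN sys.dm_db_index_usage_stats AS us
         ON us.object_id = i.object_id
        AND us.index_id = i.index_id
        AND us.database_id = Db_Id()
 WHERE i.index_id > 1                        -- nonclustered indexes only
   AND i.is_primary_key = 0
   AND i.is_unique_constraint = 0
   AND ObjectProperty(i.object_id, 'IsUserTable') = 1
 ORDER BY Reads ASC, Writes DESC;
```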

You can get a more detailed view of the usage of indexes from sys.dm_db_index_operational_stats. It is also valuable for tracking the length of time that users must wait to read or write to a table, index, or partition, and for identifying the tables or indexes that are encountering significant I/O activity or hot spots.
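For example, a sketch of a query listing, per index in the current database, the time spent waiting on row locks, page locks and page IO latches might look like this:

```sql
-- A sketch: lock and latch waits per index in the current database, as one way
-- of spotting the tables and indexes that are hot spots for contention or IO.
SELECT Object_Schema_Name(os.object_id) + '.' + Object_Name(os.object_id) AS TheTable,
       Coalesce(i.name, '(heap)') AS TheIndex,
       os.row_lock_wait_in_ms,
       os.page_lock_wait_in_ms,
       os.page_io_latch_wait_in_ms
  FROM sys.dm_db_index_operational_stats(Db_Id(), NULL, NULL, NULL) AS os
       JOIN sys.indexes AS i
         ON i.object_id = os.object_id
        AND i.index_id = os.index_id
 WHERE ObjectProperty(os.object_id, 'IsUserTable') = 1
 ORDER BY os.page_io_latch_wait_in_ms DESC;
```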

Duplicate indexes

If you have more than one statistic for the same column or set of columns, it means that they are used in more than one index. You’ll find these just by comparing the list of columns that each statistic is associated with. This technique eliminates the XML columns conveniently.
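A sketch of that comparison, using the old FOR XML PATH('') trick to build each statistic’s column list (so that it works on versions before STRING_AGG() was available), might look like this:

```sql
-- A sketch: column lists that appear under more than one statistics object on the same table.
SELECT Object_Schema_Name(s.object_id) + '.' + Object_Name(s.object_id) AS TheTable,
       statColumns.columnList AS TheColumns,
       Count(*) AS HowManyStatistics
  FROM sys.stats AS s
       CROSS APPLY
         (SELECT Stuff(
                   (SELECT ', ' + c.name
                      FROM sys.stats_columns AS sc
                           JOIN sys.columns AS c
                             ON c.object_id = sc.object_id
                            AND c.column_id = sc.column_id
                     WHERE sc.object_id = s.object_id
                       AND sc.stats_id = s.stats_id
                     ORDER BY sc.stats_column_id
                       FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
                   1, 2, '')
         ) AS statColumns(columnList)
 WHERE ObjectProperty(s.object_id, 'IsUserTable') = 1
 GROUP BY s.object_id, statColumns.columnList
 HAVING Count(*) > 1;
```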

An alternative approach is just to see if two indexes share a column list, as shown in the following query. Of course, they could have the same columns in a different key order, which would be more legitimate. To include those with a different order, change the:

…into…

In this case, I’m using STRING_AGG() because it is more straightforward to see what is going on, but you can easily substitute the XML trick as in the previous query.
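A sketch of this version, assuming SQL Server 2017 or later for STRING_AGG(), might look like this:

```sql
-- A sketch: tables where two or more indexes share the same key columns, in the same order.
SELECT Object_Schema_Name(i.object_id) + '.' + Object_Name(i.object_id) AS TheTable,
       keyColumns.keyColumnList AS TheKeyColumns,
       String_Agg(i.name, ', ') AS TheIndexes
  FROM sys.indexes AS i
       CROSS APPLY
         (SELECT String_Agg(c.name, ', ')
                   WITHIN GROUP (ORDER BY ic.key_ordinal) AS keyColumnList
            FROM sys.index_columns AS ic
                 JOIN sys.columns AS c
                   ON c.object_id = ic.object_id
                  AND c.column_id = ic.column_id
           WHERE ic.object_id = i.object_id
             AND ic.index_id = i.index_id
             AND ic.is_included_column = 0
         ) AS keyColumns
 WHERE i.index_id > 0
   AND ObjectProperty(i.object_id, 'IsUserTable') = 1
 GROUP BY i.object_id, keyColumns.keyColumnList
 HAVING Count(*) > 1;
-- Ordering the inner aggregation by c.name rather than ic.key_ordinal would also catch
-- indexes that share the same columns but in a different key order.
```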

Conclusions

Warnings of missing indexes mean different things to different teams. I’ve given in this article a perspective that leans slightly towards the administrator rather than the developer. I’ve assumed that your role is to eliminate performance problems due to missing indexes, by checking index warnings and making a judgement as to how to react, based on your wide perspective of the application. For the developer, a persistent eruption of index problems usually signals a design issue: faulty normalization, usually under-normalization, and a poorly designed primary key and/or clustered index.

It is frustrating for a DBA who can see the same root cause but is even less able to do anything about it, and is instead forced to make the best of a poorly-optimised database design by refining the index strategy.
