Statistics (Part 4) – Multi-column cardinality

In the previous posts the predicates for the example queries have all had only one element, i.e. only one constraint against the table in the WHERE clause. While troubleshooting cardinality estimate problems it is helpful to understand what happens when there are two constraints against two different columns in the same table.

Update 2/22/2014:
Before reading this you might want to check out Paul White’s article on this topic where he gives a much better explanation than I did. He also includes the case of OR selectivity which didn’t even occur to me to investigate.

This is the sample query that will be used to show how cardinality estimates happen when there are two constraints.

SELECT * FROM dbo.AWStatTest where AddressLine1 = N'100, rue des Rosiers' and PostalCode = N'2010'

In Post 2 of this series I created a table called AWStatTest and filled it with data for demo purposes. So you don’t have to refer back to that post, here is the DDL for the table and the index I created.

CREATE TABLE AWStatTest
(
   ID int identity(1,1),
   AddressLine1 nvarchar(60),
   AddressLine2 nvarchar(60),
   City nvarchar(30),
   PostalCode nvarchar(15)
)

CREATE INDEX IX_AWStatTest ON dbo.AWStatTest (AddressLine1,AddressLine2,City,PostalCode)

Creating the index also creates a statistics object. This is a picture of the statistics for IX_AWStatTest, which will be referenced throughout this post.
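If you want to generate the same output yourself instead of relying on the screenshot, the statistics object created alongside the index can be displayed with DBCC SHOW_STATISTICS (the table and index names are the ones from the DDL above):

DBCC SHOW_STATISTICS ('dbo.AWStatTest','IX_AWStatTest')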

The predicate in the sample query constrains the search by AddressLine1 and PostalCode. As shown in my post on density, the statistics object IX_AWStatTest alone cannot be used for cardinality estimates for queries constrained by both AddressLine1 and PostalCode.

There are at least two ways this can be estimated: 1) by using multiple single-column statistics, or 2) by using a multi-column statistic.

Using Multiple Stats
One of the features of SQL Server is the ability to create single-column statistics on the fly in order to help out with optimization (if the database is set to auto-create them, anyway…my database is set up to do so). When I ran the query above, SQL Server created statistics for the column PostalCode. The name of that statistics object is machine generated, _WA_Sys_00000005_5AEE82B9 (it will be named differently on your machine). You can see these stats with DBCC SHOW_STATISTICS.

DBCC SHOW_STATISTICS ('dbo.AWStatTest','_WA_Sys_00000005_5AEE82B9')


This statistics object has a density vector and histogram as well.
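Because the name is machine generated and will differ from database to database, it can be easier to look it up than to guess it. One way to list the auto-created statistics on the table, along with the columns they cover, is a query against the catalog views (this is just for finding the name; the optimizer does not need it):

SELECT s.name AS stats_name, c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.object_id = OBJECT_ID('dbo.AWStatTest')
  AND s.auto_created = 1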

SQL Server can use the histogram for AddressLine1 from the statistic IX_AWStatTest and the histogram for PostalCode from the statistic _WA_Sys_00000005_5AEE82B9 to make a cardinality guess. The histogram for PostalCode shows that our constraint value ‘2010’ is a RANGE_HI_KEY, so EQ_ROWS will be used to provide the cardinality estimate for this value. The number of rows in the table estimated to match postal code 2010 is 13714.8. Looking at the histogram for AddressLine1 shows that ‘100, rue des Rosiers’ is also a RANGE_HI_KEY, and EQ_ROWS would give us an estimate of 2020 rows if the predicate included a search for just this value.
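Both histograms referenced here can be pulled up side by side if you want to verify the EQ_ROWS values; DBCC SHOW_STATISTICS supports a WITH HISTOGRAM option that returns only the histogram result set (the auto-created name will again differ on your machine):

DBCC SHOW_STATISTICS ('dbo.AWStatTest','IX_AWStatTest') WITH HISTOGRAM             -- AddressLine1 histogram
DBCC SHOW_STATISTICS ('dbo.AWStatTest','_WA_Sys_00000005_5AEE82B9') WITH HISTOGRAM -- PostalCode histogram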

So how are these used together? A quick example that should be easy for anyone familiar with playing cards helps illustrate how two different histograms can be combined into a single cardinality estimate. There are thirteen cards in a suit and four suits in a standard deck of playing cards. What is the probability of drawing a face card that is a heart? In this case we are looking for cards that have two specific characteristics: it is a face card and it is a heart. There is a 13/52 chance of drawing a heart. There are 12 face cards (Jack, Queen, King) in a deck, so there is a 12/52 chance of drawing a face card.

The probability of drawing a face card that is also a heart is:
13/52 * 12/52 = .0576923.
If we were to treat this like a density and estimate the number of cards in the deck that would match our double criteria, we would multiply .0576923 by the number of cards (rows) in the deck: .0576923 * 52 = 3. This is exactly the number of cards that actually match the criteria. The arithmetic works out because the suit and the rank of a card are independent, and the cardinality estimates below rely on the same independence assumption about the columns.
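The same arithmetic can be checked in T-SQL, which makes it easier to compare with the estimates later (nothing here involves the optimizer; it is just the multiplication written out):

SELECT (13.0/52) * (12.0/52) AS CombinedProbability,      -- 0.0576923...
       (13.0/52) * (12.0/52) * 52 AS EstimatedCards       -- 3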

Let’s translate this example to our cardinality problem. We are looking for rows of data that have two specific characteristics: AddressLine1 and PostalCode equal to specific values. In this situation the probabilities of a match for AddressLine1 and PostalCode are 2020/4040008 and 13714.8/4040008, respectively.

Multiply these together and then by the number of rows in the table, and the cardinality estimate for the query is: (2020/4040008) * (13714.8/4040008) * 4040008 = 6.85739.

What about three columns?

SELECT * FROM dbo.AWStatTest where AddressLine1 = N'11, quai de l´ Iton' and PostalCode = N'94010' and City = N'Bellingham'

The cardinality estimates for each of these values individually are 9090, 42062.04, and 58405.71, respectively.

(9090/4040008) * (42062.04/4040008) * (58405.71/4040008) * 4040008 = 1.36819
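Both estimates can be reproduced by spelling the same arithmetic out in T-SQL (the row count of 4040008 and the per-column estimates are the values quoted above):

DECLARE @rows float = 4040008;
SELECT (2020 / @rows) * (13714.8 / @rows) * @rows AS TwoColumnEstimate,                           -- ~6.85739
       (9090 / @rows) * (42062.04 / @rows) * (58405.71 / @rows) * @rows AS ThreeColumnEstimate    -- ~1.36819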

The examples above all used values that were RANGE_HI_KEYS from the respective histograms. In my tests I have seen that estimates using values from the AVG_RANGE_ROWS column work exactly the same way.

Using multiple statistics objects isn’t the only way SQL Server can estimate cardinality on a table with multiple predicates.

Using a multi-column density
In Post 2 I talked about density vectors and how they can be used. They can also be used for multi-column estimates. To force SQL Server to use a multi-column density for cardinality estimation I am going to create a new table with the same data as the old one, turn off auto-creation of statistics, and create a statistics object for just PostalCode and AddressLine1. The primary reason I’m creating a new table is that I’ve done so much testing on my other one that I fear being unable to get pure estimates.

SELECT * INTO dbo.AWAL1PCTest from dbo.AWStatTest
CREATE STATISTICS PostalCodeAddressLine1 ON dbo.AWAL1PCTest (PostalCode,AddressLine1)
ALTER DATABASE TestDB SET AUTO_CREATE_STATISTICS OFF

I should end up with a statistics object for PostalCode and AddressLine1 that looks like this:
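For reference, the density vector in that screenshot can be displayed directly; DBCC SHOW_STATISTICS accepts a WITH DENSITY_VECTOR option that returns only that result set:

DBCC SHOW_STATISTICS ('dbo.AWAL1PCTest','PostalCodeAddressLine1') WITH DENSITY_VECTOR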

The density value for the combination of AddressLine1 and PostalCode is .000249066.
The estimate should be .000249066 * 4040008 = 1006.23.
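Again, the arithmetic is easy to double-check:

SELECT 0.000249066 * 4040008 AS EstimatedRows  -- ~1006.23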

Running this query against the new table:

SELECT * FROM dbo.AWAL1PCTest where AddressLine1 = N'100, rue des Rosiers' and PostalCode = N'2010'

The cardinality estimate is:

In this post we have seen that cardinality for predicates that contain matching conditions on two different columns of the same table can be estimated in at least two ways:
1. By using the histogram values from multiple statistics objects
2. By using a multi-column density value

The point of the previous three posts is not to outline all of the ways the density vector and histogram parts of statistics objects are used to make cardinality estimates. That information is not publicly available (that I can find, anyway). Most of what I talk about in the previous three posts comes from performing a lot of tests and comparing results to calculations. The purpose is to highlight how the various components can be involved so that one can have more success in troubleshooting cardinality estimate issues. I’ve often been surprised at some of the estimates that come up, and they don’t always align with the “rules” I’ve laid out above, though they usually do. That being said, tracing cardinality estimates back to the specific statistics used has been helpful in identifying skewed data distributions and possible places for optimization.

The next topic is an area that is very opaque and, I suspect, is one of the causes of some of the odd cardinality estimates I’ve seen: the String Index.
