Read Committed – not getting what you expect

23 Jul

In a previous article I demonstrated how use of READ UNCOMMITTED (or the NOLOCK hint) can result in erroneous data, due to page splits. Examples abound of the more common issue with this isolation level, which occurs when reading a row that is being updated in another transaction and then rolled back. These so-called ‘dirty reads’ cannot happen with the default isolation level of READ COMMITTED.

However, it is still possible to alter data within a table that is being read with READ COMMITTED, so that the results of that transaction do not reflect the actual state of the data when the transaction has completed.
For this demonstration I’ll create a table that contains an ID column and an integer column, with a clustered index on the ‘ID’ column.

CREATE TABLE [dbo].[Test2](
	[ID]		INT NOT NULL,
	[Test_Value]	INT NOT NULL,
 CONSTRAINT [PK_Test2] PRIMARY KEY CLUSTERED ([ID] ASC)
);

For reasons that will become apparent I’ll start the ID from 10, incremented by 1 for each row and store 1 in ‘Test_Value’ against each row.
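The population step isn’t reproduced here; a sketch that matches the description (the cross-join row generator is my own choice, any numbers table will do):

```sql
-- 5,000,000 rows: IDs from 10 upwards, Test_Value = 1 in every row
INSERT INTO dbo.Test2 (ID, Test_Value)
SELECT TOP (5000000)
       9 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),  -- IDs start at 10
       1
FROM sys.all_columns a CROSS JOIN sys.all_columns b CROSS JOIN sys.all_columns c;
```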
With nothing else executing against that table there are 5,000,000 rows and the sum of ‘Test_Value’ is 5,000,000:


The two queries within this transaction are looking at the same data.
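The transaction itself was shown as an image; from the description it is along these lines (the exact column lists are assumptions):

```sql
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;

-- First query: read every row
SELECT ID, Test_Value
FROM dbo.Test2;

-- Second query: the totals, for comparison
SELECT COUNT(*) AS Total_Rows, SUM(Test_Value) AS Total_Value
FROM dbo.Test2;

COMMIT TRANSACTION;
```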

Now I will repeat this query, and in a separate tab I will update the row with ID of 10 to move it to the end of the index, as soon as the first query has started:

UPDATE dbo.Test2
SET id = id + 6000000
WHERE id = 10;

The results for this show an extra row in the first SELECT:


And if I scroll to the end of that result, the row that was shown with an ID of 10 is also shown with an ID of 6000010.


So, the first SELECT query has looked at the same row, before and after it was moved. However, the second SELECT query, because it read the data after all of the updating, has returned the correct information.

Resetting the data, it is also possible to ‘lose’ a row, when a row at the end of the index is moved to the start of the index, during the first SELECT.

This time, the second query is:

UPDATE dbo.Test2
SET id = 9
WHERE id = 5000009;


This time, the row with an ID of 9 is not shown in the results, because the update moved it from the end of the index to the beginning of the index after the SELECT had read the beginning of the index but before it had reached the end of the index.


The sum shows the correct results because the update had completed before the second SELECT started.

A Smaller Demonstration

In the examples shown there were five million rows, but what about smaller tables?

Consider a far smaller table, which is accessed by three processes. One is simply reading the whole table (with READ COMMITTED Isolation Level). At the same time a second process has locked a row, because it is updating it. In addition, a third process is updating data and in the process is moving the row to another position within the index.

Firstly – the data before it is interfered with shows one thousand rows that SUM to 500500:
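The reset itself isn’t shown; a sketch that matches the stated totals (1,000 rows summing to 500500 implies Test_Value equals ID, which is my assumption):

```sql
TRUNCATE TABLE dbo.Test2;

-- 1,000 rows, IDs 1 to 1000, Test_Value = ID, so SUM(Test_Value) = 500500
INSERT INTO dbo.Test2 (ID, Test_Value)
SELECT TOP (1000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;
```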


In a separate tab run the following command:


UPDATE dbo.Test2
SET Test_Value = 1000
WHERE id = 50;

Notice that there is no ‘COMMIT’ or ‘ROLLBACK’. This will lock row 50.

Now run the initial command, with the two SELECT statements. This will be blocked by the update, because Row 50 is locked. READ UNCOMMITTED would use this (as yet uncommitted) Test_Value and continue. READ COMMITTED will not, and is therefore blocked.

In a third window:


UPDATE dbo.Test2
SET ID = 1001
WHERE ID = 10;


This moved Row 10, which had already been read, and placed it at the end of the index.

Now ROLLBACK the update to row 50, this will allow the blocked process to continue.
The SELECT command has now completed, but the data for row 10 now also appears as row 1001 – the same data shown twice:



Of course, if the SELECT was also running a SUM or suchlike, the results would be incorrect in each of these examples.

So what is happening?

READ COMMITTED obtains shared locks on the rows or pages as it requires them, but releases these locks as soon as it has read the locked data.

Therefore, it is possible to update, delete or insert rows where the query has already processed that area of the table, and to do the same in the part of the table that the query has yet to process. What is guaranteed with READ COMMITTED is that you can’t have dirty reads, but that’s pretty much it.

The fact that I placed the two queries within a Transaction also made no difference. The second query always saw the ‘true’ picture of the data because the manipulations had completed by that point.

I’ve frequently seen statements where people have claimed that to get a ‘true’ picture of the data that the default READ COMMITTED isolation level should be used. This clearly is a misunderstanding of how this process works, and one that I had to do a little bit of work to comprehend too.

Getting More Data Than You Bargained For With NOLOCK

29 May

Generally, when somebody posts a query using the NOLOCK hint, there are a variety of warnings about dirty reads, phantom reads and double-reading (due to page splits).
The dirty read example is common and tends to be the main example used to demonstrate what to watch out for with NOLOCK, but I’ve never seen an example of double-reading. So, I thought I’d try and see how much of an issue such a thing can be. It certainly sounds plausible, but is it a ‘one in a million’ chance, or is it a Terry Pratchett ‘one in a million’ chance – where it is pretty much guaranteed to happen?

So, create a database for this testing and then create a table within:

CREATE TABLE [dbo].[Test1](
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[Test_Description] [nvarchar](max) NULL,
 CONSTRAINT [PK_Test1] PRIMARY KEY CLUSTERED ([ID] ASC) WITH (FILLFACTOR = 100)
);


A very simple table – an Identity column and a text column. The important part is that I’ve specified a Fill Factor of 100 for the Clustered Index, meaning there is no space for expansion of the data in the current location. Therefore, if I increase the contents of the ‘Test_Description’ column it should force a page split, because the data won’t fit back where it was originally stored.

I then generate 500,000 rows with a simple integer id and the entry ‘TEST’ in the ‘Test_Description’ column.
In one window of SSMS I now set up a query to update this column on every row. For text-based test data I tend to use a text file of ‘War and Peace’, from Project Gutenberg. For this test I just use the first paragraph.

In another window I set up a query to return every row from the same table, specifying ‘READ UNCOMMITTED’ isolation level.
Once prepared I run the two queries together:
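Neither query survives as text; they were along these lines (the replacement string is illustrative – the article used the first paragraph of ‘War and Peace’):

```sql
-- Window 1: grow every row so it no longer fits in its page (fill factor 100)
UPDATE dbo.Test1
SET Test_Description = REPLICATE(N'Some considerably longer test text. ', 20);

-- Window 2: read everything at the NOLOCK-equivalent isolation level
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT ID, Test_Description
FROM dbo.Test1;
```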



The update has returned a count of 500,000, which matches the number of rows I placed into that table.
However, the select has returned a count of 586,078. Considerably higher than the number of rows within that table. In addition, it can be seen within the results that I’m retrieving a mixture of updated and non-updated rows.

Exporting the results to a csv file and then into Excel, I can sort the results and see a large number of duplicates (based on the ID column):


The SELECT found the rows before they were updated, and then again after the page split had moved the data, retrieving those rows twice.
If I leave the select query to run once the update has completed, I get 500,000 rows – all updated.

This was a relatively easy test to set up, and has been consistent in the fact that it has duplicated a large number of rows every time. Of course, specifying a fill factor of 100 would pretty much guarantee a page split somewhere, but in the day-to-day operations of most databases, pages will become full and splits will happen eventually. So the scale of the issue might not be as large as the example I’ve created, but the underlying issue may still be present.

Obviously, if this was something more important than a short piece of text (financial data, for example) then the results derived from this query would be wildly inaccurate.

Compress All DB Files on a Server Instance

30 Apr

Or rather, on a Test or Development server instance. If you need to do this to a Live server, then you have larger issues.

Most sites have servers for Developers and DBAs to work their dark magic, as well as various test servers for the next level of testing. These servers do, of course, have databases that nobody admits to creating and other more recognisable databases that have had huge amounts of test data inserted, updated, removed and then generally left, like a bloated whale carcass – taking far more space than they now need.

There are several options in these cases:

1. Remove the databases that are no longer required.
This isn’t a quick fix, when a server has run out of space and all development has come to an abrupt halt. The database might have a use that only a few people are aware of, and they’re off on holiday, in meetings or just ignoring my stroppy emails. The best solution in such cases is to take the database offline and wait a few days, so not an immediate fix for space shortage.

2. Move database files.
Also not necessarily an option. Some setups are in specific locations for a reason and there might be no viable storage anywhere else for that server.

3. Shrink the databases.
The most common option for these servers, but some can have a large number of files, so using SSMS for this can be rather tedious.

4. Ask for more Storage.

Yeah, right.

These servers aren’t treated well, but that is their purpose. It saves the QA and Live servers from a lot of grief later in the development cycle.

So, I had a look around for something that I could run against a server instance and would compress all of the databases for me. There are a few bits and pieces around, but nothing that quite did what I was after.
What I wanted was something that would give me some choice over what I compressed, because sometimes it is simply a waste of time to compress a very large file for a little gain.

So, I have written something, stored it as a plain text file and it can be downloaded from here.

It doesn’t back up databases beforehand, nor does it check the health of the database (before or after). I was after something that performed the basics and saved me a lot of time.

And if you think this routine has solved all of your problems on your mission-critical systems, you really need to read Paul Randal’s ‘Why You Should Not Shrink Your Data Files’. The inside of your database will look like the car under those tank-tracks.

NOCHECK – the uses and impact

31 Mar

Having been bitten by the delights of a Foreign Key (FK) with NOCHECK I thought it time to do a little digging.

I’ve always been aware of the perils of this option but have encountered a table with this feature and I was caught out by it.
I’m also aware of the impact it can have on the Optimiser but I’ve never looked into it, so I’m taking this opportunity to look at the impact there too.

What is NOCHECK?

Basically, it allows you to add a constraint onto a table without validating the existing contents of the appropriate column. So, for example, I can create a FK constraint for a column against a reference table even though not all of the values currently held will match against that reference table.

I’m going to create a couple of tables:

CREATE TABLE dbo.Locations( 
	ID		VARCHAR(10) NOT NULL, 
	[Description]	VARCHAR(20), 
 CONSTRAINT PK_Locations PRIMARY KEY CLUSTERED (
	[ID] ASC 
));

CREATE TABLE dbo.StockItems( 
	ID		INT IDENTITY(1,1) NOT NULL, 
	[Description]	VARCHAR(50) NOT NULL, 
	Location	VARCHAR(10) NOT NULL, 
 CONSTRAINT PK_StockItems PRIMARY KEY CLUSTERED (
	[ID] ASC 
));

And then load them with some simple data:

INSERT INTO [dbo].Locations (ID, [Description]) 
VALUES('W1', 'Warehouse 1'), 
      ('W2', 'Warehouse 2'), 
	  ('W3', 'Warehouse 3'), 
	  ('W4', 'Warehouse 4'), 
	  ('OS1', 'Off-Site Storage 1'); 

INSERT INTO dbo.StockItems( [Description], Location ) 
VALUES  ( 'Blue Widgets', 'W1'), 
		( 'Green Widgets', 'W3'), 
		( 'Red Widgets', 'OS1'), 
		( 'Round Widgets', 'X2'), 
		( 'Square Widgets', 'W2'); 

If I wanted to create a FK between dbo.StockItems ‘Location’ and dbo.Locations ‘ID’ you would expect a problem, because the value ‘X2’ within the StockItems table doesn’t exist in the Locations table:


And you’d be right, as long as I didn’t specify ‘NOCHECK’.
So, with ‘NOCHECK’ instead:
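The statement from the missing screenshot, with an assumed constraint name, would be:

```sql
-- WITH CHECK (the default) fails here because 'X2' has no match in dbo.Locations;
-- WITH NOCHECK skips validation of the existing rows
ALTER TABLE dbo.StockItems WITH NOCHECK
ADD CONSTRAINT FK_StockItems_Locations
    FOREIGN KEY (Location) REFERENCES dbo.Locations (ID);
```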


And the Foreign Key has been created.
Although the table ‘StockItems’ still contains an ‘invalid’ entry it is no longer possible to add other invalid values:


Because the constraint now checks that column against the reference table.

Reloading Data

Occasionally it may be necessary to remove the data from a table and re-insert it (altering a table but keeping fragmentation to a minimum is one example) and this is where the NOCHECK can act against you:
Let’s move all of the data out of ‘StockItems’ into a temporary table and truncate the DB table:

SELECT *
INTO ##temp
FROM dbo.StockItems;

SELECT * FROM ##temp;

TRUNCATE TABLE dbo.StockItems;

Now try putting the data back with:


SET IDENTITY_INSERT dbo.StockItems ON;	--ID is an identity column, so allow explicit values

INSERT INTO [dbo].[StockItems](ID, [Description],[Location])
	SELECT ID, [Description], Location
	FROM ##temp;

SET IDENTITY_INSERT dbo.StockItems OFF;



Data that was in that table now cannot be re-inserted, unless I remove the FK constraint, run the insert and then add the FK back.

Validating the Data

From version 2008 onwards there is a DBCC command which will provide information about the integrity of a constraint:
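The command in the missing screenshot will have been DBCC CHECKCONSTRAINTS; a minimal sketch:

```sql
-- List rows that break the constraints on dbo.StockItems;
-- the output includes a WHERE clause identifying each offending value.
DBCC CHECKCONSTRAINTS ('dbo.StockItems');
```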


This shows what values are currently breaking the constraint and will need dealing with before you can change the constraint to ‘CHECK’.

NOCHECK and the Optimiser

Another, ‘quieter’ issue with using NOCHECK is that the Optimiser ignores Foreign Keys that have that option specified.

Looking at sys.foreign_keys, there is a property ‘is_not_trusted’, and for a FK constraint with NOCHECK this is set to 1. It implies that the data in that column cannot be trusted to be valid, when comparing it against the column that it references.


“The query optimizer does not consider constraints that are defined WITH NOCHECK. Such constraints are ignored until they are re-enabled by using ALTER TABLE table WITH CHECK CHECK CONSTRAINT ALL.”

Removing the invalid data won’t fix this, but removing the invalid data and changing the FK constraint will.

DELETE dbo.stockitems
WHERE [location] = 'X2';
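The second step, re-enabling validation, was in the lost screenshot; using the form quoted from BOL above:

```sql
-- Re-check the existing rows and mark the constraints as trusted again
ALTER TABLE dbo.StockItems WITH CHECK CHECK CONSTRAINT ALL;
```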



Comparing the Query Plan and Statistics with the ‘before and after’ for ‘is_not_trusted’:
When ‘is_not_trusted’ = 1:

–(4 row(s) affected)
–Table ‘Locations’. Scan count 0, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
–Table ‘StockItems’. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


When ‘is_not_trusted’ = 0:

–(4 row(s) affected)
–Table ‘StockItems’. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


So, where the Optimiser can trust a FK constraint it can make improvements to the efficiency of the plan.

NOCHECK does have its place, but the impact to performance and behaviour needs to be understood before using it.

Automatic Data Purging With Partition Switching

16 Mar

In a previous article I showed a basic example of Partition Switching, using animal names. While not a realistic example for everyday use it did get the principles across. However, certainly in my experience, partitioning is normally dependent upon a number – usually a date.

In this article I’ll create a partition process that is used to keep data for three days (based upon a date within the row) and then remove the data automatically, via a job.

Firstly I require a database for this demonstration.
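The database creation was shown as an image; a minimal equivalent (the name comes from the USE statement later in the article):

```sql
CREATE DATABASE PartitionTest;
GO
USE PartitionTest;
GO
```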


Now I need a Partition Function. Because there can only ever be 3 possible values with this calculation, I can be a little lazy and just specify two of the values. If I just write the function for values of 1 and 2, then 3 will always go into the third partition, normally used for ‘invalid’ value ranges. I can’t have any invalid values (provided it has been coded and tested properly), so use the third partition for the ‘normal’ work instead of out-of-range values.
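The function was shown as an image; reconstructed from the description (the name pf_Retention is taken from the stored procedure later in the article):

```sql
-- Two boundary values give three partitions: values of 1, values of 2,
-- and everything else - which, with this calculation, can only ever be 3
CREATE PARTITION FUNCTION pf_Retention (TINYINT)
AS RANGE LEFT FOR VALUES (1, 2);
```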


The associated Partition Scheme will have three partitions. Values of 1 will go to the first in the list, values of 2 will go to the second in the list and all others will go to the third. There are only ever going to be three values for this example, so I won’t worry about a fourth for invalid values.
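The scheme was also an image; a sketch that maps all three partitions to PRIMARY (the filegroup choice is my assumption):

```sql
CREATE PARTITION SCHEME ps_Retention
AS PARTITION pf_Retention ALL TO ([PRIMARY]);
```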


Having created these I can now create the table that will use this partition Scheme:

CREATE TABLE UserAccess(
    UserAccessID		INT IDENTITY(1,1),
    UserID				INT,
    LoginDateTime       DATETIME,
    LogoutDateTime		DATETIME,
    PartitionNumber		AS (CONVERT([tinyint],abs(datediff(day,CONVERT([date],'20000101',(112)),LogoutDateTime)%(3)+(1)),(0))) PERSISTED NOT NULL,
 CONSTRAINT PK_UserAccess PRIMARY KEY CLUSTERED (
    UserAccessID		ASC,
    PartitionNumber     ASC
)) ON ps_Retention(PartitionNumber);


CREATE TABLE UserAccess_PartitionSwitch(
    UserAccessID		INT IDENTITY(1,1),
    UserID				INT,
    LoginDateTime       DATETIME,
    LogoutDateTime		DATETIME,
    PartitionNumber		AS (CONVERT([tinyint],abs(datediff(day,CONVERT([date],'20000101',(112)),LogoutDateTime)%(3)+(1)),(0))) PERSISTED NOT NULL,
 CONSTRAINT PK_UserAccess_PartitionSwitch PRIMARY KEY CLUSTERED (
    UserAccessID		ASC,
    PartitionNumber     ASC
)) ON ps_Retention(PartitionNumber);


Notice that the PartitionNumber column is a calculated, persisted column. This value is based upon the LogoutDatetime, so will be kept for three days from that Datetime value.
At first glance the calculation for the Partition Number can look confusing:


So let’s start with the basics and build up from there. Substitute the ‘LogoutDateTime’ with an ISO standard date, to help show what is happening.

Using a Base Date (in this case the 1st of January 2000) calculate the number of days between then and 20150310. Of course you need to set a Base Date that will suit your purposes. In this case, should I ever have a date in 1999 or earlier, then it will not work as expected.

The 112 enforces ISO standard date formatting:

SELECT datediff(day,CONVERT([date],'20000101',(112)),'20150310') --112 is ISO standard, to avoid any confusion

As I’m using a 3 day retention scheme, use a modulo of 3, which will result in 0, 1 or 2. The abs function is just there to ensure no negative values, an unlikely event for this example:

SELECT abs(datediff(day,CONVERT([date],'20000101',(112)),'20150310')%(3))

Because a Partition number of 0 is invalid add 1, so making the only possible values 1, 2 or 3:

SELECT abs(datediff(day,CONVERT([date],'20000101',(112)),'20150310')%(3)+(1))

As it is such a small number, convert it to a TinyInt:

SELECT (CONVERT([tinyint],abs(datediff(day,CONVERT([date],'20000101',(112)),'20150310')%(3)+(1)),(0)))

To make sure the partition function is working, add a few rows of test data:

USE [PartitionTest]

INSERT INTO [dbo].[UserAccess] (UserID, LoginDateTime, LogoutDateTime)
VALUES     (1
           ,'20150308 01:00:00'
           ,'20150308 02:00:00'),
           (2
           ,'20150309 01:00:00'
           ,'20150309 02:00:00'),
           (3
           ,'20150310 01:00:00'
           ,'20150310 02:00:00'),
           (4
           ,'20150311 01:00:00'
           ,'20150311 02:00:00'),
           (5
           ,'20150312 01:00:00'
           ,'20150312 02:00:00'),
           (6
           ,'20150313 01:00:00'
           ,'20150313 02:00:00'),
           (7
           ,'20150314 01:00:00'
           ,'20150314 02:00:00');

And then check the result – look at the table contents and at how the data is spread across the actual partitions:
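The check itself wasn’t preserved; the spread across the partitions can be confirmed with a query such as:

```sql
-- Rows per partition of the clustered index on dbo.UserAccess
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.UserAccess')
  AND index_id = 1;
```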
From these results it can be shown that the PartitionNumber column is never outside of the range 1, 2, 3 and the partitions themselves match the data spread indicated by the value of PartitionNumber.
Now to test that the Partition Switching will actually work – ensuring that the two tables have been set up correctly:
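The switch test was shown as a screenshot; it would have been along these lines (partition 2 chosen to match the description):

```sql
ALTER TABLE dbo.UserAccess
SWITCH PARTITION 2 TO dbo.UserAccess_PartitionSwitch PARTITION 2;
```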
The three rows in Partition 2 have been moved to the Partition_Switch table.
So, I know that the structure of the tables is correct and that we can switch partitions. The next step is to create a routine that will perform this action automatically.
Our table is designed to keep data for three days, and to do this successfully we need to make sure that the next partition we are going to need (tomorrow’s partition) is empty before that day starts. So just before midnight we need to remove the data that is about to become four days old.

A Stored Procedure is now required – ‘dbo.Purge_Table’, which ascertains the partition number that is now three days old and purges it (after some checks to make sure it should), because that will be the partition required tomorrow.

CREATE PROCEDURE [dbo].[Purge_Table] AS
BEGIN
    DECLARE @DaysToKeep				TINYINT,
        @PurgePartition			TINYINT,
        @TotalPartitions		TINYINT,
        @Today					DATETIME,
        @MaxDateInPartition		DATETIME,
        @DateToPurge			DATETIME,
        @SQLCode				VARCHAR(500);

    SET @TotalPartitions = 3;
    SET @DaysToKeep = 2;
    SET @Today = GETDATE();

    SET @DateToPurge = DATEADD (dd, (@DaysToKeep * -1), @Today);  --Includes today. We keep today, we keep yesterday but the day before yesterday needs to go

    SET @PurgePartition = (CONVERT(tinyint,ABS(datediff(day,CONVERT([date],'20000101',(112)),@DateToPurge)%(@TotalPartitions)+(1)),(0)));  --Same formula as used within the table, but
																																		   --using the @DateToPurge, to find what partition
																																		   --that date will be in.

    SET @PurgePartition = $Partition.pf_Retention (@PurgePartition);  --Map that value through the partition function to get the actual partition number

    ---- PURGE TABLE

    --Get the latest LogoutDateTime from the partition that is going to be purged.
    SELECT @MaxDateInPartition = MAX (LogoutDateTime)
     FROM dbo.UserAccess
     WHERE $Partition.pf_Retention (PartitionNumber) = @PurgePartition;

    IF EXISTS (SELECT 1 FROM dbo.UserAccess
                WHERE PartitionNumber = @PurgePartition) --Check there is data for this partition
        BEGIN
            IF DATEDIFF (dd, @MaxDateInPartition, @Today) >= @DaysToKeep OR @MaxDateInPartition IS NULL --Check the date within this partition is old enough to be removed
                BEGIN
                    SET @SQLCode = 'ALTER TABLE dbo.UserAccess SWITCH PARTITION ' +
                        CAST (@PurgePartition AS varchar) +
                        ' TO dbo.UserAccess_PartitionSwitch PARTITION ' +
                        CAST (@PurgePartition AS varchar);

                    EXEC (@SQLCode);

                    TRUNCATE TABLE dbo.UserAccess_PartitionSwitch;	--Having switched the data over, remove it

                    RAISERROR ('The dbo.UserAccess Table has been successfully Purged.', 10, 1);
                END
            ELSE
                RAISERROR ('OPERATION CANCELLED. Incorrect Data would be purged from dbo.UserAccess', 16, 1);
        END
    ELSE
        RAISERROR ('dbo.UserAccess - partition has no data', 10, 1);
END

Having cleared the table (and the PartitionSwitch table) I input three rows, with today, yesterday and the day before as the LogoutDateTime:
Now execute the Stored Procedure and the row in partition 2 should be switched to the PartitionSwitch table and removed (based on the execution date of 20150316).
Now all that is required is a scheduled job that executes this Stored Procedure at the end of each day, ensuring there is always an empty partition available for the next day and any data older than is required is no longer stored.

Mentoring – risk and reward

8 Feb

So, Paul Randal is offering to mentor six people for a couple of months, in response to the feedback from the Tribal Awards last year.

Now that’s a double-edged sword, isn’t it?

First of all, the opportunity to seek guidance and assistance from one of the most knowledgeable people in the SQL Server community, for free, is not to be sniffed at.

But just imagine the fun to be had in job interviews forever afterwards:

“So, you trained with Paul Randal?”, I know it’s mentoring, but people will see what they want to, “Well, we have a 200 server system with 20 Tb databases in 30 countries. Clustering is a bit flaky and the Merge Replication between our 7 Russian sites and Australia keeps failing – how would you fix that?”.  People can set higher expectations when they see certain names in connection with you.

And of course, there’s always the thought of failing to meet Paul’s expectations too. After all, this is his own time he’s sacrificing – squeezing it in between his day job and the inordinate amount of reading he appears to do.  Years from now whenever he’s asked – “Oh yeah, Steve – nice enough guy but I wouldn’t want him anywhere near my systems.” – the career version of the Black Spot. The rest of my career on NoSQL databases.

And what do I actually need anyway? With the all-seeing eye that is Google and a company-paid subscription to SafariBooksOnline, it certainly isn’t reading material. And although it pains me to say so, I do work with several people who really know their SQL Server, so the day-to-day stuff should be catered for. And no, I’m not giving names – they’d be insufferable.

And that’s the problem – day-to-day stuff.  Training beyond what I need for my current job. Methodology. The ‘what’ and the ‘how’.

I have so many draft articles in my blog – slowly composting because I’m not sure how to approach them or even if they’re of enough interest to myself really to actually complete.

I paid my own way to SQL Pass a couple of years ago and I’ve paid for this year too, as well as attending a SQLBits conference a few years ago. This is currently how I find out what others do, outside of my little bubble of experience.

I’ve changed direction several times within my long career in IT and it’s usually because somebody who is very experienced in their field and can teach it has shown me into their little world. Not necessarily taught me the syntax and the tool usage, but shown me how to think in a way that takes advantage of whatever language/tool they use. A couple of months of being shown that can make changes that last years.

So I’m willing to put my name forward for this, in my own style. Who knows, we may both learn something. Even if not selected it made me think about what I really want out of this new direction my career has taken. And if I get nothing more from this, I can at least thank the man who introduced me to the Wool trilogy via his twitter messages.

Generating Replication Scripts – Things To Watch Out For

5 Feb

Scripting a Publication is useful for both Disaster Recovery and copying a Publication to a test environment. If you use the Wizard each time then you have to make sure that you’re consistent with the options and choices made in Live and once again on the Test (or DR) server.  By creating the script you should get something that reproduces the Publication and Subscription in full.

However, generating the script doesn’t always give you an exact copy of what you are scripting from. Some defaults might be included that weren’t used originally and some settings may be specified that were simply not used with the original creation.

A couple of examples I can demonstrate using the Publication scripted in earlier articles; a more entertaining example using partition switching will have to be described.

Example 1 – There wasn’t a Publication Snapshot in the Original

But if you ask SSMS to generate a Create Script for the Publication it will include one:

To generate the scripts from an existing Publication you need to right-click on the Publication name within Replication/Local Publications branch (on the Publisher):


Specify where you want the script to go and it will be generated.

In this option I’ve chosen to have the script sent straight to a New Query Window.

Now bear in mind that this Publication does not use a Snapshot. I never specified it when I originally scripted this publication.

So what is this here for?
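The screenshot is lost, but what the generated script contains is a call to sp_addpublication_snapshot, something like this (the parameter values shown are illustrative, not from the original script):

```sql
EXEC sp_addpublication_snapshot @publication = N'Publication Name',
    @frequency_type = 1;  -- illustrative; the real script lists many scheduling parameters
```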


If I’m going to use this script as a true copy of the original then I need to remove this entry.

Example 2 – The ‘sync_type’ has been set incorrectly

From the Subscriber, generate the Create Script just as with the Publisher, to a New Query Window.

This time the comments within the script do have the good grace to warn you that a default has been used that you might not want:



In this case I need that to be set to ‘none’.
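The setting in question is the @sync_type parameter of sp_addsubscription in the generated script; the corrected call would look something like this (server and database names are placeholders):

```sql
EXEC sp_addsubscription @publication = N'Publication Name',
    @subscriber = N'SubscriberServer',   -- placeholder
    @destination_db = N'SubscriberDB',   -- placeholder
    @sync_type = N'none';                -- instead of the scripted default
```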

As an aside, in SQL Server 2000 this setting was also case-sensitive – ‘none’ wouldn’t work as expected, but ‘None’ would.

Example 3 – Partition Switching fails with Replication

I have no test example to hand with which to demonstrate this (at least, not one that I can show outside of my employer’s environment) but it is simple enough to describe, now that I know the cause and resolution.

Several databases in use at my current workplace specify different partition schemes between the Publisher and the Subscriber. This is a common practice, particularly where the Subscriber might be a ‘staging’ database used to ultimately transfer data elsewhere.  So the Publisher might keep data for a couple of weeks but the Subscriber only needs to store it for a day or two, because another system is being used to store/consume that data for other purposes (reporting, analysis or whatever).

So, in my innocence I script the Publication and Subscription via SSMS, make the alterations shown in the previous two examples and create the Publication on a test environment. All is good and Replication works fine. Data that I insert into the Publisher appears in the Subscriber and I have that warm, smug feeling of having created a Publication without incident. Something to be savoured.

However, part of creating this test environment also includes setting up the jobs that purge the data in both Publisher and Subscriber, with the use of Partition Switching (a basic example of Partition Switching is shown in an earlier article).

When the job runs against the Publisher that executes Partition Switching I get the following error:

“The table ‘<schema>.<tablename>’ belongs to a publication which does not allow switching of partitions [SQLSTATE 42000] (Error 21867)”.

Just for a laugh, if you want to see just how many errors Replication can produce, have a look at the full list (and to add to the fun, they aren’t logged).

After much digging around and asking others who have more Replication scars than myself, it transpires that some settings aren’t scripted out and also aren’t found by any of the GUI screens associated with Replication.

In the past I have right-clicked on the Publication name, selected ‘Properties’ and looked at ‘Subscription Options’, believing that comparing these between the original and the copy would be enough.  Ah, well.

There is a Replication command ‘sp_helppublication’ which shows several settings that are not visible elsewhere. At its most basic, running this command with just the Publication name will produce a row with all of the settings associated with that Publication:
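For example (the Publication name is a placeholder):

```sql
EXEC sp_helppublication @publication = N'Publication Name';
```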



With the particular Publication in mind I scrolled along to the far right, and the setting for ‘allow_partition_switch’ was set to 0. As BOL specifies for this parameter – “Specifies whether ALTER TABLE…SWITCH statements can be executed against the published database”.

Executing ‘sp_helppublication’ against the Live Publication shows this value as ‘1’ – so it is permitted in Live but wasn’t scripted anywhere by the automatic process through SSMS.

To change this to the value required requires the command ‘sp_changePublication’, executed against the Publisher DB in question:

EXEC sp_changepublication @publication=N'Publication Name', @property=N'allow_partition_switch', @value = 'true';

However, that isn’t the end of it. In executing this command it also sets ‘replicate_partition_switch’ to ‘1’, which I don’t want. The publisher and Subscriber in our environments generally have different Partition Switching schemes, so just because the Publisher decides to purge any data doesn’t mean that the Subscriber does too. So I now need to unset that parameter:

--it also sets 'replicate_partition_switch' to 'true' when the previous command executes and we want that as --'false'
EXEC sp_changepublication @publication=N'Publication Name', @property=N'replicate_partition_switch', @value = 'false';

Having jumped through these hoops I now find that Partition Switching works fine and my Publication in Test really is a copy of the Publication in Live.

