Compress All DB Files on a Server Instance

30 Apr

Or rather, on a Test or Development server instance. If you need to do this to a Live server, then you have larger issues.

Most sites have servers for Developers and DBAs to work their dark magic, as well as various test servers for the next level of testing. These servers do, of course, have databases that nobody admits to creating and other more recognisable databases that have had huge amounts of test data inserted, updated, removed and then generally left, like a bloated whale carcass – taking far more space than they now need.

There are several options in these cases:

1. Remove the databases that are no longer required.
This isn’t a quick fix when a server has run out of space and all development has come to an abrupt halt. The database might have a use that only a few people are aware of, and they’re off on holiday, in meetings or just ignoring my stroppy emails. The best solution in such cases is to take the database offline and wait a few days, so it’s not an immediate fix for a space shortage.

2. Move database files.
Also not necessarily an option. Some setups are in specific locations for a reason and there might be no viable storage anywhere else for that server.

3. Shrink the databases.
The most common option for these servers, but some can have a large number of files, so using SSMS for this can be rather tedious.

4. Ask for more Storage.

Yeah, right.

These servers aren’t treated well, but that is their purpose. It saves the QA and Live servers from a lot of grief later in the development cycle.

So, I had a look around for something that I could run against a server instance and would compress all of the databases for me. There are a few bits and pieces around, but nothing that quite did what I was after.
What I wanted was something that would give me some choice over what I compressed, because sometimes it is simply a waste of time to compress a very large file for a little gain.

So, I have written something, stored it as a plain text file and it can be downloaded from here.

It doesn’t back up databases beforehand, nor does it check the health of the database (before or after). I was after something that performed the basics and saved me a lot of time.
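The heart of such a routine can be sketched as below. This is a minimal outline of the idea, not the downloadable script itself – it walks every file of every online user database and shrinks it, with none of the selectivity described above:

```sql
-- Sketch only: shrink every file of every ONLINE user database.
-- The real script adds choices over which files are worth shrinking.
DECLARE @SQLCode NVARCHAR(500);

DECLARE FileList CURSOR FOR
    SELECT 'USE ' + QUOTENAME(d.name) +
           '; DBCC SHRINKFILE (' + QUOTENAME(mf.name) + ', TRUNCATEONLY);'
    FROM sys.master_files AS mf
    JOIN sys.databases AS d ON d.database_id = mf.database_id
    WHERE d.state_desc = 'ONLINE'
      AND d.database_id > 4;   -- skip the system databases

OPEN FileList;
FETCH NEXT FROM FileList INTO @SQLCode;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC (@SQLCode);
    FETCH NEXT FROM FileList INTO @SQLCode;
END
CLOSE FileList;
DEALLOCATE FileList;
```

TRUNCATEONLY just releases unused space at the end of each file, which is the least harmful variant of the operation.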

And if you think this routine has solved all of your problems on your mission-critical systems, you really need to read Paul Randal’s ‘Why You Should Not Shrink Your Data Files’. The inside of your database will look like the car under those tank-tracks.

NOCHECK – the uses and impact

31 Mar

Having been bitten by the delights of a Foreign Key (FK) with NOCHECK, I thought it time to do a little digging.

I’ve always been aware of the perils of this option but have encountered a table with this feature and I was caught out by it.
I’m also aware of the impact it can have on the Optimiser but I’ve never looked into it, so I’m taking this opportunity to look at the impact there too.

What is NOCHECK?

Basically, it allows you to add a constraint onto a table without validating the existing contents of the appropriate column. So, for example, I can create a FK constraint for a column against a reference table even though not all of the values currently held will match against that reference table.

I’m going to create a couple of tables:

CREATE TABLE dbo.Locations( 
	ID		VARCHAR(10) NOT NULL, 
	[Description]	VARCHAR(20), 
	CONSTRAINT PK_Locations PRIMARY KEY CLUSTERED( 
	[ID] ASC 
	) 
); 

CREATE TABLE dbo.StockItems( 
	ID		INT IDENTITY(1,1) NOT NULL, 
	[Description]	VARCHAR(50) NOT NULL, 
	Location	VARCHAR(10) NOT NULL, 
	CONSTRAINT PK_StockItems PRIMARY KEY CLUSTERED( 
	[ID] ASC 
	) 
); 


And then load them with some simple data:

INSERT INTO [dbo].Locations (ID, [Description]) 
VALUES('W1', 'Warehouse 1'), 
      ('W2', 'Warehouse 2'), 
      ('W3', 'Warehouse 3'), 
      ('W4', 'Warehouse 4'), 
      ('OS1', 'Off-Site Storage 1'); 

INSERT INTO dbo.StockItems( [Description], Location ) 
VALUES  ( 'Blue Widgets', 'W1'), 
        ( 'Green Widgets', 'W3'), 
        ( 'Red Widgets', 'OS1'), 
        ( 'Round Widgets', 'X2'), 
        ( 'Square Widgets', 'W2'); 

If I want to create a FK between dbo.StockItems.Location and dbo.Locations.ID you would expect a problem, because the value ‘X2’ within the StockItems table doesn’t exist in the Locations table:


And you’d be right, as long as I didn’t specify ‘NOCHECK’.
So, with ‘NOCHECK’ instead:
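The two attempts look something like this (the constraint name is assumed for illustration):

```sql
-- Fails: 'X2' in StockItems has no matching row in Locations.
ALTER TABLE dbo.StockItems WITH CHECK
	ADD CONSTRAINT FK_StockItems_Locations
	FOREIGN KEY (Location) REFERENCES dbo.Locations (ID);

-- Succeeds: the existing rows are not validated.
ALTER TABLE dbo.StockItems WITH NOCHECK
	ADD CONSTRAINT FK_StockItems_Locations
	FOREIGN KEY (Location) REFERENCES dbo.Locations (ID);
```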


And the Foreign Key has been created.
Although the table ‘StockItems’ still contains an ‘invalid’ entry, it is no longer possible to add other invalid values:
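For example (values assumed for illustration):

```sql
-- Fails with a FK violation, even though the existing 'X2' row
-- was allowed in by NOCHECK:
INSERT INTO dbo.StockItems ([Description], Location)
VALUES ('Oval Widgets', 'X9');
```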


Because the constraint now checks that column against the reference table.

Reloading Data

Occasionally it may be necessary to remove the data from a table and re-insert it (altering a table but keeping fragmentation to a minimum is one example) and this is where NOCHECK can act against you.
Let’s move all of the data out of ‘StockItems’ into a temporary table and truncate the DB table:

SELECT *
INTO ##temp
FROM dbo.StockItems;

SELECT * FROM ##temp;

TRUNCATE TABLE dbo.StockItems;

Now try putting the data back with:


SET IDENTITY_INSERT dbo.StockItems ON;	--ID is an IDENTITY column, so allow explicit values

INSERT INTO [dbo].[StockItems](ID, [Description], [Location])
	SELECT ID, [Description], Location
	FROM ##temp;

SET IDENTITY_INSERT dbo.StockItems OFF;



Data that was in that table now cannot be re-inserted, unless I remove the FK constraint, run the insert and then add the FK back.

Validating the Data

From version 2008 onwards there is a DBCC command which will provide information about the integrity of a constraint:
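That command is DBCC CHECKCONSTRAINTS, run against the table (or a single named constraint):

```sql
-- Lists any rows that violate constraints on the table;
-- WITH ALL_CONSTRAINTS includes disabled constraints as well.
DBCC CHECKCONSTRAINTS ('dbo.StockItems') WITH ALL_CONSTRAINTS;
```

The output includes a WHERE clause identifying each offending row.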


This shows what values are currently breaking the constraint and will need dealing with before you can change the constraint to ‘CHECK’.

NOCHECK and the Optimiser

Another, ‘quieter’ issue with using NOCHECK is that the Optimiser ignores Foreign Keys that have that option specified.

Looking at sys.foreign_keys, there is a property ‘is_not_trusted’, and for a FK constraint with NOCHECK this is set to 1. It implies that the data in that column cannot be trusted to be valid, when comparing it against the column that it references.
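A quick check against the tables above:

```sql
-- is_not_trusted = 1 for a FK created or re-enabled WITH NOCHECK
SELECT name, is_not_trusted, is_disabled
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.StockItems');
```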


As Books Online states: “The query optimizer does not consider constraints that are defined WITH NOCHECK. Such constraints are ignored until they are re-enabled by using ALTER TABLE table WITH CHECK CHECK CONSTRAINT ALL.”

Removing the invalid data won’t fix this, but removing the invalid data and changing the FK constraint will.

DELETE FROM dbo.StockItems
WHERE [Location] = 'X2';
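With the offending row gone, the constraint can be re-checked and made trustworthy again (constraint name assumed, as above):

```sql
-- WITH CHECK validates the existing rows; CHECK CONSTRAINT re-enables
-- it as a checked constraint, which resets is_not_trusted to 0.
ALTER TABLE dbo.StockItems WITH CHECK CHECK CONSTRAINT FK_StockItems_Locations;
```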



Comparing the Query Plan and Statistics with the ‘before and after’ for ‘is_not_trusted’ :
When ‘is_not_trusted’ = 1:

--(4 row(s) affected)
--Table 'Locations'. Scan count 0, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
--Table 'StockItems'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


When ‘is_not_trusted’ = 0:

--(4 row(s) affected)
--Table 'StockItems'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


So, where the Optimiser can trust a FK constraint it can make improvements to the efficiency of the plan.

NOCHECK does have its place, but the impact on performance and behaviour needs to be understood before using it.

Automatic Data Purging With Partition Switching

16 Mar

In a previous article I showed a basic example of Partition Switching, using animal names. While not a realistic example for everyday use it did get the principles across. However, certainly in my experience, partitioning is normally dependent upon a number – usually a date.

In this article I’ll create a partition process that is used to keep data for three days (based upon a date within the row) and then remove the data automatically, via a job.

Firstly I require a database for this demonstration.
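Something minimal will do – the name is taken from the USE statements later on; files and sizes are left at the defaults:

```sql
CREATE DATABASE PartitionTest;
GO
USE PartitionTest;
GO
```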


Now I need a Partition Function. Because there can only ever be 3 possible values with this calculation, I can be a little lazy and just specify two of the values. If I just write the function for values of 1 and 2, then 3 will always go into the third partition, normally used for ‘invalid’ value ranges. I can’t have any invalid values (provided it has been coded and tested properly), so use the third partition for the ‘normal’ work instead of out-of-range values.
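The function name pf_Retention is used by the purge procedure later on; a sketch of the function itself:

```sql
-- Values of 1 go to the first partition, values of 2 to the second,
-- and anything else (i.e. 3) falls into the third.
CREATE PARTITION FUNCTION pf_Retention (TINYINT)
AS RANGE LEFT FOR VALUES (1, 2);
```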


The associated Partition Scheme will have three partitions. Values of 1 will go to the first in the list, values of 2 will go to the second in the list and all others will go to the third. There are only ever going to be three values for this example, so I won’t worry about a fourth for invalid values.
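The scheme maps those three partitions to filegroups; for a demonstration a single filegroup is fine (the mapping here is assumed):

```sql
CREATE PARTITION SCHEME ps_Retention
AS PARTITION pf_Retention ALL TO ([PRIMARY]);
```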


Having created these I can now create the table that will use this partition Scheme:

CREATE TABLE UserAccess(
    UserAccessID		INT IDENTITY(1,1),
    UserID				INT,
    LoginDateTime       DATETIME,
    LogoutDateTime		DATETIME,
    PartitionNumber		AS (CONVERT([tinyint],abs(datediff(day,CONVERT([date],'20000101',(112)),LogoutDateTime)%(3)+(1)),(0))) PERSISTED NOT NULL,
    CONSTRAINT PK_UserAccess PRIMARY KEY CLUSTERED(
    UserAccessID		ASC,
    PartitionNumber     ASC
    )
) ON ps_Retention(PartitionNumber);


CREATE TABLE UserAccess_PartitionSwitch(
    UserAccessID		INT IDENTITY(1,1),
    UserID				INT,
    LoginDateTime       DATETIME,
    LogoutDateTime		DATETIME,
    PartitionNumber		AS (CONVERT([tinyint],abs(datediff(day,CONVERT([date],'20000101',(112)),LogoutDateTime)%(3)+(1)),(0))) PERSISTED NOT NULL,
    CONSTRAINT PK_UserAccess_PartitionSwitch PRIMARY KEY CLUSTERED(
    UserAccessID		ASC,
    PartitionNumber     ASC
    )
) ON ps_Retention(PartitionNumber);


Notice that the PartitionNumber column is a calculated, persisted column. This value is based upon the LogoutDateTime, so each row will be kept for three days from that DateTime value.
At first glance the calculation for the Partition Number can look confusing:


So let’s start with the basics and build up from there. Substitute the ‘LogoutDateTime’ with an ISO standard date, to help show what is happening.

Using a Base Date (in this case the 1st of January 2000) calculate the number of days between then and 20150310. Of course you need to set a Base Date that will suit your purposes. In this case, should I ever have a date in 1999 or earlier then it will not work as expected.

The 112 enforces ISO standard date formatting:

SELECT datediff(day,CONVERT([date],'20000101',(112)),'20150310') --112 is ISO standard, to avoid any confusion

As I’m using a 3 day retention scheme, use a modulo of 3, which will result in 0, 1 or 2. The abs function is just there to ensure no negative values, an unlikely event for this example:

SELECT abs(datediff(day,CONVERT([date],'20000101',(112)),'20150310')%(3))

Because a Partition number of 0 is invalid add 1, so making the only possible values 1, 2 or 3:

SELECT abs(datediff(day,CONVERT([date],'20000101',(112)),'20150310')%(3)+(1))

As it is such a small number, convert it to a TinyInt:

SELECT (CONVERT([tinyint],abs(datediff(day,CONVERT([date],'20000101',(112)),'20150310')%(3)+(1)),(0)))

To make sure the partition function is working, add a few rows of test data:

USE [PartitionTest]

INSERT INTO [dbo].[UserAccess]
           ([UserID]
           ,[LoginDateTime]
           ,[LogoutDateTime])
VALUES     (1		--UserID values assumed
           ,'20150308 01:00:00'
           ,'20150308 02:00:00'),
           (1
           ,'20150309 01:00:00'
           ,'20150309 02:00:00'),
           (1
           ,'20150310 01:00:00'
           ,'20150310 02:00:00'),
           (1
           ,'20150311 01:00:00'
           ,'20150311 02:00:00'),
           (1
           ,'20150312 01:00:00'
           ,'20150312 02:00:00'),
           (1
           ,'20150313 01:00:00'
           ,'20150313 02:00:00'),
           (1
           ,'20150314 01:00:00'
           ,'20150314 02:00:00');

And then check the result – look at the table contents and at how the data is spread across the actual partitions:
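The spread across the physical partitions can be checked with something like:

```sql
-- Row counts per physical partition of dbo.UserAccess
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.UserAccess')
  AND p.index_id IN (0, 1);   -- heap or clustered index only
```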
From these results it can be shown that the PartitionNumber column is never outside the range 1, 2 or 3, and the partitions themselves match the data spread indicated by the value of PartitionNumber.
Now to test that the Partition Switching will actually work – ensuring that the two tables have been set up correctly:
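The manual switch test looks like this:

```sql
-- Move the contents of partition 2 into the matching partition of the
-- _PartitionSwitch table (a metadata-only operation, so near-instant):
ALTER TABLE dbo.UserAccess
	SWITCH PARTITION 2 TO dbo.UserAccess_PartitionSwitch PARTITION 2;
```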
The three rows in Partition 2 have been moved to the Partition_Switch table.
So, I know that the structure of the tables is correct and that we can switch partitions. The next step is to create a routine that will perform this action automatically.
Our table is designed to keep data for three days, and to do this successfully we need to make sure that the next partition we are going to need (tomorrow’s partition) is empty before that day starts. So just before midnight we need to remove the data that is about to become four days old.

A Stored Procedure is now required – ‘dbo.Purge_Table’, which ascertains the partition number that is now three days old and purges it (after some checks to make sure it should), because that will be the partition required tomorrow.

CREATE PROCEDURE [dbo].[Purge_Table] AS

    DECLARE @DaysToKeep				TINYINT,
            @PurgePartition			TINYINT,
            @TotalPartitions		TINYINT,
            @Today					DATETIME,
            @MaxDateInPartition		DATETIME,
            @DateToPurge			DATETIME,
            @SQLCode				VARCHAR(500);

    SET @TotalPartitions = 3;
    SET @DaysToKeep = 2;
    SET @Today = GETDATE();

    SET @DateToPurge = DATEADD (dd, (@DaysToKeep * -1), @Today);  --Includes today. We keep today, we keep yesterday, but the day before yesterday needs to go

    SET @PurgePartition = 0;

    SET @PurgePartition = (CONVERT(tinyint,ABS(datediff(day,CONVERT([date],'20000101',(112)),@DateToPurge)%(@TotalPartitions)+(1)),(0))); --Same formula as used within the table, but
                                                                                                                                          --using the @DateToPurge, to find what partition
                                                                                                                                          --that date will be in.

    SET @PurgePartition = $Partition.pf_Retention (@PurgePartition);

    ---- PURGE TABLE

    --Get the latest LogoutDateTime from the partition that is going to be purged.
    SELECT @MaxDateInPartition = MAX (LogoutDateTime)
     FROM dbo.UserAccess
     WHERE $Partition.pf_Retention (PartitionNumber) = @PurgePartition;

    IF EXISTS (SELECT 1 FROM dbo.UserAccess
                WHERE PartitionNumber = @PurgePartition) --Check there is data for this partition
        IF DATEDIFF (dd, @MaxDateInPartition, @Today) >= @DaysToKeep OR @MaxDateInPartition IS NULL --Check the date within this partition is old enough to be removed
            BEGIN
                SET @SQLCode = 'ALTER TABLE dbo.UserAccess SWITCH PARTITION ' +
                    CAST (@PurgePartition AS varchar) +
                    ' TO dbo.UserAccess_PartitionSwitch PARTITION ' +
                    CAST (@PurgePartition AS varchar);

                EXEC (@SQLCode);

                TRUNCATE TABLE dbo.UserAccess_PartitionSwitch;	--Having switched the data over, remove it

                RAISERROR ('The dbo.UserAccess Table has been successfully Purged.', 10, 1);
            END
        ELSE
            RAISERROR ('OPERATION CANCELLED. Incorrect Data would be purged from dbo.UserAccess', 16, 1);
    ELSE
        RAISERROR ('dbo.UserAccess - partition has no data', 10, 1);

Having cleared the table (and the PartitionSwitch table) I input three rows, with today, yesterday and the day before as the LogoutDateTime:
Now execute the Stored Procedure and the row in partition 2 should be switched to the PartitionSwitch table and removed (based on the execution date of 20150316).
Now all that is required is a scheduled job that executes this Stored Procedure at the end of each day, ensuring there is always an empty partition available for the next day and any data older than is required is no longer stored.
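A sketch of that job, assuming the standard msdb procedures and a daily 23:55 schedule (names are illustrative):

```sql
USE msdb;
GO

EXEC dbo.sp_add_job @job_name = N'Purge UserAccess Partitions';

EXEC dbo.sp_add_jobstep @job_name = N'Purge UserAccess Partitions',
	@step_name = N'Run dbo.Purge_Table',
	@subsystem = N'TSQL',
	@database_name = N'PartitionTest',
	@command = N'EXEC dbo.Purge_Table;';

EXEC dbo.sp_add_jobschedule @job_name = N'Purge UserAccess Partitions',
	@name = N'Daily, just before midnight',
	@freq_type = 4,			--daily
	@freq_interval = 1,
	@active_start_time = 235500;	--23:55:00

EXEC dbo.sp_add_jobserver @job_name = N'Purge UserAccess Partitions';
```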

Mentoring – risk and reward

8 Feb

So, Paul Randal is offering to mentor six people for a couple of months, in response to the feedback from the Tribal Awards last year.

Now that’s a double-edged sword, isn’t it?

First of all, the opportunity to seek guidance and assistance from one of the most knowledgeable people in the SQL Server community, for free, is not to be sniffed at.

But just imagine the fun to be had in job interviews forever afterwards:

“So, you trained with Paul Randal?”, I know it’s mentoring, but people will see what they want to, “Well, we have a 200 server system with 20 Tb databases in 30 countries. Clustering is a bit flaky and the Merge Replication between our 7 Russian sites and Australia keeps failing – how would you fix that?”.  People can set higher expectations when they see certain names in connection with you.

And of course, there’s always the thought of failing to meet Paul’s expectations too. After all, this is his own time he’s sacrificing – squeezing it in between his day job and the inordinate amount of reading he appears to do. Years from now, whenever he’s asked – “Oh yeah, Steve – nice enough guy but I wouldn’t want him anywhere near my systems.” – the career version of the Black Spot. The rest of my career on NoSQL databases.

And what do I actually need anyway? With the all-seeing eye that is Google and a company-paid subscription to SafariBooksOnline, it certainly isn’t reading material. And although it pains me to say so, I do work with several people who really know their SQL Server, so the day-to-day stuff should be catered for. And no, I’m not giving names – they’d be insufferable.

And that’s the problem – day-to-day stuff.  Training beyond what I need for my current job. Methodology. The ‘what’ and the ‘how’.

I have so many draft articles in my blog – slowly composting because I’m not sure how to approach them or even if they’re of enough interest to myself really to actually complete.

I paid my own way to SQL Pass a couple of years ago and I’ve paid for this year too, as well as attending a SQLBits conference a few years ago. This is currently how I find out what others do, outside of my little bubble of experience.

I’ve changed direction several times within my long career in IT and it’s usually because somebody who is very experienced in their field and can teach it has shown me into their little world. Not necessarily taught me the syntax and the tool usage, but shown me how to think in a way that takes advantage of whatever language/tool they use. A couple of months of being shown that can make changes that last years.

So I’m willing to put my name forward for this, in my own style. Who knows, we may both learn something. Even if not selected it made me think about what I really want out of this new direction my career has taken. And if I get nothing more from this, I can at least thank the man who introduced me to the Wool trilogy via his twitter messages.

Generating Replication Scripts – Things To Watch Out For

5 Feb

Scripting a Publication is useful for both Disaster Recovery and copying a Publication to a test environment. If you use the Wizard each time then you have to make sure that you’re consistent with the options and choices made in Live and once again on the Test (or DR) server.  By creating the script you should get something that reproduces the Publication and Subscription in full.

However, generating the script doesn’t always give you an exact copy of what you are scripting from. Some defaults might be included that weren’t used originally and some settings may be specified that were simply not used with the original creation.

A couple of examples I can demonstrate using the Publication scripted in earlier articles; a more entertaining example using partition switching will have to be described.

Example 1 – There wasn’t a Publication Snapshot in the Original

But if you ask SSMS to generate a Create Script for the Publication it will include one:

To generate the scripts from an existing Publication you need to right-click on the Publication name within Replication/Local Publications branch (on the Publisher):


Specify where you want the script to go and it will be generated.

In this option I’ve chosen to have the script sent straight to a New Query Window.

Now bear in mind that this Publication does not use a Snapshot. I never specified it when I originally scripted this publication.

So what is this here for?
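What the generated script contained was a call along these lines (parameter values here are the generated defaults, not anything from the original Publication):

```sql
-- A snapshot agent definition that the original Publication never had;
-- remove this call from the script to get a true copy.
EXEC sp_addpublication_snapshot @publication = N'PublicationName',
	@frequency_type = 1;	--run once
```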


If I’m going to use this script as a true copy of the original then I need to remove this entry.

Example 2 – The ‘sync_type’ has been set incorrectly

From the Subscriber, generate the Create Script just as with the Publisher, to a New Query Window.

This time the comments within the script do have the good grace to warn you that a default has been used that you might not want:
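The relevant line is the sp_addsubscription call; server and database names are assumed here:

```sql
-- Generated with the default @sync_type = N'automatic';
-- for this Publication it needs to be N'none'.
EXEC sp_addsubscription @publication = N'PublicationName',
	@subscriber = N'SubscriberServer',
	@destination_db = N'SubscriberDB',
	@subscription_type = N'pull',
	@sync_type = N'none';
```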



In this case I need that to be set to ‘none’.

As an aside, in SQL Server 2000 this setting was also case-sensitive – ‘none’ wouldn’t work as expected, but ‘None’ would.

Example 3 – Partition Switching fails with Replication

I have no test example to hand with which to demonstrate this (at least, not one that I can show outside of my employer’s environment) but it is simple enough to describe, now that I know the cause and resolution.

Several databases in use at my current workplace specify different partition schemes between the Publisher and the Subscriber. This is a common practice, particularly where the Subscriber might be a ‘staging’ database used to ultimately transfer data elsewhere.  So the Publisher might keep data for a couple of weeks but the Subscriber only needs to store it for a day or two, because another system is being used to store/consume that data for other purposes (reporting, analysis or whatever).

So, in my innocence I script the Publication and Subscription via SSMS, make the alterations shown in the previous two examples and create the Publication on a test environment. All is good and Replication works fine. Data that I insert into the Publisher appears in the Subscriber and I have that warm, smug feeling of having created a Publication without incident. Something to be savoured.

However, part of creating this test environment also includes setting up the jobs that purge the data in both Publisher and Subscriber, with the use of Partition Switching (for a basic example of Partition Switching, have a look at ).

When the job runs against the Publisher that executes Partition Switching I get the following error:

“The table ‘<schema>.<tablename>’ belongs to a publication which does not allow switching of partitions [SQLSTATE 42000] (Error 21867)”.

Just for a laugh, if you want to see just how many errors Replication can produce, have a look at the list of Replication error messages (and to add to the fun, they aren’t logged).

After much digging around and asking others who have more Replication scars than myself, it transpires that some settings aren’t scripted out and also aren’t found in any of the GUI screens associated with Replication.

In the past I have right-clicked on the Publication name, selected ‘Properties’ and looked at ‘Subscription Options’, believing that comparing these between the original and the copy would be enough.  Ah, well.

There is a Replication command ‘sp_helppublication’ which shows several settings that are not visible elsewhere. At its most basic, running this command with just the Publication name will produce a row with all of the settings associated with that Publication:
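For example (Publication name assumed):

```sql
-- Run against the publisher database; returns one row per matching
-- Publication, including settings such as allow_partition_switch.
EXEC sp_helppublication @publication = N'PublicationName';
```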



With the particular Publication in mind I scrolled along to the far right, and the setting for ‘allow_partition_switch’ was set to 0. As BOL specifies for this parameter – “Specifies whether ALTER TABLE…SWITCH statements can be executed against the published database”.

Executing ‘sp_helppublication’ against the Live Publication shows this value as ‘1’ – so it is permitted in Live but wasn’t scripted anywhere by the automatic process through SSMS.

To change this to the value required requires the command ‘sp_changePublication’, executed against the Publisher DB in question:

EXEC sp_changepublication @publication=N'Publication Name', @property=N'allow_partition_switch', @value = 'true';

However, that isn’t the end of it. In executing this command it also sets ‘replicate_partition_switch’ to ‘1’, which I don’t want. The publisher and Subscriber in our environments generally have different Partition Switching schemes, so just because the Publisher decides to purge any data doesn’t mean that the Subscriber does too. So I now need to unset that parameter:

--it also sets 'replicate_partition_switch' to 'true' when the previous command executes and we want that as --'false'
EXEC sp_changepublication @publication=N'Publication Name', @property=N'replicate_partition_switch', @value = 'false';

Having jumped through these hoops I now find that Partition Switching works fine and my Publication in Test really is a copy of the Publication in Live.

Replication – Monitoring via the Jobs

5 Nov

In the previous article I covered how to use the Replication Monitor to see what is happening with a Publication and to find out what the errors are.
However, in some circumstances Replication Monitor may not show the entire picture with regard to the health of a Publication.
Information can also be gleaned from the jobs that actually perform the tasks that make the Publication work.

In this example I have removed the login of the Subscriber from the Distributor with a simple DROP LOGIN command.

USE [master]
GO

DROP LOGIN [ReplSubscriber];	--login name illustrative

Now, on the Publisher I’ll add one row to the table that is part of the Publication:

USE [PublisherDB]
GO

INSERT INTO [ReplTest].[PublicationTable]
VALUES     ('A N Other'		--values illustrative, other than the date
           ,'23 Jul 1970'
           ,GETDATE());

With this Publication the data should be sent at about 30 seconds past each minute. Open Replication Monitor and look at the history of the Distribution:


However, this has been sitting here for several minutes now and the history has not moved – stuck at the last transaction that completed at 13:12:29. There is also no error message displayed, so this screen indicates that there must be a problem (because I know it should run every minute) but gives no details.

The tab for ‘Undistributed Commands’ shows that there is one transaction waiting to get to the Subscriber:


And firing a tracer token never gets to the Subscriber:


From the Tracer Token we can see that the Publisher to Distributor connection must be working and there is no response between the Distributor and Subscriber.

This Publication is a Pull Subscription, so on the Subscriber there is a job that pulls the data from the Distributor. So, look at the job history on the Subscriber. In this case there is only one Job so it is easy to select, but the name should have something within it that shows which Publication it is part of. Within the history for that job there is a long list of failures – one every minute, oddly enough:


Expand the ‘tree’ from the red cross and the latest step just tells you there’s a problem and to look either in the previous job step or Replication Monitor. We know that Replication Monitor has told us all it is going to, so look at the previous step of this job:


So now we know that the problem is that the job cannot connect to the Distributor. In this case recreating the LOGIN on the Distributor for this job will correct the issue:

USE [master]
GO

CREATE LOGIN [ReplSubscriber] WITH PASSWORD = N'StrongPassword!1';	--login name and password illustrative

And we can see that the Replication monitor shows two transactions successfully processed (because the Tracer Token is a transaction too):


The Tracer token finally completed its journey:


And the Job History looks fine too:


Replication Monitor Basic Overview

31 Oct

Most issues with Replication are detailed in either the Replication Monitor or in the history of the jobs used to run the Replication tasks. The following examples are based upon the Pull Subscription detailed in earlier articles.

Replication Monitor

This tool is available to anyone who is a member of the sysadmin role or replmonitor role. It is loaded from any SQL Server instance that has Replication enabled, although life is easier if you launch it from the Publisher or Distributor (anywhere else and you may have to configure it to look for the correct servers).
Within SSMS Object Explorer, right-click on ‘Replication’ and select ‘Launch Replication Monitor’, which will result in:


In the screenshot above the tree structure in the left window has been expanded. The highest level is the Distributor, expand that and beneath it is the Publisher and beneath that is the actual Publication.
Select the Distributor and three tabs are available in the right-hand pane – ‘Publications’, ‘Subscription Watch List’ and ‘Agents’.

Publications shows the Publications that this Distributor is responsible for, providing the Publisher name, Publications name, the number of Subscriptions and basic performance data.

Agents shows the various Agents that can be involved in Replication. In this Publication selecting ‘Merge Agent’ will show nothing, because it isn’t a Merge Publication, and selecting ‘Snapshot Agent’ will show a status of ‘Never Started’ because snapshots are not used with this Publication. Other selections will show the status of various Agents and jobs connected to this Distributor.

Subscription Watch List is the tab probably used most often. Double-click on the entry in the right-hand window and another window pops up, providing a detailed history of that Publication:


There are three tabs in this control and the most important of these tends to be the ‘Distributor To Subscriber History’ tab. As a default it shows the last 100 synchronisations (in this Publication there is one synchronisation every minute) but can be changed by using the drop-down at the top of that tab.

To show what should normally happen with this tab clear the Subscriber table, generate 1000 rows of data within the Publisher and watch this screen as the data is Published across.
For ease, Redgate SQL Data Generator has been used to insert the 1000 rows. From the current display on that tab you can see that it normally refreshes at roughly 30 seconds past each minute, so once it is due press F5 to refresh the screen and the message can be seen giving basic details of what was Published from the Distributor to the Subscriber:


Errors within the Publication

Now repeat the insertion on the Publisher, having truncated the table on the Publisher first; this will generate Primary Keys that already exist on the Subscriber. This, of course, will not go down well when the data gets to the Subscriber.

Initially the History will show that there is a problem and it is going to retry the individual commands, instead of the entire batch in one go:


A short while later it gives up and provides a detailed error message:


In this case it is informing of a violation of the Primary Key constraint, as that PK already exists on the Subscriber. This series of retrying and error reporting will repeat until something is done about it. In this case remove the data from the Publisher that already exists with the Primary Keys being Published and all will return to normal:


Another way to use this tool for locating the problem is in the Error Details part of the screen, with the Transaction Number displayed.
To show what can happen if the customised Stored Procedure is wrong, within the Subscriber change the SP ‘dbo.sp_MSins_Repl_SubscriptionTable’ to remove the reference to the ‘PK’ column, which is NOT NULL.

USE [SubscriberDB]

ALTER PROCEDURE [dbo].[sp_MSins_Repl_SubscriptionTable]
    @c1 bigint,
    @c2 varchar(50),
    @c3 date,
    @c4 datetime
AS
	--column names illustrative; the point is that @c1 (the NOT NULL 'PK' column) is no longer inserted
	insert into [Repl].[SubscriptionTable](
		[Name],
		[DOB],
		[DateCreated]
	) values (
		@c2,
		@c3,
		@c4	)

Of course, any attempt to write a row to this table via this SP will now result in an error. The trick is to work out which command is failing within Replication.

On the Publisher, execute the following:

USE [PublisherDB]
GO

INSERT INTO [ReplTest].[PublicationTable]
VALUES     ('A N Other'		--values illustrative, other than the date
           ,'01 Jun 1970'
           ,GETDATE());

Initially it will retry the command, as before. Eventually it will show an error message, along with the Transaction Sequence Number:


That binary number is used to run a query against the Distributor database to get the data it is attempting to send. The query is ‘sp_browsereplcmds’, with two parameters – both are the binary value taken from the Replication Monitor:
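For example, with the sequence number taken from the error message (the value here is illustrative):

```sql
-- Run against the distribution database; both parameters take the
-- binary Transaction Sequence Number from Replication Monitor.
EXEC sp_browsereplcmds
	@xact_seqno_start = '0x0000001900001926000500000000',
	@xact_seqno_end   = '0x0000001900001926000500000000';
```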


If multiple rows are returned from this query then the required row is the one where ‘command_id’ matches ‘Command ID’ shown in the error message from Replication Monitor (although you should check the column ‘partial_command’, as the query might be split across several rows).

From the column ‘command’ we can see the call to ‘sp_MSins_Repl_SubscriptionTable’ along with the parameters used.
Copying the contents of this column to SSMS and running it against the Subscriber (with a minor amount of editing to keep SSMS happy) shows the error:


Now we have the command that is causing the issue and the parameters it is using. In this case it’s obviously a fault within the SP, so simply amend it back to save the PK column and all returns to normal.



Another basic way to check Replication is all connected up is to run a Tracer Token through it. A Tracer Token is just a small amount of data written to the log of the Publisher and then tracked through the Publication.
To run a Tracer Token through Replication Monitor go to the bottom of the tree structure in the left-hand window, selecting the Publication. Three tabs appear and one of these is ‘Tracer Tokens’. Select that tab and press the button ‘Insert tracer’.


The time for ‘Publisher to Distributor’ will generally be within a few seconds, as that part of the Publication is configured to run constantly. ‘Distributor to Subscriber’ can take a while, depending upon when that job is scheduled. In this Publication it is once every minute, so could take anywhere up to one minute to complete. Once completed it shows the total latency and is an indication that the connections are configured correctly for the three main elements (Publisher, Distributor and Subscriber) to communicate correctly.

