Read Committed Snapshot – Another Optimistic Flavour

26 Aug

In a previous article (here) I described how the isolation level read committed works, with some examples. There is also an optimistic version of this – read committed snapshot, which uses row versions.
The behaviour of read committed and read committed snapshot is similar, and repeating the examples used previously will show both the similarities and the differences.

First of all, the database needs to be configured for snapshot isolation.

ALTER DATABASE IsolationLevels
SET READ_COMMITTED_SNAPSHOT ON;

As with the article on read committed, there are 5,000,000 rows with the ‘ID’ column starting from 10 and a value of 1 in ‘Test_Value’ for each row.

[Screenshot: Read_Committed_snapshot_01]
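The screenshot does not reproduce the query text here; as a sketch, assuming the same shape as the query in the read committed article (two reads of the same data inside one transaction – the exact SELECT lists are an assumption):

```sql
-- Assumed shape of the demonstration query, not the exact code
-- from the screenshot.
BEGIN TRANSACTION;

SELECT ID, Test_Value
FROM dbo.Test2
ORDER BY ID;             -- first read, scans the index in order

SELECT COUNT(*) AS Row_Count,
       SUM(Test_Value) AS Total
FROM dbo.Test2;          -- second read of the same data

COMMIT TRANSACTION;
```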

This code is executed again and, as soon as it starts, the row with ‘ID’ of 10 is moved to the end of the index from another tab.

UPDATE dbo.Test2
SET id = id + 6000000
WHERE id = 10;

This time the result is slightly different. The result shows that the move of the row with ‘ID’ 10 to 6000010 has not affected the first SELECT. This is because it used a snapshot of the data, taken when the statement started (not when the transaction started), so the move had no impact upon this version of the data.

[Screenshot: Read_Committed_snapshot_02]

Similarly, moving the last row in the index to the start of the index does not have the impact it did with read committed isolation, for the same reason. Having reset the data, executing the appropriate UPDATE does not have the same effect, because the SELECT processed a snapshot of the data.

UPDATE dbo.Test2
SET id = 9
WHERE id = 5000009;

[Screenshot: Read_Committed_snapshot_03]

For the same reason, dirty reads cannot happen with read committed snapshot, because it uses a snapshot of the data taken when the statement started.

A smaller demonstration
In the read committed isolation level article, an UPDATE was executed to cause a lock, thereby blocking a subsequent SELECT.
Trying the same demonstration here results in different behaviour.

1,000 rows, with matching values in the ‘ID’ and ‘Test_Value’ columns.

[Screenshot: Read_Committed_snapshot_07]

Executing an UPDATE in a separate window without a COMMIT/ROLLBACK will lock the selected row.

BEGIN TRANSACTION;

UPDATE dbo.Test2
SET Test_Value = 1000
WHERE id = 50;

Rerunning the query with the two SELECT statements would have to wait for the release of the lock held by the UPDATE, if this were the read committed isolation level.

[Screenshot: Read_Committed_snapshot_06]

However, with read committed snapshot the row with the ‘ID’ of 50 is unchanged, and the SELECT was not blocked, because a snapshot was used by the statement.

A different demonstration, to show that each SELECT within a transaction uses a different snapshot.
Two identical SELECT queries, separated by a 10 second delay.

[Screenshot: Read_Committed_snapshot_04]
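The delayed query pair can be sketched as follows (the exact SELECT list is an assumption; under read committed snapshot each SELECT takes its own statement-level snapshot):

```sql
BEGIN TRANSACTION;

SELECT ID, Test_Value FROM dbo.Test2 ORDER BY ID;  -- first snapshot

WAITFOR DELAY '00:00:10';                          -- 10 second delay

SELECT ID, Test_Value FROM dbo.Test2 ORDER BY ID;  -- fresh snapshot

COMMIT TRANSACTION;
```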

Before the WAITFOR command completes, an UPDATE is executed that will move a row from the start of the index to the end.

BEGIN TRANSACTION;

UPDATE dbo.Test2
SET ID = 1001
WHERE ID = 10;

COMMIT TRANSACTION

Looking at the completed initial transaction, the moved row is shown in the second SELECT but not in the first, so the second SELECT read a new snapshot, showing the moved data.

[Screenshot: Read_Committed_snapshot_05]

So, as with read committed isolation, multiple queries within the same transaction may return different results, even when the criteria are identical.

Snapshot Isolation – Let’s Get Optimistic

18 Aug

The previous articles on isolation levels were using pessimistic locking. Two further isolation levels exist that use optimistic locking and are row-versioning based.

Snapshot Isolation
With snapshot isolation level no locks are placed upon the data when it is read and transactions that write data don’t block snapshot isolation transactions from reading data. Instead, when the first statement within a transaction is started, a version of the data it requires is stored in tempdb along with a sequence number. It uses this sequence number to control access to the consistent version of the data for that transaction.
Snapshot isolation, like the serializable isolation level, does not allow dirty reads, lost updates, phantom rows or nonrepeatable reads. However, snapshot isolation does not use locks to achieve this. Instead, it uses a consistent version of the data, taken at the start of the transaction, and if an update to this data clashes with an update performed elsewhere then the transaction is rolled back.
To set snapshot isolation requires setting a database level property.

ALTER DATABASE IsolationLevels SET ALLOW_SNAPSHOT_ISOLATION ON;

This enables the row versioning mechanism within the database. Versioned rows will be stored even if the isolation level specified for the query is pessimistic. However, you still need to specify the snapshot isolation level for the query in order to use versioned data.
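Enabling the database option only makes the level available; each session still has to ask for it. A minimal sketch (the table name here is just an example):

```sql
-- Row versions are generated once ALLOW_SNAPSHOT_ISOLATION is on,
-- but a session must still request the snapshot isolation level
-- to read the versioned data.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

SELECT ID, Test_Value FROM dbo.Test4 WHERE ID = 9;

COMMIT TRANSACTION;
```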

Example 1 – updating a row
Using the same table used in the serializable example, transaction 1 will use read committed isolation to read and update the value of one row.
At the same time a snapshot isolation level transaction will select the same row and attempt an update.
In transaction 1

[Screenshot: Snapshot_02]

Note the absence of a COMMIT or ROLLBACK at this point, which leaves this row locked. The query will show the updated value of 2 for the column ‘Test_Value’.
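A sketch of transaction 1 (the exact SELECT list is an assumption; the table name and values are taken from the error message shown later):

```sql
-- Transaction 1: default read committed isolation.
BEGIN TRANSACTION;

SELECT ID, Test_Value FROM dbo.Test4 WHERE ID = 9;

UPDATE dbo.Test4
SET Test_Value = 2
WHERE ID = 9;

SELECT ID, Test_Value FROM dbo.Test4 WHERE ID = 9;  -- shows 2

-- Deliberately no COMMIT/ROLLBACK yet, so the row stays locked.
```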
In transaction 2

[Screenshot: Snapshot_03]

Using snapshot isolation, this shows the value of 1 for ‘Test_Value’, because at the time this transaction started the update in transaction 1 had not been committed.
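Transaction 2 can be sketched as below (the value used in the conflicting update is an assumption):

```sql
-- Transaction 2 (separate session): snapshot isolation.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;

SELECT ID, Test_Value FROM dbo.Test4 WHERE ID = 9;  -- still shows 1

-- Once transaction 1 has committed, attempting this update raises
-- error 3960 and rolls this transaction back.
UPDATE dbo.Test4
SET Test_Value = 3
WHERE ID = 9;
```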
If we now COMMIT transaction one and then re-run the SELECT query within transaction 2 the value is unchanged.

[Screenshot: Snapshot_04]

This transaction is still reading the data from the version store, to keep the view of this transaction consistent.
Because the row with ‘ID’ of 9 has now been updated elsewhere, an attempt by transaction 2 to update this row will result in an error message and a rollback of the transaction.

[Screenshot: Snapshot_05]

The entire error message is
Msg 3960, Level 16, State 2, Line 1
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table ‘dbo.Test4’ directly or indirectly in database ‘IsolationLevels’ to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.

Of course, re-running transaction 2 now will show the updated value and an update will be allowed because nothing else will be in the process of updating it.

[Screenshot: Snapshot_06]

Example 2 – inserting a row
Transaction 1 selects the first few rows, transaction 2 inserts a row within the same range that transaction 1 is using and transaction 3 then selects the first few rows.
Transaction 1 selects all rows where the ‘ID’ column has a value less than 10, and retrieves 6 rows.

[Screenshot: Snapshot_08]

Transaction 2 inserts a new row, with an ‘ID’ value of 1, which should place it within the range of the transaction 1 query.

[Screenshot: Snapshot_09]

Re-run the SELECT query within transaction 1 and the results are unchanged – it is using the data from the version store.

[Screenshot: Snapshot_10]

Repeat this query in transaction 3 and the extra row is also selected, as it is using the latest version of the data.

[Screenshot: Snapshot_11]

Unlike the serializable isolation level it has been possible to insert a row within the range that transaction 1 had selected, but transaction 1 will never show this new row. It will keep a consistent version of the data until the transaction has completed or been rolled back.

Example 3 – deleting a row
As expected, the behaviour is the same as with the insertion – transaction 1 will read a range of data.

[Screenshot: Snapshot_12]

Transaction 2 will delete one of the rows that transaction 1 had selected.

[Screenshot: Snapshot_13]

Repeating the select within transaction 1 will give the same results, even though transaction 2 has since removed a row.

[Screenshot: Snapshot_14]

Executing transaction 3 after the deletion will show the latest data.

[Screenshot: Snapshot_15]

The Overhead
Setting the database to ALLOW_SNAPSHOT_ISOLATION starts the generation of row versions, even for transactions that do not use snapshot isolation. This command will not complete until all currently executing transactions have completed, so may take a while to return (https://msdn.microsoft.com/en-us/library/bb522682.aspx) .
Versioning adds 14 bytes to every row as it is modified, so it is essential to take this additional storage into account.
The versioned data is stored in tempdb, so this can grow by a large amount for databases that are heavily used. In addition, the version store is a linked list of related row versions, so a query may have to traverse a long chain if other transactions have been open for a long time. This of course adds to the processing time of a transaction. A clean-up process runs frequently to trim this list, but it may not be able to remove all entries, depending upon how heavily the database is being used.
A range of DMVs and functions are available to monitor the behaviour of versioning within a database – https://msdn.microsoft.com/en-us/library/ms178621.aspx but if the version store is large these can also take a while to process.
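For example, two of those DMVs (a sketch; they require VIEW SERVER STATE permission):

```sql
-- Approximate size of the version store in tempdb, in KB
-- (pages are 8 KB each).
SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM sys.dm_db_file_space_usage;

-- Transactions currently relying on row versions.
SELECT transaction_id,
       transaction_sequence_num,
       elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions;
```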

Conclusion
Snapshot isolation is not suitable for databases where data updates are frequent, as it has the potential to generate a large number of update conflicts. It is more suited to databases that are mainly read, with data inserted or deleted but few updates.

Serializable – Not One For Sharing

12 Aug

In the previous article it was shown that phantom reads and skipped rows were possible with the repeatable read isolation level.
Serializable is the highest isolation level and will not allow phantom reads or skipped rows.

Continue reading

Repeatable Read – still no guarantees

7 Aug

In my previous article I showed how read committed would allow rows to be updated that it had already processed in a query, because it releases locks as soon as it has processed that row or page (depending upon the type of lock taken). A second query using the same criteria would then show different results from the first.
Repeatable read isolation level doesn’t allow this type of action to occur, because it retains the locks until the transaction has completed.
Continue reading

Read Committed – not getting what you expect

23 Jul

In a previous article I demonstrated how use of READUNCOMMITTED (or the NOLOCK hint) can result in erroneous data, due to page splits. Examples abound of the more common issue with this isolation level, which occur when reading a row that is being updated in another transaction and then rolled back. These so-called ‘dirty reads’ cannot happen with the default isolation level of READ COMMITTED.

However, it is still possible to alter data within a table that is being read with READ COMMITTED, so that the results of that transaction do not reflect the actual state of the data when the transaction has completed.
For this demonstration I’ll create a table that contains an ID column and an integer column, with a clustered index on the ‘ID’ column.

CREATE TABLE [dbo].[Test2](
	[ID]		INT  NOT NULL,
	[Test_Value]	INT NOT NULL,
PRIMARY KEY CLUSTERED 
(
	[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] 

GO

For reasons that will become apparent I’ll start the ID from 10, incremented by 1 for each row and store 1 in ‘Test_Value’ against each row.
With nothing else executing against that table there are 5,000,000 rows and the sum of ‘Test_Value’ is 5,000,000:
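The population script isn’t shown; one way to build those rows (an assumption about the method, not the author’s actual script) is a cross join of system views:

```sql
-- Populate dbo.Test2 with 5,000,000 rows: ID starting at 10,
-- Test_Value of 1 for every row. Cross joining sys.all_columns
-- with itself provides more than enough rows to draw from.
INSERT INTO dbo.Test2 (ID, Test_Value)
SELECT TOP (5000000)
       9 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       1
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b
CROSS JOIN sys.all_columns AS c;
```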

[Screenshot: Read_Committed_01]

The two queries within this transaction are looking at the same data.
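The transaction in the screenshot can be sketched as below (an assumed shape, based on the descriptions of the results – a full read of the rows followed by an aggregate):

```sql
BEGIN TRANSACTION;

-- First query: read every row in index order.
SELECT ID, Test_Value
FROM dbo.Test2
ORDER BY ID;

-- Second query: aggregate the same data.
SELECT COUNT(*) AS Row_Count,
       SUM(Test_Value) AS Total
FROM dbo.Test2;

COMMIT TRANSACTION;
```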

Now I will repeat this query, and in a separate tab I will update the row with ID of 10 to move it to the end of the index, as soon as the first query has started:

UPDATE dbo.Test2
SET id = id + 6000000
WHERE id = 10;

The results for this show an extra row in the first SELECT:

[Screenshot: Read_Committed_03]

And if I scroll to the end of that result, the row that was shown with an ID of 10 is also shown with an ID of 6000010.

[Screenshot: Read_Committed_04]

So, the first SELECT query has looked at the same row, before and after it was moved. However, the second SELECT query, because it read the data after all of the updating, has returned the correct information.

Resetting the data, it is also possible to ‘lose’ a row, when a row at the end of the index is moved to the start of the index, during the first SELECT.

This time, the second query is:

UPDATE dbo.Test2
SET id = 9
WHERE id = 5000009;

[Screenshot: Read_Committed_05]

This time, the row with an ID of 9 is not shown in the results, because the update moved it from the end of the index to the beginning of the index after the SELECT had read the beginning of the index but before it had reached the end of the index.

[Screenshot: Read_Committed_06]

The sum shows the correct results because the update had completed before the second SELECT started.

A Smaller Demonstration

In the examples shown there were five million rows, but what about smaller tables?

Consider a far smaller table, which is accessed by three processes. One is simply reading the whole table (with READ COMMITTED Isolation Level). At the same time a second process has locked a row, because it is updating it. In addition, a third process is updating data and in the process is moving the row to another position within the index.

Firstly – the data before it is interfered with shows one thousand rows that SUM to 500500:

[Screenshot: Read_Committed_08]

In a separate tab run the following command:

BEGIN TRANSACTION;

UPDATE dbo.Test2
SET Test_Value = 1000
WHERE id = 50;

Notice that there is no ‘COMMIT’ or ‘ROLLBACK’. This will lock row 50.

Now run the initial command, with the two SELECT statements. This will be blocked by the update, because Row 50 is locked. READ UNCOMMITTED would use this (as yet uncommitted) Test_Value and continue. READ COMMITTED will not, and is therefore blocked.

In a third window:

BEGIN TRANSACTION;

UPDATE dbo.Test2
SET ID = 1001
WHERE ID = 10;

COMMIT TRANSACTION

This moved Row 10, which had already been read, and placed it at the end of the index.

Now ROLLBACK the update to row 50, this will allow the blocked process to continue.
The SELECT command has now completed, but the data for row 10 now also appears as row 1001 – the same data shown twice:

[Screenshot: Read_Committed_09]

[Screenshot: Read_Committed_10]

Of course, if the SELECT was also running a SUM or suchlike, the results would be incorrect in each of these examples.

So what is happening?

READ COMMITTED obtains shared locks on the rows or pages as it requires them, but releases these locks as soon as it has read the locked data.

Therefore, it is possible to update, delete or insert rows where the query has already processed that area of the table, and to do the same in the part of the table that the query has yet to process. What is guaranteed with READ COMMITTED is that you can’t have dirty reads, but that’s pretty much it.

The fact that I placed the two queries within a Transaction also made no difference. The second query always saw the ‘true’ picture of the data because the manipulations had completed by that point.

I’ve frequently seen statements where people have claimed that to get a ‘true’ picture of the data the default READ COMMITTED isolation level should be used. This is clearly a misunderstanding of how this process works, and one that I had to do a little bit of work to comprehend too.

Getting More Data Than You Bargained For With NOLOCK

29 May

Generally, when somebody posts a query using the NOLOCK hint, there are a variety of warnings about dirty reads, phantom reads and double-reading (due to page splits).
The dirty read example is common and tends to be the main example used to demonstrate what to watch out for with NOLOCK, but I’ve never seen an example of double-reading. So, I thought I’d try and see how much of an issue such a thing can be. It certainly sounds plausible, but is it a ‘one in a million’ chance, or is it a Terry Pratchett ‘one in a million’ chance – where it is pretty much guaranteed to happen?

So, create a database for this testing and then create a table within:

CREATE TABLE [dbo].[Test1](
	[ID] [int] IDENTITY(1,1) NOT NULL,
	[Test_Description] [nvarchar](max) NULL,
PRIMARY KEY CLUSTERED 
(
	[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) ON [PRIMARY]
) ON [PRIMARY]

GO

A very simple table – an Identity column and a text column. The important part is that I’ve specified a Fill Factor of 100 for the Clustered Index, meaning there is no space for expansion of the data in the current location. Therefore, if I increase the contents of the ‘Test_Description’ column it should force a page-split, because the data won’t fit back where it was originally stored.

I then generate 500,000 rows with a simple integer id and the entry ‘TEST’ in the ‘Test_Description’ column.
In one window of SSMS I now set up a query to update this column on every row. For text-based test data I tend to use a text file of ‘War and Peace’, from Project Gutenberg. For this test I just use the first paragraph.

In another window I set up a query to return every row from the same table, specifying ‘READ UNCOMMITTED’ isolation level.
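The two windows can be sketched as below (the variable and the truncated text stand in for the ‘War and Peace’ paragraph; the SELECT list is an assumption):

```sql
-- Window 1: rewrite every row. With FILLFACTOR = 100 the longer
-- text no longer fits in place, forcing page splits.
DECLARE @Paragraph nvarchar(max) = N'Well, Prince, so Genoa and Lucca...';

UPDATE dbo.Test1
SET Test_Description = @Paragraph;

-- Window 2 (run at the same time): read everything without locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT ID, Test_Description
FROM dbo.Test1;
```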
Once prepared I run the two queries together:

[Screenshot: Nolock_02]

[Screenshot: Nolock_01]

The update has returned a count of 500,000, which matches the number of rows I placed into that table.
However, the select has returned a count of 586,078 – considerably higher than the number of rows within that table. In addition, it can be seen within the results that I’m retrieving a mixture of updated and non-updated rows.

Exporting the results to a csv file and then into Excel, I can sort the results and see a large number of duplicates (based on the ID column):

[Screenshot: Nolock_03]

The Select found the rows before they were updated, and then again when the page split had moved the data, retrieving the row twice.
If I leave the select query to run once the update has completed, I get 500,000 rows – all updated.

This was a relatively easy test to set up, and has been consistent in the fact that it has duplicated a large number of rows every time. Of course, specifying a fill factor of 100 would pretty much guarantee a page split somewhere, but in the day-to-day operations of most databases, pages will become full and splits will happen eventually. So the scale of the issue might not be as large as the example I’ve created, but the underlying issue may still be present.

Obviously, if this was something more important than a short piece of text (financial data, for example) then the results derived from this query would be wildly inaccurate.

Compress All DB Files on a Server Instance

30 Apr

Or rather, on a Test or Development server instance. If you need to do this to a Live server, then you have larger issues.

Most sites have servers for Developers and DBAs to work their dark magic, as well as various test servers for the next level of testing. These servers do, of course, have databases that nobody admits to creating and other more recognisable databases that have had huge amounts of test data inserted, updated, removed and then generally left, like a bloated whale carcass – taking far more space than they now need.

There are several options in these cases:

1. Remove the databases that are no longer required.
This isn’t a quick fix, when a server has run out of space and all development has come to an abrupt halt. The database might have a use that only a few people are aware of, and they’re off on holiday, in meetings or just ignoring my stroppy emails. The best solution in such cases is to take the database offline and wait a few days, so not an immediate fix for space shortage.

2. Move database files.
Also not necessarily an option. Some setups are in specific locations for a reason and there might be no viable storage anywhere else for that server.

3. Shrink the databases.
The most common option for these servers, but some can have a large number of files, so using SSMS for this can be rather tedious.

4. Ask for more Storage.

[Image: Hydra2]
Yeah, right.

These servers aren’t treated well, but that is their purpose. It saves the QA and Live servers from a lot of grief later in the development cycle.

So, I had a look around for something that I could run against a server instance and would compress all of the databases for me. There are a few bits and pieces around, but nothing that quite did what I was after.
What I wanted was something that would give me some choice over what I compressed, because sometimes it is simply a waste of time to compress a very large file for little gain.

So, I have written something, stored it as a plain text file and it can be downloaded from here.

It doesn’t backup databases beforehand, nor does it check the health of the database (before or after). I was after something that performed the basics and saved me a lot of time.
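This is not the downloadable script itself, just a minimal sketch of the kind of loop involved (shrinking every online user database on the instance; the selectivity the real script offers is omitted):

```sql
-- Minimal sketch only - NOT the author's script. No backups, no
-- health checks, no per-file choices: shrink each online user database.
DECLARE @db  sysname,
        @sql nvarchar(max);

DECLARE db_cursor CURSOR FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE database_id > 4            -- skip the system databases
      AND state_desc = N'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC SHRINKDATABASE (' + QUOTENAME(@db) + N');';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM db_cursor INTO @db;
END;

CLOSE db_cursor;
DEALLOCATE db_cursor;
```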

And if you think this routine has solved all of your problems on your mission-critical systems, you really need to read this from Paul Randal – Why You Should Not Shrink Your Data Files. The inside of your database will look like the car under those tank-tracks.
