I have a series of articles lined up, which require showing details of the log file. Using fn_dblog can be a series of articles in itself, so I’m covering a few basic scenarios here first.
fn_dblog is an ‘undocumented feature’, in that Microsoft has no official documentation or support for this – so they can alter or remove it as they see fit. In reality, it is so widely used that there is very little chance that it will be subjected to major alterations – but this is not guaranteed.
This function shows the contents of a database's log file and takes two optional parameters – the start and end Log Sequence Number (LSN).
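In practice, most examples pass NULL for both parameters and narrow the output afterwards with a WHERE clause on the returned columns. A minimal sketch (the [Transaction ID] value shown is purely hypothetical – yours will differ):

```sql
-- Both parameters NULL: return every record currently in the log
SELECT [Current LSN], Operation, Context, [Transaction ID]
FROM fn_dblog(NULL, NULL)
-- Narrow the output by filtering on the returned columns,
-- e.g. to a single transaction's records:
WHERE [Transaction ID] = '0000:000004d4';  -- hypothetical value
```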
Most of the examples I have seen with this function are from an outstanding PluralSight course with Paul Randal (Blog, Twitter) entitled “SQL Server: Logging, Recovery and the Transaction Log”. I’m using this blog to expand and practice based upon what I saw from that course.
To keep this as simple as possible, I’ll create a new database with Simple Recovery and ensure automatic statistics are not generated. These settings will reduce the amount of data written to the log file.
Listing 1: Create the test database
CREATE DATABASE LogTest;
GO
USE LogTest;
GO
ALTER DATABASE LogTest SET RECOVERY SIMPLE;
ALTER DATABASE LogTest SET AUTO_CREATE_STATISTICS OFF;
GO
Having created the database I now create a very basic table and insert two rows of data into it.
Listing 2: The test table and data
CREATE TABLE dbo.Test
(
    Column1 INT NULL,
    Column2 INT NULL,
    Column3 INT NULL
);

INSERT INTO dbo.Test (Column1, Column2, Column3)
VALUES (1, 1, 1),
       (2, 2, 2);
GO
At this point there is already an inordinate amount of data within the log file, which we won’t concern ourselves with. It is my intention to start with very simple examples and work from there, to whatever takes my interest.
So, execute a checkpoint in order to clear the log as much as possible. After that, execute the fn_dblog function with NULL parameters, to see the contents of the log file at this point. All we will see will be the checkpoint start and end logged within.
Listing 3: Checkpoint and call the function
CHECKPOINT;
GO
--Only the checkpoint is in there now
SELECT * FROM fn_dblog(NULL, NULL);
GO
Figure 1: Result from Listing 3
There is an enormous amount of information available from this function, and the easiest way to see the columns available is to expand the column selection within the query in Listing 3. As I use Redgate's SQL Prompt, it is an easy matter to show all column names instead of just '*'.
Listing 4: The full column list from fn_dblog
SELECT [Current LSN], Operation, Context, [Transaction ID], LogBlockGeneration, [Tag Bits],
       [Log Record Fixed Length], [Log Record Length], [Previous LSN], [Flag Bits], [Log Reserve],
       AllocUnitId, AllocUnitName, [Page ID], [Slot ID], [Previous Page LSN], PartitionId, RowFlags,
       [Num Elements], [Offset in Row], [Modify Size], [Checkpoint Begin], [CHKPT Begin DB Version],
       [Max XDESID], [Num Transactions], [Checkpoint End], [CHKPT End DB Version], [Minimum LSN],
       [Dirty Pages], [Oldest Replicated Begin LSN], [Next Replicated End LSN],
       [Last Distributed Backup End LSN], [Last Distributed End LSN], [Repl Min Hold LSN],
       [Server UID], SPID, [Beginlog Status], [Xact Type], [Begin Time], [Transaction Name],
       [Transaction SID], [Parent Transaction ID], [Oldest Active Transaction ID], [Xact ID],
       [Xact Node ID], [Xact Node Local ID], [End AGE], [End Time], [Transaction Begin],
       [Replicated Records], [Oldest Active LSN], [Server Name], [Database Name], [Mark Name],
       [Repl Partition ID], [Repl Epoch], [Repl CSN], [Repl Flags], [Repl Msg],
       [Repl Source Commit Time], [Master XDESID], [Master DBID], [Preplog Begin LSN],
       [Prepare Time], [Virtual Clock], [Previous Savepoint], [Savepoint Name],
       [Rowbits First Bit], [Rowbits Bit Count], [Rowbits Bit Value], [Number of Locks],
       [Lock Information], [LSN before writes], [Pages Written], [Command Type], [Publication ID],
       [Article ID], [Partial Status], Command, [Byte Offset], [New Value], [Old Value],
       [New Split Page], [Rows Deleted], [Bytes Freed], [CI Table Id], [CI Index Id],
       NewAllocUnitId, [FileGroup ID], [Meta Status], [File Status], [File ID], [Physical Name],
       [Logical Name], [Format LSN], RowsetId, TextPtr, [Column Offset], Flags, [Text Size],
       Offset, [Old Size], [New Size], Description, [Bulk allocated extent count], [Bulk RowsetId],
       [Bulk AllocUnitId], [Bulk allocation first IAM Page ID], [Bulk allocated extent ids],
       [VLFs added], [InvalidateCache Id], [InvalidateCache keys],
       [CopyVerionInfo Source Page Id], [CopyVerionInfo Source Page LSN],
       [CopyVerionInfo Source Slot Id], [CopyVerionInfo Source Slot Count],
       [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 2], [RowLog Contents 3],
       [RowLog Contents 4], [RowLog Contents 5], [Compression Log Type], [Compression Info],
       [PageFormat PageType], [PageFormat PageFlags], [PageFormat PageLevel],
       [PageFormat PageStat], [PageFormat FormatOption], [Log Record]
FROM fn_dblog(NULL, NULL);
GO
That is rather a lot of data, and I don't intend to learn what it all means; I'll concentrate on what I need (or want) to understand over time.
With this basic setup there is one very easy question to answer, one that I've seen asked several times.
Are SELECT queries logged?
At this point I have a simple table and an (almost) empty log file. If I execute fn_dblog to show the log file contents, execute a simple SELECT query and then execute fn_dblog again, it should show no change.
Listing 5: Simple SELECT with fn_dblog before and after
--Only the checkpoint is in there now
SELECT * FROM fn_dblog(NULL, NULL);
GO
SELECT * FROM dbo.Test;
GO
--Is a SELECT logged?
SELECT * FROM fn_dblog(NULL, NULL);
GO
Figure 2: Results from Listing 5
This makes sense, as the log file is required for transaction recovery and you don’t need to undo a SELECT statement, or repeat it when recovering/restoring a database.
So, a SELECT query writes nothing to the log file?
Well, the SELECT itself writes nothing to the log file. However, it may cause other entries to be written – if you want to be really pedantic (and what DBA doesn't want to be a pedant at least occasionally?).
Recall that in Listing 1 the option to auto-create statistics was switched off. If we switch that back on, run a CHECKPOINT again (because switching that option generates its own log entries) and then run a SELECT query with a WHERE clause, there are entries due to the creation of statistics.
Listing 6: SELECT/WHERE with auto-statistics on
ALTER DATABASE LogTest SET AUTO_CREATE_STATISTICS ON;
GO
CHECKPOINT;
GO
SELECT [Current LSN], Operation, Context, [Checkpoint Begin], [Checkpoint End], Description
FROM fn_dblog(NULL, NULL);
GO
SELECT * FROM dbo.Test WHERE Column1 = 1;
GO
SELECT [Current LSN], Operation, Context, [Checkpoint Begin], [Checkpoint End], Description
FROM fn_dblog(NULL, NULL);
GO
Figure 3: Results from Listing 6
So, the correct answer to “Does a SELECT write entries to the log file?” is – it depends.
Updating a column
In this example, one column is updated and the output from fn_dblog is examined.
The code below sets auto-create statistics off again (it was set on in Listing 6) and also sets NOCOUNT ON, to eliminate as much superfluous output as possible. A checkpoint clears the log file data, and then one column of one row is updated to a different value.
Listing 7: Update one column to a new value
ALTER DATABASE LogTest SET AUTO_CREATE_STATISTICS OFF;
GO
SET NOCOUNT ON;
GO
CHECKPOINT;
GO
UPDATE dbo.Test
SET Column1 = 9
WHERE Column1 = 1;
GO
SELECT [Current LSN], Operation, Context, [RowLog Contents 0], [RowLog Contents 1], Description
FROM fn_dblog(NULL, NULL);
GO
The output shows the value changed from 1 to 9, as part of a LOP_MODIFY_ROW.
It also shows that a transaction was wrapped around the update, even though a transaction was not specified in the code. This is SQL Server's autocommit behaviour: any statement that is not inside an explicit transaction gets its own transaction.
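As a quick check of that, the same sort of update can be wrapped in an explicit, named transaction; the [Transaction Name] column of the LOP_BEGIN_XACT record should then show the supplied name. A sketch of my own (not one of the numbered listings – the transaction name is arbitrary):

```sql
CHECKPOINT;
GO
BEGIN TRANSACTION MyUpdate;  -- explicit, named transaction
UPDATE dbo.Test
SET Column1 = 1
WHERE Column1 = 9;
COMMIT TRANSACTION;
GO
-- The LOP_BEGIN_XACT record should now show 'MyUpdate' in [Transaction Name]
SELECT [Current LSN], Operation, [Transaction ID], [Transaction Name]
FROM fn_dblog(NULL, NULL);
GO
```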
Figure 4: Result from Listing 7
While we’re here, this brings to mind another question that I have seen asked –
Does updating a column to the current value cause an update?
In other words, if I use an update statement and set the value of a column to the value it already has, does SQL Server actually apply an update?
So, execute a CHECKPOINT, check the log file, repeat the update from Listing 7 and then check the log file again, to see if anything has been recorded.
Listing 8: Set column1 to 9 (which it already is) and check the log file
CHECKPOINT;
GO
SELECT [Current LSN], Operation, Context, [RowLog Contents 0], [RowLog Contents 1]
FROM fn_dblog(NULL, NULL);
GO
UPDATE dbo.Test
SET Column1 = 9
WHERE Column1 = 9;
GO
SELECT [Current LSN], Operation, Context, [RowLog Contents 0], [RowLog Contents 1]
FROM fn_dblog(NULL, NULL);
GO
Figure 5: Results from Listing 8
So, while the begin and end of the transaction are still logged, SQL Server is clever enough not to write a modification record for an update that isn't actually changing anything.
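When eyeballing the full output gets tedious, a simple aggregate (my own addition, not one of the numbered listings) gives a quick summary of which operations are currently in the log:

```sql
-- Count log records per operation type and context
SELECT Operation, Context, COUNT(*) AS RecordCount
FROM fn_dblog(NULL, NULL)
GROUP BY Operation, Context
ORDER BY RecordCount DESC;
```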
Updating Multiple Columns
Where changes to multiple columns are within 16 bytes of each other and within the fixed-width part of the row, the details of the change will be recorded within one entry in the log file.
For the following example, I’m going to drop the existing test table and create another table with more columns.
Listing 9: Drop table and Create New Test Table
DROP TABLE dbo.Test;
GO
CREATE TABLE dbo.Test
(
    Column1 INT NULL,
    Column2 INT NULL,
    Column3 INT NULL,
    Column4 INT NULL,
    Column5 INT NULL,
    Column6 INT NULL
);

INSERT INTO dbo.Test (Column1, Column2, Column3, Column4, Column5, Column6)
VALUES (1, 1, 1, 1, 1, 1),
       (2, 2, 2, 2, 2, 2);
GO
In the following example, the first and third columns of the first row will be updated and then the contents of the log file examined.
Listing 10: Update two columns on row 1
CHECKPOINT;
GO
UPDATE dbo.Test
SET Column1 = 7,
    Column3 = 8
WHERE Column1 = 1;
GO
SELECT [Current LSN], Operation, Context,
       [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 3], [RowLog Contents 4]
FROM fn_dblog(NULL, NULL);
GO
The fn_dblog output shows the contents of the first three columns, even though Column2 was unaltered.
Figure 6: Results from Listing 10
The information gets a little more involved where the changes are not within 16 bytes of each other. Instead of LOP_MODIFY_ROW the log will use LOP_MODIFY_COLUMNS, which shows the information in a different way.
Listing 11: Modify two columns that are further apart
CHECKPOINT;
GO
UPDATE dbo.Test
SET Column1 = 7,
    Column6 = 8
WHERE Column1 = 2;
GO
SELECT [Current LSN], Operation,
       [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 2], [RowLog Contents 4]
FROM fn_dblog(NULL, NULL);
GO
The information from the log file is now slightly different.
[RowLog Contents 0] provides the before/after data offsets for the two affected columns.
[RowLog Contents 1] shows the lengths of the two columns.
[RowLog Contents 2] holds the keys for this row, which is empty for this example.
[RowLog Contents 3] is not shown here, as it contains lock information and is beyond the scope of this article.
[RowLog Contents 4] shows the before and after values, as an array. This shows one column changing from 2 to 7 and another changing from 2 to 8.
Figure 7: Results from Listing 11
In this article I really just skimmed the surface of what fn_dblog does, with very basic examples.
I have shown how changes to columns that are close together, or separated by a greater distance, are recorded (with offsets and arrays of values) as required. Hopefully a couple of the more peculiar questions regarding the log file have been put to rest now too.
I intend to expand on this with a series of articles, as my whimsy takes me.
4 thoughts on “Accessing the Log File with fn_dblog – Part 1, Basic Examples”
In the example “Does updating a column to the current value cause an update?” nothing gets updated because no row satisfied the WHERE clause.
Probably it was intended to execute the following query:
UPDATE dbo.Test
SET Column1 = 9
WHERE Column1 = 9;
The result in that case is that there are two log writes: LOP_BEGIN_XACT and LOP_COMMIT_XACT.
Thanks for blogging about this with easy to follow examples!
Yes, thank you for spotting that and letting me know. The example makes a lot more sense now.
Thanks a lot for samples and instructions.