
In SQL Server, is there a way to check if a selected group of rows are locked or not?






























We are attempting to update/delete a large number of records in a multi-billion row table. Since this is a popular table, there is a lot of activity in different sections of it. Any large update/delete is blocked for extended periods of time (as it waits to acquire row, page, or table locks), resulting in timeouts or in the task taking multiple days to complete.



So, we are changing the approach to delete small batches of rows at a time. But we want to check whether the selected rows (let's say 100, 1000, or 2000) are currently locked by a different process or not.




  • If not, then proceed with delete/update.

  • If they are locked, then move on to the next group of records.

  • At the end, come back to the beginning and attempt to update/delete the ones that were left out.


Is this doable?



Thanks,
ToC




























  • 2





    Have you looked into READPAST as part of the delete statement or NOWAIT (to fail the whole group)? One of these may work for you. msdn.microsoft.com/en-us/library/ms187373.aspx

    – Sean Gallardy
    May 21 '15 at 14:51











  • @SeanGallardy I have not considered that idea, but now I will. But is there an easier way to check if a particular row is locked or not? Thanks.

    – ToC
    May 21 '15 at 14:54






  • 3





    You might also look into LOCK_TIMEOUT (msdn.microsoft.com/en-us/library/ms189470.aspx). For example, this is the way that Adam Machanic's sp_whoisactive ensures that the procedure doesn't wait for too long if it is blocked when trying to gather an execution plan. You can set a short timeout or even use a value of 0 ("0 means to not wait at all and return a message as soon as a lock is encountered.") You can combine this with a TRY/CATCH to catch error 1222 ("Lock request time out period exceeded") and proceed to the next batch.

    – Geoff Patterson
    May 21 '15 at 15:47











  • @gpatterson Interesting approach. I'll try this too.

    – ToC
    May 21 '15 at 16:30






  • 2





    To answer, no there is not an easier way to see if the rows are locked unless there is something specifically done in the application. Basically you could first do a select with HOLDLOCK and XLOCK with a lock_timeout set (which is what NOWAIT in my original comment is about, setting the timeout to 0). If you don't get it, then you know something is locked. There is nothing easily available to say "Is row X in Table Y using Index Z locked by something". We can see if the table has locks or any pages/rows/keys/etc have locks, but translating that into specific rows in a query wouldn't be easy.

    – Sean Gallardy
    May 21 '15 at 17:21
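    The probe described in the last comment can be sketched as follows. This is a hedged illustration, not a tested routine: the table name dbo.BigTable, the key column KeyCol, and the batch boundaries are hypothetical placeholders for your schema.

    ```sql
    -- Try to take exclusive locks on the candidate rows without waiting.
    -- If any row is already locked elsewhere, error 1222 fires immediately
    -- and we skip this batch instead of blocking.
    DECLARE @BatchStart BIGINT = 1, @BatchEnd BIGINT = 1000, @probe INT;

    SET LOCK_TIMEOUT 0;        -- 0 = do not wait at all

    BEGIN TRY
        BEGIN TRANSACTION;

        SELECT @probe = COUNT(*)
        FROM dbo.BigTable WITH (HOLDLOCK, XLOCK)   -- hypothetical table
        WHERE KeyCol BETWEEN @BatchStart AND @BatchEnd;

        -- No error: we now hold exclusive locks, so the delete cannot block.
        DELETE FROM dbo.BigTable
        WHERE KeyCol BETWEEN @BatchStart AND @BatchEnd;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() <> 1222
            THROW;             -- 1222 = lock request timed out: skip this batch
    END CATCH;

    SET LOCK_TIMEOUT -1;       -- restore the default (wait indefinitely)
    ```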
















sql-server locking blocking
















asked May 21 '15 at 14:29









ToC


















6 Answers

































So, we are changing the approach to delete small batches of rows at a time.




Deleting in small, careful batches or chunks is a really good idea. Between batches, I would add a small waitfor delay '00:00:05' and, depending on the recovery model of the database, either take a log backup (FULL) or run a manual CHECKPOINT (SIMPLE) to avoid bloating the transaction log.




But we want to check if the selected (let's say 100 or 1000 or 2000 rows) are currently locked by a different process or not.




What you are describing is not entirely possible out of the box (keeping in mind your 3 bullet points). If the above suggestion - small batches + waitfor delay - does not work (provided you do proper testing), then you can use a query HINT.



Do not use NOLOCK - see kb/308886, SQL Server Read-Consistency Problems by Itzik Ben-Gan, Putting NOLOCK everywhere - By Aaron Bertrand and SQL Server NOLOCK Hint & other poor ideas.



The READPAST hint will help in your scenario. The gist of READPAST is: if a row holds a row-level lock, SQL Server simply won't read it.




Specifies that the Database Engine not read rows that are locked by other transactions. When READPAST is specified, row-level locks are skipped. That is, the Database Engine skips past the rows instead of blocking the current transaction until the locks are released.




During my limited testing, I found really good throughput when using DELETE from schema.tableName with (READPAST, READCOMMITTEDLOCK) and setting the session isolation level to READ COMMITTED via SET TRANSACTION ISOLATION LEVEL READ COMMITTED (the default isolation level anyway).
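As a rough sketch of that pattern (hedged: dbo.BigTable, the filter, the batch size, and the delay are hypothetical placeholders that should be tuned for your workload):

```sql
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- the default, stated explicitly

DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    -- READPAST skips rows on which other transactions hold row locks;
    -- READCOMMITTEDLOCK forces locking read-committed behavior even if
    -- the database has READ_COMMITTED_SNAPSHOT enabled.
    DELETE TOP (1000)
    FROM dbo.BigTable WITH (READPAST, READCOMMITTEDLOCK)
    WHERE SomeFilter = 1;

    SET @rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:05';   -- breathing room between batches
END;
-- Rows skipped because they were locked remain behind; a later pass
-- must come back for them, as the question proposes.
```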
















































    If I understand the request correctly, the goal is to delete batches of rows, while at the same time, DML operations are occurring on rows throughout the table. The goal is to delete a batch; however, if any underlying rows contained within the range defined by said batch are locked, then we must skip that batch and move to the next batch. We must then return to any batches that were not previously deleted and retry our original delete logic. We must repeat this cycle until all required batches of rows are deleted.



    As has been mentioned, it is reasonable to use a READPAST hint and the READ COMMITTED (default) isolation level, in order to skip past ranges that may contain blocked rows. I will go a step further and recommend using the SERIALIZABLE isolation level and nibbling deletes.



    SQL Server uses Key-Range locks to protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level...find more here:
    https://technet.microsoft.com/en-US/library/ms191272(v=SQL.105).aspx



    With nibbling deletes, our goal is to isolate a range of rows and ensure that no changes will occur to those rows while we are deleting them, that is to say, we do not want phantom reads or insertions. The serializable isolation level is meant to solve this problem.



    Before I demonstrate my solution, I would like to add that neither am I recommending switching your database's default isolation level to SERIALIZABLE nor am I recommending that my solution is the best. I merely wish to present it and see where we can go from here.



    A few house-keeping notes:




    1. The SQL Server version that I am using is Microsoft SQL Server 2012 - 11.0.5343.0 (X64)

    2. My test database is using the FULL recovery model


    To begin my experiment, I will set up a test database, a sample table, and I will fill the table with 2,000,000 rows.




    USE [master];
    GO

    SET NOCOUNT ON;

    IF DATABASEPROPERTYEX (N'test', N'Version') > 0
    BEGIN
        ALTER DATABASE [test] SET SINGLE_USER
            WITH ROLLBACK IMMEDIATE;
        DROP DATABASE [test];
    END
    GO

    -- Create the test database
    CREATE DATABASE [test];
    GO

    -- Set the recovery model to FULL
    ALTER DATABASE [test] SET RECOVERY FULL;

    -- Create a FULL database backup
    -- in order to ensure we are in fact using
    -- the FULL recovery model
    -- I pipe it to dev null for simplicity
    BACKUP DATABASE [test]
    TO DISK = N'nul';
    GO

    USE [test];
    GO

    -- Create our table
    IF OBJECT_ID('dbo.tbl','U') IS NOT NULL
    BEGIN
        DROP TABLE dbo.tbl;
    END;

    CREATE TABLE dbo.tbl
    (
        c1 BIGINT IDENTITY (1,1) NOT NULL
      , c2 INT NOT NULL
    ) ON [PRIMARY];
    GO

    -- Insert 2,000,000 rows
    INSERT INTO dbo.tbl
    SELECT TOP 2000
        number
    FROM
        master..spt_values
    ORDER BY
        number;
    GO 1000


    At this point, we will need one or more indexes upon which the locking mechanisms of the SERIALIZABLE isolation level can act.




    -- Add a clustered index
    CREATE UNIQUE CLUSTERED INDEX CIX_tbl_c1
    ON dbo.tbl (c1);
    GO

    -- Add a non-clustered index
    CREATE NONCLUSTERED INDEX IX_tbl_c2
    ON dbo.tbl (c2);
    GO


    Now, let us check to see that our 2,000,000 rows were created




    SELECT
    COUNT(*)
    FROM
    tbl;


    Row Count Result



    So, we have our database, table, indexes, and rows. So, let us set up the experiment for nibbling deletes. First, we must decide how best to create a typical nibbling delete mechanism.




    DECLARE
        @BatchSize INT = 100
      , @LowestValue BIGINT = 20000
      , @HighestValue BIGINT = 20010
      , @DeletedRowsCount BIGINT = 0
      , @RowCount BIGINT = 1;

    SET NOCOUNT ON;

    WHILE @DeletedRowsCount < ( @HighestValue - @LowestValue )
    BEGIN

        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        BEGIN TRANSACTION;

        DELETE FROM
            dbo.tbl
        WHERE
            c1 IN (
                SELECT TOP (@BatchSize)
                    c1
                FROM
                    dbo.tbl
                WHERE
                    c1 BETWEEN @LowestValue AND @HighestValue
                ORDER BY
                    c1
            );

        SET @RowCount = ROWCOUNT_BIG();

        COMMIT TRANSACTION;

        SET @DeletedRowsCount += @RowCount;
        WAITFOR DELAY '000:00:00.025';
        CHECKPOINT;

    END;


    As you can see, I placed the explicit transaction inside the while loop. If you would like to limit log flushes, then feel free to place it outside the loop. Furthermore, since we are in the FULL recovery model, you may wish to create transaction log backups more often while running your nibbling delete operations, in order to ensure that your transaction log can be prevented from growing outrageously.



    So, I have a couple goals with this setup. First, I want my key-range locks; so, I try to keep the batches as small as possible. I also do not want to impact negatively the concurrency on my "gigantic" table; so, I want to take my locks and leave them as fast as I can. So, I recommend that you make your batch sizes small.



    Now, I want to provide a very short example of this deletion routine in action. We must open a new window within SSMS and delete one row from our table. I will do this within an implicit transaction using the default READ COMMITTED isolation level.




    DELETE FROM
    dbo.tbl
    WHERE
    c1 = 20005;


    Was this row actually deleted?




    SELECT
    c1
    FROM
    dbo.tbl
    WHERE
    c1 BETWEEN 20000 AND 20010;


    Yes, it was deleted.



    Proof of Deleted Row



    Now, in order to see our locks, let us open a new window within SSMS and add a code snippet or two. I am using Adam Machanic's sp_whoisactive, which can be found here: sp_whoisactive




    SELECT
    DB_NAME(resource_database_id) AS DatabaseName
    , resource_type
    , request_mode
    FROM
    sys.dm_tran_locks
    WHERE
    DB_NAME(resource_database_id) = 'test'
    AND resource_type = 'KEY'
    ORDER BY
    request_mode;

    -- Our insert
    sp_lock 55;

    -- Our deletions
    sp_lock 52;

    -- Our active sessions
    sp_whoisactive;



    Now, we are ready to begin. In a new SSMS window, let us begin an explicit transaction that will attempt to re-insert the one row that we deleted. At the same time, we will fire off our nibbling delete operation.



    The insert code:




    BEGIN TRANSACTION

    SET IDENTITY_INSERT dbo.tbl ON;

    INSERT INTO dbo.tbl
    ( c1 , c2 )
    VALUES
    ( 20005 , 1 );

    SET IDENTITY_INSERT dbo.tbl OFF;

    --COMMIT TRANSACTION;


    Let us kick off both operations beginning with the insert and followed by our deletes. We can see the key-range locks and exclusive locks.



    Range and eXclusive Locks



    The insert generated these locks:



    Insert's Locks



    The nibbling delete/select is holding these locks:



    Nibbling Delete's Locks



    Our insert is blocking our delete as expected:



    Insert Blocks Delete



    Now, let us commit the insert transaction and see what is up.



    Commit the Delete



    And as expected, all transactions complete. Now, we must check to see whether the insert was a phantom or whether the delete operation removed it as well.




    SELECT
    c1
    FROM
    dbo.tbl
    WHERE
    c1 BETWEEN 20000 AND 20015;


    In fact, the insert was deleted; so, no phantom insert was allowed.



    No Phantom Insert



    So, in conclusion, I think the true intention of this exercise is not to try and track every single row, page, or table-level lock and try to determine whether an element of a batch is locked and would therefore require our delete operation to wait. That may have been the intent of the questioners; however, that task is herculean and basically impractical if not impossible. The real goal is to ensure that no unwanted phenomena arise once we have isolated the range of our batch with locks of our own and then proceed to delete the batch. The SERIALIZABLE isolation level achieves this objective. The key is to keep your nibbles small, your transaction log under control, and eliminate unwanted phenomena.



    If you want speed, then don't build gigantically deep tables that cannot be partitioned and therefore cannot use partition switching for the fastest results. The key to speed is partitioning and parallelism; the key to suffering is nibbles and live-locking.
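    For reference, a minimal hedged sketch of the partition switching mentioned above (the table and staging names are hypothetical; SWITCH requires the staging table to be empty, to live on the same filegroup, and to have an identical schema and compatible check constraints):

    ```sql
    -- Hypothetical example: move partition 3 of a partitioned table into an
    -- empty staging table, then truncate the staging table. The SWITCH
    -- itself is a metadata-only operation, so it completes almost
    -- instantly regardless of how many rows the partition holds.
    ALTER TABLE dbo.BigTable
        SWITCH PARTITION 3 TO dbo.BigTable_Staging;

    TRUNCATE TABLE dbo.BigTable_Staging;
    ```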



    Please let me know what you think.



    I created some further examples of the SERIALIZABLE isolation level in action. They should be available at the links below.



    Delete Operation



    Insert Operation



    Equality Operations - Key-Range Locks on Next Key Values



    Equality Operations - Singleton Fetch of Existent Data



    Equality Operations - Singleton Fetch of Nonexistent Data



    Inequality Operations - Key-Range Locks on Range and Next Key Values





















































      Summarizing other approaches originally offered in comments to the question.







      1. Use NOWAIT if the desired behaviour is to fail the whole chunk as soon as an incompatible lock is encountered.



        From the NOWAIT documentation:




        Instructs the Database Engine to return a message as soon as a lock is encountered on the table. NOWAIT is equivalent to specifying SET LOCK_TIMEOUT 0 for a specific table. The NOWAIT hint does not work when the TABLOCK hint is also included. To terminate a query without waiting when using the TABLOCK hint, preface the query with SET LOCK_TIMEOUT 0; instead.





      2. Use SET LOCK_TIMEOUT to achieve a similar outcome, but with a configurable timeout:



        From the SET LOCK_TIMEOUT documentation




        Specifies the number of milliseconds a statement waits for a lock to be released.



        When a wait for a lock exceeds the timeout value, an error is returned. A value of 0 means to not wait at all and return a message as soon as a lock is encountered.
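      The two options can be combined into the skip-and-retry loop the question asks for. A hedged sketch (the table and filter are hypothetical placeholders; error 1222 is "Lock request time out period exceeded"):

      ```sql
      SET LOCK_TIMEOUT 0;   -- fail immediately instead of waiting on any lock

      BEGIN TRY
          DELETE TOP (1000)
          FROM dbo.BigTable          -- hypothetical table
          WHERE SomeFilter = 1;
      END TRY
      BEGIN CATCH
          IF ERROR_NUMBER() = 1222
              PRINT 'Batch is locked; move on to the next batch.';
          ELSE
              THROW;                 -- re-raise anything unexpected
      END CATCH;

      SET LOCK_TIMEOUT -1;  -- restore the default (wait indefinitely)
      ```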
























































        Assume we have two parallel queries:

        connect/session 1 will lock the row with id = 777:

        SELECT * FROM your_table WITH(UPDLOCK,READPAST) WHERE id = 777

        connect/session 2 will skip the locked row with id = 777:

        SELECT * FROM your_table WITH(UPDLOCK,READPAST) WHERE id = 777

        Alternatively, connect/session 2 can throw an exception:

        DECLARE @id integer;
        SELECT @id = id FROM your_table WITH(UPDLOCK,READPAST) WHERE id = 777;
        IF @id IS NULL
            THROW 51000, 'Hi, a record is locked or does not exist.', 1;
































          -1














          Try filtering on something like this - it can get complicated if you want to get really, really specific. Look in BOL for the description of sys.dm_tran_locks



          SELECT
              tl.request_session_id,
              tl.resource_type,
              tl.resource_associated_entity_id,
              db_name(tl.resource_database_id) 'Database',
              CASE
                  WHEN tl.resource_type = 'object'
                  THEN object_name(tl.resource_associated_entity_id, tl.resource_database_id)
                  ELSE NULL
              END 'LockedObject',
              tl.resource_database_id,
              tl.resource_description,
              tl.request_mode,
              tl.request_type,
              tl.request_status
          FROM
              [sys].[dm_tran_locks] tl
          WHERE
              tl.resource_database_id <> 2  -- exclude tempdb
          ORDER BY
              tl.request_session_id;































          • just curious - why the downvote?

            – rottengeek
            Sep 11 '15 at 20:17



















          -10














          You can use NOLOCK while you are deleting, and if the rows are locked they won't be deleted. It's not ideal, but it may do the trick for you.



          DELETE TA FROM dbo.TableA TA WITH (NOLOCK) WHERE Condition = True
























          • 7





            If I try that on my local machine I get Msg 1065, Level 15, State 1, Line 15 The NOLOCK and READUNCOMMITTED lock hints are not allowed for target tables of INSERT, UPDATE, DELETE or MERGE statements., deprecated since 2005

            – Tom V
            Aug 25 '15 at 13:33














          6 Answers















          9















          So, we are changing the approach to delete a small batch of rows at a time.




          Deleting in small, careful batches or chunks is a really good idea. Between batches, I would add a small waitfor delay '00:00:05' and, depending on the recovery model of the database, take a log backup (if FULL) or issue a manual CHECKPOINT (if SIMPLE) to keep the transaction log from bloating.




          But we want to check if the selected (let's say 100 or 1000 or 2000 rows) are currently locked by a different process or not.




          What you are asking for is not entirely possible out of the box (keeping in mind your 3 bullet points). If the above suggestion of small batches plus a waitfor delay does not work (provided you do proper testing), then you can use a query HINT.



          Do not use NOLOCK - see kb/308886, SQL Server Read-Consistency Problems by Itzik Ben-Gan, Putting NOLOCK everywhere by Aaron Bertrand, and SQL Server NOLOCK Hint & other poor ideas.



          The READPAST hint will help in your scenario. The gist of READPAST is that if a row is locked at the row level, SQL Server won't read it:




          Specifies that the Database Engine not read rows that are locked by other transactions. When READPAST is specified, row-level locks are skipped. That is, the Database Engine skips past the rows instead of blocking the current transaction until the locks are released.




          During my limited testing, I found really good throughput using DELETE FROM schema.tableName WITH (READPAST, READCOMMITTEDLOCK) with the session isolation level set to READ COMMITTED via SET TRANSACTION ISOLATION LEVEL READ COMMITTED, which is the default isolation level anyway.
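Putting those pieces together, here is a sketch of such a batched delete; the table name, predicate, and batch size are hypothetical placeholders, not from the question:

```sql
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- the default, stated explicitly

DECLARE @BatchSize INT = 1000;

WHILE 1 = 1
BEGIN
    -- READPAST skips rows locked by other sessions instead of blocking;
    -- READCOMMITTEDLOCK forces locking read-committed semantics even
    -- if the database has READ_COMMITTED_SNAPSHOT enabled.
    DELETE TOP (@BatchSize)
    FROM dbo.tableName WITH (READPAST, READCOMMITTEDLOCK)
    WHERE SomeCondition = 1;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left, or everything remaining is locked

    WAITFOR DELAY '00:00:05';  -- breathing room between batches
END;
```

Note the caveat in that BREAK condition: because READPAST silently skips locked rows, a zero rowcount does not prove all qualifying rows were deleted, so the routine may need to be re-run once the competing transactions have finished.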












































              edited Aug 28 '15 at 17:57

























              answered Aug 25 '15 at 17:24 by Kin

























                  9





                  +150









                  If I understand the request correctly, the goal is to delete batches of rows while DML operations are occurring on rows throughout the table. If any rows within the range defined by a batch are locked, we must skip that batch and move on to the next one, later returning to any skipped batches and retrying the original delete logic, repeating this cycle until all required batches of rows are deleted.



                  As has been mentioned, it is reasonable to use a READPAST hint and the READ COMMITTED (default) isolation level, in order to skip past ranges that may contain blocked rows. I will go a step further and recommend using the SERIALIZABLE isolation level and nibbling deletes.



                  SQL Server uses Key-Range locks to protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level...find more here:
                  https://technet.microsoft.com/en-US/library/ms191272(v=SQL.105).aspx



                  With nibbling deletes, our goal is to isolate a range of rows and ensure that no changes will occur to those rows while we are deleting them, that is to say, we do not want phantom reads or insertions. The serializable isolation level is meant to solve this problem.



                  Before I demonstrate my solution, I would like to add that neither am I recommending switching your database's default isolation level to SERIALIZABLE nor am I recommending that my solution is the best. I merely wish to present it and see where we can go from here.



                  A few house-keeping notes:




                  1. The SQL Server version that I am using is Microsoft SQL Server 2012 - 11.0.5343.0 (X64)

                  2. My test database is using the FULL recovery model


                  To begin my experiment, I will set up a test database, a sample table, and I will fill the table with 2,000,000 rows.




                  USE [master];
                  GO

                  SET NOCOUNT ON;

                  IF DATABASEPROPERTYEX (N'test', N'Version') > 0
                  BEGIN
                  ALTER DATABASE [test] SET SINGLE_USER
                  WITH ROLLBACK IMMEDIATE;
                  DROP DATABASE [test];
                  END
                  GO

                  -- Create the test database
                  CREATE DATABASE [test];
                  GO

                  -- Set the recovery model to FULL
                  ALTER DATABASE [test] SET RECOVERY FULL;

                  -- Create a FULL database backup
                  -- in order to ensure we are in fact using
                  -- the FULL recovery model
                  -- I pipe it to dev null for simplicity
                  BACKUP DATABASE [test]
                  TO DISK = N'nul';
                  GO

                  USE [test];
                  GO

                  -- Create our table
                  IF OBJECT_ID('dbo.tbl','U') IS NOT NULL
                  BEGIN
                  DROP TABLE dbo.tbl;
                  END;
                  CREATE TABLE dbo.tbl
                  (
                  c1 BIGINT IDENTITY (1,1) NOT NULL
                  , c2 INT NOT NULL
                  ) ON [PRIMARY];
                  GO

                  -- Insert 2,000,000 rows
                  INSERT INTO dbo.tbl
                  SELECT TOP 2000
                  number
                  FROM
                  master..spt_values
                  ORDER BY
                  number
                  GO 1000


                  At this point, we will need one or more indexes upon which the locking mechanisms of the SERIALIZABLE isolation level can act.




                  -- Add a clustered index
                  CREATE UNIQUE CLUSTERED INDEX CIX_tbl_c1
                  ON dbo.tbl (c1);
                  GO

                  -- Add a non-clustered index
                  CREATE NONCLUSTERED INDEX IX_tbl_c2
                  ON dbo.tbl (c2);
                  GO


                  Now, let us check to see that our 2,000,000 rows were created




                  SELECT
                  COUNT(*)
                  FROM
                  tbl;


                  [screenshot: COUNT(*) returns 2000000]



                  So, we have our database, table, indexes, and rows. So, let us set up the experiment for nibbling deletes. First, we must decide how best to create a typical nibbling delete mechanism.




                  DECLARE
                  @BatchSize INT = 100
                  , @LowestValue BIGINT = 20000
                  , @HighestValue BIGINT = 20010
                  , @DeletedRowsCount BIGINT = 0
                  , @RowCount BIGINT = 1;

                  SET NOCOUNT ON;
                  -- note: no GO here, or the variables declared above would go out of scope

                  WHILE @DeletedRowsCount < ( @HighestValue - @LowestValue )
                  BEGIN

                  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
                  BEGIN TRANSACTION

                  DELETE
                  FROM
                  dbo.tbl
                  WHERE
                  c1 IN (
                  SELECT TOP (@BatchSize)
                  c1
                  FROM
                  dbo.tbl
                  WHERE
                  c1 BETWEEN @LowestValue AND @HighestValue
                  ORDER BY
                  c1
                  );

                  SET @RowCount = ROWCOUNT_BIG();

                  COMMIT TRANSACTION;

                  SET @DeletedRowsCount += @RowCount;
                  WAITFOR DELAY '000:00:00.025';
                  CHECKPOINT;

                  END;


                  As you can see, I placed the explicit transaction inside the while loop. If you would like to limit log flushes, then feel free to place it outside the loop. Furthermore, since we are in the FULL recovery model, you may wish to create transaction log backups more often while running your nibbling delete operations, in order to ensure that your transaction log can be prevented from growing outrageously.



                  So, I have a couple goals with this setup. First, I want my key-range locks; so, I try to keep the batches as small as possible. I also do not want to impact negatively the concurrency on my "gigantic" table; so, I want to take my locks and leave them as fast as I can. So, I recommend that you make your batch sizes small.



                  Now, I want to provide a very short example of this deletion routine in action. We must open a new window within SSMS and delete one row from our table. I will do this within an implicit transaction using the default READ COMMITTED isolation level.




                  DELETE FROM
                  dbo.tbl
                  WHERE
                  c1 = 20005;


                  Was this row actually deleted?




                  SELECT
                  c1
                  FROM
                  dbo.tbl
                  WHERE
                  c1 BETWEEN 20000 AND 20010;


                  Yes, it was deleted.



                  Proof of Deleted Row



                  Now, in order to see our locks, let us open a new window within SSMS and add a code snippet or two. I am using Adam Machanic's sp_whoisactive, which can be found here: sp_whoisactive




                  SELECT
                  DB_NAME(resource_database_id) AS DatabaseName
                  , resource_type
                  , request_mode
                  FROM
                  sys.dm_tran_locks
                  WHERE
                  DB_NAME(resource_database_id) = 'test'
                  AND resource_type = 'KEY'
                  ORDER BY
                  request_mode;

                  -- Our insert
                  sp_lock 55;

                  -- Our deletions
                  sp_lock 52;

                  -- Our active sessions
                  sp_whoisactive;



                  Now, we are ready to begin. In a new SSMS window, let us begin an explicit transaction that will attempt to re-insert the one row that we deleted. At the same time, we will fire off our nibbling delete operation.



                  The insert code:




                  BEGIN TRANSACTION

                  SET IDENTITY_INSERT dbo.tbl ON;

                  INSERT INTO dbo.tbl
                  ( c1 , c2 )
                  VALUES
                  ( 20005 , 1 );

                  SET IDENTITY_INSERT dbo.tbl OFF;

                  --COMMIT TRANSACTION;


                  Let us kick off both operations beginning with the insert and followed by our deletes. We can see the key-range locks and exclusive locks.



                  Range and eXclusive Locks



                  The insert generated these locks:



                  Insert's Locks



                  The nibbling delete/select is holding these locks:



                  [screenshot: locks held by the nibbling delete/select]



                  Our insert is blocking our delete as expected:



                  Insert Blocks Delete



                  Now, let us commit the insert transaction and see what is up.



                  Commit the Delete



                  And as expected, all transactions complete. Now, we must check to see whether the insert was a phantom or whether the delete operation removed it as well.




                  SELECT
                  c1
                  FROM
                  dbo.tbl
                  WHERE
                  c1 BETWEEN 20000 AND 20015;


                  In fact, the insert was deleted; so, no phantom insert was allowed.



                  No Phantom Insert



                  So, in conclusion, I think the true intention of this exercise is not to track every single row, page, or table-level lock and determine whether an element of a batch is locked and would therefore require our delete operation to wait. That may have been the intent of the questioners; however, that task is herculean and basically impractical if not impossible. The real goal is to ensure that no unwanted phenomena arise once we have isolated the range of our batch with locks of our own and then proceed to delete the batch. The SERIALIZABLE isolation level achieves this objective. The key is to keep your nibbles small, your transaction log under control, and eliminate unwanted phenomena.



                  If you want speed, then don't build gigantically deep tables that cannot be partitioned, leaving yourself unable to use partition switching for the fastest results. The key to speed is partitioning and parallelism; the key to suffering is nibbles and live-locking.



                  Please let me know what you think.



                  I created some further examples of the SERIALIZABLE isolation level in action. They should be available at the links below.



                  Delete Operation



                  Insert Operation



                  Equality Operations - Key-Range Locks on Next Key Values



                  Equality Operations - Singleton Fetch of Existent Data



                  Equality Operations - Singleton Fetch of Nonexistent Data



                  Inequality Operations - Key-Range Locks on Range and Next Key Values




































                    9





                    +150









                    If I understand the request correctly, the goal is to delete batches of rows, while at the same time, DML operations are occurring on rows throughout the table. The goal is to delete a batch; however, if any underlying rows contained within the range defined by said batch are locked, then we must skip that batch and move to the next batch. We must then return to any batches that were not previously deleted and retry our original delete logic. We must repeat this cycle until all required batches of rows are deleted.



                    As has been mentioned, it is reasonable to use a READPAST hint and the READ COMMITTED (default) isolation level, in order to skip past ranges that may contain blocked rows. I will go a step further and recommend using the SERIALIZABLE isolation level and nibbling deletes.



                    SQL Server uses Key-Range locks to protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level...find more here:
                    https://technet.microsoft.com/en-US/library/ms191272(v=SQL.105).aspx



                    With nibbling deletes, our goal is to isolate a range of rows and ensure that no changes will occur to those rows while we are deleting them, that is to say, we do not want phantom reads or insertions. The serializable isolation level is meant to solve this problem.



                    Before I demonstrate my solution, I would like to add that neither am I recommending switching your database's default isolation level to SERIALIZABLE nor am I recommending that my solution is the best. I merely wish to present it and see where we can go from here.



                    A few house-keeping notes:




                    1. The SQL Server version that I am using is Microsoft SQL Server 2012 - 11.0.5343.0 (X64)

                    2. My test database is using the FULL recovery model


                    To begin my experiment, I will set up a test database, a sample table, and I will fill the table with 2,000,000 rows.




                    USE [master];
                    GO

                    SET NOCOUNT ON;

                    IF DATABASEPROPERTYEX (N'test', N'Version') > 0
                    BEGIN
                    ALTER DATABASE [test] SET SINGLE_USER
                    WITH ROLLBACK IMMEDIATE;
                    DROP DATABASE [test];
                    END
                    GO

                    -- Create the test database
                    CREATE DATABASE [test];
                    GO

                    -- Set the recovery model to FULL
                    ALTER DATABASE [test] SET RECOVERY FULL;

                    -- Create a FULL database backup
                    -- in order to ensure we are in fact using
                    -- the FULL recovery model
                    -- I pipe it to dev null for simplicity
                    BACKUP DATABASE [test]
                    TO DISK = N'nul';
                    GO

                    USE [test];
                    GO

                    -- Create our table
                    IF OBJECT_ID('dbo.tbl','U') IS NOT NULL
                    BEGIN
                    DROP TABLE dbo.tbl;
                    END;
                    CREATE TABLE dbo.tbl
                    (
                    c1 BIGINT IDENTITY (1,1) NOT NULL
                    , c2 INT NOT NULL
                    ) ON [PRIMARY];
                    GO

                    -- Insert 2,000,000 rows
                    INSERT INTO dbo.tbl
                    SELECT TOP 2000
                    number
                    FROM
                    master..spt_values
                    ORDER BY
                    number
                    GO 1000


                    At this point, we will need one or more indexes upon which the locking mechanisms of the SERIALIZABLE isolation level can act.




                    -- Add a clustered index
                    CREATE UNIQUE CLUSTERED INDEX CIX_tbl_c1
                    ON dbo.tbl (c1);
                    GO

                    -- Add a non-clustered index
                    CREATE NONCLUSTERED INDEX IX_tbl_c2
                    ON dbo.tbl (c2);
                    GO


                    Now, let us check to see that our 2,000,000 rows were created




                    SELECT
                    COUNT(*)
                    FROM
                    tbl;


                    enter image description here



                    So, we have our database, table, indexes, and rows. So, let us set up the experiment for nibbling deletes. First, we must decide how best to create a typical nibbling delete mechanism.




                    DECLARE
                    @BatchSize INT = 100
                    , @LowestValue BIGINT = 20000
                    , @HighestValue BIGINT = 20010
                    , @DeletedRowsCount BIGINT = 0
                    , @RowCount BIGINT = 1;

                    SET NOCOUNT ON;
                    GO

                    WHILE @DeletedRowsCount &lt ( @HighestValue - @LowestValue )
                    BEGIN

                    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
                    BEGIN TRANSACTION

                    DELETE
                    FROM
                    dbo.tbl
                    WHERE
                    c1 IN (
                    SELECT TOP (@BatchSize)
                    c1
                    FROM
                    dbo.tbl
                    WHERE
                    c1 BETWEEN @LowestValue AND @HighestValue
                    ORDER BY
                    c1
                    );

                    SET @RowCount = ROWCOUNT_BIG();

                    COMMIT TRANSACTION;

                    SET @DeletedRowsCount += @RowCount;
                    WAITFOR DELAY '000:00:00.025';
                    CHECKPOINT;

                    END;


                    As you can see, I placed the explicit transaction inside the while loop. If you would like to limit log flushes, then feel free to place it outside the loop. Furthermore, since we are in the FULL recovery model, you may wish to create transaction log backups more often while running your nibbling delete operations, in order to ensure that your transaction log can be prevented from growing outrageously.



                    So, I have a couple goals with this setup. First, I want my key-range locks; so, I try to keep the batches as small as possible. I also do not want to impact negatively the concurrency on my "gigantic" table; so, I want to take my locks and leave them as fast as I can. So, I recommend that you make your batch sizes small.



                    Now, I want to provide a very short example of this deletion routine in action. We must open a new window within SSMS and delete one row from our table. I will do this within an implicit transaction using the default READ COMMITTED isolation level.




                    DELETE FROM
                    dbo.tbl
                    WHERE
                    c1 = 20005;


                    Was this row actually deleted?




                    SELECT
                    c1
                    FROM
                    dbo.tbl
                    WHERE
                    c1 BETWEEN 20000 AND 20010;


                    Yes, it was deleted.



                    Proof of Deleted Row



                    Now, in order to see our locks, let us open a new window within SSMS and add a code snippet or two. I am using Adam Mechanic's sp_whoisactive, which can be found here: sp_whoisactive




                    SELECT
                    DB_NAME(resource_database_id) AS DatabaseName
                    , resource_type
                    , request_mode
                    FROM
                    sys.dm_tran_locks
                    WHERE
                    DB_NAME(resource_database_id) = 'test'
                    AND resource_type = 'KEY'
                    ORDER BY
                    request_mode;

                    -- Our insert
                    sp_lock 55;

                    -- Our deletions
                    sp_lock 52;

                    -- Our active sessions
                    sp_whoisactive;



                    Now, we are ready to begin. In a new SSMS window, let us begin an explicit transaction that will attempt to re-insert the one row that we deleted. At the same time, we will fire off our nibbling delete operation.



                    The insert code:




                    BEGIN TRANSACTION

                    SET IDENTITY_INSERT dbo.tbl ON;

                    INSERT INTO dbo.tbl
                    ( c1 , c2 )
                    VALUES
                    ( 20005 , 1 );

                    SET IDENTITY_INSERT dbo.tbl OFF;

                    --COMMIT TRANSACTION;


                    Let us kick off both operations beginning with the insert and followed by our deletes. We can see the key-range locks and exclusive locks.



                    Range and eXclusive Locks



                    The insert generated these locks:



                    Insert's Locks



                    The nibbling delete/select is holding these locks:




If I understand the request correctly, the goal is to delete batches of rows while DML operations are occurring on rows throughout the table. If any rows within the range defined by a given batch are locked, we must skip that batch and move on to the next one, then return to any batches that were not deleted and retry the original delete logic, repeating this cycle until all required batches of rows are deleted.



As has been mentioned, it is reasonable to use a READPAST hint with the default READ COMMITTED isolation level in order to skip past ranges that may contain blocked rows. I will go a step further and recommend using the SERIALIZABLE isolation level and nibbling deletes.
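As a sketch of that READPAST alternative (hypothetical batch size and key range, not part of the demo below), a chunked delete that simply skips locked rows could look like this; because skipped rows are not deleted, the loop must be re-run later to pick up blocked batches:

DECLARE @RowCount INT = 1;

WHILE @RowCount > 0
BEGIN
    -- READPAST skips rows locked by other sessions instead of blocking on them
    DELETE TOP (100)
    FROM dbo.tbl WITH (READPAST, ROWLOCK)
    WHERE c1 BETWEEN 20000 AND 20010;

    SET @RowCount = @@ROWCOUNT;
END;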



SQL Server uses key-range locks to protect a range of rows implicitly included in a record set being read by a Transact-SQL statement while using the serializable transaction isolation level. Find more here:
                      https://technet.microsoft.com/en-US/library/ms191272(v=SQL.105).aspx



                      With nibbling deletes, our goal is to isolate a range of rows and ensure that no changes will occur to those rows while we are deleting them, that is to say, we do not want phantom reads or insertions. The serializable isolation level is meant to solve this problem.



Before I demonstrate my solution, I would like to add that I am neither recommending that you switch your database's default isolation level to SERIALIZABLE nor claiming that my solution is the best. I merely wish to present it and see where we can go from here.



                      A few house-keeping notes:




                      1. The SQL Server version that I am using is Microsoft SQL Server 2012 - 11.0.5343.0 (X64)

                      2. My test database is using the FULL recovery model


                      To begin my experiment, I will set up a test database, a sample table, and I will fill the table with 2,000,000 rows.




                      USE [master];
                      GO

                      SET NOCOUNT ON;

                      IF DATABASEPROPERTYEX (N'test', N'Version') > 0
                      BEGIN
                      ALTER DATABASE [test] SET SINGLE_USER
                      WITH ROLLBACK IMMEDIATE;
                      DROP DATABASE [test];
                      END
                      GO

                      -- Create the test database
                      CREATE DATABASE [test];
                      GO

                      -- Set the recovery model to FULL
                      ALTER DATABASE [test] SET RECOVERY FULL;

                      -- Create a FULL database backup
                      -- in order to ensure we are in fact using
                      -- the FULL recovery model
                      -- I pipe it to dev null for simplicity
                      BACKUP DATABASE [test]
                      TO DISK = N'nul';
                      GO

                      USE [test];
                      GO

                      -- Create our table
                      IF OBJECT_ID('dbo.tbl','U') IS NOT NULL
                      BEGIN
                      DROP TABLE dbo.tbl;
                      END;
                      CREATE TABLE dbo.tbl
                      (
                      c1 BIGINT IDENTITY (1,1) NOT NULL
                      , c2 INT NOT NULL
                      ) ON [PRIMARY];
                      GO

-- Insert 2,000,000 rows (2,000 rows x 1,000 batches)
                      INSERT INTO dbo.tbl
                      SELECT TOP 2000
                      number
                      FROM
                      master..spt_values
                      ORDER BY
                      number
                      GO 1000


                      At this point, we will need one or more indexes upon which the locking mechanisms of the SERIALIZABLE isolation level can act.




                      -- Add a clustered index
                      CREATE UNIQUE CLUSTERED INDEX CIX_tbl_c1
                      ON dbo.tbl (c1);
                      GO

                      -- Add a non-clustered index
                      CREATE NONCLUSTERED INDEX IX_tbl_c2
                      ON dbo.tbl (c2);
                      GO


Now, let us check that our 2,000,000 rows were created.




                      SELECT
                      COUNT(*)
                      FROM
                      tbl;


Row Count Result



We have our database, table, indexes, and rows; so, let us set up the experiment for nibbling deletes. First, we must decide how best to create a typical nibbling delete mechanism.




DECLARE
    @BatchSize INT = 100
  , @LowestValue BIGINT = 20000
  , @HighestValue BIGINT = 20010
  , @DeletedRowsCount BIGINT = 0
  , @RowCount BIGINT = 1;

SET NOCOUNT ON;
-- Note: no GO after the DECLARE block; a batch separator
-- would throw away the variables declared above.

WHILE @DeletedRowsCount < ( @HighestValue - @LowestValue )
BEGIN

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;

    DELETE
    FROM
        dbo.tbl
    WHERE
        c1 IN (
                SELECT TOP (@BatchSize)
                    c1
                FROM
                    dbo.tbl
                WHERE
                    c1 BETWEEN @LowestValue AND @HighestValue
                ORDER BY
                    c1
              );

    SET @RowCount = ROWCOUNT_BIG();

    COMMIT TRANSACTION;

    SET @DeletedRowsCount += @RowCount;
    WAITFOR DELAY '000:00:00.025';
    CHECKPOINT;

END;


As you can see, I placed the explicit transaction inside the while loop. If you would like to limit log flushes, feel free to place it outside the loop. Furthermore, since we are in the FULL recovery model, you may wish to take transaction log backups more frequently while your nibbling delete operations run, to keep the transaction log from growing out of control.
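For example, a minimal log backup such as the following could be run between batches. As with the earlier database backup, piping to the nul device is for demonstration only; use a real backup path in production:

BACKUP LOG [test]
TO DISK = N'nul';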



So, I have a couple of goals with this setup. First, I want my key-range locks, so I try to keep the batches as small as possible. I also do not want to negatively impact concurrency on my "gigantic" table, so I want to take my locks and release them as quickly as I can. Hence, I recommend that you keep your batch sizes small.



                      Now, I want to provide a very short example of this deletion routine in action. We must open a new window within SSMS and delete one row from our table. I will do this within an implicit transaction using the default READ COMMITTED isolation level.




                      DELETE FROM
                      dbo.tbl
                      WHERE
                      c1 = 20005;


                      Was this row actually deleted?




                      SELECT
                      c1
                      FROM
                      dbo.tbl
                      WHERE
                      c1 BETWEEN 20000 AND 20010;


                      Yes, it was deleted.



                      Proof of Deleted Row



Now, in order to see our locks, let us open a new window within SSMS and add a code snippet or two. I am using Adam Machanic's sp_whoisactive, which can be found here: sp_whoisactive




                      SELECT
                      DB_NAME(resource_database_id) AS DatabaseName
                      , resource_type
                      , request_mode
                      FROM
                      sys.dm_tran_locks
                      WHERE
                      DB_NAME(resource_database_id) = 'test'
                      AND resource_type = 'KEY'
                      ORDER BY
                      request_mode;

-- Our insert (session ids below are from my test run; yours will differ)
sp_lock 55;

-- Our deletions
sp_lock 52;

                      -- Our active sessions
                      sp_whoisactive;



                      Now, we are ready to begin. In a new SSMS window, let us begin an explicit transaction that will attempt to re-insert the one row that we deleted. At the same time, we will fire off our nibbling delete operation.



                      The insert code:




                      BEGIN TRANSACTION

                      SET IDENTITY_INSERT dbo.tbl ON;

                      INSERT INTO dbo.tbl
                      ( c1 , c2 )
                      VALUES
                      ( 20005 , 1 );

                      SET IDENTITY_INSERT dbo.tbl OFF;

--COMMIT TRANSACTION; -- intentionally left uncommitted so the transaction stays open and holds its locks


                      Let us kick off both operations beginning with the insert and followed by our deletes. We can see the key-range locks and exclusive locks.



                      Range and eXclusive Locks



                      The insert generated these locks:



                      Insert's Locks



                      The nibbling delete/select is holding these locks:



Nibbling Delete's Locks



                      Our insert is blocking our delete as expected:



                      Insert Blocks Delete



                      Now, let us commit the insert transaction and see what is up.



                      Commit the Delete



                      And as expected, all transactions complete. Now, we must check to see whether the insert was a phantom or whether the delete operation removed it as well.




                      SELECT
                      c1
                      FROM
                      dbo.tbl
                      WHERE
                      c1 BETWEEN 20000 AND 20015;


In fact, the inserted row was deleted as well; no phantom insert was allowed.



                      No Phantom Insert



So, in conclusion, I think the true intention of this exercise is not to track every single row-, page-, or table-level lock in order to determine whether any element of a batch is locked and would therefore require our delete operation to wait. That may have been the questioners' intent; however, that task is herculean and basically impractical, if not impossible. The real goal is to ensure that no unwanted phenomena arise once we have isolated the range of our batch with locks of our own, and to then proceed to delete the batch. The SERIALIZABLE isolation level achieves this objective. The key is to keep your nibbles small, your transaction log under control, and unwanted phenomena eliminated.



If you want speed, then don't build gigantically deep tables that cannot be partitioned, leaving you unable to use partition switching for the fastest results. The key to speed is partitioning and parallelism; the key to suffering is nibbles and live-locking.



                      Please let me know what you think.



                      I created some further examples of the SERIALIZABLE isolation level in action. They should be available at the links below.



                      Delete Operation



                      Insert Operation



                      Equality Operations - Key-Range Locks on Next Key Values



                      Equality Operations - Singleton Fetch of Existent Data



                      Equality Operations - Singleton Fetch of Nonexistent Data



                      Inequality Operations - Key-Range Locks on Range and Next Key Values






answered Aug 27 '15 at 3:01 by ooutwire; edited Nov 13 '18 at 23:10 by Community





































                          Summarizing other approaches originally offered in comments to the question.







                          1. Use NOWAIT if the desired behaviour is to fail the whole chunk as soon as an incompatible lock is encountered.



                            From the NOWAIT documentation:




Instructs the Database Engine to return a message as soon as a lock is encountered on the table. NOWAIT is equivalent to specifying SET LOCK_TIMEOUT 0 for a specific table. The NOWAIT hint does not work when the TABLOCK hint is also included. To terminate a query without waiting when using the TABLOCK hint, preface the query with SET LOCK_TIMEOUT 0; instead.





                          2. Use SET LOCK_TIMEOUT to achieve a similar outcome, but with a configurable timeout:



From the SET LOCK_TIMEOUT documentation:




                            Specifies the number of milliseconds a statement waits for a lock to be released.



                            When a wait for a lock exceeds the timeout value, an error is returned. A value of 0 means to not wait at all and return a message as soon as a lock is encountered.
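A sketch combining the two approaches (hypothetical table and key range; error 1222 is SQL Server's "Lock request time out period exceeded"): fail fast on a locked batch, trap the timeout, and let the caller move on to the next batch.

SET LOCK_TIMEOUT 0;  -- 0 = do not wait; fail as soon as a lock is encountered

BEGIN TRY
    DELETE FROM dbo.tbl WITH (ROWLOCK)
    WHERE c1 BETWEEN 20000 AND 20100;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222
        PRINT 'Batch is locked; skip it and retry later.';
    ELSE
        THROW;
END CATCH;

SET LOCK_TIMEOUT -1;  -- restore the default (wait indefinitely)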









answered Aug 29 '15 at 11:38 by Paul White; edited Apr 13 '17 at 12:42 by Community





































Assume we have two parallel queries:



Connection/session 1 locks the row with id = 777:



                                  SELECT * FROM your_table WITH(UPDLOCK,READPAST) WHERE id = 777


Connection/session 2 ignores (skips past) the locked row:



                                  SELECT * FROM your_table WITH(UPDLOCK,READPAST) WHERE id = 777


Alternatively, connection/session 2 can raise an error when the row is locked or does not exist:



                                  DECLARE @id integer;
                                  SELECT @id = id FROM your_table WITH(UPDLOCK,READPAST) WHERE id = 777;
                                  IF @id is NULL
                                  THROW 51000, 'Hi, a record is locked or does not exist.', 1;
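One caveat worth making explicit: the UPDLOCK taken by session 1 is only held for the duration of its transaction, so for the probe to keep the row locked it must run inside an explicit transaction. A minimal sketch, assuming the same placeholder table:

```sql
-- Session 1: hold an update lock on the row until COMMIT/ROLLBACK.
BEGIN TRANSACTION;

SELECT * FROM your_table WITH (UPDLOCK, READPAST) WHERE id = 777;

-- ...do work while the row stays locked to other UPDLOCK readers...

COMMIT TRANSACTION;
```

Outside a transaction, the lock is released as soon as the SELECT completes, and the READPAST probe in session 2 would no longer skip the row.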




answered 9 mins ago by Lebnik, a new contributor











                                          -1














Try filtering on something like this - it can get complicated if you want to get really, really specific. Look in BOL for the description of sys.dm_tran_locks.

SELECT
    tl.request_session_id,
    tl.resource_type,
    tl.resource_associated_entity_id,
    db_name(tl.resource_database_id) AS [Database],
    CASE
        WHEN tl.resource_type = 'OBJECT'
            THEN object_name(tl.resource_associated_entity_id, tl.resource_database_id)
        ELSE NULL
    END AS LockedObject,
    tl.resource_database_id,
    tl.resource_description,
    tl.request_mode,
    tl.request_type,
    tl.request_status
FROM sys.dm_tran_locks AS tl
WHERE tl.resource_database_id <> 2   -- exclude tempdb
ORDER BY tl.request_session_id;
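For row (KEY) and PAGE locks, resource_associated_entity_id holds a hobt_id rather than an object_id, so object_name() cannot resolve it directly. As a hedged extension of the query above (run it in the database whose locks you are inspecting), it can be mapped back to a table via sys.partitions:

```sql
-- Resolve KEY/PAGE lock resources (hobt_id) to the locked table's name.
SELECT
    tl.request_session_id,
    tl.resource_type,
    o.name AS locked_table,
    tl.request_mode,
    tl.request_status
FROM sys.dm_tran_locks AS tl
JOIN sys.partitions AS p
    ON p.hobt_id = tl.resource_associated_entity_id
JOIN sys.objects AS o
    ON o.object_id = p.object_id
WHERE tl.resource_type IN ('KEY', 'PAGE')
  AND tl.resource_database_id = DB_ID();
```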





                                          • just curious - why the downvote?

                                            – rottengeek
                                            Sep 11 '15 at 20:17
















edited Aug 25 '15 at 22:20; answered Aug 25 '15 at 21:03 by rottengeek
























                                          -10














You can use NOLOCK while you are deleting, and if the rows are locked they won't be deleted. It's not ideal, but it may do the trick for you.

DELETE TA FROM dbo.TableA TA WITH (NOLOCK) WHERE Condition = True





                                          • 7





                                            If I try that on my local machine I get Msg 1065, Level 15, State 1, Line 15 The NOLOCK and READUNCOMMITTED lock hints are not allowed for target tables of INSERT, UPDATE, DELETE or MERGE statements., deprecated since 2005

                                            – Tom V
                                            Aug 25 '15 at 13:33


















answered Aug 25 '15 at 12:43 by mouliin








Thanks for contributing an answer to Database Administrators Stack Exchange!