
SQL Server Replication


We had an issue with replication where a large update (7.5 million rows) was run against a 33 million row table. Replication converted that into 7.5 million individual UPDATE statements. When my alerting notified me that the publication was behind our latency threshold, I started to investigate.

I discovered that the update had already been executed and that it would have taken a few days to chew through those individual statements, so I decided to see if we could skip over the commands it was trying to process. Using the system tables and stored procedures, I was able to determine the period of time during which the update ran. Once I found the last xact_seqno, I stopped the Distribution Agent and applied the update manually to the subscriber database. I then executed sp_setsubscriptionxactseqno to skip past all 7.5 million transactions. When I restarted the Distribution Agent, it seemed to have worked, and the agent processed the remaining transactions.

For good measure I used Redgate's data compare tool to check whether the data was out of sync; I was missing about 24 records (which may not have been there originally, since I didn't set the comparison up).

My question is: was that the right way to fix it? How can you be sure you get the correct next xact_seqno? Do you order the transactions by xact_seqno or entry_time?
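The procedure described above can be sketched roughly as below. This is only an illustration, not the exact commands from the original incident: the publication, database, and xact_seqno values are placeholders.

```sql
-- Sketch only; all names and values are placeholders.

-- 1. On the Distributor: find the transactions around the time of the bulk
--    update, ordered by xact_seqno (sequence number, not entry_time, is what
--    orders delivery).
SELECT TOP (10) xact_seqno, entry_time
FROM distribution.dbo.MSrepl_transactions
WHERE entry_time BETWEEN '2016-06-07 22:00' AND '2016-06-08 02:00'
ORDER BY xact_seqno DESC;

-- 2. On the Subscriber, in the subscription database, with the Distribution
--    Agent stopped and the data already fixed manually: move the subscription
--    pointer past the offending batch.
EXEC sys.sp_setsubscriptionxactseqno
    @publisher = 'PUBLISHER01',
    @publisher_db = 'PubDB',
    @publication = 'REP_P',
    @xact_seqno = 0x0000002B000001F3000A; -- placeholder: last skipped xact_seqno
```

When the agent is restarted, it resumes delivery from the first transaction after the supplied sequence number.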










sql-server sql-server-2012 replication transactional-replication

share|improve this question

edited Jun 8 '16 at 13:40 by SQLPRODDBA
asked Jun 8 '16 at 12:08 by Derek Czarny

bumped to the homepage by Community 2 mins ago

This question has answers that may be good or bad; the system has marked it active so that they can be reviewed.

  • Can you please provide details about SQL Server version and type of replication?

    – SQLPRODDBA
    Jun 8 '16 at 12:35

  • The issue was from the distributor to the subscriber, which was on the same server (SQL Server 2012). Transactional replication.

    – Derek Czarny
    Jun 8 '16 at 12:40
1 Answer
An alternative would be to remove the article from the publication and re-add it, causing a new snapshot of the table to be taken. That way there is no question whether the right data is in the subscriber.

To do this you need to make sure the publication properties allow_anonymous and immediate_sync are both set to FALSE; if either of them is true, then removing and re-adding the article will trigger a snapshot of the whole publication database rather than just the table.

To check, run sp_helppublication at the Publisher against the publication database.

To change them, if either is set to 1:

EXEC sp_changepublication
    @publication = 'REP_P', -- enter your publication name
    @property = 'allow_anonymous',
    @value = 'false'
GO
EXEC sp_changepublication
    @publication = 'REP_P', -- enter your publication name
    @property = 'immediate_sync',
    @value = 'false'
GO

I have tested this and it works in my environment; however, I strongly suggest you try it in a lab first.
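The remove/re-add step itself might look something like the following. This is a hedged sketch, not the answerer's exact script: the publication (REP_P), article (dbo.BigTable), subscriber (SUB01), and subscription database (SubDB) are all placeholder names, and real articles often need additional sp_addarticle parameters.

```sql
-- Run at the Publisher on the publication database, after setting
-- allow_anonymous and immediate_sync to false as shown above.
EXEC sp_dropsubscription @publication = 'REP_P', @article = 'BigTable',
     @subscriber = 'SUB01';
EXEC sp_droparticle @publication = 'REP_P', @article = 'BigTable';

EXEC sp_addarticle @publication = 'REP_P', @article = 'BigTable',
     @source_owner = 'dbo', @source_object = 'BigTable';
EXEC sp_addsubscription @publication = 'REP_P', @article = 'BigTable',
     @subscriber = 'SUB01', @destination_db = 'SubDB';

-- Generate the snapshot; with immediate_sync off, only the newly added
-- article should be included rather than the whole publication.
EXEC sp_startpublication_snapshot @publication = 'REP_P';
```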






share|improve this answer

edited Jun 8 '16 at 13:57 by SQLPRODDBA
answered Jun 8 '16 at 13:27 by Hpk
  • Yes, but what if the update was already executed and you are just troubleshooting after the fact? Would you still remove the table from replication and add it back?

    – Derek Czarny
    Jun 8 '16 at 13:37

  • @Hpk IMO it may work when you have a small table, but you still have to test it. What if we have a very large table? A re-snapshot would be painful in that case.

    – SQLPRODDBA
    Jun 8 '16 at 13:40

  • @DerekCzarny It's an alternative. In my environment, unless I could prove to my manager that I had tested my method and that the data was intact, it would also need to go through CAB etc. due to the non-standard process. I guess it's up to your environment. My method is a sure-fire, no-questions-asked way to make the publisher match the subscriber. In terms of table size, 33 million records is not large unless the rows are very wide or contain large data types.

    – Hpk
    Jun 8 '16 at 14:45

  • If you used distribution.dbo.MSrepl_transactions and distribution.dbo.MSrepl_commands to get the update commands, restricted down to the applicable article(s), then your method looks OK. xact_seqno is sequential and is the right thing to order by.

    – Hpk
    Jun 8 '16 at 14:59
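The lookup Hpk describes, restricted to one article and ordered by xact_seqno, could be sketched as below. The article name and the exact join are illustrative; a production query would typically also filter on publisher_database_id.

```sql
-- Sketch: run against the distribution database on the Distributor.
-- 'BigTable' is a placeholder article name.
SELECT t.xact_seqno, t.entry_time, c.command_id
FROM distribution.dbo.MSrepl_transactions AS t
JOIN distribution.dbo.MSrepl_commands AS c
    ON c.xact_seqno = t.xact_seqno
JOIN distribution.dbo.MSarticles AS a
    ON a.article_id = c.article_id
WHERE a.article = 'BigTable'
ORDER BY t.xact_seqno, c.command_id;
```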












