Trying to merge chunks to trigger better balancing after 50% of the data was deleted by the developers



I am trying to merge chunks using the following command:

    db.adminCommand( {
        mergeChunks: "HTMLDumps.HTMLRepository",
        bounds: [ { "ShardMapId" : 2, "DomainId" : 62 },
                  { "ShardMapId" : 2, "DomainId" : 162 } ]
    } )


I get the following error when running the command above, no matter which pair of consecutive chunks on a shard I try to merge:



{
"ok" : 0,
"errmsg" : "Failed to commit chunk merge :: caused by ::
DuplicateKey: chunk operation commit failed: version
32|6||5ba8d864bba4ff264edf0bd9 doesn't exist in
namespace: HTMLDumps.HTMLRepository. Unable to save
chunk ops. Command: { applyOps: [ { op: "u", b: false,
ns: "config.chunks", o: { _id: "HTM
Dumps.HTMLRepository-ShardMapId_2.0DomainId_62.0", ns:
"HTMLDumps.HTMLRepository", min: { ShardMapId: 2.0,
DomainId: 62.0 }, max: { ShardMapId: 2, DomainId: 162 },
shard: "shard0000", lastmod: Timestamp(32, 6),
lastmodEpoch: ObjectId('5ba8d864bba4ff264edf0bd9') },
o2: { _id: "HTMLDumps.HTMLRepository-
ShardMapId_2.0DomainId_62.0" } }, { op: "d", ns:
"config.chunks", o: { _id: "HTMLDumps.HTMLRepository-
ShardMapId_2DomainId_109" } } ], preCondition: [ { ns:
"config.chunks", q: { query: { ns:
"HTMLDumps.HTMLRepository", min: { ShardMapId: 2.0,
DomainId: 62.0 }, max: { ShardMapId: 2, DomainId: 109 }
}, orderby: { lastmod: -1 } }, res: { lastmodEpoch:
ObjectId('5ba8d864bba4ff264edf0bd9'), shard:
"shard0000" } }, { ns: "config.chunks", q: { query:
{ ns: "HTMLDumps.HTMLRepository", min: { ShardMapId:
2, DomainId: 109 }, max: { ShardMapId: 2, DomainId: 162
} }, orderby: { lastmod: -1 } }, res: { lastmodEpoch:
ObjectId('5ba8d864bba4ff264edf0bd9'), shard:
"shard0000" } } ], writeConcern: { w: 0, wtimeout: 0 }
}. Result: { applied: 1, code: 11000, codeName:
"DuplicateKey", errmsg: "E11000 duplicate key error
collection: config.chunks index: ns_1_min_1 dup key: { :
"HTMLDumps.HTMLRepository", : { ShardMapId: 2.0,
DomainId: 62.0 } }", results: [ false ], ok: 0.0,
operationTime: Timestamp(1554112692, 1), $gleStats: {
lastOpTime: { ts: Timestamp(1554112692, 1), t: 13 },
electionId: ObjectId('7fffffff000000000000000d') },
$clusterTime: { clusterTime: Timestamp(1554112692, 1),
signature: { hash: BinData(0,
0000000000000000000000000000000000000000), keyId: 0 } }
} :: caused by :: E11000 duplicate key error collection:
config.chunks index: ns_1_min_1 dup key: { :
"HTMLDumps.HTMLRepository", : { ShardMapId: 2.0,
DomainId: 62.0 } }",
"code" : 11000,
"codeName" : "DuplicateKey",
"operationTime" : Timestamp(1554112687, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1554112687, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}


This happens regardless of which chunks I select. My main goal is to achieve true balancing of the data, not just of chunk counts. The developers recently deleted 90% of the data in these chunks, which shifted the data distribution from roughly 60/40 to 90/10. I hope that merging (or removing) the now-empty chunks will let the balancer bring the distribution back to something close to 60/40.
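One detail worth noting in the error above: the stored chunk minimum uses doubles (ShardMapId: 2.0, DomainId: 62.0) while other bounds in the same operation use integers (ShardMapId: 2), and the chunk _id strings in config.chunks are derived from the min bound. The same logical shard-key value can therefore produce two different identifiers depending on numeric type, which is consistent with the duplicate-key conflict on the ns_1_min_1 index. A minimal Python sketch of the mismatch (the chunk_id helper is a simplified illustration of the "<ns>-<field>_<value>" pattern visible in the error, not MongoDB's actual implementation):

```python
# Illustrative only: mimics the "<ns>-<field>_<value>..." _id strings
# visible in the error message; not MongoDB's real internals.
def chunk_id(ns, min_bound):
    return ns + "-" + "".join(f"{k}_{v}" for k, v in min_bound.items())

as_double = chunk_id("HTMLDumps.HTMLRepository",
                     {"ShardMapId": 2.0, "DomainId": 62.0})
as_int = chunk_id("HTMLDumps.HTMLRepository",
                  {"ShardMapId": 2, "DomainId": 62})

print(as_double)  # HTMLDumps.HTMLRepository-ShardMapId_2.0DomainId_62.0
print(as_int)     # HTMLDumps.HTMLRepository-ShardMapId_2DomainId_62

# Same logical shard-key value, two different identifiers:
assert as_double != as_int
```

If the bounds stored in config.chunks really do mix doubles and integers, copying the exact min/max documents out of config.chunks (rather than retyping the numbers) when calling mergeChunks may avoid the mismatch.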









mongodb




