Cassandra nodes are not equal
We have two nodes. Node1 was down for a long time, and during that time Node2 grew to about 1 TB of data while Node1 holds only about 100 GB.
We tried to repair Node1 with nodetool repair, but nothing changed. We then ran nodetool repair on Node2; it spent five days on repair and compaction, but nothing changed either.
Current status:
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.x.y.z 149.46 GB 256 100.0% xxx rack1
UN 172.x.y.k 1.04 TB 256 100.0% xyz rack1
The nodes run on AWS. What should we do?
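For reference, a minimal sketch of the commands involved (my_keyspace is a placeholder for the actual keyspace name):

    # Ring view shown above: per-node load, tokens and ownership
    nodetool status

    # Per-table disk usage and SSTable counts, to see where the two nodes differ
    nodetool cfstats my_keyspace

    # The repair we ran on each node (incremental by default on 3.0 unless -full is given)
    nodetool repair my_keyspace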
cassandra
asked Mar 28 '18 at 6:58
Ekrem Gurdal
What replication factor are you using?
– Christophe Schmitz
Mar 28 '18 at 7:33
@ChristopheSchmitz cqlsh 5.0.1, Cassandra 3.0.9, CQL spec 3.4.0; replication factor = 2
– Ekrem Gurdal
Mar 28 '18 at 7:41
Are you doing any delete queries? Are you using TTL?
– Christophe Schmitz
Mar 28 '18 at 7:42
@ChristopheSchmitz We have never run any delete queries and we are not using TTL. We only store data and run some background processing jobs over it.
– Ekrem Gurdal
Mar 28 '18 at 7:56
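Both of those can be double-checked from cqlsh; a minimal sketch, with my_keyspace, my_table and some_column standing in for the real names:

    # Show the replication strategy and factor configured for the keyspace
    cqlsh -e "SELECT keyspace_name, replication FROM system_schema.keyspaces WHERE keyspace_name = 'my_keyspace';"

    # Spot-check whether rows carry a TTL; some_column must be a regular (non-primary-key) column
    cqlsh -e "SELECT ttl(some_column) FROM my_keyspace.my_table LIMIT 10;"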
1 Answer
We found a solution, although we still don't know how the imbalance happened. Here is what we did:
- Took EBS snapshots of both servers on AWS in case of data loss.
- Detached the volume where the data is stored (on the 172.x.y.k instance).
- Rebuilt the instance with a newer Cassandra version, then re-attached the data volume.
- Finally ran nodetool repair --full, which took 4 days.
Now our nodes are equal.
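Roughly, those steps map to commands like the following; the volume ID, instance ID and device name are placeholders, and the snapshot/detach/attach steps can equally be done from the AWS console:

    # Snapshot the data volume first, in case anything goes wrong
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "cassandra data backup"

    # Stop Cassandra (service name depends on how it was installed), then detach the data volume
    sudo service cassandra stop
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0

    # ...rebuild the instance and install the newer Cassandra version, then re-attach the volume...
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf

    # Full (non-incremental) repair so all data is compared and streamed between the nodes
    nodetool repair --full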
answered Apr 9 '18 at 8:52
Ekrem Gurdal