INSERT … SELECT on a huge MySQL table
I have an isolated MySQL 5.5 server with a huge table inside (3.5 TB of data, and yes, it is a single table).
The goal is to copy half of this table into another table on the same server. What is the better and quicker way to do that?
1) Just INSERT INTO new_table SELECT * FROM old_table WHERE ....
What will MySQL's behavior be in this case? I expect that the connection to MySQL will be broken by a timeout, but the query itself should keep running until all the data is copied. Or will MySQL simply crash on such a query?
2) Using the same INSERT ... SELECT, but selecting in chunks (for example, 10,000 rows at a time). This should definitely be safer, but much slower, I guess.
What is the best way? Thank you in advance!
P.S. Before the insertion I will drop the FKs and indexes on the new table, and disable sql_log_bin and …
Tags: mysql, select, insert
bumped to the homepage by Community♦ 12 mins ago
This question has answers that may be good or bad; the system has marked it active so that they can be reviewed.
Have you ever tried a tool such as Navicat? I'm using this tool for data transfer between databases and servers without any problems, as long as we have a reliable connection. But there were no options to select rows or cut the transfer into smaller chunks. – Yugo Gautomo, Jan 29 '18 at 11:06
asked Oct 6 '16 at 13:58 by user105715 · edited Oct 7 '16 at 3:48 by JSapkota
2 Answers
Your option 2 will be the way to go: smaller chunks. Depending on the size of your rows, you could increase the chunk size to around 1M rows.
answered Oct 6 '16 at 15:20 by Visu
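A minimal sketch of one chunk of that approach, assuming the table has an AUTO_INCREMENT integer primary key `id` (the table, column, and filter names here are illustrative, not from the question):

```sql
-- Copy one primary-key range at a time; repeat, advancing the
-- range, until MAX(id) of old_table has been covered.
INSERT INTO new_table
SELECT *
FROM old_table
WHERE id >= 1                      -- lower bound of this chunk
  AND id < 1000001                 -- upper bound: ~1M ids per chunk
  AND some_filter_column = 'x';    -- the WHERE that selects "half" the table
```

Walking the primary key this way keeps each transaction small, so a failure loses only the current chunk and the copy can resume from the last completed range instead of starting over.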
More discussion of chunking. – Rick James, Oct 7 '16 at 2:44
I've seen ALTER TABLEs running for days. The problem is not a timeout; the problem is that if the query fails at some point, you'll have to restart it from scratch.
But what is your use case exactly? Will the server continue to accept queries from the applications? If not, your idea is OK; I see no caveats.
Otherwise, use pt-archiver. It is designed for this purpose, and it avoids overloading the server.
However: don't disable the binlog! You would break incremental backups for no reason. Just run SET binlog_format = 'STATEMENT'.
Also, if the server doesn't need to accept connections from applications, and the table is InnoDB, you can restart it with --innodb-doublewrite=0. The operation will be much faster. But then remember (it's vital) to restart the server again without that option.
answered May 13 '18 at 19:20 by Federico Razzoli
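For reference, a hedged sketch of what a pt-archiver invocation for such a copy could look like (host, database, table, and filter values are placeholders; verify the flags against your Percona Toolkit version):

```shell
# Copy matching rows from old_table to new_table in 10k-row
# batches, committing after each batch, without deleting the
# rows from the source table.
pt-archiver \
  --source h=localhost,D=mydb,t=old_table \
  --dest   h=localhost,D=mydb,t=new_table \
  --where  "some_filter_column = 'x'" \
  --limit 10000 --commit-each \
  --no-delete \
  --progress 100000 --statistics
```

--no-delete turns the archive run into a pure copy, and --commit-each commits after every --limit-sized batch, which keeps transactions short on a 3.5 TB table.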
Thanks for contributing an answer to Database Administrators Stack Exchange!