Number of computations for multiplication
I just read Marvin Minsky's "Form and Content in Computer Science" and found myself confused by the following paragraph:
A startling discovery was made about multiplication itself in the thesis of S. A. Cook [3], which uses a result of A. L. Toom, as discussed in Knuth [4]. Consider the ordinary algorithm for multiplying decimal numbers: for two $n$-digit numbers this employs $n^2$ one-digit products. It is usually supposed that this is minimal. But suppose we write the numbers in two halves, so that the product is $N = (\#A+B)(\#C+D)$, where $\#$ stands for multiplying by $10_{n/2}$. (The left-shift operation is considered to have negligible cost.) Then one can verify that
$$N = \#\#AC + BD + \#(A+B)(C+D) - \#(AC+BD).$$
This involves only three half-length multiplications, instead of the four that one might suppose were needed. For large $n$, the reduction can obviously be reapplied over and over to the smaller numbers. The price is a growing number of additions. By compounding this and other ideas, Cook showed that for any $\epsilon$ and large enough $n$, multiplication requires less than $n^{1+\epsilon}$ products, instead of the expected $n^2$.
I got stuck when I read "stands for multiplying by $10_{n/2}$", because I didn't understand what the subscript means in this example. I tried to find out by searching for material on the complexity of multiplication, but couldn't find an answer.
Can you point me in the right direction? I suspect this exact part is essential for understanding the formula in the middle, and I would like to see how this computation actually carries out a multiplication.
complexity-theory multiplication
asked 10 hours ago by Fabian Schneider
2 Answers
It should be "10 superscript{n/2}", meaning $10^{n/2}$. For example, if $n=4$, then "6#2" would represent $6 \cdot 10^2 = 600$. All he's doing is shifting a decimal number to the left.
answered 9 hours ago by Rick Decker
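(For concreteness, here is a spelled-out check that is not part of the answer above, using the superscript reading $\#x = 10^{n/2}x$.) Writing the two $n$-digit numbers as $10^{n/2}A + B$ and $10^{n/2}C + D$,
$$N = \bigl(10^{n/2}A + B\bigr)\bigl(10^{n/2}C + D\bigr) = 10^{n}AC + 10^{n/2}(AD + BC) + BD,$$
and since $AD + BC = (A+B)(C+D) - AC - BD$, this becomes
$$N = 10^{n}AC + BD + 10^{n/2}(A+B)(C+D) - 10^{n/2}(AC + BD),$$
which is exactly Minsky's $N = \#\#AC + BD + \#(A+B)(C+D) - \#(AC+BD)$ and uses only the three half-length products $AC$, $BD$, and $(A+B)(C+D)$.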
From context it's just exponentiation. I don't know why it's written with a subscript instead of a superscript, but with older papers you sometimes had a professional typesetter who might not have understood the material. (Of course, nowadays authors typically do their own typesetting.)
answered 9 hours ago by Daniel McLaury
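To tie the answers back to the quoted passage: Minsky notes that the reduction "can obviously be reapplied over and over to the smaller numbers". Below is a minimal illustrative sketch (mine, not from the question or either answer; the function name and structure are assumptions) of that recursion in Python, with $\#$ read as multiplication by $10^{n/2}$.

    # Illustrative sketch only (not from the question or the answers):
    # recursive base-10 splitting as in the quoted passage, where "#"
    # means multiplication by 10**(n//2).  Non-negative integers only.
    def karatsuba(x: int, y: int) -> int:
        if x < 10 or y < 10:             # one-digit factor: a single primitive product
            return x * y
        n = max(len(str(x)), len(str(y)))
        shift = 10 ** (n // 2)           # the "#" operator: multiply by 10^(n/2)
        a, b = divmod(x, shift)          # x = a*shift + b
        c, d = divmod(y, shift)          # y = c*shift + d
        ac = karatsuba(a, c)
        bd = karatsuba(b, d)
        cross = karatsuba(a + b, c + d)  # (A+B)(C+D): the third half-length product
        # N = ##AC + BD + #(A+B)(C+D) - #(AC+BD)
        return ac * shift * shift + bd + (cross - ac - bd) * shift

    assert karatsuba(1234, 5678) == 1234 * 5678

Each level replaces four half-length products with three, at the cost of extra additions, which is the trade-off the quoted paragraph describes.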