
Sampling Theorem and reconstruction




I do not understand a concept about the Nyquist-Shannon sampling theorem.



It says that the original analog signal can be perfectly recovered from the sampled signal if and only if the sampling frequency is greater than twice the maximum frequency of the original signal.



I can understand it if I think about what happens in the frequency domain: sampling produces replicas of the original spectrum, so a reconstruction low-pass filter can remove the replicas and keep only the original spectrum.



But in the time domain, sampling simply means extracting values of the original signal at instants separated by the sampling period T.



[figure: the original signal and the samples taken every T seconds]



Once I have extracted these values, I have lost all the information about the points between two consecutive sampling instants. How can the reconstruction device perfectly recover the original signal? It does not know how to connect the sampled points: they could be joined by infinitely many mathematical curves, and all the information within each interval T is gone. For example, it could connect them as in figure 1 (the correct original signal), or as in figure 2.



[figure 1: the original signal drawn through the samples]

[figure 2: a different curve drawn through the same samples]



This makes me think that a very high sampling frequency is surely a good thing, since the points are very close together, but that there is no frequency beyond which the reconstruction becomes 100% perfect, because sampling always means losing information.
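
To make the frequency-domain picture above concrete, here is a small NumPy sketch (the 1 kHz dense grid, the 3 Hz and 7 Hz test components, and the 20 Hz sampling rate are illustrative choices, not from the question). It models sampling as multiplication by an impulse train and compares spectra: below fs/2 the sampled spectrum matches the original, while copies of it appear around multiples of fs, which is exactly what the reconstruction low-pass filter removes.

```python
import numpy as np

# Dense "pseudo-continuous" time grid (illustrative parameters).
fs_dense = 1000.0                       # Hz
t = np.arange(0, 2.0, 1 / fs_dense)    # 2 s window

# Band-limited test signal: components at 3 Hz and 7 Hz.
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

# Model sampling at fs = 20 Hz (> 2 * 7 Hz) as multiplication by an impulse train.
fs = 20.0
step = int(fs_dense / fs)               # keep every 50th point
comb = np.zeros_like(x)
comb[::step] = 1.0
x_sampled = x * comb

# Spectra of the original and of the sampled signal.
f = np.fft.rfftfreq(len(t), 1 / fs_dense)
X = np.fft.rfft(x)
Xs = np.fft.rfft(x_sampled) * step      # the comb keeps 1 sample in 'step', scaling the spectrum by 1/step

# Below fs/2 the sampled spectrum reproduces the original spectrum ...
baseband = f < fs / 2
print("baseband mismatch:", np.max(np.abs(X[baseband] - Xs[baseband])) / np.max(np.abs(X)))

# ... and replicas of it appear around fs, 2*fs, ...
for m in (1, 2):
    band = (f > m * fs - fs / 2) & (f < m * fs + fs / 2)
    print(f"replica energy near {m}*fs:", np.sum(np.abs(Xs[band]) ** 2))
```

An ideal low-pass filter with cutoff fs/2 keeps only the baseband copy, which is why reconstruction can be exact in this idealized setting.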










Tags: analog, signal, signal-processing, sampling, signal-theory






asked 3 hours ago by Kinka-Byo








  • Your bottom signal has some much higher frequency components than the other ones here. – Hearth, 3 hours ago

  • There is exactly ONE curve that passes through all those points AND is band-limited to strictly less than Fs/2. – Dan Mills, 3 hours ago

  • It's important to note that unique reconstruction is only possible if the original signal is strictly bandlimited. Or, to put it another way, given the samples, the assumption of strict bandlimiting allows a single signal to be reconstructed. To the extent that the bandlimited assumption is untrue, the reconstructed signal will not match the original; this is called aliasing. – Neil_UK, 2 hours ago

  • Also note that in practice a higher sampling frequency may be required to provide acceptable reconstruction, since perfect band-limiting is not practical. For example, audio CDs use 44.1 kHz sampling to provide 0-20 kHz output. Oscilloscopes generally use 5-10 times the minimum required sampling frequency to provide acceptable waveform integrity, as a sharp-cutoff filter would tend to create waveform artifacts such as ringing. – Kevin White, 2 hours ago














2 Answers

You can think of any perfectly bandlimited signal as the superposition of a set of $\frac{\sin(t)}{t} = \text{sinc}(t)$ curves, with their peaks positioned uniformly along the time axis. Their spacing is $\frac{2}{BW}$.



sinc(x) also happens to be the time-domain impulse response of a perfect low-pass filter, and it explains how the continuous-time reconstruction (interpolation) is accomplished from a series of discrete samples.



When we uniformly sample a signal, each sample is a direct measurement of the amplitude of one of those sinc() waves. This works because it is a property of the sinc() function that it is zero at every sampling point, except at its own peak. In other words, when you take a measurement, you're not getting any "interference" from any of the other sinc() functions. Therefore, the set of N discrete measurements contains all of the information in the continuous-time signal represented by that collection of sinc() waves.
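
As a small numerical illustration of this (a sketch only; the 20 Hz rate, the 40-pulse window, and the helper names are assumptions, not from the answer), build a band-limited signal explicitly as a finite superposition of shifted sinc pulses, sample it on the uniform grid, and rebuild it off-grid with the same sinc sum (Whittaker-Shannon interpolation):

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 20.0                      # sampling rate in Hz; illustrative choice
T = 1 / fs                     # sampling period
N = 40                         # number of sinc pulses / samples in the window
n = np.arange(N)

# The signal is built exactly as a superposition of shifted sinc pulses, one per
# sample instant, as described above. np.sinc(u) = sin(pi*u)/(pi*u).
c = rng.standard_normal(N)     # the sinc amplitudes

def x(t):
    t = np.atleast_1d(t)
    return np.sinc((t[:, None] - n[None, :] * T) / T) @ c

# Uniform sampling at t = nT reads each amplitude directly, because every other
# sinc pulse crosses zero at that instant.
samples = x(n * T)
print("samples equal the sinc amplitudes:", np.allclose(samples, c))

# Whittaker-Shannon reconstruction: the same sinc sum, now driven by the samples,
# rebuilds the signal at arbitrary off-grid instants.
t_off = rng.uniform(0, (N - 1) * T, 500)
recon = np.sinc((t_off[:, None] - n[None, :] * T) / T) @ samples
print("off-grid reconstruction matches:", np.allclose(recon, x(t_off)))
```

Because the test signal is, by construction, exactly a finite sinc superposition, there is no truncation error here; for a general band-limited signal observed over a finite window, the truncated sum is only approximate near the edges.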





Now, it gets even weirder than what TimWescott was alluding to: the samples do not even have to be uniformly spaced in time! It turns out that ANY N unique samples taken within a window of time (with certain limitations) of a perfectly bandlimited signal can be used to reconstruct that signal. It takes a lot more math to do it, though!



With nonuniform sampling, you are no longer getting a clean measurement of just one of the sinc() amplitudes. Instead, you're getting a mix of many, if not all of them. However, since you know exactly where you are on each one (obviously, each sample must be time-stamped), it is possible to solve the large system of linear equations to find the actual amplitudes and therefore reconstruct the original signal. Of course, this process is very sensitive to small perturbations (noise and math errors, for example), and I'm hand-waving away some details about constraints on the set of samples, but the general principle holds.
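
Here is a rough sketch of that nonuniform case, reusing the same finite sinc model (again an illustrative construction: the jittered instants, sizes, and names are assumptions). Each time-stamped sample is a known mixture of the unknown sinc amplitudes, so the amplitudes come out of a linear solve:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 20.0
T = 1 / fs
N = 40
n = np.arange(N)
c_true = rng.standard_normal(N)                 # the sinc amplitudes to recover

def sinc_matrix(t):
    # Row m, column k: value of the k-th sinc pulse at instant t[m].
    return np.sinc((np.asarray(t)[:, None] - n * T) / T)

def x(t):
    return sinc_matrix(t) @ c_true

# N nonuniform, time-stamped instants: a jittered grid, so no two samples
# coincide. Clustered instants make the system ill-conditioned, which is the
# noise sensitivity mentioned above.
t_nonuni = (n + rng.uniform(-0.4, 0.4, N)) * T
b = x(t_nonuni)

# Each sample mixes all the amplitudes: solve A @ c = b for c.
A = sinc_matrix(t_nonuni)
c_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("condition number of A:", np.linalg.cond(A))
print("amplitude recovery error:", np.max(np.abs(c_est - c_true)))

t_dense = np.linspace(0, (N - 1) * T, 400)
print("reconstruction error on a dense grid:",
      np.max(np.abs(sinc_matrix(t_dense) @ c_est - x(t_dense))))
```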







answered 3 hours ago by Dave Tweed

If the signal is perfectly bandlimited, then there is no additional information to be gotten out of it by sampling faster than twice the bandwidth. So perfect reconstruction must be possible. It's as @DanMills said: there's one and only one curve that passes through the sampled points while staying within the band limit, and that's the curve you'd get from a perfect reconstruction filter.
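
A quick numerical way to see why the band-limit assumption is doing the work (the frequencies and rate below are just an illustrative pick): at fs = 20 Hz, a 7 Hz cosine and a 13 Hz cosine produce identical samples, so the samples alone cannot tell them apart; only the extra knowledge that the signal lies below fs/2 = 10 Hz singles out the 7 Hz curve. This is also the aliasing Neil_UK mentions in the comments.

```python
import numpy as np

fs = 20.0                            # sampling rate in Hz, illustrative
k = np.arange(64)                    # sample indices
t = k / fs

low = np.cos(2 * np.pi * 7 * t)      # respects the band limit: 7 Hz < fs/2
high = np.cos(2 * np.pi * 13 * t)    # violates it: 13 Hz > fs/2

# 13 Hz aliases to |13 - 20| = 7 Hz, so both signals hit the very same sample values.
print("identical samples:", np.allclose(low, high))

# Between the sampling instants the two continuous signals are clearly different.
t_mid = (k[:-1] + 0.5) / fs
print("max difference halfway between samples:",
      np.max(np.abs(np.cos(2 * np.pi * 7 * t_mid) - np.cos(2 * np.pi * 13 * t_mid))))
```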



(Note that it gets weirder -- at least in theory, if the bandwidth is $B$, then you don't need to sample $x(t)$ at $2B$ -- you can sample $x(t)$ and $\frac{dx(t)}{dt}$ simultaneously at $B$, or sample out to the third derivative (i.e., collect four samples) at $\frac{B}{2}$, or commit various other crimes to the signal before sampling an $N$-wide vector at $\frac{2B}{N}$. Most such schemes (definitely the derivatives that I mention) would be horribly impractical, but in theory they'll work, and you do occasionally stumble across schemes that are actually used in reality.)
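
Here is a rough numerical sketch of the derivative-sampling variant over a finite window (an illustrative construction with made-up sizes, so expect only approximate recovery and some ill-conditioning near the edges): model the band-limited signal as a finite sum of Nyquist-spaced sinc pulses, sample $x(t)$ and $\frac{dx(t)}{dt}$ at only half the Nyquist rate, and solve the resulting linear system for the pulse amplitudes.

```python
import numpy as np

rng = np.random.default_rng(2)

B = 10.0                   # bandwidth in Hz; the Nyquist rate would be 2B = 20 Hz
T = 1 / (2 * B)            # Nyquist spacing of the sinc pulses
N = 40                     # number of pulses (unknown amplitudes)
k = np.arange(N)
c_true = rng.standard_normal(N)

def dsinc(u):
    """Derivative of np.sinc(u) = sin(pi*u)/(pi*u) with respect to u."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)                      # the derivative is 0 at u = 0
    nz = np.abs(u) > 1e-8
    out[nz] = (np.cos(np.pi * u[nz]) - np.sinc(u[nz])) / u[nz]
    return out

def x(t):
    return np.sinc((np.asarray(t)[:, None] - k * T) / T) @ c_true

def dx(t):
    return (dsinc((np.asarray(t)[:, None] - k * T) / T) / T) @ c_true

# Sample x and x' at HALF the Nyquist rate: N/2 instants spaced 2T apart.
t_s = 2 * np.arange(N // 2) * T
A = np.vstack([np.sinc((t_s[:, None] - k * T) / T),        # N/2 equations from x
               dsinc((t_s[:, None] - k * T) / T) / T])     # N/2 equations from x'
b = np.concatenate([x(t_s), dx(t_s)])

# N equations, N unknowns: recover the amplitudes and hence the signal.
c_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("condition number:", np.linalg.cond(A))
print("amplitude recovery error:", np.max(np.abs(c_est - c_true)))
```

The x-samples pin down every other amplitude directly (the other sinc pulses are zero on the grid), and the derivative samples supply the equations for the remaining ones; in the idealized infinite-length case this is exact, while here the finite window makes it only approximately so.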







answered 3 hours ago by TimWescott