Why does RAM (any type) access time decrease so slowly?


Asked 6 hours ago by Arseniy (new contributor), edited 5 hours ago by C_Elegans. Score: 8
Tags: ram, speed, ddr, latency

This article shows that DDR4 SDRAM has approximately 8x more bandwidth than DDR1 SDRAM, but the time from setting the column address to when the data is available has decreased by only about 10% (to 13.5 ns).
A quick search shows that the access time of the fastest asynchronous SRAM (18 years old) is 7 ns.
Why has SDRAM access time decreased so slowly? Is the reason economic, technological, or fundamental?
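
To see why a roughly flat access time hurts more with every generation, it helps to express it in interface clock cycles. A minimal sketch: the 13.5 ns figure is from the article above, but the clock rates are typical assumed values for DDR1-400 and DDR4-3200, not taken from it:

    # Express a roughly constant CAS-to-data delay in interface clock cycles.
    # Clock rates below are typical assumed values, not from the article.
    cas_ns = 13.5
    for name, clock_mhz in (("DDR1-400", 200), ("DDR4-3200", 1600)):
        cycles = cas_ns * clock_mhz / 1000.0   # ns converted to cycles at this clock
        print(f"{name}: ~{cycles:.0f} cycles of CAS latency")

The same delay in nanoseconds goes from roughly 3 cycles to roughly 22 cycles as the interface clock rises, which is the pattern the question observes.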











3 Answers


Answer 1 (9 votes), by C_Elegans:

It's because it's easier and cheaper to increase the bandwidth of DRAM than to decrease its latency. To get data out of an open row of RAM, a non-trivial amount of work is necessary.

The column address needs to be decoded, the muxes selecting which lines to access need to be driven, and the data needs to move across the chip to the output buffers. This takes a little bit of time, especially given that SDRAM chips are manufactured on a process tailored to high RAM density rather than high logic speed. To increase the bandwidth, say by using DDR (1, 2, 3, or 4), most of the logic can be either widened or pipelined and can operate at the same speed as in the previous generation. The only thing that needs to be faster is the I/O driver for the DDR pins.

By contrast, to decrease the latency, the entire operation needs to be sped up, which is much harder. Most likely, parts of the RAM would need to be made on a process similar to that used for high-speed CPUs, increasing the cost substantially (the high-speed process is more expensive, and each chip would need to go through two different processes).
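
A toy model of the "widen or pipeline" point, as a sketch: if the memory accepts a new request every cycle, throughput stays at about one result per cycle no matter how many cycles each individual request spends in flight, so bandwidth scales while per-request latency does not improve:

    from collections import deque

    def total_cycles(latency, n_requests):
        # Issue one request per cycle into a pipeline; each result
        # appears 'latency' cycles after its request was issued.
        in_flight, done, cycle, issued = deque(), 0, 0, 0
        while done < n_requests:
            cycle += 1
            if issued < n_requests:
                in_flight.append(cycle + latency)   # completion time of this request
                issued += 1
            while in_flight and in_flight[0] <= cycle:
                in_flight.popleft()
                done += 1
        return cycle

    for lat in (3, 22):   # short vs long access latency, in cycles
        c = total_cycles(lat, 1000)
        print(f"latency {lat:>2}: 1000 results in {c} cycles (~1 per cycle)")

With 1000 requests, a 3-cycle pipeline finishes in 1003 cycles and a 22-cycle pipeline in 1022: throughput is essentially identical, only the first result is later.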



If you compare CPU caches with RAM and hard disk/SSD, there's an inverse relationship between storage being large and storage being fast. An L1$ is very fast, but can only hold between 32 and 256 kB of data. The reason it is so fast is that it is small:

• It can be placed very close to the CPU using it, meaning data has to travel a shorter distance to get to it
• The wires in it can be made shorter, again meaning it takes less time for data to travel across it
• It doesn't take up much area or many transistors, so making it on a speed-optimized process and using a lot of power per bit stored isn't that expensive

As you move up the hierarchy, each storage option gets larger in capacity, but also larger in area and farther away from the device using it, meaning the device must get slower.
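
This size/speed relationship is easy to observe from software with a dependent-load ("pointer chasing") loop over a growing working set. A rough sketch; Python's interpreter adds a large constant overhead per step, but the jump as the working set spills out of the caches into DRAM is still visible:

    import random, time

    def ns_per_access(n, steps=1_000_000):
        # Build a single random cycle (Sattolo's algorithm) so every load
        # depends on the previous one and is hard to prefetch.
        nxt = list(range(n))
        for i in range(n - 1, 0, -1):
            j = random.randrange(i)          # j < i guarantees one big cycle
            nxt[i], nxt[j] = nxt[j], nxt[i]
        idx = 0
        t0 = time.perf_counter()
        for _ in range(steps):
            idx = nxt[idx]                   # each step depends on the last
        return (time.perf_counter() - t0) / steps * 1e9

    for n in (1_000, 100_000, 10_000_000):   # ~cache-sized up to DRAM-sized
        print(f"{n:>10} elements: {ns_per_access(n):6.1f} ns/step")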






Comment (3 votes), mbrig (2 hours ago): Great answer. I just want to emphasise the physical distance factor: at maybe 10 cm for the furthest RAM stick, with 1/3 to 1/2 of the speed of light as the signal speed, plus some extra length to route and match the PCB tracks, you could easily be at a 2 ns round-trip time. If ~15% of your delay is caused by the unbreakable universal speed limit... you're doing real good, in my opinion.
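
The comment's estimate checks out as a quick worked example (all the numbers below are the comment's assumptions, not measured values):

    c = 3.0e8             # speed of light, m/s
    v = c / 2             # signal speed on the PCB, 1/2 c (comment's upper estimate)
    d = 0.10              # ~10 cm to the furthest RAM stick (assumed)
    rt_ns = 2 * d / v * 1e9
    print(f"straight-line round trip: {rt_ns:.1f} ns")
    # ~1.3 ns; with longer, length-matched traces this easily approaches
    # the ~2 ns the comment mentions.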



















Answer 2 (4 votes), by Dave Tweed:
C_Elegans provides one part of the answer: it is hard to decrease the overall latency of a memory cycle.

The other part of the answer is that in modern hierarchical memory systems (multiple levels of caching), memory bandwidth has a much stronger influence on overall system performance than memory latency, so that's where all of the latest development efforts have been focused.

This is true both in general computing, where many processes/threads are running in parallel, and in embedded systems. For example, in the HD video work that I do, I don't care about latencies on the order of milliseconds, but I do need multiple gigabytes per second of bandwidth.
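
The bandwidth-over-latency point can be seen from user code too: streaming through memory runs near the memory's bandwidth, while random access is limited by latency. A sketch using NumPy (the random gather also has to stream the index array, so it understates the gap, but the difference is still large):

    import time
    import numpy as np

    n = 20_000_000
    a = np.arange(n, dtype=np.int64)     # ~160 MB, far larger than any cache
    idx = np.random.permutation(n)       # random access pattern

    t0 = time.perf_counter()
    s = a.sum()                          # sequential: streams the array
    t1 = time.perf_counter()
    g = a[idx].sum()                     # random: gathers element by element
    t2 = time.perf_counter()

    print(f"sequential sum: {a.nbytes / (t1 - t0) / 1e9:5.1f} GB/s")
    print(f"random gather:  {a.nbytes / (t2 - t1) / 1e9:5.1f} GB/s")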






Answer 3 (1 vote), by Michel Keijzers:
I don't have that much insight, but I expect it is a bit of all three.

Economic

For the majority of computers and telephones, the current speed is more than enough. For bigger data storage, SSDs have been developed. People can run video/music and other speed-intensive tasks in (almost) real time, so there is not much need for more speed (except for specific applications like weather prediction, etc.).

Another reason: to keep up with a very high RAM speed, fast CPUs are needed, and that comes with a lot of power usage. The trend toward battery-powered devices (like mobile phones) discourages the use of very fast RAM (and CPUs), which also makes them not economically attractive to produce.

Technical

As chips/ICs shrink (to the nm level now), speed goes up, but not significantly. The shrink is more often used to increase the amount of RAM, which is in greater demand (also an economic reason).

Fundamental

As an example (both are circuits): the easiest way to get more speed (used by SSDs) is to spread the load over multiple components, so that the 'processing' speeds add up. Compare reading from 8 USB sticks at the same time and combining the results with reading the same data from one USB stick sequentially (which takes 8 times as long).
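
A sketch of that "spread the load" idea: model each device as having a fixed access time, then compare one device doing eight reads in a row against eight devices read in parallel. The per-read latency is unchanged; only the aggregate throughput scales:

    import time
    from concurrent.futures import ThreadPoolExecutor

    ACCESS_TIME = 0.05                        # fixed per-read latency (modelled)

    def read_chunk(i):
        time.sleep(ACCESS_TIME)               # stand-in for one device's access
        return i

    t0 = time.perf_counter()
    [read_chunk(i) for i in range(8)]         # one device, eight reads in a row
    t1 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(read_chunk, range(8)))  # eight devices read at once
    t2 = time.perf_counter()
    print(f"serial: {t1 - t0:.2f} s   parallel: {t2 - t1:.2f} s")

Expected output is roughly 0.40 s serial versus 0.05 s parallel: eight times the bandwidth, identical latency per access.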






Comments:

– C_Elegans (6 hours ago): What exactly do SSDs have to do with SDRAM latency?

– Michel Keijzers (5 hours ago): @C_Elegans they are both circuits; for this 'generic' question I don't think there is so much difference.

– Peter Smith (5 hours ago): The amount of time to open a page hasn't really decreased that much, due to the precharge cycle; the amount of energy required is not significantly different today than it was a decade ago. That dominates the access time, in my experience.

– Arseniy (5 hours ago): Every search in a data array uses truly random access, not a data stream. Is that such a rare task? "The easiest way to get more speed (used by SSDs) is to spread the load over multiple components" looks very reasonable. So can we say that true progress in RAM stopped more than 20 years ago?

– C_Elegans (5 hours ago): @MichelKeijzers While they are both circuits, SSDs and SDRAM serve very different use cases and make use of different techniques for storing data. Additionally, saying that CPUs don't really need faster RAM doesn't make much sense; the entire reason most modern CPUs have 3 levels of caches is that their RAM can't be made fast enough to serve the CPU.










