Comments (19)
You can get servers with terabytes of RAM these days. How many startups’ and small to medium businesses’ entire production database could fit in memory on 1 server?
Hint: the answer is “a large majority” (numerically, not by market cap)
Yes, RAM is faster than disk, but for most people it may simply not matter anymore. Put everything in RAM, sync to storage periodically in the background.
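For what it's worth, a minimal sketch of that pattern (all names made up; a JSON snapshot plus atomic rename stands in for a real persistence layer):

```python
import json
import os
import tempfile
import threading

class InMemoryStore:
    """Toy key-value store: all reads and writes hit a dict in RAM;
    a background thread periodically syncs a snapshot to disk."""

    def __init__(self, path, sync_interval=5.0):
        self.path = path
        self.data = {}
        self.lock = threading.Lock()
        self._stop = threading.Event()
        self._syncer = threading.Thread(
            target=self._sync_loop, args=(sync_interval,), daemon=True)
        self._syncer.start()

    def put(self, key, value):
        with self.lock:
            self.data[key] = value

    def get(self, key):
        with self.lock:
            return self.data.get(key)

    def _sync_loop(self, interval):
        # Event.wait doubles as a sleep that close() can interrupt.
        while not self._stop.wait(interval):
            self.sync()

    def sync(self):
        # Write a full snapshot atomically: temp file + rename, so a
        # crash mid-write never leaves a torn file on disk.
        with self.lock:
            snapshot = json.dumps(self.data)
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            f.write(snapshot)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, self.path)

    def close(self):
        self._stop.set()
        self.sync()
```

A real system would want an append-only log between snapshots so writes since the last sync survive a crash, but the shape is the same.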
Yeah, I hope we are going back to the days when developing software meant one needed to understand the underlying hardware. I know that raises the barrier to entry; IMHO that is a good thing. We don't need more people doing software development, just quality people.
It's a weird article. It looks at an insightful set of numbers, fairly assesses them, and then goes on to conclude the stupidest thing.
A switching hub always has more aggregate bandwidth than any single port that is part of it, and a modern CPU sits on the PCIe equivalent of an uplink port. It makes sense to my low-paygrade brain that there would be situations where peripheral-to-peripheral DMA is faster than RAM-to-peripheral transfers.
But that doesn't make the disk access optimizations in your OS harmful, or even useless. Disk-to-CPU access is still best done through those optimizations, for several reasons, one of them being that the author's conclusion isn't entirely supported by the data.
Last I checked, that wasn't how citations worked.
I only sampled one data point (the clock rate given for a "2017 AMD EPYC Rome") and it was significantly off, because it was the Naples chips that were released in 2017 (Rome didn't ship until 2019), and unless 2017 was desperate enough to round a regular clock of 2.2 GHz up to 3 GHz (boost up to 3.2 GHz), the 'research' was a fair bit off...
Doesn't undermine or contradict the author's (bot's?) point, but it's a strange way to provide 'evidence' for an argument.
ChatGPT is not a primary source. Wikipedia is not a primary source. Google Search is not a primary source. Microsoft Encarta is not a primary source. The Encyclopaedia Britannica is not a primary source.
Information aggregators are not primary sources. Identifiable people are primary sources.
Hey ChatGPT, here's a narrative. Please provide some plausible stats to support it.
Note also that SSDs started out only slightly cheaper per GB than DRAM - the 80GB Intel X25-M had a list price of about $500 when it was released in 2008, and references I find on the net show a street price of about $240 for the next-gen 80GB device in 2009. Nowadays you can get a 1TB NVMe drive for about the cost of 16GB of RAM, although you might want to spend a few more bucks to get a non-sketchy device.
The big problem is that it misses a lot of nuance. If you actually try to treat an SSD like RAM and you randomly read and/or write 4 bytes of data that isn't in a RAM cache, you will get performance measured in kilobytes per second, so literally 1,000,000x worse performance. The only way you get good SSD performance is reading or writing large enough sequential chunks.
Generally, a random read/write of a small number of bytes costs about the same as a full chunk. If you constantly hammer an SSD for a long time, the performance numbers also tank, and if that happens, your application, which was already under load, can stall in truly horrible ways.
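You can see the access-pattern difference with a rough sketch like the one below. Caveat: without O_DIRECT (which Python doesn't expose portably), the page cache will absorb much of the penalty, so treat this as an illustration of the two patterns, not a real device benchmark.

```python
import os
import random
import time

def make_test_file(path, size):
    # Create a file of `size` random bytes to read back.
    with open(path, "wb") as f:
        f.write(os.urandom(size))

def random_4byte_reads(path, count, size):
    # `count` reads of 4 bytes each at random offsets via pread,
    # i.e. the "treat the SSD like RAM" access pattern.
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.pread(fd, 4, random.randrange(0, size - 4))
        return time.perf_counter() - start
    finally:
        os.close(fd)

def sequential_read(path, chunk=1 << 20):
    # One pass over the whole file in 1 MiB chunks,
    # i.e. the access pattern SSDs are actually good at.
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        while os.read(fd, chunk):
            pass
        return time.perf_counter() - start
    finally:
        os.close(fd)
```

On a cold cache and a real device, the per-byte cost of the first function is dominated by per-I/O latency, which is the whole point of the comment above.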
This also ignores write endurance: any data with a lifetime measured in, say, minutes should be in RAM; otherwise you can kill an SSD pretty quickly.
When you start trying to design tools to use SSDs optimally, you find it's heavily dependent on use patterns, making it very hard to do this in a portable way, or one that accounts for changes in the business.
And yes, write amplification is one major concern, but the question is: given how hardware has changed, how does one design to avoid it? Our classic 512-byte, 4K, etc. block sizes seem long gone; does the system "magically" hide it, or do we end up with unseen write amplification instead?
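One classic mitigation, sketched below: buffer small appends in RAM and only ever issue block-sized writes, so the device never has to read-modify-write a partial block. The block size and zero-padding scheme here are illustrative assumptions, not what any particular drive or FTL actually does.

```python
import os

class AlignedWriter:
    """Buffers small appends and flushes them in block-sized writes,
    avoiding the partial-block read-modify-write cycles that are one
    source of write amplification."""

    def __init__(self, path, block_size=4096):
        self.block_size = block_size
        self.buf = bytearray()
        self.f = open(path, "ab")

    def append(self, data: bytes):
        self.buf += data
        # Flush only whole blocks; keep any remainder buffered.
        n_full = len(self.buf) // self.block_size * self.block_size
        if n_full:
            self.f.write(self.buf[:n_full])
            del self.buf[:n_full]

    def close(self):
        # Final partial block: pad out to a block boundary so even the
        # last write is aligned. A real system would record the true
        # data length in metadata; the padding here is just for show.
        if self.buf:
            pad = -len(self.buf) % self.block_size
            self.f.write(bytes(self.buf) + b"\x00" * pad)
            self.buf.clear()
        self.f.close()
```

Of course, as the comment above says, whether this helps depends on whether 4K still matches what the firmware is doing underneath.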
IPC has definitely not flatlined. Zen 4 can do something like 50-70% more IPC than a CPU from 10-15 years ago. Zen 5 is capable of ~16% more IPC than Zen 4.
Power usage is also generally worse, since SRAM cells draw continuous power to hold their state, while DRAM cells only use power during read/write/refresh (relying on the capacitors to hold their charge for a short time while not actively powered).
Are you asking why not use SRAM in something like a DIMM? You could. Here's why I wouldn't advocate for it. Assume you had zero-latency SRAM in your DIMM. It still takes ~40ns to get out of the processor by the time you go through the memory controller and PHY. So you'd have an incredibly expensive but small DIMM taking up limited pins on the processor package/die. Even then you'd only cut the memory latency in half, and we'd still be stuck at a new, lower flatline.
Incorporating the SRAM on-die is a different story: you get to scale the latency and bandwidth closer to the other capabilities of the cores.
[1] https://www.karlrupp.net/2018/02/42-years-of-microprocessor-...
Do I need exponential improvements and vector operations in a text editor though?
I'm inclined to think of it like storage in this context. It's scaling, but it will require new thinking to take full advantage of.
Damn, that's harder to parse than it needed to be.
This one is even harder to parse :)