Linux, like any other OS, does not know how the RAM works. As long as the memory controller is properly configured (e.g. refresh timings set up for anything that is not SRAM), the OS does not care whether it runs on plain dynamic memory (plain DRAM), fast page mode RAM (FPM RAM, from the C64-ish times), extended data out RAM (EDO), synchronous DRAM (SDRAM), any of the double data rate SDRAMs (DDR 1/2/3), whatever.
All of those support reading and writing from random places. All will work.
Now cache is a bit different. You do not have to write to it for its contents to change (lines can be evicted and refilled behind your back), and that gets in the way. Still, it is somewhat usable: coreboot uses the cache as a sort of memory during early boot, before the memory controller is properly configured. (For the details, check out the videos of the coreboot talks from FOSDEM 2011.)
So in theory yes, you could use it.
BUT: for practical tasks a system with 1 GB of ‘regular’ ‘medium speed’ memory will perform a lot better than one with only a few MB of super fast memory. Which means you have two choices:
- Build things the normal ‘cheap’ way. If you need more speed, add a few dozen extra computers (all with ‘slow’ memory).
- Or build a single computer at a dozen times the price and significantly less than a dozen times the performance.
Except in very rare cases the latter is not sensible.
Yes, you can, and this is in fact how it’s already done, automatically. The most frequently used parts of RAM are copied into cache. If your total RAM usage is smaller than your cache size (as you suppose), the existing caching mechanism will have copied everything that is in RAM.
The only time when the cache would then be copied back to normal RAM is when the PC goes to S3 sleep mode. This is necessary because the caches are powered down in S3 mode.
Many CPUs allow the cache to be used as RAM. For example, most newer x86 CPUs can configure certain regions as write-back with no fill on reads via the MTRRs. This can be used to designate a region of the address space as, effectively, cache-as-RAM.
Whether this would be beneficial is another question – it would lock the kernel into RAM, but at the same time would reduce the effective size of the cache. There might also be side effects (such as having to disable caching for the rest of the system) that would make this far slower.
In x86 there’s a thing called CAR (Cache-as-RAM), which allows you to write “bare-metal” code such as bootloaders or BIOS routines in a high-level language like C instead of assembly. Many other architectures may have similar features.
So it’s possible for some OS to run entirely in cache. Imagine having a Ryzen™ Threadripper™ 3990X with a total of 292 MB of cache. That’s more than enough to run even some modern tiny Linux distribution. I guess you’d need significant changes to the Linux kernel to make it work, but it’s definitely possible.
For more information, read:
- A Framework for Using Processor Cache as RAM (CAR)
- CAR: Using Cache as RAM in LinuxBIOS
- Can a CPU function with nothing more than a power supply and a ROM, using only the internal cache as RAM?
- Cache-as-Ram (no fill mode) Executable Code
- What use is the INVD instruction?
- How Does BIOS initialize DRAM?
Back in the days of the 486 there were machines where all of the RAM was SRAM. This is back when 8 MB was a lot, but that seems to match your constraints. I’m sure 8 MB of SRAM is much cheaper now than it was back then.
So, you could run Linux in SRAM if the machine was made that way. It’s not theoretical; it’s been done.
But not in cache. Cache is wired differently and, more importantly, addressed differently. You can’t address it the same way: chunks of memory are mapped in individually, not as one contiguous block. And the contents aren’t necessarily what you see on disk: newer Intel chips do a sort of just-in-time “compiling” (more of a CISC-to-RISC micro-op re-encoding) where the micro-ops are what ends up in the (µop) cache. In short, what’s in cache isn’t your program but a transformed view of it, so you couldn’t use it as the in-memory representation of your program any more.
The question is why. Other than “because I can” there’s not a lot of reason for this. The cache system gets you most of the speed benefit at a lot less of the cost. And remember, cost isn’t just dollars: SRAM takes more transistors, which means more electricity.
“can we run linux in L3 Cache?”
No, this is not possible, because cache memory is not directly/linearly addressed.
Due to the way cache memory is designed, the CPU program counter (the IP register) cannot point to a location in cache memory.
A CPU cache has its own “associativity”, and this associativity defines the way “normal” memory is mapped onto the cache. This feature of the cache memory is one of the reasons caches are so fast.