

The cells in a CPU cache are often made from high-speed SRAM cells. This is known as cache-as-RAM; some also refer to this very close, low-latency RAM (with respect to the core) as tightly coupled memory. On some platforms, portions of the cache infrastructure can be repurposed as an SRAM: cache allocation/lookup is disabled, and the SRAM cells in the cache block are presented as a memory region. In general, these SRAM blocks are not cache-coherent with the main memory system, so care must be taken when using these areas, and they should be mapped to a noncached address space. SRAM memory is commonly allocated for a special data structure that is very frequently accessed by the processor, or perhaps for temporal streaming data from an I/O device. Note that it is unusual for the operating system to manage dynamic allocation/de-allocation from such memory; it is usually left to the board support package to provide such features.
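
As an illustration, a board support package might expose such an on-die SRAM window through a very simple allocator along the lines of the sketch below. The base address, size, and the `bsp_sram_alloc` name are hypothetical placeholders; the real values come from the SoC datasheet and the platform's BSP.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical on-die SRAM window; the real base address and size are
 * fixed by the SoC design and documented in the datasheet. The window is
 * assumed to be mapped noncached, since it is not cache-coherent. */
#define SRAM_BASE  0x20000000u
#define SRAM_SIZE  (64u * 1024u)   /* 64 KiB assumed for illustration */

static uintptr_t sram_next = SRAM_BASE;

/* Carve a block out of the SRAM window. 'align' must be a power of two.
 * There is no free(): allocations are typically made once at system
 * initialization and held for the life of the system, which is why a
 * simple bump allocator in the BSP is usually sufficient. */
void *bsp_sram_alloc(size_t bytes, size_t align)
{
    uintptr_t p = (sram_next + (align - 1)) & ~(uintptr_t)(align - 1);

    if (p + bytes > SRAM_BASE + SRAM_SIZE)
        return NULL;               /* region exhausted */

    sram_next = p + bytes;
    return (void *)p;
}
```

A driver that needs a hot, frequently accessed structure (for example a descriptor ring) could call `bsp_sram_alloc(sizeof(ring), 64)` once during initialization and keep the block for the life of the system; because the region is not cache-coherent, the returned pointer should only be used through a noncached mapping.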
RAM memory drivers
The system software and device drivers can allocate portions of the SRAM for their own use. When the SRAM block is placed on die, it sits at a fixed position in the address map. SRAM is usually much faster than DRAM technologies, and an SRAM block often responds to a request within a couple of CPU clock cycles. The technology used to create an SRAM cell is the same as that required for regular SoC logic; as a result, blocks of SRAM memory can be added to SoCs (unlike DRAM, which uses a completely different process technology and is not found directly on the SoC die).
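
For example, a driver on a Linux-based system might map such a fixed-address on-die SRAM block into a noncached region before placing hot data there. The sketch below uses the standard `ioremap()`/`iounmap()` kernel calls; the physical address, length, and module structure are assumptions for illustration only, not a definitive implementation.

```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/io.h>

/* Hypothetical address-map location of the on-die SRAM block; the real
 * value is fixed by the SoC design and documented in the datasheet. */
#define ONCHIP_SRAM_PHYS  0x40080000UL
#define ONCHIP_SRAM_LEN   0x8000UL     /* 32 KiB assumed */

static void __iomem *sram;

static int __init sram_demo_init(void)
{
    /* ioremap() typically establishes an uncached mapping, which matches
     * the requirement that non-coherent SRAM be accessed noncached. */
    sram = ioremap(ONCHIP_SRAM_PHYS, ONCHIP_SRAM_LEN);
    if (!sram)
        return -ENOMEM;

    /* Example access: place a frequently polled counter in SRAM. */
    iowrite32(0, sram);
    return 0;
}

static void __exit sram_demo_exit(void)
{
    iounmap(sram);
}

module_init(sram_demo_init);
module_exit(sram_demo_exit);
MODULE_LICENSE("GPL");
```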

Static random access memory is a volatile storage technology. (Peter Barry and Patrick Crowley, "SRAM Controllers," in Modern Embedded Computing, 2012)
