Level 2 or L2 caching is part of a multilevel storage strategy to improve computer performance. The current model uses up to three levels of cache, called L1, L2, and L3, each bridging the gap between the very fast central processing unit (CPU) and the much slower random access memory (RAM). While the design continues to evolve, the L1 cache is most often integrated into the CPU, while the L2 cache has traditionally been placed on the motherboard (along with the L3 cache, when present). Many processors now incorporate the L2 cache as well as the L1 cache on the chip itself, however, and some even incorporate the L3 cache.
The CPU cache’s job is to anticipate data requests so that when the user clicks on a frequently used program, for example, the instructions needed to run that program are ready, stored in the cache. When this happens, the CPU can process the request without delay, dramatically improving the computer’s performance. The CPU checks the L1 cache first, followed by the L2 and L3 caches. If it finds the required data bits, it’s a cache hit; if the cache failed to anticipate the request, it’s a cache miss, and the data must be pulled from the slower RAM or the even slower hard drive.
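The lookup order described above can be sketched in a few lines of code. This is an illustrative model only: the cache contents, addresses, and helper name `lookup` are all hypothetical, not a description of any real CPU.

```python
def lookup(address, l1, l2, l3, ram):
    """Check each cache level in order; fall back to RAM when all levels miss."""
    for name, cache in (("L1", l1), ("L2", l2), ("L3", l3)):
        if address in cache:
            return name, cache[address]   # cache hit at this level
    return "RAM", ram[address]            # cache miss: pulled from slower RAM

# Hypothetical contents: each level holds a few "instructions" keyed by address.
l1 = {0x10: "instr_a"}
l2 = {0x20: "instr_b"}
l3 = {0x30: "instr_c"}
ram = {0x10: "instr_a", 0x20: "instr_b", 0x30: "instr_c", 0x40: "instr_d"}

print(lookup(0x10, l1, l2, l3, ram))  # found immediately in L1: a cache hit
print(lookup(0x40, l1, l2, l3, ram))  # missed every level: served from RAM
```

The key point the sketch makes is that the search stops at the first level that holds the data, so requests satisfied by L1 never pay L2, L3, or RAM latency.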
Since it is the job of the CPU cache to store bits of data, you may wonder why there is more than one level of cache. Why have L2 cache, let alone L3, when you can just increase L1 cache?
The answer is that the larger the cache, the greater the latency: small caches are faster than large caches. To optimize overall performance, therefore, the smallest and fastest cache sits closest to the CPU itself, followed by a slightly larger pool of L2 cache and an even larger pool of L3 cache. The idea is to keep the most frequently used instructions in L1, with the L2 cache containing the next most likely needed bits of data, and L3 following suit. If the CPU needs to process a request that is not present in the L1 cache, it can quickly check the L2 cache and then L3.
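A rough calculation shows why this hierarchy pays off. The cycle counts and hit rates below are assumed round numbers chosen for illustration, not measurements of any real processor.

```python
# Assumed, illustrative latencies in CPU cycles for each storage level.
LATENCY = {"L1": 4, "L2": 12, "L3": 40, "RAM": 200}

def average_access_time(hit_rates):
    """Expected cycles per access when checking L1, then L2, then L3, then RAM.

    hit_rates gives, for each cache level, the fraction of requests reaching
    that level which it satisfies.
    """
    expected, missed = 0.0, 1.0
    for level in ("L1", "L2", "L3"):
        expected += missed * hit_rates[level] * LATENCY[level]
        missed *= 1.0 - hit_rates[level]
    return expected + missed * LATENCY["RAM"]

# With most requests caught by the small, fast L1, the 200-cycle RAM penalty
# is paid only on the rare request that misses all three levels.
print(average_access_time({"L1": 0.90, "L2": 0.80, "L3": 0.75}))
```

Under these assumed numbers the average access costs only a few cycles, even though RAM alone would cost 200, which is exactly the argument for backing a tiny, fast L1 with progressively larger, slower L2 and L3 pools.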
Cache design is a key strategy in the highly competitive microprocessor market, as it is directly responsible for improving CPU and system performance. Multilevel cache is built from more expensive static RAM (SRAM) chips rather than from cheaper dynamic RAM (DRAM) chips. DRAM and synchronous DRAM (SDRAM) chips are what we commonly refer to simply as RAM; SRAM and SDRAM chips should not be confused.
When looking at new computers, check the L1, L2, and L3 cache amounts. All other things being equal, a system with more CPU cache will perform better, and synchronous cache is faster than asynchronous cache.
SDRAM chips can accept more than one write command at a time.