Today’s microprocessors ship with plenty of onboard L1 and L2 cache, and while most of us know how important these caches are for performance, it may not be obvious exactly what they do or how they work. In an attempt to enlighten us all, Joel Hruska at ExtremeTech has written an interesting article that tackles caching in some detail. Joel outlines the historical background of caching and explains how the technology remains relevant in today’s search for ever more performance.
“Caching was invented to solve a significant problem. In the early decades of computing, main memory was extremely slow and incredibly expensive — but CPUs weren’t particularly fast, either. Starting in the 1980s, the gap began to widen very quickly. Microprocessor clock speeds took off, but memory access times improved far less dramatically. As this gap grew, it became increasingly clear that a new type of fast memory was needed to bridge the gap.”
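For readers who want a concrete feel for what a cache actually does, here is a minimal sketch of a direct-mapped cache simulator. It is purely illustrative and not from the article; the class name, line size, and cache size are my own choices, picked small so the hit/miss behavior is easy to follow.

```python
# Toy direct-mapped cache simulator (illustrative only; names and
# parameters are assumptions, not taken from the ExtremeTech article).

LINE_SIZE = 64      # bytes per cache line, a common real-world value
NUM_LINES = 8       # deliberately tiny so mappings are easy to see

class DirectMappedCache:
    def __init__(self):
        # Each slot holds the tag of the memory line currently cached
        # there, or None if the slot is empty.
        self.slots = [None] * NUM_LINES
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line = address // LINE_SIZE   # which memory line this byte is in
        index = line % NUM_LINES      # which cache slot that line maps to
        tag = line // NUM_LINES       # distinguishes lines sharing a slot
        if self.slots[index] == tag:
            self.hits += 1
            return "hit"
        # Miss: fetch the line from (slow) main memory, evicting
        # whatever previously occupied this slot.
        self.slots[index] = tag
        self.misses += 1
        return "miss"

cache = DirectMappedCache()
# Sequential scan: the first touch of each 64-byte line misses, and the
# remaining 63 bytes of that line hit -- the basic reason linear memory
# access is fast on cached hardware.
for addr in range(4 * LINE_SIZE):
    cache.access(addr)
print(cache.hits, cache.misses)   # 252 hits, 4 misses
```

Real caches add associativity, write policies, and multiple levels, but the core idea is the same: keep recently used lines in fast storage so most accesses avoid the slow main-memory trip the article describes.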