Least Frequently Used (LFU) Cache Implementation

What is LFU Cache?

Least Frequently Used is a cache algorithm used to manage memory within a computer. In this method, the system keeps track of the number of times a block is referenced in memory. The system removes the item with the lowest reference frequency when the cache is full.

Problem Statement of LFU Cache

Our goal is to design a data structure that follows the constraints of a Least Frequently Used (LFU) cache. Our LFU cache should support the following operations:

  • LFUCache(int capacity): Initializes the object with the capacity of the data structure.
  • int get(int key): Returns the value of the key if the key exists in the cache; otherwise, returns -1.
  • void put(int key, int value): Updates the value of the key if it is present, or inserts the key-value pair if it is not. When the cache reaches its capacity, it should invalidate and remove the least frequently used key before inserting the new item. In case of a tie, i.e. two or more keys with the same frequency, the least recently used key is invalidated.

The frequency for a key in the cache is incremented when either a get or put operation is called on it. The operations get and put must each run in O(1) average time complexity.
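
For instance, with a capacity of 2, the operations behave as follows (an illustrative trace of the interface defined above):

```java
LFUCache cache = new LFUCache(2);
cache.put(1, 1);   // cache = {1=1}, freq(1) = 1
cache.put(2, 2);   // cache = {1=1, 2=2}, freq(2) = 1
cache.get(1);      // returns 1, freq(1) becomes 2
cache.put(3, 3);   // cache is full: evicts key 2 (lowest frequency)
cache.get(2);      // returns -1 (not found)
cache.get(3);      // returns 3, freq(3) becomes 2
cache.put(4, 4);   // keys 1 and 3 tie at frequency 2: evicts key 1 (least recently used)
cache.get(1);      // returns -1 (not found)
cache.get(3);      // returns 3
cache.get(4);      // returns 4
```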

Brute force approach to implement LFU Cache

We initialize an array of size equal to the capacity of our cache. Each element stores the key and value along with the frequency and the time at which the key was last accessed. We use this frequency and timestamp to determine the least frequently used element.

[Figure: LFU Cache class structure for the brute force approach]

  • int get(int key): We traverse the array and compare the key of each element with the given key. If we find a matching element, we increment its frequency, update its timestamp, and return its value. If no element matches, we return -1. Time complexity = O(n).
  • void put(int key, int value): If the array is not full, we create a new element with frequency 1 and timestamp 0, and increase the timestamp of every existing element by 1. If the array is full, we must first delete the least frequently used element: we iterate through the array and find the element with the least frequency, breaking frequency ties by choosing the least recently used element (the one with the oldest timestamp). We then insert the new element in its place. Time complexity = O(n). See the sketch after this list.
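
A minimal Java sketch of this brute force approach, assuming hypothetical names (LFUCacheBruteForce, Entry) and the timestamp scheme described above, where a larger timestamp means older:

```java
class LFUCacheBruteForce {
    // Each slot stores a key, a value, an access frequency, and a timestamp
    // (incremented for all entries on every insertion, so larger = older).
    private static class Entry {
        int key, value, frequency = 1, timestamp = 0;
        Entry(int key, int value) { this.key = key; this.value = value; }
    }

    private final Entry[] cache;
    private int size = 0;

    LFUCacheBruteForce(int capacity) { cache = new Entry[capacity]; }

    public int get(int key) {
        for (int i = 0; i < size; i++) {        // O(n) scan
            if (cache[i].key == key) {
                cache[i].frequency++;
                cache[i].timestamp = 0;         // mark as most recently used
                return cache[i].value;
            }
        }
        return -1;
    }

    public void put(int key, int value) {
        if (cache.length == 0) return;
        for (int i = 0; i < size; i++) cache[i].timestamp++;  // age all entries
        for (int i = 0; i < size; i++) {        // update in place if present
            if (cache[i].key == key) {
                cache[i].value = value;
                cache[i].frequency++;
                cache[i].timestamp = 0;
                return;
            }
        }
        if (size < cache.length) {              // room left: append
            cache[size++] = new Entry(key, value);
            return;
        }
        // Full: evict the lowest frequency, breaking ties by oldest timestamp.
        int victim = 0;
        for (int i = 1; i < size; i++) {
            if (cache[i].frequency < cache[victim].frequency ||
                (cache[i].frequency == cache[victim].frequency &&
                 cache[i].timestamp > cache[victim].timestamp)) {
                victim = i;
            }
        }
        cache[victim] = new Entry(key, value);
    }
}
```

Both operations scan the whole array, so the brute force approach cannot meet the O(1) requirement.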

An efficient approach to implement LFU Cache

First, we will implement the insert and access operations in O(1) time. To achieve this, we need two maps: one to store the key-value pairs and a second one to store the count/frequency of access for each key.

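Here is a minimal sketch of this two-map idea, with hypothetical names (valueMap, frequencyMap); eviction is deliberately left out for now:

```java
import java.util.HashMap;
import java.util.Map;

class LFUCacheNoEviction {
    private final int capacity;
    private final Map<Integer, Integer> valueMap = new HashMap<>();     // key -> value
    private final Map<Integer, Integer> frequencyMap = new HashMap<>(); // key -> access count

    LFUCacheNoEviction(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        if (!valueMap.containsKey(key)) return -1;
        frequencyMap.merge(key, 1, Integer::sum);  // count this access
        return valueMap.get(key);
    }

    public void put(int key, int value) {
        // Eviction when valueMap.size() == capacity is still missing; see below.
        valueMap.put(key, value);
        frequencyMap.merge(key, 1, Integer::sum);
    }
}
```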

Now, in the above code, we need to implement the eviction strategy. When the map size reaches the maximum capacity, we need to find the item with the lowest frequency count. In our current implementation, we would have to iterate through all the values of the frequencyMap, find the lowest count, and remove the corresponding key from both maps. This takes O(n) time.

Also, if multiple keys have the same frequency count, we cannot find the least recently used key with our current implementation, as a HashMap does not preserve insertion order. To handle this, we add one more data structure: a sorted map whose key is the frequency and whose value is a list of elements with that frequency.

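In Java, this extra structure could look like the following (the field names here are our own):

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.TreeMap;

class LFUCacheStructures {
    Map<Integer, Integer> valueMap = new HashMap<>();     // key -> value
    Map<Integer, Integer> frequencyMap = new HashMap<>(); // key -> frequency
    // NEW: frequency -> keys with that frequency, in arrival order, so the
    // head of each list is the least recently used key at that frequency.
    TreeMap<Integer, LinkedList<Integer>> frequencyToKeys = new TreeMap<>();
}
```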

Now, new items can be added to the end of the list for frequency 1. Since the map stores the frequencies in sorted order, we can always locate the list with the lowest frequency at the front of the map (strictly, this lookup is O(log n) for a tree-based sorted map; tracking the minimum frequency in a separate variable makes it O(1)). Deleting that list's first item evicts, in O(1) time, the least recently used key among those with the lowest frequency.

LFU cache implementation in Java: Using a singly linked list

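The following is a minimal sketch of this version, assuming hypothetical names (LFUCacheWithLists, touch) and using java.util.LinkedList as the per-frequency list for brevity; the behaviour is conceptually the same as with a hand-rolled singly linked list:

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.TreeMap;

class LFUCacheWithLists {
    private final int capacity;
    private final Map<Integer, Integer> valueMap = new HashMap<>();
    private final Map<Integer, Integer> frequencyMap = new HashMap<>();
    // frequency -> keys at that frequency; head of each list = least recently used.
    private final TreeMap<Integer, LinkedList<Integer>> frequencyToKeys = new TreeMap<>();

    LFUCacheWithLists(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        if (!valueMap.containsKey(key)) return -1;
        touch(key);
        return valueMap.get(key);
    }

    public void put(int key, int value) {
        if (capacity <= 0) return;
        if (valueMap.containsKey(key)) {
            valueMap.put(key, value);
            touch(key);
            return;
        }
        if (valueMap.size() == capacity) {
            // First entry of the sorted map = lowest frequency; its head = LRU.
            Map.Entry<Integer, LinkedList<Integer>> lowest = frequencyToKeys.firstEntry();
            int evicted = lowest.getValue().removeFirst();
            if (lowest.getValue().isEmpty()) frequencyToKeys.remove(lowest.getKey());
            valueMap.remove(evicted);
            frequencyMap.remove(evicted);
        }
        valueMap.put(key, value);
        frequencyMap.put(key, 1);
        frequencyToKeys.computeIfAbsent(1, f -> new LinkedList<>()).addLast(key);
    }

    // Move the key from its current frequency list to the next one.
    private void touch(int key) {
        int freq = frequencyMap.get(key);
        LinkedList<Integer> list = frequencyToKeys.get(freq);
        list.remove(Integer.valueOf(key));   // O(n): must scan the list for the key
        if (list.isEmpty()) frequencyToKeys.remove(freq);
        frequencyMap.put(key, freq + 1);
        frequencyToKeys.computeIfAbsent(freq + 1, f -> new LinkedList<>()).addLast(key);
    }
}
```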

Thus, both insert and delete operations are O(1), i.e. constant-time operations.

While solving the eviction problem, however, we have increased the access operation time to O(n). All the item-keys sharing the same frequency sit in one list, so when one of these items is accessed, we need to move it to the list of the next frequency. To do that, we first have to iterate through the list to find the item, which in the worst case takes O(n) operations.

[Figure: Frequency map in LFU cache]

To solve this problem, we need some way to access that item in the list directly, without iteration. If we can do that, we can delete the item from its current frequency list in O(1) time and append it to the end of the next frequency's list in O(1) time.

For this, we need a doubly linked list. We will create a node that stores an item’s key, value, and position in the list. We will convert the list to a doubly-linked list.

[Figure: LFU cache visualisation]

LFU Cache Implementation Using Doubly Linked List (Java Code)

You can find the complete code in this GitHub gist: https://gist.github.com/PrasannaIITM/9e95abac55dcb6da4497da67bb46f3a9

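For reference, below is a minimal self-contained sketch of this doubly-linked-list design; it may differ from the gist above in naming and details. One common refinement, used here, is to track the minimum frequency in a variable (minFreq) instead of consulting a sorted map, so that finding the eviction candidate is also O(1):

```java
import java.util.HashMap;
import java.util.Map;

class LFUCache {
    // Doubly linked list node: stores the key, value, frequency, and,
    // through prev/next, its own position in the list.
    private static class Node {
        int key, value, freq = 1;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    // Sentinel-based doubly linked list; the head side is least recently used.
    private static class DList {
        final Node head = new Node(0, 0), tail = new Node(0, 0);
        int size = 0;
        DList() { head.next = tail; tail.prev = head; }

        void addLast(Node n) {          // append as most recently used
            n.prev = tail.prev; n.next = tail;
            tail.prev.next = n; tail.prev = n;
            size++;
        }
        void remove(Node n) {           // O(1) unlink via the node's own links
            n.prev.next = n.next; n.next.prev = n.prev;
            size--;
        }
        Node removeFirst() {            // remove the least recently used node
            Node n = head.next; remove(n); return n;
        }
    }

    private final int capacity;
    private final Map<Integer, Node> nodeMap = new HashMap<>();  // key -> node
    private final Map<Integer, DList> freqMap = new HashMap<>(); // freq -> list
    private int minFreq = 0;

    LFUCache(int capacity) { this.capacity = capacity; }

    public int get(int key) {
        Node n = nodeMap.get(key);
        if (n == null) return -1;
        touch(n);
        return n.value;
    }

    public void put(int key, int value) {
        if (capacity <= 0) return;
        Node n = nodeMap.get(key);
        if (n != null) { n.value = value; touch(n); return; }
        if (nodeMap.size() == capacity) {
            // Evict the LRU node of the lowest-frequency list.
            Node victim = freqMap.get(minFreq).removeFirst();
            nodeMap.remove(victim.key);
        }
        Node fresh = new Node(key, value);
        nodeMap.put(key, fresh);
        freqMap.computeIfAbsent(1, f -> new DList()).addLast(fresh);
        minFreq = 1;                    // a brand-new key always has frequency 1
    }

    // Move a node to the list of the next frequency in O(1).
    private void touch(Node n) {
        DList current = freqMap.get(n.freq);
        current.remove(n);
        if (current.size == 0 && minFreq == n.freq) minFreq++;
        n.freq++;
        freqMap.computeIfAbsent(n.freq, f -> new DList()).addLast(n);
    }
}
```

Since nodeMap gives O(1) access to each node's position, both get and put now run in O(1) average time, which meets the requirement from the problem statement.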

Real-World Application of LFU Cache

Consider a caching network proxy application for the HTTP protocol. This proxy typically sits between the internet and a user or a set of users. It ensures that all the users can access the internet and enables sharing of all the shareable resources for optimum network utilization and improved responsiveness. Such a caching proxy should maximize the amount of data it can cache in the limited amount of storage or memory at its disposal. Typically, many static resources such as images, CSS style sheets, and JavaScript code can easily be cached for a reasonably long time before newer versions replace them. These static resources are included in almost every page, so caching them benefits nearly every request. Furthermore, since a network proxy must serve thousands of requests per second, the overhead needed to do so should be kept to a minimum.

To that effect, it should evict only those resources that are not used very frequently and keep the frequently used ones, since the latter have proved useful over time. Thus, caching proxies can employ the LFU replacement strategy to evict the least frequently used items from the cache when memory runs low. LRU might also seem an appropriate strategy here, but it fails when the request pattern is such that all requested items do not fit into the cache and are requested in a round-robin fashion: items would constantly enter and leave the cache, with no user request ever hitting it. Under the same conditions, however, the LFU algorithm performs much better, with most cached items resulting in a cache hit.

Enjoy learning, Enjoy Algorithms!
