All content related to system design

MapReduce In System Design

MapReduce is a batch-processing programming paradigm that enables massive scalability across a large number of servers, most famously in Hadoop clusters. Google published the model in 2004, and it was called "the algorithm that makes Google so massively scalable." MapReduce is a relatively low-level programming model compared to the parallel processing systems developed for data warehouses many years earlier.
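The paradigm can be illustrated with the classic word-count example. This is a minimal single-process sketch of the three phases (map, shuffle, reduce); a real MapReduce framework would distribute each phase across many machines, but the data flow is the same:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
result = reduce_phase(shuffle(map_phase(docs)))
print(result["the"])  # 3
print(result["fox"])  # 2
```

In a real cluster, the map tasks run in parallel on input splits, the shuffle moves intermediate pairs over the network, and the reduce tasks run in parallel per key range.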

EnjoyAlgorithms

Design a Web Crawler

A web crawler (also known as a spider) is a system for downloading, storing, and analyzing web pages. Web crawlers are used for a wide variety of purposes. Most prominently, they are one of the main components of web search engines, systems that compile a collection of web pages, index it, and allow users to issue index queries and find web pages that match queries.
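At its core, a crawler performs a graph traversal: download a page, record it, and enqueue the links it has not seen yet. This sketch runs a breadth-first crawl over a hypothetical in-memory link graph standing in for real HTTP fetches; a production crawler would add politeness delays, robots.txt handling, and distributed storage:

```python
from collections import deque

# Hypothetical in-memory "web": page URL -> list of outgoing links.
PAGES = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html", "d.html"],
    "d.html": [],
}

def crawl(seed):
    # Breadth-first crawl: fetch a page, record it, enqueue unseen links.
    visited = set()
    frontier = deque([seed])
    order = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in PAGES.get(url, []):
            if link not in visited:
                frontier.append(link)
    return order

print(crawl("a.html"))  # ['a.html', 'b.html', 'c.html', 'd.html']
```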


Throttling and Rate Limiting

At its most basic level, a rate limiter restricts the number of events a given object (a person, device, IP address, etc.) can perform in a given time range. In other words, it caps how many requests a sender can issue in a specific time window and blocks further requests once the cap is reached.
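One of the simplest strategies is a fixed-window counter: count requests per sender in the current window and reject once the cap is hit. This is a minimal sketch (class and method names are illustrative); production systems often prefer token-bucket or sliding-window variants to avoid bursts at window boundaries:

```python
import time

class FixedWindowRateLimiter:
    # Caps how many requests a sender can issue within a fixed time window.
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # sender -> (window_start, count)

    def allow(self, sender, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters.get(sender, (now, 0))
        if now - start >= self.window:
            start, count = now, 0   # window expired: reset the counter
        if count >= self.limit:
            return False            # cap reached: block the request
        self.counters[sender] = (start, count + 1)
        return True

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
results = [limiter.allow("client-1", now=0.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

After the window elapses, the counter resets and the sender can issue requests again.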


Metrics in Software Development

In this article, we try to answer these questions by examining different situations we might encounter during software development.


Design Key-Value Store

A key-value database is a non-relational database that stores data using a simple key-value mechanism. Data is stored as a collection of key-value pairs, with the key serving as a unique identifier. Both keys and values can be anything, from simple values to complex compound objects.
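The interface of a key-value store reduces to three operations: put, get, and delete by key. This is a minimal in-memory sketch of that interface; a real store like the one designed in this article would add persistence, partitioning, and replication behind the same API:

```python
class KeyValueStore:
    # Minimal in-memory key-value store: unique keys map to arbitrary values.
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        # Returns True if the key existed and was removed.
        return self._data.pop(key, None) is not None

store = KeyValueStore()
store.put("user:42", {"name": "Ada", "role": "admin"})
print(store.get("user:42")["name"])  # Ada
print(store.delete("user:42"))       # True
print(store.get("user:42"))          # None
```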


Peer to Peer Networks in System Design

In the common client-server architecture, multiple clients communicate with a central server. A peer-to-peer (P2P) architecture, by contrast, consists of a decentralized network of peers: nodes that can act as both clients and servers. Without the need for a central server, P2P networks distribute the workload among peers, and all peers both contribute and consume resources within the network.


Storage and Redundancy

A storage device is a piece of hardware used primarily for data storage. Storage is the mechanism that allows a computer to preserve data, either temporarily or permanently. Storage devices such as flash drives and hard drives are a fundamental component of most digital devices, allowing users to store all kinds of information: videos, documents, photographs, and raw data.


Throughput - System Design Concept

Throughput is defined as the total number of items processed per unit of time; in other words, it is the rate at which something is produced or handled. It is commonly expressed as the number of bits transmitted per second or the number of HTTP operations per day.
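The definition is simple arithmetic, which this small sketch makes concrete for the two units mentioned above (bits per second and request rate):

```python
def throughput(items_processed, elapsed_seconds):
    # Throughput = items processed per unit of time.
    return items_processed / elapsed_seconds

# 1,000,000 bytes transferred in 4 seconds, expressed in bits per second.
bytes_transferred = 1_000_000
bps = throughput(bytes_transferred * 8, 4)
print(bps)  # 2000000.0 (bits/second)

# 600 HTTP requests handled in one minute.
rps = throughput(600, 60)
print(rps)  # 10.0 (requests/second)
```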


Web Sockets: System Design Concept

While building any web application, one needs to consider what kind of delivery mechanism would be best. The web has been built around HTTP's request-response paradigm, but that paradigm carries HTTP's per-request overhead, which makes it unsuitable for low-latency applications. In this blog, we focus on WebSockets, a vital component behind applications like multiplayer games and any other application that relies on real-time data transfer. We give a complete insight into WebSockets: how they work and their key components. So without further delay, let's look at what WebSockets are :)
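A WebSocket connection begins as a plain HTTP request that is upgraded to a persistent two-way channel. In the RFC 6455 handshake, the server proves it understood the upgrade by appending a fixed GUID to the client's `Sec-WebSocket-Key` header, hashing with SHA-1, and base64-encoding the digest into `Sec-WebSocket-Accept`. This sketch computes that response value using the example key from the RFC itself:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    # Sec-WebSocket-Accept = base64(SHA-1(key + GUID))
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Example Sec-WebSocket-Key from RFC 6455:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Once the handshake completes, both sides keep the TCP connection open and exchange framed messages in either direction, avoiding a new HTTP request per message.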


CAP Theorem in System Design

The CAP Theorem is one of the essential concepts for designing networked shared-data systems. It states that a distributed database system can provide at most two of three guarantees: Consistency, Availability, and Partition Tolerance. The theorem helps us make trade-offs among these three properties based on the unique requirements of our system.


Consistent Hashing in System Design

Consistent hashing is one of the most widely used techniques in distributed systems, as it offers considerable flexibility when scaling an application. This blog discusses the key concepts and approaches that come in handy while scaling out a distributed storage system. Consistent hashing is frequently applied to solve various system-related challenges and is quite helpful in system design interviews.
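The core idea: nodes and keys are hashed onto the same circular space, and each key is served by the first node clockwise from its position, so adding or removing a node only remaps the keys in its neighborhood. This is a minimal sketch (node names and the virtual-node count are illustrative) using MD5 purely as a stable hash and binary search over the sorted ring:

```python
import bisect
import hashlib

class ConsistentHashRing:
    # Nodes and keys hash onto the same ring; a key is served by the first
    # node clockwise from its position. Virtual nodes smooth the load.
    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        # MD5 used only as a stable, well-distributed hash function.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key):
        # First ring position at or after the key's hash, wrapping around.
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.get_node("user:42"))  # deterministically one of the three nodes
```

Because lookups depend only on the hash values, the same key always maps to the same node until the membership of the ring changes.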

