System Design Concepts

Key-Value Database in System Design

A key-value database is a non-relational (NoSQL) database that stores data using a simple key-value mechanism. Its structure is similar to a map or dictionary, where each key is associated with exactly one value. The simplicity of this model makes a key-value database fast, easy to use, scalable, portable, and flexible.
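
As a rough sketch of that model, a key-value store behaves much like an in-memory dictionary; the class and method names below are illustrative, not the API of any particular database:

```python
# A minimal in-memory key-value store sketch (illustrative only).
class KeyValueStore:
    def __init__(self):
        self._data = {}  # each key maps to exactly one value

    def put(self, key, value):
        self._data[key] = value  # overwrites the value if the key already exists

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.put("user:42", {"name": "Alice", "plan": "pro"})
print(store.get("user:42"))  # {'name': 'Alice', 'plan': 'pro'}
```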

Latency in System Design

Latency is an essential system design concept: it measures how long data takes to travel from the client to the server and back to the client. The lower the latency, the better the performance. This blog focuses on the conceptual understanding of latency, how it impacts a system's performance, and measures to improve it.
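
As a hedged illustration, one simple way to get a feel for latency is to time a request's round trip; the endpoint below is only a placeholder:

```python
import time
import urllib.request

# Illustrative sketch: average the round-trip time of a few HTTP requests.
# The URL is a placeholder; substitute an endpoint of your own.
def measure_latency(url="https://example.com", samples=5):
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        timings_ms.append((time.perf_counter() - start) * 1000)
    return sum(timings_ms) / len(timings_ms)

if __name__ == "__main__":
    print(f"average round-trip latency: {measure_latency():.1f} ms")
```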

Process Management in Operating System (OS)

Process management involves tasks such as creating processes, scheduling them, handling deadlocks, and terminating processes. It is the operating system's responsibility to manage all the processes running on the system. In this blog, we will learn about process management and the various algorithms related to it.

Client Server Architecture

The client-server architecture is a distributed application framework consisting of clients and servers, in which the server hosts, manages, and delivers services to clients. Clients connect to a central server and communicate with it over a network, such as the internet.
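
For intuition, here is a minimal sketch of the request-response flow, assuming a single client and server talking over TCP on localhost (the port and messages are arbitrary):

```python
import socket
import threading
import time

# Minimal client-server sketch: the server hosts a service, the client requests it.
def run_server(host="127.0.0.1", port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen()
        conn, _ = srv.accept()                  # wait for a client to connect
        with conn:
            request = conn.recv(1024)           # receive the client's request
            conn.sendall(b"echo: " + request)   # deliver a response

def run_client(host="127.0.0.1", port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"hello server")
        print(cli.recv(1024).decode())          # prints "echo: hello server"

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)                                 # give the server a moment to start listening
run_client()
```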

Types of Load Balancing Algorithms

Load balancers can receive and distribute requests to a particular server based on various load balancing techniques. These techniques use different algorithms to select servers based on a specific configuration. The algorithms fall into two categories: 1) static load balancing and 2) dynamic load balancing.
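
The sketches below illustrate one algorithm from each category, round robin (static) and least connections (dynamic); the server names are placeholders:

```python
import itertools

servers = ["server-a", "server-b", "server-c"]

# Static: round robin cycles through the servers in a fixed order.
_rotation = itertools.cycle(servers)
def pick_round_robin():
    return next(_rotation)

# Dynamic: least connections picks the server with the fewest active connections.
active_connections = {s: 0 for s in servers}
def pick_least_connections():
    server = min(active_connections, key=active_connections.get)
    active_connections[server] += 1  # caller should decrement when the request finishes
    return server

print([pick_round_robin() for _ in range(4)])        # ['server-a', 'server-b', 'server-c', 'server-a']
print([pick_least_connections() for _ in range(3)])  # spreads requests across all three servers
```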

What is Caching in System Design?

Caching is the process of storing the result of a request in a temporary storage location, separate from the original source, so that we can avoid redoing the same operations. In other words, the cache is temporary storage for files and data that is faster to access than the original location.
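
A minimal cache-aside sketch, where `slow_lookup` is a stand-in for a database or remote call:

```python
import time

cache = {}

def slow_lookup(key):
    time.sleep(0.5)              # simulate an expensive operation
    return f"value-for-{key}"

def get(key):
    if key in cache:             # cache hit: skip the expensive work
        return cache[key]
    value = slow_lookup(key)     # cache miss: do the work once...
    cache[key] = value           # ...and remember the result
    return value

get("user:42")   # slow (miss)
get("user:42")   # fast (hit)
```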

Availability: System Design Concept

Availability is the percentage of time in a given period that a system is available to perform its task and function under normal conditions. One way to look at it is how resistant a system is to failures. High availability comes with its own trade-offs, such as higher latency or lower throughput.
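
For example, availability over a period can be computed as uptime divided by total time; the figures below are made up for illustration:

```python
# Availability as the fraction of time a system is up during a period.
def availability(uptime_hours, downtime_hours):
    total = uptime_hours + downtime_hours
    return 100 * uptime_hours / total

# Example: 8,755.6 hours up out of 8,760 hours in a year (~4.4 hours of downtime)
print(f"{availability(8755.6, 4.4):.2f}%")  # about 99.95% ("three and a half nines")
```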

Throughput: System Design Concept

Throughput is defined as the total number of items processed per unit of time, or the rate at which something is produced. It is generally expressed as the number of bits transmitted per second or HTTP operations per day. In this blog, we'll be talking about the importance of throughput in designing any system.

System Design Concepts for Interview Preparation

System design is one of the critical topics that large tech companies ask about during interviews. It is also essential for solving large-scale software problems. This blog will help you get familiar with the concepts important for answering system design questions and for learning system design at an advanced level.

Database Partitioning (Sharding) in System Design

Data partitioning (sharding) is a technique for dividing data into independent components. It is a way of splitting data into smaller pieces so that the data can be accessed and managed easily. Partitioning is the backbone of modern distributed database management systems and helps improve the scalability, manageability, and availability of a system.
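
A minimal sketch of one common approach, hash-based sharding, where each key is routed to one of a fixed number of shards (the dictionaries here stand in for separate databases):

```python
import hashlib

NUM_SHARDS = 4
shards = {i: {} for i in range(NUM_SHARDS)}   # stand-ins for independent databases

def shard_for(key):
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return digest % NUM_SHARDS                # the same key always maps to the same shard

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)].get(key)

put("user:1", "Alice")
put("user:2", "Bob")
print(get("user:1"), {i: len(s) for i, s in shards.items()})
# Note: changing NUM_SHARDS remaps most keys, which is one motivation for consistent hashing.
```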

Load Balancer in System Design

Load balancing is essential for building high-performance and scalable applications, and it is also a popular topic in system design interviews. By definition, a load balancer is a software or hardware device that sits between clients and servers to balance the workload. It protects servers from overload and increases system throughput.

Publisher-Subscriber (Pub-Sub) Design Pattern

The Publish/Subscribe pattern, often shortened to pub/sub, is an architectural design pattern that enables publishers and subscribers to communicate with one another. In this arrangement, publishers and subscribers rely on a message broker to deliver messages from the publisher to the subscribers. Publishers send messages (events) to a channel, which subscribers can join to receive them.
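
A minimal in-process sketch of the pattern, with a toy broker class standing in for a real message broker such as Kafka, RabbitMQ, or Redis Pub/Sub:

```python
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)    # channel -> list of subscriber callbacks

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        for callback in self._subscribers[channel]:
            callback(message)                     # fan the message out to every subscriber

broker = Broker()
broker.subscribe("orders", lambda msg: print("billing saw:", msg))
broker.subscribe("orders", lambda msg: print("shipping saw:", msg))
broker.publish("orders", {"order_id": 101, "status": "placed"})
```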

MapReduce in Hadoop: System Design Concept

MapReduce is a batch-processing programming paradigm that enables massive scalability across a large number of servers in a Hadoop cluster. Published in 2004, it has been called "the algorithm that makes Google so massively scalable." MapReduce is a relatively low-level programming model compared with the parallel processing systems developed for data warehouses many years earlier.
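
As a rough single-machine illustration of the phases, here is the classic word-count example; a real Hadoop job would distribute the map, shuffle, and reduce steps across many servers:

```python
from collections import defaultdict

documents = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map: emit (word, 1) pairs from each input record.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the emitted values by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: combine the values for each key.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # {'the': 3, 'quick': 2, 'dog': 2, ...}
```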

Leader Election in Distributed Systems: System Design Concept

The objective of leader election is to give one entity (a process, host, thread, object, or even a human) special powers within a distributed system. These powers could include the ability to delegate tasks, the ability to modify data, or the responsibility for handling all requests in the system.
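
As a toy illustration, one simple election rule (the idea behind the bully algorithm) is to pick the highest-numbered node that is still alive; production systems typically rely on consensus services such as ZooKeeper, etcd, or Raft-based implementations:

```python
# Toy election: among the nodes currently alive, the highest ID becomes the leader.
nodes = {1: "alive", 2: "alive", 3: "down", 4: "alive", 5: "down"}

def elect_leader(nodes):
    alive = [node_id for node_id, status in nodes.items() if status == "alive"]
    return max(alive) if alive else None

print(elect_leader(nodes))  # 4 becomes the leader; rerun the election if it fails
```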

How to Choose the Right Database?

Databases are a critical component of the world's most complex technology systems, and how they are used has a significant impact on performance, scalability, and consistency. Because this is an essential topic with many moving parts, this article outlines the most crucial database topics you'll need to know for a system design interview.

What is Throttling and Rate Limiting in System Design?

At its most basic level, a rate limiter restricts the number of events a given entity (person, device, IP address, etc.) can perform in a given time range. In general, a rate limiter caps how many requests a sender can issue in a specific time window and blocks further requests once the cap is reached.
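
A hedged sketch of one simple algorithm, a fixed-window counter: at most `limit` requests per sender per `window` seconds (token bucket and sliding window are common alternatives):

```python
import time
from collections import defaultdict

class RateLimiter:
    def __init__(self, limit=5, window=60):
        self.limit = limit
        self.window = window
        self.counters = defaultdict(int)          # (sender, window start) -> request count

    def allow(self, sender):
        window_start = int(time.time() // self.window)
        key = (sender, window_start)
        self.counters[key] += 1
        return self.counters[key] <= self.limit   # block once the cap is reached

limiter = RateLimiter(limit=3, window=60)
print([limiter.allow("10.0.0.1") for _ in range(5)])  # [True, True, True, False, False]
```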

What are Peer-to-Peer (P2P) Networks?

In the common client-server architecture, multiple clients communicate with a central server. A peer-to-peer (P2P) architecture instead consists of a decentralized network of peers: nodes that can act as both clients and servers. Without the need for a central server, P2P networks distribute the workload among peers, and all peers contribute and consume resources within the network.

Long Polling in System Design

Polling is a technique in which a client repeatedly requests information from a server. Long polling is a variation of traditional polling that allows the server to send data to a client whenever it becomes available. The client requests information from the server just as in standard polling, but with the caveat that the server may not respond right away; a complete response is delivered to the client once the data is available.
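
A rough client-side sketch, assuming a hypothetical endpoint that holds each request open until new data exists or a timeout expires:

```python
import time
import urllib.request

# Client-side long-polling loop. The URL is a placeholder, not a real endpoint.
def long_poll(url="https://example.com/updates"):
    while True:
        try:
            # The server may take a long time to answer, so allow a generous timeout.
            with urllib.request.urlopen(url, timeout=60) as response:
                handle(response.read())    # a complete answer arrived: process it
        except OSError:                    # timeout or transient network error
            time.sleep(1)                  # back off briefly before retrying
        # ...then immediately open the next request and wait again.

def handle(data):
    print("received update:", data[:80])
```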

Workflow in a Distributed System

Ever wondered how 1-click buy works on Amazon? How does an e-commerce platform show the status of your order after it is placed? What happens when you cancel your order right after placing it, after your item is shipped, or even after it is delivered? How is all the activity related to an order tied to a single order ID? This blog tackles such system design challenges and lays out key insights on designing a workflow system.

CAP Theorem in DBMS: System Design Concept

The CAP theorem is an essential concept in distributed systems for designing networked shared-data systems. It states that a distributed database system can provide only two of the following three guarantees at the same time: consistency, availability, and partition tolerance. We can make trade-offs among the three based on the unique use case for our system.

Server Sent Events: System Design Concept

Whenever we build a web application that deals with real-time data, we need to consider how to deliver that data to the client, and choosing the best delivery mechanism matters. We are presenting a series of three concept blogs focusing on data transfer between clients and servers. In this blog, we focus on Server-Sent Events and give you a complete insight into their internal working and underlying features.

What are Proxies in System Design?

In computer networking, a proxy is a server that acts as an intermediary between a client and another server. In this blog, we discuss: What is a proxy server? How do proxy servers work? What are proxies used for? What are forward and reverse proxies?

Consistent Hashing: System Design Concept

Consistent hashing is a widely used concept in distributed systems because it offers considerable flexibility when scaling an application. This blog discusses the key concepts and approaches that come in handy when scaling out a distributed storage system. Consistent hashing is frequently applied to solve various system-related challenges and is quite helpful in system design interviews.
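
A minimal hash-ring sketch (no virtual nodes, for brevity) showing how each key is assigned to the first node found clockwise on the ring:

```python
import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((_hash(node), node) for node in nodes)

    def node_for(self, key):
        positions = [position for position, _ in self._ring]
        index = bisect.bisect(positions, _hash(key)) % len(self._ring)
        return self._ring[index][1]   # first node clockwise from the key's position

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("user:42"))
# Adding or removing one node only remaps the keys in its arc of the ring,
# unlike plain `hash(key) % N`, which reshuffles almost every key.
```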

Distributed Systems

Distributed systems are crucial for designing fault-tolerant, highly scalable, and low-latency services. This blog will introduce you to the fundamentals of distributed systems, how they function, and how they apply to real-world scenarios.

What are Network Protocols?

A network protocol is a defined set of rules that specifies how data is transmitted between different devices on the same network. In general, it allows connected devices to interact with each other, irrespective of any differences in their internal processes, structure, or configuration.
