Uber is a ride-hailing service that lets users book rides with nearby drivers. Uber continually experiments with new approaches to serve its users better: it deploys new services to meet market demand, finds the most efficient routes, detects potential fraud, and monitors and updates data to keep its real-time services responsive. In this blog, we’ll look at how to design the Uber system. Without further delay, let’s dive into the key requirements we want our system to satisfy!
When a rider requests a ride through the Uber app, a driver heads to the location where the rider is waiting. Behind the scenes, a thousand servers support that trip, and gigabytes of data are used along the way. Uber’s design began as a simple monolithic architecture, but it has since evolved into a service-oriented architecture. One of the essential functions of the Uber service is matching passengers with cabs, which requires two distinct services in our architecture: a Supply Service for cabs and a Demand Service for riders.
Uber’s design includes a dispatch system for matching supply and demand. This dispatch system communicates with riders’ and drivers’ mobile phones and is in charge of pairing drivers with riders.
An RDBMS was formerly used to store profile data, GPS locations, and everything else. However, as the user base grew, Uber switched to NoSQL databases. The switch made Uber horizontally scalable and improved both write and read availability.
In this section, we’ll be looking at the detailed design of some services.
The Demand Service accepts cab requests over a WebSocket and tracks the rider’s GPS position. It also receives other request attributes, such as the number of seats, the car type, and whether it is a pool ride. The Demand Service then passes these geographic and rider requirements on to the supply side.
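To make this concrete, here is a minimal sketch of the kind of payload the Demand Service might receive over the WebSocket. The field names (`rider_id`, `seats`, `car_type`) are illustrative assumptions, not Uber’s actual API.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shape of a ride request arriving at the Demand Service;
# field names are invented for illustration, not Uber's real schema.
@dataclass
class RideRequest:
    rider_id: str
    lat: float
    lng: float
    seats: int = 1
    car_type: str = "standard"   # e.g. "standard", "xl", "pool"

def encode_request(req: RideRequest) -> str:
    """Serialize the request for transmission over the WebSocket."""
    return json.dumps(asdict(req))

msg = encode_request(RideRequest("rider-42", 37.7749, -122.4194, seats=2))
```

The demand side only needs to validate and forward this payload; the heavy lifting happens in the supply and dispatch services.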
All active cabs report their location to the server once every four seconds, passing through a web application firewall and a load balancer. From the load balancer, the GPS position is transmitted to the data center through Kafka’s REST APIs.
Once Kafka has the most recent location, it propagates through the main memory of the relevant worker nodes. A copy of the location is also sent to the database and to the dispatch optimization system so that the latest position is always available.
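The pipeline above can be sketched with an in-memory stand-in for the Kafka topic: drivers publish GPS fixes, a worker consumes them, keeps the latest position in memory, and forwards a copy for persistence. All names here are illustrative, and a real deployment would use an actual Kafka producer/consumer rather than a queue.

```python
from collections import deque

location_topic = deque()   # toy stand-in for a Kafka topic
latest_position = {}       # worker node's in-memory view of driver positions
persisted = []             # copy forwarded to the database / dispatch optimization

def publish_location(driver_id, lat, lng):
    """Driver app side: push a GPS fix onto the topic."""
    location_topic.append((driver_id, lat, lng))

def consume_once():
    """Worker side: apply one fix to memory and persist a copy."""
    driver_id, lat, lng = location_topic.popleft()
    latest_position[driver_id] = (lat, lng)    # update main memory
    persisted.append((driver_id, lat, lng))    # durable copy

publish_location("driver-7", 37.77, -122.41)
publish_location("driver-7", 37.78, -122.42)
while location_topic:
    consume_once()
```

The key property is that the in-memory view always reflects the last fix consumed, while every fix is retained downstream for analytics and recovery.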
The whole dispatch mechanism is built on map and location data, which means we have to model and index location data correctly. Summarizing and approximating locations from raw latitude and longitude is tricky, so Uber uses the Google S2 library to solve this problem. DISCO has several objectives, including minimizing extra driving, reducing waiting time, and lowering the overall ETA. Let’s have a look at how the dispatch service works:
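S2 maps the sphere onto hierarchical cells so that “nearby” becomes a cheap key lookup instead of distance math over raw coordinates. S2 itself is a separate library; as a rough stand-in for the idea, this sketch keys locations by a flat grid of roughly 0.01-degree squares (about 1 km at the equator), which is an invented simplification, not S2’s actual cell scheme.

```python
import math

def cell_id(lat: float, lng: float, precision: int = 2) -> tuple:
    """Bucket a coordinate into a grid cell key (toy analogue of an S2 cell ID)."""
    scale = 10 ** precision
    return (math.floor(lat * scale), math.floor(lng * scale))

# Two points a few hundred meters apart land in the same cell,
# so a "nearby drivers" query becomes a lookup on a handful of cell keys.
a = cell_id(37.774, -122.419)
b = cell_id(37.778, -122.413)
```

Real S2 cells are hierarchical, cover the sphere without distortion at the poles, and support covering a circle around the rider with a small set of cells, which is why Uber chose it over a naive grid like this.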
When a user requests a ride, the request is sent over a WebSocket to the Demand Service, which now knows that a cab is needed. Using the ride details, the Demand Service submits a request to the Supply Service. Knowing the rider’s location, the Supply Service sends a request to one of the servers in the server ring. Those servers compute ETA values to work out which cabs are closest to the rider. After computing the ETAs, the Supply Service notifies those cabs over WebSockets. If a driver accepts the request, the trip is assigned to that rider and driver.
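The steps above can be compressed into a small sketch: the demand side hands the rider’s position to the supply side, which scores nearby cabs by a crude straight-line ETA and offers the trip to the closest one. Real DISCO uses road-network ETAs and a ring of servers; everything here is illustrative.

```python
import math

def eta_minutes(cab, rider, speed_kmh=30.0):
    """Crude ETA: straight-line distance as a stand-in for a routed ETA."""
    dx = (cab[0] - rider[0]) * 111.0   # ~km per degree of latitude
    dy = (cab[1] - rider[1]) * 111.0 * math.cos(math.radians(rider[0]))
    return math.hypot(dx, dy) / speed_kmh * 60.0

def dispatch(rider, cabs):
    """Return (cab_id, eta) for the cab with the lowest estimated ETA."""
    best_id, best_pos = min(cabs.items(),
                            key=lambda kv: eta_minutes(kv[1], rider))
    return best_id, eta_minutes(best_pos, rider)

cabs = {"cab-1": (37.80, -122.40), "cab-2": (37.776, -122.418)}
cab, eta = dispatch((37.7749, -122.4194), cabs)
```

In the real system the chosen cab receives an offer and may decline, in which case dispatch moves on to the next-best ETA rather than simply returning the minimum.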
Before a trip begins, the app displays an estimated arrival time for your driver at the pickup location. Once the trip begins, the app shows an estimated time of arrival at your destination.
The app displays an estimate of how long it will take nearby drivers to reach your pickup location. Using the slider at the bottom of the screen, you can see the ETA for each vehicle option available in your city. After a trip begins, the app regularly updates the ETA to your destination.
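A minimal version of that recurring update can be sketched as distance divided by an assumed average speed, recomputed on each GPS fix. The haversine distance and the fixed 25 km/h speed are assumptions for the sketch; production ETAs come from routing over the road graph with live traffic.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lng) points."""
    lat1, lng1, lat2, lng2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def update_eta(vehicle, destination, avg_speed_kmh=25.0):
    """Minutes remaining, recomputed each time the vehicle reports a fix."""
    return haversine_km(vehicle, destination) / avg_speed_kmh * 60.0

# As the vehicle moves toward the destination, the displayed ETA shrinks.
e1 = update_eta((37.70, -122.40), (37.79, -122.41))
e2 = update_eta((37.75, -122.40), (37.79, -122.41))
```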
Analytics is the process of making sense of the data we have. Uber must understand the needs of its customers as well as the behavior of its cab drivers; this is how Uber can optimize its system and operating costs and improve customer satisfaction. Uber employs a variety of tools and frameworks for analytics. Drivers’ and riders’ location data is stored in NoSQL, RDBMS, or HDFS stores. Some analytics require real-time data. The Hadoop platform includes several analytics tools: we can dump data from a NoSQL database into HDFS, and we can then query the data in HDFS using tools like Hive.
With the aid of prediction algorithms, the price is raised when demand is high and supply is low. According to Uber, surge pricing helps balance supply and demand: when demand is higher, raising the price brings more taxis onto the road.
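A toy version of that balancing act is a multiplier driven by the demand/supply ratio, capped so prices don’t grow without bound. The thresholds and the cap are invented for this sketch; Uber’s real surge model is far more involved (and predictive).

```python
def surge_multiplier(open_requests: int, available_cabs: int,
                     cap: float = 3.0) -> float:
    """Price multiplier from the demand/supply ratio, clamped to [1.0, cap]."""
    if available_cabs == 0:
        return cap                      # no supply at all: maximum surge
    ratio = open_requests / available_cabs
    return round(min(cap, max(1.0, ratio)), 2)

# Balanced market -> base fare; twice the demand -> twice the fare.
surge_multiplier(10, 10)   # 1.0
surge_multiplier(20, 10)   # 2.0
```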
In the background, the system has to search across many locations, which would be very inefficient, so we cap each grid at 1,000 locations to make searching easier. Whenever a grid exceeds this limit, we split it into four grids of equal size and distribute its locations among them. A QuadTree fits this well: it is a tree-based structure in which each node has four children, and each node holds the locations inside its grid. If a node hits our 1,000-location limit, it is broken into four child nodes and its locations are redistributed among them. All leaf nodes then represent grids that cannot be subdivided further, so the leaf nodes hold the actual driver locations.
Since all active drivers report their positions continuously, we need to keep this data structure up to date, and updating the QuadTree on every position change takes time and resources. To move a driver, we must first identify the correct grid based on the driver’s previous position. If the new position does not fall within the current grid, we delete the driver from that grid and insert them into the proper one. If the destination grid then hits the maximum driver limit, we must repartition it.
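The split-and-move behavior described above can be sketched as a stripped-down QuadTree: each node covers a rectangle, splits into four children when it exceeds a capacity limit (1,000 in the text; tiny here so the split is visible), and moving a driver is a delete followed by a reinsert. This is a minimal illustration, not Uber’s implementation.

```python
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = {}        # driver_id -> (x, y); populated only in leaves
        self.children = None    # None for a leaf, else four child QuadTrees

    def _contains(self, x, y):
        x0, y0, x1, y1 = self.bounds
        return x0 <= x < x1 and y0 <= y < y1

    def insert(self, driver_id, x, y):
        if self.children is not None:
            for child in self.children:
                if child._contains(x, y):
                    return child.insert(driver_id, x, y)
        self.points[driver_id] = (x, y)
        if len(self.points) > self.capacity:
            self._split()

    def _split(self):
        """Grid exceeded its limit: partition into four equal children."""
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my, self.capacity),
                         QuadTree(mx, y0, x1, my, self.capacity),
                         QuadTree(x0, my, mx, y1, self.capacity),
                         QuadTree(mx, my, x1, y1, self.capacity)]
        old, self.points = self.points, {}
        for did, (px, py) in old.items():
            self.insert(did, px, py)

    def remove(self, driver_id):
        if driver_id in self.points:
            del self.points[driver_id]
            return True
        if self.children:
            return any(c.remove(driver_id) for c in self.children)
        return False

    def move(self, driver_id, x, y):
        """Position update: delete from the old grid, insert into the new one."""
        self.remove(driver_id)
        self.insert(driver_id, x, y)

tree = QuadTree(0, 0, 100, 100, capacity=2)
for i in range(5):
    tree.insert(f"d{i}", i * 10, i * 10)   # forces a split past capacity
tree.move("d0", 80, 80)                    # driver crossed into another grid
```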
We need a fast way to communicate the current positions of all nearby drivers to every active client in the region. In addition, while a ride is in progress, our system must keep both the driver and the passenger informed of the vehicle’s current position.
With each driver update, we must edit our QuadTree so that it drops stale data and accurately reflects drivers’ current positions. Because all active drivers report their positions every three seconds, there will be far more updates to our tree than queries for nearby drivers. As soon as the server receives an update on a driver’s position, it notifies all interested clients; to apply the update, it must also notify the appropriate QuadTree server.
To communicate driver locations to customers efficiently, we can use a push model, in which the server pushes locations to all relevant users. When customers open the Uber app, they query the server for nearby drivers. Before sending the list of drivers to the client, the server subscribes to updates from those drivers, keeping track of which clients want to know where each driver is at any given time.
Whenever we receive a QuadTree update for a driver, we can broadcast that driver’s current position to all subscribed clients. This ensures the client always sees the driver’s live location and makes the search more efficient and responsive.
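A minimal push-model sketch: clients subscribe to the drivers returned by their “nearby” query, and every subsequent update for one of those drivers is pushed straight to the subscribers. Delivery here is just an in-memory append; in production it would ride the WebSocket, and the names are illustrative.

```python
from collections import defaultdict

subscribers = defaultdict(list)   # driver_id -> [client ids watching them]
inbox = defaultdict(list)         # client_id -> updates pushed to that client

def subscribe(client_id, driver_ids):
    """Register a client for updates from the drivers it was shown."""
    for did in driver_ids:
        subscribers[did].append(client_id)

def on_driver_update(driver_id, lat, lng):
    """Called on each QuadTree update: fan the new position out to subscribers."""
    for client_id in subscribers[driver_id]:
        inbox[client_id].append((driver_id, lat, lng))

subscribe("rider-1", ["cab-2", "cab-5"])
on_driver_update("cab-2", 37.78, -122.42)   # pushed to rider-1
on_driver_update("cab-9", 37.70, -122.40)   # no subscribers, dropped
```

The inverse mapping (driver to interested clients) is what makes the push cheap: each update touches only the handful of clients currently watching that driver.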
We’d need replicas of these servers so that if the primary dies, a backup can take over. We can also store this data in persistent storage, such as SSDs with fast I/O, so that if both the primary and secondary servers fail, the data can be recovered from persistent storage.
Although data center failure is rare, Uber maintains a backup data center to keep trips running smoothly. This data center has all the necessary components, but Uber never replicates existing trip data into it. Instead, to handle data center failure, it uses driver phones as a source of trip data. Whenever the driver’s phone app communicates with the dispatch system, or an API call is made between them, the dispatch system sends encrypted state data back to the driver’s phone app. If a data center fails, the backup data center knows nothing about the trip, so it requests the data from the driver’s phone app and uses it to bring itself up to date.
In this blog, we discussed how to design an Uber-like system. We have covered only a few of its components here. I hope you liked it. Please do share your views :)