Handling low latencies for real-time inventory and price updates

One of the biggest challenges the engineering team at Snapdeal faces is the ever-growing scale in terms of products, vendors, and customers accessing the website. This growth increases the number of queries and updates hitting the database. During peak hours, it often drives up response times and, consequently, slows down the whole website.

We had already moved from the original monolithic system to a Service-Oriented Architecture (SOA). Breaking the system down into small services based on business modules brought each module's individual technical concerns, such as design/architecture, scaling needs, and availability, into focus.

One such module is the Inventory and Pricing Management System (IPMS). As the name suggests, this module is responsible for inventory and pricing of products sold on Snapdeal’s website. Let us look at its architecture.

Architecture of IPMS

MySQL was used as the authoritative and reliable data source, with MongoDB acting as a cache on top of it. A cluster of application servers exposed REST APIs to the other components via a load balancer. The following sample workflow depicts how read and write requests were handled:

Step 1: A read request first checked for the record in MongoDB.

Step 2: If the record was found, it was served; otherwise the request was forwarded to MySQL.

Step 3: The fetched record was written back to MongoDB using the REST APIs exposed by an application server.

Step 4: A write request invalidated the corresponding key in MongoDB and, if successful, proceeded to update MySQL.

Deleting records from MongoDB in this way made the updates idempotent, so they could safely be replayed at any time. A write-through cache was also an option, but it could have increased latency. Moreover, our system had no need to manage distributed transactions.
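To make the flow concrete, here is a minimal sketch of the cache-aside pattern described above. The CacheStore and SqlStore interfaces are our hypothetical stand-ins for the MongoDB and MySQL access layers, not Snapdeal's actual code:

```java
import java.util.Optional;

// Hypothetical abstractions over the MongoDB cache and the MySQL source of truth.
interface CacheStore {
    Optional<String> get(String key);
    void put(String key, String value);
    void invalidate(String key);  // deletes the key; safe to replay any number of times
}

interface SqlStore {
    String read(String key);
    void write(String key, String value);
}

class InventoryRepository {
    private final CacheStore cache;
    private final SqlStore db;

    InventoryRepository(CacheStore cache, SqlStore db) {
        this.cache = cache;
        this.db = db;
    }

    // Steps 1-3: serve from the cache, fall back to MySQL, then repopulate the cache.
    String readRecord(String key) {
        Optional<String> cached = cache.get(key);
        if (cached.isPresent()) {
            return cached.get();
        }
        String record = db.read(key);
        cache.put(key, record);  // in IPMS this write-back went through the REST APIs
        return record;
    }

    // Step 4: invalidate the cache entry first, then update the source of truth.
    void writeRecord(String key, String value) {
        cache.invalidate(key);
        db.write(key, value);
    }
}
```

Deleting rather than updating the cached entry is what makes the write path idempotent: replaying a stale invalidation at worst causes one extra cache miss, never a wrong value.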

On the downside, this approach could potentially have led to the thundering herd problem, which occurs when multiple requests miss the same record in the cache and simultaneously try to load it from the database. However, the read patterns on the site suggested that the chances of this happening were very low.
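For reference, a common mitigation, had one been needed, is to coalesce concurrent loads of the same key so that only the first request hits the database. A minimal sketch (our illustration, not something IPMS implemented):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Coalesces concurrent cache-miss loads: the first caller for a key triggers the
// database load; every concurrent caller for the same key waits on that same load.
class CoalescingLoader {
    private final ConcurrentHashMap<String, CompletableFuture<String>> inFlight =
            new ConcurrentHashMap<>();

    String load(String key, Function<String, String> dbLoader) {
        CompletableFuture<String> future = inFlight.computeIfAbsent(
                key, k -> CompletableFuture.supplyAsync(() -> dbLoader.apply(k)));
        try {
            return future.join();
        } finally {
            inFlight.remove(key, future);  // allow future reloads of this key
        }
    }
}
```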

Problems with MongoDB

Since Snapdeal operates a marketplace model, multiple sellers sell on the website, and they regularly update their products’ price and inventory, often concurrently. We observed frequent spikes in MongoDB read latencies whenever these updates were happening. Our web servers already fan out one page request into concurrent IPMS requests, but there is still a limit to the number of threads that can be used at the same time. If the updates happened during our peak business hours, latency would increase manifold because of these limited resources.
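To illustrate the constraint, the fan-out looks roughly like the following; the pool size, timeout, and IpmsClient interface are illustrative assumptions, not our production values:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Illustrative fan-out of one page request into parallel IPMS lookups.
// The fixed-size pool is the shared, limited resource: when IPMS latency
// spikes, threads stay occupied longer and page requests queue up behind them.
class PageAssembler {
    private static final ExecutorService IPMS_POOL = Executors.newFixedThreadPool(64);

    interface IpmsClient {
        String getPriceAndInventory(String productId) throws Exception;
    }

    List<String> fetchPrices(List<String> productIds, IpmsClient ipms)
            throws InterruptedException {
        List<Callable<String>> calls = new ArrayList<>();
        for (String id : productIds) {
            calls.add(() -> ipms.getPriceAndInventory(id));  // one REST call per product
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : IPMS_POOL.invokeAll(calls, 200, TimeUnit.MILLISECONDS)) {
            try {
                results.add(f.get());
            } catch (CancellationException | ExecutionException e) {
                results.add(null);  // degraded entry when a lookup times out or fails
            }
        }
        return results;
    }
}
```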

MongoDB (v2.x) implements locks on a per-database basis for most read and write operations. A JIRA ticket (https://jira.mongodb.org/browse/SERVER-1241) is open to implement document-level locking. We could have split our collections across multiple databases, but unfortunately a single large collection was serving 99.99% of requests. So we needed to look elsewhere for a solution.

Evaluating the options

We started doing Proofs of Concept (POCs) on various NoSQL and caching frameworks. We ran tests on Memcache, Cassandra, Ehcache (BigMemory Go), Ehcache (BigMemory Max), Couchbase, Redis, and Aerospike. Our requirement was 20,000 reads/sec with 500 parallel writes/sec, and a 99th-percentile read latency under 5 ms.
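Each POC run boiled down to roughly the following load shape; KvClient here is a hypothetical stand-in for whichever framework's driver was under test, and this is a sketch rather than a rigorous benchmark harness:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the load shape we tested: sustained reads with parallel writes,
// reporting the 99th-percentile read latency at the end of the run.
class LatencyPoc {
    interface KvClient {
        String get(String key);
        void put(String key, String value);
    }

    static void run(KvClient client, int readThreads, int writeThreads, int secondsToRun)
            throws InterruptedException {
        List<Long> latenciesNanos = Collections.synchronizedList(new ArrayList<>());
        AtomicBoolean running = new AtomicBoolean(true);
        ExecutorService pool = Executors.newFixedThreadPool(readThreads + writeThreads);
        Random rnd = new Random();

        for (int i = 0; i < writeThreads; i++) {
            pool.submit(() -> {
                while (running.get()) {
                    client.put("sku-" + rnd.nextInt(1_000_000), "price,inventory");
                }
            });
        }
        for (int i = 0; i < readThreads; i++) {
            pool.submit(() -> {
                while (running.get()) {
                    long start = System.nanoTime();
                    client.get("sku-" + rnd.nextInt(1_000_000));
                    latenciesNanos.add(System.nanoTime() - start);
                }
            });
        }

        Thread.sleep(secondsToRun * 1000L);
        running.set(false);
        pool.shutdownNow();

        List<Long> sorted = new ArrayList<>(latenciesNanos);
        Collections.sort(sorted);
        long p99 = sorted.get((int) (sorted.size() * 0.99));
        System.out.printf("reads=%d p99=%.2f ms%n", sorted.size(), p99 / 1_000_000.0);
    }
}
```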

All the solutions were good as far as throughput was concerned, but we were more interested in consistent read latencies. BigMemory Max gave us the best results in terms of both throughput and consistent read latencies, but it was rejected because of its high cost. Couchbase, too, gave us consistent reads, but Aerospike stood out: along with consistent read latencies, it delivered the lowest 99th-percentile read latency, 4 ms at 50,000 reads/sec and 10,000 writes/sec, on two EC2 instances with 30 GB RAM and 8-core processors. The following graphs show the read latencies with parallel writes for the various frameworks:

[Figure: MongoDB – Response time (ms) vs time]

[Figure: Memcache – Response time (ms) vs time]

[Figure: Couchbase – Response time (ms) vs time]

[Figure: Aerospike – Response time (ms) vs time]

Results of the evaluation

Both Couchbase and Aerospike fulfilled our needs, but with the same hardware configuration and request load, Couchbase gave us 12,500 reads/sec whereas Aerospike gave 50,000 reads/sec. Ignoring the initial spikes during Aerospike’s startup, the framework actually delivered a 99th-percentile latency of 4 ms.


Challenges with Aerospike

Aerospike 2 lacks secondary indexes on data. Fortunately, we didn’t have a use case with range-based queries, so we emulated secondary indexes using another set (analogous to a table) that maps the secondary key to the primary key, as sketched below. This increases network calls, because the web application has to query Aerospike multiple times to fetch a single record.
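A minimal sketch of this workaround with the Aerospike Java client; the namespace, set, and bin names are illustrative, not our production schema:

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

// Emulating a secondary index in Aerospike 2 with a separate "index" set:
// sellerSku -> productId, then productId -> the actual record.
class ManualIndexDao {
    private final AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

    void save(String productId, String sellerSku, long priceInPaise, int inventory) {
        // Primary record, keyed by product id.
        Key primary = new Key("ipms", "products", productId);
        client.put(null, primary,
                new Bin("price", priceInPaise), new Bin("inventory", inventory));
        // Manual index entry, keyed by the secondary key.
        Key index = new Key("ipms", "sku_index", sellerSku);
        client.put(null, index, new Bin("productId", productId));
    }

    Record findBySellerSku(String sellerSku) {
        // Two network calls for one logical lookup: index set first, then data set.
        Record indexEntry = client.get(null, new Key("ipms", "sku_index", sellerSku));
        if (indexEntry == null) {
            return null;
        }
        String productId = indexEntry.getString("productId");
        return client.get(null, new Key("ipms", "products", productId));
    }
}
```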


And the search for the perfect solution continues…

Aerospike 3 provides built-in secondary indexes. This should ideally mean that we will no longer need to rely on a roundabout solution. We are currently in the process of analysing the framework.
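With Aerospike 3, the same lookup should in principle collapse into a single indexed query. A hedged sketch, assuming a string secondary index has been created on the sellerSku bin (names again illustrative):

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.query.Filter;
import com.aerospike.client.query.RecordSet;
import com.aerospike.client.query.Statement;

// Sketch of an Aerospike 3 secondary-index query replacing the two-call
// workaround above. Assumes a string index exists on the "sellerSku" bin.
class IndexedLookup {
    private final AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

    void findBySellerSku(String sellerSku) {
        Statement stmt = new Statement();
        stmt.setNamespace("ipms");
        stmt.setSetName("products");
        stmt.setFilters(Filter.equal("sellerSku", sellerSku));  // uses the secondary index

        RecordSet rs = client.query(null, stmt);
        try {
            while (rs.next()) {
                System.out.println(rs.getRecord());  // one query instead of two gets
            }
        } finally {
            rs.close();
        }
    }
}
```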
