Home
ha-store, or High-Availability store, is a wrapper that abstracts common optimization and security patterns for data fetching.
Its goals are to:
- Decouple performance, security and degradation fallback concerns from the rest of the application logic.
- Provide an extensible interface to control caching, batching, retrying and circuit-breaking.
- Reduce the infrastructure-related costs of orchestration-type applications with smart micro-caching.
Facebook's dataloader is a close competitor, but it lacks a few key features. dataloader's documentation suggests creating a new instance on each request, only coalescing data queries within the scope of that request, with a batching frequency tied to Node's event loop (nextTick). This approach fits their caching strategy: since they do not offer micro-caching, they are forced to make their entire stores short-lived so that they do not cause memory concerns.
ha-store, on the other hand, prefers global, permanent stores. This means that data query coalescing is application-wide, and the batching tick rate is customizable, allowing users to truly optimize roundtrips.
Here's a schema showing an application accepting requests and forwarding them to various services, along with some of the optimization patterns ha-store implements to make that flow more efficient.
Coalescing is an optimization strategy that consists in detecting duplicate requests and reusing in-transit handles. In the case of ha-store, this means checking whether a request to the data-source is already in flight for a given entity and, if so, returning the Promise handle of the original request instead of creating a new one.
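Below is a minimal sketch of the idea in plain JavaScript (the `coalescedFetch` helper and its `resolver` argument are illustrative, not ha-store's API):

```js
// Duplicate requests for the same entity share the Promise of the
// first in-flight request instead of triggering a new fetch.
const inFlight = new Map();

function coalescedFetch(id, resolver) {
  if (inFlight.has(id)) return inFlight.get(id); // reuse the transit handle
  const promise = resolver(id)
    .finally(() => inFlight.delete(id)); // free the handle once settled
  inFlight.set(id, promise);
  return promise;
}
```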
Batching is a blocking protocol: it adds a buffer of a given duration for incoming requests, which allows the application to better understand and bundle them. ha-store implements this with a tick value, which controls the amount of time between batches to a specific data-source. Another option, max, controls the maximum allowed number of distinct records in a batch.
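Here is a minimal sketch of batching with the tick and max options described above. It assumes the resolver takes a list of ids and resolves to an object keyed by id; the names are illustrative, not ha-store's exact API:

```js
// Requests accumulate for `tick` ms, then get flushed as one query;
// `max` caps the number of distinct ids per batch.
function createBatcher(resolver, { tick = 50, max = 100 } = {}) {
  let pending = new Map(); // id -> array of { resolve, reject } handlers
  let timer = null;

  function flush() {
    if (timer) clearTimeout(timer);
    timer = null;
    const batch = pending;
    pending = new Map();
    resolver([...batch.keys()]).then(
      (results) => batch.forEach((handlers, id) =>
        handlers.forEach((h) => h.resolve(results[id]))),
      (err) => batch.forEach((handlers) =>
        handlers.forEach((h) => h.reject(err)))
    );
  }

  return (id) => new Promise((resolve, reject) => {
    if (!pending.has(id)) pending.set(id, []);
    pending.get(id).push({ resolve, reject });
    if (pending.size >= max) flush();                 // max distinct records reached
    else if (!timer) timer = setTimeout(flush, tick); // open a new batch window
  });
}
```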
When combating a frail network or transient glitches, the ability to retry queries is a powerful tool. Under ha-store, it also acts as a trigger system for the circuit breaker.
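As a rough sketch of the pattern only (the `retriedFetch` helper and its options are hypothetical, not ha-store's implementation):

```js
// Retry with exponential backoff: wait longer after each failed attempt.
async function retriedFetch(fn, { attempts = 3, baseDelay = 100 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries, surface the error
      await new Promise((r) => setTimeout(r, baseDelay * 2 ** i));
    }
  }
}
```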
Circuit breaking has many functions, but its main purpose is to detect data-source problems and prevent requests from being made to a degraded system. It acts as a safeguard for that system, allowing it to recover unburdened by traffic, while also speeding up the implementing application, since the costly operation of querying that data-source is skipped.
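A simple sketch of the mechanism (illustrative names, not ha-store's breaker):

```js
// After `threshold` consecutive failures the circuit opens and calls
// fail fast until `cooldown` ms have elapsed; a success closes it again.
function createBreaker(fn, { threshold = 5, cooldown = 10000 } = {}) {
  let failures = 0;
  let openedAt = 0;

  return async (...args) => {
    if (openedAt && Date.now() - openedAt < cooldown) {
      throw new Error('Circuit open: data-source marked as degraded');
    }
    try {
      const result = await fn(...args);
      failures = 0; // success closes the circuit
      openedAt = 0;
      return result;
    } catch (err) {
      if (++failures >= threshold) openedAt = Date.now(); // open the circuit
      throw err;
    }
  };
}
```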
Entity caching via Redis or memcached is a common solution to speed up applications that communicate with data-sources. Unfortunately, only a subset of the data is considered "hot" and queried often; the rest occupies space, and whenever the information is not in the cache, a call to the data-source is still required, not to mention the extra infrastructure concerns like cost and maintenance. Micro-caching in ha-store works out of the box as a smart in-memory cache that targets high-demand records (see Zeta distribution). The base value for this cache usually ranges from a few milliseconds to a few seconds, and with each request for a given entity, its TTL grows up to a certain limit.
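A minimal sketch of that behavior, assuming a simple doubling of the TTL on each hit (the `createMicroCache` helper is illustrative, not ha-store's cache):

```js
// Micro-cache: a short-lived in-memory cache where each hit on a record
// stretches its TTL up to `limit`, so hot records naturally live longer.
function createMicroCache({ base = 100, limit = 5000 } = {}) {
  const store = new Map(); // key -> { value, ttl, expires }

  return {
    get(key) {
      const entry = store.get(key);
      if (!entry || entry.expires < Date.now()) return undefined; // cold or expired
      entry.ttl = Math.min(entry.ttl * 2, limit); // grow the TTL on each hit
      entry.expires = Date.now() + entry.ttl;
      return entry.value;
    },
    set(key, value) {
      store.set(key, { value, ttl: base, expires: Date.now() + base });
    },
  };
}
```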
To leverage micro-caching to its fullest, the first step is understanding the distribution of traffic in your application.
https://www.nngroup.com/articles/zipf-curves-and-website-popularity/
For example, if you are running a blog, you would want to know which articles are being requested and how frequently. This z-distribution, also referred to as Zipf's distribution, tells you what percentage of traffic you can optimize with caching, based on the x-axis. For example, in this schema, caching the first 3 entries covers 50% of traffic.
Caching evenly distributed datasets is not recommended: you will likely either get a negligible hit rate or saturate your store with too many records and run out of memory.
With our distribution, we can derive the optimal configuration for ha-store caching. The configuration requires:
- base: The base caching value for a record in the store. This should be low enough that cold records get evicted rapidly and do not waste memory, but high enough that you actually get hits for the records you want cached (see your distribution's x-axis).
- limit: The absolute maximum time a record can be kept in cache. This is usually dictated by the product design of your application.
- curve: The growth curve function that interpolates between your base and limit values. It is exponential by default (see the sketch after this list).
- steps: The number of interpolation points along the growth curve.
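A minimal sketch of how these four options could interact, assuming an exponential curve normalized between base and limit (the `ttlForHit` helper is illustrative, not ha-store's internal formula):

```js
// Each hit moves a record one step along an exponential curve
// from `base` to `limit` (all durations in milliseconds).
function ttlForHit(step, { base = 1000, limit = 30000, steps = 5 } = {}) {
  const progress = Math.min(step, steps) / steps;        // 0..1 along the curve
  const curve = (Math.exp(progress) - 1) / (Math.E - 1); // exponential by default
  return base + (limit - base) * curve;
}

// Prints TTLs growing from base (1000 ms) to limit (30000 ms)
for (let i = 0; i <= 5; i++) console.log(Math.round(ttlForHit(i)));
```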
The main task is finding the base value, given the settings above and a memory budget. For the optimal base (B) setting, take the N most common requests from a given zeta distribution Z of traffic over one minute; these should cover the desired percentage of total requests. The value of the last entry in that list, Z[N], becomes your B value when this formula is applied:
B = ( 60 / Z[N] ) * 1.2
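To make the units concrete (illustrative numbers): if the Nth most common record is requested Z[N] = 30 times per minute, then B = ( 60 / 30 ) * 1.2 = 2.4 seconds. 60 / Z[N] is the average gap in seconds between hits for that record, and the 1.2 factor adds a 20% margin so the base TTL slightly outlives that gap.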
To calculate the memory allocation (M) in Kb for one minute, given the serialized size S in Kb of an entity E and the permutation pool P:
M = ( S * N ) + ( P * 256kb ) + ( ( Z - N ) * S)
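For example (illustrative numbers, reading Z here as the total number of distinct records seen in that minute): with S = 2 Kb, N = 100 cached records, P = 1, and Z = 500, M = ( 2 * 100 ) + ( 1 * 256 ) + ( ( 500 - 100 ) * 2 ) = 1256 Kb.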
(Work in progress)
Includes all of the ha-store features (retry, circuit-breaker, batching, coalescing, caching) in a very classical Express + Mongo application: Gist
For live testing: Runkit