ElastiCache
ElastiCache is an in-memory caching service that can be used in front of any RDS database, and in front of a DynamoDB database as well
- ElastiCache is an in-memory cache in the cloud
- Improves performance of web applications by allowing you to retrieve information from fast in-memory caches rather than slower disk-based databases
- Sits between your application and the database
- E.g. an application that frequently requests specific product information for your best-selling products
- Takes the load off of your databases
- Good if your database is particularly read-heavy and the data is not changing frequently
ElastiCache Benefits and Use Cases
- Improves performance for read-heavy workloads. E.g. social networking, gaming, media sharing, and Q&A portals
- Frequently-accessed data is stored in-memory for low latency access, improving overall performance of the application
- Also good for compute-heavy workloads (e.g. recommendation engines)
- Can be used to store results of I/O intensive database queries or output of compute-intense calculations
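The compute-heavy use case above amounts to caching the result of an expensive calculation so it is only computed once. A minimal sketch, using a plain dict as a hypothetical stand-in for an ElastiCache client (the function names and key format are illustrative):

```python
import time

# Hypothetical stand-in for an ElastiCache client: a plain in-process dict.
cache = {}

def expensive_score(user_id):
    """Simulate a compute-heavy calculation (e.g. a recommendation score)."""
    time.sleep(0.01)  # placeholder for real work
    return user_id * 42

def get_score(user_id):
    key = f"score:{user_id}"
    if key in cache:           # cache hit: skip the expensive computation
        return cache[key]
    result = expensive_score(user_id)
    cache[key] = result        # store the computed result for next time
    return result
```

Subsequent calls for the same user return from memory instead of recomputing.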
Types of ElastiCache
- Memcached
- Widely adopted memory object caching system
- Multi-threaded
- No Multi-AZ capability, which means the potential of losing everything if an AZ is lost
- Redis
- Open-source, in-memory key-value store
- Supports more complex data structures: sorted sets and lists
- Supports Master/Slave replication and Multi-AZ for cross-AZ redundancy
Caching Strategies
There are two strategies available: Lazy Loading and Write-Through
Lazy Loading
- Lazy Loading loads data into cache only when necessary
- If requested data is in the cache, ElastiCache returns the data to the application.
- If the data is not in the cache or has expired, ElastiCache returns a null.
- Your application then fetches the data from the database and writes the data received into the cache so that it is available next time.
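The steps above can be sketched as a cache-aside read path. A minimal sketch, with plain dicts standing in for ElastiCache and the backing database (key names and sample data are illustrative):

```python
# Lazy-loading (cache-aside) sketch: dicts stand in for ElastiCache and RDS/DynamoDB.
cache = {}
database = {"product:1": {"name": "Widget", "price": 9.99}}

def get(key):
    value = cache.get(key)        # 1. check the cache first
    if value is not None:         #    hit: return straight from memory
        return value
    value = database.get(key)     # 2. miss (or expired): fall back to the database
    if value is not None:
        cache[key] = value        # 3. write into the cache so it's available next time
    return value
```

Only data that is actually requested ever lands in the cache, which is exactly the advantage and the miss penalty described below.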
Lazy Loading Advantages and Disadvantages
| Advantages | Disadvantages |
|---|---|
| Only requested data is cached: avoids filling up the cache with unused data | Cache miss penalty (initial load overhead): the initial request, a query to the database, and a write of the data to the cache |
| Node failures are not fatal; a new, empty node will just have a lot of cache misses initially | Stale data: if data is only updated when there is a cache miss, it can become stale. Doesn't automatically update if the data in the database changes |
Lazy Loading and TTL
To deal with stale data, we set a TTL (Time To Live).
- Specifies the number of seconds until the key (data) expires to avoid keeping stale data in the cache
- Lazy Loading treats an expired key as a cache miss and causes the application to retrieve the data from the database and subsequently write the data into cache with a new TTL
- Does not eliminate stale data - but helps to avoid it
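Extending the lazy-loading read path with a TTL means each cached entry carries an expiry time; an expired entry is treated as a miss and reloaded. A minimal sketch with dicts as stand-ins (the TTL value and key names are illustrative):

```python
import time

cache = {}            # key -> (value, expiry_timestamp); stands in for ElastiCache
database = {"product:1": "Widget"}
TTL_SECONDS = 60      # illustrative TTL

def get_with_ttl(key):
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:   # still fresh: cache hit
            return value
        del cache[key]                 # expired: treat as a cache miss
    value = database.get(key)          # reload from the database
    if value is not None:
        cache[key] = (value, time.time() + TTL_SECONDS)  # store with a new TTL
    return value
```

Stale data can still be served for up to TTL_SECONDS after the database changes, which is why the TTL helps avoid staleness but does not eliminate it.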
Write-Through Caching
Write-Through adds or updates data in the cache whenever data is written to the database
| Disadvantages | Advantages |
|---|---|
| Write penalty: every write involves a write to the cache as well as a write to the database | Data in the cache is never stale |
| If a node fails and a new one is spun up, data is missing until it is added or updated in the database (mitigate by implementing Lazy Loading in conjunction with Write-Through) | Users are generally more tolerant of additional latency when updating data than when retrieving it |
| Wasted resources if most of the data is never read | |
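The write-through flow can be sketched in the same style: every write lands in both stores, so cached reads are never stale. A minimal sketch with dicts standing in for the real cache and database (function names are illustrative):

```python
# Write-through sketch: every write goes to the database AND the cache.
cache = {}
database = {}

def put(key, value):
    database[key] = value   # write to the source of truth
    cache[key] = value      # and update the cache (the "write penalty": two writes)

def get(key):
    # Reads hit the cache; fall back to the database for keys written elsewhere.
    return cache.get(key, database.get(key))
```

Combining this `put` with the lazy-loading `get` covers the node-failure gap noted above: a fresh, empty node repopulates on misses.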
Difference Between ElastiCache and DAX
DAX is optimized only for use with DynamoDB, and it only supports the write-through strategy (i.e. no Lazy Loading). If you need DynamoDB and Lazy Loading, use ElastiCache; otherwise, use DAX.