A hyper Cache service is a powerful key-value store, where the key is a unique string and the value is a JSON document. This service keeps caching simple: you can retrieve a document directly by key, or use a pattern-matching query to return a filtered set of values.
- ttl (time to live): specifies how long to cache a document, such as 3 minutes, 2 hours, or 2 days.
- query: using simple pattern matching, you can request a batch of keys that start with x or end with y. This feature gives you the ability to pull a batch of documents in one request.
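To make the pattern-matching behavior concrete, here is a minimal in-memory sketch of filtering keys by a starts-with or ends-with pattern. The `matchKeys` helper is a hypothetical illustration, not part of the hyper API.

```javascript
// Minimal sketch of pattern matching over cache keys.
// A trailing "*" matches keys that start with the prefix;
// a leading "*" matches keys that end with the suffix.
function matchKeys(keys, pattern) {
  if (pattern.endsWith('*')) {
    const prefix = pattern.slice(0, -1);
    return keys.filter((k) => k.startsWith(prefix));
  }
  if (pattern.startsWith('*')) {
    const suffix = pattern.slice(1);
    return keys.filter((k) => k.endsWith(suffix));
  }
  return keys.filter((k) => k === pattern);
}

const keys = ['movie-1', 'movie-2', 'actor-1'];
console.log(matchKeys(keys, 'movie-*')); // ['movie-1', 'movie-2']
console.log(matchKeys(keys, '*-1'));     // ['movie-1', 'actor-1']
```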
The cache service improves the performance of your applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. The primary purpose of a cache is to provide fast and inexpensive access to copies of data. Most data stores have areas of data that are frequently accessed but seldom updated. Additionally, querying a database is always slower and more expensive than locating a key in a key-value pair cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.
When deciding what data to cache, consider these factors:
- Data that is slow or expensive to get when compared to cache retrieval.
- Queries that are run frequently on data that changes infrequently, such as customer demographic data.
- Your application's tolerance for stale data in specific scenarios, since a cached copy can lag behind the database.
When your application performs a high volume of reads on the database, one way to keep the database from being overloaded with requests is to cache heavily requested documents and serve reads from the cache. With this pattern, the cache is continually updated with the most recent documents: you write to the cache once a document has been successfully created in the datastore, and client requests check the cache before checking the datastore.
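The pattern above can be sketched with plain Maps standing in for the datastore and the cache; `createDocument` and `getDocument` are hypothetical names for illustration.

```javascript
// Hypothetical stand-ins for the datastore and the cache.
const datastore = new Map();
const cache = new Map();

// Write the document to the datastore first; on success, copy it to the cache
// so the cache stays current with the most recent documents.
function createDocument(key, doc) {
  datastore.set(key, doc); // persist first
  cache.set(key, doc);     // then keep the cache current
  return doc;
}

// Reads check the cache before falling back to the datastore.
function getDocument(key) {
  if (cache.has(key)) return cache.get(key);
  const doc = datastore.get(key);
  if (doc !== undefined) cache.set(key, doc); // backfill on a miss
  return doc;
}

createDocument('movie-1', { title: 'Dune' });
console.log(getDocument('movie-1').title); // 'Dune'
```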
Your cache key does not have to be the same as your document key. Use the cache key to enable fast pattern queries on your documents: instead of running the same query for the same list of documents against your database, you can pull this data from the cache, taking stress off your database. For example, if you name your key as the combination of the document type and the document id, then you can query your cache for all of the documents of a given type.
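As a sketch of this key-naming scheme, the example below composes keys from type and id, then uses a prefix query to pull every document of one type. `cacheKey` and `cacheQuery` are hypothetical helpers, not part of the hyper API.

```javascript
// Cache keyed by `${type}-${id}` so a prefix query returns all
// documents of one type.
const cache = new Map([
  ['movie-1', { title: 'Alien' }],
  ['movie-2', { title: 'Dune' }],
  ['actor-7', { name: 'Sigourney Weaver' }],
]);

// Compose a cache key from the document type and id.
function cacheKey(type, id) {
  return `${type}-${id}`;
}

// Return all entries whose key starts with the pattern's prefix.
function cacheQuery(pattern) {
  const prefix = pattern.replace(/\*$/, '');
  return [...cache.entries()]
    .filter(([key]) => key.startsWith(prefix))
    .map(([key, value]) => ({ key, value }));
}

console.log(cacheKey('movie', 3));          // 'movie-3'
console.log(cacheQuery('movie-*').length);  // 2
```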
Caches are great for counts and other aggregates such as mean, median, min, max, standard deviation, etc. By storing aggregates in the cache, you reduce the need to perform complex queries against the database. You can proactively update the count or sum in the cache after the data is stored in the database. When a request comes in, retrieve the value from the cache. If the value does not reside in the cache, then run the expensive query to get the value and post it to the cache for next time.
## Seeding Your Cache
Depending on your use cases, you might find it valuable to seed the cache during off-hours by running queries and caching their results. A good seeding strategy requires knowing when cache hits occur, so you can keep the cached data as fresh as possible.
Searching can be an expensive process. If the same search runs repeatedly, why redo the work when nothing has changed? Using the cache service, you can push all the results of a given search into the cache; when that search runs again, simply return the results from the cache. This use case is similar to the data cache, but instead of caching a single document per key, you cache a set of documents per key from the search request. You may want these items to live in the cache only for a short period of time; you can use the ttl property to specify how long to cache a given key/value pair.
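As a rough sketch of this use case, the example below stores a set of search results under one key with an expiry timestamp, emulating the service's ttl option in memory. `cacheSearch` and `getSearch` are hypothetical helpers for illustration.

```javascript
const cache = new Map();

// Store a set of search results under one key with a time-to-live.
function cacheSearch(key, results, ttlMs) {
  cache.set(key, { results, expiresAt: Date.now() + ttlMs });
}

// Return the cached results, treating an expired entry as a miss.
function getSearch(key) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) {
    cache.delete(key); // expired: evict and report a miss
    return undefined;
  }
  return entry.results;
}

cacheSearch('search:comedy', [{ id: 'movie-1' }, { id: 'movie-5' }], 60_000);
console.log(getSearch('search:comedy').length); // 2
```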
Do you have a use case you would like to share?
Go to our Slack #examples channel and share your caching use case for the community.