Cache API
The cache API provides a create, read, update, and delete interface to a key-value store. This interface satisfies roughly 80% of the caching use cases in application development. It lets you implement cache invalidation strategies in your application and keep load off of your database when users repeatedly request hot paths. The design goal of this API was simplicity and ease of use: you provide a key and a JSON document to the key-value store and it is cached, and you get the document back from the cache with a simple GET call using that key. Two additional features round out the API:
- ttl (Time To Live): specifies how long you want to cache a document, for example 3 minutes, 2 hours, or 2 days
- query: using simple pattern matching, you can request a batch of keys that start with x or end with y, pulling a batch of keys in one request (see the sketch after this list)
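As a rough illustration of this workflow, here is a minimal TypeScript sketch against a hypothetical REST-style interface. The base URL, the `ttl` query parameter, and the `_query` route are placeholder assumptions for illustration, not the documented hyper63 routes.

```ts
// Hypothetical endpoints and parameters, for illustration only.
const CACHE_URL = "http://localhost:6363/cache/app-cache";

// Cache a JSON document under a key, with an optional ttl (e.g. "2h").
async function cacheDoc(key: string, doc: unknown, ttl = "2h"): Promise<void> {
  await fetch(`${CACHE_URL}/${key}?ttl=${ttl}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(doc),
  });
}

// Read a cached document back by key; null on a cache miss.
async function getDoc<T>(key: string): Promise<T | null> {
  const res = await fetch(`${CACHE_URL}/${key}`);
  return res.ok ? ((await res.json()) as T) : null;
}

// Request a batch of keys matching a simple pattern, e.g. "movie-*".
async function queryDocs<T>(pattern: string): Promise<T[]> {
  const res = await fetch(`${CACHE_URL}/_query?pattern=${encodeURIComponent(pattern)}`);
  return res.ok ? ((await res.json()) as T[]) : [];
}
```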
When your application performs a high volume of reads against the datastore, one way to keep the datastore from being overloaded is to cache heavily requested documents and serve reads from the cache. With this pattern the cache is continually updated with the most recent documents: you write to the cache once a document has been successfully created in the datastore, and client requests check the cache before falling back to the datastore.
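Here is a minimal sketch of that read-through / write-through flow in TypeScript. The in-memory Maps are stand-ins for the cache service and the datastore; in a real application you would swap in the actual clients.

```ts
type Doc = { id: string; [k: string]: unknown };

const cache = new Map<string, Doc>();     // stand-in for the cache service
const datastore = new Map<string, Doc>(); // stand-in for the database

// Write-through: persist to the datastore first, then mirror into the cache.
async function createDoc(doc: Doc): Promise<Doc> {
  datastore.set(doc.id, doc);
  cache.set(doc.id, doc); // keep the cache current with the newest documents
  return doc;
}

// Read-through: check the cache first, fall back to the datastore on a miss.
async function readDoc(id: string): Promise<Doc | null> {
  const hit = cache.get(id);
  if (hit) return hit;                   // cache hit, datastore untouched
  const doc = datastore.get(id) ?? null; // cache miss, go to the datastore
  if (doc) cache.set(id, doc);           // backfill the cache for next time
  return doc;
}
```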
Cache Keys
Your cache key does not have to be the same as your document key. You can use the cache key to create fast pattern queries on your documents, saving your datastore from having to work hard to produce the same lists of documents over and over again. For example, if you name your key as the combination of the document type and the document id, you can query your cache for all of the documents of a given type, keeping the load off of your datastore.
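A small sketch of that composite-key idea, using an in-memory Map and an assumed `type-id` naming convention; the helper names here are illustrative, not part of the API.

```ts
const cache = new Map<string, unknown>();

// Build the cache key from the document type plus the document id.
const cacheKey = (type: string, id: string) => `${type}-${id}`;

function cacheSet(type: string, id: string, doc: unknown): void {
  cache.set(cacheKey(type, id), doc);
}

// "Query" every cached document of a given type by matching the key prefix.
function cacheQueryByType(type: string): unknown[] {
  return [...cache.entries()]
    .filter(([key]) => key.startsWith(`${type}-`))
    .map(([, doc]) => doc);
}

// Usage:
//   cacheSet("movie", "42", { title: "Dune" });
//   cacheQueryByType("movie"); // all cached movies, no datastore work
```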
Caches are great for counts and other aggregates such as mean, median, min, max, and standard deviation. By storing aggregates in the cache, you reduce the need to run complex queries against the database. You can proactively update the count or sum in the cache after the data is stored in the database. When a request comes in, retrieve the value from the cache; if the value is not in the cache, run the expensive query to get it and post it to the cache for next time.
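A sketch of that flow, assuming a hypothetical `expensiveCountQuery` against your database and an in-memory Map standing in for the cache:

```ts
const cache = new Map<string, number>();

// Placeholder for a costly aggregation query against the database.
async function expensiveCountQuery(collection: string): Promise<number> {
  return 0;
}

// Proactive update: bump the cached count right after a successful insert.
function afterInsert(collection: string): void {
  const key = `count-${collection}`;
  cache.set(key, (cache.get(key) ?? 0) + 1);
}

// On read: serve from the cache, or run the expensive query once and cache it.
async function getCount(collection: string): Promise<number> {
  const key = `count-${collection}`;
  const cached = cache.get(key);
  if (cached !== undefined) return cached;
  const count = await expensiveCountQuery(collection);
  cache.set(key, count);
  return count;
}
```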
Searching can be an expensive process. If the same search runs repeatedly, why redo the work if nothing has changed? By using the cache service you can push all the results of a given search into the cache; when that search is run again, simply return the results from the cache. This use case is similar to the data cache, but instead of caching a single document per key, you cache a set of documents per key for the search request. You may only want these items to live in the cache for a short period of time; use the ttl property to specify how long to cache a given key/value pair.
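A sketch of caching a result set per search, assuming a hypothetical `runSearch` function and a Map-based cache with a simple ttl check:

```ts
type Entry = { results: unknown[]; expiresAt: number };
const cache = new Map<string, Entry>();

// Placeholder for the expensive search call.
async function runSearch(query: string): Promise<unknown[]> {
  return [];
}

// Cache the whole result set under the search query, for a short ttl.
async function cachedSearch(query: string, ttlMs = 60_000): Promise<unknown[]> {
  const hit = cache.get(query);
  if (hit && hit.expiresAt > Date.now()) return hit.results; // fresh hit
  const results = await runSearch(query);                    // miss or expired
  cache.set(query, { results, expiresAt: Date.now() + ttlMs });
  return results;
}
```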
Do you have a use case you would like to share?
Go to the hyper63 discussions and share your caching use case with the community: https://github.com/hyper63/hyper63/discussions/categories/show-and-tell
Need specific help? You can always reach out to our support team for additional assistance; check out our help desk.

