What is Dynamic Caching?
Dynamic Caching is a middle-mile optimization feature that significantly reduces the delay first-time visitors experience when loading a website. It enables the Instart platform to predictively preload dynamically generated web pages and serve them from cache, avoiding the round trip back to the customer's origin.
How it works
For first-time visitors to a site, the request-response chain for dynamic HTML or other non-cacheable content is sequential: a user requests HTML from the edge, the edge service forwards the request to the origin servers, the origin servers generate a response and send it to the edge service, and finally the edge service forwards the response to the end user.
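To see why this sequential chain is costly, the hops can be sketched as a simple latency sum. The hop names follow the description above; the latency figures are invented purely for illustration and do not reflect measured Instart numbers.

```python
# Illustrative sketch: on a first visit, each hop in the sequential
# request-response chain adds to the time before the user sees a byte.
# All latency values below are invented for illustration.
hops = [
    ("user -> edge", 0.02),
    ("edge -> origin", 0.08),
    ("origin generates response", 0.20),
    ("origin -> edge", 0.08),
    ("edge -> user", 0.02),
]

total = sum(latency for _, latency in hops)
print(f"total first-visit latency: {total:.2f} s")  # prints 0.40 s
```

Serving the response from a middle-mile cache removes the three middle hops, which is the bulk of the total in this sketch.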
Dynamic Caching examines content flowing through the Instart service and identifies access patterns. For example, during a heavy access period in which the home page receives 10 requests per second, Dynamic Caching caches the page in the middle mile so that it is available sooner for the next first-time user accessing that page in the same timeframe. This saves the first-time user from having to wait for a response directly from the origin, which increases the likelihood they will stay and interact with the site.
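The access-pattern detection described above can be thought of as a sliding-window request counter per URL. The sketch below is a hypothetical illustration only: the 10-requests-per-second threshold comes from the example in the text, but the class name, window size, and detection logic are assumptions, not the Instart implementation.

```python
import time
from collections import deque

# Assumed threshold, taken from the home-page example in the text.
REQUESTS_PER_SECOND_THRESHOLD = 10

class AccessPatternTracker:
    """Hypothetical sketch: counts requests per URL over a sliding window."""

    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self.timestamps = {}  # url -> deque of recent request times

    def record(self, url, now=None):
        now = time.monotonic() if now is None else now
        times = self.timestamps.setdefault(url, deque())
        times.append(now)
        # Drop requests that have fallen out of the window.
        while times and now - times[0] > self.window:
            times.popleft()
        return len(times)

    def is_hot(self, url):
        # A "hot" URL would qualify for middle-mile caching.
        return len(self.timestamps.get(url, ())) >= REQUESTS_PER_SECOND_THRESHOLD

tracker = AccessPatternTracker()
for i in range(12):
    tracker.record("/home", now=100.0 + i * 0.05)  # 12 requests in 0.6 s
print(tracker.is_hot("/home"))  # prints True
```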
This first-time visit to a page is non-personalized, since the visitor has not previously been identified. For example, if a request arrives from client A and, shortly after, another arrives from client B, the two requests are identical because neither contains an identifying cookie. The responses from the origin will each contain a unique identifier in a cookie, but they are otherwise interchangeable.
Figure 1: Two requests from first time visitors without Dynamic Caching
With Dynamic Caching enabled, when a qualifying request from Client A arrives, the service passes it to the origin and, on receiving the response, sends it back to Client A. At the same time, the service requests the next response from the origin before another request actually arrives and stores it in the feature cache. So when Client B makes its request, the proxy serves it the prefetched copy from cache and requests another response from the origin to hold for the next request.
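The serve-and-refill cycle above can be sketched as follows. This is a minimal illustrative model, not the Instart implementation: `origin_fetch` stands in for the round trip to the customer's origin, and the refill is shown synchronously for clarity, whereas the service would issue the prefetch in the background.

```python
class PrefetchingProxy:
    """Hypothetical sketch of the serve-and-prefetch cycle."""

    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch
        self.cache = {}  # url -> one prefetched, non-personalized response

    def handle(self, url):
        if url in self.cache:
            # Serve the prefetched copy immediately, then replace it so the
            # next first-time visitor also gets a cache hit.
            response = self.cache.pop(url)
            self.cache[url] = self.origin_fetch(url)  # refill for next request
            return response, "cache"
        # First qualifying request: fetch for this client, and prefetch one
        # more response to hold for the next request.
        response = self.origin_fetch(url)
        self.cache[url] = self.origin_fetch(url)
        return response, "origin"

calls = []
def origin_fetch(url):
    calls.append(url)
    return f"<html>page {len(calls)}</html>"

proxy = PrefetchingProxy(origin_fetch)
_, source_a = proxy.handle("/home")  # Client A: served from origin
_, source_b = proxy.handle("/home")  # Client B: served from cache
print(source_a, source_b)  # prints: origin cache
```

Note that Client B's response is one the origin generated earlier; this is safe only because qualifying responses are interchangeable apart from the identifier cookie.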
Figure 2: Two requests from subsequent first-time visitors with Dynamic Caching
This response is stored in the feature cache in first-in, first-out order for a short time (slightly less than 5 minutes by default, adjustable in the configuration). If a second request does not arrive within this time, another response is automatically prefetched from the origin to keep the cache "warm." In the absence of any further real requests, this auto-prefetching continues at the specified interval until a configured auto-prefetch period expires (one hour by default), so the service does not keep sending pointless prefetch requests to an origin that is no longer receiving qualified traffic.
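The two timers described above (the per-entry freshness window and the overall auto-prefetch period) can be sketched as a single decision function. The function name, parameters, and exact TTL value are assumptions for illustration; the "slightly less than 5 minutes" and one-hour defaults come from the text.

```python
# Hypothetical sketch of the cache-warming timers.
ENTRY_TTL = 5 * 60 - 10         # slightly less than 5 minutes (assumed value)
AUTO_PREFETCH_PERIOD = 60 * 60  # stop refreshing after one hour (default)

def warm_cache_decision(last_real_request_at, fetched_at, now):
    """Decide what to do with a cached entry at time `now` (seconds)."""
    if now - last_real_request_at >= AUTO_PREFETCH_PERIOD:
        return "evict"        # no real traffic for an hour: stop prefetching
    if now - fetched_at >= ENTRY_TTL:
        return "re-prefetch"  # entry went stale: fetch a fresh copy
    return "serve"            # entry still fresh: usable for the next visitor

print(warm_cache_decision(0, 0, 100))       # serve (entry still fresh)
print(warm_cache_decision(0, 0, 400))       # re-prefetch (past the ~290 s TTL)
print(warm_cache_decision(0, 3500, 3700))   # evict (past the 1-hour window)
```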
Dynamic Caching typically benefits any first-time request for a resource, that is, a request carrying no cookies, or only cookies that do not affect the response. Examples of such resources are a site's login page, home page, landing pages, pages intended to benefit Search Engine Optimization (SEO), and REST API calls.
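The "qualifying request" test can be sketched as a cookie check. The specific cookie names below are made-up examples of analytics cookies that would not affect the response; the real qualification rules are configured in the Instart service and are not reproduced here.

```python
# Assumed set of cookies that do not affect the response (illustrative only).
IRRELEVANT_COOKIES = {"_ga", "_gid", "utm_source"}

def is_qualifying(cookie_names):
    """True if the request carries no cookies, or only irrelevant ones."""
    return all(name in IRRELEVANT_COOKIES for name in cookie_names)

print(is_qualifying([]))                     # True: first-time visitor
print(is_qualifying(["_ga"]))                # True: analytics cookie only
print(is_qualifying(["session_id", "_ga"]))  # False: personalized session
```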
Since Dynamic Caching applies only to non-cacheable HTML, it won't benefit sites that make extensive use of HTML caching. It also won't help when most of the site's traffic comes from repeat visitors who already have cookies set.
Dynamic Caching does not work when the origin server sends content based on client IP address. In general, Dynamic Caching should not be enabled if client IP address tracking is desired.
If a page is not eligible for Dynamic Caching, it can still be optimized by Instart's HTML Streaming feature. See What is HTML Streaming?
Dynamic Caching will have the biggest impact on time to first byte and start render times, as the delays that would otherwise occur in requesting the pages from the origin are eliminated. For example, in one case Dynamic Caching produced substantial improvement across key performance indicators on both desktop and mobile devices; the aggregate improvement in time to first byte (TTFB) was 200%. (Please note: performance will vary on an application-by-application basis.)