Sultan Dzhumaev, 31 August 2024

Implementing the Cache-Aside Pattern with MediatR

One of the characteristics of a well-implemented Web API is its use of available tools and strategies to increase the efficiency of request execution. Efficiency can refer to various factors, including execution time, data consistency, hardware consumption, and appropriate validation of inputs.

In this article, we will focus on optimizing execution time. I will use the MediatR library to demonstrate how this caching strategy can be implemented as a cross-cutting concern, where the type of the request determines the behavior of the flow. We won't delve deeply into the Mediator pattern implemented by the MediatR library or other complementary patterns for this caching strategy, as they deserve their own articles.

Cache-Aside Pattern

This pattern focuses on recurring queries performed against an application, storing the state of the query responses. This strategy is useful if you want to increase the performance of your application, and hence reduce the execution time of requests. In addition, in case of an outage of the database or an external API, this pattern encourages data replication, so the data remains available from the cache even if the primary source is down.

In the CQRS pattern, there is a clear distinction between requests that are of type queries (read operations) and commands (write operations). In a layered architecture, following CQRS provides a structured flow for how queries and commands propagate, are processed, and how resources are allocated.

Cache-Aside emphasizes using queries as the source for temporarily caching data, which can then be retrieved when identical queries are executed. The combination of the query object's identifying properties is used to build the cache key under which the response is stored and later looked up when the same query recurs.
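For example, a hypothetical query identified by a customer and an order might compose its cache key from those two properties (the record below is purely illustrative; the key scheme actually used in this article appears in the Implementation section):

// Hypothetical example: the cache key is composed from the query's identifying properties.
public record GetOrderQuery(Guid CustomerId, Guid OrderId)
{
    public string Key => $"GET_ORDER:{CustomerId}:{OrderId}";
}

The illustrations below showcase the relevant scenarios.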

Cache-Aside Flow
The first time the frontend performs a GET call to the backend for resource ID 1
  1. The frontend performs a GET request with ID 1.
  2. The backend checks if the response is already cached in the cache storage. Result: NOT FOUND.
  3. The backend performs a database call to fetch the data from the persistent data storage. Result: OK.
  4. The backend stores the response in the cache based on data from persistent storage with key ID 1. Result: OK.
  5. The backend responds to the frontend with an OK status, including the payload.

Now the cache is populated with the state reflecting the resource with ID 1. The next time the frontend makes the same request with ID 1, the flow will have a shortcut to acquire the needed data, thereby reducing execution time and improving responsiveness for the client on the frontend side.

Cache-Aside Flow
The second time the frontend performs a GET call to the backend for resource ID 1
  1. The frontend performs a GET request with ID 1.
  2. The backend checks if the response is already cached in the cache storage. Result: OK.
  3. The backend responds to the frontend with an OK status, including the payload.

It's no wonder the backend responds faster this time, since we bypassed two roundtrips: the database read and the subsequent cache write. This is especially beneficial if the response object is the result of extensive computation, such as projection or aggregation, as the backend won't need to repeat the same logic to derive the same result.

Drawbacks

While the benefits of this pattern are clear, it's important to consider potential drawbacks, such as data quality and synchronization issues. What happens when the resource with ID 1 is updated or deleted? Without additional logic to compensate for these actions, the cache storage would be completely unaware of any PUT, PATCH, or DELETE operations performed on the resource with ID 1 in persistent storage. This means that the next time the frontend calls for ID 1, it could receive the cached OK response instead of the correct NOT FOUND response.

To address this issue, a proposed solution is to equip commands with a list of keys to invalidate in the cache storage. For example, if ID 1 was deleted, the command should invalidate the cache for both fetching ID 1 and retrieving all related resources, as shown in the illustration below.

Cache-Aside Flow with invalidation
The frontend performs a command call to the backend for resource ID 1
  1. The frontend performs a PUT/PATCH/DELETE request with ID 1.
  2. The backend executes the core command handler logic and persists the changes in the database. Result: OK.
  3. The backend invalidates the cache with the list of keys associated with the command. Result: OK.
  4. The backend responds to the frontend with an OK status.

Another drawback of this pattern is the introduction of a new integration to the backend. As a general rule, any integration may occasionally fail. What happens if, for some reason, the backend doesn't receive a response from the cache storage instance after the command operation has been persisted in the database? This is something developers must consider in their overall assessment. What are the costs of a temporary "unsynchronized" state between the cache and the database if a failure occurs? While this may be an edge case consideration, and the flow might perform as intended 99.999% of the time, you must determine whether the solution can tolerate the 0.001% failure rate. Eventual consistency with messaging might be the answer, but that is outside the scope of this article.

To be fair to this pattern, it also works the other way: if the persistent storage is down at a given moment but the cached response is available, the frontend will still receive the data needed for the UI.

With the implementation proposal I want to share, we are not forced to use this caching strategy everywhere. Rather, it's up to the developer to decide which types of query handlers to cache and which to leave uncached.

Implementation

Let's start by implementing the CacheService, which will be the main service component invoked by both query and command handler flows.

using System.Text.Json;
using MediatR;
using Microsoft.Extensions.Caching.Distributed;

public interface ICacheService
{
    Task<T?> GetOrAddCache<T>(string key, RequestHandlerDelegate<T> handler, TimeSpan expiry);
    Task RemoveCache(string[] keys);
}

public class CacheService(IDistributedCache cache) : ICacheService
{
    public async Task<T?> GetOrAddCache<T>(string key, RequestHandlerDelegate<T> handler, TimeSpan expiry)
    {
        // Return the cached response early on a cache hit.
        var cachedValue = await cache.GetStringAsync(key);

        if (!string.IsNullOrWhiteSpace(cachedValue))
        {
            return JsonSerializer.Deserialize<T>(cachedValue);
        }

        // Cache miss: execute the core handler logic.
        var result = await handler();

        if (result == null) return default;

        // Cache the response with a sliding expiry and an absolute cut-off (here: the upcoming midnight).
        await cache.SetStringAsync(key, JsonSerializer.Serialize(result), new DistributedCacheEntryOptions
        {
            SlidingExpiration = expiry,
            AbsoluteExpiration = DateTime.Today.AddDays(1)
        });

        return result;
    }

    public async Task RemoveCache(string[] keys)
    {
        // Remove all keys concurrently.
        List<Task> tasks = new();

        foreach (var key in keys)
        {
            tasks.Add(cache.RemoveAsync(key));
        }

        await Task.WhenAll(tasks);
    }
}

In this example, I am using IDistributedCache, an abstraction provided by the Microsoft.Extensions.Caching.Distributed namespace. Under the hood, you can decide whether to use an in-memory cache, a distributed cache like Redis, etc. RequestHandlerDelegate is a delegate type defined in the MediatR namespace that represents an asynchronous operation to be performed. It is used for chaining the different behavior handlers of the MediatR pipeline, and it will be provided by the relevant behavior pipelines when invoking the CacheService.
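For reference, as of MediatR 12 the delegate is declared roughly as follows, a parameterless asynchronous continuation representing the rest of the pipeline:

public delegate Task<TResponse> RequestHandlerDelegate<TResponse>();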

The GetOrAddCache method contains the logic to decide whether to execute the delegate representing the core handler logic or acquire the cached data and return early with the response.
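Recalling the integration-failure drawback discussed earlier, one option is to treat the cache as optional, so that a cache outage degrades to a direct handler call instead of failing the request. The variant below is a minimal sketch of that idea, not part of the implementation used in the rest of this article:

public async Task<T?> GetOrAddCache<T>(string key, RequestHandlerDelegate<T> handler, TimeSpan expiry)
{
    string? cachedValue = null;

    try
    {
        cachedValue = await cache.GetStringAsync(key);
    }
    catch
    {
        // The cache store is unreachable: fall through and serve from the primary source.
    }

    if (!string.IsNullOrWhiteSpace(cachedValue))
    {
        return JsonSerializer.Deserialize<T>(cachedValue);
    }

    var result = await handler();

    if (result == null) return default;

    try
    {
        await cache.SetStringAsync(key, JsonSerializer.Serialize(result), new DistributedCacheEntryOptions
        {
            SlidingExpiration = expiry
        });
    }
    catch
    {
        // Failing to populate the cache should not fail the request itself.
    }

    return result;
}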

You need to install the MediatR NuGet package to follow this code implementation, along with a package providing an IDistributedCache implementation (for example Microsoft.Extensions.Caching.StackExchangeRedis for Redis, or the in-memory implementation that ships with ASP.NET Core).
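The samples also assume the cache and the CacheService are registered in DI. A minimal sketch, assuming the in-memory distributed cache for local development:

// Program.cs: register an IDistributedCache implementation and the CacheService.
builder.Services.AddDistributedMemoryCache(); // swap for AddStackExchangeRedisCache in production
builder.Services.AddSingleton<ICacheService, CacheService>();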

Next, we define an interface representing queries eligible for caching, which must expose a key property. This interface extends the base IRequest<TResponse> interface provided by MediatR, preserving the generic response type.

public interface ICachedQuery<TResponse> : IRequest<TResponse>
{
    public string Key { get; }
}

Now, let's implement the behavior pipeline representing the cross-cutting logic for handling queries that implement ICachedQuery.

public class CachedQueryHandlerPipelineBehavior<TRequest, TResponse>(
    ICacheService cacheService) : IPipelineBehavior<TRequest, TResponse>
    where TRequest : ICachedQuery<TResponse>
{
    public Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next, CancellationToken cancellationToken)
    {
        // Delegate to the CacheService: return the cached response on a hit,
        // otherwise invoke the rest of the pipeline via next and cache its result.
        var result = cacheService.GetOrAddCache(request.Key, handler: next, expiry: TimeSpan.FromMinutes(5));
        return result!;
    }
}

Note the constraint we are placing on the generic type TRequest. When we implement the base IPipelineBehavior<TRequest, TResponse>, we specify that this pipeline component should only be invoked when the incoming IRequest is of a type that implements ICachedQuery. Essentially, we are separating the concern of regular queries from those that require caching, as the example below illustrates.
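For contrast, a query that should not be cached simply implements IRequest<TResponse> directly and never enters this behavior (the record and its response type are hypothetical):

// This query bypasses the caching pipeline entirely, because it does not implement ICachedQuery.
public record SearchTodosQuery(string Term) : IRequest<List<TodoDto>>;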

Inside the behavior, we have access to the Key property defined by ICachedQuery. This will be used by the CacheService to fetch and potentially cache the response of the delegate. When implementing IPipelineBehavior, the method signature includes a delegate parameter of type RequestHandlerDelegate, which represents the remainder of the pipeline and is passed on to the CacheService.

Next, we define an interface for commands that expands on the base IRequest interface. This interface includes an array of keys that the CacheService will use to remove the associated cached data.

public interface IInvalidateCacheCommand<TResponse> : IRequest<TResponse>
{
    public string[] InvalidateKeys { get; }
}

Finally, we implement the behavior pipeline representing the cross-cutting logic for invalidating the cache after the core command handler logic has executed and persisted its changes, matching the flow illustrated earlier.

public class InvalidateCacheCommandHandlerPipelineBehavior<TRequest, TResponse>(
    ICacheService cacheService) : IPipelineBehavior<TRequest, TResponse?>
    where TRequest : IInvalidateCacheCommand<TResponse>
{
    public async Task<TResponse?> Handle(TRequest request, RequestHandlerDelegate<TResponse?> next, CancellationToken cancellationToken)
    {
        // Execute the core command handler logic (and persist the changes) first.
        var response = await next();

        // Then invalidate the cache entries associated with this command.
        await cacheService.RemoveCache(request.InvalidateKeys);

        return response;
    }
}

Below, the AddMediatR extension method configures DI, registering the handlers and adding the behavior pipelines (note that open generic behaviors are registered with AddOpenBehavior):

builder.Services.AddMediatR(options =>
{
    options.RegisterServicesFromAssembly(Assembly.GetExecutingAssembly());
    options.AddOpenBehavior(typeof(CachedQueryHandlerPipelineBehavior<,>));
    options.AddOpenBehavior(typeof(InvalidateCacheCommandHandlerPipelineBehavior<,>));
});

Now the MediatR library is configured to use the behavior pipelines for handling queries and commands. The CacheService is responsible for fetching and caching the data, while the behavior pipelines are responsible for invoking the CacheService and invalidating the cache when necessary.

Below are the contracts that implement the caching interfaces, followed by an example of a controller that uses MediatR to send queries and commands to their respective handlers.

public record CompleteTodoCommand(Guid Id) : IInvalidateCacheCommand<CompleteTodoResponse>
{
    public string[] InvalidateKeys => [
        "GET_TODOS",
        $"GET_TODO:{Id}"];
}

public record AddTodoCommand(string Content, bool Completed) : IInvalidateCacheCommand<AddTodoResponse>
{
    public string[] InvalidateKeys => ["GET_TODOS"];
}

public record GetTodoByIdQuery(Guid Id) : ICachedQuery<GetTodoByIdResponse>
{
    public string Key => $"GET_TODO:{Id}";
}
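The "GET_TODOS" key referenced by the commands above would belong to a query that fetches the whole list. A hypothetical sketch (the response type is illustrative):

// Hypothetical list query matching the "GET_TODOS" invalidation key used above.
public record GetTodosQuery : ICachedQuery<List<GetTodoByIdResponse>>
{
    public string Key => "GET_TODOS";
}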


[ApiController]
public class TodoController : ControllerBase
{
    private readonly IMediator _mediator;

    public TodoController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost("todos")]
    public async Task<IActionResult> AddTodoAsync(AddTodoCommand command)
    {
        var response = await _mediator.Send(command);
        return Ok(response);
    }

    [HttpPut("todos/{id}/complete")]
    public async Task<IActionResult> CompleteTodo(Guid id)
    {
        var command = new CompleteTodoCommand(id);
        var result = await _mediator.Send(command);
        return Ok(result);
    }

    [HttpGet("todos/{id}")]
    public async Task<IActionResult> GetTodoById(Guid id)
    {
        var query = new GetTodoByIdQuery(id);
        var response = await _mediator.Send(query);

        return Ok(response);
    }
}
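For completeness, the core query handler that the caching pipeline wraps might look like the following sketch (the TodoDbContext, the Todo entity, and the shape of GetTodoByIdResponse are hypothetical):

public class GetTodoByIdQueryHandler(TodoDbContext db) : IRequestHandler<GetTodoByIdQuery, GetTodoByIdResponse>
{
    public async Task<GetTodoByIdResponse> Handle(GetTodoByIdQuery request, CancellationToken cancellationToken)
    {
        // On a cache hit, the pipeline behavior returns early and this method never runs.
        var todo = await db.Todos.FindAsync([request.Id], cancellationToken);
        return new GetTodoByIdResponse(todo!.Id, todo.Content, todo.Completed);
    }
}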

Scenario: Multi-Step Data Retrieval

Imagine a scenario where your application needs to retrieve data from both an external API and a database. The code might look something like this:

public async Task<ProcessedData> GetAndProcessDataAsync()
{
    // Fetch data from an external API
    var externalData = await externalApi.GetResource();

    // Fetch data from the database
    var dbData = await dbContext.GetData(externalData.Id);

    // Perform aggregation or projection on the fetched dbData
    var finalData = PerformCostlyAggregationOrProjection(dbData);

    // Return the final processed data
    return finalData;
}

The externalApi.GetResource() call can take up to several seconds to complete. After retrieving the external data, the application performs a database operation using dbContext.GetData(externalData.Id). This can be a time-consuming process, depending on the complexity of the query and the size of the database. Finally, the fetched data undergoes a costly aggregation or projection, which involves additional computation. This could include complex calculations, data transformations, or aggregating large datasets. This step can be particularly time-consuming and could further delay the response to the user.

If this method is called frequently, the repeated API calls, database queries, and costly aggregation/projection operations can create a substantial performance bottleneck. Each execution could take several seconds, leading to latency issues and a poor user experience.

By implementing caching in scenarios where your application involves costly operations, such as external API calls, database queries, and intensive aggregation or projection tasks, you can significantly improve performance. The Cache-Aside pattern allows you to avoid unnecessary delays, improving the performance and speed of your application.
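Tying this back to the article's implementation, such a flow could be exposed as a cached query; the names below are hypothetical:

// Hypothetical query wrapping the multi-step retrieval; the caching pipeline
// stores the processed result so repeated calls skip all three costly steps.
public record GetProcessedDataQuery(Guid Id) : ICachedQuery<ProcessedData>
{
    public string Key => $"PROCESSED_DATA:{Id}";
}

Its handler would simply call GetAndProcessDataAsync, and the caching behavior would presumably take care of the rest.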

Conclusion

Incorporating the Cache-Aside pattern into your application can significantly enhance performance by reducing database load and speeding up response times for frequently accessed data. By strategically caching query results and invalidating them when updates occur, you can ensure your application remains efficient while maintaining data consistency. However, it's crucial to weigh the potential drawbacks, such as the complexity added by cache management and the risk of stale data. The example implementation provided demonstrates how you can integrate this pattern using the MediatR library, allowing you to introduce caching as a cross-cutting concern in your application's architecture. As always, it's important to assess whether the benefits of this approach align with the specific needs of your application and its users.
