This is another post resulting from my work on a sample ASP.NET Core MVC powered Web API. This time I'm going to focus on conditional requests. Conditional requests have three main use cases: cache revalidation, concurrency control and range requests. Range requests are primarily used for media like video or audio and I'm not going to write about them here (I have in the past, in the context of ASP.NET MVC), but the other two are very useful for a Web API.

Adding metadata to the response

Before a client can perform a conditional request, the server must provide some metadata which can be used as validators. The standard defines two types of such metadata: modification dates (delivered by the Last-Modified header) and entity tags (delivered by the ETag header). The interface below represents this metadata.

interface IConditionalRequestMetadata
{
    string EntityTag { get; }

    DateTime? LastModified { get; }
}

The modification date is simple: it should represent the date and time of the last change to the resource being returned. The entity tag is a little more complicated. In general an entity tag should be unique per representation. This means that an entity tag should change not only due to changes over time but also as a result of content negotiation. That second aspect is problematic because it forces entity tag generation to happen very late, which can make entity tags impractical. Fortunately the standard leaves a gate in the form of weak entity tags. A weak entity tag indicates that two representations are semantically equivalent, which for Web API usage should be good enough (this approach will break the standard at some point, but more about that later). This allows for implementing IConditionalRequestMetadata as part of the demo application model.

public class Character : IConditionalRequestMetadata
{
    private string _entityTag;

    public string Id { get; protected set; }

    ...

    public DateTime LastUpdatedDate { get; protected set; }

    public string EntityTag
    {
        get
        {
            if (String.IsNullOrEmpty(_entityTag))
            {
                _entityTag = "\"" + Id + "-"
                    + LastUpdatedDate.Ticks.ToString(CultureInfo.InvariantCulture) + "\"";
            }

            return _entityTag;
        }
    }

    public DateTime? LastModified { get { return LastUpdatedDate; } }
}

What is missing is a generic mechanism for setting the headers on the response. A result filter is an interesting option for this task. The OnResultExecuting method can be used to check whether the result from the action is an ObjectResult whose value is an implementation of IConditionalRequestMetadata. If those conditions are met, the headers can be set.

internal class ConditionalRequestFilter : IResultFilter
{
    public void OnResultExecuted(ResultExecutedContext context)
    { }

    public void OnResultExecuting(ResultExecutingContext context)
    {
        IConditionalRequestMetadata metadata = (context.Result as ObjectResult)?.Value
            as IConditionalRequestMetadata;

        if (metadata != null)
        {
            SetConditionalMetadataHeaders(context, metadata);
        }
    }

    private static void SetConditionalMetadataHeaders(ResultExecutingContext context,
        IConditionalRequestMetadata metadata)
    {
        ResponseHeaders responseHeaders = context.HttpContext.Response.GetTypedHeaders();

        if (!String.IsNullOrWhiteSpace(metadata.EntityTag))
        {
            responseHeaders.ETag = new EntityTagHeaderValue(metadata.EntityTag, true);
        }

        if (metadata.LastModified.HasValue)
        {
            responseHeaders.LastModified = metadata.LastModified.Value;
        }
    }
}

After registering the filter, the headers will be available on every response whose underlying model provides the metadata.
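The registration itself can be done globally through MVC options. A minimal sketch (assuming the default MVC setup from the project template) could look like this.

```csharp
// A sketch of global filter registration in Startup (assumes the default MVC setup).
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(options =>
    {
        // The filter runs for every action, but only responses whose value
        // implements IConditionalRequestMetadata will actually get the headers.
        options.Filters.Add(new ConditionalRequestFilter());
    });
}
```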

Cache revalidation

Typically a client will use one of two headers as part of a GET or HEAD request in order to perform cache revalidation: If-None-Match or If-Modified-Since. A simple extension method can be used to extract both from the request.

internal class HttpRequestConditions
{
    public IEnumerable<string> IfNoneMatch { get; set; }

    public DateTimeOffset? IfModifiedSince { get; set; }
}
internal static class HttpRequestExtensions
{
    internal static HttpRequestConditions GetRequestConditions(this HttpRequest request)
    {
        HttpRequestConditions requestConditions = new HttpRequestConditions();

        RequestHeaders requestHeaders = request.GetTypedHeaders();

        if (HttpMethods.IsGet(request.Method) || HttpMethods.IsHead(request.Method))
        {
            requestConditions.IfNoneMatch = requestHeaders.IfNoneMatch?.Select(v => v.Tag.ToString());
            requestConditions.IfModifiedSince = requestHeaders.IfModifiedSince;
        }

        return requestConditions;
    }
}

If-None-Match is considered the more accurate of the two (if both are present in the request, only If-None-Match should be evaluated). It can contain one or more entity tags which represent versions of the resource cached by the client. If the current entity tag of the resource is on that list, the server should respond with 304 Not Modified (with no body) instead of the normal response.

If-Modified-Since works similarly. It contains the last modification date of the resource as known by the client. If the resource's last modification date is not later than the provided one, the server should also respond with 304 Not Modified.
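One subtlety worth illustrating is that HTTP dates have one-second resolution, while timestamps stored by the application usually don't. The stored value must therefore be truncated to whole seconds before the comparison, otherwise a response cached a moment ago would never validate. A quick self-contained illustration:

```csharp
using System;

class LastModifiedTruncationDemo
{
    // Truncates a timestamp to whole seconds, matching the resolution
    // of the Last-Modified and If-Modified-Since headers.
    static DateTime TruncateToSeconds(DateTime value)
    {
        return value.AddTicks(-(value.Ticks % TimeSpan.TicksPerSecond));
    }

    static void Main()
    {
        // The stored timestamp carries sub-second precision...
        DateTime stored = new DateTime(2017, 10, 2, 19, 22, 38, 123);
        // ...but the value echoed back by the client does not.
        DateTime fromHeader = new DateTime(2017, 10, 2, 19, 22, 38);

        // A raw comparison fails because of the 123 ms difference.
        Console.WriteLine(stored <= fromHeader);                    // False
        // After truncation the values are considered equal.
        Console.WriteLine(TruncateToSeconds(stored) <= fromHeader); // True
    }
}
```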

It's hard to optimize for cache revalidation on the server side unless the metadata of the resource is cheaply accessible (for example a static file). Typical scenarios which involve some kind of database as a store usually require multiple queries, or a single complex one. Because of that it's often good enough to retrieve the resource from the store and validate afterwards. The ConditionalRequestFilter can be extended to do that.

internal class ConditionalRequestFilter : IResultFilter
{
    ...

    public void OnResultExecuting(ResultExecutingContext context)
    {
        IConditionalRequestMetadata metadata = (context.Result as ObjectResult)?.Value
            as IConditionalRequestMetadata;

        if (metadata != null)
        {
            if (CheckModified(context, metadata))
            {
                SetConditionalMetadataHeaders(context, metadata);
            }
        }
    }

    private static bool CheckModified(ResultExecutingContext context,
        IConditionalRequestMetadata metadata)
    {
        bool modified = true;

        HttpRequestConditions requestConditions = context.HttpContext.Request.GetRequestConditions();

        if ((requestConditions.IfNoneMatch != null) && requestConditions.IfNoneMatch.Any())
        {
            if (!String.IsNullOrWhiteSpace(metadata.EntityTag)
                && requestConditions.IfNoneMatch.Contains(metadata.EntityTag))
            {
                modified = false;
                context.Result = new StatusCodeResult(StatusCodes.Status304NotModified);
            }
        }
        else if (requestConditions.IfModifiedSince.HasValue && metadata.LastModified.HasValue)
        {
            DateTimeOffset lastModified = metadata.LastModified.Value.AddTicks(
                -(metadata.LastModified.Value.Ticks % TimeSpan.TicksPerSecond));

            if (lastModified <= requestConditions.IfModifiedSince.Value)
            {
                modified = false;
                context.Result = new StatusCodeResult(StatusCodes.Status304NotModified);
            }
        }

        return modified;
    }

    ...
}

This way cache revalidation is handled automatically for resources which support it.

Concurrency control

Concurrency control can be considered the opposite mechanism to cache revalidation. Its goal is to prevent a change to a resource (usually as a result of a PUT or PATCH request) if it has already been modified by another user (the lost update problem). The headers used to achieve this goal are counterparts of those used in cache revalidation: If-Match and If-Unmodified-Since. The previously created extension method can extract those as well.

internal class HttpRequestConditions
{
    ...

    public IEnumerable<string> IfMatch { get; set; }

    public DateTimeOffset? IfUnmodifiedSince { get; set; }
}
internal static class HttpRequestExtensions
{
    internal static HttpRequestConditions GetRequestConditions(this HttpRequest request)
    {
        ...

        if (HttpMethods.IsGet(request.Method) || HttpMethods.IsHead(request.Method))
        {
            ...
        }
        else if (HttpMethods.IsPut(request.Method) || HttpMethods.IsPatch(request.Method))
        {
            requestConditions.IfMatch = requestHeaders.IfMatch?.Select(v => v.Tag.ToString());
            requestConditions.IfUnmodifiedSince = requestHeaders.IfUnmodifiedSince;
        }

        return requestConditions;
    }
}

If-Unmodified-Since is the exact opposite of If-Modified-Since: the last modification date of the resource must not be later than the one provided. If it is, the operation shouldn't be performed and the response should be 412 Precondition Failed.

If-Match is a little trickier. Similarly to If-None-Match it provides a list of entity tags and the current entity tag of the resource is required to be present on that list, but the standard disallows the use of weak entity tags here. This guarantees safety if different representations are stored separately, but for modern Web APIs this is often not the case. Different representations are a result of transforming a source resource which is stored only once. Because of that I believe that not following the standard in this case is acceptable. One more thing is handling the * value (which I've skipped for If-None-Match) - it means that the resource should have at least one current representation. If the only methods considered are PUT and PATCH, this condition should always evaluate to true (the absence of the resource should be checked earlier and result in 404 Not Found).

All those rules can be encapsulated within a single method.

private bool CheckPreconditionFailed(HttpRequestConditions requestConditions,
    IConditionalRequestMetadata metadata)
{
    bool preconditionFailed = false;

    if ((requestConditions.IfMatch != null) && requestConditions.IfMatch.Any())
    {
        // The special case is a single "*" value, which always passes here.
        if ((requestConditions.IfMatch.Count() > 1) || (requestConditions.IfMatch.First() != "*"))
        {
            if (!requestConditions.IfMatch.Contains(metadata.EntityTag))
            {
                preconditionFailed = true;
            }
        }
    }
    else if (requestConditions.IfUnmodifiedSince.HasValue && metadata.LastModified.HasValue)
    {
        DateTimeOffset lastModified = metadata.LastModified.Value.AddTicks(
            -(metadata.LastModified.Value.Ticks % TimeSpan.TicksPerSecond));

        if (lastModified > requestConditions.IfUnmodifiedSince.Value)
        {
            preconditionFailed = true;
        }
    }

    return preconditionFailed;
}

This method should be used as part of the action flow. This can be done in a more or less generic way depending on the application architecture (for example CQRS opens up many more options). In the simplest case it can be called directly by every action which needs it.
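For example, the direct approach in a PUT action could look more or less like this (a sketch only; the update handling is elided and follows the conventions of the demo's controller shown earlier).

```csharp
// A sketch of calling CheckPreconditionFailed directly from a PUT action.
[HttpPut("{id}")]
public async Task<IActionResult> Put(string id, [FromBody] Character update)
{
    // The absence of the resource is checked first and results in 404 Not Found.
    Character character = await _mediator.Send(new GetSingleRequest<Character>(id));
    if (character == null)
    {
        return NotFound();
    }

    // If the stored resource no longer matches the client's version, stop here.
    HttpRequestConditions requestConditions = Request.GetRequestConditions();
    if (CheckPreconditionFailed(requestConditions, character))
    {
        return new StatusCodeResult(StatusCodes.Status412PreconditionFailed);
    }

    ...

    return NoContent();
}
```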

One last thing

Described here are the most typical usages of If-Match, If-None-Match, If-Modified-Since and If-Unmodified-Since, which doesn't exhaust the subject. The headers can be used with methods other than those mentioned, or have special usages (like If-None-Match with a value of *). As always, when in doubt the standard is your friend.

My Server-Sent Events Middleware seems to be a mine of interesting issues. The latest one was about events being delivered with a delay (in general one behind) under specific conditions.

The nature of the issue

Initially there was no hint at what conditions were required for the issue to manifest itself. The demo application was working correctly while the one the person who reported the issue was working on didn't. Luckily the reporter was extremely helpful in diagnosing the issue and devoted some time to finding the difference between his and my code. The difference between the working and non-working scenario was the presence of the Response Compression Middleware, which I had added while working on a previous issue.

My first thought was that the Response Compression Middleware must be writing to the response stream differently than my code (when the Response Compression Middleware is present it wraps the original response stream). I went through the source code of BodyWrapperStream and found nothing. I went deeper and analyzed DeflateStream, also without finding anything specific.

At this point I decided to change approach and use Fiddler to see what was happening on the wire. To my surprise, the first thing I noticed was that the response was still gzipped. That really baffled me, so I double-checked that the Response Compression Middleware was removed. It was, so it must have been something external to my application. The only external component I was able to identify was IIS Express, so I quickly changed the launch option to run on Kestrel only. That was it: without IIS in front everything was working as expected, which meant that IIS (serving as a reverse proxy) was compressing the response on its own, which resulted in delayed delivery of events.

Preventing IIS from compressing the response

The first obvious option was changing the IIS configuration. This would certainly work, but I would have to put the details into the documentation and leave them as a trap for others using the same deployment scenario. I wanted to avoid that, so I started researching other solutions. The general conclusion from the materials I found was that IIS will compress the response if it doesn't detect a Content-Encoding header. That gave me an idea. One of the valid values for Content-Encoding is identity, which indicates no compression/modification; it might be enough to prevent IIS from adding compression. I've added the code for setting the header to the middleware.
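For completeness, the configuration-based approach I decided against would have been something along these lines (a sketch; it disables dynamic compression for the whole application, which is a much bigger hammer than needed).

```xml
<!-- web.config sketch: prevents IIS from compressing dynamically generated responses. -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="false" />
  </system.webServer>
</configuration>
```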

public class ServerSentEventsMiddleware
{
    ...

    public async Task Invoke(HttpContext context)
    {
        if (context.Request.Headers[Constants.ACCEPT_HTTP_HEADER] == Constants.SSE_CONTENT_TYPE)
        {
            DisableResponseBuffering(context);

            context.Response.Headers.Append("Content-Encoding", "identity");

            ...
        }
        else
        {
            await _next(context);
        }
    }

    ...
}

Running the demo application without the Response Compression Middleware and behind IIS confirmed that the solution was working. Now I had to make sure that I hadn't broken anything else.

Maintaining compatibility with Response Compression Middleware

As the middleware is now setting Content-Encoding, it could somehow interfere with the Response Compression Middleware. I re-enabled it and ran the test again. The screenshot below shows the result.

Chrome Developer Tools Network Tab - Multiple Content-Encoding

The response contains two Content-Encoding headers. The reason for this is that the Response Compression Middleware is also blindly using IHeaderDictionary.Append. Unfortunately, the fact that the header is present twice confuses the browser: the response comes compressed, but the browser treats it as not compressed. I couldn't change how the Response Compression Middleware works, so I had to be smarter about setting the header. Simply checking if the header was already present didn't work, because the Response Compression Middleware sets it upon the first attempt to write. I was saved by HttpResponse.OnStarting, which allows interacting with the response just before the headers are sent. I've replaced my header-setting code with the following method.

private void HandleContentEncoding(HttpContext context)
{
    context.Response.OnStarting(() =>
    {
        if (!context.Response.Headers.ContainsKey("Content-Encoding"))
        {
            context.Response.Headers.Append("Content-Encoding", "identity");
        }

        return _completedTask;
    });
}

This fixed the problem with the two headers and allowed me to close the issue. The approach is universal and can be used in other scenarios with the same requirement.

In the last couple of weeks I've been playing with an ASP.NET Core MVC powered Web API. One of the things I wanted to dig deeper into is support for the HEAD method. The specification says that "The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request.". In practice the HEAD method is often used for performing "exists" requests.

How ASP.NET Core is handling HEAD at the server level

Before looking at the higher layers it is worth understanding the behavior of the underlying server in the case of a HEAD request. The sample Web API mentioned at the beginning has the following end-middleware for handling cases when none of the routes have been hit; it will be perfect for this task.

public class Startup
{
    ...

    public void Configure(IApplicationBuilder app)
    {
        ...

        app.Run(async (context) =>
        {
            context.Response.ContentLength = 34;
            await context.Response.WriteAsync("-- Demo.AspNetCore.Mvc.CosmosDB --");
        });
    }
}

The first testing environment will be Kestrel. The response to a GET request (which will be used as a baseline) looks like below.

HTTP/1.1 200 OK
Content-Length: 34
Date: Mon, 02 Oct 2017 19:22:38 GMT
Server: Kestrel

-- Demo.AspNetCore.Mvc.CosmosDB --

Switching the method to HEAD (without any changes to the code) results in the following.

HTTP/1.1 200 OK
Content-Length: 34
Date: Mon, 02 Oct 2017 19:22:38 GMT
Server: Kestrel

This shows that Kestrel handles HEAD requests quite nicely out of the box. All the headers are there and the write to the response body has been ignored. This is exactly the behavior one should expect.

With this positive outcome the application can be switched to the second testing environment, which will be the HTTP.sys server. Here the response to a HEAD request is different.

HTTP/1.1 200 OK
Content-Length: 34
Date: Mon, 02 Oct 2017 19:25:43 GMT
Server: Microsoft-HTTPAPI/2.0

-- Demo.AspNetCore.Mvc.CosmosDB --

Unfortunately this is a malformed response, as it contains a body. This is incorrect from the specification perspective and also removes the performance gain which HEAD requests offer. This is something that should be addressed, but before that let's take a look at a more complex scenario.

Adding ASP.NET Core MVC on top

Knowing how the servers handle the HEAD method, the scenario can be extended by adding MVC to the mix. For this purpose a simple GET action which takes an identifier as a parameter can be used. The important part is that the action should return 404 Not Found for an identifier which doesn't exist.

[Route("api/[controller]")]
public class CharactersController : Controller
{
    private readonly IMediator _mediator;

    public CharactersController(IMediator mediator)
    {
        _mediator = mediator;
    }

    ...

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(string id)
    {
        Character character = await _mediator.Send(new GetSingleRequest<Character>(id));
        if (character == null)
        {
            return NotFound();
        }

        return new ObjectResult(character);
    }

    ...
}

In the context of the previous discoveries the testing environments can be limited to Kestrel only. Making a GET request with a valid identifier results in a response with a JSON body.

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 02 Oct 2017 19:40:25 GMT
Server: Kestrel
Transfer-Encoding: chunked

{"id":"1ba6271109d445c8972542985b2d3e96","createdDate":"2017-09-24T21:08:50.9990689Z","lastUpdatedDate":"2017-09-24T21:08:50.9990693Z","name":"Leia Organa","gender":"Female","height":150,"weight":49,"birthYear":"19BBY","skinColor":"Light","hairColor":"Brown","eyeColor":"Brown"}

Switching to HEAD produces a response which might be a little surprising.

HTTP/1.1 200 OK
Content-Length: 34
Date: Mon, 02 Oct 2017 19:42:10 GMT
Server: Kestrel

The presence of Content-Length and absence of Content-Type suggest this is not the response from the intended endpoint. In fact it looks like a response from the end-middleware. A request with an invalid identifier returns exactly the same response instead of the expected 404. Taking one more look at the code reveals why this shouldn't be a surprise. The action is decorated with HttpGetAttribute, which makes it unreachable by a HEAD request; as a result the application has indeed defaulted to the end-middleware. Adding HttpHeadAttribute should solve the problem.

[Route("api/[controller]")]
public class CharactersController : Controller
{
    ...

    [HttpGet("{id}")]
    [HttpHead("{id}")]
    public async Task<IActionResult> Get(string id)
    {
        ...
    }

    ...
}

After this change both HEAD requests (with valid and invalid identifier) return expected responses.

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 02 Oct 2017 19:44:23 GMT
Server: Kestrel
HTTP/1.1 404 Not Found
Date: Mon, 02 Oct 2017 19:48:07 GMT
Server: Kestrel

This means that an action needs to be decorated with two attributes. Separating GET and HEAD makes perfect sense when it's possible to optimize the HEAD request handling on the server side, but for a simple scenario like this one it seems unnecessary. One possible improvement is a custom HttpMethodAttribute which would allow both methods.

public class HttpGetOrHeadAttribute : HttpMethodAttribute
{
    private static readonly IEnumerable<string> _supportedMethods = new[] { "GET", "HEAD" };

    public HttpGetOrHeadAttribute()
        : base(_supportedMethods)
    { }

    public HttpGetOrHeadAttribute(string template)
        : base(_supportedMethods, template)
    {
        if (template == null)
        {
            throw new ArgumentNullException(nameof(template));
        }
    }
}

Still, anybody who works on the project in the future will have to know that the custom attribute must be used. It might be preferable to have a solution which can be applied once, especially keeping in mind that there is also the HTTP.sys issue to be solved.

Solving the problems in one place

In the context of ASP.NET Core, "one place" typically ends up being some kind of middleware. In this case a middleware could be used to perform the old trick of switching an incoming HEAD request to GET. The switch should only be temporary, otherwise the Kestrel integrity checks might fail due to Content-Length being different from the actual number of bytes written. There is also one important thing to remember: after switching the method, Kestrel will stop ignoring writes to the body. The easiest solution is to change the body stream to Stream.Null (this will also fix the problem observed in the case of the HTTP.sys server).

public class HeadMethodMiddleware
{
    private readonly RequestDelegate _next;

    public HeadMethodMiddleware(RequestDelegate next)
    {
        _next = next ?? throw new ArgumentNullException(nameof(next));
    }

    public async Task Invoke(HttpContext context)
    {
        bool methodSwitched = false;

        if (HttpMethods.IsHead(context.Request.Method))
        {
            methodSwitched = true;

            context.Request.Method = HttpMethods.Get;
            context.Response.Body = Stream.Null;
        }

        await _next(context);

        if (methodSwitched)
        {
            context.Request.Method = HttpMethods.Head;
        }
    }
}

This middleware should be applied with caution. Some middlewares (for example StaticFiles) have their own optimized handling of the HEAD method. It is also possible that in the case of some middlewares switching the method can result in undesired side effects.
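With that caveat in mind, the registration is a one-liner; a minimal sketch of placing it in the pipeline (before MVC, so the switch happens ahead of routing) could look like this.

```csharp
// A sketch of registering HeadMethodMiddleware in Startup. It is placed
// before MVC so the HEAD-to-GET switch happens ahead of action selection.
public void Configure(IApplicationBuilder app)
{
    app.UseMiddleware<HeadMethodMiddleware>();

    app.UseMvc();
}
```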

A few weeks ago I received a question under my Server-Sent Events middleware for ASP.NET Core repository. I was quite busy at the time so I only provided a short answer, but I also promised myself to describe the problem and solution in detail as soon as possible. This post is me fulfilling that promise.

The problem

The question was about using Server-Sent Events in a load balancing scenario. Under the hood Server-Sent Events uses a long-lived HTTP connection for delivering the messages. This means that a client is connected to a specific instance of the application behind the load balancer. It can look like the diagram below, where Client A is connected to Instance 1 and Client B to Instance 2.

Server-Sent Events with Load Balancing Diagram

The problem arises when a message resulting from an operation performed on Instance 1 needs to be broadcast to all clients (so also to Client B).

The solution

In order to solve the problem a communication channel is required which the instances can use to redistribute the messages. One way of achieving such a communication channel is the publish-subscribe pattern. A typical topology of a publish-subscribe implementation introduces a message broker.

Server-Sent Events with Load Balancing and Publish-Subscribe Pattern Diagram

Instead of sending a message directly to its clients, the application sends it to the broker. The broker then sends the message to all subscribers (which may include the original sender) and they send it to their clients.

One example of such a broker is Redis with its Pub/Sub functionality.

The implementation

The starting point for the implementation will be the demo project from my original post about Server-Sent Events. It has a notification functionality which allows a client to send messages to other clients.

public class NotificationsController : Controller
{
    private INotificationsServerSentEventsService _serverSentEventsService;

    public NotificationsController(INotificationsServerSentEventsService serverSentEventsService)
    {
        _serverSentEventsService = serverSentEventsService;
    }

    ...

    [ActionName("sse-notifications-sender")]
    [AcceptVerbs("POST")]
    public async Task<IActionResult> Sender(NotificationsSenderViewModel viewModel)
    {
        if (!String.IsNullOrEmpty(viewModel.Notification))
        {
            await _serverSentEventsService.SendEventAsync(new ServerSentEvent
            {
                Type = viewModel.Alert ? "alert" : null,
                Data = new List<string>(viewModel.Notification.Split(new string[] { "\r\n", "\n" },
                    StringSplitOptions.None))
            });
        }

        ModelState.Clear();

        return View("Sender", new NotificationsSenderViewModel());
    }
}

The controller interacts directly with the Server-Sent Events middleware. This is the part which should be abstracted to allow using Redis when desired. A simple service for sending messages can be extracted.

public interface INotificationsService
{
    Task SendNotificationAsync(string notification, bool alert);
}

internal class LocalNotificationsService : INotificationsService
{
    private INotificationsServerSentEventsService _notificationsServerSentEventsService;

    public LocalNotificationsService(INotificationsServerSentEventsService notificationsServerSentEventsService)
    {
        _notificationsServerSentEventsService = notificationsServerSentEventsService;
    }

    public Task SendNotificationAsync(string notification, bool alert)
    {
        return _notificationsServerSentEventsService.SendEventAsync(new ServerSentEvent
        {
            Type = alert ? "alert" : null,
            Data = new List<string>(notification.Split(new string[] { "\r\n", "\n" },
                StringSplitOptions.None))
        });
    }
}

With the service in place the controller can be refactored.

public class NotificationsController : Controller
{
    private INotificationsService _notificationsService;

    public NotificationsController(INotificationsService notificationsService)
    {
        _notificationsService = notificationsService;
    }

    ...

    [ActionName("sse-notifications-sender")]
    [AcceptVerbs("POST")]
    public async Task<IActionResult> Sender(NotificationsSenderViewModel viewModel)
    {
        if (!String.IsNullOrEmpty(viewModel.Notification))
        {
            await _notificationsService.SendNotificationAsync(viewModel.Notification, viewModel.Alert);
        }

        ModelState.Clear();

        return View("Sender", new NotificationsSenderViewModel());
    }
}

Now the Redis-based implementation of INotificationsService can be created. I've decided to use StackExchange.Redis, which is a very popular Redis client for .NET (it's also used by ASP.NET Core) with good documentation. The implementation is straightforward; the only challenge is distinguishing regular notifications from alerts. In the context of the publish-subscribe pattern one approach is filtering based on topics. With this approach the application should use different channels for different types of messages.

internal class RedisNotificationsService : INotificationsService
{
    private const string NOTIFICATIONS_CHANNEL = "NOTIFICATIONS";
    private const string ALERTS_CHANNEL = "ALERTS";

    private ConnectionMultiplexer _redis;
    private INotificationsServerSentEventsService _notificationsServerSentEventsService;

    public RedisNotificationsService(INotificationsServerSentEventsService notificationsServerSentEventsService)
    {
        _redis = ConnectionMultiplexer.Connect("localhost");
        _notificationsServerSentEventsService = notificationsServerSentEventsService;

        ISubscriber subscriber = _redis.GetSubscriber();

        subscriber.Subscribe(NOTIFICATIONS_CHANNEL, async (channel, message) =>
        {
            await SendSseEventAsync(message, false);
        });

        subscriber.Subscribe(ALERTS_CHANNEL, async (channel, message) =>
        {
            await SendSseEventAsync(message, true);
        });
    }

    public Task SendNotificationAsync(string notification, bool alert)
    {
        ISubscriber subscriber = _redis.GetSubscriber();

        return subscriber.PublishAsync(alert ? ALERTS_CHANNEL : NOTIFICATIONS_CHANNEL, notification);
    }

    private Task SendSseEventAsync(string notification, bool alert)
    {
        return _notificationsServerSentEventsService.SendEventAsync(new ServerSentEvent
        {
            Type = alert ? "alert" : null,
            Data = new List<string>(notification.Split(new string[] { "\r\n", "\n" },
                StringSplitOptions.None))
        });
    }
}

The implementation can be further improved by extracting a base class with the Server-Sent Events related functionality, but the service is ready to be used (it just needs to be registered).
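The registration itself can be as simple as the sketch below (the demo additionally drives the choice of implementation through configuration).

```csharp
// A sketch of registering the Redis-based implementation. A singleton
// lifetime keeps a single Redis connection and subscription per instance.
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<INotificationsService, RedisNotificationsService>();
}
```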

The demo available on GitHub also provides configuration options for Redis connection and which INotificationsService implementation to use.

This approach can be used in exactly the same way if WebSockets are used instead of Server-Sent Events, or in any other scenario which requires a similar communication pattern.

Recently I needed to add support for SSL Acceleration (Offloading) to one of the projects I'm working on. In ASP.NET MVC this usually meant a custom RequireHttpsAttribute, URL generator and IsHttps method. The whole team needed to be aware that the custom components must be used instead of the ones provided by the framework, otherwise things would break. This is no longer the case for ASP.NET Core; thanks to low-level APIs like request features there is a more elegant way.

SSL Acceleration (Offloading)

SSL Acceleration is the process of using a hardware accelerator to perform SSL encryption and/or decryption. The process usually takes place on a load balancer or firewall, in which case it's called SSL Offloading. There are two flavors of SSL Offloading: SSL Bridging and SSL Termination. SSL Bridging usually doesn't require anything specific from the application, but SSL Termination does. In the case of SSL Termination the SSL connection doesn't go beyond the SSL Accelerator. There are two main benefits of SSL Termination:

  • Improved performance (the web servers don't have to use resources for SSL processing)
  • Simplified certificate management (the certificates are managed on a single device instead of every web server in cluster)

The drawback is that HTTPS traffic doesn't reach the application. In this context the performance benefit can be questioned: the application is no longer able to fully utilize some HTTP/2 features (for example Server Push), while the resource gain might not be that significant as modern CPUs have good support for encryption/decryption.

Despite the fact that SSL is terminated, the application must still be able to verify whether the original request was made over HTTPS (otherwise its security could be lowered). Typically SSL Accelerators provide information about the original protocol through a dedicated HTTP header (a quite popular one is X-Forwarded-Proto), which the application needs to properly interpret.

Making ASP.NET Core understand SSL Acceleration

"Properly interpret" means that the application needs to detect the presence of the header and, if the value indicates that the original request was made over HTTPS, treat the request as such. In the case of ASP.NET Core the perfect behavior would be for HttpContext.Request.IsHttps to return true. This would automatically make RequireHttpsAttribute and AddRedirectToHttps from the URL Rewriting Middleware behave correctly. Any other code which depends on that property would also keep working as expected.

Luckily the value of HttpContext.Request.IsHttps is based on the IHttpRequestFeature.Scheme property, whose value can be changed by the application. Assuming that the header name is X-Forwarded-Proto and its value is the original scheme in lower case, the following snippet is exactly what is needed.

if (!context.Request.IsHttps)
{
    if (context.Request.Headers.ContainsKey("X-Forwarded-Proto")
        && context.Request.Headers["X-Forwarded-Proto"].Equals("https"))
    {
        IHttpRequestFeature httpRequestFeature = context.Features.Get<IHttpRequestFeature>();
        httpRequestFeature.Scheme = "https";
    }
}

This snippet can easily be wrapped inside a reusable and parametrized middleware like the one here.
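Such a middleware could look more or less like the sketch below (the class and parameter names are illustrative; the linked implementation is more configurable).

```csharp
// A sketch of wrapping the snippet in a middleware with a configurable
// header name. SslTerminationMiddleware is a hypothetical name.
public class SslTerminationMiddleware
{
    private readonly RequestDelegate _next;
    private readonly string _protocolHeaderName;

    public SslTerminationMiddleware(RequestDelegate next, string protocolHeaderName)
    {
        _next = next ?? throw new ArgumentNullException(nameof(next));
        _protocolHeaderName = protocolHeaderName ?? "X-Forwarded-Proto";
    }

    public Task Invoke(HttpContext context)
    {
        if (!context.Request.IsHttps
            && context.Request.Headers[_protocolHeaderName].Equals("https"))
        {
            // Overwriting the scheme makes HttpContext.Request.IsHttps return true.
            context.Features.Get<IHttpRequestFeature>().Scheme = "https";
        }

        return _next(context);
    }
}
```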

This scenario is a nice example of how layered ASP.NET Core is, and how much power access to the low-level building blocks gives.
