ASP.NET Core comes with ready-to-use Cross-Origin Resource Sharing support in the form of the Microsoft.AspNetCore.Cors package. The usage is very straightforward: you just need to register the services, configure the policy, and enable CORS either with middleware (for the whole pipeline or a specific branch), a filter (globally for MVC), or an attribute (at the MVC controller/action level). This is all nicely described in the documentation. But what if there is a need to reconfigure the policy at runtime?
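
Just for context, enabling CORS with the middleware for the whole pipeline looks more or less like this (a sketch; "MyPolicy" is an example policy name):

public class Startup
{
    ...

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // Enable CORS for the whole pipeline, using a named policy
        app.UseCors("MyPolicy");

        ...
    }
}

The same policy name could be used with the [EnableCors("MyPolicy")] attribute at the controller or action level.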

Let's assume that there is an application which contains two APIs. One is considered "private", so only other applications from the same suite can use it, while the second is "public" and the client administrator should be able to configure it so it can be used with any 3rd-party application.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddCors(options =>
        {
            options.AddPolicy("Private", builder =>
            {
                builder.WithOrigins("http://appone.suite.com, http://apptwo.suite.com");
                ...
            });

            options.AddPolicy("Public", builder =>
            {
                // Apply "public" policy (based on information read from storage etc.)
                ...
            });
        })
        ...;
    }

    ...
}

The initial configuration of the policy is not an issue; the problem is the moment when the admin decides to change the policy. The whole application shouldn't require a restart for the changes to take effect, so the policy needs to be accessed and reconfigured.

How the policy can be accessed

The initialization code shows that the policies are being added to the CorsOptions. Internally, IServiceCollection.AddCors plugs the options into the ASP.NET Core configuration framework by calling IServiceCollection.Configure. This means that they can be retrieved with the help of Dependency Injection as IOptions&lt;CorsOptions&gt; and can be considered a singleton. This is enough information to start building a service which will help with accessing the policy.
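
Such a service can be hidden behind a simple interface (a minimal sketch, matching the methods implemented later in this post):

public interface ICorsPolicyAccessor
{
    CorsPolicy GetPolicy();

    CorsPolicy GetPolicy(string name);
}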

public class CorsPolicyAccessor : ICorsPolicyAccessor
{
    private readonly CorsOptions _options;

    public CorsPolicyAccessor(IOptions<CorsOptions> options)
    {
        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        _options = options.Value;
    }
}

From this point it's easy. The CorsOptions class exposes a GetPolicy method and a DefaultPolicyName property, which can be used for exposing access to the policy.

public class CorsPolicyAccessor : ICorsPolicyAccessor
{
    ...

    public CorsPolicy GetPolicy()
    {
        return _options.GetPolicy(_options.DefaultPolicyName);
    }

    public CorsPolicy GetPolicy(string name)
    {
        return _options.GetPolicy(name);
    }
}

Now the new service can be registered (preferably after the AddCors call).

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddCors(options =>
        {
            ...
        })
        .AddTransient<ICorsPolicyAccessor, CorsPolicyAccessor>()
        ...;
    }

    ...
}

Exposing the policy with MVC

With the help of the just-created ICorsPolicyAccessor service and Dependency Injection, the CORS policy can now be reconfigured at runtime, for example from an ASP.NET Core MVC controller. For starters, let's create an action and a view which list all the origins within the Public policy.

public class OriginsController : Controller
{
    private readonly ICorsPolicyAccessor _corsPolicyAccessor;

    public OriginsController(ICorsPolicyAccessor corsPolicyAccessor)
    {
        _corsPolicyAccessor = corsPolicyAccessor;
    }

    [AcceptVerbs("GET")]
    public IActionResult Manage()
    {
        return View(new OriginsModel(_corsPolicyAccessor.GetPolicy("Public").Origins));
    }
}

public class OriginsModel
{
    private readonly IList<string> _origins;

    public IEnumerable<string> Origins => _origins;

    public OriginsModel(IList<string> origins)
    {
        _origins = origins;
    }
}

@model OriginsModel
<!DOCTYPE html>
<html>
...
<body>
    <div>
        <ul>
            @foreach(var origin in Model.Origins)
            { 
                <li>@origin</li>
            }
        </ul>
    </div>
</body>
</html>

Navigating to the URL pointing at the action should result in a list of all the origins. This can be easily extended with the capability of adding and removing origins. First the view model should be changed so the list of origins can be used to generate a select element.

public class OriginsModel
{
    ...

    public List<SelectListItem> Origins => _origins.Select(origin => new SelectListItem
    {
        Text = origin,
        Value = origin
    }).ToList();

    ...
}

This allows for adding some inputs and forms to handle the operations (this could be done a lot more nicely with AJAX, but I want to keep things simple for the sake of clarity).

@model OriginsModel
<!DOCTYPE html>
<html>
...
<body>
    <div>
        <ul>
            @foreach(var origin in Model.Origins)
            { 
                <li>@origin.Text</li>
            }
        </ul>
        <form asp-action="Add" method="post">
            <fieldset>
                <legend>Adding an origin</legend>
                <input type="text" name="origin" />
                <input type="submit" value="Add" />
            </fieldset>
        </form>
        <form asp-action="Remove" method="post">
            <fieldset>
                <legend>Removing an origin</legend>
                <select name="origin" asp-items="Model.Origins"></select>
                <input type="submit" value="Remove" />
            </fieldset>
        </form>
    </div>
</body>
</html>

The last thing to do is handling the Add and Remove actions. I'm going to use the PRG pattern here, which should allow for a clear separation of responsibilities.

public class OriginsController : Controller
{
    ...

    [AcceptVerbs("POST")]
    public IActionResult Add(string origin)
    {
        _corsPolicyAccessor.GetPolicy("Public").Origins.Add(origin);

        return RedirectToAction(nameof(Manage));
    }

    [AcceptVerbs("POST")]
    public IActionResult Remove(string origin)
    {
        _corsPolicyAccessor.GetPolicy("Public").Origins.Remove(origin);

        return RedirectToAction(nameof(Manage));
    }
}

Testing this will show that the changes are indeed being picked up immediately (although there is a small risk of a race involved). This simple demo shows how easy it is to reconfigure part of a policy. With this approach any of the CorsPolicy public properties can be changed.
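
For example, allowing credentials for the Public policy could be exposed in a similar way as the origins management (a sketch; the AllowCredentials action is hypothetical, while SupportsCredentials is one of the settable CorsPolicy properties):

public class OriginsController : Controller
{
    ...

    [AcceptVerbs("POST")]
    public IActionResult AllowCredentials()
    {
        // SupportsCredentials is a regular settable property on CorsPolicy
        _corsPolicyAccessor.GetPolicy("Public").SupportsCredentials = true;

        return RedirectToAction(nameof(Manage));
    }
}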

A long time ago (according to the repository, back in 2011) I made the first version of Lib.Web.Mvc public. The initial functionality was a strongly typed helper for jqGrid. In later versions additional functionalities like a Range Requests action result, a CSP action attribute and helpers, an HSTS attribute, or HTTP/2 Server Push with Cache Digest attribute and helpers have been added, but jqGrid support still remained the biggest one. So when ASP.NET Core was getting closer to RTM, this issue popped up. Now (14 months later) I'm releasing Lib.AspNetCore.Mvc.JqGrid version 1.0.0. As this is not just a port (I took the opportunity to redesign a few things), I've decided to describe the key changes.

Packages organization

The functionality has been split into four packages:

  • Lib.AspNetCore.Mvc.JqGrid.Infrastructure - Classes, enumerations and constants representing jqGrid options.
  • Lib.AspNetCore.Mvc.JqGrid.Core - The core serialization and deserialization functionality. If you prefer to write your own JavaScript instead of using strongly typed helper, but you still want some support on the server side for requests and responses this is what you want.
  • Lib.AspNetCore.Mvc.JqGrid.DataAnnotations - Custom data annotations which allow for providing additional metadata when working with strongly typed helper.
  • Lib.AspNetCore.Mvc.JqGrid.Helper - The strongly typed helper (aka the JavaScript generator).

The split was driven mostly by two use cases which have often been raised. One is separating the (view) models from the rest of the application (for example into an independent assembly). The only package needed in such cases is now Lib.AspNetCore.Mvc.JqGrid.DataAnnotations, which doesn't have any ties to ASP.NET Core. The second use case is not using the JavaScript generation part, just the support for request and response serialization. That functionality has been separated as well in order to minimize the footprint for such a scenario.

Usage basics and demos

In the ASP.NET MVC version, the helper was an independent class which needed to be initialized (typically in the view) and then could be used to generate the JavaScript and HTML (very similar to System.Web.Helpers.WebGrid). This has changed: the JavaScript and HTML generation is exposed through IHtmlHelper extension methods (JqGridTableHtml, JqGridPagerHtml, JqGridHtml, and JqGridJavaScript) which take a JqGridOptions instance as a parameter. This means that the view code can be simplified to this (assuming all the needed scripts and styles have been referenced):

@Html.JqGridHtml(gridOptions)
<script>
    $(function () {
        @Html.JqGridJavaScript(gridOptions)
    });
</script>

The JqGridOptions instance can be created anywhere in the application; as it sits in Lib.AspNetCore.Mvc.JqGrid.Infrastructure, not even a reference to ASP.NET Core is required. When it comes to the controller code, not much has changed. The Lib.AspNetCore.Mvc.JqGrid.Core package provides classes like JqGridRequest, JqGridResponse or JqGridRecord with appropriate binders and converters, which are used automatically.

public IActionResult Characters(JqGridRequest request)
{
    ...

    JqGridResponse response = new JqGridResponse()
    {
        ...
    };

    ...

    return new JqGridJsonResult(response);
}

There is a demo project available on GitHub which contains samples of key feature areas with and without helper usage.

Supported features and roadmap

This first version doesn't support all the features which Lib.Web.Mvc did; if I wanted to achieve that, I don't know when I would release. I've chosen the MVP based on what has been the most common subject of discussions and questions in the past. This gives the following list of areas:

  • Formatters
  • Footer
  • Paging
  • Dynamic scrolling
  • Sorting
  • Single and advanced searching
  • Form and cell editing
  • Grouping
  • Tree grid
  • Subgrids

This is of course not the end. I will soon start setting the roadmap for the next releases. This is something that everybody can have their say about by creating or reacting to issues.

In general I'm open to any form of feedback (tweets, emails, issues, high fives, donations). I will keep working on this project as long as it has value for anybody, and I'll try to answer any questions.

Recently I've been playing a lot with HTTP/2 and with ASP.NET Core, but I didn't have a chance to play with both at once. I've decided it's time to change that. Unfortunately the direct HTTP/2 support for Kestrel is still in the backlog, as it is blocked by missing ALPN support in SslStream. You can get some of the HTTP/2 features when using Kestrel (like header compression or multiplexing) if you run it behind a reverse proxy like IIS or NGINX, but there is no API to play with. Luckily Kestrel is not the only HTTP server implementation for ASP.NET Core.

HttpSysServer (formerly WebListener)

The second official server implementation for ASP.NET Core is Microsoft.AspNetCore.Server.WebListener, which has been renamed to Microsoft.AspNetCore.Server.HttpSys in January. It allows exposing ASP.NET Core applications directly (without a reverse proxy) to the Internet. Under the hood it's implemented on top of the Windows HTTP Server API, which on one side limits hosting options to Windows only, but on the other allows for leveraging the full power of Http.Sys (the same power that runs IIS). Part of that power is support for HTTP/2, based on which I've decided to build a proof of concept API.

Running ASP.NET Core application on HttpSysServer

I've started by creating a simple ASP.NET Core application, something that just runs.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("-- Demo.AspNetCore.Server.HttpSys.Http2 --");
        });
    }
}

Then I've grabbed the source code and compiled it. Now I was able to switch the host to HttpSysServer.

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .UseHttpSys(options =>
            {
                options.UrlPrefixes.Add("http://localhost:63861");
                options.UrlPrefixes.Add("https://localhost:44365");
                options.Authentication.Schemes = AuthenticationSchemes.None;
                options.Authentication.AllowAnonymous = true;
            })
            .Build();

        host.Run();
    }
}

The two URLs above are kind of a trick from my side - they are the same as the ones used by my development instance of IIS Express. The process of configuring SSL for HttpSysServer is a little bit problematic, and by using those URLs I've saved myself from going through it, as IIS Express has already configured them.

After those steps I could run the application, navigate to https://localhost:44365 over HTTPS and see that HTTP/2 has already kicked in (thanks to native support in Http.Sys).

Chrome Developer Tools Network Tab - HttpSysServer responding with H2

HTTP/2 as request feature

ASP.NET Core has a concept of request features, which represent server capabilities related to HTTP. Every request feature is represented by an interface sitting in the Microsoft.AspNetCore.Http.Features namespace. There are features representing WebSockets, HTTP upgrades, buffering etc. Representing HTTP/2 as a feature seems to be in line with this approach.

public interface IHttp2Feature
{
    bool IsHttp2Request { get; }

    void PushPromise(string path);

    void PushPromise(string path, string method, IHeaderDictionary headers);
}

Implementing HTTP/2 with Windows Http Server API

Deep at the bottom of HttpSysServer there is a HttpApi class which exposes the HTTP Server API. The information on whether the request is being performed over HTTP/2 is available through the Flags field on the HTTP_REQUEST structure. Currently the field isn't being used, so it's simply there as an unsigned integer; I've decided to change it to a flags enum. The second thing that needs to be done is importing the HttpDeclarePush function, which allows for Server Push.

internal static unsafe class HttpApi
{
    ...

    [DllImport(HTTPAPI, ExactSpelling = true, CallingConvention = CallingConvention.StdCall,
     CharSet = CharSet.Unicode, SetLastError = true)]
    internal static extern unsafe uint HttpDeclarePush(SafeHandle requestQueueHandle, ulong requestId,
        HTTP_VERB verb, string path, string query, HTTP_REQUEST_HEADERS* headers);

    ...

    [Flags]
    internal enum HTTP_REQUEST_FLAG : uint
    {
        None = 0x0,
        MoreEntityBodyExists = 0x1,
        IpRouted = 0x2,
        HTTP2 = 0x4
    }

    [StructLayout(LayoutKind.Sequential)]
    internal struct HTTP_REQUEST
    {
        internal HTTP_REQUEST_FLAG Flags;
        ...
    }

    ...
}

The IsHttp2Request property should be exposed as part of the request. In order to do that, the information needs to be bubbled through two layers. The first is the NativeRequestContext class, which serves as a bridge to the native implementation and contains a pointer to HTTP_REQUEST.

internal unsafe class NativeRequestContext : IDisposable
{
    ...

    internal bool IsHttp2 => NativeRequest->Flags.HasFlag(HttpApi.HTTP_REQUEST_FLAG.HTTP2);

    ...
}

The second layer is the Request class, which serves as an internal representation of the request. Here we need to grab the value of NativeRequestContext.IsHttp2 in the constructor, because the last step of the constructor is a call to NativeRequestContext.ReleasePins() which releases the HTTP_REQUEST structure.

internal sealed class Request
{
    internal Request(RequestContext requestContext, NativeRequestContext nativeRequestContext)
    {
        ...

        IsHttp2 = nativeRequestContext.IsHttp2;

        ...

        // Finished directly accessing the HTTP_REQUEST structure.
        _nativeRequestContext.ReleasePins();
    }

    ...

    public bool IsHttp2 { get; }

    ...
}

The Server Push functionality fits better with the response, which is internally represented by the Response class. This is where I'm going to put the method which will take care of transforming the parameters to a form acceptable by HttpDeclarePush. The first step is transforming the HTTP method from a string to HTTP_VERB. Some additional validation is also needed, as only the GET and HEAD methods can be used for Server Push.

internal sealed class Response
{
    ...

    internal unsafe void PushPromise(string path, string method, IDictionary<string, StringValues> headers)
    {
        if (Request.IsHttp2)
        {
            HttpApi.HTTP_VERB verb = HttpApi.HTTP_VERB.HttpVerbHEAD;
            string methodToUpper = method.ToUpperInvariant();
            if (HttpApi.HttpVerbs[(int)HttpApi.HTTP_VERB.HttpVerbGET] == methodToUpper)
            {
                verb = HttpApi.HTTP_VERB.HttpVerbGET;
            }
            else if (HttpApi.HttpVerbs[(int)HttpApi.HTTP_VERB.HttpVerbHEAD] != methodToUpper)
            {
                throw new ArgumentException("The push operation only supports GET and HEAD methods.",
                    nameof(method));
            }

            ...
        }
    }
}

The path also needs to be processed, as HttpDeclarePush expects the path portion and the query portion separately.

internal sealed class Response
{
    ...

    internal unsafe void PushPromise(string path, string method, IDictionary<string, StringValues> headers)
    {
        if (Request.IsHttp2)
        {
            ...

            string query = null;
            int queryIndex = path.IndexOf('?');
            if (queryIndex >= 0)
            {
                if (queryIndex < path.Length - 1)
                {
                    query = path.Substring(queryIndex + 1);
                }
                path = path.Substring(0, queryIndex);
            }

            ...
        }
    }
}

The hardest part is putting the headers into the HTTP_REQUEST_HEADERS structure. The side effect of this process is a list of GCHandle instances which will need to be released after the Server Push (the Response class already contains a FreePinnedHeaders method capable of doing this).

internal sealed class Response
{
    ...

    internal unsafe void PushPromise(string path, string method, IDictionary<string, StringValues> headers)
    {
        if (Request.IsHttp2)
        {
            ...

            HttpApi.HTTP_REQUEST_HEADERS* nativeHeadersPointer = null;
            List<GCHandle> pinnedHeaders = null;
            if ((headers != null) && (headers.Count > 0))
            {
                HttpApi.HTTP_REQUEST_HEADERS nativeHeaders = new HttpApi.HTTP_REQUEST_HEADERS();
                pinnedHeaders = SerializeHeaders(headers, ref nativeHeaders);
                nativeHeadersPointer = &nativeHeaders;
            }

            ...
        }
    }
}

I'm not including the SerializeHeaders method here. If somebody is interested in my certainly not perfect and probably buggy implementation, it can be found here (in general it's based on the already existing SerializeHeaders method which the Response class is using for the actual response).

After all the preparations finally HttpDeclarePush can be called.

internal sealed class Response
{
    ...

    internal unsafe void PushPromise(string path, string method, IDictionary<string, StringValues> headers)
    {
        if (Request.IsHttp2)
        {
            ...

            uint statusCode = ErrorCodes.ERROR_SUCCESS;
            try
            {
                statusCode = HttpApi.HttpDeclarePush(RequestContext.Server.RequestQueue.Handle,
                    RequestContext.Request.RequestId, verb, path, query, nativeHeadersPointer);
            }
            finally
            {
                if (pinnedHeaders != null)
                {
                    FreePinnedHeaders(pinnedHeaders);
                }
            }

            if (statusCode != ErrorCodes.ERROR_SUCCESS)
            {
                throw new HttpSysException((int)statusCode);
            }
        }
    }
}

With the Request and Response classes ready, the feature itself can be implemented. The HttpSysServer aggregates most of the feature implementations into the FeatureContext class, so this is where the explicit interface implementation will be added.

internal class FeatureContext :
    ...
    IHttp2Feature
{
    ...

    bool IHttp2Feature.IsHttp2Request => Request.IsHttp2;

    void IHttp2Feature.PushPromise(string path)
    {
        ((IHttp2Feature)this).PushPromise(path, "GET", null);
    }

    void IHttp2Feature.PushPromise(string path, string method, IHeaderDictionary headers)
    {
        ...

        try
        {
            Response.PushPromise(path, method, headers);
        }
        catch (Exception ex) when (!(ex is ArgumentException))
        { }
    }

    ...
}

As you can see, I've decided to swallow almost all exceptions coming from Response.PushPromise. This is in fact the same approach as in ASP.NET, which makes Server Push a fire-and-forget operation (this is OK, as an application shouldn't rely on it).

The last step is exposing the new feature as part of the StandardFeatureCollection class. The class provides an _identityFunc field which represents a delegate returning the FeatureContext for the current request.

internal sealed class StandardFeatureCollection : IFeatureCollection
{
    ...

    private static readonly Dictionary<Type, Func<FeatureContext, object>> _featureFuncLookup = new Dictionary<Type, Func<FeatureContext, object>>()
    {
        ...
        { typeof(IHttp2Feature), _identityFunc },
        ...
    };

    ...
}

Using the feature

In order to consume a request feature, it should be retrieved from the HttpContext.Features collection. If a given feature is not available, the collection will return null. As HttpContext is available on both the HttpRequest and HttpResponse classes, the feature can be exposed through some handy extensions.

public static class HttpRequestExtensions
{
    public static bool IsHttp2Request(this HttpRequest request)
    {
        IHttp2Feature http2Feature = request.HttpContext.Features.Get<IHttp2Feature>();

        return (http2Feature != null) && http2Feature.IsHttp2Request;
    }
}

public static class HttpResponseExtensions
{
    public static void PushPromise(this HttpResponse response, string path)
    {
        response.PushPromise(path, "GET", null);
    }

    public static void PushPromise(this HttpResponse response, string path, string method, IHeaderDictionary headers)
    {
        IHttp2Feature http2Feature = response.HttpContext.Features.Get<IHttp2Feature>();

        http2Feature?.PushPromise(path, method, headers);
    }
}

Now it is time to extend the demo application to see this stuff in action. I've created a css folder in wwwroot, dropped two simple CSS files in there, and added the StaticFiles middleware. Next I've modified the code to return some simple HTML referencing the added resources.

public class Startup
{
    ...

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseStaticFiles()
            .Map("/server-push", (IApplicationBuilder branchedApp) =>
            {
                branchedApp.Run(async (context) =>
                {
                    bool isHttp2Request = context.Request.IsHttp2Request();

                    context.Response.PushPromise("/css/normalize.css");
                    context.Response.PushPromise("/css/site.css");

                    await System.Threading.Tasks.Task.Delay(100);

                    context.Response.ContentType = "text/html";
                    await context.Response.WriteAsync("<!DOCTYPE html>");
                    await context.Response.WriteAsync("<html>");
                    await context.Response.WriteAsync("<head>");
                    await context.Response.WriteAsync("<title>Demo.AspNetCore.Server.HttpSys.Http2 - Server Push</title>");
                    await context.Response.WriteAsync("<link rel=\"stylesheet\" href=\"/css/normalize.css\" />");
                    await context.Response.WriteAsync("<link rel=\"stylesheet\" href=\"/css/site.css\" />");
                    await context.Response.WriteAsync("</head>");
                    await context.Response.WriteAsync("<body>");

                    await System.Threading.Tasks.Task.Delay(50);
                    await context.Response.WriteAsync($"<h1>Demo.AspNetCore.Server.HttpSys.Http2 (IsHttp2Request: {isHttp2Request})</h1>");
                    await System.Threading.Tasks.Task.Delay(50);

                    await context.Response.WriteAsync("</body>");
                    await context.Response.WriteAsync("</html>");
                });
            })
            .Run(async (context) =>
            {
                await context.Response.WriteAsync("-- Demo.AspNetCore.Server.HttpSys.Http2 --");
            });
    }
}

The delays have been added in order to avoid a client-side race between the Server Push and the parser (as the content is really small and the response body has a higher priority than Server Push, the parser could trigger regular requests for the resources instead of claiming the pushed ones).

Below is what can be seen in developer tools after running the application and navigating to /server-push over HTTPS.

Chrome Developer Tools Network Tab - HttpSysServer responding with H2

There it is! HTTP/2 with Server Push from ASP.NET Core application.

What's next

This was a fun challenge. It gave me an opportunity to understand the internals of HttpSysServer and work with a native API, which is not something I get to do every day. If somebody would like to roll out their own HttpSysServer with those changes (or has some suggestions and improvements), the full code can be found on GitHub. As there is already an issue for enabling HTTP/2 and Server Push in the HttpSysServer repository, I'm going to ask the team if this approach is something they would consider a valuable pull request (the IHttp2Feature interface should probably be added to HttpAbstractions, possibly with the HttpRequestExtensions and HttpResponseExtensions).

The amount of transferred data matters. On one hand it often contributes to the cost of running a service, and on the other a lot of clients don't have connections as fast as we would like to believe. This is why response compression is one of the key performance mechanisms in the web world.

There is a number of compression schemes (more or less popular) out there, so clients advertise the supported ones with the Accept-Encoding header.

Chrome Network Tab - No Response Compression

The screenshot above shows the result of a request from Chrome to the simplest possible ASP.NET Core application.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("-- Demo.AspNetCore.ResponseCompression.Brotli --");
        });
    }
}

As we can see, the browser has advertised four different options for compressing the response, but none has been used. This shouldn't be a surprise, as ASP.NET Core is modular by its nature and leaves picking the features we want up to us. In order for compression to be supported, we need to add a proper middleware.

Enabling response compression

The support for response compression in ASP.NET Core is available through the ResponseCompressionMiddleware from the Microsoft.AspNetCore.ResponseCompression package. After referencing the package, all that needs to be done is registering the middleware and the related services.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseResponseCompression()
            .Run(async (context) =>
            {
                if (!StringValues.IsNullOrEmpty(context.Request.Headers[HeaderNames.AcceptEncoding]))
                    context.Response.Headers.Append(HeaderNames.Vary, HeaderNames.AcceptEncoding);

                context.Response.ContentType = "text/plain";
                await context.Response.WriteAsync("-- Demo.AspNetCore.ResponseCompression.Brotli --");
            });
    }
}

One thing to remember is setting the Content-Type, as compression is enabled only for specific MIME types (there is also a separate setting for enabling compression over HTTPS). Additionally I'm adding the Vary: Accept-Encoding header to the response, so any cache along the way knows the response needs to be cached per compression type (a future version of the middleware will handle this for us).
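
Both of those settings can be adjusted while registering the services (a sketch; extending the defaults with image/svg+xml is just an example):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression(options =>
        {
            // Opt in to compression over HTTPS (disabled by default)
            options.EnableForHttps = true;

            // Extend the default list of compressed MIME types (requires System.Linq)
            options.MimeTypes = ResponseCompressionDefaults.MimeTypes.Concat(new[] { "image/svg+xml" });
        });
    }

    ...
}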

The screenshot below shows the result of the same request as previously, after the modifications.

Chrome Network Tab - Gzip Compression

Now the response has been compressed using gzip. Gzip is the only compression supported by the middleware out of the box, which is "ok" in most cases, as it has the widest support among clients. But the web world is constantly evolving, and compression algorithms are no different. The latest-greatest seems to be Brotli, which can shrink data by an additional 20% to 25%. It would be nice to use it in ASP.NET Core.

Extending response compression with Brotli

The ResponseCompressionMiddleware can be extended with additional compression algorithms by implementing the ICompressionProvider interface. The interface is pretty simple: it has two properties (providing information about the encoding token and whether flushing is supported) and one method (which should create a stream with compression capabilities). The true challenge is the actual Brotli implementation. I've decided to use a .NET Core build of Brotli.NET. This is in fact a wrapper around the original implementation, so some cross-platform issues might appear and force a recompilation. The wrapper exposes the original implementation through BrotliStream, which makes it very easy to use in the context of ICompressionProvider.

public class BrotliCompressionProvider : ICompressionProvider
{
    public string EncodingName => "br";

    public bool SupportsFlush => true;

    public Stream CreateStream(Stream outputStream)
    {
        return new BrotliStream(outputStream, CompressionMode.Compress);
    }
}

The custom provider needs to be added to the ResponseCompressionOptions.Providers collection as part of the services registration.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression(options =>
        {
            options.Providers.Add<BrotliCompressionProvider>();
        });
    }

    ...
}

Now the demo request can be made once again - it should show that Brotli is being used for compression.

Chrome Network Tab - Brotli Compression

Not every browser (and not always) supports Brotli

Let's take a quick look at the compression support advertised by different browsers:

  • IE11: Accept-Encoding: gzip, deflate
  • Edge: Accept-Encoding: gzip, deflate
  • Firefox: Accept-Encoding: gzip, deflate (HTTP), Accept-Encoding: gzip, deflate, br (HTTPS)
  • Chrome: Accept-Encoding: gzip, deflate, sdch, br
  • Opera: Accept-Encoding: gzip, deflate, sdch, br

So IE and Edge don't support Brotli at all, and Firefox supports it only over HTTPS. Checking more detailed information at caniuse, we will learn that a couple more browsers don't support Brotli (but Edge already has it in preview, although it is rumored that the final support will be HTTPS-only). The overall support is about 57%, which means that we want to keep gzip around as well. In order to do so, it needs to be added to the ResponseCompressionOptions.Providers collection too (the moment we start manually registering providers, the default one is gone).

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression(options =>
        {
            options.Providers.Add<BrotliCompressionProvider>();
            options.Providers.Add<GzipCompressionProvider>();
        });
    }

    ...
}

If we test this code against various browsers, we will see that the chosen compression always ends up being gzip. The reason for that is the way in which the middleware chooses the provider. It takes the advertised compressions, sorts them by quality value if present, and chooses the first one for which a provider exists. As browsers generally don't provide any quality values (in other words, they are equally happy to accept any of the encodings they support), gzip always wins because it is always first on the advertised list. Unfortunately the middleware doesn't provide an option for defining a server-side preference for such cases. In order to work around it, I've decided to go the hacky way. If the only way to control provider selection is through quality values, they need to be adjusted before the response compression middleware kicks in. I've put together another middleware to do exactly that. The additional middleware inspects the request's Accept-Encoding header and, if there are no quality values provided, adjusts them.

public class ResponseCompressionQualityMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IDictionary<string, double> _encodingQuality;

    public ResponseCompressionQualityMiddleware(RequestDelegate next, IDictionary<string, double> encodingQuality)
    {
        _next = next;
        _encodingQuality = encodingQuality;
    }

    public async Task Invoke(HttpContext context)
    {
        StringValues encodings = context.Request.Headers[HeaderNames.AcceptEncoding];
        IList<StringWithQualityHeaderValue> encodingsList;

        if (!StringValues.IsNullOrEmpty(encodings)
            && StringWithQualityHeaderValue.TryParseList(encodings, out encodingsList)
            && (encodingsList != null) && (encodingsList.Count > 0))
        {
            string[] encodingsWithQuality = new string[encodingsList.Count];

            for (int encodingIndex = 0; encodingIndex < encodingsList.Count; encodingIndex++)
            {
                // If there is any quality value provided don't change anything
                if (encodingsList[encodingIndex].Quality.HasValue)
                {
                    encodingsWithQuality = null;
                    break;
                }
                else
                {
                    string encodingValue = encodingsList[encodingIndex].Value;
                    encodingsWithQuality[encodingIndex] = (new StringWithQualityHeaderValue(encodingValue,
                        _encodingQuality.ContainsKey(encodingValue) ? _encodingQuality[encodingValue] : 0.1)).ToString();
                }

            }

            if (encodingsWithQuality != null)
                context.Request.Headers[HeaderNames.AcceptEncoding] = new StringValues(encodingsWithQuality);
        }

        await _next(context);
    }
}

This "adjusting" middleware needs to be registered before the response compression middleware and configured with tokens for which a preference is needed.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression(options =>
        {
            options.Providers.Add<BrotliCompressionProvider>();
            options.Providers.Add<GzipCompressionProvider>();
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseMiddleware<ResponseCompressionQualityMiddleware>(new Dictionary<string, double>
            {
                { "br", 1.0 },
                { "gzip", 0.9 }
            })
            .UseResponseCompression()
            .Run(async (context) =>
            {
                if (!StringValues.IsNullOrEmpty(context.Request.Headers[HeaderNames.AcceptEncoding]))
                    context.Response.Headers.Append(HeaderNames.Vary, HeaderNames.AcceptEncoding);

                context.Response.ContentType = "text/plain";
                await context.Response.WriteAsync("-- Demo.AspNetCore.ResponseCompression.Brotli --");
            });
    }
}

Now the tests in different browsers will give different results. For example, in the case of Edge the response will be compressed with gzip, but in the case of Chrome with Brotli, which is the desired effect.

In my previous post I've shown how HttpClient can be extended with payload encryption capabilities by providing support for aes128gcm encoding. In this post I'm going to extend Aes128GcmEncoding class with decoding capabilities.

Decoding at the high level

It shouldn't be a surprise that decoding is mostly about doing the opposite of encoding. This is why the DecodeAsync method is very similar to EncodeAsync.

public static class Aes128GcmEncoding
{
    public static async Task DecodeAsync(Stream source, Stream destination, Func<string, byte[]> keyLocator)
    {
        // Validation skipped for brevity
        ...

        CodingHeader codingHeader = await ReadCodingHeaderAsync(source);

        byte[] pseudorandomKey = HmacSha256(codingHeader.Salt, keyLocator(codingHeader.KeyId));
        byte[] contentEncryptionKey = GetContentEncryptionKey(pseudorandomKey);

        await DecryptContentAsync(source, destination,
            codingHeader.RecordSize, pseudorandomKey, contentEncryptionKey);
    }
}

The keyLocator parameter is a simple way of delegating the key management responsibility to the caller; the implementation expects a method for retrieving the key based on its identifier, without going into any further details. I have also decided to introduce a class for the coding header properties in order to make the code more readable.
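
The class itself is just a simple property bag (a sketch mirroring the fields which are read below):

public class CodingHeader
{
    // The salt from the beginning of the coding header
    public byte[] Salt { get; set; }

    // The size of a single record (including the overhead)
    public int RecordSize { get; set; }

    // The optional key identifier (null when the keyid field is empty)
    public string KeyId { get; set; }
}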

Retrieving the coding header

As we already know, the coding header contains three fields with constant length (salt, record size and key identifier length) and one with variable length (zero in the extreme case). They can be retrieved one by one. The important thing is to validate the presence and size of every field; for this purpose I've split the reading into several smaller methods. Also, the record size must be additionally validated, as this implementation supports a smaller maximum value than allowed by the specification.

public static class Aes128GcmEncoding
{
    private static async Task<byte[]> ReadCodingHeaderBytesAsync(Stream source, int count)
    {
        byte[] bytes = new byte[count];
        int bytesRead = await source.ReadAsync(bytes, 0, count);
        if (bytesRead != count)
            throw new FormatException("Invalid coding header.");

        return bytes;
    }

    private static async Task<int> ReadRecordSizeAsync(Stream source)
    {
        byte[] recordSizeBytes = await ReadCodingHeaderBytesAsync(source, RECORD_SIZE_LENGTH);

        if (BitConverter.IsLittleEndian)
            Array.Reverse(recordSizeBytes);
        uint recordSize = BitConverter.ToUInt32(recordSizeBytes, 0);

        if (recordSize > Int32.MaxValue)
            throw new NotSupportedException($"Maximum supported record size is {Int32.MaxValue}.");

        return (int)recordSize;
    }

    private static async Task<string> ReadKeyId(Stream source)
    {
        string keyId = null;

        int keyIdLength = source.ReadByte();

        if (keyIdLength == -1)
            throw new FormatException("Invalid coding header.");

        if (keyIdLength > 0)
        {
            byte[] keyIdBytes = await ReadCodingHeaderBytesAsync(source, keyIdLength);
            keyId = Encoding.UTF8.GetString(keyIdBytes);
        }

        return keyId;
    }

    private static async Task<CodingHeader> ReadCodingHeaderAsync(Stream source)
    {
        return new CodingHeader
        {
            Salt = await ReadCodingHeaderBytesAsync(source, SALT_LENGTH),
            RecordSize = await ReadRecordSizeAsync(source),
            KeyId = await ReadKeyId(source)
        };
    }
}

With the coding header retrieved the content can be decrypted.

Decrypting the content and retrieving the payload

The pseudorandom key and the content encryption key should be calculated in exactly the same way as during encryption. With those, the records can be read and decrypted. The operation should be done record by record (as mentioned in the previous post, the nonce guards the order) until the last record is reached, where reaching the last record means not only hitting the end of the content, but must also be confirmed by that last record being delimited with the 0x02 byte.
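
For reference, below is roughly what those helpers compute (a sketch based on the encrypted content encoding specification; the actual methods come from the encoding part described in the previous post):

public static class Aes128GcmEncoding
{
    // CEK = first 16 bytes of HMAC-SHA256(PRK, "Content-Encoding: aes128gcm" || 0x00 || 0x01)
    private static byte[] GetContentEncryptionKey(byte[] pseudorandomKey)
    {
        byte[] info = Encoding.ASCII.GetBytes("Content-Encoding: aes128gcm\u0000\u0001");

        return HmacSha256(pseudorandomKey, info).Take(16).ToArray();
    }

    // NONCE = first 12 bytes of HMAC-SHA256(PRK, "Content-Encoding: nonce" || 0x00 || 0x01)
    //         XOR-ed with the record sequence number (treated as big-endian)
    private static byte[] GetNonce(byte[] pseudorandomKey, ulong recordSequenceNumber)
    {
        byte[] info = Encoding.ASCII.GetBytes("Content-Encoding: nonce\u0000\u0001");
        byte[] nonce = HmacSha256(pseudorandomKey, info).Take(12).ToArray();

        for (int i = 0; i < sizeof(ulong); i++)
        {
            nonce[nonce.Length - 1 - i] ^= (byte)(recordSequenceNumber >> (8 * i));
        }

        return nonce;
    }
}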

The tricky part is extracting the data from the record. In order to do that, we need to detect the location of the delimiter and make sure it meets all the requirements. All the records must be of equal length (except the last one), but they don't have to contain the same amount of data, as there can be padding consisting of any number of 0x00 bytes at the end. This is something which I haven't included in the encryption implementation, but it must be correctly handled here. So the delimiter should be the first byte from the end whose value is not 0x00. As explained in the previous post, there are two valid delimiters: 0x01 (for all the records except the last one) and 0x02 (for the last record). Any other delimiter means that the record is invalid; also, a record which contains only padding is invalid. The method below ensures all those conditions are met.

public static class Aes128GcmEncoding
{
    private static int GetRecordDelimiterIndex(byte[] plainText, int recordDataSize)
    {
        int recordDelimiterIndex = -1;
        for (int plainTextIndex = plainText.Length - 1; plainTextIndex >= 0; plainTextIndex--)
        {
            if (plainText[plainTextIndex] == 0)
                continue;

            if ((plainText[plainTextIndex] == RECORD_DELIMITER)
                || (plainText[plainTextIndex] == LAST_RECORD_DELIMITER))
            {
                recordDelimiterIndex = plainTextIndex;
            }

            break;
        }

        if ((recordDelimiterIndex == -1)
            || ((plainText[recordDelimiterIndex] == RECORD_DELIMITER)
                && ((plainText.Length - 1) != recordDataSize)))
        {
            throw new FormatException("Invalid record delimiter.");
        }

        return recordDelimiterIndex;
    }
}

With this method content decryption can be implemented.

public static class Aes128GcmEncoding
{
    private static async Task DecryptContentAsync(Stream source, Stream destination, int recordSize, byte[] pseudorandomKey, byte[] contentEncryptionKey)
    {
        GcmBlockCipher aes128GcmCipher = new GcmBlockCipher(new AesFastEngine());

        ulong recordSequenceNumber = 0;

        byte[] cipherText = new byte[recordSize];
        byte[] plainText = null;
        int recordDataSize = recordSize - RECORD_OVERHEAD_SIZE;
        int recordDelimiterIndex = 0;

        do
        {
            int cipherTextLength = await source.ReadAsync(cipherText, 0, cipherText.Length);
            if (cipherTextLength == 0)
                throw new FormatException("Invalid records order or missing record(s).");

            aes128GcmCipher.Reset();
            AeadParameters aes128GcmParameters = new AeadParameters(new KeyParameter(contentEncryptionKey),
                128, GetNonce(pseudorandomKey, recordSequenceNumber));
            aes128GcmCipher.Init(false, aes128GcmParameters);

            plainText = new byte[aes128GcmCipher.GetOutputSize(cipherText.Length)];
            int length = aes128GcmCipher.ProcessBytes(cipherText, 0, cipherText.Length, plainText, 0);
            aes128GcmCipher.DoFinal(plainText, length);

            recordDelimiterIndex = GetRecordDelimiterIndex(plainText, recordDataSize);

            if ((plainText[recordDelimiterIndex] == LAST_RECORD_DELIMITER) && (source.ReadByte() != -1))
                throw new FormatException("Invalid records order or missing record(s).");

            await destination.WriteAsync(plainText, 0, recordDelimiterIndex);
        }
        while (plainText[recordDelimiterIndex] != LAST_RECORD_DELIMITER);
    }
}

HttpClient plumbing

With the decoding implementation ready, the components required by HttpClient can be prepared. I've decided to reuse the same wrapping pattern as with Aes128GcmEncodedContent.

public sealed class Aes128GcmDecodedContent : HttpContent
{
    private readonly HttpContent _contentToBeDecrypted;
    private readonly Func<string, byte[]> _keyLocator;

    public Aes128GcmDecodedContent(HttpContent contentToBeDecrypted, Func<string, byte[]> keyLocator)
    {
        _contentToBeDecrypted = contentToBeDecrypted;
        _keyLocator = keyLocator;
    }

    protected override async Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        if (!_contentToBeDecrypted.Headers.ContentEncoding.Contains("aes128gcm"))
            throw new NotSupportedException("Encryption type not supported or stream isn't encrypted.");

        Stream streamToBeDecrypted = await _contentToBeDecrypted.ReadAsStreamAsync();

        await Aes128GcmEncoding.DecodeAsync(streamToBeDecrypted, stream, _keyLocator);
    }

    protected override bool TryComputeLength(out long length)
    {
        length = 0;

        return false;
    }
}

But this time it is not our code which is creating the content object - it comes from the response. In order to wrap the content coming from the response, the HttpClient pipeline needs to be extended with a DelegatingHandler which will take care of that upon detecting the desired Content-Encoding header value. The DelegatingHandler also gives an opportunity for setting the Accept-Encoding header, so the other side knows that encrypted content is supported.

public sealed class Aes128GcmEncodingHandler : DelegatingHandler
{
    private readonly Func<string, byte[]> _keyLocator;

    public Aes128GcmEncodingHandler(Func<string, byte[]> keyLocator)
    {
        _keyLocator = keyLocator;
    }

    protected override async Task SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("aes128gcm"));

        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        if (response.Content.Headers.ContentEncoding.Contains("aes128gcm"))
        {
            response.Content = new Aes128GcmDecodedContent(response.Content, _keyLocator);
        }

        return response;
    }
}

With those components in place we can try requesting some encrypted content from server.

Test run

To see the decryption in action, the HttpClient pipeline needs to be set up to use the components created above (assuming the server will respond with encrypted content).

IDictionary<string, byte[]> _keys = new Dictionary<string, byte[]>
{
    { String.Empty, Convert.FromBase64String("yqdlZ+tYemfogSmv7Ws5PQ==") },
    { "a1", Convert.FromBase64String("BO3ZVPxUlnLORbVGMpbT1Q==") }
};
Func<string, byte[]> keyLocator = (keyId) => _keys[keyId ?? String.Empty];

HttpMessageHandler encryptedContentEncodingPipeline = new HttpClientHandler();
encryptedContentEncodingPipeline = new Aes128GcmEncodingHandler(keyLocator)
{
    InnerHandler = encryptedContentEncodingPipeline
};

using (HttpClient encryptedContentEncodingClient = new HttpClient(encryptedContentEncodingPipeline))
{
    string decryptedContent = encryptedContentEncodingClient.GetStringAsync("<URL>").Result;
}

This gives full support for aes128gcm content encoding in HttpClient. All the code is available here for anybody who would like to play with it.
