This is my second post about Azure Functions extensibility. At the end of the previous one I mentioned that I had been lucky enough to find a real context for my learning adventure. This context came up in a chat with colleagues from a "friendly" project. They were looking to integrate Azure Functions with RethinkDB. The obvious question is: why? Cosmos DB is the natural choice for a greenfield project which wants to use a NoSQL database and a serverless approach. Unfortunately, not all projects out there are greenfield. There are a lot of brownfield projects, and this "friendly" project of mine fits into that category. They simply can't drop RethinkDB without a huge investment (which is planned for a later time), so they've decided on a lift-and-shift approach for migrating the existing functionality to Azure. As part of that migration RethinkDB will be containerized and deployed to Azure Container Instances. The desire to use Azure Functions with this setup shouldn't be a surprise, as it adds a lot of value to the migration. The goal is to have the architecture below available.

[Architecture diagram: a Browser sends HTTP requests through API Management to the Function Apps, which connect to a RethinkDB Container Instance and consume its changefeed; a Container Registry provides the container image.]

I've offered to work on the Azure Functions extensions which would make this integration possible. So far I've been able to create a trigger binding, and in this post I would like to share what I've learned.

Trigger mapping and configuration

If you've read my previous post, you already know that at the root of every binding (input, output, or trigger) there is an attribute. This attribute maps the binding to a function parameter and provides a way to configure it. Take a look at the example below.

[Binding]
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class RethinkDbTriggerAttribute : Attribute
{

    public string DatabaseName { get; private set; }

    public string TableName { get; private set; }

    [AppSetting]
    public string HostnameSetting { get; set; }

    ...

    public bool IncludeTypes { get; set; } = false;

    public RethinkDbTriggerAttribute(string databaseName, string tableName)
    {
        ...

        DatabaseName = databaseName;
        TableName = tableName;
    }
}

The properties of the attribute can be split into three groups.

First are the read-only properties set through the constructor. Those represent aspects which need to be configured for every function separately. In the case of the RethinkDB trigger these are the DatabaseName and TableName properties, as it's reasonable to expect that every function will be observing a different table.

The second group are standard read-write properties. Those represent function-specific but optional settings. In the case of the RethinkDB trigger this is the IncludeTypes property.

The last group is the most interesting one: properties decorated with AppSettingAttribute. Those allow pointing at values from application settings, which makes them perfect for settings which might be shared between many functions. This is also why they are usually optional, leaving the possibility of a global option which will be used by all functions and overridden only by specific ones. In the case of the RethinkDB trigger this is the HostnameSetting property, as most functions will be connecting to the same server or cluster.

Providing binding

The root (the attribute) is in place, so the trigger binding can be built on top of it. This requires implementations of ITriggerBindingProvider and ITriggerBinding.

The implementation of ITriggerBindingProvider will be called when functions are being discovered. At this point the framework will call the TryCreateAsync method to acquire an ITriggerBinding instance. The most important part of the passed context is the information about the parameter for which that instance should be created. It's important to remember that this doesn't have to be a parameter with the trigger attribute. This is why the parameter's custom attributes should be searched for the trigger one, and if it's missing, a Task containing a null result should be returned. The fact that ITriggerBindingProvider is looking for the trigger attribute instance also makes it a natural place to read settings from the attribute. This is why, usually, the constructor takes IConfiguration as a parameter, along with trigger-specific options and services (grabbing those from DI can be done as part of trigger registration).

internal class RethinkDbTriggerAttributeBindingProvider : ITriggerBindingProvider
{
    ...

    private readonly Task<ITriggerBinding> _nullTriggerBindingTask =
        Task.FromResult<ITriggerBinding>(null);

    public RethinkDbTriggerAttributeBindingProvider(IConfiguration configuration,
        RethinkDbOptions options, IRethinkDBConnectionFactory rethinkDBConnectionFactory,
        INameResolver nameResolver, ILoggerFactory loggerFactory)
    {
        ...
    }

    public Task<ITriggerBinding> TryCreateAsync(TriggerBindingProviderContext context)
    {
        ...

        ParameterInfo parameter = context.Parameter;

        RethinkDbTriggerAttribute triggerAttribute =
            parameter.GetCustomAttribute<RethinkDbTriggerAttribute>(inherit: false);
        if (triggerAttribute is null)
        {
            return _nullTriggerBindingTask;
        }

        string triggerHostname = ResolveTriggerAttributeHostname(triggerAttribute);
        Task<Connection> triggerConnectionTask =
            _rethinkDBConnectionFactory.GetConnectionAsync(triggerHostname);

        TableOptions triggerTableOptions = ResolveTriggerTableOptions(triggerAttribute);

        return Task.FromResult<ITriggerBinding>(
            new RethinkDbTriggerBinding(parameter, triggerConnectionTask,
                triggerTableOptions, triggerAttribute.IncludeTypes)
        );
    }

    private string ResolveTriggerAttributeHostname(RethinkDbTriggerAttribute triggerAttribute)
    {
        string hostname = null;
        if (!String.IsNullOrEmpty(triggerAttribute.HostnameSetting))
        {
            hostname = _configuration.GetConnectionStringOrSetting(triggerAttribute.HostnameSetting);

            if (String.IsNullOrEmpty(hostname))
            {
                throw new InvalidOperationException("...");
            }

            return hostname;
        }

        hostname = _options.Hostname;

        if (String.IsNullOrEmpty(hostname))
        {
            throw new InvalidOperationException("...");
        }

        return hostname;
    }

    private TableOptions ResolveTriggerTableOptions(RethinkDbTriggerAttribute triggerAttribute)
    {
        return new TableOptions(
            ResolveAttributeValue(triggerAttribute.DatabaseName),
            ResolveAttributeValue(triggerAttribute.TableName)
        );
    }

    private string ResolveAttributeValue(string attributeValue)
    {
        return _nameResolver.Resolve(attributeValue) ?? attributeValue;
    }
}

The created ITriggerBinding instance is used by only a single function. It has several responsibilities:

  • Defining the type of value returned by the trigger.
  • Binding to the different possible types for which the trigger can provide a value.
  • Creating a listener for the events which should result in function execution.

Defining the type of value returned by the trigger is simple; it only requires implementing the TriggerValueType property.

Providing binding to different types requires implementing the BindingDataContract property and the BindAsync method (which also means an ITriggerData implementation). This part is optional, as additional converters and value providers can be added through binding rules while registering the trigger (a sketch of a non-empty contract follows after the code below).

Listener creation is the key aspect of trigger binding, its heart. The listener instance should be fully initialized with the function-specific settings and the ITriggeredFunctionExecutor available from the context.

internal class RethinkDbTriggerBinding : ITriggerBinding
{
    ...

    private readonly Task<ITriggerData> _emptyBindingDataTask =
        Task.FromResult<ITriggerData>(new TriggerData(null, new Dictionary<string, object>()));

    public Type TriggerValueType => typeof(DocumentChange);

    public IReadOnlyDictionary<string, Type> BindingDataContract { get; } =
        new Dictionary<string, Type>();

    public RethinkDbTriggerBinding(ParameterInfo parameter, Task<Connection>
        rethinkDbConnectionTask, TableOptions rethinkDbTableOptions, bool includeTypes)
    {
        ...
    }

    ...

    public Task<ITriggerData> BindAsync(object value, ValueBindingContext context)
    {
        return _emptyBindingDataTask;
    }

    public Task<IListener> CreateListenerAsync(ListenerFactoryContext context)
    {
        ...

        return Task.FromResult<IListener>(
            new RethinkDbTriggerListener(context.Executor, _rethinkDbConnectionTask,
                _rethinkDbTable, _includeTypes)
        );
    }

    ...
}
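
The binding above returns an empty binding data contract. Purely as an illustration of the optional part, a non-empty contract could expose values to binding expressions of other bindings. Below is a minimal sketch which exposes the trigger value itself; the "DocumentChange" key is my own choice, not something the extension defines.

public IReadOnlyDictionary<string, Type> BindingDataContract { get; } =
    new Dictionary<string, Type>
    {
        { "DocumentChange", typeof(DocumentChange) }
    };

public Task<ITriggerData> BindAsync(object value, ValueBindingContext context)
{
    // The trigger value itself is exposed under the "DocumentChange" key.
    return Task.FromResult<ITriggerData>(new TriggerData(null,
        new Dictionary<string, object>
        {
            { "DocumentChange", value as DocumentChange }
        }
    ));
}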

Listener implementation is important enough to get its own section.

Listening for events

For those with ASP.NET Core experience, the listener implementation may feel familiar. The reason is that the IListener interface resembles IHostedService. This is a useful observation, as it allows reusing established patterns for dealing with asynchrony and long-running tasks, like BackgroundService. I've implemented an IHostedService listening for a RethinkDB changefeed in the past, so I've simply reused that knowledge here. The important part is to remember to call ITriggeredFunctionExecutor.TryExecuteAsync when a new change arrives.

internal class RethinkDbTriggerListener : IListener
{
    ...

    private Task _listenerTask;
    private CancellationTokenSource _listenerStoppingTokenSource;

    public RethinkDbTriggerListener(ITriggeredFunctionExecutor executor,
        Task<Connection> rethinkDbConnectionTask, Table rethinkDbTable,
        bool includeTypes)
    {
        ...
    }

    ...

    public void Cancel()
    {
        StopAsync(CancellationToken.None).Wait();
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _listenerStoppingTokenSource = new CancellationTokenSource();
        _listenerTask = ListenAsync(_listenerStoppingTokenSource.Token);

        return _listenerTask.IsCompleted ? _listenerTask : Task.CompletedTask;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        if (_listenerTask == null)
        {
            return;
        }

        try
        {
            _listenerStoppingTokenSource.Cancel();
        }
        finally
        {
            await Task.WhenAny(_listenerTask,
                Task.Delay(Timeout.Infinite, cancellationToken));
        }

    }

    public void Dispose()
    { }

    private async Task ListenAsync(CancellationToken listenerStoppingToken)
    {
        Connection rethinkDbConnection =
            (_rethinkDbConnectionTask.Status == TaskStatus.RanToCompletion)
            ? _rethinkDbConnectionTask.Result : (await _rethinkDbConnectionTask);

        Cursor<DocumentChange> changefeed = await _rethinkDbTable.Changes()
            .OptArg("include_types", _includeTypes)
            .RunCursorAsync<DocumentChange>(rethinkDbConnection, listenerStoppingToken);

        while (!listenerStoppingToken.IsCancellationRequested
            && (await changefeed.MoveNextAsync(listenerStoppingToken)))
        {
            await _executor.TryExecuteAsync(
                new TriggeredFunctionData() { TriggerValue = changefeed.Current },
                CancellationToken.None
            );
        }

        changefeed.Close();
    }
}

Now all the needed classes are ready.

Trigger registration

In my previous post I mentioned that bindings are registered inside the Initialize method of an IExtensionConfigProvider implementation. The IExtensionConfigProvider implementation is also a gateway to DI; all needed dependencies can be injected through its constructor. This includes services provided by the Azure Functions runtime, binding-specific global options, and binding-specific services registered in IWebJobsStartup.

The registration itself boils down to creating a binding rule based on the trigger attribute, mapping that rule to the ITriggerBindingProvider, and adding any additional converters (an illustrative converter follows after the snippet below).

[Extension("RethinkDB")]
internal class RethinkDbExtensionConfigProvider : IExtensionConfigProvider
{
    ...

    public RethinkDbExtensionConfigProvider(IConfiguration configuration,
        IOptions<RethinkDbOptions> options,
        IRethinkDBConnectionFactory rethinkDBConnectionFactory,
        INameResolver nameResolver,
        ILoggerFactory loggerFactory)
    {
        ...
    }

    public void Initialize(ExtensionConfigContext context)
    {
        ...

        var triggerAttributeBindingRule = context.AddBindingRule<RethinkDbTriggerAttribute>();
        triggerAttributeBindingRule.BindToTrigger<DocumentChange>(
            new RethinkDbTriggerAttributeBindingProvider(_configuration, _options,
                _rethinkDBConnectionFactory, _nameResolver, _loggerFactory)
        );
        triggerAttributeBindingRule.AddConverter<..., ...>(...);
    }
}
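
The AddConverter call above is elided. Purely as an illustration (the converters the real extension registers may differ), a converter letting a function bind the trigger parameter as a JSON string could look like this.

// Illustrative only: with this converter registered via
// triggerAttributeBindingRule.AddConverter<DocumentChange, string>(...),
// a function could declare its trigger parameter as string.
internal class DocumentChangeToStringConverter : IConverter<DocumentChange, string>
{
    public string Convert(DocumentChange input)
    {
        return JsonConvert.SerializeObject(input);
    }
}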

Trigger usage

A properly registered trigger can be used in exactly the same way as the built-in ones (or the extensions provided by Microsoft). One just needs to apply the trigger attribute to the proper parameter.

public static class RethinkDbTriggeredFunction
{
    [FunctionName("RethinkDbTriggeredFunction")]
    public static void Run([RethinkDbTrigger(
        databaseName: "Demo_AspNetCore_Changefeed_RethinkDB",
        tableName: "ThreadStats",
        HostnameSetting = "RethinkDbHostname")]DocumentChange change,
        ILogger log)
    {
        log.LogInformation("[ThreadStats Change Received] " + change.GetNewValue<ThreadStats>());
    }
}

This function will be triggered every time a change occurs in the specified table.

Recently I needed to dive into Azure Functions extensibility. While doing so, I decided to put together this post. Information regarding Azure Functions extensibility is available but scattered, and I wanted to systematize it in the hope of making the learning curve gentler. Let me start with the following diagram.

[Class diagram: a WebJobsStartup assembly attribute points at CustomExtensionWebJobsStartup, which implements IWebJobsStartup (Configure(IWebJobsBuilder)) and adds CustomExtensionConfigProvider, an IExtensionConfigProvider (Initialize(ExtensionConfigContext)). The config provider adds FluentBindingRule<TAttribute> based rules for CustomExtensionAttribute and CustomExtensionTriggerAttribute. The trigger side consists of CustomExtensionTriggerAttributeBindingProvider (ITriggerBindingProvider), CustomExtensionTriggerBinding (ITriggerBinding) and CustomExtensionTriggerListener (IListener), while input and output bindings use IConverter<T,K>, IAsyncConverter<T,K>, IAsyncCollector<T> and IValueBinder.]

This diagram is nowhere near complete and shows only a certain perspective, but I consider it a good representation of the key parts of the Azure Functions extensibility model and the relations between them. There are a few aspects which should be discussed in more detail.

Discoverability

For an extension to be usable, the Azure Functions runtime needs to be able to discover it. The discovery process looks for the WebJobsStartupAttribute assembly attribute, whose role is to point at an IWebJobsStartup interface implementation. The IWebJobsStartup interface has a single method (Configure), which will be called during host startup. This is the moment when an extension can be added. The simplest implementation could look like the one below.

[assembly: WebJobsStartup(typeof(CustomExtensionWebJobsStartup))]

public class CustomExtensionWebJobsStartup : IWebJobsStartup
{
    public void Configure(IWebJobsBuilder builder)
    {
        builder.AddExtension<CustomExtensionConfigProvider>();
    }
}

Of course, usually we need more. We might want to configure some options or register additional services. In such cases it's good practice to extract all this code into an extension method.

public static class CustomExtensionWebJobsBuilderExtensions
{
    public static IWebJobsBuilder AddCustomExtension(this IWebJobsBuilder builder,
        Action<CustomExtensionOptions> configure)
    {
        // Null checks removed for brevity
        ...

        builder.AddCustomExtension();
        builder.Services.Configure(configure);

        return builder;
    }

    public static IWebJobsBuilder AddCustomExtension(this IWebJobsBuilder builder)
    {
        // Null checks removed for brevity
        ...

        builder.AddExtension<CustomExtensionExtensionConfigProvider>()
            .ConfigureOptions<CustomExtensionOptions>((config, path, options) =>
            {
                ...
            });

        builder.Services.AddSingleton<ICustomExtensionService, CustomExtensionService>();

        return builder;
    }
}

This way it remains clear that the IWebJobsStartup implementation's responsibility is to add the extension, while all the necessary plumbing is separated.

[assembly: WebJobsStartup(typeof(CustomExtensionWebJobsStartup))]

public class CustomExtensionWebJobsStartup : IWebJobsStartup
{
    public void Configure(IWebJobsBuilder builder)
    {
        builder.AddCustomExtension();
    }
}

Triggers, Inputs and Outputs

After the Azure Functions runtime discovers the extensions, the real fun begins. The runtime will call the Initialize method of each IExtensionConfigProvider implementation, which allows for registering bindings. This is done by adding binding rules to the ExtensionConfigContext. At the root of every rule there is an attribute. In most scenarios having two attributes is sufficient: one for input and output bindings, and one for trigger bindings.

[Extension("CustomExtension")]
internal class CustomExtensionExtensionConfigProvider : IExtensionConfigProvider
{
    ...

    public void Initialize(ExtensionConfigContext context)
    {
        // Null checks removed for brevity
        ...

        var inputOutputBindingRule = context.AddBindingRule<CustomExtensionAttribute>();
        ...

        var triggerAttributeBindingRule = context.AddBindingRule<CustomExtensionTriggerAttribute>();
        ...
    }
}

Input and output bindings are not that complicated. For specific types you can use the BindToInput method, which takes either an IConverter or a Func which needs to be able to convert from the attribute to the specified type (the attribute is the source of configuration here) and can take IExtensionConfigProvider as a parameter (which is important, as the IExtensionConfigProvider instance is capable of taking dependencies from DI through its constructor). A more generic approach is using the BindToValueProvider method, where you provide a function which gets the type as a parameter and gets a chance to figure out how to provide the value. There is a special case: binding to IAsyncCollector and ICollector (which are special "out" types in Azure Functions). For this purpose you must call BindToCollector, but the approach is similar to BindToInput. Specific implementations will vary depending on what is being bound, but proper design of the available bindings is crucial, as bindings are the place to deal with asynchrony. Additionally, bindings (thanks to the possibility of using IAsyncCollector and IAsyncConverter) give you an alternative way to deal with asynchrony. A rough sketch of both registration styles follows below.
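
To make this more tangible, here is a sketch reusing the CustomExtension naming from the diagram. The item type, the ICustomExtensionService, and its methods are all illustrative assumptions, not a real API.

// All names below are illustrative. The assumption is that the extension has
// an ICustomExtensionService (registered in IWebJobsStartup) doing the actual I/O.
public class CustomExtensionItem
{
    public string Value { get; set; }
}

public interface ICustomExtensionService
{
    CustomExtensionItem GetItem(CustomExtensionAttribute attribute);
    Task SendAsync(CustomExtensionItem item, CancellationToken cancellationToken);
}

internal class CustomExtensionAsyncCollector : IAsyncCollector<CustomExtensionItem>
{
    private readonly ICustomExtensionService _service;

    public CustomExtensionAsyncCollector(ICustomExtensionService service)
    {
        _service = service;
    }

    public Task AddAsync(CustomExtensionItem item,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        // Items could also be buffered here and delivered in FlushAsync.
        return _service.SendAsync(item, cancellationToken);
    }

    public Task FlushAsync(CancellationToken cancellationToken = default(CancellationToken))
    {
        return Task.CompletedTask;
    }
}

With those in place, the registrations inside Initialize could look like below.

var inputOutputBindingRule = context.AddBindingRule<CustomExtensionAttribute>();

// Input binding: the value is built directly from the attribute,
// which carries the configuration.
inputOutputBindingRule.BindToInput<CustomExtensionItem>(
    attribute => _customExtensionService.GetItem(attribute)
);

// Output binding: bind to a collector which knows how to deliver items.
inputOutputBindingRule.BindToCollector<CustomExtensionItem>(
    attribute => new CustomExtensionAsyncCollector(_customExtensionService)
);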

Trigger bindings are different. The first step is simple: you need to call BindToTrigger with an instance of an ITriggerBindingProvider implementation as a parameter. After that you only need to implement ITriggerBindingProvider, ITriggerBinding and IListener (it's kind of like drawing an owl). This is not a straightforward job and strongly depends on what is going to be your feed. Discussing this "in theory" wouldn't do much good, so I'm planning on writing a separate post on trigger binding in a specific context.

Some real code

I don't like doing things in a vacuum. This is why, when I started learning about Azure Functions extensibility, I decided that I needed some real context. Here you can find the side project I'm working on.

The final release of ASP.NET Core 2.2 is getting closer, and I've started devoting some time to getting familiar with the new features and changes. One of the new features, which extends ASP.NET Core diagnostics capabilities, is health checks. Health checks aim at providing a way for an external monitor (for example a container orchestrator) to quickly determine the application's condition. This is a valuable and long-awaited feature. It has also been changing significantly from preview to preview. I wanted to grasp the current state of this feature, and I couldn't think of a better way to do it than in the context of a concrete scenario.

Setting up the stage

One of the external resources which is often a dependency of my applications is Redis. Most of the time this comes as a result of using a distributed cache.

public class Startup
{
    public IConfiguration Configuration { get; }

    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDistributedRedisCache(options =>
        {
            options.Configuration = Configuration["DistributedRedisCache:Configuration"];
            options.InstanceName = Configuration["DistributedRedisCache:InstanceName"];
        });

        ...
    }

    ...
}

In order to make sure that an application which uses a Redis-backed distributed cache works as it's supposed to, two things should be checked: the presence of the configuration, and the availability of the Redis instance. Those are the two health checks I wanted to create.

In general, health checks are represented by HealthCheckRegistration, which requires providing an instance of a health check implementation or a factory for such instances. Working with HealthCheckRegistration all the time would result in a lot of repetitive, boilerplate code. This is probably the reason why there are two sets of extensions for registering health checks: HealthChecksBuilderDelegateExtensions and HealthChecksBuilderAddCheckExtensions. Those extensions divide health checks into two types.

"Lambda" health checks

The simpler type is the "lambda" check. To register such a check one needs to call AddCheck (or AddAsyncCheck for the asynchronous version), which takes a function as a parameter. This function needs to return a HealthCheckResult. An instance of HealthCheckResult carries a passed or failed value (created through the Passed and Failed factory methods) and optionally a description, an exception, and additional data. The value will be used to determine the status of the health check, while the optional properties will end up in logs.

This type of health check is perfect for small pieces of logic; in my case, for checking if the configuration is present.

public class Startup
{
    ...

    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddHealthChecks()
            .AddCheck("redis-configuration", () => 
                (String.IsNullOrWhiteSpace(Configuration["DistributedRedisCache:Configuration"])
                 || String.IsNullOrWhiteSpace(Configuration["DistributedRedisCache:InstanceName"]))
                ? HealthCheckResult.Failed("Missing Redis distributed cache configuration!")
                : HealthCheckResult.Passed()
            );

        ...
    }

    ...
}

But often health checks can't be that simple. They may have complex logic, they may require services available from DI, or (worst case scenario) they may need to manage state. In all those cases it's better to use the second type of checks.

"Concrete Class" health checks

An alternative way to represent a health check is through a class implementing the IHealthCheck interface. This gives a lot more flexibility, starting with access to DI. This ability is crucial for the second health check I wanted to implement: the Redis instance connection check. Checking the connection requires acquiring RedisCacheOptions and initializing a ConnectionMultiplexer based on those options. If the connection is created with the AbortOnConnectFail property set to false, ConnectionMultiplexer.IsConnected can be used at any time to get the current status. The code below shows exactly that.

public class RedisCacheHealthCheck : IHealthCheck
{
    private readonly RedisCacheOptions _options;
    private readonly ConnectionMultiplexer _redis;

    public RedisCacheHealthCheck(IOptions<RedisCacheOptions> optionsAccessor)
    {
        ...

        _options = optionsAccessor.Value;
        _redis = ConnectionMultiplexer.Connect(GetConnectionOptions());
    }

    public Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        return Task.FromResult(_redis.IsConnected
            ? HealthCheckResult.Passed()
            : HealthCheckResult.Failed("Redis connection not working!")
        );
    }

    private ConfigurationOptions GetConnectionOptions()
    {
        ConfigurationOptions redisConnectionOptions = (_options.ConfigurationOptions != null)
            ? ConfigurationOptions.Parse(_options.ConfigurationOptions.ToString())
            : ConfigurationOptions.Parse(_options.Configuration);

        redisConnectionOptions.AbortOnConnectFail = false;

        return redisConnectionOptions;
    }
}

A health check in the form of a class can be registered with a call to AddCheck<T>.

public class Startup
{
    ...

    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddHealthChecks()
            .AddCheck("redis-configuration", ...)
            .AddCheck<RedisCacheHealthCheck>("redis-connection");

        ...
    }

    ...
}

What about the state? Attempting to monitor the Redis connection calls for some state management. ConnectionMultiplexer is designed to be shared and reused; it will react properly to the connection failing and will restore it when possible. It would be good not to create a new one every time. Unfortunately, the current implementation does exactly that, because registering a health check with AddCheck<T> makes it transient. There is no obvious way to make a health check a singleton (which is probably a good thing, as it makes it harder to commit typical singleton mistakes). It can be done by manually creating a HealthCheckRegistration (a sketch follows below), and one can also play with registering the IHealthCheck implementation directly with DI. But I would suggest a different approach: moving the state management to a dedicated service and keeping the health check transient. This keeps the implementation clean and the responsibilities separated. In the case of the Redis connection check, this means extracting the ConnectionMultiplexer creation.
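
Before getting to that, here is roughly what the manual singleton registration could look like (a sketch): the health check is registered in DI as a singleton and resolved through the factory overload of HealthCheckRegistration, which is then passed to IHealthChecksBuilder.Add.

// A sketch of the manual alternative (not the approach taken here).
services.AddSingleton<RedisCacheHealthCheck>();

services.AddHealthChecks()
    .Add(new HealthCheckRegistration(
        "redis-connection",
        serviceProvider => serviceProvider.GetRequiredService<RedisCacheHealthCheck>(),
        failureStatus: null,
        tags: null
    ));

With that alternative covered, back to the suggested approach. The extracted connection service starts with an interface.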

public interface IRedisCacheHealthCheckConnection
{
    bool IsConnected { get; }
}

public class RedisCacheHealthCheckConnection : IRedisCacheHealthCheckConnection
{
    private readonly RedisCacheOptions _options;
    private readonly ConnectionMultiplexer _redis;

    public bool IsConnected => _redis.IsConnected;

    public RedisCacheHealthCheckConnection(IOptions<RedisCacheOptions> optionsAccessor)
    {
        ...

        _options = optionsAccessor.Value;
        _redis = ConnectionMultiplexer.Connect(GetConnectionOptions());
    }

    private ConfigurationOptions GetConnectionOptions()
    {
        ...
    }
}

Now the health check can be refactored to use the new service.

public class RedisCacheHealthCheck : IHealthCheck
{
    private readonly IRedisCacheHealthCheckConnection _redisConnection;

    public RedisCacheHealthCheck(IRedisCacheHealthCheckConnection redisConnection)
    {
        _redisConnection = redisConnection ?? throw new ArgumentNullException(nameof(redisConnection));
    }

    public Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        return Task.FromResult(_redisConnection.IsConnected
            ? HealthCheckResult.Passed()
            : HealthCheckResult.Failed("Redis connection not working!")
        );
    }
}

The last remaining thing is adding the new service to DI as a singleton.

public class Startup
{
    ...

    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddSingleton<IRedisCacheHealthCheckConnection, RedisCacheHealthCheckConnection>();

        services.AddHealthChecks()
            .AddCheck("redis-configuration", ...)
            .AddCheck<RedisCacheHealthCheck>("redis-connection");

        ...
    }

    ...
}

The health checks are now in place; what remains is an endpoint which an external monitor will be able to use to determine the state.

Exposing health checks

Setting up an endpoint is as simple as calling UseHealthChecks and providing a path.

public class Startup
{
    ...

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        ...

        app.UseHealthChecks("/healthcheck");

        ...
    }
}

But on many occasions we may want health checks to be divided into groups. The most typical example would be separating them into liveness and readiness. ASP.NET Core makes this easy as well. One of the optional parameters for health check registration is a tags enumeration. Those tags can later be used to filter health checks while configuring an endpoint.

public class Startup
{
    ...

    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddHealthChecks()
            .AddCheck("redis-configuration", ..., tags: new[] { "liveness" })
            .AddCheck<RedisCacheHealthCheck>("redis-connection", tags: new[] { "readiness" });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        ...

        app.UseHealthChecks("/healthcheck-liveness", new HealthCheckOptions
        {
            Predicate = (check) => check.Tags.Contains("liveness")
        });

        app.UseHealthChecks("/healthcheck-readiness", new HealthCheckOptions
        {
            Predicate = (check) => check.Tags.Contains("readiness")
        });

        ...
    }
}

Anything else?

Not all health checks have the same meaning. By default a failed check results in an unhealthy status, but that may not be desired. While registering a health check, the meaning of failure can be redefined. Considering the scenario in this post, it's possible for an application which is using a Redis-backed distributed cache to still work if the Redis instance is unavailable. If that's the case, the failure of the connection check shouldn't mean that the status is unhealthy; more likely the status should be considered degraded.

public class Startup
{
    ...

    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddHealthChecks()
            .AddCheck("redis-configuration", ..., tags: new[] { "liveness" })
            .AddCheck<RedisCacheHealthCheck>("redis-connection",
                failureStatus: HealthStatus.Degraded, tags: new[] { "readiness" });
    }

    ...
}

This change will be reflected in the status value returned by the endpoint.

So, for a theoretically simple mechanism, health checks provide a lot of flexibility. I can see them being very useful in a number of scenarios.

Two weeks ago I gave a talk at the DevConf conference about HTTP/2. During that talk I mentioned protocol-based content delivery as a potential way to optimize an application for HTTP/2 users without degrading performance for HTTP/1 users at the same time. After the talk I was asked for some code examples. I decided that it was a great opportunity to spin up the ASP.NET Core 2.2 preview (which brings HTTP/2 to ASP.NET Core) and play with it.

The idea behind protocol-based content delivery is to branch application logic (usually rendering) based on the protocol of the current request (in ASP.NET Core this information is available through the HttpRequest.Protocol property) in order to employ a different optimization strategy. In ASP.NET Core MVC there are multiple levels on which we can branch, depending on how precise we want to be and how far we want to separate the logic for different protocols. I'll go through those levels, starting from the bottom.

Conditional rendering

The simplest thing that can be done is putting an if into a view. This allows rendering different blocks of HTML for different protocols. In order to avoid reaching for the HttpRequest.Protocol property directly from the view, the protocol check can be exposed through an HtmlHelper extension method.

public static class HtmlHelperHttp2Extensions
{
    public static bool IsHttp2(this IHtmlHelper htmlHelper)
    {
        if (htmlHelper == null)
        {
            throw new ArgumentNullException(nameof(htmlHelper));
        }

        return (htmlHelper.ViewContext.HttpContext.Request.Protocol == "HTTP/2");
    }
}

This provides an easy solution for having different bundling strategies for HTTP/2 (in order to make better use of multiplexing).

@if (Html.IsHttp2())
{
    <script src="~/js/core-bundle.min.js" asp-append-version="true"></script>
    <script src="~/js/page-bundle.min.js" asp-append-version="true"></script>
}
else
{
    <script src="~/js/site-bundle.min.js" asp-append-version="true"></script>
}

This works, but in many cases it doesn't give the right development-time experience. It requires context switching between C# and markup, and it breaks standard HTML parsers. This is especially true if we start mixing if statements with Tag Helpers which have a similar purpose.

@if (Html.IsHttp2())
{
    <environment include="Development">
        <script src="~/js/core-bundle.js" asp-append-version="true"></script>
        <script src="~/js/page-bundle.js" asp-append-version="true"></script>
    </environment>
    <environment exclude="Development">
        <script src="~/js/core-bundle.min.js" asp-append-version="true"></script>
        <script src="~/js/page-bundle.min.js" asp-append-version="true"></script>
    </environment>
}
else
{
    <environment include="Development">
        <script src="~/js/site-bundle.js" asp-append-version="true"></script>
    </environment>
    <environment exclude="Development">
        <script src="~/js/site-bundle.min.js" asp-append-version="true"></script>
    </environment>
}

Do you see the EnvironmentTagHelper in the code above? It also serves as an if statement, but a much cleaner one. Wouldn't it be nice to have one for the protocol as well? It's quite easy to create: just a couple of checks and a call to TagHelperOutput.SuppressOutput().

public class ProtocolTagHelper : TagHelper
{
    [HtmlAttributeNotBound]
    [ViewContext]
    public ViewContext ViewContext { get; set; }

    public override int Order => -1000;

    public string Include { get; set; }

    public string Exclude { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        ...

        output.TagName = null;

        string currentProtocol = ViewContext.HttpContext.Request.Protocol;

        bool shouldSuppressOutput = false;

        if (!String.IsNullOrWhiteSpace(Exclude))
        {
            shouldSuppressOutput = Exclude.Trim().Equals(currentProtocol, StringComparison.OrdinalIgnoreCase);
        }

        if (!shouldSuppressOutput && !String.IsNullOrWhiteSpace(Include))
        {
            shouldSuppressOutput = !Include.Trim().Equals(currentProtocol, StringComparison.OrdinalIgnoreCase);
        }

        if (shouldSuppressOutput)
        {
            output.SuppressOutput();
        }
    }
}

This is a very simple version of ProtocolTagHelper (a more powerful one can be found here), but even with this simplest version it's possible to write the following markup.

<protocol include="HTTP/2">
    <environment include="Development">
        <script src="~/js/core-bundle.js" asp-append-version="true"></script>
        <script src="~/js/page-bundle.js" asp-append-version="true"></script>
    </environment>
    <environment exclude="Development">
        <script src="~/js/core-bundle.min.js" asp-append-version="true"></script>
        <script src="~/js/page-bundle.min.js" asp-append-version="true"></script>
    </environment>
</protocol>
<protocol exclude="HTTP/2">
    <environment include="Development">
        <script src="~/js/site-bundle.js" asp-append-version="true"></script>
    </environment>
    <environment exclude="Development">
        <script src="~/js/site-bundle.min.js" asp-append-version="true"></script>
    </environment>
</protocol>

Doesn't this look much cleaner than an if statement? This is not the only thing that a Tag Helper can help with. Another thing one might want to do is applying a different CSS class to an element depending on the protocol. Different CSS classes may result in loading different sprite files (again, to better utilize multiplexing in the case of HTTP/2). Here it would be ugly to copy the entire element just to have a different value in the class attribute. Luckily, Tag Helpers can target attributes and change the values of other attributes.

[HtmlTargetElement(Attributes = "asp-http2-class")]
public class Http2ClassTagHelper : TagHelper
{
    [HtmlAttributeNotBound]
    [ViewContext]
    public ViewContext ViewContext { get; set; }

    [HtmlAttributeName("asp-http2-class")]
    public string Http2Class { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        if (!String.IsNullOrWhiteSpace(Http2Class) && (ViewContext.HttpContext.Request.Protocol == "HTTP/2"))
        {
            output.Attributes.SetAttribute("class", Http2Class);
        }

        output.Attributes.RemoveAll("asp-http2-class");
    }
}

The above Tag Helper will target any element with an asp-http2-class attribute, and if the protocol of the current request is HTTP/2, it will use the asp-http2-class attribute value as the class attribute value. The code below will render different markup for different protocols.

<h1 class="http1" asp-http2-class="http2">Conditional Rendering</h1>

Thanks to those two Tag Helpers a lot can be achieved, but if there are a lot of differences the code may become unreadable. In such cases cleaner separation is required, and to achieve it, branching needs to be done at a higher level.

View discovery

If the views for HTTP/2 and HTTP/1 are significantly different, it would be nice if ASP.NET Core MVC could simply use a different view based on the protocol. ASP.NET Core MVC determines which view should be used through the view discovery process, which can be customized by using a custom IViewLocationExpander. As the name implies, an implementation of IViewLocationExpander can expand the list of discovered view locations, for example by appending an "-h2" suffix to the ones discovered by the default convention.

public class Http2ViewLocationExpander : IViewLocationExpander
{
    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context,
        IEnumerable<string> viewLocations)
    {
        context.Values.TryGetValue("PROTOCOL_SUFFIX", out string protocolSuffix);

        if (String.IsNullOrWhiteSpace(protocolSuffix))
        {
            return viewLocations;
        }

        return ExpandViewLocationsCore(viewLocations, protocolSuffix);
    }

    private IEnumerable<string> ExpandViewLocationsCore(IEnumerable<string> viewLocations,
        string protocolSuffix)
    {
        foreach (var location in viewLocations)
        {
            yield return location.Insert(location.LastIndexOf('.'), protocolSuffix);
            yield return location;
        }
    }

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        context.Values["PROTOCOL_SUFFIX"] =
            (context.ActionContext.HttpContext.Request.Protocol == "HTTP/2") ? "-h2" : null;
    }
}

An instance of the IViewLocationExpander implementation needs to be added to RazorViewEngineOptions.

services.Configure<RazorViewEngineOptions>(options =>
    options.ViewLocationExpanders.Add(new Http2ViewLocationExpander())
);

After that, for requests over HTTP/2, the view locations list might look like the one below.

/Views/Demo/ViewDiscovery-h2.cshtml
/Views/Demo/ViewDiscovery.cshtml
/Views/Shared/ViewDiscovery-h2.cshtml
/Views/Shared/ViewDiscovery.cshtml
/Pages/Shared/ViewDiscovery-h2.cshtml
/Pages/Shared/ViewDiscovery.cshtml

If a view dedicated to HTTP/2 (with the "-h2" suffix) exists, it will be chosen instead of the "regular" one. Of course this is only one of the possible conventions; there are other options, like for example subfolders (see the sketch below).
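
For example, a subfolder-based convention could be achieved by rewriting ExpandViewLocationsCore along these lines (a sketch; the "h2" folder name is arbitrary).

private IEnumerable<string> ExpandViewLocationsCore(IEnumerable<string> viewLocations)
{
    foreach (var location in viewLocations)
    {
        // Turns e.g. "/Views/{1}/{0}.cshtml" into "/Views/{1}/h2/{0}.cshtml".
        yield return location.Insert(location.LastIndexOf('/') + 1, "h2/");
        yield return location;
    }
}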

Action Selection

There is one more level on which one may want to branch: the business logic level. If the business logic is supposed to be different depending on the protocol, it might be best to have separate actions. For this purpose ASP.NET Core MVC provides IActionConstraint. All that needs to be implemented is an attribute with an Accept method, which will return true or false based on the current protocol.

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
public class Http2OnlyAttribute : Attribute, IActionConstraint
{
    public int Order { get; set; }

    public bool Accept(ActionConstraintContext context)
    {
        return (context.RouteContext.HttpContext.Request.Protocol == "HTTP/2");
    }
}

This attribute can be applied to an action in order to make it HTTP/2 only (a second attribute making actions HTTP/1 only can be created in the same way), as shown in the sketch below.
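
Assuming such a mirrored Http1OnlyAttribute exists, a controller with protocol-specific actions sharing a single action name could look like this (names are illustrative).

public class DemoController : Controller
{
    // Selected when the request comes over HTTP/2.
    [Http2Only]
    [ActionName("ActionSelection")]
    public IActionResult ActionSelectionHttp2()
    {
        // HTTP/2 specific business logic would go here.
        return View("ActionSelection");
    }

    // Selected for the remaining protocols
    // (assumes the mirrored Http1OnlyAttribute).
    [Http1Only]
    public IActionResult ActionSelection()
    {
        // HTTP/1 specific business logic would go here.
        return View();
    }
}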

All the mentioned mechanisms create a comprehensive set of tools for dealing with protocol-based content delivery in various scenarios. I've gathered them all in a single demo project, which you can find here.

If you've been reading my blog, you probably know I like to explore web standards in the context of ASP.NET Core. I've written about the Push API, Clear Site Data, Server Timing, and Client Hints. Recently I learned that the Reporting API has made its way to Chrome 69 as an experimental feature, and I couldn't resist.

The Reporting API aims at providing a framework which allows web developers to associate reporting endpoints with an origin. Different browser features can use these endpoints to deliver specific but consistent reports. If you are familiar with Content Security Policy, you might notice a similarity to the report-uri directive, as it provides this exact capability for CSP. In fact, Chrome already provides a way to integrate the two through the report-to directive.

This all looks very interesting, so I quickly ran Chrome with the feature enabled and started playing with it.

chrome.exe --enable-features=Reporting

Configuring endpoints

The reporting endpoints are configured through the Report-To response header. This header's value should be a JSON object which contains a list of endpoints and additional properties. It can be represented by the following class.

public class ReportToHeaderValue
{
    [JsonProperty(PropertyName = "group", NullValueHandling = NullValueHandling.Ignore)]
    public string Group { get; set; }

    [JsonProperty(PropertyName = "include_subdomains", NullValueHandling = NullValueHandling.Ignore)]
    public bool? IncludeSubdomains { get; set; }

    [JsonProperty(PropertyName = "max_age")]
    public uint MaxAge { get; set; } = 10886400;

    [JsonProperty(PropertyName = "endpoints")]
    public IList<ReportToEndpoint> Endpoints { get; } = new List<ReportToEndpoint>();

    public override string ToString()
    {
        return JsonConvert.SerializeObject(this);
    }
}

The include_subdomains and max_age properties should be familiar, as they are often seen in modern web standards. The include_subdomains property is optional, and if absent its value defaults to false. The max_age property is required, and can also be used to remove a specific group by using 0 as the value.

The group property allows for logical grouping of endpoints (if it's not provided, the group is named default). The group name can be used in places where browser features provide integration with the Reporting API, in order to make them send reports to a group other than the default one. For example, the previously mentioned report-to directive takes a group name as a parameter.

The most important part are the endpoints. First of all they provide the URLs, but that's not all.

public struct ReportToEndpoint
{
    [JsonProperty(PropertyName = "url")]
    public string Url { get; }

    [JsonProperty(PropertyName = "priority", NullValueHandling = NullValueHandling.Ignore)]
    public uint? Priority { get; }

    [JsonProperty(PropertyName = "weight", NullValueHandling = NullValueHandling.Ignore)]
    public uint? Weight { get; }

    public ReportToEndpoint(string url, uint? priority = null, uint? weight = null)
        : this()
    {
        Url = url ?? throw new ArgumentNullException(nameof(url));
        Priority = priority;
        Weight = weight;
    }
}

The optional priority and weight properties are there to help the browser select which of the URLs a report should be sent to. The priority groups URLs into a single failover class, while the weight tells what fraction of the traffic within the failover class should be sent to a specific URL.

To check the configured groups and failover classes for different origins one can use chrome://net-internals/#reporting.

Chrome Net Internals - Reporting
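
Putting the configuration together, a minimal sketch of emitting the Report-To header from an ASP.NET Core middleware delegate could look like below (the endpoint URL is a placeholder).

app.Use(async (context, next) =>
{
    ReportToHeaderValue reportTo = new ReportToHeaderValue();
    reportTo.Endpoints.Add(new ReportToEndpoint("https://example.com/report-to"));

    // Group is not set, so the endpoint will belong to the default group.
    context.Response.Headers["Report-To"] = reportTo.ToString();

    await next();
});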

Receiving reports

Before a report can be received, it must be triggered. The previously mentioned CSP violations are easy to cause, but they wouldn't be interesting here. Currently Chrome supports a couple of other report types, like deprecations, interventions, and Network Error Logging. The deprecations look interesting from a development pipeline perspective, as they can be used in QA environments to detect early that one of the features used by an application is about to be removed. For demo purposes, attempting a synchronous XHR request is a great example.

function triggerDeprecation() {
    // Synchronous XHR
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/xhr', false);
    xhr.send();
};

The report consists of a type, age, originating URL, user agent, and body. The body contains attributes specific to the report type.

public class Report
{
    [JsonProperty(PropertyName = "type")]
    public string Type { get; set; }

    [JsonProperty(PropertyName = "age")]
    public uint Age { get; set; }

    [JsonProperty(PropertyName = "url")]
    public string Url { get; set; }

    [JsonProperty(PropertyName = "user_agent")]
    public string UserAgent { get; set; }

    [JsonProperty(PropertyName = "body")]
    public IDictionary<string, object> Body { get; set; }
}

Typically the browser queues reports (as they can be triggered in very rapid succession) and sends all of them at a later, convenient moment. This means that the code receiving the POST request should be expecting a list. One approach can be a middleware (a controller action would also work, but it would require a custom formatter, as reports are delivered with the media type application/reports+json).

public class ReportToEndpointMiddleware
{
    private readonly RequestDelegate _next;

    public ReportToEndpointMiddleware(RequestDelegate next)
    {
        _next = next ?? throw new ArgumentNullException(nameof(next));
    }

    public async Task Invoke(HttpContext context)
    {
        if (IsReportsRequest(context.Request))
        {
            List<Report> reports = null;

            using (StreamReader requestBodyReader = new StreamReader(context.Request.Body))
            {
                using (JsonReader requestBodyJsonReader = new JsonTextReader(requestBodyReader))
                {
                    JsonSerializer serializer = new JsonSerializer();

                    reports = serializer.Deserialize<List<Report>>(requestBodyJsonReader);
                }
            }

            ...

            context.Response.StatusCode = StatusCodes.Status204NoContent;
        }
        else
        {
            await _next(context);
        }
    }

    private bool IsReportsRequest(HttpRequest request)
    {
        return HttpMethods.IsPost(request.Method)
               && (request.ContentType == "application/reports+json");
    }
}

The last remaining thing is to do something useful with the reports. In the simplest scenario they can be logged.

public class ReportToEndpointMiddleware
{
    ...
    private readonly ILogger _logger;

    public ReportToEndpointMiddleware(RequestDelegate next, ILogger<ReportToEndpointMiddleware> logger)
    {
        ...
        _logger = logger;
    }

    public async Task Invoke(HttpContext context)
    {
        if (IsReportsRequest(context.Request))
        {
            ...

            LogReports(reports);

            context.Response.StatusCode = StatusCodes.Status204NoContent;
        }
        else
        {
            await _next(context);
        }
    }

    ...

    private void LogReports(List<Report> reports)
    {
        if (reports != null)
        {
            foreach (Report report in reports)
            {
                switch (report.Type.ToLowerInvariant())
                {
                    case "deprecation":
                        _logger.LogWarning("Deprecation reported for file {SourceFile}"
                            + " (Line: {LineNumber}, Column: {ColumnNumber}): '{Message}'",
                            report.Body["sourceFile"], report.Body["lineNumber"],
                            report.Body["columnNumber"], report.Body["message"]);
                        break;
                    default:
                        _logger.LogInformation("Report of type '{ReportType}' received.", report.Type);
                        break;
                }
            }
        }
    }
}

Conclusion

The Reporting API looks really promising. It's early days, but it can already be used in your CI pipelines and QA environments to broaden the range of issues which are automatically detected. If you want to better understand what Chrome provides, I suggest this article. I have also made my demo available on GitHub.
