One of my talks is about native real-time technologies like WebSockets, Server-Sent Events, and Web Push in ASP.NET Core. In that talk, I briefly go through the subject of scaling applications based on those technologies. For Web Push, I mention that it can be scaled with the help of microservices or functions, because it doesn't require an active HTTP request/response. After the talk, I'm frequently asked for samples of how to do that. This post is me finally putting one together.

Sending Web Push Notifications from Azure Functions

I've written a series of posts about Web Push Notifications. In general, it's good to understand one level deeper than the one you're working at (so I encourage you to go through that series), but for this post, it's enough to know that you can grab a client from NuGet. This client is all that is needed to send a notification from an Azure Function, so let's quickly create one.

The function will use Azure Cosmos DB as a data source. Whenever an insert or update happens in NotificationsCollection, the function will be triggered.

public static class SendNotificationFunction
{
    [FunctionName("SendNotification")]
    public static void Run(
        [CosmosDBTrigger("PushNotifications", "NotificationsCollection",
        LeaseCollectionName = "NotificationsLeaseCollection", CreateLeaseCollectionIfNotExists = true,
        ConnectionStringSetting = "CosmosDBConnection")]
        IReadOnlyList<PushMessage> notifications)
    {

    }
}

You may notice that it's using the extension for the Azure Cosmos DB Trigger described in my previous post, which allows taking a collection of POCOs as an argument.

The second piece of data kept in Cosmos DB is the subscriptions. The function will need to query all of them in order to send the notifications. This is best done with DocumentClient.

public static class SendNotificationFunction
{
    private static readonly Uri _subscriptionsCollectionUri =
        UriFactory.CreateDocumentCollectionUri("PushNotifications", "SubscriptionsCollection");

    [FunctionName("SendNotification")]
    public static void Run(
        ...,
        [CosmosDB("PushNotifications", "SubscriptionsCollection", ConnectionStringSetting = "CosmosDBConnection")]
        DocumentClient client)
    {
        if (notifications != null)
        {
            IDocumentQuery<PushSubscription> subscriptionQuery =
                client.CreateDocumentQuery<PushSubscription>(_subscriptionsCollectionUri, new FeedOptions
                {
                    EnableCrossPartitionQuery = true,
                    MaxItemCount = -1
                }).AsDocumentQuery();
        }
    }
}

Now the function is almost ready to send notifications. The last missing part is PushServiceClient. An instance of PushServiceClient internally holds an instance of HttpClient, which means that the improper instantiation antipattern must be taken into consideration. The simplest approach is to create a static instance.

public static class SendNotificationFunction
{
    ...
    private static readonly PushServiceClient _pushClient = new PushServiceClient
    {
        DefaultAuthentication = new VapidAuthentication(
            "<Application Server Public Key>",
            "<Application Server Private Key>")
        {
            Subject = "<Subject>"
        }
    };

    [FunctionName("SendNotification")]
    public static void Run(...)
    {
        ...
    }
}

Sending notifications is now nothing more than calling RequestPushMessageDeliveryAsync for every combination of subscription and notification.

public static class SendNotificationFunction
{
    ...

    [FunctionName("SendNotification")]
    public static async Task Run(...)
    {
        if (notifications != null)
        {
            ...

            while (subscriptionQuery.HasMoreResults)
            {
                foreach (PushSubscription subscription in await subscriptionQuery.ExecuteNextAsync())
                {
                    foreach (PushMessage notification in notifications)
                    {
                        // Fire-and-forget
                        _pushClient.RequestPushMessageDeliveryAsync(subscription, notification);
                    }
                }
            }
        }
    }
}

And that's it. A very simple Azure Function taking care of broadcast notifications.

Improvements, Improvements ...

The code above can be better. I'm not thinking about the Cosmos DB part (although there are ways to better utilize partitioning or to implement a fan-out approach with the help of durable functions). The usage of PushServiceClient is far from perfect. A static instance makes configuration awkward and may cause issues, because a static HttpClient doesn't respect DNS changes. In the past, I've written about using HttpClientFactory to make HttpClient injectable in Azure Functions. The exact same approach can be used here. I will skip the boilerplate code (it's described in that post) and focus only on the things specific to PushServiceClient.

The first thing is an attribute. It will allow taking a PushServiceClient instance as a function parameter. It will also help solve the configuration problem by providing properties holding the names of the settings with the needed values.

[Binding]
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class PushServiceAttribute : Attribute
{
    [AppSetting]
    public string PublicKeySetting { get; set; }

    [AppSetting]
    public string PrivateKeySetting { get; set; }

    [AppSetting]
    public string SubjectSetting { get; set; }
}

The second thing is a converter which will use HttpClientFactory and an instance of PushServiceAttribute to create a properly initialized PushServiceClient. Luckily, PushServiceClient has a constructor which takes an HttpClient instance as a parameter, so it's quite simple.

internal class PushServiceClientConverter : IConverter<PushServiceAttribute, PushServiceClient>
{
    private readonly IHttpClientFactory _httpClientFactory;

    public PushServiceClientConverter(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory ?? throw new ArgumentNullException(nameof(httpClientFactory));
    }

    public PushServiceClient Convert(PushServiceAttribute attribute)
    {
        return new PushServiceClient(_httpClientFactory.CreateClient())
        {
            DefaultAuthentication = new VapidAuthentication(attribute.PublicKeySetting, attribute.PrivateKeySetting)
            {
                Subject = attribute.SubjectSetting
            }
        };
    }
}

The last thing is an IExtensionConfigProvider which will add a binding rule for PushServiceAttribute and tell the Azure Functions runtime to use PushServiceClientConverter to provide the PushServiceClient instance.

[Extension("PushService")]
internal class PushServiceExtensionConfigProvider : IExtensionConfigProvider
{
    private readonly IHttpClientFactory _httpClientFactory;

    public PushServiceExtensionConfigProvider(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public void Initialize(ExtensionConfigContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException(nameof(context));
        }

        //PushServiceClient Bindings
        var bindingAttributeBindingRule = context.AddBindingRule<PushServiceAttribute>();
        bindingAttributeBindingRule.AddValidator(ValidateVapidAuthentication);

        bindingAttributeBindingRule.BindToInput<PushServiceClient>(typeof(PushServiceClientConverter), _httpClientFactory);
    }

    ...
}

I've skipped the code which reads and validates the settings. If you're interested, you can find it on GitHub.
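
For completeness, wiring the extension into the Functions host follows the same IWebJobsStartup pattern as in the HttpClientFactory post mentioned earlier. Below is a minimal sketch of that boilerplate (the startup class name is illustrative; the full version is in the repository):

[assembly: WebJobsStartup(typeof(PushServiceWebJobsStartup))]

public class PushServiceWebJobsStartup : IWebJobsStartup
{
    public void Configure(IWebJobsBuilder builder)
    {
        // Registers the extension config provider shown above
        builder.AddExtension<PushServiceExtensionConfigProvider>();

        // Makes IHttpClientFactory available for PushServiceClientConverter
        builder.Services.AddHttpClient();
    }
}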

With the above extension in place, the static instance of PushServiceClient is no longer needed.

public static class SendNotificationFunction
{
    ...

    [FunctionName("SendNotification")]
    public static async Task Run(
        ...,
        [PushService(PublicKeySetting = "ApplicationServerPublicKey",
        PrivateKeySetting = "ApplicationServerPrivateKey", SubjectSetting = "ApplicationServerSubject")]
        PushServiceClient pushServiceClient)
    {
        if (notifications != null)
        {
            ...

            while (subscriptionQuery.HasMoreResults)
            {
                foreach (PushSubscription subscription in await subscriptionQuery.ExecuteNextAsync())
                {
                    foreach (PushMessage notification in notifications)
                    {
                        // Fire-and-forget
                        pushServiceClient.RequestPushMessageDeliveryAsync(subscription, notification);
                    }
                }
            }
        }
    }
}

You Don't Have to Reimplement This Yourself

The PushServiceClient binding extension is available on NuGet. I have also pushed the demo project to GitHub. Hopefully it will help you get the best out of Web Push Notifications.

This is my fourth post about Azure Functions extensibility. So far I've written about triggers, inputs, and outputs (maybe I should devote some more time to outputs, but that's something for another time). In this post, I want to focus on something I haven't mentioned yet - extending existing extensions. As usual, I intend to do it in a practical context.

Common problem with the Azure Cosmos DB Trigger

There is a common problem with the Azure Cosmos DB Trigger for Azure Functions. Let's consider the sample from the documentation.

public class ToDoItem
{
    public string Id { get; set; }
    public string Description { get; set; }
}
public static class CosmosTrigger
{
    [FunctionName("CosmosTrigger")]
    public static void Run([CosmosDBTrigger(
        databaseName: "ToDoItems",
        collectionName: "Items",
        ConnectionStringSetting = "CosmosDBConnection",
        LeaseCollectionName = "leases",
        CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents, 
        ILogger log)
    {
        if (documents != null && documents.Count > 0)
        {
            log.LogInformation($"Documents modified: {documents.Count}");
            log.LogInformation($"First document Id: {documents[0].Id}");
        }
    }
}

It looks OK, but it avoids the problem by not being interested in the content of the documents. The trigger has a limitation: it works only with IReadOnlyList<Document>. This means that accessing values may not be as easy as one could expect. There is a GetPropertyValue method, which helps with retrieving a single property value.

public static class CosmosTrigger
{
    [FunctionName("CosmosTrigger")]
    public static void Run([CosmosDBTrigger(
        ...)]IReadOnlyList<Document> documents, 
        ILogger log)
    {
        foreach(Document document in documents)
        {
            log.LogInformation($"ToDo: {document.GetPropertyValue<string>("Description")}");
        }
    }
}

This is not perfect. What if there are a lot of properties? If what is truly needed is the entire POCO, then a conversion from Document to the POCO must be added. One way is to cast through dynamic.

public static class CosmosTrigger
{
    [FunctionName("CosmosTrigger")]
    public static void Run([CosmosDBTrigger(
        ...)]IReadOnlyList<Document> documents, 
        ILogger log)
    {
        foreach(Document document in documents)
        {
            ToDoItem item = (dynamic)document;
            log.LogInformation($"ToDo: {item.Description}");
            ...
        }
    }
}

Another way is to use JSON deserialization.

public static class CosmosTrigger
{
    [FunctionName("CosmosTrigger")]
    public static void Run([CosmosDBTrigger(
        ...)]IReadOnlyList<Document> documents, 
        ILogger log)
    {
        foreach(Document document in documents)
        {
            ToDoItem item = JsonConvert.DeserializeObject<ToDoItem>(document.ToString());
            log.LogInformation($"ToDo: {item.Description}");
            ...
        }
    }
}

All this is still not perfect, especially if there are many functions like this in a project. What would be perfect is being able to take a collection of POCOs as an argument.

public static class CosmosTrigger
{
    [FunctionName("CosmosTrigger")]
    public static void Run([CosmosDBTrigger(
        ...)]IReadOnlyList<ToDoItem> items, 
        ILogger log)
    {
        foreach(ToDoItem item in items)
        {
            log.LogInformation($"ToDo: {item.Description}");
            ...
        }
    }
}

Can this be achieved?

Extending the Azure Cosmos DB Trigger

As you may already know, the heart of Azure Functions extensibility is ExtensionConfigContext. It allows registering bindings by adding binding rules, but it also allows adding converters. Typically converters are added as part of the binding rule in the original extension, but this is not the only way. The truth is that the converter manager is centralized and shared across extensions. That means it's possible to add a converter for a type which is supported by a different extension. The problem with the Azure Cosmos DB Trigger can be solved by a converter from IReadOnlyList<Document> to IReadOnlyList<POCO>. But first, some standard boilerplate code is needed.

[assembly: WebJobsStartup(typeof(CosmosDBExtensionsWebJobsStartup))]

public class CosmosDBExtensionsWebJobsStartup : IWebJobsStartup
{
    public void Configure(IWebJobsBuilder builder)
    {
        builder.AddExtension<CosmosDBExtensionExtensionsConfigProvider>();
    }
}
[Extension("CosmosDBExtensions")]
internal class CosmosDBExtensionExtensionsConfigProvider : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }


    }
}

Now it's time to add the converter. There are two methods for adding converters: AddConverter<TSource, TDestination> and AddOpenConverter<TSource, TDestination>. The AddConverter<TSource, TDestination> can be used to add a converter from one concrete type to another, while AddOpenConverter<TSource, TDestination> can be used to add a converter with support for generics. But how can TDestination be defined when the Initialize method is not generic? For this purpose, the SDK provides a sentinel type, OpenType, which serves as a placeholder for a generic type.

[Extension("CosmosDBExtensions")]
internal class CosmosDBExtensionExtensionsConfigProvider : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        ...

        context.AddOpenConverter<IReadOnlyList<Document>, IReadOnlyList<OpenType>>(typeof(GenericDocumentConverter<>));
    }
}

The converter itself is just an implementation of IConverter<TInput, TOutput>.

internal class GenericDocumentConverter<T> : IConverter<IReadOnlyList<Document>, IReadOnlyList<T>>
{
    public IReadOnlyList<T> Convert(IReadOnlyList<Document> input)
    {
        List<T> output = new List<T>(input.Count);

        foreach(Document item in input)
        {
            output.Add(Convert(item));
        }

        return output.AsReadOnly();
    }

    private static T Convert(Document document)
    {
        return JsonConvert.DeserializeObject<T>(document.ToString());
    }
}

That's it. This extension will allow the Azure Cosmos DB Trigger to be used with POCO collections.

This pattern can be reused with any other extension.

I was prompted to write this post by this question. In general, the question is about using ASP.NET Core's built-in authorization to restrict access to a middleware. In ASP.NET Core the authorization mechanism is well exposed for MVC (through AuthorizeAttribute), but for middleware it's a manual job (at least for now). The reason for that might be the fact that there isn't that much terminal middleware out there.

This was not the first time I'd received this question, so I quickly responded with the typical code to achieve the task. But after some thinking, I've decided to put a detailed answer here.

Policy-based authorization

At its core, authorization in ASP.NET Core is based on policies. The other available ways of specifying requirements (roles, claims) are in the end evaluated to policies. This means that it is enough to be able to validate a policy for the current user. This can be easily done with the help of IAuthorizationService. All one needs is a policy name and the HttpContext. The following authorization middleware gets the job done.

public class AuthorizationMiddleware
{
    private readonly RequestDelegate _next;
    private readonly string _policyName;

    public AuthorizationMiddleware(RequestDelegate next, string policyName)
    {
        _next = next;
        _policyName = policyName;
    }

    public async Task Invoke(HttpContext httpContext, IAuthorizationService authorizationService)
    {
        AuthorizationResult authorizationResult =
            await authorizationService.AuthorizeAsync(httpContext.User, null, _policyName);

        if (!authorizationResult.Succeeded)
        {
            await httpContext.ChallengeAsync();
            return;
        }

        await _next(httpContext);
    }
}

Of course, the middleware registration can be encapsulated in an extension method for easier use.

public static class AuthorizationApplicationBuilderExtensions
{
    public static IApplicationBuilder UseAuthorization(this IApplicationBuilder app, string policyName)
    {
        // Null checks removed for brevity
        ...

        return app.UseMiddleware<AuthorizationMiddleware>(policyName);
    }
}

The only thing left is to put this middleware in front of the middleware which should have restricted access (it can be placed multiple times if multiple policies need to be validated).

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddAuthorization(options =>
        {
            options.AddPolicy("PolicyName", ...);
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        ...

        app.UseAuthentication();

        app.Map("/policy-based-authorization", branchedApp =>
        {
            branchedApp.UseAuthorization("PolicyName");

            ...
        });

        ...
    }
}

Simple and effective. Goal achieved, right?

Simple authorization, roles and schemes

Despite being my go-to solution, the above approach is far from perfect. It doesn't expose the full capabilities and is not user-friendly. Something more similar to AuthorizeAttribute would be a lot better. This means making full use of policies, roles, and schemes. At first, this might sound like some serious work, but the truth is that all the hard work has already been done for us; we just need to go beyond Microsoft.AspNetCore.Authorization and use some services from the Microsoft.AspNetCore.Authorization.Policy package. But before that can be done, a user-friendly way of defining restrictions is needed. This is no challenge, as ASP.NET Core has an interface for that.

internal class AuthorizationOptions : IAuthorizeData
{
    public string Policy { get; set; }

    public string Roles { get; set; }

    public string AuthenticationSchemes { get; set; }
}

This options class is very similar to AuthorizeAttribute. This isn't a surprise as AuthorizeAttribute also implements IAuthorizeData.

Implementing IAuthorizeData allows the class to be transformed into an AuthorizationPolicy with the help of IAuthorizationPolicyProvider.

public class AuthorizationMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IAuthorizeData[] _authorizeData;
    private readonly IAuthorizationPolicyProvider _policyProvider;
    private AuthorizationPolicy _authorizationPolicy;

    public AuthorizationMiddleware(RequestDelegate next,
        IAuthorizationPolicyProvider policyProvider,
        IOptions<AuthorizationOptions> authorizationOptions)
    {
        // Null checks removed for brevity
        _next = next;
        _authorizeData = new[] { authorizationOptions.Value };
        _policyProvider = policyProvider;
    }

    public async Task Invoke(HttpContext httpContext, IPolicyEvaluator policyEvaluator)
    {
        if (_authorizationPolicy is null)
        {
            _authorizationPolicy =
                await AuthorizationPolicy.CombineAsync(_policyProvider, _authorizeData);
        }

        ...

        await _next(httpContext);
    }

    ...
}

The policy needs to be evaluated. This requires two calls to IPolicyEvaluator, one for authentication and one for authorization.

public class AuthorizationMiddleware
{
    ...

    public async Task Invoke(HttpContext httpContext, IPolicyEvaluator policyEvaluator)
    {
        ...

        AuthenticateResult authenticateResult =
            await policyEvaluator.AuthenticateAsync(_authorizationPolicy, httpContext);
        PolicyAuthorizationResult authorizeResult =
            await policyEvaluator.AuthorizeAsync(_authorizationPolicy, authenticateResult, httpContext, null);

        if (authorizeResult.Challenged)
        {
            await ChallengeAsync(httpContext);
            return;
        }
        else if (authorizeResult.Forbidden)
        {
            await ForbidAsync(httpContext);
            return;
        }

        await _next(httpContext);
    }

    ...
}

The last thing is handling the Challenged and Forbidden scenarios. There are ready-to-use HttpContext extension methods which do that, but it's important to remember to make use of the schemes if they have been provided.

public class AuthorizationMiddleware
{
    ...

    private async Task ChallengeAsync(HttpContext httpContext)
    {
        if (_authorizationPolicy.AuthenticationSchemes.Count > 0)
        {
            foreach (string authenticationScheme in _authorizationPolicy.AuthenticationSchemes)
            {
                await httpContext.ChallengeAsync(authenticationScheme);
            }
        }
        else
        {
            await httpContext.ChallengeAsync();
        }
    }

    private async Task ForbidAsync(HttpContext httpContext)
    {
        if (_authorizationPolicy.AuthenticationSchemes.Count > 0)
        {
            foreach (string authenticationScheme in _authorizationPolicy.AuthenticationSchemes)
            {
                await httpContext.ForbidAsync(authenticationScheme);
            }
        }
        else
        {
            await httpContext.ForbidAsync();
        }
    }
}

Now the registration method can be modified. An important thing to note here is that not setting any of the AuthorizationOptions properties will result in using the default policy (the same as decorating an action or controller with [Authorize]). This case might be worth an overload.

public static class AuthorizationApplicationBuilderExtensions
{
    public static IApplicationBuilder UseAuthorization(this IApplicationBuilder app)
    {
        return app.UseAuthorization(new AuthorizationOptions());
    }

    public static IApplicationBuilder UseAuthorization(this IApplicationBuilder app,
        AuthorizationOptions authorizationOptions)
    {
        if (app == null)
        {
            throw new ArgumentNullException(nameof(app));
        }

        if (authorizationOptions == null)
        {
            throw new ArgumentNullException(nameof(authorizationOptions));
        }

        return app.UseMiddleware<AuthorizationMiddleware>(Options.Create(authorizationOptions));
    }
}

This makes all the capabilities provided by AuthorizeAttribute available to the middleware pipeline. If the application is not using MVC, it's important to remember to add the policy services.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...

        services.AddAuthorization(options =>
        {
            options.AddPolicy("PolicyName", ...);
        })
        .AddAuthorizationPolicyEvaluator();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        ...

        app.UseAuthentication();

        app.Map("/simple-authorization", branchedApp =>
        {
            branchedApp.UseAuthorization();

            ...
        });

        app.Map("/role-based-authorization", branchedApp =>
        {
            branchedApp.UseAuthorization(new AuthorizationOptions { Roles = "Employee" });

            ...
        });

        app.Map("/policy-based-authorization", branchedApp =>
        {
            branchedApp.UseAuthorization(new AuthorizationOptions { Policy = "EmployeeOnly" });

            ...
        });

        ...
    }
}

All the code above is a copy-paste solution for when one wants to restrict a middleware from the outside, but it can also be easily adapted to be put inside a middleware (which is what I ultimately decided to do in the case of my Server-Sent Events middleware).
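
For illustration, a minimal sketch of the "inside a middleware" variant might look like the code below - the middleware evaluates its own policy via IAuthorizationService before doing its actual work (the class and policy names are illustrative).

public class ProtectedMiddleware
{
    private readonly RequestDelegate _next;

    public ProtectedMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext httpContext, IAuthorizationService authorizationService)
    {
        // The middleware guards itself instead of relying on an outer authorization middleware
        AuthorizationResult authorizationResult =
            await authorizationService.AuthorizeAsync(httpContext.User, null, "PolicyName");

        if (!authorizationResult.Succeeded)
        {
            await httpContext.ChallengeAsync();
            return;
        }

        // ... the middleware's actual work goes here ...

        await _next(httpContext);
    }
}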

Small note about the future

The state of authorization in the middleware pipeline should be expected to change. ASP.NET Core 3.0 is supposed to make Endpoint Routing available outside of MVC, and it comes with support for authorization. In ASP.NET Core 2.2 there is already an authorization middleware (quite similar to the one above) which restricts endpoints based on IAuthorizeData from their metadata. This means that in 3.0 it may be possible to define a restricted endpoint pointing to a middleware.

HttpClient is not as straightforward to use as it may seem. A lot has been written about the improper instantiation antipattern. Long story short, you shouldn't be creating a new instance of HttpClient whenever you need one; you should be reusing instances. To help avoid this antipattern, many technologies provide guidance for efficient usage of HttpClient, and Azure Functions are no different. The current recommendation is to use a static client, something like the code below.

public static class PeriodicHealthCheckFunction
{
    private static HttpClient _httpClient = new HttpClient();

    [FunctionName("PeriodicHealthCheckFunction")]
    public static async Task Run(
        [TimerTrigger("0 */5 * * * *")]TimerInfo healthCheckTimer,
        ILogger log)
    {
        string status = await _httpClient.GetStringAsync("https://localhost:5001/healthcheck");

        log.LogInformation($"Health check performed at: {DateTime.UtcNow} | Status: {status}");
    }
}

This approach avoids the socket exhaustion problem and brings performance benefits, but it creates another problem: a static HttpClient doesn't respect DNS changes. Sometimes this might be irrelevant, but sometimes it might lead to serious issues. Is there a better approach?

HttpClientFactory

One of the areas where both sides of the HttpClient instantiation problem are often encountered is ASP.NET Core. The two most common use cases for ASP.NET Core are web applications and microservices. The socket exhaustion problem can be deadly for those kinds of applications, and they often suffer from DNS changes not being respected (for example when external dependencies use blue-green deployments). This is why the ASP.NET team decided to create an opinionated factory for creating HttpClient instances. The main responsibility of that factory is managing the lifetime of HttpMessageHandler instances in a way which avoids these issues. It has been released with .NET Core 2.1 as the Microsoft.Extensions.Http package and depends only on the DI, logging, and options primitives.
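
In ASP.NET Core the factory is typically used along these lines - a minimal sketch, with the service class name being illustrative:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers IHttpClientFactory and a default client
        services.AddHttpClient();

        services.AddTransient<HealthCheckService>();
    }
}

public class HealthCheckService
{
    private readonly HttpClient _httpClient;

    public HealthCheckService(IHttpClientFactory httpClientFactory)
    {
        // The factory manages the underlying HttpMessageHandler lifetime
        _httpClient = httpClientFactory.CreateClient();
    }
}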

HttpClientFactory and Azure Functions 2.0

Azure Functions 2.0 runs on .NET Core 2.1 and already uses the DI, logging, and options primitives internally. This means that all HttpClientFactory dependencies are fulfilled and using it should be simple. A good idea might be creating an extension which will allow binding an HttpClient instance to a function parameter decorated with an attribute.

[Binding]
[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
public sealed class HttpClientFactoryAttribute : Attribute
{ }

If you have read my previous post on Azure Functions 2.0 extensibility, you already know that turning such an attribute into an input binding requires an IExtensionConfigProvider implementation. In this case, the implementation needs to take IHttpClientFactory from DI and create a binding rule which will map HttpClientFactoryAttribute to input parameters of type HttpClient. The binding will simply call IHttpClientFactory.CreateClient to obtain an instance.

[Extension("HttpClientFactory")]
internal class HttpClientFactoryExtensionConfigProvider : IExtensionConfigProvider
{
    private readonly IHttpClientFactory _httpClientFactory;

    public HttpClientFactoryExtensionConfigProvider(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public void Initialize(ExtensionConfigContext context)
    {
        var bindingAttributeBindingRule = context.AddBindingRule<HttpClientFactoryAttribute>();
        bindingAttributeBindingRule.BindToInput<HttpClient>((httpClientFactoryAttribute) =>
        {
            return _httpClientFactory.CreateClient();
        });
    }
}

Of course, the IExtensionConfigProvider and IHttpClientFactory need to be registered with the Azure Functions runtime. This is exactly what the below implementation of IWebJobsStartup does (it assumes that the Microsoft.Extensions.Http package has been added).

[assembly: WebJobsStartup(typeof(HttpClientFactoryWebJobsStartup))]

public class HttpClientFactoryWebJobsStartup : IWebJobsStartup
{
    public void Configure(IWebJobsBuilder builder)
    {
        builder.AddExtension<HttpClientFactoryExtensionConfigProvider>();

        builder.Services.AddHttpClient();
        builder.Services.Configure<HttpClientFactoryOptions>(options => options.SuppressHandlerScope = true);
    }
}

That's all. Now the original function can be refactored to use an HttpClient instance created by the factory.

public static class PeriodicHealthCheckFunction
{
    [FunctionName("PeriodicHealthCheckFunction")]
    public static async Task Run(
        [TimerTrigger("0 */5 * * * *")]TimerInfo healthCheckTimer,
        [HttpClientFactory]HttpClient httpClient,
        ILogger log)
    {
        string status = await httpClient.GetStringAsync("https://localhost:5001/healthcheck");

        log.LogInformation($"Health check performed at: {DateTime.UtcNow} | Status: {status}");
    }
}

This is much better. The function is now using the factory, which removes the potential issues of a static instance. At the same time, testability has been improved, because HttpClient is now an injected dependency.
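
To illustrate that gain, here is a sketch of a unit test (assuming xUnit and the NullLogger from Microsoft.Extensions.Logging.Abstractions; the fake handler and test names are illustrative):

internal class FakeHealthCheckHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Returns a canned response instead of hitting the real endpoint
        return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("Healthy")
        });
    }
}

public class PeriodicHealthCheckFunctionTests
{
    [Fact]
    public async Task Run_CompletesWithFakeHandler()
    {
        var httpClient = new HttpClient(new FakeHealthCheckHandler());

        // TimerInfo isn't used by the function body, so null is good enough here
        await PeriodicHealthCheckFunction.Run(null, httpClient, NullLogger.Instance);
    }
}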

Possibilities

The above implementation does the simplest possible thing - it uses HttpClientFactory to manage a standard instance of HttpClient. That's not all that HttpClientFactory offers. It allows pre-configuring clients and supports named clients. It even comes with ready-to-use integration with Polly, which makes it really simple to add resilience and transient-fault handling capabilities. All of that requires only a few changes to the code above. You don't have to start from scratch; you can find that code on GitHub.
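
As an example, a named, pre-configured client with a Polly retry policy could be registered in the IWebJobsStartup shown earlier roughly like this (a sketch assuming the Microsoft.Extensions.Http.Polly package; the client name and retry policy are illustrative, and the binding would also need to pass that name to IHttpClientFactory.CreateClient):

builder.Services.AddHttpClient("healthcheck", client =>
{
    client.BaseAddress = new Uri("https://localhost:5001/");
})
// Retries transient HTTP errors (5xx, 408, HttpRequestException) with exponential backoff
.AddTransientHttpErrorPolicy(policyBuilder =>
    policyBuilder.WaitAndRetryAsync(3, retryAttempt => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt))));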

This is my second post about Azure Functions extensibility. At the end of the previous one I mentioned that I'd been lucky enough to find a real context for my learning adventure. This context came up in a chat with colleagues from a "friendly" project. They were looking to integrate Azure Functions with RethinkDB. The obvious question is: why? Cosmos DB is a natural choice for a greenfield project which wants to use a NoSQL database and a serverless approach. Unfortunately, not all projects out there are greenfield. There are a lot of brownfield projects, and this "friendly" project of mine fits into that category. They simply can't drop RethinkDB without a huge investment (which is planned for a later time), so they've decided to use a lift-and-shift approach for migrating existing functionality to Azure. As part of that migration RethinkDB will be containerized and deployed to Azure Container Instances. The desire to be able to use Azure Functions with this setup shouldn't be a surprise, as it adds a lot of value to the migration. The goal is to have the architecture below available.

[Architecture diagram: Browser → HTTP Request → API Management → Function Apps; RethinkDB Container Instance → Changefeed → Function App; the RethinkDB container image comes from a Container Registry.]

I've offered to work on the Azure Functions extensions which would make this integration possible. So far I've been able to create a trigger binding, and in this post I would like to share what I've learned.

Trigger mapping and configuration

If you've read my previous post, you already know that at the root of every binding (input, output, or trigger) there is an attribute. This attribute maps the binding to a function parameter and provides a way to configure it. Take a look at the example below.

[Binding]
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class RethinkDbTriggerAttribute : Attribute
{

    public string DatabaseName { get; private set; }

    public string TableName { get; private set; }

    [AppSetting]
    public string HostnameSetting { get; set; }

    ...

    public bool IncludeTypes { get; set; } = false;

    public RethinkDbTriggerAttribute(string databaseName, string tableName)
    {
        ...

        DatabaseName = databaseName;
        TableName = tableName;
    }
}

The properties of the attribute can be split into three groups.

First are the read-only properties set through the constructor. Those represent aspects which need to be configured for every function separately. In the case of the RethinkDB trigger these are the DatabaseName and TableName properties, as it's reasonable to expect that every function will be observing a different table.

The second group are standard properties. Those represent function-specific but optional settings. In the case of the RethinkDB trigger this is represented by the IncludeTypes property.

The last group is the most interesting one: properties decorated with AppSettingAttribute. Those allow pointing to values from application settings, which makes them perfect for settings which might be shared between many functions. This is also why they are usually optional - to leave the possibility of a global option which will be used by all functions and overridden only by specific ones. In the case of the RethinkDB trigger this is the HostnameSetting property, as most functions will be connecting to the same server or cluster.

Providing binding

The root (the attribute) is in place, so the trigger binding can be built on top of it. This requires implementations of ITriggerBindingProvider and ITriggerBinding.

The implementation of ITriggerBindingProvider will be called when functions are being discovered. At this point the TryCreateAsync method will be called by the framework to acquire an ITriggerBinding instance. The most important part of the passed context is the information about the parameter for which that instance should be created. It's important to remember that this doesn't have to be a parameter with the trigger attribute. This is why the parameter's custom attributes should be searched for the trigger one, and if it's missing, a Task containing a null result should be returned. The fact that ITriggerBindingProvider is looking for the trigger attribute instance also makes it the natural place to read settings from the attribute. This is why, usually, the constructor takes IConfiguration as a parameter, along with trigger-specific options and services (grabbing those from DI can be done as part of the trigger registration).

internal class RethinkDbTriggerAttributeBindingProvider : ITriggerBindingProvider
{
    ...

    private readonly Task<ITriggerBinding> _nullTriggerBindingTask =
        Task.FromResult<ITriggerBinding>(null);

    public RethinkDbTriggerAttributeBindingProvider(IConfiguration configuration,
        RethinkDbOptions options, IRethinkDBConnectionFactory rethinkDBConnectionFactory,
        INameResolver nameResolver, ILoggerFactory loggerFactory)
    {
        ...
    }

    public Task<ITriggerBinding> TryCreateAsync(TriggerBindingProviderContext context)
    {
        ...

        ParameterInfo parameter = context.Parameter;

        RethinkDbTriggerAttribute triggerAttribute =
            parameter.GetCustomAttribute<RethinkDbTriggerAttribute>(inherit: false);
        if (triggerAttribute is null)
        {
            return _nullTriggerBindingTask;
        }

        string triggerHostname = ResolveTriggerAttributeHostname(triggerAttribute);
        Task<Connection> triggerConnectionTask =
            _rethinkDBConnectionFactory.GetConnectionAsync(triggerHostname);

        TableOptions triggerTableOptions = ResolveTriggerTableOptions(triggerAttribute);

        return Task.FromResult<ITriggerBinding>(
            new RethinkDbTriggerBinding(parameter, triggerConnectionTask,
                triggerTableOptions, triggerAttribute.IncludeTypes)
        );
    }

    private string ResolveTriggerAttributeHostname(RethinkDbTriggerAttribute triggerAttribute)
    {
        string hostname = null;
        if (!String.IsNullOrEmpty(triggerAttribute.HostnameSetting))
        {
            hostname = _configuration.GetConnectionStringOrSetting(triggerAttribute.HostnameSetting);

            if (String.IsNullOrEmpty(hostname))
            {
                throw new InvalidOperationException("...");
            }

            return hostname;
        }

        hostname = _options.Hostname;

        if (String.IsNullOrEmpty(hostname))
        {
            throw new InvalidOperationException("...");
        }

        return hostname;
    }

    private TableOptions ResolveTriggerTableOptions(RethinkDbTriggerAttribute triggerAttribute)
    {
        return new TableOptions(
            ResolveAttributeValue(triggerAttribute.DatabaseName),
            ResolveAttributeValue(triggerAttribute.TableName)
        );
    }

    private string ResolveAttributeValue(string attributeValue)
    {
        return _nameResolver.Resolve(attributeValue) ?? attributeValue;
    }
}

The created ITriggerBinding instance is used only by a single function. It has several responsibilities:

  • Defining the type of value returned by the trigger.
  • Binding to the different possible types for which the trigger can provide a value.
  • Creating a listener for the events which should result in function execution.

Defining the type of value returned by the trigger is simple; it only requires implementing the TriggerValueType property.

Binding to different types requires implementing the BindingDataContract property and the BindAsync method (which also means an ITriggerData implementation). This part is optional, as additional converters and value providers can be added through binding rules while registering the trigger.

Listener creation is the key aspect of the trigger binding - its heart. The listener instance should be fully initialized with the function-specific settings and the ITriggeredFunctionExecutor available from the context.

internal class RethinkDbTriggerBinding : ITriggerBinding
{
    ...

    private readonly Task<ITriggerData> _emptyBindingDataTask =
        Task.FromResult<ITriggerData>(new TriggerData(null, new Dictionary<string, object>()));

    public Type TriggerValueType => typeof(DocumentChange);

    public IReadOnlyDictionary<string, Type> BindingDataContract { get; } =
        new Dictionary<string, Type>();

    public RethinkDbTriggerBinding(ParameterInfo parameter, Task<Connection>
        rethinkDbConnectionTask, TableOptions rethinkDbTableOptions, bool includeTypes)
    {
        ...
    }

    ...

    public Task<ITriggerData> BindAsync(object value, ValueBindingContext context)
    {
        return _emptyBindingDataTask;
    }

    public Task<IListener> CreateListenerAsync(ListenerFactoryContext context)
    {
        ...

        return Task.FromResult<IListener>(
            new RethinkDbTriggerListener(context.Executor, _rethinkDbConnectionTask,
                _rethinkDbTable, _includeTypes)
        );
    }

    ...
}

The listener implementation is important enough to get its own section.

Listening for events

For those with ASP.NET Core experience, the listener implementation may look familiar. The reason is that the IListener interface resembles IHostedService. This is a useful observation: it allows using established patterns for dealing with asynchrony and long-running tasks, like BackgroundService. I've implemented an IHostedService listening to a RethinkDB changefeed in the past, so I've simply reused that knowledge here. The important part is to remember to call ITriggeredFunctionExecutor.TryExecuteAsync when a new change arrives.

internal class RethinkDbTriggerListener : IListener
{
    ...

    private Task _listenerTask;
    private CancellationTokenSource _listenerStoppingTokenSource;

    public RethinkDbTriggerListener(ITriggeredFunctionExecutor executor,
        Task<Connection> rethinkDbConnectionTask, Table rethinkDbTable,
        bool includeTypes)
    {
        ...
    }

    ...

    public void Cancel()
    {
        StopAsync(CancellationToken.None).Wait();
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _listenerStoppingTokenSource = new CancellationTokenSource();
        _listenerTask = ListenAsync(_listenerStoppingTokenSource.Token);

        return _listenerTask.IsCompleted ? _listenerTask : Task.CompletedTask;
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        if (_listenerTask == null)
        {
            return;
        }

        try
        {
            _listenerStoppingTokenSource.Cancel();
        }
        finally
        {
            await Task.WhenAny(_listenerTask,
                Task.Delay(Timeout.Infinite, cancellationToken));
        }

    }

    public void Dispose()
    { }

    private async Task ListenAsync(CancellationToken listenerStoppingToken)
    {
        Connection rethinkDbConnection =
            (_rethinkDbConnectionTask.Status == TaskStatus.RanToCompletion)
            ? _rethinkDbConnectionTask.Result : (await _rethinkDbConnectionTask);

        Cursor<DocumentChange> changefeed = await _rethinkDbTable.Changes()
            .OptArg("include_types", _includeTypes)
            .RunCursorAsync<DocumentChange>(rethinkDbConnection, listenerStoppingToken);

        while (!listenerStoppingToken.IsCancellationRequested
            && (await changefeed.MoveNextAsync(listenerStoppingToken)))
        {
            await _executor.TryExecuteAsync(
                new TriggeredFunctionData() { TriggerValue = changefeed.Current },
                CancellationToken.None
            );
        }

        changefeed.Close();
    }
}

Now all the needed classes are ready.

Trigger registration

In my previous post I mentioned that bindings are registered inside the Initialize method of an IExtensionConfigProvider implementation. The IExtensionConfigProvider implementation is also a gateway to DI; all needed dependencies can be injected through the constructor. This includes services provided by the Azure Functions runtime, binding-specific global options, and binding-specific services registered in IWebJobsStartup.

The registration itself boils down to creating a binding rule based on the trigger attribute, mapping that rule to the ITriggerBindingProvider, and adding any additional converters.

[Extension("RethinkDB")]
internal class RethinkDbExtensionConfigProvider : IExtensionConfigProvider
{
    ...

    public RethinkDbExtensionConfigProvider(IConfiguration configuration,
        IOptions<RethinkDbOptions> options,
        IRethinkDBConnectionFactory rethinkDBConnectionFactory,
        INameResolver nameResolver,
        ILoggerFactory loggerFactory)
    {
        ...
    }

    public void Initialize(ExtensionConfigContext context)
    {
        ...

        var triggerAttributeBindingRule = context.AddBindingRule<RethinkDbTriggerAttribute>();
        triggerAttributeBindingRule.BindToTrigger<DocumentChange>(
            new RethinkDbTriggerAttributeBindingProvider(_configuration, _options,
                _rethinkDBConnectionFactory, _nameResolver, _loggerFactory)
        );
        triggerAttributeBindingRule.AddConverter<..., ...>(...);
    }
}

Trigger usage

A properly registered trigger can be used in exactly the same way as the built-in ones (or the extensions provided by Microsoft). One just needs to apply the trigger attribute to the proper parameter.

public static class RethinkDbTriggeredFunction
{
    [FunctionName("RethinkDbTriggeredFunction")]
    public static void Run([RethinkDbTrigger(
        databaseName: "Demo_AspNetCore_Changefeed_RethinkDB",
        tableName: "ThreadStats",
        HostnameSetting = "RethinkDbHostname")]DocumentChange change,
        ILogger log)
    {
        log.LogInformation("[ThreadStats Change Received] " + change.GetNewValue<ThreadStats>());
    }
}

This function will be triggered every time a change occurs in the specified table.
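
The ThreadStats POCO itself isn't shown in this post; a minimal hypothetical shape (just enough for the log statement above) could look like this:

public class ThreadStats
{
    // Hypothetical properties - the real document shape lives in the demo project
    public string Id { get; set; }

    public int TotalThreads { get; set; }

    public override string ToString()
    {
        return $"Id: {Id}, TotalThreads: {TotalThreads}";
    }
}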
