The Server Timing API piqued my interest back in 2017. I've always been a promoter of a data-driven approach to non-functional requirements, and I've always warned the teams I worked with that it's very easy to not see the forest for the trees. The Server Timing API brought a convenient way to communicate backend performance information to developer tools in the browser. It enabled access to back-end and front-end performance data in one place and within the context of actual interaction with the application. I experimented with the technology together with a couple of teams where the culture of frontend and backend engineers working closely together was strong. The results were great, which pushed me to create a small library to simplify the onboarding of the Server Timing API in ASP.NET Core applications. I've been using that library with multiple teams through the years and, judging by the download numbers, I wasn't the only one. There were even some contributions to the library.

Some time ago, an issue was raised asking if the library could also support Azure Functions using the isolated worker process mode. I couldn't think of a good reason why not - it was a great idea. Of course, I couldn't add the support directly to the existing library. Yes, the isolated worker process mode of Azure Functions shares a lot of concepts with ASP.NET Core, but the technicalities are different. So, I decided to create a separate library. While doing so, I also decided to put some notes around those concepts into a blog post in the hope that someone might find them useful in the future.

So, first things first, what is the isolated worker process mode and why are we talking about it?

Azure Functions Execution Modes

There are two execution modes in Azure Functions: in-process and isolated worker process. The in-process mode means that the function code runs in the same process as the host. This is the approach that has been taken for .NET functions from the beginning (while functions in other languages have been running in a separate process since version 2). It enabled Azure Functions to provide unique benefits for .NET functions (like rich bindings and direct access to SDKs), but at a price. The .NET functions could only use the same .NET version as the host, and dependency conflicts were common. This is why Azure Functions has fully embraced the isolated worker process mode for .NET functions in version 4, and developers now have a choice of which mode they want to use. Sometimes this choice is simple (if you want to use non-LTS versions of .NET, the isolated worker process is your only option), sometimes it is more nuanced (for example, isolated worker process functions have slightly longer cold starts). You can take a look at the full list of differences here.

When it comes to simplifying the onboarding of the Server Timing API, the isolated worker process mode is the only option, as it supports a crucial feature - custom middleware registration.

Custom Middleware

The ability to register a custom middleware is crucial for enabling capabilities like Server Timing API because it allows for injecting logic into the invocation pipeline.

In isolated worker process Azure Functions, the invocation pipeline is represented by FunctionExecutionDelegate. Although it would be possible to work with FunctionExecutionDelegate directly (by wrapping it with parent invocations), Azure Functions provides a convenient extension method, UseMiddleware(), which enables registering inline or factory-based middleware. What is missing in comparison to ASP.NET Core is convention-based middleware. This might be surprising at first, as the convention-based approach is probably the most popular one in ASP.NET Core. So, for those of you who are not familiar with the factory-based approach, it requires the middleware class to implement a specific interface. In the case of Azure Functions it's IFunctionsWorkerMiddleware (in ASP.NET Core it's IMiddleware). The factory-based middleware is prepared to be registered with different lifetimes, so the Invoke method takes not only the context but also the delegate representing the next middleware in the pipeline as a parameter. Similarly to ASP.NET Core, we are given the option to run code before and after the function executes, by wrapping it around the call to the next middleware delegate.

internal class ServerTimingMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Pre-function execution

        await next(context);

        // Post-function execution
    }
}

The aforementioned UseMiddleware() extension method should be called inside ConfigureFunctionsWorkerDefaults as part of the host preparation steps. This method registers the middleware as a singleton (so it has the same lifetime as convention-based middleware in ASP.NET Core). The middleware can be registered with a different lifetime, but that has to be done manually, which includes wrapping the invocation of FunctionExecutionDelegate. For those interested, I recommend checking the UseMiddleware() source code for inspiration.

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults(workerApplication =>
    {
        // Register middleware with the worker
        workerApplication.UseMiddleware<ServerTimingMiddleware>();
    })
    .Build();

host.Run();
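
For illustration, below is a minimal sketch (modeled on what the UseMiddleware() source does) of how the middleware could instead be registered with a scoped lifetime by wrapping FunctionExecutionDelegate manually. Treat it as a starting point rather than the library's actual registration code.

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults(workerApplication =>
    {
        // Wrap the pipeline manually instead of calling UseMiddleware<T>()
        workerApplication.Use(next => async context =>
        {
            // Resolve the middleware from the invocation-scoped service provider
            ServerTimingMiddleware middleware = context.InstanceServices.GetRequiredService<ServerTimingMiddleware>();

            await middleware.Invoke(context, next);
        });
    })
    .ConfigureServices(services =>
    {
        // Scoped, so a new middleware instance is created per function invocation
        services.AddScoped<ServerTimingMiddleware>();
    })
    .Build();

host.Run();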

All the valuable information about the invoked function and the invocation itself can be accessed through the FunctionContext class. There are also some extension methods available for it, which make it easier to work with. One such extension method is GetHttpResponseData(), which returns an instance of HttpResponseData if the function has been invoked by an HTTP trigger. This is where the HTTP response can be modified, for example by adding headers related to the Server Timing API.

internal class ServerTimingMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Pre-function execution

        await next(context);

        // Post-function execution
        HttpResponseData? response = context.GetHttpResponseData();
        if (response is not null)
        {
            response.Headers.Add(
                "Server-Timing",
                "cache;dur=300;desc=\"Cache\",sql;dur=900;desc=\"Sql Server\",fs;dur=600;desc=\"FileSystem\",cpu;dur=1230;desc=\"Total CPU\""
            );
        }
    }
}

To make this functional, the values for the header need to be gathered during the invocation, which means that there needs to be a shared service between the function and the middleware. It's time to bring dependency injection into the picture.

Dependency Injection

The support for dependency injection in isolated worker process Azure Functions is exactly what you would expect if you have been working with modern .NET. It's based on Microsoft.Extensions.DependencyInjection and supports all lifetime options. The option which might require clarification is the scoped lifetime. In Azure Functions, it matches a function execution lifetime, which is exactly what is needed for gathering values in the context of a single invocation.

var host = new HostBuilder()
    ...
    .ConfigureServices(s =>
    {
        s.AddScoped<IServerTiming, ServerTiming>();
    })
    .Build();

host.Run();
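
For completeness, here is a minimal sketch of what such a shared, invocation-scoped contract could look like. The type names and the ToString() formatting are assumptions derived from how the service is used in the snippets below; the actual library types may differ.

// Minimal sketch of the shared service (names and formatting are assumptions)
public record ServerTimingMetric(string Name, decimal? Duration = null, string? Description = null)
{
    // Renders a single metric in the Server-Timing header format, e.g. sql;dur=900;desc="Sql Server"
    public override string ToString() =>
        Name
        + (Duration is null ? String.Empty : $";dur={Duration}")
        + (Description is null ? String.Empty : $";desc=\"{Description}\"");
}

public interface IServerTiming
{
    ICollection<ServerTimingMetric> Metrics { get; }
}

internal class ServerTiming : IServerTiming
{
    public ICollection<ServerTimingMetric> Metrics { get; } = new List<ServerTimingMetric>();
}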

Functions that are using dependency injection must be implemented as instance methods. When using instance methods, each invocation will create a new instance of the function class. That means that all parameters passed into the constructor of the function class are scoped to that invocation. This makes usage of constructor-based dependency injection safe for services with scoped lifetime.

public class ServerTimingFunctions
{
    private readonly IServerTiming _serverTiming;

    public ServerTimingFunctions(IServerTiming serverTiming)
    {
        _serverTiming = serverTiming;
    }

    [Function("basic")]
    public HttpResponseData Basic([HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData request)
    {

        var response = request.CreateResponse(HttpStatusCode.OK);
        response.Headers.Add("Content-Type", "text/plain; charset=utf-8");

        _serverTiming.Metrics.Add(new ServerTimingMetric("cache", 300, "Cache"));
        _serverTiming.Metrics.Add(new ServerTimingMetric("sql", 900, "Sql Server"));
        _serverTiming.Metrics.Add(new ServerTimingMetric("fs", 600, "FileSystem"));
        _serverTiming.Metrics.Add(new ServerTimingMetric("cpu", 1230, "Total CPU"));

        response.WriteString("-- Demo.Azure.Functions.Worker.ServerTiming --");

        return response;
    }
}

The above statement is not true for the middleware. As I've already mentioned, the UseMiddleware() method registers the middleware as a singleton. So, even though the middleware is resolved for every invocation separately, it is always the same instance. This means that constructor-based dependency injection is safe only for services with a singleton lifetime. To properly use a service with a scoped or transient lifetime, we need to use the service locator approach. An invocation-scoped service locator is available to us under the FunctionContext.InstanceServices property.

internal class ServerTimingMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        ...

        // Post-function execution
        InvocationResult invocationResult = context.GetInvocationResult();

        HttpResponseData? response = invocationResult.Value as HttpResponseData;
        if (response is not null)
        {
            IServerTiming serverTiming = context.InstanceServices.GetRequiredService<IServerTiming>();
            response.Headers.Add("Server-Timing", String.Join(",", serverTiming.Metrics));
        }
    }
}

It Works! (And You Can Use It)

This way, by combining support for middleware and dependency injection, I've established the core functionality of my small library. It's out there on NuGet, so if you want to use Server Timing to communicate performance information to the consumers of your Azure Functions based API, you are welcome to use it. If you want to dig a little bit into the code (or maybe you have some suggestions or improvements in mind), it lives in the same repository as the ASP.NET Core one.

I'm quite an enthusiast of WebAssembly beyond the browser. It has already made its way into edge computing with WasmEdge, Cloudflare Workers, or EdgeWorkers. It has also made its way into cloud computing with dedicated clouds like wasmCloud or Fermyon Cloud. So it shouldn't be a surprise that large cloud vendors are starting to experiment with bringing WASM to their platforms as well. In the case of Azure (my cloud of choice), it's running WASM workloads on WASI node pools in Azure Kubernetes Service. This is great, because ever since Steve Sanderson showed an experimental WASI SDK for .NET Core back in March, I've been looking for a good context to play with it too.

I took my first look at WASM/WASI node pools for AKS a couple of months ago. Back then the feature was based on Krustlet, but I quickly learned that the team was moving away from this approach and the feature didn't work with the current version of the AKS control plane (it's a preview, it has that right). I decided to wait. Time has passed, Deis Labs has evolved its tooling for running WebAssembly in Kubernetes from Krustlet to ContainerD shims, and the WASM/WASI node pools for AKS feature has embraced it. So I've decided to take a look at it again.

The current implementation of WASM/WASI node pools provides support for two ContainerD shims: Spin and SpiderLightning. Both Spin and Slight (an alternative name for SpiderLightning) provide structure and interfaces for building distributed, event-driven applications composed of WebAssembly components. After inspecting both of them, I've decided to go with Spin for two reasons:

  • Spin is a framework for building applications for Fermyon Cloud. That meant a potentially stronger ecosystem and community. Also, whatever I learned would have a broader application (not only WASM/WASI node pools for AKS).
  • Spin has (an alpha but still) a .NET SDK.

Your Scientists Were So Preoccupied With Whether They Could, They Didn't Stop to Think if They Should

When Steve Sanderson revealed the experimental WASI SDK for .NET Core, he showed that you can use it to run an ASP.NET Core server in a browser. He also clearly stated you absolutely shouldn't do that. Thinking about compiling .NET to WebAssembly and running it in AKS can make one wonder if this is the same case. After all, we can just run .NET in a container. Well, I believe it makes sense. WebAssembly apps have several advantages over containers:

  • WebAssembly apps are smaller than containers. In general, size is an Achilles' heel of .NET but, even for the sample application I've used here, the WASM version is about ten times smaller than a container based on dotnet/runtime:7.0 (18.81 MB vs 190 MB).
  • WebAssembly apps start faster and execute faster than containers. This is something I haven't measured myself yet, but this paper seems to make quite a strong case for it.
  • WebAssembly apps are more secure than containers. This one is a killer aspect for me. Containers are not secure by default and significant effort has to be put into securing them. The WebAssembly sandbox is secure by default.

This is why I believe exploring this truly makes sense.

But before we go further I want to highlight one thing - almost everything I'm using in this post is currently either an alpha or in preview. It's early and subject to change.

Configuring Azure CLI and Azure Subscription to Support WASI Node Pools

Working with preview features in Azure requires some preparation. The first step is registering the feature in your subscription.

az feature register \
    --namespace Microsoft.ContainerService \
    --name WasmNodePoolPreview

The registration takes some time; you can query the feature list to see whether it has completed.

az feature list \
    --query "[?contains(name, 'Microsoft.ContainerService/WasmNodePoolPreview')].{Name:name,State:properties.state}" \
    -o table

Once it's completed, the resource provider for AKS must be refreshed to pick it up.

az provider register \
    --namespace Microsoft.ContainerService

The subscription part is now ready, but to be able to use the feature you also have to add the preview extension to Azure CLI (WASM/WASI node pools can't be created from the Azure Portal).

az extension add \
    --name aks-preview \
    --upgrade

This is everything we need to start having fun with WASM/WASI node pools.

Creating an AKS Cluster

A WASM/WASI node pool can't be used as a system node pool. This means that before we create one, we have to create a cluster with a system node pool. Something like the diagram below should be enough.

AKS Cluster

If you are familiar with spinning up an AKS cluster you can jump directly to the next section.

If you are looking for something to copy and paste, the below commands will create a resource group, container registry, and cluster with a single node in the system node pool.

az group create \
    -l ${LOCATION} \
    -g ${RESOURCE_GROUP}

az acr create \
    -n ${CONTAINER_REGISTRY} \
    -g ${RESOURCE_GROUP} \
    --sku Basic

az aks create \
    -n ${AKS_CLUSTER} \
    -g ${RESOURCE_GROUP} \
    -c 1 \
    --generate-ssh-keys \
    --attach-acr ${CONTAINER_REGISTRY}

Adding a WASM/WASI Node Pool to the AKS Cluster

A WASM/WASI node pool can be added to the cluster like any other node pool, with the az aks nodepool add command. The part which makes it special is the workload-runtime parameter, which takes a value of WasmWasi.

az aks nodepool add \
    -n ${WASI_NODE_POOL} \
    -g ${RESOURCE_GROUP} \
    -c 1 \
    --cluster-name ${AKS_CLUSTER} \
    --workload-runtime WasmWasi

The updated diagram representing the deployment looks like this.

AKS Cluster With a WASI Node Pool

You can inspect the WASM/WASI node pool by running kubectl get nodes and kubectl describe node commands.

With the infrastructure in place, it's time to build a Spin application.

Building a Spin Application With .NET 7

A Spin application has a pretty straightforward structure:

  • A Spin application manifest (spin.toml file).
  • One or more WebAssembly components.

The WebAssembly components are nothing else than event handlers, while the application manifest defines where they are located and maps them to triggers. Spin supports two triggers: HTTP and Redis. In the case of HTTP, you map components directly to routes.

So, first we need a component that will serve as a handler. In the introduction, I've written that one of the reasons why I chose Spin was the availability of a .NET SDK. Sadly, when I tried to build an application using it, the application failed to start. The reason was that the Spin SDK has too many features. Among other things, it allows for making outbound HTTP requests, which requires the wasi-outbound-http::request module that is not present in the WASM/WASI node pool (which makes sense, as it's experimental and predicted to die once the WASI networking APIs are stable).

Luckily, a Spin application supports falling back to WAGI. WAGI stands for WebAssembly Gateway Interface and is an implementation of CGI (now that's a blast from the past). It enables writing the WASM component as a "command line" application that handles HTTP requests by reading their properties from environment variables and writing responses to the standard output. This means we should start by creating a new .NET console application.

dotnet new console -o Demo.Wasm.Spin

Next, we need to add a reference to the Wasi.Sdk package.

dotnet add package Wasi.Sdk --prerelease

It's time for the code. The bare minimum required for WAGI is outputting a Content-Type header and an empty line that separates the headers from the body. If you want to include a body, it goes after that empty line.

using System.Runtime.InteropServices;

Console.WriteLine("Content-Type: text/plain");
Console.WriteLine();
Console.WriteLine("-- Demo.Wasm.Spin --");

With the component ready, it's time for the application manifest. The one below defines an application using the HTTP trigger and mapping the component to a top-level wildcard route (so it will catch all requests). The executor is how the fallback to WAGI is specified.

spin_version = "1"
authors = ["Tomasz Peczek <[email protected]>"]
description = "Basic Spin application with .NET 7"
name = "spin-with-dotnet-7"
trigger = { type = "http", base = "/" }
version = "1.0.0"

[[component]]
id = "demo-wasm-spin"
source = "Demo.Wasm.Spin/bin/Release/net7.0/Demo.Wasm.Spin.wasm"
[component.trigger]
route = "/..."
executor = { type = "wagi" }

The last missing part is a Dockerfile which will allow us to build an image for deployment.

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build

WORKDIR /src
COPY . .
RUN dotnet build -c Release

FROM scratch
COPY --from=build /src/bin/Release/net7.0/Demo.Wasm.Spin.wasm ./bin/Release/net7.0/Demo.Wasm.Spin.wasm
COPY --from=build /src/spin.toml .

To run the image on the WASM/WASI node pool, it needs to be built and pushed to the container registry.

az acr login -n ${CONTAINER_REGISTRY}
docker build . -t ${CONTAINER_REGISTRY}.azurecr.io/spin-with-dotnet-7:latest
docker push ${CONTAINER_REGISTRY}.azurecr.io/spin-with-dotnet-7:latest

Running a Spin Application in WASM/WASI Node Pool

To run the Spin application, we need to create the proper resources in our AKS cluster. The first is a RuntimeClass, which serves as a selection mechanism so that the Pods run on the WASM/WASI node pool. There are two node selectors related to WASM/WASI node pools: kubernetes.azure.com/wasmtime-spin-v1 and kubernetes.azure.com/wasmtime-slight-v1, with spin and slight being their respective handlers. In our case, we only care about creating a RuntimeClass for kubernetes.azure.com/wasmtime-spin-v1.

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasmtime-spin-v1"
handler: "spin"
scheduling:
  nodeSelector:
    "kubernetes.azure.com/wasmtime-spin-v1": "true"

With the RuntimeClass in place, we can define a Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spin-with-dotnet-7
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spin-with-dotnet-7
  template:
    metadata:
      labels:
        app: spin-with-dotnet-7
    spec:
      runtimeClassName: wasmtime-spin-v1
      containers:
        - name: spin-with-dotnet-7
          image: crdotnetwasi.azurecr.io/spin-with-dotnet-7:latest
          command: ["/"]

The last part is exposing our Spin application to the world. As this is just a demo, I've decided to expose it directly as a Service of type LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: spin-with-dotnet-7
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: spin-with-dotnet-7
  type: LoadBalancer

Now we can run kubectl apply and after a moment kubectl get svc to retrieve the IP address of the Service. You can paste that address into a browser and voilà.

That Was Fun!

Yes, that was really fun. All the stuff used here is still early bits, but it already shows possibilities. I intend to observe this space closely and possibly revisit it whenever some updates happen.

If you want to play with a ready-to-use demo, it's available on GitHub with a workflow ready to deploy it to Azure.

In the last two posts of this series on implementing the Micro Frontends in Action samples in ASP.NET Core, I've focused on Blazor WebAssembly based Web Components as a way to achieve client-side composition. As a result, we have well-encapsulated frontend parts which can communicate with each other and with the page. But there is a problem with client-side rendered fragments: they appear after a delay. While the page loads, the user sees an empty placeholder. This is certainly a bad user experience, but it has even more serious consequences: those fragments may not be visible to search engine crawlers. In the case of something like a buy button, that is very important. So, how to deal with this problem? A possible answer is universal rendering.

What Is Universal Rendering?

Universal rendering is about combining server-side and client-side rendering in a way that enables having a single codebase for both purposes. The typical approach is to handle the initial HTML rendering on the server with the help of server-side composition and then, when the page is loaded in the browser, seamlessly rerender the fragments on the client side. The initial rendering should only generate the static markup, while the rerender brings the full functionality. When done properly, this allows for a fast First Contentful Paint while maintaining encapsulation.

The biggest challenge is usually the single codebase, which in this case means rendering Blazor WebAssembly based Web Components on the server.

Server-Side Rendering for Blazor WebAssembly Based Web Components

There is no standard approach to rendering Web Components on the server. Usually, that requires some creative solutions. But Blazor WebAssembly based Web Components are different, because on the server they are Razor components, and ASP.NET Core provides support for prerendering Razor components. This support comes in the form of the Component Tag Helper. But, before we get to it, we need to modify the Checkout service so it can return the rendered HTML. This is where the choice of hosted deployment with ASP.NET Core becomes beneficial. We can modify the hosting application to support Blazor WebAssembly and controllers with views.

...

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllersWithViews();

var app = builder.Build();

...

app.UseBlazorFrameworkFiles();
app.UseStaticFiles();

app.UseRouting();

app.MapControllerRoute(
    name: "checkout-fragments",
    pattern: "fragment/buy/{sku}/{edition}",
    defaults: new { controller = "Fragments", action = "Buy" }
);

app.Run();

...

The controller for the defined route doesn't need any sophisticated logic, it only needs to pass the parameters to the view. For simplicity, I've decided to go with a dictionary as a model.

public class FragmentsController : Controller
{
    public IActionResult Buy(string sku, string edition)
    {
        IDictionary<string, string> model = new Dictionary<string, string>
        {
            { "Sku", sku },
            { "Edition", edition }
        };

        return View("Buy", model);
    }
}

The only remaining thing is the view which will be using the Component Tag Helper. In general, two pieces of information should be provided to this tag helper: the type of the component and the render mode. There are multiple render modes that render different markers to be used for later bootstrapping, but here we want to use the Static mode which renders only static HTML.

In addition to the component type and render mode, the Component Tag Helper also enables providing values for any component parameters with a param-{ParameterName} syntax. This is how we will pass the values from the model.

@using Demo.AspNetCore.MicroFrontendsInAction.Checkout.Frontend.Components
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@model IDictionary<string, string>

<component type="typeof(BuyButton)" render-mode="Static" param-Sku="@(Model["Sku"])" param-Edition="@(Model["Edition"])" />

If we start the Checkout service and use a browser to navigate to the controller route, we will see an exception complaining about the absence of IBroadcastChannelService. At runtime, Razor components are classes, and ASP.NET Core will need to satisfy their dependencies while creating an instance. Sadly, there is no support for optional dependencies. The options are either a workaround based on injecting IServiceProvider or making sure that the needed dependency is registered. I believe the latter to be more elegant.

...

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddBroadcastChannel();
builder.Services.AddControllersWithViews();

var app = builder.Build();

...

After this change, navigating to the controller route will display HTML, but in the case of the BuyButton, it is not exactly what we want. The BuyButton component contains the markup for a popup which is displayed upon clicking the button. The issue is that the popup is hidden only with CSS. This is fine for the Web Component scenario (where the styles are already loaded when the component is being rendered) but not desired for this one. This is why I've decided to put a condition around the popup markup.

...

<button type="button" @ref="_buttonElement" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition)  ? "???" : _prices[Sku][Edition])
</button>
@if (_confirmationVisible)
{
    <div class="confirmation confirmation-visible">
        ...
    </div>
}

...

Now the HTML returned by the controller contains only the button markup.

Combining Server-Side and Client-Side Rendering

The Checkout service is now able to provide static HTML representing the BuyButton fragment, based on a single codebase. In the case of micro frontends, that's not everything that is needed for universal rendering. The static HTML needs to be composed into the page before it's served. In this series, I've explored a single server-side composition technique (based on YARP Transforms and Server-Side Includes), so I've decided to reuse it. First, I've copied the code for the body transform from the previous sample. Then, I modified the routing in the proxy so that requests coming to the Decide service go through the transform. As previously, I've created a dedicated route for static content so it doesn't go through the transform unnecessarily.

...

var routes = new[]
{
    ...
    new RouteConfig {
        RouteId = Constants.ROOT_ROUTE_ID,
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = "/" },
        Metadata = SsiTransformProvider.SsiEnabledMetadata
    },
    (new RouteConfig {
        RouteId = Constants.DECIDE_ROUTE_ID + "-static",
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.DECIDE_ROUTE_PREFIX + "/static/{**catch-all}" }
    }).WithTransformPathRemovePrefix(Constants.DECIDE_ROUTE_PREFIX),
    (new RouteConfig {
        RouteId = Constants.DECIDE_ROUTE_ID,
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.DECIDE_ROUTE_PREFIX + "/{**catch-all}" },
        Metadata = SsiTransformProvider.SsiEnabledMetadata
    }).WithTransformPathRemovePrefix(Constants.DECIDE_ROUTE_PREFIX),
    ...
};

...

builder.Services.AddReverseProxy()
    .LoadFromMemory(routes, clusters);

...

Now I could modify the markup returned by the Decide service by placing the SSI directives inside the tag representing the Custom Element.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche" edition="standard">
        <!--#include virtual="/checkout/fragment/buy/porsche/standard" -->
      </checkout-buy>
    </div>
    ...
  </body>
</html>

This way the proxy can inject the static HTML into the markup while serving the initial response and once the JavaScript for Web Components is loaded they will be rerendered. We have achieved universal rendering.

What About Progressive Enhancements?

You might have noticed that there is a problem hiding in this solution. It's deceiving the users. The page looks like it's fully loaded but it's not interactive. There is a delay (until the JavaScript is loaded) before clicking the BuyButton has any effect. This is where progressive enhancements come into play.

I will not go into this subject further here, but one possible approach could be wrapping the button inside a form when the Checkout service is rendering static HTML.

@using Demo.AspNetCore.MicroFrontendsInAction.Checkout.Frontend.Components
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@model IDictionary<string, string>

<form asp-controller="Checkout" asp-action="Buy" method="post">
    <input type="hidden" name="sku" valeu="@(Model["Sku"])">
    <input type="hidden" name="edition" valeu="@(Model["Edition"])">
    <component type="typeof(BuyButton)" render-mode="Static" param-Sku="@(Model["Sku"])" param-Edition="@(Model["Edition"])" />
</form>

Of course, that's not all the needed changes. The button would have to be rendered with the submit type, and the Checkout service would need to handle the POST request, redirect back to the product page, and manage the cart in the background.
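
As a rough sketch of that direction (the route, redirect target, and cart handling below are hypothetical and not part of the sample), the Checkout service could handle the POST like this.

// Hypothetical sketch of the progressive enhancement fallback; the redirect target
// and cart handling are assumptions and not part of the sample code
public class CheckoutController : Controller
{
    [HttpPost]
    public IActionResult Buy(string sku, string edition)
    {
        // Manage the cart in the background (e.g. persist the chosen item) - omitted here

        // Redirect back to the product page so the browser flow continues without JavaScript
        return Redirect($"/product/{sku}");
    }
}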

If you are interested in doing that exercise, the sample code with universal rendering that you can use as a starter is available on GitHub.

One of the projects I'm currently working on is utilizing Azure Databricks for its machine learning component. The machine learning engineers working on the project wanted to use external IDEs for development. Unfortunately, using external IDEs doesn't remove all the need for developing or testing directly in Azure Databricks. As we wanted our GitHub repository to be the only source of truth, we had to establish a commits promotion approach that would enable that.

Azure Databricks has support for Git integration, so we've decided to start by using it to integrate Azure Databricks with GitHub.

Configuring GitHub Credentials in Azure Databricks

The first step in setting up Git integration with Azure Databricks is credentials configuration. This is something that every engineer needs to do independently to enable syncing their workspace with a specific branch. It requires the following actions:

  1. Log in to GitHub, click your profile picture and go to Settings, then Developer settings at the bottom.
  2. On the Settings / Developer settings page, switch to Personal access tokens and click Generate new token.
  3. Fill in the form:

    • Provide a recognizable Note for the token.
    • Set the Expiration corresponding to the expected time of work on the project.
    • Select the repo scope.

      GitHub - New Personal Access Token Form

  4. Click Generate token and copy the generated string.
  5. Launch the Azure Databricks workspace.
  6. Click the workspace name in the top right corner and then click the User Settings.
  7. On the Git Integration tab select GitHub, provide your username, paste the copied token, and click Save.

    Azure Databricks - Git Integration

Once the credentials to GitHub have been configured, the next step is the creation of an Azure Databricks Repo.

Creating Azure Databricks Repo Based on GitHub Repository

An Azure Databricks Repo is a clone of your remote Git repository (in this case a GitHub repository) which can be managed through the Azure Databricks UI. The creation process also happens through the UI:

  1. Launch the Azure Databricks workspace.
  2. From the left menu choose Repos and then click Add Repo.
  3. Fill in the form:

    • Check the Create repo by cloning a Git repository.
    • Select GitHub as Git provider.
    • Provide the Git repository URL.
    • The Repository name will auto-populate, but you can modify it to your liking.

      Azure Databricks - Add Repo

  4. Click Submit.

And it's done. You can now select a branch next to the newly created Azure Databricks Repo. If you wish, you can click the down arrow next to the repo/branch name and create a notebook, folder, or file. If the notebook you want to develop in is already in the cloned repository, you can just select it and start developing.

Promoting Commits From Azure Databricks Repo to GitHub Repository

As I've already mentioned, an Azure Databricks Repo is managed through the UI. The Git dialog is accessible through the down arrow next to the repo/branch name, or directly from a notebook through a button placed next to the name of the notebook (the label of the button is the current Git branch name). From the Git dialog, you can commit and push changes to the GitHub repository.

Azure Databricks - Git Dialog

If you are interested in other manual operations, like pulling changes or resolving merge conflicts, they are well described in the documentation. I'm not going to describe their details here, because those are the operations we wanted to avoid by performing the majority of development in external IDEs and automating commits promotion from GitHub to Azure Databricks Repo.

Promoting Commits From GitHub Repository to Azure Databricks Repo

There are two ways to manage Azure Databricks Repos programmatically: the Repos API and the Repos CLI. As GitHub-hosted runners don't come with the Databricks CLI preinstalled, we've decided to go with the Repos API and PowerShell.

We wanted a GitHub Actions workflow which would run on every push and update all Azure Databricks Repos mapped to the branch to which the push has happened. After going through the API endpoints, we came up with the following flow.

GitHub Actions Workflow for Commits Promotion to Azure Databricks Repo

Before we could start the implementation there was one more missing aspect - authentication.

Azure Databricks can use an Azure AD service principal as an identity for an automated tool or a CI/CD process. Creation of a service principal and adding it to an Azure Databricks workspace is a multistep process, which is quite well described in the documentation. After going through it, you should be able to create the following actions secrets for your repository:

  • AZURE_SP_CLIENT_ID - Application (client) ID for the service principal.
  • AZURE_SP_TENANT_ID - Directory (tenant) ID for the service principal.
  • AZURE_SP_CLIENT_SECRET - Client secret for the service principal.
  • AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME - The Azure Databricks workspace instance name.

With the help of the first three of those secrets and the Microsoft identity platform REST API, we can obtain an Azure AD access token for the service principal. The request we need to make looks like this.

https://login.microsoftonline.com/<AZURE_SP_TENANT_ID>/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=<AZURE_SP_CLIENT_ID>&grant_type=client_credentials&scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default&client_secret=<AZURE_SP_CLIENT_SECRET>

The magical scope value (the URL-encoded 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default) is a programmatic identifier for Azure Databricks. The response to this request is a JSON object which contains the Azure AD access token in the access_token field. The PowerShell script to make the request and retrieve the token can look like the one below (assuming that the secrets have been put into environment variables).

$azureAdAccessTokenUri = "https://login.microsoftonline.com/$env:AZURE_SP_TENANT_ID/oauth2/v2.0/token"
$azureAdAccessTokenHeaders = @{ "Content-Type" = "application/x-www-form-urlencoded" }
$azureAdAccessTokenBody = "client_id=$env:AZURE_SP_CLIENT_ID&grant_type=client_credentials&scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default&client_secret=$env:AZURE_SP_CLIENT_SECRET"

$azureAdAccessTokenResponse = Invoke-RestMethod -Method POST -Uri $azureAdAccessTokenUri -Headers $azureAdAccessTokenHeaders -Body $azureAdAccessTokenBody
$azureAdAccessToken = $azureAdAccessTokenResponse.access_token

Having the token, we can start making requests against Repos API. The first request we want to make in our flow is for getting the repos.

$azureDatabricksReposUri = "https://$env:AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME/api/2.0/repos"
$azureDatabricksReposHeaders = @{ Authorization = "Bearer $azureAdAccessToken" }

$azureDatabricksReposResponse = Invoke-RestMethod -Method GET -Uri $azureDatabricksReposUri -Headers $azureDatabricksReposHeaders

The $azureDatabricksReposHeaders will be used for subsequent requests as well, because we assume that the access token won't expire before all repos are updated (the default expiration time is ~60 minutes). There is one more assumption here - that there are no more than twenty repos. The results from the /repos endpoint are paginated (with twenty being the page size), which the above script ignores. If there are more than twenty repos, the script needs to be adjusted to handle that.

Once we have all the repos, we can iterate through them and update those which have a matching URL (in case repositories other than the current one have also been mapped) and branch (so we don't perform unnecessary updates).

$githubRepositoryUrl = $env:GITHUB_REPOSITORY_URL.replace("git://","https://")

foreach ($azureDatabricksRepo in $azureDatabricksReposResponse.repos)
{
    if (($azureDatabricksRepo.url -eq $githubRepositoryUrl) -and ($azureDatabricksRepo.branch -eq $env:GITHUB_BRANCH_NAME))
    {
        $azureDatabricksRepoId = $azureDatabricksRepo.id;
        $azureDatabricksRepoUri = "https://$env:AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME/api/2.0/repos/$azureDatabricksRepoId"
        $updateAzureDatabricksRepoBody = @{ "branch" = $azureDatabricksRepo.branch }

        Invoke-RestMethod -Method PATCH -Uri $azureDatabricksRepoUri -Headers $azureDatabricksReposHeaders -Body ($updateAzureDatabricksRepoBody|ConvertTo-Json)
    }
}

The GITHUB_REPOSITORY_URL and GITHUB_BRANCH_NAME values are injected into environment variables from the github context of the action.

That's all the logic we need; you can find the complete workflow here. Sadly, at least in our case, it threw the following error on the first run.

{"error_code":"PERMISSION_DENIED","message":"Missing Git | provider credentials. Go to User Settings > Git Integration to | add your personal access token."}

The error does make sense. After all, from the perspective of Azure Databricks, the service principal is a user and we have never configured GitHub credentials for that user. This raised two questions.

The first question was, which GitHub user should those credentials represent? This is where the concept of a GitHub machine user comes into play. A GitHub machine user is a GitHub personal account, separate from the GitHub personal accounts of engineers/developers in your organization. It should be created against a dedicated email provided by your IT department and used only for automation scenarios.

The second question was how to configure the credentials. You can't launch the Azure Databricks workspace as the service principal user and do it through the UI. Luckily, Azure Databricks provides the Git Credentials API, which can be used for this task. You can use Postman (or any other tool of your preference) to first make the request for the Azure AD access token described above, and then make the below request to configure the credentials.

https://<WORKSPACE_INSTANCE_NAME>/api/2.0/git-credentials
Content-Type: application/json

{
   "personal_access_token": "<GitHub Machine User Personal Access Token>",
   "git_username": "<GitHub Machine User Username>",
   "git_provider": "GitHub"
}

After this operation, the GitHub Actions workflow started working as expected.

What This Is Not

This is not CI/CD for Azure Databricks. This is just a process supporting daily development in the Azure Databricks context. If you are looking for CI/CD approaches to Azure Databricks, you can take a look here.

I'm continuing my series on implementing the Micro Frontends in Action samples in ASP.NET Core, and I'm continuing the subject of Blazor WebAssembly based Web Components. In the previous post, the project was expanded with a new service that provides its frontend fragment as a Custom Element powered by Blazor WebAssembly. In this post, I will explore how Custom Elements can communicate with other frontend parts.

There are three communication scenarios I would like to explore: passing information from page to Custom Element (parent to child), passing information from Custom Element to page (child to parent), and passing information between Custom Elements (child to child). Let's go through them one by one.

Page to Custom Element

When it comes to passing information from page to Custom Element, there is a standard approach that every web developer will expect. If I want to disable a button, I set an attribute. If I want to change the text on a button, I set an attribute. In general, if I want to change the state of an element, I set an attribute. The same expectation applies to Custom Elements. How to achieve that?

As mentioned in the previous post, the ES6 class which represents a Custom Element can implement a set of lifecycle methods. One of these methods is attributeChangedCallback. It will be invoked each time an attribute from a specified list is added, removed, or has its value changed. The list of attributes which will result in invoking the attributeChangedCallback is defined by the value returned from the observedAttributes static getter.

So, in the case of Custom Elements implemented in JavaScript, one has to implement observedAttributes to return an array of attributes that can modify the state of the Custom Element, and implement attributeChangedCallback to modify that state. Once again, you will be happy to know that all this work has already been done in the case of Blazor WebAssembly. The Microsoft.AspNetCore.Components.CustomElements package, which wraps Blazor components as Custom Elements, handles that. It provides an implementation of observedAttributes which returns all the properties marked as parameters, and an implementation of attributeChangedCallback which updates parameter values and gives the component a chance to rerender. That makes the implementation quite simple.

I've added a new property named Edition to the BuyButton component, which I created in the previous post. The new property impacts the price depending on whether the client has chosen a standard or platinum edition. I've also marked the new property as a parameter.

<button type="button" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition)  ? "???" : _prices[Sku][Edition])
</button>
...

@code {
    private IDictionary<string, Dictionary<string, int>> _prices = new Dictionary<string, Dictionary<string, int>>
    {
        { "porsche", new Dictionary<string, int> { { "standard", 66 }, { "platinum", 966 } } },
        { "fendt", new Dictionary<string, int> { { "standard", 54 }, { "platinum", 945 } }  },
        { "eicher", new Dictionary<string, int> { { "standard", 58 }, { "platinum", 958 } }  }
    };

    [Parameter]
    public string? Sku { get; set; }

    [Parameter]
    public string? Edition { get; set; }

    ...
}

This should be all from the component perspective. The rest should be only about using the attribute representing the property. First, I've added it to the markup served by the Decide service with the default value. I've also added a checkbox that allows choosing the edition.

<html>
    ...
    <body class="decide_layout">
        ...
        <div class="decide_details">
            <label class="decide_editions">
                <p>Material Upgrade?</p>
                <input type="checkbox" name="edition" value="platinum" />
                <span>Platinum<br />Edition</span>
                <img src="https://mi-fr.org/img/porsche_platinum.svg" />
            </label>
            <checkout-buy sku="porsche" edition="standard"></checkout-buy>
        </div>
        ...
    </body>
</html>

Then I implemented an event handler for the change event of that checkbox, where depending on its state, I would change the value of the edition attribute on the custom element.

(function() {
    ...
    const editionsInput = document.querySelector(".decide_editions input");
    ...
    const buyButton = document.querySelector("checkout-buy");

    ...

    editionsInput.addEventListener("change", e => {
        const edition = e.target.checked ? "platinum" : "standard";
        buyButton.setAttribute("edition", edition);
        ...
    });
})();

It worked without any issues. Checking and unchecking the checkbox would result in nicely displaying different prices on the button.

Custom Element to Page

The situation with passing information from a Custom Element to the page is similar to passing information from the page to a Custom Element - there is an expected standard mechanism: events. If something important has occurred internally in the Custom Element and the external world should know about it, the Custom Element should raise an event to which whoever is interested can subscribe.

How to raise a JavaScript event from Blazor? This requires calling a JavaScript function which will wrap a call to dispatchEvent. Why can't dispatchEvent be called directly? That's because Blazor requires the function identifier to be relative to the global scope, while dispatchEvent needs to be called on an instance of an element. This raises another challenge. Our wrapper function will require a reference to the Custom Element. Blazor supports capturing references to elements to pass them to JavaScript. The @ref attribute can be included in an HTML element's markup, resulting in a reference being stored in the variable it is pointing to. This means that the reference to the Custom Element itself can't be passed directly, but a reference to its child element can.

I've written a wrapper function that takes the reference to the button element (but it could be any direct child of the Custom Element) as a parameter and then calls dispatchEvent on its parent.

window.checkout = (function () {
    return {
        dispatchItemAddedEvent: function (checkoutBuyChildElement) {
            checkoutBuyChildElement.parentElement.dispatchEvent(new CustomEvent("checkout:item_added"));
        }
    };
})();

I wanted the event to be raised when the button has been clicked, so I've modified the OnButtonClick to use injected IJSRuntime to call my JavaScript function. In the below code, you can also see the @ref attribute in action and how I'm passing that element reference to the wrapper function.

@using Microsoft.JSInterop

@inject IJSRuntime JS

<button type="button" @ref="_buttonElement" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition)  ? "???" : _prices[Sku][Edition])
</button>
...

@code {
    private ElementReference _buttonElement;

    ...

    private async Task OnButtonClick(MouseEventArgs e)
    {
        ...

        await JS.InvokeVoidAsync("checkout.dispatchItemAddedEvent", _buttonElement);
    }

    ...
}

For the whole thing to work, I had to reference the JavaScript from the Decide service markup so that the wrapper function could be called.

<html>
    ...
    <body class="decide_layout">
        ...
        <script src="/checkout/static/components.js"></script>
        <script src="/checkout/_content/Microsoft.AspNetCore.Components.CustomElements/BlazorCustomElements.js"></script>
        ...
    </body>
</html>

Now I could subscribe to the checkout:item_added event and add some bells and whistles whenever it's raised.

(function() {
    ...
    const productElement = document.querySelector(".decide_product");
    const buyButton = document.querySelector("checkout-buy");

    ...

    buyButton.addEventListener("checkout:item_added", e => {
        productElement.classList.add("decide_product--confirm");
    });

    ...
})();

Custom Element to Custom Element

Passing information between Custom Elements is where things get interesting. That is because there is no direct relation between Custom Elements. Let's assume that the Checkout service exposes a second Custom Element which provides a cart representation. The checkout button and mini-cart don't have to be used together. There might be a scenario where only one of them is present, or there might be scenarios where they are rendered by independent parents.

Of course, everything is happening in the browser's context, so there is always the option to search through the entire DOM tree. This is an approach that should be avoided. First, it's tight coupling, as it requires one Custom Element to have detailed knowledge about another Custom Element. Second, it wouldn't scale. What if there are ten different types of Custom Elements to which information should be passed? That would require ten different searches.

Another option is leaving orchestration to the parent. The parent would listen to events from one Custom Element and change properties on the other. This breaks the separation of responsibilities, as the parent (in our case, the Decide service) is now responsible for implementing logic that belongs to someone else (in our case, the Checkout service).

What is needed is a communication channel that will enable a publish-subscribe pattern. This will ensure proper decoupling. The classic implementation of such a channel is an event-based bus. The publisher raises events with bubbling enabled (by default, it's not), so subscribers can listen for those events on the window object. This is an established approach, but it's not the one I've decided to implement. An event-based bus is a little bit "too public" for me. In the case of multiple Custom Elements communicating, there are a lot of events on the window object, and I would prefer more organization. Luckily, modern browsers provide an alternative way to implement such a channel - the Broadcast Channel API. You can think about the Broadcast Channel API as a simple message bus that provides the capability of creating named channels. The hidden power of the Broadcast Channel API is that it allows communication between windows/tabs, iframes, web workers, and service workers.

Using the Broadcast Channel API in Blazor once again requires JavaScript interop. I've decided to use this opportunity to build a component library that provides easy access to it. I'm not going to describe the process of creating a component library in this post, but if you are interested, just let me know and I'll be happy to write a separate post about it. If you want to use it, the library is available on NuGet.

After building and publishing the component library, I referenced it in the Checkout project and registered the service it provides.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");

builder.Services.AddBroadcastChannel();

...

In the checkout button component, I've injected the service. The channel can be created by calling CreateOrJoinAsync, and I'm doing that in OnAfterRenderAsync. I've also made the component implement IAsyncDisposable, where the channel is disposed to avoid JavaScript memory leaks. The last part was calling PostMessageAsync as part of OnButtonClick to send the message to the channel. This completes the publisher.

...

@implements IAsyncDisposable

...
@inject IBroadcastChannelService BroadcastChannelService

...

@code {
    ...
    private IBroadcastChannel? _broadcastChannel;

    ...

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _broadcastChannel = await BroadcastChannelService.CreateOrJoinAsync("checkout:item-added");
        }
    }

    private async Task OnButtonClick(MouseEventArgs e)
    {
        ...

        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.PostMessageAsync(new CheckoutItem { Sku = Sku, Edition = Edition });
        }

        ...
    }

    ...

    public async ValueTask DisposeAsync()
    {
        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.DisposeAsync();
        }
    }
}

The mini-cart component will be the subscriber. I've added the same code there for injecting the service, joining the channel, and disposing of it. The main difference is that the component subscribes to the channel's Message event instead of sending anything. The BroadcastChannelMessageEventArgs contains the message which has been sent in its Data property as a JsonDocument, which can be deserialized to the desired type. In the mini-cart component, I'm using the message to add items.

@using System.Text.Json;

@implements IAsyncDisposable

@inject IBroadcastChannelService BroadcastChannelService

@(_items.Count == 0  ? "Your cart is empty." : $"You've picked {_items.Count} tractors:")
@foreach (var item in _items)
{
    <img src="https://mi-fr.org/img/@(item.Sku)_@(item.Edition).svg" />
}

@code {
    private IList<CheckoutItem> _items = new List<CheckoutItem>();
    private IBroadcastChannel? _broadcastChannel;
    private JsonSerializerOptions _jsonSerializerOptions = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _broadcastChannel = await BroadcastChannelService.CreateOrJoinAsync("checkout:item-added");
            _broadcastChannel.Message += OnMessage;
        }
    }

    private void OnMessage(object? sender, BroadcastChannelMessageEventArgs e)
    {
        _items.Add(e.Data.Deserialize<CheckoutItem>(_jsonSerializerOptions));

        StateHasChanged();
    }

    public async ValueTask DisposeAsync()
    {
        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.DisposeAsync();
        }
    }
}

The last thing I did in the Checkout service was exposing the mini-cart component.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");
builder.RootComponents.RegisterAsCustomElement<MiniCart>("checkout-minicart");

builder.Services.AddBroadcastChannel();

...

Now the mini-cart could be included in the HTML owned by the Decide service.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche"></checkout-buy>
    </div>
    ...
    <div class="decide_summary">
      <checkout-minicart></checkout-minicart>
    </div>
    ...
  </body>
</html>

Playing With the Complete Sample

The complete sample is available on GitHub. You can run it locally by spinning up all the services, but I've also included a GitHub Actions workflow that can deploy the whole solution to Azure (you just need to fork the repository and provide your own credentials).
