I'm quite an enthusiast of WebAssembly beyond the browser. It's already made its way into edge computing with WasmEdge, Cloudflare Workers, or EdgeWorkers, and into cloud computing with dedicated clouds like wasmCloud or Fermyon Cloud. So it shouldn't be a surprise that large cloud vendors are starting to experiment with bringing WASM to their platforms as well. In the case of Azure (my cloud of choice), that means running WASM workloads on WASI node pools in Azure Kubernetes Service. This is great because, ever since Steve Sanderson showed an experimental WASI SDK for .NET Core back in March, I've been looking for a good context to play with it.

I took my first look at WASM/WASI node pools for AKS a couple of months ago. Back then the feature was based on Krustlet, but I quickly learned that the team was moving away from this approach and the feature didn't work with the current version of the AKS control plane (it's a preview, it has that right). I decided to wait. Time has passed, Deis Labs has evolved its tooling for running WebAssembly in Kubernetes from Krustlet to ContainerD shims, and the WASM/WASI node pools for AKS feature has embraced them. I decided to take a look at it again.

The current implementation of WASM/WASI node pools provides support for two ContainerD shims: Spin and SpiderLightning. Both Spin and Slight (an alternative name for SpiderLightning) provide structure and interfaces for building distributed, event-driven applications from WebAssembly components. After inspecting both of them, I've decided to go with Spin for two reasons:

  • Spin is the framework for building applications for Fermyon Cloud. That meant a potentially stronger ecosystem and community. Also, whatever I learned would have broader application (not only to WASM/WASI node pools for AKS).
  • Spin has (an alpha but still) a .NET SDK.

Your Scientists Were So Preoccupied With Whether They Could, They Didn't Stop to Think if They Should

When Steve Sanderson revealed the experimental WASI SDK for .NET Core, he showed that you can use it to run an ASP.NET Core server in a browser. He also clearly stated you absolutely shouldn't do that. Thinking about compiling .NET to WebAssembly and running it in AKS can make one wonder whether this is the same kind of case. After all, we can just run .NET in a container. Well, I believe it makes sense. WebAssembly apps have several advantages over containers:

  • WebAssembly apps are smaller than containers. In general, size is an Achilles' heel of .NET, but even for the sample application I've used here, the WASM version is about ten times smaller than a container based on dotnet/runtime:7.0 (18.81 MB vs 190 MB).
  • WebAssembly apps start and execute faster than containers. This is something I haven't measured myself yet, but this paper seems to make quite a strong case for it.
  • WebAssembly apps are more secure than containers. This one is a killer aspect for me. Containers are not secure by default, and significant effort has to be put into securing them. The WebAssembly sandbox is secure by default.

This is why I believe exploring this truly makes sense.

But before we go further I want to highlight one thing - almost everything I'm using in this post is currently either an alpha or in preview. It's early and subject to change.

Configuring Azure CLI and Azure Subscription to Support WASI Node Pools

Working with preview features in Azure requires some preparation. The first step is registering the feature in your subscription.

az feature register \
    --namespace Microsoft.ContainerService \
    --name WasmNodePoolPreview

The registration takes some time; you can query the features list to see whether it has completed.

az feature list \
    --query "[?contains(name, 'Microsoft.ContainerService/WasmNodePoolPreview')].{Name:name,State:properties.state}" \
    -o table

Once it's completed, the resource provider for AKS must be refreshed to pick it up.

az provider register \
    --namespace Microsoft.ContainerService

The subscription part is now ready, but to be able to use the feature you also have to add the preview extension to Azure CLI (WASM/WASI node pools can't be created from the Azure Portal).

az extension add \
    --name aks-preview \
    --upgrade

This is everything we need to start having fun with WASM/WASI node pools.

Creating an AKS Cluster

A WASM/WASI node pool can't serve as a system node pool. This means that before we create one, we have to create a cluster with a system node pool. Something like the diagram below should be enough.

AKS Cluster

If you are familiar with spinning up an AKS cluster you can jump directly to the next section.

If you are looking for something to copy and paste, the below commands will create a resource group, container registry, and cluster with a single node in the system node pool.

az group create \
    -l ${LOCATION} \
    -g ${RESOURCE_GROUP}

az acr create \
    -n ${CONTAINER_REGISTRY} \
    -g ${RESOURCE_GROUP} \
    --sku Basic

az aks create \
    -n ${AKS_CLUSTER} \
    -g ${RESOURCE_GROUP} \
    -c 1 \
    --generate-ssh-keys \
    --attach-acr ${CONTAINER_REGISTRY}

Adding a WASM/WASI Node Pool to the AKS Cluster

A WASM/WASI node pool can be added to the cluster like any other node pool, with the az aks nodepool add command. The part that makes it special is the --workload-runtime parameter, which takes a value of WasmWasi.

az aks nodepool add \
    -n ${WASI_NODE_POOL} \
    -g ${RESOURCE_GROUP} \
    -c 1 \
    --cluster-name ${AKS_CLUSTER} \
    --workload-runtime WasmWasi

The updated diagram representing the deployment looks like this.

AKS Cluster With a WASI Node Pool

You can inspect the WASM/WASI node pool by running the kubectl get nodes and kubectl describe node commands.
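The inspection can look like this (the node name below is a placeholder; take it from the kubectl get nodes output):

```shell
# List the nodes; the WASM/WASI node pool adds a node to the cluster.
kubectl get nodes -o wide

# Inspect that node's labels; node name is a placeholder from the previous output.
kubectl describe node <wasi-node-name> | grep wasmtime
```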

With the infrastructure in place, it's time to build a Spin application.

Building a Spin Application With .NET 7

A Spin application has a pretty straightforward structure:

  • A Spin application manifest (spin.toml file).
  • One or more WebAssembly components.

The WebAssembly components are nothing more than event handlers, while the application manifest defines where they are located and maps them to triggers. Spin supports two triggers: HTTP and Redis. In the case of HTTP, you map components directly to routes.

So, first we need a component that will serve as a handler. In the introduction, I wrote that one of the reasons I chose Spin was the availability of a .NET SDK. Sadly, when I tried to build an application using it, the application failed to start. The reason was that the Spin SDK has too many features. Among other things, it allows for making outbound HTTP requests, which requires the wasi-outbound-http::request module, and that module is not present in the WASM/WASI node pool (which makes sense, as it's experimental and expected to go away once the WASI networking APIs are stable).

Luckily, a Spin application supports a fallback to WAGI. WAGI stands for WebAssembly Gateway Interface and is an implementation of CGI (now that's a blast from the past). It enables writing the WASM component as a "command line" application that handles HTTP requests by reading request properties from environment variables and writing the response to standard output. This means we should start by creating a new .NET console application.

dotnet new console -o Demo.Wasm.Spin

Next we need to add a reference to the Wasi.Sdk package.

dotnet add package Wasi.Sdk --prerelease

It's time for the code. The bare minimum required by WAGI is outputting a Content-Type header and an empty line that separates headers from body. If you want to include a body, it goes after that empty line.

Console.WriteLine("Content-Type: text/plain");
Console.WriteLine();
Console.WriteLine("-- Demo.Wasm.Spin --");
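The raw stream this program emits is exactly what the WAGI executor parses; you can simulate it in a shell to see the shape of the response (the body text matches the sample above):

```shell
# Emit a minimal WAGI-style response: one header, a blank line, then the body.
printf 'Content-Type: text/plain\n\n-- Demo.Wasm.Spin --\n'
```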

With the component ready, it's time for the application manifest. The one below defines an application using the HTTP trigger and maps the component to a top-level wildcard route (so it will catch all requests). The executor is how the fallback to WAGI is specified.

spin_version = "1"
authors = ["Tomasz Peczek <[email protected]>"]
description = "Basic Spin application with .NET 7"
name = "spin-with-dotnet-7"
trigger = { type = "http", base = "/" }
version = "1.0.0"

[[component]]
id = "demo-wasm-spin"
source = "Demo.Wasm.Spin/bin/Release/net7.0/Demo.Wasm.Spin.wasm"
[component.trigger]
route = "/..."
executor = { type = "wagi" }

The last missing part is a Dockerfile which will allow us to build an image for deployment.

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build

WORKDIR /src
COPY . .
RUN dotnet build -c Release

FROM scratch
COPY --from=build /src/bin/Release/net7.0/Demo.Wasm.Spin.wasm ./bin/Release/net7.0/Demo.Wasm.Spin.wasm
COPY --from=build /src/spin.toml .

To run the image on the WASM/WASI node pool, it needs to be built and pushed to the container registry.

az acr login -n ${CONTAINER_REGISTRY}
docker build . -t ${CONTAINER_REGISTRY}.azurecr.io/spin-with-dotnet-7:latest
docker push ${CONTAINER_REGISTRY}.azurecr.io/spin-with-dotnet-7:latest

Running a Spin Application in WASM/WASI Node Pool

To run the Spin application, we need to create the proper resources in our AKS cluster. The first is a RuntimeClass, which serves as a selection mechanism so the Pods run on the WASM/WASI node pool. There are two node selectors related to WASM/WASI node pools: kubernetes.azure.com/wasmtime-spin-v1 and kubernetes.azure.com/wasmtime-slight-v1, with spin and slight being their respective handlers. In our case, we only care about creating a RuntimeClass for kubernetes.azure.com/wasmtime-spin-v1.

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasmtime-spin-v1"
handler: "spin"
scheduling:
  nodeSelector:
    "kubernetes.azure.com/wasmtime-spin-v1": "true"

With the RuntimeClass in place, we can define a Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spin-with-dotnet-7
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spin-with-dotnet-7
  template:
    metadata:
      labels:
        app: spin-with-dotnet-7
    spec:
      runtimeClassName: wasmtime-spin-v1
      containers:
        - name: spin-with-dotnet-7
          image: crdotnetwasi.azurecr.io/spin-with-dotnet-7:latest # replace with your ${CONTAINER_REGISTRY}.azurecr.io image
          command: ["/"]

The last part is exposing our Spin application to the world. As this is just a demo, I've decided to expose it directly as a Service of type LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: spin-with-dotnet-7
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: spin-with-dotnet-7
  type: LoadBalancer

Now we can run kubectl apply and, after a moment, kubectl get svc to retrieve the IP address of the Service. You can paste that address into a browser and voilà.
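Spelled out as commands (the manifest file names are assumptions; the external IP is a placeholder you get from the Service output):

```shell
# Assumed file names; apply the RuntimeClass, Deployment, and Service manifests.
kubectl apply -f runtime-class.yaml -f deployment.yaml -f service.yaml

# Wait for the Service to get an EXTERNAL-IP, then request the application.
kubectl get svc spin-with-dotnet-7
curl http://<external-ip>/
```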

That Was Fun!

Yes, that was really fun. All the stuff used here is still early bits, but it already shows the possibilities. I intend to observe this space closely and revisit it as it evolves.

If you want to play with a ready-to-use demo, it's available on GitHub with a workflow ready to deploy it to Azure.

In the last two posts of this series on implementing the Micro Frontends in Action samples in ASP.NET Core, I've focused on Blazor WebAssembly based Web Components as a way to achieve client-side composition. As a result, we have well-encapsulated frontend parts which can communicate with each other and the page. But there is a problem with client-side rendered fragments: they appear after a delay. While the page loads, the user sees an empty placeholder. This is certainly a bad user experience, but it has even more serious consequences: those fragments may not be visible to search engine crawlers. In the case of something like a buy button, that is very important. So, how do we deal with this problem? A possible answer is universal rendering.

What Is Universal Rendering?

Universal rendering is about combining server-side and client-side rendering in a way that enables a single codebase for both purposes. The typical approach is to handle the initial HTML rendering on the server with the help of server-side composition and then, when the page is loaded in the browser, seamlessly rerender the fragments on the client side. The initial rendering should only generate the static markup, while the rerender brings the full functionality. When done properly, this allows for a fast First Contentful Paint while maintaining encapsulation.

The biggest challenge is usually the single codebase, which in this case means rendering Blazor WebAssembly based Web Components on the server.

Server-Side Rendering for Blazor WebAssembly Based Web Components

There is no standard approach to rendering Web Components on the server. Usually, that requires some creative solutions. But Blazor WebAssembly based Web Components are different, because on the server they are Razor components, and ASP.NET Core provides support for prerendering Razor components. This support comes in the form of the Component Tag Helper. But before we get to it, we need to modify the Checkout service so it can return the rendered HTML. This is where the choice of a hosted deployment with ASP.NET Core pays off. We can modify the hosting application to support both Blazor WebAssembly and controllers with views.

...

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllersWithViews();

var app = builder.Build();

...

app.UseBlazorFrameworkFiles();
app.UseStaticFiles();

app.UseRouting();

app.MapControllerRoute(
    name: "checkout-fragments",
    pattern: "fragment/buy/{sku}/{edition}",
    defaults: new { controller = "Fragments", action = "Buy" }
);

app.Run();

...

The controller for the defined route doesn't need any sophisticated logic; it only needs to pass the parameters to the view. For simplicity, I've decided to go with a dictionary as the model.

public class FragmentsController : Controller
{
    public IActionResult Buy(string sku, string edition)
    {
        IDictionary<string, string> model = new Dictionary<string, string>
        {
            { "Sku", sku },
            { "Edition", edition }
        };

        return View("Buy", model);
    }
}

The only remaining thing is the view, which will be using the Component Tag Helper. In general, two pieces of information should be provided to this tag helper: the type of the component and the render mode. There are multiple render modes that render different markers to be used for later bootstrapping, but here we want the Static mode, which renders only static HTML.

In addition to the component type and render mode, the Component Tag Helper also enables providing values for any component parameters with a param-{ParameterName} syntax. This is how we will pass the values from the model.

@using Demo.AspNetCore.MicroFrontendsInAction.Checkout.Frontend.Components
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@model IDictionary<string, string>

<component type="typeof(BuyButton)" render-mode="Static" param-Sku="@(Model["Sku"])" param-Edition="@(Model["Edition"])" />

If we start the Checkout service and use a browser to navigate to the controller route, we will see an exception complaining about the absence of IBroadcastChannelService. At runtime, Razor components are classes, and ASP.NET Core needs to satisfy their dependencies while creating an instance. Sadly, there is no support for optional dependencies. The options are either a workaround based on injecting IServiceProvider or making sure that the needed dependency is registered. I believe the latter to be more elegant.

...

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddBroadcastChannel();
builder.Services.AddControllersWithViews();

var app = builder.Build();

...

After this change, navigating to the controller route will display HTML, but in the case of the BuyButton, it is not exactly what we want. The BuyButton component contains the markup for a popup which is displayed upon clicking the button. The issue is that the popup is hidden only with CSS. This is fine for the Web Component scenario (where the styles are already loaded when the component is being rendered) but not desired for this one. This is why I've decided to put a condition around the popup markup.

...

<button type="button" @ref="_buttonElement" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition)  ? "???" : _prices[Sku][Edition])
</button>
@if (_confirmationVisible)
{
    <div class="confirmation confirmation-visible">
        ...
    </div>
}

...

Now the HTML returned by the controller contains only the button markup.

Combining Server-Side and Client-Side Rendering

The Checkout service is now able to provide static HTML representing the BuyButton fragment, based on a single codebase. In the case of micro frontends, that's not everything that is needed for universal rendering. The static HTML needs to be composed into the page before it's served. In this series, I've explored a single server-side composition technique, based on YARP Transforms and Server-Side Includes, so I've decided to reuse it. First, I copied the code for the body transform from the previous sample. Then, I modified the routing in the proxy so the transform is applied to the responses coming from the Decide service. As previously, I've created a dedicated route for static content so it doesn't go through the transform unnecessarily.

...

var routes = new[]
{
    ...
    new RouteConfig {
        RouteId = Constants.ROOT_ROUTE_ID,
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = "/" },
        Metadata = SsiTransformProvider.SsiEnabledMetadata
    },
    (new RouteConfig {
        RouteId = Constants.DECIDE_ROUTE_ID + "-static",
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.DECIDE_ROUTE_PREFIX + "/static/{**catch-all}" }
    }).WithTransformPathRemovePrefix(Constants.DECIDE_ROUTE_PREFIX),
    (new RouteConfig {
        RouteId = Constants.DECIDE_ROUTE_ID,
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.DECIDE_ROUTE_PREFIX + "/{**catch-all}" },
        Metadata = SsiTransformProvider.SsiEnabledMetadata
    }).WithTransformPathRemovePrefix(Constants.DECIDE_ROUTE_PREFIX),
    ...
};

...

builder.Services.AddReverseProxy()
    .LoadFromMemory(routes, clusters);

...

Now I could modify the markup returned by the Decide service by placing the SSI directives inside the tag representing the Custom Element.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche" edition="standard">
        <!--#include virtual="/checkout/fragment/buy/porsche/standard" -->
      </checkout-buy>
    </div>
    ...
  </body>
</html>

This way the proxy can inject the static HTML into the markup while serving the initial response, and once the JavaScript for the Web Components is loaded, they will be rerendered. We have achieved universal rendering.

What About Progressive Enhancements?

You might have noticed that there is a problem hiding in this solution: it's deceiving the users. The page looks like it's fully loaded, but it's not interactive. There is a delay (until the JavaScript is loaded) before clicking the BuyButton has any effect. This is where progressive enhancements come into play.

I will not go into this subject further here, but one possible approach could be wrapping the button inside a form when the Checkout service is rendering static HTML.

@using Demo.AspNetCore.MicroFrontendsInAction.Checkout.Frontend.Components
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@model IDictionary<string, string>

<form asp-controller="Checkout" asp-action="Buy" method="post">
    <input type="hidden" name="sku" value="@(Model["Sku"])">
    <input type="hidden" name="edition" value="@(Model["Edition"])">
    <component type="typeof(BuyButton)" render-mode="Static" param-Sku="@(Model["Sku"])" param-Edition="@(Model["Edition"])" />
</form>

Of course, that's not all of the needed changes. The button would have to be rendered with the submit type, and the Checkout service needs to handle the POST request, redirect back to the product page, and manage the cart in the background.

If you are interested in doing that exercise, the sample code with universal rendering that you can use as a starter is available on GitHub.

One of the projects I'm currently working on utilizes Azure Databricks for its machine learning component. The machine learning engineers on the project wanted to use external IDEs for development. Unfortunately, using external IDEs doesn't remove every need to develop or test directly in Azure Databricks. As we wanted our GitHub repository to be the only source of truth, we had to establish a commits promotion approach that would enable that.

Azure Databricks has support for Git integration, so we've decided to start by using it to integrate Azure Databricks with GitHub.

Configuring GitHub Credentials in Azure Databricks

The first step in setting up Git integration with Azure Databricks is credentials configuration. This is something that every engineer needs to do independently to enable syncing a workspace with a specific branch. It requires the following actions:

  1. Log in to GitHub, click the profile picture, go to Settings, and then Developer settings at the bottom.
  2. On the Developer settings page, switch to Personal access tokens and click Generate new token.
  3. Fill in the form:

    • Provide a recognizable Note for the token.
    • Set the Expiration corresponding to the expected time of work on the project.
    • Select the repo scope.

      GitHub - New Personal Access Token Form

  4. Click Generate token and copy the generated string.
  5. Launch the Azure Databricks workspace.
  6. Click the workspace name in the top right corner and then click User Settings.
  7. On the Git Integration tab select GitHub, provide your username, paste the copied token, and click Save.

    Azure Databricks - Git Integration

Once the credentials to GitHub have been configured, the next step is the creation of an Azure Databricks Repo.

Creating Azure Databricks Repo Based on GitHub Repository

An Azure Databricks Repo is a clone of your remote Git repository (in this case GitHub repository) which can be managed through Azure Databricks UI. The creation process also happens through UI:

  1. Launch the Azure Databricks workspace.
  2. From the left menu choose Repos and then click Add Repo.
  3. Fill in the form:

    • Check the Create repo by cloning a Git repository.
    • Select GitHub as Git provider.
    • Provide the Git repository URL.
    • The Repository name will auto-populate, but you can modify it to your liking.

      Azure Databricks - Add Repo

  4. Click Submit.

And it's done. You can now select a branch next to the newly created Azure Databricks Repo. If you wish, you can click the down arrow next to the repo/branch name and create a notebook, folder, or file. If the notebook you want to develop in is already in the cloned repository, you can just select it and start developing.

Promoting Commits From Azure Databricks Repo to GitHub Repository

As I've already mentioned, Azure Databricks Repo is managed through the UI. The Git dialog is accessible through the down arrow next to the repo/branch name or directly from the notebook through a button placed next to the name of the notebook (the label of the button is the current Git branch name). From the Git dialog, you can commit and push changes to the GitHub repository.

Azure Databricks - Git Dialog

If you are interested in other manual operations, like pulling changes or resolving merge conflicts, they are well described in the documentation. I'm not going to describe their details here, because those are the operations we wanted to avoid by performing the majority of development in external IDEs and automating commits promotion from GitHub to Azure Databricks Repo.

Promoting Commits From GitHub Repository to Azure Databricks Repo

There are two ways to manage Azure Databricks Repos programmatically: the Repos API and the Repos CLI. As GitHub-hosted runners don't come with the Databricks CLI preinstalled, we've decided to go with the Repos API and PowerShell.

We wanted a GitHub Actions workflow which would run on every push and update all Azure Databricks Repos mapped to the branch to which the push has happened. After going through the API endpoints, we came up with the following flow.

GitHub Actions Workflow for Commits Promotion to Azure Databricks Repo

Before we could start the implementation there was one more missing aspect - authentication.

Azure Databricks can use an Azure AD service principal as an identity for an automated tool or a CI/CD process. Creation of a service principal and adding it to an Azure Databricks workspace is a multistep process, which is quite well described in the documentation. After going through it, you should be able to create the following actions secrets for your repository:

  • AZURE_SP_CLIENT_ID - Application (client) ID for the service principal.
  • AZURE_SP_TENANT_ID - Directory (tenant) ID for the service principal.
  • AZURE_SP_CLIENT_SECRET - Client secret for the service principal.
  • AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME - The Azure Databricks workspace instance name.

With the help of the first three of those secrets and the Microsoft identity platform REST API, we can obtain an Azure AD access token for the service principal. The request we need to make looks like this.

POST https://login.microsoftonline.com/<AZURE_SP_TENANT_ID>/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=<AZURE_SP_CLIENT_ID>&grant_type=client_credentials&scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default&client_secret=<AZURE_SP_CLIENT_SECRET>

The magical scope value (the URL-encoded 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default) is a programmatic identifier for Azure Databricks. The response to this request is a JSON object which contains the Azure AD access token in the access_token field. The PowerShell script to make the request and retrieve the token can look like the one below (assuming that the secrets have been put into environment variables).

$azureAdAccessTokenUri = "https://login.microsoftonline.com/$env:AZURE_SP_TENANT_ID/oauth2/v2.0/token"
$azureAdAccessTokenHeaders = @{ "Content-Type" = "application/x-www-form-urlencoded" }
$azureAdAccessTokenBody = "client_id=$env:AZURE_SP_CLIENT_ID&grant_type=client_credentials&scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default&client_secret=$env:AZURE_SP_CLIENT_SECRET"

$azureAdAccessTokenResponse = Invoke-RestMethod -Method POST -Uri $azureAdAccessTokenUri -Headers $azureAdAccessTokenHeaders -Body $azureAdAccessTokenBody
$azureAdAccessToken = $azureAdAccessTokenResponse.access_token

Having the token, we can start making requests against Repos API. The first request we want to make in our flow is for getting the repos.

$azureDatabricksReposUri = "https://$env:AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME/api/2.0/repos"
$azureDatabricksReposHeaders = @{ Authorization = "Bearer $azureAdAccessToken" }

$azureDatabricksReposResponse = Invoke-RestMethod -Method GET -Uri $azureDatabricksReposUri -Headers $azureDatabricksReposHeaders

The $azureDatabricksReposHeaders will be used for subsequent requests as well, because we assume that the access token won't expire before all repos are updated (the default expiration time is ~60 minutes). There is one more assumption here: that there are no more than twenty repos. The results from the /repos endpoint are paginated (with twenty being the page size), which the above script ignores. If there are more than twenty repos, the script needs to be adjusted to handle that.
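A hedged sketch of how that adjustment could look, assuming the next_page_token field the Repos API uses for paging:

```powershell
# Sketch only: accumulate repos page by page until no next_page_token is returned.
$allRepos = @()
$pageUri = $azureDatabricksReposUri

do
{
    $page = Invoke-RestMethod -Method GET -Uri $pageUri -Headers $azureDatabricksReposHeaders
    $allRepos += $page.repos
    $pageUri = "$($azureDatabricksReposUri)?next_page_token=$($page.next_page_token)"
} while ($page.next_page_token)
```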

Once we have all the repos, we can iterate through them and update those which have a matching URL (in case repositories other than the current one have also been mapped) and branch (so we don't perform unnecessary updates).

$githubRepositoryUrl = $env:GITHUB_REPOSITORY_URL.replace("git://","https://")

foreach ($azureDatabricksRepo in $azureDatabricksReposResponse.repos)
{
    if (($azureDatabricksRepo.url -eq $githubRepositoryUrl) -and ($azureDatabricksRepo.branch -eq $env:GITHUB_BRANCH_NAME))
    {
        $azureDatabricksRepoId = $azureDatabricksRepo.id
        $azureDatabricksRepoUri = "https://$env:AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME/api/2.0/repos/$azureDatabricksRepoId"
        $updateAzureDatabricksRepoBody = @{ "branch" = $azureDatabricksRepo.branch }

        Invoke-RestMethod -Method PATCH -Uri $azureDatabricksRepoUri -Headers $azureDatabricksReposHeaders -Body ($updateAzureDatabricksRepoBody | ConvertTo-Json)
    }
}

GITHUB_REPOSITORY_URL and GITHUB_BRANCH_NAME are injected into environment variables from the github context of the workflow.

That's all the logic we need; you can find the complete workflow here. Sadly, at least in our case, it threw the following error on the first run.

{"error_code":"PERMISSION_DENIED","message":"Missing Git provider credentials. Go to User Settings > Git Integration to add your personal access token."}

The error does make sense. After all, from the perspective of Azure Databricks, the service principal is a user and we have never configured GitHub credentials for that user. This raised two questions.

The first question was, which GitHub user should those credentials represent? This is where the concept of a GitHub machine user comes into play. A GitHub machine user is a GitHub personal account, separate from the GitHub personal accounts of engineers/developers in your organization. It should be created against a dedicated email provided by your IT department and used only for automation scenarios.

The second question was how to configure the credentials. You can't launch the Azure Databricks workspace as the service principal and do it through the UI. Luckily, Azure Databricks provides the Git Credentials API, which can be used for this task. You can use Postman (or any other tool of your preference) to first make the request for the Azure AD access token described above, and then make the request below to configure the credentials.

POST https://<WORKSPACE_INSTANCE_NAME>/api/2.0/git-credentials
Content-Type: application/json

{
    "personal_access_token": "<GitHub Machine User Personal Access Token>",
    "git_username": "<GitHub Machine User Username>",
    "git_provider": "GitHub"
}
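If you prefer to stay in PowerShell, the same request can be sketched like this (the placeholder values match the JSON above, and $azureAdAccessToken comes from the token script shown earlier):

```powershell
# Sketch only: configure Git credentials for the service principal via the Git Credentials API.
$gitCredentialsUri = "https://$env:AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME/api/2.0/git-credentials"
$gitCredentialsHeaders = @{ Authorization = "Bearer $azureAdAccessToken" }
$gitCredentialsBody = @{
    personal_access_token = "<GitHub Machine User Personal Access Token>"
    git_username          = "<GitHub Machine User Username>"
    git_provider          = "GitHub"
}

Invoke-RestMethod -Method POST -Uri $gitCredentialsUri -Headers $gitCredentialsHeaders -ContentType "application/json" -Body ($gitCredentialsBody | ConvertTo-Json)
```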

After this operation, the GitHub Actions workflow started working as expected.

What This Is Not

This is not CI/CD for Azure Databricks. This is just a process supporting daily development in the Azure Databricks context. If you are looking for CI/CD approaches to Azure Databricks, you can take a look here.

I'm continuing my series on implementing the Micro Frontends in Action samples in ASP.NET Core, and I'm continuing the subject of Blazor WebAssembly based Web Components. In the previous post, the project was expanded with a new service that provides its frontend fragment as a Custom Element powered by Blazor WebAssembly. In this post, I will explore how Custom Elements can communicate with other frontend parts.

There are three communication scenarios I would like to explore: passing information from page to Custom Element (parent to child), passing information from Custom Element to page (child to parent), and passing information between Custom Elements (child to child). Let's go through them one by one.

Page to Custom Element

When it comes to passing information from page to Custom Element, there is a standard approach that every web developer will expect. If I want to disable a button, I set an attribute. If I want to change the text on a button, I set an attribute. In general, if I want to change the state of an element, I set an attribute. The same expectation applies to Custom Elements. How to achieve that?

As mentioned in the previous post, the ES6 class which represents a Custom Element can implement a set of lifecycle methods. One of these methods is attributeChangedCallback. It is invoked each time an attribute from a specified list is added, removed, or has its value changed. The list of attributes which result in invoking attributeChangedCallback is defined by the value returned from the static observedAttributes getter.
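The JavaScript side of this mechanism can be sketched as follows (the element and attribute names are illustrative, and the stub HTMLElement base class is included only so the snippet can also run outside a browser):

```javascript
// A minimal sketch of observedAttributes and attributeChangedCallback.
// The stub base class below is only needed outside a browser (e.g. Node);
// in a browser, HTMLElement is already defined.
globalThis.HTMLElement ??= class {};

class BuyButtonElement extends HTMLElement {
  // Only attributes listed here cause attributeChangedCallback to be invoked.
  static get observedAttributes() {
    return ["sku", "edition"];
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // Update the element's state (and typically rerender) in response
    // to the attribute change.
    this[name] = newValue;
  }
}

// In a browser, the element would then be registered:
// window.customElements.define("checkout-buy", BuyButtonElement);
```

The Blazor package discussed in this post generates the equivalent of this plumbing from a component's parameters.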

So, in the case of Custom Elements implemented in JavaScript, one has to implement observedAttributes to return an array of attributes that can modify the state of the Custom Element, and implement attributeChangedCallback to modify that state. Once again, you will be happy to know that all this work has already been done in the case of Blazor WebAssembly. The Microsoft.AspNetCore.Components.CustomElements package, which wraps Blazor components as Custom Elements, handles that. It provides an implementation of observedAttributes which returns all the properties marked as parameters, and an implementation of attributeChangedCallback which updates parameter values and gives the component a chance to rerender. That makes the implementation quite simple.

I've added a new property named Edition to the BuyButton component, which I created in the previous post. The new property impacts the price depending on whether the client has chosen the standard or platinum edition. I've also marked the new property as a parameter.

<button type="button" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition)  ? "???" : _prices[Sku][Edition])
</button>
...

@code {
    private IDictionary<string, Dictionary<string, int>> _prices = new Dictionary<string, Dictionary<string, int>>
    {
        { "porsche", new Dictionary<string, int> { { "standard", 66 }, { "platinum", 966 } } },
        { "fendt", new Dictionary<string, int> { { "standard", 54 }, { "platinum", 945 } }  },
        { "eicher", new Dictionary<string, int> { { "standard", 58 }, { "platinum", 958 } }  }
    };

    [Parameter]
    public string? Sku { get; set; }

    [Parameter]
    public string? Edition { get; set; }

    ...
}

This should be all from the component perspective. The rest is just about using the attribute that represents the property. First, I've added it, with a default value, to the markup served by the Decide service. I've also added a checkbox that allows choosing the edition.

<html>
    ...
    <body class="decide_layout">
        ...
        <div class="decide_details">
            <label class="decide_editions">
                <p>Material Upgrade?</p>
                <input type="checkbox" name="edition" value="platinum" />
                <span>Platinum<br />Edition</span>
                <img src="https://mi-fr.org/img/porsche_platinum.svg" />
            </label>
            <checkout-buy sku="porsche" edition="standard"></checkout-buy>
        </div>
        ...
    </body>
</html>

Then I implemented an event handler for the change event of that checkbox, where depending on its state, I would change the value of the edition attribute on the custom element.

(function() {
    ...
    const editionsInput = document.querySelector(".decide_editions input");
    ...
    const buyButton = document.querySelector("checkout-buy");

    ...

    editionsInput.addEventListener("change", e => {
        const edition = e.target.checked ? "platinum" : "standard";
        buyButton.setAttribute("edition", edition);
        ...
    });
})();

It worked without any issues. Checking and unchecking the checkbox would result in nicely displaying different prices on the button.

Custom Element to Page

The situation with passing information from Custom Element to the page is similar to passing information from page to Custom Element - there is an expected standard mechanism: events. If something important has occurred internally in the Custom Element and the external world should know about it, Custom Element should raise an event to which whoever is interested can subscribe.

How to raise a JavaScript event from Blazor? This requires calling a JavaScript function which will wrap a call to dispatchEvent. Why can't dispatchEvent be called directly? That's because Blazor requires the function identifier to be relative to the global scope, while dispatchEvent needs to be called on an instance of an element. This raises another challenge: our wrapper function will require a reference to the Custom Element. Blazor supports capturing references to elements to pass them to JavaScript. The @ref attribute can be included in HTML element markup, resulting in a reference being stored in the variable it points to. This means that the reference to the Custom Element itself can't be passed directly, but a reference to its child element can.

I've written a wrapper function that takes the reference to the button element (but it could be any direct child of the Custom Element) as a parameter and then calls dispatchEvent on its parent.

window.checkout = (function () {
    return {
        dispatchItemAddedEvent: function (checkoutBuyChildElement) {
            checkoutBuyChildElement.parentElement.dispatchEvent(new CustomEvent("checkout:item_added"));
        }
    };
})();

I wanted the event to be raised when the button is clicked, so I've modified OnButtonClick to use the injected IJSRuntime to call my JavaScript function. In the code below, you can also see the @ref attribute in action and how I'm passing that element reference to the wrapper function.

@using Microsoft.JSInterop

@inject IJSRuntime JS

<button type="button" @ref="_buttonElement" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition)  ? "???" : _prices[Sku][Edition])
</button>
...

@code {
    private ElementReference _buttonElement;

    ...

    private async Task OnButtonClick(MouseEventArgs e)
    {
        ...

        await JS.InvokeVoidAsync("checkout.dispatchItemAddedEvent", _buttonElement);
    }

    ...
}

For the whole thing to work, I had to reference the JavaScript from the Decide service markup so that the wrapper function could be called.

<html>
    ...
    <body class="decide_layout">
        ...
        <script src="/checkout/static/components.js"></script>
        <script src="/checkout/_content/Microsoft.AspNetCore.Components.CustomElements/BlazorCustomElements.js"></script>
        ...
    </body>
</html>

Now I could subscribe to the checkout:item_added event and add some bells and whistles whenever it's raised.

(function() {
    ...
    const productElement = document.querySelector(".decide_product");
    const buyButton = document.querySelector("checkout-buy");

    ...

    buyButton.addEventListener("checkout:item_added", e => {
        productElement.classList.add("decide_product--confirm");
    });

    ...
})();

Custom Element to Custom Element

Passing information between Custom Elements is where things get interesting. That is because there is no direct relation between Custom Elements. Let's assume that the Checkout service exposes a second Custom Element which provides a cart representation. The checkout button and the mini-cart don't have to be used together. There might be a scenario where only one of them is present, or there might be scenarios where they are rendered by independent parents.

Of course, everything is happening in the browser's context, so there is always an option to search through the entire DOM tree. This is an approach that should be avoided. First, it creates tight coupling, as it requires one Custom Element to have detailed knowledge about another. Second, it wouldn't scale. What if there are ten different types of Custom Elements to which information should be passed? That would require ten different searches.

Another option is leaving orchestration to the parent. The parent would listen to events from one Custom Element and change properties on the other. This breaks the separation of responsibilities as the parent (in our case, the Decide service) is now responsible for implementing logic that belongs to someone else (in our case, the Checkout Service).

What is needed is a communication channel that enables a publish-subscribe pattern. This will ensure proper decoupling. The classic implementation of such a channel is an event-based bus. The publisher raises events with bubbling enabled (by default, it's not), so subscribers can listen for those events on the window object. This is an established approach, but it's not the one I've decided to implement. An event-based bus is a little bit "too public" for me. In the case of multiple Custom Elements communicating, there would be a lot of events on the window object, and I would prefer more organization. Luckily, modern browsers provide an alternative way to implement such a channel - the Broadcast Channel API. You can think of the Broadcast Channel API as a simple message bus that provides the capability of creating named channels. The hidden power of the Broadcast Channel API is that it allows communication between windows/tabs, iframes, web workers, and service workers.
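Stripped of any Blazor specifics, the API itself is tiny. A publisher and a subscriber only need to agree on a channel name (the name here is arbitrary, I've just reused the one from this post's scenario). The snippet runs in modern browsers as well as in Node 18+, where BroadcastChannel is a global:

```javascript
// A minimal publish-subscribe sketch over the Broadcast Channel API.
// Channels with the same name form one bus; a channel never receives
// the messages it posted itself.
const publisher = new BroadcastChannel("checkout:item-added");
const subscriber = new BroadcastChannel("checkout:item-added");

subscriber.onmessage = (event) => {
  // Structured-cloneable payloads arrive as objects, no manual parsing needed.
  console.log("received:", event.data.sku, event.data.edition);
  publisher.close();
  subscriber.close();
};

publisher.postMessage({ sku: "porsche", edition: "platinum" });
```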

Using the Broadcast Channel API in Blazor once again requires JavaScript interop. I've decided to use this opportunity to build a component library that provides easy access to it. I'm not going to describe the process of creating a component library in this post, but if you are interested, just let me know and I'll be happy to write a separate post about it. If you want to use the library, it's available on NuGet.

After building and publishing the component library, I referenced it in the Checkout project and registered the service it provides.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");

builder.Services.AddBroadcastChannel();

...

In the checkout button component, I've injected the service. The channel can be created by calling CreateOrJoinAsync, and I'm doing that in OnAfterRenderAsync. I've also made the component implement IAsyncDisposable, where the channel is disposed to avoid JavaScript memory leaks. The last part was calling PostMessageAsync as part of OnButtonClick to send the message to the channel. This completes the publisher.

...

@implements IAsyncDisposable

...
@inject IBroadcastChannelService BroadcastChannelService

...

@code {
    ...
    private IBroadcastChannel? _broadcastChannel;

    ...

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _broadcastChannel = await BroadcastChannelService.CreateOrJoinAsync("checkout:item-added");
        }
    }

    private async Task OnButtonClick(MouseEventArgs e)
    {
        ...

        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.PostMessageAsync(new CheckoutItem { Sku = Sku, Edition = Edition });
        }

        ...
    }

    ...

    public async ValueTask DisposeAsync()
    {
        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.DisposeAsync();
        }
    }
}

The mini-cart component will be the subscriber. I've added the same code there for injecting the service, joining the channel, and disposing of it. The main difference is that the component subscribes to the channel's Message event instead of sending anything. The BroadcastChannelMessageEventArgs contains the sent message in its Data property as a JsonDocument, which can be deserialized to the desired type. In the mini-cart component, I'm using the message to add items.

@using System.Text.Json;

@implements IAsyncDisposable

@inject IBroadcastChannelService BroadcastChannelService

@(_items.Count == 0  ? "Your cart is empty." : $"You've picked {_items.Count} tractors:")
@foreach (var item in _items)
{
    <img src="https://mi-fr.org/img/@(item.Sku)_@(item.Edition).svg" />
}

@code {
    private IList<CheckoutItem> _items = new List<CheckoutItem>();
    private IBroadcastChannel? _broadcastChannel;
    private JsonSerializerOptions _jsonSerializerOptions = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _broadcastChannel = await BroadcastChannelService.CreateOrJoinAsync("checkout:item-added");
            _broadcastChannel.Message += OnMessage;
        }
    }

    private void OnMessage(object? sender, BroadcastChannelMessageEventArgs e)
    {
        _items.Add(e.Data.Deserialize<CheckoutItem>(_jsonSerializerOptions));

        StateHasChanged();
    }

    public async ValueTask DisposeAsync()
    {
        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.DisposeAsync();
        }
    }
}

The last thing I did in the Checkout service was exposing the mini-cart component.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");
builder.RootComponents.RegisterAsCustomElement<MiniCart>("checkout-minicart");

builder.Services.AddBroadcastChannel();

...

Now the mini-cart could be included in the HTML owned by the Decide service.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche"></checkout-buy>
    </div>
    ...
    <div class="decide_summary">
      <checkout-minicart></checkout-minicart>
    </div>
    ...
  </body>
</html>

Playing With the Complete Sample

The complete sample is available on GitHub. You can run it locally by spinning up all the services, but I've also included a GitHub Actions workflow that can deploy the whole solution to Azure (you just need to fork the repository and provide your own credentials).

This is another post in my series on implementing the samples from Micro Frontends in Action in ASP.NET Core:

This time I'm jumping from server-side composition to client-side composition. I've used the word jumping because I haven't fully covered server-side composition yet. There is one more approach to server-side composition which I intend to cover later, but as I've lately been doing some Blazor WebAssembly work, I was more eager to write this one.

Expanding The Project

As you may remember from the first post, the project consists of two services that are hosted in Azure Container Apps.

Decide and Inspire Frontend Layout

Both services use server-side rendering for their frontends, and the Decide service loads the Inspire service frontend fragment via Ajax. It's time to bring a new service into the picture: the Checkout service.

The Checkout service is responsible for the checkout flow. As this flow is more sophisticated than what the Decide and Inspire services provide, the Checkout service requires client-side rendering to provide the experience of a single-page application. This requirement creates a need for isolation and encapsulation of the Checkout service frontend fragment. There is a suite of technologies that can help solve that problem - Web Components.

Web Components

Web Components aim at enabling web developers to create reusable elements with well-encapsulated functionality. To achieve that, they bring together four different specifications:

  • Custom Elements, which allow defining your own tags (custom elements) with their own business logic.
  • Shadow DOM, which enables scripting and styling without collisions with other elements.
  • HTML Templates, which allow writing markup templates.
  • ES Modules, which define a consistent way to include and reuse JavaScript.

Custom Elements is the most interesting specification in this context. The way to create a Custom Element is to implement an ES6 class that extends HTMLElement and register it via window.customElements.define. The class can also implement a set of lifecycle methods (constructor, connectedCallback, disconnectedCallback, or attributeChangedCallback). This allows for initializing a SPA framework (Angular, React, Vue, etc.) and instructing it to use this as a root for rendering.
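A bare-bones sketch of that pattern looks like this (the class and tag names are illustrative, the framework bootstrapping is reduced to a flag, and the stub base class is only there so the snippet also runs outside a browser):

```javascript
// Illustrative sketch of a Custom Element acting as a SPA mount point.
globalThis.HTMLElement ??= class {}; // stub for non-browser environments

class CheckoutRoot extends HTMLElement {
  connectedCallback() {
    // Called when the element is attached to the DOM - the place to
    // bootstrap the SPA framework with `this` as the render root,
    // e.g. something like framework.mount(this).
    this.mounted = true;
  }

  disconnectedCallback() {
    // Called on removal - tear the framework instance down here.
    this.mounted = false;
  }
}

// In a browser, registration makes <checkout-root> usable in markup:
// window.customElements.define("checkout-root", CheckoutRoot);
```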

This is exactly what is needed for the Checkout service, where the SPA framework will be Blazor WebAssembly.

Creating a Blazor WebAssembly Based Custom Element

The way Blazor WebAssembly works fits nicely with Custom Elements. If you've ever taken a look at the Program.cs of a Blazor WebAssembly project, you might have noticed some calls to builder.RootComponents.Add. This is because the WebAssembly to which your project gets compiled is designed to perform rendering into elements. Thanks to that, a Blazor WebAssembly application can be wrapped into a Custom Element; it just requires proper initialization. You will be happy to learn that this work has already been done. As part of AspLabs, Steve Sanderson has prepared a package and instructions on how to make Blazor components available as Custom Elements. Let's do it.

I've started with an empty Blazor WebAssembly application hosted in ASP.NET Core, to which I've added a component that will serve as a button initiating the checkout flow (the final version of that component will also contain a confirmation toast; if you're interested, you can find it here).

<button type="button" @onclick="OnButtonClick">buy for @(String.IsNullOrWhiteSpace(Sku) ? "???" : _prices[Sku])</button>
...

@code {
    // Dictionary of tractor prices
    private IDictionary<string, int> _prices = new Dictionary<string, int>
    {
        { "porsche", 66 },
        { "fendt", 54 },
        { "eicher", 58 }
    };

    ...
}

Next, I've added the Microsoft.AspNetCore.Components.CustomElements package to the project.

<Project Sdk="Microsoft.NET.Sdk.BlazorWebAssembly">
  ...
  <ItemGroup>
    ...
    <PackageReference Include="Microsoft.AspNetCore.Components.CustomElements" Version="0.1.0-alpha.*" />
  </ItemGroup>
</Project>

I've removed all .razor files besides the above component and _Imports.razor. After that, I modified the Program.cs by removing the builder.RootComponents.Add calls and adding builder.RootComponents.RegisterAsCustomElement to expose my component as a checkout-buy element.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");

await builder.Build().RunAsync();

This is it. After the build/publish, the service will serve the custom element through the _content/Microsoft.AspNetCore.Components.CustomElements/BlazorCustomElements.js script, so it's time to plug it into the page served by the Decide service.

Using a Blazor WebAssembly Based Custom Element

Before the Decide service can utilize the custom element, it is necessary to set up the routing. As described in previous posts, all services are hidden behind a YARP-based proxy, which routes the incoming requests based on prefixes (to avoid conflicts). So far, both services were built in a way where the prefixes were an integral part of their implementation (they were included in actions and static content paths). With the Checkout service that would be hard to achieve, due to Blazor WebAssembly static assets.

There is a way to control the base path for Blazor WebAssembly static assets (through the StaticWebAssetBasePath project property), but it doesn't affect the BlazorCustomElements.js path. So, instead of complicating the service implementation to handle the prefix, it seems a lot better to make the prefixes a proxy concern and remove them there. YARP has an out-of-the-box capability to do so through the PathRemovePrefix transform. There is a ready-to-use extension method (.WithTransformPathRemovePrefix) which allows adding that transform to a specific route.

...

var routes = new[]
{
    ...
    (new RouteConfig
    {
        RouteId = Constants.CHECKOUT_ROUTE_ID,
        ClusterId = Constants.CHECKOUT_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.CHECKOUT_ROUTE_PREFIX + "/{**catch-all}" }
    }).WithTransformPathRemovePrefix(Constants.CHECKOUT_ROUTE_PREFIX),
    ...
};

var clusters = new[]
{
    ...
    new ClusterConfig()
    {
        ClusterId = Constants.CHECKOUT_CLUSTER_ID,
        Destinations = new Dictionary<string, DestinationConfig>(StringComparer.OrdinalIgnoreCase)
        {
            { Constants.CHECKOUT_SERVICE_URL, new DestinationConfig() { Address = builder.Configuration[Constants.CHECKOUT_SERVICE_URL] } }
        }
    }
};

builder.Services.AddReverseProxy()
    .LoadFromMemory(routes, clusters);

...

Now the Decide service views can be modified to include the custom element. The first step is to add the static assets.

<html>
  <head>
    ...
    <link href="/checkout/static/components.css" rel="stylesheet" />
  </head>
  <body class="decide_layout">
    ...
    <script src="/checkout/_content/Microsoft.AspNetCore.Components.CustomElements/BlazorCustomElements.js"></script>
    <script src="/checkout/_framework/blazor.webassembly.js"></script>
  </body>
</html>

Sadly, this will not be enough for Blazor to work. When Blazor WebAssembly starts, it requests additional boot resources. They must be loaded from the Checkout service as well, which means including the prefix in their URIs. This can be achieved thanks to the JS initializers feature added in .NET 6. The automatic start of Blazor WebAssembly can be disabled, and it can be started manually, which allows providing a function to customize the URIs.

<html>
  ...
  <body class="decide_layout">
    ...
    <script src="/checkout/_framework/blazor.webassembly.js" autostart="false"></script>
    <script>
      Blazor.start({
        loadBootResource: function (type, name, defaultUri, integrity) {
          return `/checkout/_framework/${name}`;
        }
      });
    </script>
  </body>
</html>

Finally, the custom element can be used. It will be available through a tag matching the name provided as a parameter to builder.RootComponents.RegisterAsCustomElement.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche"></checkout-buy>
    </div>
    ...
  </body>
</html>

The Expanded Project

As with previous samples, I've created a GitHub Actions workflow that deploys the solution to Azure. After the deployment, if you navigate to the URL of the ca-app-proxy Container App, you will see a page with the following layout.

Decide, Inspire, and Checkout Frontend Layout

You can click the button rendered by the custom element and see the toast notification.

This approach highlights one of the benefits of micro frontends: the freedom to choose the frontend stack. Thanks to the encapsulation of the Blazor WebAssembly app in a custom element, the Decide service can host it in its static HTML without understanding anything except how to load the scripts. If we wanted to hide even that (something I haven't done here), we could create a script that encapsulates Blazor WebAssembly loading and initialization. That script could be a shared asset that loads Blazor WebAssembly from a CDN (which, besides the performance benefit, would also be the way to go for a solution with multiple services using Blazor WebAssembly).
